https://en.wikipedia.org/wiki/Hillbilly
Hillbilly
"Hillbilly" is a term (often derogatory) for people who dwell in rural, mountainous areas in the United States, primarily in southern Appalachia and the Ozarks. The term was later used to refer to people from other rural and mountainous areas west of the Mississippi river too, particularly those of the Rocky Mountains and near the Rio Grande. The first known instances of "hillbilly" in print were in The Railroad Trainmen's Journal (vol. ix, July 1892), an 1899 photograph of men and women in West Virginia labeled "Camp Hillbilly", and a 1900 New York Journal article containing the definition: "a Hill-Billie is a free and untrammeled white citizen of Alabama, who lives in the hills, has no means to speak of, dresses as he can, talks as he pleases, drinks whiskey when he gets it, and fires off his revolver as the fancy takes him". The stereotype is twofold in that it incorporates both positive and negative traits: "Hillbillies" are often considered independent and self-reliant individuals who resist the modernization of society, but at the same time they are also defined as backward and violent. Scholars argue this duality is reflective of the split ethnic identities in white America. The term's later usage extended beyond solely white communities, exemplified with the "Hispanic hillbillies of northern New Mexico," in reference to the Hispanos of New Mexico. Etymology The term 'Hillbilly' is Scottish in origin but is not derived from its dialect. In Scotland, the term "hill-folk" referred to people who preferred isolation from the greater society, and "billy" meant "comrade" or "companion". The words "hill-folk" and "Billie" were combined and applied to the Cameronians who followed the teachings of a militant Presbyterians named Richard Cameron. These Scottish Covenanters fled to the hills of southern Scotland in the late 17th century to avoid persecution of their religious beliefs. 
Many of the early settlers of the Thirteen Colonies were from Scotland and Northern Ireland and were followers of William of Orange, the Protestant king of England. In 17th-century Ireland, during the Williamite War, Protestant supporters of William III ("King Billy") were referred to as "Billy's Boys", because "Billy" is a diminutive of "William" (common across the British Isles). In time, the term hillbilly became synonymous with the Williamites who settled in the hills of North America. Some scholars disagree with this theory. Michael Montgomery's From Ulster to America: The Scotch-Irish Heritage of American English states, "In Ulster in recent years it has sometimes been supposed that [hillbilly] was coined to refer to followers of King William III and brought to America by early Ulster emigrants, but this derivation is almost certainly incorrect. ... In America hillbilly was first attested only in 1898, which suggests a later, independent development."

History

The Appalachian Mountains were settled in the 18th century by settlers primarily from England, lowland Scotland, and the province of Ulster in Ireland. The settlers from Ulster were mainly Protestants who had migrated to Ireland from Scotland and Northern England during the Plantation of Ulster in the 17th century. Many migrated further to the American colonies beginning in the 1730s, and in America became known as the Scots-Irish. The term "hillbilly" spread in the years following the American Civil War. At this time, the country was developing both technologically and socially, but the Appalachian region was falling behind. Before the war, Appalachia was not distinctively different from other rural areas of the country. Post-war, although the frontier pushed farther west, the region maintained its frontier characteristics, and Appalachians themselves were perceived as backward, quick to violence, and inbred in their isolation.
Fueled by news stories of mountain feuds, such as that in the 1880s between the Hatfields and McCoys, the hillbilly stereotype developed in the late 19th and early 20th centuries. The "classic" hillbilly stereotype reached its current characterization during the years of the Great Depression. The period of Appalachian out-migration, roughly from the 1930s through the 1950s, saw many mountain residents moving north to the Midwestern industrial cities of Chicago, Cleveland, Akron, and Detroit. This movement, which became known as the "Hillbilly Highway", brought these previously isolated communities into mainstream United States culture. In response, poor white mountaineers became central characters in newspapers, pamphlets, and eventually, motion pictures. Authors at the time were inspired by historical figures such as Davy Crockett and Daniel Boone. The mountaineer image carried over into the 20th century, where the "hillbilly" stereotype emerged.

In popular culture

Pop culture has perpetuated the "hillbilly" stereotype. Scholarly works suggest that the media has exploited both the Appalachian region and its people by classifying them as "hillbillies". These generalizations do not match the cultural experiences of Appalachians, who, like many other groups, do not subscribe to a single identity. One of the issues associated with stereotyping is that it is profitable. When "hillbilly" became a widely used term, entrepreneurs saw a window for potential revenue. They "recycled" the image and brought it to life through various forms of media. Comic strips portrayed hillbilly stereotypes, notably Li'l Abner and Snuffy Smith; both characters were introduced in 1934. Television and film have portrayed "hillbillies" in both derogatory and sympathetic terms. Films such as Sergeant York or the Ma and Pa Kettle series portrayed the "hillbilly" as wild but good-natured.
Television programs of the 1960s such as The Real McCoys, The Andy Griffith Show, and especially The Beverly Hillbillies portrayed the "hillbilly" as backwards but with enough wisdom to outwit more sophisticated city folk. Gunsmoke's Festus Haggen was portrayed as intelligent and quick-witted (but lacking "education"). The popular 1970s television variety show Hee Haw regularly lampooned the stereotypical "hillbilly" lifestyle. A darker negative image of the hillbilly was introduced to another generation in the film Deliverance (1972), based on a novel of the same name by James Dickey, which depicted some "hillbillies" as genetically deficient, inbred, and murderous. Similar "evil hillbilly"-type characters have also been seen in a more comical light in the 1988 horror film The Moonlight Sonata, while the 2010 horror comedy film Tucker & Dale vs. Evil parodies hillbilly stereotyping outright. More recently, the TV series Justified (2010–2015) centered on Deputy U.S. Marshal Raylan Givens, who was reassigned to his hometown in Harlan, Kentucky, where he was in conflict with Boyd Crowder, a drug dealer who had grown up with Raylan. The show's plots often included "hillbilly" tropes such as dimwitted and easily manipulated men, use of homemade drugs, and snake-handling revivalists. "Hillbillies" were at the center of reality television in the 21st century. Network television shows such as The Real Beverly Hillbillies, High Life, and The Simple Life displayed the "hillbilly" lifestyle for viewers in the United States. This sparked protests across the country, with rural-minded individuals gathering to fight the stereotype. The Center for Rural Strategies started a nationwide campaign stating that the stereotype was "politically incorrect". The Kentucky-based organization engaged political figures in the movement, such as Robert Byrd and Mike Huckabee.
These protesters argued that discrimination against any other group in the United States would not be tolerated, and that discrimination against rural U.S. citizens should not be tolerated either. A 2003 piece published by The Cincinnati Enquirer read, "In this day of hypersensitivity to diversity and political correctness, Appalachians have been a group that it is still socially acceptable to demean and joke about. ... But rural folks have spoken up and said 'enough' to the Hollywood mockers." Hillbilly Elegy: A Memoir of a Family and Culture in Crisis (2016) is a memoir by J. D. Vance about the Appalachian values of his upbringing and their relationship to the social problems of his hometown, Middletown, Ohio. The book topped The New York Times Best Seller list in August 2016. A family of "Hill People", employed as migrant workers on a farm in 1952 Arkansas, have a major role in John Grisham's book A Painted House, with Grisham trying to avoid stereotypes.

Music

Hillbilly music was at one time considered an acceptable label for what is now known as country music. The label, coined in 1925 by country pianist Al Hopkins, persisted until the 1950s. The "hillbilly music" categorization covers a wide variety of musical genres, including bluegrass, country, western, and gospel. Appalachian folk song existed long before the "hillbilly" label; when the commercial industry was combined with "traditional Appalachian folksong", "hillbilly music" was formed. Some argue this is a "High Culture" issue, in which sophisticated individuals may see something considered "unsophisticated" as "trash". In the early 20th century, artists began to utilize the "hillbilly" label. The term gained momentum due to Ralph Peer, the recording director of OKeh Records, who heard it being used among Southerners when he went down to Virginia to record their music, and who labeled all Southern country music as such from then on.
The York Brothers entitled one of their songs "Hillbilly Rose", and the Delmore Brothers followed with their song "Hillbilly Boogie". In 1927, the Gennett studios in Richmond, Indiana, made a recording of black fiddler Jim Booker; the recordings were labeled "made for Hillbilly" in the Gennett files and were marketed to a white audience. Columbia Records had much success with the "Hill Billies", featuring Al Hopkins and Fiddlin' Charlie Bowman. By the late 1940s, radio stations had started to use the "hillbilly music" label. Originally, "hillbilly" was used to describe fiddlers and string bands, but by then it was used to describe traditional Appalachian music. Appalachians had never used this term to describe their own music. Popular songs whose style bore characteristics of both hillbilly and African American music were referred to as hillbilly boogie and rockabilly. Elvis Presley was a prominent player of rockabilly and was known early in his career as the "Hillbilly Cat". When the Country Music Association was founded in 1958, the term hillbilly music gradually fell out of use. The music industry merged hillbilly music, Western swing, and cowboy music to form the current category C&W, country and western. Some artists (notably Hank Williams) and fans were offended by the "hillbilly music" label. While the term is not used as frequently today, it is still used on occasion to refer to old-time music or bluegrass. For example, WHRB broadcasts a popular weekly radio show entitled "Hillbilly at Harvard", devoted to playing a mix of old-time music, bluegrass, and traditional country and western.

Cultural implications

The hillbilly stereotype is considered to have had a traumatizing effect on some in the Appalachian region. Feelings of shame, self-hatred, and detachment are cited as results of "culturally transmitted traumatic stress syndrome".
Appalachian scholars say that the large-scale stereotyping has rewritten Appalachian history, making Appalachians feel particularly vulnerable. "Hillbilly" has now become part of Appalachian identity, and some Appalachians feel they are constantly defending themselves against this image. The stereotyping also has political implications for the region. There is a sense of "perceived history" that prevents many political issues from receiving adequate attention, and Appalachians are often blamed for economic struggles. "Moonshiners, welfare cheats, and coal miners" are stereotypes stemming from the greater hillbilly stereotype in the region. This prejudice has been said to serve as a barrier to addressing serious issues such as the economy and the environment. Despite the political and social difficulties associated with stereotyping, Appalachians have organized to enact change. The War on Poverty is sometimes considered an example of one effort that allowed for Appalachian community organization. Grassroots movements, protests, and strikes are common in the area, though not always successful.

Intragroup versus intergroup usage

The Springfield, Missouri Chamber of Commerce once presented dignitaries visiting the city with an "Ozark Hillbilly Medallion" and a certificate proclaiming the honoree a "hillbilly of the Ozarks". On June 7, 1952, President Harry S. Truman received the medallion after a breakfast speech at the Shrine Mosque for the 35th Division Association. Other recipients included US Army generals Omar Bradley and Matthew Ridgway, J. C. Penney, Johnny Olson, and Ralph Story. Hillbilly Days is an annual festival held in mid-April in Pikeville, Kentucky, celebrating the best of Appalachian culture. The event was begun by local Shriners as a fundraiser to support the Shriners Children's Hospital. It has grown since its beginning in 1976 and is now the second-largest festival held in the state of Kentucky.
Artists and craftspeople showcase their talents and sell their work. Nationally renowned musicians as well as the best of the regional mountain musicians share six different stages located throughout the downtown area of Pikeville. Aspiring hillbillies from across the nation compete to come up with the wildest hillbilly outfit. The event has earned the nickname "the Mardi Gras of the Mountains". Fans of "mountain music" come from around the United States to hear this annual concentrated gathering of talent; some refer to the event as a "Woodstock" for mountain music. The term "hillbilly" is used with pride by a number of people within the region as well as by famous persons such as singer Dolly Parton and chef Sean Brock, and was used by actress Minnie Pearl. Positive self-identification with the term generally includes identification with a set of "hillbilly values", including love and respect for nature, a strong work ethic, generosity toward neighbors and those in need, family ties, self-reliance, resiliency, and a simple lifestyle.

See also

Appalachian stereotypes
Country (identity)
Cracker (term)
Hillbilly armor
List of ethnic slurs
Mountain white
Okie
Peckerwood
Redneck
Trailer trash
White trash
Yokel
Zomia (geography)
https://en.wikipedia.org/wiki/Host
Host
A host is a person responsible for guests at an event or for providing hospitality during it. Host may also refer to:

Places

Host, Pennsylvania, a village in Berks County

People

Jim Host (born 1937), American businessman
Michel Host (1942–2021), French writer
"Host", an author abbreviation in botany for Nicolaus Thomas Host

Arts, entertainment, and media

Fictional entities:
Hosts (World of Darkness), fictional characters in the game Werewolf: The Forsaken
Hosts, alien invaders and overlords in the TV series Colony
Armies and hosts of Middle-earth warfare, fictional entities in J.R.R. Tolkien's works
Avenging Host, a group of characters in Marvel Comics' Earth X series of comic books
Rutan Host, fictional aliens from Doctor Who

Film:
Host (film), a 2020 horror film directed by Rob Savage

Literature:
Host, the third novel in the Rogue Mage series by Faith Hunter
Host, a 1993 book by Peter James
Hosts (novel), a 2001 book written by American author F. Paul Wilson
The Hosts of Rebecca, a 1960 novel by Alexander Cordell about the Rebecca Riots

Music:
H.O.S.T., an influential hip-hop group in Azerbaijan
Host (Critters Buggin album), 1996
Host (Paradise Lost album), 1999

Computing and technology

Host (network), a computer connected to the Internet or another IP-based network
hosts (file), a computer file used to store information on where to find an Internet host on a computer network
host (Unix), a command-line Unix command
Internet hosting service, a service that runs Internet servers, allowing organizations and individuals to serve content to the Internet
Virtual host, allowing several DNS names to share the same IP address
Host, in hardware virtualization, a machine on which a virtual machine runs
Cross compiler, also called a "host", a computer platform on which software development is done for a different target computer platform
UOL HOST, Universal Online's HOST webhosting service

Groups or formations

Host, an archaic military term for an army
Host, a great number; multitude
Cossack host, military formations of Eastern Europe
Furious Host or the Wild Hunt, a European folk myth

Religion

Heavenly host, an "army" of good angels in Heaven
Lord of hosts, a common epithet of the God of the Old Testament
Sacramental bread, called the host or hostia, used in Christian liturgy

Roles

Host (radio), the presenter or announcer on a radio show
Television presenter, the host or announcer on a television show
Casino host
Maître d'hôtel (maître d'), the head waiter of a restaurant or hotel
Master of ceremonies
Talk show host, a presenter of a TV or radio talk show

Science

Host (biology), an organism harboring another organism or organisms on or in itself
Host (psychology), the personality emphasized in treating dissociative identity disorder
Host (astronomy), the interactions and analysis of a star-planet relationship

See also

Hostess (disambiguation)
Hosting (disambiguation)
The Host (disambiguation)
https://en.wikipedia.org/wiki/Hern%C3%A1n%20Cort%C3%A9s
Hernán Cortés
Hernán Cortés de Monroy y Pizarro Altamirano, 1st Marquess of the Valley of Oaxaca (1485 – December 2, 1547), was a Spanish conquistador who led an expedition that caused the fall of the Aztec Empire and brought large portions of what is now mainland Mexico under the rule of the King of Castile in the early 16th century. Cortés was part of the generation of Spanish explorers and conquistadors who began the first phase of the Spanish colonization of the Americas. Born in Medellín, Spain, to a family of lesser nobility, Cortés chose to pursue adventure and riches in the New World. He went to Hispaniola and later to Cuba, where he received an encomienda (the right to the labor of certain subjects). For a short time, he served as alcalde (magistrate) of the second Spanish town founded on the island. In 1519, he was elected captain of the third expedition to the mainland, which he partly funded. His enmity with the Governor of Cuba, Diego Velázquez de Cuéllar, resulted in the recall of the expedition at the last moment, an order which Cortés ignored. Arriving on the continent, Cortés executed a successful strategy of allying with some indigenous peoples against others. He also used a native woman, Doña Marina, as an interpreter; she later bore his first son. When the Governor of Cuba sent emissaries to arrest Cortés, he fought them and won, using the extra troops as reinforcements. Cortés wrote letters directly to the king asking to be acknowledged for his successes instead of being punished for mutiny. After he overthrew the Aztec Empire, Cortés was awarded the title of Marqués del Valle de Oaxaca, while the more prestigious title of Viceroy was given to a high-ranking nobleman, Antonio de Mendoza. In 1541 Cortés returned to Spain, where he died six years later of natural causes.

Name

Cortés himself used the form "Hernando" or "Fernando" for his first name, as seen in contemporary archive documents, his signature, and the title of an early portrait.
William Hickling Prescott's Conquest of Mexico (1843) also refers to him as Hernando Cortés. At some point writers began using the shortened form "Hernán" more generally.

Physical appearance

There is only one known portrait made during Hernán Cortés's lifetime, a drawing by Christoph Weiditz. The account of the conquest of the Aztec Empire written by Bernal Díaz del Castillo gives a detailed description of Hernán Cortés's physical appearance: He was of good stature and body, well proportioned and stocky, the color of his face was somewhat grey, not very cheerful, and a longer face would have suited him more. His eyes seemed at times loving and at times grave and serious. His beard was black and sparse, as was his hair, which at the time he sported in the same way as his beard. He had a high chest, a well shaped back and was lean with little belly.

Early life

Cortés was born in 1485 in the town of Medellín, then a village in the Kingdom of Castile, now a municipality of the modern-day province of Badajoz in Extremadura, Spain. His father, Martín Cortés de Monroy, born in 1449 to Rodrigo or Ruy Fernández de Monroy and his wife María Cortés, was an infantry captain of distinguished ancestry but slender means. Hernán's mother was Catalina Pizarro Altamirano. Through his mother, Hernán was a second cousin once removed of Francisco Pizarro, who later conquered the Inca Empire of modern-day Peru, and who is not to be confused with another Francisco Pizarro, who joined Cortés to conquer the Aztecs. (His maternal grandmother, Leonor Sánchez Pizarro Altamirano, was first cousin of Pizarro's father Gonzalo Pizarro y Rodriguez.) Through his father, Hernán was related to Nicolás de Ovando, the third Governor of Hispaniola. His paternal great-grandfather was Rodrigo de Monroy y Almaraz, 5th Lord of Monroy. According to his biographer and chaplain, Francisco López de Gómara, Cortés was pale and sickly as a child.
At the age of 14, he was sent to study Latin under an uncle in Salamanca. Later historians have misconstrued this personal tutoring as time enrolled at the University of Salamanca. After two years, Cortés returned home to Medellín, much to the irritation of his parents, who had hoped to see him equipped for a profitable legal career. However, those two years in Salamanca, plus his long period of training and experience as a notary, first in Valladolid and later in Hispaniola, gave him a knowledge of the legal codes of Castile that he applied to help justify his unauthorized conquest of Mexico. At this point in his life, Cortés was described by Gómara as ruthless, haughty, and mischievous. The 16-year-old had returned home only to find life in his small provincial town constraining. By this time, news of the exciting discoveries of Christopher Columbus in the New World was streaming back to Spain.

Early career in the New World

Plans were made for Cortés to sail to the Americas with a family acquaintance and distant relative, Nicolás de Ovando, the newly appointed Governor of Hispaniola. (This island is now divided between Haiti and the Dominican Republic.) Cortés suffered an injury and was prevented from traveling. He spent the next year wandering the country, probably spending most of his time in Spain's southern ports of Cádiz, Palos, Sanlúcar, and Seville. He finally left for Hispaniola in 1504 and became a colonist.

Arrival

Cortés reached Hispaniola in a ship commanded by Alonso Quintero, who tried to deceive his superiors and reach the New World before them in order to secure personal advantages. Quintero's mutinous conduct may have served as a model for Cortés in his subsequent career. Upon his arrival in 1504 in Santo Domingo, the capital of Hispaniola, the 18-year-old Cortés registered as a citizen; this entitled him to a building plot and land to farm.
Soon afterward, Governor Nicolás de Ovando granted him an encomienda and appointed him as a notary of the town of Azua de Compostela. His next five years seemed to help establish him in the colony; in 1506, Cortés took part in the conquest of Hispaniola and Cuba. The expedition leader awarded him a large estate of land and Indian slaves for his efforts.

Cuba (1511–1519)

In 1511, Cortés accompanied Diego Velázquez de Cuéllar, an aide of the Governor of Hispaniola, in his expedition to conquer Cuba. Velázquez was appointed Governor of Cuba. At the age of 26, Cortés was made clerk to the treasurer, with the responsibility of ensuring that the Crown received the quinto, or customary one fifth of the profits from the expedition. Velázquez was so impressed with Cortés that he secured a high political position for him in the colony: he became secretary to Governor Velázquez. Cortés was twice appointed municipal magistrate (alcalde) of Santiago. In Cuba, Cortés became a man of substance, with an encomienda to provide Indian labor for his mines and cattle. This new position of power also made him a new source of leadership to which opposing forces in the colony could turn. In 1514, Cortés led a group which demanded that more Indians be assigned to the settlers. As time went on, relations between Cortés and Governor Velázquez became strained. Cortés found time to become romantically involved with Catalina Xuárez (or Juárez), the sister-in-law of Governor Velázquez. Part of Velázquez's displeasure seems to have been based on a belief that Cortés was trifling with Catalina's affections. Cortés was temporarily distracted by one of Catalina's sisters but finally married Catalina, reluctantly, under pressure from Governor Velázquez. By doing so, however, he hoped to secure the good will of both her family and that of Velázquez.
It was not until he had been almost 15 years in the Indies that Cortés began to look beyond his substantial status as mayor of the capital of Cuba and as a man of affairs in the thriving colony. He missed the first two expeditions, under the orders of Francisco Hernández de Córdoba and then Juan de Grijalva, sent by Diego Velázquez to Mexico in 1518. News reached Velázquez that Juan de Grijalva had established a colony on the mainland where there was a bonanza of silver and gold, and Velázquez decided to send him help. Cortés was appointed Captain-General of this new expedition in October 1518, but was advised to move fast before Velázquez changed his mind. With Cortés's experience as an administrator, knowledge gained from many failed expeditions, and his impeccable rhetoric, he was able to gather six ships and 300 men within a month. Velázquez's jealousy exploded, and he decided to put the expedition in other hands. However, Cortés quickly gathered more men and ships in other Cuban ports.

Conquest of Mexico (1519–1521)

In 1518, Velázquez put Cortés in command of an expedition to explore and secure the interior of Mexico for colonization. At the last minute, due to the old argument between the two, Velázquez changed his mind and revoked Cortés's charter. Cortés ignored the orders and, in an act of open mutiny, went anyway in February 1519. He stopped in Trinidad, Cuba, to hire more soldiers and obtain more horses. Accompanied by about 11 ships, 500 men (including seasoned slaves), 13 horses, and a small number of cannons, Cortés landed on the Yucatán Peninsula in Mayan territory. There he encountered Gerónimo de Aguilar, a Spanish Franciscan priest who had survived a shipwreck followed by a period in captivity with the Maya before escaping. Aguilar had learned the Chontal Maya language and was able to translate for Cortés.
Cortés's military experience was almost nonexistent, but he proved to be an effective leader of his small army and won early victories over the coastal Indians. In March 1519, Cortés formally claimed the land for the Spanish crown. He then proceeded to Tabasco, where he met with resistance and won a battle against the natives. He received twenty young indigenous women from the vanquished natives, and he converted them all to Christianity. Among these women was La Malinche, his future mistress and mother of his son Martín. Malinche knew both the Nahuatl language and Chontal Maya, thus enabling Cortés to communicate with the Aztecs through Aguilar. At San Juan de Ulúa on Easter Sunday 1519, Cortés met with Moctezuma II's Aztec Empire governors Tendile and Pitalpitoque. In July 1519, his men took over Veracruz; by this act, Cortés dismissed the authority of the Governor of Cuba and placed himself directly under the orders of King Charles. To eliminate any ideas of retreat, Cortés scuttled his ships.

March on Tenochtitlán

In Veracruz, he met some of the tributaries of the Aztecs and asked them to arrange a meeting with Moctezuma II, the tlatoani (ruler) of the Aztec Empire. Moctezuma repeatedly turned down the meeting, but Cortés was determined. Leaving a hundred men in Veracruz, Cortés marched on Tenochtitlán in mid-August 1519, along with 600 soldiers, 15 horsemen, 15 cannons, and hundreds of indigenous carriers and warriors. On the way to Tenochtitlán, Cortés made alliances with indigenous peoples such as the Totonacs of Cempoala and the Nahuas of Tlaxcala. The Otomis initially, and then the Tlaxcalans, fought with the Spanish in a series of three battles from 2 to 5 September 1519, and at one point, Díaz remarked, "they surrounded us on every side".
After Cortés continued to release prisoners with messages of peace, and after they realized the Spanish were enemies of Moctezuma, Xicotencatl the Elder and Maxixcatzin persuaded the Tlaxcalan warleader, Xicotencatl the Younger, that it would be better to ally with the newcomers than to kill them. In October 1519, Cortés and his men, accompanied by about 1,000 Tlaxcalteca, marched to Cholula, the second-largest city in central Mexico. Cortés, either in a premeditated effort to instill fear in the Aztecs waiting for him at Tenochtitlán or (as he later claimed, when he was being investigated) wishing to make an example when he feared native treachery, massacred thousands of unarmed members of the nobility gathered at the central plaza, then partially burned the city. By the time he arrived in Tenochtitlán, the Spaniards had a large army. On November 8, 1519, they were peacefully received by Moctezuma II. Moctezuma deliberately let Cortés enter the Aztec capital, the island city of Tenochtitlán, hoping to get to know the Spaniards' weaknesses better and to crush them later. Moctezuma gave lavish gifts of gold to the Spaniards which, rather than placating them, excited their ambitions for plunder. In his letters to King Charles, Cortés claimed to have learned at this point that he was considered by the Aztecs to be either an emissary of the feathered serpent god Quetzalcoatl or Quetzalcoatl himself – a belief which has been contested by a few modern historians. But Cortés quickly learned that several Spaniards on the coast had been killed by Aztecs while supporting the Totonacs, and he decided to take Moctezuma as a hostage in his palace, indirectly ruling Tenochtitlán through him. Meanwhile, Velázquez sent another expedition, led by Pánfilo de Narváez, to oppose Cortés; it arrived in Mexico in April 1520 with 1,100 men. Cortés left 200 men in Tenochtitlán and took the rest to confront Narváez.
He overcame Narváez, despite his numerical inferiority, and convinced the rest of Narváez's men to join him. In Mexico, one of Cortés's lieutenants, Pedro de Alvarado, committed the massacre in the Great Temple, triggering a local rebellion. Cortés speedily returned to Tenochtitlán. On July 1, 1520, Moctezuma was killed (he was stoned to death by his own people, as reported in Spanish accounts, although some claim he was murdered by the Spaniards once they realized his inability to placate the locals). Faced with a hostile population, Cortés decided to flee for Tlaxcala. During the Noche Triste (June 30 – July 1, 1520), the Spaniards managed a narrow escape from Tenochtitlán across the Tlacopan causeway, while their rearguard was being massacred. Much of the treasure looted by Cortés was lost (as well as his artillery) during this panicked escape from Tenochtitlán.

Destruction of Tenochtitlán

After a battle in Otumba, they managed to reach Tlaxcala, having lost 870 men. With the assistance of their allies, and with reinforcements arriving from Cuba, Cortés's men finally prevailed. Cortés began a policy of attrition towards Tenochtitlán, cutting off supplies and subduing the Aztecs' allied cities. During the siege he constructed brigantines on the lake and slowly destroyed blocks of the city to avoid fighting in an urban setting. The Mexicas fell back to Tlatelolco and even succeeded in ambushing the pursuing Spanish forces, inflicting heavy losses, but Tlatelolco would ultimately be the last portion of the island to resist the conquistadores. The siege of Tenochtitlán ended with Spanish victory and the destruction of the city. In January 1521, Cortés countered a conspiracy against him, headed by Antonio de Villafana, who was hanged for the offense. Finally, with the capture of Cuauhtémoc, the tlatoani (ruler) of Tenochtitlán, on August 13, 1521, the Aztec Empire fell, and Cortés was able to claim it for Spain, renaming the city Mexico City.
From 1521 to 1524, Cortés personally governed Mexico. Appointment to governorship of Mexico and internal dissensions Many historical sources have conveyed an impression that Cortés was unjustly treated by the Spanish Crown, and that he received nothing but ingratitude for his role in establishing New Spain. This picture is the one Cortés presents in his letters and in the later biography written by Francisco López de Gómara. However, there may be more to the picture than this. Cortés's own sense of accomplishment, entitlement, and vanity may have played a part in his deteriorating position with the king: King Charles appointed Cortés as governor, captain general and chief justice of the newly conquered territory, dubbed "New Spain of the Ocean Sea". But also, much to the dismay of Cortés, four royal officials were appointed at the same time to assist him in his governing – in effect, submitting him to close observation and administration. Cortés initiated the construction of Mexico City, destroying Aztec temples and buildings and then rebuilding on the Aztec ruins what soon became the most important European city in the Americas. Cortés managed the founding of new cities and appointed men to extend Spanish rule to all of New Spain, imposing the encomienda system in 1524. He reserved many encomiendas for himself and for his retinue, which they considered just rewards for their accomplishment in conquering central Mexico. However, later arrivals and members of factions antipathetic to Cortés complained of the favoritism that excluded them. In 1523, the Crown (possibly influenced by Cortés's enemy, Bishop Fonseca) sent a military force under the command of Francisco de Garay to conquer and settle the northern part of Mexico, the region of Pánuco. 
This was another setback for Cortés, who mentioned it in his fourth letter to the King, in which he describes himself as the victim of a conspiracy by his archenemies Diego Velázquez de Cuéllar, Diego Columbus and Bishop Fonseca, as well as Francisco Garay. The influence of Garay was effectively stopped by this appeal to the King, who sent out a decree forbidding Garay to interfere in the politics of New Spain, causing him to give up without a fight. Royal grant of arms (1525) Although Cortés had flouted the authority of Diego Velázquez in sailing to the mainland and then leading an expedition of conquest, Cortés's spectacular success was rewarded by the crown with a coat of arms, a mark of high honor, following the conqueror's request. The document granting the coat of arms summarizes Cortés's accomplishments in the conquest of Mexico. The proclamation of the king says in part: We, respecting the many labors, dangers, and adventures which you underwent as stated above, and so that there might remain a perpetual memorial of you and your services and that you and your descendants might be more fully honored ... it is our will that besides your coat of arms of your lineage, which you have, you may have and bear as your coat of arms, known and recognized, a shield ... The grant specifies the iconography of the coat of arms, the central portion divided into quadrants. In the upper portion, there is a "black eagle with two heads on a white field, which are the arms of the empire". Below that is a "golden lion on a red field, in memory of the fact that you, the said Hernando Cortés, by your industry and effort brought matters to the state described above" (i.e., the conquest). The specificity of the other two quadrants is linked directly to Mexico, with one quadrant showing three crowns representing the three Aztec emperors of the conquest era, Moctezuma, Cuitlahuac, and Cuauhtemoc, and the other showing the Aztec capital of Tenochtitlan. 
Encircling the central shield are symbols of the seven city-states around the lake and their lords that Cortés defeated, with the lords "to be shown as prisoners bound with a chain which shall be closed with a lock beneath the shield". Death of his first wife and remarriage Cortés's wife Catalina Suárez arrived in New Spain around summer 1522, along with her sister and brother. His marriage to Catalina was at this point extremely awkward, since she was a kinswoman of the governor of Cuba, Diego Velázquez, whose authority Cortés had thrown off and who was therefore now his enemy. Catalina lacked the noble title of doña, so at this point his marriage with her no longer raised his status. Their marriage had been childless. Since Cortés had sired children with a variety of indigenous women, including a son around 1522 by his cultural translator, Doña Marina, Cortés knew he was capable of fathering children. Cortés's only male heir at this point was illegitimate, but nonetheless named after Cortés's father, Martín Cortés. This son, Martín Cortés, was also popularly called "El Mestizo". Catalina Suárez died under mysterious circumstances the night of November 1–2, 1522. There were accusations at the time that Cortés had murdered his wife. An investigation into her death followed, in which a variety of household residents and others were interviewed. The documentation of the investigation was published in the nineteenth century in Mexico, and these archival documents were uncovered in the twentieth century. The death of Catalina Suárez produced a scandal and investigation, but Cortés was now free to marry someone of high status more appropriate to his wealth and power. In 1526, he built an imposing residence for himself, the Palace of Cortés in Cuernavaca, in a region close to the capital where he had extensive encomienda holdings. 
In 1529 he had been accorded the noble designation of don, but more importantly was given the noble title of Marquess of the Valley of Oaxaca and married the Spanish noblewoman Doña Juana de Zúñiga. The marriage produced three children, including another son, who was also named Martín. As the first-born legitimate son, Don Martín Cortés y Zúñiga was now Cortés's heir and succeeded him as holder of the title and estate of the Marquessate of the Valley of Oaxaca. Cortés's legitimate daughters were Doña Maria, Doña Catalina, and Doña Juana. Cortés and the "Spiritual Conquest" of Mexico Since the conversion to Christianity of indigenous peoples was an essential and integral part of the extension of Spanish power, making formal provisions for that conversion once the military conquest was completed was an important task for Cortés. During the Age of Discovery, the Catholic Church had seen early attempts at conversion in the Caribbean islands by Spanish friars, particularly the mendicant orders. Cortés made a request to the Spanish monarch to send Franciscan and Dominican friars to Mexico to convert the vast indigenous populations to Christianity. In his fourth letter to the king, Cortés pleaded for friars rather than diocesan or secular priests because those clerics were in his view a serious danger to the Indians' conversion. If these people [Indians] were now to see the affairs of the Church and the service of God in the hands of canons or other dignitaries, and saw them indulge in the vices and profanities now common in Spain, knowing that such men were the ministers of God, it would bring our Faith into much harm that I believe any further preaching would be of no avail. He wished the mendicants to be the main evangelists. Mendicant friars did not usually have full priestly powers to perform all the sacraments needed for conversion of the Indians and growth of the neophytes in the Christian faith, so Cortés laid out a solution to this to the king. 
Your Majesty should likewise beseech His Holiness [the pope] to grant these powers to the two principal persons in the religious orders that are to come here, and that they should be his delegates, one from the Order of St. Francis and the other from the Order of St. Dominic. They should bring the most extensive powers Your Majesty is able to obtain, for, because these lands are so far from the Church of Rome, and we, the Christians who now reside here and shall do so in the future, are so far from the proper remedies of our consciences and, as we are human, so subject to sin, it is essential that His Holiness should be generous with us and grant to these persons most extensive powers, to be handed down to persons actually in residence here whether it be given to the general of each order or to his provincials. The Franciscans arrived in May 1524, a symbolically powerful group of twelve known as the Twelve Apostles of Mexico, led by Fray Martín de Valencia. Franciscan Geronimo de Mendieta claimed that Cortés's most important deed was the way he met this first group of Franciscans. The conqueror himself was said to have met the friars as they approached the capital, kneeling at the feet of the friars who had walked from the coast. This story was told by Franciscans to demonstrate Cortés's piety and humility and was a powerful message to all, including the Indians, that Cortés's earthly power was subordinate to the spiritual power of the friars. However, one of the first twelve Franciscans, Fray Toribio de Benavente Motolinia, does not mention it in his history. Cortés and the Franciscans had a particularly strong alliance in Mexico, with Franciscans seeing him as "the new Moses" for conquering Mexico and opening it to Christian evangelization. In Motolinia's 1555 response to Dominican Bartolomé de Las Casas, he praises Cortés. 
And as to those who murmur against the Marqués del Valle [Cortés], God rest him, and who try to blacken and obscure his deeds, I believe that before God their deeds are not as acceptable as those of the Marqués. Although as a human he was a sinner, he had faith and works of a good Christian, and a great desire to employ his life and property in widening and augmenting the faith of Jesus Christ, and dying for the conversion of these gentiles ... Who has loved and defended the Indians of this new world like Cortés? ... Through this captain, God opened the door for us to preach his holy gospel and it was he who caused the Indians to revere the holy sacraments and respect the ministers of the church. In Fray Bernardino de Sahagún's 1585 revision of the conquest narrative first codified as Book XII of the Florentine Codex, there are laudatory references to Cortés that do not appear in the earlier text from the indigenous perspective. Whereas Book XII of the Florentine Codex concludes with an account of Spaniards' search for gold, in Sahagún's 1585 revised account, he ends with praise of Cortés for requesting the Franciscans be sent to Mexico to convert the Indians. Expedition to Honduras and aftermath From 1524 to 1526, Cortés headed an expedition to Honduras where he defeated Cristóbal de Olid, who had claimed Honduras as his own under the influence of the Governor of Cuba Diego Velázquez. Fearing that Cuauhtémoc might head an insurrection in Mexico, he brought him along to Honduras. In a controversial move, Cuauhtémoc was executed during the journey. Raging over Olid's treason, Cortés issued a decree to arrest Velázquez, who he was sure was behind Olid's treason. This, however, only served to further estrange the Crown of Castile and the Council of Indies, both of which were already beginning to feel anxious about Cortés's rising power. 
Cortés's fifth letter to King Charles attempts to justify his conduct and concludes with a bitter attack on "various and powerful rivals and enemies" who have "obscured the eyes of your Majesty". Charles, who was also Holy Roman Emperor, had little time for distant colonies (much of Charles's reign was taken up with wars with France, the German Protestants and the expanding Ottoman Empire), except insofar as they contributed to finance his wars. In 1521, the year of the Conquest, Charles was attending to matters in his German domains and Bishop Adrian of Utrecht functioned as regent in Spain. Velázquez and Fonseca persuaded the regent to appoint a commissioner (a Juez de residencia, Luis Ponce de León) with powers to investigate Cortés's conduct and even arrest him. Cortés was once quoted as saying that it was "more difficult to contend against [his] own countrymen than against the Aztecs." Governor Diego Velázquez continued to be a thorn in his side, teaming up with Bishop Juan Rodríguez de Fonseca, chief of the Spanish colonial department, to undermine him in the Council of the Indies. A few days after Cortés's return from his expedition, Ponce de León suspended Cortés from his office of governor of New Spain. The Licentiate then fell ill and died shortly after his arrival, appointing Marcos de Aguilar as alcalde mayor. The aged Aguilar also became sick and appointed Alonso de Estrada governor, who was confirmed in his functions by a royal decree in August 1527. Cortés, suspected of poisoning them, refrained from taking over the government. Estrada sent Diego de Figueroa to the south. De Figueroa raided graveyards and extorted contributions, meeting his end when the ship carrying these treasures sank. Albornoz persuaded Alonso de Estrada to release Gonzalo de Salazar and Chirinos. When Cortés complained angrily after one of his adherents' hands was cut off, Estrada ordered him exiled. Cortés sailed for Spain in 1528 to appeal to King Charles. 
First return to Spain (1528) and Marquessate of the Valley of Oaxaca In 1528, Cortés returned to Spain to appeal to the justice of his master, Charles V. Juan Altamirano and Alonso Valiente stayed in Mexico and acted as Cortés's representatives during his absence. Cortés presented himself with great splendor before Charles V's court. By this time Charles had returned and Cortés forthrightly responded to his enemy's charges. Denying he had held back on gold due to the crown, he showed that he had contributed more than the quinto (one-fifth) required. Indeed, he had spent lavishly to build the new capital of Mexico City on the ruins of the Aztec capital of Tenochtitlán, leveled during the siege that brought down the Aztec empire. He was received by Charles with every distinction, and decorated with the Order of Santiago. In return for his efforts in expanding the still young Spanish Empire, Cortés was rewarded in 1529 by being accorded the noble title of don but, more importantly, being named the Marqués del Valle de Oaxaca (Marquess of the Valley of Oaxaca), and he married the Spanish noblewoman Doña Juana de Zúñiga, after the 1522 death of his much less distinguished first wife, Catalina Suárez. The noble title and señorial estate of the Marquesado was passed down to his descendants until 1811. The Oaxaca Valley was one of the wealthiest regions of New Spain, and Cortés had 23,000 vassals in 23 named encomiendas in perpetuity. Although confirmed in his land holdings and vassals, he was not reinstated as governor and was never again given any important office in the administration of New Spain. During his travel to Spain, his property was mismanaged by abusive colonial administrators. He sided with local natives in a lawsuit. The natives documented the abuses in the Huexotzinco Codex. The entailed estate and title passed to his legitimate son Don Martín Cortés upon Cortés's death in 1547, who became the Second Marquess. 
Don Martín's association with the so-called Encomenderos' Conspiracy endangered the entailed holdings, but they were restored and remained the continuing reward for Hernán Cortés's family through the generations. Return to Mexico Cortés returned to Mexico in 1530 with new titles and honors, but with diminished power. Although Cortés still retained military authority and permission to continue his conquests, viceroy Don Antonio de Mendoza was appointed in 1535 to administer New Spain's civil affairs. This division of power led to continual dissension, and caused the failure of several enterprises in which Cortés was engaged. On returning to Mexico, Cortés found the country in a state of anarchy. There was a strong suspicion in court circles of an intended rebellion by Cortés. After reasserting his position and reestablishing some sort of order, Cortés retired to his estates at Cuernavaca, about 30 miles (48 km) south of Mexico City. There he concentrated on the building of his palace and on Pacific exploration. Remaining in Mexico between 1530 and 1541, Cortés quarreled with Nuño Beltrán de Guzmán and disputed the right to explore the territory that is today California with Antonio de Mendoza, the first viceroy. Cortés acquired several silver mines in Zumpango del Rio in 1534. By the early 1540s, he owned 20 silver mines in Sultepec, 12 in Taxco, and 3 in Zacualpan. Earlier, Cortés had claimed the silver in the Tamazula area. In 1536, Cortés explored the northwestern part of Mexico and discovered the Baja California Peninsula. Cortés also spent time exploring the Pacific coast of Mexico. The Gulf of California was originally named the Sea of Cortés by its discoverer Francisco de Ulloa in 1539. This was the last major expedition by Cortés. 
Later life and death Second return to Spain After his exploration of Baja California, Cortés returned to Spain in 1541, hoping to confound his angry civilians, who had brought many lawsuits against him (for debts, abuse of power, etc.). On his return he went through a crowd to speak to the emperor, who demanded of him who he was. "I am a man," replied Cortés, "who has given you more provinces than your ancestors left you cities." Expedition against Algiers The emperor finally permitted Cortés to join him and his fleet commanded by Andrea Doria at the great expedition against Algiers in the Barbary Coast in 1541, which was then part of the Ottoman Empire and was used as a base by Hayreddin Barbarossa, a famous Turkish corsair and Admiral-in-Chief of the Ottoman Fleet. During this campaign, Cortés was almost drowned in a storm that hit his fleet while he was pursuing Barbarossa. Last years, death, and remains Having spent a great deal of his own money to finance expeditions, he was now heavily in debt. In February 1544 he made a claim on the royal treasury, but was ignored for the next three years. Disgusted, he decided to return to Mexico in 1547. When he reached Seville, he was stricken with dysentery. He died in Castilleja de la Cuesta, Seville province, on December 2, 1547, from a case of pleurisy at the age of 62. He left his many mestizo and white children well cared for in his will, along with every one of their mothers. He requested in his will that his remains eventually be buried in Mexico. Before he died he had the Pope remove the "natural" status of four of his children (legitimizing them in the eyes of the church), including Martin, the son he had with Doña Marina (also known as La Malinche), said to be his favourite. His daughter, Doña Catalina, however, died shortly after her father's death. After his death, his body was moved more than eight times for several reasons. 
On December 4, 1547, he was buried in the mausoleum of the Duke of Medina in the church of San Isidoro del Campo, Sevilla. Three years later (1550), because the space was required by the duke, his body was moved to the altar of Santa Catarina in the same church. In his testament, Cortés had asked for his body to be buried in the monastery he had ordered to be built in Coyoacán in México, ten years after his death, but the monastery was never built. So in 1566, his body was sent to New Spain and buried in the church of San Francisco de Texcoco, where his mother and one of his sisters were buried. In 1629, Don Pedro Cortés, fourth Marqués del Valle and his last male descendant, died, so the viceroy decided to move the bones of Cortés along with those of his descendant to the Franciscan church in México. This was delayed for nine years, while his body stayed in the main room of the palace of the viceroy. Eventually it was moved to the Sagrario of the Franciscan church, where it stayed for 87 years. In 1716, it was moved to another place in the same church. In 1794, his bones were moved to the "Hospital de Jesus" (founded by Cortés), where a statue by Tolsá and a mausoleum were made. There was a public ceremony and all the churches in the city rang their bells. In 1823, after the independence of México, it seemed imminent that his body would be desecrated, so the mausoleum was removed, and the statue and the coat of arms were sent to Palermo, Sicily, to be protected by the Duke of Terranova. The bones were hidden, and everyone thought that they had been sent out of México. In 1836, his bones were moved to another place in the same building. It was not until November 24, 1946 that they were rediscovered, thanks to the discovery of a secret document by Lucas Alamán. His bones were put in the charge of the Instituto Nacional de Antropología e Historia (INAH), which authenticated the remains. 
They were then restored to the same place, this time with a bronze inscription and his coat of arms. When the bones were first rediscovered, the supporters of the Hispanic tradition in Mexico were excited, but one supporter of an indigenist vision of Mexico "proposed that the remains be publicly burned in front of the statue of Cuauhtemoc, and the ashes flung into the air". Following the discovery and authentication of Cortés's remains, there was a discovery of what were described as the bones of Cuauhtémoc, resulting in a "battle of the bones". Taxa named after Cortés Cortés is commemorated in the scientific name of a subspecies of Mexican lizard, Phrynosoma orbiculare cortezii. Disputed interpretation of his life There are relatively few sources on the early life of Cortés; his fame arose from his participation in the conquest of Mexico, and it was only after this that people became interested in reading and writing about him. Probably the best sources are his letters to the king, which he wrote during the campaign in Mexico, but they are written with the specific purpose of putting his efforts in a favourable light and so must be read critically. Another main source is the biography written by Cortés's private chaplain Lopez de Gómara, which was written in Spain several years after the conquest. Gómara never set foot in the Americas and knew only what Cortés had told him, and he had an affinity for knightly romantic stories which he incorporated richly in the biography. The third major source, written as a reaction to what its author calls "the lies of Gomara", is the eyewitness account of the conquistador Bernal Díaz del Castillo; it does not paint Cortés as a romantic hero but rather tries to emphasize that Cortés's men should also be remembered as important participants in the undertakings in Mexico. In the years following the conquest, more critical accounts of the Spanish arrival in Mexico were written. 
The Dominican friar Bartolomé de Las Casas wrote his A Short Account of the Destruction of the Indies, which raises strong accusations of brutality and heinous violence towards the Indians, accusations against both the conquistadors in general and Cortés in particular. The accounts of the conquest given in the Florentine Codex by the Franciscan Bernardino de Sahagún and his native informants are also less than flattering towards Cortés. The scarcity of these sources has led to a sharp division in the description of Cortés's personality and a tendency to describe him as either a vicious and ruthless person or a noble and honorable cavalier. Representations in Mexico In México there are few representations of Cortés. However, many landmarks still bear his name, from the castle Palacio de Cortés in the city of Cuernavaca to some street names throughout the republic. The pass between the volcanoes Iztaccíhuatl and Popocatépetl, through which Cortés led his soldiers on their march to Mexico City, is known as the Paso de Cortés. The muralist Diego Rivera painted several representations of him; the most famous depicts him as a powerful and ominous figure along with Malinche in a mural in the National Palace in Mexico City. In 1981, President Lopez Portillo tried to bring Cortés to public recognition. First, he made public a copy of the bust of Cortés made by Manuel Tolsá in the Hospital de Jesús Nazareno with an official ceremony, but soon a nationalist group tried to destroy it, so it had to be removed from public display. Today the copy of the bust is in the "Hospital de Jesús Nazareno" while the original is in Naples, Italy, in the Villa Pignatelli. Later, another monument, known as "Monumento al Mestizaje" by Julián Martínez y M. 
Maldonado (1982), was commissioned by Mexican president José López Portillo to be put in the "Zócalo" (main square) of Coyoacán, near the place of his country house, but it had to be removed to a little-known park, the Jardín Xicoténcatl, Barrio de San Diego Churubusco, to quell protests. The statue depicts Cortés, Malinche and their son Martín. Another statue, by Sebastián Aparicio in Cuernavaca, stood in the hotel "El Casino de la Selva". Cortés is barely recognizable in it, so it sparked little interest. The hotel was closed to make way for a commercial center, and the statue was put out of public display by Costco, the builder of the commercial center. Cultural depictions Hernán Cortés is a character in the opera La Conquista (2005) by Italian composer Lorenzo Ferrero, which depicts the major episodes of the Spanish conquest of the Aztec Empire in 1521. Writings: the Cartas de Relación Cortés's personal account of the conquest of Mexico is narrated in his five letters addressed to Charles V. These five letters, the cartas de relación, are Cortés's only surviving writings. See "Letters and Dispatches of Cortés", translated by George Folsom (New York, 1843); Prescott's "Conquest of Mexico" (Boston, 1843); and Sir Arthur Helps's "Life of Hernando Cortes" (London, 1871). His first letter was considered lost, and the one from the municipality of Veracruz takes its place. It was published for the first time in volume IV of "Documentos para la Historia de España", and subsequently reprinted. The Segunda Carta de Relacion, bearing the date of October 30, 1520, appeared in print at Seville in 1522. The third letter, dated May 15, 1522, appeared at Seville in 1523. The fourth, October 20, 1524, was printed at Toledo in 1525. The fifth, on the Honduras expedition, is contained in volume IV of the Documentos para la Historia de España. 
Children Natural children of Don Hernán Cortés:
doña Catalina Pizarro, born between 1514 and 1515 in Santiago de Cuba or maybe later in Nueva España, daughter of a Cuban woman, Leonor Pizarro. Doña Catalina married Juan de Salcedo, a conqueror and encomendero, with whom she had a son, Pedro.
don Martín Cortés, born in Coyoacán in 1522, son of doña Marina (La Malinche), called the First Mestizo; about him was written The New World of Martín Cortés; married doña Bernaldina de Porras and had two children: doña Ana Cortés and don Fernando Cortés, Principal Judge of Veracruz. Descendants of this line are alive today in Mexico.
don Luis Cortés, born in 1525, son of doña Antonia or Elvira Hermosillo, a native of Trujillo (Cáceres)
doña Leonor Cortés Moctezuma, born in 1527 or 1528 in Ciudad de Mexico, daughter of Aztec princess Tecuichpotzin (baptized Isabel), born in Tenochtitlan on July 11, 1510, and died on July 9, 1550, the eldest legitimate daughter of Moctezuma II Xocoyotzin and his wife doña María Miahuaxuchitl; married to Juan de Tolosa, a Basque merchant and miner.
doña María Cortés de Moctezuma, daughter of an Aztec princess; nothing more is known about her except that she probably was born with some deformity.
He married twice: firstly in Cuba to Catalina Suárez Marcaida, who died at Coyoacán in 1522 without issue, and secondly in 1529 to doña Juana Ramírez de Arellano de Zúñiga, daughter of don Carlos Ramírez de Arellano, 2nd Count of Aguilar, and his wife the Countess doña Juana de Zúñiga, and had:
don Luis Cortés y Ramírez de Arellano, born in Texcoco in 1530 and died shortly after his birth.
doña Catalina Cortés de Zúñiga, born in Cuernavaca in 1531 and died shortly after her birth. 
don Martín Cortés y Ramírez de Arellano, 2nd Marquess of the Valley of Oaxaca, born in Cuernavaca in 1532, married at Nalda on February 24, 1548, his twice cousin once removed doña Ana Ramírez de Arellano y Ramírez de Arellano and had issue, currently extinct in the male line
doña María Cortés de Zúñiga, born in Cuernavaca between 1533 and 1536, married to don Luis de Quiñones y Pimentel, 5th Count of Luna
doña Catalina Cortés de Zúñiga, born in Cuernavaca between 1533 and 1536, died unmarried in Sevilla after the funeral of her father
doña Juana Cortés de Zúñiga, born in Cuernavaca between 1533 and 1536, married don Fernando Enríquez de Ribera y Portocarrero, 2nd Duke of Alcalá de los Gazules, 3rd Marquess of Tarifa and 6th Count of Los Molares, and had issue
In popular culture Cortés was portrayed (as "Hernando Cortez") by actor Cesar Romero in the 1947 historical adventure film Captain from Castile.
"Cortez the Killer", a 1975 song by Neil Young
Cortés is a major villain in the 2000 animated movie The Road to El Dorado, voiced by Jim Cummings
Cortés, played by Óscar Jaenada, is a morally ambiguous protagonist in the 2019 eight-episode TV series Hernán.
In 1986, Polish illustrator Jerzy Wróblewski created a 48-page comic book titled Hernán Cortés i podbój Meksyku (Hernán Cortés and the Conquest of Mexico). The comic book, based on historical chronicles, narrated Cortés's life, concentrating on the titular 1519–1521 period; it was noted for its realistic depictions of violence, unusual in Polish comic books of the era.
Cortés features in the 1980 novel Aztec by Gary Jennings as an antagonist.
See also History of Mexico
History of Mexico City
New Spain
Palace of Cortés, Cuernavaca
Spanish conquest of the Aztec empire
Spanish Empire
Notes References Further reading Primary sources Cortés, Hernán. Letters – available as Letters from Mexico, translated by Anthony Pagden. Yale University Press, 1986. Available online in Spanish from an 1866 edition.
Cortés, Hernán. 
Escritos sueltos de Hernán Cortés. Biblioteca Histórica de la Iberia. vol 12. Mexico 1871.
Díaz del Castillo, Bernal. The Conquest of New Spain – available as The Discovery and Conquest of Mexico: 1517–1521
Díaz del Castillo, Bernal, David Carrasco, Rolena Adorno, Sandra Messinger Cypess, and Karen Vieira Powers. The History of the Conquest of New Spain. Albuquerque: University of New Mexico Press, 2008 (textbook anthology of indigenous primary sources)
López de Gómara, Francisco. Cortés: The Life of the Conqueror by His Secretary. Ed. and trans. Lesley Byrd Simpson. Berkeley: University of California Press, 1964.
López de Gómara, Francisco. Hispania Victrix; First and Second Parts of the General History of the Indies, with the whole discovery and notable things that have happened since they were acquired until the year 1551, with the conquest of Mexico and New Spain. University of California Press, 1966
Prescott, William H. History of the Conquest of Mexico, with a Preliminary View of Ancient Mexican Civilization, and the Life of the Conqueror, Hernando Cortes
Last Will and Testament of Hernán Cortés
Letter From Hernán Cortés to Charles the V
Hernán Cortés Power of Attorney, 1526. From the Rare Book and Special Collections Division at the Library of Congress
Praeclara Ferdinandi Cortesii de noua maris oceani Hyspania narratio sacratissimo... 1524. From the Rare Book and Special Collections Division at the Library of Congress
Secondary sources Boruchoff, David A. "Hernán Cortés." International Encyclopedia of the Social Sciences, 2nd ed. (2008), vol. 2, pp. 146–49.
Brooks, Francis J. "Motecuzoma Xocoyotl, Hernán Cortés, and Bernal Díaz del Castillo: The Construction of an Arrest." Hispanic American Historical Review (1995): 149–183.
Chamberlain, Robert S. "Two unpublished documents of Hernán Cortés and New Spain, 1519 and 1524." Hispanic American Historical Review 19 (1939): 120–137.
Chamberlain, Robert S. 
"La controversia entre Cortés y Velázquez sobre la gobernación de Nueva España, 1519–1522" in Anales de la Sociedad de Geografía e Historia de Guatemala, vol. XIX, 1943.
Cline, Howard F. "Hernando Cortés and the Aztec Indians in Spain." The Quarterly Journal of the Library of Congress, vol. 26, no. 2. Library of Congress, 1969.
Denhardt, Robert Moorman. "The equine strategy of Cortés." Hispanic American Historical Review 18 (1938): 550–555.
Elliott, J.H. "The mental world of Hernán Cortés." In Transactions of the Royal Historical Society, Fifth Series (1967): 41–58.
Frankl, Victor. "Hernán Cortés y la tradición de las Siete Partidas." Revista de Historia de América 53–54 (1962): 9–74.
Himmerich y Valencia, Robert. The Encomenderos of New Spain, 1521–1555. Austin: University of Texas Press, 1991
Jacobs, W.J. Hernando Cortés. New York: Franklin Watts, Inc., 1974.
Keen, Benjamin. The Aztec Image in Western Thought. New Brunswick: Rutgers University Press, 1971.
Konetzke, Richard. "Hernán Cortés como poblador de la Nueva España." Estudios Cortesianos, pp. 341–381. Madrid 1948.
Levy, Buddy. Conquistador: Hernán Cortés, King Montezuma, and the Last Stand of the Aztecs. 2008
Lorenzana, Francisco Antonio. Viaje de Hernán Cortés a la Península de Californias. Mexico 1963
MacNutt, F.A. Fernando Cortés and the Conquest of Mexico, 1485–1547. New York and London, 1909.
Madariaga, Salvador de. Hernán Cortés, Conqueror of Mexico. Mexico 1942.
Marks, Richard Lee. Cortés: The Great Adventurer and the Fate of Aztec Mexico. Alfred A. Knopf, 1993.
Mathes, W. Michael, ed. The Conquistador in California: 1535. The Voyage of Fernando Cortés to Baja California in Chronicles and Documents. Vol. 31. Dawson's Book Shop, 1973.
Maura, Juan Francisco. "Cobardía, falsedad y oportunismo español: algunas consideraciones sobre la "verdadera" historia de la conquista de la Nueva España." Lemir (Revista de literatura medieval y del Renacimiento) 7 (2003): 1–29.
Medina, José Toribio. 
Ensayo Bio-bibliográfico sobre Hernán Cortés. Introducción de Guillermo Feliú Cruz. Santiago de Chile 1952. Miller, Robert Ryal. "Cortés and the first attempt to colonize California." Calif Hist QJ Calif Hist Soc 53.1 (1974): 4–16. Petrov, Lisa. For an Audience of Men: Masculinity, Violence and Memory in Hernán Cortés's Las Cartas de Relación and Carlos Fuentes's Fictional Cortés. University of Wisconsin—Madison, 2004. Phelan, John Leddy The Millennial Kingdom of the Franciscans in the New World, chapter 3, "Hernán Cortés, the Moses of the New World," Berkeley: University of California Press, second edition, revised, 1971, pp. 33–34. Restall, Matthew. Seven Myths of the Spanish Conquest Oxford University Press (2003) Silva, José Valerio. El legalismo de Hernán Cortés como instrumento de su conquista. Mexico 1965. Stein, R.C. The World's Greatest Explorers: Hernando Cortés. Illinois: Chicago Press Inc. 1991. Thomas, Hugh (1993). Conquest: Cortés, Montezuma, and the Fall of Old Mexico Todorov, TzvetanThe Conquest of America (1996) Toro, Alfonso. Un crimen de Hernán Cortés. La muerte de Doña Catalina Xuárez Marcaida, estudio histórico y medico legal. Mexico 1922 Wagner, H.R. "The lost first letter of Cortés." Hispanic American Historical Review. 21 (1941) 669–672. White, Jon Manchip. (1971) Cortés and the Downfall of the Aztec Empire External links The letters by Cortés, in which Cortés describes the events related to the conquest of Mexico Genealogy of Hernán Cortés Origin of the Surname Cortés The change of Hernán Cortés's self-image by means of the conquest Hernando Cortes on the Web – web directory with thumbnail galleries Conquistadors, with Michael Wood – website for 2001 PBS documentary Ibero-American Electronic Text Series presented online by the University of Wisconsin Digital Collections Center. 
Hernan Cortes – The Conquistador of the Aztecs; Informational Link Blog about the History of Cortes, the Aztecs along with a variety of sources, pictures and educational resources Latin American studies center, material on Cortés Fernand Cortez opera by Gaspare Spontini, Jean-Paul Penin "Cortes, Hernando" Belinda H. Nanney "Hernán Cortés, marqués del Valle de Oaxaca", Encyclopædia Britannica 1485 births 1547 deaths People from Las Vegas Altas 16th-century Mexican people 16th-century Spanish people History of the Aztecs Spanish city founders Colonial Mexico Deaths from dysentery Encomenderos Explorers of Mexico Extremaduran conquistadors History of Baja California History of the Gulf of California Marquesses of Spain People from Morelos Spanish conquistadors Spanish generals Spanish regicides Titles of nobility in the Americas Origin of the name California
https://en.wikipedia.org/wiki/Herstory
Herstory
Herstory is a term for history written from a feminist perspective and emphasizing the role of women, or told from a woman's point of view. It originated as an alteration of the word "history", as part of a feminist critique of conventional historiography, which feminist critics argue is traditionally written as "his story", i.e., from the male point of view. The term is a neologism since the word "history"—from the Ancient Greek word ἱστορία, or more directly from its Latin derivative historia, meaning "knowledge obtained by inquiry"—is etymologically unrelated to the possessive pronoun his. Usage The Oxford English Dictionary credits Robin Morgan with first using the term "herstory" in print in her 1970 anthology Sisterhood Is Powerful. Concerning the feminist organization W.I.T.C.H., Morgan wrote: The fluidity and wit of the witches is evident in the ever-changing acronym: the basic, original title was Women's International Terrorist Conspiracy from Hell [...] and the latest heard at this writing is Women Inspired to Commit Herstory. During the 1970s and 1980s, second-wave feminists saw the study of history as a male-dominated intellectual enterprise and presented "herstory" as a means of compensation. The term, intended to be both serious and comic, became a rallying cry used on T-shirts and buttons as well as in academia. In 2017, Hridith Sudev, an inventor, environmentalist and social activist associated with various youth movements, launched 'The Herstory Movement,' an online platform to "celebrate lesser known great persons; female, queer or otherwise marginalized, who helped shape the modern World History." It is intended as an academic platform to feature stories of female historic persons and thus help facilitate more widespread knowledge about 'Great Women' History. Non-profit organizations Global G.L.O.W and LitWorld created a joint initiative called the "HerStory Campaign". This campaign works with 25 other countries to share girls' lives and stories.
They encourage others to join the campaign and to "raise our voices on behalf of all world's girls". The herstory movement has spawned women-centered presses, such as Virago Press in 1973, which publishes fiction and non-fiction by noted women authors like Janet Frame and Sarah Dunant. This movement has led to an increase in activity in other female-centric disciplines such as femistry and galgebra. Criticism Christina Hoff Sommers has been a vocal critic of the concept of herstory, and presented her argument against the movement in her 1994 book Who Stole Feminism? Sommers defined herstory as an attempt to infuse education with ideology at the expense of knowledge. The "gender feminists", as she called them, were the group of feminists responsible for the movement, which she felt amounted to negationism. She regarded most attempts to make historical studies more female-inclusive as being artificial in nature and an impediment to progress. Professor and author Devoney Looser has criticized the concept of herstory for overlooking the contributions that some women made as historians before the twentieth century. Author Richard Dawkins also described his criticism in The God Delusion, arguing that "the word history has not been influenced by the male pronoun". The Global Language Monitor, a nonprofit group that analyzes and tracks trends in language, named herstory the third most "politically incorrect" word of 2006—rivaled only by "macaca" and "Global Warming Denier". Books Books published on the topic include: Herstory: Women Who Changed the World. Daughters of Eve: A Herstory Book. HerStory. Herstory: A Woman's View of American History. See also Women's history Feminist history History of feminism Radical feminism Womyn Gender-neutral language
https://en.wikipedia.org/wiki/House%20of%20Cards%20%28British%20TV%20series%29
House of Cards (British TV series)
House of Cards is a 1990 British political thriller television serial in four episodes, set after the end of Margaret Thatcher's tenure as Prime Minister of the United Kingdom. It was televised by the BBC from 18 November to 9 December 1990, to critical and popular acclaim. The story tells of the sudden and manipulative rise to power of the Machiavellian Chief Whip of the Conservative Party, Francis Urquhart. Urquhart, on the party's classical extreme right, is frustrated by his lack of promotion in the wake of Thatcher's resignation and the moderate government that succeeds it. He therefore plots a calculated and meticulous scheme to bring down the Prime Minister and replace him, in the vein of Shakespeare's Richard III (which he often quotes). During this drawn-out, ruthless coup, his life is complicated by his relationship with the young reporter Mattie Storin, to whom he leaks sensitive information in confidence. Whether the serial's ending is a tragedy in the vein of plays such as Macbeth is left to the viewer. Andrew Davies adapted the story from the 1989 novel of the same name by Michael Dobbs, a former chief of staff at Conservative Party headquarters. Neville Teller also dramatised Dobbs's novel for BBC World Service in 1996, and the serial had two television sequels (To Play the King and The Final Cut). The opening and closing theme music for this TV series is entitled "Francis Urquhart's March". House of Cards was ranked 84th in the British Film Institute list of the 100 Greatest British Television Programmes in 2000. In 2013, the serial and the Dobbs novel were the basis for a US adaptation set in Washington, D.C., commissioned and released by Netflix as one of the first major television series made for a streaming service. This version was also entitled House of Cards, and starred Kevin Spacey and Robin Wright.
Following sexual abuse allegations against Spacey, the US series ended in 2018; despite initially positive reviews, it has been described as inferior to the "absolutely superb" original. Overview The antihero of House of Cards is Francis Urquhart, a fictional Chief Whip of the Conservative Party, played by Ian Richardson. The plot follows his amoral and manipulative scheme to become leader of the governing party and, thus, Prime Minister of the United Kingdom. Michael Dobbs did not envisage writing the second and third books, as Urquhart dies at the end of the first novel. The screenplay of the BBC's dramatisation of House of Cards differs from the book, and hence allows future series. Dobbs wrote two following books, To Play the King and The Final Cut, which were televised in 1993 and 1995, respectively. House of Cards was said to draw from Shakespeare's plays Macbeth and Richard III, both of which feature main characters who are corrupted by power and ambition. Richardson has a Shakespearean background and said he based his characterisation of Urquhart on Shakespeare's portrayal of Richard III. Urquhart frequently talks through the camera to the audience, breaking the fourth wall.
Urquhart resolves to oust Collingridge, with encouragement from his wife, Elizabeth (Diane Fletcher). At the same time, with Elizabeth's blessing, Urquhart begins an affair with Mattie Storin (Susannah Harker), a junior political reporter at a Conservative-leaning tabloid newspaper called The Chronicle. The affair allows Urquhart to manipulate Mattie and indirectly skew her coverage of the Conservative leadership contest in his favour. Mattie has an apparent Electra complex; she finds appeal in Urquhart's much older age and later refers to him as "Daddy". Another unwitting pawn is Roger O'Neill (Miles Anderson), the party's cocaine-addicted public relations consultant. Urquhart blackmails O'Neill into leaking information on budget cuts that humiliates Collingridge during Prime Minister's Questions. Later, he blames party chairman Lord "Teddy" Billsborough (Nicholas Selby) for leaking an internal poll showing a drop in Tory numbers, leading Collingridge to sack him. As Collingridge's image suffers, Urquhart encourages ultraconservative Foreign Secretary Patrick Woolton (Malcolm Tierney) and Chronicle owner Benjamin Landless (Kenny Ireland) to support his removal. He also poses as Collingridge's alcoholic brother Charles (James Villiers) to trade shares in a chemical company on the basis of advance information confidential to the government. Consequently, Collingridge is falsely accused of insider trading and forced to resign. In the ensuing leadership race, Urquhart initially feigns unwillingness to stand before announcing his candidacy.
With the help of his underling, Tim Stamper (Colin Jeavons), Urquhart goes about making sure his competitors drop out of the race: Health Secretary Peter MacKenzie (Christopher Owen) accidentally runs his car over a disabled protester at a demonstration staged by Urquhart and is forced by the public outcry to withdraw, while Education Secretary Harold Earle (Kenneth Gilbert) is blackmailed into withdrawing when Urquhart anonymously sends him pictures of him in the company of a rent boy whom Earle had paid for sex. The first ballot leaves Urquhart to face Woolton and Michael Samuels, the moderate Environment Secretary supported by Billsborough. Urquhart eliminates Woolton by a prolonged scheme: at the party conference, he pressures O'Neill into persuading his personal assistant and lover, Penny Guy (Alphonsia Emmanuel), to have a one-night stand with Woolton in his suite, which Urquhart records via a bugged ministerial red box. When the tape is sent to Woolton, he is led to assume that Samuels is behind the scheme and backs Urquhart in the contest. Urquhart also receives support from Collingridge, who is unaware of Urquhart's role in his own downfall. Samuels is forced out of the running when the tabloids reveal that he backed leftist causes as a student at the University of Cambridge. Stumbling across contradictions in the allegations against Collingridge and his brother, Mattie begins to dig deeper. On Urquhart's orders, O'Neill arranges for her car and flat to be vandalised in a show of intimidation. However, O'Neill becomes increasingly uneasy with what he is being asked to do, and his cocaine addiction adds to his instability. Urquhart mixes O'Neill's cocaine with rat poison, causing his death when he takes the cocaine in a motorway service station lavatory on the M27 at Rownhams.
Though initially blind to the truth of matters thanks to her relations with Urquhart, Mattie eventually deduces that Urquhart is responsible for O'Neill's death and is behind the unfortunate downfalls of Collingridge and all of Urquhart's rivals. Mattie looks for Urquhart at the point when it seems his victory is certain. She eventually finds him on the roof garden of the Houses of Parliament, where she confronts him. He admits to O'Neill's murder and everything else he has done. He then asks whether he can trust Mattie, and, though she answers in the affirmative, he does not believe her and throws her off the roof onto a van parked below. An unseen person picks up Mattie's tape recorder, which she had been using to secretly record her conversations with Urquhart. The series ends with Urquhart defeating Samuels in the second leadership ballot and being driven to Buckingham Palace to be invited to form a government by Elizabeth II. Deviations from the novel in the series In the first novel, but not in the television series: Urquhart never speaks directly to the reader; the character is written solely in a third-person perspective. When alone, Urquhart is much less self-assured and decisive. Mattie Storin works for The Daily Telegraph. (In the television series she is a journalist with the fictional Chronicle newspaper.) Mattie Storin does not have a relationship with Urquhart; she does not even talk to him frequently. She does, however, have a sexual relationship with John Krajewski. Urquhart's wife is called Miranda and is a minor character, not sharing in his schemes. (In the later novels, To Play the King and The Final Cut, however, she is called "Elizabeth" and plays a larger role, as in the television series.) The Conservative party conference is held in Bournemouth. (In the television series it occurs in Brighton.) The minor character Tim Stamper is introduced for the on-screen adaptation (although Dobbs introduces him in the novel To Play the King). 
Earle's rent boy appears in person at an important speech of Earle's, distracting him; subsequently, Earle is harassed by reporters who have been told of his indiscretion. In the final confrontation scene, Urquhart throws himself from the roof terrace and Mattie survives. Before the series was reissued in 2013 to coincide with the release of the US version of House of Cards, Dobbs rewrote portions of the novel to bring it in line with the television series and restore continuity among the three novels. In the 2013 version: Urquhart murders Mattie Storin, throwing her off the roof after she confronts Urquhart about his actions. Mattie Storin does not scream "Daddy" as she falls. Urquhart covers up his murder of Mattie Storin by claiming she was an obsessed stalker who was mentally ill, and vows to make mental health amongst the young a priority. Mattie Storin works for the newspaper The Chronicle, as in the TV series. Urquhart's wife Miranda is changed to Mortima. Tim Stamper, though present in the serial, does not appear in the revised version of the novel. Urquhart makes asides to the audience in the form of epigraphs at the beginning of each chapter (the original novel has no chapters). Reception The first installment of the TV series coincidentally aired two days before the Conservative Party leadership election. During a time of "disillusionment with politics", the series "caught the nation's mood". Ian Richardson won a Best Actor BAFTA in 1991 for his role as Urquhart, and Andrew Davies won an Emmy for outstanding writing in a miniseries. The series ranked 84th in the British Film Institute list of the 100 Greatest British Television Programmes. American adaptation The Urquhart trilogy has been adapted in the United States as House of Cards. The show stars Kevin Spacey as Francis "Frank" Underwood, the Majority Whip of the Democratic caucus in the U.S. House of Representatives, who schemes and murders his way to becoming President of the United States.
It is produced by David Fincher and Spacey's Trigger Street Productions, with the initial episodes directed by Fincher. The series, produced and financed by independent studio Media Rights Capital, was one of Netflix's first forays into original programming. Series one was made available online on 1 February 2013. The series is filmed in Baltimore, Maryland. The first series was critically acclaimed and earned four Golden Globe nominations, including Best Drama Series, Best Actor, Best Actress and Best Supporting Actor, with Robin Wright winning Best Actress. It also earned nine Primetime Emmy Award nominations, winning three, and was the first show broadcast solely via an internet streaming service to earn such nominations. In popular culture The drama introduced and popularised the phrase: "You might very well think that; I couldn't possibly comment". It served as an implicit confirmation, used by Urquhart whenever he could not be seen to agree with a leading statement, with the emphasis on either the "I" or the "possibly", depending on the situation. The phrase was even used in the House of Commons, House of Lords and Parliamentary Committees following the series. Prince Charles himself used the phrase in response to a provocative question from a journalist in 2014. A variation on the phrase was written into the TV adaptation of Terry Pratchett's Hogfather for the character Death, as an in-joke on the fact that he was voiced by Richardson. During the first Gulf War, a British reporter speaking from Baghdad, conscious of the possibility of censorship, used the code phrase "You might very well think that; I couldn't possibly comment" to answer a BBC presenter's question. A further variation was used by Nicola Murray, a fictional government minister, in the third series finale of The Thick of It. In the US adaptation, the phrase is used by Frank Underwood in the first episode during his initial meeting with Zoe Barnes, the US counterpart of Mattie Storin.
See also List of House of Cards trilogy characters Politics in fiction A Very British Coup, a similar drama of fictional contemporary British politics from a left-wing perspective Yes Minister (and its sequel Yes, Prime Minister), a satirical sitcom about a generic British government List of fictional prime ministers of the United Kingdom External links House of Cards at British Film Institute Screen Online
https://en.wikipedia.org/wiki/Helen%20Gandy
Helen Gandy
Helen Wilburforce Gandy (April 8, 1897 – July 7, 1988) was the longtime secretary to Federal Bureau of Investigation director J. Edgar Hoover, who called her "indispensable". Serving in that role for 54 years, she exercised great behind-the-scenes influence on Hoover and the workings of the Bureau. Following Hoover's death in 1972, she spent weeks destroying his "Personal File", thought to be where the most incriminating material he used to manipulate and control the most powerful figures in Washington was kept. Early life Helen Gandy was born in Rockville, New Jersey, one of three children (two daughters and a son) born to Franklin Dallas and Annie (née Williams) Gandy. She grew up in New Jersey in Fairton or the Port Norris section of Commercial Township (sources differ) and graduated from Bridgeton High School in Bridgeton, New Jersey. In 1918, aged 21, she moved to Washington, D.C., where she later took classes at Strayer Business College and George Washington University Law School. Career Gandy briefly worked in a department store in Washington before finding a job as a file clerk at the Justice Department in 1918. Within weeks, she went to work as a typist for Hoover, effective March 25, 1918, having told Hoover in her interview she had "no immediate plans to marry." She, like Hoover, would never marry; both were completely devoted to the Bureau. When Hoover went to the Bureau of Investigation (its original title; it became the FBI in 1935) as its assistant director on August 22, 1921, he specifically requested Gandy return from vacation to help him in the new post. Hoover became director of the Bureau in 1924, and Gandy continued in his service. She was promoted to "office assistant" on August 23, 1937, and to "executive assistant" on October 1, 1939. Though she would receive subsequent promotions in her civil service grade, she retained the title of executive assistant until her retirement on May 2, 1972, the day Hoover died.
Hoover said of her: "if there is anyone in this Bureau whose services are indispensable, I consider Miss Gandy to be that person." Despite this, Curt Gentry wrote: Theirs was a rigidly formal relationship. He'd always called her 'Miss Gandy' (when angry, barking it out as one word). In all those fifty-four years he had never once called her by her first name. Hoover biographers Theoharis and Cox would say "her stern face recalled Cerberus at the gate," a view echoed by Anthony Summers in his life of Hoover, who also pictured Gandy as Hoover's first line of defense against the outside world. When Attorney General Robert F. Kennedy, Hoover's superior, had a direct telephone line installed between their offices, Hoover refused to answer the phone. "Put that damn thing on Miss Gandy's desk where it belongs," Hoover would declare. Gentry described Gandy's influence: Her genteel manner and pleasant voice contrasted sharply with this domineering presence. Yet behind the politeness was a resolute firmness not unlike his, and no small amount of influence. Many a career in the Bureau had been quietly manipulated by her. Even those who disliked him praised her, most often commenting on her remarkable ability to get along with all kinds of people. That she had held her position for fifty-four years was the best evidence of this, for it was a Bureau tradition that the closer you were to him, the more demanding he was. William C. Sullivan, an agent with the Bureau for three decades, reported in his memoir that when he worked in the public relations section answering mail from the public, he gave a correspondent the wrong measurements for Hoover's personal popover recipe, relying on memory rather than the files. Gandy, ever protective of her boss, caught the error and brought it to Hoover's attention. The director then placed an official letter of reprimand in Sullivan's file for the lapse.
Mark Felt, deputy associate director of the FBI, wrote in his memoir that Gandy "was bright and alert and quick-tempered—and completely dedicated to her boss." Files Hoover died during the night of May 1–2, 1972. According to Curt Gentry, who wrote the 1991 book J Edgar Hoover: The Man and the Secrets, Hoover's body was not discovered by his live-in cook and general housekeeper, Annie Fields; rather, it was discovered by James Crawford, who had been Hoover's chauffeur for 37 years. Crawford then yelled out to Fields and Tom Moton (Hoover's new chauffeur after Crawford had retired in January 1972). Fields first called Hoover's personal physician, Dr. Robert Choisser, then used another phone to call Clyde Tolson's private number. Tolson then called Gandy's private number with the news of Hoover's death along with orders to begin destroying the files. Within an hour, the "D List" ("d" standing for destruction) was being distributed, and the destruction of files began. However, The New York Times quoted an anonymous FBI source in spring 1975, who said: "Gandy had begun almost a year before Mr. Hoover's death and was instructed to purge the files that were then in his office." Anthony Summers reported that G. Gordon Liddy had said of his sources in the FBI: "by the time Gray went in to get the files, Miss Gandy had already got rid of them." The day after Hoover died, L. Patrick Gray, who had been named acting director by President Richard Nixon upon Tolson's resignation from that position, went to Hoover's office. Gandy paused from her work to give Gray a tour. He found file cabinets open and packing boxes being filled with papers. She informed him the boxes contained personal papers of Hoover's. Gandy stated Gray flipped through a few files and approved her work, but Gray was to deny he looked at any papers. Gandy also told Gray it would be a week before she could clear Hoover's effects out so Gray could move into the suite. 
Gray reported to Nixon that he had secured Hoover's office and its contents. However, he had sealed only Hoover's personal inner office, where no files were stored, not the entire suite of offices. Since 1957, Hoover's "Official/Confidential" files, containing material too sensitive to include in the FBI's central files, had been kept in the outer office, where Gandy sat. Gentry reported that Gray would not have known where to look in Gandy's office for the files, as her office was lined floor to ceiling with filing cabinets; moreover, without her index to the files, he would not have been able to locate incriminating material, for files were deliberately mislabeled, e.g., President Nixon's file was labeled "Obscene Matters". On May 4, Gandy turned over 12 boxes labelled "Official/Confidential", containing 167 files and 17,750 pages, to Mark Felt. Many of them contained derogatory information. Gray told the press that afternoon that "there are no dossiers or secret files. There are just general files and I took steps to preserve their integrity." Gandy retained the "Personal File". Gandy worked on going through Hoover's "Personal File" in the office until May 12. She then transferred at least 32 file drawers of material to the basement recreation room of Hoover's Washington home at 4936 Thirtieth Place, NW, where she continued her work from May 13 to July 17. Gandy later testified nothing official had been removed from the FBI's offices, "not even his badge." At Hoover's residence the destruction was overseen by John P. Mohr, the number three man in the FBI after Hoover and Tolson. They were aided by James Jesus Angleton, the Central Intelligence Agency's counterintelligence chief, whom Hoover's neighbors saw removing boxes from Hoover's home. Mohr would claim the boxes Angleton removed were cases of spoiled wine. 
In 1975, when the House Committee on Government Operations investigated the FBI's illegal COINTELPRO program of spying on and harassment of Martin Luther King Jr. and others, Gandy was called to testify regarding the "Personal Files". "I tore them up, put them in boxes, and they were taken away to be shredded," she told the congressmen about the papers. The FBI Washington field office had FBI drivers transport the material to Hoover's home, then once Gandy had gone through the material, the drivers transported it back to the field office in the Old Post Office Building on Pennsylvania Avenue, where it was shredded and burned. Gandy stated that Hoover had left standing instructions to destroy his personal papers upon his death, and that this instruction was confirmed by Tolson and Gray. Gandy stated that she destroyed no official papers, that everything was personal papers of Hoover's. The staff of the subcommittee did not believe her, but she told the committee: "I have no reason to lie." Representative Andrew Maguire (D-New Jersey), a freshman member of the 94th Congress, said "I find your testimony very difficult to believe." Gandy held her ground: "That is your privilege." "I can give you my word. I know what there was—letters to and from friends, personal friends, a lot of letters," she testified. Gandy also said the files she took to his home included his financial papers, such as tax returns and investment statements, the deed to his home, and papers relating to his dogs' pedigrees. Curt Gentry wrote: Helen Gandy must have felt quite safe in testifying as she did, for who could contradict her? Only one other person knew exactly what the files contained and he was dead. In J. Edgar Hoover: The Man and the Secrets, Gentry describes the nature of the files: "...
their contents included blackmail material on the patriarch of an American political dynasty, his sons, their wives, and other women; allegations of two homosexual arrests which Hoover leaked to help defeat a witty, urbane Democratic presidential candidate; the surveillance reports on one of America's best-known first ladies and her alleged lovers, both male and female, white and black; the child molestation documentation the director used to control and manipulate one of the Red-baiting protégés; a list of the Bureau's spies in the White House during the eight administrations when Hoover was FBI director; the forbidden fruit of hundreds of illegal wiretaps and bugs, containing, for example, evidence that an attorney general, Tom C. Clark, who later became Supreme Court justice, had received payoffs from the Chicago syndicate; as well as celebrity files, with all the unsavory gossip Hoover could amass on some of the biggest names in show business." Later years Hoover left Gandy $5,000 in his will. In 1961, she and her sister, Lucy G. Rodman, donated a portrait of their mother by Thomas Eakins to the Smithsonian American Art Museum. Gandy lived in Washington until 1986, when she moved to DeLand, Florida, in Volusia County, where a niece lived. Gandy was an avid trout fisherman. Death Gandy died of a heart attack on July 7, 1988, either in DeLand (as indicated by her New York Times obituary) or in nearby Orange City, Florida (as stated in her Washington Post obituary). In popular culture Gandy has been portrayed by actresses Lee Kessler in J. Edgar Hoover (1987), Naomi Watts in J. Edgar (2011), and Rebecca Toolan in Bad Times at the El Royale (2018). References Bibliography John Crewdson. "U.S. Investigating Missing F.B.I. Data." The New York Times. June 7, 1972. 14. W. Mark Felt. The FBI Pyramid: From the Inside. New York: G.P. Putnam's Sons, 1979. Franklin Dallas Gandy. Post on Ancestry.com. Retrieved July 18, 2005. Curt Gentry. J.
Edgar Hoover: The Man and the Secrets. New York: W.W. Norton, 1991. Richard Hack. Puppetmaster: The Secret Life of J. Edgar Hoover. Beverly Hills, California: New Millennium Press, 2004. "Hoover's Political Spying for Presidents". Time Magazine. December 15, 1975. "Obituaries". Orlando Sentinel. July 9, 1988. D10. "United States Social Security Death Index," database, FamilySearch (https://familysearch.org/ark:/61903/1:1:JTZB-L23 : 20 May 2014), Helen W Gandy (SSN 577-60-1115), 15 Jul 1988; citing U.S. Social Security Administration, Death Master File, database (Alexandria, Virginia: National Technical Information Service, ongoing). William C. Sullivan with Bill Brown. The Bureau: My Thirty Years in Hoover's F.B.I. New York: W.W. Norton, 1979. Athan G. Theoharis, Tony G. Poveda, Susan Rosefeld, and Richard Gid Powers. The FBI: A Comprehensive Reference Guide. New York: Checkmark Books, 2000. Robert McG. Thomas. "John Mohr, 86, Hoover Confidant and Ally at F.B.I." The New York Times. February 1, 1997. 26. "The Truth About Hoover" (cover story). Time Magazine. December 22, 1975. United Press International. "Secretary Says She Destroyed Hoover's Letters on His Orders." The New York Times. December 2, 1975. 14. United States. Congress. House of Representatives. Committee on Government Operations. Subcommittee on Government Information and Individual Rights. Inquiry Into the Destruction of Former FBI Director J. Edgar Hoover's Files and FBI Recordkeeping: Hearing Before a Subcommittee of the Committee on Government Operations, House of Representatives, 94th Congress, December 1, 1975. Washington, D.C.: United States Government Printing Office, 1975. 
External links Attorney General Griffin Bell's statement on the investigation into the destruction of the files:
https://en.wikipedia.org/wiki/Horsepower
Horsepower
Horsepower (hp) is a unit of measurement of power, or the rate at which work is done, usually in reference to the output of engines or motors. There are many different standards and types of horsepower. Two common definitions used today are the mechanical horsepower (or imperial horsepower), which is about 745.7 watts, and the metric horsepower, which is approximately 735.5 watts. The term was adopted in the late 18th century by Scottish engineer James Watt to compare the output of steam engines with the power of draft horses. It was later expanded to include the output power of other types of piston engines, as well as turbines, electric motors and other machinery. The definition of the unit varied among geographical regions. Most countries now use the SI unit watt for measurement of power. With the implementation of the EU Directive 80/181/EEC on 1 January 2010, the use of horsepower in the EU is permitted only as a supplementary unit. History The development of the steam engine provided a reason to compare the output of horses with that of the engines that could replace them. In 1702, Thomas Savery wrote in The Miner's Friend: So that an engine which will raise as much water as two horses, working together at one time in such a work, can do, and for which there must be constantly kept ten or twelve horses for doing the same. Then I say, such an engine may be made large enough to do the work required in employing eight, ten, fifteen, or twenty horses to be constantly maintained and kept for doing such a work… The idea was later used by James Watt to help market his improved steam engine. He had previously agreed to take royalties of one third of the savings in coal from the older Newcomen steam engines. This royalty scheme did not work with customers who did not have existing steam engines but used horses instead. Watt determined that a horse could turn a mill wheel 144 times in an hour (or 2.4 times a minute). 
The wheel was 12 feet (3.7 m) in radius; therefore, the horse travelled 2.4 × 2π × 12 ≈ 181 feet in one minute. Watt judged that the horse could pull with a force of 180 pounds-force. So: 180 lbf × 181 ft/min ≈ 32,572 ft⋅lbf/min. Watt defined and calculated the horsepower as 32,572 ft⋅lbf/min, which was rounded to an even 33,000 ft⋅lbf/min. Watt determined that a pony could lift an average of 220 lbf a height of 100 ft per minute over a four-hour working shift. Watt then judged a horse was 50% more powerful than a pony and thus arrived at the 33,000 ft⋅lbf/min figure. Engineering in History recounts that John Smeaton initially estimated that a horse could produce 22,916 foot-pounds per minute. John Desaguliers had previously suggested 44,000 foot-pounds per minute, and Tredgold suggested 27,500 foot-pounds per minute. "Watt found by experiment in 1782 that a 'brewery horse' could produce 32,400 foot-pounds per minute." James Watt and Matthew Boulton standardized that figure at 33,000 foot-pounds per minute the next year. A common legend states that the unit was created when one of Watt's first customers, a brewer, specifically demanded an engine that would match a horse, choosing the strongest horse he had and driving it to the limit. Watt, while aware of the trick, accepted the challenge and built a machine that was actually even stronger than the figure achieved by the brewer, and the output of that machine became the horsepower. In 1993, R. D. Stevenson and R. J. Wassersug published correspondence in Nature summarizing measurements and calculations of peak and sustained work rates of a horse. Citing measurements made at the 1926 Iowa State Fair, they reported that the peak power over a few seconds has been measured to be as high as 14.9 hp (11.1 kW) and also observed that for sustained activity, a work rate of about 1 hp (0.75 kW) per horse is consistent with agricultural advice from both the 19th and 20th centuries and also consistent with a work rate of about four times the basal rate expended by other vertebrates for sustained activity. 
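Watt's original figure can be reproduced with a short script. The inputs used here (a 12 ft wheel radius, a 180 lbf pull, and 144 turns per hour) are the values commonly attributed to Watt's experiment, so treat them as illustrative assumptions rather than measured data:

```python
import math

# Commonly cited assumptions for Watt's mill-wheel experiment (illustrative):
turns_per_hour = 144       # wheel turns per hour
radius_ft = 12.0           # wheel radius in feet
pull_lbf = 180.0           # horse's pull in pounds-force

turns_per_min = turns_per_hour / 60                      # 2.4 turns per minute
distance_ft_per_min = turns_per_min * 2 * math.pi * radius_ft
power_ft_lbf_per_min = distance_ft_per_min * pull_lbf

print(round(power_ft_lbf_per_min))  # ~32,572 ft·lbf/min, rounded up to 33,000
```

Rounding 32,572 ft⋅lbf/min up to an even 33,000 gives the familiar definition of one mechanical horsepower.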
When considering human-powered equipment, a healthy human can produce about 1.2 hp (0.89 kW) briefly (see orders of magnitude) and sustain about 0.1 hp (0.075 kW) indefinitely; trained athletes can manage up to about 2.5 hp (1.9 kW) briefly and 0.35 hp (0.26 kW) for a period of several hours. The Jamaican sprinter Usain Bolt produced a maximum of 3.5 hp (2.6 kW) 0.89 seconds into his 9.58-second dash world record in 2009. Calculating power When torque is in pound-foot units and rotational speed is in rpm, the resulting power in horsepower is P (hp) = T (lb⋅ft) × N (rpm) / 5252. The constant 5252 is the rounded value of (33,000 ft⋅lbf/min)/(2π rad/rev). When torque is in inch-pounds, P (hp) = T (lb⋅in) × N (rpm) / 63,025. The constant 63,025 is the approximation of (33,000 ft⋅lbf/min × 12 in/ft)/(2π rad/rev). Definitions The following definitions have been or are widely used: In certain situations it is necessary to distinguish between the various definitions of horsepower and thus a suffix is added: hp(I) for mechanical (or imperial) horsepower, hp(M) for metric horsepower, hp(S) for boiler (or steam) horsepower and hp(E) for electrical horsepower. Mechanical horsepower Assuming the third CGPM (1901, CR 70) definition of standard gravity, g = 9.80665 m/s², is used to define the pound-force as well as the kilogram-force, and the international avoirdupois pound (1959), one mechanical horsepower is:

1 hp ≡ 33,000 ft⋅lbf/min (by definition)
= 550 ft⋅lbf/s (since 1 min = 60 s)
= 550 × 0.3048 × 0.45359237 m⋅kgf/s (since 1 ft ≡ 0.3048 m and 1 lb ≡ 0.45359237 kg)
= 76.0402249 kgf⋅m/s
= 76.0402249 × 9.80665 kg⋅m²/s³ (since g = 9.80665 m/s²)
= 745.6998715822702 W ≈ 745.700 W (since 1 W ≡ 1 J/s = 1 N⋅m/s = 1 (kg⋅m/s²)⋅(m/s))

Or given that 1 hp = 550 ft⋅lbf/s, 1 ft = 0.3048 m, 1 lbf ≈ 4.448 N, 1 J = 1 N⋅m, 1 W = 1 J/s: 1 hp ≈ 746 W. Metric horsepower (PS, cv, hk, pk, ks, ch) The various units used to indicate this definition (PS, KM, cv, hk, pk, ks and ch) all translate to horse power in English. British manufacturers often intermix metric horsepower and mechanical horsepower depending on the origin of the engine in question. 
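The watt equivalents and the torque constants used in this article can be cross-checked numerically. A minimal sketch, using only the conversion factors given in the definitions above:

```python
import math

# Exact conversion factors from the definitions in this article
FT_TO_M = 0.3048        # 1 ft in metres (exact)
LB_TO_KG = 0.45359237   # 1 lb in kilograms (exact)
G = 9.80665             # standard gravity, m/s^2 (CGPM 1901)

# Mechanical horsepower: 550 ft·lbf/s expressed in watts
mech_hp_watts = 550 * FT_TO_M * LB_TO_KG * G    # ≈ 745.6999 W

# Metric horsepower: 75 kgf·m/s expressed in watts
metric_hp_watts = 75 * G                        # 735.49875 W exactly

# Torque/speed constants: P[hp] = T * N / constant
const_lb_ft = 33000 / (2 * math.pi)             # ≈ 5252 for lb·ft and rpm
const_lb_in = 33000 * 12 / (2 * math.pi)        # ≈ 63,025 for lb·in and rpm

print(round(mech_hp_watts, 4), metric_hp_watts)
print(round(const_lb_ft), round(const_lb_in))
```

The 1.4% gap between 745.7 W and 735.5 W is why the mechanical/metric distinction matters when quoting engine ratings.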
DIN 66036 defines one metric horsepower as the power to raise a mass of 75 kilograms against the Earth's gravitational force over a distance of one metre in one second: 75 kgf⋅m/s = 1 PS. This is equivalent to 735.49875 W, or 98.6% of an imperial mechanical horsepower. In 1972, the PS was replaced by the kilowatt as the official power-measuring unit in EEC directives. Other names for the metric horsepower are the Italian cavallo vapore (cv), Dutch paardenkracht (pk), the French cheval-vapeur (ch), the Spanish caballo de vapor and Portuguese cavalo-vapor (cv), the Russian лошадиная сила, the Swedish hästkraft (hk), the Finnish hevosvoima (hv), the Estonian hobujõud (hj), the Norwegian and Danish hestekraft (hk), the Hungarian lóerő (LE), the Czech koňská síla and Slovak konská sila (k or ks), the Bosnian/Croatian/Serbian konjska snaga (KS), the Bulgarian конска сила, the Macedonian коњска сила, the Polish koń mechaniczny (KM), Slovenian konjska moč (KM), the Ukrainian кінська сила (к.с.), the Romanian cal-putere (CP), and the German Pferdestärke (PS). In the 19th century, the French had their own unit, which they used instead of the CV or horsepower. It was called the poncelet and was abbreviated p. Tax horsepower Tax or fiscal horsepower is a non-linear rating of a motor vehicle for tax purposes. Tax horsepower ratings were originally more or less directly related to the size of the engine; but as of 2000, many countries changed over to systems based on CO2 emissions, so are not directly comparable to older ratings. The Citroën 2CV is named for its French fiscal horsepower rating, "deux chevaux" (2CV). Electrical horsepower Nameplates on electrical motors show their power output, not the power input (the power delivered at the shaft, not the power consumed to drive the motor). This power output is ordinarily stated in watts or kilowatts. In the United States, the power output is stated in horsepower, which for this purpose is defined as exactly 746 W. Hydraulic horsepower Hydraulic horsepower can represent the power available within hydraulic machinery, power through the down-hole nozzle of a drilling rig, or can be used to estimate the mechanical power needed to generate a known hydraulic flow rate. 
It may be calculated as hydraulic horsepower = pressure (psi) × flow rate (US gal/min) / 1714, where pressure is in psi and flow rate is in US gallons per minute. Drilling rigs are powered mechanically by rotating the drill pipe from above. Hydraulic power is still needed though, as between 2 and 7 hp are required to push mud through the drill bit to clear waste rock. Additional hydraulic power may also be used to drive a down-hole mud motor to power directional drilling. Boiler horsepower Boiler horsepower is a boiler's capacity to deliver steam to a steam engine and is not the same unit of power as the 550 ft lb/s definition. One boiler horsepower is equal to the thermal energy rate required to evaporate 34.5 lb of fresh water at 212 °F in one hour. In the early days of steam use, the boiler horsepower was roughly comparable to the horsepower of engines fed by the boiler. The term "boiler horsepower" was originally developed at the Philadelphia Centennial Exhibition in 1876, where the best steam engines of that period were tested. The average steam consumption of those engines (per output horsepower) was determined to be the evaporation of of water per hour, based on feed water at , and saturated steam generated at . This original definition is equivalent to a boiler heat output of . A few years later in 1884, the ASME re-defined the boiler horsepower as the thermal output equal to the evaporation of 34.5 pounds per hour of water "from and at" 212 °F. This considerably simplified boiler testing, and provided more accurate comparisons of the boilers at that time. This revised definition is equivalent to a boiler heat output of . Present industrial practice is to define "boiler horsepower" as a boiler thermal output equal to , which is very close to the original and revised definitions. Boiler horsepower is still used to measure boiler output in industrial boiler engineering in the US. Boiler horsepower is abbreviated BHP, not to be confused with brake horsepower, below, which is also abbreviated BHP. 
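The hydraulic relation (pressure in psi, flow in US gal/min, divided by the conventional constant 1,714) can be sketched directly; the example pump figures below are invented for illustration:

```python
def hydraulic_horsepower(pressure_psi: float, flow_gpm: float) -> float:
    """Hydraulic power from gauge pressure (psi) and flow rate (US gal/min).

    The constant 1714 folds together the psi -> lbf/ft^2 and gal -> ft^3
    conversions with the 33,000 ft·lbf/min definition of one horsepower.
    """
    return pressure_psi * flow_gpm / 1714

# Illustrative example: a pump delivering 10 US gal/min at 3,000 psi
print(round(hydraulic_horsepower(3000, 10), 2))  # ≈ 17.5 hp
```

The same function can be read in reverse to estimate the mechanical power a prime mover must supply to sustain a known pressure and flow, before pump losses.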
Drawbar horsepower Drawbar horsepower (dbhp) is the power a railway locomotive has available to haul a train or an agricultural tractor to pull an implement. This is a measured figure rather than a calculated one. A special railway car called a dynamometer car coupled behind the locomotive keeps a continuous record of the drawbar pull exerted, and the speed. From these, the power generated can be calculated. To determine the maximum power available, a controllable load is required; it is normally a second locomotive with its brakes applied, in addition to a static load. If the drawbar force (F) is measured in pounds-force (lbf) and speed (v) is measured in miles per hour (mph), then the drawbar power (P) in horsepower (hp) is P = F × v / 375. Example: How much power is needed to pull a drawbar load of 2,025 pounds-force at 5 miles per hour? P = 2,025 × 5 / 375 = 27 hp. The constant 375 is because 1 hp = 375 lbf⋅mph. If other units are used, the constant is different. When using coherent SI units (watts, newtons, and metres per second), no constant is needed, and the formula becomes P = F × v. This formula may also be used to calculate the horsepower of a jet engine, using the speed of the jet and the thrust required to maintain that speed. Example: How much power is generated with a thrust of 4,000 pounds at 400 miles per hour? P = 4,000 × 400 / 375 ≈ 4,267 hp. RAC horsepower (taxable horsepower) This measure was instituted by the Royal Automobile Club and was used to denote the power of early 1900s British cars. Many cars took their names from this figure (hence the Austin Seven and Riley Nine), while others had names such as "40/50 hp", which indicated the RAC figure followed by the true measured power. Taxable horsepower does not reflect developed horsepower; rather, it is a calculated figure based on the engine's bore size, number of cylinders, and a (now archaic) presumption of engine efficiency. 
As new engines were designed with ever-increasing efficiency, it was no longer a useful measure, but was kept in use by UK regulations, which used the rating for tax purposes. The United Kingdom was not the only country that used the RAC rating; many states in Australia used RAC hp to determine taxation. The RAC formula was sometimes applied in British colonies as well, such as Kenya (British East Africa). RAC hp = D² × n / 2.5, where D is the diameter (or bore) of the cylinder in inches and n is the number of cylinders. Since taxable horsepower was computed based on bore and number of cylinders, not based on actual displacement, it gave rise to engines with "undersquare" dimensions (bore smaller than stroke), which tended to impose an artificially low limit on rotational speed, hampering the potential power output and efficiency of the engine. The situation persisted for several generations of four- and six-cylinder British engines: For example, Jaguar's 3.4-litre XK engine of the 1950s had six cylinders with a bore of 83 mm (3.27 in) and a stroke of 106 mm (4.17 in), where most American automakers had long since moved to oversquare (large bore, short stroke) V8 engines. See, for example, the early Chrysler Hemi engine. Measurement The power of an engine may be measured or estimated at several points in the transmission of the power from its generation to its application. A number of names are used for the power developed at various stages in this process, but none is a clear indicator of either the measurement system or definition used. In general: Nominal horsepower is derived from the size of the engine and the piston speed and is only accurate at a steam pressure of 7 psi (48 kPa). 
Indicated or gross horsepower (theoretical capability of the engine, PLAN/33,000, where P is the mean cylinder pressure in psi, L the stroke in feet, A the piston area in square inches, and N the number of power strokes per minute) minus frictional losses within the engine (bearing drag, rod and crankshaft windage losses, oil film drag, etc.), equals Brake / net / crankshaft horsepower (power delivered directly to and measured at the engine's crankshaft) minus frictional losses in the transmission (bearings, gears, oil drag, windage, etc.), equals Shaft horsepower (power delivered to and measured at the output shaft of the transmission, when present in the system) minus frictional losses in the universal joint/s, differential, wheel bearings, tire and chain, (if present), equals Effective or true horsepower (thp), commonly referred to as wheel horsepower (whp). All the above assumes that no power inflation factors have been applied to any of the readings. Engine designers use expressions other than horsepower to denote objective targets or performance, such as brake mean effective pressure (BMEP). This is a coefficient of theoretical brake horsepower and cylinder pressures during combustion. Nominal horsepower Nominal horsepower (nhp) is an early 19th-century rule of thumb used to estimate the power of steam engines. It assumed a steam pressure of 7 psi (48 kPa). Nominal horsepower = 7 × area of piston in square inches × equivalent piston speed in feet per minute / 33,000. For paddle ships, the Admiralty rule was that the piston speed in feet per minute was taken as 129.7 × (stroke)^(1/3.38). For screw steamers, the intended piston speed was used. The stroke (or length of stroke) was the distance moved by the piston measured in feet. For the nominal horsepower to equal the actual power it would be necessary for the mean steam pressure in the cylinder during the stroke to be 7 psi and for the piston speed to be that generated by the assumed relationship for paddle ships. The French Navy used the same definition of nominal horse power as the Royal Navy. 
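Two of the rule-of-thumb ratings discussed above, nominal horsepower and RAC taxable horsepower, are simple enough to sketch directly. The Admiralty paddle-ship piston-speed rule is coded as given in the text; the example engine dimensions are invented for illustration:

```python
import math

def nominal_horsepower(piston_diameter_in: float, piston_speed_ft_min: float) -> float:
    """nhp = 7 * piston area (in^2) * equivalent piston speed (ft/min) / 33,000."""
    area = math.pi * (piston_diameter_in / 2) ** 2
    return 7 * area * piston_speed_ft_min / 33000

def paddle_piston_speed(stroke_ft: float) -> float:
    """Admiralty rule for paddle ships: 129.7 * stroke^(1/3.38) ft/min."""
    return 129.7 * stroke_ft ** (1 / 3.38)

def rac_horsepower(bore_in: float, cylinders: int) -> float:
    """RAC taxable horsepower: D^2 * n / 2.5, with D the bore in inches."""
    return bore_in ** 2 * cylinders / 2.5

# Invented example: a paddle engine with a 60 in piston and a 6 ft stroke
speed = paddle_piston_speed(6.0)
print(round(nominal_horsepower(60, speed)))

# Invented example: a four-cylinder engine with a 3.27 in bore
print(round(rac_horsepower(3.27, 4), 1))
```

Note how the RAC figure depends only on bore and cylinder count, which is exactly the incentive toward undersquare engines described above.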
Indicated horsepower Indicated horsepower (ihp) is the theoretical power of a reciprocating engine if it is completely frictionless in converting the expanding gas energy (piston pressure × displacement) in the cylinders. It is calculated from the pressures developed in the cylinders, measured by a device called an engine indicator – hence indicated horsepower. As the piston advances throughout its stroke, the pressure against the piston generally decreases, and the indicator device usually generates a graph of pressure vs stroke within the working cylinder. From this graph the amount of work performed during the piston stroke may be calculated. Indicated horsepower was a better measure of engine power than nominal horsepower (nhp) because it took account of steam pressure. But unlike later measures such as shaft horsepower (shp) and brake horsepower (bhp), it did not take into account power losses due to the machinery internal frictional losses, such as a piston sliding within the cylinder, plus bearing friction, transmission and gear box friction, etc. Brake horsepower Brake horsepower (bhp) is the power measured using a brake type (load) dynamometer at a specified location, such as the crankshaft, output shaft of the transmission, rear axle or rear wheels. In Europe, the DIN 70020 standard tests the engine fitted with all ancillaries and exhaust system as used in the car. The older American standard (SAE gross horsepower, referred to as bhp) used an engine without alternator, water pump, and other auxiliary components such as power steering pump, muffled exhaust system, etc., so the figures were higher than the European figures for the same engine. The newer American standard (referred to as SAE net horsepower) tests an engine with all the auxiliary components (see "Engine power test standards" below). 
Brake refers to the device used to provide a braking force or load that balances the engine's output and holds it at a desired rotational speed. During testing, the output torque and rotational speed are measured to determine the brake horsepower. Horsepower was originally measured and calculated by use of the "indicator diagram" (a James Watt invention of the late 18th century), and later by means of a Prony brake connected to the engine's output shaft. Modern dynamometers use any of several braking methods to measure the engine's brake horsepower, the actual output of the engine itself, before losses to the drivetrain. Shaft horsepower Shaft horsepower (shp) is the power delivered to a propeller shaft, a turbine shaft, or to an output shaft of an automotive transmission. Shaft horsepower is a common rating for turboshaft and turboprop engines, industrial turbines, and some marine applications. Equivalent shaft horsepower (eshp) is sometimes used to rate turboprop engines. It includes the equivalent power derived from residual jet thrust from the turbine exhaust. of residual jet thrust is estimated to be produced from one unit of horsepower. Engine power test standards There exist a number of different standards determining how the power and torque of an automobile engine are measured and corrected. Correction factors are used to adjust power and torque measurements to standard atmospheric conditions, to provide a more accurate comparison between engines as they are affected by the pressure, humidity, and temperature of ambient air. Some standards are described below. Society of Automotive Engineers/SAE International Early "SAE horsepower" (see RAC horsepower for the formula) In the early twentieth century, a so-called "SAE horsepower" was sometimes quoted for U.S. automobiles. 
This long predates the Society of Automotive Engineers (SAE) horsepower measurement standards and was another name for the industry standard ALAM or NACC horsepower figure and the same as the British RAC horsepower also used for tax purposes. Alliance for Automotive Innovation is the current successor of ALAM and NACC. SAE gross power Prior to the 1972 model year, American automakers rated and advertised their engines in brake horsepower, bhp, which was a version of brake horsepower called SAE gross horsepower because it was measured according to Society of Automotive Engineers (SAE) standards (J245 and J1995) that call for a stock test engine without accessories (such as dynamo/alternator, radiator fan, water pump), and sometimes fitted with long tube test headers in lieu of the OEM exhaust manifolds. This contrasts with both SAE net power and DIN 70020 standards, which account for engine accessories (but not transmission losses). The atmospheric correction standards for barometric pressure, humidity and temperature for SAE gross power testing were relatively idealistic. SAE net power In the United States, the term bhp fell into disuse in 1971–1972, as automakers began to quote power in terms of SAE net horsepower in accord with SAE standard J1349. Like SAE gross and other brake horsepower protocols, SAE net hp is measured at the engine's crankshaft, and so does not account for transmission losses. However, similar to the DIN 70020 standard, SAE net power testing protocol calls for standard production-type belt-driven accessories, air cleaner, emission controls, exhaust system, and other power-consuming accessories. This produces ratings in closer alignment with the power produced by the engine as it is actually configured and sold. SAE certified power In 2005, the SAE introduced "SAE Certified Power" with SAE J2723. 
To attain certification the test must follow the SAE standard in question, take place in an ISO 9000/9002 certified facility and be witnessed by an SAE approved third party. A few manufacturers such as Honda and Toyota switched to the new ratings immediately. The rating for Toyota's Camry 3.0 L 1MZ-FE V6 fell from . The company's Lexus ES 330 and Camry SE V6 (3.3 L V6) were previously rated at but the ES 330 dropped to while the Camry declined to . The first engine certified under the new program was the 7.0 L LS7 used in the 2006 Chevrolet Corvette Z06. Certified power rose slightly from . While Toyota and Honda are retesting their entire vehicle lineups, other automakers generally are retesting only those with updated powertrains. For example, the 2006 Ford Five Hundred is rated at , the same as that of the 2005 model. However, the 2006 rating does not reflect the new SAE testing procedure, as Ford is not going to incur the extra expense of retesting its existing engines. Over time, most automakers are expected to comply with the new guidelines. SAE tightened its horsepower rules to eliminate the opportunity for engine manufacturers to manipulate factors affecting performance such as how much oil was in the crankcase, engine control system calibration, and whether an engine was tested with high-octane fuel. In some cases, such factors can add up to a change in horsepower ratings. Deutsches Institut für Normung 70020 (DIN 70020) DIN 70020 is a German DIN standard for measuring road vehicle horsepower. DIN hp is measured at the engine's output shaft as a form of metric horsepower rather than mechanical horsepower. Similar to SAE net power rating, and unlike SAE gross power, DIN testing measures the engine as installed in the vehicle, with cooling system, charging system and stock exhaust system all connected. DIN hp is often abbreviated as "PS", derived from the German word Pferdestärke (literally, "horsepower"). 
CUNA A test standard by Italian CUNA (Commissione Tecnica per l'Unificazione nell'Automobile, Technical Commission for Automobile Unification), a federated entity of standards organisation UNI, was formerly used in Italy. CUNA prescribed that the engine be tested with all accessories necessary to its running fitted (such as the water pump), while all others – such as alternator/dynamo, radiator fan, and exhaust manifold – could be omitted. All calibration and accessories had to be as on production engines. Economic Commission for Europe R24 ECE R24 is a UN standard for the approval of compression ignition engine emissions, installation and measurement of engine power. It is similar to the DIN 70020 standard, but with different requirements for connecting an engine's fan during testing, causing it to absorb less power from the engine. Economic Commission for Europe R85 ECE R85 is a UN standard for the approval of internal combustion engines with regard to the measurement of the net power. 80/1269/EEC 80/1269/EEC of 16 December 1980 is a European Union standard for road vehicle engine power. International Organization for Standardization The International Organization for Standardization (ISO) publishes several standards for measuring engine horsepower. ISO 14396 specifies the additional requirements and the method for determining the power of reciprocating internal combustion engines when presented for an ISO 8178 exhaust emission test. It applies to reciprocating internal combustion engines for land, rail and marine use, excluding engines of motor vehicles primarily designed for road use. ISO 1585 is an engine net power test code intended for road vehicles. ISO 2534 is an engine gross power test code intended for road vehicles. ISO 4164 is an engine net power test code intended for mopeds. ISO 4106 is an engine net power test code intended for motorcycles. ISO 9249 is an engine net power test code intended for earth moving machines. 
Japanese Industrial Standard D 1001 JIS D 1001 is a Japanese net and gross engine power test code for automobiles or trucks having a spark ignition, diesel engine, or fuel injection engine. See also Brake specific fuel consumption – how much fuel an engine consumes per unit energy output Dynamometer engine testing European units of measurement directives Horsepower-hour Mean effective pressure Torque References External links How Stuff Works: Horsepower
https://en.wikipedia.org/wiki/History%20of%20London
History of London
The history of London, the capital city of England and the United Kingdom, extends over 2000 years. In that time, it has become one of the world's most significant financial and cultural capital cities. It has withstood plague, devastating fire, civil war, aerial bombardment, terrorist attacks, and riots. The City of London is the historic core of the Greater London metropolis, and is today its primary financial district, though it represents only a small part of the wider metropolis. Foundations and prehistory Some recent discoveries indicate probable very early settlements near the Thames in the London area. In 1993, the remains of a Bronze Age bridge were found on the Thames's south foreshore, upstream of Vauxhall Bridge. This bridge either crossed the Thames or went to a now lost island in the river. Dendrochronology dated the timbers to between 1750 BCE and 1285 BCE. In 2001, a further dig found that the timbers were driven vertically into the ground on the south bank of the Thames west of Vauxhall Bridge. In 2010, the foundations of a large timber structure, dated to between 4800 BCE and 4500 BCE, were found, again on the foreshore south of Vauxhall Bridge. The function of the mesolithic structure is not known. All these structures are on the south bank at a natural crossing point where the River Effra flows into the Thames. It is thought that the Thames was an important tribal boundary, and numerous finds have been made of spear heads and weaponry from the Bronze and Iron Ages near the banks of the Thames in the London area, many of which had clearly been used in battle. Archaeologist Leslie Wallace notes, "Because no LPRIA [Late pre-Roman Iron Age] settlements or significant domestic refuse have been found in London, despite extensive archaeological excavation, arguments for a purely Roman foundation of London are now common and uncontroversial." 
Early history Roman London (AD 47–410) Londinium was established as a civilian town by the Romans about four years after the invasion of AD 43. London, like Rome, was founded on the point of the river where it was narrow enough to bridge and the strategic location of the city provided easy access to much of Europe. Early Roman London occupied a relatively small area, roughly equivalent to the size of Hyde Park. In around AD 60, it was destroyed by the Iceni led by their queen Boudica. The city was quickly rebuilt as a planned Roman town and recovered after perhaps 10 years; the city grew rapidly over the following decades. During the 2nd century Londinium was at its height and replaced Colchester as the capital of Roman Britain (Britannia). Its population was around 60,000 inhabitants. It boasted major public buildings, including the largest basilica north of the Alps, temples, bath houses, an amphitheatre and a large fort for the city garrison. Political instability and recession from the 3rd century onwards led to a slow decline. At some time between AD 180 and AD 225, the Romans built the defensive London Wall around the landward side of the city. The wall was about long, high, and thick. The wall would survive for another 1,600 years and define the City of London's perimeters for centuries to come. The perimeters of the present City are roughly defined by the line of the ancient wall. Londinium was an ethnically diverse city with inhabitants from across the Roman Empire, including natives of Britannia, continental Europe, the Middle East, and North Africa. In the late 3rd century, Londinium was raided on several occasions by Saxon pirates. This led, from around 255 onwards, to the construction of an additional riverside wall. Six of the traditional seven city gates of London are of Roman origin, namely: Ludgate, Newgate, Aldersgate, Cripplegate, Bishopsgate and Aldgate (Moorgate is the exception, being of medieval origin). 
By the 5th century, the Roman Empire was in rapid decline and in AD 410, the Roman occupation of Britannia came to an end. Following this, the Roman city also went into rapid decline and by the end of the 5th century was practically abandoned. Anglo-Saxon London (5th century – 1066) Until recently it was believed that Anglo-Saxon settlement initially avoided the area immediately around Londinium. However, the discovery in 2008 of an Anglo-Saxon cemetery at Covent Garden indicates that the incomers had begun to settle there at least as early as the 6th century and possibly in the 5th. The main focus of this settlement was outside the Roman walls, clustering a short distance to the west along what is now the Strand, between the Aldwych and Trafalgar Square. It was known as Lundenwic, the -wic suffix here denoting a trading settlement. Recent excavations have also highlighted the population density and relatively sophisticated urban organisation of this earlier Anglo-Saxon London, which was laid out on a grid pattern and grew to house a likely population of 10–12,000. Early Anglo-Saxon London belonged to a people known as the Middle Saxons, from whom the name of the county of Middlesex is derived, but who probably also occupied the approximate area of modern Hertfordshire and Surrey. However, by the early 7th century the London area had been incorporated into the kingdom of the East Saxons. In 604 King Saeberht of Essex converted to Christianity and London received Mellitus, its first post-Roman bishop. At this time Essex was under the overlordship of King Æthelberht of Kent, and it was under Æthelberht's patronage that Mellitus founded the first St. Paul's Cathedral, traditionally said to be on the site of an old Roman Temple of Diana (although Christopher Wren found no evidence of this). It would have only been a modest church at first and may well have been destroyed after he was expelled from the city by Saeberht's pagan successors. 
The permanent establishment of Christianity in the East Saxon kingdom took place in the reign of King Sigeberht II in the 650s. During the 8th century, the kingdom of Mercia extended its dominance over south-eastern England, initially through overlordship which at times developed into outright annexation. London seems to have come under direct Mercian control in the 730s. Viking attacks dominated most of the 9th century, becoming increasingly common from around 830 onwards. London was sacked in 842 and again in 851. The Danish "Great Heathen Army", which had rampaged across England since 865, wintered in London in 871. The city remained in Danish hands until 886, when it was captured by the forces of King Alfred the Great of Wessex and reincorporated into Mercia, then governed under Alfred's sovereignty by his son-in-law Ealdorman Æthelred. Around this time the focus of settlement moved within the old Roman walls for the sake of defence, and the city became known as Lundenburh. The Roman walls were repaired and the defensive ditch re-cut, while the bridge was probably rebuilt at this time. A second fortified borough was established on the south bank at Southwark, the Suthringa Geworc (defensive work of the men of Surrey). The old settlement of Lundenwic became known as the ealdwic or "old settlement", a name which survives today as Aldwych. From this point, the City of London began to develop its own unique local government. Following Æthelred's death in 911 it was transferred to Wessex, preceding the absorption of the rest of Mercia in 918. Although it faced competition for political pre-eminence in the united Kingdom of England from the traditional West Saxon centre of Winchester, London's size and commercial wealth brought it a steadily increasing importance as a focus of governmental activity. King Athelstan held many meetings of the witan in London and issued laws from there, while King Æthelred the Unready issued the Laws of London there in 978. 
Following the resumption of Viking attacks in the reign of Æthelred, London was unsuccessfully attacked in 994 by an army under King Sweyn Forkbeard of Denmark. As English resistance to the sustained and escalating Danish onslaught finally collapsed in 1013, London repulsed an attack by the Danes and was the last place to hold out while the rest of the country submitted to Sweyn, but by the end of the year it too capitulated and Æthelred fled abroad. Sweyn died just five weeks after having been proclaimed king and Æthelred was restored to the throne, but Sweyn's son Cnut returned to the attack in 1015. After Æthelred's death at London in 1016 his son Edmund Ironside was proclaimed king there by the witenagemot and left to gather forces in Wessex. London was then subjected to a systematic siege by Cnut but was relieved by King Edmund's army; when Edmund again left to recruit reinforcements in Wessex the Danes resumed the siege but were again unsuccessful. However, following his defeat at the Battle of Assandun Edmund ceded to Cnut all of England north of the Thames, including London, and his death a few weeks later left Cnut in control of the whole country. A Norse saga tells of a battle when King Æthelred returned to attack Danish-occupied London. According to the saga, the Danes lined London Bridge and showered the attackers with spears. Undaunted, the attackers pulled the roofs off nearby houses and held them over their heads in the boats. Thus protected, they were able to get close enough to the bridge to attach ropes to the piers and pull the bridge down, thus ending the Viking occupation of London. This story presumably relates to Æthelred's return to power after Sweyn's death in 1014, but there is no strong evidence of any such struggle for control of London on that occasion. Following the extinction of Cnut's dynasty in 1042 English rule was restored under Edward the Confessor. 
He was responsible for the foundation of Westminster Abbey and spent much of his time at Westminster, which from this time steadily supplanted the City itself as the centre of government. Edward's death at Westminster in 1066 without a clear heir led to a succession dispute and the Norman conquest of England. Earl Harold Godwinson was elected king by the witenagemot and crowned in Westminster Abbey but was defeated and killed by William the Bastard, Duke of Normandy at the Battle of Hastings. The surviving members of the witan met in London and elected King Edward's young great-nephew Edgar the Ætheling as king. The Normans advanced to the south bank of the Thames opposite London, where they defeated an English attack and burned Southwark but were unable to storm the bridge. They moved upstream and crossed the river at Wallingford before advancing on London from the north-west. The resolve of the English leadership to resist collapsed and the chief citizens of London went out together with the leading members of the Church and aristocracy to submit to William at Berkhamsted, although according to some accounts there was a subsequent violent clash when the Normans reached the city. Having occupied London, William was crowned king in Westminster Abbey. Norman and Medieval London (1066 – late 15th century) The new Norman regime established new fortresses within the city to dominate the native population. By far the most important of these was the Tower of London at the eastern end of the city, where the initial timber fortification was rapidly replaced by the construction of the first stone castle in England. The smaller forts of Baynard's Castle and Montfichet's Castle were also established along the waterfront. King William also granted a charter in 1067 confirming the city's existing rights, privileges and laws. London was a centre of England's nascent Jewish population, the first of whom arrived in about 1070. 
Its growing self-government was consolidated by the election rights granted by King John in 1199 and 1215. In 1097, William Rufus, the son of William the Conqueror, began the construction of Westminster Hall, which became the focus of the Palace of Westminster. In 1176, construction began of the most famous incarnation of London Bridge (completed in 1209), which was built on the site of several earlier timber bridges. This bridge would last for 600 years, and remained the only bridge across the River Thames in London until work began on Westminster Bridge in 1739. Violence against Jews took place in 1190, after it was rumoured that the new King had ordered their massacre after they had presented themselves at his coronation. In 1216, during the First Barons' War, London was occupied by Prince Louis of France, who had been called in by the baronial rebels against King John and was acclaimed as King of England in St Paul's Cathedral. However, following John's death in 1216 Louis's supporters reverted to their Plantagenet allegiance, rallying round John's son Henry III, and in 1217 Louis was forced to withdraw from England. In 1224, after an accusation of ritual murder, the Jewish community was subjected to a steep punitive levy. Then in 1232, Henry III confiscated the principal synagogue of the London Jewish community because he claimed their chanting was audible in a neighbouring church. In 1264, during the Second Barons' War, Simon de Montfort's rebels occupied London and killed 500 Jews while attempting to seize records of debts. London's Jewish community was forced to leave England by the expulsion ordered by Edward I in 1290. They left for France, Holland and further afield; their property was seized, and many suffered robbery and murder as they departed. Over the following centuries, London would shake off the heavy French cultural and linguistic influence which had been there since the times of the Norman conquest. The city would figure heavily in the development of Early Modern English. 
During the Peasants' Revolt of 1381, London was invaded by rebels led by Wat Tyler. A group of peasants stormed the Tower of London and executed the Lord Chancellor, Archbishop Simon Sudbury, and the Lord Treasurer. The peasants looted the city and set fire to numerous buildings. Tyler was stabbed to death by the Lord Mayor William Walworth in a confrontation at Smithfield and the revolt collapsed. Trade increased steadily during the Middle Ages, and London grew rapidly as a result. In 1100, London's population was somewhat more than 15,000. By 1300, it had grown to roughly 80,000. London lost at least half of its population during the Black Death in the mid-14th century, but its economic and political importance stimulated a rapid recovery despite further epidemics. Trade in London was organised into various guilds, which effectively controlled the city and elected the Lord Mayor of the City of London. Medieval London was made up of narrow and twisting streets, and most of the buildings were made from combustible materials such as timber and straw, which made fire a constant threat, while sanitation was poor. Modern history Tudor London (1485–1603) In 1475, the Hanseatic League set up its main English trading base (kontor) in London, called the Stalhof or Steelyard. It existed until 1853, when the Hanseatic cities of Lübeck, Bremen and Hamburg sold the property to the South Eastern Railway. Woollen cloth was shipped undyed and undressed from 14th/15th-century London to the nearby shores of the Low Countries, where it was considered indispensable. During the Reformation, London was the principal early centre of Protestantism in England. Its close commercial connections with the Protestant heartlands in northern continental Europe, large foreign mercantile communities, disproportionately large number of literate inhabitants and role as the centre of the English print trade all contributed to the spread of the new ideas of religious reform. 
Before the Reformation, more than half of the area of London was the property of monasteries, nunneries and other religious houses. Henry VIII's "Dissolution of the Monasteries" had a profound effect on the city as nearly all of this property changed hands. The process started in the mid 1530s, and by 1538 most of the larger monastic houses had been abolished. Holy Trinity Aldgate went to Lord Audley, and the Marquess of Winchester built himself a house in part of its precincts. The Charterhouse went to Lord North, Blackfriars to Lord Cobham, the leper hospital of St Giles to Lord Dudley, while the king took for himself the leper hospital of St James, which was rebuilt as St James's Palace. The period saw London rapidly rising in importance among Europe's commercial centres. Trade expanded beyond Western Europe to Russia, the Levant, and the Americas. This was the period of mercantilism and monopoly trading companies such as the Muscovy Company (1555) and the British East India Company (1600) were established in London by Royal Charter. The latter, which ultimately came to rule India, was one of the key institutions in London, and in Britain as a whole, for two and a half centuries. Immigrants arrived in London not just from all over England and Wales, but from abroad as well, for example Huguenots from France; the population rose from an estimated 50,000 in 1530 to about 225,000 in 1605. The growth of the population and wealth of London was fuelled by a vast expansion in the use of coastal shipping. The late 16th and early 17th century saw the great flourishing of drama in London whose preeminent figure was William Shakespeare. During the mostly calm later years of Elizabeth's reign, some of her courtiers and some of the wealthier citizens of London built themselves country residences in Middlesex, Essex and Surrey. 
This was an early stirring of the villa movement, the taste for residences which were neither of the city nor on an agricultural estate, but at the time of Elizabeth's death in 1603, London was still very compact. Xenophobia was rampant in London, and increased after the 1580s. Many immigrants became disillusioned by routine threats of violence and molestation, attempts at expulsion of foreigners, and the great difficulty in acquiring English citizenship. Dutch cities proved more hospitable, and many left London permanently. Foreigners are estimated to have made up 4,000 of the 100,000 residents of London by 1600, many being Dutch and German workers and traders. Stuart London (1603–1714) London's expansion beyond the boundaries of the City was decisively established in the 17th century. In the opening years of that century the immediate environs of the City, with the principal exception of the aristocratic residences in the direction of Westminster, were still considered not conducive to health. Immediately to the north was Moorfields, which had recently been drained and laid out in walks, but it was frequented by beggars and travellers, who crossed it in order to get into London. Adjoining Moorfields were Finsbury Fields, a favourite practising ground for archers, and Mile End, then a common on the Great Eastern Road, famous as a rendezvous for troops. Preparations for the coronation of King James I were interrupted by a severe plague epidemic, which may have killed over thirty thousand people. The Lord Mayor's Show, which had been discontinued for some years, was revived by order of the king in 1609. The dissolved monastery of the Charterhouse, which had been bought and sold by courtiers several times, was purchased by Thomas Sutton for £13,000. The new hospital, chapel, and schoolhouse were begun in 1611. 
Charterhouse School was to be one of the principal public schools in London until it moved to Surrey in Victorian times, and the site is still used as a medical school. The general meeting-place of Londoners in the day-time was the nave of Old St. Paul's Cathedral. Merchants conducted business in the aisles, and used the font as a counter upon which to make their payments; lawyers received clients at their particular pillars; and the unemployed looked for work. St Paul's Churchyard was the centre of the book trade and Fleet Street was a centre of public entertainment. Under James I the theatre, which had established itself so firmly in the latter years of Elizabeth, grew further in popularity. The performances at the public theatres were complemented by elaborate masques at the royal court and at the inns of court. Charles I acceded to the throne in 1625. During his reign, aristocrats began to inhabit the West End in large numbers. In addition to those who had specific business at court, increasing numbers of country landowners and their families lived in London for part of the year simply for the social life. This was the beginning of the "London season". Lincoln's Inn Fields was built about 1629. The piazza of Covent Garden, designed by England's first classically trained architect, Inigo Jones, followed in about 1632. The neighbouring streets were built shortly afterwards, and Henrietta, Charles, James, King and York Streets were named after members of the royal family. In January 1642 five members of parliament whom the King wished to arrest were granted refuge in the City. In August of the same year the King raised his banner at Nottingham, and during the English Civil War London took the side of Parliament. Initially the king had the upper hand in military terms and in November he won the Battle of Brentford a few miles to the west of London. The City organised a new makeshift army; Charles hesitated and retreated. 
Subsequently, an extensive system of fortifications was built to protect London from a renewed attack by the Royalists. This comprised a strong earthen rampart, enhanced with bastions and redoubts. It was well beyond the City walls and encompassed the whole urban area, including Westminster and Southwark. London was not seriously threatened by the royalists again, and the financial resources of the City made an important contribution to the parliamentarians' victory in the war. The unsanitary and overcrowded City of London had suffered numerous outbreaks of plague over the centuries, but in Britain it is the last major outbreak which is remembered as the "Great Plague". Occurring in 1665 and 1666, it killed around 60,000 people, one fifth of the population. Samuel Pepys chronicled the epidemic in his diary. On 4 September 1665 he wrote "I have stayed in the city till above 7400 died in one week, and of them about 6000 of the plague, and little noise heard day or night but tolling of bells." Great Fire of London (1666) The Great Plague was immediately followed by another catastrophe, albeit one which helped to put an end to the plague. On Sunday, 2 September 1666 the Great Fire of London broke out at one o'clock in the morning at a bakery in Pudding Lane in the southern part of the City. Fanned by an easterly wind the fire spread, and efforts to arrest it by pulling down houses to make firebreaks were disorganised to begin with. On Tuesday night the wind fell somewhat, and on Wednesday the fire slackened. On Thursday it was extinguished, but on the evening of that day the flames again burst forth at the Temple. Some houses were at once blown up by gunpowder, and thus the fire was finally mastered. The Monument was built to commemorate the fire: for over a century and a half it bore an inscription attributing the conflagration to a "popish frenzy". 
The fire destroyed about 60% of the City, including Old St Paul's Cathedral, 87 parish churches, 44 livery company halls and the Royal Exchange. However, the number of lives lost was surprisingly small; it is believed to have been 16 at most. Within a few days of the fire, three plans were presented to the king for the rebuilding of the city, by Christopher Wren, John Evelyn and Robert Hooke. Wren proposed to build main thoroughfares north and south, and east and west, to insulate all the churches in conspicuous positions, to form the most public places into large piazzas, to unite the halls of the 12 chief livery companies into one regular square annexed to the Guildhall, and to make a fine quay on the bank of the river from Blackfriars to the Tower of London. Wren wished to build the new streets straight and in three standard widths of thirty, sixty and ninety feet. Evelyn's plan differed from Wren's chiefly in proposing a street from the church of St Dunstan's in the East to St Paul's, and in having no quay or terrace along the river. These plans were not implemented, and the rebuilt city generally followed the street plan of the old one, most of which has survived into the 21st century. Nonetheless, the new City was different from the old one. Many aristocratic residents never returned, preferring to take new houses in the West End, where fashionable new districts such as St. James's were built close to the main royal residence, which was Whitehall Palace until it was destroyed by fire in the 1690s, and thereafter St. James's Palace. The rural lane of Piccadilly sprouted courtiers' mansions such as Burlington House. Thus the separation between the middle-class mercantile City of London and the aristocratic world of the court in Westminster became complete. In the City itself there was a move from wooden buildings to stone and brick construction to reduce the risk of fire. 
Parliament's Rebuilding of London Act 1666 stated "building with brick [is] not only more comely and durable, but also more safe against future perils of fire". From then on only doorcases, window-frames and shop fronts were allowed to be made of wood. Christopher Wren's plan for a new model London came to nothing, but he was appointed to rebuild the ruined parish churches and to replace St Paul's Cathedral. His domed baroque cathedral was the primary symbol of London for at least a century and a half. As city surveyor, Robert Hooke oversaw the reconstruction of the City's houses. The East End, that is the area immediately to the east of the city walls, also became heavily populated in the decades after the Great Fire. London's docks began to extend downstream, attracting many working people who worked on the docks themselves and in the processing and distributive trades. These people lived in Whitechapel, Wapping, Stepney and Limehouse, generally in slum conditions. In the winter of 1683–1684, a frost fair was held on the Thames. The frost, which began about seven weeks before Christmas and continued for six weeks after, was the greatest on record. The Revocation of the Edict of Nantes in 1685 led to a large migration of Huguenots to London. They established a silk industry at Spitalfields. Around this time the Bank of England was founded (1694), and the British East India Company was expanding its influence. Lloyd's of London also began to operate in the late 17th century. In 1700, London handled 80% of England's imports, 69% of its exports and 86% of its re-exports. Many of the goods were luxuries from the Americas and Asia such as silk, sugar, tea and tobacco. The last figure emphasises London's role as an entrepot: while it had many craftsmen in the 17th century, and would later acquire some large factories, its economic prominence was never based primarily on industry. Instead it was a great trading and redistribution centre. 
Goods were brought to London by England's increasingly dominant merchant navy, not only to satisfy domestic demand, but also for re-export throughout Europe and beyond. William III, a Dutchman, cared little for London, the smoke of which gave him asthma, and after the first fire at Whitehall Palace (1691) he purchased Nottingham House and transformed it into Kensington Palace. Kensington was then an insignificant village, but the arrival of the court soon caused it to grow in importance. The palace was rarely favoured by future monarchs, but its construction was another step in the expansion of the bounds of London. During the same reign Greenwich Hospital, then well outside the boundary of London, but now comfortably inside it, was begun; it was the naval complement to the Chelsea Hospital for former soldiers, which had been founded in 1681. During the reign of Queen Anne an act was passed authorising the building of 50 new churches to serve the greatly increased population living outside the boundaries of the City of London. 18th century The 18th century was a period of rapid growth for London, reflecting an increasing national population, the early stirrings of the Industrial Revolution, and London's role at the centre of the evolving British Empire. In 1707, an Act of Union was passed merging the Scottish and the English Parliaments, thus establishing the Kingdom of Great Britain. A year later, in 1708, Christopher Wren's masterpiece, St Paul's Cathedral, was completed on his birthday, although the first service had been held on 2 December 1697, more than ten years earlier. This cathedral replaced the original St Paul's, which had been completely destroyed in the Great Fire of London. The building is considered one of the finest in Britain and a fine example of Baroque architecture. Tradesmen from many countries came to London to trade goods, and continuing immigration swelled the population. 
People also moved to London for work and business, making it an altogether bigger and busier city. Britain's victory in the Seven Years' War increased the country's international standing and opened large new markets to British trade, further boosting London's prosperity. During the Georgian period London spread beyond its traditional limits at an accelerating pace. This is shown in a series of detailed maps, particularly John Rocque's 1741–45 map and his 1746 Map of London. New districts such as Mayfair were built for the rich in the West End, new bridges over the Thames encouraged an acceleration of development in South London, and in the East End the Port of London expanded downstream from the City. This period also saw the revolt of the American colonies. In 1780, the Tower of London held its only American prisoner, Henry Laurens, a former President of the Continental Congress. In 1779 he had been sent as the Congress's envoy to Holland, and secured that country's support for the Revolution. On his return voyage to America, the Royal Navy captured him and charged him with treason after finding among his papers evidence that gave Great Britain a cause for war against the Netherlands. He was released from the Tower on 21 December 1781 in exchange for General Lord Cornwallis. In 1762, George III acquired Buckingham Palace (then called Buckingham House) from the Duke of Buckingham. It was enlarged over the next 75 years by architects such as John Nash. A phenomenon of the era was the coffeehouse, which became a popular place to debate ideas. Growing literacy and the development of the printing press meant that news became widely available. Fleet Street became the centre of the embryonic national press during the century. 18th-century London was dogged by crime. The Bow Street Runners were established in 1750 as a professional police force. Penalties for crime were harsh, with the death penalty being applied for fairly minor crimes. 
Public hangings were common in London, and were popular public events. In 1780, London was rocked by the Gordon Riots, an uprising by Protestants against Roman Catholic emancipation led by Lord George Gordon. Severe damage was caused to Catholic churches and homes, and 285 rioters were killed. Up until 1750, London Bridge was the only crossing over the Thames, but in that year Westminster Bridge was opened and, for the first time in history, London Bridge had a rival. In 1798, Frankfurt banker Nathan Mayer Rothschild arrived in London and set up a banking house in the city, with a large sum of money given to him by his father, Amschel Mayer Rothschild. The Rothschilds also had banks in Paris and Vienna. The bank financed numerous large-scale projects, especially regarding railways around the world and the Suez Canal. The 18th century saw the breakaway of the American colonies and other upheavals, but it was also a period of great change and Enlightenment that led into the modern era of the 19th century. 19th century During the 19th century, London was transformed into the world's largest city and capital of the British Empire. Its population expanded from 1 million in 1800 to 6.7 million a century later. During this period, London became a global political, financial, and trading capital. In this position, it was largely unrivalled until the latter part of the century, when Paris and New York began to threaten its dominance. While the city grew wealthy as Britain's holdings expanded, 19th-century London was also a city of poverty, where millions lived in overcrowded and unsanitary slums. Life for the poor was immortalised by Charles Dickens in such novels as Oliver Twist. In 1810, after the deaths of Sir Francis Baring and Abraham Goldsmid, Rothschild emerged as the major banker in London. 
In 1829, the then Home Secretary (and future prime minister) Robert Peel established the Metropolitan Police as a police force covering the entire urban area. The force gained the nicknames "bobbies" and "peelers", after Robert Peel. 19th-century London was transformed by the coming of the railways. A new network of metropolitan railways allowed for the development of suburbs in neighbouring counties from which middle-class and wealthy people could commute to the centre. While this spurred the massive outward growth of the city, the growth of greater London also exacerbated the class divide, as the wealthier classes migrated to the suburbs, leaving the poor to inhabit the inner city areas. The first railway to be built in London was a line from London Bridge to Greenwich, which opened in 1836. This was soon followed by the opening of great rail termini which eventually linked London to every corner of Great Britain, including Euston station (1837), Paddington station (1838), Fenchurch Street station (1841), Waterloo station (1848), King's Cross station (1850), and St Pancras station (1863). From 1863, the first lines of the London Underground were constructed. The urbanised area continued to grow rapidly, spreading into Islington, Paddington, Belgravia, Holborn, Finsbury, Shoreditch, Southwark and Lambeth. Towards the middle of the century, London's antiquated local government system, consisting of ancient parishes and vestries, struggled to cope with the rapid growth in population. In 1855, the Metropolitan Board of Works (MBW) was created to provide London with adequate infrastructure to cope with its growth. One of its first tasks was addressing London's sanitation problems. At the time, raw sewage was pumped straight into the River Thames. This culminated in the Great Stink of 1858. Parliament finally gave consent for the MBW to construct a large system of sewers. The engineer put in charge of building the new system was Joseph Bazalgette. 
In what was one of the largest civil engineering projects of the 19th century, he oversaw construction of over 2100 km of tunnels and pipes under London to take away sewage and provide clean drinking water. When the London sewerage system was completed, the death toll in London dropped dramatically, and epidemics of cholera and other diseases were curtailed. Bazalgette's system is still in use today. One of the most famous events of 19th-century London was the Great Exhibition of 1851. Held at The Crystal Palace, the fair attracted 6 million visitors from across the world and displayed Britain at the height of its Imperial dominance. As the capital of a massive empire, London became a magnet for immigrants from the colonies and poorer parts of Europe. A large Irish population settled in the city during the Victorian period, with many of the newcomers refugees from the Great Famine (1845–1849). At one point, Catholic Irish made up about 20% of London's population; they typically lived in overcrowded slums. London also became home to a sizable Jewish community, which was notable for its entrepreneurship in the clothing trade and merchandising. In 1888, the new County of London was established, administered by the London County Council. This was the first elected London-wide administrative body, replacing the earlier Metropolitan Board of Works, which had been made up of appointees. The County of London covered broadly what was then the full extent of the London conurbation, although the conurbation later outgrew the boundaries of the county. In 1900, the county was sub-divided into 28 metropolitan boroughs, which formed a more local tier of administration than the county council. 
Many famous buildings and landmarks of London were constructed during the 19th century, including Trafalgar Square, Big Ben and the Houses of Parliament, the Royal Albert Hall, the Victoria and Albert Museum, and Tower Bridge. 20th century 1900 to 1939 London entered the 20th century at the height of its influence as the capital of one of the largest empires in history, but the new century was to bring many challenges. London's population continued to grow rapidly in the early decades of the century, and public transport was greatly expanded. A large tram network was constructed by the London County Council, through the LCC Tramways; the first motorbus service began in the 1900s. Improvements to London's overground and underground rail network, including large-scale electrification, were progressively carried out. During World War I, London experienced its first bombing raids, carried out by German zeppelin airships; these killed around 700 people and caused great terror, but were merely a foretaste of what was to come. The largest explosion in London occurred during World War I: the Silvertown explosion, when a munitions factory containing 50 tons of TNT exploded, killing 73 and injuring 400. The period between the two World Wars saw London's geographical extent growing more quickly than ever before or since. A preference for lower-density suburban housing, typically semi-detached, by Londoners seeking a more "rural" lifestyle, superseded Londoners' old predilection for terraced houses. This was facilitated not only by a continuing expansion of the rail network, including trams and the Underground, but also by slowly widening car ownership. London's suburbs expanded outside the boundaries of the County of London, into the neighbouring counties of Essex, Hertfordshire, Kent, Middlesex and Surrey. 
Like the rest of the country, London suffered severe unemployment during the Great Depression of the 1930s. In the East End during the 1930s, politically extreme parties of both right and left flourished. The Communist Party of Great Britain and the British Union of Fascists both gained serious support. Clashes between right and left culminated in the Battle of Cable Street in 1936. The population of London reached an all-time peak of 8.6 million in 1939. Large numbers of Jewish immigrants fleeing from Nazi Germany settled in London during the 1930s, mostly in the East End. Labour Party politician Herbert Morrison was a dominant figure in local government in the 1920s and 1930s. He became mayor of Hackney and a member of the London County Council in 1922, and for a while was Minister of Transport in Ramsay MacDonald's cabinet. When Labour gained power in London in 1934, Morrison unified the bus, tram and trolleybus services with the Underground through the creation of the London Passenger Transport Board (known as London Transport) in 1933. He led the effort to finance and build the new Waterloo Bridge. He designed the Metropolitan Green Belt around the suburbs and worked to clear slums, build schools, and reform public assistance. World War II During World War II, London, like many other British cities, suffered severe damage, being bombed extensively by the Luftwaffe as part of The Blitz. Before the raids began, hundreds of thousands of London's children were evacuated to the countryside to escape the bombing. Civilians took shelter from the air raids in Underground stations. The heaviest bombing took place during The Blitz, between 7 September 1940 and 10 May 1941. During this period, London was subjected to 71 separate raids, receiving over 18,000 tonnes of high explosive. One raid in December 1940, which became known as the Second Great Fire of London, saw a firestorm engulf much of the City of London and destroy many historic buildings. 
St Paul's Cathedral, however, remained unscathed; a photograph showing the Cathedral shrouded in smoke became a famous image of the war. Having failed to defeat Britain, Hitler turned his attention to the Eastern Front, and regular bombing raids ceased. They resumed on a smaller scale with the "Little Blitz" in early 1944. Towards the end of the war, in 1944–45, London again came under heavy attack by pilotless V-1 flying bombs and V-2 rockets, which were fired from Nazi-occupied Europe. These attacks only came to an end when their launch sites were captured by advancing Allied forces. London suffered severe damage and heavy casualties, the worst-hit part being the Docklands area. By the war's end, just under 30,000 Londoners had been killed by the bombing and over 50,000 seriously injured; tens of thousands of buildings were destroyed, and hundreds of thousands of people were made homeless. 1945–2000 Three years after the war, the 1948 Summer Olympics were held at the original Wembley Stadium, at a time when the city had barely recovered from the war. London's rebuilding was slow to begin. However, in 1951 the Festival of Britain was held, which marked a growing mood of optimism and forward-looking confidence. In the immediate postwar years, housing was a major issue in London, owing to the large amount of housing destroyed in the war. The authorities decided upon high-rise blocks of flats as the answer to housing shortages. During the 1950s and 1960s the skyline of London altered dramatically as tower blocks were erected, although these later proved unpopular. In a bid to reduce the number of people living in overcrowded housing, a policy was introduced of encouraging people to move into newly built new towns surrounding London. Through the 19th century and the first half of the 20th, Londoners used coal to heat their homes, which produced large amounts of smoke. 
In combination with climatic conditions, this often caused a characteristic smog, and London became known for its typical "London Fog", also known as "Pea Soupers". London was sometimes referred to as "The Smoke" because of this. This culminated in the disastrous Great Smog of 1952, which lasted for five days and killed over 4,000 people. In response, the Clean Air Act 1956 was passed, mandating the creation of "smokeless zones" where the use of "smokeless" fuels was required (this was at a time when most households still used open fires); the Act was effective. Starting in the mid-1960s, and partly as a result of the success of such UK musicians as the Beatles and The Rolling Stones, London became a centre for the worldwide youth culture, exemplified by the Swinging London subculture which made Carnaby Street a household name of youth fashion around the world. London's role as a trendsetter for youth fashion continued strongly in the 1980s during the new wave and punk eras and into the mid-1990s with the emergence of the Britpop era. From the 1950s onwards London became home to a large number of immigrants, largely from Commonwealth countries such as Jamaica, India, Bangladesh and Pakistan, which dramatically changed the face of London, turning it into one of the most diverse cities in Europe. However, the integration of the new immigrants was not always easy. Racial tensions emerged in events such as the Brixton Riots in the early 1980s. From the beginning of "The Troubles" in Northern Ireland in the early 1970s until the mid-1990s, London was subjected to repeated terrorist attacks by the Provisional IRA. The outward expansion of London was slowed by the war and by the introduction of the Metropolitan Green Belt. 
Because of this outward expansion, in 1965 the old County of London (which by now only covered part of the London conurbation) and the London County Council were abolished, and the much larger area of Greater London was established with a new Greater London Council (GLC) to administer it, along with 32 new London boroughs. Greater London's population declined steadily in the decades after World War II, from an estimated peak of 8.6 million in 1939 to around 6.8 million in the 1980s. However, it then began to increase again in the late 1980s, encouraged by strong economic performance and an increasingly positive image. London's traditional status as a major port declined dramatically in the post-war decades as the old Docklands could not accommodate large modern container ships. The principal ports for London moved downstream to the ports of Felixstowe and Tilbury. The Docklands area had become largely derelict by the 1980s, but was redeveloped into flats and offices from the mid-1980s onwards. The Thames Barrier was completed in the 1980s to protect London against tidal surges from the North Sea. In the early 1980s, political disputes between the GLC, run by Ken Livingstone, and the Conservative government of Margaret Thatcher led to the GLC's abolition in 1986, with most of its powers devolved to the London boroughs. This left London as the only large metropolis in the world without a central administration. In 2000, London-wide government was restored with the creation of the Greater London Authority (GLA) by Tony Blair's government, covering the area of Greater London. The new authority had similar powers to the old GLC, but was made up of a directly elected Mayor and a London Assembly. The first election took place on 4 May 2000, with Ken Livingstone comfortably regaining his previous post. London was recognised as one of the nine regions of England. In global perspective, it was emerging as a world city, widely compared to New York and Tokyo. 
21st century Around the start of the 21st century, London hosted the much-derided Millennium Dome at Greenwich, to mark the new century. Other Millennium projects were more successful. One was the largest observation wheel in the world, the "Millennium Wheel", or the London Eye, which was erected as a temporary structure but soon became a fixture, and draws four million visitors a year. The National Lottery also released a flood of funds for major enhancements to existing attractions, for example the roofing of the Great Court at the British Museum. The London Plan, published by the Mayor of London in 2004, estimated that the population would reach 8.1 million by 2016, and continue to rise thereafter. This was reflected in a move towards denser, more urban styles of building, including a greatly increased number of tall buildings, and proposals for major enhancements to the public transport network. However, funding for projects such as Crossrail remained a struggle. On 6 July 2005 London won the right to host the 2012 Olympics and Paralympics, making it the first city to host the modern games three times. However, celebrations were cut short the following day when the city was rocked by a series of terrorist attacks. More than 50 people were killed and 750 injured in three bombings on London Underground trains and a fourth on a double-decker bus near King's Cross. London was the starting point for the countrywide riots of August 2011, when thousands of people rioted in several city boroughs and in towns across England. In 2011, London's population surpassed 8 million for the first time in decades. White British formed less than half of the population for the first time. Public opinion was ambivalent in the lead-up to the 2012 Summer Olympics in the city, though sentiment swung strongly in the Games' favour following a successful opening ceremony and when the anticipated organisational and transport problems never occurred. 
Population Historical sites of note Alexandra Palace Battersea Power Station Buckingham Palace Croydon Airport Hyde Park Monument to the Great Fire of London Palace of Westminster Parliament Hill Royal Observatory, Greenwich St Paul's Cathedral Tower Bridge Tower of London Tyburn Vauxhall station Waterloo International station Westminster Abbey See also Ale silver Economy of London Culture of London Fortifications of London Geography of London Geology of London History of local government in London Timeline of London history Notes Further reading Ackroyd, Peter. London: A Biography (2009) (First chapter.) Ball, Michael, and David T. Sunderland. Economic history of London, 1800–1914 (Routledge, 2002) Billings, Malcolm (1994), London: A Companion to Its History and Archaeology, Bucholz, Robert O., and Joseph P. Ward. London: A Social and Cultural History, 1550–1750 (Cambridge University Press; 2012) 526 pages Clark, Greg. The Making of a World City: London 1991 to 2021 (John Wiley & Sons, 2014) Emerson, Charles. 1913: In Search of the World Before the Great War (2013) compares London to 20 major world cities on the eve of World War I; pp 15 to 36, 431–49. Inwood, Stephen. A History of London (1998) Jones, Robert Wynn. The Flower of All Cities: The History of London from Earliest Times to the Great Fire (Amberley Publishing, 2019). Mort, Frank, and Miles Ogborn. "Transforming Metropolitan London, 1750–1960". Journal of British Studies (2004) 43#1 pp: 1–14. Naismith, Rory, Citadel of the Saxons: The Rise of Early London (I.B.Tauris; 2018), Porter, Roy. History of London (1995), by a leading scholar Weightman, Gavin, and Stephen Humphries. The Making of Modern London, 1914–1939 (Sidgwick & Jackson, 1984) White, Jerry. London in the 20th Century: A City and Its People (2001) 544 pages; Social history of people, neighborhoods, work, culture, power. Excerpts White, Jerry. 
London in the 19th Century: 'A Human Awful Wonder of God' (2008); Social history of people, neighborhoods, work, culture, power. Excerpt and text search White, Jerry. London in the Eighteenth Century: A Great and Monstrous Thing (2013) 624 pages; Excerpt and text search 480pp; Social history of people, neighborhoods, work, culture, power. Environment Allen, Michelle Elizabeth. Cleansing the city: sanitary geographies in Victorian London (2008). Brimblecombe, Peter. The Big Smoke: A History of Air Pollution in London Since Medieval Times (Methuen, 1987) Ciecieznski, N. J. "The Stench of Disease: Public Health and the Environment in Late-Medieval English towns and cities". Health, Culture and Society (2013) 4#1 pp: 91–104. Field, Jacob F. London, Londoners and the Great Fire of 1666: Disaster and Recovery (2018) Fowler, James. London Transport: A Hybrid in History 1905-48 (Emerald Group Publishing, 2019). Hanlon, W. Walker. Pollution and Mortality in the 19th Century (UCLA and NBER, 2015) online Jackson, Lee. Dirty Old London: The Victorian Fight Against Filth (2014) Jørgensen, Dolly. "'All Good Rule of the Citee': Sanitation and Civic Government in England, 1400–1600". Journal of Urban History (2010). online Landers, John. Death and the metropolis: studies in the demographic history of London, 1670–1830 (1993). Luckin, Bill, and Peter Thorsheim, eds. A Mighty Capital under Threat: The Environmental History of London, 1800-2000 (U of Pittsburgh Press, 2020) online review. Mosley, Stephen. "'A Network of Trust': Measuring and Monitoring Air Pollution in British Cities, 1912–1960". Environment and History (2009) 15#3 pp: 273–302. Thorsheim, Peter. Inventing Pollution: Coal, Smoke, and Culture in Britain since 1800 (2009) Historiography Feldman, David, and Gareth Stedman Jones, eds. Metropolis, London: Histories and Representations since 1800 (Routledge Kegan & Paul, 1989) Older histories George Walter Thornbury. 
Old and New London: A Narrative of its History, its People, and its Places (Cassell, Petter, & Galpin, 1873) - Vol. 1, Vol. 2, Vol. 3, Vol. 4, Vol. 5, Vol. 6. Walter Besant. London (Harper & Bros., 1892) (thematic bibliography about London) + v.2, v.3, Index London – Article in the 1908 Catholic Encyclopædia Archival and academic digital projects A Chronicle of London from 1089 to 1483 written in the fifteenth century Roman London - "In their own words" (PDF) A literary companion to the prehistory and archæology of London London Lives 1690-1800 - A digital archive with personal records from London during the 18th century Exploring 20th-century London – Explore London's history, culture and religions during the 20th century The Victorian London Collage - The London Picture Archive External links Museum of London London History – From Britannia.com The Growth of London 1666–1799 Maritime London
History of astronomy
Astronomy is the oldest of the natural sciences, dating back to antiquity, with its origins in the religious, mythological, cosmological, calendrical, and astrological beliefs and practices of prehistory: vestiges of these are still found in astrology, a discipline long interwoven with public and governmental astronomy. Astrology and astronomy were not completely separated in Europe (see astrology and astronomy) even during the Copernican Revolution, which began in 1543. In some cultures, astronomical data was used for astrological prognostication. The study of astronomy has received financial and social support from many institutions, especially the Church, which was its largest source of support from the 12th century to the Enlightenment. Early history Early cultures identified celestial objects with gods and spirits. They related these objects (and their movements) to phenomena such as rain, drought, seasons, and tides. It is generally believed that the first astronomers were priests, and that they understood celestial objects and events to be manifestations of the divine, hence early astronomy's connection to what is now called astrology. A 32,500-year-old carved mammoth-ivory tusk could contain the oldest known star chart (resembling the constellation Orion). It has also been suggested that drawings on the walls of the Lascaux caves in France, dating from 33,000 to 10,000 years ago, could be a graphical representation of the Pleiades, the Summer Triangle, and the Northern Crown. Ancient structures with possibly astronomical alignments (such as Stonehenge) probably fulfilled astronomical, religious, and social functions. Calendars of the world have often been set by observations of the Sun and Moon (marking the day, month and year), and were important to agricultural societies, in which the harvest depended on planting at the correct time of year, and for which the nearly full moon was the only lighting for night-time travel into city markets. 
The common modern calendar is based on the Roman calendar. Although originally a lunar calendar, it broke the traditional link of the month to the phases of the Moon and divided the year into twelve almost-equal months, which mostly alternated between thirty and thirty-one days. Julius Caesar instigated calendar reform in 46 BCE and introduced what is now called the Julian calendar, based upon the 365¼-day year length originally proposed by the 4th-century BCE Greek astronomer Callippus. Prehistoric Europe Since 1990, our understanding of prehistoric Europeans has been radically changed by discoveries of ancient astronomical artifacts throughout Europe. The artifacts demonstrate that Neolithic and Bronze Age Europeans had a sophisticated knowledge of mathematics and astronomy. Among the discoveries are: Paleolithic archaeologist Alexander Marshack put forward a theory in 1972 that bone sticks from sites in Africa and Europe, dating from possibly as long ago as 35,000 BCE, could be marked in ways that tracked the Moon's phases, an interpretation that has met with criticism. The Warren Field calendar in the Dee River valley of Scotland's Aberdeenshire. First excavated in 2004 but only in 2013 revealed as a find of huge significance, it is to date the world's oldest known calendar, created around 8000 BC and predating all other calendars by some 5,000 years. The calendar takes the form of an early Mesolithic monument containing a series of 12 pits which appear to help the observer track lunar months by mimicking the phases of the Moon. It also aligns to sunrise at the winter solstice, thus coordinating the solar year with the lunar cycles. The monument had been maintained and periodically reshaped, perhaps up to hundreds of times, in response to shifting solar/lunar cycles, over the course of 6,000 years, until the calendar fell out of use around 4,000 years ago. The Goseck circle is located in Germany and belongs to the Linear Pottery culture. 
The site was first discovered in 1991, but its significance became clear only after results from archaeological digs became available in 2004. The site is one of hundreds of similar circular enclosures built in a region encompassing Austria, Germany, and the Czech Republic during a 200-year period starting shortly after 5000 BC. The Nebra sky disc is a Bronze Age bronze disc that was buried in Germany, not far from the Goseck circle, around 1600 BC. It measures about 30 cm in diameter, with a mass of 2.2 kg, and displays a blue-green patina (from oxidation) inlaid with gold symbols. Found by archaeological looters in 1999 and recovered in Switzerland in 2002, it was soon recognized as a spectacular discovery, among the most important of the 20th century. Investigations revealed that the object had been in use around 400 years before burial (2000 BC), but that its use had been forgotten by the time of burial. The inlaid gold depicted the full moon, a crescent moon about 4 or 5 days old, and the Pleiades star cluster in a specific arrangement forming the earliest known depiction of celestial phenomena. Twelve lunar months pass in 354 days, requiring a calendar to insert a leap month every two or three years in order to keep synchronized with the solar year's seasons (making it lunisolar). The earliest known descriptions of this coordination were recorded by the Babylonians in the 6th or 7th century BC, over one thousand years later. Those descriptions verified ancient knowledge of the Nebra sky disc's celestial depiction as the precise arrangement needed to judge when to insert the intercalary month into a lunisolar calendar, making it an astronomical clock for regulating such a calendar a thousand or more years before any other known method. The Kokino site, discovered in 2001, sits atop an extinct volcanic cone at an elevation of , occupying about 0.5 hectares overlooking the surrounding countryside in North Macedonia. 
A Bronze Age astronomical observatory was constructed there around 1900 BC and continuously served the nearby community that lived there until about 700 BC. The central space was used to observe the rising of the Sun and full moon. Three markings locate sunrise at the summer and winter solstices and at the two equinoxes. Four more give the minimum and maximum declinations of the full moon: in summer, and in winter. Two measure the lengths of lunar months. Together, they reconcile solar and lunar cycles in marking the 235 lunations that occur during 19 solar years, regulating a lunar calendar. On a platform separate from the central space, at lower elevation, four stone seats (thrones) were made in north-south alignment, together with a trench marker cut in the eastern wall. This marker allows the rising Sun's light to fall on only the second throne, at midsummer (about July 31). It was used for ritual ceremony linking the ruler to the local sun god, and also marked the end of the growing season and time for harvest. Golden hats of Germany, France and Switzerland dating from 1400–800 BC are associated with the Bronze Age Urnfield culture. The Golden hats are decorated with a spiral motif of the Sun and the Moon. They were probably a kind of calendar used to calibrate between the lunar and solar calendars. Modern scholarship has demonstrated that the ornamentation of the gold leaf cones of the Schifferstadt type, to which the Berlin Gold Hat example belongs, represent systematic sequences in terms of number and types of ornaments per band. A detailed study of the Berlin example, which is the only fully preserved one, showed that the symbols probably represent a lunisolar calendar. The object would have permitted the determination of dates or periods in both lunar and solar calendars. 
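The intercalation arithmetic these monuments embody can be sketched with modern mean values for the synodic month and tropical year (the constants below are modern figures used purely for illustration, not values known to Bronze Age observers):

```python
# Rough arithmetic behind lunisolar intercalation (illustrative values).
SYNODIC_MONTH = 29.530589   # days, mean interval from new moon to new moon
TROPICAL_YEAR = 365.242190  # days

lunar_year = 12 * SYNODIC_MONTH            # ~354.37 days
annual_drift = TROPICAL_YEAR - lunar_year  # ~10.9 days the lunar year falls short

# One extra (intercalary) month is needed roughly every
years_per_leap_month = SYNODIC_MONTH / annual_drift  # ~2.7 -> "every two or three years"

# The 19-year cycle marked at Kokino: 235 lunations almost exactly equal
# 19 solar years, so inserting 7 leap months per 19 years keeps the
# lunar and solar calendars aligned.
metonic_error_days = 235 * SYNODIC_MONTH - 19 * TROPICAL_YEAR  # under 0.1 days
```

The near-equality of 235 lunations and 19 solar years is why a fixed 19-year pattern of leap months, rather than continual observation, suffices to regulate a lunisolar calendar.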
Ancient times Mesopotamia The origins of Western astronomy can be found in Mesopotamia, the 'land between the rivers' Tigris and Euphrates, where the ancient kingdoms of Sumer, Assyria, and Babylonia were located. A form of writing known as cuneiform emerged among the Sumerians around 3500–3000 BC. Our knowledge of Sumerian astronomy is indirect, via the earliest Babylonian star catalogues dating from about 1200 BC. The fact that many star names appear in Sumerian suggests a continuity reaching into the Early Bronze Age. Astral theology, which gave planetary gods an important role in Mesopotamian mythology and religion, began with the Sumerians. They also used a sexagesimal (base 60) place-value number system, which simplified the task of recording very large and very small numbers. The modern practice of dividing a circle into 360 degrees, or an hour into 60 minutes, began with the Sumerians. For more information, see the articles on Babylonian numerals and mathematics. Classical sources frequently use the term Chaldeans for the astronomers of Mesopotamia, who were, in reality, priest-scribes specializing in astrology and other forms of divination. The first evidence of recognition that astronomical phenomena are periodic and of the application of mathematics to their prediction is Babylonian. Tablets dating back to the Old Babylonian period document the application of mathematics to the variation in the length of daylight over a solar year. Centuries of Babylonian observations of celestial phenomena are recorded in the series of cuneiform tablets known as the Enūma Anu Enlil. The oldest significant astronomical text that we possess is Tablet 63 of the Enūma Anu Enlil, the Venus tablet of Ammi-saduqa, which lists the first and last visible risings of Venus over a period of about 21 years and is the earliest evidence that the phenomena of a planet were recognized as periodic. 
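The sexagesimal place-value idea described above, which survives in our division of the circle into degrees, minutes, and seconds, can be illustrated with a short sketch (the function name and interface are modern conveniences, not anything historical):

```python
def to_sexagesimal(value, places=3):
    """Split a non-negative number into an integer part plus base-60
    fractional 'digits' -- the positional principle of Sumerian and
    Babylonian numeration, still used for degrees/minutes/seconds."""
    whole = int(value)
    frac = value - whole
    digits = []
    for _ in range(places):
        frac *= 60          # shift one sexagesimal place
        d = int(frac)
        digits.append(d)
        frac -= d
    return whole, digits

# 30.5 degrees -> 30 degrees, 30 minutes, 0 seconds
```

A usage example: `to_sexagesimal(30.5, 2)` returns `(30, [30, 0])`, i.e. 30° 30′ 00″. The same positional principle lets very large and very small quantities be written compactly, which is what made the system so useful for astronomical tables.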
The MUL.APIN contains catalogues of stars and constellations as well as schemes for predicting heliacal risings and the settings of the planets, lengths of daylight measured by a water clock, gnomon, shadows, and intercalations. The Babylonian GU text arranges stars in 'strings' that lie along declination circles and thus measure right-ascensions or time-intervals, and also employs the stars of the zenith, which are also separated by given right-ascensional differences. A significant increase in the quality and frequency of Babylonian observations appeared during the reign of Nabonassar (747–733 BC). The systematic records of ominous phenomena in Babylonian astronomical diaries that began at this time allowed for the discovery of a repeating 18-year cycle of lunar eclipses, for example. The Greek astronomer Ptolemy later used Nabonassar's reign to fix the beginning of an era, since he felt that the earliest usable observations began at this time. The last stages in the development of Babylonian astronomy took place during the time of the Seleucid Empire (323–60 BC). In the 3rd century BC, astronomers began to use "goal-year texts" to predict the motions of the planets. These texts compiled records of past observations to find repeating occurrences of ominous phenomena for each planet. About the same time, or shortly afterwards, astronomers created mathematical models that allowed them to predict these phenomena directly, without consulting past records. A notable Babylonian astronomer from this time was Seleucus of Seleucia, who was a supporter of the heliocentric model. Babylonian astronomy was the basis for much of what was done in Greek and Hellenistic astronomy, in classical Indian astronomy, in Sassanian Iran, in Byzantium, in Syria, in Islamic astronomy, in Central Asia, and in Western Europe. India Astronomy in the Indian subcontinent dates back to the period of the Indus Valley Civilization during the 3rd millennium BCE, when it was used to create calendars. 
As the Indus Valley civilization did not leave behind written documents, the oldest extant Indian astronomical text is the Vedanga Jyotisha, dating from the Vedic period. Vedanga Jyotisha describes rules for tracking the motions of the Sun and the Moon for the purposes of ritual. During the 6th century, astronomy was influenced by the Greek and Byzantine astronomical traditions. Aryabhata (476–550), in his magnum opus Aryabhatiya (499), propounded a computational system based on a planetary model in which the Earth was taken to be spinning on its axis and the periods of the planets were given with respect to the Sun. He accurately calculated many astronomical constants, such as the periods of the planets, times of the solar and lunar eclipses, and the instantaneous motion of the Moon. Early followers of Aryabhata's model included Varahamihira, Brahmagupta, and Bhaskara II. Astronomy was advanced during the Shunga Empire and many star catalogues were produced during this time. The Shunga period is known as the "Golden age of astronomy in India". It saw the development of calculations for the motions and places of various planets, their rising and setting, conjunctions, and the calculation of eclipses. Indian astronomers by the 6th century believed that comets were celestial bodies that re-appeared periodically. This was the view expressed in the 6th century by the astronomers Varahamihira and Bhadrabahu, and the 10th-century astronomer Bhattotpala listed the names and estimated periods of certain comets, but it is unfortunately not known how these figures were calculated or how accurate they were. Bhāskara II (1114–1185) was the head of the astronomical observatory at Ujjain, continuing the mathematical tradition of Brahmagupta. He wrote the Siddhantasiromani which consists of two parts: Goladhyaya (sphere) and Grahaganita (mathematics of the planets). He also calculated the time taken for the Earth to orbit the Sun to 9 decimal places. 
The Buddhist University of Nalanda at the time offered formal courses in astronomical studies. Other important astronomers from India include Madhava of Sangamagrama, Nilakantha Somayaji and Jyeshtadeva, who were members of the Kerala school of astronomy and mathematics from the 14th century to the 16th century. Nilakantha Somayaji, in his Aryabhatiyabhasya, a commentary on Aryabhata's Aryabhatiya, developed his own computational system for a partially heliocentric planetary model, in which Mercury, Venus, Mars, Jupiter and Saturn orbit the Sun, which in turn orbits the Earth, similar to the Tychonic system later proposed by Tycho Brahe in the late 16th century. Nilakantha's system, however, was mathematically more efficient than the Tychonic system, due to correctly taking into account the equation of the centre and latitudinal motion of Mercury and Venus. Most astronomers of the Kerala school of astronomy and mathematics who followed him accepted his planetary model. Greece and Hellenistic world The Ancient Greeks developed astronomy, which they treated as a branch of mathematics, to a highly sophisticated level. The first geometrical, three-dimensional models to explain the apparent motion of the planets were developed in the 4th century BC by Eudoxus of Cnidus and Callippus of Cyzicus. Their models were based on nested homocentric spheres centered upon the Earth. Their younger contemporary Heraclides Ponticus proposed that the Earth rotates around its axis. A different approach to celestial phenomena was taken by natural philosophers such as Plato and Aristotle. They were less concerned with developing mathematical predictive models than with developing an explanation of the reasons for the motions of the Cosmos. In his Timaeus, Plato described the universe as a spherical body divided into circles carrying the planets and governed according to harmonic intervals by a world soul. 
Aristotle, drawing on the mathematical model of Eudoxus, proposed that the universe was made of a complex system of concentric spheres, whose circular motions combined to carry the planets around the Earth. This basic cosmological model prevailed, in various forms, until the 16th century. In the 3rd century BC Aristarchus of Samos was the first to suggest a heliocentric system, although only fragmentary descriptions of his idea survive. Eratosthenes estimated the circumference of the Earth with great accuracy. Greek geometrical astronomy developed away from the model of concentric spheres to employ more complex models, in which an eccentric circle would carry around a smaller circle, called an epicycle, which in turn carried around a planet. The first such model is attributed to Apollonius of Perga, and further developments in it were carried out in the 2nd century BC by Hipparchus of Nicaea. Hipparchus made a number of other contributions, including the first measurement of precession and the compilation of the first star catalog, in which he proposed our modern system of apparent magnitudes. The Antikythera mechanism, an ancient Greek device for calculating the movements of the Sun and the Moon, and possibly the planets, dates from about 150–100 BC and was the first ancestor of an astronomical computer. It was discovered in an ancient shipwreck off the Greek island of Antikythera, between Kythera and Crete. The device became famous for its use of a differential gear, previously believed to have been invented in the 16th century, and the miniaturization and complexity of its parts, comparable to a clock made in the 18th century. The original mechanism is displayed in the Bronze collection of the National Archaeological Museum of Athens, accompanied by a replica. 
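The deferent-and-epicycle construction described above is just the sum of two circular motions, which can be sketched numerically (all parameter names and values here are illustrative choices, not figures from any ancient model):

```python
import math

def epicycle_position(t, R=1.0, omega=1.0, r=0.3, Omega=5.0):
    """Planet position in a simple deferent-and-epicycle model:
    the epicycle's centre rides a deferent circle of radius R
    (angular speed omega) around the Earth at the origin, while the
    planet rides the epicycle of radius r (angular speed Omega)."""
    cx, cy = R * math.cos(omega * t), R * math.sin(omega * t)  # epicycle centre
    x = cx + r * math.cos(Omega * t)                           # planet on epicycle
    y = cy + r * math.sin(Omega * t)
    return x, y
```

At `t = 0` the planet sits at `(R + r, 0)`. For suitable ratios of the two radii and angular speeds, the combined motion traces the retrograde loops in a planet's apparent path that the epicycle model was designed to reproduce.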
Depending on the historian's viewpoint, the acme or corruption of physical Greek astronomy is seen with Ptolemy of Alexandria, who wrote the classic comprehensive presentation of geocentric astronomy, the Megale Syntaxis (Great Synthesis), better known by its Arabic title Almagest, which had a lasting effect on astronomy up to the Renaissance. In his Planetary Hypotheses, Ptolemy ventured into the realm of cosmology, developing a physical model of his geometric system, in a universe many times smaller than the more realistic conception of Aristarchus of Samos four centuries earlier. Egypt The precise orientation of the Egyptian pyramids affords a lasting demonstration of the high degree of technical skill in watching the heavens attained in the 3rd millennium BC. It has been shown that the pyramids were aligned towards the pole star, which, because of the precession of the equinoxes, was at that time Thuban, a faint star in the constellation of Draco. Evaluation of the site of the temple of Amun-Re at Karnak, taking into account the change over time of the obliquity of the ecliptic, has shown that the Great Temple was aligned on the rising of the midwinter Sun. The length of the corridor down which sunlight would travel would have limited illumination at other times of the year. The Egyptians also tracked the position of Sirius (the Dog Star), whom they believed to be Anubis, their jackal-headed god, moving through the heavens. Its position was critical to their civilisation: when it rose heliacally in the east before sunrise, it foretold the flooding of the Nile. This is also the origin of the phrase 'dog days of summer'. Astronomy played a considerable part in religious matters for fixing the dates of festivals and determining the hours of the night. The titles of several temple books are preserved recording the movements and phases of the Sun, Moon and stars.
The rising of Sirius (Egyptian: Sopdet, Greek: Sothis) at the beginning of the inundation was a particularly important point to fix in the yearly calendar. Writing in the Roman era, Clement of Alexandria gives some idea of the importance of astronomical observations to the sacred rites: And after the Singer advances the Astrologer (ὡροσκόπος), with a horologium (ὡρολόγιον) in his hand, and a palm (φοίνιξ), the symbols of astrology. He must know by heart the Hermetic astrological books, which are four in number. Of these, one is about the arrangement of the fixed stars that are visible; one on the positions of the Sun and Moon and five planets; one on the conjunctions and phases of the Sun and Moon; and one concerns their risings. The Astrologer's instruments (horologium and palm) are a plumb line and sighting instrument. They have been identified with two inscribed objects in the Berlin Museum; a short handle from which a plumb line was hung, and a palm branch with a sight-slit in the broader end. The latter was held close to the eye, the former in the other hand, perhaps at arm's length. The "Hermetic" books which Clement refers to are the Egyptian theological texts, which probably have nothing to do with Hellenistic Hermetism. From the tables of stars on the ceiling of the tombs of Rameses VI and Rameses IX it seems that for fixing the hours of the night a man seated on the ground faced the Astrologer in such a position that the line of observation of the pole star passed over the middle of his head. On the different days of the year each hour was determined by a fixed star culminating or nearly culminating in it, and the position of these stars at the time is given in the tables as in the centre, on the left eye, on the right shoulder, etc. According to the texts, in founding or rebuilding temples the north axis was determined by the same apparatus, and we may conclude that it was the usual one for astronomical observations. 
In careful hands it might give results of a high degree of accuracy. China The astronomy of East Asia began in China, where the system of solar terms was completed during the Warring States period; Chinese astronomical knowledge was later introduced throughout East Asia. Astronomy in China has a long history. Detailed records of astronomical observations were kept from about the 6th century BC, until the introduction of Western astronomy and the telescope in the 17th century. Chinese astronomers were able to precisely predict eclipses. Much of early Chinese astronomy was for the purpose of timekeeping. The Chinese used a lunisolar calendar, but because the cycles of the Sun and the Moon are different, astronomers often prepared new calendars and made observations for that purpose. Astrological divination was also an important part of astronomy. Astronomers took careful note of "guest stars" (Chinese: 客星; pinyin: kèxīng; lit. 'guest star') which suddenly appeared among the fixed stars. They were the first to record a supernova, in the Astrological Annals of the Houhanshu in 185 AD. Also, the supernova that created the Crab Nebula in 1054 is an example of a "guest star" observed by Chinese astronomers, although it was not recorded by their European contemporaries. Ancient astronomical records of phenomena like supernovae and comets are sometimes used in modern astronomical studies. The world's first star catalogue was made by Gan De, a Chinese astronomer, in the 4th century BC. Mesoamerica Maya astronomical codices include detailed tables for calculating phases of the Moon, the recurrence of eclipses, and the appearance and disappearance of Venus as morning and evening star. The Maya based their calendrics on the carefully calculated cycles of the Pleiades, the Sun, the Moon, Venus, Jupiter, Saturn and Mars; they also had a precise description of eclipses, as depicted in the Dresden Codex, as well as of the ecliptic or zodiac, and the Milky Way was crucial in their cosmology.
A number of important Maya structures are believed to have been oriented toward the extreme risings and settings of Venus. To the ancient Maya, Venus was the patron of war and many recorded battles are believed to have been timed to the motions of this planet. Mars is also mentioned in preserved astronomical codices and early mythology. Although the Maya calendar was not tied to the Sun, John Teeple has proposed that the Maya calculated the solar year to somewhat greater accuracy than the Gregorian calendar. Both astronomy and an intricate numerological scheme for the measurement of time were vitally important components of Maya religion. Middle Ages Middle East The Arabic and the Persian world under Islam had become highly cultured, and many important works of knowledge from Greek astronomy and Indian astronomy and Persian astronomy were translated into Arabic, used and stored in libraries throughout the area. An important contribution by Islamic astronomers was their emphasis on observational astronomy. This led to the emergence of the first astronomical observatories in the Muslim world by the early 9th century. Zij star catalogues were produced at these observatories. In the 10th century, Abd al-Rahman al-Sufi (Azophi) carried out observations on the stars and described their positions, magnitudes, brightness, and colour and drawings for each constellation in his Book of Fixed Stars. He also gave the first descriptions and pictures of "A Little Cloud" now known as the Andromeda Galaxy. He mentions it as lying before the mouth of a Big Fish, an Arabic constellation. This "cloud" was apparently commonly known to the Isfahan astronomers, very probably before 905 AD. The first recorded mention of the Large Magellanic Cloud was also given by al-Sufi. In 1006, Ali ibn Ridwan observed SN 1006, the brightest supernova in recorded history, and left a detailed description of the temporary star. 
In the late 10th century, a huge observatory was built near Tehran, Iran, by the astronomer Abu-Mahmud al-Khujandi who observed a series of meridian transits of the Sun, which allowed him to calculate the tilt of the Earth's axis relative to the Sun. He noted that measurements by earlier (Indian, then Greek) astronomers had found higher values for this angle, possible evidence that the axial tilt is not constant but was in fact decreasing. In 11th-century Persia, Omar Khayyám compiled many tables and performed a reformation of the calendar that was more accurate than the Julian and came close to the Gregorian. Other Muslim advances in astronomy included the collection and correction of previous astronomical data, resolving significant problems in the Ptolemaic model, the development of the universal latitude-independent astrolabe by Arzachel, the invention of numerous other astronomical instruments, Ja'far Muhammad ibn Mūsā ibn Shākir's belief that the heavenly bodies and celestial spheres were subject to the same physical laws as Earth, and the introduction of empirical testing by Ibn al-Shatir, who produced the first model of lunar motion which matched physical observations. Natural philosophy (particularly Aristotelian physics) was separated from astronomy by Ibn al-Haytham (Alhazen) in the 11th century, by Ibn al-Shatir in the 14th century, and Qushji in the 15th century. Western Europe After the significant contributions of Greek scholars to the development of astronomy, it entered a relatively static era in Western Europe from the Roman era through the 12th century. This lack of progress has led some astronomers to assert that nothing happened in Western European astronomy during the Middle Ages. Recent investigations, however, have revealed a more complex picture of the study and teaching of astronomy in the period from the 4th to the 16th centuries. 
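The relative accuracy of the calendar reforms mentioned above can be made concrete by comparing each calendar's mean year length against the tropical year. The Jalali (Khayyám) figure below is the commonly quoted value, assumed here for illustration rather than taken from the text:

```python
TROPICAL_YEAR = 365.24219  # mean tropical year in days (approximate)

# Mean calendar year lengths in days; the Jalali value is an assumption.
calendars = {
    "Julian": 365.25,
    "Gregorian": 365.2425,
    "Jalali (Khayyam)": 365.2422,
}

for name, length in calendars.items():
    drift = abs(length - TROPICAL_YEAR) * 1000  # days of drift per millennium
    print(f"{name}: ~{drift:.2f} days of drift per 1000 years")
```

On these figures the Julian calendar drifts by roughly a week per millennium, while the Gregorian and Jalali calendars drift by well under a day.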
Western Europe entered the Middle Ages with great difficulties that affected the continent's intellectual production. The advanced astronomical treatises of classical antiquity were written in Greek, and with the decline of knowledge of that language, only simplified summaries and practical texts were available for study. The most influential writers to pass on this ancient tradition in Latin were Macrobius, Pliny, Martianus Capella, and Calcidius. In the 6th century Bishop Gregory of Tours noted that he had learned his astronomy from reading Martianus Capella, and went on to employ this rudimentary astronomy to describe a method by which monks could determine the time of prayer at night by watching the stars. In the 7th century the English monk Bede of Jarrow published an influential text, On the Reckoning of Time, providing churchmen with the practical astronomical knowledge needed to compute the proper date of Easter using a procedure called the computus. This text remained an important element of the education of clergy from the 7th century until well after the rise of the Universities in the 12th century. The range of surviving ancient Roman writings on astronomy and the teachings of Bede and his followers began to be studied in earnest during the revival of learning sponsored by the emperor Charlemagne. By the 9th century rudimentary techniques for calculating the position of the planets were circulating in Western Europe; medieval scholars recognized their flaws, but texts describing these techniques continued to be copied, reflecting an interest in the motions of the planets and in their astrological significance. Building on this astronomical background, in the 10th century European scholars such as Gerbert of Aurillac began to travel to Spain and Sicily to seek out learning which they had heard existed in the Arabic-speaking world. 
There they first encountered various practical astronomical techniques concerning the calendar and timekeeping, most notably those dealing with the astrolabe. Soon scholars such as Hermann of Reichenau were writing texts in Latin on the uses and construction of the astrolabe, and others, such as Walcher of Malvern, were using the astrolabe to observe the time of eclipses in order to test the validity of computistical tables. By the 12th century, scholars were traveling to Spain and Sicily to seek out more advanced astronomical and astrological texts, which they translated into Latin from Arabic and Greek to further enrich the astronomical knowledge of Western Europe. The arrival of these new texts coincided with the rise of the universities in medieval Europe, in which they soon found a home. Reflecting the introduction of astronomy into the universities, John of Sacrobosco wrote a series of influential introductory astronomy textbooks: the Sphere, a Computus, a text on the Quadrant, and another on Calculation. In the 14th century, Nicole Oresme, later bishop of Lisieux, showed that neither the scriptural texts nor the physical arguments advanced against the movement of the Earth were demonstrative, and adduced the argument of simplicity for the theory that the Earth moves, and not the heavens. However, he concluded "everyone maintains, and I think myself, that the heavens do move and not the earth: For God hath established the world which shall not be moved." In the 15th century, Cardinal Nicholas of Cusa suggested in some of his scientific writings that the Earth revolved around the Sun, and that each star is itself a distant sun. Renaissance and Early Modern Europe Copernican Revolution During the Renaissance, astronomy began to undergo a revolution in thought known as the Copernican Revolution, which takes its name from the astronomer Nicolaus Copernicus, who proposed a heliocentric system in which the planets revolve around the Sun rather than the Earth.
His De revolutionibus orbium coelestium was published in 1543. While in the long term this was a very controversial claim, at first it attracted only minor controversy. The theory became the dominant view because many figures, most notably Galileo Galilei, Johannes Kepler and Isaac Newton, championed and improved upon the work. Other figures also aided the new model despite not believing the overall theory, such as Tycho Brahe, with his well-known observations. Brahe, a Danish noble, was an essential astronomer in this period. He came on the astronomical scene with the publication of De nova stella, in which he disproved conventional wisdom on the supernova SN 1572 (as bright as Venus at its peak, SN 1572 later became invisible to the naked eye, disproving the Aristotelian doctrine of the immutability of the heavens). He also created the Tychonic system, in which the Sun, the Moon and the stars revolve around the Earth, while the other five planets revolve around the Sun. This system blended the mathematical benefits of the Copernican system with the "physical benefits" of the Ptolemaic system. It was one of the systems people believed in when they did not accept heliocentrism, but could no longer accept the Ptolemaic system. He is best known for his highly accurate observations of the stars and the solar system. Later he moved to Prague and continued his work. In Prague he was at work on the Rudolphine Tables, which were not finished until after his death. The Rudolphine Tables were a star catalogue designed to be more accurate than either the Alfonsine Tables, made in the 1300s, or the Prutenic Tables, which were inaccurate. He was assisted at this time by Johannes Kepler, who would later use Brahe's observations to finish his works and to develop his own theories. After the death of Brahe, Kepler was deemed his successor and was given the job of completing Brahe's uncompleted works, like the Rudolphine Tables.
He completed the Rudolphine Tables in 1624, although they were not published for several years. Like many other figures of this era, he was subject to religious and political troubles, like the Thirty Years' War, which led to chaos that almost destroyed some of his works. Kepler was, however, the first to attempt to derive mathematical predictions of celestial motions from assumed physical causes. He discovered the three laws of planetary motion that now carry his name:

1. The orbit of a planet is an ellipse with the Sun at one of the two foci.
2. A line segment joining a planet and the Sun sweeps out equal areas during equal intervals of time.
3. The square of the orbital period of a planet is proportional to the cube of the semi-major axis of its orbit.

With these laws, he managed to improve upon the existing heliocentric model. The first two were published in 1609. Kepler's contributions improved upon the overall system, giving it more credibility because it adequately explained events and yielded more reliable predictions. Before this, the Copernican model was just as unreliable as the Ptolemaic model. This improvement came because Kepler realized the orbits were not perfect circles, but ellipses.

Galileo Galilei was among the first to use a telescope to observe the sky, doing so after constructing a 20x refracting telescope. He discovered the four largest moons of Jupiter in 1610, which are now collectively known as the Galilean moons, in his honor. This discovery was the first known observation of satellites orbiting another planet. He also found that our Moon had craters, observed (and correctly explained) sunspots, and saw that Venus exhibited a full set of phases resembling lunar phases. Galileo argued that these facts demonstrated incompatibility with the Ptolemaic model, which could not explain the phenomena and would even contradict them.
With the moons it demonstrated that the Earth does not have to have everything orbiting it and that other parts of the Solar System could orbit another object, such as the Earth orbiting the Sun. In the Ptolemaic system the celestial bodies were supposed to be perfect, so such objects should not have craters or sunspots. The phases of Venus could only happen if Venus' orbit is inside Earth's orbit, which could not be the case if the Earth was the center. He, as the most famous example, had to face challenges from church officials, more specifically the Roman Inquisition. They accused him of heresy because these beliefs went against the teachings of the Roman Catholic Church and challenged the Catholic Church's authority when it was at its weakest. While he was able to avoid punishment for a little while, he was eventually tried and pleaded guilty to heresy in 1633. This came at some expense: his book was banned, and he was put under house arrest until he died in 1642.

Sir Isaac Newton developed further ties between physics and astronomy through his law of universal gravitation. Realizing that the same force that attracts objects to the surface of the Earth held the Moon in orbit around the Earth, Newton was able to explain – in one theoretical framework – all known gravitational phenomena. In his Philosophiæ Naturalis Principia Mathematica, he derived Kepler's laws from first principles. Those first principles are as follows:

1. In an inertial frame of reference, an object either remains at rest or continues to move at constant velocity, unless acted upon by a force.
2. In an inertial reference frame, the vector sum of the forces F on an object is equal to the mass m of that object multiplied by the acceleration a of the object: F = ma. (It is assumed here that the mass m is constant.)
3. When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction on the first body.
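Newton's derivation can be illustrated numerically: for a near-circular orbit, equating the gravitational force to the centripetal force gives T² = 4π²a³/(GM), which reproduces Kepler's third law. A sketch using standard physical constants (the planetary semi-major axes are rounded reference values, not taken from the text):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

def orbital_period_years(a_metres):
    """Kepler's third law as it follows from Newtonian gravity for a
    circular orbit: T^2 = 4 * pi^2 * a^3 / (G * M)."""
    T_seconds = 2 * math.pi * math.sqrt(a_metres**3 / (G * M_SUN))
    return T_seconds / (365.25 * 86400)

# T^2 / a^3 comes out (nearly) the same for every planet, as Kepler found.
for name, a_au in [("Mercury", 0.387), ("Earth", 1.0), ("Jupiter", 5.203)]:
    T = orbital_period_years(a_au * AU)
    print(f"{name}: T = {T:.3f} yr, T^2/a^3 = {T**2 / a_au**3:.4f}")
```

The near-constant T²/a³ column is exactly the regularity Kepler extracted empirically and Newton explained dynamically.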
Thus while Kepler explained how the planets moved, Newton managed to explain why they moved the way they do. Newton's theoretical developments laid many of the foundations of modern physics. Completing the Solar System Outside of England, Newton's theory took some time to become established. Descartes' theory of vortices held sway in France, and Huygens, Leibniz and Cassini accepted only parts of Newton's system, preferring their own philosophies. Voltaire published a popular account in 1738. In 1748, the French Academy of Sciences offered a reward for solving the perturbations of Jupiter and Saturn, a problem eventually solved by Euler and Lagrange. Laplace completed the theory of the planets, publishing from 1798 to 1825. The solar nebular model of planetary formation had its early origins in this period. Edmond Halley succeeded Flamsteed as Astronomer Royal in England and correctly predicted the return in 1758 of the comet that bears his name. Sir William Herschel found the first new planet, Uranus, to be observed in modern times in 1781. The gap between the planets Mars and Jupiter disclosed by the Titius–Bode law was filled by the discovery of the asteroids Ceres and Pallas in 1801 and 1802, with many more following. At first, astronomical thought in America was based on Aristotelian philosophy, but interest in the new astronomy began to appear in Almanacs as early as 1659. Stellar astronomy Cosmic pluralism is the name given to the idea that the stars are distant suns, perhaps with their own planetary systems. Ideas in this direction were expressed in antiquity, by Anaxagoras and by Aristarchus of Samos, but did not find mainstream acceptance. The first astronomer of the European Renaissance to suggest that the stars were distant suns was Giordano Bruno in his De l'infinito universo et mondi (1584). This idea was among the charges, albeit not in a prominent position, brought against him by the Inquisition.
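The Titius–Bode law mentioned above predicts planetary distances as a = 0.4 + 0.3 × 2^n AU; the "gap" at n = 3 gives 2.8 AU, close to where Ceres was found (about 2.77 AU). A quick numerical check, under the usual convention that Mercury is the degenerate n case:

```python
def titius_bode(n):
    """Titius-Bode rule: a = 0.4 + 0.3 * 2^n AU (Mercury is the n = -inf case,
    conventionally written here as n = None)."""
    return 0.4 if n is None else 0.4 + 0.3 * 2 ** n

bodies = [("Mercury", None), ("Venus", 0), ("Earth", 1), ("Mars", 2),
          ("gap (Ceres)", 3), ("Jupiter", 4), ("Saturn", 5), ("Uranus", 6)]
for name, n in bodies:
    print(f"{name}: {titius_bode(n):.1f} AU")
```

The rule has no accepted physical basis, but the 2.8 AU prediction is what directed the searches that turned up Ceres and Pallas.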
The idea became mainstream in the later 17th century, especially following the publication of Conversations on the Plurality of Worlds by Bernard Le Bovier de Fontenelle (1686), and by the early 18th century it was the default working assumption in stellar astronomy. The Italian astronomer Geminiano Montanari recorded observing variations in luminosity of the star Algol in 1667. Edmond Halley published the first measurements of the proper motion of a pair of nearby "fixed" stars, demonstrating that they had changed positions since the time of the ancient Greek astronomers Ptolemy and Hipparchus. William Herschel was the first astronomer to attempt to determine the distribution of stars in the sky. During the 1780s, he established a series of gauges in 600 directions and counted the stars observed along each line of sight. From this he deduced that the number of stars steadily increased toward one side of the sky, in the direction of the Milky Way core. His son John Herschel repeated this study in the southern hemisphere and found a corresponding increase in the same direction. In addition to his other accomplishments, William Herschel is noted for his discovery that some stars do not merely lie along the same line of sight, but are physical companions that form binary star systems. Modern astronomy 19th century Before photography, the recording of astronomical data was limited by the human eye. In 1840, John W. Draper, a chemist, created the earliest known astronomical photograph, an image of the Moon, and by the late 19th century thousands of photographic plates of planets, stars, and galaxies had been created. Most photography had lower quantum efficiency (i.e. captured a smaller fraction of the incident photons) than the human eye, but had the advantage of long integration times (hours for a photographic plate compared to about 100 ms for the eye).
This vastly increased the data available to astronomers, which led to the rise of human computers, most famously the Harvard Computers, to track and analyze the data. Scientists began discovering forms of light which were invisible to the naked eye: X-rays, gamma rays, radio waves, microwaves, ultraviolet radiation, and infrared radiation. This had a major impact on astronomy, spawning the fields of infrared astronomy, radio astronomy, X-ray astronomy and finally gamma-ray astronomy. With the advent of spectroscopy it was proven that other stars were similar to the Sun, but with a range of temperatures, masses and sizes. The science of stellar spectroscopy was pioneered by Joseph von Fraunhofer and Angelo Secchi. By comparing the spectra of stars such as Sirius to the Sun, they found differences in the strength and number of their absorption lines—the dark lines in stellar spectra caused by the stellar atmosphere's absorption of specific frequencies. In 1865, Secchi began classifying stars into spectral types. The first evidence of helium was observed on August 18, 1868, as a bright yellow spectral line with a wavelength of 587.49 nanometers in the spectrum of the chromosphere of the Sun. The line was detected by French astronomer Jules Janssen during a total solar eclipse in Guntur, India. The first direct measurement of the distance to a star (61 Cygni at 11.4 light-years) was made in 1838 by Friedrich Bessel using the parallax technique. Parallax measurements demonstrated the vast separation of the stars in the heavens. Observation of double stars gained increasing importance during the 19th century. In 1834, Friedrich Bessel observed changes in the proper motion of the star Sirius and inferred a hidden companion. Edward Pickering discovered the first spectroscopic binary in 1899 when he observed the periodic splitting of the spectral lines of the star Mizar in a 104-day period.
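Bessel's parallax technique converts the tiny annual angular shift of a nearby star directly into a distance: d in parsecs is the reciprocal of the parallax in arcseconds. A sketch (the 0.286-arcsecond parallax is the modern value for 61 Cygni, assumed here; Bessel's own 1838 figure was about 0.314 arcseconds):

```python
PC_TO_LY = 3.2616  # light-years per parsec

def distance_ly(parallax_arcsec):
    """Distance from annual parallax: d [pc] = 1 / p [arcsec],
    converted to light-years."""
    return (1.0 / parallax_arcsec) * PC_TO_LY

# Modern parallax of 61 Cygni (an assumed value, not from the text above):
print(f"{distance_ly(0.286):.1f} light-years")
```

The result reproduces the roughly 11.4 light-year distance quoted above, and shows why the measurement was so hard: a third of an arcsecond is about the apparent size of a coin seen from several kilometres away.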
Detailed observations of many binary star systems were collected by astronomers such as Friedrich Georg Wilhelm von Struve and S. W. Burnham, allowing the masses of stars to be determined from the computation of orbital elements. The first solution to the problem of deriving an orbit of binary stars from telescope observations was made by Felix Savary in 1827. In 1847, Maria Mitchell discovered a comet using a telescope. 20th century With the accumulation of large sets of astronomical data, teams like the Harvard Computers rose in prominence, which led to many female astronomers, previously relegated as assistants to male astronomers, gaining recognition in the field. The United States Naval Observatory (USNO) and other astronomy research institutions hired human "computers", who performed the tedious calculations while scientists performed research requiring more background knowledge. A number of discoveries in this period were originally noted by the women "computers" and reported to their supervisors. Henrietta Swan Leavitt discovered the Cepheid variable star period-luminosity relation, which she further developed into a method of measuring distance outside of the Solar System. A veteran of the Harvard Computers, Annie J. Cannon developed the modern version of the stellar classification scheme during the early 1900s (O B A F G K M, based on color and temperature), manually classifying more stars in a lifetime than anyone else (around 350,000). The twentieth century saw increasingly rapid advances in the scientific study of stars. Karl Schwarzschild discovered that the color of a star and, hence, its temperature, could be determined by comparing the visual magnitude against the photographic magnitude. The development of the photoelectric photometer allowed precise measurements of magnitude at multiple wavelength intervals. In 1921 Albert A.
Michelson made the first measurements of a stellar diameter using an interferometer on the Hooker telescope at Mount Wilson Observatory. Important theoretical work on the physical structure of stars occurred during the first decades of the twentieth century. In 1913, the Hertzsprung-Russell diagram was developed, propelling the astrophysical study of stars. In Potsdam in 1906, the Danish astronomer Ejnar Hertzsprung published the first plots of color versus luminosity for these stars. These plots showed a prominent and continuous sequence of stars, which he named the Main Sequence. At Princeton University, Henry Norris Russell plotted the spectral types of these stars against their absolute magnitude, and found that dwarf stars followed a distinct relationship. This allowed the real brightness of a dwarf star to be predicted with reasonable accuracy. Successful models were developed to explain the interiors of stars and stellar evolution. Cecilia Payne-Gaposchkin first proposed that stars were made primarily of hydrogen and helium in her 1925 doctoral thesis. The spectra of stars were further understood through advances in quantum physics. This allowed the chemical composition of the stellar atmosphere to be determined. As evolutionary models of stars were developed during the 1930s, Bengt Strömgren introduced the term Hertzsprung–Russell diagram to denote a luminosity-spectral class diagram. A refined scheme for stellar classification was published in 1943 by William Wilson Morgan and Philip Childs Keenan. The existence of our galaxy, the Milky Way, as a separate group of stars was only proven in the 20th century, along with the existence of "external" galaxies, and soon after, the expansion of the universe seen in the recession of most galaxies from us. The "Great Debate" between Harlow Shapley and Heber Curtis, in the 1920s, concerned the nature of the Milky Way, spiral nebulae, and the dimensions of the universe. 
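Leavitt's period-luminosity relation, mentioned above, together with the distance modulus turns an observed Cepheid pulsation period and apparent magnitude into a distance; it was this ladder that later let Hubble place the Andromeda nebula far outside the Milky Way. The calibration coefficients below are rough modern values, assumed for illustration only:

```python
import math

def cepheid_absolute_magnitude(period_days):
    """Illustrative Leavitt-law calibration for classical Cepheids
    (assumed coefficients): M_V = -2.43 * (log10 P - 1) - 4.05."""
    return -2.43 * (math.log10(period_days) - 1.0) - 4.05

def distance_parsecs(apparent_mag, absolute_mag):
    """Distance modulus: m - M = 5 * log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

M = cepheid_absolute_magnitude(10.0)      # a 10-day Cepheid
print(f"M_V = {M:.2f}")
print(f"d = {distance_parsecs(15.0, M):.0f} pc")   # if it appears at m = 15
```

The power of the method is that the period is easy to measure even for a faint, distant star, so a single light curve yields an absolute magnitude and hence a distance.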
With the advent of quantum physics, spectroscopy was further refined. The Sun was found to be part of a galaxy made up of more than 10^10 stars (10 billion stars). The existence of other galaxies, one of the matters of the great debate, was settled by Edwin Hubble, who identified the Andromeda nebula as a different galaxy, and many others at large distances, receding from our galaxy. Physical cosmology, a discipline that has a large intersection with astronomy, made huge advances during the 20th century, with the model of the hot Big Bang heavily supported by the evidence provided by astronomy and physics, such as the redshifts of very distant galaxies and radio sources, the cosmic microwave background radiation, Hubble's law and cosmological abundances of elements. See also Age of the universe Anthropic principle Archaeoastronomy Astrotheology Big Bang Cosmology Expansion of the universe Hebrew astronomy History of astrology History of Mars observation History of supernova observation History of the telescope Letters on Sunspots List of astronomers List of astronomical instrument makers List of astronomical observatories List of Russian astronomers and astrophysicists Patronage in astronomy Physical cosmology Timeline of astronomy Notes Historians of astronomy Scholars Past. Willy Hartner, Otto Neugebauer, B. L. van der Waerden Scholars Present. Stephen G. Brush, Stephen J. Dick, Owen Gingerich, Bruce Stephenson, Michael Hoskin, Alexander R. Jones, Curtis A. Wilson Astronomer-historians. J. B. J. Delambre, J. L. E. Dreyer, Donald Osterbrock, Carl Sagan, F. Richard Stephenson References Aaboe, Asger. Episodes from the Early History of Astronomy. Springer-Verlag 2001 Aveni, Anthony F. Skywatchers of Ancient Mexico. University of Texas Press 1980 Dreyer, J. L. E. History of Astronomy from Thales to Kepler, 2nd edition. Dover Publications 1953 (revised reprint of History of the Planetary Systems from Thales to Kepler, 1906) Eastwood, Bruce.
The Revival of Planetary Astronomy in Carolingian and Post-Carolingian Europe, Variorum Collected Studies Series CS 279. Ashgate 2002. Antoine Gautier, L'âge d'or de l'astronomie ottomane, in L'Astronomie (monthly magazine created by Camille Flammarion in 1882), December 2005, volume 119. Hodson, F. R. (ed.). The Place of Astronomy in the Ancient World: A Joint Symposium of the Royal Society and the British Academy. Oxford University Press, 1974 Hoskin, Michael. The History of Astronomy: A Very Short Introduction. Oxford University Press. Pannekoek, Anton. A History of Astronomy. Dover Publications 1989 Pedersen, Olaf. Early Physics and Astronomy: A Historical Introduction, revised edition. Cambridge University Press 1993. Stephenson, Bruce. Kepler's Physical Astronomy, Studies in the History of Mathematics and Physical Sciences, 13. New York: Springer, 1987 Walker, Christopher (ed.). Astronomy before the telescope. British Museum Press 1996 Further reading UNESCO Medieval astronomy in Europe Magli, Giulio. "On the possible discovery of precessional effects in ancient astronomy." arXiv preprint physics/0407108 (2004). Refereed Journals DIO: The International Journal of Scientific History Journal for the History of Astronomy Journal of Astronomical History and Heritage External links Paris Observatory books and manuscripts UNESCO-IAU Portal to the Heritage of Astronomy Astronomiae Historia / History of Astronomy at the Astronomical Institutes of Bonn University. Society for the History of Astronomy Mayan Astronomy Caelum Antiquum: Ancient Astronomy and Astrology at LacusCurtius Mesoamerican Archaeoastronomy "The Book of Instruction on Deviant Planes and Simple Planes" is a manuscript in Arabic that dates back to 1740 and talks about practical astronomy, with diagrams.
More information on women astronomers Astronomy & Empire, BBC Radio 4 discussion with Simon Schaffer, Kristen Lippincott & Allan Chapman (In Our Time, May 4, 2006) "Sharing the sky: astronomers and astrologers in the West", an exhibition of the Library of the Observatory of Paris about the shared history of astronomy and astrology around the Mediterranean. Astronomy
https://en.wikipedia.org/wiki/Haber%20process
Haber process
The Haber process, also called the Haber–Bosch process, is an artificial nitrogen fixation process and is the main industrial procedure for the production of ammonia today. It is named after its inventors, the German chemists Fritz Haber and Carl Bosch, who developed it in the first decade of the 20th century. The process converts atmospheric nitrogen (N2) to ammonia (NH3) by a reaction with hydrogen (H2) using a metal catalyst under high temperatures and pressures: N2 + 3 H2 ⇌ 2 NH3. Before the development of the Haber process, ammonia had been difficult to produce on an industrial scale, with early methods such as the Birkeland–Eyde process and Frank–Caro process all being highly inefficient. Although the Haber process is mainly used to produce fertilizer today, during World War I it provided Germany with a source of ammonia for the production of explosives, compensating for the Allied Powers' trade blockade on Chilean saltpeter. History Throughout the 19th century the demand for nitrates and ammonia for use as fertilizers and industrial feedstocks had been steadily increasing. The main source was mining niter deposits and guano from tropical islands. At the beginning of the 20th century it was being predicted that these reserves could not satisfy future demands, and research into new potential sources of ammonia became more important. Although atmospheric nitrogen (N2) is abundant, comprising nearly 80% of the air, it is exceptionally stable and does not readily react with other chemicals. Converting N2 into ammonia posed a challenge for chemists globally. Haber, with his assistant Robert Le Rossignol, developed the high-pressure devices and catalysts needed to demonstrate the Haber process at laboratory scale. They demonstrated their process in the summer of 1909 by producing ammonia from air, drop by drop, at the rate of about per hour.
The process was purchased by the German chemical company BASF, which assigned Carl Bosch the task of scaling up Haber's tabletop machine to industrial-level production. He succeeded in 1910. Haber and Bosch were later awarded Nobel prizes, in 1918 and 1931 respectively, for their work in overcoming the chemical and engineering problems of large-scale, continuous-flow, high-pressure technology. Ammonia was first manufactured using the Haber process on an industrial scale in 1913 in BASF's Oppau plant in Germany, reaching 20 tonnes per day the following year. During World War I, the production of munitions required large amounts of nitrate. The Allies had access to large deposits of sodium nitrate in Chile (Chile saltpetre) controlled by British companies. Germany had no such resources, so the Haber process proved essential to the German war effort. Synthetic ammonia from the Haber process was used for the production of nitric acid, a precursor to the nitrates used in explosives. Today, the most popular catalysts are based on iron promoted with K2O, CaO, SiO2, and Al2O3. Earlier, molybdenum was also used as a promoter. The original Haber–Bosch reaction chambers used osmium as the catalyst, but it was available in extremely small quantities. Haber noted uranium was almost as effective and easier to obtain than osmium. Under Bosch's direction in 1909, the BASF researcher Alwin Mittasch discovered a much less expensive iron-based catalyst, which is still used today. A major contributor to the elucidation of this catalysis was Gerhard Ertl. During the interwar years, alternative processes were developed, the most notably different being the Casale process, Claude process and the Mont-Cenis process by Friedrich Uhde Ingenieurbüro, founded in 1921. Luigi Casale and Georges Claude proposed to increase the pressure of the synthesis loop to , thereby increasing the single-pass ammonia conversion and making nearly complete liquefaction at ambient temperature feasible. 
Georges Claude even proposed to have three or four converters with liquefaction steps in series, thereby omitting the need for a recycle. Nowadays, most plants resemble the original Haber process ( and ), albeit with improved single-pass conversion and lower energy consumption due to process and catalyst optimization. Process This conversion is typically conducted at pressures above 10 MPa (100 bar; 1,450 psi) and between , as the gases (nitrogen and hydrogen) are passed over four beds of catalyst, with cooling between each pass for maintaining a reasonable equilibrium constant. On each pass only about 15% conversion occurs, but any unreacted gases are recycled, and eventually an overall conversion of 97% is achieved. The steam reforming, shift conversion, carbon dioxide removal, and methanation steps each operate at pressures of about , and the ammonia synthesis loop operates at pressures ranging from , depending upon which proprietary process is used. Sources of hydrogen The major source of hydrogen is methane from natural gas. The conversion, steam reforming, is conducted with steam in a high-temperature and pressure tube inside a reformer with a nickel catalyst, separating the carbon and hydrogen atoms in the natural gas, yielding hydrogen gas and carbon dioxide waste. Other fossil fuel sources include coal, heavy fuel oil and naphtha. Green hydrogen is produced without fossil fuels or carbon dioxide waste from biomass, electrolysis of water and the thermochemical (solar or other heat source) splitting of water. Reaction rate and equilibrium Nitrogen gas (N2) is very unreactive because the atoms are held together by strong triple bonds. The Haber process relies on catalysts that accelerate the scission of this triple bond. Two opposing considerations are relevant to this synthesis: the position of the equilibrium and the rate of reaction. 
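The relationship between the ~15% single-pass conversion and the ~97% overall conversion follows from simple recycle arithmetic. A minimal sketch (my own illustration, not from the article), assuming an idealized loop in which 15% of the gas entering the reactor reacts on each pass, the ammonia is removed, and all unreacted gas is recycled with no purge losses:

```python
def overall_conversion(single_pass=0.15, passes=22):
    """Fraction of an initial gas charge converted after repeated
    reaction-and-recycle passes (idealized: perfect ammonia removal,
    no purge losses)."""
    unreacted = 1.0
    for _ in range(passes):
        unreacted *= 1.0 - single_pass  # 15% of what enters the bed reacts
    return 1.0 - unreacted

print(f"{overall_conversion():.1%}")  # after ~22 passes, roughly 97%
```

Under these assumptions the overall conversion approaches 100% as passes accumulate; in a real loop it is capped by purge streams and inert build-up, consistent with the 97% overall figure quoted above.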
At room temperature, the equilibrium is strongly in favor of ammonia, but the reaction does not proceed at a detectable rate due to its high activation energy. Because the reaction is exothermic, the equilibrium constant becomes unity at around (see Le Châtelier's principle). Above this temperature, the equilibrium quickly becomes quite unfavorable for the reaction product at atmospheric pressure, according to the van 't Hoff equation. Lowering the temperature is also unhelpful because the catalyst requires a temperature of at least 400 °C to be efficient. Increased pressure does favor the forward reaction because there are 4 moles of reactant for every 2 moles of product, and the pressure used () alters the equilibrium concentrations to give a substantial ammonia yield. The reason for this is evident in the equilibrium relationship K = [(φ_NH3 y_NH3)^2 / (φ_N2 y_N2 (φ_H2 y_H2)^3)] (P/P0)^−2, where φ_i is the fugacity coefficient of species i, y_i is the mole fraction of the same species, P is the pressure in the reactor, and P0 is standard pressure, typically . Economically, pressurization of the reactor is expensive: pipes, valves, and reaction vessels need to be strengthened, and there are safety considerations when working at 20 MPa. In addition, running compressors takes considerable energy, as work must be done on the (very compressible) gas. Thus, the compromise used gives a single-pass yield of around 15%. While removing the product (i.e., ammonia gas) from the system would increase the reaction yield, this step is not used in practice, since the temperature is too high; instead, ammonia is removed from the equilibrium mixture of gases leaving the reaction vessel. The hot gases are cooled enough, whilst maintaining a high pressure, for the ammonia to condense and be removed as liquid. Unreacted hydrogen and nitrogen gases are then returned to the reaction vessel to undergo further reaction. While most ammonia is removed (typically down to 2–5 mol.%), some ammonia remains in the recycle stream to the converter.
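The temperature dependence described above can be illustrated with the van 't Hoff/Gibbs relation. A sketch using assumed literature values (ΔH° ≈ −92.2 kJ/mol and ΔS° ≈ −198.7 J/(mol·K) for N2 + 3 H2 → 2 NH3; these numbers are not given in this article):

```python
import math

DH = -92.2e3   # assumed standard reaction enthalpy, J/mol
DS = -198.7    # assumed standard reaction entropy, J/(mol*K)
R = 8.314      # gas constant, J/(mol*K)

def k_eq(T):
    """Equilibrium constant from dG = dH - T*dS = -R*T*ln(K)."""
    return math.exp(-(DH - T * DS) / (R * T))

for T in (298, 464, 700):
    print(f"T = {T} K: K = {k_eq(T):.2e}")
```

K is enormous at room temperature, passes through unity near 460 K, and becomes small at synthesis temperatures, which is exactly the equilibrium-versus-rate trade-off discussed above.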
In academic literature, more complete separation of ammonia has been proposed by absorption in metal halides and by adsorption on zeolites. Such a process is called an absorbent-enhanced Haber process or an adsorbent-enhanced Haber–Bosch process. Catalysts The Haber–Bosch process relies on catalysts to accelerate the hydrogenation of N2. The catalysts are "heterogeneous", meaning that they are solids that interact with gaseous reagents. The catalyst typically consists of finely divided iron bound to an iron oxide carrier containing promoters possibly including aluminium oxide, potassium oxide, calcium oxide, potassium hydroxide, molybdenum, and magnesium oxide. Production of iron-based catalysts In industrial practice, the iron catalyst is obtained from finely ground iron powder, which is usually obtained by reduction of high-purity magnetite (Fe3O4). The pulverized iron is burnt (oxidized) to give magnetite or wüstite (FeO, ferrous oxide) particles of a specific size. The magnetite (or wüstite) particles are then partially reduced, removing some of the oxygen in the process. The resulting catalyst particles consist of a core of magnetite, encased in a shell of wüstite, which in turn is surrounded by an outer shell of metallic iron. The catalyst maintains most of its bulk volume during the reduction, resulting in a highly porous high-surface-area material, which enhances its effectiveness as a catalyst. Other minor components of the catalyst include calcium and aluminium oxides, which support the iron catalyst and help it maintain its surface area. These oxides of Ca, Al, K, and Si are unreactive to reduction by the hydrogen. The production of the required magnetite catalyst requires a particular melting process in which the used raw materials must be free of catalyst poisons and the promoter aggregates must be evenly distributed in the magnetite melt.
Rapid cooling of the magnetite melt, which has an initial temperature of about 3500 °C, produces the precursor of the desired highly active catalyst. Unfortunately, the rapid cooling ultimately forms a catalyst of reduced abrasion resistance. Despite this disadvantage, the method of rapid cooling is often preferred in practice. The reduction of the catalyst precursor magnetite to α-iron is carried out directly in the production plant with synthesis gas. The reduction of the magnetite proceeds via the formation of wüstite (FeO), so that particles with a core of magnetite surrounded by a shell of wüstite are formed. The further reduction of magnetite and wüstite leads to the formation of α-iron, which forms together with the promoters the outer shell. The involved processes are complex and depend on the reduction temperature: At lower temperatures, wüstite disproportionates into an iron phase and a magnetite phase; at higher temperatures, the reduction of the wüstite and magnetite to iron dominates. The α-iron forms primary crystallites with a diameter of about 30 nanometers. These crystallites form a bimodal pore system with pore diameters of about 10 nanometers (produced by the reduction of the magnetite phase) and of 25 to 50 nanometers (produced by the reduction of the wüstite phase). With the exception of cobalt oxide, the promoters are not reduced. During the reduction of the iron oxide with synthesis gas, water vapour is formed. This water vapour must be taken into account for high catalyst quality, as contact with the finely divided iron would lead to premature aging of the catalyst through recrystallization, especially in conjunction with high temperatures. The vapour pressure of the water in the gas mixture produced during catalyst formation is thus kept as low as possible; target values are below 3 g·m−3. For this reason, the reduction is carried out at high gas exchange, low pressure and low temperatures.
The exothermic nature of the ammonia formation ensures a gradual increase in temperature. The reduction of fresh, fully oxidized catalyst or precursor to full production capacity takes four to ten days. The wüstite phase is reduced faster and at lower temperatures than the magnetite phase (Fe3O4). After detailed kinetic, microscopic and X-ray spectroscopic investigations it was shown that wüstite reacts first to metallic iron. This leads to a gradient of iron(II) ions, whereby these diffuse from the magnetite through the wüstite to the particle surface and precipitate there as iron nuclei. In industrial practice, pre-reduced, stabilised catalysts have gained a significant market share. They are delivered showing the fully developed pore structure, but have been oxidized again on the surface after manufacture and are therefore no longer pyrophoric. The reactivation of such pre-reduced catalysts requires only 30 to 40 hours instead of the usual time periods of several days. In addition to the short start-up time, they also have other advantages such as higher water resistance and lower weight. Catalysts other than iron Since the industrial launch of the Haber–Bosch process, many efforts have been made to improve it. Many metals were intensively tested in the search for suitable catalysts: The requirement for suitability is the dissociative adsorption of nitrogen (i. e. the nitrogen molecule must be split upon adsorption into nitrogen atoms). At the same time the binding of the nitrogen atoms must not be too strong, otherwise the catalyst would be blocked and the catalytic ability would be reduced (i. e. self-poisoning). The elements in the periodic table at the left of the iron group show such a strong bond to nitrogen. The formation of surface nitrides makes for example chromium catalysts ineffective. Metals to the right of the iron group, in contrast, adsorb nitrogen too weakly to be able to activate it sufficiently for ammonia synthesis. 
Haber initially used catalysts based on osmium and uranium. Uranium reacts to form its nitride during catalysis, while osmium oxide is rare. Due to the comparatively low price, high availability, easy processing, lifespan and activity, iron was ultimately chosen as catalyst. The production of 1800 tons ammonia per day requires a gas pressure of at least 130 bar, temperatures of 400 to 500 °C and a reactor volume of at least 100 m³. According to theoretical and practical studies, further improvements of the pure iron catalyst are limited. It was noticed that the activity of iron catalysts was increased by the inclusion of cobalt. Second generation catalysts Ruthenium forms highly active catalysts. Allowing milder operating pressures and temperatures, Ru-based materials are referred to as second-generation catalysts. Such catalysts are prepared by decomposition of triruthenium dodecacarbonyl on graphite. A drawback of activated-carbon-supported ruthenium-based catalysts is the methanation of the support in the presence of hydrogen. Their activity is strongly dependent on the catalyst carrier and the promoters. A wide range of substances can be used as carriers, including carbon, magnesium oxide, aluminum oxide, zeolites, spinels, and boron nitride. Ruthenium-activated carbon-based catalysts have been used industrially in the KBR Advanced Ammonia Process (KAAP) since 1992. The carbon carrier is partially degraded to methane; however, this can be mitigated by a special treatment of the carbon at 1500 °C, thus prolonging the catalyst's lifetime. In addition, the finely dispersed carbon poses a risk of explosion. For these reasons and due to its low acidity, magnesium oxide has proven to be a good alternative. Carriers with acidic properties extract electrons from ruthenium, make it less reactive, and undesirably bind ammonia to the surface. Catalyst poisons Catalyst poisons lower the activity of the catalyst. They are usually impurities in the synthesis gas (a raw material).
Sulfur compounds, phosphorus compounds, arsenic compounds, and chlorine compounds are permanent catalyst poisons. Water, carbon monoxide, carbon dioxide and oxygen are temporary catalyst poisons. Although chemically inert components of the synthesis gas mixture such as noble gases or methane are not catalyst poisons in the strict sense, they accumulate through the recycling of the process gases and thus lower the partial pressure of the reactants, which in turn has a negative effect on the conversion. Industrial production Synthesis parameters The formation of ammonia occurs from nitrogen and hydrogen according to the following equation: N2 + 3 H2 ⇌ 2 NH3. The reaction is an exothermic equilibrium reaction in which the gas volume is reduced. The equilibrium constant Keq of the reaction (see table) is obtained from the following equation: Since the reaction is exothermic, the equilibrium of the reaction shifts at lower temperatures to the side of the ammonia. Furthermore, four volumetric parts of the raw materials produce two volumetric parts of ammonia. According to Le Chatelier's principle, a high pressure therefore also favours the formation of ammonia. In addition, a high pressure is necessary to ensure sufficient surface coverage of the catalyst with nitrogen. For this reason, a ratio of nitrogen to hydrogen of 1 to 3, a pressure of 250 to 350 bar, a temperature of 450 to 550 °C and α-iron as catalyst are used. The catalyst ferrite (α-Fe) is produced in the reactor by the reduction of magnetite with hydrogen. The catalyst has its highest efficiency at temperatures of about 400 to 500 °C. Even though the catalyst greatly lowers the activation energy for the cleavage of the triple bond of the nitrogen molecule, high temperatures are still required for an appropriate reaction rate. At the industrially utilized reaction temperature of 450 to 550 °C an optimum between the decomposition of ammonia into the starting materials and the effectiveness of the catalyst is achieved.
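The pressure argument from Le Chatelier's principle can be made quantitative by solving the equilibrium expression for a stoichiometric 1:3 N2/H2 feed. A simplified sketch that treats the gases as ideal (fugacity coefficients of 1, which the full relationship does not assume) and uses an assumed, illustrative value of K; the bisection solver and the chosen K are mine, not the article's:

```python
def nh3_fraction(K, P, P0=1.0):
    """Equilibrium NH3 mole fraction for a stoichiometric 1:3 N2/H2 feed,
    ideal-gas approximation. Solves
    K = y_NH3^2 / (y_N2 * y_H2^3) * (P/P0)^-2 for the extent x."""
    def residual(x):
        total = 4 - 2 * x                      # moles left after extent x
        y_n2 = (1 - x) / total
        y_h2 = (3 - 3 * x) / total
        y_nh3 = 2 * x / total
        return y_nh3**2 / (y_n2 * y_h2**3) * (P / P0) ** -2 - K
    lo, hi = 1e-9, 1 - 1e-9                    # residual is monotonic in x
    for _ in range(100):                       # bisection on the extent
        mid = (lo + hi) / 2
        if residual(mid) < 0:
            lo = mid
        else:
            hi = mid
    x = (lo + hi) / 2
    return 2 * x / (4 - 2 * x)

# Higher pressure shifts the equilibrium toward ammonia (Le Chatelier):
for P in (1, 100, 250):
    print(f"P = {P} bar: y_NH3 = {nh3_fraction(7e-5, P):.3f}")
```

With the assumed K, the ammonia mole fraction rises from well under 1% at atmospheric pressure to tens of percent at synthesis-loop pressures, mirroring the qualitative argument in the text.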
The formed ammonia is continuously removed from the system. The volume fraction of ammonia in the gas mixture is about 20%. The inert components, especially the noble gases such as argon, should not exceed a certain content in order not to reduce the partial pressure of the reactants too much. To remove the inert gas components, part of the gas is removed and the argon is separated in a gas separation plant. The extraction of pure argon from the circulating gas is carried out using the Linde process. Large-scale technical implementation Modern ammonia plants produce more than 3000 tons per day in one production line. The following diagram shows the set-up of a Haber–Bosch plant: Depending on its origin, the synthesis gas must first be freed from impurities such as hydrogen sulphide or organic sulphur compounds, which act as catalyst poisons. High concentrations of hydrogen sulphide, which occur in synthesis gas from carbonization coke, are removed in a wet cleaning stage such as the Sulfosolvan process, while low concentrations are removed by adsorption on activated carbon. Organosulfur compounds are separated by pressure swing adsorption together with carbon dioxide after CO conversion. To produce hydrogen by steam reforming, methane reacts with water vapor using a nickel oxide-alumina catalyst in the primary reformer to form carbon monoxide and hydrogen. The energy required for this, the enthalpy ΔH, is 206 kJ/mol. The methane gas reacts in the primary reformer only partially. In order to increase the hydrogen yield and keep the content of inert components (i. e. methane) as low as possible, the remaining methane gas is converted in a second step with oxygen to hydrogen and carbon monoxide in the secondary reformer. The secondary reformer is supplied with air as oxygen source. The nitrogen required for the subsequent ammonia synthesis is also added to the gas mixture.
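The reforming stoichiometry lends itself to a back-of-the-envelope calculation. A sketch of the idealized stoichiometric minimum, assuming every CH4 ultimately yields four H2 (three from reforming, CH4 + H2O → CO + 3 H2, plus one from the subsequent water-gas shift) and every NH3 consumes 1.5 H2; real plants consume substantially more than this minimum:

```python
M_NH3 = 17.03          # g/mol, molar mass of ammonia
M_CH4 = 16.04          # g/mol, molar mass of methane
DH_REFORM = 206.0      # kJ/mol CH4, reforming enthalpy (from the article)

def methane_per_tonne_nh3():
    """Stoichiometric minimum CH4 demand and reforming heat duty
    per tonne of ammonia (idealized: 4 H2 per CH4, 1.5 H2 per NH3)."""
    mol_nh3 = 1e6 / M_NH3                 # mol NH3 in one tonne
    mol_h2 = 1.5 * mol_nh3                # H2 consumed by synthesis
    mol_ch4 = mol_h2 / 4                  # CH4 needed to supply that H2
    heat_gj = mol_ch4 * DH_REFORM / 1e6   # reforming duty, GJ
    return mol_ch4, heat_gj

mol_ch4, heat = methane_per_tonne_nh3()
print(f"{mol_ch4:.0f} mol CH4 (~{mol_ch4 * M_CH4 / 1000:.0f} kg), "
      f"{heat:.1f} GJ reforming heat per tonne NH3")
```

This gives roughly 350 kg of methane and about 4.5 GJ of reforming duty per tonne of ammonia as a floor; actual plant energy consumption is several times higher once fuel, compression and losses are included.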
In a third step, the carbon monoxide is oxidized to carbon dioxide, which is called CO conversion or water-gas shift reaction. Carbon monoxide and carbon dioxide would form carbamates with ammonia, which would clog (as solids) pipelines and apparatus within a short time. In the following process step, the carbon dioxide must therefore be removed from the gas mixture. In contrast to carbon monoxide, carbon dioxide can easily be removed from the gas mixture by gas scrubbing with triethanolamine. The gas mixture then still contains methane and noble gases such as argon, which, however, behave inertly. The gas mixture is then compressed to operating pressure by turbo compressors. The resulting compression heat is dissipated by heat exchangers; it is used to preheat raw gases. The actual production of ammonia takes place in the ammonia reactor. The first reactors burst under the high pressure because the atomic hydrogen in the carbonaceous steel partially recombined to methane and produced cracks in the steel. Bosch therefore developed tube reactors consisting of a pressure-bearing steel tube into which a low-carbon iron lining tube filled with the catalyst was inserted. Hydrogen that diffused through the inner steel pipe escaped to the outside via thin holes in the outer steel jacket, the so-called Bosch holes. A disadvantage of the tubular reactors was the relatively high pressure loss, which had to be applied again by compression. The development of hydrogen-resistant chromium-molybdenum steels made it possible to construct single-walled pipes. Modern ammonia reactors are designed as multi-storey reactors with low pressure drop, in which the catalysts are distributed as fills over about ten storeys one above the other. The gas mixture flows through them one after the other from top to bottom. Cold gas is injected from the side for cooling. A disadvantage of this reactor type is the incomplete conversion of the cold gas mixture in the last catalyst bed.
Alternatively, the reaction mixture between the catalyst layers is cooled using heat exchangers, whereby the hydrogen-nitrogen mixture is preheated to reaction temperature. Reactors of this type have three catalyst beds. In addition to good temperature control, this reactor type has the advantage of better conversion of the raw material gases compared to reactors with cold gas injection. Uhde has developed and is using an ammonia converter with three radial flow catalyst beds and two internal heat exchangers instead of axial flow catalyst beds. This further reduces the pressure drop in the converter. The reaction product is continuously removed for maximum yield. The gas mixture is cooled to 450 °C in a heat exchanger using water, freshly supplied gases and other process streams. The ammonia also condenses and is separated in a pressure separator. Unreacted nitrogen and hydrogen are then compressed back to the process by a circulating gas compressor, supplemented with fresh gas and fed to the reactor. In a subsequent distillation, the product ammonia is purified. Mechanism Elementary steps The mechanism of ammonia synthesis contains the following seven elementary steps:
1. transport of the reactants from the gas phase through the boundary layer to the surface of the catalyst
2. pore diffusion to the reaction center
3. adsorption of reactants
4. reaction
5. desorption of product
6. transport of the product through the pore system back to the surface
7. transport of the product into the gas phase
Transport and diffusion (the first and last two steps) are fast compared to adsorption, reaction and desorption because of the shell structure of the catalyst. It is known from various investigations that the rate-determining step of the ammonia synthesis is the dissociation of nitrogen.
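The kinetic importance of that dissociation step can be illustrated with a crude Arrhenius comparison. The numbers here are assumptions for illustration only (≈945 kJ/mol, the N≡N bond energy, as a stand-in for the uncatalyzed barrier, and 100 kJ/mol as a representative catalyzed barrier); neither value is given in the article:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def boltzmann_factor(ea_kj_per_mol, temperature_k):
    """Relative Arrhenius/Boltzmann factor exp(-Ea / RT)."""
    return math.exp(-ea_kj_per_mol * 1e3 / (R * temperature_k))

T = 700.0  # K, a typical synthesis temperature
# ~945 kJ/mol: assumed uncatalyzed N2 dissociation barrier;
# 100 kJ/mol: assumed catalyzed barrier (illustrative values).
ratio = boltzmann_factor(100, T) / boltzmann_factor(945, T)
print(f"catalyzed pathway favored by a factor of ~{ratio:.1e}")
```

Even with generous assumptions, the exponential sensitivity to the barrier height shows why the uncatalyzed gas-phase route is unrealizable and why lowering the dissociation barrier dominates the overall rate.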
In contrast, exchange reactions between hydrogen and deuterium on the Haber–Bosch catalysts still take place at temperatures of at a measurable rate; the exchange between deuterium and hydrogen on the ammonia molecule also takes place at room temperature. Since the adsorption of both molecules is rapid, it cannot determine the speed of ammonia synthesis. In addition to the reaction conditions, the adsorption of nitrogen on the catalyst surface depends on the microscopic structure of the catalyst surface. Iron has different crystal surfaces, whose reactivity is very different. The Fe(111) and Fe(211) surfaces have by far the highest activity. The explanation for this is that only these surfaces have so-called C7 sites - these are iron atoms with seven closest neighbours. The dissociative adsorption of nitrogen on the surface follows the following scheme, where S* symbolizes an iron atom on the surface of the catalyst: N2 → S*–N2 (γ-species) → S*–N2–S* (α-species) → 2 S*–N (β-species, surface nitride) The adsorption of nitrogen is similar to the chemisorption of carbon monoxide. On a Fe(111) surface, the adsorption of nitrogen first leads to an adsorbed γ-species with an adsorption energy of 24 kJmol−1 and an N-N stretch vibration of 2100 cm−1. Since the nitrogen is isoelectronic to carbon monoxide, it adsorbs in an on-end configuration in which the molecule is bound perpendicular to the metal surface at one nitrogen atom. This has been confirmed by photoelectron spectroscopy. Ab-initio-MO calculations have shown that, in addition to the σ binding of the free electron pair of nitrogen to the metal, there is a π binding from the d orbitals of the metal to the π* orbitals of nitrogen, which strengthens the iron-nitrogen bond. The nitrogen in the α state is more strongly bound with 31 kJmol−1. The resulting N-N bond weakening could be experimentally confirmed by a reduction of the wave numbers of the N-N stretching oscillation to 1490 cm−1. 
Further heating of the Fe(111) area covered by α-N2 leads to both desorption and emergence of a new band at 450 cm−1. This represents a metal-nitrogen oscillation, the β state. A comparison with vibration spectra of complex compounds allows the conclusion that the N2 molecule is bound "side-on", with an N atom in contact with a C7 site. This structure is called "surface nitride". The surface nitride is very strongly bound to the surface. Hydrogen atoms (Hads), which are very mobile on the catalyst surface, quickly combine with it. Infrared spectroscopically detected surface imides (NHad), surface amides (NH2,ad) and surface ammoniacates (NH3,ad) are formed, the latter decay releasing NH3 (desorption). The individual molecules were identified or assigned by X-ray photoelectron spectroscopy (XPS), high-resolution electron energy loss spectroscopy (HREELS) and IR spectroscopy. On the basis of these experimental findings, the reaction mechanism is believed to involve the following steps (see also figure):
1. N2 (g) → N2 (adsorbed)
2. N2 (adsorbed) → 2 N (adsorbed)
3. H2 (g) → H2 (adsorbed)
4. H2 (adsorbed) → 2 H (adsorbed)
5. N (adsorbed) + 3 H (adsorbed) → NH3 (adsorbed)
6. NH3 (adsorbed) → NH3 (g)
Reaction 5 occurs in three steps, forming NH, NH2, and then NH3. Experimental evidence points to reaction 2 as being the slow, rate-determining step. This is not unexpected, since the bond broken, the nitrogen triple bond, is the strongest of the bonds that must be broken. As with all Haber–Bosch catalysts, nitrogen dissociation is the rate determining step for ruthenium activated carbon catalysts. The active center for ruthenium is a so-called B5 site, a 5-fold coordinated position on the Ru(0001) surface where two ruthenium atoms form a step edge with three ruthenium atoms on the Ru(0001) surface. The number of B5 sites depends on the size and shape of the ruthenium particles, the ruthenium precursor and the amount of ruthenium used.
The reinforcing effect of the basic carrier used in the ruthenium catalyst is similar to the promoter effect of alkali metals used in the iron catalyst. Energy diagram An energy diagram can be created based on the enthalpy of reaction of the individual steps. The energy diagram can be used to compare homogeneous and heterogeneous reactions: Due to the high activation energy of the dissociation of nitrogen, the homogeneous gas phase reaction is not realizable. The catalyst avoids this problem as the energy gain resulting from the binding of nitrogen atoms to the catalyst surface overcompensates for the necessary dissociation energy so that the reaction is finally exothermic. Nevertheless, the dissociative adsorption of nitrogen remains the rate determining step: not because of the activation energy, but mainly because of the unfavorable pre-exponential factor of the rate constant. Although hydrogenation is endothermic, this energy can easily be applied by the reaction temperature (about 700 K). Economic and environmental aspects When first invented, the Haber process competed against another industrial process, the cyanamide process. However, the cyanamide process consumed large amounts of electrical power and was more labor-intensive than the Haber process. As of 2018, the Haber process produces 230 million tonnes of anhydrous ammonia per year. The ammonia is used mainly as a nitrogen fertilizer as ammonia itself, in the form of ammonium nitrate, and as urea. The Haber process consumes 3–5% of the world's natural-gas production (around 1–2% of the world's energy supply). 
In combination with advances in breeding, herbicides and pesticides, these fertilizers have helped to increase the productivity of agricultural land. The energy intensity of the process contributes to climate change and other environmental problems: The Haber–Bosch process is one of the largest contributors to a buildup of reactive nitrogen in the biosphere, causing an anthropogenic disruption to the nitrogen cycle. Since nitrogen use efficiency is typically less than 50%, farm runoff from heavy use of fixed industrial nitrogen disrupts biological habitats. Nearly 50% of the nitrogen found in human tissues originated from the Haber–Bosch process. Thus, the Haber process serves as the "detonator of the population explosion", enabling the global population to increase from 1.6 billion in 1900 to 7.7 billion by November 2018. See also Other nitrogen fixation processes Birkeland-Eyde process Cyanamide process Other contemporary nitrogen sources Guano Chilean saltpeter Hydrogen production Industrial gas Paradas method References External links Haber–Bosch process, most important invention of the 20th century, according to V. Smil, Nature, 29 July 1999, p. 415 (by Jürgen Schmidhuber) Britannica guide to Nobel Prizes: Fritz Haber Nobel e-Museum – Biography of Fritz Haber BASF – Fertilizer out of thin air Uses and Production of Ammonia Haber Process for Ammonia Synthesis BASF Chemical processes Industrial processes Equilibrium chemistry Peak oil Catalysis History of mining in Chile German inventions Industrial gases Name reactions Fritz Haber 1909 in science 1909 in Germany
https://en.wikipedia.org/wiki/Hot%20or%20Not
Hot or Not
Hot or Not, currently rebranded as Chat & Date, is a rating site that allowed users to rate the attractiveness of photos submitted voluntarily by others. The site offered a matchmaking engine called 'Meet Me' and an extended profile feature called "Hotlists". The domain hotornot.com is currently owned by Hot Or Not Limited, and was previously owned by Avid Life Media. 'Hot or Not' was a significant influence on the people who went on to create the social media sites Facebook and YouTube. Description Users would submit photographs of themselves to the site so that other users could rate the person's attractiveness on a scale of 1–10, with the cumulative average acting as the overall score for a given photograph. History The site was founded in October 2000 by James Hong and Jim Young, two friends and Silicon Valley-based engineers. Both graduated from the University of California, Berkeley in electrical engineering, with Young pursuing a Ph.D at the time. It was inspired by other developers' ideas. The site was a technical solution to a disagreement the founders had one day over a passing woman's attractiveness. The site was originally called "Am I Hot or Not". Within a week of launching, it had reached almost two million page views per day. Within a few months, the site was immediately behind CNET and NBCi on Nielsen NetRatings' Top 25 advertising domains. To keep up with rising costs, Hong and Young added a matchmaking component to their website called "Meet Me at Hot or Not", i.e. a system of range voting. The matchmaking service has been especially successful and the site continues to generate most of its revenue through subscriptions. In the December 2006 issue of Time magazine, the founders of YouTube stated that they originally set out to make a version of Hot or Not with video before developing their more inclusive site.
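The cumulative-average scoring described above is straightforward to sketch. A minimal illustration (my own, not Hot or Not's actual implementation):

```python
class PhotoRating:
    """Running cumulative average on a 1-10 scale, as in the
    scoring scheme described above (illustrative sketch only)."""

    def __init__(self):
        self.total = 0
        self.count = 0

    def rate(self, score):
        """Record one vote; scores outside 1-10 are rejected."""
        if not 1 <= score <= 10:
            raise ValueError("score must be between 1 and 10")
        self.total += score
        self.count += 1

    @property
    def average(self):
        """Current overall score, or None before any votes."""
        return self.total / self.count if self.count else None

p = PhotoRating()
for s in (7, 9, 8, 10):
    p.rate(s)
print(p.average)  # 8.5
```

Keeping only a running total and count means each new vote updates the overall score in constant time, regardless of how many ratings a photo has accumulated.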
Mark Zuckerberg of Facebook similarly got his start by creating a Hot or Not-type site called FaceMash, where he posted photos from Harvard's facebook for the university's community to rate. Hot or Not was sold for a rumored $20 million on February 8, 2008, to Avid Life Media, owners of Ashley Madison. Annual revenue reached $7.5 million, with net profits of $5.5 million. The founders had initially started off $60,000 in debt due to tuition fees Hong paid for his MBA. On July 31, 2008, Hot or Not launched Hot or Not Gossip and a Baresi rate box (a "hot meter") – a subdivision to expand their market, run by former radio DJ turned celebrity blogger Zack Taylor. In 2012, Hot or Not was purchased by Badoo, which is owned by Bumble Inc. The app is currently rebranded as Chat & Date, which uses a similar user interface to Badoo and shares user accounts between both sites. Predecessors and spin-offs Hot or Not was preceded by other rating sites, such as RateMyFace, which was registered a year earlier in the summer of 1999, and AmIHot.com, which was registered in January 2000 by MIT freshman Daniel Roy. Despite its predecessors' head starts, Hot or Not quickly became the most popular. Since AmIHotOrNot.com's launch, the concept has spawned many imitators. The concept always remained the same, but the subject matter varied greatly. The concept has also been integrated with a wide variety of dating and matchmaking systems. In 2007, BecauseImHot.com launched and deleted anyone with a rating below 7 after a voting audit or the first 50 votes (whichever came first). Research In 2005, as an example of using image morphing methods to study the effects of averageness, imaging researcher Pierre Tourigny created a composite of about 30 faces to find out the current standard of good looks on the Internet. On the Hot or Not web site, people rate others' attractiveness on a scale of 1 to 10.
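The cumulative-average scoring just described can be sketched in a few lines of Python (an illustrative sketch only; the class and method names are hypothetical, not Hot or Not's actual code):

```python
class PhotoRating:
    """Running average of 1-10 attractiveness votes, as on Hot or Not."""

    def __init__(self):
        self.total = 0   # sum of all votes received so far
        self.votes = 0   # number of votes received so far

    def rate(self, score):
        """Record one vote; scores outside 1-10 are rejected."""
        if not 1 <= score <= 10:
            raise ValueError("score must be between 1 and 10")
        self.total += score
        self.votes += 1

    @property
    def average(self):
        """Cumulative average, or None if the photo has no votes yet."""
        return self.total / self.votes if self.votes else None

photo = PhotoRating()
for vote in (7, 9, 10, 6):
    photo.rate(vote)
print(photo.average)  # 8.0
```

The running average converges toward a stable score as votes accumulate, which is why (as noted below) a meaningful score emerges only after many individual ratings.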
An average score based on hundreds or even thousands of individual ratings takes only a few days to emerge. To make this hot or not palette of morphed images, photos from the site were sorted by rank, and SquirlzMorph was used to create multi-morph composites from them. Unlike projects like Face of Tomorrow, where the subjects are posed for the purpose, the portraits are blurry because the source images are of low resolution, with differences in variables such as posture, hair styles and glasses, so in this instance only 36 control points could be used for the morphs. A similar study was done with Miss Universe contestants, as shown in the averageness article, as well as one for age, as shown in the youthfulness article. A 2006 hot-or-not-style study at the Washington University School of Medicine, involving 264 women and 18 men and published online in the journal Brain Research, indicated that a person's brain determines whether an image is erotically appealing long before the viewer is even aware they are seeing the picture. Moreover, according to these researchers, one of the basic functions of the brain is to classify images into a hot-or-not type of categorization. The study's researchers also discovered that sexy shots induce a uniquely powerful reaction in the brain, equal in effect for both men and women, and that erotic images produced a strong reaction in the hypothalamus. See also Tinder Badoo Notes References External links The Hotornot website Canadian entertainment websites Internet properties established in 2000 Review websites
14024
https://en.wikipedia.org/wiki/H.263
H.263
H.263 is a video compression standard originally designed as a low-bit-rate compressed format for videoconferencing. It was standardized by the ITU-T Video Coding Experts Group (VCEG) in a project ending in 1995/1996. It is a member of the H.26x family of video coding standards in the domain of the ITU-T. Like previous H.26x standards, H.263 is based on discrete cosine transform (DCT) video compression. H.263 was later extended to add various additional enhanced features in 1998 and 2000. Smaller additions were also made in 1997 and 2001, and a unified edition was produced in 2005. History and background The H.263 standard was first designed to be utilized in H.324 based systems (PSTN and other circuit-switched network videoconferencing and videotelephony), but it also found use in H.323 (RTP/IP-based videoconferencing), H.320 (ISDN-based videoconferencing, where it was the most widely used video compression standard), RTSP (streaming media) and SIP (IP-based videoconferencing) solutions. H.263 is a required video coding format in ETSI 3GPP technical specifications for IP Multimedia Subsystem (IMS), Multimedia Messaging Service (MMS) and Transparent end-to-end Packet-switched Streaming Service (PSS). In 3GPP specifications, H.263 video is usually used in 3GP container format. H.263 also found many applications on the internet: much Flash Video content (as used on sites such as YouTube, Google Video, and MySpace) used to be encoded in Sorenson Spark format (an incomplete implementation of H.263). The original version of the RealVideo codec was based on H.263 until the release of RealVideo 8. H.263 was developed as an evolutionary improvement based on experience from H.261 and H.262 (aka MPEG-2 Video), the previous ITU-T standards for video compression, and the MPEG-1 standard developed in ISO/IEC. Its first version was completed in 1995 and provided a suitable replacement for H.261 at all bit rates. 
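H.263, like the other H.26x codecs mentioned above, builds its coding on the 8×8 discrete cosine transform. The sketch below is a deliberately naive 2-D DCT-II in pure Python (the function name and the flat-block example are mine, not from the standard; real encoders use fast, fixed-point approximations of this transform):

```python
import math

def dct_2d(block):
    """Naive 8x8 2-D DCT-II, the transform underlying H.26x coding."""
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            # Normalization factors for the DC (index 0) vs. AC terms
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = cu * cv * s
    return out

# A uniform block concentrates all its energy in the DC coefficient,
# which is what makes DCT coefficients so compressible:
flat = [[128] * 8 for _ in range(8)]
coeffs = dct_2d(flat)
print(round(coeffs[0][0]))  # 1024
```

For the flat block, every AC coefficient is (numerically) zero and only the DC term survives; smooth image regions behave similarly, which is why quantizing DCT coefficients yields large compression gains.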
It was further enhanced in projects known as H.263v2 (also known as H.263+ or H.263 1998) and H.263v3 (also known as H.263++ or H.263 2000). It was also used as the basis for the development of MPEG-4 Part 2. MPEG-4 Part 2 is H.263-compatible in the sense that basic "baseline" H.263 bitstreams are correctly decoded by an MPEG-4 Video decoder. The next enhanced format developed by ITU-T VCEG (in partnership with MPEG) after H.263 was the H.264 standard, also known as AVC and MPEG-4 Part 10. As H.264 provides a significant improvement in capability beyond H.263, the H.263 standard is now considered a legacy design. Most new videoconferencing products now include H.264 as well as H.263 and H.261 capabilities. An even newer standard, HEVC, has also been developed by VCEG and MPEG, and has begun to emerge in some applications. Versions Since the original ratification of H.263 in March 1996 (approving a document that was produced in November 1995), there have been two subsequent editions, which improved on the original standard with additional optional extensions (for example, the H.263v2 project added a deblocking filter in its Annex J). Version 1 and Annex I The original version of H.263 specified the following annexes: Annex A – Inverse transform accuracy specification Annex B – Hypothetical Reference Decoder Annex C – Considerations for Multipoint Annex D – Unrestricted Motion Vector mode Annex E – Syntax-based Arithmetic Coding mode Annex F – Advanced Prediction mode Annex G – PB-frames mode Annex H – Forward Error Correction for coded video signal The first version of H.263 supported a limited set of picture sizes: 128x96 (a.k.a. Sub-QCIF) 176x144 (a.k.a. QCIF) 352x288 (a.k.a. CIF) 704x576 (a.k.a. 4CIF) 1408x1152 (a.k.a.
16CIF) In March 1997, an informative Appendix I, describing Error Tracking – an encoding technique for providing improved robustness to data losses and errors – was approved to provide information for the aid of implementers having an interest in such techniques. H.263v2 (H.263+) H.263v2 (also known as H.263+, or as the 1998 version of H.263) is the informal name of the second edition of the ITU-T H.263 international video coding standard. It retained the entire technical content of the original version of the standard, but enhanced H.263 capabilities by adding several annexes which can substantially improve encoding efficiency and provide other capabilities (such as enhanced robustness against data loss in the transmission channel). The H.263+ project was ratified by the ITU in February 1998. It added the following annexes: Annex I – Advanced INTRA Coding mode Annex J – Deblocking Filter mode Annex K – Slice Structured mode Annex L – Supplemental Enhancement Information Specification Annex M – Improved PB-frames mode Annex N – Reference Picture Selection mode Annex O – Temporal, SNR, and Spatial Scalability mode Annex P – Reference picture resampling Annex Q – Reduced-Resolution Update mode (see implementors' guide correction as noted below) Annex R – Independent Segment Decoding mode Annex S – Alternative INTER VLC mode Annex T – Modified Quantization mode H.263v2 also added support for flexible customized picture formats and custom picture clock frequencies. As noted above, the only picture formats previously supported in H.263 had been Sub-QCIF, QCIF, CIF, 4CIF, and 16CIF, and the only picture clock frequency had been 30000/1001 (approximately 29.97) clock ticks per second. H.263v2 specified a set of recommended modes in an informative appendix (Appendix II, since deprecated). H.263v3 (H.263++) and Annex X The definition of H.263v3 (also known as H.263++ or as the 2000 version of H.263) added three annexes.
These annexes and an additional annex that specified profiles (approved the following year) were originally published as separate documents from the main body of the standard itself. The additional annexes specified are: Annex U – Enhanced reference picture selection mode Annex V – Data-partitioned slice mode Annex W – Additional supplemental enhancement information specification Annex X (originally specified in 2001) – Profiles and levels definition The prior informative Appendix II (recommended optional enhancement) was obsoleted by the creation of the normative Annex X. In June 2001, another informative appendix (Appendix III, Examples for H.263 encoder/decoder implementations) was approved. It describes techniques for encoding and for error/loss concealment by decoders. In January 2005, a unified H.263 specification document was produced (with the exception of Appendix III, which remains as a separately-published document). In August 2005, an implementors guide was approved to correct a small error in the seldom-used Annex Q reduced-resolution update mode. Open-source implementation In countries without software patents, H.263 video can be legally encoded and decoded with the free LGPL-licensed libavcodec library (part of the FFmpeg project) which is used by programs such as ffdshow, VLC media player and MPlayer. See also H.262/MPEG-2 Part 2 MPEG-4 Part 2 (MPEG-4 Visual) References External links The ITU-T specification for H.263 IETF AVT Working Group - Group that reviews codec packetizations for RTP - RTP Payload Format for ITU-T Rec. H.263 Video - RTP Payload Format for the 1998 Version of ITU-T Rec. H.263 Video (H.263+) (Obsolete, upgraded spec in RFC 4629) - RTP Payload Format for H.263 Video Streams (Historic) H.263 - MultimediaWiki Intel Integrated Performance Primitives H.263 implementation in vic (source code available) H.26x ITU-T H Series Recommendations ITU-T recommendations Open standards covered by patents Video codecs Videotelephony
14026
https://en.wikipedia.org/wiki/House%20of%20Orange%20%28disambiguation%29
House of Orange (disambiguation)
The House of Orange is a branch of the House of Nassau active in European politics. House of Orange may also refer to: The House of Orange (song), by Stan Rogers House of Orange-Chalon, a medieval Frankish dynasty of Burgundy House of Orange-Nassau, a branch of the European House of Nassau See also Order of the House of Orange Principality of Orange Prince of Orange, a title originally associated with the Principality of Orange
14029
https://en.wikipedia.org/wiki/Histone
Histone
In biology, histones are highly basic proteins, abundant in lysine and arginine residues, that are found in eukaryotic cell nuclei. They act as spools around which DNA winds to create structural units called nucleosomes. Nucleosomes in turn are wrapped into 30-nanometer fibers that form tightly packed chromatin. Histones prevent DNA from becoming tangled and protect it from DNA damage. In addition, histones play important roles in gene regulation and DNA replication. Without histones, unwound DNA in chromosomes would be very long. For example, each human cell has about 1.8 meters of DNA if completely stretched out; however, when wound about histones, this length is reduced to about 90 micrometers (0.09 mm) of 30 nm diameter chromatin fibers. There are five families of histones, designated H1/H5 (the linker histones) and H2A, H2B, H3, and H4 (the core histones). The nucleosome core is formed of two H2A-H2B dimers and a H3-H4 tetramer. The tight wrapping of DNA around histones is to a large degree a result of electrostatic attraction between the positively charged histones and the negatively charged phosphate backbone of DNA. Histones may be chemically modified through the action of enzymes to regulate gene transcription. The most common modifications are methylation of arginine or lysine residues and acetylation of lysine. Methylation can affect how other proteins, such as transcription factors, interact with the nucleosomes. Lysine acetylation eliminates a positive charge on lysine, thereby weakening the electrostatic attraction between histone and DNA and resulting in partial unwinding of the DNA, making it more accessible for gene expression. Classes and variants Five major families of histones exist: H1/H5, H2A, H2B, H3, and H4. Histones H2A, H2B, H3 and H4 are known as the core histones, while histones H1/H5 are known as the linker histones.
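The compaction figures quoted above (about 1.8 m of DNA per human cell packed into roughly 90 µm of 30 nm chromatin fiber) can be checked with simple arithmetic; the variable names below are mine, chosen for clarity:

```python
dna_length_m = 1.8          # total DNA per human cell, fully stretched out
chromatin_length_m = 90e-6  # the same DNA wound into 30 nm chromatin fibers

# Linear compaction achieved by histone packaging
compaction = dna_length_m / chromatin_length_m
print(f"{compaction:,.0f}-fold")  # 20,000-fold
```

So histone winding alone shortens the DNA roughly 20,000-fold at this level of organization; further folding during mitosis compacts it even more.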
The core histones all exist as dimers, which are similar in that they all possess the histone fold domain: three alpha helices linked by two loops. It is this helical structure that allows for interaction between distinct dimers, particularly in a head-tail fashion (also called the handshake motif). The resulting four distinct dimers then come together to form one octameric nucleosome core, approximately 63 Angstroms in diameter (a solenoid-like particle). Around 146 base pairs (bp) of DNA wrap around this core particle 1.65 times in a left-handed super-helical turn to give a particle of around 100 Angstroms across. The linker histone H1 binds the nucleosome at the entry and exit sites of the DNA, thus locking the DNA into place and allowing the formation of higher-order structure. The most basic such formation is the 10 nm fiber, or beads-on-a-string conformation. This involves the wrapping of DNA around nucleosomes, with approximately 50 base pairs of DNA separating each pair of nucleosomes (also referred to as linker DNA). Higher-order structures include the 30 nm fiber (forming an irregular zigzag) and the 100 nm fiber, these being the structures found in normal cells. During mitosis and meiosis, the condensed chromosomes are assembled through interactions between nucleosomes and other regulatory proteins. Histones are subdivided into canonical replication-dependent histones, which are expressed during the S-phase of the cell cycle, and replication-independent histone variants, expressed during the whole cell cycle. In animals, genes encoding canonical histones are typically clustered along the chromosome, lack introns and use a stem-loop structure at the 3' end instead of a polyA tail. Genes encoding histone variants are usually not clustered, have introns and their mRNAs are regulated with polyA tails. Complex multicellular organisms typically have a higher number of histone variants providing a variety of different functions.
Data are accumulating about the roles of diverse histone variants, highlighting the functional links between variants and the delicate regulation of organism development. Histone variants from different organisms, their classification and variant-specific features can be found in the "HistoneDB 2.0 - Variants" database. The following is a list of human histone proteins. Structure The nucleosome core is formed of two H2A-H2B dimers and a H3-H4 tetramer, forming two nearly symmetrical halves by tertiary structure (C2 symmetry; one macromolecule is the mirror image of the other). The H2A-H2B dimers and H3-H4 tetramer also show pseudodyad symmetry. The four 'core' histones (H2A, H2B, H3 and H4) are relatively similar in structure and are highly conserved through evolution, all featuring a 'helix turn helix turn helix' motif (a DNA-binding protein motif that recognizes specific DNA sequences). They also share the feature of long 'tails' on one end of the amino acid structure - this being the location of post-translational modification (see below). Archaeal histones contain only an H3-H4-like dimeric structure made out of the same protein. Such dimeric structures can stack into a tall superhelix ("hypernucleosome") onto which DNA coils in a manner similar to nucleosome spools. Only some archaeal histones have tails. The distance between the spools around which eukaryotic cells wind their DNA has been determined to range from 59 to 70 Å.
In all, histones make five types of interactions with DNA: salt bridges and hydrogen bonds between side chains of basic amino acids (especially lysine and arginine) and phosphate oxygens on DNA; helix dipoles from alpha-helices in H2B, H3, and H4, which cause a net positive charge to accumulate at the point of interaction with negatively charged phosphate groups on DNA; hydrogen bonds between the DNA backbone and the amide group on the main chain of histone proteins; nonpolar interactions between the histone and deoxyribose sugars on DNA; and non-specific minor groove insertions of the H3 and H2B N-terminal tails into two minor grooves each on the DNA molecule. The highly basic nature of histones, aside from facilitating DNA-histone interactions, contributes to their water solubility. Histones are subject to post-translational modification by enzymes, primarily on their N-terminal tails, but also in their globular domains. Such modifications include methylation, citrullination, acetylation, phosphorylation, SUMOylation, ubiquitination, and ADP-ribosylation. This affects their function in gene regulation. In general, genes that are active have less bound histone, while inactive genes are highly associated with histones during interphase. It also appears that the structure of histones has been evolutionarily conserved, as any deleterious mutations would be severely maladaptive. All histones have a highly positively charged N-terminus with many lysine and arginine residues. Evolution and species distribution Core histones are found in the nuclei of eukaryotic cells and in most Archaeal phyla, but not in bacteria. However, the linker histones have homologs in bacteria. The unicellular algae known as dinoflagellates were previously thought to be the only eukaryotes that completely lack histones; however, later studies showed that their DNA still encodes histone genes.
Unlike the core histones, lysine-rich linker histone (H1) proteins are found in bacteria, otherwise known as nucleoprotein HC1/HC2. It has been proposed that histone proteins are evolutionarily related to the helical part of the extended AAA+ ATPase domain, the C-domain, and to the N-terminal substrate recognition domain of Clp/Hsp100 proteins. Despite the differences in their topology, these three folds share a homologous helix-strand-helix (HSH) motif. Archaeal histones may well resemble the evolutionary precursors to eukaryotic histones. Furthermore, the nucleosome (core) histones may have evolved from ribosomal proteins (RPS6/RPS15), with which they share much in common, both being short and basic proteins. Histone proteins are among the most highly conserved proteins in eukaryotes, emphasizing their important role in the biology of the nucleus. In contrast, mature sperm cells largely use protamines to package their genomic DNA, most likely because this allows them to achieve an even higher packaging ratio. There are some variant forms in some of the major classes. They share amino acid sequence homology and core structural similarity to a specific class of major histones but also have their own features distinct from the major histones. These minor histones usually carry out specific functions of chromatin metabolism. For example, the histone H3-like CENPA is associated with only the centromere region of the chromosome. The histone H2A variant H2A.Z is associated with the promoters of actively transcribed genes and is also involved in the prevention of the spread of silent heterochromatin. Furthermore, H2A.Z plays roles in chromatin that support genome stability. Another H2A variant, H2A.X, is phosphorylated at S139 in regions around double-strand breaks and marks the region undergoing DNA repair. Histone H3.3 is associated with the body of actively transcribed genes. Function Compacting DNA strands Histones act as spools around which DNA winds.
This enables the compaction necessary to fit the large genomes of eukaryotes inside cell nuclei: the compacted molecule is 40,000 times shorter than an unpacked molecule. Chromatin regulation Histones undergo posttranslational modifications that alter their interaction with DNA and nuclear proteins. The H3 and H4 histones have long tails protruding from the nucleosome, which can be covalently modified at several places. Modifications of the tail include methylation, acetylation, phosphorylation, ubiquitination, SUMOylation, citrullination, and ADP-ribosylation. The core of the histones H2A and H2B can also be modified. Combinations of modifications are thought to constitute a code, the so-called "histone code". Histone modifications act in diverse biological processes such as gene regulation, DNA repair, chromosome condensation (mitosis) and spermatogenesis (meiosis). The common nomenclature of histone modifications is: the name of the histone (e.g., H3); the single-letter amino acid abbreviation (e.g., K for lysine) and the amino acid position in the protein; the type of modification (Me: methyl, P: phosphate, Ac: acetyl, Ub: ubiquitin); and the number of modifications (only Me is known to occur in more than one copy per residue; 1, 2 or 3 denotes mono-, di- or tri-methylation). So H3K4me1 denotes the monomethylation of the 4th residue (a lysine) from the start (i.e., the N-terminal) of the H3 protein. Modification A huge catalogue of histone modifications has been described, but a functional understanding of most is still lacking. Collectively, it is thought that histone modifications may underlie a histone code, whereby combinations of histone modifications have specific meanings. However, most functional data concern individual prominent histone modifications that are biochemically amenable to detailed study.
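The nomenclature described above is regular enough to parse mechanically. The sketch below does so with a regular expression (an illustrative sketch; the function name and pattern are mine, not a standard bioinformatics tool, and they cover only the modification types listed above):

```python
import re

# Histone name, then residue letter + position, then modification
# type (me/ac/p/ub), then an optional multiplicity digit (Me only).
MOD_PATTERN = re.compile(
    r"^(?P<histone>H[0-9A-Z.]+)"
    r"(?P<residue>[A-Z])(?P<position>\d+)"
    r"(?P<mod>me|ac|p|ub)"
    r"(?P<count>\d)?$",
    re.IGNORECASE,
)

def parse_mark(mark):
    """Split a mark like 'H3K4me1' into its nomenclature fields."""
    m = MOD_PATTERN.match(mark)
    if not m:
        raise ValueError(f"unrecognised histone mark: {mark}")
    d = m.groupdict()
    d["position"] = int(d["position"])
    d["count"] = int(d["count"]) if d["count"] else 1
    return d

print(parse_mark("H3K4me1"))
# {'histone': 'H3', 'residue': 'K', 'position': 4, 'mod': 'me', 'count': 1}
```

The same pattern handles marks mentioned elsewhere in this article, such as H3K27me3, H3K27ac and H2AK119Ub, because the regex backtracks to find the residue letter after the histone name.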
Chemistry Lysine methylation The addition of one, two, or three methyl groups to lysine has little effect on the chemistry of the histone; methylation leaves the charge of the lysine intact and adds a minimal number of atoms, so steric interactions are mostly unaffected. However, proteins containing Tudor, chromo or PHD domains, amongst others, can recognise lysine methylation with exquisite sensitivity and differentiate mono-, di- and tri-methyl lysine, to the extent that, for some lysines (e.g., H4K20), mono-, di- and tri-methylation appear to have different meanings. Because of this, lysine methylation tends to be a very informative mark and dominates the known histone modification functions. Glutamine serotonylation It has recently been shown that the addition of a serotonin group to the glutamine at position 5 of H3 occurs in serotonergic cells such as neurons. This is part of the differentiation of the serotonergic cells. This post-translational modification happens in conjunction with the H3K4me3 modification. The serotonylation potentiates the binding of the general transcription factor TFIID to the TATA box. Arginine methylation What was said above of the chemistry of lysine methylation also applies to arginine methylation, and some protein domains—e.g., Tudor domains—can be specific for methyl arginine instead of methyl lysine. Arginine is known to be mono- or di-methylated, and methylation can be symmetric or asymmetric, potentially with different meanings. Arginine citrullination Enzymes called peptidylarginine deiminases (PADs) hydrolyze the imine group of arginines and attach a keto group, so that there is one less positive charge on the amino acid residue. This process has been implicated in the activation of gene expression by making the modified histones less tightly bound to DNA and thus making the chromatin more accessible.
PADs can also produce the opposite effect by removing or inhibiting mono-methylation of arginine residues on histones and thus antagonizing the positive effect arginine methylation has on transcriptional activity. Lysine acetylation Addition of an acetyl group has a major chemical effect on lysine as it neutralises the positive charge. This reduces electrostatic attraction between the histone and the negatively charged DNA backbone, loosening the chromatin structure; highly acetylated histones form more accessible chromatin and tend to be associated with active transcription. Lysine acetylation appears to be less precise in meaning than methylation, in that histone acetyltransferases tend to act on more than one lysine; presumably this reflects the need to alter multiple lysines to have a significant effect on chromatin structure. The modification includes H3K27ac. Serine/threonine/tyrosine phosphorylation Addition of a negatively charged phosphate group can lead to major changes in protein structure, leading to the well-characterised role of phosphorylation in controlling protein function. It is not clear what structural implications histone phosphorylation has, but histone phosphorylation has clear functions as a post-translational modification, and binding domains such as BRCT have been characterised. Effects on transcription Most well-studied histone modifications are involved in control of transcription. Actively transcribed genes Two histone modifications are particularly associated with active transcription: Trimethylation of H3 lysine 4 (H3K4me3) This trimethylation occurs at the promoter of active genes and is performed by the COMPASS complex. Despite the conservation of this complex and histone modification from yeast to mammals, it is not entirely clear what role this modification plays. 
However, it is an excellent mark of active promoters, and the level of this histone modification at a gene's promoter is broadly correlated with transcriptional activity of the gene. The formation of this mark is tied to transcription in a rather convoluted manner: early in transcription of a gene, RNA polymerase II undergoes a switch from 'initiating' to 'elongating', marked by a change in the phosphorylation states of the RNA polymerase II C-terminal domain (CTD). The same enzyme that phosphorylates the CTD also phosphorylates the Rad6 complex, which in turn adds a ubiquitin mark to H2B K123 (K120 in mammals). H2BK123Ub occurs throughout transcribed regions, but this mark is required for COMPASS to trimethylate H3K4 at promoters. Trimethylation of H3 lysine 36 (H3K36me3) This trimethylation occurs in the body of active genes and is deposited by the methyltransferase Set2. This protein associates with elongating RNA polymerase II, and H3K36Me3 is indicative of actively transcribed genes. H3K36Me3 is recognised by the Rpd3 histone deacetylase complex, which removes acetyl modifications from surrounding histones, increasing chromatin compaction and repressing spurious transcription. Increased chromatin compaction prevents transcription factors from accessing DNA, and reduces the likelihood of new transcription events being initiated within the body of the gene. This process therefore helps ensure that transcription is not interrupted. Repressed genes Three histone modifications are particularly associated with repressed genes: Trimethylation of H3 lysine 27 (H3K27me3) This histone modification is deposited by the polycomb complex PRC2. It is a clear marker of gene repression, and is likely bound by other proteins to exert a repressive function. Another polycomb complex, PRC1, can bind H3K27me3 and adds the histone modification H2AK119Ub, which aids chromatin compaction.
Based on these data, it appears that PRC1 is recruited through the action of PRC2; however, recent studies show that PRC1 is recruited to the same sites in the absence of PRC2. Di- and tri-methylation of H3 lysine 9 (H3K9me2/3) H3K9me2/3 is a well-characterised marker for heterochromatin, and is therefore strongly associated with gene repression. The formation of heterochromatin has been best studied in the yeast Schizosaccharomyces pombe, where it is initiated by recruitment of the RNA-induced transcriptional silencing (RITS) complex to double-stranded RNAs produced from centromeric repeats. RITS recruits the Clr4 histone methyltransferase, which deposits H3K9me2/3. This process is called histone methylation. H3K9Me2/3 serves as a binding site for the recruitment of Swi6 (heterochromatin protein 1 or HP1, another classic heterochromatin marker), which in turn recruits further repressive activities including histone modifiers such as histone deacetylases and histone methyltransferases. Trimethylation of H4 lysine 20 (H4K20me3) This modification is tightly associated with heterochromatin, although its functional importance remains unclear. This mark is placed by the Suv4-20h methyltransferase, which is at least in part recruited by heterochromatin protein 1. Bivalent promoters Analysis of histone modifications in embryonic stem cells (and other stem cells) revealed many gene promoters carrying both H3K4Me3 and H3K27Me3; in other words, these promoters display both activating and repressing marks simultaneously. This peculiar combination of modifications marks genes that are poised for transcription; they are not required in stem cells, but are rapidly required after differentiation into some lineages. Once the cell starts to differentiate, these bivalent promoters are resolved to either active or repressive states depending on the chosen lineage. Other functions DNA damage Marking sites of DNA damage is an important function for histone modifications.
It also protects DNA from destruction by ultraviolet radiation from the sun. Phosphorylation of H2AX at serine 139 (γH2AX) Phosphorylated H2AX (also known as gamma H2AX) is a marker for DNA double-strand breaks, and forms part of the response to DNA damage. H2AX is phosphorylated early after detection of a DNA double-strand break, and forms a domain extending many kilobases either side of the damage. Gamma H2AX acts as a binding site for the protein MDC1, which in turn recruits key DNA repair proteins (this complex topic is well reviewed in) and as such, gamma H2AX forms a vital part of the machinery that ensures genome stability. Acetylation of H3 lysine 56 (H3K56Ac) H3K56Ac is required for genome stability. H3K56 is acetylated by the p300/Rtt109 complex, but is rapidly deacetylated around sites of DNA damage. H3K56 acetylation is also required to stabilise stalled replication forks, preventing dangerous replication fork collapses. Although in general mammals make far greater use of histone modifications than microorganisms, a major role of H3K56Ac in DNA replication exists only in fungi, and this has become a target for antibiotic development. DNA repair Trimethylation of H3 lysine 36 (H3K36me3) H3K36me3 has the ability to recruit the MSH2-MSH6 (hMutSα) complex of the DNA mismatch repair pathway. Consistently, regions of the human genome with high levels of H3K36me3 accumulate fewer somatic mutations due to mismatch repair activity. Chromosome condensation Phosphorylation of H3 at serine 10 (phospho-H3S10) The mitotic kinase aurora B phosphorylates histone H3 at serine 10, triggering a cascade of changes that mediate mitotic chromosome condensation. Condensed chromosomes therefore stain very strongly for this mark, but H3S10 phosphorylation is also present at certain chromosome sites outside mitosis, for example in pericentric heterochromatin of cells during G2.
H3S10 phosphorylation has also been linked to DNA damage caused by R-loop formation at highly transcribed sites. Phosphorylation of H2B at serine 10/14 (phospho-H2BS10/14) Phosphorylation of H2B at serine 10 (yeast) or serine 14 (mammals) is also linked to chromatin condensation, but for the very different purpose of mediating chromosome condensation during apoptosis. This mark is not simply a late-acting bystander in apoptosis, as yeast carrying mutations of this residue are resistant to hydrogen peroxide-induced apoptotic cell death. Addiction Epigenetic modifications of histone tails in specific regions of the brain are of central importance in addictions. Once particular epigenetic alterations occur, they appear to be long-lasting "molecular scars" that may account for the persistence of addictions. Cigarette smokers (about 15% of the US population) are usually addicted to nicotine. After 7 days of nicotine treatment of mice, acetylation of both histone H3 and histone H4 was increased at the FosB promoter in the nucleus accumbens of the brain, causing a 61% increase in FosB expression. This would also increase expression of the splice variant Delta FosB. In the nucleus accumbens of the brain, Delta FosB functions as a "sustained molecular switch" and "master control protein" in the development of an addiction. About 7% of the US population is addicted to alcohol. In rats exposed to alcohol for up to 5 days, there was an increase in histone 3 lysine 9 acetylation at the pronociceptin promoter in the brain amygdala complex. This acetylation is an activating mark for pronociceptin. The nociceptin/nociceptin opioid receptor system is involved in the reinforcing or conditioning effects of alcohol. Methamphetamine addiction occurs in about 0.2% of the US population. 
Chronic methamphetamine use causes methylation of the lysine in position 4 of histone 3 located at the promoters of the c-fos and the C-C chemokine receptor 2 (ccr2) genes, activating those genes in the nucleus accumbens (NAc). c-fos is well known to be important in addiction. The ccr2 gene is also important in addiction, since mutational inactivation of this gene impairs addiction. Synthesis The first step of chromatin structure duplication is the synthesis of histone proteins: H1, H2A, H2B, H3, H4. These proteins are synthesized during S phase of the cell cycle. There are different mechanisms which contribute to the increase of histone synthesis. Yeast Yeast carry one or two copies of each histone gene, which are not clustered but rather scattered throughout the chromosomes. Histone gene transcription is controlled by multiple gene regulatory proteins such as transcription factors which bind to histone promoter regions. In budding yeast, the candidate gene for activation of histone gene expression is SBF. SBF is a transcription factor that is activated in late G1 phase, when it dissociates from its repressor Whi5. This occurs when Whi5 is phosphorylated by Cdc28, a G1/S Cdk. Suppression of histone gene expression outside of S phase is dependent on Hir proteins, which form an inactive chromatin structure at the locus of histone genes, causing transcriptional activators to be blocked. Metazoan In metazoans the increase in the rate of histone synthesis is due to the increase in processing of pre-mRNA to its mature form as well as a decrease in mRNA degradation; this results in an increase of active mRNA for translation of histone proteins. The mechanism for mRNA activation has been found to be the removal of a segment of the 3' end of the mRNA strand, and is dependent on association with stem-loop binding protein (SLBP). SLBP also stabilizes histone mRNAs during S phase by blocking degradation by the 3'hExo nuclease. 
SLBP levels are controlled by cell-cycle proteins, causing SLBP to accumulate as cells enter S phase and to be degraded as cells leave S phase. SLBP is marked for degradation by phosphorylation at two threonine residues by cyclin-dependent kinases, possibly cyclin A/Cdk2, at the end of S phase. Metazoans also have multiple copies of histone genes clustered on chromosomes, which are localized in structures called Cajal bodies, as determined by genome-wide chromosome conformation capture analysis (4C-Seq). Link between cell-cycle control and synthesis Nuclear protein Ataxia-Telangiectasia (NPAT), also known as nuclear protein coactivator of histone transcription, is a transcription factor which activates histone gene transcription on chromosomes 1 and 6 of human cells. NPAT is also a substrate of cyclin E-Cdk2, which is required for the transition between G1 phase and S phase. NPAT activates histone gene expression only after it has been phosphorylated by the G1/S-Cdk cyclin E-Cdk2 in early S phase. This shows an important regulatory link between cell-cycle control and histone synthesis. History Histones were discovered in 1884 by Albrecht Kossel. The word "histone" dates from the late 19th century and is derived from the German word "Histon", a word itself of uncertain origin, perhaps from Ancient Greek ἵστημι (hístēmi, “make stand”) or ἱστός (histós, “loom”). In the early 1960s, before the types of histones were known and before histones were known to be highly conserved across taxonomically diverse organisms, James F. Bonner and his collaborators began a study of these proteins, which were known to be tightly associated with the DNA in the nucleus of higher organisms. Bonner and his postdoctoral fellow Ru Chih C. Huang showed that isolated chromatin would not support RNA transcription in the test tube, but if the histones were extracted from the chromatin, RNA could be transcribed from the remaining DNA. Their paper became a citation classic. 
Paul Ts'o and James Bonner had called together a World Congress on Histone Chemistry and Biology in 1964, in which it became clear that there was no consensus on the number of kinds of histone and that no one knew how they would compare when isolated from different organisms. Bonner and his collaborators then developed methods to separate each type of histone, purified individual histones, compared amino acid compositions in the same histone from different organisms, and compared amino acid sequences of the same histone from different organisms in collaboration with Emil Smith from UCLA. For example, they found the Histone IV sequence to be highly conserved between peas and calf thymus. However, their work on the biochemical characteristics of individual histones did not reveal how the histones interacted with each other or with the DNA to which they were tightly bound. Also in the 1960s, Vincent Allfrey and Alfred Mirsky had suggested, based on their analyses of histones, that acetylation and methylation of histones could provide a transcriptional control mechanism, but did not have available the kind of detailed analysis that later investigators were able to conduct to show how such regulation could be gene-specific. Until the early 1990s, histones were dismissed by most as inert packing material for eukaryotic nuclear DNA, a view based in part on the models of Mark Ptashne and others, who believed that transcription was activated by protein-DNA and protein-protein interactions on largely naked DNA templates, as is the case in bacteria. During the 1980s, Yahli Lorch and Roger Kornberg showed that a nucleosome on a core promoter prevents the initiation of transcription in vitro, and Michael Grunstein demonstrated that histones repress transcription in vivo, leading to the idea of the nucleosome as a general gene repressor. Relief from repression is believed to involve both histone modification and the action of chromatin-remodeling complexes. 
Vincent Allfrey and Alfred Mirsky had earlier proposed a role of histone modification in transcriptional activation, regarded as a molecular manifestation of epigenetics. Michael Grunstein and David Allis found support for this proposal, in the importance of histone acetylation for transcription in yeast and the activity of the transcriptional activator Gcn5 as a histone acetyltransferase. The discovery of the H5 histone appears to date back to the 1970s, and it is now considered an isoform of Histone H1. See also Histone variants Chromatin Gene silencing Genetics Histone acetyltransferase Histone deacetylases Histone methyltransferase Histone-modifying enzymes Nucleosome PRMT4 pathway Protamine Histone H1 References External links HistoneDB 2.0 - Database of histones and variants at NCBI Chromatin, Histones & Cathepsin; PMAP The Proteolysis Map-animation Epigenetics Proteins DNA-binding proteins
https://en.wikipedia.org/wiki/Hierarchical%20organization
Hierarchical organization
A hierarchical organization is an organizational structure where every entity in the organization, except one, is subordinate to a single other entity. This arrangement is a form of hierarchy. In an organization, the hierarchy usually consists of a single person or group holding power at the top, with subsequent levels of power beneath them. This is the dominant mode of organization among large organizations; most corporations, governments, criminal enterprises, and organized religions are hierarchical organizations with different levels of management, power or authority. For example, the broad, top-level overview of the general organization of the Catholic Church consists of the Pope, then the Cardinals, then the Archbishops, and so on. Members of hierarchical organizational structures chiefly communicate with their immediate superior and with their immediate subordinates. Structuring organizations in this way is useful partly because it can reduce the communication overhead by limiting information flow. Visualization A hierarchy is typically visualized as a pyramid, where the height of a ranking or person depicts their power status and the width of that level represents how many people or business divisions are at that level relative to the whole—the highest-ranking people are at the apex, and there are very few of them, in many cases only one; the base may include thousands of people who have no subordinates. These hierarchies are typically depicted with a tree or triangle diagram, creating an organizational chart or organogram. Those nearest the top have more power than those nearest the bottom, and there are fewer people at the top than at the bottom. As a result, superiors in a hierarchy generally have higher status and command greater rewards than their subordinates. Common social manifestations All governments and most companies feature similar hierarchical structures. Traditionally, the monarch stood at the pinnacle of the state. 
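The tree structure used in organizational charts can be sketched as a simple data structure. This is an illustrative sketch only; the four-level chain and all names below are invented, not taken from the article:

```python
# Minimal sketch of a hierarchical organization as a tree.
# Every member except one (the apex) reports to exactly one superior,
# and communication flows along the chain of command.

class Member:
    def __init__(self, name, superior=None):
        self.name = name
        self.superior = superior          # None only for the apex
        self.subordinates = []
        if superior is not None:
            superior.subordinates.append(self)

    def chain_of_command(self):
        """Path from this member up to the single entity at the apex."""
        node, path = self, []
        while node is not None:
            path.append(node.name)
            node = node.superior
        return path

# Hypothetical four-level pyramid:
ceo = Member("CEO")                       # apex: the one entity with no superior
vp = Member("VP", superior=ceo)
manager = Member("Manager", superior=vp)
worker = Member("Worker", superior=manager)

print(worker.chain_of_command())          # ['Worker', 'Manager', 'VP', 'CEO']
```

Because each member links only to one superior and a list of subordinates, information between two arbitrary members must travel up one chain of command and down another, which is the communication-overhead reduction described above.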
In many countries, feudalism and manorialism provided a formal social structure that established hierarchical links pervading every level of society, with the monarch at the top. In modern post-feudal states the nominal top of the hierarchy remains a head of state - sometimes a president or a constitutional monarch - although in many modern states the powers of the head of state are delegated among different bodies. Below or alongside this head there is commonly a senate, parliament or congress; such bodies in turn often delegate the day-to-day running of the country to a prime minister, who may head a cabinet. In many democracies, constitutions theoretically regard "the people" as the notional top of the hierarchy, above the head of state; in reality, the people's influence is often restricted to voting in elections or referendums. In business, the business owner traditionally occupies the pinnacle of the organization. Most modern large companies lack a single dominant shareholder and for most purposes delegate the collective power of the business owners to a board of directors, which in turn delegates the day-to-day running of the company to a managing director or CEO. Again, although the shareholders of the company nominally rank at the top of the hierarchy, in reality many companies are run at least in part as personal fiefdoms by their management; corporate governance rules attempt to mitigate this tendency. Origins and development of social hierarchical organizations Smaller and more informal social units - families, bands, tribes, special interest groups - which may form spontaneously, have little need for complex hierarchies - or indeed for any hierarchies. They may rely on self-organizing tendencies. A conventional view ascribes the growth of hierarchical social habits and structures to increased complexity; the religious syncretism and issues of tax-gathering in expanding empires played a role here. 
The demands of administration in increasingly larger systems may have assisted the flowering of bureaucracy and the advent of the professional manager (19th century) and of the technocrat (20th century). Studies The organizational development theorist Elliott Jaques identified a special role for hierarchy in his concept of requisite organization. The iron law of oligarchy, introduced by Robert Michels, describes the inevitable tendency of hierarchical organizations to become oligarchic in their decision making. The Peter Principle is a term coined by Laurence J. Peter, in which the selection of a candidate for a position in a hierarchical organization is based on the candidate's performance in their current role, rather than on abilities relevant to the intended role. Thus, employees only stop being promoted once they can no longer perform effectively, and managers in a hierarchical organization "rise to the level of their incompetence." Hierarchiology is another term coined by Laurence J. Peter, described in his humorous book of the same name, to refer to the study of hierarchical organizations and the behavior of their members. David Andrews' book The IRG Solution: Hierarchical Incompetence and how to Overcome it argued that hierarchies were inherently incompetent, and were only able to function due to large amounts of informal lateral communication fostered by private informal networks. Criticism and alternatives In the work of diverse theorists such as William James (1842–1910), Michel Foucault (1926–1984) and Hayden White, important critiques of hierarchical epistemology are advanced. James famously asserts in his work "Radical Empiricism" that clear distinctions of type and category are a constant but unwritten goal of scientific reasoning, so that when they are discovered, success is declared. But if aspects of the world are organized differently, involving inherent and intractable ambiguities, then scientific questions are often considered unresolved. 
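The Peter Principle's promotion rule - advancement decided by performance in the current role rather than fitness for the next one - can be stated as a toy model. The performance scores and threshold below are invented for illustration, not taken from Peter's book:

```python
# Toy model of the Peter Principle: an employee is promoted as long as
# their performance in the *current* role exceeds a threshold, so they
# rise until reaching a role they perform poorly in, and stay there.

def final_level(performance_by_level, threshold=0.5):
    """Return the level at which promotion stops.

    performance_by_level[i] is the (hypothetical) performance score
    the employee would achieve at level i.
    """
    level = 0
    while level + 1 < len(performance_by_level) and \
          performance_by_level[level] > threshold:
        level += 1    # promoted on current-role performance alone
    return level

# Competent as clerk (0.9) and supervisor (0.8), incompetent as
# manager (0.3): the employee is promoted into, and remains at, level 2.
print(final_level([0.9, 0.8, 0.3]))    # prints 2
```

The point of the sketch is that the stopping condition never consults the performance the employee will have in the next role, which is exactly why the process halts at the "level of incompetence".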
A hesitation to declare success upon the discovery of ambiguities leaves heterarchy at an artificial and subjective disadvantage in the scope of human knowledge. This bias is an artifact of an aesthetic or pedagogical preference for hierarchy, and not necessarily an expression of objective observation. Hierarchies and hierarchical thinking have been criticized by many people, including Susan McClary, and by anarchism, a political philosophy vehemently opposed to hierarchical organization. Heterarchy is the most commonly proposed alternative to hierarchy, and it has been combined with responsible autonomy by Gerard Fairtlough in his work on Triarchy theory. The most beneficial aspect of a hierarchical organization is the clear chain of command it establishes; however, a hierarchy may be undermined by abuse of power. Amidst constant innovation in information and communication technologies, hierarchical authority structures are giving way to greater decision-making latitude for individuals and more flexible definitions of job activities. This new style of work presents a challenge to existing organizational forms, with some research studies contrasting traditional organizational forms with groups that operate as online communities, characterized by personal motivation and the satisfaction of making one's own decisions. With all levels of an organization having access to information and communication via digital means, power structures align more as a wirearchy, enabling the flow of power and authority to be based not on hierarchical levels, but on information, trust, credibility, and a focus on results. See also References Hierarchy Organizational structure Corporate governance Bureaucratic organization
https://en.wikipedia.org/wiki/Harry%20Secombe
Harry Secombe
Sir Harold Donald Secombe (8 September 1921 – 11 April 2001) was a Welsh comedian, actor, singer and television presenter. Secombe was a member of the British radio comedy programme The Goon Show (1951–1960), playing many characters, most notably Neddie Seagoon. An accomplished tenor, he also appeared in musicals and films – notably as Bumble in Oliver! (1968) – and, in his later years, was a presenter of television shows incorporating hymns and other devotional songs. Early life Secombe was born in St Thomas, Swansea, the third of four children of Nellie Jane Gladys (née Davies), a shop manageress, and Frederick Ernest Secombe, a commercial traveller and office worker for a Swansea wholesale grocery business. From the age of 11 he attended Dynevor School, a state grammar school in central Swansea. His family were regular churchgoers, belonging to the congregation of St Thomas Church. A member of the choir, from the age of 12 Secombe would perform a sketch entitled The Welsh Courtship at church socials, acting as "feed" to his sister Carol. His elder brother, Fred Secombe, was the author of several books about his experiences as an Anglican priest and rector. Army service After leaving school in 1937, Secombe became a pay clerk at Baldwin's store. With war looming, he decided in 1938 that he would join the Territorial Army. Very short sighted, he got a friend to tell him the sight test, and then learnt it by heart. He served as a Lance Bombardier in No.132 Field Regiment of the Royal Artillery. He referred to the unit in which he served during the Second World War in the North African Campaign, Sicily, and Italy, as "The Five-Mile Snipers". While in North Africa Secombe met Spike Milligan for the first time. In Sicily he joined a concert party and developed his own comedy routines to entertain the troops. 
When Secombe visited the Falkland Islands to entertain the troops after the 1982 Falklands War, his old regiment promoted him to the rank of sergeant – 37 years after he had been demobbed. As an entertainer He made his first radio broadcast in May 1944 on a variety show aimed at the services. Following the end of fighting in the war but prior to demobilisation, Secombe joined a pool of entertainers in Naples and formed a comedy duo with Spike Milligan. Secombe joined the cast of the Windmill Theatre in 1946, using a routine he had developed in Italy about how people shaved. An early review said that Secombe was “an original humorist of the infectious type and is very funny in a series showing how different men shave and in an impression of a vocalist.” Secombe claimed that his ability to sing could always be counted on to save him when he bombed. After a regional touring career, his first break came in radio in 1951 when he was chosen as resident comedian for the Welsh series Welsh Rarebit, followed by appearances on Variety Bandbox and a regular role in Educating Archie. Secombe met Michael Bentine at the Windmill Theatre, and he was introduced to Peter Sellers by his agent Jimmy Grafton. Both Milligan and Sellers credited him with keeping the act on the bill when club owners had wanted to sack them. Together, the four – Secombe, Milligan, Sellers and Bentine – wrote a comedy radio script, and Those Crazy People was commissioned and first broadcast on 28 May 1951. Produced by Dennis Main Wilson, this soon became The Goon Show and the show remained on the air until 1960. Secombe mainly played Neddie Seagoon, around whom the show's absurd plots developed. In 1955, whilst appearing on The Goon Show, Secombe was approached by the BBC to step in at short notice to take the lead in the radio comedy Hancock's Half Hour. The star of the show, Tony Hancock, had decided to take an unannounced break abroad the day before the live airing of the second season. 
Secombe appeared in the lead for the first three episodes and had a guest role in the fourth after Hancock's return. All four episodes are lost, but following the discovery of the original scripts the episodes were rerecorded in 2017, with Andrew Secombe performing the role held by his late father. With the success of The Goon Show, Secombe developed a dual career as both a comedy actor and a singer. At the beginning of his career as an entertainer his act would end with a joke version of the duet Sweethearts, in which he sang both the baritone and falsetto parts. Trained under Italian maestro Manlio di Veroli, he emerged as a bel canto tenor (characteristically, he insisted that in his case this meant "can belto") and had a long list of best-selling record albums to his credit. In 1958 he appeared in the film Jet Storm, which starred Dame Sybil Thorndike and Richard Attenborough and in the same year Secombe starred in the title role in Davy, one of Ealing Studios' last films. The power of his voice allowed Secombe to appear in many stage musicals. This included 1963's Pickwick, based on Charles Dickens' The Pickwick Papers, which gave him the number 18 hit single "If I Ruled the World" – his later signature tune. In 1965 the show was produced on tour in the United States, where on Broadway he garnered a nomination for a Tony Award for Best Actor in a Musical. Secombe scored his biggest hit single in 1967 with his version of "This Is My Song", which peaked at no. 2 on the charts in April 1967 while a recording by Petula Clark, which had hit no. 1 in February, was still in the top ten. He also appeared in the musical The Four Musketeers (1967) at Drury Lane, as Mr. Bumble in Carol Reed's film of Oliver! (1968), and in the Envy segment of The Magnificent Seven Deadly Sins (1971). He went on to star in his own television show, The Harry Secombe Show, which debuted on Christmas Day 1968 on BBC 1 and ran for thirty-one episodes until 1973. 
A sketch comedy show featuring Julian Orchard as Secombe's regular sidekick, the series also featured guest appearances by fellow Goon Spike Milligan as well as leading performers such as Ronnie Barker and Arthur Lowe. Secombe later starred in similar vehicles such as Sing a Song of Secombe and ITV's Secombe with Music during the 1970s. Later career Later in life, Secombe (whose brother Fred Secombe was a priest in the Church in Wales, part of the Anglican Communion) attracted new audiences as a presenter of religious programmes, such as the BBC's Songs of Praise and ITV's Stars on Sunday and Highway. He was also a special programming consultant to Harlech Television and hosted a Thames Television programme in 1979 entitled Cross on the Donkey's Back. In the latter half of the 1980s, Secombe personally sponsored a football team for boys aged 9–11 in the local West Sutton Little League, 'Secombes Knights'. In 1990, he was one of a few to be honoured by a second appearance on This Is Your Life, when he was surprised by Michael Aspel at a book signing in a London branch of WH Smith. Secombe had been a subject of the show previously in March 1958 when Eamonn Andrews surprised him at the BBC Television Theatre. Honours In 1963 he was appointed a Commander of the Order of the British Empire (CBE). He was knighted in 1981, and jokingly referred to himself as Sir Cumference (in recognition of his rotund figure). The motto he chose for his coat of arms was "GO ON", a reference to goon. Later life and death Secombe suffered from peritonitis in 1980. Within two years, taking advice from doctors, he had lost five stone in weight. He had a stroke in 1997, from which he made a slow recovery. He was then diagnosed with prostate cancer in September 1998. After suffering a second stroke in 1999, he was forced to abandon his television career, but made a documentary about his condition in the hope of giving encouragement to other sufferers. 
Secombe had diabetes in the latter part of his life. Secombe died on 11 April 2001 at the age of 79, from prostate cancer, in hospital in Guildford, Surrey. His ashes are interred at the parish church of Shamley Green, and a later memorial service to celebrate his life was held at Westminster Abbey on 26 October 2001. As well as family members and friends, the service was also attended by Charles, Prince of Wales and representatives of Prince Philip, Duke of Edinburgh, Anne, Princess Royal, Princess Margaret, Countess of Snowdon and Prince Edward, Duke of Kent. On his tombstone is the inscription: "To know him was to love him." Upon hearing of his old friend's death, Spike Milligan quipped, "I'm glad he died before me, because I didn't want him to sing at my funeral." But Secombe had the last laugh: upon Milligan's own death the following year, a recording of Secombe singing was played at Spike's memorial service. The Secombe Theatre at Sutton, Greater London, bears his name in memory of this former local personality. He is also fondly remembered at the London Welsh Centre, where he opened the bar on St Patrick's Day (17 March) 1971. Family Secombe met Myra Joan Atherton at the Mumbles Dance Hall in 1946. The couple were married from 1948 until his death, and had four children: Jennifer Secombe (d. 2019), widow of actor Alex Giannini. She was her father's agent in his later years. Andy Secombe, a voice actor, film actor and author David Secombe, a writer and photographer Katy Secombe, an actress Lady Myra Secombe died 7 February 2017, aged 93. 
Selected works Singles "On with the Motley" (Vesti la giubba) (1955) UK #6 "Bless This House" "If I Ruled the World" (1963) UK #18 "This Is My Song" (1967) UK #2 Albums Sacred Songs (1962) UK #16 Pickwick (Original Cast Album) (1965) Secombe's Personal Choice (1967) UK #6 If I Ruled the World (1971) UK #17 The Magnificent Voice of Harry Secombe (1972) AUS #14 With a Song My Heart (1977) AUS #24 Captain Beaky and His Band (1977) Bless This House: 20 Songs of Joy (1978) UK #8, AUS #28 This Is My Song (1983) AUS #9 All Things Bright and Beautiful (1983) AUS #31 Songs for Everyone (1986) AUS #43 Highway of Life (1986) UK #45 Count Your Blessings (1988) AUS #93 Yours Sincerely (1991) UK #46 Books Fiction Twice Brightly (1974) Robson Books Welsh Fargo (1981) Robson Books Children's Katy and the Nurgla (1980) Autobiographical Goon for Lunch (1975) M. J. Hobbs Goon Abroad (1982) Robson Books Arias and Raspberries (1989) Robson Books Strawberries and Cheam (1998) Robson Books Alternative ISBNs for 2004 publication: ; (paperback). Partial Filmography References External links Harry Secombe biography from BBC Wales 1921 births 2001 deaths 20th-century Welsh comedians 21st-century Welsh comedians 20th-century Welsh male singers 21st-century Welsh male singers 20th-century Welsh male actors 21st-century Welsh male actors British Army personnel of World War II Burials in Surrey Deaths from cancer in England Commanders of the Order of the British Empire Deaths from prostate cancer Knights Bachelor Actors awarded knighthoods Singers awarded knighthoods Male actors from Swansea Royal Artillery soldiers Welsh Anglicans Welsh male comedians Welsh male film actors Welsh male musical theatre actors Welsh male radio actors Welsh male television actors Welsh tenors British male comedy actors People educated at Dynevor School, Swansea The Goon Show
https://en.wikipedia.org/wiki/Heroin
Heroin
Heroin, also known as diacetylmorphine and diamorphine among other names, is an opioid used as a recreational drug for its euphoric effects. Medical grade diamorphine is used as a pure hydrochloride salt which is distinguished from black tar heroin, a variable admixture of morphine derivatives—predominantly 6-MAM (6-monoacetylmorphine), which is the result of crude acetylation during clandestine production of street heroin. Diamorphine is used medically in several countries to relieve pain, such as during childbirth or a heart attack, as well as in opioid replacement therapy. It is typically injected, usually into a vein, but it can also be smoked, snorted, or inhaled. In a clinical context the route of administration is most commonly intravenous injection; it may also be given by intramuscular or subcutaneous injection, as well as orally in the form of tablets. The onset of effects is usually rapid and lasts for a few hours. Common side effects include respiratory depression (decreased breathing), dry mouth, drowsiness, impaired mental function, constipation, and addiction. Side effects of use by injection can include abscesses, infected heart valves, blood-borne infections, and pneumonia. After a history of long-term use, opioid withdrawal symptoms can begin within hours of the last use. When given by injection into a vein, heroin has two to three times the effect of a similar dose of morphine. It typically appears in the form of a white or brown powder. Treatment of heroin addiction often includes behavioral therapy and medications. Medications can include buprenorphine, methadone, or naltrexone. A heroin overdose may be treated with naloxone. An estimated 17 million people use opiates, of which heroin is the most common, and opioid use resulted in 122,000 deaths. The total number of heroin users worldwide as of 2015 is believed to have increased in Africa, the Americas, and Asia since 2000. 
In the United States, approximately 1.6 percent of people have used heroin at some point, with 950,000 using it in the last year. When people die from overdosing on a drug, the drug is usually an opioid and often heroin. Heroin was first made by C. R. Alder Wright in 1874 from morphine, a natural product of the opium poppy. Internationally, heroin is controlled under Schedules I and IV of the Single Convention on Narcotic Drugs, and it is generally illegal to make, possess, or sell without a license. About 448 tons of heroin were made in 2016. In 2015, Afghanistan produced about 66% of the world's opium. Illegal heroin is often mixed with other substances such as sugar, starch, caffeine, quinine, or other opioids like fentanyl. Uses Recreational The name heroin, Bayer's original trade name (see 'History' section), is typically used in non-medical settings. It is used as a recreational drug for the euphoria it induces. Anthropologist Michael Agar once described heroin as "the perfect whatever drug." Tolerance develops quickly, and increased doses are needed in order to achieve the same effects. Its popularity with recreational drug users, compared to morphine, reportedly stems from its perceived different effects. Short-term addiction studies by the same researchers demonstrated that tolerance developed at a similar rate to both heroin and morphine. When compared to the opioids hydromorphone, fentanyl, oxycodone, and pethidine (meperidine), former addicts showed a strong preference for heroin and morphine, suggesting that heroin and morphine are particularly susceptible to misuse and causing dependence. Morphine and heroin were also much more likely to produce euphoria and other "positive" subjective effects when compared to these other opioids. Medical uses In the United States, heroin is not accepted as medically useful. 
Under the generic name diamorphine, heroin is prescribed as a strong pain medication in the United Kingdom, where it is administered via oral, subcutaneous, intramuscular, intrathecal, intranasal or intravenous routes. It may be prescribed for the treatment of acute pain, such as in severe physical trauma, myocardial infarction, post-surgical pain and chronic pain, including end-stage terminal illnesses. In other countries it is more common to use morphine or other strong opioids in these situations.

In 2004, the National Institute for Health and Clinical Excellence produced guidance on the management of caesarean section, which recommended the use of intrathecal or epidural diamorphine for post-operative pain relief. For women who have had intrathecal opioids, there should be a minimum hourly observation of respiratory rate, sedation and pain scores for at least 12 hours for diamorphine and 24 hours for morphine. Women should be offered diamorphine (0.3–0.4 mg intrathecally) for intra- and postoperative analgesia because it reduces the need for supplemental analgesia after a caesarean section. Epidural diamorphine (2.5–5 mg) is a suitable alternative.

Diamorphine continues to be widely used in palliative care in the UK, where it is commonly given by the subcutaneous route, often via a syringe driver if patients cannot easily swallow morphine solution. The advantage of diamorphine over morphine is that diamorphine is more fat soluble and therefore more potent by injection, so smaller doses are needed for the same effect on pain. Both of these factors are advantageous if giving high doses of opioids via the subcutaneous route, which is often necessary for palliative care.

It is also used in the palliative management of bone fractures and other trauma, especially in children. In the trauma context, it is primarily given intranasally in hospital, although a prepared nasal spray is available.
It has traditionally been prepared by the attending physician, generally from the same "dry" ampoules as used for injection. In children, Ayendi nasal spray is available at 720 micrograms and 1600 micrograms per 50 microlitres actuation of the spray, which may be preferable as a non-invasive alternative in pediatric care, avoiding the fear of injection in children.

Maintenance therapy

A number of European countries prescribe heroin for treatment of heroin addiction. The initial Swiss heroin-assisted treatment (HAT) trial (the "PROVE" study) was conducted as a prospective cohort study with some 1,000 participants in 18 treatment centers between 1994 and 1996; by the end of 2004, 1,200 patients were enrolled in HAT in 23 treatment centers across Switzerland. Diamorphine may be used as a maintenance drug to assist the treatment of opiate addiction, normally in long-term chronic intravenous (IV) heroin users. It is only prescribed following exhaustive efforts at treatment via other means. It is sometimes thought that heroin users can walk into a clinic and walk out with a prescription, but in practice many weeks pass before a prescription for diamorphine is issued. Though this is somewhat controversial among proponents of a zero-tolerance drug policy, it has proven superior to methadone in improving the social and health situations of addicts.

The UK Department of Health's Rolleston Committee Report in 1926 established the British approach to diamorphine prescription to users, which was maintained for the next 40 years: dealers were prosecuted, but doctors could prescribe diamorphine to users when withdrawing. In 1964, the Brain Committee recommended that only selected approved doctors working at approved specialized centres be allowed to prescribe diamorphine and cocaine to users. The law was made more restrictive in 1968.
Beginning in the 1970s, the emphasis shifted to abstinence and the use of methadone; currently, only a small number of users in the UK are prescribed diamorphine.

In 1994, Switzerland began a trial diamorphine maintenance program for users who had failed multiple withdrawal programs. The aim of this program was to maintain the health of the user by avoiding medical problems stemming from the illicit use of diamorphine. The first trial in 1994 involved 340 users, although enrollment was later expanded to 1,000, based on the apparent success of the program. The trials proved diamorphine maintenance to be superior to other forms of treatment in improving the social and health situation for this group of patients. It has also been shown to save money, despite high treatment expenses, as it significantly reduces costs incurred by trials, incarceration, health interventions and delinquency. Patients appear twice daily at a treatment center, where they inject their dose of diamorphine under the supervision of medical staff. They are required to contribute about 450 Swiss francs per month to the treatment costs. A national referendum in November 2008 showed 68% of voters supported the plan, introducing diamorphine prescription into federal law. The previous trials were based on time-limited executive ordinances.

The success of the Swiss trials led German, Dutch, and Canadian cities to try out their own diamorphine prescription programs. Some Australian cities (such as Sydney) have instituted legal diamorphine supervised injecting centers, in line with other wider harm minimization programs.

Since January 2009, Denmark has prescribed diamorphine to a few addicts who have tried methadone and buprenorphine without success. Beginning in February 2010, addicts in Copenhagen and Odense became eligible to receive free diamorphine. Later in 2010, other cities including Århus and Esbjerg joined the scheme.
It was estimated that around 230 addicts would be able to receive free diamorphine. However, Danish addicts would only be able to inject heroin according to the policy set by the Danish National Board of Health. Of the estimated 1,500 drug users who did not benefit from the then-current oral substitution treatment, approximately 900 would not be in the target group for treatment with injectable diamorphine, either because of "massive multiple drug abuse of non-opioids" or "not wanting treatment with injectable diamorphine".

In July 2009, the German Bundestag passed a law allowing diamorphine prescription as a standard treatment for addicts; a large-scale trial of diamorphine prescription had been authorized in the country in 2002.

On 26 August 2016, Health Canada issued regulations amending prior regulations it had issued under the Controlled Drugs and Substances Act (the "New Classes of Practitioners Regulations", the "Narcotic Control Regulations", and the "Food and Drug Regulations") to allow doctors to prescribe diamorphine to people who have a severe opioid addiction and have not responded to other treatments. The prescription heroin can be accessed by doctors through Health Canada's Special Access Programme (SAP) for "emergency access to drugs for patients with serious or life-threatening conditions when conventional treatments have failed, are unsuitable, or are unavailable."

Routes of administration

The onset of heroin's effects depends upon the route of administration. Smoking is the fastest route of drug administration, although intravenous injection results in a quicker rise in blood concentration. These are followed by suppository (anal or vaginal insertion), insufflation (snorting), and ingestion (swallowing). A 2002 study suggests that a fast onset of action increases the reinforcing effects of addictive drugs. Ingestion does not produce a rush as a forerunner to the high experienced with the use of heroin, which is most pronounced with intravenous use.
While the onset of the rush induced by injection can occur in as little as a few seconds, the oral route of administration requires approximately half an hour before the high sets in. Thus, the higher the dose of heroin and the faster the route of administration, the greater the potential risk of psychological dependence and addiction.

Large doses of heroin can cause fatal respiratory depression, and the drug has been used for suicide or as a murder weapon. The serial killer Harold Shipman used diamorphine on his victims, and the subsequent Shipman Inquiry led to a tightening of the regulations surrounding the storage, prescribing and destruction of controlled drugs in the UK. Because significant tolerance to respiratory depression develops quickly with continued use and is lost just as quickly during withdrawal, it is often difficult to determine whether a lethal heroin overdose was accidental, suicide or homicide. Examples include the overdose deaths of Sid Vicious, Janis Joplin, Tim Buckley, Hillel Slovak, Layne Staley, Bradley Nowell, Ted Binion, and River Phoenix.

By mouth

Use of heroin by mouth is less common than other methods of administration, mainly because there is little to no "rush", and the effects are less potent. When ingested, heroin is entirely converted to morphine by first-pass metabolism (deacetylation). Heroin's oral bioavailability is both dose-dependent (as is morphine's) and significantly higher than that of oral morphine itself, reaching up to 64.2% for high doses and 45.6% for low doses; opiate-naive users showed far less absorption of the drug at low doses, with bioavailabilities of only up to 22.9%. The maximum plasma concentration of morphine following oral administration of heroin was around twice that of oral morphine.
Injection

Injection, also known as "slamming", "banging", "shooting up", "digging" or "mainlining", is a popular method which carries relatively greater risks than other methods of administration. Heroin base (commonly found in Europe), when prepared for injection, will only dissolve in water when mixed with an acid (most commonly citric acid powder or lemon juice) and heated. Heroin in the east-coast United States is most commonly found in the hydrochloride salt form, requiring just water (and no heat) to dissolve. Users tend to initially inject in the easily accessible arm veins, but as these veins collapse over time, users resort to more dangerous areas of the body, such as the femoral vein in the groin. Users who have used this route of administration often develop deep vein thrombosis.

Intravenous users inject a wide range of single doses using a hypodermic needle. The dose of heroin used for recreational purposes depends on the frequency and level of use: a first-time user may take between 5 and 20 mg, while an established addict may require several hundred mg per day.

As with the injection of any drug, if a group of users share a common needle without sterilization procedures, blood-borne diseases such as HIV/AIDS or hepatitis can be transmitted. Sharing a common water container for preparing the injection, as well as sharing spoons and filters, can also spread blood-borne diseases. Many countries now supply small sterile spoons and filters for single use in order to prevent the spread of disease.

Smoking

Smoking heroin refers to vaporizing it to inhale the resulting fumes, rather than burning and inhaling the smoke. It is commonly smoked in glass pipes made from glassblown Pyrex tubes and light bulbs. Heroin may also be smoked from aluminium foil heated by a flame underneath it, with the resulting smoke inhaled through a tube of rolled-up foil, a method known as "chasing the dragon".
Insufflation

Another popular route of heroin intake is insufflation (snorting), where a user crushes the heroin into a fine powder and then gently inhales it (sometimes with a straw or a rolled-up banknote, as with cocaine) into the nose, where heroin is absorbed through the soft tissue in the mucous membrane of the sinus cavity and straight into the bloodstream. This method of administration bypasses first-pass metabolism, with a quicker onset and higher bioavailability than oral administration, though the duration of action is shortened. It is sometimes preferred by users who do not want to prepare and administer heroin for injection or smoking but still want a fast onset. Once a user begins to inject the drug, however, snorting often becomes an unwanted route. The user may still get high on the drug from snorting, and experience a nod, but will not get a rush. A "rush" is caused by a large amount of heroin entering the body at once; when the drug is taken in through the nose, the user does not get the rush because the drug is absorbed slowly rather than instantly.

Heroin for pain has been mixed with sterile water on site by the attending physician and administered using a syringe with a nebulizer tip. Heroin may be used for fractures, burns, finger-tip injuries, suturing, and wound re-dressing, but is inappropriate in head injuries.

Suppository

Little research has been focused on the suppository (anal insertion) or pessary (vaginal insertion) methods of administration, also known as "plugging". These methods are commonly carried out using an oral syringe: heroin is dissolved and withdrawn into the syringe, which may then be lubricated and inserted into the anus or vagina before the plunger is pushed. Most of the drug is absorbed through the membranes lining the walls of the rectum or the vaginal canal.

Adverse effects

Heroin is classified as a hard drug in terms of drug harmfulness.
Like most opioids, unadulterated heroin may lead to adverse effects. The purity of street heroin varies greatly, leading to overdoses when the purity is higher than the user expected.

Short-term effects

Users report an intense rush, an acute transcendent state of euphoria, which occurs while diamorphine is being metabolized into 6-monoacetylmorphine (6-MAM) and morphine in the brain. Some believe that heroin produces more euphoria than other opioids; one possible explanation is the presence of 6-monoacetylmorphine, a metabolite unique to heroin, although a more likely explanation is the rapidity of onset. While other recreationally used opioids are metabolized only to morphine, heroin also yields 6-MAM, itself a psychoactive metabolite. However, this perception is not supported by the results of clinical studies comparing the physiological and subjective effects of injected heroin and morphine in individuals formerly addicted to opioids; these subjects showed no preference for one drug over the other. Equipotent injected doses had comparable action courses, with no difference in subjects' self-rated feelings of euphoria, ambition, nervousness, relaxation, drowsiness, or sleepiness.

The rush is usually accompanied by a warm flushing of the skin, dry mouth, and a heavy feeling in the extremities. Nausea, vomiting, and severe itching may also occur. After the initial effects, users usually will be drowsy for several hours; mental function is clouded; heart function slows, and breathing is also severely slowed, sometimes enough to be life-threatening. Slowed breathing can also lead to coma and permanent brain damage. Heroin use has also been associated with myocardial infarction.

Long-term effects

Repeated heroin use changes the physical structure and physiology of the brain, creating long-term imbalances in neuronal and hormonal systems that are not easily reversed.
Studies have shown some deterioration of the brain's white matter due to heroin use, which may affect decision-making abilities, the ability to regulate behavior, and responses to stressful situations. Heroin also produces profound degrees of tolerance and physical dependence. Tolerance occurs when more and more of the drug is required to achieve the same effects. With physical dependence, the body adapts to the presence of the drug, and withdrawal symptoms occur if use is reduced abruptly.

Injection

Intravenous use of heroin (and any other substance) with needles and syringes or other related equipment may lead to:
- contracting blood-borne pathogens such as HIV and hepatitis via the sharing of needles
- contracting bacterial or fungal endocarditis and possibly venous sclerosis
- abscesses
- poisoning from contaminants added to "cut" or dilute heroin
- decreased kidney function (nephropathy), although it is not currently known if this is because of adulterants or infectious diseases

Withdrawal

The withdrawal syndrome from heroin may begin within as little as two hours of discontinuation of the drug; however, this time frame can fluctuate with the degree of tolerance as well as the amount of the last consumed dose, and withdrawal more typically begins within 6–24 hours after cessation. Symptoms may include sweating, malaise, anxiety, depression, akathisia, priapism, extra sensitivity of the genitals in females, a general feeling of heaviness, excessive yawning or sneezing, rhinorrhea, insomnia, cold sweats, chills, severe muscle and bone aches, nausea, vomiting, diarrhea, cramps, watery eyes, fever, cramp-like pains, and involuntary spasms in the limbs (thought to be an origin of the term "kicking the habit").

Overdose

Heroin overdose is usually treated with the opioid antagonist naloxone. This reverses the effects of heroin and causes an immediate return of consciousness but may result in withdrawal symptoms.
The half-life of naloxone is shorter than that of some opioids, such that it may need to be given multiple times until the opioid has been metabolized by the body.

Between 2012 and 2015, heroin was the leading cause of drug-related deaths in the United States. Since then, fentanyl has been a more common cause of drug-related deaths.

Depending on drug interactions and numerous other factors, death from overdose can take anywhere from several minutes to several hours. Death usually occurs from lack of oxygen resulting from the lack of breathing caused by the opioid. Heroin overdoses can occur because of an unexpected increase in the dose or purity or because of diminished opioid tolerance. However, many fatalities reported as overdoses are probably caused by interactions with other depressant drugs such as alcohol or benzodiazepines. Since heroin can cause nausea and vomiting, a significant number of deaths attributed to heroin overdose are caused by aspiration of vomit by an unconscious person. Some sources quote the median lethal dose (for an average 75 kg opiate-naive individual) as being between 75 and 600 mg.

Illicit heroin is of widely varying and unpredictable purity. This means that the user may prepare what they consider to be a moderate dose while actually taking far more than intended. Also, tolerance typically decreases after a period of abstinence. If this occurs and the user takes a dose comparable to their previous use, the user may experience drug effects that are much greater than expected, potentially resulting in an overdose. It has been speculated that an unknown portion of heroin-related deaths are the result of an overdose or allergic reaction to quinine, which may sometimes be used as a cutting agent.

Pharmacology

When taken orally, heroin undergoes extensive first-pass metabolism via deacetylation, making it a prodrug for the systemic delivery of morphine.
When the drug is injected, however, it avoids this first-pass effect, very rapidly crossing the blood–brain barrier because of the presence of the acetyl groups, which render it much more fat soluble than morphine itself. Once in the brain, it is deacetylated variously into the inactive 3-monoacetylmorphine and the active 6-monoacetylmorphine (6-MAM), and then to morphine, which bind to μ-opioid receptors, resulting in the drug's euphoric, analgesic (pain-relieving), and anxiolytic (anti-anxiety) effects; heroin itself exhibits relatively low affinity for the μ receptor. Analgesia follows from activation of the μ-opioid G-protein coupled receptor, which indirectly hyperpolarizes the neuron, reducing the release of nociceptive neurotransmitters and hence causing analgesia and increased pain tolerance.

Unlike hydromorphone and oxymorphone, however, heroin administered intravenously causes a larger histamine release, similar to morphine, resulting in the feeling of a greater subjective "body high" for some, but also in instances of pruritus (itching) when users first start injecting.

Normally GABA, released from inhibitory neurones, inhibits the release of dopamine. Opiates, like heroin and morphine, decrease the inhibitory activity of such neurones. This causes increased release of dopamine in the brain, which is the reason for the euphoric and rewarding effects of heroin.

Both morphine and 6-MAM are μ-opioid agonists that bind to receptors present throughout the brain, spinal cord, and gut of all mammals. The μ-opioid receptor also binds endogenous opioid peptides such as β-endorphin, Leu-enkephalin, and Met-enkephalin. Repeated use of heroin results in a number of physiological changes, including an increase in the production of μ-opioid receptors (upregulation).
These physiological alterations lead to tolerance and dependence, so that stopping heroin use results in uncomfortable symptoms including pain, anxiety, muscle spasms, and insomnia, called the opioid withdrawal syndrome. Depending on usage, onset occurs 4–24 hours after the last dose of heroin. Morphine also binds to δ- and κ-opioid receptors. There is also evidence that 6-MAM binds to a subtype of μ-opioid receptors that are also activated by the morphine metabolite morphine-6β-glucuronide but not morphine itself. A third opioid receptor type is the mu-3 receptor, which may be a commonality to other six-position monoesters of morphine. The contribution of these receptors to the overall pharmacology of heroin remains unknown.

A subclass of morphine derivatives, namely the 3,6 esters of morphine, with similar effects and uses, includes the clinically used strong analgesics nicomorphine (Vilan) and dipropanoylmorphine; there is also the latter's dihydromorphine analogue, diacetyldihydromorphine (Paralaudin). Two other 3,6 diesters of morphine invented in 1874–75 along with diamorphine, dibenzoylmorphine and acetylpropionylmorphine, were made as substitutes after diamorphine was outlawed in 1925 and were therefore sold as the first "designer drugs" until they were outlawed by the League of Nations in 1930.

Chemistry

Diamorphine is produced from acetylation of morphine derived from natural opium sources, generally using acetic anhydride. The major metabolites of diamorphine, 6-MAM, morphine, morphine-3-glucuronide, and morphine-6-glucuronide, may be quantitated in blood, plasma or urine to monitor for use, confirm a diagnosis of poisoning, or assist in a medicolegal death investigation. Most commercial opiate screening tests cross-react appreciably with these metabolites, as well as with other biotransformation products likely to be present following usage of street-grade diamorphine, such as 6-acetylcodeine and codeine.
However, chromatographic techniques can easily distinguish and measure each of these substances. When interpreting the results of a test, it is important to consider the diamorphine usage history of the individual, since a chronic user can develop tolerance to doses that would incapacitate an opiate-naive individual, and the chronic user often has high baseline levels of these metabolites in their system. Furthermore, some testing procedures employ a hydrolysis step before quantitation that converts many of the metabolic products to morphine, yielding a result that may be twice as large as with a method that examines each product individually.

History

The opium poppy was cultivated in lower Mesopotamia as long ago as 3400 BC. Chemical analysis of opium in the 19th century revealed that most of its activity could be ascribed to the alkaloids codeine and morphine.

Diamorphine was first synthesized in 1874 by C. R. Alder Wright, an English chemist working at St. Mary's Hospital Medical School in London who had been experimenting with combining morphine with various acids. He boiled anhydrous morphine alkaloid with acetic anhydride for several hours and produced a more potent, acetylated form of morphine, now called diacetylmorphine or morphine diacetate. He sent the compound to F. M. Pierce of Owens College in Manchester for analysis.

Wright's invention did not lead to any further developments, and diamorphine became popular only after it was independently re-synthesized 23 years later by chemist Felix Hoffmann. Hoffmann was working at Bayer pharmaceutical company in Elberfeld, Germany, and his supervisor Heinrich Dreser instructed him to acetylate morphine with the objective of producing codeine, a constituent of the opium poppy that is pharmacologically similar to morphine but less potent and less addictive. Instead, the experiment produced an acetylated form of morphine one and a half to two times more potent than morphine itself.
The head of Bayer's research department reputedly coined the drug's new name of "heroin", based on the German heroisch, meaning "heroic, strong" (from the ancient Greek word "heros, ήρως"). Bayer's scientists were not the first to make heroin, but they discovered practical ways to produce it, and the company led its commercialization. In 1895, Bayer marketed diacetylmorphine as an over-the-counter drug under the trademark name Heroin. It was developed chiefly as a morphine substitute for cough suppressants that did not have morphine's addictive side-effects. Morphine at the time was a popular recreational drug, and Bayer wished to find a similar but non-addictive substitute to market. However, contrary to Bayer's advertising as a "non-addictive morphine substitute," heroin would soon have one of the highest rates of addiction among its users.

From 1898 through to 1910, diamorphine was marketed under the trademark name Heroin as a non-addictive morphine substitute and cough suppressant. In the 11th edition of Encyclopædia Britannica (1910), the article on morphine states: "In the cough of phthisis minute doses [of morphine] are of service, but in this particular disease morphine is frequently better replaced by codeine or by heroin, which checks irritable coughs without the narcotism following upon the administration of morphine."

In the US, the Harrison Narcotics Tax Act was passed in 1914 to control the sale and distribution of diacetylmorphine and other opioids; it allowed the drug to be prescribed and sold for medical purposes. In 1924, the United States Congress banned its sale, importation, or manufacture. It is now a Schedule I substance, which makes it illegal for non-medical use in signatory nations of the Single Convention on Narcotic Drugs treaty, including the United States. The Health Committee of the League of Nations banned diacetylmorphine in 1925, although it took more than three years for this to be implemented.
In the meantime, the first designer drugs, viz. 3,6 diesters and 6 monoesters of morphine and acetylated analogues of closely related drugs like hydromorphone and dihydromorphine, were produced in massive quantities to fill the worldwide demand for diacetylmorphine. This continued until 1930, when the Committee banned diacetylmorphine analogues with no therapeutic advantage over drugs already in use, the first major legislation of this type. Bayer lost some of its trademark rights to heroin (as well as aspirin) under the 1919 Treaty of Versailles following the German defeat in World War I.

Use of heroin by jazz musicians in particular was prevalent in the mid-twentieth century, including Billie Holiday, saxophonists Charlie Parker and Art Pepper, guitarist Joe Pass and piano player/singer Ray Charles; a "staggering number of jazz musicians were addicts". It was also a problem with many rock musicians, particularly from the late 1960s through the 1990s. Pete Doherty is a self-confessed user of heroin. Nirvana lead singer Kurt Cobain's heroin addiction was well documented. Pantera frontman Phil Anselmo turned to heroin while touring during the 1990s to cope with his back pain. James Taylor, Jimmy Page, John Lennon, Eric Clapton, Johnny Winter, Keith Richards and Janis Joplin also used heroin. Many musicians have made songs referencing their heroin usage.

Society and culture

Names

"Diamorphine" is the Recommended International Nonproprietary Name and British Approved Name. Other synonyms for heroin include diacetylmorphine and morphine diacetate. Heroin is also known by many street names including dope, H, smack, junk, horse, scag, and brown, among others.

Legal status

Asia

In Hong Kong, diamorphine is regulated under Schedule 1 of Hong Kong's Chapter 134 Dangerous Drugs Ordinance. It is available by prescription. Anyone supplying diamorphine without a valid prescription can be fined $5,000,000 (HKD) and imprisoned for life.
The penalty for trafficking or manufacturing diamorphine is a $5,000,000 (HKD) fine and life imprisonment. Possession of diamorphine without a license from the Department of Health is illegal, with a $1,000,000 (HKD) fine and 7 years of jail time.

Europe

In the Netherlands, diamorphine is a List I drug of the Opium Law. It is available for prescription under tight regulation exclusively to long-term addicts for whom methadone maintenance treatment has failed. It cannot be used to treat severe pain or other illnesses.

In the United Kingdom, diamorphine is available by prescription, though it is a restricted Class A drug. According to the 50th edition of the British National Formulary (BNF), diamorphine hydrochloride may be used in the treatment of acute pain, myocardial infarction, acute pulmonary oedema, and chronic pain. The treatment of chronic non-malignant pain must be supervised by a specialist. The BNF notes that all opioid analgesics cause dependence and tolerance but that this is "no deterrent in the control of pain in terminal illness". When used in the palliative care of cancer patients, diamorphine is often injected using a syringe driver.

In Switzerland, heroin is produced in injectable or tablet form under the name Diaphin by a private company under contract to the Swiss government. Swiss-produced heroin has been imported into Canada with government approval.

Australia

In Australia, diamorphine is listed as a Schedule 9 prohibited substance under the Poisons Standard (October 2015). A Schedule 9 drug is outlined in the Poisons Act 1964 as "Substances which may be abused or misused, the manufacture, possession, sale or use of which should be prohibited by law except when required for medical or scientific research, or for analytical, teaching or training purposes with approval of the CEO."

North America

In Canada, diamorphine is a controlled substance under Schedule I of the Controlled Drugs and Substances Act (CDSA).
Any person seeking or obtaining diamorphine from a practitioner without disclosing prescriptions obtained from other practitioners within the previous 30 days is guilty of an indictable offense and subject to imprisonment for a term not exceeding seven years. Possession of diamorphine for the purpose of trafficking is an indictable offense and subject to imprisonment for life.

In the United States, diamorphine is a Schedule I drug according to the Controlled Substances Act of 1970, making it illegal to possess without a DEA license. Possession of more than 100 grams of diamorphine or a mixture containing diamorphine is punishable with a minimum mandatory sentence of 5 years of imprisonment in a federal prison. In 2021, the US state of Oregon became the first state to decriminalize the use of heroin after voters passed Ballot Measure 110 in 2020. This measure allows people with small amounts to avoid arrest.

Turkey

Turkey maintains strict laws against the use, possession or trafficking of illegal drugs. If convicted of these offences, one could receive a heavy fine or a prison sentence of 4 to 24 years.

Misuse of prescription medication

Misuse of prescription medicines, such as opioids, can lead to heroin use and dependence. The number of deaths from illegal opioid overdose has followed the increasing number of deaths caused by prescription opioid overdoses. Prescription opioids are relatively easy to obtain; their misuse may ultimately lead to heroin injection because heroin is cheaper than prescribed pills.

Economics

Production

Diamorphine is produced from acetylation of morphine derived from natural opium sources. One such method of heroin production involves isolation of the water-soluble components of raw opium, including morphine, in a strongly basic aqueous solution, followed by recrystallization of the morphine base by addition of ammonium chloride. The solid morphine base is then filtered out. The morphine base is then reacted with acetic anhydride, which forms heroin.
This highly impure brown heroin base may then undergo further purification steps, which produce a white-colored product; the final products differ in appearance depending on purity and go by different names. Heroin purity has been classified into four grades: No. 4 is the purest form, a white powder (salt) that dissolves easily for injection; No. 3 is "brown sugar" for smoking (base); No. 1 and No. 2 are unprocessed raw heroin (salt or base). Trafficking Trafficking is heavy worldwide, with Afghanistan the biggest producer. According to a U.N.-sponsored survey, in 2004 Afghanistan accounted for the production of 87 percent of the world's diamorphine. Afghan opium kills around 100,000 people annually. In 2003, The Independent reported on the resurgence of the Afghan trade. Opium production in that country has increased rapidly since, reaching an all-time high in 2006, with war in Afghanistan once again acting as a facilitator of the trade. Some 3.3 million Afghans are involved in producing opium. At present, opium poppies are mostly grown in Afghanistan and in Southeast Asia, especially in the region known as the Golden Triangle straddling Burma, Thailand, Vietnam, Laos and Yunnan province in China. There is also cultivation of opium poppies in Pakistan, Mexico and Colombia. According to the DEA, the majority of the heroin consumed in the United States comes from Mexico (50%) and Colombia (43–45%) via Mexican criminal cartels such as the Sinaloa Cartel. However, these statistics may be significantly unreliable: the DEA's 50/50 split between Colombia and Mexico is contradicted by the number of hectares cultivated in each country, and in 2014 the DEA claimed most of the heroin in the US came from Colombia. The Sinaloa Cartel is the most active drug cartel involved in smuggling illicit drugs such as heroin into the United States and trafficking them throughout the country.
According to the Royal Canadian Mounted Police, 90% of the heroin seized in Canada (where the origin was known) came from Afghanistan. Pakistan is the destination and transit point for 40 percent of the opiates produced in Afghanistan; other destinations of Afghan opiates include Russia, Europe and Iran. A conviction for trafficking heroin carries the death penalty in most Southeast Asian, some East Asian and Middle Eastern countries (see Use of death penalty worldwide for details), among which Malaysia, Singapore and Thailand are the strictest. The penalty applies even to citizens of countries where it is not in place, sometimes causing controversy when foreign visitors are arrested for trafficking; examples include the arrest of nine Australians in Bali, the death sentence given to Nola Blake in Thailand in 1987, and the hanging of Australian citizen Van Tuong Nguyen in Singapore. Trafficking history The origins of the present international illegal heroin trade can be traced back to laws passed in many countries in the early 1900s that closely regulated the production and sale of opium and its derivatives, including heroin. At first, heroin flowed from countries where it was still legal into countries where it was no longer legal. By the mid-1920s, heroin production had been made illegal in many parts of the world. An illegal trade developed at that time between heroin labs in China (mostly in Shanghai and Tianjin) and other nations. The weakness of the government in China and the conditions of civil war enabled heroin production to take root there. Chinese triad gangs eventually came to play a major role in the illicit heroin trade. The French Connection route started in the 1930s. Heroin trafficking was virtually eliminated in the US during World War II because of temporary trade disruptions caused by the war. Japan's war with China had cut the normal distribution routes for heroin, and the war had generally disrupted the movement of opium.
After World War II, the Mafia took advantage of the weakness of the postwar Italian government and set up heroin labs in Sicily. The Mafia took advantage of Sicily's location along the historic route opium took westward into Europe and the United States. Large-scale international heroin production effectively ended in China with the victory of the communists in the civil war in the late 1940s. The elimination of Chinese production happened at the same time that Sicily's role in the trade developed. Although it remained legal in some countries until after World War II, health risks, addiction, and widespread recreational use led most Western countries to declare heroin a controlled substance by the latter half of the 20th century. In the late 1960s and early 1970s, the CIA supported anti-Communist Chinese Nationalists who had settled near the Sino-Burmese border, as well as Hmong tribesmen in Laos. This helped the development of the Golden Triangle opium production region, which supplied about one-third of the heroin consumed in the US after the 1973 American withdrawal from Vietnam. In 1999, Burma, the heartland of the Golden Triangle, was the second-largest producer of heroin, after Afghanistan. The Soviet–Afghan War led to increased production in the Pakistani–Afghan border regions, as US-backed mujaheddin militants raised money for arms by selling opium, contributing heavily to the creation of the modern Golden Crescent. By 1980, 60 percent of the heroin sold in the US originated in Afghanistan. The conflict increased international production of heroin and lowered prices in the 1980s. The trade shifted away from Sicily in the late 1970s as various criminal organizations fought violently with each other over it. The fighting also led to a stepped-up government law enforcement presence in Sicily.
Following the discovery at a Jordanian airport of a toner cartridge that had been modified into an improvised explosive device, the resulting increase in airfreight scrutiny led to a major shortage (drought) of heroin from October 2010 until April 2011. This was reported across most of mainland Europe and the UK, and led to an increase of approximately 30 percent in the price of street heroin and increased demand for diverted methadone. The number of addicts seeking treatment also increased significantly during this period. Other heroin droughts (shortages) have been attributed to cartels restricting supply in order to force a price increase, and to a fungus that attacked the opium crop of 2009. Many people believed that the American government had introduced pathogens into Afghanistan's environment in order to destroy the opium crop and thus starve insurgents of income. On 13 March 2012, Haji Bagcho, who had ties to the Taliban, was convicted by a US District Court of conspiracy, distribution of heroin for importation into the United States, and narco-terrorism. Based on heroin production statistics compiled by the United Nations Office on Drugs and Crime for 2006, Bagcho's activities accounted for approximately 20 percent of the world's total production that year. Street price The European Monitoring Centre for Drugs and Drug Addiction reports that the retail price of brown heroin varies from €14.5 per gram in Turkey to €110 per gram in Sweden, with most European countries reporting typical prices of €35–40 per gram. The price of white heroin is reported by only a few European countries and ranged between €27 and €110 per gram. The United Nations Office on Drugs and Crime claims in its 2008 World Drug Report that typical US retail prices are US$172 per gram. Harm reduction Harm reduction is a public health philosophy that seeks to reduce the harms associated with the use of illicit drugs.
One aspect of harm reduction initiatives focuses on the behaviour of individual users. In the case of diamorphine, this includes promoting safer means of taking the drug, such as smoking, nasal use, or oral or rectal insertion, in order to avoid the higher risks of overdose, infections, and blood-borne viruses associated with injecting. Other measures include using a small amount of the drug first to gauge its strength and minimize the risk of overdose. For the same reason, polydrug use (the use of two or more drugs at the same time) is discouraged. Injecting diamorphine users are encouraged to use new needles, syringes, spoons/steri-cups, and filters every time they inject and not to share these with other users. Users are also encouraged not to use alone, as others can assist in the event of an overdose. Governments that support a harm reduction approach usually fund needle and syringe exchange programs, which supply new needles and syringes on a confidential basis, together with education on proper filtering before injection, safer injection techniques, and safe disposal of used injecting gear. Other equipment used when preparing diamorphine for injection may also be supplied, including citric acid/vitamin C sachets, steri-cups, filters, alcohol pre-injection swabs, sterile water ampules and tourniquets (to discourage the use of shoelaces or belts). Other harm reduction measures, employed for example in Europe, Canada, and Australia, are safe injection sites, where users can inject diamorphine and cocaine under the supervision of medically trained staff. Safe injection sites are low-threshold services and allow social services to approach problem users who would otherwise be hard to reach. In the UK, the Criminal Justice System has a protocol in place requiring that any individual who is arrested and suspected of having a substance misuse problem be offered the chance to enter a treatment program.
This has drastically reduced crime rates in some areas: individuals arrested for theft committed to fund their drug use, once placed on a methadone program (often more quickly than would have been possible had they not been arrested), no longer need to steal to purchase heroin. This aspect of harm reduction is seen as benefiting both the individual and the community at large, which is then protected from possible theft. During the late 1980s and early 1990s, Swiss authorities ran the ZIPP-AIDS (Zurich Intervention Pilot Project), handing out free syringes in the officially tolerated drug scene in Platzspitz park. In 1994, Zurich started a pilot project using prescription heroin in heroin-assisted treatment (HAT), which allowed users to obtain heroin and inject it under medical supervision. The HAT program proved to be cost-beneficial to society and to improve patients' overall health and social stability, and has since been introduced in multiple European countries. Research Researchers are attempting to reproduce the biosynthetic pathway that produces morphine in genetically engineered yeast. As of June 2015, (S)-reticuline could be produced from sugar and (R)-reticuline could be converted to morphine, but the intermediate step of converting (S)-reticuline to (R)-reticuline could not yet be performed. See also Allegations of CIA drug trafficking Cheese (recreational drug) The Politics of Heroin in Southeast Asia References External links NIDA InfoFacts on Heroin ONDCP Drug Facts U.S. National Library of Medicine: Drug Information Portal – Heroin BBC Article entitled 'When Heroin Was Legal'. References to the United Kingdom and the United States Drug-poisoning Deaths Involving Heroin: United States, 2000–2013 U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Health Statistics.
Heroin Trafficking in the United States (2016) by Kristin Finklea, Congressional Research Service. 1874 introductions 1898 introductions Acetate esters Brands that became generic British inventions Euphoriants Morphinans Morphine Mu-opioid receptor agonists Opioids Phenol ethers Prodrugs Semisynthetic opioids Nephrotoxins
14035
https://en.wikipedia.org/wiki/Hellas%20Verona%20F.C.
Hellas Verona F.C.
Hellas Verona Football Club, commonly referred to as Hellas Verona or simply Verona, is an Italian football club based in Verona, Veneto, that currently plays in Serie A. The team won the Serie A Championship in 1984–85. History Origins and early history Founded in 1903 by a group of high school students, the club was named Hellas at the request of a professor of classics. At a time when football was played seriously only in the larger cities of northwestern Italy, most of Verona was indifferent to the growing sport. However, when in 1906 two city teams chose the city's Roman amphitheatre as a venue to showcase the game, crowd enthusiasm and media interest began to rise. During these first few years, Hellas was one of three or four area teams playing at a municipal level while fighting against city rivals Bentegodi to become the city's premier football outfit. By the 1907–08 season, Hellas was playing against regional teams, and an intense rivalry with Vicenza that lasts to this day was born. From 1898 to 1926, Italian football was organised into regional groups. In this period, Hellas was one of the founding teams of the early league and often among its top final contenders. In 1911, the city helped Hellas replace the early, gritty football fields with a proper venue. This allowed the team to take part in its first regional tournament, which, until 1926, was the qualifying stage for the national title. In 1919, following a return to activity after a four-year suspension of all football competition in Italy during World War I, the team merged with city rival Verona and changed its name to Hellas Verona. Between 1926 and 1929, the elite "Campionato Nazionale" assimilated the top sides from the various regional groups, and Hellas Verona joined the privileged teams, yet struggled to remain competitive. Serie A, as it is structured today, began in 1929, when the Campionato Nazionale turned into a professional league.
Still an amateur team, Hellas merged with two city rivals, Bentegodi and Scaligera, to form AC Verona. Hoping to build a first-class contender for future years, the new team debuted in Serie B in 1929. It would take the gialloblu 28 years to finally achieve their goal. After first being promoted to Serie A for one season in 1957–58, the team merged in 1959 with another city rival (called Hellas) and commemorated its beginnings by changing its name to Hellas Verona AC. Success in the 1970s and 1980s Coached by Nils Liedholm, the team returned to Serie A in 1968 and remained in the elite league almost without interruption until 1990. Along the way, it scored a famous 5–3 win in the 1972–73 season that cost Milan the scudetto (the Serie A title). The fact that the result came late on the last matchday of the season makes the sudden and unexpected end to the rossoneri's title ambitions all the more memorable. In 1973–74, Hellas finished the season fourth from last, narrowly avoiding relegation, but were nonetheless sent down to Serie B during the summer months as a result of a scandal involving team president Saverio Garonzi. After a year in Serie B, Hellas returned to Serie A. In the 1975–76 season, the team had a successful run in the Coppa Italia, eliminating highly rated teams such as Torino, Cagliari and Internazionale from the tournament. However, in their first ever final in the competition, Hellas were trounced 4–0 by Napoli. Under the leadership of coach Osvaldo Bagnoli, in 1982–83 the team secured fourth place in Serie A (its highest finish at the time) and even led the Serie A standings for a few weeks. The same season, Hellas again reached the Coppa Italia final. After a 2–0 home victory, Hellas then travelled to Turin to play Juventus but were defeated 3–0 after extra time.
Further disappointment followed in the 1983–84 season, when the team again reached the Coppa Italia final, only to lose the Cup in the final minutes of the return match against defending Serie A champions Roma. The team made its first European appearance in the 1983–84 UEFA Cup and was knocked out in the second round of the tournament by Sturm Graz. Hellas were eliminated from the 1985–86 European Cup in the second round by defending champions and fellow Serie A side Juventus, after a contested game marked by controversial refereeing from the French official Wurtz, having beaten PAOK of Greece in the first round. In 1988, the team achieved their best international result when they reached the UEFA Cup quarter-finals with four victories and three draws, before being eliminated by German side Werder Bremen. 1984–1985 Scudetto Although the 1984–85 season squad was made up of a mix of emerging players and mature stars, at the beginning of the season no one would have regarded the team as having the necessary ingredients to go all the way. Certainly, the additions of Hans-Peter Briegel in midfield and of Danish striker Preben Elkjær to an attack that already featured the wing play of Pietro Fanna, the creative abilities of Antonio Di Gennaro and the scoring touch of Giuseppe Galderisi proved crucial. To mention a few of the memorable milestones on the road to the scudetto: a decisive win against Juventus (2–0), with a goal scored by Elkjær after having lost a boot in a tackle just outside the box, set the stage early in the championship; an away win over Udinese (5–3) ended any speculation that the team was losing energy at the midway point; three straight wins (including a hard-fought 1–0 victory against a strong Roma side) served notice that the team had kept its polish and focus intact during their rivals' final surge; and a 1–1 draw in Bergamo against Atalanta secured the title with a game in hand.
Hellas finished the year with a 15–13–2 record and 43 points (under the two-points-for-a-win rule then in use), four points ahead of Torino, with Internazionale and Sampdoria rounding out the top four spots. This unusual final Serie A table (with the most successful Italian teams of the time, Juventus and Roma, finishing much lower than expected) has led to much speculation. The 1984–85 season was the only season in which referees were assigned to matches by random draw. Before then, each referee had always been assigned to a specific match by a special commission of referees (designatori arbitrali). After the betting scandal of the early 1980s (the Calcio Scommesse scandal), it was decided to clean up the image of Italian football by assigning referees randomly instead of picking them, to dispel the suspicions and accusations that had long accompanied Italian football. This resulted in a quieter championship and a completely unexpected final table. In the following season, won again by Juventus, the choice of referees returned to the hands of the designatori arbitrali. In 2006, a major scandal in Italian football revealed that certain clubs had been illegally influencing the referee selection process in an attempt to ensure that certain referees were assigned to their matches. Between Serie A and Serie B These were far from modest achievements for a mid-size city with limited appeal to fans across the nation. But soon enough, financial difficulties caught up with team management. In 1991 the team folded and was reborn as Verona, moving back and forth between Serie A and Serie B for several seasons. In 1995 the name was officially changed back to Hellas Verona. After a three-year stay, their last stint in Serie A ended in grief in 2002.
That season, emerging international talents such as Adrian Mutu, Mauro Camoranesi, Alberto Gilardino, Martin Laursen, Massimo Oddo, Marco Cassetti and coach Alberto Malesani failed to capitalise on an excellent start and dropped into fourth-from-bottom place for the first time all season on the final match day, resulting in relegation to Serie B. Decline and Serie A comeback (2002–present) Following the 2002 relegation to Serie B, team fortunes continued to slip throughout the decade. In the 2003–04 season, Hellas Verona struggled in Serie B and spent most of the season fighting off an unthinkable relegation to Serie C1. Undeterred, the fans supported their team, and a string of late-season wins eventually warded off the danger. Over 5,000 of them followed Hellas to Como on the final day of the season to celebrate. In 2004–05, things looked much brighter for the team. After a rocky start, Hellas put together a string of results and climbed to third spot. The gialloblù held on to the position until January 2005, when transfers weakened the team, yet they managed to take the battle for Serie A to the last day of the season. The 2006–07 Serie B season seemed to start well, owing to the club's takeover by Pietro Arvedi D'Emilei, which ended nine years of controversial leadership under chairman Gianbattista Pastorello, heavily contested by the supporters in his later years at Verona. However, Verona were immediately involved in the relegation battle, and Massimo Ficcadenti was replaced in December 2006 by Giampiero Ventura. Despite a recovery in results, Verona finished in 18th place and were thus forced to play a two-legged playoff against 19th-placed Spezia to avert relegation. A 2–1 away loss in the first leg at La Spezia was followed by a 0–0 home draw, and Verona were relegated to Serie C1 after 64 years in the two highest divisions. Verona appointed experienced coach Franco Colomba for the new season with the aim of returning to Serie B as soon as possible.
However, despite being widely considered the division favourite, the gialloblù spent almost the entire season in last place. After seven matches, club management sacked Colomba in early October and replaced him with youth team coach (and former Verona player) Davide Pellegrini. A new owner acquired the club in late 2007, appointing Giovanni Galli in December as new director of football and Maurizio Sarri as new head coach. Halfway through the 2007–08 season, the team remained at the bottom of Serie C1, on the brink of relegation to the fourth level (Serie C2). In response, club management sacked Sarri and brought back Pellegrini. Thanks to a late-season surge the scaligeri avoided direct relegation by qualifying for the relegation play-off, and narrowly averted dropping to Lega Pro Seconda Divisione in the final game, beating Pro Patria 2–1 on aggregate. However, despite the decline in results, attendance and season ticket sales remained at 15,000 on average. For the 2008–09 season, Verona appointed former Sassuolo and Piacenza manager Gian Marco Remondina with the aim to win promotion to Serie B. However, the season did not start impressively, with Verona being out of the playoff zone by mid-season, and club chairman Pietro Arvedi D'Emilei entering into a coma after being involved in a car crash on his way back from a league match in December 2008. Arvedi died in March 2009, two months after the club was bought by new chairman Giovanni Martinelli. The following season looked promising, as new transfer players were brought aboard, and fans enthusiastically embraced the new campaign. Season ticket figures climbed to over 10,000, placing Verona ahead of several Serie A teams and all but Torino in Serie B attendance. The team led the standings for much of the season, accumulating a seven-point lead by early in the spring. 
However, the advantage was gradually squandered, and the team dropped to second place on the second-last day of the season, with a chance to regain first place in the final regular season match against Portogruaro on home soil. Verona, however, disappointed a crowd of over 25,000 fans and, with the loss, dropped to third place and headed towards the play-offs. A managerial change for the post-season saw the firing of Remondina and the arrival of Giovanni Vavassori. After eliminating Rimini in the semi-finals (1–0; 0–0) Verona lost the final to Pescara (2–2 on home soil and 0–1 in the return match) and were condemned to a fourth-straight year of third division football. Former 1990 World Cup star Giuseppe Giannini (a famous captain of Roma for many years) signed as manager for the 2010–11 campaign. Once again, the team was almost entirely revamped during the transfer season. The squad struggled in the early months and Giannini was eventually sacked and replaced by former Internazionale defender Andrea Mandorlini, who succeeded in reorganising the team's play and bringing discipline both on and off the pitch. In the second half of the season, Verona climbed back from the bottom of the division to clinch a play-off berth (fifth place) on the last day of the regular season. The team advanced to the play-off final after eliminating Sorrento in the semi-finals 3–1 on aggregate. Following the play-off final, after four years of Lega Pro football, Verona were promoted back to Serie B after a 2–1 aggregate win over Salernitana on 19 June 2011. On 18 May 2013, Verona finished second in Serie B and were promoted to Serie A after an eleven-year absence. Their return to the top flight began against title contenders Milan and Roma, beating the former 2–1 and losing to the latter 3–0. 
The team continued at a steady pace, finishing the first half of the season with 32 points and sitting in sixth place, eleven points behind the closest UEFA Champions League spot and tied with Internazionale for the final UEFA Europa League spot. Verona, however, ultimately finished the year in tenth. During the 2015–16 season, Verona failed to win a single match until edging Atalanta 2–1 at home on 3 February 2016, twenty-three games into the campaign, and were relegated from Serie A at the end of the season. In the 2016–17 Serie B season, Hellas Verona finished second in the table and were automatically promoted back to Serie A. Hellas lasted one season back in the top division, finishing second from last in the 2017–18 Serie A season and being relegated back to Serie B. At the end of the 2018–19 season, Hellas finished in fifth position and achieved promotion back to Serie A after defeating Cittadella 3–0 in the second leg of their promotion play-off to win 3–2 on aggregate. The club's return to the top flight in the 2019–20 Serie A season, in which it was considered a strong relegation candidate at the beginning of the campaign, was a successful one, with a ninth-placed finish. Heavily reliant on the defensive solidity of 20-year-old centre-back Marash Kumbulla, Amir Rrahmani and goalkeeper Marco Silvestri, along with the consistent performances of midfielder Sofyan Amrabat, Verona were a surprise contender for Europa League qualification but fell out of the race after a downturn in form following the coronavirus break which temporarily halted the season. A 2–1 home win against eventual title winners Juventus in February was a highlight of a season in which the club kept 10 clean sheets and punched towards the higher end of the table despite its modest budget.
Ahead of Verona's second consecutive year in Serie A, key players Amrabat, Rrahmani and Kumbulla were poached by Fiorentina, Napoli and Roma respectively, and loanee Matteo Pessina returned to Atalanta. This left the club with a heavily weakened squad, and it was once again expected to struggle in the league prior to the season-opening match. Despite these losses in the transfer window, Verona again finished in the top half of the league table, ending the season in 10th place with 45 points. Successful breakout seasons for attacking midfielder Mattia Zaccagni, who was eventually called up to the Italian national team as a reward for his performances, as well as wing-backs Federico Dimarco and Davide Faraoni, were partly the reason for this achievement. At the end of the season, coach Ivan Jurić was appointed by Torino following his two impressive Serie A seasons with Verona, with the Gialloblu replacing him with Eusebio Di Francesco. Following another summer transfer window in which several of the club's star players were sold to Serie A rivals, namely Zaccagni transferring to Lazio, Marco Silvestri to Udinese and Dimarco returning to Inter, the beginning of the 2021–22 season proved much more difficult for Verona: Di Francesco was fired and replaced with Igor Tudor after just three matches, all of them defeats, leaving the club at the bottom of the table. Under Tudor's guidance, the team regained competitiveness, obtaining three wins (including victories over Lazio and Juventus), four draws and only one defeat in the next eight matches. Colours and badge The team's colours are yellow and blue. As a result, the club's most widely used nickname is gialloblù, literally "yellow-blue" in Italian. The colours represent the city itself, and Verona's emblem (a yellow cross on a blue shield) appears on most team apparel.
Home kits are traditionally blue, sometimes of a navy shade, combined with yellow details and trim, although the club has used a blue and yellow striped design on occasion. Two more team nicknames are Mastini (the mastiffs) and Scaligeri, both references to Mastino I della Scala of the Della Scala princes who ruled the city during the 13th and 14th centuries. The Scala family coat of arms is depicted on the team's jersey and on its trademark logo, introduced in 1995, which shows a stylised image of two large, powerful mastiffs facing opposite directions. In essence, the term "scaligeri" is synonymous with Veronese, and therefore can describe anything or anyone from Verona (e.g., Chievo Verona, a different team that also links itself to the Scala family – specifically to Cangrande I della Scala). Stadium Since 1963, the club have played at the Stadio Marc'Antonio Bentegodi, which has a capacity of 39,211. The ground was shared with Hellas's rivals Chievo Verona until 2021. It was used as a venue for some matches of the 1990 FIFA World Cup. Derby with Chievo Verona The intercity fixtures against Chievo Verona are known as the "Derby della Scala". The name refers to the Scaligeri, or della Scala, aristocratic family, who were rulers of Verona during the Middle Ages and early Renaissance. In the 2001–02 season, both Hellas Verona and city rivals Chievo Verona were playing in Serie A. The first ever Verona derby in Serie A took place on 18 November 2001, while both teams were ranked among the top four. The match was won by Hellas, 3–2. Chievo got revenge in the return match in spring 2002, winning 2–1. Verona thus became the fifth city in Italy, after Milan, Rome, Turin and Genoa, to host a cross-town derby in Serie A.
Honours
Serie A
Champions: 1984–85
Serie B
Champions: 1956–57, 1981–82, 1998–99
Play-off winners: 2018–19
Coppa Italia
Runners-up: 1975–76, 1982–83, 1983–84
Divisional movements
Sponsors
Kit sponsors
1980–87: Adidas
1987–89: Hummel
1989–91: Adidas
1991–95: Uhlsport
1995–00: Errea
2000–03: Lotto
2003–06: Legea
2006–13: Asics
2013–18: Nike
2018–present: Macron
Official sponsors
1982–86: Canon
1989–96: Rana
1996–97: Ferroli
1997–98: ZG Camini Inox
1998–99: Atreyu Immobiliare
1999–00: Salumi Marsilli
2000–01: Net Business
2001–02: Amica Chips
2002–06: Clerman
2006–07: Unika
2007–08: No sponsor
2008–10: Giallo
2010–11: Banca Di Verona/Sicurint Group, Protec/Consorzio Asimov
2011–12: AGSM/Sicurint Group, Protec/Leaderform
2012–13: AGSM, Leaderform
2013–14: Franklin & Marshall/Manila Grace, AGSM/Leaderform
2014–15: Franklin & Marshall, AGSM/Leaderform
2015–18: Metano Nord, Leaderform
2018–present: AirDolomiti, Gruppo Synergy
2020–present: Kiratech S.P.A.
Current squad
First team squad
On loan
Managers
András Kuttik (1929–1932)
Sándor Peics (1939)
Karl Stürmer (1941–1942)
Luigi Ferrero (1954)
Federico Allasio (1955)
Aldo Olivieri (1959–1960)
Romolo Bizzotto (1960–1961)
Giancarlo Cadé (1964–1965)
Omero Tognon (1965–1966)
Nils Liedholm (1966–1968)
Giancarlo Cadé (1968–1969, 1972–1975)
Luigi Mascalaito (1974–1979)
Ferruccio Valcareggi (1975–1978)
Giuseppe Chiappella (1978–1979)
Fernando Veneranda (1979–1980)
Giancarlo Cadé (1980–1981)
Osvaldo Bagnoli (1981–1990)
Eugenio Fascetti (1 July 1990 – 28 March 1992)
Nils Liedholm (29 March 1992 – 30 June 1992)
Edoardo Reja (1 July 1992 – 30 June 1993)
Bortolo Mutti (1 July 1994 – 30 June 1995)
Attilio Perotti (1 July 1995 – 30 June 1996)
Luigi Cagni (1 July 1996 – 30 June 1998)
Cesare Prandelli (1 July 1998 – 30 June 2000)
Attilio Perotti (1 July 2000 – 30 June 2001)
Alberto Malesani (4 July 2001 – 10 June 2003)
Sandro Salvioni (1 July 2003 – 30 June 2004)
Massimo Ficcadenti (20 July 2004 – 24 December 2006)
Giampiero Ventura
(24 December 2006 – 30 June 2007)
Franco Colomba (1 July 2007 – 8 October 2007)
Davide Pellegrini (9 October 2007 – 30 December 2007)
Maurizio Sarri (31 December 2007 – 27 February 2008)
Davide Pellegrini (28 February 2008 – 11 June 2008)
Gian Marco Remondina (12 June 2008 – 10 May 2010)
Giovanni Vavassori (10 May 2010 – 21 June 2010)
Giuseppe Giannini (22 June 2010 – 8 November 2010)
Andrea Mandorlini (9 November 2010 – 30 November 2015)
Luigi Delneri (1 December 2015 – 23 May 2016)
Fabio Pecchia (1 June 2016 – 21 June 2018)
Fabio Grosso (21 June 2018 – 1 May 2019)
Ivan Jurić (14 June 2019 – 28 May 2021)

World Cup players

The following players have been selected by their country for the FIFA World Cup finals while playing for Hellas Verona.
Roberto Tricella (1986)
Antonio Di Gennaro (1986)
Giuseppe Galderisi (1986)
Preben Elkjær (1986)
Hans-Peter Briegel (1986)
Nelson Gutiérrez (1990)
Ruslan Nigmatullin (2002)
Anthony Šerić (2002)
Lee Seung-woo (2018)

In Europe

European Cup

UEFA Cup

References

Further reading

External links

Football clubs in Italy
Football clubs in Veneto
Association football clubs established in 1903
Italian football First Division clubs
Serie A clubs
Serie B clubs
Serie C clubs
Serie A winning clubs
1903 establishments in Italy
Sport in Verona
https://en.wikipedia.org/wiki/Hinayana
Hinayana
Hīnayāna is a Sanskrit term literally meaning the "small/deficient vehicle". Classical Chinese and Tibetan teachers translate it as "smaller vehicle". The term, which appeared around the first or second century, is applied collectively to the Śrāvakayāna and Pratyekabuddhayāna paths. Hīnayāna was often contrasted with Mahāyāna, which means the "great vehicle". In 1950 the World Fellowship of Buddhists declared that the term Hīnayāna should not be used when referring to any form of Buddhism existing today. In the past, the term was widely used by Western scholars to cover "the earliest system of Buddhist doctrine", as the Monier-Williams Sanskrit-English Dictionary put it. Modern Buddhist scholarship has deprecated the pejorative term, and instead uses the term Nikaya Buddhism to refer to the early Buddhist schools. Hinayana has also been used as a synonym for Theravada, which is the main tradition of Buddhism in Sri Lanka and Southeast Asia; this usage is considered inaccurate and derogatory. Robert Thurman writes, "'Nikaya Buddhism' is a coinage of Professor Masatoshi Nagatomi of Harvard University, who suggested it to me as a usage for the eighteen schools of Indian Buddhism to avoid the term 'Hinayana Buddhism,' which is found offensive by some members of the Theravada tradition." Within Mahayana Buddhism, there were a variety of interpretations as to whom or to what the term Hinayana referred. Kalu Rinpoche stated that the "lesser" or "greater" designation "did not refer to economic or social status, but concerned the spiritual capacities of the practitioner".

Etymology

The word hīnayāna is formed of hīna: "little", "poor", "inferior", "abandoned", "deficient", "defective"; and yāna (यान): "vehicle", where "vehicle" means "a way of going to enlightenment". The Pali Text Society's Pali-English Dictionary (1921–25) defines hīna in even stronger terms, with a semantic field that includes "poor, miserable; vile, base, abject, contemptible", and "despicable".
The term was translated by Kumārajīva and others into Classical Chinese as "small vehicle" (小 meaning "small", 乘 meaning "vehicle"), although earlier and more accurate translations of the term also exist. In Mongolian (Baga Holgon) the term for hinayana also means "small" or "lesser" vehicle, while in Tibetan there are at least two words to designate the term: theg chung, meaning "small vehicle", and theg dman, meaning "inferior vehicle" or "inferior spiritual approach". Thrangu Rinpoche has emphasized that hinayana in no way implies "inferior". In his translation and commentary of Asanga's Distinguishing Dharma from Dharmata, he writes, "all three traditions of hinayana, mahayana, and vajrayana were practiced in Tibet and that the hinayana which literally means "lesser vehicle" is in no way inferior to the mahayana."

Origins

According to Jan Nattier, it is most likely that the term Hīnayāna postdates the term Mahāyāna and was only added at a later date due to antagonism and conflict between the bodhisattva and śrāvaka ideals. The sequence of terms then began with the term Bodhisattvayāna ("bodhisattva-vehicle"), which was given the epithet Mahāyāna ("Great Vehicle"). It was only later, after attitudes toward the bodhisattva teachings had become more critical, that the term Hīnayāna was created as a back-formation, contrasting with the already established term Mahāyāna. The earliest Mahāyāna texts often use the term Mahāyāna as an epithet and synonym for Bodhisattvayāna, but the term Hīnayāna is comparatively rare in early texts, and is usually not found at all in the earliest translations. Therefore, the often-perceived symmetry between Mahāyāna and Hīnayāna can be deceptive, as the terms were not actually coined in relation to one another in the same era. According to Paul Williams, "the deep-rooted misconception concerning an unfailing, ubiquitous fierce criticism of the Lesser Vehicle by the [Mahāyāna] is not supported by our texts."
Williams states that while evidence of conflict is present in some cases, there is also substantial evidence demonstrating peaceful coexistence between the two traditions.

Mahāyāna members of the early Buddhist schools

Although the 18–20 early Buddhist schools are sometimes loosely classified as Hīnayāna in modern times, this is not necessarily accurate. There is no evidence that Mahāyāna ever referred to a separate formal school of Buddhism, but rather to a certain set of ideals and later doctrines. Paul Williams has also noted that the Mahāyāna never had nor ever attempted to have a separate vinaya or ordination lineage from the early Buddhist schools, and therefore bhikṣus and bhikṣuṇīs adhering to the Mahāyāna formally adhere to the vinaya of an early school. This continues today with the Dharmaguptaka ordination lineage in East Asia and the Mūlasarvāstivāda ordination lineage in Tibetan Buddhism. Mahāyāna was never a separate sect of the early schools. From the accounts of Chinese monks visiting India, we know that both Mahāyāna and non-Mahāyāna monks in India often lived in the same monasteries side by side. The seventh-century Chinese Buddhist monk and pilgrim Yijing wrote about the relationship between the various "vehicles" and the early Buddhist schools in India. He wrote, "There exist in the West numerous subdivisions of the schools which have different origins, but there are only four principal schools of continuous tradition." These schools are the Mahāsāṃghika Nikāya, Sthavira Nikāya, Mūlasarvāstivāda Nikāya, and Saṃmitīya Nikāya. Explaining their doctrinal affiliations, he then writes, "Which of the four schools should be grouped with the Mahāyāna or with the Hīnayāna is not determined." That is to say, there was no simple correspondence between a Buddhist school and whether its members learned "Hīnayāna" or "Mahāyāna" teachings.
To identify as "Hīnayāna" entire schools that contained not only śrāvakas and pratyekabuddhas but also Mahāyāna bodhisattvas would have been to attack the schools of their fellow Mahāyānists as well as their own. Instead, what is demonstrated in the definition of Hīnayāna given by Yijing is that the term referred to individuals based on doctrinal differences.

Hīnayāna as Śrāvakayāna

Scholar Isabelle Onians asserts that although "the Mahāyāna ... very occasionally referred to earlier Buddhism as the Hinayāna, the Inferior Way, [...] the preponderance of this name in the secondary literature is far out of proportion to occurrences in the Indian texts." She notes that the term Śrāvakayāna was "the more politically correct and much more usual" term used by Mahāyānists. Jonathan Silk has argued that the term "Hinayana" was used to refer to whomever one wanted to criticize on any given occasion, and did not refer to any definite grouping of Buddhists.

Hīnayāna and Theravāda

Views of Chinese pilgrims

The Chinese monk Yijing, who visited India in the 7th century, distinguished between Mahāyāna and Hīnayāna practitioners in his writings. In the 7th century, the Chinese Buddhist monk Xuanzang described the concurrent existence of the Mahāvihara and the Abhayagiri vihāra in Sri Lanka. He refers to the monks of the Mahāvihara as the "Hīnayāna Sthaviras" and the monks of Abhayagiri vihāra as the "Mahāyāna Sthaviras". Xuanzang further writes, "The Mahāvihāravāsins reject the Mahāyāna and practice the Hīnayāna, while the Abhayagirivihāravāsins study both Hīnayāna and Mahāyāna teachings and propagate the Tripiṭaka."

Philosophical differences

Mahayanists were primarily in philosophical dialectic with the Vaibhāṣika school of Sarvāstivāda, which had by far the most "comprehensive edifice of doctrinal systematics" of the nikāya schools.
With this in mind, it is sometimes argued that the Theravada would not have been considered a "Hinayana" school by Mahayanists because, unlike the now-extinct Sarvastivada school, the primary object of Mahayana criticism, the Theravada school does not claim the existence of independent dharmas; in this it maintains the attitude of early Buddhism. Additionally, the concept of the bodhisattva as one who puts off enlightenment rather than reaching awakening as soon as possible has no roots in Theravada textual or cultural contexts, current or historical. Aside from the Theravada schools being geographically distant from the Mahayana, the Hinayana distinction is used in reference to certain views and practices that had come to be found within the Mahayana tradition itself. Both Theravada and Mahayana schools stress the urgency of one's own awakening in order to end suffering. Some contemporary Theravadin figures have thus indicated a sympathetic stance toward the Mahayana philosophy found in the Heart Sutra and the Mūlamadhyamakakārikā. The Mahayanists were bothered by the substantialist thought of the Sarvāstivādins and Sautrāntikins, and in emphasizing the doctrine of śūnyatā, David Kalupahana holds, they endeavored to preserve the early teaching. The Theravadins too refuted the Sarvāstivādins and Sautrāntikins (and followers of other schools) on the grounds that their theories were in conflict with the non-substantialism of the canon. The Theravada arguments are preserved in the Kathavatthu.

Opinions of scholars

Some western scholars still regard the Theravada school to be one of the Hinayana schools referred to in Mahayana literature, or regard Hinayana as a synonym for Theravada. These scholars understand the term to refer to schools of Buddhism that did not accept the teachings of the Mahāyāna sūtras as authentic teachings of the Buddha.
At the same time, scholars have objected to the pejorative connotation of the term Hinayana and some scholars do not use it for any school.

Notes

Sources

External links

"Theravada - Mahayana Buddhism", Dr. W. Rahula's article
Mahayana - Hinayana - Theravada, introduced by Binh Hanson, webmaster of "BuddhaSasana" (www.budsas.org)

Pejorative terms
Buddhist philosophical concepts
https://en.wikipedia.org/wiki/Humphrey%20Bogart
Humphrey Bogart
Humphrey DeForest Bogart (December 25, 1899 – January 14, 1957), nicknamed Bogie, was an American film and stage actor. His performances in Classical Hollywood cinema films made him an American cultural icon. In 1999, the American Film Institute selected Bogart as the greatest male star of classic American cinema. Bogart began acting in Broadway shows, entered motion pictures with Up the River (1930) for Fox, and appeared in supporting roles for the next decade, sometimes portraying gangsters. He was praised for his work as Duke Mantee in The Petrified Forest (1936) but remained secondary to other actors Warner Bros. cast in lead roles. His breakthrough from supporting roles to stardom came with High Sierra (1941) and The Maltese Falcon (1941), considered one of the first great noir films. Bogart's private detectives, Sam Spade (in The Maltese Falcon) and Philip Marlowe (in 1946's The Big Sleep), became the models for detectives in other noir films. His most significant romantic lead role was with Ingrid Bergman in Casablanca (1942), which earned him his first nomination for the Academy Award for Best Actor. Forty-four-year-old Bogart and 19-year-old Lauren Bacall fell in love when they filmed To Have and Have Not (1944). In 1945, a few months after principal photography for The Big Sleep, their second film together, he divorced his third wife and married Bacall. After their marriage, they played each other's love interest in the mystery thrillers Dark Passage (1947) and Key Largo (1948). Bogart's performances in The Treasure of the Sierra Madre (1948) and In a Lonely Place (1950) are now considered among his best, although they were not recognized as such when the films were released. He reprised those unsettled, unstable characters as a World War II naval-vessel commander in The Caine Mutiny (1954), which was a critical and commercial hit and earned him another Best Actor nomination.
He won the Academy Award for Best Actor for his portrayal of a cantankerous river steam launch skipper opposite Katharine Hepburn's missionary in the World War I African adventure The African Queen (1951). Other significant roles in his later years included The Barefoot Contessa (1954) with Ava Gardner and his on-screen competition with William Holden for Audrey Hepburn in Sabrina (1954). A heavy smoker and drinker, Bogart died from esophageal cancer in January 1957.

Early life and education

Humphrey DeForest Bogart was born on Christmas Day 1899 in New York City, the eldest child of Belmont DeForest Bogart (1867–1934) and Maud Humphrey (1868–1940). Belmont was the only child of the unhappy marriage of Adam Welty Bogart (a Canandaigua, New York, innkeeper) and Julia Augusta Stiles, a wealthy heiress. The name "Bogart" derives from the Dutch surname "Bogaert". Belmont and Maud married in June 1898. He was a Presbyterian, of English and Dutch descent, and a descendant of Sarah Rapelje (the first European child born in New Netherland). Maud was an Episcopalian of English heritage, and a descendant of Mayflower passenger John Howland. Humphrey was raised Episcopalian, but was non-practicing for most of his adult life. The date of Bogart's birth has been disputed. Clifford McCarty wrote that Warner Bros. publicity department had altered it to January 23, 1900, "to foster the view that a man born on Christmas Day couldn't really be as villainous as he appeared to be on screen". The "corrected" January birthdate subsequently appeared—and in some cases, remains—in many otherwise-authoritative sources. According to biographers Ann M. Sperber and Eric Lax, Bogart always celebrated his birthday on December 25 and listed it on official records (including his marriage license). Lauren Bacall wrote in her autobiography that Bogart's birthday was always celebrated on Christmas Day, saying that he joked about being cheated out of a present every year.
Sperber and Lax noted that a birth announcement in the Ontario County Times of January 10, 1900 rules out the possibility of a January 23 birthdate; state and federal census records from 1900 also report a Christmas 1899 birthdate. Belmont, Bogart's father, was a cardiopulmonary surgeon. Maud was a commercial illustrator who received her art training in New York and France, including study with James Abbott McNeill Whistler. She later became art director of the fashion magazine The Delineator and a militant suffragette. Maud used a drawing of baby Humphrey in an advertising campaign for Mellins Baby Food. She earned over $50,000 a year at the peak of her career – a very large sum of money at the time, and considerably more than her husband's $20,000. The Bogarts lived in an Upper West Side apartment, and had a cottage on a 55-acre estate on Canandaigua Lake in upstate New York. When he was young, Bogart's group of friends at the lake would put on plays. He had two younger sisters: Frances ("Pat") and Catherine Elizabeth ("Kay"). Bogart's parents were busy in their careers, and frequently fought. Very formal, they showed little emotion towards their children. Maud told her offspring to call her "Maud" instead of "Mother", and showed little, if any, physical affection for them. When she was pleased, she "[c]lapped you on the shoulder, almost the way a man does", Bogart recalled. "I was brought up very unsentimentally but very straightforwardly. A kiss, in our family, was an event. Our mother and father didn't glug over my two sisters and me." Bogart was teased as a boy for his curls, tidiness, the "cute" pictures his mother had him pose for, the Little Lord Fauntleroy clothes in which she dressed him, and for his first name. He inherited a tendency to needle, a fondness for fishing, a lifelong love of boating, and an attraction to strong-willed women from his father. 
Bogart attended the private Delancey School until the fifth grade, and then attended the prestigious Trinity School. He was an indifferent, sullen student who showed no interest in after-school activities. Bogart later attended Phillips Academy, a boarding school to which he was admitted based on family connections. Although his parents hoped that he would go on to Yale University, Bogart left Phillips in 1918 after one semester. He failed four out of six classes. Several reasons have been given; according to one, he was expelled for throwing the headmaster (or a groundskeeper) into Rabbit Pond on campus. Another cited smoking, drinking, poor academic performance, and (possibly) inappropriate comments made to the staff. In a third scenario, Bogart was withdrawn by his father for failing to improve his grades. His parents were deeply disappointed in their failed plans for his future.

Navy

With no viable career options, Bogart enlisted in the United States Navy in the spring of 1918 (during World War I), and served as a coxswain. He recalled later, "At eighteen, war was great stuff. Paris! Sexy French girls! Hot damn!" Bogart was recorded as a model sailor, who spent most of his sea time after the armistice ferrying troops back from Europe. Bogart left the service on June 18, 1919, at the rank of Boatswain's Mate Third Class. During the Second World War, Bogart attempted to reenlist in the Navy but was rejected due to his age. He then volunteered for the Coast Guard Temporary Reserve in 1944, patrolling the California coastline in his yacht, the Santana. He may have received his trademark scar and developed his characteristic lisp during his naval stint. There are several conflicting stories. In one, his lip was cut by shrapnel when his ship was shelled. The ship was never shelled, however, and Bogart may not have been at sea before the armistice.
Another story, held by longtime friend Nathaniel Benchley, was that Bogart was injured while taking a prisoner to Portsmouth Naval Prison in Kittery, Maine. While changing trains in Boston, the handcuffed prisoner reportedly asked Bogart for a cigarette. When Bogart looked for a match, the prisoner smashed him across the mouth with the cuffs (cutting Bogart's lip) and fled before he was recaptured and imprisoned. In an alternative version, Bogart was struck in the mouth by a handcuff loosened while freeing his charge; the other handcuff was still around the prisoner's wrist. By the time Bogart was treated by a doctor, a scar had formed. David Niven said that when he first asked Bogart about his scar, however, he said that it was caused by a childhood accident. "Goddamn doctor", Bogart later told Niven. "Instead of stitching it up, he screwed it up." According to Niven, the stories that Bogart got the scar during wartime were made up by the studios. His post-service physical did not mention the lip scar, although it noted many smaller scars. When actress Louise Brooks met Bogart in 1924, he had scar tissue on his upper lip which Brooks said Bogart may have had partially repaired before entering the film industry in 1930. Brooks said that his "lip wound gave him no speech impediment, either before or after it was mended."

Acting

First performances

Bogart returned home to find his father in poor health, his medical practice faltering, and much of the family's wealth lost in bad timber investments. His character and values developed separately from his family during his navy days, and he began to rebel. Bogart became a liberal who disliked pretension, phonies and snobs, sometimes defying conventional behavior and authority; he was also well-mannered, articulate, punctual, self-effacing and standoffish. After his naval service, he worked as a shipper and a bond salesman, joining the Coast Guard Reserve. Bogart resumed his friendship with Bill Brady Jr.
(whose father had show-business connections), and obtained an office job with William A. Brady's new World Films company. Although he wanted to try his hand at screenwriting, directing, and production, he excelled at none. Bogart was stage manager for Brady's daughter Alice's play A Ruined Lady. He made his stage debut a few months later as a Japanese butler in Alice's 1921 play Drifting (nervously delivering one line of dialogue), and appeared in several of her subsequent plays. Although Bogart had been raised to believe that acting was a lowly profession, he liked the late hours actors kept and the attention they received: "I was born to be indolent and this was the softest of rackets." He spent much of his free time in speakeasies, drinking heavily. A barroom brawl at this time was also a purported cause of Bogart's lip damage, dovetailing with Louise Brooks' account. Preferring to learn by doing, he never took acting lessons. Bogart was persistent and worked steadily at his craft, appearing in at least 17 Broadway productions between 1922 and 1935. He played juveniles or romantic supporting roles in drawing-room comedies and is reportedly the first actor to say, "Tennis, anyone?" on stage. According to Alexander Woollcott, Bogart "is what is usually and mercifully described as inadequate." Other critics were kinder. Heywood Broun, reviewing Nerves, wrote: "Humphrey Bogart gives the most effective performance ... both dry and fresh, if that be possible". He played a juvenile lead (reporter Gregory Brown) in Lynn Starling's comedy Meet the Wife, which had a successful 232-performance run at the Klaw Theatre from November 1923 through July 1924. Bogart disliked his trivial, effeminate early-career parts, calling them "White Pants Willie" roles. While playing a double role in Drifting at the Playhouse Theatre in 1922, he met actress Helen Menken; they were married on May 20, 1926, at the Gramercy Park Hotel in New York City. 
Divorced on November 18, 1927, they remained friends. Menken said in her divorce filing that Bogart valued his career more than marriage, citing neglect and abuse. He married actress Mary Philips on April 3, 1928, at her mother's apartment in Hartford, Connecticut; Bogart and Philips had worked together in the play Nerves during its brief run at the Comedy Theatre in 1924. Theatrical production dropped off sharply after the Wall Street Crash of 1929, and many of the more-photogenic actors headed for Hollywood. Bogart debuted on film with Helen Hayes in the 1928 two-reeler, The Dancing Town, a complete copy of which has not been found. He also appeared with Joan Blondell and Ruth Etting in a Vitaphone short, Broadway's Like That (1930), which was rediscovered in 1963.

Broadway to Hollywood

Bogart signed a contract with the Fox Film Corporation for $750 a week. There he met Spencer Tracy, a Broadway actor whom Bogart liked and admired, and the two men became close friends and drinking companions. In 1930, Tracy first called him "Bogie". Tracy made his feature film debut in his only movie with Bogart, John Ford's early sound film Up the River (1930), in which their leading roles were as inmates. Tracy received top billing, but Bogart's picture appeared on the film's posters. He was billed fourth behind Tracy, Claire Luce and Warren Hymer but his role was almost as large as Tracy's and much larger than Luce's or Hymer's. A quarter of a century later, the two men planned to make The Desperate Hours together. Both insisted upon top billing, however; Tracy dropped out, and was replaced by Fredric March. Bogart then had a supporting role in Bad Sister (1931) with Bette Davis. Bogart shuttled back and forth between Hollywood and the New York stage from 1930 to 1935, out of work for long periods. His parents had separated; his father died in 1934 in debt, which Bogart eventually paid off. He inherited his father's gold ring, which he wore in many of his films.
At his father's deathbed, Bogart finally told him how much he loved him. Bogart's second marriage was rocky; dissatisfied with his acting career, depressed and irritable, he drank heavily.

In Hollywood permanently: The Petrified Forest

In 1934, Bogart starred in the Broadway play Invitation to a Murder at the Theatre Masque (renamed the John Golden Theatre in 1937). Its producer, Arthur Hopkins, heard the play from offstage; he sent for Bogart and offered him the role of escaped murderer Duke Mantee in Robert E. Sherwood's forthcoming play, The Petrified Forest. The play had 197 performances at the Broadhurst Theatre in New York in 1935. Although Leslie Howard was the star, The New York Times critic Brooks Atkinson said that the play was "a peach ... a roaring Western melodrama ... Humphrey Bogart does the best work of his career as an actor." Bogart said that the play "marked my deliverance from the ranks of the sleek, sybaritic, stiff-shirted, swallow-tailed 'smoothies' to which I seemed condemned to life." However, he still felt insecure. Warner Bros. bought the screen rights to The Petrified Forest in 1935. The play seemed ideal for the studio, which was known for its socially-realistic pictures for a public entranced by real-life criminals such as John Dillinger and Dutch Schultz. Bette Davis and Leslie Howard were cast. Howard, who held the production rights, made it clear that he wanted Bogart to star with him. The studio tested several Hollywood veterans for the Duke Mantee role and chose Edward G. Robinson, who had star appeal and was due to make a film to fulfill his contract. Bogart cabled news of this development to Howard in Scotland, who replied: "Att: Jack Warner Insist Bogart Play Mantee No Bogart No Deal L.H.". When Warner Bros. saw that Howard would not budge, they gave in and cast Bogart. Jack Warner wanted Bogart to use a stage name, but Bogart declined, having built a reputation with his name in Broadway theater.
The film version of The Petrified Forest was released in 1936. According to Variety, "Bogart's menace leaves nothing wanting". Frank S. Nugent wrote for The New York Times that the actor "can be a psychopathic gangster more like Dillinger than the outlaw himself." The film was successful at the box office, earning $500,000 in rentals, and made Bogart a star. He never forgot Howard's favor and named his only daughter, Leslie Howard Bogart, after him in 1952.

Supporting gangster and villain roles

Despite his success in The Petrified Forest (an "A movie"), Bogart signed a tepid 26-week contract at $550 per week and was typecast as a gangster in a series of B movie crime dramas. Although he was proud of his success, the fact that it derived from gangster roles weighed on him: "I can't get in a mild discussion without turning it into an argument. There must be something in my tone of voice, or this arrogant face—something that antagonizes everybody. Nobody likes me on sight. I suppose that's why I'm cast as the heavy." In spite of his success, Warner Bros. had no interest in raising Bogart's profile. His roles were repetitive and physically demanding; studios were not yet air-conditioned, and his tightly scheduled job at Warners was anything but the indolent and "peachy" actor's life he hoped for. Although Bogart disliked the roles chosen for him, he worked steadily. "In the first 34 pictures" for Warner's, he told George Frazier, "I was shot in 12, electrocuted or hanged in 8, and was a jailbird in 9". He averaged a film every two months between 1936 and 1940, sometimes working on two films at the same time. Bogart used these years to begin developing his film persona: a wounded, stoical, cynical, charming, vulnerable, self-mocking loner with a code of honor. Amenities at Warners were few, compared to the prestigious Metro-Goldwyn-Mayer.
Bogart thought that the Warners wardrobe department was cheap, and often wore his own suits in his films; he used his dog, Zero, to play Pard (his character's dog) in High Sierra. His disputes with Warner Bros. over roles and money were similar to those waged by the studio with more established and less malleable stars such as Bette Davis and James Cagney. Leading men at Warner Bros. included James Cagney and Edward G. Robinson. Most of the studio's better scripts went to them (or others), leaving Bogart with what was left: films like San Quentin (1937), Racket Busters (1938), and You Can't Get Away with Murder (1939). His only leading role during this period was in Dead End (1937, on loan to Samuel Goldwyn), as a gangster modeled after Baby Face Nelson. Bogart played violent roles so often that in Nevil Shute's 1939 novel, What Happened to the Corbetts, the protagonist replies "I've seen Humphrey Bogart with one often enough" when asked if he knows how to operate an automatic weapon. Although he played a variety of supporting roles in films such as Angels with Dirty Faces (1938), Bogart's roles were either rivals of characters played by Cagney and Robinson or a secondary member of their gang. In Black Legion (1937), a movie Graham Greene described as "intelligent and exciting, if rather earnest", he played a good man who was caught up with (and destroyed by) a racist organization. The studio cast Bogart as a wrestling promoter in Swing Your Lady (1938), a "hillbilly musical" which he reportedly considered his worst film performance. He played a rejuvenated, formerly-dead scientist in The Return of Doctor X (1939), his only horror film: "If it'd been Jack Warner's blood ... I wouldn't have minded so much. The trouble was they were drinking mine and I was making this stinking movie." His wife, Mary, had a stage hit in A Touch of Brimstone and refused to abandon her Broadway career for Hollywood. 
After the play closed, Mary relented; she insisted on continuing her career, however, and they divorced in 1937. On August 21, 1938, Bogart entered a turbulent third marriage to actress Mayo Methot, a lively, friendly woman when sober but paranoid and aggressive when drunk. She became convinced that Bogart was unfaithful to her (which he eventually was, with Lauren Bacall, while filming To Have and Have Not in 1944). They drifted apart; Methot's drinking increased, and she threw plants, crockery and other objects at Bogart. She set their house afire, stabbed him with a knife, and slashed her wrists several times. Bogart needled her; apparently enjoying confrontation, he was sometimes violent as well. The press called them "the Battling Bogarts". According to their friend, Julius Epstein, "The Bogart-Methot marriage was the sequel to the Civil War". Bogart bought a motor launch which he named Sluggy, his nickname for Methot: "I like a jealous wife ... We get on so well together (because) we don't have illusions about each other ... I wouldn't give you two cents for a dame without a temper." Louise Brooks said that "except for Leslie Howard, no one contributed as much to Humphrey's success as his third wife, Mayo Methot." Methot's influence was increasingly destructive, however, and Bogart also continued to drink. He had a lifelong disdain for pretension and phoniness, and was again irritated by his inferior films. Bogart rarely watched his own films and avoided premieres, issuing fake press releases about his private life to satisfy journalistic and public curiosity. When he thought an actor, director or studio had done something shoddy, he spoke up publicly about it. Bogart advised Robert Mitchum that the only way to stay alive in Hollywood was to be an "againster". He was not the most popular of actors, and some in the Hollywood community shunned him privately to avoid trouble with the studios.
Bogart once said, The Hollywood press, unaccustomed to such candor, was delighted. Early stardom High Sierra High Sierra (1941, directed by Raoul Walsh) featured a screenplay written by John Huston, Bogart's friend and drinking partner, adapted from a novel by W. R. Burnett, author of the novel on which Little Caesar was based. Paul Muni, George Raft, Cagney and Robinson turned down the lead role, giving Bogart the opportunity to play a character with some depth. Walsh initially opposed Bogart's casting, preferring Raft for the part. It was Bogart's last major film as a gangster; a supporting role followed in The Big Shot, released in 1942. He worked well with Ida Lupino, sparking jealousy from Mayo Methot. The film cemented a strong personal and professional connection between Bogart and Huston. Bogart admired (and somewhat envied) Huston for his skill as a writer; though a poor student, Bogart was a lifelong reader. He could quote Plato, Alexander Pope, Ralph Waldo Emerson and over a thousand lines of Shakespeare, and subscribed to the Harvard Law Review. Bogart admired writers; some of his best friends were screenwriters, including Louis Bromfield, Nathaniel Benchley, and Nunnally Johnson. He enjoyed intense, provocative conversation (accompanied by stiff drinks), as did Huston. Both were rebellious and enjoyed playing childish pranks. Huston was reportedly easily bored during production and admired Bogart (also bored easily off-camera) for his acting talent and his intense concentration on-set. The Maltese Falcon Now regarded as a classic film noir, The Maltese Falcon (1941) was John Huston's directorial debut. It was based on the Dashiell Hammett novel, first serialized in the pulp magazine Black Mask in 1929, which had been the basis of two earlier film versions; the second was Satan Met a Lady (1936), starring Bette Davis. Producer Hal B. 
Wallis initially offered to cast George Raft as the leading man, but Raft (then better known than Bogart) had a contract stipulating he was not required to appear in remakes. Fearing that it would be nothing more than a sanitized version of the pre-Production Code The Maltese Falcon (1931), Raft turned down the role to make Manpower with director Raoul Walsh, with whom he had worked on The Bowery in 1933. Huston then eagerly accepted Bogart as his Sam Spade. Complementing Bogart were co-stars Sydney Greenstreet, Peter Lorre, Elisha Cook Jr., and Mary Astor as the treacherous female foil. Bogart's sharp timing and facial expressions were praised by the cast and director as vital to the film's quick action and rapid-fire dialogue. It was a commercial hit, and a major triumph for Huston. Bogart was unusually happy with the film: "It is practically a masterpiece. I don't have many things I'm proud of ... but that's one". Casablanca Bogart played his first romantic lead in Casablanca (1942): Rick Blaine, an expatriate nightclub owner hiding from a suspicious past and negotiating a fine line among Nazis, the French underground, the Vichy prefect and unresolved feelings for his ex-girlfriend. Bosley Crowther wrote in his November 1942 New York Times review that Bogart's character was used "to inject a cold point of tough resistance to evil forces afoot in Europe today". The film, directed by Michael Curtiz and produced by Hal Wallis, featured Ingrid Bergman, Claude Rains, Sydney Greenstreet, Paul Henreid, Conrad Veidt, Peter Lorre and Dooley Wilson. Bogart and Bergman's on-screen relationship was based on professionalism rather than actual rapport, although Mayo Methot assumed otherwise. Off the set, the co-stars hardly spoke. Bergman (who had a reputation for affairs with her leading men) later said about Bogart, "I kissed him but I never knew him." Because she was taller, Bogart had blocks attached to his shoes in some scenes. 
Bogart is reported to have been responsible for the notion that Rick Blaine should be portrayed as a chess player, a metaphor for the relationships he maintained with friends, enemies, and allies. He played tournament-level chess (one division below master) in real life, often enjoying games with crew members and cast but finding his better in Paul Henreid. Casablanca won the Academy Award for Best Picture at the 16th Academy Awards for 1943. Bogart was nominated for Best Actor in a Leading Role, but lost to Paul Lukas for his performance in Watch on the Rhine. The film vaulted Bogart from fourth place to first in the studio's roster, however, finally overtaking James Cagney. He more than doubled his annual salary to over $460,000 by 1946, making him the world's highest-paid actor. Bogart went on United Service Organizations and War Bond tours with Methot in 1943 and 1944, making arduous trips to Italy and North Africa (including Casablanca). He was still required to perform in films with weak scripts, leading to conflicts with the front office. He starred in Conflict (1945, again with Greenstreet), but turned down God is My Co-Pilot that year. Bogart and Bacall To Have and Have Not Howard Hawks introduced Bogart and Lauren Bacall (1924–2014) while Bogart was filming Passage to Marseille (1944). The three subsequently collaborated on To Have and Have Not (1944), a loose adaptation of the Ernest Hemingway novel, and Bacall's film debut. It has several similarities to Casablanca: the same kind of hero and enemies, and a piano player (portrayed this time by Hoagy Carmichael) as a supporting character. When they met, Bacall was 19 and Bogart 44; he nicknamed her "Baby." A model since age 16, she had appeared in two failed plays. Bogart was attracted by Bacall's high cheekbones, green eyes, tawny blond hair, lean body, maturity, poise and earthy, outspoken honesty; he reportedly said, "I just saw your test. We'll have a lot of fun together". 
Their emotional bond was strong from the start, and their difference in age and acting experience encouraged a mentor-student dynamic. In contrast to the Hollywood norm, their affair was Bogart's first with a leading lady. His early meetings with Bacall were discreet and brief, their separations bridged by love letters. The relationship made it easier for Bacall to make her first film, and Bogart did his best to put her at ease with jokes and quiet coaching. He encouraged her to steal scenes; Howard Hawks also did his best to highlight her role, and found Bogart easy to direct. However, Hawks began to disapprove of the relationship. He considered himself Bacall's protector and mentor, and Bogart was usurping that role. Not usually drawn to his starlets, the married director also fell for Bacall; he told her that she meant nothing to Bogart and threatened to send her to the poverty-row studio Monogram Pictures. Bogart calmed her down, and then went after Hawks; Jack Warner settled the dispute, and filming resumed. Hawks said about Bacall, "Bogie fell in love with the character she played, so she had to keep playing it the rest of her life." The Big Sleep Months after wrapping To Have and Have Not, Bogart and Bacall were reunited for an encore: the film noir The Big Sleep (1946), based on the novel by Raymond Chandler with script help from William Faulkner. Chandler admired the actor's performance: "Bogart can be tough without a gun. Also, he has a sense of humor that contains that grating undertone of contempt." Although the film was completed and scheduled for release in 1945, it was withdrawn and re-edited to add scenes exploiting Bogart and Bacall's box-office chemistry in To Have and Have Not and the publicity surrounding their offscreen relationship. At the insistence of director Howard Hawks, production partner Charles K. 
Feldman agreed to a rewrite of Bacall's scenes to heighten the "insolent" quality which had intrigued critics such as James Agee and audiences of the earlier film, and a memo was sent to studio head Jack Warner. The dialogue, especially in the added scenes supplied by Hawks, was full of sexual innuendo. The film was successful, although some critics found its plot confusing and overly complicated. According to Chandler, Hawks and Bogart argued about who killed the chauffeur; when Chandler received an inquiry by telegram, he could not provide an answer. Marriage Bogart filed for divorce from Methot in February 1945. He and Bacall married in a small ceremony at the country home of Bogart's close friend, Pulitzer Prize-winning author Louis Bromfield, at Malabar Farm (near Lucas, Ohio) on May 21, 1945. They moved into a $160,000 white brick mansion in an exclusive neighborhood of Los Angeles's Holmby Hills. The marriage was a happy one, though there were tensions due to their differences. Bogart's drinking was sometimes problematic. He was a homebody, and Bacall liked the nightlife; he loved the sea, which made her seasick. Bogart bought the Santana, a sailing yacht, from actor Dick Powell in 1945. He found the sea a sanctuary and spent about thirty weekends a year on the water, with a particular fondness for sailing around Catalina Island: "An actor needs something to stabilize his personality, something to nail down what he really is, not what he is currently pretending to be." Bogart joined the Coast Guard Temporary Reserve, offering the Coast Guard use of the Santana. He reportedly attempted to enlist, but was turned down due to his age. Dark Passage and Key Largo The suspenseful Dark Passage (1947) was Bogart and Bacall's next collaboration. Vincent Parry (Bogart) is intent on finding the real murderer of the crime for which he was convicted and imprisoned. 
According to Bogart's biographer, Stefan Kanfer, it was "a production line film noir with no particular distinction". Bogart and Bacall's last pairing in a film was in Key Largo (1948), directed by John Huston. Edward G. Robinson was billed second (behind Bogart) as gangster Johnny Rocco: a seething, older synthesis of many of his early bad-guy roles. The billing question was hard-fought: at the end of at least one of the trailers, Robinson is listed above Bogart in the cast list in the last frame; and in the film itself, Robinson's name, appearing between Bogart's and Bacall's, is pictured slightly higher onscreen than the other two. Robinson had top billing over Bogart in their four previous films together: Bullets or Ballots (1936), Kid Galahad (1937), The Amazing Dr. Clitterhouse (1938) and Brother Orchid (1940). In some posters for Key Largo, Robinson's picture is substantially larger than Bogart's, and in the foreground manhandling Bacall while Bogart is in the background. The characters are trapped during a hurricane in a hotel owned by Bacall's father-in-law, portrayed by Lionel Barrymore. Claire Trevor won an Academy Award for Best Supporting Actress for her performance as Rocco's physically abused, alcoholic girlfriend. Later career The Treasure of the Sierra Madre Riding high in 1947 with a new contract which provided limited script refusal and the right to form his own production company, Bogart reunited with John Huston for The Treasure of the Sierra Madre: a stark tale of greed among three gold prospectors in Mexico. Lacking a love interest or a happy ending, it was considered a risky project. Bogart later said about co-star (and John Huston's father) Walter Huston, "He's probably the only performer in Hollywood to whom I'd gladly lose a scene." The film was shot in the heat of summer for greater realism and atmosphere and was grueling to make. James Agee wrote, "Bogart does a wonderful job with this character ... 
miles ahead of the very good work he has done before." Although John Huston won the Academy Award for Best Director and screenplay and his father won the Best Supporting Actor award, the film had mediocre box-office results. Bogart complained, "An intelligent script, beautifully directed—something different—and the public turned a cold shoulder on it." House Un-American Activities Committee Bogart, a liberal Democrat, organized the Committee for the First Amendment (a delegation to Washington, D.C.) opposing what he saw as the House Un-American Activities Committee's harassment of Hollywood screenwriters and actors. He later wrote an article, "I'm No Communist", for the March 1948 issue of Photoplay magazine distancing himself from the Hollywood Ten to counter negative publicity resulting from his appearance. Bogart wrote, "The ten men cited for contempt by the House Un-American Activities Committee were not defended by us." Santana Productions Bogart created his film company, Santana Productions (named after his yacht and the cabin cruiser in Key Largo), in 1948. The right to create his own company had left Jack Warner furious, fearful that other stars would do the same and further erode the major studios' power. In addition to pressure from freelancing actors such as Bogart, James Stewart, and Henry Fonda, the studios were beginning to buckle from the impact of television and the enforcement of antitrust laws which broke up theater chains. Bogart appeared in his final films for Warners, Chain Lightning (1950) and The Enforcer (1951). Except for Beat the Devil (1953), originally distributed in the United States by United Artists, the company released its films through Columbia Pictures; Columbia re-released Beat the Devil a decade later. In quick succession, Bogart starred in Knock on Any Door (1949), Tokyo Joe (1949), In a Lonely Place (1950), and Sirocco (1951). Santana also made two films without him: And Baby Makes Three (1949) and The Family Secret (1951). 
Although most lost money at the box office (ultimately forcing Santana's sale), at least two retain a reputation; In a Lonely Place is considered a film-noir high point. Bogart plays Dixon Steele, an embittered writer with a violent reputation who is the primary suspect in the murder of a young woman and falls in love with failed actress Laurel Gray (Gloria Grahame). Several Bogart biographers, and actress-writer Louise Brooks, have felt that this role is closest to the real Bogart. According to Brooks, the film "gave him a role that he could play with complexity, because the film character's pride in his art, his selfishness, drunkenness, lack of energy stabbed with lightning strokes of violence were shared by the real Bogart". The character mimics some of Bogart's personal habits, twice ordering the actor's favorite meal (ham and eggs). A parody of sorts of The Maltese Falcon, Beat the Devil was the final film for Bogart and John Huston. Co-written by Truman Capote, the eccentrically filmed story follows an amoral group of rogues, one of whom was portrayed by Peter Lorre, chasing an unattainable treasure. Bogart sold his interest in Santana to Columbia for over $1 million in 1955. The African Queen Outside Santana Productions, Bogart starred with Katharine Hepburn in the John Huston-directed The African Queen in 1951. The C. S. Forester novel on which it was based was overlooked and left undeveloped for 15 years until producer Sam Spiegel and Huston bought the rights. Spiegel sent Katharine Hepburn the book; she suggested Bogart for the male lead, believing that "he was the only man who could have played that part". Huston's love of adventure, his deep, longstanding friendship (and success) with Bogart, and the chance to work with Hepburn convinced the actor to leave Hollywood for a difficult shoot on location in the Belgian Congo. Bogart was to get 30 percent of the profits and Hepburn 10 percent, plus a relatively small salary for both. 
The stars met in London and announced that they would work together. Bacall came for the over-four-month duration, leaving their young son in Los Angeles. The Bogarts began the trip with a junket through Europe, including a visit with Pope Pius XII. Bacall later made herself useful as a cook, nurse and clothes washer; her husband said: "I don't know what we'd have done without her. She Luxed my undies in darkest Africa." Nearly everyone in the cast developed dysentery except Bogart and Huston, who subsisted on canned food and alcohol; Bogart said, "All I ate was baked beans, canned asparagus and Scotch whisky. Whenever a fly bit Huston or me, it dropped dead." Hepburn (a teetotaler) fared worse in the difficult conditions, losing weight and at one point becoming very ill. Bogart resisted Huston's insistence on using real leeches in a key scene where Charlie has to drag his steam launch through an infested marsh, and reasonable fakes were employed. The crew overcame illness, army-ant infestations, leaky boats, poor food, attacking hippos, poor water filters, extreme heat, isolation, and a boat fire to complete the film. Despite the discomfort of jumping from the boat into swamps, rivers and marshes, The African Queen apparently rekindled Bogart's early love of boats; when he returned to California, he bought a classic mahogany Hacker-Craft runabout which he kept until his death. His performance as cantankerous skipper Charlie Allnutt earned Bogart an Academy Award for Best Actor in 1951 (his only award of three nominations), and he considered it the best of his film career. Promising friends that if he won his speech would break the convention of thanking everyone in sight, Bogart advised Claire Trevor when she was nominated for Key Largo to "just say you did it all yourself and don't thank anyone". When Bogart won, however, he said: "It's a long way from the Belgian Congo to the stage of this theatre. It's nicer to be here. Thank you very much ... 
No one does it alone. As in tennis, you need a good opponent or partner to bring out the best in you. John and Katie helped me to be where I am now." Despite the award and its accompanying recognition, Bogart later said: "The way to survive an Oscar is never to try to win another one ... too many stars ... win it and then figure they have to top themselves ... they become afraid to take chances. The result: A lot of dull performances in dull pictures." The African Queen was Bogart's first starring Technicolor role. The Caine Mutiny Bogart dropped his asking price to obtain the role of Captain Queeg in Edward Dmytryk's drama, The Caine Mutiny (1954). Though he retained some of his old bitterness about having to do so, he delivered a strong performance in the lead; he received his final Oscar nomination and was the subject of a June 7, 1954 Time magazine cover story. Despite his success, Bogart was still melancholy; he grumbled to (and feuded with) the studio, while his health began to deteriorate. The character of Queeg was similar to his roles in The Maltese Falcon, Casablanca and The Big Sleep (the wary loner who trusts no one), but without their warmth and humor. Like his portrayal of Fred C. Dobbs in The Treasure of the Sierra Madre, Bogart's Queeg is a paranoid, self-pitying character whose small-mindedness eventually destroys him. Henry Fonda played a different role in the Broadway version of The Caine Mutiny, generating publicity for the film. Final roles For Sabrina (1954), Billy Wilder wanted Cary Grant for the older male lead but ultimately chose Bogart to play the conservative brother who competes with his younger, playboy sibling (William Holden) for the affection of the Cinderella-like Sabrina (Audrey Hepburn). Although Bogart was lukewarm about the part, he agreed to it on a handshake with Wilder without a finished script but with the director's assurance that he would take good care of Bogart during filming. 
The actor, however, got along poorly with his director and co-stars; he complained about the script's last-minute drafting and delivery, and accused Wilder of favoring Hepburn and Holden on and off the set. Wilder was the opposite of Bogart's ideal director (John Huston) in style and personality; Bogart complained to the press that Wilder was "overbearing" and "is [a] kind of Prussian German with a riding crop. He is the type of director I don't like to work with ... the picture is a crock of crap. I got sick and tired of who gets Sabrina." Wilder later said, "We parted as enemies but finally made up." Despite the acrimony, the film was successful; according to a review in The New York Times, Bogart was "incredibly adroit ... the skill with which this old rock-ribbed actor blends the gags and such duplicities with a manly manner of melting is one of the incalculable joys of the show". Joseph L. Mankiewicz's The Barefoot Contessa (1954) was filmed in Rome. In this Hollywood backstory, Bogart is a broken-down man, a cynical director-narrator who saves his career by making a star of a flamenco dancer modeled on Rita Hayworth. He was uneasy with Ava Gardner in the female lead; she had just broken up with his Rat Pack buddy Frank Sinatra, and Bogart was annoyed by her inexperienced performance. The actor was generally praised as the film's strongest part. During filming and while Bacall was home, Bogart resumed his discreet affair with Verita Bouvaire-Thompson (his long-time studio assistant, whom he drank with and took sailing). When Bacall found them together, she extracted an expensive shopping spree from her husband; the three traveled together after the shooting. Bogart could be generous with actors, particularly those who were blacklisted, down on their luck or having personal problems. 
During the filming of the Edward Dmytryk-directed The Left Hand of God (1955), he noticed his co-star Gene Tierney having a hard time remembering her lines and behaving oddly; he coached her, feeding Tierney her lines. Familiar with mental illness because of his sister's bouts of depression, Bogart encouraged Tierney to seek treatment. He also stood behind Joan Bennett and insisted on her as his co-star in Michael Curtiz's We're No Angels (1955) when a scandal made her persona non grata with studio head Jack Warner. Bogart had already been diagnosed with terminal cancer when shooting The Harder They Fall, a boxing drama with Rod Steiger in a supporting role. Steiger later mentioned Bogart's courage and geniality during his final performance: "Bogey and I got on very well. Unlike some other stars, when they had closeups, you might have been relegated to a two-shot, or cut out altogether. Bogey didn't play those games. He was a professional and had tremendous authority. He'd come in exactly at 9am and leave at precisely 6pm. I remember once walking to lunch in between takes and seeing Bogey on the lot. I shouldn't have because his work was finished for the day. I asked him why he was still on the lot, and he said, 'They want to shoot some retakes of my closeups because my eyes are too watery'. A little while later, after the film, somebody came up to me with word of Bogey's death. Then it struck me. His eyes were watery because he was in pain with the cancer. I thought: 'How dumb can you be, Rodney'!" Television and radio Bogart rarely performed on television, but he and Bacall appeared on Edward R. Murrow's Person to Person and disagreed on the answer to every question. He also appeared on The Jack Benny Show, where a surviving kinescope of the live telecast captures him in his only TV sketch-comedy performance (October 25, 1953). Bogart and Bacall worked on an early color telecast in 1955, an NBC adaptation of "The Petrified Forest" for Producers' Showcase. 
Bogart received top billing; Henry Fonda played Leslie Howard's role, and Bacall played Bette Davis's part. Jack Klugman, Richard Jaeckel, and Jack Warden played supporting roles. In the late 1990s, Bacall donated the only known kinescope of the 1955 performance (in black and white) to the Museum of Television & Radio (now the Paley Center for Media), where it remains archived for viewing in New York City and Los Angeles. It is now in the public domain. Bogart also performed radio adaptations of some of his best-known films, such as Casablanca and The Maltese Falcon, and recorded a radio series entitled Bold Venture with Bacall. Personal life Children Bogart became a father at age 49, when Bacall gave birth to Stephen Humphrey Bogart on January 6, 1949, during the filming of Tokyo Joe. The name was taken from Steve, Bogart's character's nickname in To Have and Have Not. Stephen became an author and biographer and hosted a television special about his father on Turner Classic Movies. The couple's daughter, Leslie Howard Bogart, was born on August 23, 1952. Her first and middle names honor Leslie Howard, Bogart's friend and co-star in The Petrified Forest. Rat Pack Bogart was a founding member and the original leader of the Hollywood Rat Pack. In the spring of 1955, after a long party in Las Vegas attended by Frank Sinatra, Judy Garland, her husband Sidney Luft, Michael Romanoff and his wife Gloria, David Niven, Angie Dickinson and others, Bacall surveyed the wreckage and said: "You look like a goddamn rat pack." The name stuck and was made official at Romanoff's in Beverly Hills. Sinatra was dubbed pack president; Bacall, den mother; Bogart (the actual leader), director of public relations; and Sid Luft, acting cage manager. Asked by columnist Earl Wilson what the group's purpose was, Bacall replied: "To drink a lot of bourbon and stay up late." 
Illness and death After signing a long-term deal with Warner Bros., Bogart predicted with glee that his teeth and hair would fall out before the contract ended. In 1955, however, his health was failing. In the wake of Santana, Bogart had formed a new company and had plans for a film (Melville Goodwin, U.S.A.) in which he would play a general and Bacall a press magnate. His persistent cough and difficulty eating became too serious to ignore, though, and he dropped the project. A heavy smoker and drinker, Bogart had developed esophageal cancer. He did not talk about his health and visited a doctor in January 1956 after considerable persuasion from Bacall. The disease worsened and several weeks later, on March 1, Bogart had surgery to remove his esophagus, two lymph nodes and a rib. The surgery was unsuccessful, and chemotherapy followed. He had additional surgery in November 1956, when the cancer had metastasized. Although he became too weak to walk up and down stairs, he joked despite the pain: "Put me in the dumbwaiter and I'll ride down to the first floor in style." The dumbwaiter was then altered to accommodate his wheelchair. Sinatra, Katharine Hepburn, and Spencer Tracy visited him on January 13, 1957. Bogart lapsed into a coma and died the following day, 20 days after his 57th birthday. A simple funeral was held at All Saints Episcopal Church, with music by Bogart's favorite composers: Johann Sebastian Bach and Claude Debussy. In attendance were some of Hollywood's biggest stars, including Hepburn, Tracy, Judy Garland, David Niven, Ronald Reagan, James Mason, Bette Davis, Danny Kaye, Joan Fontaine, Marlene Dietrich, James Cagney, Errol Flynn, Edward G. Robinson, Gregory Peck, Gary Cooper, Billy Wilder, and studio head Jack L. Warner. 
Bacall asked Tracy to give the eulogy; he was too upset, however, and John Huston spoke instead. Bogart was cremated, and his ashes were interred in Forest Lawn Memorial Park's Columbarium of Eternal Light in its Garden of Memory in Glendale, California. He was buried with a small, gold whistle that had been part of a charm bracelet he had given to Bacall before they married. On it was inscribed, "If you want anything, just whistle." This alluded to a scene in To Have and Have Not when Bacall's character says to Bogart shortly after their first meeting, "You know how to whistle, don't you, Steve? You just put your lips together and blow." Bogart's estate had a gross value of $910,146 and a net value of $737,668. Awards and honors On August 21, 1946, he recorded his hand- and footprints in cement in a ceremony at Grauman's Chinese Theatre. On February 8, 1960, Bogart was posthumously inducted into the Hollywood Walk of Fame with a motion-picture star at 6322 Hollywood Boulevard. Legacy and tributes After his death, a "Bogie cult" formed at the Brattle Theatre in Cambridge, Massachusetts, in Greenwich Village, and in France; this contributed to his increased popularity during the late 1950s and 1960s. In 1997, Entertainment Weekly magazine ranked Bogart the number-one movie legend of all time; two years later, the American Film Institute rated him the greatest male screen legend. Jean-Luc Godard's Breathless (1960) was the first film to pay tribute to Bogart. Over a decade later, in Woody Allen's comic paean Play It Again, Sam (1972), Bogart's ghost aids Allen's character: a film critic having difficulties with women who says that his "sex life has turned into the 'Petrified Forest'". The United States Postal Service honored Bogart with a stamp in its "Legends of Hollywood" series in 1997, the third figure recognized. 
At a ceremony attended by Lauren Bacall and the Bogart children, Stephen and Leslie, USPS governing-board chair Tirso del Junco delivered a tribute: "Today, we mark another chapter in the Bogart legacy. With an image that is small and yet as powerful as the ones he left in celluloid, we will begin today to bring his artistry, his power, his unique star quality, to the messages that travel the world." On June 24, 2006, 103rd Street between Broadway and West End Avenue in New York City was renamed Humphrey Bogart Place. Lauren Bacall and her son, Stephen Bogart, attended the ceremony. "Bogie would never have believed it", she said to the assembled city officials and onlookers. In popular culture Bogart has inspired multiple artists. Two Bugs Bunny cartoons featured the actor: Slick Hare (1947) and 8 Ball Bunny (1950, based on The Treasure of the Sierra Madre). The Man with Bogart's Face (1981, starring Bogart lookalike Robert Sacchi) was an homage to the actor. The lyrics of Bertie Higgins' 1981 song, "Key Largo", refer to two of Bogart's films, Key Largo and Casablanca. Filmography Notable radio appearances See also Bogart–Bacall syndrome List of actors with Academy Award nominations List of amateur chess players List of members of the American Legion References Bibliography Bacall, Lauren. By Myself. New York: Alfred Knopf, 1979. Bogart, Stephen Humphrey. Bogart: In Search of My Father. New York: Dutton, 1995. Citro, Joseph A., Mark Sceurman and Mark Moran. Weird New England. New York: Sterling, 2005. Fantle, David; Johnson, Tom (2009). Twenty Five Years of Celebrity Interviews from Vaudeville to Movies to TV, Reel to Real. Badger Books Inc. Halliwell, Leslie. Halliwell's Film, Video and DVD Guide. New York: Harper Collins Entertainment, 2004. Hepburn, Katharine. The Making of the African Queen. New York: Alfred Knopf, 1987. Hill, Jonathan and Jonah Ruddy. Bogart: The Man and the Legend. London: Mayflower-Dell, 1966. History of the U.S.S. 
Leviathan, Cruiser and Transport Forces, United States Atlantic Fleet, pp. 207–208. Humphrey Bogart. Time, June 7, 1954. Hyams, Joe. Bogart and Bacall: A Love Story. New York: David McKay Co., Inc., 1975. Hyams, Joe. Bogie: The Biography of Humphrey Bogart. New York: New American Library, 1966 (later editions renamed as: Bogie: The Definitive Biography of Humphrey Bogart). Kanfer, Stefan. Tough Without A Gun: The Life and Extraordinary Afterlife of Humphrey Bogart. New York: Knopf, 2011. Michael, Paul. Humphrey Bogart: The Man and his Films. New York: Bonanza Books, 1965. No ISBN. Porter, Darwin. The Secret Life of Humphrey Bogart: The Early Years (1899–1931). New York: Georgia Literary Association, 2003. Pym, John, ed. "Time Out" Film Guide. London: Time Out Group Ltd., 2004. Santas, Constantine. The Essential Humphrey Bogart. Lanham, Maryland: Rowman & Littlefield, 2016. Schickel, Richard. Bogie: A Celebration of the Life and Films of Humphrey Bogart. New York: Thomas Dunne Books/St. Martin's Press, 2006. Sperber, A. M. and Eric Lax. Bogart. New York: William Morrow & Co., 1997. Tierney, Gene with Mickey Herskowitz. Self-Portrait. New York: Peter Wyden, 1979. Wallechinsky, David and Amy Wallace. The New Book of Lists. Edinburgh, Scotland: Canongate, 2005. Wise, James. Stars in Blue: Movie Actors in America's Sea Services. Annapolis, Maryland: Naval Institute Press, 1997. Youngkin, Stephen D. The Lost One: A Life of Peter Lorre. Lexington, Kentucky: University Press of Kentucky, 2005. 
https://en.wikipedia.org/wiki/History%20painting
History painting
History painting is a genre in painting defined by its subject matter rather than by artistic style. History paintings usually depict a moment in a narrative story, rather than a specific and static subject, as in a portrait. The term is derived from the wider senses of the word historia in Latin and Italian, meaning "story" or "narrative", and essentially means "story painting". Most history paintings are not of scenes from history, especially paintings from before about 1850. In modern English, historical painting is sometimes used to describe the painting of scenes from history in its narrower sense, especially for 19th-century art, excluding religious, mythological, and allegorical subjects, which are included in the broader term history painting, and before the 19th century were the most common subjects for history paintings. History paintings almost always contain a number of figures, often a large number, and normally show some type of action that is a moment in a narrative. The genre includes depictions of moments in religious narratives, above all the Life of Christ, as well as narrative scenes from mythology, and also allegorical scenes. These groups were for long the most frequently painted; works such as Michelangelo's Sistine Chapel ceiling are therefore history paintings, as are most very large paintings before the 19th century. The term covers large paintings in oil on canvas or fresco produced between the Renaissance and the late 19th century, after which the term is generally not used even for the many works that still meet the basic definition. History painting may be used interchangeably with historical painting, and was especially so used before the 20th century. Where a distinction is made, "historical painting" is the painting of scenes from secular history, whether specific episodes or generalized scenes. In the 19th century, historical painting in this sense became a distinct genre. 
In phrases such as "historical painting materials", "historical" means in use before about 1900, or some earlier date. Prestige History paintings were traditionally regarded as the highest form of Western painting, occupying the most prestigious place in the hierarchy of genres, and considered the equivalent to the epic in literature. In his De Pictura of 1436, Leon Battista Alberti had argued that multi-figure history painting was the noblest form of art, as being the most difficult, which required mastery of all the others, because it was a visual form of history, and because it had the greatest potential to move the viewer. He placed emphasis on the ability to depict the interactions between the figures by gesture and expression. This view remained general until the 19th century, when artistic movements began to struggle against the establishment institutions of academic art, which continued to adhere to it. At the same time, there was from the latter part of the 18th century an increased interest in depicting in the form of history painting moments of drama from recent or contemporary history, which had long largely been confined to battle-scenes and scenes of formal surrenders and the like. Scenes from ancient history had been popular in the early Renaissance, and once again became common in the Baroque and Rococo periods, and still more so with the rise of Neoclassicism. In some 19th or 20th century contexts, the term may refer specifically to paintings of scenes from secular history, rather than those from religious narratives, literature or mythology. Development The term is generally not used in art history in speaking of medieval painting, although the Western tradition was developing in large altarpieces, fresco cycles, and other works, as well as miniatures in illuminated manuscripts. 
It comes to the fore in Italian Renaissance painting, where a series of increasingly ambitious works were produced, many still religious, but several, especially in Florence, did actually feature near-contemporary historical scenes such as the set of three huge canvases on The Battle of San Romano by Paolo Uccello, the abortive Battle of Cascina by Michelangelo and the Battle of Anghiari by Leonardo da Vinci, neither of which were completed. Scenes from ancient history and mythology were also popular. Writers such as Alberti and, in the following century, Giorgio Vasari in his Lives of the Artists, followed public and artistic opinion in judging the best painters above all on their production of large works of history painting (though in fact the only modern (post-classical) work described in De Pictura is Giotto's huge Navicella in mosaic). Artists continued for centuries to strive to make their reputation by producing such works, often neglecting genres to which their talents were better suited. There was some objection to the term, as many writers preferred terms such as "poetic painting" (poesia), or wanted to make a distinction between the "true" istoria, covering history including biblical and religious scenes, and the fabula, covering pagan myth, allegory, and scenes from fiction, which could not be regarded as true. The large works of Raphael were long considered, with those of Michelangelo, as the finest models for the genre. In the Raphael Rooms in the Vatican Palace, allegories and historical scenes are mixed together, and the Raphael Cartoons show scenes from the Gospels, all in the Grand Manner that from the High Renaissance became associated with, and often expected in, history painting. 
In the Late Renaissance and Baroque the painting of actual history tended to degenerate into panoramic battle-scenes with the victorious monarch or general perched on a horse accompanied by his retinue, or formal scenes of ceremonies, although some artists managed to make a masterpiece from such unpromising material, as Velázquez did with his The Surrender of Breda. An influential formulation of the hierarchy of genres, confirming history painting at the top, was made in 1667 by André Félibien, a historiographer, architect and theoretician of French classicism; it became the classic statement of the theory for the 18th century:Celui qui fait parfaitement des païsages est au-dessus d'un autre qui ne fait que des fruits, des fleurs ou des coquilles. Celui qui peint des animaux vivants est plus estimable que ceux qui ne représentent que des choses mortes & sans mouvement; & comme la figure de l'homme est le plus parfait ouvrage de Dieu sur la Terre, il est certain aussi que celui qui se rend l'imitateur de Dieu en peignant des figures humaines, est beaucoup plus excellent que tous les autres ... un Peintre qui ne fait que des portraits, n'a pas encore cette haute perfection de l'Art, & ne peut prétendre à l'honneur que reçoivent les plus sçavans. Il faut pour cela passer d'une seule figure à la représentation de plusieurs ensemble; il faut traiter l'histoire & la fable; il faut représenter de grandes actions comme les historiens, ou des sujets agréables comme les Poëtes; & montant encore plus haut, il faut par des compositions allégoriques, sçavoir couvrir sous le voile de la fable les vertus des grands hommes, & les mystères les plus relevez. He who produces perfect landscapes is above another who only produces fruit, flowers or seashells. 
He who paints living animals is more estimable than those who only represent dead things without movement, and as man is the most perfect work of God on the earth, it is also certain that he who becomes an imitator of God in representing human figures, is much more excellent than all the others ... a painter who only does portraits still does not have the highest perfection of his art, and cannot expect the honour due to the most skilled. For that he must pass from representing a single figure to several together; history and myth must be depicted; great events must be represented as by historians, or like the poets, subjects that will please, and climbing still higher, he must have the skill to cover under the veil of myth the virtues of great men, and the most exalted mysteries. By the late 18th century, with both religious and mythological painting in decline, there was an increased demand for paintings of scenes from history, including contemporary history. This was in part driven by the changing audience for ambitious paintings, which now increasingly made their reputation in public exhibitions rather than by impressing the owners of and visitors to palaces and public buildings. Classical history remained popular, but scenes from national histories were often the best-received. From 1760 onwards, the Society of Artists of Great Britain, the first body to organize regular exhibitions in London, awarded two generous prizes each year to paintings of subjects from British history. The unheroic nature of modern dress was regarded as a serious difficulty. When, in 1770, Benjamin West proposed to paint The Death of General Wolfe in contemporary dress, he was firmly instructed to use classical costume by many people. He ignored these comments and showed the scene in modern dress. Although George III refused to purchase the work, West succeeded both in overcoming his critics' objections and inaugurating a more historically accurate style in such paintings. 
Other artists depicted scenes, regardless of when they occurred, in classical dress and for a long time, especially during the French Revolution, history painting often focused on depictions of the heroic male nude. The large production, using the finest French artists, of propaganda paintings glorifying the exploits of Napoleon was matched by works, showing both victories and losses, from the anti-Napoleonic alliance by artists such as Goya and J.M.W. Turner. Théodore Géricault's The Raft of the Medusa (1818–1819) was a sensation, appearing to update the history painting for the 19th century, and showing anonymous figures famous only for being victims of what was then a famous and controversial disaster at sea. Conveniently their clothes had been worn away to classical-seeming rags by the point the painting depicts. At the same time the demand for traditional large religious history paintings very largely fell away. In the mid-nineteenth century there arose a style known as historicism, which marked a formal imitation of historical styles and/or artists. Another development in the nineteenth century was the treatment of historical subjects, often on a large scale, with the values of genre painting, the depiction of scenes of everyday life, and anecdote. Grand depictions of events of great public importance were supplemented with scenes depicting more personal incidents in the lives of the great, or of scenes centred on unnamed figures involved in historical events, as in the Troubadour style. At the same time scenes of ordinary life with moral, political or satirical content often became the main vehicle for expressive interplay between figures in painting, whether given a modern or historical setting. 
By the later 19th century, history painting was often explicitly rejected by avant-garde movements such as the Impressionists (except for Édouard Manet) and the Symbolists, and according to one recent writer "Modernism was to a considerable extent built upon the rejection of History Painting... All other genres are deemed capable of entering, in one form or another, the 'pantheon' of modernity considered, but History Painting is excluded". History painting and historical painting The terms Initially, "history painting" and "historical painting" were used interchangeably in English, as when Sir Joshua Reynolds in his fourth Discourse uses both indiscriminately to cover "history painting", while saying "...it ought to be called poetical, as in reality it is", reflecting the French term peinture historique, one equivalent of "history painting". The terms began to separate in the 19th century, with "historical painting" becoming a sub-group of "history painting" restricted to subjects taken from history in its normal sense. In 1853 John Ruskin asked his audience: "What do you at present mean by historical painting? Now-a-days it means the endeavour, by the power of imagination, to portray some historical event of past days." So for example Harold Wethey's three-volume catalogue of the paintings of Titian (Phaidon, 1969–75) is divided between "Religious Paintings", "Portraits", and "Mythological and Historical Paintings", though both volumes I and III cover what is included in the term "History Paintings". This distinction is useful but is by no means generally observed, and the terms are still often used in a confusing manner. Because of the potential for confusion modern academic writing tends to avoid the phrase "historical painting", talking instead of "historical subject matter" in history painting, but where the phrase is still used in contemporary scholarship it will normally mean the painting of subjects from history, very often in the 19th century. 
"Historical painting" may also be used, especially in discussion of painting techniques in conservation studies, to mean "old", as opposed to modern or recent painting. In 19th-century British writing on art the terms "subject painting" or "anecdotic" painting were often used for works in a line of development going back to William Hogarth of monoscenic depictions of crucial moments in an implied narrative with unidentified characters, such as William Holman Hunt's 1853 painting The Awakening Conscience or Augustus Egg's Past and Present, a set of three paintings, updating sets by Hogarth such as Marriage à-la-mode. 19th century History painting was the dominant form of academic painting in the various national academies in the 18th century, and for most of the 19th, and increasingly historical subjects dominated. During the Revolutionary and Napoleonic periods the heroic treatment of contemporary history in a frankly propagandistic fashion by Antoine-Jean, Baron Gros, Jacques-Louis David, Carle Vernet and others was supported by the French state, but after the fall of Napoleon in 1815 the French governments were not regarded as suitable for heroic treatment and many artists retreated further into the past to find subjects, though in Britain depicting the victories of the Napoleonic Wars mostly occurred after they were over. Another path was to choose contemporary subjects that were oppositional to government either at home or abroad, and many of what were arguably the last great generation of history paintings were protests at contemporary episodes of repression or outrages at home or abroad: Goya's The Third of May 1808 (1814), Théodore Géricault's The Raft of the Medusa (1818–19), Eugène Delacroix's The Massacre at Chios (1824) and Liberty Leading the People (1830). These were heroic, but showed heroic suffering by ordinary civilians. 
Romantic artists such as Géricault and Delacroix, and those from other movements such as the English Pre-Raphaelite Brotherhood continued to regard history painting as the ideal for their most ambitious works. Others such as Jan Matejko in Poland, Vasily Surikov in Russia, José Moreno Carbonero in Spain and Paul Delaroche in France became specialized painters of large historical subjects. The style troubadour ("troubadour style") was a somewhat derisive French term for earlier paintings of medieval and Renaissance scenes, which were often small and depicting moments of anecdote rather than drama; Ingres, Richard Parkes Bonington and Henri Fradelle painted such works. Sir Roy Strong calls this type of work the "Intimate Romantic", and in French it was known as the "peinture de genre historique" or "peinture anecdotique" ("historical genre painting" or "anecdotal painting"). Church commissions for large group scenes from the Bible had greatly declined, and historical painting became very significant. Especially in the early 19th century, much historical painting depicted specific moments from historical literature, with the novels of Sir Walter Scott a particular favourite, in France and other European countries as much as Great Britain. By the middle of the century medieval scenes were expected to be very carefully researched, using the work of historians of costume, architecture and all elements of decor that were becoming available. An example of this is the extensive research of Byzantine architecture, clothing and decoration made in Parisian museums and libraries by Moreno Carbonero for his masterwork The Entry of Roger de Flor in Constantinople. The provision of examples and expertise for artists, as well as revivalist industrial designers, was one of the motivations for the establishment of museums like the Victoria and Albert Museum in London. 
New techniques of printmaking such as the chromolithograph made good quality reproductions both relatively cheap and very widely accessible, and also hugely profitable for artist and publisher, as the sales were so large. Historical painting often had a close relationship with Nationalism, and painters like Matejko in Poland could play an important role in fixing the prevailing historical narrative of national history in the popular mind. In France, L'art Pompier ("Fireman art") was a derisory term for official academic historical painting, and in a final phase, "History painting of a debased sort, scenes of brutality and terror, purporting to illustrate episodes from Roman and Moorish history, were Salon sensations. On the overcrowded walls of the exhibition galleries, the paintings that shouted loudest got the attention". Orientalist painting was an alternative genre that offered similar exotic costumes and decor, and at least as much opportunity to depict sex and violence. Gallery See also Classicism Cobweb painting History of painting List of Orientalist artists Notes References Barlow, Paul, "The Death of History Painting in Nineteenth-Century Art?" PDF, Visual Culture in Britain, Volume 6, Number 1, Summer 2005, pp. 1–13(13) Blunt, Anthony, Artistic Theory in Italy, 1450-1660, 1940 (refs to 1985 edn), OUP, Bull, Malcolm, The Mirror of the Gods, How Renaissance Artists Rediscovered the Pagan Gods, Oxford UP, 2005, Green, David and Seddon, Peter, History Painting Reassessed: The Representation of History in Contemporary Art, 2000, Manchester University Press, , google books Harding, James. Artistes pompiers: French academic art in the 19th century, 1979, New York: Rizzoli Harrison, Charles, An Introduction to Art, 2009, Yale University Press, , google books Rothenstein, John, An Introduction to English Painting, 2002 (reissue), I.B.Tauris, Strong, Roy. And when did you last see your father? 
The Victorian Painter and British History, 1978, Thames and Hudson, White, Harrison C., Canvases and Careers: Institutional Change in the French Painting World, 1993 (2nd edn), University of Chicago Press, , google books Wright, Beth Segal, Scott's Historical Novels and French Historical Painting 1815-1855, The Art Bulletin, Vol. 63, No. 2 (Jun., 1981), pp. 268–287, JSTOR Further reading Ayers, William, ed., Picturing History: American Painting 1770-1903, External links Paintings
https://en.wikipedia.org/wiki/Hyperbola
Hyperbola
In mathematics, a hyperbola (; pl. hyperbolas or hyperbolae ; adj. hyperbolic ) is a type of smooth curve lying in a plane, defined by its geometric properties or by equations for which it is the solution set. A hyperbola has two pieces, called connected components or branches, that are mirror images of each other and resemble two infinite bows. The hyperbola is one of the three kinds of conic section, formed by the intersection of a plane and a double cone. (The other conic sections are the parabola and the ellipse. A circle is a special case of an ellipse.) If the plane intersects both halves of the double cone but does not pass through the apex of the cones, then the conic is a hyperbola. Hyperbolas arise in many ways: as the curve representing the function in the Cartesian plane, as the path followed by the shadow of the tip of a sundial, as the shape of an open orbit (as distinct from a closed elliptical orbit), such as the orbit of a spacecraft during a gravity assisted swing-by of a planet or, more generally, any spacecraft exceeding the escape velocity of the nearest planet, as the path of a single-apparition comet (one travelling too fast ever to return to the solar system), as the scattering trajectory of a subatomic particle (acted on by repulsive instead of attractive forces but the principle is the same), in radio navigation, when the difference between distances to two points, but not the distances themselves, can be determined, and so on. Each branch of the hyperbola has two arms which become straighter (lower curvature) further out from the center of the hyperbola. Diagonally opposite arms, one from each branch, tend in the limit to a common line, called the asymptote of those two arms. So there are two asymptotes, whose intersection is at the center of symmetry of the hyperbola, which can be thought of as the mirror point about which each branch reflects to form the other branch. In the case of the curve the asymptotes are the two coordinate axes. 
Hyperbolas share many of the ellipses' analytical properties such as eccentricity, focus, and directrix. Typically the correspondence can be made with nothing more than a change of sign in some term. Many other mathematical objects have their origin in the hyperbola, such as hyperbolic paraboloids (saddle surfaces), hyperboloids ("wastebaskets"), hyperbolic geometry (Lobachevsky's celebrated non-Euclidean geometry), hyperbolic functions (sinh, cosh, tanh, etc.), and gyrovector spaces (a geometry proposed for use in both relativity and quantum mechanics which is not Euclidean). Etymology and history The word "hyperbola" derives from the Greek , meaning "over-thrown" or "excessive", from which the English term hyperbole also derives. Hyperbolae were discovered by Menaechmus in his investigations of the problem of doubling the cube, but were then called sections of obtuse cones. The term hyperbola is believed to have been coined by Apollonius of Perga (c. 262–c. 190 BC) in his definitive work on the conic sections, the Conics. The names of the other two general conic sections, the ellipse and the parabola, derive from the corresponding Greek words for "deficient" and "applied"; all three names are borrowed from earlier Pythagorean terminology which referred to a comparison of the side of rectangles of fixed area with a given line segment. The rectangle could be "applied" to the segment (meaning, have an equal length), be shorter than the segment or exceed the segment. Definitions As locus of points A hyperbola can be defined geometrically as a set of points (locus of points) in the Euclidean plane: A hyperbola is a set of points, such that for any point of the set, the absolute difference of the distances to two fixed points (the foci) is constant, usually denoted by The midpoint of the line segment joining the foci is called the center of the hyperbola. The line through the foci is called the major axis. It contains the vertices , which have distance to the center. 
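The locus definition above (constant absolute difference of distances to the foci) can be verified numerically. This is an illustrative sketch, not part of the article; the semi-axes a = 3, b = 2 are arbitrary sample values, and the hyperbola is taken in standard position x²/a² − y²/b² = 1, whose foci lie at (±c, 0) with c² = a² + b²:

```python
import math

# Points on the hyperbola x^2/a^2 - y^2/b^2 = 1 have a constant absolute
# difference of distances to the two foci, equal to 2a (arbitrary sample a, b).
a, b = 3.0, 2.0
c = math.hypot(a, b)                 # linear eccentricity c = sqrt(a^2 + b^2)
foci = ((c, 0.0), (-c, 0.0))

for x in (3.0, 4.0, 7.5, 20.0):                  # x >= a: right branch
    y = b * math.sqrt(x * x / (a * a) - 1.0)     # solve the equation for y
    d1 = math.dist((x, y), foci[0])
    d2 = math.dist((x, y), foci[1])
    assert abs(abs(d1 - d2) - 2 * a) < 1e-9      # |d1 - d2| = 2a
```

With a = 3 the constant difference is 6, independent of which point on the branch is tested.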
The distance of the foci to the center is called the focal distance or linear eccentricity. The quotient is the eccentricity . The equation can be viewed in a different way (see diagram): If is the circle with midpoint and radius , then the distance of a point of the right branch to the circle equals the distance to the focus : is called the circular directrix (related to focus ) of the hyperbola. In order to get the left branch of the hyperbola, one has to use the circular directrix related to . This property should not be confused with the definition of a hyperbola with help of a directrix (line) below. Hyperbola with equation y = A/x If the xy-coordinate system is rotated about the origin by the angle and new coordinates are assigned, then . The rectangular hyperbola (whose semi-axes are equal) has the new equation . Solving for yields Thus, in an xy-coordinate system the graph of a function with equation is a rectangular hyperbola entirely in the first and third quadrants with the coordinate axes as asymptotes, the line as major axis , the center and the semi-axis the vertices the semi-latus rectum and radius of curvature at the vertices the linear eccentricity and the eccentricity the tangent at point A rotation of the original hyperbola by results in a rectangular hyperbola entirely in the second and fourth quadrants, with the same asymptotes, center, semi-latus rectum, radius of curvature at the vertices, linear eccentricity, and eccentricity as for the case of rotation, with equation the semi-axes the line as major axis, the vertices Shifting the hyperbola with equation so that the new center is , yields the new equation and the new asymptotes are and . The shape parameters remain unchanged. By the directrix property The two lines at distance from the center and parallel to the minor axis are called directrices of the hyperbola (see diagram). 
For an arbitrary point of the hyperbola the quotient of the distance to one focus and to the corresponding directrix (see diagram) is equal to the eccentricity: The proof for the pair follows from the fact that and satisfy the equation The second case is proven analogously. The inverse statement is also true and can be used to define a hyperbola (in a manner similar to the definition of a parabola): For any point (focus), any line (directrix) not through and any real number with the set of points (locus of points), for which the quotient of the distances to the point and to the line is is a hyperbola. (The choice yields a parabola and if an ellipse.) Proof Let and assume is a point on the curve. The directrix has equation . With , the relation produces the equations and The substitution yields This is the equation of an ellipse () or a parabola () or a hyperbola (). All of these non-degenerate conics have, in common, the origin as a vertex (see diagram). If , introduce new parameters so that , and then the equation above becomes which is the equation of a hyperbola with center , the x-axis as major axis and the major/minor semi axis . Construction of a directrix Because of point of directrix (see diagram) and focus are inverse with respect to the circle inversion at circle (in diagram green). Hence point can be constructed using the theorem of Thales (not shown in the diagram). The directrix is the perpendicular to line through point . Alternative construction of : Calculation shows, that point is the intersection of the asymptote with its perpendicular through (see diagram). As plane section of a cone The intersection of an upright double cone by a plane not through the vertex with slope greater than the slope of the lines on the cone is a hyperbola (see diagram: red curve). 
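The directrix property (quotient of distances to a focus and the corresponding directrix equals the eccentricity) can likewise be verified numerically. For a hyperbola in standard position x²/a² − y²/b² = 1, the directrix paired with the focus (c, 0) is the line x = a²/c; the values a = 3, b = 2 are arbitrary:

```python
import math

# Directrix property: d(P, focus) / d(P, directrix) = e for every point P.
a, b = 3.0, 2.0
c = math.hypot(a, b)        # linear eccentricity
e = c / a                   # eccentricity (> 1 for a hyperbola)
focus = (c, 0.0)
directrix_x = a * a / c     # directrix x = a^2/c paired with focus (c, 0)

for x in (3.0, 5.0, 12.0):                      # points on the right branch
    y = b * math.sqrt(x * x / (a * a) - 1.0)
    dist_focus = math.dist((x, y), focus)
    dist_directrix = abs(x - directrix_x)
    assert abs(dist_focus / dist_directrix - e) < 1e-9
```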
In order to prove the defining property of a hyperbola (see above) one uses two Dandelin spheres , which are spheres that touch the cone along circles , and the intersecting (hyperbola) plane at points and . It turns out: are the foci of the hyperbola. Let be an arbitrary point of the intersection curve . The generatrix of the cone containing intersects circle at point and circle at a point . The line segments and are tangential to the sphere and, hence, are of equal length. The line segments and are tangential to the sphere and, hence, are of equal length. The result is: is independent of the hyperbola point , because no matter where point is, have to be on circles , , and line segment has to cross the apex. Therefore, as point moves along the red curve (hyperbola), line segment simply rotates about apex without changing its length. Pin and string construction The definition of a hyperbola by its foci and its circular directrices (see above) can be used for drawing an arc of it with help of pins, a string and a ruler: (0) Choose the foci , the vertices and one of the circular directrices , for example (circle with radius ) (1) A ruler is fixed at point free to rotate around . Point is marked at distance . (2) A string with length is prepared. (3) One end of the string is pinned at point on the ruler, the other end is pinned to point . (4) Take a pen and hold the string tight to the edge of the ruler. (5) Rotating the ruler around prompts the pen to draw an arc of the right branch of the hyperbola, because of (see the definition of a hyperbola by circular directrices). Steiner generation of a hyperbola The following method to construct single points of a hyperbola relies on the Steiner generation of a non degenerate conic section: Given two pencils of lines at two points (all lines containing and , respectively) and a projective but not perspective mapping of onto , then the intersection points of corresponding lines form a non-degenerate projective conic section. 
For the generation of points of the hyperbola one uses the pencils at the vertices . Let be a point of the hyperbola and . The line segment is divided into n equally-spaced segments and this division is projected parallel with the diagonal as direction onto the line segment (see diagram). The parallel projection is part of the projective mapping between the pencils at and needed. The intersection points of any two related lines and are points of the uniquely defined hyperbola. Remark: The subdivision could be extended beyond the points and in order to get more points, but the determination of the intersection points would become more inaccurate. A better idea is extending the points already constructed by symmetry (see animation). Remark: The Steiner generation exists for ellipses and parabolas, too. The Steiner generation is sometimes called a parallelogram method because one can use other points rather than the vertices, which starts with a parallelogram instead of a rectangle. Inscribed angles for hyperbolas y = a/(x − b) + c and the 3-point-form A hyperbola with equation is uniquely determined by three points with different x- and y-coordinates. A simple way to determine the shape parameters uses the inscribed angle theorem for hyperbolas: In order to measure an angle between two lines with equations in this context one uses the quotient Analogous to the inscribed angle theorem for circles one gets the Inscribed angle theorem for hyperbolas: For four points (see diagram) the following statement is true: The four points are on a hyperbola with equation if and only if the angles at and are equal in the sense of the measurement above. That means if (Proof: straightforward calculation. If the points are on a hyperbola, one can assume the hyperbola's equation is .) A consequence of the inscribed angle theorem for hyperbolas is the 3-point-form of a hyperbola's equation: The equation of the hyperbola determined by 3 points is the solution of the equation for . 
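The 3-point form lends itself to a short computation: multiplying y = a/(x − b) + c out gives x·y = b·y + c·x − u with u = b·c − a, which is linear in (b, c, u), so three points with distinct x- and y-coordinates determine the parameters by solving a 3×3 linear system. A minimal sketch (the helper name fit_hyperbola is made up for illustration):

```python
# Fit y = a/(x - b) + c through three points via the linearized form
# x*y = b*y + c*x - u, where u = b*c - a.
def fit_hyperbola(p1, p2, p3):
    rows = [(y, x, -1.0, x * y) for x, y in (p1, p2, p3)]

    def det3(m):  # determinant of a 3x3 matrix given as rows
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3([r[:3] for r in rows])
    sols = []
    for col in range(3):                 # Cramer's rule, column by column
        M = [list(r[:3]) for r in rows]
        for i, r in enumerate(rows):
            M[i][col] = r[3]
        sols.append(det3(M) / d)
    b, c, u = sols
    return b * c - u, b, c               # recover a = b*c - u

# Points on y = 2/(x - 1) + 3:
a, b, c = fit_hyperbola((2, 5), (3, 4), (5, 3.5))
assert abs(a - 2) < 1e-9 and abs(b - 1) < 1e-9 and abs(c - 3) < 1e-9
```

The linearization works because the only nonlinear term in the expanded equation, b·c, is absorbed into the auxiliary unknown u.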
As an affine image of the unit hyperbola x² − y² = 1 Another definition of a hyperbola uses affine transformations: Any hyperbola is the affine image of the unit hyperbola with equation . parametric representation An affine transformation of the Euclidean plane has the form , where is a regular matrix (its determinant is not 0) and is an arbitrary vector. If are the column vectors of the matrix , the unit hyperbola is mapped onto the hyperbola is the center, a point of the hyperbola and a tangent vector at this point. vertices In general the vectors are not perpendicular. That means, in general are not the vertices of the hyperbola. But point into the directions of the asymptotes. The tangent vector at point is Because at a vertex the tangent is perpendicular to the major axis of the hyperbola one gets the parameter of a vertex from the equation and hence from which yields (The formulae were used.) The two vertices of the hyperbola are implicit representation Solving the parametric representation for by Cramer's rule and using , one gets the implicit representation . hyperbola in space The definition of a hyperbola in this section gives a parametric representation of an arbitrary hyperbola, even in space, if one allows to be vectors in space. As an affine image of the hyperbola y = 1/x Because the unit hyperbola is affinely equivalent to the hyperbola , an arbitrary hyperbola can be considered as the affine image (see previous section) of the hyperbola is the center of the hyperbola, the vectors have the directions of the asymptotes and is a point of the hyperbola. The tangent vector is At a vertex the tangent is perpendicular to the major axis. Hence and the parameter of a vertex is is equivalent to and are the vertices of the hyperbola. The following properties of a hyperbola are easily proven using the representation of a hyperbola introduced in this section. 
Tangent construction The tangent vector can be rewritten by factorization: This means that the diagonal of the parallelogram is parallel to the tangent at the hyperbola point (see diagram). This property provides a way to construct the tangent at a point on the hyperbola. This property of a hyperbola is an affine version of the 3-point-degeneration of Pascal's theorem. Area of the grey parallelogram The area of the grey parallelogram in the above diagram is and hence independent of point . The last equation follows from a calculation for the case, where is a vertex and the hyperbola in its canonical form Point construction For a hyperbola with parametric representation (for simplicity the center is the origin) the following is true: For any two points the points are collinear with the center of the hyperbola (see diagram). The simple proof is a consequence of the equation . This property provides a possibility to construct points of a hyperbola if the asymptotes and one point are given. This property of a hyperbola is an affine version of the 4-point-degeneration of Pascal's theorem. Tangent-asymptotes-triangle For simplicity the center of the hyperbola may be the origin and the vectors have equal length. If the last assumption is not fulfilled one can first apply a parameter transformation (see above) in order to make the assumption true. Hence are the vertices, span the minor axis and one gets and . For the intersection points of the tangent at point with the asymptotes one gets the points The area of the triangle can be calculated by a 2 × 2 determinant: (see rules for determinants). is the area of the rhombus generated by . The area of a rhombus is equal to one half of the product of its diagonals. The diagonals are the semi-axes of the hyperbola. Hence: The area of the triangle is independent of the point of the hyperbola: Reciprocation of a circle The reciprocation of a circle B in a circle C always yields a conic section such as a hyperbola. 
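For the hyperbola in canonical position x²/a² − y²/b² = 1, the constant area of the tangent-asymptotes triangle works out to a·b, which can be spot-checked numerically (Python sketch; a = 3 and b = 2 are arbitrary illustrative values):

```python
import math

a, b = 3.0, 2.0

def triangle_area(t):
    """Area of the triangle cut off by the tangent at (a cosh t, b sinh t)
    and the asymptotes y = ±(b/a) x, with the third vertex at the center."""
    x0, y0 = a * math.cosh(t), b * math.sinh(t)
    # intersections of the tangent x*x0/a^2 - y*y0/b^2 = 1 with the asymptotes
    x1 = a * a * b / (b * x0 - a * y0); y1 =  b / a * x1
    x2 = a * a * b / (b * x0 + a * y0); y2 = -b / a * x2
    return 0.5 * abs(x1 * y2 - x2 * y1)   # 2x2 determinant formula

for t in (-1.0, 0.0, 0.7, 2.3):
    assert abs(triangle_area(t) - a * b) < 1e-9   # constant, independent of t
```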
The process of "reciprocation in a circle C" consists of replacing every line and point in a geometrical figure with their corresponding pole and polar, respectively. The pole of a line is the inversion of its closest point to the circle C, whereas the polar of a point is the converse, namely, a line whose closest point to C is the inversion of the point. The eccentricity of the conic section obtained by reciprocation is the ratio of the distances between the two circles' centers to the radius r of reciprocation circle C. If B and C represent the points at the centers of the corresponding circles, then Since the eccentricity of a hyperbola is always greater than one, the center B must lie outside of the reciprocating circle C. This definition implies that the hyperbola is both the locus of the poles of the tangent lines to the circle B, as well as the envelope of the polar lines of the points on B. Conversely, the circle B is the envelope of polars of points on the hyperbola, and the locus of poles of tangent lines to the hyperbola. Two tangent lines to B have no (finite) poles because they pass through the center C of the reciprocation circle C; the polars of the corresponding tangent points on B are the asymptotes of the hyperbola. The two branches of the hyperbola correspond to the two parts of the circle B that are separated by these tangent points. Quadratic equation A hyperbola can also be defined as a second-degree equation in the Cartesian coordinates (x, y) in the plane, provided that the constants Axx, Axy, Ayy, Bx, By, and C satisfy the determinant condition This determinant is conventionally called the discriminant of the conic section. A special case of a hyperbola—the degenerate hyperbola consisting of two intersecting lines—occurs when another determinant is zero: This determinant Δ is sometimes called the discriminant of the conic section. 
Given the above general parametrization of the hyperbola in Cartesian coordinates, the eccentricity can be found using the formula in Conic section#Eccentricity in terms of coefficients. The center (xc, yc) of the hyperbola may be determined from the formulae In terms of new coordinates, and , the defining equation of the hyperbola can be written The principal axes of the hyperbola make an angle φ with the positive x-axis that is given by Rotating the coordinate axes so that the x-axis is aligned with the transverse axis brings the equation into its canonical form The major and minor semiaxes a and b are defined by the equations where λ1 and λ2 are the roots of the quadratic equation For comparison, the corresponding equation for a degenerate hyperbola (consisting of two intersecting lines) is The tangent line to a given point (x0, y0) on the hyperbola is defined by the equation where E, F and G are defined by The normal line to the hyperbola at the same point is given by the equation The normal line is perpendicular to the tangent line, and both pass through the same point (x0, y0). From the equation the left focus is and the right focus is where is the eccentricity. Denote the distances from a point (x, y) to the left and right foci as and For a point on the right branch, and for a point on the left branch, This can be proved as follows: If (x,y) is a point on the hyperbola the distance to the left focal point is To the right focal point the distance is If (x,y) is a point on the right branch of the hyperbola then and Subtracting these equations one gets If (x,y) is a point on the left branch of the hyperbola then and Subtracting these equations one gets In Cartesian coordinates Equation If Cartesian coordinates are introduced such that the origin is the center of the hyperbola and the x-axis is the major axis, then the hyperbola is called east-west-opening and the foci are the points , the vertices are . 
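The centre is the solution of the linear system obtained by setting both partial derivatives of the quadratic form to zero. A hedged sketch under the convention A_xx x² + 2A_xy xy + A_yy y² + 2B_x x + 2B_y y + C = 0, tested on the shifted hyperbola (x − 2)²/4 − (y − 3)²/9 = 1:

```python
def conic_center(axx, axy, ayy, bx, by):
    """Center of axx*x^2 + 2*axy*x*y + ayy*y^2 + 2*bx*x + 2*by*y + C = 0,
    solving axx*x + axy*y + bx = 0 and axy*x + ayy*y + by = 0."""
    det = axx * ayy - axy * axy   # the discriminant; negative for a hyperbola
    xc = (axy * by - ayy * bx) / det
    yc = (axy * bx - axx * by) / det
    return xc, yc

# (x-2)^2/4 - (y-3)^2/9 = 1 expands to these coefficients:
xc, yc = conic_center(1/4, 0.0, -1/9, -1/2, 1/3)
assert abs(xc - 2) < 1e-9 and abs(yc - 3) < 1e-9
```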
For an arbitrary point the distance to the focus is and to the second focus . Hence the point is on the hyperbola if the following condition is fulfilled Remove the square roots by suitable squarings and use the relation to obtain the equation of the hyperbola: This equation is called the canonical form of a hyperbola, because any hyperbola, regardless of its orientation relative to the Cartesian axes and regardless of the location of its center, can be transformed to this form by a change of variables, giving a hyperbola that is congruent to the original (see below). The axes of symmetry or principal axes are the transverse axis (containing the segment of length 2a with endpoints at the vertices) and the conjugate axis (containing the segment of length 2b perpendicular to the transverse axis and with midpoint at the hyperbola's center). As opposed to an ellipse, a hyperbola has only two vertices: . The two points on the conjugate axes are not on the hyperbola. It follows from the equation that the hyperbola is symmetric with respect to both of the coordinate axes and hence symmetric with respect to the origin. Eccentricity For a hyperbola in the above canonical form, the eccentricity is given by Two hyperbolas are geometrically similar to each other – meaning that they have the same shape, so that one can be transformed into the other by rigid left and right movements, rotation, taking a mirror image, and scaling (magnification) – if and only if they have the same eccentricity. Asymptotes Solving the equation (above) of the hyperbola for yields It follows from this that the hyperbola approaches the two lines for large values of . These two lines intersect at the center (origin) and are called asymptotes of the hyperbola With the help of the second figure one can see that The perpendicular distance from a focus to either asymptote is (the semi-minor axis). 
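The two-foci condition used in this derivation can be verified numerically for the canonical parametrization (a cosh t, b sinh t) of the right branch, whose distance to the left focus exceeds its distance to the right focus by exactly 2a (sketch with illustrative a = 3, b = 2):

```python
import math

a, b = 3.0, 2.0
c = math.hypot(a, b)                            # focal distance: c^2 = a^2 + b^2
for t in (-1.5, -0.3, 0.0, 0.8, 2.0):
    x, y = a * math.cosh(t), b * math.sinh(t)   # point on the right branch
    d_left = math.dist((x, y), (-c, 0.0))
    d_right = math.dist((x, y), (c, 0.0))
    assert abs((d_left - d_right) - 2 * a) < 1e-9
```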
From the Hesse normal form of the asymptotes and the equation of the hyperbola one gets: The product of the distances from a point on the hyperbola to both the asymptotes is the constant which can also be written in terms of the eccentricity e as From the equation of the hyperbola (above) one can derive: The product of the slopes of lines from a point P to the two vertices is the constant In addition, from (2) above it can be shown that The product of the distances from a point on the hyperbola to the asymptotes along lines parallel to the asymptotes is the constant Semi-latus rectum The length of the chord through one of the foci, perpendicular to the major axis of the hyperbola, is called the latus rectum. One half of it is the semi-latus rectum . A calculation shows The semi-latus rectum may also be viewed as the radius of curvature at the vertices. Tangent The simplest way to determine the equation of the tangent at a point is to implicitly differentiate the equation of the hyperbola. Denoting dy/dx as y′, this produces With respect to , the equation of the tangent at point is A particular tangent line distinguishes the hyperbola from the other conic sections. Let f be the distance from the vertex V (on both the hyperbola and its axis through the two foci) to the nearer focus. Then the distance, along a line perpendicular to that axis, from that focus to a point P on the hyperbola is greater than 2f. The tangent to the hyperbola at P intersects that axis at point Q at an angle ∠PQV of greater than 45°. Rectangular hyperbola In the case the hyperbola is called rectangular (or equilateral), because its asymptotes intersect at right angles. For this case, the linear eccentricity is , the eccentricity and the semi-latus rectum . The graph of the equation is a rectangular hyperbola. 
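Two of these constants are easy to check numerically: the product of the distances from a hyperbola point to the asymptotes bx − ay = 0 and bx + ay = 0 equals a²b²/(a² + b²), and the semi-latus rectum is b²/a (Python sketch; a = 3, b = 2 chosen arbitrarily):

```python
import math

a, b = 3.0, 2.0
const = a * a * b * b / (a * a + b * b)          # claimed product of distances
for t in (-2.0, -0.5, 0.0, 1.3):
    x, y = a * math.cosh(t), b * math.sinh(t)
    d1 = abs(b * x - a * y) / math.hypot(a, b)   # distance to one asymptote
    d2 = abs(b * x + a * y) / math.hypot(a, b)   # distance to the other
    assert abs(d1 * d2 - const) < 1e-9

# semi-latus rectum: half the focal chord perpendicular to the major axis
c = math.hypot(a, b)
y_at_focus = b * math.sqrt(c * c / (a * a) - 1)  # solve the equation at x = c
assert abs(y_at_focus - b * b / a) < 1e-12
```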
Parametric representation with hyperbolic sine/cosine Using the hyperbolic sine and cosine functions , a parametric representation of the hyperbola can be obtained, which is similar to the parametric representation of an ellipse: which satisfies the Cartesian equation because Further parametric representations are given in the section Parametric equations below. Conjugate hyperbola Exchange and to obtain the equation of the conjugate hyperbola (see diagram): also written as In polar coordinates For pole = focus: The polar coordinates used most commonly for the hyperbola are defined relative to the Cartesian coordinate system that has its origin in a focus and its x-axis pointing towards the origin of the "canonical coordinate system" as illustrated in the first diagram. In this case the angle is called true anomaly. Relative to this coordinate system one has that and for pole = center: With polar coordinates relative to the "canonical coordinate system" (see second diagram) one has that For the right branch of the hyperbola the range of is Parametric equations A hyperbola with equation can be described by several parametric equations: (rational representation). Tangent slope as parameter: A parametric representation, which uses the slope of the tangent at a point of the hyperbola can be obtained analogously to the ellipse case: Replace in the ellipse case by and use formulae for the hyperbolic functions. One gets is the upper, and the lower half of the hyperbola. The points with vertical tangents (vertices ) are not covered by the representation. The equation of the tangent at point is This description of the tangents of a hyperbola is an essential tool for the determination of the orthoptic of a hyperbola. Hyperbolic functions Just as the trigonometric functions are defined in terms of the unit circle, so also the hyperbolic functions are defined in terms of the unit hyperbola, as shown in this diagram. 
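The focus-centred polar form given above can be spot-checked numerically. Assuming the pole at the right focus (c, 0) with the polar axis pointing towards the centre, r(θ) = ℓ/(1 + e cos θ) with semi-latus rectum ℓ = b²/a reproduces points of the right branch (Python sketch; a = 3, b = 2 are arbitrary):

```python
import math

a, b = 3.0, 2.0
c = math.hypot(a, b)
e, ell = c / a, b * b / a             # eccentricity and semi-latus rectum
# valid while 1 + e*cos(theta) > 0 (right branch)
for theta in (0.0, 0.8, math.pi / 2, 2.0):
    r = ell / (1 + e * math.cos(theta))
    # back to Cartesian coordinates: pole at (c, 0), axis towards the center
    x, y = c - r * math.cos(theta), r * math.sin(theta)
    assert abs(x * x / a**2 - y * y / b**2 - 1) < 1e-9
```

At θ = 0 the radius is ℓ/(1 + e) = a(e − 1), which places the point at the right vertex (a, 0).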
In a unit circle, the angle (in radians) is equal to twice the area of the circular sector which that angle subtends. The analogous hyperbolic angle is likewise defined as twice the area of a hyperbolic sector. Let be twice the area between the axis and a ray through the origin intersecting the unit hyperbola, and define as the coordinates of the intersection point. Then the area of the hyperbolic sector is the area of the triangle minus the curved region past the vertex at : which simplifies to the area hyperbolic cosine Solving for yields the exponential form of the hyperbolic cosine: From one gets and its inverse the area hyperbolic sine: Other hyperbolic functions are defined according to the hyperbolic cosine and hyperbolic sine, so for example Properties The tangent bisects the angle between the lines to the foci The tangent at a point bisects the angle between the lines . Proof Let be the point on the line with the distance to the focus (see diagram, is the semi major axis of the hyperbola). Line is the bisector of the angle between the lines . In order to prove that is the tangent line at point , one checks that any point on line which is different from cannot be on the hyperbola. Hence has only point in common with the hyperbola and is, therefore, the tangent at point . From the diagram and the triangle inequality one recognizes that holds, which means: . But if is a point of the hyperbola, the difference should be . Midpoints of parallel chords The midpoints of parallel chords of a hyperbola lie on a line through the center (see diagram). The points of any chord may lie on different branches of the hyperbola. The proof of the property on midpoints is best done for the hyperbola . 
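The relation between the hyperbolic angle and the sector area described above can be confirmed numerically: for the unit hyperbola, the area of the triangle with vertices (0, 0), (cosh t, 0), (cosh t, sinh t) minus the area under the curve from the vertex equals t/2 (Python sketch; the area under the curve is the integral of sinh² u from 0 to t, evaluated by Simpson's rule):

```python
import math

def sector_area(t, n=1000):
    """Area of the hyperbolic sector of x^2 - y^2 = 1 up to (cosh t, sinh t)."""
    # area under the curve from the vertex (1, 0): ∫ y dx = ∫ sinh(u)^2 du
    f = lambda u: math.sinh(u) ** 2
    h = t / n
    s = f(0) + f(t)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    under_curve = s * h / 3                        # composite Simpson's rule
    triangle = 0.5 * math.cosh(t) * math.sinh(t)
    return triangle - under_curve

for t in (0.5, 1.2, 2.0):
    assert abs(sector_area(t) - t / 2) < 1e-9      # sector area is t/2
```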
Because any hyperbola is an affine image of the hyperbola (see section below) and an affine transformation preserves parallelism and midpoints of line segments, the property is true for all hyperbolas: For two points of the hyperbola the midpoint of the chord is the slope of the chord is For parallel chords the slope is constant and the midpoints of the parallel chords lie on the line Consequence: for any pair of points of a chord there exists a skew reflection with an axis (set of fixed points) passing through the center of the hyperbola, which exchanges the points and leaves the hyperbola (as a whole) fixed. A skew reflection is a generalization of an ordinary reflection across a line , where all point-image pairs are on a line perpendicular to . Because a skew reflection leaves the hyperbola fixed, the pair of asymptotes is fixed, too. Hence the midpoint of a chord divides the related line segment between the asymptotes into halves, too. This means that . This property can be used for the construction of further points of the hyperbola if a point and the asymptotes are given. If the chord degenerates into a tangent, then the touching point divides the line segment between the asymptotes in two halves. Orthogonal tangents – orthoptic For a hyperbola the intersection points of orthogonal tangents lie on the circle . This circle is called the orthoptic of the given hyperbola. The tangents may belong to points on different branches of the hyperbola. In case of there are no pairs of orthogonal tangents. Pole-polar relation for a hyperbola Any hyperbola can be described in a suitable coordinate system by an equation . The equation of the tangent at a point of the hyperbola is If one allows point to be an arbitrary point different from the origin, then point is mapped onto the line , not through the center of the hyperbola. This relation between points and lines is a bijection. 
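The pole-polar relation can be illustrated numerically for the canonical hyperbola: intersecting two tangent lines gives a pole, and both points of tangency must lie on that pole's polar (Python sketch; the helper names and parameter values are arbitrary):

```python
import math

a, b = 3.0, 2.0

def tangent_coeffs(t):
    # tangent at (a cosh t, b sinh t): (x0/a^2)*x - (y0/b^2)*y = 1
    return (math.cosh(t) / a, -math.sinh(t) / b)

def intersect(l1, l2):
    # solve p*x + q*y = 1 and r*x + s*y = 1 by Cramer's rule
    (p, q), (r, s) = l1, l2
    det = p * s - q * r
    return ((s - q) / det, (p - r) / det)

t1, t2 = 0.4, 1.1
pole = intersect(tangent_coeffs(t1), tangent_coeffs(t2))
# the polar of this pole, (x0/a^2)*x - (y0/b^2)*y = 1, contains both
# points of tangency
for t in (t1, t2):
    x, y = a * math.cosh(t), b * math.sinh(t)
    assert abs(pole[0] * x / a**2 - pole[1] * y / b**2 - 1) < 1e-9
```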
The inverse function maps line onto the point and line onto the point Such a relation between points and lines generated by a conic is called pole-polar relation or just polarity. The pole is the point, the polar the line. See Pole and polar. By calculation one checks the following properties of the pole-polar relation of the hyperbola: For a point (pole) on the hyperbola the polar is the tangent at this point (see diagram: ). For a pole outside the hyperbola the intersection points of its polar with the hyperbola are the tangency points of the two tangents passing (see diagram: ). For a point within the hyperbola the polar has no point with the hyperbola in common. (see diagram: ). Remarks: The intersection point of two polars (for example: ) is the pole of the line through their poles (here: ). The foci and respectively and the directrices and respectively belong to pairs of pole and polar. Pole-polar relations exist for ellipses and parabolas, too. Other properties The following are concurrent: (1) a circle passing through the hyperbola's foci and centered at the hyperbola's center; (2) either of the lines that are tangent to the hyperbola at the vertices; and (3) either of the asymptotes of the hyperbola. The following are also concurrent: (1) the circle that is centered at the hyperbola's center and that passes through the hyperbola's vertices; (2) either directrix; and (3) either of the asymptotes. Arc length The arc length of a hyperbola does not have an elementary expression. The upper half of a hyperbola can be parameterized as Then the integral giving the arc length from to can be computed as: After using the substitution , this can also be represented using the incomplete elliptic integral of the second kind with parameter : Using only real numbers, this becomes where is the incomplete elliptic integral of the first kind with parameter and is the Gudermannian function. 
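Since no elementary antiderivative exists, the arc length is evaluated numerically in practice. A sketch integrating ds = √(a² sinh² t + b² cosh² t) dt with Simpson's rule and cross-checking against a fine polygonal approximation of the same arc (illustrative a = 3, b = 2):

```python
import math

a, b = 3.0, 2.0

def arc_length(t0, t1, n=2000):
    """Arc length of (a cosh t, b sinh t) from t0 to t1 by Simpson's rule."""
    f = lambda t: math.hypot(a * math.sinh(t), b * math.cosh(t))
    h = (t1 - t0) / n
    s = f(t0) + f(t1)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(t0 + i * h)
    return s * h / 3

def polyline_length(t0, t1, n=20000):
    """Length of a fine polygonal approximation of the same arc."""
    pts = [(a * math.cosh(t0 + i * (t1 - t0) / n),
            b * math.sinh(t0 + i * (t1 - t0) / n)) for i in range(n + 1)]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

assert abs(arc_length(0, 1.5) - polyline_length(0, 1.5)) < 1e-5
```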
Derived curves Several other curves can be derived from the hyperbola by inversion, the so-called inverse curves of the hyperbola. If the center of inversion is chosen as the hyperbola's own center, the inverse curve is the lemniscate of Bernoulli; the lemniscate is also the envelope of circles centered on a rectangular hyperbola and passing through the origin. If the center of inversion is chosen at a focus or a vertex of the hyperbola, the resulting inverse curves are a limaçon or a strophoid, respectively. Elliptic coordinates A family of confocal hyperbolas is the basis of the system of elliptic coordinates in two dimensions. These hyperbolas are described by the equation where the foci are located at a distance c from the origin on the x-axis, and where θ is the angle of the asymptotes with the x-axis. Every hyperbola in this family is orthogonal to every ellipse that shares the same foci. This orthogonality may be shown by a conformal map of the Cartesian coordinate system w = z + 1/z, where z= x + iy are the original Cartesian coordinates, and w=u + iv are those after the transformation. Other orthogonal two-dimensional coordinate systems involving hyperbolas may be obtained by other conformal mappings. For example, the mapping w = z2 transforms the Cartesian coordinate system into two families of orthogonal hyperbolas. Conic section analysis of the hyperbolic appearance of circles Besides providing a uniform description of circles, ellipses, parabolas, and hyperbolas, conic sections can also be understood as a natural model of the geometry of perspective in the case where the scene being viewed consists of circles, or more generally an ellipse. The viewer is typically a camera or the human eye and the image of the scene a central projection onto an image plane, that is, all projection rays pass a fixed point O, the center. The lens plane is a plane parallel to the image plane at the lens O. 
The image of a circle c is a) a circle, if circle c is in a special position, for example parallel to the image plane and others (see stereographic projection), b) an ellipse, if c has no point with the lens plane in common, c) a parabola, if c has one point with the lens plane in common and d) a hyperbola, if c has two points with the lens plane in common. (Special positions where the circle plane contains point O are omitted.) These results can be understood if one recognizes that the projection process can be seen in two steps: 1) circle c and point O generate a cone which is 2) cut by the image plane, in order to generate the image. One sees a hyperbola whenever catching sight of a portion of a circle cut by one's lens plane. The inability to see very much of the arms of the visible branch, combined with the complete absence of the second branch, makes it virtually impossible for the human visual system to recognize the connection with hyperbolas. Applications Sundials Hyperbolas may be seen in many sundials. On any given day, the sun revolves in a circle on the celestial sphere, and its rays striking the point on a sundial trace out a cone of light. The intersection of this cone with the horizontal plane of the ground forms a conic section. At most populated latitudes and at most times of the year, this conic section is a hyperbola. In practical terms, the shadow of the tip of a pole traces out a hyperbola on the ground over the course of a day (this path is called the declination line). The shape of this hyperbola varies with the geographical latitude and with the time of the year, since those factors affect the cone of the sun's rays relative to the horizon. The collection of such hyperbolas for a whole year at a given location was called a pelekinon by the Greeks, since it resembles a double-bladed axe.
Multilateration A hyperbola is the basis for solving multilateration problems, the task of locating a point from the differences in its distances to given points — or, equivalently, the difference in arrival times of synchronized signals between the point and the given points. Such problems are important in navigation, particularly on water; a ship can locate its position from the difference in arrival times of signals from LORAN or GPS transmitters. Conversely, a homing beacon or any transmitter can be located by comparing the arrival times of its signals at two separate receiving stations; such techniques may be used to track objects and people. In particular, the set of possible positions of a point that has a distance difference of 2a from two given points is a hyperbola of vertex separation 2a whose foci are the two given points. Path followed by a particle The path followed by any particle in the classical Kepler problem is a conic section. In particular, if the total energy E of the particle is greater than zero (that is, if the particle is unbound), the path of such a particle is a hyperbola. This property is useful in studying atomic and sub-atomic forces by scattering high-energy particles; for example, the Rutherford experiment demonstrated the existence of an atomic nucleus by examining the scattering of alpha particles from gold atoms. If the short-range nuclear interactions are ignored, the atomic nucleus and the alpha particle interact only by a repulsive Coulomb force, which satisfies the inverse square law requirement for a Kepler problem. Korteweg–de Vries equation The hyperbolic trig function appears as one solution to the Korteweg–de Vries equation which describes the motion of a soliton wave in a canal. Angle trisection As shown first by Apollonius of Perga, a hyperbola can be used to trisect any angle, a well-studied problem of geometry.
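A planar multilateration fix is the intersection of two such hyperbolas, one per pair of stations. The sketch below is a generic illustration only (the station layout, emitter position, and the Newton-style solver are hypothetical, not part of any LORAN or GPS specification): it recovers a 2-D position from the distance differences to a reference station.

```python
import math

def tdoa_locate(stations, dd, guess, iters=40):
    """Recover (x, y) from dd[i] = d(P, stations[0]) - d(P, stations[i+1]),
    using Newton's method with a finite-difference Jacobian (2 equations)."""
    def residual(p):
        d0 = math.dist(p, stations[0])
        return [d0 - math.dist(p, s) - t for s, t in zip(stations[1:], dd)]
    x, y = guess
    for _ in range(iters):
        r = residual((x, y))
        h = 1e-6
        rx, ry = residual((x + h, y)), residual((x, y + h))
        j = [[(rx[i] - r[i]) / h, (ry[i] - r[i]) / h] for i in range(2)]
        det = j[0][0] * j[1][1] - j[0][1] * j[1][0]
        x -= (r[0] * j[1][1] - r[1] * j[0][1]) / det   # Cramer's rule step
        y -= (j[0][0] * r[1] - j[1][0] * r[0]) / det
    return x, y

# three hypothetical stations; the emitter sits at (3, 4)
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dd = [5.0 - math.sqrt(65.0), 5.0 - math.sqrt(45.0)]   # exact distance differences
x, y = tdoa_locate(stations, dd, guess=(2.0, 3.0))
assert abs(x - 3.0) < 1e-6 and abs(y - 4.0) < 1e-6
```

A real system would use more stations and a least-squares solver to absorb timing noise; the point here is only that each measured difference constrains the emitter to one branch of a hyperbola.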
Given an angle, first draw a circle centered at its vertex O, which intersects the sides of the angle at points A and B. Next draw the line segment with endpoints A and B and its perpendicular bisector . Construct a hyperbola of eccentricity e=2 with as directrix and B as a focus. Let P be the intersection (upper) of the hyperbola with the circle. Angle POB trisects angle AOB. To prove this, reflect the line segment OP about the line obtaining the point P' as the image of P. Segment AP' has the same length as segment BP due to the reflection, while segment PP' has the same length as segment BP due to the eccentricity of the hyperbola. As OA, OP', OP and OB are all radii of the same circle (and so, have the same length), the triangles OAP', OPP' and OPB are all congruent. Therefore, the angle has been trisected, since 3×POB = AOB. Efficient portfolio frontier In portfolio theory, the locus of mean-variance efficient portfolios (called the efficient frontier) is the upper half of the east-opening branch of a hyperbola drawn with the portfolio return's standard deviation plotted horizontally and its expected value plotted vertically; according to this theory, all rational investors would choose a portfolio characterized by some point on this locus. Biochemistry In biochemistry and pharmacology, the Hill equation and Hill-Langmuir equation respectively describe biological responses and the formation of protein–ligand complexes as functions of ligand concentration. They are both rectangular hyperbolae. Hyperbolas as plane sections of quadrics Hyperbolas appear as plane sections of the following quadrics: Elliptic cone Hyperbolic cylinder Hyperbolic paraboloid Hyperboloid of one sheet Hyperboloid of two sheets See also Other conic sections Circle Ellipse Parabola Degenerate conic Other related topics Elliptic coordinates, an orthogonal coordinate system based on families of ellipses and hyperbolas. 
Hyperbolic growth Hyperbolic partial differential equation Hyperbolic sector Hyperboloid structure Hyperbolic trajectory Hyperboloid Multilateration Rotation of axes Translation of axes Unit hyperbola External links Apollonius' Derivation of the Hyperbola at Convergence Frans van Schooten: Mathematische Oeffeningen, 1659
14055
https://en.wikipedia.org/wiki/Humayun
Humayun
Nasir-ud-Din Muḥammad (6 March 1508 – 27 January 1556), better known by his regnal name, Humayun, was the second emperor of the Mughal Empire, who ruled over territory in what is now Afghanistan, Pakistan, Northern India, and Bangladesh from 1530 to 1540 and again from 1555 to 1556. Like his father, Babur, he lost his kingdom early but regained it with the aid of the Safavid dynasty of Persia, with additional territory. At the time of his death in 1556, the Mughal Empire spanned almost one million square kilometres. In December 1530, Humayun succeeded his father to the throne of Delhi as ruler of the Mughal territories in the Indian subcontinent. Humayun was an inexperienced ruler when he came to power, at the age of 22. His half-brother Kamran Mirza inherited Kabul and Kandahar, the northernmost parts of their father's empire. Kamran was to become a bitter rival of Humayun. Humayun lost Mughal territories to Sher Shah Suri, but regained them 15 years later with Safavid aid. Humayun's return from Persia was accompanied by a large retinue of Persian noblemen and signalled an important change in Mughal court culture. The Central Asian origins of the dynasty were largely overshadowed by the influences of Persian art, architecture, language, and literature. There are many stone carvings and thousands of Persian manuscripts in India dating from the time of Humayun. Subsequently, Humayun further expanded the Empire in a very short time, leaving a substantial legacy for his son, Akbar. Background Humayun was born as Nasir-ud-din Muhammad to Babur's favourite wife Māham Begum on Tuesday, 6 March 1508. According to Abu Fazal Allami, Māham was actually related to the noble family of Sultan Hussain Mirza of Khorasan. She was also related to Sheikh Ahmād Jan. The decision of Babur to divide the territories of his empire between two of his sons was unusual in India, although it had been a common Central Asian practice since the time of Genghis Khan.
Unlike most monarchies, which practiced primogeniture, the Timurids followed the example of Genghis and did not leave an entire kingdom to the eldest son. Although under that system only a Chinggisid could claim sovereignty and khanal authority, any male Chinggisid within a given sub-branch had an equal right to the throne (though the Timurids were not Chinggisid in their paternal ancestry). While Genghis Khan's Empire had been peacefully divided between his sons upon his death, almost every Chinggisid succession since had resulted in fratricide. Timur himself had divided his territories among Pir Muhammad, Miran Shah, Khalil Sultan and Shah Rukh, which resulted in inter-family warfare. Upon Babur's death, Humayun's territories were the least secure. He had ruled only four years, and not all umarah (nobles) viewed Humayun as the rightful ruler. Indeed, earlier, when Babur had become ill, some of the nobles had tried to install his brother-in-law, Mahdi Khwaja, as ruler. Although this attempt failed, it was a sign of problems to come. Early reign When Humayun came to the throne of the Mughal Empire, several of his brothers revolted against him. Another brother, Khalil Mirza (1509–1530), supported Humayun but was assassinated. The Emperor commenced construction of a tomb for his brother in 1538, but this was not yet finished when Humayun was forced to flee to Persia. Sher Shah destroyed the structure and no further work was done on it after Humayun's restoration. Humayun had two major rivals for his lands: Sultan Bahadur of Gujarat to the southwest and Sher Shah Suri (Sher Khan) settled along the river Ganges in Bihar to the east. Humayun's first campaign was to confront Sher Shah Suri. Halfway through this offensive, Humayun had to abandon it and concentrate on Gujarat, where a threat from Ahmed Shah had to be met. Humayun was victorious, annexing Gujarat, Malwa, Champaner and the great fort of Mandu.
During the first five years of Humayun's reign, Bahadur and Sher Khan extended their rule, although Sultan Bahadur faced pressure in the east from sporadic conflicts with the Portuguese. While the Mughals had obtained firearms via the Ottoman Empire, Bahadur's Gujarat had acquired them through a series of contracts drawn up with the Portuguese, allowing the Portuguese to establish a strategic foothold in north-western India. In 1535, Humayun was made aware that the Sultan of Gujarat was planning an assault on the Mughal territories with Portuguese aid. Humayun gathered an army and marched on Bahadur. Within a month he had captured the forts of Mandu and Champaner. However, instead of pressing his attack, Humayun ceased the campaign and consolidated his newly conquered territory. Sultan Bahadur, meanwhile, escaped and took up refuge with the Portuguese. Like his father, Humayun was a frequent user of opium. Sher Shah Suri Shortly after Humayun had marched on Gujarat, Sher Shah Suri saw an opportunity to wrest control of Agra from the Mughals. He began to gather his army together, hoping for a rapid and decisive siege of the Mughal capital. Upon hearing this alarming news, Humayun quickly marched his troops back to Agra, allowing Bahadur to easily regain control of the territories Humayun had recently taken. In February 1537, however, Bahadur was killed when a botched plan to kidnap the Portuguese viceroy ended in a fire-fight that the Sultan lost. Whilst Humayun succeeded in protecting Agra from Sher Shah, the second city of the Empire, Gaur, the capital of the vilayat of Bengal, was sacked. Humayun's troops had been delayed while trying to take Chunar, a fort occupied by Sher Shah's son, in order to protect his troops from an attack from the rear. The stores of grain at Gaur, the largest in the empire, were emptied, and Humayun arrived to see corpses littering the roads. The vast wealth of Bengal was depleted and brought East, giving Sher Shah a substantial war chest.
Sher Shah withdrew to the east, but Humayun did not follow: instead he "shut himself up for a considerable time in his Harem, and indulged himself in every kind of luxury". Hindal, Humayun's 19-year-old brother, had agreed to aid him in this battle and protect the rear from attack, but he abandoned his position and withdrew to Agra, where he decreed himself acting emperor. When Humayun sent the grand Mufti, Sheikh Buhlul, to reason with him, the Sheikh was killed. Further provoking the rebellion, Hindal ordered that the Khutba, or sermon, in the main mosque be read in his own name. Humayun's other brother, Kamran Mirza, marched from his territories in the Punjab, ostensibly to aid Humayun. However, his return home had treacherous motives, as he intended to stake a claim for Humayun's apparently collapsing empire. He brokered a deal with Hindal under which his brother would cease all acts of disloyalty in return for a share in the new empire, which Kamran would create once Humayun was deposed. In June 1539 Sher Shah met Humayun in the Battle of Chausa on the banks of the Ganges, near Buxar. This became an entrenched battle in which both sides spent much time digging themselves into positions. The major part of the Mughal army, the artillery, was now immobile, and Humayun decided to engage in some diplomacy, using Muhammad Aziz as ambassador. Humayun agreed to allow Sher Shah to rule over Bengal and Bihar, but only as provinces granted to him by his Emperor, Humayun, falling short of outright sovereignty. The two rulers also struck a bargain in order to save face: Humayun's troops would charge those of Sher Shah, whose forces would then retreat in feigned fear. Thus honour would, supposedly, be satisfied. Once the army of Humayun had made its charge and Sher Shah's troops had made their agreed-upon retreat, the Mughal troops relaxed their defensive preparations and returned to their entrenchments without posting a proper guard.
Observing the Mughals' vulnerability, Sher Shah reneged on his earlier agreement. That very night, his army approached the Mughal camp and finding the Mughal troops unprepared with a majority asleep, they advanced and killed most of them. The Emperor survived by swimming across the Ganges using an air-filled "water skin", and quietly returned to Agra. Humayun was assisted across the Ganges by Shams al-Din Muhammad. In Agra When Humayun returned to Agra, he found that all three of his brothers were present. Humayun once again not only pardoned his brothers for plotting against him, but even forgave Hindal for his outright betrayal. With his armies travelling at a leisurely pace, Sher Shah was gradually drawing closer and closer to Agra. This was a serious threat to the entire family, but Humayun and Kamran squabbled over how to proceed. Kamran withdrew after Humayun refused to make a quick attack on the approaching enemy, instead opting to build a larger army under his own name. When Kamran returned to Lahore, Humayun, with his other brothers Askari and Hindal, marched to meet Sher Shah east of Agra at the battle of Kannauj on 17 May 1540. Humayun was soundly defeated. He retreated to Agra, pursued by Sher Shah, and thence through Delhi to Lahore. Sher Shah's founding of the short-lived Sur Empire, with its capital at Delhi, resulted in Humayun's exile for 15 years in the court of Shah Tahmasp I. In Lahore The four brothers were united in Lahore, but every day they were informed that Sher Shah was getting closer and closer. When he reached Sirhind, Humayun sent an ambassador carrying the message "I have left you the whole of Hindustan [i.e. the lands to the East of Punjab, comprising most of the Ganges Valley]. Leave Lahore alone, and let Sirhind be a boundary between you and me." Sher Shah, however, replied "I have left you Kabul. You should go there." 
Kabul was the capital of the empire of Humayun's brother Kamran, who was far from willing to hand over any of his territories to his brother. Instead, Kamran approached Sher Shah and proposed that he actually revolt against his brother and side with Sher Shah in return for most of the Punjab. Sher Shah dismissed his help, believing it not to be required, though word soon spread to Lahore about the treacherous proposal, and Humayun was urged to make an example of Kamran and kill him. Humayun refused, citing the last words of his father, Babur, "Do nothing against your brothers, even though they may deserve it." Withdrawing further Humayun decided it would be wise to withdraw still further. He and his army rode out across the Thar Desert after the Hindu ruler Rao Maldeo Rathore allied with Sher Shah Suri against the Mughal Empire. In many accounts Humayun mentions how he and his pregnant wife had to trace their steps through the desert at the hottest time of year. Their rations were low, and they had little to eat; even drinking water was a major problem in the desert. When Hamida Bano's horse died, no one would lend the Queen (who was now eight months pregnant) a horse, so Humayun did so himself, resulting in him riding a camel for six kilometres (four miles), although Khaled Beg then offered him his mount. Humayun was later to describe this incident as the lowest point in his life. Humayun asked that his brothers join him as he fell back into Sindh. The previously rebellious Hindal Mirza remained loyal and was ordered to join his brothers in Kandahar, but Kamran Mirza and Askari Mirza instead decided to head to the relative peace of Kabul. This was to be a definitive schism in the family. Humayun headed for Sindh because he expected aid from the Emir of Sindh, Hussein Umrani, whom he had appointed and who owed him his allegiance.
Also, his wife Hamida hailed from Sindh; she was the daughter of a prestigious pir family (a pir is an Islamic religious guide) of Persian heritage long settled in Sindh. En route to the Emir's court, Humayun had to break his journey because his pregnant wife Hamida was unable to travel further. Humayun sought refuge with the Hindu ruler of the oasis town of Amarkot (now part of Sindh province). Rana Prasad Rao of Amarkot duly welcomed Humayun into his home and sheltered the refugees for several months. Here, in the household of a Hindu Rajput nobleman, Humayun's wife Hamida Bano, daughter of a Sindhi family, gave birth to the future Emperor Akbar on 15 October 1542. The date of birth is well established because Humayun consulted his astronomer to utilise the astrolabe and check the location of the planets. The infant was the long-awaited heir-apparent to the 34-year-old Humayun and the answer of many prayers. Shortly after the birth, Humayun and his party left Amarkot for Sindh, leaving behind Akbar, who as an infant was not ready for the grueling journey ahead. He was later adopted by Askari Mirza. For a change, Humayun was not deceived in the character of the man on whom he had pinned his hopes. Emir Hussein Umrani, ruler of Sindh, welcomed Humayun's presence and was loyal to Humayun just as he had been loyal to Babur against the renegade Arghuns. While in Sindh, Humayun, alongside Emir Hussein Umrani, gathered horses and weapons and formed new alliances that helped regain lost territories, until finally Humayun had gathered hundreds of Sindhi and Baloch tribesmen alongside his Mughals. He then marched towards Kandahar and later Kabul; thousands more gathered by his side as Humayun continually declared himself the rightful Timurid heir of the first Mughal Emperor, Babur.
Retreat to Kabul Humayun set out from Sindh with 300 camels (mostly wild) and 2,000 loads of grain, crossing the Indus River on 11 July 1543 to join his brothers in Kandahar, with the ambition of regaining the Mughal Empire and overthrowing the Suri dynasty. Among the tribes that had sworn allegiance to Humayun were the Leghari, Magsi, Rind and many others. In Kamran Mirza's territory, Hindal Mirza had been placed under house arrest in Kabul after refusing to have the Khutba recited in Kamran Mirza's name. His other brother, Askari Mirza, was now ordered to gather an army and march on Humayun. When Humayun received word of the approaching hostile army he decided against facing them, and instead sought refuge elsewhere. Akbar was left behind in camp close to Kandahar, as it was December, too cold and dangerous to include the 14-month-old toddler in the march through the mountains of the Hindu Kush. Askari Mirza took Akbar in, leaving the wives of Kamran and Askari Mirza to raise him. The Akbarnama specifies Kamran Mirza's wife, Sultan Begam. Once again Humayun turned toward Kandahar, where his brother Kamran Mirza was in power, but he received no help and had to seek refuge with the Shah of Persia. Refuge in Persia Humayun fled to the refuge of the Safavid Empire in Persia, marching with 40 men, his wife Bega Begum, and her companion through mountains and valleys. Among other trials the Imperial party were forced to live on horse meat boiled in the soldiers' helmets. These indignities continued during the month it took them to reach Herat; after their arrival, however, they were reintroduced to the finer things in life. Upon entering the city his army was greeted with an armed escort, and they were treated to lavish food and clothing. They were given fine accommodations and the roads were cleared and cleaned before them.
Shah Tahmasp, unlike Humayun's own family, actually welcomed the Mughal, and treated him as a royal visitor. Here Humayun went sightseeing and was amazed at the Persian artwork and architecture he saw: much of this was the work of the Timurid Sultan Husayn Bayqarah and his ancestor, princess Gauhar Shad, so he was able to admire the work of his relatives and ancestors at first hand. He was introduced to the work of the Persian miniaturists, and Kamaleddin Behzad had two of his pupils join Humayun in his court. Humayun was amazed at their work and asked if they would work for him if he were to regain the sovereignty of Hindustan: they agreed. With so much going on, Humayun did not even meet the Shah until July, some six months after his arrival in Persia. After a lengthy journey from Herat the two met in Qazvin, where a large feast and parties were held for the event. The meeting of the two monarchs is depicted in a famous wall-painting in the Chehel Sotoun (Forty Columns) palace in Esfahan. The Shah urged Humayun to convert from Sunni to Shia Islam in order to keep himself and several hundred followers alive. Although the Mughals initially resisted the conversion, they knew that with this outward acceptance of Shi'ism, Shah Tahmasp would eventually be prepared to offer Humayun more substantial support. When Humayun's brother, Kamran Mirza, offered to cede Kandahar to the Persians in exchange for Humayun, dead or alive, Shah Tahmasp refused. Instead the Shah staged a celebration for Humayun, with 300 tents, an imperial Persian carpet, 12 musical bands and "meat of all kinds". Here the Shah announced that all this, and 12,000 elite cavalry, were his to lead an attack on his brother Kamran. All that Shah Tahmasp asked was that, if Humayun's forces were victorious, Kandahar would be his. Kandahar and onward With this Persian Safavid aid Humayun took Kandahar from Askari Mirza after a two-week siege.
He noted how the nobles who had served Askari Mirza quickly flocked to serve him, "in very truth the greater part of the inhabitants of the world are like a flock of sheep, wherever one goes the others immediately follow". Kandahar was, as agreed, given to the Shah of Persia, who sent his infant son, Murad, as the Viceroy. However, the baby soon died and Humayun thought himself strong enough to assume power. Humayun now prepared to take Kabul, ruled by his brother Kamran Mirza. In the end, there was no actual siege. Kamran Mirza was detested as a leader, and as Humayun's Persian army approached the city hundreds of Kamran Mirza's troops changed sides, flocking to join Humayun and swelling his ranks. Kamran Mirza absconded and began building an army outside the city. In November 1545, Hamida and Humayun were reunited with their son Akbar, and held a huge feast. They also held another, larger, feast in the child's honour when he was circumcised. However, while Humayun had a larger army than his brother and had the upper hand, on two occasions his poor military judgement allowed Kamran Mirza to retake Kabul and Kandahar, forcing Humayun to mount further campaigns for their recapture. He might have been aided in this by his reputation for leniency towards the troops who had defended the cities against him, as opposed to Kamran Mirza, whose brief periods of possession were marked by atrocities against the inhabitants who, he supposed, had helped his brother. His youngest brother, Hindal Mirza, formerly the most disloyal of his siblings, died fighting on his behalf. His brother Askari Mirza was shackled in chains at the behest of his nobles and aides. He was allowed to go on Hajj, and died en route in the desert outside Damascus. Humayun's other brother, Kamran Mirza, had repeatedly sought to have Humayun killed. In 1552 Kamran Mirza attempted to make a pact with Islam Shah, Sher Shah's successor, but was apprehended by a Gakhar.
The Gakhars were one of the minority of tribal groups who had consistently remained loyal to their oath to the Mughals. Sultan Adam of the Gakhars handed Kamran Mirza over to Humayun. Humayun was inclined to forgive his brother. However, he was warned that allowing Kamran Mirza's repeated acts of treachery to go unpunished could foment rebellion amongst his own supporters. So, instead of killing his brother, Humayun had Kamran Mirza blinded, which would end any claim by the latter to the throne. Humayun sent Kamran Mirza on Hajj, as he hoped to see his brother thereby absolved of his offences. However, Kamran Mirza died close to Mecca in the Arabian Peninsula in 1557. Restoration of the Mughal Empire Sher Shah Suri had died in 1545; his son and successor Islam Shah died in 1554. These two deaths left the dynasty reeling and disintegrating. Three rivals for the throne all marched on Delhi, while in many cities leaders tried to stake a claim for independence. This was a perfect opportunity for the Mughals to march back to India. The Mughal Emperor Humayun gathered a vast army, which included the Baloch tribes of Leghari, Magsi and Rind, and attempted the challenging task of retaking the throne in Delhi. Humayun placed the army under the leadership of Bairam Khan, a wise move given Humayun's own record of military ineptitude, and it turned out to be prescient as Bairam proved himself a great tactician. At the Battle of Sirhind on 22 June 1555, the armies of Sikandar Shah Suri were decisively defeated and the Mughal Empire was re-established in India. Bairam Khan led the army through the Punjab virtually unopposed. The fort of Rohtas, which was built in 1541–1543 by Sher Shah Suri to crush the Gakhars who were loyal to Humayun, was surrendered without a shot by a treacherous commander. The walls of the Rohtas Fort measure up to 12.5 meters in thickness and up to 18.28 meters in height.
They extend for 4 km and feature 68 semi-circular bastions. Its sandstone gates, both massive and ornate, are thought to have exerted a profound influence on Mughal military architecture. The only major battle faced by Humayun's armies was against Sikander Suri in Sirhind, where Bairam Khan employed a tactic whereby he engaged his enemy in open battle, but then retreated quickly in apparent fear. When the enemy followed after them they were surprised by entrenched defensive positions and were easily annihilated. After Sirhind, most towns and villages chose to welcome the invading army as it made its way to the capital. On 23 July 1555, Humayun once again sat on Babur's throne in Delhi. Ruling Kashmir With all of Humayun's brothers now dead, there was no fear of another usurping his throne during his military campaigns. He was also now an established leader and could trust his generals. With this new-found strength Humayun embarked on a series of military campaigns aimed at extending his reign over areas in the east and west of the subcontinent. His sojourn in exile seems to have reduced his reliance on astrology, and his military leadership came to imitate the more effective methods that he had observed in Persia. Character Edward S. Holden writes: "He was uniformly kind and considerate to his dependents, devotedly attached to his son Akbar, to his friends, and to his turbulent brothers. The misfortunes of his reign arose, in great measure, from his failure to treat them with rigor." He further writes: "The very defects of his character, which render him less admirable as a successful ruler of nations, make us more fond of him as a man. His renown has suffered in that his reign came between the brilliant conquests of Babur and the beneficent statesmanship of Akbar; but he was not unworthy to be the son of the one and the father of the other."
Stanley Lane-Poole writes in his book Medieval India that although Humayun's name meant "the fortunate", no king in history was ever so wrongly named; he was of a forgiving nature. He further writes, "He was in fact unfortunate ... Scarcely had he enjoyed his throne for six months in Delhi when he slipped down from the polished steps of his palace and died in his forty-ninth year (Jan. 24, 1556). If there was a possibility of falling, Humayun was not the man to miss it. He tumbled through his life and tumbled out of it." Humayun ordered the crushing by elephant of an imam he mistakenly believed to be critical of his reign. Death and legacy On 24 January 1556, Humayun, with his arms full of books, was descending the staircase from his library Sher Mandal when the muezzin announced the Azaan (the call to prayer). It was his habit, wherever and whenever he heard the summons, to bow his knee in holy reverence. Trying to kneel, he caught his foot in his robe, slipped down several steps and hit his temple on a rugged stone edge. He died three days later. His body was laid to rest in Purana Quila initially, but, because of an attack by Hemu on Delhi and the capture of Purana Qila, Humayun's body was exhumed by the fleeing army and transferred to Kalanaur in Punjab, where Akbar was crowned. After the young Mughal emperor Akbar defeated and killed Hemu in the Second Battle of Panipat, Humayun's body was buried in Humayun's Tomb in Delhi, the first very grand garden tomb in Mughal architecture, setting the precedent later followed by the Taj Mahal and many other Indian monuments. It was commissioned by his favourite and devoted chief wife, Bega Begum. Akbar later asked his paternal aunt, Gulbadan Begum, to write a biography of his father Humayun, the Humayun nameh (or Humayun-nama), and what she remembered of Babur. The full title is Ahwal Humayun Padshah Jamah Kardom Gulbadan Begum bint Babur Padshah amma Akbar Padshah.
She was only eight when Babur died, and was married at 17, and her work is in simple Persian style. Unlike other Mughal royal biographies (the Zafarnama of Timur, Baburnama, and his own Akbarnama) no richly illustrated copy has survived, and the work is only known from a single battered and slightly incomplete manuscript, now in the British Library, that emerged in the 1860s. Annette Beveridge published an English translation in 1901, and editions in English and Bengali have been published since 2000.
Prince-elector
The prince-electors, or electors for short, were the members of the electoral college that elected the emperor of the Holy Roman Empire. From the 13th century onwards, the prince-electors had the privilege of electing the monarch who would be crowned by the pope. After 1508, there were no imperial coronations and the election alone was sufficient. Charles V (elected in 1519) was the last emperor to be crowned (1530); his successors were elected emperors by the electoral college, each being titled "Elected Emperor of the Romans". The dignity of elector carried great prestige and was considered to be second only to that of king or emperor. The electors held exclusive privileges that were not shared with other princes of the Empire, and they continued to hold their original titles alongside that of elector. The heir apparent to a secular prince-elector was known as an electoral prince. Rights and privileges Electors were rulers of Imperial Estates, enjoying precedence over the other Imperial Princes. They were, until the 18th century, exclusively entitled to be addressed with the title of Serene Highness. In 1742, the electors became entitled to the superlative style of Most Serene Highness, while other princes were promoted to Serene Highness. As rulers of Imperial Estates, the electors enjoyed all the privileges of Imperial Princes, including the right to enter into alliances, autonomy in relation to dynastic affairs, and precedence over other subjects. The Golden Bull granted them the Privilegium de non appellando, which prevented their subjects from lodging an appeal to a higher Imperial court. However, while this privilege, and some others, were automatically granted to Electors, they were not exclusive to them, and many of the larger Imperial Estates were also individually granted some or all of those rights and privileges.
Imperial Diet The electors, like the other princes ruling States of the Empire, were members of the Imperial Diet, which was divided into three collegia: the Council of Electors, the Council of Princes, and the Council of Cities. In addition to being members of the Council of Electors, most electors were also members of the Council of Princes by virtue of possessing territory or holding ecclesiastical position. The assent of both bodies was required for important decisions affecting the structure of the Empire, such as the creation of new electorates or States of the Empire. Many electors ruled a number of States of the Empire or held several ecclesiastical titles, and therefore had multiple votes in the Council of Princes. In 1792, the Elector of Brandenburg had eight votes, the Elector of Bavaria six votes, the Elector of Hanover six votes, the King of Bohemia three votes, the Elector-Archbishop of Trier three votes, the Elector-Archbishop of Cologne two votes, and the Elector-Archbishop of Mainz one vote. Thus, of the hundred votes in the Council of Princes in 1792, twenty-nine belonged to electors, giving them considerable influence in the Council of Princes in addition to their positions as electors. In addition to voting by colleges or councils, the Imperial Diet also voted in religious coalitions, as provided for in the Peace of Westphalia. The Archbishop of Mainz presided over the Catholic body, while the Elector of Saxony presided over the Protestant body. The division into religious bodies was on the basis of the official religion of the state, and not of its rulers. Thus, even when the Electors of Saxony were Catholics during the eighteenth century, they continued to preside over the Protestant body, since the state of Saxony was officially Protestant. Elections The electors were originally summoned by the Archbishop of Mainz within one month of an Emperor's death, and met within three months of being summoned.
During the interregnum, imperial power was exercised by two imperial vicars. Each vicar, in the words of the Golden Bull, was "the administrator of the empire itself, with the power of passing judgments, of presenting to ecclesiastical benefices, of collecting returns and revenues and investing with fiefs, of receiving oaths of fealty for and in the name of the holy empire". The Elector of Saxony was vicar in areas operating under Saxon law (Saxony, Westphalia, Hannover, and northern Germany), while the Elector Palatine was vicar in the remainder of the Empire (Franconia, Swabia, the Rhine, and southern Germany). The Elector of Bavaria replaced the Elector Palatine in 1623, but when the latter was granted a new electorate in 1648, there was a dispute between the two as to which was vicar. In 1659, both purported to act as vicar, but ultimately, the other vicar recognized the Elector of Bavaria. Later, the two electors made a pact to act as joint vicars, but the Imperial Diet rejected the agreement. In 1711, while the Elector of Bavaria was under the ban of the Empire, the Elector Palatine again acted as vicar, but his cousin was restored to his position upon his restoration three years later. Finally, in 1745, the two agreed to alternate as vicars, with Bavaria starting first. This arrangement was upheld by the Imperial Diet in 1752. In 1777, the question was settled when the Elector Palatine inherited Bavaria. On many occasions, however, there was no interregnum, as a new king had been elected during the lifetime of the previous Emperor. Frankfurt regularly served as the site of the election from the fifteenth century on, but elections were also held at Cologne (1531), Regensburg (1575 and 1636), and Augsburg (1653 and 1690). An elector could appear in person or could appoint another elector as his proxy. 
More often, an electoral suite or embassy was sent to cast the vote; the credentials of such representatives were verified by the Archbishop of Mainz, who presided over the ceremony. The deliberations were held at the city hall, but voting occurred in the cathedral. In Frankfurt, a special electoral chapel was used for elections. Under the Golden Bull, a majority of electors sufficed to elect a king, and each elector could cast only one vote. Electors were free to vote for whomsoever they pleased (including themselves), but dynastic considerations played a great part in the choice. Electors drafted an electoral capitulation, which was presented to the king-elect. The capitulation may be described as a contract between the princes and the king, the latter conceding rights and powers to the electors and other princes. Once an individual swore to abide by the electoral capitulation, he assumed the office of King of the Romans. In the 10th and 11th centuries, princes often acted merely to confirm hereditary succession in the Saxon Ottonian dynasty and the Franconian Salian dynasty. But with the actual formation of the prince-elector class, elections became more open, starting with the election of Lothair II in 1125. The Staufen dynasty managed to get its sons formally elected in their fathers' lifetimes almost as a formality. After these lines became extinct, the electors began to elect kings from different families so that the throne would not once again settle within a single dynasty. For some two centuries, the monarchy was elective both in theory and in practice; the arrangement, however, did not last, since the powerful House of Habsburg managed to secure succession within their dynasty during the fifteenth century.
All kings elected from 1438 onwards were from among the Habsburg Archdukes of Austria (and later Kings of Hungary and Bohemia) until 1740, when the archduchy was inherited by a woman, Maria Theresa, sparking the War of the Austrian Succession. A representative of the House of Wittelsbach was elected for a short period of time, but in 1745, Maria Theresa's husband, Francis I of the Habsburg-Lorraine dynasty, became King. All of his successors were also from the same family. Hence, for the greater part of the Empire's history, the role of the electors was largely ceremonial. High offices Each elector held a "High Office of the Empire" analogous to a modern Cabinet office and was a member of the (ceremonial) Imperial Household. The three spiritual electors were Arch-Chancellors: the Archbishop of Mainz was Arch-Chancellor of Germany, the Archbishop of Cologne was Arch-Chancellor of Italy, and the Archbishop of Trier was Arch-Chancellor of Burgundy. The six remaining were secular electors, who were granted augmentations to their arms reflecting their position in the Household. These augments were displayed either as an inset badge, as in the case of the Arch-Steward, Arch-Treasurer, and Arch-Chamberlain, or dexter, as in the case of the Arch-Marshal and Arch-Bannerbearer. In the case of the Arch-Cupbearer, the augment was integrated into the escutcheon, held in the royal Bohemian lion's right paw. When the Duke of Bavaria replaced the Elector Palatine in 1623, he assumed the latter's office of Arch-Steward. When the Count Palatine was granted a new electorate, he assumed the position of Arch-Treasurer of the Empire. When the Duke of Bavaria was banned in 1706, the Elector Palatine returned to the office of Arch-Steward, and in 1710, the Elector of Hanover was promoted to the post of Arch-Treasurer.
Matters were complicated by the Duke of Bavaria's restoration in 1714; the Elector of Bavaria resumed the office of Arch-Steward, while the Elector Palatine returned to the post of Arch-Treasurer, and the Elector of Hanover was given the new office of Arch-Bannerbearer. The Electors of Hanover, however, continued to be styled Arch-Treasurers, though the Elector Palatine was the one who actually exercised the office until 1777, when he inherited Bavaria and the Arch-Stewardship. After 1777, no further changes were made to the Imperial Household; new offices were planned for the Electors admitted in 1803, but the Empire was abolished before they could be created. The Duke of Württemberg, however, started to adopt the trappings of the Arch-Bannerbearer. Many High Officers were entitled to use "augmentations" on their coats of arms; said augmentations, which were special marks of honor, appeared in the middle of the electors' shields atop the other charges (in heraldic terms, the augmentations appeared in the form of inescutcheons). The Arch-Steward used gules an orb Or (a gold orb on a red field). The Arch-Marshal used the more complicated per fess sable and argent, two swords in saltire gules (two red swords arranged in the form of a saltire, on a black and white field). The Arch-Chamberlain's augmentation was azure a scepter palewise Or (a golden scepter on a blue field), while the Arch-Treasurer's was gules the crown of Charlemagne Or (a gold crown on a red field). As noted above, the Elector Palatine and the Elector of Hanover styled themselves Arch-Treasurer from 1714 until 1777; during this time, both electors used the corresponding augmentations. The three Arch-Chancellors and the Arch-Cupbearer, however, did not use any augmentations. The electors discharged the ceremonial duties associated with their offices only during coronations, where they bore the crown and regalia of the Empire.
Otherwise, they were represented by holders of corresponding "Hereditary Offices of the Household". The Arch-Butler was represented by the Hereditary Butler (Cupbearer) (the Count of Althann), the Arch-Seneschal by the Hereditary Steward (the Count of Waldburg, whose family adopted the title into their name as "Truchsess von Waldburg"), the Arch-Chamberlain by the Hereditary Chamberlain (the Count of Hohenzollern), the Arch-Marshal by the Hereditary Marshal (the Count of Pappenheim), and the Arch-Treasurer by the Hereditary Treasurer (the Count of Sinzendorf). After 1803, the Duke of Württemberg as Arch-Bannerbearer assigned the Count of Zeppelin-Aschhausen as Hereditary Bannerbearer. History The German practice of electing monarchs began when ancient Germanic tribes formed ad hoc coalitions and elected the leaders thereof. Elections were irregularly held by the Franks, whose successor states include France and the Holy Roman Empire. The French monarchy eventually became hereditary, but the Holy Roman Emperors remained elective, at least in theory, although the Habsburgs provided most of the later monarchs. While all free men originally exercised the right to vote in such elections, suffrage eventually came to be limited to the leading men of the realm. In the election of Lothar II in 1125, a small number of eminent nobles chose the monarch and then submitted him to the remaining magnates for their approbation. Soon, the right to choose the monarch was settled on an exclusive group of princes, and the procedure of seeking the approval of the remaining nobles was abandoned. The college of electors was mentioned in 1152 and again in 1198. The composition of electors at that time is unclear, but appears to have included representatives of the church and the dukes of the four nations of Germany: the Franks (Duchy of Franconia), Swabians (Duchy of Swabia), Saxons (Duchy of Saxony) and Bavarians (Duchy of Bavaria).
1257 to Thirty Years' War The electoral college is known to have existed by 1152, but its composition is unknown. A letter written by Pope Urban IV in 1265 suggests that by "immemorial custom", seven princes had the right to elect the King and future Emperor. The pope wrote that the seven electors were those who had just voted in the election of 1257, which resulted in the election of two kings. There were three ecclesiastical Electors (the Archbishops of Mainz, Trier, and Cologne) and four secular Electors (the King of Bohemia, the Count Palatine of the Rhine, the Duke of Saxony, and the Margrave of Brandenburg). The three Archbishops oversaw the most venerable and powerful sees in Germany, while the other four were supposed to represent the dukes of the four nations. The Count Palatine of the Rhine held most of the former Duchy of Franconia after the last Duke died in 1039. The Margrave of Brandenburg became an Elector when the Duchy of Swabia was dissolved after the last Duke of Swabia was beheaded in 1268. Saxony, even with diminished territory, retained its eminent position. The Palatinate and Bavaria were originally (since 1214) held by the same individual, but in 1253, they were divided between two members of the House of Wittelsbach. The other electors refused to allow two princes from the same dynasty to have electoral rights, so a heated rivalry arose between the Count Palatine and the Duke of Bavaria over who should hold the Wittelsbach seat. Meanwhile, the King of Bohemia, who held the ancient imperial office of Arch-Cupbearer, asserted his right to participate in elections. His participation was sometimes challenged on the grounds that his kingdom was not German, but he was usually recognized in preference to Bavaria, which was, after all, merely a junior line of the Wittelsbachs. The Declaration of Rhense, issued in 1338, had the effect that election by the majority of the electors automatically conferred the royal title and rule over the empire, without papal confirmation. 
The Golden Bull of 1356 finally resolved the disputes among the electors. Under it, the Archbishops of Mainz, Trier, and Cologne, as well as the King of Bohemia, the Count Palatine of the Rhine, the Duke of Saxony, and the Margrave of Brandenburg held the right to elect the King. The college's composition remained unchanged until the 17th century, although the Electorate of Saxony was transferred from the senior to the junior branch of the Wettin family in 1547, in the aftermath of the Schmalkaldic War. Thirty Years' War to Napoleon In 1621, the Elector Palatine, Frederick V, came under the imperial ban after participating in the Bohemian Revolt (a part of the Thirty Years' War). The Elector Palatine's seat was conferred on the Duke of Bavaria, the head of a junior branch of his family. Originally, the Duke held the electorate personally, but it was later made hereditary along with the duchy. When the Thirty Years' War concluded with the Peace of Westphalia in 1648, a new electorate was created for the Count Palatine of the Rhine. Since the Elector of Bavaria retained his seat, the number of electors increased to eight; the two Wittelsbach lines were now sufficiently estranged so as not to pose a combined potential threat. In 1685, the religious composition of the College of Electors was disrupted when a Catholic branch of the Wittelsbach family inherited the Palatinate. A new Protestant electorate was created in 1692 for the Duke of Brunswick-Lüneburg, who became known as the Elector of Hanover (the Imperial Diet officially confirmed the creation in 1708). The Elector of Saxony converted to Catholicism in 1697 so that he could become King of Poland, but no additional Protestant electors were created. Although the Elector of Saxony was personally Catholic, the Electorate itself remained officially Protestant, and the Elector even remained the leader of the Protestant body in the Reichstag. 
In 1706, the Elector of Bavaria and Archbishop of Cologne were outlawed during the War of the Spanish Succession, but both were restored in 1714 after the Peace of Baden. In 1777, the number of electors was reduced to eight when the Elector Palatine inherited Bavaria. Many changes to the composition of the college were necessitated by Napoleon's aggression during the early 19th century. The Treaty of Lunéville (1801), which ceded territory on the Rhine's left bank to France, led to the abolition of the archbishoprics of Trier and Cologne, and the transfer of the remaining spiritual Elector from Mainz to Regensburg. In 1803, electorates were created for the Duke of Württemberg, the Margrave of Baden, the Landgrave of Hesse-Kassel, and the Duke of Salzburg, bringing the total number of electors to ten. When Austria annexed Salzburg under the Treaty of Pressburg (1805), the Duke of Salzburg moved to the Grand Duchy of Würzburg and retained his electorate. None of the new electors, however, had an opportunity to cast votes, as the Holy Roman Empire was abolished in 1806, and the new electorates were never confirmed by the Emperor. In 1788, the ruling family of Savoy pushed to receive an electoral title. Their ambition was backed by Brandenburg-Prussia. However, the French Revolution and subsequent Coalition Wars soon rendered this a moot point (Peter Wilson, Heart of Europe: A History of the Holy Roman Empire, Cambridge, 2016, p. 227). After the Empire After the abolition of the Holy Roman Empire in August 1806, the Electors continued to reign over their territories, many of them taking higher titles. The Electors of Bavaria, Württemberg, and Saxony styled themselves Kings, while the Electors of Baden, Hesse-Darmstadt, Regensburg, and Würzburg became Grand Dukes. 
The Elector of Hesse-Kassel, however, retained the meaningless title "Elector of Hesse", thus distinguishing himself from other Hessian princes (the Grand Duke of Hesse-Darmstadt and the Landgrave of Hesse-Homburg). Napoleon soon exiled him and Kassel was annexed to the Kingdom of Westphalia, a new creation. The King of Great Britain remained at war with Napoleon and continued to style himself Elector of Hanover, while the Hanoverian government continued to operate in London. The Congress of Vienna accepted the Electors of Bavaria, Württemberg, and Saxony as Kings, along with the newly created Grand Dukes. The Elector of Hanover finally joined his fellow Electors by declaring himself the King of Hanover. The restored Elector of Hesse, a Napoleonic creation, tried to be recognized as the King of the Chatti. However, the European powers refused to acknowledge this title at the Congress of Aix-la-Chapelle (1818) and instead listed him with the grand dukes as a "Royal Highness". Believing the title of Prince-Elector to be superior in dignity to that of Grand Duke, the Elector of Hesse-Kassel chose to remain an Elector, even though there was no longer a Holy Roman Emperor to elect. Hesse-Kassel remained the only Electorate in Germany until 1866, when the country backed the losing side in the Austro-Prussian War and was absorbed into Prussia. Spiritual The Elector of Mainz was always a Roman Catholic. The Elector of Trier was always a Roman Catholic. The Elector of Cologne was usually a Roman Catholic, with the exception of Hermann V von Wied (Lutheran, 1542–1546) and Gebhard Truchsess von Waldburg (Reformed 1582–1588). Secular The Elector of Bohemia, who was also ruler of the lands of the Austrian Circle (primarily as the Archduke of Austria) and the King of Hungary from 1526, was usually a Roman Catholic. The exceptions were George of Podebrady (Hussite, 1457–1471) and Frederick I (Reformed, 1619–1620). 
The Elector of Brandenburg, who was also Duke of Prussia from 1618, King in Prussia from 1701, and King of Prussia from 1772, was Roman Catholic until 1539, then Lutheran until 1613, then Reformed until the end of the Empire. The Elector Palatine was Roman Catholic until the 1530s, then Lutheran until 1559, then Reformed until 1575, then again Lutheran until 1583, then again Reformed until 1623, when the electoral dignity was lost to Bavaria. The Elector of Saxony was Roman Catholic until 1525, then Lutheran until 1697, and then again Roman Catholic. Added in the 17th century The Elector of Bavaria, added in 1623 and restored in 1714, was always Roman Catholic. The Elector of Hanover, added in 1692, was Lutheran until 1714, when he became King of Great Britain and also the head of the Anglican Church of England. Added in the 19th century The Elector of Regensburg (added in 1801), Karl Theodor Anton Maria von Dalberg, was Catholic. The Elector of Salzburg (1803–1805) and Würzburg (1805–1806) Ferdinand III and I was Catholic. The Elector of Württemberg (added in 1803), Frederick I, was Lutheran. The Elector of Baden (added in 1803), Charles Frederick, was Lutheran. The Elector of Hesse (added in 1803), William I, was Reformed. Marks of office Electoral Arms Below are the State arms of each Imperial Elector. Emblems of Imperial High Offices are shown on the appropriate arms. Three Electors Spiritual (Archbishops): all three were annexed by various powers through German Mediatisation of 1803. Four Electors Secular: Electors added in the 17th century: Napoleonic Additions As Napoleon waged war on Europe, between 1803 and 1806, the following changes to the Constitution of the Holy Roman Empire were attempted until the Empire's collapse. Except for the prince Württemberg, who had already inherited his office, the electors were not given augments or high office in the imperial household. 
See also Elective monarchy Electoral Palace (disambiguation) Electress Imperial election
Howard Hughes
Howard Robard Hughes Jr. (December 24, 1905 – April 5, 1976) was an American business magnate, investor, record-setting pilot, engineer, film director, and philanthropist, known during his lifetime as one of the most influential and financially successful individuals in the world. He first became prominent as a film producer, and then as an important figure in the aviation industry. Later in life, he became known for his eccentric behavior and reclusive lifestyle—oddities that were caused in part by his worsening obsessive-compulsive disorder (OCD), chronic pain from a near-fatal plane crash, and increasing deafness. As a film tycoon, Hughes gained fame in Hollywood beginning in the late 1920s, when he produced big-budget and often controversial films such as The Racket (1928), Hell's Angels (1930), and Scarface (1932). He later took over the RKO Pictures film studio in 1948, recognized then as one of the Big Five studios of Hollywood's Golden Age, although the production company struggled under his control and ultimately ceased operations in 1957. Through his interest in aviation and aerospace travel, Hughes formed the Hughes Aircraft Company in 1932, hiring numerous engineers, designers, and defense contractors. He spent the rest of the 1930s and much of the 1940s setting multiple world air speed records and building the Hughes H-1 Racer (1935) and H-4 Hercules (the Spruce Goose, 1947), the latter being the largest flying boat in history and having the longest wingspan of any aircraft from the time it was built until 2019. He acquired and expanded Trans World Airlines and later acquired Air West, renaming it Hughes Airwest. Hughes won the Harmon Trophy on two occasions (1936 and 1938), the Collier Trophy (1938), and the Congressional Gold Medal (1939) all for his achievements in aviation throughout the 1930s. He was inducted into the National Aviation Hall of Fame in 1973 and was included in Flying magazine's 2013 list of the 51 Heroes of Aviation, ranked at 25. 
During the 1960s and early 1970s, Hughes extended his financial empire to include several major businesses in Las Vegas, such as real estate, hotels, casinos, and media outlets. Known at the time as one of the most powerful men in the state of Nevada, he is largely credited with transforming Vegas into a more refined cosmopolitan city. After years of mental and physical decline, Hughes died of kidney failure in 1976, at the age of 70. Today, his legacy is maintained through the Howard Hughes Medical Institute and the Howard Hughes Corporation. Early life Howard Robard Hughes Jr. was the son of Allene Stone Gano (1883–1922) and of Howard R. Hughes Sr. (1869–1924), a successful inventor and businessman from Missouri. He had English, Welsh and some French Huguenot ancestry, and was a descendant of John Gano (1727–1804), the minister who allegedly baptized George Washington. His father patented (1909) the two-cone roller bit, which allowed rotary drilling for petroleum in previously inaccessible places. The senior Hughes made the shrewd and lucrative decision to commercialize the invention by leasing the bits instead of selling them, obtained several early patents, and founded the Hughes Tool Company in 1909. Hughes's uncle was the famed novelist, screenwriter, and film-director Rupert Hughes. A 1941 affidavit birth certificate of Hughes, signed by his aunt Annette Gano Lummis and by Estelle Boughton Sharp, states that he was born on December 24, 1905, in Harris County, Texas. However, his certificate of baptism, recorded on October 7, 1906, in the parish register of St. John's Episcopal Church in Keokuk, Iowa, listed his date of birth as September 24, 1905, without any reference to the place of birth. At a young age, Hughes showed interest in science and technology. In particular, he had great engineering aptitude and built Houston's first "wireless" radio transmitter at age 11. 
He went on to be one of the first licensed ham-radio operators in Houston, having the assigned callsign W5CY (originally 5CY). At 12, Hughes was photographed in the local newspaper, identified as the first boy in Houston to have a "motorized" bicycle, which he had built from parts from his father's steam engine. He was an indifferent student, with a liking for mathematics, flying, and mechanics. He took his first flying lesson at 14, and attended Fessenden School in Massachusetts in 1921. After a brief stint at The Thacher School, Hughes attended math and aeronautical engineering courses at Caltech. The red-brick house where Hughes lived as a teenager at 3921 Yoakum Blvd., Houston, still stands, now known as Hughes House on the grounds of the University of St. Thomas. His mother Allene died in March 1922 from complications of an ectopic pregnancy. Howard Hughes Sr. died of a heart attack in 1924. Their deaths apparently inspired Hughes to include the establishment of a medical research laboratory in the will that he signed in 1925 at age 19. Howard Sr.'s will had not been updated since Allene's death, and Hughes inherited 75% of the family fortune. On his 19th birthday, Hughes was declared an emancipated minor, enabling him to take full control of his life. From a young age, Hughes became a proficient and enthusiastic golfer. He often scored near-par figures, played the game to a two-three handicap during his 20s, and for a time aimed for a professional golf career. He golfed frequently with top players, including Gene Sarazen. Hughes rarely played competitively and gradually gave up his passion for the sport to pursue other interests. Hughes played golf every afternoon at LA courses including the Lakeside Golf Club, Wilshire Country Club, or the Bel-Air Country Club. Partners included George Von Elm or Ozzie Carlton. After Hughes hurt himself in the late 1920s, his golfing tapered off, and after his F-11 crash, Hughes was unable to play at all. 
Hughes withdrew from Rice University shortly after his father's death. On June 1, 1925, he married Ella Botts Rice, daughter of David Rice and Martha Lawson Botts of Houston, and great-niece of William Marsh Rice, for whom Rice University was named. They moved to Los Angeles, where he hoped to make a name for himself as a filmmaker. They moved into the Ambassador Hotel, and Hughes proceeded to learn to fly a Waco, while simultaneously producing his first motion picture, Swell Hogan. Business career Hughes enjoyed a highly successful business career beyond engineering, aviation, and filmmaking; many of his career endeavors involved varying entrepreneurial roles. Entertainment Ralph Graves persuaded Hughes to finance a short film, Swell Hogan, which Graves had written and would star in. Hughes himself produced it. However, it was a disaster. After hiring a film editor to try to salvage it, he finally ordered that it be destroyed. His next two films, Everybody's Acting (1926) and Two Arabian Knights (1927), achieved financial success; the latter won the first Academy Award for Best Director of a comedy picture. The Racket (1928) and The Front Page (1931) were also nominated for Academy Awards. Hughes spent $3.5 million to make the flying film Hell's Angels (1930). Hell's Angels received one Academy Award nomination for Best Cinematography. He produced another hit, Scarface (1932), a production delayed by censors' concern over its violence. The Outlaw premiered in 1943, but was not released nationally until 1946. The film featured Jane Russell, who received considerable attention from industry censors, this time owing to her revealing costumes. RKO From the 1940s to the late 1950s, the Hughes Tool Company ventured into the film industry when it obtained partial ownership of the RKO companies, which included RKO Pictures, RKO Studios, a chain of movie theaters known as RKO Theatres and a network of radio stations known as the RKO Radio Network. 
In 1948, Hughes gained control of RKO, a struggling major Hollywood studio, by acquiring the 929,000 shares owned by Floyd Odlum's Atlas Corporation, for $8,825,000. Within weeks of acquiring the studio, Hughes dismissed 700 employees. Production dwindled to 9 pictures during the first year of Hughes's control; previously RKO had averaged 30 per year. Production shut down for six months, during which time Hughes ordered investigations of each employee who remained with RKO as far as their political leanings were concerned. Only after ensuring that the stars under contract to RKO had no suspect affiliations would Hughes approve completed pictures to be sent back for re-shooting. This was especially true of the women under contract to RKO at that time. If Hughes felt that his stars did not properly represent the political views of his liking or if a film's anti-communist politics were not sufficiently clear, he pulled the plug. In 1952, an abortive sale to a Chicago-based group connected to the mafia with no experience in the industry disrupted studio operations at RKO even further. In 1953, Hughes became involved with a high-profile lawsuit as part of the settlement of the United States v. Paramount Pictures, Inc. Antitrust Case. As a result of the hearings, the shaky status of RKO became increasingly apparent. A steady stream of lawsuits from RKO's minority shareholders had grown to become extremely annoying to Hughes. They had accused him of financial misconduct and corporate mismanagement. Since Hughes wanted to focus primarily on his aircraft manufacturing and TWA holdings during the years of the Korean War of 1950 to 1953, Hughes offered to buy out all other stockholders in order to dispense with their distractions. By the end of 1954, Hughes had gained near-total control of RKO at a cost of nearly $24 million, becoming the first sole owner of a major Hollywood studio since the silent-film era. 
Six months later Hughes sold the studio to the General Tire and Rubber Company for $25 million. Hughes retained the rights to pictures that he had personally produced, including those made at RKO. He also retained Jane Russell's contract. For Howard Hughes, this was the virtual end of his 25-year involvement in the motion-picture industry. However, his reputation as a financial wizard emerged unscathed. During that time period, RKO became known as the home of classic film noir productions, thanks in part to the limited budgets required to make such films during Hughes's tenure. Hughes reportedly walked away from RKO having made $6.5 million in personal profit. According to Noah Dietrich, Hughes made a $10,000,000 profit from the sale of the theaters and made a profit of $1,000,000 from his 7-year ownership of RKO. Real estate According to Noah Dietrich, "Land became a principal asset for the Hughes empire". Hughes acquired 1,200 acres in Culver City for Hughes Aircraft, bought 7 sections [4,480 acres] in Tucson for his Falcon missile plant, and purchased 25,000 acres near Las Vegas. In 1968, the Hughes Tool Company purchased the North Las Vegas Air Terminal. Originally known as Summa Corporation, the Howard Hughes Corporation formed in 1972 when the oil-tools business of Hughes Tool Company, then owned by Howard Hughes Jr., floated on the New York Stock Exchange under the "Hughes Tool" name. This forced the remaining businesses of the "original" Hughes Tool to adopt a new corporate name: "Summa". The name "Summa" (Latin for "highest") was adopted without the approval of Hughes himself, who preferred to keep his own name on the business, and suggested "HRH Properties" (for Hughes Resorts and Hotels, and also his own initials). In 1988 Summa announced plans for Summerlin, a master-planned community named for the paternal grandmother of Howard Hughes, Jean Amelia Summerlin. 
Initially staying in the Desert Inn, Hughes refused to vacate his room, and instead decided to purchase the entire hotel. Hughes extended his financial empire to include Las Vegas real estate, hotels, and media outlets, spending an estimated $300 million, and using his considerable powers to take over many of the well-known hotels, especially the venues connected with organized crime. He quickly became one of the most powerful men in Las Vegas. He was instrumental in changing the image of Las Vegas from its Wild West roots into a more refined cosmopolitan city. In addition to the Desert Inn, Hughes would eventually own the Sands, Frontier, Silver Slipper, Castaways, and Landmark, as well as Harold's Club in Reno. He became the largest employer in Nevada. Aviation and aerospace Another portion of Hughes's commercial interests involved aviation, airlines, and the aerospace and defense industries. A lifelong aircraft enthusiast and pilot, Hughes survived four airplane accidents: one in a Thomas-Morse Scout while filming Hell's Angels, one while setting the airspeed record in the Hughes Racer, one at Lake Mead in 1943, and the near-fatal crash of the Hughes XF-11 in 1946. At Rogers Airport in Los Angeles, he learned to fly from pioneer aviators, including Moye Stephens and J.B. Alexander. He set many world records and commissioned the construction of custom aircraft for himself while heading Hughes Aircraft at the airport in Glendale, CA. Operating from there, the most technologically important aircraft he commissioned was the Hughes H-1 Racer. On September 13, 1935, Hughes, flying the H-1, set the landplane airspeed record of over his test course near Santa Ana, California (Giuseppe Motta had reached 362 mph in 1929 and George Stainforth 407.5 mph in 1931, both in seaplanes). This marked the last time in history that an aircraft built by a private individual set the world airspeed record. 
A year and a half later, on January 19, 1937, flying the same H-1 Racer fitted with longer wings, Hughes set a new transcontinental airspeed record by flying non-stop from Los Angeles to Newark in seven hours, 28 minutes, and 25 seconds (beating his own previous record of nine hours, 27 minutes). His average ground-speed over the flight was . The H-1 Racer featured a number of design innovations: it had retractable landing gear (as Boeing Monomail had five years before), and all rivets and joints set flush into the body of the aircraft to reduce drag. The H-1 Racer is thought to have influenced the design of a number of World War II fighters such as the Mitsubishi A6M Zero, Focke-Wulf Fw 190, and F8F Bearcat, although that has never been reliably confirmed. In 1975 the H-1 Racer was donated to the Smithsonian. Round-the-world flight On July 14, 1938, Hughes set another record by completing a flight around the world in just 91 hours (three days, 19 hours, 17 minutes), beating the previous record of 186 hours (7 days, 18 hours, 49 minutes) set in 1933 by Wiley Post in a single-engine Lockheed Vega by almost four days. Hughes returned home ahead of photographs of his flight. Taking off from New York City, Hughes continued to Paris, Moscow, Omsk, Yakutsk,  Fairbanks, and Minneapolis, then returning to New York City. For this flight he flew a  Lockheed 14 Super Electra (NX18973, a twin-engine transport with a four-man crew) fitted with the latest radio and navigational equipment. Harry Connor was the co-pilot, Thomas Thurlow the navigator, Richard Stoddart the engineer, and Ed Lund the mechanic. Hughes wanted the flight to be a triumph of American aviation technology, illustrating that safe, long-distance air travel was possible. Albert Lodwick of Mystic, Iowa, provided organizational skills as the flight operations manager. 
While Hughes had previously been relatively obscure despite his wealth, being better known for dating Katharine Hepburn, New York City now gave him a ticker-tape parade in the Canyon of Heroes. Hughes and his crew were awarded the 1938 Collier Trophy for flying around the world in record time. He was awarded the Harmon Trophy in 1936 and 1938 for the record-breaking global circumnavigation. In 1938 the William P. Hobby Airport in Houston, Texas—known at the time as Houston Municipal Airport—was renamed after Hughes, but the name was changed back due to public outrage over naming the airport after a living person. Hughes also had a role in the design and financing of both the Boeing 307 Stratoliner and Lockheed L-049 Constellation. Other aviator awards include: the Bibesco Cup of the Fédération Aéronautique Internationale in 1938, the Octave Chanute Award in 1940, and a special Congressional Gold Medal in 1939 "in recognition of the achievements of Howard Hughes in advancing the science of aviation and thus bringing great credit to his country throughout the world". President Harry S. Truman sent the Congressional medal to Hughes after the F-11 crash. After his around-the-world flight, Hughes had declined to go to the White House to collect it. Hughes D-2 and XF-11 The Hughes D-2 was conceived in 1939 as a bomber with five crew members, powered by 42-cylinder Wright R-2160 Tornado engines. In the end, it appeared as two-seat fighter-reconnaissance aircraft designated the D-2A, powered by two Pratt & Whitney R-2800-49 engines. The aircraft was constructed using the Duramold process. The prototype was brought to Harper's Dry Lake in California in great secrecy in 1943 and first flew on June 20 of that year. Acting on a recommendation of the president's son, Colonel Elliott Roosevelt, who had become friends with Hughes, in September 1943 the USAAF ordered 100 of a reconnaissance development of the D-2, known as the F-11. 
Hughes then attempted to get the military to pay for the development of the D-2. In November 1944, the hangar containing the D-2A was reportedly hit by lightning and the aircraft was destroyed. The D-2 design was abandoned but led to the extremely controversial Hughes XF-11. The XF-11 was a large, all-metal, two-seat reconnaissance aircraft, powered by two Pratt & Whitney R-4360-31 engines, each driving a set of contra-rotating propellers. Only two prototypes were completed; the second one with a single propeller per side. Fatal crash of the Sikorsky S-43 In the spring of 1943 Hughes spent nearly a month in Las Vegas, test-flying his Sikorsky S-43 amphibious aircraft, practising touch-and-go landings on Lake Mead in preparation for flying the H-4 Hercules. The weather conditions at the lake during the day were ideal and he enjoyed Las Vegas at night. On May 17, 1943, Hughes flew the Sikorsky from California, carrying two CAA aviation inspectors, two of his employees, and actress Ava Gardner. Hughes dropped Gardner off in Las Vegas and proceeded to Lake Mead to conduct qualifying tests in the S-43. The test flight did not go well. The Sikorsky crashed into Lake Mead, killing CAA inspector Ceco Cline and Hughes's employee Richard Felt. Hughes suffered a severe gash on the top of his head when he hit the upper control panel and had to be rescued by one of the others on board. Hughes paid divers $100,000 to raise the aircraft and later spent more than $500,000 restoring it. Hughes sent the plane to Houston, where it remained for many years. Near-fatal crash of the XF-11 Hughes was involved in another near-fatal aircraft accident on July 7, 1946, while performing the first flight of the prototype U.S. Army Air Forces reconnaissance aircraft, the XF-11, near Hughes airfield at Culver City, California. An oil leak caused one of the contra-rotating propellers to reverse pitch, causing the aircraft to yaw sharply and lose altitude rapidly. 
Hughes attempted to save the aircraft by landing it at the Los Angeles Country Club golf course, but just seconds before reaching the course, the XF-11 started to drop dramatically and crashed in the Beverly Hills neighborhood surrounding the country club. When the XF-11 finally came to a halt after destroying three houses, the fuel tanks exploded, setting fire to the aircraft and a nearby home at 808 North Whittier Drive owned by Lt Col. Charles E. Meyer. Hughes managed to pull himself out of the flaming wreckage but lay beside the aircraft until rescued by Marine Master Sgt. William L. Durkin, who happened to be in the area visiting friends. Hughes sustained significant injuries in the crash, including a crushed collar bone, multiple cracked ribs, crushed chest with collapsed left lung, shifting his heart to the right side of the chest cavity, and numerous third-degree burns. An oft-told story said that Hughes sent a check to the Marine weekly for the remainder of his life as a sign of gratitude. Noah Dietrich asserted that Hughes did send Durkin $200 a month, but Durkin's daughter denied knowing that he received any money from Hughes. Despite his physical injuries, Hughes took pride that his mind was still working. As he lay in his hospital bed, he decided that he did not like the bed's design. He called in plant engineers to design a customized bed, equipped with hot and cold running water, built in six sections, and operated by 30 electric motors, with push-button adjustments. Hughes designed the hospital bed specifically to alleviate the pain caused by moving with severe burn injuries. Although he never used the bed that he designed, Hughes's bed served as a prototype for the modern hospital bed. Hughes's doctors considered his recovery almost miraculous. Many attribute his long-term dependence on opiates to his use of codeine as a painkiller during his convalescence. 
Yet Dietrich asserts that Hughes recovered the "hard way—no sleeping pills, no opiates of any kind". The trademark mustache he wore afterward hid a scar on his upper lip resulting from the accident.

H-4 Hercules

The War Production Board (not the military) originally contracted with Henry Kaiser and Hughes to produce the gigantic HK-1 Hercules flying boat for use during World War II to transport troops and equipment across the Atlantic, as an alternative to seagoing troop transport ships that were vulnerable to German U-boats. The military services opposed the project, thinking it would siphon resources from higher-priority programs, but Hughes's powerful allies in Washington, D.C., advocated for it. After disputes, Kaiser withdrew from the project and Hughes elected to continue it as the H-4 Hercules. However, the aircraft was not completed until after the end of World War II. The Hercules was the world's largest flying boat, the largest aircraft made from wood, and had the longest wingspan of any aircraft built up to that time. (The Hercules is no longer the longest or heaviest aircraft ever built, having been surpassed by the Antonov An-225 Mriya, produced in 1985.) The Hercules flew only once, covering about one mile (1.6 km) just above the water with Hughes at the controls, on November 2, 1947. Critics nicknamed the Hercules the Spruce Goose, though it was made largely from birch, not spruce; wood was used in place of aluminum because the contract required that Hughes build the aircraft of "non-strategic materials". It was built in Hughes's Westchester, California, facility. In 1947, Howard Hughes was summoned to testify before the Senate War Investigating Committee to explain why the H-4 development had been so troubled, and why $22 million had produced only two prototypes of the XF-11. General Elliott Roosevelt and numerous other USAAF officers were also called to testify in hearings that transfixed the nation during August and November 1947.
In hotly disputed testimony over TWA's route awards and malfeasance in the defense-acquisition process, Hughes turned the tables on his main interlocutor, Maine Senator Owen Brewster, and the hearings were widely interpreted as a Hughes victory. After being displayed at the harbor of Long Beach, California, the Hercules was moved to McMinnville, Oregon, where it is displayed at the Evergreen Aviation & Space Museum. On November 4, 2017, the 70th anniversary of the only flight of the H-4 Hercules was celebrated at the Evergreen Aviation & Space Museum, with Hughes's paternal cousin Michael Wesley Summerlin and Brian Palmer Evans, son of Hughes radio-technology pioneer Dave Evans, taking their positions in the recreation of a photo previously taken of Hughes, Dave Evans, and Joe Petrali on board the H-4 Hercules.

Hughes Aircraft

In 1932, Hughes founded the Hughes Aircraft Company, a division of Hughes Tool Company, in a rented corner of a Lockheed Aircraft Corporation hangar in Burbank, California, to build the H-1 racer. Shortly after founding the company, Hughes used the alias "Charles Howard" to accept a job as a baggage handler for American Airlines. He was soon promoted to co-pilot. Hughes continued to work for American Airlines until his real identity was discovered. During and after World War II, Hughes fashioned his company into a major defense contractor. The Hughes Helicopters division started in 1947, when helicopter manufacturer Kellett sold its latest design to Hughes for production. Hughes Aircraft became a major American aerospace and defense contractor, manufacturing numerous technology-related products that included spacecraft, military aircraft, radar systems, electro-optical systems, the first working laser, aircraft computer systems, missile systems, ion-propulsion engines (for space travel), commercial satellites, and other electronics systems. In 1948, Hughes created a new division of Hughes Aircraft: the Hughes Aerospace Group.
The Hughes Space and Communications Group and the Hughes Space Systems Division were later spun off in 1948 to form their own divisions and ultimately became the Hughes Space and Communications Company in 1961. In 1953, Howard Hughes gave all his stock in the Hughes Aircraft Company to the newly formed Howard Hughes Medical Institute, thereby turning the aerospace and defense contractor into a tax-exempt charitable organization. The Howard Hughes Medical Institute sold Hughes Aircraft in 1985 to General Motors for $5.2 billion. In 1997, General Motors sold Hughes Aircraft to Raytheon, and in 2000 sold Hughes Space & Communications to Boeing. A combination of Boeing, GM, and Raytheon acquired the Hughes Research Laboratories, which focused on advanced developments in microelectronics, information and systems sciences, materials, sensors, and photonics; its work spans from basic research to product delivery, with particular emphasis on high-performance integrated circuits, high-power lasers, antennas, networking, and smart materials.

Airlines

In 1939, at the urging of Jack Frye, president of Transcontinental & Western Airlines, the predecessor of Trans World Airlines (TWA), Hughes began to quietly purchase a majority share of TWA stock; he took a controlling interest in the airline by 1944. Although he never had an official position with TWA, Hughes handpicked the board of directors, which included Noah Dietrich, and often issued orders directly to airline staff. Hughes Tool Co. purchased the first six Stratoliners Boeing manufactured. Hughes used one personally, and he let TWA operate the other five. Hughes is commonly credited as the driving force behind the Lockheed Constellation airliner, which Hughes and Frye ordered in 1939 as a long-range replacement for TWA's fleet of Boeing 307 Stratoliners. Hughes personally financed TWA's acquisition of 40 Constellations for $18 million, the largest aircraft order in history up to that time.
The Constellations were among the highest-performing commercial aircraft of the late 1940s and 1950s and allowed TWA to pioneer nonstop transcontinental service. During World War II, Hughes leveraged political connections in Washington to obtain rights for TWA to serve Europe, making it the only U.S. carrier with a combination of domestic and transatlantic routes. After the announcement of the Boeing 707, Hughes opted to pursue a more advanced jet aircraft for TWA and approached Convair in late 1954. Convair proposed two concepts to Hughes, but Hughes was unable to decide which concept to adopt, and Convair eventually abandoned its initial jet project after the mockups of the 707 and Douglas DC-8 were unveiled. Even after competitors such as United Airlines, American Airlines, and Pan American World Airways had placed large orders for the 707, Hughes placed orders for only eight 707s through the Hughes Tool Company and forbade TWA from using the aircraft. After finally beginning to reserve 707 orders in 1956, Hughes embarked on a plan to build his own "superior" jet aircraft for TWA, applied for CAB permission to sell Hughes aircraft to TWA, and began negotiations with the state of Florida to build a manufacturing plant there. However, he abandoned this plan around 1958 and, in the interim, negotiated new contracts for 707 and Convair 880 aircraft and engines totaling $400 million. The financing of TWA's jet orders precipitated the end of Hughes's relationship with Noah Dietrich, and ultimately Hughes's ouster from control of TWA. Hughes did not have enough cash on hand or future cash flow to pay for the orders and did not immediately seek bank financing. Hughes's refusal to heed Dietrich's financing advice led to a major rift between the two by the end of 1956. Hughes believed that Dietrich wished to have him committed as mentally incompetent, although the evidence of this is inconclusive.
Dietrich resigned by telephone in May 1957 after repeated requests for stock options, which Hughes refused to grant, and with no further progress on the jet financing. As Hughes's mental state worsened, he ordered various tactics to delay payments to Boeing and Convair; his behavior led TWA's banks to insist that he be removed from management as a condition for further financing. In 1960, Hughes was ultimately forced out of the management of TWA, although he continued to own 78% of the company. In 1961, TWA filed suit against Hughes Tool Company, claiming that the latter had violated antitrust law by using TWA as a captive market for aircraft trading. The claim was largely dependent upon obtaining testimony from Hughes himself. Hughes went into hiding and refused to testify. A default judgment was issued against Hughes Tool Company for $135 million in 1963, but it was overturned by the Supreme Court of the United States in 1973 on the basis that the transactions at issue had been approved by the Civil Aeronautics Board and were therefore immune from antitrust suit. In 1966, Hughes was forced to sell his TWA shares; the sale brought him $546,549,771. Hughes acquired control of Boston-based Northeast Airlines in 1962. However, the airline's lucrative route authority between major northeastern cities and Miami was terminated by a CAB decision around the time of the acquisition, and Hughes sold control of the company to a trustee in 1964. Northeast went on to merge with Delta Air Lines in 1972. In 1970, Hughes acquired San Francisco-based Air West and renamed it Hughes Airwest. Air West had been formed in 1968 by the merger of Bonanza Air Lines, Pacific Air Lines, and West Coast Airlines, all of which operated in the western U.S. By the late 1970s, Hughes Airwest operated an all-jet fleet of Boeing 727-200, Douglas DC-9-10, and McDonnell Douglas DC-9-30 jetliners serving an extensive route network in the western U.S., with flights to Mexico and western Canada as well.
By 1980, the airline's route system reached as far east as Houston (Hobby Airport) and Milwaukee, with a total of 42 destinations served. Hughes Airwest was then acquired by and merged into Republic Airlines (1979–1986) in late 1980. Republic was subsequently acquired by and merged into Northwest Airlines, which in turn was ultimately merged into Delta Air Lines in 2008.

Business with David Charnay

Hughes formed numerous business partnerships through industrialist and producer David Charnay. Their friendship and many partnerships began with the film The Conqueror, first released to the public in 1956. The film proved controversial both as a critical flop and for its radioactive filming location in St. George, Utah; Hughes eventually bought up nearly every copy of the film he could, only to watch it at home repeatedly for many nights in a row. Charnay later bought Four Star, the film and television production company that produced The Conqueror. Hughes and Charnay's most publicized dealings concerned a contested Air West leveraged buyout. Charnay led the buyout group through which Hughes and their partners acquired Air West. Hughes, Charnay, and three others were indicted. The indictment, brought by U.S. Attorney DeVoe Heaton, accused the group of conspiring to drive down the stock price of Air West in order to pressure company directors to sell to Hughes. The charges were dismissed after a judge determined that the indictment failed to allege any illegal action on the part of Hughes, Charnay, or the other accused. Thompson, the federal judge who dismissed the charges, called the indictment one of the worst claims he had ever seen. The charges were then refiled by Heaton's assistant, Dean Vernon.
On November 13, 1974, the federal judge ruled on the refiled charges, remarking that the case suggested a "reprehensible misuse of the power of great wealth" but that, in his judicial opinion, "no crime had been committed." The aftermath of the Air West deal was later settled with the SEC by paying former stockholders for alleged losses from the sale of their investment in Air West stock. As noted above, Air West was subsequently renamed Hughes Airwest. During the long pause that followed the dismissed charges against Hughes, Charnay, and their partners, Hughes died mid-flight on the way to Houston from Acapulco; no further indictments were filed after his death.

Howard Hughes Medical Institute

In 1953, Hughes launched the Howard Hughes Medical Institute in Miami, Florida (currently located in Chevy Chase, Maryland), with the express goal of basic biomedical research, including trying to understand, in Hughes's words, the "genesis of life itself", reflecting his lifelong interest in science and technology. Hughes's first will, which he signed in 1925 at the age of 19, stipulated that a portion of his estate should be used to create a medical institute bearing his name. When a major battle with the IRS loomed ahead, Hughes gave all his stock in the Hughes Aircraft Company to the institute, thereby turning the aerospace and defense contractor into a for-profit subsidiary of a fully tax-exempt charity. Hughes's internist, Verne Mason, who treated Hughes after his 1946 aircraft crash, was chairman of the institute's medical advisory committee. The Howard Hughes Medical Institute's new board of trustees sold Hughes Aircraft in 1985 to General Motors for $5.2 billion, allowing the institute to grow dramatically. In 1954, Hughes transferred Hughes Aircraft to the foundation, which paid Hughes Tool Co. $18,000,000 for the assets. The foundation leased the land from Hughes Tool Co., which then subleased it to Hughes Aircraft Corp.
The difference in rent, $2,000,000 per year, became the foundation's working capital. The deal was the topic of a protracted legal battle between Hughes and the Internal Revenue Service, which Hughes ultimately won. After his death in 1976, many thought that the balance of Hughes's estate would go to the institute, although it was ultimately divided among his cousins and other heirs, given the lack of a will to the contrary. The HHMI was the fourth-largest private organization and one of the largest devoted to biological and medical research, with an endowment of $20.4 billion.

Glomar Explorer and the taking of K-129

In 1972, during the Cold War, Hughes was approached by the CIA through his longtime partner, David Charnay, to help secretly recover the Soviet submarine K-129, which had sunk near Hawaii four years earlier. Hughes's involvement provided the CIA with a plausible cover story: conducting expensive civilian marine research at extreme depths and mining undersea manganese nodules. The recovery plan used the special-purpose salvage vessel Glomar Explorer. In the summer of 1974, Glomar Explorer attempted to raise the Soviet vessel. However, during the recovery a mechanical failure in the ship's grapple caused half of the submarine to break off and fall to the ocean floor. This section is believed to have held many of the most sought-after items, including its code book and nuclear missiles. Two nuclear-tipped torpedoes and some cryptographic machines were recovered, along with the bodies of six Soviet submariners, who were subsequently given a formal burial at sea in a filmed ceremony. The operation, known as Project Azorian (but incorrectly referred to by the press as Project Jennifer), became public in February 1975 after secret documents were released that had been obtained by burglars of Hughes's headquarters in June 1974.
Although he lent his name and his company's resources to the operation, Hughes and his companies had no operational involvement in the project. The Glomar Explorer was eventually acquired by Transocean and was sent to the scrap yard in 2015 during a large decline in oil prices.

Personal life

Early romances

In 1929, Hughes's wife of four years, Ella, returned to Houston and filed for divorce. Hughes dated many famous women, including Joan Crawford, Billie Dove, Faith Domergue, Bette Davis, Yvonne De Carlo, Ava Gardner, Olivia de Havilland, Katharine Hepburn, Hedy Lamarr, Ginger Rogers, Janet Leigh, Pat Sheehan, Mamie Van Doren and Gene Tierney. He also proposed to Joan Fontaine several times, according to her autobiography No Bed of Roses. Jean Harlow accompanied him to the premiere of Hell's Angels, but Noah Dietrich wrote many years later that the relationship was strictly professional, as Hughes disliked Harlow personally. In his 1971 book, Howard: The Amazing Mr. Hughes, Dietrich said that Hughes genuinely liked and respected Jane Russell, but never sought romantic involvement with her. According to Russell's autobiography, however, Hughes once tried to bed her after a party. Russell (who was married at the time) refused him, and Hughes promised it would never happen again. The two maintained a professional and private friendship for many years. Hughes remained good friends with Tierney, who, after his failed attempts to seduce her, was quoted as saying, "I don't think Howard could love anything that did not have a motor in it". Later, when Tierney's daughter Daria was born deaf and blind and with a severe learning disability because of Tierney's exposure to rubella during her pregnancy, Hughes saw to it that Daria received the best medical care and paid all expenses.

Luxury yacht

In 1933, Hughes purchased a luxury steam yacht named the Rover, previously owned by Scottish shipping magnate Lord Inchcape. Of the purchase, Hughes said:
"I have never seen the Rover but bought it on the blueprints, photographs and the reports of Lloyd's surveyors. My experience is that the English are the most honest race in the world." Hughes renamed the yacht Southern Cross and later sold her to Swedish entrepreneur Axel Wenner-Gren. 1936 automobile accident On July 11, 1936, Hughes struck and killed a pedestrian named Gabriel S. Meyer with his car at the corner of 3rd Street and Lorraine in Los Angeles. After the crash, Hughes was taken to the hospital and certified as sober, but an attending doctor made a note that Hughes had been drinking. A witness to the crash told police that Hughes was driving erratically and too fast and that Meyer had been standing in the safety zone of a streetcar stop. Hughes was booked on suspicion of negligent homicide and held overnight in jail until his attorney, Neil S. McCarthy, obtained a writ of habeas corpus for his release pending a coroner's inquest. By the time of the coroner's inquiry, however, the witness had changed his story and claimed that Meyer had moved directly in front of Hughes's car. Nancy Bayly (Watts), who was in the car with Hughes at the time of the crash, corroborated this version of the story. On July 16, 1936, Hughes was held blameless by a coroner's jury at the inquest into Meyer's death. Hughes told reporters outside the inquiry, "I was driving slowly and a man stepped out of the darkness in front of me". Marriage to Jean Peters On January 12, 1957, Hughes married actress Jean Peters at a small hotel in Tonopah, Nevada. The couple met in the 1940s, before Peters became a film actress. They had a highly publicized romance in 1947 and there was talk of marriage, but she said she could not combine it with her career. Some later claimed that Peters was "the only woman [Hughes] ever loved", and he reportedly had his security officers follow her everywhere even when they were not in a relationship. 
Such reports were confirmed by actor Max Showalter, who became a close friend of Peters while shooting Niagara (1953). Showalter told an interviewer that because he frequently met with Peters, Hughes's men threatened to ruin his career if he did not leave her alone.

Connections to Richard Nixon and Watergate

Shortly before the 1960 presidential election, Richard Nixon was alarmed when it was revealed that his brother, Donald, had received a $205,000 loan from Hughes. It has long been speculated that Nixon's drive to learn what the Democrats were planning in 1972 was based in part on his belief that the Democrats knew about a later bribe that his friend Bebe Rebozo had received from Hughes after Nixon took office. In late 1971, Donald Nixon was collecting intelligence for his brother in preparation for the upcoming presidential election. One of his sources was John H. Meier, a former business adviser of Hughes who had also worked with Democratic National Committee Chairman Larry O'Brien. Meier, in collaboration with former Vice President Hubert Humphrey and others, wanted to feed misinformation to the Nixon campaign. Meier told Donald that he was sure the Democrats would win the election because Larry O'Brien had a great deal of information on Richard Nixon's illicit dealings with Howard Hughes that had never been released; O'Brien did not actually have any such information, but Meier wanted Nixon to think that he did. Donald told his brother that O'Brien was in possession of damaging Hughes information that could destroy his campaign. Terry Lenzner, who was the chief investigator for the Senate Watergate Committee, speculates that it was Nixon's desire to know what O'Brien knew about Nixon's dealings with Hughes that may have partially motivated the Watergate break-in.

Last years and death

Physical and mental decline

Hughes was widely considered eccentric and is thought to have suffered from severe obsessive-compulsive disorder (OCD).
Dietrich wrote that Hughes always ate the same thing for dinner: a New York strip steak cooked medium rare, a dinner salad, and peas, but only the smaller ones, pushing the larger ones aside. For breakfast, Hughes wanted his eggs cooked the way his family cook, Lily, made them. Hughes had a "phobia about germs", and "his passion for secrecy became a mania." While directing The Outlaw, Hughes became fixated on a small flaw in one of Jane Russell's blouses, claiming that the fabric bunched up along a seam and gave the appearance of two nipples on each breast. He wrote a detailed memorandum to the crew on how to fix the problem. Richard Fleischer, who directed His Kind of Woman with Hughes as executive producer, wrote at length in his autobiography about the difficulty of dealing with the tycoon. In his book, Just Tell Me When to Cry, Fleischer explained that Hughes was fixated on trivial details and was alternately indecisive and obstinate. He also revealed that Hughes's unpredictable mood swings made him wonder if the film would ever be completed. In 1958, Hughes told his aides that he wanted to screen some movies at a film studio near his home. He stayed in the studio's darkened screening room for more than four months, never leaving. He ate only chocolate bars and chicken and drank only milk, and was surrounded by dozens of boxes of Kleenex that he continuously stacked and rearranged. He wrote detailed memos to his aides giving them explicit instructions neither to look at him nor speak to him unless spoken to. Throughout this period, Hughes sat fixated in his chair, often naked, continually watching movies. When he finally emerged in the summer of 1958, his hygiene was terrible. He had neither bathed nor cut his hair and nails for weeks; this may have been due to allodynia, which results in a pain response to stimuli that would normally not cause pain.
After the screening room incident, Hughes moved into a bungalow at the Beverly Hills Hotel, where he also rented rooms for his aides, his wife, and numerous girlfriends. He would sit naked in his bedroom with a pink hotel napkin placed over his genitals, watching movies. This may have been because Hughes found the touch of clothing painful due to allodynia. He may have watched movies to distract himself from his pain—a common practice among patients with intractable pain, especially those who do not receive adequate treatment. In one year, Hughes spent an estimated $11 million at the hotel. Hughes began purchasing restaurant chains and four-star hotels that had been founded in Texas, including, if only for a short period, many lesser-known franchises now out of business. He placed ownership of the restaurants with the Howard Hughes Medical Institute, and all licenses were resold shortly after. Another time, he became obsessed with the 1968 film Ice Station Zebra and had it run on a continuous loop in his home. According to his aides, he watched it 150 times. Feeling guilty about the commercial and critical failure of his film The Conqueror, as well as rumors that its filming location had been toxic, he bought every copy of the film for $12 million and watched it on repeat. Paramount Pictures acquired the rights to the film in 1979, three years after his death. Hughes insisted on using tissues to pick up objects to insulate himself from germs. He would also notice dust, stains, or other imperfections on people's clothes and demand that they take care of them. Once one of the most visible men in America, Hughes ultimately vanished from public view, although tabloids continued to follow rumors of his behavior and whereabouts. He was reported to be terminally ill, mentally unstable, or even dead. Injuries from numerous aircraft crashes caused Hughes to spend much of his later life in pain, and he eventually became addicted to codeine, which he injected intramuscularly.
Hughes had his hair cut and nails trimmed only once a year, likely because of the pain from RSD/CRPS resulting from his plane crashes. He also stored his urine in bottles.

Later years in Las Vegas

The wealthy and aging Hughes, accompanied by his entourage of personal aides, began moving from one hotel to another, always taking up residence in the top-floor penthouse. In the last ten years of his life, 1966 to 1976, Hughes lived in hotels in many cities, including Beverly Hills, Boston, Las Vegas, Nassau, Freeport and Vancouver. On November 24, 1966 (Thanksgiving Day), Hughes arrived in Las Vegas by railroad car and moved into the Desert Inn. Because he refused to leave the hotel, and to avoid further conflicts with the owners, Hughes bought the Desert Inn in early 1967. The hotel's eighth floor became the nerve center of Hughes's empire and the ninth-floor penthouse became his personal residence. Between 1966 and 1968, he bought several other hotel-casinos, including the Castaways, New Frontier, the Landmark Hotel and Casino, and the Sands. He bought the small Silver Slipper casino for the sole purpose of moving its trademark neon silver slipper, which was visible from his bedroom and had apparently kept him awake at night. After Hughes left the Desert Inn, hotel employees discovered that his drapes had not been opened during the time he lived there and had rotted through. Hughes wanted to change the image of Las Vegas to something more glamorous. He wrote in a memo to an aide, "I like to think of Las Vegas in terms of a well-dressed man in a dinner jacket and a beautifully jeweled and furred female getting out of an expensive car." Hughes bought several local television stations (including KLAS-TV). Eventually, the brain trauma from Hughes's previous accidents, the effects of the neurosyphilis diagnosed in 1932, and his undiagnosed obsessive-compulsive disorder considerably affected his decision-making.
A small panel, unofficially dubbed "The Mormon Mafia" because of the many Latter-day Saints on the committee, was led by Frank William Gay and originally served as Hughes's "secret police", headquartered at 9000 Romaine. Over the next two decades, however, this group oversaw and controlled considerable business holdings, with the CIA dealing directly with Gay when it awarded the Hughes corporation a contract to recover a sunken Soviet submarine. In addition to supervising day-to-day business operations and Hughes's health, they also went to great pains to satisfy Hughes's every whim. For example, Hughes once became fond of Baskin-Robbins' banana nut ice cream, so his aides sought to secure a bulk shipment for him, only to discover that Baskin-Robbins had discontinued the flavor. They put in a request for the smallest amount the company could provide for a special order, 350 gallons (1,300 L), and had it shipped from Los Angeles. A few days after the order arrived, Hughes announced he was tired of banana nut and wanted only French vanilla ice cream. The Desert Inn ended up distributing free banana nut ice cream to casino customers for a year. In a 1996 interview, ex–Howard Hughes Chief of Nevada Operations Robert Maheu said, "There is a rumor that there is still some banana nut ice cream left in the freezer. It is most likely true." As an owner of several major Las Vegas businesses, Hughes wielded much political and economic influence in Nevada and elsewhere. During the 1960s and early 1970s, he disapproved of underground nuclear testing at the Nevada Test Site. Hughes was concerned about the risk from residual nuclear radiation and attempted to halt the tests. When the tests finally went through despite Hughes's efforts, the detonations were powerful enough that the entire hotel in which he was living trembled from the shock waves.
In two separate, last-ditch maneuvers, Hughes instructed his representatives to offer bribes of $1 million each to Presidents Lyndon B. Johnson and Richard Nixon. In 1970, Jean Peters filed for divorce. The two had not lived together for many years. Peters requested a lifetime alimony payment of $70,000 a year, adjusted for inflation, and waived all claims to Hughes's estate. Hughes offered her a settlement of over a million dollars, but she declined it. Hughes did not insist on a confidentiality agreement from Peters as a condition of the divorce. Aides reported that Hughes never spoke ill of her. She refused to discuss her life with Hughes and declined several lucrative offers from publishers and biographers. Peters would state only that she had not seen Hughes for several years before their divorce and had dealt with him only by phone. Hughes was living in the Intercontinental Hotel near Lake Managua in Nicaragua, seeking privacy and security, when a magnitude 6.5 earthquake damaged Managua in December 1972. As a precaution, Hughes moved to a rather large tent facing the hotel; after a few days, he moved to the Nicaraguan National Palace and stayed there as a guest of Anastasio Somoza Debayle before leaving for Florida on a private jet the following day. He subsequently moved into the penthouse at the Xanadu Princess Resort on Grand Bahama Island, which he had recently purchased, and lived there almost exclusively for the last four years of his life. Hughes spent a total of $300 million on his many properties in Las Vegas.

Autobiography hoax

In 1972, author Clifford Irving caused a media sensation when he claimed he had co-written an authorized Hughes autobiography. Irving claimed he and Hughes had corresponded through the United States mail, and offered as proof handwritten notes allegedly sent by Hughes. Publisher McGraw-Hill, Inc. was duped into believing the manuscript was authentic.
Hughes was so reclusive that he did not immediately publicly refute Irving's statement, leading many to believe that Irving's book was genuine. However, before the book's publication, Hughes finally denounced Irving in a teleconference attended by reporters Hughes knew personally: James Bacon of the Hearst papers, Marvin Miles of the Los Angeles Times, Vernon Scott of UPI, Roy Neal of NBC News, Gene Handsaker of AP, Wayne Thomas of the Chicago Tribune, and Gladwin Hill of the New York Times. The entire hoax finally unraveled. The United States Postal Inspection Service obtained a subpoena to force Irving to turn over samples of his handwriting. The USPS investigation led to Irving's indictment and subsequent conviction for using the postal service to commit fraud. He was incarcerated for 17 months. In 1974, the Orson Welles film F for Fake included a section on the Hughes autobiography hoax, leaving open the question of whether it was actually Hughes who took part in the teleconference (since so few people had actually heard or seen him in recent years). In 1977, The Hoax by Clifford Irving was published in the United Kingdom, telling his story of these events. The 2006 film The Hoax, starring Richard Gere, is also based on these events.

Death

Hughes is reported to have died on April 5, 1976, at 1:27 p.m. on board an aircraft, Learjet 24B N855W, owned by Robert Graf and piloted by Jeff Abrams. He was en route from his penthouse at the Acapulco Princess Hotel (now the Fairmont Acapulco Princess) in Mexico to the Methodist Hospital in Houston. His reclusiveness and possibly his drug use made him practically unrecognizable. His hair, beard, fingernails, and toenails were long, his tall frame was emaciated, and the FBI had to use fingerprints to conclusively identify the body. Howard Hughes's alias, John T. Conover, was used when his body arrived at a morgue in Houston on the day of his death. An autopsy recorded kidney failure as the cause of death.
An eighteen-month study of Hughes's drug abuse, commissioned for the estate, found that "someone administered a deadly injection of the painkiller to this comatose man... obviously needlessly and almost certainly fatal". He suffered from malnutrition and was covered in bedsores. While his kidneys were damaged, his other internal organs, including his brain, showed no visible damage or illness and were deemed perfectly healthy. X-rays revealed five broken-off hypodermic needles in the flesh of his arms. To inject codeine into his muscles, Hughes had used glass syringes with metal needles that easily became detached. Hughes is buried next to his parents at Glenwood Cemetery in Houston. Alleged survival Following his death, Hughes was the subject of several widely rebuked conspiracy theories claiming that he had faked his own death. A notable allegation came from retired Major General Mark Musick, Assistant Secretary of the Air Force, who claimed Hughes went on to live under an assumed identity, dying on November 15, 2001, in Troy, Alabama. Estate Approximately three weeks after Hughes's death, a handwritten will was found on the desk of an official of The Church of Jesus Christ of Latter-day Saints in Salt Lake City, Utah. The so-called "Mormon Will" gave $1.56 billion to various charitable organizations (including $625 million to the Howard Hughes Medical Institute), nearly $470 million to the upper management in Hughes's companies and to his aides, $156 million to first cousin William Lummis, and $156 million split equally between his two ex-wives Ella Rice and Jean Peters. A further $156 million was endowed to a gas-station owner, Melvin Dummar, who told reporters that in 1967, he found a disheveled and dirty man lying along U.S. Route 95, just north of Las Vegas. The man asked for a ride to Vegas. Dropping him off at the Sands Hotel, Dummar said the man told him that he was Hughes.
Dummar later claimed that days after Hughes's death a "mysterious man" appeared at his gas station, leaving an envelope containing the will on his desk. Unsure if the will was genuine and unsure of what to do, Dummar left the will at the LDS Church office. In 1978, a Nevada court ruled the Mormon Will a forgery and officially declared that Hughes had died intestate (without a valid will). Dummar's story was later adapted into Jonathan Demme's film Melvin and Howard in 1980. Hughes's $2.5 billion estate was eventually split in 1983 among 22 cousins, including William Lummis, who serves as a trustee of the Howard Hughes Medical Institute. The Supreme Court of the United States ruled that Hughes Aircraft was owned by the Howard Hughes Medical Institute, which sold it to General Motors in 1985 for $5.2 billion. The court rejected suits by the states of California and Texas that claimed they were owed inheritance tax. In 1984, Hughes's estate paid an undisclosed amount to Terry Moore, who claimed she and Hughes had secretly married on a yacht in international waters off Mexico in 1949 and never divorced. Moore never produced proof of a marriage, but her book, The Beauty and the Billionaire, became a bestseller. Awards Harmon Trophy (1936 and 1938) Collier Trophy (1938) Congressional Gold Medal (1939) Octave Chanute Award (1940) National Aviation Hall of Fame (1973) International Air & Space Hall of Fame (1987) Motorsports Hall of Fame of America (2018) Archive The moving image collection of Howard Hughes is held at the Academy Film Archive. The collection consists of over 200 items including 35mm and 16mm elements of feature films, documentaries, and television programs made or accumulated by Hughes. Filmography In popular culture Film In The Carpetbaggers (1964), the main character Jonas Cord (played by George Peppard) is loosely based on Howard Hughes. 
The James Bond film Diamonds Are Forever (1971) features a tall, Texan, reclusive billionaire character named Willard Whyte (played by Jimmy Dean) who operates his business empire from the penthouse of a Las Vegas hotel. Although he appears only late in the film, his habitual seclusion and his control of a major aerospace contracting firm are key elements of the movie's plot. Several sequences were actually filmed on location at The Landmark Hotel and Casino, which was owned by Hughes at the time. The Amazing Howard Hughes is a 1977 American made-for-television biographical film which aired as a mini-series on the CBS network, made a year after Hughes's death and based on Noah Dietrich's book Howard: The Amazing Mr. Hughes. Tommy Lee Jones plays Hughes. Melvin and Howard (1980), directed by Jonathan Demme and starring Jason Robards as Howard Hughes and Paul Le Mat as Melvin Dummar. The film won Academy Awards for Best Original Screenplay (Bo Goldman) and Best Supporting Actress (Mary Steenburgen). The film focuses on Melvin Dummar's claims of meeting Hughes in the Nevada desert and subsequent estate battles over his inclusion in Hughes's will. Critic Pauline Kael called the film "an almost flawless act of sympathetic imagination". In Tucker: The Man and His Dream (1988), Hughes (played by Dean Stockwell) figures in the plot by telling Preston Tucker to source steel and engines for Tucker's automobiles from a helicopter manufacturer in New York. The scene occurs in a hangar with the Hercules. In The Rocketeer, a 1991 American period superhero film from Walt Disney Pictures, the title character attracts the attention of Howard Hughes (played by Terry O'Quinn) and the FBI, who are hunting for a missing jet pack, as well as Nazi operatives. "Howard Hughes Documentary", broadcast in 1992 as an episode of the Time Machine documentary series, was introduced by Peter Graves, later released by A&E Home Video.
In Conspiracy Theory (1997), the character Jerry Fletcher (played by Mel Gibson) mentions one of his theories to a street vendor by saying, "Did you know that the whole Vietnam War was fought over a bet that Howard Hughes lost to Aristotle Onassis?" referring to his (Fletcher's) thoughts on the politics of that conflict. In The Aviator (2004), directed by Martin Scorsese, Hughes is portrayed by Leonardo DiCaprio. The film focuses on Hughes's personal life from the making of Hell's Angels through his successful flight of the Hercules or Spruce Goose. Critically acclaimed, it was nominated for 11 Academy Awards, winning five for Best Cinematography; Best Film Editing; Best Costume Design; Best Art Direction; and Best Actress in a Supporting Role for Cate Blanchett. The documentary Howard Hughes: The Real Aviator was broadcast in 2004 and went on to win the Grand Festival Award for Best Documentary at the 2004 Berkeley Video & Film Festival. In the 2005 animated film Robots, the character Mr. Bigweld (voiced by Mel Brooks), a reclusive inventor and owner of Bigweld Industries, is loosely based on Howard Hughes. The American Aviator: The Howard Hughes Story was broadcast in 2006 on the Biography Channel. It was later released to home media as a DVD with a copy of the full-length film The Outlaw starring Jane Russell. In Captain America: The First Avenger (2011), a plot-related prequel to Iron Man 2 (2010), Howard Stark (played by Dominic Cooper), father of Tony Stark (Iron Man), shows off his inventions of future technology, clearly embodying Hughes's persona and enthusiasm. His subsequent appearances in the TV series Marvel's Agent Carter further this persona, as well as depicting him as sharing the real Hughes's reputation as a womanizer. Stan Lee has noted that Tony, who shared several of these traits himself, was based on Hughes. Rules Don't Apply (2016), written and directed by Warren Beatty, features Beatty as Hughes from 1958 through 1964.
In the Dark Knight Trilogy, director Christopher Nolan's characterisation of Bruce Wayne is heavily inspired by Hughes's perceived lifestyle – from a playboy in Batman Begins to a recluse in The Dark Knight Rises. Nolan is reported to have integrated his original material intended for a shelved Hughes biopic into the trilogy. Games The character of Andrew Ryan in the 2007 video game BioShock is loosely based on Hughes. Ryan is a billionaire industrialist in post-World War II America who, seeking to avoid governments, religions, and other "parasitic" influences, ordered the secret construction of an underwater city, Rapture. Years later, when Ryan's vision for Rapture falls into dystopia, he hides himself away and uses armies of mutated humans, "Splicers", to defend himself and fight against those trying to take over his city, including the player-character. In L.A. Noire, Hughes makes an appearance presenting his Hercules H-4 aircraft in the game's opening scene. The H-4 is later central to the plot of the DLC arson case "Nicholson Electroplating". In Fallout: New Vegas, the character of Robert Edwin House, a wealthy business magnate and entrepreneur who owns the New Vegas strip, is based on Howard Hughes and closely resembles him in appearance, personality and background. A portrait of Mr. House can also be found in the game, which strongly resembles a portrait of Howard Hughes standing in front of a Boeing Army Pursuit Plane. Literature Stan Lee repeatedly stated that he created the Marvel Comics character Iron Man's civilian persona, Tony Stark, drawing inspiration from Howard Hughes's colorful lifestyle and personality. Additionally, the first name of Stark's father is Howard. Hughes is a supporting character in all three parts of James Ellroy's Underworld USA Trilogy, employing several of the protagonists as private investigators, bagmen, and consultants in his attempt to assume control of Las Vegas.
Referred to behind his back as "Count Dracula" (due to his reclusiveness and rumored obsession with blood transfusions from Mormon donors), Hughes is portrayed as a spoiled, racist, opioid-addicted megalomaniac whose grandiose plans for Las Vegas are undermined by the manipulations of the Chicago Outfit. In the 1981 novel Dream Park by Larry Niven and Steven Barnes, the weapon "which might have defeated the Japs if it hadn't come so late" is revealed to be the Spruce Goose, which had been magically hijacked on its test flight by evil Foré sorcerers in New Guinea. Hughes's skeleton is found at the controls, identified by Hughes's trademark fedora and cloth-and-leather jacket. Music The 1974 song "Workin' at the Car Wash Blues" by Jim Croce compares the song's protagonist to Howard Hughes in one of its lyrics. The 1974 song "The Wall Street Shuffle" by English rock band 10cc directly references Hughes and his way of life in the last verse. The song "Me and Howard Hughes" by Irish band The Boomtown Rats on their 1978 album A Tonic for the Troops is about the title subject. The song "Closet Chronicles" by American rock band Kansas on their 1977 album Point of Know Return is a Howard Hughes allegory. In the song "Ain't No Fun (Waiting 'Round to Be a Millionaire)" on AC/DC's 1976 album Dirty Deeds Done Dirt Cheap, singer Bon Scott references Howard Hughes toward the end: "Hey, hello Howard, how you doin', my next door neighbour? Oh, yea... Get your fuckin' jumbo jet off my airport". Hughes's name is mentioned in the title and the lyrics of the 2002 song "Bargain Basement Howard Hughes" by Jerry Cantrell. The 2012 song "Nancy From Now On" by American songwriter Father John Misty likens Hughes's destructive and erratic tendencies to the singer's own.
Television In The Greatest American Hero Season 2 episode 3, "Don't Mess Around with Jim," Ralph and Bill are kidnapped by a reclusive tycoon, owner of the Beck Air airplane company, who fakes his own death, and seems to know more about the suit than they do. He then blackmails them into retrieving his will to prevent it from being misused by the president of his company. In The Simpsons Season 5 episode "$pringfield (Or, How I Learned to Stop Worrying and Love Legalized Gambling)", Mr. Burns resembles Hughes in his reclusive state. Various nods to his life appear in the episode, ranging from casino ownership and a penthouse office to the "Spruce Goose" being renamed the "Spruce Moose", as well as a lack of hygiene and germaphobia. In The Beverly Hillbillies episode "The Clampett-Hewes Empire", Jed Clampett, while in Hooterville, decides to merge his interests with a man Mr. Drysdale believes is Howard Hughes, the famous reclusive billionaire. Eventually it turns out, to Mr. Drysdale's chagrin, that "Howard Hughes" is no billionaire; he is nothing but a plain old farmer named "Howard Hewes" (H-E-W-E-S). In the Invader Zim episode "Germs," the alien Zim becomes paranoid after discovering that Earth is covered in germs. Referencing Howard Hughes, he isolates himself in his home and dons tissue boxes on his feet. In the Superjail! episode "The Superjail! Six", The Warden repeatedly watches a film called Ice Station Jailpup, which parodies Hughes's obsession with the film Ice Station Zebra. See also Analgesic nephropathy List of richest Americans in history List of wealthiest historical figures List of aviation pioneers List of entrepreneurs Phenacetin References Notes Citations Bibliography Barkow, Al. Gettin' to the Dance Floor: An Oral History of American Golf. Short Hills, New Jersey: Burford Books, 1986. Barton, Charles. Howard Hughes and his Flying Boat. Fallbrook, CA: Aero Publishers, 1982. Republished in 1998, Vienna, VA: Charles Barton, Inc. Barlett, Donald L.
and James B. Steele. Empire: The Life, Legend and Madness of Howard Hughes. New York: W.W. Norton & Company, 1979; republished in 2004 as Howard Hughes: His Life and Madness. Bellett, Gerald. Age of Secrets: The Conspiracy that Toppled Richard Nixon and the Hidden Death of Howard Hughes. Stillwater, Minnesota: Voyageur Press, 1995. Blackman, Tony. Tony Blackman Test Pilot. Grub Street, 2009. Brown, Peter Harry and Pat H. Broeske. Howard Hughes: The Untold Story. New York: Penguin Books, 1996. Burleson, Clyde W. The Jennifer Project. College Station, Texas: Texas A&M University Press, 1997. Dietrich, Noah and Bob Thomas. Howard: The Amazing Mr. Hughes. New York: Fawcett Publications, 1972. Drosnin, Michael. Citizen Hughes: In his Own Words, How Howard Hughes Tried to Buy America. Portland, Oregon: Broadway Books, 2004. Hack, Richard. Hughes: The Private Diaries, Memos and Letters: The Definitive Biography of the First American Billionaire. Beverly Hills, California: New Millennium Press, 2002. Herman, Arthur. Freedom's Forge: How American Business Produced Victory in World War II. New York: Random House, 2012. Higham, Charles. Howard Hughes: The Secret Life, 1993. Porter, Donald J. Howard's Whirlybirds: Howard Hughes' Amazing Pioneering Helicopter Exploits. Fonthill Media, 2013. (ISBN 978-1-78155-089) Irving, Clifford. The Hoax. New York: E. Reads Ltd., 1999. Klepper, Michael and Michael Gunther. The Wealthy 100: From Benjamin Franklin to Bill Gates—A Ranking of the Richest Americans, Past and Present. Secaucus, New Jersey: Carol Publishing Group, 1996. Marrett, George J. Howard Hughes: Aviator. Annapolis, Maryland: Naval Institute Press, 2004. Kistler, Ron. I Caught Flies for Howard Hughes. Chicago: Playboy Press, 1976. Lasky, Betty. RKO: The Biggest Little Major of Them All, 2d ed. Santa Monica, California: Roundtable, 1989. Maheu, Robert and Richard Hack.
Next to Hughes: Behind the Power and Tragic Downfall of Howard Hughes by his Closest Adviser. New York: Harper Collins, 1992. Moore, Terry. The Beauty and the Billionaire. New York: Pocket Books, 1984. Moore, Terry and Jerry Rivers. The Passions of Howard Hughes. Los Angeles: General Publishing Group, 1996. Parker, Dana T. Building Victory: Aircraft Manufacturing in the Los Angeles Area in World War II. Cypress, California: Dana T. Parker Books, 2013. Phelan, James. Howard Hughes: The Hidden Years. New York: Random House, 1976. Real, Jack. The Asylum of Howard Hughes. Philadelphia: Xlibris Corporation, 2003. Thomas, Bob. Liberace: The True Story. New York: St. Martin's Press, 1987. Tierney, Gene with Mickey Herskowitz. Self-Portrait. New York: Peter Wyden, 1979. ISBN 0-883261-52-9. Weaver, Tom. Science Fiction and Fantasy Film Flashbacks: Conversations with 24 Actors, Writers, Producers and Directors from the Golden Age. New York: McFarland & Company, 2004. External links AZORIAN The Raising of the K-129 / 2009 – 2 Part TV Documentary / Michael White Films Vienna Welcome Home Howard: Collection of photographs kept by UNLV A history of the remarkable achievements of Howard Hughes FBI file on Howard Hughes Exclusive Biography of Howard R. Hughes Jr.
Biography in the National Aviation Hall of Fame 1905 births 1976 deaths 20th-century American businesspeople 20th-century American engineers 20th-century aviation 20th-century Methodists Amateur radio people American aerospace businesspeople American aerospace designers American aerospace engineers American airline chief executives American billionaires American businesspeople in the oil industry American casino industry businesspeople American chairpersons of corporations American chief executives of manufacturing companies American communications businesspeople American construction businesspeople American consulting businesspeople American film studio executives American financiers American health care businesspeople American hoteliers American inventors American investors American mass media owners American media executives American mining businesspeople American nonprofit executives American people of English descent American people of French descent American people of Welsh descent American philanthropists American political fundraisers American real estate businesspeople American restaurateurs American technology chief executives American technology company founders American United Methodists Aviation inventors Aviators from Texas Burials at Glenwood Cemetery (Houston, Texas) Businesspeople from Los Angeles Businesspeople from Houston California Republicans Collier Trophy recipients Congressional Gold Medal recipients Deaths from kidney failure Eccentrics Engineers from California Film directors from Texas Film producers from California History of Clark County, Nevada History of Houston Hypochondriacs People from Ventura County, California People with obsessive–compulsive disorder Rice University alumni Survivors of aviation accidents or incidents Texas Republicans Trans World Airlines people California Institute of Technology alumni Aviation pioneers American aviation record holders Film directors from Los Angeles Watergate scandal
https://en.wikipedia.org/wiki/Hook%20of%20Holland
Hook of Holland
Hook of Holland is a town in the southwestern corner of Holland (hence the name: hoek means "corner" and was the word in use before the word kaap – "cape", from Portuguese cabo – became Dutch), at the mouth of the New Waterway shipping canal into the North Sea. The town is administered by the municipality of Rotterdam as a district of that city. Its district covers an area of 16.7 km2, of which 13.92 km2 is land. On 1 January 1999 it had an estimated population of 9,400. Towns near "the Hook" include Monster, 's-Gravenzande, Naaldwijk and Delft to the northeast, and Maassluis to the southeast. On the other side of the river are the Europort and the Maasvlakte. The wide sandy beach, one section of which is designated for use by naturists, runs for approximately 18 kilometres to Scheveningen and for most of this distance is backed by extensive sand dunes through which there are foot and cycle paths. On the north side of the New Waterway, to the west of the town, is a pier, part of which is accessible to pedestrians and cyclists. The Berghaven is a small harbour on the New Waterway where the Rotterdam and Europort pilots are based. This small harbour is only for the use of the pilot service, government vessels and the Hook of Holland lifeboat. History The Hook of Holland area was created as a sandbar in the Meuse estuary, which silted up increasingly after St. Elizabeth's flood of 1421. Various plans were devised to improve the shipping channel to Rotterdam. In 1863 it was finally decided to construct the New Waterway, which was dug between 1866 and 1868. The route ran through the Hook of Holland, where a primitive settlement, Old Hook (Oude Hoek - nowadays the Zuidelijk Strandcentrum), was created. Many workers and senior employees of the Rijkswaterstaat settled in Old Hook. The Hook initially fell under the administrative authority of 's-Gravenzande.
An attempt by the inhabitants to transform the place into an independent municipality failed and, on 1 January 1914, Hook of Holland was added to Rotterdam. After the First World War the village started to develop into a seaside resort. It has since been informally known as 'Rotterdam by the sea'. During World War II, the Hook was one of the most important places for the Wehrmacht to hold because of its harbour, which comprised an important and strategic part of the Atlantic Wall. The German Army installed three 11" guns (removed from the damaged battleship Gneisenau) as shore batteries to protect the port area from invasion. Hook of Holland already had a ward council in 1947. Hook of Holland has been a borough since 1973. In 2014 it was replaced by an 'area committee'. Transport links Railways The Schiedam–Hoek van Holland railway is a 24-kilometre branch line from Schiedam Centrum station via Vlaardingen and Maassluis. The final two stations on the line are located within the town. Hoek van Holland Haven, the penultimate station, is close to the town centre, adjacent to the ferry terminal and the small harbour, the Berghaven. Hoek van Holland Strand, the terminus, is closest to the beach. The railway line opened for service in 1893 and was electrified in 1935. International trains ran from Berlin and Moscow to connect these with London via the ferry service. From 1928 to 1939 and from 1962 to 1979, Hook of Holland was the northern terminus of the Rheingold Express to Frankfurt and Geneva. Services on the line to Rotterdam Centraal station were operated by NS every half-hour during the day until April 2017, when the line was closed for conversion to metro standards. It was reopened in September 2019, as an extension of the Rotterdam Metro. The metro line service from Hook of Holland does not offer direct connections to Rotterdam Centraal. 
Ferry Hook of Holland is also the location of an international ferry terminal, from which service to eastern England has operated since 1893 except for the durations of the two World Wars. Currently, two routes are operated: one, a day-and-night freight and passenger service to Harwich, Essex, and the other, a night, freight-only service to North Killingholme Haven, Lincolnshire. The passenger ferry service is operated by Stena Line as part of the Dutchflyer rail-ferry service between Hoek van Holland Haven station and Harwich International station in England, from which Greater Anglia provides service to Liverpool Street station in central London. A local ferry operated by RET links the Hook with the Maasvlakte part of the Port of Rotterdam. Motorways The A20 motorway begins approximately 10 kilometres east of Hook of Holland near Westerlee, heading east towards Rotterdam and Utrecht. It connects to the A4 heading north towards The Hague and Amsterdam 17 kilometres east of the town. Notable people Jan Knippenberg (1948–1995), a Dutch ultrarunner and historian; in 1974 he ran from Hook of Holland to Stockholm in 18 days Richard de Mos (born 1976), a Dutch PVV politician and climate-change sceptic; brought up in Hook of Holland Jesper Leerdam (born 1987), a Dutch footballer who has played for Dayton Dutch Lions, Excelsior Maassluis and SW Scheveningen Roy Kortsmit (born 1992), a Dutch professional footballer who currently plays as a goalkeeper for NAC Breda in the Dutch Eerste Divisie Bryan Janssen (born 1995), a Dutch professional footballer who since January 2020 has played for ASWH References Bibliography External links Hook of Holland VVV (tourist information) site Harwich - Hoek ferry service Rotterdam Boroughs of Rotterdam Populated coastal places in the Netherlands Populated places in South Holland Port cities and towns in the Netherlands Port cities and towns of the North Sea
https://en.wikipedia.org/wiki/Hugh%20Binning
Hugh Binning
Hugh Binning (1627–1653) was a Scottish philosopher and theologian. He was born in Scotland during the reign of Charles I and was ordained in the (Presbyterian) Church of Scotland. He died in 1653, during the time of Oliver Cromwell and the Commonwealth of England. Personal life Hugh Binning was the son of John Binning of Dalvennan, Straiton, and Margaret M'Kell. Margaret was the daughter of Rev. Matthew M'Kell, who was a minister in the parish of Bothwell, Scotland, and sister of Hugh M'Kell, a minister in Edinburgh. Binning was born on his father's estate in Dalvennan, Straiton, in the shire of Ayr. The family owned other lands in the parishes of Straiton and Colmonell as well as Maybole in Carrick. A precocious child, Binning was admitted to the study of philosophy at the University of Glasgow at age thirteen. Binning has been described as "an extraordinary instance of precocious learning and genius." In 1645, James Dalrymple, 1st Viscount of Stair, who was Hugh's master (primary professor) in the study of philosophy, announced he was retiring from the University of Glasgow. Dalrymple was afterward President of the Court of Session, and Viscount Stair. After a national search for a replacement on the faculty, three men were selected to compete for the position. Binning was one of those selected, but was at a disadvantage because of his extreme youth and because he was not of noble birth. However, he had strong support from the existing faculty, who suggested that the candidates speak extemporaneously on any topic of the candidate's choice. After hearing Hugh speak, the other candidates withdrew, making Hugh a regent and professor of philosophy while he was still 18 years old. On 7 February 1648, (at the age of 21) Hugh was appointed an Advocate before the Court of Sessions (an attorney). In the same year, he married Barbara Simpson (sometimes called Mary), daughter of Rev. James Simpson, a minister in Ireland. Their son, John, was born in 1650.
Binning was called on 25 October 1649. As minister of Govan, he was the successor of Mr. William Wilkie. His ordination took place on the 8th of January 1649, when Mr David Dickson, one of the theological professors at the College of Glasgow, and author of Therapeutica Sacra, presided. He was ordained in January, at the age of 22, holding his regency until 14 May that year. At that time Govan was a separate town rather than part of Glasgow. Hugh died around September 1653 and was buried in the churchyard of Govan, where Patrick Gillespie, then principal of the University of Glasgow, ordered a monument inscribed in Latin, roughly translated: Here lies Mr. Hugh Binning, a man distinguished for his piety and eloquence, learned in philology, philosophy, and theology, a Prelate, faithful to the Gospel, and finally an excellent preacher. In the middle of a series of events, he was taken at the age of 26, in the year of our Lord 1653. Alive, he changed the society of his own land because he walked with God. And if you wish to make other inquires, the rest should keep silence, since neither you nor the marble can comprehend it. Hugh's widow, Barbara (sometimes called Mary), then remarried James Gordon, an Anglican priest at Comber in Ireland. Together they had a daughter, Jean, who married Daniel MacKenzie, who was on the winning side of the Battle of Bothwell Bridge serving as an ensign under Lieutenant-Colonel William Ramsay (who became the third Earl of Dalhousie), in the Earl of Mar's Regiment of Foot. Binning's son, John Binning, married Hanna Keir, who was born in Ireland. The Binnings were Covenanters, a resistance movement that objected to the return of Charles II (who was received into the Catholic Church on his deathbed). They were on the losing side in the 1679 Battle of Bothwell Bridge. Most of the rebels who were not executed were exiled to the Americas; about 30 Covenanters were exiled to the Carolinas on the Carolina Merchant in 1684.
After the battle, John and Hanna were separated. In the aftermath of the battle at Bothwell Bridge, Binning's widow (now Barbara Gordon) tried to reclaim the family estate at Dalvennan by saying that John and his wife owed his stepfather a considerable sum of money. The legal action was successful and Dalvennan became the possession of John's half-sister Jean and her husband Daniel MacKenzie. In addition, Jean came into possession of Hanna Keir's property in Ireland. By 1683, Jean was widowed. John Binning was branded a traitor, was sentenced to death and forfeited his property to the Crown. John's wife (Hanna Keir) was branded as a traitor and forfeited her property in Ireland. In 1685, Jean "donated" the Binning family's home at Dalvennan and other properties, along with the Keir properties, to Roderick MacKenzie, who was a Scottish advocate of James II (James VII of Scotland), and the baillie of Carrick. According to an act of the Scottish Parliament, Roderick MacKenzie was also very effective in "suppressing the rebellious, fanatical party in the western and other shires of this realm, and putting the laws to vigorous execution against them". Since Bothwell Bridge, Hanna had been hiding from the authorities. In 1685, Hanna was in Edinburgh where she was found during a sweep for subversives and imprisoned in the Tolbooth of Edinburgh, a combination city hall and prison. Those arrested with Hanna were exiled to North America; however, she developed dysentery and remained behind. By 1687, near death, Hanna petitioned the Privy Council of Scotland for her release; she was exiled to her family in Ireland, where she died around 1692. In 1690, the Scottish Parliament rescinded John's fines and forfeiture, but he was unable to recover his family's estates, the courts suggesting that he had relinquished his claim to Dalvennan in exchange for forgiveness of debt, rather than forfeiture. There is little documentation about John after his wife's death.
John received a small income from royalties on his father's works after parliament extended copyrights on Binning's writings to him. However, the income was not significant and John made several petitions to the Scottish parliament for money, the last occurring in 1717. It is thought that he died in Somerset county, in southwestern England. Binning himself died of consumption at the age of 26 in September 1653. He was remarkably popular as a preacher, having been considered "the most accomplished philosopher and divine in his time, and styled the Scottish Cicero." He married (contracted 17 May 1650) Mary (who died at Paisley in 1694) and had a son, John of Dalvennan. She was the daughter of Richard Simson, minister of Sprouston. After Binning's early death Mary married her second husband, James Gordon, minister of Comber, in Ireland. A marble tablet, with an inscription in classical Latin, was erected to his memory by his friend Mr Patrick Gillespie, who was then Principal of the University of Glasgow. It has been placed in the vestibule of the present parish church. The whole of his works are posthumous publications. He was a follower of James Dalrymple. In later life, he was well known as an evangelical Christian. Impact of the Commonwealth Hugh Binning was born two years after Charles I became monarch of England, Ireland, and Scotland. At the time, each was an independent country sharing the same monarch. The Acts of Union 1707 integrated Scotland and England to form the Kingdom of Great Britain, and the Acts of Union 1800 integrated Ireland to form the United Kingdom of Great Britain and Ireland. The period was dominated by both political and religious strife between the three independent countries. Religious disputes centered on questions such as whether religion was to be dictated by the monarch or was to be the choice of the people, and whether individuals had a direct relationship with God or needed to use an intermediary.
Civil disputes centered on debates about the extent of the King's power (a question of the Divine right of kings), and specifically whether the King had the right to raise taxes and armed forces without the consent of the governed. These wars ultimately changed the relationship between king and subjects. In 1638, the General Assembly of the Church of Scotland voted to remove bishops and the Book of Common Prayer that had been introduced by Charles I to impose the Anglican model on the Presbyterian Church of Scotland. Public riots followed, culminating in the Wars of the Three Kingdoms, an interrelated series of conflicts that took place in the three countries. The first conflict, which was also the first of the Bishops' Wars, took place in 1639 and was a single border skirmish between England and Scotland, also known as "the war the armies did not want to fight". To maintain his English power base, Charles I made secret alliances with Catholic Ireland and Presbyterian Scotland to invade Anglican England, promising that each country could establish its own separate state religion. Once these secret entreaties became known to the English Long Parliament, the Congregationalist faction (of which Oliver Cromwell was a primary spokesman) took matters into its own hands and Parliament established an army separate from the King. Charles I was executed in January 1649, which led to the rule of Cromwell and the establishment of the Commonwealth. The conflicts concluded with the English Restoration of the monarchy and the return of Charles II in 1660. The Act of Classes was passed by the Parliament of Scotland on 23 January 1649; the act banned Royalists (people supporting the monarchy) from holding political or military office. In exile, Charles II signed the Treaty of Breda (1650) with the Scottish Parliament; among other things, the treaty established Presbyterianism as the national religion. Charles was crowned King of Scots at Scone in January 1651.
By September 1651, Scotland was annexed by England, its legislative institutions abolished, Presbyterianism dis-established, and Charles was forced into exile in France. The Scottish Parliament rescinded the Act of Classes in 1651, which produced a split within Scottish society. The sides of the conflict were called the Resolutioners (who supported the rescission of the act, the monarchy, and the Scottish House of Stewart) and the Protesters (who supported Cromwell and the Commonwealth); Binning joined the Protesters in 1651. When Cromwell sent troops to Scotland, he also attempted to dis-establish Presbyterianism and the Church of Scotland, and Binning spoke against this. On Saturday 19 April 1651, Cromwell entered Glasgow and the next day he heard a sermon by three ministers who condemned him for invading Scotland. That evening, Cromwell summoned those ministers, and others, to a debate on the issue. A discussion on some of the controverted points of the times was held in his presence, between his chaplains, the learned Dr John Owen, Joseph Caryl, and others on the one side, and some Scots ministers on the other. Mr. Binning, who was one of the disputants, apparently nonplussed the Independents, which led Cromwell to ask who the learned and bold young man was. Told it was Binning, he said: "He hath bound well, indeed; but", laying his hand on his sword, "this will lose all again." The late Mr. Orme was of the opinion that there is nothing improbable in the account of the meeting, and that such a meeting took place is certain. This appears from two letters which were written by Principal Robert Baillie, who was then Professor of Theology at the University of Glasgow. At the debate, Rev Hugh Binning is said to have out-debated Cromwell's ministers so completely that he silenced them. Politics Hugh Binning's political views were based on his theology.
Binning was a Covenanter, a movement that began in Scotland at Greyfriars Kirkyard in 1638 with the National Covenant and continued with the 1643 Solemn League and Covenant—in effect a treaty between the English Long Parliament and Scotland for the preservation of the reformed religion in exchange for troops to confront the threat of Irish Catholic troops joining the Royalist army. Binning could also be described as a Protester; both political positions were taken because of their religious implications. However, though he saw the evils of the politics of his day, he was not a "fomenter of factions", writing "A Treatise of Christian Love" as a response. Theology Because of the tumultuous time in which Hugh Binning lived, politics and religion were inextricably intertwined. Binning was a Calvinist and follower of John Knox. By profession, Binning was trained as a philosopher, and he believed that philosophy was the servant of theology. He thought that both philosophy and theology should be taught in parallel. Binning's writing, which is primarily a collection of his sermons, "forms an important bridge between the 17th century, when philosophy in Scotland was heavily dominated by Calvinism, and the 18th century when figures such as Francis Hutcheson re-asserted a greater degree of independence between the two and allied philosophy with the developing human sciences." Religiously, Hugh Binning was, what we would call today, an Evangelical Calvinist. He spoke on the primacy of God's love as the ground of salvation: "... our salvation is not the business of Christ alone, but the whole Godhead is interested in it deeply, so deeply that you cannot say who loves it most, or who likes it most. The Father is the very fountain of it, his love is the spring of all." With regard to the extent of the atonement, Hugh Binning did not hold that the offer of redemption applied only to the few that are elect, but said that "the ultimate ground of faith is in the electing will of God."
In Scotland, during the 1600s, the questions concerning atonement revolved around the terms in which the offer was expressed. Binning believed that "forgiveness is based on Christ's death, understood as a satisfaction and as a sacrifice: 'If he had pardoned sin without any satisfaction what rich grace it had been! But truly, to provide the Lamb and sacrifice himself, to find out the ransom, and to exact it of his own Son, in our name, is a testimony of mercy and grace far beyond that. But then, his justice is very conspicuous in this work'." Works All of the works of Hugh Binning were published posthumously and were primarily collections of his sermons. Of his speaking style, it was said: "There is originality without any affectation, a rich imagination, without anything fanciful or extravagant, the utmost simplicity, without anything mean or trifling." The Common Principles of the Christian Religion, Clearly Proved, and Singularly Improved; or, A Practical Catechism published by Patrick Gillespie in 1660 An analysis of the Westminster Confession of Faith. The work was translated into Dutch in 1678 by James Koelman, a minister of Sluys in Flanders. (The Common Principles of the Christian Religion, fulltext) Quotations from the publication include: On the love of God And what is love but the very motion of the soul to God? And so till it have attained that, to be in him, it can find no place of rest. On the free grace of the Gospel I am guilty, and can say nothing against it, while I stand alone. But though I cannot satisfy, and have not; yet there is one, Jesus Christ, who gave his life a ransom for many, and whom God hath given as a propitiation for sins. He hath satisfied and paid the debt in my name; go and apprehend the cautioner, since he hath undertaken it, nay, he hath done it, and is absolved.
On Learning Be not ignorant as beasts, that know no other things than to follow the drove; quæ pergunt, non quo eundum est, sed quo itur; they follow not whither they ought to go, but whither most go. You are men, and have reasonable souls within you; therefore I beseech you, be not composed and fashioned according to custom and example, that is, brutish, but according to some inward knowledge and reason. Retire once from the multitude, and ask in earnest at God, What is the way? Him that fears him he will teach the way that he should choose. The way to his blessed end is very strait, very difficult; you must have a guide in it,—you must have a lamp and a light in it,—else you cannot but go wrong. Sinner's Sanctuary, being forty Sermons upon the eighth Chapter of the Epistle of the Romans, from the First Verse down to the Eighteenth. a treatise originally published in 1670 Fellowship with God, being Twenty Eight Sermons on the First Epistle of John, Chap. 1st and Chap. 2nd, Verses 1, 2, 3. a treatise originally published in 1671 by "A.S.", who in the preface to the reader styles himself "his servant in the gospel of our dearest Lord and Savior" Heart Humiliation or Miscellany Sermons, preached upon some choice texts at several solemn occasions. originally published in 1676 by the same A.S. that published the treatise "Fellowship with God". The first of the sermons was preached July 1650 An Useful Case of Conscience, Learnedly and Accurately Discussed and Resolved, Concerning Associations and Confederacies with Idolaters, Infidels, Heretics, Malignants or any other Known Enemies of Truth and Godliness. The treatise was used by the Covenanters and seems to have been originally published in Holland in 1693. There is a reference to the treatise at a "general meeting of Society people ... at Edinburgh 28 May 1683."
The treatise expressed the opinion that Scotland should not support Charles I without some restraint placed on relatively absolute royal power and without assurance the Presbyterian religion could be maintained. The documents seem to have been presented to the Society either by Hugh Binning's son, John, or his widow, Barbara Gordon (who remarried about 1657 to James Gordon; he was born in Ireland and became a minister at Paisley, Renfrewshire, Scotland.) (An Useful Case of Conscience, fulltext). In the treatise Binning writes: Where God hath given us liberty by the law of nature, or his word, no king can justly tie us, and when God binds and obliges us by any of these, no king or parliament can loose or untie us. A Treatise of Christian Love a sermon based on John 13:35, “By this shall all men know that ye are my disciples, if ye have love one to another” and 1 Corinthians 13. Binning explores the concept that, as believers in Christ, Christians need to show their love for one another. (A Treatise of Christian Love, fulltext) Binning argues: But Christ’s last words persuade this, that unity in affection is more essential and fundamental. This is the badge he left to his disciples. If we cast away this upon every different apprehension of mind, we disown our Master, and disclaim his token and badge. On Charity Charity "thinketh no evil." [1 Cor. 13:5] Charity is apt to take all things in the best sense. If a thing may be subject to diverse acceptations, it can put the best construction on it. It is so benign and good in its own nature that it is not inclinable to suspect others. It desires to condemn no man, but would gladly, as far as reason and conscience will permit, absolve every man. It is so far from desire of revenge, that it is not provoked or troubled with an injury.
For that were nothing else but to wrong itself because others have wronged it already, and it is so far from wronging others, that it will not willingly so much as think evil of them. Yet if need requires, charity can execute justice, and inflict chastisement, not out of desire of another’s misery, but out of love and compassion to mankind. Charitas non punit quia peccatum est, sed ne peccaretur, it looks more to prevention of future sin, than to revenge of a bypast fault, and can do all without any discomposure of spirit, as a physician cuts a vein without anger. Quis enim cui medetur irascitur? "Who is angry at his own patient?" In 1735, a collected edition of Binning's works, containing sermons not previously published, was published posthumously, originally edited by Rev. M. Leishman, D.D., a minister who was a later successor to Hugh in the parish of Govan. There have been several editions of the complete "Works of the Rev. Hugh Binning"; one of the latest (Classic Reprint) was published by Forgotten Books in 2012. Bibliography Scott's Fasti, ii. 67-8; Minutes Univ. Glasg.; Wodrow's Analecta; Reid's Presbyterianism of rights as against the invasion of the state, Ireland, i.; Edin. Christian Instructor, xxii. Acts of Assembly; New Statistical Account, vi.; Chalmers's Biogr. Dict.; Scots Worthies, i. 205-10, ed. Macgavin, 1837. "Evangelical Beauties of Hugh Binning," 1829, with a memoir of the author by the Rev John Brown of Whitburn.
The Common Principles of the Christian Religion, or a Practical Catechism (Edinburgh, 1659); The Sinner's Sanctuary (Edinburgh, 1670); Fellowship with God (Edinburgh, 1671); Heart Humiliation, or Miscellany Sermons (Edinburgh, 1676); A Useful Case of Conscience (1693); Works (which were recommended to be published by the General Assembly, 28 March 1704 and 10 May 1717) (Edinburgh, 1735; Glasgow, 1842); A Treatise of Christian Love (Edinburgh, 1743); Sermons on the most important subjects of Practical Religion (Glasgow, 1760); Evangelical Beauties (1829) Wodrow's Anal., i. 161, iii. 40, 438; Glasg. Tests.; Reid's Ireland, ii., 351; Inq. Ret., Ayr, 580; Dictionary of National Biography Notes References Sources External links Biographical Sketch from the home page of the Reformed Presbyterian Church (Covenanted) John Binning of Dalvennan, The Forfeited: The Carrick Lairds at the Battle of Bothwell Bridge Map of Dalvennan, South Ayrshire, Scotland 1627 births 1653 deaths Scottish Calvinist and Reformed theologians Academics of the University of Glasgow Scottish philosophers Alumni of the University of Glasgow People from Ayr 17th-century Calvinist and Reformed theologians 17th-century Scottish people 17th-century philosophers 17th-century deaths from tuberculosis Tuberculosis deaths in Scotland
https://en.wikipedia.org/wiki/Henry%20Home%2C%20Lord%20Kames
Henry Home, Lord Kames
Henry Home, Lord Kames (1696 – 27 December 1782) was a Scottish writer, philosopher, advocate, judge, and agricultural improver. A central figure of the Scottish Enlightenment, a founding member of the Philosophical Society of Edinburgh, and active in the Select Society, he acted as patron to some of the most influential thinkers of the Scottish Enlightenment, including the philosopher David Hume, the economist Adam Smith, the writer James Boswell, the chemical philosopher William Cullen, and the naturalist John Walker. Biography He was born at Kames House, between Eccles and Birgham, Berwickshire, son of George Home of Kames House. He was educated at home by a private tutor until the age of 16. In 1712 he was apprenticed as a lawyer under a Writer to the Signet in Edinburgh, and was called to the Scottish bar as an advocate in 1724. He soon acquired a reputation through a number of publications on civil and Scottish law, and was one of the leaders of the Scottish Enlightenment. In 1752, he was "raised to the bench", thus acquiring the title of Lord Kames. Kames held a primary interest in the production of linen in Scotland and encouraged the development of linen manufacture. Kames was one of the original proprietors of the British Linen Company, and a director from 1754 to 1756. Kames was on the panel of judges in the Joseph Knight case, which ruled that there could be no slavery in Scotland. His address in 1775 is shown as New Street on the Canongate. Cassell's clarifies that this was a very fine mansion at the head of the street, on its east side, facing onto the Canongate. He is buried in the Home-Drummond plot at Kincardine-in-Menteith just west of Blair Drummond. Writings Home wrote much about the importance of property to society.
In his Essay Upon Several Subjects Concerning British Antiquities, written just after the Jacobite rising of 1745, he showed that the politics of Scotland were based not on loyalty to Kings, as the Jacobites had said, but on the royal land grants that lay at the base of feudalism, the system whereby the sovereign maintained "an immediate hold of the persons and property of his subjects". In Historical Law Tracts Home described a four-stage model of social evolution that became "a way of organizing the history of Western civilization". The first stage was that of the hunter-gatherer, wherein families avoided each other as competitors for the same food. The second was that of the herder of domestic animals, which encouraged the formation of larger groups but did not result in what Home considered a true society. No laws were needed at these early stages except those given by the head of the family, clan, or tribe. Agriculture was the third stage, wherein new occupations such as "plowman, carpenter, blacksmith, stonemason" made "the industry of individuals profitable to others as well as to themselves", and a new complexity of relationships, rights, and obligations required laws and law enforcers. A fourth stage evolved with the development of market towns and seaports, "commercial society", bringing yet more laws and complexity but also providing more benefit. Lord Kames could see these stages within Scotland itself, with the pastoral Highlands, the agricultural Lowlands, the "polite" commercial towns of Glasgow and Edinburgh, and in the Western Isles a remaining culture of rude huts where fishermen and gatherers of seaweed eked out their subsistence living. Home was a polygenist: he believed God had created different races on earth in separate regions.
In his book Sketches of the History of Man, in 1774, Home claimed that the environment, climate, or state of society could not account for racial differences, so that the races must have come from distinct, separate stocks. The above studies created the genre of the story of civilization and defined the fields of anthropology and sociology and therefore the modern study of history for two hundred years. In the popular book Elements of Criticism (1762) Home interrogated the notion of fixed or arbitrary rules of literary composition, and endeavoured to establish a new theory based on the principles of human nature. The late eighteenth-century tradition of sentimental writing was associated with his notion that 'the genuine rules of criticism are all of them derived from the human heart'. Prof Neil Rhodes has argued that Lord Kames played a significant role in the development of English as an academic discipline in the Scottish Universities. Social milieu He enjoyed intelligent conversation and cultivated a large number of intellectual associates, among them John Home, David Hume and James Boswell. Lord Monboddo was also a frequent debating partner of Kames, though the two usually had a fiercely competitive and adversarial relationship. Family He was married to Agatha Drummond of Blair Drummond. Their children included George Drummond-Home. Major works Remarkable Decisions of the Court of Session (1728) Essays upon Several Subjects in Law (1732) Essay Upon Several Subjects Concerning British Antiquities (c. 1745) Essays on the Principles of Morality and Natural Religion (1751) He advocates the doctrine of philosophical necessity. Historical Law-Tracts (1758) Principles of Equity (1760) Introduction to the Art of Thinking (1761) Elements of Criticism (1762) Published by two Scottish booksellers, Andrew Millar and Alexander Kincaid.
Sketches of the History of Man (1774) Gentleman Farmer (1776) Loose Thoughts on Education (1781) See also George Anderson (minister) Literature References External links Henry Home, Lord Kames at James Boswell – a Guide 1696 births 1782 deaths 18th-century philosophers 18th-century Scottish people People from Berwickshire Members of the Faculty of Advocates Enlightenment philosophers Members of the Philosophical Society of Edinburgh Scottish rhetoricians People of the Scottish Enlightenment Scottish philosophers Kames Scottish historians Scottish legal writers Scottish agronomists Scottish literary critics Scottish anthropologists Scottish sociologists Moral philosophers Alumni of the University of Edinburgh
https://en.wikipedia.org/wiki/Harwich
Harwich
Harwich is a town in Essex, England and one of the Haven ports, located on the coast with the North Sea to the east. It is in the Tendring district. Nearby places include Felixstowe to the northeast, Ipswich to the northwest, Colchester to the southwest and Clacton-on-Sea to the south. It is the northernmost coastal town within Essex. Its position on the estuaries of the Stour and Orwell rivers and its usefulness to mariners as the only safe anchorage between the Thames and the Humber led to a long period of maritime significance, both civil and military. The town became a naval base in 1657 and was heavily fortified, with Harwich Redoubt, Beacon Hill Battery, and Bath Side Battery. Harwich is the likely launch point of the Mayflower which carried English Puritans to North America, and is the presumed birthplace of Mayflower captain Christopher Jones. Harwich today is contiguous with Dovercourt and the two, along with Parkeston, are often referred to collectively as Harwich. History The town's name means "military settlement", from Old English here-wic. The town received its charter in 1238, although there is evidence of earlier settlement – for example, a record of a chapel in 1177, and some indications of a possible Roman presence. The town was the target of an abortive raid by French forces under Antonio Doria on 24 March 1339 during the Hundred Years' War. Because of its strategic position, Harwich was the target for the invasion of Britain by William of Orange on 11 November 1688. However, unfavourable winds forced his fleet to sail into the English Channel instead and eventually land at Torbay. Due to the involvement of the Schomberg family in the invasion, Charles Louis Schomberg was made Marquess of Harwich. Writer Daniel Defoe devotes a few pages to the town in A tour thro' the Whole Island of Great Britain. Visiting in 1722, he noted its formidable fort and harbour "of a vast extent". 
The town, he recounts, was also known for an unusual chalybeate spring rising on Beacon Hill (a promontory to the north-east of the town), which "petrified" clay, allowing it to be used to pave Harwich's streets and build its walls. The locals also claimed that "the same spring is said to turn wood into iron", but Defoe put this down to the presence of "copperas" in the water. Regarding the atmosphere of the town, he states: "Harwich is a town of hurry and business, not much of gaiety and pleasure; yet the inhabitants seem warm in their nests and some of them are very wealthy". Harwich played an important part in the Napoleonic Wars and, more especially, the two world wars. Of particular note: 1793-1815—Post Office Station for communication with Europe, one of the embarkation and evacuation bases for expeditions to Holland in 1799, 1809 and 1813/14; base for capturing enemy privateers. The dockyard built many ships for the Navy, including HMS Conqueror which captured the French Admiral Villeneuve at the Battle of Trafalgar. The Redoubt and the now-demolished Ordnance Building date from that era. 1914-18—base for the Royal Navy's Harwich Force light cruisers and destroyers under Commodore Tyrwhitt, and for British submarines. In November 1918 the German U-boat fleet surrendered to the Royal Navy in the harbour. 1939-1945—one of the main East Coast minesweeping and destroyer bases, at one period base for British and French submarines; assembled fleets for Dutch and Dunkirk evacuations and follow-up to D-Day; unusually, a target for Italian bombers during the Battle of Britain. Royal Naval Dockyard Harwich Dockyard was established as a Royal Navy Dockyard in 1652. It ceased to operate as a Royal Dockyard in 1713 (though a Royal Navy presence was maintained until 1829). During the various wars with France and Holland, through to 1815, the dockyard was responsible for both building and repairing numerous warships.
HMS Conqueror, a 74-gun ship completed in 1801, captured the French admiral Villeneuve at Trafalgar. The yard was then a semi-private concern, with the actual shipbuilding contracted to Joseph Graham, who was sometimes mayor of the town. During World War II parts of Harwich were again requisitioned for naval use and ships were based at HMS Badger; Badger was decommissioned in 1946, but the Royal Naval Auxiliary Service maintained a headquarters on the site until 1992. Lighthouses In 1665, not long after the establishment of the Dockyard, a pair of lighthouses were set up on the Town Green to serve as leading lights for ships entering the harbour. Completely rebuilt in 1818, both towers are still standing (though they ceased functioning as lighthouses in 1863, when they were replaced by a new pair of lights at Dovercourt). Transport The Royal Navy no longer has a presence in Harwich but Harwich International Port at nearby Parkeston continues to offer regular ferry services to the Hook of Holland (Hoek van Holland) in the Netherlands. Mann Lines operates a roll-on roll-off ferry service from Harwich Navyard to Bremerhaven, Cuxhaven, Paldiski and Turku. Many operations of the Port of Felixstowe and of Trinity House, the lighthouse authority, are managed from Harwich. The Mayflower railway line serves Harwich and there are three operational passenger stations: , and . The line also allows freight trains to access the Port. The port is famous for the phrase "Harwich for the Continent", seen on road signs and in London & North Eastern Railway (LNER) advertisements. From 1924 to 1987 (with a break during the second world war), a train ferry service operated between Harwich and Zeebrugge. The train ferry linkspan still exists today and the rails leading from the former goods yard of Harwich Town railway station are still in position across the road, although the line is blocked by the Trinity House buoy store. 
Architecture Despite, or perhaps because of, its small size, Harwich is highly regarded in terms of architectural heritage, and the whole of the older part of the town, excluding Navyard Wharf, is a conservation area. The regular street plan with principal thoroughfares connected by numerous small alleys indicates the town's medieval origins, although many buildings of this period are hidden behind 18th century facades. The extant medieval structures are largely private homes. The house featured in the image of Kings Head St to the left is unique in the town and is an example of a sailmaker's house, thought to have been built circa 1600. Notable public buildings include the parish church of St. Nicholas (1821) in a restrained Gothic style, with many original furnishings, including a somewhat altered organ in the west end gallery. There is also the Guildhall of 1769, the only Grade I listed building in Harwich. The Pier Hotel of 1860 and the building that was the Great Eastern Hotel of 1864 can both be seen on the quayside, both reflecting the town's new importance to travellers following the arrival of the Great Eastern Main Line from Colchester in 1854. In 1923, the Great Eastern Hotel was closed by the newly formed LNER, as the Great Eastern Railway had opened a new hotel with the same name at the new passenger port at Parkeston Quay, causing a decline in numbers. The hotel became the Harwich Town Hall, which included the Magistrates Court and, following changes in local government, was sold and divided into apartments. Also of interest are the High Lighthouse (1818), the unusual Treadwheel Crane (late 17th century), the Old Custom Houses on West Street, a number of Victorian shopfronts and the Electric Palace Cinema (1911), one of the oldest purpose-built cinemas to survive complete with its ornamental frontage and original projection room still intact and operational.
There is little notable building from the later parts of the 20th century, but major recent additions include the lifeboat station and two new structures for Trinity House. The Trinity House office building, next door to the Old Custom Houses, was completed in 2005. All three additions are influenced by the high-tech style. Notable residents Harwich has also historically hosted a number of notable inhabitants, linked with Harwich's maritime past.
Christopher Newport (1561–1617), seaman and privateer, captain of the expedition that founded Jamestown, Virginia
Christopher Jones (c.1570–1622), captain of the 1620 voyage of the Pilgrim ship Mayflower
Thomas Cobbold (1708–1767), brewer and owner of Three Cups
William Shearman (1767–1861), physician and medical writer
James Francillon (1802–1866), barrister and legal writer
Captain Charles Fryatt (1872–1916), mariner executed by the Germans, brought back from Belgium and buried at Dovercourt
Peter Firmin (1928–2018), artist and puppet maker
Randolph Stow (1935–2010), reclusive but award-winning Australian-born writer who made his home in Harwich
Myles de Vries (born 1940), first-class cricketer
Liana Bridges (born 1969), actress, best known for co-presenting Sooty & Co
Kate Hall (born 1983), British-Danish singer
Politicians
Sir John Jacob, 1st Baronet of Bromley (c.1597–1666), politician who sat in the House of Commons in 1640 and 1641
Sir Capel Luckyn, 2nd Baronet (1622–1680), politician who sat in the House of Commons variously between 1647 and 1679
Samuel Pepys (1633–1703), diarist and Member of Parliament (MP) for Harwich
Sir Anthony Deane (1638–1721), Mayor of Harwich, naval architect, Master Shipwright, commercial shipbuilder and MP
Lieutenant-General Edward Harvey (1718–1788), Adjutant-General to the Forces and MP for Harwich 1768 to 1778
Tony Newton, Baron Newton of Braintree OBE, PC, DL (1937–2012), Conservative politician and former Cabinet member
Nick Alston (born 1952), Conservative Essex Police and Crime Commissioner
Bernard Jenkin (born 1959), Conservative politician, MP for Harwich and North Essex since 2010
Andrew Murrison VR (born 1961), doctor and Conservative Party politician, MP 2001/2010
Dan Rowe
Sport Harwich is home to Harwich & Parkeston F.C.; Harwich and Dovercourt RFC; Harwich Rangers FC; Sunday Shrimpers; Harwich & Dovercourt Sailing Club; Harwich, Dovercourt & Parkeston Swimming Club; Harwich & Dovercourt Rugby Union Football Club; Harwich & Dovercourt Cricket Club; and Harwich Runners who, with support from Harwich Swimming Club, host the annual Harwich Triathlons. Arms See also Harwich Force Harwich Redoubt Harwich (UK Parliament constituency) Harwich and Dovercourt High School Harwich Lifeboat Station Harwich Mayflower Heritage Centre Notes References External links Harwich Town Council The Harwich Society Port cities and towns in the East of England Port cities and towns of the North Sea Ports and harbours of Essex Towns in Essex Populated coastal places in Essex Tendring
https://en.wikipedia.org/wiki/Hendrick%20Avercamp
Hendrick Avercamp
Hendrick Avercamp (January 27, 1585 (bapt.) – May 15, 1634 (buried)) was a Dutch painter. Avercamp was born in Amsterdam, where he studied with the Danish-born portrait painter Pieter Isaacks (1569–1625), and perhaps also with David Vinckboons. In 1608 he moved from Amsterdam to Kampen in the province of Overijssel. Avercamp was deaf and mute and was known as "de Stomme van Kampen" (the mute of Kampen). As one of the first landscape painters of the 17th-century Dutch school, he specialized in painting the Netherlands in winter. Avercamp's paintings are colorful and lively, with carefully crafted images of the people in the landscape. His works give a vivid depiction of sport and leisure in the Netherlands at the beginning of the 17th century. Many of Avercamp's paintings feature people ice skating on frozen lakes. Avercamp's work enjoyed great popularity, and he sold his drawings, many of which were tinted with water-color, as finished pictures to be pasted into the albums of collectors. The Royal Collection has an outstanding collection of his works. Avercamp died in Kampen and was interred there in the Sint Nicolaaskerk.

Artwork

Avercamp probably painted in his studio on the basis of sketches he had made in the winter. He was famous even abroad for his winter landscapes. His passion for painting skating figures probably came from his childhood, when he practiced skating with his parents. The last quarter of the 16th century, during which Avercamp was born, was one of the coldest periods of the Little Ice Age. The Flemish painting tradition is mainly expressed in Avercamp's early work, which is consistent with the landscapes of Pieter Bruegel the Elder. Avercamp painted landscapes with a high horizon and many figures, each busy with some activity. The paintings are narrative, with many anecdotes. For instance, included in the painting Winter Landscape with Skaters are several prurient details: a couple making love, naked buttocks, and a peeing male.
Later in his life, rendering the atmosphere also became important in his work, and the horizon gradually dropped beneath an ever larger expanse of sky. Avercamp used the painting technique of aerial perspective: depth is suggested by a change of color in the distance. Objects in the foreground, such as trees or a boat, are painted in richer colors, while farther objects are lighter. This technique strengthens the impression of depth in the painting. Avercamp also painted cattle and seascapes. Sometimes Avercamp used paper frames, which were a cheap alternative to oil paintings. He first drew with pen and ink; this work was then covered with finishing paint, leaving the contours of the drawing visible. Even with this technique, Avercamp could show the pale wintry colors and nuances of the ice. Avercamp produced about a hundred paintings. The bulk of his artwork can be seen in the Rijksmuseum in Amsterdam and the Mauritshuis in The Hague. From November 20, 2009 to February 15, 2010 the Rijksmuseum presented an exhibition of his work entitled "Little Ice Age".

References

External links

Avercamp at the WebMuseum
Avercamp at Museum Syndicate
Avercamp at Rijksmuseum Amsterdam
Masterpieces from Hendrick Avercamp - Online Exhibition at Owlstand

1585 births
1634 deaths
Dutch Golden Age painters
Dutch male painters
Painters from Amsterdam
People from Kampen, Overijssel
Mute people
Deaf people from the Netherlands
Deaf artists
https://en.wikipedia.org/wiki/Hans%20Baldung
Hans Baldung
Hans Baldung (1484 or 1485 – September 1545), called Hans Baldung Grien (an early nickname, from his predilection for the colour green), was an artist in painting and printmaking, an engraver, draftsman, and stained glass artist, who was considered the most gifted student of Albrecht Dürer and whose art belongs to both the German Renaissance and Mannerism. Throughout his lifetime, he developed a distinctive style, full of colour, expression and imagination. His talents were varied, and he produced a great and extensive variety of work including portraits, woodcuts, drawings, tapestries, altarpieces, stained glass, allegories and mythological motifs. Baldung was born and raised in Schwäbisch Gmünd (Swabian Gmünd), East Württemberg. At the age of 26, he married Margaretha Härlerin (née Herlin), with whom he had one child: Margarethe Baldungin.

Life

Early life, 1484–1500

Hans Baldung was the son of Johann Baldung, a university-educated jurist who from 1492 held the office of legal adviser to the bishop of Strasbourg (Albert of Bavaria), and of Margarethe Herlin, daughter of Arbogast Herlin, a man of some property but unknown occupation. With his family living in Strasbourg, Hans served his apprenticeship there under an artist who remains unknown. His exact date of birth is unknown: he was born in 1484 or 1485 in the small free city of Schwäbisch Gmünd (formerly Gmünd), a free city of the Empire in the East Württemberg region of former Swabia, Germany, into a family of intellectuals, academics and professionals; he died in Strasbourg in September 1545. His uncle, Hieronymus Baldung, was a doctor of medicine; Hieronymus's son Pius Hieronymus, Hans' cousin, taught law at Freiburg and became chancellor of the Tyrol by 1527. In fact, Baldung was the first male in his family not to attend university, but was one of the first German artists to come from an academic family.
His earliest training as an artist began around 1500 in the Upper Rhineland under an artist from Strasbourg. He perfected his art in Albrecht Dürer's studio in Nuremberg between 1503 and 1507.

Life as a student of Dürer

Beginning in 1503, during the "Wanderjahre" ("hiking years") required of artists of the time, Baldung became an assistant to Albrecht Dürer. Here, he may have been given his nickname "Grien". This name is thought to have come foremost from a preference for the color green: he seems to have worn green clothing. He probably also got this nickname to distinguish him from at least two other Hanses in Dürer's shop, Hans Schäufelein and Hans Suess von Kulmbach. He later included the name "Grien" in his monogram, and it has also been suggested that the name came from, or consciously echoed, "grienhals", a German word for witch—one of his signature themes. Hans quickly picked up Dürer's influence and style, and they became friends: Baldung seems to have managed Dürer's workshop during the latter's second sojourn in Venice. On a later trip to the Netherlands in 1521, Dürer's account book records that he took with him and sold prints by Baldung. On Dürer's death Baldung was sent a lock of his hair, which suggests a close friendship. Near the end of his Nuremberg years, Grien oversaw the production by Dürer of stained glass, woodcuts and engravings, and therefore developed an affinity for these media and for the Nuremberg master's handling of them.

Strasbourg

In 1509, when Baldung's time in Nuremberg was complete, he moved back to Strasbourg and became a citizen there. He became a celebrity of the town, and received many important commissions. The following year he married Margarethe Herlin, a local merchant's daughter, joined the guild "Zur Steltz", opened a workshop, and began signing his works with the HGB monogram that he used for the rest of his career.
His style became much more deliberately individual—a tendency art historians used to term "mannerist." He stayed in Freiburg im Breisgau in 1513–1516, where he made, among other things, the high altarpiece of the Freiburg Minster.

Witchcraft and religious imagery

In addition to traditional religious subjects, Baldung was concerned during these years with the profane theme of the imminence of death and with scenes of sorcery and witchcraft. He helped introduce supernatural and erotic themes into German art, although these were already amply present in Dürer's work. Most famously, he depicted witches, also a local interest: Strasbourg's humanists studied witchcraft and its bishop was charged with finding and prosecuting witches. His most characteristic works in this area are small in scale and mostly in the medium of drawing; these include a series of puzzling, often erotic allegories and mythological works executed in quill pen and ink and white body color on primed paper. The number of Hans Baldung's religious works diminished with the Protestant Reformation, which generally repudiated church art as either wasteful or idolatrous. But earlier, around the same time that he produced an important chiaroscuro woodcut of Adam and Eve, the artist became interested in themes related to death, the supernatural, witchcraft, sorcery, and the relation between the sexes. Baldung's fascination with witchcraft began early, with his first chiaroscuro woodcut print in 1510, and lasted to the end of his career. Hans Baldung Grien's work depicting witches was produced in the first half of the 16th century, before witch hunting became a widespread cultural phenomenon in Europe. According to one view, Baldung's work did not represent widespread cultural beliefs at the time of creation but reflected largely individual choices.
On the other hand, through his family Baldung stood closer to the leading intellectuals of the day than any of his contemporaries, and could draw on a burgeoning literature on witchcraft, as well as on developing juridical and forensic strategies for witch-hunting. Baldung never worked directly with any Reformation leaders to spread religious ideals through his artwork, although, living in fervently religious Strasbourg, he was a supporter of the movement and worked on the high altar in the city of Münster, Germany. Baldung was the first artist to heavily incorporate witches and witchcraft into his artwork (his mentor Albrecht Dürer had sporadically included them, but not as prominently as Baldung would). During his lifetime there were few witch trials; therefore, some believe Baldung's depictions of witchcraft to be based on folklore rather than the cultural beliefs of his time. By contrast, throughout the early sixteenth century, humanism became very popular, and within this movement Latin literature was valorized, particularly poetry and satire, some of which included views on witches that could be combined with witch lore massively accumulated in works such as the Malleus Maleficarum. Baldung partook in this culture, producing not only many works depicting Strasbourg humanists and scenes from ancient art and literature, but also what an earlier literature on the artist described as a satirical take in his depiction of witches. Gert von der Osten comments on this aspect of "Baldung [treating] his witches humorously, an attitude that reflects the dominant viewpoint of the humanists in Strasbourg at this time who viewed witchcraft as 'lustig,' a matter that was more amusing than serious". However, the separation of a satirical tone from deadly serious vilifying intent proves as difficult to maintain for Baldung as it is for many other artists, including his rough contemporary Hieronymus Bosch.
Baldung's art simultaneously represents ideals presented in ancient Greek and Roman poetry, such as the pre-16th-century notion that witches could control the weather, to which Baldung is believed to have alluded in his 1523 oil painting "Weather Witches", which showcases two attractive, naked witches in front of a stormy sky. Baldung also regularly incorporated scenes of witches flying in his art, a characteristic that had been contested centuries before his artwork came into being. Flying was inherently attributed to witches by those who believed in the myth of the Sabbath (without their ability to fly, the myth fragmented); Baldung depicted such flight in works like "Witches Preparing for the Sabbath Flight" (1514).

Work

Painting

Throughout his life, Baldung painted numerous portraits, known for their sharp characterizations. While Dürer rigorously details his models, Baldung's style differs by focusing more on the personality of the represented character, an abstract conception of the model's state of mind. Baldung settled eventually in Strasbourg and then in Freiburg im Breisgau, where he executed what is held to be his masterpiece. Here he painted an eleven-panel altarpiece for the Freiburg Cathedral, still intact today, depicting scenes from the life of the Virgin, including The Annunciation, The Visitation, The Nativity, The Flight into Egypt, The Crucifixion, Four Saints and The Donators. These depictions were a large part of the artist's greater body of work containing several renowned pieces of the Virgin. The earliest pictures assigned to him by some are altar-pieces with the monogram H. B. interlaced, and the date of 1496, in the monastery chapel of Lichtenthal near Baden-Baden. Another early work is a portrait of the emperor Maximilian, drawn in 1501 on a leaf of a sketch-book now in the print-room at Karlsruhe. "The Martyrdom of St Sebastian" and "The Epiphany" (now Berlin, 1507) were painted for the market-church of Halle in Saxony.
Baldung's prints, though Düreresque, are very individual in style, and often in subject. They show little direct Italian influence. His paintings are less important than his prints. He worked mainly in woodcut, although he made six engravings, one very fine. He joined in the fashion for chiaroscuro woodcuts, adding a tone block to a woodcut of 1510. Most of his hundreds of woodcuts were commissioned for books, as was usual at the time; his "single-leaf" woodcuts (i.e. prints not for book illustration) are fewer than 100, though no two catalogues agree as to the exact number. Unconventional as a draughtsman, his treatment of human form is often exaggerated and eccentric (hence his linkage, in the art historical literature, with European Mannerism), whilst his ornamental style—profuse, eclectic, and akin to the self-consciously "German" strain of contemporary limewood sculptors—is equally distinctive. Though Baldung has been commonly called the Correggio of the north, his compositions are a curious medley of glaring and heterogeneous colours, in which pure black is contrasted with pale yellow, dirty grey, impure red and glowing green. Flesh is a mere glaze under which the features are indicated by lines. His works are notable for their individualistic departure from the Renaissance composure of his model, Dürer, for the wild and fantastic strength that some of them display, and for their remarkable themes. In the field of painting, his Eve, the Serpent and Death (National Gallery of Canada) shows his strengths well. There is special force in the "Death and the Maiden" panel of 1517 (Basel), in the "Weather Witches" (Frankfurt), in the monumental panels of "Adam" and "Eve" (Madrid), and in his many powerful portraits. 
Baldung's most sustained effort is the altarpiece of Freiburg, where the Coronation of the Virgin, and the Twelve Apostles, the Annunciation, Visitation, Nativity and Flight into Egypt, and the Crucifixion, with portraits of donors, are executed with some of that fanciful power that Martin Schongauer bequeathed to the Swabian school. He is well known as a portrait painter; his works include historical pictures and portraits, among the latter those of Maximilian I and Charles V. His bust of Margrave Philip in the Munich Gallery tells us that he was connected with the reigning family of Baden as early as 1514. At a later period he had sittings with Margrave Christopher of Baden, Ottilia his wife, and all their children, and the picture containing these portraits is still in the gallery at Karlsruhe. Like Dürer and Cranach, Baldung supported the Protestant Reformation. He was present at the diet of Augsburg in 1518, and one of his woodcuts represents Luther in quasi-saintly guise, under the protection of (or being inspired by) the Holy Spirit, which hovers over him in the shape of a dove.

Selected works

Phyllis and Aristotle, Paris, Louvre, 1503
Two altar wings (Charles the Great, St. George), Augsburg, State Gallery
Portrait of a Youth, Hampton Court, Royal Collection, 1509
The birth of Christ, Basel, Kunstmuseum Basel, 1510
The Adoration of the Magi, Dessau, Anhalt Art Gallery, 1510
The Witches, 1510
The Mass of St. Gregory, Cleveland, Cleveland Museum of Art, 1511
The crucifixion of Christ, Basel, Kunstmuseum Basel, 1512
The crucifixion of Christ, Berlin, Gemäldegalerie, 1512
The Holy Trinity, London, National Gallery, 1512
The Rest on the Flight into Egypt, Vienna, Paintings Gallery of the Academy of Fine Arts, 1513
Portrait of a Man, London, National Gallery, 1514
The Lamentation of Christ, Berlin, Gemäldegalerie, 1516
Death and the Maiden, Basel, Kunstmuseum Basel, 1517
The Baptism of Christ, Frankfurt am Main, Städel, 1518
Venus with Cupid, Otterlo, Rijksmuseum Kröller-Müller, 1525
Pyramus and Thisbe, Berlin, Gemäldegalerie, around 1530
Ambrosius Volmar Keller, Strasbourg, Musée de l’Œuvre Notre-Dame, 1538
Christ as a Gardener, Darmstadt, Hessen State Museum, 1539
Adam and Eve, Florence, Galleria degli Uffizi
The unlikely couple, Liverpool, Walker Art Gallery, 1527
The Three Ages of Man and Death, Museo del Prado, Madrid
Mercury as a Planet God, Stockholm, Nationalmuseum, 1530–1540
Harmony, or The Three Graces (Die Jugend (Die drei Grazien), "The Youth (The Three Graces)"), Museo del Prado, between 1541 and 1544

See also

Early Renaissance painting
Old master print

Notes

References

Citations

Attribution:

Bibliography

External links

Prints & People: A Social History of Printed Pictures, an exhibition catalog from The Metropolitan Museum of Art (fully available online as PDF), which contains material on Hans Baldung (see index)
Article: Sacred and Profane: Christian Imagery and Witchcraft in Prints by Hans Baldung Grien, by Stan Parchin
"Hans Baldung Grien", National Gallery of Art
Hans Baldung in the "A World History of Art"
Several of Baldung's witches and erotic prints

1480s births
1545 deaths
People from Schwäbisch Gmünd
16th-century German painters
German male painters
German Renaissance painters
Woodcut designers
https://en.wikipedia.org/wiki/Hammered%20dulcimer
Hammered dulcimer
The hammered dulcimer (also called the hammer dulcimer, dulcimer, santouri, or tympanon) is a percussion-stringed instrument which consists of strings typically stretched over a trapezoidal resonant sound board. The hammered dulcimer is set before the musician, who in more traditional styles may sit cross-legged on the floor, or in a more modern style may stand or sit at a wooden support with legs. The player holds a small spoon-shaped mallet hammer in each hand to strike the strings. The Graeco-Roman name dulcimer ("sweet song") derives from the Latin dulcis (sweet) and the Greek melos (song). The dulcimer, in which the strings are beaten with small hammers, originated from the psaltery, in which the strings are plucked. Hammered dulcimers and other similar instruments are traditionally played in Iraq, India, Iran, Southwest Asia, China, Korea, and parts of Southeast Asia, Central Europe (Hungary, Slovenia, Romania, Slovakia, Poland, the Czech Republic, Switzerland (particularly Appenzell), Austria and Bavaria), the Balkans, Eastern Europe (Ukraine and Belarus), and Scandinavia. The instrument is also played in the United Kingdom (Wales, East Anglia, Northumbria) and the US, where its traditional use in folk music saw a notable revival in the late 20th century.

Strings and tuning

A dulcimer usually has two bridges, a bass bridge near the right and a treble bridge on the left side. The bass bridge holds up bass strings, which are played to the left of the bridge. The treble strings can be played on either side of the treble bridge. In the usual construction, playing them on the left side gives a note a fifth higher than playing them on the right of the bridge. The dulcimer comes in various sizes, identified by the number of strings that cross each of the bridges. A 15/14, for example, has 15 strings crossing the treble bridge and 14 crossing the bass bridge, and can span three octaves.
The strings of a hammered dulcimer are usually found in pairs, two strings for each note (though some instruments have three or four strings per note). Each set of strings is tuned in unison and is called a course. As with a piano, the purpose of using multiple strings per course is to make the instrument louder, although as the courses are rarely in perfect unison, a chorus effect like that of a mandolin usually results. A hammered dulcimer, like an autoharp, harp, or piano, requires a tuning wrench for tuning, since the dulcimer's strings are wound around tuning pins with square heads. (Ordinarily, 5 mm "zither pins" are used, similar to, but smaller in diameter than, piano tuning pins, which come in various sizes ranging upwards from "1/0" or 7 mm.) The strings of the hammered dulcimer are often tuned according to a circle-of-fifths pattern. Typically, the lowest note (often a G or D) is struck at the lower right-hand of the instrument, just to the left of the right-hand (bass) bridge. As a player strikes the courses above in sequence, they ascend following a repeating sequence of two whole steps and a half step. With this tuning, a diatonic scale is broken into two tetrachords, or groups of four notes. For example, on an instrument with D as the lowest note, the D major scale is played starting in the lower-right corner and ascending the bass bridge: D – E – F♯ – G. This is the lower tetrachord of the D major scale. At this point the player returns to the bottom of the instrument and shifts to the treble strings to the right of the treble bridge to play the higher tetrachord: A – B – C♯ – D. The player can continue up the scale on the right side of the treble bridge with E – F♯ – G – A – B, but the next note will be C♮, not C♯, so he or she must switch to the left side of the treble bridge (and closer to the player) to continue the D major scale. See the drawing on the left above, in which "DO" would correspond to D (see Movable do solfège).
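The course layout described above can be sketched numerically. The following Python snippet is only an illustration of the tuning logic (the names and functions are invented for this example, not part of any dulcimer software): courses ascend by a repeating two-whole-steps-and-a-half-step pattern (2, 2, 1 semitones), and each treble course sounds a fifth (seven semitones) higher on the left side of the treble bridge than on the right.

```python
# Illustrative sketch of the hammered dulcimer tuning described above.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_name(semitones_from_c):
    """Name of a pitch given as semitones above C (octave ignored)."""
    return NOTE_NAMES[semitones_from_c % 12]

def course_pitches(start, n_courses):
    """Pitches of successive courses, ascending two whole steps and a half step."""
    steps = [2, 2, 1]  # whole, whole, half (in semitones), repeating
    pitches = [start]
    for i in range(n_courses - 1):
        pitches.append(pitches[-1] + steps[i % 3])
    return pitches

# Right side of the treble bridge, starting on A (9 semitones above C):
right = course_pitches(9, 9)
# The same courses struck on the left side sound a fifth higher:
left = [p + 7 for p in right]

print([note_name(p) for p in right])  # ['A', 'B', 'C#', 'D', 'E', 'F#', 'G', 'A', 'B']
print([note_name(p) for p in left])   # ['E', 'F#', 'G#', 'A', 'B', 'C#', 'D', 'E', 'F#']
```

Reading the two rows together reproduces the bridge crossing in the text: ascending the right side runs A–B–C♯–D–E–F♯–G and would next give C♮ rather than C♯, while the course sounding F♯ on the right yields the needed C♯ on the left.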
The shift from the bass bridge to the treble bridge is required because the bass bridge's fourth string G is the start of the lower tetrachord of the G scale. The player could go on up a couple of notes (G – A – B), but the next note will be a flatted seventh (C natural in this case), because this note is drawn from the G tetrachord. This D major scale with a flatted seventh is the Mixolydian mode in D. The same thing happens as the player goes up the treble bridge – after getting to La (B in this case), one has to go to the left side of the treble bridge. Moving from the left side of the bass bridge to the right side of the treble bridge is analogous to moving from the right side of the treble bridge to the left side of the treble bridge. The whole pattern can be shifted up by three courses, so that instead of a D major scale one would have a G major scale, and so on. This transposes one equally tempered scale to another. Shifting down three courses transposes the D major scale to A major, but of course the first Do–Re–Mi would be shifted off the instrument. This tuning results in most, but not all, notes of the chromatic scale being available. To fill in the gaps, many modern dulcimer builders include extra short bridges at the top and bottom of the soundboard, where extra strings are tuned to some or all of the missing pitches. Such instruments are often called "chromatic dulcimers", as opposed to the more traditional "diatonic dulcimers". The tetrachord markers found on the bridges of most hammered dulcimers in the English-speaking world were introduced by the American player and maker Sam Rizzetta in the 1960s. In the Alps there are also chromatic dulcimers with crossed strings, whose rows lie a whole tone apart. This chromatic Salzburger Hackbrett was developed in the mid-1930s from the diatonic hammered dulcimer by Tobi Reizer and his son, along with Franz Peyer and Heinrich Bandzauner.
In the postwar period it was one of the instruments taught in state-sponsored music schools. Hammered dulcimers of non-European descent may have other tuning patterns, and builders of European-style dulcimers sometimes experiment with alternate tuning patterns.

Hammers

The instrument is referred to as "hammered" in reference to the small mallets (referred to as hammers) that players use to strike the strings. Hammers are usually made of wood (most likely hardwoods such as maple, cherry, padauk, oak, walnut, or any other hardwood), but can also be made from any material, including metal and plastic. In the Western hemisphere, hammers are usually stiff, but in Asia, flexible hammers are often used. The head of the hammer can be left bare for a sharp attack sound, or can be covered with adhesive tape, leather, or fabric for a softer sound. Two-sided hammers are also available. The heads of two-sided hammers are usually oval or round. Most of the time, one side is left as bare wood while the other side may be covered in leather or a softer material such as piano felt. Several traditional players have used hammers that differ substantially from those in common use today. Paul Van Arsdale (1920–2018), a player from upstate New York, used flexible hammers made from hacksaw blades, with leather-covered wooden blocks attached to the ends (these were modeled after the hammers used by his grandfather, Jesse Martin). The Irish player John Rea (1915–1983) used hammers made of thick steel wire, which he made himself from old bicycle spokes wrapped with wool. Billy Bennington (1900–1986), a player from Norfolk, England, used cane hammers bound with wool.

Variants and adaptations

The hammered dulcimer was extensively used during the Middle Ages in England, France, Italy, Germany, the Netherlands, and Spain. Although it had a distinctive name in each country, it was everywhere regarded as a kind of psalterium.
The importance of the method of setting the strings in vibration by means of hammers, and its bearing on the acoustics of the instrument, were recognized only when the invention of the pianoforte had become a matter of history. It was then perceived that the psalterium (in which the strings were plucked) and the dulcimer (in which they were struck), when provided with keyboards, would give rise to two distinct families of instruments, differing essentially in tone quality, in technique and in capabilities. The evolution of the psalterium resulted in the harpsichord; that of the dulcimer produced the pianoforte.

Around the world

Versions of the hammered dulcimer, each of which has its own distinct manner of construction and playing style, are used throughout the world:

Austria – Hackbrett
Belarus – tsymbaly (цымбалы)
Belgium – hakkebord
Brazil – saltério
Cambodia – khim
Canada – hammered dulcimer
China – yangqin (扬琴, formerly 洋琴)
Croatia – cimbal, cimbale, cimbule
Czech Republic – cimbál
Denmark – hakkebræt
France – tympanon
Germany – Hackbrett
Greece – santouri
Hungary – cimbalom
India – santoor
Iran – santur
Iraq – santur
Ireland – tiompan
Israel – דולצימר פטישים
Italy – salterio
Japan – darushimaa (ダルシマー)
Korea – yanggeum (양금)
Laos – khim
Latgalia (Latvia) – cymbala
Latvia – cimbole
Lithuania – cimbalai, cimbolai
Mexico – salterio
Mongolia – yoochin (ёочин or ёчин)
Netherlands – hakkebord
Norway – hakkebrett
Poland – cymbały
Portugal – saltério
Romania – ţambal
Russia – цимбалы, dultsimer (дульцимер)
Serbia – цимбал (tsimbal)
Slovakia – cimbal
Slovenia – cimbale, oprekelj
Spain (and Spanish-speaking countries) – salterio, dulcémele
Sweden – hackbräde, hammarharpa
Switzerland – Hackbrett
Thailand – khim
Turkey – santur
Ukraine – tsymbaly (цимбали)
United Kingdom – hammered dulcimer
United States – hammered dulcimer
Uzbekistan – chang
Vietnam – đàn tam thập lục (lit. "36 strings")
Yiddish – tsimbl

See also

List of hammered dulcimer players
Santoor – India
Santur § Santurs from around the world
Yangqin – China
Santouri – Greece

References

Further reading

Gifford, Paul M. (2001), The Hammered Dulcimer: A History, The Scarecrow Press, Inc. A comprehensive history of the hammered dulcimer and its variants.
Kettlewell, David (1976), The Dulcimer, PhD thesis. History and playing traditions around the world; web version at https://web.archive.org/web/20110717071302/http://www.new-renaissance.net/dulcimer.

External links

Santur on Nay-Nava, the encyclopedia of Persian music instruments
Pete Rushefsky, "Jewish Strings: An Introduction to the Klezmer Tsimbl" (related to the hammered dulcimer) (archive from 27 December 2009)
Smithsonian Institution booklet on hammered dulcimer history and playing
Smithsonian Institution booklet on making a hammered dulcimer (by Sam Rizzetta)
Hammered dulcimers from Polish collections (Polish folk musical instruments)
East Anglian Dulcimers (ongoing historic research by John & Katie Howson about dulcimer players and makers from Norfolk, Suffolk, Cambridgeshire and Essex, UK)

Hammered box zithers
Austrian musical instruments
Early musical instruments
English musical instruments
German musical instruments
Celtic musical instruments
Hungarian musical instruments
Polish musical instruments
Romanian musical instruments
Arabic musical instruments
American musical instruments
Welsh musical instruments
Ukrainian musical instruments
https://en.wikipedia.org/wiki/Humanae%20vitae
Humanae vitae
Humanae vitae (Latin: Of Human Life) is an encyclical written by Pope Paul VI and dated 25 July 1968. The text was issued at a Vatican press conference on 29 July. Subtitled On the Regulation of Birth, it re-affirmed the teaching of the Catholic Church regarding married love, responsible parenthood, and the rejection of artificial contraception. In formulating his teaching, he explained why he did not accept the conclusions of the Pontifical Commission on Birth Control established by his predecessor, Pope John XXIII, a commission he himself had expanded. Mainly because of its restatement of the Church's opposition to artificial contraception, the encyclical was politically controversial. It affirmed traditional Church moral teaching on the sanctity of life and the procreative and unitive nature of conjugal relations. It was the last of Paul's seven encyclicals.

Summary

Affirmation of traditional teaching

In this encyclical Paul VI reaffirmed the Catholic Church's view of marriage and marital relations and a continued condemnation of "artificial" birth control. Two papal committees and numerous independent experts had looked into the latest advancement of science and math on the question of artificial birth control, and their work was noted by the Pope in his encyclical. The expressed views of Paul VI reflected the teachings of his predecessors, especially Pius XI, Pius XII and John XXIII, all of whom had insisted on the divine obligations of the marital partners in light of their partnership with God the creator.

Doctrinal basis

Paul VI himself, even as commission members issued their personal views over the years, always reaffirmed the teachings of the Church, repeating them more than once in the first years of his pontificate. To Pope Paul VI, marital relations are much more than a union of two people.
In his view, they constitute a union of the loving couple with a loving God, in which the two persons generate the matter for the body, while God creates the unique soul of a person. For this reason, Paul VI teaches in the first sentence of Humanae Vitae that the "transmission of human life is a most serious role in which married people collaborate freely and responsibly with God the Creator." Because this is a divine partnership, Paul VI does not allow for arbitrary human decisions, which may limit divine providence. According to Paul VI, marital relations are a source of great joy, but also of difficulties and hardships. The question of human procreation with God exceeds, in the view of Paul VI, specific disciplines such as biology, psychology, demography or sociology. According to Paul VI, married love takes its origin from God, who is love, and from this basic dignity, he defines his position: The encyclical opens with an assertion of the competency of the magisterium of the Catholic Church to decide questions of morality. It then goes on to observe that circumstances often dictate that married couples should limit the number of children, and that the sexual act between husband and wife is still worthy even if it can be foreseen not to result in procreation. Nevertheless, it is held that the sexual act must retain its intrinsic relationship to the procreation of human life. Every action specifically intended to prevent procreation is forbidden, except in medically necessary circumstances; such action is held to directly contradict the moral order which was established by God. Therapeutic means necessary to cure diseases are exempted, even if a foreseeable impediment to procreation should result, but only if infertility is not directly intended. Abortion, even for therapeutic reasons, is absolutely forbidden, as is sterilization, even if temporary. 
Therapeutic means which induce infertility are allowed (e.g., hysterectomy), if they are not specifically intended to cause infertility (e.g., the uterus is cancerous, so the preservation of life is intended). If there are well-grounded reasons (arising from the physical or psychological condition of husband or wife, or from external circumstances), natural family planning methods (abstaining from intercourse during certain parts of the menstrual cycle) are allowed, since they take advantage of a faculty provided by nature. The acceptance of artificial methods of birth control is then claimed to result in several negative consequences, among them a general lowering of moral standards resulting from sex without consequences, the danger that men may reduce women to being a mere instrument for the satisfaction of [their] own desires, abuse of power by public authorities, and a false sense of autonomy. Appeal to natural law and conclusion Public authorities should oppose laws which undermine natural law; scientists should further study effective methods of natural birth control; doctors should further familiarize themselves with this teaching, in order to be able to give advice to their patients; priests must spell out clearly and completely the Church's teaching on marriage. The encyclical acknowledges that "perhaps not everyone will easily accept this particular teaching", but that "...it comes as no surprise to the church that she, no less than her Divine founder, is destined to be a sign of contradiction." Noted is the duty of proclaiming the entire moral law, "both natural and evangelical." The encyclical also points out that the Roman Catholic Church cannot "declare lawful what is in fact unlawful", because she is concerned with "safeguarding the holiness of marriage, in order to guide married life to its full human and Christian perfection." This is to be the priority for his fellow bishops and priests and lay people. 
The Pope predicts that future progress in social, cultural and economic spheres will make marital and family life more joyful, provided God's design for the world is faithfully followed. The encyclical closes with an appeal to observe the natural laws of the Most High God. "These laws must be wisely and lovingly observed." History Origins There had been a long-standing general Christian prohibition on contraception and abortion, with such Church Fathers as Clement of Alexandria and Saint Augustine condemning the practices. It was not until the 1930 Lambeth Conference that the Anglican Communion allowed for contraception in limited circumstances. Mainline Protestant denominations have since removed prohibitions against artificial contraception. In a partial reaction, Pope Pius XI wrote the encyclical Casti connubii (On Christian Marriage) in 1930, reaffirming the Catholic Church's belief in various traditional Christian teachings on marriage and sexuality, including the prohibition of artificial birth control even within marriage. Casti connubii opposed contraception but, regarding natural family planning, allowed married couples to use their nuptial rights "in the proper manner" when, because of either time or defects, new life could not be brought forth. The commission of John XXIII With the appearance of the first oral contraceptives in 1960, dissenters in the Church argued for a reconsideration of the Church positions. In 1963 Pope John XXIII established a commission of six European non-theologians to study questions of birth control and population. It met once in 1963 and twice in 1964. As Vatican Council II was concluding, Pope Paul VI enlarged it to fifty-eight members, including married couples, laywomen, theologians and bishops. The last document issued by the council (Gaudium et spes) contained a section titled "Fostering the Nobility of Marriage" (1965, nos. 47-52), which discussed marriage from the personalist point of view. 
The "duty of responsible parenthood" was affirmed, but the determination of licit and illicit forms of regulating birth was reserved to Pope Paul VI. In the spring of 1966, following the close of the council, the commission held its fifth and final meeting, having been enlarged again to include sixteen bishops as an executive committee. The commission was only consultative but it submitted a report approved by a majority of 64 members to Paul VI. It proposed he approve of artificial contraception without distinction of the various means. A minority of four members opposed this report and issued a parallel report to the Pope. Arguments in the minority report, against change in the church's teaching, were that "we should have to concede frankly that the Holy Spirit had been on the side of the Protestant churches in 1930" (when Casti connubii was promulgated) and that "it should likewise have to be admitted that for a half a century the Spirit failed to protect Pius XI, Pius XII, and a large part of the Catholic hierarchy from a very serious error." After two more years of study and consultation, the pope issued Humanae vitae, which removed any doubt that the Church views hormonal anti-ovulants as contraceptive. He explained why he did not accept the opinion of the majority report of the commission (1968, #6). Arguments were raised in the decades that followed that his decision has never passed the condition of "reception" to become church doctrine. Drafting of the Encyclical In his role as Theologian of the Pontifical Household Mario Luigi Ciappi advised Pope Paul VI during the drafting of Humanae vitae. Ciappi, a doctoral graduate of the Pontificium Athenaeum Internationale Angelicum, the future Pontifical University of Saint Thomas Aquinas, Angelicum, served as professor of dogmatic theology there and was Dean of the Angelicum's Faculty of Theology from 1935 to 1955. 
According to George Weigel, Paul VI named Archbishop Karol Wojtyła (later Pope John Paul II) to the commission, but Polish government authorities would not permit him to travel to Rome. Wojtyła had earlier defended the church's position from a philosophical standpoint in his 1960 book Love and Responsibility. Wojtyła's position was strongly considered and it was reflected in the final draft of the encyclical, although much of his language and arguments were not incorporated. Weigel attributes much of the poor reception of the encyclical to the omission of many of Wojtyła's arguments. In 2017, anticipating the 50th anniversary of the encyclical, four theologians led by Mgr. Gilfredo Marengo, a professor of theological anthropology at the Pontifical John Paul II Institute for Studies on Marriage and Family, launched a research project he called "a work of historical-critical investigation without any aim other than reconstructing as well as possible the whole process of composing the encyclical". Using the resources of the Vatican Secret Archives and the Congregation for the Doctrine of the Faith, they hope to detail the writing process and the interaction between the commission, publicity surrounding the commission's work, and Paul's own authorship. Highlights Faithfulness to God's design 13. Men rightly observe that a conjugal act imposed on one's partner without regard to his or her condition or personal and reasonable wishes in the matter, is no true act of love, and therefore offends the moral order in its particular application to the intimate relationship of husband and wife. If they further reflect, they must also recognize that an act of mutual love which impairs the capacity to transmit life which God the Creator, through specific laws, has built into it, frustrates His design which constitutes the norm of marriage, and contradicts the will of the Author of life. 
Hence to use this divine gift while depriving it, even if only partially, of its meaning and purpose, is equally repugnant to the nature of man and of woman, and is consequently in opposition to the plan of God and His holy will. But to experience the gift of married love while respecting the laws of conception is to acknowledge that one is not the master of the sources of life but rather the minister of the design established by the Creator. Just as man does not have unlimited dominion over his body in general, so also, and with more particular reason, he has no such dominion over his specifically sexual faculties, for these are concerned by their very nature with the generation of life, of which God is the source. "Human life is sacred—all men must recognize that fact," Our predecessor Pope John XXIII recalled. "From its very inception it reveals the creating hand of God." Lawful therapeutic means 15. ...the Church does not consider at all illicit the use of those therapeutic means necessary to cure bodily diseases, even if a foreseeable impediment to procreation should result therefrom — provided such impediment is not directly intended. Recourse to infertile periods 16. ...If therefore there are well-grounded reasons for spacing births, arising from the physical or psychological condition of husband or wife, or from external circumstances, the Church teaches that married people may then take advantage of the natural cycles immanent in the reproductive system and engage in marital intercourse only during those times that are infertile, thus controlling birth in a way which does not in the least offend the moral principles which We have just explained. Concern of the Church 18. It is to be anticipated that perhaps not everyone will easily accept this particular teaching. There is too much clamorous outcry against the voice of the Church, and this is intensified by modern means of communication. 
But it comes as no surprise to the Church that it, no less than its divine Founder, is destined to be a "sign of contradiction." The Church does not, because of this, evade the duty imposed on it of proclaiming humbly but firmly the entire moral law, both natural and evangelical. Since the Church did not make either of these laws, it cannot be their arbiter—only their guardian and interpreter. It could never be right for the Church to declare lawful what is in fact unlawful, since that, by its very nature, is always opposed to the true good of man. In preserving intact the whole moral law of marriage, the Church is convinced that it is contributing to the creation of a truly human civilization. The Church urges man not to betray his personal responsibilities by putting all his faith in technical expedients. In this way it defends the dignity of husband and wife. This course of action shows that the Church, loyal to the example and teaching of the divine Savior, is sincere and unselfish in its regard for men whom it strives to help even now during this earthly pilgrimage "to share God's life as sons of the living God, the Father of all men". Developing countries 23. We are fully aware of the difficulties confronting the public authorities in this matter, especially in the developing countries. In fact, We had in mind the justifiable anxieties which weigh upon them when We published Our encyclical letter Populorum Progressio. But now We join Our voice to that of Our predecessor John XXIII of venerable memory, and We make Our own his words: "No statement of the problem and no solution to it is acceptable which does violence to man's essential dignity; those who propose such solutions base them on an utterly materialistic conception of man himself and his life. The only possible solution to this question is one which envisages the social and economic progress both of individuals and of the whole of human society, and which respects and promotes true human values." 
No one can, without being grossly unfair, make divine Providence responsible for what clearly seems to be the result of misguided governmental policies, of an insufficient sense of social justice, of a selfish accumulation of material goods, and finally of a culpable failure to undertake those initiatives and responsibilities which would raise the standard of living of peoples and their children. Response and criticism Galileo affair comparison Cardinal Leo Joseph Suenens, a moderator of the ecumenical council, questioned "whether moral theology took sufficient account of scientific progress, which can help determine what is according to nature. I beg you, my brothers, let us avoid another Galileo affair. One is enough for the Church." In an interview in Informations Catholiques Internationales on 15 May 1969, he criticized the Pope's decision again as frustrating the collegiality defined by the Council, calling it a non-collegial or even an anti-collegial act. He was supported by Vatican II theologians such as Karl Rahner and Hans Küng, by several episcopal conferences (e.g. those of Austria, Germany, and Switzerland), as well as by several bishops, including Christopher Butler, who called it one of the most important contributions to contemporary discussion in the Church. Open dissent The publication of the encyclical marks the first time in the twentieth century that open dissent from the laity about teachings of the Church was voiced widely and publicly. The teaching has been criticized by development organizations and others who claim that it limits the methods available to fight worldwide population growth and struggle against HIV/AIDS. Within two days of the encyclical's release, a group of dissident theologians, led by Rev. 
Charles Curran, then of The Catholic University of America, issued a statement stating, "spouses may responsibly decide according to their conscience that artificial contraception in some circumstances is permissible and indeed necessary to preserve and foster the value and sacredness of marriage." Canadian bishops Two months later, the controversial "Winnipeg Statement" issued by the Canadian Conference of Catholic Bishops stated that those who cannot accept the teaching should not be considered shut off from the Catholic Church, and that individuals can in good conscience use contraception as long as they have first made an honest attempt to accept the difficult directives of the encyclical. Dutch Catechism The Dutch Catechism of 1966, based on the Dutch bishops' interpretation of the just completed Vatican Council, and the first post-Council comprehensive Catholic catechism, noted the lack of mention of artificial contraception in the Council. "As everyone can ascertain nowadays, there are several methods of regulating births. The Second Vatican Council did not speak of any of these concrete methods… This is a different standpoint than that taken under Pius XI some thirty years earlier, which was also maintained by his successor ... we can sense here a clear development in the Church, a development, which is also going on outside the Church." Soviet Union In the Soviet Union, Literaturnaja Gazeta, a publication of Soviet intellectuals, included an editorial and statement by Russian physicians against the encyclical. Ecumenical reactions Ecumenical reactions were mixed. Liberal and moderate Lutherans and the World Council of Churches were disappointed. Eugene Carson Blake criticised the concepts of nature and natural law, which, in his view, still dominated Catholic theology, as outdated. This concern dominated several articles in Catholic and non-Catholic journals at the time. 
Patriarch Athenagoras I stated his full agreement with Pope Paul VI: “He could not have spoken in any other way.” Latin America In Latin America, much support developed for the Pope and his encyclical. When World Bank President Robert McNamara declared at the 1968 Annual Meeting of the International Monetary Fund and the World Bank Group that countries permitting birth control practices would get preferential access to resources, doctors in La Paz, Bolivia, called it insulting that money should be exchanged for the conscience of a Catholic nation. In Colombia, Cardinal Aníbal Muñoz Duque declared, "if American conditionality undermines Papal teachings, we prefer not to receive one cent." The Senate of Bolivia passed a resolution stating that Humanae vitae can be discussed in its implications on individual consciences, but is of greatest significance because it defends the rights of developing nations to determine their own population policies. The Jesuit journal Sic dedicated one edition to the encyclical with supportive contributions. However, eighteen insubordinate priests, professors of theology at the Pontifical Catholic University of Chile, and the ensuing conspiracy of silence practiced by the Chilean episcopate had to be censured by the Nuncio in Santiago at the behest of Cardinal Gabriel-Marie Garrone, prefect of the Congregation for Catholic Education, eventually triggering a media conflict; Plinio Corrêa de Oliveira expressed his affliction with the lamentations of Jeremiah: "O ye all that pass through the way…" (Lamentations 1). Cardinal Martini In the book "Nighttime Conversations in Jerusalem: On the Risk of Faith", the well-known liberal Cardinal Carlo Maria Martini accused Paul VI of deliberately concealing the truth, leaving it to theologians and pastors to fix things by adapting precepts to practice: "I knew Paul VI well. With the encyclical, he wanted to express consideration for human life. 
He explained his intention to some of his friends by using a comparison: although one must not lie, sometimes it is not possible to do otherwise; it may be necessary to conceal the truth, or it may be unavoidable to tell a lie. It is up to the moralists to explain where sin begins, especially in the cases in which there is a higher duty than the transmission of life." Response of Pope Paul VI Pope Paul VI was troubled by the encyclical's reception in the West. Acknowledging the controversy, Paul VI in a letter to the Congress of German Catholics (30 August 1968) stated: "May the lively debate aroused by our encyclical lead to a better knowledge of God’s will." In March 1969, he had a meeting with one of the main critics of Humanae vitae, Cardinal Leo Joseph Suenens. Paul heard him out and said merely, "Yes, pray for me; because of my weaknesses, the Church is badly governed." And to jog the memory of his critics, he reminded them of the experience of no less a figure than Pope Saint Peter: "[n]ow I understand St Peter: he came to Rome twice, the second time to be crucified", herewith directing their attention to his rejoicing in glorifying the Lord. Increasingly convinced that "the smoke of Satan entered the temple of God from some fissure", Paul VI reaffirmed his Humanae vitae on 23 June 1978, weeks before his death, in an address to the College of Cardinals: the encyclical had followed "the confirmations of serious science" and sought to affirm the principle of respect for the laws of nature and of "a conscious and ethically responsible paternity". Legacy Polls show that most Catholics use artificial means of contraception and very few use natural family planning. However, John L. Allen Jr. 
wrote in 2008: "Three decades of bishops’ appointments by John Paul II and Benedict XVI, both unambiguously committed to Humanae Vitae, mean that senior leaders in Catholicism these days are far less inclined than they were in 1968 to distance themselves from the ban on birth control, or to soft-pedal it. Some Catholic bishops have brought out documents of their own defending Humanae Vitae." Also, developments in fertility awareness since the 1960s have given rise to natural family planning organizations such as the Billings Ovulation Method, Couple to Couple League and the Creighton Model FertilityCare System, which actively provide formal instruction on the use and reliability of natural methods of birth control. Pope John Paul I Albino Luciani's views on Humanae vitae have been debated. Journalist John L. Allen Jr. claims that "it's virtually certain that John Paul I would not have reversed Paul VI’s teaching, particularly since he was no doctrinal radical. Moreover, as Patriarch in Venice some had seen a hardening of his stance on social issues as the years went by." According to Allen, "...it is reasonable to assume that John Paul I would not have insisted upon the negative judgment in Humanae Vitae as aggressively and publicly as John Paul II did, and probably would not have treated it as a quasi-infallible teaching. It would have remained a more 'open' question". Other sources take a different view and note that during his time as Patriarch of Venice "Luciani was intransigent in his upholding of the teaching of the Church and severe with those who, through intellectual pride and disobedience, paid no attention to the Church's prohibition of contraception", though while not condoning the sin, he was tolerant of those who sincerely tried and failed to live up to the Church's teaching. 
The book states that "...if some people think that his compassion and gentleness in this respect implies he was against Humanae Vitae one can only infer it was wishful thinking on their part and an attempt to find an ally in favor of artificial contraception." Pope John Paul II After he became pope in 1978, John Paul II continued the Catholic theology of the body of his predecessors with a series of lectures, entitled Theology of the Body, in which he talked about an original unity between man and woman, purity of heart (on the Sermon on the Mount), marriage and celibacy, and reflections on Humanae vitae, focusing largely on responsible parenthood and marital chastity. In 1981, the Pope's apostolic exhortation Familiaris consortio restated the Church's opposition to artificial birth control stated previously in Humanae vitae. John Paul II readdressed some of the same issues in his 1993 encyclical Veritatis splendor. He reaffirmed much of Humanae vitae, and specifically described the practice of artificial contraception as an act not permitted by Catholic teaching in any circumstances. The same encyclical also clarifies the use of conscience in arriving at moral decisions, including in the use of contraception. However, John Paul also said, “It is not right then to regard the moral conscience of the individual and the magisterium of the Church as two contenders, as two realities in conflict. The authority which the magisterium enjoys by the will of Christ exists so that the moral conscience can attain the truth with security and remain in it.” John Paul quoted Humanae vitae as a compassionate encyclical: "Christ has come not to judge the world but to save it, and while he was uncompromisingly stern towards sin, he was patient and rich in mercy towards sinners". Pope John Paul's 1995 encyclical, Evangelium vitae ("The Gospel of Life"), affirmed the Church's position on contraception and multiple topics related to the culture of life. 
Pope Benedict XVI On 12 May 2008, Benedict XVI accepted an invitation to talk to participants in the International Congress organized by the Pontifical Lateran University on the 40th anniversary of Humanae vitae. He put the encyclical in the broader view of love in a global context, a topic he called "so controversial, yet so crucial for humanity's future." Humanae vitae became "a sign of contradiction but also of continuity of the Church's doctrine and tradition... What was true yesterday is true also today." The Church continues to reflect "in an ever new and deeper way on the fundamental principles that concern marriage and procreation." The key message of Humanae vitae is love. Benedict states that the fullness of a person is achieved by a unity of soul and body, but neither spirit nor body alone can love, only the two together. If this unity is broken, if only the body is satisfied, love becomes a commodity. Pope Francis On 16 January 2015, speaking at a meeting with families in Manila and insisting on the need to protect the family, Pope Francis said: "The family is ...threatened by growing efforts on the part of some to redefine the very institution of marriage, by relativism, by the culture of the ephemeral, by a lack of openness to life. I think of Blessed Paul VI. At a time when the problem of population growth was being raised, he had the courage to defend openness to life in families. He knew the difficulties that are there in every family, and so in his Encyclical he was very merciful towards particular cases, and he asked confessors to be very merciful and understanding in dealing with particular cases. But he also had a broader vision: he looked at the peoples of the earth and he saw this threat of the destruction of the family through the privation of children [original Spanish: destrucción de la familia por la privación de los hijos]. Paul VI was courageous; he was a good pastor and he warned his flock of the wolves who were coming." 
A year before, on 1 May 2014, Pope Francis, in an interview given to Italian newspaper Corriere della Sera, expressed his opinion and praise for Humanae Vitae: "Everything depends on how Humanae Vitae is interpreted. Paul VI himself, in the end, urged confessors to be very merciful and pay attention to concrete situations. But his genius was prophetic, he had the courage to take a stand against the majority, to defend moral discipline, to exercise a cultural restraint, to oppose present and future neo-Malthusianism. The question is not of changing doctrine, but of digging deep and making sure that pastoral care takes into account situations and what it is possible for persons to do." References Further reading External links Latin text of Humanae vitae at the Vatican website English text of Humanae vitae at the Vatican website The Humanae Vitae controversy, chapter from George Weigel's biography of Karol Wojtyła G. E. M. Anscombe: Contraception and Chastity Cardinal Varkey's Letter on Family Planning Trends Among Catholics: Cardinal Mar Varkey Vithayathil, Kerala, India Natural Family Planning, John and Sheila Kippley's website that supports Humanae vitae and provides instruction in natural family planning The Vindication of Humanae Vitae, by Mary Eberstadt First Things, August/September 2008. John Paul II's THEOLOGY OF THE BODY on the EWTN website.
14072
https://en.wikipedia.org/wiki/History%20of%20Wikipedia
History of Wikipedia
Wikipedia began with its first edit on 15 January 2001, two days after the domain was registered by Jimmy Wales and Larry Sanger. Its technological and conceptual underpinnings predate this; the earliest known proposal for an online encyclopedia was made by Rick Gates in 1993, and the concept of a free-as-in-freedom online encyclopedia (as distinct from mere open source) was proposed by Richard Stallman in 1998. Crucially, Stallman's concept specifically included the idea that no central organization should control editing. This characteristic greatly contrasted with contemporary digital encyclopedias such as Microsoft Encarta, Encyclopædia Britannica, and even Bomis's Nupedia, which was Wikipedia's direct predecessor. In 2001, the license for Nupedia was changed to GFDL, and Wales and Sanger launched Wikipedia using the concept and technology of a wiki pioneered in 1995 by Ward Cunningham. Initially, Wikipedia was intended to complement Nupedia, an online encyclopedia project edited solely by experts, by providing additional draft articles and ideas for it. In practice, Wikipedia quickly overtook Nupedia, becoming a global project in multiple languages and inspiring a wide range of other online reference projects. Wikipedia's worldwide monthly readership in 2014 was approximately 495 million. Worldwide in September 2018, WMF Labs tallied 15.5 billion page views for the month. According to comScore, Wikipedia receives over 117 million monthly unique visitors from the United States alone. Historical overview Background The concept of compiling the world's knowledge in a single location dates back to the ancient Library of Alexandria and Library of Pergamum, but the modern concept of a general-purpose, widely distributed, printed encyclopedia originated with Denis Diderot and the 18th-century French encyclopedists. 
The idea of using automated machinery beyond the printing press to build a more useful encyclopedia can be traced to Paul Otlet's 1934 book Traité de Documentation; Otlet also founded the Mundaneum, an institution dedicated to indexing the world's knowledge, in 1910. This concept of a machine-assisted encyclopedia was further expanded in H. G. Wells' book of essays World Brain (1938) and Vannevar Bush's future vision of the microfilm-based Memex in his essay "As We May Think" (1945). Another milestone was Ted Nelson's hypertext design Project Xanadu, which was begun in 1960. The use of volunteers was integral and instrumental in creating and maintaining Wikipedia. However, even before the internet, huge and complex projects of a similar nature had made use of volunteers. Specifically, the creation of the Oxford English Dictionary was conceived in a speech by Richard Chenevix Trench at the London Library on Guy Fawkes Day, 5 November 1857. It took 70 years to complete. Trench envisioned a grand new dictionary of every word in the English language, one to be used democratically and freely. According to author Simon Winchester, "The undertaking of the scheme, he said, was beyond the ability of any one man. To peruse all of English literature—and to comb the London and New York newspapers and the most literate of the magazines and journals—must be instead 'the combined action of many.' It would be necessary to recruit a team—moreover, a huge one—probably comprising hundreds and hundreds of unpaid amateurs, all of them working as volunteers." Advances in information technology in the late 20th century led to changes in the form of encyclopedias. While previous encyclopedias, notably the Encyclopædia Britannica, were often book-based, Microsoft's Encarta, published in 1993, was available on CD-ROM and hyperlinked. The development of the World Wide Web led to many attempts to develop internet encyclopedia projects. 
An early proposal for an online encyclopedia was Interpedia, put forward in 1993 by Rick Gates; this project died before generating any encyclopedic content. Free software proponent Richard Stallman described the usefulness of a "Free Universal Encyclopedia and Learning Resource" in 1998. His published document outlined how to "ensure that progress continues towards this best and most natural outcome." On Wednesday 17 January 2001, two days after the founding of Wikipedia, the Free Software Foundation's (FSF) GNUPedia project went online, competing with Nupedia, but today the FSF encourages people "to visit and contribute to [Wikipedia]". Wikipedia co-founder Jimmy Wales has stated that the germ of the concept for Wikipedia came when he was a graduate student at Indiana University, where he was impressed with the successes of the open-source movement and found Richard Stallman's Emacs manifesto, promoting free software and a sharing economy, to be quite interesting. At the time, Wales was studying finance and was intrigued by the incentives of the many people who contributed as volunteers toward creating free software, which had produced many examples of excellent results.

Formulation of the concept

Wikipedia was initially conceived as a feeder project for the Wales-founded Nupedia, an earlier project to produce a free online encyclopedia, supported by Bomis, a web-advertising firm owned by Jimmy Wales, Tim Shell and Michael E. Davis. Nupedia was founded upon the use of highly qualified volunteer contributors and an elaborate multi-step peer review process. Despite its mailing list of interested editors, and the presence of a full-time editor-in-chief, Larry Sanger, a graduate philosophy student hired by Wales, the writing of content for Nupedia was extremely slow, with only 12 articles written during the first year. Wales and Sanger discussed various ways to create content more rapidly.
The idea of a wiki-based complement originated from a conversation between Sanger and Ben Kovitz. Kovitz was a computer programmer and a regular on Ward Cunningham's revolutionary wiki, the WikiWikiWeb. Over a dinner on Tuesday 2 January 2001, he explained to Sanger what wikis were, at that time a difficult concept to understand. Wales first stated, in October 2001, that "Larry had the idea to use Wiki software", though he later stated in December 2005 that Jeremy Rosenfeld, a Bomis employee, had introduced him to the concept. Sanger thought a wiki would be a good platform to use, and proposed on the Nupedia mailing list, under the subject "Let's make a wiki", that a wiki based upon UseModWiki (then v. 0.90) be set up as a "feeder" project for Nupedia. Wales set one up and put it online on Wednesday 10 January 2001.

Founding of Wikipedia

There was considerable resistance on the part of Nupedia's editors and reviewers to the idea of associating Nupedia with a wiki-style website. Sanger suggested giving the new project its own name, Wikipedia, and Wikipedia was soon launched on its own domain, , on Monday 15 January 2001. The bandwidth and server (located in San Diego) used for these initial projects were donated by Bomis. Many former Bomis employees later contributed content to the encyclopedia, notably Tim Shell, co-founder and later CEO of Bomis, and programmer Jason Richey. Wales stated in December 2008 that he made Wikipedia's first edit, a test edit with the text "Hello, World!", but this edit may have been to an old version of Wikipedia which was soon scrapped and replaced by a restart; see [WikiEN-l] "Hello world?". The existence of the project was formally announced, and an appeal for volunteers to engage in content creation was made, on the Nupedia mailing list on 17 January 2001. The project received many new participants after being mentioned on the Slashdot website in July 2001, having already earned two minor mentions in March 2001.
It then received a prominent pointer from a story on the community-edited technology and culture website Kuro5hin on 25 July. Between these relatively rapid influxes of traffic, there had been a steady stream of traffic from other sources, especially Google, which alone sent hundreds of new visitors to the site every day. Its first major mainstream media coverage was in The New York Times on 20 September 2001. The project gained its 1,000th article around Monday 12 February 2001, and reached 10,000 articles around 7 September. In the first year of its existence, over 20,000 encyclopedia entries were created, a rate of over 1,500 articles per month. On Friday 30 August 2002, the article count reached 40,000. Wikipedia's earliest edits were long believed lost, since the original UseModWiki software deleted old data after about a month. On Tuesday 14 December 2010, developer Tim Starling found backups on SourceForge containing every change made to Wikipedia from its creation in January 2001 to 17 August 2001. These showed the first edit as being to HomePage on 15 January 2001, reading "This is the new WikiPedia!". That edit was imported into the modern database in 2019. The first three edits known of before Starling's discovery are:

To page Wikipedia:UuU at 20:08, 16 January 2001
To page TransporT at 20:12, 16 January 2001
To page User:ScottMoonen at 21:16, 16 January 2001

Divisions and internationalization

Early in Wikipedia's development, it began to expand internationally, with the creation of new namespaces, each with a distinct set of usernames. The first subdomain created for a non-English Wikipedia was deutsche.wikipedia.com (created on Friday 16 March 2001, 01:38 UTC), followed after a few hours by catalan.wikipedia.com (at 13:07 UTC). The Japanese Wikipedia, started as nihongo.wikipedia.com, was created around that period, and initially used only Romanized Japanese.
For about two months, Catalan was the edition with the most articles in a non-English language, although statistics of that early period are imprecise. The French Wikipedia was created on or around 11 May 2001, in a wave of new language versions that also included Chinese, Dutch, Esperanto, Hebrew, Italian, Portuguese, Russian, Spanish, and Swedish. These languages were soon joined by Arabic and Hungarian. In September 2001, an announcement pledged commitment to the multilingual provision of Wikipedia, notifying users of an upcoming roll-out of Wikipedias for all major languages, the establishment of core standards, and a push for the translation of core pages for the new wikis. At the end of that year, when international statistics first began to be logged, Afrikaans, Norwegian, and Serbian versions were announced. In January 2002, 90% of all Wikipedia articles were in English. By January 2004, fewer than 50% were in English, and this internationalization has continued to increase as the encyclopedia grows. About 85.5% of all Wikipedia articles are now contained within non-English Wikipedia versions.

Development of Wikipedia

In March 2002, following the withdrawal of funding by Bomis during the dot-com bust, Larry Sanger left both Nupedia and Wikipedia. By 2002, Sanger and Wales differed in their views on how best to manage open encyclopedias. Both still supported the open-collaboration concept, but the two disagreed on how to handle disruptive editors, specific roles for experts, and the best way to guide the project to success. Wales went on to establish self-governance and bottom-up self-direction by editors on Wikipedia. He made it clear that he would not be involved in the community's day-to-day management, but would encourage it to learn to self-manage and find its own best approaches. Wales now mostly restricts his own role to occasional input on serious matters, executive activity, advocacy of knowledge, and encouragement of similar reference projects.
Sanger says he is an "inclusionist" and is open to almost anything. He proposed that experts still have a place in the Web 2.0 world. He returned briefly to academia, then joined the Digital Universe Foundation. In 2006, Sanger founded Citizendium, an open encyclopedia that used real names for contributors in an effort to reduce disruptive editing, and hoped to facilitate "gentle expert guidance" to increase the accuracy of its content. Decisions about article content were to be up to the community, but the site was to include a statement about "family-friendly content". He stated early on that he intended to leave Citizendium in a few years, by which time the project and its management would presumably be established.

Organization

The Wikipedia project has grown rapidly in the course of its life, at several levels. Content has grown organically through the addition of new articles, new wikis have been added in English and non-English languages, and entire new projects replicating these growth methods in other related areas (news, quotations, reference books and so on) have been founded as well. Wikipedia itself has grown, with the creation of the Wikimedia Foundation to act as an umbrella body and the growth of software and policies to address the needs of the editorial community. These are documented below.

Evolution of logo

Timeline

First decade: 2000–2009

2000

In March 2000, the Nupedia project was started. Its intention was to publish articles written by experts which would be licensed as free content. Nupedia was founded by Jimmy Wales, with Larry Sanger as editor-in-chief, and funded by the web-advertising company Bomis.

2001

In January 2001, Wikipedia began as a side-project of Nupedia, to allow collaboration on articles prior to entering the peer-review process. The name was suggested by Sanger on 11 January 2001 as a portmanteau of the words wiki (Hawaiian for "quick") and encyclopedia.
The wikipedia.com and wikipedia.org domain names were registered on 12 and 13 January, respectively, with wikipedia.org being brought online the same day. The project formally opened on 15 January ("Wikipedia Day"), with the first international Wikipedias – the French, German, Catalan, Swedish, and Italian editions – being created between March and May. The "neutral point of view" (NPOV) policy was officially formulated at this time, and Wikipedia's first slashdotter wave arrived on 26 July. The first media report about Wikipedia appeared in August 2001 in the newspaper Wales on Sunday. The September 11 attacks spurred the appearance of breaking news stories on the homepage, as well as information boxes linking related articles. At the time, approximately 100 articles related to 9/11 had been created. After the attacks, a link to the Wikipedia article on them appeared on Yahoo!'s home page, resulting in a spike in traffic.

2002

2002 saw the end of funding for Wikipedia from Bomis and the departure of Larry Sanger. The forking of the Spanish Wikipedia also took place, with the establishment of the Enciclopedia Libre. The "Phase II" software, the first ancestor of MediaWiki, went live on 25 January. Bots were introduced, Jimmy Wales confirmed that Wikipedia would never run commercial advertising, and the first sister project (Wiktionary) and first formal Manual of Style were launched. A separate board of directors to supervise the project was proposed and initially discussed at Meta-Wikipedia. Close to 200 contributors were editing Wikipedia daily.

2003

The English Wikipedia passed 100,000 articles in 2003, while the next largest edition, the German Wikipedia, passed 10,000. The Wikimedia Foundation was established, and Wikipedia adopted its jigsaw world logo. Mathematical formulae using TeX were reintroduced to the website. The first Wikipedian social meeting took place in Munich, Germany, in October.
The basic principles of Wikipedia's Arbitration Committee (known colloquially as "ArbCom") were developed. Wikisource was created as a separate project on 24 November 2003, to host free textual sources.

2004

The worldwide Wikipedia article pool continued to grow rapidly in 2004, doubling in size in 12 months, from under 500,000 articles in late 2003 to over 1 million in over 100 languages by the end of 2004. The English Wikipedia accounted for just under half of these articles. The website's server farms were moved from California to Florida, CSS style configuration sheets were introduced, and the first attempt to block Wikipedia occurred, with the website being blocked in China for two weeks in June. The formal election of a board and Arbitration Committee began. The first formal projects were proposed to deliberately balance content and seek out systemic bias arising from Wikipedia's community structure. Bourgeois v. Peters (11th Cir. 2004), a court case decided by the United States Court of Appeals for the Eleventh Circuit, was one of the earliest court opinions to cite and quote Wikipedia. It stated: "We also reject the notion that the Department of Homeland Security's threat advisory level somehow justifies these searches. Although the threat level was 'elevated' at the time of the protest, 'to date, the threat level has stood at yellow (elevated) for the majority of its time in existence. It has been raised to orange (high) six times.'" Wikimedia Commons was created on 7 September 2004 to host media files for Wikipedia in all languages.

2005

In 2005, Wikipedia became the most popular reference website on the Internet, according to Hitwise, with the English Wikipedia alone exceeding 750,000 articles. Wikipedia's first multilingual and subject portals were established in 2005. A formal fundraiser held in the first quarter of the year raised almost US$100,000 for system upgrades to handle growing demand. China again blocked Wikipedia in October 2005.
The first major Wikipedia scandal, the Seigenthaler incident, occurred in 2005, when a well-known figure was found to have a vandalized biography which had gone unnoticed for months. In the wake of this and other concerns, the first policy and system changes specifically designed to counter this form of abuse were established. These included a new Checkuser privilege policy update to assist in sock puppetry investigations, a new feature called , a more strict policy on biographies of living people, and the tagging of such articles for stricter review. A restriction of new article creation to registered users only was put in place in December 2005, after the Seigenthaler incident, in which an anonymous user had posted a hoax. Wikimania 2005, the first Wikimania conference, was held from 4 to 8 August 2005 at the Haus der Jugend in Frankfurt, Germany, attracting about 380 attendees.

2006

The English Wikipedia gained its one-millionth article, Jordanhill railway station, on 1 March 2006. The first approved Wikipedia article selection was made freely available to download, and "Wikipedia" became registered as a trademark of the Wikimedia Foundation. The congressional aides biography scandals – multiple incidents in which congressional staffers and a campaign manager were caught trying to covertly alter Wikipedia biographies – came to public attention, leading to the resignation of the campaign manager. Nonetheless, Wikipedia was rated as one of the top five global brands of 2006. Jimmy Wales indicated at Wikimania 2006 that Wikipedia had achieved sufficient volume and called for an emphasis on quality, perhaps best expressed in the call for 100,000 feature-quality articles. A new privilege, "oversight", was created, allowing specific versions of archived pages with unacceptable content to be marked as non-viewable.
Semi-protection against anonymous vandalism, introduced in 2005, proved more popular than expected, with over 1,000 pages being semi-protected at any given time in 2006.

2007

Wikipedia continued to grow rapidly in 2007, possessing over 5 million registered editor accounts by 13 August. The 250 language editions of Wikipedia contained a combined total of 7.5 million articles, totalling 1.74 billion words, by 13 August. The English Wikipedia gained articles at a steady rate of 1,700 a day, with the wikipedia.org domain name ranked the 10th-busiest in the world. Wikipedia continued to garner visibility in the press; the Essjay controversy broke when a prominent member of Wikipedia was found to have lied about his credentials. Citizendium, a competing online encyclopedia, launched publicly. A new trend developed in Wikipedia of addressing people whose notability stemmed from being a participant in a news story by adding a redirect from their name to the larger story, rather than creating a distinct biographical article. On 9 September 2007, the English Wikipedia gained its two-millionth article, El Hormiguero. There was some controversy in late 2007 when the Volapük Wikipedia jumped from 797 to over 112,000 articles, briefly becoming the 15th-largest Wikipedia edition, due to automated stub generation by an enthusiast for the Volapük constructed language. According to the MIT Technology Review, the number of regularly active editors on the English-language Wikipedia peaked in 2007 at more than 51,000, and has since been declining.

2008

Various WikiProjects in many areas continued to expand and refine article contents within their scope. In April 2008, the 10-millionth Wikipedia article was created, and by the end of the year the English Wikipedia exceeded 2.5 million articles.

2009

On 25 June 2009 at 3:15 pm PDT (22:15 UTC), following the death of pop icon Michael Jackson, the website temporarily crashed.
The Wikimedia Foundation reported nearly a million visitors to Jackson's biography within one hour, probably the most visitors in a one-hour period to any article in Wikipedia's history. By late August 2009, the number of articles in all Wikipedia editions had exceeded 14 million. The three-millionth article on the English Wikipedia, Beate Eriksen, was created on 17 August 2009 at 04:05 UTC. On 27 December 2009, the German Wikipedia exceeded one million articles, becoming the second edition after the English Wikipedia to do so. A TIME article listed Wikipedia among 2009's best websites. Wikipedia content became licensed under Creative Commons in 2009.

Second decade: 2010–2019

2010

On 24 March, the European Wikipedia servers went offline due to an overheating problem. Failover to servers in Florida turned out to be broken, causing DNS resolution for Wikipedia to fail across the world. The problem was resolved quickly, but due to DNS caching effects, some areas were slower to regain access to Wikipedia than others. On 13 May, the site released a new interface. New features included an updated logo, new navigation tools, and a link wizard. However, the classic interface remained available for those who wished to use it. On 12 December, the English Wikipedia passed the 3.5-million-article mark, while the French Wikipedia's millionth article was created on 21 September. The 1-billionth Wikimedia project edit was performed on 16 April.

2011

Wikipedia and its users held many celebrations worldwide to commemorate the site's 10th anniversary on 15 January. The site began efforts to expand its growth in India, holding its first Indian conference in Mumbai in November 2011. The English Wikipedia passed the 3.6-million-article mark on 2 April, and reached 3.8 million articles on 18 November.
On 7 November 2011, the German Wikipedia exceeded 100 million page edits, becoming the second language edition to do so after the English edition, which attained 500 million page edits on 24 November 2011. The Dutch Wikipedia exceeded 1 million articles on 17 December 2011, becoming the fourth Wikipedia edition to do so. The "Wikimania 2011 – Haifa, Israel" stamp was issued by Israel Post on 2 August 2011. This was the first-ever stamp dedicated to a Wikimedia-related project. Between 4 and 6 October 2011, the Italian Wikipedia became intentionally inaccessible in protest against the Italian Parliament's proposed DDL intercettazioni law, which, if approved, would allow any person to force websites to remove information perceived as untrue or offensive, without the need to provide evidence. Also in October 2011, Wikimedia announced the launch of Wikipedia Zero, an initiative to enable free mobile access to Wikipedia in developing countries through partnerships with mobile operators.

2012

On 16 January, Wikipedia co-founder Jimmy Wales announced that the English Wikipedia would shut down for 24 hours on 18 January as part of a protest meant to call public attention to the proposed Stop Online Piracy Act and PROTECT IP Act, two anti-piracy laws under debate in the United States Congress. Calling the blackout a "community decision", Wales and other opponents of the laws believed that they would endanger free speech and online innovation. A similar blackout was staged on 10 July by the Russian Wikipedia, in protest against a proposed Russian internet regulation law. In late March 2012, Wikimedia Deutschland announced Wikidata, a universal platform for sharing data between all Wikipedia language editions. The US$1.7-million Wikidata project was partly funded by Google, the Gordon and Betty Moore Foundation, and the Allen Institute for Artificial Intelligence.
Wikimedia Deutschland assumed responsibility for the first phase of Wikidata, and initially planned to make the platform available to editors by December 2012. Wikidata's first phase became fully operational in March 2013. In April 2012, Justin Knapp became the first single contributor to make over one million edits to Wikipedia. Jimmy Wales congratulated Knapp for his work and presented him with the site's Special Barnstar medal and the Golden Wiki award for his achievement. Wales also declared that 20 April would be "Justin Knapp Day". On 13 July 2012, the English Wikipedia gained its 4-millionth article, Izbat al-Burj. In October 2012, historian and Wikipedia editor Richard J. Jensen opined that the English Wikipedia was "nearing completion", noting that the number of regularly active editors had fallen significantly since 2007, despite Wikipedia's rapid growth in article count and readership. According to Alexa Internet, Wikipedia was the world's sixth-most-popular website as of November 2012. Dow Jones ranked Wikipedia fifth worldwide as of December 2012.

2013

On 22 January 2013, the Italian Wikipedia became the fifth language edition of Wikipedia to exceed 1 million articles, while the Russian and Spanish Wikipedias gained their millionth articles on 11 and 16 May respectively. On 15 July the Swedish and on 24 September the Polish Wikipedias gained their millionth articles, becoming the eighth and ninth Wikipedia editions to do so. On 27 January, the main-belt asteroid 274301 was officially named "Wikipedia" by the Committee for Small Body Nomenclature. The first phase of the Wikidata database, automatically providing interlanguage links and other data, became available for all language editions in March 2013. In April 2013, the French secret service was accused of attempting to censor Wikipedia by threatening a Wikipedia volunteer with arrest unless "classified information" about a military radio station was deleted.
In July, the VisualEditor editing system was launched, forming the first stage of an effort to allow articles to be edited with a word-processor-like interface instead of wiki markup. An editor specifically designed for smartphones and other mobile devices was also launched.

2014

In February 2014, a project to make a print edition of the English Wikipedia, consisting of 1,000 volumes and over 1,100,000 pages, was launched by German Wikipedia contributors. The project sought funding through Indiegogo, and was intended to honor the contributions of Wikipedia's editors. On 22 October 2014, the first monument to Wikipedia was unveiled in the Polish town of Słubice. On 8 June, 15 June, and 16 July 2014, the Waray Wikipedia, the Vietnamese Wikipedia and the Cebuano Wikipedia each exceeded the one million article mark. They were the tenth, eleventh and twelfth Wikipedias to reach that milestone. Despite having very few active users, the Waray and Cebuano Wikipedias had a high number of automatically generated articles created by bots.

2015

In mid-2015, Wikipedia was the world's seventh-most-popular website according to Alexa Internet, down one place from the position it held in November 2012. At the start of 2015, Wikipedia remained the largest general-knowledge encyclopedia online, with a combined total of over 36 million mainspace articles across all 291 language editions. On average, Wikipedia receives a total of 10 billion global pageviews from around 495 million unique visitors every month, including 85 million visitors from the United States alone, where it is the sixth-most-popular site. Print Wikipedia was an art project by Michael Mandiberg that created the ability to print 7,473 volumes of Wikipedia as it existed on 7 April 2015. Each volume has 700 pages, and only 110 were printed by the artist. On 1 November 2015, the English Wikipedia reached 5,000,000 articles with the creation of an article on Persoonia terminalis, a type of shrub.
2016

On 19 January 2016, the Japanese Wikipedia exceeded the one million article mark, becoming the thirteenth Wikipedia to reach that milestone. The millionth article was 波号第二百二十四潜水艦 (a World War II submarine of the Imperial Japanese Navy). In mid-2016, Wikipedia was once again the world's sixth-most-popular website according to Alexa Internet, up one place from the position it held in the previous year. In October 2016, the mobile version of Wikipedia got a new look.

2017

In mid-2017, Wikipedia was listed as the world's fifth-most-popular website according to Alexa Internet, rising one place from the position it held in the previous year. Wikipedia Zero was made available in Iraq and Afghanistan. On 29 April 2017, online access to Wikipedia was blocked across all language editions in Turkey by the Turkish authorities. This block lasted until 15 January 2020, when a Turkish court ruled that it violated human rights. The encrypted Japanese Wikipedia has been blocked in China since 28 December 2017.

2018

During 2018, Wikipedia retained its listing as the world's fifth-most-popular website according to Alexa Internet. One notable development was the use of artificial intelligence to create draft articles on overlooked topics. On 13 April 2018, the number of Chinese Wikipedia articles exceeded 1 million, making it the fourteenth Wikipedia to reach that milestone. The Chinese Wikipedia has been blocked in Mainland China since May 2015. Later in the year, on 26 June, the Portuguese Wikipedia exceeded the one million article mark, becoming the fifteenth Wikipedia to reach that milestone. The millionth article was Perdão de Richard Nixon (the pardon of Richard Nixon).

2019

In August 2019, according to Alexa.com, Wikipedia fell from fifth to seventh place among the world's websites for global internet engagement. On 23 April 2019, Chinese authorities expanded the block of Wikipedia to versions in all languages.
The timing of the block coincided with the 30th anniversary of the 1989 Tiananmen Square protests and massacre and the 100th anniversary of the May Fourth Movement, anniversaries that brought stricter internet censorship in China.

Third decade: 2020–present

2020

On 23 January 2020, the six millionth article, the biography of Maria Elise Turner Lauder, was added to the English Wikipedia. Despite this growth in articles, Wikipedia's global internet engagement, as measured by Alexa, continued to decline. By February 2020, Wikipedia had fallen to eleventh place among the world's websites for global internet engagement. Both Wikipedia's coverage of the COVID-19 pandemic and the supporting edits, discussions and even deletions were thought to be a useful resource for future historians seeking to understand the period in detail. The World Health Organization collaborated with Wikipedia as a key resource for the dissemination of COVID-19-related information, to help combat the spread of misinformation.

2021

In January 2021, Wikipedia's 20th anniversary was noted in the media. On 13 January 2021, the English Wikipedia reached one billion edits. MIT Press published an open-access book of essays, Wikipedia @ 20: Stories of an Unfinished Revolution, edited by Joseph Reagle and Jackie Koerner, with contributions from prominent Wikipedians, Wikimedians, researchers, journalists, librarians and other experts reflecting on particular histories and themes. By November 2021, Wikipedia had fallen to thirteenth place among the world's websites for global internet engagement.

History by subject area

Hardware and software

The software that runs Wikipedia, and the computer hardware, server farms and other systems upon which Wikipedia relies. In January 2001, Wikipedia ran on UseModWiki, written in Perl by Clifford Adams. The server ran on Linux, and the original text was stored in files rather than in a database. Articles were named with the CamelCase convention.
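The CamelCase convention meant that any capitalized word with a second internal capital letter was automatically treated as a link to a page of that name. A minimal sketch of the idea in Python (not UseModWiki's actual Perl implementation, and the link markup here is hypothetical):

```python
import re

# A word qualifies as a CamelCase page name when it consists of two or
# more capitalized runs joined together, e.g. "WikiPedia" or "HomePage".
CAMELCASE = re.compile(r"\b([A-Z][a-z]+(?:[A-Z][a-z]+)+)\b")

def render_links(text: str) -> str:
    """Wrap each CamelCase word in illustrative link markup."""
    return CAMELCASE.sub(r"<a href='/wiki/\1'>\1</a>", text)

print(render_links("This is the new WikiPedia!"))
# "WikiPedia" becomes a link; ordinary capitalized words like "This" do not.
```

This automatic linking is why early article titles such as "TransporT" carried odd capitalization: a single-word title needed a second capital letter to be linkable at all.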
In January 2002, "Phase II" of the wiki software powering Wikipedia was introduced, replacing the older UseModWiki. Written specifically for the project by Magnus Manske, it was a wiki engine written in PHP. In July 2002, a major rewrite of the software powering Wikipedia went live; dubbed "Phase III", it replaced the older "Phase II" version and became MediaWiki. It was written by Lee Daniel Crocker in response to the increasing demands of the growing project. In October 2002, Derek Ramsey created a bot, an automated program called Rambot, to add a large number of articles about United States towns; these articles were automatically generated from U.S. census data. He thus increased the number of Wikipedia articles by 33,832. This has been called "the most controversial move in Wikipedia history". In January 2003, support for mathematical formulas in TeX was added. The code was contributed by Tomasz Wegrzanowski. On 9 June 2003, Wikipedia's ISBN interface was amended to make ISBNs in articles link to Special:Booksources, which fetches its contents from the user-editable page . Before this, ISBN link targets were coded into the software, and new ones were suggested on the page. After 6 December 2003, various system messages shown to Wikipedia users were no longer hard-coded, allowing Wikipedia to modify certain parts of MediaWiki's interface, such as the message shown to blocked users. On 12 February 2004, server operations were moved from San Diego, California to Tampa, Florida. On 29 May 2004, all the various websites were updated to a new version of the MediaWiki software. On 30 May 2004, the first instances of "categorization" entries appeared. Category schemes, like Recent Changes and Edit This Page, had existed from the founding of Wikipedia.
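Returning to the TeX support added in January 2003: it lets editors embed formulas in wikitext using MediaWiki's `<math>` tags, which are rendered server-side into typeset mathematics. A small illustrative example (the formula itself is arbitrary):

```latex
<math>\int_0^\infty e^{-x}\,dx = 1</math>
```

The body of the tag is ordinary LaTeX math notation, so editors could reuse familiar TeX syntax rather than learning a wiki-specific formula language.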
However, Larry Sanger had viewed the schemes as lists, with articles even entered by hand, whereas the categorization effort centered on individual categorization entries in each article, as part of a larger automatic categorization of the articles of the encyclopedia. After 3 June 2004, administrators could edit the style of the interface by changing the CSS in the monobook stylesheet at MediaWiki:Monobook.css. Also on 30 May 2004, with MediaWiki 1.3, the Template namespace was created, allowing transclusion of standard texts. On 7 June 2005 at 3:00 a.m. Eastern Standard Time, the bulk of the Wikimedia servers were moved to a new facility across the street. All Wikimedia projects were down during this time. In March 2013, the first phase of the Wikidata interwiki database became available across Wikipedia's language editions. In July 2013, the VisualEditor editing interface was inaugurated, allowing users to edit Wikipedia using a WYSIWYG text editor (similar to a word processor) instead of wiki markup. An editing interface optimised for mobile devices was also released.

Look and feel

The external face of Wikipedia, its look and feel, and the Wikipedia branding, as presented to users. On 4 April 2002, BrilliantProse, since renamed Featured Articles, was moved to the Wikipedia namespace from the article namespace. Around 15 October 2003, a new Wikipedia logo was installed. The logo concept was selected by a voting process, followed by a revision process to select the best variant. The final selection was created by David Friedland (who edits Wikipedia under the username "nohat"), based on a logo design and concept created by Paul Stansifer. On 22 February 2004, Did You Know (DYK) made its first Main Page appearance. On 23 February 2004, a coordinated new look for the Main Page appeared at 19:46 UTC. Hand-chosen entries for the Daily Featured Article, Anniversaries, In the News, and Did You Know rounded out the new look.
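The transclusion enabled by the Template namespace (MediaWiki 1.3, mentioned above) works by inserting one page's content into another at render time. A sketch of how this appears to editors; the template name "Welcome" here is a hypothetical example:

```text
Wikitext stored at the page Template:Welcome:
    Hello, and thank you for your contributions!

Wikitext placed in an article or talk page:
    {{Welcome}}

When the page is rendered, {{Welcome}} is replaced by the full
contents of Template:Welcome.
```

Because the template is stored once and transcluded everywhere, editing the template updates every page that uses it, which is what made standard texts (welcome messages, notices, infoboxes) maintainable at scale.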
On 10 January 2005, the multilingual portal at www.wikipedia.org was set up, replacing a redirect to the English-language Wikipedia. On 5 February 2005, the first thematic "portal" on the English Wikipedia was created. However, the concept was pioneered on the German Wikipedia, where Portal:Recht (law studies) was set up in October 2003. On 16 July 2005, the English Wikipedia began the practice of including the day's "featured pictures" on the Main Page. On 19 March 2006, following a vote, the Main Page of the English-language Wikipedia featured its first redesign in nearly two years. On 13 May 2010, the site released a new interface. New features included an updated logo, new navigation tools, and a link wizard. The "classic" Wikipedia interface remained available as an option. Internal structures Landmarks in the Wikipedia community, and the development of its organization, internal structures, and policies. In April 2001, Wales formally defined the "neutral point of view", Wikipedia's core non-negotiable editorial policy, a reformulation of the "Lack of Bias" policy outlined by Sanger for Nupedia in spring or summer 2000, which covered many of the same core principles. In September 2001, collaboration by subject matter was introduced. In February 2002, concerns over the risk of future censorship and commercialization by Bomis Inc. (Wikipedia's original host), combined with a lack of guarantee that this would not happen, led most participants of the Spanish Wikipedia to break away and establish it independently as the Enciclopedia Libre. Following clarification of Wikipedia's status and non-commercial nature later that year, re-merger talks between Enciclopedia Libre and the re-founded Spanish Wikipedia occasionally took place in 2002 and 2003, but no conclusion was reached. As of October 2009, the two continued to coexist as substantial Spanish-language reference sources, with around 43,000 articles (EL) and 520,000 articles (Sp.W) respectively. 
Also in 2002, policy and style issues were clarified with the creation of the Manual of Style, along with a number of other policies and guidelines. In November 2002, new mailing lists for WikiEN and Announce were set up, as well as other language mailing lists (e.g. Polish), to reduce the volume of traffic on existing mailing lists. In July 2003, the rule against editing one's own autobiography was introduced. On 28 October 2003, the first "real" meeting of Wikipedians happened in Munich. Many cities followed suit, and soon a number of regular Wikipedian get-togethers were established around the world. Several Internet communities, including one on the popular blog website LiveJournal, have also sprung up since. From 10 July to 30 August 2004, sections formerly on the Main Page were replaced by links to overviews. On 27 August 2004, the Community Portal was started, to serve as a focus for community efforts, which had previously been accomplished on an informal basis, by individual queries of Recent Changes, in wiki style, as ad hoc collaborations between like-minded editors. From September to December 2005, following the Seigenthaler controversy and other similar concerns, several anti-abuse features and policies were added to Wikipedia: the policy for "Checkuser" (a MediaWiki extension to assist detection of abuse via internet sock-puppetry) was established in November 2005; the Checkuser function had existed previously, but was viewed more as a system tool at the time, so there had been no need for a policy covering its routine use. Creation of new pages on the English Wikipedia was restricted to editors who had created a user account. The policy Wikipedia:Biographies of living people was introduced and rapidly adopted, giving a far tighter quality-control and fact-checking system for biographical articles about living people. And the "semi-protection" function and policy were introduced, allowing pages to be protected so that only those with an account could edit. 
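The semi-protection mechanism described above can be thought of as a permission check: each page carries a protection level, and the editor's account status determines whether an edit is allowed. A simplified sketch under assumed level names (a model only, not MediaWiki's actual code, which is configurable per action and per user group and later added requirements such as account age):

```python
from dataclasses import dataclass

# Hypothetical, simplified protection levels for illustration.
UNPROTECTED = "none"
SEMI_PROTECTED = "semi"
FULLY_PROTECTED = "full"

@dataclass
class User:
    has_account: bool
    is_admin: bool = False

def can_edit(protection, user):
    """Decide whether a user may edit a page at the given protection level."""
    if protection == UNPROTECTED:
        return True                 # anyone, including anonymous IP editors
    if protection == SEMI_PROTECTED:
        return user.has_account     # logged-in editors only
    if protection == FULLY_PROTECTED:
        return user.is_admin        # administrators only
    raise ValueError("unknown protection level: %s" % protection)

print(can_edit(SEMI_PROTECTED, User(has_account=False)))  # anonymous editor refused
```

The point of the design is that semi-protection raises the cost of drive-by vandalism without locking a page entirely, since any logged-in editor can still edit.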
In May 2006, a new "oversight" feature was introduced on the English Wikipedia, allowing a handful of highly trusted users to permanently erase page revisions containing copyright infringements or libelous or personal information from a page's history. Prior to this, page-version deletion was laborious, and deleted versions remained visible to other administrators and could be un-deleted by them. On 1 January 2007, the subcommunity named Esperanza was disbanded by communal consent. Esperanza had begun as an effort to promote "wikilove" and a social support network, but had developed its own subculture and private structures. Its disbanding was described as the painful but necessary remedy for a project that had allowed editors to "see themselves as Esperanzans first and foremost". A number of Esperanza's subprojects were integrated back into Wikipedia as free-standing projects, but most of them are now inactive. When the group was founded in September 2005, concerns had already been expressed that it would develop in this way. In April 2007, the results of a four-month policy review by a working group of several hundred editors, which sought to merge the core Wikipedia policies into one core policy (see Wikipedia:Attribution), were polled for community support. The proposal did not gain consensus; a significant number of editors felt that the existing structure of three strong, focused policies, each covering its own area, was more helpful to quality control than a single, more general merged policy. A one-day blackout of Wikipedia was called by Jimmy Wales on 18 January 2012, in conjunction with Google and over 7,000 other websites, to protest the Stop Online Piracy Act then under consideration by the United States Congress. The Wikimedia Foundation and legal structures Legal and organizational structure of the Wikimedia Foundation, its executive, and its activities as a foundation. 
In August 2002, shortly after Jimmy Wales announced that he would never run commercial advertisements on Wikipedia, the URL of Wikipedia was changed from wikipedia.com to wikipedia.org (see: .com and .org). On 20 June 2003, the Wikimedia Foundation was founded. A communications committee was formed in January 2006 to handle media inquiries and emails received for the Foundation and Wikipedia via the newly implemented OTRS (a ticket-handling system). Angela Beesley and Florence Nibart-Devouard were elected to the Board of Trustees of the Wikimedia Foundation. During this time, Beesley was active in editing content and setting policy, such as the privacy policy, within the Foundation. On 10 January 2006, Wikipedia became a registered trademark of the Wikimedia Foundation. In July 2006, Angela Beesley resigned from the board of the Wikimedia Foundation. In June 2006, Brad Patrick was hired as the first executive director of the Foundation. He resigned in January 2007, and was later replaced by Sue Gardner (June 2007). In October 2006, Florence Nibart-Devouard became chair of the board of the Wikimedia Foundation. Projects and milestones Sister projects and milestones related to articles, user base, and other statistics. On 15 January 2001, the first recorded edit of Wikipedia was performed. In December 2002, the first sister project, Wiktionary, was created, aiming to produce a dictionary and thesaurus of the words of all languages; it uses the same software as Wikipedia. On 22 January 2003, the English Wikipedia was again slashdotted after reaching the 100,000-article milestone with the Hastings, New Zealand, article. Two days later, the German-language Wikipedia, the largest non-English-language version, passed the 10,000-article mark. On 20 June 2003, the same day that the Wikimedia Foundation was founded, "Wikiquote" was created. A month later, "Wikibooks" was launched. "Wikisource" was set up towards the end of the year. 
In January 2004, Wikipedia reached the 200,000-article milestone in English with the article on Neil Warnock, and a combined 450,000 articles across the English and non-English Wikipedias. The next month, the combined article count of the English and non-English editions reached 500,000. On 20 April 2004, the article count of the English Wikipedia reached 250,000. On 7 July 2004, the article count of the English Wikipedia reached 300,000. On 20 September 2004, Wikipedia's total article count exceeded 1,000,000 articles in over 105 languages; the project received a flurry of related attention in the press. The one-millionth article was published in the Hebrew Wikipedia, and discussed the flag of Kazakhstan. On 20 November 2004, the article count of the English Wikipedia reached 400,000. On 18 March 2005, Wikipedia passed the 500,000-article milestone in English, with Involuntary settlements in the Soviet Union being announced in a press release as the landmark article. In May 2005, Wikipedia became the most popular reference website on the Internet according to traffic-monitoring company Hitwise, relegating Dictionary.com to second place. On 29 September 2005, the English Wikipedia passed the 750,000-article mark. On 1 March 2006, the English Wikipedia passed the 1,000,000-article mark, with Jordanhill railway station being announced on the Main Page as the milestone article. On 8 June 2006, the English Wikipedia passed the 1,000-featured-article mark, with Iranian peoples. On 15 August 2006, the Wikimedia Foundation launched Wikiversity. On 1 September 2006, Wikipedia exceeded 5,000,000 articles across all 229 language editions. On 24 November 2006, the English Wikipedia passed the 1,500,000-article mark, with Kanab ambersnail being announced on the Main Page as the milestone article. On 4 April 2007, the first Wikipedia CD selection in English was published as a free download. On 22 April 2007, the English Wikipedia passed the 1,750,000-article mark. 
RAF raid on La Caine HQ was the 1,750,000th article. On 9 September 2007, the English Wikipedia passed the 2,000,000-article mark. El Hormiguero was accepted by consensus as the 2,000,000th article. On 28 March 2008, Wikipedia exceeded 10 million articles across all 251 language editions. On 11 October 2008, the English Wikipedia passed the 2,500,000-article mark. While no attempt was made to officially identify the 2,500,000th article, Joe Connor (baseball) has been suggested as the possible article. On 17 August 2009, the English Wikipedia passed the 3,000,000-article mark, with Beate Eriksen being announced on the Main Page as the milestone article. On 27 December 2009, the German Wikipedia exceeded 1,000,000 articles, becoming the second Wikipedia language edition to do so. On 21 September 2010, the French Wikipedia exceeded 1,000,000 articles, becoming the third Wikipedia language edition to do so. On 12 December 2010, the English Wikipedia passed the 3,500,000-article mark. On 22 November 2011, Wikipedia exceeded 20 million articles across all 282 language editions. On 7 November 2011, the German Wikipedia exceeded 100 million page edits. On 24 November 2011, the English Wikipedia exceeded 500 million page edits. On 17 December 2011, the Dutch Wikipedia exceeded 1,000,000 articles, becoming the fourth Wikipedia language edition to do so. On 13 July 2012, the English Wikipedia exceeded 4,000,000 articles, with Izbat al-Burj. On 22 January 2013, the Italian Wikipedia exceeded 1,000,000 articles, becoming the fifth Wikipedia language edition to do so. On 11 May 2013, the Russian Wikipedia exceeded 1,000,000 articles, becoming the sixth Wikipedia language edition to do so. On 16 May 2013, the Spanish Wikipedia exceeded 1,000,000 articles, becoming the seventh Wikipedia language edition to do so. On 15 June 2013, the Swedish Wikipedia exceeded 1,000,000 articles, becoming the eighth Wikipedia language edition to do so. 
On 25 September 2013, the Polish Wikipedia exceeded 1,000,000 articles, becoming the ninth Wikipedia language edition to do so. On 21 October 2013, Wikipedia exceeded 30 million articles across all 287 language editions. On 17 December 2013, the French Wikipedia exceeded 100,000,000 page edits. On 25 April 2014, the English Wikipedia passed the 4,500,000 article mark. On 8 June 2014, the Waray Wikipedia exceeded 1,000,000 articles, becoming the tenth Wikipedia language edition to do so. On 15 June 2014, the Vietnamese Wikipedia exceeded 1,000,000 articles, becoming the eleventh Wikipedia language edition to do so. On 17 July 2014, the Cebuano Wikipedia exceeded 1,000,000 articles, becoming the twelfth Wikipedia language edition to do so. On 6 September 2015, the Swedish Wikipedia exceeded 2,000,000 articles, becoming the second Wikipedia language edition to do so. On 1 November 2015, the English Wikipedia exceeded 5,000,000 articles, with Persoonia terminalis, and it has over 125,000 editors who have made 1 or more edits in the past 30 days. On 1 February 2016, the Japanese Wikipedia exceeded 1,000,000 articles, becoming the thirteenth Wikipedia language edition to do so. On 14 February 2016, the Cebuano Wikipedia exceeded 2,000,000 articles, becoming the third Wikipedia language edition to do so. On 29 April 2016, the Swedish Wikipedia exceeded 3,000,000 articles, becoming the second Wikipedia language edition to do so. On 26 May 2016, Wikipedia exceeded 40 million articles across all 293 language editions. On 26 September 2016, the Cebuano Wikipedia exceeded 3,000,000 articles, becoming the third Wikipedia language edition to do so. On 19 November 2016, the German Wikipedia exceeded 2,000,000 articles, becoming the fourth Wikipedia language edition to do so. On 3 March 2017, the Cebuano Wikipedia exceeded 4,000,000 articles, becoming the second Wikipedia language edition to do so. On 6 July 2017, the Spanish Wikipedia exceeded 100,000,000 page edits. 
On 15 September 2017, the Russian Wikipedia exceeded 100,000,000 page edits. On 27 October 2017, the English Wikipedia passed the 5,500,000 article mark. On 13 April 2018, the Chinese Wikipedia exceeded 1,000,000 articles, becoming the fourteenth Wikipedia language edition to do so. On 27 June 2018, the Portuguese Wikipedia exceeded 1,000,000 articles, becoming the fifteenth Wikipedia language edition to do so. On 8 July 2018, the French Wikipedia exceeded 2,000,000 articles, becoming the fifth Wikipedia language edition to do so. On 14 October 2018, the Arabic Wikipedia exceeded 1,000,000 articles, becoming the sixteenth Wikipedia language edition to do so. On 9 March 2019, Wikipedia exceeded 50 million articles across all 309 language editions. On 23 January 2020, the English Wikipedia exceeded 6,000,000 articles, with Maria Elise Turner Lauder as the milestone article. On 9 March 2020, the Dutch Wikipedia exceeded 2,000,000 articles, becoming the sixth Wikipedia language edition to do so. On 23 March 2020, the Ukrainian Wikipedia exceeded 1,000,000 articles, becoming the seventeenth Wikipedia language edition to do so. On 1 July 2020, the Egyptian Arabic Wikipedia exceeded 1,000,000 articles, becoming the eighteenth Wikipedia language edition to do so. Fundraising Every year, Wikipedia runs a fundraising campaign to support its operations. One of the first fundraisers was held from 18 February 2005 to 1 March 2005, raising , which was more than expected. On 6 January 2006, the Q4 2005 fundraiser concluded, raising a total of just over . The 2007 fundraising campaign raised US$1.5 million from 44,188 donations. The 2008 fundraising campaign gained Wikipedia more than . The 2010 campaign was launched on 13 November 2010. The campaign raised . The 2011 campaign raised from more than one million donors. The 2012 campaign raised from around 1.2 million donors. Since then, donations income has risen every year, approaching $150 million in 2020/2021. 
In addition, the Wikimedia Endowment, an organizationally separate fundraising effort, reached $100 million in 2021, five years sooner than planned. External impact In 2007, Wikipedia was deemed fit to be used as a major source by the UK Intellectual Property Office in a Formula One trademark case ruling. Over time, Wikipedia gained recognition amongst more traditional media as a "key source" for major news events, such as the 2004 Indian Ocean earthquake and related tsunami, the 2008 American presidential election, and the 2007 Virginia Tech shooting. The latter article was accessed 750,000 times in two days, with newspapers local to the shootings adding that "Wikipedia has emerged as the clearinghouse for detailed information on the event." On 21 February 2007, Noam Cohen of The New York Times reported that some academics were banning the use of Wikipedia as a research tool. On 27 February 2007, an article in The Harvard Crimson newspaper reported that some professors at Harvard University included Wikipedia in their syllabi, but that there was a split in their perception of using Wikipedia. In July 2013, a large-scale study by four major universities identified the most contested articles on Wikipedia, finding that Israel, Adolf Hitler and God were more fiercely debated than any other subjects. Effect of biographical articles Because Wikipedia biographies are often updated as soon as new information comes to light, they are often used as a reference source on the lives of notable people. This has led to attempts to manipulate and falsify Wikipedia articles for promotional or defamatory purposes (see Controversies). It has also led to novel uses of the biographical material provided. Some notable people's lives have been affected by their Wikipedia biographies. November 2005: The Seigenthaler controversy occurred when a hoaxer asserted on Wikipedia that journalist John Seigenthaler had been involved in the Kennedy assassination of 1963. 
December 2006: German comedian Atze Schröder sued Arne Klempert, secretary of Wikimedia Deutschland, because he did not want his real name published on Wikipedia. Schröder later withdrew his complaint, but wanted his attorney's costs to be paid by Klempert; a court decided that the artist had to cover those costs himself. 16 February 2007: Turkish historian Taner Akçam was briefly detained upon arrival at Montréal–Pierre Elliott Trudeau International Airport because of false information on his Wikipedia biography claiming he was a terrorist. November 2008: The German Left Party politician Lutz Heilmann claimed that some remarks in his Wikipedia article damaged his reputation. He obtained a court order requiring Wikimedia Deutschland to take its wikipedia.de search portal offline. The result was a national outpouring of support for Wikipedia, more donations to Wikimedia Deutschland, and a rise in daily pageviews of Lutz Heilmann's article from a few dozen to half a million. Shortly afterwards, Heilmann asked the court to withdraw the order. December 2008: Wikimedia Nederland, the Dutch chapter, prevailed in a preliminary-injunction case brought by an entrepreneur who had been linked in "his" article with the criminal Willem Holleeder and wanted the article deleted. The judge in Utrecht accepted Wikimedia's assertion that it has no influence on the content of the Dutch Wikipedia. February 2009: When Karl Theodor Maria Nikolaus Johann Jacob Philipp Franz Joseph Sylvester Buhl-Freiherr von und zu Guttenberg became federal minister on 10 February 2009, an unregistered user added an eleventh given name, Wilhelm, to the article on the German Wikipedia. Numerous newspapers repeated it. When sceptical Wikipedians tried to remove "Wilhelm", the removal was reverted, with those newspaper reports cited as sources. This case, concerning Wikipedia's reliability and journalists copying from Wikipedia, became known as Falscher Wilhelm ("wrong Wilhelm"). 
May 2009: An article about the German journalist Richard Herzinger in the German Wikipedia was vandalized. An IP user added a claim that Herzinger, who wrote for Die Welt, was Jewish; a reviewer marked the revision as "sighted" (indicating that the article contained no vandalism). Herzinger complained to Wikipedians, who immediately deleted the assertion. According to Herzinger, who wrote about the incident in a newspaper article, he is regularly called a Jew by right-wing extremists due to his perceived pro-Israel stance. October 2009: In 1990, the German actor Walter Sedlmayr was murdered. Years later, when the two murderers were released from prison, German law prohibited the media from mentioning their names. The men's lawyer also sent the Wikimedia Foundation a cease-and-desist letter requesting that the men's names be removed from the English Wikipedia. Early roles of Wales and Sanger Sanger played an important role in the early stages of creating Wikipedia. Wales says that Sanger was his subordinate employee. Sanger brought the wiki concept to Wales and suggested it be applied to Nupedia; after some initial skepticism, Wales agreed to try it. It was Jimmy Wales, along with other people, who came up with the broader idea of an open-source, collaborative encyclopedia that would accept contributions from ordinary people, and it was Wales who invested in it. Wales stated in October 2001 that "Larry had the idea to use Wiki software." Sanger coined the portmanteau "Wikipedia" as the project name. In summary, Larry Sanger conceived of a wiki-based encyclopedia as a strategic solution to Nupedia's inefficiency problems. In terms of project roles, Sanger spearheaded and pursued the project as its leader in its first year, and did most of the early work in formulating policies (including "Ignore all rules" and "Neutral point of view") and building up the community. 
Upon his departure in March 2002, Sanger emphasized that the main issues were the cessation of Bomis's funding for his role, which was not viable part-time, and his changing personal priorities; by 2004, however, the two had drifted apart and Sanger had become more critical. Two weeks after the launch of Citizendium, Sanger criticized Wikipedia, describing it as "broken beyond repair". By 2005, three years after Sanger's departure, Wales had begun to dispute Sanger's role in the project. In 2005, Wales described himself simply as the founder of Wikipedia; however, according to Brian Bergstein of the Associated Press, "Sanger has long been cited as a co-founder." There is evidence that Sanger was called co-founder, along with Wales, as early as 2001, and he is referred to as such in early Wikipedia press releases and Wikipedia articles and in a September 2001 New York Times article for which both were interviewed. In 2006, Wales said, "He used to work for me [...] I don't agree with calling him a co-founder, but he likes the title"; nonetheless, before January 2004, Wales did not dispute Sanger's status as co-founder and, indeed, identified himself as "co-founder" as late as August 2002. In Sanger's introductory message to the Nupedia mailing list, he said that Jimmy Wales "contacted me and asked me to apply as editor-in-chief of Nupedia. [...] He had had the idea for Nupedia since at least last fall. He tells me that, when thinking about people (particularly philosophers) he knew who could manage this sort of long-term project, he thought I would be perfect for the job. This is indeed my dream job". As of March 2007, Wales emphasized this employer–employee relationship and his ultimate authority, terming himself Wikipedia's sole founder, while Sanger emphasized their statuses as co-founders, referencing earlier versions of Wikipedia pages (2004, 2006), press releases (2002–2004), and media coverage from the time of his involvement routinely terming them in this manner. 
Controversies January 2001: Licensing and structure. After a partial breakdown of discussions with Bomis, Richard Stallman announced GNUpedia as a competing project. Besides having a nearly identical name, it was functionally very similar to Nupedia/Wikipedia (Nupedia had launched in March 2000 but had as yet published very few articles; Wikipedia was intended as a source of seed articles for it). The goals and methods of GNUpedia were nearly identical to Wikipedia's: anyone can contribute, small contributions are welcome, plan on taking years, focus narrowly on encyclopedic content as the primary goal, anyone can read articles, anyone can mirror articles, anyone can translate articles, use libre-licensed code to run the site, encourage peer review, and rely primarily on volunteers. GNUpedia was roughly intended to be a combination of Wikipedia and Wikibooks. The main exceptions were: a strong prohibition against *any* sort of centralized control ("[must not be] written under the direction of a single organization, which made all decisions about the content, and... published in a centralized fashion. ...we dare not allow any organization to decide what counts as part of [our encyclopedia]"); in particular, deletionists were not allowed, and editing an article would require forking it, making a change, and then saving the result as a 'new' article on the same topic; an assumption of attribution for articles (rather than anonymity by default), a requirement of attribution for quotations, and allowing original authors to control straightforward translations; and the coexistence of multiple competing articles on a topic. In particular, the idea was to have a set of N articles covering the Tiananmen Square protests of 1989, with some to-be-determined mechanism for readers to endorse/rank/like/plus/star the version of the article they found best. 
Given the structure above, where every topic (especially controversial ones) might have a thousand articles purporting to be *the* GNUpedia article about Sarah Palin, Stallman explicitly rejected the idea of a centralized website that would specify which article of those thousand was worth reading. Instead of an official catalogue, the plan was to rely on search engines at first (the reader would begin by googling "gnupedia sarah palin"), and then eventually if necessary construct catalogues according to the same principles as articles were constructed. In Wikipedia, there is an official central website for each language (en.wikipedia.org), and an official catalogue of sorts (category-lists and lists-of-lists), but search engines still provide about 60% of the inbound traffic. The goals which led to GNUpedia were published at least as early as 18 December 2000, and these exact goals were finalized on the 12th and 13th of January 2001, albeit with a copyright of 1999, from when Stallman had first started considering the problem. The only sentence added between 18 December and the unveiling of GNUpedia the week of 12–16 January was this: "The GNU Free Documentation License would be a good license to use for courses." GNUpedia was "formally" announced on the slashdot website, on 16 January, the same day that their mailing list first went online with a test-message. Wales posted to the list on 17 January, the first full day of messages, explaining the discussions with Stallman concerning the change in Nupedia content-licensing, and suggesting cooperation. Stallman himself first posted on 19 January, and, in his second post on 22 January, mentioned that discussions about merging Wikipedia and GNUpedia were ongoing. Within a couple of months, Wales had changed his email signature from the open source encyclopedia to the free encyclopedia; both Nupedia and Wikipedia had adopted the GFDL; and the merger of GNUpedia into Wikipedia was effectively accomplished. 
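The fork-based editing rule described above implies a data model quite different from a wiki's edit-in-place pages: every topic accumulates coexisting articles, and an "edit" simply adds another one. A toy sketch of that append-only design (the class and method names are hypothetical illustrations of the described plan, not actual GNUpedia code):

```python
from collections import defaultdict

class ForkingStore:
    """Toy append-only article store in the spirit of GNUpedia's design:
    a topic maps to many coexisting articles, and an 'edit' forks an
    existing article rather than changing it in place."""

    def __init__(self):
        self.articles = defaultdict(list)  # topic -> list of article texts

    def publish(self, topic, text):
        """Add a new article under a topic; return its index."""
        self.articles[topic].append(text)
        return len(self.articles[topic]) - 1

    def fork_and_edit(self, topic, index, edit):
        """Fork article `index`, apply `edit`, and publish the result.
        The original article is never modified or deleted."""
        original = self.articles[topic][index]
        return self.publish(topic, edit(original))

store = ForkingStore()
topic = "Tiananmen Square protests of 1989"
first = store.publish(topic, "First account of the events...")
store.fork_and_edit(topic, first, lambda text: text + " With corrections.")
print(len(store.articles[topic]))  # the original and the fork coexist
```

Under this model the hard problem moves from edit conflicts to discovery and ranking, which is exactly why GNUpedia needed the reader-endorsement mechanism and external search engines described in the text.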
November 2001: Wales announced that advertising would soon begin on Wikipedia, starting in early or mid-2002. Instead, in early 2002, chief editor Larry Sanger was laid off, since his salary was the largest expense in the operation of Wikipedia. By September 2002, Wales had publicly stated: "There are currently no plans for advertising on Wikipedia." By June 2003, the Wikimedia Foundation was formally incorporated. The Foundation is explicitly against paid advertising, although it does "internally" advertise Wikimedia Foundation fundraising events on Wikipedia. The by-laws of the Wikimedia Foundation do not explicitly prohibit the adoption of a broader advertising policy, if such an action is deemed necessary; such by-laws are subject to vote. 2003: No notable controversies occurred. 2004: No notable controversies occurred. January 2005: The fake charity QuakeAID, in the month following the 2004 Indian Ocean earthquake, attempted to use a Wikipedia page for promotional purposes. October 2005: Alan Mcilwraith was exposed as a fake war hero through a Wikipedia page. November 2005: The Seigenthaler controversy caused Brian Chase to resign from his employment, after his identity was ascertained by Daniel Brandt of Wikipedia Watch. Following this, the scientific journal Nature undertook a peer-reviewed study to test articles in Wikipedia against their equivalents in Encyclopædia Britannica, and concluded that they were comparable in terms of accuracy. Britannica rejected the study's methodology and its conclusion. Nature declined to apologise, instead asserting the reliability of its study and rejecting the criticisms. Early-to-mid-2006: The congressional aides biography scandals were publicized, whereby several political aides were caught trying to influence the Wikipedia biographies of several politicians. 
The aides removed undesirable information (including pejorative quotes or broken campaign promises), added favorable information or "glowing" tributes, or replaced articles in part or in whole with staff-authored biographies. The staff of at least five politicians were implicated: Marty Meehan, Norm Coleman, Conrad Burns, Joe Biden and Gil Gutknecht. In a separate but similar incident, the campaign manager for Cathy Cox, Morton Brilliant, resigned after being found to have added negative information to the Wikipedia entries of political opponents. Following media publicity, the incidents tapered off around August 2006. July 2006: Joshua Gardner was exposed as a fake Duke of Cleveland through a Wikipedia page. January 2007: English-language Wikipedians in Qatar were briefly blocked from editing, following a spate of vandalism, by an administrator who did not realize that the country's internet traffic is routed through a single IP address. Multiple media sources promptly declared that Wikipedia was banning Qatar from the site. On 23 January 2007, a Microsoft employee offered to pay Rick Jelliffe to review and change certain Wikipedia articles regarding an open document standard that rivalled a Microsoft format. In February 2007, The New Yorker magazine issued a rare editorial correction stating that a prominent English Wikipedia editor and administrator known as "Essjay" had invented a persona using fictitious credentials. The editor, Ryan Jordan, had become a Wikia employee in January 2007 and divulged his real name; this was noticed by Daniel Brandt of Wikipedia Watch, and communicated to the original article author. (See: Essjay controversy) February 2007: Fuzzy Zoeller sued a Miami firm because defamatory information was added to his Wikipedia biography in an anonymous edit that came from their network. 
16 February 2007: Turkish historian Taner Akçam was briefly detained upon arrival at a Canadian airport because of false information on his biography indicating that he was a terrorist. In June 2007, an anonymous user posted hoax information that, by coincidence, foreshadowed the Chris Benoit murder-suicide, hours before the bodies were found by investigators. The discovery of the edit attracted widespread media attention and was first covered on the sister site Wikinews. In October 2007, in their obituaries of the recently deceased TV theme composer Ronnie Hazlehurst, many British media organisations reported that he had co-written the S Club 7 song "Reach". In fact, he had not; the claim was traced to a hoax edit to Hazlehurst's Wikipedia article. In February 2007, Barbara Bauer, a literary agent, sued Wikimedia for defamation and for causing harm to her business, the Barbara Bauer Literary Agency. In Bauer v. Glatzer, Bauer claimed that information on Wikipedia critical of her abilities as a literary agent caused this harm. The Electronic Frontier Foundation defended Wikipedia and moved to dismiss the case on 1 May 2008. The case against the Wikimedia Foundation was dismissed on 1 July 2008. On 14 July 2009, the National Portrait Gallery issued a cease-and-desist letter alleging breach of copyright against a Wikipedia editor who had downloaded more than 3,000 high-resolution images from the NPG website and placed them on Wikimedia Commons. See National Portrait Gallery and Wikimedia Foundation copyright dispute for more. In April and May 2010, there was controversy over the hosting and display of sexual drawings and pornographic images, including images of children, on Wikipedia. It led to the mass removal of pornographic content from Wikimedia Foundation sites. 
In November 2012, Lord Justice Leveson wrote in his report on British press standards, "The Independent was founded in 1986 by the journalists Andreas Whittam Smith, Stephen Glover and Brett Straub..." He had used the Wikipedia article for The Independent newspaper as his source, but an act of vandalism had replaced Matthew Symonds (a genuine co-founder) with Brett Straub (an unknown character). The Economist said of the Leveson report, "Parts of it are a scissors-and-paste job culled from Wikipedia." In late 2013, commentators noted the reappearance of many of the pornographic images deleted from Wikipedia since 2010. Notable forks and derivatives A number of forks and derivatives of Wikipedia exist, and other sites also use the MediaWiki software and concept popularized by Wikipedia; no comprehensive list of them is maintained. Specialized foreign-language forks using the Wikipedia concept include Enciclopedia Libre (Spanish), Wikiweise (German), WikiZnanie (Russian), Susning.nu (Swedish), and Baidu Baike (Chinese). Some of these (such as Enciclopedia Libre) use the GFDL or compatible licenses as used by Wikipedia, leading to exchange of material with their respective language Wikipedias. In 2006, Larry Sanger founded Citizendium, based upon a modified version of MediaWiki. The site said it aimed 'to improve on the Wikipedia model with "gentle expert oversight", among other things'. (See also Nupedia.) Publication on other media The German Wikipedia was the first to be partly published in media other than the internet, including releases on CD in November 2004 and more extended versions on CD or DVD in April 2005 and December 2006. In December 2005, the publisher Zenodot Verlagsgesellschaft mbH, a sister company of Directmedia, published a 139-page book explaining Wikipedia, its history and policies, which was accompanied by a 7.5 GB DVD containing 300,000 articles and 100,000 images from the German Wikipedia. 
Originally, Directmedia also announced plans to print the German Wikipedia in its entirety, in 100 volumes of 800 pages each. Publication was due to begin in October 2006 and finish in 2010; in March 2006, however, the project was called off. In September 2008, Bertelsmann published a 1,000-page volume with a selection of popular German Wikipedia articles. Bertelsmann voluntarily paid 1 euro per copy sold to Wikimedia Deutschland. The first CD version containing a selection of articles from the English Wikipedia was published in April 2006 by SOS Children as the 2006 Wikipedia CD Selection. In April 2007, "Wikipedia Version 0.5", a CD containing around 2,000 articles selected from the online encyclopedia, was published by the Wikimedia Foundation and Linterweb. The selection of articles was based on both the quality of the online version and the importance of the topic. This CD version was created as a test case in preparation for a DVD version including far more articles. The CD version can be purchased online, downloaded as a DVD image file or Torrent file, or accessed online at the project's website. A free software project has also been launched to make a static version of Wikipedia available for use on iPods. The "Encyclopodia" project was started around March 2006 and can currently be used on 1st- to 4th-generation iPods. Lawsuits In limited ways, the Wikimedia Foundation is protected by Section 230 of the Communications Decency Act. In the defamation action Bauer et al. v. Glatzer et al., it was held that Wikimedia had no case to answer because of this section. A similar law in France caused a lawsuit to be dismissed in October 2007. In 2013, a German appeals court (the Higher Regional Court of Stuttgart) ruled that Wikipedia is a "service provider" not a "content provider", and as such is immune from liability as long as it takes down content that is accused of being illegal. 
See also History of wikis The Wikipedia Revolution, 2009 book by Andrew Lih Predictions of the end of Wikipedia References External links Wikipedia records and archives Wikipedia's project files contain a large quantity of reference and archive material. Useful internal resources on Wikipedia history include: Historical summaries :Category:Wikipedia years – historical events by year Wikipedia:Wikipedia's oldest articles Wikipedia:BrilliantProse – predecessor of Wikipedia:Featured articles and Wikipedia:Good articles History of Wikipedia – from the Wikipedia:Meta Wikipedia:Historic debates Wikipedia:Wikipedia records meta:Wikimedia News – news and milestones index from all Wikipedias Wikipedia:History of Wikipedia bots Milestones, size and statistics Stats.wikimedia.org – the Wikimedia Foundation's main interface for all project statistics, including individual and combined Wikipedias. Wikipedia milestones Wikipedia:Milestones (inactive) Wikipedia:Statistics Wikipedia:Size of Wikipedia Discussion and debate archives Wikipedia:Mailing lists Wikipedia:Announcement archive Other Wikipedia:CamelCase and Wikipedia Nostalgia Wikipedia – a snapshot of Wikipedia from 20 December 2001, running a later version of MediaWiki for security reasons but using a skin that looks like the software of the time Larry Sanger on the origins of Wikipedia Wikipedia:Volunteer Fire Department – handling of major editorial influx. Disbanded when no longer needed (2004) Wikipedia:Magnus Manske Day – MediaWiki software goes live into production MediaWiki history Third party The Free Universal Encyclopedia and Learning Resource – Free Software Foundation endorsement of Nupedia (later updated to include Wikipedia). 1999. Early Wikipedia snapshot via Internet Archive. 28 February 2001. New York Times on Wikipedia. September 2001. Larry Sanger. "The Early History of Nupedia and Wikipedia: A Memoir" and "Part II". Slashdot. 18 April 2005 – 19 April 2005. 
Giles, Jim, "Internet encyclopaedias go head to head". Nature comparison between Wikipedia and Britannica. 14 December 2005 "Fatally Flawed: Refuting the recent study on encyclopedic accuracy by the journal Nature". Encyclopædia Britannica. March 2006. Nature's responses to Encyclopædia Britannica. Nature. 23 March 2006. Articles containing video clips Wikipedia Encyclopedism Jimmy Wales
https://en.wikipedia.org/wiki/Hydropower
Hydropower
Hydropower (from Greek ὕδωρ, "water"), also known as water power, is the use of falling or fast-running water to produce electricity or to power machines. This is achieved by converting the gravitational potential or kinetic energy of a water source into power. Hydropower is a method of sustainable energy production. Since ancient times, hydropower from watermills has been used as a renewable energy source for irrigation and the operation of mechanical devices, such as gristmills, sawmills, textile mills, trip hammers, dock cranes, domestic lifts, and ore mills. A trompe, which produces compressed air from falling water, is sometimes used to power other machinery at a distance. Hydropower is now used principally for hydroelectric power generation, and is also applied as one half of an energy storage system known as pumped-storage hydroelectricity. Hydropower is an attractive alternative to fossil fuels as it does not directly produce carbon dioxide or other atmospheric pollutants and it provides a relatively consistent source of power. Nonetheless, it has economic, sociological, and environmental downsides and requires a sufficiently energetic source of water, such as a river or elevated lake. International institutions such as the World Bank view hydropower as a low-carbon means for economic development. History Evidence suggests that the fundamentals of hydropower date to ancient Greek civilization. Other evidence indicates that the waterwheel independently emerged in China around the same period. Evidence of water wheels and watermills dates to the ancient Near East in the 4th century BC. Moreover, evidence indicates the use of hydropower in irrigation machines by ancient civilizations such as Sumer and Babylonia. Studies suggest that the water wheel was the initial form of water power, driven by either humans or animals. In the Roman Empire, water-powered mills were described by Vitruvius in the first century BC. 
The Barbegal mill, located in modern-day France, had 16 water wheels processing up to 28 tons of grain per day. Roman waterwheels were also used for sawing marble, as at the Hierapolis sawmill of the late 3rd century AD. Such sawmills had a waterwheel that drove two crank-and-connecting-rod mechanisms to power two saws. The same mechanism appears in two 6th-century Eastern Roman sawmills excavated at Ephesus and Gerasa respectively. The crank and connecting rod mechanism of these Roman watermills converted the rotary motion of the waterwheel into the linear movement of the saw blades. Water-powered trip hammers and bellows in China, during the Han dynasty (202 BC – 220 AD), were initially thought to be powered by water scoops. However, some historians suggested that they were powered by waterwheels, theorizing that water scoops would not have had the motive force to operate blast furnace bellows. Many texts describe the Hun waterwheel; some of the earliest are the Jijiupian dictionary of 40 BC, Yang Xiong's text known as the Fangyan of 15 BC, and the Xin Lun, written by Huan Tan about 20 AD. It was also during this time that the engineer Du Shi (c. AD 31) applied the power of waterwheels to piston-bellows in forging cast iron. Another example of the early use of hydropower is seen in hushing, the use of the power of a wave of water released from a tank to extract metal ores. The method was first used at the Dolaucothi Gold Mines in Wales from 75 AD onwards, and was further developed in Spain at mines such as Las Médulas. Hushing was also widely used in Britain in the medieval and later periods to extract lead and tin ores. It later evolved into hydraulic mining when used during the California Gold Rush in the 19th century. The Islamic Empire spanned a large region, mainly in Asia and Africa, along with other surrounding areas. 
During the Islamic Golden Age and the Arab Agricultural Revolution (8th–13th centuries), hydropower was widely used and developed. Early uses of tidal power emerged along with large hydraulic factory complexes. A wide range of water-powered industrial mills were used in the region, including fulling mills, gristmills, paper mills, hullers, sawmills, ship mills, stamp mills, steel mills, sugar mills, and tide mills. By the 11th century, every province throughout the Islamic Empire had these industrial mills in operation, from Al-Andalus and North Africa to the Middle East and Central Asia. Muslim engineers also used water turbines and employed gears in watermills and water-raising machines. They also pioneered the use of dams as a source of water power, used to provide additional power to watermills and water-raising machines. Furthermore, in his book The Book of Knowledge of Ingenious Mechanical Devices, the Muslim mechanical engineer Al-Jazari (1136–1206) described designs for 50 devices. Many of these devices were water-powered, including clocks, a device to serve wine, and five devices to lift water from rivers or pools, three of them animal-powered and one powered by either animal or water. Moreover, they included an endless belt with jugs attached, a cow-powered shadoof (a crane-like irrigation tool), and a reciprocating device with hinged valves. In the 19th century, French engineer Benoît Fourneyron developed the first hydropower turbine. This device was implemented in the commercial plant of Niagara Falls in 1895 and it is still in operation. In 1878, English engineer William Armstrong built and operated the first private electrical power station, located at his house at Cragside in Northumberland, England. Earlier, in 1753, the French engineer Bernard Forest de Bélidor had published his book Architecture Hydraulique, which described vertical- and horizontal-axis hydraulic machines. 
The growing demand brought by the Industrial Revolution would drive development as well. At the beginning of the Industrial Revolution in Britain, water was the main power source for new inventions such as Richard Arkwright's water frame. Although water power gave way to steam power in many of the larger mills and factories, it was still used during the 18th and 19th centuries for many smaller operations, such as driving the bellows in small blast furnaces (e.g. the Dyfi Furnace) and gristmills, such as those built at Saint Anthony Falls, which uses the 50-foot (15 m) drop in the Mississippi River. Technological advances moved the open water wheel into an enclosed turbine or water motor. In 1848, the British-American engineer James B. Francis, head engineer of Lowell's Locks and Canals company, improved on these designs to create a turbine with 90% efficiency. He applied scientific principles and testing methods to the problem of turbine design. His mathematical and graphical calculation methods allowed the confident design of high-efficiency turbines to exactly match a site's specific flow conditions. The Francis reaction turbine is still in use. In the 1870s, deriving from uses in the California mining industry, Lester Allan Pelton developed the high-efficiency Pelton wheel impulse turbine, which used hydropower from the high-head streams characteristic of the Sierra Nevada. Calculating the amount of available power A hydropower resource can be evaluated by its available power. Power is a function of the hydraulic head and volumetric flow rate. The head is the energy per unit weight (or unit mass) of water. The static head is proportional to the difference in height through which the water falls. Dynamic head is related to the velocity of moving water. Each unit of water can do an amount of work equal to its weight times the head. 
The power available from falling water can be calculated from the flow rate and density of water, the height of fall, and the local acceleration due to gravity:

P = η ρ Q g Δh = η ṁ g Δh

where
P is the useful power output (in watts)
η ("eta") is the efficiency of the turbine (dimensionless)
ṁ is the mass flow rate (in kilograms per second), equal to ρ Q
ρ ("rho") is the density of water (in kilograms per cubic metre)
Q is the volumetric flow rate (in cubic metres per second)
g is the acceleration due to gravity (in metres per second per second)
Δh ("Delta h") is the difference in height between the outlet and inlet (in metres)

To illustrate, the power output of a turbine that is 85% efficient, with a flow rate of 80 cubic metres per second (2,800 cubic feet per second) and a head of 145 metres (480 feet), is 97 megawatts:

P = 0.85 × 1,000 kg/m³ × 80 m³/s × 9.81 m/s² × 145 m ≈ 9.7 × 10⁷ W = 97 MW

Operators of hydroelectric stations compare the total electrical energy produced with the theoretical potential energy of the water passing through the turbine to calculate efficiency. Procedures and definitions for the calculation of efficiency are given in test codes such as ASME PTC 18 and IEC 60041. Field testing of turbines is used to validate the manufacturer's efficiency guarantee. Detailed calculation of the efficiency of a hydropower turbine accounts for the head lost due to flow friction in the power canal or penstock, rise in tailwater level due to flow, the location of the station and effect of varying gravity, the air temperature and barometric pressure, the density of the water at ambient temperature, and the relative altitudes of the forebay and tailbay. For precise calculations, errors due to rounding and the number of significant digits of constants must be considered. Some hydropower systems, such as water wheels, can draw power from the flow of a body of water without necessarily changing its height. In this case, the available power is the kinetic energy of the flowing water. Overshot water wheels can efficiently capture both types of energy. 
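As a rough cross-check of the 97-megawatt worked example, the power equation can be evaluated in a short script. The function name and the assumed water density of 1,000 kg/m³ are illustrative choices, not from the source:

```python
# Sketch of the hydropower equation P = eta * rho * Q * g * dh,
# applied to the example in the text: an 85% efficient turbine,
# 80 m^3/s of flow, and 145 m of head.

def hydro_power_watts(efficiency, flow_m3_per_s, head_m,
                      water_density=1000.0, gravity=9.81):
    """Useful power (in watts) from falling water."""
    return efficiency * water_density * flow_m3_per_s * gravity * head_m

p = hydro_power_watts(0.85, 80.0, 145.0)
print(f"{p / 1e6:.1f} MW")  # approximately 97 MW, matching the text
```

The result, about 96.7 MW, rounds to the 97 MW quoted above.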
The flow in a stream can vary widely from season to season. The development of a hydropower site requires analysis of flow records, sometimes spanning decades, to assess the reliable annual energy supply. Dams and reservoirs provide a more dependable source of power by smoothing seasonal changes in water flow. However, reservoirs have a significant environmental impact, as does alteration of naturally occurring streamflow. Dam design must account for the worst-case, "probable maximum flood" that can be expected at the site; a spillway is often included to route flood flows around the dam. A computer model of the hydraulic basin and rainfall and snowfall records are used to predict the maximum flood. Disadvantages and limitations Some disadvantages of hydropower have been identified. People who live near a hydro plant site may be displaced during construction or when the reservoir banks become unstable. Another potential disadvantage is that culturally or religiously significant sites may block construction. Dams and reservoirs can have major negative impacts on river ecosystems, such as preventing some animals from traveling upstream, cooling and de-oxygenating water released downstream, and loss of nutrients due to the settling of particulates. River sediment builds river deltas, and dams prevent that sediment from restoring what is lost to erosion. Large, deep dam and reservoir plants cover large areas of land, which causes greenhouse gas emissions from rotting underwater vegetation. Furthermore, although at lower levels than other renewable energy sources, hydropower has been found to produce methane, a greenhouse gas. This occurs when organic matter accumulates at the bottom of the reservoir because of the deoxygenation of the water, which triggers anaerobic digestion. Furthermore, studies have found that the construction of dams and reservoirs can result in habitat loss for some aquatic species. Dam failures can have catastrophic effects, including loss of life, destruction of property and pollution of land. 
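The flow-record analysis described earlier in this section (estimating how much energy a site can reliably deliver each year) can be sketched in a few lines. All figures below, the monthly flows, head, efficiency, and turbine design flow, are hypothetical assumptions for illustration only:

```python
# Illustrative sketch: estimating annual energy from a flow record.
# Flow above the turbine's design flow is spilled and produces nothing;
# a "firm" estimate would use the driest year on record.

GRAVITY = 9.81            # m/s^2
DENSITY = 1000.0          # kg/m^3, water
SECONDS_PER_MONTH = 2.63e6

def annual_energy_gwh(monthly_flows, head_m, efficiency, design_flow):
    usable = (min(q, design_flow) for q in monthly_flows)  # spill the excess
    joules = sum(efficiency * DENSITY * q * GRAVITY * head_m * SECONDS_PER_MONTH
                 for q in usable)
    return joules / 3.6e12  # joules -> gigawatt-hours

# Hypothetical record: one wet year and one dry year (m^3/s, month by month)
wet = [90, 110, 150, 140, 80, 50, 30, 25, 30, 45, 60, 85]
dry = [40, 55, 70, 65, 45, 30, 20, 15, 20, 30, 35, 40]

for label, year in (("wet", wet), ("dry", dry)):
    e = annual_energy_gwh(year, head_m=100, efficiency=0.9, design_flow=100)
    print(label, round(e, 1), "GWh")
```

The dry-year figure is the kind of number used to assess the reliable annual energy supply; the gap between the two years shows why a reservoir that smooths seasonal flow raises dependable output.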
Applications Mechanical power Watermills Compressed air A plentiful head of water can be made to generate compressed air directly without moving parts. In these designs, a falling column of water is deliberately mixed with air bubbles generated through turbulence or a venturi pressure reducer at the high-level intake. This allows it to fall down a shaft into a subterranean, high-roofed chamber where the now-compressed air separates from the water and becomes trapped. The height of the falling water column maintains compression of the air in the top of the chamber, while an outlet, submerged below the water level in the chamber, allows water to flow back to the surface at a lower level than the intake. A separate outlet in the roof of the chamber supplies the compressed air. A facility on this principle was built on the Montreal River at Ragged Chutes near Cobalt, Ontario in 1910 and supplied 5,000 horsepower to nearby mines. Electricity Hydroelectricity is the biggest hydropower application. Hydroelectricity generates about 15% of global electricity and provides at least 50% of the total electricity supply for more than 35 countries. Hydroelectricity generation starts with converting either the potential energy of water, present due to the site's elevation, or the kinetic energy of moving water into electrical energy. Hydroelectric power plants vary in the way they harvest energy. One type involves a dam and a reservoir. The water in the reservoir is available on demand to generate electricity by passing through channels that connect the dam to the reservoir. The water spins a turbine, which is connected to the generator that produces electricity. The other type is called a run-of-river plant. In this case, a barrage is built to control the flow of water in the absence of a reservoir. The run-of-river power plant needs continuous water flow and therefore has less ability to provide power on demand. 
The kinetic energy of flowing water is the main source of energy. Both designs have limitations. For example, dam construction can result in discomfort to nearby residents. The dam and reservoir occupy a relatively large amount of space, which may be opposed by nearby communities. Moreover, reservoirs can potentially have major environmental consequences such as harming downstream habitats. On the other hand, the limitation of a run-of-river project is decreased efficiency of electricity generation, because the process depends on the speed of the seasonal river flow. This means that the rainy season increases electricity generation compared to the dry season. The size of hydroelectric plants can vary from small plants, called micro hydro, to large plants that supply power to a whole country. As of 2019, the five largest power stations in the world are conventional hydroelectric power stations with dams. Hydroelectricity can also be used to store energy in the form of potential energy between two reservoirs at different heights with pumped storage. Water is pumped uphill into reservoirs during periods of low demand to be released for generation when demand is high or system generation is low. Other forms of electricity generation with hydropower include tidal stream generators, which use tidal energy from oceans, rivers, and human-made canal systems to generate electricity. Rain power Rain has been referred to as "one of the last unexploited energy sources in nature. When it rains, billions of litres of water can fall, which have enormous electric potential if used in the right way." Research is being done into different methods of generating power from rain, such as using the energy in the impact of raindrops. This research is in its very early stages, with new and emerging technologies being tested, prototyped and created. Such power has been called rain power. 
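To give a sense of scale for rain power, the potential energy of collected rooftop runoff can be estimated with the same E = η m g h relation used for other hydropower. Every figure in this sketch is an assumption for illustration (185 m² roof, 1 m of rain per year, 10 m of head, a 70% efficient microturbine), not a value from the source:

```python
# Back-of-envelope estimate (assumed figures) of yearly rain power from
# a rooftop: the runoff is collected and dropped through a small turbine.

def rain_energy_kwh(roof_area_m2, annual_rainfall_m, head_m, efficiency):
    volume_m3 = roof_area_m2 * annual_rainfall_m   # collected water per year
    mass_kg = volume_m3 * 1000.0                   # water density 1000 kg/m^3
    joules = efficiency * mass_kg * 9.81 * head_m  # E = eta * m * g * h
    return joules / 3.6e6                          # joules -> kWh

# Assumptions: 1 m of rain per year, 10 m head, 70% efficient turbine
print(round(rain_energy_kwh(185, 1.0, 10.0, 0.7), 1), "kWh/year")
```

The result is a few kilowatt-hours per year, which is the same order of magnitude as published rooftop estimates and illustrates why rain power remains a niche source.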
One method in which this has been attempted is by using hybrid solar panels called "all-weather solar panels" that can generate electricity from both the sun and the rain. According to zoologist and science and technology educator Luis Villazon, "A 2008 French study estimated that you could use piezoelectric devices, which generate power when they move, to extract 12 milliwatts from a raindrop. Over a year, this would amount to less than 0.001 kWh per square metre – enough to power a remote sensor." Villazon suggested a better application would be to collect the water from fallen rain and use it to drive a turbine, with an estimated energy generation of 3 kWh per year for a 185 m² roof. A microturbine-based system created by three students from the Technological University of Mexico has been used to generate electricity. The Pluvia system "uses the stream of rainwater runoff from houses' rooftop rain gutters to spin a microturbine in a cylindrical housing. Electricity generated by that turbine is used to charge 12-volt batteries." The term rain power has also been applied to hydropower systems which include the process of capturing the rain. See also Deep water source cooling Gravitation water vortex power plant Hydraulic efficiency Hydraulic ram International Hydropower Association Low head hydro power Marine current power Marine energy Ocean thermal energy conversion Osmotic power Wave power Notes References External links International Hydropower Association International Centre for Hydropower (ICH) hydropower portal with links to numerous organizations related to hydropower worldwide IEC TC 4: Hydraulic turbines (International Electrotechnical Commission – Technical Committee 4) IEC TC 4 portal with access to scope, documents and TC 4 website Micro-hydro power, Adam Harvey, 2004, Intermediate Technology Development Group. 
Retrieved 1 January 2005 Microhydropower Systems, US Department of Energy, Energy Efficiency and Renewable Energy, 2005 Power station technology Energy conversion Hydraulic engineering Sustainable technologies
https://en.wikipedia.org/wiki/Horse%20breed
Horse breed
A horse breed is a selectively bred population of domesticated horses, often with pedigrees recorded in a breed registry. However, the term is sometimes used in a broader sense to define landrace animals of a common phenotype located within a limited geographic region, or even feral “breeds” that are naturally selected. Depending on definition, hundreds of "breeds" exist today, developed for many different uses. Horse breeds are loosely divided into three categories based on general temperament: spirited "hot bloods" with speed and endurance; "cold bloods," such as draft horses and some ponies, suitable for slow, heavy work; and "warmbloods," developed from crosses between hot bloods and cold bloods, often focusing on creating breeds for specific riding purposes, particularly in Europe. Horse breeds are groups of horses with distinctive characteristics that are transmitted consistently to their offspring, such as conformation, color, performance ability, or disposition. These inherited traits are usually the result of a combination of natural crosses and artificial selection methods aimed at producing horses for specific tasks. Certain breeds are known for certain talents. For example, Standardbreds are known for their speed in harness racing. Some breeds have been developed through centuries of crossings with other breeds, while others, such as the Morgan horse, originated via a single sire from which all current breed members descend. More than 300 horse breeds exist in the world today. Origin of breeds Modern horse breeds developed in response to a need for "form to function", the necessity to develop certain physical characteristics to perform a certain type of work. 
Thus, powerful but refined breeds such as the Andalusian or the Lusitano developed in the Iberian peninsula as riding horses that also had a great aptitude for dressage, while heavy draft horses such as the Clydesdale and the Shire developed out of a need to perform demanding farm work and pull heavy wagons. Ponies of all breeds originally developed mainly from the need for a working animal that could fulfill specific local draft and transportation needs while surviving in harsh environments. However, by the 20th century, many pony breeds had Arabian and other blood added to make a more refined pony suitable for riding. Other horse breeds developed specifically for light agricultural work, heavy and light carriage and road work, various equestrian disciplines, or simply as pets. Purebreds and registries Horses have been selectively bred since their domestication. However, the concept of purebred bloodstock and a controlled, written breed registry only became of significant importance in modern times. Today, the standards for defining and registration of different breeds vary. Sometimes, purebred horses are called Thoroughbreds, which is incorrect; "Thoroughbred" is a specific breed of horse, while a "purebred" is a horse (or any other animal) with a defined pedigree recognized by a breed registry. An early example of a people who practiced selective horse breeding was the Bedouin, who had a reputation for careful breeding practices, keeping extensive pedigrees of their Arabian horses and placing great value upon pure bloodlines. Though these pedigrees were originally transmitted by an oral tradition, written pedigrees of Arabian horses can be found that date to the 14th century. In the same period of the early Renaissance, the Carthusian monks of southern Spain bred horses and kept meticulous pedigrees of the best bloodstock; the lineage survives to this day in the Andalusian horse. 
One of the earliest formal registries was the General Stud Book for Thoroughbreds, which began in 1791 and traced back to the Arabian stallions imported to England from the Middle East that became the foundation stallions for the breed. Some breed registries have a closed stud book, where registration is based on pedigree, and no outside animals can gain admittance. For example, a registered Thoroughbred or Arabian must have two registered parents of the same breed. Other breeds have a partially closed stud book but still allow certain infusions from other breeds. For example, the modern Appaloosa must have at least one Appaloosa parent, but may also have a Quarter Horse, Thoroughbred, or Arabian parent, so long as the offspring exhibits appropriate color characteristics. The Quarter Horse normally requires both parents to be registered Quarter Horses, but allows "Appendix" registration of horses with one Thoroughbred parent, and the horse may earn its way to full registration by completing certain performance requirements. Open stud books exist for horse breeds that either have not yet developed a rigorously defined standard phenotype, or that register animals conforming to an ideal via a studbook selection process. Most of the warmblood breeds used in sport horse disciplines have open stud books to varying degrees. While pedigree is considered, outside bloodlines are admitted to the registry if the horses meet the set standard for the registry. These registries usually require a selection process involving judging of an individual animal's quality, performance, and conformation before registration is finalized. A few "registries," particularly some color breed registries, are very open and will allow membership of all horses that meet limited criteria, such as coat color and species, regardless of pedigree or conformation. Breed registries also differ in their acceptance or rejection of breeding technology. 
For example, all Jockey Club Thoroughbred registries require that a registered Thoroughbred be a product of a natural mating, so-called "live cover". A foal born of two Thoroughbred parents, but by means of artificial insemination or embryo transfer, cannot be registered in the Thoroughbred studbook. However, since the advent of DNA testing to verify parentage, most breed registries now allow artificial insemination, embryo transfer, or both. The high value of stallions has helped with the acceptance of these techniques because they allow a stallion to breed more mares with each "collection" and greatly reduce the risk of injury during mating. Cloning of horses is highly controversial, and at the present time most mainstream breed registries will not accept cloned horses, though several cloned horses and mules have been produced. Such restrictions have led to legal challenges in the United States, sometimes based on state law and sometimes on antitrust laws. Hybrids Horses can crossbreed with other equine species to produce hybrids. These hybrid types are not breeds, but they resemble breeds in that crosses between certain horse breeds and other equine species produce characteristic offspring. The most common hybrid is the mule, a cross between a "jack" (male donkey) and a mare. A related hybrid, the hinny, is a cross between a stallion and a jenny (female donkey). Most other hybrids involve the zebra (see Zebroid). With rare exceptions, most equine hybrids are sterile and cannot reproduce. A notable exception is hybrid crosses between horses and Equus ferus przewalskii, commonly known as Przewalski's horse. See also References Further reading
https://en.wikipedia.org/wiki/Horse%20breeding
Horse breeding
Horse breeding is reproduction in horses, and in particular the human-directed process of selectively breeding animals, especially purebred horses of a given breed. Planned matings can be used to produce specifically desired characteristics in domesticated horses. Furthermore, modern breeding management and technologies can increase the rate of conception, a healthy pregnancy, and successful foaling. Terminology The male parent of a horse, a stallion, is commonly known as the sire and the female parent, the mare, is called the dam. Both are genetically important, as each parent provides half of the genetic makeup of the ensuing offspring, called a foal. Contrary to popular misuse, "colt" refers to a young male horse only; "filly" is a young female. Though many horse owners may simply breed a family mare to a local stallion in order to produce a companion animal, most professional breeders use selective breeding to produce individuals of a given phenotype, or breed. Alternatively, a breeder could, using individuals of differing phenotypes, create a new breed with specific characteristics. A horse is "bred" where it is foaled (born). Thus a colt conceived in England but foaled in the United States is regarded as being bred in the US. In some cases, most notably in the Thoroughbred breeding industry, American- and Canadian-bred horses may also be described by the state or province in which they are foaled. Some breeds denote the country, or state, where conception took place as the origin of the foal. Similarly, the "breeder" is the person who owned or leased the mare at the time of foaling. That individual may not have had anything to do with the mating of the mare. It is important to review each breed registry's rules to determine which applies to any specific foal. In the horse breeding industry, the term "half-brother" or "half-sister" only describes horses which have the same dam, but different sires. 
Horses with the same sire but different dams are simply said to be "by the same sire", and no sibling relationship is implied. "Full" (or "own") siblings have both the same dam and the same sire. The terms paternal half-sibling and maternal half-sibling are also often used. Three-quarter siblings are horses out of the same dam, and are by sires that are either half-brothers (i.e. same dam) or who are by the same sire. Thoroughbreds and Arabians are also classified through the "distaff" or direct female line, known as their "family" or "tail female" line, tracing back to their taproot foundation bloodstock or the beginning of their respective stud books. The female line of descent always appears at the bottom of a tabulated pedigree and is therefore often known as the bottom line. In addition, the maternal grandfather of a horse has a special term: damsire. "Linebreeding" technically is the duplication of fourth generation or more distant ancestors. However, the term is often used more loosely, describing horses with duplication of ancestors closer than the fourth generation. It also is sometimes used as a euphemism for the practice of inbreeding, a practice that is generally frowned upon by horse breeders, though used by some in an attempt to fix certain traits. Estrous cycle of the mare The estrous cycle (also spelled oestrous) controls when a mare is sexually receptive toward a stallion, and helps to physically prepare the mare for conception. It generally occurs during the spring and summer months, although some mares may be sexually receptive into the late fall, and is controlled by the photoperiod (length of the day), with the cycle first triggered when the days begin to lengthen. The estrous cycle lasts about 19–22 days, with the average being 21 days. As the days shorten, the mare returns to a period when she is not sexually receptive, known as anestrus. 
Anestrus – occurring in the majority of, but not all, mares – prevents the mare from conceiving in the winter months, as that would result in her foaling during the harshest part of the year, a time when it would be most difficult for the foal to survive. This cycle contains two phases:
Estrus, or follicular, phase: 5–7 days in length, when the mare is sexually receptive to a stallion. Estrogen is secreted by the follicle. Ovulation occurs in the final 24–48 hours of estrus.
Diestrus, or luteal, phase: 14–15 days in length, when the mare is not sexually receptive to the stallion. The corpus luteum secretes progesterone.
Depending on breed, on average, 16% of mares have double ovulations, allowing them to twin, though this does not affect the length of time of estrus or diestrus.
Effects on the reproductive system during the estrous cycle Changes in hormone levels can have great effects on the physical characteristics of the reproductive organs of the mare, thereby preparing, or preventing, her from conceiving.
Uterus: increased levels of estrogen during estrus cause edema within the uterus, making it feel heavier, and the uterus loses its tone. This edema decreases following ovulation, and the muscular tone increases. High levels of progesterone do not cause edema within the uterus. The uterus becomes flaccid during anestrus.
Cervix: the cervix starts to relax right before estrus occurs, with maximal relaxation around the time of ovulation. The secretions of the cervix increase. High progesterone levels (during diestrus) cause the cervix to close and become toned.
Vagina: the portion of the vagina near the cervix becomes engorged with blood right before estrus. The vagina becomes relaxed and secretions increase.
Vulva: relaxes right before estrus begins. Becomes dry, and closes more tightly, during diestrus. 
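The two phases above can be sketched as a small helper that classifies a day of the cycle. This assumes the 21-day average cycle from the text and treats estrus as exactly 6 days (the midpoint of the 5–7 day range); real mares vary, so it is illustrative only:

```python
def estrous_phase(cycle_day: int, estrus_days: int = 6,
                  cycle_length: int = 21) -> str:
    """Classify a day of the mare's estrous cycle, counting day 1 as the
    start of estrus. Cycle length and estrus duration are the averages
    given in the text; both are configurable assumptions."""
    day = (cycle_day - 1) % cycle_length + 1  # wrap into a single cycle
    if day <= estrus_days:
        return "estrus (receptive; follicular phase)"
    return "diestrus (not receptive; luteal phase)"
```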
Hormones involved in the estrous cycle, during foaling, and after birth The cycle is controlled by several hormones which regulate the estrous cycle, the mare's behavior, and the reproductive system of the mare. The cycle begins when the increased day length causes the pineal gland to reduce the levels of melatonin, thereby allowing the hypothalamus to secrete GnRH.
GnRH (gonadotropin-releasing hormone): secreted by the hypothalamus, causes the pituitary to release two gonadotrophins: LH and FSH.
LH (luteinizing hormone): levels are highest 2 days following ovulation, then slowly decrease over 4–5 days, dipping to their lowest levels 5–16 days after ovulation. Stimulates maturation of the follicle, which then in turn secretes estrogen. Unlike most mammals, the mare does not have an increase of LH right before ovulation.
FSH (follicle-stimulating hormone): secreted by the pituitary, causes the ovarian follicle to develop. Levels of FSH rise slightly at the end of estrus, but have their highest peak about 10 days before the next ovulation. FSH is inhibited by inhibin (see below) at the same time LH and estrogen levels rise, which prevents immature follicles from continuing their growth. Mares may, however, have multiple FSH waves during a single estrous cycle, and diestrus follicles resulting from a diestrus FSH wave are not uncommon, particularly in the height of the natural breeding season.
Estrogen: secreted by the developing follicle, it causes the pituitary gland to secrete more LH (therefore, these two hormones are in a positive feedback loop). Additionally, it causes behavioral changes in the mare, making her more receptive toward the stallion, and causes physical changes in the cervix, uterus, and vagina to prepare the mare for conception (see above). Estrogen peaks 1–2 days before ovulation, and decreases within 2 days following ovulation. 
Inhibin: secreted by the developed follicle right before ovulation, "turns off" FSH, which is no longer needed now that the follicle is larger.
Progesterone: prevents conception and decreases sexual receptivity of the mare to the stallion. Progesterone is therefore lowest during the estrus phase, and increases during diestrus. It decreases 12–15 days after ovulation, when the corpus luteum begins to decrease in size.
Prostaglandin: secreted by the endometrium 13–15 days following ovulation, causes luteolysis and prevents the corpus luteum from secreting progesterone.
eCG (equine chorionic gonadotropin), also called PMSG (pregnant mare serum gonadotropin): chorionic gonadotropin secreted if the mare conceives. First secreted by the endometrial cups around the 36th day of gestation, peaking around day 60, and decreasing after about 120 days of gestation. Also helps to stimulate the growth of the fetal gonads.
Prolactin: stimulates lactation.
Oxytocin: stimulates the uterus to contract.
Breeding and gestation While horses in the wild mate and foal in mid to late spring, in the case of horses domestically bred for competitive purposes, especially horse racing, it is desirable that they be born as close to January 1 in the northern hemisphere or August 1 in the southern hemisphere as possible, so as to be at an advantage in size and maturity when competing against other horses in the same age group. When an early foal is desired, barn managers will put the mare "under lights" by keeping the barn lights on in the winter to simulate a longer day, thus bringing the mare into estrus sooner than she would in nature. Mares signal estrus and ovulation by urination in the presence of a stallion, raising the tail and revealing the vulva. A stallion, approaching with a high head, will usually nicker, nip and nudge the mare, as well as sniff her urine to determine her readiness for mating. 
Once fertilized, the oocyte (egg) remains in the oviduct for approximately 5.5 more days, and then descends into the uterus. The initial single cell combination is already dividing and by the time of entry into the uterus, the egg might have already reached the blastocyst stage. The gestation period lasts for about eleven months, or about 340 days (normal average range 320–370 days). During the early days of pregnancy, the conceptus is mobile, moving about in the uterus until about day 16 when "fixation" occurs. Shortly after fixation, the embryo proper (so called up to about 35 days) will become visible on trans-rectal ultrasound (about day 21) and a heartbeat should be visible by about day 23. After the formation of the endometrial cups and early placentation is initiated (35–40 days of gestation) the terminology changes, and the embryo is referred to as a fetus. True implantation – invasion into the endometrium of any sort – does not occur until about day 35 of pregnancy with the formation of the endometrial cups, and true placentation (formation of the placenta) is not initiated until about day 40–45 and not completed until about 140 days of pregnancy. The fetus's sex can be determined by day 70 of gestation using ultrasound. Halfway through gestation the fetus is between the size of a rabbit and that of a beagle. The most dramatic fetal development occurs in the last 3 months of pregnancy, when 60% of fetal growth occurs. Colts are carried on average about 4 days longer than fillies. Care of the pregnant mare Domestic mares receive specific care and nutrition to ensure that they and their foals are healthy. Mares are given vaccinations against diseases such as the rhinopneumonitis (EHV-1) virus (which can cause miscarriage) as well as vaccines for other conditions that may occur in a given region of the world. Pre-foaling vaccines are recommended 4–6 weeks prior to foaling to maximize the immunoglobulin content of the colostrum in the first milk. 
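The gestation figures and the pre-foaling vaccination window above can be combined into a rough schedule calculator. This is an illustrative sketch of the numbers in the text, not veterinary advice:

```python
from datetime import date, timedelta

def foaling_schedule(breeding_date: date) -> dict:
    """Estimate key dates from the breeding date, using the ~340-day average
    gestation, the 320-370 day normal range, and the recommendation of
    pre-foaling vaccines 4-6 weeks before the expected foaling date."""
    expected = breeding_date + timedelta(days=340)
    return {
        "earliest_normal": breeding_date + timedelta(days=320),
        "expected": expected,
        "latest_normal": breeding_date + timedelta(days=370),
        # Vaccination window: 6 weeks before down to 4 weeks before foaling.
        "vaccinate_between": (expected - timedelta(weeks=6),
                              expected - timedelta(weeks=4)),
    }
```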
Mares are dewormed a few weeks prior to foaling, as the mare is the primary source of parasites for the foal. Mares can be used for riding or driving during most of their pregnancy. Exercise is healthy, though it should be moderated when a mare is heavily in foal. Exercise in excessively high temperatures has been suggested as being detrimental to pregnancy maintenance during the embryonic period; however, ambient temperatures encountered during the research were in the region of 100 degrees Fahrenheit, and the same results may not be encountered in regions with lower ambient temperatures. During the first several months of pregnancy, the nutritional requirements do not increase significantly since the rate of growth of the fetus is very slow. However, during this time, the mare may be provided supplemental vitamins and minerals, particularly if forage quality is questionable. During the last 3–4 months of gestation, rapid growth of the fetus increases the mare's nutritional requirements. Energy requirements during these last few months, and during the first few months of lactation, are similar to those of a horse in full training. Trace minerals such as copper are extremely important, particularly during the tenth month of pregnancy, for proper skeletal formation. Many feeds designed for pregnant and lactating mares provide the careful balance required of increased protein, increased calories through extra fat as well as vitamins and minerals. Overfeeding the pregnant mare, particularly during early gestation, should be avoided, as excess weight may contribute to difficulties foaling or fetal/foal related problems. Foaling Mares due to foal are usually separated from other horses, both for the benefit of the mare and the safety of the soon-to-be-delivered foal. In addition, separation allows the mare to be monitored more closely by humans for any problems that may occur while giving birth. 
In the northern hemisphere, a special foaling stall that is large and clutter free is frequently used, particularly by major breeding farms. Originally, this was due in part to a need for protection from the harsh winter climate present when mares foal early in the year, but even in moderate climates, such as Florida, foaling stalls are still common because they allow closer monitoring of mares. Smaller breeders often use a small pen with a large shed for foaling, or they may remove a wall between two box stalls in a small barn to make a large stall. In the milder climates seen in much of the southern hemisphere, most mares foal outside, often in a paddock built specifically for foaling, especially on the larger stud farms. Many stud farms worldwide employ technology to alert human managers when the mare is about to foal, including webcams, closed-circuit television, or assorted types of devices that alert a handler via a remote alarm when a mare lies down in a position to foal. On the other hand, some breeders, particularly those in remote areas or with extremely large numbers of horses, may allow mares to foal out in a field amongst a herd, but may also see higher rates of foal and mare mortality in doing so. Most mares foal at night or early in the morning, and prefer to give birth alone when possible. Labor is rapid, often no more than 30 minutes, and from the time the feet of the foal appear to full delivery is often only about 15 to 20 minutes. Once the foal is born, the mare will lick the newborn foal to clean it and help blood circulation. In a very short time, the foal will attempt to stand and get milk from its mother. A foal should stand and nurse within the first hour of life. To create a bond with her foal, the mare licks and nuzzles the foal, enabling her to distinguish the foal from others. Some mares are aggressive when protecting their foals, and may attack other horses or unfamiliar humans that come near their newborns. 
After birth, a foal's navel is dipped in antiseptic to prevent infection. The foal is sometimes given an enema to help clear the meconium from its digestive tract. The newborn is monitored to ensure that it stands and nurses without difficulty. While most horse births happen without complications, many owners have first aid supplies prepared and a veterinarian on call in case of a birthing emergency. People who supervise foaling should also watch the mare to be sure that she passes the placenta in a timely fashion, and that it is complete with no fragments remaining in the uterus. Retained fetal membranes can cause a serious inflammatory condition (endometritis) and/or infection. If the placenta is not removed from the stall after it is passed, a mare will often eat it, an instinct from the wild, where blood would attract predators. Foal care Foals develop rapidly, and within a few hours a wild foal can travel with the herd. In domestic breeding, the foal and dam are usually separated from the herd for a while, but within a few weeks are typically pastured with the other horses. A foal will begin to eat hay, grass and grain alongside the mare at about 4 weeks old; by 10–12 weeks the foal requires more nutrition than the mare's milk can supply. Foals are typically weaned at 4–8 months of age, although in the wild a foal may nurse for a year. How breeds develop Beyond the appearance and conformation of a specific type of horse, breeders aspire to improve physical performance abilities. This concept, known as matching "form to function," has led to the development of not only different breeds, but also families or bloodlines within breeds that are specialists for excelling at specific tasks. For example, the Arabian horse of the desert naturally developed speed and endurance to travel long distances and survive in a harsh environment, and domestication by humans added a trainable disposition to the animal's natural abilities. 
In the meantime, in northern Europe, the locally adapted heavy horse with a thick, warm coat was domesticated and put to work as a farm animal that could pull a plow or wagon. This animal was later adapted through selective breeding to create a strong but rideable animal suitable for the heavily armored knight in warfare. Then, centuries later, when people in Europe wanted faster horses than could be produced from local horses through simple selective breeding, they imported Arabians and other oriental horses to breed as an outcross to the heavier, local animals. This led to the development of breeds such as the Thoroughbred, a horse taller than the Arabian and faster over the distances of a few miles required of a European race horse or light cavalry horse. Another cross between oriental and European horses produced the Andalusian, a horse developed in Spain that was powerfully built, but extremely nimble and capable of the quick bursts of speed over short distances necessary for certain types of combat as well as for tasks such as bullfighting. Later, the people who settled America needed a hardy horse that was capable of working with cattle. Thus, Arabians and Thoroughbreds were crossed on Spanish horses, both domesticated animals descended from those brought over by the Conquistadors, and feral horses such as the Mustangs, descended from the Spanish horse, but adapted by natural selection to the ecology and climate of the west. These crosses ultimately produced new breeds such as the American Quarter Horse and the Criollo of Argentina. In Canada, the Canadian Horse descended from the French stock Louis XIV sent to Canada in the late 17th century.[6] The initial shipment, in 1665, consisted of two stallions and twenty mares from the Royal Stables in Normandy and Brittany, the centre of French horse breeding.[7] Only 12 of the 20 mares survived the trip. 
Two more shipments followed, one in 1667 of 14 horses (mostly mares, but with at least one stallion), and one in 1670 of 11 mares and a stallion. The shipments included a mix of draft horses and light horses, the latter of which included both pacing and trotting horses.[1] The exact origins of all the horses are unknown, although the shipments probably included Bretons, Normans, Arabians, Andalusians and Barbs. In modern times, these breeds themselves have since been selectively bred to further specialize at certain tasks. One example of this is the American Quarter Horse. Once a general-purpose working ranch horse, different bloodlines now specialize in different events. For example, larger, heavier animals with a very steady attitude are bred to give competitors an advantage in events such as team roping, where a horse has to start and stop quickly, but also must calmly hold a full-grown steer at the end of a rope. On the other hand, for an event known as cutting, where the horse must separate a cow from a herd and prevent it from rejoining the group, the best horses are smaller, quick, alert, athletic and highly trainable. They must learn quickly, have conformation that allows quick stops and fast, low turns, and the best competitors have a certain amount of independent mental ability to anticipate and counter the movement of a cow, popularly known as "cow sense." Another example is the Thoroughbred. While most representatives of this breed are bred for horse racing, there are also specialized bloodlines suitable as show hunters or show jumpers. The hunter must have a tall, smooth build that allows it to trot and canter smoothly and efficiently. Instead of speed, value is placed on appearance and upon giving the equestrian a comfortable ride, with natural jumping ability that shows bascule and good form. A show jumper, however, is bred less for overall form and more for power over tall fences, along with speed, scope, and agility. 
This favors a horse with a good galloping stride, powerful hindquarters that can change speed or direction easily, plus a good shoulder angle and length of neck. A jumper has a more powerful build than either the hunter or the racehorse. History of horse breeding The history of horse breeding goes back millennia. Though the precise date is in dispute, humans could have domesticated the horse as far back as approximately 4500 BCE. However, the history of planned breeding is far blurrier. It is well known, for example, that the Romans did breed horses and valued them in their armies, but little is known regarding their breeding and husbandry practices: all that remains are statues and artwork. Plenty of equestrian statues of Roman emperors survive, horses are mentioned in the Odyssey by Homer, and hieroglyphics and paintings left behind by Egyptians tell stories of pharaohs hunting elephants from chariots. Nearly nothing is known of what became of the horses they bred for hippodromes, for warfare, or even for farming. One of the earliest people known to document the breedings of their horses were the Bedouin of the Middle East, the breeders of the Arabian horse. While it is difficult to determine how far back the Bedouin passed on pedigree information via an oral tradition, there were written pedigrees of Arabian horses by CE 1330. The Akhal-Teke of West-Central Asia is another breed with roots in ancient times that was also bred specifically for war and racing. The nomads of the Mongolian steppes bred horses for several thousand years as well, and the Caspian horse is believed to be a very close relative of Ottoman horses from the earliest origins of the Turks in Central Asia. The types of horse bred varied with culture and with the times. 
The uses to which a horse was put also determined its qualities, including smooth amblers for riding, fast horses for carrying messengers, heavy horses for plowing and pulling heavy wagons, ponies for hauling cars of ore from mines, packhorses, carriage horses and many others. Medieval Europe bred large horses specifically for war, called destriers. These horses were the ancestors of the great heavy horses of today, and their size was preferred not simply because of the weight of the armor, but also because a large horse provided more power for the knight's lance. Weighing almost twice as much as a normal riding horse, the destrier was a powerful weapon in battle meant to act like a giant battering ram that could quite literally run down men on an enemy line. On the other hand, during this same time, lighter horses were bred in northern Africa and the Middle East, where a faster, more agile horse was preferred. The lighter horse suited the raids and battles of desert people, allowing them to outmaneuver rather than overpower the enemy. When Middle Eastern warriors and European knights collided in warfare, the heavy knights were frequently outmaneuvered. The Europeans, however, responded by crossing their native breeds with "oriental" type horses such as the Arabian, Barb, and Turkoman horse. This cross-breeding not only led to a nimbler war horse, such as today's Andalusian horse, but also created a type of horse known as a courser, a predecessor to the Thoroughbred, which was used as a message horse. During the Renaissance, horses were bred not only for war, but for haute école riding, derived from the most athletic movements required of a war horse, and popular among the elite nobility of the time. Breeds such as the Lipizzan and the now extinct Neapolitan horse were developed from Spanish-bred horses for this purpose, and also became the preferred mounts of cavalry officers, who were derived mostly from the ranks of the nobility. 
It was during this time that firearms were developed, and so the light cavalry horse, a faster and quicker war horse, was bred for "shoot and run" tactics rather than the shock action of the Middle Ages. Fine horses usually had a well muscled, curved neck, slender body, and sweeping mane, as the nobility liked to show off their wealth and breeding in paintings of the era. After Charles II retook the British throne in 1660, horse racing, which had been banned by Cromwell, was revived. The Thoroughbred was developed 40 years later, bred to be the ultimate racehorse, through the lines of three foundation Arabian stallions and one Turkish horse. In the 18th century, James Burnett, Lord Monboddo noted the importance of selecting appropriate parentage to achieve desired outcomes of successive generations. Monboddo worked more broadly in the abstract thought of species relationships and evolution of species. The Thoroughbred breeding hub in Lexington, Kentucky was developed in the late 18th century, and became a mainstay in American racehorse breeding. The 17th and 18th centuries saw more of a need for fine carriage horses in Europe, bringing in the dawn of the warmblood. The warmblood breeds have been exceptionally good at adapting to changing times, and from their carriage horse beginnings they easily transitioned during the 20th century into a sport horse type. Today's warmblood breeds, although still used for competitive driving, are more often seen competing in show jumping or dressage. The Thoroughbred continues to dominate the horse racing world, although its lines have been more recently used to improve warmblood breeds and to develop sport horses. The French saddle horse is an excellent example, as is the Irish Sport Horse, the latter being an unusual cross between a Thoroughbred and a draft breed. The American Quarter Horse was developed early in the 18th century, mainly for quarter racing (racing ¼ of a mile). 
Colonists did not have racetracks or any of the trappings of Europe that the earliest Thoroughbreds had at their disposal, so instead the owners of Quarter Horses would run their horses on roads that led through town as a form of local entertainment. As the USA expanded westward, the breed went with settlers as a farm and ranch animal, and "cow sense" was particularly valued: their use for herding cattle increased on rough, dry terrain that often involved sitting in the saddle for long hours. However, this did not mean that the original ¼-mile races that colonists held ever went out of fashion, so today there are three types: the stock horse type, the racer, and the more recently evolving sport type. The racing type most resembles the finer-boned ancestors of the first racing Quarter Horses, and the type is still used for ¼-mile races. The stock horse type, used in western events and as a farm and patrol animal, is bred for a shorter stride, an ability to stop and turn quickly, and an unflappable attitude that remains calm and focused even in the face of an angry charging steer. The first two types are still bred for explosive speed that exceeds the Thoroughbred's over short distances, clocked as high as 55 mph, but they retain the gentle, calm, and kindly temperament of their ancestors that makes them easily handled. The Canadian horse's origin corresponds to shipments of French horses, some of which came from Louis XIV's own stable and most likely were Baroque horses meant to be gentlemen's mounts. These were ill-suited to farm work and to the hardscrabble life of the New World, so like the Americans, early Canadians crossed their horses with native escapees. 
In time they evolved along similar lines as the Quarter Horse to the south, as both the US and Canada spread westward and needed a calm and tractable horse versatile enough to carry the farmer's son to school but still capable of running fast and running hard as a cavalry horse, a stockhorse, or a horse to pull a Conestoga wagon. Other horses from North America retained a hint of their mustang origins: some were derived from stock that Native Americans bred, which came in a rainbow of colors, like the Appaloosa and American Paint Horse, while those east of the Mississippi River were increasingly bred to impress and mimic the trends of the upper classes of Europe. The Tennessee Walking Horse and Saddlebred were originally plantation horses bred for their gait and comfortable ride in the saddle, as a plantation master would survey his vast lands like an English lord. Horses were needed for heavy draft and carriage work until replaced by the automobile, truck, and tractor. After this time, draft and carriage horse numbers dropped significantly, though light riding horses remained popular for recreational pursuits. Draft horses today are used on a few small farms, but are seen mainly in pulling and plowing competitions rather than farm work. Heavy harness horses are now used as an outcross with lighter breeds, such as the Thoroughbred, to produce the modern warmblood breeds popular in sport horse disciplines, particularly at the Olympic level. Deciding to breed a horse Breeding a horse is an endeavor where the owner, particularly of the mare, will usually need to invest considerable time and money. For this reason, a horse owner needs to consider several factors, including:
Does the proposed breeding animal have valuable genetic qualities to pass on?
Is the proposed breeding animal in good physical health, fertile, and able to withstand the rigors of reproduction?
For what purpose will the foal be used? 
Is there a market for the foal if the owner does not wish to keep the foal for its entire life?
What is the anticipated economic benefit, if any, to the owner of the ensuing foal?
What is the anticipated economic benefit, if any, to the owner(s) of the sire and dam or the foal?
Does the owner of the mare have the expertise to properly manage the mare through gestation and parturition?
Does the owner of the potential foal have the expertise to properly manage and train a young animal once it is born?
There are value judgements involved in considering whether an animal is suitable breeding stock, hotly debated by breeders. Additional personal beliefs may come into play when considering a suitable level of care for the mare and ensuing foal, the potential market or use for the foal, and other tangible and intangible benefits to the owner. If the breeding endeavor is intended to make a profit, there are additional market factors to consider, which may vary considerably from year to year, from breed to breed, and by region of the world. In many cases, the low end of the market is saturated with horses, and the law of supply and demand thus allows little or no profit to be made from breeding unregistered animals or animals of poor quality, even if registered. The minimum cost of breeding for a mare owner includes the stud fee, and the cost of proper nutrition, management and veterinary care of the mare throughout gestation, parturition, and care of both mare and foal up to the time of weaning. Veterinary expenses may be higher if specialized reproductive technologies are used or health complications occur. Making a profit in horse breeding is often difficult. While some owners of only a few horses may keep a foal for purely personal enjoyment, many individuals breed horses in hopes of making some money in the process. A rule of thumb is that a foal intended for sale should be worth three times the cost of the stud fee if it were sold at the moment of birth. 
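The rule of thumb above, together with the cost categories the text names, can be sketched numerically. The split into these particular parameters is illustrative, not a standard accounting breakdown:

```python
def minimum_foal_value(stud_fee: float) -> float:
    """Rule of thumb from the text: a foal intended for sale should be worth
    about three times the stud fee at the moment of birth."""
    return 3.0 * stud_fee

def breakeven_price(stud_fee: float, mare_care: float, vet_care: float,
                    foal_care_to_weaning: float) -> float:
    """Minimum sale price that avoids a loss, summing the mare owner's
    minimum costs as listed above (hypothetical parameter breakdown)."""
    return stud_fee + mare_care + vet_care + foal_care_to_weaning
```

With a hypothetical $2,000 stud fee, the rule of thumb suggests the planned foal should be marketable at roughly $6,000 at birth; if total costs to weaning exceed the realistic sale price, the mating is unlikely to be profitable.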
From birth forward, the costs of care and training are added to the value of the foal, with the sale price going up accordingly. If the foal wins awards in some form of competition, that may also enhance the price. On the other hand, without careful thought, foals bred without a potential market for them may wind up being sold at a loss, and in a worst-case scenario, sold for "salvage" value, a euphemism for sale to slaughter as horsemeat. Therefore, a mare owner must consider their reasons for breeding, asking hard questions of themselves as to whether their motivations are based on emotion or profit, and how realistic those motivations may be. Choosing breeding stock The stallion should be chosen to complement the mare, with the goal of producing a foal that has the best qualities of both animals, yet avoids having the weaker qualities of either parent. Generally, the stallion should have proven himself in the discipline or sport the mare owner wishes for the "career" of the ensuing foal. Ideally, mares should also have a competition record showing that they have suitable traits, though this is less common. Some breeders consider the quality of the sire to be more important than the quality of the dam. However, other breeders maintain that the mare is the most important parent. Because stallions can produce far more offspring than mares, a single stallion can have a greater overall impact on a breed. However, the mare may have a greater influence on an individual foal because its physical characteristics influence the developing foal in the womb and the foal also learns habits from its dam when young. Foals may also learn the "language of intimidation and submission" from their dam, and this imprinting may affect the foal's status and rank within the herd. Many times, a mature horse will achieve status in a herd similar to that of its dam; the offspring of dominant mares become dominant themselves. 
A purebred horse is usually worth more than a horse of mixed breeding, though this matters more in some disciplines than others. The breed of the horse is sometimes secondary when breeding for a sport horse, but some disciplines may prefer a certain breed or a specific phenotype of horse. Sometimes, purebred bloodlines are an absolute requirement: for example, most racehorses in the world must be recorded with a breed registry in order to race. Bloodlines are often considered, as some bloodlines are known to cross well with others. If the parents have not yet proven themselves by competition or by producing quality offspring, the bloodlines of the horse are often a good indicator of quality and possible strengths and weaknesses. Some bloodlines are known not only for their athletic ability, but also for carrying a conformational or genetic defect, a poor temperament, or a predisposition to a medical problem. Some bloodlines are also fashionable or otherwise marketable, which is an important consideration should the mare owner wish to sell the foal. Horse breeders also consider conformation, size and temperament. All of these traits are heritable, and will determine whether the foal will be a success in its chosen discipline. The offspring, or "get", of a stallion are often excellent indicators of his ability to pass on his characteristics, and the particular traits he actually passes on. Some stallions are fantastic performers but never produce offspring of comparable quality. Others sire fillies of great ability but not colts. At times, a horse of mediocre ability sires foals of outstanding quality. Mare owners also look into the question of whether the stallion is fertile and has successfully "settled" (i.e. impregnated) mares. A stallion may not be able to breed naturally, or old age may decrease his performance. Mare care boarding fees and semen collection fees can be a major cost. 
Costs related to breeding Breeding a horse can be an expensive endeavor, whether breeding a backyard competition horse or the next Olympic medalist. Costs may include: The stud and booking fee Fees for collecting, handling, and transporting semen (if AI is used and semen is shipped) Mare exams: to determine if she is healthy enough to breed, to determine when she ovulates, and (if AI is used) to inseminate her Mare transport, care, and board if the mare is bred live cover at the stallion's residence Veterinary bills to keep the pregnant mare healthy while in foal Possible veterinary bills during pregnancy or foaling should something go wrong Veterinary bills for the foal for its first exam a few days following foaling Stud fees are determined by the quality of the stallion, his performance record, the performance record of his get (offspring), as well as the sport and general market in which the animal stands at stud. The highest stud fees are generally for racing Thoroughbreds, and may range from two to three thousand dollars for a breeding to a new or unproven stallion, to several hundred thousand dollars for a breeding to a proven producer of stakes winners. Stallions in other disciplines often have stud fees that begin in the range of $1,000 to $3,000, with top contenders who produce champions in certain disciplines able to command as much as $20,000 for one breeding. The lowest stud fees to breed to a grade horse or an animal of low-quality pedigree may only be $100–$200, but there are trade-offs: the horse will probably be unproven, and likely to produce lower-quality offspring than a horse with a stud fee that is in the typical range for quality breeding stock. As a stallion's career, whether performance or breeding, improves, his stud fee tends to increase in proportion. If one or two offspring are especially successful, winning several stakes races or an Olympic medal, the stud fee will generally increase greatly. 
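As a rough illustration of how the cost items listed above add up for a mare owner, here is a minimal sketch; every dollar figure is an invented placeholder for a mid-range shipped-semen AI breeding, not a quote from the text:

```python
# Hypothetical cost items for breeding one mare via shipped-semen AI.
# All amounts are illustrative assumptions in US dollars.
costs = {
    "stud_and_booking_fee": 1_500,
    "semen_collection_and_shipping": 400,
    "mare_exams_and_insemination": 600,
    "gestation_vet_care": 800,
    "foal_first_exam": 150,
}

total = sum(costs.values())
print(f"total breeding cost: ${total}")  # total breeding cost: $3450
```

Even under these modest assumptions, the non-stud-fee items roughly double the outlay beyond the stud fee itself, which is why the stud fee alone is a poor estimate of breeding cost.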
Younger, unproven stallions will generally have a lower stud fee earlier on in their careers. To help decrease the risk of financial loss should the mare die or abort the foal while pregnant, many studs offer a live foal guarantee (LFG), also known as "no foal, free return" or "NFFR", allowing the owner to have a free breeding to the same stallion the next year. However, this is not offered for every breeding. Covering the mare There are two general ways to "cover" or breed the mare: Live cover: the mare is brought to the stallion's residence and is covered "live" in the breeding shed. She may also be turned out in a pasture with the stallion for several days to breed naturally ("pasture bred"). The former situation is often preferred, as it provides a more controlled environment, allowing the breeder to ensure that the mare was covered, and places the handlers in a position to remove the horses from one another should one attempt to kick or bite the other. Artificial insemination (AI): the mare is inseminated by a veterinarian or an equine reproduction manager, using fresh, cooled, or frozen semen. After the mare is bred or artificially inseminated, she is checked using ultrasound 14–16 days later to see if she "took" and is pregnant. A second check is usually performed at 28 days. If the mare is not pregnant, she may be bred again during her next cycle. It is considered safe to breed a mare to a stallion of much larger size. Because of the mare's type of placenta and its attachment and blood supply, the foal will be limited in its growth within the uterus to the size of the mare's uterus, but will grow to its genetic potential after it is born. Test breedings have been done with draft horse stallions bred to small mares with no increase in the number of difficult births. Live cover When breeding live cover, the mare is usually boarded at the stud. 
She may be "teased" several times with a stallion that will not breed to her, usually with the stallion being presented to the mare over a barrier. Her reaction to the teaser, whether hostile or passive, is noted. A mare that is in heat will generally tolerate a teaser (although this is not always the case), and may present herself to him, holding her tail to the side. A veterinarian may also determine if the mare is ready to be bred, by ultrasound or by palpating daily to determine if ovulation has occurred. Live cover can also be done at liberty in a paddock or on pasture, although due to safety and efficacy concerns, it is not common at professional breeding farms. When it has been determined that the mare is ready, both the mare and intended stud will be cleaned. The mare will then be presented to the stallion, usually with one handler controlling the mare and one or more handlers in charge of the stallion. Multiple handlers are preferred, as the mare and stallion can be easily separated should there be any trouble. The Jockey Club, the organization that oversees the Thoroughbred industry in the United States, requires all registered foals to be bred through live cover. Artificial insemination, discussed below, is not permitted. Similar rules apply in other countries, such as Australia. By contrast, the U.S. Standardbred industry allows registered foals to be bred by live cover, or by artificial insemination (AI) with fresh or frozen (not dried) semen. No other artificial fertility treatment is allowed. In addition, foals bred via AI of frozen semen may only be registered if the stallion's sperm was collected during his lifetime, and used no later than the calendar year of his death or castration. 
Artificial insemination Artificial insemination (AI) has several advantages over live cover, and has a very similar conception rate: The mare and stallion never have to come in contact with each other, which therefore reduces breeding accidents, such as the mare kicking the stallion. AI opens up the world to international breeding, as semen may be shipped across continents to mares that would otherwise be unable to breed to a particular stallion. A mare also does not have to travel to the stallion, so the process is less stressful on her, and if she already has a foal, the foal does not have to travel. AI allows more mares to be bred from one stallion, as the ejaculate may be split between mares. AI reduces the chance of spreading sexually transmitted diseases between mare and stallion. AI allows mares or stallions with health issues, such as sore hocks which may prevent a stallion from mounting, to continue to breed. Frozen semen may be stored and used to breed mares even after the stallion is dead, allowing his lines to continue. However, the semen of some stallions does not freeze well. Some breed registries may not permit the registration of foals resulting from the use of frozen semen after the stallion's death, although other large registries accept such usage and provide registrations. The overall trend is toward permitting use of frozen semen after the death of the stallion. A stallion is usually trained to mount a phantom (or dummy) mare, although a live mare may be used, and he is most commonly collected using an artificial vagina (AV) which is heated to simulate the vagina of the mare. The AV has a filter and collection area at one end to capture the semen, which can then be processed in a lab. The semen may be chilled or frozen and shipped to the mare owner or used to breed mares "on-farm". When the mare is in heat, the person inseminating introduces the semen directly into her uterus using a syringe and pipette. 
Advanced reproductive techniques Often an owner does not want to take a valuable competition mare out of training to carry a foal. This presents a problem, as the mare will usually be quite old by the time she is retired from her competitive career, at which time it is more difficult to impregnate her. Other times, a mare may have physical problems that prevent or discourage breeding. However, there are now several options for breeding these mares. These options also allow a mare to produce multiple foals each breeding season, instead of the usual one. Therefore, mares may have an even greater value for breeding. Embryo transfer: This relatively new method involves flushing out the mare's fertilized embryo a few days following insemination, and transferring it to a surrogate mare, which has been synchronized to be in the same phase of the estrous cycle as the donor mare. Gamete intrafallopian transfer (GIFT): The mare's ovum and the stallion's sperm are deposited in the oviduct of a surrogate dam. This technique is very useful for subfertile stallions, as fewer sperm are needed, so a stallion with a low sperm count can still successfully breed. Egg transfer: An oocyte is removed from the mare's follicle and transferred into the oviduct of the recipient mare, who is then bred. This is best for mares with physical problems, such as an obstructed oviduct, that prevent breeding. Intracytoplasmic sperm injection (ICSI): Used in horses due to the lack of successful co-incubation of female and male gametes in simple IVF. A plug of the zona pellucida is removed and a single sperm cell is injected into the ooplasm of the mature oocyte. An advantage of ICSI over IVF is that lower-quality sperm can be used, since the sperm does not have to penetrate the zona pellucida. ICSI produces blastocyst development in 23–44% of cases. The world's first cloned horse, Prometea, was born in 2003. 
Other notable instances of horse cloning are: In 2006, Scamper, an extremely successful barrel racing horse and a gelding, was cloned. The resulting stallion, Clayton, became the first cloned horse to stand at stud in the U.S. In 2007, a renowned show jumper and Thoroughbred, Gem Twist, was cloned by Frank Chapot and his family. In September 2008, Gemini was born, and several other clones followed, leading to the development of a breeding line from Gem Twist. In 2010, the first live clone of a Criollo horse was born in Argentina; it was also the first horse clone produced in Latin America. In the same year a cloned polo horse was sold for $800,000, the highest known price ever paid for a polo horse. In 2013, the world-famous polo star Adolfo Cambiaso helped his high-handicap team La Dolfina win the Argentine National Open, scoring nine goals in the 16–11 match. Two of those he scored atop a horse named Show Me, a clone, and the first to ride onto the Argentine pitch. See also Domestication of the horse Endometrosis Evolution of the horse Glossary of equestrian terms Pedigree chart Thoroughbred breeding theories References Further reading Riegel, Ronald J. DVM, and Susan E. Hakola DVM. Illustrated Atlas of Clinical Equine Anatomy and Common Disorders of the Horse Vol. II. Equistar Publication, Limited. Marysville, OH. Copyright 2000. Horse health Horse-related professions and professionals
14084
https://en.wikipedia.org/wiki/Heterosexuality
Heterosexuality
Heterosexuality is romantic attraction, sexual attraction or sexual behavior between persons of the opposite sex or gender. As a sexual orientation, heterosexuality is "an enduring pattern of emotional, romantic, and/or sexual attractions" to persons of the opposite sex; it "also refers to a person's sense of identity based on those attractions, related behaviors, and membership in a community of others who share those attractions." Someone who is heterosexual is commonly referred to as straight. Along with bisexuality and homosexuality, heterosexuality is one of the three main categories of sexual orientation within the heterosexual–homosexual continuum. Across cultures, most people are heterosexual, and heterosexual activity is by far the most common type of sexual activity. Scientists do not know the exact cause of sexual orientation, but they theorize that it is caused by a complex interplay of genetic, hormonal, and environmental influences, and do not view it as a choice. Although no single theory on the cause of sexual orientation has yet gained widespread support, scientists favor biologically-based theories. There is considerably more evidence supporting nonsocial, biological causes of sexual orientation than social ones, especially for males. The term heterosexual or heterosexuality is usually applied to humans, but heterosexual behavior is observed in all other mammals and in other animals, as it is necessary for sexual reproduction. Terminology Hetero- comes from the Greek word ἕτερος [héteros], meaning "other party" or "another", used in science as a prefix meaning "different"; and the Latin word for sex (that is, characteristic sex or sexual differentiation). The current use of the term heterosexual has its roots in the broader 19th century tradition of personality taxonomy. The term heterosexual was coined alongside the word homosexual by Karl Maria Kertbeny in 1869. 
The terms were not in common use during the late nineteenth century, but were reintroduced by Richard von Krafft-Ebing and Albert Moll around 1890. The noun came into wider use from the early 1920s, but did not enter common use until the 1960s. The colloquial shortening "hetero" is attested from 1933. The abstract noun "heterosexuality" is first recorded in 1900. The word "heterosexual" was listed in Merriam-Webster's New International Dictionary in 1923 as a medical term for "morbid sexual passion for one of the opposite sex"; however, in 1934, in their Second Edition Unabridged, it is defined as a "manifestation of sexual passion for one of the opposite sex; normal sexuality". In LGBT slang, the term breeder has been used as a denigrating phrase to deride heterosexuals. Hyponyms of heterosexual include heteroflexible. The word can be informally shortened to "hetero". The term straight originated as a mid-20th century gay slang term for heterosexuals, ultimately coming from the phrase "to go straight" (as in "straight and narrow"), meaning to stop engaging in homosexual sex. One of the first uses of the word in this way was in 1941 by author G. W. Henry. Henry's book concerned conversations with homosexual males and used this term in connection with people who are identified as ex-gays. It is now simply a colloquial term for "heterosexual", having changed in primary meaning over time. Some object to usage of the term straight because it implies that non-heterosexual people are "crooked". Demographics In their 2016 literature review, Bailey et al. stated that they "expect that in all cultures the vast majority of individuals are sexually predisposed exclusively to the other sex (i.e., heterosexual)" and that there is no persuasive evidence that the demographics of sexual orientation have varied much across time or place. Heterosexual activity between only one male and one female is by far the most common type of sociosexual activity. 
According to several major studies, 89% to 98% of people have had only heterosexual contact within their lifetime; this percentage falls to 79–84% when either or both same-sex attraction and behavior are reported. A 1992 study reported that 93.9% of males in Britain had only ever had heterosexual experience, while in France the number was reported at 95.9%. According to a 2008 poll, 85% of Britons have only opposite-sex sexual contact, while 94% of Britons identify themselves as heterosexual. Similarly, a survey by the UK Office for National Statistics (ONS) in 2010 found that 95% of Britons identified as heterosexual, 1.5% of Britons identified themselves as homosexual or bisexual, and the last 3.5% gave more vague answers such as "don't know", "other", or did not respond to the question. In the United States, according to a Williams Institute report in April 2011, 96%, or approximately 250 million, of the adult population are heterosexual. An October 2012 Gallup poll provided unprecedented demographic information about those who identify as heterosexual, arriving at the conclusion that 96.6%, with a margin of error of ±1%, of all U.S. adults identify as heterosexual. In a 2015 YouGov survey of 1,000 adults of the United States, 89% of the sample identified as heterosexual, 4% as homosexual (2% as homosexual male and 2% as homosexual female) and 4% as bisexual (of either sex). Bailey et al., in their 2016 review, stated that in recent Western surveys, about 93% of men and 87% of women identify as completely heterosexual, and about 4% of men and 10% of women as mostly heterosexual. Academic study Biological and environmental No simple and singular determinant for sexual orientation has been conclusively demonstrated, but scientists believe that a combination of genetic, hormonal, and environmental factors determines sexual orientation. 
They favor biological theories for explaining the causes of sexual orientation, as there is considerably more evidence supporting nonsocial, biological causes than social ones, especially for males. Factors related to the development of a heterosexual orientation include genes, prenatal hormones, and brain structure, and their interaction with the environment. Prenatal hormones The neurobiology of the masculinization of the brain is fairly well understood. Estradiol and testosterone, the latter converted by the enzyme 5α-reductase into dihydrotestosterone, act upon androgen receptors in the brain to masculinize it. If there are few androgen receptors (people with androgen insensitivity syndrome) or too much androgen (females with congenital adrenal hyperplasia), there can be physical and psychological effects. It has been suggested that both male and female heterosexuality are results of this process. In these studies, heterosexuality in females is linked to a lower amount of masculinization than is found in lesbian females, though when dealing with male heterosexuality there are results supporting both higher and lower degrees of masculinization than in homosexual males. Animals and reproduction Sexual reproduction in the animal world is facilitated through opposite-sex sexual activity, although there are also animals that reproduce asexually, including protozoa and lower invertebrates. Reproductive sex does not require a heterosexual orientation, since sexual orientation typically refers to a long-term enduring pattern of sexual and emotional attraction leading often to long-term social bonding, while reproduction requires as little as a single act of copulation to fertilize the ovum by sperm. 
Sexual fluidity Often, sexual orientation and sexual orientation identity are not distinguished, which can impact accurately assessing sexual identity and whether or not sexual orientation is able to change; sexual orientation identity can change throughout an individual's life, and may or may not align with biological sex, sexual behavior or actual sexual orientation. Sexual orientation is stable and unlikely to change for the vast majority of people, but some research indicates that some people may experience change in their sexual orientation, and this is more likely for women than for men. The American Psychological Association distinguishes between sexual orientation (an innate attraction) and sexual orientation identity (which may change at any point in a person's life). A 2012 study found that 2% of a sample of 2,560 adult participants reported a change of sexual orientation identity after a 10-year period. For men, a change occurred in 0.78% of those who had identified as heterosexual, 9.52% of homosexuals, and 47% of bisexuals. For women, a change occurred in 1.36% of heterosexuals, 63.6% of lesbians, and 64.7% of bisexuals. A 2-year study by Lisa M. Diamond on a sample of 80 non-heterosexual female adolescents (ages 16–23) reported that half of the participants had changed sexual-minority identities more than once, one third of them during the 2-year follow-up. Diamond concluded that "although sexual attractions appear fairly stable, sexual identities and behaviors are more fluid." Heteroflexibility is a form of sexual orientation or situational sexual behavior characterized by minimal homosexual activity in an otherwise primarily heterosexual orientation, which is considered to distinguish it from bisexuality. It has been characterized as "mostly straight". Sexual orientation change efforts Sexual orientation change efforts are methods that aim to change sexual orientation, used to try to convert homosexual and bisexual people to heterosexuality. 
Scientists and mental health professionals generally do not believe that sexual orientation is a choice. There are no studies of adequate scientific rigor that conclude that sexual orientation change efforts are effective. Society and culture A heterosexual couple, a man and woman in an intimate relationship, form the core of a nuclear family. Many societies throughout history have insisted that a marriage take place before the couple settle down, but enforcement of this rule or compliance with it has varied considerably. Symbolism Heterosexual symbolism dates back to the earliest artifacts of humanity, with gender symbols, ritual fertility carvings, and primitive art. This was later expressed in the symbolism of fertility rites and polytheistic worship, which often included images of human reproductive organs, such as the lingam in Hinduism. Modern symbols of heterosexuality in societies derived from European traditions still reference symbols used in these ancient beliefs. One such image is a combination of the symbol for Mars, the Roman god of war, as the definitive male symbol of masculinity, and Venus, the Roman goddess of love and beauty, as the definitive female symbol of femininity. The Unicode character for this combined symbol is ⚤ (U+26A4). Historical views There was no need to coin a term such as heterosexual until there was something else to contrast and compare it with. Jonathan Ned Katz dates the definition of heterosexuality, as it is used today, to the late 19th century. According to Katz, in the Victorian era, sex was seen as a means to achieve reproduction, and relations between the sexes were not believed to be overtly sexual. The body was thought of as a tool for procreation: "Human energy, thought of as a closed and severely limited system, was to be used in producing children and in work, not wasted in libidinous pleasures." 
Katz argues that modern ideas of sexuality and eroticism began to develop in America and Germany in the later 19th century. The changing economy and the "transformation of the family from producer to consumer" resulted in shifting values. The Victorian work ethic had changed, pleasure became more highly valued, and this allowed ideas of human sexuality to change. Consumer culture had created a market for the erotic; pleasure became commoditized. At the same time, medical doctors began to acquire more power and influence. They developed the medical model of "normal love", in which healthy men and women enjoyed sex as part of a "new ideal of male-female relationships that included ... an essential, necessary, normal eroticism." This model also had a counterpart, "the Victorian Sex Pervert", anyone who failed to meet the norm. The basic oppositeness of the sexes was the basis for normal, healthy sexual attraction. "The attention paid the sexual abnormal created a need to name the sexual normal, the better to distinguish the average him and her from the deviant it." The creation of the term heterosexual consolidated the social existence of the pre-existing heterosexual experience and created a sense of ensured and validated normalcy within it. Religious views The Judeo-Christian tradition has several scriptures related to heterosexuality. The Book of Genesis states that God created woman because "It is not good that the man should be alone; I will make him an help meet for him", and that "Therefore shall a man leave his father and his mother, and shall cleave unto his wife: and they shall be one flesh". For the most part, religious traditions in the world reserve marriage to heterosexual unions, but there are exceptions, including certain Buddhist and Hindu traditions, Unitarian Universalists, the Metropolitan Community Church, some Anglican dioceses, and some Quaker, United Church of Canada, and Reform and Conservative Jewish congregations. 
Almost all religions hold that lawful sex between a man and a woman is permitted, but there are a few that consider it a sin, such as the Shakers, the Harmony Society, and the Ephrata Cloister. These religions tend to view all sexual relations as sinful, and promote celibacy. Some religions require celibacy for certain roles, such as Catholic priests; however, the Catholic Church also views heterosexual marriage as sacred and necessary. Heteronormativity and heterosexism Heteronormativity denotes or relates to a world view that promotes heterosexuality as the normal or preferred sexual orientation for people to have. It can assign strict gender roles to males and females. The term was popularized by Michael Warner in 1991. Feminist Adrienne Rich argues that compulsory heterosexuality, a continual and repeating reassertion of heterosexual norms, is a facet of heterosexism. Compulsory heterosexuality is the idea that female heterosexuality is both assumed and enforced by a patriarchal society. Heterosexuality is then viewed as the natural inclination or obligation of both sexes. Consequently, anyone who differs from the normalcy of heterosexuality is deemed deviant or abhorrent. Heterosexism is a form of bias or discrimination in favor of opposite-sex sexuality and relationships. It may include an assumption that everyone is heterosexual and may involve various kinds of discrimination against gays, lesbians, bisexuals, asexuals, heteroflexible people, or transgender or non-binary individuals. Straight pride is a slogan that arose in the late 1980s and early 1990s and has been used primarily by social conservative groups as a political stance and strategy. The term is described as a response to gay pride adopted by various LGBT groups in the early 1970s, or to the accommodations provided to gay pride initiatives. See also Heterosociality Human reproduction Queer heterosexuality References Further reading LeVay, Simon. 
Gay, Straight, and the Reason Why: The Science of Sexual Orientation, Oxford University Press, 2017 Johnson, P. (2005) Love, Heterosexuality and Society. London: Routledge Answers to Your Questions About Sexual Orientation and Homosexuality. American Psychiatric Association. Bohan, Janis S., Psychology and Sexual Orientation: Coming to Terms, Routledge, 1996 Kinsey, Alfred C., et al., Sexual Behavior in the Human Male. Indiana University Press. Kinsey, Alfred C., et al., Sexual Behavior in the Human Female. Indiana University Press. External links Keel, Robert O., Heterosexual Deviance. (Goode, 1994, chapter 8, and Chapter 9, 6th edition, 2001.) Sociology of Deviant Behavior: FS 2003, University of Missouri–St. Louis. Coleman, Thomas F., What's Wrong with Excluding Heterosexual Couples from Domestic Partner Benefits Programs? Unmarried America, American Association for Single People. Interpersonal attraction Interpersonal relationships Love Normative ethics Sexual orientation 1860s neologisms
14086
https://en.wikipedia.org/wiki/Hopewell%20Centre%20%28Hong%20Kong%29
Hopewell Centre (Hong Kong)
Hopewell Centre is a 64-storey skyscraper at 183 Queen's Road East, in Wan Chai, Hong Kong Island in Hong Kong. The tower is the first circular skyscraper in Hong Kong. It is named after the Hong Kong–listed property firm Hopewell Holdings Limited, which constructed the building. Hopewell Holdings Limited's headquarters are in the building, and its chief executive officer, Gordon Wu, has his office on the top floor. Description Construction started in 1977 and was completed in 1980. Upon completion, Hopewell Centre surpassed Jardine House as Hong Kong's tallest building. It was also the second tallest building in Asia at the time. It kept its title in Hong Kong until 1989, when the Bank of China Tower was completed. The building is now the 20th tallest building in Hong Kong. The building has a circular floor plan. Although the front entrance is on the 'ground floor', commuters are taken through a set of escalators to the 3rd floor lift lobby. Hopewell Centre stands on the slope of a hill so steep that the building has its back entrance on the 17th floor, towards Kennedy Road. There is a circular private swimming pool on the roof of the building, built for feng shui reasons. A revolving restaurant on the 62nd floor, called "Revolving 66", overlooks other tall buildings below and the harbour. It was originally called Revolving 62, but soon changed its name as locals kept calling it Revolving 66. It completes a 360-degree rotation each hour. Passengers take either the office lifts (faster) or the scenic lifts (with a view) to the 56/F, where they transfer to smaller lifts up to the 62/F. The restaurant is now named The Grand Buffet. The building comprises several groups of lifts. Lobbies are on the 3rd and 17th floors, and are connected to Queen's Road East and Kennedy Road respectively. A mini-skylobby is on the 56th floor and serves as a transfer floor for diners heading to the 60/F and 62/F restaurants. 
The building's white "bumps" between the windows have built-in window-washer guide rails.

The skyscraper was the filming location for R&B group Dru Hill's music video for "How Deep Is Your Love", directed by Brett Ratner, who also directed the movie Rush Hour, whose soundtrack features the song. The circular private swimming pool is clearly visible in the video. The pool has also featured in an Australian television advertisement by one of that country's major gaming companies, Tattersall's Limited, promoting a weekly lottery competition.

Access

MTR Wan Chai station Exit D, followed by a 5-minute walk south through Lee Tung Avenue.
https://en.wikipedia.org/wiki/Harwich%2C%20Massachusetts
Harwich, Massachusetts
Harwich is a New England town on Cape Cod, in Barnstable County in the state of Massachusetts in the United States. At the 2020 census it had a population of 13,440; Harwich experiences a seasonal increase to roughly 37,000. The town is a popular vacation spot near the Cape Cod National Seashore. Harwich's beaches are on the Nantucket Sound side of Cape Cod. The town has three active harbors: Saquatucket, Wychmere and Allen Harbors, all in Harwich Port. The town of Harwich includes the villages of Pleasant Lake, West Harwich, East Harwich, Harwich Port, Harwich Center, North Harwich and South Harwich.

History

Harwich was first settled by Europeans in 1670 as part of Yarmouth. The town was officially incorporated in 1694 and originally included the lands of the current town of Brewster. Early industry involved fishing and farming. The town is considered by some to be the birthplace of the cranberry industry, with the first commercial operation opened in 1846. There are still many bogs in the town, although the economy is now centered more on tourism and the town's role as a residential community. The town is also the site of the start/finish line of the "Sail Around the Cape", which rounds the Cape counter-clockwise, returning via the Cape Cod Canal.

Attractions

Since 1976, the town has hosted the annual Harwich Cranberry Festival, noted for its fireworks display, in September. In the summer, the town hosts the Harwich Mariners of the Cape Cod Baseball League; the Mariners were the 2008 league champions. The team plays at Whitehouse Field.

Harwich Port is a popular destination in the summer. The most popular beach in Harwich Port is Bank Street Beach; Harwich has 18 beaches and ponds. In recent years Harwich Port has become a popular nightlife destination during the summer, as many new bars and restaurants have opened and established restaurants have remained popular.
The Patriot Square Shopping Center in neighboring South Dennis is convenient for residents of North Harwich and West Harwich; the plaza contains a Stop & Shop supermarket and other stores. Supermarkets in Harwich include a Shaw's Star Market on the Harwich Port/West Harwich border and another Stop & Shop in East Harwich.

Geography

According to the United States Census Bureau, the town has a total area of , of which is land and , or 36.97%, is water. The seven villages of Harwich are West Harwich, North Harwich, East Harwich, South Harwich, Harwich Center, Harwich Port and Pleasant Lake; these are also referred to as the Harwiches.

Harwich is on the southern side of Cape Cod, just west of the southeastern corner. It is bordered by Dennis to the west, Brewster to the north, Orleans to the northeast, Chatham to the east, and Nantucket Sound to the south. Harwich is approximately east of Barnstable, east of the Cape Cod Canal, south of Provincetown, and southeast of Boston.

The town shares the largest lake on the Cape, called Long Pond, with the town of Brewster. Long Pond serves as a private airport for planes with the ability to land on water. The village of Pleasant Lake is at the southwest corner of the lake. Numerous other smaller bodies of water dot the town. Sand Pond, a public beach and swimming area, is located off Great Western Road in North Harwich. The shore is home to several harbors and rivers, including the Herring River, Allens Harbor, Wychmere Harbor, Saquatucket Harbor, and the Andrews River. The town is also home to Hawksnest State Park, as well as a marina and several beaches, including two on Long Pond. There are also many beaches in West Harwich and South Harwich.

Climate

According to the Köppen climate classification system, Harwich, Massachusetts, has a warm-summer, wet-all-year, humid continental climate (Dfb).
Dfb climates are characterized by at least one month with an average mean temperature ≤ 32.0 °F (≤ 0.0 °C), at least four months with an average mean temperature ≥ 50.0 °F (≥ 10.0 °C), all months with an average mean temperature ≤ 71.6 °F (≤ 22.0 °C), and no significant precipitation difference between seasons. The average seasonal (November–April) snowfall total is around 30 in (76 cm); the snowiest month on average is February, which corresponds with the annual peak in nor'easter activity. The plant hardiness zone is 7a, with an average annual extreme minimum air temperature of 4.0 °F (−15.6 °C).

Ecology

According to the A. W. Kuchler U.S. potential natural vegetation types, Harwich, Massachusetts, would primarily contain a Northeastern oak/pine (110) vegetation type with a Southern mixed forest (26) vegetation form.

Demographics

As of the census of 2000, there were 12,386 people, 5,471 households, and 3,545 families residing in the town. The population density was 588.6 people per square mile (227.3/km2). There were 9,450 housing units at an average density of 449.1 per square mile (173.4/km2). The racial makeup of the town was 95.41% White, 0.71% Black or African American, 0.19% Native American, 0.22% Asian, 0.05% Pacific Islander, 2.03% from other races, and 1.40% from two or more races; 0.96% of the population were Hispanic or Latino of any race.

Of the 5,471 households, 21.3% had children under the age of 18 living with them, 53.4% were married couples living together, 9.0% had a female householder with no husband present, and 35.2% were non-families. 29.8% of all households were made up of individuals, and 16.9% had someone living alone who was 65 years of age or older. The average household size was 2.20 and the average family size was 2.72.

In the town, the population was spread out, with 18.3% under the age of 18, 4.2% from 18 to 24, 22.1% from 25 to 44, 25.8% from 45 to 64, and 29.6% who were 65 years of age or older. The median age was 49 years.
For every 100 females, there were 84.5 males. For every 100 females age 18 and over, there were 79.7 males.

The median income for a household in the town was $41,552, and the median income for a family was $51,070. Males had a median income of $38,948 versus $27,439 for females. The per capita income for the town was $23,063. About 2.9% of families and 15.5% of the population were below the poverty line, including 8.4% of those under age 18 and 8.1% of those age 65 or over.

The town of Harwich contains several smaller census-designated places (CDPs) for which the U.S. Census reports more focused geographic and demographic information. The CDPs in Harwich are Harwich Center, Harwich Port (including South Harwich), East Harwich and Northwest Harwich (including West Harwich, North Harwich, and Pleasant Lake).

Government

Harwich is represented in the Massachusetts House of Representatives as part of the Fourth Barnstable district, which includes (with the exception of Brewster) all the towns east and north of Harwich on the Cape. The town is represented in the Massachusetts Senate as part of the Cape and Islands District, which includes all of Cape Cod, Martha's Vineyard and Nantucket except the towns of Bourne, Falmouth, Sandwich and a portion of Barnstable. The town is patrolled by the Second (Yarmouth) Barracks of Troop D of the Massachusetts State Police.

After the 2020 census, Massachusetts dropped from 10 to 9 congressional districts because of slower population growth. The new boundaries place Harwich in the 9th congressional district, as the 10th no longer exists; Harwich is currently represented by William R. Keating. The state's senior member of the United States Senate is Elizabeth Warren, elected in 2012. The junior senator is Ed Markey, elected in 2013.

Harwich is governed by the open town meeting form of government, led by a town administrator and a board of selectmen.
Public and health services

There are three libraries in the town. The municipal library, the Brooks Free Library in Harwich Center, is the largest and is a member of the Cape Libraries Automated Materials Sharing (CLAMS) library network. There are two smaller non-municipal libraries: the Chase Library on Route 28 in West Harwich at the Dennis town line, and the Harwich Port Library on Lower Bank Street in Harwich Port.

Harwich is the site of the Long Pond Medical Center, which serves the southeastern Cape region. Harwich has police and fire departments, with one combined fire and police station headquarters and a Station 2 in East Harwich. There are post offices in Harwich Port, South Harwich, West Harwich, and East Harwich.

Education

Harwich's schools are part of the Monomoy Regional School District. Harwich Elementary School serves students from pre-school through fourth grade; Monomoy Regional Middle School serves grades 5–7 for both Harwich and the adjoining town of Chatham; and Monomoy Regional High School serves grades 8–12 for both towns. Monomoy's teams are known as the Sharks, and the district is known for its boys' basketball, girls' basketball, girls' field hockey, softball and baseball teams. The Lighthouse Charter School recently moved into the former Harwich Cinema building.

Harwich is the site of Cape Cod Regional Technical High School, a grades 9–12 high school that serves most of Cape Cod. The town is also home to Holy Trinity PreSchool, a Catholic pre-school serving pre-kindergarten students in West Harwich.

Transportation

Roadways

Two of Massachusetts's major routes, U.S. Route 6 and Massachusetts Route 28, cross the town. The town has the southern termini of Routes 39 and 124, and a portion of Route 137 passes through the town. Route 39 leads east through East Harwich to Orleans. Route 28 passes through West Harwich and Harwich Port, connecting the towns of Dennis and Chatham.
Route 124 leads from Harwich Center to Brewster, and Route 137 cuts through East Harwich, leading from Chatham to Brewster.

Cape Cod Rail Trail

A portion of the Cape Cod Rail Trail, as well as several other bicycle routes, runs through town. There is no rail service in town, but the Cape Cod Rail Trail rotary is located in North Harwich near Main Street.

Air travel

Other than the occasional sea plane landing on the pond, the nearest airport is in neighboring Chatham; the nearest regional service is at Barnstable Municipal Airport; and the nearest national and international air service is at Logan International Airport in Boston.

CCRTA bus connections

In recent years parts of Cape Cod have introduced bus service, especially during the summer, to help cut down on traffic:

The Flex: Harwich Port – West Harwich – Dennis Port – South Dennis – East Dennis – South Yarmouth – West Yarmouth – Hyannis
Route H2O: Hyannis – Orleans via South Dennis, West Dennis, Dennis Port, Harwich Port, Chatham and Orleans

Notable people

Ruby Braff (1927–2003), jazz trumpeter and cornetist; a resident of Harwich in his later life
A. Elmer Crowell (1862–1952), master decoy carver from East Harwich. Crowell specialized in shorebirds, waterfowl, and miniatures; his decoys are consistently regarded as the finest and most desirable decoys ever made, and two of them have repeatedly set world records for sales. His preening pintail drake and Canada goose decoys currently share the world record at $1.13 million.
Seth Doane, award-winning television journalist; raised in Harwich and a graduate of Harwich High School
Shawn Fanning, creator and owner of the MP3 music-downloading application Napster; graduated from Harwich High School
John Kendrick (1740–1794), maritime fur trader; one of the first Americans to visit the Pacific Northwest, the Hawaiian Islands, and China
Thomas Nickerson, survivor of the ill-fated whaleship Essex, which inspired Melville's novel Moby-Dick
Tip O'Neill (1912–1994), politician; owned a vacation home near Bank Street Beach and is buried in Mount Pleasant Cemetery
Jonathan Walker (1799–1878), abolitionist; had his hand branded as a consequence of helping free slaves

Notable events

In 1975, during the bicentennial celebrations across America, a time capsule was buried in front of Brooks Academy in Harwich Center. The time capsule is due to be opened 100 years later, in 2075. On September 14, 1994, the town celebrated its tricentennial, which marked 300 years since the town's founding on the same day in 1694.
https://en.wikipedia.org/wiki/Hull%20classification%20symbol
Hull classification symbol
The United States Navy, United States Coast Guard, and United States National Oceanic and Atmospheric Administration (NOAA) use a hull classification symbol (sometimes called hull code or hull number) to identify their ships by type and by individual ship within a type. The system is analogous to the pennant number system that the Royal Navy and other European and Commonwealth navies use.

History

United States Navy

The U.S. Navy began to assign unique Naval Registry Identification Numbers to its ships in the 1890s. The system was a simple one in which each ship received a number appended to its ship type, fully spelled out, and added parenthetically after the ship's name when deemed necessary to avoid confusion between ships. Under this system, for example, the battleship Indiana was USS Indiana (Battleship No. 1), the cruiser Olympia was USS Olympia (Cruiser No. 6), and so on.

Beginning in 1907, some ships were also referred to alternatively by single-letter or three-letter codes; for example, USS Indiana (Battleship No. 1) could be referred to as USS Indiana (B-1), USS Olympia (Cruiser No. 6) as USS Olympia (C-6), and USS Pennsylvania (Armored Cruiser No. 4) as USS Pennsylvania (ACR-4). Rather than replacing the older system, however, these codes coexisted and were used interchangeably with it until the modern system was instituted on 17 July 1920.

During World War I, the U.S. Navy acquired large numbers of privately owned and commercial ships and craft for use as patrol vessels, mine warfare vessels, and various types of naval auxiliary ships, some of them with identical names. To keep track of them all, the Navy assigned unique identifying numbers to them. Those deemed appropriate for patrol work received section patrol numbers (SP), while those intended for other purposes received "identification numbers", generally abbreviated "Id. No."
or "ID;" some ships and craft changed from an SP to an ID number or vice versa during their careers, without their unique numbers themselves changing, and some ships and craft assigned numbers in anticipation of naval service were never acquired by the Navy. The SP/ID numbering sequence was unified and continuous, with no SP number repeated in the ID series or vice versa so that there could not be, for example, both an "SP-435" and an "Id. No. 435". The SP and ID numbers were used parenthetically after each boat's or ship's name to identify it; although this system pre-dated the modern hull classification system and its numbers were not referred to at the time as "hull codes" or "hull numbers," it was used in a similar manner to today's system and can be considered its precursor. United States Revenue Cutter Service and United States Coast Guard The United States Revenue Cutter Service, which merged with the United States Lifesaving Service in January 1915 to form the modern United States Coast Guard, began following the Navy's lead in the 1890s, with its cutters having parenthetical numbers called Naval Registry Identification Numbers following their names, such as (Cutter No. 1), etc. This persisted until the Navy's modern hull classification system's introduction in 1920, which included Coast Guard ships and craft. United States Coast and Geodetic Survey Like the U.S. Navy, the United States Coast and Geodetic Survey – a uniformed seagoing service of the United States Government and a predecessor of the National Oceanic and Atmospheric Administration (NOAA) – adopted a hull number system for its fleet in the 20th century. Its largest vessels, "Category I" oceanographic survey ships, were classified as "ocean survey ships" and given the designation "OSS". 
Intermediate-sized "Category II" oceanographic survey ships received the designation "MSS" for "medium survey ship", and smaller "Category III" oceanographic survey ships were given the classification "CSS" for "coastal survey ship". A fourth designation, "ASV" for "auxiliary survey vessel", covered even smaller vessels. In each case, a particular ship received a unique designation based on its classification and a unique hull number, separated by a space rather than a hyphen; for example, the third Coast and Geodetic Survey ship named Pioneer was an ocean survey ship officially known as USC&GS Pioneer (OSS 31). The Coast and Geodetic Survey's system persisted after the creation of NOAA in 1970, when NOAA took control of the Survey's fleet, but NOAA later changed to its modern hull classification system.

United States Fish and Wildlife Service

The Fish and Wildlife Service, created in 1940 and reorganized as the United States Fish and Wildlife Service (USFWS) in 1956, adopted a hull number system for its fisheries research ships and patrol vessels. It consisted of "FWS" followed by a unique identifying number. In 1970, NOAA took control of the seagoing ships of the USFWS's Bureau of Commercial Fisheries, and as part of the NOAA fleet they eventually were renumbered under the NOAA hull number system.

The modern hull classification system

United States Navy

The U.S. Navy instituted its modern hull classification system on 17 July 1920, doing away with section patrol numbers, "identification numbers", and the other numbering systems described above. In the new system, all hull classification symbols are at least two letters; for basic types the symbol is the first letter of the type name, doubled, except for aircraft carriers. The combination of symbol and hull number identifies a modern Navy ship uniquely. A heavily modified or re-purposed ship may receive a new symbol, and either retain the hull number or receive a new one.
For example, when the heavy gun cruiser was converted to a gun/missile cruiser, its hull number changed to CAG-1. The system of symbols has also changed a number of times, both since letter codes were introduced in 1907 and since the modern system was instituted in 1920, so ships' symbols sometimes change without anything being done to the physical ship.

Hull numbers are assigned by classification. Duplication between, but not within, classifications is permitted. Hence, CV-1 was the aircraft carrier and BB-1 was the battleship . Ship types and classifications have come and gone over the years, and many of the symbols listed below are not presently in use. The Naval Vessel Register maintains an online database of U.S. Navy ships showing which symbols are presently in use.

From after World War II until 1975, the U.S. Navy defined a "frigate" as a type of surface warship larger than a destroyer and smaller than a cruiser. In other navies, such a ship generally was referred to as a "flotilla leader" or "destroyer leader"; hence the U.S. Navy's use of "DL" for "frigate" prior to 1975, while "frigates" in other navies were smaller than destroyers and more like what the U.S. Navy termed a "destroyer escort", "ocean escort", or "DE". The United States Navy's 1975 ship reclassification of cruisers, frigates, and ocean escorts brought U.S. Navy classifications into line with other nations' classifications, at least cosmetically in terms of terminology, and eliminated the perceived "cruiser gap" with the Soviet Navy by redesignating the former "frigates" as "cruisers".

Military Sealift Command

If a U.S. Navy ship's hull classification symbol begins with "T-", it is part of the Military Sealift Command, has a primarily civilian crew, and is a United States Naval Ship (USNS) in non-commissioned service – as opposed to a commissioned United States Ship (USS) with an all-military crew.
United States Coast Guard

If a ship's hull classification symbol begins with "W", it is a commissioned cutter of the United States Coast Guard. Until 1965, the Coast Guard used U.S. Navy hull classification codes, prepending a "W" to their beginning. In 1965, it retired some of the less mission-appropriate Navy-based classifications and developed new ones of its own, most notably WHEC for "high endurance cutter" and WMEC for "medium endurance cutter".

National Oceanic and Atmospheric Administration

The National Oceanic and Atmospheric Administration (NOAA), a component of the United States Department of Commerce, includes the National Oceanic and Atmospheric Administration Commissioned Officer Corps (the "NOAA Corps"), one of the eight uniformed services of the United States, and operates a fleet of seagoing research and survey ships. The NOAA fleet also uses a hull classification symbol system, which it likewise calls "hull numbers", for its ships.

After NOAA took over the former fleets of the U.S. Coast and Geodetic Survey and the U.S. Fish and Wildlife Service Bureau of Commercial Fisheries in 1970, it adopted a new system of ship classification. In its system, the NOAA fleet is divided into two broad categories, research ships and survey ships. The research ships, which include oceanographic and fisheries research vessels, are given hull numbers beginning with "R", while the survey ships, generally hydrographic survey vessels, receive hull numbers beginning with "S". The letter is followed by a three-digit number; the first digit indicates the NOAA "class" (i.e., size) of the vessel, which NOAA assigns based on the ship's gross tonnage and horsepower, while the next two digits combine with the first digit to create a unique three-digit identifying number for the ship. Generally, each NOAA hull number is written with a space between the letter and the three-digit number. Unlike in the U.S.
Navy system, once an older NOAA ship leaves service, a newer one can be given the same hull number; for example, "S 222" was assigned to , then assigned to NOAAS Thomas Jefferson (S 222), which entered NOAA service after Mount Mitchell was stricken.

United States Navy hull classification codes

The U.S. Navy's system of alpha-numeric ship designators, and its associated hull numbers, have for several decades been a unique method of categorizing ships of all types: combatants, auxiliaries and district craft. Though considerably changed in detail and expanded over the years, this system remains essentially the same as when formally implemented in 1920. It is a very useful tool for organizing and keeping track of naval vessels, and also provides the basis for the identification numbers painted on the bows (and frequently the sterns) of most U.S. Navy ships.

The ship designator and hull number system's roots extend back to the late 1880s, when ship type serial numbers were assigned to most of the new-construction warships of the emerging "Steel Navy". During the course of the next thirty years, these same numbers were combined with filing codes used by the Navy's clerks to create an informal version of the system that was put in place in 1920. Limited usage of ship numbers goes back even earlier, most notably to the "Jeffersonian Gunboats" of the early 1800s and the "Tinclad" river gunboats of the Civil War Mississippi Squadron.

It is important to understand that hull number letter prefixes are not acronyms, and should not be carelessly treated as abbreviations of ship type classifications. Thus, "DD" does not stand for anything more than "Destroyer", "SS" simply means "Submarine", and "FF" is the post-1975 type code for "Frigate".

The hull classification codes for ships in active duty in the United States Navy are governed under Secretary of the Navy Instruction 5030.8B (SECNAVINST 5030.8B).

Warships

Warships are designed to participate in combat operations.
The origin of the two-letter code derives from the need to distinguish various cruiser subtypes.

Aircraft carrier type

Aircraft carriers are ships designed primarily for the purpose of conducting combat operations by aircraft which engage in attacks against airborne, surface, sub-surface and shore targets. Contrary to popular belief, the "CV" hull classification symbol does not stand for "carrier vessel". "CV" derives from the cruiser designation, with one popular theory that the V comes from French voler, "to fly", but this has never been definitively proven. Aircraft carriers are designated in two sequences: the first sequence runs from CV-1 USS Langley to the very latest ships, and the second sequence, "CVE" for escort carriers, ran from CVE-1 Long Island to CVE-127 Okinawa before being discontinued.

AV: Heavier-than-air aircraft tender (retired)
AZ: Lighter-than-air aircraft tender (retired) (1920–23)
AVG: General-purpose aircraft tender (repurposed escort carrier) (1941–42)
AVD: Seaplane tender destroyer (retired)
AVP: Seaplane tender, small (retired)
AVT (i): Auxiliary aircraft transport (retired)
AVT (ii): Auxiliary training carrier (retired)
ACV: Auxiliary aircraft carrier (escort carrier, replaced by CVE) (1942)
CV: Fleet aircraft carrier (1921–1975), multi-purpose aircraft carrier (1975–present)
CVA: Aircraft carrier, attack (category merged into CV, 30 June 1975)
CV(N): Aircraft carrier, night (deck equipped with lighting, with pilots trained for nighttime flights) (1944) (retired)
CVAN: Aircraft carrier, attack, nuclear-powered (category merged into CVN, 30 June 1975)
CVB: Aircraft carrier, large (original USS Midway class, category merged into CVA, 1952)
CVE: Escort aircraft carrier (retired) (1943–retirement of type)
CVHA: Aircraft carrier, helicopter assault (retired in favor of several LH-series amphibious assault ship hull codes)
CVHE: Aircraft carrier, helicopter, escort (retired)
CVL: Light aircraft carrier or aircraft carrier, small (retired)
CVN: Aircraft carrier, nuclear-powered
CVS: Antisubmarine aircraft carrier (retired)
CVT: Aircraft carrier, training (changed to AVT (auxiliary))
CVU: Aircraft carrier, utility (retired)
CVG: Aircraft carrier, guided missile (retired)
CF: Flight-deck cruiser (1930s, retired unused)
CVV: Aircraft carrier, vari-purpose, medium (retired unused)

Surface combatant type

Surface combatants are ships which are designed primarily to engage enemy forces on the high seas. The primary surface combatants are battleships, cruisers and destroyers. Battleships are very heavily armed and armored; cruisers moderately so; destroyers and smaller warships, less so. Before 1920, ships were called "<type> no. X", with the type fully pronounced. The types were commonly abbreviated in ship lists to "B-X", "C-X", "D-X", et cetera; for example, before 1920, would have been called "USS Minnesota, Battleship number 22" orally and "USS Minnesota, B-22" in writing. After 1920, the ship's name would have been both written and pronounced "USS Minnesota (BB-22)".
In generally decreasing size, the types are:

ACR: Armored cruiser (pre-1920)
AFSB: Afloat forward staging base (also AFSB(I) for "interim"; changed to ESB for expeditionary mobile dock)
B: Battleship (pre-1920)
BB: Battleship
BBG: Battleship, guided missile, or arsenal ship (theoretical only, never assigned)
BM: Monitor (1920–retirement)
C: Cruiser (pre-1920 protected cruisers and peace cruisers)
CA (first series): Cruiser, armored (retired; comprised all surviving pre-1920 armored and protected cruisers)
CA (second series): Heavy cruiser, category later renamed gun cruiser (retired)
CAG: Cruiser, heavy, guided missile (retired)
CB: Large cruiser (retired)
CBC: Large command cruiser (retired, never used operationally)
CC: Battlecruiser (retired, never used operationally)
CC (second usage): Command ship (retired)
CLC: Command cruiser
CLD: Cruiser-destroyer, light (never used operationally)
CG: Cruiser, guided missile
CGN: Cruiser, guided missile, nuclear-powered
CL: Cruiser, light (retired)
CLAA: Cruiser, light, anti-aircraft (retired)
CLG: Cruiser, light, guided missile (retired)
CLGN: Cruiser, light, guided missile, nuclear-powered (retired)
CLK: Cruiser, hunter–killer (abolished 1951)
CM: Cruiser–minelayer (retired)
CS: Scout cruiser (retired)
CSGN: Cruiser, strike, guided missile, nuclear-powered (retired, never used operationally)
D: Destroyer (pre-1920)
DD: Destroyer
DDC: Corvette (briefly proposed in the mid-1950s)
DDE: Escort destroyer, a destroyer (DD) converted for antisubmarine warfare – category abolished 1962.
(not to be confused with destroyer escort DE)
DDG: Destroyer, guided missile
DDK: Hunter–killer destroyer (category merged into DDE, 4 March 1950)
DDR: Destroyer, radar picket (retired)
DE: Destroyer escort (World War II; later became ocean escort)
DE: Ocean escort (abolished 30 June 1975)
DEG: Guided missile ocean escort (abolished 30 June 1975)
DER: Radar picket destroyer escort (abolished 30 June 1975)

There were two distinct breeds of DE: the World War II destroyer escorts (some of which were converted to DERs) and the postwar DE/DEG classes, which were known as ocean escorts despite carrying the same type symbol as the World War II destroyer escorts. All DEs, DEGs, and DERs were reclassified as FFs, FFGs, or FFRs on 30 June 1975.

DL: Destroyer leader (later frigate) (retired)
DLG: Frigate, guided missile (abolished 30 June 1975)
DLGN: Frigate, guided missile, nuclear-propulsion (abolished 30 June 1975)

The DL category was established in 1951 with the abolition of the CLK category: CLK 1 became DL 1, and DD 927–930 became DL 2–5. By the mid-1950s the term destroyer leader had been dropped in favor of frigate. Most DLGs and DLGNs were reclassified as CGs and CGNs on 30 June 1975; however, DLG 6–15 became DDG 37–46. The old DLs were already gone by that time. Only applied to .

DM: Destroyer, minelayer (retired)
DMS: Destroyer, minesweeper (retired)
FF: Frigate
PF: Patrol frigate (retired)
FFG: Frigate, guided missile
FFH: Frigate with assigned helicopter
FFL: Frigate, light
FFR: Frigate, radar picket (retired)
FFT: Frigate (reserve training) (retired)

The FF, FFG, and FFR designations were established 30 June 1975 as new type symbols for ex-DEs, DEGs, and DERs. The first new-built ships to carry the FF/FFG designation were the s. In January 2015, it was announced that the LCS ship types would be redesignated as FF.
PG: Patrol gunboat (retired)
PCH: Patrol craft, hydrofoil (retired)
PHM: Patrol, hydrofoil, missile (retired)
K: Corvette (retired)
LCS: Littoral combat ship
In January 2015, the Navy announced that the up-gunned LCS would be reclassified as a frigate, since the SSC Task Force's requirement was to upgrade the ships with frigate-like capabilities. Hull designations will be changed from LCS to FF; existing LCSs back-fitted with modifications may also earn the FF label. The Navy hopes to start retrofitting technological upgrades onto existing and under-construction LCSs before 2019.
LSES: Large surface effect ship
M: Monitor (1880s–1920)
SES: Surface effect ship
TB: Torpedo boat
Submarine type
Submarines are all self-propelled submersible types (type symbols usually beginning with SS), regardless of whether they are employed as combatant, auxiliary, or research and development vehicles, so long as they retain at least a residual combat capability. While some classes, including all diesel-electric submarines, are retired from USN service, non-U.S. navies continue to employ SS, SSA, SSAN, SSB, SSC, SSG, SSM, and SST types. With the advent of new air-independent propulsion/power (AIP) systems, both SSI and SSP are used to distinguish the types within the USN, with SSP declared the preferred term. SSK, retired by the USN, continues to be used colloquially and interchangeably with SS for diesel-electric attack/patrol submarines within the USN and, more formally, by the Royal Navy and British firms such as Jane's Information Group.
SC: Cruiser submarine (retired)
SF: Fleet submarine (retired)
SM: Submarine minelayer (retired)
SS: Attack submarine
SSA: Auxiliary/cargo submarine
SSAN: Auxiliary/cargo submarine, nuclear-powered
SSB: Ballistic missile submarine
SSBN: Ballistic missile submarine, nuclear-powered
SSC: Coastal submarine, over 150 tons
SSG: Guided missile submarine
SSGN: Guided missile submarine, nuclear-powered
SSI: Attack submarine (diesel, air-independent propulsion)
SSK: Hunter-killer/ASW submarine (retired)
SSKN: Hunter-killer/ASW submarine, nuclear-powered (retired)
SSM: Midget submarine, under 150 tons
SSN: Attack submarine, nuclear-powered
SSNR: Special attack submarine
SSO: Submarine oiler (retired)
SSP: Attack submarine (diesel, air-independent power) (alternate use); formerly submarine transport
SSQ: Auxiliary submarine, communications (retired)
SSQN: Auxiliary submarine, communications, nuclear-powered (retired)
SSR: Radar picket submarine (retired)
SSRN: Radar picket submarine, nuclear-powered (retired)
SST: Training submarine
IXSS: Unclassified miscellaneous submarine
MTS: Moored training ship (Naval Nuclear Power School training platform; reconditioned SSBNs and SSNs)
Patrol combatant type
Patrol combatants are ships whose mission may extend beyond coastal duties and whose characteristics include adequate endurance and seakeeping for operations exceeding 48 hours on the high seas without support. This notably included the Brown Water Navy/riverine forces during the Vietnam War. Few of these ships are in service today.
PBR: Patrol boat, river, Brown Water Navy ("Pibber", Vietnam)
PC: Coastal patrol, originally submarine chaser
PCF: Patrol craft, fast (Swift Boat), Brown Water Navy (Vietnam)
PE: Eagle boat of World War I
PF: World War II frigate, based on a British design
PFG: Original designation of
PG: WWII-era gunboats; later patrol combatant, with the ability to operate in rivers (generally known as river gunboats)
PGH: Patrol combatant, hydrofoil
PHM: Patrol, hydrofoil, missile
PR: Patrol, river
PT: Patrol torpedo boat, the U.S. take on the motor torpedo boat (World War II)
PTF: Patrol torpedo, fast, Brown Water Navy (Vietnam)
PTG: Patrol torpedo gunboat
Monitor: Heavily gunned riverine boat, Brown Water Navy (Vietnam and prior)
ASPB: Assault support patrol boat ("Alpha Boat"), Brown Water Navy; also used as a riverine minesweeper (Vietnam)
PACV: Patrol air cushion vehicle, a hovercraft of the Brown Water Navy (Vietnam)
SP: Section patrol, used indiscriminately for patrol vessels, mine warfare vessels, and some other types (World War I; retired 1920)
Amphibious warfare type
Amphibious warfare vessels include all ships having an organic capability for amphibious warfare and characteristics enabling long-duration operations on the high seas. There are two classifications of craft: amphibious warfare ships, which are built to cross oceans, and landing craft, which are designed to take troops from ship to shore in an invasion. The US Navy hull classification symbol for a ship with a well deck depends on its facilities for aircraft: an LSD has a helicopter deck, an LPD has a hangar in addition to the helicopter deck, and an LHD or LHA has a full-length flight deck.
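The well-deck rule above amounts to a three-way decision on aviation facilities. A minimal illustrative sketch (the function and parameter names are ours, not an official Navy tool):

```python
# Illustrative sketch of the well-deck designation rule described above:
# an LSD has a helicopter deck, an LPD adds a hangar, and an LHD/LHA has
# a full-length flight deck. Names here are hypothetical.
def well_deck_designation(has_hangar: bool, has_full_length_flight_deck: bool) -> str:
    """Map a well-deck ship's aviation facilities to its designation family."""
    if has_full_length_flight_deck:
        return "LHD/LHA"  # full-length flight deck
    if has_hangar:
        return "LPD"      # hangar in addition to the helicopter deck
    return "LSD"          # helicopter deck only
```

For example, a well-deck ship with a hangar but no full-length flight deck falls in the LPD family.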
Ships
AKA: Attack cargo ship (to LKA, 1969)
APA: Attack transport (to LPA, 1969)
APD: High-speed transport (converted destroyer or destroyer escort) (to LPR, 1969)
AGC: Amphibious force flagship (to LCC, 1969)
LCC: Amphibious command ship, also known as landing craft, control
LHA: General-purpose amphibious assault ship (landing ship, helicopter, assault)
LHD: Multi-purpose amphibious assault ship (landing ship, helicopter, dock)
LKA: Amphibious cargo ship (out of commission)
LPA: Amphibious transport
LPD: Amphibious transport dock (landing ship, personnel, dock)
LPH: Landing ship, personnel, helicopter
LPR: High-speed transport
LSD: Landing ship, dock
LSH: Landing ship, heavy
LSIL: Landing ship, infantry (large) (formerly LCIL)
LSL: Landing ship, logistics
LSM: Landing ship, medium
LSM(R): Landing ship, medium (rocket)
LSSL: Landing ship, support (large) (formerly LCSL)
LST: Landing ship, tank
LST(H): Landing ship, tank (hospital)
LSV: Landing ship, vehicle
Landing craft
LCA: Landing craft, assault
LCAC: Landing craft, air cushion
LCFF: Landing craft, flotilla flagship
LCH: Landing craft, heavy
LCI: Landing craft, infantry (World War II-era classification, further modified by (G) gunboat, (L) large, (M) mortar, (R) rocket)
LCL: Landing craft, logistics (UK)
LCM: Landing craft, mechanized
LCP: Landing craft, personnel
LCP(L): Landing craft, personnel, large
LCP(R): Landing craft, personnel, ramped
LCPA: Landing craft, personnel, air-cushioned
LCS(L): Landing craft, support (large); changed to LSSL in 1949
LCT: Landing craft, tank (World War II era)
LCU: Landing craft, utility
LCVP: Landing craft, vehicle and personnel
LSH: Landing ship, heavy (Royal Australian Navy)
Expeditionary support
Operated by Military Sealift Command; these ships carry the prefix "USNS" and hull codes beginning with "T-".
ESD: Expeditionary transfer dock
ESB: Expeditionary mobile base (a variant of ESD; formerly AFSB)
EPF: Expeditionary fast transport
MLP: Mobile landing platform (changed to ESD)
JHSV: Joint high-speed vessel (changed to EPF)
HST: High-speed transport (similar to JHSV; not to be confused with the WWII-era high-speed transport, APD)
HSV: High-speed vessel
Combat logistics type
Ships which have the capability to provide underway replenishment to fleet units.
AC: Collier (retired)
AE: Ammunition ship
AF: Stores ship (retired)
AFS: Combat stores ship
AKE: Advanced dry cargo ship
AKS: General stores ship
AO: Fleet oiler
AOE: Fast combat support ship
AOR: Replenishment oiler
AW: Distilling ship (retired)
Mine warfare type
Mine warfare ships are those whose primary function is mine warfare on the high seas.
ADG: Degaussing ship
AM: Minesweeper
AMb: Harbor minesweeper
AMc: Coastal minesweeper
AMCU: Underwater mine locator
AMS: Motor minesweeper
CM: Cruiser (i.e., large) minelayer
CMc: Coastal minelayer
DM: High-speed minelayer (converted destroyer)
DMS: High-speed minesweeper (converted destroyer)
MCM: Mine countermeasures ship
MCS: Mine countermeasures support ship
MH(C)(I)(O)(S): Minehunter: (coastal), (inshore), (ocean), (hunter and sweeper, general)
MLC: Coastal minelayer
MSC: Minesweeper, coastal
MSF: Minesweeper, steel-hulled
MSO: Minesweeper, ocean
PCS: Submarine chaser (wooden), fitted for minesweeping
YDG: District degaussing vessel
Coastal defense type
Coastal defense ships are those whose primary function is coastal patrol and interdiction.
FS: Corvette
PB: Patrol boat
PBR: Patrol boat, river
PC: Patrol, coastal
PCE: Patrol craft, escort
PCF: Patrol craft, fast (Swift Boat)
PCS: Patrol craft, sweeper (modified motor minesweepers intended for anti-submarine warfare)
PF: Frigate, in a role similar to the World War II Commonwealth corvette
PG: Patrol gunboat
PGM: Motor gunboat (to PG, 1967)
PR: Patrol, river
SP: Section patrol
Mobile logistics type
Mobile logistics ships have the capability to provide direct material support to other deployed units operating far from home ports.
AD: Destroyer tender
AGP: Patrol craft tender
AR (AR, ARB, ARC, ARG, ARH, ARL, ARV): Repair ship
AS: Submarine tender
AV: Seaplane tender
Auxiliary type
An auxiliary ship is designed to operate in any number of roles supporting combatant ships and other naval operations.
AN: Net-laying ship
ARL: Auxiliary repair, light (light craft or landing craft repair ship; World War II era, out of commission)
ATF: Fleet ocean tug
AGHS: Patrol combatant support ship, ocean or inshore
Airships
Although technically aircraft, pre-World War II rigid airships (e.g., zeppelins) were treated like commissioned surface warships and submarines: they flew the U.S. ensign from their stern and carried a United States Ship (USS) designation. Non-rigid airships (e.g., blimps) continued to fly the U.S. ensign from their stern but were always considered primarily aircraft.
ZMC: Airship, metal-clad
ZNN-G: G-class blimp
ZNN-J: J-class blimp
ZNN-L: L-class blimp
ZNP-K: K-class blimp
ZNP-M: M-class blimp
ZNP-N: N-class blimp
ZPG-3W: Surveillance patrol blimp
ZR: Rigid airship
ZRS: Rigid airship, scout
Support ships
Support ships are not designed to participate in combat and are generally not armed. For ships with civilian crews (owned by and/or operated for Military Sealift Command and the Maritime Administration), the prefix T- is placed at the front of the hull classification.
Support type
Support ships are designed to operate in the open ocean in a variety of sea states to provide general support to either combatant forces or shore-based establishments. They include smaller auxiliaries which, by the nature of their duties, leave inshore waters.
AB: Auxiliary crane ship (1920–41)
AC: Collier (retired)
ACS: Auxiliary crane ship
AG: Miscellaneous auxiliary
AGDE: Testing ocean escort
AGDS: Deep submergence support ship
AGER (i): Miscellaneous auxiliary, electronic reconnaissance
AGER (ii): Environmental research ship
AGF: Miscellaneous command ship
AGFF: Testing frigate
AGL: Auxiliary vessel, lighthouse tender
AGM: Missile range instrumentation ship
AGOR: Oceanographic research ship
AGOS: Ocean surveillance ship
AGP: Motor torpedo boat tender
AGR: Radar picket ship
AGS: Surveying ship
AGSE: Submarine and special warfare support
AGSS: Auxiliary research submarine
AGTR: Technical research ship
AH: Hospital ship
AK: Cargo ship
AKR: Vehicle cargo ship
AKS: General stores issue ship
AKV: Cargo ship and aircraft ferry
AO: Oiler
AOE: Fast combat support ship
AOR: Replenishment oiler (retired)
AOG: Gasoline tanker
AOT: Transport oiler
AP: Transport
ARC: Cable repair ship (see also cable layer)
ARG: Internal combustion engine repair ship
APB: Self-propelled barracks ship
APL: Barracks craft
ARB: Battle damage repair ship
ARL: Small repair ship
ARS: Salvage ship
AS: Submarine tender
ASR: Submarine rescue ship
AT: Ocean-going tug
ATA: Auxiliary ocean tug
ATF: Fleet ocean tug
ATLS: Drone launch ship
ATS: Salvage and rescue ship
AVB (i): Aviation logistics support ship
AVB (ii): Advance aviation base ship
AVS: Aviation stores issue ship
AVT (i): Auxiliary aircraft transport
AVT (ii): Auxiliary aircraft landing training ship
EPCER: Experimental patrol craft escort, rescue
ID or Id. No.: Civilian ship taken into service for auxiliary duties, used indiscriminately for large ocean-going ships of all kinds and coastal and yard craft (World War I; retired 1920)
PCER: Patrol craft escort, rescue
SBX: Sea-based X-band radar, a mobile active electronically scanned array early-warning radar station
Service type craft
Service craft are navy-subordinated craft (including non-self-propelled) designed to provide general support to either combatant forces or shore-based establishments. The suffix "N" refers to non-self-propelled variants.
AB: Crane ship
AFDB: Large auxiliary floating dry dock
AFD/AFDL: Small auxiliary floating dry dock
AFDM: Medium auxiliary floating dry dock
APB: Self-propelled barracks ship
APL: Barracks craft
ARD: Auxiliary repair dry dock
ARDM: Medium auxiliary repair dry dock
ATA: Auxiliary ocean tug
DSRV: Deep submergence rescue vehicle
DSV: Deep submergence vehicle
JUB/JB: Jack-up barge
NR: Submersible research vehicle
YC: Open lighter
YCF: Car float
YCV: Aircraft transportation lighter
YD: Floating crane
YDT: Diving tender
YF: Covered lighter
YFB: Ferry boat or launch
YFD: Yard floating dry dock
YFN: Covered lighter (non-self-propelled)
YFNB: Large covered lighter (non-self-propelled)
YFND: Dry dock companion craft (non-self-propelled)
YFNX: Lighter, special purpose (non-self-propelled)
YFP: Floating power barge
YFR: Refrigerated covered lighter
YFRN: Refrigerated covered lighter (non-self-propelled)
YFRT: Range tender, e.g. USNS Range Recoverer (T-AG-161)
YFU: Harbor utility craft
YG: Garbage lighter
YGN: Garbage lighter (non-self-propelled)
YH: Ambulance boat/small medical support vessel
YLC: Salvage lift craft
YM: Dredge
YMN: Dredge (non-self-propelled)
YNG: Gate craft
YN: Yard net tender
YNT: Net tender
YO: Fuel oil barge
YOG: Gasoline barge
YOGN: Gasoline barge (non-self-propelled)
YON: Fuel oil barge (non-self-propelled)
YOS: Oil storage barge
YP: Patrol craft, training
YPD: Floating pile driver
YR: Floating workshop
YRB: Repair and berthing barge
YRBM: Repair, berthing, and messing barge
YRDH: Floating dry dock workshop (hull)
YRDM: Floating dry dock workshop (machine)
YRR: Radiological repair barge (services nuclear ships and submarines)
YRST: Salvage craft tender
YSD: Seaplane wrecking derrick ("yard seaplane derrick")
YSR: Sludge removal barge
YT: Harbor tug (craft later assigned YTB, YTL, or YTM classifications)
YTB: Large harbor tug
YTL: Small harbor tug
YTM: Medium harbor tug
YTT: Torpedo trials craft
YW: Water barge
YWN: Water barge (non-self-propelled)
ID or Id. No.: Civilian ship taken into service for auxiliary duties, used indiscriminately for large ocean-going ships of all kinds and coastal and yard craft (World War I; retired 1920)
IX: Unclassified miscellaneous unit
X: Submersible craft
"none": To honor her unique historical status, USS Constitution, formerly IX 21, was reclassified to "none", effective 1 September 1975.
United States Coast Guard vessels
Prior to 1965, U.S. Coast Guard cutters used the same designations as naval ships, preceded by a "W" to indicate a Coast Guard commission. The U.S. Coast Guard considers any ship over 65 feet in length with a permanently assigned crew a cutter.
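The two Coast Guard conventions just described, the pre-1965 "W" prefix and the over-65-foot cutter definition, can be sketched as small helpers. This is an illustrative sketch only; the names are ours, not USCG software:

```python
# Hypothetical helpers illustrating the conventions described above:
# pre-1965, a Coast Guard-commissioned ship carried the naval type symbol
# prefixed with "W"; any ship over 65 feet with a permanently assigned
# crew is considered a cutter.
def coast_guard_symbol(navy_symbol: str) -> str:
    """Prefix a naval classification symbol with 'W' for a Coast Guard commission."""
    return "W" + navy_symbol

def is_cutter(length_ft: float, permanently_crewed: bool) -> bool:
    """Apply the Coast Guard's definition of a cutter."""
    return length_ft > 65 and permanently_crewed
```

For example, a gunboat (PG) in Coast Guard commission became WPG, while a 45-foot boat is not a cutter regardless of crewing.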
Current USCG cutter classes and types
Historic USCG cutter classes and types
USCG classification symbols definitions
CG: All Coast Guard ships in the 1920s (retired)
WAGB: Coast Guard icebreaker
WAGL: Auxiliary vessel, lighthouse tender (retired 1960s)
WAVP: Seagoing Coast Guard seaplane tenders (retired 1960s)
WDE: Seagoing Coast Guard destroyer escorts (retired 1960s)
WHEC: Coast Guard high endurance cutters
WIX: Coast Guard barque
WLB: Coast Guard buoy tenders
WLBB: Coast Guard seagoing buoy tender/icebreaker
WLI: Coast Guard inland buoy tenders
WLIC: Coast Guard inland construction tenders
WLM: Coast Guard coastal buoy tenders
WLR: Coast Guard river buoy tenders
WMEC: Coast Guard medium endurance cutters
WMSL: Coast Guard maritime security cutter, large (referred to as national security cutters)
WPB: Coast Guard patrol boats
WPC: Coast Guard patrol craft; later reclassed under WHEC, with the symbol reused for Coast Guard patrol cutters (referred to as fast response cutters)
WPG: Seagoing Coast Guard gunboats (retired 1960s)
WTGB: Coast Guard tug boat (140-foot icebreakers)
WYTL: Small harbor tug
USCG classification symbols for small craft and boats
MLB: Motor lifeboat (52', 47', and 44' variants)
UTB: Utility boat
DPB: Deployable pursuit boat
ANB: Aids-to-navigation boats
TPSB: Transportable port security boat
RHIB: Rigid-hull inflatable boats
Temporary designations
United States Navy Designations (Temporary) are a form of U.S. Navy ship designation intended for temporary identification use. Such designations usually arose during periods of sudden mobilization, such as before and during World War II or the Korean War, when a sudden temporary need arose for a ship for which there was no official Navy designation. During World War II, for example, a number of commercial vessels were requisitioned or acquired by the U.S. Navy to meet the sudden requirements of war. A yacht acquired by the U.S. Navy at the start of World War II might seem desirable to the Navy even though its use might not be fully developed or explored at the time of acquisition. On the other hand, a U.S. Navy vessel, such as the yacht in the example above, already in commission or service, might be desired or found useful for another purpose for which there is no official designation.
IX: Unclassified miscellaneous auxiliary ship. For example, the yacht Chanco, acquired by the U.S. Navy on 1 October 1940, was classified as a minesweeper but was mainly used as a patrol craft along the New England coast. When another assignment came and it could not be determined how to classify the vessel, it was redesignated IX-175 on 10 July 1944.
IXSS: Unclassified miscellaneous submarines
YAG: Miscellaneous auxiliary service craft
Numerous other U.S. Navy vessels were launched with a temporary, or nominal, designation, such as YMS or PC, since it could not be determined at the time of construction what they would be used for. Many were vessels in the 150-to-200-foot length class with powerful engines, whose function could be that of a minesweeper, patrol craft, submarine chaser, seaplane tender, tugboat, or other type. Once their capability was determined, such vessels were reclassified with their actual designation.
National Oceanic and Atmospheric Administration hull codes
R: Research ships, including oceanographic and fisheries research ships
S: Survey ships, including hydrographic survey ships
The letter is paired with a three-digit number. The first digit is determined by the ship's "power tonnage", defined as the sum of its shaft horsepower and gross international tonnage, as follows:
If the power tonnage is 5,501 through 9,000, the first digit is "1".
If the power tonnage is 3,501 through 5,500, the first digit is "2".
If the power tonnage is 2,001 through 3,500, the first digit is "3".
If the power tonnage is 1,001 through 2,000, the first digit is "4".
If the power tonnage is 501 through 1,000, the first digit is "5".
If the power tonnage is 500 or less and the ship is at least 65 feet (19.8 meters) long, the first digit is "6".
The second and third digits are assigned to create a unique three-digit hull number.
See also
United States Navy 1975 ship reclassification
List of hull classifications
Ship prefix
Hull classification symbol (Canada)
Pennant number, the British Commonwealth equivalent
Notes
Footnotes
Citations
References
United States Naval Aviation 1910–1995, Appendix 16: U.S. Navy and Marine Corps Squadron Designations and Abbreviations. U.S. Navy, c. 1995. Quoted in Derdall and DiGiulian, op. cit.
USCG Designations
Naval History and Heritage Command Online Library of Selected Images: U.S. Navy Ships – Listed by Hull Number: "SP" #s and "ID" #s — World War I Era Patrol Vessels and other Acquired Ships and Craft
Wertheim, Eric. The Naval Institute Guide to Combat Fleets of the World, 15th Edition: Their Ships, Aircraft, and Systems. Annapolis, Maryland: Naval Institute Press, 2007.
Further reading
Friedman, Norman. U.S. Small Combatants, Including PT-Boats, Subchasers, and the Brown-Water Navy: An Illustrated Design History. Annapolis, Md: Naval Institute Press, 1987.
External links
Current U.S. Navy Ship Classifications
U.S. Navy Inactive Classification Symbols
U.S. Naval Vessels Registry (Service Craft)
U.S. Naval Vessels Registry (Ships)
U.S. Naval Vessel Register (Current ships)
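The NOAA power-tonnage rule in the hull-codes section above reduces to a band lookup. A minimal sketch under the stated bands (the function name is ours; the source lists no band above 9,000):

```python
# Sketch of NOAA's "power tonnage" first-digit rule described in the NOAA
# hull-codes section: power tonnage = shaft horsepower + gross international
# tonnage, mapped to the first digit of the three-digit hull number.
def noaa_first_digit(shaft_horsepower: float, gross_tonnage: float,
                     length_ft: float) -> int:
    power_tonnage = shaft_horsepower + gross_tonnage
    if 5501 <= power_tonnage <= 9000:
        return 1
    if 3501 <= power_tonnage <= 5500:
        return 2
    if 2001 <= power_tonnage <= 3500:
        return 3
    if 1001 <= power_tonnage <= 2000:
        return 4
    if 501 <= power_tonnage <= 1000:
        return 5
    if power_tonnage <= 500 and length_ft >= 65:
        return 6
    raise ValueError("power tonnage outside the bands listed in the source")
```

For example, a ship with 2,000 shaft horsepower and 5,000 gross tons has a power tonnage of 7,000, giving a first digit of 1.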
https://en.wikipedia.org/wiki/Habeas%20corpus
Habeas corpus
Habeas corpus (Medieval Latin, "[we, a Court, command] that you have the body [of the detainee brought before us]") is a recourse in law through which a person can report an unlawful detention or imprisonment to a court and request that the court order the custodian of the person, usually a prison official, to bring the prisoner to court so that it can be determined whether the detention is lawful. The writ of habeas corpus was described in the eighteenth century by William Blackstone as a "great and efficacious writ in all manner of illegal confinement". It is a summons with the force of a court order; it is addressed to the custodian (a prison official, for example) and demands that a prisoner be brought before the court, and that the custodian present proof of authority, allowing the court to determine whether the custodian has lawful authority to detain the prisoner. If the custodian is acting beyond their authority, then the prisoner must be released. Any prisoner, or another person acting on their behalf, may petition the court, or a judge, for a writ of habeas corpus. One reason for the writ to be sought by a person other than the prisoner is that the detainee might be held incommunicado. Most civil law jurisdictions provide a similar remedy for those unlawfully detained, but this is not always called habeas corpus. For example, in some Spanish-speaking nations, the equivalent remedy for unlawful imprisonment is the amparo de libertad ("protection of freedom"). Habeas corpus has certain limitations. Though a writ of right, it is not a writ of course. It is technically only a procedural remedy; it is a guarantee against any detention that is forbidden by law, but it does not necessarily protect other rights, such as the entitlement to a fair trial. So if an imposition such as internment without trial is permitted by the law, then habeas corpus may not be a useful remedy.
In some countries, the writ has been temporarily or permanently suspended under the pretext of a war or state of emergency, for example by Abraham Lincoln during the American Civil War. The right to petition for a writ of habeas corpus has nonetheless long been celebrated as the most efficient safeguard of the liberty of the subject. The jurist Albert Venn Dicey wrote that the British Habeas Corpus Acts "declare no principle and define no rights, but they are for practical purposes worth a hundred constitutional articles guaranteeing individual liberty". The writ of habeas corpus is one of what are called the "extraordinary", "common law", or "prerogative writs", which were historically issued by the English courts in the name of the monarch to control inferior courts and public authorities within the kingdom. The most common of the other such prerogative writs are quo warranto, prohibito, mandamus, procedendo, and certiorari. The due process for such petitions is not simply civil or criminal, because they incorporate the presumption of non-authority. The official who is the respondent must prove their authority to do or not do something. Failing this, the court must decide for the petitioner, who may be any person, not just an interested party. This differs from a motion in a civil process in which the movant must have standing, and bears the burden of proof. Etymology The phrase is from the Latin habeās, 2nd person singular present subjunctive active of habēre, "to have", "to hold"; and corpus, accusative singular of corpus, "body". In reference to more than one person, the phrase is habeas corpora. Literally, the phrase means "[we command] that you should have the [detainee's] body [brought to court]". The complete phrase habeas corpus [coram nobis] ad subjiciendum means "that you have the person [before us] for the purpose of subjecting (the case to examination)". 
These are words of writs included in a 14th-century Anglo-French document requiring a person to be brought before a court or judge, especially to determine if that person is being legally detained.
Examples
United Kingdom of Great Britain and Ireland
United States of America
Similarly named writs
The full name of the writ is often used to distinguish it from similar ancient writs, also named habeas corpus. These include:
Habeas corpus ad deliberandum et recipiendum: a writ for bringing an accused from a different county into a court in the place where a crime had been committed for purposes of trial, or more literally to return holding the body for purposes of "deliberation and receipt" of a decision ("extradition").
Habeas corpus ad faciendum et recipiendum (also called habeas corpus cum causa): a writ of a superior court to a custodian to return with the body being held by the order of a lower court "with reasons", for the purpose of "receiving" the decision of the superior court and of "doing" what it ordered.
Habeas corpus ad prosequendum: a writ ordering return with a prisoner for the purpose of "prosecuting" him before the court.
Habeas corpus ad respondendum: a writ ordering return to allow the prisoner to "answer" to new proceedings before the court.
Habeas corpus ad testificandum: a writ ordering return with the body of a prisoner for the purposes of "testifying".
Origins in England
Habeas corpus originally stems from the Assize of Clarendon of 1166, a re-issuance of rights during the reign of Henry II of England in the 12th century. The foundations for habeas corpus are "wrongly thought" to have originated in Magna Carta; the writ in fact predates it.
Pursuant to the language of Magna Carta's article 38, a person may not be subjected to any legal proceeding, such as arrest and imprisonment, without sufficient evidence having already been collected to show that there is a prima facie case to answer. This evidence must be collected beforehand, because it must be available for exhibition in a public hearing within hours, or at most days, after arrest, not months or longer as may happen in other jurisdictions that apply Napoleonic-inquisitorial criminal laws, where evidence is commonly sought after a suspect's incarceration. Any charge levelled at the hearing must therefore be based on evidence already collected, and an arrest and incarceration order is not lawful if not supported by sufficient evidence. In contrast with the common-law approach, consider the case of Luciano Ferrari-Bravo v. Italy, in which the European Court of Human Rights ruled that "detention is intended to facilitate … the preliminary investigation". Ferrari-Bravo sought relief after nearly five years of preventive detention, and his application was rejected. The European Court of Human Rights deemed the five-year detention to be "reasonable" under Article 6 of the European Convention on Human Rights, which provides that a prisoner has a right to a public hearing before an impartial tribunal within a "reasonable" time after arrest. After his eventual trial, the evidence against Ferrari-Bravo was deemed insufficient and he was found not guilty. William Blackstone cites the first recorded usage of habeas corpus ad subjiciendum in 1305, during the reign of King Edward I. However, other writs were issued with the same effect as early as the reign of Henry II in the 12th century. Blackstone explained the basis of the writ, saying "[t]he king is at all times entitled to have an account, why the liberty of any of his subjects is restrained, wherever that restraint may be inflicted."
The procedure for issuing a writ of habeas corpus was first codified by the Habeas Corpus Act 1679, following judicial rulings which had restricted the effectiveness of the writ. A previous law (the Habeas Corpus Act 1640) had been passed forty years earlier to overturn a ruling that the command of the King was a sufficient answer to a petition of habeas corpus. The cornerstone purpose of the writ of habeas corpus was to limit the King's Chancery's ability to undermine the surety of law by allowing decisions of the courts of justice to be overturned in favor of the application of equity, a process managed by the Chancellor (a bishop) with the King's authority. The 1679 codification of habeas corpus took place in the context of a sharp confrontation between King Charles II and the Parliament, which was dominated by the then sharply oppositional, nascent Whig Party. The Whig leaders had good reason to fear the King moving against them through the courts (as indeed happened in 1681) and regarded habeas corpus as safeguarding their own persons. The short-lived Parliament which made this enactment came to be known as the Habeas Corpus Parliament, being dissolved by the King immediately afterwards. Then, as now, the writ of habeas corpus was issued by a superior court in the name of the Sovereign, and commanded the addressee (a lower court, sheriff, or private subject) to produce the prisoner before the royal courts of law. A habeas corpus petition could be made by the prisoner him or herself or by a third party on his or her behalf and, as a result of the Habeas Corpus Acts, could be made regardless of whether the court was in session, by presenting the petition to a judge. Since the 18th century the writ has also been used in cases of unlawful detention by private individuals, most famously in Somersett's Case (1772), where the black slave Somersett was ordered to be freed. During that case, these famous words are said to have been uttered: "...
that the air of England was too pure for slavery" (although it was the lawyers in argument who expressly used this phrase – referenced from a much earlier argument heard in The Star Chamber – and not Lord Mansfield himself). During the Seven Years' War and later conflicts, the Writ was used on behalf of soldiers and sailors pressed into military and naval service. The Habeas Corpus Act 1816 introduced some changes and expanded the territoriality of the legislation. The privilege of habeas corpus has been suspended or restricted several times during English history, most recently during the 18th and 19th centuries. Although internment without trial has been authorised by statute since that time, for example during the two World Wars and the Troubles in Northern Ireland, the habeas corpus procedure has in modern times always technically remained available to such internees. However, as habeas corpus is only a procedural device to examine the lawfulness of a prisoner's detention, so long as the detention is in accordance with an Act of Parliament, the petition for habeas corpus is unsuccessful. Since the passage of the Human Rights Act 1998, the courts have been able to declare an Act of Parliament to be incompatible with the European Convention on Human Rights, but such a declaration of incompatibility has no legal effect unless and until it is acted upon by the government. The wording of the writ of habeas corpus implies that the prisoner is brought to the court for the legality of the imprisonment to be examined. However, rather than issuing the writ immediately and waiting for the return of the writ by the custodian, modern practice in England is for the original application to be followed by a hearing with both parties present to decide the legality of the detention, without any writ being issued. If the detention is held to be unlawful, the prisoner can usually then be released or bailed by order of the court without having to be produced before it. 
With the development of modern public law, applications for habeas corpus have been to some extent discouraged, in favour of applications for judicial review. The writ, however, maintains its vigour, and was held by the UK Supreme Court in 2012 to be available in respect of a prisoner captured by British forces in Afghanistan, albeit that the Secretary of State made a valid return to the writ justifying the detention of the claimant. Precedents in medieval Catalonia and Biscay Although the first recorded historical references come from Anglo-Saxon law in the 12th century, and one of the first documents referring to this right is a law of the English Parliament (1679), there are already references in Catalonia from 1428 to an "appeal of people's manifestation" collected in the laws of the Crown of Aragon, and references to the same right in the Law of the Lordship of Biscay (1527). Other jurisdictions Australia The writ of habeas corpus as a procedural remedy is part of Australia's English law inheritance. In 2005, the Australian parliament passed the Australian Anti-Terrorism Act 2005. Some legal experts questioned the constitutionality of the act, due in part to limitations it placed on habeas corpus. Canada Habeas corpus rights are part of the British legal tradition inherited by Canada. The rights exist in the common law but have been enshrined in section 10(c) of the Charter of Rights and Freedoms, which states that "[e]veryone has the right on arrest or detention ... to have the validity of the detention determined by way of habeas corpus and to be released if the detention is not lawful". The test for habeas corpus in Canada was recently laid down by the Supreme Court of Canada in Mission Institution v Khela, as follows: To be successful, an application for habeas corpus must satisfy the following criteria. First, the applicant [i.e., the person seeking habeas corpus review] must establish that he or she has been deprived of liberty. 
Once a deprivation of liberty is proven, the applicant must raise a legitimate ground upon which to question its legality. If the applicant has raised such a ground, the onus shifts to the respondent authorities [i.e., the person or institution detaining the applicant] to show that the deprivation of liberty was lawful. Suspension of the writ in Canadian history occurred famously during the October Crisis, during which the War Measures Act was invoked by the Governor General of Canada on the constitutional advice of Prime Minister Pierre Trudeau, who had received a request from the Quebec Cabinet. The Act was also used to justify German, Slavic, and Ukrainian Canadian internment during the First World War, and the internment of German-Canadians, Italian-Canadians and Japanese-Canadians during the Second World War. The writ was suspended for several years following the Battle of Fort Erie (1866) during the Fenian Rising, though the suspension was only ever applied to suspects in the Thomas D'Arcy McGee assassination. The writ is available where there is no other adequate remedy. However, a superior court always has the discretion to grant the writ even in the face of an alternative remedy (see May v Ferndale Institution). Under the Criminal Code the writ is largely unavailable if a statutory right of appeal exists, whether or not this right has been exercised. France A fundamental human right in the 1789 Declaration of the Rights of Man and of the Citizen drafted by Lafayette in cooperation with Thomas Jefferson, the guarantees against arbitrary detention are enshrined in the French Constitution and regulated by the Penal Code. The safeguards are equivalent to those found under the habeas corpus provisions in Germany, the United States and several Commonwealth countries. The French system of accountability prescribes severe penalties for ministers, police officers and civil and judiciary authorities who either violate or fail to enforce the law. 
France and the United States played a synergistic role in the international team, led by Eleanor Roosevelt, which crafted the Universal Declaration of Human Rights. The French judge and Nobel Peace Laureate René Cassin produced the first draft and argued against arbitrary detentions. René Cassin and the French team subsequently championed the habeas corpus provisions enshrined in the European Convention for the Protection of Human Rights and Fundamental Freedoms. Germany Germany has constitutional guarantees against improper detention and these have been implemented in statutory law in a manner that can be considered as equivalent to writs of habeas corpus. Article 104, paragraph 1 of the Basic Law for the Federal Republic of Germany provides that deprivations of liberty may be imposed only on the basis of a specific enabling statute that also must include procedural rules. Article 104, paragraph 2 requires that any arrested individual be brought before a judge by the end of the day following the day of the arrest. For those detained as criminal suspects, article 104, paragraph 3 specifically requires that the judge must grant a hearing to the suspect in order to rule on the detention. Restrictions on the power of the authorities to arrest and detain individuals also emanate from article 2 paragraph 2 of the Basic Law which guarantees liberty and requires a statutory authorization for any deprivation of liberty. In addition, several other articles of the Basic Law have a bearing on the issue. The most important of these are article 19, which generally requires a statutory basis for any infringements of the fundamental rights guaranteed by the Basic Law while also guaranteeing judicial review; article 20, paragraph 3, which guarantees the rule of law; and article 3 which guarantees equality. 
In particular, a constitutional obligation to grant remedies for improper detention is required by article 19, paragraph 4 of the Basic Law, which provides as follows: "Should any person's right be violated by public authority, he may have recourse to the courts. If no other jurisdiction has been established, recourse shall be to the ordinary courts." India The Indian judiciary, in a catena of cases, has effectively resorted to the writ of habeas corpus to secure release of a person from illegal detention. For example, in October 2009, the Karnataka High Court heard a habeas corpus petition filed by the parents of a girl who married a Muslim boy from Kannur district and was allegedly confined in a madrasa in Malapuram town. Usually, in most other jurisdictions, the writ is directed at police authorities. The extension to non-state authorities has its grounds in two cases: the 1898 Queen's Bench case of Ex Parte Daisy Hopkins, wherein the Proctor of Cambridge University had detained and arrested Hopkins outside his jurisdiction, and Hopkins was released, and that of Somerset v Stewart, in which an African slave whose master had moved to London was freed by action of the writ. The Indian judiciary has dispensed with the traditional doctrine of locus standi, so that if a detained person is not in a position to file a petition, it can be moved on his behalf by any other person. The scope of habeas relief has expanded in recent times by actions of the Indian judiciary. In 1976, the habeas writ was used in the Rajan case, concerning a student victim of torture in local police custody during the nationwide Emergency in India. On 12 March 2014, Subrata Roy's counsel approached the Chief Justice moving a habeas corpus petition. A habeas corpus petition was also filed by the Panthers Party to protest the imprisonment of the social activist Anna Hazare. Ireland In the Republic of Ireland, the writ of habeas corpus is available at common law and under the Habeas Corpus Acts of 1782 and 1816. 
A remedy equivalent to habeas corpus is also guaranteed by Article 40 of the 1937 constitution. The article guarantees that "no citizen shall be deprived of his personal liberty save in accordance with law" and outlines a specific procedure for the High Court to enquire into the lawfulness of any person's detention. It does not mention the Latin term habeas corpus, but includes the English phrase "produce the body". Article 40.4.2° provides that a prisoner, or anyone acting on his behalf, may make a complaint to the High Court (or to any High Court judge) of unlawful detention. The court must then investigate the matter "forthwith" and may order that the defendant bring the prisoner before the court and give reasons for his detention. The court must immediately release the detainee unless it is satisfied that he is being held lawfully. The remedy is available not only to prisoners of the state, but also to persons unlawfully detained by any private party. However, the constitution provides that the procedure is not binding on the Defence Forces during a state of war or armed rebellion. The full text of Article 40.4.2° is as follows: The writ of habeas corpus continued as part of Irish law when the state seceded from the United Kingdom in 1922. A remedy equivalent to habeas corpus was also guaranteed by Article 6 of the Constitution of the Irish Free State, enacted in 1922. That article used similar wording to Article 40.4 of the current constitution, which replaced it in 1937. The relationship between Article 40 and the Habeas Corpus Acts of 1782 and 1816 is ambiguous, and Forde and Leonard write that "The extent if any to which Article 40.4 has replaced these Acts has yet to be determined". In The State (Ahern) v. Cotter (1982) Walsh J. opined that the ancient writ referred to in the Habeas Corpus Acts remains in existence in Irish law as a separate remedy from that provided for in Article 40. 
In 1941, the Article 40 procedure was restricted by the Second Amendment. Prior to the amendment, a prisoner had the constitutional right to apply to any High Court judge for an enquiry into her detention, and to as many High Court judges as she wished. If the prisoner successfully challenged her detention before the High Court she was entitled to immediate, unconditional release. The Second Amendment provided that a prisoner has only the right to apply to a single judge, and, once a writ has been issued, the President of the High Court has authority to choose the judge or panel of three judges who will decide the case. If the High Court finds that the prisoner's detention is unlawful due to the unconstitutionality of a law, the judge must refer the matter to the Supreme Court, and until the Supreme Court's decision is rendered the prisoner may be released only on bail. The power of the state to detain persons prior to trial was extended by the Sixteenth Amendment, in 1996. In 1965, the Supreme Court ruled in the O'Callaghan case that the constitution required that an individual charged with a crime could be refused bail only if she was likely to flee or to interfere with witnesses or evidence. Since the Sixteenth Amendment, it has been possible for a court to take into account whether a person has committed serious crimes while on bail in the past. Italy The right to freedom from arbitrary detention is guaranteed by Article 13 of the Constitution of Italy, which states: This implies that within 48 hours every arrest made by a police force must be validated by a court. Furthermore, if subject to a valid detention, an arrested person can ask for a review of the detention by another court, called the Review Court (Tribunale del Riesame, also known as the Freedom Court, Tribunale della Libertà). Macau In Macau, the relevant provision is Article 204 in the Code of Penal Processes, which became law in 1996 under Portuguese rule. 
Habeas corpus cases are heard before the Tribunal of Ultimate Instance. A notable case is Case 3/2008 in Macau. Malaysia In Malaysia, the remedy of habeas corpus is guaranteed by the federal constitution, although not by name. Article 5(2) of the Constitution of Malaysia provides that "Where complaint is made to a High Court or any judge thereof that a person is being unlawfully detained the court shall inquire into the complaint and, unless satisfied that the detention is lawful, shall order him to be produced before the court and release him". As there are several statutes, for example, the Internal Security Act 1960, that still permit detention without trial, the procedure is usually effective in such cases only if it can be shown that there was a procedural error in the way that the detention was ordered. New Zealand In New Zealand, habeas corpus may be invoked against the government or private individuals. In 2006, a child was allegedly kidnapped by his maternal grandfather after a custody dispute. The father began habeas corpus proceedings against the mother, the grandfather, the grandmother, the great grandmother, and another person alleged to have assisted in the kidnap of the child. The mother did not present the child to the court and so was imprisoned for contempt of court. She was released when the grandfather came forward with the child in late January 2007. Pakistan Issuance of a writ is an exercise of an extraordinary jurisdiction of the superior courts in Pakistan. A writ of habeas corpus may be issued by any High Court of a province in Pakistan. Article 199 of the 1973 Constitution of the Islamic Republic of Pakistan specifically provides for the issuance of a writ of habeas corpus, empowering the courts to exercise this prerogative. 
Under Article 199 of the Constitution, "A High Court may, if it is satisfied that no other adequate remedy is provided by law, on the application of any person, make an order that a person in custody within the territorial jurisdiction of the Court be brought before it so that the Court may satisfy itself that he is not being held in custody without a lawful authority or in an unlawful manner". The hallmark of extraordinary constitutional jurisdiction is to keep the various functionaries of the State within the ambit of their authority. Once a High Court has assumed jurisdiction to adjudicate the matter before it, the justiciability of the issue raised before it is beyond question. The Supreme Court of Pakistan has stated clearly that the use of the words "in an unlawful manner" implies that the court may examine, if a statute has allowed such detention, whether it was a colorable exercise of the power of authority. Thus, the court can examine the mala fides of the action taken. Portugal In Portugal, article 31 of the Constitution guarantees citizens against improper arrest, imprisonment or detention. The full text of Article 31 is as follows: There are also statutory provisions, most notably the Code of Criminal Procedure, articles 220 and 222, that stipulate the grounds on which a judge may grant habeas corpus. The Philippines In the Bill of Rights of the Philippine constitution, habeas corpus is guaranteed in terms almost identical to those used in the U.S. Constitution. Article 3, Section 15 of the Constitution of the Philippines states that "The privilege of the writ of habeas corpus shall not be suspended except in cases of invasion or rebellion when the public safety requires it". In 1971, after the Plaza Miranda bombing, the Marcos administration, under Ferdinand Marcos, suspended habeas corpus in an effort to stifle the oncoming insurgency, having blamed the Filipino Communist Party for the events of August 21. 
Many considered this to be a prelude to martial law. After widespread protests, however, the Marcos administration decided to reintroduce the writ. The writ was again suspended when Marcos declared martial law in 1972. In December 2009, habeas corpus was suspended in Maguindanao as President Gloria Macapagal Arroyo placed the province under martial law. This occurred in response to the Maguindanao massacre. In 2016, President Rodrigo Duterte said he was planning to suspend habeas corpus. At 10 pm on 23 May 2017 Philippine time, Duterte declared martial law in the whole island of Mindanao, including Sulu and Tawi-Tawi, for a period of 60 days due to the series of attacks mounted by the Maute group, an ISIS-linked terrorist organization. The declaration suspended the writ. Scotland The Parliament of Scotland passed a law to have the same effect as habeas corpus in the 18th century. This is now known as the Criminal Procedure Act 1701 c.6. It was originally called "the Act for preventing wrongful imprisonment and against undue delays in trials". It is still in force, although certain parts have been repealed. Spain The present Constitution of Spain states that "A habeas corpus procedure shall be provided for by law to ensure the immediate handing over to the judicial authorities of any person illegally arrested". The statute which regulates the procedure is the Law of Habeas Corpus of 24 May 1984, which provides that a person imprisoned may, on her or his own or through a third person, allege that she or he is imprisoned unlawfully and request to appear before a judge. The request must specify the grounds on which the detention is considered to be unlawful, which can be, for example, that the custodian holding the prisoner does not have the legal authority, that the prisoner's constitutional rights have been violated, or that he has been subjected to mistreatment. 
The judge may then request additional information if needed, and may issue a habeas corpus order, at which point the custodian has 24 hours to bring the prisoner before the judge. Historically, many of the territories of Spain had remedies equivalent to the habeas corpus, such as the privilege of manifestación in the Crown of Aragon or the right of the Tree in Biscay. United States The United States inherited habeas corpus from the English common law. In England, the writ was issued in the name of the monarch. When the original thirteen American colonies declared independence, and became a republic based on popular sovereignty, any person, in the name of the people, acquired authority to initiate such writs. The U.S. Constitution specifically includes the habeas procedure in the Suspension Clause (Clause 2), located in Article One, Section 9. This states that "The privilege of the writ of habeas corpus shall not be suspended, unless when in cases of rebellion or invasion the public safety may require it". The writ of habeas corpus ad subjiciendum is a civil, not criminal, ex parte proceeding in which a court inquires as to the legitimacy of a prisoner's custody. Typically, habeas corpus proceedings are to determine whether the court that imposed sentence on the defendant had jurisdiction and authority to do so, or whether the defendant's sentence has expired. Habeas corpus is also used as a legal avenue to challenge other types of custody such as pretrial detention or detention by the United States Bureau of Immigration and Customs Enforcement pursuant to a deportation proceeding. Presidents Abraham Lincoln and Ulysses Grant suspended habeas corpus during the Civil War and Reconstruction for some places or types of cases. During World War II, President Franklin D. Roosevelt suspended habeas corpus. Following the September 11 attacks, President George W. 
Bush attempted to place Guantanamo Bay detainees outside of the jurisdiction of habeas corpus, but the Supreme Court of the United States overturned this action in Boumediene v. Bush. Equivalent remedies Biscay In 1526, the Fuero Nuevo of the Señorío de Vizcaya (New Charter of the Lordship of Biscay) established a form of habeas corpus in the territory of the Señorío de Vizcaya, nowadays part of Spain. This revised version of the Fuero Viejo (Old Charter) of 1451 codified the medieval custom whereby no person could be arbitrarily detained without being summoned first to the Oak of Gernika, an ancestral oak tree located in the outskirts of Gernika under which all laws of the Lordship of Biscay were passed. The New Charter formalised that no one could be detained without a court order (Law 26 of Chapter 9) nor due to debts (Law 3 of Chapter 16). It also established due process and a form of habeas corpus: no one could be arrested without previously having been summoned to the Oak of Gernika and given 30 days to answer the said summons. Upon appearing under the Tree, they had to be provided with the accusations and all evidence held against them so that they could defend themselves (Law 7 of Chapter 9). No one could be sent to prison or deprived of their freedom until being formally tried, and no one could be accused of a different crime until their current court trial was over (Law 5 of Chapter 5). Those fearing they were being arrested illegally could appeal to the Regimiento General to have their rights upheld. The Regimiento (the executive arm of the Juntas Generales of Biscay) would demand that the prisoner be handed over to them, and thereafter the prisoner would be released and placed under the protection of the Regimiento while awaiting trial. Crown of Aragon The Crown of Aragon also had a remedy equivalent to habeas corpus called the manifestación de personas (literally, demonstration of persons). 
According to the right of manifestación, the Justicia de Aragon (lit. Justice of Aragon, an Aragonese judiciary figure similar to an ombudsman, but with far-reaching executive powers) could require a judge, a court of justice, or any other official to hand over to the Justicia (i.e., to "manifest" to the Justicia) anyone being prosecuted, so as to guarantee that this person's rights were upheld and that no violence would befall this person prior to their being sentenced. Furthermore, the Justicia retained the right to examine the judgement passed, and decide whether it satisfied the conditions of a fair trial. If the Justicia was not satisfied, he could refuse to hand the accused back to the authorities. The right of manifestación thus acted like a habeas corpus: knowing that an appeal to the Justicia would immediately follow any unlawful detention, such detentions were effectively prevented. Equally, torture (which had been banned in Aragon since 1325) could not take place. In some cases, people exerting their right of manifestación were kept under the Justicia's watch in manifestación prisons (famous for their mild and easy conditions) or under house arrest. More generally, however, the person was released from confinement and placed under the Justicia's protection, awaiting trial. The Justicia granted the right of manifestación by default, but only really had to act in extreme cases, as famously happened in 1590 when Antonio Pérez, the disgraced secretary to Philip II of Spain, fled from Castile to Aragon and used his Aragonese ancestry to appeal to the Justicia for the right of manifestación, thereby preventing his arrest at the King's behest. The right of manifestación was codified in 1325 in the Declaratio Privilegii generalis passed by the Aragonese Corts under King James II of Aragon. 
It had been practised since the inception of the kingdom of Aragon in the 11th century, and therefore predates the English habeas corpus itself. Poland In 1430, King Władysław II Jagiełło of Poland granted the Privilege of Jedlnia, which proclaimed, Neminem captivabimus nisi iure victum ("We will not imprison anyone except if convicted by law"). This revolutionary innovation in civil libertarianism gave Polish citizens due process-style rights that did not exist in any other European country for another 250 years. Originally, the Privilege of Jedlnia was restricted to the nobility (the szlachta), but it was extended to cover townsmen in the 1791 Constitution. Importantly, social classifications in the Polish–Lithuanian Commonwealth were not as rigid as in other European countries; townspeople and Jews were sometimes ennobled. The Privilege of Jedlnia provided broader coverage than many subsequently enacted habeas corpus laws, because Poland's nobility constituted an unusually large percentage of the country's total population, the largest proportion in Europe. As a result, by the 16th century, it was protecting the liberty of between five hundred thousand and a million Poles. Roman-Dutch law In South Africa and other countries whose legal systems are based on Roman-Dutch law, the interdictum de homine libero exhibendo is the equivalent of the writ of habeas corpus. In South Africa, it has been entrenched in the Bill of Rights, which provides in section 35(2)(d) that every detained person has the right to challenge the lawfulness of the detention in person before a court and, if the detention is unlawful, to be released. World habeas corpus In the 1950s, American lawyer Luis Kutner began advocating an international writ of habeas corpus to protect individual human rights. In 1952, he filed a petition for a "United Nations Writ of Habeas Corpus" on behalf of William N. Oatis, an American journalist jailed the previous year by the Communist government of Czechoslovakia. 
Alleging that Czechoslovakia had violated Oatis' rights under the United Nations Charter and the Universal Declaration of Human Rights and that the United Nations General Assembly had "inherent power" to fashion remedies for human rights violations, the petition was filed with the United Nations Commission on Human Rights. The Commission forwarded the petition to Czechoslovakia, but no other United Nations action was taken. Oatis was released in 1953. Kutner went on to publish numerous articles and books advocating the creation of an "International Court of Habeas Corpus". International human rights standards Article 3 of the Universal Declaration of Human Rights provides that "everyone has the right to life, liberty and security of person". Article 5 of the European Convention on Human Rights goes further and calls for persons detained to have the right to challenge their detention, providing at article 5.4: 
See also
Arbitrary arrest and detention
Corpus delicti – another Latin legal term using corpus, here meaning the fact of a crime having been committed, not the body of the person being detained nor (as sometimes inaccurately used) the body of the victim
Habeas corpus petitions of Guantanamo Bay detainees
Habeas Corpus (play), by the English writer and playwright Alan Bennett
Habeas Corpus Restoration Act of 2007
Habeas data
Edward Hyde, 1st Earl of Clarendon
Habeas Corpus Parliament
List of legal Latin terms
Military Commissions Act of 2006
Murder conviction without a body
Neminem captivabimus
Presumption of innocence
Philippine habeas corpus cases
Remand
Security of person
Recurso de amparo (writ of amparo)
Subpoena ad testificandum
Subpoena duces tecum
Prince Henry the Navigator
Dom Henrique of Portugal, Duke of Viseu (4 March 1394 – 13 November 1460), better known as Prince Henry the Navigator (), was a central figure in the early days of the Portuguese Empire and in the 15th-century European maritime discoveries and maritime expansion. Through his administrative direction, he is regarded as the main initiator of what would be known as the Age of Discovery. Henry was the fourth child of the Portuguese King John I, who founded the House of Aviz. After procuring the new caravel ship, Henry was responsible for the early development of Portuguese exploration and maritime trade with other continents through the systematic exploration of Western Africa, the islands of the Atlantic Ocean, and the search for new routes. He encouraged his father to conquer Ceuta (1415), the Muslim port on the North African coast across the Straits of Gibraltar from the Iberian Peninsula. He learned of the opportunities offered by the Saharan trade routes that terminated there, and became fascinated with Africa in general; he was most intrigued by the Christian legend of Prester John and the expansion of Portuguese trade. He is regarded as the patron of Portuguese exploration. Life Henry was the third surviving son of King John I and his wife Philippa, sister of King Henry IV of England. He was baptized in Porto, and may have been born there, probably when the royal couple was living in the city's old mint, now called Casa do Infante (Prince's House), or in the region nearby. Another possibility is that he was born at the Monastery of Leça do Balio, in Leça da Palmeira, during the same period of the royal couple's residence in the city of Porto. Henry was 21 when he and his father and brothers captured the Moorish port of Ceuta in northern Morocco. Ceuta had long been a base for Barbary pirates who raided the Portuguese coast, depopulating villages by capturing their inhabitants to be sold in the African slave trade. 
Following this success, Henry began to explore the coast of Africa, most of which was unknown to Europeans. His objectives included finding the source of the West African gold trade and the legendary Christian kingdom of Prester John, and stopping the pirate attacks on the Portuguese coast. At that time, the cargo ships of the Mediterranean were too slow and heavy to undertake such voyages. Under Henry's direction, a new and much lighter ship was developed, the caravel, which could sail further and faster. Above all, it was highly maneuverable and could sail "into the wind", making it largely independent of the prevailing winds. The caravel used the lateen sail, the prevailing rig in Christian Mediterranean navigation since late antiquity. With this ship, Portuguese mariners freely explored uncharted waters around the Atlantic, from rivers and shallow waters to transocean voyages. In 1419, Henry's father appointed him governor of the province of the Algarve. Resources and income On 25 May 1420, Henry gained appointment as the Grand Master of the Military Order of Christ, the Portuguese successor to the Knights Templar, which had its headquarters at Tomar in central Portugal. Henry held this position for the remainder of his life, and the Order was an important source of funds for Henry's ambitious plans, especially his persistent attempts to conquer the Canary Islands, which the Portuguese had claimed to have discovered before the year 1346. In 1425, his second brother the Infante Peter, Duke of Coimbra, made a diplomatic tour of Europe, with an additional charge from Henry to seek out geographic material. Peter returned with a current world map from Venice. In 1431, Henry donated houses for the Estudo Geral to teach all the sciences—grammar, logic, rhetoric, arithmetic, music, and astronomy—in what would later become the University of Lisbon. 
For other subjects like medicine or philosophy, he ordered that each room should be decorated according to the subject taught. Henry also had other resources. When John I died in 1433, Henry's eldest brother Edward of Portugal became king. He granted Henry all profits from trading within the areas he discovered as well as the sole right to authorize expeditions beyond Cape Bojador. Henry also held a monopoly on tuna fishing in the Algarve. When Edward died eight years later, Henry supported his brother Peter, Duke of Coimbra for the regency during the minority of Edward's son Afonso V, and in return received a confirmation of this levy. Henry functioned as a primary organizer of the disastrous expedition to Tangier in 1437 against Çala Ben Çala, which ended in Henry's younger brother Ferdinand being given as hostage to guarantee Portuguese promises in the peace agreement. The Portuguese Cortes refused to return Ceuta as ransom for Ferdinand, who remained in captivity until his death six years later. Prince Regent Peter supported Portuguese maritime expansion in the Atlantic Ocean and Africa, and Henry promoted the colonization of the Azores during Peter's regency (1439–1448). For most of the latter part of his life, Henry concentrated on his maritime activities and court politics. Vila do Infante and Portuguese exploration According to João de Barros, in Algarve, Prince Henry the Navigator repopulated a village that he called Terçanabal (from terça nabal or tercena nabal). This village was situated in a strategic position for his maritime enterprises and was later called Vila do Infante ("Estate or Town of the Prince"). It is traditionally suggested that Henry gathered at his villa on the Sagres peninsula a school of navigators and map-makers. However modern historians hold this to be a misconception. 
He did employ some cartographers to chart the coast of Mauritania after the voyages he sent there, but there was no organized center of navigation science or observatory in the modern sense of the term. Referring to Sagres, sixteenth-century Portuguese mathematician and cosmographer Pedro Nunes remarked, "from it our sailors went out well taught and provided with instruments and rules which all map makers and navigators should know." The view that Henry's court rapidly grew into the technological base for exploration, with a naval arsenal and an observatory, etc., although repeated in popular culture, has never been established. Henry did possess geographical curiosity, and employed cartographers. Jehuda Cresques, a noted cartographer, has been said to have accepted an invitation to come to Portugal to make maps for the infante. Prestage argues that Cresques's presence at the Prince's court "probably accounts for the legend of the School of Sagres, which is now discredited." The first contacts with the African slave market were made by expeditions to ransom Portuguese subjects enslaved by pirate attacks on Portuguese ships or villages. Henry's explorations Henry sponsored voyages, collecting a 20% tax (o quinto) on profits, the usual practice in the Iberian states at the time. The nearby port of Lagos provided a convenient home port for these expeditions. The voyages were made in very small ships, mostly the caravel, a light and maneuverable vessel equipped with lateen sails. Most of the voyages sent out by Henry consisted of one or two ships that navigated by following the coast, stopping at night to tie up along some shore.
During Prince Henry's time and after, the Portuguese navigators discovered and perfected the North Atlantic volta do mar (the 'turn of the sea' or 'return from the sea'): the dependable pattern of trade winds blowing largely from the east near the equator and the returning westerlies in the mid-Atlantic. This was a major step in the history of navigation, when an understanding of oceanic wind patterns was crucial to Atlantic navigation, from Africa and the open ocean to Europe, and enabled the main route between the New World and Europe in the North Atlantic in future voyages of discovery. Although the lateen sail allowed sailing upwind to some extent, it was often worth taking a substantially longer course in order to have a faster and calmer following wind for most of a journey. Portuguese mariners who sailed south and southwest towards the Canary Islands and West Africa would afterwards sail far to the northwest—that is, away from continental Portugal, and seemingly in the wrong direction—before turning northeast near the Azores islands and finally east to Europe in order to have largely following winds for their full journey. Christopher Columbus used this technique on his transatlantic voyages. Madeira The first explorations followed not long after the capture of Ceuta in 1415. Henry was interested in locating the source of the caravans that brought gold to the city. During the reign of his father, John I, João Gonçalves Zarco and Tristão Vaz Teixeira were sent to explore along the African coast. Zarco, a knight in service to Prince Henry, had commanded the caravels guarding the coast of Algarve from the incursions of the Moors. He had also been at Ceuta. In 1418, Zarco and Teixeira were blown off-course by a storm while making the volta do mar westward swing to return to Portugal. They found shelter at an island they named Porto Santo. Henry directed that Porto Santo be colonized.
The move to claim the Madeiran islands was probably a response to Castile's efforts to claim the Canary Islands. In 1420, settlers then moved to the nearby island of Madeira. The Azores A chart drawn by the Catalan cartographer, Gabriel de Vallseca of Mallorca, has been interpreted to indicate that the Azores were first discovered by Diogo de Silves in 1427. In 1431, Gonçalo Velho was dispatched with orders to determine the location of "islands" first identified by de Silves. Velho apparently got as far as the Formigas, in the eastern archipelago, before having to return to Sagres, probably due to bad weather. By this time the Portuguese navigators had also reached the Sargasso Sea (western North Atlantic region), naming it after the Sargassum seaweed growing there (sargaço / sargasso in Portuguese). West African coast Until Henry's time, Cape Bojador remained the most southerly point known to Europeans on the desert coast of Africa. Superstitious seafarers held that beyond the cape lay sea monsters and the edge of the world. In 1434, Gil Eanes, the commander of one of Henry's expeditions, became the first European known to pass Cape Bojador. Using the new ship type, the expeditions then pushed onwards. Nuno Tristão and Antão Gonçalves reached Cape Blanco in 1441. The Portuguese sighted the Bay of Arguin in 1443 and built an important slave fort on the island of Arguin around the year 1448. Dinis Dias soon came across the Senegal River and rounded the peninsula of Cap-Vert in 1444. By this stage the explorers had passed the southern boundary of the desert, and from then on Henry had one of his wishes fulfilled: the Portuguese had circumvented the Muslim land-based trade routes across the western Sahara Desert, and slaves and gold began arriving in Portugal. This rerouting of trade devastated Algiers and Tunis, but made Portugal rich. By 1452, the influx of gold permitted the minting of Portugal's first gold cruzado coins. 
A cruzado was equal to 400 reis at the time. From 1444 to 1446, as many as forty vessels sailed from Lagos on Henry's behalf, and the first private mercantile expeditions began. Alvise Cadamosto explored the Atlantic coast of Africa and discovered several islands of the Cape Verde archipelago between 1455 and 1456. On his first voyage, which started on 22 March 1455, he visited the Madeira Islands and the Canary Islands. On the second voyage, in 1456, Cadamosto became the first European to reach the Cape Verde Islands. António Noli later claimed the credit. By 1462, the Portuguese had explored the coast of Africa as far as present-day Sierra Leone. Twenty-eight years later, Bartolomeu Dias proved that Africa could be circumnavigated when he reached the southern tip of the continent, now known as the Cape of Good Hope. In 1498, Vasco da Gama became the first European sailor to reach India by sea. Origin of the "Navigator" nickname No one used the nickname "Henry the Navigator" to refer to Prince Henry during his lifetime or in the following three centuries. The term was coined by two nineteenth-century German historians, Heinrich Schaefer and Gustave de Veer, and was later popularized by two British authors who included it in the titles of their biographies of the prince: Henry Major in 1868 and Raymond Beazley in 1895. In Portuguese, even in modern times, it is uncommon to call him by this epithet; the preferred use is "Infante D. Henrique". Unlike his brothers, Prince Henry was not praised for his intellectual gifts by his contemporaries. It was only later chroniclers such as João de Barros and Damião de Góis who attributed to him a scholarly character and an interest in cosmography. The myth of the "Sagres school" allegedly founded by Prince Henry was created in the 17th century, mainly by Samuel Purchas and Antoine Prévost.
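The two monetary figures given in this section, the one-fifth quinto levy on voyage profits and the 400-reis value of a cruzado, can be combined in a small worked example. The sketch below is purely illustrative: the function names and the sample voyage profit are invented for demonstration, not historical data; only the 20% rate and the 400-reis exchange value come from the text.

```python
# Illustrative sketch only. The one-fifth "quinto" levy and the
# 400-reis cruzado are the figures stated in the text; the voyage
# profit below is a hypothetical example, not a historical datum.

REIS_PER_CRUZADO = 400  # "A cruzado was equal to 400 reis at the time"

def quinto(profit_cruzados: float) -> float:
    """Henry's 20% (one-fifth) share of a voyage's profit, in cruzados."""
    return profit_cruzados / 5

def to_reis(cruzados: float) -> int:
    """Convert cruzados to reis at the stated rate."""
    return int(cruzados * REIS_PER_CRUZADO)

profit = 150.0  # hypothetical profit of one voyage, in cruzados
share = quinto(profit)
print(share)           # 30.0 cruzados owed as the quinto
print(to_reis(share))  # 12000 reis
```

On a hypothetical 150-cruzado profit, the levy would thus be 30 cruzados, or 12,000 reis.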
In nineteenth-century Portugal, the idealized vision of Prince Henry as a putative pioneer of exploration and science reached its apogee. Travels in Brazil, in the Years 1817–1820: Undertaken by Command of His Majesty the King of Bavaria by Dr. J.B. Von Spix and Dr. C.F.P. Von Martius, published 1824, refers to the introduction of sugar cane to Brazil by "the Infant Don Henrique Navegador". Fiction Arkan Simaan, L'Écuyer d'Henri le Navigateur, Éditions l'Harmattan, Paris. Historical novel based on Zurara's chronicles, written in French. See also Prince Henry the Navigator Park Hermitage of Our Lady of Guadalupe Notes References Sources Ariganello, Lisa. Henry the Navigator: Prince of Portuguese Exploration (2007); for elementary schools. Bradford, Ernle. A Wind from the North: The Life of Henry the Navigator (1960). Elbl, Ivana. "Man of His Time (and Peers): A New Look at Henry the Navigator." Luso-Brazilian Review 28.2 (1991): 73–89.
https://en.wikipedia.org/wiki/Human%20cloning
Human cloning
Human cloning is the creation of a genetically identical copy (or clone) of a human. The term is generally used to refer to artificial human cloning, which is the reproduction of human cells and tissue. It does not refer to the natural conception and delivery of identical twins. The possibility of human cloning has raised controversies. These ethical concerns have prompted several nations to pass laws regarding human cloning. Two commonly discussed types of human cloning are therapeutic cloning and reproductive cloning. Therapeutic cloning would involve cloning cells from a human for use in medicine and transplants. It is an active area of research, but is not in medical practice anywhere in the world, as of 2022. Two common methods of therapeutic cloning that are being researched are somatic-cell nuclear transfer and (more recently) pluripotent stem cell induction. Reproductive cloning would involve making an entire cloned human, instead of just specific cells or tissues. History Although the possibility of cloning humans had been the subject of speculation for much of the 20th century, scientists and policymakers began to take the prospect seriously in 1969. J. B. S. Haldane was the first to introduce the idea of human cloning, for which he used the terms "clone" and "cloning", which had been used in agriculture since the early 20th century. In his speech on "Biological Possibilities for the Human Species of the Next Ten Thousand Years" at the Ciba Foundation Symposium on Man and his Future in 1963, he said: Nobel Prize-winning geneticist Joshua Lederberg advocated cloning and genetic engineering in an article in The American Naturalist in 1966 and again, the following year, in The Washington Post. He sparked a debate with conservative bioethicist Leon Kass, who wrote at the time that "the programmed reproduction of man will, in fact, dehumanize him." Another Nobel Laureate, James D. 
Watson, publicized the potential and the perils of cloning in his Atlantic Monthly essay, "Moving Toward the Clonal Man", in 1971. With the cloning of a sheep known as Dolly in 1996 by somatic cell nuclear transfer (SCNT), the idea of human cloning became a hot debate topic. Many nations outlawed it, while a few scientists promised to make a clone within the next few years. The first hybrid human clone was created in November 1998, by Advanced Cell Technology. It was created using SCNT; a nucleus was taken from a man's leg cell and inserted into a cow's egg from which the nucleus had been removed, and the hybrid cell was cultured and developed into an embryo. The embryo was destroyed after 12 days. In 2004 and 2005, Hwang Woo-suk, a professor at Seoul National University, published two separate articles in the journal Science claiming to have successfully harvested pluripotent, embryonic stem cells from a cloned human blastocyst using SCNT techniques. Hwang claimed to have created eleven different patient-specific stem cell lines. This would have been the first major breakthrough in human cloning. However, in 2006 Science retracted both of his articles on clear evidence that much of his data from the experiments was fabricated. In January 2008, Dr. Andrew French and Samuel Wood of the biotechnology company Stemagen announced that they successfully created the first five mature human embryos using SCNT. In this case, each embryo was created by taking a nucleus from a skin cell (donated by Wood and a colleague) and inserting it into a human egg from which the nucleus had been removed. The embryos were developed only to the blastocyst stage, at which point they were studied in processes that destroyed them. Members of the lab said that their next set of experiments would aim to generate embryonic stem cell lines; these are the "holy grail" that would be useful for therapeutic or reproductive cloning. 
In 2011, scientists at the New York Stem Cell Foundation announced that they had succeeded in generating embryonic stem cell lines, but their process involved leaving the oocyte's nucleus in place, resulting in triploid cells, which would not be useful for cloning. In 2013, a group of scientists led by Shoukhrat Mitalipov published the first report of embryonic stem cells created using SCNT. In this experiment, the researchers developed a protocol for using SCNT in human cells, which differs slightly from the one used in other organisms. Four embryonic stem cell lines from human fetal somatic cells were derived from those blastocysts. All four lines were derived using oocytes from the same donor, ensuring that all mitochondrial DNA inherited was identical. A year later, a team led by Robert Lanza at Advanced Cell Technology reported that they had replicated Mitalipov's results and further demonstrated the effectiveness by cloning adult cells using SCNT. In 2018, the first successful cloning of primates using SCNT was reported with the birth of two live female clones, crab-eating macaques named Zhong Zhong and Hua Hua. Methods Somatic cell nuclear transfer (SCNT) In somatic cell nuclear transfer ("SCNT"), the nucleus of a somatic cell is taken from a donor and transplanted into a host egg cell, which had its own genetic material removed previously, making it an enucleated egg. After the donor somatic cell genetic material is transferred into the host oocyte with a micropipette, the somatic cell genetic material is fused with the egg using an electric current. Once the two cells have fused, the new cell can be permitted to grow in a surrogate or artificially. This is the process that was used to successfully clone Dolly the sheep (see section on History in this article). 
The technique, now refined, has shown that it is possible to replicate cells and reestablish pluripotency: "the potential of an embryonic cell to grow into any one of the numerous different types of mature body cells that make up a complete organism". Induced pluripotent stem cells (iPSCs) Creating induced pluripotent stem cells ("iPSCs") is a long and inefficient process. Pluripotency refers to a stem cell that has the potential to differentiate into any of the three germ layers: endoderm (interior stomach lining, gastrointestinal tract, the lungs), mesoderm (muscle, bone, blood, urogenital), or ectoderm (epidermal tissues and nervous tissue). A specific set of genes, often called "reprogramming factors", are introduced into a specific adult cell type. These factors send signals in the mature cell that cause the cell to become a pluripotent stem cell. This process is heavily studied, and new techniques to improve the induction process are discovered frequently. Depending on the method used, reprogramming of adult cells into iPSCs for implantation could have severe limitations in humans. If a virus is used as a reprogramming factor for the cell, cancer-causing genes called oncogenes may be activated. These cells would appear as rapidly dividing cancer cells that do not respond to the body's natural cell signaling process. However, in 2008 scientists discovered a technique that could remove the presence of these oncogenes after pluripotency induction, thereby increasing the potential use of iPSCs in humans. Comparing SCNT to reprogramming Both the processes of SCNT and iPSCs have benefits and deficiencies. Historically, reprogramming methods were better studied than SCNT-derived embryonic stem cells (ESCs). However, more recent studies have put more emphasis on developing new procedures for SCNT-ESCs. The major advantage of SCNT over iPSCs at this time is the speed with which cells can be produced.
iPSC derivation takes several months, while SCNT takes a much shorter time, which could be important for medical applications. New studies are working to improve the iPSC process in terms of both speed and efficiency with the discovery of new reprogramming factors in oocytes. Another advantage SCNT could have over iPSCs is its potential to treat mitochondrial disease, as it utilizes a donor oocyte. No other advantages are known at this time in using stem cells derived from one method over stem cells derived from the other. Uses, actual and potential Work on cloning techniques has advanced our basic understanding of developmental biology in humans. Observing human pluripotent stem cells grown in culture provides great insight into human embryo development, which otherwise cannot be seen. Scientists are now able to better define steps of early human development. Studying signal transduction along with genetic manipulation within the early human embryo has the potential to provide answers to many developmental diseases and defects. Many human-specific signaling pathways have been discovered by studying human embryonic stem cells. Studying developmental pathways in humans has given developmental biologists more evidence toward the hypothesis that developmental pathways are conserved throughout species. iPSCs and cells created by SCNT are useful for research into the causes of disease, and as model systems used in drug discovery. Cells produced with SCNT or iPSCs could eventually be used in stem cell therapy, or to create organs to be used in transplantation, known as regenerative medicine. Stem cell therapy is the use of stem cells to treat or prevent a disease or condition. Bone marrow transplantation is a widely used form of stem cell therapy. No other forms of stem cell therapy are in clinical use at this time. Research is underway to potentially use stem cell therapy to treat heart disease, diabetes, and spinal cord injuries.
Regenerative medicine is not in clinical practice, but is heavily researched for its potential uses. This type of medicine would allow for autologous transplantation, thus removing the risk of organ transplant rejection by the recipient. For instance, a person with liver disease could potentially have a new liver grown using their same genetic material and transplanted to remove the damaged liver. In current research, human pluripotent stem cells have been promised as a reliable source for generating human neurons, showing the potential for regenerative medicine in brain and neural injuries. Ethical implications In bioethics, the ethics of cloning refers to a variety of ethical positions regarding the practice and possibilities of cloning, especially human cloning. While many of these views are religious in origin, for instance relating to Christian views of procreation and personhood, the questions raised by cloning engage secular perspectives as well. Advocates support development of therapeutic cloning in order to generate tissues and whole organs to treat patients who otherwise cannot obtain transplants, to avoid the need for immunosuppressive drugs, and to stave off the effects of aging. Advocates for reproductive cloning believe that parents who cannot otherwise procreate should have access to the technology. Opposition to therapeutic cloning mainly centers around the status of embryonic stem cells, which has connections with the abortion debate. Some opponents of reproductive cloning have concerns that the technology is not yet developed enough to be safe – for example, the position of the American Association for the Advancement of Science – while others emphasize that reproductive cloning could be prone to abuse (leading to the generation of humans whose organs and tissues would be harvested), and have concerns about how cloned individuals could integrate with families and with society at large. Members of religious groups are divided.
Some Christian theologians perceive the technology as usurping God's role in creation and, to the extent embryos are used, destroying a human life; others see no inconsistency between Christian tenets and cloning's positive and potentially life-saving benefits. Current law In 2018 it was reported that about 70 countries had banned human cloning. In popular culture Science fiction has made frequent use of cloning, most commonly and specifically human cloning, because it raises controversial questions of identity. Humorous fiction, such as Multiplicity (1996) and the Maxwell Smart feature The Nude Bomb (1980), has featured human cloning. A recurring sub-theme of cloning fiction is the use of clones as a supply of organs for transplantation. Robin Cook's 1997 novel Chromosome 6 and Michael Bay's The Island are examples of this; Chromosome 6 also features genetic manipulation and xenotransplantation. The series Orphan Black follows human clones' stories and experiences as they deal with issues and react to being the property of a chain of scientific institutions. In the 2019 horror film Us, the entire population of the United States is secretly cloned. Years later, these clones (known as The Tethered) reveal themselves to the world by carrying out a mass genocide of their counterparts. See also Homunculus Notes References Further reading Araujo, Robert John, "The UN Declaration on Human Cloning: a survey and assessment of the debate," 7 The National Catholic Bioethics Quarterly 129–149 (2007). Seyyed Hassan Eslami Ardakani, Human Cloning in Catholic and Islamic Perspectives, University of Religions and Denominations, 2007. Oregon Health & Science University. "Human skin cells converted into embryonic stem cells: First time human stem cells have been produced via nuclear transfer." ScienceDaily, 15 May 2013. External links "Variations and voids: the regulation of human cloning around the world" academic article by S. Pattinson & T.
Caulfield Moving Toward the Clonal Man Should We Really Fear Reproductive Human Cloning United Nations declares law against cloning: General Assembly Adopts United Nations Declaration on Human Cloning by Vote of 84-34-37 Cloning Fact Sheet How Human Cloning Will Work
https://en.wikipedia.org/wiki/History%20of%20Asia
History of Asia
The history of Asia can be seen as the collective history of several distinct peripheral coastal regions such as East Asia, South Asia, Southeast Asia and the Middle East linked by the interior mass of the Eurasian steppe. See History of the Middle East and Outline of South Asian history for further details. The coastal periphery was the home to some of the world's earliest known civilizations and religions, with each of the three regions developing early civilizations around fertile river valleys. These valleys were fertile because the soil there was rich and could bear many root crops. The civilizations in Mesopotamia, India, and China shared many similarities and likely exchanged technologies and ideas such as mathematics and the wheel. Other notions such as that of writing likely developed individually in each area. Cities, states, and then empires developed in these lowlands. The steppe region had long been inhabited by mounted nomads, and from the central steppes, they could reach all areas of the Asian continent. The northern part of the continent, covering much of Siberia, was also inaccessible to the steppe nomads due to the dense forests and the tundra. These areas in Siberia were very sparsely populated. The centre and periphery were kept separate by mountains and deserts. The Caucasus, Himalaya, Karakum Desert, and Gobi Desert formed barriers that the steppe horsemen could only cross with difficulty. While technologically and culturally the city dwellers were more advanced, they could do little militarily to defend against the mounted hordes of the steppe. However, the lowlands did not have enough open grasslands to support a large horsebound force. Thus the nomads who conquered states in the Middle East were soon forced to adapt to the local societies. The spread of Islam ushered in the Islamic Golden Age and the Timurid Renaissance, which later influenced the age of the Islamic gunpowder empires.
Asia's history features major developments seen in other parts of the world, as well as events that have affected those other regions. These include the trade of the Silk Road, which spread cultures, languages, religions, and diseases along Afro-Eurasian trade routes. Another major advancement was the innovation of gunpowder in medieval China, later developed by the gunpowder empires, mainly the Mughals and Safavids, which led to advanced warfare through the use of guns. Prehistory A report by archaeologist Rakesh Tewari on Lahuradewa, India shows new C14 datings that range between 9000 and 8000 BCE associated with rice, making Lahuradewa the earliest Neolithic site in all of South Asia. The prehistoric Beifudi site near Yixian in Hebei Province, China, contains relics of a culture contemporaneous with the Cishan and Xinglongwa cultures of about 8000–7000 BCE, neolithic cultures east of the Taihang Mountains, filling in an archaeological gap between the two Northern Chinese cultures. The total excavated area is more than 1,200 square meters and the collection of neolithic findings at the site consists of two phases. Around 5500 BCE the Halafian culture appeared in Lebanon, Israel, Syria, Anatolia, and northern Mesopotamia, based upon dryland agriculture. In southern Mesopotamia were the alluvial plains of Sumer and Elam. Since there was little rainfall, irrigation systems were necessary. The Ubaid culture flourished from 5500 BCE. Ancient Bronze Age The Chalcolithic period (or Copper Age) began about 4500 BCE, then the Bronze Age began about 3500 BCE, replacing the Neolithic cultures. The Indus Valley Civilization (IVC) was a Bronze Age civilization (3300–1300 BCE; mature period 2600–1900 BCE) which was centered mostly in the western part of the Indian Subcontinent; it is considered that an early form of Hinduism was practiced during this civilization.
Some of the great cities of this civilization include Harappa and Mohenjo-daro, which had a high level of town planning and arts. The cause of the destruction of these regions around 1700 BCE is debatable, although evidence suggests it was caused by natural disasters (especially flooding). This era marks the Vedic period in India, which lasted from roughly 1500 to 500 BCE. During this period, the Sanskrit language developed and the Vedas were written, epic hymns that told tales of gods and wars. This was the basis for the Vedic religion, which would eventually develop into Hinduism. China and Vietnam were also centres of metalworking. Dating back to the Neolithic Age, the first bronze drums, called the Dong Son drums, have been uncovered in and around the Red River Delta regions of Vietnam and Southern China. These relate to the prehistoric Dong Son Culture of Vietnam. [Image: Song Da bronze drum's surface, Dong Son culture, Vietnam] In Ban Chiang, Thailand (Southeast Asia), bronze artifacts have been discovered dating to 2100 BCE. In Nyaunggan, Burma, bronze tools have been excavated along with ceramics and stone artifacts. Dating is still currently broad (3500–500 BCE). Iron and Axial Age The Iron Age saw the widespread use of iron tools, weaponry, and armor throughout the major civilizations of Asia. Middle East The Achaemenid dynasty of the Persian Empire, founded by Cyrus the Great, ruled an area from Greece and Turkey to the Indus River and Central Asia during the 6th to 4th centuries BCE. Persian politics included a tolerance for other cultures, a highly centralized government, and significant infrastructure developments. Later, in Darius the Great's rule, the territories were integrated, a bureaucracy was developed, nobility were assigned military positions, tax collection was carefully organized, and spies were used to ensure the loyalty of regional officials.
The primary religion of Persia at this time was Zoroastrianism, developed by the philosopher Zoroaster. It introduced an early form of monotheism to the area. The religion banned animal sacrifice and the use of intoxicants in rituals, and introduced the concept of spiritual salvation through personal moral action, an end time, and both general and particular judgment with a heaven or hell. These concepts would heavily influence later emperors and the masses. More importantly, Zoroastrianism would be an important precursor for the Abrahamic religions such as Christianity, Islam, and Judaism. The Persian Empire was successful in establishing peace and stability throughout the Middle East and was a major influence in art, politics (affecting Hellenistic leaders), and religion. Alexander the Great conquered this dynasty in the 4th century BCE, creating the brief Hellenistic period. He was unable to establish stability and after his death, Persia broke into small, weak dynasties including the Seleucid Empire, followed by the Parthian Empire. By the end of the Classical age, Persia had been reconsolidated into the Sassanid Empire, also known as the second Persian Empire. The Roman Empire would later control parts of Western Asia. The Seleucid, Parthian and Sassanid dynasties of Persia dominated Western Asia for centuries. India The Maurya and Gupta empires are called the Golden Age of India and were marked by extensive inventions and discoveries in science, technology, art, religion, and philosophy that crystallized the elements of what is generally known as Indian culture. The religions of Hinduism and Buddhism, which began in the Indian subcontinent, were an important influence on South, East and Southeast Asia. By 600 BCE, India had been divided into 17 regional states that would occasionally feud amongst themselves. In 327 BCE, Alexander the Great came to India with a vision of conquering the whole world.
He crossed northwestern India and created the province of Bactria but could not move further because his army wanted to return to their families. Shortly after, the soldier Chandragupta Maurya began to take control of the Ganges river and soon established the Maurya Empire. The Maurya Empire (Sanskrit: मौर्य राजवंश, Maurya Rājavaṃśa) was a geographically extensive and powerful empire in ancient India, ruled by the Mauryan dynasty from 321 to 185 BCE. It was one of the world's largest empires in its time, stretching to the Himalayas in the north, what is now Assam in the east, probably beyond modern Pakistan in the west, and annexing Balochistan and much of what is now Afghanistan, at its greatest extent. South of the Mauryan empire was Tamilakam, an independent country dominated by three dynasties, the Pandyans, Cholas and Cheras. The government established by Chandragupta was led by an autocratic king, who primarily relied on the military to assert his power. It also applied the use of a bureaucracy and even sponsored a postal service. Chandragupta's grandson, Ashoka, greatly extended the empire by conquering most of modern-day India (save for the southern tip). He eventually converted to Buddhism, though, and began a peaceful life in which he promoted the religion as well as humane methods throughout India. The Maurya Empire would disintegrate soon after Ashoka's death and was conquered by the Kushan invaders from the northwest, establishing the Kushan Empire. Their conversion to Buddhism caused the religion to be associated with foreigners and therefore a decline in its popularity occurred. The Kushan Empire would fall apart by 220 CE, creating more political turmoil in India. Then in 320, the Gupta Empire (Sanskrit: गुप्त राजवंश, Gupta Rājavanśha) was established and covered much of the Indian Subcontinent. Founded by Maharaja Sri-Gupta, the dynasty was the model of a classical civilization.
Gupta kings united the area primarily through negotiation with local leaders and families, as well as strategic intermarriage. Their rule covered less land than the Maurya Empire, but achieved greater stability. In 535, the empire ended when India was overrun by the Hunas. Classical China Zhou Dynasty The Zhou dynasty had existed in China since 1029 BCE and would continue until 258 BCE. The Zhou dynasty had been using a feudal system, giving power to local nobility and relying on their loyalty in order to control its large territory. As a result, the Chinese government at this time tended to be very decentralized and weak, and there was often little the emperor could do to resolve national issues. Nonetheless, the government was able to retain its position with the creation of the Mandate of Heaven, which could establish an emperor as divinely chosen to rule. The Zhou additionally discouraged the human sacrifice of the preceding eras and unified the Chinese language. Finally, the Zhou government encouraged settlers to move into the Yangtze River valley, thus creating the Chinese Middle Kingdom. But by 500 BCE, its political stability began to decline due to repeated nomadic incursions and internal conflict among feuding princes and families. This turmoil was tempered by the many philosophical movements that arose, starting with the life of Confucius. His philosophical writings (called Confucianism) concerning respect for elders and for the state would later be widely adopted in the Han dynasty. Additionally, Laozi's concepts of Taoism, including yin and yang and the innate duality and balance of nature and the universe, became popular throughout this period. Nevertheless, the Zhou dynasty eventually disintegrated as the local nobles began to gain more power, and their conflict devolved into the Warring States period, from 402 to 201 BCE.
Qin Dynasty One leader eventually came out on top: Qin Shi Huang (Shǐ Huángdì), who overthrew the last Zhou emperor and established the Qin dynasty. The Qin dynasty (Chinese: 秦朝; pinyin: Qín Cháo) was the first ruling dynasty of Imperial China, lasting from 221 to 207 BCE. The new emperor abolished the feudal system and directly appointed a bureaucracy that would rely on him for power. Huang's imperial forces crushed any regional resistance, and they furthered the Chinese empire by expanding down to the South China Sea and northern Vietnam. Greater organization brought a uniform tax system, a national census, regulated road building (and cart width), standard measurements, standard coinage, and an official written and spoken language. Further reforms included new irrigation projects, the encouragement of silk manufacturing, and (most famously) the beginning of the construction of the Great Wall of China, designed to keep out the nomadic raiders who had constantly harassed the Chinese people. However, Shi Huang was infamous for his tyranny, forcing laborers to build the Wall, ordering heavy taxes, and severely punishing all who opposed him. He oppressed Confucians and promoted Legalism, the idea that people were inherently evil and that a strong, forceful government was needed to control them. Legalism was infused with realistic, logical views and rejected the pleasures of educated conversation as frivolous. All of this made Shi Huang extremely unpopular with the people. As the Qin began to weaken, various factions began to fight for control of China. Han Dynasty The Han dynasty (simplified Chinese: 汉朝; traditional Chinese: 漢朝; pinyin: Hàn Cháo; 206 BCE – 220 CE) was the second imperial dynasty of China, preceded by the Qin dynasty and succeeded by the Three Kingdoms (220–265 CE). Spanning over four centuries, the period of the Han dynasty is considered a golden age in Chinese history.
One of the Han dynasty's greatest emperors, Emperor Wu of Han, established a peace throughout China comparable to the Pax Romana seen in the Mediterranean a hundred years later. To this day, China's majority ethnic group refers to itself as the "Han people". The Han dynasty was established when two peasants succeeded in rising up against Shi Huang's significantly weaker son and successor. The new Han government retained the centralization and bureaucracy of the Qin, but greatly reduced the repression seen before. They expanded their territory into Korea, Vietnam, and Central Asia, creating an even larger empire than the Qin. The Han developed contacts with the Persian Empire in the Middle East and the Romans through the Silk Road, along which they were able to trade many commodities, primarily silk. Many ancient civilizations were influenced by the Silk Road, which connected China, India, the Middle East and Europe. Han emperors like Wu also promoted Confucianism as the national "religion" (although it is debated by theologians whether it is defined as such or as a philosophy). Shrines devoted to Confucius were built and Confucian philosophy was taught to all scholars who entered the Chinese bureaucracy. The bureaucracy was further improved with the introduction of an examination system that selected scholars of high merit. These bureaucrats were often upper-class people educated in special schools, but their power was often checked by lower-class scholars brought into the bureaucracy through skill alone. The Chinese imperial bureaucracy was very effective and highly respected by all in the realm, and it would last over 2,000 years. The Han government was highly organized and it commanded the military, judicial law (which used a system of courts and strict laws), agricultural production, the economy, and the general lives of its people. The government also promoted intellectual philosophy, scientific research, and detailed historical records.
However, despite all of this impressive stability, central power began to lose control by the turn of the Common Era. As the Han dynasty declined, many factors continued to pummel it into submission until China was left in a state of chaos. By 100 CE, philosophical activity slowed, and corruption ran rampant in the bureaucracy. Local landlords began to take control as the scholars neglected their duties, and this resulted in heavy taxation of the peasantry. Taoists began to gain significant ground and protested the decline. They began to proclaim magical powers and promised to save China with them; the Taoist Yellow Turban Rebellion in 184 (led by rebels in yellow scarves) failed but was able to weaken the government. Invasions by the Huns, combined with epidemics, killed up to half of the population and officially ended the Han dynasty by 220. The ensuing period of chaos lasted three centuries, during which many weak regional rulers and dynasties failed to establish order in China. This period of chaos and attempts at order is commonly known as that of the Six Dynasties. The first part of this was the Three Kingdoms period, which began in 220 and comprised the brief, weak successor "dynasties" that followed the Han. In 265, the Jin dynasty of China was founded, and this soon split into two different empires in control of northwestern and southeastern China. In 420, the conquest and abdication of those two dynasties resulted in the first of the Southern and Northern Dynasties. The Northern and Southern Dynasties succeeded one another until finally, by 557, the Northern Zhou dynasty ruled the north and the Chen dynasty ruled the south. Medieval During this period, the Eastern world empires continued to expand through trade, migration and conquests of neighboring areas. Gunpowder was widely used in China as early as the 11th century, and the Chinese were using movable type printing five hundred years before Gutenberg created his press.
Buddhism, Taoism, and Confucianism were the dominant philosophies of the Far East during the Middle Ages. Marco Polo was not the first Westerner to travel to the Orient and return with amazing stories of this different culture, but his accounts published in the late 13th and early 14th centuries were the first to be widely read throughout Europe. Western Asia (Middle East) The Arabian peninsula and the surrounding Middle East and Near East regions saw dramatic change during the medieval era, caused primarily by the spread of Islam and the establishment of the Arabian empires. In the 5th century, the Middle East was separated into small, weak states; the two most prominent were the Sassanian Empire of the Persians in what is now Iran and Iraq, and the Byzantine Empire in Anatolia (modern-day Turkey). The Byzantines and Sassanians fought with each other continually, a reflection of the rivalry between the Roman Empire and the Persian Empire seen during the previous five hundred years. The fighting weakened both states, leaving the stage open to a new power. Meanwhile, the nomadic Bedouin tribes who dominated the Arabian desert saw a period of tribal stability, greater trade networking, and growing familiarity with the Abrahamic religions and monotheism. While the Byzantine Roman and Sassanid Persian empires were both weakened by the Byzantine–Sasanian War of 602–628, a new power in the form of Islam grew in the Middle East under Muhammad in Medina. In a series of rapid Muslim conquests, the Rashidun army, led by the Caliphs and skilled military commanders such as Khalid ibn al-Walid, swept through most of the Middle East, taking more than half of Byzantine territory in the Arab–Byzantine wars and completely engulfing Persia in the Muslim conquest of Persia. It would be the Arab Caliphates of the Middle Ages that would first unify the entire Middle East as a distinct region and create the dominant ethnic identity that persists today.
These caliphates included the Rashidun Caliphate, Umayyad Caliphate, Abbasid Caliphate, and later the Seljuq Empire. After Muhammad introduced Islam, Middle Eastern culture entered an Islamic Golden Age, inspiring achievements in architecture, the revival of old advances in science and technology, and the formation of a distinct way of life. Muslims saved and spread Greek advances in medicine, algebra, geometry, astronomy, anatomy, and ethics that would later find their way back to Western Europe. The dominance of the Arabs came to a sudden end in the mid-11th century with the arrival of the Seljuq Turks, migrating south from the Turkic homelands in Central Asia. They conquered Persia, Iraq (capturing Baghdad in 1055), Syria, Palestine, and the Hejaz. This was followed by a series of invasions from Christian Western Europe. The fragmentation of the Middle East allowed combined forces, mainly from England, France, and the emerging Holy Roman Empire, to enter the region. In 1099 the knights of the First Crusade captured Jerusalem and founded the Kingdom of Jerusalem, which survived until 1187, when Saladin retook the city. Smaller crusader fiefdoms survived until 1291. In the early 13th century, a new wave of invaders, the armies of the Mongol Empire, swept through the region, sacking Baghdad in the Siege of Baghdad (1258) and advancing as far south as the border of Egypt in what became known as the Mongol conquests. The Mongols eventually retreated in 1335, but the chaos that ensued throughout the empire deposed the Seljuq Turks. In 1401, the region was further plagued by the Turko-Mongol conqueror Timur and his ferocious raids. By then, another group of Turks had arisen as well, the Ottomans. Central Asia Mongol Empire The Mongol Empire conquered a large part of Asia in the 13th century, an area extending from China to Europe. Medieval Asia was the kingdom of the Khans. Never before had any person controlled as much land as Genghis Khan.
He built his power by unifying separate Mongol tribes before expanding his empire south and west. He and his grandson, Kublai Khan, controlled lands in China, Burma, Central Asia, Russia, Iran, the Middle East, and Eastern Europe. Genghis Khan was a khagan who tolerated nearly every religion. South Asia/Indian Subcontinent India The Indian early medieval age, 600 to 1200, is defined by regional kingdoms and cultural diversity. When Harsha of Kannauj, who ruled much of the Indo-Gangetic Plain from 606 to 647, attempted to expand southwards, he was defeated by the Chalukya ruler of the Deccan. When his successor attempted to expand eastwards, he was defeated by the Pala king of Bengal. When the Chalukyas attempted to expand southwards, they were defeated by the Pallavas from farther south, who in turn were opposed by the Pandyas and the Cholas from still farther south. Under the rule of Raja Raja Chola, the Cholas defeated their rivals and rose to become a regional power, expanding northward and defeating the Eastern Chalukya, Kalinga and the Pala. Under Rajendra Chola, the Cholas created the first notable navy of the Indian subcontinent, which extended the influence of the Chola Empire to Southeast Asia. During this time, pastoral peoples whose land had been cleared to make way for the growing agricultural economy were accommodated within caste society, as were new non-traditional ruling classes. The Muslim conquest in the Indian subcontinent mainly took place from the 12th century onwards, though earlier Muslim conquests include the limited inroads into modern Afghanistan and Pakistan and the Umayyad campaigns in India, during the time of the Rajput kingdoms in the 8th century. Major economic and military powers such as the Delhi Sultanate and the Bengal Sultanate were established. The search for their wealth later prompted the voyages of Christopher Columbus.
East Asia China China saw the rise and fall of the Sui, Tang, Song, and Yuan dynasties, and with them improvements in its bureaucracy, the spread of Buddhism, and the advent of Neo-Confucianism. It was an unsurpassed era for Chinese ceramics and painting. Medieval architectural masterpieces such as the Great South Gate at Todaiji in Japan and the Tien-ning Temple in Beijing, China, are some of the surviving constructions from this era. Sui Dynasty A new powerful dynasty began to rise in the 580s, amongst the divided factions of China. This began when an aristocrat named Yang Jian married his daughter into the Northern Zhou dynasty. He proclaimed himself Emperor Wen of Sui and appeased the nomadic military by abandoning the Confucian scholar-gentry. Emperor Wen soon led the conquest of the southern Chen dynasty and united China once more under the Sui dynasty. The emperor lowered taxes and constructed granaries that he used to prevent famine and control the market. Later, Wen's son murdered him for the throne and declared himself Emperor Yang of Sui. Emperor Yang revived the Confucian scholars and the bureaucracy, much to the anger of the aristocrats and nomadic military leaders. Yang became an excessive leader who overused China's resources for personal luxury and pursued exhausting attempts to conquer Goguryeo. His military failures and neglect of the empire forced his own ministers to assassinate him in 618, ending the Sui dynasty. Tang dynasty Fortunately, one of Yang's most respected advisors, Li Yuan, was able to claim the throne quickly, preventing a chaotic collapse. He proclaimed himself Emperor Gaozu and established the Tang dynasty in 618. The Tang saw the expansion of China through conquest to Tibet in the west, Vietnam in the south, and Manchuria in the north. Tang emperors also improved the education of scholars in the Chinese bureaucracy. A Ministry of Rites was established and the examination system was improved to better qualify scholars for their jobs.
In addition, Buddhism became popular in China with two different strains between the peasantry and the elite, the Pure Land and Zen strains, respectively. Greatly supporting the spread of Buddhism was Empress Wu, who additionally claimed an unofficial "Zhou dynasty" and demonstrated China's tolerance of a woman ruler, which was rare at the time. However, Buddhism would also experience some backlash, especially from Confucianists and Taoists. Criticism usually centered on how Buddhism cost the state money, since the government was unable to tax Buddhist monasteries and additionally sent them many grants and gifts. The Tang dynasty began to decline under the rule of Emperor Xuanzong, who began to neglect the economy and military and caused unrest amongst the court officials due to the excessive influence of his concubine, Yang Guifei, and her family. This eventually sparked a revolt in 755. Although the revolt failed, subduing it required involvement with the unruly nomadic tribes outside of China and distributing more power to local leaders, leaving the government and economy in a degraded state. The Tang dynasty officially ended in 907, and various factions led by the aforementioned nomadic tribes and local leaders fought for control of China in the Five Dynasties and Ten Kingdoms period. Liao, Song and Jin dynasties By 960, most of China proper had been reunited under the Song dynasty, although it lost territories in the north and could not defeat one of the nomadic tribes there, the Liao dynasty of the highly sinicized Khitan people. From then on, the Song had to pay tribute to avoid invasion, setting a precedent for other nomadic kingdoms to oppress them. The Song also saw the revival of Confucianism in the form of Neo-Confucianism. This had the effect of putting the Confucian scholars at a higher status than aristocrats or Buddhists, and it also further reduced the status of women.
The infamous practice of foot binding developed in this period as a result. Eventually the Liao dynasty in the north was overthrown by the Jin dynasty of the Manchu-related Jurchen people. The new Jin kingdom invaded northern China, forcing the Song to flee farther south and establish the Southern Song dynasty in 1127. There, cultural life flourished. Yuan Dynasty By 1227, the Mongols had conquered the Western Xia kingdom northwest of China. Soon the Mongols moved against the Jin empire of the Jurchens. Chinese cities were soon besieged by the Mongol hordes, which showed little mercy for those who resisted, and the Southern Song Chinese rapidly lost territory. In 1271 the reigning great khan, Kublai Khan, declared himself Emperor of China and officially established the Yuan dynasty. By 1290, all of China was under the control of the Mongols, marking the first time China had ever been completely conquered by a foreign invader; the new capital was established at Khanbaliq (modern-day Beijing). Kublai Khan segregated Mongol culture from Chinese culture by discouraging interactions between the two peoples, separating living spaces and places of worship, and reserving top administrative positions for Mongols, thus preventing Confucian scholars from continuing the bureaucratic system. Nevertheless, Kublai remained fascinated with Chinese thinking, surrounding himself with Chinese Buddhist, Taoist, and Confucian advisors. Mongol women displayed an independence that contrasted with the continued suppression of Chinese women; they often rode out on hunts or even to war. Kublai's wife, Chabi, was a prime example of this: she advised her husband on several political and diplomatic matters, convincing him that the Chinese were to be respected and well-treated in order to make them easier to rule.
However, this was not enough to affect the position of Chinese women, and the increasingly Neo-Confucian successors of Kublai further repressed Chinese and even Mongol women. The Black Death, which would later ravage Western Europe, had its beginnings in Asia, where it wiped out large populations in China in 1331. Korea Three Kingdoms of Korea The Three Kingdoms of Korea were Goguryeo in the north, Baekje in the southwest, and Silla in the southeast of the Korean peninsula. These three kingdoms acted as a cultural bridge between China and Japan, transmitting Chinese culture to Japan; Prince Shōtoku of Japan was taught by two teachers, one from Baekje and one from Goguryeo. When Japan invaded Silla, Goguryeo helped Silla defeat the invasion. Baekje was the first of the three to reach its heyday, in the 5th century AD. Its capital was at Seoul, and during its short heyday the kingdom established overseas footholds in Liaodong in China and Kyushu in Japan. Goguryeo was the strongest kingdom of all, sometimes styling itself an empire; its heyday came in the 5th century. King Gwanggaeto expanded its territory northward, so that Goguryeo dominated lands from the Korean peninsula into Manchuria. His son, King Jangsu, then expanded southward, occupying Seoul and moving the capital to Pyongyang; under his rule, Goguryeo came to occupy nearly three quarters of the Korean peninsula. Silla's heyday came last: King Jinheung pushed north and occupied Seoul, but this dominance was short-lived. Baekje grew stronger again, attacked Silla, and seized more than 40 of its cities, leaving Silla barely able to survive. China's Sui dynasty then invaded Goguryeo, beginning the Goguryeo–Sui War; Goguryeo won, and the Sui dynasty fell. Afterwards the Tang dynasty invaded Goguryeo again and helped Silla unify the peninsula.
Goguryeo, Baekje, and Japan helped each other against the Tang–Silla alliance, but Baekje and Goguryeo fell. The Tang dynasty then betrayed Silla and invaded the Korean peninsula in an attempt to occupy the whole of it (the Silla–Tang War). Silla advocated the unification of the three Koreas, so the people of fallen Baekje and Goguryeo helped Silla against the Chinese invasion. Eventually Silla defeated China and unified the peninsula, and the war helped unite the Korean people in spirit. North-South States Period Survivors of Goguryeo established Balhae and won a war against the Tang in the late 7th century AD. Balhae was the northern state, and Later Silla was the southern state. Balhae became a strong kingdom, as its predecessor Goguryeo had been, and eventually the Tang emperor recognized Balhae as "a strong country in the East". Balhae traded actively with Japan, China, and Silla, and both Balhae and Later Silla sent many international students to China. Arabian merchants also came to the Korean peninsula, so Korea became known as "Silla" in Western countries. Silla refined the Korean writing system known as Idu, which influenced Japan's katakana. The Liao dynasty invaded Balhae in the early 10th century, and Balhae fell. Later Three Kingdoms of Korea The unified Korean kingdom of Later Silla divided into three kingdoms again because of its corrupt central government: Later Goguryeo (also known as "Taebong"), Later Baekje, and Later Silla. Wang Geon, a general of Later Goguryeo, took the throne and renamed the kingdom Goryeo, after the ancient kingdom of Goguryeo, and Goryeo reunified the peninsula. Goryeo Goryeo reunited the Korean peninsula during the Later Three Kingdoms period and styled itself an empire, though nowadays Goryeo is known as a kingdom. The name "Goryeo" was derived from Goguryeo, and the name "Korea" was derived from Goryeo. Goryeo absorbed people from fallen Balhae.
They also expanded their territory northward by defending against the Liao dynasty and attacking the Jurchen people. Goryeo developed a splendid culture: the Jikji, the oldest extant book printed with movable metal type, came from Korea, and Goryeo ware is one of the kingdom's most famous legacies. Goryeo adopted the Chinese system of government and developed it in its own way; during this period, laws were codified and a civil service system was introduced. Buddhism flourished and spread throughout the peninsula. The Tripitaka Koreana, a set of 81,258 wooden printing blocks, was made in the hope of keeping Korea safe from the Mongol invasions, and it is now recognized by UNESCO. Goryeo won its war against the Liao dynasty, but then the Mongol Empire invaded. Goryeo did not disappear, but it had to submit to the Mongols. After about 80 years, in the 14th century, as the Mongol Yuan dynasty lost power, King Gongmin tried to free Goryeo from Mongol control, even though his own wife was Mongolian. Later in the 14th century, the Ming dynasty demanded that Goryeo submit to China; Goryeo refused and decided to invade China instead. On the march to China, however, the Goryeo general Yi Seong-gye turned back and overthrew Goryeo. In 1392 he established a new dynasty, Joseon, and became Taejo of Joseon, its first king. Japan Asuka period Japan's medieval history began with the Asuka period, from around 600 to 710. The time was characterized by the Taika Reform and imperial centralization, both of which were a direct result of growing Chinese contact and influences. In 603, Prince Shōtoku of the Yamato dynasty began significant political and cultural changes. He issued the Seventeen-article constitution in 604, centralizing power towards the emperor (under the title tenno, or heavenly sovereign) and removing the power to levy taxes from provincial lords. Shōtoku was also a patron of Buddhism and he encouraged the competitive building of temples. Nara period Shōtoku's reforms transitioned Japan to the Nara period (c. 710 to c.
794), with the moving of the Japanese capital to Nara in Honshu. This period saw the culmination of Chinese-style writing, etiquette, and architecture in Japan, along with Confucian ideals to supplement the already present Buddhism. Peasants revered both Confucian scholars and Buddhist monks. However, in the wake of the 735–737 Japanese smallpox epidemic, Buddhism gained the status of state religion and the government ordered the construction of numerous Buddhist temples, monasteries, and statues. The lavish spending, combined with the fact that many aristocrats did not pay taxes, put a heavy burden on the peasantry and caused poverty and famine. Eventually Buddhist influence got out of control, threatening to seize imperial power and prompting Emperor Kanmu to move the capital to Heian-kyō to avoid a Buddhist takeover. This marked the beginning of the Heian period and the end of the Taika reform. Heian period With the Heian period (from 794 to 1185) came a decline of imperial power. Chinese influence also declined, as a result of its association with imperial centralization and the heavenly mandate, which came to be regarded as ineffective. By 838, the Japanese court had discontinued its embassies to China; only traders and Buddhist monks continued to travel there. Buddhism itself came to be considered more Japanese than Chinese and remained popular in Japan. Buddhist monks and monasteries continued their attempts to gather personal power at court, along with aristocrats. One noble family that came to dominate the imperial bureaucracy was the Fujiwara clan. During this time cultural life in the imperial court flourished. There was a focus on beauty and social interaction, and writing and literature were considered refined pursuits. Noblewomen were as cultured as noblemen, dabbling in creative works and politics.
A prime example of both Japanese literature and women's role in high-class culture at this time was The Tale of Genji, written by the lady-in-waiting Murasaki Shikibu. Wooden palaces and shōji sliding doors also became popular amongst the nobility. The loss of imperial power also led to the rise of provincial warrior elites. Small lords began to function independently: they administered laws, supervised public works projects, and collected revenue for themselves instead of for the imperial court. Regional lords also began to build their own armies. These warriors were loyal only to their local lords and not to the emperor, although the imperial government increasingly called on them to protect the capital. The regional warrior class developed into the samurai, who created their own culture, including specialized weapons such as the katana and a code of chivalry, bushido. The imperial government's loss of control in the second half of the Heian period allowed banditry to grow, requiring both feudal lords and Buddhist monasteries to procure warriors for protection. As imperial control over Japan declined, feudal lords also became more independent and seceded from the empire. These feudal states exploited the peasants living in them, reducing the farmers to near-serfdom. Peasants were also rigidly restricted from rising to the samurai class, being physically set apart by dress and weapon restrictions. As a result of their oppression, many peasants turned to Buddhism in hope of reward in the afterlife for upright behavior. With the increase of feudalism, families in the imperial court began to depend on alliances with regional lords. The Fujiwara clan declined from power, replaced by a rivalry between the Taira clan and the Minamoto clan. This rivalry grew into the Genpei War in the early 1180s, which saw the use of both samurai and peasant soldiers. For the samurai, battle was ritualized, and they often easily cut down the poorly trained peasantry.
The Minamoto clan proved successful due to their rural alliances. Once the Taira were destroyed, the Minamoto established a military government called the shogunate (or bakufu), centered in Kamakura. Kamakura period The end of the Genpei War and the establishment of the Kamakura shogunate marked the end of the Heian period and the beginning of the Kamakura period in 1185, solidifying feudal Japan. Southeast Asia Khmers In 802, Jayavarman II consolidated his rule over neighboring peoples and declared himself chakravartin, or "universal ruler". The Khmer Empire effectively dominated all of Mainland Southeast Asia from the early 9th until the 15th century, during which time the Khmers developed sophisticated monumental architecture of exquisite expression and masterful composition at Angkor. Vietnam Early modern The Russian Empire began to expand into Asia from the 17th century, and would eventually take control of all of Siberia and most of Central Asia by the end of the 19th century. The Ottoman Empire controlled Anatolia, the Middle East, North Africa and the Balkans from the 16th century onwards. In the 17th century, the Manchu conquered China and established the Qing dynasty. In the 16th century, the Mughal Empire controlled much of India and initiated a second golden age for India. China was the largest economy in the world for much of this period, followed by India, until the 18th century. Ming China By 1368, Zhu Yuanzhang had declared himself the Hongwu Emperor and established the Ming dynasty of China. Immediately, the new emperor and his followers drove the Mongols and their culture out of China and beyond the Great Wall. The new emperor was somewhat suspicious of the scholars who dominated China's bureaucracy, for he had been born a peasant and was uneducated.
Nevertheless, Confucian scholars were necessary to China's bureaucracy, and they were reinstated along with reforms that improved the examination system and made it more important than ever for entry into the bureaucracy. The exams became more rigorous, cut down harshly on cheating, and those who excelled were more highly regarded. Finally, Hongwu also directed more power towards the role of emperor so as to end the corrupt influences of the bureaucrats. Society and economy The Hongwu emperor, perhaps out of sympathy for the common folk, built many irrigation systems and other public projects that provided help for the peasant farmers. Peasants were also allowed to cultivate and claim unoccupied land without having to pay any taxes, and labor demands were lowered. However, none of this was able to stop the rising landlord class that gained many privileges from the government and slowly gained control of the peasantry. Moneylenders foreclosed on peasant debts and bought up farmland, forcing farmers to become the landlords' tenants or to wander elsewhere for work. Also during this time, Neo-Confucianism intensified even more than under the previous two dynasties (the Song and Yuan). Its focus on the superiority of elders over youth, men over women, and teachers over students resulted in discrimination against the "inferior" classes. The fine arts grew in the Ming era, with improved techniques in brush painting that depicted scenes of court, city or country life; people such as scholars or travelers; or the beauty of mountains, lakes, or marshes. The Chinese novel fully developed in this era, producing classics such as Water Margin, Journey to the West, and Jin Ping Mei. The economy grew rapidly in the Ming dynasty as well. The introduction of American crops such as maize, sweet potatoes, and peanuts allowed for cultivation of crops in infertile land and helped prevent famine.
The population boom that began in the Song dynasty accelerated until China's population grew from 80 or 90 million to 150 million over three centuries, culminating around 1600. This paralleled a market economy that was growing both internally and externally. Silk, tea, ceramics, and lacquerware were produced by artisans and traded across Asia and with Europeans. Westerners began to trade (within limits the Chinese assigned), primarily in the port towns of Macau and Canton. Although merchants benefited greatly from this, land remained the primary symbol of wealth in China, and traders' riches were often put into acquiring more land. Therefore, little of this wealth was used in the private enterprises that could have allowed China to develop the market economy that often accompanied the highly successful Western countries.

Foreign interests

In the interest of national glory, the Chinese began sending impressive junk ships across the South China Sea and the Indian Ocean. From 1403 to 1433, the Yongle Emperor commissioned expeditions led by the admiral Zheng He, a Muslim eunuch from China. Chinese junks carrying hundreds of soldiers, goods, and animals for zoos traveled to Southeast Asia, Persia, southern Arabia, and east Africa to display Chinese power. Their prowess exceeded that of contemporary Europeans, and had these expeditions not ended, the world economy might be different today. In 1433, however, the Chinese government decided that the cost of a navy was an unnecessary expense. The navy was slowly dismantled, and focus shifted to interior reform and military defense. It had long been China's priority to protect itself from nomads, and it accordingly returned to that priority. The growing limits on the Chinese navy would later leave China vulnerable to foreign invasion by sea. Inevitably, Westerners arrived on the Chinese east coast, primarily Jesuit missionaries, who reached the mainland in 1582.
They attempted to convert the Chinese people to Christianity by first converting the top of the social hierarchy, expecting the lower classes to convert subsequently. To gain further support, many Jesuits adopted Chinese dress, customs, and language. Some Chinese scholars were interested in certain Western teachings, and especially in Western technology. By the 1580s, Jesuit scholars like Matteo Ricci and Adam Schall amazed the Chinese elite with technological advances such as European clocks, improved calendars and cannons, and the accurate prediction of eclipses. Although some of the scholar-gentry converted, many were suspicious of the Westerners, whom they called "barbarians", and even resented them for the embarrassment of being corrected by them. Nevertheless, a small group of Jesuit scholars remained at the court to impress the emperor and his advisors.

Decline

Near the end of the 1500s, the extremely centralized government that gave so much power to the emperor began to fail as increasingly incompetent rulers took the mantle. Along with these weak rulers came increasingly corrupt officials who took advantage of the decline. Once more the public projects fell into disrepair through bureaucratic neglect, resulting in floods, drought, and famine that rocked the peasantry. The famine soon became so terrible that some peasants resorted to selling their children into slavery to save them from starvation, or to eating bark, the feces of geese, or other people. Many landlords abused the situation by building large estates where desperate farmers worked and were exploited. In turn, many of these farmers resorted to flight, banditry, and open rebellion. All of this corresponded with the usual pattern of Chinese dynastic decline seen before, as well as with growing foreign threats. In the mid-16th century, Japanese and ethnic Chinese pirates began to raid the southern coast, and neither the bureaucracy nor the military was able to stop them.
The threat from the northern Manchu people also grew. The Manchu already formed a large state north of China when, in the early 17th century, a local leader named Nurhaci suddenly united them under the Eight Banners, armies into which the opposing families were organized. The Manchus adopted many Chinese customs, modeling their bureaucracy in particular on China's. Nevertheless, the Manchus still remained a Chinese vassal. By 1644 the Chinese administration had become so weak that the last Ming ruler, the Chongzhen Emperor, did not respond to the severity of an ensuing rebellion by local dissenters until the rebels had invaded the Forbidden City (the imperial palace). He soon hanged himself in the imperial gardens. For a brief time the Shun dynasty was proclaimed, until a loyalist Ming official called on the Manchus for support to put down the new dynasty. The Shun dynasty ended within a year, and the Manchu were now within the Great Wall. Taking advantage of the situation, the Manchus marched on the Chinese capital of Beijing. Within two decades all of China belonged to the Manchu, and the Qing dynasty was established.

Korea: Joseon dynasty (1392–1897)

In early-modern Korea, the 500-year-old kingdom of Goryeo fell and the new Joseon dynasty rose on August 5, 1392. Taejo of Joseon changed the country's name from Goryeo to Joseon. Sejong the Great created Hangul, the modern Korean alphabet, in 1443; the Joseon dynasty likewise saw several improvements in science and technology, such as sun clocks, water clocks, rain gauges, star maps, and detailed records of small Korean villages. The ninth king, Seongjong, completed the first comprehensive Korean law code in 1485, further improving culture and people's lives. In 1592, Japan under Toyotomi Hideyoshi invaded Korea, beginning the Imjin War. Before that war, Joseon had enjoyed a long peace, much like the Pax Romana, and so was not ready for war. Joseon lost battle after battle, and the Japanese army conquered Seoul.
The whole Korean peninsula was in danger. But Yi Sun-sin, the most renowned admiral in Korean history, defeated the Japanese fleet off the southern Korean coast despite being outnumbered 13 ships to 133, at what became known as the Battle of Myeongnyang. After that, the Ming dynasty came to Joseon's aid, and Japan was defeated. Toyotomi Hideyoshi's campaign in Korea thus failed, and the Tokugawa shogunate later arose in Japan. Korea was devastated by the Imjin War. Not long after, the Manchus invaded Joseon, in what is called the Qing invasion of Joseon. The first invasion was preemptive: because the Qing were at war with the Ming, the Ming's alliance with Joseon was a threat. The second invasion was to force Joseon to submit to the Qing. Afterward, the Qing defeated the Ming and took all of the Chinese territories, and Joseon, having lost the second war, had to submit to the Qing. After the Qing invasion, the princes of the Joseon dynasty spent their childhoods in China. The son of King Injo met Adam Schall in Beijing and wanted to introduce Western technologies to the Korean people once he became king, but unfortunately he died before he could take the throne. The alternative prince then became the 17th king of the Joseon dynasty, Hyojong, who sought revenge on the Qing for his kingdom and for the fallen Ming dynasty. Later kings such as Yeongjo and Jeongjo tried to improve their people's lives and curb unreasonable competition among officials. From the 17th to the 18th century, Joseon sent diplomats and artists to Japan more than ten times. These missions, called 'Tongshinsa', were sent to teach Japan about advanced Korean culture, and Japanese people prized receiving poems from Korean nobles. At that time, Korea was more powerful than Japan, but the relationship between Joseon and Japan was reversed after the 19th century, when Japan became more powerful than both Korea and China. Joseon then sent diplomats called 'Sooshinsa' to learn about Japan's advanced technologies.
After King Jeongjo's death, a few noble families controlled the whole kingdom in the early 19th century. At the end of that period, Western powers intruded on Joseon. In 1876, Joseon was freed from its obligations to the Qing and no longer had to obey them. The Japanese Empire welcomed this, because with Joseon now a fully independent kingdom, Japan could intervene in it more. After this, Joseon traded with the United States and sent 'Sooshinsa' to Japan, 'Youngshinsa' to Qing China, and 'Bobingsa' to the US and Europe. These missions brought many modern ideas and goods to the Korean peninsula.

Japan: Tokugawa or Edo period (1603–1867)

In early-modern Japan, following the Sengoku period of "warring states", central government had been largely reestablished by Oda Nobunaga and Toyotomi Hideyoshi during the Azuchi–Momoyama period. After the Battle of Sekigahara in 1600, central authority fell to Tokugawa Ieyasu, who completed this process and received the title of shōgun in 1603. Society in the Japanese "Tokugawa period" (see Edo society), unlike the shogunates before it, was based on the strict class hierarchy originally established by Toyotomi Hideyoshi. The daimyōs (feudal lords) were at the top, followed by the warrior caste of samurai, with the farmers, artisans, and merchants ranking below. Under the Sakoku policy, the country was strictly closed to foreigners, with few exceptions. Literacy rose during the two centuries of isolation. In some parts of the country, particularly smaller regions, daimyōs and samurai were more or less identical, since daimyōs might be trained as samurai and samurai might act as local lords. Otherwise, the largely inflexible nature of this social stratification system unleashed disruptive forces over time. Taxes on the peasantry were set at fixed amounts which did not account for inflation or other changes in monetary value. As a result, the tax revenues collected by the samurai landowners were worth less and less over time.
This often led to numerous confrontations between noble but impoverished samurai and well-to-do peasants. None, however, proved compelling enough to seriously challenge the established order until the arrival of foreign powers.

India

In the Indian subcontinent, the Mughal Empire ruled most of India in the early 18th century. During the reigns of emperor Shah Jahan and his son Aurangzeb, who ruled under Islamic sharia, the empire reached its architectural and economic zenith and became the world's largest economy, worth over 25% of world GDP, signaling proto-industrialization. Following major events such as Nader Shah's invasion of the Mughal Empire, the Battle of Plassey, the Battle of Buxar and the long Anglo-Mysore Wars, most of South Asia was colonised and governed by the British Empire, thus establishing the British Raj. The "classic period" ended with the death of Mughal Emperor Aurangzeb, although the dynasty continued for another 150 years. During this period, the Empire was marked by a highly centralized administration connecting its different regions. All the significant monuments of the Mughals, their most visible legacy, date to this period, which was characterised by the expansion of Persian cultural influence in the Indian subcontinent, with brilliant literary, artistic, and architectural results. The Maratha Empire was located in the southwest of present-day India and expanded greatly under the rule of the Peshwas, the prime ministers of the Maratha Empire. In 1761, the Maratha army lost the Third Battle of Panipat to Ahmad Shah Durrani, king of Afghanistan, which halted imperial expansion; the empire was then divided into a confederacy of Maratha states.

British and Dutch colonization

The European economic and naval powers pushed into Asia, first to trade, and then to take over major colonies. The Dutch led the way, followed by the British.
Portugal had arrived first, but was too weak to maintain its small holdings and was largely pushed out, retaining only Goa and Macau. The British set up a private organization, the East India Company, which handled both trade and imperial control of much of India. The commercial colonization of India commenced in 1757, after the Battle of Plassey, when the Nawab of Bengal surrendered his dominions to the British East India Company; in 1765, when the company was granted the diwani, or the right to collect revenue, in Bengal and Bihar; or in 1772, when the company established a capital in Calcutta, appointed its first Governor-General, Warren Hastings, and became directly involved in governance. The Maratha states, following the Anglo-Maratha wars, eventually lost to the British East India Company in 1818 with the Third Anglo-Maratha War. Company rule lasted until 1858, when, after the Indian Rebellion of 1857 and as a consequence of the Government of India Act 1858, the British government assumed the task of directly administering India in the new British Raj. In 1819 Stamford Raffles established Singapore as a key trading post for Britain in its rivalry with the Dutch. However, the rivalry cooled in 1824 when an Anglo-Dutch treaty demarcated their respective interests in Southeast Asia. From the 1850s onwards, the pace of colonization shifted to a significantly higher gear. The Dutch East India Company (1800) and British East India Company (1858) were dissolved by their respective governments, which took over the direct administration of the colonies. Only Thailand was spared the experience of foreign rule, although Thailand itself was also greatly affected by the power politics of the Western powers. Colonial rule had a profound effect on Southeast Asia. While the colonial powers profited much from the region's vast resources and large market, colonial rule did develop the region to a varying extent.
Late modern

Central Asia: The Great Game, Russia vs Great Britain

The Great Game was a political and diplomatic confrontation between Great Britain and Russia over Afghanistan and neighbouring territories in Central and South Asia. It lasted from 1828 to 1907. There was no war, but there were many threats. Russia was fearful of British commercial and military inroads into Central Asia, and Britain was fearful of Russia threatening its largest and most important possession, India. This resulted in an atmosphere of distrust and the constant threat of war between the two empires. Britain made it a high priority to protect all the approaches to India, and the "great game" is primarily how the British did this in terms of a possible Russian threat. Historians with access to the archives have concluded that Russia had no plans involving India, as the Russians repeatedly stated. The Great Game began in 1838 when Britain decided to gain control over the Emirate of Afghanistan and make it a protectorate, and to use the Ottoman Empire, the Persian Empire, the Khanate of Khiva, and the Emirate of Bukhara as buffer states between the two empires. This would protect India and also key British sea trade routes by stopping Russia from gaining a port on the Persian Gulf or the Indian Ocean. Russia proposed Afghanistan as the neutral zone, and the final result was the division of Afghanistan, with a neutral zone in the middle between Russian areas in the north and British areas in the south. Important episodes included the failed First Anglo-Afghan War of 1838, the First Anglo-Sikh War of 1845, the Second Anglo-Sikh War of 1848, the Second Anglo-Afghan War of 1878, and the annexation of Kokand by Russia. The 1901 novel Kim by Rudyard Kipling made the term popular and introduced the new implication of great-power rivalry. It became even more popular after the 1979 advent of the Soviet–Afghan War.
Qing China

By 1644, the northern Manchu people had conquered the Ming dynasty and established a foreign dynasty, the Qing, once more. The Manchu Qing emperors, especially the Confucian scholar Kangxi, remained largely conservative, retaining the bureaucracy and the scholars within it, as well as the Confucian ideals present in Chinese society. However, changes in the economy and new attempts at resolving certain issues occurred too. These included increased trade with Western countries that brought large amounts of silver into the Chinese economy in exchange for tea, porcelain, and silk textiles. This allowed a new merchant class, the compradors, to develop. In addition, repairs were made to existing dikes, canals, roadways, and irrigation works. This, combined with the lowering of taxes and government-assigned labor, was supposed to calm peasant unrest. However, the Qing failed to control the growing landlord class, which had begun to exploit the peasantry and abuse its position. By the late 18th century, both internal and external issues began to arise in Qing China's politics, society, and economy. The exam system by which scholars were admitted into the bureaucracy became increasingly corrupt; bribes and other forms of cheating allowed inexperienced and inept scholars to enter the bureaucracy, eventually causing rampant neglect of the peasantry, the military, and the previously mentioned infrastructure projects. Poverty and banditry steadily rose, especially in rural areas, and mass migrations in search of work occurred throughout China. The perpetually conservative government refused to make reforms that could resolve these issues.

Opium War

China saw its status reduced by what it perceived as parasitic trade with Westerners. Originally, European traders were at a disadvantage because the Chinese cared little for their goods, while European demand for Chinese commodities such as tea and porcelain only grew.
In order to tip the trade imbalance in their favor, British merchants began to sell Indian opium to the Chinese. Not only did this sap Chinese bullion reserves, it also led to widespread drug addiction amongst the bureaucracy and society in general. A ban was placed on opium as early as 1729 by the Yongzheng Emperor, but little was done to enforce it. By the early 19th century, under the new Daoguang Emperor, the government began serious efforts to eradicate opium from Chinese society. Leading this endeavour were respected scholar-officials including Imperial Commissioner Lin Zexu. After Lin destroyed more than 20,000 chests of opium in the summer of 1839, Europeans demanded compensation for what they saw as unwarranted Chinese interference in their affairs. When it was not paid, the British declared war later the same year, starting what became known as the First Opium War. The outdated Chinese junks were no match for the advanced British gunboats, and soon the Yangzi River region came under threat of British bombardment and invasion. The emperor had no choice but to sue for peace, resulting in the exile of Lin and the signing of the Treaty of Nanking, which ceded control of Hong Kong to the British and opened up trade and diplomacy with other European countries, including Germany, France, and the USA.

Inner Manchuria

Northeast China came under Russian influence with the building of the Chinese Eastern Railway through Harbin to Vladivostok. The Empire of Japan replaced Russian influence in the region as a result of the Russo-Japanese War of 1904–1905, and Japan laid the South Manchurian Railway to Port Arthur in 1906. During the Warlord Era in China, Zhang Zuolin established himself in Northeast China, but was murdered by the Japanese for being too independent. The former Chinese emperor, Puyi, was then placed on the throne to lead the Japanese puppet state of Manchukuo. In August 1945, the Soviet Union invaded the region.
From 1945 to 1948, Northeast China was a base area for Mao Zedong's People's Liberation Army in the Chinese Civil War. With the encouragement of the Kremlin, the area was used as a staging ground during the Civil War by the Chinese Communists, who were victorious in 1949 and have controlled it ever since.

Joseon

By the 19th century, the king of Joseon had become powerless, because the noble family of the king's wife had taken power and ruled the country as it pleased. Heungseon Daewongun, the father of Gojong, the 26th king of the Joseon dynasty, wanted to make the king powerful again, even though he was not the king himself. As the father of the young king, he broke the noble families and corrupt organizations, and the royal family regained power. He then rebuilt Gyeongbokgung Palace in order to display the royal power to the people, but was criticized for the enormous expense and for the inflation it caused. His son, the actual king, Gojong, then took power himself.

Korean Empire

Gojong, the 26th king of Joseon, changed the nation's name to Daehan Jeguk, meaning the Korean Empire, and promoted himself to emperor. The new empire adopted more Western technology, strengthened its military power, and intended to become a neutral nation. Unfortunately, Japan ignored this neutrality during the Russo-Japanese War, and after defeating the Russian Empire it began to encroach on Korea. Japan first illegally stripped the Korean Empire of its diplomatic rights, but the Western countries ignored this aggression because they knew Japan had become a strong country by defeating the Russian Empire. Emperor Gojong therefore sent diplomats to the Dutch city of The Hague to make known that Japan had taken the Empire's rights illegally, but the mission failed because the diplomats could not enter the conference room. Japan deposed Gojong on these grounds. Three years later, in 1910, the Korean Empire became part of the Empire of Japan.
It was the first time Korea had fallen to a foreign power since the Han dynasty's invasion in 108 BC.

Contemporary

The European powers had control of other parts of Asia by the early 20th century, such as British India, French Indochina, the Spanish East Indies, and Portuguese Macau and Goa. The Great Game between Russia and Britain was the struggle for power in the Central Asian region in the nineteenth century. The Trans-Siberian Railway, crossing Asia by train, was complete by 1916. Parts of Asia remained free from European control, although not influence, such as Persia, Thailand and most of China. In the twentieth century, Imperial Japan expanded into China and Southeast Asia during World War II. After the war, many Asian countries became independent from European powers. During the Cold War, the northern parts of Asia were under communist control, with the Soviet Union and the People's Republic of China, while Western allies formed pacts such as CENTO and SEATO. Conflicts such as the Korean War, the Vietnam War and the Soviet invasion of Afghanistan were fought between communists and anti-communists. In the decades after the Second World War, a massive restructuring plan drove Japan to become the world's second-largest economy, a phenomenon known as the Japanese post-war economic miracle. The Arab–Israeli conflict has dominated much of the recent history of the Middle East. After the Soviet Union's collapse in 1991, many new independent nations emerged in Central Asia.

China

Prior to World War II, China faced a civil war between Mao Zedong's Communist Party and Chiang Kai-shek's Nationalist Party; the Nationalists appeared to be in the lead. However, once the Japanese invaded in 1937, the two parties were forced into a temporary cease-fire in order to defend China. The Nationalists faced many military failures that caused them to lose territory and, subsequently, respect from the Chinese masses.
In contrast, the Communists' use of guerrilla warfare (led by Lin Biao) proved effective against the Japanese conventional methods and put the Communist Party on top by 1945. They also gained popularity for the reforms they were already applying in the areas they controlled, including land redistribution, education reforms, and widespread health care. Over the next four years, the Nationalists were forced to retreat to the small island east of China known as Taiwan (formerly Formosa), where they remain today. In mainland China, the People's Republic of China was established by the Communist Party, with Mao Zedong as its state chairman. The communist government in China was defined by the party cadres. These hard-line officers controlled the People's Liberation Army, which itself controlled large portions of the bureaucracy. This system was further controlled by the Central Committee, which additionally supported the state chairman, who was considered the head of the government. The People's Republic's foreign policies included repressing secession attempts in Mongolia and Tibet and supporting North Korea and North Vietnam in the Korean War and Vietnam War, respectively. By 1960, China and the USSR had become adversaries, battling worldwide for control of local communist movements. Today China plays important roles in world economics and politics; it is the world's second-largest economy and the second-fastest-growing one.

Korea

Around the time of the Korean War, Korea was divided into North and South. Syngman Rhee became the first president of South Korea, and Kim Il-sung became the supreme leader of North Korea. After the war, Syngman Rhee tried to become a dictator, so the April Revolution occurred, and he was eventually exiled from his country. In 1963, Park Chung-hee took power following a military coup d'état. He dispatched the Republic of Korea Army to the Vietnam War.
During this era, the economy of South Korea overtook that of North Korea. Although Park Chung-hee improved the nation's economy, he was a dictator and unpopular with the people; he was eventually assassinated by Kim Jae-gyu. In 1979, Chun Doo-hwan took power in another military coup d'état. He violently suppressed the resistance in the city of Gwangju, an event known as the Gwangju Uprising. Despite the uprising, Chun Doo-hwan became president. But the people resisted again in 1987, in a movement called the June Struggle. As a result of the Gwangju Uprising and the June Struggle, South Korea finally became a democratic republic in 1987. Roh Tae-woo (1988–93), Kim Young-sam (1993–98), Kim Dae-jung (1998–2003), Roh Moo-hyun (2003–2008), Lee Myung-bak (2008–2013), Park Geun-hye (2013–2017), and Moon Jae-in (2017–) were elected president in turn after 1987. In 1960, North Korea was far wealthier than South Korea, but by 1970 South Korea had begun to overtake the North Korean economy. In 2018, South Korea ranked 10th in the world by GDP.

See also

Ancient Asian history
History of Southeast Asia
Prehistoric Asia
14098
https://en.wikipedia.org/wiki/History%20of%20the%20Americas
History of the Americas
The prehistory of the Americas (North, South, and Central America, and the Caribbean) begins with people migrating to these areas from Asia during the height of an Ice Age. These groups are generally believed to have been isolated from the people of the "Old World" until the coming of Europeans in the 10th century from Iceland led by Leif Erikson and in 1492 with the voyages of Christopher Columbus. The ancestors of today's American Indigenous peoples were the Paleo-Indians; they were hunter-gatherers who migrated into North America. The most popular theory asserts that migrants came to the Americas via Beringia, the land mass now covered by the ocean waters of the Bering Strait. Small lithic stage peoples followed megafauna like bison, mammoth (now extinct), and caribou, thus gaining the modern nickname "big-game hunters." Groups of people may also have traveled into North America on shelf or sheet ice along the northern Pacific coast. Cultures that may be considered advanced or civilized include Norte Chico, Cahokia, Zapotec, Toltec, Olmec, Maya, Aztec, Chimor, Mixtec, Moche, Mississippian, Puebloan, Totonac, Teotihuacan, Huastec people, Purépecha, Izapa, Mazatec, Muisca, and the Inca. After the voyages of Christopher Columbus in 1492, Spanish and later Portuguese, English, French and Dutch colonial expeditions arrived in the New World, conquering and settling the discovered lands, which led to a transformation of the cultural and physical landscape in the Americas. Spain colonized most of the Americas from present-day Southwestern United States, Florida and the Caribbean to the southern tip of South America. Portugal settled in what is mostly present-day Brazil while England established colonies on the Eastern coast of the United States, as well as the North Pacific coast and in most of Canada. France settled in Quebec and other parts of Eastern Canada and claimed an area in what is today the central United States. 
The Netherlands settled New Netherland (administrative centre New Amsterdam – now New York), some Caribbean islands and parts of Northern South America. European colonization of the Americas led to the rise of new cultures, civilizations and eventually states, which resulted from the fusion of Native American, European, and African traditions, peoples and institutions. The transformation of American cultures through colonization is evident in architecture, religion, gastronomy, the arts and particularly languages, the most widespread being Spanish (376 million speakers), English (348 million) and Portuguese (201 million). The colonial period lasted approximately three centuries, from the early 16th to the early 19th centuries, when Brazil and the larger Hispanic American nations declared independence. The United States obtained independence from Great Britain much earlier, in 1776, while Canada formed a federal dominion in 1867 and received legal independence in 1931. Others remained attached to their European parent state until the end of the 19th century, such as Cuba and Puerto Rico which were linked to Spain until 1898. Smaller territories such as Guyana obtained independence in the mid-20th century, while certain Caribbean islands and French Guiana remain part of a European power to this day. Pre-colonization Migration into the continents The specifics of Paleo-Indian migration to and throughout the Americas, including the exact dates and routes traveled, are subject to ongoing research and discussion. The traditional theory has been that these early migrants moved into the Beringia land bridge between eastern Siberia and present-day Alaska around 40,000 – 17,000 years ago, when sea levels were significantly lowered due to the Quaternary glaciation. These people are believed to have followed herds of now-extinct Pleistocene megafauna along ice-free corridors that stretched between the Laurentide and Cordilleran ice sheets. 
Another route proposed is that, either on foot or using primitive boats, they migrated down the Pacific Northwest coast to South America. Evidence of the latter would since have been covered by a sea level rise of a hundred meters following the last ice age. Archaeologists contend that the Paleo-Indian migration out of Beringia (eastern Alaska), ranges from 40,000 to around 16,500 years ago. This time range is a hot source of debate. The few agreements achieved to date are the origin from Central Asia, with widespread habitation of the Americas during the end of the last glacial period, or more specifically what is known as the late glacial maximum, around 16,000 – 13,000 years before present. The American Journal of Human Genetics released an article in 2007 stating "Here we show, by using 86 complete mitochondrial genomes, that all Indigenous American haplogroups, including Haplogroup X (mtDNA), were part of a single founding population." Amerindian groups in the Bering Strait region exhibit perhaps the strongest DNA or mitochondrial DNA relations to Siberian peoples. The genetic diversity of Amerindian indigenous groups increase with distance from the assumed entry point into the Americas. Certain genetic diversity patterns from West to East suggest, particularly in South America, that migration proceeded first down the west coast, and then proceeded eastward. Geneticists have variously estimated that peoples of Asia and the Americas were part of the same population from 42,000 to 21,000 years ago. New studies shed light on the founding population of indigenous Americans, suggesting that their ancestry traced to both east Asian and western Eurasians who migrated to North America directly from Siberia. 
A 2013 study in the journal Nature reported that DNA found in the 24,000-year-old remains of a young boy in Mal’ta Siberia suggest that up to one-third of the indigenous Americans may have ancestry that can be traced back to western Eurasians, who may have "had a more north-easterly distribution 24,000 years ago than commonly thought" Professor Kelly Graf said that "Our findings are significant at two levels. First, it shows that Upper Paleolithic Siberians came from a cosmopolitan population of early modern humans that spread out of Africa to Europe and Central and South Asia. Second, Paleoindian skeletons with phenotypic traits atypical of modern-day Native Americans can be explained as having a direct historical connection to Upper Paleolithic Siberia." A route through Beringia is seen as more likely than the Solutrean hypothesis. On October 3, 2014, the Oregon cave where the oldest DNA evidence of human habitation in North America was found was added to the National Register of Historic Places. The DNA, radiocarbon dated to 14,300 years ago, was found in fossilized human coprolites uncovered in the Paisley Five Mile Point Caves in south central Oregon. Lithic stage (before 8000 BCE) The Lithic stage or Paleo-Indian period, is the earliest classification term referring to the first stage of human habitation in the Americas, covering the Late Pleistocene epoch. The time period derives its name from the appearance of "Lithic flaked" stone tools. Stone tools, particularly projectile points and scrapers, are the primary evidence of the earliest well known human activity in the Americas. Lithic reduction stone tools are used by archaeologists and anthropologists to classify cultural periods. Archaic stage (8000 BCE – 1000 BCE) Several thousand years after the first migrations, the first complex civilizations arose as hunter-gatherers settled into semi-agricultural communities. 
Identifiable sedentary settlements began to emerge in the so-called Middle Archaic period around 6000 BCE. Particular archaeological cultures can be identified and easily classified throughout the Archaic period. In the late Archaic, on the north-central coastal region of Peru, a complex civilization arose which has been termed the Norte Chico civilization, also known as Caral-Supe. It is the oldest known civilization in the Americas and one of the six sites where civilization originated independently and indigenously in the ancient world, flourishing between the 30th and 18th centuries BC. It pre-dated the Mesoamerican Olmec civilization by nearly two millennia. It was contemporaneous with the Egypt following the unification of its kingdom under Narmer and the emergence of the first Egyptian hieroglyphics. Monumental architecture, including earthwork platform mounds and sunken plazas have been identified as part of the civilization. Archaeological evidence points to the use of textile technology and the worship of common god symbols. Government, possibly in the form of theocracy, is assumed to have been required to manage the region. However, numerous questions remain about its organization. In archaeological nomenclature, the culture was pre-ceramic culture of the pre-Columbian Late Archaic period. It appears to have lacked ceramics and art. Ongoing scholarly debate persists over the extent to which the flourishing of Norte Chico resulted from its abundant maritime food resources, and the relationship that these resources would suggest between coastal and inland sites. The role of seafood in the Norte Chico diet has been a subject of scholarly debate. In 1973, examining the Aspero region of Norte Chico, Michael E. Moseley contended that a maritime subsistence (seafood) economy had been the basis of society and its early flourishing. 
This theory, later termed "maritime foundation of Andean Civilization" was at odds with the general scholarly consensus that civilization arose as a result of intensive grain-based agriculture, as had been the case in the emergence of civilizations in northeast Africa (Egypt) and southwest Asia (Mesopotamia). While earlier research pointed to edible domestic plants such as squash, beans, lucuma, guava, pacay, and camote at Caral, publications by Haas and colleagues have added avocado, achira, and maize (Zea Mays) to the list of foods consumed in the region. In 2013, Haas and colleagues reported that maize was a primary component of the diet throughout the period of 3000 to 1800 BC. Cotton was another widespread crop in Norte Chico, essential to the production of fishing nets and textiles. Jonathan Haas noted a mutual dependency, whereby "The prehistoric residents of the Norte Chico needed the fish resources for their protein and the fishermen needed the cotton to make the nets to catch the fish." In the 2005 book 1491: New Revelations of the Americas Before Columbus, journalist Charles C. Mann surveyed the literature at the time, reporting a date "sometime before 3200 BC, and possibly before 3500 BC" as the beginning date for the formation of Norte Chico. He notes that the earliest date securely associated with a city is 3500 BC, at Huaricanga in the (inland) Fortaleza area. The Norte Chico civilization began to decline around 1800 BC as more powerful centers appeared to the south and north along its coast, and to the east within the Andes Mountains. 
Mesoamerica, the Woodland Period, and Mississippian culture (2000 BCE – 500 CE) After the decline of the Norte Chico civilization, several large, centralized civilizations developed in the Western Hemisphere: Chavin, Nazca, Moche, Huari, Quitus, Cañaris, Chimu, Pachacamac, Tiahuanaco, Aymara and Inca in the Central Andes (Ecuador, Peru and Bolivia); Muisca in Colombia ; Taínos in Dominican Republic (Hispaniola, Española) and part of Caribbean; and the Olmecs, Maya, Toltecs, Mixtecs, Zapotecs, Aztecs and Purepecha in southern North America (Mexico, Guatemala). The Olmec civilization was the first Mesoamerican civilization, beginning around 1600–1400 BC and ending around 400 BC. Mesoamerica is considered one of the six sites around the globe in which civilization developed independently and indigenously. This civilization is considered the mother culture of the Mesoamerican civilizations. The Mesoamerican calendar, numeral system, writing, and much of the Mesoamerican pantheon seem to have begun with the Olmec. Some elements of agriculture seem to have been practiced in Mesoamerica quite early. The domestication of maize is thought to have begun around 7,500 to 12,000 years ago. The earliest record of lowland maize cultivation dates to around 5100 BC. Agriculture continued to be mixed with a hunting-gathering-fishing lifestyle until quite late compared to other regions, but by 2700 BC, Mesoamericans were relying on maize, and living mostly in villages. Temple mounds and classes started to appear. By 1300/ 1200 BC, small centres coalesced into the Olmec civilization, which seems to have been a set of city-states, united in religious and commercial concerns. The Olmec cities had ceremonial complexes with earth/clay pyramids, palaces, stone monuments, aqueducts and walled plazas. The first of these centers was at San Lorenzo (until 900 bc). La Venta was the last great Olmec centre. Olmec artisans sculpted jade and clay figurines of Jaguars and humans. 
Their iconic giant heads – believed to be of Olmec rulers – stood in every major city. The Olmec civilization ended in 400 BC, with the defacing and destruction of San Lorenzo and La Venta, two of the major cities. It nevertheless spawned many other states, most notably the Mayan civilization, whose first cities began appearing around 700–600 BC. Olmec influences continued to appear in many later Mesoamerican civilizations. Cities of the Aztecs, Mayas, and Incas were as large and organized as the largest in the Old World, with an estimated population of 200,000 to 350,000 in Tenochtitlan, the capital of the Aztec Empire. The market established in the city was said to have been the largest ever seen by the conquistadors when they arrived. The capital of the Cahokians, Cahokia, located near modern East St. Louis, Illinois, may have reached a population of over 20,000. At its peak, between the 12th and 13th centuries, Cahokia may have been the most populous city in North America. Monk's Mound, the major ceremonial center of Cahokia, remains the largest earthen construction of the prehistoric New World. These civilizations developed agriculture as well, breeding maize (corn) from having ears 2–5 cm in length to perhaps 10–15 cm in length. Potatoes, tomatoes, beans (greens), pumpkins, avocados, and chocolate are now the most popular of the pre-Columbian agricultural products. The civilizations did not develop extensive livestock as there were few suitable species, although alpacas and llamas were domesticated for use as beasts of burden and sources of wool and meat in the Andes. By the 15th century, maize was being farmed in the Mississippi River Valley after introduction from Mexico. The course of further agricultural development was greatly altered by the arrival of Europeans. 
Classic stage (800 BCE – 1533 CE) Cahokia Cahokia was a major regional chiefdom, with trade and tributary chiefdoms located in a range of areas from bordering the Great Lakes to the Gulf of Mexico. Haudenosaune The Iroquois League of Nations or "People of the Long House", based in present-day upstate and western New York, had a confederacy model from the mid-15th century. It has been suggested that their culture contributed to political thinking during the development of the later United States government. Their system of affiliation was a kind of federation, different from the strong, centralized European monarchies. Leadership was restricted to a group of 50 sachem chiefs, each representing one clan within a tribe; the Oneida and Mohawk people had nine seats each; the Onondagas held fourteen; the Cayuga had ten seats; and the Seneca had eight. Representation was not based on population numbers, as the Seneca tribe greatly outnumbered the others. When a sachem chief died, his successor was chosen by the senior woman of his tribe in consultation with other female members of the clan; property and hereditary leadership were passed matrilineally. Decisions were not made through voting but through consensus decision making, with each sachem chief holding theoretical veto power. The Onondaga were the "firekeepers", responsible for raising topics to be discussed. They occupied one side of a three-sided fire (the Mohawk and Seneca sat on one side of the fire, the Oneida and Cayuga sat on the third side.) Long-distance trading did not prevent warfare and displacement among the indigenous peoples, and their oral histories tell of numerous migrations to the historic territories where Europeans encountered them. The Iroquois invaded and attacked tribes in the Ohio River area of present-day Kentucky and claimed the hunting grounds. Historians have placed these events as occurring as early as the 13th century, or in the 17th century Beaver Wars. 
Through warfare, the Iroquois drove several tribes to migrate west to what became known as their historically traditional lands west of the Mississippi River. Tribes originating in the Ohio Valley who moved west included the Osage, Kaw, Ponca and Omaha people. By the mid-17th century, they had resettled in their historical lands in present-day Kansas, Nebraska, Arkansas and Oklahoma. The Osage warred with Caddo-speaking Native Americans, displacing them in turn by the mid-18th century and dominating their new historical territories. Oasisamerica Pueblo people The Pueblo people of what is now occupied by the Southwestern United States and northern Mexico, living conditions were that of large stone apartment like adobe structures. They live in Arizona, New Mexico, Utah, Colorado, and possibly surrounding areas. Aridoamerica Chichimeca Chichimeca was the name that the Mexica (Aztecs) generically applied to a wide range of semi-nomadic peoples who inhabited the north of modern-day Mexico, and carried the same sense as the European term "barbarian". The name was adopted with a pejorative tone by the Spaniards when referring especially to the semi-nomadic hunter-gatherer peoples of northern Mexico. Mesoamerica Olmec The Olmec civilization emerged around 1200 BCE in Mesoamerica and ended around 400 BCE. Olmec art and concepts influenced surrounding cultures after their downfall. This civilization was thought to be the first in America to develop a writing system. After the Olmecs abandoned their cities for unknown reasons, the Maya, Zapotec and Teotihuacan arose. Purepecha The Purepecha civilization emerged around 1000 CE in Mesoamerica . They flourished from 1100 CE to 1530 CE. They continue to live on in the state of Michoacán. Fierce warriors, they were never conquered and in their glory years, successfully sealed off huge areas from Aztec domination. Maya Maya history spans 3,000 years. 
The Classic Maya may have collapsed due to changing climate in the end of the 10th century. Toltec The Toltec were a nomadic people, dating from the 10th–12th century, whose language was also spoken by the Aztecs. Teotihuacan Teotihuacan (4th century BCE – 7/8th century CE) was both a city, and an empire of the same name, which, at its zenith between 150 and the 5th century, covered most of Mesoamerica. Aztec The Aztec having started to build their empire around 14th century found their civilization abruptly ended by the Spanish conquistadors. They lived in Mesoamerica, and surrounding lands. Their capital city Tenochtitlan was one of the largest cities of all time. South America Norte Chico The oldest known civilization of the Americas was established in the Norte Chico region of modern Peru. Complex society emerged in the group of coastal valleys, between 3000 and 1800 BCE. The Quipu, a distinctive recording device among Andean civilizations, apparently dates from the era of Norte Chico's prominence. Chavín The Chavín established a trade network and developed agriculture by as early as (or late compared to the Old World) 900 BCE according to some estimates and archaeological finds. Artifacts were found at a site called Chavín in modern Peru at an elevation of 3,177 meters. Chavín civilization spanned from 900 BCE to 300 BCE. Inca Holding their capital at the great city of Cusco, the Inca civilization dominated the Andes region from 1438 to 1533. Known as Tahuantinsuyu, or "the land of the four regions", in Quechua, the Inca culture was highly distinct and developed. Cities were built with precise, unmatched stonework, constructed over many levels of mountain terrain. Terrace farming was a useful form of agriculture. There is evidence of excellent metalwork and even successful trepanation of the skull in Inca civilization. European colonization Around 1000, the Vikings established a short-lived settlement in Newfoundland, now known as L'Anse aux Meadows. 
Speculations exist about other Old World discoveries of the New World, but none of these are generally or completely accepted by most scholars. Spain sponsored a major exploration led by Italian explorer Christopher Columbus in 1492; it quickly led to extensive European colonization of the Americas. The Europeans brought Old World diseases which are thought to have caused catastrophic epidemics and a huge decrease of the native population. Columbus came at a time in which many technical developments in sailing techniques and communication made it possible to report his voyages easily and to spread word of them throughout Europe. It was also a time of growing religious, imperial and economic rivalries that led to a competition for the establishment of colonies. Colonial period 15th to 19th century colonies in the New World: Spanish colonization of the Americas (1492) Viceroyalty of New Spain (1535 to 1821) Viceroyalty of Peru (1542–1824) Spanish Main Spanish West Indies Captaincy General of Guatemala British America / Thirteen Colonies (1584/1607 to 1776/20th century) Danish West Indies New Netherland New France Captaincy General of Venezuela Portuguese colonization of the Americas (1499 to 1822) Colonial Brazil (1500 to 1815) Decolonization The formation of sovereign states in the New World began with the United States Declaration of Independence of 1776. The American Revolutionary War lasted through the period of the Siege of Yorktown — its last major campaign — in the early autumn of 1781, with peace being achieved in 1783. The Spanish colonies won their independence in the first quarter of the 19th century, in the Spanish American wars of independence. Simón Bolívar and José de San Martín, among others, led their independence struggle. 
Although Bolivar attempted to keep the Spanish-speaking parts of Latin America politically allied, they rapidly became independent of one another as well, and several further wars were fought, such as the Paraguayan War and the War of the Pacific. (See Latin American integration.) In the Portuguese colony Dom Pedro I (also Pedro IV of Portugal), son of the Portuguese king Dom João VI, proclaimed the country's independence in 1822 and became Brazil's first Emperor. This was peacefully accepted by the crown in Portugal, upon compensation. Effects of slavery Slavery has had a significant role in the economic development of the New World after the colonization of the Americas by the Europeans. The cotton, tobacco, and sugar cane harvested by slaves became important exports for the United States and the Caribbean countries. 20th century North America As a part of the British Empire, Canada immediately entered World War I when it broke out in 1914. Canada bore the brunt of several major battles during the early stages of the war, including the use of poison gas attacks at Ypres. Losses became grave, and the government eventually brought in conscription, despite the fact this was against the wishes of the majority of French Canadians. In the ensuing Conscription Crisis of 1917, riots broke out on the streets of Montreal. In neighboring Newfoundland, the new dominion suffered a devastating loss on July 1, 1916, the First day on the Somme. The United States stayed out of the conflict until 1917, when it joined the Entente powers. The United States was then able to play a crucial role at the Paris Peace Conference of 1919 that shaped interwar Europe. Mexico was not part of the war, as the country was embroiled in the Mexican Revolution at the time. The 1920s brought an age of great prosperity in the United States, and to a lesser degree Canada. But the Wall Street Crash of 1929 combined with drought ushered in a period of economic hardship in the United States and Canada. 
From 1936 to 1949, there was a popular uprising against the anti-Catholic Mexican government of the time, set off specifically by the anti-clerical provisions of the Mexican Constitution of 1917. Once again, Canada found itself at war before its neighbors, with numerically modest but significant contributions overseas such as the Battle of Hong Kong and the Battle of Britain. The entry of the United States into the war helped to tip the balance in favour of the allies. Two Mexican tankers, transporting oil to the United States, were attacked and sunk by the Germans in the Gulf of Mexico waters, in 1942. The incident happened in spite of Mexico's neutrality at that time. This led Mexico to enter the conflict with a declaration of war on the Axis nations. The destruction of Europe wrought by the war vaulted all North American countries to more important roles in world affairs, especially the United States, which emerged as a "superpower". The early Cold War era saw the United States as the most powerful nation in a Western coalition of which Mexico and Canada were also a part. In Canada, Quebec was transformed by the Quiet Revolution and the emergence of Quebec nationalism. Mexico experienced an era of huge economic growth after World War II, a heavy industrialization process and a growth of its middle class, a period known in Mexican history as "El Milagro Mexicano" (the Mexican miracle). The Caribbean saw the beginnings of decolonization, while on the largest island the Cuban Revolution introduced Cold War rivalries into Latin America. The civil rights movement in the U.S. ended Jim Crow and empowered black voters in the 1960s, which allowed black citizens to move into high government offices for the first time since Reconstruction. However, the dominant New Deal coalition collapsed in the mid 1960s in disputes over race and the Vietnam War, and the conservative movement began its rise to power, as the once dominant liberalism weakened and collapsed. 
Canada during this era was dominated by the leadership of Pierre Elliot Trudeau. In 1982, at the end of his tenure, Canada enshrined a new constitution. Canada's Brian Mulroney not only ran on a similar platform but also favored closer trade ties with the United States. This led to the Canada-United States Free Trade Agreement in January 1989. Mexican presidents Miguel de la Madrid, in the early 1980s and Carlos Salinas de Gortari in the late 1980s, started implementing liberal economic strategies that were seen as a good move. However, Mexico experienced a strong economic recession in 1982 and the Mexican peso suffered a devaluation. In the United States president Ronald Reagan attempted to move the United States back towards a hard anti-communist line in foreign affairs, in what his supporters saw as an attempt to assert moral leadership (compared to the Soviet Union) in the world community. Domestically, Reagan attempted to bring in a package of privatization and regulation to stimulate the economy. The end of the Cold War and the beginning of the era of sustained economic expansion coincided during the 1990s. On January 1, 1994, Canada, Mexico and the United States signed the North American Free Trade Agreement, creating the world's largest free trade area. In 2000, Vicente Fox became the first non-PRI candidate to win the Mexican presidency in over 70 years. The optimism of the 1990s was shattered by the 9/11 attacks of 2001 on the United States, which prompted military intervention in Afghanistan, which also involved Canada. Canada did not support the United States' later move to invade Iraq, however. In the U.S. the Reagan Era of conservative national policies, deregulation and tax cuts took control with the election of Ronald Reagan in 1980. 
By 2010, political scientists were debating whether the election of Barack Obama in 2008 represented an end of the Reagan Era, or was only a reaction against the bubble economy of the 2000s (decade), which burst in 2008 and became the Late-2000s recession with prolonged unemployment. Central America Despite the failure of a lasting political union, the concept of Central American reunification, though lacking enthusiasm from the leaders of the individual countries, rises from time to time. In 1856–1857 the region successfully established a military coalition to repel an invasion by United States adventurer William Walker. Today, all five nations fly flags that retain the old federal motif of two outer blue bands bounding an inner white stripe. (Costa Rica, traditionally the least committed of the five to regional integration, modified its flag significantly in 1848 by darkening the blue and adding a double-wide inner red band, in honor of the French tricolor). In 1907, a Central American Court of Justice was created. On December 13, 1960, Guatemala, El Salvador, Honduras, and Nicaragua established the Central American Common Market ("CACM"). Costa Rica, because of its relative economic prosperity and political stability, chose not to participate in the CACM. The goals for the CACM were to create greater political unification and success of import substitution industrialization policies. The project was an immediate economic success, but was abandoned after the 1969 "Football War" between El Salvador and Honduras. A Central American Parliament has operated, as a purely advisory body, since 1991. Costa Rica has repeatedly declined invitations to join the regional parliament, which seats deputies from the four other former members of the Union, as well as from Panama and the Dominican Republic. South America In the 1960s and 1970s, the governments of Argentina, Brazil, Chile, and Uruguay were overthrown or displaced by U.S.-aligned military dictatorships. 
These dictatorships detained tens of thousands of political prisoners, many of whom were tortured and/or killed (on inter-state collaboration, see Operation Condor). Economically, they began a transition to neoliberal economic policies. They placed their own actions within the United States Cold War doctrine of "National Security" against internal subversion. Throughout the 1980s and 1990s, Peru suffered from an internal conflict (see Túpac Amaru Revolutionary Movement and Shining Path). Revolutionary movements and right-wing military dictatorships have been common, but starting in the 1980s a wave of democratization came through the continent, and democratic rule is widespread now. Allegations of corruption remain common, and several nations have seen crises which have forced the resignation of their presidents, although normal civilian succession has continued. International indebtedness became a notable problem, as most recently illustrated by Argentina's default in the early 21st century. In recent years, South American governments have drifted to the left, with socialist leaders being elected in Chile, Bolivia, Brazil, Venezuela, and a leftist president in Argentina and Uruguay. Despite the move to the left, South America is still largely capitalist. With the founding of the Union of South American Nations, South America has started down the road of economic integration, with plans for political integration in the European Union style. See also History of the west coast of North America History of the Caribbean History of Latin America History of the Southern United States American Old West History of New England Spanish Empire Portuguese Empire List of oldest buildings in the Americas Notes Further reading Boyer, Paul S. The Oxford Companion to United States History (2001) excerpt and text search; online at many libraries Carnes, Mark C., and John A. Garraty. The American Nation: A History of the United States: AP Edition (2008) Egerton, Douglas R. et al. 
The Atlantic World: A History, 1400–1888 (2007), college textbook; 530pp
Elliott, John H. Empires of the Atlantic World: Britain and Spain in America 1492–1830 (2007), 608pp, excerpt and text search, advanced synthesis
Hardwick, Susan W., Fred M. Shelley, and Donald G. Holtgrieve. The Geography of North America: Environment, Political Economy, and Culture (2007)
Jacobs, Heidi Hayes, and Michal L. LeVasseur. World Studies: Latin America: Geography – History – Culture (2007)
Johansen, Bruce E. The Native Peoples of North America: A History (2006)
Kaltmeier, Olaf, Josef Raab, Michael Stewart Foley, Alice Nash, Stefan Rinke, and Mario Rufer. The Routledge Handbook to the History and Society of the Americas. New York: Routledge (2019)
Keen, Benjamin, and Keith Haynes. A History of Latin America (2008)
Kennedy, David M., Lizabeth Cohen, and Thomas Bailey. The American Pageant (2 vol., 2008), U.S. history
The Canadian Encyclopedia
Morton, Desmond. A Short History of Canada, 5th ed. (2001)
Veblen, Thomas T., Kenneth R. Young, and Antony R. Orme. The Physical Geography of South America (2007)
History of Africa
The history of Africa begins with the emergence of hominids, archaic humans and, around 300,000–250,000 years ago, anatomically modern humans (Homo sapiens) in East Africa, and continues unbroken into the present as a patchwork of diverse and politically developing nation states. The earliest known recorded history arose in Ancient Egypt, and later in Nubia, the Sahel, the Maghreb and the Horn of Africa. Following the desertification of the Sahara, North African history became entwined with the Middle East and Southern Europe, while the Bantu expansion swept from modern-day Cameroon (Central Africa) across much of the sub-Saharan continent in waves between around 1000 BC and 1 AD, creating a linguistic commonality across much of the central and southern continent. During the Middle Ages, Islam spread west from Arabia to Egypt, crossing the Maghreb and the Sahel. Some notable pre-colonial states and societies in Africa include the Ajuran Empire, Bachwezi Empire, D'mt, Adal Sultanate, Alodia, Warsangali Sultanate, Buganda Kingdom, Kingdom of Nri, Nok culture, Mali Empire, Bono State, Songhai Empire, Benin Empire, Oyo Empire, Kingdom of Lunda (Punu-yaka), Ashanti Empire, Ghana Empire, Mossi Kingdoms, Mutapa Empire, Kingdom of Mapungubwe, Kingdom of Sine, Kingdom of Sennar, Kingdom of Saloum, Kingdom of Baol, Kingdom of Cayor, Kingdom of Zimbabwe, Kingdom of Kongo, Empire of Kaabu, Kingdom of Ile Ife, Ancient Carthage, Numidia, Mauretania, and the Aksumite Empire. At its peak, prior to European colonialism, it is estimated that Africa had up to 10,000 different states and autonomous groups with distinct languages and customs. From the late 15th century, Europeans joined the slave trade. That includes the triangular trade, with the Portuguese initially acquiring slaves through trade and later by force as part of the Atlantic slave trade. They transported enslaved West, Central, and Southern Africans overseas.
Subsequently, European colonization of Africa developed rapidly from around 10% (1870) to over 90% (1914) in the Scramble for Africa (1881–1914). However, following struggles for independence in many parts of the continent, as well as a weakened Europe after the Second World War, decolonization took place across the continent, culminating in the 1960 Year of Africa. Disciplines such as the recording of oral history, historical linguistics, archaeology and genetics have been vital in rediscovering the great African civilizations of antiquity.

Prehistory

Paleolithic

The first known hominids evolved in Africa. According to paleontology, the early hominids' skull anatomy was similar to that of the gorilla and the chimpanzee, great apes that also evolved in Africa, but the hominids had adopted a bipedal locomotion which freed their hands. This gave them a crucial advantage, enabling them to live in both forested areas and on the open savanna at a time when Africa was drying up and the savanna was encroaching on forested areas. This would have occurred 10 to 5 million years ago, but these claims are controversial because biologists and geneticists place the appearance of modern humans within roughly the last 70,000 to 200,000 years. By 4 million years ago, several australopithecine hominid species had developed throughout Southern, Eastern and Central Africa. They were tool users and toolmakers. They scavenged for meat and were omnivores. By approximately 3.3 million years ago, primitive stone tools were first used to scavenge kills made by other predators and to harvest carrion and marrow from their bones. In hunting, Homo habilis was probably not capable of competing with large predators and was still more prey than hunter. H. habilis probably did steal eggs from nests and may have been able to catch small game and weakened larger prey (cubs and older animals). The tools were classed as Oldowan.
Around 1.8 million years ago, Homo ergaster first appeared in the fossil record in Africa. From Homo ergaster, Homo erectus evolved 1.5 million years ago. Some of the earlier representatives of this species were still fairly small-brained and used primitive stone tools, much like H. habilis. The brain later grew in size, and H. erectus eventually developed a more complex stone tool technology called the Acheulean. Possibly the first hunters, H. erectus mastered the art of making fire and was the first hominid to leave Africa, colonizing most of Afro-Eurasia and perhaps later giving rise to Homo floresiensis. Although some recent writers have suggested that Homo georgicus was the first and primary hominid ever to live outside Africa, many scientists consider H. georgicus to be an early and primitive member of the H. erectus species. The fossil record shows Homo sapiens (also known as "modern humans" or "anatomically modern humans") living in Africa by about 350,000–260,000 years ago. The earliest known Homo sapiens fossils include the Jebel Irhoud remains from Morocco (ca. 315,000 years ago), the Florisbad Skull from South Africa (ca. 259,000 years ago), and the Omo remains from Ethiopia (ca. 233,000 years ago). Scientists have suggested that Homo sapiens may have arisen between 350,000 and 260,000 years ago through a merging of populations in East Africa and South Africa. Evidence of a variety of behaviors indicative of behavioral modernity dates to the African Middle Stone Age, associated with early Homo sapiens and their emergence. Abstract imagery, widened subsistence strategies, and other "modern" behaviors have been discovered from that period in Africa, especially South, North, and East Africa. The Blombos Cave site in South Africa, for example, is famous for rectangular slabs of ochre engraved with geometric designs. Using multiple dating techniques, the site was confirmed to be between around 100,000 and 75,000 years old, with the engraved ochre dating to about 77,000 years ago.
Ostrich egg shell containers engraved with geometric designs dating to 60,000 years ago were found at Diepkloof, South Africa. Beads and other personal ornamentation have been found in Morocco which might be as much as 130,000 years old; as well, the Cave of Hearths in South Africa has yielded a number of beads dating from significantly prior to 50,000 years ago, and shell beads dating to about 75,000 years ago have been found at Blombos Cave, South Africa. Specialized projectile weapons as well have been found at various sites in Middle Stone Age Africa, including bone and stone arrowheads at South African sites such as Sibudu Cave (along with an early bone needle also found at Sibudu) dating approximately 60,000–70,000 years ago, and bone harpoons at the Central African site of Katanda dating to about 90,000 years ago. Evidence also exists for the systematic heat treating of silcrete stone to increase its flake-ability for the purpose of toolmaking, beginning approximately 164,000 years ago at the South African site of Pinnacle Point and becoming common there for the creation of microlithic tools at about 72,000 years ago. Early stone-tipped projectile weapons (a characteristic tool of Homo sapiens), the stone tips of javelins or throwing spears, were discovered in 2013 at the Ethiopian site of Gademotta, and date to around 279,000 years ago. In 2008, an ochre processing workshop likely for the production of paints was uncovered dating to ca. 100,000 years ago at Blombos Cave, South Africa. Analysis shows that a liquefied pigment-rich mixture was produced and stored in two abalone shells, and that ochre, bone, charcoal, grindstones and hammer-stones also formed a composite part of the toolkits.
Evidence for the complexity of the task includes procuring and combining raw materials from various sources (implying they had a mental template of the process they would follow), possibly using pyrotechnology to facilitate fat extraction from bone, using a probable recipe to produce the compound, and the use of shell containers for mixing and storage for later use. Modern behaviors, such as the making of shell beads, bone tools and arrows, and the use of ochre pigment, are evident at a Kenyan site by 78,000–67,000 years ago. Expanding subsistence strategies beyond big-game hunting and the consequential diversity in tool types have been noted as signs of behavioral modernity. A number of South African sites have shown an early reliance on aquatic resources from fish to shellfish. Pinnacle Point, in particular, shows exploitation of marine resources as early as 120,000 years ago, perhaps in response to more arid conditions inland. Establishing a reliance on predictable shellfish deposits, for example, could reduce mobility and facilitate complex social systems and symbolic behavior. Blombos Cave and Site 440 in Sudan both show evidence of fishing as well. Taphonomic changes in fish skeletons from Blombos Cave have been interpreted as capture of live fish, clearly an intentional human behavior. Humans in North Africa (Nazlet Sabaha, Egypt) are known to have dabbled in chert mining, as early as ≈100,000 years ago, for the construction of stone tools. Evidence was found in 2018, dating to about 320,000 years ago, at the Kenyan site of Olorgesailie, of the early emergence of modern behaviors including: long-distance trade networks (involving goods such as obsidian), the use of pigments, and the possible making of projectile points.
It is observed by the authors of three 2018 studies on the site that the evidence of these behaviors is approximately contemporary to the earliest known Homo sapiens fossil remains from Africa (such as at Jebel Irhoud and Florisbad), and they suggest that complex and modern behaviors began in Africa around the time of the emergence of Homo sapiens. In 2019, further evidence of early complex projectile weapons in Africa was found at Adouma, Ethiopia, dated 80,000–100,000 years ago, in the form of points considered likely to belong to darts delivered by spear throwers. Around 65–50,000 years ago, the species' expansion out of Africa launched the colonization of the planet by modern human beings. By 10,000 BC, Homo sapiens had spread to most corners of Afro-Eurasia. Their dispersals are traced by linguistic, cultural and genetic evidence. The earliest physical evidence of astronomical activity may be a lunar calendar found on the Ishango bone, dated to between 23,000 and 18,000 BC, from what is now the Democratic Republic of the Congo. However, this interpretation of the object's purpose is disputed. Scholars have argued that warfare was absent throughout much of humanity's prehistoric past, and that it emerged from more complex political systems as a result of sedentism, agricultural farming, etc. However, the findings at the site of Nataruk in Turkana County, Kenya, where the remains of 27 individuals who died as the result of an intentional attack by another group 10,000 years ago were found, suggest that inter-human conflict has a much longer history.

Emergence of agriculture and desertification of the Sahara

Around 16,000 BC, from the Red Sea Hills to the northern Ethiopian Highlands, nuts, grasses and tubers were being collected for food. By 13,000 to 11,000 BC, people began collecting wild grains. This spread to Western Asia, which domesticated its wild grains, wheat and barley.
Between 10,000 and 8000 BC, Northeast Africa was cultivating wheat and barley and raising sheep and cattle from Southwest Asia. A wet climatic phase in Africa turned the Ethiopian Highlands into a mountain forest. Omotic speakers domesticated enset around 6500–5500 BC. Around 7000 BC, the settlers of the Ethiopian highlands domesticated donkeys, and by 4000 BC domesticated donkeys had spread to Southwest Asia. Cushitic speakers, partially turning away from cattle herding, domesticated teff and finger millet between 5500 and 3500 BC. During the 11th millennium BP, pottery was independently invented in Africa, with the earliest pottery there dating to about 9,400 BC from central Mali. It soon spread throughout the southern Sahara and Sahel. In the steppes and savannahs of the Sahara and Sahel in Northern West Africa, the Nilo-Saharan speakers and Mandé peoples started to collect and domesticate wild millet, African rice and sorghum between 8000 and 6000 BC. Later, gourds, watermelons, castor beans, and cotton were also collected and domesticated. The people started capturing wild cattle and holding them in circular thorn hedges, resulting in domestication. They also started making pottery and built stone settlements (e.g., Tichitt, Oualata). Fishing, using bone-tipped harpoons, became a major activity in the numerous streams and lakes formed from the increased rains. Mande peoples have been credited with the independent development of agriculture about 3000–4000 BC. In West Africa, the wet phase ushered in an expanding rainforest and wooded savanna from Senegal to Cameroon. Between 9000 and 5000 BC, Niger–Congo speakers domesticated the oil palm and raffia palm. Two seed plants, black-eyed peas and voandzeia (African groundnuts), were domesticated, followed by okra and kola nuts. Since most of the plants grew in the forest, the Niger–Congo speakers invented polished stone axes for clearing forest. 
Most of Southern Africa was occupied by pygmy peoples and Khoisan who engaged in hunting and gathering. Some of the oldest rock art was produced by them. For several hundred thousand years the Sahara has alternated between desert and savanna grassland in a 41,000-year cycle caused by changes ("precession") in the orientation of the Earth's axis as it rotates around the Sun, which change the location of the North African monsoon. When the North African monsoon is at its strongest, annual precipitation and subsequent vegetation in the Sahara region increase, resulting in conditions commonly referred to as the "green Sahara". For a relatively weak North African monsoon, the opposite is true, with decreased annual precipitation and less vegetation resulting in a phase of the Sahara climate cycle known as the "desert Sahara". The Sahara has been a desert for several thousand years, and is expected to become green again in about 15,000 years' time (around 17,000 AD). Just prior to Saharan desertification, the communities that developed south of Egypt, in what is now Sudan, were full participants in the Neolithic revolution and lived a settled to semi-nomadic lifestyle, with domesticated plants and animals. It has been suggested that megaliths found at Nabta Playa are examples of the world's first known archaeoastronomical devices, predating Stonehenge by some 1,000 years. The sociocultural complexity observed at Nabta Playa and expressed by different levels of authority within the society there has been suggested as forming the basis for the structure of both the Neolithic society at Nabta and the Old Kingdom of Egypt. By 5000 BC, Africa entered a dry phase, and the climate of the Sahara region gradually became drier. The population trekked out of the Sahara region in all directions, including towards the Nile Valley below the Second Cataract, where they made permanent or semipermanent settlements.
A major climatic recession occurred, lessening the heavy and persistent rains in Central and Eastern Africa.

Central Africa

Archaeological findings in Central Africa have been discovered dating back over 100,000 years. Extensive walled sites and settlements have recently been found in Zilum, Chad, approximately southwest of Lake Chad, dating to the first millennium BC. Trade and improved agricultural techniques supported more sophisticated societies, leading to the early civilizations of Sao, Kanem, Bornu, Shilluk, Baguirmi, and Wadai. Around 1,000 BC, Bantu migrants had reached the Great Lakes Region in Central Africa. Halfway through the first millennium BC, the Bantu had also settled as far south as what is now Angola.

Metallurgy

Evidence of the early smelting of the metals lead, copper, and bronze dates from the fourth millennium BC. Egyptians smelted copper during the predynastic period, and bronze came into use after 3,000 BC at the latest in Egypt and Nubia. Nubia became a major source of copper as well as of gold. The use of gold and silver in Egypt dates back to the predynastic period. In the Aïr Mountains of present-day Niger people smelted copper independently of developments in the Nile valley between 3,000 and 2,500 BC. They used a process unique to the region, suggesting that the technology was not brought in from outside; it became more mature by about 1,500 BC. By the 1st millennium BC iron working had reached Northwestern Africa, Egypt, and Nubia. Zangato and Holl document evidence of iron-smelting in the Central African Republic and Cameroon that may date back to 3,000 to 2,500 BC. Assyrians using iron weapons pushed Nubians out of Egypt in 670 BC, after which the use of iron became widespread in the Nile valley. The theory that iron spread to Sub-Saharan Africa via the Nubian city of Meroe is no longer widely accepted, and some researchers believe that sub-Saharan Africans invented iron metallurgy independently.
Metalworking in West Africa has been dated as early as 2,500 BC at Egaro, west of the Termit in Niger, and iron working was practiced there by 1,500 BC. Iron smelting has been dated to 2,000 BC in southeast Nigeria. Central Africa provides possible evidence of iron working as early as the 3rd millennium BC. Iron smelting developed in the area between Lake Chad and the African Great Lakes between 1,000 and 600 BC, and in West Africa around 2,000 BC, long before the technology reached Egypt. Before 500 BC, the Nok culture on the Jos Plateau was already smelting iron. Archaeological sites containing iron-smelting furnaces and slag have been excavated at sites in the Nsukka region of southeast Nigeria in Igboland: dating to 2,000 BC at the site of Lejja (Eze-Uzomaka 2009) and to 750 BC at the site of Opi (Holl 2009). The site of Gbabiri (in the Central African Republic) has also yielded evidence of iron metallurgy, from a reduction furnace and blacksmith workshop, with earliest dates of 896–773 BC and 907–796 BC respectively.

Antiquity

The ancient history of North Africa is inextricably linked to that of the Ancient Near East. This is particularly true of Ancient Egypt and Nubia. In the Horn of Africa the Kingdom of Aksum ruled modern-day Eritrea, northern Ethiopia and the coastal area of the western part of the Arabian Peninsula. The Ancient Egyptians established ties with the Land of Punt in 2,350 BC. Punt was a trade partner of Ancient Egypt and it is believed that it was located in modern-day Somalia, Djibouti or Eritrea. Phoenician cities such as Carthage were part of the Mediterranean Iron Age and classical antiquity. Sub-Saharan Africa developed more or less independently in those times.

Ancient Egypt

After the desertification of the Sahara, settlement became concentrated in the Nile Valley, where numerous sacral chiefdoms appeared.
The regions with the largest population pressure were in the Nile Delta region of Lower Egypt, in Upper Egypt, and also along the second and third cataracts of the Dongola Reach of the Nile in Nubia. This population pressure and growth was brought about by the cultivation of southwest Asian crops, including wheat and barley, and the raising of sheep, goats, and cattle. Population growth led to competition for farm land and the need to regulate farming. Regulation was established by the formation of bureaucracies among sacral chiefdoms. The first and most powerful of the chiefdoms was Ta-Seti, founded around 3,500 BC. The idea of sacral chiefdom spread throughout Upper and Lower Egypt. Later consolidation of the chiefdoms into broader political entities began to occur in Upper and Lower Egypt, culminating in the unification of Egypt into one political entity by Narmer (Menes) in 3,100 BC. Instead of being viewed as a sacral chief, he became a divine king. The henotheism, or worship of a single god within a polytheistic system, practiced in the sacral chiefdoms along Upper and Lower Egypt, became the polytheistic Ancient Egyptian religion. Bureaucracies became more centralized under the pharaohs, run by viziers, governors, tax collectors, generals, artists, and technicians. They engaged in tax collecting, organizing of labor for major public works, and building irrigation systems, pyramids, temples, and canals. During the Fourth Dynasty (2,620–2,480 BC), long-distance trade was developed, with the Levant for timber, with Nubia for gold and skins, with Punt for frankincense, and also with the western Libyan territories. For most of the Old Kingdom, Egypt developed her fundamental systems, institutions and culture, always through the central bureaucracy and by the divinity of the Pharaoh. After the fourth millennium BC, Egypt started to extend direct military and political control over her southern and western neighbors.
By 2,200 BC, the Old Kingdom's stability was undermined by rivalry among the governors of the nomes who challenged the power of pharaohs and by invasions of Asiatics into the Nile Delta. The First Intermediate Period had begun, a time of political division and uncertainty. The Middle Kingdom of Egypt arose when Mentuhotep II of the Eleventh Dynasty unified Egypt once again between 2041 and 2016 BC, beginning with his conquest of the Tenth Dynasty in 2041 BC. Pyramid building resumed, long-distance trade re-emerged, and the center of power moved from Memphis to Thebes. Connections with the southern regions of Kush, Wawat and Irthet at the second cataract were made stronger. Then came the Second Intermediate Period, with the invasion of the Hyksos on horse-drawn chariots and utilizing bronze weapons, a technology heretofore unseen in Egypt. Horse-drawn chariots soon spread to the west in the inhabitable Sahara and North Africa. The Hyksos failed to hold on to their Egyptian territories and were absorbed by Egyptian society. This eventually led to one of Egypt's most powerful phases, the New Kingdom (1,580–1,080 BC), with the Eighteenth Dynasty. Egypt became a superpower controlling Nubia and Judea while exerting political influence on the Libyans to the West and on the Mediterranean. As before, the New Kingdom ended with invasion from the west by Libyan princes, leading to the Third Intermediate Period. Beginning with Shoshenq I, the Twenty-second Dynasty was established. It ruled for two centuries. To the south, Nubian independence and strength was being reasserted. This reassertion led to the conquest of Egypt by Nubia, begun by Kashta and completed by Piye (Piankhy, 751–730 BC) and Shabaka (716–695 BC). This was the birth of the Twenty-fifth Dynasty of Egypt. The Nubians tried to re-establish Egyptian traditions and customs. They ruled Egypt for a hundred years. This was ended by an Assyrian invasion, with Taharqa experiencing the full might of Assyrian iron weapons.
The Nubian pharaoh Tantamani was the last of the Twenty-fifth dynasty. When the Assyrians and Nubians left, a new Twenty-sixth Dynasty emerged from Sais. It lasted until 525 BC, when Egypt was invaded by the Persians. Unlike the Assyrians, the Persians stayed. In 332 BC, Egypt was conquered by Alexander the Great. This was the beginning of the Ptolemaic dynasty, which ended with Roman conquest in 30 BC. Pharaonic Egypt had come to an end.

Nubia

Around 3,500 BC, one of the first sacral kingdoms to arise in the Nile was Ta-Seti, located in northern Nubia. Ta-Seti was a powerful sacral kingdom in the Nile Valley at the 1st and 2nd cataracts that exerted an influence over nearby chiefdoms; pictorial representations suggest it may have ruled over parts of Upper Egypt. Ta-Seti traded as far as Syro-Palestine, as well as with Egypt. Ta-Seti exported gold, copper, ostrich feathers, ebony and ivory to the Old Kingdom. By the 32nd century BC, Ta-Seti was in decline. After the unification of Egypt by Narmer in 3,100 BC, Ta-Seti was invaded by the Pharaoh Hor-Aha of the First Dynasty, destroying the final remnants of the kingdom. Ta-Seti is affiliated with the A-Group Culture known to archaeology. Small sacral kingdoms continued to dot the Nubian portion of the Nile for centuries after 3,000 BC. Around the latter part of the third millennium, there was further consolidation of the sacral kingdoms. Two kingdoms in particular emerged: the Sai kingdom, immediately south of Egypt, and the Kingdom of Kerma at the third cataract. Sometime around the 18th century BC, the Kingdom of Kerma conquered the Kingdom of Sai, becoming a serious rival to Egypt. Kerma occupied a territory from the first cataract to the confluence of the Blue Nile, White Nile, and Atbarah River. About 1,575 to 1,550 BC, during the latter part of the Seventeenth Dynasty, the Kingdom of Kerma invaded Egypt. The Kingdom of Kerma allied itself with the Hyksos invasion of Egypt.
Egypt eventually re-energized under the Eighteenth Dynasty and conquered the Kingdom of Kerma or Kush, ruling it for almost 500 years. The Kushites were Egyptianized during this period. By 1100 BC, the Egyptians had withdrawn from Kush. The region regained independence and reasserted its culture. Kush built a new religion around Amun and made Napata its spiritual center. In 730 BC, the Kingdom of Kush invaded Egypt, taking over Thebes and beginning the Nubian Empire. The empire extended from Palestine to the confluence of the Blue Nile, the White Nile, and River Atbara. In 664 BC, the Kushites were expelled from Egypt by iron-wielding Assyrians. Later, the administrative capital was moved from Napata to Meröe, developing a new Nubian culture. Initially, Meroites were highly Egyptianized, but they subsequently began to take on distinctive features. Nubia became a center of iron-making and cotton cloth manufacturing. Egyptian writing was replaced by the Meroitic alphabet. The lion god Apedemak was added to the Egyptian pantheon of gods. Trade links to the Red Sea increased, linking Nubia with Mediterranean Greece. Its architecture and art diversified, with pictures of lions, ostriches, giraffes, and elephants. Eventually, with the rise of Aksum, Nubia's trade links were broken and it suffered environmental degradation from the tree cutting required for iron production. In 350 AD, the Aksumite king Ezana brought Meröe to an end.

Carthage

The Egyptians referred to the people west of the Nile, ancestral to the Berbers, as Libyans. The Libyans were agriculturalists like the Mauri of Morocco and the Numidians of central and eastern Algeria and Tunis. They were also nomadic, having the horse, and occupied the arid pastures and desert, like the Gaetuli. Berber desert nomads were typically in conflict with Berber coastal agriculturalists. The Phoenicians were Mediterranean seamen in constant search for valuable metals such as copper, gold, tin, and lead.
They began to populate the North African coast with settlements, trading and mixing with the native Berber population. In 814 BC, Phoenicians from Tyre established the city of Carthage. By 600 BC, Carthage had become a major trading entity and power in the Mediterranean, largely through trade with tropical Africa. Carthage's prosperity fostered the growth of the Berber kingdoms, Numidia and Mauretania. Around 500 BC, Carthage provided a strong impetus for trade with Sub-Saharan Africa. Berber middlemen, who had maintained contacts with Sub-Saharan Africa since the desert had desiccated, utilized pack animals to transfer products from oasis to oasis. Danger lurked from the Garamantes of the Fezzan, who raided caravans. Salt and metal goods were traded for gold, slaves, beads, and ivory. The Carthaginians were rivals to the Greeks and Romans. Carthage fought the Punic Wars, three wars with Rome: the First Punic War (264 to 241 BC), over Sicily; the Second Punic War (218 to 201 BC), in which Hannibal invaded Europe; and the Third Punic War (149 to 146 BC). Carthage lost the first two wars, and in the third it was destroyed, becoming the Roman province of Africa, with the Berber Kingdom of Numidia assisting Rome. The Roman province of Africa became a major agricultural supplier of wheat, olives, and olive oil to imperial Rome via exorbitant taxation. Two centuries later, Rome brought the Berber kingdoms of Numidia and Mauretania under its authority. In the 420s AD, Vandals invaded North Africa and Rome lost her territories; subsequently, the Berber kingdoms regained their independence. Christianity gained a foothold in Africa at Alexandria in the 1st century AD and spread to Northwest Africa. By 313 AD, with the Edict of Milan, all of Roman North Africa was Christian. Egyptians adopted Monophysite Christianity and formed the independent Coptic Church. Berbers adopted Donatist Christianity. Both groups refused to accept the authority of the Roman Catholic Church.
Role of the Berbers

As Carthaginian power grew, its impact on the indigenous population increased dramatically. Berber civilization was already at a stage in which agriculture, manufacturing, trade, and political organization supported several states. Trade links between Carthage and the Berbers in the interior grew, but territorial expansion also resulted in the enslavement or military recruitment of some Berbers and in the extraction of tribute from others. By the early 4th century BC, Berbers formed one of the largest elements, with Gauls, of the Carthaginian army. The Mercenary War (241–238 BC) was a rebellion instigated by mercenary soldiers of Carthage and their African allies; Berber soldiers joined after going unpaid following the defeat of Carthage in the First Punic War. The Berbers succeeded in obtaining control of much of Carthage's North African territory, and they minted coins bearing the name Libyan, used in Greek to describe natives of North Africa. The Carthaginian state declined because of successive defeats by the Romans in the Punic Wars; in 146 BC the city of Carthage was destroyed. As Carthaginian power waned, the influence of Berber leaders in the hinterland grew. By the 2nd century BC, several large but loosely administered Berber kingdoms had emerged. Two of them were established in Numidia, behind the coastal areas controlled by Carthage. West of Numidia lay Mauretania, which extended across the Moulouya River in Morocco to the Atlantic Ocean. The high point of Berber civilization, unequaled until the coming of the Almohads and the Almoravid dynasty more than a millennium later, was reached during the reign of Masinissa in the 2nd century BC. After Masinissa's death in 148 BC, the Berber kingdoms were divided and reunited several times. Masinissa's line survived until 24 AD, when the remaining Berber territory was annexed to the Roman Empire.
Macrobia and the Barbari City States

Macrobia was an ancient kingdom situated in the Horn of Africa (present-day Somalia); it is first mentioned in the 5th century BC. According to Herodotus' account, the Persian Emperor Cambyses II, upon his conquest of Egypt (525 BC), sent ambassadors to Macrobia, bringing luxury gifts for the Macrobian king to entice his submission. The Macrobian ruler, who was elected based at least in part on stature, replied instead with a challenge for his Persian counterpart in the form of an unstrung bow: if the Persians could manage to string it, they would have the right to invade his country; but until then, they should thank the gods that the Macrobians never decided to invade their empire. The Macrobians were a regional power reputed for their advanced architecture and gold wealth, which was so plentiful that they shackled their prisoners in golden chains.

After the collapse of Macrobia, several wealthy ancient city-states, such as Opone, Essina, Sarapion, Nikon, Malao, Damo and Mosylon near Cape Guardafui, would emerge from the 1st millennium BC to 500 AD to compete with the Sabaeans, Parthians and Axumites for the wealthy Indo-Greco-Roman trade and flourish along the Somali coast. They developed a lucrative trading network in a region collectively known in the Periplus of the Erythraean Sea as Barbaria.

Roman North Africa

"Increases in urbanization and in the area under cultivation during Roman rule caused wholesale dislocations of the Berber society, forcing nomad tribes to settle or to move from their traditional rangelands. Sedentary tribes lost their autonomy and connection with the land. Berber opposition to the Roman presence was nearly constant. The Roman emperor Trajan established a frontier in the south by encircling the Aurès and Nemencha mountains and building a line of forts from Vescera (modern Biskra) to Ad Majores (Henchir Besseriani, southeast of Biskra).
The defensive line extended at least as far as Castellum Dimmidi (modern Messaâd, southwest of Biskra), Roman Algeria's southernmost fort. Romans settled and developed the area around Sitifis (modern Sétif) in the 2nd century, but farther west the influence of Rome did not extend beyond the coast and principal military roads until much later." The Roman military presence in North Africa remained relatively small, consisting of about 28,000 troops and auxiliaries in Numidia and the two Mauretanian provinces. Starting in the 2nd century AD, these garrisons were manned mostly by local inhabitants.

Aside from Carthage, urbanization in North Africa came in part with the establishment of settlements of veterans under the Roman emperors Claudius (reigned 41–54), Nerva (96–98), and Trajan (98–117). In Algeria such settlements included Tipasa, Cuicul or Curculum (modern Djemila, northeast of Sétif), Thamugadi (modern Timgad, southeast of Sétif), and Sitifis (modern Sétif). The prosperity of most towns depended on agriculture. Called the "granary of the empire", North Africa became one of the largest exporters of grain in the empire, shipping to provinces that did not produce enough cereals, such as Italy and Greece. Other crops included fruit, figs, grapes, and beans. By the 2nd century AD, olive oil rivaled cereals as an export item.

The beginnings of the Roman imperial decline seemed less serious in North Africa than elsewhere. However, uprisings did take place. In 238 AD, landowners rebelled unsuccessfully against imperial fiscal policies. Sporadic tribal revolts in the Mauretanian mountains followed from 253 to 288, during the Crisis of the Third Century. The towns also suffered economic difficulties, and building activity almost ceased.

The towns of Roman North Africa had a substantial Jewish population. Some Jews had been deported from Judea or Palestine in the 1st and 2nd centuries AD for rebelling against Roman rule; others had come earlier with Punic settlers.
In addition, a number of Berber tribes had converted to Judaism. Christianity arrived in the 2nd century and soon gained converts in the towns and among slaves. More than eighty bishops, some from distant frontier regions of Numidia, attended the Council of Carthage in 256. By the end of the 4th century, the settled areas had become Christianized, and some Berber tribes had converted en masse.

A division in the church that came to be known as the Donatist heresy began in 313 among Christians in North Africa. The Donatists stressed the holiness of the church and refused to accept the authority of those who had surrendered the scriptures, when these were forbidden under the Emperor Diocletian (reigned 284–305), to administer the sacraments. The Donatists also opposed the involvement of Constantine the Great (reigned 306–337) in church affairs, in contrast to the majority of Christians, who welcomed official imperial recognition.

The occasionally violent Donatist controversy has been characterized as a struggle between opponents and supporters of the Roman system. The most articulate North African critic of the Donatist position, which came to be called a heresy, was Augustine, bishop of Hippo Regius. Augustine maintained that the unworthiness of a minister did not affect the validity of the sacraments because their true minister was Jesus Christ. In his sermons and books Augustine, who is considered a leading exponent of Christian dogma, evolved a theory of the right of orthodox Christian rulers to use force against schismatics and heretics. Although the dispute was resolved by a decision of an imperial commission in Carthage in 411, Donatist communities continued to exist as late as the 6th century.

A decline in trade weakened Roman control. Independent kingdoms emerged in mountainous and desert areas, towns were overrun, and Berbers, who had previously been pushed to the edges of the Roman Empire, returned.
During the Vandalic War, Belisarius, general of the Byzantine emperor Justinian I based in Constantinople, landed in North Africa in 533 with 16,000 men and within a year destroyed the Vandal Kingdom. Local opposition delayed full Byzantine control of the region for twelve years, however, and when imperial control came, it was but a shadow of the control exercised by Rome. Although an impressive series of fortifications was built, Byzantine rule was compromised by official corruption, incompetence, military weakness, and a lack of concern in Constantinople for African affairs, which made the region an easy target for the Arabs during the early Muslim conquests. As a result, many rural areas reverted to Berber rule.

Aksum

The earliest state in Eritrea and northern Ethiopia, Dʿmt, dates from around the 8th and 7th centuries BC. D'mt traded through the Red Sea with Egypt and the Mediterranean, providing frankincense. Between the 5th and 3rd centuries BC, D'mt declined, and several successor states took its place. Later there was greater trade with South Arabia, mainly with the port of Saba. Adulis became an important commercial center for the Ethiopian Highlands. The interaction of the peoples in the two regions, the southern Arabian Sabaeans and the northern Ethiopians, resulted in the Ge'ez culture and language and the eventual development of the Ge'ez script.

Trade links increased and expanded from the Red Sea to the Mediterranean, with Egypt, Israel, Phoenicia, Greece, and Rome, to the Black Sea, and to Persia, India, and China. Aksum was known throughout those lands. By the 5th century BC, the region was very prosperous, exporting ivory, hippopotamus hides, gold dust, spices, and live elephants. It imported silver, gold, olive oil, and wine. Aksum manufactured glass crystal, brass, and copper for export. A powerful Aksum emerged, unifying parts of eastern Sudan, northern Ethiopia (Tigre), and Eritrea.
Its kings built stone palatial buildings and were buried under megalithic monuments. By 300 AD, Aksum was minting its own coins in silver and gold. In 331 AD, King Ezana (reigned 320–350 AD) converted to Miaphysite Christianity, which holds that Christ has one united divine-human nature, reportedly through Frumentius and Aedesius, who had become stranded on the Red Sea coast. Some scholars believe the process was more complex and gradual than a simple conversion. Around 350, about the time Ezana sacked Meroe, the Syrian monastic tradition took root within the Ethiopian church.

In the 6th century Aksum was powerful enough to add Saba on the Arabian peninsula to her empire. At the end of the 6th century, the Sasanian Empire pushed Aksum out of the peninsula. With the spread of Islam through Western Asia and Northern Africa, Aksum's trading networks in the Mediterranean faltered. The Red Sea trade diminished as it was diverted to the Persian Gulf and dominated by Arabs, causing Aksum to decline. By 800 AD, the capital had been moved south into the interior highlands, and Aksum was much diminished.

West Africa

In the western Sahel the rise of settled communities occurred largely as a result of the domestication of millet and of sorghum. Archaeology points to sizable urban populations in West Africa beginning in the 2nd millennium BC. Symbiotic trade relations developed before the trans-Saharan trade, in response to the opportunities afforded by north–south diversity in ecosystems across deserts, grasslands, and forests. The agriculturists received salt from the desert nomads. The desert nomads acquired meat and other foods from pastoralists and farmers of the grasslands and from fishermen on the Niger River. The forest-dwellers provided furs and meat.

Dhar Tichitt and Oualata in present-day Mauritania figure prominently among the early urban centers, dated to 2000 BC. About 500 stone settlements litter the region in the former savannah of the Sahara. Their inhabitants fished and grew millet.
Augustin Holl has suggested that the Soninke of the Mandé peoples were likely responsible for constructing such settlements. Around 300 BC the region became more desiccated and the settlements began to decline, their inhabitants most likely relocating to Koumbi Saleh. Architectural evidence and the comparison of pottery styles suggest that Dhar Tichitt was related to the subsequent Ghana Empire. Djenné-Djenno (in present-day Mali) was settled around 300 BC, and the town grew to house a sizable Iron Age population, as evidenced by crowded cemeteries. Living structures were made of sun-dried mud. By 250 BC Djenné-Djenno had become a large, thriving market town. Towns similar to Djenné-Djenno also developed at the site of Dia, also in Mali along the Niger River, from around 900 BC.

Farther south, in central Nigeria, around 1500 BC, the Nok culture developed on the Jos Plateau. It was a highly centralized community. The Nok people produced lifelike representations in terracotta, including human heads and human figures, elephants, and other animals. By 500 BC they were smelting iron. By 200 AD the Nok culture had vanished. Based on stylistic similarities with the Nok terracottas, the bronze figurines of the Yoruba kingdom of Ife and those of the Bini kingdom of Benin are now believed to be continuations of the traditions of the earlier Nok culture.

Bantu expansion

The Bantu expansion involved a significant movement of people in African history and in the settling of the continent. People speaking Bantu languages (a branch of the Niger–Congo family) began in the second millennium BC to spread from Cameroon eastward to the Great Lakes region. In the first millennium BC, Bantu languages spread from the Great Lakes to southern and east Africa. One early movement headed south to the upper Zambezi valley in the 2nd century BC. Then Bantu-speakers pushed westward to the savannahs of present-day Angola and eastward into Malawi, Zambia, and Zimbabwe in the 1st century AD.
The second thrust from the Great Lakes was eastward, 2,000 years ago, expanding to the Indian Ocean coast of Kenya and Tanzania. The eastern group eventually met the southern migrants from the Great Lakes in Malawi, Zambia, and Zimbabwe. Both groups continued southward, with eastern groups continuing to Mozambique and reaching Maputo in the 2nd century AD, and expanding as far as Durban. By the later first millennium AD, the expansion had reached the Great Kei River in present-day South Africa. Sorghum, a major Bantu crop, could not thrive under the winter rainfall of Namibia and the western Cape. Khoisan people inhabited the remaining parts of southern Africa.

Medieval and Early Modern (6th to 18th centuries)

Sao civilization

The Sao civilization flourished from about the sixth century BC to as late as the 16th century AD in Central Africa. The Sao lived by the Chari River south of Lake Chad in territory that later became part of present-day Cameroon and Chad. They are the earliest people to have left clear traces of their presence in the territory of modern Cameroon. Today, several ethnic groups of northern Cameroon and southern Chad – but particularly the Sara people – claim descent from the civilization of the Sao. Sao artifacts show that they were skilled workers in bronze, copper, and iron. Finds include bronze sculptures and terracotta statues of human and animal figures, coins, funerary urns, household utensils, jewelry, highly decorated pottery, and spears. The largest Sao archaeological finds have occurred south of Lake Chad.

Kanem Empire

The Kanem Empire was centered in the Chad Basin. It was known as the Kanem Empire from the 9th century AD onward and lasted as the independent kingdom of Bornu until 1893. At its height it encompassed an area covering not only much of Chad, but also parts of modern southern Libya, eastern Niger, northeastern Nigeria, northern Cameroon, parts of South Sudan and the Central African Republic.
The history of the Empire is mainly known from the Royal Chronicle, or Girgam, discovered in 1851 by the German traveller Heinrich Barth. Kanem rose in the 8th century in the region to the north and east of Lake Chad. Around the 9th century AD, the central Sudanic Empire of Kanem, with its capital at Njimi, was founded by Kanuri-speaking nomads. Kanem arose by engaging in the trans-Saharan trade. It exchanged slaves captured by raiding the south for horses from North Africa, which in turn aided in the acquisition of slaves. The empire later went into decline, shrank, and in the 14th century was defeated by Bilala invaders from the Lake Fitri region.

By the late 11th century, the Islamic Sayfawa (Saifawa) dynasty was founded by Humai (Hummay) ibn Salamna. The Sayfawa Dynasty ruled for 771 years, making it one of the longest-lasting dynasties in human history. In addition to trade, taxation of local farms around Kanem became a source of state income. Kanem reached its peak under Mai (king) Dunama Dibalemi ibn Salma (1210–1248). The empire reportedly was able to field 40,000 cavalry, and it extended from Fezzan in the north to the Sao state in the south. Islam became firmly entrenched in the empire. Pilgrimages to Mecca were common; Cairo had hostels set aside specifically for pilgrims from Kanem.

Bornu Empire

The Kanuri people led by the Sayfuwa migrated to the west and south of the lake, where they established the Bornu Empire. By the late 16th century the Bornu empire had expanded and recaptured the parts of Kanem that had been conquered by the Bulala. Satellite states of Bornu included the Damagaram in the west and Baguirmi to the southeast of Lake Chad. Around 1400, the Sayfawa Dynasty moved its capital to Bornu, a tributary state southwest of Lake Chad, with a new capital at Birni Ngarzagamu. Overgrazing had caused the pastures of Kanem to become too dry. In addition, political rivalry from the Bilala clan was becoming intense.
Moving to Bornu better situated the empire to exploit the trans-Saharan trade and to widen its network in that trade. Links to the Hausa states were also established, providing horses and salt from Bilma in exchange for Bonoman gold. Mai Ali Gazi ibn Dunama (c. 1475 – 1503) defeated the Bilala, reestablishing complete control of Kanem. During the early 16th century, the Sayfawa Dynasty solidified its hold on the Bornu population after much rebellion. In the latter half of the 16th century, Mai Idris Alooma modernized the military, in contrast to the Songhai Empire. Turkish mercenaries were used to train the military. The Sayfawa Dynasty were the first monarchs south of the Sahara to import firearms. The empire controlled all of the Sahel from the borders of Darfur in the east to Hausaland in the west. A friendly relationship was established with the Ottoman Empire via Tripoli, and the Mai exchanged gifts with the Ottoman sultan.

Not much is known about Bornu during the 17th and 18th centuries. During the 18th century, it became a center of Islamic learning. However, Bornu's army became outdated because it did not import new arms, and Kanembu had also begun its decline. The power of the mai was undermined by droughts and famine that were becoming more intense, internal rebellion in the pastoralist north, growing Hausa power, and the importation of firearms, which made warfare bloodier. By 1841, the last mai was deposed, bringing to an end the long-lived Sayfawa Dynasty. In its place, the al-Kanemi dynasty of the shehu rose to power.

Shilluk Kingdom

The Shilluk Kingdom was centered in South Sudan from the 15th century, along a strip of land on the western bank of the White Nile stretching from Lake No to about 12° north latitude. The capital and royal residence was in the town of Fashoda. The kingdom was founded during the mid-15th century AD by its first ruler, Nyikang.
During the 19th century, the Shilluk Kingdom faced decline following military assaults from the Ottoman Empire and later British and Sudanese colonization in Anglo-Egyptian Sudan.

Baguirmi Kingdom

The Kingdom of Baguirmi existed as an independent state during the 16th and 17th centuries southeast of Lake Chad in what is now the country of Chad. Baguirmi emerged to the southeast of the Kanem–Bornu Empire. The kingdom's first ruler was Mbang Birni Besse. Later in his reign, the Bornu Empire conquered the state and made it a tributary.

Wadai Empire

The Wadai Empire was centered on Chad and the Central African Republic from the 17th century. The Tunjur people founded the Wadai Kingdom to the east of Bornu in the 16th century. In the 17th century there was a revolt of the Maba people, who established a Muslim dynasty. At first Wadai paid tribute to Bornu and Darfur, but by the 18th century Wadai was fully independent and had become an aggressor against its neighbors. To the west of Bornu, by the 15th century the Kingdom of Kano had become the most powerful of the Hausa Kingdoms, in an unstable truce with the Kingdom of Katsina to the north. Both were absorbed into the Sokoto Caliphate during the Fulani Jihad of 1805, which threatened Bornu itself.

Luba Empire

Sometime between 1300 and 1400 AD, Kongolo Mwamba (Nkongolo) from the Balopwe clan unified the various Luba peoples near Lake Kisale. He founded the Kongolo Dynasty, which was later ousted by Kalala Ilunga. Kalala expanded the kingdom west of Lake Kisale. He established a new centralized political system of spiritual kings, with a court council of head governors and sub-heads all the way down to village heads. The king was the direct communicator with the ancestral spirits and was chosen by them. Conquered states were integrated into the system and represented in the court, with their titles. The king's authority resided in his spiritual power rather than his military authority. The army was relatively small.
The Luba king was able to control regional trade and collect tribute for redistribution. Numerous offshoot states were formed with founders claiming descent from the Luba. The Luba political system spread throughout Central Africa, southern Uganda, Rwanda, Burundi, Malawi, Zambia, Zimbabwe, and the western Congo. Two major empires claiming Luba descent were the Lunda Empire and Maravi Empire. The Bemba people and Basimba people of northern Zambia were descended from Luba migrants who arrived in Zambia during the 17th century.

Lunda Empire

In the 1450s, a Luba prince from the royal family, Ilunga Tshibinda, married the Lunda queen Rweej and united all the Lunda peoples. Their son Luseeng expanded the kingdom. His son Naweej expanded the empire further and is known as the first Lunda emperor, with a title meaning the Lord of Vipers. The Luba political system was retained, and conquered peoples were integrated into the system. The emperor assigned a royal adviser and a tax collector to each conquered state. Numerous states claimed descent from the Lunda. The Imbangala of inland Angola claimed descent from a founder, Kinguri, brother of Queen Rweej, who could not tolerate the rule of Tshibinda. Kinguri became the title of kings of the states founded by Queen Rweej's brother. The Luena (Lwena) and Lozi (Luyani) in Zambia also claim descent from Kinguri. During the 17th century, a Lunda chief and warrior called Mwata Kazembe set up an Eastern Lunda kingdom in the valley of the Luapula River. The Lunda's western expansion also saw claims of descent by the Yaka and the Pende. The Lunda linked Central Africa with the western coast trade. The kingdom of Lunda came to an end in the 19th century when it was invaded by the Chokwe, who were armed with guns.

Kingdom of Kongo

By the 15th century AD, the farming Bakongo people (ba being the plural prefix) were unified as the Kingdom of Kongo under a ruler called the manikongo, residing in the fertile Pool Malebo area on the lower Congo River.
The capital was M'banza-Kongo. With superior organization, the Bakongo were able to conquer their neighbors and extract tribute. They were experts in metalwork, pottery, and weaving raffia cloth. They stimulated interregional trade via a tribute system controlled by the manikongo. Later, maize (corn) and cassava (manioc) would be introduced to the region via trade with the Portuguese at their ports at Luanda and Benguela. Maize and cassava would result in population growth in the region and other parts of Africa, replacing millet as a main staple.

By the 16th century, the manikongo held authority from the Atlantic in the west to the Kwango River in the east. Each territory was assigned a mani-mpembe (provincial governor) by the manikongo. In 1506, Afonso I (1506–1542), a Christian, took over the throne. Slave trading increased with Afonso's wars of conquest. About 1568 to 1569, the Jaga invaded Kongo, laying waste to the kingdom and forcing the manikongo into exile. In 1574, Manikongo Álvaro I was reinstated with the help of Portuguese mercenaries. During the latter part of the 1660s, the Portuguese tried to gain control of Kongo. Manikongo António I (1661–1665), with a Kongolese army of 5,000, was defeated by an army of Afro-Portuguese at the Battle of Mbwila. The empire dissolved into petty polities, fighting among each other for war captives to sell into slavery.

Kongo gained captives from the Kingdom of Ndongo in wars of conquest. Ndongo was ruled by the ngola. Ndongo would also engage in slave trading with the Portuguese, with São Tomé being a transit point to Brazil. The kingdom was not as welcoming as Kongo; it viewed the Portuguese with great suspicion and as an enemy. The Portuguese in the latter part of the 16th century tried to gain control of Ndongo but were defeated by the Mbundu. Ndongo experienced depopulation from slave raiding.
The leaders established another state at Matamba, affiliated with Queen Nzinga, who put up a strong resistance to the Portuguese until coming to terms with them. The Portuguese settled along the coast as trade dealers, not venturing on conquest of the interior. Slavery wreaked havoc in the interior, with states initiating wars of conquest for captives. The Imbangala formed the slave-raiding state of Kasanje, a major source of slaves during the 17th and 18th centuries.

Horn of Africa

Somalia

The birth of Islam opposite Somalia's Red Sea coast meant that Somali merchants and sailors living on the Arabian Peninsula gradually came under the influence of the new religion through their converted Arab Muslim trading partners. With the migration of Muslim families from the Islamic world to Somalia in the early centuries of Islam, and the peaceful conversion of the Somali population by Somali Muslim scholars in the following centuries, the ancient city-states eventually transformed into Islamic Mogadishu, Berbera, Zeila, Barawa and Merka, which were part of the Berber civilization ("Berber" being the medieval Arab term for the ancestors of the modern Somalis). The city of Mogadishu came to be known as the City of Islam and controlled the East African gold trade for several centuries.

During this period, sultanates such as the Ajuran Empire and the Sultanate of Mogadishu, and republics like Barawa, Merca and Hobyo and their respective ports flourished and had a lucrative foreign commerce, with ships sailing to and coming from Arabia, India, Venice, Persia, Egypt, Portugal and as far away as China. Vasco da Gama, who passed by Mogadishu in the 15th century, noted that it was a large city with houses four or five stories high and big palaces in its centre, in addition to many mosques with cylindrical minarets.
In the 16th century, Duarte Barbosa noted that many ships from the Kingdom of Cambaya in modern-day India sailed to Mogadishu with cloth and spices, for which they in return received gold, wax, and ivory. Barbosa also highlighted the abundance of meat, wheat, barley, horses, and fruit in the coastal markets, which generated enormous wealth for the merchants. Mogadishu, the center of a thriving weaving industry known as toob benadir (specialized for the markets in Egypt and Syria), together with Merca and Barawa, served as a transit stop for Swahili merchants from Mombasa and Malindi and for the gold trade from Kilwa. Jewish merchants from the Strait of Hormuz brought their Indian textiles and fruit to the Somali coast to exchange for grain and wood.

Trading relations were established with Malacca in the 15th century, with cloth, ambergris, and porcelain being the main commodities of the trade. Giraffes, zebras, and incense were exported to the Ming Empire of China, which established Somali merchants as leaders in the commerce between Asia and Africa and influenced the Chinese language with borrowings from the Somali language in the process. Hindu merchants from Surat and southeast African merchants from Pate, seeking to bypass both the Portuguese blockade and Omani meddling, used the Somali ports of Merca and Barawa (which were outside the two powers' jurisdiction) to conduct their trade in safety and without any problems.

Ethiopia

The Zagwe dynasty ruled many parts of modern Ethiopia and Eritrea from approximately 1137 to 1270. The name of the dynasty comes from the Cushitic-speaking Agaw of northern Ethiopia. From 1270 AD onward, for many centuries, the Solomonic dynasty ruled the Ethiopian Empire. In the early 15th century Ethiopia sought to make diplomatic contact with European kingdoms for the first time since Aksumite times. A letter from King Henry IV of England to the Emperor of Abyssinia survives.
In 1428, the Emperor Yeshaq I sent two emissaries to Alfonso V of Aragon, who sent return emissaries; they failed to complete the journey back. The first continuous relations with a European country began in 1508 with the Kingdom of Portugal under Emperor Lebna Dengel, who had just inherited the throne from his father. This proved to be an important development, for when the empire was subjected to the attacks of the Adal general and imam Ahmad ibn Ibrahim al-Ghazi (called "Grañ", or "the Left-handed"), Portugal assisted the Ethiopian emperor by sending weapons and four hundred men, who helped his son Gelawdewos defeat Ahmad and re-establish his rule. This Abyssinian–Adal War was also one of the first proxy wars in the region, as the Ottoman Empire and Portugal took sides in the conflict.

When Emperor Susenyos converted to Roman Catholicism in 1624, years of revolt and civil unrest followed, resulting in thousands of deaths. The Jesuit missionaries had offended the Orthodox faith of the local Ethiopians, and on June 25, 1632, Susenyos's son, Emperor Fasilides, declared the state religion to again be Ethiopian Orthodox Christianity and expelled the Jesuit missionaries and other Europeans.

North Africa

Maghreb

By 711 AD, the Umayyad Caliphate had conquered all of North Africa. By the 10th century, the majority of the population of North Africa was Muslim. By the 9th century AD, the unity brought about by the Islamic conquest of North Africa and the expansion of Islamic culture came to an end. Conflict arose as to who should be the successor of the prophet. The Umayyads had initially taken control of the Caliphate, with their capital at Damascus. Later, the Abbasids had taken control, moving the capital to Baghdad.
The Berber people, independent in spirit and hostile to outside interference in their affairs and to Arab exclusivity in orthodox Islam, adopted Shi'ite and Kharijite Islam, both considered unorthodox and hostile to the authority of the Abbasid Caliphate. Numerous Kharijite kingdoms rose and fell during the 8th and 9th centuries, asserting their independence from Baghdad. In the early 10th century, Shi'ite groups from Syria, claiming descent from Muhammad's daughter Fatimah, founded the Fatimid Dynasty in the Maghreb. By 950, they had conquered all of the Maghreb, and by 969 all of Egypt. They had immediately broken away from Baghdad.

In an attempt to bring about a purer form of Islam among the Sanhaja Berbers, Abdallah ibn Yasin founded the Almoravid movement in present-day Mauritania and Western Sahara. The Sanhaja Berbers, like the Soninke, practiced an indigenous religion alongside Islam. Abdallah ibn Yasin found ready converts in the Lamtuna Sanhaja, who were dominated by the Soninke in the south and the Zenata Berbers in the north. By the 1040s, all of the Lamtuna had converted to the Almoravid movement.

With the help of Yahya ibn Umar and his brother Abu Bakr ibn Umar, the sons of the Lamtuna chief, the Almoravids created an empire extending from the Sahel to the Mediterranean. After the deaths of Abdallah ibn Yasin and Yahya ibn Umar, Abu Bakr split the empire in half, between himself and Yusuf ibn Tashfin, because it was too big to be ruled by one individual. Abu Bakr took the south to continue fighting the Soninke, and Yusuf ibn Tashfin took the north, expanding it to southern Spain. The death of Abu Bakr in 1087 saw a breakdown of unity and increased military dissension in the south. This caused a re-expansion of the Soninke. The Almoravids were once held responsible for bringing down the Ghana Empire in 1076, but this view is no longer credited.
During the 10th through 13th centuries, there was a large-scale movement of bedouins out of the Arabian Peninsula. About 1050, a quarter of a million Arab nomads from Egypt moved into the Maghreb. Those following the northern coast were referred to as Banu Hilal. Those going south of the Atlas Mountains were the Banu Sulaym. This movement spread the use of the Arabic language, hastened the decline of the Berber language, and furthered the Arabisation of North Africa. Later an Arabised Berber group, the Hawwara, went south to Nubia via Egypt.

In the 1140s, Abd al-Mu'min declared jihad on the Almoravids, charging them with decadence and corruption. He united the northern Berbers against the Almoravids, overthrowing them and forming the Almohad Empire. During this period, the Maghreb became thoroughly Islamised and saw the spread of literacy, the development of algebra, and the use of the number zero and decimals. By the 13th century, the Almohad states had split into three rival states.

Muslim states were largely extinguished in the Iberian Peninsula by the Christian kingdoms of Castile, Aragon, and Portugal. Around 1415, Portugal engaged in a reconquista of North Africa by capturing Ceuta, and in later centuries Spain and Portugal acquired other ports on the North African coast. In 1492, at the end of the Granada War, Spain defeated the Muslims of the Emirate of Granada, effectively ending eight centuries of Muslim domination in southern Iberia. The pashas of Tripoli traded horses, firearms, and armor via Fez with the sultans of the Bornu Empire for slaves.

In the 16th century, the Saadis, an Arab nomad tribe that claimed descent from Muhammad's daughter, conquered and united Morocco. They prevented the Ottoman Empire from reaching the Atlantic and expelled Portugal from Morocco's western coast. Ahmad al-Mansur brought the state to the height of its power.
He invaded Songhay in 1591 to control the gold trade, which had been diverted to the western coast of Africa for European ships and to the east, to Tunis. Morocco's hold on Songhay diminished in the 17th century. In 1603, after Ahmad's death, the kingdom split into the two sultanates of Fes and Marrakesh. Later it was reunited by Moulay al-Rashid (1666–1672), founder of the Alaouite Dynasty. His brother and successor, Ismail ibn Sharif (1672–1727), strengthened the unity of the country by importing slaves from the Sudan to build up the military.

Nile Valley

Egypt

In 642 AD, the Rashidun Caliphate conquered Byzantine Egypt. Egypt under the Fatimid Caliphate was prosperous. Dams and canals were repaired, and wheat, barley, flax, and cotton production increased. Egypt became a major producer of linen and cotton cloth. Its Mediterranean and Red Sea trade increased. Egypt also minted a gold currency called the Fatimid dinar, which was used for international trade. The bulk of revenues came from taxing the fellahin (peasant farmers), and taxes were high. Tax collecting was leased to Berber overlords, who were soldiers who had taken part in the Fatimid conquest in 969 AD. The overlords paid a share to the caliphs and retained what was left. Eventually, they became landlords and constituted a settled land aristocracy. To fill the military ranks, Mamluk Turkish slave cavalry and Sudanese slave infantry were used. Berber freemen were also recruited. In the 1150s, tax revenues from farms diminished. The soldiers revolted and wreaked havoc in the countryside, slowed trade, and diminished the power and authority of the Fatimid caliphs. During the 1160s, Fatimid Egypt came under threat from European crusaders. Out of this threat, a Kurdish general named Ṣalāḥ ad-Dīn Yūsuf ibn Ayyūb (Saladin), with a small band of professional soldiers, emerged as an outstanding Muslim defender. Saladin defeated the Christian crusaders at Egypt's borders and recaptured Jerusalem in 1187.
On the death of Al-Adid, the last Fatimid caliph, in 1171, Saladin became the ruler of Egypt, ushering in the Ayyubid Dynasty. Under his rule, Egypt returned to Sunni Islam, Cairo became an important center of Arab Islamic learning, and Mamluk slaves were increasingly recruited from Turkey and southern Russia for military service. Support for the military was tied to the iqta, a form of land grant under which soldiers collected the taxes of an estate in return for military service. Over time, the Mamluk slave soldiers became a very powerful landed aristocracy, to the point of overthrowing the Ayyubid dynasty in 1250 and establishing a Mamluk dynasty. The more powerful Mamluks were referred to as amirs. For 250 years, Mamluks controlled all of Egypt under a military dictatorship. Egypt extended its territories to Syria and Palestine, thwarted the crusaders, and halted a Mongol invasion in 1260 at the Battle of Ain Jalut. Mamluk Egypt came to be viewed as a protector of Islam, and of Medina and Mecca. Eventually the iqta system declined and proved unreliable for providing an adequate military. The Mamluks started viewing their iqta as hereditary and grew accustomed to urban living. Farm production declined, and dams and canals lapsed into disrepair. Mamluk military skill and technology did not keep pace with the new technology of handguns and cannons. With the rise of the Ottoman Empire, Egypt was easily defeated. In 1517, at the end of an Ottoman–Mamluk War, Egypt became part of the Ottoman Empire. The Istanbul government revived the iqta system. Trade was reestablished in the Red Sea, but it could not completely connect with the Indian Ocean trade because of the growing Portuguese presence. During the 17th and 18th centuries, hereditary Mamluks regained power. The leading Mamluks were referred to as beys. Pashas, or viceroys, represented the Istanbul government in name only, operating independently. During the 18th century, dynasties of pashas became established.
The government was weak and corrupt. In 1798, Napoleon invaded Egypt. The local forces had little ability to resist the French conquest. However, the British Empire and the Ottoman Empire were able to end the French occupation in 1801. These events marked the beginning of a 19th-century Anglo-French rivalry over Egypt.

Sudan

Christian and Islamic Nubia

After Ezana of Aksum sacked Meroe, people associated with the site of Ballana moved into Nubia from the southwest and founded three kingdoms: Makuria, Nobatia, and Alodia. They would rule for 200 years. Makuria was above the third cataract, along the Dongola Reach, with its capital at Dongola. Nobatia was to the north with its capital at Faras, and Alodia was to the south with its capital at Soba. Makuria eventually absorbed Nobatia. The people of the region converted to Monophysite Christianity around 500 to 600 CE. The church initially wrote in Coptic, then in Greek, and finally in Old Nubian, a Nilo-Saharan language. The church was aligned with the Egyptian Coptic Church. By 641, Egypt was conquered by the Rashidun Caliphate. This effectively cut off Christian Nubia and Aksum from Mediterranean Christendom. In 651–652, Arabs from Egypt invaded Christian Nubia. Nubian archers soundly defeated the invaders. The Baqt (or Bakt) Treaty was drawn up, recognizing Christian Nubia and regulating trade. The treaty governed relations between Christian Nubia and Islamic Egypt for almost six hundred years. By the 13th century, Christian Nubia had begun its decline. The authority of the monarchy was diminished by the church and nobility. Arab bedouin tribes began to infiltrate Nubia, causing further havoc. Fakirs (holy men) practicing Sufism introduced Islam into Nubia. By 1366, Nubia had become divided into petty fiefdoms when it was invaded by Mamluks. During the 15th century, Nubia was open to Arab immigration. Arab nomads intermingled with the population and introduced Arab culture and the Arabic language.
By the 16th century, Makuria and Nobatia had been Islamized. During the 16th century, Abdallah Jamma headed an Arab confederation that destroyed Soba, capital of Alodia, the last holdout of Christian Nubia. Later, Alodia would fall under the Funj Sultanate. During the 15th century, Funj herders had migrated north to Alodia and occupied it. In 1504–1505, the kingdom was established with its capital at Sennar; it reached its peak under Badi II Abu Daqn (c. 1644 – 1680). By the end of the 16th century, the Funj had converted to Islam. They pushed their empire westward to Kordofan. They expanded eastward but were halted by Ethiopia. They controlled Nubia down to the third cataract. The economy depended on captured enemies to fill the army and on merchants travelling through Sennar. Under Badi IV (1724–1762), the army turned on the king, reducing him to a figurehead. In 1821, the Funj were conquered by Muhammad Ali (1805–1849), Pasha of Egypt.

Southern Africa

Settlements of Bantu-speaking peoples, who were iron-using agriculturists and herdsmen, were already well established south of the Limpopo River by the 4th century CE, displacing and absorbing the original Khoisan speakers. They slowly moved south, and the earliest ironworks in modern-day KwaZulu-Natal Province are believed to date from around 1050. The southernmost group was the Xhosa people, whose language incorporates certain linguistic traits from the earlier Khoisan people; they reached the Great Fish River in today's Eastern Cape Province.

Great Zimbabwe and Mapungubwe

The Kingdom of Mapungubwe was the first state in Southern Africa, with its capital at Mapungubwe. The state arose in the 12th century CE. Its wealth came from controlling the trade in ivory from the Limpopo Valley, copper from the mountains of northern Transvaal, and gold from the Zimbabwe Plateau between the Limpopo and Zambezi rivers, conducted with the Swahili merchants at Chibuene. By the mid-13th century, Mapungubwe was abandoned.
After the decline of Mapungubwe, Great Zimbabwe rose on the Zimbabwe Plateau. Zimbabwe means "stone building". Great Zimbabwe was the first city in Southern Africa and was the center of an empire that consolidated lesser Shona polities. Stone building was inherited from Mapungubwe. These building techniques were enhanced and came to maturity at Great Zimbabwe, represented by the wall of the Great Enclosure. The dry-stack stone masonry technology was also used to build smaller compounds in the area. Great Zimbabwe flourished by trading with Swahili Kilwa and Sofala. The rise of Great Zimbabwe parallels the rise of Kilwa. Great Zimbabwe was a major source of gold. Its royal court lived in luxury: they wore Indian cotton, surrounded themselves with copper and gold ornaments, and ate from plates brought from as far away as Persia and China. Around the 1420s and 1430s, Great Zimbabwe was in decline. The city was abandoned by 1450. Some have attributed the decline to the rise of the trading town Ingombe Ilede. A new chapter of Shona history ensued. Nyatsimba Mutota, a northern Shona king of the Karanga, engaged in conquest. He and his son Matope conquered the Zimbabwe Plateau, going through Mozambique to the east coast, linking the empire to the coastal trade. They called their empire Wilayatu 'l Mu'anamutapah or mwanamutapa (Lord of the Plundered Lands), or the Kingdom of Mutapa. Monomotapa was the Portuguese corruption. They did not build stone structures; the northern Shona had no tradition of building in stone. After the death of Matope in 1480, the empire split into two smaller empires: Torwa in the south and Mutapa in the north. The split arose from the rivalry of two Shona lords, Changa and Togwa, with the mwanamutapa line. Changa was able to acquire the south, forming the Kingdom of Butua with its capital at Khami. The Mutapa Empire continued in the north under the mwanamutapa line.
During the 16th century, the Portuguese were able to establish permanent markets up the Zambezi River in an attempt to gain political and military control of Mutapa. They were partially successful. In 1628, a decisive battle allowed them to install a puppet mwanamutapa named Mavura, who signed treaties giving favorable mineral export rights to the Portuguese. The Portuguese succeeded in destroying the mwanamutapa system of government and undermining trade. By 1667, Mutapa was in decay. Chiefs would not allow digging for gold for fear of Portuguese theft, and the population declined. The Kingdom of Butua was ruled by a changamire, a title derived from the founder, Changa. Later it became the Rozwi Empire. The Portuguese tried to gain a foothold but were thrown out of the region in 1693 by Changamire Dombo. The 17th century was a period of peace and prosperity. The Rozwi Empire fell in the 1830s to invading Nguni from Natal.

Namibia

By 1500 AD, most of southern Africa had established states. In northwestern Namibia, the Ovambo engaged in farming and the Herero engaged in herding. As cattle numbers increased, the Herero moved southward to central Namibia for grazing land. A related group, the Ovambanderu, expanded to Ghanzi in northwestern Botswana. The Nama, a Khoi-speaking, sheep-raising group, moved northward and came into contact with the Herero; this would set the stage for much conflict between the two groups. The expanding Lozi states pushed the Mbukushu, Subiya, and Yei to Botei, Okavango, and Chobe in northern Botswana.

South Africa and Botswana

Sotho–Tswana

The development of Sotho–Tswana states based on the highveld, south of the Limpopo River, began around 1000 CE. A chief's power rested on his cattle and his connection to the ancestors. This can be seen in the Toutswemogala Hill settlements, with stone foundations and stone walls, north of the highveld and south of the Vaal River.
Northwest of the Vaal River developed early Tswana states centered on towns of thousands of people. When disagreements or rivalry arose, different groups moved off to form their own states.

Nguni peoples

Southeast of the Drakensberg mountains lived Nguni-speaking peoples (Zulu, Xhosa, Swazi, and Ndebele), who were metalworkers, cultivators of millet, and cattle herders. They too engaged in state building, with new states developing from rivalry, disagreements, and population pressure causing movement into new regions. This 19th-century process of warfare, state building, and migration later became known as the Mfecane (Nguni) or Difaqane (Sotho). Its major catalyst was the consolidation of the Zulu Kingdom.

Khoisan and Boers

The Khoisan lived in the southwestern Cape Province, where winter rainfall is plentiful. Earlier Khoisan populations were absorbed by Bantu peoples, such as the Sotho and Nguni, but the Bantu expansion stopped at the region with winter rainfall. Some Bantu languages have incorporated the click consonants of the Khoisan languages. The Khoisan traded with their Bantu neighbors, providing cattle, sheep, and hunted items; in return, their Bantu-speaking neighbors traded copper, iron, and tobacco. In the early 17th century, the Dutch East India Company established a replenishing station at Table Bay for restocking water and purchasing meat from the Khoikhoi. The Khoikhoi received copper, iron, tobacco, and beads in exchange. In order to control the price of meat and stock and make service more consistent, the Dutch established a permanent settlement at Table Bay in 1652. They grew fresh fruit and vegetables and established a hospital for sick sailors. To increase produce, the Dutch decided to increase the number of farms at Table Bay by encouraging freeburgher boers (farmers) on lands worked initially by slaves from West Africa. The land was taken from Khoikhoi grazing land, triggering the first Khoikhoi-Dutch war in 1659.
No victors emerged, but the Dutch assumed a "right of conquest" by which they claimed all of the Cape. In a series of wars pitting the Khoikhoi against each other, the Boers seized all Khoikhoi land and claimed all their cattle. The second Khoikhoi-Dutch war (1673–1677) was a cattle raid. The Khoikhoi also died in the thousands from European diseases. By the 18th century, the Cape Colony had grown, with slaves coming from Madagascar, Mozambique, and Indonesia. The settlement also started to expand northward, but Khoikhoi resistance, raids, and guerrilla warfare slowed the expansion during the 18th century. Boers who took up pastoralism were known as trekboers. A common source of trekboer labor was orphaned children, captured during raids in which their parents had been killed.

Southeast Africa

Prehistory

According to the theory of the recent African origin of modern humans, the mainstream position held within the scientific community, all humans originate from either Southeast Africa or the Horn of Africa. During the first millennium CE, Nilotic and Bantu-speaking peoples moved into the region.

Swahili coast

Following the Bantu migration, a mixed Bantu community developed on the coast of Southeast Africa through contact with Muslim Arab and Persian traders, leading to the development of the mixed Arab, Persian, and African Swahili city-states. The Swahili culture that emerged from these exchanges evinces many Arab and Islamic influences not seen in traditional Bantu culture, as do the many Afro-Arab members of the Bantu Swahili people. With its original speech community centered on the coastal parts of Tanzania (particularly Zanzibar) and Kenya—a seaboard referred to as the Swahili Coast—the Bantu Swahili language contains many Arabic loan-words as a consequence of these interactions.
The earliest Bantu inhabitants of the Southeast coast of Kenya and Tanzania encountered by these later Arab and Persian settlers have been variously identified with the trading settlements of Rhapta, Azania and Menouthias referenced in early Greek and Chinese writings from 50 AD to 500 AD, ultimately giving rise to the name for Tanzania. These early writings perhaps document the first wave of Bantu settlers to reach Southeast Africa during their migration. Historically, the Swahili people could be found as far north as northern Kenya and as far south as the Ruvuma River in Mozambique. Arab geographers referred to the Swahili coast as the land of the zanj (blacks). Although once believed to be the descendants of Persian colonists, the ancient Swahili are now recognized by most historians, historical linguists, and archaeologists as a Bantu people who had sustained important interactions with Muslim merchants, beginning in the late 7th and early 8th centuries AD. Medieval Swahili kingdoms are known to have had island trade ports, described by Greek historians as "metropolises", and to have established regular trade routes with the Islamic world and Asia. Ports such as Mombasa, Zanzibar, and Kilwa were known to Chinese sailors under Zheng He and medieval Islamic geographers such as the Berber traveller Abu Abdullah ibn Battuta. The main Swahili exports were ivory, slaves, and gold. They traded with Arabia, India, Persia, and China. The Portuguese arrived in 1498. On a mission to economically control and Christianize the Swahili coast, the Portuguese attacked Kilwa first in 1505 and other cities later. Because of Swahili resistance, the Portuguese attempt at establishing commercial control was never successful. By the late 17th century, Portuguese authority on the Swahili coast began to diminish. With the help of Omani Arabs, by 1729 the Portuguese presence had been removed. The Swahili coast eventually became part of the Sultanate of Oman. 
Trade recovered, but it did not regain the levels of the past.

Urewe

The Urewe culture developed and spread in and around the Lake Victoria region of Africa during the African Iron Age. The culture's earliest dated artifacts are located in the Kagera Region of Tanzania, and it extended as far west as the Kivu region of the Democratic Republic of the Congo, as far east as the Nyanza and Western provinces of Kenya, and north into Uganda, Rwanda and Burundi. Sites from the Urewe culture date from the Early Iron Age, from the 5th century BC to the 6th century AD. The origins of the Urewe culture lie ultimately in the Bantu expansion originating in Cameroon. Research into early Iron Age civilizations in Sub-Saharan Africa has been undertaken concurrently with linguistic studies of the Bantu expansion. The Urewe culture may correspond to the Eastern subfamily of Bantu languages, spoken by the descendants of the first wave of Bantu peoples to settle East Africa. At first sight, Urewe seems a fully developed civilization, recognizable through its distinctive, stylish earthenware and its highly technical and sophisticated ironworking techniques. Given the current level of knowledge, neither the pottery nor the ironworking seems to have developed or altered for nearly 2,000 years, although minor local variations in the ceramic ware can be observed. Urewe is the name of the site in Kenya brought to prominence through the 1948 publication of Mary Leakey's archaeological findings, which described the early Iron Age period in the Great Lakes region of Central East Africa around Lake Victoria.

Madagascar and Merina

Madagascar was apparently first settled by Austronesian speakers from Southeast Asia before the 6th century AD and subsequently by Bantu speakers from the east African mainland in the 6th or 7th century, according to archaeological and linguistic data. The Austronesians introduced banana and rice cultivation, and the Bantu speakers introduced cattle and other farming practices.
About the year 1000, Arab and Indian trade settlements were established in northern Madagascar to exploit the Indian Ocean trade. By the 14th century, Islam had been introduced to the island by traders. In the East African medieval period, Madagascar functioned as a contact port for the other Swahili seaport city-states such as Sofala, Kilwa, Mombasa, and Zanzibar. Several kingdoms emerged from the 15th century onward: the Sakalava Kingdom (16th century) on the west coast, the Tsitambala Kingdom (17th century) on the east coast, and Merina (15th century) in the central highlands. By the 19th century, Merina controlled the whole island. In 1500, the Portuguese became the first Europeans on the island, raiding its trading settlements. The British and later the French arrived. During the latter part of the 17th century, Madagascar was a popular transit point for pirates. Radama I (1810–1828) invited Christian missionaries in the early 19th century. Queen Ranavalona I "the Cruel" (1828–1861) banned the practice of Christianity in the kingdom, and an estimated 150,000 Christians perished. Under Radama II (1861–1863), Madagascar took a French orientation, with great commercial concessions given to the French. In 1895, in the second Franco-Hova War, the French invaded Madagascar, taking over Antsiranana (Diego Suarez) and declaring Madagascar a protectorate.

Lake Plateau states and empires

Between the 14th and 15th centuries, large Southeast African kingdoms and states emerged, such as the Buganda and Karagwe kingdoms of Uganda and Tanzania.

Empire of Kitara

By 1000 AD, numerous states had arisen on the Lake Plateau among the Great Lakes of East Africa. Cattle herding, cereal growing, and banana cultivation were the economic mainstays of these states. The Ntusi and Bigo earthworks are representative of one of the first states, the Bunyoro kingdom, which oral tradition stipulates was part of the Empire of Kitara that dominated the whole Lakes region.
A Luo ethnic elite, from the Babito clan, ruled over the Bantu-speaking Nyoro people. The society was essentially Nyoro in its culture, based on the evidence from pottery, settlement patterns, and economic specialization. The Babito clan claimed legitimacy by descent from the Bachwezi clan, who were said to have ruled the Empire of Kitara.

Buganda

The Buganda kingdom was founded by Kato Kimera around the 14th century AD. Kato Kintu may have migrated to the northwest of Lake Victoria as early as 1000 BC. Buganda was ruled by the kabaka with the bataka, composed of the clan heads. Over time, the kabakas diluted the authority of the bataka, and Buganda became a centralized monarchy. By the 16th century, Buganda was engaged in expansion but had a serious rival in Bunyoro. By the 1870s, Buganda was a wealthy nation-state. The kabaka ruled with his Lukiko (council of ministers). Buganda had a naval fleet of a hundred vessels, each manned by thirty men. Buganda supplanted Bunyoro as the most important state in the region. However, by the early 20th century, Buganda had become a province of the British Uganda Protectorate.

Rwanda

Southeast of Bunyoro, near Lake Kivu at the bottom of the western rift, the Kingdom of Rwanda was founded, perhaps during the 17th century. Tutsi (BaTutsi) pastoralists formed the elite, with a king called the mwami. The Hutu (BaHutu) were farmers. Both groups spoke the same language, but strict social norms discouraged intermarriage and interaction. According to oral tradition, the Kingdom of Rwanda was founded by Mwami Ruganzu II (Ruganzu Ndori) (c. 1600 – 1624), with his capital near Kigali. It took 200 years to attain a truly centralized kingdom, under Mwami Kigeli IV (Kigeri Rwabugiri) (1840–1895). Subjugation of the Hutu proved more difficult than that of the Tutsi.
The last Tutsi chief submitted to Mwami Mutara II (Mutara Rwogera) (1802–1853) in 1852, but the last Hutu holdout was conquered in the 1920s by Mwami Yuhi V (Yuli Musinga) (1896–1931).

Burundi

South of the Kingdom of Rwanda was the Kingdom of Burundi. It was founded by the Tutsi chief Ntare Rushatsi (c. 1657 – 1705). Like Rwanda, Burundi was built on cattle raised by Tutsi pastoralists, crops from Hutu farmers, conquest, and political innovation. Under Mwami Ntare Rugaamba (c. 1795 – 1852), Burundi pursued an aggressive expansionist policy, one based more on diplomacy than force.

Maravi

The Maravi claimed descent from Karonga (kalonga), who took that title as king. The Maravi connected Central Africa to the east coastal trade, with Swahili Kilwa. By the 17th century, the Maravi Empire encompassed all the area between Lake Malawi and the mouth of the Zambezi River. Its most notable karonga was Mzura, who did much to extend the empire. Mzura made a pact with the Portuguese to field a 4,000-man army to attack the Shona, in return for aid in defeating his rival Lundi, a chief of the Zimba. In 1623, he turned on the Portuguese and assisted the Shona. In 1640, he welcomed back the Portuguese for trade. The Maravi Empire did not long survive the death of Mzura. By the 18th century, it had broken into its previous polities.

West Africa

Sahelian empires and states

Ghana

The Ghana Empire may have been an established kingdom as early as the 8th century AD, founded among the Soninke by Dinga Cisse. Ghana was first mentioned by the Arab geographer al-Fazari in the late 8th century. Ghana was inhabited by urban dwellers and rural farmers. The urban dwellers were the administrators of the empire, who were Muslims, and the Ghana (king), who practiced the traditional religion. Two towns existed: one where the Muslim administrators and Berber-Arabs lived, connected by a stone-paved road to the king's residence.
The rural dwellers lived in villages, which joined together into broader polities that pledged loyalty to the Ghana. The Ghana was viewed as divine, and his physical well-being reflected on the whole society. Ghana converted to Islam around 1050, after conquering Aoudaghost. The Ghana Empire grew wealthy by taxing the trans-Saharan trade that linked Tiaret and Sijilmasa to Aoudaghost. Ghana controlled access to the goldfields of Bambouk, southeast of Koumbi Saleh. A percentage of the salt and gold passing through its territory was taken; the empire itself was not involved in production. By the 11th century, Ghana was in decline. It was once thought that the sacking of Koumbi Saleh by Berbers under the Almoravid dynasty in 1076 was the cause, but this is no longer accepted. Several alternative explanations are cited. One important reason is the transfer of the gold trade east to the Niger River and the Taghaza Trail, and Ghana's consequent economic decline. Another reason cited is political instability through rivalry among the different hereditary polities. The empire came to an end in 1230, when Takrur in northern Senegal took over the capital.

Mali

The Mali Empire began in the 13th century AD, when a Mande (Mandingo) leader, Sundiata (Lord Lion) of the Keita clan, defeated Soumaoro Kanté, king of the Sosso or southern Soninke, at the Battle of Kirina in c. 1235. Sundiata continued his conquest from the fertile forests and Niger Valley, east to the Niger Bend, north into the Sahara, and west to the Atlantic Ocean, absorbing the remains of the Ghana Empire. Sundiata took on the title of mansa and established the capital of his empire at Niani. Although the salt and gold trade continued to be important to the Mali Empire, agriculture and pastoralism were also critical. The growing of sorghum, millet, and rice was a vital activity. On the northern borders of the Sahel, grazing cattle, sheep, goats, and camels were major activities.
Mande society was organized around the village and land. A cluster of villages was called a kafu, ruled by a farma. The farma paid tribute to the mansa. A dedicated army of elite cavalry and infantry, commanded by the royal court, maintained order. A formidable force could be raised from tributary regions if necessary. Conversion to Islam was a gradual process. The power of the mansa depended on upholding traditional beliefs and a spiritual foundation of power. Sundiata initially kept Islam at bay. Later mansas were devout Muslims but still acknowledged traditional deities and took part in traditional rituals and festivals, which were important to the Mande. Islam became a court religion under Sundiata's son Uli I (1255–1270). Mansa Uli made a pilgrimage to Mecca, becoming recognized within the Muslim world. The court was staffed with literate Muslims as secretaries and accountants. The Muslim traveller Ibn Battuta left vivid descriptions of the empire. Mali reached the peak of its power and extent in the 14th century, when Mansa Musa (1312–1337) made his famous hajj to Mecca with 500 slaves, each holding a bar of gold worth 500 mitqals. Mansa Musa's hajj devalued gold in Mamluk Egypt for a decade. He made a great impression on the minds of the Muslim and European worlds. He invited scholars and architects such as Ishaq al-Tuedjin (al-Sahili) to further integrate Mali into the Islamic world. The Mali Empire saw an expansion of learning and literacy. In 1285, Sakura, a freed slave, usurped the throne. This mansa drove the Tuareg out of Timbuktu and established it as a center of learning and commerce. The book trade increased, and book copying became a very respectable and profitable profession. Timbuktu and Djenné became important centers of learning within the Islamic world. After the reign of Mansa Suleyman (1341–1360), Mali began its downward spiral. Mossi cavalry raided the exposed southern border, and Tuareg harassed the northern border in order to retake Timbuktu.
Fulani (Fulbe) eroded Mali's authority in the west by establishing the independent Imamate of Futa Toro, a successor to the kingdom of Takrur. Serer and Wolof alliances were broken. In 1545–1546, the Songhai Empire took Niani. After 1599, the empire lost the Bambouk goldfields and disintegrated into petty polities.

Songhai

The Songhai people are descended from fishermen on the Middle Niger River. They established their capital at Kukiya in the 9th century AD and at Gao in the 12th century. The Songhai speak a Nilo-Saharan language. Sonni Ali, a Songhai, began his conquest by capturing Timbuktu in 1468 from the Tuareg. He extended the empire to the north, deep into the desert, pushed the Mossi further south of the Niger, and expanded southwest to Djenne. His army consisted of cavalry and a fleet of canoes. Sonni Ali was not a Muslim, and he was portrayed negatively by Berber-Arab scholars, especially for attacking Muslim Timbuktu. After his death in 1492, his heirs were deposed by General Muhammad Ture, a Muslim of Soninke origins. Muhammad Ture (1493–1528) founded the Askiya Dynasty, askiya being the title of the king. He consolidated the conquests of Sonni Ali. Islam was used to extend his authority: he declared jihad on the Mossi, revived the trans-Saharan trade, and had the Abbasid "shadow" caliph in Cairo declare him caliph of Sudan. He established Timbuktu as a great center of Islamic learning. Muhammad Ture expanded the empire by pushing the Tuareg north, capturing Aïr in the east, and capturing salt-producing Taghaza. He brought the Hausa states into the Songhai trading network. He further centralized the administration of the empire by selecting administrators from loyal servants and families and assigning them to conquered territories. They were responsible for raising local militias. Centralization made Songhai very stable, even during dynastic disputes. Leo Africanus left vivid descriptions of the empire under Askiya Muhammad.
Askiya Muhammad was deposed by his son in 1528. After much rivalry, Muhammad Ture's last son, Askiya Daoud (1549–1582), assumed the throne. In 1591, Morocco invaded the Songhai Empire under Ahmad al-Mansur of the Saadi Dynasty in order to secure the goldfields of the Sahel. At the Battle of Tondibi, the Songhai army was defeated. The Moroccans captured Djenne, Gao, and Timbuktu, but they were unable to secure the whole region. Askiya Nuhu and the Songhai army regrouped at Dendi, in the heart of Songhai territory, where a spirited guerrilla resistance sapped the resources of the Moroccans, who were dependent upon constant resupply from Morocco. Songhai split into several states during the 17th century. Morocco found its venture unprofitable: the gold trade had been diverted to Europeans on the coast, most of the trans-Saharan trade was now diverted east to Bornu, and expensive equipment purchased with gold had to be sent across the Sahara, an unsustainable scenario. The Moroccans who remained married into the population and were referred to as Arma or Ruma. They established themselves at Timbuktu as a military caste with various fiefs, independent of Morocco. Amid the chaos, other groups began to assert themselves, including the Fulani of Futa Tooro, who encroached from the west. The Bambara Empire, one of the states that broke from Songhai, sacked Gao. In 1737, the Tuareg massacred the Arma.

Sokoto Caliphate

The Fulani were a migratory people. They moved from Mauritania and settled in Futa Tooro, Futa Djallon, and subsequently throughout the rest of West Africa. By the 14th century CE, they had converted to Islam. During the 16th century, they established themselves at Macina in southern Mali. During the 1670s, they declared jihads on non-Muslims. Several states were formed from these jihadist wars, at Futa Toro, Futa Djallon, Macina, Oualia, and Bundu. The most important of these states was the Sokoto Caliphate, or Fulani Empire.
In the city of Gobir, Usman dan Fodio (1754–1817) accused the Hausa leadership of practicing an impure version of Islam and of being morally corrupt. In 1804, he launched the Fulani War as a jihad among a population that was restless about high taxes and discontented with its leaders. Jihad fever swept northern Nigeria, with strong support among both the Fulani and the Hausa. Usman created an empire that included parts of northern Nigeria, Benin, and Cameroon, with Sokoto as its capital. He retired to teach and write and handed the empire to his son Muhammed Bello. The Sokoto Caliphate lasted until 1903, when the British conquered northern Nigeria.

Forest empires and states

Akan kingdoms and emergence of Asante Empire

The Akan speak a Kwa language. The speakers of Kwa languages are believed to have come from East/Central Africa before settling in the Sahel. By the 12th century, the Akan Kingdom of Bonoman (Bono State) was established. During the 13th century, when the gold mines in modern-day Mali started to dry up, Bonoman and later other Akan states began to rise to prominence as the major players in the gold trade. Bonoman and other Akan kingdoms like Denkyira, Akyem, and Akwamu were the predecessors of the later Asante Empire. When and how the Asante got to their present location is debatable. What is known is that by the 17th century an Akan people were identified as living in a state called Kwaaman. The location of the state was north of Lake Bosomtwe. The state's revenue was mainly derived from trading in gold and kola nuts and from clearing forest to plant yams. The Asante built towns between the Pra and Ofin rivers. They formed alliances for defense and paid tribute to Denkyira, one of the more powerful Akan states at that time, along with Adansi and Akwamu.
During the 16th century, Asante society experienced sudden changes, including population growth because of the cultivation of New World plants such as cassava and maize, and an increase in the gold trade between the coast and the north. By the 17th century, Osei Kofi Tutu I (c. 1695 – 1717), with the help of Okomfo Anokye, unified what became the Asante into a confederation, with the Golden Stool as a symbol of their unity and spirit. Osei Tutu engaged in a massive territorial expansion. He built up the Asante army on the model of the Akan state of Akwamu, introducing new organization and turning a disciplined militia into an effective fighting machine. In 1701, the Asante conquered Denkyira, giving them access to the coastal trade with Europeans, especially the Dutch.

Opoku Ware I (1720–1745) engaged in further expansion, adding other southern Akan states to the growing empire. He turned north, adding Techiman, Banda, Gyaaman, and Gonja, states on the Black Volta. Between 1744 and 1745, Asantehene Opoku attacked the powerful northern state of Dagomba, gaining control of the important middle Niger trade routes. Kusi Obodom (1750–1764) succeeded Opoku and consolidated all the newly won territories. Osei Kwadwo (1764–1777) imposed administrative reforms that allowed the empire to be governed effectively and to continue its military expansion. Osei Kwame Panyin (1777–1803), Osei Tutu Kwame (1804–1807), and Osei Bonsu (1807–1824) continued territorial consolidation and expansion. The Asante Empire included all of present-day Ghana and large parts of the Ivory Coast.

The Asantehene inherited his position from his mother. He was assisted at the capital, Kumasi, by a civil service of men talented in trade, diplomacy, and military affairs, with a head called the Gyaasehene. Men from Arabia, Sudan, and Europe were employed in the civil service, all of them appointed by the Asantehene.
At the capital and in other towns, the ankobia, or special police, served as bodyguards to the Asantehene, as sources of intelligence, and as suppressors of rebellion. Communication throughout the empire was maintained via a network of well-kept roads running from the coast to the middle Niger and linking together other trade cities. For most of the 19th century, the Asante Empire remained powerful. It was destroyed in 1900 by superior British weaponry and organization following the four Anglo-Ashanti wars.

Dahomey

The Dahomey Kingdom was founded in the early 17th century when the Aja people of the Allada kingdom moved northward and settled among the Fon. They began to assert their power a few years later, establishing the Kingdom of Dahomey with its capital at Agbome. King Houegbadja (c. 1645 – 1685) organized Dahomey into a powerful centralized state. He declared all lands to be owned by the king and subject to taxation. Primogeniture in the kingship was established, neutralizing all input from village chiefs. A "cult of kingship" was established, in which a captive slave would be sacrificed annually to honor the royal ancestors.

During the 1720s, the slave-trading states of Whydah and Allada were taken, giving Dahomey direct access to the Slave Coast and trade with Europeans. King Agaja (1708–1740) attempted to end the slave trade by keeping the slaves on plantations producing palm oil, but the European profits on slaves and Dahomey's dependency on firearms were too great. In 1730, under King Agaja, Dahomey was conquered by the Oyo Empire, and Dahomey had to pay tribute. Taxes on slaves were mostly paid in cowrie shells. During the 19th century, palm oil was the main trading commodity. France conquered Dahomey during the Second Franco-Dahomean War (1892–1894) and established a colonial government there. Most of the troops who fought against Dahomey were native Africans.
Yoruba

Traditionally, the Yoruba people viewed themselves as the inhabitants of a united empire, in contrast to the situation today, in which "Yoruba" is the cultural-linguistic designation for speakers of a language in the Niger–Congo family. The name comes from a Hausa word used to refer to the Oyo Empire. The first Yoruba state was Ile-Ife, said to have been founded around 1000 AD by a supernatural figure, the first oni Oduduwa. Oduduwa's sons would be the founders of the different city-states of the Yoruba, and his daughters would become the mothers of the various Yoruba obas, or kings. Yoruba city-states were usually governed by an oba and an iwarefa, a council of chiefs who advised the oba. By the 18th century, the Yoruba city-states formed a loose confederation, with the Oni of Ife as the head and Ife as the capital. As time went on, the individual city-states became more powerful, their obas assuming more powerful spiritual positions and diluting the authority of the Oni of Ife. Rivalry among the city-states became intense.

The Oyo Empire rose in the 16th century. The Oyo state had been conquered in 1550 by the kingdom of Nupe, which was in possession of cavalry, an important tactical advantage. The alafin (king) of Oyo was sent into exile. After returning, Alafin Orompoto (c. 1560 – 1580) built up an army based on heavily armed cavalry and long-service troops. This made the Oyo invincible in combat on the northern grasslands and in the thinly wooded forests. By the end of the 16th century, Oyo had added the western region of the Niger to the hills of Togo, the Yoruba of Ketu, Dahomey, and the Fon nation. A governing council served the empire, with clear executive divisions. Each acquired region was assigned a local administrator. Families served in king-making capacities. Oyo, as a northern Yoruba kingdom, served as middleman in the north–south trade, connecting the eastern forest of Guinea with the western and central Sudan, the Sahara, and North Africa.
The Yoruba manufactured cloth, ironware, and pottery, which were exchanged for salt, leather, and, most importantly, horses from the Sudan to maintain the cavalry. Oyo remained strong for two hundred years. It became a protectorate of Great Britain in 1888, before further fragmenting into warring factions. The Oyo state ceased to exist as any sort of power in 1896.

Benin

The Kwa Niger–Congo-speaking Edo people had established the Benin Empire by the middle of the 15th century. It was engaged in political expansion and consolidation from its very beginning. Under Oba (king) Ewuare (c. 1450 – 1480 AD), the state was organized for conquest. He solidified central authority and initiated 30 years of war with his neighbors. At his death, the Benin Empire extended to Dahomey in the west, to the Niger Delta in the east, along the west African coast, and to the Yoruba towns in the north.

Ewuare's grandson Oba Esigie (1504–1550) eroded the power of the uzama (state council) and increased contact and trade with Europeans, especially with the Portuguese, who provided a new source of copper for court art. The oba ruled with the advice of the uzama, a council consisting of chiefs of powerful families and town chiefs of different guilds. Its authority was later diminished by the establishment of administrative dignitaries. Women wielded power: the queen mother, who produced the future oba, had immense influence. Benin was never a significant exporter of slaves, as Alan Ryder's book Benin and the Europeans showed. By the early 18th century, it was wracked by dynastic disputes and civil wars. However, it regained much of its former power in the reigns of Oba Eresoyen and Oba Akengbuda. After the 16th century, Benin mainly exported pepper, ivory, gum, and cotton cloth to the Portuguese and Dutch, who resold it to other African societies on the coast. In 1897, the British sacked the city.
Niger Delta and Igbo

The Niger Delta comprised numerous city-states with numerous forms of government. These city-states were protected by the waterways and thick vegetation of the delta. The region was transformed by trade in the 17th century. The delta's city-states were comparable to those of the Swahili people in East Africa. Some, like Bonny, Kalabari, and Warri, had kings. Others, like Brass, were republics with small senates, and those at Cross River and Old Calabar were ruled by merchants of the ekpe society. The ekpe society regulated trade and made rules for its members, who were organized under the house system. Some of these houses, like the Pepples of Bonny, were well known in the Americas and Europe.

The Igbo lived east of the delta (but with the Anioma on the west of the Niger River). The Kingdom of Nri rose in the 9th century, with the Eze Nri being its leader. It was a political entity composed of villages, and each village was autonomous and independent, with its own territory and name, each recognized by its neighbors. Villages were democratic, with all males and sometimes females taking part in the decision-making process. Graves at Igbo-Ukwu (800 AD) contained brass artifacts of local manufacture and glass beads from Egypt or India, indicative of extraregional trade.

19th century

Southern Africa

By the 1850s, British and German missionaries and traders had penetrated present-day Namibia. The Herero and Nama peoples competed for guns and ammunition, providing cattle, ivory, and ostrich feathers in exchange. The Germans were more firmly established than the British in the region. By 1884, the Germans had declared the coastal region from the Orange River to the Kunene River a German protectorate, part of German South West Africa. They pursued an aggressive policy of land expansion for white settlements. They exploited the rivalry between the Nama and Herero. The Herero entered into an alliance with the Germans, thinking they could get the upper hand on the Nama.
The Germans set up a garrison at the Herero capital and started allocating Herero land for white settlements, including the best grazing land in the central plateau, and made tax and labor demands. The Herero and Ovambanderu rebelled, but the rebellion was crushed and its leaders executed. Between 1896 and 1897, rinderpest crippled the Herero and Nama economy and slowed white expansion. The Germans continued the policy of making Namibia a white settlement by seizing land and cattle, and even by trying to export Herero labor to South Africa.

In 1904, the Herero rebelled again. German General Lothar von Trotha implemented an extermination policy at the Battle of Waterberg, which drove the Herero west of the Kalahari Desert. At the end of 1905, only 16,000 Herero were alive, out of a previous population of 80,000. Nama resistance was crushed in 1907. All Nama and Herero cattle and land were confiscated from the greatly diminished population, with the remaining Nama and Herero assuming a subordinate position. Labor had to be imported from among the Ovambo.

Nguniland

A moment of great disorder in southern Africa was the Mfecane, "the crushing." It was started by the northern Nguni kingdoms of Mthethwa, Ndwandwe, and Swaziland over scarce resources and famine. When Dingiswayo of Mthethwa died, Shaka of the Zulu people took over. He established the Zulu Kingdom, asserting authority over the Ndwandwe and pushing the Swazi north. The scattering of the Ndwandwe and Swazi caused the Mfecane to spread. During the 1820s, Shaka expanded the empire all along the Drakensberg foothills, with tribute being paid as far south as the Tugela and Umzimkulu rivers. He replaced the chiefs of conquered polities with indunas, responsible to him. He introduced a centralized, dedicated, and disciplined military force not seen before in the region, armed with a new weapon, the short stabbing spear. In 1828, Shaka was assassinated by his half-brother Dingane, who lacked the military genius and leadership skills of Shaka.
Voortrekkers tried to occupy Zulu land in 1838. In the early months they were defeated, but the survivors regrouped at the Ncome River and soundly defeated the Zulu. Even so, the Voortrekkers dared not settle Zulu land. Dingane was killed in 1840 during a civil war. His brother Mpande took over and strengthened Zulu territories to the north. In 1879, the Zulu Kingdom was invaded by Britain in its quest to control all of South Africa. The Zulu Kingdom was victorious at the Battle of Isandlwana but was defeated at the Battle of Ulundi.

One of the major states to emerge from the Mfecane was the Sotho Kingdom, founded at Thaba Bosiu by Moshoeshoe I around 1821 to 1822. It was a confederation of different polities that accepted the absolute authority of Moshoeshoe. During the 1830s, the kingdom invited missionaries as a strategic means of acquiring guns and horses from the Cape. The Orange Free State slowly diminished the kingdom but never completely defeated it. In 1868, Moshoeshoe asked that the Sotho Kingdom be annexed by Britain, to save the remnant. It became the British protectorate of Basutoland.

Sotho-Tswana

The arrival of the ancestors of the Tswana-speakers who came to control the region (from the Vaal River to Botswana) has yet to be dated precisely, although AD 600 seems to be a consensus estimate. This massive cattle-raising complex prospered until about 1300 AD. These various peoples were connected to trade routes that ran via the Limpopo River to the Indian Ocean, and trade goods from Asia such as beads made their way to Botswana, most likely in exchange for ivory, gold, and rhinoceros horn.

The first written records relating to modern-day Botswana appear in 1824. These records show that the Bangwaketse had become the predominant power in the region. Under the rule of Makaba II, the Bangwaketse kept vast herds of cattle in well-protected desert areas and used their military prowess to raid their neighbours.
Other chiefdoms in the area by this time had capitals of 10,000 or so inhabitants and were fairly prosperous. This equilibrium came to an end during the Mfecane period, 1823–1843, when a succession of invading peoples from South Africa entered the country. Although the Bangwaketse were able to defeat the invading Bakololo in 1826, over time all the major chiefdoms in Botswana were attacked, weakened, and impoverished. The Bakololo and Amandebele raided repeatedly and took large numbers of cattle, women, and children from the Batswana, most of whom were driven into the desert or into sanctuary areas such as hilltops and caves. Only after 1843, when the Amandebele moved into western Zimbabwe, did this threat subside.

During the 1840s and 1850s, trade with Cape Colony-based merchants opened up and enabled the Batswana chiefdoms to rebuild. The Bakwena, Bangwaketse, Bangwato, and Batawana cooperated to control the lucrative ivory trade and then used the proceeds to import horses and guns, which in turn enabled them to establish control over what is now Botswana. This process was largely complete by 1880, by which time the Bushmen, the Bakalanga, the Bakgalagadi, the Batswapong, and other current minorities had been subjugated by the Batswana.

Following the Great Trek, Afrikaners from the Cape Colony established themselves on the borders of Botswana in the Transvaal. In 1852, a coalition of Tswana chiefdoms led by Sechele I resisted Afrikaner incursions, and after about eight years of intermittent tensions and hostilities, the two sides eventually came to a peace agreement in Potchefstroom in 1860. From that point on, the modern-day border between South Africa and Botswana was agreed on, and the Afrikaners and Batswana traded and worked together peacefully.

In the 1820s, refugees from the Zulu expansion under Shaka came into contact with the Basotho people residing on the highveld.
In 1823, those pressures caused one group of Basotho, the Kololo, to migrate north, past the Okavango Swamp and across the Zambezi into Barotseland, now part of Zambia. In 1845, the Kololo conquered Barotseland.

At about the same time, the Boers began to encroach upon Basotho territory. After the Cape Colony had been ceded to Britain at the conclusion of the Napoleonic Wars, the voortrekkers ("pioneers"), farmers who opted to leave the former Dutch colony, moved inland, where they eventually established independent polities. At the time of these developments, Moshoeshoe I gained control of the Basotho kingdoms of the southern Highveld. Universally praised as a skilled diplomat and strategist, he was able to weld the disparate refugee groups escaping the Difaqane into a cohesive nation. His inspired leadership helped his small nation to survive the dangers and pitfalls (the Zulu hegemony, the inward expansion of the voortrekkers, and the designs of imperial Britain) that destroyed other indigenous South African kingdoms during the 19th century.

In 1822, Moshoeshoe established his capital at Butha-Buthe, an easily defensible mountain in the northern Drakensberg mountains, laying the foundations of the eventual Kingdom of Lesotho. His capital was later moved to Thaba Bosiu. To deal with the encroaching voortrekker groups, Moshoeshoe encouraged French missionary activity in his kingdom. Missionaries sent by the Paris Evangelical Missionary Society provided the king with foreign-affairs counsel and helped to facilitate the purchase of modern weapons. Aside from acting as state ministers, missionaries (primarily Casalis and Arbousset) played a vital role in delineating Sesotho orthography and printing Sesotho-language materials between 1837 and 1855. The first Sesotho translation of the Bible appeared in 1878.
In 1868, after losing the western lowlands to the Boers during the Free State–Basotho Wars, Moshoeshoe successfully appealed to Queen Victoria to proclaim Lesotho (then known as Basutoland) a protectorate of Britain, and the British administration was placed in Maseru, the site of Lesotho's current capital. Local chieftains retained power over internal affairs, while Britain was responsible for foreign affairs and the defence of the protectorate. In 1869, the British sponsored a process by which the borders of Basutoland were finally demarcated. While many clans had territory within Basutoland, large numbers of Sesotho speakers resided in areas allocated to the Orange Free State, the sovereign voortrekker republic that bordered the Basotho kingdom.

Voortrekkers

By the 19th century, most Khoikhoi territory was under Boer control. The Khoikhoi had lost economic and political independence and had been absorbed into Boer society. The Boers spoke Afrikaans, a language or dialect derived from Dutch, and no longer called themselves Boers but Afrikaners. Some Khoikhoi were used as commandos in raids against other Khoikhoi and later against the Xhosa. A mixed Khoi, slave, and European population called the Cape Coloureds, who were outcasts within colonial society, also arose. Khoikhoi who lived far out on the frontier included the Kora, Oorlams, and Griqua.

In 1795, the British took over the Cape Colony from the Dutch. In the 1830s, Boers embarked on a journey of expansion, east of the Great Fish River into the Zuurveld. They were referred to as Voortrekkers. They founded the republics of the Transvaal and the Orange Free State, mostly in areas of sparse population that had been diminished by the Mfecane/Difaqane. Unlike the Khoisan, the Bantu-speaking states were not conquered by the Afrikaners, because of their population density and greater unity. Additionally, they had begun to arm themselves with guns acquired through trade at the Cape.
In some cases, as in the Xhosa/Boer Wars, Boers were removed from Xhosa lands. It required a dedicated imperial military force to subdue the Bantu-speaking states. In 1901, the Boer republics were defeated by Britain in the Second Boer War. The defeat, however, consummated many Afrikaners' ambition: South Africa would be under white rule. The British placed all legislative, executive, and administrative power in English and Afrikaner hands.

European trade, exploration and conquest

Between 1878 and 1898, European states partitioned and conquered most of Africa. For 400 years, European nations had mainly limited their involvement to trading stations on the African coast. Few dared venture inland from the coast; those that did, like the Portuguese, often met defeat and had to retreat to the coast. Several technological innovations helped to overcome this 400-year pattern. One was the development of repeating rifles, which were easier and quicker to load than muskets. Artillery was being used increasingly. In 1885, Hiram S. Maxim developed the Maxim gun, the model of the modern-day machine gun. European states kept these weapons largely among themselves by refusing to sell them to African leaders.

African germs took numerous European lives and deterred permanent settlements. Diseases such as yellow fever, sleeping sickness, yaws, and leprosy made Africa a very inhospitable place for Europeans. The deadliest disease was malaria, endemic throughout tropical Africa. In 1854, the demonstration that quinine was an effective malaria prophylactic, along with other medical innovations, helped to make conquest and colonization in Africa possible.

Strong motives for the conquest of Africa were at play. Raw materials were needed for European factories, as Europe in the early part of the 19th century was undergoing its Industrial Revolution. Nationalist rivalries and prestige were also at play: acquiring African colonies would show rivals that a nation was powerful and significant. These factors culminated in the Scramble for Africa.
Knowledge of Africa increased as numerous European explorers began to explore the continent. Mungo Park traversed the Niger River. James Bruce travelled through Ethiopia and located the source of the Blue Nile. Richard Francis Burton was the first European at Lake Tanganyika. Samuel White Baker explored the Upper Nile. John Hanning Speke located a source of the Nile at Lake Victoria. Other significant European explorers included Heinrich Barth, Henry Morton Stanley (coiner of the term "Dark Continent" for Africa in an 1878 book), Silva Porto, Alexandre de Serpa Pinto, René Caillié, Friedrich Gerhard Rohlfs, Gustav Nachtigal, Georg Schweinfurth, and Joseph Thomson. The most famous of the explorers was David Livingstone, who explored southern Africa and traversed the continent from the Atlantic at Luanda to the Indian Ocean at Quelimane. European explorers made use of African guides and servants and established long-distance trading routes. Missionaries attempting to spread Christianity also increased European knowledge of Africa.

Between 1884 and 1885, European nations met at the Berlin West Africa Conference to discuss the partitioning of Africa. It was agreed that European claims to parts of Africa would only be recognised if they were backed by effective occupation. In a series of treaties in 1890–1891, colonial boundaries were completely drawn. All of Sub-Saharan Africa was claimed by European powers, except for Ethiopia (Abyssinia) and Liberia.

The European powers set up a variety of different administrations in Africa, reflecting different ambitions and degrees of power. In some areas, such as parts of British West Africa, colonial control was tenuous and intended for simple economic extraction, strategic power, or as part of a long-term development plan. In other areas, Europeans were encouraged to settle, creating settler states in which a European minority dominated. Settlers came in sufficient numbers to have a strong impact in only a few colonies.
British settler colonies included British East Africa (now Kenya), Northern and Southern Rhodesia (Zambia and Zimbabwe, respectively), and South Africa, which already had a significant population of European settlers, the Boers. France planned to settle Algeria and eventually incorporate it into the French state on an equal basis with the European provinces; Algeria's proximity across the Mediterranean made plans of this scale possible.

In most areas colonial administrations did not have the manpower or resources to fully administer the territory and had to rely on local power structures to help them. Various factions and groups within the societies exploited this European requirement for their own purposes, attempting to gain positions of power within their own communities by cooperating with Europeans. One aspect of this struggle included what Terence Ranger has termed the "invention of tradition." In order to legitimize their own claims to power in the eyes of both the colonial administrators and their own people, native elites would essentially manufacture "traditional" claims to power or ceremonies. As a result, many societies were thrown into disarray by the new order.

Following the Scramble for Africa, an early but secondary focus for most colonial regimes was the suppression of slavery and the slave trade. By the end of the colonial period they were mostly successful in this aim, though slavery remains active in parts of Africa.

France versus Britain: the Fashoda crisis of 1898

As a part of the Scramble for Africa, France had the establishment of a continuous west–east axis across the continent as an objective, in contrast with the British north–south axis. Tensions between Britain and France reached a flash point in Africa. At several points war was possible, but it never happened. The most serious episode was the Fashoda Incident of 1898.
French troops tried to claim an area in the southern Sudan, and a much more powerful British force, purporting to act in the interests of the Khedive of Egypt, arrived to confront them. Under heavy pressure the French withdrew, securing British control over the area. The status quo was recognised by an agreement between the two states acknowledging British control over Egypt, while France became the dominant power in Morocco; overall, however, France suffered a humiliating defeat.

European colonial territories

Belgium
- Congo Free State and Belgian Congo (today's Democratic Republic of the Congo)
- Ruanda-Urundi (comprising modern Rwanda and Burundi, between 1916 and 1960)

France
- French West Africa: Mauritania, Senegal, French Sudan (now Mali), French Guinea (now Guinea), Ivory Coast, Niger, French Upper Volta (now Burkina Faso), French Dahomey (now Benin)
- French Algeria (now Algeria)
- Tunisia
- French Morocco
- French Somaliland (now Djibouti)
- Madagascar
- Comoros
- French Equatorial Africa: Gabon, Middle Congo (now the Republic of the Congo), Oubangi-Chari (now the Central African Republic), Chad

Germany
- German Kamerun (now Cameroon and part of Nigeria)
- German East Africa (now Rwanda, Burundi, and most of Tanzania)
- German South West Africa (now Namibia)
- German Togoland (now Togo and the eastern part of Ghana)

Italy
- Italian North Africa (now Libya)
- Eritrea
- Italian Somaliland (now part of Somalia)

Portugal
- Portuguese West Africa (now Angola): mainland Angola and the Portuguese Congo (now Cabinda Province of Angola)
- Portuguese East Africa (now Mozambique)
- Portuguese Guinea (now Guinea-Bissau)
- Cape Verde Islands
- São Tomé e Príncipe: São Tomé Island and Príncipe Island
- Fort of São João Baptista de Ajudá (now Ouidah, in Benin)

Spain
- Spanish Sahara (now Western Sahara): Río de Oro and Saguia el-Hamra
- Spanish Morocco: Tarfaya Strip and Ifni
- Spanish Guinea (now Equatorial Guinea): Fernando Po, Río Muni, and Annobón

United Kingdom
- Egypt
- Anglo-Egyptian Sudan (now Sudan)
- British Somaliland (now part of Somalia)
- British East Africa:
  - Kenya
  - Uganda Protectorate (now Uganda)
  - Tanganyika (1919–1961, now part of Tanzania)
  - Zanzibar (now part of Tanzania)
- Bechuanaland (now Botswana)
- Southern Rhodesia (now Zimbabwe)
- Northern Rhodesia (now Zambia)
- British South Africa (now South Africa): Transvaal, Cape Colony, Colony of Natal, and Orange Free State
- The Gambia
- Sierra Leone
- Nigeria
- Cameroons (now parts of Cameroon and Nigeria)
- British Gold Coast (now Ghana)
- Nyasaland (now Malawi)
- Basutoland (now Lesotho)
- Swaziland

Independent states
- Liberia, founded by the American Colonization Society of the United States in 1821; declared independence in 1847
- Ethiopian Empire (Abyssinia), which had its borders re-drawn with Italian Eritrea and French Somaliland (modern Djibouti) and was briefly occupied by Italy from 1936 to 1941 during the Abyssinia Crisis
- Sudan, independent under Mahdi rule between 1885 and 1899, then under British rule from 1899 to 1956

20th century

In the 1880s the European powers had divided up almost all of Africa (only Ethiopia and Liberia were independent). They ruled until after World War II, when the forces of nationalism grew much stronger. In the 1950s and 1960s the colonial holdings became independent states. The process was usually peaceful, but there were several long and bitter civil wars, as in Algeria, Kenya, and elsewhere. Across Africa the powerful new force of nationalism drew upon the organizational skills that natives had learned in the British, French, and other armies in the world wars. It led to organizations that were not controlled or endorsed by either the colonial powers or the traditional local power structures that were collaborating with them. Nationalistic organizations began to challenge both the traditional and the new colonial structures and finally displaced them.
Leaders of nationalist movements took control when the European authorities exited; many ruled for decades or until they died. These movements drew on political, educational, religious, and other social organizations. In recent decades, many African countries have undergone the triumph and defeat of nationalistic fervor, changing in the process the loci of the centralizing state power and patrimonial state.

World War I

With the vast majority of the continent under the colonial control of European governments, the World Wars were significant events in the geopolitical history of Africa. Africa was a theater of war and saw fighting in both wars. More importantly, in most regions the total-war footing of the colonial powers affected the governance of African colonies through resource allocation, conscription, and taxation.

In World War I there were several campaigns in Africa, including the Togoland Campaign, the Kamerun Campaign, the South West Africa campaign, and the East African campaign. In each, Allied forces, primarily British but also French, Belgian, South African, and Portuguese, sought to force the Germans out of their African colonies. In each, German forces were badly outnumbered and, due to Allied naval superiority, were cut off from reinforcement or resupply. The Allies eventually conquered all the German colonies; German forces in East Africa managed to avoid surrender throughout the war, though they could not hold any territory after 1917. After World War I, the former German colonies in Africa were taken over by France, Belgium, and the British Empire.

After World War I, colonial powers continued to consolidate their control over their African territories. In some areas, particularly in Southern and East Africa, large settler populations were successful in pressing for additional devolution of administration, so-called "home rule" by the white settlers.
In many cases, settler regimes were harsher on African populations, tending to see them as a threat to political power, whereas colonial regimes had generally endeavored to co-opt local populations into economic production. The Great Depression strongly affected Africa's non-subsistence economy, much of which was based on commodity production for Western markets. As demand recovered in the late 1930s, Africa's economy rebounded as well. Africa was the site of one of the first instances of fascist territorial expansion in the 1930s. Italy had attempted to conquer Ethiopia in the 1890s but had been rebuffed in the First Italo-Ethiopian War. Ethiopia lay between two Italian colonies, Italian Somaliland and Eritrea, and was invaded in October 1935. With an overwhelming advantage in armor and aircraft, Italian forces had occupied the capital, Addis Ababa, by May 1936 and effectively declared victory. Ethiopia and Italy's other East African colonies were consolidated into Italian East Africa.

World War II: Political
Africa was a large continent whose geography gave it strategic importance during the war. North Africa was the scene of major British and American campaigns against Italy and Germany; East Africa was the scene of a major British campaign against Italy. The vast geography provided major transportation routes linking the United States to the Middle East and Mediterranean regions. The sea route around South Africa was heavily used even though it added 40 days to voyages that had to avoid the dangerous Suez region; Lend-Lease supplies to Russia often came this way. Internally, long-distance road and railroad connections facilitated the British war effort. The Union of South Africa had dominion status and was largely self-governing; the other British possessions were ruled by the Colonial Office, usually with close ties to local chiefs and kings. The Italian holdings were the target of successful British military campaigns.
The Belgian Congo and two other Belgian colonies were major exporters. In terms of numbers and wealth, the British controlled the richest portions of Africa, and made extensive use not only of the geography but of the manpower and natural resources. Civilian colonial officials made a special effort to upgrade the African infrastructure, promote agriculture, integrate colonial Africa with the world economy, and recruit over half a million soldiers. Before the war, Britain had made few plans for the utilization of Africa, but it quickly set up command structures. The Army set up the West Africa Command, which recruited 200,000 soldiers. The East Africa Command was created in September 1941 to support the overstretched Middle East Command; it provided the largest number of men, over 320,000, chiefly from Kenya, Tanganyika, and Uganda. The Southern Command was the domain of South Africa. The Royal Navy set up the South Atlantic Command, based in Sierra Leone, which became one of the main convoy assembly points. RAF Coastal Command had major submarine-hunting operations based in West Africa, while a smaller RAF command dealt with submarines in the Indian Ocean. Ferrying aircraft from North America and Britain was the major mission of the Western Desert Air Force. In addition, smaller, more localized commands were set up throughout the war. Before 1939, the military establishments throughout British Africa were very small and largely consisted of whites, who comprised under two percent of the population outside South Africa. As soon as the war began, newly created African units were set up, primarily by the Army. The new recruits were almost always volunteers, usually provided in close cooperation with local tribal leaders. During the war, military pay scales far exceeded what native civilians could earn, especially when food, housing, and clothing allowances were included.
The largest numbers were in construction units, called Pioneer units, with over 82,000 soldiers. The RAF and Navy also did some recruiting. The volunteers did some fighting, a great deal of guard duty, and construction work; 80,000 served in the Middle East. A special effort was made not to challenge white supremacy, certainly before the war, and to a large extent during the war itself. Nevertheless, the soldiers were drilled and trained to European standards, given strong doses of propaganda, and learned leadership and organizational skills that proved essential to the formation of nationalist and independence movements after 1945. There were minor episodes of discontent among the natives, but nothing serious. Afrikaner nationalism was a factor in South Africa, but the pro-German Afrikaner prime minister was replaced in 1939 by Jan Smuts, an Afrikaner who was an enthusiastic supporter of the British Empire. His government closely cooperated with London and raised 340,000 volunteers (190,000 were white, or about one-third of the eligible white men).

French Africa
As early as 1857, the French established volunteer units of black soldiers in sub-Saharan Africa, termed the tirailleurs sénégalais. They served in military operations throughout the Empire, including 171,000 soldiers in World War I and 160,000 in World War II. About 90,000 became POWs in Germany. The veterans played a central role in the postwar independence movement in French Africa. The authorities in French West Africa declared allegiance to the Vichy regime, as did the colony of French Gabon. Vichy forces defeated a Free French invasion of French West Africa in the two battles of Dakar in July and September 1940. Gabon fell to Free France after the Battle of Gabon in November 1940, but West Africa remained under Vichy control until November 1942. Vichy forces tried to resist the overwhelming Allied landings in North Africa (Operation Torch) in November 1942.
Vichy Admiral François Darlan suddenly switched sides and the fighting ended. The Allies gave Darlan control of the French forces in North Africa in exchange for support from both French North Africa and French West Africa. Vichy was now eliminated as a factor in Africa. Darlan was assassinated in December, and the two factions of the Free French, led by Charles de Gaulle and Henri Giraud, jockeyed for power; de Gaulle finally won out.

World War II: Military
Since Germany had lost its African colonies following World War I, World War II did not reach Africa until Italy joined the war on June 10, 1940, controlling Libya and Italian East Africa. With the fall of France on June 25, most of France's colonies in North and West Africa were controlled by the Vichy government, though much of Central Africa fell under Free French control after some fighting between Vichy and Free French forces at the Battle of Dakar and the Battle of Gabon. After the fall of France, Africa was the only active theater for ground combat until the Italian invasion of Greece in October. In the Western Desert campaign, Italian forces from Libya sought to overrun Egypt, controlled by the British. Simultaneously, in the East African campaign, Italian East African forces overran British Somaliland and some British outposts in Kenya and Anglo-Egyptian Sudan. When Italy's efforts to conquer Egypt (including the crucial Suez Canal) and Sudan fell short, it was unable to reestablish supply to Italian East Africa. Without the ability to reinforce or resupply, and surrounded by Allied possessions, Italian East Africa was conquered by mainly British and South African forces in 1941. In North Africa, the Italians soon requested help from the Germans, who sent a substantial force under General Rommel. With German help, the Axis forces regained the upper hand but were unable to break through British defenses in two attempts at El Alamein.
In late 1942, Allied forces, mainly American and British, invaded French North Africa in Operation Torch, where Vichy French forces initially surprised them with their resistance but were convinced to stop fighting after three days. The second front relieved pressure on the British in Egypt, who began pushing west to meet up with the Torch forces, eventually pinning the German and Italian forces in Tunisia, which was conquered by May 1943 in the Tunisia campaign, ending the war in Africa. The only other significant operations occurred in the French colony of Madagascar, which was invaded by the British in May 1942 to deny its ports to the Axis (potentially the Japanese, who had reached the eastern Indian Ocean). The French garrisons in Madagascar surrendered in November 1942.

Post-war Africa: decolonization
The decolonization of Africa started with Libya in 1951, although Liberia, South Africa, Egypt, and Ethiopia were already independent. Many countries followed in the 1950s and 1960s, with a peak in 1960, the Year of Africa, which saw 17 African nations declare independence, including a large part of French West Africa. Most of the remaining countries gained independence throughout the 1960s, although some colonizers (Portugal in particular) were reluctant to relinquish sovereignty, resulting in bitter wars of independence which lasted for a decade or more. The last African countries to gain formal independence were Guinea-Bissau (1974), Mozambique (1975), and Angola (1975) from Portugal; Djibouti from France in 1977; Zimbabwe from the United Kingdom in 1980; and Namibia from South Africa in 1990. Eritrea later split off from Ethiopia in 1993.

East Africa
The Mau Mau Uprising took place in Kenya from 1952 until 1956 but was put down by British and local forces. A state of emergency remained in place until 1960. Kenya became independent in 1963, and Jomo Kenyatta served as its first president.
The early 1960s also saw the start of major clashes between the Hutus and the Tutsis in Rwanda and Burundi. In 1994 this culminated in the Rwandan genocide, a conflict in which over 800,000 people were murdered.

North Africa
Moroccan nationalism developed during the 1930s, when the Istiqlal Party was formed to push for independence. In 1953 Sultan Mohammed V of Morocco called for independence. On March 2, 1956, Morocco became independent of France, and Mohammed V became ruler of independent Morocco. In 1954, the National Liberation Front (FLN) was formed in Algeria and launched a war for independence from France. The resulting Algerian War lasted until independence negotiations in 1962. Ahmed Ben Bella was elected President of Algeria. Over a million French nationals, predominantly Pieds-Noirs, left the country, crippling the economy. In 1934, the "Neo Destour" (New Constitution) party was founded by Habib Bourguiba to push for independence in Tunisia. Tunisia became independent in 1956; its bey was deposed and Habib Bourguiba was elected President of Tunisia. Gamal Abdel Nasser helped depose the monarchy of Egypt in the Egyptian Revolution of 1952 and came to power as Prime Minister of Egypt in 1954. Muammar Gaddafi led the 1969 Libyan coup d'état which deposed Idris of Libya; Gaddafi remained in power until his death in the Libyan Civil War of 2011. Egypt was involved in several wars against Israel and was allied with other Arab countries. The first was the 1948 Arab–Israeli War, right after the state of Israel was founded. Egypt went to war again in the Six-Day War of 1967 and lost the Sinai Peninsula to Israel. It went to war yet again in the Yom Kippur War of 1973. In 1979, Egyptian President Anwar Sadat and Israeli Prime Minister Menachem Begin signed the Camp David Accords, which gave the Sinai Peninsula back to Egypt in exchange for Egyptian recognition of Israel. The accords are still in effect today.
In 1981, Sadat was assassinated by members of the Egyptian Islamic Jihad led by Khalid Islambouli. The assassins were Islamists who targeted Sadat for his signing of the Accords.

Southern Africa
In 1948 the apartheid laws were introduced in South Africa by the dominant National Party. These were largely a continuation of existing policies; the difference was the policy of "separate development" (apartheid). Where previous policies had only been disparate efforts to economically exploit the African majority, apartheid represented an entire philosophy of separate racial goals, leading to both the divisive laws of "petty apartheid" and the grander scheme of African homelands. In 1994, apartheid ended, and Nelson Mandela of the African National Congress was elected president in the South African general election of 1994, the country's first non-racial election.

Central Africa
The central regions of Africa were traditionally regarded as the regions between Kilwa and the mouth of the Zambezi river. Owing to its isolation from the coasts, this area has received minimal attention from historians of Africa. It also had one of the most varied sets of European colonial rulers, including Germany in Cameroon, Britain in the Northern Cameroons, Belgium in the Congo, and France in CAF. Owing to its terrain, among the main tropes regarding Central Africa are the traversal of its lands and its tropical nature. Since 1982, one of the main protracted issues within Central Africa has been the ongoing secessionist movement in Ambazonia. The impasse between Cameroon and Ambazonia gained steam in 1992 when Fon Gorji-Dinka filed an international lawsuit against Cameroon, claiming that the Ambazonian territories were held illegally by the latter. Fifteen years later, the stalemate escalated when Ambazonia formally declared itself the Federal Republic of Ambazonia.
West Africa
Following World War II, nationalist movements arose across West Africa, most notably in Ghana under Kwame Nkrumah. In 1957, Ghana became the first sub-Saharan colony to achieve its independence, followed the next year by France's colonies; by 1974, West Africa's nations were entirely autonomous. Since independence, many West African nations have been plagued by corruption and instability, with notable civil wars in Nigeria, Sierra Leone, Liberia, and Ivory Coast, and a succession of military coups in Ghana and Burkina Faso. Many states have failed to develop their economies despite enviable natural resources, and political instability is often accompanied by undemocratic government.

See also: 2014 Ebola virus epidemic in Sierra Leone, 2014 Ebola virus epidemic in Guinea, and 2014 Ebola virus epidemic in Liberia

Historiography of British Africa
The first historical studies in English appeared in the 1890s and followed one of four approaches. 1) The territorial narrative was typically written by a veteran soldier or civil servant who gave heavy emphasis to what he had seen. 2) The "apologia" were essays designed to justify British policies. 3) Popularizers tried to reach a large audience. 4) Compendia appeared that were designed to combine academic and official credentials. Professional scholarship appeared around 1900 and began with the study of business operations, typically using government documents and unpublished archives. The economic approach was widely practiced in the 1930s, primarily to provide descriptions of the changes underway in the previous half-century. In 1935, the American historian William L. Langer published The Diplomacy of Imperialism: 1890–1902, a book that is still widely cited. In 1939, Oxford professor Reginald Coupland published The Exploitation of East Africa, 1856–1890: The Slave Trade and the Scramble, another popular treatment.
World War II diverted most scholars to wartime projects and accounted for a pause in scholarship during the 1940s. By the 1950s many African students were studying in British universities; they produced a demand for new scholarship and began to supply it themselves. Oxford University became the main center for African studies, with activity as well at Cambridge University and the London School of Economics. The perspective of British government policymakers or international business operations slowly gave way to a new interest in the activities of the natives, especially nationalist movements and the growing demand for independence. The major breakthrough came from Ronald Robinson and John Andrew Gallagher, especially with their studies of the impact of free trade on Africa. The Oxford History of South Africa (2 vols.) attempted to synthesize the available materials. In 2013, The Oxford Handbook of Modern African History was published, bringing the scholarship up to date.

See also: Economic history of Africa, Military history of Africa, Genetic history of Africa

Historiographic and Conceptual Problems of North Africa and Sub-Saharan Africa

Historiographic and Conceptual Problems
The current major problem in African studies that Mohamed (2010/2012) identified is the inherited religious, Orientalist, colonial paradigm that European Africanists have preserved in present-day secularist, post-colonial, Anglophone African historiography. African and African-American scholars also bear some responsibility for perpetuating this European Africanist paradigm. Following conceptualizations of Africa developed by Leo Africanus and Hegel, European Africanists conceptually separated continental Africa into two racialized regions: Sub-Saharan Africa and North Africa.
Sub-Saharan Africa, as a racist geographic construction, serves as an objectified, compartmentalized region of “Africa proper”, “Africa noire”, or “Black Africa.” The African diaspora is also considered to be a part of the same racialized construction as Sub-Saharan Africa. North Africa serves as a racialized region of “European Africa”, which is conceptually disconnected from Sub-Saharan Africa, and conceptually connected to the Middle East, Asia, and the Islamic world. As a result of these racialized constructions and the conceptual separation of Africa, darker skinned North Africans, such as the so-called Haratin, who have long resided in the Maghreb and do not reside south of the Sahara, have become analogically alienated from their indigeneity and historic reality in North Africa. While the origin of the term “Haratin” remains speculative, the term may not date much earlier than the 18th century CE and has been involuntarily assigned to darker skinned Maghrebians. Prior to the modern period, the term Haratin was not used as an identifier; instead, the Arabic terms sumr/asmar and suud/aswad or sudan/sudani (black/brown), used in contrast to bidan or bayd (white), served as identifiers for darker skinned Maghrebians. “Haratin” is considered to be an offensive term by the darker skinned Maghrebians it is intended to identify; for example, people in the southern region (e.g., Wad Noun, Draa) of Morocco consider it to be an offensive term. Despite its historicity and etymology being questionable, European colonialists and European Africanists have used the term Haratin as an identifier for groups of “black” and apparently “mixed” people found in Algeria, Mauritania, and Morocco. The Saadian invasion of the Songhai Empire serves as the precursor to later narratives that grouped darker skinned Maghrebians together and identified their origins as being Sub-Saharan West Africa.
With gold serving as a motivation behind the Saadian invasion of the Songhai Empire, this made way for changes in later behaviors toward dark-skinned Africans. As a result of these changing behaviors, darker skinned Maghrebians were forcibly recruited into the army of Ismail Ibn Sharif as the Black Guard, based on the claim that they had descended from enslaved peoples from the times of the Saadian invasion. Shurafa historians of the modern period would later utilize these events in narratives about the manumission of enslaved “Hartani” (a vague term which, by merit of needing further definition, is implicit evidence that its historicity is questionable). The narratives derived from Shurafa historians would later become analogically incorporated into the Americanized narratives (e.g., the trans-Saharan slave trade, imported enslaved Sub-Saharan West Africans, darker skinned Maghrebian freedmen) of the present-day European Africanist paradigm. As opposed to having been developed through field research, the analogy in the present-day European Africanist paradigm, which conceptually alienates, dehistoricizes, and denaturalizes darker skinned North Africans in North Africa and darker skinned Africans throughout the Islamic world at large, is primarily rooted in an Americanized textual tradition inherited from 19th century European Christian abolitionists. Consequently, reliable history, as opposed to an antiquated analogy-based history, for darker skinned North Africans and darker skinned Africans in the Islamic world is limited. Part of the textual tradition generally associates an inherited status of servant with dark skin (e.g., Negro labor, Negro cultivators, Negroid slaves, freedmen). The European Africanist paradigm uses this as the primary reference point for its construction of origins narratives for darker skinned North Africans (e.g., imported slaves from Sub-Saharan West Africa).
With darker skinned North Africans or darker skinned Africans in the Islamic world treated as an allegory of alterity, another part of the textual tradition treats the trans-Saharan slave trade and their presence in these regions as that of an African diaspora in North Africa and the Islamic world. Altogether, darker skinned North Africans (e.g., “black” and apparently “mixed” Maghrebians), darker skinned Africans in the Islamic world, the inherited status of servant associated with dark skin, and the trans-Saharan slave trade are conflated and modeled in analogy with African-Americans and the trans-Atlantic slave trade. The trans-Saharan slave trade has been used as a literary device in narratives that analogically explain the origins of darker skinned North Africans in North Africa and the Islamic world. Caravans have been equated with slave ships, and the number of forcibly enslaved Africans transported across the Sahara is alleged to be comparable to the considerably large number of forcibly enslaved Africans transported across the Atlantic Ocean. The simulated narrative of comparable numbers is contradicted by the limited presence of darker skinned North Africans in the present-day Maghreb. As part of this simulated narrative, post-classical Egypt has also been characterized as having had plantations. Another part of this simulated narrative is an Orientalist construction of hypersexualized Moors, concubines, and eunuchs. Concubines in harems have been used as an explanatory bridge between the allegation of comparable numbers of forcibly enslaved Africans and the limited number of present-day darker skinned Maghrebians who have been characterized as their diasporic descendants. Eunuchs were characterized as sentinels who guarded these harems.
The simulated narrative is also based on the major assumption that the indigenous peoples of the Maghreb were once purely white Berbers, who then became biracialized through miscegenation with black concubines (existing within a geographic racial binary of pale-skinned Moors residing further north, closer to the Mediterranean region, and dark-skinned Moors residing further south, closer to the Sahara). The religious polemical narrative involving the suffering of enslaved European Christians of the Barbary slave trade has also been adapted to fit the simulated narrative of a comparable number of enslaved Africans being transported by Muslim slaver caravans, from south of the Sahara, into North Africa and the Islamic world. Despite being an inherited part of 19th century religious polemical narratives, the use of race in the secularist narrative of the present-day European Africanist paradigm has given the paradigm an appearance of possessing scientific quality. The religious polemical narratives (e.g., holy cause, hostile neologisms) of 19th century European abolitionists about Africa and Africans are silenced, but still preserved, in the secularist narratives of the present-day European Africanist paradigm. The Orientalist stereotyped hypersexuality of the Moors was viewed by 19th century European abolitionists as deriving from the Quran. The references to earlier times, often used in concert with biblical references by 19th century European abolitionists, may indicate that the realities described of Moors were literary fabrications. The purpose of these apparent literary fabrications may have been to affirm their view of the Bible as being greater than the Quran and to affirm the viewpoints held by the readers of their composed works. The adoption of the 19th century European abolitionists’ religious polemical narrative into the present-day European Africanist paradigm may have been due to its correspondence with the established textual tradition.
The use of stereotyped hypersexuality for Moors is what 19th century European abolitionists and the present-day European Africanist paradigm have in common. A lack of considerable development in field research regarding enslavement in Islamic societies has resulted in the present-day European Africanist paradigm relying on unreliable estimates for the trans-Saharan slave trade. However, insufficient data has also been used as a justification for continued use of the faulty present-day European Africanist paradigm. Darker skinned Maghrebians, particularly in Morocco, have grown weary of the lack of discretion foreign academics have shown toward them, bear resentment toward the way they have been depicted by foreign academics, and consequently find the intended activities of foreign academics to be predictable. Rather than continuing to rely on the faulty present-day European Africanist paradigm, Mohamed (2012) recommends revising and improving the current Africanist paradigm (e.g., critical inspection of the origins and introduction of the present characterization of the Saharan caravan; reconsideration of what makes the trans-Saharan slave trade, within its own context in Africa, distinct from the trans-Atlantic slave trade; realistic consideration of the experiences of darker-skinned Maghrebians within their own regional context).

Conceptual Problems
Merolla (2017) has indicated that the academic study of Sub-Saharan Africa and North Africa by Europeans developed with North Africa being conceptually subsumed within the Middle East and Arab world, whereas the study of Sub-Saharan Africa was viewed as conceptually distinct from North Africa and as its own region, viewed as inherently the same. The common pattern of conceptually separating continental Africa into two regions, and of viewing the region of Sub-Saharan Africa as conceptually the same throughout, has continued to the present day.
Yet, with increasing exposure of this problem, discussion about the conceptual separation of Africa has begun to develop. The Sahara has served as a trans-regional zone for peoples in Africa. Authors from various countries (e.g., Algeria, Cameroon, Sudan) in Africa have critiqued the conceptualization of the Sahara as a regional barrier, and provided counter-arguments supporting the interconnectedness of continental Africa; there are historic and cultural connections as well as trade between West Africa, North Africa, and East Africa (e.g., North Africa with Niger and Mali, North Africa with Tanzania and Sudan, major hubs of Islamic learning in Niger and Mali). Africa has been conceptually compartmentalized into meaning “Black Africa”, “Africa South of the Sahara”, and “Sub-Saharan Africa.” North Africa has been conceptually "Orientalized" and separated from Sub-Saharan Africa. While its historic development has occurred within a longer time frame, the epistemic development (e.g., form, content) of the present-day racialized conceptual separation of Africa came as a result of the Berlin Conference and the Scramble for Africa. In African and Berber literary studies, scholarship has remained largely separate from one another. The conceptual separation of Africa in these studies may be due to how editing policies of studies in the Anglophone and Francophone world are affected by the international politics of the Anglophone and Francophone world. While studies in the Anglophone world have more clearly followed the trend of the conceptual separation of Africa, the Francophone world has been more nuanced, which may stem from imperial policies relating to French colonialism in North Africa and Sub-Saharan Africa. 
As the study of North Africa has largely been initiated by the Arabophone and Francophone world, denial that the Arabic language has become Africanized throughout the centuries it has been present in Africa shows that the conceptual separation of Africa remains pervasive in the Francophone world; this denial may stem from the historic development of the characterization of an Islamic Arabia existing as a diametric binary to Europe. Among studies in the Francophone world, ties between North Africa and Sub-Saharan Africa have been denied or downplayed, while the ties (e.g., religious, cultural) between the regions and peoples (e.g., Arab language and literature with Berber language and literature) of the Middle East and North Africa have been established by diminishing the differences between the two and selectively focusing on the similarities between the two. In the Francophone world, construction of racialized regions, such as Black Africa (Sub-Saharan Africans) and White Africa (North Africans, e.g., Berbers and Arabs), has also developed. Despite having invoked and utilized identities referencing the racialized conceptualizations of Africa (e.g., North Africa, Sub-Saharan Africa) to oppose imposed identities, Berbers have invoked North African identity to oppose Arabized and Islamicized identities, and Sub-Saharan Africans (e.g., Negritude, Black Consciousness) and the African diaspora (e.g., Black is Beautiful) have invoked and utilized black identity to oppose colonialism and racism. While Berber studies has largely sought to establish ties between Berbers and North Africa with Arabs and the Middle East, Merolla (2017) indicated that efforts to establish ties between Berbers and North Africa with Sub-Saharan Africans and Sub-Saharan Africa have recently started to be undertaken.
See also
Economic history of Africa
African historiography
Historians of Africa
List of history journals#Africa
List of kingdoms in pre-colonial Africa
List of sovereign states and dependent territories in Africa
Outline of Africa#History of Africa
Africa–Europe relations
Africa–United States relations
Africa–China relations
Soviet Union–Africa relations

Notes

References
Akyeampong, Emmanuel, and Robert H. Bates, eds. Africa's Development in Historical Perspective (2014).
Beshah, Girma; Aregay, Merid Wolde (1964). The Question of the Union of the Churches in Luso-Ethiopian Relations (1500–1632). Lisbon: Junta de Investigações do Ultramar and Centro de Estudos Históricos Ultramarinos.
Collins, Robert O.; Burns, James M. (2007). A History of Sub-Saharan Africa. NY: Cambridge UP.
Ehret, Christopher (2002). The Civilizations of Africa. Charlottesville, Virginia: University of Virginia.
Iliffe, John (2007). Africans: The History of a Continent. 2nd ed. NY: Cambridge University Press.
Manning, Patrick (2009). The African Diaspora: A History Through Culture. NY: Columbia UP. Looks at the slave trade, the adaptation of Africans to new conditions, their struggle for freedom and equality, and the establishment of a "black" diaspora and its local influence around the world; covers 1430 to 2001.
Martin, Phyllis M., and O'Meara, Patrick (1995). Africa. 3rd ed. Bloomington: Indiana University Press.
Shillington, Kevin (2005). History of Africa. Revised 2nd ed. New York City: Palgrave Macmillan.

Further reading
Byfield, Judith A., et al., eds. Africa and World War II (Cambridge UP, 2015).
Clark, J. Desmond (1970). The Prehistory of Africa. Thames and Hudson.
Davidson, Basil (1964). The African Past. Penguin, Harmondsworth.
Devermont, Judd. "World Is Coming to Sub-Saharan Africa. Where Is the United States?" (Center for Strategic and International Studies (CSIS), 2018).
Duignan, P., and L. H. Gann.
The United States and Africa: A History (Cambridge University Press, 1984) Fage, J.D. and Roland Oliver, eds. The Cambridge History of Africa (8 vol 1975–1986) Falola, Toyin. Africa, Volumes 1–5. FitzSimons, William. "Sizing Up the 'Small Wars' of African Empire: An Assessment of the Context and Legacies of Nineteenth-Century Colonial Warfare". Journal of African Military History 2#1 (2018): 63–78. Freund, Bill (1998). The Making of Contemporary Africa, Lynne Rienner, Boulder (including a substantial "Annotated Bibliography" pp. 269–316). Herbertson, A. J. and O. J. R. Howarth. eds. The Oxford Survey Of The British Empire (6 vol 1914) on Africa; 550pp; comprehensive coverage of South Africa and British colonies July, Robert (1998). A History of the African People, (Waveland Press, 1998). Killingray, David, and Richard Rathbone, eds. Africa and the Second World War (Springer, 1986). Lamphear, John, ed. African Military History (Routledge, 2007). Obenga, Théophile (1980). Pour une Nouvelle Histoire Présence Africaine, Paris Reader, John (1997). Africa: A Biography of the Continent. Hamish Hamilton. Roberts, Stephen H. History of French Colonial Policy (1870–1925) (2 vols., 1929) vol 1 online also vol 2 online; comprehensive scholarly history Shillington, Kevin (1989). History of Africa, New York: St. Martin's. Thornton, John K. Warfare in Atlantic Africa, 1500–1800 (Routledge, 1999). UNESCO (1980–1994). General History of Africa . 8 volumes. Worden, Nigel (1995). The Making of Modern South Africa, Oxford UK, Cambridge US: Blackwell. Atlases Ajayi, A.J.F. and Michael Crowder. Historical Atlas of Africa (1985); 300 color maps. Fage, J.D. Atlas of African History (1978) Freeman-Grenville, G.S.P. The New Atlas of African History (1991). Kwamena-Poh, Michael, et al. African history in Maps (Longman, 1982). McEvedy, Colin. The Penguin Atlas of African History (2nd ed. 1996). excerpt Historiography Boyd, Kelly, ed. 
Encyclopedia of Historians and Historical Writers (Rutledge, 1999) 1:4–14. External links "Race, Evolution and the Science of Human Origins" by Allison Hopper, Scientific American (5 July 2021). Worldtimelines.org.uk – Africa The British Museum. 2005 The Historyscoper. About.com:African History. The Story of Africa BBC World Service. Wonders of the African World, PBS. Civilization of Africa by Richard Hooker, Washington State University. African Art (chunk of historical data) Metropolitan Museum of Art. African Kingdoms, by Khaleel Muhammad. Mapungubwe Museum at the University of Pretoria Project Diaspora. Kush Communications | Media Production Company London.
https://en.wikipedia.org/wiki/History%20of%20Oceania
History of Oceania
The History of Oceania includes the history of Australia, New Zealand, Hawaii, Papua New Guinea, Fiji and other Pacific island nations.

Prehistory

The prehistory of Oceania is divided into the prehistory of each of its major areas: Polynesia, Micronesia, Melanesia, and Australasia. These vary greatly as to when they were first inhabited by humans, from 70,000 years ago (Australasia) to 3,000 years ago (Polynesia).

Polynesia theories

On linguistic, archaeological and human genetic evidence, the Polynesian people are considered a subset of the sea-migrating Austronesian people, and tracing Polynesian languages places their prehistoric origins in the Malay Archipelago and, ultimately, in Taiwan. Between about 3000 and 1000 BCE, speakers of Austronesian languages began spreading from Taiwan into Island South-East Asia and on to the edges of western Micronesia and into Melanesia. Their ancestors are thought to have arrived in Taiwan via South China about 8,000 years ago, although they are distinct from the Han Chinese who now form the majority of people in China and Taiwan. There are three theories regarding the spread of humans across the Pacific to Polynesia, outlined well by Kayser et al. (2000): Express Train model: A recent (c. 3000–1000 BCE) expansion out of Taiwan, via the Philippines and eastern Indonesia and from the north-west ("Bird's Head") of New Guinea, on to Island Melanesia by roughly 1400 BCE, reaching western Polynesian islands right about 900 BCE. This theory is supported by the majority of current human genetic, linguistic, and archaeological data. Entangled Bank model: Emphasizes the long history of Austronesian speakers' cultural and genetic interactions with indigenous Island South-East Asians and Melanesians along the way to becoming the first Polynesians.
Slow Boat model: Similar to the express-train model but with a longer hiatus in Melanesia, along with genetic, cultural and linguistic admixture with the local population. This is supported by the Y-chromosome data of Kayser et al. (2000), which show that all three haplotypes of Polynesian Y chromosomes can be traced back to Melanesia. In the archaeological record there are well-defined traces of this expansion which allow the path it took to be followed and dated with some certainty. It is thought that by roughly 1400 BCE, "Lapita Peoples", so named after their pottery tradition, appeared in the Bismarck Archipelago of north-west Melanesia. This culture is seen as having adapted and evolved through time and space since its emergence "Out of Taiwan". They had given up rice production, for instance, after encountering and adapting to breadfruit in the Bird's Head area of New Guinea. The easternmost site for Lapita archaeological remains recovered so far is at Mulifanua on Upolu, found through archaeological work in Samoa. The Mulifanua site, where 4,288 pottery shards have been found and studied, has a "true" age of c. 1000 BCE based on C14 dating. A 2010 study places the beginning of the human archaeological sequences of Polynesia in Tonga at 900 BCE; the small differences in dates with Samoa are due to differences in radiocarbon dating technologies between 1989 and 2010, the Tongan site apparently predating the Samoan site by some few decades in real time. Within a mere three or four centuries, between about 1300 and 900 BCE, the Lapita archaeological culture spread 6,000 kilometres further east from the Bismarck Archipelago, until it reached as far as Fiji, Tonga, and Samoa. The area of Tonga, Fiji, and Samoa served as a gateway into the rest of the Pacific region known as Polynesia.
Ancient Tongan mythologies recorded by early European explorers report the islands of 'Ata and Tongatapu as the first islands hauled to the surface from the deep ocean by Maui. The "Tuʻi Tonga Empire" or "Tongan Empire" are descriptions sometimes given to Tongan expansionism and projected hegemony in Oceania dating back to 950 CE, at its peak during the period 1200–1500. While modern researchers and cultural experts attest to widespread Tongan influence and evidence of transoceanic trade and exchange of material and non-material cultural artifacts, empirical evidence of a true political empire ruled for any length of time by successive rulers is lacking. Modern archaeology, anthropology and linguistic studies confirm widespread Tongan cultural influence ranging widely through East 'Uvea, Rotuma, Futuna, Samoa and Niue, parts of Micronesia (Kiribati, Pohnpei), Vanuatu, and New Caledonia and the Loyalty Islands, and while some academics prefer the term "maritime chiefdom", others argue that, while very different from examples elsewhere, "empire" is probably the most convenient term. Pottery art from Fijian towns shows that Fiji was settled before or around 3500 to 1000 BC, although the question of Pacific migration still lingers. It is believed that the Lapita people or the ancestors of the Polynesians settled the islands first, but not much is known of what became of them after the Melanesians arrived; they may have had some influence on the new culture, and archaeological evidence shows that they would have then moved on to Tonga, Samoa and even Hawai'i. The first settlements in Fiji were started by voyaging traders and settlers from the west about 5,000 years ago. Lapita pottery shards have been found at numerous excavations around the country. Aspects of Fijian culture are similar to the Melanesian culture of the western Pacific but have a stronger connection to the older Polynesian cultures.
From east to west, Fiji has been a nation of many languages. Fiji's history was one of settlement but also of mobility, and over the centuries a unique Fijian culture developed. Warfare and cannibalism between tribes were rampant and very much part of everyday life. In later centuries, the ferocity of the cannibal lifestyle deterred European sailors from going near Fijian waters, giving Fiji the name Cannibal Isles; as a result, Fiji remained unknown to the rest of the world. Early European visitors to Easter Island recorded the local oral traditions of the original settlers. In these traditions, Easter Islanders claimed that a chief Hotu Matu'a arrived on the island in one or two large canoes with his wife and extended family. They are believed to have been Polynesian. There is considerable uncertainty about the accuracy of this legend as well as the date of settlement. Published literature suggests the island was settled around 300–400 CE, or at about the time of the arrival of the earliest settlers in Hawaii. Some scientists say that Easter Island was not inhabited until 700–800 CE. This date range is based on glottochronological calculations and on three radiocarbon dates from charcoal that appears to have been produced during forest-clearance activities. Moreover, a recent study which included radiocarbon dates from what is thought to be very early material suggests that the island was settled as recently as 1200 CE. This seems to be supported by a 2006 study of the island's deforestation, which could have started around the same time. A large, now extinct palm, Paschalococos disperta, related to the Chilean wine palm (Jubaea chilensis), was one of the dominant trees, as attested by fossil evidence; this species, whose sole occurrence was Easter Island, became extinct due to deforestation by the early settlers.
Micronesia theories

Micronesia began to be settled several millennia ago, although there are competing theories about the origin and arrival of the first settlers. There are numerous difficulties with conducting archaeological excavations in the islands, due to their size, settlement patterns and storm damage; as a result, much evidence is based on linguistic analysis. The earliest archaeological traces of civilization have been found on the island of Saipan, dated to 1500 BCE or slightly before. The ancestors of the Micronesians settled there over 4,000 years ago. A decentralized chieftain-based system eventually evolved into a more centralized economic and religious culture centered on Yap and Pohnpei. The prehistory of many Micronesian islands such as Yap is not well known. On Pohnpei, pre-colonial history is divided into three eras: Mwehin Kawa or Mwehin Aramas (Period of Building, or Period of Peopling, before c. 1100); Mwehin Sau Deleur (Period of the Lord of Deleur, c. 1100 to c. 1628); and Mwehin Nahnmwarki (Period of the Nahnmwarki, c. 1628 to c. 1885). Pohnpeian legend recounts that the Saudeleur rulers, the first to bring government to Pohnpei, were of foreign origin. The Saudeleur centralized form of absolute rule is characterized in Pohnpeian legend as becoming increasingly oppressive over several generations. Arbitrary and onerous demands, as well as a reputation for offending Pohnpeian deities, sowed resentment among Pohnpeians. The Saudeleur Dynasty ended with the invasion of Isokelekel, another semi-mythical foreigner, who replaced the Saudeleur rule with the more decentralized nahnmwarki system in existence today. Isokelekel is regarded as the creator of the modern Pohnpeian nahnmwarki social system and the father of the Pohnpeian people. Construction of Nan Madol, a megalithic complex made from basalt lava logs, began on Pohnpei as early as 1200 CE.
Nan Madol lies offshore of Temwen Island near Pohnpei. It consists of a series of small artificial islands linked by a network of canals and is often called the Venice of the Pacific. It was the ceremonial and political seat of the Saudeleur Dynasty that united Pohnpei's estimated 25,000 people until its centralized system collapsed amid the invasion of Isokelekel. Isokelekel and his descendants initially occupied the stone city, but later abandoned it. The first people of the Northern Mariana Islands navigated to the islands at some period between 4000 BCE and 2000 BCE from South-East Asia. They became known as the Chamorros, and spoke an Austronesian language called Chamorro. The ancient Chamorro left a number of megalithic ruins, including latte stones. The Refaluwasch or Carolinian people came to the Marianas in the 1800s from the Caroline Islands. Micronesian colonists gradually settled the Marshall Islands during the 2nd millennium BCE, with inter-island navigation made possible using traditional stick charts.

Melanesia theories

The first settlers of Australia, New Guinea, and the large islands just to the east arrived between 50,000 and 30,000 years ago, when Neanderthals still roamed Europe. The original inhabitants of the group of islands now named Melanesia were likely the ancestors of the present-day Papuan-speaking people. Migrating from South-East Asia, they appear to have occupied these islands as far east as the main islands in the Solomon Islands archipelago, including Makira and possibly the smaller islands farther to the east. Particularly along the north coast of New Guinea and in the islands north and east of New Guinea, the Austronesian people, who had migrated into the area somewhat more than 3,000 years ago, came into contact with these pre-existing populations of Papuan-speaking peoples.
In the late 20th century, some scholars theorized a long period of interaction, which resulted in many complex changes in genetics, languages, and culture among the peoples. Kayser, et al. proposed that, from this area, a very small group of people (speaking an Austronesian language) departed to the east to become the forebears of the Polynesian people. However, the theory is contradicted by the findings of a genetic study published by Temple University in 2008; based on genome scans and evaluation of more than 800 genetic markers among a wide variety of Pacific peoples, it found that neither Polynesians nor Micronesians have much genetic relation to Melanesians. Both groups are strongly related genetically to East Asians, particularly Taiwanese aborigines. It appeared that, having developed their sailing outrigger canoes, the Polynesian ancestors migrated from East Asia, moved through the Melanesian area quickly on their way, and kept going to eastern areas, where they settled. They left little genetic evidence in Melanesia. The study found a high rate of genetic differentiation and diversity among the groups living within the Melanesian islands, with the peoples distinguished by island, language, topography, and geography among the islands. Such diversity developed over their tens of thousands of years of settlement before the Polynesian ancestors ever arrived at the islands. For instance, populations developed differently in coastal areas, as opposed to those in more isolated mountainous valleys. Additional DNA analysis has taken research into new directions, as more human species have been discovered since the late 20th century. Based on his genetic studies of the Denisova hominin, an ancient human species discovered in 2010, Svante Pääbo claims that ancient human ancestors of the Melanesians interbred in Asia with these humans. He has found that people of New Guinea share 4–6% of their genome with the Denisovans, indicating this exchange. 
The Denisovans are considered cousins of the Neanderthals; both groups are now understood to have migrated out of Africa, with the Neanderthals going into Europe and the Denisovans heading east about 400,000 years ago. This is based on genetic evidence from a fossil found in Siberia. The evidence from Melanesia suggests their territory extended into south Asia, where ancestors of the Melanesians developed. Melanesians of some islands are one of the few non-European peoples, and the only dark-skinned group of people outside Australia, known to have blond hair.

Australasia theories

Indigenous Australians are the original inhabitants of the Australian continent and nearby islands. Indigenous Australians migrated from Africa to Asia around 70,000 years ago and arrived in Australia around 50,000 years ago. The Torres Strait Islanders are indigenous to the Torres Strait Islands, which are at the northernmost tip of Queensland near Papua New Guinea. The term "Aboriginal" is traditionally applied to only the indigenous inhabitants of mainland Australia and Tasmania, along with some of the adjacent islands, i.e. the "first peoples". "Indigenous Australians" is an inclusive term used when referring to both Aboriginal people and Torres Strait Islanders. The earliest definite human remains found to date are those of Mungo Man, which have been dated at about 40,000 years old, but the time of arrival of the ancestors of Indigenous Australians is a matter of debate among researchers, with estimates dating back as far as 125,000 years ago. There is great diversity among different Indigenous communities and societies in Australia, each with its own unique mixture of cultures, customs and languages. In present-day Australia these groups are further divided into local communities.

European contact and exploration (1500s–1700s)

Iberian pioneers

Early Iberian exploration

Oceania was first explored by Europeans from the 16th century onwards.
Portuguese navigators, between 1512 and 1526, reached the Moluccas (by António de Abreu and Francisco Serrão in 1512), Timor, the Aru Islands (Martim A. Melo Coutinho), the Tanimbar Islands, some of the Caroline Islands (by Gomes de Sequeira in 1525), and west Papua New Guinea (by Jorge de Menezes in 1526). In 1519 a Castilian ('Spanish') expedition led by Ferdinand Magellan sailed down the east coast of South America, found and sailed through the strait that bears his name and on 28 November 1520 entered the ocean which he named "Pacific". The three remaining ships, led by Magellan and his captains Duarte Barbosa and João Serrão, then sailed north and caught the trade winds which carried them across the Pacific to the Philippines, where Magellan was killed. One surviving ship led by Juan Sebastián Elcano returned west across the Indian Ocean and the other went north in the hope of finding the westerlies and reaching Mexico. Unable to find the right winds, it was forced to return to the East Indies. The Magellan-Elcano expedition achieved the first circumnavigation of the world and reached the Philippines, the Mariana Islands and other islands of Oceania.

Other large expeditions

From 1527 to 1595 a number of other large Spanish expeditions crossed the Pacific Ocean, leading to the discovery of the Marshall Islands and Palau in the North Pacific, as well as Tuvalu, the Marquesas, the Solomon Islands archipelago, the Cook Islands and the Admiralty Islands in the South Pacific. In 1565, Spanish navigator Andrés de Urdaneta found a wind system that would allow ships to sail eastward from Asia, back to the Americas. From then until 1815 the annual Manila Galleons crossed the Pacific from Mexico to the Philippines and back, in the first transpacific trade route in history.
Combined with the Spanish Atlantic or West Indies Fleet, the Manila Galleons formed one of the first global maritime exchange networks in human history, linking Seville in Spain with Manila in the Philippines, via Mexico. Later, in the quest for Terra Australis, Spanish explorers in the 17th century discovered the Pitcairn and Vanuatu archipelagos, and sailed the Torres Strait between Australia and New Guinea, named after navigator Luís Vaz de Torres. In 1668 the Spanish founded a colony on Guam as a resting place for west-bound galleons. For a long time this was the only non-coastal European settlement in the Pacific.

Oceania during the Golden Age of Dutch exploration and discovery

Early Dutch exploration

The Dutch were the first non-natives to undisputedly explore and chart coastlines of Australia, Tasmania, New Zealand, Tonga, Fiji, Samoa, and Easter Island. The Verenigde Oostindische Compagnie (or VOC) was a major force behind the Golden Age of Dutch exploration (c. 1590s–1720s) and Netherlandish cartography (c. 1570s–1670s). In the 17th century, the VOC's navigators and explorers charted almost three-quarters of the Australian coastline, except the east coast.

Abel Tasman's exploratory voyages

Abel Tasman was the first known European explorer to reach the islands of Van Diemen's Land (now Tasmania) and New Zealand, and to sight the Fiji islands. His navigator François Visscher and his merchant Isaack Gilsemans mapped substantial portions of Australia, New Zealand, Tonga and the Fijian islands. On 24 November 1642 Abel Tasman sighted the west coast of Tasmania, north of Macquarie Harbour. He named his discovery Van Diemen's Land after Anthony van Diemen, Governor-General of the Dutch East Indies, and claimed formal possession of the land on 3 December 1642. After some exploration, Tasman had intended to proceed in a northerly direction but as the wind was unfavourable he steered east.
On 13 December they sighted land on the north-west coast of the South Island, New Zealand, becoming the first Europeans to do so. Tasman named it Staten Landt on the assumption that it was connected to an island (Staten Island, Argentina) at the south of the tip of South America. Proceeding north and then east, he stopped to gather water, but one of his boats was attacked by Māori in a double-hulled waka (canoe), and four of his men were attacked and killed with mere (clubs). As Tasman sailed out of the bay he was again attacked, this time by 11 waka. The waka approached the Zeehaen, which fired and hit one Māori, who fell down. Canister shot hit the side of a waka. Archaeological research has shown the Dutch had tried to land at a major agricultural area, which the Māori may have been trying to protect. Tasman named the bay Murderers' Bay (now known as Golden Bay) and sailed north, but mistook Cook Strait for a bight (naming it Zeehaen's Bight). Two names he gave to New Zealand landmarks still endure, Cape Maria van Diemen and Three Kings Islands, but Kaap Pieter Boreels was renamed by Cook 125 years later to Cape Egmont. En route back to Batavia, Tasman came across the Tongan archipelago on 20 January 1643. While passing the Fiji Islands Tasman's ships came close to being wrecked on the dangerous reefs of the north-eastern part of the Fiji group. He charted the eastern tip of Vanua Levu and Cikobia before making his way back into the open sea. He eventually turned north-west to New Guinea, and arrived at Batavia on 15 June 1643. For over a century after Tasman's voyages, until the era of James Cook, Tasmania and New Zealand were not visited by Europeans; mainland Australia was visited, but usually only by accident.

British exploration and Captain James Cook's voyages

First voyage (1768–1771)

In 1766 the Royal Society engaged James Cook to travel to the Pacific Ocean to observe and record the transit of Venus across the Sun.
The expedition sailed from England on 26 August 1768, rounded Cape Horn and continued westward across the Pacific to arrive at Tahiti on 13 April 1769, where the observations of the Venus Transit were made. Once the observations were completed, Cook opened the sealed orders which were additional instructions from the Admiralty for the second part of his voyage: to search the south Pacific for signs of the postulated rich southern continent of Terra Australis. With the help of a Tahitian named Tupaia, who had extensive knowledge of Pacific geography, Cook managed to reach New Zealand on 6 October 1769, leading only the second group of Europeans to do so (after Abel Tasman over a century earlier, in 1642). Cook mapped the complete New Zealand coastline, making only some minor errors (such as calling Banks Peninsula an island, and thinking Stewart Island/Rakiura was a peninsula of the South Island). He also identified Cook Strait, which separates the North Island from the South Island, and which Tasman had not seen. Cook then voyaged west, reaching the south-eastern coast of Australia on 19 April 1770, and in doing so his expedition became the first recorded Europeans to have encountered its eastern coastline. On 23 April he made his first recorded direct observation of indigenous Australians at Brush Island near Bawley Point, noting in his journal: "…and were so near the Shore as to distinguish several people upon the Sea beach they appear'd to be of a very dark or black Colour but whether this was the real colour of their skins or the C[l]othes they might have on I know not." On 29 April Cook and crew made their first landfall on the mainland of the continent at a place now known as the Kurnell Peninsula. It is here that James Cook made first contact with an aboriginal tribe known as the Gweagal. After his departure from Botany Bay he continued northwards. 
After a grounding mishap on the Great Barrier Reef, the voyage continued, sailing through Torres Strait before returning to England via Batavia, the Cape of Good Hope, and Saint Helena.

Second voyage (1772–1775)

In 1772 the Royal Society commissioned Cook to search for the hypothetical Terra Australis again. On his first voyage, Cook had demonstrated by circumnavigating New Zealand that it was not attached to a larger landmass to the south. Although he charted almost the entire eastern coastline of Australia, showing it to be continental in size, the Terra Australis was believed by the Royal Society to lie further south. Cook commanded HMS Resolution on this voyage, while Tobias Furneaux commanded its companion ship, HMS Adventure. Cook's expedition circumnavigated the globe at an extreme southern latitude, becoming one of the first to cross the Antarctic Circle (17 January 1773). In the Antarctic fog, Resolution and Adventure became separated. Furneaux made his way to New Zealand, where he lost some of his men during an encounter with Māori, and eventually sailed back to Britain, while Cook continued to explore the Antarctic, reaching 71°10'S on 31 January 1774. Cook almost encountered the mainland of Antarctica, but turned towards Tahiti to resupply his ship. He then resumed his southward course in a second fruitless attempt to find the supposed continent. On this leg of the voyage he brought a young Tahitian named Omai, who proved to be somewhat less knowledgeable about the Pacific than Tupaia had been on the first voyage. On his return voyage to New Zealand in 1774, Cook landed at the Friendly Islands, Easter Island, Norfolk Island, New Caledonia, and Vanuatu. Before returning to England, Cook made a final sweep across the South Atlantic from Cape Horn. He then turned north to South Africa, and from there continued back to England. His reports upon his return home put to rest the popular myth of Terra Australis.
Third voyage (1776–1779)

On his last voyage, Cook again commanded HMS Resolution, while Captain Charles Clerke commanded HMS Discovery. The voyage was ostensibly planned to return the Pacific Islander Omai to Tahiti, or so the public were led to believe. The trip's principal goal was to locate a North-West Passage around the American continent. After dropping Omai at Tahiti, Cook travelled north and in 1778 became the first European to visit the Hawaiian Islands. After his initial landfall in January 1778 at Waimea harbour, Kauai, Cook named the archipelago the "Sandwich Islands" after the fourth Earl of Sandwich, the acting First Lord of the Admiralty. From the Sandwich Islands Cook sailed north and then north-east to explore the west coast of North America north of the Spanish settlements in Alta California. Cook explored and mapped the coast all the way to the Bering Strait, on the way identifying what came to be known as Cook Inlet in Alaska. In a single visit, Cook charted the majority of the North American north-west coastline on world maps for the first time, determined the extent of Alaska, and closed the gaps in Russian (from the west) and Spanish (from the south) exploratory probes of the northern limits of the Pacific. Cook returned to Hawaii in 1779. After sailing around the archipelago for some eight weeks, he made landfall at Kealakekua Bay on Hawaii Island, the largest island in the Hawaiian archipelago. Cook's arrival coincided with the Makahiki, a Hawaiian harvest festival of worship for the Polynesian god Lono. Coincidentally the form of Cook's ship, HMS Resolution, or more particularly the mast formation, sails and rigging, resembled certain significant artefacts that formed part of the season of worship. Similarly, Cook's clockwise route around the island of Hawaii before making landfall resembled the processions that took place in a clockwise direction around the island during the Lono festivals.
It has been argued (most extensively by Marshall Sahlins) that such coincidences were the reasons for Cook's (and to a limited extent, his crew's) initial deification by some Hawaiians who treated Cook as an incarnation of Lono. Though this view was first suggested by members of Cook's expedition, the idea that any Hawaiians understood Cook to be Lono, and the evidence presented in support of it, were challenged in 1992. After a month's stay, Cook resumed his exploration of the Northern Pacific. Shortly after leaving Hawaii Island, however, the Resolution foremast broke, so the ships returned to Kealakekua Bay for repairs. Tensions rose, and a number of quarrels broke out between the Europeans and Hawaiians. On 14 February 1779, at Kealakekua Bay, some Hawaiians took one of Cook's small boats. As thefts were quite common in Tahiti and the other islands, Cook would have taken hostages until the stolen articles were returned. He attempted to take as hostage the King of Hawaiʻi, Kalaniʻōpuʻu. The Hawaiians prevented this, and Cook's men had to retreat to the beach. As Cook turned his back to help launch the boats, he was struck on the head by the villagers and then stabbed to death as he fell on his face in the surf. Hawaiian tradition says that he was killed by a chief named Kalaimanokahoʻowaha or Kanaʻina. The Hawaiians dragged his body away. Four of Cook's men were also killed and two others were wounded in the confrontation. The esteem which the islanders nevertheless held for Cook caused them to retain his body. Following their practice of the time, they prepared his body with funerary rituals usually reserved for the chiefs and highest elders of the society. The body was disembowelled, baked to facilitate removal of the flesh, and the bones were carefully cleaned for preservation as religious icons in a fashion somewhat reminiscent of the treatment of European saints in the Middle Ages. 
Some of Cook's remains, thus preserved, were eventually returned to his crew for a formal burial at sea. Clerke assumed leadership of the expedition. Following the death of Clerke, Resolution and Discovery returned home in October 1780 commanded by John Gore, a veteran of Cook's first voyage, and Captain James King. After their arrival in England, King completed Cook's account of the voyage.

Colonisation

British colonisation

In 1789 the Mutiny on the Bounty against William Bligh led to several of the mutineers escaping the Royal Navy and settling on the Pitcairn Islands, which later became a British colony. Britain also established colonies in Australia in 1788, New Zealand in 1840 and Fiji in 1872, with much of Oceania becoming part of the British Empire. The Gilbert Islands (now known as Kiribati) and the Ellice Islands (now known as Tuvalu) came under Britain's sphere of influence in the late 19th century. The Ellice Islands were administered as a British protectorate by a Resident Commissioner from 1892 to 1916 as part of the British Western Pacific Territories (BWPT), and later as part of the Gilbert and Ellice Islands colony from 1916 to 1974. Among the last islands in Oceania to be colonised was Niue (1900). In 1887, King Fata-a-iki, who reigned over Niue from 1887 to 1896, offered to cede sovereignty to the British Empire, fearing the consequences of annexation by a less benevolent colonial power. The offer was not accepted until 1900. Niue was a British protectorate, but the UK's direct involvement ended in 1901 when New Zealand annexed the island.

French colonisation

French Catholic missionaries arrived on Tahiti in 1834; their expulsion in 1836 caused France to send a gunboat in 1838. In 1842, Tahiti and Tahuata were declared a French protectorate, to allow Catholic missionaries to work undisturbed. The capital of Papeetē was founded in 1843. In 1880, France annexed Tahiti, changing the status from that of a protectorate to that of a colony.
On 24 September 1853, under orders from Napoleon III, Admiral Febvrier Despointes took formal possession of New Caledonia, and Port-de-France (Nouméa) was founded on 25 June 1854. A few dozen free settlers settled on the west coast in the following years. New Caledonia became a penal colony, and from the 1860s until the end of the transportations in 1897, about 22,000 criminals and political prisoners were sent to New Caledonia, among them many Communards, including Henri de Rochefort and Louise Michel. Between 1873 and 1876, 4,200 political prisoners were "relegated" to New Caledonia. Only forty of them settled in the colony; the rest returned to France after being granted amnesty in 1879 and 1880. In the 1880s, France claimed the Tuamotu Archipelago, which formerly belonged to the Pōmare Dynasty, without formally annexing it. Having declared a protectorate over Tahuata in 1842, the French regarded the entire Marquesas Islands as French. In 1885, France appointed a governor and established a general council, thus giving the islands the proper administration of a colony. The islands of Rimatara and Rūrutu unsuccessfully lobbied for British protection in 1888, so in 1889 they were annexed by France. Postage stamps were first issued in the colony in 1892. The first official name for the colony was Établissements de l'Océanie (Settlements in Oceania); in 1903 the general council was changed to an advisory council and the colony's name was changed to Établissements Français de l'Océanie (French Settlements in Oceania). Spanish colonization The Spanish explorer Alonso de Salazar landed in the Marshall Islands in 1529. They were later named by Krusenstern, after English explorer John Marshall, who visited them together with Thomas Gilbert in 1788, en route from Botany Bay to Canton (two ships of the First Fleet). The Marshall Islands were claimed by Spain in 1874. 
In November 1770, Felipe González de Ahedo commanded an expedition from the Viceroyalty of Peru that searched for Davis Land and Madre de Dios Island and looked for foreign naval activities. This expedition landed on Isla de San Carlos (Easter Island) and signed a treaty of annexation with the Rapa Nui chiefs. Dutch colonization In 1606 Luís Vaz de Torres explored the southern coast of New Guinea from Milne Bay to the Gulf of Papua, including Orangerie Bay, which he named Bahía de San Lorenzo. His expedition also discovered Basilaki Island, naming it Tierra de San Buenaventura, which he claimed for Spain in July 1606. On 18 October his expedition reached the western part of the island in present-day Indonesia, and also claimed the territory for the King of Spain. A subsequent European claim occurred in 1828, when the Netherlands formally claimed the western half of the island as Netherlands New Guinea. In 1883, following a short-lived French annexation of New Ireland, the British colony of Queensland annexed south-eastern New Guinea. However, the Queensland government's superiors in the United Kingdom revoked the claim, and (formally) assumed direct responsibility in 1884, when Germany claimed north-eastern New Guinea as the protectorate of German New Guinea (also called Kaiser-Wilhelmsland). The first Dutch government posts were established in 1898 and in 1902: Manokwari on the north coast, Fak-Fak in the west and Merauke in the south at the border with British New Guinea. The German, Dutch and British colonial administrators each attempted to suppress the still-widespread practices of inter-village warfare and headhunting within their respective territories. In 1905 the British government transferred some administrative responsibility over south-east New Guinea to Australia (which renamed the area "Territory of Papua"); and in 1906, transferred all remaining responsibility to Australia. 
During World War I, Australian forces seized German New Guinea, which in 1920 became the Territory of New Guinea, to be administered by Australia under a League of Nations mandate. The territories under Australian administration became collectively known as The Territories of Papua and New Guinea (until February 1942). German colonization Germany established colonies in New Guinea in 1884, and Samoa in 1900. Following papal mediation and German compensation of $4.5 million, Spain recognized a German claim in 1885. Germany established a protectorate and set up trading stations on the islands of Jaluit and Ebon to carry out the flourishing copra (dried coconut meat) trade. Marshallese Iroij (high chiefs) continued to rule under indirect colonial German administration. American colonization The United States also expanded into the Pacific, beginning with Baker Island and Howland Island in 1857, and with Hawaii becoming a U.S. territory in 1898. Disagreements between the US, Germany and UK over Samoa led to the Tripartite Convention of 1899. Samoa aligned its interests with the United States in a Deed of Cession, signed by the Tui Manúʻa (supreme chief of Manúʻa) on 16 July 1904 at the Crown residence of the Tuimanuʻa called the Faleula in the place called Lalopua (from Official documents of the Tuimanuʻa government, 1893; Office of the Governor, 2004). Cession followed the Tripartite Convention of 1899 that partitioned the eastern islands of Samoa (including Tutuila and the Manúʻa Group) from the western islands of Samoa (including ʻUpolu and Savaiʻi). Japanese colonization At the beginning of World War I, Japan assumed control of the Marshall Islands. The Japanese headquarters was established at the German center of administration, Jaluit. On 31 January 1944, during World War II, American forces landed on Kwajalein atoll and U.S. 
Marines and Army troops later took control of the islands from the Japanese on 3 February, following intense fighting on Kwajalein and Enewetak atolls. In 1947, the United States, as the occupying power, entered into an agreement with the UN Security Council to administer much of Micronesia, including the Marshall Islands, as the Trust Territory of the Pacific Islands. During World War II, Japan occupied many Oceanian colonies, wresting control from Western powers. Samoan Crisis 1887–1889 The Samoan Crisis was a standoff between the United States, Imperial Germany and Great Britain from 1887 to 1889 over control of the Samoan Islands during the Samoan Civil War. The prime minister of the Kingdom of Hawaii, Walter M. Gibson, had long aimed to establish an empire in the Pacific. In 1887 his government sent the "homemade battleship" Kaimiloa to Samoa looking for an alliance against colonial powers. The venture ended in suspicion from the German Navy and embarrassment over the conduct of the crew. The 1889 incident involved three American warships and three German warships, SMS Adler, SMS Olga, and SMS Eber, keeping each other at bay over several months in Apia harbor, which was monitored by the British warship Calliope. The standoff ended on 15 and 16 March when a cyclone wrecked all six warships in the harbor. Calliope was able to escape the harbor and survived the storm. Robert Louis Stevenson witnessed the storm and its aftermath at Apia and later wrote about what he saw. The Samoan Civil War continued, involving Germany, the United States and Britain, eventually resulting, via the Tripartite Convention of 1899, in the partition of the Samoan Islands into American Samoa and German Samoa. World War I The Asian and Pacific Theatre of World War I was a conquest of German colonial possessions in the Pacific Ocean and China. 
The most significant military action was the Siege of Tsingtao in what is now China, but smaller actions were also fought at the Battle of Bita Paka and the Siege of Toma in German New Guinea. All other German and Austrian possessions in Asia and the Pacific fell without bloodshed. Naval warfare was common; all of the colonial powers had naval squadrons stationed in the Indian or Pacific Oceans. These fleets operated by supporting the invasions of German-held territories and by destroying the East Asia Squadron. One of the first land offensives in the Pacific theatre was the Occupation of German Samoa in August 1914 by New Zealand forces. The campaign to take Samoa ended without bloodshed after over 1,000 New Zealanders landed on the German colony, supported by an Australian and French naval squadron. Australian forces attacked German New Guinea in September 1914: 500 Australians encountered 300 Germans and native policemen at the Battle of Bita Paka; the Allies won the day and the Germans retreated to Toma. A company of Australians and a British warship besieged the Germans and their colonial subjects, ending with a German surrender. After the fall of Toma, only minor German forces were left in New Guinea and these generally capitulated once met by Australian forces. In December 1914, one German officer near Angorum attempted to resist the occupation with thirty native police, but his force deserted him after they fired on an Australian scouting party, and he was subsequently captured. German Micronesia, the Marianas, the Carolines and the Marshall Islands also fell to Allied forces during the war. World War II The Pacific front saw major action during the Second World War, mainly between the belligerents Japan and the United States. The attack on Pearl Harbor was a surprise military strike conducted by the Imperial Japanese Navy against the United States naval base at Pearl Harbor, Hawaii, on the morning of 7 December 1941 (8 December in Japan). 
The attack led to the United States' entry into World War II. The attack was intended as a preventive action in order to keep the U.S. Pacific Fleet from interfering with military actions the Empire of Japan was planning in South-East Asia against overseas territories of the United Kingdom, the Netherlands, and the United States. There were simultaneous Japanese attacks on the U.S.-held Philippines and on the British Empire in Malaya, Singapore, and Hong Kong. The Japanese subsequently invaded New Guinea, the Solomon Islands and other Pacific islands. The Japanese were turned back at the Battle of the Coral Sea and the Kokoda Track campaign before they were finally defeated in 1945. Some of the most prominent Oceanic battlegrounds were the Solomon Islands campaign, the Air raids on Darwin, the Kokoda Track, and the Borneo campaign. In 1940 the administration of French Polynesia recognised the Free French Forces and many Polynesians served in World War II. Unknown at the time to the French and Polynesians, the Konoe Cabinet in Imperial Japan on 16 September 1940 included French Polynesia among the many territories which were to become Japanese possessions in the post-war world—though in the course of the war in the Pacific the Japanese were not able to launch an actual invasion of the French islands. Solomon Islands campaign Some of the most intense fighting of the Second World War occurred in the Solomons. The most significant of the Allied Forces' operations against the Japanese Imperial Forces was launched on 7 August 1942, with simultaneous naval bombardments and amphibious landings on the Florida Islands at Tulagi and Red Beach on Guadalcanal. The Guadalcanal Campaign became an important and bloody campaign fought in the Pacific War as the Allies began to repulse Japanese expansion. 
Of strategic importance during the war were the coastwatchers operating in remote locations, often on Japanese-held islands, providing early warning and intelligence of Japanese naval, army and aircraft movements during the campaign. "The Slot" was a name for New Georgia Sound, when it was used by the Tokyo Express to supply the Japanese garrison on Guadalcanal. Of more than 36,000 Japanese on Guadalcanal, about 26,000 were killed or missing, 9,000 died of disease, and 1,000 were captured. Kokoda Track campaign The Kokoda Track campaign was a series of battles fought between July and November 1942 between Japanese and Allied—primarily Australian—forces in what was then the Australian territory of Papua. Following a landing near Gona, on the north coast of New Guinea, Japanese forces attempted to advance south overland through the mountains of the Owen Stanley Range to seize Port Moresby as part of a strategy of isolating Australia from the United States. Initially only limited Australian forces were available to oppose them, and after making rapid progress the Japanese South Seas Force clashed with understrength Australian forces at Awala, forcing them back to Kokoda. A number of Japanese attacks were subsequently fought off by the Australian Militia, yet the Australians began to withdraw over the Owen Stanley Range, down the Kokoda Track. In sight of Port Moresby itself, the Japanese began to run out of momentum against the Australians, who began to receive further reinforcements. Having outrun their supply lines and following the reverses suffered at Guadalcanal, the Japanese were now on the defensive, marking the limit of the Japanese advance southwards. The Japanese subsequently withdrew to establish a defensive position on the north coast, but they were followed by the Australians, who recaptured Kokoda on 2 November. 
Further fighting continued into November and December as the Australian and United States forces assaulted the Japanese beachheads, in what later became known as the Battle of Buna–Gona. Nuclear testing in Oceania Due to its low population, Oceania was a popular location for atmospheric and underground nuclear tests. Tests were conducted in various locations by the United Kingdom (Operation Grapple and Operation Antler), the United States (Bikini atoll and the Marshall Islands) and France (Moruroa), often with devastating consequences for the inhabitants. From 1946 to 1958, the Marshall Islands served as the Pacific Proving Grounds for the United States, and was the site of 67 nuclear tests on various atolls. The world's first hydrogen bomb, codenamed "Mike", was tested at the Enewetak atoll in the Marshall Islands on 1 November (local date) in 1952, by the United States. In 1954, fallout from the American Castle Bravo hydrogen bomb test in the Marshall Islands was such that the inhabitants of the Rongelap Atoll were forced to abandon their island. Three years later the islanders were allowed to return, but suffered abnormally high levels of cancer. They were evacuated again in 1985 and in 1996 given $45 million in compensation. A series of British tests were also conducted in the 1950s at Maralinga in South Australia, forcing the removal of the Pitjantjatjara and Yankunytjatjara peoples from their ancestral homelands. In 1962, Algeria, the site of France's early nuclear tests, became independent, and the atoll of Moruroa in the Tuamotu Archipelago was selected as the new testing site. Moruroa atoll became notorious as a site of French nuclear testing, primarily because tests were carried out there after most Pacific testing had ceased. These tests were opposed by most other nations in Oceania. The last atmospheric test was conducted in 1974, and the last underground test in 1996. 
French nuclear testing in the Pacific was controversial in the 1980s; in 1985 French agents sank the Rainbow Warrior in Auckland to prevent it from arriving at the test site at Moruroa. In September 1995, France stirred up widespread protests by resuming nuclear testing at Fangataufa atoll after a three-year moratorium. The last test was on 27 January 1996. On 29 January 1996, France announced that it would accede to the Comprehensive Test Ban Treaty, and no longer test nuclear weapons. Fijian coups Fiji has suffered several coups d'état: military in 1987 and 2006 and civilian in 2000. All were ultimately due to ethnic tension between indigenous Fijians and Indo-Fijians, who originally came to the islands as indentured labour in the late nineteenth and early twentieth century. The 1987 coup followed the election of a multi-ethnic coalition, which Lieutenant Colonel Sitiveni Rabuka overthrew, claiming racial discrimination against ethnic Fijians. The coup was denounced by the United Nations and Fiji was expelled from the Commonwealth of Nations. The 2000 coup was essentially a repeat of the 1987 affair, although it was led by the civilian George Speight, apparently with military support. Commodore Frank Bainimarama, who was opposed to Speight, then took over and appointed a new Prime Minister. Speight was later tried and convicted of treason. Many indigenous Fijians were unhappy at the treatment of Speight and his supporters, feeling that the coup had been legitimate. In 2006 the Fijian parliament attempted to introduce a series of bills which would have, amongst other things, pardoned those involved in the 2000 coup. Bainimarama, concerned that the legal and racial injustices of the previous coups would be perpetuated, staged his own coup. It was internationally condemned, and Fiji was again suspended from the Commonwealth. 
In 2006 the then Australian Defence Minister, Brendan Nelson, warned Fijian officials that an Australian naval fleet was stationed near Fiji and would respond to any attacks against Australian citizens. Bougainville Civil War The Australian government estimated that anywhere between 15,000 and 20,000 people could have died in the Bougainville Civil War. More conservative estimates put the number of combat deaths at between 1,000 and 2,000. From 1975, there were attempts by the Bougainville Province to secede from Papua New Guinea. These were resisted by Papua New Guinea primarily because of the presence in Bougainville of the Panguna mine, which was vital to Papua New Guinea's economy. The Bougainville Revolutionary Army began attacking the mine in 1988, forcing its closure the following year. Further BRA activity led to the declaration of a state of emergency, and the conflict continued until about 2005, when secessionist leader and self-proclaimed King of Bougainville Francis Ona died of malaria. Peacekeeping troops led by Australia have been in the region since the late 1990s, and a non-binding referendum on independence was held in 2019. Modern age In 1946, French Polynesians were granted French citizenship and the islands' status was changed to an overseas territory; the islands' name was changed in 1957 to Polynésie Française (French Polynesia). Australia and New Zealand became dominions in the 20th century, adopting the Statute of Westminster in 1942 and 1947 respectively, marking their legislative independence from the United Kingdom. Hawaii became a U.S. state in 1959. Samoa became the first Pacific nation to gain independence in 1962; Fiji and Tonga became independent in 1970, with many other nations following in the 1970s and 1980s. The South Pacific Forum was founded in 1971, which became the Pacific Islands Forum in 2000. 
Bougainville Island, geographically part of the Solomon Islands archipelago but politically part of Papua New Guinea, tried unsuccessfully to become independent in 1975, and a civil war followed in the early 1990s, with it later being granted autonomy. On 1 May 1979, in recognition of the evolving political status of the Marshall Islands, the United States recognized the constitution of the Marshall Islands and the establishment of the Government of the Republic of the Marshall Islands. The constitution incorporates both American and British constitutional concepts. In 1977, French Polynesia was granted partial internal autonomy; in 1984, the autonomy was extended. French Polynesia became a full overseas collectivity of France in 2004. Between 2001 and 2007 Australia's Pacific Solution policy transferred asylum seekers to several Pacific nations, including the Nauru detention centre. Australia, New Zealand and other nations took part in the Regional Assistance Mission to Solomon Islands from 2003 after a request for aid. See also Europeans in Oceania History of Australia History of Bougainville History of New Zealand History of Solomon Islands History of the Pacific Islands List of countries and islands by first human settlement List of Oceanian cuisines
Hanseatic League
The Hanseatic League was a medieval commercial and defensive confederation of merchant guilds and market towns in central and northern Europe. Growing from a few north German towns in the late 12th century, the League ultimately encompassed nearly 200 settlements across seven modern-day countries; at its height between the 13th and 15th centuries, it stretched from the Netherlands in the west to Russia in the east, and from Estonia in the north to Kraków, Poland, in the south. The League originated from various loose associations of German traders and towns formed to advance mutual commercial interests, such as protection against piracy and banditry. These arrangements gradually coalesced into the Hanseatic League, whose traders enjoyed duty-free treatment, protection, and diplomatic privileges in affiliated communities and their trade routes. Hanseatic cities gradually developed a common legal system governing their merchants and goods, even operating their own armies for mutual defense and aid. Reduced barriers to trade resulted in mutual prosperity, which fostered economic interdependence, kinship ties between merchant families, and deeper political integration; these factors solidified the League into a cohesive political organization by the end of the 13th century. During the peak of its power, the Hanseatic League had a virtual monopoly over maritime trade in the North and Baltic seas. Its commercial reach extended as far as Portugal, England, Novgorod, and Venice, with trading posts, factories, and mercantile "branches" established in numerous towns and cities across Europe. Hanseatic merchants were widely renowned for their access to a variety of commodities and manufactured goods, subsequently gaining privileges and protections abroad, including extraterritorial districts in foreign realms that operated almost exclusively under Hanseatic law. 
This collective economic influence made the League a powerful force, capable of imposing blockades and even waging war against kingdoms and principalities. Even at its zenith, the Hanseatic League was never more than a loosely aligned confederation of city-states. It lacked a permanent administrative body, treasury, and standing military force; only a very small number of members enjoyed autonomy and liberties comparable to those of neighboring free imperial cities. By the mid-16th century, these tenuous connections left the Hanseatic League vulnerable to rising competitors such as England, the Netherlands, and Russia. External pressures steadily eroded the confederation's unity, while rising local parochialism and political disputes from within frustrated the League's foundational principles of common purpose and mutuality. The League gradually unraveled as members departed or became consolidated into other realms, ultimately disintegrating in 1669. Despite its inherent structural weaknesses, the Hanseatic League managed to endure and thrive for centuries under a quasi-legislative diet that operated on deliberation and consensus. Members united on the basis of mutual interest and comity, working together to pool resources, raise levies, and amicably resolve disputes to further common goals. The League's long-lived success and unity during a period of political upheaval and fragmentation has led to its being described as the most successful trade alliance in history, while its unique governance structure has been identified as a precursor to the supranational model of the European Union. Etymology Although some historians identify Hanse as originally meaning An-See, or "on the sea", it is the Old High German word for a band or troop. This word was applied to bands of merchants traveling between the Hanseatic cities — whether by land or by sea. Hanse in Middle Low German came to mean a society of merchants or a trader guild. 
History Exploratory trading adventures, raids, and piracy occurred early throughout the Baltic Sea; the sailors of Gotland sailed up rivers as far away as Novgorod. Scandinavians led international trade in the Baltic area before the Hanseatic League, establishing major trading hubs at Birka, Haithabu, and Schleswig by the 9th century CE. The later Hanseatic ports between Mecklenburg and Königsberg (present-day Kaliningrad) originally formed part of the Scandinavian-led Baltic trade-system. Historians generally trace the origins of the Hanseatic League to the rebuilding of the north German town of Lübeck in 1159 by the powerful Henry the Lion, Duke of Saxony and Bavaria, after he had captured the area from Adolf II, Count of Schauenburg and Holstein. More recent scholarship has deemphasized the focus on Lübeck due to its having been designed as one of several regional trading centers. German cities achieved domination of trade in the Baltic with striking speed during the 13th century, and Lübeck became a central node in the seaborne trade that linked the areas around the North and Baltic seas. The hegemony of Lübeck peaked during the 15th century. Foundation and formation Lübeck became a base for merchants from Saxony and Westphalia trading eastward and northward. Well before the term Hanse appeared in a document in 1267, merchants in different cities began to form guilds, or Hansa, with the intention of trading with towns overseas, especially in the economically less-developed eastern Baltic. This area could supply timber, wax, amber, resins, and furs, along with rye and wheat brought down on barges from the hinterland to port markets. The towns raised their own armies, with each guild required to provide levies when needed. The Hanseatic cities came to the aid of one another, and commercial ships often had to be used to carry soldiers and their arms. Visby (on the island of Gotland) functioned as the leading centre in the Baltic before the Hansa. 
Sailing east, Visby merchants established a trading post at Novgorod called Gutagard (also known as Gotenhof) in 1080. Merchants from northern Germany also stayed there in the early period of the Gotlander settlement. Later, in the first half of the 13th century, they established their own trading station in Novgorod, known as the Peterhof, which was further up-river. In 1229 German merchants at Novgorod were granted certain privileges that made their positions more secure. The granting of privileges was enacted by the current ruler of Novgorod, a Rus' prince, Michael of Chernigov. Hansa societies worked to remove restrictions on trade for their members. The earliest extant documentary mention, although without a name, of a specific German commercial federation dates from 1157 in London. That year, the merchants of the Hansa in Cologne convinced King Henry II of England to exempt them from all tolls in London and allow them to trade at fairs throughout England. The "Queen of the Hansa", Lübeck, where traders were required to trans-ship goods between the North Sea and the Baltic, gained imperial privileges to become a free imperial city in 1226, as had Hamburg in 1189. In 1241 Lübeck, which had access to the Baltic and North seas' fishing grounds, formed an alliance—a precursor to the League—with Hamburg, another trading city, which controlled access to salt-trade routes from Lüneburg. The allied cities gained control over most of the salt-fish trade, especially the Scania Market; Cologne joined them in the Diet of 1260. In 1266 King Henry III of England granted the Lübeck and Hamburg Hansa a charter for operations in England, and the Cologne Hansa joined them in 1282 to form the most powerful Hanseatic colony in London. Much of the drive for this co-operation came from the fragmented nature of existing territorial governments, which failed to provide security for trade. 
Over the next 50 years, the Hansa solidified with formal agreements for confederation and co-operation covering the west and east trade routes. The principal city and linchpin remained Lübeck; with the first general diet of the Hansa held there in 1356, the Hanseatic League acquired an official structure. Commercial expansion Lübeck's location on the Baltic provided access for trade with Scandinavia and Kievan Rus' (with its sea-trade center, Veliky Novgorod), putting it in direct competition with the Scandinavians who had previously controlled most of the Baltic trade-routes. A treaty with the Visby Hansa put an end to this competition: through this treaty the Lübeck merchants gained access to the inland Russian port of Novgorod, where they built a trading post or Kontor (literally: "office"). Although such alliances formed throughout the Holy Roman Empire, the league never became a closely managed formal organisation. Assemblies of the Hanseatic towns met irregularly in Lübeck for a Hansetag (Hanseatic Diet) from 1356 onwards, but many towns chose not to attend nor to send representatives, and decisions were not binding on individual cities. Over the period, a network of alliances grew to include a flexible roster of 70 to 170 cities. The league succeeded in establishing additional Kontors in Bruges (Flanders), Bergen (Norway), and London (England). These trading posts became significant enclaves. The London Kontor, first alluded to in summer 1189, when it arranged the purchase of a replacement cog-ship for crusaders from Lübeck, was formally established in 1320 and stood west of London Bridge near Upper Thames Street, on the site now occupied by Cannon Street station. It grew into a significant walled community with its own warehouses, weighhouse, church, offices and houses, reflecting the importance and scale of trading activity on the premises. The first reference to it as the Steelyard (der Stahlhof) occurs in 1422. 
Starting with trade in coarse woollen fabrics, the Hanseatic League had the effect of bringing both commerce and industry to northern Germany. As trade increased, newer and finer woollen and linen fabrics, and even silks, were manufactured in northern Germany. The same refinement of products out of cottage industry occurred in other fields, e.g. etching, wood carving, armour production, engraving of metals, and wood-turning. The century-long monopolization of sea navigation and trade by the Hanseatic League ensured that the Renaissance arrived in northern Germany long before it did in the rest of Europe. A legacy of the period is a regional style of architecture known as the Weser Renaissance, typified by the embellished facade added to the Bremen Rathaus in 1612. In addition to the major Kontors, individual Hanseatic ports had a representative merchant and warehouse. In England this happened in Boston, Bristol, Bishop's Lynn (now King's Lynn, which features the sole remaining Hanseatic warehouse in England), Hull, Ipswich, Norwich, Yarmouth (now Great Yarmouth), and York. The league primarily traded timber, furs, resin (or tar), flax, honey, wheat, and rye from the east to Flanders and England, with cloth (and, increasingly, manufactured goods) going in the other direction. Metal ore (principally copper and iron) and herring came southwards from Sweden. German colonists in the 12th and 13th centuries settled in numerous cities on and near the east Baltic coast, such as Elbing (Elbląg), Thorn (Toruń), Reval (Tallinn), Riga, and Dorpat (Tartu), which became members of the Hanseatic League, and some of which still retain many Hansa buildings and bear the style of their Hanseatic days. Most were granted Lübeck law (Lübisches Recht), after the league's most prominent town. The law provided that they had to appeal in all legal matters to Lübeck's city council. 
The Livonian Confederation of 1435 incorporated modern-day Estonia and parts of Latvia and had its own Hanseatic parliament (diet); all of its major towns became members of the Hanseatic League. The dominant language of trade was Middle Low German, a dialect that had a significant impact on the languages of countries involved in the trade, particularly the larger Scandinavian languages, Estonian, and Latvian. Zenith The league had a fluid structure, but its members shared some characteristics; most of the Hansa cities either started as independent cities or gained independence through the collective bargaining power of the league, though such independence remained limited. The Hanseatic free cities owed allegiance directly to the Holy Roman Emperor, without any intermediate family tie of obligation to the local nobility. Another similarity involved the cities' strategic locations along trade routes. At the height of their power in the late-14th century, the merchants of the Hanseatic League succeeded in using their economic power and, sometimes, their military might—trade routes required protection and the league's ships sailed well-armed—to influence imperial policy. The league also wielded power abroad. Between 1361 and 1370 it waged war against Denmark. Initially unsuccessful, Hanseatic towns in 1368 allied in the Confederation of Cologne, sacked Copenhagen and Helsingborg, and forced Valdemar IV, King of Denmark, and his son-in-law Haakon VI, King of Norway, to grant the league 15% of the profits from Danish trade in the subsequent peace treaty of Stralsund in 1370, thus gaining an effective trade and economic monopoly in Scandinavia. This favourable treaty marked the height of Hanseatic power. After the Danish-Hanseatic War and the Bombardment of Copenhagen, the Treaty of Vordingborg renewed the commercial privileges in 1435. The Hansa also waged a vigorous campaign against pirates.
Between 1392 and 1440, the league's maritime trade faced danger from raids of the Victual Brothers and their descendants, privateers hired in 1392 by Albert of Mecklenburg, King of Sweden, against Margaret I, Queen of Denmark. In the Dutch–Hanseatic War (1438–1441), the merchants of Amsterdam sought and eventually won free access to the Baltic and broke the Hanseatic monopoly. As an essential part of protecting their investment in ships and their cargoes, the League trained pilots and erected lighthouses. Most foreign cities confined the Hanseatic traders to certain trading areas and to their own trading posts. They seldom interacted with the local inhabitants, except when doing business. Many locals, merchant and noble alike, envied the power of the League and tried to diminish it. For example, in London, the local merchants exerted continuing pressure for the revocation of privileges. The refusal of the Hansa to offer reciprocal arrangements to their English counterparts exacerbated the tension. King Edward IV of England reconfirmed the league's privileges in the Treaty of Utrecht despite the latent hostility, in part thanks to the significant financial contribution the League made to the Yorkist side during the Wars of the Roses of 1455–1487. In 1597 Queen Elizabeth of England expelled the League from London, and the Steelyard closed the following year. Tsar Ivan III of Russia closed the Hanseatic Kontor at Novgorod in 1494. The very existence of the League and its privileges and monopolies created economic and social tensions that often spilled over into rivalries between League members. Rise of rival powers The economic crises of the late 15th century did not spare the Hansa. Nevertheless, its eventual rivals emerged in the form of the territorial states, whether new or revived, and not just in the west: Ivan III, Grand Prince of Moscow, ended the entrepreneurial independence of Hansa's Novgorod Kontor in 1478—it closed completely and finally in 1494.
New vehicles of credit were imported from Italy, where double-entry book-keeping had been formalized in print in 1494, and outpaced the Hansa economy, in which silver coins changed hands rather than bills of exchange. In the 15th century, tensions between the Prussian region and the "Wendish" cities (Lübeck and its eastern neighbours) increased. Lübeck was dependent on its role as centre of the Hansa, being on the shore of the sea without a major river. It was on the entrance of the land route to Hamburg, but this land route could be bypassed by sea travel around Denmark and through the Kattegat. Prussia's main interest, on the other hand, was the export of bulk products like grain and timber, which were very important for England, the Low Countries, and, later on, also for Spain and Italy. In 1454, the year of the marriage of Elisabeth of Austria to King-Grand Duke Casimir IV Jagiellon of Poland-Lithuania, the towns of the Prussian Confederation rose up against the dominance of the Teutonic Order and asked Casimir IV for help. Gdańsk (Danzig), Thorn and Elbing became part of the Kingdom of Poland by the Second Peace of Thorn, in the region referred to from 1466 to 1569 as Royal Prussia. Poland in turn was heavily supported by the Holy Roman Empire through family connections and by military assistance under the Habsburgs. Kraków, then the capital of Poland, had a loose association with the Hansa. The lack of customs borders on the River Vistula after 1466 helped Polish grain exports, transported to the sea down the Vistula, to grow steadily from the late 15th century into the 17th century. The Hansa-dominated maritime grain trade made Poland one of the main areas of its activity, helping Danzig to become the Hansa's largest city. The member cities took responsibility for their own protection.
In 1567, a Hanseatic League agreement reconfirmed previous obligations and rights of league members, such as common protection and defense against enemies. The Prussian Quartier cities of Thorn, Elbing, Königsberg, Riga and Dorpat also signed. When pressed by the King of Poland–Lithuania, Danzig remained neutral and would not allow ships running for Poland into its territory. They had to anchor somewhere else, such as at Pautzke (Puck). A major economic advantage for the Hansa was its control of the shipbuilding market, mainly in Lübeck and in Danzig. The Hansa sold ships everywhere in Europe, including Italy. They drove out the Dutch, because Holland wanted to favour Bruges as a huge staple market at the end of a trade route. When the Dutch started to become competitors of the Hansa in shipbuilding, the Hansa tried to stop the flow of shipbuilding technology from Hanseatic towns to Holland. Danzig, a trading partner of Amsterdam, attempted to forestall the decision. Dutch ships sailed to Danzig to take grain from the city directly, to the dismay of Lübeck. Hollanders also circumvented the Hanseatic towns by trading directly with north German princes in non-Hanseatic towns. Dutch freight costs were much lower than those of the Hansa, and the Hansa were excluded as middlemen. When Bruges, Antwerp and Holland all became part of the Duchy of Burgundy they actively tried to take over the monopoly of trade from the Hansa, and the staples market was transferred from Bruges to Amsterdam. The Dutch merchants aggressively challenged the Hansa and met with much success. Hanseatic cities in Prussia and Livonia supported the Dutch against the core cities of the Hansa in northern Germany. After several naval wars between Burgundy and the Hanseatic fleets, Amsterdam gained the position of leading port for Polish and Baltic grain from the late 15th century onwards. The Dutch regarded Amsterdam's grain trade as the mother of all trades (moedernegotie).
Nuremberg in Franconia developed an overland route to sell formerly Hansa-monopolised products from Frankfurt via Nuremberg and Leipzig to Poland and Russia, trading Flemish cloth and French wine in exchange for grain and furs from the east. The Hansa profited from the Nuremberg trade by allowing Nurembergers to settle in Hanseatic towns, which the Franconians exploited by taking over trade with Sweden as well. The Nuremberger merchant Albrecht Moldenhauer was influential in developing the trade with Sweden and Norway, and his sons Wolf Moldenhauer and Burghard Moldenhauer established themselves in Bergen and Stockholm, becoming leaders of the local Hanseatic activities. End of the Hansa At the start of the 16th century, the Hanseatic League found itself in a weaker position than it had known for many years. During the Swedish War of Liberation (1521–1523), the Hanseatic League successfully opposed Jakob Fugger, the immensely wealthy continental mining and metal industrialist, in an economic conflict over the trade, mining and metal industry of Bergslagen (the main mining area of Sweden in the 16th century) and over his hostile business takeover attempt. Fugger allied himself with Pope Leo X, who was financially dependent on him, with Maximilian I, Holy Roman Emperor, and with Christian II of Denmark-Norway. Both sides made huge, costly investments in large numbers of expensive hired mercenaries to win the war. After the war, in 1523, the Hanseatic League fully restored its power in Gustav Vasa's Sweden and Frederick I's Denmark. However, the Hanseatic League ended up on the losing side in 1536: after Christian III's victory, with Sweden as his ally, in the Count's Feud in Scania and Denmark, its money was gone and its influence in the Nordic countries was over. After that the Hanseatic League was seen only as an unwanted competitor by Denmark and Sweden. Later in the 16th century, Denmark took control of much of the Baltic Sea.
Sweden had regained control over its own trade, the Kontor in Novgorod had closed, and the Kontor in Bruges had become effectively moribund. The individual cities making up the league had also started to put self-interest before their common Hanseatic interests. Finally, the political authority of the German princes had started to grow, constraining the independence of the merchants and Hanseatic towns. The league attempted to deal with some of these issues: it created the post of Syndic in 1556 and elected Heinrich Sudermann as a permanent official with legal training, who worked to protect and extend the diplomatic agreements of the member towns. In 1557 and 1579 revised agreements spelled out the duties of towns and some progress was made. The Bruges Kontor moved to Antwerp and the Hansa attempted to pioneer new routes. However, the league proved unable to prevent the growing mercantile competition, and so a long decline commenced. The Antwerp Kontor closed in 1593, followed by the London Kontor in 1598. The Bergen Kontor continued until 1754; of all the Kontore, only its buildings, the Bryggen, survive. The gigantic warship Adler von Lübeck was constructed for military use against Sweden during the Northern Seven Years' War (1563–70) but was never put to military use, epitomizing the vain attempts of Lübeck to uphold its long-privileged commercial position in a changing economic and political climate. By the late 17th century, the league had imploded and could no longer deal with its own internal struggles. The social and political changes that accompanied the Protestant Reformation included the rise of Dutch and English merchants and the pressure of the Ottoman Empire upon the Holy Roman Empire and its trade routes. In 1666, the Hanseatic Steelyard in London burned down in the Great Fire of London. The Kontor-manager sent a letter to Lübeck appealing for immediate financial assistance for a reconstruction.
Hamburg, Bremen and Lübeck called for a Hanseatic Day in 1669. Only a few cities participated and those who came were very reluctant to contribute financially to the reconstruction. It was the last formal meeting. Hamburg, Bremen and Lübeck remained as the only members until the League's demise in 1862, on the eve of the founding of the German Empire under Kaiser Wilhelm I. Today, these three cities are the only ones that retain the words "Hanseatic City" in their official German titles. Organization The members of the Hanseatic League were Low German merchants, whose towns, with the exception of Dinant, were where these merchants held citizenship. Not all towns with Low German merchant communities were members of the league (e.g., Emden, Memel (today Klaipėda), Viborg (today Vyborg) and Narva never joined). However, Hanseatic merchants could also come from settlements without German town law—the premise for league membership was birth to German parents, subjection to German law, and a commercial education. The league served to advance and defend the common interests of its heterogeneous members: commercial ambitions such as enhancement of trade, and political ambitions such as ensuring maximum independence from the noble territorial rulers. The Hanseatic League was by no means a monolithic organization or a 'state within a state' but rather a complex and loose-jointed confederation of protagonists pursuing their own interests, which coincided in a shared program of economic domination in the Baltic region. Decisions and actions of the Hanseatic League were the consequence of a consensus-based procedure. If an issue arose, the league's members were invited to participate in a central meeting, the Tagfahrt ("meeting ride", sometimes also referred to as Hansetag, since 1358). The member communities then chose envoys (Ratssendeboten) to represent their local consensus on the issue at the Tagfahrt.
Not every community sent an envoy; delegates were often entitled to represent a set of communities. Consensus-building on local and Tagfahrt levels followed the Low Saxon tradition of Einung, where consensus was defined as absence of protest: after a discussion, the proposals which gained sufficient support were dictated aloud to the scribe and passed as binding Rezess if the attendees did not object; those favouring alternative proposals unlikely to get sufficient support were obliged to remain silent during this procedure. If consensus could not be established on a certain issue, it was found instead in the appointment of a number of league members who were then empowered to work out a compromise. The Hanseatic Kontore, each of which operated like an early stock exchange, had their own treasury, court and seal. Like the guilds, the Kontore were led by Ältermänner ("eldermen", or English aldermen). The Stalhof Kontor, as a special case, had a Hanseatic and an English Ältermann. In 1347 the Kontor of Bruges modified its statute to ensure an equal representation of the league's members. To that end, member communities from different regions were pooled into three circles (Drittel ("third [part]"): the Wendish and Saxon Drittel, the Westphalian and Prussian Drittel as well as the Gothlandian, Livonian and Swedish Drittel). The merchants from their respective Drittel would then each choose two Ältermänner and six members of the Eighteen Men's Council (Achtzehnmännerrat) to administer the Kontor for a set period of time. In 1356, during a Hanseatic meeting in preparation of the first Tagfahrt, the league confirmed this statute. The league in general gradually adopted and institutionalized the division into Drittel (see table). The Tagfahrt or Hansetag was the only central institution of the Hanseatic League.
However, with the division into Drittel (= Thirds), the members of the respective subdivisions frequently held a Dritteltage ("Drittel meeting") to work out common positions which could then be presented at a Tagfahrt. On a more local level, league members also met, and while such regional meetings were never formalized into a Hanseatic institution, they gradually gained importance in the process of preparing and implementing Tagfahrt decisions. Quarters From 1554, the division into Drittel was modified to reduce the circles' heterogeneity, to enhance the collaboration of the members on a local level and thus to make the league's decision-making process more efficient. The number of circles rose to four, so they were called Quartiere (quarters). This division was however not adopted by the Kontore, who, for their purposes (like Ältermänner elections), grouped the league members in different ways (e.g., the division adopted by the Stahlhof in London in 1554 grouped the league members into Dritteln, whereby Lübeck merchants represented the Wendish, Pomeranian, Saxon and several Westphalian towns, Cologne merchants represented the Cleves, Mark, Berg and Dutch towns, while Danzig merchants represented the Prussian and Livonian towns). Lists of former Hansa cities The names of the Quarters have been abbreviated in the following table: Wendish: Wendish and Pomeranian (or just Wendish) Quarter Saxon: Saxon, Thuringian and Brandenburg (or just Saxon) Quarter Baltic: Prussian, Livonian and Swedish (or East Baltic) Quarter Westphalian: Rhine-Westphalian and Netherlands (including Flanders) (or Rhineland) Quarter Kontor: The Kontore were foreign trading posts of the League, not cities that were Hanseatic members, and are set apart in a separate table below. The remaining column headings are as follows: "City" is the name, with any variants. "Territory" indicates the jurisdiction to which the city was subject at the time of the League.
"Now" indicates the modern nation-state in which the city is located. "From" and "Until" record the dates at which the city joined and/or left the league. Hansa Proper Kontore (Foreign trading posts of the League) Ports with Hansa trading posts Berwick-upon-Tweed Bristol Boston Damme Leith Hull Newcastle Great Yarmouth King's Lynn York Other cities with a Hansa community Aberdeen Åbo (Turku) Arnhem Avaldsnes Bolsward Bordeaux Brae Doesburg Elburg Fellin (Viljandi) Goldingen (Kuldīga) Göttingen Grindavík Grundarfjörður Gunnister Haapsalu Hafnarfjörður Hamelin Hanover Harderwijk Harlingen Haroldswick Hasselt Hattem Herford Hildesheim Hindeloopen (Hylpen) Kalmar Kokenhusen (Koknese) Krambatangi Kumbaravogur Kulm (Chełmno) Leghorn Lemgo Lemsal (Limbaži) Lippe Lisbon Lunna Wick Messina Minden Naples Nantes Narva Nijmegen Nordhausen Nyborg Nyköping Oldenzaal Ommen Paderborn Pernau (Pärnu) Roermond Roop (Straupe) Scalloway Smolensk Stargard Stavoren (Starum) Tórshavn Trondheim Tver Uelzen Venlo Vilnius Walk (Valka) Weißenstein (Paide) Wenden (Cēsis) Wesel Wesenberg (Rakvere) Windau (Ventspils) Wolmar (Valmiera) Zutphen Zwolle Legacy Hanseatic connections Despite its collapse, several cities still maintained the link to the Hanseatic League. Dutch cities including Groningen, Deventer, Kampen, Zutphen and Zwolle, and a number of German cities including Bremen, Buxtehude, Demmin, Greifswald, Hamburg, Lübeck, Lüneburg, Rostock, Stade, Stralsund, Uelzen and Wismar still call themselves Hanse cities (their car license plates are prefixed H, e.g. –HB– for "Hansestadt Bremen"). Hamburg and Bremen continue to style themselves officially as "free Hanseatic cities", with Lübeck named "Hanseatic City" (Rostock's football team is named F.C. Hansa Rostock in memory of the city's trading past). For Lübeck in particular, this anachronistic tie to a glorious past remained especially important in the 20th century. 
In 1937, the Nazi Party removed this privilege through the Greater Hamburg Act, possibly because the Senat of Lübeck did not permit Adolf Hitler to speak in Lübeck during his 1932 election campaign. He held the speech in Bad Schwartau, a small village on the outskirts of Lübeck. Subsequently, he referred to Lübeck as "the small city close to Bad Schwartau." After the eastward enlargement of the EU in May 2004, some experts wrote about a resurrection of the Baltic Hansa. The legacy of the Hansa is remembered today in several names: the German airline Lufthansa (lit. "Air Hansa"); F.C. Hansa Rostock; Hanze University of Applied Sciences in Groningen, Netherlands; Hanze oil production platform, Netherlands; the Hansa Brewery in Bergen and the Hanse Sail in Rostock. DDG Hansa was a major German shipping company from 1881 until its bankruptcy and takeover by Hapag-Lloyd in 1980. Hansabank in the Baltic states has been rebranded into Swedbank. Hansa-Park is one of the biggest theme parks in Germany. There are two museums in Europe dedicated specifically to the history of the Hanseatic League: the European Hansemuseum in Lübeck and the Hanseatic Museum and Schøtstuene in Bergen. Modern versions of the Hanseatic League "City League The Hanse" In 1980, former Hanseatic League members established a "new Hanse" in Zwolle. This league is open to all former Hanseatic League members and cities that share a Hanseatic heritage. In 2012 the New Hanseatic League had 187 members. This includes twelve Russian cities, most notably Novgorod, which was a major Russian trade partner of the Hansa in the Middle Ages. The "new Hanse" fosters and develops business links, tourism and cultural exchange. The headquarters of the New Hansa is in Lübeck, Germany. The current President of the Hanseatic League of New Time is Jan Lindenau, Mayor of Lübeck. Each year one of the member cities of the New Hansa hosts the Hanseatic Days of New Time international festival.
In 2006 King's Lynn became the first English member of the newly formed new Hanseatic League. It was joined by Hull in 2012 and Boston in 2016. New Hanseatic League The New Hanseatic League was established in February 2018 by finance ministers from Denmark, Estonia, Finland, Ireland, Latvia, Lithuania, the Netherlands and Sweden through the signing of a foundational document which set out the countries' "shared views and values in the discussion on the architecture of the EMU". Historical maps In popular culture In the Patrician series of trading simulation video games, the player assumes the role of a merchant in any of several cities of the Hanseatic League. In the Saga of Seven Suns series of space opera novels by American writer Kevin J. Anderson, the human race has colonized multiple planets in the Spiral Arm, most of which are governed by the powerful Terran Hanseatic League (Hansa). Hansa Teutonica is a German board game designed by Andreas Steding and published by Argentum Verlag in 2009. In the Metro franchise of post-apocalyptic novels and video games, a trading alliance of stations called The Commonwealth of the Stations of the Ring Line is also known as the Hanseatic League, usually shortened to Hansa or Hanza. See also Baltic maritime trade (c. 1400–1800) Bay Fleet Brick Gothic Company of Merchant Adventurers of London Hanseatic Cross Hanseatic Days of New Time Hanseatic flags Hanseatic Museum and Schøtstuene Hanseatic Trade Center History of Bremen (City) Lufthansa Maritime republics Peasants' Republic Schiffskinder Thalassocracy References Further reading Halliday, Stephen. "The First Common Market?" History Today 59 (2009): 31–37. Wubs-Mrozewicz, Justyna, and Stuart Jenks, eds. The Hanse in Medieval and Early Modern Europe (Leiden: Koninklijke Brill NV, 2013). Historiography Cowan, Alexander. "Hanseatic League: Oxford Bibliographies Online Research Guide" (Oxford University Press, 2010) online Harrison, Gordon. 
"The Hanseatic League in Historical Interpretation." The Historian 33 (1971): 385–97. . Szepesi, Istvan. "Reflecting the Nation: The Historiography of Hanseatic Institutions." Waterloo Historical Review 7 (2015). online External links 29th International Hansa Days in Novgorod 30th International Hansa Days 2010 in Parnu-Estonia Chronology of the Hanseatic League Hanseatic Cities in the Netherlands Hanseatic League Historical Re-enactors Hanseatic Towns Network Hanseatic League related sources in the German Wikisource Colchester: a Hanseatic port – Gresham The Lost Port of Sutton: Maritime trade Northern Europe Former monopolies Trade monopolies Early Modern Holy Roman Empire Former confederations Early Modern history of Germany Early Modern Netherlands Economy of the Holy Roman Empire Economic history of the Netherlands History of international trade Hanseatic League International trade organizations Baltic Sea Brandenburg-Prussia Gotland Guilds Northern Renaissance History of Prussia 1862 disestablishments in Europe 14th century in Europe 15th century in Europe 16th century in Europe Medieval Germany
[ -0.3664665222167969, -0.771857738494873, 0.19054250419139862, -0.004286644980311394, -0.43694329261779785, 0.14088211953639984, 0.0709085464477539, 0.39968347549438477, -0.6514625549316406, -0.34366610646247864, 0.23056314885616302, -0.18776585161685944, -0.39995840191841125, 0.16100093722...
14107
https://en.wikipedia.org/wiki/Harvard%20%28disambiguation%29
Harvard (disambiguation)
Harvard University is a university in Cambridge, Massachusetts, USA. Harvard may also refer to: People Allison Harvard (born 1988), model and television personality John Harvard (clergyman) (1607–1638), clergyman after whom Harvard University is named John Harvard (politician) (1938–2016), former Lieutenant-Governor of Manitoba Boston area Harvard College, the undergraduate division of Harvard University Harvard Crimson, Harvard University's athletic program The Harvard Crimson, Harvard University's daily student newspaper Harvard Bridge, a bridge over the Charles River near the Massachusetts Institute of Technology Harvard Square, a square in Cambridge, Massachusetts, adjacent to the Harvard University campus Harvard Yard, the center of the Harvard campus, adjacent to Harvard Square Harvard (MBTA station), the subway station located in Harvard Square Cities Harvard, Idaho Harvard, Illinois, a city in the United States Harvard, Massachusetts, a town in the United States Harvard, Nebraska, a city in the United States Harvard Township, Clay County, Nebraska, a township in the United States Aeroplanes Harvard (aeroplane), a commonly used name for the North American T-6. Harvard Blue Yonder EZ (aeroplane), a replica of the Harvard. Ships List of ships named Harvard USS Harvard, several ships of the United States Navy Other Harvard (name), a given name and surname. Harvard architecture, a type of computer architecture. Harvard (automobile), a Brass Era car built in New York between 1915 and 1921. Harvard Graphics, an early computer program for producing charts and diagrams. Harvard Mark I, an early digital computer. Harvard referencing, a citation style also known as the "author-date method". Harvard station (disambiguation), stations of the name. Harvard-Westlake School, a prep school in Los Angeles. Harvard 736 (planet), a minor planet orbiting the Sun. Fender Harvard, a guitar amplifier.
[ -0.09605443477630615, 0.06740273535251617, 0.11280269920825958, 0.10067985206842422, 0.48631900548934937, 0.5576004981994629, 0.48309382796287537, 0.49897974729537964, -0.49261677265167236, -0.3353039622306824, 0.08668883144855499, -0.2107362002134323, -0.20430372655391693, 0.3200686573982...
14108
https://en.wikipedia.org/wiki/Historical%20African%20place%20names
Historical African place names
This is a list of historical African place names. The names on the left are linked to the corresponding subregion(s) from History of Africa. Axum - Eritrea and Ethiopia Mauretania Tingitana - Morocco Africa (province) - Tunisia Barbary Coast - Algeria Bechuanaland - Botswana Belgian Congo - Democratic Republic of the Congo Carthage - Tunisia Central African Empire - Central African Republic Congo Free State - Democratic Republic of the Congo Dahomey - Benin Equatoria - Sudan and Uganda Fernando Pó - Bioko French Congo - Gabon and Republic of the Congo French Equatorial Africa - Chad, Central African Republic, Gabon, Republic of the Congo French Sudan - Mali French West Africa - Mauritania, Senegal, Mali, Guinea, Ivory Coast, Niger, Burkina Faso, and Benin German East Africa - Tanzania and Zanzibar German South-West Africa - Namibia The Gold Coast - Ghana Guinea Grain Coast or Pepper Coast - Liberia Malagasy Republic - Madagascar Medri Bahri - Eritrea Monomotapa - Zimbabwe, South Africa, Lesotho, Swaziland, Mozambique and parts of Namibia and Botswana Middle Congo - Republic of the Congo Nubia - Sudan and Egypt Numidia - Algeria, Libya and Tunisia Nyasaland - Malawi Western Pentapolis - Libya Portuguese Guinea - Guinea-Bissau Rhodesia - Northern Rhodesia - Zambia Southern Rhodesia - Zimbabwe (Southern Rhodesia was commonly referred to simply as Rhodesia from 1964 to 1980) Rwanda-Urundi - Rwanda and Burundi The Slave Coast - Benin Somaliland - Somalia South-West Africa - Namibia Spanish Sahara - Western Sahara Swaziland - Eswatini French Upper Volta - Republic of Upper Volta - Burkina Faso Zaire - Republic of the Congo - Democratic Republic of the Congo See also List of former sovereign states Africa-related lists History of Africa Names of places in Africa
[ 0.0069875861518085, 0.7259641885757446, -1.2141715288162231, -0.17463041841983795, -0.6986639499664307, 0.6725848913192749, 0.9551524519920349, 0.5921902656555176, -0.8462033271789551, -0.1352921724319458, -0.32687047123908997, -0.4459216296672821, -0.45784449577331543, 1.0829514265060425,...
14109
https://en.wikipedia.org/wiki/Horror%20fiction
Horror fiction
Horror is a genre of speculative fiction which is intended to frighten, scare, or disgust. Literary historian J. A. Cuddon defined the horror story as "a piece of fiction in prose of variable length... which shocks, or even frightens the reader, or perhaps induces a feeling of repulsion or loathing". Horror intends to create an eerie and frightening atmosphere for the reader. Horror is often divided into the psychological horror and supernatural horror sub-genres. Often the central menace of a work of horror fiction can be interpreted as a metaphor for the larger fears of a society. Prevalent elements include ghosts, demons, vampires, werewolves, ghouls, the Devil, witches, monsters, dystopian and apocalyptic worlds, serial killers, cannibalism, psychopaths, cults, dark magic, Satanism, the macabre, gore, and torture. History Before 1000 The horror genre has ancient origins, with roots in folklore and religious traditions focusing on death, the afterlife, evil, the demonic and the principle of the thing embodied in the person. These manifested in stories of beings such as demons, witches, vampires, werewolves and ghosts. European horror-fiction became established through works of the Ancient Greeks and Ancient Romans. Mary Shelley's well-known 1818 novel about Frankenstein was greatly influenced by the story of Hippolytus, whom Asclepius revives from death. Euripides wrote plays based on the story, Hippolytos Kalyptomenos and Hippolytus. In Plutarch's The Lives of the Noble Grecians and Romans in the account of Cimon, the author describes the spirit of a murderer, Damon, who himself was murdered in a bathhouse in Chaeronea. Pliny the Younger (61 to 113) tells the tale of Athenodorus Cananites, who bought a haunted house in Athens. Athenodorus was cautious since the house seemed inexpensive. While writing a book on philosophy, he was visited by a ghostly figure bound in chains. 
The figure disappeared in the courtyard; the following day, the magistrates dug in the courtyard and found an unmarked grave. Elements of the horror genre also occur in Biblical texts, notably in the Book of Revelation. After 1000 Werewolf stories were popular in medieval French literature. One of Marie de France's twelve lais is a werewolf story titled "Bisclavret". The Countess Yolande commissioned a werewolf story titled "Guillaume de Palerme". Anonymous writers penned two werewolf stories, "Biclarel" and "Melion". Much horror fiction derives from the cruellest personages of the 15th century. Dracula can be traced to the Prince of Wallachia Vlad III, whose alleged war crimes were published in German pamphlets. A 1499 pamphlet was published by Markus Ayrer, which is most notable for its woodcut imagery. The alleged serial-killer sprees of Gilles de Rais have been seen as the inspiration for "Bluebeard". The motif of the vampiress is most notably derived from the real-life noblewoman and murderess, Elizabeth Bathory, and helped usher in the emergence of horror fiction in the 18th century, such as through László Turóczi's 1729 book Tragica Historia. 18th century The 18th century saw the gradual development of Romanticism and the Gothic horror genre. It drew on the written and material heritage of the Late Middle Ages, finding its form with Horace Walpole's seminal and controversial 1764 novel, The Castle of Otranto. In fact, the first edition was published disguised as an actual medieval romance from Italy, discovered and republished by a fictitious translator. Once revealed as modern, many found it anachronistic, reactionary, or simply in poor taste but it proved immediately popular. Otranto inspired Vathek (1786) by William Beckford, A Sicilian Romance (1790), The Mysteries of Udolpho (1794) and The Italian (1796) by Ann Radcliffe and The Monk (1797) by Matthew Lewis. 
A significant amount of horror fiction of this era was written by women and marketed towards a female audience, a typical scenario of the novels being a resourceful female menaced in a gloomy castle. 19th century The Gothic tradition blossomed into the genre that modern readers call horror literature in the 19th century. Influential works and characters that continue resonating in fiction and film today saw their genesis in the Brothers Grimm's "Hänsel und Gretel" (1812), Mary Shelley's Frankenstein (1818), John Polidori's "The Vampyre" (1819), Charles Maturin's Melmoth the Wanderer (1820), Washington Irving's "The Legend of Sleepy Hollow" (1820), Jane C. Loudon's The Mummy!: Or a Tale of the Twenty-Second Century (1827), Victor Hugo's The Hunchback of Notre Dame (1831), Thomas Peckett Prest's Varney the Vampire (1847), the works of Edgar Allan Poe, the works of Sheridan Le Fanu, Robert Louis Stevenson's Strange Case of Dr Jekyll and Mr Hyde (1886), Oscar Wilde's The Picture of Dorian Gray (1890), H. G. Wells' The Invisible Man (1897), and Bram Stoker's Dracula (1897). Each of these works created an enduring icon of horror seen in later re-imaginings on the page, stage and screen. 20th century A proliferation of cheap periodicals around the turn of the century led to a boom in horror writing. For example, Gaston Leroux serialized his Le Fantôme de l'Opéra before it became a novel in 1910. One writer who specialized in horror fiction for mainstream pulps, such as All-Story Magazine, was Tod Robbins, whose fiction deals with themes of madness and cruelty. In Russia, the writer Alexander Belyaev popularized these themes in his story Professor Dowell's Head (1925), in which a mad doctor performs experimental head transplants and reanimations on bodies stolen from the morgue, and which was first published as a magazine serial before being turned into a novel. 
Later, specialist publications emerged to give horror writers an outlet; prominent among them were Weird Tales and Unknown Worlds. Influential horror writers of the early 20th century made inroads in these media. In particular, the venerated horror author H. P. Lovecraft and his enduring Cthulhu Mythos transformed and popularized the genre of cosmic horror, and M. R. James is credited with redefining the ghost story in that era. The serial murderer became a recurring theme. Yellow journalism and sensationalism of various murderers, such as Jack the Ripper, and, to a lesser extent, Carl Panzram, Fritz Haarmann, and Albert Fish, all perpetuated this phenomenon. The trend continued in the postwar era, partly renewed after the murders committed by Ed Gein. In 1959, Robert Bloch, inspired by the murders, wrote Psycho. The crimes committed in 1969 by the Manson Family influenced the slasher theme in horror fiction of the 1970s. In 1981, Thomas Harris wrote Red Dragon, introducing Dr. Hannibal Lecter. In 1988, the sequel to that novel, The Silence of the Lambs, was published. Early cinema was inspired by many aspects of horror literature, and started a strong tradition of horror films and subgenres that continues to this day. Up until the graphic depictions of violence and gore on the screen commonly associated with 1960s and 1970s slasher films and splatter films, comic books such as those published by EC Comics (most notably Tales From The Crypt) in the 1950s satisfied readers' quests for horror imagery that the silver screen could not provide. This imagery made these comics controversial, and as a consequence, they were frequently censored. The modern zombie tale dealing with the motif of the living dead harks back to works including H. P. Lovecraft's stories "Cool Air" (1925), "In The Vault" (1926), and "The Outsider" (1926), and Dennis Wheatley's "Strange Conflict" (1941). 
Richard Matheson's novel I Am Legend (1954) influenced an entire genre of apocalyptic zombie fiction emblematized by the films of George A. Romero. In the late 1960s and early 1970s, the enormous commercial success of three books - Rosemary's Baby (1967) by Ira Levin, The Exorcist by William Peter Blatty, and The Other by Thomas Tryon - encouraged publishers to begin releasing numerous other horror novels, thus creating a "horror boom". One of the best-known late-20th century horror writers is Stephen King, known for Carrie, The Shining, It, Misery and several dozen other novels and about 200 short stories. Beginning in the 1970s, King's stories have attracted a large audience, for which the U.S. National Book Foundation honored him with its Medal for Distinguished Contribution to American Letters in 2003. Other popular horror authors of the period included Anne Rice, Brian Lumley, Graham Masterton, James Herbert, Dean Koontz, Clive Barker, Ramsey Campbell, and Peter Straub. 21st century Best-selling book series of contemporary times exist in genres related to horror fiction, such as the werewolf fiction urban fantasy Kitty Norville books by Carrie Vaughn (2005 onward). Horror elements continue to expand outside the genre. The alternate history of more traditional historical horror in Dan Simmons's 2007 novel The Terror sits on bookstore shelves next to genre mash ups such as Pride and Prejudice and Zombies (2009), and historical fantasy and horror comics such as Hellblazer (1993 onward) and Mike Mignola's Hellboy (1993 onward). Horror also serves as one of the central genres in more complex modern works such as Mark Z. Danielewski's House of Leaves (2000), a finalist for the National Book Award. There are many horror novels for teens, such as The Monstrumologist by Rick Yancey (2009). Additionally, many movies, particularly animated ones, use a horror aesthetic. These are what can be collectively referred to as "children's horror". 
Although it is not known for certain why children enjoy these movies (as it seems counter-intuitive), it is theorized that the grotesque monsters are what fascinate kids. Tangential to this, the internalized impact of horror television programs and films on children is rather under-researched, especially when compared to the research done on the similar subject of violence in TV and film's impact on the young mind. What little research there is tends to be inconclusive on the impact that viewing such media has. Characteristics One defining trait of the horror genre is that it provokes an emotional, psychological, or physical response within readers that causes them to react with fear. One of H. P. Lovecraft's most famous quotes about the genre is: "The oldest and strongest emotion of mankind is fear, and the oldest and strongest kind of fear is fear of the unknown", the first sentence of his seminal essay, "Supernatural Horror in Literature". Science fiction historian Darrell Schweitzer has stated, "In the simplest sense, a horror story is one that scares us" and "the true horror story requires a sense of evil, not necessarily in a theological sense; but the menaces must be truly menacing, life-destroying, and antithetical to happiness." In her essay "Elements of Aversion", Elizabeth Barrette articulates the need by some for horror tales in a modern world. In a sense similar to the reason a person seeks out the controlled thrill of a roller coaster, readers in the modern era seek out feelings of horror and terror to feel a sense of excitement. However, Barrette adds that horror fiction is one of the few mediums where readers seek out a form of art that forces them to confront ideas and images they "might rather ignore", to challenge preconceptions of all kinds. 
One can see the confrontation of ideas that readers and characters would "rather ignore" throughout literature in famous moments such as Hamlet's musings about the skull of Yorick, its implications of the mortality of humanity, and the gruesome end that bodies inevitably come to. In horror fiction, the confrontation with the gruesome is often a metaphor for the problems facing the current generation of the author. There are many theories as to why people enjoy being scared. For example, "people who like horror films are more likely to score highly for openness to experience, a personality trait linked to intellect and imagination." It is a now commonly accepted viewpoint that the horror elements of Dracula's portrayal of vampirism are metaphors for sexuality in a repressed Victorian era. But this is merely one of many interpretations of the metaphor of Dracula. Jack Halberstam postulates many of these in his essay Technologies of Monstrosity: Bram Stoker's Dracula, in which he articulates a view of Dracula as manifesting the growing perception of the aristocracy as an evil and outdated notion to be defeated. The depiction of a multinational band of protagonists using the latest technologies (such as a telegraph) to quickly share, collate, and act upon new information is what leads to the destruction of the vampire. This is one of many interpretations of the metaphor of only one central figure of the canon of horror fiction, as over a dozen possible metaphors are referenced in the analysis, from the religious to the antisemitic. Noël Carroll's Philosophy of Horror postulates that a modern piece of horror fiction's "monster", villain, or a more inclusive menace must exhibit the following two traits: A menace that is threatening — either physically, psychologically, socially, morally, spiritually, or some combination of the aforementioned. A menace that is impure — that violates the generally accepted schemes of cultural categorization. 
"We consider impure that which is categorically contradictory". Scholarship and criticism In addition to those essays and articles shown above, scholarship on horror fiction is almost as old as horror fiction itself. In 1826, the gothic novelist Ann Radcliffe published an essay distinguishing two elements of horror fiction, "terror" and "horror." Whereas terror is a feeling of dread that takes place before an event happens, horror is a feeling of revulsion or disgust after an event has happened. Radcliffe describes terror as that which "expands the soul and awakens the faculties to a high degree of life," whereas horror is described as that which "freezes and nearly annihilates them." Modern scholarship on horror fiction draws upon a range of sources. In their historical studies of the gothic novel, both Devandra Varma and S.L. Varnado make reference to the theologian Rudolf Otto, whose concept of the "numinous" was originally used to describe religious experience. A recent survey reports how often horror media is consumed: "To assess frequency of horror consumption, we asked respondents the following question: 'In the past year, about how often have you used horror media (e.g., horror literature, film, and video games) for entertainment?' 11.3% said 'Never,' 7.5% 'Once,' 28.9% 'Several times,' 14.1% 'Once a month,' 20.8% 'Several times a month,' 7.3% 'Once a week,' and 10.2% 'Several times a week.' Evidently, then, most respondents (81.3%) claimed to use horror media several times a year or more often. Unsurprisingly, there is a strong correlation between liking and frequency of use (r=.79, p<.0001)." Awards and associations Achievements in horror fiction are recognized by numerous awards. The Horror Writers Association presents the Bram Stoker Awards for Superior Achievement, named in honor of Bram Stoker, author of the seminal horror novel Dracula. The Australian Horror Writers Association presents annual Australian Shadows Awards. 
The International Horror Guild Award was presented annually to works of horror and dark fantasy from 1995 to 2008. The Shirley Jackson Awards are literary awards for outstanding achievement in the literature of psychological suspense, horror, and the dark fantastic. Other important awards for horror literature are included as subcategories within general awards for fantasy and science fiction, such as the Aurealis Award. Alternative terms Some writers of fiction normally classified as "horror" tend to dislike the term, considering it too lurid. They instead use the terms dark fantasy or Gothic fantasy for supernatural horror, or "psychological thriller" for non-supernatural horror. See also Related genres Crime fiction Dark fantasy Ghost stories Monster literature Mystery fiction Speculative fiction Thriller Weird fiction Horror convention Horror podcast LGBT themes in horror fiction Horror film History of horror films List of horror fiction writers List of ghost films List of horror television programs References Further reading Neil Barron, Horror Literature: A Reader's Guide. New York: Garland, 1990. Jason Colavito, Knowing Fear: Science, Knowledge and the Development of the Horror Genre. Jefferson, NC: McFarland, 2008. Brian Docherty, American Horror Fiction: From Brockden Brown to Stephen King. New York: St. Martin's, 1990. Stephen Jones and Kim Newman (eds.), Horror: 100 Best Books. New York: Carroll & Graf, 1998. Stephen King, Danse Macabre. New York: Everest House, 1981. H. P. Lovecraft, Supernatural Horror in Literature, 1927, rev. 1934, collected in Dagon and Other Macabre Tales. Arkham House, 1965. David J. Skal, The Monster Show: A Cultural History of Horror. New York: Norton, 1993. Andrea Sauchelli, "Horror and Mood", American Philosophical Quarterly, 51:1 (2014), pp. 39–50. Gina Wisker, Horror Fiction: An Introduction. New York: Continuum, 2005. External links H. P. 
Lovecraft, "Supernatural Horror in Literature" Horror Writers Association's Horror Reading List A list of interesting horror books, WeirdPond
https://en.wikipedia.org/wiki/Holomorphic%20function
Holomorphic function
In mathematics, a holomorphic function is a complex-valued function of one or more complex variables that is complex differentiable in a neighbourhood of each point in a domain in complex coordinate space C^n. The existence of a complex derivative in a neighbourhood is a very strong condition: it implies that a holomorphic function is infinitely differentiable and locally equal to its own Taylor series (analytic). Holomorphic functions are the central objects of study in complex analysis. Though the term analytic function is often used interchangeably with "holomorphic function", the word "analytic" is defined in a broader sense to denote any function (real, complex, or of more general type) that can be written as a convergent power series in a neighbourhood of each point in its domain. That all holomorphic functions are complex analytic functions, and vice versa, is a major theorem in complex analysis. Holomorphic functions are also sometimes referred to as regular functions. A holomorphic function whose domain is the whole complex plane is called an entire function. The phrase "holomorphic at a point z_0" means not just differentiable at z_0, but differentiable everywhere within some neighbourhood of z_0 in the complex plane. Definition Given a complex-valued function f of a single complex variable, the derivative of f at a point z_0 in its domain is defined by the limit f'(z_0) = lim_{z → z_0} (f(z) − f(z_0)) / (z − z_0). This is the same as the definition of the derivative for real functions, except that all of the quantities are complex. In particular, the limit is taken as the complex number z approaches z_0, and must have the same value for any sequence of complex values for z that approach z_0 on the complex plane. If the limit exists, we say that f is complex differentiable at the point z_0. This concept of complex differentiability shares several properties with real differentiability: it is linear and obeys the product rule, quotient rule, and chain rule. 
If f is complex differentiable at every point z_0 in an open set U, we say that f is holomorphic on U. We say that f is holomorphic at the point z_0 if f is complex differentiable on some neighbourhood of z_0. We say that f is holomorphic on some non-open set A if it is holomorphic in a neighbourhood of A. As a pathological non-example, the function given by f(z) = |z|^2 is complex differentiable at exactly one point (z_0 = 0), and for this reason it is not holomorphic at 0, because there is no open set around 0 on which f is complex differentiable. The relationship between real differentiability and complex differentiability is the following: If a complex function f(x + iy) = u(x, y) + i v(x, y) is holomorphic, then u and v have first partial derivatives with respect to x and y, and satisfy the Cauchy–Riemann equations: ∂u/∂x = ∂v/∂y and ∂u/∂y = −∂v/∂x, or, equivalently, the Wirtinger derivative of f with respect to z̄, the complex conjugate of z, is zero: ∂f/∂z̄ = 0, which is to say that, roughly, f is functionally independent from z̄, the complex conjugate of z. If continuity is not given, the converse is not necessarily true. A simple converse is that if u and v have continuous first partial derivatives and satisfy the Cauchy–Riemann equations, then f is holomorphic. A more satisfying converse, which is much harder to prove, is the Looman–Menchoff theorem: if f is continuous, u and v have first partial derivatives (but not necessarily continuous), and they satisfy the Cauchy–Riemann equations, then f is holomorphic. Terminology The term holomorphic was introduced in 1875 by Charles Briot and Jean-Claude Bouquet, two of Augustin-Louis Cauchy's students, and derives from the Greek ὅλος (hólos) meaning "whole", and μορφή (morphḗ) meaning "form" or "appearance" or "type", in contrast to the term meromorphic derived from μέρος (méros) meaning "part". 
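The Cauchy–Riemann equations discussed above lend themselves to a quick numerical check. In this illustrative sketch (the helper name cr_residual is mine, not standard), the partial derivatives of u and v are estimated by central differences, and the two residuals u_x − v_y and u_y + v_x vanish exactly when the equations hold:

```python
# Check the Cauchy-Riemann equations u_x = v_y, u_y = -v_x numerically
# for f(z) = z**2 (holomorphic) and g(z) = conj(z) (not holomorphic).

def cr_residual(f, x, y, h=1e-6):
    """Return (u_x - v_y, u_y + v_x) at the point x + iy, via central differences."""
    u = lambda a, b: f(complex(a, b)).real
    v = lambda a, b: f(complex(a, b)).imag
    u_x = (u(x + h, y) - u(x - h, y)) / (2 * h)
    u_y = (u(x, y + h) - u(x, y - h)) / (2 * h)
    v_x = (v(x + h, y) - v(x - h, y)) / (2 * h)
    v_y = (v(x, y + h) - v(x, y - h)) / (2 * h)
    return u_x - v_y, u_y + v_x

print(cr_residual(lambda z: z * z, 1.0, 2.0))          # both residuals near 0
print(cr_residual(lambda z: z.conjugate(), 1.0, 2.0))  # first residual near 2
```

For f(z) = z^2 one has u = x^2 − y^2 and v = 2xy, so both residuals vanish; for the conjugate, u = x and v = −y, so u_x − v_y = 2 and the equations fail everywhere.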
A holomorphic function resembles an entire function ("whole") in a domain of the complex plane, while a meromorphic function (defined to mean holomorphic except at certain isolated poles) resembles a rational fraction ("part") of entire functions in a domain of the complex plane. Cauchy had instead used the term synectic. Today, the term "holomorphic function" is sometimes preferred to "analytic function". An important result in complex analysis is that every holomorphic function is complex analytic, a fact that does not follow obviously from the definitions. The term "analytic" is however also in wide use. Properties Because complex differentiation is linear and obeys the product, quotient, and chain rules, the sums, products and compositions of holomorphic functions are holomorphic, and the quotient of two holomorphic functions is holomorphic wherever the denominator is not zero. That is, if functions f and g are holomorphic in a domain U, then so are f + g, f − g, f g, and f ∘ g. Furthermore, f / g is holomorphic if g has no zeros in U, or is meromorphic otherwise. If one identifies C with the real plane R^2, then the holomorphic functions coincide with those functions of two real variables with continuous first derivatives which solve the Cauchy–Riemann equations, a set of two partial differential equations. Every holomorphic function can be separated into its real and imaginary parts f(x + iy) = u(x, y) + i v(x, y), and each of these is a harmonic function on R^2 (each satisfies Laplace's equation ∆u = ∆v = 0), with v the harmonic conjugate of u. Conversely, every harmonic function u(x, y) on a simply connected domain Ω ⊂ R^2 is the real part of a holomorphic function: If v is the harmonic conjugate of u, unique up to a constant, then f = u + i v is holomorphic. Cauchy's integral theorem implies that the contour integral of every holomorphic function along a loop vanishes: ∮_γ f(z) dz = 0. Here γ is a rectifiable path in a simply connected complex domain U ⊂ C whose start point is equal to its end point, and f : U → C is a holomorphic function. 
Cauchy's integral formula states that every function holomorphic inside a disk is completely determined by its values on the disk's boundary. Furthermore: Suppose U ⊂ C is a complex domain, f : U → C is a holomorphic function and the closed disk D = {z : |z − z_0| ≤ r} is completely contained in U. Let γ be the circle forming the boundary of D. Then for every a in the interior of D: f(a) = (1 / 2πi) ∮_γ f(z) / (z − a) dz, where the contour integral is taken counter-clockwise. The derivative f'(a) can be written as a contour integral using Cauchy's differentiation formula: f'(a) = (1 / 2πi) ∮_γ f(z) / (z − a)^2 dz for any simple loop γ positively winding once around a, and f'(a) = lim_{γ → a} (i / 2A(γ)) ∮_γ f(z) dz̄ for infinitesimal positive loops γ around a, where A(γ) is the area enclosed by the loop. In regions where the first derivative is not zero, holomorphic functions are conformal: they preserve angles and the shape (but not size) of small figures. Every holomorphic function is analytic. That is, a holomorphic function f has derivatives of every order at each point a in its domain, and it coincides with its own Taylor series at a in a neighbourhood of a. In fact, f coincides with its Taylor series at a in any disk centred at that point and lying within the domain of the function. From an algebraic point of view, the set of holomorphic functions on an open set is a commutative ring and a complex vector space. Additionally, the set of holomorphic functions in an open set U is an integral domain if and only if the open set U is connected. In fact, it is a locally convex topological vector space, with the seminorms being the suprema on compact subsets. From a geometric perspective, a function f is holomorphic at z_0 if and only if its exterior derivative df in a neighbourhood U of z_0 is equal to f'(z) dz for some continuous function f'. It follows from d(df) = 0 that df' is also proportional to dz, implying that the derivative f' is itself holomorphic and thus that f is infinitely differentiable. Similarly, d(f dz) = 0 implies that any function f that is holomorphic on the simply connected region U is also integrable on U. 
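Cauchy's integral formula above can be verified numerically by sampling the boundary circle. This sketch (illustrative; the helper name cauchy_formula is mine) parametrizes the circle z = a + r·e^{it} and approximates the contour integral by a Riemann sum:

```python
# Numerically verify Cauchy's integral formula
#   f(a) = (1 / 2*pi*i) * integral over the circle |z - a| = r of f(z)/(z - a) dz
# by sampling z(t) = a + r*exp(i*t) at n equally spaced points.
import cmath

def cauchy_formula(f, a, r=1.0, n=2000):
    total = 0.0
    for k in range(n):
        t = 2 * cmath.pi * k / n
        z = a + r * cmath.exp(1j * t)
        dz = 1j * r * cmath.exp(1j * t) * (2 * cmath.pi / n)  # z'(t) dt
        total += f(z) / (z - a) * dz
    return total / (2j * cmath.pi)

print(cauchy_formula(cmath.exp, 0.0))  # close to exp(0) = 1
```

Because the integrand is periodic and analytic in the parameter, the trapezoid-like sum converges extremely fast, so even this crude sampling recovers f(a) to high accuracy.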
(For a path γ from z_0 to z lying entirely in U, define F_γ(z) = F_0 + ∫_γ f dz; in light of the Jordan curve theorem and the generalized Stokes' theorem, F_γ(z) is independent of the particular choice of path γ, and thus F(z) is a well-defined function on U having F(z_0) = F_0 and dF = f dz.) Examples All polynomial functions in z with complex coefficients are entire functions (holomorphic in the whole complex plane C), and so are the exponential function exp z and the trigonometric functions cos z and sin z (cf. Euler's formula). The principal branch of the complex logarithm function log z is holomorphic on the domain C ∖ {z ∈ R : z ≤ 0}. The square root function can be defined as √z = exp((1/2) log z) and is therefore holomorphic wherever the logarithm log z is. The reciprocal function 1/z is holomorphic on C ∖ {0}. (The reciprocal function, and any other rational function, is meromorphic on C.) As a consequence of the Cauchy–Riemann equations, any real-valued holomorphic function must be constant. Therefore, the absolute value |z|, the argument arg(z), the real part Re(z) and the imaginary part Im(z) are not holomorphic. Another typical example of a continuous function which is not holomorphic is the complex conjugate z̄. (The complex conjugate is antiholomorphic.) Several variables The definition of a holomorphic function generalizes to several complex variables in a straightforward way. Let D denote an open subset of C^n, and let f : D → C. The function f is analytic at a point p in D if there exists an open neighbourhood of p in which f is equal to a convergent power series in n complex variables. Define f to be holomorphic if it is analytic at each point in its domain. Osgood's lemma shows (using the multivariate Cauchy integral formula) that, for a continuous function f, this is equivalent to f being holomorphic in each variable separately (meaning that if any n − 1 coordinates are fixed, then the restriction of f is a holomorphic function of the remaining coordinate). The much deeper Hartogs' theorem proves that the continuity hypothesis is unnecessary: f is holomorphic if and only if it is holomorphic in each variable separately. 
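Returning to the single-variable examples above: the principal branches of the logarithm and the square root described there match the conventions of Python's cmath module, which cuts along the non-positive real axis. A small illustrative check (not part of the article):

```python
# Principal branch of the complex logarithm: holomorphic on the plane with
# the non-positive real axis removed. The imaginary part of log jumps by
# 2*pi across that branch cut.
import cmath

above = complex(-1.0, 1e-9)   # just above the cut
below = complex(-1.0, -1e-9)  # just below the cut
print(cmath.log(above).imag)  # close to +pi
print(cmath.log(below).imag)  # close to -pi

# The square root defined as exp((1/2) log z) agrees with cmath.sqrt:
z = 2j
print(cmath.sqrt(z), cmath.exp(0.5 * cmath.log(z)))
```

The discontinuity across the cut is exactly why the principal logarithm is holomorphic only on the slit plane, not on all of C ∖ {0}.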
More generally, a function of several complex variables that is square integrable over every compact subset of its domain is analytic if and only if it satisfies the Cauchy–Riemann equations in the sense of distributions. Functions of several complex variables are in some basic ways more complicated than functions of a single complex variable. For example, the region of convergence of a power series is not necessarily an open ball; these regions are logarithmically-convex Reinhardt domains, the simplest example of which is a polydisk. However, they also come with some fundamental restrictions. Unlike functions of a single complex variable, the possible domains on which there are holomorphic functions that cannot be extended to larger domains are highly limited. Such a set is called a domain of holomorphy. A complex differential (p, 0)-form α is holomorphic if and only if its antiholomorphic Dolbeault derivative is zero: ∂̄α = 0. Extension to functional analysis The concept of a holomorphic function can be extended to the infinite-dimensional spaces of functional analysis. For instance, the Fréchet or Gateaux derivative can be used to define a notion of a holomorphic function on a Banach space over the field of complex numbers. See also Antiderivative (complex analysis) Antiholomorphic function Biholomorphy Holomorphic separability Meromorphic function Quadrature domains Harmonic maps Harmonic morphisms Wirtinger derivatives References Further reading External links Analytic functions
https://en.wikipedia.org/wiki/History%20of%20Algeria
History of Algeria
Much of the history of Algeria has taken place on the fertile coastal plain of North Africa, which is often called the Maghreb (or Maghrib). North Africa served as a transit region for people moving towards Europe or the Middle East; thus, the region's inhabitants have been influenced by populations from other areas, including the Carthaginians, Romans, and Vandals. The region was conquered by the Muslims in the early 8th century AD, but broke off from the Umayyad Caliphate after the Berber Revolt of 740. During the Ottoman period, Algeria became an important state in the Mediterranean Sea, which led to many naval conflicts. The most significant events in the country's recent history have been the Algerian War and the Algerian Civil War. Prehistory Evidence of the early human occupation of Algeria is demonstrated by the discovery of 1.8-million-year-old Oldowan stone tools found at Ain Hanech in 1992. In 1954 fossilised Homo erectus bones were discovered by C. Arambourg at Ternefine that are 700,000 years old. Neolithic civilization (marked by animal domestication and subsistence agriculture) developed in the Saharan and Mediterranean Maghrib between 6000 and 2000 BC. This type of economy, richly depicted in the Tassili n'Ajjer cave paintings in southeastern Algeria, predominated in the Maghrib until the classical period. Numidia Phoenician traders arrived on the North African coast around 900 BC and established Carthage (in present-day Tunisia) around 800 BC. During the classical period, Berber civilization was already at a stage in which agriculture, manufacturing, trade, and political organization supported several states. Trade links between Carthage and the Berbers in the interior grew, but territorial expansion also resulted in the enslavement or military recruitment of some Berbers and in the extraction of tribute from others. 
The Carthaginian state declined because of successive defeats by the Romans in the Punic Wars, and in 146 BC the city of Carthage was destroyed. As Carthaginian power waned, the influence of Berber leaders in the hinterland grew. By the 2nd century BC, several large but loosely administered Berber kingdoms had emerged. After that, King Masinissa managed to unify Numidia under his rule. Roman empire Madghacen was a king of the independent Numidian kingdoms between 12 and 3 BC. Christianity arrived in the 2nd century. By the end of the 4th century, the settled areas had become Christianized, and some Berber tribes had converted en masse. After the fall of the Western Roman Empire, Algeria came under the control of the Vandal Kingdom. Later, the Eastern Roman Empire (also known as the Byzantine Empire) conquered Algeria from the Vandals, incorporating it into the Praetorian prefecture of Africa and later the Exarchate of Africa. Middle Ages From the 8th-century Umayyad conquest of North Africa led by Musa bin Nusayr, Arab colonization started. The 11th-century invasion of migrants from the Arabian peninsula brought oriental tribal customs. The introduction of Islam and Arabic had a profound impact on North Africa. The new religion and language introduced changes in social and economic relations, and established links with the Arab world through acculturation and assimilation. Berber dynasties According to historians of the Middle Ages, the Berbers were divided into two branches, both going back to their ancestor Mazigh. The two branches, called Botr and Barnès, were divided into tribes, and each Maghreb region is made up of several tribes. The large Berber tribes or peoples are the Sanhaja, Houara, Zenata, Masmuda, Kutama, Awarba, and Barghawata, among others. Each tribe is divided into sub-tribes. All these tribes made independent territorial decisions. 
Several Berber dynasties emerged during the Middle Ages in North and West Africa, in Spain (al-Andalus), Sicily, and Egypt, as well as in the southern part of the Sahara, in modern-day Mali, Niger, and Senegal. The medieval historian Ibn Khaldun described the following Berber dynasties: Zirid, Banu Ifran, Maghrawa, Almoravid, Hammadid, Almohad Caliphate, Marinid, Zayyanid, Wattasid, Meknes, Hafsid dynasty, Fatimids. The invasion of the Banu Hilal Arab tribes in the 11th century sacked Kairouan, the area under Zirid control was reduced to the coastal region, and the Arab conquests fragmented into petty Bedouin emirates. Medieval Muslim Algeria The second Arab military expeditions into the Maghreb, between 642 and 669, resulted in the spread of Islam. The Umayyads (a Muslim dynasty based in Damascus from 661 to 750) recognised that the strategic necessity of dominating the Mediterranean dictated a concerted military effort on the North African front. By 711 Umayyad forces, helped by Berber converts to Islam, had conquered all of North Africa. In 750 the Abbasids succeeded the Umayyads as Muslim rulers and moved the caliphate to Baghdad. Under the Abbasids, the Berber Kharijite Sufri Banu Ifran opposed both the Umayyads and the Abbasids. Thereafter, the Rustumids (761–909) ruled most of the central Maghrib from Tahirt, southwest of Algiers. The imams gained a reputation for honesty, piety, and justice, and the court of Tahirt was noted for its support of scholarship. The Rustumid imams failed, however, to organise a reliable standing army, which opened the way for Tahirt's demise under the assault of the Fatimid dynasty. The Fatimids left the rule of most of Algeria to the Zirids and Hammadids (972–1148), Berber dynasties that centered significant local power in Algeria for the first time, but who were still at war with the Banu Ifran (kingdom of Tlemcen) and Maghrawa (942–1068). This period was marked by constant conflict, political instability, and economic decline. 
Following a large incursion of Arab Bedouin from Egypt beginning in the first half of the 11th century, the use of Arabic spread to the countryside, and sedentary Berbers were gradually Arabised. The Almoravid ("those who have made a religious retreat") movement developed early in the 11th century among the Sanhaja Berbers of southern Morocco. The movement's initial impetus was religious, an attempt by a tribal leader to impose moral discipline and strict adherence to Islamic principles on followers. But the Almoravid movement shifted to engaging in military conquest after 1054. By 1106, the Almoravids had conquered Morocco, the Maghrib as far east as Algiers, and Spain up to the Ebro River. Like the Almoravids, the Almohads ("unitarians") found their inspiration in Islamic reform. The Almohads took control of Morocco by 1146, captured Algiers around 1151, and by 1160 had completed the conquest of the central Maghrib. The zenith of Almohad power occurred between 1163 and 1199. For the first time, the Maghrib was united under a local regime, but the continuing wars in Spain overtaxed the resources of the Almohads, and in the Maghrib their position was compromised by factional strife and a renewal of tribal warfare. In the central Maghrib, the Abdalwadid founded a dynasty that ruled the Kingdom of Tlemcen in Algeria. For more than 300 years, until the region came under Ottoman suzerainty in the 16th century, the Zayanids kept a tenuous hold in the central Maghrib. Many coastal cities asserted their autonomy as municipal republics governed by merchant oligarchies, tribal chieftains from the surrounding countryside, or the privateers who operated out of their ports. Nonetheless, Tlemcen, the "pearl of the Maghrib," prospered as a commercial center. 
Examples of some Algerian Berber dynasties/empires: the Ifranid dynasty, the Sulaymanids of Tlemcen, the Maghrawa dynasty, the Zirid dynasty, the Hammadid dynasty, the Zayyanid Kingdom of Tlemcen, the Kingdom of Beni Abbas, and the Kingdom of Kuku. Christian reconquest of Spain The final triumph of the 700-year Christian reconquest of Spain was marked by the fall of Granada in 1492. Christian Spain imposed its influence on the Maghrib coast by constructing fortified outposts and collecting tribute. But Spain never sought to extend its North African conquests much beyond a few modest enclaves. Privateering was an age-old practice in the Mediterranean, and North African rulers engaged in it increasingly in the late 16th and early 17th centuries because it was so lucrative. Until the 17th century the Barbary pirates used galleys, but a Dutch renegade named Zymen Danseker taught them the advantage of using sailing ships. Algiers became the privateering city-state par excellence, and two privateer brothers were instrumental in extending Ottoman influence in Algeria. At about the time Spain was establishing its presidios in the Maghrib, the Muslim privateer brothers Aruj and Khair ad Din—the latter known to Europeans as Barbarossa, or Red Beard—were operating successfully off Tunisia. In 1516 Aruj moved his base of operations to Algiers but was killed in 1518. Khair ad Din succeeded him as military commander of Algiers, and the Ottoman sultan gave him the title of beglerbey (provincial governor). Spanish enclaves The Spanish expansionist policy in North Africa began with the Catholic Monarchs and the regent Cisneros, once the Reconquista in the Iberian Peninsula was finished. In this way, several towns and outposts on the Algerian coast were conquered and occupied: Mers El Kébir (1505), Oran (1509), Algiers (1510) and Bugia (1510). The Spanish conquest of Oran was won with much bloodshed: 4,000 Algerians were massacred, and up to 8,000 were taken prisoner.
For about 200 years, Oran's inhabitants were virtually held captive within their fortress walls, ravaged by famine and plague; Spanish soldiers, too, were irregularly fed and paid. The Spaniards left Algiers in 1529, Bujia in 1554, and Mers El Kébir and Oran in 1708. The Spanish returned in 1732 when the armada of the Duke of Montemar was victorious in the Battle of Aïn-el-Turk and retook Oran and Mers El Kébir; the Spanish massacred many Muslim soldiers. In 1751, a Spanish adventurer named John Gascon obtained permission, along with vessels and fireworks, to move against Algiers and set fire to the Algerian fleet at night. The plan, however, miscarried. In 1775, Charles III of Spain sent a large force to attack Algiers, under the command of Alejandro O'Reilly (who had led Spanish forces in crushing the French rebellion in Louisiana), resulting in a disastrous defeat. The Algerians suffered 5,000 casualties. The Spanish navy bombarded Algiers in 1784; over 20,000 cannonballs were fired, much of the city and its fortifications were destroyed, and most of the Algerian fleet was sunk. Oran and Mers El Kébir were held until 1792, when they were sold by King Charles IV to the Bey of Algiers. Ottoman era Under Khair ad Din's regency, Algiers became the center of Ottoman authority in the Maghrib. For 300 years, Algeria was a vassal state of the Ottoman Empire under a regency that had Algiers as its capital (see Dey). Subsequently, with the institution of a regular Ottoman administration, governors with the title of pasha ruled. Turkish was the official language. In 1671 a new leader took power, adopting the title of dey. In 1710 the dey persuaded the sultan to recognize him and his successors as regent, replacing the pasha in that role. Although Algiers remained a part of the Ottoman Empire, the Ottoman government ceased to have effective influence there.
European maritime powers paid the tribute demanded by the rulers of the privateering states of North Africa (Algiers, Tunis, Tripoli, and Morocco) to prevent attacks on their shipping. The Napoleonic wars of the early 19th century diverted the attention of the maritime powers from suppressing piracy. But when peace was restored to Europe in 1815, Algiers found itself at war with Spain, the Netherlands, Prussia, Denmark, Russia, and Naples. Algeria and the surrounding areas, collectively known as the Barbary States, were responsible for piracy in the Mediterranean Sea, as well as the enslaving of Christians, actions which brought them into the First and Second Barbary Wars with the United States of America. French rule 19th century colonialism North African boundaries have shifted during various stages of the conquests. The borders of modern Algeria were expanded by the French, whose colonization began in 1830 (the French invasion began on July 5). To benefit French colonists (many of whom were not in fact of French origin but Italian, Maltese, and Spanish), nearly all of whom lived in urban areas, northern Algeria was eventually organized into overseas departments of France, with representatives in the French National Assembly. France controlled the entire country, but the traditional Muslim population in the rural areas remained separated from the modern economic infrastructure of the European community. As a result of what the French considered an insult to the French consul in Algiers by the dey in 1827, France blockaded Algiers for three years. In 1830, France invaded and occupied the coastal areas of Algeria, citing the diplomatic incident as casus belli. Hussein Dey went into exile. French colonization then gradually penetrated southwards, and came to have a profound impact on the area and its populations.
The European conquest, initially accepted in the Algiers region, was soon met by a rebellion led by Abdel Kadir, which took the French troops roughly a decade to put down in the so-called "pacification campaign", in which the French used chemical weapons, mass executions of civilians and prisoners, concentration camps and many other atrocities. By 1848 nearly all of northern Algeria was under French control, and the new government of the French Second Republic declared the occupied lands an integral part of France. Three "civil territories"—Algiers, Oran, and Constantine—were organized as French départements (local administrative units) under a civilian government. In addition to enduring the affront of being ruled by a foreign, non-Muslim power, many Algerians lost their lands to the new government or to colonists. Traditional leaders were eliminated, coopted, or made irrelevant, and the traditional educational system was largely dismantled; social structures were stressed to the breaking point. From 1856, native Muslims and Jews were viewed as French subjects, not citizens. However, in 1865, Napoleon III allowed them to apply for full French citizenship, a measure that few took, since it involved renouncing the right to be governed by sharia law in personal matters and was considered a kind of apostasy; in 1870, the Crémieux Decree made French citizenship automatic for Jewish natives, a move that angered many Muslims and resulted in the Jews being seen as accomplices of the colonial power by anti-colonial Algerians. Nonetheless, this period saw progress in health, some infrastructure, and the overall expansion of the economy of Algeria, as well as the formation of new social classes, which, after exposure to ideas of equality and political liberty, would help propel the country to independence. During the colonization, France focused on eradicating the local culture, destroying centuries-old palaces and other important buildings.
It is estimated that around half of Algiers, a city founded in the 10th century, was destroyed. Many segregationist laws were enacted against the Algerians and their culture. Rise of Algerian nationalism and French resistance A new generation of Islamic leadership emerged in Algeria at the time of World War I and grew to maturity during the 1920s and 1930s. Various groups were formed in opposition to French rule, most notably the National Liberation Front (FLN) and the National Algerian Movement. Colons (colonists), or, more popularly, pieds noirs (literally, black feet), dominated the government and controlled the bulk of Algeria's wealth. Throughout the colonial era, they continued to block or delay all attempts to implement even the most modest reforms. But from 1933 to 1936, mounting social, political, and economic crises in Algeria induced the indigenous population to engage in numerous acts of political protest. The government responded with more restrictive laws governing public order and security. Algerian Muslims rallied to the French side at the start of World War II, as they had done in World War I. But the colons were generally sympathetic to the collaborationist Vichy regime established following France's defeat by Nazi Germany. After the fall of the Vichy regime in Algeria (November 11, 1942) as a result of Operation Torch, the Free French commander in chief in North Africa slowly rescinded repressive Vichy laws, despite opposition by colon extremists. In March 1943, Muslim leader Ferhat Abbas presented the French administration with the Manifesto of the Algerian People, signed by 56 Algerian nationalist and international leaders. The manifesto demanded an Algerian constitution that would guarantee immediate and effective political participation and legal equality for Muslims.
Instead, the French administration in 1944 instituted a reform package, based on the 1936 Viollette Plan, that granted full French citizenship only to certain categories of "meritorious" Algerian Muslims, who numbered about 60,000. In April 1945, the French arrested the Algerian nationalist leader Messali Hadj. On May 1 the followers of his Parti du Peuple Algérien (PPA) participated in demonstrations which were violently put down by the police. Several Algerians were killed. The tensions between the Muslim and colon communities exploded on May 8, 1945, V-E Day, causing the Sétif and Guelma massacre. When a Muslim march was met with violence, marchers rampaged. The army and police responded by conducting a prolonged and systematic ratissage (literally, raking over) of suspected centers of dissidence. According to official French figures, 1,500 Muslims died as a result of these countermeasures. Other estimates vary from 6,000 to as high as 45,000 killed. Many nationalists drew the conclusion that independence could not be won by peaceful means, and so started organizing for violent rebellion. In August 1947, the French National Assembly approved the government-proposed Organic Statute of Algeria. This law called for the creation of an Algerian Assembly with one house representing Europeans and "meritorious" Muslims and the other representing the remaining 8 million or more Muslims. Muslim and colon deputies alike abstained or voted against the statute but for diametrically opposed reasons: the Muslims because it fell short of their expectations and the colons because it went too far. Algerian War of Independence (1954–1962) The Algerian War of Independence (1954–1962), brutal and long, was the most recent major turning point in the country's history. Although often fratricidal, it ultimately united Algerians and seared the value of independence and the philosophy of anticolonialism into the national consciousness.
In the early morning hours of November 1, 1954, the National Liberation Front (Front de Libération Nationale—FLN) launched attacks throughout Algeria in the opening salvo of a war of independence. An important watershed in this war was the massacre of Pieds-Noirs civilians by the FLN near the town of Philippeville in August 1955, which prompted Jacques Soustelle to call for more repressive measures against the rebels. The French authorities claimed that 1,273 "guerrillas" died in what Soustelle admitted were "severe" reprisals. The FLN subsequently claimed, giving names and addresses, that 12,000 Muslims had been killed. After Philippeville, all-out war began in Algeria. The FLN fought largely using guerrilla tactics, whilst the French counter-insurgency tactics often included severe reprisals and repression. Eventually, protracted negotiations led to a cease-fire signed by France and the FLN on March 18, 1962, at Evian, France. The Evian accords also provided for continuing economic, financial, technical, and cultural relations, along with interim administrative arrangements until a referendum on self-determination could be held. The Evian accords guaranteed the religious and property rights of French settlers, but the perception that they would not be respected led to the exodus of one million pieds-noirs and harkis. The French Army's abusive tactics remain a controversial subject in France to this day. Deliberate illegal methods were used, such as beatings, mutilations, hanging by the feet or hands, torture by electroshock, waterboarding, sleep deprivation and sexual assaults, among others.
French war crimes against Algerian civilians were also committed, including indiscriminate shootings of civilians, bombings of villages suspected of helping the ALN, rape, disembowelment of pregnant women, imprisonment without food in small cells (some of which were small enough to impede lying down), throwing prisoners out of helicopters to their death or into the sea with concrete on their feet, and burying people alive. The FLN also committed many atrocities, both against French pieds-noirs and against fellow Algerians whom they deemed supporters of the French. These crimes included killing unarmed men, women and children, rape and disembowelment or decapitation of women, and murdering children by slitting their throats or banging their heads against walls. Between 350,000 and 1 million Algerians are estimated to have died during the war, and more than 2 million, out of a total Muslim population of 9 or 10 million, were made into refugees or forcibly relocated into government-controlled camps. Much of the countryside and agriculture was devastated, along with the modern economy, which had been dominated by urban European settlers (the pieds-noirs). French sources estimated that at least 70,000 Muslim civilians were killed, or abducted and presumed killed, by the FLN during the Algerian War. Nearly one million people of mostly French, Spanish and Italian descent left the country at independence, owing to the privileges they lost as settlers and their unwillingness to live on an equal footing with indigenous Algerians; with them left most Algerians of Jewish descent and those Muslim Algerians who had supported a French Algeria (harkis). Between 30,000 and 150,000 pro-French Muslims were also killed in Algeria by the FLN in post-war reprisals. Independent Algeria Ben Bella presidency (1962–65) The Algerian independence referendum was held in French Algeria on 1 July 1962, passing with 99.72% of the vote. As a result, France declared Algeria independent on 3 July.
On 8 September 1963, the first Algerian constitution was adopted by nationwide referendum under close supervision by the National Liberation Front (FLN). Later that month, Ahmed Ben Bella was formally elected the first president of Algeria for a five-year term after receiving support from the FLN and the military, led by Colonel Houari Boumédiène. However, the war for independence and its aftermath had severely disrupted Algeria's society and economy. In addition to the destruction of much of Algeria's infrastructure, an exodus of the upper-class French and European colons from Algeria deprived the country of most of its managers, civil servants, engineers, teachers, physicians, and skilled workers. The homeless and displaced numbered in the hundreds of thousands, many suffering from illness, and some 70 percent of the workforce was unemployed. The months immediately following independence witnessed the pell-mell rush of Algerians and government officials to claim the property and jobs left behind by the European colons. For example, in the March Decrees of 1963, President Ben Bella declared all agricultural, industrial, and commercial properties previously owned and operated by Europeans vacant, thereby legalizing confiscation by the state. The military played an important role in Ben Bella's administration. Since the president recognized the role that the military had played in bringing him to power, he appointed senior military officers as ministers and to other important positions within the new state, including naming Colonel Boumédiène as defence minister. These military officials played a core role in implementing the country's security and foreign policy. Under the new constitution, Ben Bella's presidency combined the functions of chief of state and head of government with those of supreme commander of the armed forces. He formed his government without needing legislative approval and was responsible for the definition and direction of its policies.
There was no effective institutional check on the president's powers. As a result, opposition leader Hocine Aït-Ahmed quit the National Assembly in 1963 to protest the increasingly dictatorial tendencies of the regime and formed a clandestine resistance movement, the Socialist Forces Front (Front des Forces Socialistes—FFS), dedicated to overthrowing the Ben Bella regime by force. Late summer 1963 saw sporadic incidents attributed to the FFS, but more serious fighting broke out a year later, and the army moved quickly and in force to crush the rebellion. Minister of Defense Boumédiène had no qualms about sending the army to put down regional uprisings because he felt they posed a threat to the state. However, President Ben Bella attempted to co-opt allies from among these regional leaders in order to undermine the ability of military commanders to influence foreign and security policy. Tensions consequently built between Boumédiène and Ben Bella, and in 1965 the military removed Ben Bella in a coup d'état, replacing him with Boumédiène as head of state. The 1965 coup and the Boumédienne military regime On 19 June 1965, Houari Boumédiène deposed Ahmed Ben Bella in a military coup d'état that was both swift and bloodless. Ben Bella "disappeared", and would not be seen again until he was released from house arrest in 1980 by Boumédiène's successor, Colonel Chadli Bendjedid. Boumédiène immediately dissolved the National Assembly and suspended the 1963 constitution. Political power resided in the National Council of the Algerian Revolution (Conseil National de la Révolution Algérienne—CNRA), a predominantly military body intended to foster cooperation among various factions in the army and the party. Houari Boumédiène's position as head of government and of state was initially insecure, partly because of his lack of a significant power base outside of the armed forces.
He relied strongly on a network of former associates known as the Oujda group, named after Boumédiène's posting as National Liberation Army (Armée de Libération Nationale—ALN) leader in the Moroccan border town of Oujda during the war years, but he could not fully dominate his fractious regime. This situation may have accounted for his deference to collegial rule. Over Boumédiène's 11-year reign as Chairman of the CNRA, the council introduced two formal mechanisms for popular participation in politics: the People's Municipal Assembly (Assemblée Populaires Communales) and the People's Provincial Assembly (Assemblée Populaires de Wilaya). Under Boumédiène's rule, leftist and socialist concepts were merged with Islam. Boumédiène also used Islam to opportunistically consolidate his power. On one hand, he made token concessions and cosmetic changes to the government to appear more Islamic, such as putting Islamist Ahmed Taleb Ibrahimi in charge of national education in 1965 and adopting policies criminalizing gambling, establishing Friday as the national holiday, and dropping plans to introduce birth control, to paint an Islamic image of the new government. But on the other hand, Boumédiène's government also progressively repressed Islamic groups, such as by ordering the dissolution of Al Qiyam. Following attempted coups—most notably that of chief-of-staff Col. Tahar Zbiri in December 1967—and a failed assassination attempt on 25 April 1968, Boumédiène consolidated power and forced military and political factions to submit. He took a systematic, authoritarian approach to state building, arguing that Algeria needed stability and an economic base before building any political institutions. Eleven years after Boumédiène took power, after much public debate, a long-promised new constitution was promulgated in November 1976. The constitution restored the National Assembly and gave it legislative, consent, and oversight functions.
Boumédiène was later elected president with 95 percent of the votes cast. Bendjedid rule (1978–92), the 1992 Coup d'État and the rise of the civil war Boumédiène's death on 27 December 1978 set off a struggle within the FLN to choose a successor. A deadlock between two candidates was broken when Colonel Chadli Bendjedid, a moderate who had collaborated with Boumédiène in deposing Ahmed Ben Bella, was sworn in on February 9, 1979. He was re-elected in 1984 and 1988. After the violent 1988 October Riots, a new constitution was adopted in 1989 that ended the Algerian one-party state by allowing the formation of political associations in addition to the FLN. It also removed the armed forces, which had run the government since the days of Boumédiène, from a role in the operation of the government. Among the scores of parties that sprang up under the new constitution, the militant Islamic Salvation Front (Front Islamique du Salut—FIS) was the most successful, winning a majority of votes in the June 1990 municipal elections, as well as the first stage of the December national legislative elections. The surprising first-round success of the fundamentalist FIS party in the December 1991 balloting caused the army to discuss options to intervene in the election. Officers feared that an Islamist government would interfere with their positions and core interests in economic, national security, and foreign policy, since the FIS had promised a fundamental overhaul of the social, political, and economic structure to achieve a radical Islamist agenda. Senior military figures, such as Defence Minister Khaled Nezzar, Chief of the General Staff Abdelmalek Guenaizia, and other leaders of the navy, Gendarmerie, and security services, all agreed that the FIS should be stopped from gaining power at the ballot box.
They also agreed that Bendjedid would need to be removed from office due to his determination to uphold the country's new constitution by continuing with the second round of ballots. On 11 January 1992, Bendjedid announced his resignation on national television, saying it was necessary to "protect the unity of the people and the security of the country". Later that same day, the High Council of State (Haut Comité d'Etat—HCE), which was composed of five people (including Khaled Nezzar, Tedjini Haddam, Ali Kafi, Mohamed Boudiaf and Ali Haroun), was appointed to carry out the duties of the president. The new government, led by Sid Ahmed Ghozali, banned all political activity at mosques and began stopping people from attending prayers at popular mosques. The FIS was legally dissolved by Interior Minister Larbi Belkheir on 9 February for attempting "insurrections against the state". A state of emergency was also declared, and extraordinary powers, such as curtailing the right to associate, were granted to the regime. Between January and March, a growing number of FIS militants were arrested by the military, including Abdelkader Hachani and his successors, Othman Aissani and Rabah Kebir. Following the announcement of the dissolution of the FIS and the implementation of a state of emergency on 9 February, the Algerian security forces used their new emergency powers to conduct large-scale arrests of FIS members, holding them in five "detention centers" in the Sahara. Between 5,000 (the official number) and 30,000 (the FIS number) people were detained. This crackdown led to a fundamentalist Islamic insurgency, resulting in the continuous and brutal 10-year-long Algerian Civil War. During the civil war, the secular state apparatus nonetheless allowed elections featuring pro-government and moderate religious-based parties. The civil war lasted from 1991 to 2002.
Civil War and Bouteflika (1992–2019) After Chadli Bendjedid resigned from the presidency in the military coup of 1992, a series of figureheads were selected by the military to assume the presidency, as officers were reluctant to assume public political power even though they had manifest control over the government. Additionally, the military's senior leaders felt a need to give a civilian face to the new political regime they had hastily constructed in the aftermath of Bendjedid's ousting and the termination of elections, preferring a friendlier non-military face to front the regime. The first such head of state was Mohamed Boudiaf, who was appointed president of the High Council of State (HCE) in February 1992 after a 27-year exile in Morocco. However, Boudiaf quickly came into conflict with the military, as his attempts to appoint his own staff or form a political party were viewed with suspicion by officers. Boudiaf also launched political initiatives, such as a rigorous anti-corruption campaign in April 1992 and the sacking of Khaled Nezzar from his post as Defence Minister, which were seen by the military as an attempt to remove their influence in the government. The former of these initiatives was especially hazardous to the many senior military officials who had benefited massively and illegally from the political system for years. In the end, Boudiaf was assassinated in June 1992 by one of his bodyguards with Islamist sympathies. Ali Kafi briefly assumed the HCE presidency after Boudiaf's death, before Liamine Zéroual was appointed as a long-term replacement in 1994. However, Zéroual remained in office for only four years before announcing his retirement, as he quickly became embroiled in clan warfare within the upper echelons of the military and fell out with groups of the more senior generals. After this, Abdelaziz Bouteflika, Boumédiène's former foreign minister, succeeded as president.
As the Algerian civil war wound to a close, presidential elections were held again in April 1999. Although seven candidates qualified for election, all but Abdelaziz Bouteflika, who had the support of the military as well as the National Liberation Front (FLN), withdrew on the eve of the election amid charges of electoral fraud and interference from the military. Bouteflika went on to win with 70 percent of the votes cast. Despite the purportedly democratic elections, the civilian government immediately after the 1999 elections acted only as a sort of 'hijab' over the true government, mostly running day-to-day business, while the military still largely ran the country behind the scenes. For example, ministerial mandates were granted to individuals only with the military's approval, and different factions of the military invested in various political parties and the press, using them as pawns to gain influence. However, the military's influence over politics decreased gradually, leaving Bouteflika with more authority to decide policy. One reason for this was that the senior commanders who had dominated the political scene during the 1960s and 1970s started to retire. Bouteflika's former experience as Boumédiène's foreign minister earned him connections that rejuvenated Algeria's international reputation, which had been tarnished in the early 1990s due to the civil war. On the domestic front, Bouteflika's policy of "national reconciliation", intended to bring a close to civilian violence, earned him a popular mandate that helped him to win further presidential terms in 2004, 2009 and 2014. In 2010, journalists gathered to demonstrate for press freedom and against Bouteflika's self-appointed role as editor-in-chief of Algeria's state television station. In February 2011, the government rescinded the state of emergency that had been in place since 1992 but still banned all protest gatherings and demonstrations.
However, in April 2011, over 2,000 protesters defied the official ban and took to the streets of Algiers, clashing with police forces. These protests can be seen as a part of the Arab Spring, with protesters noting that they were inspired by the recent Egyptian revolution, and that Algeria was a police state that was "corrupt to the bone". In February 2019, after 20 years in office, Bouteflika announced that he would seek a fifth term. This sparked widespread discontent around Algeria and protests in Algiers. Despite earlier insisting that he would resign only once his term finished in late April, Bouteflika resigned on 2 April, after the chief of the army, Ahmed Gaid Salah, declared him "unfit for office". Although Gaid Salah was loyal to Bouteflika, many in the military identified with civilians, as nearly 70 percent of the army are civilian conscripts who are required to serve for 18 months. Also, since demonstrators demanded a change to the whole governmental system, many army officers aligned themselves with the demonstrators in the hope of surviving an anticipated revolution and retaining their positions. After Bouteflika (2019–) After the resignation of Abdelaziz Bouteflika, the President of the Council of the Nation, Abdelkader Bensalah, became acting president of Algeria on 9 April 2019. Following the presidential election on 12 December 2019, Abdelmadjid Tebboune was elected president after taking 58% of the votes, beating the candidates from both main parties, the National Liberation Front and the Democratic National Rally. On the eve of the first anniversary of the Hirak Movement, which led to the resignation of former president Bouteflika, President Abdelmadjid Tebboune announced in a statement to the Algerian national media that 22 February would be declared the Algerian "National Day of Fraternity and Cohesion between the People and Its Army for Democracy."
In the same statement, Tebboune spoke in favor of the Hirak Movement, saying that "the blessed Hirak has preserved the country from a total collapse", and that he had "made a personal commitment to carry out all of the [movement's] demands." On 21 and 22 February 2020, masses of demonstrators (with turnout comparable to well-established Algerian holidays like the Algerian Day of Independence) gathered to honor the anniversary of the Hirak Movement and the newly established national day. In an effort to contain the COVID-19 pandemic, Tebboune announced on 17 March 2020 that "marches and rallies, whatever their motives" would be prohibited. But after protesters and journalists were arrested for participating in such marches, Tebboune faced accusations of attempting to "silence Algerians." Notably, the government's actions were condemned by Amnesty International, which said in a statement that "when all eyes [...] are on the management of the COVID-19 pandemic, the Algerian authorities are devoting time to speeding up the prosecution and trial of activists, journalists, and supporters of the Hirak movement." The National Committee for the Liberation of Detainees (Comité national pour la libération des détenus—CNLD) estimated that around 70 prisoners of conscience were imprisoned by 2 July 2020 and that several of the imprisoned had been arrested for Facebook posts. On 28 December 2019, the then-recently inaugurated President Tebboune met with Ahmed Benbitour, the former Algerian Head of Government, with whom he discussed the "foundations of the new Republic." On 8 January 2020, Tebboune established a "commission of experts" composed of 17 members (most of whom were professors of constitutional law) responsible for examining the previous constitution and making any necessary revisions. Led by Ahmed Laraba, the commission was required to submit its proposals to Tebboune directly within the following two months.
In a letter to Laraba on the same day, Tebboune outlined seven axes around which the commission should focus its discussion. These areas of focus included strengthening citizens' rights, combating corruption, consolidating the balance of powers in the Algerian government, increasing the oversight powers of parliament, promoting the independence of the judiciary, furthering citizens' equality under the law, and constitutionalizing elections. Tebboune's letter also included a call for an "immutable and intangible" two-term limit for anyone serving as president, a major point of contention in the initial Hirak Movement protests, which were spurred by former president Abdelaziz Bouteflika's announcement that he would run for a fifth term. The preliminary draft revision of the constitution was made public on 7 May 2020, but the Laraba Commission (as the "commission of experts" came to be known) was open to additional proposals from the public until 20 June. By 3 June, the commission had received an estimated 1,200 additional public proposals. After all revisions were considered by the Laraba Commission, the draft was introduced to the Cabinet of Algeria (Council of Ministers). The revised constitution was adopted in the Council of Ministers on 6 September, in the People's National Assembly on 10 September, and in the Council of the Nation on 12 September. The constitutional changes were approved in the 1 November 2020 referendum, with 66.68% of participating voters in favour of the changes. On 16 February 2021, mass protests and a wave of nationwide rallies and peaceful demonstrations against the government of Abdelmadjid Tebboune began. See also Culture of Algeria Colonial heads of Algeria List of heads of government of Algeria History of Africa History of North Africa List of presidents of Algeria Politics of Algeria Prime Minister of Algeria History of cities in Algeria: Algiers history and timeline Oran history and timeline References Notes 1.
The indigenous peoples of northern Africa were identified by the Romans as Berbers, a word derived from the word Barbare or Barbarian, but they prefer being called "Imazighen". 2. On the Banu Hilal invasion, see Ibn Khaldoun (v.1). References Further reading Ageron, Charles Robert, and Michael Brett. Modern Algeria: A History from 1830 to the Present (1992). Bennoune, Mahfoud (1988). The Making of Contemporary Algeria – Colonial Upheavals and Post-Independence Development, 1830–1987. Cambridge: Cambridge University Press. Derradji, Abder-Rahmane. The Algerian Guerrilla Campaign, Strategy & Tactics (The Edwin Mellen Press, 1997). Derradji, Abder-Rahmane. A Concise History of Political Violence in Algeria: Brothers in Faith Enemies in Arms (2 vol. The Edwin Mellen Press, 2002). Horne, Alistair. A Savage War of Peace: Algeria 1954–1962 (2006). Laouisset, Djamel (2009). A Retrospective Study of the Algerian Iron and Steel Industry. New York City: Nova Publishers. McDougall, James. A History of Algeria (Cambridge UP, 2017). Roberts, Hugh (2003). The Battlefield – Algeria, 1988–2002. Studies in a Broken Polity. London: Verso Books. Ruedy, John (1992). Modern Algeria – The Origins and Development of a Nation. Bloomington: Indiana University Press. Sessions, Jennifer E. By Sword and Plow: France and the Conquest of Algeria (Cornell University Press, 2011). 352 pages. Sidaoui, Riadh (2009). "Islamic Politics and the Military – Algeria 1962–2008". Religion and Politics – Islam and Muslim Civilisation. Farnham: Ashgate Publishing. Historiography and memory Branche, Raphaëlle. "The martyr's torch: memory and power in Algeria." Journal of North African Studies 16.3 (2011): 431–443. Cohen, William B. "Pied-Noir memory, history, and the Algerian War." in Europe's Invisible Migrants (2003): 129–145, online. Hannoum, Abdelmajid. "The historiographic state: how Algeria once became French." History and Anthropology 19.2 (2008): 91–114, online. Hassett, Dónal.
Mobilizing Memory: The Great War and the Language of Politics in Colonial Algeria, 1918–1939 (Oxford UP, 2019). House, Jim. "Memory and the Creation of Solidarity during the Decolonization of Algeria." Yale French Studies 118/119 (2010): 15–38, online. Johnson, Douglas. "Algeria: some problems of modern history." Journal of African History (1964): 221–242. Lorcin, Patricia M.E., ed. Algeria and France, 1800–2000: identity, memory, nostalgia (Syracuse UP, 2006). McDougall, James. History and the Culture of Nationalism in Algeria (Cambridge UP, 2006), excerpt. Vince, Natalya. Our fighting sisters: Nation, memory and gender in Algeria, 1954–2012 (Manchester UP, 2015). External links List of rulers for Algeria
https://en.wikipedia.org/wiki/History%20of%20Zimbabwe
History of Zimbabwe
Until roughly 2,000 years ago, what would become Zimbabwe was populated by ancestors of the San people. Bantu inhabitants of the region arrived and developed ceramic production in the area. A series of trading empires emerged, including the Kingdom of Mapungubwe and the Kingdom of Zimbabwe. In the 1880s, the British South Africa Company began its activities in the region, leading to the colonial era in Southern Rhodesia. Following the Lancaster House Agreement of 1979 there was a transition to internationally recognized majority rule in 1980; the United Kingdom formally granted Zimbabwe independence on 18 April that year. In the 2000s Zimbabwe's economy began to deteriorate due to various factors, including the imposition of economic sanctions by western countries led by the United Kingdom and widespread corruption in government. Economic instability caused many Zimbabweans to emigrate. Prior to its recognized independence as Zimbabwe in 1980, the nation had been known by several names: Rhodesia, Southern Rhodesia and Zimbabwe Rhodesia. Pre-Colonial era (1000–1887) Prior to the arrival of Bantu speakers in present-day Zimbabwe the region was populated by ancestors of the San people. The first Bantu-speaking farmers arrived during the Bantu expansion around 2000 years ago. These Bantu speakers were the makers of early Iron Age pottery belonging to the Silver Leaves or Matola tradition, third to fifth centuries A.D., found in southeast Zimbabwe. This tradition was part of the eastern stream of Bantu expansion (sometimes called Kwale) which originated west of the Great Lakes, spreading to the coastal regions of southeastern Kenya and north eastern Tanzania, and then southwards to Mozambique, south eastern Zimbabwe and Natal. More substantial in numbers in Zimbabwe were the makers of the Ziwa and Gokomere ceramic wares, of the fourth century A.D.
Their early Iron Age ceramic tradition belonged to the highlands facies of the eastern stream, which moved inland to Malawi and Zimbabwe. Imports of beads have been found at Gokomere and Ziwa sites, possibly in return for gold exported to the coast. A later phase of the Gokomere culture was the Zhizo in southern Zimbabwe. Zhizo communities settled in the Shashe-Limpopo area in the tenth century. Their capital there was Schroda (just across the Limpopo River from Zimbabwe). Many fragments of ceramic figurines have been recovered from there, figures of animals and birds, and also fertility dolls. The inhabitants produced ivory bracelets and other ivory goods. Imported beads found there and at other Zhizo sites are evidence of trade, probably of ivory and skins, with traders on the Indian Ocean coast. Pottery belonging to a western stream of Bantu expansion (sometimes called Kalundu) has been found at sites in northeastern Zimbabwe, dated from the seventh century. (The western stream originated in the same area as the eastern stream: both belong to the same style system, called by Phillipson the Chifumbadze system, which has general acceptance by archaeologists.) The terms eastern and western streams represent the expansion of the Bantu speaking peoples in terms of their culture. Another question concerns which branches of the Bantu languages they spoke. It seems that the makers of the Ziwa/Gokomere wares were not the ancestral speakers of the Shona languages of today's Zimbabwe, who did not arrive there until around the tenth century, from south of the Limpopo river, and whose ceramic culture belonged to the western stream. The linguist and historian Ehret believes that in view of the similarity of the Ziwa/Gokomere pottery to the Nkope of the ancestral Nyasa language speakers, the Ziwa/Gokomere people spoke a language closely related to the Nyasa group.
Their language, whatever it was, was superseded by the ancestral Shona languages, although Ehret says that a set of Nyasa words occur in central Shona dialects today. The evidence that the ancestral Shona speakers came from South Africa is that the ceramic styles associated with Shona speakers in Zimbabwe from the thirteenth to the seventeenth centuries can be traced back to western stream (Kalundu) pottery styles in South Africa. The Ziwa/Gokomere and Zhizo traditions were superseded by Leopards Kopje and Gumanye wares of the Kalundu tradition from the tenth century. Although the western stream Kalundu tradition was ancestral to Shona ceramic wares, the closest relationships of the ancestral Shona language according to many linguists were with a southern division of eastern Bantu – such languages as the southeastern languages (Nguni, Sotho-Tswana, Tsonga), Nyasa and Makwa. While it may well be the case that the people of the western stream spoke a language belonging to a wider Eastern Bantu division, it remains a puzzle that they spoke a language most closely related to the languages just mentioned, all of which are today spoken in southeastern Africa. After the Shona speaking people moved into present-day Zimbabwe, many different dialects developed over time in the different parts of the country. Among these was Kalanga. It is believed that Kalanga speaking societies first emerged in the middle Limpopo valley in the 9th century before moving on to the Zimbabwean highlands. The Zimbabwean plateau eventually became the centre of subsequent Kalanga states. The Kingdom of Mapungubwe was the first in a series of sophisticated trade states developed in Zimbabwe by the time of the first European explorers from Portugal. They traded in gold, ivory and copper for cloth and glass. From about 1250 until 1450, Mapungubwe was eclipsed by the Kingdom of Zimbabwe.
This Kalanga state further refined and expanded upon Mapungubwe's stone architecture, which survives to this day at the ruins of the kingdom's capital of Great Zimbabwe. From circa 1450 to 1760, Zimbabwe gave way to the Kingdom of Mutapa. This Kalanga state ruled much of the area that is known as Zimbabwe today, and parts of central Mozambique. It is known by many names, including the Mutapa Empire and Mwenemutapa, and was known for its gold trade routes with Arabs and the Portuguese. António Fernandes, a Portuguese explorer, first entered the area in 1511 from Sofala and encountered the Manyika people. He returned in 1513 and explored the northern region of the territory, coming into contact with Chikuyo Chisamarengu, the ruler of Mutapa. In the early 17th century, Portuguese settlers destroyed the trade and began a series of wars which left the empire in near collapse. As a direct response to Portuguese aggression in the interior, a new Kalanga state emerged called the Rozwi Empire. Relying on centuries of military, political and religious development, the Rozwi (which means "destroyers") removed the Portuguese from the Zimbabwe plateau by force of arms. The Rozwi continued the stone building traditions of the Zimbabwe and Mapungubwe kingdoms while adding guns to its arsenal and developing a professional army to protect its trade routes and conquests. Around 1821, the Zulu general Mzilikazi of the Khumalo clan successfully rebelled against King Shaka and created his own clan, the Ndebele. The Ndebele fought their way northwards into the Transvaal, leaving a trail of destruction in their wake and beginning an era of widespread devastation known as the Mfecane. When Boer trekkers converged on the Transvaal in 1836, they drove the tribe even further northward.
After losing their remaining South African lands in 1840, Mzilikazi and his tribe permanently settled the southwest of present-day Zimbabwe in what became known as Matabeleland, establishing Bulawayo as their capital. Mzilikazi then organised his society into a military system with regimental kraals, similar to those of Shaka, which was stable enough to repel further Boer incursions. During the pre-colonial period, the Ndebele social structure was stratified. It was composed of mainly three social groups: Zansi, Enhla and Amahole. The Zansi comprised the ruling class, the original Khumalo people who migrated from south of the Limpopo with Mzilikazi. The Enhla and Amahole groups were made up of other tribes and ethnic groups who had been incorporated into the empire during the migration. However, with the passage of time, this stratification has slowly disappeared. The Ndebele people have long adhered to the worship of Unkunkulu as their supreme being. Their religious life in general (rituals, ceremonies, practices, devotion and loyalty) revolves around the worship of this Supreme Being. However, with the popularisation of Christianity and other religions, Ndebele traditional religion is now uncommon. Mzilikazi died in 1868 and, following a violent power struggle, was succeeded by his son, Lobengula. King Mzilikazi had established the Ndebele Kingdom, with Shona subjects paying tribute to him. The nascent kingdom encountered European powers for the first time and Lobengula signed various treaties with the various nations jostling for power in the region, playing them off against one another in order to preserve the sovereignty of his kingdom and gain the aid of the Europeans should the kingdom become involved in a war. Colonial era (1890–1980) In the 1880s, British diamond magnate Cecil Rhodes' British South Africa Company (BSAC) started to make inroads into the region.
In 1888, Rhodes obtained a concession for mining rights from King Lobengula of the Ndebele peoples. Cecil Rhodes presented this concession to persuade the British government to grant a royal charter to his British South Africa Company over Matabeleland, and its subject states such as Mashonaland. Rhodes sought permission to negotiate similar concessions covering all territory between the Limpopo River and Lake Tanganyika, then known as 'Zambesia'. In accordance with the terms of the aforementioned concessions and treaties, Cecil Rhodes promoted the immigration of white settlers into the region, as well as the establishment of mines, primarily to extract the diamond ores present. In 1895 the BSAC adopted the name 'Rhodesia' for the territory of Zambesia, in honour of Cecil Rhodes. In 1898, 'Southern Rhodesia' became the official denotation for the region south of the Zambezi, which later became Zimbabwe. The region to the north was administered separately by the BSAC and later named Northern Rhodesia (now Zambia). The Shona waged unsuccessful wars (known as Chimurenga) against encroachment upon their lands by clients of BSAC and Cecil Rhodes in 1896 and 1897. Following the failed insurrections of 1896–97 the Ndebele and Shona groups became subject to Rhodes's administration, thus precipitating European settlement en masse in the new colony. The colony's first formal constitution was drafted in 1899, and copied various pieces of legislation directly from that of the Union of South Africa; Rhodesia was meant to be, in many ways, a shadow colony of the Cape. Many within the administrative framework of the BSAC assumed that Southern Rhodesia, when its "development" was "suitably advanced", would "take its rightful place as a member of" the Union of South Africa after the Second Boer War (1899–1902), when the four South African colonies joined under the auspices of one flag and began to work towards the creation of a unified administrative structure.
The territory was made open to white settlement, and these settlers were then in turn given considerable administrative powers, including a franchise that, while on the surface non-racial, ensured "a predominantly European electorate" which "operated to preclude Great Britain from modifying her policy in Southern Rhodesia and subsequently treating it as a territory inhabited mainly by Africans whose interests should be paramount and to whom British power should be transferred". Southern Rhodesia became a self-governing British colony in October 1923, subsequent to a referendum held the previous year. The British government took full command of the British South Africa Company's holdings, including both Northern and Southern Rhodesia. Northern Rhodesia retained its status as a colonial protectorate; Southern Rhodesia was given responsible self-government – with limitations and still annexed to the crown as a colony. Many studies of the country see it as a state that operated independently within the Commonwealth; nominally under the rule of the Crown, but technically able to do as it pleased. And in theory, Southern Rhodesia was able to govern itself, draft its own legislation, and elect its own parliamentary leaders. But in reality, this was self-government subject to supervision. Until the white minority settler government's declaration of unilateral independence in 1965, London remained in control of the colony's external affairs, and all legislation was subject to approval from the United Kingdom Government and the Queen. In 1930, the Land Apportionment Act divided rural land along racial lines, creating four types of land: white-owned land that could not be acquired by Africans; purchase areas for those Africans who could afford to purchase land; Tribal Trust Lands designated as the African reserves; and Crown lands owned by the state, reserved for future use and public parks. 
Fifty-one per cent of the land was given to approximately 50,000 white inhabitants, with 29.8 per cent left for over a million Africans. Many Rhodesians served on behalf of the United Kingdom during World War II, mainly in the East African Campaign against Axis forces in Italian East Africa. In 1953, the British government consolidated the two colonies of Rhodesia with Nyasaland (now Malawi) in the ill-fated Federation of Rhodesia and Nyasaland, which was dominated by Southern Rhodesia. This move was heavily opposed by the residents of Nyasaland, who feared coming under the domination of white Rhodesians. In 1962, however, with growing African nationalism and general dissent, the British government declared that Nyasaland had the right to secede from the Federation; soon afterwards, they said the same for Northern Rhodesia. After African-majority governments had assumed control in neighbouring Northern Rhodesia and in Nyasaland, the white-minority Southern Rhodesian government led by Ian Smith made a Unilateral Declaration of Independence (UDI) from the United Kingdom on 11 November 1965. The United Kingdom deemed this an act of rebellion, but did not re-establish control by force. A civil war ensued, with Joshua Nkomo's ZAPU and Robert Mugabe's ZANU receiving assistance from the governments of Zambia and Mozambique. Although Smith's declaration was not recognised by the United Kingdom or any other foreign power, Southern Rhodesia dropped the designation "Southern" and claimed nation status as the Republic of Rhodesia in 1970. Independence and the 1980s The country gained official independence as Zimbabwe on 18 April 1980. The government held independence celebrations in Rufaro stadium in Salisbury, the capital.
Lord Christopher Soames, the last Governor of Southern Rhodesia, watched as Charles, Prince of Wales, gave a farewell salute and the Rhodesian Signal Corps played "God Save the Queen". Many foreign dignitaries also attended, including Prime Minister Indira Gandhi of India, President Shehu Shagari of Nigeria, President Kenneth Kaunda of Zambia, President Seretse Khama of Botswana, and Prime Minister Malcolm Fraser of Australia, representing the Commonwealth of Nations. Bob Marley sang 'Zimbabwe', a song he wrote, at the government's invitation in a concert at the country's independence festivities. President Shagari pledged $15 million at the celebration to train Zimbabweans in Zimbabwe and expatriates in Nigeria. Mugabe's government used part of the money to buy newspaper companies owned by South Africans, increasing the government's control over the media. The rest went to training students in Nigerian universities, government workers in the Administrative Staff College of Nigeria in Badagry, and soldiers in the Nigerian Defence Academy in Kaduna. Later that year Mugabe commissioned a report by the BBC on press freedom in Zimbabwe. The BBC issued its report on 26 June, recommending the privatisation of the Zimbabwe Broadcasting Corporation and its independence from political interests. Mugabe's government changed the capital's name from Salisbury to Harare on 18 April 1982 in celebration of the second anniversary of independence. The government renamed the main street in the capital, Jameson Avenue, in honour of Samora Machel, President of Mozambique. In 1992, a World Bank study indicated that more than 500 health centres had been built since 1980. The percentage of children vaccinated increased from 25% in 1980 to 67% in 1988 and life expectancy increased from 55 to 59 years. Enrolment increased by 232 per cent one year after primary education was made free and secondary school enrolment increased by 33 per cent in two years. 
These social policies led to an increase in the debt ratio. Several laws were passed in the 1980s in an attempt to reduce wage gaps. However, the gaps remained considerable. In 1988, the law gave women, at least in theory, the same rights as men. Previously, they could only take a few personal initiatives without the consent of their father or husband. The new Constitution provided for a President as Head of State with a Prime Minister as Head of Government. Reverend Canaan Banana served as the first President. The government amended the Constitution in 1987 to provide for an Executive President and abolished the office of Prime Minister. The constitutional changes came into effect on 1 January 1988 with Robert Mugabe as president. The bicameral Parliament of Zimbabwe had a directly elected House of Assembly and an indirectly elected Senate, partly made up of tribal chiefs. The Constitution established two separate voters' rolls, one for the black majority, who had 80% of the seats in Parliament, and the other for whites and other ethnic minorities, such as Coloureds, people of mixed race, and Asians, who held 20%. The government amended the Constitution in 1986, eliminating the separate voters' rolls and replacing the white seats with seats filled by nominated members. Many white MPs joined ZANU, which then reappointed them. In 1990 the government abolished the Senate and increased the House of Assembly's membership to include members nominated by the President. Prime Minister Mugabe kept Peter Walls, the head of the army, in his government and put him in charge of integrating the Zimbabwe People's Revolutionary Army (ZIPRA), the Zimbabwe African National Liberation Army (ZANLA), and the Rhodesian Army. While Western media outlets praised Mugabe's efforts at reconciliation with the white minority, tension soon developed. On 17 March 1980, after several unsuccessful assassination attempts, Mugabe asked Walls, "Why are your men trying to kill me?"
Walls replied, "If they were my men you would be dead." BBC news interviewed Walls on 11 August 1980. He told the BBC that he had asked British Prime Minister Margaret Thatcher to annul the 1980 election prior to the official announcement of the result, on the grounds that Mugabe had used intimidation to win the election. Walls said Thatcher had not replied to his request. On 12 August British government officials denied this, saying that Antony Duff, Deputy Governor of Salisbury, had told Walls on 3 March that Thatcher would not annul the election. Minister of Information Nathan Shamuyarira said the government would not be "held ransom by racial misfits" and told "all those Europeans who do not accept the new order to pack their bags." He also said the government continued to consider taking "legal or administrative action" against Walls. Mugabe, returning from a visit with United States President Jimmy Carter in New York City, said, "One thing is quite clear—we are not going to have disloyal characters in our society." Walls returned to Zimbabwe after the interview, telling Peter Hawthorne of Time magazine, "To stay away at this time would have appeared like an admission of guilt." Mugabe drafted legislation that would exile Walls from Zimbabwe for life, and Walls moved to South Africa. Ethnic divisions soon came back to the forefront of national politics. Tension between ZAPU and ZANU erupted with guerrilla activity starting again in Matabeleland in south-western Zimbabwe. Nkomo (ZAPU) left for exile in Britain and did not return until Mugabe guaranteed his safety. In 1982 government security officials discovered large caches of arms and ammunition on properties owned by ZAPU, accusing Nkomo and his followers of plotting to overthrow the government. Mugabe fired Nkomo and his closest aides from the cabinet.
Seven MPs, members of the Rhodesian Front, left Smith's party to sit as "independents" on 4 March 1982, signifying their dissatisfaction with his policies. As a result of what they saw as persecution of Nkomo and his party, PF-ZAPU supporters and army deserters began a campaign of dissidence against the government. Centring primarily in Matabeleland, home of the Ndebele, who were at the time PF-ZAPU's main followers, this dissidence continued through 1987. It involved attacks on government personnel and installations, armed banditry aimed at disrupting security and economic life in the rural areas, and harassment of ZANU-PF members. Because of the unsettled security situation immediately after independence, the government kept in force a "state of emergency". This gave the government widespread powers under the "Law and Order Maintenance Act", including the right to detain persons without charge, which it used quite widely. From 1983 to 1984 the government declared a curfew in areas of Matabeleland and sent in the army in an attempt to suppress members of the Ndebele tribe. The pacification campaign, known as the Gukurahundi, or strong wind, resulted in at least 20,000 civilian deaths perpetrated by an elite North Korean-trained unit known in Zimbabwe as the Fifth Brigade. ZANU-PF increased its majority in the 1985 elections, winning 67 of the 100 seats. The majority gave Mugabe the opportunity to start making changes to the constitution, including those with regard to land restoration. Fighting did not cease until Mugabe and Nkomo reached an agreement in December 1987 whereby ZAPU became part of ZANU-PF and the government changed the constitution to make Mugabe the country's first executive president and Nkomo one of two vice-presidents. 1990s Elections in March 1990 resulted in another overwhelming victory for Mugabe and his party, which won 117 of the 120 election seats.
Election observers estimated voter turnout at only 54% and found the campaign neither free nor fair, though balloting met international standards. Unsatisfied with a de facto one-party state, Mugabe called on the ZANU-PF Central Committee to support the creation of a de jure one-party state in September 1990 and lost. The government began further amending the constitution. The judiciary and human rights advocates fiercely criticised the first amendments enacted in April 1991 because they restored corporal and capital punishment and denied recourse to the courts in cases of compulsory purchase of land by the government. The general health of the civilian population also began to decline significantly, and by 1997 25% of the population of Zimbabwe had been infected by HIV, the virus that causes AIDS. During the 1990s students, trade unionists, and workers often demonstrated to express their discontent with the government. Students protested in 1990 against proposals for an increase in government control of universities and again in 1991 and 1992 when they clashed with police. Trade unionists and workers also criticised the government during this time. In 1992 police prevented trade unionists from holding anti-government demonstrations. In 1994 widespread industrial unrest weakened the economy. In 1996 civil servants, nurses, and junior doctors went on strike over salary issues. On 9 December 1997 a national strike paralysed the country. Mugabe was panicked by demonstrations by ZANLA ex-combatants, war veterans who had been at the heart of the incursions 20 years earlier in the Bush War. He agreed to pay them large gratuities and pensions, which proved to be a wholly unproductive and unbudgeted financial commitment. The discontent with the government spawned draconian government crackdowns which in turn started to destroy both the fabric of the state and of society. This in turn brought with it further discontent within the population. Thus a vicious downward spiral commenced.
Although many whites had left Zimbabwe after independence, mainly for neighbouring South Africa, those who remained continued to wield disproportionate control of some sectors of the economy, especially agriculture. In the late-1990s whites accounted for less than 1% of the population but owned 70% of arable land. Mugabe raised this issue of land ownership by white farmers. In a calculated move, he began forcible land redistribution, which brought the government into headlong conflict with the International Monetary Fund. Amid a severe drought in the region, the police and military were instructed not to stop the invasion of white-owned farms by the so-called 'war veterans' and youth militia. This led to a mass migration of White Zimbabweans out of Zimbabwe. At present almost no arable land is in the possession of white farmers. The economy during the 1980s and 1990s The economy was run along corporatist lines, with strict governmental controls on all aspects of the economy. Controls were placed on wages and prices, and massive increases in government spending resulted in significant budget deficits. This experiment met with very mixed results, and Zimbabwe fell further behind the first world while unemployment grew. Some market reforms were attempted in the 1990s. A 40 per cent devaluation of the Zimbabwean dollar was allowed to occur, and price and wage controls were removed. These policies also failed at that time. Growth, employment, wages, and social service spending contracted sharply, inflation did not improve, the deficit remained well above target, and many industrial firms, notably in textiles and footwear, closed in response to increased competition and high real interest rates. The incidence of poverty in the country increased during this time. 1999 to 2000 However, Zimbabwe began experiencing a period of considerable political and economic upheaval in 1999.
Opposition to President Mugabe and the ZANU-PF government grew considerably after the mid-1990s, in part due to worsening economic and human rights conditions brought about by the seizure of farmland owned by white farmers and the economic sanctions imposed by Western countries in response. The Movement for Democratic Change (MDC) was established in September 1999 as an opposition party founded by trade unionist Morgan Tsvangirai. The MDC's first opportunity to test opposition to the Mugabe government came in February 2000, when a referendum was held on a draft constitution proposed by the government. Among its elements, the new constitution would have permitted President Mugabe to seek two additional terms in office, granted government officials immunity from prosecution, and authorised government seizure of white-owned land. The referendum was handily defeated. Shortly thereafter, the government, through a loosely organised group of so-called war veterans (many of whom, judging from their age, were too young to have fought in the Chimurenga), sanctioned an aggressive land redistribution program often characterised by forced expulsion of white farmers and violence against both farmers and farm employees. Parliamentary elections held in June 2000 were marred by localised violence, and claims of electoral irregularities and government intimidation of opposition supporters. Nonetheless, the MDC succeeded in capturing 57 of 120 seats in the National Assembly. 2002 Presidential elections were held in March 2002. In the months leading up to the poll, ZANU-PF, with the support of the army, security services, and especially the so-called 'war veterans' – very few of whom actually fought in the Second Chimurenga against the Smith regime in the 1970s – set about wholesale intimidation and suppression of the MDC-led opposition.
Despite strong international criticism, these measures, together with organised subversion of the electoral process, ensured a Mugabe victory. The government's behaviour drew strong criticism from the EU and the US, which imposed limited sanctions against the leading members of the Mugabe regime. Since the 2002 election, Zimbabwe suffered further economic difficulty and growing political chaos. 2003–2005 Divisions within the opposition MDC had begun to fester early in the decade, after Morgan Tsvangirai (the president of the MDC) was lured into a government sting operation that videotaped him talking of Mr. Mugabe's removal from power. He was subsequently arrested and put on trial on treason charges. This crippled his control of party affairs and raised questions about his competence. It also catalysed a major split within the party. In 2004 he was acquitted, but not until after suffering serious abuse and mistreatment in prison. The opposing faction was led by Welshman Ncube, the general secretary of the party. In mid-2004, vigilantes loyal to Mr. Tsvangirai began attacking members mostly loyal to Ncube, climaxing in a September raid on the party's Harare headquarters in which the security director was nearly thrown to his death. An internal party inquiry later established that aides to Tsvangirai had tolerated, if not endorsed, the violence. Divisive as the violence was, it was a debate over the rule of law that set off the party's final break-up in November 2005. These divisions severely weakened the opposition. In addition, the government employed its own operatives both to spy on each side and to undermine each side via acts of espionage. Parliamentary elections held in March 2005, in which ZANU-PF won a two-thirds majority, were again criticised by international observers as flawed. 
Mugabe's political operatives were thus able to weaken the opposition internally, and the security apparatus of the state was able to destabilise it externally by using violence in anti-Mugabe strongholds to prevent citizens from voting. Some voters were 'turned away' from polling stations despite having proper identification, further guaranteeing that the government could control the results. Additionally, Mugabe had started to appoint judges sympathetic to the government, making any judicial appeal futile. Mugabe was also able to appoint 30 of the members of parliament. As Senate elections approached, further opposition splits occurred. Ncube's supporters argued that the MDC should field a slate of candidates; Tsvangirai's argued for a boycott. When party leaders voted on the issue, Ncube's side narrowly won, but Mr. Tsvangirai declared that as president of the party he was not bound by the majority's decision. Again the opposition was weakened. As a result, the elections for a new Senate in November 2005 were largely boycotted by the opposition. Mugabe's party won 24 of the 31 constituencies where elections were held, amid low voter turnout. Again, evidence surfaced of voter intimidation and fraud. In May 2005 the government began Operation Murambatsvina. It was officially billed to rid urban areas of illegal structures, illegal business enterprises, and criminal activities. In practice its purpose was to punish political opponents. The UN estimated that 700,000 people were left without jobs or homes as a result. Families and traders, especially at the beginning of the operation, were often given no notice before police destroyed their homes and businesses. Others were able to salvage some possessions and building materials but often had nowhere to go, despite the government's statement that people should be returning to their rural homes. Thousands of families were left unprotected in the open in the middle of Zimbabwe's winter. 
The government interfered with non-governmental organisation (NGO) efforts to provide emergency assistance to the displaced in many instances. Some families were removed to transit camps, where they had no shelter or cooking facilities and minimal food, supplies, and sanitary facilities. The operation continued into July 2005, when the government began a program to provide housing for the newly displaced. Human Rights Watch said the evictions had disrupted treatment for people with HIV/AIDS in a country where 3,000 die from the disease each week and about 1.3 million children have been orphaned. The operation was "the latest manifestation of a massive human rights problem that has been going on for years", said Amnesty International. As of September 2006, housing construction fell far short of demand, and there were reports that beneficiaries were mostly civil servants and ruling party loyalists, not those displaced. The government campaign of forced evictions continued in 2006, albeit on a lesser scale. In September 2005 Mugabe signed constitutional amendments that reinstituted a national senate (abolished in 1987) and that nationalised all land. This converted all ownership rights into leases. The amendments also ended the right of landowners to challenge government expropriation of land in the courts and marked the end of any hope of returning land that had hitherto been seized in armed land invasions. Elections for the senate in November resulted in a victory for the government. The MDC split over whether to field candidates and partially boycotted the vote. In addition to low turnout, there was widespread government intimidation. The split in the MDC hardened into factions, each of which claimed control of the party. The early months of 2006 were marked by food shortages and mass hunger. The sheer extremity of the situation was revealed by the fact that, in the courts, state witnesses said they were too weak from hunger to testify. 
2006 to 2007 In August 2006 runaway inflation forced the government to replace its existing currency with a revalued one. In December 2006, ZANU-PF proposed the "harmonisation" of the parliamentary and presidential election schedules in 2010; the move was seen by the opposition as an excuse to extend Mugabe's term as president until 2010. Morgan Tsvangirai was badly beaten on 12 March 2007 after being arrested and held at Machipisa Police Station in the Highfield suburb of Harare. The event garnered an international outcry and was considered particularly brutal and extreme, even considering the reputation of Mugabe's government. Kolawole Olaniyan, Director of Amnesty International's Africa Programme, said "We are very concerned by reports of continuing brutal attacks on opposition activists in Zimbabwe and call on the government to stop all acts of violence and intimidation against opposition activists". The economy shrank by 50% from 2000 to 2007. In September 2007 the inflation rate was put at almost 8,000%, the world's highest. There were frequent power and water outages. Harare's drinking water became unreliable in 2006, and as a consequence dysentery and cholera swept the city in December 2006 and January 2007. Unemployment in formal jobs ran at a record 80%. There was widespread hunger, manipulated by the government so that opposition strongholds suffered the most. Availability of bread was severely constrained after a poor wheat harvest and the closure of all bakeries. The country, which used to be one of Africa's richest, became one of its poorest. Many observers came to view the country as a 'failed state'. The settlement of the Second Congo War brought home much of Zimbabwe's substantial military commitment, although some troops remained to secure the mining assets under their control. The government lacked the resources or machinery to deal with the ravages of the HIV/AIDS pandemic, which affected 25% of the population. 
With all this and the forced, violent removal of white farmers in a brutal land redistribution program, Mugabe earned widespread scorn in the international arena. The regime managed to cling to power by creating wealthy enclaves for government ministers and senior party members. For example, Borrowdale Brook, a suburb of Harare, is an oasis of wealth and privilege: it features mansions, manicured lawns, shops with fully stocked shelves containing an abundance of fruit and vegetables, big cars, and a golf club, and it is home to President Mugabe's out-of-town retreat. Zimbabwe's bakeries shut down in October 2007, and supermarkets warned that they would have no bread for the foreseeable future due to the collapse in wheat production after the seizure of white-owned farms. The ministry of agriculture also blamed power shortages for the wheat shortfall, saying that electricity cuts had affected irrigation and halved crop yields per acre. The power shortages arose because Zimbabwe relies on Mozambique for some of its electricity and, owing to an unpaid bill of $35 million, Mozambique had reduced the amount of electrical power it supplied. On 4 December 2007, the United States imposed travel sanctions against 38 people with ties to President Mugabe because they "played a central role in the regime's escalated human rights abuses." On 8 December 2007, Mugabe attended a meeting of EU and African leaders in Lisbon, prompting UK Prime Minister Gordon Brown to decline to attend. While German chancellor Angela Merkel criticised Mugabe in her public comments, the leaders of other African countries offered him statements of support. Deterioration of the educational system The educational system in Zimbabwe, once regarded as among the best in Africa, went into crisis in 2007 because of the country's economic meltdown. 
One foreign reporter witnessed hundreds of children at Hatcliffe Extension Primary School in Epworth, west of Harare, writing in the dust on the floor because they had no exercise books or pencils. The high school exam system unravelled in 2007. Examiners refused to mark examination papers when they were offered just Z$79 a paper, enough to buy three small candies. Corruption crept into the system, which may explain why in January 2007 thousands of pupils received no marks for subjects they had entered, while others were deemed "excellent" in subjects they had not sat. The education system has since recovered, however, and is again considered among the best in Southern Africa. 2008 2008 elections Zimbabwe held a presidential election alongside a parliamentary election on 29 March 2008. The three major candidates were incumbent President Robert Mugabe of the Zimbabwe African National Union – Patriotic Front (ZANU-PF), Morgan Tsvangirai of the Movement for Democratic Change – Tsvangirai (MDC-T), and Simba Makoni, an independent. As no candidate received an outright majority in the first round, a second round was held on 27 June 2008 between Tsvangirai (with 47.9% of the first round vote) and Mugabe (43.2%). Tsvangirai withdrew from the second round a week before it was scheduled to take place, citing violence against his party's supporters. The second round went ahead, despite widespread criticism, and led to victory for Mugabe. Because of Zimbabwe's dire economic situation, the election was expected to provide President Mugabe with his toughest electoral challenge to date. Mugabe's opponents were critical of the handling of the electoral process, and the government was accused of planning to rig the election; Human Rights Watch said that the election was likely to be "deeply flawed". 
After the first round, but before the counting was completed, Jose Marcos Barrica, the head of the Southern African Development Community observer mission, described the election as "a peaceful and credible expression of the will of the people of Zimbabwe." No official results were announced for more than a month after the first round. The failure to release results was strongly criticised by the MDC, which unsuccessfully sought an order from the High Court to force their release. An independent projection placed Tsvangirai in the lead, but without the majority needed to avoid a second round. The MDC declared that Tsvangirai had won a narrow majority in the first round and initially refused to participate in any second round. ZANU-PF said that Mugabe would participate in a second round; the party alleged that some electoral officials, in connection with the MDC, had fraudulently reduced Mugabe's score, and as a result a recount was conducted. After the recount and the verification of the results, the Zimbabwe Electoral Commission (ZEC) announced on 2 May that Tsvangirai had won 47.9% and Mugabe 43.2%, thereby necessitating a run-off, to be held on 27 June 2008. Despite his continuing claims to have won a first-round majority, Tsvangirai agreed to participate in the second round. The period following the first round was marked by serious political violence. ZANU-PF blamed MDC supporters for perpetrating this violence; Western governments and prominent Western organisations blamed ZANU-PF. On 22 June 2008, Tsvangirai announced that he was withdrawing from the run-off, describing it as a "violent sham" and saying that his supporters risked being killed if they voted for him. The second round nevertheless went ahead as planned, with Mugabe as the only actively participating candidate, although Tsvangirai's name remained on the ballot. 
Mugabe won the second round by an overwhelming margin and was sworn in for another term as president on 29 June. International reaction to the second round varied. The United States and the European Union called for increased sanctions. On 11 July, the United Nations Security Council voted to impose sanctions on Zimbabwe, but Russia and China vetoed the resolution. The African Union called for a "government of national unity." Preliminary talks to set up conditions for official negotiations began between leading negotiators from both parties on 10 July, and on 22 July the three party leaders met for the first time in Harare to express their support for a negotiated settlement of disputes arising out of the presidential and parliamentary elections. Negotiations between the parties officially began on 25 July and proceeded with very few details released from the negotiation teams in Pretoria, as the media were barred from the premises where the negotiations took place. The talks were mediated by South African President Thabo Mbeki. On 15 September 2008, the leaders of the 14-member Southern African Development Community witnessed the signing of the power-sharing agreement, brokered by Mbeki. With a symbolic handshake and warm smiles at the Rainbow Towers hotel in Harare, Mugabe and Tsvangirai signed the deal to end the violent political crisis. Under the agreement, Mugabe would remain president, Tsvangirai would become prime minister, ZANU-PF and the MDC would share control of the police, ZANU-PF would command the army, and Arthur Mutambara would become deputy prime minister. Marange diamond fields massacre In November 2008 the Air Force of Zimbabwe was sent in after some police officers began refusing orders to shoot the illegal miners at the Marange diamond fields. Up to 150 of the estimated 30,000 illegal miners were shot from helicopter gunships. 
In 2008 some Zimbabwean lawyers and opposition politicians from Mutare claimed that Shiri was the prime mover behind the military assaults on illegal diggers in the diamond mines in the east of Zimbabwe. Estimates of the death toll by mid-December ranged from 83, reported by the Mutare City Council based on a request for burial ground, to 140, estimated by the (then) opposition Movement for Democratic Change – Tsvangirai party. 2009 to present 2009–2017 In January 2009, Morgan Tsvangirai announced that he would do as leaders across Africa had insisted and join a coalition government as prime minister with his nemesis, President Robert Mugabe. On 11 February 2009 Tsvangirai was sworn in as the Prime Minister of Zimbabwe. By 2009 inflation had peaked at 500 billion per cent per year under the Mugabe government, and the Zimbabwean currency was worthless. The opposition shared power with the Mugabe regime between 2009 and 2013; Zimbabwe switched to using the US dollar as currency, and the economy improved, reaching a growth rate of 10% per year. In 2013 the Mugabe government won an election which The Economist described as "rigged," doubled the size of the civil service, and embarked on "...misrule and dazzling corruption." However, the United Nations, African Union and SADC endorsed the elections as free and fair. By 2016 the economy had collapsed, nationwide protests took place throughout the country, and the finance minister admitted, "Right now we literally have nothing." Bond notes were introduced to fight the biting cash crisis and liquidity crunch, but cash became scarce on the market in 2017. On Wednesday 15 November 2017 the military placed President Mugabe under house arrest and removed him from power. The military stated that the president was safe. The military placed tanks around government buildings in Harare and blocked the main road to the airport. 
Public opinion in the capital favoured the dictator's removal, although people were uncertain about his replacement with another dictatorship. The Times reported that Emmerson Mnangagwa had helped to orchestrate the coup; he had recently been sacked by Mr Mugabe so that the path could be smoothed for Grace Mugabe to replace her husband. A Zimbabwean army officer, Major General Sibusiso Moyo, went on television to say the military was targeting "criminals" around President Mugabe but not actively removing the president from power. However, the head of the African Union described it as a coup. Ugandan writer Charles Onyango-Obbo stated on Twitter, "If it looks like a coup, walks like a coup and quacks like a coup, then it's a coup". Naunihal Singh, an assistant professor at the U.S. Naval War College and author of a book on military coups, described the situation in Zimbabwe as a coup. He tweeted that "'The President is safe' is a classic coup catch-phrase" of such an event. Robert Mugabe resigned on 21 November 2017. Second Vice-President Phelekezela Mphoko became the Acting President, and Emmerson Mnangagwa was sworn in as president on 24 November 2017. 2018–2019 General elections were held on 30 July 2018 to elect the president and members of both houses of parliament. The ruling party ZANU-PF won the majority of seats in parliament, and incumbent President Emmerson Mnangagwa was declared the winner after receiving 50.8% of votes. The opposition accused the government of rigging the vote. In subsequent riots by MDC supporters, the army opened fire and killed three people, while three others died of their injuries the following day. In January 2019, following a 130% increase in the price of fuel, thousands of Zimbabweans protested, and the government responded with a coordinated crackdown that resulted in hundreds of arrests and multiple deaths. 
Economic statistics 2021 Gross domestic product (GDP) growth in Zimbabwe was projected to reach 3.9 percent in 2021, a significant improvement after a two-year recession, according to the World Bank's June 2021 Zimbabwe Economic Update. See also Economic history of Zimbabwe Education in Zimbabwe Foreign relations of Zimbabwe Governor of Southern Rhodesia History of Africa History of Southern Africa Land reform in Zimbabwe List of presidents of Zimbabwe Politics of Zimbabwe President of Rhodesia Prime Minister of Rhodesia Prime Minister of Zimbabwe Bulawayo history and timeline Harare history and timeline Years in Zimbabwe References Further reading Bourne, Richard. Catastrophe: What Went Wrong in Zimbabwe? (Zed Books, 2011). Davoodi, Schoresch & Sow, Adama. Democracy and Peace in Zimbabwe, in: EPU Research Papers, Issue 12/08, Stadtschlaining 2008. Maguwu, Farai. Land Reform, Famine and Environmental Degradation in Zimbabwe, in: EPU Research Papers, Issue 06/07, Stadtschlaining 2007. Michel, Eddie. The White House and White Africa: Presidential Policy Toward Rhodesia During the UDI Era, 1965–1979 (New York: Routledge, 2019). Mlambo, Alois. History of Zimbabwe (Oxford University Press, 2014). Raftopoulos, Brian & Alois Mlambo, eds. Becoming Zimbabwe: A History from the Pre-colonial Period to 2008 (Weaver Press, 2009). Scarnecchia, Timothy. The Urban Roots of Democracy and Political Violence in Zimbabwe: Harare and Highfield, 1940–1964 (Rochester University Press, 2008). Sibanda, Eliakim M. The Zimbabwe African People's Union, 1961–87: A Political History of Insurgency in Southern Rhodesia (2004). External links Background Note: Zimbabwe Monomotapa
History of Russia
The history of Russia begins with the histories of the East Slavs. The traditional start-date of specifically Russian history is the establishment of the Rus' state in the north in 862, ruled by Varangians. Staraya Ladoga and Novgorod became the first major cities of the new union of immigrants from Scandinavia with the Slavs and Finns. In 882 Prince Oleg of Novgorod seized Kiev, thereby uniting the northern and southern lands of the Eastern Slavs under one authority. The state adopted Christianity from the Byzantine Empire in 988, beginning the synthesis of Byzantine and Slavic cultures that defined Russian culture for the next millennium. Kievan Rus' ultimately disintegrated as a state due to the Mongol invasions of 1237–1240 and the resulting deaths of a significant part of the population. After the 13th century, Moscow became a political and cultural center and, in time, the center of the unification of the Russian lands. By the end of the 15th century, Moscow had united the northeastern and northwestern Russian principalities, and in 1480 it finally overthrew the Mongol yoke. The territories of the Grand Duchy of Moscow became the Tsardom of Russia in 1547. In 1721, Tsar Peter the Great renamed his state the Russian Empire, hoping to associate it with the historical and cultural achievements of ancient Rus' – in contrast to his policies, which were oriented towards Western Europe. The state now extended from the eastern borders of the Polish–Lithuanian Commonwealth to the Pacific Ocean. Russia became a great power and dominated Europe after the victory over Napoleon. Peasant revolts were common, and all were fiercely suppressed. The Emperor Alexander II abolished Russian serfdom in 1861, but the peasants fared poorly and revolutionary pressures grew. 
In the following decades, reform efforts such as the Stolypin reforms of 1906–1914, the constitution of 1906, and the State Duma (1906–1917) attempted to open and liberalize the economy and political system, but the emperor refused to relinquish autocratic rule and resisted sharing his power. A combination of economic breakdown, war-weariness, and discontent with the autocratic system of government triggered the Russian Revolution in 1917. The overthrow of the monarchy initially brought into office a coalition of liberals and moderate socialists, but their failed policies led to seizure of power by the communist Bolsheviks on 25 October 1917 (7 November New Style). In 1922, Soviet Russia, along with Soviet Ukraine, Soviet Belarus, and the Transcaucasian SFSR signed the Treaty on the Creation of the USSR, officially merging all four republics to form the Soviet Union as a country. Between 1922 and 1991 the history of Russia became essentially the history of the Soviet Union, effectively an ideologically based state roughly conterminous with the Russian Empire before the 1918 Treaty of Brest-Litovsk. From its first years, government in the Soviet Union based itself on the one-party rule of the Communists, as the Bolsheviks called themselves, beginning in March 1918. The approach to the building of socialism, however, varied over different periods in Soviet history: from the mixed economy and diverse society and culture of the 1920s through the command economy and repressions of the Joseph Stalin era to the "era of stagnation" from the 1960s to the 1980s. During this period, the Soviet Union was one of the victors in World War II after recovering from a massive surprise invasion in 1941 by its previously secretly cooperative partner, Nazi Germany. It became a superpower competing with fellow new superpower the United States and other Western countries in the Cold War. 
The USSR was successful with its space program, launching the first artificial satellite and the first man into space. By the mid-1980s, with the weaknesses of Soviet economic and political structures becoming acute, Mikhail Gorbachev embarked on major reforms, which eventually led to the overthrow of the communist party and the breakup of the USSR, leaving Russia again on its own and marking the start of the history of post-Soviet Russia. The Russian Soviet Federative Socialist Republic renamed itself the Russian Federation and became one of the several successors to the Soviet Union. The Russian Federation was the only post-Soviet republic to assume the USSR's permanent membership in the UN Security Council. Russia also inherited the Soviet Union's entire nuclear arsenal in 1994 after signing the Budapest Memorandum; it retained the arsenal but lost its superpower status. Scrapping the socialist central planning and state ownership of property of the socialist era, new leaders, led by President Vladimir Putin (who first became President in 2000), took political and economic power after 2000 and engaged in an assertive foreign policy. Coupled with economic growth, Russia has since regained significant global status as a world power. Russia's 2014 annexation of the Crimean Peninsula led to economic sanctions imposed by the United States and the European Union. Under Putin's leadership, corruption in Russia is rated as the worst in Europe, and Russia's human rights situation has been increasingly criticised by international observers. Prehistory The first human settlement on the territory of Russia dates back to the Oldowan period in the early Lower Paleolithic. About 2 million years ago, representatives of Homo erectus migrated from Western Asia to the North Caucasus (an archaeological site on the Taman Peninsula). At this site, a stone tool was found in the skull of an Elasmotherium caucasicum, which lived 1.5–1.2 million years ago. 
1.5-million-year-old Oldowan flint tools have been discovered in the Akusha region of Dagestan in the north Caucasus, demonstrating the presence of early humans in the territory of the present-day Russian Federation from a very early time. Fossils of Denisova man date to about 110,000 years ago. DNA from a bone fragment found in Denisova Cave, that of a teenage girl who died about 90,000 years ago, shows that she was a hybrid of a Neanderthal mother and a Denisovan father. Russia was also home to some of the last surviving Neanderthals: the partial skeleton of a Neanderthal infant (Mezmaiskaya 2) in Mezmaiskaya cave in Adygea showed a carbon-dated age of only 45,000 years. In 2008, Russian archaeologists from the Institute of Archaeology and Ethnology of Novosibirsk, working at the site of Denisova Cave in the Altai Mountains of Siberia, uncovered a 40,000-year-old small bone fragment from the fifth finger of a juvenile hominin, which DNA analysis revealed to belong to a previously unknown species of human, named the Denisova hominin. The first trace of Homo sapiens on the large expanse of Russian territory dates back 45,000 years, to central Siberia (Ust'-Ishim man). The discovery of some of the earliest evidence for the presence of anatomically modern humans found anywhere in Europe was reported in 2007 from the deepest levels of the Kostenki archaeological site near the Don River in Russia (dated to at least 40,000 years ago) and at Sungir (34,600 years ago). Humans reached Arctic Russia (Mamontovaya Kurya) by 40,000 years ago. During the prehistoric eras the vast steppes of Southern Russia were home to tribes of nomadic pastoralists. (In classical antiquity, the Pontic Steppe was known as "Scythia".) Remnants of these long-gone steppe cultures were discovered in the course of the 20th century in such places as Ipatovo, Sintashta, Arkaim, and Pazyryk. 
Antiquity In the later part of the 8th century BCE, Greek merchants brought classical civilization to the trade emporiums in Tanais and Phanagoria. Gelonus was described by Herodotus as a huge (Europe's biggest) earth- and wood-fortified grad inhabited around 500 BC by Heloni and Budini. The Bosporan Kingdom was incorporated as part of the Roman province of Moesia Inferior from 63 to 68 AD, under Emperor Nero. At about the 2nd century AD Goths migrated to the Black Sea, and in the 3rd and 4th centuries AD, a semi-legendary Gothic kingdom of Oium existed in Southern Russia until it was overrun by Huns. Between the 3rd and 6th centuries AD, the Bosporan Kingdom, a Hellenistic polity which succeeded the Greek colonies, was also overwhelmed by successive waves of nomadic invasions, led by warlike tribes which would often move on to Europe, as was the case with the Huns and Turkish Avars. In the second millennium BC, the territories between the Kama and the Irtysh Rivers were the home of a Proto-Uralic-speaking population that had contacts with Proto-Indo-European speakers from the south. The woodland population is the ancestor of the modern Ugrian inhabitants of Trans-Uralia. Other researchers say that the Khanty people originated in the south Ural steppe and moved northwards into their current location about 500 AD. A Turkic people, the Khazars, ruled the lower Volga basin steppes between the Caspian and Black Seas through to the 8th century. Noted for their laws, tolerance, and cosmopolitanism, the Khazars were the main commercial link between the Baltic and the Muslim Abbasid empire centered in Baghdad. They were important allies of the Byzantine Empire, and waged a series of successful wars against the Arab Caliphates. In the 8th century, the Khazars embraced Judaism. Early history Early East Slavs Some of the ancestors of the modern Russians were the Slavic tribes, whose original home is thought by some scholars to have been the wooded areas of the Pripet Marshes. 
The Early East Slavs gradually settled Western Russia in two waves: one moving from Kiev towards present-day Suzdal and Murom and another from Polotsk towards Novgorod and Rostov. From the 7th century onwards, East Slavs constituted the bulk of the population in Western Russia and slowly but peacefully assimilated the native Finnic tribes, such as the Merya, the Muromians, and the Meshchera. Kievan Rus' (882–1283) Scandinavian Norsemen, known as Vikings in Western Europe and Varangians in the East, combined piracy and trade throughout Northern Europe. In the mid-9th century, they began to venture along the waterways from the eastern Baltic to the Black and Caspian Seas. According to the earliest Russian chronicle, a Varangian named Rurik was elected ruler (knyaz) of Novgorod in about 860, before his successors moved south and extended their authority to Kiev, which had been previously dominated by the Khazars. Oleg, Rurik's son Igor and Igor's son Sviatoslav subsequently subdued all local East Slavic tribes to Kievan rule, destroyed the Khazar Khaganate and launched several military expeditions to Byzantium and Persia. Thus, the first East Slavic state, Rus', emerged in the 9th century along the Dnieper River valley. A coordinated group of princely states with a common interest in maintaining trade along the river routes, Kievan Rus' controlled the trade route for furs, wax, and slaves between Scandinavia and the Byzantine Empire along the Volkhov and Dnieper Rivers. By the end of the 10th century, the minority Norse military aristocracy had merged with the native Slavic population, which also absorbed Greek Christian influences in the course of the multiple campaigns to loot Tsargrad, or Constantinople. One such campaign claimed the life of the foremost Slavic druzhina leader, Svyatoslav I, who was renowned for having crushed the power of the Khazars on the Volga. 
At the time, the Byzantine Empire was experiencing a major military and cultural revival; despite its later decline, its culture would have a continuous influence on the development of Russia in its formative centuries. Kievan Rus' is important for its introduction of a Slavic variant of the Eastern Orthodox religion, dramatically deepening a synthesis of Byzantine and Slavic cultures that defined Russian culture for the next thousand years. The region adopted Christianity in 988 by the official act of public baptism of Kiev inhabitants by Prince Vladimir I, who followed the private conversion of his grandmother. Some years later the first code of laws, Russkaya Pravda, was introduced by Yaroslav the Wise. From the outset, the Kievan princes followed the Byzantine example and kept the Church dependent on them, even for its revenues, so that the Russian Church and state were always closely linked. By the 11th century, particularly during the reign of Yaroslav the Wise, Kievan Rus' displayed an economy and achievements in architecture and literature superior to those that then existed in the western part of the continent. Compared with the languages of European Christendom, the Russian language was little influenced by the Greek and Latin of early Christian writings. This was because Church Slavonic was used directly in liturgy instead. A nomadic Turkic people, the Kipchaks (also known as the Cumans), replaced the earlier Pechenegs as the dominant force in the southern steppe regions neighbouring Rus' at the end of the 11th century and founded a nomadic state in the steppes along the Black Sea (Desht-e-Kipchak). Repelling their regular attacks, especially on Kiev, which was just one day's ride from the steppe, was a heavy burden for the southern areas of Rus'. The nomadic incursions caused a massive influx of Slavs to the safer, heavily forested regions of the north, particularly to the area known as Zalesye.
Kievan Rus' ultimately disintegrated as a state because of in-fighting between members of the princely family that ruled it collectively. Kiev's dominance waned, to the benefit of Vladimir-Suzdal in the north-east, Novgorod in the north, and Halych-Volhynia in the south-west. Conquest by the Mongol Golden Horde in the 13th century was the final blow. Kiev was destroyed. Halych-Volhynia would eventually be absorbed into the Polish–Lithuanian Commonwealth, while the Mongol-dominated Vladimir-Suzdal and independent Novgorod Republic, two regions on the periphery of Kiev, would establish the basis for the modern Russian nation.

Mongol invasion and vassalage (1223–1480)

The invading Mongols accelerated the fragmentation of the Rus'. In 1223, the disunited southern princes faced a Mongol raiding party at the Kalka River and were soundly defeated. In 1237–1238 the Mongols burnt down the city of Vladimir (4 February 1238) and other major cities of northeast Russia, routed the Russians at the Sit' River, and then moved west into Poland and Hungary. By then they had conquered most of the Russian principalities. Only the Novgorod Republic escaped occupation and continued to flourish in the orbit of the Hanseatic League. The impact of the Mongol invasion on the territories of Kievan Rus' was uneven. The advanced city culture was almost completely destroyed. As older centers such as Kiev and Vladimir never recovered from the devastation of the initial attack, the new cities of Moscow, Tver and Nizhny Novgorod began to compete for hegemony in the Mongol-dominated Russia. Although a Russian army defeated the Golden Horde at Kulikovo in 1380, Mongol domination of the Russian-inhabited territories, along with demands of tribute from Russian princes, continued until about 1480. The Mongols held Russia and Volga Bulgaria in sway from their western capital at Sarai, one of the largest cities of the medieval world.
The princes of southern and eastern Russia had to pay tribute to the Mongols of the Golden Horde, commonly called Tatars; but in return they received charters authorizing them to act as deputies to the khans. In general, the princes were allowed considerable freedom to rule as they wished, while the Russian Orthodox Church even experienced a spiritual revival under the guidance of Metropolitan Alexis and Sergius of Radonezh. The Mongols left their impact on the Russians in such areas as military tactics and transportation. Under Mongol occupation, Russia also developed its postal road network, census, fiscal system, and military organization. At the same time, the Prince of Novgorod, Alexander Nevsky, managed to repel the offensive of the Northern Crusades against Russia from the West. Despite this, upon becoming Grand Prince, Alexander declared himself a vassal of the Golden Horde, lacking the strength to resist its power.

Grand Duchy of Moscow (1283–1547)

Rise of Moscow

Daniil Aleksandrovich, the youngest son of Alexander Nevsky, founded the principality of Moscow (known as Muscovy in English), which first cooperated with and ultimately expelled the Tatars from Russia. Well-situated in the central river system of Russia and surrounded by protective forests and marshes, Moscow was at first only a vassal of Vladimir, but soon it absorbed its parent state. A major factor in the ascendancy of Moscow was the cooperation of its rulers with the Mongol overlords, who granted them the title of Grand Prince of Moscow and made them agents for collecting the Tatar tribute from the Russian principalities. The principality's prestige was further enhanced when it became the center of the Russian Orthodox Church. Its head, the Metropolitan, fled from Kiev to Vladimir in 1299 and a few years later established the permanent headquarters of the Church in Moscow under the original title of Kiev Metropolitan.
By the middle of the 14th century, the power of the Mongols was declining, and the Grand Princes felt able to openly oppose the Mongol yoke. In 1380, at the Battle of Kulikovo on the Don River, the Mongols were defeated, and although this hard-fought victory did not end Tatar rule of Russia, it did bring great fame to the Grand Prince Dmitry Donskoy. Moscow's leadership in Russia was now firmly established, and by the middle of the 14th century its territory had greatly expanded through purchase, war, and marriage.

Ivan III, the Great

In the 15th century, the grand princes of Moscow continued to consolidate Russian land to increase their population and wealth. The most successful practitioner of this process was Ivan III, who laid the foundations for a Russian national state. Ivan competed with his powerful northwestern rival, the Grand Duchy of Lithuania, for control over some of the semi-independent Upper Principalities in the upper Dnieper and Oka River basins. Through the defections of some princes, border skirmishes, and a long war with the Novgorod Republic, Ivan III was able to annex Novgorod and Tver. As a result, the Grand Duchy of Moscow tripled in size under his rule. During his conflict with Pskov, a monk named Filofei (Philotheus of Pskov) composed a letter to Ivan III, with the prophecy that the latter's kingdom would be the Third Rome. The Fall of Constantinople and the death of the last Greek Orthodox Christian emperor contributed to this new idea of Moscow as New Rome and the seat of Orthodox Christianity, as did Ivan's 1472 marriage to Byzantine Princess Sophia Palaiologina. Under Ivan III, the first central government bodies in Russia, the prikazy, were created. The Sudebnik of 1497, the first set of laws since the 11th century, was adopted. The double-headed eagle became the coat of arms of Russia, symbolizing Russia's claim to continuity with the power of Byzantium.
A contemporary of the Tudors and other "new monarchs" in Western Europe, Ivan proclaimed his absolute sovereignty over all Russian princes and nobles. Refusing further tribute to the Tatars, Ivan initiated a series of attacks that opened the way for the complete defeat of the declining Golden Horde, now divided into several Khanates and hordes. Ivan and his successors sought to protect the southern boundaries of their domain against attacks of the Crimean Tatars and other hordes. To achieve this aim, they sponsored the construction of the Great Abatis Belt and granted manors to nobles, who were obliged to serve in the military. The manor system provided a basis for an emerging cavalry-based army. In this way, internal consolidation accompanied outward expansion of the state. By the 16th century, the rulers of Moscow considered the entire Russian territory their collective property. Various semi-independent princes still claimed specific territories, but Ivan III forced the lesser princes to acknowledge the grand prince of Moscow and his descendants as unquestioned rulers with control over military, judicial, and foreign affairs. Gradually, the Russian ruler emerged as a powerful, autocratic monarch, a tsar. The first Russian ruler to officially crown himself "Tsar" was Ivan IV. Ivan III tripled the territory of his state, ended the dominance of the Golden Horde over the Rus', renovated the Moscow Kremlin, and laid the foundations of the Russian state. Biographer Fennell concludes that his reign was "militarily glorious and economically sound," and especially points to his territorial annexations and his centralized control over local rulers. However, Fennell, the leading British specialist on Ivan III, argues that his reign was also "a period of cultural depression and spiritual barrenness. Freedom was stamped out within the Russian lands. By his bigoted anti-Catholicism Ivan brought down the curtain between Russia and the west.
For the sake of territorial aggrandizement he deprived his country of the fruits of Western learning and civilization."

Tsardom of Russia (1547–1721)

Ivan IV, the Terrible

The development of the Tsar's autocratic powers reached a peak during the reign of Ivan IV (1547–1584), known as "Ivan the Terrible". He strengthened the position of the monarch to an unprecedented degree, as he ruthlessly subordinated the nobles to his will, exiling or executing many on the slightest provocation. Nevertheless, Ivan is often seen as a farsighted statesman who reformed Russia as he promulgated a new code of laws (Sudebnik of 1550), established the first Russian feudal representative body (Zemsky Sobor), curbed the influence of the clergy, and introduced local self-management in rural regions. The tsar also created the first regular army in Russia, the Streltsy. Although his long Livonian War for control of the Baltic coast and access to the sea trade ultimately proved a costly failure, Ivan managed to annex the Khanates of Kazan, Astrakhan, and Siberia. These conquests impeded the migration of aggressive nomadic hordes from Asia into Europe via the Volga and the Urals. Through these conquests, Russia acquired a significant Muslim Tatar population and emerged as a multiethnic and multiconfessional state. Also around this period, the mercantile Stroganov family established a firm foothold in the Urals and recruited Russian Cossacks to colonise Siberia. In the later part of his reign, Ivan divided his realm in two. In the zone known as the oprichnina, Ivan's followers carried out a series of bloody purges of the feudal aristocracy (whom he suspected of treachery after the betrayal of prince Kurbsky), culminating in the Massacre of Novgorod in 1570. This, combined with the military losses, epidemics, and poor harvests, so weakened Russia that the Crimean Tatars were able to sack central Russian regions and burn down Moscow in 1571.
Despite this, the next year the Russians defeated the Crimean Tatar army at the Battle of Molodi. In 1572, Ivan abandoned the oprichnina. At the end of Ivan IV's reign the Polish–Lithuanian and Swedish armies carried out a powerful intervention in Russia, devastating its northern and northwest regions.

Time of Troubles

The death of Ivan's childless son Feodor was followed by a period of civil wars and foreign intervention known as the Time of Troubles (1598–1613). Extremely cold summers (1601–1603) wrecked crops, which led to the Russian famine of 1601–1603 and increased the social disorganization. Boris Godunov's (Борис Годунов) reign ended in chaos: civil war combined with foreign intrusion, the devastation of many cities, and the depopulation of the rural regions. The country, rocked by internal chaos, also attracted several waves of interventions by the Polish–Lithuanian Commonwealth. During the Polish–Muscovite War (1605–1618), Polish–Lithuanian forces reached Moscow and installed the impostor False Dmitriy I in 1605, then supported False Dmitry II in 1607. The decisive moment came when a combined Russian–Swedish army was routed by the Polish forces under hetman Stanisław Żółkiewski at the Battle of Klushino. As a result of the battle, the Seven Boyars, a group of Russian nobles, deposed the tsar Vasily Shuysky and recognized the Polish prince Władysław IV Vasa as the Tsar of Russia. The Poles entered Moscow. Moscow revolted, but the riots there were brutally suppressed and the city was set on fire. The crisis provoked a patriotic national uprising against the invasion, both in 1611 and 1612. Finally, a volunteer army, led by the merchant Kuzma Minin and prince Dmitry Pozharsky, expelled the foreign forces from the capital. Russian statehood survived the "Time of Troubles" and the rule of weak or corrupt Tsars because of the strength of the government's central bureaucracy.
Government functionaries continued to serve, regardless of the ruler's legitimacy or the faction controlling the throne. However, the Time of Troubles caused the loss of much territory to the Polish–Lithuanian Commonwealth in the Russo-Polish war, as well as to the Swedish Empire in the Ingrian War.

Accession of the Romanovs and early rule

In February 1613, after the chaos and expulsion of the Poles from Moscow, a national assembly, composed of representatives from 50 cities and even some peasants, elected Michael Romanov, the young son of Patriarch Filaret, to the throne. The Romanov dynasty ruled Russia until 1917. The immediate task of the new monarch was to restore peace. Fortunately for Moscow, its major enemies, the Polish–Lithuanian Commonwealth and Sweden, were engaged in a bitter conflict with each other, which provided Russia the opportunity to make peace with Sweden in 1617 and to sign a truce with the Polish–Lithuanian Commonwealth in 1619. Recovery of lost territories began in the mid-17th century, when the Khmelnitsky Uprising (1648–57) in Ukraine against Polish rule brought about the Treaty of Pereyaslav between Russia and the Ukrainian Cossacks. In the treaty, Russia granted protection to the Cossack state in Left-bank Ukraine, formerly under Polish control. This triggered a prolonged Russo-Polish War (1654–1667), which ended with the Treaty of Andrusovo, whereby Poland accepted the loss of Left-bank Ukraine, Kiev, and Smolensk. The Russian conquest of Siberia, begun at the end of the 16th century, continued in the 17th century. By the end of the 1640s the Russians had reached the Pacific Ocean, and the Russian explorer Semyon Dezhnev discovered the strait between Asia and America. Russian expansion in the Far East faced resistance from Qing China. After the war between Russia and China, the Treaty of Nerchinsk was signed, delimiting the territories in the Amur region.
Rather than risk their estates in more civil war, the boyars cooperated with the first Romanovs, enabling them to finish the work of bureaucratic centralization. Thus, the state required service from both the old and the new nobility, primarily in the military. In return, the tsars allowed the boyars to complete the process of enserfing the peasants. In the preceding century, the state had gradually curtailed peasants' rights to move from one landlord to another. With the state now fully sanctioning serfdom, runaway peasants became state fugitives, and the power of the landlords over the peasants "attached" to their land had become almost complete. Together, the state and the nobles placed an overwhelming burden of taxation on the peasants, whose rate was 100 times greater in the mid-17th century than it had been a century earlier. Likewise, middle-class urban tradesmen and craftsmen were assessed taxes, and were forbidden to change residence. All segments of the population were subject to military levy and special taxes. Riots among peasants and citizens of Moscow at this time were endemic and included the Salt Riot (1648), Copper Riot (1662), and the Moscow Uprising (1682). By far the greatest peasant uprising in 17th-century Europe erupted in 1667. As the free settlers of South Russia, the Cossacks, reacted against the growing centralization of the state, serfs escaped from their landlords and joined the rebels. The Cossack leader Stenka Razin led his followers up the Volga River, inciting peasant uprisings and replacing local governments with Cossack rule. The tsar's army finally crushed his forces in 1670; a year later Stenka was captured and beheaded. Yet, less than half a century later, the strains of military expeditions produced another revolt in Astrakhan, which was ultimately subdued.
Russian Empire (1721–1917)

Population

Much of Russia's expansion occurred in the 17th century, culminating in the first Russian colonisation of the Pacific in the mid-17th century, the Russo-Polish War (1654–67) that incorporated left-bank Ukraine, and the Russian conquest of Siberia. Poland was divided in the 1790–1815 era, with much of the land and population going to Russia. Most of the 19th-century growth came from adding territory in Asia, south of Siberia.

Peter the Great

Peter the Great (1672–1725) brought centralized autocracy into Russia and played a major role in bringing his country into the European state system. Russia had now become the largest country in the world, stretching from the Baltic Sea to the Pacific Ocean. The vast majority of the land was unoccupied, and travel was slow. Much of its expansion had taken place in the 17th century, culminating in the first Russian settlement of the Pacific in the mid-17th century, the reconquest of Kiev, and the pacification of the Siberian tribes. However, a population of only 14 million was stretched across this vast landscape. With a short growing season, grain yields trailed behind those in the West, and potato farming was not yet widespread. As a result, the great majority of the population was occupied with agriculture. Russia remained isolated from sea trade, and its internal trade, communication, and manufacturing were seasonally dependent. Peter reformed the Russian army and created the Russian navy. Peter's first military efforts were directed against the Ottoman Turks. His aim was to establish a Russian foothold on the Black Sea by taking the town of Azov. His attention then turned to the north. Peter still lacked a secure northern seaport except at Archangel on the White Sea, whose harbor was frozen nine months a year. Access to the Baltic was blocked by Sweden, whose territory enclosed it on three sides.
Peter's ambitions for a "window to the sea" led him in 1699 to make a secret alliance with the Polish–Lithuanian Commonwealth and Denmark against Sweden, resulting in the Great Northern War. The war ended in 1721 when an exhausted Sweden sued for peace with Russia. Peter acquired four provinces situated south and east of the Gulf of Finland, thus securing his coveted access to the sea. There, in 1703, he had already founded the city that was to become Russia's new capital, Saint Petersburg, as a "window opened upon Europe" to replace Moscow, long Russia's cultural center. Russian intervention in the Commonwealth marked, with the Silent Sejm, the beginning of a 200-year domination of that region by the Russian Empire. In celebration of his conquests, Peter assumed the title of emperor, and the Russian Tsardom officially became the Russian Empire in 1721. Peter re-organized his government based on the latest Western models, molding Russia into an absolutist state. He replaced the old boyar Duma (council of nobles) with a nine-member senate, in effect a supreme council of state. The countryside was also divided into new provinces and districts. Peter told the senate that its mission was to collect taxes, and tax revenues tripled over the course of his reign. Administrative Collegia (ministries) were established in St. Petersburg to replace the old governmental departments. In 1722, Peter promulgated his famous Table of Ranks. As part of the government reform, the Orthodox Church was partially incorporated into the country's administrative structure, in effect making it a tool of the state. Peter abolished the patriarchate and replaced it with a collective body, the Holy Synod, led by a lay government official. Peter continued and intensified his predecessors' requirement of state service for all nobles. By then, the once powerful Persian Safavid Empire to the south was in steep decline.
Taking advantage, Peter launched the Russo-Persian War (1722–1723), known as "The Persian Expedition of Peter the Great" by Russian historiographers, becoming the first Russian emperor to establish Russian influence in the Caucasus and Caspian Sea region. After considerable success and the capture of many provinces and cities in the Caucasus and northern mainland Persia, the Safavids were forced to hand over the territories to Russia. However, 12 years later all the territories were ceded back to Persia, which was by then led by the charismatic military genius Nader Shah, as part of the Treaty of Resht and the Treaty of Ganja and the Russo-Persian alliance against the Ottoman Empire, their common neighbouring rival. Peter the Great died in 1725, leaving an unsettled succession, but Russia had become a great power by the end of his reign. Peter I was succeeded by his second wife, Catherine I (1725–1727), who was merely a figurehead for a powerful group of high officials, then by his minor grandson, Peter II (1727–1730), then by his niece, Anna (1730–1740), daughter of Tsar Ivan V. The heir to Anna was soon deposed in a coup, and Elizabeth, daughter of Peter I, ruled from 1741 to 1762. During her reign, Russia took part in the Seven Years' War.

Catherine the Great

Nearly 40 years passed before a comparably ambitious ruler appeared on the Russian throne. Catherine II, "the Great" (r. 1762–1796), was a German princess who married the German heir to the Russian crown. His position was weak, and Catherine overthrew him in a coup in 1762, becoming queen regnant. Catherine enthusiastically supported the ideals of the Enlightenment, thus earning the status of an enlightened despot. She patronized the arts, science, and learning. She contributed to the resurgence of the Russian nobility that began after the death of Peter the Great.
Catherine promulgated the Charter to the Gentry, reaffirming the rights and freedoms of the Russian nobility and abolishing mandatory state service. She seized control of all the church lands, drastically reduced the size of the monasteries, and put the surviving clergy on a tight budget. Catherine spent heavily to promote an expansive foreign policy. She extended Russian political control over the Polish–Lithuanian Commonwealth with actions including support of the Targowica Confederation. The cost of her campaigns, plus the oppressive social system that required serfs to spend almost all their time laboring on the land of their lords, provoked a major peasant uprising in 1773. Inspired by a Cossack named Pugachev, with the emphatic cry of "Hang all the landlords!", the rebels threatened to take Moscow until Catherine crushed the rebellion. Like the other enlightened despots of Europe, Catherine made certain of her own power and formed an alliance with the nobility. Catherine successfully waged two wars (1768–74, 1787–92) against the decaying Ottoman Empire and advanced Russia's southern boundary to the Black Sea. Russia annexed Crimea in 1783 and created the Black Sea fleet. Then, by allying with the rulers of Austria and Prussia, she incorporated during the Partitions of Poland territories of the Polish–Lithuanian Commonwealth in which, through the following century of Russian rule, a non-Catholic, mainly Orthodox population prevailed, pushing the Russian frontier westward into Central Europe. In accordance with Russia's treaty with the Georgians to protect them against any new invasion by their Persian suzerains, and pursuing her own further political aspirations, Catherine waged a new war against Persia in 1796 after the Persians had again invaded Georgia and established rule over it about a year prior, and had expelled the newly established Russian garrisons in the Caucasus.
In 1798–99, Russian troops participated in the anti-French coalition; the troops under the command of Alexander Suvorov defeated the French in Northern Italy.

Ruling the Empire (1725–1825)

Russian emperors of the 18th century professed the ideas of Enlightened absolutism. Innovative tsars such as Peter the Great and Catherine the Great brought in Western experts, scientists, philosophers, and engineers. However, Westernization and modernization affected only the upper classes of Russian society, while the bulk of the population, consisting of peasants, remained in a state of serfdom. Powerful Russians resented the privileged positions of these Westerners and the alien ideas they brought. The backlash was especially severe after the Napoleonic Wars. It produced a powerful anti-western campaign that "led to a wholesale purge of Western specialists and their Russian followers in universities, schools, and government service." The mid-18th century was marked by the emergence of higher education in Russia. The first two major universities, Saint Petersburg State University and Moscow State University, were opened in the two capitals. Russian exploration of Siberia and the Far East continued. The Great Northern Expedition laid the foundation for the development of Alaska by the Russians. By the end of the 18th century, Alaska became a Russian colony (Russian America). In the early 19th century, Alaska was used as a base for the First Russian circumnavigation. In 1819–21, Russian sailors discovered Antarctica during an Antarctic expedition. Russia was in a continuous state of financial crisis. While revenue rose from 9 million rubles in 1724 to 40 million in 1794, expenses grew more rapidly, reaching 49 million in 1794. The budget allocated 46% to the military, 20% to government economic activities, 12% to administration, and 9% to the Imperial Court in St. Petersburg. The deficit required borrowing, primarily from Amsterdam; 5% of the budget was allocated to debt payments.
Paper money was issued to pay for expensive wars, thus causing inflation. For its spending, Russia obtained a large and glorious army, a very large and complex bureaucracy, and a splendid court that rivaled Paris and London. However, the government was living far beyond its means, and 18th-century Russia remained "a poor, backward, overwhelmingly agricultural, and illiterate country."

Alexander I and victory over Napoleon

By the time of her death in 1796, Catherine's expansionist policy had made Russia a major European power. Alexander I continued this policy, wresting Finland from the weakened kingdom of Sweden in 1809 and Bessarabia from the Ottomans in 1812. His key advisor was Adam Jerzy Czartoryski. After Russian armies liberated allied Georgia from Persian occupation in 1802, they clashed with Persia over control and consolidation of Georgia, as well as over the Iranian territories that comprise modern-day Azerbaijan and Dagestan. They also became involved in the Caucasian War against the Caucasian Imamate. In 1813, the war with Persia concluded with a Russian victory, forcing Qajar Iran to cede swaths of its territories in the Caucasus to Russia, which drastically increased its territory in the region. To the south-west, Russia tried to expand at the expense of the Ottoman Empire, using Georgia as its base for the Caucasus and Anatolian front. In European policy, Alexander I switched Russia back and forth four times in 1804–1812 from neutral peacemaker to anti-Napoleon to an ally of Napoleon, winding up in 1812 as Napoleon's enemy. In 1805, he joined Britain in the War of the Third Coalition against Napoleon, but after the massive defeat at the Battle of Austerlitz he switched and formed an alliance with Napoleon by the Treaty of Tilsit (1807) and joined Napoleon's Continental System. He fought a small-scale naval war against Britain, 1807–12. He and Napoleon could never agree, especially about Poland, and the alliance collapsed by 1810.
Russia's economy had been hurt by Napoleon's Continental System, which cut off trade with Britain. As Esdaile notes, "Implicit in the idea of a Russian Poland was, of course, a war against Napoleon." Schroeder says Poland was the root cause of the conflict but Russia's refusal to support the Continental System was also a factor. The invasion of Russia was a catastrophe for Napoleon and his 450,000 invasion troops. One major battle was fought at Borodino; casualties were very high, but it was indecisive, and Napoleon was unable to engage and defeat the Russian armies. He tried to force the Tsar to terms by capturing Moscow at the onset of winter, even though he had lost most of his men. Instead, the Russians retreated, burning crops and food supplies in a scorched earth policy that multiplied Napoleon's logistic problems. Unprepared for winter warfare, 85%–90% of Napoleon's soldiers died from disease, cold, starvation or ambush by peasant guerrillas. As Napoleon's forces retreated, Russian troops pursued them into Central and Western Europe, defeated Napoleon's army in the Battle of the Nations and finally captured Paris. Of a total population of around 43 million people, Russia lost about 1.5 million in the year 1812; of these about 250,000 to 300,000 were soldiers and the rest peasants and serfs. After the final defeat of Napoleon in 1815, Alexander became known as the 'savior of Europe'. He presided over the redrawing of the map of Europe at the Congress of Vienna (1814–15), which made him the king of Congress Poland. He formed the Holy Alliance with Austria and Prussia, to suppress revolutionary movements in Europe that he saw as immoral threats to legitimate Christian monarchs. He helped Austria's Klemens von Metternich in suppressing all national and liberal movements. Although the Russian Empire would play a leading role on behalf of conservatism as late as 1848, its retention of serfdom precluded economic progress of any significant degree. 
As West European economic growth accelerated during the Industrial Revolution, which had begun in the second half of the 18th century, and as sea trade and colonialism expanded, Russia began to lag ever farther behind, undermining its ability to field strong armies.

Nicholas I and the Decembrist Revolt

Russia's great power status obscured the inefficiency of its government, the isolation of its people, and its economic backwardness. Following the defeat of Napoleon, Alexander I was willing to discuss constitutional reforms, and though a few were introduced, no thoroughgoing changes were attempted. The tsar was succeeded by his younger brother, Nicholas I (1825–1855), who at the onset of his reign was confronted with an uprising. The background of this revolt lay in the Napoleonic Wars, when a number of well-educated Russian officers traveled in Europe in the course of the military campaigns, where their exposure to the liberalism of Western Europe encouraged them to seek change on their return to autocratic Russia. The result was the Decembrist Revolt (December 1825), the work of a small circle of liberal nobles and army officers who wanted to install Nicholas' brother as a constitutional monarch. But the revolt was easily crushed, leading Nicholas to turn away from liberal reforms and champion the reactionary doctrine "Orthodoxy, Autocracy, and Nationality". In 1826–1828, Russia fought another war against Persia. Russia lost almost all of its recently consolidated territories during the first year, but regained them and won the war on highly favourable terms. At the 1828 Treaty of Turkmenchay, Russia gained Armenia, Nakhchivan, Nagorno-Karabakh, Azerbaijan, and Iğdır. In the 1828–1829 Russo-Turkish War, Russia invaded northeastern Anatolia and occupied the strategic Ottoman towns of Erzurum and Gumushane and, posing as protector and saviour of the Greek Orthodox population, received extensive support from the region's Pontic Greeks.
After a brief occupation, the Russian imperial army withdrew into Georgia. By the 1830s, Russia had conquered all Persian territories and major Ottoman territories in the Caucasus. In 1831, Nicholas crushed the November Uprising in Poland. The Russian autocracy gave Polish artisans and gentry reason to rebel again in 1863 by assailing the core national values of language, religion, and culture. The resulting January Uprising was a massive Polish revolt, which was also crushed. France, Britain and Austria tried to intervene in the crisis but were unable to do so. The Russian patriotic press used the Polish uprising to unify the Russian nation, claiming it was Russia's God-given mission to save Poland and the world. Poland was punished by losing its distinctive political and judicial rights, with Russification imposed on its schools and courts.

Russian Army

Tsar Nicholas I (reigned 1825–1855) lavished attention on his army. In a nation of 60–70 million people, it included a million men. The army had outdated equipment and tactics, but the tsar, who dressed like a soldier and surrounded himself with officers, gloried in the victory over Napoleon in 1812 and took great pride in the army's smartness on parade. The cavalry horses, for example, were trained only in parade formations and did poorly in battle. The glitter and braid masked weaknesses that he did not see. He put generals in charge of most of his civilian agencies regardless of their qualifications. The army became a vehicle of upward social mobility for noble youths from non-Russian areas such as Poland, the Baltic, Finland and Georgia. On the other hand, many miscreants, petty criminals and undesirables were punished by local officials by being enlisted for life in the army. Village oligarchies controlled employment, conscription for the army, and local patronage; they blocked reforms and sent the most unpromising peasant youths to the army.
The conscription system was unpopular with the people, as was the practice of forcing peasants to house soldiers for six months of the year. Finally, the Crimean War at the end of his reign showed the world what no one had previously realized: Russia was militarily weak, technologically backward, and administratively incompetent. Despite its ambitions toward the south and the Ottoman Empire, Russia had not built its railroad network in that direction, and communications were poor. The bureaucracy was riddled with graft, corruption and inefficiency, and was unprepared for war. The navy was weak and technologically backward; the army, although very large, was good only for parades: it suffered from colonels who pocketed their men's pay and from poor morale, and was even more out of touch with the latest technology as developed by Britain and France. The nation's leaders realized that reforms were urgently needed.

Russian society in the first half of the 19th century

The first quarter of the 19th century was the period in which Russian literature became an independent and strikingly original phenomenon; it was also when the norms of the Russian literary language were formed. The reasons for such rapid development of Russian literature during this period lie both in intra-literary processes and in the socio-political life of Russian society. As Western Europe modernized, after 1840 the issue for Russia became one of direction. Westernizers favored imitating Western Europe, while others renounced the West and called for a return to the traditions of the past. The latter path was championed by Slavophiles, who heaped scorn on the "decadent" West. The Slavophiles were opponents of bureaucracy and preferred the collectivism of the medieval Russian mir, or village community, to the individualism of the West. The Westernizers formed an intellectual movement that deplored the backwardness of Russian culture and looked to Western Europe for intellectual leadership.
They were opposed by Slavophiles, who denounced the West as too materialistic and instead promoted the spiritual depth of Russian traditionalism. A forerunner of the movement was Pyotr Chaadayev (1794–1856). He exposed the cultural isolation of Russia, from the perspective of Western Europe, in his Philosophical Letters of 1831. He cast doubt on the greatness of the Russian past and ridiculed Orthodoxy for failing to provide a sound spiritual basis for the Russian mind. He called on Russia to emulate Western Europe, especially in rational and logical thought, its progressive spirit, its leadership in science, and indeed its leadership on the path to freedom. Vissarion Belinsky (1811–1848) and Alexander Herzen (1812–1870) were prominent Westernizers.

The Crimean War

Since the war against Napoleon, Russia had become deeply involved in the affairs of Europe as part of the Holy Alliance, which had been formed to serve as the "policeman of Europe". Maintaining this role, however, required large armies. Prussia and Austria, the other members of the alliance, lacked large armies and needed Russia to supply the required numbers, which fit the philosophy of Nicholas I. When the Revolutions of 1848 swept Europe, however, Russia was quiet. The Tsar sent his army into Hungary in 1849 at the request of the Austrian Empire and broke the revolt there, while preventing its spread to Russian Poland. The Tsar cracked down on any signs of internal unrest. Russia expected that, in exchange for supplying the troops to police Europe, it should have a free hand in dealing with the decaying Ottoman Empire—the "sick man of Europe". In 1853, Russia invaded Ottoman-controlled areas, leading to the Crimean War. Britain and France came to the rescue of the Ottomans. After a grueling war fought largely in Crimea, with very high death rates from disease, the allies won.
Historian Orlando Figes points to the long-term damage Russia suffered: The demilitarization of the Black Sea was a major blow to Russia, which was no longer able to protect its vulnerable southern coastal frontier against the British or any other fleet.... The destruction of the Russian Black Sea Fleet, Sevastopol and other naval docks was a humiliation. No compulsory disarmament had ever been imposed on a great power previously.... The Allies did not really think that they were dealing with a European power in Russia. They regarded Russia as a semi-Asiatic state.... In Russia itself, the Crimean defeat discredited the armed services and highlighted the need to modernize the country's defences, not just in the strictly military sense, but also through the building of railways, industrialization, sound finances and so on.... The image many Russians had built up of their country – the biggest, richest and most powerful in the world – had suddenly been shattered. Russia's backwardness had been exposed.... The Crimean disaster had exposed the shortcomings of every institution in Russia – not just the corruption and incompetence of the military command, the technological backwardness of the army and navy, or the inadequate roads and lack of railways that accounted for the chronic problems of supply, but the poor condition and illiteracy of the serfs who made up the armed forces, the inability of the serf economy to sustain a state of war against industrial powers, and the failures of autocracy itself.

Alexander II and the abolition of serfdom

When Alexander II came to the throne in 1855, the demand for reform was widespread. The most pressing problem confronting the government was serfdom. In 1859, there were 23 million serfs out of a total population of 67 million. In anticipation of civil unrest that could ultimately foment a revolution, Alexander II chose to preemptively abolish serfdom with the emancipation reform of 1861.
Emancipation brought a supply of free labor to the cities, stimulated industry, and the middle class grew in number and influence. The freed peasants had to buy the land allotted to them from the landowners, with state assistance. The government issued special bonds to the landowners for the land they had lost, and collected a special tax from the peasants, called redemption payments, at a rate of 5% of the total cost of the allotted land yearly. All the land turned over to the peasants was owned collectively by the mir, the village community, which divided the land among the peasants and supervised the various holdings. Alexander was responsible for numerous reforms besides abolishing serfdom. He reorganized the judicial system, setting up elected local judges, abolishing capital punishment, promoting local self-government through the zemstvo system, imposing universal military service, ending some of the privileges of the nobility, and promoting the universities. In foreign policy, he sold Alaska to the United States in 1867, fearing the remote colony would fall into British hands if there was another war. He modernized the military command system. He sought peace, and moved away from France when Napoleon III fell. He joined with Germany and Austria in the League of the Three Emperors, which stabilized the European situation. The Russian Empire expanded in Siberia and the Caucasus and made gains at the expense of China. Faced with an uprising in Poland in 1863, he stripped that land of its separate constitution and incorporated it directly into Russia. To counter the rise of revolutionary and anarchist movements, he sent thousands of dissidents into exile in Siberia, and he was proposing additional parliamentary reforms when he was assassinated in 1881. In the late 1870s Russia and the Ottoman Empire again clashed in the Balkans.
The Russo-Turkish War was popular among the Russian people, who supported the independence of their fellow Orthodox Slavs, the Serbs and the Bulgarians. Russia's victory in this war allowed a number of Balkan states to gain independence: Romania, Serbia and Montenegro. In addition, Bulgaria became de facto independent after 500 years of Turkish rule. However, the war increased tension with Austria-Hungary, which also had ambitions in the region. The Tsar was disappointed by the results of the Congress of Berlin in 1878, but abided by the agreement. During this period Russia expanded its empire into Central Asia, conquering the khanates of Kokand, Bukhara, and Khiva, as well as the Trans-Caspian region. Russia's advance in Asia led to British fears that the Russians planned aggression against British India. Before 1815 London worried that Napoleon would combine with Russia to do this in one mighty campaign; after 1815 it feared Russia alone would do it step by step. Rudyard Kipling popularized the term "the Great Game" for this rivalry, and it caught on. Historians report, however, that the Russians never had any intention of moving against India.

Russian society in the second half of the 19th century

In the 1860s, a movement known as Nihilism developed in Russia. The term was originally coined by Ivan Turgenev in his 1862 novel Fathers and Sons; Nihilists favoured the destruction of human institutions and laws, based on the assumption that they are artificial and corrupt. At its core, Russian nihilism was characterized by the belief that the world lacks comprehensible meaning, objective truth, or value. For some time, many Russian liberals had been dissatisfied by what they regarded as the empty discussions of the intelligentsia. The Nihilists questioned all old values and shocked the Russian establishment. They became involved in the cause of reform and became a major political force.
Their path was facilitated by the previous actions of the Decembrists, who revolted in 1825, and by the financial and political hardship caused by the Crimean War, which led many Russians to lose faith in political institutions. Russian nihilists created the manifesto Catechism of a Revolutionary. Sergei Nechaev, one of the leaders of the Russian nihilists, was the basis for a character in Dostoevsky's novel Demons. After the Nihilists failed to convert the aristocracy and landed gentry to the cause of reform, they turned to the peasants. Their campaign became known as the Narodnik ("Populist") movement. It was based on the belief that the common people had the wisdom and peaceful ability to lead the nation. As the Narodnik movement gained momentum, the government moved to extirpate it. In response to the growing reaction of the government, a radical branch of the Narodniks advocated and practiced terrorism. One after another, prominent officials were shot or killed by bombs. This represented the ascendancy of anarchism in Russia as a powerful revolutionary force. Finally, after several attempts, Alexander II was assassinated by anarchists in 1881, on the very day he had approved a proposal to call a representative assembly to consider new reforms, in addition to the abolition of serfdom, designed to ameliorate revolutionary demands. The end of the 19th century and the beginning of the 20th century are known as the Silver Age of Russian culture. The Silver Age was dominated by the artistic movements of Russian Symbolism, Acmeism, and Russian Futurism; many poetic schools flourished, including the Mystical Anarchism tendency within the Symbolist movement. The Russian avant-garde was a large, influential wave of modern art that flourished in the Russian Empire and the Soviet Union, approximately from 1890 to 1930—although some have placed its beginning as early as 1850 and its end as late as 1960. The term covers many separate art movements of the era in painting, literature, music and architecture.
Autocracy and reaction under Alexander III

Unlike his father, the new tsar Alexander III (1881–1894) was throughout his reign a staunch reactionary who revived the maxim of "Orthodoxy, Autocracy, and National Character". A committed Slavophile, Alexander III believed that Russia could be saved from chaos only by shutting itself off from the subversive influences of Western Europe. In his reign Russia concluded the union with republican France to contain the growing power of Germany, completed the conquest of Central Asia, and exacted important territorial and commercial concessions from China. The tsar's most influential adviser was Konstantin Pobedonostsev, tutor to Alexander III and his son Nicholas, and procurator of the Holy Synod from 1880 to 1895. He taught his royal pupils to fear freedom of speech and press and to hate democracy, constitutions, and the parliamentary system. Under Pobedonostsev, revolutionaries were hunted down and a policy of Russification was carried out throughout the empire.

Nicholas II and the new revolutionary movement

Alexander was succeeded by his son Nicholas II (1894–1918). The Industrial Revolution, which began to exert a significant influence in Russia, was meanwhile creating forces that would finally overthrow the tsar. Politically, these opposition forces organized into three competing parties. The liberal elements among the industrial capitalists and nobility, who wanted peaceful social reform and a constitutional monarchy, founded the Constitutional Democratic Party, or Kadets, in 1905. Followers of the Narodnik tradition established the Socialist-Revolutionary Party, or Esers, in 1901, advocating the distribution of land among the peasants who worked it. A third radical group founded the Russian Social Democratic Labour Party, or RSDLP, in 1898; this party was the primary exponent of Marxism in Russia.
Gathering their support from the radical intellectuals and the urban working class, they advocated complete social, economic and political revolution. In 1903, the RSDLP split into two wings: the radical Bolsheviks, led by Vladimir Lenin, and the relatively moderate Mensheviks, led by Yuli Martov. The Mensheviks believed that Russian socialism would grow gradually and peacefully and that the tsar's regime should be succeeded by a democratic republic in which the socialists would cooperate with the liberal bourgeois parties. The Bolsheviks advocated the formation of a small elite of professional revolutionaries, subject to strong party discipline, to act as the vanguard of the proletariat in order to seize power by force. At the beginning of the 20th century, Russia continued its expansion in the Far East; Chinese Manchuria was in the zone of Russian interests. Russia took an active part in the intervention of the great powers in China to suppress the Boxer Rebellion. During this intervention Russia occupied Manchuria, which caused a clash of interests with Japan. In 1904, the Russo-Japanese War began, which ended extremely unfavourably for Russia.

Revolution of 1905

The disastrous performance of the Russian armed forces in the Russo-Japanese War was a major blow to the Russian state and increased the potential for unrest. In January 1905, an incident known as "Bloody Sunday" occurred when Father Gapon led an enormous crowd to the Winter Palace in Saint Petersburg to present a petition to the tsar. When the procession reached the palace, Cossacks opened fire on the crowd, killing hundreds. The Russian masses were so aroused by the massacre that a general strike was declared demanding a democratic republic. This marked the beginning of the Russian Revolution of 1905. Soviets (councils of workers) appeared in most cities to direct revolutionary activity.
In October 1905, Nicholas reluctantly issued the October Manifesto, which conceded the creation of a national Duma (legislature) to be called without delay. The right to vote was extended, and no law was to go into force without confirmation by the Duma. The moderate groups were satisfied, but the socialists rejected the concessions as insufficient and tried to organize new strikes. By the end of 1905, there was disunity among the reformers, and the tsar's position was strengthened for the time being.

World War I

On 28 June 1914, Gavrilo Princip assassinated Archduke Franz Ferdinand of Austria-Hungary. In response, on 23 July, Austria-Hungary issued an ultimatum to Serbia, which it considered a Russian client-state. Russia had no treaty obligation to Serbia, and in the long term Russia was militarily gaining on Germany and Austria-Hungary, and so had an incentive to wait. Most Russian leaders wanted to avoid war. But in this crisis they had the support of France, and believed that supporting Serbia was important for Russia's credibility and for its goal of a leadership role in the Balkans. Tsar Nicholas II mobilised Russian forces on 30 July 1914 to defend Serbia from Austria-Hungary. Christopher Clark states: "The Russian general mobilisation [of 30 July] was one of the most momentous decisions of the July crisis. This was the first of the general mobilisations. It came at the moment when the German government had not yet even declared the State of Impending War". Germany responded with its own mobilisation and a declaration of war on 1 August 1914. At the opening of hostilities, the Russians took the offensive against both Germany and Austria-Hungary. The very large but poorly equipped Russian army fought tenaciously and at times desperately, despite its lack of organization and very weak logistics. Casualties were enormous. In the 1914 campaign, Russian forces defeated Austro-Hungarian forces in the Battle of Galicia.
The success of the Russian army forced the German army to withdraw troops from the Western Front to the Russian front. However, the shell famine led to the defeat of the Russian forces in Poland by the Central Powers in the 1915 campaign, which led to a major retreat of the Russian army. In 1916, the Russians again dealt a powerful blow to the Austrians during the Brusilov Offensive. By 1915, morale was bad and getting worse. Many recruits were sent to the front unarmed and told to pick up whatever weapons they could from the battlefield. Nevertheless, the Russian army fought on and tied down large numbers of Germans and Austrians. When the home front showed an occasional surge of patriotism, the tsar and his entourage failed to exploit it for military benefit. The Russian army neglected to rally the ethnic and religious minorities that were hostile to Austria, such as the Poles. The tsar refused to cooperate with the national legislature, the Duma, and listened less to experts than to his wife, who was in thrall to her chief advisor, the holy man Grigori Rasputin. More than two million refugees fled. Repeated military failures and bureaucratic ineptitude soon turned large segments of the population against the government. The German and Ottoman fleets prevented Russia from importing urgently needed supplies through the Baltic and Black seas. By the middle of 1915 the impact of the war was demoralizing. Food and fuel were in short supply, casualties kept mounting, and inflation was rising. Strikes increased among low-paid factory workers, and the peasants, who wanted land reforms, were restless. Meanwhile, elite distrust of the regime was deepened by reports that Rasputin was gaining influence; his assassination in late 1916 ended the scandal but did not restore the autocracy's lost prestige.

Russian Civil War (1917–1922)

Russian Revolution

In late February (3 March 1917), a strike occurred in a factory in the capital Petrograd (the new name for Saint Petersburg).
On 23 February (8 March) 1917, thousands of female textile workers walked out of their factories protesting the lack of food and calling on other workers to join them. Within days, nearly all the workers in the city were idle, and street fighting broke out. The tsar ordered the Duma to disband, ordered strikers to return to work, and ordered troops to shoot at demonstrators in the streets. His orders triggered the February Revolution, especially when soldiers openly sided with the strikers. The tsar and the aristocracy fell on 2 March, as Nicholas II abdicated. To fill the vacuum of authority, the Duma declared a Provisional Government, headed by Prince Lvov, which was collectively known as the Russian Republic. Meanwhile, the socialists in Petrograd organized elections among workers and soldiers to form a soviet (council) of workers' and soldiers' deputies, as an organ of popular power that could pressure the "bourgeois" Provisional Government. In July, following a series of crises that undermined their authority with the public, the head of the Provisional Government resigned and was succeeded by Alexander Kerensky, who was more progressive than his predecessor but not radical enough for the Bolsheviks or many Russians discontented with the deepening economic crisis and the continuation of the war. While Kerensky's government marked time, the socialist-led soviet in Petrograd joined with soviets that formed throughout the country to create a national movement. The German government provided over 40 million gold marks to subsidize Bolshevik publications and activities subversive of the tsarist government, especially focusing on disgruntled soldiers and workers. In April 1917 Germany provided a special sealed train to carry Vladimir Lenin back to Russia from his exile in Switzerland. 
After many behind-the-scenes maneuvers, the soviets seized control of the government in November 1917 and drove Kerensky and his moderate Provisional Government into exile, in the events that would become known as the October Revolution. When the national Constituent Assembly (elected in December 1917) refused to become a rubber stamp of the Bolsheviks, it was dissolved by Lenin's troops and all vestiges of democracy were removed. With the moderate opposition removed, Lenin was able to free his regime from the war problem by the harsh Treaty of Brest-Litovsk (1918) with Germany. Russia lost much of her western borderlands. However, when Germany was defeated, the Soviet government repudiated the treaty.

Russian Civil War

The Bolshevik grip on power was by no means secure, and a lengthy struggle broke out between the new regime and its opponents, which included the Socialist Revolutionaries, the anti-Bolshevik White movement, and large numbers of peasants. At the same time the Allied powers sent several expeditionary armies to support the anti-Communist forces in an attempt to force Russia to rejoin the world war. The Bolsheviks fought against both these forces and national independence movements in the former Russian Empire. By 1921, they had defeated their internal enemies and brought most of the newly independent states under their control, with the exception of Finland, the Baltic states, the Moldavian Democratic Republic (which joined Romania), and Poland (with whom they had fought the Polish–Soviet War). Finland also annexed the Pechenga region of Russia's Kola Peninsula; Soviet Russia and the allied Soviet republics ceded parts of their territory to Estonia (Petseri County and Estonian Ingria), Latvia (Pytalovo), and Turkey (Kars). Poland incorporated the contested territories of Western Belarus and Western Ukraine, former parts of the Russian Empire (except Galicia), east of the Curzon Line.
Both sides regularly committed brutal atrocities against civilians. During the White Terror of the civil war era, for example, Petlyura's and Denikin's forces massacred 100,000 to 150,000 Jews in Ukraine and southern Russia. Hundreds of thousands of Jews were left homeless, and tens of thousands became victims of serious illness. Estimates for the total number of people killed during the Red Terror carried out by the Bolsheviks vary widely. One source asserts that the total number of victims of repression and pacification campaigns could be 1.3 million, whereas others give estimates ranging from 10,000 in the initial period of repression to 50,000 to 140,000, and an estimate of 28,000 executions per year from December 1917 to February 1922. The most reliable estimates for the total number of killings put the number at about 100,000, whereas others suggest a figure of 200,000. The Russian economy was devastated by the war, with factories and bridges destroyed, cattle and raw materials pillaged, mines flooded and machines damaged. The droughts of 1920 and 1921, as well as the 1921 famine, worsened the disaster still further. Disease reached pandemic proportions, with 3,000,000 dying of typhus alone in 1920. Millions more died of widespread starvation. By 1922 there were at least 7,000,000 street children in Russia as a result of nearly ten years of devastation from the Great War and the civil war. Another one to two million people, known as the White émigrés, fled Russia; many were evacuated from Crimea in 1920, some through the Far East, and others west into the newly independent Baltic countries. These émigrés included a large percentage of the educated and skilled population of Russia.

Soviet Union (1922–1991)

Creation of the Soviet Union

The history of Russia between 1922 and 1991 is essentially the history of the Union of Soviet Socialist Republics, or Soviet Union.
This ideologically based union, established in December 1922 by the leaders of the Russian Communist Party, was roughly coterminous with Russia before the Treaty of Brest-Litovsk. At that time, the new nation included four constituent republics: the Russian SFSR, the Ukrainian SSR, the Belarusian SSR, and the Transcaucasian SFSR. The constitution, adopted in 1924, established a federal system of government based on a succession of soviets set up in villages, factories, and cities in larger regions. This pyramid of soviets in each constituent republic culminated in the All-Union Congress of Soviets. However, while it appeared that the congress exercised sovereign power, this body was actually governed by the Communist Party, which in turn was controlled by the Politburo from Moscow, the capital of the Soviet Union, just as it had been the capital under the tsars before Peter the Great.

War Communism and the New Economic Policy

The period from the consolidation of the Bolshevik Revolution in 1917 until 1921 is known as the period of war communism. Land, all industry, and small businesses were nationalized, and the money economy was restricted. Strong opposition soon developed. The peasants wanted cash payments for their products and resented having to surrender their surplus grain to the government as part of its civil war policies. Confronted with peasant opposition, Lenin began a strategic retreat from war communism known as the New Economic Policy (NEP). The peasants were freed from wholesale levies of grain and allowed to sell their surplus produce on the open market. Commerce was stimulated by permitting private retail trading. The state continued to be responsible for banking, transportation, heavy industry, and public utilities. Although the left opposition among the Communists criticized the rich peasants, or kulaks, who benefited from the NEP, the program proved highly beneficial and the economy revived.
The NEP would later come under increasing opposition from within the party following Lenin's death in early 1924.

Changes to Russian society

As the Russian Empire had included not only the region of Russia but also today's territories of Ukraine, Belarus, Poland, Lithuania, Estonia, Latvia, Moldavia and the Caucasian and Central Asian countries, it is possible to examine the process of firm formation in all those regions. One of the main determinants of firm creation in a given region of the Russian Empire may have been urban demand for goods and the supply of industrial and organizational skill. While the Russian economy was being transformed, the social life of the people underwent equally drastic changes. The Family Code of 1918 granted women equal status to men and permitted a couple to take either the husband's or the wife's name. Divorce no longer required court procedure, and, to make women completely free of the responsibilities of childbearing, abortion was made legal as early as 1920. As a side effect, the emancipation of women enlarged the labor market. Girls were encouraged to secure an education and pursue a career in the factory or the office. Communal nurseries were set up for the care of small children, and efforts were made to shift the center of people's social life from the home to educational and recreational groups, the Soviet clubs. The Soviet government pursued a policy of eliminating illiteracy, known as Likbez. After industrialization, massive urbanization began in the USSR. In the field of national policy, the 1920s saw the policy of Korenizatsiya carried out. From the mid-1930s, however, the Stalinist government returned to the tsarist policy of Russification of the borderlands. In particular, the written languages of all the nations of the USSR were converted to the Cyrillic alphabet (Cyrillization).
Industrialization and collectivization

The years from 1929 to 1939 comprised a tumultuous decade in Soviet history—a period of massive industrialization and internal struggle, as Joseph Stalin established near-total control over Soviet society, wielding virtually unrestrained power. Following Lenin's death, Stalin wrestled with rival factions in the Politburo, especially Leon Trotsky's, for control of the Soviet Union. By 1928, with the Trotskyists either exiled or rendered powerless, Stalin was ready to put a radical programme of industrialisation into action. In 1929, Stalin proposed the first five-year plan. Abolishing the NEP, it was the first of a number of plans aimed at the swift accumulation of capital resources through the buildup of heavy industry, the collectivization of agriculture, and the restricted manufacture of consumer goods. For the first time in history a government controlled all economic activity. The rapid growth of production capacity and of the volume of heavy-industrial output (a fourfold increase) was of great importance for ensuring economic independence from Western countries and strengthening the country's defense capability. At this time, the Soviet Union made the transition from an agrarian country to an industrial one. As part of the plan, the government took control of agriculture through the state and collective farms (kolkhozes). By a decree of February 1930, about one million individual peasants (kulaks) were forced off their land. Many peasants strongly opposed regimentation by the state, often slaughtering their herds when faced with the loss of their land. In some regions they revolted, and countless peasants deemed "kulaks" by the authorities were executed. The combination of bad weather, the deficiencies of the hastily established collective farms, and the massive confiscation of grain precipitated a serious famine, and several million peasants died of starvation, mostly in Ukraine, Kazakhstan and parts of southwestern Russia.
The deteriorating conditions in the countryside drove millions of desperate peasants to the rapidly growing cities, fueling industrialization and vastly increasing Russia's urban population in the space of just a few years. The plans produced remarkable results in areas aside from agriculture. Russia, by many measures the poorest nation in Europe at the time of the Bolshevik Revolution, now industrialized at a phenomenal rate, far surpassing Germany's pace of industrialization in the 19th century and Japan's earlier in the 20th century.

Stalinist repression

The NKVD rounded up tens of thousands of Soviet citizens to face arrest, deportation, or execution. Of the six original members of the 1920 Politburo who survived Lenin, all were purged by Stalin. Old Bolsheviks who had been loyal comrades of Lenin, high officers in the Red Army, and directors of industry were liquidated in the Great Purges. Purges in the other Soviet republics also helped centralize control in the USSR. Stalin destroyed the opposition within the party, consisting of the old Bolsheviks, during the Moscow trials. The NKVD under the leadership of Stalin's commissar Nikolai Yezhov carried out a series of massive repressive operations against the kulaks and various national minorities in the USSR. During the Great Purges of 1937–38, about 700,000 people were executed. Severe penalties were introduced, and many citizens were prosecuted for fictitious crimes of sabotage and espionage. The labor provided by convicts working in the labor camps of the Gulag system became an important component of the industrialization effort, especially in Siberia. An estimated 18 million people passed through the Gulag system, and perhaps another 15 million had experience of some other form of forced labor. After the partition of Poland in 1939, the NKVD executed some 20,000 captured Polish officers in the Katyn massacre.
In the late 1930s and the first half of the 1940s, the Stalinist government carried out massive deportations of various nationalities, and a number of ethnic groups were deported from their areas of settlement to Central Asia.

The Soviet Union on the international stage

The Soviet Union viewed the 1933 accession to power of Hitler's fervently anti-Communist government in Germany with great alarm from the outset, especially since Hitler proclaimed the Drang nach Osten as one of the major objectives of his strategy of Lebensraum. The Soviets supported the Republicans of Spain, who struggled against fascist German and Italian intervention in the Spanish Civil War. In 1938–1939, immediately prior to WWII, the Soviet Union successfully fought Imperial Japan in the Soviet–Japanese border conflicts in the Russian Far East, which led to Soviet–Japanese neutrality and the tense border peace that lasted until August 1945. In 1938, Germany annexed Austria and, together with the major Western European powers, signed the Munich Agreement, following which Germany, Hungary and Poland divided parts of Czechoslovakia between themselves. German plans for further eastward expansion, as well as the lack of resolve from the Western powers to oppose it, became more apparent. Despite the Soviet Union strongly opposing the Munich deal and repeatedly reaffirming its readiness to militarily back the commitments given earlier to Czechoslovakia, the Western Betrayal led to the end of Czechoslovakia and further increased fears in the Soviet Union of a coming German attack. This led the Soviet Union to rush the modernization of its military industry and to carry out its own diplomatic maneuvers. In 1939, the Soviet Union signed the Molotov–Ribbentrop Pact, a non-aggression pact with Nazi Germany dividing Eastern Europe into two separate spheres of influence. Following the pact, the USSR normalized relations with Nazi Germany and resumed Soviet–German trade.
World War II

On 17 September 1939, sixteen days after the start of World War II and with the victorious Germans having advanced deep into Polish territory, the Red Army invaded eastern Poland, stating as justification the "need to protect Ukrainians and Belarusians" there after the "cessation of existence" of the Polish state. As a result, the western borders of the Belarusian and Ukrainian Soviet republics were moved westward, and the new Soviet western border was drawn close to the original Curzon Line. In the meantime, negotiations with Finland over a Soviet-proposed land swap that would have redrawn the Soviet–Finnish border further away from Leningrad failed, and in December 1939 the USSR invaded Finland, beginning a campaign known as the Winter War (1939–40). The war took a heavy death toll on the Red Army but forced Finland to sign the Moscow Peace Treaty and cede the Karelian Isthmus and Ladoga Karelia. In summer 1940 the USSR issued an ultimatum to Romania, forcing it to cede the territories of Bessarabia and Northern Bukovina. At the same time, the Soviet Union also occupied the three formerly independent Baltic states (Estonia, Latvia and Lithuania). The peace with Germany was tense, as both sides were preparing for military conflict, and it ended abruptly when the Axis forces led by Germany swept across the Soviet border on 22 June 1941. By the autumn the German army had seized Ukraine, laid siege to Leningrad, and threatened to capture the capital, Moscow, itself. Although in December 1941 the Red Army drove the German forces back from Moscow in a successful counterattack, the Germans retained the strategic initiative for approximately another year and mounted a deep offensive toward the south-east, reaching the Volga and the Caucasus.
However, two major German defeats, at Stalingrad and Kursk, proved decisive and reversed the course of the entire war, as the Germans never regained the strength to sustain their offensive operations and the Soviet Union recaptured the initiative for the rest of the conflict. By the end of 1943, the Red Army had broken through the German siege of Leningrad, liberated much of Ukraine and much of western Russia, and moved into Belarus. During the 1944 campaign, the Red Army defeated German forces in a series of offensive campaigns known as Stalin's ten blows. By the end of 1944, the front had moved beyond the 1939 Soviet frontiers into eastern Europe. Soviet forces drove into eastern Germany, capturing Berlin in May 1945. The war with Germany thus ended triumphantly for the Soviet Union. As agreed at the Yalta Conference, three months after Victory Day in Europe the USSR launched the Soviet invasion of Manchuria, defeating the Japanese troops there in the last Soviet battle of World War II. Although the Soviet Union was victorious in World War II, the war resulted in around 26–27 million Soviet deaths (estimates vary) and devastated the Soviet economy. Some 1,710 towns and 70,000 settlements were destroyed. The occupied territories suffered from the ravages of German occupation and from the deportation of inhabitants by Germany for slave labor. Thirteen million Soviet citizens became victims of the repressive policies of Germany and its allies in the occupied territories, where people died because of mass murder, famine, the absence of elementary medical aid, and slave labor. The Nazi genocide of the Jews, carried out by German Einsatzgruppen along with local collaborators, resulted in the almost complete annihilation of the Jewish population over the entire territory temporarily occupied by Germany and its allies.
During the occupation, the Leningrad region lost around a quarter of its population, Soviet Belarus lost from a quarter to a third of its population, and 3.6 million Soviet prisoners of war (of 5.5 million) died in German camps.

Cold War

Collaboration among the major Allies had won the war and was supposed to serve as the basis for postwar reconstruction and security. The USSR became one of the founders of the UN and a permanent member of the UN Security Council. However, the conflict between Soviet and U.S. national interests, known as the Cold War, came to dominate the international stage in the postwar period. The Cold War emerged from a conflict between Stalin and U.S. President Harry Truman over the future of Eastern Europe during the Potsdam Conference in the summer of 1945. Russia had suffered three devastating Western onslaughts in the previous 150 years, during the Napoleonic Wars, the First World War, and the Second World War, and Stalin's goal was to establish a buffer zone of states between Germany and the Soviet Union. Truman charged that Stalin had betrayed the Yalta agreement. With Eastern Europe under Red Army occupation, Stalin was also biding his time, as his own atomic bomb project was steadily and secretly progressing. In April 1949 the United States sponsored the North Atlantic Treaty Organization (NATO), a mutual defense pact in which most Western nations pledged to treat an armed attack against one nation as an assault on all. The Soviet Union established an Eastern counterpart to NATO in 1955, the Warsaw Pact. The division of Europe into Western and Soviet blocs later took on a more global character, especially after 1949, when the U.S. nuclear monopoly ended with the testing of a Soviet bomb and the Communist takeover in China. The foremost objectives of Soviet foreign policy were the maintenance and enhancement of national security and the maintenance of hegemony over Eastern Europe.
The Soviet Union maintained its dominance over the Warsaw Pact by crushing the Hungarian Revolution of 1956, suppressing the Prague Spring in Czechoslovakia in 1968, and supporting the suppression of the Solidarity movement in Poland in the early 1980s. The Soviet Union opposed the United States in a number of proxy conflicts all over the world, including the Korean War and the Vietnam War. As the Soviet Union continued to maintain tight control over its sphere of influence in Eastern Europe, the Cold War gave way to détente and a more complicated pattern of international relations in the 1970s, in which the world was no longer clearly split into two opposed blocs. The nuclear arms race continued: the number of nuclear weapons in the hands of the USSR and the United States reached a menacing scale, giving each the ability to destroy the planet many times over. Less powerful countries had more room to assert their independence, and the two superpowers were partially able to recognize their common interest in trying to check the further spread and proliferation of nuclear weapons in treaties such as SALT I, SALT II, and the Anti-Ballistic Missile Treaty. U.S.–Soviet relations deteriorated following the beginning of the nine-year Soviet–Afghan War in 1979 and the 1980 election of Ronald Reagan, a staunch anti-communist, but improved as the communist bloc started to unravel in the late 1980s. With the collapse of the Soviet Union in 1991, Russia lost the superpower status that it had won in the Second World War.

De-Stalinization and the era of stagnation

Nikita Khrushchev solidified his position in a speech before the Twentieth Congress of the Communist Party in 1956 detailing Stalin's atrocities. In 1964, Khrushchev was removed from power by the Communist Party's Central Committee, which charged him with a host of errors that included Soviet setbacks such as the Cuban Missile Crisis.
After a period of collective leadership by Leonid Brezhnev, Alexei Kosygin and Nikolai Podgorny, Brezhnev, a veteran bureaucrat, took Khrushchev's place as Soviet leader. Brezhnev emphasized heavy industry, instituted the Soviet economic reform of 1965, and also attempted to ease relations with the United States. In the 1960s the USSR became a leading producer and exporter of petroleum and natural gas. Soviet science and industry peaked in the Khrushchev and Brezhnev years. The world's first nuclear power plant was established in 1954 in Obninsk, and the Baikal–Amur Mainline was built. In addition, in 1980 Moscow hosted the Summer Olympic Games. While all modernized economies were rapidly moving to computerization after 1965, the USSR fell further and further behind. Moscow's decision to copy the IBM System/360 of 1965 proved a decisive mistake, for it locked scientists into an antiquated system they were unable to improve. They had enormous difficulties in manufacturing the necessary chips reliably and in quantity, in writing workable and efficient programs, in coordinating entirely separate operations, and in providing support to computer users. One of the greatest strengths of the Soviet economy was its vast supplies of oil and gas; world oil prices quadrupled in 1973–74 and rose again in 1979–81, making the energy sector the chief driver of the Soviet economy, one that was used to cover multiple weaknesses. At one point, Soviet Premier Alexei Kosygin told the head of oil and gas production, "things are bad with bread. Give me 3 million tons [of oil] over the plan." Former prime minister Yegor Gaidar, an economist, reflected on this dependence three decades later, in 2007.

Soviet space program

The Soviet space program, founded by Sergey Korolev, was especially successful. On 4 October 1957, the Soviet Union launched the first satellite, Sputnik. On 12 April 1961, Yuri Gagarin became the first human to travel into space, in the Soviet spaceship Vostok 1.
Other achievements of the Soviet space program include the first photograph of the far side of the Moon; the exploration of Venus; the first spacewalk, by Alexei Leonov; and the first female spaceflight, by Valentina Tereshkova. In 1970 and 1973, the world's first planetary rovers, Lunokhod 1 and Lunokhod 2, were sent to the Moon and successfully operated there. The Soviet Union also produced the world's first space station, Salyut, which in 1986 was succeeded by Mir, the first consistently inhabited long-term space station, which served from 1986 to 2001.

Perestroika and breakup of the Union

Two developments dominated the decade that followed: the increasingly apparent crumbling of the Soviet Union's economic and political structures, and the patchwork attempts at reform to reverse that process. After the rapid succession of former KGB chief Yuri Andropov and Konstantin Chernenko, transitional figures with deep roots in the Brezhnevite tradition, Mikhail Gorbachev implemented perestroika in an attempt to modernize Soviet communism, and made significant changes in the party leadership. However, Gorbachev's social reforms led to unintended consequences. His policy of glasnost facilitated public access to information after decades of government repression, and social problems received wider public attention, undermining the Communist Party's authority. Glasnost allowed ethnic and nationalist disaffection to reach the surface, and many constituent republics, especially the Baltic republics, the Georgian SSR and the Moldavian SSR, sought greater autonomy, which Moscow was unwilling to provide. In the revolutions of 1989 the USSR lost its allies in Eastern Europe. Gorbachev's attempts at economic reform were not sufficient, and the Soviet government left intact most of the fundamental elements of the communist economy.
Suffering from low prices for petroleum and natural gas, the ongoing war in Afghanistan, outdated industry, and pervasive corruption, the Soviet planned economy proved ineffective, and by 1990 the Soviet government had lost control over economic conditions. Because of price controls, there were shortages of almost all products, peaking at the end of 1991, when people had to stand in long lines and were lucky to buy even the essentials. Control over the constituent republics was also relaxed, and they began to assert their national sovereignty against Moscow. The tension between the Soviet Union and the Russian SFSR authorities came to be personified in the bitter power struggle between Gorbachev and Boris Yeltsin. Squeezed out of Union politics by Gorbachev in 1987, Yeltsin, who represented himself as a committed democrat, presented a significant opposition to Gorbachev's authority. In a remarkable reversal of fortunes, he was elected chairman of the Russian republic's new Supreme Soviet in May 1990. The following month, he secured legislation giving Russian laws priority over Soviet laws and withholding two-thirds of the budget. In the first Russian presidential election, in 1991, Yeltsin became president of the Russian SFSR. At last Gorbachev attempted to restructure the Soviet Union into a less centralized state. However, on 19 August 1991, a coup against Gorbachev, organized by senior Soviet officials, was attempted. The coup faced wide popular opposition and collapsed in three days, but the disintegration of the Union became imminent. The Russian government took over most of the Soviet Union's government institutions on its territory. Because of the dominant position of Russians in the Soviet Union, most gave little thought to any distinction between Russia and the Soviet Union before the late 1980s.
In the Soviet Union, only the Russian SFSR lacked even the paltry instruments of statehood that the other republics possessed, such as its own republic-level Communist Party branch, trade union councils, Academy of Sciences, and the like. The Communist Party of the Soviet Union was banned in Russia in 1991–1992, although no lustration ever took place, and many of its members became top Russian officials. However, as the Soviet government was still opposed to market reforms, the economic situation continued to deteriorate. By December 1991, the shortages had resulted in the introduction of food rationing in Moscow and Saint Petersburg for the first time since World War II. Russia received humanitarian food aid from abroad. After the Belavezha Accords, the Supreme Soviet of Russia withdrew Russia from the Soviet Union on 12 December. The Soviet Union officially ended on 25 December 1991, and the Russian Federation (formerly the Russian Soviet Federative Socialist Republic) took power on 26 December. The Russian government lifted price controls in January 1992. Prices rose dramatically, but the shortages disappeared.

Russian Federation (1991–present)

Liberal reforms of the 1990s

Although Yeltsin came to power on a wave of optimism, he never recovered his popularity after endorsing Yegor Gaidar's "shock therapy" of ending Soviet-era price controls, drastic cuts in state spending, and an open foreign trade regime in early 1992 (see Russian economic reform in the 1990s). The reforms immediately devastated the living standards of much of the population. In the 1990s Russia suffered an economic downturn that was, in some ways, more severe than what the United States or Germany had undergone six decades earlier in the Great Depression. Hyperinflation hit the ruble, owing to monetary overhang from the days of the planned economy. Meanwhile, the profusion of small parties and their aversion to coherent alliances left the legislature chaotic.
During 1993, Yeltsin's rift with the parliamentary leadership led to the September–October 1993 constitutional crisis. The crisis climaxed on 3 October, when Yeltsin chose a radical solution to settle his dispute with parliament: he called up tanks to shell the Russian White House, blasting out his opponents. As Yeltsin had taken the unconstitutional step of dissolving the legislature, Russia came close to serious civil conflict. Yeltsin was then free to impose the current Russian constitution, with its strong presidential powers, which was approved by referendum in December 1993. The cohesion of the Russian Federation was also threatened when the republic of Chechnya attempted to break away, leading to the First and Second Chechen Wars. Economic reforms also consolidated a semi-criminal oligarchy with roots in the old Soviet system. Advised by Western governments, the World Bank, and the International Monetary Fund, Russia embarked on the largest and fastest privatization the world had ever seen in order to reform the fully nationalized Soviet economy. By mid-decade, retail, trade, services, and small industry were in private hands. Most big enterprises were acquired by their old managers, engendering a new class of rich (Russian tycoons) in league with criminal mafias or Western investors. Corporate raiders such as Andrei Volgin engaged in hostile takeovers of corrupt corporations by the mid-1990s. By the mid-1990s Russia had a system of multiparty electoral politics. But it was harder to establish a representative government because of two structural problems: the struggle between president and parliament, and the anarchic party system. Meanwhile, the central government had lost control of the localities, the bureaucracy, and economic fiefdoms, and tax revenues had collapsed. Still in a deep depression, Russia's economy was hit further by the financial crash of 1998. After the crisis, Yeltsin was at the end of his political career.
Just hours before the first day of 2000, Yeltsin made a surprise announcement of his resignation, leaving the government in the hands of the little-known prime minister Vladimir Putin, a former KGB official and head of the FSB, the KGB's post-Soviet successor agency.

The era of Putin

In 2000, the new acting president defeated his opponents in the presidential election on 26 March and won in a landslide four years later. The Second Chechen War ended with the victory of Russia; at the same time, after the September 11 terrorist attacks, there was a rapprochement between Russia and the United States. Putin created a system of guided democracy in Russia by subjugating parliament, suppressing independent media, and placing major oil and gas companies under state control. International observers were alarmed by moves in late 2004 to further tighten the presidency's control over parliament, civil society, and regional officeholders. In 2008, Dmitri Medvedev, a former Gazprom chairman and Putin's chief of staff, was elected the new President of Russia. In 2012, Putin and Medvedev switched places: Putin became president again, a move that prompted massive protests in Moscow in 2011–12. Russia's long-term problems include a shrinking workforce, rampant corruption, and underinvestment in infrastructure. Nevertheless, reversion to a socialist command economy seemed almost impossible. These economic problems are aggravated by massive capital outflows, as well as extremely difficult conditions for doing business, owing to pressure from the security forces (siloviki) and government agencies. Thanks to high oil prices, from 2000 to 2008 Russia's GDP at PPP doubled. Although high oil prices and a relatively cheap ruble initially drove this growth, since 2003 consumer demand and, more recently, investment have also played a significant role. Russia is well ahead of most other resource-rich countries in its economic development, with a long tradition of education, science, and industry.
The economic recovery of the 2000s allowed Russia to obtain the right to host the 2014 Winter Olympic Games in Sochi. In 2014, following a controversial referendum in which separation was favored by a large majority of voters, the Russian leadership announced the accession of Crimea into the Russian Federation. Following Russia's annexation of Crimea and alleged Russian interference in the war in eastern Ukraine, Western sanctions were imposed on Russia. Since 2015, Russia has been conducting a military intervention in Syria in support of the Bashar al-Assad government, against ISIS and the Syrian opposition. In 2018, the FIFA World Cup was held in Russia, and Vladimir Putin was re-elected for a fourth presidential term. In 2022, Russia launched an invasion of Ukraine. The invasion was widely condemned by the global community, with new sanctions being imposed on Russia.

See also

Dissolution of the Soviet Union
Family tree of the Russian monarchs
General Secretary of the Communist Party of the Soviet Union
History of Central Asia
History of Siberia
History of the administrative division of Russia
History of the Caucasus
History of the Jews in Russia
History of the Soviet Union
List of heads of government of Russia
List of Mongol and Tatar raids against Rus'
List of presidents of Russia
List of Russian explorers
List of Russian historians
List of Russian rulers
List of wars involving Russia
Military history of the Russian Empire
Military history of the Soviet Union
Politics of Russia
Russia
Russian Armed Forces
Russian colonization of the Americas
Russian Empire
Russian Medical Fund
Soviet Union
Timeline of Moscow
Timeline of Russian history
Timeline of Russian innovation
Timeline of Saint Petersburg

References

Further reading

Surveys

Auty, Robert, and Dimitri Obolensky, eds. Companion to Russian Studies: vol 1: An Introduction to Russian History (1981), 403 pages; surveys by scholars.
Bartlett, Roger P.
A history of Russia (2005) online
Brown, Archie, et al., eds. The Cambridge Encyclopedia of Russia and the Former Soviet Union (2nd ed. 1994), 664 pages. online
Bushkovitch, Paul. A Concise History of Russia (2011) excerpt and text search
Connolly, Richard. The Russian Economy: A Very Short Introduction (Oxford University Press, 2020). Online review
Figes, Orlando. Natasha's Dance: A Cultural History of Russia (2002). excerpt
Florinsky, Michael T., ed. McGraw-Hill Encyclopedia of Russia and the Soviet Union (1961).
Freeze, Gregory L., ed. Russia: A History. 2nd ed. (Oxford UP, 2002).
Harcave, Sidney, ed. Readings in Russian history (1962), excerpts from scholars. online
Hosking, Geoffrey A. Russia and the Russians: a history (2011) online
Jelavich, Barbara. St. Petersburg and Moscow: tsarist and Soviet foreign policy, 1814–1974 (1974).
Kort, Michael. A brief history of Russia (2008) online
McKenzie, David, and Michael W. Curran. A History of Russia, the Soviet Union, and Beyond. 6th ed. Belmont, CA: Wadsworth Publishing, 2001.
Millar, James, ed. Encyclopedia of Russian History (4 vol. 2003). online
Pares, Bernard. A History of Russia (1926), by a leading historian. Online
Paxton, John. Encyclopedia of Russian History (1993) online
Paxton, John. Companion to Russian history (1983) online
Perrie, Maureen, et al. The Cambridge History of Russia (3 vol. Cambridge University Press, 2006). excerpt and text search
Riasanovsky, Nicholas V., and Mark D. Steinberg. A History of Russia (9th ed. 2018); 1993 edition online
Service, Robert. A History of Modern Russia: From Tsarism to the Twenty-First Century (Harvard UP, 3rd ed., 2009) excerpt
Stone, David. A Military History of Russia: From Ivan the Terrible to the War in Chechnya. excerpts
Ziegler, Charles E. The History of Russia (Greenwood Press, 1999)

Russian Empire

Baykov, Alexander. "The Economic Development of Russia." Economic History Review 7#2 (1954), pp. 137–149. online
Billington, James H.
The icon and the axe: an interpretive history of Russian culture (1966) online
Christian, David. A History of Russia, Central Asia and Mongolia. Vol. 1: Inner Eurasia from Prehistory to the Mongol Empire. Malden, MA: Blackwell Publishers, 1998.
De Madariaga, Isabel. Russia in the Age of Catherine the Great (2002), comprehensive topical survey
Fuller, William C. Strategy and Power in Russia 1600–1914 (1998) excerpts
Hughes, Lindsey. Russia in the Age of Peter the Great (Yale UP, 1998), comprehensive topical survey. online
Kahan, Arcadius. The Plow, the Hammer, and the Knout: An Economic History of Eighteenth-Century Russia (1985)
Kahan, Arcadius. Russian Economic History: The Nineteenth Century (1989)
Gatrell, Peter. "Review: Russian Economic History: The Legacy of Arcadius Kahan." Slavic Review 50#1 (1991), pp. 176–178. online
Lincoln, W. Bruce. The Romanovs: Autocrats of All the Russias (1983) online; sweeping narrative history
Lincoln, W. Bruce. The great reforms: autocracy, bureaucracy, and the politics of change in Imperial Russia (1990) online
Manning, Roberta. The Crisis of the Old Order in Russia: Gentry and Government. Princeton University Press, 1982.
Markevich, Andrei, and Ekaterina Zhuravskaya. "Economic Effects of the Abolition of Serfdom: Evidence from the Russian Empire." American Economic Review 108.4–5 (2018): 1074–1117.
Mironov, Boris N., and Ben Eklof. The Social History of Imperial Russia, 1700–1917 (2 vol. Westview Press, 2000)
Moss, Walter G. A History of Russia. Vol. 1: To 1917. 2d ed. Anthem Press, 2002.
Oliva, Lawrence Jay, ed. Russia in the era of Peter the Great (1969), excerpts from primary and secondary sources. online
Pipes, Richard. Russia under the Old Regime (2nd ed. 1997)
Seton-Watson, Hugh. The Russian Empire 1801–1917 (Oxford History of Modern Europe) (1988) excerpt and text search
Treasure, Geoffrey. The Making of Modern Europe, 1648–1780 (3rd ed. 2003), pp. 550–600.

Soviet era

Chamberlin, William Henry.
The Russian Revolution 1917–1921 (2 vol. 1935) online free
Cohen, Stephen F. Rethinking the Soviet Experience: Politics and History since 1917 (Oxford University Press, 1985)
Davies, R. W. Soviet economic development from Lenin to Khrushchev (1998) excerpt
Davies, R.W., Mark Harrison and S.G. Wheatcroft. The Economic Transformation of the Soviet Union, 1913–1945 (1994)
Figes, Orlando. A People's Tragedy: A History of the Russian Revolution (1997) online
Fitzpatrick, Sheila. The Russian Revolution (Oxford University Press, 1982), 208 pages.
Gregory, Paul R., and Robert C. Stuart. Russian and Soviet Economic Performance and Structure (7th ed. 2001)
Hosking, Geoffrey. The First Socialist Society: A History of the Soviet Union from Within (2nd ed. Harvard UP, 1992), 570 pages
Kennan, George F. Russia and the West under Lenin and Stalin (1961) online
Kort, Michael. The Soviet Colossus: History and Aftermath (7th ed. 2010), 502 pages
Kotkin, Stephen. Stalin: Paradoxes of Power, 1878–1928 (2014); vol. 2 (2017)
Library of Congress. Russia: a country study, edited by Glenn E. Curtis (Federal Research Division, Library of Congress, 1996). online
Lincoln, W. Bruce. Passage Through Armageddon: The Russians in War and Revolution, 1914–1918 (1986)
Lewin, Moshe. Russian Peasants and Soviet Power (Northwestern University Press, 1968)
McCauley, Martin. The Rise and Fall of the Soviet Union (2007), 522 pages.
Moss, Walter G. A History of Russia. Vol. 2: Since 1855. 2d ed. Anthem Press, 2005.
Nove, Alec. An Economic History of the USSR, 1917–1991. 3rd ed. London: Penguin Books, 1993.
Ofer, Gur. "Soviet Economic Growth: 1928–1985." Journal of Economic Literature 25#4 (1987): 1767–1833. online
Pipes, Richard. A concise history of the Russian Revolution (1995) online
Regelson, Lev. Tragedy of Russian Church. 1917–1953. http://www.regels.org/Russian-Church.htm
Remington, Thomas. Building Socialism in Bolshevik Russia. Pittsburgh: University of Pittsburgh Press, 1984.
Service, Robert.
A History of Twentieth-Century Russia. 2nd ed. Cambridge, MA: Harvard University Press, 1999.
Service, Robert. Stalin: A Biography (2004); along with Tucker and Kotkin, a standard biography
Steinberg, Mark D. The Russian Revolution, 1905–1921 (Oxford Histories, 2017).
Tucker, Robert C. Stalin as Revolutionary, 1879–1929 (1973); Stalin in Power: The Revolution from Above, 1929–1941 (1990); along with the Kotkin and Service books, a standard biography; online at ACLS e-books

Post-Soviet era

Asmus, Ronald. A Little War that Shook the World: Georgia, Russia, and the Future of the West. NYU (2010).
Cohen, Stephen. Failed Crusade: America and the Tragedy of Post-Communist Russia. New York: W.W. Norton, 2000, 320 pages.
Gregory, Paul R., and Robert C. Stuart. Russian and Soviet Economic Performance and Structure. Addison-Wesley, 7th ed., 2001.
Medvedev, Roy. Post-Soviet Russia: A Journey Through the Yeltsin Era. Columbia University Press, 2002, 394 pages.
Moss, Walter G. A History of Russia. Vol. 2: Since 1855. 2d ed. Anthem Press, 2005. Chapter 22.
Smorodinskaya, Tatiana, and Karen Evans-Romaine, eds. Encyclopedia of Contemporary Russian Culture (2014) excerpt; 800 pp. covering art, literature, music, film, media, crime, politics, business, and economics.
Stent, Angela. The Limits of Partnership: U.S.–Russian Relations in the Twenty-First Century (2014)

Atlases, geography

Blinnikov, Mikhail S. A geography of Russia and its neighbors (Guilford Press, 2011)
Barnes, Ian. Restless Empire: A Historical Atlas of Russia (2015), copies of historic maps
Catchpole, Brian. A Map History of Russia (Heinemann Educational Publishers, 1974), new topical maps.
Channon, John, and Robert Hudson. The Penguin historical atlas of Russia (Viking, 1995), new topical maps.
Chew, Allen F. An atlas of Russian history: eleven centuries of changing borders (Yale UP, 1970), new topical maps.
Gilbert, Martin. Routledge Atlas of Russian History (4th ed.
2007) excerpt and text search online Henry, Laura A. Red to green: environmental activism in post-Soviet Russia (2010) Kaiser, Robert J. The Geography of Nationalism in Russia and the USSR (1994). Medvedev, Andrei. Economic Geography of the Russian Federation by (2000) Parker, William Henry. An historical geography of Russia (University of London Press, 1968) Shaw, Denis J.B. Russia in the modern world: A new geography (Blackwell, 1998) of Finland. Historiography Baron, Samuel H., and Nancy W. Heer. "The Soviet Union: Historiography Since Stalin." in Georg G. Iggers and Harold Talbot Parker, eds. International handbook of historical studies: contemporary research and theory (Taylor & Francis, 1979). pp. 281–94. David-Fox, Michael et al. eds. After the Fall: Essays in Russian and Soviet Historiography (Bloomington: Slavica Publishers, 2004) Firestone, Thomas. "Four Sovietologists: A Primer." National Interest No. 14 (Winter 1988–9), pp. 102–107 on the ideas of Zbigniew Brzezinski, Stephen F. Cohen Jerry F. Hough, and Richard Pipes. Fitzpatrick, Sheila. "Revisionism in Soviet History" History and Theory (2007) 46#4 pp. 77–91 online, covers the scholarship of the three major schools, totalitarianism, revisionism, and post-revisionism. Sanders, Thomas, ed. Historiography of Imperial Russia: The Profession and Writing of History in a Multinational State (1999). Suny, Ronald Grigor. "Rehabilitating Tsarism: The Imperial Russian State and Its Historians. A Review Article" Comparative Studies in Society and History 31#1 (1989) pp. 168–179 online Topolski, Jerzy. "Soviet Studies and Social History" in Georg G. Iggers and Harold Talbot Parker, eds. International handbook of historical studies: contemporary research and theory (Taylor & Francis, 1979. pp. 295–300. Primary sources Kaiser, Daniel H. and Gary Marker, eds. 
Reinterpreting Russian History: Readings 860-1860s (1994) 464 pages excerpt and text search; primary documents and excerpts from historians Vernadsky, George, et al. eds. Source Book for Russian History from Early Times to 1917 (3 vol 1972) Seventeen Moments in Soviet History (An on-line archive of primary source materials on Soviet history.) External links Guides to Sources on Russian History and Historiography History of Russia: Primary Documents Дневник Истории России A historic project supported by the Ministry of Culture of the Russian Federation.
History of Christianity
The history of Christianity concerns the Christian religion, Christian countries, and Christians with their various denominations, from the 1st century to the present. Christianity originated with the ministry of Jesus, a Jewish teacher and healer who proclaimed the imminent Kingdom of God and was crucified in Jerusalem in the Roman province of Judea. His followers believe that, according to the Gospels, he was the Son of God, that he died for the forgiveness of sins and was raised from the dead and exalted by God, and that he will return soon at the inception of God's kingdom. The earliest followers of Jesus were apocalyptic Jewish Christians. The inclusion of Gentiles in the developing early Christian Church caused the separation of early Christianity from Judaism during the first two centuries of the Christian Era. In 313, the Roman Emperor Constantine I issued the Edict of Milan legalizing Christian worship. In 380, with the Edict of Thessalonica put forth under Theodosius I, the Roman Empire officially adopted Trinitarian Christianity as its state religion, and Christianity established itself as a predominantly Roman religion in the State church of the Roman Empire. Various Christological debates about the human and divine nature of Jesus consumed the Christian Church for three centuries, and seven ecumenical councils were called to resolve these debates. Arianism was condemned at the First Council of Nicaea (325), which supported the Trinitarian doctrine as expounded in the Nicene Creed. In the Early Middle Ages, missionary activities spread Christianity towards the west and the north among Germanic peoples; towards the east among Armenians, Georgians, and Slavic peoples; in the Middle East among Syrians and Egyptians; in Eastern Africa among the Ethiopians; and further into Central Asia, China, and India. During the High Middle Ages, Eastern and Western Christianity grew apart, leading to the East–West Schism of 1054. 
Growing criticism of the Roman Catholic ecclesiastical structure and its corruption led to the Protestant Reformation and its related reform movements in the 15th and 16th centuries, which culminated in the European wars of religion and cemented the split of Western Christianity. Since the Renaissance era, with colonialism inspired by the Christian Church, Christianity has expanded throughout the world. Today, there are more than two billion Christians worldwide, and Christianity has become the world's largest religion. Within the last century, while the influence of Christianity has progressively waned in the Western world, it continues to be the predominant religion in Europe (including Russia) and the Americas, and has grown rapidly in Asia as well as in the Global South and Third World countries, most notably in Latin America, China, South Korea, and much of Sub-Saharan Africa. Origins Jewish-Hellenistic background The religious, social, and political climate of 1st-century Roman Judea and its neighbouring provinces was extremely diverse and constantly characterized by socio-political turmoil, with numerous Judaic movements that were both religious and political. The ancient Roman-Jewish historian Josephus described the four most prominent sects within Second Temple Judaism: Pharisees, Sadducees, Essenes, and an unnamed "fourth philosophy", which modern historians recognize to be the Zealots and Sicarii. The 1st century BC and 1st century AD had numerous charismatic religious leaders contributing to what would become the Mishnah of Rabbinic Judaism, including the Jewish sages Yohanan ben Zakkai and Hanina ben Dosa. Jewish messianism, and the Jewish Messiah concept, has its roots in the apocalyptic literature produced between the 2nd century BC and the 1st century BC, promising a future "anointed" leader (messiah or king) from the Davidic line to resurrect the Israelite Kingdom of God, in place of the foreign rulers of the time. 
Ministry of Jesus The main sources of information regarding Jesus' life and teachings are the four canonical gospels, and to a lesser extent the Acts of the Apostles and the Pauline epistles. According to the Gospels, Jesus is the Son of God, who was crucified in Jerusalem. His followers believed that he was raised from the dead and exalted by God, heralding the coming Kingdom of God. Early Christianity (c. 31/33–324) Early Christianity is generally reckoned by church historians to begin with the ministry of Jesus (c. 27–30) and end with the First Council of Nicaea (325). It is typically divided into two periods: the Apostolic Age (c. 30–100, when the first apostles were still alive) and the Ante-Nicene Period (c. 100–325). Apostolic Age The Apostolic Age is named after the Apostles and their missionary activities. It holds special significance in Christian tradition as the age of the direct apostles of Jesus. A primary source for the Apostolic Age is the Acts of the Apostles, but its historical accuracy is questionable and its coverage is partial, focusing especially from Acts 15 onwards on the ministry of Paul, and ending around 62 AD with Paul preaching in Rome under house arrest. The earliest followers of Jesus were a sect of apocalyptic Jewish Christians within the realm of Second Temple Judaism. The early Christian groups were strictly Jewish, such as the Ebionites, and the early Christian community in Jerusalem, led by James the Just, brother of Jesus. According to Acts 9, they described themselves as "disciples of the Lord" and [followers] "of the Way", and according to Acts 11, a settled community of disciples at Antioch were the first to be called "Christians". Some of the early Christian communities attracted God-fearers, i.e. Greco-Roman sympathizers who pledged allegiance to Judaism but refused to convert and therefore retained their Gentile (non-Jewish) status; these sympathizers already attended Jewish synagogues. 
The inclusion of Gentiles posed a problem, as they could not fully observe the Halakha. Saul of Tarsus, commonly known as Paul the Apostle, persecuted the early Jewish Christians, then converted and started his mission among the Gentiles. The main concern of Paul's letters is the inclusion of Gentiles into God's New Covenant, sending the message that faith in Christ is sufficient for salvation. Because of this inclusion of Gentiles, early Christianity changed its character and gradually grew apart from Judaism and Jewish Christianity during the first two centuries of the Christian Era. The fourth-century church fathers Eusebius and Epiphanius of Salamis cite a tradition that before the destruction of Jerusalem in AD 70 the Jerusalem Christians had been miraculously warned to flee to Pella in the region of the Decapolis across the Jordan River. The Gospels and New Testament epistles contain early creeds and hymns, as well as accounts of the Passion, the empty tomb, and Resurrection appearances. Early Christianity spread to pockets of believers among Aramaic-speaking peoples along the Mediterranean coast and also to the inland parts of the Roman Empire and beyond, into the Parthian Empire and the later Sasanian Empire, including Mesopotamia, which was dominated at different times and to varying extent by these empires. Ante-Nicene period The ante-Nicene period (literally meaning "before Nicaea") was the period following the Apostolic Age down to the First Council of Nicaea in 325. By the beginning of the Nicene period, the Christian faith had spread throughout Western Europe and the Mediterranean Basin, and to North Africa and the East. A more formal Church structure grew out of the early communities, and various Christian doctrines developed. Christianity grew apart from Judaism, creating its own identity by an increasingly harsh rejection of Judaism and of Jewish practices. 
Developing church structure The number of Christians grew by approximately 40% per decade during the first and second centuries. In the post-Apostolic church a hierarchy of clergy gradually emerged as overseers of urban Christian populations took on the form of episkopoi (overseers, the origin of the terms bishop and episcopal) and presbyters (elders; the origin of the term priest) and then deacons (servants). But this emerged slowly and at different times in different locations. Clement, a 1st-century bishop of Rome, refers to the leaders of the Corinthian church in his epistle to the Corinthians as bishops and presbyters interchangeably. The New Testament writers also use the terms overseer and elder interchangeably and as synonyms. Variant Christianities The Ante-Nicene period saw the rise of a great number of Christian sects, cults and movements with strong unifying characteristics lacking in the apostolic period. They had different interpretations of Scripture, particularly regarding the divinity of Jesus and the nature of the Trinity. Many variations in this time defy neat categorizations, as various forms of Christianity interacted in a complex fashion to form the dynamic character of Christianity in this era. The Post-Apostolic period was diverse both in terms of beliefs and practices. In addition to the broad spectrum of general branches of Christianity, there was constant change and diversity that variably resulted in both internecine conflicts and syncretic adoption. Development of the biblical canon The Pauline epistles were circulating in collected form by the end of the 1st century. By the early 3rd century, there existed a set of Christian writings similar to the current New Testament, though there were still disputes over the canonicity of Hebrews, James, I Peter, I and II John, and Revelation. 
By the 4th century, there existed unanimity in the West concerning the New Testament canon, and by the 5th century the East, with a few exceptions, had come to accept the Book of Revelation and thus had come into harmony on the matter of the canon. Early orthodox writings As Christianity spread, it acquired certain members from well-educated circles of the Hellenistic world; they sometimes became bishops. They produced two sorts of works, theological and apologetic, the latter being works aimed at defending the faith by using reason to refute arguments against the veracity of Christianity. These authors are known as the Church Fathers, and study of them is called patristics. Notable early fathers include Ignatius of Antioch, Polycarp, Justin Martyr, Irenaeus, Tertullian, Clement of Alexandria, and Origen. Early art Christian art emerged relatively late and the first known Christian images emerge from about 200 AD, although there is some literary evidence that small domestic images were used earlier. The oldest known Christian paintings are from the Roman catacombs, dated to about 200, and the oldest Christian sculptures are from sarcophagi, dating to the beginning of the 3rd century. The early rejection of images, and the necessity to hide Christian practice from persecution, left behind few written records regarding early Christianity and its evolution. Persecutions and legalisation There was no empire-wide persecution of Christians until the reign of Decius in the third century. The last and most severe persecution organised by the imperial authorities was the Diocletianic Persecution, 303–311. The Edict of Serdica was issued in 311 by the Roman Emperor Galerius, officially ending the persecution in the East. With the passage in 313 AD of the Edict of Milan, in which the Roman Emperors Constantine the Great and Licinius legalised the Christian religion, persecution of Christians by the Roman state ceased. 
Armenia became the first country to establish Christianity as its state religion when, in an event traditionally dated to 301 AD, St. Gregory the Illuminator convinced Tiridates III, the king of Armenia, to convert to Christianity. Late antiquity (325–476) Influence of Constantine How much Christianity Constantine adopted at this point is difficult to discern, but his accession was a turning point for the Christian Church. He supported the Church financially, built various basilicas, granted privileges (e.g., exemption from certain taxes) to clergy, promoted Christians to some high offices, and returned confiscated property. Constantine played an active role in the leadership of the Church. In 316, he acted as a judge in a North African dispute concerning the Donatist controversy. More significantly, in 325 he summoned the Council of Nicaea, the first ecumenical council. He thus established a precedent for the emperor as responsible to God for the spiritual health of his subjects, and thus with a duty to maintain orthodoxy. He was to enforce doctrine, root out heresy, and uphold ecclesiastical unity. Constantine's sons were eventually succeeded by his nephew Julian, who, under the influence of his adviser Mardonius, renounced Christianity and embraced a Neoplatonic and mystical form of paganism, shocking the Christian establishment. He began reopening pagan temples, modifying them to resemble Christian traditions such as the episcopal structure and public charity (previously unknown in Roman paganism). Julian's short reign ended when he died in battle with the Persians. Arianism and the first ecumenical councils A popular doctrine in the 4th century was Arianism, which taught that Christ is distinct from and subordinate to God the Father. Although this doctrine was condemned as heresy and eventually eliminated by the Roman Church, it remained popular underground for some time. 
In the late 4th century, Ulfilas, a Roman bishop and an Arian, was appointed as the first bishop to the Goths, the Germanic peoples in much of Europe at the borders of and within the Empire. Ulfilas spread Arian Christianity among the Goths, firmly establishing the faith among many of the Germanic tribes, thus helping to keep them culturally distinct. During this age, the first ecumenical councils were convened. They were mostly concerned with Christological disputes. The First Council of Nicaea (325) and the First Council of Constantinople (381) resulted in condemnation of Arian teachings as heresy and produced the Nicene Creed. Christianity as Roman state religion On 27 February 380, with the Edict of Thessalonica put forth under Theodosius I, Gratian, and Valentinian II, the Roman Empire officially adopted Trinitarian Christianity as its state religion. Prior to this date, Constantius II and Valens had personally favoured Arian or Semi-Arian forms of Christianity, but Valens' successor Theodosius I supported the Trinitarian doctrine as expounded in the Nicene Creed. After its establishment, the Church adopted the same organisational boundaries as the Empire: geographical provinces, called dioceses, corresponding to imperial government territorial divisions. The bishops, who were located in major urban centres as in pre-legalisation tradition, thus oversaw each diocese. The bishop's location was his "seat", or "see". Among the sees, five came to hold special eminence: Rome, Constantinople, Jerusalem, Antioch, and Alexandria. The prestige of most of these sees depended in part on their apostolic founders, from whom the bishops were therefore the spiritual successors. Though the bishop of Rome was still held to be the First among equals, Constantinople was second in precedence as the new capital of the empire. 
Theodosius I decreed that others not believing in the preserved "faithful tradition", such as the Trinity, were to be considered to be practitioners of illegal heresy, and in 385, this resulted in the first case of the state, not Church, infliction of capital punishment on a heretic, namely Priscillian. Church of the East and the Sasanian Empire During the early 5th century, the School of Edessa had taught a Christological perspective stating that Christ's divine and human nature were distinct persons. A particular consequence of this perspective was that Mary could not be properly called the mother of God but could only be considered the mother of Christ. The most widely known proponent of this viewpoint was the Patriarch of Constantinople Nestorius. Since referring to Mary as the mother of God had become popular in many parts of the Church this became a divisive issue. The Roman Emperor Theodosius II called for the Council of Ephesus (431), with the intention of settling the issue. The council ultimately rejected Nestorius' view. Many churches who followed the Nestorian viewpoint broke away from the Roman Church, causing a major schism. The Nestorian churches were persecuted, and many followers fled to the Sasanian Empire where they were accepted. The Sasanian (Persian) Empire had many Christian converts early in its history tied closely to the Syriac branch of Christianity. The Empire was officially Zoroastrian and maintained a strict adherence to this faith in part to distinguish itself from the religion of the Roman Empire (originally the pagan Roman religion and then Christianity). Christianity became tolerated in the Sasanian Empire, and as the Roman Empire increasingly exiled heretics during the 4th and 6th centuries, the Sasanian Christian community grew rapidly. By the end of the 5th century, the Persian Church was firmly established and had become independent of the Roman Church. This church evolved into what is today known as the Church of the East. 
In 451, the Council of Chalcedon was held to further clarify the Christological issues surrounding Nestorianism. The council ultimately stated that Christ's divine and human nature were separate but both part of a single entity, a viewpoint rejected by many churches who called themselves miaphysites. The resulting schism created a communion of churches, including the Armenian, Syrian, and Egyptian churches. Though efforts were made at reconciliation in the next few centuries, the schism remained permanent, resulting in what is today known as Oriental Orthodoxy. Monasticism Monasticism is a form of asceticism whereby one renounces worldly pursuits and goes off alone as a hermit or joins a tightly organized community. It began early in the Church as a family of similar traditions, modelled upon Scriptural examples and ideals, and with roots in certain strands of Judaism. John the Baptist is seen as an archetypical monk, and monasticism was inspired by the organisation of the Apostolic community as recorded in Acts 2:42–47. Eremitic monks, or hermits, live in solitude, whereas cenobitic monks live in communities, generally in a monastery, under a rule (or code of practice) and are governed by an abbot. Originally, all Christian monks were hermits, following the example of Anthony the Great. However, the need for some form of organised spiritual guidance led Pachomius in 318 to organise his many followers in what was to become the first monastery. Soon, similar institutions were established throughout the Egyptian desert as well as the rest of the eastern half of the Roman Empire. Women were especially attracted to the movement. Central figures in the development of monasticism were Basil the Great in the East and, in the West, Benedict, who created the famous Rule of Saint Benedict, which would become the most common rule throughout the Middle Ages and the starting point for other monastic rules. 
Early Middle Ages (476–799) The transition into the Middle Ages was a gradual and localised process. Rural areas rose as power centres whilst urban areas declined. Although a greater number of Christians remained in the East (Greek areas), important developments were underway in the West (Latin areas) and each took on distinctive shapes. The bishops of Rome, the popes, were forced to adapt to drastically changing circumstances. Maintaining only nominal allegiance to the emperor, they were forced to negotiate balances with the "barbarian rulers" of the former Roman provinces. In the East, the Church maintained its structure and character and evolved more slowly. Western missionary expansion The stepwise loss of Western Roman Empire dominance, replaced with foederati and Germanic kingdoms, coincided with early missionary efforts into areas not controlled by the collapsing empire. As early as the 5th century, missionary activities from Roman Britain into the Celtic areas (Scotland, Ireland and Wales) produced competing early traditions of Celtic Christianity, which were later reintegrated under the Church in Rome. Prominent missionaries were Saints Patrick, Columba and Columbanus. The Anglo-Saxon tribes that invaded southern Britain some time after the Roman abandonment were initially pagan but were converted to Christianity by Augustine of Canterbury on the mission of Pope Gregory the Great. England soon became a missionary centre itself, and missionaries such as Wilfrid, Willibrord, Lullus and Boniface converted their Saxon relatives in Germania. The largely Christian Gallo-Roman inhabitants of Gaul (modern France) were overrun by the Franks in the early 5th century. The native inhabitants were persecuted until the Frankish King Clovis I converted from paganism to Roman Catholicism in 496. Clovis insisted that his fellow nobles follow suit, strengthening his newly established kingdom by uniting the faith of the rulers with that of the ruled. 
After the rise of the Frankish Kingdom and the stabilizing of political conditions, the Western part of the Church increased its missionary activities, supported by the Merovingian kingdom as a means to pacify troublesome neighbouring peoples. After the foundation of a church in Utrecht by Willibrord, backlashes occurred when the pagan Frisian King Radbod destroyed many Christian centres between 716 and 719. In 717, the English missionary Boniface was sent to aid Willibrord, re-establishing churches in Frisia and continuing missions in Germany. During the late 8th century, Charlemagne used mass killings to subjugate the pagan Saxons and compel them to accept Christianity. Byzantine Iconoclasm Following a series of heavy military reverses against the Muslims, Iconoclasm emerged in the early 8th century. In the 720s, the Byzantine Emperor Leo III the Isaurian banned the pictorial representation of Christ, saints, and biblical scenes. In the West, Pope Gregory III held two synods at Rome and condemned Leo's actions. The Byzantine Iconoclast Council, held at Hieria in 754, ruled that holy portraits were heretical. The movement destroyed much of the Christian church's early artistic history. The iconoclastic movement was later defined as heretical in 787 under the Second Council of Nicaea (the seventh ecumenical council) but had a brief resurgence between 815 and 842. High Middle Ages (800–1299) Carolingian Renaissance The Carolingian Renaissance was a period of intellectual and cultural revival of literature, arts, and scriptural studies during the late 8th and 9th centuries, mostly during the reigns of the Frankish rulers Charlemagne and Louis the Pious. To address the problems of illiteracy among clergy and court scribes, Charlemagne founded schools and attracted the most learned men from all of Europe to his court. Growing tensions between East and West Tensions in Christian unity started to become evident in the 4th century. 
Two basic problems were involved: the nature of the primacy of the bishop of Rome and the theological implications of adding a clause to the Nicene Creed, known as the filioque clause. These doctrinal issues were first openly discussed during Photius's patriarchate. The Eastern churches viewed Rome's understanding of the nature of episcopal power as being in direct opposition to the Church's essentially conciliar structure and thus saw the two ecclesiologies as mutually antithetical. Another issue developed into a major irritant to Eastern Christendom, the gradual introduction into the Nicene Creed in the West of the Filioque clause – meaning "and the Son" – as in "the Holy Spirit ... proceeds from the Father and the Son", where the original Creed, sanctioned by the councils and still used today by the Eastern Orthodox, simply states "the Holy Spirit, ... proceeds from the Father." The Eastern Church argued that the phrase had been added unilaterally and therefore illegitimately, since the East had never been consulted. In addition to this ecclesiological issue, the Eastern Church also considered the Filioque clause unacceptable on dogmatic grounds. Photian schism In the 9th century, a controversy arose between Eastern (Byzantine, Greek Orthodox) and Western (Latin, Roman Catholic) Christianity that was precipitated by the opposition of Pope Nicholas I of Rome to the appointment by the Byzantine Emperor Michael III of Photios I to the position of patriarch of Constantinople. Photios was refused an apology by the pope for previous points of dispute between the East and West. Photios refused to accept the supremacy of the pope in Eastern matters or accept the Filioque clause. The Latin delegation at the council of his consecration pressed him to accept the clause in order to secure their support. The controversy also involved Eastern and Western ecclesiastical jurisdictional rights in the Bulgarian church. 
Photios did make a concession on the issue of jurisdictional rights concerning Bulgaria, and the papal legates made do with his return of Bulgaria to Rome. This concession, however, was purely nominal, as Bulgaria's return to the Byzantine rite in 870 had already secured for it an autocephalous church. Without the consent of Boris I of Bulgaria, the papacy was unable to enforce any of its claims. East–West Schism (1054) The East–West Schism, or Great Schism, separated the Church into Western (Latin) and Eastern (Greek) branches, i.e., Western Catholicism and Eastern Orthodoxy. It was the first major division since certain groups in the East rejected the decrees of the Council of Chalcedon (see Oriental Orthodoxy) and was far more significant. Though normally dated to 1054, the East–West Schism was actually the result of an extended period of estrangement between Latin and Greek Christendom over the nature of papal primacy and certain doctrinal matters like the Filioque, and was intensified by cultural and linguistic differences. Monastic reform From the 6th century onward, most of the monasteries in the West were of the Benedictine Order. Owing to the stricter adherence to a reformed Benedictine rule, the abbey of Cluny became the acknowledged leader of western monasticism from the later 10th century. Cluny created a large, federated order in which the administrators of subsidiary houses served as deputies of the abbot of Cluny and answered to him. The Cluniac spirit was a revitalising influence on the Norman church, at its height from the second half of the 10th century through the early 12th century. The next wave of monastic reform came with the Cistercian Movement. The first Cistercian abbey was founded in 1098, at Cîteaux Abbey. The keynote of Cistercian life was a return to a literal observance of the Benedictine rule, rejecting the developments of the Benedictines. 
The most striking feature in the reform was the return to manual labour, and especially to field-work. Inspired by Bernard of Clairvaux, the primary builder of the Cistercians, they became the main force of technological diffusion in medieval Europe. By the end of the 12th century, the Cistercian houses numbered 500, and at its height in the 15th century the order claimed to have close to 750 houses. Most of these were built in wilderness areas, and played a major part in bringing such isolated parts of Europe into economic cultivation. A third level of monastic reform was provided by the establishment of the Mendicant orders. Commonly known as friars, mendicants live under a monastic rule with traditional vows of poverty, chastity, and obedience, but they emphasise preaching, missionary activity, and education rather than life in a secluded monastery. Beginning in the early 13th century, the Franciscan order was instituted by the followers of Francis of Assisi, and thereafter the Dominican order was begun by St. Dominic. Investiture Controversy The Investiture Controversy, or Lay Investiture Controversy, was the most significant conflict between secular and religious powers in medieval Europe. It began as a dispute in the 11th century between the Holy Roman Emperor Henry IV and Pope Gregory VII concerning who would appoint bishops (investiture). The end of lay investiture threatened to undercut the power of the Empire and the ambitions of noblemen. Bishoprics being merely lifetime appointments, a king could better control their powers and revenues than those of hereditary noblemen. Even better, he could leave the post vacant and collect the revenues, theoretically in trust for the new bishop, or give a bishopric to pay a helpful noble. The Church wanted to end lay investiture to end this and other abuses, to reform the episcopate and provide better pastoral care. Pope Gregory VII issued the Dictatus Papae, which declared that the pope alone could appoint bishops. 
Henry IV's rejection of the decree led to his excommunication and a ducal revolt. Eventually Henry received absolution after dramatic public penance, though the Great Saxon Revolt and conflict of investiture continued. A similar controversy occurred in England between King Henry I and St. Anselm, Archbishop of Canterbury, over investiture and episcopal vacancy. The English dispute was resolved by the Concordat of London, 1107, where the king renounced his claim to invest bishops but continued to require an oath of fealty. This was a partial model for the Concordat of Worms (Pactum Calixtinum), which resolved the Imperial investiture controversy with a compromise that allowed secular authorities some measure of control but granted the selection of bishops to their cathedral canons. As a symbol of the compromise, both ecclesiastical and lay authorities invested bishops with, respectively, the staff and the ring. Crusades Generally, the Crusades refer to the campaigns in the Holy Land sponsored by the papacy against Muslim forces. There were other crusades against Islamic forces in southern Spain, southern Italy, and Sicily. The Papacy also sponsored numerous Crusades to subjugate and convert the pagan peoples of north-eastern Europe, against its political enemies in Western Europe, and against heretical or schismatic religious minorities within Christendom. The Holy Land had been part of the Roman Empire, and thus the Byzantine Empire, until the Islamic conquests of the 7th and 8th centuries. Thereafter, Christians had generally been permitted to visit the sacred places in the Holy Land until 1071, when the Seljuk Turks closed Christian pilgrimages and assailed the Byzantines, defeating them at the Battle of Manzikert. Emperor Alexius I asked for aid from Pope Urban II against Islamic aggression. He probably expected money from the pope for the hiring of mercenaries. 
Instead, Urban II called upon the knights of Christendom in a speech made at the Council of Clermont on 27 November 1095, combining the idea of pilgrimage to the Holy Land with that of waging a holy war against infidels. The First Crusade captured Antioch in 1098 and then Jerusalem in 1099. The Second Crusade followed in 1147, after Edessa had been taken by Islamic forces in 1144. Jerusalem was held until 1187 and the Third Crusade, famous for the battles between Richard the Lionheart and Saladin. The Fourth Crusade, begun by Innocent III in 1202, intended to retake the Holy Land but was soon subverted by the Venetians. When the crusaders arrived in Constantinople, they sacked the city and other parts of Asia Minor and established the Latin Empire of Constantinople in Greece and Asia Minor. Five further numbered crusades to the Holy Land, culminating in the fall of Acre in 1291, essentially ended the Western presence in the Holy Land. Jerusalem had been held by the crusaders for nearly a century, while other strongholds in the Near East remained in Christian possession much longer. The crusades in the Holy Land ultimately failed to establish permanent Christian kingdoms. Islamic expansion into Europe remained a threat for centuries, culminating in the campaigns of Suleiman the Magnificent in the 16th century. Crusades in Iberia (the Reconquista), southern Italy, and Sicily eventually led to the demise of Islamic power in Europe. The Albigensian Crusade targeted the heretical Cathars of southern France; in combination with the Inquisition set up in its aftermath, it succeeded in exterminating them. The Wendish Crusade succeeded in subjugating and forcibly converting the pagan Slavs of modern eastern Germany. The Livonian Crusade, carried out by the Teutonic Knights and other orders of warrior-monks, similarly conquered and forcibly converted the pagan Balts of Livonia and Old Prussia. 
However, the pagan Grand Duchy of Lithuania successfully resisted the Knights and converted only voluntarily in the 14th century. Medieval Inquisition The Medieval Inquisition was a series of inquisitions (Roman Catholic Church bodies charged with suppressing heresy) from around 1184, including the Episcopal Inquisition (1184–1230s) and later the Papal Inquisition (1230s). It was in response to movements within Europe considered apostate or heretical to Western Catholicism, in particular the Cathars and the Waldensians in southern France and northern Italy. These were the first inquisition movements of many that would follow. The inquisitions in combination with the Albigensian Crusade were fairly successful in ending heresy. Spread of Christianity Early evangelization in Scandinavia was begun by Ansgar, Archbishop of Bremen, "Apostle of the North". Ansgar, a native of Amiens, was sent with a group of monks to Jutland in around 820 at the time of the pro-Christian King Harald Klak. The mission was only partially successful, and Ansgar returned two years later to Germany, after Harald had been driven out of his kingdom. In 829, Ansgar went to Birka on Lake Mälaren, Sweden, with his aide friar Witmar, and a small congregation was formed in 831 which included the king's steward Hergeir. Conversion was slow, however, and most Scandinavian lands were only completely Christianised at the time of rulers such as Saint Canute IV of Denmark and Olaf I of Norway in the years following AD 1000. The Christianisation of the Slavs was initiated by one of Byzantium's most learned churchmen – the patriarch Photios I of Constantinople. The Byzantine Emperor Michael III chose Cyril and Methodius in response to a request from King Rastislav of Moravia, who wanted missionaries that could minister to the Moravians in their own language. The two brothers spoke the local Slavonic vernacular and translated the Bible and many of the prayer books. 
As the translations prepared by them were copied by speakers of other dialects, the hybrid literary language Old Church Slavonic was created, which later evolved into Church Slavonic and is the common liturgical language still used by the Russian Orthodox Church and other Slavic Orthodox Christians. Methodius went on to convert the Serbs. Bulgaria had been a pagan country from its establishment in 681 until 864, when Boris I converted to Christianity. The reasons for that decision were complex; the most important factors were that Bulgaria was situated between two powerful Christian empires, Byzantium and East Francia; Christian doctrine particularly favoured the position of the monarch as God's representative on Earth, while Boris also saw it as a way to overcome the differences between Bulgars and Slavs. Bulgaria was officially recognised as a patriarchate by Constantinople in 927, Serbia in 1346, and Russia in 1589. All of these nations had been converted long before these dates. Late Middle Ages and the early Renaissance (1300–1520) Avignon Papacy and the Western Schism The Avignon Papacy, sometimes referred to as the Babylonian Captivity, was a period from 1309 to 1378 during which seven popes resided in Avignon, in modern-day France. In 1309, Pope Clement V moved to Avignon in southern France. Confusion and political animosity waxed, as the prestige and influence of Rome waned without a resident pontiff. Troubles reached their peak in 1378 when Gregory XI died while visiting Rome. A papal conclave met in Rome and elected Urban VI, an Italian. Urban soon alienated the French cardinals, and they held a second conclave electing Robert of Geneva to succeed Gregory XI, beginning the Western Schism. Criticism of Church corruption John Wycliffe, an English scholar and alleged heretic best known for denouncing the corruptions of the Church, was a precursor of the Protestant Reformation. 
He emphasized the supremacy of the Bible and called for a direct relationship between God and the human person, without interference by priests and bishops. His followers played a role in the English Reformation. Jan Hus, a Czech theologian in Prague, was influenced by Wycliffe and spoke out against the corruptions he saw in the Church. He was a forerunner of the Protestant Reformation, and his legacy has become a powerful symbol of Czech culture in Bohemia. Renaissance and the Church The Renaissance was a period of great cultural change and achievement, marked in Italy by a classical orientation and an increase of wealth through mercantile trade. The city of Rome, the papacy, and the papal states were all affected by the Renaissance. On the one hand, it was a time of great artistic patronage and architectural magnificence, during which the Church commissioned such artists as Michelangelo, Brunelleschi, Bramante, Raphael, Fra Angelico, Donatello, and Leonardo da Vinci. On the other hand, wealthy Italian families often secured episcopal offices, including the papacy, for their own members, some of whom were known for immorality, such as Alexander VI and Sixtus IV. In addition to being the head of the Church, the pope became one of Italy's most important secular rulers, and pontiffs such as Julius II often waged campaigns to protect and expand their temporal domains. Furthermore, the popes, in a spirit of refined competition with other Italian lords, spent lavishly not only on private luxuries but also on public works, repairing or building churches, bridges, and a magnificent system of aqueducts in Rome that still function today. Fall of Constantinople In 1453, Constantinople fell to the Ottoman Empire. The arrival of Eastern Christians fleeing Constantinople, and of the Greek manuscripts they carried with them, was one of the factors that prompted the literary renaissance in the West at about this time. 
The Ottoman government followed Islamic law when dealing with the conquered Christian population. Christians were officially tolerated as people of the Book. As such, the Church's canonical and hierarchical organisation were not significantly disrupted, and its administration continued to function. One of the first things that Mehmet the Conqueror did was to allow the Church to elect a new patriarch, Gennadius Scholarius. However, these rights and privileges, including freedom of worship and religious organisation, were often established in principle but seldom corresponded to reality. Christians were viewed as second-class citizens, and the legal protections they depended upon were subject to the whims of the sultan and the Sublime Porte. The Hagia Sophia and the Parthenon, which had been Christian churches for nearly a millennium, were converted into mosques. Violent persecutions of Christians were common and reached their climax in the Armenian, Assyrian, and Greek genocides. Early modern period (c. 1500–c. 1750) Reformation In the early 16th century, attempts were made by the theologians Martin Luther and Huldrych Zwingli, along with many others, to reform the Church. They considered the root of corruptions to be doctrinal, rather than simply a matter of moral weakness or lack of ecclesiastical discipline, and thus advocated for God's autonomy in redemption, and against voluntaristic notions that salvation could be earned by people. The Reformation is usually considered to have started with the publication of the Ninety-five Theses by Luther in 1517, although there was no schism until the 1521 Diet of Worms. The edicts of the Diet condemned Luther and officially banned citizens of the Holy Roman Empire from defending or propagating his ideas. 
The word Protestant is derived from the Latin protestatio, meaning declaration, which refers to the letter of protestation by Lutheran princes against the decision of the Diet of Speyer in 1529, which reaffirmed the edict of the Diet of Worms ordering the seizure of all property owned by persons guilty of advocating Lutheranism. The term "Protestant" was not originally used by Reformation era leaders; instead, they called themselves "evangelical", emphasising the "return to the true gospel (Greek: euangelion)." Early protest was directed against corruptions such as simony, the holding of multiple church offices by one person at the same time, episcopal vacancies, and the sale of indulgences. The Protestant position also included sola scriptura, sola fide, the priesthood of all believers, Law and Gospel, and the two kingdoms doctrine. The three most important traditions to emerge directly from the Reformation were the Lutheran, Reformed, and Anglican traditions, though the latter group identifies as both "Reformed" and "Catholic", and some subgroups reject the classification as "Protestant". Unlike other reform movements, the English Reformation began under royal influence. Henry VIII considered himself a thoroughly Catholic king, and in 1521 he defended the papacy against Luther in a book he commissioned, entitled The Defence of the Seven Sacraments, for which Pope Leo X awarded him the title Fidei Defensor (Defender of the Faith). However, the king came into conflict with the papacy when he wished to annul his marriage with Catherine of Aragon, for which he needed papal sanction. Catherine, among many other noble relations, was the aunt of Emperor Charles V, the papacy's most significant secular supporter. 
The ensuing dispute eventually led to a break from Rome and the declaration of the King of England as head of the English Church, which saw itself as a Protestant Church navigating a middle way between Lutheranism and Reformed Christianity, but leaning more towards the latter. Consequently, England experienced periods of reform and also Counter-Reformation. Monarchs such as Edward VI, Lady Jane Grey, Mary I, and Elizabeth I, and Archbishops of Canterbury such as Thomas Cranmer and William Laud, pushed the Church of England in different directions over the course of only a few generations. What emerged was the Elizabethan Religious Settlement and a state church that considered itself both "Reformed" and "Catholic" but not "Roman", alongside other unofficial, more radical movements such as the Puritans. In terms of politics, the English Reformation included heresy trials, the exiling of Roman Catholic populations to Spain and other Roman Catholic lands, and censorship and prohibition of books. Radical Reformation The Radical Reformation represented a response to corruption both in the Catholic Church and in the expanding Magisterial Protestant movement led by Martin Luther and many others. Beginning in Germany and Switzerland in the 16th century, the Radical Reformation gave birth to many radical Protestant groups throughout Europe. The term covers radical reformers like Thomas Müntzer and Andreas Karlstadt, the Zwickau prophets, and Anabaptist Christians, most notably the Amish, Mennonites, Hutterites, the Bruderhof Communities, and Schwarzenau Brethren. Counter-Reformation The Counter-Reformation was the response of the Catholic Church to the Protestant Reformation. In terms of meetings and documents, it consisted of the Confutatio Augustana, the Council of Trent, the Roman Catechism, and the Defensio Tridentinæ fidei. 
In terms of politics, the Counter-Reformation included heresy trials, the exiling of Protestant populations from Catholic lands, the seizure of children from their Protestant parents for institutionalized Catholic upbringing, a series of wars, the Index Librorum Prohibitorum (the list of prohibited books), and the Spanish Inquisition. Although Protestant Christians were excommunicated in an attempt to reduce their influence within the Catholic Church, at the same time they were persecuted during the Counter-Reformation, prompting some to live as crypto-Protestants (also termed Nicodemites), against the urging of John Calvin, who wanted them to live their faith openly. Crypto-Protestants were documented as late as the 19th century in Latin America. The Council of Trent (1545–1563), initiated by Pope Paul III, addressed issues of certain ecclesiastical corruptions such as simony, absenteeism, nepotism, the holding of multiple church offices by one person, and other abuses. It also reasserted traditional practices and doctrines of the Church, such as the episcopal structure, clerical celibacy, the seven Sacraments, transubstantiation (the belief that during mass the consecrated bread and wine truly become the body and blood of Christ), the veneration of relics, icons, and saints (especially the Blessed Virgin Mary), the necessity of both faith and good works for salvation, the existence of purgatory, and the issuance (but not the sale) of indulgences. In other words, all Protestant doctrinal objections and changes were uncompromisingly rejected. The Council also fostered an interest in education for parish priests to increase pastoral care. Milan's Archbishop Saint Charles Borromeo set an example by visiting the remotest parishes and instilling high standards. 
Catholic Reformation Simultaneously with the Counter-Reformation, the Catholic Reformation consisted of improvements in art and culture, anti-corruption measures, the founding of the Jesuits, the establishment of seminaries, a reassertion of traditional doctrines, and the emergence of new religious orders aimed at both moral reform and new missionary activity. Also part of this was the development of new yet orthodox forms of spirituality, such as that of the Spanish mystics and the French school of spirituality. The papacy of St. Pius V was known not only for its focus on halting heresy and worldly abuses within the Church, but also for its focus on improving popular piety in a determined effort to stem the appeal of Protestantism. Pius began his pontificate by giving large alms to the poor and to charities and hospitals, and the pontiff was known for consoling the poor and sick as well as supporting missionaries. These activities coincided with a rediscovery of the ancient Christian catacombs in Rome. As Diarmaid MacCulloch states, "Just as these ancient martyrs were revealed once more, Catholics were beginning to be martyred afresh, both in mission fields overseas and in the struggle to win back Protestant northern Europe: the catacombs proved to be an inspiration for many to action and to heroism." Catholic missions were carried to new places beginning with the new Age of Discovery, and the Roman Catholic Church established missions in the Americas. Trial of Galileo The Galileo affair, in which Galileo Galilei came into conflict with the Roman Catholic Church over his support of heliocentrism, is often considered a defining moment in the history of the relationship between religion and science. In 1610, Galileo published his Sidereus Nuncius (Starry Messenger), describing the surprising observations that he had made with the new telescope. 
These and other discoveries exposed major difficulties with the understanding of the heavens that had been held since antiquity, and raised new interest in radical teachings such as the heliocentric theory of Copernicus. In reaction, many scholars maintained that the motion of the earth and immobility of the sun were heretical, as they contradicted some accounts given in the Bible as understood at that time. Galileo's part in the controversies over his theological and philosophical positions culminated in his trial and sentencing in 1633, on a grave suspicion of heresy. Puritans in North America The most famous colonization by Protestants in the New World was that of English Puritans in North America. Unlike the Spanish or French, the English colonists made surprisingly little effort to evangelize the native peoples. The Puritans, or Pilgrims, left England so that they could live in an area with Puritanism established as the exclusive civic religion. Though they had left England because of the suppression of their religious practice, many Puritans had first settled in the Low Countries, but they found the licentiousness there, where the state hesitated to enforce religious practice, unacceptable, and so they set out for the New World in hopes of a Puritan utopia. Late modern period (c. 1750–c. 1945) Christian revivalism Christian revivalism refers to the Calvinist and Wesleyan revival, called the "Great Awakening" in North America, which saw the development of evangelical Congregationalist, Presbyterian, Baptist, and new Methodist churches. Great Awakenings The First Great Awakening was a wave of religious enthusiasm among Protestants in the American colonies c. 1730–1740, emphasising the traditional Reformed virtues of Godly preaching, rudimentary liturgy, and a deep sense of personal guilt and redemption by Christ Jesus. Historian Sydney E. 
Ahlstrom saw it as part of a "great international Protestant upheaval" that also created pietism in Germany, the Evangelical Revival, and Methodism in England. It centred on reviving the spirituality of established congregations and mostly affected Congregational, Presbyterian, Dutch Reformed, German Reformed, Baptist, and Methodist churches, while also spreading within the slave population. The Second Great Awakening (1800–1830s), unlike the first, focused on the unchurched and sought to instill in them a deep sense of personal salvation as experienced in revival meetings. It also sparked the beginnings of groups such as the Mormons, the Restoration Movement, and the Holiness movement. The Third Great Awakening began in 1857 and was most notable for taking the movement throughout the world, especially in English-speaking countries. The final group to emerge from the "great awakenings" in North America was Pentecostalism, which had its roots in the Methodist, Wesleyan, and Holiness movements, and began in 1906 on Azusa Street in Los Angeles. Pentecostalism would later lead to the Charismatic movement. Restorationism Restorationism refers to the belief that a purer form of Christianity should be restored using the early church as a model. In many cases, restorationist groups believed that contemporary Christianity, in all its forms, had deviated from the true, original Christianity, which they then attempted to "reconstruct", often using the Book of Acts as a "guidebook" of sorts. Restorationists do not usually describe themselves as "reforming" a Christian church continuously existing from the time of Jesus, but as restoring the Church that they believe was lost at some point. "Restorationism" is often used to describe the Stone-Campbell Restoration Movement. The term "restorationist" is also used to describe the Jehovah's Witness movement, founded in the late 1870s by Charles Taze Russell. 
The term can also be used to describe the Latter Day Saint movement, including The Church of Jesus Christ of Latter-day Saints (LDS Church), the Community of Christ, and numerous other Latter Day Saints sects. Latter Day Saints, also known as Mormons, believe that Joseph Smith was chosen to restore the original organization established by Jesus, now "in its fullness", rather than to reform the church. Eastern Orthodoxy The Russian Orthodox Church held a privileged position in the Russian Empire, expressed in the motto of the late empire from 1833: Orthodoxy, Autocracy, and Nationality. Nevertheless, the Church reform of Peter I in the early 18th century had placed the Orthodox authorities under the control of the tsar. An ober-procurator appointed by the tsar ran the committee which governed the Church between 1721 and 1918: the Most Holy Synod. The Church became involved in the various campaigns of russification, and was accused of involvement in Russian anti-semitism, despite the lack of an official position on Judaism as such. The Bolsheviks and other Russian revolutionaries saw the Church, like the tsarist state, as an enemy of the people. Criticism of atheism was strictly forbidden and sometimes led to imprisonment. Some actions against Orthodox priests and believers included torture, being sent to prison camps, labour camps or mental hospitals, as well as execution. In the first five years after the Bolshevik revolution, 28 bishops and 1,200 priests were executed. This included people like the Grand Duchess Elizabeth Fyodorovna, who was at this point a monastic. Executed along with her were: Grand Duke Sergei Mikhailovich Romanov; the Princes Ioann Konstantinovich, Konstantin Konstantinovich, Igor Konstantinovich and Vladimir Pavlovich Paley; Grand Duke Sergei's secretary, Fyodor Remez; and Varvara Yakovleva, a sister from the Grand Duchess Elizabeth's convent. 
Trends in Christian theology Liberal Christianity, sometimes called liberal theology, is an umbrella term covering diverse, philosophically informed religious movements and moods within late 18th, 19th and 20th-century Christianity. The word "liberal" in liberal Christianity does not refer to a leftist political agenda or set of beliefs, but rather to the freedom of dialectic process associated with continental philosophy and other philosophical and religious paradigms developed during the Age of Enlightenment. Fundamentalist Christianity is a movement that arose mainly within British and American Protestantism in the late 19th century and early 20th century in reaction to modernism and certain liberal Protestant groups that denied doctrines considered fundamental to Christianity yet still called themselves "Christian." Thus, fundamentalism sought to re-establish tenets that could not be denied without relinquishing a Christian identity, the "fundamentals": inerrancy of the Bible, the principle of sola scriptura, the Virgin Birth of Jesus, the doctrine of substitutionary atonement, the bodily resurrection of Jesus, and the imminent return of Jesus Christ. Under Communism and Nazism Under the state atheism of the Soviet Union and the countries of the Eastern Bloc, Christians of many denominations experienced persecution, with many churches and monasteries being destroyed and clergy being executed. The position of Christians affected by Nazism is highly complex. Pope Pius XI declared, in the encyclical Mit brennender Sorge, that Fascist governments had hidden "pagan intentions" and expressed the irreconcilability of the Catholic position and totalitarian fascist state worship, which placed the nation above God, fundamental human rights, and dignity. His declaration that "Spiritually, [Christians] are all Semites" prompted the Nazis to give him the title "Chief Rabbi of the Christian World." 
Catholic priests were executed in concentration camps alongside Jews; for example, 2,600 Catholic priests were imprisoned in Dachau, and 2,000 of them were executed (cf. Priesterblock). A further 2,700 Polish priests were executed (a quarter of all Polish priests), and 5,350 Polish nuns were either displaced, imprisoned, or executed. Many Catholic laymen and clergy played notable roles in sheltering Jews during the Holocaust, including Pope Pius XII. The head rabbi of Rome became a Catholic in 1945 and, in honour of the actions the pope undertook to save Jewish lives, he took the name Eugenio (the pope's first name). A former Israeli consul in Italy claimed: "The Catholic Church saved more Jewish lives during the war than all the other churches, religious institutions, and rescue organisations put together." The relationship between Nazism and Protestantism, especially the German Lutheran Church, was complex. Though many Protestant church leaders in Germany supported the Nazis' growing anti-Jewish activities, some, such as Dietrich Bonhoeffer, a Lutheran pastor of the Confessing Church (a movement within Protestantism that opposed Nazism), strongly opposed the Third Reich. Bonhoeffer was later found guilty of involvement in the conspiracy to assassinate Hitler and was executed. Contemporary Christianity Second Vatican Council On 11 October 1962, Pope John XXIII opened the Second Vatican Council, the 21st ecumenical council of the Catholic Church. The council was "pastoral" in nature, interpreting dogma in terms of its scriptural roots, revising liturgical practices, and providing guidance for articulating traditional Church teachings in contemporary times. The council is perhaps best known for its instructions that the Mass may be celebrated in the vernacular as well as in Latin. Ecumenism Ecumenism broadly refers to movements between Christian groups to establish a degree of unity through dialogue. 
Ecumenism is derived from Greek (oikoumene), which means "the inhabited world", but more figuratively something like "universal oneness." The movement can be distinguished into Catholic and Protestant movements, with the latter characterised by a redefined ecclesiology of "denominationalism" (which the Catholic Church, among others, rejects). Over the last century, moves have been made to reconcile the schism between the Catholic Church and the Eastern Orthodox churches. Although progress has been made, concerns over papal primacy and the independence of the smaller Orthodox churches have blocked a final resolution of the schism. On 30 November 1894, Pope Leo XIII published Orientalium Dignitas. On 7 December 1965, a Joint Catholic-Orthodox Declaration of Pope Paul VI and the Ecumenical Patriarch Athenagoras I was issued lifting the mutual excommunications of 1054. Some of the most difficult questions in relations with the ancient Eastern Churches concern doctrine (e.g. the Filioque, scholasticism, the functional purposes of asceticism, the essence of God, Hesychasm, the Fourth Crusade, the establishment of the Latin Empire, and Uniatism, to note but a few). Others are practical matters, such as the concrete exercise of the claim to papal primacy and how to ensure that ecclesiastical union would not mean the mere absorption of the smaller Churches by the Latin component of the much larger Catholic Church (the most numerous single religious denomination in the world), with the stifling or abandonment of their own rich theological, liturgical and cultural heritage. With respect to Catholic relations with Protestant communities, certain commissions were established to foster dialogue, and documents have been produced aimed at identifying points of doctrinal unity, such as the Joint Declaration on the Doctrine of Justification produced with the Lutheran World Federation in 1999. 
Ecumenical movements within Protestantism have focused on determining a list of doctrines and practices essential to being Christian and thus extending to all groups which fulfill these basic criteria a (more or less) co-equal status, with perhaps one's own group still retaining a "first among equals" standing. This process involved a redefinition of the idea of "the Church" from traditional theology. This ecclesiology, known as denominationalism, contends that each group (which fulfills the essential criteria of "being Christian") is a sub-group of a greater "Christian Church", itself a purely abstract concept with no direct representation, i.e., no group, or "denomination", claims to be "the Church." This ecclesiology is at variance with other groups that indeed consider themselves to be "the Church." The "essential criteria" generally consist of belief in the Trinity, belief that Jesus Christ is the only way to bring forgiveness and eternal life, and belief that Jesus died and rose again bodily. Pentecostal movement and Charismatic Christianity In reaction to these developments, Christian fundamentalism was a movement to reject the radical influences of philosophical humanism as this was affecting the Christian religion. Especially targeting critical approaches to the interpretation of the Bible, and trying to blockade the inroads made into their churches by atheistic scientific assumptions, fundamentalist Christians began to appear in various Christian denominations as numerous independent movements of resistance to the drift away from historic Christianity. Over time, the Evangelical movement has divided into two main wings, with the label Fundamentalist following one branch, while the term Evangelical has become the preferred banner of the more moderate side. Although both strands of Evangelicalism primarily originated in the English-speaking world, the majority of Evangelicals today live elsewhere in the world. 
World Christianity World Christianity, otherwise known as "global Christianity", has been defined both as a term that attempts to convey the global nature of the Christian religion and as an academic field of study that encompasses analysis of the histories, practices, and discourses of Christianity as a world religion and its various forms as they are found on the six continents. However, the term often focuses on "non-Western Christianity" which "comprises (usually the exotic) instances of Christian faith in 'the global South', in Asia, Africa, and Latin America." It also includes Indigenous or diasporic forms of Christianity in Western Europe and North America.

See also
Christian anarchism
Christianity and Paganism
Christianization
History of Christian theology
History of the Eastern Orthodox Church
History of Oriental Orthodoxy
History of Protestantism
History of the Catholic Church
Mandaeism
Rise of Christianity during the Fall of Rome
Role of the Christian Church in civilization
Timeline of Christian missions
Timeline of Christianity
Timeline of the Roman Catholic Church

References
Sources

Printed sources
Brown, Schuyler. The Origins of Christianity: A Historical Introduction to the New Testament. Oxford University Press, 1993.
Johnson, L. T. The Real Jesus. San Francisco: HarperSanFrancisco, 1996.
Ludemann, Gerd. What Really Happened to Jesus?, trans. J. Bowden. Louisville, Kentucky: Westminster John Knox Press, 1995.

Web sources
E. P. Sanders and Jaroslav Jan Pelikan, "Jesus", Encyclopedia Britannica.

Further reading
Bowden, John. Encyclopedia of Christianity (2005), 1406 pp.
Carrington, Philip. The Early Christian Church (2 vols., 1957).
Holt, Bradley P. Thirsty for God: A Brief History of Christian Spirituality (2nd ed., 2005).
Jacomb-Hood, Anthony. Rediscovering the New Testament Church. CreateSpace, 2014.
Johnson, Paul. A History of Christianity (1976).
Livingstone, E. A., ed. The Concise Oxford Dictionary of the Christian Church (2nd ed., 2006).
MacCulloch, Diarmaid. A History of Christianity: The First Three Thousand Years (2010).
McLeod, Hugh, and Werner Ustorf, eds. The Decline of Christendom in Western Europe, 1750–2000 (2003). 13 essays by scholars.
McGuckin, John Anthony. The Orthodox Church: An Introduction to its History, Doctrine, and Spiritual Culture (2010), 480 pp.
McGuckin, John Anthony. The Encyclopedia of Eastern Orthodox Christianity (2011), 872 pp.
Moore, Edward Caldwell. The Spread of Christianity in the Modern World. Chicago: University of Chicago Press, 1919.
Muraresku, Brian C. The Immortality Key: The Secret History of the Religion with No Name. Macmillan USA, 2020. ISBN 978-1250207142.
Stark, Rodney. The Rise of Christianity (1996).
Tomkins, Stephen. A Short History of Christianity (2006).

External links
The following links give an overview of the history of Christianity:
History of Christianity Reading Room: extensive online resources for the study of global church history (Tyndale Seminary).
Dictionary of the History of Ideas: Christianity in History.
Dictionary of the History of Ideas: Church as an Institution.
Sketches of Church History From AD 33 to the Reformation, by Rev. J. C. Robertson, M.A., Canon of Canterbury.
A History of Christianity in 15 Objects, an online series in association with the Faculty of Theology, University of Oxford, from September 2011.
The following links provide quantitative data related to Christianity and other major religions, including rates of adherence at different points in time:
American Religion Data Archive
Early Stages of the Establishment of Christianity
Theandros, a journal of Orthodox theology and philosophy, containing articles on early Christianity and patristic studies.
Historical Christianity, a timeline with references to the descendants of the early church.
Reformation Timeline, a short timeline of the Protestant Reformation.
Fourth-Century Christianity
14121
https://en.wikipedia.org/wiki/Hertz
Hertz
The hertz (symbol: Hz) is the unit of frequency in the International System of Units (SI) and is defined as one cycle per second. The hertz is an SI derived unit whose expression in terms of SI base units is s−1, meaning that one hertz is the reciprocal of one second. It is named after Heinrich Rudolf Hertz (1857–1894), the first person to provide conclusive proof of the existence of electromagnetic waves. Hertz are commonly expressed in multiples: kilohertz (kHz), megahertz (MHz), gigahertz (GHz), and terahertz (THz). Some of the unit's most common uses are in the description of sine waves and musical tones, particularly those used in radio- and audio-related applications. It is also used to describe the clock speeds at which computers and other electronics are driven. The units are sometimes also used as a representation of the energy of a photon, via the Planck relation E = hν, where E is the photon's energy, ν is its frequency, and the proportionality constant h is Planck's constant. Definition The hertz is defined as one cycle per second. The International Committee for Weights and Measures defined the second as "the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom" and then adds: "It follows that the hyperfine splitting in the ground state of the caesium 133 atom is exactly 9 192 631 770 hertz, ν(hfs Cs) = 9 192 631 770 Hz." The dimension of the unit hertz is 1/time (1/T). Expressed in base SI units, the unit is 1/second (1/s). Problems can arise because the unit of angular measure (radian) is sometimes omitted in SI. In English, "hertz" is also used as the plural form. As an SI unit, Hz can be prefixed; commonly used multiples are kHz (kilohertz), MHz (megahertz), GHz (gigahertz) and THz (terahertz). One hertz simply means "one cycle per second" (typically that which is being counted is a complete cycle); 100 Hz means "one hundred cycles per second", and so on.
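As an illustration of the Planck relation E = hν mentioned above, here is a minimal sketch; the constant is the exact SI value of Planck's constant, and the 1 GHz example frequency is an illustrative assumption, not from the text.

```python
# Minimal sketch of the Planck relation E = h * nu.
# H_PLANCK is the exact SI value of Planck's constant in joule-seconds.
H_PLANCK = 6.62607015e-34  # J*s, exact by the 2019 SI definition

def photon_energy(frequency_hz: float) -> float:
    """Energy in joules of a photon whose frequency is given in hertz."""
    return H_PLANCK * frequency_hz

# Example (assumed frequency): a 1 GHz radio photon carries about 6.6e-25 J.
print(photon_energy(1e9))
```

The same function applies across the spectrum: higher-frequency radiation such as visible light or gamma rays simply yields proportionally larger energies.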
The unit may be applied to any periodic event—for example, a clock might be said to tick at , or a human heart might be said to beat at . The occurrence rate of aperiodic or stochastic events is expressed in reciprocal second or inverse second (1/s or s−1) in general or, in the specific case of radioactive decay, in becquerels. Whereas 1 Hz is one cycle per second, 1 Bq is one aperiodic radionuclide event per second. Even though angular velocity, angular frequency and the unit hertz all have the dimension 1/T, angular velocity and angular frequency are not expressed in hertz, but rather in an appropriate angular unit such as the radian per second. Thus a disc rotating at 60 revolutions per minute (rpm) is said to be rotating at either 2π rad/s or 1 Hz, where the former measures the angular velocity and the latter reflects the number of complete revolutions per second. The conversion between a frequency f measured in hertz and an angular velocity ω measured in radians per second is ω = 2πf and f = ω/(2π). History The hertz is named after the German physicist Heinrich Hertz (1857–1894), who made important scientific contributions to the study of electromagnetism. The name was established by the International Electrotechnical Commission (IEC) in 1935. It was adopted by the General Conference on Weights and Measures (CGPM) (Conférence générale des poids et mesures) in 1960, replacing the previous name for the unit, "cycles per second" (cps), along with its related multiples, primarily "kilocycles per second" (kc/s) and "megacycles per second" (Mc/s), and occasionally "kilomegacycles per second" (kMc/s). The term "cycles per second" was largely replaced by "hertz" by the 1970s. Sometimes the adjectival form "per second" was omitted, so that "megacycles" (Mc) was used as an abbreviation of "megacycles per second" (that is, megahertz (MHz)). Applications Vibration Sound is a traveling longitudinal wave which is an oscillation of pressure. Humans perceive frequency of sound waves as pitch.
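The rpm/hertz/radian-per-second relationships discussed above can be sketched as follows; the function names are illustrative, and the disc example is taken from the text.

```python
import math

# Sketch of the conversions: a frequency f in hertz counts complete cycles
# per second, while an angular velocity omega in rad/s satisfies omega = 2*pi*f.
def rpm_to_hz(rpm: float) -> float:
    """Revolutions per minute -> complete revolutions per second (Hz)."""
    return rpm / 60.0

def hz_to_rad_per_s(f_hz: float) -> float:
    """Frequency in hertz -> angular velocity in radians per second."""
    return 2.0 * math.pi * f_hz

def rad_per_s_to_hz(omega: float) -> float:
    """Angular velocity in radians per second -> frequency in hertz."""
    return omega / (2.0 * math.pi)

# The disc example: 60 rpm is 1 Hz, i.e. 2*pi (about 6.283) rad/s.
f = rpm_to_hz(60)
omega = hz_to_rad_per_s(f)
```

The two round-trip functions make the distinction concrete: both quantities share the dimension 1/T, but only f is expressed in hertz.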
Each musical note corresponds to a particular frequency which can be measured in hertz. An infant's ear is able to perceive frequencies ranging from to ; the average adult human can hear sounds between and . The range of ultrasound, infrasound and other physical vibrations such as molecular and atomic vibrations extends from a few femtohertz into the terahertz range and beyond. Electromagnetic radiation Electromagnetic radiation is often described by its frequency—the number of oscillations of the perpendicular electric and magnetic fields per second—expressed in hertz. Radio frequency radiation is usually measured in kilohertz (kHz), megahertz (MHz), or gigahertz (GHz). Light is electromagnetic radiation that is even higher in frequency, and has frequencies in the range of tens (infrared) to thousands (ultraviolet) of terahertz. Electromagnetic radiation with frequencies in the low terahertz range (intermediate between those of the highest normally usable radio frequencies and long-wave infrared light) is often called terahertz radiation. Even higher frequencies exist, such as that of gamma rays, which can be measured in exahertz (EHz). (For historical reasons, the frequencies of light and higher frequency electromagnetic radiation are more commonly specified in terms of their wavelengths or photon energies: for a more detailed treatment of this and the above frequency ranges, see electromagnetic spectrum.) Computers In computers, most central processing units (CPU) are labeled in terms of their clock rate expressed in megahertz () or gigahertz (). This specification refers to the frequency of the CPU's master clock signal. This signal is a square wave, which is an electrical voltage that switches between low and high logic values at regular intervals. 
As the hertz has become the primary unit of measurement accepted by the general populace to determine the performance of a CPU, many experts have criticized this approach, which they claim is an easily manipulable benchmark. Some processors use multiple clock periods to perform a single operation, while others can perform multiple operations in a single cycle. For personal computers, CPU clock speeds have ranged from approximately in the late 1970s (Atari, Commodore, Apple computers) to up to in IBM Power microprocessors. Various computer buses, such as the front-side bus connecting the CPU and northbridge, also operate at various frequencies in the megahertz range. Higher frequencies than the International System of Units provides prefixes for are believed to occur naturally in the frequencies of the quantum-mechanical vibrations of high-energy, or, equivalently, massive particles, although these are not directly observable and must be inferred from their interactions with other phenomena. By convention, these are typically not expressed in hertz, but in terms of the equivalent quantum energy, which is proportional to the frequency by the factor of Planck's constant. Unicode The CJK Compatibility block in Unicode contains characters for common SI units for frequency. These are intended for compatibility with East Asian character encodings, and not for use in new documents (which would be expected to use Latin letters, e.g. "MHz"). 
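The CJK Compatibility characters mentioned above can be listed with the standard library; the code-point range U+3390..U+3394 (Hz through THz) is assumed here, and is worth verifying against the Unicode charts if it matters.

```python
import unicodedata

# Sketch: enumerate the squared frequency-unit characters in Unicode's
# CJK Compatibility block (assumed range U+3390..U+3394: Hz, kHz, MHz, GHz, THz).
for codepoint in range(0x3390, 0x3395):
    char = chr(codepoint)
    print(f"U+{codepoint:04X} {char} {unicodedata.name(char)}")
```

As the text notes, these exist for round-trip compatibility with East Asian encodings; new documents should spell the units with Latin letters instead.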
See also Alternating current Bandwidth (signal processing) Electronic tuner FLOPS Frequency changer Normalized frequency (unit) Orders of magnitude (frequency) Periodic function Radian per second Rate Sampling rate Notes and references External links SI Brochure: Unit of time (second) National Research Council of Canada: Cesium fountain clock National Research Council of Canada: Optical frequency standard based on a single trapped ion National Research Council of Canada: Optical frequency comb National Physical Laboratory: Time and frequency Optical atomic clocks Online Tone Generator SI derived units Units of frequency Heinrich Hertz
14123
https://en.wikipedia.org/wiki/Heroic%20couplet
Heroic couplet
A heroic couplet is a traditional form for English poetry, commonly used in epic and narrative poetry, and consisting of a rhyming pair of lines in iambic pentameter. Use of the heroic couplet was pioneered by Geoffrey Chaucer in the Legend of Good Women and the Canterbury Tales, and generally considered to have been perfected by John Dryden and Alexander Pope in the Restoration Age and early 18th century respectively. Example A frequently-cited example illustrating the use of heroic couplets is this passage from Cooper's Hill by John Denham, part of his description of the Thames: History The term "heroic couplet" is sometimes reserved for couplets that are largely closed and self-contained, as opposed to the enjambed couplets of poets like John Donne. The heroic couplet is often identified with the English Baroque works of John Dryden and Alexander Pope, who used the form for their translations of the epics of Virgil and Homer, respectively. Major poems in the closed couplet, apart from the works of Dryden and Pope, are Samuel Johnson's The Vanity of Human Wishes, Oliver Goldsmith's The Deserted Village, and John Keats's Lamia. The form was immensely popular in the 18th century. The looser type of couplet, with occasional enjambment, was one of the standard verse forms in medieval narrative poetry, largely because of the influence of the Canterbury Tales. Variations English heroic couplets, especially in Dryden and his followers, are sometimes varied by the use of the occasional alexandrine, or hexameter line, and triplet. Often these two variations are used together to heighten a climax. The breaking of the regular pattern of rhyming pentameter pairs brings about a sense of poetic closure. Here are two examples from Book IV of Dryden's translation of the Aeneid. Alexandrine Alexandrine and Triplet Modern use Twentieth-century authors have occasionally made use of the heroic couplet, often as an allusion to the works of poets of previous centuries. 
An example of this is Vladimir Nabokov's novel Pale Fire, the second section of which is a 999-line, 4-canto poem largely written in loose heroic couplets with frequent enjambment. Here is an example from the first canto: The use of heroic couplets in translations of Greco-Roman epics has also inspired translations of non-Western works into English. In 2021, Vietnamese translator Nguyen Binh published a translation of the Vietnamese epic poem Tale of Kiều, in which the lục bát couplets of the original were rendered into heroic couplets. Binh named John Dryden and Alexander Pope as major influences on their work, which also mimicked the spelling of Dryden and Pope's translations to evoke the medieval air of the Vietnamese original. An example of the heroic couplet translation can be found below: See also Metre (poetry) Iambic pentameter Foot (prosody) Heroic verse References Poetic form
14127
https://en.wikipedia.org/wiki/H%C3%B6%C3%B0r
Höðr
Höðr (often anglicized as Hod, Hoder, or Hodur) is a god in Norse mythology. The blind son of Odin and Frigg, he is tricked and guided by Loki into shooting a mistletoe arrow which was to slay the otherwise invulnerable Baldr. According to the Prose Edda and the Poetic Edda, the goddess Frigg, Baldr's mother, made everything in existence swear never to harm Baldr, except for the mistletoe, which she found too unimportant to ask (alternatively, which she found too young to demand an oath from). The gods amused themselves by trying weapons on Baldr and seeing them fail to do any harm. Loki, the mischief-maker, upon finding out about Baldr's one weakness, made a spear from mistletoe, and helped Höðr shoot it at Baldr. In reaction to this, Odin and the giantess Rindr gave birth to Váli, who grew to adulthood within a day and slew Höðr. The Danish historian Saxo Grammaticus recorded an alternative version of this myth in his Gesta Danorum. In this version, the mortal hero Høtherus and the demi-god Balderus compete for the hand of Nanna. Ultimately, Høtherus slays Balderus. Name Andy Orchard, Rawlinson and Bosworth Professor of Anglo-Saxon, argues that the name Hǫðr means 'warrior' and is comparable with the Old English heaðu-deór ('brave, stout in war'). Like the Old Norse noun hǫð ('war, slaughter'), it stems from Proto-Germanic *haþuz ('battle'; compare with Old English heaðo-, Old High German hadu-, Old Saxon hathu-). Yet these etymological exercises do not correspond completely with the contexts and meanings of the word as it is used in Norse literature, according to the Old Norse Dictionary of the University of Copenhagen and Málið, a digital resource for information on the Icelandic language operated by The Árni Magnússon Institute for Icelandic Studies.
Both resources refer to Íslensk orðsifjabók, the Icelandic etymological dictionary, which says that in addition to referring to the god, Höðr is the name of a legendary king of Hadeland in Norway, and may also denote 'eagle'. Icelandic etymologists relate Hǫðr to Hauðr, denoting 'heath' or 'meadow', as well as to Hoð, Höð meaning 'battle'. Hodd means 'treasure house', 'hiding place'. The Prose Edda In the Gylfaginning part of Snorri Sturluson's Prose Edda Höðr is introduced in an ominous way. Höðr is not mentioned again until the prelude to Baldr's death is described. All things except the mistletoe (believed to be harmless) have sworn an oath not to harm Baldr, so the Æsir throw missiles at him for sport. The Gylfaginning does not say what happens to Höðr after this. In fact it specifically states that Baldr cannot be avenged, at least not immediately. It does seem, however, that Höðr ends up in Hel one way or another for the last mention of him in Gylfaginning is in the description of the post-Ragnarök world. Snorri's source of this knowledge is clearly Völuspá as quoted below. In the Skáldskaparmál section of the Prose Edda several kennings for Höðr are related. None of those kennings, however, are actually found in surviving skaldic poetry. Neither are Snorri's kennings for Váli, which are also of interest in this context. It is clear from this that Snorri was familiar with the role of Váli as Höðr's slayer, even though he does not relate that myth in the Gylfaginning prose. Some scholars have speculated that he found it distasteful, since Höðr is essentially innocent in his version of the story. The Poetic Edda Höðr is referred to several times in the Poetic Edda, always in the context of Baldr's death. The following strophes are from Völuspá. This account seems to fit well with the information in the Prose Edda, but here the role of Baldr's avenging brother is emphasized.
Baldr and Höðr are also mentioned in Völuspá's description of the world after Ragnarök. The poem Vafþrúðnismál informs us that the gods who survive Ragnarök are Viðarr, Váli, Móði and Magni with no mention of Höðr and Baldr. The myth of Baldr's death is also referred to in another Eddic poem, Baldrs draumar. Höðr is not mentioned again by name in the Eddas. He is, however, referred to in Völuspá in skamma. Skaldic poetry The name of Höðr occurs several times in skaldic poetry as a part of warrior-kennings. Thus Höðr brynju, "Höðr of byrnie", is a warrior and so is Höðr víga, "Höðr of battle". Some scholars have found the fact that the poets should want to compare warriors with Höðr to be incongruous with Snorri's description of him as a blind god, unable to harm anyone without assistance. It is possible that this indicates that some of the poets were familiar with other myths about Höðr than the one related in Gylfaginning - perhaps some where Höðr has a more active role. On the other hand, the names of many gods occur in kennings and the poets might not have been particular in using any god name as a part of a kenning. Gesta Danorum In Gesta Danorum, Hotherus is a human hero of the Danish and Swedish royal lines. He is gifted in swimming, archery, fighting and music and Nanna, daughter of King Gevarus falls in love with him. But at the same time Balderus, son of Othinus, has caught sight of Nanna bathing and fallen violently in love with her. He resolves to slay Hotherus, his rival. Out hunting, Hotherus is led astray by a mist and meets wood-maidens who control the fortunes of war. They warn him that Balderus has designs on Nanna but also tell him that he shouldn't attack him in battle since he is a demigod. Hotherus goes to consult with King Gevarus and asks him for his daughter. The king replies that he would gladly favour him but that Balderus has already made a like request and he does not want to incur his wrath.
Gevarus tells Hotherus that Balderus is invincible but that he knows of one weapon which can defeat him, a sword kept by Mimingus, the satyr of the woods. Mimingus also has another magical artifact, a bracelet that increases the wealth of its owner. Riding through a region of extraordinary cold in a carriage drawn by reindeer, Hotherus captures the satyr with a clever ruse and forces him to yield his artifacts. Hearing about Hotherus's artifacts, Gelderus, king of Saxony, equips a fleet to attack him. Gevarus warns Hotherus of this and tells him where to meet Gelderus in battle. When the battle is joined, Hotherus and his men save their missiles while defending themselves against those of the enemy with a testudo formation. With his missiles exhausted, Gelderus is forced to sue for peace. He is treated mercifully by Hotherus and becomes his ally. Hotherus then gains another ally with his eloquent oratory by helping King Helgo of Hålogaland win a bride. Meanwhile, Balderus enters the country of king Gevarus armed and sues for Nanna. Gevarus tells him to learn Nanna's own mind. Balderus addresses her with cajoling words but is refused. Nanna tells him that because of the great difference in their nature and stature, since he is a demigod, they are not suitable for marriage. As news of Balderus's efforts reaches Hotherus, he and his allies resolve to attack Balderus. A great naval battle ensues where the gods fight on the side of Balderus. Thoro in particular shatters all opposition with his mighty club. When the battle seems lost, Hotherus manages to hew Thoro's club off at the haft and the gods are forced to retreat. Gelderus perishes in the battle and Hotherus arranges a funeral pyre of vessels for him. After this battle Hotherus finally marries Nanna. Balderus is not completely defeated and shortly afterwards returns to defeat Hotherus in the field. But Balderus's victory is without fruit for he is still without Nanna. 
Lovesick, he is harassed by phantoms in Nanna's likeness and his health deteriorates so that he cannot walk but has himself drawn around in a cart. After a while Hotherus and Balderus have their third battle and again Hotherus is forced to retreat. Weary of life because of his misfortunes, he plans to retire and wanders into the wilderness. In a cave he comes upon the same maidens he had met at the start of his career. Now they tell him that he can defeat Balderus if he gets a taste of some extraordinary food which had been devised to increase the strength of Balderus. Encouraged by this, Hotherus returns from exile and once again meets Balderus in the field. After a day of inconclusive fighting, he goes out during the night to spy on the enemy. He finds where Balderus's magical food is prepared and plays the lyre for the maidens preparing it. While they don't want to give him the food, they bestow on him a belt and a girdle which secure victory. Heading back to his camp, Hotherus meets Balderus and plunges his sword into his side. After three days, Balderus dies from his wound. Many years later, Bous, the son of Othinus and Rinda, avenges his brother by killing Hotherus in a duel. Chronicon Lethrense and Annales Lundenses There are also two lesser-known Danish-Latin chronicles, the Chronicon Lethrense and the Annales Lundenses, of which the latter is included in the former. These two sources provide a second euhemerized account of Höðr's slaying of Balder. The account relates that Hother was the king of the Saxons, son of Hothbrod, the daughter of Hadding. Hother first slew Othen's (i.e., Odin's) son Balder in battle and then chased Othen and Thor. Finally, Othen's son Both killed Hother. Hother, Balder, Othen, and Thor were incorrectly considered to be gods.
Rydberg's theories According to the Swedish mythologist and romantic poet Viktor Rydberg, the story of Baldr's death was taken from Húsdrápa, a poem composed by Ulfr Uggason around 990 AD at a feast thrown by the Icelandic Chief Óláfr Höskuldsson to celebrate the finished construction of his new home, Hjarðarholt, the walls of which were filled with symbolic representations of the Baldr myth among others. Rydberg suggested that Höðr was depicted with eyes closed and Loki guiding his aim to indicate that Loki was the true cause of Baldr's death and Höðr was only his "blind tool." Rydberg theorized that the author of the Gylfaginning then mistook the description of the symbolic artwork in the Húsdrápa as the actual tale of Baldr's death. Notes References Sources Bellows, Henry Adams (trans.) (1936). The Poetic Edda. Princeton: Princeton University Press. Available online Brodeur, Arthur Gilchrist (transl.) (1916). The Prose Edda by Snorri Sturluson. New York: The American-Scandinavian Foundation. Available online in parallel text Dronke, Ursula (ed. and trans.) (1997) The Poetic Edda: Mythological Poems. Oxford: Oxford University Press. . Eysteinn Björnsson (2001). Lexicon of Kennings : The Domain of Battle. Published online: https://web.archive.org/web/20090328200122/http://www3.hi.is/~eybjorn/ugm/kennings/kennings.html Eysteinn Björnsson (ed.). Snorra-Edda: Formáli & Gylfaginning : Textar fjögurra meginhandrita. 2005. Published online: https://web.archive.org/web/20080611212105/http://www.hi.is/~eybjorn/gg/ Eysteinn Björnsson (ed.). Völuspá. Published online: https://web.archive.org/web/20090413124631/http://www3.hi.is/~eybjorn/ugm/vsp3.html Guðni Jónsson (ed.) (1949). Eddukvæði : Sæmundar Edda. Reykjavík: Íslendingasagnaútgáfan. Available online Thorpe, Benjamin (transl.) (1866). Edda Sæmundar Hinns Froða : The Edda Of Sæmund The Learned''. (2 vols.) London: Trübner & Co. 
Available online at Google Books External links MyNDIR (My Norse Digital Image Repository) Illustrations of Höðr from manuscripts and early print books. Clicking on the thumbnail will give you the full image and information concerning it. Æsir Fictional blind characters Sons of Odin Killed deities Norse gods
14128
https://en.wikipedia.org/wiki/Herat
Herat
Herāt is an oasis city and the third-largest city of Afghanistan. In 2020, it had an estimated population of 574,276, and serves as the capital of Herat Province, situated south of the Paropamisus Mountains (Selseleh-ye Safēd Kōh) in the fertile valley of the Hari River in the western part of the country. Herat was an ancient civilization on the Silk Road between the Middle East and Central and South Asia. People in Herat usually speak Pashto and also know Dari. Herat dates back to Avestan times and was traditionally known for its wine. The city has a number of historic sites, including the Herat Citadel and the Musalla Complex. During the Middle Ages Herat became one of the important cities of Khorasan and was known as the Pearl of Khorasan. After the conquest of Tamerlane, the city became an important center of intellectual and artistic life in the Islamic world. Under the rule of Shah Rukh the city served as the focal point of the Timurid Renaissance, whose glory matched Florence of the Italian Renaissance as the center of a cultural rebirth. After the fall of the Timurid Empire, Herat was governed by various Afghan rulers from the early 18th century. In 1716, the Abdali Afghans inhabiting the city revolted and formed their own Sultanate, the Sadozai Sultanate of Herat. They were conquered by the Afsharids in 1732. After Nader Shah's death and Ahmad Shah Durrani's rise to power in 1747, Herat became part of Afghanistan. It became an independent city-state in the first half of the 19th century, facing several Iranian invasions until being incorporated into Afghanistan in 1863. The roads from Herat to Iran (through the border town of Islam Qala) and Turkmenistan (through the border town of Torghundi) are still strategically important. As the gateway to Iran, it collects a high amount of customs revenue for Afghanistan. It also has an international airport.
Following the 2001 war the city had been relatively safe from Taliban insurgent attacks. In 2021, it was announced that Herat would be listed as a UNESCO World Heritage Site. On 12 August 2021, the city was seized by Taliban fighters as part of the Taliban's summer offensive. History Herat is first recorded in ancient times, but its precise date of foundation is unknown. Under the Persian Achaemenid Empire (550–330 BC), the surrounding district was known by the Old Persian name of Haraiva (𐏃𐎼𐎡𐎺), and in classical sources, the region was correspondingly known as Areia (Aria). In the Zoroastrian collection of Avesta, the district is referred as Haroiva. The name of the district and its principal town is a derivative from that of the local river, the Herey River (from Old Iranian Harayu, meaning "with velocity"), which goes through the district and ends south of Herat. Herey is mentioned in Sanskrit as a yellow or golden color equivalent to Persian "Zard" meaning Gold (yellow). The naming of a region and its principal town after the main river is a common feature in this part of the world— compare the adjoining districts/rivers/towns of Arachosia and Bactria. The district Aria of the Achaemenid Empire is mentioned in the provincial lists that are included in various royal inscriptions, for instance, in the Behistun inscription of Darius I (ca. 520 BC). Representatives from the district are depicted in reliefs, e.g., at the royal Achaemenid tombs of Naqsh-e Rustam and Persepolis. They are wearing Scythian-style dress (with a tunic and trousers tucked into high boots) and a twisted Bashlyk that covers their head, chin and neck. Hamdallah Mustawfi, composer of the 14th-century work The Geographical Part of the Nuzhat-al-Qulub writes that: Herodotus described Herat as the bread-basket of Central Asia. At the time of Alexander the Great in 330 BC, Aria was obviously an important district. 
It was administered by a satrap called Satibarzanes, who was one of the three main Persian officials in the East of the Empire, together with the satrap Bessus of Bactria and Barsaentes of Arachosia. In late 330 BC, Alexander captured the Arian capital that was called Artacoana. The town was rebuilt and the citadel was constructed. Afghanistan became part of the Seleucid Empire. However, most sources suggest that Herat was predominantly Zoroastrian. It became part of the Parthian Empire in 167 BC. In the Sasanian period (226-652), 𐭧𐭥𐭩𐭥 Harēv is listed in an inscription on the Ka'ba-i Zartosht at Naqsh-e Rustam; and Hariy is mentioned in the Pahlavi catalogue of the provincial capitals of the empire. In around 430, the town is also listed as having a Christian community, with a Nestorian bishop. In the last two centuries of Sasanian rule, Aria (Herat) had great strategic importance in the endless wars between the Sasanians, the Chionites and the Hephthalites who had been settled in the northern section of Afghanistan since the late 4th century. Islamization At the time of the Arab invasion in the middle of the 7th century, the Sasanian central power seemed already largely nominal in the province in contrast with the role of the Hephthalite tribal lords, who were settled in the Herat region and in the neighboring districts, mainly in pastoral Bādghis and in Qohestān. It must be underlined, however, that Herat remained one of the three Sasanian mint centers in the east, the other two being Balkh and Marv. The Hephthalites from Herat and some unidentified Turks opposed the Arab forces in a battle of Qohestān in 651-52 AD, trying to block their advance on Nishāpur, but they were defeated. When the Arab armies appeared in Khorāsān in the 650s AD, Herāt was counted among the twelve capital towns of the Sasanian Empire.
The Arab army under the general command of Ahnaf ibn Qais in its conquest of Khorāsān in 652 seems to have avoided Herāt, but it can be assumed that the city eventually submitted to the Arabs, since shortly afterward an Arab governor is mentioned there. A treaty was drawn in which the regions of Bādghis and Bushanj were included. As did many other places in Khorāsān, Herāt rebelled and had to be re-conquered several times. Another power that was active in the area in the 650s was Tang dynasty China which had embarked on a campaign that culminated in the Conquest of the Western Turks. By 659–661, the Tang claimed a tenuous suzerainty over Herat, the westernmost point of Chinese power in its long history. This hold however would be ephemeral with local Turkish tribes rising in rebellion in 665 and driving out the Tang. In 702 AD Yazid ibn al-Muhallab defeated certain Arab rebels, followers of Ibn al-Ash'ath, and forced them out of Herat. The city was the scene of conflicts between different groups of Muslims and Arab tribes in the disorders leading to the establishment of the Abbasid Caliphate. Herat was also a center of the followers of Ustadh Sis. In 870 AD, Yaqub ibn Layth Saffari, a local ruler of the Saffarid dynasty conquered Herat and the rest of the nearby regions in the name of Islam. “Pearl of Khorasan” The region of Herāt was under the rule of King Nuh III, the seventh of the Samanid line—at the time of Sebük Tigin and his older son, Mahmud of Ghazni. The governor of Herāt was a noble by the name of Faik, who was appointed by Nuh III. It is said that Faik was a powerful, but insubordinate governor of Nuh III, and had been punished by Nuh III. Faik made overtures to Bogra Khan and Ughar Khan of Khorasan. Bogra Khan answered Faik's call, came to Herāt, and became its ruler. The Samanids fled, betrayed at the hands of Faik to whom the defense of Herāt had been entrusted by Nuh III. In 994, Nuh III invited Alptegin to come to his aid. 
Alptegin, along with Mahmud of Ghazni, defeated Faik and annexed Herāt, Nishapur and Tous. Herat was a great trading center, strategically located on the trade routes from the Mediterranean to India and China. The city was noted for its textiles during the Abbasid Caliphate, according to many references by geographers. Herāt also had many learned sons, such as Ansārī. The city is described by Estakhri and Ibn Hawqal in the 10th century as a prosperous town surrounded by strong walls with plenty of water sources, extensive suburbs, an inner citadel, a congregational mosque, and four gates, each opening onto a thriving market place. The government building was outside the city, at a distance of about a mile, in a place called Khorāsānābād. A church was still visible in the countryside northeast of the town on the road to Balkh, and farther away on a hilltop stood a flourishing fire temple, called Sereshk, or Arshak according to Mustawfi. Herat was part of the Taherid dominion in Khorāsān until the rise of the Saffarids in Sistān under Ya'qub-i Laith in 861, who, in 862, started launching raids on Herat before besieging and capturing it on 16 August 867, and again in 872. The Saffarids succeeded in expelling the Taherids from Khorasan in 873. The Sāmānid dynasty was established in Transoxiana by three brothers, Nuh, Yahyā, and Ahmad. Ahmad Sāmāni opened the way for the Samanid dynasty to conquer Khorāsān, including Herāt, which they were to rule for one century. The centralized Samanid administration served as a model for later dynasties. Samanid power was destroyed in 999 by the Qarakhanids, who were advancing on Transoxiana from the northeast, and by the Ghaznavids, former Samanid retainers, attacking from the southeast. Sultan Maḥmud of Ghazni officially took control of Khorāsān in 998. Herat was one of the six Ghaznavid mints in the region. In 1040, Herat was captured by the Seljuk Empire.
During this change of power in Herat, there was supposedly a power vacuum, which was filled by Abdullah Awn, who established a city-state and made an alliance with Mahmud of Ghazni. In 1175, the city was captured by the Ghurids of Ghor, and it then came under the Khwarazmian Empire in 1214. According to the account of Mustawfi, Herat flourished especially under the Ghurid dynasty in the 12th century. Mustawfi reported that there were "359 colleges in Herat, 12,000 shops all fully occupied, 6,000 bath-houses; besides caravanserais and mills, also a darwish convent and a fire temple". There were about 444,000 houses occupied by a settled population. The men were described as "warlike and carry arms", and they were Sunni Muslims. The great mosque of Herāt was built by Ghiyasuddin Ghori in 1201. In this period Herāt became an important center for the production of metal goods, especially bronze, often decorated with elaborate inlays in precious metals. Herat was invaded and destroyed by Genghis Khan's Mongol army in 1221. The city was destroyed a second time and remained in ruins from 1222 to about 1236. In 1244 a local prince, Shams al-Din Kart, was named ruler of Herāt by the Mongol governor of Khorāsān, and in 1255 he was confirmed in his rule by Hulagu, the founder of the Il-Khan dynasty. Shamsuddin Kart founded a new dynasty, and his successors, especially Fakhruddin Kart and Ghiyasuddin Kart, built many mosques and other buildings. The members of this dynasty were great patrons of literature and the arts. By this time Herāt had become known as the pearl of Khorasan. Timur took Herat in 1380 and brought the Kartid dynasty to an end a few years later. The city reached its greatest glory under the Timurid princes, especially Sultan Husayn Bayqara, who ruled Herat from 1469 until May 4, 1506. His chief minister, the poet and author in Persian and Turkish Mir Ali-Shir Nava'i, was a great builder and patron of the arts.
Under the Timurids, Herat assumed the role of the main capital of an empire that extended in the west as far as central Persia. As the capital of the Timurid empire, it boasted many fine religious buildings and was famous for its sumptuous court life, its musical performances, and its tradition of miniature painting. On the whole, the period was one of relative stability, prosperity, and development of economic and cultural activities. It began with the nomination of Shahrokh, the youngest son of Timur, as governor of Herat in 1397. The reign of Shahrokh in Herat was marked by intense royal patronage, building activities, and the promotion of manufacturing and trade, especially through the restoration and enlargement of Herat's bāzār. The present Musallah Complex and many buildings such as the madrasa of Gawhar Shad, the Ali Shir mahāl, and many gardens date from this time. The village of Gazar Gah, over two km northeast of Herat, contained a shrine that was enlarged and embellished under the Timurids. The tomb of the poet and mystic Khwājah Abdullāh Ansārī (d. 1088) was first rebuilt by Shahrokh about 1425, and other famous men were buried in the shrine area. Herat was briefly captured by the Kara Koyunlu between 1458 and 1459. In 1507 Herat was occupied by the Uzbeks, but after much fighting the city was taken by Shah Isma'il, the founder of the Safavid dynasty, in 1510, and the Shamlu Qizilbash assumed the governorship of the area. Under the Safavids, Herat was again relegated to the position of a provincial capital, albeit one of particular importance. At the death of Shah Isma'il the Uzbeks again took Herat and held it until Shah Tahmasp retook it in 1528. The Persian king Abbas was born in Herat, and in Safavid texts Herat is referred to as a'zam-i bilād-i īrān, meaning "the greatest of the cities of Iran". In the 16th century, all future Safavid rulers, from Tahmasp I to Abbas I, served as governors of Herat in their youth.
Modern history

By the early 18th century Herat was governed by the Abdali Afghans. After Nader Shah's death in 1747, Ahmad Shah Durrani took possession of the city, and it became part of the Durrani Empire. In 1793, Herat became independent for several years when Afghanistan underwent a civil war between different sons of Timur Shah. The Iranians fought multiple wars with Herat between 1801 and 1837 (1804, 1807, 1811, 1814, 1817, 1818, 1821, 1822, 1825, 1833). The Iranians besieged the city in 1837, but the British helped the Heratis repel them. In 1856, the Iranians invaded again and briefly managed to take the city on October 25; this led directly to the Anglo-Persian War. In 1857 hostilities between the Iranians and the British ended after the Treaty of Paris was signed, and the Persian troops withdrew from Herat in September 1857. Afghanistan conquered Herat on May 26, 1863, under Dost Muhammad Khan, two weeks before his death. The famous Musalla of Gawhar Shad in Herat, a large Islamic religious complex consisting of five minarets and several mausoleums along with mosques and madrasas, was dynamited during the Panjdeh incident to prevent its use by the advancing Russian forces. Some emergency preservation work was carried out at the site in 2001, which included building protective walls around the Gawhar Shad Mausoleum and the Sultan Husain Madrasa, repairing the remaining minaret of Gawhar Shad's Madrasa, and replanting the mausoleum garden. In the 1960s, engineers from the United States built Herat Airport, which was used by the Soviet forces during the Democratic Republic of Afghanistan in the 1980s. Even before the Soviet invasion at the end of 1979, there was a substantial presence of Soviet advisors in the city with their families. Between March 10 and March 20, 1979, the Afghan Army in Herāt under the control of commander Ismail Khan mutinied, and thousands of protesters took to the streets against the oppression of the Khalq communist regime led by Nur Mohammad Taraki.
The rebels led by Khan managed to oust the communists and take control of the city for three days, with some protesters killing Soviet advisers. This shocked the government, which blamed the new administration of Iran, installed after the Iranian Revolution, for influencing the uprising. Government reprisals followed, and between 3,000 and 24,000 people (according to different sources) were killed in what is called the 1979 Herat uprising, known in Persian as the Qiam-e Herat. The city itself was recaptured with tanks and airborne forces, but at the cost of thousands of civilians killed. This massacre was the first of its kind since the Third Anglo-Afghan War in 1919, and was the bloodiest event preceding the Soviet–Afghan War. Herat was damaged during the Soviet–Afghan War in the 1980s, especially on its western side, and the province as a whole was one of the worst-hit. In April 1983, a series of Soviet bombings, described as "extremely heavy, brutal and prolonged", damaged half of the city and killed around 3,000 civilians. Ismail Khan was the leading mujahideen commander in Herāt fighting against the Soviet-backed government. After the communist government's collapse in 1992, Khan joined the new government and became governor of Herat Province. The city was relatively safe and was recovering and rebuilding from the damage caused in the Soviet–Afghan War. However, on September 5, 1995, the city was captured by the Taliban without much resistance, forcing Khan to flee. Herat became the first Persian-speaking city to be captured by the Taliban. The Taliban's strict enforcement of laws confining women to the home and closing girls' schools alienated Heratis, who, like Kabulis, are traditionally more liberal and educated than other urban populations in the country. Two days of anti-Taliban protests occurred in December 1996; they were violently dispersed and led to the imposition of a curfew.
In May 1999, a rebellion in Herat was crushed by the Taliban, who blamed Iran for instigating it. After the U.S. invasion of Afghanistan, on November 12, 2001, the city was captured from the Taliban by forces loyal to the Northern Alliance, and Ismail Khan returned to power (see Battle of Herat). The state of the city was reportedly much better than that of Kabul. In 2004, Mirwais Sadiq, Aviation Minister of Afghanistan and the son of Ismail Khan, was ambushed and killed in Herāt by a local rival group. More than 200 people were arrested on suspicion of involvement. In 2005, the International Security Assistance Force (ISAF) began establishing bases in and around the city. Its main mission was to train the Afghan National Security Forces (ANSF) and help with the rebuilding process of the country. Regional Command West, led by Italy, assisted the Afghan National Army (ANA) 207th Corps. Herat was one of the first seven areas to transition security responsibility from NATO to Afghanistan; in July 2011, the Afghan security forces assumed security responsibility from NATO. Due to their close relations, Iran began investing in the development of Herat's power, economy and education sectors. In the meantime, the United States built a consulate in Herat to help further strengthen its relations with Afghanistan. In addition to the usual services, the consulate works with local officials on development projects and with security issues in the region. On 12 August 2021, the city was captured by the Taliban during the 2021 Taliban offensive.

Geography

Climate

Herat has a cold semi-arid climate (Köppen climate classification BSk). Precipitation is very low, and mostly falls in winter. Although Herāt lies lower than Kandahar, the summer climate is more temperate, and the climate throughout the year is far from disagreeable, although winter temperatures are comparably lower. From May to September, the wind blows from the northwest with great force.
The winter is tolerably mild; snow melts as it falls, and even on the mountains does not lie long. Three years out of four it does not freeze hard enough for the people to store ice. The eastern reaches of the Hari River, including the rapids, are frozen hard in the winter, and people travel on it as on a road.

Places of interest

Foreign consulates
India, Iran and Pakistan operate consulates here for trade, military and political links.

Neighborhoods
Shahr-e Naw (Downtown)
Welayat (Office of the governor)
Qol-Ordue (Army's HQ)
Farqa (Army's HQ)
Darwaze Khosh
Chaharsu
Pul-e Rangine
Sufi-abad
New-abad
Pul-e malaan
Thakhte Safar
Howz-e-Karbas
Baramaan
Darwaze-ye Qandahar
Darwaze-ye Iraq
Darwaze Az Kordestan

Parks
Park-e Taraki
Park-e Millat
Khane-ye Jihad Park

Monuments
Herat Citadel (Qala Ikhtyaruddin or Arg)
Musallah Complex
Musalla Minarets of Herat

Of the more than a dozen minarets that once stood in Herāt, many have been toppled by war and neglect over the past century. Recently, however, everyday traffic has threatened many of the remaining unique towers by shaking the very foundations they stand on. Cars and trucks that drive on a road encircling the ancient city rumble the ground every time they pass these historic structures. UNESCO personnel and Afghan authorities have been working to stabilize the Fifth Minaret.

Museums
Herat Museum, located inside the Herat Citadel
Jihad Museum

Mausoleums and tombs
Gawhar Shad Mausoleum
Mausoleum of Khwajah Abdullah Ansari
Tomb of Jami
Tomb of khaje Qaltan
Mausoleum of Mirwais Sadiq
Jewish cemetery – there once existed an ancient Jewish community in the city; its remnants are a cemetery and a ruined shrine.

Mosques
Jumu'ah Mosque (Friday Mosque of Herat)
Gazargah Sharif
Khalghe Sharif
Shah Zahdahe

Hotels
Serena Hotel (coming soon)
Diamond Hotel
Marcopolo Hotel

Stadiums
Herat Stadium

Universities
Herat University

Demography

The population of Herat numbered approximately 592,902 in 2021.
The city houses a multi-ethnic society, and speakers of the Persian language are in the majority. There is no current data on the precise ethnic composition of the city's population, but according to a 2003 map found in the National Geographic Magazine, Tajiks form the majority of the city, comprising around 85% of the population. The remaining population comprises Pashtuns (10%), Hazaras (2%), Uzbeks (2%) and Turkmens (1%). Persian is the native language of Herat, and the local dialect – known by natives as Herātī – belongs to a cluster within Persian. The second language understood by many is Pashto, the native language of the Pashtuns. The local Pashto dialect spoken in Herat is a variant of western Pashto, which is also spoken in Kandahar and southern and western Afghanistan. Religiously, Sunni Islam is practiced by the majority, while Shias make up the minority. The city has a high residential density clustered around its core. However, vacant plots account for a higher percentage of the city (21%) than residential land use (18%), and agriculture accounts for the largest share of total land use (36%). The city once had a Jewish community. About 280 families lived in Herat as of 1948, but most of them moved to Israel that year, and the community had disappeared by 1992. There are four former synagogues in the city's old quarter, which were neglected for decades and fell into disrepair. In the late 2000s, the synagogue buildings were renovated by the Aga Khan Trust for Culture, and three of them were turned into schools and nurseries, the Jewish community having vanished. The Jewish cemetery is cared for by Jalil Ahmed Abdelaziz.
Sports

Stadiums
Herat Cricket Ground
Herat Stadium

Notable people from Herat

Rulers and emperors
Tahir ibn Husayn, 9th-century Abbasid Caliphate army general and founder of the Tahirid dynasty
Ghiyasuddin Muhammad, emperor of the Ghurid dynasty from 1163 to 1202; during his reign, the Ghurid dynasty became a world power, stretching from Gorgan to Bengal
Mīrzā Shāhrūkh bin Tīmur Barlas, emperor of the Timurid dynasty of Herāt
Abu Sa'id Mirza, ruler of the Timurid Empire during the mid-fifteenth century
Mīrzā Husseyn Bāyqarāh, emperor of the Timurid dynasty of Herāt
Shāh Abbās the Great, emperor of Safavid Persia
Ahmad Shah Durrani, founder of the Durrani Empire
Emir Dost Mohammad Khan, founder of the Barakzai dynasty, buried in the city
Sultan Jan, ruler of Herat in the 19th century

Politicians
Ahmad Maymandi, 11th-century Persian vizier of the Ghaznavid empire
Ismail Khan, former governor of Herat Province and Minister of Water and Energy
Amena Afzali, politician
Faramarz Tamanna, politician

Scientists
Abu Mansur Muvaffak Harawi, 10th-century Persian physician
Abolfadl Harawi, 10th-century astronomer under the patronage of the Buyids in Rey, originally from Herat
Ahmad ibn Farrokh, 12th-century Persian physician
Taftazani, a Muslim polymath of the 14th century
Muhammad ibn Yusuf al-Harawi, 15th-century Persian physician
Nimat Allah al-Harawi, 17th-century Persian chronicler at the court of the Mughal Emperor Jahangir

Religious figures
Fakhr ad-Din al-Razi, polymath and Islamic scholar of the 12th century
Hussain Kashefi, a 15th-century Persian prose-stylist, Islamic scholar and scientist
Ali al-Hirawi al-Qari, from the 17th century, considered one of the masters of hadith and Imams of fiqh

Artists
Ali ibn Abi Bakr al-Harawi, 12th–13th-century Persian traveller and first known graffiti artist in the Muslim world, originally from Herat
Ustād Kamāl ud-Dīn Behzād, the greatest of the medieval Persian painters
Mir Ali Heravi, prominent Persian calligrapher and calligraphy teacher of Nastaʿlīq script in the 16th century
Alka Sadat, film producer, born here
Sonita Alizadeh, rapper and activist

Sports
Nadia Nadim, Afghan-Danish football player, regarded as the most influential and greatest Afghan female football player of all time; won the French league title in the 2020–21 season with Paris Saint-Germain
Hamidullah Karimi, Afghan footballer, plays as a forward for Indian club Delhi United FC
Mohammad Rafi Barekzay, Afghan footballer, plays as a midfielder for Toofaan Harirod F.C.

Others
Gowhar Shad, wife of Shāh Rūkh Mīrzā
Zablon Simintov, last remaining Jew living in Afghanistan

Economy and infrastructure

Transport

Air
Herat International Airport was built by engineers from the United States in the 1960s and was used by the Soviet Armed Forces during the Soviet–Afghan War in the 1980s. It was bombed in late 2001 during Operation Enduring Freedom but was rebuilt within the next decade. The runway of the airport has been extended and upgraded, and as of August 2014 there were regularly scheduled direct flights to Delhi, Dubai, Mashhad, and various airports in Afghanistan. At least five airlines operated regularly scheduled direct flights to Kabul.

Rail
Rail connections to and from Herat were proposed many times, during the Great Game of the 19th century and again in the 1970s and 1980s, but nothing came of them. In February 2002, Iran and the Asian Development Bank announced funding for a railway connecting Torbat-e Heydarieh in Iran to Herat. This was later changed to begin in Khaf in Iran, a railway for both cargo and passengers, with work on the Iranian side of the border starting in 2006. Construction is underway on the Afghan side, and completion was estimated for March 2018. There is also the prospect of an extension across Afghanistan to Sher Khan Bandar.

Road
The AH76 highway connects Herat to Maymana and the north.
The AH77 connects it east towards Chaghcharan and north towards Mary in Turkmenistan. Highway 1 (part of Asian highway AH1) links it to Mashhad in Iran to the northwest, and south via the Kandahar–Herat Highway to Delaram.

Herat in fiction
The beginning of Khaled Hosseini's 2007 novel A Thousand Splendid Suns is set in and around Herāt.
Salman Rushdie's novel The Enchantress of Florence makes frequent reference to events in Herāt in the Middle Ages.

Sister cities
Council Bluffs, Iowa, United States (since 2016)

See also
Aria (satrapy)
Geography of Afghanistan
Greater Khorasan
Herāt Province
History of Afghanistan

External links
Roofing of Herat of Afghanistan (video February 2019)
Gadi ride Herat Afghanistan (video by Kambiz Galanawi, October 2018)
Park Stadium (video September 2018)
City of Herat Afghanistan (video by Kambiz Galanawi, June 2018)
Video: Herat After Transition, with voiceover by Natochannel
Heratonline.com: Information and news about Herāt
Detailed map of Herāt city
Map of Herāt and surroundings in 1942, Perry–Castañeda Library Map Collection, University of Texas at Austin

Cities in Afghanistan
Cities in Central Asia
Populated places along the Silk Road
Populated places in Herat Province
Provincial capitals in Afghanistan
Cities founded by Alexander the Great
14130
https://en.wikipedia.org/wiki/Hedeby
Hedeby
Hedeby (, Old Norse Heiðabýr, German Haithabu) was an important Danish Viking Age (8th to the 11th centuries) trading settlement near the southern end of the Jutland Peninsula, now in the Schleswig-Flensburg district of Schleswig-Holstein, Germany. It is the most important archaeological site in Schleswig-Holstein. Around 965, the chronicler Abraham ben Jacob visited Hedeby and described it as "a very large city at the very end of the world's ocean." The settlement developed as a trading centre at the head of a narrow, navigable inlet known as the Schlei, which connects to the Baltic Sea. The location was favorable because a short portage of less than 15 km leads to the Treene River, which flows into the Eider with its North Sea estuary. Goods and ships could thus be hauled overland on a corduroy road, giving an almost uninterrupted seaway between the Baltic and the North Sea and avoiding the dangerous and time-consuming circumnavigation of Jutland; this gave Hedeby a role similar to that of later Lübeck. Hedeby was the second largest Nordic town during the Viking Age, after Uppåkra in present-day southern Sweden. The city of Schleswig was later founded on the other side of the Schlei. Hedeby was abandoned after its destruction in 1066. It was rediscovered in the late 19th century, and excavations began in 1900. The Hedeby Museum was opened next to the site in 1985. Hedeby is mentioned in Hans Christian Andersen's fairy tale "The Marsh King's Daughter."

Name

The Old Norse name Heiða-býr simply translates to "heath-settlement" (heiðr "heath" and býr "yard; settlement, village, town"). The name is recorded in numerous spelling variants:
Heiðabýr is the reconstructed name in standard Old Norse, also anglicized as Heithabyr.
The Stone of Eric, a 10th-century Danish runestone found in 1796, carries an inscription mentioning ᚼᛅᛁᚦᛅ᛭ᛒᚢ (haiþa bu).
Old English æt Hæðum, from Ohthere's account of his travels to Alfred the Great in the Old English Orosius.
Hedeby is the modern Danish spelling, also the one most commonly used in English.
Haddeby is the Low German form, and also the name of the administrative district formed in 1949 and named for the site; in 1985, the district introduced a coat of arms featuring a bell with a runic inscription reading ᚼᛁᚦᛅ᛬ᛒᚢ (hiþa:bu).
Haithabu is the modern German spelling used when referring to the historical settlement; it transliterates the name as found in the Stone of Eric inscription. It was introduced among other variants in antiquarian literature in the 19th century and has since become the standard German name of the settlement.

Sources from the 9th and 10th centuries AD also attest to the names Sliesthorp and Sliaswich (cf. -thorp vs. -wich), and the town of Schleswig still exists 3 km north of Hedeby. However, Æthelweard claimed in his Latin translation of the Anglo-Saxon Chronicle that the Saxons used Slesuuic and the Danes Haithaby to refer to the same town.

History

Origins

Hedeby is first mentioned in the Frankish chronicles of Einhard (804), who was in the service of Charlemagne, but it was probably founded around 770. In 808 the Danish king Godfred (Lat. Godofredus) destroyed a competing Slavic trade centre named Reric, and the Frankish chronicles record that he moved its merchants to Hedeby. This may have provided the initial impetus for the town to develop. The same sources record that Godfred strengthened the Danevirke, an earthen wall that stretched across the south of the Jutland peninsula. The Danevirke joined the defensive walls of Hedeby to form an east–west barrier across the peninsula, from the marshes in the west to the Schlei inlet leading into the Baltic in the east. The town itself was surrounded on its three landward sides (north, west, and south) by earthworks.
At the end of the 9th century the northern and southern parts of the town were abandoned in favour of the central section. Later a 9-metre (29-ft) high semi-circular wall was erected to guard the western approaches to the town. On the eastern side, the town was bordered by the innermost part of the Schlei inlet and the bay of Haddebyer Noor.

Rise

Hedeby became a principal marketplace because of its geographical location on the major trade routes between the Frankish Empire and Scandinavia (north–south), and between the Baltic and the North Sea (east–west). Between 800 and 1000 the growing economic power of the Vikings led to its dramatic expansion as a major trading centre. The prominence of Hedeby as a major international trading hub, along with that of Birka and Schleswig, served as a foundation for the Hanseatic League that would emerge by the 12th century. The following indicate the importance achieved by the town:
The town was described by visitors from England (Wulfstan, 9th century) and the Mediterranean (Al-Tartushi, 10th century).
Hedeby became the seat of a bishop (948) and belonged to the Archbishopric of Hamburg and Bremen.
The town minted its own coins (from 825).
Adam of Bremen (11th century) reports that ships were sent from this portus maritimus to Slavic lands, to Sweden, Samland (Semlant) and even Greece.

A Swedish dynasty founded by Olof the Brash is said to have ruled Hedeby during the last decades of the 9th century and the first part of the 10th century. This was told to Adam of Bremen by the Danish king Sweyn Estridsson, and it is supported by three runestones found in Denmark. Two of them were raised by the mother of Olof's grandson Sigtrygg Gnupasson. The third runestone, discovered in 1796, is from Hedeby, the Stone of Eric (). It is inscribed with Norwegian-Swedish runes. It is, however, possible that Danes also occasionally wrote with this version of the younger futhark.

Lifestyle

Life was short and crowded in Hedeby.
The small houses were clustered tightly together in a grid, with the east–west streets leading down to jetties in the harbour. People rarely lived beyond 30 or 40, and archaeological research shows that their later years were often painful due to crippling diseases such as tuberculosis. Al-Tartushi, a late 10th-century traveller from al-Andalus, provides one of the most colourful and often quoted descriptions of life in Hedeby. Al-Tartushi was from Cordoba in Spain, which enjoyed a significantly wealthier and more comfortable lifestyle than Hedeby. While Hedeby may have been significant by Scandinavian standards, Al-Tartushi was unimpressed: "Slesvig (Hedeby) is a very large town at the extreme end of the world ocean... The inhabitants worship Sirius, except for a minority of Christians who have a church of their own there.... He who slaughters a sacrificial animal puts up poles at the door to his courtyard and impales the animal on them, be it a piece of cattle, a ram, billy goat or a pig so that his neighbours will be aware that he is making a sacrifice in honour of his god. The town is poor in goods and riches. People eat mainly fish which exist in abundance. Babies are thrown into the sea for reasons of economy. The right to divorce belongs to the women.... Artificial eye make-up is another peculiarity; when they wear it their beauty never disappears, indeed it is enhanced in both men and women. Further: Never did I hear singing fouler than that of these people, it is a rumbling emanating from their throats, similar to that of a dog but even more bestial."

Destruction

The town was sacked in 1050 by King Harald Hardrada of Norway during a conflict with King Sweyn II of Denmark. He set the town on fire by sending several burning ships into the harbour, the charred remains of which were found at the bottom of the Schlei during recent excavations. A Norwegian skald, quoted by Snorri Sturluson, describes the sack as follows: Burnt in anger from end to end was Hedeby[..]
High rose the flames from the houses when, before dawn, I stood upon the stronghold's arm. In 1066 the town was sacked and burned by West Slavs. Following the destruction, Hedeby was slowly abandoned. People moved across the Schlei inlet, which separates the two peninsulas of Angeln and Schwansen, and founded the town of Schleswig.

Archaeology

20th-century excavations

After the settlement was abandoned, rising waters contributed to the complete disappearance of all visible structures on the site; it was even forgotten where the settlement had been. This proved fortunate for later archaeological work at the site. Archaeological work began at the site in 1900, after the rediscovery of the settlement, and excavations were conducted for the next 15 years. Further excavations were carried out between 1930 and 1939. Archaeological work on the site was productive for two main reasons: the site had never been built on since its destruction some 840 years earlier, and the permanently waterlogged ground had preserved wood and other perishable materials. After the Second World War, in 1959, archaeological work resumed and has continued intermittently ever since. The embankments surrounding the settlement were excavated, and the harbour was partially dredged, during which the wreck of a Viking ship was discovered. Despite all this work, only 5% of the settlement (and only 1% of the harbour) has as yet been investigated. The most important finds resulting from the excavations are now on display in the adjoining Haithabu Museum.

21st-century reconstructions

In 2005 an ambitious archaeological reconstruction program was initiated on the original site. Based on the results of archaeological analyses, exact copies of some of the original Viking houses have been rebuilt.
See also
Hedeby Viking Museum
Hedeby stones
Schlei
People: Wulfstan of Hedeby, Al-Tartushi, Adam of Bremen, Harald Hardrada, Rurik, Godfred (Danish king), Olof the Brash
Towns: Jelling, Birka, Ribe, Schleswig, Reric
Viking Age

Bibliography and media
A number of short archaeological films relating to Hedeby, produced by researchers during the 1980s, are available on DVD from the University of Kiel's Archaeological Film Project. Most publications on Hedeby are in German; see Wikipedia's German-language article on Hedeby.

External links
Website of the Haithabu Viking Museum
Pictures from the Haithabu Viking Museum
Flickr Photo Gallery: Viking houses and museum

Archaeological sites in Germany
Former populated places in Denmark
Former populated places in Germany
History of Schleswig-Holstein
Viking Age populated places
World Heritage Sites in Germany
14131
https://en.wikipedia.org/wiki/Hazaras
Hazaras
The Hazaras are a Persian-speaking ethnic group native to, and primarily residing in, the Hazarajat region of central Afghanistan, and more generally scattered throughout the country. They speak the Hazaragi dialect of Persian, which is mutually intelligible with Dari, one of the two official languages of Afghanistan. They are one of the largest ethnic groups in Afghanistan, and also form significant minorities in neighboring Pakistan, mostly in Quetta, as well as in Iran. Hazaras are considered by some to be one of the most vulnerable groups in Afghanistan, and they have been persecuted repeatedly over previous decades. Etymology The etymology of the word Hazāra remains disputed, and several differing views exist. Babur, founder of the Mughal Empire in the early 16th century, records the name Hazāra in his autobiography, referring to the populace of a region called Hazāristān. Historian Abdul Hai Habibi considers the word "Hazara" to be very old, derived from "Hazala", which changed to "Hazara" over time and meant "good hearted". Another view is that the name Hazāra derives from the Persian word for "thousand". It may be a translation of the Mongol term for a military unit of 1,000 soldiers at the time of Genghis Khan. With time, the term Hazār could have been substituted for the Mongol word and now stands for the group of people; in their native language, the Hazara call themselves Azra. Origin Although the origins of the Hazara people have not been fully reconstructed, Turkic and Mongol origin is probable for the majority, given common physical attributes and parts of their culture and language resembling those of Central Asian Turkic tribes and the Mongols. Phenotype can vary, however, with some noting that certain Hazaras may resemble Europeans or peoples native to the Iranian plateau.
Genetic analysis of some of the Hazara indicates partial Mongol ancestry. Invading Mongols and Turco-Mongols mixed with the local indigenous Turkic and Iranic populations. For example, the Qara'unas settled in what is now Afghanistan and mixed with the local populations. A second wave of mostly Chagatai Turco-Mongols came from Central Asia, associated with the Ilkhanate and the Timurids, all of whom settled in Hazarajat and mixed with the local population. These findings have led academics to conclude that the Hazaras are ultimately the result of several Turco-Mongol tribes mixing with the local population. mtDNA sequencing studies demonstrated relatively high frequencies of West Eurasian mtDNA. History The first mention of the Hazara is made by Babur in the early 16th century and later by the court historians of Shah Abbas of the Safavid dynasty. It is reported that they embraced Shia Islam between the end of the 16th and the beginning of the 17th century, during the Safavid period. Hazara men, along with those of other ethnic groups, were recruited to the army of Ahmad Shah Durrani in the 18th century. 19th century During the second reign of Dost Mohammad Khan in the 19th century, Hazaras from Hazarajat began to be taxed for the first time. However, for the most part, they still managed to keep their regional autonomy until the subjugation by Abdur Rahman Khan began in the late 19th century. When the Treaty of Gandomak was signed and the Second Anglo-Afghan War ended in 1880, Abdur Rahman Khan set the goal of bringing Hazarajat and Kafiristan under his control. He launched several campaigns in Hazarajat due to resistance from the Hazara, in which his forces committed atrocities. The southern part of Hazarajat was spared as its inhabitants accepted his rule, while the other parts of Hazarajat rejected Abdur Rahman and instead supported his uncle, Sher Ali Khan. In response, Abdur Rahman waged a war against tribal leaders who rejected his policies and rule.
This is known as the Hazara Uprisings. Abdur Rahman arrested Syed Jafar, chief of the Sheikh Ali Hazaras, and jailed him in Mazar-e Sharif. These campaigns had a catastrophic impact on the demographics of the Hazaras, causing over 60% of them to perish and displacing many others. 20th and 21st century In 1901, Habibullah Khan, Abdur Rahman's eldest son and successor, granted amnesty to all people who had been exiled by his predecessor. However, the division between the Afghan government and the Hazara people had already been made too deep under Abdur Rahman, and Hazaras continued to face severe social, economic, and political discrimination through most of the 20th century. In 1933 King Mohammed Nadir Khan was assassinated by Abdul Khaliq Hazara. The Afghan government later captured and executed him, along with several of his family members. Mistrust of the central government by the Hazaras and local uprisings continued. In particular, from 1945 to 1946, during Zahir Shah's rule, a revolt took place against new taxes that were exclusively imposed on the Hazara. The Kuchi nomads, meanwhile, were not only exempted from taxes but also received allowances from the Afghan government. The angry rebels began capturing and killing government officials; in response, the central government sent a force to subdue the region and later removed the taxes. The repressive policies of the People's Democratic Party of Afghanistan (PDPA) after the Saur Revolution in 1978 caused uprisings throughout the country. Because the government feared Iranian influence, the Hazaras were particularly persecuted. President Hafizullah Amin published in October 1979 a list of 12,000 victims of the Taraki government; among them were 7,000 Hazaras who had been shot in the notorious Pul-e-Charkhi prison. During the Soviet-Afghan War, the Hazarajat region did not see as much heavy fighting as other regions of Afghanistan. Most of the Hazara mujahideen fought the Soviets in regions on the periphery of Hazarajat.
Within Hazarajat, however, rival Hazara political factions engaged in an intense, though largely non-violent, power struggle. The division was between the Tanzeem Nasle Nau Hazara, a Quetta-based party of Hazara nationalists and secular intellectuals, and the Islamist parties in Hazarajat. By 1979, the Hazara Islamist groups had already liberated Hazarajat from the central Soviet-backed Afghan government, and they later took complete control of Hazarajat from the secularists. By 1984, the Islamist dominance of Hazarajat was complete. As the Soviets withdrew in 1989, the Islamist groups felt the need to broaden their political appeal and turned their focus to Hazara ethnic nationalism. This led to the establishment of the Hizb-i-Wahdat, an alliance of all the Hazara resistance groups (except the Harakat-i Islami). In 1992, with the fall of Kabul, the Harakat-i Islami took sides with Burhanuddin Rabbani's government, while the Hizb-i-Wahdat sided with the opposition. The Hizb-i-Wahdat was eventually forced out of Kabul in 1995 when the Taliban movement captured and killed their leader Abdul Ali Mazari. With the Taliban's capture of Kabul in 1996, all the Hazara groups united with the new Northern Alliance against the common enemy. It was too late, however, and despite fierce resistance Hazarajat fell to the Taliban by 1998. The Taliban isolated Hazarajat from the rest of the world, going so far as to bar the United Nations from delivering food to the provinces of Bamyan, Ghor, Maidan Wardak, and Daykundi. Hazaras have also played a significant role in the creation of Pakistan. One such Hazara was Qazi Muhammad Essa of the Sheikh Ali tribe, a close friend of Muhammad Ali Jinnah, the two having first met while studying in London. He was the first person from his native province of Balochistan to obtain a Bar-at-Law degree, and he helped set up the All-India Muslim League in Balochistan.
Though some Hazaras played a role in the anti-Soviet movement, others participated in the new communist government, which actively courted Afghan minorities. Sultan Ali Kishtmand, a Hazara, served as prime minister of Afghanistan from 1981 to 1990 (with one brief interruption in 1988). The Ismaili Hazaras of Baghlan Province likewise supported the communists, and their pir (religious leader) Jaffar Naderi led a pro-communist militia in the region. During the years that followed, Hazaras suffered severe oppression, and many ethnic massacres, genocides, and pogroms were carried out by the predominantly Pashtun Taliban, as documented by groups such as Human Rights Watch. Following the September 11, 2001 attacks in the United States, American and coalition forces invaded Afghanistan. Many Hazaras have become leaders in today's newly emerging Afghanistan. Hazaras have also pursued higher education, enrolled in the army, and many hold top government positions. For example, Mohammad Mohaqiq, a Hazara from the Hizb-i-Wahdat party, ran in the 2004 presidential election in Afghanistan, and Karim Khalili became the Vice President of Afghanistan. Some ministers and governors are Hazaras, including Sima Samar, Habiba Sarabi, Sarwar Danish, Sayed Hussein Anwari, Abdul Haq Shafaq, Sayed Anwar Rahmati, and Qurban Ali Oruzgani. The mayor of Nili in Daykundi Province is Azra Jafari, who became the first female mayor in Afghanistan. Other notable Hazaras include Sultan Ali Keshtmand, Abdul Wahed Sarābi, Ghulam Ali Wahdat, Akram Yari, Sayed Mustafa Kazemi, Muhammad Arif Shah Jahan, Ghulam Husain Naseri, Abbas Noyan, Abbas Ibrahim Zada, Ramazan Bashardost, Ahmad Shah Ramazan, Ahmad Behzad, Nasrullah Sadiqi Zada Nili, Fahim Hashimy, Maryam Monsef and more. Although Afghanistan has historically been one of the poorest countries in the world, the Hazarajat region was kept even less developed by past governments.
Since the ousting of the Taliban in late 2001, billions of dollars have poured into Afghanistan for reconstruction, and several large-scale reconstruction projects took place from August 2012. For example, more than 5,000 kilometres of road pavement have been completed across Afghanistan, though little of it in central Afghanistan (Hazarajat). On the other hand, the Band-e Amir in Bamyan Province became the first national park of Afghanistan. A road from Kabul to Bamyan was also built, along with new police stations, government institutions, hospitals, and schools in Bamyan Province, Daykundi Province, and others. The first ski resort in Afghanistan was also established in Bamyan Province. Discrimination Kuchis (Pashtun nomads who have historically migrated from region to region depending on the season) are allowed to use Hazarajat pastures during the summer season. It is believed that allowing the Kuchis to use some of the grazing lands in Hazarajat began during the rule of Abdur Rahman Khan. Living in mountainous Hazarajat, where little farmland exists, Hazara people rely on these pasture lands for their livelihood during the long and harsh winters. In 2007 some Kuchi nomads entered parts of Hazarajat to graze their livestock, and when the local Hazara resisted, a clash broke out in which several people on both sides were killed by rifle fire. Such events continue to occur, even after interventions by the central government, including by President Hamid Karzai. In late July 2012, a Hazara police commander in Uruzgan Province reportedly rounded up and killed nine Pashtun civilians in revenge for the deaths of two local Hazara. The matter is being investigated by the Afghan government. The drive by President Hamid Karzai, after the Peace Jirga, to strike a deal with Taliban leaders caused deep unease in Afghanistan's minority communities, who fought the Taliban the longest and suffered the most during their rule.
The leaders of the Tajik, Uzbek and Hazara communities vowed to resist any return of the Taliban to power, referring to the large-scale massacres of Hazara civilians during the Taliban period. Following the Fall of Kabul to the Taliban in 2021, which ended the war in Afghanistan, concerns were raised as to whether the Taliban would reimpose the persecution of Hazaras as in the 1990s. An academic at Melbourne's La Trobe University said that "The Hazaras are very fearful that the Taliban will likely be reinstating the policies of the 1990s", in spite of Taliban reassurances that they would not revert to the practices of that era. Genetics Genetically, the Hazara are a mixture of western Eurasian and eastern Eurasian components. Genetic research suggests that the Hazaras of Afghanistan cluster closely with the Uzbek population of the country, while both groups are at a notable distance from Afghanistan's Tajik and Pashtun populations. There is evidence of both paternal and maternal relations to Turkic peoples and Mongols amongst some Hazaras, and East Eurasian male and female ancestry is supported by studies in genetic genealogy as well. East Asian maternal haplogroups (mtDNA) make up about 35%, suggesting that the male descendants of Turkic and Mongolic peoples were accompanied by women of East Asian ancestry; the Hazaras as a whole nonetheless have mostly west Eurasian mtDNA. Non-East Asian maternal lineages account for the remaining roughly 65%, most of them West Eurasian and some South Asian. The most frequent paternal haplogroups found amongst the Pakistani Hazara were haplogroup C-M217 at 40% (10/25) and haplogroup R1b at 32% (8/25). One study of paternal DNA haplogroups in Afghanistan shows that the Y-DNA haplogroups R1a and C-M217 are the most common, followed by J2-M172 and L-M20. Some Hazaras also carry the haplogroups R1a1a-M17, E1b1b1-M35, L-M20 and H-M69, which are common in Tajiks, Pashtuns and Indian populations.
In one study, a small minority had the haplogroup B-M60, normally found in East Africa, and in one mtDNA study of Hazaras, haplogroup L (which is of African origin) was detected at a frequency of 7.5%. A recent study shows that the Uyghurs are closely related to the Hazaras. The study also suggests a small but notable East Asian ancestry in other populations of Pakistan and India. Demographics Some sources claim that Hazaras make up about 20 to 30 percent of the total population of Afghanistan, and that they were by far the largest ethnic group in the past; during the 1888–1893 Hazara uprisings, over 60% of them were massacred and others were displaced. Geographic distribution The vast majority of Hazaras live in Hazarajat, and many others live in the cities, including in neighboring countries or abroad. Diaspora Alessandro Monsutti argues, in his anthropological book, that migration is the traditional way of life of the Hazara people, referring to the seasonal and historical migrations which have never ceased and do not seem to be dictated only by emergencies such as war. Due to the decades of war in Afghanistan and the sectarian violence in Pakistan, many Hazaras left their communities and have settled in Australia, New Zealand, Canada, the United States, the United Kingdom and particularly the Northern European countries such as Sweden and Denmark. Some go to these countries as exchange students, while others arrive through human smuggling, which sometimes costs them their lives. Since 2001, about 1,000 people have died at sea while trying to reach Australia by boat from Indonesia. Many of these were Hazaras, including women and small children who could not swim. A notable case was the Tampa affair, in which a shipload of refugees, mostly Hazaras, was rescued by the Norwegian freighter MV Tampa and subsequently sent to Nauru. New Zealand agreed to take some of the refugees, and all but one of those were granted a stay.
Hazara in Pakistan During the period of British colonial rule on the Indian subcontinent in the 19th century, Hazaras worked during the winter months in coal mines, road construction, and other working-class jobs in some cities of what is now Pakistan. The earliest record of Hazaras in the areas of Pakistan is found in Broadfoot's Sappers company from 1835 in Quetta. This company had also participated in the First Anglo-Afghan War. Some Hazaras also worked on agricultural farms in Sindh and on the construction of the Sukkur Barrage. Haider Ali Karmal Jaghori was a prominent political thinker of the Hazara people in Pakistan, writing about their political history. His work Hazaraha wa Hazarajat Bastan Dar Aiyna-i-Tarikh was published in Quetta in 1992, and another work, Tarikh Milli Hazara by Aziz Tughyan Hazara, was published in Quetta in 1984. Most Pakistani Hazaras today live in the city of Quetta, in Balochistan. Localities in Quetta with prominent Hazara populations include Hazara Town and Mehr Abad; Hazara tribes such as the Sardar are exclusively Pakistani. The literacy level among the Hazara community in Pakistan is relatively high compared to the Hazaras of Afghanistan, and they have integrated well into the social dynamics of the local society. Saira Batool, a Hazara woman, was one of the first female pilots in the Pakistan Air Force. Other notable Hazaras include Qazi Mohammad Esa; Muhammad Musa Khan, who served as Commander in Chief of the Pakistani Army from 1958 to 1968; Air Marshal Sharbat Ali Changezi; Hussain Ali Yousafi, the slain chairman of the Hazara Democratic Party; Syed Nasir Ali Shah, MNA from Quetta; and his father Haji Sayed Hussain Hazara, who was a senator and member of the Majlis-e-Shura during the Zia-ul-Haq era. Despite all of this, Hazaras are often targeted by militant groups such as Lashkar-e-Jhangvi.
"Activists say at least 800-1,000 Hazaras have been killed since 1999 and the pace is quickening. More than one hundred have been murdered in and around Quetta since January, according to Human Rights Watch." The political representation of the community is provided by the Hazara Democratic Party, a secular liberal democratic party headed by Abdul Khaliq Hazara. Hazara in Iran Hazaras in Iran are also referred to as Khawaris or Barbaris. Over many years, as a result of political unrest in Afghanistan, some Hazaras have migrated to Iran. The local Hazara population has been estimated at 500,000 people, of whom at least one-third have spent more than half their lives in Iran. Culture Outside Hazarajat, the Hazara have adopted the cultures of the cities where they dwell, and their customs and traditions resemble those of the Afghan Tajiks and Pashtuns. Traditionally, the Hazara are highland farmers, and although sedentary, those in Hazarajat have retained many of their own customs and traditions, some of which are more closely related to those of Central Asia than to those of the Afghan Tajiks. The Hazara live in houses rather than tents, while Aimaq Hazaras and Aimaqs live in tents rather than houses. Music Many Hazara musicians are widely hailed as skilled players of the dambura, a regional lute also found in other Central Asian nations such as Kazakhstan, Uzbekistan and Tajikistan. Popular Hazara dambura players include Sarwar Sarkhosh, Dawood Sarkhosh, Safdar Tawakoli and Sayed Anwar Azad. Cuisine Hazara cuisine is strongly influenced by Central Asian, South Asian and Persian cuisines, though there are special foods, cooking methods and cooking styles specific to the Hazara. They have a hospitable dining etiquette; in their culture, it is customary to prepare special food for guests.
Language Hazara people living in the Hazarajat (Hazaristan) areas speak the Hazaragi dialect of Persian, which is infused with many Turkic and a few Mongolic loanwords. The primary difference between standard Persian and Hazaragi is the accent. Despite these differences, Hazaragi is mutually intelligible with Dari, one of the two official languages of Afghanistan. Religion Hazaras predominantly practice Islam, mostly the Twelver sect of Shia Islam, with significant Sunni, some Isma'ili, and non-denominational Muslim minorities. The majority of Afghanistan's population practices Sunni Islam, which may have contributed to the discrimination against the Hazaras. There is no single theory about the Hazaras' acceptance of Shia Islam; probably most of them converted during the first part of the 16th century, in the early days of the Safavid dynasty. Some Sunni Hazaras, such as the Timuris and Aimaq Hazaras, have been attached to non-Hazara tribes, while the Ismaili Hazaras have always been kept separate from the rest of the Hazaras on account of religious beliefs and political purposes. Hazara tribes The Hazara people are organized into various tribes, including the Sheikh Ali, Jaghori, Muhammad Khwaja, Jaghatu, Qara Baghi, Ghaznichi, Behsudi, Dai Mirdadi, Turkmani, Uruzgani, Dai Kundi, Dai Zangi, Dai Chopan, Dai Zinyat, Qarlugh and others. The different tribes come from Hazarajat and regions such as Parwan, Bamyan, Ghazni, Ghor, Urozgan, Daykundi and Maidan Wardak, and have spread outwards from Hazarajat (the main region) into other parts of Afghanistan. Sports Many Hazaras engage in a variety of sports, including football, volleyball, wrestling, martial arts, boxing, karate, taekwondo, judo, wushu, jujitsu, cricket, tennis and more. Pahlawan Ebrahim Khedri, a 62 kg wrestler, was the national champion in Afghanistan for two decades. Another famous Hazara wrestler, Wakil Hussain Allahdad, was killed in the 22 April 2018 Kabul suicide bombing in the Dashte Barchi area of Kabul.
Rohullah Nikpai won a bronze medal in taekwondo at the 2008 Beijing Olympics, beating world champion Juan Antonio Ramos of Spain 4–1 in a play-off final. It was Afghanistan's first-ever Olympic medal. He then won a second Olympic medal for Afghanistan at the London 2012 games. Another famous Hazara athlete, Syed Abdul Jalil Waiz, was the first badminton player to represent Afghanistan at the Asian Junior Championships, in 2005, where he produced the first win for his country, against Iraq, 15–13, 15–1. He has participated in several international championships since 2005 and achieved victories against Australia, the Philippines and Mongolia. Hamid Rahimi is a boxer from Afghanistan who lives in Germany. Famous Hazara football players include Zohib Islam Amiri, who currently plays for the Afghanistan national football team; Moshtagh Yaghoubi, an Afghan-Finnish footballer who plays for HIFK; Mustafa Amini, an Afghan-Australian footballer who plays as a midfielder for Danish Superliga club AGF and the Australian national team; Rahmat Akbari, an Afghan-Australian footballer who plays as a midfielder for Brisbane Roar; and others such as Ali Hazara and Zahra Mahmoodi. A Pakistani Hazara, Abrar Hussain, a former Olympic boxer, served as deputy director-general of the Pakistan Sports Board. He represented Pakistan three times at the Olympics and won a gold medal at the 1990 Asian Games in Beijing. Another Hazara boxer from Pakistan is Haider Ali, a Commonwealth Games gold medalist and Olympian who is now retired. Hazaras from Pakistan have also excelled in other sports, receiving numerous awards particularly in boxing, football and field hockey. Qayum Changezi, a legendary Pakistani football player, was a Hazara. Young Hazara athletes, mostly from Quetta, continue to appear in many sports in Pakistan; Rajab Ali Hazara, for example, captains the Pakistan under-16 football team.
Buzkashi Buzkashi is a Central Asian sport in which horse-mounted players attempt to place a goat or calf carcass in a goal. It is the national sport of Afghanistan and one of the main sports of the Hazara people, who still practice it in Afghanistan. Notable people Gallery See also List of Hazara tribes List of Hazara people Aimaq Hazara Aimaq people Hazara diaspora Ethnic groups in Afghanistan References Further reading External links Hazara tribal structure, Program for Culture and Conflict Studies, US Naval Postgraduate School Peril and Persecution in Afghanistan Ethnic groups in Afghanistan Ethnic groups in Iran Ethnic groups in Pakistan
14132
https://en.wikipedia.org/wiki/Hawala
Hawala
Hawala or hewala (meaning transfer or sometimes trust), also known by other names in Persian and Somali, is a popular and informal value transfer system based not on the movement of cash, or on telegraph or computer network wire transfers between banks, but instead on the performance and honour of a huge network of money brokers (known as hawaladars). While hawaladars are spread throughout the world, they are primarily located in the Middle East, North Africa, the Horn of Africa and the Indian subcontinent. They operate outside of, or parallel to, traditional banking, financial channels and remittance systems. Hawala follows Islamic traditions, but its use is not limited to Muslims. Origins The hawala system originated in India. It has existed since the 8th century among Indian, Arab and Muslim traders operating along the Silk Road and beyond, as a protection against theft. It is believed to have arisen in the financing of long-distance trade around the emerging capital trade centers in the early medieval period. In South Asia, it appears to have developed into a fully-fledged money market instrument, which was only gradually replaced by the instruments of the formal banking system in the first half of the 20th century. "Hawala" itself influenced the development of agency in common law and in civil law systems, such as the aval in French law and the avallo in Italian law; the words aval and avallo were themselves derived from hawala. The transfer of debt, which was "not permissible under Roman law but became widely practiced in medieval Europe, especially in commercial transactions", was due to the large extent of the "trade conducted by the Italian cities with the Muslim world in the Middle Ages". Agency was also "an institution unknown to Roman law", as no "individual could conclude a binding contract on behalf of another as his agent".
In Roman law, the "contractor himself was considered the party to the contract and it took a second contract between the person who acted on behalf of a principal and the latter in order to transfer the rights and the obligations deriving from the contract to him". On the other hand, Islamic law and the later common law "had no difficulty in accepting agency as one of its institutions in the field of contracts and of obligations in general". How hawala works In the most basic variant of the hawala system, money is transferred via a network of hawala brokers, or hawaladars. It is the transfer of money without actually moving it; indeed, an apt definition of the hawala system is "money transfer without money movement". According to author Sam Vaknin, while there are large hawaladar operators with networks of middlemen in cities across many countries, most hawaladars are small businesses that work at hawala as a sideline or moonlighting operation. The figure shows how hawala works: (1) a customer (A, left-hand side) approaches a hawala broker (X) in one city and gives a sum of money (red arrow) that is to be transferred to a recipient (B, right-hand side) in another, usually foreign, city. Along with the money, he usually specifies something like a password that will lead to the money being paid out (blue arrows). (2b) The hawala broker X calls another hawala broker M in the recipient's city, and informs M about the agreed password, or gives other disposition of the funds. Then, the intended recipient (B), who also has been informed by A about the password (2a), now approaches M and tells him the agreed password (3a). If the password is correct, then M releases the transferred sum to B (3b), usually minus a small commission. X now owes M the money that M has paid out to B; thus M has to trust X's promise to settle the debt at a later date.
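The numbered steps above can be sketched as a minimal model. This is an illustrative sketch only: the broker names, the 1% commission rate, and the in-memory dictionaries are assumptions for the example, not features of any real hawala operation.

```python
# Minimal sketch of the hawala flow: broker X in the sender's city, broker M
# in the recipient's city. No promissory instruments change hands -- only a
# password for the payout and a running tally of inter-broker debt.

class Broker:
    def __init__(self, name):
        self.name = name
        self.pending = {}   # password -> amount awaiting pickup at this broker
        self.owed_by = {}   # other broker's name -> amount they owe this broker

    def accept_transfer(self, amount, password, payout_broker, commission=0.01):
        """Steps 1-2b: sender A hands cash to X; X phones M with the password."""
        payout = amount * (1 - commission)
        payout_broker.pending[password] = payout
        # M will pay out on X's behalf, so X now owes M the payout amount.
        payout_broker.owed_by[self.name] = (
            payout_broker.owed_by.get(self.name, 0) + payout
        )

    def release(self, password):
        """Steps 3a-3b: recipient B states the password; pay out if it matches."""
        if password not in self.pending:
            raise ValueError("unknown password")
        return self.pending.pop(password)

x = Broker("X")  # broker in the sender's city
m = Broker("M")  # broker in the recipient's city

x.accept_transfer(1000, "agreed-password", m)  # A gives X 1000 units for B
paid = m.release("agreed-password")            # B collects 1000 minus commission
print(paid)                                    # amount B receives
print(m.owed_by["X"])                          # X's outstanding debt to M
```

Note that `release` settles nothing between the brokers: `m.owed_by["X"]` remains outstanding, mirroring the honour-system debt that X and M later clear through goods, services, or reverse transfers rather than a wire.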
The unique feature of the system is that no promissory instruments are exchanged between the hawala brokers; the transaction takes place entirely on the honour system. As the system does not depend on the legal enforceability of claims, it can operate even in the absence of a legal and juridical environment. Trust and extensive use of connections are the components that distinguish it from other remittance systems. Hawaladar networks are often based on membership in the same family, village, clan or ethnic group, and cheating is punished by effective excommunication and "loss of honour" that leads to severe economic hardship. Informal records are produced of individual transactions, and a running tally of the amount owed by one broker to another is kept. Settlements of debts between hawala brokers can take a variety of forms (such as goods, services, properties, transfers of employees, etc.), and need not take the form of direct cash transactions. In addition to commissions, hawala brokers often earn their profits through bypassing official exchange rates. Generally, the funds enter the system in the source country's currency and leave the system in the recipient country's currency. As settlements often take place without any foreign exchange transactions, they can be made at other than official exchange rates. Hawala is attractive to customers because it provides a fast and convenient transfer of funds, usually with a far lower commission than that charged by banks. Its advantages are most pronounced when the receiving country applies unprofitable exchange rate regulations or when the banking system in the receiving country is less complex (e.g., due to differences in legal environment in places such as Afghanistan, Yemen, Somalia). Moreover, in some parts of the world it is the only option for legitimate fund transfers, and has even been used by aid organizations in areas where it is the best-functioning institution. 
Regional variants Dubai has been prominent for decades as a welcoming hub for hawala transactions worldwide. South Asia Hundis The hundi is a financial instrument that developed on the Indian sub-continent for use in trade and credit transactions. Hundis are used as a form of remittance instrument to transfer money from place to place, as a form of credit instrument or IOU to borrow money, and as a bill of exchange in trade transactions. The Reserve Bank of India describes the hundi as "an unconditional order in writing made by a person directing another to pay a certain sum of money to a person named in the order." Horn of Africa According to the CIA, with the dissolution of Somalia's formal banking system, many informal money transfer operators arose to fill the void. It estimates that such hawaladars, xawilaad or xawala brokers are now responsible for the transfer of up to $1.6 billion per year in remittances to the country, most coming from working Somalis outside Somalia. Such funds have in turn had a stimulating effect on local business activity. West Africa The 2012 Tuareg rebellion left Northern Mali without an official money transfer service for months. The coping mechanisms that appeared were patterned on the hawala system. See also Economy related Global ranking of remittance by nations Remittances to India Hundi Informal value transfer system FATF Related contemporary issues Jizya Zakat Riba FATF blacklist Terrorism financing References Further reading A study exploring the operation of contemporary hawala networks, and the role they play in the transmission of migrant workers' remittances from Europe to South Asia. Informal value transfer systems Remittances Payment systems Islamic financial contracts
https://en.wikipedia.org/wiki/Hydroponics
Hydroponics
Hydroponics is a type of horticulture and a subset of hydroculture which involves growing plants, usually crops, without soil, by using mineral nutrient solutions in an aqueous solvent. Terrestrial or aquatic plants may grow with their roots exposed to the nutrient-rich liquid, or the roots may be physically supported by an inert medium such as perlite, gravel, or other substrates. Even with inert media, roots can change the pH of the rhizosphere, and root exudates can affect rhizosphere biology and the physiological balance of the nutrient solution through secondary metabolites. The nutrients used in hydroponic systems can come from many different sources, including fish excrement, duck manure, purchased chemical fertilizers, or artificial nutrient solutions.

Plants commonly grown hydroponically on inert media include tomatoes, peppers, cucumbers, strawberries, lettuces, and cannabis for commercial use, as well as Arabidopsis thaliana, which serves as a model organism in plant science and genetics.

Hydroponics offers many advantages, notably a decrease in water usage in agriculture. Growing tomatoes using intensive farming methods requires many times more water than growing them hydroponically, and aeroponics requires less water still. Since hydroponics takes much less water to grow produce, it could be possible in the future for people in harsh environments with little accessible water to grow their own food.

History
The earliest published work on growing terrestrial plants without soil was the 1627 book Sylva Sylvarum or 'A Natural History' by Francis Bacon, printed a year after his death. As a result of his work, water culture became a popular research technique. In 1699, John Woodward published his water culture experiments with spearmint. He found that plants in less-pure water sources grew better than plants in distilled water.
By 1842, a list of nine elements believed to be essential for plant growth had been compiled, and the discoveries of the German botanists Julius von Sachs and Wilhelm Knop in the years 1859–1875 resulted in the development of the technique of soilless cultivation. To quote Sachs directly: "In the year 1860, I published the results of experiments which demonstrated that land plants are capable of absorbing their nutritive matters out of watery solutions, without the aid of soil, and that it is possible in this way not only to maintain plants alive and growing for a long time, as had long been known, but also to bring about a vigorous increase of their organic substance, and even the production of seed capable of germination." Growth of terrestrial plants without soil in mineral nutrient solutions was later called "solution culture". It quickly became a standard research and teaching technique and is still widely used. Solution culture is now considered a type of hydroponics where there is no inert medium for stabilizing plant growth.

Around the 1930s, plant scientists investigated diseases of certain plants and observed symptoms related to existing soil conditions such as salinity. In this context, water culture experiments were undertaken with the hope of reproducing similar symptoms under controlled conditions. This approach, pushed forward by Dennis Robert Hoagland, led to model systems (e.g., the green alga Nitella) and standardized nutrient recipes that play an increasingly important role in modern plant physiology.

In 1929, William Frederick Gericke of the University of California at Berkeley began publicly promoting that solution culture be used for agricultural crop production. He first termed this cultivation method "aquaculture" but later found that the term was already applied to the culture of aquatic organisms. Gericke created a sensation by growing tomato vines to remarkable heights in his back yard in mineral nutrient solutions rather than soil.
In 1937 he introduced the term hydroponics for water culture, proposed to him by W. A. Setchell, a phycologist with an extensive education in the classics. Hydroponics derives from the neologism υδρωπονικά (from Greek ύδωρ, "water", and πονέω, "cultivate"), constructed by analogy to γεωπονικά (geoponica, that which concerns agriculture, from γαία, "earth", and πονέω, "cultivate"), replacing γεω-, earth, with ὑδρο-, water.

The time, however, was not yet ripe for the general technical application and commercial use of hydroponics for producing crops, because the system Gericke employed was too sensitive and required too much monitoring to be used in commercial applications. Reports of Gericke's work and his claims that hydroponics would revolutionize plant agriculture nevertheless prompted a huge number of requests for further information. Gericke had been denied use of the university's greenhouses for his experiments due to the administration's skepticism, and when the university tried to compel him to release his preliminary nutrient recipes developed at home, he requested greenhouse space and time to improve them using appropriate research facilities. While he was eventually provided greenhouse space, the university assigned Hoagland and Arnon to re-evaluate Gericke's claims and show that his formula held no benefit over soil-grown plant yields, a view held by Hoagland.

In 1940, Gericke, whose work is considered to be the basis for all forms of hydroponic growing, published the book Complete Guide to Soilless Gardening, after leaving his academic position in 1937 in a climate that was politically unfavorable. Therein, for the first time, he published his basic formula involving the macro- and micronutrient salts for hydroponically grown plants.

As a result of this research into Gericke's claims, ordered by the Director of the California Agricultural Experiment Station of the University of California, Claude B.
Hutchison, Dennis Robert Hoagland and Daniel Israel Arnon wrote a classic 1938 agricultural bulletin, The Water Culture Method for Growing Plants Without Soil, which made the claim that hydroponic crop yields were no better than crop yields obtained with good-quality soils. Ultimately, crop yields would be limited by factors other than mineral nutrients, especially light. However, this study did not adequately appreciate that hydroponics has other key benefits, including the fact that the roots of the plant have constant access to oxygen and that the plants have access to as much or as little water as they need.

This is important because one of the most common errors in cultivating plants is overwatering or underwatering, and hydroponics prevents both: large amounts of water, which would drown root systems in soil, can be made available to the plant, and any water not used is drained away, recirculated, or actively aerated, eliminating anoxic conditions in the root area. In soil, a grower needs to be very experienced to know exactly how much water to feed the plant. Too much, and the plant will be unable to access oxygen because the air in the soil pores is displaced; too little, and the plant will lose the ability to absorb nutrients, which are typically moved into the roots while dissolved, leading to nutrient deficiency symptoms such as chlorosis. Hoagland's views and the helpful support of the university prompted these two researchers to develop several new formulas for mineral nutrient solutions, known universally as Hoagland solution.

One of the earliest successes of hydroponics occurred on Wake Island, a rocky atoll in the Pacific Ocean used as a refueling stop for Pan American Airlines. Hydroponics was used there in the 1930s to grow vegetables for the passengers. Hydroponics was a necessity on Wake Island because there was no soil, and it was prohibitively expensive to airlift in fresh vegetables.
From 1943 to 1946, Daniel I. Arnon served as a major in the United States Army and used his prior expertise in plant nutrition to feed troops stationed on barren Ponape Island in the western Pacific by growing crops in gravel and nutrient-rich water, because there was no arable land available.

In the 1960s, Allen Cooper of England developed the nutrient film technique. The Land Pavilion at Walt Disney World's EPCOT Center opened in 1982 and prominently features a variety of hydroponic techniques. In recent decades, NASA has done extensive hydroponic research for its Controlled Ecological Life Support System (CELSS). Hydroponics research mimicking a Martian environment uses LED lighting to grow plants in a different color spectrum with much less heat. Ray Wheeler, a plant physiologist at Kennedy Space Center's Space Life Science Lab, believes that hydroponics will create advances in space travel as a bioregenerative life support system.

In 2007, Eurofresh Farms in Willcox, Arizona, sold more than 200 million pounds of hydroponically grown tomatoes. Eurofresh's glasshouse operation represented about a third of the commercial hydroponic greenhouse area in the U.S. Eurofresh tomatoes were pesticide-free, grown in rockwool with top irrigation. Eurofresh declared bankruptcy, and the greenhouses were acquired by NatureSweet Ltd. in 2013. As of 2017, Canada had hundreds of acres of large-scale commercial hydroponic greenhouses producing tomatoes, peppers and cucumbers. Due to technological advancements within the industry and numerous economic factors, the global hydroponics market is forecast to grow from US$226.45 million in 2016 to US$724.87 million by 2023.

Techniques
There are two main variations for each medium: sub-irrigation and top irrigation. For all techniques, most hydroponic reservoirs are now built of plastic, but other materials have been used, including concrete, glass, metal, vegetable solids, and wood.
The containers should exclude light to prevent algae and fungal growth in the nutrient solution.

Static solution culture
In static solution culture, plants are grown in containers of nutrient solution, such as glass Mason jars (typically in-home applications), pots, buckets, tubs, or tanks. The solution is usually gently aerated but may be un-aerated. If un-aerated, the solution level is kept low enough that enough of the roots are above the solution to get adequate oxygen. A hole is cut (or drilled) in the top of the reservoir for each plant; if the reservoir is a jar or tub, its lid may serve, but otherwise cardboard, foil, paper, wood or metal may be put on top. A single reservoir can be dedicated to a single plant or shared by several plants, and reservoir size can be increased as plant size increases. A home-made system can be constructed from food containers or glass canning jars with aeration provided by an aquarium pump, aquarium airline tubing and aquarium valves. Clear containers are covered with aluminium foil, butcher paper, black plastic, or other material to exclude light, thus helping to prevent the formation of algae. The nutrient solution is changed either on a schedule, such as once per week, or when the concentration drops below a certain level as determined with an electrical conductivity meter. Whenever the solution is depleted below a certain level, either water or fresh nutrient solution is added. A Mariotte's bottle, or a float valve, can be used to automatically maintain the solution level. In raft solution culture, plants are placed in a sheet of buoyant plastic that is floated on the surface of the nutrient solution; that way, the solution level never drops below the roots.

Continuous-flow solution culture
In continuous-flow solution culture, the nutrient solution constantly flows past the roots.
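The maintenance rules for static solution culture described above (scheduled changes, changes triggered by a conductivity meter, and topping up the level) can be sketched as a simple decision routine. This is an illustrative sketch, not part of the article; all threshold values are hypothetical examples.

```python
# Illustrative sketch of static-solution-culture maintenance.
# All threshold values below are hypothetical examples.

def maintenance_action(days_since_change, ec_ms_per_cm, level_fraction,
                       change_interval_days=7,
                       min_ec=1.2,       # hypothetical EC floor (mS/cm)
                       min_level=0.8):   # hypothetical fill fraction
    """Decide what to do with a static nutrient reservoir."""
    # Change the solution on a schedule, or when an electrical
    # conductivity meter shows the nutrients are depleted.
    if days_since_change >= change_interval_days or ec_ms_per_cm < min_ec:
        return "replace solution"
    # Otherwise, top up with water or fresh solution when the level
    # drops (a Mariotte's bottle or float valve can automate this).
    if level_fraction < min_level:
        return "top up"
    return "no action"

print(maintenance_action(3, 1.8, 0.9))   # no action
print(maintenance_action(3, 0.9, 0.9))   # replace solution
print(maintenance_action(3, 1.8, 0.5))   # top up
```

In practice a float valve handles the top-up branch passively; the sketch only makes the decision logic explicit.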
It is much easier to automate than static solution culture, because sampling and adjustments to the temperature, pH, and nutrient concentrations can be made in a large storage tank that can potentially serve thousands of plants. A popular variation is the nutrient film technique, or NFT, whereby a very shallow stream of water containing all the dissolved nutrients required for plant growth is recirculated in a thin layer past a bare root mat of plants in a watertight channel, with the upper surface exposed to air. As a consequence, an abundant supply of oxygen is provided to the roots of the plants.

A properly designed NFT system is based on using the right channel slope, the right flow rate, and the right channel length. The main advantage of the NFT system over other forms of hydroponics is that the plant roots are exposed to adequate supplies of water, oxygen, and nutrients. In all other forms of production, there is a conflict between the supply of these requirements, since excessive or deficient amounts of one result in an imbalance of one or both of the others. NFT, because of its design, provides a system in which all three requirements for healthy plant growth can be met at the same time, provided that the simple concept of NFT is always remembered and practised. The result of these advantages is that higher yields of high-quality produce are obtained over an extended period of cropping. A downside of NFT is that it has very little buffering against interruptions in the flow (e.g., power outages), but, overall, it is probably one of the more productive techniques.

The same design characteristics apply to all conventional NFT systems. While slopes along channels of 1:100 have been recommended, in practice it is difficult to build a base for channels that is sufficiently true to enable nutrient films to flow without ponding in locally depressed areas. As a consequence, it is recommended that slopes of 1:30 to 1:40 are used.
This allows for minor irregularities in the surface, but, even with these slopes, ponding and waterlogging may occur. The slope may be provided by the floor, or benches or racks may hold the channels and provide the required slope. Both methods are used and depend on local requirements, often determined by the site and crop requirements. As a general guide, flow rates for each gully should be about one liter per minute. At planting, rates may be half this, and 2 L/min appears to be about the upper limit; flow rates beyond these extremes are often associated with nutritional problems. Depressed growth rates of many crops have been observed when channels exceed 12 meters in length. On rapidly growing crops, tests have indicated that, while oxygen levels remain adequate, nitrogen may be depleted over the length of the gully. As a consequence, channel length should not exceed 10–15 meters. In situations where this is not possible, the reductions in growth can be eliminated by placing another nutrient feed halfway along the gully and halving the flow rate through each outlet.

Aeroponics
Aeroponics is a system wherein roots are continuously or discontinuously kept in an environment saturated with fine drops (a mist or aerosol) of nutrient solution. The method requires no substrate and entails growing plants with their roots suspended in a deep air or growth chamber, with the roots periodically wetted with a fine mist of atomized nutrients. Excellent aeration is the main advantage of aeroponics. Aeroponic techniques have proven commercially successful for propagation, seed germination, seed potato production, tomato production, leaf crops, and microgreens. Since inventor Richard Stoner commercialized aeroponic technology in 1983, aeroponics has been implemented as an alternative to water-intensive hydroponic systems worldwide. A limitation of hydroponics is that water can hold only a limited amount of dissolved air, no matter whether aerators are utilized or not.
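The NFT design rules of thumb given above (slope of roughly 1:30 to 1:40, flow of roughly 1–2 L/min per gully, channels no longer than 10–15 m) can be collected into a simple parameter check. This is an illustrative sketch, not from the article; the function and its messages are hypothetical.

```python
# Check an NFT channel design against the rule-of-thumb ranges in the
# text: slope about 1:30 to 1:40, flow of roughly 1-2 L/min per gully,
# and channel length not exceeding 10-15 m. Illustrative sketch only.

def check_nft_design(slope_run_per_unit_rise, flow_l_per_min, length_m):
    """Return a list of warnings for an NFT channel design."""
    warnings = []
    if not 30 <= slope_run_per_unit_rise <= 40:
        warnings.append("slope outside recommended 1:30 to 1:40")
    if not 1.0 <= flow_l_per_min <= 2.0:
        warnings.append("flow outside roughly 1-2 L/min per gully")
    if length_m > 15:
        warnings.append("channel longer than 10-15 m; consider a second "
                        "feed halfway along at half the flow")
    return warnings

print(check_nft_design(35, 1.0, 12))   # []
print(check_nft_design(100, 0.5, 20))  # three warnings
```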
Another distinct advantage of aeroponics over hydroponics is that any species of plant can be grown in a true aeroponic system, because the microenvironment of an aeroponic chamber can be finely controlled. The limitation of hydroponics is that certain species of plants can only survive for so long in water before they become waterlogged. The advantage of aeroponics is that suspended aeroponic plants receive 100% of the available oxygen and carbon dioxide at the root zone, stems, and leaves, thus accelerating biomass growth and reducing rooting times. NASA research has shown that aeroponically grown plants have an 80% increase in dry weight biomass (essential minerals) compared to hydroponically grown plants, and that aeroponics uses 65% less water than hydroponics. NASA also concluded that aeroponically grown plants require a quarter of the nutrient input compared to hydroponics. Unlike hydroponically grown plants, aeroponically grown plants will not suffer transplant shock when transplanted to soil, and aeroponics offers growers the ability to reduce the spread of disease and pathogens. Aeroponics is also widely used in laboratory studies of plant physiology and plant pathology. Aeroponic techniques have been given special attention by NASA, since a mist is easier to handle than a liquid in a zero-gravity environment.

Fogponics
Fogponics is a derivation of aeroponics wherein the nutrient solution is aerosolized by a diaphragm vibrating at ultrasonic frequencies. Solution droplets produced by this method tend to be 5–10 µm in diameter, smaller than those produced by forcing a nutrient solution through pressurized nozzles, as in aeroponics. The smaller size of the droplets allows them to diffuse through the air more easily, and to deliver nutrients to the roots without limiting their access to oxygen.
Passive sub-irrigation
Passive sub-irrigation, also known as passive hydroponics, semi-hydroponics, or hydroculture, is a method wherein plants are grown in an inert porous medium that transports water and fertilizer to the roots by capillary action from a separate reservoir as necessary, reducing labor and providing a constant supply of water to the roots. In the simplest method, the pot sits in a shallow solution of fertilizer and water or on a capillary mat saturated with nutrient solution. The various hydroponic media available, such as expanded clay and coconut husk, contain more air space than more traditional potting mixes, delivering increased oxygen to the roots, which is important in epiphytic plants such as orchids and bromeliads, whose roots are exposed to the air in nature. Additional advantages of passive hydroponics are the reduction of root rot and the additional ambient humidity provided through evaporation. In one comparison of crop yield per area in a controlled environment, hydroculture was roughly 10 times more efficient than traditional farming and used 13 times less water per crop cycle, but on average consumed 100 times more kilojoules of energy per kilogram.

Ebb and flow (flood and drain) sub-irrigation
In its simplest form, there is a tray above a reservoir of nutrient solution. Either the tray is filled with growing medium (clay granules being the most common) and planted directly, or pots of medium stand in the tray. At regular intervals, a simple timer causes a pump to fill the upper tray with nutrient solution, after which the solution drains back down into the reservoir. This keeps the medium regularly flushed with nutrients and air. Once the upper tray fills past the drain stop, the solution begins recirculating until the timer turns the pump off, and the water in the upper tray drains back into the reservoir.
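The flood-and-drain cycle described above amounts to a timer that periodically switches a pump on, floods the tray, and then lets the solution drain back. A minimal sketch of that timer logic follows; the interval values are hypothetical, not from the article.

```python
# Minimal sketch of an ebb-and-flow (flood and drain) timer cycle:
# the pump floods the upper tray for a short period at regular
# intervals, then the solution drains back into the reservoir.
# The interval values below are hypothetical examples.

def pump_is_on(minutes_since_midnight,
               flood_every_min=120,   # hypothetical cycle interval
               flood_for_min=15):     # hypothetical flood duration
    """True while the pump should be flooding the tray."""
    return minutes_since_midnight % flood_every_min < flood_for_min

print(pump_is_on(0))    # True  (start of a flood period)
print(pump_is_on(10))   # True  (still flooding)
print(pump_is_on(30))   # False (draining back to the reservoir)
```

A hardware timer implements the same modular schedule mechanically; the sketch just states it as arithmetic.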
Run-to-waste
In a run-to-waste system, nutrient and water solution is periodically applied to the medium surface. The method was invented in Bengal in 1946; for this reason it is sometimes referred to as "The Bengal System". This method can be set up in various configurations. In its simplest form, a nutrient-and-water solution is manually applied one or more times per day to a container of inert growing media, such as rockwool, perlite, vermiculite, coco fibre, or sand. In a slightly more complex system, it is automated with a delivery pump, a timer and irrigation tubing, with a delivery frequency governed by the key parameters of plant size, plant growing stage, climate, substrate, and substrate conductivity, pH, and water content. In a commercial setting, watering frequency is multi-factorial and governed by computers or PLCs. Commercial hydroponic production of large plants like tomatoes, cucumbers, and peppers uses one form or another of run-to-waste hydroponics. In environmentally responsible uses, the nutrient-rich waste is collected and processed through an on-site filtration system to be used many times, making the system very productive. Some bonsai are also grown in soil-free substrates (typically consisting of akadama, grit, diatomaceous earth and other inorganic components) and have their water and nutrients provided in a run-to-waste form.

Deep water culture
Deep water culture is a hydroponic method of plant production in which the plant roots are suspended in a solution of nutrient-rich, oxygenated water. Traditional methods favor the use of plastic buckets and large containers, with the plant contained in a net pot suspended from the centre of the lid and the roots suspended in the nutrient solution. The solution is oxygen-saturated by an air pump combined with porous stones. With this method, the plants grow much faster because of the high amount of oxygen that the roots receive.
The Kratky method is similar to deep water culture but uses a non-circulating water reservoir.

Top-fed deep water culture
Top-fed deep water culture is a technique involving delivering highly oxygenated nutrient solution directly to the root zone of plants. While deep water culture involves the plant roots hanging down into a reservoir of nutrient solution, in top-fed deep water culture the solution is pumped from the reservoir up to the roots (top feeding). The water is released over the plant's roots and then runs back into the reservoir below in a constantly recirculating system. As with deep water culture, there is an airstone in the reservoir that pumps air into the water via a hose from outside the reservoir; the airstone helps add oxygen to the water. Both the airstone and the water pump run 24 hours a day. The biggest advantage of top-fed deep water culture over standard deep water culture is increased growth during the first few weeks. With deep water culture, there is a time when the roots have not reached the water yet. With top-fed deep water culture, the roots get easy access to water from the beginning and will grow down to the reservoir much more quickly than with a deep water culture system. Once the roots have reached the reservoir, there is not a huge advantage of top-fed deep water culture over standard deep water culture; however, due to the quicker growth in the beginning, grow time can be reduced by a few weeks.

Rotary
A rotary hydroponic garden is a style of commercial hydroponics created within a circular frame which rotates continuously during the entire growth cycle of whatever plant is being grown. While system specifics vary, systems typically rotate once per hour, giving a plant 24 full turns within the circle each 24-hour period. Within the center of each rotary hydroponic garden can be a high-intensity grow light, designed to simulate sunlight, often with the assistance of a mechanized timer.
Each day, as the plants rotate, they are periodically watered with a hydroponic growth solution to provide all the nutrients necessary for robust growth. Due to the plants' continuous fight against gravity, they typically mature much more quickly than when grown in soil or in other traditional hydroponic growing systems. Because rotary hydroponic systems have a small footprint, they allow more plant material to be grown per unit of floor space than other traditional hydroponic systems. Rotary hydroponic systems should nevertheless be avoided in most circumstances, mainly because of their experimental nature and the high costs of finding, buying, operating, and maintaining them.

Substrates (growing support materials)
One of the most obvious decisions hydroponic farmers have to make is which medium to use. Different media are appropriate for different growing techniques.

Rock wool
Rock wool (mineral wool) is the most widely used medium in hydroponics. Rock wool is an inert substrate suitable for both run-to-waste and recirculating systems. Rock wool is made from molten rock, basalt or 'slag' that is spun into bundles of single-filament fibres and bonded into a medium capable of capillary action, and is, in effect, protected from most common microbiological degradation. Rock wool is typically used only for the seedling stage, or with newly cut clones, but can remain with the plant base for its lifetime. Rock wool has many advantages and some disadvantages, the latter being possible (mechanical) skin irritancy whilst handling (about 1:1000); flushing with cold water usually brings relief. Advantages include its proven efficiency and effectiveness as a commercial hydroponic substrate. Most of the rock wool sold to date is a non-hazardous, non-carcinogenic material, falling under Note Q of the European Union Classification, Labelling and Packaging Regulation (CLP).
Mineral wool products can be engineered to hold large quantities of water and air that aid root growth and nutrient uptake in hydroponics; their fibrous nature also provides a good mechanical structure to hold the plant stable. The naturally high pH of mineral wool makes it initially unsuitable for plant growth and requires "conditioning" to produce a wool with an appropriate, stable pH.

Expanded clay aggregate
Baked clay pellets are suitable for hydroponic systems in which all nutrients are carefully controlled in water solution. The clay pellets are inert, pH-neutral, and do not contain any nutrient value. The clay is formed into round pellets and fired in rotary kilns at high temperatures. This causes the clay to expand, like popcorn, and become porous. It is light in weight and does not compact over time. The shape of an individual pellet can be irregular or uniform depending on brand and manufacturing process. The manufacturers consider expanded clay to be an ecologically sustainable and re-usable growing medium because of its ability to be cleaned and sterilized, typically by washing in solutions of white vinegar, chlorine bleach, or hydrogen peroxide, and rinsing completely. Another view is that clay pebbles are best not re-used even when they are cleaned, due to root growth that may enter the medium; breaking open a clay pebble after a crop has been shown to reveal this growth.

Growstones
Growstones, made from glass waste, have both more air and water retention space than perlite and peat. This aggregate holds more water than parboiled rice hulls. Growstones by volume consist of 0.5 to 5% calcium carbonate – for a standard 5.1 kg bag of Growstones, that corresponds to 25.8 to 258 grams of calcium carbonate. The remainder is soda-lime glass.

Coconut coir
Regardless of hydroponic demand, coconut coir is a natural byproduct of coconut processing.
The outer husk of a coconut consists of fibers which are commonly used to make a myriad of items ranging from floor mats to brushes. After the long fibers are used for those applications, the dust and short fibers are merged to create coir. Coconuts absorb high levels of nutrients throughout their life cycle, so the coir must undergo a maturation process before it becomes a viable growth medium. This process removes salt, tannins and phenolic compounds through substantial water washing. Contaminated water is a byproduct of this process, as three hundred to six hundred liters of water are needed per cubic meter of coir. Additionally, this maturation can take up to six months, and one study concluded that the working conditions during the maturation process are dangerous and would be illegal in North America and Europe. Despite requiring attention and posing health risks and environmental impacts, coconut coir has impressive material properties. When exposed to water, the brown, dry, chunky and fibrous material expands to nearly three to four times its original size. This characteristic, combined with coconut coir's water retention capacity and resistance to pests and diseases, makes it an effective growth medium. Used as an alternative to rock wool, coconut coir, also known as coir peat, offers optimized growing conditions.

Rice husks
Parboiled rice husks (PBH) are an agricultural byproduct that would otherwise have little use. They decay over time, allow drainage, and retain even less water than growstones. One study showed that rice husks did not interfere with the effects of plant growth regulators.

Perlite
Perlite is a volcanic rock that has been superheated into very lightweight, expanded glass pebbles. It is used loose or in plastic sleeves immersed in the water. It is also used in potting soil mixes to decrease soil density. Perlite has similar properties and uses to vermiculite but, in general, holds more air and less water and is buoyant.
Vermiculite
Like perlite, vermiculite is a mineral that has been superheated until it has expanded into light pebbles. Vermiculite holds more water than perlite and has a natural "wicking" property that can draw water and nutrients in a passive hydroponic system. If too much water and not enough air surrounds the plant's roots, it is possible to gradually lower the medium's water-retention capability by mixing in increasing quantities of perlite.

Pumice
Like perlite, pumice is a lightweight, mined volcanic rock that finds application in hydroponics.

Sand
Sand is cheap and easily available. However, it is heavy, does not hold water very well, and must be sterilized between uses. Because sand is easily available and in high demand, sand shortages are a growing concern.

Gravel
The same type of gravel that is used in aquariums can be used, though any small gravel will do, provided it is washed first. Indeed, plants growing in a typical traditional gravel filter bed, with water circulated using electric powerhead pumps, are in effect being grown using gravel hydroponics, also termed "nutriculture". Gravel is inexpensive, easy to keep clean, drains well and will not become waterlogged. However, it is also heavy, and, if the system does not provide continuous water, the plant roots may dry out.

Wood fiber
Wood fibre, produced from the steam friction of wood, is a very efficient organic substrate for hydroponics. It has the advantage that it keeps its structure for a very long time. Wood wool (i.e. wood slivers) has been used since the earliest days of hydroponics research. However, more recent research suggests that wood fibre may have detrimental effects on plant growth regulators.

Sheep wool
Wool from shearing sheep is a little-used yet promising renewable growing medium.
In a study comparing wool with peat slabs, coconut fibre slabs, perlite and rockwool slabs for growing cucumber plants, sheep wool had a greater air capacity of 70%, which decreased with use to a comparable 43%, and a water capacity that increased from 23% to 44% with use. Using sheep wool resulted in the greatest yield of the tested substrates, while application of a biostimulator consisting of humic acid, lactic acid and Bacillus subtilis improved yields in all substrates.

Brick shards
Brick shards have similar properties to gravel. They have the added disadvantages of possibly altering the pH and requiring extra cleaning before reuse.

Polystyrene packing peanuts
Polystyrene packing peanuts are inexpensive, readily available, and have excellent drainage. However, they can be too lightweight for some uses, and they are used mainly in closed-tube systems. Note that non-biodegradable polystyrene peanuts must be used; biodegradable packing peanuts will decompose into a sludge. Plants may absorb styrene and pass it to their consumers; this is a possible health risk.

Nutrient solutions

Inorganic hydroponic solutions
The formulation of hydroponic solutions is an application of plant nutrition, with nutrient deficiency symptoms mirroring those found in traditional soil-based agriculture. However, the underlying chemistry of hydroponic solutions can differ from soil chemistry in many significant ways. Important differences include:

Unlike soil, hydroponic nutrient solutions do not have cation-exchange capacity (CEC) from clay particles or organic matter. The absence of CEC and soil pores means the pH, oxygen saturation, and nutrient concentrations can change much more rapidly in hydroponic setups than is possible in soil.

Selective absorption of nutrients by plants often imbalances the amount of counterions in solution. This imbalance can rapidly affect solution pH and the ability of plants to absorb nutrients of similar ionic charge (see membrane potential).
For instance, nitrate anions are often consumed rapidly by plants to form proteins, leaving an excess of cations in solution. This cation imbalance can lead to deficiency symptoms in other cation-based nutrients (e.g. Mg2+) even when an ideal quantity of those nutrients is dissolved in the solution.

Depending on the pH or on the presence of water contaminants, nutrients such as iron can precipitate from the solution and become unavailable to plants. Routine adjustment of pH, buffering of the solution, or the use of chelating agents is often necessary.

Unlike soil types, which can vary greatly in their composition, hydroponic solutions are often standardized and require routine maintenance for plant cultivation. Hydroponic solutions are periodically pH adjusted to near neutral (pH ≈ 6.0) and are aerated with oxygen. Also, water levels must be refilled to account for transpiration losses, and nutrient solutions require re-fortification to correct the nutrient imbalances that occur as plants grow and deplete nutrient reserves. Sometimes the regular measurement of nitrate ions is used as a parameter to estimate the remaining proportions and concentrations of other nutrient ions in a solution.

As in conventional agriculture, nutrients should be adjusted to satisfy Liebig's law of the minimum for each specific plant variety. Nevertheless, generally acceptable concentrations for nutrient solutions exist, with minimum and maximum concentration ranges for most plants being somewhat similar. Most nutrient solutions are mixed to have concentrations between 1,000 and 2,500 ppm. Acceptable concentrations for the individual nutrient ions, which comprise that total ppm figure, are summarized in the following table. For essential nutrients, concentrations below these ranges often lead to nutrient deficiencies, while exceeding these ranges can lead to nutrient toxicity. Optimum nutrient concentrations for plant varieties are found empirically by experience or by plant tissue tests.
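As a minimal sketch of the totals check described above (the per-ion values are illustrative assumptions, not a recommended recipe), the sum of the individual ion concentrations can be compared against the commonly cited 1,000–2,500 ppm range:

```python
# Minimal sketch: sum per-ion concentrations (ppm) of a hypothetical
# nutrient mix and check the total against the commonly cited
# 1,000-2,500 ppm range. The ion values below are illustrative only.

def total_ppm(ion_ppm):
    """Total dissolved nutrient concentration in ppm."""
    return sum(ion_ppm.values())

def in_acceptable_range(total, low=1000, high=2500):
    """True if the total falls inside the general acceptable range."""
    return low <= total <= high

solution = {
    "NO3-": 600, "K+": 300, "Ca2+": 200, "PO4(3-)": 80,
    "Mg2+": 50, "SO4(2-)": 150, "Fe": 3,
}

total = total_ppm(solution)
print(total, in_acceptable_range(total))  # -> 1383 True
```

Per-ion limits (the table the text refers to) would be checked the same way, ion by ion, against their individual minimum and maximum ranges.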
Organic hydroponic solutions

Organic fertilizers can be used to supplement or entirely replace the inorganic compounds used in conventional hydroponic solutions. However, using organic fertilizers introduces a number of challenges that are not easily resolved. Examples include:

Organic fertilizers are highly variable in their nutritional compositions in terms of minerals and different chemical species. Even similar materials can differ significantly based on their source (e.g. the quality of manure varies based on an animal's diet).

Organic fertilizers are often sourced from animal byproducts, making disease transmission a serious concern for plants grown for human consumption or animal forage.

Organic fertilizers are often particulate and can clog substrates or other growing equipment. Sieving or milling the organic materials to fine dusts is often necessary.

Some organic materials (particularly manures and offal) can further degrade to emit foul odors under anaerobic conditions.

Many organic molecules (e.g. sugars) demand additional oxygen during aerobic degradation, which is essential for cellular respiration in the plant roots.

Organic compounds are not necessary for normal plant nutrition.

Nevertheless, if precautions are taken, organic fertilizers can be used successfully in hydroponics.

Organically sourced macronutrients

Examples of suitable materials, with their average nutritional contents tabulated in terms of percent dried mass, are listed in the following table.

Organically sourced micronutrients

Micronutrients can be sourced from organic fertilizers as well. For example, composted pine bark is high in manganese and is sometimes used to fulfill that mineral requirement in hydroponic solutions. To satisfy requirements for National Organic Programs, pulverized, unrefined minerals (e.g. gypsum, calcite, and glauconite) can also be added to satisfy a plant's nutritional needs.
Additives

Compounds can be added in both organic and conventional hydroponic systems to improve nutrient acquisition and uptake by the plant. Chelating agents and humic acid have been shown to increase nutrient uptake. Additionally, plant growth-promoting rhizobacteria (PGPR), which are regularly utilized in field and greenhouse agriculture, have been shown to benefit hydroponic plant growth, development and nutrient acquisition. Some PGPR are known to increase nitrogen fixation. While nitrogen is generally abundant in hydroponic systems with properly maintained fertilizer regimens, the Azospirillum and Azotobacter genera can help maintain mobilized forms of nitrogen in systems with higher microbial growth in the rhizosphere. Traditional fertilizer methods often lead to high accumulated concentrations of nitrate within plant tissue at harvest. Rhodopseudomonas palustris has been shown to increase nitrogen use efficiency, increase yield, and decrease nitrate concentration by 88% at harvest compared to traditional hydroponic fertilizer methods in leafy greens. Many Bacillus spp., Pseudomonas spp. and Streptomyces spp. convert forms of phosphorus in the soil that are unavailable to the plant into soluble anions by decreasing soil pH, releasing phosphorus bound in chelated form that is available in a wider pH range, and mineralizing organic phosphorus. Some studies have found that Bacillus inoculants allow hydroponic leaf lettuce to overcome high salt stress that would otherwise reduce growth. This can be especially beneficial in regions with high electrical conductivity or salt content in their water source. This could potentially avoid costly reverse osmosis filtration systems while maintaining high crop yield.

Tools

Common equipment

Managing nutrient concentrations, oxygen saturation, and pH values within acceptable ranges is essential for successful hydroponic horticulture.
Common tools used to manage hydroponic solutions include:

Electrical conductivity meters, which estimate nutrient ppm by measuring how well a solution transmits an electric current.
pH meters, which use an electric current to determine the concentration of hydrogen ions in solution.
Oxygen electrodes, electrochemical sensors for determining the oxygen concentration in solution.
Litmus paper, disposable pH indicator strips that determine hydrogen ion concentrations by a color-changing chemical reaction.
Graduated cylinders or measuring spoons to measure out premixed, commercial hydroponic solutions.

Equipment

Chemical equipment can also be used to perform accurate chemical analyses of nutrient solutions. Examples include:

Balances for accurately measuring materials.
Laboratory glassware, such as burettes and pipettes, for performing titrations.
Colorimeters for solution tests which apply the Beer–Lambert law.
Spectrophotometers to measure the concentrations of the lead parameter nitrate and other nutrients, such as phosphate, sulfate or iron.

Using chemical equipment for hydroponic solutions can be beneficial to growers of any background because nutrient solutions are often reusable. Because nutrient solutions are virtually never completely depleted, and should never be due to the unacceptably low osmotic pressure that would result, re-fortification of old solutions with new nutrients can save growers money and can control point source pollution, a common source of the eutrophication of nearby lakes and streams.

Software

Although pre-mixed concentrated nutrient solutions are generally purchased from commercial nutrient manufacturers by hydroponic hobbyists and small commercial growers, several tools exist to help anyone prepare their own solutions without extensive knowledge about chemistry.
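As a minimal illustration of the kinds of calculation such tools perform (all values and the conversion factor below are assumptions for illustration, not from the source), an EC reading is commonly converted to an estimated total ppm with a meter-dependent factor, and stock-solution dilution follows C1·V1 = C2·V2:

```python
# Minimal sketch of two common hydroponic calculations.
# The EC-to-ppm conversion factor (0.5-0.7 ppm per uS/cm) is a
# meter-dependent rule of thumb; all example values are illustrative.

def ec_to_ppm(ec_us_cm, factor=0.5):
    """Estimate total dissolved solids (ppm) from EC in microsiemens/cm."""
    return ec_us_cm * factor

def stock_volume_ml(stock_ppm, target_ppm, reservoir_l):
    """Millilitres of concentrated stock needed, via C1*V1 = C2*V2."""
    return target_ppm * (reservoir_l * 1000) / stock_ppm

print(ec_to_ppm(2000))                     # -> 1000.0
print(stock_volume_ml(100_000, 1500, 20))  # -> 300.0 mL for a 20 L reservoir
```

Full-featured tools additionally resolve which salts supply which ions; this sketch only covers the arithmetic behind the meter reading and the dilution step.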
The free and open source tools HydroBuddy and HydroCal have been created by professional chemists to help any hydroponics grower prepare their own nutrient solutions. The first program is available for Windows, Mac and Linux while the second one can be used through a simple JavaScript interface. Both programs allow for basic nutrient solution preparation, although HydroBuddy provides added functionality to use and save custom substances, save formulations and predict electrical conductivity values.

Mixing solutions

Often mixing hydroponic solutions using individual salts is impractical for hobbyists or small-scale commercial growers because commercial products are available at reasonable prices. However, even when buying commercial products, multi-component fertilizers are popular. Often these products are bought as three-part formulas which emphasize certain nutritional roles. For example, solutions for vegetative growth (i.e. high in nitrogen), flowering (i.e. high in potassium and phosphorus), and micronutrient solutions (i.e. with trace minerals) are popular. The timing and application of these multi-part fertilizers should coincide with a plant's growth stage. For example, at the end of an annual plant's life cycle, a plant should be restricted from high-nitrogen fertilizers. In most plants, nitrogen restriction inhibits vegetative growth and helps induce flowering.

Additional improvements

Growrooms

With pest problems reduced and nutrients constantly fed to the roots, productivity in hydroponics is high; however, growers can further increase yield by manipulating a plant's environment by constructing sophisticated growrooms.

CO2 enrichment

To increase yield further, some sealed greenhouses inject CO2 into their environment to help improve growth and plant fertility.
See also

Aeroponics
Anthroponics
Aquaponics
Fogponics
Folkewall
Grow box
Growroom
Organoponics
Passive hydroponics
Plant factory
Plant nutrition
Plant pathology
Root rot
Vertical farming
Xeriscaping
https://en.wikipedia.org/wiki/Humanist%20%28disambiguation%29
Humanist (disambiguation)
Humanist may refer to:

A proponent or practitioner of humanism, which has several distinct senses, listed at Humanism (disambiguation)
A Renaissance Humanist, or scholar in the Renaissance
Humanist, typeface classes under the Vox-ATypI classification, which may refer to: Humanist sans-serif typefaces; Humanist or old-style serif typefaces
Humanist (electronic seminar), an email discussion list on humanities computing, described as "an international online seminar on humanities computing and the digital humanities"
The Humanist (journal), a magazine published by the American Humanist Association
Humanist (journal), a magazine published by the Norwegian Humanist Association
A scholar or academic in the Humanities
Humanism (philosophy of education)
Humanistic (album), the 2001 debut album by Abandoned Pools
Humanist minuscule, a style of handwriting invented in 15th-century Italy
Humanist Movement, an international volunteer organisation linked to Silo (Mario Rodriguez Cobos), sometimes referred to as New Humanism

See also

Centre démocrate humaniste, also known as The Humanist Democratic Centre
Humanist International, a consortium of the Humanist Movement's political parties
Humanistic Judaism
Humanist Manifesto
Humanistic psychology
https://en.wikipedia.org/wiki/Henry%20Purcell
Henry Purcell
Henry Purcell (September 1659 – 21 November 1695) was an English composer. Although he incorporated Italian and French stylistic elements, Purcell's style was a uniquely English form of Baroque music. He is generally considered to be one of the greatest English composers; no later native-born English composer approached his fame until Edward Elgar, Ralph Vaughan Williams, Gustav Holst, William Walton and Benjamin Britten in the 20th century.

Life and work

Early life

Purcell was born in St Ann's Lane, Old Pye Street, Westminster – the area of London later known as Devil's Acre, a notorious slum – in 1659. Henry Purcell Senior, whose older brother Thomas Purcell was a musician, was a gentleman of the Chapel Royal and sang at the coronation of King Charles II of England. Henry the elder had three sons: Edward, Henry and Daniel. Daniel Purcell, the youngest of the brothers, was also a prolific composer who wrote the music for much of the final act of The Indian Queen after his brother Henry's death. The family lived just a few hundred yards west of Westminster Abbey from 1659 onwards. After his father's death in 1664, Purcell was placed under the guardianship of his uncle Thomas, who showed him great affection and kindness. Thomas arranged for Henry to be admitted as a chorister. Henry studied first under Captain Henry Cooke, Master of the Children, and afterwards under Pelham Humfrey, Cooke's successor. The composer Matthew Locke was a family friend and, particularly with his semi-operas, probably also had a musical influence on the young Purcell. Henry was a chorister in the Chapel Royal until his voice broke in 1673, when he became assistant to the organ-builder John Hingston, who held the post of keeper of wind instruments to the King.

Career

Purcell is said to have been composing at nine years old, but the earliest work that can be certainly identified as his is an ode for the King's birthday, written in 1670.
(The dates for his compositions are often uncertain, despite considerable research.) It is assumed that the three-part song Sweet tyranness, I now resign was written by him as a child. After Humfrey's death, Purcell continued his studies under Dr John Blow. He attended Westminster School and in 1676 was appointed copyist at Westminster Abbey. Henry Purcell's earliest anthem Lord, who can tell was composed in 1678. It is a psalm that is prescribed for Christmas Day and also to be read at morning prayer on the fourth day of the month. In 1679, he wrote songs for John Playford's Choice Ayres, Songs and Dialogues and an anthem, the name of which is unknown, for the Chapel Royal. From an extant letter written by Thomas Purcell we learn that this anthem was composed for the exceptionally fine voice of the Rev. John Gostling, then at Canterbury, but afterwards a gentleman of His Majesty's Chapel. Purcell wrote several anthems at different times for Gostling's extraordinary basso profondo voice, which is known to have had a range of at least two full octaves, from D below the bass staff to the D above it. The dates of very few of these sacred compositions are known; perhaps the most notable example is the anthem They that go down to the sea in ships. In gratitude for the providential escape of King Charles II from shipwreck, Gostling, who had been of the royal party, put together some verses from the Psalms in the form of an anthem and requested Purcell to set them to music. The challenging work opens with a passage which traverses the full extent of Gostling's range, beginning on the upper D and descending two octaves to the lower. In 1679, Blow, who had been appointed organist of Westminster Abbey 10 years before, resigned his office in favour of Purcell. Purcell now devoted himself almost entirely to the composition of sacred music, and for six years severed his connection with the theatre. 
However, during the early part of the year, probably before taking up his new office, he had produced two important works for the stage, the music for Nathaniel Lee's Theodosius, and Thomas d'Urfey's Virtuous Wife. Between 1680 and 1688 Purcell wrote music for seven plays. The composition of his chamber opera Dido and Aeneas, which forms a very important landmark in the history of English dramatic music, has been attributed to this period, and its earliest production may well have predated the documented one of 1689. It was written to a libretto furnished by Nahum Tate, and performed in 1689 in cooperation with Josias Priest, a dancing master and the choreographer for the Dorset Garden Theatre. Priest's wife kept a boarding school for young gentlewomen, first in Leicester Fields and afterwards at Chelsea, where the opera was performed. It is occasionally considered the first genuine English opera, though that title is usually given to Blow's Venus and Adonis: as in Blow's work, the action does not progress in spoken dialogue but in Italian-style recitative. Each work runs to less than one hour. At the time, Dido and Aeneas never found its way to the theatre, though it appears to have been very popular in private circles. It is believed to have been extensively copied, but only one song was printed by Purcell's widow in Orpheus Britannicus, and the complete work remained in manuscript until 1840 when it was printed by the Musical Antiquarian Society under the editorship of Sir George Macfarren. The composition of Dido and Aeneas gave Purcell his first chance to write a sustained musical setting of a dramatic text. It was his only opportunity to compose a work in which the music carried the entire drama. The story of Dido and Aeneas derives from the original source in Virgil's epic the Aeneid. 
Soon after Purcell's marriage, in 1682, on the death of Edward Lowe, he was appointed organist of the Chapel Royal, an office which he was able to hold simultaneously with his position at Westminster Abbey. His eldest son was born in this same year, but he was short-lived. His first printed composition, Twelve Sonatas, was published in 1683. For some years after this, he was busy in the production of sacred music, odes addressed to the king and royal family, and other similar works. In 1685, he wrote two of his finest anthems, I was glad and My heart is inditing, for the coronation of King James II. In 1690 he composed a setting of the birthday ode for Queen Mary, Arise, my muse and four years later wrote one of his most elaborate, important and magnificent works – a setting for another birthday ode for the Queen, written by Nahum Tate, entitled Come Ye Sons of Art. In 1687, he resumed his connection with the theatre by furnishing the music for John Dryden's tragedy Tyrannick Love. In this year, Purcell also composed a march and passepied called Quick-step, which became so popular that Lord Wharton adapted the latter to the fatal verses of Lillibullero; and in or before January 1688, Purcell composed his anthem Blessed are they that fear the Lord by the express command of the King. A few months later, he wrote the music for D'Urfey's play, The Fool's Preferment. In 1690, he composed the music for Betterton's adaptation of Fletcher and Massinger's Prophetess (afterwards called Dioclesian) and Dryden's Amphitryon. In 1691, he wrote the music for what is sometimes considered his dramatic masterpiece, King Arthur, or The British Worthy. In 1692, he composed The Fairy-Queen (an adaptation of Shakespeare's A Midsummer Night's Dream), the score of which (his longest for theatre) was rediscovered in 1901 and published by the Purcell Society.
The Indian Queen followed in 1695, in which year he also wrote songs for Dryden and Davenant's version of Shakespeare's The Tempest (recently, this has been disputed by music scholars), probably including "Full fathom five" and "Come unto these yellow sands". The Indian Queen was adapted from a tragedy by Dryden and Sir Robert Howard. In these semi-operas (another term for which at the time was "dramatic opera"), the main characters of the plays do not sing but speak their lines: the action moves in dialogue rather than recitative. The related songs are sung "for" them by singers, who have minor dramatic roles. Purcell's Te Deum and Jubilate Deo were written for Saint Cecilia's Day, 1694, the first English Te Deum ever composed with orchestral accompaniment. This work was annually performed at St Paul's Cathedral until 1712, after which it was performed alternately with Handel's Utrecht Te Deum and Jubilate until 1743, when both works were replaced by Handel's Dettingen Te Deum. He composed an anthem and two elegies for Queen Mary II's funeral, his Funeral Sentences and Music for the Funeral of Queen Mary. Besides the operas and semi-operas already mentioned, Purcell wrote the music and songs for Thomas d'Urfey's The Comical History of Don Quixote, Bonduca, The Indian Queen and others, a vast quantity of sacred music, and numerous odes, cantatas, and other miscellaneous pieces. He wrote little instrumental chamber music after his early career, and his keyboard music consists of an even smaller number of harpsichord suites and organ pieces. In 1693, Purcell composed music for two comedies: The Old Bachelor, and The Double Dealer. Purcell also composed for five other plays within the same year. In July 1695, Purcell composed an ode for the Duke of Gloucester for his sixth birthday. The ode is titled Who can from joy refrain? Purcell's four-part sonatas were issued in 1697.
In the final six years of his life, Purcell wrote music for forty-two plays.

Death

Purcell died in 1695 at his home in Marsham Street, at the height of his career. He is believed to have been 35 or 36 years old at the time. The cause of his death is unclear: one theory is that he caught a chill after returning home late from the theatre one night to find that his wife had locked him out. Another is that he succumbed to tuberculosis. The beginning of Purcell's will reads: Purcell is buried adjacent to the organ in Westminster Abbey. The music that he had earlier composed for Queen Mary's funeral was performed during his funeral as well. Purcell was universally mourned as "a very great master of music." Following his death, the officials at Westminster honoured him by unanimously voting that he be buried with no expense spared in the north aisle of the Abbey. His epitaph reads: "Here lyes Henry Purcell Esq., who left this life and is gone to that Blessed Place where only His harmony can be exceeded." Purcell fathered six children by his wife Frances, four of whom died in infancy. His wife, as well as his son Edward (1689–1740) and daughter Frances, survived him. His wife Frances died in 1706, having published a number of her husband's works, including the now-famous collection called Orpheus Britannicus, in two volumes, printed in 1698 and 1702, respectively. Edward was appointed organist of St Clement's, Eastcheap, London, in 1711 and was succeeded by his son Edward Henry Purcell (died 1765). Both men were buried in St Clement's near the organ gallery.

Legacy

Notable compositions

Purcell worked in many genres, in works closely linked to the court (such as the symphony song), to the Chapel Royal (such as the symphony anthem), and to the theatre. Among Purcell's most notable works are his opera Dido and Aeneas (1688), his semi-operas Dioclesian (1690), King Arthur (1691), The Fairy-Queen (1692) and Timon of Athens (1695), as well as the compositions Hail!
Bright Cecilia (1692), Come Ye Sons of Art (1694) and Funeral Sentences and Music for the Funeral of Queen Mary (1695).

Influence and reputation

After his death, Purcell was honoured by many of his contemporaries, including his old friend John Blow, who wrote An Ode, on the Death of Mr. Henry Purcell (Mark how the lark and linnet sing) with text by his old collaborator, John Dryden. William Croft's 1724 setting for the Burial Service was written in the style of "the great Master". Croft preserved Purcell's setting of "Thou knowest Lord" (Z 58) in his service, for reasons "obvious to any artist"; it has been sung at every British state funeral ever since. More recently, the English poet Gerard Manley Hopkins wrote a famous sonnet entitled simply "Henry Purcell", with a headnote reading: "The poet wishes well to the divine genius of Purcell and praises him that, whereas other musicians have given utterance to the moods of man's mind, he has, beyond that, uttered in notes the very make and species of man as created both in him and in all men generally." Purcell also had a strong influence on the composers of the English musical renaissance of the early 20th century, most notably Benjamin Britten, who arranged many of Purcell's vocal works for voice(s) and piano in Britten's Purcell Realizations, including from Dido and Aeneas, and whose The Young Person's Guide to the Orchestra is based on a theme from Purcell's Abdelazar. Stylistically, the aria "I know a bank" from Britten's opera A Midsummer Night's Dream is clearly inspired by Purcell's aria "Sweeter than Roses", which Purcell originally wrote as part of incidental music to Richard Norton's Pausanias, the Betrayer of His Country. Purcell is honoured together with Johann Sebastian Bach and George Frideric Handel with a feast day on the liturgical calendar of the Episcopal Church (USA) on 28 July. In a 1940 interview Ignaz Friedman stated that he considered Purcell as great as Bach and Beethoven.
In Victoria Street, Westminster, England, there is a bronze monument to Purcell, sculpted by Glynn Williams and unveiled in 1995 to mark the three hundredth anniversary of his death. Purcell's works have been catalogued by Franklin Zimmerman, who gave them a number preceded by Z. A Purcell Club was founded in London in 1836 for promoting the performance of his music but was dissolved in 1863. In 1876 a Purcell Society was founded, which published new editions of his works. A modern-day Purcell Club has been created, and provides guided tours and concerts in support of Westminster Abbey. Today there is a Henry Purcell Society of Boston, which performs his music in live concert and currently streams concerts online in response to the pandemic. There is a Purcell Society in London, which collects and studies Purcell manuscripts and musical scores, concentrating on producing revised versions of the scores of all his music. So strong was his reputation that a popular wedding processional was incorrectly attributed to Purcell for many years. The so-called Purcell's Trumpet Voluntary was in fact written around 1700 by the British composer Jeremiah Clarke as the Prince of Denmark's March.

In popular culture

Music for the Funeral of Queen Mary was reworked by Wendy Carlos for the title music of Stanley Kubrick's 1971 film A Clockwork Orange. The 1973 Rolling Stone review of Jethro Tull's A Passion Play compared the musical style of the album with that of Purcell. In 2009 Pete Townshend of The Who, an English rock band that established itself in the 1960s, identified Purcell's harmonies, particularly the use of suspension and resolution that Townshend had learned from producer Kit Lambert, as an influence on the band's music (in songs such as "Won't Get Fooled Again" (1971), "I Can See for Miles" (1967) and the very Purcellian intro to "Pinball Wizard"). Purcell's music was widely featured as background music in the Academy Award-winning 1979 film Kramer vs.
Kramer, with a soundtrack on CBS Masterworks Records. In the 21st century, the soundtrack of the 2005 film version of Pride and Prejudice features a dance titled "A Postcard to Henry Purcell". This is a version by composer Dario Marianelli of Purcell's Abdelazar theme. In the German-language 2004 movie Downfall, the music of Dido's Lament is used repeatedly as Nazi Germany collapses. The 2012 film Moonrise Kingdom contains Benjamin Britten's version of the Rondeau from Purcell's Abdelazar, created for his 1946 The Young Person's Guide to the Orchestra. In 2013, the Pet Shop Boys released their single "Love Is a Bourgeois Construct", incorporating one of the same ground basses from King Arthur used by Michael Nyman in his Draughtsman's Contract score. Olivia Chaney performs her adaptation of "There's Not a Swain" on her CD "The Longest River". The 1995 film England, My England tells the story of an actor who is himself writing a play about Purcell's life and music, and features many of his compositions. "What Power Art Thou" (from King Arthur, or The British Worthy (Z. 628), a semi-opera in five acts with music by Henry Purcell and a libretto by John Dryden) is featured in The Crown (s1e9).

External links

Purcell's London by Brian Robins
The Purcell Society
Short biography, audio samples and images of Purcell
Monument to Purcell
Dido's Lament – Research leading to a narrative account of how Henry Purcell's opera Dido and Aeneas was created.
Henry Purcell at AllMusic
National Trust catalogue entry for manuscript music, copied by Philip Hayes directly from Purcell's original manuscripts
Select digitized images from Old English Songs, containing works by Purcell, housed at the University of Kentucky Libraries Special Collections Research Center
https://en.wikipedia.org/wiki/Hydrophobe
Hydrophobe
In chemistry, hydrophobicity is the physical property of a molecule (known as a hydrophobe) that is seemingly repelled from a mass of water. In contrast, hydrophiles are attracted to water. Hydrophobic molecules tend to be nonpolar and, thus, prefer other neutral molecules and nonpolar solvents. Because water molecules are polar, hydrophobes do not dissolve well among them. Hydrophobic molecules in water often cluster together, forming micelles. Water on hydrophobic surfaces will exhibit a high contact angle. Examples of hydrophobic molecules include the alkanes, oils, fats, and greasy substances in general. Hydrophobic materials are used for oil removal from water, the management of oil spills, and chemical separation processes to remove non-polar substances from polar compounds. Hydrophobic is often used interchangeably with lipophilic, "fat-loving". However, the two terms are not synonymous. While hydrophobic substances are usually lipophilic, there are exceptions, such as the silicones and fluorocarbons. The term hydrophobe comes from the Ancient Greek ὑδρόφοβος (hydróphobos), "having a fear of water".

Chemical background

The hydrophobic interaction is mostly an entropic effect originating from the disruption of the highly dynamic hydrogen bonds between molecules of liquid water by the nonpolar solute, which causes a clathrate-like structure to form around the non-polar molecules. This structure is more highly ordered than free water molecules, because the water molecules arrange themselves to interact as much as possible with one another, and thus represents a lower-entropy state. Non-polar molecules therefore clump together to reduce the surface area exposed to water, minimizing the amount of ordered water and thereby increasing the entropy of the system. Thus, the two immiscible phases (hydrophilic vs. hydrophobic) will change so that their corresponding interfacial area will be minimal. This effect can be visualized in the phenomenon called phase separation.
Superhydrophobicity

Superhydrophobic surfaces, such as the leaves of the lotus plant, are those that are extremely difficult to wet: the contact angle of a water droplet on such a surface exceeds 150°. This is referred to as the lotus effect, and is primarily a physical property related to interfacial tension, rather than a chemical property.

Theory

In 1805, Thomas Young defined the contact angle $\theta$ by analyzing the forces acting on a fluid droplet resting on a solid surface surrounded by a gas:

$$\gamma_{SG} = \gamma_{SL} + \gamma_{LG}\cos\theta$$

where
$\gamma_{SG}$ = interfacial tension between the solid and gas
$\gamma_{SL}$ = interfacial tension between the solid and liquid
$\gamma_{LG}$ = interfacial tension between the liquid and gas

$\theta$ can be measured using a contact angle goniometer. Wenzel determined that when the liquid is in intimate contact with a microstructured surface, $\theta$ will change to $\theta_W^*$:

$$\cos\theta_W^* = r\cos\theta$$

where $r$ is the ratio of the actual area to the projected area. Wenzel's equation shows that microstructuring a surface amplifies the natural tendency of the surface. A hydrophobic surface (one that has an original contact angle greater than 90°) becomes more hydrophobic when microstructured – its new contact angle becomes greater than the original. However, a hydrophilic surface (one that has an original contact angle less than 90°) becomes more hydrophilic when microstructured – its new contact angle becomes less than the original. Cassie and Baxter found that if the liquid is suspended on the tops of microstructures, $\theta$ will change to $\theta_{CB}^*$:

$$\cos\theta_{CB}^* = \varphi(\cos\theta + 1) - 1$$

where $\varphi$ is the area fraction of the solid that touches the liquid. Liquid in the Cassie–Baxter state is more mobile than in the Wenzel state. We can predict whether the Wenzel or Cassie–Baxter state should exist by calculating the new contact angle with both equations. By a minimization of free energy argument, the relation that predicts the smaller new contact angle is the state most likely to exist. Stated in mathematical terms, for the Cassie–Baxter state to exist, the following inequality must be true:

$$\cos\theta < \frac{\varphi - 1}{r - \varphi}$$
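A minimal sketch (not from the source) evaluating the Wenzel and Cassie–Baxter predictions described above, using the relations cos θW* = r·cos θ and cos θCB* = φ(cos θ + 1) − 1; the example values of θ, r and φ are illustrative assumptions:

```python
import math

# r = roughness ratio (actual/projected area); phi = solid area
# fraction touching the liquid; theta = Young contact angle (degrees).

def wenzel_angle(theta_deg, r):
    """Wenzel prediction: cos(theta_W*) = r * cos(theta)."""
    c = r * math.cos(math.radians(theta_deg))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

def cassie_baxter_angle(theta_deg, phi):
    """Cassie-Baxter prediction: cos(theta_CB*) = phi*(cos(theta)+1) - 1."""
    c = phi * (math.cos(math.radians(theta_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

theta = 110.0  # a hydrophobic smooth-surface angle (illustrative)
tw = wenzel_angle(theta, r=1.5)
tcb = cassie_baxter_angle(theta, phi=0.2)
# Per the free-energy argument in the text, the state predicting the
# smaller angle is the one most likely to exist.
likely = "Wenzel" if tw < tcb else "Cassie-Baxter"
print(round(tw, 1), round(tcb, 1), likely)
```

Note how the already-hydrophobic 110° surface is amplified by both models, matching the text's statement that microstructuring amplifies a surface's natural tendency.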
A recent alternative criterion for the Cassie–Baxter state asserts that the Cassie–Baxter state exists when the following two criteria are met: 1) contact line forces overcome body forces of unsupported droplet weight, and 2) the microstructures are tall enough to prevent the liquid that bridges microstructures from touching their base. A new criterion for the switch between the Wenzel and Cassie–Baxter states has been developed recently based on surface roughness and surface energy. The criterion focuses on the air-trapping capability under liquid droplets on rough surfaces, and can tell whether Wenzel's model or the Cassie–Baxter model should be used for a given combination of surface roughness and energy. Contact angle is a measure of static hydrophobicity, while contact angle hysteresis and slide angle are dynamic measures. Contact angle hysteresis is a phenomenon that characterizes surface heterogeneity. When a pipette injects a liquid onto a solid, the liquid will form some contact angle. As the pipette injects more liquid, the droplet will increase in volume and the contact angle will increase, but the three-phase boundary will remain stationary until it suddenly advances outward. The contact angle the droplet had immediately before advancing outward is termed the advancing contact angle. The receding contact angle is measured by pumping the liquid back out of the droplet: the droplet will decrease in volume and the contact angle will decrease, but the three-phase boundary will remain stationary until it suddenly recedes inward. The contact angle the droplet had immediately before receding inward is termed the receding contact angle. The difference between the advancing and receding contact angles is termed the contact angle hysteresis and can be used to characterize surface heterogeneity, roughness, and mobility. Surfaces that are not homogeneous will have domains that impede the motion of the contact line. 
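These dynamic measures reduce to simple arithmetic. A hedged sketch in Python (the >150° static angle and <10° hysteresis thresholds are a common rule of thumb for superhydrophobicity, assumed here rather than stated in this article):

```python
def contact_angle_hysteresis(advancing_deg, receding_deg):
    """Hysteresis = advancing minus receding contact angle, in degrees."""
    if receding_deg > advancing_deg:
        raise ValueError("receding angle cannot exceed advancing angle")
    return advancing_deg - receding_deg

def looks_superhydrophobic(static_deg, advancing_deg, receding_deg):
    """Rule-of-thumb check: static angle above 150 degrees and low hysteresis.

    The 150-degree and 10-degree cutoffs are conventional heuristics,
    not limits defined in this article.
    """
    return (static_deg > 150
            and contact_angle_hysteresis(advancing_deg, receding_deg) < 10)
```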
The slide angle is another dynamic measure of hydrophobicity and is measured by depositing a droplet on a surface and tilting the surface until the droplet begins to slide. In general, liquids in the Cassie–Baxter state exhibit lower slide angles and contact angle hysteresis than those in the Wenzel state. Research and development Dettre and Johnson discovered in 1964 that the superhydrophobic lotus effect phenomenon was related to rough hydrophobic surfaces, and they developed a theoretical model based on experiments with glass beads coated with paraffin or TFE telomer. The self-cleaning property of superhydrophobic micro-nanostructured surfaces was reported in 1977. Perfluoroalkyl, perfluoropolyether, and RF plasma-formed superhydrophobic materials were developed, used for electrowetting, and commercialized for biomedical applications between 1986 and 1995. Other technology and applications have emerged since the mid-1990s. A durable superhydrophobic hierarchical composition, applied in one or two steps, was disclosed in 2002, comprising nano-sized particles ≤ 100 nanometers overlaying a surface having micrometer-sized features or particles ≤ 100 micrometers. The larger particles were observed to protect the smaller particles from mechanical abrasion. In recent research, superhydrophobicity has been reported by allowing alkylketene dimer (AKD) to solidify into a nanostructured fractal surface. Many papers have since presented fabrication methods for producing superhydrophobic surfaces, including particle deposition, sol-gel techniques, plasma treatments, vapor deposition, and casting techniques. Current opportunity for research impact lies mainly in fundamental research and practical manufacturing. Debates have recently emerged concerning the applicability of the Wenzel and Cassie–Baxter models. 
In an experiment designed to challenge the surface energy perspective of the Wenzel and Cassie–Baxter models and promote a contact line perspective, water drops were placed on a smooth hydrophobic spot in a rough hydrophobic field, a rough hydrophobic spot in a smooth hydrophobic field, and a hydrophilic spot in a hydrophobic field. Experiments showed that the surface chemistry and geometry at the contact line affected the contact angle and contact angle hysteresis, but the surface area inside the contact line had no effect. An argument that increased jaggedness in the contact line enhances droplet mobility has also been proposed. Many hydrophobic materials found in nature rely on Cassie's law and are biphasic on the submicrometer level with one component air. The lotus effect is based on this principle. Inspired by it, many functional superhydrophobic surfaces have been prepared. An example of a bionic or biomimetic superhydrophobic material in nanotechnology is nanopin film. One study presents a vanadium pentoxide surface that switches reversibly between superhydrophobicity and superhydrophilicity under the influence of UV radiation. According to the study, any surface can be modified to this effect by application of a suspension of rose-like V2O5 particles, for instance with an inkjet printer. Once again, hydrophobicity is induced by interlaminar air pockets (separated by 2.1 nm distances). The study also explains the UV effect: UV light creates electron-hole pairs, with the holes reacting with lattice oxygen to create surface oxygen vacancies, while the electrons reduce V5+ to V3+. The oxygen vacancies are then occupied by water, and it is this water absorbency by the vanadium surface that makes it hydrophilic. With extended storage in the dark, water is replaced by oxygen and the hydrophilicity is lost again. 
A significant majority of hydrophobic surfaces have their hydrophobic properties imparted by structural or chemical modification of a surface of a bulk material, through either coatings or surface treatments. That is to say, the presence of molecular species (usually organic) or structural features results in high contact angles of water. In recent years, rare earth oxides have been shown to possess intrinsic hydrophobicity. The intrinsic hydrophobicity of rare earth oxides depends on surface orientation and oxygen vacancy levels, and is naturally more robust than coatings or surface treatments, having potential applications in condensers and catalysts that can operate at high temperatures or in corrosive environments. Applications and potential applications Hydrophobic concrete has been produced since the mid-20th century. Active recent research on superhydrophobic materials might eventually lead to more industrial applications. A simple routine of coating cotton fabric with silica or titania particles by the sol-gel technique has been reported, which protects the fabric from UV light and makes it superhydrophobic. An efficient routine has been reported for making polyethylene superhydrophobic and thus self-cleaning; 99% of dirt on such a surface is easily washed away. Patterned superhydrophobic surfaces also have promise for lab-on-a-chip microfluidic devices and can drastically improve surface-based bioanalysis. In pharmaceuticals, hydrophobicity of pharmaceutical blends affects important quality attributes of final products, such as drug dissolution and hardness. Methods have been developed to measure the hydrophobicity of pharmaceutical materials.
https://en.wikipedia.org/wiki/Harley-Davidson
Harley-Davidson
Harley-Davidson, Inc., H-D, or Harley, is an American motorcycle manufacturer founded in 1903 in Milwaukee, Wisconsin. Along with Indian, it was one of two major American motorcycle manufacturers to survive the Great Depression. The company has survived numerous ownership arrangements, subsidiary arrangements, periods of poor economic health and product quality, and intense global competition to become one of the world's largest motorcycle manufacturers and an iconic brand widely known for its loyal following. There are owner clubs and events worldwide, as well as a company-sponsored, brand-focused museum. Harley-Davidson is noted for a style of customization that gave rise to the chopper motorcycle style. The company traditionally marketed heavyweight, air-cooled cruiser motorcycles with engine displacements greater than 700 cc, but it has broadened its offerings to include more contemporary VRSC (2002) and middle-weight Street (2015) platforms. Harley-Davidson manufactures its motorcycles at factories in York, Pennsylvania; Milwaukee, Wisconsin; Manaus, Brazil; Bawal, India; and Pluak Daeng, Thailand. The company markets its products worldwide, and also licenses and markets merchandise under the Harley-Davidson brand, among them apparel, home décor and ornaments, accessories, toys, scale models of its motorcycles, and video games based on its motorcycle line and the community. History In 1901, -year-old William S. Harley drew up plans for a small engine with a displacement of 7.07 cubic inches (116 cc) and four-inch (102 mm) flywheels designed for use in a regular pedal-bicycle frame. Over the next two years, he and his childhood friend Arthur Davidson worked on their motor-bicycle using the northside Milwaukee machine shop at the home of their friend Henry Melk. It was finished in 1903 with the help of Arthur's brother Walter Davidson. 
Upon testing their power-cycle, Harley and the Davidson brothers found it unable to climb the hills around Milwaukee without pedal assistance, and they wrote off their first motor-bicycle as a valuable learning experiment. The three began work on a new and improved machine with an engine of 24.74 cubic inches (405 cc) with flywheels weighing . Its advanced loop-frame pattern was similar to the 1903 Milwaukee Merkel motorcycle designed by Joseph Merkel, later of Flying Merkel fame. The bigger engine and loop-frame design took it out of the motorized bicycle category and marked the path to future motorcycle designs. They also received help with their bigger engine from outboard motor pioneer Ole Evinrude, who was then building gas engines of his own design for automotive use on Milwaukee's Lake Street. The prototype of the new loop-frame Harley-Davidson was assembled in a shed in the Davidson family backyard. Most of the major parts, however, were made elsewhere, including some probably fabricated at the West Milwaukee railshops where oldest brother William A. Davidson was toolroom foreman. This prototype machine was functional by September 8, 1904, when it competed in a Milwaukee motorcycle race held at State Fair Park. Edward Hildebrand rode it and placed fourth in the race. In January 1905, the company placed small advertisements in the Automobile and Cycle Trade Journal offering bare Harley-Davidson engines to the do-it-yourself trade. By April, they were producing complete motorcycles on a very limited basis. That year, Harley-Davidson dealer Carl H. Lang of Chicago sold three bikes from the five built in the Davidson backyard shed. Years later, the company moved the original shed to the Juneau Avenue factory where it stood for many decades as a tribute. In 1906, Harley and the Davidson brothers built their first factory on Chestnut Street (later Juneau Avenue), at the current location of Harley-Davidson's corporate headquarters. 
The first Juneau Avenue plant was a single-story wooden structure. The company produced about 50 motorcycles that year. In 1907, William S. Harley graduated from the University of Wisconsin–Madison with a degree in mechanical engineering. That year, they expanded the factory with a second floor and later with facings and additions of Milwaukee pale yellow ("cream") brick. With the new facilities, production increased to 150 motorcycles in 1907. The company was officially incorporated that September. They also began selling their motorcycles to police departments around this time, a market that has been important to them ever since. In 1907, William A. Davidson quit his job as tool foreman for the Milwaukee Road railroad and joined the Motor Company. The motorcycles produced in 1905 and 1906 were all single-cylinder models with 26.84-cubic-inch (440 cc) engines. In February 1907, they displayed a prototype model at the Chicago Automobile Show with a 45-degree V-Twin engine. Very few V-Twin models were built between 1907 and 1910. These first V-Twins displaced 53.68 cubic inches (880 cc) and produced about . This gave about double the power of the first singles, and top speed was about . Production jumped from 450 motorcycles in 1908 to 1,149 machines in 1909. In 1911, the company introduced an improved V-Twin model with a displacement of 49.48 cubic inches (811 cc) and mechanically operated intake valves, as opposed to the "automatic" intake valves used on earlier V-Twins that opened by engine vacuum. It was smaller than earlier twins but gave better performance. After 1913, the majority of bikes produced by Harley-Davidson were V-Twin models. In 1912, Harley-Davidson introduced their patented "Ful-Floteing Seat", which was suspended by a coil spring inside the seat tube. The spring tension could be adjusted to suit the rider's weight, and more than of travel was available. Harley-Davidson used seats of this type until 1958. 
By 1913, the yellow brick factory had been demolished and a new five-story structure had been built on the site which took up two blocks along Juneau Avenue and around the corner on 38th Street. Despite the competition, Harley-Davidson was already pulling ahead of Indian and dominated motorcycle racing after 1914. Production that year swelled to 16,284 machines. World War I In 1917, the United States entered World War I and the military demanded motorcycles for the war effort. Harleys had already been used by the military in the Pancho Villa Expedition but World War I was the first time that it was adopted for military issue, first with the British Model H produced by Triumph Motorcycles Ltd in 1915. The U.S. military purchased over 20,000 motorcycles from Harley-Davidson. Harley-Davidson launched a line of bicycles in 1917 in hopes of recruiting more domestic customers for its motorcycles. Models included the traditional diamond frame men's bicycle, a step-through frame 3–18 "Ladies Standard", and a 5–17 "Boy Scout" for youth. The effort was discontinued in 1923 because of disappointing sales. The bicycles were built for Harley-Davidson in Dayton, Ohio by the Davis Machine Company from 1917 to 1921, when Davis stopped manufacturing bicycles. 1920s By 1920, Harley-Davidson was the largest motorcycle manufacturer in the world, with 28,189 machines produced and dealers in 67 countries. In 1921, Otto Walker set a record on a Harley-Davidson as the first motorcycle to win a race at an average speed greater than . Harley-Davidson put several improvements in place during the 1920s, such as a new 74 cubic inch (1,212.6  cc) V-Twin introduced in 1921, and the "teardrop" gas tank in 1925. They added a front brake in 1928, although only on the J/JD models. In the late summer of 1929, Harley-Davidson introduced its 45-cubic-inch (737 cc) flathead V-Twin to compete with the Indian 101 Scout and the Excelsior Super X. This was the "D" model produced from 1929 to 1931. 
Riders of Indian motorcycles derisively referred to it as the "three cylinder Harley" because the generator was upright and parallel to the front cylinder. Great Depression The Great Depression began a few months after the introduction of their model. Harley-Davidson's sales fell from 21,000 in 1929 to 3,703 in 1933. Despite this, Harley-Davidson unveiled a new lineup for 1934, which included a flathead engine and Art Deco styling. In order to survive the remainder of the Depression, the company manufactured industrial powerplants based on their motorcycle engines. They also designed and built a three-wheeled delivery vehicle called the Servi-Car, which remained in production until 1973. In the mid-1930s, Alfred Rich Child opened a production line in Japan with the VL. The Japanese license-holder, Sankyo Seiyaku Corporation, severed its business relations with Harley-Davidson in 1936 and continued manufacturing the VL under the Rikuo name. An flathead engine was added to the line in 1935, by which time the single-cylinder motorcycles had been discontinued. In 1936, the 61E and 61EL models with the "Knucklehead" OHV engines were introduced. Valvetrain problems in early Knucklehead engines required a redesign halfway through its first year of production and retrofitting of the new valvetrain on earlier engines. By 1937, all Harley-Davidson flathead engines were equipped with dry-sump oil recirculation systems similar to the one introduced in the "Knucklehead" OHV engine. The revised V and VL models were renamed U and UL, the VH and VLH to be renamed UH and ULH, and the R to be renamed W. In 1941, the 74-cubic-inch "Knucklehead" was introduced as the F and the FL. The flathead UH and ULH models were discontinued after 1941, while the 74-cubic-inchU & UL flathead models were produced up to 1948. 
World War II One of only two American cycle manufacturers to survive the Great Depression, Harley-Davidson again produced large numbers of motorcycles for the US Army in World War II and resumed civilian production afterwards, producing a range of large V-twin motorcycles that were successful both on racetracks and for private buyers. Harley-Davidson, on the eve of World War II, was already supplying the Army with a military-specific version of its WL line, called the WLA. The A in this case stood for "Army". Upon the outbreak of war, the company, along with most other manufacturing enterprises, shifted to war work. More than 90,000 military motorcycles, mostly WLAs and WLCs (the Canadian version), were produced, many to be provided to allies. Harley-Davidson received two Army-Navy "E" Awards, one in 1943 and the other in 1945, which were awarded for Excellence in Production. Shipments to the Soviet Union under the Lend-Lease program numbered at least 30,000. The WLAs produced during all four years of war production generally have 1942 serial numbers. Production of the WLA stopped at the end of World War II, but was resumed from 1950 to 1952 for use in the Korean War. The U.S. Army also asked Harley-Davidson to produce a new motorcycle with many of the features of BMW's side-valve and shaft-driven R71. Harley-Davidson largely copied the BMW engine and drive train and produced the shaft-driven 750 cc 1942 Harley-Davidson XA. This shared no dimensions, parts, or design concepts (except side valves) with any prior Harley-Davidson engine. Due to the superior cooling of the flat-twin engine with the cylinders across the frame, Harley's XA cylinder heads ran 100 °F (56 °C) cooler than its V-twins. The XA never entered full production: the motorcycle by that time had been eclipsed by the Jeep as the Army's general-purpose vehicle, and the WLA—already in production—was sufficient for its limited police, escort, and courier roles. 
Only 1,000 were made and the XA never went into full production. It remains the only shaft-driven Harley-Davidson ever made. Small: Hummer, Sportcycle and Aermacchi As part of war reparations, Harley-Davidson acquired the design of a small German motorcycle, the DKW RT 125, which they adapted, manufactured, and sold from 1948 to 1966. Various models were made, including the Hummer from 1955 to 1959, but they are all colloquially referred to as "Hummers" at present. BSA in the United Kingdom took the same design as the foundation of their BSA Bantam. In 1960, Harley-Davidson consolidated the Model 165 and Hummer lines into the Super-10, introduced the Topper scooter, and bought fifty percent of Aermacchi's motorcycle division. Importation of Aermacchi's 250 cc horizontal single began the following year. The bike bore Harley-Davidson badges and was marketed as the Harley-Davidson Sprint. The engine of the Sprint was increased to 350 cc in 1969 and would remain that size until 1974, when the four-stroke Sprint was discontinued. After the Pacer and Scat models were discontinued at the end of 1965, the Bobcat became the last of Harley-Davidson's American-made two-stroke motorcycles. The Bobcat was manufactured only in the 1966 model year. Harley-Davidson replaced their American-made lightweight two-stroke motorcycles with the Italian Aermacchi-built two-stroke powered M-65, M-65S, and Rapido. The M-65 had a semi-step-through frame and tank. The M-65S was a M-65 with a larger tank that eliminated the step-through feature. The Rapido was a larger bike with a 125 cc engine. The Aermacchi-built Harley-Davidsons became entirely two-stroke powered when the 250 cc two-stroke SS-250 replaced the four-stroke 350 cc Sprint in 1974. Harley-Davidson purchased full control of Aermacchi's motorcycle production in 1974 and continued making two-stroke motorcycles there until 1978, when they sold the facility to Cagiva, owned by the Castiglioni family. 
Tarnished reputation In 1952, following their application to the U.S. Tariff Commission for a 40 percent tax on imported motorcycles, Harley-Davidson was charged with restrictive practices. In 1969, American Machine and Foundry (AMF) bought the company, streamlined production, and slashed the workforce. This tactic resulted in a labor strike, and cost-cutting produced lower-quality bikes. The bikes were expensive and inferior in performance, handling, and quality to Japanese motorcycles. Sales and quality declined, and the company almost went bankrupt. The "Harley-Davidson" name was mocked as "Hardly Ableson", "Hardly Driveable", and "Hogly Ferguson", and the nickname "Hog" became pejorative. The early 1970s saw the introduction in North America of what the motoring press called the Universal Japanese Motorcycle, which revolutionized the industry and made motorcycling in America more accessible during the 1970s and 1980s. In 1977, following the successful manufacture of the Liberty Edition to commemorate America's bicentennial in 1976, Harley-Davidson produced what has become one of its most controversial models, the Harley-Davidson Confederate Edition. The bike was essentially a stock Harley-Davidson with Confederate-specific paint and details. Restructuring and revival In 1981, AMF sold the company to a group of 13 investors led by Vaughn Beals and Willie G. Davidson for $80 million. The new management team improved product quality, introduced new technologies, and adopted just-in-time inventory management. These operational and product improvements were matched with a strategy of seeking tariff protection for large-displacement motorcycles in the face of intense competition with Japanese manufacturers. These protections were granted by the Reagan administration in 1983, giving Harley-Davidson time to implement their new strategies. Revising stagnated product designs was a crucial centerpiece of Harley-Davidson's turnaround strategy. 
Rather than trying to mimic popular Japanese designs, the new management deliberately exploited the "retro" appeal of Harley motorcycles, building machines that deliberately adopted the look and feel of their earlier bikes and the subsequent customizations of owners of that era. Many components such as brakes, forks, shocks, carburetors, electrics, and wheels were outsourced from foreign manufacturers; quality increased, technical improvements were made, and buyers slowly returned. Harley-Davidson bought the "Sub Shock" cantilever-swingarm rear suspension design from Missouri engineer Bill Davis and developed it into its Softail series of motorcycles, introduced in 1984 with the FXST Softail. In response to possible motorcycle market loss due to the aging of baby-boomers, Harley-Davidson bought luxury motorhome manufacturer Holiday Rambler in 1986. In 1996, the company sold Holiday Rambler to the Monaco Coach Corporation. The "Sturgis" model, boasting a dual belt-drive, was introduced initially in 1980 and was made for three years. This bike was then brought back as a commemorative model in 1991. Fat Boy, Dyna, and Harley-Davidson museum By 1990, with the introduction of the "Fat Boy", Harley-Davidson once again became the sales leader in the heavyweight (over 750 cc) market. At the time of the Fat Boy model introduction, a false etymology spread that "Fat Boy" was a combination of the names of the atomic bombs Fat Man and Little Boy. This has been debunked, as the name "Fat Boy" actually comes from the observation that the motorcycle is somewhat wider than other bikes when viewed head-on. 1993 and 1994 saw the replacement of FXR models with the Dyna (FXD), which became the sole rubber mount FX Big Twin frame in 1994. The FXR was revived briefly from 1999 to 2000 for special limited editions (FXR2, FXR3 & FXR4). 
Harley-Davidson celebrated their 100th anniversary on September 1, 2003 with a large event and concert featuring performances from Elton John, The Doobie Brothers, Kid Rock, and Tim McGraw. Construction started on the $75 million, 130,000 square-foot (12,000 m2) Harley-Davidson Museum in the Menomonee Valley of Milwaukee, Wisconsin on June 1, 2006. It opened in 2008 and houses the company's vast collection of historic motorcycles and corporate archives, along with a restaurant, café and meeting space. Overseas operations Established in 1918, the oldest continuously operating Harley-Davidson dealership outside of the United States is in Australia. Sales in Japan started in 1912. Beginning in 1929, Harley-Davidsons were produced in Japan under license to the company Rikuo (Rikuo Internal Combustion Company), first under the Harley-Davidson name and using the company's tooling, and later under the Rikuo name. Production continued until 1958. In 1998 the first Harley-Davidson factory outside the US opened in Manaus, Brazil, taking advantage of the free economic zone there. The location was positioned to sell motorcycles in the southern hemisphere market. In August 2009, Harley-Davidson launched Harley-Davidson India and started selling motorcycles there in 2010. The company established the subsidiary in Gurgaon, near Delhi, in 2011 and created an Indian dealer network. On September 24, 2020, Harley-Davidson announced that it would discontinue its sales and manufacturing operations in India due to weak demand and sales. The move involves $75 million in restructuring costs, 70 layoffs and the closure of its Bawal plant in northern India. Buell Motorcycle Company Harley-Davidson's association with sportbike manufacturer Buell Motorcycle Company began in 1987 when they supplied Buell with fifty surplus XR1000 engines. Buell continued to buy engines from Harley-Davidson until 1993, when Harley-Davidson bought 49 percent of the Buell Motorcycle Company. 
Harley-Davidson increased its share in Buell to ninety-eight percent in 1998, and to complete ownership in 2003. In an attempt to attract newcomers to motorcycling in general and to Harley-Davidson in particular, Buell developed a low-cost, low-maintenance motorcycle. The resulting single-cylinder Buell Blast was introduced in 2000, and was made through 2009, which, according to Buell, was to be the final year of production. The Buell Blast was the training vehicle for the Harley-Davidson Rider's Edge New Rider Course from 2000 until May 2014, when the company re-branded the training academy and started using the Harley-Davidson Street 500 motorcycles. In those 14 years, more than 350,000 participants in the course learned to ride on the Buell Blast. On October 15, 2009, Harley-Davidson Inc. issued an official statement that it would be discontinuing the Buell line and ceasing production immediately. The stated reason was to focus on the Harley-Davidson brand. The company refused to consider selling Buell. Founder Erik Buell subsequently established Erik Buell Racing and continued to manufacture and develop the company's 1125RR racing motorcycle. Claims of stock price manipulation During its period of peak demand, during the late 1990s and early first decade of the 21st century, Harley-Davidson embarked on a program of expanding the number of dealerships throughout the country. At the same time, its current dealers typically had waiting lists that extended up to a year for some of the most popular models. Harley-Davidson, like the auto manufacturers, records a sale not when a consumer buys their product, but rather when it is delivered to a dealer. Therefore, it is possible for the manufacturer to inflate sales numbers by requiring dealers to accept more inventory than desired in a practice called channel stuffing. When demand softened following the unique 2003 model year, this news led to a dramatic decline in the stock price. 
In April 2004 alone, the price of HOG shares dropped from more than $60 to less than $40. Immediately prior to this decline, retiring CEO Jeffrey Bleustein profited $42 million on the exercise of employee stock options. Harley-Davidson was named as a defendant in numerous class action suits filed by investors who claimed they were intentionally defrauded by Harley-Davidson's management and directors. By January 2007, the price of Harley-Davidson shares reached $70. Problems with Police Touring models Starting around 2000, several police departments started reporting problems with high-speed instability on the Harley-Davidson Touring motorcycles. A Raleigh, North Carolina police officer, Charles Paul, was killed when his 2002 police touring motorcycle crashed after reportedly experiencing a high-speed wobble. The California Highway Patrol conducted testing of the Police Touring motorcycles in 2006. The CHP test riders reported experiencing wobble or weave instability while operating the motorcycles on the test track. 2007 strike On February 2, 2007, upon the expiration of their union contract, about 2,700 employees at Harley-Davidson Inc.'s largest manufacturing plant in York, Pennsylvania, went on strike after failing to agree on wages and health benefits. During the pendency of the strike, the company refused to pay for any portion of the striking employees' health care. The day before the strike, after the union voted against the proposed contract and to authorize the strike, the company shut down all production at the plant. The York facility employs more than 3,200 workers, both union and non-union. Harley-Davidson announced on February 16, 2007, that it had reached a labor agreement with union workers at its largest manufacturing plant, a breakthrough in the two-week-old strike. 
The strike disrupted Harley-Davidson's national production and was felt in Wisconsin, where 440 employees were laid off, and many Harley suppliers also laid off workers because of the strike. MV Agusta Group On July 11, 2008, Harley-Davidson announced they had signed a definitive agreement to acquire the MV Agusta Group for US$109 million (€70M). MV Agusta Group contains two lines of motorcycles: the high-performance MV Agusta brand and the lightweight Cagiva brand. The acquisition was completed on August 8. On October 15, 2009, Harley-Davidson announced that it would divest its interest in MV Agusta. Harley-Davidson Inc. sold Italian motorcycle maker MV Agusta to Claudio Castiglioni – a member of the family that had purchased Aermacchi from H-D in 1978 – for a reported 3 euros, ending the transaction in the first week of August 2010. Castiglioni was MV Agusta's former owner, and had been MV Agusta's chairman since Harley-Davidson bought it in 2008. As part of the deal, Harley-Davidson put $26M into MV Agusta's accounts, essentially giving Castiglioni $26M to take the brand. Financial crisis According to Interbrand, the value of the Harley-Davidson brand fell by 43 percent to $4.34 billion in 2009. The fall in value is believed to be connected to the 66 percent drop in the company profits in two-quarters of the previous year. On April 29, 2010, Harley-Davidson stated that they must cut $54 million in manufacturing costs from its production facilities in Wisconsin, and that they would explore alternative U.S. sites to accomplish this. The announcement came in the wake of a massive company-wide restructuring, which began in early 2009 and involved the closing of two factories, one distribution center, and the planned elimination of nearly 25 percent of its total workforce (around 3,500 employees). The company announced on September 14, 2010, that it would remain in Wisconsin. 
Motorcycle engines The classic Harley-Davidson engines are V-twin engines, with a 45° angle between the cylinders. The crankshaft has a single pin, and both pistons are connected to this pin through their connecting rods. This 45° angle is covered under several United States patents and is an engineering tradeoff that allows a large, high-torque engine in a relatively small space. It causes the cylinders to fire at uneven intervals and produces the choppy "potato-potato" sound so strongly linked to the Harley-Davidson brand. To simplify the engine and reduce costs, the V-twin ignition was designed to operate with a single set of points and no distributor. This is known as a dual fire ignition system: both spark plugs fire regardless of which cylinder is on its compression stroke, with the other spark plug firing on its cylinder's exhaust stroke, effectively "wasting a spark". The exhaust note is basically a throaty growling sound with some popping. The 45° design of the engine thus creates the following plug firing sequence: the first (front) cylinder fires, the second (rear) cylinder fires 315° later, and then there is a 405° gap until the first cylinder fires again, giving the engine its unique sound. Harley-Davidson has used various ignition systems throughout its history: the early points-and-condenser system (Big Twins and Sportsters up to 1978), the magneto ignition system used on some 1958 to 1969 Sportsters, early electronic ignition with centrifugal mechanical advance weights (all models from mid-1978 until 1979), and later electronic ignition with a transistorized ignition control module, more familiarly known as the black box or the brain (all models from 1980 to the present). Starting in 1995, the company introduced Electronic Fuel Injection (EFI) as an option for the 30th anniversary edition Electra Glide. EFI became standard on all Harley-Davidson motorcycles, including Sportsters, upon the introduction of the 2007 product line.
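The uneven 315°/405° spacing follows directly from the shared crankpin and the 45° cylinder angle over the 720° four-stroke cycle; the short sketch below works through that arithmetic (illustrative only, not manufacturer code):

```python
# Firing intervals of a single-crankpin V-twin over one 720-degree
# four-stroke cycle. With both connecting rods sharing one crankpin,
# the pistons reach top dead center V_ANGLE crank-degrees apart.
V_ANGLE = 45   # angle between cylinders, degrees
CYCLE = 720    # crank degrees per four-stroke cycle

front_to_rear = CYCLE // 2 - V_ANGLE   # gap from front firing to rear firing
rear_to_front = CYCLE // 2 + V_ANGLE   # gap back around to the front cylinder

assert front_to_rear + rear_to_front == CYCLE
print(front_to_rear, rear_to_front)    # 315 405
```

The same arithmetic gives 270°/450° intervals for a 90° V-twin, which is why the 45° layout has such a distinctive cadence.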
In 1991, Harley-Davidson began to participate in the Sound Quality Working Group, founded by Orfield Labs, Bruel and Kjaer, TEAC, Yamaha, Sennheiser, SMS and Cortex. This was the nation's first group to share research on psychological acoustics. Later that year, Harley-Davidson participated in a series of sound quality studies at Orfield Labs, based on recordings taken at the Talladega Superspeedway, with the objective of lowering the sound level to meet EU standards while analytically capturing the "Harley Sound". This research resulted in the bikes that were introduced in compliance with EU standards for 1998. On February 1, 1994, the company filed a sound trademark application for the distinctive sound of the Harley-Davidson motorcycle engine: "The mark consists of the exhaust sound of applicant's motorcycles, produced by V-twin, common crankpin motorcycle engines when the goods are in use". Nine of Harley-Davidson's competitors filed comments opposing the application, arguing that cruiser-style motorcycles of various brands use a single-crankpin V-twin engine which produces a similar sound. These objections were followed by litigation. In June 2000, the company dropped efforts to federally register its trademark.
Big V-twins
F-head, also known as JD, pocket valve and IOE (intake over exhaust), 1914–1929 (1,000 cc) and 1922–1929 (1,200 cc)
Flathead, 1930–1949 (1,200 cc) and 1935–1941 (1,300 cc)
Knucklehead, 1936–1947, 61 cubic inch (1,000 cc), and 1941–1947, 74 cubic inch (1,200 cc)
Panhead, 1948–1965, 61 cubic inch (1,000 cc), and 1948–1965, 74 cubic inch (1,200 cc)
Shovelhead, 1966–1984, 74 cubic inch (1,200 cc) and 80 cubic inch (1,338 cc) from late 1978
Evolution (a.k.a. "Evo" and "Blockhead"), 1984–1999, 80 cubic inch (1,340 cc)
Twin Cam (a.k.a. "Fathead" as named by American Iron Magazine), 1999–2017, in the following versions:
Twin Cam 88, 1999–2006, 88 cubic inch (1,450 cc)
Twin Cam 88B, counterbalanced version of the Twin Cam 88, 2000–2006, 88 cubic inch (1,450 cc)
Twin Cam 95, since 2000, 95 cubic inch (1,550 cc) (engines for early C.V.O. models)
Twin Cam 96, since 2007
Twin Cam 103, 2003–2006, 2009, 103 cubic inch (1,690 cc) (engines for C.V.O. models); standard on 2011 Touring models (Ultra Limited, Road King Classic and Road Glide Ultra) and optional on the Road Glide Custom and Street Glide; standard on most 2012 models excluding Sportsters and two Dynas (Street Bob and Super Glide Custom); standard on all 2014 Dyna models
Twin Cam 110, 2007–2017, 110 cubic inch (1,800 cc) (engines for C.V.O. models; 2016 Softail Slim S, Fat Boy S, Low Rider S and Pro-Street Breakout)
Milwaukee-Eight
Standard: standard on Touring models from model year 2017 and Softail models from 2018
Twin-Cooled: optional on some Touring and Trike models from model year 2017
Twin-Cooled: optional on Touring and Trike models from model year 2017, standard on 2017 CVO models
Twin-Cooled: standard on 2018 CVO models
Small V-twins
D Model, 1929–1931, 750 cc
R Model, 1932–1936, 750 cc
W Model, 1937–1952, 750 cc, solo (2 wheel, frame only)
G (Servi-Car) Model, 1932–1973, 750 cc
K Model, 1952–1953, 750 cc
KH Model, 1954–1956, 900 cc
Ironhead, 1957–1971, 883 cc; 1972–1985, 1,000 cc
Evolution, since 1986, 883 cc, 1,100 cc and 1,200 cc
Revolution engine The Revolution engine is based on the VR-1000 Superbike race program, developed by Harley-Davidson's Powertrain Engineering with Porsche helping to make the engine suitable for street use. It is a liquid-cooled, dual overhead cam, internally counterbalanced 60-degree V-twin engine with a displacement of 69 cubic inches (1,130 cc), producing at 8,250 rpm at the crank, with a redline of 9,000 rpm.
It was introduced for the new VRSC (V-Rod) line in 2001 for the 2002 model year, starting with the single VRSCA (V-Twin Racing Street Custom) model. The Revolution marks Harley's first collaboration with Porsche since the V4 Nova project, which, like the V-Rod, was a radical departure from Harley's traditional lineup until it was cancelled by AMF in 1981 in favor of the Evolution engine. A 1,250 cc Screamin' Eagle version of the Revolution engine was made available for 2005 and 2006, and was present thereafter in a single production model from 2005 to 2007. In 2008, the 1,250 cc Revolution Engine became standard for the entire VRSC line. Harley-Davidson claims at the crank for the 2008 VRSCAW model. The VRXSE Destroyer dragbike is equipped with a stroker (75 mm crank) Screamin' Eagle 79 cubic inch (1,300 cc) Revolution Engine, producing , and more than . 750 cc and 500 cc versions of the Revolution engine are used in Harley-Davidson's Street line of light cruisers. These motors, named the Revolution X, use a single overhead cam, screw-and-locknut valve adjustment, a single internal counterbalancer, and vertically split crankcases, all of which distinguish them from the original Revolution design. Düsseldorf-Test An extreme endurance test of the Revolution engine was performed in a dynamometer installation at the Harley-Davidson factory in Milwaukee, simulating the German Autobahn (highways without a general speed limit) from the Porsche research and development center in Weissach, near Stuttgart, to Düsseldorf. An undisclosed number of sample engines failed before one successfully passed the 500-hour nonstop run. This was the benchmark for the engineers to approve the start of production for the Revolution engine, which was documented in the Discovery Channel special Harley-Davidson: Birth of the V-Rod, October 14, 2001.
Single-cylinder engines IOE singles The first Harley-Davidson motorcycles were powered by single-cylinder IOE engines with the inlet valve operated by engine vacuum, based on the De Dion-Bouton pattern. Singles of this type continued to be made until 1913, when a pushrod and rocker system was used to operate the overhead inlet valve on the single, a similar system having been used on their V-twins since 1911. Single-cylinder motorcycle engines were discontinued in 1918. Flathead and OHV singles Single-cylinder engines were reintroduced in 1925 as 1926 models. These singles were available either as flathead engines or as overhead valve engines until 1930, after which they were only available as flatheads. The flathead single-cylinder motorcycles were designated Model A for engines with magneto systems only and Model B for engines with battery and coil systems, while overhead valve versions were designated Model AA and Model BA respectively, and a magneto-only racing version was designated Model S. This line of single-cylinder motorcycles ended production in 1934. Two-stroke singles Model families Modern Harley-branded motorcycles fall into one of seven model families: Touring, Softail, Dyna, Sportster, VRSC (V-Rod), Street and LiveWire. These model families are distinguished by the frame, engine, suspension, and other characteristics. Touring Touring models use Big-Twin engines and large-diameter telescopic forks. All Touring designations begin with the letters FL, e.g., FLHR (Road King) and FLTR (Road Glide). The touring family, also known as "dressers" or "baggers", includes Road King, Road Glide, Street Glide and Electra Glide models offered in various trims. The Road Kings have a "retro cruiser" appearance and are equipped with a large clear windshield. Road Kings are reminiscent of big-twin models from the 1940s and 1950s. Electra Glides can be identified by their full front fairings.
Most Electra Glides sport a fork-mounted fairing referred to as the "Batwing" due to its unmistakable shape. The Road Glide and Road Glide Ultra Classic have a frame-mounted fairing, referred to as the "Sharknose". The Sharknose includes a unique, dual front headlight. Touring models are distinguishable by their large saddlebags, rear coil-over air suspension and are the only models to offer full fairings with radios and CBs. All touring models use the same frame, first introduced with a Shovelhead motor in 1980, and carried forward with only modest upgrades until 2009, when it was extensively redesigned. The frame is distinguished by the location of the steering head in front of the forks and was the first H-D frame to rubber mount the drivetrain to isolate the rider from the vibration of the big V-twin. The frame was modified for the 1993 model year, when the oil tank went under the transmission and the battery was moved inboard from under the right saddlebag to under the seat. In 1997, the frame was again modified to allow for a larger battery under the seat and to lower seat height. In 2007, Harley-Davidson introduced the Twin Cam 96 engine, as well as a six-speed transmission to give the rider better cruising speeds on the highway. In 2006, Harley introduced the FLHX Street Glide, a bike designed by Willie G. Davidson to be his personal ride, to its touring line. In 2008, Harley added anti-lock braking systems and cruise control as a factory-installed option on all touring models (standard on CVO and Anniversary models). Also new for 2008 is the fuel tank for all touring models. 2008 also brought throttle-by-wire to all touring models. For the 2009 model year, Harley-Davidson redesigned the entire touring range with several changes, including a new frame, new swingarm, a completely revised engine-mounting system, front wheels for all but the FLHRC Road King Classic, and a 2–1–2 exhaust.
The changes result in greater load carrying capacity, better handling, a smoother engine, longer range and less exhaust heat transmitted to the rider and passenger. Also released for the 2009 model year is the FLHTCUTG Tri-Glide Ultra Classic, the first three-wheeled Harley since the Servi-Car was discontinued in 1973. The model features a unique frame and a 103-cubic-inch (1,690 cc) engine exclusive to the trike. In 2014, Harley-Davidson released a redesign for specific touring bikes and called it "Project Rushmore". Changes include a new 103-cubic-inch High Output engine, one-handed easy-open saddlebags and compartments, a new Boom! Box infotainment system with either 4.3-inch (10 cm) or 6.5-inch (16.5 cm) screens featuring touchscreen functionality [6.5-inch (16.5 cm) models only], Bluetooth (media and phone with approved compatible devices), available GPS and SiriusXM, text-to-speech functionality (with approved compatible devices) and USB connectivity with charging. Other features include ABS with Reflex linked brakes, improved styling, halogen or LED lighting and upgraded passenger comfort. Softail These big-twin motorcycles capitalize on the strong value Harley places on tradition. With the rear-wheel suspension hidden under the transmission, they are visually similar to the "hardtail" choppers popular in the 1960s and 1970s, as well as to models from Harley's own earlier history. In keeping with that tradition, Harley offers Softail models with "Heritage" styling that incorporate design cues from throughout their history, and used to offer "Springer" front ends on these Softail models from the factory. Designation Softail models utilize the big-twin engine (F) and the Softail chassis (ST). Softail models that use 21-inch (530 mm) front wheels have designations that begin with FX, e.g., FXSTB (Night Train), FXSTD (Deuce), and FXSTS (Springer).
Softail models that use 16-inch (410 mm) front wheels have designations beginning with FL, e.g., FLSTF (Fat Boy), FLSTC (Heritage Softail Classic), FLSTN (Softail Deluxe) and FLS (Softail Slim). Softail models that use Springer forks with a wheel have designations that begin with FXSTS, e.g., FXSTS (Springer Softail) and FXSTSB (Bad Boy). Softail models that use Springer forks with a wheel have designations that begin with FLSTS, e.g., FLSTSC (Springer Classic) and FLSTSB (Cross Bones). Dyna Dyna-frame motorcycles were developed in the 1980s and early 1990s and debuted in the 1991 model year with the FXDB Sturgis, offered in limited edition quantities. In 1992 the line continued with the limited edition FXDB Daytona and a production model FXD Super Glide. The new Dyna frame featured big-twin engines and traditional styling. They can be distinguished from the Softail by the traditional coil-over suspension that connects the swingarm to the frame, and from the Sportster by their larger engines. On these models, the transmission also houses the engine's oil reservoir. Prior to 2006, Dyna models typically featured a narrow, XL-style 39 mm front fork and front wheel, as well as footpegs, indicated by the letter "X" in the model designation. This lineup traditionally included the Super Glide (FXD), Super Glide Custom (FXDC), Street Bob (FXDB), and Low Rider (FXDL). One exception was the Wide Glide (FXDWG), which featured thicker 41 mm forks and a narrow front wheel, but positioned the forks on wider triple-trees that give a beefier appearance. In 2008, the Dyna Fat Bob (FXDF) was introduced to the Dyna lineup, featuring aggressive styling like a new 2–1–2 exhaust, twin headlamps, a 180 mm rear tire, and, for the first time in the Dyna lineup, a 130 mm front tire.
For the 2012 model year, the Dyna Switchback (FLD) became the first Dyna to break the tradition of having an FX model designation, with floorboards, detachable painted hard saddlebags, touring windshield, headlight nacelle and a wide front tire with full fender. The new front end resembled the big-twin FL models from 1968 to 1971. The Dyna family used the 88-cubic-inch (1,440 cc) twin cam from 1999 to 2006. In 2007, the displacement was increased to 96 cubic inches (1,570 cc) as the factory increased the stroke to . For the 2012 model year, the manufacturer began to offer Dyna models with the 103-cubic-inch (1,690 cc) upgrade. All Dyna models use a rubber-mounted engine to isolate engine vibration. Harley discontinued the Dyna platform in 2017 for the 2018 model year, replacing it with a completely redesigned Softail chassis; some of the models previously released under the Dyna nameplate have since been carried over to the new Softail line. Designation Dyna models utilize the big-twin engine (F), footpegs noted as (X) with the exception of the 2012 FLD Switchback, a Dyna model which used floorboards as featured on the Touring (L) models, and the Dyna chassis (D). Therefore, except for the FLD from 2012 to 2016, all Dyna models have designations that begin with FXD, e.g., FXDWG (Dyna Wide Glide) and FXDL (Dyna Low Rider). Sportster Introduced in 1957, the Sportster family was conceived as racing motorcycles, and was popular on dirt and flat-track race courses through the 1960s and 1970s. Smaller and lighter than the other Harley models, contemporary Sportsters make use of 883 cc or 1,200 cc Evolution engines and, though often modified, remain similar in appearance to their racing ancestors. Up until the 2003 model year, the engine on the Sportster was rigidly mounted to the frame. The 2004 Sportster received a new frame accommodating a rubber-mounted engine.
This made the bike heavier and reduced the available lean angle, but it reduced the amount of vibration transmitted to the frame and the rider, providing a smoother ride for rider and passenger. In the 2007 model year, Harley-Davidson celebrated the 50th anniversary of the Sportster and produced a limited edition called the XL50, of which only 2000 were made for sale worldwide. Each motorcycle was individually numbered and came in one of two colors, Mirage Pearl Orange or Vivid Black. Also in 2007, electronic fuel injection was introduced to the Sportster family, and the Nightster model was introduced in mid-year. In 2009, Harley-Davidson added the Iron 883 to the Sportster line, as part of the Dark Custom series. In the 2008 model year, Harley-Davidson released the XR1200 Sportster in Europe, Africa, and the Middle East. The XR1200 had an Evolution engine tuned to produce , four-piston dual front disc brakes, and an aluminum swing arm. Motorcyclist featured the XR1200 on the cover of its July 2008 issue and was generally positive about it in their "First Ride" story, in which Harley-Davidson was repeatedly asked to sell it in the United States. One possible reason for the delayed availability in the United States was the fact that Harley-Davidson had to obtain the "XR1200" naming rights from Storz Performance, a Harley customizing shop in Ventura, California. The XR1200 was released in the United States in 2009 in a special color scheme including Mirage Orange, highlighting its dirt-tracker heritage. The first 750 XR1200 models in 2009 were pre-ordered and came with a number 1 tag for the front of the bike, autographed by Kenny Coolbeth and Scott Parker, and a thank-you/welcome letter from the company, signed by Bill Davidson. The XR1200 was discontinued in model year 2013. In 2021, Harley-Davidson launched the Sportster S model, with a 121 hp engine and 228 kg ready-to-ride weight.
The Sportster S was one of the first Harleys to come with cornering ABS and lean-sensitive traction control. The Sportster S is also the first model under the Sportster nameplate since 1957 to receive a completely new engine. Designation Except for the street-going XR1000 of the 1980s and the XR1200, most Sportsters made for street use have the prefix XL in their model designation. For the Sportster Evolution engines used since the mid-1980s, there have been two engine sizes. Motorcycles with the smaller engine are designated XL883, while those with the larger engine were initially designated XL1100. When the size of the larger engine was increased from 1,100 cc to 1,200 cc, the designation was changed accordingly from XL1100 to XL1200. Subsequent letters in the designation refer to model variations within the Sportster range, e.g. the XL883C refers to an 883 cc Sportster Custom, while the XL1200S designates the now-discontinued 1200 Sportster Sport. VRSC Introduced in 2001 and produced until 2017, the VRSC muscle bike family bears little resemblance to Harley's more traditional lineup. Competing against Japanese and American muscle bikes in the emerging muscle bike/power cruiser segment, the "V-Rod" makes use of the Revolution engine which, for the first time in Harley history, incorporates overhead cams and liquid cooling. The V-Rod is visually distinctive, easily identified by the 60-degree V-twin engine, the radiator and the hydroformed frame members that support the round-topped air cleaner cover. The VRSC platform was also used for factory drag-racing motorcycles. In 2008, Harley added the anti-lock braking system as a factory-installed option on all VRSC models. Harley also increased the displacement of the stock engine from , which had only previously been available from Screamin' Eagle, and added a slipper clutch as standard equipment.
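The designation schemes described above are built from fixed prefixes (FXST/FLST for Softails, FXD for Dynas, XL for Sportsters, VRSC for Revolution-engined street bikes), so a model code can be classified by longest-prefix matching. The sketch below is a simplified, hypothetical decoder with a deliberately incomplete mapping, not an official Harley-Davidson scheme:

```python
# Hypothetical longest-prefix decoder for a few of the designation
# prefixes described in the text; the table is illustrative and
# deliberately incomplete.
PREFIXES = [
    ("VRSC", "Revolution engine, street custom (V-Rod family)"),
    ("FXST", "big-twin engine, Softail chassis, 21-inch front wheel"),
    ("FLST", "big-twin engine, Softail chassis, 16-inch front wheel"),
    ("FXD",  "big-twin engine, Dyna chassis"),
    ("XL",   "Sportster"),
    ("FL",   "big-twin engine, large front wheel (Touring)"),
]

def decode(designation: str) -> str:
    # Try longer prefixes first so FXST wins over FX, FLST over FL, etc.
    for prefix, meaning in sorted(PREFIXES, key=lambda p: len(p[0]), reverse=True):
        if designation.startswith(prefix):
            return meaning
    return "unknown designation"

print(decode("FXDWG"))    # big-twin engine, Dyna chassis
print(decode("XL1200S"))  # Sportster
```

Longest-prefix ordering matters here: without it, FLSTF would match the generic FL prefix before the more specific FLST.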
VRSC models include: VRSCA: V-Rod (2002–2006), VRSCAW: V-Rod (2007–2010), VRSCB: V-Rod (2004–2005), VRSCD: Night Rod (2006–2008), VRSCDX: Night Rod Special (2007–2014), VRSCSE: Screamin' Eagle CVO V-Rod (2005), VRSCSE2: Screamin' Eagle CVO V-Rod (2006), VRSCR: Street Rod (2006–2007), VRSCX: Screamin' Eagle Tribute V-Rod (2007), VRSCF: V-Rod Muscle (2009–2014). VRSC models utilize the Revolution engine (VR), and the street versions are designated Street Custom (SC). After the VRSC prefix common to all street Revolution bikes, the next letter denotes the model, either A (base V-Rod: discontinued), AW (base V-Rod + W for Wide, with a 240 mm rear tire), B (discontinued), D (Night Rod: discontinued), R (Street Rod: discontinued), SE and SEII (CVO Special Edition), or X (special edition). Further differentiation within models is made with an additional letter, e.g., VRSCDX denotes the Night Rod Special. VRXSE The VRXSE V-Rod Destroyer is Harley-Davidson's production drag racing motorcycle, constructed to run the quarter mile in less than ten seconds. It is based on the same Revolution engine that powers the VRSC line, but the VRXSE uses the Screamin' Eagle 1,300 cc "stroked" incarnation, featuring a 75 mm crankshaft, 105 mm pistons, and 58 mm throttle bodies. The V-Rod Destroyer is not a street-legal motorcycle. As such, it uses "X" instead of "SC" to denote a non-street bike. "SE" denotes a CVO Special Edition. Street The Street, Harley-Davidson's newest platform and its first all-new platform in thirteen years, was designed to appeal to younger riders looking for a lighter bike at a cheaper price. The Street 750 model was launched in India at the 2014 Indian Auto Expo, Delhi-NCR on February 5, 2014. The Street 750 weighs 218 kg and has a ground clearance of 144 mm, giving it the lowest weight and the highest ground clearance of Harley-Davidson motorcycles currently available. The Street 750 uses an all-new, liquid-cooled, 60° V-twin engine called the Revolution X.
In the Street 750, the engine displaces and produces 65 Nm at 4,000 rpm. A six-speed transmission is used. The Street 750 and the smaller-displacement Street 500 have been available since late 2014. Street series motorcycles for the North American market will be built in Harley-Davidson's Kansas City, Missouri plant, while those for other markets around the world will be built completely in their plant in Bawal, India. LiveWire Harley-Davidson's LiveWire, released in 2019, is their first electric vehicle. The high-voltage battery provides a minimum city range of 98 miles (158 km). The LiveWire targets a different type of customer than their classic V-twin powered motorcycles. In March 2020, a Harley-Davidson LiveWire was used to break the 24-hour distance record for an electric motorcycle. The bike traveled a reported 1,723 km (1,079 miles) in 23 hours and 48 minutes. The LiveWire offers a Level 1 slow recharge, which uses a regular wall outlet to refill an empty battery overnight, or a quick Level 3 DC fast charge. The fast charge fills the battery most of the way in about 40 minutes. Swiss rider Michel von Tell used the Level 3 charging to make the 24-hour ride. In December 2021, the news was published that LiveWire would be spun off from parent Harley-Davidson, set to go public in the first half of 2022 through a merger with a special purpose acquisition company (SPAC) at a valuation of $1.77 billion. Custom Vehicle Operations Custom Vehicle Operations (CVO) is a team within Harley-Davidson that produces limited-edition customizations of Harley's stock models. Every year since 1999, the team has selected two to five of the company's base models and added higher-displacement engines, performance upgrades, special-edition paint jobs, more chromed or accented components, audio system upgrades, and electronic accessories to create high-dollar, premium-quality customizations for the factory custom market.
The models most commonly upgraded in such a fashion are the Ultra Classic Electra Glide, which has been selected for CVO treatment every year from 2006 to the present, and the Road King, which was selected in 2002, 2003, 2007, and 2008. The Dyna, Softail, and VRSC families have also been selected for CVO customization. Environmental record The Environmental Protection Agency conducted emissions-certification and representative emissions tests in Ann Arbor, Michigan, in 2005. Subsequently, Harley-Davidson produced an "environmental warranty". The warranty assures each owner that the vehicle is designed and built free of any defects in materials and workmanship that would cause the vehicle to not meet EPA standards. In 2005, the EPA and the Pennsylvania Department of Environmental Protection (PADEP) confirmed Harley-Davidson to be the first corporation to voluntarily enroll in the One Clean-Up Program. This program is designed for the clean-up of the affected soil and groundwater at the former York Naval Ordnance Plant. The program is backed by the state and local government along with participating organizations and corporations. Paul Gotthold, Director of Operations for the EPA, congratulated the motor company. Harley-Davidson also purchased most of Castalloy, a South Australian producer of cast motorcycle wheels and hubs. The South Australian government has set forth "protection to the purchaser (Harley-Davidson) against environmental risks". In August 2016, Harley-Davidson settled with the EPA for $12 million, without admitting wrongdoing, over the sale of after-market "super tuners". Super tuners were devices, marketed for competition, which enabled increased performance of Harley-Davidson products. However, the devices also modified the emission control systems, producing increased hydrocarbon and nitrogen oxide emissions.
Harley-Davidson is required to buy back and destroy any super tuners which do not meet Clean Air Act requirements and to spend $3 million on air pollution mitigation. Brand culture According to a Harley-Davidson study, in 1987 half of all Harley riders were under age 35. However, by 2006, only 15 percent of Harley buyers were under 35, and as of 2005, the median age had risen to 46.7. In 2008, Harley-Davidson stopped disclosing the average age of riders; at this point it was 48 years old. In 1987, the median household income of a Harley-Davidson rider was $38,000. By 1997, the median household income for those riders had more than doubled, to $83,000. Many Harley-Davidson clubs exist around the world; the oldest one, founded in 1928, is in Prague. Harley-Davidson attracts a loyal brand community, with licensing of the Harley-Davidson logo accounting for almost 5 percent of the company's net revenue ($41 million in 2004). Harley-Davidson supplies many American police forces with their motorcycle fleets. From its founding, Harley-Davidson had worked to brand its motorcycles as respectable and refined products, with ads that showed what motorcycling writer Fred Rau called "refined-looking ladies with parasols, and men in conservative suits as the target market". The 1906 Harley-Davidson's effective, and polite, muffler was emphasized in advertisements with the nickname "The Silent Gray Fellow". That began to shift in the 1960s, partially in response to the clean-cut motorcyclist portrayed in Honda's "You meet the nicest people on a Honda" campaign, when Harley-Davidson sought to draw a contrast with Honda by underscoring the working-class, macho, and even slightly anti-social attitude associated with motorcycling's dark side. With the 1971 FX Super Glide, the company embraced, rather than distanced itself from, chopper style and the counterculture custom Harley scene.
Their marketing cultivated the "bad boy" image of biker and motorcycle clubs, and to a point, even outlaw or one-percenter motorcycle clubs. Origin of "Hog" nickname Beginning in 1920, a team of farm boys, including Ray Weishaar, who became known as the "hog boys", consistently won races. The group had a live hog as their mascot. Following a win, they would put the hog on their Harley and take a victory lap. In 1983, the Motor Company formed a club for owners of its product, taking advantage of the long-standing nickname by turning "hog" into the acronym HOG, for Harley Owners Group. Harley-Davidson attempted to trademark "hog", but lost a case against an independent Harley-Davidson specialist, The Hog Farm of West Seneca, New York, in 1999, when the appellate panel ruled that "hog" had become a generic term for large motorcycles and was therefore unprotectable as a trademark. On August 15, 2006, Harley-Davidson Inc. had its NYSE ticker symbol changed from HDI to HOG. Bobbers Harley-Davidson FL "big twins" normally had heavy steel fenders, chrome trim, and other ornate and heavy accessories. After World War II, riders wanting more speed would often shorten the fenders or take them off completely to reduce the weight of the motorcycle. These bikes were called "bobbers" or sometimes "choppers", because parts considered unnecessary were chopped off. Those who made or rode choppers and bobbers, especially members of motorcycle clubs like the Hells Angels, referred to stock FLs as "garbage wagons". Harley Owners Group Harley-Davidson established the Harley Owners Group (HOG) in 1983 to build on the loyalty of Harley-Davidson enthusiasts as a means to promote a lifestyle alongside its products. The HOG also opened new revenue streams for the company, with the production of tie-in merchandise offered to club members, numbering more than one million. 
Other motorcycle brands, and consumer brands outside motorcycling, have also tried to create factory-sponsored community marketing clubs of their own. HOG members typically spend 30 percent more than other Harley owners on such items as clothing and Harley-Davidson-sponsored events. In 1991, HOG went international, with the first official European HOG Rally in Cheltenham, England. Today, more than one million members and more than 1,400 chapters worldwide make HOG the largest factory-sponsored motorcycle organization in the world. HOG benefits include organized group rides, exclusive products and product discounts, insurance discounts, and the Hog Tales newsletter. A one-year full membership is included with the purchase of a new, unregistered Harley-Davidson. In 2008, HOG celebrated its 25th anniversary in conjunction with the Harley 105th in Milwaukee, Wisconsin. The 3rd Southern HOG Rally was expected to bring together the largest gathering of Harley-Davidson owners in South India, with more than 600 owners riding to Hyderabad from across 13 HOG chapters. Factory tours and museum Harley-Davidson offers factory tours at four of its manufacturing sites, and the Harley-Davidson Museum, which opened in 2008, exhibits Harley-Davidson's history, culture, and vehicles, including the motor company's corporate archives. York, Pennsylvania – Vehicle Operations: Manufacturing site for Touring class, Softail, and custom vehicles. Tomahawk, Wisconsin – Tomahawk Operations: Facility that makes sidecars, saddlebags, windshields, and more. Kansas City, Missouri – Vehicle and Powertrain Operations: Manufacturing site of Sportster, VRSC, and other vehicles. Menomonee Falls, Wisconsin – Pilgrim Road Powertrain Operations plant, two types of tours. Milwaukee, Wisconsin – Harley-Davidson Museum: Archive; exhibits of people, products, culture and history; restaurant & café; and museum store.
Due to the consolidation of operations, the Capitol Drive Tour Center in Wauwatosa, Wisconsin, was closed in 2009.

Historic register designations

Some of the company's buildings have been listed on state and national historic registers, including:

Juneau Avenue factory – added to the National Register of Historic Places on November 9, 1994.
Factory No. 7 – added to the Wisconsin State Register of Historic Places on August 14, 2020.

Anniversary celebrations

Beginning with Harley-Davidson's 90th anniversary in 1993, Harley-Davidson has had celebratory rides to Milwaukee called the "Ride Home". This new tradition has continued every five years, and is referred to unofficially as "Harleyfest", in line with Milwaukee's other festivals (Summerfest, German Fest, Festa Italiana, etc.). This event brings Harley riders from all around the world. The 105th anniversary celebration was held on August 28–31, 2008, and included events in Milwaukee, Waukesha, Racine, and Kenosha counties in southeast Wisconsin. The 110th anniversary celebration was held on August 29–31, 2013. The 115th anniversary was held in Prague, Czech Republic, the home country of the oldest existing Harley-Davidson club, on July 5–8, 2018, and attracted more than 100,000 visitors and 60,000 bikes.

Labor Hall of Fame

William S. Harley, Arthur Davidson, William A. Davidson and Walter Davidson, Sr. were inducted into the Labor Hall of Fame in 2004 for their accomplishments for the H-D company and its workforce.

Television drama

The company's origins were dramatized in a 2016 miniseries entitled Harley and the Davidsons, starring Robert Aramayo as William Harley, Bug Hall as Arthur Davidson and Michiel Huisman as Walter Davidson, which premiered on the Discovery Channel as a "three-night event series" on September 5, 2016.

See also

List of Harley-Davidson motorcycles
Harley-Davidson engines
Harley-Davidson (Bally pinball)
Harley-Davidson (Sega/Stern pinball)
Harley-Davidson & L.A.
Riders
Harley-Davidson: Race Across America
List of motor scooter manufacturers and brands
https://en.wikipedia.org/wiki/Hiberno-English
Hiberno-English
Hiberno-English (from Latin Hibernia: "Ireland") or Irish English (, ) is the set of English dialects natively written and spoken within the island of Ireland (including both the Republic of Ireland and Northern Ireland). Old English, as well as Anglo-Norman, was brought to Ireland as a result of the Anglo-Norman invasion of Ireland of the late 12th century; this became the Forth and Bargy dialect, which is not mutually comprehensible with Modern English. A second wave of the English language was brought to Ireland in the 16th-century Elizabethan period, making the variety of English spoken in Ireland the oldest outside of Great Britain and phonologically closer to Elizabethan English than other varieties. Initially, Norman-English was mainly spoken in an area known as the Pale around Dublin, with mostly the Irish language spoken throughout the rest of the country. Some small pockets remained of speakers who predominantly continued to use the English of that time; because of their sheer isolation, these dialects developed into later (now-extinct) English-related varieties known as Yola in Wexford and Fingallian in Fingal, Dublin. These were no longer mutually intelligible with other English varieties. By the Tudor period, Irish culture and language had regained most of the territory lost to the invaders: even in the Pale, "all the common folk… for the most part are of Irish birth, Irish habit, and of Irish language". However, the Tudor conquest and colonisation of Ireland in the 16th century led to the second wave of immigration by English speakers along with the forced suppression and decline in the status and use of the Irish language. By the mid-19th century, English had become the majority language spoken in the country. It has retained this status to the present day, with even those whose first language is Irish being fluent in English as well.
Today, little more than one percent of the population speaks the Irish language natively, though it is required to be taught in all state-funded schools. Of the 40% of the population who self-identified as speaking some Irish in 2016, 4% speak Irish daily outside the education system. In the Republic of Ireland, English is one of two official languages (along with Irish) and is the country's working language. Irish English's writing standards align with British rather than American English. However, Irish English's diverse accents and some of its grammatical structures are unique, with some influence by the Irish language and some instances of phonologically conservative features: features no longer common in the accents of England or North America. Phonologists today often divide Irish English into four or five overarching dialects or accents: Ulster accents, West and South-West Irish accents (like the widely discussed Cork accent), various Dublin accents, and a non-regional standard accent expanding since only the last quarter of the twentieth century (outside of Northern Ireland).

Ulster English

Ulster English (or Northern Irish English) here refers collectively to the varieties of the Ulster province, including Northern Ireland and neighbouring counties outside of Northern Ireland, which have been influenced by Ulster Irish as well as the Scots language, brought over by Scottish settlers during the Plantation of Ulster. Its main subdivisions are Mid-Ulster English, South Ulster English and Ulster Scots, the latter of which is arguably a separate language.

Ulster varieties distinctly pronounce:

An ordinarily grammatically structured (i.e. non-topicalised) declarative sentence, often with a rising intonation at the end of the sentence (the type of intonation pattern that other English speakers usually associate with questions).
as lowered, in the general vicinity of .
as fronted and slightly rounded, more closely approaching .
and as merged in the general vicinity of .
with a backed on-glide and fronted off-glide, putting it in the vicinity of .
as , particularly before voiceless consonants.
as , though nowadays commonly or even when in a closed syllable.
, almost always, as a slightly raised monophthong .
A lack of happy-tensing; with the final vowel of happy, holy, money, etc. as .
Syllable-final occasionally as "dark [ɫ]", though especially before a consonant.

Notable lifelong native speakers

Christine Bleakley, Jamie Dornan, Rory McIlroy, Liam Neeson – "The Northern Irish accent is the sexiest in the UK, according to a new poll. The dulcet tones of Liam Neeson, Jamie Dornan, Christine Bleakley and Rory McIlroy helped ensure the accent came top of the popularity charts"
John Cole – "His distinctive Ulster accent"
Nadine Coyle – "I was born and raised in Derry and I can't change the way I talk".
Daniel O'Donnell – "the languid Donegal accent made famous by Daniel O'Donnell"
Colin Morgan – "Colin Morgan has revealed that fans of the show are often confused by his accent. The 23-year-old... is originally from Northern Ireland"

West and South-West Irish English

West and South-West Irish English here refers to broad varieties of Ireland's West and South-West regions. Accents of both regions are known for:

The backing and slight lowering of towards .
The more open starting point for and of and , respectively.
The preservation of as monophthongal .
and , respectively, as and .
In the West, and may respectively be pronounced by older speakers as and before a consonant, so fist sounds like fished, castle like , and arrest like .

South-West Irish English (often known, by specific county, as Cork English, Kerry English, or Limerick English) also features two major defining characteristics of its own. One is the pin–pen merger: the raising of to when before or (as in again or pen).
The other is the intonation pattern of a slightly higher pitch followed by a significant drop in pitch on stressed long-vowel syllables (across multiple syllables or even within a single one), which is popularly heard in rapid conversation, by speakers of other English dialects, as a noticeable kind of undulating "sing-song" pattern.

Notable lifelong native speakers

Nicola Coughlan – "She seamlessly switches from a soft Galway accent"
Robert Sheehan
Kerry Condon – "Tipperary accent"
Aisling O'Sullivan
Dolores O'Riordan – "singing in her Limerick accent"
Roy Keane – "Cork accent"
Dáithí Ó Sé – "his Kerry dialect"
The Rubberbandits – "Rubberbandits' strong Limerick city accent... sits on a frequency like a tambourine which can cut through any noise"
Roger Clarke – "so I developed an Irish twang fairly quickly"; "the family moved to just outside Sligo town when he was 12 years old"
Paul McGrath (footballer) – "With a beautiful soft Irish accent"
The Clancy Brothers
Rachel Pilkington

Dublin English

Dublin English is highly internally diverse and refers collectively to the Irish English varieties immediately surrounding and within the metropolitan area of Dublin. Modern-day Dublin English largely lies on a phonological continuum, ranging from a more traditional, lower-prestige, local urban accent on the one end to a more recently developing, higher-prestige, non-local (regional and even supraregional) accent on the other end, whose most advanced characteristics only first emerged in the late 1980s and 1990s. The accent that most strongly uses the traditional working-class features has been labelled by linguists as local Dublin English. Most speakers from Dublin and its suburbs, however, have accent features falling variously along the entire middle as well as the newer end of the spectrum, which together form what is called non-local Dublin English, spoken by middle- and upper-class natives of Dublin and the greater eastern Irish region surrounding the city.
A subset of this variety, whose middle-class speakers mostly range in the middle section of the continuum, is called mainstream Dublin English. Mainstream Dublin English has become the basis of an accent that has otherwise become supraregional (see more below) everywhere except in the north of the country. The majority of Dubliners born since the 1980s (led particularly by women) has shifted towards the most innovative non-local accent, here called new Dublin English, which has gained ground over mainstream Dublin English and which is the most extreme variety in rejecting the local accent's traditional features. The varieties at either extreme of the spectrum, local and new Dublin English, are both discussed in further detail below. In the most general terms, all varieties of Dublin English have the following identifying sounds that are often distinct from the rest of Ireland, pronouncing:

as fronted and/or raised .
as retracted and/or centralised .
as a diphthong in the range (local to non-local) of .

Local Dublin English

Local Dublin English (or popular Dublin English) here refers to a traditional, broad, working-class variety spoken in the Republic of Ireland's capital city of Dublin. It is the only Irish English variety that in earlier history was non-rhotic; however, it is today weakly rhotic. Known for diphthongisation of the and vowels, the local Dublin accent is also known for a phenomenon called "vowel breaking", in which , , and in closed syllables are "broken" into two syllables, approximating , , , and , respectively.

New Dublin English

Evolving as a fashionable outgrowth of the mainstream non-local Dublin English, new Dublin English (also, advanced Dublin English and, formerly, fashionable Dublin English) is a youthful variety that originally began in the early 1990s among the "avant-garde" and now those aspiring to a non-local "urban sophistication".
New Dublin English itself, first associated with affluent and middle-class inhabitants of southside Dublin, is probably now spoken by a majority of Dubliners born since the 1980s. It has replaced (yet was largely influenced by) moribund D4 English (often known as "Dublin 4" or "DART speak" or, mockingly, "Dortspeak"), which originated around the 1970s from Dubliners who rejected traditional notions of Irishness, regarding themselves as more trendy and sophisticated; however, particular aspects of the D4 accent became quickly noticed and ridiculed as sounding affected, causing these features to fall out of fashion by the 1990s. New Dublin English can have fur–fair, horse–hoarse, and witch–which mergers, while resisting the traditionally Irish English cot–caught merger. This accent has since spread south to parts of east Co. Wicklow and west to parts of north Co. Kildare and parts of south Co. Meath. The accent can be also heard among the middle to upper classes in most major cities in the Republic today.

Standard Irish English

Supraregional Southern Irish English (sometimes, simply Supraregional Irish English or Standard Irish English) refers to a variety spoken particularly by educated and middle- or higher-class Irish people, crossing regional boundaries throughout all of the Republic of Ireland, except the north. As mentioned earlier, mainstream Dublin English of the early- to mid-twentieth century is the direct influence and catalyst for this variety, coming about by the suppression of certain markedly Irish features (and retention of other Irish features) as well as the adoption of certain standard British (i.e., non-Irish) features. The result is a configuration of features that is still unique; in other words, this accent is not simply a wholesale shift towards British English.
Most speakers born in the 1980s or later are showing fewer features of this late-twentieth-century mainstream supraregional form and more characteristics aligning to a rapidly spreading new Dublin accent (see more above, under "New Dublin English"). Ireland's supraregional dialect pronounces:

as quite open .
along a possible spectrum , with innovative [ɑɪ] particularly more common before voiced consonants, notably including .
as starting fronter and often more raised than other dialects: .
may be , with a backer vowel than in other Irish accents, though still relatively fronted.
as .
as , almost always separate from , keeping words like war and wore, or horse and hoarse, pronounced distinctly.
as .
as a diphthong, approaching , as in the mainstream United States, or , as in mainstream England.
as higher, fronter, and often rounder .

Overview of pronunciation and phonology

The following charts list the vowels typical of each Irish English dialect as well as the several distinctive consonants of Irish English. Phonological characteristics of overall Irish English are given as well as categorisations into five major divisions of Hiberno-English: northern Ireland (or Ulster); West & South-West Ireland; local Dublin; new Dublin; and supraregional (southern) Ireland. Features of mainstream non-local Dublin English fall on a range between "local Dublin" and "new Dublin".

Pure vowels (monophthongs)

The defining monophthongs of Irish English: The following pure vowel sounds are defining characteristics of Irish English:

is typically centralised in the mouth and often somewhat more rounded than other standard English varieties, such as Received Pronunciation in England or General American in the United States.
There is a partial trap–bath split in most Irish English varieties (cf. Variation in Australian English).
There is inconsistency regarding the lot–cloth split and the cot–caught merger; certain Irish English dialects have these phenomena while others do not.
The cot–caught merger by definition rules out the presence of the lot–cloth split. Any and many are pronounced to rhyme with nanny, Danny, etc. by very many speakers, i.e. with each of these words pronounced with .

All pure vowels of various Hiberno-English dialects:

A table here gave each vowel's realisation in Ulster, West & South-West Ireland, local Dublin, new Dublin, and supraregional Ireland (phonetic values omitted here); its rows, with example words, were: flat (add, land, trap); and broad (bath, calm, dance); conservative (lot, top, wasp); divergent (loss, off); (all, bought, saw); (dress, met, bread); (about, syrup, arena); (hit, skim, tip); (beam, chic, fleet); (bus, flood); (book, put, should); (food, glue, new).

Footnotes:

In southside Dublin's once-briefly fashionable "Dublin 4" (or "Dortspeak") accent, the " and broad " set becomes rounded as [ɒː].
In South-West Ireland, before or is raised to .
Due to the local Dublin accent's phenomenon of "vowel breaking", may be realised in this accent as in a closed syllable, and, in the same environment, may be realised as .
The vowel is rather open in Ulster accents, uniquely among Irish accents.

Other notes:

In some highly conservative Irish English varieties, words spelled with ea and pronounced with in RP are pronounced with , for example meat, beat, and leaf.
In words like took where the spelling "oo" usually represents , conservative speakers may use . This is most common in local Dublin and the speech of north-east Leinster.
Gliding vowels (diphthongs)

The defining diphthongs of Hiberno-English: The following gliding vowel (diphthong) sounds are defining characteristics of Irish English:

The first element of the diphthong , as in ow or doubt, may move forward in the mouth in the east (namely, Dublin) and supraregionally; however, it may actually move backwards throughout the entire rest of the country. In the north alone, the second element is particularly moved forward, as in Scotland.
The first element of the diphthong , as in boy or choice, is slightly or significantly lowered in all geographic regions except the north.
The diphthong , as in rain or bay, is most commonly monophthongised to . Furthermore, this often lowers to in words such as gave and came (sounding like "gev" and "kem").

All diphthongs of various Hiberno-English dialects:

Footnotes:

Due to the local Dublin accent's phenomenon of "vowel breaking", may be realised in that accent as in a closed syllable, and, in the same environment, may be realised as .

R-coloured vowels

The defining r-coloured vowels of Hiberno-English: The following r-coloured vowel features are defining characteristics of Hiberno-English:

Rhoticity: Every major accent of Hiberno-English pronounces the letter "r" whenever it follows a vowel sound, though this is weaker in the local Dublin accent due to its earlier history of non-rhoticity. Rhoticity is a feature that Hiberno-English shares with Canadian English and General American but not with Received Pronunciation.
The distinction between and is almost always preserved, so that, for example, horse and hoarse are not merged in most Irish accents.

All r-coloured vowels of various Hiberno-English dialects:

Footnotes:

In older varieties of the conservative accents, like local Dublin, the "r" sound before a vowel may be pronounced as a tapped , rather than as the typical approximant .
Every major accent of Irish English is rhotic (pronounces "r" after a vowel sound).
The local Dublin accent is the only one that during an earlier time was non-rhotic, though it is usually very lightly rhotic today, with a few minor exceptions. The rhotic consonant in this and most other Irish accents is an approximant .
The "r" sound of the mainstream non-local Dublin accent is more precisely a velarised approximant , while the "r" sound of the more recently emerging non-local Dublin (or "new Dublin") accent is more precisely a retroflex approximant .
In southside Dublin's once-briefly fashionable "Dublin 4" (or "Dortspeak") accent, is realised as .
In non-local Dublin's more recently emerging (or "new Dublin") accent, and may both be realised more rounded as .
The mergers have not occurred in local Dublin, West/South-West, and other very conservative and traditional Irish English varieties ranging from the south to the north. Whereas the vowels corresponding to historical , and have merged to in most dialects of English, the local Dublin and West/South-West accents retain a two-way distinction: versus . The distribution of these two in these accents does not always align to what their spelling suggests: is used when after a labial consonant (e.g. fern), when spelled as "ur" or "or" (e.g. word), or when spelled as "ir" after an alveolar stop (e.g. dirt); is used in all other situations. However, there are apparent exceptions to these rules; John C. Wells describes prefer and per as falling under the class, despite the vowel in question following a labial. The distribution of versus is listed below in some other example words:

certain, chirp, circle, earn, earth, girl, germ, heard or herd, Hertz, irk, tern
bird, dirt, first, hurts, murder, nurse, turn, third or turd, urn, work, world

Non-local Dublin, younger, and supraregional Irish accents do feature the full mergers to , as in American English. In a rare few local Dublin varieties that are non-rhotic, is either lowered to or backed and raised to .
The distinction between and is widely preserved in Ireland, so that, for example, horse and hoarse are not merged in most Irish English dialects; however, they are usually merged in Belfast and new Dublin.
In local Dublin, due to the phenomenon of "vowel breaking", may in fact be realised as .

Consonants

The defining consonants of Hiberno-English: The consonants of Hiberno-English mostly align to the typical English consonant sounds. However, a few Irish English consonants have distinctive, varying qualities. The following consonant features are defining characteristics of Hiberno-English:

H-fulness: Unlike most English varieties of England and Wales, which drop the word-initial sound in words like house or happy, Hiberno-English always retains word-initial . Furthermore, Hiberno-English also allows where it is permitted in Irish but excluded in other dialects of English, such as before an unstressed vowel (e.g. Haughey ) and at the end of a word (e.g. McGrath ).
The dental fricative phonemes (as in the) and (as in thin) are pronounced uniquely as stops in most Hiberno-English, either dental or alveolar. is pronounced as or , depending on specific dialect; and is pronounced as or . In some middle- or upper-class accents, they are realized as the dental stops and as such do not merge with the alveolar stops ; thus, for example, tin () is not a homophone of thin . In older, rural, or working-class accents, such pairs are indeed merged.
The phoneme , when appearing at the end of a word or between vowel sounds, is pronounced uniquely in most Hiberno-English; the most common pronunciation is as a "slit fricative".
The phoneme is almost always of a "light" or "clear" quality (i.e. not velarised), unlike Received Pronunciation, which uses both a clear and a dark "L" sound, or General American, which pronounces all "L" sounds as dark.
Rhoticity: The pronunciation of historical is nearly universal in Irish accents of English.
As with General American (but not Received Pronunciation), this means that the letter "r", if appearing after a vowel sound, is always pronounced (in words such as here, cart, or surf).

Unique consonants in various Hiberno-English dialects:

Footnotes:

In traditional, conservative Ulster English, and are palatalised before a low front vowel.
Local Dublin also undergoes cluster simplification, so that stop consonant sounds occurring after fricatives or sonorants may be left unpronounced, resulting, for example, in "poun(d)" and "las(t)".
Rhoticity: Every major accent of Irish English is strongly rhotic (pronounces "r" after a vowel sound), though to a weaker degree with the local Dublin accent. The accents of local Dublin and some smaller eastern towns like Drogheda were historically non-rhotic and are now only very lightly rhotic or variably rhotic, with the rhotic consonant being an alveolar approximant, . In extremely traditional and conservative accents (exemplified, for instance, in the speech of older speakers throughout the country, even in South-West Ireland, such as Mícheál Ó Muircheartaigh and Jackie Healy-Rae), the rhotic consonant, before a vowel sound, can also be an alveolar tap, . The rhotic consonant for the northern Ireland and new Dublin accents is a retroflex approximant, . Dublin's retroflex approximant has no precedent outside of northern Ireland and is a genuine innovation of the 1990s and 2000s. A guttural/uvular is found in north-east Leinster. Otherwise, the rhotic consonant of virtually all other Irish accents is the postalveolar approximant, .
The symbol [θ̠] is used here to represent the voiceless alveolar non-sibilant fricative, sometimes known as a "slit fricative", whose articulation is described as being apico-alveolar.
Overall, and are being increasingly merged in supraregional Irish English, for example, making wine and whine homophones, as in most varieties of English around the world.
Other phonological characteristics of Irish English include that consonant clusters ending in before are distinctive (Wells, 1982, p. 435):

is dropped after coronal sonorants and fricatives, e.g. new sounds like noo, and sue like soo.
becomes , e.g. dew/due, duke and duty sound like "jew", "jook" and "jooty".
becomes , e.g. tube is "choob", tune is "choon".
The following show neither dropping nor coalescence: (as in cute), (as in mute), and (as in huge; though the can be dropped in the South-West of Ireland).

The naming of the letter H as "haytch" is standard. Due to Gaelic influence, an epenthetic schwa is sometimes inserted, perhaps as a feature of older and less careful speakers, e.g. film and form .

Vocabulary

Loan words from Irish

A number of Irish-language loan words are used in Hiberno-English, particularly in an official state capacity. For example, the head of government is the Taoiseach, the deputy head is the Tánaiste, the parliament is the Oireachtas and its lower house is Dáil Éireann. Less formally, people also use loan words in day-to-day speech, although this has been on the wane in recent decades and among the young.

Derived words from Irish

Another group of Hiberno-English words are those derived from the Irish language. Some are words in English that have entered into general use, while others are unique to Ireland. These words and phrases are often Anglicised versions of words in Irish or direct translations into English. In the latter case, they often give meaning to a word or phrase that is generally not found in wider English use.

Derived words from Old and Middle English

Another class of vocabulary found in Hiberno-English are words and phrases common in Old and Middle English, but which have since become obscure or obsolete in the modern English language generally. Hiberno-English has also developed particular meanings for words that are still in common use in English generally.
Other words In addition to the three groups above, there are also additional words and phrases whose origin is disputed or unknown. While this group may not be unique to Ireland, their usage is not widespread, and could be seen as characteristic of the language in Ireland. Grammar and syntax The syntax of the Irish language is quite different from that of English. Various aspects of Irish syntax have influenced Hiberno-English, though many of these idiosyncrasies are disappearing in suburban areas and among the younger population. The other major influence on Hiberno-English that sets it apart from modern English in general is the retention of words and phrases from Old- and Middle-English. From Irish Reduplication Reduplication is an alleged trait of Hiberno-English strongly associated with Stage Irish and Hollywood films. the Irish ar bith corresponds to English "at all", so the stronger ar chor ar bith gives rise to the form "at all at all". "I've no time at all at all." ar eagla go … (lit. "on fear that …") means "in case …". The variant ar eagla na heagla, (lit. "on fear of fear") implies the circumstances are more unlikely. The corresponding Hiberno-English phrases are "to be sure" and the very rarely used "to be sure to be sure". In this context, these are not, as might be thought, disjuncts meaning "certainly"; they could better be translated "in case" and "just in case". Nowadays normally spoken with conscious levity. "I brought some cash in case I saw a bargain, and my credit card to be sure to be sure." Yes and no Irish has no words that directly translate as "yes" or "no", and instead repeats the verb used in the question, negated if necessary, to answer. Hiberno-English uses "yes" and "no" less frequently than other English dialects as speakers can repeat the verb, positively or negatively, instead of (or in redundant addition to) using "yes" or "no". "Are you coming home soon?" – "I am." "Is your mobile charged?" – "It isn't." 
This is not limited only to the verb to be: it is also used with to have when used as an auxiliary; and, with other verbs, the verb to do is used. This is most commonly used for intensification, especially in Ulster English.

"This is strong stuff, so it is."
"We won the game, so we did."

Recent past construction

Irish indicates recency of an action by adding "after" to the present continuous (a verb ending in "-ing"), a construction known as the "hot news perfect" or "after perfect". The idiom for "I had done X when I did Y" is "I was after doing X when I did Y", modelled on the Irish usage of the compound prepositions , , and :  /  / .

"Why did you hit him?" – "He was after giving me cheek." (he had [just beforehand] been cheeky to me).

A similar construction is seen where exclamation is used in describing a recent event:

"I'm after hitting him with the car!"
"She's after losing five stone in five weeks!"

When describing less astonishing or significant events, a structure resembling the German perfect can be seen:

"I have the car fixed."
"I have my breakfast eaten."

This correlates with an analysis of "H1 Irish" proposed by Adger & Mitrovic, in a deliberate parallel to the status of German as a V2 language. Recent past construction has been directly adopted into Newfoundland English, where it is common in both formal and casual register. In rural areas of the Avalon peninsula, where Newfoundland Irish was spoken until the early 20th century, it is the grammatical standard for describing whether or not an action has occurred.

Reflection for emphasis

The reflexive version of pronouns is often used for emphasis or to refer indirectly to a particular person, etc., according to context. Herself, for example, might refer to the speaker's boss or to the woman of the house. Use of herself or himself in this way often indicates that the speaker attributes some degree of arrogance or selfishness to the person in question.
Note also the indirectness of this construction relative to, for example, She's coming now. This reflexive pronoun can also be used to describe a partner – "I was with himself last night." or "How's herself doing?"

"'Tis herself that's coming now." Is í féin atá ag teacht anois.
"Was it all of ye or just yourself?" An sibhse ar fad nó tusa féin a bhí i gceist?

Prepositional pronouns

There are some language forms that stem from the fact that there is no verb to have in Irish. Instead, possession is indicated in Irish by using the preposition at (in Irish, ag). To be more precise, Irish uses a prepositional pronoun that combines ag "at" and mé "me" to create agam. In English, the verb "to have" is used, along with a "with me" or "on me" that derives from Tá … agam. This gives rise to the frequent:

"Do you have the book?" – "I have it with me."
"Have you change for the bus on you?"
"He will not shut up if he has drink taken."

Somebody who can speak a language "has" a language, in which Hiberno-English has borrowed the grammatical form used in Irish.

"She does not have Irish." Níl Gaeilge aici. literally "There is no Irish at her".

When describing something, many Hiberno-English speakers use the term "in it" where "there" would usually be used. This is due to the Irish word ann (pronounced "oun" or "on") fulfilling both meanings.

"Is it yourself that is in it?" An tú féin atá ann?
"Is there any milk in it?" An bhfuil bainne ann?

Another idiom is this thing or that thing described as "this man here" or "that man there", which also features in Newfoundland English in Canada.

"This man here." An fear seo. (cf. the related anseo = here)
"That man there." An fear sin. (cf. the related ansin = there)

Conditionals have a greater presence in Hiberno-English due to the tendency to replace the simple present tense with the conditional (would) and the simple past tense with the conditional perfect (would have).

"John asked me would I buy a loaf of bread."
(John asked me to buy a loaf of bread.) "How do you know him? We would have been in school together." (We were in school together.) Bring and take: Irish use of these words differs from that of British English because it follows the Irish grammar for beir and tóg. English usage is determined by direction; a person determines Irish usage. So, in English, one takes "from here to there", and brings it "to here from there". In Irish, a person takes only when accepting a transfer of possession of the object from someone else, and a person brings at all other times, irrespective of direction (to or from). Don't forget to bring your umbrella with you when you leave. (To a child) Hold my hand: I don't want someone to take you. To be The Irish equivalent of the verb "to be" has two present tenses, one (the present tense proper or "aimsir láithreach") for cases which are generally true or are true at the time of speaking and the other (the habitual present or "aimsir ghnáthláithreach") for repeated actions. Thus, "you are [now, or generally]" is tá tú, but "you are [repeatedly]" is bíonn tú. Both forms are used with the verbal noun (equivalent to the English present participle) to create compound tenses. This is similar to the distinction between ser and estar in Spanish or the use of the 'habitual be' in African-American Vernacular English. The corresponding usage in English is frequently found in rural areas, especially Mayo/Sligo in the west of Ireland and Wexford in the south-east, Inner-City Dublin and Cork city along with border areas of the North and Republic. In this form, the verb "to be" in English is similar to its use in Irish, with a "does be/do be" (or "bees", although less frequently) construction to indicate the continuous, or habitual, present: "He does be working every day." Bíonn sé ag obair gach lá. "They do be talking on their mobiles a lot." Bíonn siad ag caint go minic ar a bhfóin póca. "He does be doing a lot of work at school."
Bíonn sé ag déanamh go leor oibre ar scoil. "It's him I do be thinking of." Is air a bhíonn mé ag smaoineamh. This construction also surfaces in African American Vernacular English, as the famous habitual be. From Old and Middle English In old-fashioned usage, "it is" can be freely abbreviated ’tis, even as a standalone sentence. This also allows the double contraction ’tisn’t, for "it is not". Irish has separate forms for the second person singular (tú) and the second person plural (sibh). Mirroring Irish, and almost every other Indo-European language, the plural you is also distinguished from the singular in Hiberno-English, normally by use of the otherwise archaic English word ye; the word yous (sometimes written as youse) also occurs, but primarily only in Dublin and across Ulster. In addition, in some areas in Leinster, north Connacht and parts of Ulster, the hybrid word ye-s, pronounced "yiz", may be used. The pronunciation differs between the north-west and Leinster. "Did ye all go to see it?" Ar imigh sibh go léir chun é a fheicint? "None of youse have a clue!" Níl ciall/leid ar bith agaibh! "Are ye not finished yet?" Nach bhfuil sibh críochnaithe fós? "Yis are after destroying it!" Tá sibh tar éis é a scriosadh! The word ye, yis or yous, otherwise archaic, is still used in place of "you" for the second-person plural. Ye'r, Yisser or Yousser are the possessive forms, e.g. "Where are yous going?" The verb mitch is very common in Ireland, indicating being truant from school. This word appears in Shakespeare (though he wrote in Early Modern English rather than Middle English), but is seldom heard these days in British English, although pockets of usage persist in some areas (notably South Wales, Devon, and Cornwall). In parts of Connacht and Ulster the word mitch is often replaced by the verb scheme, while in Dublin it is often replaced by "on the hop/bounce".
Another usage familiar from Shakespeare is the inclusion of the second person pronoun after the imperative form of a verb, as in "Wife, go you to her ere you go to bed" (Romeo and Juliet, Act III, Scene IV). This is still common in Ulster: "Get youse your homework done or you're no goin' out!" In Munster, you will still hear children being told, "Up to bed, let ye". For influence from Scotland, see Ulster Scots and Ulster English. Other grammatical influences: Now is often used at the end of sentences or phrases as a semantically empty word, completing an utterance without contributing any apparent meaning. Examples include "Bye now" (= "Goodbye"), "There you go now" (when giving someone something), "Ah now!" (expressing dismay), "Hold on now" (= "wait a minute"), "Now then" as a mild attention-getter, etc. This usage is universal among English dialects, but occurs more frequently in Hiberno-English. It is also used in the manner of the Italian 'prego' or German 'bitte', for example, a barman might say "Now, Sir." when delivering drinks. So is often used for emphasis ("I can speak Irish, so I can"), or it may be tacked onto the end of a sentence to indicate agreement, where "then" would often be used in Standard English ("Bye so", "Let's go so", "That's fine so", "We'll do that so"). The word is also used to contradict a negative statement ("You're not pushing hard enough" – "I am so!"). (This contradiction of a negative is also seen in American English, though not as often as "I am too", or "Yes, I am".) The practice of indicating emphasis with so, reduplicating the sentence's subject pronoun and auxiliary verb (is, are, have, has, can, etc.) as in the initial example, is particularly prevalent in more northern dialects such as those of Sligo, Mayo and the counties of Ulster. Sure/Surely is often used as a tag word, emphasising the obviousness of the statement, roughly translating as but/and/well/indeed.
It can be used as "to be sure" (but note that the other stereotype of "Sure and …" is not actually used in Ireland), or as in "Sure, I can just go on Wednesday" and "I will not, to be sure." The word is also used at the end of sentences (primarily in Munster), for instance, "I was only here five minutes ago, sure!" and can express emphasis or indignation. In Ulster, the reply "Aye, surely" may be given to show strong agreement. To is often omitted from sentences where it would exist in British English. For example, "I'm not allowed go out tonight", instead of "I'm not allowed to go out tonight". Will is often used where British English would use "shall" or American English "should" (as in "Will I make us a cup of tea?"). The distinction between "shall" (for first-person simple future, and second- and third-person emphatic future) and "will" (second- and third-person simple future, first-person emphatic future), maintained by many in England, does not exist in Hiberno-English, with "will" generally used in all cases. Once is sometimes used in a different way from how it is used in other dialects; in this usage, it indicates a combination of logical and causal conditionality: "I have no problem laughing at myself once the joke is funny." Other dialects of English would probably use "if" in this situation. See also: Manx English, English language in Europe, Highland English, Kiltartanese, Languages of Ireland, List of English words of Irish origin, Regional accents of English, Welsh English.
https://en.wikipedia.org/wiki/Harmonic%20analysis
Harmonic analysis
Harmonic analysis is a branch of mathematics concerned with the representation of functions or signals as the superposition of basic waves, and the study and generalization of the notions of Fourier series and Fourier transforms (i.e. an extended form of Fourier analysis). In the past two centuries, it has become a vast subject with applications in areas as diverse as number theory, representation theory, signal processing, quantum mechanics, tidal analysis and neuroscience. The term "harmonics" originated as the Ancient Greek word harmonikos, meaning "skilled in music". In physical eigenvalue problems, it began to mean waves whose frequencies are integer multiples of one another, as are the frequencies of the harmonics of music notes, but the term has been generalized beyond its original meaning. The classical Fourier transform on Rn is still an area of ongoing research, particularly concerning Fourier transformation on more general objects such as tempered distributions. For instance, if we impose some requirements on a distribution f, we can attempt to translate these requirements in terms of the Fourier transform of f. The Paley–Wiener theorem is an example of this. The Paley–Wiener theorem immediately implies that if f is a nonzero distribution of compact support (these include functions of compact support), then its Fourier transform is never compactly supported (i.e. if a signal is limited in one domain, it is unlimited in the other). This is a very elementary form of an uncertainty principle in a harmonic-analysis setting. Fourier series can be conveniently studied in the context of Hilbert spaces, which provides a connection between harmonic analysis and functional analysis.
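For reference, one standard textbook formulation of the Paley–Wiener theorem mentioned above, stated here for smooth compactly supported functions rather than general distributions (this version is not quoted from the article itself), is:

```latex
% Paley--Wiener (smooth case): f is smooth with support in the ball of
% radius R if and only if \hat{f} extends to an entire function on C^n
% obeying, for every N, a bound of exponential type R:
\[
  f \in C_c^{\infty}(\mathbb{R}^n),\quad \operatorname{supp} f \subseteq \{\,|x| \le R\,\}
  \iff
  \forall N\ \exists C_N:\;
  |\hat{f}(\zeta)| \le C_N\,(1+|\zeta|)^{-N}\,e^{R\,|\operatorname{Im}\zeta|}
  \quad \text{for all } \zeta \in \mathbb{C}^n.
\]
```

Since a nonzero entire function cannot vanish on a nonempty open set, such an extension forces the support statement made in the text: the Fourier transform of a nonzero compactly supported f is never compactly supported.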
There are four versions of the Fourier Transform, dependent on the spaces that are mapped by the transformation (discrete/periodic-discrete/periodic: Digital Fourier Transform, continuous/periodic-discrete/aperiodic: Fourier Analysis, discrete/aperiodic-continuous/periodic: Fourier Synthesis, continuous/aperiodic-continuous/aperiodic: continuous Fourier Transform). Abstract harmonic analysis One of the most modern branches of harmonic analysis, having its roots in the mid-20th century, is analysis on topological groups. The core motivating ideas are the various Fourier transforms, which can be generalized to a transform of functions defined on Hausdorff locally compact topological groups. The theory for abelian locally compact groups is called Pontryagin duality. Harmonic analysis studies the properties of that duality and Fourier transform and attempts to extend those features to different settings, for instance, to the case of non-abelian Lie groups. For general non-abelian locally compact groups, harmonic analysis is closely related to the theory of unitary group representations. For compact groups, the Peter–Weyl theorem explains how one may get harmonics by choosing one irreducible representation out of each equivalence class of representations. This choice of harmonics enjoys some of the useful properties of the classical Fourier transform in terms of carrying convolutions to pointwise products, or otherwise showing a certain understanding of the underlying group structure. See also: Non-commutative harmonic analysis. If the group is neither abelian nor compact, no general satisfactory theory is currently known ("satisfactory" means at least as strong as the Plancherel theorem). However, many specific cases have been analyzed, for example SLn. In this case, representations in infinite dimensions play a crucial role. 
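The property of "carrying convolutions to pointwise products" can be checked concretely on the simplest compact abelian group beyond the circle, the finite cyclic group Z/NZ, whose Fourier transform is the discrete Fourier transform. The following is a minimal numerical sketch (the group size N and the random test data are illustrative assumptions, not taken from the article):

```python
import numpy as np

# On the finite cyclic group Z/NZ, the Fourier transform is the discrete
# Fourier transform (DFT), and it carries circular convolution of
# functions on the group to pointwise products of their transforms.
N = 8                              # illustrative group size
rng = np.random.default_rng(0)     # arbitrary test data
f = rng.standard_normal(N)
g = rng.standard_normal(N)

# Circular convolution computed directly from its definition:
# (f * g)(n) = sum over m of f(m) g((n - m) mod N)
conv = np.array([sum(f[m] * g[(n - m) % N] for m in range(N))
                 for n in range(N)])

# Convolution theorem: DFT(f * g) equals DFT(f) times DFT(g) pointwise.
lhs = np.fft.fft(conv)
rhs = np.fft.fft(f) * np.fft.fft(g)
print(np.allclose(lhs, rhs))  # True
```

The same identity is what makes FFT-based fast convolution possible, and its generalizations are exactly the Peter–Weyl-style statements discussed above.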
Other branches Study of the eigenvalues and eigenvectors of the Laplacian on domains, manifolds, and (to a lesser extent) graphs is also considered a branch of harmonic analysis. See, e.g., hearing the shape of a drum. Harmonic analysis on Euclidean spaces deals with properties of the Fourier transform on Rn that have no analog on general groups, for example the fact that the Fourier transform is rotation-invariant. Decomposing the Fourier transform into its radial and spherical components leads to topics such as Bessel functions and spherical harmonics. Harmonic analysis on tube domains is concerned with generalizing properties of Hardy spaces to higher dimensions. Applied harmonic analysis Many applications of harmonic analysis in science and engineering begin with the idea or hypothesis that a phenomenon or signal is composed of a sum of individual oscillatory components. Ocean tides and vibrating strings are common and simple examples. The theoretical approach is often to try to describe the system by a differential equation or system of equations to predict the essential features, including the amplitude, frequency, and phases of the oscillatory components. The specific equations depend on the field, but theories generally try to select equations that represent major principles that are applicable. The experimental approach is usually to acquire data that accurately quantifies the phenomenon. For example, in a study of tides, the experimentalist would acquire samples of water depth as a function of time at intervals closely spaced enough to see each oscillation and over a long enough duration that multiple oscillatory periods are likely included. In a study on vibrating strings, it is common for the experimentalist to acquire a sound waveform sampled at a rate at least twice that of the highest frequency expected and for a duration many times the period of the lowest frequency expected.
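The workflow just described can be sketched in a few lines of NumPy. Everything below is an illustrative assumption (a synthetic 55 Hz string-like signal with two harmonics, a 4 kHz sample rate, a two-second record) rather than data from the article:

```python
import numpy as np

# Synthesize two seconds of a signal with a 55 Hz fundamental plus
# harmonics at 110 Hz and 165 Hz, sampled well above twice the highest
# frequency present, then locate the spectral peaks with an FFT.
fs = 4000                                # sample rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)          # two-second record
signal = (1.00 * np.sin(2 * np.pi * 55 * t)
          + 0.50 * np.sin(2 * np.pi * 110 * t)
          + 0.25 * np.sin(2 * np.pi * 165 * t))

spectrum = np.abs(np.fft.rfft(signal))            # magnitude spectrum
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)  # bin frequencies in Hz

# Keep frequencies whose magnitude is at least 10% of the largest peak.
peaks = freqs[spectrum > 0.1 * spectrum.max()]
print(peaks)  # the fundamental and its harmonics: [ 55. 110. 165.]
```

In a real measurement the peaks would sit on a noise floor, and this simple thresholding would be replaced by proper peak detection, but the structure of the analysis is the same as in the tidal and vibrating-string examples above.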
For example, the top signal at the right is a sound waveform of a bass guitar playing an open string corresponding to an A note with a fundamental frequency of 55 Hz. The waveform appears oscillatory, but it is more complex than a simple sine wave, indicating the presence of additional waves. The different wave components contributing to the sound can be revealed by applying a mathematical analysis technique known as the Fourier transform, the result of which is shown in the lower figure. Note that there is a prominent peak at 55 Hz, but that there are other peaks at 110 Hz, 165 Hz, and at other frequencies corresponding to integer multiples of 55 Hz. In this case, 55 Hz is identified as the fundamental frequency of the string vibration, and the integer multiples are known as harmonics. See also: Convergence of Fourier series; Fourier analysis, for computing periodicity in evenly spaced data; Harmonic (mathematics); Least-squares spectral analysis, for computing periodicity in unevenly spaced data; Spectral density estimation; Tate's thesis.
https://en.wikipedia.org/wiki/Home%20run
Home run
In baseball, a home run (abbreviated HR) is scored when the ball is hit in such a way that the batter is able to circle the bases and reach home safely in one play without any errors being committed by the defensive team in the process. In modern baseball, the feat is typically achieved by hitting the ball over the outfield fence between the foul poles (or making contact with either foul pole) without first touching the ground, resulting in an automatic home run. There is also the "inside-the-park" home run where the batter reaches home safely while the baseball is in play on the field. When a home run is scored, the batter is also credited with a hit and a run scored, and an RBI for each runner that scores, including himself. Likewise, the pitcher is recorded as having given up a hit and a run, with additional runs charged for each runner that scores other than the batter. Home runs are among the most popular aspects of baseball and, as a result, prolific home run hitters are usually the most popular among fans and consequently the highest paid by teams—hence the old saying, "Home run hitters drive Cadillacs, and singles hitters drive Fords" (coined, circa 1948, by veteran pitcher Fritz Ostermueller, by way of mentoring his young teammate, Ralph Kiner). Nicknames for a home run include "homer", "round tripper", "four-bagger", "big fly", "dinger", "long ball", "jack", "shot"/"moon shot", "bomb", and "blast", while a player hitting a home run may be said to have "gone deep" or "gone yard". Types of home runs Out of the park In modern times a home run is most often scored when the ball is hit over the outfield wall between the foul poles (in fair territory) before it touches the ground (in flight), and without being caught or deflected back onto the field by a fielder. A batted ball is also a home run if it touches either a foul pole or its attached screen before touching the ground, as the foul poles are by definition in fair territory. 
Additionally, many major-league ballparks have ground rules stating that a batted ball in flight that strikes a specified location or fixed object is a home run; this usually applies to objects that are beyond the outfield wall but are located such that it may be difficult for an umpire to judge. In professional baseball, a batted ball that goes over the outfield wall after touching the ground (i.e. a ball that bounces over the outfield wall) becomes an automatic double. This is colloquially referred to as a "ground rule double" even though it is uniform across all of Major League Baseball, per MLB rules 5.05(a)(6) through 5.05(a)(9). A fielder is allowed to reach over the wall to attempt to catch the ball as long as his feet are on or over the field during the attempt, and if the fielder successfully catches the ball while it is in flight the batter is out, even if the ball had already passed the vertical plane of the wall. However, since the fielder is not part of the field, a ball that bounces off a fielder (including his glove) and over the wall without touching the ground is still a home run. A fielder may not deliberately throw his glove, cap, or any other equipment or apparel to stop or deflect a fair ball, and an umpire may award a home run to the batter if a fielder does so on a ball that, in the umpire's judgment, would have otherwise been a home run (this is rare in modern professional baseball). A home run accomplished in any of the above manners is an automatic home run. The ball is dead, even if it rebounds back onto the field (e.g., from striking a foul pole), and the batter and any preceding runners cannot be put out at any time while running the bases. 
However, if one or more runners fail to touch a base or one runner passes another before reaching home plate, that runner or runners can be called out on appeal, though in the case of not touching a base a runner can go back and touch it if doing so won't cause them to be passed by another preceding runner and they have not yet touched the next base (or home plate in the case of missing third base). This stipulation is in Approved Ruling (2) of Rule 7.10(b). Inside-the-park home run An inside-the-park home run occurs when a batter hits the ball into play and is able to circle the bases before the fielders can put him out. Unlike with an outside-the-park home run, the batter-runner and all preceding runners are liable to be put out by the defensive team at any time while running the bases. This can only happen if the ball does not leave the ballfield. In the early days of baseball, outfields were much more spacious, reducing the likelihood of an over-the-fence home run, while increasing the likelihood of an inside-the-park home run, as a ball getting past an outfielder had more distance to roll before a fielder could track it down. Modern outfields are much less spacious and more uniformly designed than in the game's early days. Therefore, inside-the-park home runs are now rare. They usually occur when a fast runner hits the ball deep into the outfield and the ball bounces in an unexpected direction away from the nearest outfielder (e.g., off a divot in the grass or off the outfield wall), the nearest outfielder is injured on the play and cannot get to the ball, or an outfielder misjudges the flight of the ball in a way that he cannot quickly recover from the mistake (e.g., by diving and missing). The speed of the runner is crucial as even triples are relatively rare in most modern ballparks. If any defensive play on an inside-the-park home run is labeled an error by the official scorer, a home run is not scored.
Instead, it is scored as a single, double, or triple, and the batter-runner and any applicable preceding runners are said to have taken all additional bases on error. All runs scored on such a play, however, still count. An example of an unexpected bounce occurred during the 2007 Major League Baseball All-Star Game at AT&T Park in San Francisco on July 10, 2007. Ichiro Suzuki of the American League team hit a fly ball that caromed off the right-center field wall in the opposite direction from where National League right fielder Ken Griffey Jr. was expecting it to go. By the time the ball was relayed, Ichiro had already crossed the plate standing up. This was the first inside-the-park home run in All-Star Game history, and led to Suzuki being named the game's Most Valuable Player. Number of runs batted in Home runs are often characterized by the number of runners on base at the time. A home run hit with the bases empty is seldom called a "one-run homer", but rather a solo home run, solo homer, or "solo shot". With one runner on base, two runs are scored (the baserunner and the batter) and thus the home run is often called a two-run homer or two-run shot. Similarly, a home run with two runners on base is a three-run homer or three-run shot. The term "four-run homer" is seldom used; instead, it is nearly always called a "grand slam". Hitting a grand slam is the best possible result for the batter's turn at bat and the worst possible result for the pitcher and his team. Grand slam A grand slam occurs when the bases are "loaded" (that is, there are base runners standing at first, second, and third base) and the batter hits a home run. According to The Dickson Baseball Dictionary, the term originated in the card game of contract bridge. 
An inside-the-park grand slam is a grand slam that is also an inside-the-park home run, a home run without the ball leaving the field, and it is very rare, due to the relative rarity of loading the bases along with the significant rarity (nowadays) of inside-the-park home runs. On July 25, 1956, Roberto Clemente became the only MLB player to have ever scored a walk-off inside-the-park grand slam in a 9–8 Pittsburgh Pirates win over the Chicago Cubs, at Forbes Field. On April 23, 1999, Fernando Tatís made history by hitting two grand slams in one inning, both against Chan Ho Park of the Los Angeles Dodgers. With this feat, Tatís also set a Major League record with 8 RBI in one inning. On July 29, 2003, against the Texas Rangers, Bill Mueller of the Boston Red Sox became the only player in major league history to hit two grand slams in one game from opposite sides of the plate; he hit three home runs in that game, and his two grand slams were in consecutive at-bats. On August 25, 2011, the New York Yankees became the first team to hit three grand slams in one game vs the Oakland A's. The Yankees eventually won the game 22–9, after trailing 7–1. Specific situation home runs These types of home runs are characterized by the specific game situation in which they occur, and can theoretically occur on either an outside-the-park or inside-the-park home run. Walk-off home run A walk-off home run is a home run hit by the home team in the bottom of the ninth inning, any extra inning, or other scheduled final inning, which gives the home team the lead and thereby ends the game. The term is attributed to Hall of Fame relief pitcher Dennis Eckersley, so named because after the run is scored, the losing team has to "walk off" the field. Two World Series have ended via the "walk-off" home run. 
The first was the 1960 World Series when Bill Mazeroski of the Pittsburgh Pirates hit a ninth inning solo home run in the seventh game of the series off New York Yankees pitcher Ralph Terry to give the Pirates the World Championship. The second time was the 1993 World Series when Joe Carter of the Toronto Blue Jays hit a ninth inning three-run home run off Philadelphia Phillies pitcher Mitch Williams in Game 6 of the series, to help the Toronto Blue Jays capture their second World Series Championship in a row. Such a home run can also be called a "sudden death" or "sudden victory" home run. That usage has lessened as "walk-off home run" has gained favor. Along with Mazeroski's 1960 shot, the most famous walk-off or sudden-death homer would probably be the "Shot Heard 'Round the World" hit by Bobby Thomson to win the 1951 National League pennant for the New York Giants, along with many other home runs that ended some of the most important and suspenseful baseball games. A walk-off home run over the fence is an exception to baseball's one-run rule. Normally if the home team is tied or behind in the ninth or extra innings, the game ends as soon as the home team scores enough runs to achieve a lead. If the home team has two outs in the inning, and the game is tied, the game will officially end either the moment the batter successfully reaches first base or the moment the runner touches home plate—whichever happens last. However, this is superseded by the "ground rule", which provides automatic doubles (when a ball-in-play hits the ground first then leaves the playing field) and home runs (when a ball-in-play leaves the playing field without ever touching the ground). In the latter case, all base runners including the batter are allowed to cross the plate. Leadoff home run A leadoff home run is a home run hit by the first batter of a team, the leadoff hitter of the first inning of the game.
In MLB, Rickey Henderson holds the career record with 81 leadoff home runs. Craig Biggio holds the National League career record with 53, third overall behind Henderson and Alfonso Soriano, who hit 54. As of 2018, Ian Kinsler held the career record among active players, with 48 leadoff home runs, which also ranked him fourth all-time. In 1996, Brady Anderson set a Major League record by hitting a leadoff home run in four consecutive games. Back-to-back When two consecutive batters each hit a home run, this is described as back-to-back home runs. It is still considered back-to-back even if both batters hit their home runs off different pitchers. A third batter hitting a home run is commonly referred to as back-to-back-to-back. Four home runs in a row by consecutive batters have only occurred ten times in the history of Major League Baseball. Following convention, this is called back-to-back-to-back-to-back. The most recent occurrence was on August 16, 2020, when the Chicago White Sox hit four in a row against the St. Louis Cardinals. Yoan Moncada, Yasmani Grandal, José Abreu and Eloy Jiménez hit consecutive home runs during the fifth inning off relief pitcher Roel Ramírez, who was making his major league debut. On June 9, 2019, the Washington Nationals hit four in a row against the San Diego Padres in Petco Park as Howie Kendrick, Trea Turner, Adam Eaton and Anthony Rendon homered off pitcher Craig Stammen. Stammen became the fifth pitcher to surrender back-to-back-to-back-to-back home runs, following Paul Foytack on July 31, 1963, Chase Wright on April 22, 2007, Dave Bush on August 10, 2010, and Michael Blazek on July 27, 2017. On August 14, 2008, the Chicago White Sox defeated the Kansas City Royals 9–2. In this game, Jim Thome, Paul Konerko, Alexei Ramírez, and Juan Uribe hit back-to-back-to-back-to-back home runs in that order. Thome, Konerko, and Ramírez hit their homers off Joel Peralta, while Uribe hit his off Rob Tejeda.
The next batter, veteran backstop Toby Hall, tried in vain to hit the ball as far as possible, but his effort resulted in a strikeout. On April 22, 2007, the Boston Red Sox were trailing the New York Yankees 3–0 when Manny Ramirez, J. D. Drew, Mike Lowell and Jason Varitek hit consecutive home runs to put them up 4–3. They eventually went on to win the game 7–6 after a three-run home run by Mike Lowell in the bottom of the seventh inning. On September 18, 2006, trailing 9–5 to the San Diego Padres in the ninth inning, Jeff Kent, J. D. Drew, Russell Martin, and Marlon Anderson of the Los Angeles Dodgers hit back-to-back-to-back-to-back home runs to tie the game. After giving up a run in the top of the tenth, the Dodgers won the game in the bottom of the tenth, on a walk-off two-run home run by Nomar Garciaparra. J. D. Drew has been part of two different sets of back-to-back-to-back-to-back home runs. In both occurrences, his homer was the second of the four. On September 30, 1997, in the sixth inning of Game One of the American League Division Series between the New York Yankees and Cleveland Indians, Tim Raines, Derek Jeter and Paul O'Neill hit back-to-back-to-back home runs for the Yankees. Raines' home run tied the game. New York went on to win 8–6. This was the first occurrence of three home runs in a row ever in postseason play. The Boston Red Sox repeated the feat in Game Four of the 2007 American League Championship Series, also against the Indians. The Indians returned the favor in Game One of the 2016 American League Division Series. Twice in MLB history, two brothers have hit back-to-back home runs: first on September 15, 1938, when Lloyd Waner and Paul Waner performed the feat, and again on April 23, 2013, when brothers Melvin Upton Jr. (formerly B.J. Upton) and Justin Upton did so. Simple back-to-back home runs are a relatively frequent occurrence.
If a pitcher gives up a homer, he might have his concentration broken and might alter his normal approach in an attempt to "make up for it" by striking out the next batter with some fastballs. Sometimes the next batter will be expecting that and will capitalize on it. A notable back-to-back home run of that type in World Series play involved "Babe Ruth's called shot" in 1932, which was accompanied by various Ruthian theatrics, yet the pitcher, Charlie Root, was allowed to stay in the game. He delivered just one more pitch, which Lou Gehrig drilled out of the park for a back-to-back shot, after which Root was removed from the game. In Game 3 of the 1976 NLCS, George Foster and Johnny Bench hit back-to-back homers in the last of the ninth off Ron Reed to tie the game. The Series-winning run was scored later in the inning. Another notable pair of back-to-back home runs occurred on September 14, 1990, when Ken Griffey Sr. and Ken Griffey Jr. hit back-to-back home runs, off Kirk McCaskill, the only father-and-son duo to do so in Major League history. On May 2, 2002, Bret Boone and Mike Cameron of the Seattle Mariners hit back-to-back home runs off of starter Jon Rauch in the first inning of a game against the Chicago White Sox. The Mariners batted around in the inning, and Boone and Cameron came up to bat against reliever Jim Parque with two outs, again hitting back-to-back home runs and becoming the only pair of teammates to hit back-to-back home runs twice in the same inning. On June 19, 2012, José Bautista and Colby Rasmus hit back-to-back home runs and back-to-back-to-back home runs with Edwin Encarnación for a lead change in each instance. On July 23, 2017, Whit Merrifield, Jorge Bonifacio, and Eric Hosmer of the Kansas City Royals hit back-to-back-to-back home runs in the fourth inning against the Chicago White Sox. The Royals went on to win the game 5–4. 
On June 20, 2018, George Springer, Alex Bregman, and José Altuve of the Houston Astros hit back-to-back-to-back home runs in the sixth inning against the Tampa Bay Rays. The Astros went on to win the game 5–1. On April 3, 2018, the St. Louis Cardinals began the game against the Milwaukee Brewers with back-to-back homers from Dexter Fowler and Tommy Pham. Then in the bottom of the ninth, with two outs and the Cardinals leading 4–3, Christian Yelich homered to tie the game, and Ryan Braun hit the next pitch for a walk-off homer. This is the only major league game to begin and end with back-to-back homers. On May 5, 2019, Eugenio Suarez, Jesse Winker and Derek Dietrich of the Cincinnati Reds hit back-to-back-to-back home runs on three straight pitches against Jeff Samardzija of the San Francisco Giants in the bottom of the first inning. On October 30, 2021, Dansby Swanson and Jorge Soler hit back-to-back home runs for the Atlanta Braves off Houston Astros pitcher Cristian Javier to give the Braves a 3–2 lead in the bottom of the seventh in Game 4 of the World Series. Consecutive home runs by one batter The record for consecutive home runs by a batter under any circumstances is four. Of the sixteen players (through 2012) who have hit four in one game, six have hit them consecutively. Twenty-eight other batters have hit four consecutive home runs across two games. Because bases on balls do not count as at-bats, walks do not break a string of consecutive home runs; Ted Williams holds the record for consecutive home runs across the most games, four in four games played, during September 17–22, 1957, for the Red Sox. Williams hit a pinch-hit homer on the 17th; walked as a pinch-hitter on the 18th; there was no game on the 19th; hit another pinch-homer on the 20th; homered and then was lifted for a pinch-runner after at least one walk on the 21st; and homered after at least one walk on the 22nd. All in all, he had four walks interspersed among his four homers.
In World Series play, Reggie Jackson hit a record three in one Series game, the final game (Game 6) in 1977. But those three were part of a much more impressive feat. He walked on four pitches in the second inning of Game 6, then hit his three home runs on the first pitch of his next three at bats, off three different pitchers (Hooton in the fourth inning, Sosa in the fifth, and Hough in the eighth). He had also hit one in his last at bat of the previous game, giving him four home runs on four consecutive swings. The four in a row set the record for consecutive homers across two Series games. In Game 3 of the World Series in 2011, Albert Pujols hit three home runs to tie the record shared by Babe Ruth and Reggie Jackson. The St. Louis Cardinals went on to win the World Series in Game 7 at Busch Stadium. In Game 1 of the World Series in 2012, Pablo Sandoval of the San Francisco Giants hit three home runs on his first three at-bats of the Series. Nomar Garciaparra holds the record for consecutive home runs in the shortest time in terms of innings: three homers in two innings, on July 23, 2002, for the Boston Red Sox. Home run cycle An offshoot of hitting for the cycle, a "home run cycle" is when a player hits a solo home run, two-run home run, three-run home run, and grand slam all in one game. This is an extremely rare feat, as it requires the batter not only to hit four home runs in the game, but also to hit the home runs with a specific number of runners already on base. This is largely dependent on circumstances outside of the player's control, such as teammates' ability to get on base, and the order in which the player comes to bat in any particular inning. A further variant of the home run cycle would be the "natural home run cycle", should a batter hit the home runs in the specific order listed above. A home run cycle has never occurred in MLB, which has only had 18 instances of a player hitting four home runs in a game.
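Put concretely, the home run cycle requirement amounts to a player's homers in one game covering every RBI value from 1 through 4. The sketch below makes that check explicit; the helper name `is_home_run_cycle` is illustrative, not from the source.

```python
def is_home_run_cycle(rbi_per_homer):
    """A home run cycle requires a solo homer (1 RBI), a two-run homer (2),
    a three-run homer (3), and a grand slam (4), in any order."""
    return {1, 2, 3, 4} <= set(rbi_per_homer)

# Four homers, but two of them two-run shots: not a cycle.
print(is_home_run_cycle([4, 2, 1, 2]))   # False
# One homer at each RBI value, in any order: a cycle.
print(is_home_run_cycle([3, 1, 4, 2]))   # True
```

A "natural" home run cycle would additionally require the list to be exactly in the order [1, 2, 3, 4].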
Though multiple home run cycles have been recorded in collegiate baseball, the only known home run cycle in a professional baseball game belongs to Tyrone Horne, playing for the Arkansas Travelers in a Double-A level Minor League Baseball game against the San Antonio Missions on July 27, 1998. Major league players have come close to hitting a home run cycle, a notable example being Scooter Gennett of the Cincinnati Reds on June 6, 2017, when he hit four home runs against the St. Louis Cardinals. He hit a grand slam in the third inning, a two-run home run in the fourth inning, a solo home run in the sixth inning, and a two-run home run in the eighth inning. He had an opportunity for a three-run home run in the first inning, but drove in one run with a single in that at bat. History In the early days of the game, when the ball was less lively and the ballparks generally had very large outfields, most home runs were of the inside-the-park variety. The first home run ever hit in the National League was by Ross Barnes of the Chicago White Stockings (now known as the Chicago Cubs), in 1876. The home "run" was literally descriptive. Home runs over the fence were rare, and only in ballparks where a fence was fairly close. Hitters were discouraged from trying to hit home runs, with the conventional wisdom being that if they tried to do so they would simply fly out. This was a serious concern in the 19th century, because in baseball's early days a ball caught after one bounce was still an out. The emphasis was on place-hitting and what is now called "manufacturing runs" or "small ball". The home run's place in baseball changed dramatically when the live-ball era began after World War I. First, the materials and manufacturing processes improved significantly, making the now-mass-produced, cork-centered ball somewhat more lively. 
Batters such as Babe Ruth and Rogers Hornsby took full advantage of rules changes that were instituted during the 1920s, particularly prohibition of the spitball, and the requirement that balls be replaced when worn or dirty. These changes resulted in the baseball being easier to see and hit, and easier to hit out of the park. Meanwhile, as the game's popularity boomed, more outfield seating was built, shrinking the size of the outfield and increasing the chances of a long fly ball resulting in a home run. The teams with the sluggers, typified by the New York Yankees, became the championship teams, and other teams had to change their focus from the "inside game" to the "power game" in order to keep up. Before 1931, Major League Baseball considered a fair ball that bounced over an outfield fence to be a home run. The rule was changed to require the ball to clear the fence on the fly, and balls that reached the seats on a bounce became automatic doubles (often referred to as ground rule doubles). The last "bounce" home run in MLB was hit by Al López of the Brooklyn Robins on September 12, 1930, at Ebbets Field. A carryover of the old rule is that if a player deflects a ball over the outfield fence in fair territory without it touching the ground, it is a home run, per MLB rule 5.05(a)(9). Additionally, MLB rule 5.05(a)(5) still stipulates that a ball hit over a fence in fair territory that is less than a minimum distance from home plate "shall entitle the batter to advance to second base only", as some early ballparks had short dimensions. Also until circa 1931, the ball had to go not only over the fence in fair territory, but it had to land in the bleachers in fair territory or still be visibly fair when disappearing from view. The rule stipulated "fair when last seen" by the umpires.
Photos from that era in ballparks, such as the Polo Grounds and Yankee Stadium, show ropes strung from the foul poles to the back of the bleachers, or a second "foul pole" at the back of the bleachers, in a straight line with the foul line, as a visual aid for the umpire. Ballparks still use a visual aid much like the ropes; a net or screen attached to the foul poles on the fair side has replaced ropes. As with American football, where a touchdown once required a literal "touch down" of the ball in the end zone but now only requires the "breaking of the [vertical] plane" of the goal line, in baseball the ball need only "break the plane" of the fence in fair territory (unless the ball is caught by a player who is in play, in which case the batter is called out). Babe Ruth's 60th home run in 1927 was somewhat controversial, because it landed barely in fair territory in the stands down the right field line. Ruth lost a number of home runs in his career due to the when-last-seen rule. Bill Jenkinson, in The Year Babe Ruth Hit 104 Home Runs, estimates that Ruth lost at least 50 and as many as 78 in his career due to this rule. Further, the rules once stipulated that an over-the-fence home run in a sudden-victory situation would only count for as many bases as was necessary to "force" the winning run home. For example, if a team trailed by two runs with the bases loaded, and the batter hit a fair ball over the fence, it only counted as a triple, because the runner immediately ahead of him had technically already scored the game-winning run. That rule was changed in the 1920s as home runs became increasingly frequent and popular. Babe Ruth's career total of 714 would have been one higher had that rule not been in effect in the early part of his career. Records Major League Baseball keeps running totals of all-time home runs by the team, including teams no longer active (prior to 1900) as well as by individual players. 
Gary Sheffield hit the 250,000th home run in MLB history with a grand slam on September 8, 2008. Sheffield had hit MLB's 249,999th home run against Gio González in his previous at-bat. The all-time, verified professional baseball record for career home runs for one player, excluding the U.S. Negro leagues during the era of segregation, is held by Sadaharu Oh. Oh spent his entire career playing for the Yomiuri Giants in Japan's Nippon Professional Baseball, later managing the Giants, the Fukuoka SoftBank Hawks and the 2006 World Baseball Classic Japanese team. Oh holds the all-time home run world record, having hit 868 home runs in his career. In Major League Baseball, the career record is 762, held by Barry Bonds, who broke Hank Aaron's record on August 7, 2007, when he hit his 756th home run at AT&T Park off pitcher Mike Bacsik. Only eight other major league players have hit as many as 600: Hank Aaron (755), Babe Ruth (714), Alex Rodriguez (696), Albert Pujols (679), Willie Mays (660), Ken Griffey Jr. (630), Jim Thome (612), and Sammy Sosa (609); Pujols holds the record for active MLB players. The single season record is 73, set by Barry Bonds in 2001. Other notable single season records were achieved by Babe Ruth who hit 60 in 1927, Roger Maris, with 61 home runs in 1961, Sammy Sosa who hit 66 in 1998, and Mark McGwire, who hit 70 in 1998. Negro league slugger Josh Gibson's Baseball Hall of Fame plaque says he hit "almost 800" home runs in his career. The Guinness Book of World Records lists Gibson's lifetime home run total at 800. Ken Burns' award-winning series, Baseball, states that his actual total may have been as high as 950. Gibson's true total is not known, in part due to inconsistent record keeping in the Negro leagues. The 1993 edition of the MacMillan Baseball Encyclopedia attempted to compile a set of Negro league records, and subsequent work has expanded on that effort. Those records demonstrate that Gibson and Ruth were of comparable power. 
The 1993 book had Gibson hitting 146 home runs in the 501 "official" Negro league games they were able to account for in his 17-year career, about 1 homer every 3.4 games. Babe Ruth, in 22 seasons (several of them in the dead-ball era), hit 714 in 2503 games, or 1 homer every 3.5 games. The large gap in the totals reflects the fact that Negro league clubs played far fewer league games and many more "barnstorming" or exhibition games during the course of a season than did the major league clubs of that era. Other legendary home run hitters include Jimmie Foxx, Mel Ott, Ted Williams, Mickey Mantle (who on September 10, 1960, mythically hit "the longest home run ever", although the estimated distance was measured after the ball stopped rolling), Reggie Jackson, Harmon Killebrew, Ernie Banks, Mike Schmidt, Dave Kingman, Sammy Sosa (who hit 60 or more home runs in a season three times), Ken Griffey Jr. and Eddie Mathews. In 1987, Joey Meyer of the minor league Denver Zephyrs hit the longest verifiable home run in professional baseball history, inside Denver's Mile High Stadium. On May 6, 1964, Chicago White Sox outfielder Dave Nicholson hit a home run officially measured at 573 feet that either bounced atop the left-field roof of Comiskey Park or entirely cleared it. Major League Baseball's longest verifiable home run was hit by Babe Ruth, to straightaway center field at Tiger Stadium (then called Navin Field, before it was double-decked), and landed nearly across the intersection of Trumbull and Cherry. The location where Hank Aaron's record 755th home run landed has been monumented in Milwaukee. The spot sits outside American Family Field, where the Milwaukee Brewers currently play. Similarly, the point where Aaron's 715th homer landed, upon breaking Ruth's career record in 1974, is marked in the Turner Field parking lot.
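The Gibson and Ruth per-game rates quoted above follow from simple division of games by home runs, using the figures as given in the text:

```python
# Figures as given in the text: Gibson 146 HR in 501 "official" Negro league
# games; Ruth 714 HR in 2503 major league games.
gibson_games_per_hr = 501 / 146
ruth_games_per_hr = 2503 / 714

print(round(gibson_games_per_hr, 1))  # about 1 homer every 3.4 games
print(round(ruth_games_per_hr, 1))    # about 1 homer every 3.5 games
```

The near-identical rates are the basis for the claim that the two sluggers were of comparable power despite the large gap in raw totals.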
A red-painted seat in Fenway Park marks the landing place of the 502-ft home run Ted Williams hit in 1946, the longest measured homer in Fenway's history; a red stadium seat mounted on the wall of the Mall of America in Bloomington, Minnesota, marks the landing spot of Harmon Killebrew's record 520-foot shot in old Metropolitan Stadium. May 2019 saw 1,135 MLB home runs, the most in any single month in Major League Baseball history. During this month, 44.5% of all runs scored on home runs, breaking the previous record of 42.3%. Instant replay Replays "to get the call right" had been used only sporadically in the past, but the use of instant replay to determine "boundary calls"—home runs and foul balls—was not officially allowed until 2008. In a game on May 31, 1999, involving the St. Louis Cardinals and Florida Marlins, a hit by Cliff Floyd of the Marlins was initially ruled a double, then a home run, then was changed back to a double when umpire Frank Pulli decided to review video of the play. The Marlins protested that video replay was not allowed, but while the National League office agreed that replay was not to be used in future games, it declined the protest on the grounds it was a judgment call, and the play stood. In November 2007, the general managers of Major League Baseball voted in favor of implementing instant replay reviews on boundary home run calls. The proposal limited the use of instant replay to determining whether a boundary/home run call is: a fair (home run) or foul ball; a live ball (ball hit a fence and rebounded onto the field), ground rule double (ball hit a fence before leaving the field), or home run (ball hit some object beyond the fence while in flight); or spectator interference or home run (spectator touched the ball after it broke the plane of the fence). On August 28, 2008, instant replay review became available in MLB for reviewing calls in accordance with the above proposal.
It was first utilized on September 3, 2008, in a game between the New York Yankees and the Tampa Bay Rays at Tropicana Field. Alex Rodriguez of the Yankees hit what appeared to be a home run, but the ball hit a catwalk behind the foul pole. It was at first called a home run, until Tampa Bay manager Joe Maddon argued the call and the umpires decided to review the play. After 2 minutes and 15 seconds, the umpires came back and ruled it a home run. About two weeks later, on September 19, also at Tropicana Field, a boundary call was overturned for the first time. In this case, Carlos Peña of the Rays was given a ground rule double in a game against the Minnesota Twins after an umpire believed a fan reached into the field of play to catch a fly ball in right field. The umpires reviewed the play, determined the fan did not reach over the fence, and reversed the call, awarding Peña a home run. Aside from the two aforementioned reviews at Tampa Bay, replay was used four more times in the 2008 MLB regular season: twice at Houston, once at Seattle, and once at San Francisco. The San Francisco incident is perhaps the most unusual. Bengie Molina, the Giants' catcher, hit what was first called a single. Molina was then replaced in the game by Emmanuel Burriss, a pinch-runner, before the umpires re-evaluated the call and ruled it a home run. In this instance, though, Molina was not allowed to return to the game to run out the home run, as he had already been replaced. Molina was credited with the home run and two RBIs, but the run scored was credited to Burriss instead. On October 31, 2009, in the fourth inning of Game 3 of the World Series, Alex Rodriguez hit a long fly ball that hit a camera protruding over the wall and into the field of play in deep right field. The ball ricocheted off the camera and re-entered the field, and the hit was initially ruled a double.
However, after watching the instant replay and consulting with each other, the umpires ruled the hit a home run, marking the first time an instant-replay home run was awarded in a playoff game. See also Babe Ruth Home Run Award Home Run Derby Joe Bauman Home Run Award Josh Gibson Legacy Award List of Major League Baseball annual home run leaders (by year) Major League Baseball single-season home run record Mel Ott Award The Year Babe Ruth Hit 104 Home Runs, 2007 non-fiction book Career achievements List of Major League Baseball players with 20 doubles, 20 triples, and 20 home runs in the same season 500 home run club List of Major League Baseball all-time leaders in home runs by pitchers List of Major League Baseball career home run leaders List of Major League Baseball players with a home run in their final major league at bat List of Major League Baseball players with a home run in their first major league at bat Other sports Six (cricket) References External links MLB's Home Run Leaders – batting statistics for over 16,000 players Baseball rules Batting statistics Baseball terminology
https://en.wikipedia.org/wiki/Harappa
Harappa
Harappa (; Urdu/) is an archaeological site in Punjab, Pakistan, about west of Sahiwal. The site takes its name from a modern village located near the former course of the Ravi River, which now runs to the north. The current village of Harappa is less than from the ancient site. Although modern Harappa has a legacy railway station from the British Raj period, it is a small crossroads town of 15,000 people today. The site of the ancient city contains the ruins of a Bronze Age fortified city, which was part of the Indus Valley Civilisation centred in Sindh and the Punjab, and then of the Cemetery H culture. The city is believed to have had as many as 23,500 residents and occupied about with clay brick houses at its greatest extent during the Mature Harappan phase (2600 BC – 1900 BC), which is considered large for its time. Per the archaeological convention of naming a previously unknown civilisation after its first excavated site, the Indus Valley Civilisation is also called the Harappan Civilisation. The ancient city of Harappa was heavily damaged under British rule, when bricks from the ruins were used as track ballast in the construction of the Lahore–Multan Railway. In 2005, a controversial amusement park scheme at the site was abandoned when builders unearthed many archaeological artefacts during the early stages of building work. History The Harappan Civilisation has its earliest roots in cultures such as that of Mehrgarh, approximately 6000 BC. The two greatest cities, Mohenjo-daro and Harappa, emerged circa 2600 BC along the Indus River valley in Punjab and Sindh. The civilisation, with a possible writing system, urban centres, and a diversified social and economic system, was rediscovered in the 1920s after excavations at Mohenjo-daro in Sindh near Larkana, and at Harappa, in west Punjab south of Lahore.
A number of other sites stretching from the Himalayan foothills in east Punjab, India in the north, to Gujarat in the south and east, and to Pakistani Balochistan in the west have also been discovered and studied. Although the archaeological site at Harappa was damaged in 1857 when engineers constructing the Lahore–Multan railroad used brick from the Harappa ruins for track ballast, an abundance of artefacts has nevertheless been found. Because of falling sea levels, certain regions were abandoned in the late Harappan period. Towards the end, the Harappan civilisation lost features such as writing and hydraulic engineering. As a result, the Ganges Valley settlements gained prominence and Ganges cities developed. Culture and economy The Indus Valley civilization was basically an urban culture sustained by surplus agricultural production and commerce, the latter including trade with Elam and Sumer in southern Mesopotamia. Both Mohenjo-Daro and Harappa are generally characterised as having "differentiated living quarters, flat-roofed brick houses, and fortified administrative or religious centers." Although such similarities have given rise to arguments for the existence of a standardised system of urban layout and planning, the similarities are largely due to the presence of a semi-orthogonal type of civic layout, and a comparison of the layouts of Mohenjo-Daro and Harappa shows that they are in fact arranged in quite dissimilar fashion. The weights and measures of the Indus Valley Civilisation, on the other hand, were highly standardised, and conform to a set scale of gradations. Distinctive seals were used, among other applications, perhaps for the identification of property and shipment of goods. Although copper and bronze were in use, iron was not yet employed.
"Cotton was woven and dyed for clothing; wheat, rice, and a variety of vegetables and fruits were cultivated; and a number of animals, including the humped bull, was domesticated," as well as "fowl for fighting". Wheel-made pottery—some of it adorned with animal and geometric motifs—has been found in profusion at all the major Indus sites. A centralised administration for each city, though not the whole civilisation, has been inferred from the revealed cultural uniformity; however, it remains uncertain whether authority lay with a commercial oligarchy. Harappans had many trade routes along the Indus River that went as far as the Persian Gulf, Mesopotamia, and Egypt. Some of the most valuable things traded were carnelian and lapis lazuli. What is clear is that Harappan society was not entirely peaceful, with the human skeletal remains demonstrating some of the highest rates of injury (15.5%) found in South Asian prehistory. Paleopathological analysis demonstrated that leprosy and tuberculosis were present at Harappa, with the highest prevalence of both disease and trauma present in the skeletons from Area G (an ossuary located south-east of the city walls). Furthermore, rates of craniofacial trauma and infection increased through time demonstrating that the civilisation collapsed amid illness and injury. The bioarchaeologists who examined the remains have suggested that the combined evidence for differences in mortuary treatment and epidemiology indicate that some individuals and communities at Harappa were excluded from access to basic resources like health and safety. Trade The Harappans had traded with ancient Mesopotamia, especially Elam, among other areas. Cotton textiles and agricultural products were the primary trading objects. The Harappan merchants also had procurement colonies in Mesopotamia which also served as trading centres. 
Archaeology The excavators of the site have proposed the following chronology of Harappa's occupation: Ravi Aspect of the Hakra phase, c. 3300 – 2800 BC. Kot Dijian (Early Harappan) phase, c. 2800 – 2600 BC. Harappan Phase, c. 2600 – 1900 BC. Transitional Phase, c. 1900 – 1800 BC. Late Harappan Phase, c. 1800 – 1300 BC. By far the most exquisite and obscure artefacts unearthed to date are the small, square steatite (soapstone) seals engraved with human or animal motifs. A large number of seals have been found at such sites as Mohenjo-Daro and Harappa. Many bear pictographic inscriptions generally thought to be a form of writing or script. Despite the efforts of philologists from all parts of the world, and despite the use of modern cryptographic analysis, the signs remain undeciphered. It is also unknown if they reflect proto-Dravidian or other non-Vedic language(s). The ascribing of Indus Valley Civilisation iconography and epigraphy to historically known cultures is extremely problematic, in part due to the rather tenuous archaeological evidence for such claims, as well as the projection of modern South Asian political concerns onto the archaeological record of the area. This is especially evident in the radically varying interpretations of Harappan material culture as seen from both Pakistan- and India-based scholars. In February 2006 a school teacher in the village of Sembian-Kandiyur in Tamil Nadu discovered a stone celt (tool) with an inscription estimated to be up to 3,500 years old. Indian epigraphist Iravatham Mahadevan postulated that the four signs were in the Indus script and called the find "the greatest archaeological discovery of a century in Tamil Nadu". Based on this evidence he goes on to suggest that the language used in the Indus Valley was of Dravidian origin. 
However, the absence of a Bronze Age in South India, contrasted with the knowledge of bronze-making techniques in the Indus Valley cultures, calls into question the validity of this hypothesis. The late Harappan period extended to areas as far afield as Daimabad in Maharashtra and the Badakshan region of Afghanistan; the area covered by this civilisation would have been very large. Early symbols similar to Indus script Clay and stone tablets unearthed at Harappa, which were carbon-dated to 3300–3200 BC, contain trident-shaped and plant-like markings. "It is a big question as to if we can call what we have found true writing, but we have found symbols that have similarities to what became Indus script," said Dr. Richard Meadow of Harvard University, Director of the Harappa Archeological Research Project. This primitive writing is placed slightly earlier than the primitive writings of the Sumerians of Mesopotamia, dated c. 3100 BC. These markings have similarities to what later became the Indus script. Notes The earliest radiocarbon dating mentioned on the web is 2725±185 BC (uncalibrated) or 3338, 3213, 3203 BC calibrated, giving a midpoint of 3251 BC. Kenoyer, Jonathan Mark (1991) Urban process in the Indus Tradition: A preliminary report. In Harappa Excavations, 1986–1990: A multidisciplinary approach to Second Millennium urbanism, edited by Richard H. Meadow: 29–59. Monographs in World Archaeology No. 3. Prehistory Press, Madison, Wisconsin. Periods 4 and 5 are not dated at Harappa. The termination of the Harappan tradition at Harappa falls between 1900 and 1500 BC. Mohenjo-daro is another major city of the same period, located in Sindh province of Pakistan. One of its most well-known structures is the Great Bath of Mohenjo-Daro.
See also Charles Masson – First European explorer of Harappa Mohenjo-daro Mehrgarh Ganeriwala Dholavira Lothal Harappan architecture Mandi, Uttar Pradesh Sheri Khan Tarakai Sokhta Koh Kalibangan Rakhigarhi Taxila References External links Harappa.com Harappa.info "Harappa Town Planning"-article by Dr S. Srikanta Sastri Art of the Bronze Age: Southeastern Iran, Western Central Asia, and the Indus Valley, an exhibition catalogue from The Metropolitan Museum of Art (fully available online as PDF), which contains material on Harappa 4th-millennium BC architecture Archaeological sites in Punjab, Pakistan History of Pakistan Bronze Age sites Populated places in Sahiwal District Indus Valley Civilisation Major Indus Valley Civilisation sites Former populated places in Pakistan Culture of Punjab, Pakistan Sahiwal District Tourist attractions in Sahiwal Ruins in Pakistan
https://en.wikipedia.org/wiki/Hendecasyllable
Hendecasyllable
In poetry, a hendecasyllable is a line of eleven syllables. The term "hendecasyllabic" is used to refer to two different poetic meters, the older of which is quantitative and used chiefly in classical (Ancient Greek and Latin) poetry and the newer of which is accentual and used in medieval and modern poetry. The term is often used when a line of iambic pentameter contains 11 syllables. In classical poetry The classical hendecasyllable is a quantitative meter used in Ancient Greece in Aeolic verse and in scolia, and later by the Roman poets Catullus and Martial. Each line has eleven syllables; hence the name, which comes from the Greek word for eleven. The heart of the line is the choriamb (– ⏑ ⏑ –). There are three different versions. The pattern of the “Phalaecian” (Latin: hendecasyllabus phalaecius) is as follows (using “–” for a long syllable, “⏑” for a short and “⏓” for an “anceps” or variable syllable): ⏓ ⏓ – ⏑ ⏑ – ⏑ – ⏑ – ⏓ (where ⏓ ⏓ is one of – ⏑ or – – or ⏑ –) Another form of hendecasyllabic verse is the “Alcaic” (Latin: hendecasyllabus alcaicus; used in the Alcaic stanza), which has the pattern: ⏓ – ⏑ – ⏓ – ⏑ ⏑ – ⏑ – The third form of hendecasyllabic verse is the “Sapphic” (Latin: hendecasyllabus sapphicus; so named for its use in the Sapphic stanza), with the pattern: – ⏓ – ⏓ – ⏑ ⏑ – ⏑  – – Forty-three of Catullus's poems are hendecasyllabic; for an example, see Catullus 1. The metre has been imitated in English, notably by Alfred Tennyson, Swinburne, and Robert Frost, cf. “For Once Then Something.” Contemporary American poets Annie Finch (“Lucid Waking”) and Patricia Smith (“The Reemergence of the Noose”) have published recent examples. Poets wanting to capture the hendecasyllabic rhythm in English have simply transposed the pattern into its accentual-syllabic equivalent: – ⏑ |– ⏑ |– ⏑ ⏑ |– ⏑ |– ⏑ |, or trochee/trochee/dactyl/trochee/trochee, so that the long/short pattern becomes a stress/unstress pattern. 
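The three quantitative patterns above can be checked mechanically. This small sketch uses ASCII stand-ins for the metrical symbols ('-' for a long syllable, 'u' for a short, 'x' for an anceps) and simply confirms that each pattern has the eleven positions the name promises:

```python
# Classical hendecasyllabic patterns from the text, in ASCII notation:
# '-' = long syllable, 'u' = short syllable, 'x' = anceps (variable).
PATTERNS = {
    "Phalaecian": "x x - u u - u - u - x",
    "Alcaic":     "x - u - x - u u - u -",
    "Sapphic":    "- x - x - u u - u - -",
}

for name, pattern in PATTERNS.items():
    positions = pattern.split()
    # "Hendeca-" is Greek for eleven: every variant has 11 positions.
    assert len(positions) == 11, name
    print(f"{name}: {len(positions)} syllables")
```

Note that all three variants share the choriamb (- u u -) somewhere in the line, which the text identifies as the heart of the metre.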
Tennyson, however, maintained the quantitative features of the metre: O you chorus of indolent reviewers, Irresponsible, indolent reviewers, Look, I come to the test, a tiny poem All composed in a metre of Catullus... (“Hendecasyllabics”) In Italian poetry The hendecasyllable () is the principal metre in Italian poetry. Its defining feature is a constant stress on the tenth syllable, so that the number of syllables in the verse may vary, equaling eleven in the usual case where the final word is stressed on the penultimate syllable. The verse also has a stress preceding the caesura, on either the fourth or sixth syllable. The first case is called endecasillabo a minore, or lesser hendecasyllable, and has the first hemistich equivalent to a quinario; the second is called endecasillabo a maiore, or greater hendecasyllable, and has a settenario as the first hemistich. There is a strong tendency for hendecasyllabic lines to end with feminine rhymes (causing the total number of syllables to be eleven, hence the name), but ten-syllable lines ("Ciò che 'n grembo a Benaco star non può") and twelve-syllable lines ("Ergasto mio, perché solingo e tacito") are encountered as well. Lines of ten or twelve syllables are more common in rhymed verse; versi sciolti, which rely more heavily on a pleasant rhythm for effect, tend toward a stricter eleven-syllable format. As a novelty, lines longer than twelve syllables can be created by the use of certain verb forms and affixed enclitic pronouns ("Ottima è l'acqua; ma le piante abbeverinosene."). Additional accents beyond the two mandatory ones provide rhythmic variation and allow the poet to express thematic effects. A line in which accents fall consistently on even-numbered syllables ("Al còr gentìl rempàira sèmpre amóre") is called iambic (giambico) and may be a greater or lesser hendecasyllable. This line is the simplest, commonest and most musical but may become repetitive, especially in longer works. 
Lesser hendecasyllables often have an accent on the seventh syllable ("fàtta di giòco in figùra d'amóre"). Such a line is called dactylic (dattilico) and its less pronounced rhythm is considered particularly appropriate for representing dialogue. Another kind of greater hendecasyllable has an accent on the third syllable ("Se Mercé fosse amìca a' miei disìri") and is known as anapestic (anapestico). This sort of line has a crescendo effect and gives the poem a sense of speed and fluidity. It is considered improper for the lesser hendecasyllable to use a word accented on its antepenultimate syllable (parola sdrucciola) for its mid-line stress. A line like "Più non sfavìllano quegli òcchi néri", which delays the caesura until after the sixth syllable, is not considered a valid hendecasyllable. Most classical Italian poems are composed in hendecasyllables, including the major works of Dante, Francesco Petrarca, Ludovico Ariosto, and Torquato Tasso. The rhyme systems used include terza rima, ottava, sonnet and canzone, and some verse forms use a mixture of hendecasyllables and shorter lines. From the early 16th century onward, hendecasyllables are often used without a strict system, with few or no rhymes, both in poetry and in drama. This is known as verso sciolto. An early example is Le Api ("the bees") by Giovanni di Bernardo Rucellai, written around 1517 and published in 1525, which begins: Mentr'era per cantare i vostri doni Con altre rime, o Verginette caste, Vaghe Angelette delle erbose rive, Preso dal sonno, in sul spuntar dell'Alba M'apparve un coro della vostra gente, E dalla lingua, onde s'accoglie il mele, Sciolsono in chiara voce este parole: O spirto amici, che dopo mill'anni, E cinque cento, rinovar ti piace E le nostre fatiche, e i nostri studi, Fuggi le rime, e'l rimbombar sonoro. Like other early Italian-language tragedies, the Sophonisba of Gian Giorgio Trissino (1515) is in blank hendecasyllables. 
Later examples can be found in the Canti of Giacomo Leopardi, where hendecasyllables are alternated with settenari. In Polish poetry The hendecasyllabic metre (Polish: jedenastozgłoskowiec) was very popular in Polish poetry, especially in the seventeenth and eighteenth centuries, owing to strong Italian literary influence. It was used by Jan Kochanowski, Piotr Kochanowski (who translated Jerusalem Delivered by Torquato Tasso), Sebastian Grabowiecki, Wespazjan Kochowski and Stanisław Herakliusz Lubomirski. The greatest Polish Romantic poet, Adam Mickiewicz, set his poem Grażyna in this measure. The Polish hendecasyllable is widely used when translating English blank verse. The eleven-syllable line is normally defined by primary stresses on the fourth and tenth syllables and a caesura after the fifth syllable. Only rarely is it fully iambic. A popular form of Polish literature that employs the hendecasyllable is the Sapphic stanza: 11/11/11/5. The Polish hendecasyllable is often combined with an 8-syllable line: 11a/8b/11a/8b. Such a stanza was used by Mickiewicz in his ballads, as in the following example. Ktokolwiek będziesz w nowogródzkiej stronie, Do Płużyn ciemnego boru Wjechawszy, pomnij zatrzymać twe konie, Byś się przypatrzył jezioru. (Świteź) In Portuguese poetry The hendecasyllable (Portuguese: hendecassílabo) is a common meter in Portuguese poetry. The best-known Portuguese poem composed in hendecasyllables is Luís de Camões' Lusiads, which begins as follows: As armas e os barões assinalados, Que da ocidental praia Lusitana, Por mares nunca de antes navegados, Passaram ainda além da Taprobana, Em perigos e guerras esforçados, Mais do que prometia a força humana, E entre gente remota edificaram Novo Reino, que tanto sublimaram In Portuguese, the hendecasyllable meter is often called "decasyllable" (decassílabo), even when the work in question uses overwhelmingly feminine rhymes (as is the case with the Lusiads). 
This is due to Portuguese prosody considering verses to end at the last stressed syllable, thus the aforementioned verses are effectively decasyllabic according to Portuguese scansion. In Spanish poetry The hendecasyllable (endecasílabo) is less pervasive in Spanish poetry than in Italian or Portuguese, but it is commonly used with Italianate verse forms like sonnets and ottava rima. An example of the latter is Alonso de Ercilla's epic La Araucana, which opens as follows: No las damas, amor, no gentilezas de caballeros canto enamorados, ni las muestras, regalos y ternezas de amorosos afectos y cuidados; mas el valor, los hechos, las proezas de aquellos españoles esforzados, que a la cerviz de Arauco no domada pusieron duro yugo por la espada. Spanish dramatists often use hendecasyllables in tandem with shorter lines like heptasyllables, as can be seen in Rosaura's opening speech from Calderón's La vida es sueño: Hipogrifo violento Que corriste parejas con el viento, ¿Dónde, rayo sin llama, Pájaro sin matiz, pez sin escama, Y bruto sin instinto Natural, al confuso laberinto Destas desnudas peñas Te desbocas, arrastras y despeñas? In English poetry The term "hendecasyllable" is sometimes used to describe a line of iambic pentameter with a feminine ending, as in the first line of John Keats's Endymion: "A thing of beauty is a joy for ever." See also hexasyllable octosyllable decasyllable dodecasyllable iambic pentameter The Italian hendecasyllable Raffaele Spongano, Nozioni ed esempi di metrica italiana, Bologna, R. Pàtron, 1966 Angelo Marchese, Dizionario di retorica e di stilistica, Milano, Mondadori, 1978 Mario Pazzaglia, Manuale di metrica italiana, Firenze, Sansoni, 1990 The Polish hendecasyllable Wiktor Jarosław Darasz, Mały przewodnik po wierszu polskim, Kraków 2003. References Types of verses Sonnet studies
https://en.wikipedia.org/wiki/Hebrides
Hebrides
The Hebrides (; , ; , "southern isles") are a Scottish archipelago off the west coast of the Scottish mainland. The islands fall into two main groups, based on their proximity to the mainland: the Inner and Outer Hebrides. These islands have a long history of occupation (dating back to the Mesolithic period), and the culture of the inhabitants has been successively influenced by the cultures of Celtic-speaking, Norse-speaking, and English-speaking peoples. This diversity is reflected in the various names given to the islands, which are derived from the different languages that have been spoken there at various points in their history. The Hebrides are where much of Scottish Gaelic literature and Gaelic music has historically originated. Today, the economy of the islands is dependent on crofting, fishing, tourism, the oil industry, and renewable energy. The Hebrides have less biodiversity than mainland Scotland, but a significant number of seals and seabirds. The islands have a combined area of approximately , and, , a combined population of around 45,000. Geology, geography and climate The Hebrides have a diverse geology, ranging in age from Precambrian strata that are amongst the oldest rocks in Europe, to Paleogene igneous intrusions. Raised shore platforms in the Hebrides have been identified as strandflats, possibly formed during the Pliocene period and later modified by the Quaternary glaciations. The Hebrides can be divided into two main groups, separated from one another by the Minch to the north and the Sea of the Hebrides to the south. The Inner Hebrides lie closer to mainland Scotland and include Islay, Jura, Skye, Mull, Raasay, Staffa and the Small Isles. There are 36 inhabited islands in this group. The Outer Hebrides form a chain of more than 100 islands and small skerries located about west of mainland Scotland. Among them, 15 are inhabited. The main inhabited islands include Lewis and Harris, North Uist, Benbecula, South Uist, and Barra. 
A complication is that there are various descriptions of the scope of the Hebrides. The Collins Encyclopedia of Scotland describes the Inner Hebrides as lying "east of the Minch". This definition would encompass all offshore islands, including those that lie in the sea lochs, such as and , which might not ordinarily be described as "Hebridean". However, no formal definition exists. In the past, the Outer Hebrides were often referred to as the Long Isle (). Today, they are also sometimes known as the Western Isles, although this phrase can also be used to refer to the Hebrides in general. The Hebrides have a cool, temperate climate that is remarkably mild and steady for such a northerly latitude, due to the influence of the Gulf Stream. In the Outer Hebrides, the average temperature is 6 °C (44 °F) in January and 14 °C (57 °F) in the summer. The average annual rainfall in Lewis is , and there are between 1,100 and 1,200 hours of sunshine per annum (13%). The summer days are relatively long, and May through August is the driest period. Etymology The earliest surviving written references to the islands were made circa 77 AD by Pliny the Elder in his Natural History: He states that there are 30 , and makes a separate reference to , which Watson (1926) concluded refers unequivocally to the Outer Hebrides. About 80 years after Pliny the Elder, in 140–150 AD, Ptolemy (drawing on accounts of the naval expeditions of ) writes that there are five (possibly meaning the Inner Hebrides) and . Later texts in classical Latin, by writers such as , use the forms and . The name (used by Ptolemy) may be pre-Celtic. Ptolemy calls Islay “”, and the use of the letter "p" suggests a Brythonic or Pictish tribal name, , because the root is not Gaelic. Woolf (2012) has suggested that may be "an Irish attempt to reproduce the word phonetically, rather than by translating it", and that the tribe's name may come from the root , meaning "horse". 
Watson (1926) also notes a possible relationship between , and the ancient Irish Ulaid tribal name , and also the personal name of a king (recorded in the Silva Gadelica). The names of other individual islands reflect their complex linguistic history. The majority are Norse or Gaelic, but the roots of several other names for Hebrides islands may have a pre-Celtic origin. Adomnán, a 7th-century abbot of Iona, records Colonsay as Colosus and Tiree as Ethica, and both of these may be pre-Celtic names. The etymology of Skye is complex and may also include a pre-Celtic root. Lewis is in Old Norse. Various suggestions have been made as to possible meanings of the name in Norse (for example, "song house"), but the name is not of Gaelic origin, and the Norse provenance is questionable. The earliest comprehensive written list of Hebridean island names was compiled by Donald Monro in 1549. This list also provides the earliest written reference to the names of some of the islands. The derivations of all the inhabited islands of the Hebrides and some of the larger uninhabited ones are listed below. Outer Hebrides Lewis and Harris is the largest island in Scotland and the third largest of the British Isles, after Great Britain and Ireland. It incorporates Lewis in the north and Harris in the south, both of which are frequently referred to as individual islands, although they are joined by a land border. The island does not have a single common name in either English or Gaelic and is referred to as "Lewis and Harris", "Lewis with Harris", "Harris with Lewis" etc. For this reason it is treated as two separate islands below. The derivation of Lewis may be pre-Celtic (see above) and the origin of Harris is no less problematic. In the Ravenna Cosmography, Erimon may refer to Harris (or possibly the Outer Hebrides as a whole). This word may derive from the ("desert"). The origin of Uist () is similarly unclear. 
Inner Hebrides There are various examples of earlier names for Inner Hebridean islands that were Gaelic, but these names have since been completely replaced. For example, Adomnán records Sainea, Elena, Ommon and Oideacha in the Inner Hebrides. These names presumably passed out of usage in the Norse era, and the locations of the islands they refer to are not clear. As an example of the complexity: Rona may originally have had a Celtic name, then later a similar-sounding Norse name, and then still later a name that was essentially Gaelic again, but with a Norse "øy" or "ey" ending. (See Rona, below.) Uninhabited islands The names of uninhabited islands follow the same general patterns as the inhabited islands. (See the list, below, of the ten largest islands in the Hebrides and their outliers.) The etymology of the name “St Kilda”, a small archipelago west of the Outer Hebrides, and the name of its main island, “Hirta,” is very complex. No saint is known by the name of Kilda, so various other theories have been proposed for the word's origin, which dates from the late 16th century. Haswell-Smith (2004) notes that the full name "St Kilda" first appears on a Dutch map dated 1666, and that it may derive from the Norse phrase ("sweet wellwater") or from a mistaken Dutch assumption that the spring was dedicated to a saint. ( is a tautological placename, consisting of the Gaelic and Norse words for well, i.e., "well well"). Similarly unclear is the origin of the Gaelic for "Hirta", , , or a name for the island that long pre-dates the name "St Kilda". Watson (1926) suggests that it may derive from the Old Irish word ("death"), possibly a reference to the often lethally dangerous surrounding sea. 
Maclean (1977) notes that an Icelandic saga about an early 13th-century voyage to Ireland refers to “the islands of ”, which means "stags" in Norse, and suggests that the outline of the island of Hirta resembles the shape of a stag, speculating that therefore the name “Hirta” may be a reference to the island's shape. The etymology of the names of small islands may be no less complex and elusive. In relation to , Robert Louis Stevenson believed that "black and dismal" was one translation of the name, noting that "as usual, in Gaelic, it is not the only one." History Prehistory The Hebrides were settled during the Mesolithic era around 6500 BC or earlier, after the climatic conditions improved enough to sustain human settlement. Occupation at a site on is dated to 8590 ±95 uncorrected radiocarbon years BP, which is amongst the oldest evidence of occupation in Scotland. There are many examples of structures from the Neolithic period, the finest example being the standing stones at Callanish, dating to the 3rd millennium BC. Cladh Hallan, a Bronze Age settlement on South Uist is the only site in the UK where prehistoric mummies have been found. Celtic era In 55 BC, the Greek historian Diodorus Siculus wrote that there was an island called Hyperborea (which means "beyond the North Wind"), where a round temple stood from which the moon appeared only a little distance above the earth every 19 years. This may have been a reference to the stone circle at Callanish. A traveller called Demetrius of Tarsus related to Plutarch the tale of an expedition to the west coast of Scotland in or shortly before 83 AD. He stated it was a gloomy journey amongst uninhabited islands, but he had visited one which was the retreat of holy men. He mentioned neither the druids nor the name of the island. The first written records of native life begin in the 6th century AD, when the founding of the kingdom of Dál Riata took place. 
This encompassed roughly what is now Argyll and Bute and Lochaber in Scotland and County Antrim in Ireland. The figure of Columba looms large in any history of Dál Riata, and his founding of a monastery on Iona ensured that the kingdom would be of great importance in the spread of Christianity in northern Britain. However, Iona was far from unique. Lismore in the territory of the Cenél Loairn, was sufficiently important for the death of its abbots to be recorded with some frequency and many smaller sites, such as on Eigg, Hinba, and Tiree, are known from the annals. North of Dál Riata, the Inner and Outer Hebrides were nominally under Pictish control, although the historical record is sparse. Hunter (2000) states that in relation to King Bridei I of the Picts in the sixth century: "As for Shetland, Orkney, Skye and the Western Isles, their inhabitants, most of whom appear to have been Pictish in culture and speech at this time, are likely to have regarded Bridei as a fairly distant presence.” Norwegian control Viking raids began on Scottish shores towards the end of the 8th century, and the Hebrides came under Norse control and settlement during the ensuing decades, especially following the success of Harald Fairhair at the Battle of in 872. In the Western Isles Ketill Flatnose may have been the dominant figure of the mid 9th century, by which time he had amassed a substantial island realm and made a variety of alliances with other Norse leaders. These princelings nominally owed allegiance to the Norwegian crown, although in practice the latter's control was fairly limited. Norse control of the Hebrides was formalised in 1098 when Edgar of Scotland formally signed the islands over to Magnus III of Norway. 
The Scottish acceptance of Magnus III as King of the Isles came after the Norwegian king had conquered Orkney, the Hebrides and the Isle of Man in a swift campaign earlier the same year, directed against the local Norwegian leaders of the various island petty kingdoms. By capturing the islands Magnus imposed a more direct royal control, although at a price. His skald Bjorn Cripplehand recorded that in Lewis "fire played high in the heaven" as "flame spouted from the houses" and that in the Uists "the king dyed his sword red in blood". The Hebrides were now part of the Kingdom of the Isles, whose rulers were themselves vassals of the Kings of Norway. This situation lasted until the partitioning of the Western Isles in 1156, at which time the Outer Hebrides remained under Norwegian control while the Inner Hebrides broke out under Somerled, the Norse-Gael kinsman of the Manx royal house. Following the ill-fated 1263 expedition of Haakon IV of Norway, the Outer Hebrides and the Isle of Man were yielded to the Kingdom of Scotland as a result of the 1266 Treaty of Perth. Although their contribution to the islands can still be found in personal and place names, the archaeological record of the Norse period is very limited. The best known find is the Lewis chessmen, which date from the mid 12th century. Scottish control As the Norse era drew to a close, the Norse-speaking princes were gradually replaced by Gaelic-speaking clan chiefs including the MacLeods of Lewis and Harris, Clan Donald and MacNeil of Barra. This transition did little to relieve the islands of internecine strife although by the early 14th century the MacDonald Lords of the Isles, based on Islay, were in theory these chiefs' feudal superiors and managed to exert some control. The Lords of the Isles ruled the Inner Hebrides as well as part of the Western Highlands as subjects of the King of Scots until John MacDonald, fourth Lord of the Isles, squandered the family's powerful position. 
A rebellion by his nephew, Alexander of Lochalsh, provoked an exasperated James IV to forfeit the family's lands in 1493. In 1598, King James VI authorised some "Gentleman Adventurers" from Fife to civilise the "most barbarous Isle of Lewis". Initially successful, the colonists were driven out by local forces commanded by Murdoch and Neil MacLeod, who based their forces on in . The colonists tried again in 1605 with the same result, but a third attempt in 1607 was more successful and in due course Stornoway became a Burgh of Barony. By this time, Lewis was held by the Mackenzies of Kintail (later the Earls of Seaforth), who pursued a more enlightened approach, investing in fishing in particular. The Seaforths' royalist inclinations led to Lewis becoming garrisoned during the Wars of the Three Kingdoms by Cromwell's troops, who destroyed the old castle in Stornoway. Early British era With the implementation of the Treaty of Union in 1707, the Hebrides became part of the new Kingdom of Great Britain, but the clans' loyalties to a distant monarch were not strong. A considerable number of islesmen "came out" in support of the Jacobite Earl of Mar in the 1715 and again in the 1745 rising, including Macleod of Dunvegan and MacLea of Lismore. The aftermath of the decisive Battle of Culloden, which effectively ended Jacobite hopes of a Stuart restoration, was widely felt. The British government's strategy was to estrange the clan chiefs from their kinsmen and turn their descendants into English-speaking landlords whose main concern was the revenues their estates brought rather than the welfare of those who lived on them. This may have brought peace to the islands, but in the following century it came at a terrible price. In the wake of the rebellion, the clan system was broken up and islands of the Hebrides became a series of landed estates. The early 19th century was a time of improvement and population growth. 
Roads and quays were built; the slate industry became a significant employer on Easdale and surrounding islands; and the construction of the Crinan and Caledonian canals and other engineering works such as Clachan Bridge improved transport and access. However, in the mid-19th century, the inhabitants of many parts of the Hebrides were devastated by the Clearances, which destroyed communities throughout the Highlands and Islands as the human populations were evicted and replaced with sheep farms. The position was exacerbated by the failure of the islands' kelp industry, which had thrived from the 18th century until the end of the Napoleonic Wars in 1815, and large-scale emigration became endemic. As , a Gaelic poet from South Uist, wrote for his countrymen who were obliged to leave the Hebrides in the late 18th century, emigration was the only alternative to "sinking into slavery" as the Gaels had been unfairly dispossessed by rapacious landlords. In the 1880s, the "Battle of the Braes" involved a demonstration against unfair land regulation and eviction, stimulating the calling of the Napier Commission. Disturbances continued until the passing of the 1886 Crofters' Act. Language The residents of the Hebrides have spoken a variety of different languages during the long period of human occupation. It is assumed that Pictish must once have predominated in the northern Inner Hebrides and Outer Hebrides. The Scottish Gaelic language arrived from Ireland due to the growing influence of the kingdom of Dál Riata from the 6th century AD onwards, and became the dominant language of the southern Hebrides at that time. For a few centuries, the military might of the meant that Old Norse was prevalent in the Hebrides. North of , the place names that existed prior to the 9th century have been all but obliterated. The Old Norse name for the Hebrides during the Viking occupation was , which means "Southern Isles"; in contrast to the , or "Northern Isles" of Orkney and Shetland. 
South of , Gaelic place names are more common, and after the 13th century, Gaelic became the main language of the entire Hebridean archipelago. Due to Scots and English being favoured in government and the educational system, the Hebrides have been in a state of diglossia since at least the 17th century. The Highland Clearances of the 19th century accelerated the language shift away from Scottish Gaelic, as did increased migration and the continuing lower status of Gaelic speakers. Nevertheless, as late as the end of the 19th century, there were significant populations of monolingual Gaelic speakers, and the Hebrides still contain the highest percentages of Gaelic speakers in Scotland. This is especially true of the Outer Hebrides, where a slim majority speak the language. The Scottish Gaelic college, , is based on Skye and Islay. Ironically, given the status of the Western Isles as the last Gaelic-speaking stronghold in Scotland, the Gaelic language name for the islands – – means "isles of the foreigners"; from the time when they were under Norse colonisation. Modern economy For those who remained, new economic opportunities emerged through the export of cattle, commercial fishing and tourism. Nonetheless, emigration and military service became the choice of many and the archipelago's populations continued to dwindle throughout the late 19th century and for much of the 20th century. Lengthy periods of continuous occupation notwithstanding, many of the smaller islands were abandoned. There were, however, continuing gradual economic improvements, among the most visible of which was the replacement of the traditional thatched blackhouse with accommodation of a more modern design and with the assistance of Highlands and Islands Enterprise many of the islands' populations have begun to increase after decades of decline. 
The discovery of substantial deposits of North Sea oil in 1965 and the renewables sector have contributed to a degree of economic stability in recent decades. For example, the Arnish yard has had a chequered history but has been a significant employer in both the oil and renewables industries. The widespread immigration of mainlanders, particularly non-Gaelic speakers, has been a subject of controversy. Agriculture practised by crofters remained popular in the 21st century in the Hebrides; crofters own a small property but often share a large common grazing area. Various types of funding are available to crofters to help supplement their incomes, including the "Basic Payment Scheme, the suckler beef support scheme, the upland sheep support scheme and the Less Favoured Area support scheme". One reliable source discussed the Crofting Agricultural Grant Scheme (CAGS) in March 2020: the scheme "pays up to £25,000 per claim in any two-year period, covering 80% of investment costs for those who are under 41 and have had their croft less than five years. Older, more established crofters can get 60% grants". Media and the arts Music Many contemporary Gaelic musicians have roots in the Hebrides, including Julie Fowlis (North Uist), Catherine-Ann MacPhee (Barra), Kathleen MacInnes (South Uist), and Ishbel MacAskill (Lewis). All of these singers have repertoire based on the Hebridean tradition, such as and (waulking songs). This tradition includes many songs composed by little-known or anonymous poets before 1800, such as "", "" and "". Several of Runrig's songs are inspired by the archipelago; Calum and were raised on North Uist and Donnie Munro on Skye. Literature The Gaelic poet spent much of his life in the Hebrides and often referred to them in his poetry, including in and . The best known Gaelic poet of her era, (Mary MacPherson, 1821–98), embodied the spirit of the land agitation of the 1870s and 1880s. 
This, and her powerful evocation of the Hebrides—she was from Skye—has made her among the most enduring Gaelic poets. Allan MacDonald (1859–1905), who spent his adult life on Eriskay and South Uist, composed hymns and verse in honour of the Blessed Virgin, the Christ Child, and the Eucharist. In his secular poetry, MacDonald praised the beauty of Eriskay and its people. In his verse drama, (The Old Wives' Parliament), he lampooned the gossiping of his female parishioners and local marriage customs. In the 20th century, Murdo Macfarlane of Lewis wrote , a well-known poem about the Gaelic revival in the Outer Hebrides. Sorley MacLean, the most respected 20th-century Gaelic writer, was born and raised on Raasay, where he set his best known poem, , about the devastating effect of the Highland Clearances. , raised on South Uist and described by MacLean as "one of the few really significant living poets in Scotland, writing in any language" (West Highland Free Press, October 1992) wrote the Scottish Gaelic-language novel which was voted in the Top Ten of the 100 Best-Ever Books from Scotland. Film The area around the Inaccessible Pinnacle of of Skye provided the setting for the Scottish Gaelic feature film Seachd: The Inaccessible Pinnacle (2006). The script was written by the actor, novelist, and poet Aonghas Phàdraig Chaimbeul, who also starred in the movie. , an hour-long documentary in Scottish Gaelic, was made for BBC Alba documenting the battle to remove tolls from the Skye bridge. Video Games The 2012 exploration adventure game Dear Esther by developer The Chinese Room is set on an unnamed island in the Hebrides. Influence on visitors J.M. Barrie's Mary Rose contains references to Harris inspired by a holiday visit to Castle and he wrote a screenplay for the 1924 film adaptation of Peter Pan whilst on . 
The Hebrides, also known as Fingal's Cave, is a famous overture composed by Felix Mendelssohn while he was staying on these islands, while Granville Bantock composed the Hebridean Symphony. Enya's song "Ebudæ" from Shepherd Moons is named after the Hebrides (see below). The 1973 British horror film The Wicker Man is set on the fictional Hebridean island of Summerisle. The 2011 British romantic comedy The Decoy Bride is set on the fictional Hebrides island of Hegg. Natural history In some respects the Hebrides lack biodiversity in comparison to mainland Britain; for example, there are only half as many mammalian species. However, these islands provide breeding grounds for many important seabird species including the world's largest colony of northern gannets. Avian life includes the corncrake, red-throated diver, rock dove, kittiwake, tystie, Atlantic puffin, goldeneye, golden eagle and white-tailed sea eagle. The latter was re-introduced to Rùm in 1975 and has successfully spread to various neighbouring islands, including Mull. There is a small population of red-billed chough concentrated on the islands of Islay and Colonsay. Red deer are common on the hills and the grey seal and common seal are present around the coasts of Scotland. Colonies of seals are found on Oronsay and the Treshnish Isles. The rich freshwater streams contain brown trout, Atlantic salmon and water shrew. Offshore, minke whales, killer whales, basking sharks, porpoises and dolphins are among the sealife that can be seen. Heather moor containing ling, bell heather, cross-leaved heath, bog myrtle and fescues is abundant and there is a diversity of Arctic and alpine plants including Alpine pearlwort and mossy cyphal. Loch Druidibeg on South Uist is a national nature reserve owned and managed by Scottish Natural Heritage. The reserve covers 1,677 hectares across the whole range of local habitats. Over 200 species of flowering plants have been recorded on the reserve, some of which are nationally scarce. 
South Uist is considered the best place in the UK for the aquatic plant slender naiad, which is a European Protected Species. Hedgehogs are not native to the Outer Hebrides—they were introduced in the 1970s to reduce garden pests—and their spread poses a threat to the eggs of ground nesting wading birds. In 2003, Scottish Natural Heritage undertook culls of hedgehogs in the area although these were halted in 2007 due to protests. Trapped animals were relocated to the mainland. See also List of islands of Scotland Scottish island names Geology of Scotland Timeline of prehistoric Scotland Fauna of Scotland New Hebrides Languages of Scotland Goidelic substrate hypothesis Insular Celtic languages Canadian Boat-Song The Lewis Awakening (Religious Revival) References and footnotes Notes Citations General references Ballin Smith, B. and Banks, I. (eds) (2002) In the Shadow of the Brochs, the Iron Age in Scotland. Stroud. Tempus. Ballin Smith, Beverley; Taylor, Simon; and Williams, Gareth (2007) West over Sea: Studies in Scandinavian Sea-Borne Expansion and Settlement Before 1300. Leiden. Brill. Benvie, Neil (2004) Scotland's Wildlife. London. Aurum Press. Buchanan, Margaret (1983) St Kilda: a Photographic Album. W. Blackwood. Buxton, Ben. (1995) Mingulay: An Island and Its People. Edinburgh. Birlinn. Downham, Clare "England and the Irish-Sea Zone in the Eleventh Century" in Gillingham, John (ed) (2004) Anglo-Norman Studies XXVI: Proceedings of the Battle Conference 2003. Woodbridge. Boydell Press. Darling, F. Fraser. First published in 1947 under the title Natural History in the Highlands & Islands; first published under the present title in 1964. Gammeltoft, Peder (2010) "Shetland and Orkney Island-Names – A Dynamic Group". Northern Lights, Northern Words. Selected Papers from the FRLSU Conference, Kirkwall 2009, edited by Robert McColl Millar. "Occasional Paper No 10: Statistics for Inhabited Islands". (28 November 2003) General Register Office for Scotland. Edinburgh. 
Retrieved 22 January 2011. Gillies, Hugh Cameron (1906) The Place Names of Argyll. London. David Nutt. Gregory, Donald (1881) The History of the Western Highlands and Isles of Scotland 1493–1625. Edinburgh. Birlinn. 2008 reprint - originally published by Thomas D. Morrison. Hunter, James (2000) Last of the Free: A History of the Highlands and Islands of Scotland. Edinburgh. Mainstream. Keay, J. & Keay, J. (1994) Collins Encyclopaedia of Scotland. London. HarperCollins. Lynch, Michael (ed) (2007) Oxford Companion to Scottish History. Oxford University Press. . Maclean, Charles (1977) Island on the Edge of the World: the Story of St. Kilda. Edinburgh. Canongate Monro, Sir Donald (1549) A Description Of The Western Isles of Scotland. Appin Regiment/Appin Historical Society. Retrieved 3 March 2007. First published in 1774. Murray, W. H. (1966) The Hebrides. London. Heinemann. Murray, W.H. (1973) The Islands of Western Scotland. London. Eyre Methuen. Omand, Donald (ed.) (2006) The Argyll Book. Edinburgh. Birlinn. Ordnance Survey (2009) "Get-a-map". Retrieved 1–15 August 2009. Rotary Club of Stornoway (1995) The Outer Hebrides Handbook and Guide. Machynlleth. Kittiwake. Slesser, Malcolm (1970) The Island of Skye. Edinburgh. Scottish Mountaineering Club. Steel, Tom (1988) The Life and Death of St. Kilda. London. Fontana. Stevenson, Robert Louis (1995) The New Lighthouse on the Dhu Heartach Rock, Argyllshire. California. Silverado Museum. Based on an 1872 manuscript and edited by Swearingen, R.G. Thompson, Francis (1968) Harris and Lewis, Outer Hebrides. Newton Abbot. David & Charles. Watson, W. J. (1994) The Celtic Place-Names of Scotland. Edinburgh. Birlinn. . First published 1926. External links Hebrides/Western Isles Guide National Library of Scotland: SCOTTISH SCREEN ARCHIVE (selection of archive films about the Hebrides) Former Norwegian colonies Archipelagoes of Scotland Scottish toponymy Kingdom of Norway (872–1397)
https://en.wikipedia.org/wiki/HMS%20Dreadnought
HMS Dreadnought
Several ships and one submarine of the Royal Navy have borne the name HMS Dreadnought in the expectation that they would "dread nought", i.e. "fear nothing". The 1906 ship was one of the Royal Navy's most famous vessels; battleships built after her were referred to as 'dreadnoughts', and earlier battleships became known as pre-dreadnoughts. English ship Dreadnought 1553 was a 40-gun ship built in 1553. was a 41-gun ship launched in 1573, rebuilt in 1592 and 1614, then broken up in 1648. was a 52-gun third-rate ship of the line launched in 1654 as the Torrington for the Commonwealth of England Navy, renamed Dreadnought at the Restoration in 1660, and lost in 1690. was a 60-gun fourth-rate ship of the line launched in 1691, rebuilt in 1706 and broken up 1748. was a 60-gun fourth rate launched in 1742 and sold 1784. was a 98-gun second rate launched in 1801, converted to a hospital ship in 1827, and broken up 1857. was a hospital ship, formerly HMS Caledonia. was a battleship launched in 1875 and hulked in 1903, then sold in 1908. was a revolutionary battleship, launched in 1906 and sold for breakup in 1921. was the UK's first nuclear-powered submarine, launched in 1960 and decommissioned in 1980. HMS Dreadnought will be the first of the UK's new ballistic missile submarines. See also Dreadnought was a gunboat that the garrison at Gibraltar launched in June 1782 during the Great Siege of Gibraltar. She was one of 12. Each was armed with an 18-pounder gun, and received a crew of 21 men drawn from Royal Navy vessels stationed at Gibraltar. provided Dreadnought's crew. Dreadnought was a gunboat operating in North American waters in 1813. On 6 November 1813 she captured the schooners Polly and Cyrus. Citations and references Citations References Boniface, Patrick (2003) Dreadnought: Britain's First Nuclear Powered Submarine. (Periscope Publishing).
Drinkwater, John (1905) A History of the Siege of Gibraltar, 1779-1783: With a Description and Account of that Garrison from the Earliest Times. (J. Murray). Royal Navy ship names
https://en.wikipedia.org/wiki/Hartmann%20Schedel
Hartmann Schedel
Hartmann Schedel (13 February 1440 – 28 November 1514) was a German historian, physician, humanist, and one of the first cartographers to use the printing press. He was born and died in Nuremberg. Matheolus Perusinus served as his tutor. Schedel is best known for writing the text for the Nuremberg Chronicle, known as Schedelsche Weltchronik (English: Schedel's World Chronicle), published in 1493 in Nuremberg. It was commissioned by Sebald Schreyer (1446–1520) and Sebastian Kammermeister (1446–1503). Maps in the Chronicle were the first ever illustrations of many cities and countries. With the invention of the printing press by Johannes Gutenberg in 1447, it became feasible to print books and maps for a larger customer base. Because they had to be handwritten, books were previously rare and very expensive. Schedel was also a notable collector of books, art and old master prints. An album he had bound in 1504, which once contained five engravings by Jacopo de' Barbari, provides important evidence for dating de' Barbari's work. Gallery Editions Hartmann Schedel: Registrum huius operis libri cronicarum cu [cum] figuris et imagibus [imaginibus] ab inicio mudi [mundi]. [Nachdruck der Ausgabe Nürnberg, Koberger, 1493]. Ostfildern: Quantum Books, [2002?]. - CCXCIX, [51] S., Hartmann Schedel: Register des Buchs der Croniken und geschichten mit figuren und pildnussen von anbeginn der welt bis auf dise unnsere Zeit. [Durch Georgium Alten ... in diss Teutsch gebracht]. Reprint [der Ausg.] Nürnberg, Koberger, 1493, 1. Wiederdruck. München: Reprint-Verlag Kölbl, 1991. - [9], CCLXXXVI Bl., IDN: 947020551 Hartmann Schedel: Weltchronik. Nachdruck [der] kolorierten Gesamtausgabe von 1493. Einleitung und Kommentar von Stephan Füssel. Augsburg: Weltbild, 2004. - 680 S., Stephan Füssel (Hg.): Schedel'sche Weltchronik. Taschen Verlag, Köln 2001.
Digitalisat der lateinischen Ausgabe (mit brasil-portugiesischer Bedien-Oberfläche) Digitalisat der Bayerischen Staatsbibliothek Digitalisat der Beloit copy (Morse Library, Beloit College, Beloit, WI 53511, United States) Holzschnitte aus einem der Exemplare der Bibliothèque nationale de France Notes Sources Elisabeth Rücker: Hartmann Schedels Weltchronik, das größte Buchunternehmen der Dürerzeit. Verlag Prestel, München 1988. Stephan Füssel (Hrsg.): 500 Jahre Schedelsche Weltchronik. Carl, Nürnberg 1994. Peter Zahn: Hartmann Schedels Weltchronik. Bilanz der jüngeren Forschung. In: Bibliotheksforum Bayern 24 (1996), 230-248 Christoph Reske: Die Produktion der Schedelschen Weltchronik in Nürnberg. Harrassowitz, Wiesbaden 2000. Michael Zellmann-Rohrer, Constantine Hadavas, Selim S. Nahas: Liber Chronicarum Translation Volume 1. Boston. External links Liber chronicarum. Nuremberg, Anton Koberger, 23 Dec. 1493. From the Rare Book and Special Collections Division at the Library of Congress 1440 births 1514 deaths Writers from Nuremberg German cartographers German Renaissance humanists 15th-century German physicians Physicians from Nuremberg
https://en.wikipedia.org/wiki/Hexameter
Hexameter
Hexameter is a metrical line of verse consisting of six feet (a "foot" here is the pulse, or major accent, of words in an English line of poetry; in Greek and Latin a "foot" is not an accent, but describes various combinations of syllables). It was the standard epic metre in classical Greek and Latin literature, such as in the Iliad, Odyssey and Aeneid. Its use in other genres of composition includes Horace's satires, Ovid's Metamorphoses, and the Hymns of Orpheus. According to Greek mythology, hexameter was invented by Phemonoe, daughter of Apollo and the first Pythia of Delphi. Classical Hexameter In classical hexameter, the six feet follow these rules: A foot can be made up of two long syllables (– –), a spondee; or a long and two short syllables, a dactyl (– υ υ). The first four feet can be either of them. The fifth is almost always a dactyl, and the last must be a spondee. A short syllable (υ) is a syllable with a short vowel and no consonant at the end. A long syllable (–) is a syllable that either has a long vowel, one or more consonants at the end (or a long consonant), or both. Spaces between words are not counted in syllabification, so for instance "cat" is a long syllable in isolation, but "cat attack" would be syllabified as short-short-long: "ca", "ta", "tack" (υ υ –). Variations of the sequence from line to line, as well as the use of caesura (logical full stops within the line) are essential in avoiding what may otherwise be a monotonous sing-song effect. Application Although the rules seem simple, it is hard to use classical hexameter in English, because English is a stress-timed language that condenses vowels and consonants between stressed syllables, while hexameter relies on the regular timing of the phonetic sounds. Languages having the latter properties (i.e., languages that are not stress-timed) include Ancient Greek, Latin, Lithuanian and Hungarian.
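The foot rules above are small enough to enumerate exhaustively. As a minimal sketch (variable names are my own; long and short syllables are written in ASCII as "-" and "u", and the rare spondaic fifth foot is ignored), the following Python lists every admissible shape of a classical hexameter line:

```python
# Enumerate every admissible foot pattern for a classical hexameter line,
# per the rules above: feet 1-4 are each a dactyl (- u u) or a spondee (- -),
# the fifth foot is a dactyl, and the sixth is a spondee.
from itertools import product

DACTYL = "-uu"
SPONDEE = "--"

def hexameter_patterns():
    """Return all valid six-foot patterns, each as a list of feet."""
    return [list(first_four) + [DACTYL, SPONDEE]
            for first_four in product((DACTYL, SPONDEE), repeat=4)]

patterns = hexameter_patterns()
print(len(patterns))           # 16 shapes: 2 choices in each of 4 free feet
print("|".join(patterns[0]))   # -uu|-uu|-uu|-uu|-uu|--  (the fully dactylic line)
```

Since only the first four feet vary, a poem cycling through these 16 shapes without variation or caesura would produce exactly the sing-song effect described above.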
While the above classical hexameter has never enjoyed much popularity in English, where the standard metre is iambic pentameter, English poems have frequently been written in iambic hexameter. There are numerous examples from the 16th century and a few from the 17th; the most prominent of these is Michael Drayton's Poly-Olbion (1612) in couplets of iambic hexameter. An example from Drayton (marking the feet): Nor a | ny o | ther wold | like Cot | swold e | ver sped, So rich | and fair | a vale | in for | tuning | to wed. In the 17th century the iambic hexameter, also called alexandrine, was used as a substitution in the heroic couplet, and as one of the types of permissible lines in lyrical stanzas and the Pindaric odes of Cowley and Dryden. Several attempts were made in the 19th century to naturalise the dactylic hexameter to English, by Henry Wadsworth Longfellow, Arthur Hugh Clough and others, none of them particularly successful. Gerard Manley Hopkins wrote many of his poems in six-foot iambic and sprung rhythm lines. In the 20th century a loose ballad-like six-foot line with a strong medial pause was used by William Butler Yeats. The iambic six-foot line has also been used occasionally, and an accentual six-foot line has been used by translators from the Latin and many poets. In the late 18th century the hexameter was adapted to the Lithuanian language by Kristijonas Donelaitis. His poem "Metai" (The Seasons) is considered the most successful hexameter text in Lithuanian as yet. For dactylic hexameter poetry in Hungarian language, see Dactylic hexameter#In Hungarian. See also Dactylic hexameter Prosody (Latin) Notes References Stephen Greenblatt et al. The Norton Anthology of English Literature, volume D, 9th edition (Norton, 2012). Pausanias. Description of Greece, Vol. IV. Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A. (Cambridge, MA: Harvard University Press; London: William Heinemann Ltd., 1918). Pliny the Elder. The Natural History. 
Translated by John Bostock, M.D., F.R.S. H.T. Riley, Esq., B.A. (London: Taylor and Francis, 1855). External links Hexametrica, a tutorial on Latin dactylic hexameter at Skidmore College Hexameter.co, practice scanning lines of dactylic hexameter from a variety of Latin authors Types of verses
https://en.wikipedia.org/wiki/Timeline%20of%20Polish%20history
Timeline of Polish history
This is a timeline of Polish history, comprising important legal and territorial changes and political events in Poland and its predecessor states. To read about the background to these events, see History of Poland. See also the list of Polish monarchs and list of prime ministers of Poland. Centuries: 5th, 6th, 7th, 8th, 9th, 10th, 11th, 12th, 13th, 14th, 15th, 16th, 17th, 18th, 19th, 20th, 21st. See also 5th century 10th century 11th century 12th century 13th century 14th century 15th century 16th century 17th century 18th century 19th century 20th century The Second Polish Republic (1918–39) Occupation of Poland (1939–45) Communist takeover, Polish People's Republic Democratic Republic of Poland 21st century See also Cities in Poland Timeline of Białystok Timeline of Gdańsk Timeline of Kraków Timeline of Łódź Timeline of Lwów (formerly in Poland; now in Ukraine) Timeline of Poznań Timeline of Szczecin Timeline of Warsaw Timeline of Wrocław Category:Timelines of cities in Poland (in Polish) References Library of Congress, A Country Study: Poland, Chronology of Important Events: online Further reading External links Years in Poland Poland
https://en.wikipedia.org/wiki/Himalia
Himalia
Himalia may refer to: Himalia (moon), a moon of Jupiter Himalia group Himalia (mythology), a nymph from Cyprus in Greek mythology Himalia Ridge, a ridge on the Ganymede Heights massif on Alexander Island, Antarctica See also Himalaya (disambiguation)
https://en.wikipedia.org/wiki/Heracleidae
Heracleidae
In Greek mythology, the Heracleidae (; ) or Heraclids were the numerous descendants of Heracles (Hercules), especially applied in a narrower sense to the descendants of Hyllus, the eldest of his four sons by Deianira (Hyllus was also sometimes thought of as Heracles' son by Melite). Other Heracleidae included Macaria, Lamos, Manto, Bianor, Tlepolemus, and Telephus. These Heraclids were a group of Dorian kings who conquered the Peloponnesian kingdoms of Mycenae, Sparta and Argos; according to the literary tradition in Greek mythology, they claimed a right to rule through their ancestor. Since Karl Otfried Müller's Die Dorier (1830, English translation 1839), I. ch. 3, their rise to dominance has been associated with a "Dorian invasion". Though details of genealogy differ from one ancient author to another, the cultural significance of the mythic theme, that the descendants of Heracles, exiled after his death, returned some generations later to reclaim land that their ancestors had held in Mycenaean Greece, was to assert the primal legitimacy of a traditional ruling clan that traced its origin, thus its legitimacy, to Heracles. Origin Heracles, whom Zeus had originally intended to be ruler of Argos, Lacedaemon and Messenian Pylos, had been supplanted by the cunning of Hera, and his intended possessions had fallen into the hands of Eurystheus, king of Mycenae. After the death of Heracles, his children, after many wanderings, found refuge from Eurystheus at Athens. Eurystheus, on his demand for their surrender being refused, attacked Athens, but was defeated and slain. Hyllus and his brothers then invaded Peloponnesus, but after a year's stay were forced by a pestilence to quit. They withdrew to Thessaly, where Aegimius, the mythical ancestor of the Dorians, whom Heracles had assisted in war against the Lapithae, adopted Hyllus and made over to him a third part of his territory. 
After the death of Aegimius, his two sons, Pamphylus and Dymas, voluntarily submitted to Hyllus (who was, according to the Dorian tradition in Herodotus V. 72, really an Achaean), who thus became ruler of the Dorians, the three branches of that race being named after these three heroes. Desiring to reconquer his paternal inheritance, Hyllus consulted the Delphic oracle, which told him to wait for "the third fruit", (or "the third crop") and then enter Peloponnesus by "a narrow passage by sea". Accordingly, after three years, Hyllus marched across the isthmus of Corinth to attack Atreus, the successor of Eurystheus, but was slain in single combat by Echemus, king of Tegea. This second attempt was followed by a third under Cleodaeus and a fourth under Aristomachus, both unsuccessful. Dorian invasion At last, Temenus, Cresphontes and Aristodemus, the sons of Aristomachus, complained to the oracle that its instructions had proved fatal to those who had followed them. They received the answer that by the "third fruit" the "third generation" was meant, and that the "narrow passage" was not the isthmus of Corinth, but the straits of Rhium. They accordingly built a fleet at Naupactus, but before they set sail, Aristodemus was struck by lightning (or shot by Apollo) and the fleet destroyed, because one of the Heracleidae had slain an Acarnanian soothsayer. The oracle, being again consulted by Temenus, bade him offer an expiatory sacrifice and banish the murderer for ten years, and look out for a man with three eyes to act as guide. On his way back to Naupactus, Temenus fell in with Oxylus, an Aetolian, who had lost one eye, riding on a horse (thus making up the three eyes) and immediately pressed him into his service. According to another account, a mule on which Oxylus rode had lost an eye. The Heracleidae repaired their ships, sailed from Naupactus to Antirrhium, and thence to Rhium in Peloponnesus. 
A decisive battle was fought with Tisamenus, son of Orestes, the chief ruler in the peninsula, who was defeated and slain. This conquest was traditionally dated eighty years after the Trojan War. The Heracleidae, who thus became practically masters of Peloponnesus, proceeded to distribute its territory among themselves by lot. Argos fell to Temenus, Lacedaemon to Procles and Eurysthenes, the twin sons of Aristodemus; and Messenia to Cresphontes (tradition maintains that Cresphontes cheated in order to obtain Messenia, which had the best land of all.) The fertile district of Elis had been reserved by agreement for Oxylus. The Heracleidae ruled in Lacedaemon until 221 BCE, but disappeared much earlier in the other countries. This conquest of Peloponnesus by the Dorians, commonly called the "Dorian invasion" or the "Return of the Heraclidae", is represented as the recovery by the descendants of Heracles of the rightful inheritance of their hero ancestor and his sons. The Dorians followed the custom of other Greek tribes in claiming as ancestor for their ruling families one of the legendary heroes, but the traditions must not on that account be regarded as entirely mythical. They represent a joint invasion of Peloponnesus by Aetolians and Dorians, the latter having been driven southward from their original northern home under pressure from the Thessalians. It is noticeable that there is no mention of these Heraclidae or their invasion in Homer or Hesiod. Herodotus (vi. 52) speaks of poets who had celebrated their deeds, but these were limited to events immediately succeeding the death of Heracles. List of Heracleidae At Sparta At Sparta, the Heraclids formed two dynasties ruling jointly: the Agiads and the Eurypontids. Other Spartiates also claimed Heraclid descent, such as Lysander. At Corinth At Corinth the Heraclids ruled as the Bacchiadae dynasty before the aristocratic revolution, which brought a Bacchiad aristocracy into power. 
At Argos Temenus, a descendant of Heracles, was the first king of Argos; his line later included the famous tyrant Pheidon. At Macedonia At Macedonia, the Heraclids formed the Argead Dynasty, whose name comes from Argos, as one of the Heraclids from this city, Perdiccas I, settled in Macedonia, where he founded his kingdom. By the time of Philip II the family had expanded their reign further, to include under the rule of Macedonia all Upper Macedonian states. Their most celebrated members were Philip II of Macedon and his son Alexander the Great, under whose leadership the kingdom of Macedonia gradually gained predominance throughout Greece, defeated the Achaemenid Empire and expanded as far as Egypt and India. The mythical founder of the Argead dynasty is King Caranus. In Euripides' tragedy The Greek tragedians amplified the story, probably drawing inspiration from local legends which glorified the services rendered by Athens to the rulers of Peloponnesus. The Heracleidae feature as the main subjects of Euripides' play, Heracleidae. J. A. Spranger found the political subtext of Heracleidae, never far to seek, so particularly apt in Athens towards the end of the peace of Nicias, in 419 BCE, that he suggested the date as that of the play's first performance. In the tragedy, Iolaus, Heracles' old comrade and nephew, and Heracles' children, Macaria and her brothers and sisters have hidden from Eurystheus in Athens, ruled by King Demophon; as the first scene makes clear, they expect that the blood relationship of the kings with Heracles and their father's past indebtedness to Theseus will finally provide them sanctuary. As Eurystheus prepares to attack, an oracle tells Demophon that only the sacrifice of a noble woman to Persephone can guarantee an Athenian victory. Macaria volunteers for the sacrifice and a spring is named the Macarian spring in her honor. References Sources Bibliotheca ii. 8 Diodorus Siculus iv. 57, 58 Pausanias i. 32, 41, ii. 13, 18, iii. I, iv. 3, v.
3 Euripides, Heracleidae Pindar, Pythia, ix. 137 Herodotus ix. 27 Connop Thirlwall, History of Greece, ch. vii George Grote, History of Greece, pt. i. ch. xviii Georg Busolt, Griechische Geschichte, i. ch. ii. sec. 7, where a list of modern authorities is given External links Article by George Hinge Greek Mythology Links Timeless Mythology Article about Dorian Invasion Dorians Iron Age Greece Mycenaean Greece Patronymics from Greek mythology
https://en.wikipedia.org/wiki/HIV
HIV
The human immunodeficiency viruses (HIV) are two species of Lentivirus (a subgroup of retrovirus) that infect humans. Over time, they cause acquired immunodeficiency syndrome (AIDS), a condition in which progressive failure of the immune system allows life-threatening opportunistic infections and cancers to thrive. Without treatment, average survival time after infection with HIV is estimated to be 9 to 11 years, depending on the HIV subtype. In most cases, HIV is a sexually transmitted infection and occurs by contact with or transfer of blood, pre-ejaculate, semen, and vaginal fluids. Research has shown (for both same-sex and opposite-sex couples) that HIV is untransmittable through condomless sexual intercourse if the HIV-positive partner has a consistently undetectable viral load. Non-sexual transmission can occur from an infected mother to her infant during pregnancy, during childbirth by exposure to her blood or vaginal fluid, and through breast milk. Within these bodily fluids, HIV is present as both free virus particles and virus within infected immune cells. HIV infects vital cells in the human immune system, such as helper T cells (specifically CD4+ T cells), macrophages, and dendritic cells. HIV infection leads to low levels of CD4+ T cells through a number of mechanisms, including pyroptosis of abortively infected T cells, apoptosis of uninfected bystander cells, direct viral killing of infected cells, and killing of infected CD4+ T cells by CD8+ cytotoxic lymphocytes that recognize infected cells. When CD4+ T cell numbers decline below a critical level, cell-mediated immunity is lost, and the body becomes progressively more susceptible to opportunistic infections, leading to the development of AIDS. Virology Classification HIV is a member of the genus Lentivirus, part of the family Retroviridae. Lentiviruses have many morphologies and biological properties in common. 
Many species are infected by lentiviruses, which are characteristically responsible for long-duration illnesses with a long incubation period. Lentiviruses are transmitted as single-stranded, positive-sense, enveloped RNA viruses. Upon entry into the target cell, the viral RNA genome is converted (reverse transcribed) into double-stranded DNA by a virally encoded enzyme, reverse transcriptase, that is transported along with the viral genome in the virus particle. The resulting viral DNA is then imported into the cell nucleus and integrated into the cellular DNA by a virally encoded enzyme, integrase, and host co-factors. Once integrated, the virus may become latent, allowing the virus and its host cell to avoid detection by the immune system, for an indeterminate amount of time. The HIV virus can remain dormant in the human body for up to ten years after primary infection; during this period the virus does not cause symptoms. Alternatively, the integrated viral DNA may be transcribed, producing new RNA genomes and viral proteins, using host cell resources, that are packaged and released from the cell as new virus particles that will begin the replication cycle anew. Two types of HIV have been characterized: HIV-1 and HIV-2. HIV-1 is the virus that was initially discovered and termed both lymphadenopathy associated virus (LAV) and human T-lymphotropic virus 3 (HTLV-III). HIV-1 is more virulent and more infective than HIV-2, and is the cause of the majority of HIV infections globally. The lower infectivity of HIV-2, compared to HIV-1, implies that fewer of those exposed to HIV-2 will be infected per exposure. Due to its relatively poor capacity for transmission, HIV-2 is largely confined to West Africa. Structure and genome HIV is different in structure from other retroviruses. It is roughly spherical with a diameter of about 120 nm, around 60 times smaller than a red blood cell. 
It is composed of two copies of positive-sense single-stranded RNA that codes for the virus's nine genes enclosed by a conical capsid composed of 2,000 copies of the viral protein p24. The single-stranded RNA is tightly bound to nucleocapsid proteins, p7, and enzymes needed for the development of the virion such as reverse transcriptase, proteases, ribonuclease and integrase. A matrix composed of the viral protein p17 surrounds the capsid ensuring the integrity of the virion particle. This is, in turn, surrounded by the viral envelope, which is composed of the lipid bilayer taken from the membrane of a human host cell when the newly formed virus particle buds from the cell. The viral envelope contains proteins from the host cell and relatively few copies of the HIV envelope protein, which consists of a cap made of three molecules known as glycoprotein (gp) 120, and a stem consisting of three gp41 molecules that anchor the structure into the viral envelope. The envelope protein, encoded by the HIV env gene, allows the virus to attach to target cells and fuse the viral envelope with the target cell's membrane releasing the viral contents into the cell and initiating the infectious cycle. As the sole viral protein on the surface of the virus, the envelope protein is a major target for HIV vaccine efforts. Over half of the mass of the trimeric envelope spike is N-linked glycans. The density is high as the glycans shield the underlying viral protein from neutralisation by antibodies. This is one of the most densely glycosylated molecules known and the density is sufficiently high to prevent the normal maturation process of glycans during biogenesis in the endoplasmic reticulum and Golgi apparatus. The majority of the glycans are therefore stalled as immature 'high-mannose' glycans not normally present on human glycoproteins that are secreted or present on a cell surface.
The unusual processing and high density mean that almost all broadly neutralising antibodies that have so far been identified (from a subset of patients that have been infected for many months to years) bind to, or are adapted to cope with, these envelope glycans. The molecular structure of the viral spike has now been determined by X-ray crystallography and cryogenic electron microscopy. These advances in structural biology were made possible due to the development of stable recombinant forms of the viral spike by the introduction of an intersubunit disulphide bond and an isoleucine to proline mutation (radical replacement of an amino acid) in gp41. The so-called SOSIP trimers not only reproduce the antigenic properties of the native viral spike, but also display the same degree of immature glycans as presented on the native virus. Recombinant trimeric viral spikes are promising vaccine candidates as they display fewer non-neutralising epitopes than recombinant monomeric gp120, which act to suppress the immune response to target epitopes. The RNA genome consists of at least seven structural landmarks (LTR, TAR, RRE, PE, SLIP, CRS, and INS), and nine genes (gag, pol, and env, tat, rev, nef, vif, vpr, vpu, and sometimes a tenth tev, which is a fusion of tat, env and rev), encoding 19 proteins. Three of these genes, gag, pol, and env, contain information needed to make the structural proteins for new virus particles. For example, env codes for a protein called gp160 that is cut in two by a cellular protease to form gp120 and gp41. The six remaining genes, tat, rev, nef, vif, vpr, and vpu (or vpx in the case of HIV-2), are regulatory genes for proteins that control the ability of HIV to infect cells, produce new copies of virus (replicate), or cause disease. The two tat proteins (p16 and p14) are transcriptional transactivators for the LTR promoter acting by binding the TAR RNA element.
The TAR may also be processed into microRNAs that regulate the apoptosis genes ERCC1 and IER3. The rev protein (p19) is involved in shuttling RNAs from the nucleus to the cytoplasm by binding to the RRE RNA element. The vif protein (p23) prevents the action of APOBEC3G (a cellular protein that deaminates cytidine to uridine in the single-stranded viral DNA and/or interferes with reverse transcription). The vpr protein (p14) arrests cell division at G2/M. The nef protein (p27) down-regulates CD4 (the major viral receptor), as well as the MHC class I and class II molecules. Nef also interacts with SH3 domains. The vpu protein (p16) influences the release of new virus particles from infected cells. The ends of each strand of HIV RNA contain an RNA sequence called a long terminal repeat (LTR). Regions in the LTR act as switches to control production of new viruses and can be triggered by proteins from either HIV or the host cell. The Psi element is involved in viral genome packaging and recognized by gag and rev proteins. The SLIP element is involved in the frameshift in the gag-pol reading frame required to make functional pol. Tropism The term viral tropism refers to the cell types a virus infects. HIV can infect a variety of immune cells such as CD4+ T cells, macrophages, and microglial cells. HIV-1 entry to macrophages and CD4+ T cells is mediated through interaction of the virion envelope glycoproteins (gp120) with the CD4 molecule on the target cells' membrane and also with chemokine co-receptors. Macrophage-tropic (M-tropic) strains of HIV-1, or non-syncytia-inducing strains (NSI; now called R5 viruses) use the β-chemokine receptor, CCR5, for entry and are thus able to replicate in both macrophages and CD4+ T cells. This CCR5 co-receptor is used by almost all primary HIV-1 isolates regardless of viral genetic subtype. Indeed, macrophages play a key role in several critical aspects of HIV infection.
They appear to be the first cells infected by HIV and perhaps the source of HIV production when CD4+ cells become depleted in the patient. Macrophages and microglial cells are the cells infected by HIV in the central nervous system. In the tonsils and adenoids of HIV-infected patients, macrophages fuse into multinucleated giant cells that produce huge amounts of virus. T-tropic strains of HIV-1, or syncytia-inducing strains (SI; now called X4 viruses) replicate in primary CD4+ T cells as well as in macrophages and use the α-chemokine receptor, CXCR4, for entry. Dual-tropic HIV-1 strains are thought to be transitional strains of HIV-1 and thus are able to use both CCR5 and CXCR4 as co-receptors for viral entry. The α-chemokine SDF-1, a ligand for CXCR4, suppresses replication of T-tropic HIV-1 isolates. It does this by down-regulating the expression of CXCR4 on the surface of HIV target cells. M-tropic HIV-1 isolates that use only the CCR5 receptor are termed R5; those that use only CXCR4 are termed X4, and those that use both, X4R5. However, the use of co-receptors alone does not explain viral tropism, as not all R5 viruses are able to use CCR5 on macrophages for a productive infection and HIV can also infect a subtype of myeloid dendritic cells, which probably constitute a reservoir that maintains infection when CD4+ T cell numbers have declined to extremely low levels. Some people are resistant to certain strains of HIV. For example, people with the CCR5-Δ32 mutation are resistant to infection by the R5 virus, as the mutation leaves HIV unable to bind to this co-receptor, reducing its ability to infect target cells. Sexual intercourse is the major mode of HIV transmission. Both X4 and R5 HIV are present in the seminal fluid, which enables the virus to be transmitted from a male to his sexual partner. The virions can then infect numerous cellular targets and disseminate into the whole organism. 
However, a selection process leads to a predominant transmission of the R5 virus through this pathway. In patients infected with subtype B HIV-1, there is often a co-receptor switch in late-stage disease, with the emergence of T-tropic variants that can infect a variety of T cells through CXCR4. These variants then replicate more aggressively with heightened virulence that causes rapid T cell depletion, immune system collapse, and opportunistic infections that mark the advent of AIDS. HIV-positive patients acquire an enormously broad spectrum of opportunistic infections, which was particularly problematic prior to the onset of HAART therapies; however, the same infections are reported among HIV-infected patients examined post-mortem following the onset of antiretroviral therapies. Thus, during the course of infection, viral adaptation to the use of CXCR4 instead of CCR5 may be a key step in the progression to AIDS. A number of studies with subtype B-infected individuals have determined that between 40 and 50 percent of AIDS patients can harbour viruses of the SI and, it is presumed, the X4 phenotypes. HIV-2 is much less pathogenic than HIV-1 and is restricted in its worldwide distribution to West Africa. The adoption of "accessory genes" by HIV-2 and its more promiscuous pattern of co-receptor usage (including CD4-independence) may assist the virus in its adaptation to avoid innate restriction factors present in host cells. Adaptation to use normal cellular machinery to enable transmission and productive infection has also aided the establishment of HIV-2 replication in humans. A survival strategy for any infectious agent is not to kill its host, but ultimately to become a commensal organism. Having achieved a low pathogenicity, over time, variants that are more successful at transmission will be selected.
Replication cycle Entry to the cell The HIV virion enters macrophages and CD4+ T cells by the adsorption of glycoproteins on its surface to receptors on the target cell followed by fusion of the viral envelope with the target cell membrane and the release of the HIV capsid into the cell. Entry to the cell begins through interaction of the trimeric envelope complex (gp160 spike) on the HIV viral envelope and both CD4 and a chemokine co-receptor (generally either CCR5 or CXCR4, but others are known to interact) on the target cell surface. Gp120 binds to integrin α4β7 activating LFA-1, the central integrin involved in the establishment of virological synapses, which facilitate efficient cell-to-cell spreading of HIV-1. The gp160 spike contains binding domains for both CD4 and chemokine receptors. The first step in fusion involves the high-affinity attachment of the CD4 binding domains of gp120 to CD4. Once gp120 is bound with the CD4 protein, the envelope complex undergoes a structural change, exposing the chemokine receptor binding domains of gp120 and allowing them to interact with the target chemokine receptor. This allows for a more stable two-pronged attachment, which allows the N-terminal fusion peptide gp41 to penetrate the cell membrane. Repeat sequences in gp41, HR1, and HR2 then interact, causing the collapse of the extracellular portion of gp41 into a hairpin shape. This loop structure brings the virus and cell membranes close together, allowing fusion of the membranes and subsequent entry of the viral capsid. After HIV has bound to the target cell, the HIV RNA and various enzymes, including reverse transcriptase, integrase, ribonuclease, and protease, are injected into the cell. During the microtubule-based transport to the nucleus, the viral single-strand RNA genome is transcribed into double-strand DNA, which is then integrated into a host chromosome. 
HIV can infect dendritic cells (DCs) by this CD4-CCR5 route, but another route using mannose-specific C-type lectin receptors such as DC-SIGN can also be used. DCs are one of the first cells encountered by the virus during sexual transmission. They are currently thought to play an important role by transmitting HIV to T cells when the virus is captured in the mucosa by DCs. The presence of FEZ-1, which occurs naturally in neurons, is believed to prevent the infection of cells by HIV. HIV-1 entry, as well as entry of many other retroviruses, has long been believed to occur exclusively at the plasma membrane. More recently, however, productive infection by pH-independent, clathrin-mediated endocytosis of HIV-1 has also been reported and was recently suggested to constitute the only route of productive entry. Replication and transcription Shortly after the viral capsid enters the cell, an enzyme called reverse transcriptase liberates the positive-sense single-stranded RNA genome from the attached viral proteins and copies it into a complementary DNA (cDNA) molecule. The process of reverse transcription is extremely error-prone, and the resulting mutations may cause drug resistance or allow the virus to evade the body's immune system. The reverse transcriptase also has ribonuclease activity that degrades the viral RNA during the synthesis of cDNA, as well as DNA-dependent DNA polymerase activity that creates a sense DNA from the antisense cDNA. Together, the cDNA and its complement form a double-stranded viral DNA that is then transported into the cell nucleus. The integration of the viral DNA into the host cell's genome is carried out by another viral enzyme called integrase. The integrated viral DNA may then lie dormant, in the latent stage of HIV infection. To actively produce the virus, certain cellular transcription factors need to be present, the most important of which is NF-κB (nuclear factor kappa B), which is upregulated when T cells become activated. 
This means that those cells most likely to be targeted, entered and subsequently killed by HIV are those actively fighting infection. During viral replication, the integrated DNA provirus is transcribed into RNA. The full-length genomic RNAs (gRNA) can be packaged into new viral particles in a pseudodiploid form. The selectivity in the packaging is explained by the structural properties of the dimeric conformer of the gRNA. The gRNA dimer is characterized by a tandem three-way junction within the gRNA monomer, in which the SD and AUG hairpins, responsible for splicing and translation respectively, are sequestered and the DIS (dimerization initiation signal) hairpin is exposed. The formation of the gRNA dimer is mediated by a 'kissing' interaction between the DIS hairpin loops of the gRNA monomers. At the same time, certain guanosine residues in the gRNA are made available for binding of the nucleocapsid (NC) protein, leading to the subsequent virion assembly. The labile gRNA dimer has also been reported to achieve a more stable conformation following NC binding, in which both the DIS and the U5:AUG regions of the gRNA participate in extensive base pairing. RNA can also be processed to produce mature messenger RNAs (mRNAs). In most cases, this processing involves RNA splicing to produce mRNAs that are shorter than the full-length genome. Which part of the RNA is removed during RNA splicing determines which of the HIV protein-coding sequences is translated. Mature HIV mRNAs are exported from the nucleus into the cytoplasm, where they are translated to produce HIV proteins, including Rev. As Rev protein is produced, it moves to the nucleus, where it binds to full-length, unspliced copies of virus RNAs and allows them to leave the nucleus. Some of these full-length RNAs function as mRNAs that are translated to produce the structural proteins Gag and Env. Gag proteins bind to copies of the virus RNA genome to package them into new virus particles.
HIV-1 and HIV-2 appear to package their RNA differently. HIV-1 will bind to any appropriate RNA. HIV-2 will preferentially bind to the mRNA that was used to create the Gag protein itself. Recombination Two RNA genomes are encapsidated in each HIV-1 particle (see Structure and genome of HIV). Upon infection and replication catalyzed by reverse transcriptase, recombination between the two genomes can occur. Recombination occurs as the single-strand, positive-sense RNA genomes are reverse transcribed to form DNA. During reverse transcription, the nascent DNA can switch multiple times between the two copies of the viral RNA. This form of recombination is known as copy-choice. Recombination events may occur throughout the genome. Anywhere from two to 20 recombination events per genome may occur at each replication cycle, and these events can rapidly shuffle the genetic information that is transmitted from parental to progeny genomes. Viral recombination produces genetic variation that likely contributes to the evolution of resistance to anti-retroviral therapy. Recombination may also contribute, in principle, to overcoming the immune defenses of the host. Yet, for the adaptive advantages of genetic variation to be realized, the two viral genomes packaged in individual infecting virus particles need to have arisen from separate progenitor parental viruses of differing genetic constitution. It is unknown how often such mixed packaging occurs under natural conditions. Bonhoeffer et al. suggested that template switching by reverse transcriptase acts as a repair process to deal with breaks in the single-stranded RNA genome. In addition, Hu and Temin suggested that recombination is an adaptation for repair of damage in the RNA genomes. Strand switching (copy-choice recombination) by reverse transcriptase could generate an undamaged copy of genomic DNA from two damaged single-stranded RNA genome copies. 
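The copy-choice mechanism described above can be illustrated with a toy simulation. This is a purely illustrative sketch, not a biological model: the marker sequences, switch count, and function name are invented, and real template switching is not uniformly random along the genome.

```python
import random

def copy_choice(parent_a: str, parent_b: str, n_switches: int, seed: int = 0) -> str:
    """Toy model of copy-choice recombination: the nascent DNA starts on
    one RNA template and jumps to the other at random points, yielding a
    mosaic progeny genome. Assumes equal-length, alignable templates."""
    assert len(parent_a) == len(parent_b)
    rng = random.Random(seed)
    # Pick distinct switch points along the genome, in order.
    points = sorted(rng.sample(range(1, len(parent_a)), n_switches))
    templates = [parent_a, parent_b]
    current = 0                # start copying from parent A
    progeny, start = [], 0
    for p in points + [len(parent_a)]:
        progeny.append(templates[current][start:p])
        current = 1 - current  # template switch
        start = p
    return "".join(progeny)

# Two marker genomes: the progeny is a mosaic of A- and B-derived blocks.
child = copy_choice("A" * 30, "B" * 30, n_switches=3)
```

With uniform marker sequences, each switch point shows up as a block boundary in the progeny string, so the number of A/B transitions equals the number of template switches.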
This view of the adaptive benefit of recombination in HIV could explain why each HIV particle contains two complete genomes, rather than one. Furthermore, the view that recombination is a repair process implies that the benefit of repair can occur at each replication cycle, and that this benefit can be realized whether or not the two genomes differ genetically. On the view that recombination in HIV is a repair process, the generation of recombinational variation would be a consequence, but not the cause of, the evolution of template switching. HIV-1 infection causes chronic inflammation and production of reactive oxygen species. Thus, the HIV genome may be vulnerable to oxidative damage, including breaks in the single-stranded RNA. For HIV, as well as for viruses in general, successful infection depends on overcoming host defense strategies that often include production of genome-damaging reactive oxygen species. Thus, Michod et al. suggested that recombination by viruses is an adaptation for repair of genome damage, and that recombinational variation is a byproduct that may provide a separate benefit. Assembly and release The final step of the viral cycle, assembly of new HIV-1 virions, begins at the plasma membrane of the host cell. The Env polyprotein (gp160) goes through the endoplasmic reticulum and is transported to the Golgi apparatus where it is cleaved by furin resulting in the two HIV envelope glycoproteins, gp41 and gp120. These are transported to the plasma membrane of the host cell where gp41 anchors gp120 to the membrane of the infected cell. The Gag (p55) and Gag-Pol (p160) polyproteins also associate with the inner surface of the plasma membrane along with the HIV genomic RNA as the forming virion begins to bud from the host cell. The budded virion is still immature as the gag polyproteins still need to be cleaved into the actual matrix, capsid and nucleocapsid proteins. 
This cleavage is mediated by the packaged viral protease and can be inhibited by antiretroviral drugs of the protease inhibitor class. The various structural components then assemble to produce a mature HIV virion. Only mature virions are then able to infect another cell. Spread within the body The classical process of infection of a cell by a virion can be called "cell-free spread" to distinguish it from a more recently recognized process called "cell-to-cell spread". In cell-free spread (see figure), virus particles bud from an infected T cell, enter the blood or extracellular fluid and then infect another T cell following a chance encounter. HIV can also disseminate by direct transmission from one cell to another by a process of cell-to-cell spread, for which two pathways have been described. Firstly, an infected T cell can transmit virus directly to a target T cell via a virological synapse. Secondly, an antigen-presenting cell (APC), such as a macrophage or dendritic cell, can transmit HIV to T cells by a process that either involves productive infection (in the case of macrophages) or capture and transfer of virions in trans (in the case of dendritic cells). Whichever pathway is used, infection by cell-to-cell transfer is reported to be much more efficient than cell-free virus spread. A number of factors contribute to this increased efficiency, including polarised virus budding towards the site of cell-to-cell contact, close apposition of cells, which minimizes fluid-phase diffusion of virions, and clustering of HIV entry receptors on the target cell towards the contact zone. Cell-to-cell spread is thought to be particularly important in lymphoid tissues, where CD4+ T cells are densely packed and likely to interact frequently. Intravital imaging studies have supported the concept of the HIV virological synapse in vivo. The many dissemination mechanisms available to HIV contribute to the virus's ongoing replication in spite of anti-retroviral therapies. 
Genetic variability HIV differs from many viruses in that it has very high genetic variability. This diversity is a result of its fast replication cycle, with the generation of about 10¹⁰ virions every day, coupled with a high mutation rate of approximately 3 × 10⁻⁵ per nucleotide base per cycle of replication and the recombinogenic properties of reverse transcriptase. This complex scenario leads to the generation of many variants of HIV in a single infected patient in the course of one day. This variability is compounded when a single cell is simultaneously infected by two or more different strains of HIV. When simultaneous infection occurs, the genome of progeny virions may be composed of RNA strands from two different strains. This hybrid virion then infects a new cell where it undergoes replication. As this happens, the reverse transcriptase, by jumping back and forth between the two different RNA templates, will generate a newly synthesized retroviral DNA sequence that is a recombinant between the two parental genomes. This recombination is most obvious when it occurs between subtypes. The closely related simian immunodeficiency virus (SIV) has evolved into many strains, classified by the natural host species. SIV strains of the African green monkey (SIVagm) and sooty mangabey (SIVsmm) are thought to have a long evolutionary history with their hosts. These hosts have adapted to the presence of the virus, which is present at high levels in the host's blood, but evokes only a mild immune response, does not cause the development of simian AIDS, and does not undergo the extensive mutation and recombination typical of HIV infection in humans. In contrast, when these strains infect species that have not adapted to SIV ("heterologous" hosts such as rhesus or cynomolgus macaques), the animals develop AIDS and the virus generates genetic diversity similar to what is seen in human HIV infection.
Chimpanzee SIV (SIVcpz), the closest genetic relative of HIV-1, is associated with increased mortality and AIDS-like symptoms in its natural host. SIVcpz appears to have been transmitted relatively recently to chimpanzee and human populations, so their hosts have not yet adapted to the virus. This virus has also lost a function of the nef gene that is present in most SIVs. For non-pathogenic SIV variants, nef suppresses T cell activation through the CD3 marker. Nef's function in non-pathogenic forms of SIV is to downregulate expression of inflammatory cytokines, MHC-1, and signals that affect T cell trafficking. In HIV-1 and SIVcpz, however, nef has lost this function and does not inhibit T-cell activation. Without this function, T cell depletion is more likely, leading to immunodeficiency. Three groups of HIV-1 have been identified on the basis of differences in the envelope (env) region: M, N, and O. Group M is the most prevalent and is subdivided into eight subtypes (or clades), based on the whole genome, which are geographically distinct. The most prevalent are subtypes B (found mainly in North America and Europe), A and D (found mainly in Africa), and C (found mainly in Africa and Asia); these subtypes form branches in the phylogenetic tree representing the lineage of the M group of HIV-1. Co-infection with distinct subtypes gives rise to circulating recombinant forms (CRFs). In 2000, the last year in which an analysis of global subtype prevalence was made, 47.2% of infections worldwide were of subtype C, 26.7% were of subtype A/CRF02_AG, 12.3% were of subtype B, 5.3% were of subtype D, 3.2% were of CRF_AE, and the remaining 5.3% were composed of other subtypes and CRFs. Most HIV-1 research is focused on subtype B; few laboratories focus on the other subtypes. The existence of a fourth group, "P", has been hypothesised based on a virus isolated in 2009. The strain is apparently derived from gorilla SIV (SIVgor), first isolated from western lowland gorillas in 2006.
HIV-2's closest relative is SIVsm, a strain of SIV found in sooty mangabeys. Since HIV-1 is derived from SIVcpz, and HIV-2 from SIVsm, the genetic sequence of HIV-2 is only partially homologous to HIV-1 and more closely resembles that of SIVsm. Diagnosis Many HIV-positive people are unaware that they are infected with the virus. For example, in 2001 less than 1% of the sexually active urban population in Africa had been tested, and this proportion is even lower in rural populations. Furthermore, in 2001 only 0.5% of pregnant women attending urban health facilities were counselled, tested, or received their test results. Again, this proportion is even lower in rural health facilities. Since donors may therefore be unaware of their infection, donor blood and blood products used in medicine and medical research are routinely screened for HIV. HIV-1 testing is initially done using an enzyme-linked immunosorbent assay (ELISA) to detect antibodies to HIV-1. Specimens with a non-reactive result from the initial ELISA are considered HIV-negative, unless new exposure to an infected partner or partner of unknown HIV status has occurred. Specimens with a reactive ELISA result are retested in duplicate. If the result of either duplicate test is reactive, the specimen is reported as repeatedly reactive and undergoes confirmatory testing with a more specific supplemental test (e.g., a polymerase chain reaction (PCR), western blot or, less commonly, an immunofluorescence assay (IFA)). Only specimens that are repeatedly reactive by ELISA and positive by IFA or PCR or reactive by western blot are considered HIV-positive and indicative of HIV infection. Specimens that are repeatedly ELISA-reactive occasionally provide an indeterminate western blot result, which may be either an incomplete antibody response to HIV in an infected person or nonspecific reactions in an uninfected person. Although IFA can be used to confirm infection in these ambiguous cases, this assay is not widely used.
In general, a second specimen should be collected more than a month later and retested for persons with indeterminate western blot results. Although much less commonly available, nucleic acid testing (e.g., viral RNA or proviral DNA amplification method) can also help diagnosis in certain situations. In addition, a few tested specimens might provide inconclusive results because of an insufficient specimen quantity. In these situations, a second specimen is collected and tested for HIV infection. Modern HIV testing is extremely accurate, when the window period is taken into consideration. A single screening test is correct more than 99% of the time. The chance of a false-positive result in a standard two-step testing protocol is estimated to be about 1 in 250,000 in a low-risk population. Testing post-exposure is recommended immediately and then at six weeks, three months, and six months. The latest recommendations of the US Centers for Disease Control and Prevention (CDC) show that HIV testing must start with an immunoassay combination test for HIV-1 and HIV-2 antibodies and p24 antigen. A negative result rules out HIV exposure, while a positive one must be followed by an HIV-1/2 antibody differentiation immunoassay to detect which antibodies are present. This gives rise to four possible scenarios:
1. HIV-1 (+) & HIV-2 (−): HIV-1 antibodies detected
2. HIV-1 (−) & HIV-2 (+): HIV-2 antibodies detected
3. HIV-1 (+) & HIV-2 (+): both HIV-1 and HIV-2 antibodies detected
4. HIV-1 (−) or indeterminate & HIV-2 (−): a nucleic acid test must be carried out to detect acute HIV-1 infection or its absence
Research HIV/AIDS research includes all medical research that attempts to prevent, treat, or cure HIV/AIDS, as well as fundamental research about the nature of HIV as an infectious agent and AIDS as the disease caused by HIV. Many governments and research institutions participate in HIV/AIDS research.
This research includes behavioral health interventions, such as research into sex education, and drug development, such as research into microbicides for sexually transmitted diseases, HIV vaccines, and anti-retroviral drugs. Other medical research areas include the topics of pre-exposure prophylaxis, post-exposure prophylaxis, circumcision and HIV, and accelerated aging effects. Treatment and transmission The management of HIV/AIDS normally includes the use of multiple antiretroviral drugs. In many parts of the world, HIV has become a chronic condition in which progression to AIDS is increasingly rare. HIV latency, and the consequent viral reservoir in CD4+ T cells, dendritic cells, as well as macrophages, is the main barrier to eradication of the virus. It is important to note that although HIV is highly virulent, transmission does not occur through sex when an HIV-positive person has a consistently undetectable viral load (<50 copies/ml) due to anti-retroviral treatment. This was first argued by the Swiss Federal Commission for AIDS/HIV in 2008 in the Swiss Statement, though the statement was controversial at the time. However, following multiple studies, it became clear that the chance of passing on HIV through sex is effectively zero where the HIV-positive person has a consistently undetectable viral load; this is known as U=U, "Undetectable=Untransmittable", also phrased as "can't pass it on". The studies demonstrating U=U are: Opposites Attract, PARTNER 1, PARTNER 2, (for male-male couples) and HPTN052 (for heterosexual couples) when "the partner living with HIV had a durably suppressed viral load." In these studies, couples where one partner was HIV positive and one partner was HIV negative were enrolled and regular HIV testing completed. 
In total from the four studies, 4097 couples were enrolled over four continents and 151,880 acts of condomless sex were reported; there were zero phylogenetically linked transmissions of HIV where the positive partner had an undetectable viral load. Following this, the U=U consensus statement advocating the use of "zero risk" was signed by hundreds of individuals and organisations, including the US CDC, British HIV Association and The Lancet medical journal. The importance of the final results of the PARTNER 2 study was described by the medical director of the Terrence Higgins Trust as "impossible to overstate", while lead author Alison Rodger declared that the message that "undetectable viral load makes HIV untransmittable ... can help end the HIV pandemic by preventing HIV transmission". The authors summarised their findings in The Lancet. This result is consistent with the conclusion presented by Anthony S. Fauci, the Director of the National Institute of Allergy and Infectious Diseases for the U.S. National Institutes of Health, and his team in a viewpoint published in the Journal of the American Medical Association, that U=U is an effective HIV prevention method when an undetectable viral load is maintained. Genital herpes (HSV-2) reactivation in those infected with the virus is associated with an increase in CCR-5-enriched CD4+ T cells as well as inflammatory dendritic cells in the submucosa of the genital skin. Tropism of HIV for CCR-5-positive cells explains the two- to threefold increase in HIV acquisition among persons with genital herpes. Daily antiviral (e.g. acyclovir) medication does not reduce the sub-clinical post-reactivation inflammation and therefore does not confer reduced risk of HIV acquisition. History Discovery The first news story on "an exotic new disease" appeared May 18, 1981, in the gay newspaper New York Native. AIDS was first clinically observed in 1981 in the United States.
The initial cases were a cluster of injection drug users and gay men with no known cause of impaired immunity who showed symptoms of Pneumocystis pneumonia (PCP or PJP, the latter term recognizing that the causative agent is now called Pneumocystis jirovecii), a rare opportunistic infection that was known to occur in people with very compromised immune systems. Soon thereafter, researchers at the NYU School of Medicine studied gay men developing a previously rare skin cancer called Kaposi's sarcoma (KS). Many more cases of PJP and KS emerged, alerting the U.S. Centers for Disease Control and Prevention (CDC), and a CDC task force was formed to monitor the outbreak. The earliest retrospectively described case of AIDS is believed to have been in Norway beginning in 1966. In the beginning, the CDC did not have an official name for the disease, often referring to it by way of the diseases that were associated with it, for example, lymphadenopathy, the disease after which the discoverers of HIV originally named the virus. They also used the name Kaposi's sarcoma and opportunistic infections, after the task force that had been set up in 1981. In the general press, the term GRID, which stood for gay-related immune deficiency, had been coined. The CDC, in search of a name and looking at the infected communities, coined "the 4H disease", as it seemed to single out homosexuals, heroin users, hemophiliacs, and Haitians. However, after determining that AIDS was not isolated to the gay community, it was realized that the term GRID was misleading and AIDS was introduced at a meeting in July 1982. By September 1982 the CDC started using the name AIDS. In 1983, two separate research groups led by American Robert Gallo and French investigator Luc Montagnier independently declared that a novel retrovirus may have been infecting AIDS patients, and published their findings in the same issue of the journal Science.
Gallo claimed that a virus his group had isolated from a person with AIDS was strikingly similar in shape to other human T-lymphotropic viruses (HTLVs) his group had been the first to isolate. Gallo admitted in 1987 that the virus he claimed to have discovered in 1984 was in reality a virus sent to him from France the year before. Gallo's group called their newly isolated virus HTLV-III. Montagnier's group isolated a virus from a patient presenting with swelling of the lymph nodes of the neck and physical weakness, two classic symptoms of primary HIV infection. Contradicting the report from Gallo's group, Montagnier and his colleagues showed that core proteins of this virus were immunologically different from those of HTLV-I. Montagnier's group named their isolated virus lymphadenopathy-associated virus (LAV). As these two viruses turned out to be the same, in 1986 LAV and HTLV-III were renamed HIV. Another group working contemporaneously with the Montagnier and Gallo groups was that of Dr. Jay A. Levy at the University of California, San Francisco. He independently discovered the AIDS virus in 1983 and named it the AIDS associated retrovirus (ARV). This virus was very different from the virus reported by the Montagnier and Gallo groups. The ARV strains indicated, for the first time, the heterogeneity of HIV isolates and several of these remain classic examples of the AIDS virus found in the United States. Origins Both HIV-1 and HIV-2 are believed to have originated in non-human primates in West-central Africa, and are believed to have transferred to humans (a process known as zoonosis) in the early 20th century. HIV-1 appears to have originated in southern Cameroon through the evolution of SIVcpz, a simian immunodeficiency virus (SIV) that infects wild chimpanzees (HIV-1 descends from the SIVcpz endemic in the chimpanzee subspecies Pan troglodytes troglodytes). 
The closest relative of HIV-2 is SIVsmm, a virus of the sooty mangabey (Cercocebus atys atys), an Old World monkey living in littoral West Africa (from southern Senegal to western Côte d'Ivoire). New World monkeys such as the owl monkey are resistant to HIV-1 infection, possibly because of a genomic fusion of two viral resistance genes. HIV-1 is thought to have jumped the species barrier on at least three separate occasions, giving rise to the three groups of the virus, M, N, and O. There is evidence that humans who participate in bushmeat activities, either as hunters or as bushmeat vendors, commonly acquire SIV. However, SIV is a weak virus, and it is typically suppressed by the human immune system within weeks of infection. It is thought that several transmissions of the virus from individual to individual in quick succession are necessary to allow it enough time to mutate into HIV. Furthermore, due to its relatively low person-to-person transmission rate, it can only spread throughout the population in the presence of one or more high-risk transmission channels, which are thought to have been absent in Africa prior to the 20th century. Specific proposed high-risk transmission channels, allowing the virus to adapt to humans and spread throughout the society, depend on the proposed timing of the animal-to-human crossing. Genetic studies of the virus suggest that the most recent common ancestor of the HIV-1 M group dates back to circa 1910. Proponents of this dating link the HIV epidemic with the emergence of colonialism and growth of large colonial African cities, leading to social changes, including different patterns of sexual contact (especially multiple, concurrent partnerships), the spread of prostitution, and the concomitant high frequency of genital ulcer diseases (such as syphilis) in nascent colonial cities. 
While transmission rates of HIV during vaginal intercourse are typically low, they are increased manyfold if one of the partners suffers from a sexually transmitted infection resulting in genital ulcers. Early 1900s colonial cities were notable for their high prevalence of prostitution and genital ulcers, to the degree that, as of 1928, as many as 45% of female residents of eastern Leopoldville (currently Kinshasa) were thought to have been prostitutes, and, as of 1933, around 15% of all residents of the same city were infected by one of the forms of syphilis. The earliest well-documented case of HIV in a human dates back to 1959 in the Belgian Congo. The virus may have been present in the United States as early as the mid- to late 1960s, as a sixteen-year-old male named Robert Rayford presented with symptoms in 1966 and died in 1969. An alternative and likely complementary hypothesis points to the widespread use of unsafe medical practices in Africa during the years following World War II, such as the unsterile reuse of single-use syringes during mass vaccination, antibiotic, and anti-malaria treatment campaigns. Research on the timing of the most recent common ancestor for HIV-1 groups M and O, as well as for HIV-2 groups A and B, indicates that SIV gave rise to transmissible HIV lineages throughout the twentieth century. The dispersed timing of these transmissions to humans implies that no single external factor is needed to explain the cross-species transmission of HIV. This observation is consistent with both of the two prevailing views of the origin of the HIV epidemics, namely SIV transmission to humans during the slaughter or butchering of infected primates, and the colonial expansion of sub-Saharan African cities. 
See also
Antiviral drug
Discovery and development of HIV-protease inhibitors
HIV/AIDS denialism
World AIDS Day
14173
https://en.wikipedia.org/wiki/HOL
HOL
Hol or HOL may refer to:

People
Hol (surname)
K'inich Popol Hol, 5th-century Mayan king

Places
Hol, a municipality in Buskerud county, Norway
Old Hol Church
Hol, Tjeldsund
Hol, Nordland, Lofoten
Hol Church (Nordland)
Hol, Ludhiana, a village in India

Stations
Hollywood station (Florida) (station code: HOL), USA
Holmesglen railway station (station code: HOL), Malvern East, Melbourne, Victoria, Australia
Holsworthy railway station, Sydney (station code: HOL), NSW, Australia
Holton Heath railway station (station code: HOL), England, UK

Science and technology
HOL (proof assistant), theorem proving systems
Head-of-line blocking in computer networking
Higher-order logic, a branch of symbolic logic
Holonomy group in differential geometry

Sports and games
Hol (role-playing game)
Hol IL, a sports club in Buskerud county
Hollingworth Lake Rowing Club (prefix code: HOL)
Holdsworth (cycling team) (UCI code HOL)
HOL, pre-1992 code for Netherlands at the Olympics

Other uses
Hands On Learning Australia, a charity
Hellas On-Line, a Greek Internet service provider
Holiday Airlines (US airline) (ICAO airline code HOL)
Holu language (ISO 639 code hol)
tlhIngan Hol, fictional Klingon language

See also
Holl
h0i
HOI (disambiguation)
HO-1 (disambiguation), including HO1
H1 (disambiguation), including H01
14174
https://en.wikipedia.org/wiki/Hostile%20witness
Hostile witness
A hostile witness, also known as an adverse witness or an unfavorable witness, is a witness at trial whose testimony on direct examination is either openly antagonistic or appears to be contrary to the legal position of the party who called the witness. This concept is used in legal proceedings in the United States, and analogues of it exist in other legal systems in Western countries. Process During direct examination, if the examining attorney who called the witness finds that their testimony is antagonistic or contrary to the legal position of their client, the attorney may request that the judge declare the witness "hostile". If the request is granted, the attorney may proceed to ask the witness leading questions. Leading questions either suggest the answer ("You saw my client sign the contract, correct?") or challenge (impeach) the witness's testimony. As a rule, leading questions are generally allowed only during cross-examination, but a hostile witness is an exception to this rule. In cross-examination conducted by the opposing party's attorney, a witness is presumed to be hostile, and the examining attorney is not required to seek the judge's permission before asking leading questions. Attorneys can influence a hostile witness's responses by using Gestalt psychology to influence the way the witness perceives the situation, and utility theory to understand their likely responses. The attorney will integrate a hostile witness's expected responses into the larger case strategy through pretrial planning and through adapting as necessary during the course of the trial. Jurisdiction Australia In the state of New South Wales, the term 'unfavourable witness' is defined by section 38 of the Evidence Act, which permits the prosecution to cross-examine their own witness. 
For example, if the prosecution calls all material witnesses relevant to a case before the court, and any evidence given is not favourable to, or supports the prosecution case, or a witness has given a prior inconsistent statement, then the prosecution may seek leave of the court, via section 192, to test the witness in relation to their evidence. New Zealand In New Zealand, section 94 of the Evidence Act 2006 permits a party to cross-examine their own witness if the presiding judge determines the witness to be hostile and gives permission.

External links
Federal Rules of Evidence – Rule 611: Mode and Order of Interrogation and Presentation
14179
https://en.wikipedia.org/wiki/Henry%20I%20of%20England
Henry I of England
Henry I (c. 1068 – 1 December 1135), also known as Henry Beauclerc, was King of England from 1100 to his death in 1135. He was the fourth son of William the Conqueror and was educated in Latin and the liberal arts. On William's death in 1087, Henry's elder brothers Robert Curthose and William Rufus inherited Normandy and England, respectively, but Henry was left landless. He purchased the County of Cotentin in western Normandy from Robert, but his brothers deposed him in 1091. He gradually rebuilt his power base in the Cotentin and allied himself with William against Robert. Present at the place where his brother William died in a hunting accident in 1100, Henry seized the English throne, promising at his coronation to correct many of William's less popular policies. He married Matilda of Scotland and they had two surviving children, William Adelin and Empress Matilda; he also had many illegitimate children by his many mistresses. Robert, who invaded in 1101, disputed Henry's control of England; this military campaign ended in a negotiated settlement that confirmed Henry as king. The peace was short-lived, and Henry invaded the Duchy of Normandy in 1105 and 1106, finally defeating Robert at the Battle of Tinchebray. Henry kept Robert imprisoned for the rest of his life. Henry's control of Normandy was challenged by Louis VI of France, Baldwin VII of Flanders and Fulk V of Anjou, who promoted the rival claims of Robert's son, William Clito, and supported a major rebellion in the Duchy between 1116 and 1119. Following Henry's victory at the Battle of Brémule, a favourable peace settlement was agreed with Louis in 1120. Considered by contemporaries to be a harsh but effective ruler, Henry skillfully manipulated the barons in England and Normandy. In England, he drew on the existing Anglo-Saxon system of justice, local government and taxation, but also strengthened it with additional institutions, including the royal exchequer and itinerant justices. 
Normandy was also governed through a growing system of justices and an exchequer. Many of the officials who ran Henry's system were "new men" of obscure backgrounds, rather than from families of high status, who rose through the ranks as administrators. Henry encouraged ecclesiastical reform, but became embroiled in a serious dispute in 1101 with Archbishop Anselm of Canterbury, which was resolved through a compromise solution in 1105. He supported the Cluniac order and played a major role in the selection of the senior clergy in England and Normandy. Henry's son William drowned in the White Ship disaster of 1120, throwing the royal succession into doubt. Henry took a second wife, Adeliza of Louvain, in the hope of having another son, but their marriage was childless. In response to this, he declared his daughter Matilda his heir and married her to Geoffrey of Anjou. The relationship between Henry and the couple became strained, and fighting broke out along the border with Anjou. Henry died on 1 December 1135 after a week of illness. Despite his plans for Matilda, the king was succeeded by his nephew Stephen of Blois, resulting in a period of civil war known as the Anarchy. Early life, 1068–1099 Childhood and appearance, 1068–86 Henry was probably born in England in 1068, in either the summer or the last weeks of the year, possibly in the town of Selby in Yorkshire. His father was William the Conqueror, the Duke of Normandy who had invaded England in 1066 to become the king of England, establishing lands stretching into Wales. The invasion had created an Anglo-Norman ruling class, many with estates on both sides of the English Channel. These Anglo-Norman barons typically had close links to the Kingdom of France, which was then a loose collection of counties and smaller polities, under only the nominal control of the king. 
Henry's mother, Matilda of Flanders, was the granddaughter of Robert II of France, and she probably named Henry after her uncle, King Henry I of France. Henry was the youngest of William and Matilda's four sons. Physically he resembled his older brothers Robert Curthose, Richard and William Rufus, being, as historian David Carpenter describes, "short, stocky and barrel-chested," with black hair. As a result of their age differences and Richard's early death, Henry would have probably seen relatively little of his older brothers. He probably knew his sister Adela well, as the two were close in age. There is little documentary evidence for his early years; historians Warren Hollister and Kathleen Thompson suggest he was brought up predominantly in England, while Judith Green argues he was initially brought up in the Duchy. He was probably educated by the Church, possibly by Bishop Osmund, the King's chancellor, at Salisbury Cathedral; it is uncertain if this indicated an intent by his parents for Henry to become a member of the clergy. It is also uncertain how far Henry's education extended, but he was probably able to read Latin and had some background in the liberal arts. He was given military training by an instructor called Robert Achard, and Henry was knighted by his father on 24 May 1086. Inheritance, 1087–88 In 1087, William was fatally injured during a campaign in the Vexin. Henry joined his dying father near Rouen in September, where the King partitioned his possessions among his sons. The rules of succession in western Europe at the time were uncertain; in some parts of France, primogeniture, in which the eldest son would inherit a title, was growing in popularity. In other parts of Europe, including Normandy and England, the tradition was for lands to be divided, with the eldest son taking patrimonial lands – usually considered to be the most valuable – and younger sons given smaller, or more recently acquired, partitions or estates. 
In dividing his lands, William appears to have followed the Norman tradition, distinguishing between Normandy, which he had inherited, and England, which he had acquired through war. William's second son, Richard, had died in a hunting accident, leaving Henry and his two brothers to inherit William's estate. Robert, the eldest, despite being in armed rebellion against his father at the time of his death, received Normandy. England was given to William Rufus, who was in favour with the dying king. Henry was given a large sum of money, usually reported as £5,000, with the expectation that he would also be given his mother's modest set of lands in Buckinghamshire and Gloucestershire. William's funeral at Caen was marred by angry complaints from a local man, and Henry may have been responsible for resolving the dispute by buying off the protester with silver. Robert returned to Normandy, expecting to have been given both the Duchy and England, to find that William Rufus had crossed the Channel and been crowned king. The two brothers disagreed fundamentally over the inheritance, and Robert soon began to plan an invasion of England to seize the kingdom, helped by a rebellion by some of the leading nobles against William Rufus. Henry remained in Normandy and took up a role within Robert's court, possibly either because he was unwilling to side openly with William Rufus, or because Robert might have taken the opportunity to confiscate Henry's inherited money if he had tried to leave. William Rufus sequestered Henry's new estates in England, leaving Henry landless. In 1088, Robert's plans for the invasion of England began to falter, and he turned to Henry, proposing that his brother lend him some of his inheritance, which Henry refused. Henry and Robert then came to an alternative arrangement, in which Robert would make Henry the count of western Normandy, in exchange for £3,000. 
Henry's lands were a new countship created by a delegation of the ducal authority in the Cotentin, but it extended across the Avranchin, with control over the bishoprics of both. This also gave Henry influence over two major Norman leaders, Hugh d'Avranches and Richard de Redvers, and the abbey of Mont Saint-Michel, whose lands spread out further across the Duchy. Robert's invasion force failed to leave Normandy, leaving William Rufus secure in England. Count of the Cotentin, 1088–90 Henry quickly established himself as count, building up a network of followers from western Normandy and eastern Brittany, whom historian John Le Patourel has characterised as "Henry's gang". His early supporters included Roger of Mandeville, Richard of Redvers, Richard d'Avranches and Robert Fitzhamon, along with the churchman Roger of Salisbury. Robert attempted to go back on his deal with Henry and re-appropriate the county, but Henry's grip was already sufficiently firm to prevent this. Robert's rule of the duchy was chaotic, and parts of Henry's lands became almost independent of central control from Rouen. During this period, neither William nor Robert seems to have trusted Henry. Waiting until the rebellion against William Rufus was safely over, Henry returned to England in July 1088. He met with the King but was unable to persuade him to grant him their mother's estates, and travelled back to Normandy in the autumn. While he had been away, however, Odo, Bishop of Bayeux, who regarded Henry as a potential competitor, had convinced Robert that Henry was conspiring against the duke with William Rufus. On landing, Odo seized Henry and imprisoned him in Neuilly-la-Forêt, and Robert took back the county of the Cotentin. Henry was held there over the winter, but in the spring of 1089 the senior elements of the Normandy nobility prevailed upon Robert to release him. Although no longer formally the Count of Cotentin, Henry continued to control the west of Normandy. 
The struggle between his brothers continued. William Rufus continued to put down resistance to his rule in England, but began to build a number of alliances against Robert with barons in Normandy and neighbouring Ponthieu. Robert allied himself with Philip I of France. In late 1090 William Rufus encouraged Conan Pilatus, a powerful burgher in Rouen, to rebel against Robert; Conan was supported by most of Rouen and made appeals to the neighbouring ducal garrisons to switch allegiance as well. Robert issued an appeal for help to his barons, and Henry was the first to arrive in Rouen in November. Violence broke out, leading to savage, confused street fighting as both sides attempted to take control of the city. Robert and Henry left the castle to join the battle, but Robert then retreated, leaving Henry to continue the fighting. The battle turned in favour of the ducal forces and Henry took Conan prisoner. Henry was angry that Conan had turned against his feudal lord. He had him taken to the top of Rouen Castle and then, despite Conan's offers to pay a huge ransom, threw him off the top of the castle to his death. Contemporaries considered Henry to have acted appropriately in making an example of Conan, and Henry became famous for his exploits in the battle. Fall and rise, 1091–99 In the aftermath, Robert forced Henry to leave Rouen, probably because Henry's role in the fighting had been more prominent than his own, and possibly because Henry had asked to be formally reinstated as the count of the Cotentin. In early 1091, William Rufus invaded Normandy with a sufficiently large army to bring Robert to the negotiating table. The two brothers signed a treaty at Rouen, granting William Rufus a range of lands and castles in Normandy. In return, William Rufus promised to support Robert's attempts to regain control of the neighbouring county of Maine, once under Norman control, and help in regaining control over the duchy, including Henry's lands. 
They nominated each other as heirs to England and Normandy, excluding Henry from any succession while either one of them lived. War now broke out between Henry and his brothers. Henry mobilised a mercenary army in the west of Normandy, but as William Rufus and Robert's forces advanced, his network of baronial support melted away. Henry focused his remaining forces at Mont Saint-Michel, where he was besieged, probably in March 1091. The site was easy to defend, but lacked fresh water. The chronicler William of Malmesbury suggested that when Henry's water ran short, Robert allowed his brother fresh supplies, leading to remonstrations between Robert and William Rufus. The events of the final days of the siege are unclear: the besiegers had begun to argue about the future strategy for the campaign, but Henry then abandoned Mont Saint-Michel, probably as part of a negotiated surrender. He left for Brittany and crossed over into France. Henry's next steps are not well documented; one chronicler, Orderic Vitalis, suggests that he travelled in the French Vexin, along the Normandy border, for over a year with a small band of followers. By the end of the year, Robert and William Rufus had fallen out once again, and the Treaty of Rouen had been abandoned. In 1092, Henry and his followers seized the Normandy town of Domfront. Domfront had previously been controlled by Robert of Bellême, but the inhabitants disliked his rule and invited Henry to take over the town, which he did in a bloodless coup. Over the next two years, Henry re-established his network of supporters across western Normandy, forming what Judith Green terms a "court in waiting". By 1094, he was allocating lands and castles to his followers as if he were the Duke of Normandy. William Rufus began to support Henry with money, encouraging his campaign against Robert, and Henry used some of this to construct a substantial castle at Domfront. 
William Rufus crossed into Normandy to take the war to Robert in 1094, and when progress stalled, called upon Henry for assistance. Henry responded, but travelled to London instead of joining the main campaign further east in Normandy, possibly at the request of the King, who in any event abandoned the campaign and returned to England. Over the next few years, Henry appears to have strengthened his power base in western Normandy, visiting England occasionally to attend at William Rufus's court. In 1095 Pope Urban II called the First Crusade, encouraging knights from across Europe to join. Robert joined the Crusade, borrowing money from William Rufus to do so, and granting the King temporary custody of his part of the Duchy in exchange. The King appeared confident of regaining the remainder of Normandy from Robert, and Henry appeared ever closer to William Rufus. They campaigned together in the Norman Vexin between 1097 and 1098. Early reign, 1100–06 Taking the throne, 1100 On the afternoon of 2 August 1100, King William went hunting in the New Forest, accompanied by a team of huntsmen and a number of the Norman nobility, including Henry. An arrow, possibly shot by the baron Walter Tirel, hit and killed William Rufus. Numerous conspiracy theories have been put forward suggesting that the King was killed deliberately; most modern historians reject these, as hunting was a risky activity, and such accidents were common. Chaos broke out, and Tirel fled the scene for France, either because he had shot the fatal arrow, or because he had been incorrectly accused and feared that he would be made a scapegoat for the King's death. Henry rode to Winchester, where an argument ensued as to who now had the best claim to the throne. William of Breteuil championed the rights of Robert, who was still abroad, returning from the Crusade, and to whom Henry and the barons had given homage in previous years. 
Henry argued that, unlike Robert, he had been born to a reigning king and queen, thereby giving him a claim under the right of porphyrogeniture. Tempers flared, but Henry, supported by Henry de Beaumont and Robert of Meulan, held sway and persuaded the barons to follow him. He occupied Winchester Castle and seized the royal treasury. Henry was hastily crowned king in Westminster Abbey on 5 August by Maurice, the bishop of London, as Anselm, the archbishop of Canterbury, had been exiled by William Rufus, and Thomas, the archbishop of York, was in the north of England at Ripon. In accordance with English tradition and in a bid to legitimise his rule, Henry issued a coronation charter laying out various commitments. The new king presented himself as having restored order to a trouble-torn country. He announced that he would abandon William Rufus's policies towards the Church, which had been seen as oppressive by the clergy; he promised to prevent royal abuses of the barons' property rights, and assured a return to the gentler customs of Edward the Confessor; he asserted that he would "establish a firm peace" across England and ordered "that this peace shall henceforth be kept". In addition to his existing circle of supporters, many of whom were richly rewarded with new lands, Henry quickly co-opted many of the existing administration into his new royal household. William Giffard, William Rufus's chancellor, was made the bishop of Winchester, and the prominent sheriffs Urse d'Abetot, Haimo Dapifer and Robert Fitzhamon continued to play a senior role in government. By contrast, the unpopular Ranulf Flambard, the bishop of Durham and a key member of the previous regime, was imprisoned in the Tower of London and charged with corruption. The late king had left many Church positions unfilled, and Henry set about nominating candidates to these, in an effort to build further support for his new government. 
The appointments needed to be consecrated, and Henry wrote to Anselm, apologising for having been crowned while the archbishop was still in France and asking him to return at once. Marriage to Matilda, 1100 On 11 November 1100 Henry married Matilda, the daughter of Malcolm III of Scotland, in Westminster Abbey. Henry was now around 31 years old, but late marriages for noblemen were not unusual in the 11th century. The pair had probably first met earlier the previous decade, possibly being introduced through Bishop Osmund of Salisbury. Historian Warren Hollister argues that Henry and Matilda were emotionally close, but their union was also certainly politically motivated. Matilda had originally been named Edith, an Anglo-Saxon name, and was a member of the West Saxon royal family, being the niece of Edgar the Ætheling, the great-granddaughter of Edmund Ironside and a descendant of Alfred the Great. For Henry, marrying Matilda gave his reign increased legitimacy, and for Matilda, an ambitious woman, it was an opportunity for high status and power in England. Matilda had been educated in a sequence of convents, however, and may well have taken the vows to formally become a nun, which formed an obstacle to the marriage progressing. She did not wish to be a nun and appealed to Anselm for permission to marry Henry, and the Archbishop established a council at Lambeth Palace to judge the issue. Despite some dissenting voices, the council concluded that although Matilda had lived in a convent, she had not actually become a nun and was therefore free to marry, a judgement that Anselm then affirmed, allowing the marriage to proceed. Matilda proved an effective queen for Henry, acting as a regent in England on occasion, addressing and presiding over councils, and extensively supporting the arts. The couple soon had two children, Matilda, born in 1102, and William Adelin, born in 1103; it is possible that they also had a second son, Richard, who died young. 
Following the birth of these children, Matilda preferred to remain based in Westminster while Henry travelled across England and Normandy, either for religious reasons or because she enjoyed being involved in the machinery of royal governance. Henry had a considerable sexual appetite and enjoyed a substantial number of sexual partners, resulting in many illegitimate children, at least nine sons and 13 daughters, many of whom he appears to have recognised and supported. It was normal for unmarried Anglo-Norman noblemen to have sexual relations with prostitutes and local women, and kings were also expected to have mistresses. Some of these relationships occurred before Henry was married, but many others took place after his marriage to Matilda. Henry had a wide range of mistresses from a range of backgrounds, and the relationships appear to have been conducted relatively openly. He may have chosen some of his noble mistresses for political purposes, but the evidence to support this theory is limited. Treaty of Alton, 1101–02 By early 1101, Henry's new regime was established and functioning, but many of the Anglo-Norman elite still supported his brother Robert, or would be prepared to switch sides if Robert appeared likely to gain power in England. In February, Flambard escaped from the Tower of London and crossed the Channel to Normandy, where he injected fresh direction and energy to Robert's attempts to mobilise an invasion force. By July, Robert had formed an army and a fleet, ready to move against Henry in England. Raising the stakes in the conflict, Henry seized Flambard's lands and, with the support of Anselm, Flambard was removed from his position as bishop. The King held court in April and June, where the nobility renewed their oaths of allegiance to him, but their support still appeared partial and shaky. 
With the invasion imminent, Henry mobilised his forces and fleet outside Pevensey, close to Robert's anticipated landing site, training some of them personally in how to counter cavalry charges. Despite English levies and knights owing military service to the Church arriving in considerable numbers, many of his barons did not appear. Anselm intervened with some of the doubters, emphasising the religious importance of their loyalty to Henry. Robert unexpectedly landed further up the coast at Portsmouth on 20 July with a modest force of a few hundred men, but these were quickly joined by many of the barons in England. However, instead of marching into nearby Winchester and seizing Henry's treasury, Robert paused, giving Henry time to march west and intercept the invasion force. The two armies met at Alton, Hampshire, where peace negotiations began, possibly initiated by either Henry or Robert, and probably supported by Flambard. The brothers then agreed to the Treaty of Alton, under which Robert released Henry from his oath of homage and recognised him as king; Henry renounced his claims on western Normandy, except for Domfront, and agreed to pay Robert £2,000 a year for life; if either brother died without a male heir, the other would inherit his lands; the barons whose lands had been seized by either the King or the Duke for supporting his rival would have them returned, and Flambard would be reinstated as bishop; the two brothers would campaign together to defend their territories in Normandy. Robert remained in England for a few months more with Henry before returning to Normandy. Despite the treaty, Henry set about inflicting severe penalties on the barons who had stood against him during the invasion. William de Warenne, the Earl of Surrey, was accused of fresh crimes, which were not covered by the Alton amnesty, and was banished from England. 
In 1102 Henry then turned against Robert of Bellême and his brothers, the most powerful of the barons, accusing him of 45 different offences. Robert escaped and took up arms against Henry. Henry besieged Robert's castles at Arundel, Tickhill and Shrewsbury, pushing down into the south-west to attack Bridgnorth. His power base in England broken, Robert accepted Henry's offer of banishment and left the country for Normandy. Conquest of Normandy, 1103–06 Henry's network of allies in Normandy became stronger during 1103. He arranged the marriages of his illegitimate daughters, Juliane and Matilda, to Eustace of Breteuil and Rotrou III, Count of Perche, respectively, the latter union securing the Norman border. Henry attempted to win over other members of the Norman nobility and gave other English estates and lucrative offers to key Norman lords. Duke Robert continued to fight Robert of Bellême, but the Duke's position worsened, until by 1104, he had to ally himself formally with Bellême to survive. Arguing that the Duke had broken the terms of their treaty, the King crossed over the Channel to Domfront, where he met with senior barons from across Normandy, eager to ally themselves with him. He confronted the Duke and accused him of siding with his enemies, before returning to England. Normandy continued to disintegrate into chaos. In 1105, Henry sent his friend Robert Fitzhamon and a force of knights into the Duchy, apparently to provoke a confrontation with Duke Robert. Fitzhamon was captured, and Henry used this as an excuse to invade, promising to restore peace and order. Henry had the support of most of the neighbouring counts around Normandy's borders, and King Philip of France was persuaded to remain neutral. Henry occupied western Normandy, and advanced east on Bayeux, where Fitzhamon was held. The city refused to surrender, and Henry besieged it, burning it to the ground. 
Terrified of meeting the same fate, the town of Caen switched sides and surrendered, allowing Henry to advance on Falaise, Calvados, which he took with some casualties. His campaign stalled, and the King instead began peace discussions with Robert. The negotiations were inconclusive and the fighting dragged on until Christmas, when Henry returned to England. Henry invaded again in July 1106, hoping to provoke a decisive battle. After some initial tactical successes, he turned south-west towards the castle of Tinchebray. He besieged the castle and Duke Robert, supported by Robert of Bellême, advanced from Falaise to relieve it. After attempts at negotiation failed, the Battle of Tinchebray took place, probably on 28 September. The battle lasted around an hour, and began with a charge by Duke Robert's cavalry; the infantry and dismounted knights of both sides then joined the battle. Henry's reserves, led by Elias I, Count of Maine, and Alan IV, Duke of Brittany, attacked the enemy's flanks, routing first Bellême's troops and then the bulk of the ducal forces. Duke Robert was taken prisoner, but Bellême escaped. Henry mopped up the remaining resistance in Normandy, and Duke Robert ordered his last garrisons to surrender. Reaching Rouen, Henry reaffirmed the laws and customs of Normandy and took homage from the leading barons and citizens. The lesser prisoners taken at Tinchebray were released, but the Duke and several other leading nobles were imprisoned indefinitely. The Duke's son, William Clito, was only three years old and was released to the care of Helias of Saint-Saens, a Norman baron. Henry reconciled himself with Robert of Bellême, who gave up the ducal lands he had seized and rejoined the royal court. Henry had no way of legally removing the Duchy from his brother, and initially Henry avoided using the title "duke" at all, emphasising that, as the king of England, he was only acting as the guardian of the troubled Duchy. 
Government, family and household Government, law and court Henry inherited the kingdom of England from William Rufus, giving him a claim of suzerainty over Wales and Scotland, and acquired the Duchy of Normandy, a complex entity with troubled borders. The borders between England and Scotland were still uncertain during Henry's reign, with Anglo-Norman influence pushing northwards through Cumbria, but his relationship with King David I of Scotland was generally good, partially due to Henry's marriage to David's sister. In Wales, Henry used his power to coerce and charm the indigenous Welsh princes, while Norman Marcher Lords pushed across the valleys of South Wales. Normandy was controlled via various interlocking networks of ducal, ecclesiastical and family contacts, backed by a growing string of important ducal castles along the borders. Alliances and relationships with neighbouring counties along the Norman border were particularly important to maintaining the stability of the Duchy. Henry ruled through the various barons and lords in England and Normandy, whom he manipulated skillfully for political effect. Political friendships, termed amicitia in Latin, were important during the 12th century, and Henry maintained a wide range of these, mediating between his friends in various factions across his realm when necessary, and rewarding those who were loyal to him. He also had a reputation for punishing those barons who stood against him, and he maintained an effective network of informers and spies who reported to him on events. Henry was a harsh, firm ruler, but not excessively so by the standards of the day. Over time, he increased the degree of his control over the barons, removing his enemies and bolstering his friends until the "reconstructed baronage", as historian Warren Hollister describes it, was predominantly loyal and dependent on the King. Henry's itinerant royal court comprised various parts. 
At the heart was his domestic household, called the domus; a wider grouping was termed the familia regis, and formal gatherings of the court were termed curia. The domus was divided into several parts. The chapel, headed by the chancellor, looked after the royal documents; the chamber dealt with financial affairs; and the master-marshal was responsible for travel and accommodation. The familia regis included Henry's mounted household troops, up to several hundred strong, who came from a wider range of social backgrounds, and could be deployed across England and Normandy as required. Initially Henry continued his father's practice of regular crown-wearing ceremonies at his curia, but they became less frequent as the years passed. Henry's court was grand and ostentatious, financing the construction of large new buildings and castles with a range of precious gifts on display, including his private menagerie of exotic animals, which he kept at Woodstock Palace. Despite being a lively community, Henry's court was more tightly controlled than those of previous kings. Strict rules controlled personal behaviour and prohibited members of the court from pillaging neighbouring villages, as had been the norm under William Rufus. Henry was responsible for a substantial expansion of the royal justice system. In England, Henry drew on the existing Anglo-Saxon system of justice, local government and taxes, but strengthened it with additional central governmental institutions. Roger of Salisbury began to develop the royal exchequer after 1110, using it to collect and audit revenues from the King's sheriffs in the shires. Itinerant justices began to emerge under Henry, travelling around the country managing eyre courts, and many more laws were formally recorded. Henry gathered increasing revenue from the expansion of royal justice, both from fines and from fees. The first Pipe Roll that is known to have survived dates from 1130, recording royal expenditures. 
Henry reformed the coinage in 1107, 1108 and 1125, inflicting harsh corporal punishments on English coiners found guilty of debasing the currency. In Normandy, he restored law and order after 1106, operating through a body of Norman justices and an exchequer system similar to that in England. Norman institutions grew in scale and scope under Henry, although less quickly than in England. Many of the officials who ran Henry's system were termed "new men", relatively low-born individuals who rose through the ranks as administrators, managing justice or the royal revenues. Relations with the Church Church and the King Henry's ability to govern was intimately bound up with the Church, which formed the key to the administration of both England and Normandy, and this relationship changed considerably over the course of his reign. William the Conqueror had reformed the English Church with the support of his Archbishop of Canterbury, Lanfranc, who became a close colleague and advisor to the King. Under William Rufus this arrangement had collapsed, the King and Archbishop Anselm had become estranged and Anselm had gone into exile. Henry also believed in Church reform, but on taking power in England he became embroiled in the investiture controversy. The argument concerned who should invest a new bishop with his staff and ring: traditionally, this had been carried out by the King in a symbolic demonstration of royal power, but Pope Urban II had condemned this practice in 1099, arguing that only the papacy could carry out this task, and declaring that the clergy should not give homage to their local temporal rulers. Anselm returned to England from exile in 1100 having heard Urban's pronouncement, and informed Henry that he would be complying with the Pope's wishes. Henry was in a difficult position. On one hand, the symbolism and homage was important to him; on the other hand, he needed Anselm's support in his struggle with his brother Duke Robert. 
Anselm stuck firmly to the letter of the papal decree, despite Henry's attempts to persuade him to give way in return for a vague assurance of a future royal compromise. Matters escalated, with Anselm going back into exile and Henry confiscating the revenues of his estates. Anselm threatened excommunication, and in July 1105 the two men finally negotiated a solution. A distinction was drawn between the secular and ecclesiastical powers of the prelates, under which Henry gave up his right to invest his clergy, but retained the custom of requiring them to come and do homage for the temporalities, the landed properties they held in England. Despite this argument, the pair worked closely together, combining to deal with Duke Robert's invasion of 1101, for example, and holding major reforming councils in 1102 and 1108. A long-running dispute between the Archbishops of Canterbury and York flared up under Anselm's successor, Ralph d'Escures. Canterbury, traditionally the senior of the two establishments, had long argued that the Archbishop of York should formally promise to obey their Archbishop, but York argued that the two episcopates were independent within the English Church and that no such promise was necessary. Henry supported the primacy of Canterbury, to ensure that England remained under a single ecclesiastical administration, but the Pope preferred the case of York. The matter was complicated by Henry's personal friendship with Thurstan, the Archbishop of York, and the King's desire that the case should not end up in a papal court, beyond royal control. Henry needed the support of the Papacy in his struggle with Louis of France, however, and therefore allowed Thurstan to attend the Council of Rheims in 1119, where Thurstan was then consecrated by the Pope with no mention of any duty towards Canterbury. 
Henry believed that this went against assurances Thurstan had previously made and exiled him from England until the King and Archbishop came to a negotiated solution the following year. Even after the investiture dispute, Henry continued to play a major role in the selection of new English and Norman bishops and archbishops. He appointed many of his officials to bishoprics and, as historian Martin Brett suggests, "some of his officers could look forward to a mitre with all but absolute confidence". Henry's chancellors, and those of his queens, became bishops of Durham, Hereford, London, Lincoln, Winchester and Salisbury. Henry increasingly drew on a wider range of these bishops as advisors – particularly Roger of Salisbury – breaking with the earlier tradition of relying primarily on the Archbishop of Canterbury. The result was a cohesive body of administrators through which Henry could exercise careful influence, holding general councils to discuss key matters of policy. This stability shifted slightly after 1125, when he began to inject a wider range of candidates into the senior positions of the Church, often with more reformist views, and the impact of this generation would be felt in the years after Henry's death. Personal beliefs and piety Like other rulers of the period, Henry donated to the Church and patronised various religious communities, but contemporary chroniclers did not consider him an unusually pious king. His personal beliefs and piety may, however, have developed during the course of his life. Henry had always taken an interest in religion, but in his later years he may have become much more concerned about spiritual affairs. If so, the major shifts in his thinking would appear to have occurred after 1120, when his son William Adelin died, and 1129, when his daughter's marriage teetered on the verge of collapse. As a proponent of religious reform, Henry gave extensively to reformist groups within the Church. 
He was a keen supporter of the Cluniac order, probably for intellectual reasons. He donated money to the abbey at Cluny itself, and after 1120 gave generously to Reading Abbey, a Cluniac establishment. Construction on Reading began in 1121, and Henry endowed it with rich lands and extensive privileges, making it a symbol of his dynastic lines. He also focused effort on promoting the conversion of communities of clerks into Augustinian canons, the foundation of leper hospitals, expanding the provision of nunneries, and the charismatic orders of the Savigniacs and Tironensians. He was an avid collector of relics, sending an embassy to Constantinople in 1118 to collect Byzantine items, some of which were donated to Reading Abbey. Later reign, 1107–1135 Continental and Welsh politics, 1108–1114 Normandy faced an increased threat from France, Anjou and Flanders after 1108. King Louis VI succeeded to the French throne in 1108 and began to reassert central royal power. Louis demanded Henry give homage to him and that two disputed castles along the Normandy border be placed into the control of neutral castellans. Henry refused, and Louis responded by mobilising an army. After some arguments, the two kings negotiated a truce and retreated without fighting, leaving the underlying issues unresolved. Fulk V assumed power in Anjou in 1109 and began to rebuild Angevin authority. He inherited the county of Maine, but refused to recognise Henry as his feudal lord and instead allied himself with Louis. Robert II of Flanders also briefly joined the alliance, before his death in 1111. In 1108, Henry betrothed his six-year-old daughter, Matilda, to Henry V, the future Holy Roman Emperor. For King Henry, this was a prestigious match; for Henry V, it was an opportunity to restore his financial situation and fund an expedition to Italy, as he received a dowry of £6,666 from England and Normandy. 
Raising this money proved challenging, and required the implementation of a special "aid", or tax, in England. Matilda was crowned German queen in 1110. Henry responded to the French and Angevin threat by expanding his own network of supporters beyond the Norman borders. Some Norman barons deemed unreliable were arrested or dispossessed, and Henry used their forfeited estates to bribe his potential allies in the neighbouring territories, in particular Maine. Around 1110, Henry attempted to arrest the young William Clito, but William's mentors moved him to the safety of Flanders before he could be taken. At about this time, Henry probably began to style himself as the duke of Normandy. Robert of Bellême turned against Henry once again, and when he appeared at Henry's court in 1112 in a new role as a French ambassador, he was arrested and imprisoned. Rebellions broke out in France and Anjou between 1111 and 1113, and Henry crossed into Normandy to support his nephew Theobald II, Count of Champagne, who had sided against Louis in the uprising. In a bid to isolate Louis diplomatically, Henry betrothed his young son, William Adelin, to Fulk's daughter Matilda, and married his illegitimate daughter Matilda to Duke Conan III of Brittany, creating alliances with Anjou and Brittany respectively. Louis backed down and in March 1113 met with Henry near Gisors to agree a peace settlement, giving Henry the disputed fortresses and confirming Henry's overlordship of Maine, Bellême and Brittany. Meanwhile, the situation in Wales was deteriorating. Henry had conducted a campaign in South Wales in 1108, pushing out royal power in the region and colonising the area around Pembroke with Flemings. By 1114, some of the resident Norman lords were under attack, while in Mid-Wales, Owain ap Cadwgan blinded one of the political hostages he was holding, and in North Wales Gruffudd ap Cynan threatened the power of the Earl of Chester. 
Henry sent three armies into Wales that year, with Gilbert Fitz Richard leading a force from the south, Alexander, King of Scotland, pressing from the north and Henry himself advancing into Mid-Wales. Owain and Gruffudd sued for peace, and Henry accepted a political compromise. He reinforced the Welsh Marches with his own appointees, strengthening the border territories. Rebellion, 1115–1120 Concerned about the succession, Henry sought to persuade Louis VI to accept his son, William Adelin, as the legitimate future Duke of Normandy, in exchange for his son's homage. Henry crossed into Normandy in 1115 and assembled the Norman barons to swear loyalty; he also almost successfully negotiated a settlement with Louis, affirming William's right to the Duchy in exchange for a large sum of money. However, Louis, backed by his ally Baldwin of Flanders, instead declared that he considered William Clito the legitimate heir to the Duchy. War broke out after Henry returned to Normandy with an army to support Theobald of Blois, who was under attack from Louis. Henry and Louis raided each other's towns along the border, and a wider conflict then broke out, probably in 1116. Henry was pushed onto the defensive as French, Flemish and Angevin forces began to pillage the Normandy countryside. Amaury III of Montfort and many other barons rose up against Henry, and there was an assassination plot from within his own household. Henry's wife, Matilda, died in early 1118, but the situation in Normandy was sufficiently pressing that Henry was unable to return to England for her funeral. Henry responded by mounting campaigns against the rebel barons and deepening his alliance with Theobald. Baldwin of Flanders was wounded in battle and died in September 1118, easing the pressure on Normandy from the north-east. Henry attempted to crush a revolt in the city of Alençon, but was defeated by Fulk and the Angevin army. 
Forced to retreat from Alençon, Henry saw his position deteriorate alarmingly, as his resources became overstretched and more barons abandoned his cause. Early in 1119, Eustace of Breteuil and Henry's daughter, Juliana, threatened to join the baronial revolt. Hostages were exchanged in a bid to avoid conflict, but relations broke down and both sides mutilated their captives. Henry attacked and took the town of Breteuil, despite Juliana's attempt to kill her father with a crossbow. In the aftermath, Henry dispossessed the couple of almost all of their lands in Normandy. Henry's situation improved in May 1119 when he enticed Fulk to switch sides by finally agreeing to marry William Adelin to Fulk's daughter, Matilda, and paying Fulk a large sum of money. Fulk left for the Levant, leaving the County of Maine in Henry's care, and the King was free to focus on crushing his remaining enemies. During the summer Henry advanced into the Norman Vexin, where he encountered Louis's army, resulting in the Battle of Brémule. Henry appears to have deployed scouts and then organised his troops into several carefully formed lines of dismounted knights. Unlike Henry's forces, the French knights remained mounted; they hastily charged the Anglo-Norman positions, breaking through the first rank of the defences but then becoming entangled in Henry's second line of knights. Surrounded, the French army began to collapse. In the melee, Henry was hit by a sword blow, but his armour protected him. Louis and William Clito escaped from the battle, leaving Henry to return to Rouen in triumph. The war slowly petered out after this battle, and Louis took the dispute over Normandy to Pope Callixtus II's council in Reims that October. Henry faced a number of French complaints concerning his acquisition and subsequent management of Normandy, and despite being defended by Geoffrey, the Archbishop of Rouen, Henry's case was shouted down by the pro-French elements of the council. 
Callixtus declined to support Louis, however, and merely advised the two rulers to seek peace. Amaury de Montfort came to terms with Henry, but Henry and William Clito failed to find a mutually satisfactory compromise. In June 1120, Henry and Louis formally made peace on terms advantageous to the King of England: William Adelin gave homage to Louis, and in return Louis confirmed William's rights to the Duchy. Succession crisis, 1120–1124 Henry's succession plans were thrown into chaos by the sinking of the White Ship on 25 November 1120. Henry had left the port of Barfleur for England in the early evening, leaving William Adelin and many of the younger members of the court to follow on that night in a separate vessel, the White Ship. Both the crew and passengers were drunk and, just outside the harbour, the ship hit a submerged rock. The ship sank, killing as many as 300 people, with only one survivor, a butcher from Rouen. Henry's court was initially too scared to report William's death to the King. When he was finally told, he collapsed with grief. The disaster left Henry with no legitimate son, his various nephews now the closest possible male heirs. Henry announced he would take a new wife, Adeliza of Louvain, opening up the prospect of a new royal son, and the two were married at Windsor Castle in January 1121. Henry appears to have chosen her because she was attractive and came from a prestigious noble line. Adeliza seems to have been fond of Henry and joined him in his travels, probably to maximise the chances of her conceiving a child. The White Ship disaster initiated fresh conflict in Wales, where the drowning of Richard, Earl of Chester, encouraged a rebellion led by Maredudd ap Bleddyn. Henry intervened in North Wales that summer with an army and, although he was hit by a Welsh arrow, the campaign reaffirmed royal power across the region. 
Henry's alliance with Anjou – which had been based on his son William marrying Fulk's daughter Matilda – began to disintegrate. Fulk returned from the Levant and demanded that Henry return Matilda and her dowry, a range of estates and fortifications in Maine. Matilda left for Anjou, but Henry argued that the dowry had in fact originally belonged to him before it came into the possession of Fulk, and so declined to hand the estates back to Anjou. Fulk married his daughter Sibylla to William Clito, and granted them Maine. Once again, conflict broke out, as Amaury de Montfort allied himself with Fulk and led a revolt along the Norman-Anjou border in 1123. Amaury was joined by several other Norman barons, headed by Waleran de Beaumont, one of the sons of Henry's old ally, Robert of Meulan. Henry dispatched Robert of Gloucester and Ranulf le Meschin to Normandy and then intervened himself in late 1123. He began the process of besieging the rebel castles, before wintering in the Duchy. In the spring of 1124, campaigning began again. In the Battle of Bourgthéroulde, Odo Borleng, castellan of Bernay, led the King's army; he received intelligence that the rebels were departing from their base in Beaumont-le-Roger, allowing him to ambush them as they traversed the Brotonne forest. Waleran charged the royal forces, but his knights were cut down by Odo's archers and the rebels were quickly overwhelmed. Waleran was captured, but Amaury escaped. Henry mopped up the remainder of the rebellion, blinding some of the rebel leaders – considered, at the time, a more merciful punishment than execution – and recovering the last rebel castles. He paid Pope Callixtus a large amount of money, in exchange for the Papacy annulling the marriage of William Clito and Sibylla on the grounds of consanguinity. 
Planning the succession, 1125–1134 Henry and Adeliza did not conceive any children, generating prurient speculation as to the possible explanation, and the future of the dynasty appeared at risk. Henry may have begun to look among his nephews for a possible heir. He may have considered Stephen of Blois as a possible option and, perhaps in preparation for this, he arranged a beneficial marriage for Stephen to a wealthy heiress, Matilda. Theobald of Blois, his close ally, may have also felt that he was in favour with Henry. William Clito, who was King Louis's preferred choice, remained opposed to Henry and was therefore unsuitable. Henry may have also considered his own illegitimate son, Robert of Gloucester, as a possible candidate, but English tradition and custom would have looked unfavourably on this. Henry's plans shifted when the Empress Matilda's husband, the Emperor Henry, died in 1125. The King recalled his daughter to England the next year and declared that, should he die without a male heir, she was to be his rightful successor. The Anglo-Norman barons were gathered together at Westminster at Christmas 1126, where they swore to recognise Matilda and any future legitimate heir she might have. Putting forward a woman as a potential heir in this way was unusual: opposition to Matilda continued to exist within the English court, and Louis was vehemently opposed to her candidacy. Fresh conflict broke out in 1127, when the childless Charles I, Count of Flanders, was murdered, creating a local succession crisis. Backed by King Louis, William Clito was chosen by the Flemings to become their new ruler. This development potentially threatened Normandy, and Henry began to finance a proxy war in Flanders, promoting the claims of William's Flemish rivals. In an effort to disrupt the French alliance with William, Henry mounted an attack into France in 1128, forcing Louis to cut his aid to William. 
William died unexpectedly in July, removing the last major challenger to Henry's rule and bringing the war in Flanders to a halt. Without William, the baronial opposition in Normandy lacked a leader. A fresh peace was made with France, and Henry was finally able to release the remaining prisoners from the revolt of 1123, including Waleran of Meulan, who was rehabilitated into the royal court. Meanwhile, Henry rebuilt his alliance with Fulk of Anjou, this time by marrying Matilda to Fulk's eldest son, Geoffrey. The pair were betrothed in 1127 and married the following year. It is unknown whether Henry intended Geoffrey to have any future claim on England or Normandy, and he was probably keeping his son-in-law's status deliberately uncertain. Similarly, although Matilda was granted a number of Normandy castles as part of her dowry, it was not specified when the couple would actually take possession of them. Fulk left Anjou for Jerusalem in 1129, declaring Geoffrey the Count of Anjou and Maine. The marriage proved difficult, as the couple did not particularly like each other and the disputed castles proved a point of contention, resulting in Matilda returning to Normandy later that year. Henry appears to have blamed Geoffrey for the separation, but in 1131 the couple were reconciled. Much to Henry's pleasure and relief, Matilda then gave birth to two sons, Henry and Geoffrey, in 1133 and 1134. Death and legacy Death Relations among Henry, Matilda, and Geoffrey became increasingly strained during the King's final years. Matilda and Geoffrey suspected that they lacked genuine support in England. In 1135 they urged Henry to hand over the royal castles in Normandy to Matilda whilst he was still alive, and insisted that the Norman nobility swear immediate allegiance to her, thereby giving the couple a more powerful position after Henry's death. Henry angrily declined to do so, probably out of concern that Geoffrey would try to seize power in Normandy. 
A fresh rebellion broke out amongst the barons in southern Normandy, led by William III, Count of Ponthieu, whereupon Geoffrey and Matilda intervened in support of the rebels. Henry campaigned throughout the autumn, strengthening the southern frontier, and then travelled to Lyons-la-Forêt in November to enjoy some hunting, still apparently healthy. There he fell ill – according to the chronicler Henry of Huntingdon, he ate too many ("a surfeit of") lampreys against his physician's advice – and his condition worsened over the course of a week. Once the condition appeared terminal, Henry gave confession and summoned Archbishop Hugh of Amiens, who was joined by Robert of Gloucester and other members of the court. In accordance with custom, preparations were made to settle Henry's outstanding debts and to revoke outstanding sentences of forfeiture. The King died on 1 December 1135, and his corpse was taken to Rouen accompanied by the barons, where it was embalmed; his entrails were buried locally at the priory of Notre-Dame du Pré, and the preserved body was taken on to England, where it was interred at Reading Abbey. Despite Henry's efforts, the succession was disputed. When news began to spread of the King's death, Geoffrey and Matilda were in Anjou supporting the rebels in their campaign against the royal army, which included a number of Matilda's supporters such as Robert of Gloucester. Many of these barons had taken an oath to stay in Normandy until the late king was properly buried, which prevented them from returning to England. The Norman nobility discussed declaring Theobald of Blois king. Theobald's younger brother, Stephen of Blois, quickly crossed from Boulogne to England, however, accompanied by his military household. Hugh Bigod dubiously testified that Henry, on his deathbed, had released the barons from their oath to Matilda, and with the help of his brother, Henry of Blois, Stephen seized power in England and was crowned king on 22 December. 
Matilda did not give up her claim to England and Normandy, appealing at first to the Pope against the decision to allow the coronation of Stephen, and then invading England to start a prolonged civil war, known as the Anarchy, between 1135 and 1153. Historiography Historians have drawn on a range of sources on Henry, including the accounts of chroniclers; other documentary evidence, including early pipe rolls; and surviving buildings and architecture. The three main chroniclers to describe the events of Henry's life were William of Malmesbury, Orderic Vitalis, and Henry of Huntingdon, but each incorporated extensive social and moral commentary into their accounts and borrowed a range of literary devices and stereotypical events from other popular works. Other chroniclers include Eadmer, Hugh the Chanter, Abbot Suger, and the authors of the Welsh Brut. Not all royal documents from the period have survived, but there are a number of royal acts, charters, writs, and letters, along with some early financial records. Some of these have since been discovered to be forgeries, and others had been subsequently amended or tampered with. Late medieval historians seized on the accounts of selected chroniclers regarding Henry's education and gave him the title of Henry "Beauclerc", a theme echoed in the analysis of Victorian and Edwardian historians such as Francis Palgrave and Henry Davis. The historian Charles David dismissed this argument in 1929, showing the more extreme claims for Henry's education to be without foundation. Modern histories of Henry commenced with Richard Southern's work in the early 1960s, followed by extensive research during the rest of the 20th century into a wide number of themes from his reign in England, and a much more limited number of studies of his rule in Normandy. Only two major, modern biographies of Henry have been produced, C. Warren Hollister's posthumous volume in 2001, and Judith Green's 2006 work. 
Interpretation of Henry's personality by historians has altered over time. Earlier historians such as Austin Poole and Richard Southern considered Henry as a cruel, draconian ruler. More recent historians, such as Hollister and Green, view his implementation of justice much more sympathetically, particularly when set against the standards of the day, but even Green has noted that Henry was "in many respects highly unpleasant", and Alan Cooper has observed that many contemporary chroniclers were probably too scared of the King to voice much criticism. Historians have also debated the extent to which Henry's administrative reforms genuinely constituted an introduction of what Hollister and John Baldwin have termed systematic, "administrative kingship", or whether his outlook remained fundamentally traditional. Henry's burial at Reading Abbey is marked by a local cross and a plaque, but Reading Abbey was slowly demolished during the Dissolution of the Monasteries in the 16th century. The exact location is uncertain, but the most likely location of the tomb itself is now in a built-up area of central Reading, on the site of the former abbey choir. A plan to locate his remains was announced in March 2015, with support from English Heritage and Philippa Langley, who aided with the successful discovery and exhumation of Richard III. Family and children Legitimate In addition to Matilda and William, Henry possibly had a short-lived son, Richard, with his first wife, Matilda of Scotland. Henry and his second wife, Adeliza of Louvain, had no children. Illegitimate Henry had a number of illegitimate children by various mistresses. Sons Robert of Gloucester, born in the 1090s. Richard, born to Ansfride, brought up by Robert Bloet, the Bishop of Lincoln. Reginald de Dunstanville, Earl of Cornwall, born in the 1110s or early 1120s, possibly to Sibyl Corbet. Robert FitzEdith, born to Edith Forne. Gilbert FitzRoy, possibly born to an unnamed sister or daughter of Walter of Gand. 
William de Tracy, possibly born in the 1090s. Henry FitzRoy, possibly born to Nest ferch Rhys. Fulk FitzRoy, possibly born to Ansfride. William, the full brother of Sybilla of Normandy, probably also of Reginald de Dunstanville. Daughters Matilda FitzRoy, Countess of Perche. Matilda FitzRoy, Duchess of Brittany. Juliane, wife of Eustace of Breteuil, possibly born to Ansfrida. Mabel, wife of William Gouet. Constance, Viscountess of Beaumont-sur-Sarthe. Aline, wife of Matthew de Montmorency. Isabel, daughter of Isabel de Beaumont, Countess of Pembroke. Sybilla de Normandy, Queen of Scotland, probably born before 1100. Matilda Fitzroy, Abbess of Montivilliers. Gundrada de Dunstanville. Possibly Rohese, wife of Henry de la Pomerai. Emma, wife of Guy of Laval. Adeliza, the King's daughter. Elizabeth Fitzroy, the wife of Fergus of Galloway. Possibly Sibyl of Falaise.
https://en.wikipedia.org/wiki/Hentai
Hentai
Hentai is anime and manga pornography. A loanword from Japanese, the original term does not describe a genre of media, but rather an abnormal sexual desire or act, as an abbreviation of hentai seiyoku ("perverse sexual desire"). In addition to anime and manga, hentai works exist in a variety of media, including artwork and video games (commonly known as eroge). The development of hentai has been influenced by Japanese cultural and historical attitudes toward sexuality. Hentai works, which are often self-published, form a significant portion of the market for doujin works, including doujinshi. Numerous subgenres exist depicting a variety of sexual acts and relationships, as well as novel fetishes.

Terminology

Hentai is a kanji compound of hen ('change' or 'weird') and tai ('appearance' or 'condition'), and means "metamorphosis" or "transformation". In sexual contexts, it carries additional meanings of "perversion" or "abnormality", especially when used as an adjective; in these uses, it is the shortened form of the phrase hentai seiyoku, which means "sexual perversion". The character hen is a catch-all for queerness as a peculiarity; it does not carry an explicit sexual reference. While the term has expanded in use to cover a range of publications, including homosexual publications, it remains primarily a heterosexual term, as terms indicating homosexuality entered Japan as foreign words. Japanese pornographic works are often simply tagged with a label meaning "prohibited to those not yet 18 years old". Less official terms are also in use, including the English initialism AV (for "adult video"). Usage of the term hentai does not define a genre in Japan. Hentai is defined differently in English. The Oxford Dictionary Online defines it as "a subgenre of the Japanese genres of manga and anime, characterized by overtly sexualized characters and sexually explicit images and plots."
The origin of the word in English is unknown, but AnimeNation's John Oppliger points to the early 1990s, when a Dirty Pair erotic doujinshi (self-published work) titled H-Bomb was released, and when many websites sold access to images culled from Japanese erotic visual novels and games. The earliest English use of the term traces back to the rec.arts.anime boards, with a 1990 post concerning Happosai of Ranma ½ and the first discussion of the meaning in 1991. A 1995 glossary on the rec.arts.anime boards contained reference to the Japanese usage and the evolving definition of hentai as "pervert" or "perverted sex". The Anime Movie Guide, published in 1997, defines ecchi as the initial sound of hentai (i.e., the name of the letter H, as pronounced in Japanese), noting that ecchi was "milder than hentai". A year later it was defined as a genre in Good Vibrations Guide to Sex. At the beginning of 2000, "hentai" was listed as the 41st most-popular search term of the internet, while "anime" ranked 99th. The attribution has been applied retroactively to works such as Urotsukidōji, La Blue Girl, and Cool Devices. Urotsukidōji had previously been described with terms such as "Japornimation" and "erotic grotesque", prior to being identified as hentai.

Etymology

The history of the word hentai has its origins in science and psychology. By the middle of the Meiji era, the term appeared in publications to describe unusual or abnormal traits, including paranormal abilities and psychological disorders. A translation of German sexologist Richard von Krafft-Ebing's text Psychopathia Sexualis originated the concept of hentai seiyoku, as a "perverse or abnormal sexual desire", though it was popularized outside psychology, as in the case of Mori Ōgai's 1909 novel Vita Sexualis. Continued interest in hentai seiyoku resulted in numerous journals and publications on sexual advice, which circulated among the public and served to establish the sexual connotation of hentai as perverse.
Any perverse or abnormal act could be hentai, such as committing shinjū (love suicide). It was Nakamura Kokyo's journal Abnormal Psychology which started the popular sexology boom in Japan, which would see the rise of other popular journals like Sexuality and Human Nature, Sex Research and Sex. Originally, Tanaka Kogai wrote articles for Abnormal Psychology, but it was Tanaka's own journal Modern Sexuality which became one of the most popular sources of information about erotic and neurotic expression. Modern Sexuality was created to promote fetishism, S&M, and necrophilia as facets of modern life. The ero-guro movement and its depiction of perverse, abnormal and often erotic undertones were a response to interest in hentai seiyoku. Following World War II, Japan took a new interest in sexualization and public sexuality. Mark McLelland observes that the term hentai found itself shortened to "H", and that the English pronunciation was "etchi", referring to lewdness without carrying the stronger connotation of abnormality or perversion. By the 1950s, the "hentai seiyoku" publications became their own genre and included fetish and homosexual topics. By the 1960s, the homosexual content was dropped in favor of subjects like sadomasochism and stories of lesbianism targeted to male readers. The late 1960s brought a sexual revolution which expanded and solidified the term's identity in Japan, one that continues to exist today through publications such as Bessatsu Takarajima's Hentai-san ga iku series.

History

With the usage of hentai as any erotic depiction, the history of these depictions is split by medium. Japanese artwork and comics serve as the first example of hentai material, coming to represent the iconic style after a 1979 publication by Azuma Hideo. Hentai first appeared in animation in the 1932 film Suzumi-bune by Hakusan Kimura, which was seized by police when it was half complete.
The remnants of the film were donated to the National Film Center in the early 21st century. The film has never been viewed by the public. However, the 1984 release of Wonderkid's Lolita Anime is regarded as the first hentai to get a general release, overlooking the erotic and sexual depictions in 1969's One Thousand and One Arabian Nights and the bare-breasted Cleopatra in 1970's Cleopatra film. Among erotic games, another area of contention, the first case of the art style depicting sexual acts was 1985's Tenshitachi no Gogo. In each of these media, the broad definition and usage of the term complicates its historic examination.

Origin of erotic manga

Depictions of sex and abnormal sex can be traced back through the ages, predating the term "hentai". Shunga, a Japanese term for erotic art, is thought to have existed in some form since the Heian period. From the 16th to the 19th centuries, shunga works were suppressed by shōguns. A well-known example is The Dream of the Fisherman's Wife, which depicts a woman being stimulated by two octopuses. Shunga production fell with the introduction of pornographic photographs in the late 19th century. To define erotic manga, a definition for manga is needed. While the Hokusai Manga uses the term "manga" in its title, it does not depict the story-telling aspect common to modern manga, as the images are unrelated. Due to the influence of pornographic photographs in the 19th and 20th centuries, manga artwork tended toward realistic characters. Osamu Tezuka helped define the modern look and form of manga, and was later proclaimed the "God of Manga". His debut work New Treasure Island was released in 1947 as a comic book through Ikuei Publishing and sold over 400,000 copies, though it was the popularity of Tezuka's Astro Boy, Metropolis, and Jungle Emperor manga that would come to define the medium.
This story-driven manga style is distinctly different from comic strips like Sazae-san, and story-driven works came to dominate shōjo and shōnen magazines. Adult themes in manga have existed since the 1940s, but some of these depictions were more realistic than the cartoon-cute characters popularized by Tezuka. In 1973, Manga Bestseller (later known as Manga Erotopia), considered the first hentai manga magazine published in Japan, created a new genre known as ero-gekiga, in which the gekiga style was taken up and its sexual and violent content intensified. Other well-known ero-gekiga magazines were Erogenica (1975) and Alice (1977). The circulation of ero-gekiga magazines peaked in 1978, and it is believed that between eighty and one hundred different ero-gekiga magazines were being published annually. The 1980s saw the decline of ero-gekiga in favor of the rising popularity of lolicon and bishōjo magazines, which grew from otaku fan culture. It has been theorized that the decline of ero-gekiga was due to the baby boomer readership starting their own families and migrating to seinen magazines such as Weekly Young Magazine, while, when it came to sexual material, the readership was drawn away by gravure and pornographic magazines. The distinct shift in the style of Japanese pornographic comics from realistic to cartoon-cute characters is credited to Hideo Azuma, "The Father of Lolicon". In 1979, he penned the work which offered the first depictions of sexual acts between cute, unrealistic Tezuka-style characters. This would start a pornographic manga movement. The lolicon boom of the 1980s saw the rise of magazines such as the anthologies Lemon People and Petit Apple Pie. As the lolicon boom waned in the mid-1980s, the dominant form of representation for female characters became "baby faced and big chested" women.
The shift in popularity from lolicon to bishōjo has been credited to Naoki Yamamoto (who wrote under the pen name of Tō Moriyama). Moriyama's manga had a style not seen before at the time, different from the ero-gekiga and lolicon styles, and used bishōjo designs as a base to build upon. Moriyama's books sold well upon publication, creating even more fans for the genre. These new artists would then write for magazines such as Monthly Penguin Club Magazine (1986) and Manga Hot Milk (1986), which became popular with their readerships and drew in new fans. The publication of erotic materials in the United States can be traced back to at least 1990, when IANVS Publications printed its first Anime Shower Special. In March 1994, Antarctic Press released Bondage Fairies, an English translation of Insect Hunter, an "insect rape" manga which became popular in the American market despite an apparently poor showing in Japan. During this time, the one American publisher translating and publishing hentai was Fantagraphics, through its adult comics imprint Eros Comix, established around 1990.

Origin of erotic anime

Because there have been fewer animated productions, most erotic anime works have been retroactively tagged as hentai since the coining of the term in English. Hentai is typically defined as consisting of excessive nudity and graphic sexual intercourse, whether or not it is perverse. The term "ecchi" is typically related to fanservice, with no sexual intercourse being depicted. The earliest pornographic anime was Suzumi-bune, created in 1932 by Hakusan Kimura. It was the first part of a two-reeler film, which was half complete before it was seized by the police. The remnants of the film were donated to the National Film Center in the early 21st century by the Tokyo police, who were removing all silver nitrate film in their possession, as it is extremely flammable. The film has never been viewed by the public.
Two early works escape being defined as hentai but contain erotic themes. This is likely due to the obscurity and unfamiliarity of the works, which arrived in the United States and faded from public focus a full 20 years before surging interest in imported anime coined the Americanized term hentai. The first is the 1969 film One Thousand and One Arabian Nights, which faithfully includes erotic elements of the original story. In 1970, Cleopatra: Queen of Sex was the first animated film to carry an X rating, but it was mislabeled as erotica in the United States. The Lolita Anime series is typically identified as the first erotic anime and original video animation (OVA); it was released in 1984 by Wonder Kids. Containing six episodes, the series focused on underage sex and rape, and included one episode containing BDSM bondage. Several sub-series were released in response, including a second Lolita Anime series released by Nikkatsu. It has not been officially licensed or distributed outside of its original release. The Cream Lemon franchise of works ran from 1984 to 2005, with a number of them entering the American market in various forms. The Brothers Grime series released by Excalibur Films contained Cream Lemon works as early as 1986. However, they were not billed as anime and were introduced during the same time that the first underground distribution of erotic works began. The American release of licensed erotic anime was first attempted in 1991 by Central Park Media, with I Give My All, but it never occurred. In December 1992, Devil Hunter Yohko was the first risqué (ecchi) title released by A.D. Vision. While it contains no sexual intercourse, it pushes the limits of the ecchi category with sexual dialogue, nudity and one scene in which the heroine is about to be raped. It was Central Park Media's 1993 release of Urotsukidōji which brought the first hentai film to American viewers.
Often cited for inventing the tentacle rape subgenre, it contains extreme depictions of violence and monster sex, and is acknowledged as the first work to depict tentacle sex on screen. When the film premiered in the United States, it was described as being "drenched in graphic scenes of perverse sex and ultra-violence". Following this release, a wealth of pornographic content began to arrive in the United States, with companies such as A.D. Vision, Central Park Media and Media Blasters releasing licensed titles under various labels. A.D. Vision's label SoftCel Pictures released 19 titles in 1995 alone. Another label, Critical Mass, was created in 1996 to release an unedited edition of Violence Jack. When A.D. Vision's hentai label SoftCel Pictures shut down in 2005, most of its titles were acquired by Critical Mass. Following the bankruptcy of Central Park Media in 2009, the licenses for all Anime 18-related products and movies were transferred to Critical Mass.

Origin of erotic games

The term eroge (erotic game) literally covers any erotic game, but has become synonymous with video games depicting the artistic styles of anime and manga. Eroge originated in the early 1980s, while the computer industry in Japan was struggling to define a computer standard, with makers like NEC, Sharp, and Fujitsu competing against one another. The PC98 series, despite limited processing power, a lack of CD drives and limited graphics, came to dominate the market, with the popularity of eroge games contributing to its success. Because of vague definitions of what constitutes an "erotic game", there are several possible candidates for the first eroge. If the definition applies to adult themes, the first game was Softporn Adventure. Released in America in 1981 for the Apple II, this was a text-based comedic game from On-Line Systems. If eroge is defined by graphical depictions of Japanese adult themes, the first would be Koei's 1982 release of Night Life.
Sexual intercourse is depicted through simple graphic outlines. Notably, Night Life was not intended to be erotic so much as an instructional guide "to support married life". A series of "undressing" games appeared as early as 1983, such as "Strip Mahjong". The first anime-styled erotic game was Tenshitachi no Gogo, released in 1985 by JAST. In 1988, ASCII released the first erotic role-playing game, Chaos Angel. In 1989, AliceSoft released the turn-based role-playing game Rance and ELF released Dragon Knight. In the late 1980s, eroge began to stagnate under high prices, with the majority of games containing uninteresting plots and mindless sex. ELF's 1992 release of Dōkyūsei came as customer frustration with eroge was mounting and spawned a new genre of games called dating sims. Dōkyūsei was unique because it had no defined plot and required the player to build a relationship with different girls in order to advance the story. Each girl had her own story, but the prospect of consummating a relationship required the girl to grow to love the player; there was no easy sex. The term "visual novel" is vague, with Japanese and English definitions classifying the genre as a type of interactive fiction game driven by narration and limited player interaction. While the term is often retroactively applied to many games, it was Leaf that coined it with their "Leaf Visual Novel Series" (LVNS), beginning with the 1996 releases of Shizuku and Kizuato. The success of these two dark eroge games would be followed by the third and final installment of the LVNS, the 1997 romantic eroge To Heart. Eroge visual novels took a new emotional turn with Tactics' 1998 release One: Kagayaku Kisetsu e. Key's 1999 release of Kanon proved to be a major success and would go on to have numerous console ports, two manga series and two anime series.

Censorship

Japanese laws have impacted depictions of works since the Meiji Restoration, but these predate the common definition of hentai material.
Since becoming law in 1907, Article 175 of the Criminal Code of Japan has forbidden the publication of obscene materials. Specifically, depictions of male–female sexual intercourse and pubic hair are considered obscene, but bare genitalia is not. As censorship is required for published works, the most common representations are the blurring dots on pornographic videos and "bars" or "lights" on still images. In 1986, Toshio Maeda sought to get past censorship of depictions of sexual intercourse by creating tentacle sex. This led to a large number of works containing sexual intercourse with monsters, demons, robots, and aliens, whose genitals look different from men's. While Western views attribute hentai to any explicit work, it was the products of this censorship which became not only the first titles legally imported to America and Europe, but the first successful ones. While Urotsukidōji was uncut for its American release, the United Kingdom's release removed many of the violence and tentacle rape scenes. Another technique used to evade regulation was the "sexual intercourse cross-section view", an imaginary view of intercourse resembling an anatomic drawing or an MRI, which would eventually evolve into a prevalent expression in hentai for its erotic appeal. This expression is known in the Western world as the "x-ray view", but has also been known as the "bisection view" since the mid-2000s by manga critics. It was also because of this law that, prior to 1991, artists depicted characters with a minimum of anatomical detail and without pubic hair. Part of the ban was lifted when Nagisa Oshima prevailed over the obscenity charges at his trial for his film In the Realm of the Senses. Though not enforced, the lifting of this ban did not apply to anime and manga, as they were not deemed artistic exceptions. Alterations of material, censorship, and the banning of works are common.
The US release of La Blue Girl altered the age of the heroine from 16 to 18, removed sex scenes with a dwarf ninja named Nin-nin, and removed the Japanese blurring dots. La Blue Girl was outright rejected by UK censors, who refused to classify it and prohibited its distribution. In 2011, members of the Liberal Democratic Party of Japan sought a ban on the subgenre lolicon but were unsuccessful. The last such proposal, introduced on May 27, 2013 by the Liberal Democratic Party, the New Komei Party and the Japan Restoration Party, would have made possession of sexual images of individuals under 18 illegal, punishable by a fine of 1 million yen (about US$10,437) and less than a year in jail. The Democratic Party of Japan, along with several industry associations involved in anime and manga, protested against the bill, saying that while they appreciated that the bill protects children, it would also restrict freedom of expression. The law was ultimately passed in June 2014, after the regulation of lolicon anime and manga was removed from the bill. This new law went into full effect in 2015, banning real-life child pornography.

Demographics

According to data from Pornhub in 2017, the most prolific consumers of hentai are men. However, Patrick W. Galbraith and Jessica Bauwens-Sugimoto note that hentai manga attracts "a diverse readership, which of course includes women." Kathryn Hemmann also writes that "self-identified female otaku [...] readily admit to enjoying [hentai] dōjinshi catering to a male erotic gaze". When it comes to mediums of hentai, eroge games in particular combine three favored media (cartoons, pornography and gaming) into a single experience. The hentai genre engages a wide audience that expands yearly and desires better quality and storylines, as well as works which push the creative envelope.
Nobuhiro Komiya, a manga censor, states that the unusual and extreme depictions in hentai are not about perversion so much as they are a product of the profit-oriented industry. Anime depicting normal sexual situations enjoys less market success than anime that breaks social norms, such as sex at schools or bondage. According to clinical psychologist Megha Hazuria Gorem, "Because toons are a kind of final fantasy, you can make the person look the way you want him or her to look. Every fetish can be fulfilled." Sexologist Narayan Reddy noted of eroge, "Animators make new games because there is a demand for them, and because they depict things that the gamers do not have the courage to do in real life, or that might just be illegal, these games are an outlet for suppressed desire."

Classification

The hentai genre can be divided into numerous subgenres, the broadest of which encompasses heterosexual and homosexual acts. Hentai that features mainly heterosexual interactions occurs in both male-targeted (ero or dansei-muke) and female-targeted ("ladies' comics") form. Hentai that features mainly homosexual interactions is known as yaoi or Boys' Love (male–male) and yuri (female–female). Both yaoi and, to a lesser extent, yuri are generally aimed at members of the opposite sex from the persons depicted. While yaoi and yuri are not always explicit, their pornographic history and association remain. Yaoi's pornographic usage has remained strong in textual form through fanfiction. The definition of yuri has begun to be replaced by the broader definition of "lesbian-themed animation or comics". Hentai is perceived as "dwelling" on sexual fetishes. These include dozens of fetish- and paraphilia-related subgenres, which can be further classified with additional terms, such as heterosexual or homosexual types. Many works are focused on depicting the mundane and the impossible across every conceivable act and situation, no matter how fantastical.
One subgenre of hentai is futanari (hermaphroditism), which most often features a woman with a penis or penis-like appendage in place of, or in addition to, a vulva. Futanari characters are often depicted as having sex with other women, but many other works feature sex with men or, as in Anal Justice, with both genders. Futanari can be dominant, submissive, or switch between the two roles in a single work.

Genres

See also

Cartoon pornography
Dōjinshi
E-Hentai
List of hentai anime
List of hentai authors (groups, studios, production companies, circles)
List of hentai manga
Panchira
Uniform fetishism

References

Further reading

Buckley, Sandra (1991). "'Penguin in Bondage': A Graphic Tale of Japanese Comic Books", pp. 163–196, in Technoculture, C. Penley and A. Ross, eds. Minneapolis: University of Minnesota.
McCarthy, Helen, and Jonathan Clements (1998). The Erotic Anime Movie Guide. London: Titan.

External links
https://en.wikipedia.org/wiki/Henry%20VII%20of%20England
Henry VII of England
Henry VII (; 28 January 1457 – 21 April 1509) was King of England and Lord of Ireland from his seizure of the crown on 22 August 1485 until his death in 1509. He was the first monarch of the House of Tudor. Henry's mother, Margaret Beaufort, was a descendant of the Lancastrian branch of the House of Plantagenet. Henry's father, Edmund Tudor, 1st Earl of Richmond, a half-brother of Henry VI of England and descendant of the Welsh Tudors of Penmynydd, died three months before his son Henry was born. During Henry's early years, his uncle Henry VI was fighting against Edward IV, a member of the Yorkist Plantagenet branch. After Edward retook the throne in 1471, Henry Tudor spent 14 years in exile in Brittany. He attained the throne when his forces, supported by France, Scotland, and Wales, defeated Edward IV's brother Richard III at the Battle of Bosworth Field, the culmination of the Wars of the Roses. He was the last king of England to win his throne on the field of battle. He cemented his claim by marrying Elizabeth of York, daughter of King Edward. Henry was successful in restoring power and stability to the English monarchy following the civil war. He is credited with a number of administrative, economic and diplomatic initiatives. His supportive policy toward England's wool industry and his standoff with the Low Countries had long-lasting benefit to the English economy. He paid very close attention to detail, and instead of spending lavishly he concentrated on raising new revenues. He introduced several new taxes, which stabilised the government's finances. After his death, a commission found widespread abuses in the tax collection process. Henry reigned for nearly 24 years and was peacefully succeeded by his son, Henry VIII. Ancestry and early life Henry VII was born at Pembroke Castle on 28 January 1457 to Lady Margaret Beaufort, Countess of Richmond. His father, Edmund Tudor, 1st Earl of Richmond, died three months before his birth. 
Henry's paternal grandfather, Owen Tudor, originally from the Tudors of Penmynydd, Isle of Anglesey in Wales, had been a page in the court of King Henry V. He rose to become one of the "Squires to the Body to the King" after military service at the Battle of Agincourt. Owen is said to have secretly married the widow of Henry V, Catherine of Valois. One of their sons was Edmund, Henry's father. Edmund was created Earl of Richmond in 1452, and "formally declared legitimate by Parliament". Henry's mother, Margaret, provided Henry's main claim to the English throne through the House of Beaufort. She was a great-granddaughter of John of Gaunt, 1st Duke of Lancaster (fourth son of Edward III), and his third wife Katherine Swynford. Katherine was Gaunt's mistress for about 25 years. When they married in 1396 they already had four children, including Henry's great-grandfather John Beaufort. Thus, Henry's claim was somewhat tenuous; it was from a woman, and by illegitimate descent. In theory, the Portuguese and Castilian royal families had a better claim as descendants of Catherine of Lancaster, the daughter of John of Gaunt and his second wife Constance of Castile. Gaunt's nephew Richard II legitimised Gaunt's children by Katherine Swynford by Letters Patent in 1397. In 1407, Henry IV, Gaunt's son by his first wife, issued new Letters Patent confirming the legitimacy of his half-siblings but also declaring them ineligible for the throne. Henry IV's action was of doubtful legality, as the Beauforts were previously legitimised by an Act of Parliament, but it further weakened Henry's claim. Nonetheless, by 1483 Henry was the senior male Lancastrian claimant remaining after the deaths in battle, by murder or execution of Henry VI (son of Henry V and Catherine of Valois), his son Edward of Westminster, Prince of Wales, and the other Beaufort line of descent through Lady Margaret's uncle, Edmund Beaufort, 2nd Duke of Somerset. 
Henry also made some political capital out of his Welsh ancestry in attracting military support and safeguarding his army's passage through Wales on its way to the Battle of Bosworth. He came from an old, established Anglesey family that claimed descent from Cadwaladr, in legend, the last ancient British king, and on occasion Henry displayed the red dragon of Cadwaladr. He took it, as well as the standard of St. George, on his procession through London after the victory at Bosworth. A contemporary writer and Henry's biographer, Bernard André, also made much of Henry's Welsh descent. In 1456, Henry's father Edmund Tudor was captured while fighting for Henry VI in South Wales against the Yorkists. He died shortly afterwards in Carmarthen Castle. His younger brother, Jasper Tudor, the Earl of Pembroke, undertook to protect Edmund's widow Margaret, who was 13 years old when she gave birth to Henry. When Edward IV became King in 1461, Jasper Tudor went into exile abroad. Pembroke Castle, and later the Earldom of Pembroke, were granted to the Yorkist William Herbert, who also assumed the guardianship of Margaret Beaufort and the young Henry. Henry lived in the Herbert household until 1469, when Richard Neville, Earl of Warwick (the "Kingmaker"), went over to the Lancastrians. Herbert was captured fighting for the Yorkists and executed by Warwick. When Warwick restored Henry VI in 1470, Jasper Tudor returned from exile and brought Henry to court. When the Yorkist Edward IV regained the throne in 1471, Henry fled with other Lancastrians to Brittany. He spent most of the next 14 years under the protection of Francis II, Duke of Brittany. In November 1476, Francis fell ill and his principal advisers were more amenable to negotiating with King Edward. Henry was thus handed over to English envoys and escorted to the Breton port of Saint-Malo. While there, he feigned stomach cramps and delayed his departure long enough to miss the tides. 
An ally of Henry's, Viscount , soon arrived, bringing news that Francis had recovered, and in the confusion Henry was able to flee to a monastery. There he claimed sanctuary until the envoys were forced to depart.

Rise to the throne

By 1483, Henry's mother was actively promoting him as an alternative to Richard III, despite her being married to Lord Stanley, a Yorkist. At Rennes Cathedral on Christmas Day 1483, Henry pledged to marry Elizabeth of York, the eldest daughter of Edward IV. She was Edward's heir since the presumed death of her brothers, the Princes in the Tower, King Edward V and Richard of Shrewsbury, Duke of York. With money and supplies borrowed from his host, Francis II of Brittany, Henry tried to land in England, but his conspiracy unravelled, resulting in the execution of his primary co-conspirator, Henry Stafford, 2nd Duke of Buckingham. Now supported by Francis II's prime minister, Pierre Landais, Richard III attempted to extradite Henry from Brittany, but Henry escaped to France. He was welcomed by the French, who readily supplied him with troops and equipment for a second invasion. Henry gained the support of the Woodvilles, in-laws of the late Edward IV, and sailed with a small French and Scottish force, landing at Mill Bay near Dale, Pembrokeshire. He marched toward England accompanied by his uncle Jasper and John de Vere, 13th Earl of Oxford. Wales was historically a Lancastrian stronghold, and Henry owed the support he gathered to his Welsh birth and ancestry, being agnatically descended from Rhys ap Gruffydd. He amassed an army of about 5,000 to 6,000 soldiers. Henry devised a plan to seize the throne by engaging Richard quickly, because Richard had reinforcements in Nottingham and Leicester. Though outnumbered, Henry's Lancastrian forces decisively defeated Richard's Yorkist army at the Battle of Bosworth Field on 22 August 1485.
Several of Richard's key allies, such as Henry Percy, 4th Earl of Northumberland, and also Lord Stanley and his brother William, crucially switched sides or left the battlefield. Richard III's death at Bosworth Field effectively ended the Wars of the Roses. Reign To secure his hold on the throne, Henry declared himself king by right of conquest retroactively from 21 August 1485, the day before Bosworth Field. Thus, anyone who had fought for Richard against him would be guilty of treason and Henry could legally confiscate the lands and property of Richard III, while restoring his own. Henry spared Richard's nephew and designated heir, John de la Pole, Earl of Lincoln, and made the Yorkist heiress Margaret Plantagenet Countess of Salisbury suo jure. He took care not to address the baronage or summon Parliament until after his coronation, which took place in Westminster Abbey on 30 October 1485. After his coronation Henry issued an edict that any gentleman who swore fealty to him would, notwithstanding any previous attainder, be secure in his property and person. Henry honoured his pledge of December 1483 to marry Elizabeth of York. They were third cousins, as both were great-great-grandchildren of John of Gaunt. Henry married Elizabeth of York with the hope of uniting the Yorkist and Lancastrian sides of the Plantagenet dynastic disputes, and he was largely successful. However, such a level of paranoia persisted that anyone (John de la Pole, Earl of Lincoln, for example) with blood ties to the Plantagenets was suspected of coveting the throne. Henry had Parliament repeal Titulus Regius, the statute that declared Edward IV's marriage invalid and his children illegitimate, thus legitimising his wife. Amateur historians Bertram Fields and Sir Clements Markham have claimed that he may have been involved in the murder of the Princes in the Tower, as the repeal of Titulus Regius gave the Princes a stronger claim to the throne than his own. 
Alison Weir points out that the Rennes ceremony, two years earlier, was plausible only if Henry and his supporters were certain that the Princes were already dead. Henry secured his crown principally by dividing and undermining the power of the nobility, especially through the aggressive use of bonds and recognisances to secure loyalty. He also enacted laws against livery and maintenance, the great lords' practice of having large numbers of "retainers" who wore their lord's badge or uniform and formed a potential private army. Henry began taking precautions against rebellion while still in Leicester after Bosworth Field. Edward, Earl of Warwick, the ten-year-old son of Edward IV's brother George, Duke of Clarence, was the senior surviving male of the House of York. Before departing for London, Henry sent Robert Willoughby to Sheriff Hutton in Yorkshire, to arrest Warwick and take him to the Tower of London. Despite such precautions, Henry faced several rebellions over the next twelve years. The first was the 1486 rebellion of the Stafford brothers, abetted by Viscount Lovell, which collapsed without fighting. Next, in 1487, Yorkists led by Lincoln rebelled in support of Lambert Simnel, a boy they claimed to be Edward of Warwick (who was actually a prisoner in the Tower). The rebellion began in Ireland, where the historically Yorkist nobility, headed by the powerful Gerald FitzGerald, 8th Earl of Kildare, proclaimed Simnel king and provided troops for his invasion of England. The rebellion was defeated and Lincoln killed at the Battle of Stoke. Henry showed remarkable clemency to the surviving rebels: he pardoned Kildare and the other Irish nobles, and he made the boy, Simnel, a servant in the royal kitchen where he was in charge of roasting meats on a spit. In 1490, a young Fleming, Perkin Warbeck, appeared and claimed to be Richard of Shrewsbury, the younger of the "Princes in the Tower". Warbeck won the support of Edward IV's sister Margaret, Duchess of Burgundy. 
He led attempted invasions of Ireland in 1491 and England in 1495, and persuaded James IV of Scotland to invade England in 1496. In 1497 Warbeck landed in Cornwall with a few thousand troops, but was soon captured and executed. When the King's agents searched the property of William Stanley (Chamberlain of the Household, with direct access to Henry VII) they found a bag of coins amounting to around £10,000 and a collar of livery with Yorkist garnishings. Stanley was accused of supporting Warbeck's cause, arrested and later executed. In response to this threat within his own household, the King instituted more rigid security for access to his person. In 1499, Henry had the Earl of Warwick executed. However, he spared Warwick's elder sister Margaret, who survived until 1541 when she was executed by Henry VIII. Economics For most of Henry VII's reign Edward Story was Bishop of Chichester. Story's register still exists and, according to the 19th-century historian W.R.W. Stephens, "affords some illustrations of the avaricious and parsimonious character of the king". It seems that Henry was skillful at extracting money from his subjects on many pretexts, including that of war with France or war with Scotland. The money so extracted added to the King's personal fortune rather than being used for the stated purpose. Unlike his predecessors, Henry VII came to the throne without personal experience in estate management or financial administration. Despite this, during his reign he became a fiscally prudent monarch who restored the fortunes of an effectively bankrupt exchequer. Henry VII introduced stability to the financial administration of England by keeping the same financial advisors throughout his reign. For instance, except for the first few months of the reign, the Baron Dynham and the Earl of Surrey were the only Lord High Treasurers throughout his reign. Henry VII improved tax collection in the realm by introducing ruthlessly efficient mechanisms of taxation. 
He was supported in this effort by his chancellor, Archbishop John Morton, whose "Morton's Fork" was a catch-22 method of ensuring that nobles paid increased taxes: those nobles who spent little must have saved much, and thus could afford the increased taxes; in contrast, those nobles who spent much obviously had the means to pay the increased taxes. The capriciousness and lack of due process of these methods indebted many, tarnished his legacy, and were ended soon after Henry VII's death, when a commission revealed widespread abuses. According to the contemporary historian Polydore Vergil, simple "greed" underscored the means by which royal control was over-asserted in Henry's final years. Following Henry VII's death, Henry VIII executed Richard Empson and Edmund Dudley, his two most hated tax collectors, on trumped-up charges of treason. Henry VII established the pound avoirdupois as a standard of weight; it later became part of the Imperial and customary systems of units. Foreign policy Henry VII's policy was both to maintain peace and to create economic prosperity. Up to a point, he succeeded. The Treaty of Redon was signed in February 1489 between Henry and representatives of Brittany. Based on the terms of the accord, Henry sent 6,000 troops to fight (at the expense of Brittany) under the command of Lord Daubeney. The purpose of the agreement was to prevent France from annexing Brittany. According to John M. Currin, the treaty redefined Anglo-Breton relations. Henry started a new policy to recover Guyenne and other lost Plantagenet claims in France. The treaty marks a shift from neutrality over the French invasion of Brittany to active intervention against it. Henry later concluded a treaty with France at Étaples that brought money into the coffers of England, and ensured the French would not support pretenders to the English throne, such as Perkin Warbeck. However, this treaty came at a price, as Henry mounted a minor invasion of Brittany in November 1492. 
Henry decided to keep Brittany out of French hands, signed an alliance with Spain to that end, and sent 6,000 troops to France. The confused, fractious nature of Breton politics undermined his efforts, which finally failed after three sizeable expeditions, at a cost of £24,000. However, as France was becoming more concerned with the Italian Wars, the French were happy to agree to the Treaty of Étaples. Henry had pressured the French by laying siege to Boulogne in October 1492. Henry had been under the financial and physical protection of the French throne or its vassals for most of his life before becoming king. To strengthen his position, however, he subsidised shipbuilding, so strengthening the navy (he commissioned Europe's first ever – and the world's oldest surviving – dry dock at Portsmouth in 1495) and improving trading opportunities. Henry VII was one of the first European monarchs to recognise the importance of the newly united Spanish kingdom; he concluded the Treaty of Medina del Campo, by which his son Arthur, Prince of Wales, was married to Catherine of Aragon. He also concluded the Treaty of Perpetual Peace with Scotland (the first treaty between England and Scotland for almost two centuries), which betrothed his daughter Margaret Tudor to King James IV of Scotland. By this marriage, Henry VII hoped to break the Auld Alliance between Scotland and France. Though this was not achieved during his reign, the marriage eventually led to the union of the English and Scottish crowns under Margaret's great-grandson, James VI and I, following the death of Henry's granddaughter Elizabeth I. Henry also formed an alliance with Holy Roman Emperor Maximilian I (1493–1519) and persuaded Pope Innocent VIII to issue a papal bull of excommunication against all pretenders to Henry's throne. In 1506, Grand Master of the Knights Hospitaller Emery d'Amboise asked Henry VII to become the protector and patron of the Order, as he had an interest in the crusade. 
In 1507, Henry exchanged letters with Pope Julius II, encouraging him to establish peace among Christian realms and to organise an expedition against the Turks of the Ottoman Empire. Trade agreements Henry VII was much enriched by trading alum, which was used in the wool and cloth trades as a chemical fixative for dyeing fabrics. Since alum was mined in only one area in Europe (Tolfa, Italy), it was a scarce commodity and therefore especially valuable to its land holder, the Pope. With the English economy heavily invested in wool production, Henry VII became involved in the alum trade in 1486. With the assistance of the Italian merchant banker Lodovico della Fava and the Italian banker Girolamo Frescobaldi, Henry VII became deeply involved in the trade by licensing ships, obtaining alum from the Ottoman Empire, and selling it to the Low Countries and in England. This trade made an expensive commodity cheaper, which raised opposition from Pope Julius II, since the Tolfa mine was a part of papal territory and had given the Pope monopoly control over alum. Henry's most successful diplomatic achievement as regards the economy was the Magnus Intercursus ("great agreement") of 1496. In 1494, Henry embargoed trade (mainly in wool) with the Burgundian Netherlands in retaliation for Margaret of Burgundy's support for Perkin Warbeck. The Merchant Adventurers, the company which enjoyed the monopoly of the Flemish wool trade, relocated from Antwerp to Calais. At the same time, Flemish merchants were ejected from England. The dispute eventually paid off for Henry. Both parties realised they were mutually disadvantaged by the reduction in commerce. Its restoration by the Magnus Intercursus was very much to England's benefit in removing taxation for English merchants and significantly increasing England's wealth. 
In turn, Antwerp became an extremely important trade entrepôt (transshipment port), through which, for example, goods from the Baltic, spices from the east and Italian silks were exchanged for English cloth. In 1506, Henry extorted the Treaty of Windsor from Philip the Handsome, Duke of Burgundy. Philip had been shipwrecked on the English coast, and while Henry's guest, was bullied into an agreement so favourable to England at the expense of the Netherlands that it was dubbed the Malus Intercursus ("evil agreement"). France, Burgundy, the Holy Roman Empire, Spain and the Hanseatic League all rejected the treaty, which was never in force. Philip died shortly after the negotiations. Law enforcement and Justices of the Peace Henry's principal problem was to restore royal authority in a realm recovering from the Wars of the Roses. There were too many powerful noblemen and, as a consequence of the system of so-called bastard feudalism, each had what amounted to private armies of indentured retainers (mercenaries masquerading as servants). Following the example of Edward IV, Henry VII created a Council of Wales and the Marches for his son Arthur, which was intended to govern Wales and the Marches, Cheshire and Cornwall. He was content to allow the nobles their regional influence if they were loyal to him. For instance, the Stanley family had control of Lancashire and Cheshire, upholding the peace on the condition that they stayed within the law. In other cases, he brought his over-powerful subjects to heel by decree. He passed laws against "livery" (the upper classes' flaunting of their adherents by giving them badges and emblems) and "maintenance" (the keeping of too many male "servants"). These laws were used shrewdly in levying fines upon those that he perceived as threats. However, his principal weapon was the Court of Star Chamber. 
This revived an earlier practice of using a small (and trusted) group of the Privy Council as a personal or Prerogative Court, able to cut through the cumbersome legal system and act swiftly. Serious disputes involving the use of personal power, or threats to royal authority, were thus dealt with. Henry VII used Justices of the Peace on a large, nationwide scale. They were appointed for every shire and served for a year at a time. Their chief task was to see that the laws of the country were obeyed in their area. Their powers and numbers steadily increased during the time of the Tudors, never more so than under Henry's reign. Despite this, Henry was keen to constrain their power and influence, applying to the Justices of the Peace the same principles he applied to the nobility: a system of bonds and recognisances used against the gentry and nobles who tried to exert undue influence over these local officials. All Acts of Parliament were overseen by the Justices of the Peace. For example, Justices of the Peace could replace suspect jurors in accordance with the 1495 act preventing the corruption of juries. They were also in charge of various administrative duties, such as the checking of weights and measures. By 1509, Justices of the Peace were key enforcers of law and order for Henry VII. They were unpaid, which, in comparison with modern standards, meant a smaller tax bill for law enforcement. Local gentry saw the office as one of local influence and prestige and were therefore willing to serve. Overall, this was a successful area of policy for Henry, both in terms of efficiency and as a method of reducing the corruption endemic within the nobility of the Middle Ages. Later years and death In 1502, Henry VII's life took a difficult and personal turn in which many people he was close to died in quick succession. 
His first son and heir apparent, Arthur, Prince of Wales, died suddenly at Ludlow Castle, very likely from a viral respiratory illness known at the time as the "English sweating sickness". This made Henry VII's second son, Henry, Duke of York, heir apparent to the throne. The King, normally a reserved man who rarely showed much emotion in public unless angry, surprised his courtiers by his intense grief and sobbing at his son's death. His concern for the Queen is evidence that the marriage was a happy one, as is his reaction to Queen Elizabeth's death the following year, when he shut himself away for several days, refusing to speak to anyone. Henry wanted to maintain the Spanish alliance. He therefore arranged a papal dispensation from Pope Julius II for Prince Henry to marry his brother's widow Catherine, a relationship that would have otherwise precluded marriage in the Roman Catholic Church. Elizabeth had died in childbirth, so Henry had the dispensation also permit him to marry Catherine himself. After obtaining the dispensation, Henry had second thoughts about the marriage of his son and Catherine. Catherine's mother Isabella I of Castile had died and Catherine's sister Joanna had succeeded her; Catherine was, therefore, daughter of only one reigning monarch and so less desirable as a spouse for Henry VII's heir-apparent. The marriage did not take place during his lifetime. In any case, at the time of his father's arranging of the marriage to Catherine of Aragon, the future Henry VIII was too young to contract the marriage according to Canon Law and would be ineligible until age fourteen. Henry made half-hearted plans to remarry and beget more heirs, but these never came to anything. 
He entertained thoughts of remarriage to renew the alliance with Spain – Joanna, Dowager Queen of Naples (a niece of Queen Isabella of Castile), Queen Joanna of Castile, and Margaret, Dowager Duchess of Savoy (sister-in-law of Joanna of Castile), were all considered. In 1505 he was sufficiently interested in a potential marriage to Joanna of Naples that he sent ambassadors to Naples to report on the 27-year-old Joanna's physical suitability. The wedding never took place, and the physical description Henry sent with his ambassadors of what he desired in a new wife matched the description of his wife Elizabeth. After 1503, records show the Tower of London was never again used as a royal residence by Henry VII, and all royal births under Henry VIII took place in palaces. Henry VII is among the minority of British monarchs who never had any known mistresses, and for the times it was very unusual that he did not remarry: his son Henry was the only male heir left after the death of his wife, so the death of Arthur had created a precarious political position for the House of Tudor. During his lifetime the nobility often criticised Henry VII for re-centralizing power in London, and later the early 17th-century historian Francis Bacon was ruthlessly critical of the methods by which he enforced tax law, but it is equally true that Henry VII was diligent about keeping detailed records of his personal finances, down to the last halfpenny; these and one account book detailing the expenses of his queen survive in the British National Archives, as do accounts of courtiers and many of the king's own letters. Until the death of his wife, the evidence from these accounting books is clear that Henry was a more doting father and husband than was widely known, and his outwardly austere personality belied a devotion to his family. Letters to relatives have an affectionate tone not captured by official state business, as evidenced by many written to his mother Margaret. 
Many of the entries show a man who loosened his purse strings generously for his wife and children, and not just on necessities: in spring 1491 he spent a great amount of gold on a lute for his daughter Mary; the following year he spent money on a lion for Elizabeth's menagerie. With Elizabeth's death, the possibilities for such family indulgences greatly diminished. Immediately afterwards, Henry became very sick and nearly died himself, allowing only his mother Margaret Beaufort near him: "privily departed to a solitary place, and would that no man should resort unto him." Further compounding Henry's distress, his elder daughter Margaret had previously been betrothed to King James IV of Scotland and within months of her mother's death she had to be escorted to the border by her father: he would never see her again. Margaret Tudor wrote letters to her father declaring her homesickness, but Henry could do nothing but mourn the loss of his family and honour the terms of the peace treaty he had agreed with the King of Scotland. Henry VII died of tuberculosis at Richmond Palace on 21 April 1509 and was buried in the chapel he commissioned in Westminster Abbey next to his wife, Elizabeth. He was succeeded by his second son, Henry VIII (reigned 1509–47). His mother survived him, but died two months later on 29 June 1509. Appearance and character Good contemporary visual records of Henry's appearance exist in realistic portraits that are relatively free of idealisation. At 27, he was tall and slender, with small blue eyes, which were said to have a noticeable animation of expression, and noticeably bad teeth in a long, sallow face beneath very fair hair. Amiable and high-spirited, Henry was friendly if dignified in manner, and it was clear that he was extremely intelligent. 
His biographer, Professor Chrimes, credits him – even before he had become king – with "a high degree of personal magnetism, ability to inspire confidence, and a growing reputation for shrewd decisiveness". On the debit side, he may have looked a little delicate as he suffered from poor health. Legacy and memory Historians have always compared Henry VII with his continental contemporaries, especially Louis XI of France and Ferdinand II of Aragon. By 1600 historians emphasised Henry's wisdom in drawing lessons in statecraft from other monarchs. In 1622 Francis Bacon published his History of the Reign of King Henry VII. By 1900 the "New Monarchy" interpretation stressed the common factors that in each country led to the revival of monarchical power. This approach raised puzzling questions about similarities and differences in the development of national states. In the late 20th century a model of European state formation was prominent in which Henry less resembles Louis and Ferdinand. Family Henry VII and Elizabeth had seven children:
Arthur (19 September 1486 – 2 April 1502), Prince of Wales, heir apparent from birth to death
Margaret (28 November 1489 – 18 October 1541), Queen of Scotland as the wife of James IV and regent for their son James V
Henry VIII (28 June 1491 – 28 January 1547), Henry VII's successor
Elizabeth (2 July 1492 – 14 September 1495)
Mary (18 March 1496 – 25 June 1533), Queen of France as the wife of Louis XII
Edmund (21 February 1499 – 19 June 1500), styled Duke of Somerset but never formally created a peer
Katherine (2 February 1503 – 10 February 1503)
Sir Roland de Velville (or Veleville) was knighted in 1497 and was Constable of Beaumaris Castle. He is sometimes presented as illegitimate issue of Henry VII by "a Breton lady whose name is not known", but the claim that he was Henry's illegitimate son is baseless. 
See also
Cestui que
Cultural depictions of Henry VII of England
https://en.wikipedia.org/wiki/Henry%20VIII
Henry VIII
Henry VIII (28 June 1491 – 28 January 1547) was King of England from 22 April 1509 until his death in 1547. Henry is best known for his six marriages, including his efforts to have his first marriage (to Catherine of Aragon) annulled. His disagreement with Pope Clement VII about such an annulment led Henry to initiate the English Reformation, separating the Church of England from papal authority. He appointed himself Supreme Head of the Church of England and dissolved convents and monasteries, for which he was excommunicated. Henry is also known as "the father of the Royal Navy," as he invested heavily in the navy, increasing its size from a few to more than 50 ships, and established the Navy Board. Domestically, Henry is known for his radical changes to the English Constitution, ushering in the theory of the divine right of kings in opposition to Papal supremacy. He also greatly expanded royal power during his reign. He frequently used charges of treason and heresy to quell dissent, and those accused were often executed without a formal trial by means of bills of attainder. He achieved many of his political aims through the work of his chief ministers, some of whom were banished or executed when they fell out of his favour. Thomas Wolsey, Thomas More, Thomas Cromwell, Richard Rich, and Thomas Cranmer all figured prominently in his administration. Henry was an extravagant spender, using the proceeds from the dissolution of the monasteries and acts of the Reformation Parliament. He also converted the money that was formerly paid to Rome into royal revenue. Despite the money from these sources, he was continually on the verge of financial ruin due to his personal extravagance, as well as his numerous costly and largely unsuccessful wars, particularly with King Francis I of France, Holy Roman Emperor Charles V, King James V of Scotland and the Scottish regency under the Earl of Arran and Mary of Guise. 
At home, he oversaw the legal union of England and Wales with the Laws in Wales Acts 1535 and 1542, and he was the first English monarch to rule as King of Ireland following the Crown of Ireland Act 1542. Henry's contemporaries considered him to be an attractive, educated, and accomplished king. He has been described as "one of the most charismatic rulers to sit on the English throne" and his reign has been described as the "most important" in English history. He was an author and composer. As he aged, he became severely overweight and his health suffered. He is frequently characterised in his later life as a lustful, egotistical, paranoid and tyrannical monarch. He was succeeded by his son Edward VI. Early years Born on 28 June 1491 at the Palace of Placentia in Greenwich, Kent, Henry Tudor was the third child and second son of King Henry VII and Elizabeth of York. Of the young Henry's six (or seven) siblings, only three – his brother Arthur, Prince of Wales, and sisters Margaret and Mary – survived infancy. He was baptised by Richard Foxe, the Bishop of Exeter, at a church of the Observant Franciscans close to the palace. In 1493, at the age of two, Henry was appointed Constable of Dover Castle and Lord Warden of the Cinque Ports. He was subsequently appointed Earl Marshal of England and Lord Lieutenant of Ireland at age three and was made a Knight of the Bath soon after. The day after the ceremony, he was created Duke of York and a month or so later made Warden of the Scottish Marches. In May 1495, he was appointed to the Order of the Garter. The reason for giving such appointments to a small child was to enable his father to retain personal control of lucrative positions and not share them with established families. Not much is known about Henry's early life – save for his appointments – because he was not expected to become king, but it is known that he received a first-rate education from leading tutors. 
He became fluent in Latin and French and learned at least some Italian. In November 1501, Henry played a considerable part in the ceremonies surrounding his brother Arthur's marriage to Catherine, the youngest child of King Ferdinand II of Aragon and Queen Isabella I of Castile. As Duke of York, Henry used the arms of his father as king, differenced by a label of three points ermine. He was further honoured on 9 February 1506 by Holy Roman Emperor Maximilian I, who made him a Knight of the Golden Fleece. In 1502, Arthur died at the age of 15, possibly of sweating sickness, just 20 weeks after his marriage to Catherine. Arthur's death thrust all his duties upon his younger brother. The 10-year-old Henry became the new Duke of Cornwall, and the new Prince of Wales and Earl of Chester in February 1504. Henry VII gave his second son few responsibilities even after the death of Arthur. Young Henry was strictly supervised and did not appear in public. As a result, he ascended the throne "untrained in the exacting art of kingship". Henry VII renewed his efforts to seal a marital alliance between England and Spain, by offering his son Henry in marriage to the widowed Catherine. Both Henry VII and Catherine's mother Queen Isabella were keen on the idea, which had arisen very shortly after Arthur's death. On 23 June 1503, a treaty was signed for their marriage, and they were betrothed two days later. A papal dispensation was only needed for the "impediment of public honesty" if the marriage had not been consummated as Catherine and her duenna claimed, but Henry VII and the Spanish ambassador set out instead to obtain a dispensation for "affinity", which took account of the possibility of consummation. Cohabitation was not possible because Henry was too young. Isabella's death in 1504, and the ensuing problems of succession in Castile, complicated matters. Catherine's father Ferdinand preferred her to stay in England, but Henry VII's relations with Ferdinand had deteriorated. 
Catherine was therefore left in limbo for some time, culminating in Prince Henry's rejection of the marriage as soon as he was able, at the age of 14. Ferdinand's solution was to make his daughter ambassador, allowing her to stay in England indefinitely. Devout, she began to believe that it was God's will that she marry the prince despite his opposition. Early reign Henry VII died on 21 April 1509, and the 17-year-old Henry succeeded him as king. Soon after his father's burial on 10 May, Henry suddenly declared that he would indeed marry Catherine, leaving unresolved several issues concerning the papal dispensation and a missing part of the marriage portion. The new king maintained that it had been his father's dying wish that he marry Catherine. Whether or not this was true, it was certainly convenient. Emperor Maximilian I had been attempting to marry his granddaughter Eleanor, Catherine's niece, to Henry; she had now been jilted. Henry's wedding to Catherine was kept low-key and was held at the friar's church in Greenwich on 11 June 1509. On 23 June 1509, Henry led the now 23-year-old Catherine from the Tower of London to Westminster Abbey for their coronation, which took place the following day. It was a grand affair: the king's passage was lined with tapestries and laid with fine cloth. Following the ceremony, there was a grand banquet in Westminster Hall. As Catherine wrote to her father, "our time is spent in continuous festival". Two days after his coronation, Henry arrested his father's two most unpopular ministers, Sir Richard Empson and Edmund Dudley. They were charged with high treason and were executed in 1510. Politically motivated executions would remain one of Henry's primary tactics for dealing with those who stood in his way. Henry also returned some of the money supposedly extorted by the two ministers. By contrast, Henry's view of the House of York – potential rival claimants for the throne – was more moderate than his father's had been. 
Several who had been imprisoned by his father, including Thomas Grey, 2nd Marquess of Dorset, were pardoned. Others went unreconciled; Edmund de la Pole was eventually beheaded in 1513, an execution prompted by his brother Richard siding against the king. Soon after marrying Henry, Catherine conceived. She gave birth to a stillborn girl on 31 January 1510. About four months later, Catherine again became pregnant. On 1 January 1511, New Year's Day, a son Henry was born. After the grief of losing their first child, the couple were pleased to have a boy and festivities were held, including a two-day joust known as the Westminster Tournament. However, the child died seven weeks later. Catherine had two stillborn sons in 1513 and 1515, but gave birth in February 1516 to a girl, Mary. Relations between Henry and Catherine had been strained, but they eased slightly after Mary's birth. Although Henry's marriage to Catherine has since been described as "unusually good", it is known that Henry took mistresses. It was revealed in 1510 that Henry had been conducting an affair with one of the sisters of Edward Stafford, 3rd Duke of Buckingham, either Elizabeth or Anne Hastings, Countess of Huntingdon. The most significant mistress for about three years, starting in 1516, was Elizabeth Blount. Blount is one of only two completely undisputed mistresses, considered by some to be few for a virile young king. Exactly how many Henry had is disputed: David Loades believes Henry had mistresses "only to a very limited extent", whilst Alison Weir believes there were numerous other affairs. Catherine is not known to have protested. In 1518 she fell pregnant again with another girl, who was also stillborn. Blount gave birth in June 1519 to Henry's illegitimate son, Henry FitzRoy. The young boy was made Duke of Richmond in June 1525 in what some thought was one step on the path to his eventual legitimisation. In 1533, FitzRoy married Mary Howard, but died childless three years later. 
At the time of his death in June 1536, Parliament was considering the Second Succession Act, which could have allowed him to become king.

France and the Habsburgs

In 1510, France, with a fragile alliance with the Holy Roman Empire in the League of Cambrai, was winning a war against Venice. Henry renewed his father's friendship with Louis XII of France, an issue that divided his council. Certainly, war with the combined might of the two powers would have been exceedingly difficult. Shortly thereafter, however, Henry also signed a pact with Ferdinand II of Aragon. After Pope Julius II created the anti-French Holy League in October 1511, Henry followed Ferdinand's lead and brought England into the new League. An initial joint Anglo-Spanish attack was planned for the spring to recover Aquitaine for England, the start of making Henry's dreams of ruling France a reality. The attack, however, following a formal declaration of war in April 1512, was not led by Henry personally and was a considerable failure; Ferdinand used it simply to further his own ends, and it strained the Anglo-Spanish alliance. Nevertheless, the French were pushed out of Italy soon after, and the alliance survived, with both parties keen to win further victories over the French. Henry then pulled off a diplomatic coup by convincing Emperor Maximilian to join the Holy League. Remarkably, Henry had also secured the promised title of "Most Christian King of France" from Julius and possibly coronation by the Pope himself in Paris, if only Louis could be defeated. On 30 June 1513, Henry invaded France, and his troops defeated a French army at the Battle of the Spurs – a relatively minor result, but one which was seized on by the English for propaganda purposes. Soon after, the English took Thérouanne and handed it over to Maximilian; Tournai, a more significant settlement, followed. Henry had led the army personally, complete with a large entourage.
His absence from the country, however, had prompted his brother-in-law James IV of Scotland to invade England at the behest of Louis. Nevertheless, the English army, overseen by Queen Catherine, decisively defeated the Scots at the Battle of Flodden on 9 September 1513. Among the dead was the Scottish king, thus ending Scotland's brief involvement in the war. These campaigns had given Henry a taste of the military success he so desired. However, despite initial indications, he decided not to pursue a 1514 campaign. He had been supporting Ferdinand and Maximilian financially during the campaign but had received little in return; England's coffers were now empty. With the replacement of Julius by Pope Leo X, who was inclined to negotiate for peace with France, Henry signed his own treaty with Louis: his sister Mary would become Louis' wife, having previously been pledged to the younger Charles, and peace was secured for eight years, a remarkably long time. Charles V, the nephew of Henry's wife Catherine, inherited a large empire in Europe, becoming king of Spain in 1516 and Holy Roman Emperor in 1519. When Louis XII of France died in 1515, he was succeeded by his cousin Francis I. These accessions left three relatively young rulers and an opportunity for a clean slate. The careful diplomacy of Cardinal Thomas Wolsey had resulted in the Treaty of London in 1518, aimed at uniting the kingdoms of western Europe in the wake of a new Ottoman threat, and it seemed that peace might be secured. Henry met the new French king, Francis, on 7 June 1520 at the Field of the Cloth of Gold near Calais for a fortnight of lavish entertainment. Both hoped for friendly relations in place of the wars of the previous decade. The strong air of competition laid to rest any hopes of a renewal of the Treaty of London, however, and conflict was inevitable. Henry had more in common with Charles, whom he met once before and once after Francis. 
Charles brought his realm into war with France in 1521; Henry offered to mediate, but little was achieved and by the end of the year Henry had aligned England with Charles. He still clung to his previous aim of restoring English lands in France but also sought to secure an alliance with Burgundy, then a territorial possession of Charles, and the continued support of the Emperor. A small English attack in the north of France made up little ground. Charles defeated and captured Francis at Pavia and could dictate peace, but he believed he owed Henry nothing. Sensing this, Henry decided to take England out of the war before his ally did, signing the Treaty of the More on 30 August 1525.

Marriages

Annulment from Catherine

During his marriage to Catherine of Aragon, Henry conducted an affair with Mary Boleyn, Catherine's lady-in-waiting. There has been speculation that Mary's two children, Henry Carey and Catherine Carey, were fathered by Henry, but this has never been proved, and the king never acknowledged them as he did in the case of Henry FitzRoy. In 1525, as Henry grew more impatient with Catherine's inability to produce the male heir he desired, he became enamoured of Mary Boleyn's sister, Anne Boleyn, then a charismatic young woman of 25 in the queen's entourage. Anne, however, resisted his attempts to seduce her, and refused to become his mistress as her sister had. It was in this context that Henry considered his three options for finding a dynastic successor and hence resolving what came to be described at court as the king's "great matter". These options were legitimising Henry FitzRoy, which would need the involvement of the Pope and would be open to challenge; marrying off Mary, his daughter with Catherine, as soon as possible and hoping for a grandson to inherit directly, though Mary was considered unlikely to conceive before Henry's death; or somehow rejecting Catherine and marrying someone else of child-bearing age.
The third option, probably because Henry saw the possibility of marrying Anne, was ultimately the most attractive to the 34-year-old king, and it soon became his absorbing desire to annul his marriage to the now 40-year-old Catherine. Henry's precise motivations and intentions over the coming years are not widely agreed on. Henry himself, at least in the early part of his reign, was a devout and well-informed Catholic to the extent that his 1521 publication Assertio Septem Sacramentorum ("Defence of the Seven Sacraments") earned him the title of Fidei Defensor (Defender of the Faith) from Pope Leo X. The work represented a staunch defence of papal supremacy, albeit one couched in somewhat contingent terms. It is not clear exactly when Henry changed his mind on the issue as he grew more intent on a second marriage. Certainly, by 1527, he had convinced himself that Catherine had produced no male heir because their union was "blighted in the eyes of God". Indeed, in marrying Catherine, his brother's wife, he had acted contrary to Leviticus 20:21, a justification Thomas Cranmer used to declare the marriage null. Martin Luther, on the other hand, had initially argued against the annulment, stating that Henry VIII could take a second wife in accordance with his teaching that the Bible allowed for polygamy but not divorce. Henry now believed the Pope had lacked the authority to grant a dispensation from this impediment. It was this argument Henry took to Pope Clement VII in 1527 in the hope of having his marriage to Catherine annulled, forgoing at least one less openly defiant line of attack. In going public, all hope of tempting Catherine to retire to a nunnery or otherwise stay quiet was lost. Henry sent his secretary, William Knight, to appeal directly to the Holy See by way of a deceptively worded draft papal bull. Knight was unsuccessful; the Pope could not be misled so easily.
Other missions concentrated on arranging an ecclesiastical court to meet in England, with a representative from Clement VII. Although Clement agreed to the creation of such a court, he never had any intention of empowering his legate, Lorenzo Campeggio, to decide in Henry's favour. This bias was perhaps the result of pressure from Emperor Charles V, Catherine's nephew, but it is not clear how far this influenced either Campeggio or the Pope. After less than two months of hearing evidence, Clement called the case back to Rome in July 1529, from which it was clear that it would never re-emerge. With the chance for an annulment lost, Cardinal Wolsey bore the blame. He was charged with praemunire in October 1529, and his fall from grace was "sudden and total". Briefly reconciled with Henry (and officially pardoned) in the first half of 1530, he was charged once more in November 1530, this time for treason, but died while awaiting trial. After a short period in which Henry took government upon his own shoulders, Sir Thomas More took on the role of Lord Chancellor and chief minister. Intelligent and able, but also a devout Catholic and opponent of the annulment, More initially cooperated with the king's new policy, denouncing Wolsey in Parliament. A year later, Catherine was banished from court, and her rooms were given to Anne Boleyn. Anne was an unusually educated and intellectual woman for her time and was keenly absorbed and engaged with the ideas of the Protestant Reformers, but the extent to which she herself was a committed Protestant is much debated. When Archbishop of Canterbury William Warham died, Anne's influence and the need to find a trustworthy supporter of the annulment had Thomas Cranmer appointed to the vacant position. This was approved by the Pope, unaware of the king's nascent plans for the Church. Henry was married to Catherine for 24 years. Their divorce has been described as a "deeply wounding and isolating" experience for Henry. 
Marriage to Anne Boleyn

In the winter of 1532, Henry met with Francis I at Calais and enlisted the support of the French king for his new marriage. Immediately upon returning to Dover in England, Henry, now 41, and Anne went through a secret wedding service. She soon became pregnant, and there was a second wedding service in London on 25 January 1533. On 23 May 1533, Cranmer, sitting in judgment at a special court convened at Dunstable Priory to rule on the validity of the king's marriage to Catherine of Aragon, declared the marriage of Henry and Catherine null and void. Five days later, on 28 May 1533, Cranmer declared the marriage of Henry and Anne to be valid. Catherine was formally stripped of her title as queen, becoming instead "princess dowager" as the widow of Arthur. In her place, Anne was crowned queen consort on 1 June 1533. The queen gave birth to a daughter slightly prematurely on 7 September 1533. The child was christened Elizabeth, in honour of Henry's mother, Elizabeth of York. Following the marriage, there was a period of consolidation, taking the form of a series of statutes of the Reformation Parliament aimed at finding solutions to any remaining issues, whilst protecting the new reforms from challenge, convincing the public of their legitimacy, and exposing and dealing with opponents. Although the canon law was dealt with at length by Cranmer and others, these acts were advanced by Thomas Cromwell, Thomas Audley and the Duke of Norfolk and indeed by Henry himself. Meanwhile, in May 1532, More had resigned as Lord Chancellor, leaving Cromwell as Henry's chief minister. With the Act of Succession 1533, Catherine's daughter, Mary, was declared illegitimate; Henry's marriage to Anne was declared legitimate; and Anne's issue declared to be next in the line of succession.
With the Acts of Supremacy in 1534, Parliament also recognised the king's status as head of the church in England and, together with the Act in Restraint of Appeals in 1532, abolished the right of appeal to Rome. It was only then that Pope Clement VII took the step of excommunicating the king and Cranmer, although the excommunication was not made official until some time later. The king and queen were not pleased with married life. The royal couple enjoyed periods of calm and affection, but Anne refused to play the submissive role expected of her. The vivacity and opinionated intellect that had made her so attractive as an illicit lover made her too independent for the largely ceremonial role of a royal wife and it made her many enemies. For his part, Henry disliked Anne's constant irritability and violent temper. After a false pregnancy or miscarriage in 1534, he saw her failure to give him a son as a betrayal. As early as Christmas 1534, Henry was discussing with Cranmer and Cromwell the chances of leaving Anne without having to return to Catherine. Henry is traditionally believed to have had an affair with Madge Shelton in 1535, although historian Antonia Fraser argues that Henry in fact had an affair with her sister Mary Shelton. Opposition to Henry's religious policies was quickly suppressed in England. A number of dissenting monks, including the first Carthusian Martyrs, were executed and many more pilloried. The most prominent resisters included John Fisher, Bishop of Rochester, and Sir Thomas More, both of whom refused to take the oath to the king. Neither Henry nor Cromwell sought at that stage to have the men executed; rather, they hoped that the two might change their minds and save themselves. Fisher openly rejected Henry as the Supreme Head of the Church, but More was careful to avoid openly breaking the Treasons Act of 1534, which (unlike later acts) did not forbid mere silence. 
Both men were subsequently convicted of high treason, however – More on the evidence of a single conversation with Richard Rich, the Solicitor General – and both were executed in the summer of 1535. These suppressions, as well as the Dissolution of the Lesser Monasteries Act of 1536, in turn contributed to more general resistance to Henry's reforms, most notably in the Pilgrimage of Grace, a large uprising in northern England in October 1536. Some 20,000 to 40,000 rebels were led by Robert Aske, together with parts of the northern nobility. Henry VIII promised the rebels he would pardon them and thanked them for raising the issues. Aske told the rebels they had been successful and they could disperse and go home. Henry saw the rebels as traitors and did not feel obliged to keep his promises to them, so when further violence occurred after Henry's offer of a pardon he was quick to break his promise of clemency. The leaders, including Aske, were arrested and executed for treason. In total, about 200 rebels were executed, and the disturbances ended.

Execution of Anne Boleyn

On 8 January 1536, news reached the king and queen that Catherine of Aragon had died. The following day, Henry dressed all in yellow, with a white feather in his bonnet. Queen Anne was pregnant again, and she was aware of the consequences if she failed to give birth to a son. Later that month, the king was unhorsed in a tournament and was badly injured; it seemed for a time that his life was in danger. When news of this accident reached the queen, she was sent into shock and miscarried a male child at about 15 weeks' gestation, on the day of Catherine's funeral, 29 January 1536. For most observers, this personal loss was the beginning of the end of this royal marriage. Although the Boleyn family still held important positions on the Privy Council, Anne had many enemies, including the Duke of Suffolk. Even her own uncle, the Duke of Norfolk, had come to resent her attitude to her power.
The Boleyns preferred France over the Emperor as a potential ally, but the king's favour had swung towards the latter (partly because of Cromwell), damaging the family's influence. Also opposed to Anne were supporters of reconciliation with Princess Mary (among them the former supporters of Catherine), who had reached maturity. A second annulment was now a real possibility, although it is commonly believed that it was Cromwell's anti-Boleyn influence that led opponents to look for a way of having her executed. Anne's downfall came shortly after she had recovered from her final miscarriage. Whether it was primarily the result of allegations of conspiracy, adultery, or witchcraft remains a matter of debate among historians. Early signs of a fall from grace included the king's new mistress, the 28-year-old Jane Seymour, being moved into new quarters, and Anne's brother, George Boleyn, being refused the Order of the Garter, which was instead given to Nicholas Carew. Between 30 April and 2 May, five men, including George Boleyn, were arrested on charges of treasonable adultery and accused of having sexual relationships with the queen. Anne was also arrested, accused of treasonous adultery and incest. Although the evidence against them was unconvincing, the accused were found guilty and condemned to death. The accused men were executed on 17 May 1536. Henry and Anne's marriage was annulled by Archbishop Cranmer at Lambeth on the same day. Cranmer appears to have had difficulty finding grounds for an annulment and probably based it on the prior liaison between Henry and Anne's sister Mary, which in canon law meant that Henry's marriage to Anne was, like his first marriage, within a forbidden degree of affinity and therefore void. At 8 am on 19 May 1536, Anne was executed on Tower Green.

Marriage to Jane Seymour; domestic and foreign affairs

The day after Anne's execution, the 45-year-old Henry became engaged to Seymour, who had been one of the queen's ladies-in-waiting.
They were married ten days later at the Palace of Whitehall, Whitehall, London, in the queen's closet, by Stephen Gardiner, Bishop of Winchester. On 12 October 1537, Jane gave birth to a son, Prince Edward, the future Edward VI. The birth was difficult, and Queen Jane died on 24 October 1537 from an infection and was buried in Windsor. The euphoria that had accompanied Edward's birth became sorrow, but it was only over time that Henry came to long for his wife. At the time, Henry recovered quickly from the shock. Measures were immediately put in place to find another wife for Henry, which, at the insistence of Cromwell and the Privy Council, were focused on the European continent. With Charles V distracted by the internal politics of his many kingdoms and also external threats, and Henry and Francis on relatively good terms, domestic and not foreign policy issues had been Henry's priority in the first half of the 1530s. In 1536, for example, Henry granted his assent to the Laws in Wales Act 1535, which legally annexed Wales, uniting England and Wales into a single nation. This was followed by the Second Succession Act (the Act of Succession 1536), which declared Henry's children by Jane to be next in the line of succession and declared both Mary and Elizabeth illegitimate, thus excluding them from the throne. The king was also granted the power to further determine the line of succession in his will, should he have no further issue. However, when Charles and Francis made peace in January 1539, Henry became increasingly paranoid, perhaps as a result of receiving a constant list of threats to the kingdom (real or imaginary, minor or serious) supplied by Cromwell in his role as spymaster. Enriched by the dissolution of the monasteries, Henry used some of his financial reserves to build a series of coastal defences and set some aside for use in the event of a Franco-German invasion. 
Marriage to Anne of Cleves

Having considered the matter, Cromwell suggested Anne, the 25-year-old sister of the Duke of Cleves, who was seen as an important ally in case of a Roman Catholic attack on England, for the duke fell between Lutheranism and Catholicism. Hans Holbein the Younger was dispatched to Cleves to paint a portrait of Anne for the king. Despite speculation that Holbein painted her in an overly flattering light, it is more likely that the portrait was accurate; Holbein remained in favour at court. After seeing Holbein's portrait, and urged on by the complimentary description of Anne given by his courtiers, the 49-year-old king agreed to wed Anne. However, it was not long before Henry wished to annul the marriage so he could marry another. Anne did not argue, and confirmed that the marriage had never been consummated. Anne's previous betrothal to the Duke of Lorraine's son Francis provided further grounds for the annulment. The marriage was subsequently dissolved, and Anne received the title of "The King's Sister", two houses, and a generous allowance. It was soon clear that Henry had fallen for the 17-year-old Catherine Howard, the Duke of Norfolk's niece. This worried Cromwell, for Norfolk was his political opponent. Shortly after, the religious reformers (and protégés of Cromwell) Robert Barnes, William Jerome and Thomas Garret were burned as heretics. Cromwell, meanwhile, fell out of favour although it is unclear exactly why, for there is little evidence of differences in domestic or foreign policy. Despite his role, he was never formally accused of being responsible for Henry's failed marriage. Cromwell was now surrounded by enemies at court, with Norfolk also able to draw on his niece Catherine's position.
Cromwell was charged with treason, selling export licences, granting passports, and drawing up commissions without permission, and may also have been blamed for the failure of the foreign policy that accompanied the attempted marriage to Anne. He was subsequently attainted and beheaded.

Marriage to Catherine Howard

On 28 July 1540 (the same day Cromwell was executed), Henry married the young Catherine Howard, a first cousin and lady-in-waiting of Anne Boleyn. He was absolutely delighted with his new queen and awarded her the lands of Cromwell and a vast array of jewellery. Soon after the marriage, however, Queen Catherine had an affair with the courtier Thomas Culpeper. She also employed Francis Dereham, who had previously been informally engaged to her and had an affair with her prior to her marriage, as her secretary. The Privy Council was informed of her affair with Dereham whilst Henry was away; Thomas Cranmer was dispatched to investigate, and he brought evidence of Queen Catherine's previous affair with Dereham to the king's notice. Though Henry originally refused to believe the allegations, Dereham confessed. It took another meeting of the council, however, before Henry believed the accusations against Dereham and went into a rage, blaming the council before consoling himself in hunting. When questioned, the queen could have admitted a prior contract to marry Dereham, which would have made her subsequent marriage to Henry invalid, but she instead claimed that Dereham had forced her to enter into an adulterous relationship. Dereham, meanwhile, exposed Catherine's relationship with Culpeper. Culpeper and Dereham were both executed, and Catherine too was beheaded on 13 February 1542.

Marriage to Catherine Parr

Henry married his last wife, the wealthy widow Catherine Parr, in July 1543. A reformer at heart, she argued with Henry over religion.
Henry remained committed to an idiosyncratic mixture of Catholicism and Protestantism; the reactionary mood that had gained ground after Cromwell's fall had neither eliminated his Protestant streak nor been overcome by it. Parr helped reconcile Henry with his daughters, Mary and Elizabeth. In 1543, the Third Succession Act put them back in the line of succession after Edward. The same act allowed Henry to determine further succession to the throne in his will.

Shrines destroyed and monasteries dissolved

In 1538, the chief minister Thomas Cromwell pursued an extensive campaign against what the government termed "idolatry" practised under the old religion, culminating in September with the dismantling of the shrine of St. Thomas Becket at Canterbury Cathedral. As a consequence, the king was excommunicated by Pope Paul III on 17 December of the same year. In 1540, Henry sanctioned the complete destruction of shrines to saints. In 1542, England's remaining monasteries were all dissolved, and their property transferred to the Crown. Abbots and priors lost their seats in the House of Lords; only archbishops and bishops remained. Consequently, the Lords Spiritual—as members of the clergy with seats in the House of Lords were known—were for the first time outnumbered by the Lords Temporal.

Second invasion of France and the "Rough Wooing" of Scotland

The 1539 alliance between Francis and Charles had soured, eventually degenerating into renewed war. With Catherine of Aragon and Anne Boleyn dead, relations between Charles and Henry improved considerably, and Henry concluded a secret alliance with the Emperor and decided to enter the Italian War in favour of his new ally. An invasion of France was planned for 1543. In preparation for it, Henry moved to eliminate the potential threat of Scotland under the youthful James V. The Scots were defeated at the Battle of Solway Moss on 24 November 1542, and James died on 15 December.
Henry now hoped to unite the crowns of England and Scotland by marrying his son Edward to James' successor, Mary. The Scottish Regent Lord Arran agreed to the marriage in the Treaty of Greenwich on 1 July 1543, but it was rejected by the Parliament of Scotland on 11 December. The result was eight years of war between England and Scotland, a campaign later dubbed "the Rough Wooing". Despite several peace treaties, unrest continued in Scotland until Henry's death. Despite the early success with Scotland, Henry hesitated to invade France, annoying Charles. Henry finally went to France in June 1544 with a two-pronged attack. One force under Norfolk ineffectively besieged Montreuil. The other, under Suffolk, laid siege to Boulogne. Henry later took personal command, and Boulogne fell on 18 September 1544. However, Henry had refused Charles' request to march against Paris. Charles' own campaign fizzled, and he made peace with France that same day. Henry was left alone against France, unable to make peace. Francis attempted to invade England in the summer of 1545 but reached only the Isle of Wight before being repulsed in the Battle of the Solent. Financially exhausted, France and England signed the Treaty of Camp on 7 June 1546. Henry secured Boulogne for eight years. The city was then to be returned to France for 2 million crowns (£750,000). Henry needed the money; the 1544 campaign had cost £650,000, and England was once again bankrupt.

Physical decline and death

Late in life, Henry became obese and had to be moved about with the help of mechanical devices. He was covered with painful, pus-filled boils and possibly suffered from gout. His obesity and other medical problems can be traced to the jousting accident in 1536 in which he suffered a leg wound. The accident reopened and aggravated an injury he had sustained years earlier, to the extent that his doctors found it difficult to treat.
The chronic wound festered for the remainder of his life and became ulcerated, preventing him from maintaining the level of physical activity he had previously enjoyed. The jousting accident is also believed to have caused Henry's mood swings, which may have had a dramatic effect on his personality and temperament. The theory that Henry suffered from syphilis has been dismissed by most historians. Historian Susan Maclean Kybett ascribes his demise to scurvy, which is caused by insufficient vitamin C most often due to a lack of fresh fruits and vegetables in one's diet. Alternatively, his wives' pattern of pregnancies and his mental deterioration have led some to suggest that he may have been Kell positive and suffered from McLeod syndrome. According to another study, Henry's history and body morphology may have been the result of traumatic brain injury after his 1536 jousting accident, which in turn led to a neuroendocrine cause of his obesity. This analysis identifies growth hormone deficiency (GHD) as the reason for his increased adiposity but also significant behavioural changes noted in his later years, including his multiple marriages. Henry's obesity hastened his death at the age of 55, on 28 January 1547 in the Palace of Whitehall, on what would have been his father's 90th birthday. The tomb he had planned (with components taken from the tomb intended for Cardinal Wolsey) was only partly constructed and was never completed. (The sarcophagus and its base were later removed and used for Lord Nelson's tomb in the crypt of St. Paul's Cathedral.) Henry was interred in a vault at St George's Chapel, Windsor Castle, next to Jane Seymour. Over 100 years later, King Charles I (ruled 1625–1649) was buried in the same vault.

Wives, mistresses, and children

English historian and House of Tudor expert David Starkey describes Henry VIII as a husband: What is extraordinary is that Henry was usually a very good husband.
And he liked women—that's why he married so many of them! He was very tender to them, we know that he addressed them as "sweetheart." He was a good lover, he was very generous: the wives were given huge settlements of land and jewels—they were loaded with jewels. He was immensely considerate when they were pregnant. But, once he had fallen out of love... he just cut them off. He just withdrew. He abandoned them. They didn't even know he'd left them.

Succession

Upon Henry's death, he was succeeded by his son Edward VI. Since Edward was then only nine years old, he could not rule directly. Instead, Henry's will designated 16 executors to serve on a council of regency until Edward reached 18. The executors chose Edward Seymour, 1st Earl of Hertford, Jane Seymour's elder brother, to be Lord Protector of the Realm. If Edward died childless, the throne was to pass to Mary, Henry VIII's daughter by Catherine of Aragon, and her heirs. If Mary's issue failed, the crown was to go to Elizabeth, Henry's daughter by Anne Boleyn, and her heirs. Finally, if Elizabeth's line became extinct, the crown was to be inherited by the descendants of Henry VIII's deceased younger sister, Mary, the Greys. The descendants of Henry's sister Margaret—the Stuarts, rulers of Scotland—were thereby excluded from the succession. This provision ultimately failed when James VI of Scotland became King of England in 1603.

Public image

Henry cultivated the image of a Renaissance man, and his court was a centre of scholarly and artistic innovation and glamorous excess, epitomised by the Field of the Cloth of Gold. He scouted the country for choirboys, taking some directly from Wolsey's choir, and introduced Renaissance music into court. Musicians included Benedict de Opitiis, Richard Sampson, Ambrose Lupo, and Venetian organist Dionisio Memo, and Henry himself kept a considerable collection of instruments. He was skilled on the lute and played the organ, and was a talented player of the virginals.
He could also sightread music and sing well. He was an accomplished musician, author, and poet; his best-known piece of music is "Pastime with Good Company" ("The Kynges Ballade"), and he is reputed to have written "Greensleeves" but probably did not. Henry was an avid gambler and dice player, and excelled at sports, especially jousting, hunting, and real tennis. He was also known for his strong defence of conventional Christian piety. He was involved in the construction and improvement of several significant buildings, including Nonsuch Palace, King's College Chapel, Cambridge, and Westminster Abbey in London. Many of the existing buildings which he improved were properties confiscated from Wolsey, such as Christ Church, Oxford, Hampton Court Palace, the Palace of Whitehall, and Trinity College, Cambridge. Henry was an intellectual, the first English king with a modern humanist education. He read and wrote English, French, and Latin, and owned a large library. He annotated many books and published one of his own, and he had numerous pamphlets and lectures prepared to support the reformation of the church. Richard Sampson's Oratio (1534), for example, was an argument for absolute obedience to the monarchy and claimed that the English church had always been independent of Rome. At the popular level, theatre and minstrel troupes funded by the crown travelled around the land to promote the new religious practices; the pope and Catholic priests and monks were mocked as foreign devils, while Henry was hailed as the glorious king of England and as a brave and heroic defender of the true faith. Henry worked hard to present an image of unchallengeable authority and irresistible power. Henry was a large, well-built athlete: tall, strong, and broad in proportion.
His athletic activities were more than pastimes; they were political devices that served multiple goals, enhancing his image, impressing foreign emissaries and rulers, and conveying his ability to suppress any rebellion. He arranged a jousting tournament at Greenwich in 1517 where he wore gilded armour and gilded horse trappings, and outfits of velvet, satin, and cloth of gold with pearls and jewels. It suitably impressed foreign ambassadors, one of whom wrote home that "the wealth and civilisation of the world are here, and those who call the English barbarians appear to me to render themselves such". Henry finally retired from jousting in 1536 after a heavy fall from his horse left him unconscious for two hours, but he continued to sponsor two lavish tournaments a year. He then started gaining weight and lost the trim, athletic figure that had made him so handsome, and his courtiers began dressing in heavily padded clothes to emulate and flatter him. His health rapidly declined near the end of his reign. Government The power of Tudor monarchs, including Henry, was 'whole' and 'entire', ruling, as they claimed, by the grace of God alone. The crown could also rely on the exclusive use of those functions that constituted the royal prerogative. These included acts of diplomacy (including royal marriages), declarations of war, management of the coinage, the issue of royal pardons and the power to summon and dissolve parliament as and when required. Nevertheless, as evident during Henry's break with Rome, the monarch stayed within established limits, whether legal or financial, that forced him to work closely with both the nobility and parliament (representing the gentry). In practice, Tudor monarchs used patronage to maintain a royal court that included formal institutions such as the Privy Council as well as more informal advisers and confidants. 
Both the rise and fall of court nobles could be swift: Henry did undoubtedly execute at will, burning or beheading two of his wives, 20 peers, four leading public servants, six close attendants and friends, one cardinal (John Fisher) and numerous abbots. Among those who were in favour at any given point in Henry's reign, one could usually be identified as a chief minister, though one of the enduring debates in the historiography of the period has been the extent to which those chief ministers controlled Henry rather than vice versa. In particular, historian G. R. Elton has argued that one such minister, Thomas Cromwell, led a "Tudor revolution in government" independently of the king, whom Elton presented as an opportunistic, essentially lazy participant in the nitty-gritty of politics. Where Henry did intervene personally in the running of the country, Elton argued, he mostly did so to its detriment. The prominence and influence of faction in Henry's court is similarly discussed in the context of at least five episodes of Henry's reign, including the downfall of Anne Boleyn. From 1514 to 1529, Thomas Wolsey (1473–1530), a cardinal of the established Church, oversaw domestic and foreign policy for the king from his position as Lord Chancellor. Wolsey centralised the national government and extended the jurisdiction of the conciliar courts, particularly the Star Chamber. The Star Chamber's overall structure remained unchanged, but Wolsey used it to provide much-needed reform of the criminal law. The power of the court itself did not outlive Wolsey, however, since no serious administrative reform was undertaken and its role eventually devolved to the localities. Wolsey helped fill the gap left by Henry's declining participation in government (particularly in comparison to his father) but did so mostly by imposing himself in the king's place. 
His use of these courts to pursue personal grievances, and particularly to treat delinquents as mere examples of a whole class worthy of punishment, angered the rich, who were annoyed as well by his enormous wealth and ostentatious living. Following Wolsey's downfall, Henry took full control of his government, although at court numerous complex factions continued to try to ruin and destroy each other. Thomas Cromwell (c. 1485–1540) also came to define Henry's government. Returning to England from the continent in 1514 or 1515, Cromwell soon entered Wolsey's service. He turned to law, also picking up a good knowledge of the Bible, and was admitted to Gray's Inn in 1524. He became Wolsey's "man of all work". Driven in part by his religious beliefs, Cromwell attempted to reform the body politic of the English government through discussion and consent, and through the vehicle of continuity, not outward change. Many saw him as the man they wanted to bring about their shared aims, including Thomas Audley. By 1531, Cromwell and his associates were already responsible for the drafting of much legislation. Cromwell's first office was that of the master of the king's jewels in 1532, from which he began to invigorate the government finances. By that point, Cromwell's power as an efficient administrator, in a Council full of politicians, exceeded what Wolsey had achieved. Cromwell did much work through his many offices to remove the tasks of government from the Royal Household (and ideologically from the personal body of the king) and into a public state. But he did so in a haphazard fashion that left several remnants, not least because he needed to retain Henry's support, his own power, and the possibility of actually achieving the plan he set out. Cromwell made the various income streams Henry VII put in place more formal and assigned largely autonomous bodies for their administration. 
The role of the King's Council was transferred to a reformed Privy Council, much smaller and more efficient than its predecessor. A difference emerged between the king's financial health and the country's, although Cromwell's fall undermined much of his bureaucracy, which required him to keep order among the many new bodies and prevent profligate spending that strained relations as well as finances. Cromwell's reforms ground to a halt in 1539, the initiative lost, and he failed to secure the passage of an enabling act, the Proclamation by the Crown Act 1539. He was executed on 28 July 1540. Finances Henry inherited a vast fortune and a prosperous economy from his father, who had been frugal. This fortune is estimated at £1,250,000 (the equivalent of £375 million today). By comparison, Henry VIII's reign was a near disaster financially. He augmented the royal treasury by seizing church lands, but his heavy spending and long periods of mismanagement damaged the economy. Henry spent much of his wealth on maintaining his court and household, including many of the building works he undertook on royal palaces. He hung 2,000 tapestries in his palaces; by comparison, James V of Scotland hung just 200. Henry took pride in showing off his collection of weapons, which included exotic archery equipment, 2,250 pieces of land ordnance and 6,500 handguns. Tudor monarchs had to fund all government expenses out of their own income. This income came from the Crown lands that Henry owned as well as from customs duties like tonnage and poundage, granted by parliament to the king for life. During Henry's reign the revenues of the Crown remained constant (around £100,000), but were eroded by inflation and rising prices brought about by war. Indeed, war and Henry's dynastic ambitions in Europe exhausted the surplus he had inherited from his father by the mid-1520s. 
Henry VII had not involved Parliament in his affairs very much, but Henry VIII had to turn to Parliament during his reign for money, in particular for grants of subsidies to fund his wars. The dissolution of the monasteries provided a means to replenish the treasury, and as a result, the Crown took possession of monastic lands worth £120,000 (£36 million) a year. The Crown had profited by a small amount in 1526 when Wolsey put England onto a gold, rather than silver, standard, and had debased the currency slightly. Cromwell debased the currency more significantly, starting in Ireland in 1540. The English pound halved in value against the Flemish pound between 1540 and 1551 as a result. The nominal profit made was significant, helping to bring income and expenditure together, but it had a catastrophic effect on the country's economy. In part, it helped to bring about a period of very high inflation from 1544 onwards. Reformation Henry is generally credited with initiating the English Reformation—the process of transforming England from a Catholic country to a Protestant one—though his progress at the elite and mass levels is disputed, and the precise narrative not widely agreed upon. Certainly, in 1527, Henry, until then an observant and well-informed Catholic, appealed to the Pope for an annulment of his marriage to Catherine. No annulment was immediately forthcoming, since the papacy was now under the control of Charles V, Catherine's nephew. The traditional narrative gives this refusal as the trigger for Henry's rejection of papal supremacy, which he had previously defended. Yet as E. L. Woodward put it, Henry's determination to annul his marriage with Catherine was the occasion rather than the cause of the English Reformation so that "neither too much nor too little" should be made of the annulment. Historian A. F.
Pollard has argued that even if Henry had not needed an annulment, he might have come to reject papal control over the governance of England purely for political reasons. Indeed, Henry needed a son to secure the Tudor Dynasty and avert the risk of civil war over disputed succession. In any case, between 1532 and 1537, Henry instituted a number of statutes that dealt with the relationship between king and pope and hence the structure of the nascent Church of England. These included the Statute in Restraint of Appeals (passed 1533), which extended the charge of praemunire against all who introduced papal bulls into England, potentially exposing them to the death penalty if found guilty. Other acts included the Supplication against the Ordinaries and the Submission of the Clergy, which recognised Royal Supremacy over the church. The Ecclesiastical Appointments Act 1534 required the clergy to elect bishops nominated by the Sovereign. The Act of Supremacy in 1534 declared that the king was "the only Supreme Head on Earth of the Church of England" and the Treasons Act 1534 made it high treason, punishable by death, to refuse the Oath of Supremacy acknowledging the king as such. Similarly, following the passage of the Act of Succession 1533, all adults in the kingdom were required to acknowledge the Act's provisions (declaring Henry's marriage to Anne legitimate and his marriage to Catherine illegitimate) by oath; those who refused were subject to imprisonment for life, and any publisher or printer of any literature alleging that the marriage to Anne was invalid subject to the death penalty. Finally, the Peter's Pence Act was passed, and it reiterated that England had "no superior under God, but only your Grace" and that Henry's "imperial crown" had been diminished by "the unreasonable and uncharitable usurpations and exactions" of the Pope. The king had much support from the Church under Cranmer. 
To Cromwell's annoyance, Henry insisted on parliamentary time to discuss questions of faith, which he achieved through the Duke of Norfolk. This led to the passing of the Act of Six Articles, whereby six major questions were all answered by asserting the religious orthodoxy, thus restraining the reform movement in England. It was followed by the beginnings of a reformed liturgy and of the Book of Common Prayer, which would take until 1549 to complete. But this victory for religious conservatives did not convert into much change in personnel, and Cranmer remained in his position. Overall, the rest of Henry's reign saw a subtle movement away from religious orthodoxy, helped in part by the deaths of prominent figures from before the break with Rome, especially the executions of Thomas More and John Fisher in 1535 for refusing to renounce papal authority. Henry established a new political theology of obedience to the crown that continued for the next decade. It reflected Martin Luther's new interpretation of the fourth commandment ("Honour thy father and mother"), brought to England by William Tyndale. The founding of royal authority on the Ten Commandments was another important shift: reformers within the Church used the Commandments' emphasis on faith and the word of God, while conservatives emphasised the need for dedication to God and doing good. The reformers' efforts lay behind the publication of the Great Bible in 1539 in English. Protestant Reformers still faced persecution, particularly over objections to Henry's annulment. Many fled abroad, including the influential Tyndale, who was eventually executed and his body burned at Henry's behest. When taxes once payable to Rome were transferred to the Crown, Cromwell saw the need to assess the taxable value of the Church's extensive holdings as they stood in 1535. The result was an extensive compendium, the Valor Ecclesiasticus. 
In September 1535, Cromwell commissioned a more general visitation of religious institutions, to be undertaken by four appointee visitors. The visitation focussed almost exclusively on the country's religious houses, with largely negative conclusions. In addition to reporting back to Cromwell, the visitors made the lives of the monks more difficult by enforcing strict behavioural standards. The result was to encourage self-dissolution. In any case, the evidence Cromwell gathered led swiftly to the beginning of the state-enforced dissolution of the monasteries, with all religious houses worth less than £200 vested by statute in the crown in January 1536. After a short pause, surviving religious houses were transferred one by one to the Crown and new owners, and the dissolution confirmed by a further statute in 1539. By January 1540 no such houses remained; 800 had been dissolved. The process had been efficient, with minimal resistance, and brought the crown some £90,000 a year. The extent to which the dissolution of all houses was planned from the start is debated by historians; there is some evidence that major houses were originally intended only to be reformed. Cromwell's actions transferred a fifth of England's landed wealth to new hands. The programme was designed primarily to create a landed gentry beholden to the crown, which would use the lands much more efficiently. Although little opposition to the supremacy could be found in England's religious houses, they had links to the international church and were an obstacle to further religious reform. Response to the reforms was mixed. The religious houses had been the only support of the impoverished, and the reforms alienated much of the populace outside London, helping to provoke the great northern rising of 1536–37, known as the Pilgrimage of Grace. Elsewhere the changes were accepted and welcomed, and those who clung to Catholic rites kept quiet or moved in secrecy. 
They reemerged during the reign of Henry's daughter Mary (1553–58). Military Apart from permanent garrisons at Berwick, Calais, and Carlisle, England's standing army numbered only a few hundred men. This was increased only slightly by Henry. Henry's invasion force of 1513, some 30,000 men, was composed of billmen and longbowmen, at a time when the other European nations were moving to hand guns and pikemen. But the difference in capability was at this stage not significant, and Henry's forces had new armour and weaponry. They were also supported by battlefield artillery and the war wagon, relatively recent innovations, and several large and expensive siege guns. The invasion force of 1544 was similarly well-equipped and organised, although command on the battlefield was entrusted to the dukes of Suffolk and Norfolk, which in the latter case produced disastrous results at Montreuil. Henry's break with Rome incurred the threat of a large-scale French or Spanish invasion. To guard against this, in 1538 he began to build a chain of expensive, state-of-the-art defences along Britain's southern and eastern coasts, from Kent to Cornwall, largely built of material gained from the demolition of the monasteries. These were known as Henry VIII's Device Forts. He also strengthened existing coastal defence fortresses such as Dover Castle and, at Dover, Moat Bulwark and Archcliffe Fort, which he visited for a few months to supervise. Wolsey had many years before conducted the censuses required for an overhaul of the system of militia, but no reform resulted. In 1538–39, Cromwell overhauled the shire musters, but his work mainly served to demonstrate how inadequate they were in organisation. The building works, including that at Berwick, along with the reform of the militias and musters, were eventually finished under Queen Mary. Henry is traditionally cited as one of the founders of the Royal Navy.
Technologically, Henry invested in large cannon for his warships, an idea that had taken hold in other countries, to replace the smaller serpentines in use. He also flirted with designing ships personally. His contribution to larger vessels, if any, is unknown, but it is believed that he influenced the design of rowbarges and similar galleys. Henry was also responsible for the creation of a permanent navy, with the supporting anchorages and dockyards. Tactically, Henry's reign saw the Navy move away from boarding tactics to employ gunnery instead. The Tudor navy was enlarged up to 50 ships (the Mary Rose among them), and Henry was responsible for the establishment of the "council for marine causes" to oversee the maintenance and operation of the Navy, becoming the basis for the later Admiralty. Ireland At the beginning of Henry's reign, Ireland was effectively divided into three zones: the Pale, where English rule was unchallenged; Leinster and Munster, the so-called "obedient land" of Anglo-Irish peers; and the Gaelic Connaught and Ulster, with merely nominal English rule. Until 1513, Henry continued the policy of his father, to allow Irish lords to rule in the king's name and accept steep divisions between the communities. However, upon the death of the 8th Earl of Kildare, governor of Ireland, fractious Irish politics combined with a more ambitious Henry to cause trouble. When Thomas Butler, 7th Earl of Ormond died, Henry recognised one successor for Ormond's English, Welsh and Scottish lands, whilst in Ireland another took control. Kildare's successor, the 9th Earl, was replaced as Lord Lieutenant of Ireland by The Earl of Surrey in 1520. Surrey's ambitious aims were costly but ineffective; English rule became trapped between winning the Irish lords over with diplomacy, as favoured by Henry and Wolsey, and a sweeping military occupation as proposed by Surrey. 
Surrey was recalled in 1521, with Piers Butler – one of the claimants to the Earldom of Ormond – appointed in his place. Butler proved unable to control opposition, including that of Kildare. Kildare was appointed chief governor in 1524, resuming his dispute with Butler, which had before been in a lull. Meanwhile, the Earl of Desmond, an Anglo-Irish peer, had turned his support to Richard de la Pole as pretender to the English throne; when in 1528 Kildare failed to take suitable actions against him, Kildare was once again removed from his post. The Desmond situation was resolved on his death in 1529, which was followed by a period of uncertainty. This was effectively ended with the appointment of Henry FitzRoy, Duke of Richmond and the king's son, as lord lieutenant. Richmond had never before visited Ireland, his appointment a break with past policy. For a time it looked as if peace might be restored with the return of Kildare to Ireland to manage the tribes, but the effect was limited and the Irish parliament soon rendered ineffective. Ireland began to receive the attention of Cromwell, who had supporters of Ormond and Desmond promoted. Kildare, on the other hand, was summoned to London; after some hesitation, he departed for London in 1534, where he would face charges of treason. His son, Thomas, Lord Offaly was more forthright, denouncing the king and leading a "Catholic crusade" against the king, who was by this time mired in marital problems. Offaly had the Archbishop of Dublin murdered and besieged Dublin. Offaly led a mixture of Pale gentry and Irish tribes, although he failed to secure the support of Lord Darcy, a sympathiser, or Charles V. What was effectively a civil war was ended with the intervention of 2,000 English troops – a large army by Irish standards – and the execution of Offaly (his father was already dead) and his uncles. 
Although the Offaly revolt was followed by a determination to rule Ireland more closely, Henry was wary of drawn-out conflict with the tribes, and a royal commission recommended that the only relationship with the tribes was to be promises of peace, their land protected from English expansion. The man to lead this effort was Sir Anthony St Leger, as Lord Deputy of Ireland, who would remain in the post beyond Henry's death. Until the break with Rome, it was widely believed that Ireland was a Papal possession granted as a mere fiefdom to the English king, so in 1541 Henry asserted England's claim to the Kingdom of Ireland free from the Papal overlordship. This change did, however, also allow a policy of peaceful reconciliation and expansion: the Lords of Ireland would grant their lands to the king, before being returned as fiefdoms. The incentive to comply with Henry's request was an accompanying barony, and thus a right to sit in the Irish House of Lords, which was to run in parallel with England's. The Irish law of the tribes did not suit such an arrangement, because the chieftain did not have the required rights; this made progress tortuous, and the plan was abandoned in 1543, not to be replaced. Historiography The complexities and sheer scale of Henry's legacy ensured that, in the words of Betteridge and Freeman, "throughout the centuries, Henry has been praised and reviled, but he has never been ignored". Historian J.D. Mackie sums up Henry's personality and its impact on his achievements and popularity. A particular focus of modern historiography has been the extent to which the events of Henry's life (including his marriages, foreign policy and religious changes) were the result of his own initiative and, if they were, whether they were the result of opportunism or of a principled undertaking by Henry. The traditional interpretation of those events was provided by historian A.F.
Pollard, who in 1902 presented his own, largely positive, view of the king, lauding him, "as the king and statesman who, whatever his personal failings, led England down the road to parliamentary democracy and empire". Pollard's interpretation remained the dominant interpretation of Henry's life until the publication of the doctoral thesis of G. R. Elton in 1953. Elton's book on The Tudor Revolution in Government maintained Pollard's positive interpretation of the Henrician period as a whole, but reinterpreted Henry himself as a follower rather than a leader. For Elton, it was Cromwell and not Henry who undertook the changes in government – Henry was shrewd but lacked the vision to follow a complex plan through. Henry was little more, in other words, than an "ego-centric monstrosity" whose reign "owed its successes and virtues to better and greater men about him; most of its horrors and failures sprang more directly from [the king]". Although the central tenets of Elton's thesis have since been questioned, it has consistently provided the starting point for much later work, including that of J. J. Scarisbrick, his student. Scarisbrick largely kept Elton's regard for Cromwell's abilities but returned agency to Henry, who Scarisbrick considered to have ultimately directed and shaped policy. For Scarisbrick, Henry was a formidable, captivating man who "wore regality with a splendid conviction". The effect of endowing Henry with this ability, however, was largely negative in Scarisbrick's eyes: to Scarisbrick, the Henrician period was one of upheaval and destruction and those in charge worthy of blame more than praise. Even among more recent biographers, including David Loades, David Starkey and John Guy, there has ultimately been little consensus on the extent to which Henry was responsible for the changes he oversaw or the assessment of those he did bring about. 
This lack of clarity about Henry's control over events has contributed to the variation in the qualities ascribed to him: religious conservative or dangerous radical; lover of beauty or brutal destroyer of priceless artefacts; friend and patron or betrayer of those around him; chivalry incarnate or ruthless chauvinist. One traditional approach, favoured by Starkey and others, is to divide Henry's reign into two halves, the first Henry being dominated by positive qualities (politically inclusive, pious, athletic but also intellectual) who presided over a period of stability and calm, and the latter a "hulking tyrant" who presided over a period of dramatic, sometimes whimsical, change. Other writers have tried to merge Henry's disparate personality into a single whole; Lacey Baldwin Smith, for example, considered him an egotistical borderline neurotic given to great fits of temper and deep and dangerous suspicions, with a mechanical and conventional, but deeply held piety, and having at best a mediocre intellect. Style and arms Many changes were made to the royal style during his reign. Henry originally used the style "Henry the Eighth, by the Grace of God, King of England, France and Lord of Ireland". In 1521, pursuant to a grant from Pope Leo X rewarding Henry for his Defence of the Seven Sacraments, the royal style became "Henry the Eighth, by the Grace of God, King of England and France, Defender of the Faith and Lord of Ireland". Following Henry's excommunication, Pope Paul III rescinded the grant of the title "Defender of the Faith", but an Act of Parliament (35 Hen 8 c 3) declared that it remained valid; and it continues in royal usage to the present day, as evidenced by the letters FID DEF or F.D. on all British coinage. Henry's motto was "Coeur Loyal" ("true heart"), and he had this embroidered on his clothes in the form of a heart symbol and with the word "loyal". His emblem was the Tudor rose and the Beaufort portcullis. 
As king, Henry's arms were the same as those used by his predecessors since Henry IV: Quarterly, Azure three fleurs-de-lys Or (for France) and Gules three lions passant guardant in pale Or (for England). In 1535, Henry added the "supremacy phrase" to the royal style, which became "Henry the Eighth, by the Grace of God, King of England and France, Defender of the Faith, Lord of Ireland and of the Church of England in Earth Supreme Head". In 1536, the phrase "of the Church of England" changed to "of the Church of England and also of Ireland". In 1541, Henry had the Irish Parliament change the title "Lord of Ireland" to "King of Ireland" with the Crown of Ireland Act 1542, after being advised that many Irish people regarded the Pope as the true head of their country, with the Lord acting as a mere representative. The reason the Irish regarded the Pope as their overlord was that Ireland had originally been given to King Henry II of England by Pope Adrian IV in the 12th century as a feudal territory under papal overlordship. The meeting of the Irish Parliament that proclaimed Henry VIII as King of Ireland was the first meeting attended by the Gaelic Irish chieftains as well as the Anglo-Irish aristocrats. The style "Henry the Eighth, by the Grace of God, King of England, France and Ireland, Defender of the Faith and of the Church of England and also of Ireland in Earth Supreme Head" remained in use until the end of Henry's reign. 
See also: Cestui que; Cultural depictions of Henry VIII; Family tree of English monarchs; History of the foreign relations of the United Kingdom; Inventory of Henry VIII; List of English monarchs; Tudor period; Mouldwarp.
Haryana
Haryana is an Indian state located in the northern part of the country. It was carved out of the former state of East Punjab on 1 November 1966 on a linguistic basis. It is ranked 21st in terms of area, with less than 1.4% of India's land area. The state capital is Chandigarh, which it shares with the neighboring state of Punjab, and the most populous city is Faridabad, which is a part of the National Capital Region. The city of Gurgaon is among India's largest financial and technology hubs. Haryana has 6 administrative divisions, 22 districts, 72 sub-divisions, 93 revenue tehsils, 50 sub-tehsils, 140 community development blocks, 154 cities and towns, 7,356 villages, and 6,222 village panchayats. Haryana contains 32 special economic zones (SEZs), mainly located within the industrial corridor projects connecting the National Capital Region. Gurgaon is considered one of the major information technology and automobile hubs of India. Haryana ranks 11th among Indian states in the human development index. The economy of Haryana is the 13th largest in India by gross state domestic product (GSDP), and the state has the country's 5th-highest GSDP per capita. Haryana has the highest unemployment rate among Indian states. The state is rich in history, monuments, heritage, flora, fauna and tourism, with a well-developed economy and a network of national highways and state roads. It is bordered by Himachal Pradesh to the north-east, by the river Yamuna along its eastern border with Uttar Pradesh, by Rajasthan to the west and south, and by the Ghaggar-Hakra River along its northern border with Punjab. Since Haryana surrounds the country's capital New Delhi on three sides (north, west and south), a large area of the state is included in the economically important National Capital Region of India for the purposes of planning and development.
Etymology Anthropologists hold the view that Haryana was known by this name because in the post-Mahabharata period it was home to the Abhiras, who developed special skills in the art of agriculture. According to Pran Nath Chopra, Haryana got its name from Abhirayana-Ahirayana-Hirayana-Haryana. History Ancient period The villages of Rakhigarhi in Hisar district and Bhirrana in Fatehabad district are home to the largest and among the world's oldest ancient Indus Valley Civilisation sites, dated at over 9,000 years old. Evidence of paved roads, a drainage system, a large-scale rainwater collection and storage system, terracotta brick and statue production, and skilled metal working (in both bronze and precious metals) has been uncovered. According to archaeologists, Rakhigarhi may be the origin of the Harappan civilisation, which arose in the Ghaggar basin in Haryana and gradually moved to the Indus Valley. During the Vedic era, Haryana was the site of the Kuru Kingdom, one of India's great Mahajanapadas. The south of Haryana is the claimed location of the Vedic Brahmavarta region. Medieval period Ancient bronze and stone idols of Jain Tirthankaras were found in archaeological expeditions in Badli, Bhiwani (Ranila, Charkhi Dadri and Badhra), Dadri, Gurgaon (Ferozepur Jhirka), Hansi, Hisar, Kasan, Nahad, Narnaul, Pehowa, Rewari, Rohad, Rohtak (Asthal Bohar) and Sonepat in Haryana. The Pushyabhuti dynasty ruled parts of northern India in the 7th century with its capital at Thanesar; Harsha was a prominent king of the dynasty. The Tomara dynasty ruled the south Haryana region in the 10th century; Anangpal Tomar was a prominent king among the Tomaras. After the sack of Bhatner fort during the Timurid conquests of India in 1398, Timur attacked and sacked the cities of Sirsa, Fatehabad, Sunam, Kaithal and Panipat. 
When he reached the town of Sarsuti (Sirsa), the residents, who were mostly non-Muslims, fled and were chased by a detachment of Timur's troops, and thousands of them were killed and looted. From there he travelled to Fatehabad, whose residents fled, and a large number of those remaining in the town were massacred. The Ahirs resisted him at Ahruni but were defeated, with thousands killed and many taken prisoner, while the town was burnt to ashes. From there he travelled to Tohana, whose Jat inhabitants were described as robbers by Sharaf ad-Din Ali Yazdi. They tried to resist but were defeated and fled. Timur's army pursued and killed 200 Jats, while taking many more as prisoners. He then sent a detachment to chase the fleeing Jats and killed 2,000 of them, while their wives and children were enslaved and their property plundered. Timur proceeded to Kaithal, whose residents were massacred and plundered, destroying all villages along the way. On the next day, he came to Assandh, whose residents were "fire-worshippers" according to Yazdi and had fled to Delhi. Next, he travelled to and subdued Tughlaqpur fort and Salwan before reaching Panipat, whose residents had already fled. He then marched on to Loni fort. Hemu claimed royal status after defeating Akbar's Mughal forces on 7 October 1556 in the Battle of Delhi and assumed the ancient title of Vikramaditya. The area that is now Haryana has been ruled by some of the major empires of India. Panipat is known for three seminal battles in the history of India. In the First Battle of Panipat (1526), Babur defeated the Lodis. In the Second Battle of Panipat (1556), Akbar defeated the local Haryanvi Hindu emperor of Delhi, Hem Chandra Vikramaditya, who belonged to Rewari and had earlier won 22 battles across India from Punjab to Bengal, defeating Mughals and Afghans. 
Hemu had defeated Akbar's forces twice, at Agra and in the Battle of Delhi in 1556, to become the last Hindu emperor of India, with a formal coronation at Purana Quila in Delhi on 7 October 1556. In the Third Battle of Panipat (1761), the Afghan king Ahmad Shah Abdali defeated the Marathas. Formation Haryana as a state came into existence on 1 November 1966 under the Punjab Reorganisation Act (1966). The Indian government set up the Shah Commission under the chairmanship of Justice JC Shah on 23 April 1966 to divide the existing state of Punjab and determine the boundaries of the new state of Haryana after consideration of the languages spoken by the people. The commission delivered its report on 31 May 1966, whereby the then-districts of Hisar, Mahendragarh, Gurgaon, Rohtak and Karnal were to be a part of the new state of Haryana. Further, the tehsils of Jind and Narwana in the Sangrur district – along with Naraingarh, Ambala and Jagadhri – were to be included. The commission recommended that the tehsil of Kharar, which includes Chandigarh, the state capital of Punjab, should be a part of Haryana; however, Kharar was given to Punjab. The city of Chandigarh was made a union territory, serving as the capital of both Punjab and Haryana. Bhagwat Dayal Sharma became the first Chief Minister of Haryana. Demographics Religion According to the 2011 census, of Haryana's total population of 25,350,000, Hindus (87.46%) constitute the majority, with Muslims (7.03%) (mainly Meos) and Sikhs (4.91%) being the largest minorities. Muslims are mainly found in the Nuh district. Haryana has the second largest Sikh population in India after Punjab, and they mostly live in the districts adjoining Punjab, such as Sirsa, Jind, Fatehabad, Kaithal, Kurukshetra, Ambala and Panchkula. Languages The official language of Haryana is Hindi. Several regional languages or dialects, often subsumed under Hindi, are spoken in the state. 
Predominant among them is Haryanvi (also known as Bangru), whose territory encompasses the central and eastern portions of Haryana. Hindustani is spoken in the northeast, Bagri in the west, and Ahirwati, Mewati and Braj Bhasha in the south. There are also significant numbers of speakers of Urdu and Punjabi, the latter of which was recognised in 2010 as a second official language of Haryana for government and administrative purposes. After the state's formation, Telugu was made the state's "second language" – to be taught in schools – but it was not the "second official language" for official communication. Due to a lack of students, the language ultimately stopped being taught. Tamil was made the second language in 1969 by Bansi Lal to show the state's differences with Punjab, although there were no Tamil speakers in Haryana at the time; in 2010, due to the lack of Tamil speakers, it was stripped of this status. There are also some speakers of several major regional languages of neighbouring states or other parts of the subcontinent, such as Bengali, Bhojpuri, Marwari, Mewari, Nepali and Saraiki, as well as smaller communities of speakers of languages that are dispersed across larger regions, such as Bauria, Bazigar, Gujari, Gade Lohar, Oadki and Sansi. Culture Music Haryana has its own unique traditional folk music, folk dances, saang (folk theatre), cinema, belief systems such as Jathera (ancestral worship), and arts such as Phulkari and Shisha embroidery. Folk dances Folk music and dances of Haryana are based on satisfying the cultural needs of the primarily agrarian and martial natures of Haryanvi tribes. The main types of Haryanvi musical folk theatre are Saang, Rasa lila and Ragini; the Saang and Ragini forms of theatre were popularised by Lakhmi Chand. Haryanvi folk dances and music have fast, energetic movements. Three popular categories of dance are: festive-seasonal, devotional, and ceremonial-recreational. 
The festive-seasonal dances and songs are Gogaji/Gugga, Holi, Phaag, Sawan and Teej. The devotional dances and songs are Chaupaiya, Holi, Manjira, Ras Leela and Raginis. The ceremonial-recreational dances and songs are of the following types: legendary bravery (Kissa and Ragini of male warriors and female Satis), love and romance (Been and its variant Nāginī dance, and Ragini), and ceremonial (Dhamal Dance, Ghoomar, Jhoomar (male), Khoria, Loor, and Ragini). Folk music and songs Haryanvi folk music is based on day-to-day themes, and the injection of earthy humour enlivens the feel of the songs. Haryanvi music takes two main forms, "classical folk music" and "desi folk music" (country music of Haryana), and is sung as ballads on themes of love, valour and bravery, harvest, happiness and the pangs of parting lovers. Classical Haryanvi folk music Classical Haryanvi folk music is based on Indian classical music. Hindustani classical ragas, learnt in the gharana parampara of the guru–shishya tradition, are used to sing songs of heroic bravery (such as Alha-Khand (1163–1202 CE) about the bravery of Alha and Udal, and of Jaimal and Patta of Maharana Udai Singh II), Brahma worship songs, and festive seasonal songs (such as Teej, Holi and Phaag songs of the Phalgun month near Holi). Bravery songs are sung in a high pitch. Desi Haryanvi folk music Desi Haryanvi folk music is a form of Haryanvi music based on Raag Bhairvi, Raag Bhairav, Raag Kafi, Raag Jaijaivanti, Raag Jhinjhoti and Raag Pahadi, and is used for celebrating community bonhomie through seasonal songs, ballads, ceremonial songs (wedding, etc.) and related religious legendary tales such as Puran Bhagat. Songs celebrating relationships, love and life are sung in a medium pitch; ceremonial and religious songs are sung in a low pitch. 
Young girls and women usually sing entertaining and fast seasonal, love, relationship and friendship related songs such as Phagan (song for the eponymous season/month), Katak (songs for the eponymous season/month), Samman (songs for the eponymous season/month), male-female duet songs, and songs of sharing heartfelt feelings among female friends. Older women usually sing devotional Mangal Geet (auspicious songs) and ceremonial songs such as Bhajan, Bhat (a wedding gift to the mother of the bride or groom by her brother), Sagai, Ban (a Hindu wedding ritual where pre-wedding festivities start), Kuan-Poojan (a custom performed to welcome the birth of a child by worshipping the well or source of drinking water), Sanjhi and Holi festival songs. Socially normative-cohesive impact For Haryanvi people, music and dance are a great way of demolishing societal differences, as folk singers are highly esteemed and are sought after and invited to events, ceremonies and special occasions regardless of their caste or status. These inter-caste songs are fluid in nature and never personalised for any specific caste, and they are sung collectively by women from different strata, castes and dialects. These songs transform fluidly in dialect, style, words, etc. This adoptive style can be seen in the adoption of tunes of Bollywood movie songs into Haryanvi songs. Despite this continuously fluid, transforming nature, Haryanvi songs have a distinct style of their own, as explained above. With the emergence of a strongly socio-economic metropolitan culture in urban Gurgaon (Gurugram), Haryana is also witnessing community participation in public arts and city beautification. Several landmarks across Gurgaon are decorated with public murals and graffiti with culturally cohesive ideologies, and they stand as testimony to a lived sentiment in Haryana folk. Cuisine As per a survey, 13% of males and 7.8% of females in Haryana are non-vegetarian. 
The regional cuisine features the staples of roti, saag, vegetarian sabzi and milk products such as ghee, milk, lassi and kheer. Society Haryanvi people have a concept of an inclusive society involving the "36 Jātis" or communities. Castes such as Jat, Rajput, Gurjar, Saini, Pasi, Ahir, Ror, Mev, Vishnoi, Harijan, Aggarwal, Brahmin, Khatri and Tyagi are some of the notable ones among these 36 Jātis. Geography Haryana is a landlocked state in northern India. It lies between 27°39' and 30°35' N latitude and between 74°28' and 77°36' E longitude. The total geographical area of the state is 4.42 m ha, which is 1.4% of the geographical area of the country. The altitude of Haryana varies between 700 and 3,600 ft (200 to 1,200 metres) above sea level. Haryana has only 4% of its area under forests (compared with the national figure of 21.85%). Karoh Peak, a mountain peak in the Sivalik Hills range of the greater Himalayan range located near the Morni Hills area of Panchkula district, is the highest point in Haryana. Haryana has 4 states and 2 union territories on its border: Punjab, Rajasthan, Uttar Pradesh, Himachal Pradesh, Delhi, and Chandigarh. Plains and mountains Haryana has four main geographical features. The Yamuna-Ghaggar plain, forming the largest part of the state, is also called the Delhi doab, consisting of the Sutlej-Ghaggar doab (between the Sutlej in the north in Punjab and the Ghaggar river flowing through northern Haryana), the Ghaggar-Hakra doab (between the Ghaggar river and the Hakra or Drishadvati river, which is the paleochannel of the holy Saraswati River) and the Hakra-Yamuna doab (between the Hakra river and the Yamuna). See also: Doab. The Lower Shivalik Hills lie to the northeast in the foothills of the Himalayas. The Bagar tract, a semi-desert dry sandy plain, lies to the south-west. See also: Bangar and Khadir. The Aravali Range's northernmost low-rise, isolated, non-continuous outcrops lie in the south. Hydrography The Yamuna, a tributary of the Ganges, flows along the state's eastern boundary. 
Northern Haryana has several north-east to west flowing rivers originating from the Sivalik Hills of the Himalayas, such as the Ghaggar-Hakra (palaeochannel of the Vedic Sarasvati river), Chautang (paleochannel of the Vedic Drishadvati river, tributary of the Ghagghar), Tangri river (tributary of the Ghagghar), Kaushalya river (tributary of the Ghagghar), Markanda River (tributary of the Ghagghar), Sarsuti, Dangri and Somb river. Haryana's main seasonal river, the Ghaggar-Hakra, known as the Ghaggar before the Ottu barrage and as the Hakra downstream of the barrage, rises in the outer Himalayas, between the Yamuna and the Satluj, and enters the state near Pinjore in the Panchkula district; passing through Ambala and Sirsa, it reaches Bikaner in Rajasthan and runs on before disappearing into the deserts of Rajasthan. The seasonal Markanda River, known as the Aruna in ancient times, originates from the lower Shivalik Hills and enters Haryana west of Ambala; it swells into a raging torrent during the monsoon, is notorious for its devastating power, and carries its surplus water on to the Sanisa Lake, where the Markanda joins the Sarasuti and later the Ghaggar. Southern Haryana has several south-west to east flowing seasonal rivulets originating from the Aravalli Range in and around the hills in the Mewat region, including the Sahibi River (called the Najafgarh drain in Delhi), the Dohan river (a tributary of the Sahibi, which originates at Mandoli village near Neem Ka Thana in Jhunjhunu district of Rajasthan and then disappears in Mahendragarh district), the Krishnavati river (a former tributary of the Sahibi river, which originates near Dariba and disappears in Mahendragarh district well before reaching the Sahibi river) and the Indori river (the longest tributary of the Sahibi River, which originates in Sikar district of Rajasthan and flows to Rewari district of Haryana); these were once tributaries of the Drishadwati/Saraswati river. Major canals are the Western Yamuna Canal, the Sutlej Yamuna link canal (from the Sutlej river, a tributary of the Indus), and the Indira Gandhi Canal. 
Major dams are the Kaushalya Dam in Panchkula district, the Hathnikund Barrage and Tajewala Barrage on the Yamuna in Yamunanagar district, the Pathrala barrage on the Somb river in Yamunanagar district, the ancient Anagpur Dam near Surajkund in Faridabad district, and the Ottu barrage on the Ghaggar-Hakra River in Sirsa district. Major lakes are Dighal Wetland, Basai Wetland, Badkhal Lake in Faridabad, the holy Brahma Sarovar and Sannihit Sarovar in Kurukshetra, Blue Bird Lake in Hisar, Damdama Lake at Sohna in Gurugram district, Hathni Kund in Yamunanagar district, Karna Lake at Karnal, the ancient Surajkund in Faridabad, and Tilyar Lake in Rohtak. The Haryana State Waterbody Management Board is responsible for the rejuvenation of 14,000 johads of Haryana and up to 60 lakes in the National Capital Region falling within the state. The only hot spring of Haryana is the Sohna Sulphur Hot Spring at Sohna in Gurgaon district. The Tosham Hill range has several sacred sulphur ponds of religious significance that are revered for the healing impact of sulphur, such as Pandu Teerth Kund, Surya Kund, Kukkar Kund, and Gyarasia Kund or Vyas Kund. Seasonal waterfalls are found at the Tikkar Taal twin lakes at Morni hills, Dhosi Hill in Mahendragarh district and Pali village on the outskirts of Faridabad. Climate Haryana is extremely hot in summer and mild in winter. The hottest months are May and June and the coldest December and January. The climate is arid to semi-arid, with an average rainfall of 354.5 mm. Around 29% of rainfall is received during the months from July to September, and the remaining rainfall is received from December to February. Flora and fauna Forests Forest cover in the state in 2013 was 3.59% (1,586 km2) and tree cover was 2.90% (1,282 km2), giving a total forest and tree cover of 6.49%. In 2016–17, 18,412 hectares were brought under tree cover by planting 14.1 million seedlings. Thorny, dry, deciduous forest and thorny shrubs can be found all over the state. 
During the monsoon, a carpet of grass covers the hills. Mulberry, eucalyptus, pine, kikar, shisham and babul are some of the trees found here. The species of fauna found in the state of Haryana include black buck, nilgai, panther, fox, mongoose, jackal and wild dog. More than 450 species of birds are found here. Wildlife Haryana has two national parks, eight wildlife sanctuaries, two wildlife conservation areas, four animal and bird breeding centres, one deer park and three zoos, all of which are managed by the Haryana Forest Department of the Government of Haryana. Sultanpur National Park is a notable park located in Gurugram district. Environmental and ecological issues The Haryana Environment Protection Council is the advisory committee, and the Department of Environment, Haryana is the department responsible for the administration of the environment. Areas of Haryana surrounding the Delhi NCR are the most polluted. During the smog of November 2017, air quality measurements in Gurgaon and Faridabad showed that the density of fine particulate matter (PM2.5) averaged 400, while the monthly average for Haryana was 60. Other sources of pollution are exhaust gases from old vehicles, stone crushers and brick kilns. Haryana has 7.5 million old vehicles, of which 40% are older, more polluting vehicles; besides this, 500,000 new vehicles are added every year. Other heavily polluted cities are Bhiwani, Bahadurgarh, Dharuhera, Hisar and Yamunanagar. Administration Divisions The state is divided into 6 revenue divisions, 5 police ranges and 3 police commissionerates (c. January 2017). The six revenue divisions are: Ambala, Rohtak, Gurgaon, Hisar, Karnal and Faridabad. Haryana has 11 municipal corporations (Gurgaon, Faridabad, Ambala, Panchkula, Yamunanagar, Rohtak, Hisar, Panipat, Karnal, Sonepat, and Manesar), 18 municipal councils and 52 municipalities. 
Within these there are 22 districts, 72 sub-divisions, 93 tehsils, 50 sub-tehsils, 140 blocks, 154 cities and towns, 6,848 villages, 6,222 village panchayats and numerous smaller dhanis. Districts Law and order The Haryana Police force is the law enforcement agency of Haryana. The five police ranges are Ambala, Hissar, Karnal, Rewari and Rohtak, and the three police commissionerates are Faridabad, Gurgaon and Panchkula. The cybercrime investigation cell is based in Gurgaon's Sector 51. The highest judicial authority in the state is the Punjab and Haryana High Court, with the next higher right of appeal to the Supreme Court of India. Haryana uses an e-filing facility. Governance and e-governance The Common Service Centres (CSCs) have been upgraded in all districts to offer hundreds of e-services to citizens, including applications for new water and sanitation connections, electricity bill collection, ration card member registration, results of HBSE, admit cards for board examinations, online admission forms for government colleges, long-route booking of buses, admission forms for Kurukshetra University and HUDA plot status inquiry. Haryana has become the first state to implement Aadhaar-enabled birth registration in all districts. Thousands of traditional offline state and central government services are also available 24/7 online through the single unified UMANG app and portal as part of the Digital India initiative. Economy Haryana's estimated 2017-18 GSDP of US$95 billion, the 14th largest among Indian states with a 12.96% CAGR over 2012-17, is split into 52% services, 30% industries and 18% agriculture. The services sector is split across 45% in real estate and financial and professional services, 26% trade and hospitality, 15% state and central government employees, and 14% transport, logistics and warehousing. In IT services, Gurugram ranks number 1 in India in growth rate and existing technology infrastructure, and number 2 in startup ecosystem, innovation and livability (Nov 2016). 
The industries sector is split across 69% manufacturing, 28% construction, 2% utilities and 1% mining. In industrial manufacturing, Haryana produces 67% of India's passenger cars, 60% of its motorcycles, 50% of its tractors and 50% of its refrigerators. The services and industrial sectors are boosted by 7 operational SEZs and an additional 23 formally approved SEZs (20 already notified and 3 with in-principle approval) that are mostly spread along the Delhi–Mumbai Industrial Corridor, the Amritsar Delhi Kolkata Industrial Corridor and the Western Peripheral Expressway. The agriculture sector is split across 93% crops and livestock, 4% commercial forestry and logging, and 2% fisheries. Haryana's agriculture sector, despite the state covering less than 1.4% of India's area, contributes 15% of the food grains in the central food security public distribution system and 7% of total national agricultural exports, including 60% of the total national Basmati rice export. Agriculture Crops Haryana is traditionally an agrarian society of zamindars (owner-cultivator farmers). About 70% of Haryana's residents are engaged in agriculture. The Green Revolution in Haryana of the 1960s, combined with the completion of the Bhakra Dam in 1963 and the Western Yamuna Command Network canal system in the 1970s, resulted in significantly increased food grain production. As a result, Haryana is self-sufficient in food production and the second largest contributor to India's central pool of food grains. In 2015–2016, Haryana produced the following principal crops: 13,352,000 tonnes of wheat, 4,145,000 tonnes of rice, 7,169,000 tonnes of sugarcane, 993,000 tonnes of cotton and 855,000 tonnes of oilseeds (mustard seed, sunflower, etc.). Fruits, vegetables and spices Vegetable production was: potato 853,806 tonnes, onion 705,795 tonnes, tomato 675,384 tonnes, cauliflower 578,953 tonnes, leafy vegetables 370,646 tonnes, brinjal 331,169 tonnes, gourd 307,793 tonnes, peas 111,081 tonnes and others 269,993 tonnes. 
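As an illustrative aside, the sectoral shares quoted above can be converted into absolute figures with a few lines of Python. This is back-of-the-envelope arithmetic on the source's own numbers (US$95 billion GSDP; 52/30/18 per cent split); the variable names are my own, not from any official dataset:

```python
# Back-of-the-envelope split of Haryana's estimated 2017-18 GSDP
# (US$95 billion) across the three broad sectors quoted in the text.
gsdp_usd_bn = 95.0  # estimated 2017-18 GSDP, US$ billion (from the text)

sector_share = {"services": 0.52, "industries": 0.30, "agriculture": 0.18}

# Absolute sectoral contribution in US$ billion, rounded to one decimal.
sector_value = {name: round(gsdp_usd_bn * share, 1)
                for name, share in sector_share.items()}

for name, value in sector_value.items():
    print(f"{name}: US${value} bn")
```

Since the shares sum to 100%, the sectoral values add back up to the US$95 billion total (services ≈ US$49.4 bn, industries ≈ US$28.5 bn, agriculture ≈ US$17.1 bn).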
Fruit production was: citrus 301,764 tonnes, guava 152,184 tonnes, mango 89,965 tonnes, chikoo 16,022 tonnes, aonla 12,056 tonnes and other fruits 25,848 tonnes. Spice production was: garlic 40,497 tonnes, fenugreek 9,348 tonnes, ginger 4,304 tonnes and others 840 tonnes. Flowers and medicinal plants Cut flower production was: marigold 61,830 tonnes, gladiolus 2,448,620 million, rose 1,861,160 million and others 691,300 million. Medicinal plant production was: aloe vera 1,403 tonnes and stevia 13 tonnes. Livestock Haryana is well known for its high-yield Murrah buffalo. Other breeds of cattle native to Haryana are the Haryanvi, Mewati, Sahiwal and Nili-Ravi. Research To support its agrarian economy, both the central government (Central Institute for Research on Buffaloes, Central Sheep Breeding Farm, National Research Centre on Equines, Central Institute of Fisheries, National Dairy Research Institute, Regional Centre for Biotechnology, Indian Institute of Wheat and Barley Research and National Bureau of Animal Genetic Resources) and the state government (CCS HAU, LUVAS, Government Livestock Farm, Regional Fodder Station and Northern Region Farm Machinery Training and Testing Institute) have opened several institutes for research and education. Industrial sector Manufacturing Faridabad is one of the biggest industrial cities of Haryana as well as of North India. The city is home to large-scale MNC companies like India Yamaha Motor Pvt. Ltd., Havells India Limited, JCB India Limited, Escorts Group, Indian Oil (R&D), and Larsen & Toubro (L&T). Eyewear e-tailer Lenskart and healthcare startup Lybrate have their headquarters in Faridabad. Hisar, an NCR counter-magnet city known as a steel and cotton-spinning hub as well as for an upcoming integrated industrial aerocity and aero-MRO hub at Hisar Airport, is a fast-developing city and the hometown of Navin Jindal and Subhash Chandra of Zee TV fame. 
Savitri Jindal, Navin Jindal's mother, has been listed by Forbes as the third richest woman in the world. Panipat has heavy industry, including a refinery operated by the Indian Oil Corporation, a urea manufacturing plant operated by National Fertilizers Limited and a National Thermal Power Corporation power plant. It is known for its woven modhas or round stools. Sonipat: IMT Kundli, Nathupur, Rai and Barhi are industrial areas with several small and medium-sized enterprises, including some large ones such as Atlas Cycles, E.C.E., the Birla factory and OSRAM. Gurgaon: IMT Manesar, Dundahera and Sohna are industrial and logistics hubs; the district also hosts the National Security Guards, the Indian Institute of Corporate Affairs, the National Brain Research Centre and the National Bomb Data Centre. Utilities Haryana has always given high priority to the expansion of electricity infrastructure, as it is one of the most important inputs for the development of the state. Haryana was the first state in the country to achieve 100% rural electrification in 1970, as well as the first to link all villages with all-weather roads and provide safe drinking water facilities throughout the state. Power stations in the state include: Renewable and non-polluting sources Hydroelectricity Bhakra-Nangal Dam Hydroelectric Power Plant WYC Hydro Electric Station, 62.4 MW, Yamunanagar Solar power stations Faridabad Solar Power Plant: being set up by HPGCL Faridabad (c. 2016). 
Nuclear power stations Gorakhpur Nuclear Power Plant, 2800 MW, Fatehabad, Phase-I 1400 MW by 2021 Coal-fired thermal power stations Deenbandhu Chhotu Ram Thermal Power Station, 600 MW, Yamunanagar Indira Gandhi Super Thermal Power Project, 1500 MW, Jhajjar Jhajjar Power Station, 1500 MW Panipat Thermal Power Station I, 440 MW Panipat Thermal Power Station II, 920 MW Rajiv Gandhi Thermal Power Station, 1200 MW, Hisar Services sector Transport Aviation Roads and Highways Haryana's road network includes 29 national highways, state highways, Major District Roads (MDR) and Other District Roads (ODR) (c. December 2017). A fleet of 3,864 Haryana Roadways buses covers a distance of 1.15 million km per day, and Haryana was the first state in the country to introduce luxury video coaches. The ancient Delhi Multan Road and Grand Trunk Road, South Asia's oldest and longest major roads, pass through Haryana. The GT Road passes through the districts of Sonipat, Panipat, Karnal, Kurukshetra and Ambala in north Haryana before entering Delhi and subsequently the industrial town of Faridabad. The Kundli-Manesar-Palwal Expressway (KMP) will provide a high-speed link between northern Haryana and its southern districts such as Sonipat, Gurgaon and Faridabad. The Delhi-Agra Expressway (NH-2), which passes through Faridabad, is being widened from the current four lanes to six, which will further boost Faridabad's connectivity with Delhi. Railway The rail network in Haryana is covered by five rail divisions under three rail zones. The Diamond Quadrilateral high-speed rail network, the Eastern Dedicated Freight Corridor (72 km) and the Western Dedicated Freight Corridor (177 km) pass through Haryana. The Bikaner railway division of the North Western Railway zone manages the rail network in western and southern Haryana, covering the Bhatinda-Dabwali-Hanumangarh line, Rewari-Bhiwani-Hisar-Bathinda line, Hisar-Sadulpur line and Rewari-Loharu-Sadulpur line. 
The Jaipur railway division of the North Western Railway zone manages the rail network in south-west Haryana, covering the Rewari-Reengas-Jaipur line, Delhi-Alwar-Jaipur line and Loharu-Sikar line. The Delhi railway division of the Northern Railway zone manages the rail network in northern, eastern and central Haryana, covering the Delhi-Panipat-Ambala line, Delhi-Rohtak-Tohana line, Rewari–Rohtak line, Jind-Sonepat line and Delhi-Rewari line. The Agra railway division of the North Central Railway zone manages another very small part of the network in south-east Haryana, covering the Palwal-Mathura line only. The Ambala railway division of the Northern Railway zone manages a small part of the rail network in north-east Haryana, covering the Ambala-Yamunanagar line, Ambala-Kurukshetra line and the UNESCO World Heritage Kalka–Shimla Railway. Metro The Delhi Metro connects the national capital Delhi with the NCR cities of Faridabad, Gurgaon and Bahadurgarh. Faridabad has the longest metro network in the NCR region, consisting of 11 stations with a track length of 17 km. Sky Way The Haryana and Delhi governments have constructed the international-standard Delhi Faridabad Skyway, the first of its kind in North India, to connect Delhi and Faridabad. Communication and media Haryana has a statewide network of telecommunication facilities. The Haryana Government has its own statewide area network, by which all government offices of the 22 districts and 126 blocks across the state are connected with each other, making it the first SWAN in the country. Bharat Sanchar Nigam Limited and most of the leading private sector players (such as Reliance Infocom, Tata Teleservices, Bharti Telecom, Idea, Vodafone Essar, Aircel, Uninor and Videocon) have operations in the state. The two biggest cities of Haryana, Faridabad and Gurgaon, which are part of the National Capital Region, come under the local Delhi Mobile Telecommunication System; the rest of the cities of Haryana come under the Haryana Telecommunication System. 
Electronic media channels include MTV, 9XM, Star Group, SET Max, News Time, NDTV 24x7 and Zee Group. The radio stations include All India Radio and other FM stations. Panipat, Hisar, Ambala and Rohtak are the cities in which the leading newspapers of Haryana are printed and circulated throughout the state, among which Dainik Bhaskar, Dainik Jagran, Punjab Kesari, The Tribune, Aaj Samaj, Hari Bhoomi and Amar Ujala are prominent. Healthcare The total fertility rate of Haryana is 2.3. The infant mortality rate is 41 (SRS 2013) and the maternal mortality ratio is 146 (SRS 2010–2012). The state of Haryana has various medical colleges, including Pandit Bhagwat Dayal Sharma Post Graduate Institute of Medical Sciences Rohtak, Bhagat Phool Singh Medical College in Sonipat district and ESIC Medical College, Faridabad, along with notable private medical institutes like Medanta, Max Hospital and Fortis Healthcare. Education Literacy The literacy rate in Haryana has seen an upward trend and is 76.64 per cent as per the 2011 population census. Male literacy stands at 85.38 per cent, while female literacy is 66.67 per cent. In 2001, the literacy rate in Haryana stood at 67.91 per cent, of which males and females were 78.49 per cent and 55.73 per cent literate respectively. Gurgaon city had the highest literacy rate in Haryana at 86.30 per cent, followed by Panchkula at 81.9 per cent and Ambala at 81.7 per cent. In terms of districts, Rewari had the highest literacy rate in Haryana at 74%, higher than the national average of 59.5%: male literacy was 79%, and female 67%. Schools The Haryana Board of School Education, established in September 1969 and shifted to Bhiwani in 1981, conducts public examinations at the middle, matriculation and senior secondary levels twice a year. Over 700,000 candidates attend annual examinations in February and March; 150,000 attend supplementary examinations each November. The Board also conducts examinations for the Haryana Open School at the senior and senior secondary levels twice a year. 
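The census figures above allow a small derived comparison: the male–female literacy gap in each census year. The Python sketch below is illustrative arithmetic on the quoted numbers only (variable names are my own), and it shows that the gender gap narrowed between 2001 and 2011:

```python
# Male and female literacy rates (per cent) from the 2001 and 2011
# census figures quoted in the text.
male = {2001: 78.49, 2011: 85.38}
female = {2001: 55.73, 2011: 66.67}

# Male-female gap in percentage points for each census year.
gap = {year: round(male[year] - female[year], 2) for year in (2001, 2011)}
print(gap)
```

The computed gaps are 22.76 percentage points in 2001 and 18.71 in 2011, i.e. the gender gap in literacy shrank by roughly four points over the decade even as overall literacy rose.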
The Haryana government provides free education to women up to the bachelor's degree level. In 2015–2016, there were nearly 20,000 schools, including 10,100 state government schools (36 Aarohi Schools, 11 Kasturba Gandhi Balika Vidyalayas, 21 Model Sanskriti Schools, 8,744 government primary schools, 3,386 government middle schools, 1,284 government high schools and 1,967 government senior secondary schools), 7,635 private schools (200 aided, 6,612 recognised unaided, and 821 unrecognised unaided private schools) and several hundred other central government and private schools such as Kendriya Vidyalaya, Indian Army Public Schools, Jawahar Navodaya Vidyalaya and DAV schools affiliated to the central government's CBSE and ICSE school boards. Universities and higher education Haryana has 48 universities and 1,038 colleges, including 115 government colleges, 88 government-aided colleges and 96 self-financed colleges. Hisar has three universities (Chaudhary Charan Singh Haryana Agricultural University, Asia's largest agricultural university; Guru Jambheshwar University of Science and Technology; and Lala Lajpat Rai University of Veterinary & Animal Sciences); several national agricultural and veterinary research centres (the National Research Centre on Equines, Central Sheep Breeding Farm, National Institute on Pig Breeding and Research, Northern Region Farm Machinery Training and Testing Institute and Central Institute for Research on Buffaloes (CIRB)); and more than 20 colleges, including Maharaja Agrasen Medical College, Agroha. Demographically, Haryana has 471,000 women and 457,000 men pursuing post-secondary higher education. There are 18,616 female teachers and 17,061 male teachers in higher education.
Union Minister Ravi Shankar Prasad announced on 27 February 2016 that a National Institute of Electronics and Information Technology (NIELIT) centre would be set up in Kurukshetra to provide computer training to youth, and a Software Technology Park of India (STPI) would be set up in Panchkula's existing HSIIDC IT Park in Sector 23. Hindi and English are compulsory languages in schools, whereas Punjabi, Sanskrit and Urdu are offered as optional languages. Sports In the 2010 Commonwealth Games at Delhi, 22 of the 38 gold medals that India won came from Haryana. During the 33rd National Games held in Assam in 2007, Haryana stood first in the nation with a medal tally of 80, including 30 gold, 22 silver and 28 bronze medals. The 1983 World Cup winning captain Kapil Dev made his domestic-cricket debut playing for Haryana. Nahar Singh Stadium was built in Faridabad in 1981 for international cricket. The ground can hold around 25,000 spectators. Tejli Sports Complex is an ultra-modern sports complex in Yamuna Nagar. Tau Devi Lal Stadium in Gurgaon is a multi-sport complex. Chief Minister of Haryana Manohar Lal Khattar announced the "Haryana Sports and Physical Fitness Policy", a policy to support 26 Olympic sports, on 12 January 2015 with the words "We will develop Haryana as the sports hub of the country." Haryana is home to Haryana Gold, one of India's eight professional basketball teams, which compete in the country's UBA Pro Basketball League. At the 2016 Summer Olympics, Sakshi Malik won the bronze medal in the 58 kg category, becoming the first Indian female wrestler to win a medal at the Olympics and the fourth female Olympic medalist from the country. Notable badminton player Saina Nehwal is from Hisar in Haryana. Notable athlete Neeraj Chopra, who competes in the javelin throw and won India's first track and field gold medal at the 2020 Tokyo Olympics, was born and raised in Panipat, Haryana.
Wrestling is also very prominent in Haryana: two of the medals India won in wrestling at the 2020 Tokyo Olympics went to wrestlers from the state. Ravi Kumar Dahiya, an Indian freestyle wrestler born in Nahri village of Sonipat district, won the silver medal in the 57 kg category at the 2020 Tokyo Olympics; he is also a bronze medalist from the 2019 World Wrestling Championships and a two-time Asian champion. Bajrang Punia won a bronze medal at the same Games. See also List of Monuments of National Importance in Haryana List of State Protected Monuments in Haryana List of people from Haryana Outline of Haryana Politics of Haryana Tourism in Haryana Haryanvi cinema List of highways in Haryana References Sources External links Government The Official Site of the Government of Haryana Official Tourism Site of Haryana, India General information haryana.com 1966 establishments in India Hindi-speaking countries and territories Punjabi-speaking countries and territories States and territories established in 1966 States and union territories of India
https://en.wikipedia.org/wiki/Himachal%20Pradesh
Himachal Pradesh
{{Infobox settlement | name = Himachal Pradesh | native_name = | image_skyline = | image_caption = From top, left to right: Shimla, Kinner Kailash, Spiti, Khajjiar, Dhauladhar, Kalpa, Parvati Valley, Rupin Pass | native_name_lang = | image_blank_emblem = Himachal Pradesh seal.svg | blank_emblem_size = 100px | blank_emblem_type = State Emblem | motto = सत्यमेव जयते {{plainlist|IAST: () }} | nickname = Devbhumi () and Veerbhumi () | etymology = | seal_alt = Himachal Pradesh | image_map = IN-HP.svg | map_alt = Himachal Pradesh | map_caption = Location in India | image_map1 = | map_caption1 = | coordinates = | coor_pinpoint = Shimla | subdivision_type = State | subdivision_name = | established_title = Union territory | established_date = 1 November 1956 | established_title1 = State | established_date1 = 25 January 1971 | seat_type = Capital | seat = | parts_type = 12 Districts | parts_style = coll | p1 = | governing_body = Government of Himachal Pradesh | leader_title = Governor | leader_name = Rajendra Arlekar | area_total_km2 = 55673 | area_rank = 18th | elevation_footnotes = | elevation_min_m = 350 | elevation_max_m = 6816 | elevation_max_point = Reo Purgil | population_total = 6864602 | population_as_of = 2011 | population_footnotes = | population_density_km2 = 123 | population_rank = 21st | demographics_type1 = Language | demographics1_title1 = Official | demographics1_info1 = Hindi | demographics1_title2 = Additional official | demographics1_info2 = Sanskrit | demographics1_title3 = Native | demographics1_info3 = | timezone1 = IST | utc_offset1 = +05:30 | iso_code = IN-HP | type = State | leader_title1 = Chief Minister | leader_name1 = Jai Ram Thakur (BJP) | leader_title2 = Legislature | leader_name2 = Himachal Pradesh Legislative Assembly (68 seats) | leader_title3 = Speaker | leader_name3 = Vipin Singh Parmar | leader_title4 = Chief Secretary | leader_name4 = Ram Subhag Singh, IAS | blank_name_sec1 = HDI | blank_info_sec1 = 0.725 () · 8th | blank_name_sec2 = 
Literacy in India | blank_info_sec2 = 74.04% | blank1_name_sec2 = Literacy in Himachal Pradesh | blank1_info_sec2 = 86.06% | website = | footnotes = It was elevated to the status of state by the State of Himachal Pradesh Act, 1970 | official_name = }} Himachal Pradesh (; ; "Province of the Snow-laden Mountains") is a state in the northern part of India. Situated in the Western Himalayas, it is one of the thirteen mountain states and is characterised by an extreme landscape featuring several peaks and extensive river systems. Himachal Pradesh is the northernmost state of India and shares borders with the union territories of Jammu and Kashmir and Ladakh to the north, and the states of Punjab to the west, Haryana to the southwest, Uttarakhand to the southeast and a very narrow border with Uttar Pradesh to the south. The state also shares an international border to the east with the Tibet Autonomous Region in China. Himachal Pradesh is also known as Devbhumi, meaning 'Land of God', and Veerbhumi, which means 'Land of Braves'. The predominantly mountainous region comprising the present-day Himachal Pradesh has been inhabited since pre-historic times, having witnessed multiple waves of human migration from other areas. Through its history, the region was mostly ruled by local kingdoms, some of which accepted the suzerainty of larger empires. Prior to India's independence from the British, Himachal comprised the hilly regions of the Punjab Province of British India. After independence, many of the hilly territories were organised as the Chief Commissioner's province of Himachal Pradesh, which later became a union territory. In 1966, hilly areas of neighbouring Punjab state were merged into Himachal, and it was ultimately granted full statehood in 1971. Himachal Pradesh is spread across valleys with many perennial rivers flowing through them. Around 90% of the state's population lives in rural areas. Agriculture, horticulture, hydropower and tourism are important constituents of the state's economy.
The hilly state is almost universally electrified, with 99.5% of the households having electricity as of 2016. The state was declared India's second open-defecation-free state in 2016. According to the CMS – India Corruption Study 2017, Himachal Pradesh is India's least corrupt state. History Tribes such as the Koli, Hali, Dagi, Dhaugri, Dasa, Khasa, Kanaura, and Kirata inhabited the region from the prehistoric era. The foothills of the modern state of Himachal Pradesh were inhabited by people from the Indus Valley Civilisation, which flourished between 2250 and 1750 BCE. The Kols and Mundas are believed to be the original inhabitants of the hills of present-day Himachal Pradesh, followed by the Bhotas and Kiratas. During the Vedic period, several small republics known as Janapadas existed, which were later conquered by the Gupta Empire. After a brief period of supremacy by King Harshavardhana, the region was divided into several local powers headed by chieftains, including some Rajput principalities. These kingdoms enjoyed a large degree of independence and were invaded by the Delhi Sultanate a number of times. Mahmud Ghaznavi conquered Kangra at the beginning of the 11th century. Timur and Sikander Lodi also marched through the lower hills of the state, captured a number of forts and fought many battles. Several hill states acknowledged Mughal suzerainty and paid regular tribute to the Mughals. The Kingdom of Gorkha conquered many kingdoms and came to power in Nepal in 1768. The Gorkhas consolidated their military power and began to expand their territory. Gradually, the Kingdom of Nepal annexed Sirmour and Shimla. Under the leadership of Amar Singh Thapa, the Nepali army laid siege to Kangra. They managed to defeat Sansar Chand Katoch, the ruler of Kangra, in 1806 with the help of many provincial chiefs. However, the Nepali army could not capture Kangra fort, which came under Maharaja Ranjeet Singh in 1809.
After the defeat, the Gorkhas expanded towards the south of the state. However, Raja Ram Singh, Raja of Siba State, captured the fort of Siba from the remnants of the Lahore Darbar in Samvat 1846, during the First Anglo-Sikh War. The Gorkhas came into direct conflict with the British along the tarai belt, after which the British expelled them from the provinces of the Satluj. The British gradually emerged as the paramount power in the region. In the revolt of 1857, or the first Indian war of independence, arising from a number of grievances against the British, the people of the hill states were not as politically active as were those in other parts of the country. They and their rulers, with the exception of Bushahr, remained more or less inactive. Some, including the rulers of Chamba, Bilaspur, Bhagal and Dhami, rendered help to the British government during the revolt. The British territories came under the British Crown after Queen Victoria's proclamation of 1858. The states of Chamba, Mandi and Bilaspur made good progress in many fields during British rule. During World War I, virtually all the rulers of the hill states remained loyal and contributed to the British war effort, both in the form of men and materials. Among these were the states of Kangra, Jaswan, Datarpur, Guler, Rajgarh, Nurpur, Chamba, Suket, Mandi, and Bilaspur. After independence, the Chief Commissioner's Province of Himachal Pradesh was organised on 15 April 1948 as a result of the integration of 28 petty princely states (including feudal princes and zaildars) in the promontories of the western Himalayas. These were known as the Simla Hills States and four Punjab southern hill states under the Himachal Pradesh (Administration) Order, 1948 under Sections 3 and 4 of the Extra-Provincial Jurisdiction Act, 1947 (later renamed the Foreign Jurisdiction Act, 1947 vide A.O. of 1950). The State of Bilaspur was merged into Himachal Pradesh on 1 July 1954 by the Himachal Pradesh and Bilaspur (New State) Act, 1954.
Himachal became a Part 'C' state on 26 January 1950 when the Constitution of India came into effect, and a Lieutenant Governor was appointed. The Legislative Assembly was elected in 1952. Himachal Pradesh became a union territory on 1 November 1956. Some areas of Punjab State (namely, Simla, Kangra, Kullu and Lahul and Spiti districts; Lohara, Amb and Una kanungo circles; some area of Santokhgarh kanungo circle and some other specified area of Una tehsil of Hoshiarpur district; Kandaghat and Nalagarh tehsils of the erstwhile PEPSU State; and some parts of Dhar Kalan kanungo circle of Pathankot district) were merged with Himachal Pradesh on 1 November 1966 on enactment by Parliament of the Punjab Reorganisation Act, 1966. On 18 December 1970, the State of Himachal Pradesh Act was passed by Parliament, and the new state came into being on 25 January 1971. Himachal became the 18th state of the Indian Union, with Dr. Yashwant Singh Parmar as its first chief minister. Geography and climate Himachal is in the western Himalayas, situated between 30°22′N and 33°12′N latitude and 75°47′E and 79°04′E longitude. Covering an area of , it is a mountainous state. The Zanskar range runs in the northeastern part of the state and the great Himalayan range runs through the eastern and northern parts, while the Dhauladhar and the Pir Panjal ranges of the lesser Himalayas, and their valleys, form much of the core regions. The outer Himalayas, or the Shiwalik range, form southern and western Himachal Pradesh. At 7,025 m, Shilla is the highest mountain peak in the state of Himachal Pradesh. The drainage system of Himachal is composed of both rivers and glaciers. Himalayan rivers criss-cross the entire mountain chain. Himachal Pradesh provides water to both the Indus and Ganges basins. The drainage systems of the region are the Chandra Bhaga or the Chenab, the Ravi, the Beas, the Sutlej, and the Yamuna. These rivers are perennial and are fed by snow and rainfall.
They are protected by an extensive cover of natural vegetation. Four of the five Punjab rivers flow through the state, three of them originating here. Due to extreme variation in elevation, great variation occurs in the climatic conditions of Himachal. The climate varies from hot and humid subtropical in the southern tracts to, with increasing elevation, cold, alpine, and glacial in the northern and eastern mountain ranges. The state's winter capital, Dharamsala, receives very heavy rainfall, while areas like Lahaul and Spiti are cold and almost rainless. Broadly, Himachal experiences three seasons: summer, winter, and the rainy season. Summer lasts from mid-April till the end of June and most parts become very hot (except the alpine zone, which experiences a mild summer), with the average temperature ranging from . Winter lasts from late November till mid-March. Snowfall is common in alpine tracts. Pollution affects the climate of almost all the states of India, and governments have taken many steps to curb it. The Ujjwala Yojana and Grihini Suvidha Scheme were launched in this connection, and as a result Himachal Pradesh has become the first smoke-free state in India, meaning that cooking in the entire state is free of traditional chulhas. Flora and fauna Himachal Pradesh is one of the states that lie in the Indian Himalayan Region (IHR), one of the richest reservoirs of biological diversity in the world. As of 2002, the IHR was undergoing large-scale irrational extraction of wild medicinal herbs, endangering many of its high-value gene stocks. To address this, a workshop on 'Endangered Medicinal Plant Species in Himachal Pradesh' was held in 2002, attended by forty experts from diverse disciplines. According to the 2003 Forest Survey of India report, legally defined forest areas constitute 66.52% of the area of Himachal Pradesh. Vegetation in the state is dictated by elevation and precipitation.
The state is endowed with a high diversity of medicinal and aromatic plants. The Lahaul-Spiti region of the state, being a cold desert, supports unique plants of medicinal value including Ferula jaeschkeana, Hyoscyamus niger, Lancea tibetica, and Saussurea bracteata (Kala, C.P. (2005). Health traditions of Buddhist community and role of amchis in trans-Himalayan region of India. Current Science, 89(8): 1331–1338). Himachal is also said to be the fruit bowl of the country, with orchards being widespread. Meadows and pastures are also seen clinging to steep slopes. After the winter season, the hillsides and orchards bloom with wild flowers, while gladioli, carnations, marigolds, roses, chrysanthemums, tulips and lilies are carefully cultivated. Himachal Pradesh Horticultural Produce Marketing and Processing Corporation Ltd. (HPMC) is a state body that markets fresh and processed fruits. Himachal Pradesh has around 463 bird, 77 mammalian, 44 reptile and 80 fish species. The state currently has five national parks. Great Himalayan National Park, the oldest and largest national park in the state, is a UNESCO World Heritage Site. Pin Valley National Park, Inderkilla, Khirganga and Simbalbara are the other national parks located in the state. The state also has 30 wildlife sanctuaries and 3 conservation reserves. The state bird of Himachal Pradesh is the western tragopan (Tragopan melanocephalus), locally known as the jujurana. It is one of the rarest living pheasants in the world. The state animal is the snow leopard, which is even rarer to spot than the jujurana. Government The Legislative Assembly of Himachal Pradesh has no pre-constitution history. The State itself is a post-independence creation. It came into being as a centrally administered territory on 15 April 1948 from the integration of thirty erstwhile princely states.
Himachal Pradesh is governed through a parliamentary system of representative democracy, a feature the state shares with other Indian states. Universal suffrage is granted to residents. The legislature consists of elected members and special office bearers such as the Speaker and the Deputy Speaker, who are elected by the members. Assembly meetings are presided over by the Speaker, or by the Deputy Speaker in the Speaker's absence. The judiciary is composed of the Himachal Pradesh High Court and a system of lower courts. Executive authority is vested in the Council of Ministers headed by the Chief Minister, although the titular head of the executive is the Governor. The Governor is the head of state, appointed by the President of India. The leader of the party or coalition with a majority in the Legislative Assembly is appointed as the Chief Minister by the Governor, and the Council of Ministers is appointed by the Governor on the advice of the Chief Minister. The Council of Ministers reports to the Legislative Assembly. The Assembly is unicameral with 68 Members of the Legislative Assembly (MLAs). Terms of office run for five years, unless the Assembly is dissolved prior to the completion of the term. Auxiliary authorities known as panchayats, for which local body elections are regularly held, govern local affairs. In the assembly elections held in November 2017, the Bharatiya Janata Party secured an absolute majority, winning 44 of the 68 seats, while the Congress won only 21. Jai Ram Thakur was sworn in as Himachal Pradesh's Chief Minister for the first time in Shimla on 27 December 2017. Administrative divisions The state of Himachal Pradesh is divided into 12 districts, which are grouped into three divisions: Shimla, Kangra and Mandi. The districts are further divided into 73 subdivisions, 78 blocks and 172 tehsils.
Economy The era of planning in Himachal Pradesh started in 1951, along with the rest of India, with the implementation of the first five-year plan. The First Plan allocated 52.7 million to Himachal Pradesh. More than 50% of this expenditure was incurred on transport and communication, while the power sector got a share of just 4.6%, though this had steadily increased to 7% by the Third Plan. Expenditure on agriculture and allied activities increased from 14.4% in the First Plan to 32% in the Third Plan, showing a progressive decline afterwards, from 24% in the Fourth Plan to less than 10% in the Tenth Plan. Expenditure on the energy sector was 24.2% of the total in the Tenth Plan. The total GDP for 2005–06 was estimated at 254 billion as against 230 billion in the year 2004–05, showing an increase of 10.5%. The GDP for fiscal 2015–16 was estimated at 1.110 trillion, which increased to 1.247 trillion in 2016–17, recording growth of 6.8%. The per capita income increased from 130,067 in 2015–16 to 147,277 in 2016–17. The state government's advance estimates for fiscal 2017–18 stated the total GDP and per capita income as 1.359 trillion and 158,462, respectively. As of 2018, Himachal is the 22nd-largest state economy in India with in gross domestic product and has the 13th-highest per capita income () among the states and union territories of India. Himachal Pradesh also ranks as the second-best performing state in the country on human development indicators, after Kerala. One of the Indian government's key initiatives to tackle unemployment is the National Rural Employment Guarantee Act (NREGA). The participation of women in the NREGA has been observed to vary across different regions of the nation. As of the year 2009–2010, Himachal Pradesh joined the category of high female participation, with a 46% share of NREGS (National Rural Employment Guarantee Scheme) work days going to women. This was a drastic increase from the 13% recorded in 2006–2007.
Agriculture Agriculture accounts for 9.4% of the net state domestic product. It is the main source of income and employment in Himachal. About 90% of the population in Himachal depends directly upon agriculture, which provides direct employment to 62% of the total workers of the state. The main cereals grown include wheat, maize, rice and barley, with the major cropping systems being maize–wheat, rice–wheat and maize–potato–wheat. Pulses, fruits, vegetables and oilseeds are among the other crops grown in the state. The centuries-old traditional kuhl irrigation system is prevalent in the Kangra valley, though in recent years these kuhls have come under threat from hydro projects on small streams in the valley. Land husbandry initiatives such as the Mid-Himalayan Watershed Development Project, which includes the Himachal Pradesh Reforestation Project (HPRP), the world's largest clean development mechanism (CDM) undertaking, have improved agricultural yields and productivity, and raised rural household incomes. Apple is the principal cash crop of the state, grown principally in the districts of Shimla, Kinnaur, Kullu, Mandi, Chamba and some parts of Sirmaur and Lahaul-Spiti, with an average annual production of five lakh tonnes and a per hectare production of 8 to 10 tonnes. Apple cultivation constitutes 49 per cent of the total area under fruit crops and 85 per cent of the total fruit production in the state, with an estimated economy of 3,500 crore. Apples from Himachal are exported to other Indian states and even other countries. In 2011–12, the total area under apple cultivation was 104,000 hectares, up from 90,347 hectares in 2000–01. According to the provisional estimates of the Ministry of Agriculture & Farmers Welfare, the annual apple production in Himachal for fiscal 2015–16 stood at 753,000 tonnes, making it India's second-largest apple-producing state after Jammu and Kashmir.
The state is also among the leading producers of other fruits such as apricots, cherries, peaches, pears, plums and strawberries in India. Kangra tea is grown in the Kangra valley. Tea plantation began in 1849, and production peaked in the late 19th century with the tea becoming popular across the globe. Production dipped sharply after the 1905 Kangra earthquake and continues to decline. The tea received geographical indication status in 2005. Industry Energy Hydropower is one of the major sources of income generation for the state. The state has an abundance of hydropower resources because of the presence of various perennial rivers. Many high-capacity hydropower plants have been constructed which produce surplus electricity that is sold to other states, such as Delhi, Punjab and West Bengal. The income generated from exporting the electricity to other states is being provided as subsidy to the consumers in the state. The rich hydropower resources of Himachal have resulted in the state becoming almost universally electrified with around 94.8% houses receiving electricity as of 2001, as compared to the national average of 55.9%. Himachal's hydro-electric power production is, however, yet to be fully utilised. The identified hydroelectric potential for the state is 27,436 MW in five river basins while the hydroelectric capacity in 2016 was 10,351 MW. Tourism Tourism in Himachal Pradesh is a major contributor to the state's economy and growth. The Himalayas attracts tourists from all over the world. Hill stations like Shimla, Manali, Dharamshala, Dalhousie, Chamba, Khajjiar, Kullu and Kasauli are popular destinations for both domestic and foreign tourists. The state also has many important Hindu pilgrimage sites with prominent temples like Shri Chamunda Devi Mandir, Naina Devi Temple, Bajreshwari Mata Temple, Jwala Ji Temple, Chintpurni, Baijnath Temple, Bhimakali Temple, Bijli Mahadev and Jakhoo Temple. 
Manimahesh Lake, situated in the Bharmour region of Chamba district, is the venue of an annual Hindu pilgrimage trek held in the month of August, which attracts lakhs of devotees. The state is also referred to as "Dev Bhoomi" (literally meaning Abode of Gods) due to its mention as such in ancient Hindu texts and the occurrence of a large number of historical temples in the state. Himachal is also known for its adventure tourism activities like ice skating in Shimla, paragliding in Bir Billing and Solang valley, rafting in Kullu, skiing in Manali, boating in Bilaspur, and trekking, horse riding and fishing in different parts of the state. Shimla, the state's capital, is home to Asia's only natural ice-skating rink. Spiti Valley in Lahaul and Spiti district, situated at an altitude of over 3,000 metres with its picturesque landscapes, is an important destination for adventure seekers. The region also has some of the oldest Buddhist monasteries in the world. Himachal hosted the first Paragliding World Cup in India from 24 to 31 October 2015. The venue for the Paragliding World Cup was Bir Billing, which is 70 km from the tourist town of McLeod Ganj, located in the heart of Himachal in Kangra district. Bir Billing is the centre for aero sports in Himachal and is considered among the best sites for paragliding. Buddhist monasteries, trekking to tribal villages and mountain biking are other local possibilities. Transport Air Himachal has three domestic airports, in Kangra, Kullu and Shimla districts. The air routes connect the state with Delhi and Chandigarh. Bhuntar Airport is in Kullu district, around from the district headquarters. Gaggal Airport is in Kangra district, around from the district headquarters at Dharamshala, which is around 10 kilometres from Kangra. Shimla Airport is around west of the city. Railways Broad-gauge lines The only broad-gauge railway line in the whole state connects –Una Himachal railway station to in Punjab and runs all the way to Daulatpur, Himachal Pradesh.
The track has been electrified since 1999. A tiny portion of the line adjacent to Kandrori (KNDI) station on the Pathankot–Jalandhar section, under the Ferozepur division of Northern Railway, also crosses into Himachal Pradesh before venturing back into Punjab. Future constructions: the –Hamirpur rail project via Dhundla; Bhanupali (Punjab)–Bilaspur, Himachal Pradesh; and Chandigarh–Baddi. Narrow-gauge lines Himachal is known for its narrow-gauge railways. One is the Kalka–Shimla Railway, a UNESCO World Heritage Site, and another is the Kangra Valley Railway. The total length of these two tracks is . The Kalka–Shimla Railway passes through many tunnels and bridges, while the Pathankot–Jogindernagar line meanders through a maze of hills and valleys. The total route length of the operational railway network in the state is . Roads Roads are the major mode of transport in the hilly terrain. The state has a road network of , including eight National Highways (NH) that constitute and 19 State Highways with a total length of . Hamirpur district has the highest road density in the country. Some roads are closed during the winter and monsoon seasons due to snow and landslides. The state-owned Himachal Road Transport Corporation, with a fleet of over 3,100 buses, operates bus services connecting important cities and towns with villages within the state and also on various interstate routes. In addition, around 5,000 private buses ply in the state. Demographics Population Himachal Pradesh has a total population of 6,864,602, including 3,481,873 males and 3,382,729 females, according to the Census of India 2011. The Koli forms the largest caste-cluster, comprising 30% of the total population of Himachal Pradesh. The state has only 0.57 per cent of India's total population, recording a growth of 12.81 per cent. The scheduled castes and scheduled tribes account for 25.19 per cent and 5.71 per cent of the population, respectively.
The sex ratio stood at 972 females per 1,000 males, recording a marginal increase from 968 in 2001. The child sex ratio increased from 896 in 2001 to 909 in 2011. The total fertility rate (TFR) per woman in 2015 stood at 1.7, one of the lowest in India. In the census, the state is placed 21st on the population chart, followed by Tripura at 22nd place. Kangra District was top-ranked with a population strength of 1,507,223 (21.98%), Mandi District 999,518 (14.58%), Shimla District 813,384 (11.86%), Solan District 576,670 (8.41%), Sirmaur District 530,164 (7.73%), Una District 521,057 (7.60%), Chamba District 518,844 (7.57%), Hamirpur district 454,293 (6.63%), Kullu District 437,474 (6.38%), Bilaspur district 382,056 (5.57%), Kinnaur District 84,298 (1.23%) and Lahaul Spiti 31,528 (0.46%). The life expectancy at birth in Himachal Pradesh increased significantly from 52.6 years in the period from 1970 to 1975 (above the national average of 49.7 years) to 72.0 years for the period 2011–15 (above the national average of 68.3 years). The infant mortality rate stood at 40 in 2010, and the crude birth rate has declined from 37.3 in 1971 to 16.9 in 2010, below the national average of 26.5 in 1998. The crude death rate was 6.9 in 2010. Himachal Pradesh's literacy rate has almost doubled between 1981 and 2011 (see table to right). The state is one of the most literate states of India with a literacy rate of 83.78% as of 2011. Languages Hindi is the official language of Himachal Pradesh and is spoken by the majority of the population as a lingua franca. Sanskrit is the additional official language of the state. Although mostly encountered in academic and symbolic contexts, the government of Himachal Pradesh is encouraging its wider study and use. 
Most of the population, however, natively speaks one or another of the Western Pahari languages (locally also known as Himachali or just Pahari), a subgroup of the Indo-Aryan languages that includes Bhattiyali, Bilaspuri, Chambeali, Churahi, Gaddi, Hinduri, Kangri, Kullu, Mahasu Pahari, Mandeali, Pahari Kinnauri, Pangwali, and Sirmauri. Additional Indo-Aryan languages spoken include Punjabi (native to 4.4% of the population), Nepali (1.3%), Chinali, Lahul Lohar, and others. In parts of the state there are speakers of Tibeto-Burman languages such as Kinnauri (1.2%), Tibetan (0.3%), the Lahuli–Spiti languages (0.16%), Pattani (0.12%), Bhoti Kinnauri, Chitkuli Kinnauri, Bunan (or Gahri), Jangshung, Kanashi, Shumcho, Spiti Bhoti, Sunam, Tinani, and Tukpa. Religion Hinduism is the major religion in Himachal Pradesh. More than 95% of the total population adheres to the Hindu faith, mainly following the Shaivism and Shaktism traditions, which are evenly distributed throughout the state. Himachal Pradesh has the highest proportion of Hindus among all the states and union territories in India. Other religions that form a small percentage are Islam, Sikhism and Buddhism. Muslims are mainly concentrated in the Sirmaur, Chamba, Una and Solan districts, where they form 2.53–6.27% of the population. Sikhs mostly live in towns and cities and constitute 1.16% of the state population. Buddhists, who constitute 1.15%, are mainly natives and tribal people of Lahaul and Spiti, where they form a majority of 62%, and Kinnaur, where they form 21.5%. Culture Himachal Pradesh was one of the few states that had remained largely untouched by external customs, largely due to its difficult terrain. With remarkable economic and social advancements, the state has changed rapidly. Himachal Pradesh is a multilingual state like other Indian states. Western Pahari languages, also known as Himachali languages, are widely spoken in the state.
Some of the most commonly spoken individual languages are Kangri, Mandeali, Kulvi, Chambeali, Bharmauri and Kinnauri. The main caste groups in Himachal Pradesh are the Rajputs, Brahmins, Kanets, Kulindas, Girths, Raos, Rathis, Thakurs, Kolis, Holis, Chamars, Darains, Rehars, Chanals, Lohars, Baris, Dagis, Dhakhis, Turis and Batwals. Himachal is well known for its handicrafts. The carpets, leather works, Kullu shawls, Kangra paintings, Chamba Rumals, stoles, embroidered grass footwear (Pullan chappal), silver jewellery, metal ware, knitted woolen socks, Pattoo, basketry of cane and bamboo (wicker and rattan) and woodwork are among the notable ones. Of late, the demand for these handicrafts has increased within and outside the country. Himachali caps of various colour bands are also well-known local artwork, and are often treated as a symbol of Himachali identity. The colour of the Himachali cap has long been an indicator of political loyalties in the hill state, with Congress party leaders such as Virbhadra Singh donning caps with a green band and the rival BJP leader Prem Kumar Dhumal wearing a cap with a maroon band. The former has served six terms as Chief Minister of the state, while the latter is a two-time Chief Minister. Local music and dance also reflect the cultural identity of the state. Through their dance and music, the Himachali people entreat their gods during local festivals and other special occasions. Apart from national fairs and festivals, there are regional fairs and festivals, including the temple fairs in nearly every region, that are of great significance to Himachal Pradesh. The Kullu Dussehra festival is nationally known. The day-to-day cuisine of Himachalis is similar to that of the rest of northern India, with Punjabi and Tibetan influences. Lentils (Dāl), rice ( or ), vegetables () and chapati (wheat flatbread) form the staple food of the local population.
Non-vegetarian food is more widely accepted in Himachal Pradesh than elsewhere in India, partly due to the scarcity of fresh vegetables in the hilly terrain of the state. Himachali specialities include . Notable people Virbhadra Singh, JP Nadda, Prem Kumar Dhumal, Jai Ram Thakur, Anurag Thakur, Mohit Chauhan, Kangana Ranaut, Yami Gautam, Randeep Guleria, Vikram Batra, Rubina Dilaik, Preity Zinta, Jay Chaudhry, Anupam Kher, Gaurav Sharma (politician). Education At the time of Independence, Himachal Pradesh had a literacy rate of 8% – one of the lowest in the country. By 2011, the literacy rate had surged to 82.8%, making Himachal one of the most literate states in the country. There are over 10,000 primary schools, 1,000 secondary schools and more than 1,300 high schools in the state. In meeting the constitutional obligation to make primary education compulsory, Himachal became the first state in India to make elementary education accessible to every child. Himachal Pradesh is an exception to the nationwide gender bias in education levels. The state has a female literacy rate of around 76%. In addition, school enrolment and participation rates for girls are almost universal at the primary level. While higher levels of education do reflect a gender-based disparity, Himachal is still significantly ahead of other states in bridging the gap. The Hamirpur District in particular stands out for high literacy rates across all metrics of measurement. The state government has played an instrumental role in the rise of literacy in the state by spending a significant proportion of the state's GDP on education. During the first six five-year plans, most of the development expenditure in the education sector was used for quantitative expansion, but after the seventh five-year plan the state government switched its emphasis to the qualitative improvement and modernisation of education.
In an effort to raise the number of teaching staff at primary schools, the government appointed over 1,000 teacher aides through the Vidya Upasak Yojna in 2001. The Sarva Shiksha Abhiyan is another state government initiative that not only aims for universal elementary education but also encourages communities to engage in the management of schools. The Rashtriya Madhyamik Shiksha Abhiyan, launched in 2009, is a similar scheme that focuses on improving access to quality secondary education. The standard of education in the state has reached a considerably high level compared with other states in India, with several reputed educational institutes for higher studies. The Baddi University of Emerging Sciences and Technologies, Indian Institute of Technology Mandi, Indian Institute of Management Sirmaur, Himachal Pradesh University in Shimla, Central University of Himachal Pradesh in Dharamsala, National Institute of Technology, Hamirpur, Indian Institute of Information Technology Una, Alakh Prakash Goyal University, Maharaja Agrasen University and Himachal Pradesh National Law University are some of the notable universities in the state. Indira Gandhi Medical College and Hospital in Shimla, Dr. Rajendra Prasad Government Medical College in Kangra, Rajiv Gandhi Government Post Graduate Ayurvedic College in Paprola and the Homoeopathic Medical College & Hospital in Kumarhatti are the prominent medical institutes in the state. Besides these, there is a Government Dental College in Shimla, the state's first recognised dental institute. The state government has also decided to start three major nursing colleges to develop the healthcare system of the state. CSK Himachal Pradesh Krishi Vishvavidyalaya, Palampur, is one of the most renowned hill agriculture institutes in the world. Dr. Yashwant Singh Parmar University of Horticulture and Forestry has earned a unique distinction in India for imparting teaching, research and extension education in horticulture, forestry and allied disciplines.
Further, the state-run Jawaharlal Nehru Government Engineering College was inaugurated in 2006 at Sundernagar. Himachal Pradesh also hosts a campus of the fashion college National Institute of Fashion Technology (NIFT) in Kangra.
[ -0.16013628244400024, -0.2310377061367035, 0.6307532787322998, -0.2352772057056427, -0.46671023964881897, 0.26006120443344116, -0.2796518802642822, -0.1731841117143631, -0.5404177904129028, -0.2641292214393616, -0.6586886048316956, -0.3663279414176941, -0.48782625794410706, -0.038076125085...
14192
https://en.wikipedia.org/wiki/Helene
Helene
Helene or Hélène may refer to: People Helene (given name), a Greek feminine given name Helen of Troy, the daughter of Zeus and Leda Helene, a figure in Greek mythology who was a friend of Aphrodite and helped her seduce Adonis Helene (Amazon), a daughter of Tityrus and an Amazon who fought Achilles and died after he seriously wounded her Helene, the consort of Simon Magus in Adversus Haereses Hélène (given name), a feminine given name, the French version of Helen Hélène (singer), Hélène Rollès Astronomy Helene (moon), a moon of Saturn Books and film Hélène (drama), an 1891 play by Paul Delair Helene, the English edition of a German novel by Vicki Baum Hélène (film), a 1936 French drama film based on the novel by Baum Music Hélène (opera), a 1904 opera by Camille Saint-Saëns Polka Hélène in D minor for piano four hands by Borodin Hélène (album), a 1989 album by Roch Voisine Hélène (Hélène Rollès album), a 1992 album by Hélène Rollès Hélène, a 2002 album by Hélène Segara "Hélène" (song), a 1989 song by Roch Voisine "Hélène", a 1987 song by Julien Clerc Other Tropical Storm Helene, various storms See also Helena (disambiguation) Helen (disambiguation) Eleni (disambiguation) Ellen (disambiguation)
[ -0.35940757393836975, 0.13192898035049438, -0.288105845451355, -0.044675491750240326, -0.42032185196876526, 0.9929313063621521, 0.5700132846832275, -0.2492993026971817, -1.0765578746795654, -0.5924991965293884, -0.15415725111961365, 0.17635059356689453, 0.5330023169517517, 0.34024533629417...
14193
https://en.wikipedia.org/wiki/Hyperion
Hyperion
Hyperion may refer to: Greek mythology Hyperion (Titan), one of the twelve Titans Hyperion, a byname of the Sun, Helios Hyperion of Troy or Yperion, son of King Priam Science Hyperion (moon), a moon of the planet Saturn Hyperion (beetle), a genus of beetles in the family Carabidae Hyperion (tree), a coast redwood in Northern California and the world's tallest known living tree Hyperion proto-supercluster, a supercluster of galaxy groups discovered in 2018 Literature Hyperion (Hölderlin novel), a 1799 book by Friedrich Hölderlin Hyperion (poem), an 1819 poem by John Keats Hyperion (Longfellow novel), an 1839 book by Henry Wadsworth Longfellow Hyperion (Simmons novel), a 1989 novel by Dan Simmons Hyperion Cantos, the series of novels that started with Hyperion Hyperion (magazine), a 1908–1910 German literary journal Hyperion (comics), the name of several characters in the Marvel Comics universe Music Hyperion (Manticora album), 2002 Hyperion (Marilyn Crispell album), 1992 Hyperion (Gesaffelstein album), 2019 Hyperion, a 2016 EP by Krallice Hyperion, a 2018 album by St.
Lucia Hyperion Records, an independent British classical music label Businesses and organizations Hachette Books, a book publishing division known until 2014 as Hyperion Books Hyperion Books for Children, a book publisher Hyperion Entertainment, a computer game producer Hyperion Pictures, a film production company Hyperion Power Generation, a nuclear power company Hyperion Records, an independent British classical music label Hyperion Theatricals, part of Disney Theatrical Group Oracle Hyperion, a business software company owned by Oracle Places and facilities Hyperion, California Hyperion Theater, a theater at the Disney California Adventure theme park in Anaheim, California Hyperion sewage treatment plant, Playa del Rey, California Hyperion Tower (or Mok-dong Hyperion Towers), Seoul, South Korea Fictional entities and characters Hyperion, the flagship of Jim Raynor in StarCraft Hyperion Corporation, an organization in the Borderlands series Hyperion, a Gallente battleship in Eve Online Hyperion UCS Mk.XII, a military satellite from Einhänder Hyperion, Seifer Almasy's weapon in Final Fantasy VIII Ark Hyperion, one of the four Ark starships in Mass Effect: Andromeda Hyperion, an airship in the film The Island at the Top of the World Hyperion, an airship in the novel Skybreaker Hyperion, a ship in the TV series Skyland Hyperion, the flagship of Yang Wenli, a Legend of the Galactic Heroes character Emperor Hyperion, chief of the alien villains' race in the anime series Gekiganger III Hyperion Hotel, a fictional home base for Angel in the television series Angel Computing Hyperion (computer), an early portable computer Hyperion, a RuneScape emulator by Graham Edgecombe Hyperion, a hyperspectral imaging spectrometer on the NASA Earth Observing-1 satellite Hyperion, Disney's rendering system first used for Big Hero 6 (film) Vehicles Hyperion, a version of the Rolls-Royce Phantom Drophead Coupé Hyperion (yacht), a large sloop launched in 1998 HMS Hyperion, the name of 
several Royal Navy ships Other uses Hyperion (roller coaster), a roller coaster in Poland Hyperion (horse) (1930–1960), a British Thoroughbred horse Hyperion, a sculpture by Angela Laich after the Friedrich Hölderlin novel
[ 0.0694129690527916, 0.5815967917442322, -0.4616861939430237, 0.721813440322876, -0.049135416746139526, 0.7778040170669556, 0.19735144078731537, 0.4938516318798065, -0.7654257416725159, -0.3864944875240326, -0.5892273783683777, -0.09108710289001465, -0.03071662038564682, 0.7010471820831299,...
14194
https://en.wikipedia.org/wiki/History%20of%20medicine
History of medicine
The history of medicine shows how societies have changed in their approach to illness and disease from ancient times to the present. Early medical traditions include those of Babylon, China, Egypt and India. The Hippocratic Oath was written in ancient Greece in the 5th century BCE and is a direct inspiration for the oaths of office that physicians swear upon entry into the profession today. In the Middle Ages, surgical practices inherited from the ancient masters were improved and then systematized in Rogerius's The Practice of Surgery. Universities began the systematic training of physicians around 1220 CE in Italy. The invention of the microscope during the Renaissance was a consequence of improved understanding. Prior to the 19th century, humorism (also known as humoralism) was thought to explain the cause of disease, but it was gradually replaced by the germ theory of disease, leading to effective treatments and even cures for many infectious diseases. Military doctors advanced the methods of trauma treatment and surgery. Public health measures were developed especially in the 19th century as the rapid growth of cities required systematic sanitary measures. Advanced research centers opened in the early 20th century, often connected with major hospitals. The mid-20th century was characterized by new biological treatments, such as antibiotics. These advancements, along with developments in chemistry, genetics, and radiography, led to modern medicine. Medicine was heavily professionalized in the 20th century, and new careers opened to women as nurses (from the 1870s) and as physicians (especially after 1970). Prehistoric medicine Although there is little record to establish when plants were first used for medicinal purposes (herbalism), the use of plants as healing agents, as well as clays and soils, is ancient. Over time, through emulation of the behavior of fauna, a medicinal knowledge base developed and was passed between generations.
Even earlier, Neanderthals may have engaged in medical practices. As tribal cultures specialized into specific castes, shamans and apothecaries fulfilled the role of healer. The first known dentistry dates to c. 7000 BCE in Baluchistan, where Neolithic dentists used flint-tipped drills and bowstrings. The first known trepanning operation was carried out c. 5000 BCE in Ensisheim, France. A possible amputation was carried out c. 4,900 BCE in Buthiers-Boulancourt, France. Early civilizations Mesopotamia The ancient Mesopotamians had no distinction between "rational science" and magic. When a person became ill, doctors would prescribe both magical formulas to be recited and medicinal treatments. The earliest medical prescriptions appear in Sumerian during the Third Dynasty of Ur (c. 2112 BCE – 2004 BCE). The oldest Babylonian texts on medicine date back to the Old Babylonian period in the first half of the 2nd millennium BCE. The most extensive Babylonian medical text, however, is the Diagnostic Handbook written by the ummânū, or chief scholar, Esagil-kin-apli of Borsippa, during the reign of the Babylonian king Adad-apla-iddina (1069–1046 BCE). Along with the Egyptians, the Babylonians introduced the practice of diagnosis, prognosis, physical examination, and remedies. In addition, the Diagnostic Handbook introduced the methods of therapy and aetiology. The text contains a list of medical symptoms and often detailed empirical observations, along with logical rules used in combining the symptoms observed on the body of a patient with the diagnosis and prognosis. The Diagnostic Handbook was based on a logical set of axioms and assumptions, including the modern view that through the examination and inspection of the symptoms of a patient, it is possible to determine the patient's disease, its cause and future development, and the chances of the patient's recovery. The symptoms and diseases of a patient were treated through therapeutic means such as bandages, herbs and creams.
In East Semitic cultures, the main medicinal authority was a kind of exorcist-healer known as an āšipu. The profession was generally passed down from father to son and was held in extremely high regard. Of less frequent recourse was another kind of healer known as an asu, who corresponds more closely to a modern physician and treated physical symptoms using primarily folk remedies composed of various herbs, animal products, and minerals, as well as potions, enemas, and ointments or poultices. These physicians, who could be either male or female, also dressed wounds, set limbs, and performed simple surgeries. The ancient Mesopotamians also practiced prophylaxis and took measures to prevent the spread of disease. Mental illnesses were well known in ancient Mesopotamia, where diseases and mental disorders were believed to be caused by specific deities. Because hands symbolized control over a person, mental illnesses were known as "hands" of certain deities. One psychological illness was known as Qāt Ištar, meaning "Hand of Ishtar". Others were known as "Hand of Shamash", "Hand of the Ghost", and "Hand of the God". Descriptions of these illnesses, however, are so vague that it is usually impossible to determine which illnesses they correspond to in modern terminology. Mesopotamian doctors kept detailed records of their patients' hallucinations and assigned spiritual meanings to them. A patient who hallucinated that he was seeing a dog was predicted to die, whereas if he saw a gazelle, he would recover. The royal family of Elam was notorious for its members frequently suffering from insanity. Erectile dysfunction was recognized as being rooted in psychological problems.
According to him, "the practice of medicine is so specialized among them that each physician is a healer of one disease and no more." Although Egyptian medicine, to a considerable extent, dealt with the supernatural, it eventually developed a practical use in the fields of anatomy, public health, and clinical diagnostics. Medical information in the Edwin Smith Papyrus may date to a time as early as 3000 BCE. Imhotep in the 3rd dynasty is sometimes credited with being the founder of ancient Egyptian medicine and with being the original author of the Edwin Smith Papyrus, detailing cures, ailments and anatomical observations. The Edwin Smith Papyrus is regarded as a copy of several earlier works and was written c. 1600 BCE. It is an ancient textbook on surgery almost completely devoid of magical thinking and describes in exquisite detail the examination, diagnosis, treatment, and prognosis of numerous ailments. The Kahun Gynaecological Papyrus treats women's complaints, including problems with conception. Thirty four cases detailing diagnosis and treatment survive, some of them fragmentarily. Dating to 1800 BCE, it is the oldest surviving medical text of any kind. Medical institutions, referred to as Houses of Life are known to have been established in ancient Egypt as early as 2200 BCE. The Ebers Papyrus is the oldest written text mentioning enemas. Many medications were administered by enemas and one of the many types of medical specialists was an Iri, the Shepherd of the Anus. The earliest known physician is also credited to ancient Egypt: Hesy-Ra, "Chief of Dentists and Physicians" for King Djoser in the 27th century BCE. Also, the earliest known woman physician, Peseshet, practiced in Ancient Egypt at the time of the 4th dynasty. Her title was "Lady Overseer of the Lady Physicians." India The Atharvaveda, a sacred text of Hinduism dating from the Early Iron Age, is one of the first Indian texts dealing with medicine. 
The Atharvaveda also contains prescriptions of herbs for various ailments. The use of herbs to treat ailments would later form a large part of Ayurveda. Ayurveda, meaning the "complete knowledge for long life", is another medical system of India. Its two most famous texts belong to the schools of Charaka and Sushruta. The earliest foundations of Ayurveda were built on a synthesis of traditional herbal practices together with a massive addition of theoretical conceptualizations, new nosologies and new therapies dating from about 600 BCE onwards, and coming out of the communities of thinkers which included the Buddha and others. According to the compendium of Charaka, the Charakasamhitā, health and disease are not predetermined and life may be prolonged by human effort. The compendium of Suśruta, the Suśrutasamhitā, defines the purpose of medicine as curing the diseases of the sick, protecting the healthy, and prolonging life. Both these ancient compendia include details of the examination, diagnosis, treatment, and prognosis of numerous ailments. The Suśrutasamhitā is notable for describing procedures on various forms of surgery, including rhinoplasty, the repair of torn ear lobes, perineal lithotomy, cataract surgery, and several other excisions and surgical procedures. Most remarkable was Sushruta's surgery, especially the rhinoplasty, for which he is called the father of modern plastic surgery. Sushruta also described more than 125 surgical instruments in detail. Also remarkable is Sushruta's penchant for scientific classification: his medical treatise consists of 184 chapters and lists 1,120 conditions, including injuries and illnesses relating to aging and mental illness.
The Ayurvedic classics mention eight branches of medicine: kāyācikitsā (internal medicine), śalyacikitsā (surgery including anatomy), śālākyacikitsā (eye, ear, nose, and throat diseases), kaumārabhṛtya (pediatrics with obstetrics and gynaecology), bhūtavidyā (spirit and psychiatric medicine), agada tantra (toxicology with treatments of stings and bites), rasāyana (science of rejuvenation), and vājīkaraṇa (aphrodisiac and fertility). Apart from learning these, the student of Āyurveda was expected to know ten arts that were indispensable in the preparation and application of his medicines: distillation, operative skills, cooking, horticulture, metallurgy, sugar manufacture, pharmacy, analysis and separation of minerals, compounding of metals, and preparation of alkalis. The teaching of various subjects was done during the instruction of relevant clinical subjects. For example, the teaching of anatomy was a part of the teaching of surgery, embryology was a part of training in pediatrics and obstetrics, and the knowledge of physiology and pathology was interwoven in the teaching of all the clinical disciplines. The normal length of the student's training appears to have been seven years, but the physician was expected to continue learning. As an alternative form of medicine in India, Unani medicine found deep roots and royal patronage during medieval times. It progressed during the Indian sultanate and Mughal periods. Unani medicine is very close to Ayurveda. Both are based on the theory of the presence of the elements (in Unani, they are considered to be fire, water, earth, and air) in the human body. According to followers of Unani medicine, these elements are present in different fluids and their balance leads to health and their imbalance leads to illness. By the 18th century CE, Sanskrit medical wisdom still dominated. Muslim rulers built large hospitals in 1595 in Hyderabad, and in Delhi in 1719, and numerous commentaries on ancient texts were written.
China China also developed a large body of traditional medicine. Much of the philosophy of traditional Chinese medicine derived from empirical observations of disease and illness by Taoist physicians and reflects the classical Chinese belief that individual human experiences express causative principles effective in the environment at all scales. These causative principles, whether material, essential, or mystical, correlate as the expression of the natural order of the universe. The foundational text of Chinese medicine is the Huangdi neijing (or Yellow Emperor's Inner Canon), written between the 5th and 3rd centuries BCE. Near the end of the 2nd century CE, during the Han dynasty, Zhang Zhongjing wrote a Treatise on Cold Damage, which contains the earliest known reference to the Neijing Suwen. The Jin Dynasty practitioner and advocate of acupuncture and moxibustion, Huangfu Mi (215–282), also quotes the Yellow Emperor in his Jiayi jing, c. 265. During the Tang Dynasty, the Suwen was expanded and revised and is now the best extant representation of the foundational roots of traditional Chinese medicine. Traditional Chinese medicine, which is based on the use of herbal medicine, acupuncture, massage and other forms of therapy, has been practiced in China for thousands of years. In the 18th century, during the Qing dynasty, there was a proliferation of popular books as well as more advanced encyclopedias on traditional medicine. Jesuit missionaries introduced Western science and medicine to the royal court, although the Chinese physicians ignored them. Finally, in the 19th century, Western medicine was introduced at the local level by Christian medical missionaries from the London Missionary Society (Britain), the Methodist Church (Britain) and the Presbyterian Church (US). Benjamin Hobson (1816–1873) set up a highly successful Wai Ai Clinic in Guangzhou, China, in 1839.
The Hong Kong College of Medicine for Chinese was founded in 1887 by the London Missionary Society, with its first graduate (in 1892) being Sun Yat-sen, who later led the Chinese Revolution (1911). The Hong Kong College of Medicine for Chinese was the forerunner of the School of Medicine of the University of Hong Kong, which started in 1911. Because of the social custom that men and women should not be near to one another, the women of China were reluctant to be treated by male doctors. The missionaries sent women doctors such as Dr. Mary Hannah Fulton (1854–1927). Supported by the Foreign Missions Board of the Presbyterian Church (US), she founded in 1902 the first medical college for women in China, the Hackett Medical College for Women, in Guangzhou. Historiography of Chinese Medicine When reading the Chinese classics, it is important for scholars to examine these works from the Chinese perspective. Historians have noted two key aspects of Chinese medical history: understanding conceptual differences when translating the term "身", and observing the history from the perspective of cosmology rather than biology. In Chinese classical texts, the term 身 is the closest historical translation to the English word "body" because it sometimes refers to the physical human body in terms of being weighed or measured, but the term is to be understood as an “ensemble of functions” encompassing both the human psyche and emotions. This concept of the human body is opposed to the European duality of a separate mind and body. It is critical for scholars to understand the fundamental differences in concepts of the body in order to connect the medical theory of the classics to the “human organism” it is explaining. Chinese scholars established a correlation between the cosmos and the “human organism.” The basic components of cosmology, qi, yin yang and the Five Phase theory, were used to explain health and disease in texts such as Huangdi neijing.
Yin and yang are the changing factors in cosmology, with qi as the vital force or energy of life. The Five Phase theory (Wu Xing) of the Han dynasty contains the elements wood, fire, earth, metal, and water. By understanding medicine from a cosmology perspective, historians better understand Chinese medical and social classifications such as gender, which was defined by a domination or remission of yang in terms of yin. These two distinctions are imperative when analyzing the history of traditional Chinese medical science. A majority of Chinese medical history written after the classical canons comes in the form of primary source case studies, in which academic physicians record the illness of a particular person and the healing techniques used, as well as their effectiveness. Historians have noted that Chinese scholars wrote these studies instead of “books of prescriptions or advice manuals”; in their historical and environmental understanding, no two illnesses were alike, so the healing strategies of the practitioner were unique every time to the specific diagnosis of the patient. Medical case studies existed throughout Chinese history, but “individually authored and published case history” was a prominent creation of the Ming Dynasty. An example of such case studies is the literati physician Cheng Congzhou's collection of 93 cases, published in 1644. Greece and Roman Empire Around 800 BCE, Homer in The Iliad gives descriptions of wound treatment by the two sons of Asklepios, the admirable physicians Podaleirius and Machaon, and one acting doctor, Patroclus. Because Machaon is wounded and Podaleirius is in combat, Eurypylus asks Patroclus to "cut out this arrow from my thigh, wash off the blood with warm water and spread soothing ointment on the wound." Asklepios, like Imhotep, came to be regarded as a god of healing over time. Temples dedicated to the healer-god Asclepius, known as Asclepieia (, sing.
, Asclepieion), functioned as centers of medical advice, prognosis, and healing. At these shrines, patients would enter a dream-like state of induced sleep known as enkoimesis (), not unlike anesthesia, in which they either received guidance from the deity in a dream or were cured by surgery. Asclepeia provided carefully controlled spaces conducive to healing and fulfilled several of the requirements of institutions created for healing. In the Asclepeion of Epidaurus, three large marble boards dated to 350 BCE preserve the names, case histories, complaints, and cures of about 70 patients who came to the temple with a problem and shed it there. Some of the surgical cures listed, such as the opening of an abdominal abscess or the removal of traumatic foreign material, are realistic enough to have taken place, but with the patient in a state of enkoimesis induced with the help of soporific substances such as opium. Alcmaeon of Croton wrote on medicine between 500 and 450 BCE. He argued that channels linked the sensory organs to the brain, and it is possible that he discovered one type of channel, the optic nerves, by dissection. Hippocrates A towering figure in the history of medicine was the physician Hippocrates of Kos (c. 460 – c. 370 BCE), considered the "father of modern medicine." The Hippocratic Corpus is a collection of around seventy early medical works from ancient Greece strongly associated with Hippocrates and his students. Most famously, the Hippocratics invented the Hippocratic Oath for physicians. Contemporary physicians swear an oath of office which includes aspects found in early editions of the Hippocratic Oath. Hippocrates and his followers were the first to describe many diseases and medical conditions. Though humorism (humoralism) as a medical system predates 5th-century Greek medicine, Hippocrates and his students systematized the thinking that illness can be explained by an imbalance of blood, phlegm, black bile, and yellow bile.
Hippocrates is given credit for the first description of clubbing of the fingers, an important diagnostic sign in chronic suppurative lung disease, lung cancer and cyanotic heart disease. For this reason, clubbed fingers are sometimes referred to as "Hippocratic fingers". Hippocrates was also the first physician to describe the Hippocratic face in Prognosis. Shakespeare famously alludes to this description when writing of Falstaff's death in Act II, Scene iii. of Henry V. Hippocrates began to categorize illnesses as acute, chronic, endemic and epidemic, and use terms such as "exacerbation, relapse, resolution, crisis, paroxysm, peak, and convalescence." Another of Hippocrates's major contributions may be found in his descriptions of the symptomatology, physical findings, surgical treatment and prognosis of thoracic empyema, i.e. suppuration of the lining of the chest cavity. His teachings remain relevant to present-day students of pulmonary medicine and surgery. Hippocrates was the first documented person to practise cardiothoracic surgery, and his findings are still valid. Some of the techniques and theories developed by Hippocrates are now put into practice by the fields of Environmental and Integrative Medicine. These include recognizing the importance of taking a complete history which includes environmental exposures as well as foods eaten by the patient which might play a role in his or her illness. Herophilus and Erasistratus Two great Alexandrians laid the foundations for the scientific study of anatomy and physiology, Herophilus of Chalcedon and Erasistratus of Ceos. Other Alexandrian surgeons gave us ligature (hemostasis), lithotomy, hernia operations, ophthalmic surgery, plastic surgery, methods of reduction of dislocations and fractures, tracheotomy, and mandrake as an anaesthetic. Some of what we know of them comes from Celsus and Galen of Pergamum. Herophilus of Chalcedon, the renowned Alexandrian physician, was one of the pioneers of human anatomy.
Though his knowledge of the anatomical structure of the human body was vast, he specialized in the aspects of neural anatomy. Thus, his experimentation was centered around the anatomical composition of the blood-vascular system and the pulsations that can be analyzed from the system. Furthermore, the surgical experimentation he administered caused him to become very prominent throughout the field of medicine, as he was one of the first physicians to initiate the exploration and dissection of the human body. The ban on human dissection was lifted during his time within the scholastic community. This brief moment in the history of Greek medicine allowed him to further study the brain, which he believed was the core of the nervous system. He also distinguished between veins and arteries, noting that the latter pulse and the former do not. Thus, while working at the medical school of Alexandria, Herophilus placed intelligence in the brain based on his surgical exploration of the body, and he connected the nervous system to motion and sensation. In addition, he and his contemporary, Erasistratus of Ceos, continued to research the role of veins and nerves. After conducting extensive research, the two Alexandrians mapped out the course of the veins and nerves across the human body. Erasistratus connected the increased complexity of the surface of the human brain compared to other animals to its superior intelligence. He sometimes employed experiments to further his research, at one time repeatedly weighing a caged bird, and noting its weight loss between feeding times. In Erasistratus' physiology, air enters the body, is then drawn by the lungs into the heart, where it is transformed into vital spirit, and is then pumped by the arteries throughout the body. Some of this vital spirit reaches the brain, where it is transformed into animal spirit, which is then distributed by the nerves. Galen The Greek Galen (c.
129 – c. 216 CE) was one of the greatest physicians of the ancient world, as his theories dominated all medical studies for nearly 1500 years. His theories and experimentation laid the foundation for modern medicine surrounding the heart and blood. Galen's influence and innovations in medicine can be attributed to the experiments he conducted, which were unlike any other medical experiments of his time. Galen strongly believed that medical dissection was one of the essential procedures in truly understanding medicine. He began to dissect different animals that were anatomically similar to humans, which allowed him to learn more about the internal organs and extrapolate the surgical studies to the human body. In addition, he performed many audacious operations—including brain and eye surgeries—that were not tried again for almost two millennia. Through the dissections and surgical procedures, Galen concluded that blood is able to circulate throughout the human body, and the heart is most similar to the human soul. In Ars medica ("Arts of Medicine"), he further explains the mental properties in terms of specific mixtures of the bodily organs. While much of his work surrounded the physical anatomy, he also worked heavily in humoural physiology. Galen's medical work was regarded as authoritative until well into the Middle Ages. He left a physiological model of the human body that became the mainstay of the medieval physician's university anatomy curriculum. Although he attempted to extrapolate the animal dissections towards the model of the human body, some of Galen's theories were incorrect. This caused his model to suffer greatly from stasis and intellectual stagnation. Greek and Roman taboos meant that dissection of the human body was usually banned in ancient times, but this changed in the Middle Ages. In 1523 Galen's On the Natural Faculties was published in London.
In the 1530s Belgian anatomist and physician Andreas Vesalius launched a project to translate many of Galen's Greek texts into Latin. Vesalius's most famous work, De humani corporis fabrica, was greatly influenced by Galenic writing and form. Roman contributions The Romans invented numerous surgical instruments, including the first instruments unique to women, as well as the surgical uses of forceps, scalpels, cautery, cross-bladed scissors, the surgical needle, the sound, and specula. Romans also performed cataract surgery. The Roman army physician Dioscorides (c. 40–90 CE) was a Greek botanist and pharmacologist. He wrote the encyclopedia De Materia Medica describing over 600 herbal cures, forming an influential pharmacopoeia which was used extensively for the following 1,500 years. Early Christians in the Roman Empire incorporated medicine into their theology, ritual practices, and metaphors. The Middle Ages, 400 to 1400 Byzantine Empire and Sassanid Empire Byzantine medicine encompasses the common medical practices of the Byzantine Empire from about 400 CE to 1453 CE. Byzantine medicine was notable for building upon the knowledge base developed by its Greco-Roman predecessors. In preserving medical practices from antiquity, Byzantine medicine influenced Islamic medicine as well as fostering the Western rebirth of medicine during the Renaissance. Byzantine physicians often compiled and standardized medical knowledge into textbooks. Their records tended to include both diagnostic explanations and technical drawings. The Medical Compendium in Seven Books, written by the leading physician Paul of Aegina, survived as a particularly thorough source of medical knowledge. This compendium, written in the late seventh century, remained in use as a standard textbook for the following 800 years.
Late antiquity ushered in a revolution in medical science, and historical records often mention civilian hospitals (although battlefield medicine and wartime triage were recorded well before Imperial Rome). Constantinople stood out as a center of medicine during the Middle Ages, which was aided by its crossroads location, wealth, and accumulated knowledge. The first known example of separating conjoined twins occurred in the Byzantine Empire in the 10th century; the next recorded separation did not take place until many centuries later, in Germany in 1689. The Byzantine Empire's neighbor, the Persian Sassanid Empire, also made noteworthy contributions, mainly with the establishment of the Academy of Gondeshapur, which was "the most important medical center of the ancient world during the 6th and 7th centuries." In addition, Cyril Elgood, British physician and a historian of medicine in Persia, commented that thanks to medical centers like the Academy of Gondeshapur, "to a very large extent, the credit for the whole hospital system must be given to Persia." Islamic world The Islamic civilization rose to primacy in medical science as its physicians contributed significantly to the field of medicine, including anatomy, ophthalmology, pharmacology, pharmacy, physiology, and surgery. Islamic civilization's contribution to these fields within medicine was a gradual process that took hundreds of years. In the time of the first great Muslim dynasty, the Umayyad Caliphate (661–750 CE), these fields were still in their very early stages of development and medical understanding was not abundant. One reason there was not a strong push for medical advancement was how the population handled disease and illness.
This was influenced by the direction, energy, and resources that the early Umayyad caliphs directed, after the death of the Prophet Muhammad (632 CE), towards spreading Islam to the nations they invaded and expanding their caliphate. Because of this effort towards the expansion of Islam, far less effort was given to medicine. Rather, after taking over the caliphate, the Umayyads wanted foremost to establish control over the new empire. The priority placed on these factors led much of the population to believe that God would provide cures for their illnesses and diseases because of the attention on spirituality. There were also many other areas of interest during that time before there was a rising interest in the field of medicine. Abd al-Malik ibn Marwan, the fifth caliph of the Umayyad, developed governmental administration, adopted Arabic as the main language, and focused on many other areas. However, interest in Islamic medicine grew significantly when the Abbasid Caliphate (750–1258 CE) overthrew the Umayyad Caliphate in 750 CE. This change in dynasty served as a turning point towards scientific and medical developments. A major contributor was that, under Abbasid rule, a great part of the Greek legacy was transmitted into Arabic, which by then was the main language of Islamic nations. Because of this, many Islamic physicians were heavily influenced by the works of Greek scholars of Alexandria and Egypt and were able to expand on those texts to produce new medical knowledge. This period is also known as the Islamic Golden Age, a period of development and flourishing of technology, commerce, and sciences, including medicine.
Additionally, during this time the creation of the first Islamic hospital in 805 CE by the Abbasid caliph Harun al-Rashid in Baghdad was recounted as a glorious event of the Golden Age. This hospital in Baghdad contributed immensely to Baghdad's success and also provided educational opportunities for Islamic physicians. During the Islamic Golden Age, there were many famous Islamic physicians and scientists who paved the way for medical advancements and understanding. Ibn al-Haytham (965–1040 CE), sometimes referred to as the father of modern optics, was the author of the monumental Book of Optics, while Muhammad ibn Zakariya al-Razi was known for his work in differentiating smallpox from measles. However, this would not have been possible without influence from many different areas of the world. The Arabs were influenced by ancient Indian, Persian, Greek, Roman and Byzantine medical practices, and helped develop them further. Galen and Hippocrates were pre-eminent authorities. The translation of 129 of Galen's works into Arabic by the Nestorian Christian Hunayn ibn Ishaq and his assistants, and in particular Galen's insistence on a rational systematic approach to medicine, set the template for Islamic medicine, which rapidly spread throughout the Arab Empire. Its most famous physicians included the Persian polymaths Muhammad ibn Zakarīya al-Rāzi and Avicenna, who wrote more than 40 works on health, medicine, and well-being. Taking leads from Greece and Rome, Islamic scholars kept both the art and science of medicine alive and moving forward. Persian polymath Avicenna has also been called the "father of medicine". He wrote The Canon of Medicine, which became a standard medical text at many medieval European universities, considered one of the most famous books in the history of medicine.
The Canon of Medicine presents an overview of the contemporary medical knowledge of the medieval Islamic world, which had been influenced by earlier traditions including Greco-Roman medicine (particularly Galen), Persian medicine, Chinese medicine and Indian medicine. Persian physician al-Rāzi was one of the first to question the Greek theory of humorism, which nevertheless remained influential in both medieval Western and medieval Islamic medicine. Some volumes of al-Rāzi's work Al-Mansuri, namely "On Surgery" and "A General Book on Therapy", became part of the medical curriculum in European universities. Additionally, he has been described as a doctor's doctor, the father of pediatrics, and a pioneer of ophthalmology. For example, he was the first to recognize the reaction of the eye's pupil to light. In addition to contributions to mankind's understanding of human anatomy, Islamicate scientists and scholars, physicians specifically, played an invaluable role in the development of the modern hospital system, creating the foundations on which more contemporary medical professionals would build models of public health systems in Europe and elsewhere. During the time of the Safavid empire (16th–18th centuries) in Iran and the Mughal empire (16th–19th centuries) in India, Muslim scholars radically transformed the institution of the hospital, creating an environment in which rapidly developing medical knowledge of the time could be passed among students and teachers from a wide range of cultures. There were two main schools of thought in patient care at the time. These included humoural physiology from the Persians and Ayurvedic practice. After these theories were translated from Sanskrit to Persian and vice-versa, hospitals could have a mix of culture and techniques. This allowed for a sense of collaborative medicine. Hospitals became increasingly common during this period as wealthy patrons commonly founded them.
Many features that are still in use today, such as an emphasis on hygiene, a staff fully dedicated to the care of patients, and separation of individual patients from each other, were developed in Islamicate hospitals long before they came into practice in Europe. At the time, the patient care aspects of hospitals in Europe had not taken effect. European hospitals were places of religion rather than institutions of science. As was the case with much of the scientific work done by Islamicate scholars, many of these novel developments in medical practice were transmitted to European cultures hundreds of years after they had long been utilized throughout the Islamicate world. Although Islamicate scientists were responsible for discovering much of the knowledge that allows the hospital system to function safely today, European scholars who built on this work still receive the majority of the credit historically. Before the development of scientific medical practices in the Islamicate empires, medical care was mainly performed by religious figures such as priests. Without a profound understanding of how infectious diseases worked and why sickness spread from person to person, these early attempts at caring for the ill and injured often did more harm than good. In contrast, with the development of new and safer practices by Islamicate scholars and physicians in Arabian hospitals, ideas vital for the effective care of patients were developed, learned, and transmitted widely. Hospitals served as a way to spread these novel and necessary practices, some of which included separation of men and women patients, use of pharmacies for storing and keeping track of medications, keeping of patient records, and personal and institutional sanitation and hygiene. Much of this knowledge was recorded and passed on through Islamicate medical texts, many of which were carried to Europe and translated for the use of European medical workers.
The Tasrif, written by surgeon Abu Al-Qasim Al-Zahrawi, was translated into Latin; it became one of the most important medical texts in European universities during the Middle Ages and contained useful information on surgical techniques and spread of bacterial infection. The hospital was a typical institution included in the majority of Muslim cities, and although they were often physically attached to religious institutions, they were not themselves places of religious practice. Rather, they served as facilities in which education and scientific innovation could flourish. If they had places of worship, they were secondary to the medical side of the hospital. Islamicate hospitals, along with observatories used for astronomical science, were some of the most important points of exchange for the spread of scientific knowledge. Undoubtedly, the hospital system developed in the Islamicate world played an invaluable role in the creation and evolution of the hospitals we as a society know and depend on today. Europe After 400 CE, the study and practice of medicine in the Western Roman Empire went into deep decline. Medical services were provided, especially for the poor, in the thousands of monastic hospitals that sprang up across Europe, but the care was rudimentary and mainly palliative. Most of the writings of Galen and Hippocrates were lost to the West, with the summaries and compendia of St. Isidore of Seville being the primary channel for transmitting Greek medical ideas. The Carolingian renaissance brought increased contact with Byzantium and a greater awareness of ancient medicine, but only with the twelfth-century renaissance and the new translations coming from Muslim and Jewish sources in Spain, and the fifteenth-century flood of resources after the fall of Constantinople did the West fully recover its acquaintance with classical antiquity. 
Greek and Roman taboos had meant that dissection was usually banned in ancient times, but in the Middle Ages it changed: medical teachers and students at Bologna began to open human bodies, and Mondino de Luzzi (c. 1275–1326) produced the first known anatomy textbook based on human dissection. Wallis identifies a prestige hierarchy with university educated physicians on top, followed by learned surgeons; craft-trained surgeons; barber surgeons; itinerant specialists such as dentists and oculists; empirics; and midwives. Schools The first medical schools were opened in the 9th century, most notably the Schola Medica Salernitana at Salerno in southern Italy. The cosmopolitan influences from Greek, Latin, Arabic, and Hebrew sources gave it an international reputation as the Hippocratic City. Students from wealthy families came for three years of preliminary studies and five of medical studies. Under the laws of Frederick II, who founded the University of Naples in 1224 and improved the Schola Salernitana, medicine in Sicily underwent a particular development between 1200 and 1400 (the so-called Sicilian Middle Ages), so much so that a true school of Jewish medicine was created. As a result, after a legal examination, a Jewish Sicilian woman, Virdimura, wife of the physician Pasquale of Catania, holds the historical record as the first woman officially trained to exercise the medical profession. By the thirteenth century, the medical school at Montpellier began to eclipse the Salernitan school. In the 12th century, universities were founded in Italy, France, and England, which soon developed schools of medicine. The University of Montpellier in France and Italy's University of Padua and University of Bologna were leading schools. Nearly all the learning was from lectures and readings in Hippocrates, Galen, Avicenna, and Aristotle. In later centuries, the importance of universities founded in the late Middle Ages gradually increased, e.g.
Charles University in Prague (established in 1348), Jagiellonian University in Cracow (1364), University of Vienna (1365), Heidelberg University (1386) and University of Greifswald (1456). Humors The theory of humors was derived from ancient medical works, dominated western medicine until the 19th century, and is credited to Greek philosopher and surgeon Galen of Pergamon (129–c. 216 CE). In Greek medicine, there are thought to be four humors, or bodily fluids that are linked to illness: blood, phlegm, yellow bile, and black bile. Early scientists believed that food is digested into blood, muscle, and bones, while the humors other than blood were formed by the indigestible materials left over. An excess or shortage of any one of the four humors is theorized to cause an imbalance that results in sickness; this had already been hypothesized by sources before Hippocrates. Hippocrates (c. 400 BCE) deduced that the four seasons of the year and the four ages of man affect the body in relation to the humors. The four ages of man are childhood, youth, prime age, and old age. The four humors are associated with the four seasons: black bile with autumn, yellow bile with summer, phlegm with winter, and blood with spring. In De temperamentis, Galen linked what he called temperaments, or personality characteristics, to a person's natural mixture of humors. He also said that the best place to check the balance of temperaments was in the palm of the hand. A person that is considered to be phlegmatic is said to be an introvert, even-tempered, calm, and peaceful. This person would have an excess of phlegm, which is described as a viscous substance or mucus. Similarly, a melancholic temperament related to being moody, anxious, depressed, introverted, and pessimistic. A melancholic temperament is caused by an excess of black bile, which is sedimentary and dark in color.
Being extroverted, talkative, easygoing, carefree, and sociable coincides with a sanguine temperament, which is linked to too much blood. Finally, a choleric temperament is related to too much yellow bile, which is actually red in color and has the texture of foam; it is associated with being aggressive, excitable, impulsive, and also extroverted. There are numerous ways to treat a disproportion of the humors. For example, if someone was suspected to have too much blood, then the physician would perform bloodletting as a treatment. Likewise, a person with too much phlegm would feel better after expectorating, and someone with too much yellow bile would purge. Another factor to be considered in the balance of humors is the quality of air in which one resides, such as the climate and elevation. Also, the standard of food and drink, balance of sleeping and waking, exercise and rest, retention and evacuation are important. Moods such as anger, sadness, joy, and love can affect the balance. During that time, the importance of balance was demonstrated by the fact that women lose blood monthly during menstruation, and have a lesser occurrence of gout, arthritis, and epilepsy than men do. Galen also hypothesized that there are three faculties. The natural faculty affects growth and reproduction and is produced in the liver. Animal or vital faculty controls respiration and emotion, coming from the heart. In the brain, the psychic faculty commands the senses and thought. The structure of bodily functions is related to the humors as well. Greek physicians understood that food was cooked in the stomach; this is where the nutrients are extracted. The best, most potent and pure nutrients from food are reserved for blood, which is produced in the liver and carried through veins to organs. Blood enhanced with pneuma, which means wind or breath, is carried by the arteries.
The path that blood takes is as follows: venous blood passes through the vena cava and is moved into the right ventricle of the heart; then, the pulmonary artery takes it to the lungs. The pulmonary vein then mixes air from the lungs with blood to form arterial blood, which has different observable characteristics. After leaving the liver, half of the yellow bile that is produced travels to the blood, while the other half travels to the gallbladder. Similarly, half of the black bile produced gets mixed in with blood, and the other half is used by the spleen. Women In 1376, in Sicily, under the laws of Federico II, which required an examination by a royal commission of physicians, the first qualification to practice medicine was granted to a woman, Virdimura, a Jewish woman of Catania, whose document is preserved in the Italian national archives in Palermo. Renaissance to early modern period 16th–18th century The Renaissance brought an intense focus on scholarship to Christian Europe. A major effort to translate the Arabic and Greek scientific works into Latin emerged. Europeans gradually became experts not only in the ancient writings of the Romans and Greeks, but in the contemporary writings of Islamic scientists. During the later centuries of the Renaissance came an increase in experimental investigation, particularly in the field of dissection and body examination, thus advancing our knowledge of human anatomy. The development of modern neurology began in the 16th century in Italy and France with Niccolò Massa, Jean Fernel, Jacques Dubois and Andreas Vesalius. Vesalius described in detail the anatomy of the brain and other organs; he had little knowledge of the brain's function, thinking that it resided mainly in the ventricles. Over his lifetime he corrected over 200 of Galen's mistakes. Understanding of medical sciences and diagnosis improved, but with little direct benefit to health care.
Few effective drugs existed, beyond opium and quinine. Folklore cures and potentially poisonous metal-based compounds were popular treatments. Independently from Ibn al-Nafis, Michael Servetus rediscovered the pulmonary circulation, but this discovery did not reach the public because it was written down for the first time in the "Manuscript of Paris" in 1546, and later published in the theological work for which he paid with his life in 1553. Later this was perfected by Renaldus Columbus and Andrea Cesalpino. In 1628 the English physician William Harvey made a ground-breaking discovery when he correctly described the circulation of the blood in his Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus. Before this time the most useful manual in medicine used both by students and expert physicians was Dioscorides' De Materia Medica, a pharmacopoeia. Bacteria and protists were first observed with a microscope by Antonie van Leeuwenhoek in 1676, initiating the scientific field of microbiology. Paracelsus Paracelsus (1493–1541) was an erratic and abusive innovator who rejected Galen and bookish knowledge, calling for experimental research, with heavy doses of mysticism, alchemy and magic mixed in. He rejected sacred magic (miracles) under Church auspices and looked for cures in nature. He preached, but he also pioneered the use of chemicals and minerals in medicine. His hermetical views were that sickness and health in the body relied on the harmony of man (microcosm) and Nature (macrocosm). He took an approach different from those before him, using this analogy not in the manner of soul-purification but in the manner that humans must have certain balances of minerals in their bodies, and that certain illnesses of the body had chemical remedies that could cure them. Most of his influence came after his death.
Paracelsus is a highly controversial figure in the history of medicine, with most experts hailing him as a Father of Modern Medicine for shaking off religious orthodoxy and inspiring many researchers; others say he was a mystic more than a scientist and downplay his importance. Padua and Bologna University training of physicians began in the 13th century. The University of Padua was founded about 1220 by walkouts from the University of Bologna, and began teaching medicine in 1222. It played a leading role in the identification and treatment of diseases and ailments, specializing in autopsies and the inner workings of the body. Starting in 1595, Padua's famous anatomical theatre drew artists and scientists studying the human body during public dissections. The intensive study of Galen led to critiques of Galen modeled on his own writing, as in the first book of Vesalius's De humani corporis fabrica. Andreas Vesalius held the chair of Surgery and Anatomy (explicator chirurgiae) and in 1543 published his anatomical discoveries in De Humani Corporis Fabrica. He portrayed the human body as an interdependent system of organ groupings. The book triggered great public interest in dissections and caused many other European cities to establish anatomical theatres. At the University of Bologna the training of physicians began in 1219. The Italian city attracted students from across Europe. Taddeo Alderotti built a tradition of medical education that established the characteristic features of Italian learned medicine and was copied by medical schools elsewhere. Turisanus (d. 1320) was his student. The curriculum was revised and strengthened in 1560–1590. A representative professor was Julius Caesar Aranzi (Arantius) (1530–1589). He became Professor of Anatomy and Surgery at the University of Bologna in 1556, where he established anatomy as a major branch of medicine for the first time. 
Aranzi combined anatomy with a description of pathological processes, based largely on his own research, Galen, and the work of his contemporary Italians. Aranzi discovered the 'Nodules of Aranzio' in the semilunar valves of the heart and wrote the first description of the levator palpebrae superioris and the coracobrachialis muscles. His books (in Latin) covered surgical techniques for many conditions, ranging from hydrocephalus, nasal polyps, goitre and tumours to phimosis, ascites, haemorrhoids, anal abscesses and fistulae. Women Catholic women played large roles in health and healing in medieval and early modern Europe. A life as a nun was a prestigious role; wealthy families provided dowries for their daughters, and these funded the convents, while the nuns provided free nursing care for the poor. The Catholic elites provided hospital services because of their theology of salvation that good works were the route to heaven. The Protestant reformers rejected the notion that rich men could gain God's grace through good works—and thereby escape purgatory—by providing cash endowments to charitable institutions. They also rejected the Catholic idea that the poor patients earned grace and salvation through their suffering. Protestants generally closed all the convents and most of the hospitals, sending women home to become housewives, often against their will. On the other hand, local officials recognized the public value of hospitals, and some were continued in Protestant lands, but without monks or nuns and in the control of local governments. In London, the crown allowed two hospitals to continue their charitable work, under nonreligious control of city officials. The convents were all shut down but Harkness finds that women—some of them former nuns—were part of a new system that delivered essential medical services to people outside their family.
They were employed by parishes and hospitals, as well as by private families, and provided nursing care as well as some medical, pharmaceutical, and surgical services. Meanwhile, in Catholic lands such as France, rich families continued to fund convents and monasteries, and enrolled their daughters as nuns who provided free health services to the poor. Nursing was a religious role for the nurse, and there was little call for science.

Age of Enlightenment

During the Age of Enlightenment, the 18th century, science was held in high esteem and physicians upgraded their social status by becoming more scientific. The health field was crowded with self-trained barber-surgeons, apothecaries, midwives, drug peddlers, and charlatans. Across Europe medical schools relied primarily on lectures and readings. The final year student would have limited clinical experience by trailing the professor through the wards. Laboratory work was uncommon, and dissections were rarely done because of legal restrictions on cadavers. Most schools were small, and only Edinburgh, Scotland, with 11,000 alumni, produced large numbers of graduates.

Britain

In Britain, there were but three small hospitals after 1550. Pelling and Webster estimate that in London in the 1580 to 1600 period, out of a population of nearly 200,000 people, there were about 500 medical practitioners. Nurses and midwives are not included. There were about 50 physicians, 100 licensed surgeons, 100 apothecaries, and 250 additional unlicensed practitioners. In the last category about 25% were women. All across Britain—and indeed all of the world—the vast majority of the people in city, town or countryside depended for medical care on local amateurs with no professional training but with a reputation as wise healers who could diagnose problems and advise sick people what to do—and perhaps set broken bones, pull a tooth, give some traditional herbs or brews or perform a little magic to cure what ailed them.
The London Dispensary opened in 1696, the first clinic in the British Empire to dispense medicines to poor sick people. The innovation was slow to catch on, but new dispensaries were opened in the 1770s. In the colonies, small hospitals opened in Philadelphia in 1752, New York in 1771, and Boston (Massachusetts General Hospital) in 1811. Guy's Hospital, the first great British hospital with a modern foundation, opened in 1721 in London, with funding from businessman Thomas Guy. It had been preceded by St Bartholomew's Hospital and St Thomas's Hospital, both medieval foundations. A bequest of £200,000 by William Hunt in 1829 funded expansion for an additional hundred beds at Guy's. Samuel Sharp (1709–78), a surgeon at Guy's Hospital from 1733 to 1757, was internationally famous; his A Treatise on the Operations of Surgery (1st ed., 1739) was the first British study focused exclusively on operative technique. English physician Thomas Percival (1740–1804) wrote a comprehensive system of medical conduct, Medical Ethics; or, a Code of Institutes and Precepts, Adapted to the Professional Conduct of Physicians and Surgeons (1803), that set the standard for many textbooks.

Spain and Spanish Empire

In the Spanish Empire, the viceregal capital of Mexico City was a site of medical training for physicians and the creation of hospitals. Epidemic disease had decimated indigenous populations starting with the early sixteenth-century Spanish conquest of the Aztec empire, when a black auxiliary in the armed forces of conqueror Hernán Cortés, with an active case of smallpox, set off a virgin land epidemic among indigenous peoples, Spanish allies and enemies alike. Aztec emperor Cuitlahuac died of smallpox. Disease was a significant factor in the Spanish conquest elsewhere as well. Medical education instituted at the Royal and Pontifical University of Mexico chiefly served the needs of urban elites.
Male and female curanderos, or lay practitioners, attended to the ills of the popular classes. The Spanish crown began regulating the medical profession just a few years after the conquest, setting up the Royal Tribunal of the Protomedicato, a board for licensing medical personnel, in 1527. Licensing became more systematic after 1646, with physicians, druggists, surgeons, and bleeders requiring a license before they could publicly practice. Crown regulation of medical practice became more general in the Spanish empire. Elites and the popular classes alike called on divine intervention in personal and society-wide health crises, such as the epidemic of 1737. The intervention of the Virgin of Guadalupe was depicted in a scene of dead and dying Indians, with elites on their knees praying for her aid. In the late eighteenth century, the crown began implementing secularizing policies on the Iberian peninsula and its overseas empire to control disease more systematically and scientifically.

Spanish Quest for Medicinal Spices

Botanical medicines also became popular during the 16th, 17th, and 18th centuries. Spanish pharmaceutical books of this period contain medicinal recipes consisting of spices, herbs, and other botanical products. For example, nutmeg oil was documented for curing stomach ailments and cardamom oil was believed to relieve intestinal ailments. During the rise of the global trade market, spices, herbs, and many other goods indigenous to different territories began to appear in different locations across the globe. Herbs and spices were especially popular for their utility in cooking and medicines. As a result of this popularity and increased demand for spices, some areas in Asia, like China and Indonesia, became hubs for spice cultivation and trade. The Spanish Empire also wanted to benefit from the international spice trade, so they looked towards their American colonies.
The Spanish American colonies became an area where the Spanish searched to discover new spices and indigenous American medicinal recipes. The Florentine Codex, a 16th-century ethnographic research study in Mesoamerica by the Spanish Franciscan friar Bernardino de Sahagún, is a major contribution to the history of Nahua medicine. The Spanish did discover many spices and herbs new to them, some of which were reportedly similar to Asian spices. A Spanish physician by the name of Nicolás Monardes studied many of the American spices coming into Spain. He documented many of the new American spices and their medicinal properties in his survey Historia medicinal de las cosas que se traen de nuestras Indias Occidentales. For example, Monardes describes the "Long Pepper" (Pimienta luenga), found along the coasts of the countries that are now known as Panama and Colombia, as a pepper that was more flavorful, healthy, and spicy in comparison to the Eastern black pepper. The Spanish interest in American spices can first be seen in the commissioning of the Libellus de Medicinalibus Indorum Herbis, a Spanish-American codex describing indigenous American spices and herbs and the ways that these were used in natural Aztec medicines. The codex was commissioned in the year 1552 by Francisco de Mendoza, the son of Antonio de Mendoza, the first Viceroy of New Spain. Francisco de Mendoza was interested in studying the properties of these herbs and spices, so that he would be able to profit from the trade of these herbs and the medicines that could be produced by them. Francisco de Mendoza recruited the help of Monardes in studying the traditional medicines of the indigenous people living in what were then the Spanish colonies. Monardes researched these medicines and performed experiments to discover the possibilities of spice cultivation and medicine creation in the Spanish colonies.
The Spanish transplanted some herbs from Asia, but only a few foreign crops were successfully grown in the Spanish colonies. One notable crop brought from Asia and successfully grown in the Spanish colonies was ginger, which was considered Hispaniola's leading crop at the end of the 16th century. The Spanish Empire did profit from cultivating herbs and spices, and they also introduced pre-Columbian American medicinal knowledge to Europe. Other Europeans were inspired by the actions of Spain and decided to try to establish a botanical transplant system in colonies that they controlled; however, these subsequent attempts were not successful.

19th century: rise of modern medicine

The practice of medicine changed in the face of rapid advances in science, as well as new approaches by physicians. Hospital doctors began much more systematic analysis of patients' symptoms in diagnosis. Among the more powerful new techniques were anaesthesia, and the development of both antiseptic and aseptic operating theatres. Effective cures were developed for certain endemic infectious diseases. However, the decline in many of the most lethal diseases was due more to improvements in public health and nutrition than to advances in medicine. Medicine was revolutionized in the 19th century and beyond by advances in chemistry, laboratory techniques, and equipment. Old ideas of infectious disease epidemiology were gradually replaced by advances in bacteriology and virology.

Germ theory and bacteriology

In the 1830s in Italy, Agostino Bassi traced the silkworm disease muscardine to microorganisms. Meanwhile, in Germany, Theodor Schwann led research on alcoholic fermentation by yeast, proposing that living microorganisms were responsible. Leading chemists, such as Justus von Liebig, seeking solely physicochemical explanations, derided this claim and alleged that Schwann was regressing to vitalism.
In 1847 in Vienna, Ignaz Semmelweis (1818–1865) dramatically reduced the death rate of new mothers (due to childbed fever) by requiring physicians to clean their hands before attending childbirth, yet his principles were marginalized and attacked by professional peers. At that time most people still believed that infections were caused by foul odors called miasmas. French scientist Louis Pasteur confirmed Schwann's fermentation experiments in 1857 and afterwards supported the hypothesis that yeast were microorganisms. Moreover, he suggested that such a process might also explain contagious disease. In 1860, Pasteur's report on bacterial fermentation of butyric acid motivated fellow Frenchman Casimir Davaine to identify a similar species as the pathogen of the deadly disease anthrax. Others dismissed the bacterium as a mere byproduct of the disease. British surgeon Joseph Lister, however, took these findings seriously and subsequently introduced antisepsis to wound treatment in 1865. German physician Robert Koch, noting fellow German Ferdinand Cohn's report of a spore stage of a certain bacterial species, traced the life cycle of Davaine's bacterium, identified spores, inoculated laboratory animals with them, and reproduced anthrax, a breakthrough for experimental pathology and the germ theory of disease. Pasteur's group added ecological investigations confirming spores' role in the natural setting, while Koch published a landmark treatise in 1878 on the bacterial pathology of wounds. In 1881, Koch reported discovery of the "tubercle bacillus", cementing germ theory and Koch's acclaim. Upon the outbreak of a cholera epidemic in Alexandria, Egypt, two medical missions went to investigate and attend the sick: one was sent out by Pasteur, the other led by Koch. Koch's group returned in 1883, having successfully discovered the cholera pathogen.
In Germany, however, Koch's bacteriologists had to vie against Max von Pettenkofer, Germany's leading proponent of miasmatic theory. Pettenkofer conceded bacteria's causal involvement, but maintained that other, environmental factors were required to turn it pathogenic, and opposed water treatment as a misdirected effort amid more important ways to improve public health. The massive cholera epidemic in Hamburg in 1892 devastated Pettenkofer's position, and yielded German public health to "Koch's bacteriology". On losing the 1883 rivalry in Alexandria, Pasteur switched research direction, and introduced his third vaccine—rabies vaccine—the first vaccine for humans since Jenner's for smallpox. From across the globe, donations poured in, funding the founding of Pasteur Institute, the globe's first biomedical institute, which opened in 1888. Along with Koch's bacteriologists, Pasteur's group—which preferred the term microbiology—led medicine into the new era of "scientific medicine" upon bacteriology and germ theory. Derived from Jakob Henle's earlier criteria, Koch's steps to confirm a species' pathogenicity became famed as "Koch's postulates". Although his proposed tuberculosis treatment, tuberculin, seemingly failed, it soon was used to test for infection with the involved species. In 1905, Koch was awarded the Nobel Prize in Physiology or Medicine, and remains renowned as the founder of medical microbiology.

Women

Women as healers

Women have always served as healers and midwives since ancient times. However, the professionalization of medicine forced them increasingly to the sidelines. As hospitals multiplied they relied in Europe on orders of Roman Catholic nun-nurses, and German Protestant and Anglican deaconesses in the early 19th century. They were trained in traditional methods of physical care that involved little knowledge of medicine. The breakthrough to professionalization based on knowledge of advanced medicine was led by Florence Nightingale in England.
She resolved to provide more advanced training than she saw on the Continent. At Kaiserswerth, where the first German nursing schools were founded in 1836 by Theodor Fliedner, she said, "The nursing was nil and the hygiene horrible." Britain's male doctors preferred the old system, but Nightingale won out and her Nightingale Training School opened in 1860 and became a model. The Nightingale solution depended on the patronage of upper-class women, and they proved eager to serve. Royalty became involved. In 1902 the wife of the British king took control of the nursing unit of the British army, became its president, and renamed it after herself as the Queen Alexandra's Royal Army Nursing Corps; when she died the next queen became president. Today its Colonel in Chief is Sophie, Countess of Wessex, the daughter-in-law of Queen Elizabeth II. In the United States, upper-middle-class women who already supported hospitals promoted nursing. The new profession proved highly attractive to women of all backgrounds, and schools of nursing opened in the late 19th century. They soon became a function of large hospitals, where they provided a steady stream of low-paid idealistic workers. The International Red Cross began operations in numerous countries in the late 19th century, promoting nursing as an ideal profession for middle-class women. The Nightingale model was widely copied. Linda Richards (1841–1930) studied in London and became the first professionally trained American nurse. She established nursing training programs in the United States and Japan, and created the first system for keeping individual medical records for hospitalized patients. The Russian Orthodox Church sponsored seven orders of nursing sisters in the late 19th century. They ran hospitals, clinics, almshouses, pharmacies, and shelters as well as training schools for nurses.
In the Soviet era (1917–1991), with the aristocratic sponsors gone, nursing became a low-prestige occupation based in poorly maintained hospitals.

Women as physicians

It was very difficult for women to become doctors in any field before the 1970s. Elizabeth Blackwell (1821–1910) became the first woman to formally study and practice medicine in the United States. She was a leader in women's medical education. While Blackwell viewed medicine as a means for social and moral reform, her student Mary Putnam Jacobi (1842–1906) focused on curing disease. At a deeper level of disagreement, Blackwell felt that women would succeed in medicine because of their humane female values, but Jacobi believed that women should participate as the equals of men in all medical specialties using identical methods, values and insights. In the Soviet Union, although the majority of medical doctors were women, they were paid less than the mostly male factory workers.

Paris

Paris (France) and Vienna were the two leading medical centers on the Continent in the era 1750–1914. In the 1770s–1850s Paris became a world center of medical research and teaching. The "Paris School" emphasized that teaching and research should be based in large hospitals and promoted the professionalization of the medical profession and the emphasis on sanitation and public health. A major reformer was Jean-Antoine Chaptal (1756–1832), a physician who was Minister of Internal Affairs. He created the Paris Hospital, health councils, and other bodies. Louis Pasteur (1822–1895) was one of the most important founders of medical microbiology. He is remembered for his remarkable breakthroughs in the causes and preventions of diseases. His discoveries reduced mortality from puerperal fever, and he created the first vaccines for rabies and anthrax. His experiments supported the germ theory of disease.
He was best known to the general public for inventing a method to treat milk and wine in order to prevent them from causing sickness, a process that came to be called pasteurization. He is regarded as one of the three main founders of microbiology, together with Ferdinand Cohn and Robert Koch. He worked chiefly in Paris and in 1887 founded the Pasteur Institute there to perpetuate his commitment to basic research and its practical applications. As soon as his institute was created, Pasteur brought together scientists with various specialties. The first five departments were directed by Emile Duclaux (general microbiology research) and Charles Chamberland (microbe research applied to hygiene), as well as a biologist, Ilya Ilyich Mechnikov (morphological microbe research) and two physicians, Jacques-Joseph Grancher (rabies) and Emile Roux (technical microbe research). One year after the inauguration of the Institut Pasteur, Roux set up the first course of microbiology ever taught in the world, then entitled Cours de Microbie Technique (Course of microbe research techniques). It became the model for numerous research centers around the world named "Pasteur Institutes."

Vienna

The First Viennese School of Medicine, 1750–1800, was led by the Dutchman Gerard van Swieten (1700–1772), who aimed to put medicine on new scientific foundations—promoting unprejudiced clinical observation, botanical and chemical research, and introducing simple but powerful remedies. When the Vienna General Hospital opened in 1784, it at once became the world's largest hospital and physicians acquired a facility that gradually developed into the most important research centre. Progress ended with the Napoleonic wars and the government shutdown in 1819 of all liberal journals and schools; this caused a general return to traditionalism and eclecticism in medicine.
Vienna was the capital of a diverse empire and attracted not just Germans but Czechs, Hungarians, Jews, Poles and others to its world-class medical facilities. After 1820 the Second Viennese School of Medicine emerged with the contributions of physicians such as Carl Freiherr von Rokitansky, Josef Škoda, Ferdinand Ritter von Hebra, and Ignaz Philipp Semmelweis. Basic medical science expanded and specialization advanced. Furthermore, the first dermatology, eye, as well as ear, nose, and throat clinics in the world were founded in Vienna. The textbook of ophthalmologist Georg Joseph Beer (1763–1821), Lehre von den Augenkrankheiten, combined practical research and philosophical speculations, and became the standard reference work for decades.

Berlin

After 1871 Berlin, the capital of the new German Empire, became a leading center for medical research. Robert Koch (1843–1910) was a representative leader. He became famous for isolating Bacillus anthracis (1877), the tuberculosis bacillus (1882) and Vibrio cholerae (1883) and for his development of Koch's postulates. He was awarded the Nobel Prize in Physiology or Medicine in 1905 for his tuberculosis findings. Koch is one of the founders of microbiology, inspiring such major figures as Paul Ehrlich and Gerhard Domagk.

U.S. Civil War

In the American Civil War (1861–65), as was typical of the 19th century, more soldiers died of disease than in battle, and even larger numbers were temporarily incapacitated by wounds, disease and accidents. Conditions were poor in the Confederacy, where doctors and medical supplies were in short supply. The war had a dramatic long-term impact on medicine in the U.S., from surgical technique to hospitals to nursing and to research facilities.
Weapon development, particularly the appearance of the Springfield Model 1861, mass-produced and much more accurate than muskets, led to generals underestimating the risks of long-range rifle fire, risks exemplified in the death of John Sedgwick and the disastrous Pickett's Charge. The rifles could shatter bone, forcing amputation, and longer ranges meant casualties were sometimes not quickly found. Evacuation of the wounded from the Second Battle of Bull Run took a week. As in earlier wars, untreated casualties sometimes survived unexpectedly due to maggots debriding the wound, an observation which led to the surgical use of maggots, still a useful method in the absence of effective antibiotics. The hygiene of the training and field camps was poor, especially at the beginning of the war when men who had seldom been far from home were brought together for training with thousands of strangers. First came epidemics of the childhood diseases of chicken pox, mumps, whooping cough, and, especially, measles. Operations in the South meant a dangerous and new disease environment, bringing diarrhea, dysentery, typhoid fever, and malaria. There were no antibiotics, so the surgeons prescribed coffee, whiskey, and quinine. Harsh weather, bad water, inadequate shelter in winter quarters, poor policing of camps, and dirty camp hospitals took their toll. This was a common scenario in wars from time immemorial, and conditions faced by the Confederate army were even worse. The Union responded by building army hospitals in every state. What was different in the Union was the emergence of skilled, well-funded medical organizers who took proactive action, especially in the much enlarged United States Army Medical Department, and the United States Sanitary Commission, a new private agency. Numerous other new agencies also targeted the medical and morale needs of soldiers, including the United States Christian Commission as well as smaller private agencies. The U.S.
Army learned many lessons and in August 1886, it established the Hospital Corps.

Statistical methods

A major breakthrough in epidemiology came with the introduction of statistical maps and graphs. They allowed careful analysis of seasonality issues in disease incidence, and the maps allowed public health officials to identify critical loci for the dissemination of disease. John Snow in London developed the methods. In 1849, he observed that the symptoms of cholera, which had already claimed around 500 lives within a month, were vomiting and diarrhoea. He concluded that the source of contamination must be through ingestion, rather than inhalation as was previously thought. It was this insight that resulted in the removal of the pump on Broad Street, after which deaths from cholera plummeted. English nurse Florence Nightingale pioneered analysis of large amounts of statistical data, using graphs and tables, regarding the condition of thousands of patients in the Crimean War to evaluate the efficacy of hospital services. Her methods proved convincing and led to reforms in military and civilian hospitals, usually with the full support of the government. By the late 19th and early 20th century English statisticians led by Francis Galton, Karl Pearson and Ronald Fisher developed the mathematical tools, such as correlations and hypothesis tests, that made possible much more sophisticated analysis of statistical data. During the U.S. Civil War the Sanitary Commission collected enormous amounts of statistical data, and opened up the problems of storing information for fast access and mechanically searching for data patterns. The pioneer was John Shaw Billings (1838–1913). A senior surgeon in the war, Billings built the Library of the Surgeon General's Office (now the National Library of Medicine), the centerpiece of modern medical information systems.
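The correlation and significance-testing tools credited above to Galton, Pearson and Fisher can be illustrated with a small, self-contained sketch. The figures below are hypothetical, invented only to mimic the kind of pump-proximity data Snow tabulated; the function itself is the standard Pearson product-moment formula.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: distance of a household from a contaminated pump
# (hundreds of metres) vs. cholera deaths recorded at that distance.
distance = [1, 2, 3, 4, 5, 6, 7, 8]
deaths = [20, 18, 15, 11, 9, 6, 4, 2]

r = pearson_r(distance, deaths)

# Fisher-style t statistic for the null hypothesis of no correlation,
# with n - 2 degrees of freedom; a large |t| rejects the null.
n = len(distance)
t = r * math.sqrt((n - 2) / (1 - r * r))
print(round(r, 3), round(t, 2))
```

With these made-up numbers the correlation is strongly negative (deaths fall with distance from the pump) and the t statistic is far in the rejection region, which is the quantitative form of the inference Snow drew from his map.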
Billings figured out how to mechanically analyze medical and demographic data by turning facts into numbers and punching the numbers onto cardboard cards that could be sorted and counted by machine. The applications were developed by his assistant Herman Hollerith; Hollerith invented the punch card and counter-sorter system that dominated statistical data manipulation until the 1970s. Hollerith's company became International Business Machines (IBM) in 1911.

Worldwide dissemination

United States

Johns Hopkins Hospital, founded in 1889, originated several modern medical practices, including residency and rounds.

Japan

European ideas of modern medicine were spread widely through the world by medical missionaries, and the dissemination of textbooks. Japanese elites enthusiastically embraced Western medicine after the Meiji Restoration of the 1860s. However, they had been prepared by their knowledge of Dutch and German medicine, for they had some contact with Europe through the Dutch. Highly influential on Japanese obstetrics was the 1765 edition of Hendrik van Deventer's pioneering work Nieuw Ligt ("A New Light"), especially through Katakura Kakuryo's publication in 1799 of Sanka Hatsumo ("Enlightenment of Obstetrics"). A cadre of Japanese physicians began to interact with Dutch doctors, who introduced smallpox vaccinations. By 1820 Japanese ranpô medical practitioners not only translated Dutch medical texts, they integrated their readings with clinical diagnoses. These men became leaders of the modernization of medicine in their country. They broke from Japanese traditions of closed medical fraternities and adopted the European approach of an open community of collaboration based on expertise in the latest scientific methods. Kitasato Shibasaburō (1853–1931) studied bacteriology in Germany under Robert Koch. In 1891 he founded the Institute of Infectious Diseases in Tokyo, which introduced the study of bacteriology to Japan.
He and French researcher Alexandre Yersin went to Hong Kong in 1894, where Kitasato confirmed Yersin's discovery that the bacterium Yersinia pestis is the agent of the plague. In 1897 he isolated and described the organism that caused dysentery. He became the first dean of medicine at Keio University, and the first president of the Japan Medical Association. Japanese physicians immediately recognized the value of X-rays. They were able to purchase the equipment locally from the Shimadzu Company, which developed, manufactured, marketed, and distributed X-ray machines after 1900. Japan not only adopted German methods of public health in the home islands, but implemented them in its colonies, especially Korea and Taiwan, and after 1931 in Manchuria. A heavy investment in sanitation resulted in a dramatic increase of life expectancy.

Psychiatry

Until the nineteenth century, the care of the insane was largely a communal and family responsibility rather than a medical one. The vast majority of the mentally ill were treated in domestic contexts with only the most unmanageable or burdensome likely to be institutionally confined. This situation was transformed radically from the late eighteenth century as, amid changing cultural conceptions of madness, a new-found optimism in the curability of insanity within the asylum setting emerged. Increasingly, lunacy was perceived less as a physiological condition than as a mental and moral one to which the correct response was persuasion, aimed at inculcating internal restraint, rather than external coercion. This new therapeutic sensibility, referred to as moral treatment, was epitomised in French physician Philippe Pinel's quasi-mythological unchaining of the lunatics of the Bicêtre Hospital in Paris and realised in an institutional setting with the foundation in 1796 of the Quaker-run York Retreat in England.
From the early nineteenth century, as lay-led lunacy reform movements gained in influence, ever more state governments in the West extended their authority and responsibility over the mentally ill. Small-scale asylums, conceived as instruments to reshape both the mind and behaviour of the disturbed, proliferated across these regions. By the 1830s, moral treatment, together with the asylum itself, became increasingly medicalised and asylum doctors began to establish a distinct medical identity with the establishment in the 1840s of associations for their members in France, Germany, the United Kingdom and America, together with the founding of medico-psychological journals. Medical optimism in the capacity of the asylum to cure insanity soured by the close of the nineteenth century as the growth of the asylum population far outstripped that of the general population. Processes of long-term institutional segregation, allowing for the psychiatric conceptualisation of the natural course of mental illness, supported the perspective that the insane were a distinct population, subject to mental pathologies stemming from specific medical causes. As degeneration theory grew in influence from the mid-nineteenth century, heredity was seen as the central causal element in chronic mental illness, and, with national asylum systems overcrowded and insanity apparently undergoing an inexorable rise, the focus of psychiatric therapeutics shifted from a concern with treating the individual to maintaining the racial and biological health of national populations. Emil Kraepelin (1856–1926) introduced new medical categories of mental illness, which eventually came into psychiatric usage despite their basis in behavior rather than pathology or underlying cause. Shell shock among frontline soldiers exposed to heavy artillery bombardment was first diagnosed by British Army doctors in 1915. 
By 1916, similar symptoms were also noted in soldiers not exposed to explosive shocks, leading to questions as to whether the disorder was physical or psychiatric. In the 1920s surrealist opposition to psychiatry was expressed in a number of surrealist publications. In the 1930s several controversial medical practices were introduced, including inducing seizures (by electroshock, insulin or other drugs) or cutting parts of the brain apart (leucotomy or lobotomy). Both came into widespread use by psychiatry, but there were grave concerns and much opposition on grounds of basic morality, harmful effects, or misuse. In the 1950s new psychiatric drugs, notably the antipsychotic chlorpromazine, were designed in laboratories and slowly came into preferred use. Although often accepted as an advance in some ways, there was some opposition, due to serious adverse effects such as tardive dyskinesia. Patients often opposed psychiatry and refused or stopped taking the drugs when not subject to psychiatric control. There was also increasing opposition to the use of psychiatric hospitals, and attempts to move people back into the community on a collaborative user-led group approach ("therapeutic communities") not controlled by psychiatry. Campaigns against masturbation were waged in the Victorian era and elsewhere. Lobotomy was used until the 1970s to treat schizophrenia. This was denounced by the anti-psychiatric movement in the 1960s and later.

20th century and beyond

Twentieth-century warfare and medicine

The ABO blood group system was discovered in 1901, and the Rhesus blood group system in 1937, facilitating blood transfusion. During the 19th century, large-scale wars were attended with medics and mobile hospital units which developed advanced techniques for healing massive injuries and controlling infections rampant in battlefield conditions. During the Mexican Revolution (1910–1920), General Pancho Villa organized hospital trains for wounded soldiers.
Boxcars marked Servicio Sanitario ("sanitary service") were re-purposed as surgical operating theaters and areas for recuperation, and staffed by up to 40 Mexican and U.S. physicians. Severely wounded soldiers were shuttled back to base hospitals. The Canadian physician Norman Bethune developed a mobile blood-transfusion service for frontline operations in the Spanish Civil War (1936–1939), but, ironically, he himself died of blood poisoning. Thousands of scarred troops created the need for improved prosthetic limbs and expanded techniques in plastic surgery or reconstructive surgery. Those practices were combined to broaden cosmetic surgery and other forms of elective surgery. During the First World War, Alexis Carrel and Henry Dakin developed the Carrel-Dakin method of treating wounds with irrigation by Dakin's solution, a germicide which helped prevent gangrene. The war spurred the use of Roentgen's X-ray and the electrocardiograph for the monitoring of internal bodily functions. This was followed in the inter-war period by the development of the first anti-bacterial agents such as the sulpha antibiotics. Public health Public health measures became particularly important during the 1918 flu pandemic, which killed at least 50 million people around the world. It became an important case study in epidemiology. Bristow shows there was a gendered response of health caregivers to the pandemic in the United States. Male doctors were unable to cure the patients, and they felt like failures. Women nurses also saw their patients die, but they took pride in their success in fulfilling their professional role of caring for, ministering, comforting, and easing the last hours of their patients, and helping the families of the patients cope as well. From 1917 to 1932, the American Red Cross moved into Europe with a battery of long-term child health projects. It built and operated hospitals and clinics, and organized antituberculosis and antityphus campaigns. 
A high priority involved child health programs such as clinics, better baby shows, playgrounds, fresh air camps, and courses for women on infant hygiene. Hundreds of U.S. doctors, nurses, and welfare professionals administered these programs, which aimed to reform the health of European youth and to reshape European public health and welfare along American lines. Second World War The advances in medicine made a dramatic difference for Allied troops, while the Germans and especially the Japanese and Chinese suffered from a severe lack of newer medicines, techniques and facilities. Harrison finds that the chances of recovery for a badly wounded British infantryman were as much as 25 times better than in the First World War. The reason was that: "By 1944 most casualties were receiving treatment within hours of wounding, due to the increased mobility of field hospitals and the extensive use of aeroplanes as ambulances. The care of the sick and wounded had also been revolutionized by new medical technologies, such as active immunization against tetanus, sulphonamide drugs, and penicillin." Nazi and Japanese medical research Unethical human subject research, and killing of patients with disabilities, peaked during the Nazi era, with Nazi human experimentation and Aktion T4 during the Holocaust as the most significant examples. Many of the details of these and related events were the focus of the Doctors' Trial. Subsequently, principles of medical ethics, such as the Nuremberg Code, were introduced to prevent a recurrence of such atrocities. After 1937, the Japanese Army established programs of biological warfare in China. In Unit 731, Japanese doctors and research scientists conducted large numbers of vivisections and experiments on human beings, mostly Chinese victims. Malaria Starting in World War II, DDT was used as insecticide to combat insect vectors carrying malaria, which was endemic in most tropical regions of the world. 
The first goal was to protect soldiers, but it was widely adopted as a public health device. In Liberia, for example, the United States had large military operations during the war, and the U.S. Public Health Service began the use of DDT for indoor residual spraying (IRS) and as a larvicide, with the goal of controlling malaria in Monrovia, the Liberian capital. In the early 1950s, the project was expanded to nearby villages. In 1953, the World Health Organization (WHO) launched an antimalaria program in parts of Liberia as a pilot project to determine the feasibility of malaria eradication in tropical Africa. However, these projects encountered a spate of difficulties that foreshadowed the general retreat from malaria eradication efforts across tropical Africa by the mid-1960s. Post-World War II The World Health Organization was founded in 1948 as a United Nations agency to improve global health. In most of the world, life expectancy has improved since then, reaching about 67 years globally and well above 80 years in some countries. Eradication of infectious diseases is an international effort, and several new vaccines have been developed during the post-war years, against infections such as measles, mumps, several strains of influenza and human papilloma virus. The long-known vaccine against smallpox finally eradicated the disease in the 1970s, and rinderpest was wiped out in 2011. Eradication of polio is underway. Tissue culture is important for the development of vaccines. Despite the early success of antiviral vaccines and antibacterial drugs, antiviral drugs were not introduced until the 1970s. Through the WHO, the international community has developed a response protocol against epidemics, demonstrated during the SARS epidemic in 2003, the Influenza A virus subtype H5N1 outbreaks from 2004 onwards, and the Ebola virus epidemic in West Africa. 
As infectious diseases have become less lethal, and the most common causes of death in developed countries are now tumors and cardiovascular diseases, these conditions have received increased attention in medical research. Tobacco smoking as a cause of lung cancer was first researched in the 1920s, but was not widely supported by publications until the 1950s. Cancer treatment has been developed with radiotherapy, chemotherapy and surgical oncology. Oral rehydration therapy has been extensively used since the 1970s to treat cholera and other diarrhea-inducing infections. The sexual revolution included taboo-breaking research in human sexuality such as the 1948 and 1953 Kinsey reports, the invention of hormonal contraception, and the normalization of abortion and homosexuality in many countries. Family planning has promoted a demographic transition in most of the world. With threatening sexually transmitted infections, not least HIV, use of barrier contraception has become imperative. The struggle against HIV has improved antiretroviral treatments. X-ray imaging was the first kind of medical imaging, and later ultrasonic imaging, CT scanning, MR scanning and other imaging methods became available. Genetics has advanced with the discovery of the DNA molecule, genetic mapping and gene therapy. Stem cell research took off in the 2000s, with stem cell therapy as a promising method. Evidence-based medicine is a modern concept, not introduced into the literature until the 1990s. Prosthetics have improved. In 1958, Arne Larsson in Sweden became the first patient to depend on an artificial cardiac pacemaker. He died in 2001 at age 86, having outlived its inventor, the surgeon, and 26 pacemakers. Lightweight materials as well as neural prosthetics emerged at the end of the 20th century. Modern surgery Cardiac surgery was revolutionized in 1948 as open-heart surgery was introduced for the first time since 1925. In 1954 Joseph Murray, J. 
Hartwell Harrison and others accomplished the first kidney transplantation. Transplantations of other organs, such as heart, liver and pancreas, were also introduced during the later 20th century. The first partial face transplant was performed in 2005, and the first full one in 2010. By the end of the 20th century, microtechnology had been used to create tiny robotic devices to assist microsurgery using micro-video and fiber-optic cameras to view internal tissues during surgery with minimally invasive practices. Laparoscopic surgery was broadly introduced in the 1990s. Natural orifice surgery has followed. Remote surgery is another recent development, with the transatlantic Lindbergh operation in 2001 as a groundbreaking example. See also Health care in the United States History of dental treatments History of herbalism History of hospitals History of medicine in Canada History of medicine in the United States History of nursing History of pathology History of pharmacy History of surgery Timeline of nursing history Timeline of medicine and medical technology History of health care (disambiguation) Explanatory notes References Further reading Bowers, Barbara S. ed. The Medieval Hospital and Medical Practice (Ashgate, 2007); 258 pp; essays by scholars Brockliss, Laurence and Colin Jones. The Medical World of Early Modern France (1997). 984 pp; detailed survey, 1600–1790s excerpt and text search Burnham, John C. Health Care in America: A History (2015), comprehensive scholarly history Bynum, W.F. and Roy Porter, eds. Companion Encyclopedia of the History of Medicine (2 vol. 1997); 1840 pp; 36 essays by scholars excerpt and text search Bynum, W.F. et al. The Western Medical Tradition: 1800–2000 (2006) 610 pp; 4 essays excerpt and text search Conrad, Lawrence I. et al. The Western Medical Tradition: 800 BC to AD 1800 (1995); excerpt and text search Donahue, M. Patricia. Nursing, The Finest Art: An Illustrated History (3rd ed. 
2010) excerpt and text search Loudon, Irvine, ed. Western Medicine: An Illustrated History (1997) online McGrew, Roderick. Encyclopedia of Medical History (1985) Nutton, Vivian. Ancient Medicine (2004) 489 pp. online Porter, Roy, ed. The Cambridge Illustrated History of Medicine (2001) excerpt and text search Porter, Roy, ed. The Cambridge History of Medicine (2006); 416 pp; excerpt and text search same text without the illustrations Porter, Roy. Blood and Guts: A Short History of Medicine (2004) 224 pp; excerpt and text search Rousseau With Miranda Gill, David Haycock and Malte Herwig. Singer, Charles, and E. Ashworth Underwood. A Short History of Medicine (2nd ed. 1962) Siraisi, Nancy G. Medieval and Early Renaissance Medicine: An Introduction to Knowledge and Practice (1990) excerpt and text search Watts, Sheldon. Disease and Medicine in World History (2003), 166 pp. online Weatherall, Miles. In Search of a Cure: A History of Pharmaceutical Discovery (1990), emphasis on antibiotics. Physicians Bonner, Thomas Neville. Becoming a Physician: Medical Education in Britain, France, Germany, and the United States, 1750–1945 (Johns Hopkins U.P. 2000) excerpt and text search Bonner, Thomas Neville. To the Ends of the Earth: Women's Search for Education in Medicine (Harvard U.P., 1992) More, Ellen S. Restoring the Balance: Women Physicians and the Profession of Medicine, 1850–1995 (Harvard U.P. 1999), focus on U.S. online Britain Berridge, Virginia. "Health and Medicine" in F M.L. Thompson, ed., The Cambridge Social History of Britain, 1750–1950, vol. 3, Social Agencies and Institutions, (1990). pp. 171–242. Borsay A. (ed.) Medicine in Wales c. 1800–2000: Public Service or Private Commodity? (University of Wales Press, 2003). Cherry, Stephen. Medical Services and the Hospital in Britain, 1860–1939 (1996) excerpt and text search Dingwall, Helen M. A history of Scottish medicine: themes and influences (Edinburgh UP, 2003). Howe G. M. 
People, Environment, Death and Disease: A Medical Geography of Britain Through the Ages (U of Wales Press, 1997). Kirby, Peter. Child Workers and Industrial Health in Britain, 1780–1850 (2013). Miller, Ian. A Modern History of the Stomach: Gastric Illness, Medicine and British Society, 1800–1950 (Routledge, 2015). Nagy D. Popular Medicine in Seventeenth-Century England (Bowling Green State UP, 1988). Porter, Roy. Bodies politic: disease, death, and doctors in Britain, 1650–1900 (Cornell UP, 2001). online review Porter, Roy, and Dorothy Porter. In Sickness and in Health: The British Experience, 1650–1850 (1988). Porter, Roy. Mind forg'd manacles: madness and psychiatry in England from restoration to regency (1987). Riley, James C. Sick not dead: the health of British workingmen during the mortality decline (Johns Hopkins UP, 1997). Wall, Rosemary. Bacteria in Britain, 1880–1939 (Routledge, 2015). excerpt Withey, Alun. "Health, Medicine and the Family in Wales, c. 1600–1750." (2009). online Wohl, Anthony S. Endangered Lives: Public Health in Victorian Britain (1983). Historiography Brieger, Gert H. "History of Medicine," in Paul T. Durbin, ed. A Guide to the Culture of Science, Technology, and Medicine (1980) pp. 121–94 Burnham, John C. What Is Medical History? (2005) 163 pp. excerpt Green, Monica H. "Gendering the History of Women's Healthcare," Gender & History (2008) 20#3 pp. 487–518. online Huber, Valeska. "Pandemics and the politics of difference: rewriting the history of internationalism through nineteenth-century cholera." Journal of Global History 15.3 (2020): 394–407 online. Huisman, Frank, and John Harley Warner, eds. Locating Medical History: The Stories and Their Meanings (2006) excerpt and text search 530 pp. 21 various essays by scholars Johnson, Jennifer. "New Directions in the History of Medicine in European, Colonial and Transimperial Contexts." Contemporary European History 25.2 (2016): 387–99 Lewenson, Sandra B. and Eleanor Krohn Herrmann, eds. 
Capturing Nursing History: A Guide to Historical Methods in Research (2008) 236 pp. Primary sources Elmer, Peter, and Ole Peter Grell, eds. Health, Disease and Society in Europe, 1500–1800: A Sourcebook (, 2004) excerpt and text search Unschuld, Paul U. Huang Di Nei Jing Su Wen: Nature, Knowledge, Imagery in an Ancient Chinese Medical Text. (2003). online Wallis, Faith. ed. Medieval Medicine: A Reader (2010) excerpt and text search Warner, John Harley, and Janet A. Tighe, eds. Major Problems in the History of American Medicine and Public Health (2006), 560 pp; readings in primary and secondary sources excerpt and text search Illustrations The history of medicine and surgery as portrayed by various artists External links Directory of History of Medicine Collections, Index to the major collections in the United States and Canada, selected by the US National Institute of Health Medicine
https://en.wikipedia.org/wiki/Hamoaze
Hamoaze
The Hamoaze (; ) is an estuarine stretch of the tidal River Tamar, between its confluence with the River Lynher and Plymouth Sound, England. The name first appears as ryver of Hamose in 1588 and it originally most likely applied just to a creek of the estuary that led up to the manor of Ham, north of the present-day Devonport Dockyard. The name evidently later came to be used for the estuary's main channel. The ose element possibly derives from Old English meaning 'mud' (as in 'ooze') – the creek consisting of mud-banks at low tide. The Hamoaze flows past Devonport Dockyard, which is one of three major bases of the Royal Navy today. The presence of large numbers of small watercraft is a challenge and hazard to the warships using the naval base and dockyard. Navigation on the waterway is controlled by the Queen's Harbour Master for Plymouth. Settlements on the banks of the Hamoaze are Saltash, Wilcove, Torpoint and Cremyll in Cornwall, as well as Devonport and Plymouth in Devon. Two regular ferry services crossing the Hamoaze exist: the Torpoint Ferry (a chain ferry that takes vehicles) and the Cremyll Ferry (passengers and cyclists only). The Hamoaze has a street in Torpoint named after it. See also Tamar-Tavy Estuary References Geography of Plymouth, Devon Estuaries of England Transport in Plymouth Rivers of Cornwall River Tamar
https://en.wikipedia.org/wiki/Hanover
Hanover
Hanover (; ; ) is the capital and largest city of the German state of Lower Saxony. Its 534,049 (2020) inhabitants make it the 13th-largest city in Germany as well as the third-largest city in Northern Germany after Hamburg and Bremen. Hanover's urban area comprises the towns of Garbsen, Langenhagen and Laatzen and has a population of about 791,000 (2018). The Hanover Region has approximately 1.16 million inhabitants (2019). The city lies at the confluence of the River Leine and its tributary the Ihme, in the south of the North German Plain, and is the largest city in the Hannover–Braunschweig–Göttingen–Wolfsburg Metropolitan Region. It is the fifth-largest city in the Low German dialect area after Hamburg, Dortmund, Essen and Bremen. Before it became the capital of Lower Saxony in 1946, Hanover was the capital of the Principality of Calenberg (1636–1692), the Electorate of Hanover (1692–1814), the Kingdom of Hanover (1814–1866), the Province of Hanover of the Kingdom of Prussia (1868–1918), the Province of Hanover of the Free State of Prussia (1918–1946) and of the State of Hanover (1946). From 1714 to 1837 Hanover was by personal union the family seat of the Hanoverian Kings of the United Kingdom of Great Britain and Ireland, under their title of the dukes of Brunswick-Lüneburg (later described as the Elector of Hanover). The city is a major crossing point of railway lines and motorways (Autobahnen), connecting European main lines in both the east-west (Berlin–Ruhr area/Düsseldorf/Cologne) and north-south (Hamburg–Frankfurt/Stuttgart/Munich) directions. Hannover Airport lies north of the city, in Langenhagen, and is Germany's ninth-busiest airport. The city's most notable institutes of higher education are the Hannover Medical School (Medizinische Hochschule Hannover), one of Germany's leading medical schools, with its university hospital Klinikum der Medizinischen Hochschule Hannover, and the Leibniz University Hannover. 
The Hanover fairground, owing to numerous extensions, especially for the Expo 2000, is the largest in the world. Hanover hosts annual commercial trade fairs such as the Hanover Fair and up to 2018 the CeBIT. The IAA Commercial Vehicles show takes place every two years. It is the world's leading trade show for transport, logistics and mobility. Every year Hanover hosts the Schützenfest Hannover, the world's largest marksmen's festival, and the Oktoberfest Hannover. 'Hanover' is the traditional English spelling. The German spelling (with a double n) is becoming more popular in English; recent editions of encyclopedias prefer the German spelling, and the local government uses the German spelling on English websites. The English pronunciation, with stress on the first syllable, is applied to both the German and English spellings, which is different from German pronunciation, with stress on the second syllable and a long second vowel. The traditional English spelling is still used in historical contexts, especially when referring to the British House of Hanover. History Hanover was founded in medieval times on the east bank of the River Leine. Its original name Honovere may mean 'high (river)bank', though this is debated (cf. das Hohe Ufer). Hanover was a small village of ferrymen and fishermen that became a comparatively large town in the 13th century, receiving town privileges in 1241, owing to its position at a natural crossroads. As overland travel was relatively difficult its position on the upper navigable reaches of the river helped it to grow by increasing trade. It was connected to the Hanseatic League city of Bremen by the Leine and was situated near the southern edge of the wide North German Plain and north-west of the Harz mountains, so that east-west traffic such as mule trains passed through it. 
Hanover was thus a gateway to the Rhine, Ruhr and Saar river valleys, their industrial areas which grew up to the southwest and the plains regions to the east and north, for overland traffic skirting the Harz between the Low Countries and Saxony or Thuringia. In the 14th century the main churches of Hanover were built, as well as a city wall with three city gates. The beginning of industrialization in Germany led to trade in iron and silver from the northern Harz Mountains, which increased the city's importance. In 1636 George, Duke of Brunswick-Lüneburg, ruler of the Brunswick-Lüneburg principality of Calenberg, moved his residence to Hanover. The Dukes of Brunswick-Lüneburg were elevated by the Holy Roman Emperor to the rank of Prince-Elector in 1692 and this elevation was confirmed by the Imperial Diet in 1708. Thus the principality was upgraded to the Electorate of Brunswick-Lüneburg, colloquially known as the Electorate of Hanover after Calenberg's capital (see also: House of Hanover). Its Electors later became monarchs of Great Britain (and from 1801 of the United Kingdom of Great Britain and Ireland). The first of these was George I Louis, who acceded to the British throne in 1714. The last British monarch who reigned in Hanover was William IV. Semi-Salic law, which required succession by the male line if possible, forbade the accession of Queen Victoria in Hanover. As a male-line descendant of George I, Queen Victoria was herself a member of the House of Hanover. Her descendants, however, bore her husband's titular name of Saxe-Coburg-Gotha. Three kings of Great Britain, or the United Kingdom, were concurrently also Electoral Princes of Hanover. During the time of the personal union of the crowns of the United Kingdom and Hanover (1714–1837) the monarchs rarely visited the city. In fact during the reigns of the final three joint rulers (1760–1837) there was only one short visit, by George IV in 1821. 
From 1816 to 1837 Viceroy Adolphus represented the monarch in Hanover. During the Seven Years' War the Battle of Hastenbeck was fought near the city on 26 July 1757. The French army defeated the Hanoverian Army of Observation, leading to the city's occupation as part of the Invasion of Hanover. It was recaptured by Anglo-German forces led by Ferdinand of Brunswick the following year. 19th century After Napoleon imposed the Convention of Artlenburg (Convention of the Elbe) on July 5, 1803, about 35,000 French soldiers occupied Hanover. The Convention also required disbanding the army of Hanover. However, George III did not recognise the Convention of the Elbe. This resulted in a great number of soldiers from Hanover eventually emigrating to Great Britain, where the King's German Legion was formed. It was only troops from Hanover and Brunswick that consistently opposed France throughout the entire Napoleonic wars. The Legion later played an important role in the Peninsular War and the Battle of Waterloo in 1815. In 1814 the electorate became the Kingdom of Hanover. In 1837, the personal union of the United Kingdom and Hanover ended because William IV's heir in the United Kingdom was female (Queen Victoria). Hanover could be inherited only by male heirs. Thus, Hanover passed to William IV's brother, Ernest Augustus, and remained a kingdom until 1866, when it was annexed by Prussia during the Austro-Prussian war. Despite Hanover being expected to defeat Prussia at the Battle of Langensalza, Prussia employed Moltke the Elder's Kesselschlacht order of battle to instead destroy the Hanoverian army. The city of Hanover became the capital of the Prussian Province of Hanover. In 1842 the first horse railway was inaugurated, and from 1893 an electric tram was installed. In 1887 Hanover's Emile Berliner invented the record and the gramophone. Nazi Germany After 1937 the lord mayor and the state commissioners of Hanover were members of the NSDAP (Nazi party). 
A large Jewish population then existed in Hanover. In October 1938, 484 Hanoverian Jews of Polish origin were expelled to Poland, including the Grynszpan family. However, Poland refused to accept them, leaving them stranded at the border with thousands of other Polish-Jewish deportees, fed only intermittently by the Polish Red Cross and Jewish welfare organisations. The Grynszpans' son Herschel Grynszpan was in Paris at the time. When he learned of what was happening, he drove to the German embassy in Paris and shot the German diplomat Ernst vom Rath, who died shortly afterwards. The Nazis took this act as a pretext to stage a nationwide pogrom known as Kristallnacht (9 November 1938). On that day, the synagogue of Hanover, designed in 1870 by Edwin Oppler in neo-romantic style, was burnt by the Nazis. In September 1941, through the "Action Lauterbacher" plan, a ghettoisation of the remaining Hanoverian Jewish families began. Even before the Wannsee Conference, on 15 December 1941, the first Jews from Hanover were deported to Riga. A total of 2,400 people were deported, and very few survived. During the war seven concentration camps were constructed in Hanover, in which many Jews were confined. Of the approximately 4,800 Jews who had lived in Hannover in 1938, fewer than 100 were still in the city when troops of the United States Army arrived on 10 April 1945 to occupy Hanover at the end of the war. Today, a memorial at the Opera Square is a reminder of the persecution of the Jews in Hanover. After the war a large group of Orthodox Jewish survivors of the nearby Bergen-Belsen concentration camp settled in Hanover. World War II As an important railway and road junction and production centre, Hanover was a major target for strategic bombing during World War II, including the Oil Campaign. 
Targets included the AFA (Stöcken), the Deurag-Nerag refinery (Misburg), the Continental plants (Vahrenwald and Limmer), the United light metal works (VLW) in Ricklingen and Laatzen (today Hanover fairground), the Hanover/Limmer rubber reclamation plant, the Hanomag factory (Linden) and the tank factory M.N.H. Maschinenfabrik Niedersachsen (Badenstedt). Residential areas were also targeted, and more than 6,000 civilians were killed by the Allied bombing raids. More than 90% of the city centre was destroyed in a total of 88 bombing raids. After the war, the Aegidienkirche was not rebuilt and its ruins were left as a war memorial. The Allied ground advance into Germany reached Hanover in April 1945. The US 84th Infantry Division captured the city on 10 April 1945. Hanover was in the British zone of occupation of Germany and became part of the new state (Land) of Lower Saxony in 1946. Today Hanover is a Vice-President City of Mayors for Peace, an international mayoral organisation mobilising cities and citizens worldwide to abolish and eliminate nuclear weapons by the year 2020. Population development Geography Climate Hanover has an oceanic climate (Köppen: Cfb) independent of the isotherm. Although the city is not on a coastal location, the predominant air masses are still from the ocean, unlike other places further east or south-central Germany. Subdivisions The city of Hanover is divided into 13 boroughs (Stadtbezirke) and 53 quarters (Stadtteile). Boroughs Mitte Vahrenwald-List Bothfeld-Vahrenheide Buchholz-Kleefeld Misburg-Anderten Kirchrode-Bemerode-Wülferode Südstadt-Bult Döhren-Wülfel Ricklingen Linden-Limmer Ahlem-Badenstedt-Davenstedt Herrenhausen-Stöcken Nord Quarters A selection of the 53 quarters: Nordstadt Südstadt Oststadt Zoo (for the zoo itself, see Hanover Zoo) Herrenhausen Waldheim Main sights One of Hanover's sights is the Royal Gardens of Herrenhausen. Its Great Garden is an important European baroque garden. 
The palace itself was largely destroyed by Allied bombing but has been reconstructed and reopened in 2013. Among the points of interest is the Grotto, whose interior was designed by the French artist Niki de Saint Phalle. The Great Garden consists of several parts and contains Europe's highest garden fountain. The historic Garden Theatre hosted the musicals of the German rock musician Heinz Rudolf Kunze. Also at Herrenhausen, the Berggarten is a botanical garden with the most varied collection of orchids in Europe. Some points of interest are the Tropical House, the Cactus House, the Canary House and the Orchid House, and free-flying birds and butterflies. Near the entrance to the Berggarten is the historic Library Pavillon. The Mausoleum of the Guelphs is also located in the Berggarten. Like the Great Garden, the Berggarten also consists of several parts, for example the Paradies and the Prairie Garden. The Georgengarten is an English landscape garden. The Leibniz Temple and the Georgen Palace are two points of interest there. The landmark of Hanover is the New Town Hall (Neues Rathaus). Inside the building are four scale models of the city. A worldwide unique diagonal/arch elevator goes up the large dome at a 17 degree angle to an observation deck. The Hanover Zoo received the Park Scout Award for the fourth year running in 2009/10, placing it among the best zoos in Germany. The zoo consists of several theme areas: Sambesi, Meyers Farm, Gorilla-Mountain, Jungle-Palace, and Mullewapp. Some smaller areas are Australia, the wooded area for wolves, and the so-called swimming area with many seabirds. There is also a tropical house, a jungle house, and a show arena. The new Canadian-themed area, Yukon Bay, opened in 2010. In 2010 the Hanover Zoo had over 1.6 million visitors. There is also the Sea Life Centre Hanover, which is the first tropical aquarium in Germany. Another point of interest is the Old Town. In the centre are the large Marktkirche (Church St. 
Georgii et Jacobi, preaching venue of the bishop of the Lutheran Landeskirche Hannovers) and the Old Town Hall. Nearby are the Leibniz House, the Nolte House, and the Beguine Tower. The Kreuz-Church-Quarter around the Kreuz Church contains many little lanes. Nearby is the old royal sports hall, now called the Ballhof theatre. On the edge of the Old Town are the Market Hall, the Leine Palace, and the ruin of the Aegidien Church which is now a monument to the victims of war and violence. Through the Marstall Gate the bank of the river Leine can be reached; the Nanas of Niki de Saint Phalle are located here. They are part of the Mile of Sculptures, which starts from Trammplatz, leads along the river bank, crosses Königsworther Square, and ends at the entrance of the Georgengarten. Near the Old Town is the district of Calenberger Neustadt where the Catholic Basilica Minor of St. Clemens, the Reformed Church and the Lutheran Neustädter Hof- und Stadtkirche St. Johannis are located. Some other popular sights are the Waterloo Column, the Laves House, the Wangenheim Palace, the Lower Saxony State Archives, the Hanover Playhouse, the Kröpcke Clock, the Anzeiger Tower Block, the Administration Building of the NORD/LB, the Cupola Hall of the Congress Centre, the Lower Saxony Stock, the Ministry of Finance, the Garten Church, the Luther Church, the Gehry Tower (designed by the American architect Frank O. Gehry), the specially designed Bus Stops, the Opera House, the Central Station, the Maschsee lake and the city forest Eilenriede, which is one of the largest of its kind in Europe. With around 40 parks, forests and gardens, a couple of lakes, two rivers and one canal, Hanover offers a large variety of leisure activities. Since 2007 the historic Leibniz Letters, which can be viewed in the Gottfried Wilhelm Leibniz Library, are on UNESCO's Memory of the World Register. Outside the city centre is the EXPO-Park, the former site of EXPO 2000. 
Some points of interest are the Planet M., the former German Pavilion, some nations' vacant pavilions, the Expowal, the EXPO-Plaza and the EXPO-Gardens (Parc Agricole, EXPO-Park South and the Gardens of Change). The fairground can be reached by the Exponale, one of the largest pedestrian bridges in Europe. The Hanover fairground is the largest exhibition centre in the world. It provides of covered indoor space, of open-air space, 27 halls and pavilions. Many of the Exhibition Centre's halls are architectural highlights. Furthermore, it offers the Convention Center with its 35 function rooms, glassed-in areas between halls, grassy park-like recreation zones and its own heliport. Two important sights on the fairground are the Hermes Tower ( high) and the EXPO Roof, the largest wooden roof in the world. In the district of Anderten is the European Cheese Centre, the only Cheese Experience Centre in Europe. Another tourist sight in Anderten is the Hindenburg Lock, which was the biggest lock in Europe at the time of its construction in 1928. The Tiergarten (literally the "animals' garden") in the district of Kirchrode is a large forest originally used for deer and other game for the king's table. In the district of Groß-Buchholz stands the Telemax, the tallest building in Lower Saxony and the highest television tower in Northern Germany. Some other notable towers are the VW-Tower in the city centre and the old towers of the former medieval defensive belt: Döhrener Tower, Lister Tower and the Horse Tower. The 36 most important sights of the city centre are connected by a red line painted on the pavement. This so-called Red Thread marks out a walk that starts at the Tourist Information Office and ends on Ernst-August-Square in front of the central station. There is also a guided sightseeing-bus tour through the city. 
Society and culture Religious life Hanover is headquarters for several Protestant organizations, including the World Communion of Reformed Churches, the Evangelical Church in Germany, the Reformed Alliance, the United Evangelical Lutheran Church of Germany, and the Independent Evangelical-Lutheran Church. In 2015, 31.1% of the population were Protestant and 13.4% were Roman Catholic; the majority, 55.5%, were irreligious or belonged to other faiths. Museums and galleries The Historisches Museum Hannover (Historic museum) describes the history of Hanover, from the medieval settlement "Honovere" to the city of today. The museum focuses on the period from 1714 to 1834, when Hanover had a strong relationship with the British royal house. With more than 4,000 members, the Kestnergesellschaft is the largest art society in Germany. The museum hosts exhibitions from classical modernist art to contemporary art. Emphasis is placed on film, video, contemporary music and architecture, room installations and presentations of contemporary paintings, sculptures and video art. The Kestner-Museum is located in the House of 5,000 windows. The museum is named after August Kestner and exhibits 6,000 years of applied art in four areas: ancient cultures, ancient Egypt, applied art and a valuable collection of historic coins. The KUBUS is a forum for contemporary art. It features mostly exhibitions and projects of artists from Hanover. The Kunstverein Hannover (Art Society Hanover) shows contemporary art and was established in 1832 as one of the first art societies in Germany. It is located in the Künstlerhaus (House of artists). There are around seven international exhibitions each year. The Landesmuseum Hannover is the largest museum in Hanover. The art gallery shows European art from the 11th to the 20th century; the natural history department covers zoology, geology and botany and includes a vivarium with fish, insects, reptiles and amphibians. 
The primeval department shows the primeval history of Lower Saxony, and the folklore department shows cultures from all over the world. The Sprengel Museum shows the art of the 20th century. It is one of the most notable art museums in Germany. The focus is put on classical modernist art, with the collection of Kurt Schwitters, works of German expressionism and French cubism, the cabinet of abstracts, the graphics and the department of photography and media. Furthermore, the museum shows the works of the French artist Niki de Saint-Phalle. The Theatre Museum shows an exhibition of the history of the theatre in Hanover from the 17th century to the present day: opera, concert, drama and ballet. The museum also hosts several touring exhibitions during the year. The Wilhelm Busch Museum is the German Museum of Caricature and Critical Graphic Arts. Its collection of the works of Wilhelm Busch and its extensive collection of cartoons and critical graphics are unique in Germany. Furthermore, the museum hosts several exhibitions of national and international artists during the year. The Münzkabinett der TUI-AG is a cabinet of coins. The Polizeigeschichtliche Sammlung Niedersachsen is the largest police museum in Germany. Textiles from all over the world can be seen in the Museum for textile art. The EXPOseeum is the museum of the world exhibition "EXPO 2000 Hannover". Carpets and objects from the Orient can be seen in the Oriental Carpet Museum. The Museum for the visually impaired is a rarity in Germany; the only other one of its kind is in Berlin. The Museum of veterinary medicine is unique in Germany. The Museum for Energy History describes the 150-year history of the use of energy. The Heimat-Museum Ahlem shows the history of the district of Ahlem. The Mahn- und Gedenkstätte Ahlem describes the history of the Jewish people in Hanover, and the Stiftung Ahlers Pro Arte / Kestner Pro Arte shows modern art. 
Modern art is also the main topic of the Kunsthalle Faust, the Nord/LB Art Gallery and the Foro Artistico / Eisfabrik. Some leading art events in Hanover are the Long Night of the Museums and the Zinnober Kunstvolkslauf, which features all the galleries in Hanover. People who are interested in astronomy should visit the Observatory Geschwister Herschel on the Lindener Mountain or the small planetarium inside the Bismarck School. Theatre, cabaret and musical Around 40 theatres are located in Hanover. The Opera House, the Schauspielhaus (Play House), the Ballhof eins, the Ballhof zwei and the Cumberlandsche Galerie belong to the Lower Saxony State Theatre. The Theater am Aegi is Hanover's principal theatre for musicals, shows and guest performances. The Neues Theater (New Theatre) is the boulevard theatre of Hanover. The Theater für Niedersachsen is another large theatre in Hanover, which also has its own musical company. Some of the most important musical productions are the rock musicals of the German rock musician Heinz Rudolf Kunze, which take place at the Garden-Theatre in the Great Garden. Some important theatre events are the Tanztheater International, the Long Night of the Theatres, the Festival Theaterformen and the International Competition for Choreographers. Hanover's leading cabaret stage is the GOP Variety theatre, which is located in the Georgs Palace. Some other cabaret stages are the Variety Marlene, the Uhu-Theatre, the theatre Die Hinterbühne, the Rampenlicht Variety and the revue stage TAK. The most important cabaret event is the Kleines Fest im Großen Garten (Little Festival in the Great Garden), the most successful cabaret festival in Germany, which features artists from around the world. Some other important events are the Calenberger Cabaret Weeks, the Hanover Cabaret Festival and the Wintervariety. 
Music Classical music Hanover has two symphony orchestras: the Lower Saxon State Orchestra Hanover and the NDR Radiophilharmonie (North German Radio Philharmonic Orchestra). Two notable choirs have their homes in Hanover: the Mädchenchor Hannover (girls' choir) and the Knabenchor Hannover (boys' choir). There are two major international competitions for classical music in Hanover: Hanover International Violin Competition (since 1991) Classica Nova International Music Competition (1997; a non-profit association, Classica Nova, exists in Hanover with the aim of continuing the competition). Popular music The rock bands Scorpions and Fury in the Slaughterhouse are originally from Hanover. Acclaimed DJ Mousse T also has his main recording studio in the area. Rick J. Jordan, a member of the band Scooter, was born here in 1968. Lena, the winner of the Eurovision Song Contest 2010, is also from Hanover. Sport Hannover 96 (nicknamed Die Roten or 'The Reds') is the top local football team and currently plays in the 2. Bundesliga. Home games are played at the HDI-Arena, which hosted matches in the 1974 and 2006 World Cups and at Euro 1988. Their reserve team Hannover 96 II plays in the fourth league. Their home games were played in the traditional Eilenriedestadium until they moved to the HDI Arena due to DFL directives. Arminia Hannover is another traditional soccer team in Hanover that played in the second division (then 2. Liga Nord) for years and now plays in the Niedersachsen-West Liga (Lower Saxony League West). Home matches are played in the Rudolf-Kalweit-Stadium. The Hannover Indians are the local ice hockey team. They play in the third tier. Their home games are played at the traditional Eisstadion am Pferdeturm. The Hannover Scorpions played in Germany's top league until 2013, when they sold their license and moved to Langenhagen. Hanover was one of the rugby union capitals in Germany. The first German rugby team was founded in Hanover in 1878. 
Hanover-based teams dominated the German rugby scene for a long time. DRC Hannover plays in the first division, and SV Odin von 1905 as well as SG 78/08 Hannover play in the second division. The first German fencing club was founded in Hanover in 1862. Today there are three additional fencing clubs in Hanover. The Hannover Korbjäger are the city's top basketball team. They play their home games at the IGS Linden. Hanover is a centre for water sports. Thanks to the Maschsee lake, the rivers Ihme and Leine and the Mittellandkanal, Hanover hosts sailing schools, yacht schools, waterski clubs, rowing clubs, canoe clubs and paddle clubs. The water polo team WASPO W98 plays in the first division. The Hannover Regents play in the third Bundesliga (baseball) division. The Hannover Grizzlies, Armina Spartans and Hannover Stampeders are the local American football teams. The Hannover Marathon is the biggest running event in Hanover, with more than 11,000 participants and usually around 200,000 spectators. Some other important running events are the Gilde Stadtstaffel (relay), the Sport-Check Nachtlauf (night run), the Herrenhäuser Team-Challenge, the Hannoversche Firmenlauf (company run) and the Silvesterlauf (New Year's Eve run). Hanover also hosts an important international cycle race: the Nacht von Hannover (Night of Hanover). The race takes place around the Market Hall. The lake Maschsee hosts the International Dragon Boat Races and the Canoe Polo Tournament. Many regattas take place during the year. "Head of the river Leine" on the river Leine is one of the biggest rowing regattas in Hanover. One of Germany's most successful dragon boat teams, the All Sports Team Hannover, which has won more than 100 medals in national and international competitions since its foundation in 2000, practises on the Maschsee in the heart of Hanover. The All Sports Team received the award "Team of the Year 2013" in Lower Saxony. 
Some other important sport events are the Lower Saxony Beach Volleyball Tournament, the international horse show "German Classics" and the international ice hockey tournament Nations Cup. Regular events Hanover is one of the leading exhibition cities in the world. It hosts more than 60 international and national exhibitions every year. The most popular ones are the CeBIT, the Hanover Fair, the Domotex, the Ligna, the IAA Nutzfahrzeuge and the Agritechnica. Hanover also hosts a huge number of congresses and symposiums, such as the International Symposium on Society and Resource Management. Hanover is also host to the Schützenfest Hannover, the largest marksmen's fun fair in the world, which takes place once a year from late June to early July. Founded in 1529, it consists of more than 260 rides and inns, five large beer tents and a large entertainment programme. The highlight of this fun fair is the Parade of the Marksmen, with more than 12,000 participants from all over the world, including around 5,000 marksmen, 128 bands, and more than 70 wagons, carriages, and other festival vehicles. This makes it the longest procession in Europe. Around 2 million people visit this fun fair every year. The landmark of this fun fair is the biggest transportable Ferris wheel in the world, at about high. Hanover also hosts one of the two largest spring festivals in Europe, with around 180 rides and inns, two large beer tents, and around 1.5 million visitors each year. The Oktoberfest Hannover is the second largest Oktoberfest in the world, with around 160 rides and inns, two large beer tents and around 1 million visitors each year. The Maschsee Festival takes place around the Maschsee Lake. Each year around 2 million visitors come to enjoy live music, comedy, cabaret, and much more. It is the largest Volksfest of its kind in Northern Germany. 
Every year the Great Garden hosts the International Fireworks Competition and the International Festival Weeks Herrenhausen, with music and cabaret performances. The Carnival Procession, which takes place every year, is around long and consists of 3,000 participants, around 30 festival vehicles and around 20 bands. Other festivals include the Festival Feuer und Flamme (Fire and Flames), the Gartenfestival (Garden Festival), the Herbstfestival (Autumn Festival), the Harley Days, the Steintor Festival (Steintor is a party area in the city centre) and the Lister-Meile-Festival (Lister Meile is a large pedestrian area). Hanover also hosts food-oriented festivals including the Wine Festival and the Gourmet Festival. It also hosts some special markets. The Old Town Flea Market is said to be the oldest flea market in Germany, and the Market for Art and Trade has a high reputation. Some other major markets include the Christmas Markets of the City of Hanover in the Old Town and city centre, and the Lister Meile. Transport Rail The city's central station, Hannover Hauptbahnhof, is a hub of the German high-speed ICE network. It is the starting point of the Hanover–Würzburg high-speed rail line and also the central hub for the Hanover S-Bahn. It offers many international and national connections. Air Hanover and its surrounding area are served by Hanover/Langenhagen International Airport (IATA code: HAJ; ICAO code: EDDV). Road Hanover is also an important hub of Germany's Autobahn network; the junction of two major autobahns, the A2 and A7, is at Kreuz Hannover-Ost, at the northeastern edge of the city. Local autobahns are the A 352 (a short cut between A7 [north] and A2 [west], also known as the airport autobahn because it passes Hanover Airport) and the A 37. The Schnellweg (en: expressway) system, a number of Bundesstraße roads, forms a structure loosely resembling a large ring road together with the A2 and A7. 
The roads are the B 3, B 6 and B 65, called Westschnellweg (B6 on the northern part, B3 on the southern part), Messeschnellweg (B3, becomes A37 near Burgdorf, crosses A2, becomes B3 again, changes to B6 at Seelhorster Kreuz, then passes the Hanover fairground as B6 and becomes A37 again before merging into A7) and Südschnellweg (starts out as B65, becomes B3/B6/B65 upon crossing Westschnellweg, then becomes B65 again at Seelhorster Kreuz). Bus and light rail Hanover has an extensive Stadtbahn and bus system, operated by üstra. The city uses designer buses and tramways, the TW 6000 and TW 2000 trams being examples. Bicycle Bicycle paths are very common in the city centre. During off-peak hours, bicycles may be taken on trams and buses. Economy Various industrial businesses are located in Hanover. The Volkswagen Commercial Vehicles Transporter (VWN) factory at Hannover-Stöcken is the biggest employer in the region and operates a large plant at the northern edge of town adjoining the Mittellandkanal and Motorway A2. Volkswagen shares a coal-burning power plant with a factory of the German tire and automobile parts manufacturer Continental AG. Continental AG, founded in Hanover in 1871, is one of the city's major companies. Since 2008 a takeover has been in progress: the Schaeffler Group from Herzogenaurach (Bavaria) holds the majority of Continental's stock but was required, due to the financial crisis, to deposit the options as securities at banks. The audio equipment company Sennheiser and the travel group TUI AG are both based in Hanover. Hanover is home to many insurance companies, including Talanx, VHV Group, and Concordia Insurance. The major global reinsurance company Hannover Re also has its headquarters east of the city centre. List of largest employers in Hanover Key figures In 2012, the city generated a GDP of €29.5 billion, which is equivalent to €74,822 per employee. 
The gross value of production in 2012 was €26.4 billion, which is equivalent to €66,822 per employee. Around 300,000 employees were counted in 2014. Of these, 189,000 had their primary residence in Hanover, while 164,892 commute into the city every day. In 2014 the city was home to 34,198 businesses, of which 9,342 were registered in the German Trade Register and 24,856 counted as small businesses. Hence, more than half of the metropolitan area's businesses in the German Trade Register are located in Hanover (17,485 total). Business development Hannoverimpuls GmbH is a joint business development company of the city and the region of Hanover. The company was founded in 2003 and supports the start-up, growth and relocation of businesses in the Hanover Region. The focus is on thirteen sectors that stand for sustainable economic growth, including Automotive, Energy Solutions, Information and Communications Technology, Life Sciences, Optical Technologies, Creative Industries and Production Engineering. A range of programmes supports companies from the key industries in their expansion plans in Hanover or abroad. Three regional centres specifically promote international economic relations with Russia, India and Turkey. Education Leibniz University Hannover is the largest institution of higher education in Hanover, attracting students from around the world. The Hannover Medical Research School, opened in 2003, draws students with a biology background from around the world. There are several universities in Hanover: Leibniz University Hannover, host institution to the Max Planck Institute for Gravitational Physics Hochschule für Musik, Theater und Medien Hannover Hannover Medical School School of Veterinary Medicine Hanover (Tierärztliche Hochschule Hannover) GISMA Business School, part of the for-profit education company Global University Systems. 
There is one University of Applied Science and Arts in Hanover: Hochschule Hannover (the former Fachhochschule). The Schulbiologiezentrum Hannover maintains practical biology schools in four locations (Botanischer Schulgarten Burg, Freiluftschule Burg, Zooschule Hannover, and Botanischer Schulgarten Linden). The University of Veterinary Medicine Hanover also maintains its own botanical garden specializing in medicinal and poisonous plants, the Heil- und Giftpflanzengarten der Tierärztlichen Hochschule Hannover. Notable people Hannah Arendt (1906–1975), German-American political theorist Erdoğan Atalay (born 1966), actor Rudolf Augstein (1923–2002), journalist, founder of the weekly journal Der Spiegel Hermann Bahlsen (1859–1919), businessman, inventor of the Leibniz-Keks Marc Bator (born 1972), journalist Rudolf von Bennigsen (1824–1902), liberal politician Klaus Bernbacher (born 1931), conductor, music event manager, broadcasting manager and academic teacher Gero von Boehm (born 1954), director, journalist and television presenter Emil Berliner (1851–1929), inventor of the phonograph Walter Bruch (1908–1990), inventor of the PAL color television system Wilhelm Busch (1832–1908), caricaturist, painter and poet Champion Jack Dupree (1910–1992), American blues musician Niki de Saint Phalle (1930–2002), sculptor, painter and film maker George I, King of Great Britain and Ireland, prince elector of Hanover George II, King of Great Britain and Ireland, prince elector of Hanover George III, King of Great Britain and Ireland, prince elector of Hanover Laurent Chappuzeau, eldest son of Samuel Chappuzeau, horologer to the Elector of Hanover 1689–1701 Johannes Dietwald (born 1985), footballer Gustav Fröhlich (1902–1987), actor and film director Gerhard Glogowski (born 1943), politician (SPD) Georg Friedrich Grotefend (1775–1853), epigraphist and philologist Conrad Wilhelm Hase (1818–1902), architect, founder of the Hanover school of architecture Fritz Haarmann (1870–1925), 
prolific serial killer and rapist Hilal El-Helwe (born 1994), German-Lebanese football player Caroline Herschel and William Herschel (1738–1822), astronomers Wyn Hoop (born 1936), singer Alfred Hugenberg (1865–1951), businessman and politician (DNVP) Manfred Kohrs (born 1957), tattooist, conceptual artist and Master of Economics Georg Ludwig Friedrich Laves (1788–1864), architect Gottfried Wilhelm Leibniz (1646–1716), philosopher and mathematician, developed differential and integral calculus Jan Martín (born 1984), German-Israeli-Spanish basketball player Georg Meissner (1829–1905), anatomist and physiologist Per Mertesacker (born 1984), footballer Otto Fritz Meyerhof (1884–1951), recipient of the Nobel Prize in Medicine, 1922 Lena Meyer-Landrut (born 1991), winner of the Eurovision Song Contest 2010 Reiner E. Moritz (born 1938), film director and producer Oliver Pocher (born 1978), comedian and television presenter Daniel Reiss (born 1982), professional ice hockey player Waldemar R. Röhrbein (1935–2014), historian, director of the Historisches Museum Hannover Dirk Rossmann (born 1946), businessman Dieter Roth (1930–1998), artist, print-maker, author and poet Gerhard Schröder (born 1944), politician (SPD), former Chancellor of Germany Helga Schuchardt (born 1939), politician and engineer Kurt Schumacher (1895–1952), politician, re-organiser of the SPD after World War II Kurt Schwitters (1887–1948), artist Alexander Moritz Simon (1837–1905), Jewish philanthropist, banker and American vice consul Uli Stein (1954–2020), cartoonist Phylicia Whitney (born 1950), journalist and public speaker Christian Wulff (born 1959), politician (CDU), former President of Germany Shlomo Zev Zweigenhaft (1915–2005), Chief Rabbi of Hannover and Lower Saxony Twin towns – sister cities Hanover is twinned with: Blantyre, Malawi (1968) Bristol, England, United Kingdom (1947) Hiroshima, Japan (1983) Leipzig, Germany (1987) Perpignan, France (1960) Poznań, 
Poland (1979) Rouen, France (1966) See also CeBIT (CeBIT Computer Messe) Expo 2000 Hanover Fair (Hannover Messe) Metropolitan region Hannover-Braunschweig-Göttingen-Wolfsburg Schützenfest Hannover References Bibliography External links Official website Official website for tourism, holiday and leisure in Lower Saxony and Hanover Cities in Lower Saxony German state capitals Hanover Region Province of Hanover Members of the Hanseatic League Holocaust locations in Germany
Handheld game console
A handheld game console, or simply handheld console, is a small, portable self-contained video game console with a built-in screen, game controls and speakers. Handheld game consoles are smaller than home video game consoles and contain the console, screen, speakers, and controls in one unit, allowing people to carry them and play them at any time or place. In 1976, Mattel introduced the first handheld electronic game with the release of Auto Race. Later, several companies—including Coleco and Milton Bradley—made their own single-game, lightweight table-top or handheld electronic game devices. The first commercially successful handheld console was Merlin, from 1978, which sold more than 5 million units. The first handheld game console with interchangeable cartridges was the Milton Bradley Microvision, released in 1979. Nintendo is credited with popularizing the handheld console concept with the release of the Game Boy in 1989 and continues to dominate the handheld console market. The first internet-enabled handheld console, and the first with a touchscreen, was the Game.com, released by Tiger Electronics in 1997. The Nintendo DS, released in 2004, introduced touchscreen controls and wireless online gaming to a wider audience, becoming the best-selling handheld console with over units sold worldwide. History Timeline This table describes handheld game consoles over video game generations with over 1 million sales. Origins The origins of handheld game consoles are found in the handheld and tabletop electronic game devices of the 1970s and early 1980s. These electronic devices are capable of playing only a single game, they fit in the palm of the hand or on a tabletop, and they may make use of a variety of video displays such as LED, VFD, or LCD. In 1978, handheld electronic games were described by Popular Electronics magazine as "nonvideo electronic games" and "non-TV games", as distinct from devices that required use of a television screen. 
Handheld electronic games, in turn, find their origins in the synthesis of previous handheld and tabletop electro-mechanical devices such as Waco's Electronic Tic-Tac-Toe (1972) and Cragstan's Periscope-Firing Range (1951), and the emerging optoelectronic-display-driven calculator market of the early 1970s. This synthesis happened in 1976, when "Mattel began work on a line of calculator-sized sports games that became the world's first handheld electronic games. The project began when Michael Katz, Mattel's new product category marketing director, told the engineers in the electronics group to design a game the size of a calculator, using LED (light-emitting diode) technology." our big success was something that I conceptualized—the first handheld game. I asked the design group to see if they could come up with a game that was electronic that was the same size as a calculator. —Michael Katz, former marketing director, Mattel Toys. The result was the 1976 release of Auto Race, followed by Football in 1977; the two games were so successful that, according to Katz, "these simple electronic handheld games turned into a '$400 million category.'" Mattel would later win the honor of being recognized by the industry for innovation in handheld game device displays. Soon, other manufacturers including Coleco, Parker Brothers, Milton Bradley, Entex, and Bandai began following up with their own tabletop and handheld electronic games. In 1979 the LCD-based Microvision, designed by Smith Engineering and distributed by Milton Bradley, became the first handheld game console and the first to use interchangeable game cartridges. The Microvision game Cosmic Hunter (1981) also introduced the concept of a directional pad on handheld gaming devices, and is operated by using the thumb to manipulate the on-screen character in any of four directions. In 1979, Gunpei Yokoi, traveling on a bullet train, saw a bored businessman playing with an LCD calculator by pressing the buttons. 
Yokoi then thought of an idea for a watch that doubled as a miniature game machine for killing time. Starting in 1980, Nintendo began to release a series of electronic games designed by Yokoi called the Game & Watch games. Taking advantage of the technology used in the credit-card-sized calculators that had appeared on the market, Yokoi designed the series of LCD-based games to include a digital time display in the corner of the screen. For later, more complicated Game & Watch games, Yokoi invented a cross-shaped directional pad or "D-pad" for control of on-screen characters. Yokoi also included his directional pad on the NES controllers, and the cross-shaped thumb controller soon became standard on game console controllers and has been ubiquitous across the video game industry ever since. When Yokoi began designing Nintendo's first handheld game console, he came up with a device that married the elements of his Game & Watch devices and the Famicom console, including both items' D-pad controller. The result was the Nintendo Game Boy. In 1982, the Bandai LCD Solarpower was the first solar-powered gaming device. Some of its games, such as the horror-themed game Terror House, feature two LCD panels, one stacked on the other, for an early 3D effect. In 1983, Takara Tomy's Tomytronic 3D simulated 3D by having two LCD panels that were lit by external light through a window on top of the device, making it the first dedicated home video 3D hardware. Beginnings The late 1980s and early 1990s saw the beginnings of the modern-day handheld game console industry, after the demise of the Microvision. As backlit LCD game consoles with color graphics consume a lot of power, they were not battery-friendly like the non-backlit original Game Boy, whose monochrome graphics allowed longer battery life. 
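The battery-life trade-off described here comes down to simple arithmetic: runtime is roughly cell capacity divided by average current draw. A minimal sketch of that calculation (the capacity and current figures below are illustrative assumptions, not measured values for any particular console):

```python
def runtime_hours(capacity_mah: float, avg_draw_ma: float) -> float:
    """Rough battery-life estimate: capacity divided by average current draw.

    Cells in series add voltage, not capacity, so the capacity of a single
    cell is what matters regardless of how many cells a console takes.
    """
    return capacity_mah / avg_draw_ma

# Assumed figures for illustration only: a non-backlit monochrome handheld
# drawing ~120 mA versus a backlit colour handheld drawing ~700 mA, both
# from ~1100 mAh alkaline cells.
mono_hours = runtime_hours(1100, 120)    # roughly 9 hours
colour_hours = runtime_hours(1100, 700)  # under 2 hours
```

The order-of-magnitude gap this produces, not the exact numbers, is what mattered commercially: a backlit colour screen multiplied the current draw several times over while cell capacity stayed fixed.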
By this point, rechargeable battery technology had not yet matured, and so the more advanced game consoles of the time, such as the Sega Game Gear and Atari Lynx, did not have nearly as much success as the Game Boy. Even though third-party rechargeable batteries were available for the battery-hungry alternatives to the Game Boy, these batteries employed a nickel-cadmium process and had to be completely discharged before being recharged to ensure maximum efficiency; lead-acid batteries could be used with automobile circuit limiters (cigarette-lighter plug devices), but had mediocre portability. The later NiMH batteries, which do not share this requirement for maximum efficiency, were not released until the late 1990s, years after the Game Gear, Atari Lynx, and original Game Boy had been discontinued. During the time when technologically superior handhelds had strict technical limitations, batteries had a very low mAh rating, since batteries with high energy density were not yet available. Modern game systems such as the Nintendo DS and PlayStation Portable have rechargeable lithium-ion batteries with proprietary shapes. Other seventh-generation consoles such as the GP2X use standard alkaline batteries. Because the mAh rating of alkaline batteries has increased since the 1990s, the power needed for handhelds like the GP2X may be supplied by relatively few batteries. Game Boy Nintendo released the Game Boy on April 21, 1989 (September 1990 for the UK). The design team headed by Gunpei Yokoi had also been responsible for the Game & Watch system, as well as the Nintendo Entertainment System games Metroid and Kid Icarus. The Game Boy came under scrutiny from Nintendo president Hiroshi Yamauchi, who said that the monochrome screen was too small and the processing power inadequate. The design team felt that low initial cost and battery economy were more important concerns, and when compared to the Microvision, the Game Boy was a huge leap forward. 
Yokoi recognized that the Game Boy needed a killer app—at least one game that would define the console and persuade customers to buy it. In June 1988, Minoru Arakawa, then-CEO of Nintendo of America, saw a demonstration of the game Tetris at a trade show. Nintendo purchased the rights for the game and packaged it with the Game Boy system as a launch title. It was almost an immediate hit. By the end of the year more than a million units were sold in the US. As of March 31, 2005, the Game Boy and Game Boy Color combined had sold over 118 million units worldwide. Atari Lynx In 1987, Epyx created the Handy Game, a device that would become the Atari Lynx in 1989. It is the first color handheld console ever made, as well as the first with a backlit screen. It also features networking support with up to 17 other players, and advanced hardware that allows the zooming and scaling of sprites. The Lynx can also be turned upside down to accommodate left-handed players. However, all these features came at a very high price point, which drove consumers to seek cheaper alternatives. The Lynx is also very unwieldy, consumes batteries very quickly, and lacked the third-party support enjoyed by its competitors. Due to its high price, short battery life, production shortages, a dearth of compelling games, and Nintendo's aggressive marketing campaign, and despite a redesign in 1991, the Lynx became a commercial failure. Despite this, companies like Telegames helped to keep the system alive long past its commercial relevance, and when new owner Hasbro released the rights to develop for the public domain, independent developers like Songbird managed to release new commercial games for the system every year until 2004's Winter Games. TurboExpress The TurboExpress is a portable version of the TurboGrafx, released in 1990 for $249.99. Its Japanese equivalent is the PC Engine GT. 
It is the most advanced handheld of its time and can play all the TurboGrafx-16's games (which are on small, credit-card-sized media called HuCards). It has a 66 mm (2.6 in.) screen, the same as the original Game Boy but at a much higher resolution, and can display 64 sprites at once, 16 per scanline, in 512 colors, although the hardware can only display 481 colors simultaneously. It has 8 kilobytes of RAM. The TurboExpress runs the HuC6280 CPU at 1.79 or 7.16 MHz. The optional "TurboVision" TV tuner includes RCA audio/video input, allowing users to use the TurboExpress as a video monitor. The "TurboLink" allowed two-player play. Falcon, a flight simulator, included a "head-to-head" dogfight mode that can only be accessed via TurboLink. However, very few TG-16 games offered co-op play modes designed especially with the TurboExpress in mind. Bitcorp Gamate The Bitcorp Gamate is one of the first handheld game systems created in response to the Nintendo Game Boy. It was released in Asia in 1990 and distributed worldwide by 1991. Like the Sega Game Gear, it was horizontal in orientation; like the Game Boy, it required 4 AA batteries. Unlike many later Game Boy clones, its internal components were professionally assembled (no "glop-top" chips). Unfortunately, the system's fatal flaw was its screen. Even by the standards of the day, it was rather difficult to see, suffering from the same ghosting problems that were common complaints about first-generation Game Boy units. Likely because of this, sales were quite poor, and Bitcorp closed by 1992. However, new games continued to be published for the Asian market, possibly as late as 1994. The total number of games released for the system remains unknown. Gamate games were designed for stereo sound, but the console is only equipped with a mono speaker. Sega Game Gear The Game Gear, produced by Sega, is the third color handheld console, after the Lynx and the TurboExpress.
Released in Japan in 1990 and in North America and Europe in 1991, it is based on the Master System, which gave Sega the ability to quickly create Game Gear games from its large library of Master System games. While never reaching the level of success enjoyed by Nintendo, the Game Gear proved to be a fairly durable competitor, lasting longer than any other Game Boy rival. While the Game Gear is most frequently seen in black or navy blue, it was also released in a variety of additional colors: red, light blue, yellow, clear, and violet. All of these variations were released in small quantities and frequently only in the Asian market. Following its success with the Game Gear, Sega began developing a successor during the early 1990s, which was intended to feature a touchscreen interface, many years before the Nintendo DS. However, such technology was very expensive at the time, and the handheld itself was estimated to cost around $289 had it been released. Sega eventually chose to shelve the idea and instead release the Genesis Nomad, a handheld version of the Genesis, as the successor. Watara Supervision The Watara Supervision was released in 1992 in an attempt to compete with the Nintendo Game Boy. The first model was designed very much like a Game Boy, but it is grey in color and has a slightly larger screen. The second model was made with a hinge across the center and can be bent slightly to provide greater comfort for the user. While the system did enjoy a modest degree of success, it never impacted the sales of Nintendo or Sega. The Supervision was redesigned a final time as "The Magnum". Released in limited quantities, it was roughly equivalent to the Game Boy Pocket. It was available in three colors: yellow, green, and grey. Watara designed many of the games itself, but did receive some third-party support, most notably from Sachen.
A TV adapter, available in both PAL and NTSC formats, could display the Supervision's black-and-white palette in four colors, similar in some regards to Nintendo's Super Game Boy. Hartung Game Master The Hartung Game Master is an obscure handheld released at an unknown point in the early 1990s. Its graphics fidelity was much lower than most of its contemporaries, displaying just 64×64 pixels. It was available in black, white, and purple, and was frequently rebranded by its distributors, such as Delplay, Videojet, and Systema. The exact number of games released is not known, but is likely around 20. The system most frequently turns up in Europe and Australia. Late 1990s By this time, the lack of significant development in Nintendo's product line began allowing more advanced systems, such as the Neo Geo Pocket Color and the WonderSwan Color, to be developed. Sega Nomad The Nomad was released in October 1995 in North America only. The release came five years into the market span of the Genesis, with an existing library of more than 500 Genesis games. According to former Sega of America research and development head Joe Miller, the Nomad was not intended to be the Game Gear's replacement; he believed that there was little planning from Sega of Japan for the new handheld. Sega was supporting five different consoles: Saturn, Genesis, Game Gear, Pico, and the Master System, as well as the Sega CD and 32X add-ons. In Japan, the Mega Drive had never been successful and the Saturn was more successful than Sony's PlayStation, so Sega Enterprises CEO Hayao Nakayama decided to focus on the Saturn. By 1999, the Nomad was being sold at less than a third of its original price. Game Boy Pocket The Game Boy Pocket, released in 1996, is a redesigned version of the original Game Boy with the same features. Notably, this variation is smaller and lighter. It comes in eight different colors: red, yellow, green, black, clear, silver, blue, and pink.
It has space for two AAA batteries, which provide approximately 10 hours of game play. The screen was changed to a true black-and-white display, rather than the "pea soup" monochromatic display of the original Game Boy. Although, like its predecessor, the Game Boy Pocket has no backlight to allow play in a darkened area, it did notably improve visibility and pixel response time (mostly eliminating ghosting). The first model of the Game Boy Pocket did not have an LED to show battery levels, but the feature was added due to public demand. The Game Boy Pocket was not a new software platform and played the same software as the original Game Boy model. Game.com The Game.com (pronounced in TV commercials as "game com", not "game dot com", and not capitalized in marketing material) is a handheld game console released by Tiger Electronics in September 1997. It featured many new ideas for handheld consoles and was aimed at an older target audience, sporting PDA-style features and functions such as a touch screen and stylus. However, Tiger hoped it would also challenge Nintendo's Game Boy and gain a following among younger gamers too. Unlike other handheld game consoles, the first Game.com consoles included two slots for game cartridges, a feature that would not appear again until the Tapwave Zodiac and the DS and DS Lite, and could be connected to a 14.4 kbit/s modem. Later models had only a single cartridge slot. Game Boy Color The Game Boy Color (also referred to as GBC or CGB) is Nintendo's successor to the Game Boy and was released on October 21, 1998, in Japan and in November of the same year in the United States. It features a color screen and is slightly bigger than the Game Boy Pocket. Its processor is twice as fast as the Game Boy's, and the system has twice as much memory. It also had an infrared communications port for wireless linking, which did not appear in later versions of the Game Boy, such as the Game Boy Advance.
The Game Boy Color was a response to pressure from game developers for a new system, as they felt that the Game Boy, even in its latest incarnation, the Game Boy Pocket, was insufficient. The resulting product was backward compatible, a first for a handheld console system, and leveraged the large library of games and large installed base of the predecessor system. This became a major feature of the Game Boy line, since it allowed each new launch to begin with a significantly larger library than any of its competitors. As of March 31, 2005, the Game Boy and Game Boy Color had combined to sell 118.69 million units worldwide. The console is capable of displaying up to 56 different colors simultaneously on screen from its palette of 32,768, and can add basic four-color shading to games that had been developed for the original Game Boy. It can also give the sprites and backgrounds separate colors, for a total of more than four colors. Neo Geo Pocket Color The Neo Geo Pocket Color (or NGPC) was released in 1999 in Japan, and later that year in the United States and Europe. It is a 16-bit color handheld game console designed by SNK, the maker of the Neo Geo home console and arcade machine. It came after SNK's original Neo Geo Pocket monochrome handheld, which debuted in 1998 in Japan. In 2000, following SNK's purchase by Japanese pachinko manufacturer Aruze, the Neo Geo Pocket Color was dropped from both the US and European markets, purportedly due to commercial failure. The system seemed well on its way to being a success in the U.S.: it was more successful than any Game Boy competitor since Sega's Game Gear, but was hurt by several factors, such as SNK's infamous lack of communication with third-party developers and anticipation of the Game Boy Advance. The decision, as a cost-cutting move, to ship U.S. games in cardboard boxes rather than the hard plastic cases that Japanese and European releases shipped in may also have hurt US sales.
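The Game Boy Color's 32,768-color palette quoted above corresponds to 15 bits of color: 5 bits each for red, green, and blue (2^15 = 32,768). A minimal sketch of packing and unpacking such palette words, assuming the commonly documented GBC layout with red in the low bits and blue in the high bits:

```python
def pack_bgr555(r: int, g: int, b: int) -> int:
    """Pack 5-bit red, green, blue components into one 15-bit palette word."""
    assert all(0 <= c < 32 for c in (r, g, b))
    return (b << 10) | (g << 5) | r

def unpack_bgr555(word: int) -> tuple:
    """Recover the 5-bit components from a packed palette word."""
    return (word & 0x1F, (word >> 5) & 0x1F, (word >> 10) & 0x1F)

# 5 bits per channel gives 2**15 = 32,768 representable colors.
PALETTE_SIZE = 2 ** 15
```

The 56 simultaneous colors are a separate hardware limit on how many palette entries can be active on screen at once, drawn from this larger 32,768-value space.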
WonderSwan Color The WonderSwan Color is a handheld game console designed by Bandai. It was released on December 9, 2000, in Japan. Although the WonderSwan Color was slightly larger and heavier (by 7 mm and 2 g) than the original WonderSwan, the color version featured 512 kB of RAM and a larger color LCD screen. In addition, the WonderSwan Color is compatible with the original WonderSwan library of games. Prior to the WonderSwan's release, Nintendo had a virtual monopoly in the Japanese handheld video game market. After the release of the WonderSwan Color, Bandai took approximately 8% of the market share in Japan, partly due to its low price of 6800 yen (approximately US$65). Another reason for the WonderSwan's success in Japan was that Bandai managed to get a deal with Square to port over the original Famicom Final Fantasy games with improved graphics and controls. However, with the popularity of the Game Boy Advance and the reconciliation between Square and Nintendo, the WonderSwan Color and its successor, the SwanCrystal, quickly lost their competitive advantage. Early 2000s The 2000s saw a major leap in innovation, particularly in the second half of the decade with the release of the DS and PSP. Game Boy Advance In 2001, Nintendo released the Game Boy Advance (GBA or AGB), which added two shoulder buttons, a larger screen, and more computing power than the Game Boy Color. The design was revised two years later when the Game Boy Advance SP (GBA SP), a more compact version, was released. The SP features a "clamshell" design (folding open and closed, like a laptop computer), as well as a frontlit color display and a rechargeable battery. Despite the smaller form factor, the screen remained the same size as that of the original. In 2005, the Game Boy Micro was released. This revision sacrifices screen size and backwards compatibility with previous Game Boys for a dramatic reduction in total size and a brighter backlit screen.
A new SP model with a backlit screen was released in some regions around the same time. Along with the Nintendo GameCube, the GBA also introduced the concept of "connectivity": using a handheld system as a console controller. A handful of games use this feature, most notably Animal Crossing, Pac-Man Vs., Final Fantasy Crystal Chronicles, The Legend of Zelda: Four Swords Adventures, The Legend of Zelda: The Wind Waker, Metroid Prime, and Sonic Adventure 2: Battle. As of December 31, 2007, the GBA, GBA SP, and Game Boy Micro combined had sold 80.72 million units worldwide. Game Park 32 The original GP32 was released in 2001 by the South Korean company Game Park, a few months after the launch of the Game Boy Advance. It featured a 32-bit, 133 MHz CPU, an MP3 and DivX player, and an e-book reader. SmartMedia cards were used for storage, and could hold up to 128 MB of anything downloaded through a USB cable from a PC. The GP32 was redesigned in 2003: a front-lit screen was added, and the new version was called the GP32 FLU (Front Light Unit). In summer 2004, another redesign, the GP32 BLU, added a backlit screen. This version of the handheld was planned for release outside South Korea; in Europe it was released, for example, in Spain, with VirginPlay as the distributor. While not a commercial success on the level of mainstream handhelds (only 30,000 units were sold), it ended up being used mainly as a platform for user-made applications and emulators of other systems, being popular with developers and more technically adept users. N-Gage Nokia released the N-Gage in 2003. It was designed as a combination MP3 player, cellphone, PDA, radio, and gaming device. The system received much criticism over defects in its physical design and layout, including its vertically oriented screen and the need to remove the battery to change game cartridges.
The best known of these was "sidetalking", the act of placing the phone speaker and receiver on an edge of the device instead of one of the flat sides, causing the user to appear as if they are speaking into a taco. The N-Gage QD was later released to address the design flaws of the original. However, certain features available in the original N-Gage, including MP3 playback, FM radio reception, and USB connectivity, were removed. A second generation of N-Gage launched on April 3, 2008, in the form of a service for selected Nokia smartphones. Cybiko The Cybiko is a Russian hand-held computer introduced in May 2000 by David Yang's company and designed for teenage audiences, featuring its own two-way radio text messaging system. It has over 430 "official" freeware games and applications. Because of the text messaging system, it features a QWERTY keyboard that is used with a stylus. An MP3 player add-on was made for the unit, as well as a SmartMedia card reader. The company stopped manufacturing the units after two product versions and only a few years on the market. Cybikos can communicate with each other up to a maximum range of 300 metres (0.19 miles), and several Cybikos can chat with each other in a wireless chatroom. Cybiko Classic: There were two models of the Classic Cybiko. Visually, the only difference was that the original version had a power switch on the side, whilst the updated version used the "escape" key for power management. Internally, the differences between the two models were in the internal memory and the location of the firmware. Cybiko Xtreme: The Cybiko Xtreme was the second-generation Cybiko handheld. It featured various improvements over the original Cybiko, such as a faster processor, more RAM, more ROM, a new operating system, a new keyboard layout and case design, greater wireless range, a microphone, improved audio output, and smaller size. Tapwave Zodiac In 2003, Tapwave released the Zodiac.
It was designed to be a PDA-handheld game console hybrid. It supported photos, movies, music, Internet, and documents. The Zodiac used a special version of Palm OS 5 (5.2T) that supported the special gaming buttons and graphics chip. Two versions were available, the Zodiac 1 and 2, differing in memory and looks. The Zodiac line ended in July 2005 when Tapwave declared bankruptcy. Mid 2000s Nintendo DS The Nintendo DS was released in November 2004. Among its new features were the incorporation of two screens, a touchscreen, wireless connectivity, and a microphone port. As with the Game Boy Advance SP, the DS features a clamshell design, with the two screens aligned vertically on either side of the hinge. The DS's lower screen is touch-sensitive, designed to be pressed with a stylus, a user's finger, or a special "thumb pad" (a small plastic pad attached to the console's wrist strap, which can be affixed to the thumb to simulate an analog stick). More traditional controls include four face buttons, two shoulder buttons, a D-pad, and "Start" and "Select" buttons. The console also features online capabilities via the Nintendo Wi-Fi Connection, and ad-hoc wireless networking for multiplayer games with up to sixteen players. It is backwards-compatible with all Game Boy Advance games, but like the Game Boy Micro, it is not compatible with games designed for the Game Boy or Game Boy Color. In January 2006, Nintendo revealed an updated version of the DS: the Nintendo DS Lite (released on March 2, 2006, in Japan), with an updated, smaller form factor (42% smaller and 21% lighter than the original Nintendo DS), a cleaner design, longer battery life, and brighter, higher-quality displays with adjustable brightness. It is also able to connect wirelessly with Nintendo's Wii console. On October 2, 2008, Nintendo announced the Nintendo DSi, with larger, 3.25-inch screens and two integrated cameras.
It has an SD card storage slot in place of the Game Boy Advance slot, plus internal flash memory for storing downloaded games. It was released on November 1, 2008, in Japan; April 2, 2009, in Australia; April 3, 2009, in Europe; and April 5, 2009, in North America. On October 29, 2009, Nintendo announced a larger version of the DSi, called the DSi XL, which was released on November 21, 2009, in Japan; March 5, 2010, in Europe; March 28, 2010, in North America; and April 15, 2010, in Australia. As of December 31, 2009, the Nintendo DS, Nintendo DS Lite, and Nintendo DSi combined had sold 125.13 million units worldwide. Game King The GameKing is a handheld game console released by the Chinese company TimeTop in 2004. The first model, while original in design, owes a large debt to Nintendo's Game Boy Advance. The second model, the GameKing 2, is believed to be inspired by Sony's PSP. This model was also upgraded with a backlit screen, with a distracting background transparency (which can be removed by opening up the console). A color model, the GameKing 3, apparently exists, but was only made for a brief time and was difficult to purchase outside of Asia. Whether intentionally or not, the GameKing has the most primitive graphics of any handheld released since the Game Boy of 1989. As many of the games have an "old school" simplicity, the device has developed a small cult following. The GameKing's speaker is quite loud, and the cartridges' sophisticated looping soundtracks (sampled from other sources) are seemingly at odds with its primitive graphics. TimeTop made at least one additional device sometimes labeled as "GameKing", but while it seems to possess more advanced graphics, it is essentially an emulator that plays a handful of multi-carts (like the GB Station Light II). Outside of Asia (especially China), however, the GameKing remains relatively unheard of, due to the enduring popularity of Japanese handhelds such as those manufactured by Nintendo and Sony.
PlayStation Portable The PlayStation Portable (officially abbreviated PSP) is a handheld game console manufactured and marketed by Sony Computer Entertainment. Development of the console was first announced during E3 2003, and it was unveiled on May 11, 2004, at a Sony press conference before E3 2004. The system was released in Japan on December 12, 2004, in North America on March 24, 2005, and in the PAL region on September 1, 2005. The PlayStation Portable is the first handheld video game console to use an optical disc format, Universal Media Disc (UMD), for distribution of its games. UMD Video discs with movies and television shows were also released. The PSP utilized the Sony/SanDisk Memory Stick Pro Duo format as its primary storage medium. Other distinguishing features of the console include its large viewing screen, multimedia capabilities, and connectivity with the PlayStation 3, other PSPs, and the Internet. Gizmondo Tiger Telematics' Gizmondo came out in the UK in March 2005 and was released in the U.S. in October 2005. It was designed to play music, movies, and games, had a camera for taking and storing photos, offered GPS functions and Internet capabilities, and included a phone for sending text and multimedia messages. Email was promised at launch but was never released before the Gizmondo's, and ultimately Tiger Telematics', downfall in early 2006. Users hoped that a second service pack would add such functionality, but Service Pack B did not activate it. GP2X Series The GP2X is an open-source, Linux-based handheld video game console and media player created by GamePark Holdings of South Korea, designed for homebrew developers as well as commercial developers. It is commonly used to run emulators for game consoles such as the Neo Geo, Genesis, Master System, Game Gear, Amstrad CPC, Commodore 64, Nintendo Entertainment System, TurboGrafx-16, MAME, and others.
A new version called the "F200" was released on October 30, 2007, and features a touchscreen, among other changes. It was followed by the GP2X Wiz (2009) and the GP2X Caanoo (2010). Late 2000s Dingoo The Dingoo A-320 is a micro-sized gaming handheld that resembles the Game Boy Micro and is open to game development. It also supports music and video playback, 8-bit and 16-bit emulators, and an onboard radio with a recording program, with its own interface much like the PSP's. It is available in two colors: white and black. Other similar products from the same manufacturer are the Dingoo A-330 (also known as Geimi), Dingoo A-360, Dingoo A-380 (available in pink, white, and black), and the later Dingoo A-320E. PSP Go The PSP Go is a version of the PlayStation Portable handheld game console manufactured by Sony. It was released on October 1, 2009, in American and European territories, and on November 1 in Japan. It was revealed prior to E3 2009 through Sony's Qore VOD service. Although its design is significantly different from other PSPs, it is not intended to replace the PSP 3000, which Sony continued to manufacture, sell, and support. On April 20, 2011, Sony announced that the PSP Go would be discontinued so that it could concentrate on the PlayStation Vita. Sony later said that only the European and Japanese versions were being cut, and that the console would still be available in the US. Unlike previous PSP models, the PSP Go does not feature a UMD drive, but instead has 16 GB of internal flash memory to store games, video, pictures, and other media. This can be extended by up to 32 GB with the use of a Memory Stick Micro (M2) flash card. Also unlike previous PSP models, the PSP Go's rechargeable battery is not removable or replaceable by the user. The unit is 43% lighter and 56% smaller than the original PSP-1000, and 16% lighter and 35% smaller than the PSP-3000.
It has a 3.8" 480 × 272 LCD (compared to the larger 4.3" 480 × 272 pixel LCD on previous PSP models). The screen slides up to reveal the main controls. The overall shape and sliding mechanism are similar to those of Sony's mylo COM-2 internet device. Pandora The Pandora is a handheld game console/UMPC/PDA hybrid designed to take advantage of existing open-source software and to be a target for homebrew development. It runs a full distribution of Linux, and in functionality is like a small PC with gaming controls. It was developed by OpenPandora, which is made up of former distributors and community members of the GP32 and GP2X handhelds. OpenPandora began taking pre-orders for one batch of 4000 devices in November 2008 and, after manufacturing delays, began shipping to customers on May 21, 2010. FC-16 Go The FC-16 Go is a portable Super NES hardware clone manufactured by Yobo Gameware in 2009. It features a 3.5-inch display, two wireless controllers, and CRT cables that allow cartridges to be played on a television screen. Unlike other Super NES clone consoles, it has region tabs that only allow NTSC North American cartridges to be played. Later revisions feature stereo sound output, larger shoulder buttons, and a slightly rearranged button, power, and A/V output layout. 2010s Nintendo 3DS The Nintendo 3DS is the successor to Nintendo's DS handheld. The autostereoscopic device is able to project stereoscopic three-dimensional effects without the active-shutter or passive polarized glasses required by most 3D televisions to display the 3D effect. The 3DS was released in Japan on February 26, 2011; in Europe on March 25, 2011; in North America on March 27, 2011; and in Australia on March 31, 2011. The system features backward compatibility with Nintendo DS series software, including Nintendo DSi software, except titles that require the Game Boy Advance slot.
It also features an online service called the Nintendo eShop, launched on June 6, 2011, in North America and June 7, 2011, in Europe and Japan, which allows owners to download games, demos, applications, and information on upcoming film and game releases. On November 24, 2011, a limited edition Legend of Zelda 25th Anniversary 3DS was released, containing a unique Cosmo Black unit decorated with gold Legend of Zelda-related imagery, along with a copy of The Legend of Zelda: Ocarina of Time 3D. Other models followed, including the Nintendo 2DS and the New Nintendo 3DS, the latter with a larger (XL/LL) variant like the original Nintendo 3DS, as well as the New Nintendo 2DS XL. Xperia Play The Sony Ericsson Xperia PLAY is a handheld game console smartphone produced by Sony Ericsson under the Xperia smartphone brand. The device runs Android 2.3 Gingerbread and is the first to be part of the PlayStation Certified program, which means that it can play PlayStation Suite games. The device is a horizontally sliding phone, with its original form resembling the Xperia X10 while the slider below resembles that of the PSP Go. The slider features a D-pad on the left side, a set of standard PlayStation buttons (Cross, Circle, Square, and Triangle) on the right, a long rectangular touchpad in the middle, start and select buttons on the bottom right corner, a menu button on the bottom left corner, and two shoulder buttons (L and R) on the back of the device. It is powered by a 1 GHz Qualcomm Snapdragon processor with a Qualcomm Adreno 205 GPU, and features a 4.0-inch (100 mm) 854 × 480 display, an 8-megapixel camera, 512 MB of RAM, 8 GB of internal storage, and a micro-USB connector. It supports microSD cards, versus the Memory Stick variants used in PSP consoles. The device was revealed officially for the first time in a Super Bowl ad on Sunday, February 6, 2011.
On February 13, 2011, at Mobile World Congress (MWC) 2011, it was announced that the device would ship globally in March 2011, with a launch lineup of around 50 software titles. PlayStation Vita The PlayStation Vita is the successor to Sony's PlayStation Portable (PSP) handheld series. It was released in Japan on December 17, 2011, and in Europe, Australia, and North and South America on February 22, 2012. The handheld includes two analog sticks, a 5-inch (130 mm) OLED/LCD multi-touch capacitive touchscreen, and supports Bluetooth, Wi-Fi, and optional 3G. Internally, the PS Vita features a quad-core ARM Cortex-A9 MPCore processor and a quad-core SGX543MP4+ graphics processing unit, as well as the LiveArea software as its main user interface, which succeeds the XrossMediaBar. The device is fully backwards-compatible with PlayStation Portable games digitally released on the PlayStation Network via the PlayStation Store. However, PSone Classics and PS2 titles were not compatible at the time of the primary public release in Japan. The Vita's dual analog sticks are supported in selected PSP games, and the graphics for PSP releases are upscaled, with a smoothing filter to reduce pixelation. On September 20, 2018, Sony announced at Tokyo Game Show 2018 that the Vita would be discontinued in 2019, ending its hardware production. Production of Vita hardware officially ended on March 1, 2019. Razer Switchblade The Razer Switchblade was a prototype pocket-sized gaming device, similar in size to a Nintendo DSi XL, designed to run Windows 7. It featured a multi-touch LCD screen and an adaptive keyboard that changed keys depending on the game the user was playing, and was also to feature a full mouse. It was first unveiled on January 5, 2011, at the Consumer Electronics Show (CES), where it won The Best of CES 2011 People's Voice award. No release date was ever announced, and development has likely been suspended indefinitely.
Nvidia Shield Project Shield is a handheld system developed by Nvidia, announced at CES 2013. It runs on Android 4.2 and uses the Nvidia Tegra 4 SoC. The hardware includes a 5-inch multitouch screen with support for HD graphics (720p). The console allows for the streaming of games running on a compatible desktop PC or laptop. The Nvidia Shield Portable received mixed reception from critics. Generally, reviewers praised the performance of the device but criticized the cost and lack of worthwhile games. Engadget's review noted the system's "extremely impressive PC gaming", but also that due to its high price, the device was "a hard sell as a portable game console", especially when compared to similar handhelds on the market. CNET's Eric Franklin stated in his review of the device that "The Nvidia Shield is an extremely well made device, with performance that pretty much obliterates any mobile product before it; but like most new console launches, there is currently a lack of available games worth your time." Eurogamer's comprehensive review provided a detailed account of the device and its features, concluding: "In the here and now, the first-gen Shield Portable is a gloriously niche, luxury product - the most powerful Android system on the market by a clear stretch and possessing a unique link to PC gaming that's seriously impressive in beta form, and can only get better." Nintendo Switch The Nintendo Switch is a hybrid console that can either be used in handheld form or inserted into a docking station attached to a television to play on a bigger screen. The Switch features two detachable wireless controllers, called Joy-Con, which can be used individually or attached to a grip to provide a traditional gamepad form. A handheld-only revision, the Nintendo Switch Lite, was released on September 20, 2019, and had sold about 1.95 million units worldwide by September 30, 2019, only 10 days after its launch.
Evercade Evercade is a handheld game console developed and manufactured by UK company Blaze Entertainment. It focuses on retrogaming, with ROM cartridges that each contain a number of emulated games. Development began in 2018, and the console was released in May 2020, after a few delays. Upon its launch, the console offered 10 game cartridges with a combined total of 122 games. Arc System Works, Atari, Data East, Interplay Entertainment, Bandai Namco Entertainment, and Piko Interactive have released emulated versions of their games for the Evercade, and pre-existing homebrew games have also been re-released for the console by Mega Cat Studios. The Evercade is capable of playing games originally released for the Atari 2600, the Atari 7800, the Atari Lynx, the NES, the SNES, and the Sega Genesis/Mega Drive. 2020s Analogue Pocket The Analogue Pocket is an FPGA-based handheld game console designed and manufactured by Analogue, Inc. It is designed to play games made for handhelds of the fourth, fifth, and sixth generations of video game consoles. The console features a design reminiscent of the Game Boy, with additional buttons for the supported platforms. It features a 3.5" 1600×1440 LTPS LCD display, an SD card port, and a link cable port compatible with Game Boy link cables. The Analogue Pocket uses an Altera Cyclone V processor and is compatible with original Game Boy, Game Boy Color, and Game Boy Advance cartridges out of the box. With cartridge adapters (sold separately), the Analogue Pocket can play Game Gear, Neo Geo Pocket, Neo Geo Pocket Color, and Atari Lynx game cartridges. It includes an additional FPGA, allowing third-party FPGA development. The Analogue Pocket was released in December 2021.
Steam Deck The Steam Deck is a handheld computer device developed by Valve. It runs SteamOS 3.0, a tailored distribution of Arch Linux, and includes support for Proton, a compatibility layer that allows most Microsoft Windows games to be played on the Linux-based operating system. In terms of hardware, the Deck includes a custom accelerated processing unit (APU) built by AMD based on its Zen 2 and RDNA 2 architectures, with a four-core/eight-thread CPU and a GPU with eight compute units and a total estimated performance of 1.6 TFLOPS. Both the CPU and GPU use variable clock frequencies, with the CPU running between 2.4 and 3.5 GHz and the GPU between 1.0 and 1.6 GHz based on current processing needs. Valve stated that the CPU has performance comparable to Ryzen 3000 desktop processors and the GPU to the Radeon RX 6000 series. The Deck includes 16 GB of LPDDR5 RAM in a quad-channel configuration. Valve revealed the Steam Deck on July 15, 2021, with pre-orders opening the next day. The Deck was expected to ship in December 2021 to the US, Canada, the EU and the UK but was delayed to February 2022, with other regions to follow later in 2022. Pre-orders were limited to those with Steam accounts opened before June 2021 to prevent resellers from controlling access to the device. Pre-order reservations on July 16, 2021 through the Steam storefront briefly crashed the servers due to demand. While initial shipments were still planned for February 2022, Valve reported to new purchasers that wider availability would come later, with the 64 GB and 256 GB NVMe models due in Q2 2022, and the 512 GB NVMe model by Q3 2022. The Steam Deck was released on February 25, 2022.
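The ~1.6 TFLOPS GPU figure quoted above follows directly from the hardware description: eight compute units at the 1.6 GHz peak clock. The sketch below checks that arithmetic; the 64-stream-processors-per-compute-unit count and the convention of counting a fused multiply-add as two operations per clock are standard RDNA 2 assumptions, not stated in the text:

```python
# Back-of-the-envelope peak FP32 throughput for the Steam Deck GPU.
# Assumptions: 64 stream processors per RDNA 2 compute unit, and an
# FMA counted as 2 floating-point operations per clock.
compute_units = 8
shaders_per_cu = 64
ops_per_shader_clock = 2     # fused multiply-add
peak_clock_ghz = 1.6

gflops = compute_units * shaders_per_cu * ops_per_shader_clock * peak_clock_ghz
print(f"{gflops / 1000:.2f} TFLOPS")  # ~1.64, matching the quoted 1.6 TFLOPS
```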
List of handheld consoles See also Comparison of handheld game consoles List of handheld game consoles Video game console emulator Handheld electronic game Handheld television Linux gaming Cloud gaming Mobile game References Video game terminology Handheld game consoles
Heinrich Abeken
Heinrich Abeken (19 August 1809 – 8 August 1872) was a German theologian and Prussian Privy Legation Councillor in the Ministry of Foreign Affairs in Berlin. Early life Abeken was born and raised in the city of Osnabrück, the son of a merchant; he was inspired to pursue higher education by the example of his uncle Bernhard Rudolf Abeken. After finishing college in Osnabrück, he moved in 1827 to attend the University of Berlin to study theology. He combined philosophical and philological studies and was interested in art and modern literature. Career In 1831, Abeken acquired a licentiate of theology. At the end of the year he visited Rome, and was welcomed in the house of Christian Karl Josias, Freiherr von Bunsen. Abeken participated in Bunsen's works, namely an evangelical prayer and hymn-book. In 1834 he became chaplain to the Prussian embassy in Rome. He married his first wife, who died soon thereafter. Bunsen left Rome in 1838 and Abeken soon followed him to Germany. In 1841, he was sent to England to help in founding a German-English missionary bishopric in Jerusalem. In the same year, he was sent by Frederick William IV of Prussia to Egypt and Ethiopia, where he joined an expedition led by professor Karl Richard Lepsius. In 1845 and 1846 he returned via Jerusalem and Rome to Germany. He became Legation Councillor in Berlin, later Council Referee at the Ministry of Foreign Affairs. In 1848 he received an appointment in the Prussian ministry for foreign affairs, and in 1853 was promoted to privy councillor of legation (Geheimer Legationsrath). For more than twenty years Abeken remained engaged in Prussian politics, assisting Otto Theodor Freiherr von Manteuffel and Chancellor Otto von Bismarck. The latter was so pleased with Abeken's work that officials started to call him "the quill [i.e., the scribe] of Bismarck."
Abeken married again in 1866; his second wife was Hedwig von Olfers, daughter of the general director of the royal museums, Privy Councilor von Olfers. He was much employed by Bismarck in the writing of official despatches, and stood high in the favour of King William, whom he often accompanied on his journeys as representative of the foreign office. He was present with the king during the campaigns of 1866 and 1870–71. In 1851 he published anonymously Babylon und Jerusalem, a scathing criticism of the views of the Countess von Hahn-Hahn. During the war against Austria in 1866, as well as the war against France in 1870–71, Abeken stayed in the Prussian headquarters, and a major part of the dispatches of the time were written by him. His health, however, was damaged by the strains of these travels, and he died after an illness of several months. Emperor Wilhelm I described Abeken in a condolence letter to his widow: One of my most reliable advisors, standing on my side in the most decisive moments; His loss is irreplaceable to me; In him his fatherland has lost one of the most noble and most loyal men and officials. Despite his engagement in politics, Abeken never lost his interest in theology and continued to publish and speak on the subject throughout his life. He was interested in art and archeology, and was a sponsor of the Archeological Institute of Rome and a member of the Archeological Society of Rome. He founded a Circle of Friends of Greek Literature in Berlin and was a member of the prize commission for the royal Schiller-Prize. Publications A letter to the Reverend E. B. Pusey in reference to certain charges against the German Church, (1842) Babylon und Jerusalem (1851), letter to Countess Ida Hahn-Hahn Der Gottesdienst der alten Kirche (1853) Das religiöse Leben des Islam (1854) biography of Bunsen in the Jahrbuch zum Conversationslexikon (Leipzig, Brockhaus), Unsere Zeit (1861) Wolfgang Frischbier, Heinrich Abeken 1809–1872.
Eine Biographie Paderborn: Ferdinand Schöningh, 2008 (Otto-von-Bismarck-Stiftung. Wissenschaftliche Reihe, 9). Notes References External links Attribution Allgemeine Deutsche Biographie - online version at Wikisource 1809 births 1872 deaths Writers from Osnabrück 19th-century German Protestant theologians People from the Kingdom of Hanover Prussian diplomats 19th-century German male writers German male non-fiction writers
Henry Bruce, 1st Baron Aberdare
Henry Austin Bruce, 1st Baron Aberdare (16 April 1815 – 25 February 1895), was a British Liberal Party politician who served in government most notably as Home Secretary (1868–1873) and as Lord President of the Council. Background and education Henry Bruce was born at Duffryn, Aberdare, Glamorganshire, the son of John Bruce, a Glamorganshire landowner, and his first wife Sarah, daughter of Reverend Hugh Williams Austin. John Bruce's original family name was Knight, but on coming of age in 1805 he assumed the name of Bruce: his mother, through whom he inherited the Duffryn estate, was the daughter of William Bruce, high sheriff of Glamorganshire. Henry was educated from the age of twelve at the Bishop Gore School, Swansea (Swansea Grammar School). In 1837 he was called to the bar from Lincoln's Inn. Shortly after he had begun to practise, the discovery of coal beneath the Duffryn and other Aberdare Valley estates brought his family great wealth. From 1847 to 1854 Bruce was stipendiary magistrate for Merthyr Tydfil and Aberdare, resigning the position in the latter year, after entering parliament as Liberal member for Merthyr Tydfil. Industrialist and politician, 1852–1868 Bruce was returned unopposed as MP for Merthyr Tydfil in December 1852, following the death of Sir John Guest. He did so with the enthusiastic support of the late member's political allies, notably the iron masters of Dowlais, and he was thereafter regarded by his political opponents, most notably in the Aberdare Valley, as their nominee. Even so, Bruce's parliamentary record demonstrated support for liberal policies, with the exception of the ballot. The electorate in the constituency at this time remained relatively small, excluding the vast majority of the working classes. Significantly, however, Bruce's relationship with the miners of the Aberdare Valley in particular deteriorated as a result of the Aberdare Strike of 1857–58.
In a speech to a large audience of miners at the Aberdare Market Hall, Bruce sought to strike a conciliatory tone in persuading the miners to return to work. In a second speech, however, he delivered a broadside against the trade union movement generally, referring to the violence engendered elsewhere as a result of strikes and to alleged examples of intimidation and violence in the immediate locality. The strike damaged his reputation and may well have contributed to his eventual election defeat ten years later. In 1855, Bruce was appointed a trustee of the Dowlais Iron Company and played a role in the further development of the iron industry. In November 1862, after nearly ten years in Parliament, he became Under-Secretary of State for the Home Department, and held that office until April 1864. He became a Privy Councillor and a Charity Commissioner for England and Wales in 1864, when he was moved to be Vice-President of the Council of Education. 1868 general election At the 1868 general election, Merthyr Tydfil became a two-member constituency with a much-increased electorate as a result of the Second Reform Act of 1867. Since the formation of the constituency, Merthyr Tydfil had dominated representation as the vast majority of the electorate lived in the town and its vicinity, whereas there was a much lower number of electors in the neighbouring Aberdare Valley. During the 1850s and 1860s, however, the population of Aberdare grew rapidly, and the franchise changes in 1867 gave the vote to large numbers of miners in that valley. Amongst these new electors, Bruce remained unpopular as a result of his actions during the 1857–58 dispute. Initially, it appeared that the Aberdare iron master, Richard Fothergill, would be elected to the second seat alongside Bruce. 
However, the appearance of a third Liberal candidate, Henry Richard, a nonconformist radical popular in both Merthyr and Aberdare, left Bruce on the defensive, and he was ultimately defeated, finishing in third place behind both Richard and Fothergill. Later political career After losing his seat, Bruce was elected for Renfrewshire on 25 January 1869 and was made Home Secretary by William Ewart Gladstone. His tenure of this office was conspicuous for a reform of the licensing laws, and he was responsible for the Licensing Act 1872, which made the magistrates the licensing authority, increased the penalties for misconduct in public-houses and reduced the number of hours for the sale of drink. In 1873 Bruce relinquished the home secretaryship, at Gladstone's request, to become Lord President of the Council, and was elevated to the peerage as Baron Aberdare, of Duffryn in the County of Glamorgan, on 23 August that year. As a Gladstonian Liberal, Aberdare had hoped for a much more radical measure, one that would have kept existing licence holders for a further ten years while preventing any new applications. The Act's unpopularity pricked his nonconformist conscience, for, like Gladstone himself, he had a strong leaning towards temperance. He had already pursued the 'moral improvement' of miners in regulations attempting to further restrict the employment of boys in the pits. The Trades Union Act 1871 introduced a more liberal regime, giving further rights to unions and protection from malicious prosecutions. The defeat of the Liberal government in the following year terminated Lord Aberdare's official political life, and he subsequently devoted himself to social, educational and economic questions. Education became one of Lord Aberdare's main interests in later life. His interest had been shown by the speech on Welsh education which he had made on 5 May 1862.
In 1880, he was appointed to chair the Departmental Committee on Intermediate and Higher Education in Wales and Monmouthshire, whose report ultimately led to the Welsh Intermediate Education Act of 1889. The report also stimulated the campaign for the provision of university education in Wales. In 1883, Lord Aberdare was elected the first president of the University College of South Wales and Monmouthshire. In his inaugural address he declared that the framework of Welsh education would not be complete until there was a University of Wales. The University was eventually founded in 1893 and Aberdare became its first chancellor. In 1876 he was elected a Fellow of the Royal Society; from 1878 to 1891 he was president of the Royal Historical Society; and in 1881 he became president of both the Royal Geographical Society and the Girls' Day School Trust. In 1888 he headed the commission that established the Official Table of Drops, listing how far a person of a particular weight should be dropped when hanged for a capital offence (the only method of judicial execution in the United Kingdom at that time), to ensure an instant and painless death by cleanly breaking the neck between the 2nd and 3rd vertebrae, an exacting science eventually brought to perfection by Chief Executioner Albert Pierrepoint. Prisoners' health, clothing and discipline were a particular concern even at the end of his career. In the Lords he spoke at some length to the Home Affairs Committee chaired by Arthur Balfour about the prison rules system. Aberdare had always expressed concern about the intemperance of the working classes; in 1878, urging greater vigilance against the vice of excessive drinking, he took evidence on miners' and railway colliers' drinking habits. The committee tried to establish special legislation based on a link between Sunday opening and absenteeism established in 1868.
Aberdare had been interested in the plight of working-class drinkers since Gladstone had appointed him Home Secretary. The Licensing Bill had been drafted to limit hours and protect the public, but its defeat by the Tory 'beerage' and the publicans persuaded the convinced Anglican forever more of their iniquities. In 1882 he began a connection with West Africa which lasted the rest of his life, by accepting the chairmanship of the National African Company, formed by Sir George Goldie, which in 1886 received a charter under the title of the Royal Niger Company and in 1899 was taken over by the British government, its territories being constituted the protectorate of Nigeria. West African affairs, however, by no means exhausted Lord Aberdare's energies, and it was principally through his efforts that a charter was obtained in 1894 for the University College of South Wales and Monmouthshire, a constituent institution of the University of Wales. This is now Cardiff University. Lord Aberdare, who in 1885 was made a Knight Grand Cross of the Order of the Bath, presided over several Royal Commissions at different times. Family Henry Bruce married firstly Annabella, daughter of Richard Beadon, of Clifton, by Annabella A'Court, sister of the 1st Baron Heytesbury, on 6 January 1846. They had one son and three daughters: Henry Campbell Bruce, 2nd Baron; Margaret Cecilia, who married on 16 September 1889 Douglas Close Richmond, CB, MA, son of Rev. Henry Sylvester Richmond MA, rector of Wyck Rissington, Glos.; Rachel Mary, who married on 10 September 1872 Augustus George Vernon-Harcourt of St Clare, Ryde, Isle of Wight, son of Admiral Frederick Edward Vernon-Harcourt; and Jessie Frances, who married on 3 September 1878 Rev John William Wynne-Jones, MA, rector of Llantrisant, Anglesey, son of John Wynne-Jones JP, DL, of Treiorworth, Bodedern, Holyhead, Anglesey.
After her death on 28 July 1852, he married secondly, on 17 August 1854, Norah Creina Blanche, youngest daughter of Lt-Gen Sir William Napier, KCB, the historian of the Peninsular War, whose biography he edited, by Caroline Amelia, second daughter of Gen. Hon Henry Edward Fox, son of the Earl of Ilchester. They had seven daughters and two sons, of whom the youngest was the mountaineer Charles Granville Bruce. Alice Bruce took on her mother's ideas and took a leading role in women's education. Sarah married Montague Muir Mackenzie, barrister. Elizabeth Fox Bruce (1861–1935) married the author Percy Ewing Matheson. Lord Aberdare died at his London home, 39 Princes Gardens, South Kensington, on 25 February 1895, aged 79, and was succeeded in the barony by his only son from his first marriage, Henry. He was survived by his wife, Lady Aberdare, born 1827, who died on 27 April 1897. She was a proponent of women's education and active in the establishment of Aberdare Hall in Cardiff. Memorial Henry Austin Bruce is buried at Aberffrwd Cemetery in Mountain Ash, Wales. His large family plot is surrounded by a chain, and his gravestone is a simple Celtic cross with double plinth and kerb. On it is written "To God the Judge of all and to the spirits of just men more perfect."
References Bibliography External links UK MPs who were granted peerages Fellows of the Royal Society Deputy Lieutenants of Glamorgan People educated at Bishop Gore School Peers of the United Kingdom created by Queen Victoria
Harpers Ferry (disambiguation)
Harpers Ferry is the name of several places in the United States of America: Harpers Ferry, Iowa, a city in Allamakee County, Iowa Harpers Ferry, West Virginia, a town in Jefferson County, West Virginia John Brown's raid on Harpers Ferry (1859) Harpers Ferry Armory, second federal armory (construction begun 1799) and site of John Brown's slave revolt of 1859 Harpers Ferry National Historical Park Battle of Harpers Ferry (September 12–15, 1862), a battle in the American Civil War that took place around what is now Harpers Ferry, West Virginia Harpers Ferry may also refer to: Harpers Ferry class dock landing ship, a ship class in the United States Navy USS Harpers Ferry (LSD-49), a Harpers Ferry class dock landing ship of the United States Navy, commissioned in 1995 Harpers Ferry (nightclub), a music venue and nightclub in Boston Harper's Ferry flintlock pistol See also Harpur's Ferry, a student volunteer ambulance service at Binghamton University
Halophile
The halophiles, named after the Greek word for "salt-loving", are extremophiles that thrive in high salt concentrations. While most halophiles are classified into the domain Archaea, there are also bacterial halophiles and some eukaryotic species, such as the alga Dunaliella salina and the fungus Wallemia ichthyophaga. Some well-known species give off a red color from carotenoid compounds, notably bacteriorhodopsin. Halophiles can be found in water bodies with salt concentrations more than five times greater than that of the ocean, such as the Great Salt Lake in Utah, Owens Lake in California, Lake Urmia in Iran, the Dead Sea, and in evaporation ponds. They are theorized to be a possible candidate for extremophiles living in the salty subsurface ocean of Jupiter's moon Europa and other similar moons. Classification Halophiles are categorized by the extent of their halotolerance: slight, moderate, or extreme. Slight halophiles prefer 0.3 to 0.8 M (1.7 to 4.8%—seawater is 0.6 M or 3.5%), moderate halophiles 0.8 to 3.4 M (4.7 to 20%), and extreme halophiles 3.4 to 5.1 M (20 to 30%) salt content. Halophiles require sodium chloride (salt) for growth, in contrast to halotolerant organisms, which do not require salt but can grow under saline conditions. Lifestyle High salinity represents an extreme environment in which relatively few organisms have been able to adapt and survive. Most halophilic and all halotolerant organisms expend energy to exclude salt from their cytoplasm to avoid protein aggregation ('salting out'). To survive the high salinities, halophiles employ two differing strategies to prevent desiccation through osmotic movement of water out of their cytoplasm. Both strategies work by increasing the internal osmolarity of the cell. The first strategy is employed by some archaea, the majority of halophilic bacteria, yeasts, algae, and fungi; the organism accumulates organic compounds in the cytoplasm—osmoprotectants which are known as compatible solutes.
These can be either synthesised or accumulated from the environment. The most common compatible solutes are neutral or zwitterionic, and include amino acids, sugars, polyols, betaines, and ectoines, as well as derivatives of some of these compounds. The second, more radical adaptation involves selectively absorbing potassium (K+) ions into the cytoplasm. This adaptation is restricted to the extremely halophilic archaeal family Halobacteriaceae, the moderately halophilic bacterial order Halanaerobiales, and the extremely halophilic bacterium Salinibacter ruber. The presence of this adaptation in three distinct evolutionary lineages suggests convergent evolution of this strategy, it being unlikely to be an ancient characteristic retained in only scattered groups or passed on through massive lateral gene transfer. The primary reason for this is the entire intracellular machinery (enzymes, structural proteins, etc.) must be adapted to high salt levels, whereas in the compatible solute adaptation, little or no adjustment is required to intracellular macromolecules; in fact, the compatible solutes often act as more general stress protectants, as well as just osmoprotectants. Of particular note are the extreme halophiles or haloarchaea (often known as halobacteria), a group of archaea, which require at least a 2 M salt concentration and are usually found in saturated solutions (about 36% w/v salts). These are the primary inhabitants of salt lakes, inland seas, and evaporating ponds of seawater, such as the deep salterns, where they tint the water column and sediments bright colors. These species most likely perish if they are exposed to anything other than a very high-concentration, salt-conditioned environment. These prokaryotes require salt for growth. The high concentration of sodium chloride in their environment limits the availability of oxygen for respiration. 
Their cellular machinery is adapted to high salt concentrations by having charged amino acids on their surfaces, allowing the retention of water molecules around these components. They are heterotrophs that normally respire by aerobic means. Most halophiles are unable to survive outside their high-salt native environments. Many halophiles are so fragile that when they are placed in distilled water, they immediately lyse from the change in osmotic conditions. Halophiles use a variety of energy sources and can be aerobic or anaerobic; anaerobic halophiles include phototrophic, fermentative, sulfate-reducing, homoacetogenic, and methanogenic species. The Haloarchaea, and particularly the family Halobacteriaceae, are members of the domain Archaea, and comprise the majority of the prokaryotic population in hypersaline environments. Currently, 15 recognised genera are in the family. The domain Bacteria (mainly Salinibacter ruber) can comprise up to 25% of the prokaryotic community, but is more commonly a much lower percentage of the overall population. At times, the alga Dunaliella salina can also proliferate in this environment. A comparatively wide range of taxa has been isolated from saltern crystalliser ponds, including members of these genera: Haloferax, Halogeometricum, Halococcus, Haloterrigena, Halorubrum, Haloarcula, and Halobacterium. However, the viable counts in these cultivation studies have been small when compared to total counts, and the numerical significance of these isolates has been unclear. Only recently has it become possible to determine the identities and relative abundances of organisms in natural populations, typically using PCR-based strategies that target 16S small subunit ribosomal ribonucleic acid (16S rRNA) genes. While comparatively few studies of this type have been performed, results from these suggest that some of the most readily isolated and studied genera may not in fact be significant in the in situ community. 
This is seen in cases such as the genus Haloarcula, which is estimated to make up less than 0.1% of the in situ community, but commonly appears in isolation studies. Genomic and proteomic signature Comparative genomic and proteomic analysis has shown that distinct molecular signatures exist for the environmental adaptation of halophiles. At the protein level, halophilic species are characterized by low hydrophobicity, an overrepresentation of acidic residues, an underrepresentation of Cys, lower propensities for helix formation, and higher propensities for coil structure. The core of these proteins is less hydrophobic; DHFR, for example, was found to have narrower β-strands. At the DNA level, halophiles exhibit distinct dinucleotide and codon usage. Examples Halobacteriaceae is a family that includes a large part of halophilic archaea. The genus Halobacterium under it has a high tolerance for elevated levels of salinity. Some species of halobacteria have acidic proteins that resist the denaturing effects of salts. Halococcus is another genus of the family Halobacteriaceae. Some hypersaline lakes are habitat to numerous families of halophiles. For example, the Makgadikgadi Pans in Botswana form a vast, seasonal, high-salinity water body that manifests halophilic species within the diatom genus Nitzschia in the family Bacillariaceae, as well as species within the genus Lovenula in the family Diaptomidae. Owens Lake in California also contains a large population of the halophilic archaeon Halobacterium halobium. Wallemia ichthyophaga is a basidiomycetous fungus which requires at least 1.5 M sodium chloride for in vitro growth, and it thrives even in media saturated with salt. An obligate requirement for salt is an exception among fungi. Even species that can tolerate salt concentrations close to saturation (for example Hortaea werneckii) in almost all cases grow well in standard microbiological media without the addition of salt.
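The salt concentrations in this article are quoted both in molarity and in percent w/v; the two scales are related through the molar mass of NaCl (about 58.44 g/mol). A small sketch checking the quoted equivalences:

```python
# Convert NaCl molarity (mol/L) to % w/v (grams per 100 mL).
NACL_MOLAR_MASS = 58.44  # g/mol

def molar_to_percent_wv(molarity):
    # mol/L * g/mol = g/L; divide by 10 for grams per 100 mL
    return molarity * NACL_MOLAR_MASS / 10

for m in (0.6, 3.4, 5.1):
    print(f"{m} M = {molar_to_percent_wv(m):.1f}% w/v")
# 0.6 M ~ 3.5% (seawater); 3.4 M ~ 20% and 5.1 M ~ 30%
# (the extreme-halophile range given in the Classification section)
```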
The fermentation of salty foods (such as soy sauce, Chinese fermented beans, salted cod, salted anchovies, sauerkraut, etc.) often involves halophiles as either essential ingredients or accidental contaminants. One example is Chromohalobacter beijerinckii, found in salted beans preserved in brine and in salted herring. Tetragenococcus halophilus is found in salted anchovies and soy sauce. Artemia is a ubiquitous genus of small halophilic crustaceans living in salt lakes (such as Great Salt Lake) and solar salterns that can exist in water approaching the precipitation point of NaCl (340 g/L) and can withstand strong osmotic shocks due to its mitigating strategies for fluctuating salinity levels, such as its unique larval salt gland and osmoregulatory capacity. North Ronaldsay sheep are a breed of sheep originating from Orkney, Scotland. They have limited access to freshwater sources on the island and their only food source is seaweed. They have adapted to handle salt concentrations that would kill other breeds of sheep. See also Arid Forest Research Institute Biosalinity Halotolerance References Further reading External links HaloArchaea.com Important Groups of Prokaryotes - Kenneth Todar Astrobiology: extremophiles- life in extreme environments