Geography of Albania Albania is a small country in Southern and Southeastern Europe, strategically positioned on the Adriatic and Ionian Sea within the Mediterranean Sea, with a coastline of about . It is bounded by Montenegro to the northwest, Kosovo to the northeast, North Macedonia to the east and Greece to the southeast and south. Most of Albania rises into mountains and hills that tend to run the length of the country from north to south, such as the Albanian Alps in the north, the Sharr Mountains in the northeast, the Skanderbeg Mountains in the centre, the Korab Mountains in the east, the Pindus Mountains in the southeast, and the Ceraunian Mountains in the southwest. Plains and plateaus extend in the west along the Albanian Adriatic and Ionian Sea Coast. Some of the largest and oldest bodies of freshwater in Europe occur in Albania. The second largest lake of Southern Europe, the Lake of Shkodër, is located in the northwest, surrounded by the Albanian Alps and the Adriatic Sea. One of the oldest continuously existing lakes in the world, the Lake of Ohrid, straddles the border in the southeast, while the highest tectonic lakes of the Balkan Peninsula, the Large and Small Lake of Prespa, lie hidden among high mountains in the southeast. Rivers originate in the east of Albania and loop towards the west into the sea. They belong to the drainage basins of the Adriatic, Aegean and Black Sea. The longest river in the country, measured from its mouth to its source, is the Drin, which starts at the confluence of its two headwaters, the Black and White Drin; also notable is the Vjosë, one of the last intact large river systems in Europe. For a small country, Albania is characterised by its biological diversity and abundance of contrasting ecosystems and habitats, contained in an area of 28,748 square kilometres. This great diversity derives from Albania's geographic location on the Mediterranean Sea, with typical climatic conditions, varied topography, and a wealth of terrestrial and marine ecosystems providing a variety of habitats, each with its own typical flora and fauna. There are 799 Albanian protected areas covering a surface of 5,216.96 square kilometres. These include 2 strict nature reserves, 14 national parks, 1 marine park, 8 archaeological parks, 750 natural monuments, 22 habitat/species management areas, 5 protected landscapes, 4 managed resources areas and 4 Ramsar wetlands. The national parks cover a surface area of or roughly 13.65% of the overall territory. With a total surface area of 28,748 square kilometres (11,100 sq mi), the country is located in the southeastern part of the Adriatic and the northeastern part of the Ionian Sea, both within the Mediterranean Sea. It has a total border length of about , of which are terrestrial borders, shore borders, river borders and lake borders. The inland water surface is , composed of natural lakes , coastal lagoons , artificial lakes and rivers . The countries of Montenegro () and Kosovo () border the country in the north and northeast, respectively. A significant portion of this border connects high points and follows mountain ridges through the largely inaccessible Albanian Alps. The eastern border, shared with North Macedonia, stretches . It starts at the tripoint between Albania, Kosovo, and North Macedonia, passes through the Sharr and Korab Mountains and continues until it reaches Lake Ohrid and Lake Prespa. 
The southern and southeastern border with Greece is long. It starts at the tripoint between Albania, North Macedonia, and Greece, runs across the Large and Small Lake Prespa and reaches the Ionian Sea at the Strait of Corfu. The most significant feature of Albania is possibly its relief, with numerous successive mountain ranges and an average altitude of more than 700 metres above sea level. Most of the mountains lie to the north, east and south of the western lowlands, in the northern, eastern and southern mountain ranges. The Albanian Alps extend over 90 kilometres through the north of Albania, traversing an area of 2,000 square kilometres. These mountains are an extension of the Dinaric Alps and are no wider than 40 kilometres. They are deeply fragmented and often inaccessible. They hold Maja Jezercë, which is the highest point of the Dinarides and simultaneously the second highest point of Albania. The Korab Mountains dominate the east of the country and extend 40 kilometres along the eastern border, where peaks can reach 2,500 metres. The range, which is fragmented by many deep structural depressions, contains the country's most prominent peak, Mount Korab. Another distinguishing feature is the evidence of the last ice age in the form of glacial lakes at relatively low altitudes. One of the most remarkable features of the south of Albania is the presence of the Ceraunian Mountains, which cut across the landscape for nearly 100 kilometres. Thousand-metre-high mountains fall vertically into the Mediterranean Sea, constituting the first barrier to communication between the sea and the country's southern inland. The country has a dense network of rivers and streams characterised by a high flow rate. They belong to the drainage basins of the Adriatic, Aegean and Black Sea. They mostly rise in the mountainous eastern half of the country and have their mouths in the west along the coasts. They are sustained by snowmelt from the snowcapped mountains and by the abundant precipitation that falls at higher elevations. Eight notable rivers, along with their numerous tributaries, make up the river system of the country. The longest river is the Drin, which flows for within the country. Its catchment area is one of the most biodiverse places in Europe and encompasses the lakes of Shkodër, Ohrid and Prespa. Of particular importance is the Vjosa, considered the last remaining wild river in Europe. The country's other rivers are the Fan, Ishëm, Erzen, Mat, Seman and Shkumbin. With more than 250 lakes, the country is home to two of the largest lakes in Southern Europe and one of the oldest in the world. The lakes of the country are predominantly of karstic or glacial origin. The largest lake in Southern Europe is the Lake of Shkodër, located in the north and shared with Montenegro. One of the oldest lakes in the world is the Lake of Ohrid, located in the southeast and shared with North Macedonia. The highest positioned lakes in Southern Europe are the Large and Small Lake of Prespa in the southeast, shared with North Macedonia and Greece. The country is also home to many lagoons of varied shape, size and structure. They are primarily positioned along the coast in the west of the country and also in the southeast. They contribute to the overall productivity of the coastal waters by supporting a variety of habitats and ecosystems. 
The largest lagoon in the country is the Lagoon of Karavasta, located between the mouths of the Shkumbin and Seman. Other significant lagoons are those of Patoku, Narta, Kunë-Vain and Butrint. The proximity to the Mediterranean Sea and the convergence of exceptional climatic, geological and hydrological conditions have contributed to the development of a rich biodiversity, making Albania one of the biodiversity hotspots of Europe. In terms of phytogeography, the land area of Albania extends within the Boreal Kingdom, specifically within the Illyrian province of the Circumboreal Region. Its territory can be subdivided into four terrestrial ecoregions of the Palearctic realm – the Illyrian, Balkan, Pindus and Dinaric forests. Forests and woodlands are the most widespread terrestrial ecosystem in Albania. Those in the north of the country have an affinity with the forests of Continental Europe, while those in the south are closer to those of the Mediterranean Basin. They are important because they provide shelter for a vast array of rare and endangered animal species, among others the brown bear, Balkan lynx, grey wolf, golden jackal, Egyptian vulture and golden eagle. The Mediterranean monk seal, short-beaked common dolphin and common bottlenose dolphin can frequently be seen in the coastal waters of the country. Outstanding is the presence of the Dalmatian pelican, the rarest pelican in the world. The biodiversity of Albania is conserved in its protected areas, which provide protection to hundreds of threatened and endangered species. Numerous parts of Albania are protected in accordance with a number of national and international designations due to their natural, historical or cultural value. Protected areas are among the principal instruments of conservation, contributing effectively to the maintenance of species, habitats and ecosystems. The country currently has fifteen designated national parks, of which one is a marine park. Ranging from the Adriatic Sea and the Ionian Sea to the Albanian Alps and the Ceraunian Mountains, they possess outstanding landscapes that constitute habitats for thousands of plant and animal species. Butrint, Divjakë-Karavasta, Karaburun-Sazan, Llogara, Prespa, Shebenik-Jabllanicë, Theth and Valbonë are among the most spectacular national parks of the country. Albania has a combination of a Mediterranean climate and a continental climate, with four distinct seasons. The climatic conditions are highly variable and modified locally by altitude and latitude. Its climate is strongly influenced by the Mediterranean Sea in the west and by the mountains that rise all around the country. Its diverse regions have a remarkable range of microclimates, with the weather system on the coasts contrasting with that prevailing in the interior. Moreover, the weather varies from north to south and from west to east. Albania's climate ranges from temperate on the coasts to continental in the interior. The warmest areas of the country are on the coasts, which are characterised by a Mediterranean climate (Csa, Csb and Cfa) as defined by the Köppen climate classification. The highlands experience an oceanic climate. Winters in Albania are characteristically mild and wet while summers are warm and dry. The northern areas of the country, such as the Albanian Alps, experience a subarctic climate with frequently very cold winters and short, mild summers. 
The lowlands of Albania have mild winters, averaging about . The summer temperatures average ; however, humidity is low. In the southern lowlands, specifically the areas on the Ionian Sea, temperatures average about in the winter and during the summer. The country is divided into four physiographic regions, the Northern Mountain Range, Central Mountain Range, Southern Mountain Range and Western Lowlands, with two subdivisions corresponding roughly to the Albanian Adriatic and Ionian Sea Coast. Except for its sea coast in the west, the western lowlands are ringed by a great arc-shaped line of mountains spreading in the north, east and south. The predominantly flat and regular landscape is shaped morphologically by the sea and rivers, in terms of topography and soil, and also in its climate and biodiversity. The region experiences mild and short winters, with low rainfall, and long, hot and dry summers. It is mostly dominated by the extensive Plain of Myzeqe, a large alluvial plain traversed by three main rivers, the Shkumbin, Seman and Vjosë. The Albanian Adriatic Sea Coast stretches from the mouth of the River Buna near the Lake of Shkodër in the north, across the Gulf of Drin, to the Bay of Vlorë in the south. The two largest coastal cities, Durrës and Vlorë, are located in the northern and the southern part of the region, respectively. The total length of the coastline is approximately , most of which is taken up by sandy beaches, wetlands and alluvial deposits formed by Quaternary sediments of the country's main rivers. The coastline is lined with different habitats and ecosystems, providing optimal conditions for a rich biodiversity. Of particular importance is the presence of numerous lagoons and ponds, which represent some of the biologically richest scenery in the country. The lagoons of Karavasta and Narta, in the immediate proximity of the sea, are among the most important in the Mediterranean. While the Adriatic coast is relatively low, the Ionian coast is, with few exceptions, rugged and mountainous. The Albanian Ionian Sea Coast stretches from the Peninsula of Karaburun in the north, through the Albanian Riviera, to the Straits of Corfu in the south. The largest coastal city, Sarandë, is located in the southern part of the region, while smaller villages include Borsh, Dhërmi, Himara, Qeparo, Piqeras and Lukovë. The total length of the coastline is approximately . The most characteristic features of the coastline are the Ceraunian Mountains, which extend nearly along the Albanian Riviera roughly in a southeast–northwest direction. The region is particularly fertile and known for its citrus fruits and viticulture, which have a long and special tradition in the region. The coastline is abundant in dolomites from the Triassic period; the carbonate rocks continue with limestone from the Jurassic period, as well as bituminous schists, Cretaceous porcelanite and phosphatic limestone. The northern mountain range is the most mountainous region in the country, and its physical geography and shape are distinguished by the convergence of two distinct geological regions, the Albanian Alps and the Mirdita Highlands. A dominant feature of the region's landscape is the presence of several U-shaped valleys, such as the Valley of Valbonë, formed by the process of glaciation. The Albanian Alps are the southernmost extension of the Dinaric Alps and simultaneously the highest and most imposing mountains of the country. 
Although the ice ages had relatively little geological influence on the Alps, the southernmost glaciers of Europe were recently discovered there. The Albanian Alps are home to many important rivers of the Western Balkans. The main drainage basins of the Alps are those of the Drin and Danube Rivers. Rivers in the Alps fall roughly into two categories, those that flow into the Lim and those that enter the White Drin and meet the Black Drin downstream at the Drin confluence. However, the Drin dominates, draining most of the Alps with its tributaries when measured from the source of the White Drin to the mouth of the Drin near Lezhë. Not all of the Drin, however, flows near or parallel to the Alps. One Drin tributary is the Valbona River, which drains into the Adriatic Sea, along with its eastern tributary, the Gashi River. The terrain of the central mountain range is predominantly mountainous and extremely fragmented, carved into its present shape by the glaciers of the last ice age. The region is dominated by three mountain ranges, each running from north-northeast to south-southwest, roughly parallel to the eastern border of Albania. The Korab Mountains are certainly the most striking feature of the physical geography of the region. They also contain the highest mountain of the country, Mount Korab. The mountains extend over 40 kilometres and cover an area of 560 square kilometres. Between the valleys of the Shkumbin and Devoll rise the mountains that constitute the Valamara Mountains, while farther north stretch the connected mountain massifs of Shebenik and Jabllanicë. The vast majority of the region's natural lakes are located in the southern half of the region, and most of them are the product of a long and continuous history. The Lake of Ohrid lies adjacent to the border shared with North Macedonia. It is one of the oldest continuously existing lakes in the world, with a unique biodiversity. Further south, well hidden among high mountains, extend the Large and Small Lakes of Prespa, which are linked by a small channel with a sluice. The Ceraunian Mountains, a coastal mountain range in southwestern Albania, stretch for about on the Ionian Sea from Sarandë to Orikum, in a southeast–northwest direction along the Albanian Riviera. The relief is varied, with many mountain passes, canyons, gorges, hills and other landforms. The mountains are characterised by stands of black pine, Bulgarian fir, Bosnian pine and ash trees. The mountain chain is home to many large mammals, including brown bears, grey wolves, lynx, golden eagles and others. The highest point of the chain is Maja e Çikës, which rises to an elevation of above the sea. From the peak, there is a view of the Albanian Riviera, the northern Ionian Islands as well as the Italian coast of Apulia and Otranto. The section has wide and long beaches, with a number of bays and headlands. Stretching until the Llogara Pass at , the mountain chain separates into the Ceraunians in the west and the Akroceraunians (or "Reza e Kanalit") in the east within the Karaburun Peninsula. The villages of Palasë, Dhërmi, Vuno, Himarë, Qeparo, Borsh, Pilur, Kudhës and Ilias are located on the Ceraunian range. The Llogara National Park covers an area of . The Karaburun Peninsula is situated at the eastern side of the Strait of Otranto, where the Adriatic Sea meets the Ionian Sea. Its area is , with a length of and a width of only . The Mesokanali is the narrow channel that separates the peninsula from the Sazan Island. 
Geologically, it is made up of carbonate limestone dating back to the Mesozoic period, while in the northwest it is composed of terrigenous sediment. Furthermore, these formations have been continuously under the effects of karstification and are exploited as marble. The relief of the peninsula comprises a number of hills with an average altitude of about above the Adriatic. The highest summits are Maja e Ilqës , Maja e Flamurit and Maja Çadëri . The coastal landscape is characterised by a rough relief that dips vertically into the Ionian Sea; it features several solitary peaks, large canyons, bays, caves and gulfs. Examples of typical landforms include Gjipe Canyon, Gjiri i Arushës, Gjiri i Dafinës and Gjiri i Gramës, among others. The geological evolution has also formed capes such as Haxhi Aliu, Galloveci and Kepi i Gjuhëzës, as well as some 20 caves along the entire shoreline. The climate is Mediterranean, with hot summers and generally warm to cool, dry winters. Due to its climatic, hydrological and geological conditions, the area is characterised by a unique flora and fauna. Most of the territory consists of forests and is relatively well preserved; it includes many types of trees, such as Mediterranean oak, manna ash, kermes oak and field maple.
https://en.wikipedia.org/wiki?curid=12691
Group representation In the mathematical field of representation theory, group representations describe abstract groups in terms of bijective linear transformations (i.e. automorphisms) of vector spaces; in particular, they can be used to represent group elements as invertible matrices so that the group operation can be represented by matrix multiplication. Representations of groups are important because they allow many group-theoretic problems to be reduced to problems in linear algebra, which is well understood. They are also important in physics because, for example, they describe how the symmetry group of a physical system affects the solutions of equations describing that system. The term "representation of a group" is also used in a more general sense to mean any "description" of a group as a group of transformations of some mathematical object. More formally, a "representation" means a homomorphism from the group to the automorphism group of an object. If the object is a vector space we have a "linear representation". Some people use "realization" for the general notion and reserve the term "representation" for the special case of linear representations. The bulk of this article describes linear representation theory; see the last section for generalizations. The representation theory of groups divides into subtheories depending on the kind of group being represented. The various theories are quite different in detail, though some basic definitions and concepts are similar. The most important divisions are: Representation theory also depends heavily on the type of vector space on which the group acts. One distinguishes between finite-dimensional representations and infinite-dimensional ones. In the infinite-dimensional case, additional structures are important (e.g. whether or not the space is a Hilbert space, Banach space, etc.). One must also consider the type of field over which the vector space is defined. The most important case is the field of complex numbers. The other important cases are the field of real numbers, finite fields, and fields of p-adic numbers. In general, algebraically closed fields are easier to handle than non-algebraically closed ones. The characteristic of the field is also significant; many theorems for finite groups depend on the characteristic of the field not dividing the order of the group. A representation of a group "G" on a vector space "V" over a field "K" is a group homomorphism from "G" to GL("V"), the general linear group on "V". That is, a representation is a map ρ : "G" → GL("V") such that ρ("g"1 "g"2) = ρ("g"1) ρ("g"2) for all "g"1, "g"2 in "G". Here "V" is called the representation space and the dimension of "V" is called the dimension of the representation. It is common practice to refer to "V" itself as the representation when the homomorphism is clear from the context. In the case where "V" is of finite dimension "n" it is common to choose a basis for "V" and identify GL("V") with GL("n", "K"), the group of "n"-by-"n" invertible matrices over the field "K". Consider the complex number "u" = e^(2πi/3), which has the property "u"^3 = 1. The cyclic group "C"3 = {1, "u", "u"^2} has a representation ρ on C^2 given by: This representation is faithful because ρ is a one-to-one map. Another representation for "C"3 on C^2, isomorphic to the previous one, is σ given by: The group "C"3 may also be faithfully represented on R^2 by τ given by: where A subspace "W" of "V" that is invariant under the group action is called a "subrepresentation". 
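As a rough illustration of the definition above, the following Python sketch checks the homomorphism property for a 2-dimensional representation of "C"3. The diagonal matrices are an assumption chosen to be consistent with the decomposition into span{(1,0)} and span{(0,1)} mentioned below; they are not a reproduction of the article's own formulas.

import numpy as np

u = np.exp(2j * np.pi / 3)          # primitive cube root of unity, u**3 == 1

def rho(k):
    """Matrix assigned (in this sketch) to the group element u**k."""
    return np.array([[1, 0],
                     [0, u**k]])

# Homomorphism property: rho(u**j * u**k) equals rho(u**j) @ rho(u**k)
for j in range(3):
    for k in range(3):
        assert np.allclose(rho((j + k) % 3), rho(j) @ rho(k))

# Faithfulness: distinct group elements are sent to distinct matrices
assert not np.allclose(rho(0), rho(1)) and not np.allclose(rho(1), rho(2))
print("rho is a faithful 2-dimensional representation of C3")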
If "V" has exactly two subrepresentations, namely the zero-dimensional subspace and "V" itself, then the representation is said to be irreducible; if it has a proper subrepresentation of nonzero dimension, the representation is said to be reducible. The representation of dimension zero is considered to be neither reducible nor irreducible, just like the number 1 is considered to be neither composite nor prime. Under the assumption that the characteristic of the field "K" does not divide the size of the group, representations of finite groups can be decomposed into a direct sum of irreducible subrepresentations (see Maschke's theorem). This holds in particular for any representation of a finite group over the complex numbers, since the characteristic of the complex numbers is zero, which never divides the size of a group. In the example above, the first two representations given (ρ and σ) are both decomposable into two 1-dimensional subrepresentations (given by span{(1,0)} and span{(0,1)}), while the third representation (τ) is irreducible. A "set-theoretic representation" (also known as a group action or "permutation representation") of a group "G" on a set "X" is given by a function ρ : "G" → "X""X", the set of functions from "X" to "X", such that for all "g"1, "g"2 in "G" and all "x" in "X": where formula_11 is the identity element of "G". This condition and the axioms for a group imply that ρ("g") is a bijection (or permutation) for all "g" in "G". Thus we may equivalently define a permutation representation to be a group homomorphism from G to the symmetric group S"X" of "X". For more information on this topic see the article on group action. Every group "G" can be viewed as a category with a single object; morphisms in this category are just the elements of "G". Given an arbitrary category "C", a "representation" of "G" in "C" is a functor from "G" to "C". Such a functor selects an object "X" in "C" and a group homomorphism from "G" to Aut("X"), the automorphism group of "X". In the case where "C" is Vect"K", the category of vector spaces over a field "K", this definition is equivalent to a linear representation. Likewise, a set-theoretic representation is just a representation of "G" in the category of sets. When "C" is Ab, the category of abelian groups, the objects obtained are called "G"-modules. For another example consider the category of topological spaces, Top. Representations in Top are homomorphisms from "G" to the homeomorphism group of a topological space "X". Two types of representations closely related to linear representations are:
https://en.wikipedia.org/wiki?curid=12695
GRE Physics Test The GRE physics test is an examination administered by the Educational Testing Service (ETS). The test attempts to determine the extent of the examinees' understanding of fundamental principles of physics and their ability to apply them to problem solving. Many graduate schools require applicants to take the exam and base admission decisions in part on the results. The scope of the test is largely that of the first three years of a standard United States undergraduate physics curriculum, since many students who plan to continue to graduate school apply during the first half of the fourth year. It consists of 100 five-option multiple-choice questions covering subject areas including classical mechanics, electromagnetism, wave phenomena and optics, thermal physics, relativity, atomic and nuclear physics, quantum mechanics, laboratory techniques, and mathematical methods. The table below indicates the relative weights, as asserted by ETS, and detailed contents of the major topics.
https://en.wikipedia.org/wiki?curid=12696
Gardnerian Wicca Gardnerian Wicca, or Gardnerian witchcraft, is a tradition in the neopagan religion of Wicca, whose members can trace initiatory descent from Gerald Gardner. The tradition is itself named after Gardner (1884–1964), a British civil servant and amateur scholar of magic. The term "Gardnerian" was probably coined in the 1950s or 60s by Robert Cochrane, the founder of Cochranian Witchcraft, who himself left that tradition to found his own. Gardner claimed to have learned the beliefs and practices that would later become known as Gardnerian Wicca from the New Forest coven, who allegedly initiated him into their ranks in 1939. For this reason, Gardnerian Wicca is usually considered to be the earliest created tradition of Wicca, from which most subsequent Wiccan traditions are derived. From the supposed New Forest coven, Gardner formed his own Bricket Wood coven, and in turn initiated many Witches, including a series of High Priestesses, founding further covens and continuing the initiation of more Wiccans into the tradition. In the UK, Europe and most Commonwealth countries, someone self-defined as Wiccan is usually understood to be claiming initiatory descent from Gardner, either through Gardnerian Wicca, or through a derived branch such as Alexandrian Wicca or Algard Wicca. Elsewhere, these original lineaged traditions are termed "British Traditional Wicca". Gardnerian Wiccans organise into covens that traditionally, though not always, are limited to thirteen members. Covens are led by a High Priestess and the High Priest of her choice, and celebrate both a Goddess and a God. Gardnerian Wicca and other forms of British Traditional Wicca operate as an initiatory mystery cult; membership is gained only through initiation by a Wiccan High Priestess or High Priest. Any valid line of initiatory descent can be traced all the way back to Gerald Gardner, and through him back to the New Forest coven. Rituals and coven practices are kept secret from non-initiates, and many Wiccans maintain secrecy regarding their membership in the religion. Whether any individual Wiccan chooses secrecy or openness often depends on their location, career, and life circumstances. In all cases, Gardnerian Wicca absolutely forbids any member to share the name, personal information, fact of membership, and so on without the advance individual consent of that member for that specific instance of sharing. (In this regard, secrecy is specifically for reasons of safety, in parallel to the LGBT custom of being "in the closet", the heinousness of "outing" anyone, and the potentially dire consequences for an individual who is "outed". Wiccans often refer to being in or out of the "broom closet", to make the exactness of the parallel clear.) In Gardnerian Wicca, there are three grades of initiation. Ronald Hutton suggests that they appear to be based upon the three grades of Freemasonry. In Gardnerian Wicca, the two principal deities are the Horned God and the Mother Goddess. Gardnerians use specific names for the God and the Goddess in their rituals. Doreen Valiente, a Gardnerian High Priestess, revealed that there was more than one such name. The Gardnerian tradition teaches a core ethical guideline, often referred to as "The Rede" or "The Wiccan Rede". In the archaic language often retained in some Gardnerian lore, the Rede states, "An it harm none, do as thou wilt." Witches ... are inclined to the morality of the legendary Good King Pausol, "Do what you like so long as you harm no one". 
But they believe a certain law to be important, "You must not use magic for anything which will cause harm to anyone, and if, to prevent a greater wrong being done, you must discommode someone, you must do it only in a way which will abate the harm." Two features stand out about the Rede. The first is that the word rede means "advice" or "counsel". The Rede is not a commandment but a recommendation, a guideline. The second is that the advice to harm none stands at equal weight with the advice to do as one wills. Thus Gardnerian Wiccan teachings stand firm against coercion and for informed consent; they forbid proselytisation while requiring anyone seeking to become an initiate of Gardnerian Wicca to ask for teaching, study and initiation. To expound a little further, the qualifying phrase "an (if) it harm none" includes not only others, but oneself. Hence, weighing the possible outcomes of an action is a part of the thought given before taking an action; the metaphor of tossing a pebble into a pond and observing the ripples that spread in every direction is sometimes used. The declarative statement "do as thou wilt" expresses a clear statement of what is, philosophically, known as "free will." A second ethical guideline is often called the Law of Return, sometimes the Rule of Three, which mirrors the physics concept described in Sir Isaac Newton's Third Law of Motion: "When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction on the first body." This basic law of physics is more usually today stated thus: "For every action, there is an equal and opposite reaction." Like the Rede, this guideline teaches Gardnerians that whatever energy or intention one puts out into the world, whether magical or not, will return to that person "multiplied by three." This teaching underlies the importance of doing no harm, for that would give impetus to a negative reaction centered on oneself or one's group (such as a coven). This law is controversial, as discussed by John Coughlin, author of "The Pagan Resource Guide," in an essay, "The Three-Fold Law." In Gardnerian Wicca, these tradition-specific teachings demand thought before action, especially magical action (spell work). An individual or a coven uses these guidelines to consider beforehand what the possible ramifications of any working may be. Given these two core ethical principles, Gardnerian Wiccans hold themselves to a high ethical standard. For example, Gardnerian High Priestess Eleanor Bone was not only a respected elder in the tradition, but also a matron of a nursing home. Moreover, the Bricket Wood coven today is well known for its many members from academic or intellectual backgrounds, who contribute to the preservation of Wiccan knowledge. Gerald Gardner himself actively disseminated educational resources on folklore and the occult to the general public through his Museum of Witchcraft on the Isle of Man. Therefore, Gardnerian Wicca can be said to differ from some modern non-coven Craft practices that often concentrate on the solitary practitioner's spiritual development. The religion tends to be non-dogmatic, allowing each initiate to find for him/herself what the ritual experience means by using the basic language of the shared ritual tradition, to be discovered through the Mysteries. 
The tradition is often characterised as an orthopraxy (correct practice) rather than an orthodoxy (correct thinking), with adherents placing greater emphasis on a shared body of practices as opposed to faith. Algard Wicca is a tradition, or denomination, in the Neo-Pagan religion of Wicca. It was founded in the United States in 1972 by Mary Nesnick, an initiate of both Gardnerian and Alexandrian Wicca, in an attempt to fuse the two traditions. One of the spiritual seekers who approached Nesnick in the early 1970s was Eddie Buczynski, but she turned him down for initiation because he was homosexual. On retirement from the British Colonial Service, Gardner moved to London but then, before World War II, moved to Highcliffe, east of Bournemouth and near the New Forest on the south coast of England. After attending a performance staged by the Rosicrucian Order Crotona Fellowship, he reports meeting a group of people who had preserved their historic occult practices. They recognised him as being "one of them" and convinced him to be initiated. It was only halfway through the initiation, he says, that it dawned on him what kind of group it was, and that witchcraft was still being practised in England. The group into which Gardner was initiated, known as the New Forest coven, was small and utterly secret, as the Witchcraft Act of 1735 made it a crime to claim to predict the future, conjure spirits, or cast spells; it likewise made an accusation of witchcraft a criminal offence. Gardner's enthusiasm over the discovery that witchcraft survived in England led him to wish to document it, but both the witchcraft laws and the coven's secrecy forbade that. After World War II, Gardner's High Priestess and coven leader relented sufficiently to allow a fictional treatment that did not expose them to prosecution, "High Magic's Aid". Anyhow, I soon found myself in the circle and took the usual oaths of secrecy which bound me not to reveal any secrets of the cult. But, as it is a dying cult, I thought it was a pity that all the knowledge should be lost, so in the end I was permitted to write, as fiction, something of what a witch believes in the novel High Magic's Aid. After the witchcraft laws were repealed in 1951, and replaced by the Fraudulent Mediums Act, Gerald Gardner went public, publishing his first non-fiction book about Witchcraft, "Witchcraft Today", in 1954. Gardner continued, as the text often reiterates, to respect his oaths and the wishes of his High Priestess in his writing. Fearing, as Gardner stated in the quote above, that witchcraft was literally dying out, he pursued publicity and welcomed new initiates during the last years of his life. Gardner even courted the attentions of the tabloid press, to the consternation of some more conservative members of the tradition. In Gardner's own words, "Witchcraft doesn't pay for broken windows!" Gardner knew many famous occultists. Ross Nichols was a friend and fellow Druid (Chairman of the Ancient Order of Druids until 1964, when he left to found his own Order of Bards, Ovates, and Druids). Nichols edited Gardner's "Witchcraft Today" and is mentioned extensively in Gardner's "The Meaning of Witchcraft". Near the end of Aleister Crowley's life, Gardner met with him for the first time on 1 May 1947 and visited him twice more before Crowley's death that autumn; at some point, Crowley gave Gardner an Ordo Templi Orientis (OTO) charter and the 4th OTO degree, the lowest degree authorising use of the charter. 
Doreen Valiente, one of Gardner's priestesses, identified the woman who initiated Gardner as Dorothy Clutterbuck, referenced in "A Witches' Bible" by Janet and Stewart Farrar. Valiente's identification was based on references Gardner made to a woman he called "Old Dorothy" whom Valiente remembered. Biographer Philip Heselton corrects Valiente, clarifying that Clutterbuck (Dorothy St. Quintin-Fordham, née Clutterbuck), a Pagan-minded woman, owned the Mill House, where the New Forest coven performed Gardner's initiation ritual. Scholar Ronald Hutton argues in his "Triumph of the Moon" that Gardner's tradition was largely the inspiration of members of the Rosicrucian Order Crotona Fellowship and especially that of a woman known by the magical name of "Dafo". Dr. Leo Ruickbie, in his "Witchcraft Out of the Shadows", analysed the documented evidence and concluded that Aleister Crowley played a crucial role in inspiring Gardner to establish a new pagan religion. Ruickbie, Hutton, and others further argue that much of what has been published of Gardnerian Wicca, as Gardner's practice came to be known, was written by Blake, Yeats, Valiente and Crowley and contains borrowings from other identifiable sources. The witches Gardner was introduced to were originally referred to by him as "the Wica", and he would often use the term "Witch Cult" to describe the religion. Other terms used included "Witchcraft" or "the Old Religion." Later publications standardised the spelling to "Wicca" and it came to be used as the term for the Craft, rather than its followers. "Gardnerian" was originally a pejorative term used by Gardner's contemporary Roy Bowers (also known as Robert Cochrane), a British cunning man, who was nonetheless initiated into Gardnerian Wicca a couple of years following Gardner's death. Gardner stated that the rituals of the existing group were fragmentary at best, and he set about fleshing them out, drawing on his library and knowledge as an occultist and amateur folklorist. Gardner borrowed and wove together appropriate material from other artists and occultists, most notably Charles Godfrey Leland's "Aradia, or the Gospel of the Witches", the Key of Solomon as published by S.L. MacGregor Mathers, Masonic ritual, Crowley, and Rudyard Kipling. Doreen Valiente wrote much of the best-known poetry, including the much-quoted Charge of the Goddess. In 1948–49 Gardner and Dafo were running a coven separate from the original New Forest coven at a naturist club near Bricket Wood to the north of London. By 1952 Dafo's health had begun to decline, and she was increasingly wary of Gardner's publicity-seeking. In 1953 Gardner met Doreen Valiente, who was to become his High Priestess in succession to Dafo. The question of publicity led to Doreen and others formulating thirteen proposed 'Rules for the Craft', which included restrictions on contact with the press. Gardner responded with the sudden production of the Wiccan Laws, which led to some of his members, including Valiente, leaving the coven. Gardner reported that witches were taught that the power of the human body can be released, for use in a coven's circle, by various means, and released more easily without clothing. A simple method was dancing round the circle singing or chanting; another method was the traditional "binding and scourging." In addition to raising power, "binding and scourging" can heighten the initiates' sensitivity and spiritual experience. 
Following the time Gardner spent on the Isle of Man, the coven began to experiment with circle dancing as an alternative. It was also about this time that the lesser four of the eight Sabbats were given greater prominence. Bricket Wood coven members liked the Sabbat celebrations so much that they decided there was no reason to keep them confined to the closest full moon meeting, and made them festivities in their own right. As Gardner had no objection to this change suggested by the Bricket Wood coven, this collective decision resulted in what is now the standard eight festivities in the Wiccan Wheel of the Year. The split with Valiente led to the Bricket Wood coven being led by Jack Bracelin and a new High Priestess, Dayonis. This was the first of a number of disputes between individuals and groups, but the increased publicity only seems to have allowed Gardnerian Wicca to grow much more rapidly. Certain initiates, such as Alex Sanders and Raymond Buckland (who brought his take on the Gardnerian tradition to the United States in 1964), started their own major traditions, allowing further expansion.
https://en.wikipedia.org/wiki?curid=12697
Greenwich Mean Time Greenwich Mean Time (GMT) is the mean solar time at the Royal Observatory in Greenwich, London, reckoned from midnight. At different times in the past, it has been calculated in different ways, including being calculated from noon; as a consequence, it cannot be used to specify a precise time unless a context is given. English speakers often use GMT as a synonym for Coordinated Universal Time (UTC). For navigation, it is considered equivalent to UT1 (the modern form of mean solar time at 0° longitude); but this meaning can differ from UTC by up to 0.9 s. The term GMT should thus not be used for certain technical purposes requiring precision. Because of Earth's uneven angular velocity in its elliptical orbit and its axial tilt, noon (12:00:00) GMT is rarely the exact moment the Sun crosses the Greenwich meridian and reaches its highest point in the sky there. This event may occur up to 16 minutes before or after noon GMT, a discrepancy calculated by the equation of time. Noon GMT is the annual average (i.e. "mean") moment of this event, which accounts for the word "mean" in "Greenwich Mean Time". Originally, astronomers considered a GMT day to start at noon, while for almost everyone else it started at midnight. To avoid confusion, the name Universal Time was introduced to denote GMT as counted from midnight. Astronomers preferred the old convention to simplify their observational data, so that each night was logged under a single calendar date. Today, Universal Time usually refers to UTC or UT1. The term "GMT" is especially used by bodies connected with the United Kingdom, such as the BBC World Service, the Royal Navy, and the Met Office; and others particularly in Arab countries, such as the Middle East Broadcasting Centre and OSN. It is a term commonly used in the United Kingdom and countries of the Commonwealth, including Australia, New Zealand, South Africa, India, Pakistan, Bangladesh and Malaysia; and in many other countries of the Eastern Hemisphere. As the United Kingdom developed into an advanced maritime nation, British mariners kept at least one chronometer on GMT to calculate their longitude from the Greenwich meridian, which was considered to have longitude zero degrees, by a convention adopted in the International Meridian Conference of 1884. Synchronisation of the chronometer on GMT did not affect shipboard time, which was still solar time. But this practice, combined with mariners from other nations drawing from Nevil Maskelyne's method of lunar distances based on observations at Greenwich, led to GMT being used worldwide as a standard time independent of location. Most time zones were based upon GMT, as an offset of a number of hours (and possibly half or quarter hours) "ahead of GMT" or "behind GMT". Greenwich Mean Time was adopted across the island of Great Britain by the Railway Clearing House in 1847 and by almost all railway companies by the following year, from which the term "railway time" is derived. It was gradually adopted for other purposes, but a legal case in 1858 held "local mean time" to be the official time. On 14 May 1880, a letter signed by "Clerk to Justices" appeared in "The Times", stating that "Greenwich time is now kept almost throughout England, but it appears that Greenwich time is not legal time. For example, our polling booths were opened, say, at 8 13 and closed at 4 13 p.m." This was changed later in 1880, when Greenwich Mean Time was legally adopted throughout the island of Great Britain. 
GMT was adopted in the Isle of Man in 1883, in Jersey in 1898 and in Guernsey in 1913. Ireland adopted GMT in 1916, supplanting Dublin Mean Time. Hourly time signals from Greenwich Observatory were first broadcast on 5 February 1924, rendering the time ball at the observatory redundant. The daily rotation of the Earth is irregular (see ΔT) and has a slowing trend; therefore atomic clocks constitute a much more stable timebase. On 1 January 1972, GMT was superseded as the international civil time standard by Coordinated Universal Time, maintained by an ensemble of atomic clocks around the world. Universal Time (UT), a term introduced in 1928, initially represented mean time at Greenwich determined in the traditional way to accord with the originally defined universal day; from 1 January 1956 (as decided by the International Astronomical Union in Dublin in 1955, at the initiative of William Markowitz) this "raw" form of UT was re-labelled UT0 and effectively superseded by refined forms UT1 (UT0 equalised for the effects of polar wandering) and UT2 (UT1 further equalised for annual seasonal variations in Earth rotation rate). Historically, GMT has been used with two different conventions for numbering hours. The long-standing astronomical convention, dating from the work of Ptolemy, was to refer to noon as zero hours (see Julian day). This contrasted with the civil convention of referring to midnight as zero hours dating from the Roman Empire. The latter convention was adopted on and after 1 January 1925 for astronomical purposes, resulting in a discontinuity of 12 hours, or half a day. The instant that was designated "December 31.5 GMT" in 1924 almanacs became "January 1.0 GMT" in 1925 almanacs. The term Greenwich Mean Astronomical Time (GMAT) was introduced to unambiguously refer to the previous noon-based astronomical convention for GMT. The more specific terms UT and UTC do not share this ambiguity, always referring to midnight as zero hours. Legally, the civil time used in the UK is called "Greenwich mean time" (without capitalisation), according to the Interpretation Act 1978, with an exception made for those periods when the Summer Time Act 1972 orders an hour's shift for daylight saving. The Interpretation Act 1978, section 9, provides that whenever an expression of time occurs in an Act, the time referred to shall (unless otherwise specifically stated) be held to be Greenwich mean time. Under subsection 23(3), the same rule applies to deeds and other instruments. During the experiment of 1968 to 1971, when the British Isles did not revert to Greenwich Mean Time during the winter, the all-year British Summer Time was called British Standard Time (BST). In the UK, GMT is disseminated to the general public in winter and BST in summer. BBC radio stations broadcast the "six pips" of the Greenwich Time Signal. The signal is named from its original generation at the Royal Greenwich Observatory, is aligned to Coordinated Universal Time, and is called either Greenwich Mean Time or British Summer Time as appropriate for the time of year. Several countries define their local time by reference to Greenwich Mean Time. Some examples are: Greenwich Mean Time is used as standard time in the following countries and areas, which also advance their clocks one hour (GMT+1) in summer. 
Greenwich Mean Time is used as standard time all year round in the following countries and areas: Since legal, political, social and economic criteria, in addition to physical or geographical criteria, are used in the drawing of time zones, actual time zones do not precisely adhere to meridian lines. The "GMT" time zone, were it determined purely by longitude, would consist of the area between meridians 7°30'W and 7°30'E. However, in much of Western and Central Europe, despite lying between those two meridians, UTC+1 is used; similarly, there are European areas that use UTC, even though their physical time zone is UTC−1 (e.g. most of Portugal) or UTC−2 (the westernmost part of Iceland). Because the UTC time zone in Europe is shifted to the west, Lowestoft in the United Kingdom, at only 1°45'E, is the easternmost settlement in Europe in which UTC is applied. Following is a list of the incongruencies:
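As a rough sketch of the purely longitude-based rule described above (each nominal zone is a 15° band centred on a multiple of 15°), the following hypothetical Python helper computes the offset implied by longitude alone; the example longitudes are approximate and not taken from the text.

def nominal_utc_offset(longitude_deg):
    """Nominal whole-hour UTC offset implied by longitude alone."""
    # Each nominal zone is 15 degrees wide, centred on a multiple of 15 degrees,
    # so the "GMT" band runs from 7.5 degrees W to 7.5 degrees E.
    return round(longitude_deg / 15.0)

print(nominal_utc_offset(0.0))     # Greenwich  ->  0
print(nominal_utc_offset(1.75))    # Lowestoft  ->  0 (easternmost settlement on UTC)
print(nominal_utc_offset(-9.1))    # Lisbon     -> -1, yet most of Portugal uses UTC
print(nominal_utc_offset(13.4))    # Berlin     ->  1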
https://en.wikipedia.org/wiki?curid=12701
GIF The Graphics Interchange Format (GIF or ) is a bitmap image format that was developed by a team at the online services provider CompuServe led by American computer scientist Steve Wilhite on June 15, 1987. It has since come into widespread usage on the World Wide Web due to its wide support and portability between applications and operating systems. The format supports up to 8 bits per pixel for each image, allowing a single image to reference its own palette of up to 256 different colors chosen from the 24-bit RGB color space. It also supports animations and allows a separate palette of up to 256 colors for each frame. These palette limitations make GIF less suitable for reproducing color photographs and other images with color gradients, but well-suited for simpler images such as graphics or logos with solid areas of color. Unlike video, the GIF file format does not support audio. GIF images are compressed using the Lempel–Ziv–Welch (LZW) lossless data compression technique to reduce the file size without degrading the visual quality. This compression technique was patented in 1985. Controversy over the licensing agreement between the software patent holder, Unisys, and CompuServe in 1994 spurred the development of the Portable Network Graphics (PNG) standard. By 2004 all the relevant patents had expired. CompuServe introduced GIF on June 15, 1987 to provide a color image format for their file downloading areas, replacing their earlier run-length encoding (RLE) format, which was black and white only. GIF became popular because it used LZW data compression, which was more efficient than the run-length encoding used by formats such as PCX and MacPaint, so fairly large images could be downloaded in a reasonably short time, even with very slow modems. The original version of GIF was called 87a. In 1989, CompuServe released an enhanced version, called 89a, which added support for animation delays (multiple images in a stream were already supported in 87a), transparent background colors, and storage of application-specific metadata. The 89a specification also supports incorporating text labels as text (not embedding them in the graphical data), but as there is little control over display fonts, this feature is not widely used. The two versions can be distinguished by looking at the first six bytes of the file (the "magic number" or signature), which, when interpreted as ASCII, read "GIF87a" and "GIF89a", respectively. CompuServe encouraged the adoption of GIF by providing downloadable conversion utilities for many computers. By December 1987, for example, an Apple IIGS user could view pictures created on an Atari ST or Commodore 64. GIF was one of the first two image formats commonly used on Web sites, the other being the black-and-white XBM. In September 1995 Netscape Navigator 2.0 added the ability for animated GIFs to loop. The feature of storing multiple images in one file, accompanied by control data, is used extensively on the Web to produce simple animations. The optional interlacing feature, which stores image scan lines out of order in such a fashion that even a partially downloaded image is somewhat recognizable, also helped GIF's popularity, as a user could abort the download if it was not what was required. In May 2015 Facebook added support for GIF. In January 2018 Instagram also added GIF stickers to the story mode. As a noun, the word "GIF" is found in the newer editions of many dictionaries. 
In 2012, the American wing of the Oxford University Press recognized "GIF" as a verb as well, meaning "to create a GIF file", as in "GIFing was the perfect medium for sharing scenes from the Summer Olympics". The press's lexicographers voted it their word of the year, saying that GIFs have evolved into "a tool with serious applications including research and journalism". The creators of the format pronounced the word as "jif" with a soft "G" as in "gym". Steve Wilhite says that the intended pronunciation deliberately echoes the American peanut butter brand Jif, and CompuServe employees would often say "Choosy developers choose GIF", spoofing this brand's television commercials. The word is now also widely pronounced with a hard "G" as in "gift". In 2017, an informal poll on programming website Stack Overflow showed some numerical preference for the hard-"G" pronunciation, especially among respondents in eastern Europe, though both soft-"G" and enunciating each letter individually were found to be popular in Asia and emerging countries. The "American Heritage Dictionary" cites both, indicating "jif" as the primary pronunciation, while "Cambridge Dictionary of American English" offers only the hard-"G" pronunciation. "Merriam-Webster's Collegiate Dictionary" and the "OED" cite both pronunciations, but place "gif" in the default position ("\ˈgif, ˈjif\"). The "New Oxford American Dictionary" gave only "jif" in its 2nd edition but updated it to "jif, gif" in its 3rd edition. The disagreement over the pronunciation led to heated Internet debate. On the occasion of receiving a lifetime achievement award at the 2013 Webby Award ceremony, Wilhite rejected the hard-"G" pronunciation, and his speech led to 17,000 posts on Twitter and 50 news articles. The White House and TV program "Jeopardy!" also entered the debate during 2013. In February 2020, The J.M. Smucker Company, the owners of the Jif peanut butter brand, partnered with animated image database and search engine Giphy to release a limited-edition "Jif vs. GIF" (hashtagged as #JIFvsGIF) jar of Jif peanut butter that has a label humorously declaring the soft-"G" pronunciation to refer exclusively to the peanut butter, and GIF to be pronounced exclusively with the hard "G". Conceptually, a GIF file describes a fixed-sized graphical area (the "logical screen") populated with zero or more "images". Many GIF files have a single image that fills the entire logical screen. Others divide the logical screen into separate sub-images. The images may also function as animation frames in an animated GIF file, but again these need not fill the entire logical screen. GIF files start with a fixed-length header ("GIF87a" or "GIF89a") giving the version, followed by a fixed-length Logical Screen Descriptor giving the pixel dimensions and other characteristics of the logical screen. The screen descriptor may also specify the presence and size of a Global Color Table, which follows next if present. The following hex dump of a minimal single-pixel GIF file illustrates this layout:

00000000  47 49 46 38 39 61 01 00  01 00 80 00 00 00 00 00  |GIF89a..........|
00000010  ff ff ff 21 f9 04 01 00  00 00 00 2c 00 00 00 00  |...!.......,....|
00000020  01 00 01 00 00 02 01 44  00 3b                    |.......D.;|
0000002a

Thereafter, the file is divided into segments, each introduced by a 1-byte sentinel: An image starts with a fixed-length Image Descriptor, which may specify the presence and size of a Local Color Table (which follows next if present). 
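The layout just described can be parsed directly from the bytes of the hex dump above. The following is a minimal Python sketch using only the standard library; the field layout (signature, width, height, packed flags, background index, aspect ratio) follows the published GIF specification.

import io
import struct

# The minimal single-pixel GIF from the hex dump above, as raw bytes.
data = bytes.fromhex(
    "47494638396101000100800000000000"
    "ffffff21f90401000000002c00000000"
    "0100010000020144003b"
)

stream = io.BytesIO(data)
signature = stream.read(6)                       # b"GIF87a" or b"GIF89a"
if signature not in (b"GIF87a", b"GIF89a"):
    raise ValueError("not a GIF file")

# Logical Screen Descriptor: width, height (little-endian), packed flags,
# background colour index, pixel aspect ratio.
width, height, packed, bg_index, aspect = struct.unpack("<HHBBB", stream.read(7))
has_gct = bool(packed & 0x80)                    # Global Color Table flag (bit 7)
gct_size = 2 ** ((packed & 0x07) + 1)            # number of entries, 2 to 256
gct = stream.read(3 * gct_size) if has_gct else b""   # here: black and white

print(signature.decode("ascii"), width, "x", height, "palette entries:", gct_size)
# prints: GIF89a 1 x 1 palette entries: 2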
The image data follows: one byte giving the bit width of the unencoded symbols (which must be at least 2 bits wide, even for bi-color images), followed by a linked list of sub-blocks containing the LZW-encoded data. Extension blocks (blocks that "extend" the 87a definition via a mechanism already defined in the 87a spec) consist of the sentinel, an additional byte specifying the type of extension, and a linked list of sub-blocks with the extension data. Extension blocks that modify an image (like the Graphic Control Extension that specifies the optional animation delay time and optional transparent background color) must immediately precede the segment with the image they refer to. The linked lists used by the image data and the extension blocks consist of series of sub-blocks, each sub-block beginning with a byte giving the number of subsequent data bytes in the sub-block (1 to 255). The series of sub-blocks is terminated by an empty sub-block (a 0 byte). This structure allows the file to be parsed even if not all parts are understood. A GIF marked 87a may contain extension blocks; the intent is that a decoder can read and display the file without the features covered in extensions it does not understand. The full detail of the file format is covered in the GIF specification. GIF is palette-based: the colors used in an image (a frame) in the file have their RGB values defined in a palette table that can hold up to 256 entries, and the data for the image refer to the colors by their indices (0–255) in the palette table. The color definitions in the palette can be drawn from a color space of millions of shades (2^24 shades, 8 bits for each primary), but the maximum number of colors a frame can use is 256. This limitation seemed reasonable when GIF was developed because few people could afford the hardware to display more colors simultaneously. Simple graphics, line drawings, cartoons, and grey-scale photographs typically need fewer than 256 colors. Each frame can designate one index as a "transparent background color": any pixel assigned this index takes on the color of the pixel in the same position from the background, which may have been determined by a previous frame of animation. Many techniques, collectively called dithering, have been developed to approximate a wider range of colors with a small color palette by using pixels of two or more colors to approximate in-between colors. These techniques sacrifice spatial resolution to approximate deeper color resolution. While not part of the GIF specification, dithering can be used in images subsequently encoded as GIF images. This is often not an ideal solution for GIF images, both because the loss of spatial resolution typically makes an image look fuzzy on the screen, and because the dithering patterns often interfere with the compressibility of the image data, working against GIF's main purpose. In the early days of graphical web browsers, graphics cards with 8-bit buffers (allowing only 256 colors) were common and it was fairly common to make GIF images using the websafe palette. This ensured predictable display, but severely limited the choice of colors. When 24-bit color became the norm, palettes could instead be populated with the optimum colors for individual images. A small color table may suffice for small images, and keeping the color table small allows the file to be downloaded faster. Both the 87a and 89a specifications allow color tables of 2^"n" colors for any "n" from 1 through 8. 
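The sub-block chains described above (a length byte of 1 to 255 before each sub-block, with a zero byte terminating the series) can be walked with a few lines of code. A minimal Python sketch, assuming f is a binary file object already positioned at the first length byte:

def read_sub_blocks(f):
    """Concatenate the data bytes of a series of GIF sub-blocks."""
    data = bytearray()
    while True:
        length = f.read(1)[0]      # length prefix of the next sub-block
        if length == 0:            # block terminator: an empty sub-block
            return bytes(data)
        data += f.read(length)     # the sub-block's payload (1 to 255 bytes)

Because every sub-block is length-prefixed, a decoder can skip an extension it does not understand simply by reading sub-blocks until the terminator, which is what allows a file to be parsed even when not all parts are understood.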
Most graphics applications will read and display GIF images with any of these table sizes; but some do not support all sizes when "creating" images. Tables of 2, 16, and 256 colors are widely supported. Although GIF is almost never used for true color images, it is possible to do so. A GIF image can include multiple image blocks, each of which can have its own 256-color palette, and the blocks can be tiled to create a complete image. Alternatively, the GIF89a specification introduced the idea of a "transparent" color where each image block can include its own palette of 255 visible colors plus one transparent color. A complete image can be created by layering image blocks with the visible portion of each layer showing through the transparent portions of the layers above. To render a full-color image as a GIF, the original image must be broken down into smaller regions having no more than 255 or 256 different colors. Each of these regions is then stored as a separate image block with its own local palette and when the image blocks are displayed together (either by tiling or by layering partially transparent image blocks) the complete, full-color image appears. For example, breaking an image into tiles of 16 by 16 pixels (256 pixels in total) ensures that no tile has more than the local palette limit of 256 colors, although larger tiles may be used and similar colors merged resulting in some loss of color information. Since each image block can have its own local color table, a GIF file having many image blocks can be very large, limiting the usefulness of full-color GIFs. Additionally, not all GIF rendering programs handle tiled or layered images correctly. Many rendering programs interpret tiles or layers as animation frames and display them in sequence as an endless animation with most web browsers automatically displaying the frames with a delay time of 0.1 seconds or more. Microsoft Paint saves a small black-and-white image as the following GIF file. Paint does not make optimal use of GIF; due to the unnecessarily large color table (storing a full 256 colors instead of the used 2) and symbol width, this GIF file is not an efficient representation of the 15-pixel image (illustrated enlarged above). Although the Graphics Control Extension block declares color index 16 (hexadecimal 10) to be transparent, that index is not used in the image. The only color indexes appearing in the image data are decimal 40 and 255, which the Global Color Table maps to black and white, respectively. Note that the hex numbers in the following tables are in little-endian byte order, as the format specification prescribes. The image pixel data, scanned horizontally from top left, are converted by LZW encoding to codes that are then mapped into bytes for storing in the file. The pixel codes typically don't match the 8-bit size of the bytes, so the codes are packed into bytes by a "little-Endian" scheme: the least significant bit of the first code is stored in the least significant bit of the first byte, higher order bits of the code into higher order bits of the byte, spilling over into the low order bits of the next byte as necessary. Each subsequent code is stored starting at the least significant bit not already used. This byte stream is stored in the file as a series of "sub-blocks". Each sub-block has a maximum length 255 bytes and is prefixed with a byte indicating the number of data bytes in the sub-block. 
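A rough sketch of the LSB-first packing scheme described above follows. It assumes a fixed code width for simplicity (a real GIF encoder also widens the codes as the LZW table grows), and the function name is illustrative rather than anything defined by the format.

# Sketch: pack fixed-width codes into bytes, least significant bit first,
# as described above (real GIF encoders also grow the code width over time).
def pack_codes(codes, code_width):
    out = bytearray()
    acc = 0        # bit accumulator
    nbits = 0      # number of valid bits currently in acc
    for code in codes:
        acc |= code << nbits       # place the code at the next unused bit
        nbits += code_width
        while nbits >= 8:
            out.append(acc & 0xFF)
            acc >>= 8
            nbits -= 8
    if nbits:
        out.append(acc & 0xFF)     # final partial byte, zero-padded
    return bytes(out)

# Three 3-bit codes 0b100, 0b001, 0b110 pack into the two bytes 0x8C 0x01:
# pack_codes([0b100, 0b001, 0b110], 3) == b"\x8c\x01"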
The series of sub-blocks is terminated by an empty sub-block (a single 0 byte, indicating a sub-block with 0 data bytes). For the sample image above the reversible mapping between 9-bit codes and bytes is shown below. A slight compression is evident: pixel colors defined initially by 15 bytes are exactly represented by 12 code bytes including control codes. The encoding process that produces the 9-bit codes is shown below. A local string accumulates pixel color numbers from the palette, with no output action as long as the local string can be found in a code table. There is special treatment of the first two pixels that arrive before the table grows from its initial size by additions of strings. After each output code, the local string is initialized to the latest pixel color (that could not be included in the output code).
For clarity the table is shown above as being built of strings of increasing length. That scheme can function but the table consumes an unpredictable amount of memory. Memory can be saved in practice by noting that each new string to be stored consists of a previously stored string augmented by one character. It is economical to store at each address only two words: an existing address and one character.
The LZW algorithm requires a search of the table for each pixel. A linear search through up to 4096 addresses would make the coding slow. In practice the codes can be stored in order of numerical value; this allows each search to be done by a SAR (Successive Approximation Register, as used in some ADCs), with only 12 magnitude comparisons. For this efficiency an extra table is needed to convert between codes and actual memory addresses; the extra table upkeep is needed only when a new code is stored, which happens at much less than the pixel rate.
Decoding begins by mapping the stored bytes back to 9-bit codes. These are decoded to recover the pixel colors as shown below. A table identical to the one used in the encoder is built by adding strings by this rule: if the incoming code is found in the table, the string for the previous code extended by the first character of the incoming code's string is added; if the incoming code is not yet in the table (which can happen when a string immediately repeats), the string for the previous code extended by its own first character is added, and that same string is the output for the incoming code.
Shorter code lengths can be used for palettes smaller than the 256 colors in the example. If the palette is only 64 colors (so color indexes are 6 bits wide), the symbols can range from 0 to 63, and the symbol width can be taken to be 6 bits, with codes starting at 7 bits. In fact, the symbol width need not match the palette size: as long as the values decoded are always less than the number of colors in the palette, the symbols can be any width from 2 to 8, and the palette size any power of 2 from 2 to 256. For example, if only the first four colors (values 0 to 3) of the palette are used, the symbols can be taken to be 2 bits wide with codes starting at 3 bits. Conversely, the symbol width could be set at 8, even if only values 0 and 1 are used; these data would only require a two-color table. Although there would be no point in encoding the file that way, something similar typically happens for bi-color images: the minimum symbol width is 2, even if only values 0 and 1 are used.
The code table initially contains codes that are one bit longer than the symbol size in order to accommodate the two special codes "clr" and "end" and codes for strings that are added during the process. When the table is full the code length increases to give space for more strings, up to a maximum code 4095 = FFF(hex). As the decoder builds its table it tracks these increases in code length and it is able to unpack incoming bytes accordingly.
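The table-building behaviour described above can be condensed into a short sketch of a GIF-flavoured LZW encoder. This is a simplification under stated assumptions: it keeps the growing strings as Python tuples, it does not track the changing code width or the 4095-entry limit, and it emits only the sequence of code values (which would still need to be bit-packed and split into sub-blocks).

# Simplified LZW encoding sketch in the spirit of the description above.
# Returns the list of code values: CLEAR, the string codes, then STOP.
def lzw_encode(pixels, symbol_width):
    clear = 1 << symbol_width              # CLEAR code
    stop = clear + 1                       # STOP code
    next_code = stop + 1                   # first free slot for new strings
    table = {(i,): i for i in range(1 << symbol_width)}   # root strings
    codes = [clear]
    local = ()                             # the "local string" of the description
    for p in pixels:
        if local + (p,) in table:
            local += (p,)                  # keep accumulating, no output yet
        else:
            codes.append(table[local])     # emit the code for the known prefix
            table[local + (p,)] = next_code
            next_code += 1
            local = (p,)                   # restart from the pixel not yet coded
    if local:
        codes.append(table[local])
    codes.append(stop)
    return codes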
The GIF encoding process can be modified to create a file without LZW compression that is still viewable as a GIF image. This technique was introduced originally as a way to avoid patent infringement. Uncompressed GIF can also be a useful intermediate format for a graphics programmer because individual pixels are accessible for reading or painting. An uncompressed GIF file can be converted to an ordinary GIF file simply by passing it through an image editor. The modified encoding method ignores building the LZW table and emits only the root palette codes and the codes for CLEAR and STOP. This yields a simpler encoding (a 1-to-1 correspondence between code values and palette codes) but sacrifices all of the compression: each pixel in the image generates an output code indicating its color index. When processing an uncompressed GIF, a standard GIF decoder will not be prevented from writing strings to its dictionary table, but the code width must never increase since that triggers a different packing of bits to bytes.
If the symbol width is n, the codes of width n + 1 fall naturally into two blocks: the lower block of 2^n codes for coding single symbols, and the upper block of 2^n codes that will be used by the decoder for sequences of length greater than one. Of that upper block, the first two codes are already taken: 2^n for CLEAR and 2^n + 1 for STOP. The decoder must also be prevented from using the last code in the upper block, 2^(n+1) - 1, because when the decoder fills that slot, it will increase the code width. Thus in the upper block there are 2^n - 3 codes available to the decoder that won't trigger an increase in code width. Because the decoder is always one step behind in maintaining the table, it does not generate a table entry upon receiving the first code from the encoder, but will generate one for each succeeding code. Thus the encoder can generate 2^n - 2 codes without triggering an increase in code width. Therefore the encoder must emit extra CLEAR codes at intervals of 2^n - 2 codes or less to make the decoder reset the coding dictionary. The GIF standard allows such extra CLEAR codes to be inserted in the image data at any time. The composite data stream is partitioned into sub-blocks that each carry from 1 to 255 bytes.
For the sample 3×5 image above, the following 9-bit codes represent "clear" (hex 100) followed by image pixels in scan order and "stop" (hex 101). After the above codes are mapped to bytes, the uncompressed file differs from the compressed file thus:
320: 14    (20 bytes of uncompressed image data follow)
321: 00 51 FC FB F7 0F C5 BF 7F FF FE FD FB F7 EF DF BF 7F 01 01
335: 00    (end of image data)
The trivial example of a large image of solid color demonstrates the variable-length LZW compression used in GIF files. The code values shown are packed into bytes which are then packed into blocks of up to 255 bytes. A block of image data begins with a byte that declares the number of bytes to follow. The last block of data for an image is marked by a zero block-length byte.
The GIF Specification allows each image within the logical screen of a GIF file to specify that it is interlaced; i.e., that the order of the raster lines in its data block is not sequential. This allows a partial display of the image that can be recognized before the full image is painted. An interlaced image is divided from top to bottom into strips 8 pixels high, and the rows of the image are presented in the following order: first the rows whose number is a multiple of 8, starting with row 0; then the rows starting with row 4 and stepping by 8; then the rows starting with row 2 and stepping by 4; and finally the remaining odd-numbered rows, starting with row 1 and stepping by 2. The pixels within each line are not interlaced, but presented consecutively from left to right.
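For illustration, the four-pass ordering just described can be generated with a few lines of Python; the pass offsets and steps below simply restate that scheme, and the function name is only for this sketch.

# Sketch: yield the row numbers of an interlaced GIF in the order
# in which they are stored, for an image of `height` rows.
def interlaced_row_order(height):
    for start, step in ((0, 8), (4, 8), (2, 4), (1, 2)):   # the four passes
        for row in range(start, height, step):
            yield row

# For a 10-row image the storage order is:
# list(interlaced_row_order(10)) == [0, 8, 4, 2, 6, 1, 3, 5, 7, 9]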
As with non-interlaced images, there is no break between the data for one line and the data for the next. The indicator that an image is interlaced is a bit set in the image's header block.
Although GIF was not designed as an animation medium, its ability to store multiple images in one file naturally suggested using the format to store the frames of an animation sequence. To facilitate displaying animations, the GIF89a spec added the Graphics Control Extension (GCE), which allows the images (frames) in the file to be painted with time delays, forming a video clip. Each frame in an animated GIF is introduced by its own GCE specifying the time delay to wait after the frame is drawn. Global information at the start of the file applies by default to all frames. The data is stream-oriented, so the file offset of the start of each GCE depends on the length of preceding data. Within each frame the LZW-coded image data is arranged in sub-blocks of up to 255 bytes; the size of each sub-block is declared by the byte that precedes it.
By default, an animation displays the sequence of frames only once, stopping when the last frame is displayed. To enable an animation to loop, Netscape in the 1990s used the Application Extension block (intended to allow vendors to add application-specific information to the GIF file) to implement the Netscape Application Block (NAB). This block, placed immediately before the sequence of animation frames, specifies the number of times the sequence of frames should be played (1 to 65535 times) or that it should repeat continuously (zero indicates loop forever). Support for these repeating animations first appeared in Netscape Navigator version 2.0, and then spread to other browsers. Most browsers now recognize and support NAB, though it is not strictly part of the GIF89a specification.
The following example shows the structure of the animation file "" shown (as a thumbnail) at the top of the article. The animation delay for each frame is specified in the GCE in hundredths of a second. Some economy of data is possible where a frame need only rewrite a portion of the pixels of the display, because the Image Descriptor can define a smaller rectangle to be rescanned instead of the whole image. Browsers or other displays that do not support animated GIFs typically show only the first frame. The size and color quality of animated GIF files can vary significantly depending on the application used to create them. Strategies for minimizing file size include using a common global color table for all frames (rather than a complete local color table for each frame) and minimizing the number of pixels covered in successive frames (so that only the pixels that change from one frame to the next are included in the latter frame). Simply packing a series of independent frame images into a composite animation tends to yield large file sizes. Internet Explorer slows down GIFs if the frame rate is 20 frames per second or higher, and Microsoft reports that Google Chrome and Safari also slow down some GIF animations. Starting in early 1995, the University of Ulm used animated GIF as a live video streaming format to show a controllable model railroad.
Metadata can be stored in GIF files as a comment block, a plain text block, or an application-specific application extension block. Several graphics editors use unofficial application extension blocks to include the data used to generate the image, so that it can be recovered for further editing.
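The Netscape Application Block described above has a simple byte layout, sketched below for reference; this is a best-effort reconstruction rather than normative text, and a loop count of zero means repeat forever.

# Sketch: build the Netscape Application Block (NAB) used to loop animations.
import struct

def netscape_loop_block(loop_count=0):
    return (
        b"\x21\xff"            # extension introducer + application extension label
        + b"\x0bNETSCAPE2.0"   # 11-byte application identifier and auth code
        + b"\x03\x01"          # 3-byte data sub-block, sub-block ID 1
        + struct.pack("<H", loop_count)   # repeat count, little-endian (0 = forever)
        + b"\x00"              # block terminator
    )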
All of these methods technically require the metadata to be broken into sub-blocks so that applications can navigate the metadata block without knowing its internal structure. The Extensible Metadata Platform (XMP) metadata standard introduced an unofficial but now widespread "XMP Data" application extension block for including XMP data in GIF files. Since the XMP data is encoded using UTF-8 without NUL characters, there are no 0 bytes in the data. Rather than break the data into formal sub-blocks, the extension block terminates with a "magic trailer" that routes any application treating the data as sub-blocks to a final 0 byte that terminates the sub-block chain. In 1977 and 1978, Jacob Ziv and Abraham Lempel published a pair of papers on a new class of lossless data-compression algorithms, now collectively referred to as LZ77 and LZ78. In 1983, Terry Welch developed a fast variant of LZ78 which was named Lempel–Ziv–Welch (LZW). Welch filed a patent application for the LZW method in June 1983. The resulting patent, , granted in December 1985, was assigned to Sperry Corporation who subsequently merged with Burroughs Corporation in 1986 and formed Unisys. Further patents were obtained in the United Kingdom, France, Germany, Italy, Japan and Canada. In addition to the above patents, Welch's 1983 patent also includes citations to several other patents that influenced it, including two 1980 Japanese patents (JP9343880A and JP17790880A) from NEC's Jun Kanatsu, (1974) from John S. Hoerning, (1977) from Klaus E. Holtz, and a 1981 Dutch patent (DE19813118676) from Karl Eckhart Heinz. In June 1984, an article by Welch was published in the IEEE magazine which publicly described the LZW technique for the first time. LZW became a popular data compression technique and, when the patent was granted, Unisys entered into licensing agreements with over a hundred companies. The popularity of LZW led CompuServe to choose it as the compression technique for their version of GIF, developed in 1987. At the time, CompuServe was not aware of the patent. Unisys became aware that the version of GIF used the LZW compression technique and entered into licensing negotiations with CompuServe in January 1993. The subsequent agreement was announced on 24 December 1994. Unisys stated that they expected all major commercial on-line information services companies employing the LZW patent to license the technology from Unisys at a reasonable rate, but that they would not require licensing, or fees to be paid, for non-commercial, non-profit GIF-based applications, including those for use on the on-line services. Following this announcement, there was widespread condemnation of CompuServe and Unisys, and many software developers threatened to stop using GIF. The PNG format (see below) was developed in 1995 as an intended replacement. However, obtaining support from the makers of Web browsers and other software for the PNG format proved difficult and it was not possible to replace GIF, although PNG has gradually increased in popularity. Therefore, GIF variations without LZW compression were developed. For instance the libungif library, based on Eric S. Raymond's giflib, allows creation of GIFs that followed the data format but avoided the compression features, thus avoiding use of the Unisys LZW patent. A 2001 "Dr. Dobb's" article described another alternative to LZW compression, based on square roots. 
In August 1999, Unisys changed the details of their licensing practice, announcing the option for owners of certain non-commercial and private websites to obtain licenses on payment of a one-time license fee of $5000 or $7500. Such licenses were not required for website owners or other GIF users who had used licensed software to generate GIFs. Nevertheless, Unisys was subjected to thousands of online attacks and abusive emails from users believing that they were going to be charged $5000 or sued for using GIFs on their websites. Despite giving free licenses to hundreds of non-profit organizations, schools and governments, Unisys was completely unable to generate any good publicity and continued to be condemned by individuals and organizations such as the League for Programming Freedom, which started the "Burn All GIFs" campaign in 1999. The United States LZW patent expired on 20 June 2003. The counterpart patents in the United Kingdom, France, Germany and Italy expired on 18 June 2004, the Japanese patents expired on 20 June 2004, and the Canadian patent expired on 7 July 2004. Consequently, while Unisys has further patents and patent applications relating to improvements to the LZW technique, GIF may now be used freely.
Portable Network Graphics (PNG) was designed as a replacement for GIF in order to avoid infringement of Unisys' patent on the LZW compression technique. PNG offers better compression and more features than GIF, animation being the only significant exception. PNG is more suitable than GIF in instances where true-color imaging and alpha transparency are required. Although support for the PNG format came slowly, new web browsers generally support PNG. Older versions of Internet Explorer do not support all features of PNG. Versions 6 and earlier do not support alpha channel transparency without using Microsoft-specific HTML extensions. Gamma correction of PNG images was not supported before version 8, and the display of these images in earlier versions may have the wrong tint. For identical 8-bit (or lower) image data, PNG files are typically smaller than the equivalent GIFs, due to the more efficient compression techniques used in PNG encoding. Complete support for GIF is complicated chiefly by the complex canvas structure it allows, though this is what enables the compact animation features.
Videos resolve many issues that GIFs present through common usage on the web. Their advantages include drastically smaller file sizes, the ability to surpass the 8-bit color restriction, and better frame-handling and compression through codecs. Virtually universal support for the GIF format in web browsers and a lack of official support for video in the HTML standard caused GIF to rise to prominence for the purpose of displaying short video-like files on the web.
MNG ("Multiple-image Network Graphics") was originally developed as a PNG-based solution for animations. MNG reached version 1.0 in 2001, but few applications support it. In 2006, an extension to the PNG format called APNG ("Animated Portable Network Graphics") was proposed as an alternative to the MNG format by Mozilla. APNG is supported by most browsers as of 2019. APNG provides the ability to animate PNG files, while retaining backwards compatibility in decoders that cannot understand the animation chunk (unlike MNG). Older decoders will simply render the first frame of the animation. The PNG group officially rejected APNG as an official extension on April 20, 2007.
There have been several subsequent proposals for a simple animated graphics format based on PNG, using several different approaches. Nevertheless, Animated Portable Network Graphics is still under development by Mozilla and is supported in Firefox 3, while MNG support was dropped. APNG is currently supported by all major web browsers, including Chrome (since version 59.0), Opera, Firefox, and Edge.
Embedded Adobe Flash objects and MPEGs are used on some websites to display simple video, but require the use of an additional browser plugin. WebM and WebP are in development and are supported by some web browsers. Other options for web animation include serving individual frames using AJAX, or animating SVG images using JavaScript or SMIL ("Synchronized Multimedia Integration Language"). With the introduction of widespread support for the HTML5 video tag in most web browsers, some websites use a looped version of the video tag generated by JavaScript functions. This gives the appearance of a GIF, but with the size and speed advantages of compressed video. Notable examples are Gfycat and Imgur and their GIFV metaformat, which is really a video tag playing a looped MP4 or WebM compressed video.
High Efficiency Image File Format (HEIF) is an image file format, finalized in 2015, which uses a discrete cosine transform (DCT) lossy compression algorithm based on the HEVC video format, and related to the JPEG image format. In contrast to JPEG, HEIF supports animation. Compared to the GIF format, which lacks DCT compression, HEIF allows significantly more efficient compression. HEIF stores more information and produces higher-quality animated images at a small fraction of an equivalent GIF's size. VP9 only supports alpha compositing with 4:2:0 chroma subsampling in the YUVA420 pixel format, which may be unsuitable for GIFs that combine transparency with rasterised vector graphics with fine color details.
In April 2014, 4chan added support for silent WebM videos that are under 3 MB in size and 2 min in length, and in October 2014, Imgur started converting any GIF files uploaded to the site to video and giving the link to the HTML player the appearance of an actual file with a .gifv extension. In January 2016, Telegram started re-encoding all GIFs to MPEG4 videos that "require up to 95% less disk space for the same image quality." An example of this usage:
https://en.wikipedia.org/wiki?curid=12702
Greg Egan Greg Egan (born 20 August 1961) is an Australian science fiction writer and amateur mathematician, best known for his works of hard science fiction. Egan has won multiple awards including the John W. Campbell Memorial Award, the Hugo Award, and the Locus Award. Egan holds a Bachelor of Science degree in Mathematics from the University of Western Australia. He published his first work in 1983. He specialises in hard science fiction stories with mathematical and quantum ontology themes, including the nature of consciousness. Other themes include genetics, simulated reality, posthumanism, mind uploading, sexuality, artificial intelligence, and the superiority of rational naturalism to religion. He often deals with complex technical material, like new physics and epistemology. He is a Hugo Award winner (with eight other works shortlisted for the Hugos) and has also won the John W. Campbell Memorial Award for Best Science Fiction Novel. His early stories feature strong elements of supernatural horror. Egan's short stories have been published in a variety of genre magazines, including regular appearances in "Interzone" and "Asimov's Science Fiction". In 2018, Egan described a construction of superpermutations, thus giving an upper bound on their minimal length. On 27 February 2019, using ideas developed by Robin Houston and others, Egan produced a superpermutation of "n" = 7 symbols of length 5906, breaking previous records. As of 2015, Egan lives in Perth. Egan is a vegetarian and an atheist. Egan does not attend science fiction conventions, does not sign books, and has stated that he appears in no photographs on the web, though both SF fan sites and Google Search have at times mistakenly represented photos of other people with the same name as those of the writer. Egan is a multiple Seiun Award winner. "Teranesia" was named the winner of the 2000 Ditmar Award for best novel, but Egan declined the award. His short story collections include "Axiomatic" (1995), "Our Lady of Chernobyl" (1995), "Luminous" (1998), "Dark Integers and Other Stories" (2008), "Crystal Nights and Other Stories" (2009), "Oceanic" (2009), "The Best of Greg Egan" (2019), and "Instantiation" (2020). The production of a short film inspired by the story "Axiomatic" commenced in 2015, and the film was released online in October 2017.
https://en.wikipedia.org/wiki?curid=12706
Guy Fawkes Guy Fawkes (; 13 April 1570 – 31 January 1606), also known as Guido Fawkes while fighting for the Spanish, was a member of a group of provincial English Catholics who was involved in the failed Gunpowder Plot of 1605. He was born and educated in York; his father died when Fawkes was eight years old, after which his mother married a recusant Catholic. Fawkes converted to Catholicism and left for mainland Europe, where he fought for Catholic Spain in the Eighty Years' War against Protestant Dutch reformers in the Low Countries. He travelled to Spain to seek support for a Catholic rebellion in England without success. He later met Thomas Wintour, with whom he returned to England. Wintour introduced him to Robert Catesby, who planned to assassinate King James I and restore a Catholic monarch to the throne. The plotters leased an undercroft beneath the House of Lords; Fawkes was placed in charge of the gunpowder which they stockpiled there. The authorities were prompted by an anonymous letter to search Westminster Palace during the early hours of 5 November, and they found Fawkes guarding the explosives. He was questioned and tortured over the next few days and confessed to wanting to blow up the House of Lords. Immediately before his execution on 31 January, Fawkes fell from the scaffold where he was to be hanged and broke his neck, thus avoiding the agony of being hanged, drawn and quartered. He became synonymous with the Gunpowder Plot, the failure of which has been commemorated in the UK as Guy Fawkes Night since 5 November 1605, when his effigy is traditionally burned on a bonfire, commonly accompanied by fireworks. Guy Fawkes was born in 1570 in Stonegate, York. He was the second of four children born to Edward Fawkes, a proctor and an advocate of the consistory court at York, and his wife, Edith. Guy's parents were regular communicants of the Church of England, as were his paternal grandparents; his grandmother, born Ellen Harrington, was the daughter of a prominent merchant, who served as Lord Mayor of York in 1536. Guy's mother's family were recusant Catholics, and his cousin, Richard Cowling, became a Jesuit priest. "Guy" was an uncommon name in England, but may have been popular in York on account of a local notable, Sir Guy Fairfax of Steeton. The date of Fawkes's birth is unknown, but he was baptised in the church of St Michael le Belfrey, York on 16 April. As the customary gap between birth and baptism was three days, he was probably born about 13 April. In 1568, Edith had given birth to a daughter named Anne, but the child died aged about seven weeks, in November that year. She bore two more children after Guy: Anne (b. 1572), and Elizabeth (b. 1575). Both were married, in 1599 and 1594 respectively. In 1579, when Guy was eight years old, his father died. His mother remarried several years later, to the Catholic Dionis Baynbrigge (or Denis Bainbridge) of Scotton, Harrogate. Fawkes may have become a Catholic through the Baynbrigge family's recusant tendencies and the Catholic branches of the Pulleyn and Percy families of Scotton, as well as from his time at St. Peter's School in York. A governor of the school had spent about 20 years in prison for recusancy, and its headmaster, John Pulleyn, came from a family of noted Yorkshire recusants, the Pulleyns of Blubberhouses.
In her 1915 work "The Pulleynes of Yorkshire", author Catharine Pullein suggested that Fawkes's Catholic education came from his Harrington relatives, who were known for harbouring priests, one of whom later accompanied Fawkes to Flanders in 1592–1593. Fawkes's fellow students included John Wright and his brother Christopher (both later involved with Fawkes in the Gunpowder Plot) and Oswald Tesimond, Edward Oldcorne and Robert Middleton, who became priests (the latter executed in 1601). After leaving school Fawkes entered the service of Anthony Browne, 1st Viscount Montagu. The Viscount took a dislike to Fawkes and after a short time dismissed him; he was subsequently employed by Anthony-Maria Browne, 2nd Viscount Montagu, who succeeded his grandfather at the age of 18. At least one source claims that Fawkes married and had a son, but no known contemporary accounts confirm this. In October 1591 Fawkes sold the estate in Clifton in York that he had inherited from his father. He travelled to the continent to fight in the Eighty Years War for Catholic Spain against the new Dutch Republic and, from 1595 until the Peace of Vervins in 1598, France. Although England was not by then engaged in land operations against Spain, the two countries were still at war, and the Spanish Armada of 1588 was only five years in the past. He joined Sir William Stanley, an English Catholic and veteran commander in his mid-fifties who had raised an army in Ireland to fight in Leicester's expedition to the Netherlands. Stanley had been held in high regard by Elizabeth I, but following his surrender of Deventer to the Spanish in 1587 he, and most of his troops, had switched sides to serve Spain. Fawkes became an alférez or junior officer, fought well at the siege of Calais in 1596, and by 1603 had been recommended for a captaincy. That year, he travelled to Spain to seek support for a Catholic rebellion in England. He used the occasion to adopt the Italian version of his name, Guido, and in his memorandum described James I (who became king of England that year) as "a heretic", who intended "to have all of the Papist sect driven out of England." He denounced Scotland, and the King's favourites among the Scottish nobles, writing "it will not be possible to reconcile these two nations, as they are, for very long". Although he was received politely, the court of Philip III was unwilling to offer him any support. In 1604 Fawkes became involved with a small group of English Catholics, led by Robert Catesby, who planned to assassinate the Protestant King James and replace him with his daughter, third in the line of succession, Princess Elizabeth. Fawkes was described by the Jesuit priest and former school friend Oswald Tesimond as "pleasant of approach and cheerful of manner, opposed to quarrels and strife ... loyal to his friends". Tesimond also claimed Fawkes was "a man highly skilled in matters of war", and that it was this mixture of piety and professionalism that endeared him to his fellow conspirators. The author Antonia Fraser describes Fawkes as "a tall, powerfully built man, with thick reddish-brown hair, a flowing moustache in the tradition of the time, and a bushy reddish-brown beard", and that he was "a man of action ... capable of intelligent argument as well as physical endurance, somewhat to the surprise of his enemies." The first meeting of the five central conspirators took place on Sunday 20 May 1604, at an inn called the Duck and Drake, in the fashionable Strand district of London. 
Catesby had already proposed at an earlier meeting with Thomas Wintour and John Wright to kill the King and his government by blowing up "the Parliament House with gunpowder". Wintour, who at first objected to the plan, was convinced by Catesby to travel to the continent to seek help. Wintour met with the Constable of Castile, the exiled Welsh spy Hugh Owen, and Sir William Stanley, who said that Catesby would receive no support from Spain. Owen did, however, introduce Wintour to Fawkes, who had by then been away from England for many years, and thus was largely unknown in the country. Wintour and Fawkes were contemporaries; each was militant, and had first-hand experience of the unwillingness of the Spaniards to help. Wintour told Fawkes of their plan to "doe some whatt in Ingland if the pece with Spaine healped us nott", and thus in April 1604 the two men returned to England. Wintour's news did not surprise Catesby; despite positive noises from the Spanish authorities, he feared that "the deeds would nott answere". One of the conspirators, Thomas Percy, was promoted in June 1604, gaining access to a house in London that belonged to John Whynniard, Keeper of the King's Wardrobe. Fawkes was installed as a caretaker and began using the pseudonym John Johnson, servant to Percy. The contemporaneous account of the prosecution (taken from Thomas Wintour's confession) claimed that the conspirators attempted to dig a tunnel from beneath Whynniard's house to Parliament, although this story may have been a government fabrication; no evidence for the existence of a tunnel was presented by the prosecution, and no trace of one has ever been found; Fawkes himself did not admit the existence of such a scheme until his fifth interrogation, but even then he could not locate the tunnel. If the story is true, however, by December 1604 the conspirators were busy tunnelling from their rented house to the House of Lords. They ceased their efforts when, during tunnelling, they heard a noise from above. Fawkes was sent out to investigate, and returned with the news that the tenant's widow was clearing out a nearby undercroft, directly beneath the House of Lords. The plotters purchased the lease to the room, which also belonged to John Whynniard. Unused and filthy, it was considered an ideal hiding place for the gunpowder the plotters planned to store. According to Fawkes, 20 barrels of gunpowder were brought in at first, followed by 16 more on 20 July. On 28 July however, the ever-present threat of the plague delayed the opening of Parliament until Tuesday, 5 November. In an attempt to gain foreign support, in May 1605 Fawkes travelled overseas and informed Hugh Owen of the plotters' plan. At some point during this trip his name made its way into the files of Robert Cecil, 1st Earl of Salisbury, who employed a network of spies across Europe. One of these spies, Captain William Turner, may have been responsible. Although the information he provided to Salisbury usually amounted to no more than a vague pattern of invasion reports, and included nothing which regarded the Gunpowder Plot, on 21 April he told how Fawkes was to be brought by Tesimond to England. Fawkes was a well-known Flemish mercenary, and would be introduced to "Mr Catesby" and "honourable friends of the nobility and others who would have arms and horses in readiness". Turner's report did not, however, mention Fawkes's pseudonym in England, John Johnson, and did not reach Cecil until late in November, well after the plot had been discovered. 
It is uncertain when Fawkes returned to England, but he was back in London by late August 1605, when he and Wintour discovered that the gunpowder stored in the undercroft had decayed. More gunpowder was brought into the room, along with firewood to conceal it. Fawkes's final role in the plot was settled during a series of meetings in October. He was to light the fuse and then escape across the Thames. Simultaneously, a revolt in the Midlands would help to ensure the capture of Princess Elizabeth. Acts of regicide were frowned upon, and Fawkes would therefore head to the continent, where he would explain to the Catholic powers his holy duty to kill the King and his retinue. A few of the conspirators were concerned about fellow Catholics who would be present at Parliament during the opening. On the evening of 26 October, Lord Monteagle received an anonymous letter warning him to stay away, and to "retyre youre self into yowre contee whence yow maye expect the event in safti for ... they shall receyve a terrible blowe this parleament". Despite quickly becoming aware of the letter (they were informed by one of Monteagle's servants), the conspirators resolved to continue with their plans, as it appeared that it "was clearly thought to be a hoax". Fawkes checked the undercroft on 30 October, and reported that nothing had been disturbed. Monteagle's suspicions had been aroused, however, and the letter was shown to King James. The King ordered Sir Thomas Knyvet to conduct a search of the cellars underneath Parliament, which he did in the early hours of 5 November. Fawkes had taken up his station late on the previous night, armed with a slow match and a watch given to him by Percy "becaus he should knowe howe the time went away". He was found leaving the cellar shortly after midnight, and arrested. Inside, the barrels of gunpowder were discovered hidden under piles of firewood and coal. Fawkes gave his name as John Johnson and was first interrogated by members of the King's Privy chamber, where he remained defiant. When asked by one of the lords what he was doing in possession of so much gunpowder, Fawkes answered that his intention was "to blow you Scotch beggars back to your native mountains." He identified himself as a 36-year-old Catholic from Netherdale in Yorkshire, and gave his father's name as Thomas and his mother's as Edith Jackson. He explained the wounds on his body, noted by his questioners, as the effects of pleurisy. Fawkes admitted his intention to blow up the House of Lords, and expressed regret at his failure to do so. His steadfast manner earned him the admiration of King James, who described Fawkes as possessing "a Roman resolution". James's admiration did not, however, prevent him from ordering on 6 November that "John Johnson" be tortured, to reveal the names of his co-conspirators. He directed that the torture be light at first, referring to the use of manacles, but more severe if necessary, authorising the use of the rack: "the gentler Tortures are to be first used unto him "et sic per gradus ad ima tenditur" [and so by degrees proceeding to the worst]". Fawkes was transferred to the Tower of London. The King composed a list of questions to be put to "Johnson", such as "as to what he is, For I can never yet hear of any man that knows him", "When and where he learned to speak French?", and "If he was a Papist, who brought him up in it?" The room in which Fawkes was interrogated subsequently became known as the Guy Fawkes Room.
Sir William Waad, Lieutenant of the Tower, supervised the torture and obtained Fawkes's confession. He searched his prisoner, and found a letter addressed to Guy Fawkes. To Waad's surprise, "Johnson" remained silent, revealing nothing about the plot or its authors. On the night of 6 November he spoke with Waad, who reported to Salisbury "He [Johnson] told us that since he undertook this action he did every day pray to God he might perform that which might be for the advancement of the Catholic Faith and saving his own soul". According to Waad, Fawkes managed to rest through the night, despite his being warned that he would be interrogated until "I had gotton the inwards secret of his thoughts and all his complices". His composure was broken at some point during the following day. The observer Sir Edward Hoby remarked "Since Johnson's being in the Tower, he beginneth to speak English". Fawkes revealed his true identity on 7 November, and told his interrogators that there were five people involved in the plot to kill the King. He began to reveal their names on 8 November, and told how they intended to place Princess Elizabeth on the throne. His third confession, on 9 November, implicated Francis Tresham. Following the Ridolfi plot of 1571 prisoners were made to dictate their confessions, before copying and signing them, if they still could. Although it is uncertain if he was tortured on the rack, Fawkes's scrawled signature suggests the suffering he endured at the hands of his interrogators. The trial of eight of the plotters began on Monday 27 January 1606. Fawkes shared the barge from the Tower to Westminster Hall with seven of his co-conspirators. They were kept in the Star Chamber before being taken to Westminster Hall, where they were displayed on a purpose-built scaffold. The King and his close family, watching in secret, were among the spectators as the Lords Commissioners read out the list of charges. Fawkes was identified as Guido Fawkes, "otherwise called Guido Johnson". He pleaded not guilty, despite his apparent acceptance of guilt from the moment he was captured. The jury found all the defendants guilty, and the Lord Chief Justice Sir John Popham pronounced them guilty of high treason. The Attorney General Sir Edward Coke told the court that each of the condemned would be drawn backwards to his death, by a horse, his head near the ground. They were to be "put to death halfway between heaven and earth as unworthy of both". Their genitals would be cut off and burnt before their eyes, and their bowels and hearts removed. They would then be decapitated, and the dismembered parts of their bodies displayed so that they might become "prey for the fowls of the air". Fawkes's and Tresham's testimony regarding the Spanish treason was read aloud, as well as confessions related specifically to the Gunpowder Plot. The last piece of evidence offered was a conversation between Fawkes and Wintour, who had been kept in adjacent cells. The two men apparently thought they had been speaking in private, but their conversation was intercepted by a government spy. When the prisoners were allowed to speak, Fawkes explained his not guilty plea as ignorance of certain aspects of the indictment. On 31 January 1606, Fawkes and three others – Thomas Wintour, Ambrose Rookwood, and Robert Keyes – were dragged ("i.e.", "drawn") from the Tower on wattled hurdles to the Old Palace Yard at Westminster, opposite the building they had attempted to destroy. His fellow plotters were then hanged and quartered. 
Fawkes was the last to stand on the scaffold. He asked for forgiveness of the King and state, while keeping up his "crosses and idle ceremonies" (Catholic practices). Weakened by torture and aided by the hangman, Fawkes began to climb the ladder to the noose, but either through jumping to his death or climbing too high so the rope was incorrectly set, he managed to avoid the agony of the latter part of his execution by breaking his neck. His lifeless body was nevertheless quartered and, as was the custom, his body parts were then distributed to "the four corners of the kingdom", to be displayed as a warning to other would-be traitors. On 5 November 1605, Londoners were encouraged to celebrate the King's escape from assassination by lighting bonfires, provided that "this testemonye of joy be carefull done without any danger or disorder". An Act of Parliament designated each 5 November as a day of thanksgiving for "the joyful day of deliverance", and remained in force until 1859. Fawkes was one of 13 conspirators, but he is the individual most associated with the plot. In Britain, 5 November has variously been called Guy Fawkes Night, Guy Fawkes Day, Plot Night, and Bonfire Night (which can be traced directly back to the original celebration of 5 November 1605). Bonfires were accompanied by fireworks from the 1650s onwards, and it became the custom after 1673 to burn an effigy (usually of the pope) when heir presumptive James, Duke of York, converted to Catholicism. Effigies of other notable figures have found their way onto the bonfires, such as Paul Kruger and Margaret Thatcher, although most modern effigies are of Fawkes. The "guy" is normally created by children from old clothes, newspapers, and a mask. During the 19th century, "guy" came to mean an oddly dressed person, while in many places it has lost any pejorative connotation and instead refers to any male person and the plural form can refer to people of any gender (as in "you guys"). James Sharpe, professor of history at the University of York, has described how Guy Fawkes came to be toasted as "the last man to enter Parliament with honest intentions". William Harrison Ainsworth's 1841 historical romance "Guy Fawkes; or, The Gunpowder Treason" portrays Fawkes in a generally sympathetic light, and his novel transformed Fawkes in the public perception into an "acceptable fictional character". Fawkes subsequently appeared as "essentially an action hero" in children's books and penny dreadfuls such as "The Boyhood Days of Guy Fawkes; or, The Conspirators of Old London", published around 1905. According to historian Lewis Call, Fawkes is now "a major icon in modern political culture" whose face has become "a potentially powerful instrument for the articulation of postmodern anarchism" in the late 20th century. Footnotes Citations Bibliography
https://en.wikipedia.org/wiki?curid=12707
Goodtimes virus The Goodtimes Virus was a computer virus hoax that spread during the early years of the Internet's popularity. Warnings about a computer virus named "Good Times" began being passed around among Internet users in 1994. The Goodtimes virus was supposedly transmitted via an email bearing the subject header "Good Times" or "Goodtimes," hence the virus's name, and the warning recommended deleting any such email unread. The virus described in the warnings did not exist, but the warnings themselves were, in effect, virus-like. In 1997 the Cult of the Dead Cow hacker collective announced that they had been responsible for the perpetration of the "Good Times" virus hoax as an exercise to "prove the gullibility of self-proclaimed 'experts' on the Internet." The first recorded email warnings about the Good Times virus showed up on 15 November 1994. The first message was brief, a simple five-sentence email with a Christmas greeting, advising recipients not to open email messages with subject "GOOD TIMES!!", as doing so would ruin their files. Later messages became more intricate. The most common versions—the "Infinite loop" and "ASCII buffer" editions—were much longer, containing descriptions of what exactly Good Times would do to the computer of someone who opened it, as well as comparisons to other viruses of the time, and references to a U.S. Federal Communications Commission warning. One version of the warning read: "FYI, a file, going under the name "Good Times" is being sent to some Internet users who subscribe to on-line services (Compuserve, Prodigy and America On Line). If you should receive this file, do not download it! Delete it immediately. I understand that there is a virus included in that file, which if downloaded to your personal computer, will ruin all of your files." The longer version of the Good Times warning contained descriptions of what Good Times was supposedly capable of doing to computers. In addition to sending itself to every email address in a recipient's received or sent mail, the Good Times virus caused a wide variety of other nasty things to happen. For example, one version said that if an infected computer contained a hard drive, it could be destroyed. If Good Times was not stopped in time, an infected computer would enter an "nth-complexity infinite binary loop" (a meaningless term), damaging the processor. The "ASCII buffer" email described the mechanism of Good Times as a buffer overflow. A number of computer virus hoaxes appeared in the wake of Good Times. These messages were similar in form to Good Times, warning users not to open messages bearing particular subject lines. Subject lines mentioned in these emails include "Penpal greetings," "Free Money," "Deeyenda," "Invitation," and "Win a Holiday." The Bad Times computer virus warning is generally considered to be a spoof of the Good Times warning. Developments in mail systems such as Microsoft Outlook, made without sufficient thought for their security implications, eventually made genuinely self-propagating email viruses possible. Notable examples include the Melissa worm, the ILOVEYOU virus, and the Anna Kournikova virus. In some cases, a user must open a document or program contained in an email message in order to spread the virus; in others, notably the Kak worm, merely opening or previewing an email message itself will trigger the virus.
Some e-mail viruses written after the Good Times scare contained text announcing that "This virus is called 'Good Times'", their authors presumably hoping to gain kudos amongst other virus writers by appearing to have created a worldwide scare. In general, virus researchers avoided naming these viruses "Good Times", but an obvious potential for confusion exists, and some anti-virus tools may well detect a real virus they identify as "Good Times", though any such virus is not the nonexistent one that prompted the original scare. "Weird Al" Yankovic made a song parody of the virus titled "Virus Alert".
https://en.wikipedia.org/wiki?curid=12709
Gary Snyder Gary Snyder (born May 8, 1930) is an American man of letters. Perhaps best known as a poet (often associated with the Beat Generation and the San Francisco Renaissance), he is also an essayist, lecturer, and environmental activist with anarchoprimitivist leanings. He has been described as the "poet laureate of Deep Ecology". Snyder is a winner of a Pulitzer Prize for Poetry and the American Book Award. His work, in his various roles, reflects an immersion in both Buddhist spirituality and nature. Snyder has translated literature into English from ancient Chinese and modern Japanese. Snyder was an academic at the University of California, Davis and a member of the California Arts Council. Gary Sherman Snyder was born in San Francisco, California to Harold and Lois Hennessy Snyder. Snyder is of German, Scottish, Irish and English ancestry. His family, impoverished by the Great Depression, moved to King County, Washington, when he was two years old. There, they tended dairy cows, kept laying hens, had a small orchard, and made cedar-wood shingles. At the age of seven, Snyder was laid up for four months by an accident. "So my folks brought me piles of books from the Seattle Public Library," he recalled in interview, "and it was then I really learned to read and from that time on was voracious — I figure that accident changed my life. At the end of four months, I had read more than most kids do by the time they're eighteen. And I didn't stop." Also during his ten childhood years in Washington, Snyder became aware of the presence of the Coast Salish people and developed an interest in the Native American peoples in general and their traditional relationship with nature. In 1942, following his parents' divorce, Snyder moved to Portland, Oregon with his mother and his younger sister, Anthea. Their mother, Lois Snyder Hennessy (born Wilkey), worked during this period as a reporter for "The Oregonian". One of Gary's boyhood jobs was as a newspaper copy boy, also at the "Oregonian". Also, during his teen years, he attended Lincoln High School, worked as a camp counselor, and went mountain climbing with the Mazamas youth group. Climbing remained an interest of his, especially during his twenties and thirties. In 1947, he started attending Reed College on a scholarship. Here he met, and for a time, roomed with the education author Carl Proujan; and became acquainted with Philip Whalen and Lew Welch. During his time at Reed, Snyder published his first poems in a student journal. In 1948, he spent the summer working as a seaman. To get this job, he joined the now defunct Marine Cooks and Stewards union, and would later work as a seaman in the mid-1950s to gain experience of other cultures in port cities. Snyder married Alison Gass in 1950; however, they separated after seven months, and divorced in 1952. While attending Reed, Snyder did folklore research on the Warm Springs Indian Reservation in central Oregon. He graduated with a dual degree in anthropology and literature in 1951. Snyder's senior thesis, entitled "The Dimensions of a Myth", employed perspectives from anthropology, folklore, psychology, and literature to examine a myth of the Pacific Northwest's Haida people. He spent the following few summers working as a timber scaler at Warm Springs, developing relationships with its people that were less rooted in academia. This experience formed the basis for some of his earliest published poems (including "A Berry Feast"), later collected in the book "The Back Country". 
He also encountered the basic ideas of Buddhism and, through its arts, some of the Far East's traditional attitudes toward nature. He went to Indiana University with a graduate fellowship to study anthropology. (Snyder also began practicing self-taught Zen meditation.) He left after a single semester to return to San Francisco and to 'sink or swim as a poet'. Snyder worked for two summers in the North Cascades in Washington as a fire lookout, on Crater Mountain in 1952 and Sourdough Mountain in 1953 (both locations on the upper Skagit River). His attempts to get another lookout stint in 1954 (at the peak of McCarthyism), however, failed. He had been barred from working for the government, due to his association with the Marine Cooks and Stewards. Instead, he went back to Warm Springs to work in logging as a choker setter (fastening cables to logs). This experience contributed to his "Myths and Texts" and the essay "Ancient Forests of the Far West". Back in San Francisco, Snyder lived with Whalen, who shared his growing interest in Zen. Snyder's reading of the writings of D. T. Suzuki had in fact been a factor in his decision not to continue as a graduate-student in anthropology, and in 1953 he enrolled at the University of California, Berkeley to study Asian culture and languages. He studied ink and wash painting under Chiura Obata and Tang dynasty poetry under Ch'en Shih-hsiang. Snyder continued to spend summers working in the forests, including one summer as a trail-builder in Yosemite. He spent some months in 1955 and 1956 living in a cabin (which he dubbed "Marin-an") outside Mill Valley, California with Jack Kerouac. It was also at this time that Snyder was an occasional student at the American Academy of Asian Studies, where Saburō Hasegawa and Alan Watts, among others, were teaching. Hasegawa introduced Snyder to the treatment of landscape painting as a meditative practice. This inspired Snyder to attempt something equivalent in poetry, and with Hasegawa's encouragement, he began work on "Mountains and Rivers without End", which would be completed and published forty years later. During these years, Snyder was writing and collecting his own work, as well as embarking on the translation of the "Cold Mountain" poems by the 8th-century Chinese recluse Han Shan; this work appeared in chapbook-form in 1959, under the title "Riprap & Cold Mountain Poems". Snyder met Allen Ginsberg when the latter sought Snyder out on the recommendation of Kenneth Rexroth. Then, through Ginsberg, Snyder and Kerouac came to know each other. This period provided the materials for Kerouac's novel "The Dharma Bums", and Snyder was the inspiration for the novel's main character, Japhy Ryder, in the same way Neal Cassady had inspired Dean Moriarty in "On the Road". As the large majority of people in the Beat movement had urban backgrounds, writers like Ginsberg and Kerouac found Snyder, with his backcountry and manual-labor experience and interest in things rural, a refreshing and almost exotic individual. Lawrence Ferlinghetti later referred to Snyder as 'the Thoreau of the Beat Generation'. Snyder read his poem "A Berry Feast" at the poetry reading at the Six Gallery in San Francisco (October 7, 1955) that heralded what was to become known as the San Francisco Renaissance. This also marked Snyder's first involvement with the Beats, although he was not a member of the original New York circle, but rather entered the scene through his association with Kenneth Rexroth. 
As recounted in Kerouac's "Dharma Bums", even at age 25 Snyder felt he could have a role in the fateful future meeting of West and East. Snyder's first book, "Riprap", which drew on his experiences as a forest lookout and on the trail-crew in Yosemite, was published in 1959. Independently, some of the Beats, including Philip Whalen, had become interested in Zen, but Snyder was one of the more serious scholars of the subject among them, preparing in every way he could think of for eventual study in Japan. In 1955, the First Zen Institute of America offered him a scholarship for a year of Zen training in Japan, but the State Department refused to issue him a passport, informing him that "it has been alleged you are a Communist." A subsequent District of Columbia Court of Appeals ruling forced a change in policy, and Snyder got his passport. In the end, his expenses were paid by Ruth Fuller Sasaki, for whom he was supposed to work; but initially he served as personal attendant and English tutor to Zen abbot Miura Isshu, at Rinko-in, a temple in Shokoku-ji in Kyoto, where Dwight Goddard, Buddhism in the United States and R. H. Blyth had preceded him. Mornings, after zazen, sutra chanting, and chores for Miura, he took Japanese classes, bringing his spoken Japanese up to a level sufficient for kōan study. He developed a friendship with Philip Yampolsky, who took him around Kyoto. In early July 1955, he took refuge and requested to become Miura's disciple, thus formally becoming a Buddhist. He returned to California via the Persian Gulf, Turkey, Sri Lanka and various Pacific Islands, in 1958, voyaging as a crewman in the engine room on the oil freighter "Sappa Creek", and took up residence at Marin-an again. He turned one room into a zendo, with about six regular participants. In early June, he met the poet Joanne Kyger. She became his girlfriend, and eventually his wife. In 1959, he shipped for Japan again, where he rented a cottage outside Kyoto. He became the first foreign disciple of Oda Sesso Roshi, the new abbot of Daitoku-ji. He married Kyger on February 28, 1960, immediately after her arrival, which Sasaki insisted they do, if they were to live together and be associated with the First Zen Institute of America. Snyder and Joanne Kyger were married from 1960 to 1965. During the period between 1956 and 1969, Snyder went back and forth between California and Japan, studying Zen, working on translations with Ruth Fuller Sasaki, and finally living for a while with a group of other people on the small, volcanic island of Suwanosejima. His previous study of written Chinese assisted his immersion in the Zen tradition (with its roots in Tang Dynasty China) and enabled him to take on certain professional projects while he was living in Japan. Snyder received the Zen precepts and a dharma name ("Chofu", "Listen to the Wind"), and lived sometimes as a de facto monk, but never registered to become a priest and planned eventually to return to the United States to 'turn the wheel of the dharma'. During this time, he published a collection of his poems from the early to mid '50s, "Myths & Texts" (1960), and "Six Sections from Mountains and Rivers Without End" (1965). This last was the beginning of a project that he was to continue working on until the late 1990s. Much of Snyder's poetry expresses experiences, environments, and insights involved with the work he has done for a living: logger, fire-lookout, steam-freighter crew, translator, carpenter, and itinerant poet, among other things. 
During his years in Japan, Snyder was also initiated into Shugendo, a form of ancient Japanese animism, (see also Yamabushi). In the early 1960s he traveled for six months through India with his wife Joanne, Allen Ginsberg, and Peter Orlovsky. Snyder and Joanne Kyger separated soon after a trip to India, and divorced in 1965. In the 1950s, Snyder took part in the rise of a strand of Buddhist anarchism emerging from the Beat movement. Snyder was the inspiration for the character Japhy Ryder in Jack Kerouac's novel "The Dharma Bums" (1958). Snyder had spent considerable time in Japan studying Zen Buddhism, and in 1961 published an essay, "Buddhist Anarchism", where he described the connection he saw between these two traditions, originating in different parts of the world: "The mercy of the West has been social revolution; the mercy of the East has been individual insight into the basic self/void." He advocated "using such means as civil disobedience, outspoken criticism, protest, pacifism, voluntary poverty and even gentle violence" and defended "the right of individuals to smoke ganja, eat peyote, be polygymous, polyandrous or homosexual" which he saw as being banned by "the Judaeo-Capitalist-Christian-Marxist West". In 1966, Snyder joined Allen Ginsberg, Zentatsu Richard Baker, Roshi of the San Francisco Zen Center, and Donald Walters, a.k.a. "Swami Kriyananda," to buy in the Sierra foothills, north of Nevada City, California. In 1970, this would become his home, with the Snyder family's portion being named Kitkitdizze. Snyder spent the summers of 1967 and 1968 with a group of Japanese back-to-the-land drop-outs known as "the Tribe" on Suwanosejima (a small Japanese island in the East China Sea), where they combed the beaches, gathered edible plants, and fished. On the island, on August 6, 1967, he married Masa Uehara, whom he had met in Osaka a year earlier. In 1968, they moved to California with their infant son, Kai (born April 1968). Their second son, Gen, was born a year later. In 1971, they moved to the San Juan Ridge in the foothills of the Sierra Nevada of Northern California, near the South Yuba River, where they and friends built a house that drew on rural-Japanese and Native-American architectural ideas. In 1967 his book "The Back Country" appeared, again mainly a collection of poems stretching back over about fifteen years. Snyder devoted a section at the end of the book to his translations of eighteen poems by Kenji Miyazawa. "Regarding Wave" appeared in January 1970, a stylistic departure offering poems that were more emotional, metaphoric, and lyrical. From the late 1960s, the content of Snyder's poetry increasingly had to do with family, friends, and community. He continued to publish poetry throughout the 1970s, much of it reflecting his re-immersion in life on the American continent and his involvement in the back-to-the-land movement in the Sierra foothills. His 1974 book "Turtle Island", titled after a Native American name for the North American continent, won a Pulitzer Prize. It also influenced numerous West Coast Generation X writers, including Alex Steffen, Bruce Barcott and Mark Morford. His 1983 book "Axe Handles", won an American Book Award. Snyder wrote numerous essays setting forth his views on poetry, culture, social experimentation, and the environment. Many of these were collected in "Earth House Hold" (1969), "The Old Ways" (1977), "The Real Work" (1980), "The Practice of the Wild" (1990), "A Place in Space" (1995), and "The Gary Snyder Reader" (1999). 
In 1979, Snyder published "He Who Hunted Birds in His Father's Village: The Dimensions of a Haida Myth", based on his Reed thesis. Snyder's journals from his travel in India in the mid-1960s appeared in 1983 under the title "Passage Through India". In these, his wide-ranging interests in cultures, natural history, religions, social critique, contemporary America, and hands-on aspects of rural life, as well as his ideas on literature, were given full-blown articulation. In 1986, Snyder became a professor in the writing program at the University of California, Davis. Snyder is now professor emeritus of English. Snyder was married to Uehara for twenty-two years; the couple divorced in 1989. Snyder married Carole Lynn Koda (October 3, 1947 – June 29, 2006), who would write "Homegrown: Thirteen brothers and sisters, a century in America", in 1991, and remained married to her until her death of cancer. She had been born in the third generation of a successful Japanese-American farming family, noted for its excellent rice. She shared Buddhism, extensive travels, and work with Snyder, and performed independent work as a naturalist. As Snyder's involvement in environmental issues and his teaching grew, he seemed to move away from poetry for much of the 1980s and early 1990s. However, in 1996 he published the complete "Mountains and Rivers Without End", a mixture of the lyrical and epic modes celebrating the act of inhabitation on a specific place on the planet. This work was written over a 40-year period. It has been translated into Japanese, French and Russian. In 2004 Snyder published "Danger on Peaks", his first collection of new poems in twenty years. Snyder was awarded the Levinson Prize from the journal "Poetry", the American Poetry Society Shelley Memorial Award (1986), was inducted into the American Academy of Arts and Letters (1987), and won the 1997 Bollingen Prize for Poetry and, that same year, the John Hay Award for Nature Writing. Snyder also has the distinction of being the first American to receive the Buddhism Transmission Award (for 1998) from the Japan-based Bukkyo Dendo Kyokai Foundation. For his ecological and social activism, Snyder was named as one of the 100 visionaries selected in 1995 by "Utne Reader". Snyder's life and work was celebrated in John J. Healy's 2010 documentary "The Practice of the Wild." The film, which debuted at the 53rd San Francisco International Film Festival, features wide-ranging, running conversations between Snyder and poet, writer and longtime colleague Jim Harrison, filmed mostly on the Hearst Ranch in San Simeon, California. The film also shows archival photographs and film of Snyder's life. Gary Snyder uses mainly common speech-patterns as the basis for his lines, though his style has been noted for its "flexibility" and the variety of different forms his poems have taken. He typically uses neither conventional meters nor intentional rhyme. "Love and respect for the primitive tribe, honour accorded the Earth, the escape from city and industry into both the past and the possible, contemplation, the communal", such, according to Glyn Maxwell, is the awareness and commitment behind the specific poems. The author and editor Stewart Brand once wrote: "Gary Snyder's poetry addresses the life-planet identification with unusual simplicity of style and complexity of effect." According to Jody Norton, this simplicity and complexity derives from Snyder's use of natural imagery (geographical formations, flora, and fauna) in his poems. 
Such imagery can be both sensual at a personal level yet universal and generic in nature. In the 1968 poem "Beneath My Hand and Eye the Distant Hills, Your Body," the author compares the intimate experience of a lover's caress with the mountains, hills, cinder cones, and craters of the Uintah Mountains. Readers become explorers on both a very private level as well as a very public and grand level. A simplistic touch becoming a very complex interaction occurring at multiple levels. This is the effect Snyder intended. In an interview with Faas, he states, "There is a direction which is very beautiful, and that's the direction of the organism being less and less locked into itself, less and less locked into its own body structure and its relatively inadequate sense organs, towards a state where the organism can actually go out from itself and share itself with others." Snyder has always maintained that his personal sensibility arose from his interest in Native Americans and their involvement with nature and knowledge of it; indeed, their ways seemed to resonate with his own. And he has sought something akin to this through Buddhist practices, Yamabushi initiation, and other experiences and involvements. However, since his youth he has been quite literate, and he has written about his appreciation of writers of similar sensibilities, like D. H. Lawrence, William Butler Yeats, and some of the great ancient Chinese poets. William Carlos Williams was another influence, especially on Snyder's earliest published work. Starting in high school, Snyder read and loved the work of Robinson Jeffers, his predecessor in poetry of the landscape of the American West; but, whereas Jeffers valued nature over humankind, Snyder saw humankind as part of nature. Snyder commented in interviews, "I have some concerns that I'm continually investigating that tie together biology, mysticism, prehistory, general systems theory". Snyder argues that poets, and humans in general, need to adjust to very long timescales, especially when judging the consequences of their actions. His poetry examines the gap between nature and culture so as to point to ways in which the two can be more closely integrated. In 2004, receiving the Masaoka Shiki International Haiku Awards Grand Prize, Snyder highlighted traditional ballads and folk songs, Native American songs and poems, William Blake, Walt Whitman, Jeffers, Ezra Pound, Noh drama, Zen aphorisms, Federico García Lorca, and Robert Duncan as significant influences on his poetry, but added, "the influence from haiku and from the Chinese is, I think, the deepest." Snyder is among those writers who have sought to dis-entrench conventional thinking about primitive peoples that has viewed them as simple-minded, ignorantly superstitious, brutish, and prone to violent emotionalism. In the 1960s Snyder developed a "neo-tribalist" view akin to the "post-modernist" theory of French Sociologist Michel Maffesoli. The "re-tribalization" of the modern, mass-society world envisioned by Marshall McLuhan, with all of the ominous, dystopian possibilities that McLuhan warned of, subsequently accepted by many modern intellectuals, is not the future that Snyder expects or works toward. Snyder's is a positive interpretation of the tribe and of the possible future. Todd Ensign describes Snyder's interpretation as blending ancient tribal beliefs and traditions, philosophy, physicality, and nature with politics to create his own form of Postmodern environmentalism. 
Snyder rejects the perspective which portrays nature and humanity in direct opposition to one another. Instead, he chooses to write from multiple viewpoints. He purposely sets out to bring about change on the emotional, physical, and political levels by emphasizing the ecological problems faced by today's society. Gary Snyder is widely regarded as a member of the Beat Generation circle of writers: he was one of the poets that read at the famous "Six Gallery" event, and was written about in one of Kerouac's most popular novels, "The Dharma Bums". Some critics argue that Snyder's connection with the Beats is exaggerated and that he might better be regarded as a part of the San Francisco Renaissance, which developed independently. Snyder himself has some reservations about the label "Beat", but does not appear to have any strong objection to being included in the group. He often talks about the Beats in the first person plural, referring to the group as "we" and "us". A quotation from a 1974 interview at the University of North Dakota Writers Conference (published in "The Beat Vision"): I never did know exactly what was meant by the term 'The Beats', but let's say that the original meeting, association, comradeship of Allen [Ginsberg], myself, Michael [McClure], Lawrence [Ferlinghetti], Philip Whalen, who's not here, Lew Welch, who's dead, Gregory [Corso], for me, to a somewhat lesser extent (I never knew Gregory as well as the others) did embody a criticism and a vision which we shared in various ways, and then went our own ways for many years. Where we began to come really close together again, in the late '60s, and gradually working toward this point, it seems to me, was when Allen began to take a deep interest in Oriental thought and then in Buddhism which added another dimension to our levels of agreement; and later through Allen's influence, Lawrence began to draw toward that; and from another angle, Michael and I after the lapse of some years of contact, found our heads very much in the same place, and it's very curious and interesting now; and Lawrence went off in a very political direction for a while, which none of us had any objection with, except that wasn't my main focus. It's very interesting that we find ourselves so much on the same ground again, after having explored divergent paths; and find ourselves united on this position of powerful environmental concern, critique of the future of the individual state, and an essentially shared poetics, and only half-stated but in the background very powerfully there, a basic agreement on some Buddhist type psychological views of human nature and human possibilities. Snyder has also commented "The term Beat is better used for a smaller group of writers ... the immediate group around Allen Ginsberg and Jack Kerouac, plus Gregory Corso and a few others. Many of us ... belong together in the category of the San Francisco Renaissance. ... Still, beat can also be defined as a particular state of mind ... and I was in that mind for a while".
https://en.wikipedia.org/wiki?curid=12710
Giant panda The giant panda ("Ailuropoda melanoleuca"; ), also known as the panda bear or simply the panda, is a bear native to south central China. It is characterised by large, black patches around its eyes, over the ears, and across its round body. The name "giant panda" is sometimes used to distinguish it from the red panda, a neighboring musteloid. Though it belongs to the order Carnivora, the giant panda is a folivore, with bamboo shoots and leaves making up more than 99% of its diet. Giant pandas in the wild will occasionally eat other grasses, wild tubers, or even meat in the form of birds, rodents, or carrion. In captivity, they may receive honey, eggs, fish, yams, shrub leaves, oranges, or bananas along with specially prepared food. The giant panda lives in a few mountain ranges in central China, mainly in Sichuan, but also in neighbouring Shaanxi and Gansu. As a result of farming, deforestation, and other development, the giant panda has been driven out of the lowland areas where it once lived, and it is a conservation-reliant vulnerable species. A 2007 report showed 239 pandas living in captivity inside China and another 27 outside the country. As of December 2014, 49 giant pandas lived in captivity outside China, living in 18 zoos in 13 different countries. Wild population estimates vary; one estimate shows that there are about 1,590 individuals living in the wild, while a 2006 study via DNA analysis estimated that this figure could be as high as 2,000 to 3,000. Some reports also show that the number of giant pandas in the wild is on the rise. In March 2015, conservation news site Mongabay stated that the wild giant panda population had increased by 268, or 16.8%, to 1,864. In 2016, the IUCN reclassified the species from "endangered" to "vulnerable". While the dragon has often served as China's national symbol, internationally the giant panda has often filled this role. As such, it is becoming widely used within China in international contexts, for example, appearing since 1982 on gold panda bullion coins and as one of the five Fuwa mascots of the Beijing Olympics. For many decades, the precise taxonomic classification of the giant panda was under debate because it shares characteristics with both bears and raccoons. However, molecular studies indicate the giant panda is a true bear, part of the family Ursidae. These studies show it diverged about from the common ancestor of the Ursidae; It is the most basal member of this family and equidistant from all other extant bear species. The giant panda has been referred to as a living fossil. The word "panda" was borrowed into English from French, but no conclusive explanation of the origin of the French word "panda" has been found. The closest candidate is the Nepali word "ponya," possibly referring to the adapted wrist bone of the red panda, which is native to Nepal. The Western world originally applied this name to the red panda. In many older sources, the name "panda" or "common panda" refers to the lesser-known red panda, thus necessitating the inclusion of "giant" and "lesser/red" prefixes in front of the names. Even in 2013, the "Encyclopædia Britannica" still used "giant panda" or "panda bear" for the bear, and simply "panda" for the red panda, despite the popular usage of the word "panda" to refer to giant pandas. Since the earliest collection of Chinese writings, the Chinese language has given the bear 20 different names, such as "huāxióng" ( "spotted bear") and "zhúxióng" ( "bamboo bear"). 
The most popular name in China today is "dàxióngmāo" ( literally "giant bear cat"), or simply "xióngmāo" ( "bear cat"). The name "xióngmāo" ( "bear cat") was originally used to describe the red panda ("Ailurus fulgens"), but since the giant panda was thought to be closely related to the red panda, the name "dàxióngmāo" () was coined by analogy. In Taiwan, another popular name for panda is the inverted "dàmāoxióng" ( "giant cat bear"), though many encyclopedias and dictionaries in Taiwan still use the "bear cat" form as the correct name. Some linguists argue that, in this construction, "bear" rather than "cat" is the base noun, making this name more grammatically and logically correct; this may have led to the popular choice despite official writings. This name did not gain its popularity until 1988, when a private zoo in Tainan painted a sun bear black and white and created the Tainan fake panda incident. Two subspecies of giant panda have been recognized on the basis of distinct cranial measurements, colour patterns, and population genetics. A detailed study of the giant panda's genetic history from 2012 confirms that the separation of the Qinling population occurred about 300,000 years ago, and reveals that the non-Qinling population further diverged into two groups, named the Minshan and the Qionglai-Daxiangling-Xiaoxiangling-Liangshan group respectively, about 2,800 years ago. The giant panda has luxuriant black-and-white fur. Adults measure around long, including a tail of about , and tall at the shoulder. Males can weigh up to . Females (generally 10–20% smaller than males) can weigh as little as , but can also weigh up to . Average adult weight is . The giant panda has a body shape typical of bears. It has black fur on its ears, eye patches, muzzle, legs, arms and shoulders. The rest of the animal's coat is white. Although scientists do not know why these unusual bears are black and white, speculation suggests that the bold colouring provides effective camouflage in their shade-dappled snowy and rocky habitat. The giant panda's thick, woolly coat keeps it warm in the cool forests of its habitat. The panda's skull shape is typical of durophagous carnivorans. It has evolved from previous ancestors to exhibit larger molars with increased complexity and an expanded temporal fossa. A 110.45 kg (243.5 lb) giant panda has a 3D canine teeth bite force of 2603.47 newtons and a bite force quotient (BFQ) of 292. Another study measured a 117.5 kg (259 lb) giant panda's bite at 1298.9 newtons (BFQ 151.4) at the canine teeth and 1815.9 newtons (BFQ 141.8) at the carnassial teeth. The giant panda's paw has a "thumb" and five fingers; the "thumb" – actually a modified sesamoid bone – helps it to hold bamboo while eating. Stephen Jay Gould discusses this feature in his book of essays on evolution and biology, "The Panda's Thumb". The giant panda's tail, measuring , is the second-longest in the bear family (the longest belongs to the sloth bear). The giant panda typically lives around 20 years in the wild and up to 30 years in captivity. A female named Jia Jia was the oldest giant panda ever in captivity; born in 1978, she died at the age of 38 on 16 October 2016. A seven-year-old female named Jin Yi died in 2014 in a zoo in Zhengzhou, China, after showing symptoms of gastroenteritis and respiratory disease. It was found that the cause of death was toxoplasmosis, a disease caused by "Toxoplasma gondii" and infecting most warm-blooded animals, including humans. The giant panda genome was sequenced in 2009 using Illumina dye sequencing.
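The bite force quotient (BFQ) figures above are easier to interpret with a small piece of arithmetic: BFQ is conventionally the measured bite force expressed as a percentage of the bite force predicted for a carnivoran of the same body mass, so the prediction each study implies can be back-calculated from its published numbers. The following Python sketch is an illustration added to this text rather than something taken from the cited studies; it assumes only that conventional definition, and the mass-based regressions the studies actually used are not reproduced here.

```python
def implied_predicted_force(measured_newtons, bfq):
    """Back-calculate the mass-predicted bite force implied by a BFQ value.

    BFQ is conventionally defined as the measured bite force expressed as a
    percentage of the force predicted from body mass:
        BFQ = 100 * measured / predicted
    """
    return 100.0 * measured_newtons / bfq

# Canine-tooth figures quoted in the text above.
studies = [
    ("110.45 kg panda (3D model)", 2603.47, 292.0),
    ("117.5 kg panda", 1298.9, 151.4),
]

for label, force, bfq in studies:
    predicted = implied_predicted_force(force, bfq)
    print(f"{label}: measured {force:.1f} N, BFQ {bfq}, "
          f"implied predicted force ~{predicted:.0f} N")
```

The two implied predictions differ because the underlying studies used different methods and regressions; the BFQ values, rather than the raw forces, are the size-adjusted quantities meant to be comparable across species.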
Its genome contains 20 pairs of autosomes and one pair of sex chromosomes. Despite its taxonomic classification as a carnivoran, the giant panda's diet is primarily herbivorous, consisting almost exclusively of bamboo. However, the giant panda still has the digestive system of a carnivore, as well as carnivore-specific genes, and thus derives little energy and little protein from consumption of bamboo. Its ability to digest cellulose is ascribed to the microbes in its gut. Pandas are born with sterile intestines and require bacteria obtained from their mother's feces to digest vegetation. The giant panda is a highly specialised animal with unique adaptations, and has lived in bamboo forests for millions of years. The average giant panda eats as much as 9 to 14 kg (20 to 30 lb) of bamboo shoots a day to compensate for the limited energy content of its diet. Ingestion of such a large quantity of material is possible and necessary because of the rapid passage of large amounts of indigestible plant material through the short, straight digestive tract. It is also noted, however, that such rapid passage of digesta limits the potential of microbial digestion in the gastrointestinal tract, limiting alternative forms of digestion. Given this voluminous diet, the giant panda defecates up to 40 times a day. The limited energy input imposed on it by its diet has affected the panda's behavior. The giant panda tends to limit its social interactions and avoids steeply sloping terrain to limit its energy expenditures. Two of the panda's most distinctive features, its large size and round face, are adaptations to its bamboo diet. Anthropologist Russell Ciochon observed: "[much] like the vegetarian gorilla, the low body surface area to body volume [of the giant panda] is indicative of a lower metabolic rate. This lower metabolic rate and a more sedentary lifestyle allows the giant panda to subsist on nutrient poor resources such as bamboo." Similarly, the giant panda's round face is the result of powerful jaw muscles, which attach from the top of the head to the jaw. Large molars crush and grind fibrous plant material. The morphological characteristics of extinct relatives of the giant panda suggest that while the ancient giant panda was omnivorous 7 million years ago (mya), it only became herbivorous some 2–2.4 mya with the emergence of "A. microta". Genome sequencing of the giant panda suggests that the dietary switch could have initiated from the loss of the sole T1R1/T1R3 umami taste receptor, resulting from two frameshift mutations within the T1R1 exons. Umami taste corresponds to high levels of glutamate as found in meat and may have thus altered the food choice of the giant panda. Although the pseudogenisation of the umami taste receptor in "Ailuropoda" coincides with the dietary switch to herbivory, it is likely a result of, and not the reason for, the dietary change. The mutation time for the T1R1 gene in the giant panda is estimated to 4.2 mya while fossil evidence indicates bamboo consumption in the giant panda species at least 7 mya, signifying that although complete herbivory occurred around 2 mya, the dietary switch was initiated prior to T1R1 loss-of-function. Pandas eat any of 25 bamboo species in the wild, such as "Fargesia dracocephala" and "Fargesia rufa". Only a few bamboo species are widespread at the high altitudes pandas now inhabit. Bamboo leaves contain the highest protein levels; stems have less. 
Because of the synchronous flowering, death, and regeneration of all bamboo within a species, the giant panda must have at least two different species available in its range to avoid starvation. While primarily herbivorous, the giant panda still retains decidedly ursine teeth and will eat meat, fish, and eggs when available. In captivity, zoos typically maintain the giant panda's bamboo diet, though some will provide specially formulated biscuits or other dietary supplements. Pandas will travel between different habitats if they need to, so that they can get the nutrients they need and balance their diet for reproduction. For six years, scientists studied six pandas tagged with GPS collars at the Foping Reserve in the Qinling Mountains. They took note of their foraging and mating habits and analyzed samples of their food and feces. The pandas would move from the valleys into the Qinling Mountains and would only return to the valleys in autumn. During the summer months, bamboo shoots rich in protein are only available at higher altitudes, which causes low calcium levels in the pandas; during the breeding season, the pandas trek back down to eat bamboo leaves rich in calcium. Although adult giant pandas have few natural predators other than humans, young cubs are vulnerable to attacks by snow leopards, yellow-throated martens, eagles, feral dogs, and the Asian black bear. Sub-adults weighing up to may be vulnerable to predation by leopards. The giant panda is a terrestrial animal and primarily spends its life roaming and feeding in the bamboo forests of the Qinling Mountains and in the hilly province of Sichuan. Giant pandas are generally solitary. Each adult has a defined territory and a female is not tolerant of other females in her range. Social encounters occur primarily during the brief breeding season in which pandas in proximity to one another will gather. After mating, the male leaves the female alone to raise the cub. Pandas were thought to fall into the crepuscular category, animals that are active twice a day, at dawn and dusk; however, Jindong Zhang found that pandas may belong to a category all of their own, with activity peaks in the morning, afternoon and midnight. Due to their sheer size, they can be active at any time of the day. Activity is highest in June and decreases in late summer to fall with an increase from November through the following March. Activity is also directly related to the amount of sunlight during colder days. Pandas communicate through vocalisation and scent marking such as clawing trees or spraying urine. They are able to climb and take shelter in hollow trees or rock crevices, but do not establish permanent dens. For this reason, pandas do not hibernate; like other subtropical mammals, they instead move to elevations with warmer temperatures. Pandas rely primarily on spatial memory rather than visual memory. Though the panda is often assumed to be docile, it has been known to attack humans, presumably out of irritation rather than aggression. Initially, the primary method of breeding giant pandas in captivity was by artificial insemination, as they seemed to lose their interest in mating once they were captured. This led some scientists to try extreme methods, such as showing them videos of giant pandas mating and giving the males sildenafil (commonly known by the brand name Viagra).
Only recently have researchers started having success with captive breeding programs, and they have now determined giant pandas have comparable breeding to some populations of the American black bear, a thriving bear species. The normal reproductive rate is considered to be one young every two years. Giant pandas reach sexual maturity between the ages of four and eight, and may be reproductive until age 20. The mating season is between March and May, when a female goes into estrus, which lasts for two or three days and only occurs once a year. When mating, the female is in a crouching, head-down position as the male mounts her from behind. Copulation time is short, ranging from 30 seconds to five minutes, but the male may mount her repeatedly to ensure successful fertilisation. The gestation period ranges from 95 to 160 days. Giant pandas give birth to twins in about half of pregnancies. If twins are born, usually only one survives in the wild. The mother will select the stronger of the cubs, and the weaker cub will die due to starvation. The mother is thought to be unable to produce enough milk for two cubs since she does not store fat. The father has no part in helping raise the cub. When the cub is first born, it is pink, blind, and toothless, weighing only 90 to 130 grams (3.2 to 4.6 ounces), or about th of the mother's weight, proportionally the smallest baby of any placental mammal. It nurses from its mother's breast six to 14 times a day for up to 30 minutes at a time. For three to four hours, the mother may leave the den to feed, which leaves the cub defenseless. One to two weeks after birth, the cub's skin turns grey where its hair will eventually become black. Slight pink colour may appear on cub's fur, as a result of a chemical reaction between the fur and its mother's saliva. A month after birth, the colour pattern of the cub's fur is fully developed. Its fur is very soft and coarsens with age. The cub begins to crawl at 75 to 80 days; mothers play with their cubs by rolling and wrestling with them. The cubs can eat small quantities of bamboo after six months, though mother's milk remains the primary food source for most of the first year. Giant panda cubs weigh 45 kg (100 pounds) at one year and live with their mothers until they are 18 months to two years old. The interval between births in the wild is generally two years. In July 2009, Chinese scientists confirmed the birth of the first cub to be successfully conceived through artificial insemination using frozen sperm. The cub was born at 07:41 on 23 July that year in Sichuan as the third cub of You You, an 11-year-old. The technique for freezing the sperm in liquid nitrogen was first developed in 1980 and the first birth was hailed as a solution to the dwindling availability of giant panda semen, which had led to inbreeding. Panda semen, which can be frozen for decades, could be shared between different zoos to save the species. It is expected that zoos in destinations such as San Diego in the United States and Mexico City will now be able to provide their own semen to inseminate more giant pandas. In August 2014, a rare birth of panda triplets was announced in China; it was the fourth of such births ever reported. Attempts have also been made to reproduce giant pandas by interspecific pregnancy by implanting cloned panda embryos into the uterus of an animal of another species. This has resulted in panda fetuses, but no live births. 
In the past, pandas were thought to be rare and noble creatures – the Empress Dowager Bo was buried with a panda skull in her vault. The grandson of Emperor Taizong of Tang is said to have given Japan two pandas and a sheet of panda skin as a sign of goodwill. Unlike many other animals in Ancient China, pandas were rarely thought to have medical uses. The few known uses include the Sichuan tribal peoples' use of panda urine to melt accidentally swallowed needles, and the use of panda pelts to control menses as described in the Qin Dynasty encyclopedia "Erya". The creature named "mo" (貘) mentioned in some ancient books has been interpreted as the giant panda. The dictionary "Shuowen Jiezi" (Eastern Han Dynasty) says that the "mo", from Shu (Sichuan), is bear-like, but yellow-and-black, although the older "Erya" describes "mo" simply as a "white leopard". The interpretation of the legendary fierce creature "pixiu" (貔貅) as referring to the giant panda is also common. During the reign of the Yongle Emperor (early 15th century), his relative from Kaifeng sent him a captured "zouyu" (), and another "zouyu" was sighted in Shandong. "Zouyu" is a legendary "righteous" animal, which, similarly to a "qilin", only appears during the rule of a benevolent and sincere monarch. It is said to be fierce as a tiger, but gentle and strictly vegetarian, and described in some books as a white tiger with black spots. Puzzled about the real zoological identity of the creature captured during the Yongle era, J.J.L. Duyvendak exclaims, "Can it possibly have been a Pandah?" The comparative obscurity of the giant panda throughout most of China's history is illustrated by the fact that, despite there being a number of depictions of bears in Chinese art starting from its most ancient times, and the bamboo being one of the favorite subjects for Chinese painters, there are no known pre-20th-century artistic representations of giant pandas. The West first learned of the giant panda on 11 March 1869, when the French missionary Armand David received a skin from a hunter. The first Westerner known to have seen a living giant panda is the German zoologist Hugo Weigold, who purchased a cub in 1916. Kermit and Theodore Roosevelt, Jr., became the first Westerners to shoot a panda, on an expedition funded by the Field Museum of Natural History in the 1920s. In 1936, Ruth Harkness became the first Westerner to bring back a live giant panda, a cub named Su Lin, which went to live at the Brookfield Zoo in Chicago. In 1938, Floyd Tangier Smith captured and delivered five giant pandas to London; they arrived on 23 December aboard the SS "Antenor". These five were the first on British soil and were transferred to London Zoo. One, named Grandma, only lasted a few days. She was taxidermied by E. Gerrard and Sons and sold to Leeds City Museum, where she is currently on display to the public. Another, Ming, became London Zoo's first giant panda. Her skull is held by the Royal College of Surgeons of England. Gifts of giant pandas to American and Japanese zoos formed an important part of the diplomacy of the People's Republic of China (PRC) in the 1970s, as they marked some of the first cultural exchanges between China and the West. This practice has been termed "panda diplomacy". By 1984, however, pandas were no longer given as gifts. Instead, China began to offer pandas to other nations only on 10-year loans, under terms including a fee of up to US$1,000,000 per year and a provision that any cubs born during the loan are the property of China.
Since 1998, because of a WWF lawsuit, the United States Fish and Wildlife Service only allows a US zoo to import a panda if the zoo can ensure that China will channel more than half of its loan fee into conservation efforts for the giant panda and its habitat. In May 2005, China offered a breeding pair to Taiwan. The issue became embroiled in cross-Strait relations – both over the underlying symbolism, and over technical issues such as whether the transfer would be considered "domestic" or "international", or whether any true conservation purpose would be served by the exchange. A contest in 2006 to name the pandas was held in the mainland, resulting in the politically charged names Tuan Tuan and Yuan Yuan (from "tuanyuan", meaning "reunion", i.e. "reunification"). China's offer was initially rejected by Chen Shui-bian, then President of Taiwan. However, when Ma Ying-jeou assumed the presidency in 2008, the offer was accepted, and the pandas arrived in December of that year. Microbes in panda waste are being investigated for their use in creating biofuels from bamboo and other plant materials. The giant panda is a vulnerable species, threatened by continued habitat loss and habitat fragmentation, and by a very low birthrate, both in the wild and in captivity. Its range is currently confined to a small portion on the western edge of its historical range, which stretched through southern and eastern China, northern Myanmar, and northern Vietnam. The giant panda has been a target of poaching by locals since ancient times and by foreigners since it was introduced to the West. Starting in the 1930s, foreigners were unable to poach giant pandas in China because of the Second Sino-Japanese War and the Chinese Civil War, but pandas remained a source of soft furs for the locals. The population boom in China after 1949 created stress on the pandas' habitat and the subsequent famines led to the increased hunting of wildlife, including pandas. During the Cultural Revolution, all studies and conservation activities on the pandas were stopped. After the Chinese economic reform, demand for panda skins from Hong Kong and Japan led to illegal poaching for the black market, acts generally ignored by the local officials at the time. In 1963, the PRC government set up Wolong National Nature Reserve to save the declining panda population. In 2006, scientists reported that the number of pandas living in the wild may have been underestimated at about 1,000. Previous population surveys had used conventional methods to estimate the size of the wild panda population, but using a new method that analyzes DNA from panda droppings, scientists believe the wild population may be as large as 3,000. In 2006, there were 40 panda reserves in China, compared to just 13 reserves in 1998. As the species has been reclassified to "vulnerable" since 2016, the conservation efforts are thought to be working. Furthermore, in response to this reclassification, the State Forestry Administration of China announced that they would not accordingly lower the conservation level for panda, and would instead reinforce the conservation efforts. The giant panda is among the world's most adored and protected rare animals, and is one of the few in the world whose natural inhabitant status was able to gain a UNESCO World Heritage Site designation. The Sichuan Giant Panda Sanctuaries, located in the southwest province of Sichuan and covering seven natural reserves, were inscribed onto the World Heritage List in 2006. 
Not all conservationists agree that the money spent on conserving pandas is well spent. Chris Packham has argued that the breeding of pandas in captivity is "pointless" because "there is not enough habitat left to sustain them". Packham argues that the money spent on pandas would be better spent elsewhere, and has said he would "eat the last panda if I could have all the money we have spent on panda conservation put back on the table for me to do more sensible things with", though he has apologised for upsetting people who like pandas. He said, "The panda is possibly one of the grossest wastes of conservation money in the last half century." However, a 2015 paper found that the giant panda can serve as an umbrella species as the preservation of their habitat also helps other endemic species in China, including 70% of the country's forest birds, 70% of mammals and 31% of amphibians. In 2012, Earthwatch Institute, a global nonprofit that teams volunteers with scientists to conduct important environmental research, launched a program called "On the Trail of Giant Panda". This program, based in the Wolong National Nature Reserve, allows volunteers to work up close with pandas cared for in captivity, and help them adapt to life in the wild, so that they may breed, and live longer and healthier lives. Pandas have been kept in zoos as early as the Western Han Dynasty in China, where the writer Sima Xiangru noted that the panda was the most treasured animal in the emperor's garden of exotic animals in the capital Chang'an (present Xi'an). Not until the 1950s were pandas again recorded to have been exhibited in China's zoos. Chi Chi at the London Zoo became very popular. This influenced the World Wildlife Fund to use a panda as its symbol. A 2006 "New York Times" article outlined the economics of keeping pandas, which costs five times more than keeping the next most expensive animal, an elephant. American zoos generally pay the Chinese government $1 million a year in fees, as part of a typical ten-year contract. San Diego's contract with China was to expire in 2008, but got a five-year extension at about half of the previous yearly cost. The last contract, with the Memphis Zoo in Memphis, Tennessee, ended in 2013. The Face of the Giant Panda Sign is an MRI sign in patients with Wilson's disease, named for the midbrain's resemblance to a giant panda's face.
https://en.wikipedia.org/wiki?curid=12713
Glissando In music, a glissando (; plural: "glissandi", abbreviated "gliss.") is a glide from one pitch to another (). It is an Italianized musical term derived from the French "glisser", "to glide". In some contexts, it is distinguished from the continuous portamento. Some colloquial equivalents are slide, sweep (referring to the "discrete glissando" effects on guitar and harp, respectively), bend, smear, rip (for a loud, violent gliss to the beginning of a note), lip (in jazz terminology, when executed by changing one's embouchure on a wind instrument), plop, or falling hail (a glissando on a harp using the back of the fingernails). Prescriptive attempts to distinguish the glissando from the portamento by limiting the former to the filling in of discrete intermediate pitches on instruments like the piano, harp, and fretted stringed instruments have run up against established usage of instruments like the trombone and timpani. The latter could thus be thought of as capable of either "glissando" or "portamento", depending on whether the drum was rolled or not. The clarinet gesture that opens "Rhapsody in Blue" could likewise be thought of either way: it was originally planned as a glissando (Gershwin's score labels each individual note) but is in practice played as a portamento, though described as a glissando. The glissando is indicated by following the initial note with a line, sometimes wavy, in the desired direction, often accompanied by the abbreviation "gliss.". Occasionally, the desired notes are notated in the standard method (e.g., as semiquavers) accompanied by the word 'glissando'. On some instruments (e.g., piano, harp, xylophone), discrete tones are clearly audible when sliding. For example, on a keyboard, a player's fingertips can be made to slide across the white keys or over the black keys, producing either a C major scale or an F♯ major pentatonic scale (or their relative modes); or, by performing both at once, it is possible to produce a full chromatic scale. A brief illustrative sketch of these keyboard pitch collections appears at the end of this entry. Pianists can also play a glissando on two pitches an octave apart. Maurice Ravel used glissandi in many of his piano compositions, and "Alborada del Gracioso" contains notable piano glissando passages in thirds executed by the right hand. Rachmaninoff, Prokofiev, Liszt and Gershwin have all used glissandi for piano in notable compositions. Organ players, particularly in contemporary music, sometimes employ an effect known as the palm glissando, where over the course of the glissando the flat of the hand is used to depress a wide area of keys simultaneously, resulting in a dramatic atonal sweep. Similar devices on the piano are cluster-glissandos, used extensively by Karlheinz Stockhausen in "Klavierstück X", which "more than anything else, lend the work its unique aural flavour". On a harp, the player can slide their finger across the strings, quickly playing the scale (or on pedal harp even arpeggios such as C-D-E-F-G-A-B). Wind, brass, and fretted-stringed-instrument players can perform an extremely rapid chromatic scale (e.g., sliding up or down a string quickly on a fretted instrument). Arpeggio effects (likewise named glissando) are also obtained by bowed strings (playing harmonics) and brass, especially the horn. Musical instruments capable of continuous pitch variation can effect a portamento over a substantial range.
These include unfretted stringed instruments (such as the violin, viola, cello and double bass, and fretless guitars), stringed instruments with a way of stretching the strings (such as the guitar, veena, or sitar), a fretted guitar or lap steel guitar when accompanied with the use of a slide, wind instruments without valves or stops (such as the trombone or slide whistle), timpani (kettledrums), electronic instruments (such as the theremin, the ondes Martenot, synthesizers and keytars), the water organ, and the human voice. Other wind instruments can effect a similar limited slide by altering the lip pressure (on trumpet, for example) or a combination of embouchure and rolling the head joint (as on the flute), while others such as the clarinet can achieve this by slowly dragging fingers off tone holes or changing the oral cavity's resonance by manipulating tongue position, embouchure, and throat shaping. Many electric guitars are fitted with a tremolo arm which can produce either a portamento, a vibrato, or a combination of both (but not a true tremolo despite the name). A bent note is a musical note that is varied in pitch. With unfretted strings or other continuous-pitch instruments such as the trombone, or with the human voice, such variation is more properly described in terms of intonation. Bent notes are commonly played on fretted instruments, literally by bending the string with excess finger pressure, or on free reed instruments such as the harmonica, by using excess air pressure to overblow the reed. On brass instruments such as the trumpet, the note is bent by using the lip. "Indeterminately pitched instruments [such as unpitched percussion instruments and friction drum rolls]...produce a pitch or pitch spectrum that becomes higher with an increase of dynamic and lower with a decrease of dynamic." The bent note is commonly found in various forms of jazz, blues, and rock.
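As a purely illustrative aside on the keyboard glissando described earlier in this entry, the short Python sketch below enumerates the pitch classes swept by a one-octave glissando over the white keys, the black keys, or both rows at once; the note spellings and helper names are this sketch's own and are not taken from any cited source.

```python
# Pitch classes numbered 0-11 with C = 0. On a standard keyboard the black
# keys correspond to pitch classes {1, 3, 6, 8, 10}; the rest are white keys.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
BLACK_KEYS = {1, 3, 6, 8, 10}

def glissando_pitch_classes(rows):
    """Return the pitch classes swept by a one-octave keyboard glissando."""
    if rows == "white":
        selected = [pc for pc in range(12) if pc not in BLACK_KEYS]
    elif rows == "black":
        selected = [pc for pc in range(12) if pc in BLACK_KEYS]
    else:  # both rows together cover the full chromatic collection
        selected = list(range(12))
    return [NOTE_NAMES[pc] for pc in selected]

print("white keys:", glissando_pitch_classes("white"))  # C major scale
print("black keys:", glissando_pitch_classes("black"))  # F#/Gb major pentatonic
print("both rows: ", glissando_pitch_classes("both"))   # chromatic collection
```

Spelled from F♯, the black-key set is F♯, G♯, A♯, C♯, D♯, the major pentatonic collection mentioned above; starting the sweep on a different key yields one of the relative modes of these collections.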
https://en.wikipedia.org/wiki?curid=12716
Giraffe The giraffe ("Giraffa") is an African artiodactyl mammal, the tallest living terrestrial animal and the largest ruminant. It is traditionally considered to be one species, "Giraffa camelopardalis", with nine subspecies. However, the existence of up to eight extant giraffe species has been described, based upon research into the mitochondrial and nuclear DNA, as well as morphological measurements of "Giraffa". Seven other species are extinct, prehistoric species known from fossils. The giraffe's chief distinguishing characteristics are its extremely long neck and legs, its horn-like ossicones, and its distinctive coat patterns. It is classified under the family Giraffidae, along with its closest extant relative, the okapi. Its scattered range extends from Chad in the north to South Africa in the south, and from Niger in the west to Somalia in the east. Giraffes usually inhabit savannahs and woodlands. Their food source is leaves, fruits and flowers of woody plants, primarily acacia species, which they browse at heights most other herbivores cannot reach. They may be preyed on by lions, leopards, spotted hyenas and African wild dogs. Giraffes live in herds of related females and their offspring, or bachelor herds of unrelated adult males, but are gregarious and may gather in large aggregations. Males establish social hierarchies through "necking", which are combat bouts where the neck is used as a weapon. Dominant males gain mating access to females, which bear the sole responsibility for raising the young. The giraffe has intrigued various cultures, both ancient and modern, for its peculiar appearance, and has often been featured in paintings, books, and cartoons. It is classified by the International Union for Conservation of Nature as vulnerable to extinction, and has been extirpated from many parts of its former range. Giraffes are still found in numerous national parks and game reserves but estimates as of 2016 indicate that there are approximately 97,500 members of "Giraffa" in the wild. More than 1,600 were kept in zoos in 2010. The name "giraffe" has its earliest known origins in the Arabic word (), perhaps borrowed from the animal's Somali name . The Arab name is translated as "fast-walker". There were several Middle English spellings, such as , , and . The Italian form arose in the 1590s. The modern English form developed around 1600 from the French . "Camelopard" is an archaic English name for the giraffe deriving from the Ancient Greek for camel and leopard, referring to its camel-like shape and its leopard-like colouring. Living giraffes were originally classified as one species by Carl Linnaeus in 1758. He gave it the binomial name "Cervus camelopardalis". Morten Thrane Brünnich classified the genus "Giraffa" in 1772. The species name "camelopardalis" is from Latin. The giraffe is one of only two living genera of the family Giraffidae in the order Artiodactyla, the other being the okapi. The family was once much more extensive, with over 10 fossil genera described. Their closest known relatives may have been the extinct deer-like climacocerids. They, together with the family Antilocapridae (whose only extant species is the pronghorn), have been placed in the superfamily Giraffoidea. These animals may have evolved from the extinct family Palaeomerycidae which might also have been the ancestor of deer. The elongation of the neck appears to have started early in the giraffe lineage. 
Comparisons between giraffes and their ancient relatives suggest that vertebrae close to the skull lengthened earlier, followed by lengthening of vertebrae further down. One early giraffid ancestor was "Canthumeryx" which has been dated variously to have lived 25–20 million years ago (mya), 17–15 mya or 18–14.3 mya and whose deposits have been found in Libya. This animal was medium-sized, slender and antelope-like. "Giraffokeryx" appeared 15 mya in the Indian subcontinent and resembled an okapi or a small giraffe, and had a longer neck and similar ossicones. "Giraffokeryx" may have shared a clade with more massively built giraffids like "Sivatherium" and "Bramatherium". Giraffids like "Palaeotragus", "Shansitherium" and "Samotherium" appeared 14 mya and lived throughout Africa and Eurasia. These animals had bare ossicones and small cranial sinuses and were longer with broader skulls. "Paleotragus" resembled the okapi and may have been its ancestor. Others find that the okapi lineage diverged earlier, before "Giraffokeryx". "Samotherium" was a particularly important transitional fossil in the giraffe lineage as its cervical vertebrae was intermediate in length and structure between a modern giraffe and an okapi, and was more vertical than the okapi's. "Bohlinia", which first appeared in southeastern Europe and lived 9–7 mya was likely a direct ancestor of the giraffe. "Bohlinia" closely resembled modern giraffes, having a long neck and legs and similar ossicones and dentition. "Bohlinia" entered China and northern India in response to climate change. From there, the genus "Giraffa" evolved and, around 7 mya, entered Africa. Further climate changes caused the extinction of the Asian giraffes, while the African giraffes survived and radiated into several new species. Living giraffes appear to have arisen around 1 mya in eastern Africa during the Pleistocene. Some biologists suggest the modern giraffes descended from "G. jumae"; others find "G. gracilis" a more likely candidate. "G. jumae" was larger and more heavily built while "G. gracilis" was smaller and more lightly built. The main driver for the evolution of the giraffes is believed to have been the changes from extensive forests to more open habitats, which began 8 mya. During this time, tropical plants disappeared and were replaced by arid C4 plants, and a dry savannah emerged across eastern and northern Africa and western India. Some researchers have hypothesised that this new habitat coupled with a different diet, including acacia species, may have exposed giraffe ancestors to toxins that caused higher mutation rates and a higher rate of evolution. The coat patterns of modern giraffes may also have coincided with these habitat changes. Asian giraffes are hypothesised to have had more okapi-like colourations. The giraffe genome is around 2.9 billion base pairs in length compared to the 3.3 billion base pairs of the okapi. Of the proteins in giraffe and okapi genes, 19.4% are identical. The two species are equally distantly related to cattle, suggesting the giraffe's unique characteristics are not because of faster evolution. The divergence of giraffe and okapi lineages dates to around 11.5 mya. A small group of regulatory genes in the giraffe appear to be responsible for the animal's stature and associated circulatory adaptations. The IUCN currently recognises only one species of giraffe with nine subspecies. In 2001, a two-species taxonomy was proposed. 
A 2007 study on the genetics of giraffes suggested there were six species: the West African, Rothschild's, reticulated, Masai, Angolan, and South African giraffe. The study deduced from genetic differences in nuclear and mitochondrial DNA (mtDNA) that giraffes from these populations are reproductively isolated and rarely interbreed, though no natural obstacles block their mutual access. This includes adjacent populations of Rothschild's, reticulated, and Masai giraffes. The Masai giraffe was also suggested to consist of possibly two species separated by the Rift Valley. Reticulated and Masai giraffes were found to have the highest mtDNA diversity, which is consistent with giraffes originating in eastern Africa. Populations further north are more closely related to the former, while those to the south are more related to the latter. Giraffes appear to select mates of the same coat type, which is imprinted on them as calves. A 2011 study using detailed analyses of the morphology of giraffes, and application of the phylogenetic species concept, described eight species of living giraffes. A 2016 study also concluded that living giraffes consist of multiple species. The researchers suggested the existence of four species, which have not exchanged genetic information between each other for 1 to 2 million years. Since then, a response to this publication has been published, highlighting seven problems in data interpretation and concluding that "the conclusions should not be accepted unconditionally". A 2020 study showed that, depending on the method chosen, different taxonomic hypotheses recognizing from two to six species can be considered for the genus "Giraffa". That study also found that multispecies coalescent methods can lead to taxonomic over-splitting, as those methods delimit geographic structure rather than species. The 3-species hypothesis, which recognises "G. camelopardalis", "G. giraffa", and "G. tippelskirchi", is highly supported by phylogenetic analyses and also corroborated by most population genetic and multispecies coalescent analyses. Seven extinct species of giraffe are also recognised. "G. attica", also extinct, was formerly considered part of "Giraffa" but was reclassified as "Bohlinia attica" in 1929. Fully grown giraffes stand tall, with males taller than females. The tallest recorded male was and the tallest recorded female was tall. The average weight is for an adult male and for an adult female with maximum weights of and having been recorded for males and females, respectively. Despite its long neck and legs, the giraffe's body is relatively short. Located at both sides of the head, the giraffe's large, bulging eyes give it good all-round vision from its great height. Giraffes see in colour and their senses of hearing and smell are also sharp. The animal can close its muscular nostrils to protect against sandstorms and ants. The giraffe's prehensile tongue is about long. It is purplish-black in colour, perhaps to protect against sunburn, and is useful for grasping foliage, as well as for grooming and cleaning the animal's nose. The upper lip of the giraffe is also prehensile and useful when foraging, and is covered in hair to protect against thorns. The tongue and inside of the mouth are covered in papillae. The coat has dark blotches or patches (which can be orange, chestnut, brown, or nearly black in colour) separated by light hair (usually white or cream in colour). Male giraffes become darker as they age.
The coat pattern has been claimed to serve as camouflage in the light and shade patterns of savannah woodlands. Giraffe calves inherit some spot pattern traits from their mothers, and variation in some spot traits is correlated with neonatal survival. The skin underneath the dark areas may serve as windows for thermoregulation, being sites for complex blood vessel systems and large sweat glands. Each individual giraffe has a unique coat pattern. The skin of a giraffe is mostly gray or tan. Its thickness allows the animal to run through thorn bushes without being punctured. The fur may serve as a chemical defence, as its parasite repellents give the animal a characteristic scent. At least 11 main aromatic chemicals are in the fur, although indole and 3-methylindole are responsible for most of the smell. Because the males have a stronger odour than the females, the odour may also have a sexual function. Along the animal's neck is a mane made of short, erect hairs. The one-metre (3.3-ft) tail ends in a long, dark tuft of hair and is used as a defence against insects. Both sexes have prominent horn-like structures called ossicones, which are formed from ossified cartilage, covered in skin and fused to the skull at the parietal bones. Being vascularized, the ossicones may have a role in thermoregulation, and are also used in combat between males. Appearance is a reliable guide to the sex or age of a giraffe: the ossicones of females and young are thin and display tufts of hair on top, whereas those of adult males end in knobs and tend to be bald on top. Also, a median lump, which is more prominent in males, emerges at the front of the skull. Males develop calcium deposits that form bumps on their skulls as they age. A giraffe's skull is lightened by multiple sinuses. However, as males age, their skulls become heavier and more club-like, helping them become more dominant in combat. The upper jaw has a grooved palate and lacks front teeth. The giraffe's molars have a rough surface. The front and back legs of a giraffe are about the same length. The radius and ulna of the front legs are articulated by the carpus, which, while structurally equivalent to the human wrist, functions as a knee. It appears that a suspensory ligament allows the lanky legs to support the animal's great weight. The foot of the giraffe reaches a diameter of , and the hoof is high in males and in females. The rear of each hoof is low and the fetlock is close to the ground, allowing the foot to provide additional support to the animal's weight. Giraffes lack dewclaws and interdigital glands. The giraffe's pelvis, though relatively short, has an ilium that is outspread at the upper ends. A giraffe has only two gaits: walking and galloping. Walking is done by moving the legs on one side of the body at the same time, then doing the same on the other side. When galloping, the hind legs move around the front legs before the latter move forward, and the tail will curl up. The animal relies on the forward and backward motions of its head and neck to maintain balance and the counter momentum while galloping. The giraffe can reach a sprint speed of up to , and can sustain that pace for several kilometres. A giraffe rests by lying with its body on top of its folded legs. To lie down, the animal kneels on its front legs and then lowers the rest of its body. To get back up, it first gets on its knees and spreads its hind legs to raise its hindquarters. It then straightens its front legs. With each step, the animal swings its head. 
In captivity, the giraffe sleeps intermittently around 4.6 hours per day, mostly at night. It usually sleeps lying down; however, standing sleeps have been recorded, particularly in older individuals. Intermittent short "deep sleep" phases while lying are characterised by the giraffe bending its neck backwards and resting its head on the hip or thigh, a position believed to indicate paradoxical sleep. If the giraffe wants to bend down to drink, it either spreads its front legs or bends its knees. Giraffes would probably not be competent swimmers as their long legs would be highly cumbersome in the water, although they could possibly float. When swimming, the thorax would be weighed down by the front legs, making it difficult for the animal to move its neck and legs in harmony or keep its head above the surface. The giraffe has an extremely elongated neck, which can be up to in length, accounting for much of the animal's vertical height. The long neck results from a disproportionate lengthening of the cervical vertebrae, not from the addition of more vertebrae. Each cervical vertebra is over long. They comprise 52–54 per cent of the length of the giraffe's vertebral column, compared with the 27–33 percent typical of similar large ungulates, including the giraffe's closest living relative, the okapi. This elongation largely takes place after birth, perhaps because giraffe mothers would have a difficult time giving birth to young with the same neck proportions as adults. The giraffe's head and neck are held up by large muscles and a strengthened nuchal ligament, which are anchored by long dorsal spines on the anterior thoracic vertebrae, giving the animal a hump. The giraffe's neck vertebrae have ball and socket joints. In particular, the atlas–axis joint (C1 and C2) allows the animal to tilt its head vertically and reach more branches with the tongue. The point of articulation between the cervical and thoracic vertebrae of giraffes is shifted to lie between the first and second thoracic vertebrae (T1 and T2), unlike most other ruminants where the articulation is between the seventh cervical vertebra (C7) and T1. This allows C7 to contribute directly to increased neck length and has given rise to the suggestion that T1 is actually C8, and that giraffes have added an extra cervical vertebra. However, this proposition is not generally accepted, as T1 has other morphological features, such as an articulating rib, deemed diagnostic of thoracic vertebrae, and because exceptions to the mammalian limit of seven cervical vertebrae are generally characterised by increased neurological anomalies and maladies. There are several hypotheses regarding the evolutionary origin and maintenance of elongation in giraffe necks. The "competing browsers hypothesis" was originally suggested by Charles Darwin and challenged only recently. It suggests that competitive pressure from smaller browsers, such as kudu, steenbok and impala, encouraged the elongation of the neck, as it enabled giraffes to reach food that competitors could not. This advantage is real, as giraffes can and do feed up to high, while even quite large competitors, such as kudu, can feed up to only about high. There is also research suggesting that browsing competition is intense at lower levels, and giraffes feed more efficiently (gaining more leaf biomass with each mouthful) high in the canopy. 
However, scientists disagree about just how much time giraffes spend feeding at levels beyond the reach of other browsers, and a 2010 study found that adult giraffes with longer necks actually suffered higher mortality rates under drought conditions than their shorter-necked counterparts. This study suggests that maintaining a longer neck requires more nutrients, which puts longer-necked giraffes at risk during a food shortage. Another theory, the sexual selection hypothesis, proposes that the long necks evolved as a secondary sexual characteristic, giving males an advantage in "necking" contests (see below) to establish dominance and obtain access to sexually receptive females. In support of this theory, necks are longer and heavier for males than females of the same age, and the former do not employ other forms of combat. However, one objection is that it fails to explain why female giraffes also have long necks. It has also been proposed that the neck serves to give the animal greater vigilance. In mammals, the left recurrent laryngeal nerve is longer than the right; in the giraffe it is over longer. These nerves are longer in the giraffe than in any other living animal; the left nerve is over long. Each nerve cell in this path begins in the brainstem and passes down the neck along the vagus nerve, then branches off into the recurrent laryngeal nerve, which passes back up the neck to the larynx. Thus, these nerve cells have a length of nearly in the largest giraffes. The structure of a giraffe's brain resembles that of domestic cattle. It is kept cool by evaporative heat loss in the nasal passages. The shape of the skeleton gives the giraffe a small lung volume relative to its mass. Its long neck gives it a large amount of dead space, in spite of its narrow windpipe. These factors increase the resistance to airflow. Nevertheless, the animal can still supply enough oxygen to its tissues and it can increase its respiratory rate and oxygen diffusion when running. The circulatory system of the giraffe has several adaptations for its great height. Its heart, which can weigh more than and measures about long, must generate approximately double the blood pressure required for a human to maintain blood flow to the brain. As such, the wall of the heart can be as thick as . Giraffes have unusually high heart rates for their size, at 150 beats per minute. When the animal lowers its head, the blood rushes down fairly unopposed and a rete mirabile in the upper neck, with its large cross-sectional area, prevents excess blood flow to the brain. When it raises its head again, the blood vessels constrict and direct blood into the brain so the animal does not faint. The jugular veins contain several (most commonly seven) valves to prevent blood flowing back into the head from the inferior vena cava and right atrium while the head is lowered. Conversely, the blood vessels in the lower legs are under great pressure because of the weight of fluid pressing down on them. To solve this problem, the skin of the lower legs is thick and tight, preventing too much blood from pouring into them. Giraffes have oesophageal muscles that are unusually strong to allow regurgitation of food from the stomach up the neck and into the mouth for rumination. They have four-chambered stomachs, as in all ruminants, and the first chamber has adapted to their specialised diet. The intestines of an adult giraffe measure more than in length and have a relatively small ratio of small to large intestine. 
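The circulatory figures above can be sanity-checked with a rough hydrostatic estimate. The short Python sketch below assumes a blood density of about 1050 kg/m3 and a vertical heart-to-head distance of roughly 2 m; both numbers are illustrative assumptions rather than values taken from this article.

```python
# Back-of-envelope estimate of the hydrostatic pressure needed to lift blood
# from a giraffe's heart to its head. The density and height are assumptions
# chosen for illustration, not measurements from the text.
RHO_BLOOD = 1050        # kg/m^3, approximate density of blood
G = 9.81                # m/s^2, gravitational acceleration
HEART_TO_HEAD_M = 2.0   # assumed vertical column in a standing adult

pressure_pa = RHO_BLOOD * G * HEART_TO_HEAD_M   # ~20,600 Pa
pressure_mmhg = pressure_pa / 133.322           # ~155 mmHg

print(f"Hydrostatic head: {pressure_mmhg:.0f} mmHg")
```

Under these assumptions, merely lifting blood to the brain costs on the order of 150 mmHg, which is consistent with the statement that the giraffe's heart must generate roughly double the blood pressure a human requires.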
The liver of the giraffe is small and compact. A gallbladder is generally present during fetal life, but it may disappear before birth. Giraffes usually inhabit savannahs and open woodlands. They prefer Acacieae, "Commiphora", "Combretum" and open "Terminalia" woodlands over denser environments like "Brachystegia" woodlands. The Angolan giraffe can be found in desert environments. Giraffes browse on the twigs of trees, preferring trees of the subfamily Acacieae and the genera "Commiphora" and "Terminalia", which are important sources of calcium and protein to sustain the giraffe's growth rate. They also feed on shrubs, grass and fruit. A giraffe eats around of foliage daily. When stressed, giraffes may chew the bark off branches. Although herbivorous, the giraffe has been known to visit carcasses and lick dried meat off bones. During the wet season, food is abundant and giraffes are more spread out, while during the dry season, they gather around the remaining evergreen trees and bushes. Mothers tend to feed in open areas, presumably to make it easier to detect predators, although this may reduce their feeding efficiency. As a ruminant, the giraffe first chews its food, then swallows it for processing and then visibly passes the half-digested cud up the neck and back into the mouth to chew again. It is common for a giraffe to salivate while feeding. The giraffe requires less food than many other herbivores because the foliage it eats has more concentrated nutrients and it has a more efficient digestive system. The animal's faeces come in the form of small pellets. When it has access to water, a giraffe drinks at intervals no longer than three days. Giraffes have a great effect on the trees that they feed on, delaying the growth of young trees for some years and giving "waistlines" to trees that are too tall. Feeding is at its highest during the first and last hours of daytime. Between these hours, giraffes mostly stand and ruminate. Rumination is the dominant activity during the night, when it is mostly done lying down. Giraffes are usually found in groups that vary in size and composition according to ecological, anthropogenic, temporal, and social factors. Traditionally, the composition of these groups had been described as open and ever-changing. For research purposes, a "group" has been defined as "a collection of individuals that are less than a kilometre apart and moving in the same general direction." More recent studies have found that giraffes have long-term social associations and may form groups or pairs based on kinship, sex or other factors. These groups may regularly associate with one another in larger communities or sub-communities within a fission–fusion society. The number of giraffes in a group can range up to 66 individuals. Giraffe groups tend to be sex-segregated, although mixed-sex groups made of adult females and young males are known to occur. Particularly stable giraffe groups are those made of mothers and their young, which can last weeks or months. Social cohesion in these groups is maintained by the bonds formed between calves. Female association appears to be based on space-use and individuals may be matrilineally related. In general, females are more selective than males about which individuals of the same sex they associate with. Young males also form groups and will engage in playfights. However, as they get older, males become more solitary but may also associate in pairs or with female groups. 
Giraffes are not territorial, but they have home ranges that vary according to rainfall and proximity to human settlements. Male giraffes occasionally wander far from areas that they normally frequent. Although generally quiet and non-vocal, giraffes have been heard to communicate using various sounds. During courtship, males emit loud coughs. Females call their young by bellowing. Calves will emit snorts, bleats, mooing and mewing sounds. Giraffes also snore, hiss, moan, grunt and make flute-like sounds. During nighttime, giraffes appear to hum to each other above the infrasound range for purposes which are unclear. Reproduction in giraffes is broadly polygamous: a few older males mate with the fertile females. Male giraffes assess female fertility by tasting the female's urine to detect oestrus, in a multi-step process known as the flehmen response. Males prefer young adult females over juveniles and older adults. Once an oestrous female is detected, the male will attempt to court her. When courting, dominant males will keep subordinate ones at bay. A courting male may lick a female's tail, rest his head and neck on her body or nudge her with his horns. During copulation, the male stands on his hind legs with his head held up and his front legs resting on the female's sides. Giraffe gestation lasts 400–460 days, after which a single calf is normally born, although twins occur on rare occasions. The mother gives birth standing up. The calf emerges head and front legs first, having broken through the fetal membranes, and falls to the ground, severing the umbilical cord. The mother then grooms the newborn and helps it stand up. A newborn giraffe is tall. Within a few hours of birth, the calf can run around and is almost indistinguishable from a one-week-old. However, for the first 1–3 weeks, it spends most of its time hiding, its coat pattern providing camouflage. The ossicones, which have lain flat while it was in the womb, become erect within a few days. Mothers with calves will gather in nursery herds, moving or browsing together. Mothers in such a group may sometimes leave their calves with one female while they forage and drink elsewhere. This is known as a "calving pool". Adult males play almost no role in raising the young, although they appear to have friendly interactions. Calves are at risk of predation, and a mother giraffe will stand over her calf and kick at an approaching predator. Females watching calving pools will only alert their own young if they detect a disturbance, although the others will take notice and follow. The length of time offspring stay with their mother varies, though it can last until the female's next calving. Likewise, calves may suckle for only a month or as long as a year. Females become sexually mature when they are four years old, while males become mature at four or five years. Spermatogenesis in male giraffes begins at three to four years of age. Males must wait until they are at least seven years old to gain the opportunity to mate. Male giraffes use their necks as weapons in combat, a behaviour known as "necking". Necking is used to establish dominance, and males that win necking bouts have greater reproductive success. This behaviour occurs at low or high intensity. In low intensity necking, the combatants rub and lean against each other. The male that can hold itself more erect wins the bout. 
In high intensity necking, the combatants will spread their front legs and swing their necks at each other, attempting to land blows with their ossicones. The contestants will try to dodge each other's blows and then get ready to counter. The power of a blow depends on the weight of the skull and the arc of the swing. A necking duel can last more than half an hour, depending on how well matched the combatants are. Although most fights do not lead to serious injury, there have been records of broken jaws, broken necks, and even deaths. After a duel, it is common for two male giraffes to caress and court each other. Such interactions between males have been found to be more frequent than heterosexual coupling. In one study, up to 94 percent of observed mounting incidents took place between males. The proportion of same-sex activities varied from 30–75 percent. Only one percent of same-sex mounting incidents occurred between females. Giraffes have high adult survival probability, and an unusually long lifespan compared to other ruminants, up to 38 years. Because of their size, eyesight and powerful kicks, adult giraffes are usually not subject to predation, although lions may regularly prey on individuals up to . Giraffes are the most common food source for the big cats in Kruger National Park, comprising nearly a third of the meat consumed, although only a small portion of the giraffes were probably killed by predators, as a majority of the consumed giraffes appeared to be scavenged. Nile crocodiles can also be a threat to giraffes when they bend down to drink. Calves are much more vulnerable than adults and are additionally preyed on by leopards, spotted hyenas and wild dogs. A quarter to a half of giraffe calves reach adulthood. Calf survival varies according to the season of birth, with calves born during the dry season having higher survival rates. The local, seasonal presence of large herds of migratory wildebeests and zebras reduces predation pressure on giraffe calves and increases their survival probability. In turn, it has been suggested that other ungulates may benefit from associating with giraffes as their height allows them to spot predators from further away. Zebras were found to glean information on predation risk from giraffe body language and spend less time scanning the environment when giraffes are present. Some parasites feed on giraffes. They are often hosts for ticks, especially in the area around the genitals, which has thinner skin than other areas. Tick species that commonly feed on giraffes are those of genera "Hyalomma", "Amblyomma" and "Rhipicephalus". Giraffes may rely on red-billed and yellow-billed oxpeckers to clean them of ticks and alert them to danger. Giraffes host numerous species of internal parasite and are susceptible to various diseases. They were victims of the (now eradicated) viral illness rinderpest. Giraffes can also suffer from a skin disorder, which comes in the form of wrinkles, lesions or raw fissures. In Tanzania, it appears to be caused by a nematode, and may be further affected by secondary infections. As much as 79% of giraffes show signs of the disease in Ruaha National Park, but it did not cause mortality in Tarangire and is less prevalent in areas with fertile soils. Humans have interacted with giraffes for millennia. They were depicted in art throughout the African continent, including that of the Kiffians, Egyptians and Kushites. 
The Kiffians were responsible for a life-size rock engraving of two giraffes, dated 8,000 years ago, that has been called the "world's largest rock art petroglyph". The San people of southern Africa have medicine dances named after some animals; the giraffe dance is performed to treat head ailments. How the giraffe got its height has been the subject of various African folktales, including one from eastern Africa which explains that the giraffe grew tall from eating too many magic herbs. According to a tale in Tanzania, the giraffe was given both its height and silence when it asked the creator for the gift of wisdom. It could now see and hear all but could not speak, as "silence is wisdom". The Dinka people of the Sudan have traditionally considered the giraffe to be their clan animal and the earthly representative of their deity. The Tugen people of modern Kenya used the giraffe to depict their god Mda. The Egyptians gave the giraffe its own hieroglyph, named 'sr' in Old Egyptian and 'mmy' in later periods. They also kept giraffes as pets and shipped them around the Mediterranean. The giraffe was also known to the Greeks and Romans, who believed that it was an unnatural hybrid of a camel and a leopard or a panther and called it "camelopardalis". The giraffe was among the many animals collected and displayed by the Romans. The first one in Rome was brought in by Julius Caesar in 46 BC and exhibited to the public. With the fall of the Western Roman Empire, the housing of giraffes in Europe declined. During the Middle Ages, giraffes were known to Europeans through contact with the Arabs, who revered the giraffe for its peculiar appearance. In the early centuries of the first millennium AD, there was a misconception in Europe that giraffes were native to India. Cassianus Bassus, in one of his letters, claimed that giraffes were brought from India. In the 1550s, the French explorer Andre Thevet wrote that giraffes were found "in the high mountains in interior India beyond the river Ganges", and he concluded that the giraffe was the very "image of the learned and educated man". Individual captive giraffes were given celebrity status throughout history. In 1414, a giraffe was shipped from Malindi to Bengal. It was then taken to China by explorer Zheng He and placed in a Ming dynasty zoo. The animal was a source of fascination for the Chinese people, who associated it with the mythical Qilin. The Medici giraffe was a giraffe presented to Lorenzo de' Medici in 1486. It caused a great stir on its arrival in Florence. Zarafa, another famous giraffe, was brought from Egypt to Paris in the early 19th century as a gift from Muhammad Ali of Egypt to Charles X of France. A sensation, the giraffe was the subject of numerous memorabilia or "giraffanalia". Giraffes continue to have a presence in modern culture. Salvador Dalí depicted them with burning manes in some of his surrealist paintings. Dalí considered the giraffe to be a symbol of masculinity, and a flaming giraffe was meant to be a "masculine cosmic apocalyptic monster". Several children's books feature the giraffe, including David A. Ufer's "The Giraffe Who Was Afraid of Heights", Giles Andreae's "Giraffes Can't Dance" and Roald Dahl's "The Giraffe and the Pelly and Me". Giraffes have appeared in animated films, as minor characters in Disney's "The Lion King" and "Dumbo", and in more prominent roles in "The Wild" and in the "Madagascar" films. Sophie the Giraffe has been a popular teether since 1961. 
Another famous fictional giraffe is the Toys "R" Us mascot Geoffrey the Giraffe. The giraffe has also been the subject of some scientific experiments and discoveries. Scientists have looked at the properties of giraffe skin when developing suits for astronauts and fighter pilots because the people in these professions are in danger of passing out if blood rushes to their legs. Computer scientists have modeled the coat patterns of several subspecies using reaction–diffusion mechanisms. The constellation of Camelopardalis, introduced in the seventeenth century, depicts a giraffe. The Tswana people of Botswana traditionally see the constellation Crux as two giraffes – Acrux and Mimosa forming a male, and Gacrux and Delta Crucis forming the female. In 2010, giraffes were assessed as Least Concern from a conservation perspective by the International Union for Conservation of Nature (IUCN), but the 2016 assessment categorized giraffes as Vulnerable. Giraffes have been extirpated from much of their historic range, including Eritrea, Guinea, Mauritania and Senegal. They may also have disappeared from Angola, Mali, and Nigeria, but have been introduced to Rwanda and Swaziland. The Masai and reticulated subspecies are endangered, and the Rothschild subspecies is near threatened. The Nubian subspecies is critically endangered. In 1997, Jonathan Kingdon suggested that the Nubian giraffe was the most threatened of all giraffes; it may number about 450 individuals. Private game reserves have contributed to the preservation of giraffe populations in southern Africa. Giraffes were probably common targets for hunters throughout Africa. Different parts of their bodies were used for different purposes. Their meat was used for food. The tail hairs served as flyswatters, bracelets, necklaces and thread. Shields, sandals and drums were made using the skin, and the strings of musical instruments were from the tendons. The smoke from burning giraffe skins was used by the medicine men of Buganda to treat nose bleeds. The Humr people of Kordofan, Sudan, consume the drink Umm Nyolokh, which is prepared from the liver and bone marrow of giraffes. Richard Rudgley hypothesised that Umm Nyolokh might contain DMT. The drink is said to cause hallucinations of giraffes, believed to be the giraffes' ghosts by the Humr. In the 19th century, European explorers began to hunt them for sport. Habitat destruction has hurt the giraffe, too: in the Sahel, the need for firewood and grazing room for livestock has led to deforestation. Normally, giraffes can coexist with livestock, since they do not directly compete with them. In 2017, severe droughts in northern Kenya led to increased tensions over land and the killing of wildlife by herders, with giraffe populations being hit particularly hard. Giraffe Manor is a popular hotel in Nairobi that also serves as a sanctuary for Rothschild's giraffes. The giraffe is a protected species in most of its range. It is the national animal of Tanzania, and is protected by law. Unauthorised killing can result in imprisonment. The UN-backed Convention on Migratory Species selected giraffes for protection in 2017. In 1999, it was estimated that over 140,000 giraffes existed in the wild; estimates as of 2016 indicate that there are approximately 97,500 members of "Giraffa" in the wild, down from 155,000 in 1985. More than 1,600 were kept in captivity at Species360-registered zoos (not including non-Species360 zoos or any kept by private people). 
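The reaction–diffusion modelling of coat patterns mentioned above can be illustrated with a minimal sketch. The Python snippet below implements the Gray–Scott system, one standard reaction–diffusion model that produces spot- and blotch-like patterns; the grid size, parameter values and seeding are illustrative assumptions, not the settings used in the published coat-pattern studies.

```python
import numpy as np

def laplacian(a):
    # 5-point stencil with periodic boundaries
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

def gray_scott(n=128, steps=5000, Du=0.16, Dv=0.08, F=0.035, k=0.065):
    # u is the "substrate", v the "activator"; high-v regions end up as blotches
    u = np.ones((n, n))
    v = np.zeros((n, n))
    mid = slice(n // 2 - 5, n // 2 + 5)   # seed a small square to break symmetry
    u[mid, mid], v[mid, mid] = 0.50, 0.25
    for _ in range(steps):
        uvv = u * v * v
        u += Du * laplacian(u) - uvv + F * (1 - u)
        v += Dv * laplacian(v) + uvv - (F + k) * v
    return v

patches = gray_scott()   # a 2-D array whose high values form a spotted pattern
```

Varying the feed and kill parameters (F and k) shifts the output between spots, stripes and labyrinths, which is the sense in which such models have been used to explore coat-pattern variation.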
Aerial survey is the most common method of monitoring giraffe population trends in the vast roadless tracts of African landscapes, but aerial methods are known to undercount giraffes. Ground-based survey methods are more accurate and should be used in conjunction with aerial surveys to make accurate estimates of population sizes and trends.
https://en.wikipedia.org/wiki?curid=12717
Gustave de Molinari Gustave de Molinari (3 March 1819 – 28 January 1912) was a Belgian political economist and French Liberal School theorist associated with French "laissez-faire" economists such as Frédéric Bastiat and Hippolyte Castille. Born in Liège, Wallonia, Molinari developed a critique of the state that sometimes resulted in his opposing causes and events which might seemingly be aligned with his overall critique of power and privilege. An example of this was the American Civil War, which Molinari believed to be far more about the trade interests of Northern industrialists than about slavery, although he did not deny that abolitionism was a part of the picture. According to Ralph Raico, Molinari never relented in his last work, published a year before his death in 1912, writing: The American Civil War had not been simply a humanitarian crusade to free the slaves. The war "ruined the conquered provinces," but the Northern plutocrats pulling the strings achieved their aim: the imposition of a vicious protectionism that led ultimately "to the regime of trusts and produced the billionaires." Some anarcho-capitalists consider Molinari to be the first proponent of anarcho-capitalism. In the preface to the 1977 English translation of "The Production of Security", Murray Rothbard called it the "first presentation anywhere in human history of what is now called anarcho-capitalism", although admitting that "Molinari did not use the terminology, and probably would have balked at the name". Austrian School economist Hans-Hermann Hoppe says that "the 1849 article 'The Production of Security' is probably the single most important contribution to the modern theory of anarcho-capitalism". Molinari influenced some of the political thought of the individualist anarchist Benjamin Tucker and the "Liberty" circle. The market anarchist Molinari Institute, directed by philosopher Roderick T. Long, is named after him, whom it terms the "originator of the theory of Market Anarchism".
https://en.wikipedia.org/wiki?curid=12719
Guerrilla warfare Guerrilla warfare is a form of irregular warfare in which small groups of combatants, such as paramilitary personnel, armed civilians, or irregulars, use military tactics including ambushes, sabotage, raids, petty warfare, hit-and-run tactics, and mobility to fight a larger and less-mobile traditional military. Guerrilla groups are a type of violent non-state actor. The Spanish word "guerrilla" is the diminutive form of "guerra" ('war'). The term became popular during the early-19th century Peninsular War, when the Spanish and Portuguese people rose against the Napoleonic troops and fought against a highly superior army using the guerrilla strategy. In correct Spanish usage, a person who is a member of a guerrilla unit is a "guerrillero" if male, or a "guerrillera" if female. The term "guerrilla" was used in English as early as 1809 to refer to the individual "fighters" (e.g., "The town was taken by the guerrillas"), and also (as in Spanish) to denote "a group or band" of such fighters. However, in most languages "guerrilla" still denotes the specific style of warfare. The use of the diminutive evokes the differences in number, scale, and scope between the guerrilla army and the formal, professional army of the state. Guerrilla warfare is a type of asymmetric warfare: competition between opponents of unequal strength. It is also a type of irregular warfare: that is, it aims not simply to defeat an enemy, but to win popular support and political influence, to the enemy's cost. Accordingly, guerrilla strategy aims to magnify the impact of a small, mobile force on a larger, more-cumbersome one. If successful, guerrillas weaken their enemy by attrition, eventually forcing them to withdraw. Tactically, guerrillas usually avoid confrontation with large units and formations of enemy troops, but seek and attack small groups of enemy personnel and resources to gradually deplete the opposing force while minimizing their own losses. The guerrilla prizes mobility, secrecy, and surprise, organizing in small units and taking advantage of terrain that is difficult for larger units to use. For example, Mao Zedong summarized basic guerrilla tactics at the beginning of the Chinese Civil War as: "The enemy advances, we retreat; the enemy camps, we harass; the enemy tires, we attack; the enemy retreats, we pursue." At least one author credits the ancient Chinese work "The Art of War" with inspiring Mao's tactics. In the 20th century, other communist leaders, including North Vietnamese Ho Chi Minh, often used and developed guerrilla warfare tactics, which provided a model for their use elsewhere, leading to the Cuban "foco" theory and the anti-Soviet Mujahideen in Afghanistan. In addition to traditional military methods, guerrilla groups may also rely on destroying infrastructure, for example by using improvised explosive devices. They typically also rely on logistical and political support from the local population and foreign backers, are often embedded within that population (thereby using it as a human shield), and many guerrilla groups are adept at public persuasion through propaganda and use of force. Many guerrilla movements today also rely heavily on children as combatants, scouts, porters, spies, informants, and in other roles, a practice that has drawn international condemnation. Many states also recruit children into their armed forces. Some guerrilla groups also use refugees as weapons to solidify power or politically destabilize an adversary. 
The FARC guerrilla war displaced millions of Colombians, as did the tribal guerrilla warfare against the Soviets in Afghanistan. A civilian population living in the area, such as the Awa tribe in southwest Colombia, may be suspected of having collaborated with the enemy and find itself unprotected and displaced as the guerrillas fight for territory and consolidate their power. There is no commonly accepted definition of "terrorism", and the term is frequently used as a political tactic by belligerents (most often by governments in power) to denounce opponents whose status as terrorists is disputed. Unlike some terrorist groups, guerrillas usually work in open positions as armed units, try to seize and hold land, do not refrain from fighting enemy military forces in battle, and usually apply pressure to control or dominate territory and population. While the primary concern of guerrillas is the enemy's active military units, terrorists are largely concerned with non-military agents and target mostly civilians. Guerrilla forces principally fight in accordance with the law of war ("jus in bello"). In this sense, they respect the rights of innocent civilians by refraining from targeting them. According to the Ankara Center for Crisis and Policy Studies, terrorists do not limit their actions in this way; they terrorise civilians by instilling fear and even kill innocent foreigners in the country. Irregular warfare, based on elements later characteristic of modern guerrilla warfare, has existed throughout the battles of many ancient civilizations. The growth of guerrilla warfare in the 20th century was inspired in part by theoretical works on guerrilla warfare, starting with the "Manual de Guerra de Guerrillas" by Matías Ramón Mella written in the 19th century and, more recently, Mao Zedong's "On Guerrilla Warfare", Che Guevara's "Guerrilla Warfare", and Lenin's text of the same name, all written after the successful revolutions carried out by them in China, Cuba and Russia, respectively. Those texts characterized the tactic of guerrilla warfare as, according to Che Guevara's text, being "used by the side which is supported by a majority but which possesses a much smaller number of arms for use in defense against oppression". The Chinese general and strategist Sun Tzu, in his "The Art of War" (6th century BC), was one of the earliest to propose the use of guerrilla warfare. This directly inspired the development of modern guerrilla warfare. Guerrilla tactics were presumably employed by prehistoric tribal warriors against enemy tribes. Evidence of conventional warfare, on the other hand, did not emerge until 3100 BC in Egypt and Mesopotamia. Since the Enlightenment, ideologies such as nationalism, liberalism, socialism, and religious fundamentalism have played an important role in shaping insurgencies and guerrilla warfare. Kerala Varma Pazhassi Raja used guerrilla techniques in his war against the British East India Company between 1790 and 1805. The term guerrilla war was coined in English in 1809 after the Pazhassi revolt against the British. Arthur Wellesley was charged with defeating him but failed to counter these techniques. The Moroccan national hero Abd el-Krim, along with his father, unified the Moroccan tribes under their control and took up arms against the Spanish and French invaders during the early 20th century. For the first time in history, tunnel warfare was used alongside modern guerrilla tactics, which caused considerable damage and annoyance to both invading armies in Morocco. 
Quintus Fabius Maximus Verrucosus, widely regarded as the "father of guerrilla warfare", devised the Fabian strategy, which was used to great effect against Hannibal Barca's army. The strategy would further influence guerrilla tactics into the modern era. Shivaji, founder of the Maratha Empire, used guerrilla tactics to defeat the larger armies of the Mughal Empire in the 17th century. Michael Collins developed many tactical features of this combat system during the guerrilla phase of the Irish War of Independence. In the 1960s, the Marxist revolutionary Che Guevara developed the "foco" theory of revolution in his book "Guerrilla Warfare", based on his experiences during the 1959 Cuban Revolution. This theory was later formalised as "focalism" by Régis Debray. Its central principle is that vanguardism by cadres of small, fast-moving paramilitary groups can provide a focus for popular discontent against a sitting regime, and thereby lead a general insurrection. Although the original approach was to mobilize and launch attacks from rural areas, many "foco" ideas were adapted into urban guerrilla warfare movements.
https://en.wikipedia.org/wiki?curid=12720
Original proof of Gödel's completeness theorem The proof of Gödel's completeness theorem given by Kurt Gödel in his doctoral dissertation of 1929 (and a rewritten version of the dissertation, published as an article in 1930, titled "The Completeness of the Axioms of the Functional Calculus of Logic") is not easy to read today; it uses concepts and formalisms that are no longer used and terminology that is often obscure. The version given below attempts to represent all the steps in the proof and all the important ideas faithfully, while restating the proof in the modern language of mathematical logic. This outline should not be considered a rigorous proof of the theorem. We work with first-order predicate calculus. Our languages allow constant, function and relation symbols. Structures consist of (non-empty) domains and interpretations of the relevant symbols as constant members, functions or relations over that domain. We assume classical logic (as opposed to intuitionistic logic for example). We fix some axiomatization (i.e. a syntax-based, machine-manageable proof system) of the predicate calculus: logical axioms and rules of inference. Any of the several well-known equivalent axiomatizations will do. Gödel's original proof assumed the Hilbert-Ackermann proof system. We assume without proof all the basic well-known results about our formalism that we need, such as the normal form theorem or the soundness theorem. We axiomatize predicate calculus "without equality" (sometimes confusingly called "without identity"), i.e. there are no special axioms expressing the properties of (object) equality as a special relation symbol. After the basic form of the theorem has been proved, it will be easy to extend it to the case of predicate calculus "with equality". In the following, we state two equivalent forms of the theorem, and show their equivalence. Later, we prove the theorem; the proof proceeds in the steps described below. Theorem 1. Every formula that is valid in all structures is provable. This is the most basic form of the completeness theorem. We immediately restate it in a form more convenient for our purposes: Theorem 2. Every formula is either refutable or satisfiable in some structure. When we say "all structures", it is important to specify that the structures involved are classical (Tarskian) interpretations I = (U, F), where U is a non-empty (possibly infinite) set of objects and F is a set of functions from expressions of the interpreted symbolism into U. [By contrast, so-called "free logics" countenance possibly empty sets for U. For more regarding free logics, see the work of Karel Lambert.] "φ is refutable" means, by definition, "¬φ is provable". If Theorem 1 holds, and φ is not satisfiable in any structure, then ¬φ is valid in all structures and therefore provable, thus φ is refutable and Theorem 2 holds. If on the other hand Theorem 2 holds and φ is valid in all structures, then ¬φ is not satisfiable in any structure and therefore refutable; then ¬¬φ is provable and then so is φ, thus Theorem 1 holds. We approach the proof of Theorem 2 by successively restricting the class of all formulas φ for which we need to prove "φ is either refutable or satisfiable". At the beginning we need to prove this for all possible formulas φ in our language. However, suppose that for every formula φ there is some formula ψ taken from a more restricted class of formulas C, such that "ψ is either refutable or satisfiable" → "φ is either refutable or satisfiable". Then, once this claim (expressed in the previous sentence) is proved, it will suffice to prove "φ is either refutable or satisfiable" only for φ's belonging to the class C. 
If φ is provably equivalent to ψ ("i.e.", (φ≡ψ) is provable), then it is indeed the case that "ψ is either refutable or satisfiable" → "φ is either refutable or satisfiable" (the soundness theorem is needed to show this). There are standard techniques for rewriting an arbitrary formula into one that does not use function or constant symbols, at the cost of introducing additional quantifiers; we will therefore assume that all formulas are free of such symbols. Gödel's paper uses a version of first-order predicate calculus that has no function or constant symbols to begin with. Next we consider a generic formula φ (which no longer uses function or constant symbols) and apply the prenex form theorem to find a formula ψ in "normal form" such that φ≡ψ (ψ being in "normal form" means that all the quantifiers in ψ, if there are any, are found at the very beginning of ψ). It follows now that we need only prove Theorem 2 for formulas φ in normal form. Next, we eliminate all free variables from φ by quantifying them existentially: if, say, x1...xn are free in φ, we form formula_1. If ψ is satisfiable in a structure M, then certainly so is φ and if ψ is refutable, then formula_2 is provable, and then so is ¬φ, thus φ is refutable. We see that we can restrict φ to be a "sentence", that is, a formula with no free variables. Finally, we would like, for reasons of technical convenience, that the "prefix" of φ (that is, the string of quantifiers at the beginning of φ, which is in normal form) begin with a universal quantifier and end with an existential quantifier. To achieve this for a generic φ (subject to restrictions we have already proved), we take some one-place relation symbol F unused in φ, and two new variables y and z.. If φ = (P)Φ, where (P) stands for the prefix of φ and Φ for the "matrix" (the remaining, quantifier-free part of φ) we form formula_3. Since formula_4 is clearly provable, it is easy to see that formula_5 is provable. Our generic formula φ now is a sentence, in normal form, and its prefix starts with a universal quantifier and ends with an existential quantifier. Let us call the class of all such formulas R. We are faced with proving that every formula in R is either refutable or satisfiable. Given our formula φ, we group strings of quantifiers of one kind together in blocks: We define the degree of formula_7 to be the number of universal quantifier blocks, separated by existential quantifier blocks as shown above, in the prefix of formula_7. The following lemma, which Gödel adapted from Skolem's proof of the Löwenheim–Skolem theorem, lets us sharply reduce the complexity of the generic formula formula_7 we need to prove the theorem for: Lemma. Let k>=1. If every formula in R of degree k is either refutable or satisfiable, then so is every formula in R of degree k+1. Proof. Let φ be a formula of degree k+1; then we can write it as where (P) is the remainder of the prefix of formula_7 (it is thus of degree k-1) and formula_16 is the quantifier-free matrix of formula_7. x, y, u and v denote here "tuples" of variables rather than single variables; "e.g." formula_18 really stands for formula_19 where formula_20 are some distinct variables. Let now x' and y' be tuples of previously unused variables of the same length as x and y respectively, and let Q be a previously unused relation symbol that takes as many arguments as the sum of lengths of x and y; we consider the formula Clearly, formula_22 is provable. 
Now since the string of quantifiers formula_23 does not contain variables from x or y, the following equivalence is easily provable with the help of whatever formalism we're using: And since these two formulas are equivalent, if we replace the first with the second inside Φ, we obtain the formula Φ' such that Φ≡Φ': Now Φ' has the form formula_26, where (S) and (S') are some quantifier strings, ρ and ρ' are quantifier-free, and, furthermore, no variable of (S) occurs in ρ' and no variable of (S') occurs in ρ. Under such conditions every formula of the form formula_27, where (T) is a string of quantifiers containing all quantifiers in (S) and (S') interleaved among themselves in any fashion, but maintaining the relative order inside (S) and (S'), will be equivalent to the original formula Φ'(this is yet another basic result in first-order predicate calculus that we rely on). To wit, we form Ψ as follows: and we have formula_29. Now formula_30 is a formula of degree k and therefore by assumption either refutable or satisfiable. If formula_30 is satisfiable in a structure M, then, considering formula_32, we see that formula_7 is satisfiable as well. If formula_30 is refutable, then so is formula_35, which is equivalent to it; thus formula_36 is provable. Now we can replace all occurrences of Q inside the provable formula formula_36 by some other formula dependent on the same variables, and we will still get a provable formula. In this particular case, we replace Q(x',y') in formula_36 with the formula formula_40. Here (x,y|x',y') means that instead of ψ we are writing a different formula, in which x and y are replaced with x' and y'. Q(x,y) is simply replaced by formula_41. formula_36 then becomes and this formula is provable; since the part under negation and after the formula_44 sign is obviously provable, and the part under negation and before the formula_44 sign is obviously φ, just with x and y replaced by x' and y', we see that formula_46 is provable, and φ is refutable. We have proved that φ is either satisfiable or refutable, and this concludes the proof of the Lemma. Notice that we could not have used formula_40 instead of Q(x',y') from the beginning, because formula_30 would not have been a well-formed formula in that case. This is why we cannot naively use the argument appearing at the comment that precedes the proof. As shown by the Lemma above, we only need to prove our theorem for formulas φ in R of degree 1. φ cannot be of degree 0, since formulas in R have no free variables and don't use constant symbols. So the formula φ has the general form: Now we define an ordering of the k-tuples of natural numbers as follows: formula_50 should hold if either formula_51, or formula_52, and formula_53 precedes formula_54 in lexicographic order. [Here formula_55 denotes the sum of the terms of the tuple.] Denote the nth tuple in this order by formula_56. Set the formula formula_57 as formula_58. Then put formula_59 as Lemma: For every "n", φformula_61. Proof: By induction on n; we have formula_62, where the latter implication holds by variable substitution, since the ordering of the tuples is such that formula_63. But the last formula is equivalent to formula_64φ. For the base case, formula_65 is obviously a corollary of φ as well. So the Lemma is proven. Now if formula_66 is refutable for some "n", it follows that φ is refutable. On the other hand, suppose that formula_66 is not refutable for any "n". 
Then for each "n" there is some way of assigning truth values to the distinct subpropositions formula_68 (ordered by their first appearance in formula_59; "distinct" here means either distinct predicates, or distinct bound variables) in formula_70, such that formula_66 will be true when each proposition is evaluated in this fashion. This follows from the completeness of the underlying propositional logic. We will now show that there is such an assignment of truth values to formula_68, so that all formula_59 will be true: The formula_68 appear in the same order in every formula_66; we will inductively define a general assignment to them by a sort of "majority vote": Since there are infinitely many assignments (one for each formula_66) affecting formula_77, either infinitely many make formula_77 true, or infinitely many make it false and only finitely many make it true. In the former case, we choose formula_77 to be true in general; in the latter we take it to be false in general. Then from the infinitely many "n" for which formula_77 through formula_81 are assigned the same truth value as in the general assignment, we pick a general assignment to formula_68 in the same fashion. This general assignment must lead to every one of the formula_83 and formula_84 being true, since if one of the formula_83 were false under the general assignment, formula_59 would also be false for every "n > k". But this contradicts the fact that for the finite collection of general formula_68 assignments appearing in formula_84, there are infinitely many "n" where the assignment making formula_59 true matches the general assignment. From this general assignment, which makes all of the formula_84 true, we construct an interpretation of the language's predicates that makes φ true. The universe of the model will be the natural numbers. Each i-ary predicate formula_30 should be true of the naturals formula_92 precisely when the proposition formula_93 is either true in the general assignment, or not assigned by it (because it never appears in any of the formula_84). In this model, each of the formulas formula_95 is true by construction. But this implies that φ itself is true in the model, since the formula_96 range over all possible k-tuples of natural numbers. So φ is satisfiable, and we are done. We may write each Bi as Φ(x1...xk,y1...ym) for some x-s, which we may call "first arguments" and y-s that we may call "last arguments". Take B1 for example. Its "last arguments" are z2,z3...zm+1, and for every possible combination of k of these variables there is some j so that they appear as "first arguments" in Bj. Thus for large enough n1, Dn1 has the property that the "last arguments" of B1 appear, in every possible combinations of k of them, as "first arguments" in other Bj-s within Dn. For every Bi there is a Dni with the corresponding property. Therefore in a model that satisfies all the Dn-s, there are objects corresponding to z1, z2... and each combination of k of these appear as "first arguments" in some Bj, meaning that for every k of these objects zp1...zpk there are zq1...zqm, which makes Φ(zp1...zpk,zq1...zqm) satisfied. By taking a submodel with only these z1, z2... objects, we have a model satisfying φ. Gödel reduced a formula containing instances of the equality predicate to ones without it in an extended language. 
His method involves replacing a formula φ containing some instances of equality with the formula Here formula_105 denote the predicates appearing in φ (with formula_106 their respective arities), and φ' is the formula φ with all occurrences of equality replaced with the new predicate "Eq". If this new formula is refutable, the original φ was as well; the same is true of satisfiability, since we may take a quotient of a satisfying model of the new formula by the equivalence relation representing "Eq". This quotient is well-defined with respect to the other predicates, and therefore will satisfy the original formula φ. Gödel also considered the case where there is a countably infinite collection of formulas. Using the same reductions as above, he was able to consider only those cases where each formula is of degree 1 and contains no uses of equality. For a countable collection of formulas formula_107 of degree 1, we may define formula_108 as above; then define formula_109 to be the closure of formula_110. The remainder of the proof then went through as before. When there is an uncountably infinite collection of formulas, the Axiom of Choice (or at least some weak form of it) is needed. Using the full AC, one can well-order the formulas, and prove the uncountable case with the same argument as the countable one, except with transfinite induction. Other approaches can be used to prove that the completeness theorem in this case is equivalent to the Boolean prime ideal theorem, a weak form of AC.
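As an aside, the ordering of k-tuples of natural numbers used in the degree-1 case above (smaller sum of terms first, with lexicographic order as the tie-breaker) can be made concrete. The Python sketch below enumerates tuples in that order; the bound on the entries is an illustrative assumption so that the enumeration stays finite.

```python
from itertools import product

def tuple_key(t):
    # order by the sum of the terms first, then lexicographically
    return (sum(t), t)

def first_tuples(k, count, bound=10):
    # enumerate k-tuples with entries 0..bound-1 and return the first `count`
    return sorted(product(range(bound), repeat=k), key=tuple_key)[:count]

# For k = 2 the enumeration begins (0,0), (0,1), (1,0), (0,2), (1,1), (2,0), ...
print(first_tuples(2, 6))
```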
https://en.wikipedia.org/wiki?curid=12727
General Electric General Electric Company (GE) is an American multinational conglomerate incorporated in New York City and headquartered in Boston. The company operates through the following segments: aviation, healthcare, power, renewable energy, digital industry, additive manufacturing, venture capital and finance, and lighting. In 2019, GE ranked among the Fortune 500 as the 21st-largest firm in the United States by gross revenue. In 2011, GE ranked among the Fortune 20 as the 14th-most profitable company but has since severely underperformed the market (by about 75%) as its profitability collapsed. Two employees of GE—Irving Langmuir (1932) and Ivar Giaever (1973)—have been awarded the Nobel Prize. During 1889, Thomas Edison had business interests in many electricity-related companies, including Edison Lamp Company, a lamp manufacturer in East Newark, New Jersey; Edison Machine Works, a manufacturer of dynamos and large electric motors in Schenectady, New York; Bergmann & Company, a manufacturer of electric lighting fixtures, sockets, and other electric lighting devices; and "Edison Electric Light Company", the patent-holding company and the financial arm backed by J. P. Morgan and the Vanderbilt family for Edison's lighting experiments. In 1889, Drexel, Morgan & Co., a company founded by J.P. Morgan and Anthony J. Drexel, financed Edison's research and helped merge those companies under one corporation to form Edison General Electric Company, which was incorporated in New York on April 24, 1889. The new company also acquired Sprague Electric Railway & Motor Company in the same year. In 1880, Gerald Waldo Hart formed the American Electric Company of New Britain, Connecticut, which merged a few years later with Thomson-Houston Electric Company, led by Charles Coffin. In 1887, Hart left to become superintendent of the Edison Electric Company of Kansas City, Missouri. General Electric was formed through the 1892 merger of Edison General Electric Company of Schenectady, New York, and Thomson-Houston Electric Company of Lynn, Massachusetts, with the support of Drexel, Morgan & Co. Both plants continue to operate under the GE banner to this day. The company was incorporated in New York, with the Schenectady plant used as headquarters for many years thereafter. Around the same time, General Electric's Canadian counterpart, Canadian General Electric, was formed. In 1893, General Electric bought the business of Rudolf Eickemeyer in Yonkers, New York, along with all of its patents and designs. One of the employees was Charles Proteus Steinmetz. Only recently arrived in the United States, Steinmetz was already publishing in the field of magnetic hysteresis and had earned worldwide professional recognition. Led by Steinmetz, Eickemeyer's firm had developed transformers for use in the transmission of electrical power, among many other mechanical and electrical devices. Steinmetz quickly became known as the engineering wizard in GE's engineering community. In 1896, General Electric was one of the original 12 companies listed on the newly formed Dow Jones Industrial Average, where it remained a part of the index for 122 years, though not continuously. In 1911, General Electric absorbed the National Electric Lamp Association (NELA) into its lighting business. GE established its lighting division headquarters at Nela Park in East Cleveland, Ohio. The lighting division has since remained in the same location. Owen D. 
Young, through GE, founded the Radio Corporation of America (RCA) in 1919, after purchasing the Marconi Wireless Telegraph Company of America. He aimed to expand international radio communications. GE used RCA as its retail arm for radio sales. In 1926, RCA co-founded the National Broadcasting Company (NBC), which built two radio broadcasting networks. In 1930, General Electric was charged with antitrust violations and decided to divest itself of RCA. In 1927, Ernst Alexanderson of GE made the first demonstration of his television broadcasts at his General Electric Realty Plot home at 1132 Adams Rd, Schenectady, New York. On January 13, 1928, he made what was said to be the first broadcast to the public in the United States on GE's W2XAD: the pictures were picked up on 1.5 square inch (9.7 square centimeter) screens in the homes of four GE executives. The sound was broadcast on GE's WGY (AM). Experimental television station W2XAD evolved into station WRGB which, along with WGY and WGFM (now WRVE), was owned and operated by General Electric until 1983. Led by Sanford Alexander Moss, GE moved into the new field of aircraft turbo superchargers. GE introduced the first set of superchargers during World War I, and continued to develop them during the interwar period. Superchargers became indispensable in the years immediately prior to World War II. GE supplied 300,000 turbo superchargers for use in fighter and bomber engines. This work led the U.S. Army Air Corps to select GE to develop the nation's first jet engine during the war. This experience, in turn, made GE a natural selection to develop the Whittle W.1 jet engine that was demonstrated in the United States in 1941. GE was ranked ninth among United States corporations in the value of wartime production contracts. However, its early work with Whittle's designs was later handed to Allison Engine Company. GE Aviation then emerged as one of the world's largest engine manufacturers, bypassing the British company, Rolls-Royce plc. Some consumers boycotted GE light bulbs, refrigerators and other products during the 1980s and 1990s. The purpose of the boycott was to protest against GE's role in nuclear weapons production. In 2002, GE acquired the wind power assets of Enron during its bankruptcy proceedings. Enron Wind was the only surviving U.S. manufacturer of large wind turbines at the time, and GE increased engineering and supplies for the Wind Division and doubled the annual sales to $1.2 billion in 2003. It acquired ScanWind in 2009. In 2018, GE Power garnered press attention when a model 7HA gas turbine in Texas was shut down for two months due to the failure of a turbine blade. This model uses similar blade technology to GE's newest and most efficient model, the 9HA. After the failure, GE developed new protective coatings and heat treatment methods. Gas turbines represent a significant portion of GE Power's revenue, and also represent a significant portion of the power generation fleet of several utility companies in the United States. Chubu Electric of Japan and Électricité de France also had units that were impacted. Initially, GE did not realize the turbine blade issue of the 9FB unit would impact the new HA units. GE was one of the eight major computer companies of the 1960s along with IBM, Burroughs, NCR, Control Data Corporation, Honeywell, RCA and UNIVAC. 
GE had a line of general-purpose and special-purpose computers, including the GE 200, GE 400, and GE 600 series general-purpose computers, the GE 4010, GE 4020, and GE 4060 real-time process control computers, and the DATANET-30 and Datanet 355 message switching computers (DATANET-30 and 355 were also used as front-end processors for GE mainframe computers). A Datanet 500 computer was designed, but never sold. In 1962, GE started developing its GECOS (later renamed GCOS) operating system, originally for batch processing, but later extended to timesharing and transaction processing. Versions of GCOS are still in use today. From 1964 to 1969, GE and Bell Laboratories (which soon dropped out) joined with MIT to develop the Multics operating system on the GE 645 mainframe computer. The project took longer than expected and was not a major commercial success, but it demonstrated concepts such as the single-level store, dynamic linking, the hierarchical file system, and ring-oriented security. Active development of Multics continued until 1985. GE got into computer manufacturing because in the 1950s it was the largest user of computers outside the United States federal government, as well as being the first business in the world to own a computer. Its major appliance manufacturing plant "Appliance Park" was the first non-governmental site to host one. However, in 1970, GE sold its computer division to Honeywell, exiting the computer manufacturing industry, though it retained its timesharing operations for some years afterwards. GE was a major provider of computer time-sharing services, through General Electric Information Services (GEIS, now GXS), offering online computing services that included GEnie. In 2000, when United Technologies Corp. planned to buy Honeywell, GE made a counter-offer that was approved by Honeywell. On July 3, 2001, the European Union issued a decision to "prohibit the proposed acquisition by General Electric Co. of Honeywell Inc.". The reasons given were that it "would create or strengthen dominant positions on several markets and that the remedies proposed by GE were insufficient to resolve the competition concerns resulting from the proposed acquisition of Honeywell". On June 27, 2014, GE partnered with collaborative design company Quirky to announce its connected LED bulb called Link. The Link bulb is designed to communicate with smartphones and tablets using a mobile app called Wink. In December 1985, GE reacquired RCA, primarily for the NBC television network (also parent of Telemundo Communications Group), for $6.28 billion; this merger surpassed the Capital Cities/ABC merger that happened earlier that year as the largest non-oil merger in world business history. The remainder was sold to various companies, including Bertelsmann (Bertelsmann acquired RCA Records) and Thomson SA, which traces its roots to Thomson-Houston, one of the original components of GE. Also in 1986, Kidder, Peabody & Co., a U.S.-based securities firm, was sold to GE; following heavy losses, it was sold to PaineWebber in 1994. In 2002, Francisco Partners and Norwest Venture Partners acquired a division of GE called GE Information Systems (GEIS). The new company, named GXS, is based in Gaithersburg, Maryland. GXS is a provider of B2B e-Commerce solutions. GE maintains a minority stake in GXS. Also in 2002, GE Wind Energy was formed when GE bought the wind turbine manufacturing assets of Enron Wind after the Enron scandals. In 2004, GE bought 80% of Universal Pictures from Vivendi. 
Vivendi bought 20% of NBC, forming the company NBCUniversal. GE then owned 80% of NBCUniversal and Vivendi owned 20%. In 2004, GE completed the spin-off of most of its mortgage and life insurance assets into an independent company, Genworth Financial, based in Richmond, Virginia. Genpact, formerly known as GE Capital International Services (GECIS), was established by GE in late 1997 as its captive India-based business process outsourcing (BPO) arm. GE sold a 60% stake in Genpact to General Atlantic and Oak Hill Capital Partners in 2005 and hived off Genpact into an independent business. GE is still a major client of Genpact today, for services in customer service, finance, information technology and analytics. In May 2007, GE acquired Smiths Aerospace for $4.8 billion. Also in 2007, GE Oil & Gas acquired Vetco Gray for $1.9 billion, followed by the acquisition of Hydril Pressure & Control in 2008 for $1.1 billion. GE Plastics was sold in 2008 to SABIC (Saudi Arabia Basic Industries Corporation). In May 2008, GE announced it was exploring options for divesting the bulk of its consumer and industrial business. On December 3, 2009, it was announced that NBCUniversal would become a joint venture between GE and cable television operator Comcast. Comcast would hold a controlling interest in the company, while GE would retain a 49% stake and would buy out shares owned by Vivendi. Vivendi would sell its 20% stake in NBCUniversal to GE for US$5.8 billion. Vivendi would sell 7.66% of NBCUniversal to GE for US$2 billion if the GE/Comcast deal was not completed by September 2010, and then sell the remaining 12.34% stake of NBCUniversal to GE for US$3.8 billion when the deal was completed, or to the public via an IPO if the deal was not completed. On March 1, 2010, GE announced plans to sell its 20.85% stake in Turkey-based Garanti Bank. In August 2010, GE Healthcare signed a strategic partnership to bring cardiovascular Computed Tomography (CT) technology from start-up Arineta Ltd. of Israel to the hospital market. In October 2010, GE acquired gas engine manufacturer Dresser Inc. in a $3 billion deal and also bought a $1.6 billion portfolio of retail credit cards from Citigroup Inc. On October 14, 2010, GE announced the acquisition of data migration & SCADA simulation specialists Opal Software. In December 2010, for the second time that year (after the Dresser acquisition), GE bought the oil sector company Wellstream, an oil pipe maker, for 800 million pounds ($1.3 billion). In March 2011, GE announced that it had completed the acquisition of privately held Lineage Power Holdings, Inc., from The Gores Group, LLC. In April 2011, GE announced it had completed its purchase of John Wood plc's Well Support Division for $2.8 billion. In 2011, GE Capital sold its $2 billion Mexican assets to Santander for $162 million and exited the business in Mexico. Santander additionally assumed the portfolio debts of GE Capital in the country. Following this, GE Capital focused on its core business and shed its non-core assets. In June 2012, CEO and President of GE Jeff Immelt said that the company would invest ₹3 billion to accelerate its businesses in Karnataka. In October 2012, GE acquired $7 billion worth of bank deposits from MetLife Inc. On March 19, 2013, Comcast bought GE's shares in NBCU for $16.7 billion, ending the company's longtime stake in television and film media. In April 2013, GE acquired oilfield pump maker Lufkin Industries for $2.98 billion. 
In April 2014, it was announced that GE was in talks to acquire the global power division of French engineering group Alstom for a figure of around $13 billion. A rival joint bid was submitted in June 2014 by Siemens and Mitsubishi Heavy Industries (MHI), with Siemens seeking to acquire Alstom's gas turbine business for €3.9 billion, and MHI proposing a joint venture in steam turbines, plus a €3.1 billion cash investment. In June 2014, a formal offer from GE worth $17 billion was agreed by the Alstom board. Part of the transaction involved the French government taking a 20% stake in Alstom to help secure France's energy and transport interests and French jobs. A rival offer from Siemens-Mitsubishi Heavy Industries was rejected. The acquisition was expected to be completed in 2015. In October 2014, GE announced it was considering the sale of its Polish banking business, Bank BPH. Later in 2014, General Electric announced plans to open its global operations center in Cincinnati, Ohio. The Global Operations Center opened in October 2016 as home to GE's multifunctional shared services organization. It supports the company's finance/accounting, human resources, information technology, supply chain, legal and commercial operations, and is one of GE's four multifunctional shared services centers worldwide, the others being in Pudong, China; Budapest, Hungary; and Monterrey, Mexico. In April 2015, GE announced its intention to sell off its property portfolio, worth $26.5 billion, to Wells Fargo and The Blackstone Group. It was announced in April 2015 that GE would sell most of its finance unit and return around $90 billion to shareholders as the firm looked to trim down on its holdings and rid itself of its image of a "hybrid" company, working in both banking and manufacturing. In August 2015, GE Capital agreed to sell its Healthcare Financial Services business to Capital One for US$9 billion. The transaction involved US$8.5 billion of loans made to a wide array of sectors including senior housing, hospitals, medical offices, outpatient services, pharmaceuticals and medical devices. Also in August 2015, GE Capital agreed to sell GE Capital Bank's on-line deposit platform to Goldman Sachs. Terms of the transaction were not disclosed, but the sale included US$8 billion of on-line deposits and another US$8 billion of brokered certificates of deposit. The sale was part of GE's strategic plan to exit the U.S. banking sector and to free itself from tightening banking regulations. GE also aimed to shed its status as a "systemically important financial institution". In September 2015, GE Capital agreed to sell its transportation-finance unit to Canada's Bank of Montreal. The unit sold had US$8.7 billion (CA$11.5 billion) of assets, 600 employees and 15 offices in the U.S. and Canada. Exact terms of the sale were not disclosed, but the final price would be based on the value of the assets at closing, plus a premium, according to the parties. In October 2015, activist investor Nelson Peltz's fund Trian bought a $2.5 billion stake in the company. In January 2016, Haier Group acquired GE's appliance division for $5.4 billion. In October 2016, GE Renewable Energy agreed to pay €1.5 billion to Doughty Hanson & Co for LM Wind Power, with the purchase to be completed during 2017. At the end of October 2016, it was announced that GE was in negotiations for a deal valued at about $30 billion to combine GE Oil and Gas with Baker Hughes. The transaction would create a publicly traded entity controlled by GE. 
It was announced that GE Oil and Gas would sell off its water treatment business as part of its divestment agreement with Baker Hughes. The deal was cleared by the EU in May 2017, and by the DOJ in June 2017. The merger agreement was approved by shareholders at the end of June 2017. On July 3, 2017, the transaction was completed and Baker Hughes became a GE company, renamed Baker Hughes, a GE Company (BHGE). In November 2018, GE reduced its stake in Baker Hughes to 50.4%. On October 18, 2019, GE reduced its stake to 36.8% and the company was renamed back to Baker Hughes. In April 2017, GE announced that the name of its $200 million corporate headquarters would be "GE Innovation Point". The groundbreaking ceremony for the 2.5-acre, 800-person campus was held on May 8, 2017, and completion was expected sometime in mid-2019. In May 2017, GE signed $15 billion of business deals with Saudi Arabia. Saudi Arabia is one of GE's largest customers. In September 2017, GE announced the sale of its Industrial Solutions Business to ABB. The deal closed on June 30, 2018. On August 15, 2019, Harry Markopolos, a financial fraud investigator known for his discovery of a Ponzi scheme run by Bernard Madoff, accused General Electric of being a "bigger fraud than Enron", alleging $38 billion in accounting fraud. GE denied wrongdoing. It is alleged that GE is "hiding" (i.e., has under-reserved for) $29 billion in losses related to its long-term care business. According to an August 2019 Fitch Ratings report, there are concerns that GE has not set aside enough money to cover its long-term care liabilities. In 2018, a lawsuit (the Bezio case) was filed in New York state court on behalf of participants in GE's 401(k) plan and shareowners alleging violations of Section 11 of the Securities Act of 1933 based on alleged misstatements and omissions related to insurance reserves and performance of GE's business segments. The Kansas Insurance Department (KID) is requiring General Electric to make $14.5 billion of capital contributions for its insurance contracts during the 7-year period ending in 2024. GE reported that the total liability related to its insurance contracts was $35.6 billion as of December 31, 2018. In 2018, GE announced that the issuance of the new standard by the Financial Accounting Standards Board (FASB) regarding Financial Services - Insurance (Topic 944) would materially affect its financial statements. Mr. Markopolos estimated there would be a US$10.5 billion charge when the new accounting standard was adopted in the first quarter of 2021. In 2017, GE acquired a 62.5% interest in Baker Hughes (BHGE) when it combined its oil & gas business with Baker Hughes Incorporated. In 2018, GE reduced its interest to 50.4%, resulting in the realization of a $2.1 billion loss. GE is planning to divest its remaining interest and has warned that the divestment will result in an additional loss of $8.4 billion (assuming a BHGE share price of $23.57 per share). In response to the fraud allegations, GE noted that the amount of the loss would be $7.4 billion if the divestment occurred on July 26, 2019. Mr. Markopolos noted that BHGE is an asset available for sale and therefore mark-to-market accounting is required. Mr. Markopolos noted GE's current ratio was only 0.67. He expressed concerns that GE may file for bankruptcy if there is a recession. In 2018, the GE Pension Plan reported losses of US$3.3 billion on plan assets. 
In 2018, General Electric changed the discount rate used to calculate the actuarial liabilities of its pension plans. The rate was increased from 3.64% to 4.34%. Consequently, the reported liability for the underfunded pension plans decreased by $7 billion year-over-year, from $34.2 billion in 2017 to $27.2 billion in 2018. In October 2018, General Electric announced it would "freeze pensions" for about 20,000 salaried U.S. employees. The employees would be moved to a defined-contribution retirement plan in 2021. On March 30, 2020, General Electric factory workers protested, calling for jet engine factories to be converted to make ventilators during the COVID-19 crisis. In June 2020, GE agreed to sell its Lighting business to Savant Systems, Inc., an industry leader in the professional smart home space. Financial details of the transaction were not disclosed. In 2018, GE reduced its quarterly dividend from $0.12 to $0.01 per share. As a publicly traded company on the New York Stock Exchange, GE stock was one of the 30 components of the Dow Jones Industrial Average from 1907 to 2018, the longest continuous presence of any company on the index, and during this time the only company which was part of the original Dow Jones Industrial Index created in 1896. On June 26, 2018, the stock was removed from the index and replaced with Walgreens Boots Alliance. In the years leading to its removal, GE was the worst-performing stock in the Dow, falling more than 55 percent year on year and more than 25 percent year to date. In July 2010, General Electric agreed to pay $23.4 million to settle an SEC complaint alleging that GE had bribed Iraqi government officials to win contracts under the U.N. oil-for-food program. In 1959, General Electric was accused of promoting the largest illegal cartel in the United States since the adoption of the Sherman Antitrust Act (1890), in order to maintain artificially high prices. In total, 29 companies and 45 executives would be convicted. Subsequent congressional inquiries revealed that "white-collar crime" was by far the most costly form of crime for the United States' finances. GE is a multinational conglomerate headquartered in Boston, Massachusetts. However, its main offices are located at 30 Rockefeller Plaza at Rockefeller Center in New York City, known now as the Comcast Building. It was formerly known as the GE Building for the prominent GE logo on the roof; NBC's headquarters and main studios are also located in the building. Through its RCA subsidiary, GE has been associated with the center since its construction in the 1930s. GE moved its corporate headquarters from the GE Building on Lexington Avenue to Fairfield, Connecticut, in 1974. In 2016, GE announced a move to the South Boston Waterfront neighborhood of Boston, Massachusetts, partly as a result of an incentive package provided by state and city governments. The first group of workers arrived in the summer of 2016, and the full move was to be completed by 2018. Due to poor financial performance and corporate downsizing, GE sold the land it planned to build its new headquarters building on, instead choosing to occupy neighboring leased buildings. GE's tax return is the largest return filed in the United States; the 2005 return was approximately 24,000 pages when printed out, and 237 megabytes when submitted electronically. The company also "spends more on U.S. lobbying than any other company". In 2005, GE launched its "Ecomagination" initiative in an attempt to position itself as a "green" company. 
GE is one of the biggest players in the wind power industry, and is developing environment-friendly products such as hybrid locomotives, desalination and water reuse solutions, and photovoltaic cells. The company "plans to build the largest solar-panel-making factory in the U.S.," and has set goals for its subsidiaries to lower their greenhouse gas emissions. On May 21, 2007, GE announced it would sell its GE Plastics division to petrochemicals manufacturer SABIC for net proceeds of $11.6 billion. The transaction took place on August 31, 2007, and the company name changed to SABIC Innovative Plastics, with Brian Gladden as CEO. In February 2017, GE announced that the company intended to close the gender gap by promising to hire and place 20,000 women in technical roles by 2020. The company is also seeking to have a 50:50 male-to-female gender representation in all entry-level technical programs. In October 2017, GE announced it would be closing research and development centers in Shanghai, Munich and Rio de Janeiro. The company had spent $5 billion on R&D in the previous year. On February 25, 2019, GE sold its diesel locomotive business to Westinghouse Air Brake Technologies Corporation (WAB), also known as Wabtec. John L. Flannery was replaced by H. Lawrence Culp Jr. as Chairman and CEO in a unanimous vote of the GE Board of Directors. John L. Flannery had succeeded Jeffrey Immelt as chief executive officer and chairman of the board of GE. In 2011, "Fortune" ranked GE the sixth-largest firm in the U.S., and the 14th-most profitable. In 2012, GE's brand was valued at $28.8 billion. CEO Jeff Immelt had a set of changes in the presentation of the brand commissioned in 2004, after he took the reins as chairman, to unify the diversified businesses of GE. The changes included a new corporate color palette, small modifications to the GE logo, a new customized font (GE Inspira) and a new slogan, "Imagination at work", composed by David Lucas, to replace the slogan "We Bring Good Things to Life" used since 1979. The standard requires many headlines to be lowercased and adds visual "white space" to documents and advertising. The changes were designed by Wolff Olins and are used on GE's marketing, literature and website. In 2014, a second typeface family was introduced: GE Sans and Serif, created by Bold Monday under art direction by Wolff Olins. GE had appeared on the Fortune 500 list for 22 years and held the 11th rank. GE was removed from the Dow Jones Industrial Average on June 28, 2018, after its value had dropped below 1% of the index's weight. The former GE Transportation division merged with Wabtec on February 25, 2019, leaving GE with a 24.9% holding in Wabtec. The former GE Appliances and Lighting segment was dissolved in 2014; GE's appliance division was later sold to Haier for $5.4 billion. GE Lighting (consumer lighting) and the newly created Current, powered by GE, which deals in commercial LED, solar, EV, and energy storage, became stand-alone businesses within the company, until the sale of the latter to American Industrial Partners in April 2019. Through these businesses, GE participates in markets that include the generation, transmission and distribution of electricity (e.g. nuclear, gas and solar), lighting, industrial automation, medical imaging equipment, motors, railway locomotives, aircraft jet engines, and aviation services. 
Through GE Commercial Finance, GE Consumer Finance, GE Equipment Services, and GE Insurance, it offers a range of financial services. It has a presence in over 100 countries. General Imaging manufactures GE digital cameras. Even though the first wave of conglomerates (such as ITT Corporation, Ling-Temco-Vought, Tenneco, etc.) fell by the wayside by the mid-1980s, in the late 1990s, another wave (consisting of Westinghouse, Tyco, and others) tried and failed to emulate GE's success. On May 4, 2008, it was announced that GE would auction off its appliances business for an expected sale of $5–8 billion. However, this plan fell through as a result of the recession. On September 14, 2015, GE announced the creation of a new unit: GE Digital, which would bring together its software and IT capabilities. The new business unit would be headed by Bill Ruh, who joined GE in 2011 from Cisco Systems and had since worked on GE's software efforts. GE has a history of some of its activities giving rise to large-scale air and water pollution. Based on data from 2000, researchers at the Political Economy Research Institute listed the corporation as the fourth-largest corporate producer of air pollution in the United States, with more than 4.4 million pounds per year (2,000 tons) of toxic chemicals released into the air. GE has also been implicated in the creation of toxic waste. According to EPA documents, only the United States Government, Honeywell, and Chevron Corporation are responsible for producing more Superfund toxic waste sites. In 1983, New York State Attorney General Robert Abrams filed suit in the United States District Court for the Northern District of New York to compel GE to pay for the clean-up of what was claimed to be more than 100,000 tons of chemicals dumped from its plant in Waterford, New York. In 1999, the company agreed to pay a $250 million settlement in connection with claims it polluted the Housatonic River (Pittsfield, Massachusetts) and other sites with polychlorinated biphenyls (PCBs) and other hazardous substances. In 2003, acting on concerns that the plan proposed by GE did not "provide for adequate protection of public health and the environment," the United States Environmental Protection Agency issued a unilateral administrative order for the company to "address cleanup at the GE site" in Rome, Georgia, also contaminated with PCBs. The nuclear reactors involved in the 2011 crisis at Fukushima I in Japan were GE designs, and the architectural designs were done by Ebasco, formerly owned by GE. Concerns over the design and safety of these reactors were raised as early as 1972, but tsunami danger was not discussed at that time. Nuclear power reactors of the same GE design are still operating in the US; however, as of May 31, 2019, the controversial Pilgrim Nuclear Generating Station, in Plymouth, Massachusetts, has been shut down and is in the process of decommissioning. GE heavily contaminated the Hudson River with polychlorinated biphenyls (PCBs) between 1947 and 1977. This pollution caused a range of harmful effects to wildlife and people who eat fish from the river or drink the water. In response to the contamination, activists protested in various ways. Musician Pete Seeger founded the Hudson River Sloop Clearwater and the Clearwater Festival to draw attention to the problem. In 1983, the United States Environmental Protection Agency (EPA) declared a 200-mile (320 km) stretch of the river, from Hudson Falls to New York City, to be a Superfund site requiring cleanup. 
This Superfund site is considered to be one of the largest in the nation. Other sources of pollution, including mercury contamination and sewage dumping, have also contributed to problems in the Hudson River watershed. Until 1977, GE polluted the Housatonic River with PCB discharges from its plant at Pittsfield, Massachusetts. EPA designated the Pittsfield plant and several miles of the Housatonic to be a Superfund site in 1997, and ordered GE to remediate the site. Aroclor 1254 and Aroclor 1260, made by Monsanto, were the primary contaminants of the pollution. The highest concentrations of PCBs in the Housatonic River are found in Woods Pond in Lenox, Massachusetts, just south of Pittsfield, where they have been measured at up to 110 mg/kg in the sediment. About 50% of all the PCBs currently in the river are estimated to be retained in the sediment behind Woods Pond dam. Former filled oxbows are also polluted. Waterfowl and fish that live in and around the river contain significant levels of PCBs and can present health risks if consumed. On June 6, 2011, GE announced that it had licensed solar thermal technology from California-based eSolar for use in power plants that use both solar and natural gas. On May 26, 2011, GE unveiled its EV Solar Carport, a carport that incorporates solar panels on its roof, with electric vehicle charging stations under its cover. In May 2005, GE announced the launch of a program called "Ecomagination", intended, in the words of CEO Jeff Immelt, "to develop tomorrow's solutions such as solar energy, hybrid locomotives, fuel cells, lower-emission aircraft engines, lighter and stronger durable materials, efficient lighting, and water purification technology". The announcement prompted an op-ed piece in "The New York Times" to observe that, "while General Electric's increased emphasis on clean technology will probably result in improved products and benefit its bottom line, Mr. Immelt's credibility as a spokesman on national environmental policy is fatally flawed because of his company's intransigence in cleaning up its own toxic legacy." GE said that it would invest $1.4 billion in clean technology research and development in 2008 as part of its Ecomagination initiative. As of October 2008, the scheme had resulted in 70 green products being brought to market, ranging from halogen lamps to biogas engines. In 2007, GE raised the annual revenue target for its Ecomagination initiative from $20 billion in 2010 to $25 billion following positive market response to its new product lines. In 2010, GE continued to raise its investment, adding $10 billion to Ecomagination over the next five years. GE Energy's renewable energy business has expanded greatly, to keep up with growing U.S. and global demand for clean energy. Since entering the renewable energy industry in 2002, GE has invested more than $850 million in renewable energy commercialization. In August 2008, it acquired Kelman Ltd, a Northern Ireland-based company specializing in advanced monitoring and diagnostics technologies for transformers used in renewable energy generation, and announced an expansion of its business in Northern Ireland in May 2010. In 2009, GE's renewable energy initiatives, which include solar power, wind power and GE Jenbacher gas engines using renewable and non-renewable methane-based gases, employed more than 4,900 people globally and had created more than 10,000 supporting jobs. 
GE Energy and Orion New Zealand (Orion) have announced the implementation of the first phase of a GE network management system to help improve power reliability for customers. GE's ENMAC Distribution Management System is the foundation of Orion's initiative. The system of smart grid technologies will significantly improve the network company's ability to manage major network emergencies and help it to restore power faster when outages occur. In June 2018, GE Volunteers, an internal group of GE employees, along with the Malaysian Nature Society, transplanted more than 270 plants from the Taman Tugu forest reserve so that they could be replanted along the forest trail, which was under construction. GE Healthcare is collaborating with The Wayne State University School of Medicine and the Medical University of South Carolina to offer an integrated radiology curriculum during their respective MD programs, led by investigators of the Advanced Diagnostic Ultrasound in micro-gravity study. GE has donated over one million dollars of Logiq E ultrasound equipment to these two institutions. Between September 2011 and April 2013, GE ran a content marketing campaign dedicated to telling the stories of "innovators—people who are reshaping the world through act or invention". The initiative included 30 3-minute films from leading documentary film directors (Albert Maysles, Jessica Yu, Leslie Iwerks, Steve James, Alex Gibney, Lixin Fan, Gary Hustwit and others), and a user-generated competition that received over 600 submissions, out of which 20 finalists were chosen. "Short Films, Big Ideas" was launched at the 2011 Toronto International Film Festival in partnership with cinelan. Stories included breakthroughs in Slingshot (water vapor distillation system), cancer research, energy production, pain management and food access. Each of the 30 films received world premiere screenings at a major international film festival, including the Sundance Film Festival and the Tribeca Film Festival. The winning amateur director film, "The Cyborg Foundation", was awarded a prize at the 2013 Sundance Film Festival. According to GE, the campaign garnered more than 1.5 billion total media impressions, 14 million online views, and was seen in 156 countries. In January 2017, GE signed an estimated $7 million deal with the Boston Celtics to have its corporate logo put on the NBA team's jersey. In the 1950s, GE sponsored Ronald Reagan's TV career and launched him on the lecture circuit. GE has also designed social programs, supported civil rights organizations, and funded minority education programs. In the early 1950s, Kurt Vonnegut was a writer for GE. A number of his novels and stories (notably "Cat's Cradle" and "Player Piano") refer to the fictional city of Ilium, which appears to be loosely based on Schenectady, New York. The Ilium Works is the setting for the short story "Deer in the Works". In 1981, GE won a Clio Award for its 30-second Soft White Light Bulbs commercial, "We Bring Good Things to Life". The slogan "We Bring Good Things to Life" was created by Phil Dusenberry at the ad agency BBDO. GE was the primary focus of a 1991 short subject Academy Award-winning documentary, "Deadly Deception: General Electric, Nuclear Weapons, and Our Environment", that juxtaposed GE's "We Bring Good Things To Life" commercials with the true stories of workers and neighbors whose lives have been affected by the company's activities involving nuclear weapons. In 2013, GE received a National Jefferson Award for Outstanding Service by a Major Corporation.
https://en.wikipedia.org/wiki?curid=12730
George Harrison George Harrison (25 February 1943 – 29 November 2001) was an English musician, singer, songwriter, and music and film producer who achieved international fame as the lead guitarist of the Beatles. Sometimes called "the quiet Beatle", Harrison embraced Indian culture and helped broaden the scope of popular music through his incorporation of Indian instrumentation and Hindu-aligned spirituality in the Beatles' work. Although the majority of the band's songs were written by John Lennon and Paul McCartney, most Beatles albums from 1965 onwards contained at least two Harrison compositions. His songs for the group include "Taxman", "Within You Without You", "While My Guitar Gently Weeps", "Here Comes the Sun" and "Something". Harrison's earliest musical influences included George Formby and Django Reinhardt; Carl Perkins, Chet Atkins and Chuck Berry were subsequent influences. By 1965, he had begun to lead the Beatles into folk rock through his interest in Bob Dylan and the Byrds, and towards Indian classical music through his use of the sitar on "Norwegian Wood (This Bird Has Flown)". Having initiated the band's embracing of Transcendental Meditation in 1967, he subsequently developed an association with the Hare Krishna movement. After the band's break-up in 1970, Harrison released the triple album "All Things Must Pass", a critically acclaimed work that produced his most successful hit single, "My Sweet Lord", and introduced his signature sound as a solo artist, the slide guitar. He also organised the 1971 Concert for Bangladesh with Indian musician Ravi Shankar, a precursor to later benefit concerts such as Live Aid. In his role as a music and film producer, Harrison produced acts signed to the Beatles' Apple record label before founding Dark Horse Records in 1974 and co-founding HandMade Films in 1978. Harrison released several best-selling singles and albums as a solo performer. In 1988, he co-founded the platinum-selling supergroup the Traveling Wilburys. A prolific recording artist, he was featured as a guest guitarist on tracks by Badfinger, Ronnie Wood and Billy Preston, and collaborated on songs and music with Dylan, Eric Clapton, Ringo Starr and Tom Petty, among others. "Rolling Stone" magazine ranked him number 11 in their list of the "100 Greatest Guitarists of All Time". He is a two-time Rock and Roll Hall of Fame inductee – as a member of the Beatles in 1988, and posthumously for his solo career in 2004. Harrison's first marriage, to model Pattie Boyd in 1966, ended in divorce in 1977. The following year he married Olivia Arias, with whom he had a son, Dhani. Harrison died from lung cancer in 2001 at the age of 58, two years after surviving a knife attack by an intruder at his Friar Park home. His remains were cremated and the ashes were scattered according to Hindu tradition in a private ceremony in the Ganges and Yamuna rivers in India. He left an estate of almost £100 million. Harrison was born at 12 Arnold Grove in Wavertree, Liverpool on 25 February 1943. He was the youngest of four children of Harold Hargreaves (or Hargrove) Harrison (1909–1978) and Louise (née French; 1911–1970). Harold was a bus conductor who had worked as a ship's steward on the White Star Line, and Louise was a shop assistant of Irish Catholic descent. He had one sister, Louise (born 16 August 1931), and two brothers, Harold (born 1934) and Peter (20 July 1940 – 1 June 2007). 
According to Boyd, Harrison's mother was particularly supportive: "All she wanted for her children is that they should be happy, and she recognized that nothing made George quite as happy as making music." Louise was an enthusiastic music fan, and she was known among friends for her loud singing voice, which at times startled visitors by rattling the Harrisons' windows. When Louise was pregnant with George, she often listened to the weekly broadcast "Radio India". Harrison's biographer Joshua Greene wrote, "Every Sunday she tuned in to mystical sounds evoked by sitars and tablas, hoping that the exotic music would bring peace and calm to the baby in the womb." Harrison lived the first four years of his life at 12 Arnold Grove, a terraced house on a cul-de-sac. The home had an outdoor toilet and its only heat came from a single coal fire. In 1949, the family was offered a council house and moved to 25 Upton Green, Speke. In 1948, at the age of five, Harrison enrolled at Dovedale Primary School. He passed the eleven-plus exam and attended Liverpool Institute High School for Boys from 1954 to 1959. Though the institute did offer a music course, Harrison was disappointed with the absence of guitars, and felt the school "moulded [students] into being frightened". Harrison's earliest musical influences included George Formby, Cab Calloway, Django Reinhardt and Hoagy Carmichael; by the 1950s, Carl Perkins and Lonnie Donegan were significant influences. In early 1956, he had an epiphany: while riding his bicycle, he heard Elvis Presley's "Heartbreak Hotel" playing from a nearby house, and the song piqued his interest in rock and roll. He often sat at the back of the class drawing guitars in his schoolbooks, and later commented, "I was totally into guitars." Harrison cited Slim Whitman as another early influence: "The first person I ever saw playing a guitar was Slim Whitman, either a photo of him in a magazine or live on television. Guitars were definitely coming in." At first, Harold Harrison was apprehensive about his son's interest in pursuing a music career. However, in 1956, he bought George a Dutch Egmond flat-top acoustic guitar, which, according to Harold, cost £3.10. One of his father's friends taught Harrison how to play "Whispering", "Sweet Sue", and "Dinah". Inspired by Donegan's music, Harrison formed a skiffle group, the Rebels, with his brother Peter and a friend, Arthur Kelly. On the bus to school, Harrison met Paul McCartney, who also attended the Liverpool Institute, and the pair bonded over their shared love of music. Harrison became part of the Beatles with McCartney and John Lennon when the band were still a skiffle group called the Quarrymen. In March 1958, he auditioned for the Quarrymen at Rory Storm's Morgue Skiffle Club, playing Arthur "Guitar Boogie" Smith's "Guitar Boogie Shuffle", but Lennon felt that Harrison, having just turned 15, was too young to join the band. McCartney arranged a second meeting, on the upper deck of a Liverpool bus, during which Harrison impressed Lennon by performing the lead guitar part for the instrumental "Raunchy". He began socialising with the group, filling in on guitar as needed, and then became accepted as a member. Although his father wanted him to continue his education, Harrison left school at 16 and worked for several months as an apprentice electrician at Blacklers, a local department store. 
During the group's first tour of Scotland, in 1960, Harrison used the pseudonym "Carl Harrison", in reference to Carl Perkins. In 1960, promoter Allan Williams arranged for the band, now calling themselves the Beatles, to play at the Indra and Kaiserkeller clubs in Hamburg, both owned by Bruno Koschmider. Their first residency in Hamburg ended prematurely when Harrison was deported for being too young to work in nightclubs. When Brian Epstein became their manager in December 1961, he polished up their image and later secured them a recording contract with EMI. The group's first single, "Love Me Do", peaked at number seventeen on the "Record Retailer" chart, and by the time their debut album, "Please Please Me", was released in early 1963, Beatlemania had arrived. Often serious and focused while on stage with the band, Harrison was known as "the quiet Beatle". That moniker arose when the Beatles arrived in the United States in early 1964, and Harrison was ill with a case of Strep throat and a fever and was medically advised to limit speaking as much as possible until he performed on "The Ed Sullivan Show" as scheduled. As such, the press noticed Harrison's apparent laconic nature in public appearances on that tour and the subsequent nickname stuck, much to Harrison's amusement. He had two lead vocal credits on the LP, including the Lennon–McCartney song "Do You Want to Know a Secret?", and three on their second album, "With the Beatles" (1963). The latter included "Don't Bother Me", Harrison's first solo writing credit. Harrison served as the Beatles' scout for new American releases, being especially knowledgeable about soul music. By 1965's "Rubber Soul", he had begun to lead the other Beatles into folk rock through his interest in the Byrds and Bob Dylan, and towards Indian classical music through his use of the sitar on "Norwegian Wood (This Bird Has Flown)". He later called "Rubber Soul" his "favourite [Beatles] album". "Revolver" (1966) included three of his compositions: "Taxman", selected as the album's opening track, "Love You To" and "I Want to Tell You". His drone-like tambura part on Lennon's "Tomorrow Never Knows" exemplified the band's ongoing exploration of non-Western instruments, while the sitar- and tabla-based "Love You To" represented the Beatles' first genuine foray into Indian music. According to the ethnomusicologist David Reck, the latter song set a precedent in popular music as an example of Asian culture being represented by Westerners respectfully and without parody. Author Nicholas Schaffner wrote in 1978 that following Harrison's increased association with the sitar after "Norwegian Wood", he became known as "the maharaja of raga-rock". Harrison continued to develop his interest in non-Western instrumentation, playing swarmandal on "Strawberry Fields Forever". By late 1966, Harrison's interests had moved away from the Beatles. This was reflected in his choice of Eastern gurus and religious leaders for inclusion on the album cover for "Sgt. Pepper's Lonely Hearts Club Band" in 1967. His sole composition on the album was the Indian-inspired "Within You Without You", to which no other Beatle contributed. He played sitar and tambura on the track, backed by musicians from the London Asian Music Circle on dilruba, swarmandal and tabla. He later commented on the "Sgt. Pepper" album: "It was a millstone and a milestone in the music industry ... There's about half the songs I like and the other half I can't stand." 
In January 1968, he recorded the basic track for his song "The Inner Light" at EMI's studio in Bombay, using a group of local musicians playing traditional Indian instruments. Released as the B-side to McCartney's "Lady Madonna", it was the first Harrison composition to appear on a Beatles single. Derived from a quotation from the "Tao Te Ching", the song's lyric reflected Harrison's deepening interest in Hinduism and meditation. During the recording of "The Beatles" that same year, tensions within the group ran high, and drummer Ringo Starr quit briefly. Harrison's four songwriting contributions to the double album included "While My Guitar Gently Weeps", which featured Eric Clapton on lead guitar, and the horn-driven "Savoy Truffle". Dylan and the Band were a major musical influence on Harrison at the end of his career with the Beatles. While on a visit to Woodstock in late 1968, he established a friendship with Dylan and found himself drawn to the Band's sense of communal music-making and to the creative equality among the band members, which contrasted with Lennon and McCartney's domination of the Beatles' songwriting and creative direction. This coincided with a prolific period in his songwriting and a growing desire to assert his independence from the Beatles. Tensions among the group surfaced again in January 1969, at Twickenham Studios, during the filmed rehearsals that became the 1970 documentary "Let It Be". Frustrated by the cold and sterile film studio, by Lennon's creative disengagement from the Beatles, and by what he perceived as a domineering attitude from McCartney, Harrison quit the group on 10 January. He returned twelve days later, after his bandmates had agreed to move the film project to their own Apple Studio and to abandon McCartney's plan for making a return to public performance. Relations among the Beatles were more cordial, though still strained, when the band recorded their 1969 album "Abbey Road". The LP included what Lavezzoli describes as "two classic contributions" from Harrison – "Here Comes the Sun" and "Something" – that saw him "finally achieve equal songwriting status" with Lennon and McCartney. During the album's recording, Harrison asserted more creative control than before, rejecting suggestions for changes to his music, particularly from McCartney. "Something" became his first A-side when issued on a double A-side single with "Come Together"; the song was number one in Canada, Australia, New Zealand and West Germany, and the combined sides topped the "Billboard" Hot 100 chart in the United States. In the 1970s Frank Sinatra recorded "Something" twice (1970 and 1979) and later dubbed it "the greatest love song of the past fifty years". Lennon considered it the best song on "Abbey Road", and it became the Beatles' second most covered song after "Yesterday". In May 1970 Harrison's song "For You Blue" was coupled on a US single with McCartney's "The Long and Winding Road" and became Harrison's second chart-topper when the sides were listed together at number one on the Hot 100. His increased productivity meant that by the time of their break-up he had amassed a stockpile of unreleased compositions. While Harrison grew as a songwriter, his compositional presence on Beatles albums remained limited to two or three songs, increasing his frustration, and significantly contributing to the band's break-up. 
Harrison's last recording session with the Beatles was on 4 January 1970, when he, McCartney and Starr recorded his song "I Me Mine" for the "Let It Be" soundtrack album. Before the Beatles' break-up, Harrison had already recorded and released two solo albums: "Wonderwall Music" and "Electronic Sound", both of which contain mainly instrumental compositions. "Wonderwall Music", a soundtrack to the 1968 film "Wonderwall", blends Indian and Western instrumentation, while "Electronic Sound" is an experimental album that prominently features a Moog synthesizer. Released in November 1968, "Wonderwall Music" was the first solo album by a Beatle and the first LP released by Apple Records. Indian musicians Aashish Khan and Shivkumar Sharma performed on the album, which contains the experimental sound collage "Dream Scene", recorded several months before Lennon's "Revolution 9". In December 1969, Harrison participated in a brief tour of Europe with the American group Delaney & Bonnie and Friends. During the tour that included Clapton, Bobby Whitlock, drummer Jim Gordon and band leaders Delaney and Bonnie Bramlett, Harrison began to write "My Sweet Lord", which became his first single as a solo artist. Delaney Bramlett inspired Harrison to learn slide guitar, significantly influencing his later music. For many years, Harrison was restricted in his songwriting contributions to the Beatles' albums, but he released "All Things Must Pass", a triple album with two discs of his songs and the third of recordings of Harrison jamming with friends. The album was regarded by many as his best work, and it topped the charts on both sides of the Atlantic. The LP produced the number-one hit single "My Sweet Lord" and the top-ten single "What Is Life". The album was co-produced by Phil Spector using his "Wall of Sound" approach, and the musicians included Starr, Clapton, Gary Wright, Preston, Klaus Voormann, the whole of Delaney and Bonnie's Friends band and the Apple group Badfinger. On release, "All Things Must Pass" was received with critical acclaim; Ben Gerson of "Rolling Stone" described it as being "of classic Spectorian proportions, Wagnerian, Brucknerian, the music of mountain tops and vast horizons". Author and musicologist Ian Inglis considers the lyrics of the album's title track "a recognition of the impermanence of human existence ... a simple and poignant conclusion" to Harrison's former band. In 1971, Bright Tunes sued Harrison for copyright infringement over "My Sweet Lord", owing to its similarity to the 1963 Chiffons hit "He's So Fine". When the case was heard in the United States district court in 1976, he denied deliberately plagiarising the song, but lost the case, as the judge ruled that he had done so subconsciously. In 2000, Apple Records released a thirtieth anniversary edition of the album, and Harrison actively participated in its promotion. In an interview, he reflected on the work: "It's just something that was like my continuation from the Beatles, really. It was me sort of getting out of the Beatles and just going my own way ... it was a very happy occasion." He commented on the production: "Well, in those days it was like the reverb was kind of used a bit more than what I would do now. In fact, I don't use reverb at all. I can't stand it ... You know, it's hard to go back to anything thirty years later and expect it to be how you would want it now." 
Harrison responded to a request from Ravi Shankar by organising a charity event, the Concert for Bangladesh, which took place on 1 August 1971. The event drew over 40,000 people to two shows in New York's Madison Square Garden. The goal of the event was to raise money to aid starving refugees during the Bangladesh Liberation War. Shankar opened the show, which featured popular musicians such as Dylan, Clapton, Leon Russell, Badfinger, Preston and Starr. A triple album, "The Concert for Bangladesh", was released by Apple in December, followed by a concert film in 1972. Credited to "George Harrison and Friends", the album topped the UK chart and peaked at number 2 in the US, and went on to win the Grammy Award for Album of the Year. Tax troubles and questionable expenses later tied up many of the proceeds, but Harrison commented: "Mainly the concert was to attract attention to the situation ... The money we raised was secondary, and although we had some money problems ... they still got plenty ... even though it was a drop in the ocean. The main thing was, we spread the word and helped get the war ended." Harrison's 1973 album "Living in the Material World" held the number one spot on the "Billboard" albums chart for five weeks, and the album's single, "Give Me Love (Give Me Peace on Earth)", also reached number one in the US. In the UK, the LP peaked at number two and the single reached number 8. The album was lavishly produced and packaged, and its dominant message was Harrison's Hindu beliefs. In Greene's opinion it "contained many of the strongest compositions of his career". Stephen Holden, writing in "Rolling Stone", felt the album was "vastly appealing" and "profoundly seductive", and that it stood "alone as an article of faith, miraculous in its radiance". Other reviewers were less enthusiastic, describing the release as awkward, sanctimonious and overly sentimental. In November 1974, Harrison became the first ex-Beatle to tour North America when he began his 45-date Dark Horse Tour. The shows included guest spots by his band members Billy Preston and Tom Scott, and traditional and contemporary Indian music performed by "Ravi Shankar, Family and Friends". Despite numerous positive reviews, the consensus reaction to the tour was negative. Some fans found Shankar's significant presence to be a bizarre disappointment, and many were affronted by what Inglis described as Harrison's "sermonizing". Further, he reworked the lyrics to several Beatles songs, and his laryngitis-affected vocals led to some critics calling the tour "dark hoarse". The author Robert Rodriguez commented: "While the Dark Horse tour might be considered a noble failure, there were a number of fans who were tuned-in to what was being attempted. They went away ecstatic, conscious that they had just witnessed something so uplifting that it could never be repeated." Simon Leng called the tour "groundbreaking" and "revolutionary in its presentation of Indian Music". In December, Harrison released "Dark Horse", which was an album that earned him the least favourable reviews of his career. "Rolling Stone" called it "the chronicle of a performer out of his element, working to a deadline, enfeebling his overtaxed talents by a rush to deliver a new 'LP product', rehearse a band, and assemble a cross-country tour, all within three weeks". The album reached number 4 on the "Billboard" chart and the single "Dark Horse" reached number 15, but they failed to make an impact in the UK. 
The music critic Mikal Gilmore described "Dark Horse" as "one of Harrison's most fascinating works – a record about change and loss". Harrison's final studio album for EMI and Apple Records, the soul music-inspired "Extra Texture (Read All About It)" (1975), peaked at number 8 on the "Billboard" chart and number 16 in the UK. Harrison considered it the least satisfactory of the three albums he had recorded since "All Things Must Pass". Leng identified "bitterness and dismay" in many of the tracks; his long-time friend Klaus Voormann commented: "He wasn't up for it ... It was a terrible time because I think there was a lot of cocaine going around, and that's when I got out of the picture ... I didn't like his frame of mind". He released two singles from the LP: "You", which reached the "Billboard" top 20, and "This Guitar (Can't Keep from Crying)", Apple's final original single release. "Thirty Three & 1/3" (1976), Harrison's first album release on his own Dark Horse Records label, produced the hit singles "This Song" and "Crackerbox Palace", both of which reached the top 25 in the US. The surreal humour of "Crackerbox Palace" reflected Harrison's association with Monty Python's Eric Idle, who directed a comical music video for the song. With an emphasis on melody and musicianship, and a more subtle subject matter than the pious message of his earlier works, "Thirty Three & 1/3" earned Harrison his most favourable critical notices in the US since "All Things Must Pass". The album peaked just outside the top ten there, but outsold his previous two LPs. As part of his promotion for the release, Harrison performed on "Saturday Night Live" with Paul Simon. In 1979, Harrison released "George Harrison", which followed his second marriage and the birth of his son Dhani. Co-produced by Russ Titelman, the album and the single "Blow Away" both made the "Billboard" top 20. The album marked the beginning of Harrison's gradual retreat from the music business, with several of the songs having been written in the tranquil setting of Maui in the Hawaiian archipelago. Leng described "George Harrison" as "melodic and lush ... peaceful ... the work of a man who had lived the rock and roll dream twice over and was now embracing domestic as well as spiritual bliss". The murder of John Lennon on 8 December 1980 disturbed Harrison and reinforced his decades-long concern about stalkers. The tragedy was also a deep personal loss, although Harrison and Lennon had little contact in the years before Lennon was killed. Following the murder, Harrison commented: "After all we went through together I had and still have great love and respect for John Lennon. I am shocked and stunned." Harrison modified the lyrics of a song he had written for Starr in order to make the song a tribute to Lennon. "All Those Years Ago", which included vocal contributions from Paul and Linda McCartney, as well as Starr's original drum part, peaked at number two in the US charts. The single was included on the album "Somewhere in England" in 1981. Harrison did not release any new albums for five years after 1982's "Gone Troppo" received little notice from critics or the public. During this period he made several guest appearances, including a 1985 performance at a tribute to Carl Perkins. In March 1986, he made a surprise appearance during the finale of the Birmingham Heart Beat Charity Concert, an event organised to raise money for the Birmingham Children's Hospital. 
The following year, he appeared at The Prince's Trust concert at London's Wembley Arena, performing "While My Guitar Gently Weeps" and "Here Comes the Sun". In February 1987 he joined Dylan, John Fogerty and Jesse Ed Davis on stage for a two-hour performance with the blues musician Taj Mahal. Harrison recalled: "Bob rang me up and asked if I wanted to come out for the evening and see Taj Mahal ... So we went there and had a few of these Mexican beers – and had a few more ... Bob says, 'Hey, why don't we all get up and play, and you can sing?' But every time I got near the microphone, Dylan comes up and just starts singing this rubbish in my ear, trying to throw me." In November 1987 Harrison released the platinum album "Cloud Nine". Co-produced with Jeff Lynne of Electric Light Orchestra (ELO), the album included Harrison's rendition of James Ray's "Got My Mind Set on You", which went to number one in the US and number two in the UK. The accompanying music video received substantial airplay, and another single, "When We Was Fab", a retrospective of the Beatles' career, earned two MTV Video Music Award nominations in 1988. Recorded at his Friar Park estate, the album prominently featured Harrison's slide guitar playing and included several of his long-time musical collaborators, including Clapton, Jim Keltner and Jim Horn. "Cloud Nine" reached number eight and number ten on the US and UK charts respectively, and several tracks from the album achieved placement on "Billboard"s Mainstream Rock chart – "Devil's Radio", "This Is Love" and "Cloud 9". In 1988, Harrison formed the Traveling Wilburys with Jeff Lynne, Roy Orbison, Bob Dylan and Tom Petty. The band had gathered in Dylan's garage to record a song for a Harrison European single release. Harrison's record company decided the track, "Handle with Care", was too good for its original purpose as a B-side and asked for a full album. The LP, "Traveling Wilburys Vol. 1", was released in October 1988; the group recorded under pseudonyms as half-brothers, supposed sons of Charles Truscott Wilbury, Sr. It reached number 16 in the UK and number 3 in the US, where it was certified triple platinum. Harrison's pseudonym on the album was "Nelson Wilbury"; he used the name "Spike Wilbury" for their second album. In 1989, Harrison and Starr appeared in the music video for Petty's song "I Won't Back Down". In October that year, Harrison assembled and released "Best of Dark Horse 1976–1989", a compilation of his later solo work. The album included three new songs, including "Cheer Down", which Harrison had recently contributed to the "Lethal Weapon 2" film soundtrack. Following Orbison's death in December 1988, the Wilburys recorded as a four-piece. Their second album, issued in October 1990, was mischievously titled "Traveling Wilburys Vol. 3". According to Lynne, "That was George's idea. He said, 'Let's confuse the buggers.'" It peaked at number 14 in the UK and number 11 in the US, where it was certified platinum. The Wilburys never performed live, and the group did not record together again following the release of their second album. In December 1991, Harrison joined Clapton for a tour of Japan. It was Harrison's first tour since 1974, and it proved to be his last. On 6 April 1992, Harrison held a benefit concert for the Natural Law Party at the Royal Albert Hall, his first London performance since the Beatles' 1969 rooftop concert. 
In October 1992, he performed at a Bob Dylan tribute concert at Madison Square Garden in New York City, playing alongside Dylan, Clapton, McGuinn, Petty and Neil Young. In 1994 Harrison began a collaboration with McCartney, Starr and producer Jeff Lynne for the "Beatles Anthology" project. This included the recording of two new Beatles songs built around solo vocal and piano tapes recorded by Lennon as well as lengthy interviews about the Beatles' career. Released in December 1995, "Free as a Bird" was the first new Beatles single since 1970. In March 1996, they released a second single, "Real Love". Harrison refused to participate in the completion of a third song. He later commented on the project: "I hope somebody does this to all my crap demos when I'm dead, make them into hit songs." Following the "Anthology" project, Harrison collaborated with Ravi Shankar on the latter's "Chants of India". Harrison's final television appearance was a VH-1 special to promote the album, taped in May 1997. Soon afterwards, Harrison was diagnosed with throat cancer; he was treated with radiotherapy, which was thought at the time to be successful. He publicly blamed years of smoking for the illness. In January 1998, Harrison attended Carl Perkins' funeral in Jackson, Tennessee, where he performed a brief rendition of Perkins' song "Your True Love". In May, he represented the Beatles at London's High Court in their successful bid to gain control of unauthorised recordings made of a 1962 performance by the band at the Star-Club in Hamburg. The following year, he was the most active of his former bandmates in promoting the reissue of their 1968 animated film "Yellow Submarine". On 30 December 1999, Harrison and his wife were attacked at their home, Friar Park. Michael Abram, a 34-year-old man suffering from paranoid schizophrenia, broke in and attacked Harrison with a kitchen knife, puncturing a lung and causing head injuries before Olivia Harrison incapacitated the assailant by striking him repeatedly with a fireplace poker and a lamp. Harrison later commented, "I felt exhausted and could feel the strength draining from me. I vividly remember a deliberate thrust to my chest. I could hear my lung exhaling and had blood in my mouth. I believed I had been fatally stabbed." Following the attack, Harrison was hospitalised with more than 40 stab wounds, and part of his punctured lung was removed. He released a statement soon afterwards regarding his assailant: "He wasn't a burglar, and he certainly wasn't auditioning for the Traveling Wilburys. Adi Shankara, an Indian historical, spiritual and groovy-type person, once said, 'Life is fragile like a raindrop on a lotus leaf.' And you'd better believe it." In May 2001, it was revealed that Harrison had undergone an operation to remove a cancerous growth from one of his lungs, and in July, it was reported that he was being treated for a brain tumour at a clinic in Switzerland. While in Switzerland, Starr visited him but had to cut short his stay in order to travel to Boston, where his daughter was undergoing emergency brain surgery. Harrison, who was very weak, quipped: "Do you want me to come with you?" In November 2001, he began radiotherapy at Staten Island University Hospital in New York City for non-small cell lung cancer that had spread to his brain. When the news was made public, Harrison bemoaned his physician's breach of privacy, and his estate later claimed damages. On 12 November in New York, Harrison, Starr and McCartney came together for the last time. 
On 29 November 2001, Harrison died at Paul McCartney's property on Heather Road in Beverly Hills, Los Angeles. He was 58 years old. He was cremated at Hollywood Forever Cemetery and his funeral was held at the Self-Realization Fellowship Lake Shrine in Pacific Palisades, California. His close family scattered his ashes according to Hindu tradition in a private ceremony in the Ganges and Yamuna rivers near Varanasi, India. He left almost £100 million in his will. Harrison's final album, "Brainwashed" (2002), was released posthumously after it was completed by his son Dhani and Jeff Lynne. A quotation from the "Bhagavad Gita" is included in the album's liner notes: "There never was a time when you or I did not exist. Nor will there be any future when we shall cease to be." A media-only single, "Stuck Inside a Cloud", which Leng described as "a uniquely candid reaction to illness and mortality", achieved number 27 on "Billboard"s Adult Contemporary chart. The single "Any Road", released in May 2003, peaked at number 37 on the UK Singles Chart. "Marwa Blues" went on to receive the 2004 Grammy Award for Best Pop Instrumental Performance, while "Any Road" was nominated for Best Male Pop Vocal Performance. Harrison's guitar work with the Beatles was varied and flexible. Although not fast or flashy, his lead guitar playing was solid and typified the more subdued lead guitar style of the early 1960s. His rhythm guitar playing was innovative, for example when he used a capo to shorten the strings on an acoustic guitar, as on the "Rubber Soul" album and "Here Comes the Sun", to create a bright, sweet sound. Eric Clapton felt that Harrison was "clearly an innovator" as he was "taking certain elements of R&B and rock and rockabilly and creating something unique". "Rolling Stone" founder Jann Wenner described Harrison as "a guitarist who was never showy but who had an innate, eloquent melodic sense. He played exquisitely in the service of the song". The guitar picking style of Chet Atkins and Carl Perkins influenced Harrison, giving a country music feel to many of the Beatles' recordings. He identified Chuck Berry as another early influence. In 1961 the Beatles recorded "Cry for a Shadow", a blues-inspired instrumental co-written by Lennon and Harrison, who is credited with composing the song's lead guitar part, building on unusual chord voicings and imitating the style of other English groups such as the Shadows. Harrison's liberal use of the diatonic scale in his guitar playing reveals the influence of Buddy Holly, and his interest in Berry inspired him to compose songs based on the blues scale while incorporating a rockabilly feel in the style of Perkins. Another of Harrison's musical techniques was the use of guitar lines written in octaves, as on "I'll Be on My Way". By 1964, he had begun to develop a distinctive personal style as a guitarist, writing parts that featured the use of nonresolving tones, as with the ending chord arpeggios on "A Hard Day's Night". On this and other songs from the period, he used a Rickenbacker 360/12 – an electric guitar with twelve strings, the low eight of which are tuned in pairs, one octave apart, with the higher four being pairs tuned in unison. His use of the Rickenbacker on "A Hard Day's Night" helped to popularise the model, and the jangly sound became so prominent that "Melody Maker" termed it the Beatles' "secret weapon". 
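The string arrangement described above can be made concrete with a small illustration. The following sketch is not from the original text; it assumes the standard six-course guitar tuning (E A D G B E), which the passage does not specify, and simply enumerates which strings are doubled at the octave and which in unison.

COURSES = ["E", "A", "D", "G", "B", "E"]  # six courses, low to high (assumed standard tuning)

def rickenbacker_12_strings():
    """Enumerate the twelve strings as (course, doubling) pairs."""
    strings = []
    for i, note in enumerate(COURSES):
        if i < 4:
            # the low eight strings: each course is a pair tuned an octave apart
            strings.append((note, "fundamental"))
            strings.append((note, "octave above"))
        else:
            # the higher four strings: two courses of pairs tuned in unison
            strings.append((note, "unison"))
            strings.append((note, "unison"))
    return strings

for course, doubling in rickenbacker_12_strings():
    print(course, doubling)

Listed this way, the four lower courses each carry an added octave string while the two upper courses are doubled at pitch, which is the pairing the passage describes as producing the model's characteristic jangle.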
In 1965 Harrison used an expression pedal to control his guitar's volume on "I Need You", creating a syncopated flautando effect with the melody resolving its dissonance through tonal displacements. He used the same volume-swell technique on "Yes It Is", applying what Everett described as "ghostly articulation" to the song's natural harmonics. In 1966, Harrison contributed innovative musical ideas to "Revolver". He played backwards guitar on Lennon's composition "I'm Only Sleeping" and a guitar counter-melody on "And Your Bird Can Sing" that moved in parallel octaves above McCartney's bass downbeats. His guitar playing on "I Want to Tell You" exemplified the pairing of altered chordal colours with descending chromatic lines and his guitar part for "Sgt Pepper"s "Lucy in the Sky with Diamonds" mirrors Lennon's vocal line in much the same way that a sarangi player accompanies a khyal singer in a Hindu devotional song. Everett described Harrison's guitar solo from "Old Brown Shoe" as "stinging [and] highly Claptonesque". He identified two of the composition's significant motifs: a bluesy trichord and a diminished triad with roots in A and E. Huntley called the song "a sizzling rocker with a ferocious ... solo". In Greene's opinion, Harrison's demo for "Old Brown Shoe" contains "one of the most complex lead guitar solos on any Beatles song". Harrison's playing on "Abbey Road", and in particular on "Something", marked a significant moment in his development as a guitarist. The song's guitar solo shows a varied range of influences, incorporating the blues guitar style of Clapton and the styles of Indian gamakas. According to author and musicologist Kenneth Womack: "'Something' meanders toward the most unforgettable of Harrison's guitar solos ... A masterpiece in simplicity, [it] reaches toward the sublime". After Delaney Bramlett inspired him to learn slide guitar, Harrison began to incorporate it into his solo work, which allowed him to mimic many traditional Indian instruments, including the sarangi and the dilruba. Leng described Harrison's slide guitar solo on Lennon's "How Do You Sleep?" as a departure for "the sweet soloist of 'Something'", calling his playing "rightly famed ... one of Harrison's greatest guitar statements". Lennon commented: "That's the best he's ever fucking played in his life." A Hawaiian influence is notable in much of Harrison's music, ranging from his slide guitar work on "Gone Troppo" (1982) to his televised performance of the Cab Calloway standard "Between the Devil and the Deep Blue Sea" on ukulele in 1992. Lavezzoli described Harrison's slide playing on the Grammy-winning instrumental "Marwa Blues" (2002) as demonstrating Hawaiian influences while comparing the melody to an Indian sarod or veena, calling it "yet another demonstration of Harrison's unique slide approach". Harrison was an admirer of George Formby and a member of the Ukulele Society of Great Britain, and played a ukulele solo in the style of Formby at the end of "Free as a Bird". He performed at a Formby convention in 1991, and served as the honorary president of the George Formby Appreciation Society. Harrison played bass guitar on a few tracks, including the Beatles songs "She Said She Said", "Golden Slumbers", "Birthday" and "Honey Pie". He also played bass on several solo recordings, including "Faster", "Wake Up My Love" and "Bye Bye Love". 
During the Beatles' American tour in August 1965, Harrison's friend David Crosby of the Byrds introduced him to Indian classical music and the work of sitar maestro Ravi Shankar. Harrison described Shankar as "the first person who ever impressed me in my life ... and he was the only person who didn't try to impress me." Harrison became fascinated with the sitar and immersed himself in Indian music. According to Lavezzoli, Harrison's introduction of the instrument on the Beatles' song "Norwegian Wood" "opened the floodgates for Indian instrumentation in rock music, triggering what Shankar would call 'The Great Sitar Explosion' of 1966–67". Lavezzoli recognises Harrison as "the man most responsible for this phenomenon". In June 1966 Harrison met Shankar at the home of Mrs Angadi of the Asian Music Circle, asked to be his student, and was accepted. Before this meeting, Harrison had recorded his "Revolver" track "Love You To", contributing a sitar part that Lavezzoli describes as an "astonishing improvement" over "Norwegian Wood" and "the most accomplished performance on sitar by any rock musician". On 6 July, Harrison travelled to India to buy a sitar from Rikhi Ram & Sons in New Delhi. In September, following the Beatles' final tour, he returned to India to study sitar for six weeks with Shankar. He initially stayed in Bombay until fans learned of his arrival, then moved to a houseboat on a remote lake in Kashmir. During this visit, he also received tutelage from Shambhu Das, Shankar's protégé. Harrison studied the instrument until 1968, when, following a discussion with Shankar about the need to find his "roots", an encounter with Clapton and Jimi Hendrix at a hotel in New York convinced him to return to guitar playing. Harrison commented: "I decided ... I'm not going to be a great sitar player ... because I should have started at least fifteen years earlier." Harrison continued to use Indian instrumentation occasionally on his solo albums and remained strongly associated with the genre. Lavezzoli groups him with Paul Simon and Peter Gabriel as the three rock musicians who have given the most "mainstream exposure to non-Western musics, or the concept of 'world music'". Harrison wrote his first song, "Don't Bother Me", while sick in a hotel bed in Bournemouth during August 1963, as "an exercise to see if I "could" write a song", as he remembered. His songwriting ability improved throughout the Beatles' career, but his material did not earn full respect from Lennon, McCartney and producer George Martin until near the group's break-up. In 1969, McCartney told Lennon: "Until this year, our songs have been better than George's. Now this year his songs are at least as good as ours". Harrison often had difficulty getting the band to record his songs. Most Beatles albums from 1965 onwards contain at least two Harrison compositions; three of his songs appear on "Revolver", "the album on which Harrison came of age as a songwriter", according to Inglis. Harrison wrote the chord progression of "Don't Bother Me" almost exclusively in the Dorian mode, demonstrating an interest in exotic tones that eventually culminated in his embrace of Indian music. The latter proved a strong influence on his songwriting and contributed to his innovation within the Beatles. According to Mikal Gilmore of "Rolling Stone", "Harrison's openness to new sounds and textures cleared new paths for his rock and roll compositions. His use of dissonance on ... 
'Taxman' and 'I Want to Tell You' was revolutionary in popular music – and perhaps more originally creative than the avant-garde mannerisms that Lennon and McCartney borrowed from the music of Karlheinz Stockhausen, Luciano Berio, Edgard Varèse and Igor Stravinsky ..." Of the 1967 Harrison song "Within You Without You", author Gerry Farrell said that Harrison had created a "new form", calling the composition "a quintessential fusion of pop and Indian music". Lennon called the song one of Harrison's best: "His mind and his music are clear. There is his innate talent, he brought that sound together." In his next fully Indian-styled song, "The Inner Light", Harrison embraced the Karnatak discipline of Indian music, rather than the Hindustani style he had used in "Love You To" and "Within You Without You". Writing in 1997, Farrell commented: "It is a mark of Harrison's sincere involvement with Indian music that, nearly thirty years on, the Beatles' 'Indian' songs remain the most imaginative and successful examples of this type of fusion – for example, 'Blue Jay Way' and 'The Inner Light'." Beatles biographer Bob Spitz described "Something" as a masterpiece, and "an intensely stirring romantic ballad that would challenge 'Yesterday' and 'Michelle' as one of the most recognizable songs they ever produced". Inglis considered "Abbey Road" a turning point in Harrison's development as a songwriter and musician. He described Harrison's two contributions to the LP, "Here Comes the Sun" and "Something", as "exquisite", declaring them equal to any previous Beatles songs. From 1968 onwards, Harrison collaborated with other musicians; he brought in Eric Clapton to play lead guitar on "While My Guitar Gently Weeps" for the 1968 Beatles' White Album, and collaborated with John Barham on his 1968 debut solo album, "Wonderwall Music", which included contributions from Clapton again, as well as Peter Tork from the Monkees. He played on tracks by Dave Mason, Nicky Hopkins, Alvin Lee, Ronnie Wood, Billy Preston and Tom Scott. Harrison co-wrote songs and music with Dylan, Clapton, Preston, Doris Troy, David Bromberg, Gary Wright, Wood, Jeff Lynne and Tom Petty, among others. Harrison's music projects during the final years of the Beatles included producing Apple Records artists Doris Troy, Jackie Lomax and Billy Preston. Harrison co-wrote the song "Badge" with Clapton, which was included on Cream's 1969 album, "Goodbye". Harrison played rhythm guitar on the track, using the pseudonym "L'Angelo Misterioso" for contractual reasons. In May 1970 he played guitar on several songs during a recording session for Dylan's album "New Morning". Between 1971 and 1973 he co-wrote and/or produced three top ten hits for Starr: "It Don't Come Easy", "Back Off Boogaloo" and "Photograph". Aside from "How Do You Sleep?", his contributions to Lennon's 1971 album "Imagine" included a slide guitar solo on "Gimme Some Truth" and dobro on "Crippled Inside". Also that year, he produced and played slide guitar on Badfinger's top ten hit "Day After Day", and a dobro on Preston's "I Wrote a Simple Song". He worked with Harry Nilsson on "You're Breakin' My Heart" (1972) and with Cheech & Chong on "Basketball Jones" (1973). In 1974 Harrison founded Dark Horse Records as an avenue for collaboration with other musicians. He wanted Dark Horse to serve as a creative outlet for artists, as Apple Records had for the Beatles. 
Eric Idle commented: "He's extremely generous, and he backs and supports all sorts of people that you'll never, ever hear of." The first acts signed to the new label were Ravi Shankar and the duo Splinter. Harrison produced and made multiple musical contributions to Splinter's debut album, "The Place I Love", which provided Dark Horse with its first hit, "Costafine Town". He also produced and played guitar and autoharp on Shankar's "Shankar Family & Friends", the label's other inaugural release. Other artists signed by Dark Horse include Attitudes, Henry McCullough, Jiva and Stairsteps. Harrison collaborated with Tom Scott on Scott's 1975 album "New York Connection", and in 1981 he played guitar on "Walk a Thin Line", from Mick Fleetwood's "The Visitor". His contributions to Starr's solo career continued with "Wrack My Brain", a 1981 US top 40 hit written and produced by Harrison, and guitar overdubs to two tracks on "Vertical Man" (1998). In 1996 Harrison recorded "Distance Makes No Difference With Love" with Carl Perkins for the latter's album "Go Cat Go!", and in 1990 he played slide guitar on the title track of Dylan's "Under the Red Sky" album. In 2001 he performed as a guest musician on Jeff Lynne and Electric Light Orchestra's comeback album "Zoom", and on the song "Love Letters" for Bill Wyman's Rhythm Kings. He also co-wrote a new song with his son Dhani, "Horse to the Water", which was recorded on 2 October, eight weeks before his death. It appeared on Jools Holland's album "Small World, Big Band". When Harrison joined the Quarrymen in 1958 his main guitar was a Höfner President Acoustic, which he soon traded for a Höfner Club 40 model. His first solid-body electric guitar was a Czech-built Jolana Futurama/Grazioso. The guitars he used on early recordings were mainly Gretsch models, played through a Vox amplifier, including a Gretsch Duo Jet that he bought secondhand in 1961 and posed with on the album cover for "Cloud Nine". He also bought a Gretsch Tennessean and a Gretsch Country Gentleman, which he played on "She Loves You", and during the Beatles' 1964 appearance on "The Ed Sullivan Show". In 1963 he bought a Rickenbacker 425 Fireglo, and in 1964 he acquired a Rickenbacker 360/12 guitar, which was the second of its kind to be manufactured. Harrison obtained his first Fender Stratocaster in 1965 and first used it during the recording of the "Help!" album that February; he also used it when recording "Rubber Soul" later that year, most notably on the song "Nowhere Man". In early 1966 Harrison and Lennon each purchased Epiphone Casinos, which they used on "Revolver". Harrison also used a Gibson J-160E and a Gibson SG Standard while recording the album. He later painted his Stratocaster in a psychedelic design that included the word "Bebopalula" above the pickguard and the guitar's nickname, "Rocky", on the headstock. He played this guitar in the "Magical Mystery Tour" film and throughout his solo career. In July 1968, Clapton gave him a Gibson Les Paul, which Harrison nicknamed "Lucy". Around this time, he obtained a Gibson Jumbo J-200 acoustic guitar, which he subsequently gave to Dylan to use at the 1969 Isle of Wight Festival. In late 1968 Fender Musical Instruments Corporation gave Harrison a custom-made Fender Telecaster Rosewood prototype, made especially for him by Philip Kubicki. In August 2017, Fender released a "Limited Edition George Harrison Rosewood Telecaster" modelled after a Telecaster that Roger Rossmeisl originally created for Harrison. 
Harrison helped finance Ravi Shankar's documentary "Raga" and released it through Apple Films in 1971. He also produced, with Apple manager Allen Klein, the "Concert for Bangladesh" film. In 1973, he produced the feature film "Little Malcolm", but the project was lost amid the litigation surrounding the former Beatles ending their business ties with Klein. That same year, Peter Sellers introduced Harrison to Denis O'Brien. Soon after, the two went into business together. In 1978, in an effort to produce "Monty Python's Life of Brian", they formed the film production and distribution company HandMade Films. Their opportunity for investment came after EMI Films withdrew funding at the demand of its chief executive, Bernard Delfont. Harrison financed the production of "Life of Brian" in part by mortgaging his home, which Idle later called "the most anybody's ever paid for a cinema ticket in history". The film grossed $21 million at the box office in the US. The first film distributed by HandMade Films was "The Long Good Friday" (1980), and the first they produced was "Time Bandits" (1981), a project co-scripted by Monty Python's Terry Gilliam and Michael Palin. The film featured a new song by Harrison, "Dream Away", in the closing credits. "Time Bandits" became one of HandMade's most successful and acclaimed efforts; with a budget of $5 million, it earned $35 million in the US within ten weeks of its release. Harrison served as executive producer for 23 films with HandMade, including "A Private Function", "Mona Lisa", "Shanghai Surprise", "Withnail and I" and "How to Get Ahead in Advertising". He made cameo appearances in several of these films, including a role as a nightclub singer in "Shanghai Surprise", for which he recorded five new songs. According to Ian Inglis, Harrison's "executive role in HandMade Films helped to sustain British cinema at a time of crisis, producing some of the country's most memorable movies of the 1980s." Following a series of box office bombs in the late 1980s, and excessive debt incurred by O'Brien and guaranteed by Harrison, HandMade's financial situation became precarious. The company ceased operations in 1991 and was sold three years later to Paragon Entertainment, a Canadian corporation. Afterwards, Harrison sued O'Brien for $25 million for fraud and negligence, resulting in an $11.6 million judgement in 1996. Harrison was involved in humanitarian and political activism throughout his life. In the 1960s, the Beatles supported the civil rights movement and protested against the Vietnam War. In early 1971, Ravi Shankar consulted Harrison about how to provide aid to the people of Bangladesh after the 1970 Bhola cyclone and the Bangladesh Liberation War. Harrison hastily wrote and recorded the song "Bangla Desh", which became pop music's first charity single when issued by Apple Records in late July. He also pushed Apple to release Shankar's "Joi Bangla" EP in an effort to raise further awareness for the cause. Shankar asked for Harrison's advice about planning a small charity event in the US. Harrison responded by organising the Concert for Bangladesh, which raised more than $240,000. Around $13.5 million was generated through the album and film releases, although most of the funds were frozen in an Internal Revenue Service audit for ten years, due to Klein's failure to register the event as a UNICEF benefit beforehand. 
In June 1972, UNICEF honoured Harrison and Shankar, and Klein, with the "Child Is the Father of Man" award at an annual ceremony in recognition of their fundraising efforts for Bangladesh. From 1980, Harrison became a vocal supporter of Greenpeace and CND. He also protested against the use of nuclear energy with Friends of the Earth, and helped finance "Vole", a green magazine launched by Monty Python member Terry Jones. In 1990, he helped promote his wife Olivia's Romanian Angel Appeal on behalf of the thousands of Romanian orphans left abandoned by the state following the fall of Communism in Eastern Europe. Harrison recorded a benefit single, "Nobody's Child", with the Traveling Wilburys, and assembled a fundraising album with contributions from other artists including Clapton, Starr, Elton John, Stevie Wonder, Donovan and Van Morrison. The Concert for Bangladesh has been described as an innovative precursor for the large-scale charity rock shows that followed, including Live Aid. The George Harrison Humanitarian Fund for UNICEF, a joint effort between the Harrison family and the US Fund for UNICEF, aims to support programmes that help children caught in humanitarian emergencies. In December 2007, they donated $450,000 to help the victims of Cyclone Sidr in Bangladesh. On 13 October 2009, the first George Harrison Humanitarian Award went to Ravi Shankar for his efforts in saving the lives of children, and his involvement with the Concert for Bangladesh. By the mid-1960s Harrison had become an admirer of Indian culture and mysticism, introducing it to the other Beatles. During the filming of "Help!" in the Bahamas, they met the founder of Sivananda Yoga, Swami Vishnu-devananda, who gave each of them a signed copy of his book, "The Complete Illustrated Book of Yoga". Between the end of the last Beatles tour in 1966 and the beginning of the "Sgt Pepper" recording sessions, he made a pilgrimage to India with his wife Pattie; there, he studied sitar with Ravi Shankar, met several gurus, and visited various holy places. In 1968 he travelled to Rishikesh in northern India with the other Beatles to study meditation with Maharishi Mahesh Yogi. Harrison's use of psychedelic drugs encouraged his path to meditation and Hinduism. He commented: "For me, it was like a flash. The first time I had acid, it just opened up something in my head that was inside of me, and I realized a lot of things. I didn't learn them because I already knew them, but that happened to be the key that opened the door to reveal them. From the moment I had that, I wanted to have it all the time – these thoughts about the yogis and the Himalayas, and Ravi's music." In line with the Hindu yoga tradition, Harrison became a vegetarian in the late 1960s. After being given various religious texts by Shankar in 1966, he remained a lifelong advocate of the teachings of Swami Vivekananda and Paramahansa Yogananda – yogis and authors, respectively, of "Raja Yoga" and "Autobiography of a Yogi". In mid-1969, he produced the single "Hare Krishna Mantra", performed by members of the London Radha Krishna Temple. Having also helped the Temple devotees become established in Britain, Harrison then met their leader, A.C. Bhaktivedanta Swami Prabhupada, whom he described as "my friend ... my master" and "a perfect example of everything he preached". Harrison embraced the Hare Krishna tradition, particularly "japa-yoga" chanting with beads, and became a lifelong devotee. 
Regarding other faiths he once remarked: "All religions are branches of one big tree. It doesn't matter what you call Him just as long as you call." Before Harrison's religious conversion, Cliff Richard had been the only British performer known for similar activities; Richard's conversion to Christianity in 1966 had gone largely unnoticed by the public. "By contrast," wrote Inglis, "Harrison's spiritual journey was seen as a serious and important development that reflected popular music's increasing maturity ... what he, and the Beatles, had managed to overturn was the paternalistic assumption that popular musicians had no role other than to stand on stage and sing their hit songs." Harrison married model Pattie Boyd on 21 January 1966, with McCartney serving as best man. Harrison and Boyd had met in 1964 during the production of the film "A Hard Day's Night", in which the 19-year-old Boyd had been cast as a schoolgirl. They separated in 1974 and their divorce was finalised in 1977. Boyd said her decision to end the marriage was due largely to Harrison's repeated infidelities. The last of these culminated in an affair with Starr's wife Maureen, which Boyd called "the final straw". She characterised the last year of their marriage as "fuelled by alcohol and cocaine", and she stated: "George used coke excessively, and I think it changed him ... it froze his emotions and hardened his heart." She subsequently moved in with Eric Clapton, and they married in 1979. Harrison married Dark Horse Records' secretary Olivia Trinidad Arias on 2 September 1978. They had met at the A&M Records offices in Los Angeles in 1974, and together had one son, Dhani Harrison, born on 1 August 1978. He restored the English manor house and grounds of Friar Park, his home in Henley-on-Thames, where several of his music videos were filmed, including "Crackerbox Palace"; the grounds also served as the background for the cover of "All Things Must Pass". He employed ten workers to maintain the garden. Harrison commented on gardening as a form of escapism: "Sometimes I feel like I'm actually on the wrong planet, and it's great when I'm in my garden, but the minute I go out the gate I think: 'What the hell am I doing here?'" His autobiography, "I, Me, Mine", is dedicated "to gardeners everywhere". The former Beatles publicist Derek Taylor helped Harrison write the book, which said little about the Beatles, focusing instead on Harrison's hobbies, music and lyrics. Taylor commented: "George is not disowning the Beatles ... but it was a long time ago and actually a short part of his life." Harrison had an interest in sports cars and motor racing; he was one of the 100 people who purchased the McLaren F1 road car. He had collected photos of racing drivers and their cars since he was young; at 12 he had attended his first race, the 1955 British Grand Prix at Aintree. He wrote "Faster" as a tribute to the Formula One racing drivers Jackie Stewart and Ronnie Peterson. Proceeds from its release went to the Gunnar Nilsson cancer charity, set up after the Swedish driver's death from the disease in 1978. Harrison's first extravagant car, a 1964 Aston Martin DB5, was sold at auction on 7 December 2011 in London. An anonymous Beatles collector paid £350,000 for the vehicle that Harrison had bought new in January 1965. For most of the Beatles' career the relationships in the group were close. According to Hunter Davies, "the Beatles spent their lives not living a communal life, but communally living the same life. 
They were each other's greatest friends." Harrison's ex-wife Pattie Boyd described how the Beatles "all belonged to each other" and admitted, "George has a lot with the others that I can never know about. Nobody, not even the wives, can break through or even comprehend it." Starr said, "We really looked out for each other and we had so many laughs together. In the old days we'd have the biggest hotel suites, the whole floor of the hotel, and the four of us would end up in the bathroom, just to be with each other." He added, "there were some really loving, caring moments between four people: a hotel room here and there – a really amazing closeness. Just four guys who loved each other. It was pretty sensational." Lennon stated that his relationship with Harrison was "one of young follower and older guy ... [he] was like a disciple of mine when we started." The two later bonded over their LSD experiences, finding common ground as seekers of spirituality. They took radically different paths thereafter, with Harrison finding God and Lennon concluding that people are the creators of their own lives. In 1974 Harrison said of his former bandmate: "John Lennon is a saint and he's heavy-duty, and he's great and I love him. But at the same time, he's such a "bastard" – but that's the great thing about him, you see?" Harrison and McCartney were the first of the Beatles to meet, having shared a school bus, and often learned and rehearsed new guitar chords together. McCartney stated that he and Harrison usually shared a bedroom while touring. McCartney has referred to Harrison as his "baby brother". In a 1974 BBC radio interview with Alan Freeman, Harrison stated: "[McCartney] ruined me as a guitar player". Perhaps the most significant obstacle to a Beatles reunion after the death of Lennon was Harrison and McCartney's personal relationship, as both men admitted that they often got on each other's nerves. Rodriguez commented: "Even to the end of George's days, theirs was a volatile relationship". In June 1965, Harrison and the other Beatles were appointed Members of the Order of the British Empire (MBE). They received their insignia from the Queen at an investiture at Buckingham Palace on 26 October. In 1971 the Beatles received an Academy Award for Best Original Song Score for the film "Let It Be". The minor planet 4149 Harrison, discovered in 1984, was named after him, as was a variety of Dahlia flower. In December 1992 he became the first recipient of the Billboard Century Award, an honour presented to music artists for significant bodies of work. The award recognised Harrison's "critical role in laying the groundwork for the modern concept of world music" and his having "advanced society's comprehension of the spiritual and altruistic power of popular music". "Rolling Stone" magazine ranked him number 11 in their list of the "100 Greatest Guitarists of All Time". He is also ranked number 65 on the same magazine's list of the "100 Greatest Songwriters of All Time". In 2002, on the first anniversary of his death, the Concert for George was held at the Royal Albert Hall. Eric Clapton organised the event, which included performances by many of Harrison's friends and musical collaborators, including McCartney and Starr. Eric Idle, who described Harrison as "one of the few morally good people that rock and roll has produced", was among the performers of Monty Python's "Lumberjack Song". The profits from the concert went to Harrison's charity, the Material World Charitable Foundation. 
In 2004, Harrison was posthumously inducted into the Rock and Roll Hall of Fame as a solo artist by his former Traveling Wilburys bandmates Lynne and Petty, and into the Madison Square Garden Walk of Fame in 2006 for the Concert for Bangladesh. On 14 April 2009, the Hollywood Chamber of Commerce awarded Harrison a star on the Walk of Fame in front of the Capitol Records Building. McCartney, Lynne and Petty were present when the star was unveiled. Harrison's widow Olivia, the actor Tom Hanks and Idle made speeches at the ceremony, and Harrison's son Dhani spoke the Hare Krishna mantra. A documentary film, "George Harrison: Living in the Material World", directed by Martin Scorsese, was released in October 2011. The film features interviews with Olivia and Dhani Harrison, Klaus Voormann, Terry Gilliam, Starr, Clapton, McCartney, Keltner and Astrid Kirchherr. Harrison was posthumously honoured with The Recording Academy's Grammy Lifetime Achievement Award at the Grammy Awards in February 2015.
https://en.wikipedia.org/wiki?curid=12731
Giant planet A giant planet is any planet much larger than Earth. They are usually primarily composed of low-boiling-point materials (gases or ices), rather than rock or other solid matter, but massive solid planets can also exist. There are four known giant planets in the Solar System: Jupiter, Saturn, Uranus and Neptune. Many extrasolar giant planets have been identified orbiting other stars. Giant planets are also sometimes called jovian planets, after Jupiter ("Jove" being another name for the Roman god "Jupiter"). They are also sometimes known as gas giants. However, many astronomers now apply the latter term only to Jupiter and Saturn, classifying Uranus and Neptune, which have different compositions, as ice giants. Both names are potentially misleading: all of the giant planets consist primarily of fluids above their critical points, where distinct gas and liquid phases do not exist. The principal components are hydrogen and helium in the case of Jupiter and Saturn, and water, ammonia and methane in the case of Uranus and Neptune. The defining differences between a very low-mass brown dwarf and a gas giant are debated. One school of thought is based on formation; the other, on the physics of the interior. Part of the debate concerns whether "brown dwarfs" must, by definition, have experienced nuclear fusion at some point in their history. The term "gas giant" was coined in 1952 by the science fiction writer James Blish and was originally used to refer to all giant planets. Arguably it is something of a misnomer, because throughout most of the volume of these planets the pressure is so high that matter is not in gaseous form. Other than the upper layers of the atmosphere and solids in the core, all matter is above the critical point, where there is no distinction between liquids and gases. "Fluid planet" would be a more accurate term. Jupiter also has metallic hydrogen near its center, but much of its volume is hydrogen, helium, and traces of other gases above their critical points. The observable atmospheres of all these planets (at less than unit optical depth) are quite thin compared to their radii, only extending perhaps one percent of the way to the center. Thus the observable portions are gaseous (in contrast to Mars and Earth, which have gaseous atmospheres through which the crust can be seen). The rather misleading term has caught on because planetary scientists typically use "rock", "gas", and "ice" as shorthands for classes of elements and compounds commonly found as planetary constituents, irrespective of the matter's phase. In the outer Solar System, hydrogen and helium are referred to as "gases"; water, methane, and ammonia as "ices"; and silicates and metals as "rock". When deep planetary interiors are considered, it may not be far off to say that, by "ice" astronomers mean oxygen and carbon, by "rock" they mean silicon, and by "gas" they mean hydrogen and helium. The many ways in which Uranus and Neptune differ from Jupiter and Saturn have led some to use the term only for the planets similar to the latter two. With this terminology in mind, some astronomers have started referring to Uranus and Neptune as ice giants to indicate the predominance of the "ices" (in fluid form) in their interior composition. The alternative term "jovian planet" refers to the Roman god Jupiter—the genitive form of which is "Jovis", hence "Jovian"—and was intended to indicate that all of these planets were similar to Jupiter. 
Objects large enough to start deuterium fusion (above 13 Jupiter masses for solar composition) are called brown dwarfs, and these occupy the mass range between that of large giant planets and the lowest-mass stars. The 13-Jupiter-mass cutoff is a rule of thumb rather than something of precise physical significance. Larger objects will burn most of their deuterium and smaller ones will burn only a little, and the 13-Jupiter-mass value is somewhere in between. The amount of deuterium burnt depends not only on the mass but also on the composition of the planet, especially on the amount of helium and deuterium present. The Extrasolar Planets Encyclopaedia includes objects up to 60 Jupiter masses, and the Exoplanet Data Explorer up to 24 Jupiter masses. A giant planet is massive and has a thick atmosphere of hydrogen and helium. They may have a dense molten core of rocky elements, or the core may have completely dissolved and dispersed throughout the planet if the planet is hot enough. In "traditional" giant planets such as Jupiter and Saturn (the gas giants), hydrogen and helium constitute most of the mass of the planet, whereas they only make up an outer envelope on Uranus and Neptune, which are instead mostly composed of water, ammonia, and methane and therefore increasingly referred to as "ice giants". Extrasolar giant planets that orbit very close to their stars are the exoplanets that are easiest to detect. These are called hot Jupiters and hot Neptunes because they have very high surface temperatures. Hot Jupiters were, until the advent of space-borne telescopes, the most common form of exoplanet known, due to the relative ease of detecting them with ground-based instruments. Giant planets are commonly said to lack solid surfaces, but it is more accurate to say that they lack surfaces altogether since the gases that constitute them simply become thinner and thinner with increasing distance from the planets' centers, eventually becoming indistinguishable from the interplanetary medium. Therefore, landing on a giant planet may or may not be possible, depending on the size and composition of its core. Gas giants consist mostly of hydrogen and helium. The Solar System's gas giants, Jupiter and Saturn, have heavier elements making up between 3 and 13 percent of their mass. Gas giants are thought to consist of an outer layer of molecular hydrogen, surrounding a layer of liquid metallic hydrogen, and probably a molten core with a rocky composition. The outermost portion of Jupiter's and Saturn's hydrogen atmosphere has many layers of visible clouds that are mostly composed of water and ammonia. The layer of metallic hydrogen makes up the bulk of each planet, and is referred to as "metallic" because the very high pressure turns hydrogen into an electrical conductor. The core is thought to consist of heavier elements at such high temperatures (20,000 K) and pressures that their properties are poorly understood. Ice giants have distinctly different interior compositions from gas giants. The Solar System's ice giants, Uranus and Neptune, have a hydrogen-rich atmosphere that extends from the cloud tops down to about 80% (Uranus) or 85% (Neptune) of their radius. Below this, they are predominantly "icy", i.e. consist mostly of water, methane, and ammonia. There is also some rock and gas, but various proportions of ice–rock–gas could mimic pure ice, so that the exact proportions are unknown. 
Uranus and Neptune have very hazy atmospheric layers with small amounts of methane, giving them aquamarine colors: light blue and ultramarine, respectively. Both have magnetic fields that are sharply inclined to their axes of rotation. Unlike the other giant planets, Uranus has an extreme tilt that causes its seasons to be severely pronounced. The two planets also have other subtle but important differences. Uranus has more hydrogen and helium than Neptune despite being less massive overall. Neptune is therefore denser and has much more internal heat and a more active atmosphere. The Nice model, in fact, suggests that Neptune formed closer to the Sun than Uranus did, and should therefore have more heavy elements. Massive solid planets can also exist. Solid planets up to thousands of Earth masses may be able to form around massive stars (B-type and O-type stars; 5–120 solar masses), where the protoplanetary disk would contain enough heavy elements. Also, these stars have high UV radiation and winds that could photoevaporate the gas in the disk, leaving just the heavy elements. For comparison, Neptune's mass equals 17 Earth masses, Jupiter has 318 Earth masses, and the 13 Jupiter-mass limit used in the IAU's working definition of an exoplanet equals approximately 4000 Earth masses. A super-puff is a type of exoplanet with a mass only a few times larger than Earth's but a radius larger than Neptune, giving it a very low mean density. They are cooler and less massive than the inflated low-density hot Jupiters. The most extreme examples known are the three planets around Kepler-51, which are all Jupiter-sized but have densities below 0.1 g/cm³. Because of the limited techniques currently available to detect exoplanets, many of those found to date have been of a size associated, in the Solar System, with giant planets. Because these large planets are inferred to have more in common with Jupiter than with the other giant planets, some have claimed that "jovian planet" is a more accurate term for them. Many of the exoplanets are much closer to their parent stars and hence much hotter than the giant planets in the Solar System, making it possible that some of those planets are a type not observed in the Solar System. Considering the relative abundances of the elements in the universe (approximately 98% hydrogen and helium), it would be surprising to find a predominantly rocky planet more massive than Jupiter. On the other hand, models of planetary-system formation have suggested that giant planets would be inhibited from forming as close to their stars as many of the extrasolar giant planets have been observed to orbit. The bands seen in the atmosphere of Jupiter are due to counter-circulating streams of material called zones and belts, encircling the planet parallel to its equator. The zones are the lighter bands, and are at higher altitudes in the atmosphere. They have an internal updraft and are high-pressure regions. The belts are the darker bands, are lower in the atmosphere, and have an internal downdraft. They are low-pressure regions. These structures are somewhat analogous to the high and low-pressure cells in Earth's atmosphere, but they have a very different structure—latitudinal bands that circle the entire planet, as opposed to small confined cells of pressure. This appears to be a result of the rapid rotation and underlying symmetry of the planet. There are no oceans or landmasses to cause local heating and the rotation speed is much higher than that of Earth. 
There are smaller structures as well: spots of different sizes and colors. On Jupiter, the most noticeable of these features is the Great Red Spot, which has been present for at least 300 years. These structures are huge storms. Some such spots are thunderheads as well.
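The mass comparison and the super-puff density figures quoted above can be checked with simple arithmetic. The sketch below is not part of the original article: Jupiter's mass of 318 Earth masses is taken from the text, while Earth's mass (about 5.97 × 10^24 kg), Jupiter's mean radius (about 70,000 km) and the five-Earth-mass example planet are assumed round values standing in for the Kepler-51 planets.

import math

JUPITER_MASS_EARTHS = 318        # Jupiter's mass in Earth masses, as stated in the article
EARTH_MASS_KG = 5.97e24          # assumed round value, not given in the article
JUPITER_RADIUS_M = 7.0e7         # assumed round value (~70,000 km), not given in the article

# The 13-Jupiter-mass limit expressed in Earth masses ("approximately 4000" above)
print(13 * JUPITER_MASS_EARTHS)                  # -> 4134

# Mean density of a hypothetical super-puff: a few Earth masses of material
# spread over a roughly Jupiter-sized volume (cf. the Kepler-51 planets)
mass_kg = 5 * EARTH_MASS_KG
volume_m3 = (4.0 / 3.0) * math.pi * JUPITER_RADIUS_M ** 3
density_g_cm3 = (mass_kg / volume_m3) / 1000.0   # convert kg/m^3 to g/cm^3
print(round(density_g_cm3, 3))                   # -> 0.021

Both results are consistent with the figures given above: 13 Jupiter masses comes to roughly 4,100 Earth masses, and a planet of a few Earth masses with a Jupiter-like radius has a mean density of about 0.02 g/cm³, well below 0.1 g/cm³.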
https://en.wikipedia.org/wiki?curid=12733
Goddess A goddess is a female deity. Goddesses have been linked with virtues such as beauty, love, sexuality, motherhood and fertility (Mother-goddess cult in prehistoric times). They have also been associated with ideas such as war, creation, and death. In some faiths, a sacred female figure holds a central place in religious prayer and worship. For example, Shaktism, the worship of the female force that animates the world, is one of the three major sects of Hinduism. The primacy of a monotheistic or near-monotheistic "Great Goddess" is advocated by some modern matriarchists as a female version of, preceding, or analogue to, the Abrahamic God associated with the historical rise of monotheism in the Mediterranean Axis Age. Polytheist religions, including Polytheistic reconstructionists, honour multiple goddesses and gods, and usually view them as discrete, separate beings. These deities may be part of a pantheon, or different regions may have tutelary deities. The noun "goddess" is a secondary formation, combining the Germanic "god" with the Latinate "-ess" suffix. It first appeared in Middle English, from about 1350. The English word follows the linguistic precedent of a number of languages—including Egyptian, Classical Greek, and several Semitic languages—that add a feminine ending to the language's word for "god". Inanna was the most worshipped goddess in ancient Sumer. She was later syncretized with the East Semitic goddess Ishtar. Other Mesopotamian goddesses include Ninhursag, Ninlil, Antu and Gaga. Goddesses of the Canaanite religion include Ba`alat Gebal, Astarte and Anat. In pre-Islamic Mecca the goddesses Uzza, Manāt and al-Lāt were known as "the daughters of god". Uzzā was worshipped by the Nabataeans, who equated her with the Graeco-Roman goddesses Aphrodite, Urania, Venus and Caelestis. Each of the three goddesses had a separate shrine near Mecca. Uzzā was called upon for protection by the pre-Islamic Quraysh. In 624 at the battle called "Uhud", the war cry of the Qurayshites was "O people of Uzzā, people of Hubal!" (Tawil 1993). According to Ibn Ishaq's controversial account of the Satanic Verses ("q.v."), these verses had previously endorsed them as intercessors for Muslims, but were abrogated. Most Muslim scholars have regarded the story as historically implausible, while opinion is divided among western scholars such as Leone Caetani and John Burton, who argue against, and William Muir and William Montgomery Watt, who argue for its plausibility. Goddesses were also venerated in pre-Christian and pre-Islamic cultures that spoke Indo-European languages. Goddesses and otherworldly women feature prominently in Celtic polytheism. The Celts honored goddesses of nature and natural forces, as well as those connected with skills and professions such as healing, warfare and poetry. The Celtic goddesses have diverse qualities such as abundance, creation and beauty, as well as harshness, slaughter and vengeance. They have been depicted as beautiful or hideous, old hags or young women, and at times may transform their appearance from one state to another, or into their associated creatures such as crows, cows, wolves or eels, to name but a few. In Irish mythology in particular, tutelary goddesses are often associated with sovereignty and various features of the land, notably mountains, rivers, forests and holy wells. Surviving accounts of Germanic and Norse mythology contain numerous tales of goddesses, giantesses, and divine female figures. 
The Germanic peoples had altars erected to the "Mothers and Matrons" and held celebrations specific to these goddesses (such as the Anglo-Saxon "Mothers-night"). Various other female deities are attested among the Germanic peoples, such as Nerthus, attested in an early account of the Germanic peoples; Ēostre, attested among the pagan Anglo-Saxons; and Sinthgunt, attested among the pagan continental Germanic peoples. Examples of goddesses attested in Norse mythology include Frigg (wife of Odin, whose Anglo-Saxon counterpart is the namesake of the modern English weekday Friday), Skaði (one-time wife of Njörðr), Njerda (a Scandinavian name of Nerthus), who was also married to Njörðr during the Bronze Age, Freyja (wife of Óðr), Sif (wife of Thor), Gerðr (wife of Freyr), and personifications such as Jörð (earth), Sól (the sun), and Nótt (night). Female deities also figure heavily in the Norse conception of death: half of those slain in battle enter Freyja's field Fólkvangr, Hel presides over a realm of the same name, and Rán receives those who die at sea. Other female deities such as the valkyries, the norns, and the dísir are associated with a Germanic concept of fate (Old Norse "Ørlög", Old English "Wyrd"), and celebrations were held in their honor, such as the Dísablót and Disting. The Inca pantheon included Pachamama, the supreme Mother Earth; Mama Killa, a moon goddess; and Mama Ocllo, a fertility goddess. The main goddesses in the Maya pantheon were Ixchel, a mother goddess, and the Maya moon goddess. The Goddess I presided over eroticism, human procreation, and marriage. Ixtab was the goddess of suicide. In African and African diasporic religions, goddesses are often syncretized with Marian devotion, as in Ezili Dantor (Black Madonna of Częstochowa) and Erzulie Freda (Mater Dolorosa). There is also Buk, an Ethiopian goddess still worshipped in the southern regions. She represents the fertile aspect of women. Thus, when a woman is having her period, it signifies not only her submission to nature but also her union with the goddess. Another Ethiopian goddess is Atete, the goddess of spring and fertility. Farmers traditionally leave some of their produce at the end of each harvesting season as an offering, while women sing traditional songs. A rare example of henotheism focused on a single Goddess is found among the Southern Nuba of Sudan. The Nuba conceive of the creator Goddess as the "Great Mother" who gave birth to earth and to mankind. The goddess Amaterasu is chief among the Shinto gods, and other important female deities include Ame-no-Uzume-no-Mikoto, Inari and Konohanasakuya-hime. Hinduism is a complex of various belief systems that sees many gods and goddesses as being representative of and/or emanative from a single source, Brahman, understood either as a formless, infinite, impersonal monad in the Advaita tradition or as a dual god in the form of Lakshmi-Vishnu, Radha-Krishna, Shiva-Shakti in Dvaita traditions. Shaktas, worshippers of the Goddess, equate this god with Devi, the Mother Goddess. Such aspects of one god as the male god (Shaktiman) and female energy (Shakti), working as a pair, are often envisioned as male gods and their wives or consorts, and provide many analogues between passive male ground and dynamic female energy. For example, Brahma pairs with Sarasvati. Shiva likewise pairs with Parvati, who is later represented through a number of avatars (incarnations): Sati and the warrior figures Durga and Kali. 
All goddesses in Hinduism are sometimes grouped together as the great goddess, Devi. The Shaktas took a further step. Their ideology, based mainly on tantras, sees Shakti as the principle of energy through which all divinity functions, thus showing the masculine as depending on the feminine. In the great Shakta scripture known as the Devi Mahatmya, all the goddesses are aspects of one presiding female force—one in truth and many in expression—giving the world and the cosmos the galvanic energy for motion. Through philosophical tracts and metaphor, it expresses that the potentiality of masculine being is actuated by the feminine divine. More recently, the Indian author Rajesh Talwar has critiqued Western religion and written on the sacred feminine in the context of the North Indian goddess Vaishno Devi. Local deities of different village regions in India were often identified with "mainstream" Hindu deities, a process that has been called "Sanskritization". Others attribute it to the influence of monism, or "Advaita", which discounts polytheist or monotheist categorization. While the monist forces have led to a fusion between some of the goddesses (108 names are common for many goddesses), centrifugal forces have also resulted in new goddesses and rituals gaining ascendance among the laity in different parts of the Hindu world. Thus, the immensely popular goddess Durga was a pre-Vedic goddess who was later fused with Parvati, a process that can be traced through texts such as the Kalika Purana (10th century), Durgabhaktitarangini (Vidyapati, 15th century) and Chandimangal (16th century). According to the Zohar, Lilith is the name of Adam's first wife, who was created at the same time as Adam. She left Adam and refused to return to the Garden of Eden after she mated with the archangel Samael. Her story was greatly developed during the Middle Ages in the tradition of Aggadic midrashim, the Zohar and Jewish mysticism. The Zohar tradition has influenced Jewish folklore, which postulates that God created Adam to marry a woman named Lilith. Outside of Jewish tradition, Lilith was associated with the Mother Goddess, Inanna – later known as both Ishtar and Asherah. In the Epic of Gilgamesh, Gilgamesh was said to have destroyed a tree that was in a sacred grove dedicated to the goddess Ishtar/Inanna/Asherah, and Lilith ran into the wilderness in despair. She is then depicted in the Talmud and Kabbalah as the first wife of Adam, God's first creation of man. In time, as recounted in the Old Testament, the Hebrews continued to worship "false idols", such as Asherah, as being as powerful as God. In the Old Testament, Jeremiah speaks of his (and God's) displeasure with this behavior, rebuking the Hebrew people for their worship of the goddess. Lilith is banished from Adam and God's presence when she is discovered to be a "demon", and Eve becomes Adam's wife. Lilith then takes the form of the serpent in her jealous rage at being displaced as Adam's wife. Lilith as serpent then proceeds to trick Eve into eating the fruit from the tree of knowledge and in this way is responsible for the downfall of all of mankind. Notably, in religions pre-dating Judaism, the serpent was associated with wisdom and rebirth (through the shedding of its skin). Several female deities are also mentioned in prominent Hebrew texts. In Christianity, worship of any deity other than the Trinity was deemed heretical.
The veneration of Mary, the mother of Jesus Christ, as an especially privileged saint has continued since the beginning of the Catholic faith. Mary is venerated as the Mother of God, Queen of Heaven, Mother of the Church, Our Lady, Star of the Sea, and under other lofty titles. Similar Marian devotion is also found in Eastern Orthodoxy and sometimes in Anglicanism, though not in the majority of denominations of Protestantism. Nevertheless, the Virgin Mary is not regarded as a goddess in these traditions. In some Christian traditions (such as the Orthodox tradition), Sophia is the personification of divine wisdom (or, in some accounts, of an archangel) that takes female form. She is mentioned in the first chapter of the Book of Proverbs. Sophia is identified by some as the wisdom-imparting Holy Spirit of the Christian Trinity, whose names in Hebrew—Ruach and Shekhinah—are both feminine, and whose symbol of the dove was commonly associated in the Ancient Near East with the figure of the Mother Goddess. Members of The Church of Jesus Christ of Latter-day Saints (Mormons) believe in, though do not directly worship, a Heavenly Mother who is the female counterpart of the Heavenly Father. The church's adherents also believe that all humans, both men and women, have the potential to become as Gods, through a process known as exaltation. In mysticism, Gnosticism, and some Hellenistic religions, there is a female spirit or goddess named Sophia who is said to embody wisdom and who is sometimes described as a virgin. In Roman Catholic mysticism, Saint Hildegard celebrated Sophia as a cosmic figure both in her writing and art. Within the Protestant tradition in England, the 17th-century mystic universalist and founder of the Philadelphian Society, Jane Leade, wrote copious descriptions of her visions and dialogues with the "Virgin Sophia" who, she said, revealed to her the spiritual workings of the universe. Leade was hugely influenced by the theosophical writings of the 16th-century German Christian mystic Jakob Böhme, who also speaks of Sophia in works such as "The Way to Christ". Böhme was a major influence on a number of Christian mystics and religious leaders, including George Rapp and the Harmony Society. At least since first-wave feminism in the United States, there has been interest in analyzing religion to see if and how doctrines and practices treat women unfairly, as in Elizabeth Cady Stanton's "The Woman's Bible". Again in second-wave feminism in the U.S., as well as in many European and other countries, religion became the focus of some feminist analysis in Judaism, Christianity, and other religions, and some women turned to ancient goddess religions as an alternative to Abrahamic religions ("Womanspirit Rising" 1979; "Weaving the Visions" 1989). Today both women and men continue to be involved in the Goddess movement (Christ 1997). The popularity of organizations such as the Fellowship of Isis attests to the continuing growth of the religion of the Goddess throughout the world. While much of the attempt at gender equity in mainstream Christianity (Judaism never recognized any gender for God) is aimed at reinterpreting scripture and degenderizing language used to name and describe the divine (Ruether 1984; Plaskow 1991), a growing number of people who identify as Christians or Jews are trying to integrate goddess imagery into their religions (Kien 2000; Kidd 1996; "Goddess Christians Yahoo Group").
The term "sacred feminine" was first coined in the 1970s, in New Age popularizations of the Hindu Shakti. Hinduism also worships multitude of goddesses that have their important role and thus in all came to interest for the New Age, feminist, and lesbian feminist movements. The term "goddess" has also been adapted to poetic and secular use as a complimentary description of a non-mythological woman. The OED notes 1579 as the date of the earliest attestation of such figurative use, in "Lauretta the diuine Petrarches Goddesse". Shakespeare had several of his male characters address female characters as goddesses, including Demetrius to Helena in "A Midsummer Night's Dream" ("O Helen, goddess, nymph, perfect, divine!"), Berowne to Rosaline in "Love's Labour's Lost" ("A woman I forswore; but I will prove, Thou being a goddess, I forswore not thee"), and Bertram to Diana in "All's Well That Ends Well". Pisanio also compares Imogen to a goddess to describe her composure under duress in "Cymbeline". Most Modern Pagan traditions honour one or more goddesses. While some who follow Wicca believe in a duotheistic belief system, consisting of a single goddess and a single god, who in hieros gamos represent a united whole, others recognize only one or more goddesses. In Wicca "the Goddess" is a deity of prime importance, along with her consort the Horned God. Within many forms of Wicca the Goddess has come to be considered as a universal deity, more in line with her description in the Charge of the Goddess, a key Wiccan text. In this guise she is the "Queen of Heaven", similar to Isis. She also encompasses and conceives all life, much like Gaia. Similarly to Isis and certain late Classical conceptions of Selene, she is the summation of all other goddesses, who represent her different names and aspects across the different cultures. The Goddess is often portrayed with strong lunar symbolism, drawing on various cultures and deities such as Diana, Hecate, and Isis, and is often depicted as the Maiden, Mother, and Crone triad popularised by Robert Graves (see Triple Goddess below). Many depictions of her also draw strongly on Celtic goddesses. Some Wiccans believe there are many goddesses, and in some forms of Wicca, notably Dianic Wicca, the Goddess alone is worshipped, and the God plays very little part in their worship and ritual. Goddesses or demi-goddesses appear in sets of three in a number of ancient European pagan mythologies; these include the Greek "Erinyes" (Furies) and "Moirai" (Fates); the Norse "Norns"; Brighid and her two sisters, also called Brighid, from Irish or Celtic mythology. Robert Graves popularised the triad of "Maiden" (or "Virgin"), "Mother" and "Crone", and while this idea did not rest on sound scholarship, his poetic inspiration has gained a tenacious hold. Considerable variation in the precise conceptions of these figures exists, as typically occurs in Neopaganism and indeed in pagan religions in general. Some choose to interpret them as three stages in a woman's life, separated by menarche and menopause. Others find this too biologically based and rigid, and prefer a freer interpretation, with the Maiden as birth (independent, self-centred, seeking), the Mother as giving birth (interrelated, compassionate nurturing, creating), and the Crone as death and renewal (holistic, remote, unknowable) — and all three erotic and wise.
Gunpowder Gunpowder, also known as black powder to distinguish it from modern smokeless powder, is the earliest known chemical explosive. It consists of a mixture of sulfur (S), charcoal (C), and potassium nitrate (saltpeter, KNO3). The sulfur and charcoal act as fuels while the saltpeter is an oxidizer. Because of its incendiary properties and the amount of heat and gas volume that it generates, gunpowder has been widely used as a propellant in firearms, artillery, rocketry, and pyrotechnics, including use as a blasting agent in quarrying, mining, and road building. Gunpowder was invented in 9th-century China as one of the Four Great Inventions, and spread throughout most parts of Eurasia by the end of the 13th century. Originally developed by the Taoists for medicinal purposes, gunpowder was first used for warfare about 904 AD. Gunpowder is classified as a low explosive because of its relatively slow decomposition rate and consequently low brisance. Low explosives deflagrate (i.e., burn) at "subsonic" speeds, whereas high explosives detonate, producing a supersonic shockwave. Ignition of gunpowder packed behind a projectile generates enough pressure to force the shot from the muzzle at high speed, but usually not enough force to rupture the gun barrel. Gunpowder thus makes a good propellant, but is less suitable for shattering rock or fortifications with its low-yield explosive power. However, by transferring enough energy from the burning gunpowder to the mass of the cannonball, and then from the cannonball to the opposing fortifications on impact, a bombardier may eventually wear down an opponent's fortified defenses. Gunpowder was widely used to fill fused artillery shells (and used in mining and civil engineering projects) until the second half of the 19th century, when the first high explosives were put into use. Gunpowder is no longer used in modern weapons, nor is it used for industrial purposes, due to its relative inefficiency and cost compared to newer alternatives such as dynamite and ammonium nitrate/fuel oil. Today gunpowder firearms are limited primarily to hunting, target shooting, and bulletless historical reenactments. The first confirmed reference to what can be considered gunpowder in China occurred in the 9th century AD during the Tang dynasty, first in a formula contained in the "Taishang Shengzu Jindan Mijue" (太上聖祖金丹秘訣) in 808, and then about 50 years later in a Taoist text known as the "Zhenyuan miaodao yaolüe" (真元妙道要略). The "Taishang Shengzu Jindan Mijue" mentions a gunpowder formula composed of six parts sulfur to six parts saltpeter to one part birthwort herb. According to the "Zhenyuan miaodao yaolüe", "Some have heated together sulfur, realgar and saltpeter with honey; smoke and flames result, so that their hands and faces have been burnt, and even the whole house where they were working burned down." Based on these Taoist texts, the invention of gunpowder by Chinese alchemists was likely an accidental byproduct of experiments seeking to create the elixir of life. This origin in experimental medicine is reflected in gunpowder's Chinese name "huoyao", which means "fire medicine". Saltpeter was known to the Chinese by the mid-1st century AD and was primarily produced in the provinces of Sichuan, Shanxi, and Shandong. There is strong evidence of the use of saltpeter and sulfur in various medicinal combinations.
A Chinese alchemical text dated 492 noted that saltpeter burnt with a purple flame, providing a practical and reliable means of distinguishing it from other inorganic salts, thus enabling alchemists to evaluate and compare purification techniques; the earliest Latin accounts of saltpeter purification are dated after 1200. The earliest chemical formula for gunpowder appeared in the 11th-century Song dynasty text "Wujing Zongyao" ("Complete Essentials from the Military Classics"), written by Zeng Gongliang between 1040 and 1044. The "Wujing Zongyao" provides encyclopedic references to a variety of mixtures that included petrochemicals—as well as garlic and honey. A slow match for flame-throwing mechanisms using the siphon principle and for fireworks and rockets is mentioned. The mixture formulas in this book do not contain enough saltpeter to create an explosive, however; being limited to at most 50% saltpeter, they produce an incendiary. The "Essentials" was written by a Song dynasty court bureaucrat, and there is little evidence that it had any immediate impact on warfare; there is no mention of gunpowder use in the chronicles of the wars against the Tanguts in the 11th century, and China was otherwise mostly at peace during this century. However, gunpowder had already been used for fire arrows since at least the 10th century. The first recorded military application of gunpowder dates to the year 904, in the form of incendiary projectiles. In the following centuries various gunpowder weapons such as bombs, fire lances, and the gun appeared in China. Explosive weapons such as bombs have been discovered in a shipwreck off the shore of Japan dated to 1281, during the Mongol invasions of Japan. By 1083 the Song court was producing hundreds of thousands of fire arrows for their garrisons. Bombs and the first proto-guns, known as "fire lances", became prominent during the 12th century and were used by the Song during the Jin-Song Wars. Fire lances were first recorded to have been used at the Siege of De'an in 1132 by Song forces against the Jin. In the early 13th century the Jin utilized iron-casing bombs. Projectiles were added to fire lances, and re-usable fire lance barrels were developed, first out of hardened paper, and then metal. By 1257 some fire lances were firing wads of bullets. In the late 13th century metal fire lances became 'eruptors', proto-cannons firing co-viative projectiles (mixed with the propellant, rather than seated over it with a wad), and by 1287 at the latest had become true guns, the hand cannon. The Muslims acquired knowledge of gunpowder some time between 1240 and 1280, by which point the Syrian Hasan al-Rammah had written, in Arabic, recipes for gunpowder, instructions for the purification of saltpeter, and descriptions of gunpowder incendiaries. It is implied by al-Rammah's usage of "terms that suggested he derived his knowledge from Chinese sources" and his references to saltpeter as "Chinese snow", fireworks as "Chinese flowers" and rockets as "Chinese arrows" that knowledge of gunpowder arrived from China. However, because al-Rammah attributes his material to "his father and forefathers", al-Hassan argues that gunpowder became prevalent in Syria and Egypt by "the end of the twelfth century or the beginning of the thirteenth". In Persia, saltpeter was known as "Chinese salt" ("namak-i chīnī") or "salt from Chinese salt marshes".
Hasan al-Rammah included 107 gunpowder recipes in his text "al-Furusiyyah wa al-Manasib al-Harbiyya" ("The Book of Military Horsemanship and Ingenious War Devices"), 22 of which are for rockets. If one takes the median of 17 of these 22 compositions for rockets (75% nitrates, 9.06% sulfur, and 15.94% charcoal), it is nearly identical to the modern reported ideal gunpowder recipe of 75% potassium nitrate, 10% sulfur, and 15% charcoal. Al-Hassan claims that at the Battle of Ain Jalut in 1260, the Mamluks used against the Mongols "the first cannon in history", loaded with a gunpowder formula whose composition ratios were nearly identical to the ideal for explosive gunpowder. Other historians urge caution regarding claims of Islamic firearms use in the 1204–1324 period, as late medieval Arabic texts used the same word for gunpowder, "naft", that they used for an earlier incendiary, naphtha. Khan claims that it was the invading Mongols who introduced gunpowder to the Islamic world and cites Mamluk antagonism towards early musketeers in their infantry as an example of how gunpowder weapons were not always met with open acceptance in the Middle East. Similarly, the refusal of their Qizilbash forces to use firearms contributed to the Safavid rout at Chaldiran in 1514. The musket appeared in the Ottoman Empire by 1465. In 1598, the Chinese writer Zhao Shizhen described Turkish muskets as being superior to European muskets. The Chinese military book "Wu Pei Chih" (1621) later described Turkish muskets that used a rack-and-pinion mechanism, which was not known to have been used in European or Chinese firearms at the time. The state-controlled manufacture of gunpowder by the Ottoman Empire, through early supply chains to obtain nitre, sulfur and high-quality charcoal from oaks in Anatolia, contributed significantly to its expansion between the 15th and 18th centuries. It was not until later in the 19th century that the syndicalist production of Turkish gunpowder was greatly reduced, coinciding with the decline of the empire's military might. The earliest Western accounts of gunpowder appear in texts written by the English philosopher Roger Bacon in the 13th century. Several sources mention Chinese firearms and gunpowder weapons being deployed by the Mongols against European forces at the Battle of Mohi in 1241. Professor Kenneth Warren Chase credits the Mongols with introducing gunpowder and its associated weaponry into Europe. However, there is no clear route of transmission, and while the Mongols are often pointed to as the likeliest vector, Timothy May points out that "there is no concrete evidence that the Mongols used gunpowder weapons on a regular basis outside of China." May also points out, "However... the Mongols used the gunpowder weapon in their wars against the Jin, the Song and in their invasions of Japan." In Europe, one of the first mentions of gunpowder use appears in a passage found in Roger Bacon's "Opus Maius" of 1267 and "Opus Tertium", in what has been interpreted as a description of firecrackers. The most telling passage reads: "We have an example of these things (that act on the senses) in [the sound and fire of] that children's toy which is made in many [diverse] parts of the world; i.e., a device no bigger than one's thumb.
From the violence of that salt called saltpeter [together with sulfur and willow charcoal, combined into a powder] so horrible a sound is made by the bursting of a thing so small, no more than a bit of parchment [containing it], that we find [the ear assaulted by a noise] exceeding the roar of strong thunder, and a flash brighter than the most brilliant lightning." In the early 20th century, British artillery officer Henry William Lovett Hime proposed that another work tentatively attributed to Bacon, "Epistola de Secretis Operibus Artis et Naturae, et de Nullitate Magiae" contained an encrypted formula for gunpowder. This claim has been disputed by historians of science including Lynn Thorndike, John Maxson Stillman and George Sarton and by Bacon's editor Robert Steele, both in terms of authenticity of the work, and with respect to the decryption method. In any case, the formula claimed to have been decrypted (7:5:5 saltpeter:charcoal:sulfur) is not useful for firearms use or even firecrackers, burning slowly and producing mostly smoke. However, if Bacon's recipe is taken as measurements by volume rather than weight, a far more potent and serviceable explosive powder is created suitable for firing hand-cannons, albeit less consistent due to the inherent inaccuracies of measurements by volume. One example of this composition resulted in 100 parts saltpeter, 27 parts charcoal, and 45 parts sulfur, by weight. The "Liber Ignium", or "Book of Fires", attributed to Marcus Graecus, is a collection of incendiary recipes, including some gunpowder recipes. Partington dates the gunpowder recipes to approximately 1300. One recipe for "flying fire" ("ignis volatilis") involves saltpeter, sulfur, and colophonium, which, when inserted into a reed or hollow wood, "flies away suddenly and burns up everything." Another recipe, for artificial "thunder", specifies a mixture of one pound native sulfur, two pounds linden or willow charcoal, and six pounds of saltpeter. Another specifies a 1:3:9 ratio. Some of the gunpowder recipes of "De Mirabilibus Mundi" of Albertus Magnus are identical to the recipes of the "Liber Ignium", and according to Partington, "may have been taken from that work, rather than conversely." Partington suggests that some of the book may have been compiled by Albert's students, "but since it is found in thirteenth century manuscripts, it may well be by Albert." Albertus Magnus died in 1280. A major advance in manufacturing began in Europe in the late 14th century when the safety and thoroughness of incorporation was improved by wet grinding; liquid, such as distilled spirits was added during the grinding-together of the ingredients and the moist paste dried afterwards. The principle of wet mixing to prevent the separation of dry ingredients, invented for gunpowder, is used today in the pharmaceutical industry. It was also discovered that if the paste was rolled into balls before drying the resulting gunpowder absorbed less water from the air during storage and traveled better. The balls were then crushed in a mortar by the gunner immediately before use, with the old problem of uneven particle size and packing causing unpredictable results. If the right size particles were chosen, however, the result was a great improvement in power. 
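To make the ratio comparisons above easier to follow, here is a short Python sketch (illustrative only, using just the figures quoted in the text) that normalizes each parts-by-weight recipe to percentages and sets it beside the modern 75:10:15 ideal.

```python
# Normalize parts-by-weight gunpowder recipes to percentages for comparison.
# All figures are those quoted in the text; this is an illustrative sketch only.

def to_percent(saltpeter, sulfur, charcoal):
    total = saltpeter + sulfur + charcoal
    return {name: round(100 * part / total, 1)
            for name, part in (("saltpeter", saltpeter),
                               ("sulfur", sulfur),
                               ("charcoal", charcoal))}

recipes = {
    "modern ideal":            to_percent(75, 10, 15),
    "al-Rammah rocket median": to_percent(75, 9.06, 15.94),
    "Bacon 7:5:5 (by weight)": to_percent(7, 5, 5),
    "Bacon read by volume":    to_percent(100, 45, 27),  # the 100:27:45-by-weight example
}

for name, mix in recipes.items():
    print(f"{name:24s} {mix}")
```

Read by weight, Bacon's 7:5:5 ratio contains only about 41% saltpeter, which helps explain its slow, smoky burn; the volume-based reading reported in the text works out to roughly 58% saltpeter, much closer to a serviceable powder.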
Forming the damp paste into "corn"-sized clumps by hand or with the use of a sieve instead of larger balls produced a product after drying that loaded much better, as each tiny piece provided its own surrounding air space that allowed much more rapid combustion than a fine powder. This "corned" gunpowder was from 30% to 300% more powerful. An example is cited where 34 pounds of serpentine was needed to shoot a 47-pound ball, but only 18 pounds of corned powder. The optimum size of the grain depended on its use; larger for large cannon, finer for small arms. Larger cast cannons were easily muzzle-loaded with corned powder using a long-handled ladle. Corned powder also retained the advantage of low moisture absorption, as even tiny grains still had much less surface area to attract water than a floury powder. During this time, European manufacturers also began regularly purifying saltpeter, using wood ashes containing potassium carbonate to precipitate calcium from their dung liquor, and using ox blood, alum, and slices of turnip to clarify the solution. During the Renaissance, two European schools of pyrotechnic thought emerged, one in Italy and the other at Nuremberg, Germany. The German printer and publisher Christiaan Egenolff adapted an earlier work on pyrotechnics from manuscript to print form, publishing his "Büchsenmeysterei" in 1529 and reprinting it in 1531. Now extremely rare, the book discusses the manufacturing of gunpowder, the operation of artillery and the rules of conduct for the gunsmith. In Italy, Vannoccio Biringuccio, born in 1480, was a member of the guild "Fraternita di Santa Barbara" but broke with the tradition of secrecy by setting down everything he knew in a book titled "De la pirotechnia", written in vernacular. It was published posthumously in 1540, with 9 editions over 138 years, and also reprinted by MIT Press in 1966. By the mid-17th century fireworks were used for entertainment on an unprecedented scale in Europe, being popular even at resorts and public gardens. With the publication of "Deutliche Anweisung zur Feuerwerkerey" (1748), methods for creating fireworks were sufficiently well-known and well-described that "Firework making has become an exact science." In 1774 Louis XVI ascended to the throne of France at age 20. After he discovered that France was not self-sufficient in gunpowder, a Gunpowder Administration was established; to head it, the lawyer Antoine Lavoisier was appointed. Although from a bourgeois family, after his degree in law Lavoisier became wealthy from a company set up to collect taxes for the Crown; this allowed him to pursue experimental natural science as a hobby. Without access to cheap saltpeter (controlled by the British), for hundreds of years France had relied on saltpetremen with royal warrants, the "droit de fouille" or "right to dig", to seize nitrous-containing soil and demolish walls of barnyards, without compensation to the owners. This caused farmers, the wealthy, or entire villages to bribe the petermen and the associated bureaucracy to leave their buildings alone and the saltpeter uncollected. Lavoisier instituted a crash program to increase saltpeter production, revised (and later eliminated) the "droit de fouille", researched best refining and powder manufacturing methods, instituted management and record-keeping, and established pricing that encouraged private investment in works. 
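As a quick check on the serpentine-versus-corned example cited above, the arithmetic below (figures from the text) shows that the saving corresponds to roughly an 89% gain per pound of powder, comfortably inside the quoted 30% to 300% range.

```python
# Serpentine vs. corned powder for the same 47-pound ball (figures from the text).
serpentine_lb, corned_lb = 34, 18
improvement = serpentine_lb / corned_lb - 1  # fractional gain per pound of powder
print(f"Corned powder was roughly {improvement:.0%} more effective per pound")  # ~89%
```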
Although saltpeter from new Prussian-style putrefaction works had not been produced yet (the process taking about 18 months), in only a year France had gunpowder to export. A chief beneficiary of this surplus was the American Revolution. By careful testing and adjusting of the proportions and grinding time, powder from mills such as that at Essonne outside Paris had become the best in the world by 1788, and inexpensive. In the Spanish Empire, the crown held a monopoly on gunpowder, and the terms were set out during the eighteenth-century Bourbon Reforms. Gunpowder production in Britain appears to have started in the mid-14th century with the aim of supplying the English Crown. Records show that, in England, gunpowder was being made in 1346 at the Tower of London; a powder house existed at the Tower in 1461; and in 1515 three King's gunpowder makers worked there. Gunpowder was also being made or stored at other royal castles, such as Portchester. By the early 14th century, according to N.J.G. Pounds's study "The Medieval Castle in England and Wales", many English castles had been deserted and others were crumbling. Their military significance faded except on the borders. Gunpowder had made smaller castles useless. Henry VIII of England was short of gunpowder when he invaded France in 1544, and England needed to import gunpowder via the port of Antwerp in what is now Belgium. The English Civil War (1642–1645) led to an expansion of the gunpowder industry, with the repeal of the Royal Patent in August 1641. One of the most notable uses of gunpowder in Great Britain was the Gunpowder Plot of 1605, a failed assassination attempt on King James I and VI. The plot was foiled when Guy Fawkes was found under the House of Lords with hidden barrels of gunpowder. All of the conspirators who had a role in the plot escaped but were eventually caught. King James later decreed that 5 November become a day of celebration, a tradition that carries on today as Bonfire Night. Two British scientists, Andrew Noble and Frederick Abel, worked to improve the properties of black powder during the late 19th century. This formed the basis for the Noble-Abel gas equation for internal ballistics. The introduction of smokeless powder in the late 19th century led to a contraction of the gunpowder industry. After the end of World War I, the majority of the United Kingdom's gunpowder manufacturers merged into a single company, "Explosives Trades limited", and a number of sites were closed down, including those in Ireland. This company became Nobel Industries Limited, and in 1926 became a founding member of Imperial Chemical Industries. The Home Office removed gunpowder from its list of "Permitted Explosives", and shortly afterwards, on 31 December 1931, the former Curtis & Harvey's Glynneath gunpowder factory at Pontneddfechan, in Wales, closed down; it was demolished by fire in 1932. The last remaining gunpowder mill at the Royal Gunpowder Factory, Waltham Abbey, was damaged by a German parachute mine in 1941 and never reopened. This was followed by the closure and demolition of the gunpowder section at the Royal Ordnance Factory, ROF Chorley, at the end of World War II, and by the closure of ICI Nobel's Roslin gunpowder factory in 1954. This left ICI Nobel's Ardeer site in Scotland as the sole United Kingdom gunpowder factory; it too closed in October 1976. Since then gunpowder has been imported into the United Kingdom.
In the late 1970s and early 1980s, gunpowder was bought from Eastern Europe, particularly from what was then the German Democratic Republic and the former Yugoslavia. Gunpowder and gunpowder weapons were transmitted to India through the Mongol invasions of India. The Mongols were defeated by Alauddin Khalji of the Delhi Sultanate, and some of the Mongol soldiers remained in northern India after their conversion to Islam. It was written in the "Tarikh-i Firishta" (1606–1607) that Nasiruddin Mahmud, the ruler of the Delhi Sultanate, presented the envoy of the Mongol ruler Hulegu Khan with a dazzling pyrotechnics display upon his arrival in Delhi in 1258. Nasiruddin Mahmud sought to display his strength as a ruler and to ward off any Mongol attempt similar to the Siege of Baghdad (1258). Firearms known as "top-o-tufak" also existed in many Muslim kingdoms in India by as early as 1366. From then on the employment of gunpowder warfare in India was prevalent, with events such as the Siege of Belgaum in 1473 by Sultan Muhammad Shah Bahmani. The shipwrecked Ottoman admiral Seydi Ali Reis is known to have introduced the earliest type of matchlock weapons, which the Ottomans used against the Portuguese during the Siege of Diu (1531). After that, a diverse variety of firearms, large guns in particular, became visible in Tanjore, Dacca, Bijapur, and Murshidabad. Guns made of bronze were recovered from Calicut (1504), the former capital of the Zamorins. The Mughal emperor Akbar mass-produced matchlocks for the Mughal Army. Akbar is personally known to have shot a leading Rajput commander during the Siege of Chittorgarh. The Mughals began to use bamboo rockets (mainly for signalling) and employ sappers: special units that undermined heavy stone fortifications to plant gunpowder charges. The Mughal emperor Shah Jahan is known to have introduced much more advanced matchlocks, whose designs were a combination of Ottoman and Mughal designs. Shah Jahan also countered the British and other Europeans in his province of Gujarāt, which supplied Europe with saltpeter for use in gunpowder warfare during the 17th century. Bengal and Mālwa participated in saltpeter production. The Dutch, French, Portuguese, and English used Chhapra as a center of saltpeter refining. From the founding of the Sultanate of Mysore by Hyder Ali, French military officers were employed to train the Mysore Army. Hyder Ali and his son Tipu Sultan were the first to introduce modern cannons and muskets; their army was also the first in India to have official uniforms. During the Second Anglo-Mysore War, Hyder Ali and Tipu Sultan unleashed the Mysorean rockets against their British opponents, effectively defeating them on various occasions. The Mysorean rockets inspired the development of the Congreve rocket, which the British widely utilized during the Napoleonic Wars and the War of 1812. The Javanese Majapahit Empire was arguably able to encompass much of modern-day Indonesia due to its unique mastery of bronze-smithing and use of a central arsenal fed by a large number of cottage industries within the immediate region. Documentary and archeological evidence indicate that Arab traders introduced gunpowder, gonnes, muskets, blunderbusses, and cannons to the Javanese, Acehnese, and Batak via long-established commercial trade routes around the early to mid-14th century. The resurgent Singhasari Empire overtook Sriwijaya and later emerged as the Majapahit, whose warfare featured the use of firearms and cannonade.
Cannons were introduced to Majapahit when Kublai Khan's Chinese army under the leadership of Ike Mese sought to invade Java in 1293. The History of Yuan mentions that the Mongols used cannons (Chinese: "pao") against Daha forces. Javanese bronze breech-loaded swivel guns, known as cetbang, or erroneously as lantaka, were used widely by the Majapahit navy as well as by pirates and rival lords. Following the decline of the Majapahit, particularly after the Paregreg civil war (1404–1406), the consequent decline in demand for gunpowder weapons caused many weapon makers and bronze-smiths to move to Brunei, Sumatra, Malaysia and the Philippines, leading to widespread use, especially in the Makassar Strait. This led to the near-universal use of the swivel gun and cannons in the Nusantara archipelago. Even though knowledge of making gunpowder-based weapons had existed since the failed Mongol invasion of Java, and the predecessor of firearms, the pole gun (bedil tombak), was recorded as being used by Java in 1413, knowledge of making "true" firearms came much later, after the middle of the 15th century. It was brought by the Islamic nations of West Asia, most probably the Arabs. The precise year of introduction is unknown, but it may be safely concluded to be no earlier than 1460. Before the arrival of the Portuguese in Southeast Asia, the natives already possessed primitive firearms, the Java arquebus. Portuguese influence on local weaponry, particularly after the capture of Malacca (1511), resulted in a new type of hybrid matchlock firearm, the istinggar. Portuguese and Spanish invaders were unpleasantly surprised and even outgunned on occasion. Circa 1540, the Javanese, always alert for new weapons, found the newly arrived Portuguese weaponry superior to the locally made variants. Majapahit-era cetbang cannons were further improved and used in the Demak Sultanate period during the Demak invasion of Portuguese Malacca. During this period, the iron for manufacturing Javanese cannons was imported from Khorasan in northern Persia. The material was known to the Javanese as "wesi kurasani" (Khorasan iron). When the Portuguese came to the archipelago, they referred to the gun as "Berço", a term also used for any breech-loading swivel gun, while the Spaniards called it "Verso". By the early 16th century, the Javanese were already producing large guns locally, some of which survive to the present day and are dubbed "sacred cannons" or "holy cannons". These cannons varied from 180- to 260-pounders, weighing anywhere between 3 and 8 tons, with lengths of between 3 and 6 m. Saltpeter harvesting was recorded by Dutch and German travelers as being common in even the smallest villages and was collected from the decomposition process of large dung hills specifically piled for the purpose. The Dutch punishment for possession of non-permitted gunpowder appears to have been amputation. Ownership and manufacture of gunpowder were later prohibited by the colonial Dutch occupiers. According to Colonel McKenzie, quoted in Sir Thomas Stamford Raffles's "The History of Java" (1817), the purest sulfur was supplied from the crater of a mountain near the straits of Bali. On the origins of gunpowder technology, the historian Tonio Andrade remarked, "Scholars today overwhelmingly concur that the gun was invented in China."
Gunpowder and the gun are widely believed by historians to have originated in China because there is a large body of evidence documenting the evolution of the gun from the Chinese fire lance to a metal gun, and the evolution of gunpowder from a medicine to an incendiary and an explosive, whereas similar records do not exist in Europe. As Andrade explains, the large amount of variation in gunpowder recipes in China relative to Europe is "evidence of experimentation in China, where gunpowder was at first used as an incendiary and only later became an explosive and a propellant... in contrast, formulas in Europe diverged only very slightly from the ideal proportions for use as an explosive and a propellant, suggesting that gunpowder was introduced as a mature technology." However, the history of gunpowder is not without controversy. A major problem confronting the study of early gunpowder history is the lack of ready access to sources close to the events described. Often the first records potentially describing use of gunpowder in warfare were written several centuries after the fact, and may well have been colored by the contemporary experiences of the chronicler. Translation difficulties have led to errors or loose interpretations bordering on artistic licence. Ambiguous language can make it difficult to distinguish gunpowder weapons from similar technologies that do not rely on gunpowder. A commonly cited example is a report of the Battle of Mohi in Eastern Europe that mentions a "long lance" sending forth "evil-smelling vapors and smoke", which has been variously interpreted by different historians as the "first gas attack upon European soil" using gunpowder, "the first use of cannon in Europe", or merely a "toxic gas" with no evidence of gunpowder. It is difficult to accurately translate original Chinese alchemical texts, which tend to explain phenomena through metaphor, into modern scientific language with rigidly defined terminology in English. Early texts potentially mentioning gunpowder are sometimes marked by a linguistic process in which semantic change occurred. For instance, the Arabic word "naft" transitioned from denoting naphtha to denoting gunpowder, and the Chinese word "pào" changed in meaning from catapult to cannon. This has led to arguments on the exact origins of gunpowder based on etymological foundations. The science and technology historian Bert S. Hall observes that, "It goes without saying, however, that historians bent on special pleading, or simply with axes of their own to grind, can find rich material in these terminological thickets." Another major area of contention in modern studies of the history of gunpowder is the question of its transmission. While the literary and archaeological evidence supports a Chinese origin for gunpowder and guns, the manner in which gunpowder technology was transferred from China to the West is still under debate. It is unknown why the rapid spread of gunpowder technology across Eurasia took place over several decades whereas other technologies, such as paper, the compass, and printing, did not reach Europe until centuries after they were invented in China. For the most powerful black powder, known as meal powder, a wood charcoal is used. The best wood for the purpose is Pacific willow, but others such as alder or buckthorn can be used. In Great Britain, between the 15th and 19th centuries, charcoal from alder buckthorn was greatly prized for gunpowder manufacture; cottonwood was used by the American Confederate States.
The ingredients are reduced in particle size and mixed as intimately as possible. Originally, this was done with a mortar and pestle or a similarly operating stamping mill, using copper, bronze or other non-sparking materials, until supplanted by the rotating ball mill principle with non-sparking bronze or lead. Historically, a marble or limestone edge runner mill, running on a limestone bed, was used in Great Britain; however, by the mid-19th century this had changed to either an iron-shod stone wheel or a cast iron wheel running on an iron bed. The mix was dampened with alcohol or water during grinding to prevent accidental ignition. This also helps the extremely soluble saltpeter to mix into the microscopic pores of the very high surface-area charcoal. Around the late 14th century, European powdermakers first began adding liquid during grinding to improve mixing, reduce dust, and with it the risk of explosion. The powder-makers would then shape the resulting paste of dampened gunpowder, known as mill cake, into corns, or grains, to dry. Not only did corned powder keep better because of its reduced surface area, but gunners also found that it was more powerful and easier to load into guns. Before long, powder-makers standardized the process by forcing mill cake through sieves instead of corning powder by hand. The improvement was based on reducing the surface area of a higher-density composition. At the beginning of the 19th century, makers increased density further by static pressing. They shoveled damp mill cake into a two-foot square box, placed this beneath a screw press and reduced its volume. "Press cake" had the hardness of slate. They broke the dried slabs with hammers or rollers, and sorted the granules with sieves into different grades. In the United States, Eleuthere Irenee du Pont, who had learned the trade from Lavoisier, tumbled the dried grains in rotating barrels to round the edges and increase durability during shipping and handling. (Sharp grains rounded off in transport, producing fine "meal dust" that changed the burning properties.) Another advance was the manufacture of kiln charcoal by distilling wood in heated iron retorts instead of burning it in earthen pits. Controlling the temperature influenced the power and consistency of the finished gunpowder. In 1863, in response to high prices for Indian saltpeter, DuPont chemists developed a process using potash or mined potassium chloride to convert plentiful Chilean sodium nitrate to potassium nitrate. The following year (1864), the Gatebeck Low Gunpowder Works in Cumbria (Great Britain) started a plant to manufacture potassium nitrate by essentially the same chemical process. This is nowadays called the 'Wakefield Process', after the owners of the company. It would have used potassium chloride from the Staßfurt mines, near Magdeburg, Germany, which had recently become available in industrial quantities. During the 18th century, gunpowder factories became increasingly dependent on mechanical energy. Despite mechanization, production difficulties related to humidity control, especially during the pressing, were still present in the late 19th century. A paper from 1885 laments that "Gunpowder is such a nervous and sensitive spirit, that in almost every process of manufacture it changes under our hands as the weather changes." Pressing times to the desired density could vary by a factor of three depending on the atmospheric humidity.
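The potash conversion of Chilean sodium nitrate described above presumably rests on a simple double-displacement reaction, NaNO3 + KCl -> KNO3 + NaCl, which exploits the very different solubility behaviour of the two product salts. The sketch below is an assumption-laden illustration of the stoichiometry (standard molar masses, complete conversion), not a description of the actual DuPont or Wakefield plant practice.

```python
# Illustrative stoichiometry for converting Chilean sodium nitrate to potassium nitrate.
# Assumes the simple metathesis NaNO3 + KCl -> KNO3 + NaCl and complete conversion;
# the historical process details are not given in the text.
M = {"NaNO3": 85.0, "KCl": 74.6, "KNO3": 101.1, "NaCl": 58.4}  # g/mol, approximate

def convert(tonnes_nano3):
    mol = tonnes_nano3 * 1e6 / M["NaNO3"]  # moles of NaNO3 (1 t = 1e6 g)
    return {"KCl needed (t)":     mol * M["KCl"]  / 1e6,
            "KNO3 yielded (t)":   mol * M["KNO3"] / 1e6,
            "NaCl byproduct (t)": mol * M["NaCl"] / 1e6}

print(convert(1.0))  # per tonne of Chile saltpeter: roughly 0.88 t KCl -> 1.19 t KNO3
```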
The term "black powder" was coined in the late 19th century, primarily in the United States, to distinguish prior gunpowder formulations from the new smokeless powders and semi-smokeless powders. Semi-smokeless powders featured bulk volume properties that approximated black powder, but had significantly reduced amounts of smoke and combustion products. Smokeless powder has different burning properties (pressure vs. time) and can generate higher pressures and work per gram. This can rupture older weapons designed for black powder. Smokeless powders ranged in color from brownish tan to yellow to white. Most of the bulk semi-smokeless powders ceased to be manufactured in the 1920s. Black powder is a granular mixture of Potassium nitrate is the most important ingredient in terms of both bulk and function because the combustion process releases oxygen from the potassium nitrate, promoting the rapid burning of the other ingredients. To reduce the likelihood of accidental ignition by static electricity, the granules of modern black powder are typically coated with graphite, which prevents the build-up of electrostatic charge. Charcoal does not consist of pure carbon; rather, it consists of partially pyrolyzed cellulose, in which the wood is not completely decomposed. Carbon differs from ordinary charcoal. Whereas charcoal's autoignition temperature is relatively low, carbon's is much greater. Thus, a black powder composition containing pure carbon would burn similarly to a match head, at best. The current standard composition for the black powders that are manufactured by pyrotechnicians was adopted as long ago as 1780. Proportions by weight are 75% potassium nitrate (known as saltpeter or saltpetre), 15% softwood charcoal, and 10% sulfur. These ratios have varied over the centuries and by country, and can be altered somewhat depending on the purpose of the powder. For instance, power grades of black powder, unsuitable for use in firearms but adequate for blasting rock in quarrying operations, are called blasting powder rather than gunpowder with standard proportions of 70% nitrate, 14% charcoal, and 16% sulfur; blasting powder may be made with the cheaper sodium nitrate substituted for potassium nitrate and proportions may be as low as 40% nitrate, 30% charcoal, and 30% sulfur. In 1857, Lammot du Pont solved the main problem of using cheaper sodium nitrate formulations when he patented DuPont "B" blasting powder. After manufacturing grains from press-cake in the usual way, his process tumbled the powder with graphite dust for 12 hours. This formed a graphite coating on each grain that reduced its ability to absorb moisture. Neither the use of graphite nor sodium nitrate was new. Glossing gunpowder corns with graphite was already an accepted technique in 1839, and sodium nitrate-based blasting powder had been made in Peru for many years using the sodium nitrate mined at Tarapacá (now in Chile). Also, in 1846, two plants were built in south-west England to make blasting powder using this sodium nitrate. The idea may well have been brought from Peru by Cornish miners returning home after completing their contracts. Another suggestion is that it was William Lobb, the planthunter, who recognised the possibilities of sodium nitrate during his travels in South America. Lammot du Pont would have known about the use of graphite and probably also knew about the plants in south-west England. 
In his patent he was careful to state that his claim was for the combination of graphite with sodium nitrate-based powder, rather than for either of the two individual technologies. French war powder in 1879 used the ratio 75% saltpeter, 12.5% charcoal, 12.5% sulfur. English war powder in 1879 used the ratio 75% saltpeter, 15% charcoal, 10% sulfur. The British Congreve rockets used 62.4% saltpeter, 23.2% charcoal and 14.4% sulfur, but the British Mark VII gunpowder was changed to 65% saltpeter, 20% charcoal and 15% sulfur. The explanation for the wide variety in formulation relates to usage. Powder used for rocketry can use a slower burn rate since it accelerates the projectile for a much longer time, whereas powders for weapons such as flintlocks, cap-locks, or matchlocks need a higher burn rate to accelerate the projectile over a much shorter distance. Cannons usually used lower burn-rate powders, because most would burst with higher burn-rate powders. In the First Opium War, the mixture for Qing Chinese gunpowder contained a high ratio of charcoal, which gave it high stability and a longer shelf life but generated less kinetic energy when ignited, decreasing range and accuracy. In comparison, the mixture for British gunpowder contained a higher ratio of sulfur, allowing the powder to burn faster and thus generate more kinetic energy. The original dry-compounded powder used in 15th-century Europe was known as "Serpentine", either a reference to Satan or to a common artillery piece that used it. The ingredients were ground together with a mortar and pestle, perhaps for 24 hours, resulting in a fine flour. Vibration during transportation could cause the components to separate again, requiring remixing in the field. Also, if the quality of the saltpeter was low (for instance if it was contaminated with highly hygroscopic calcium nitrate), or if the powder was simply old (due to the mildly hygroscopic nature of potassium nitrate), in humid weather it would need to be re-dried. The dust from "repairing" powder in the field was a major hazard. Loading cannons or bombards before the powder-making advances of the Renaissance was a skilled art. Fine powder loaded haphazardly or too tightly would burn incompletely or too slowly. Typically, the breech-loading powder chamber in the rear of the piece was filled only about half full, the serpentine powder neither too compressed nor too loose, a wooden bung pounded in to seal the chamber from the barrel when assembled, and the projectile placed on top. A carefully determined empty space was necessary for the charge to burn effectively. When the cannon was fired through the touchhole, turbulence from the initial surface combustion caused the rest of the powder to be rapidly exposed to the flame. The advent of much more powerful and easier-to-use "corned" powder changed this procedure, but serpentine was used with older guns into the 17th century. For propellants to oxidize and burn rapidly and effectively, the combustible ingredients must be reduced to the smallest possible particle sizes, and be as thoroughly mixed as possible. Once mixed, however, for better results in a gun, makers discovered that the final product should be in the form of individual dense grains that spread the fire quickly from grain to grain, much as straw or twigs catch fire more quickly than a pile of sawdust.
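The national formulations quoted above can be tabulated directly; the short Python sketch below simply records them (saltpeter, charcoal, sulfur, in percent, as given in the text), confirms that each sums to 100%, and measures how far each departs from the modern 75:15:10 proportions.

```python
# Late-19th-century service formulations quoted in the text: (saltpeter, charcoal, sulfur) in %.
formulations = {
    "French war powder, 1879":  (75.0, 12.5, 12.5),
    "English war powder, 1879": (75.0, 15.0, 10.0),
    "British Congreve rockets": (62.4, 23.2, 14.4),
    "British Mark VII":         (65.0, 20.0, 15.0),
}
ideal = (75.0, 15.0, 10.0)  # modern ideal: 75% saltpeter, 15% charcoal, 10% sulfur

for name, mix in formulations.items():
    assert abs(sum(mix) - 100.0) < 1e-9, name  # sanity check: percentages sum to 100
    deviation = sum(abs(a - b) for a, b in zip(mix, ideal))
    print(f"{name:26s} total deviation from ideal: {deviation:.1f} points")
```

Unsurprisingly, the rocket composition sits furthest from the propellant ideal, consistent with the slower burn rate the text says rocketry tolerates.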
Because the dry powdered ingredients must be mixed and bonded together for extrusion and cut into grains to maintain the blend, size reduction and mixing is done while the ingredients are damp, usually with water. After 1800, instead of forming grains by hand or with sieves, the damp "mill-cake" was pressed in molds to increase its density and extract the liquid, forming "press-cake". The pressing took varying amounts of time, depending on conditions such as atmospheric humidity. The hard, dense product was broken again into tiny pieces, which were separated with sieves to produce a uniform product for each purpose: coarse powders for cannons, finer-grained powders for muskets, and the finest for small hand guns and priming. Inappropriately fine-grained powder often caused cannons to burst before the projectile could move down the barrel, due to the high initial spike in pressure. "Mammoth" powder with large grains, made for Rodman's 15-inch cannon, reduced the pressure to only 20 percent of what ordinary cannon powder would have produced. In the mid-19th century, measurements were made determining that the burning rate within a grain of black powder (or a tightly packed mass) is about 6 cm/s (0.20 feet/s), while the rate of ignition propagation from grain to grain is around 9 m/s (30 feet/s), over two orders of magnitude faster. Modern corning first compresses the fine black powder meal into blocks with a fixed density (1.7 g/cm³). In the United States, gunpowder grains were designated F (for fine) or C (for coarse). Grain diameter decreased with a larger number of Fs and increased with a larger number of Cs, ranging from about for 7F to for 7C. Even larger grains were produced for artillery bore diameters greater than about . The standard DuPont "Mammoth" powder developed by Thomas Rodman and Lammot du Pont for use during the American Civil War had grains averaging in diameter with edges rounded in a glazing barrel. Other versions had grains the size of golf and tennis balls for use in Rodman guns. In 1875 DuPont introduced "Hexagonal" powder for large artillery, which was pressed using shaped plates with a small center core—about diameter; like a wagon wheel nut, the center hole widened as the grain burned. By 1882 German makers also produced hexagonal grained powders of a similar size for artillery. By the late 19th century manufacturing focused on standard grades of black powder from Fg, used in large-bore rifles and shotguns, through FFg (medium and small-bore arms such as muskets and fusils), FFFg (small-bore rifles and pistols), and FFFFg (extreme small bore, short pistols and most commonly for priming flintlocks). A coarser grade for use in military artillery blanks was designated A-1. These grades were sorted on a system of screens, with oversize retained on a mesh of 6 wires per inch, A-1 retained on 10 wires per inch, Fg retained on 14, FFg on 24, FFFg on 46, and FFFFg on 60. Fines designated FFFFFg were usually reprocessed to minimize explosive dust hazards. In the United Kingdom, the main service gunpowders were classified RFG (rifle grained fine), with a diameter of one or two millimeters, and RLG (rifle grained large), for grain diameters between two and six millimeters. Gunpowder grains can alternatively be categorized by mesh size, the designation being the smallest BSS sieve mesh size that retains no grains. Recognized grain sizes are Gunpowder G 7, G 20, G 40, and G 90.
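The American grading scheme just described maps each grade to the sieve that retains it; a small lookup table (figures from the text) makes the relationship explicit, and the same numbers confirm that grain-to-grain ignition propagation (about 9 m/s) outpaces burning within a grain (about 6 cm/s) by a factor of roughly 150.

```python
# US black-powder grades and the sieve (wires per inch) on which each was retained,
# per the figures in the text; larger mesh numbers correspond to finer grains.
retaining_sieve = {
    "oversize": 6,
    "A-1":      10,  # coarse grade for artillery blanks
    "Fg":       14,  # large-bore rifles and shotguns
    "FFg":      24,  # muskets and fusils
    "FFFg":     46,  # small-bore rifles and pistols
    "FFFFg":    60,  # priming flintlocks
}

ignition_propagation = 9.0   # m/s, grain to grain
burn_rate_in_grain   = 0.06  # m/s (6 cm/s), within a grain
print(sorted(retaining_sieve.items(), key=lambda kv: kv[1]))
print(f"propagation/burn ratio is about {ignition_propagation / burn_rate_in_grain:.0f}x")  # ~150x
```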
Owing to the large market for antique and replica black-powder firearms in the US, modern gunpowder substitutes like Pyrodex, Triple Seven and Black Mag3 pellets have been developed since the 1970s. These products, which should not be confused with smokeless powders, aim to produce less fouling (solid residue), while maintaining the traditional volumetric measurement system for charges. Claims that these products are less corrosive have been controversial, however. New cleaning products for black-powder guns have also been developed for this market. Besides black powder, there are other historically important types of gunpowder. "Brown gunpowder" is cited as composed of 79% nitre, 3% sulfur, and 18% charcoal per 100 parts of dry powder, with about 2% moisture. Prismatic Brown Powder is a large-grained product that the Rottweil Company introduced in 1884 in Germany, and which was adopted by the British Royal Navy shortly thereafter. The French navy adopted a fine, 3.1-millimeter, non-prismatic grained product called "Slow Burning Cocoa" (SBC) or "cocoa powder". These brown powders reduced the burning rate even further by using as little as 2 percent sulfur and using charcoal made from rye straw that had not been completely charred, hence the brown color. Lesmok powder was a product developed by DuPont in 1911, one of several semi-smokeless products in the industry containing a mixture of black and nitrocellulose powder. It was sold to Winchester and others primarily for .22 and .32 small calibers. Its advantage was that it was believed at the time to be less corrosive than smokeless powders then in use. It was not understood in the U.S. until the 1920s that the actual source of corrosion was the potassium chloride residue from potassium chlorate-sensitized primers. The bulkier black powder fouling better disperses primer residue. Failure to mitigate primer corrosion by dispersion caused the false impression that nitrocellulose-based powder caused corrosion. Lesmok had some of the bulk of black powder for dispersing primer residue, but somewhat less total bulk than straight black powder, thus requiring less frequent bore cleaning. It was last sold by Winchester in 1947. The development of smokeless powders, such as cordite, in the late 19th century created the need for a spark-sensitive priming charge, such as gunpowder. However, the sulfur content of traditional gunpowders caused corrosion problems with Cordite Mk I, and this led to the introduction of a range of sulfur-free gunpowders of varying grain sizes. They typically contain 70.5 parts of saltpeter and 29.5 parts of charcoal. Like black powder, they were produced in different grain sizes. In the United Kingdom, the finest grain was known as "sulfur-free mealed powder" ("SMP"). Coarser grains were numbered as sulfur-free gunpowder (SFG n): 'SFG 12', 'SFG 20', 'SFG 40' and 'SFG 90', for example, where the number represents the smallest BSS sieve mesh size that retained no grains. Sulfur's main role in gunpowder is to decrease the ignition temperature. A sample reaction for sulfur-free gunpowder can be written, but gunpowder does not burn as a single reaction, so the byproducts are not easily predicted. One study showed that it produced (in order of descending quantities) 55.91% solid products (potassium carbonate, potassium sulfate, potassium sulfide, sulfur, potassium nitrate, potassium thiocyanate, carbon and ammonium carbonate), 42.98% gaseous products (carbon dioxide, nitrogen, carbon monoxide, hydrogen sulfide, hydrogen and methane), and 1.11% water.
However, simplified equations have been cited. A simple, commonly cited chemical equation for the combustion of black powder is 2 KNO₃ + S + 3 C → K₂S + N₂ + 3 CO₂. A balanced, but still simplified, equation is 10 KNO₃ + 3 S + 8 C → 2 K₂CO₃ + 3 K₂SO₄ + 6 CO₂ + 5 N₂. Black powder made with the less expensive and more plentiful sodium nitrate (in appropriate proportions) works just as well, and the previous equations apply with sodium in place of potassium. However, it is more hygroscopic than powders made from potassium nitrate, popularly known as saltpeter. Because "corned" black powder grains made with saltpeter are less affected by moisture in the air, they can be stored unsealed without degradation by humidity. Muzzleloaders have been known to fire after hanging on a wall for decades in a loaded state, provided they remained dry. By contrast, black powder made with sodium nitrate must be kept sealed to remain stable. The matchlock musket or pistol (an early gun ignition system), as well as the flintlock, would often be unusable in wet weather because the powder in the pan was exposed and dampened. Gunpowder releases 3 megajoules per kilogram and contains its own oxidant. This is lower than TNT (4.7 megajoules per kilogram) or gasoline (47.2 megajoules per kilogram; gasoline requires an oxidant, so an optimized gasoline and O2 mixture contains 10.4 megajoules per kilogram). Black powder also has a low energy density compared to modern "smokeless" powders, and thus to achieve high energy loadings, large amounts of black powder are needed with heavy projectiles. Gunpowder is a low explosive; that is, it does not detonate but rather deflagrates (burns quickly). This is an advantage in a propellant device, where one does not desire a shock that would shatter the gun and potentially harm the operator; however, it is a drawback when an explosion is wanted. In that case, gunpowder (and most importantly, gases produced by its burning) must be confined. Since it contains its own oxidizer and additionally burns faster under pressure, its combustion is capable of bursting containers such as shell, grenade, or improvised "pipe bomb" or "pressure cooker" casings to form shrapnel. In quarrying, high explosives are generally preferred for shattering rock. However, because of its low brisance, black powder causes fewer fractures and results in more usable stone compared to other explosives, making it useful for blasting slate, which is fragile, or monumental stone such as granite and marble. Black powder is well suited for blank rounds, signal flares, burst charges, and rescue-line launches. Black powder is also used in fireworks for lifting shells, in rockets as fuel, and in certain special effects. As seen above, combustion converts less than half the mass of black powder to gas; most of it turns into particulate matter. Some of it is ejected, wasting propelling power, fouling the air, and generally being a nuisance (giving away a soldier's position, generating fog that hinders vision, etc.). Some of it ends up as a thick layer of soot inside the barrel, where it is also a nuisance for subsequent shots and a cause of jamming in automatic weapons. Moreover, this residue is hygroscopic, and with the addition of moisture absorbed from the air it forms a corrosive substance. The soot contains potassium oxide or sodium oxide, which turns into potassium hydroxide or sodium hydroxide and corrodes wrought iron or steel gun barrels. Black powder arms must therefore be well cleaned after use, both inside and out, to remove the residue. 
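The specific-energy comparison made earlier in this paragraph can be made concrete with a short script. The figures are the ones quoted above (in megajoules per kilogram); the dictionary keys are only descriptive labels for this illustration.

```python
# Specific energies quoted above, in megajoules per kilogram (MJ/kg).
ENERGY_MJ_PER_KG = {
    "black powder (contains its own oxidant)": 3.0,
    "TNT": 4.7,
    "gasoline (fuel alone, needs an external oxidant)": 47.2,
    "optimized gasoline + O2 mixture": 10.4,
}

baseline = ENERGY_MJ_PER_KG["black powder (contains its own oxidant)"]
for name, energy in ENERGY_MJ_PER_KG.items():
    print(f"{name}: {energy} MJ/kg ({energy / baseline:.1f}x black powder)")
```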
The United Nations Model Regulations on the Transportation of Dangerous Goods and national transportation authorities, such as the United States Department of Transportation, have classified gunpowder (black powder) as a "Group A: Primary explosive substance" for shipment because it ignites so easily. Complete manufactured devices containing black powder, such as fireworks and class D model rocket engines, are usually classified for shipment as "Group D: Secondary detonating substance, or black powder, or article containing secondary detonating substance" because they are harder to ignite than loose powder. As explosives, they all fall into the category of Class 1. Besides its use as a propellant in firearms and artillery, black powder's other main use has been as a blasting powder in quarrying, mining, and road construction (including railroad construction). During the 19th century, outside of war emergencies such as the Crimean War or the American Civil War, more black powder was used for these industrial purposes than in firearms and artillery. But dynamite gradually replaced it for those uses. Today industrial explosives for such uses are still a huge market, but most of the market is in newer explosives rather than black powder. Beginning in the 1930s, gunpowder or smokeless powder was used in rivet guns, stun guns for animals, cable splicers and other industrial construction tools. The "stud gun" drove nails or screws into solid concrete, a function not possible with hydraulic tools. Today powder-actuated tools are still an important part of various industries, but the cartridges usually use smokeless powders. Industrial shotguns have been used to eliminate persistent material rings in operating rotary kilns (such as those for cement, lime, phosphate, etc.) and clinker in operating furnaces, and commercial tools make the method more reliable. Gunpowder has also occasionally been employed for other purposes besides weapons, mining, and construction.
https://en.wikipedia.org/wiki?curid=12737
Gary Kildall Gary Arlen Kildall (May 19, 1942 – July 11, 1994) was an American computer scientist and microcomputer entrepreneur who created the CP/M operating system and founded Digital Research, Inc. (DRI). Kildall was one of the first people to see microprocessors as fully capable computers, rather than equipment controllers, and to organize a company around this concept. He also co-hosted the PBS TV show "Computer Chronicles". Although his career in computing spanned more than two decades, he is mainly remembered in connection with IBM's unsuccessful attempt in 1980 to license CP/M for the IBM Personal Computer. Gary Kildall was born and grew up in Seattle, Washington, where his family operated a seamanship school. His father, Joseph Kildall, was a captain of Norwegian heritage. His mother, Emma, was of half-Swedish descent: Gary's grandmother was born in Långbäck, Sweden, in Skellefteå Municipality, but emigrated to Canada at 23 years of age. Gary attended the University of Washington (UW) hoping to become a mathematics teacher, but became increasingly interested in computer technology. After receiving his degree, he fulfilled a draft obligation to the United States Navy by teaching at the Naval Postgraduate School (NPS) in Monterey, California. Living within an hour's drive of Silicon Valley, Kildall heard about the first commercially available microprocessor, the Intel 4004. He bought one of the processors and began writing experimental programs for it. To learn more about the processors, he worked at Intel as a consultant on his days off. Kildall briefly returned to UW and finished his doctorate in computer science in 1972, then resumed teaching at NPS. He published a paper that introduced the theory of data-flow analysis used today in optimizing compilers (sometimes known as Kildall's method), and he continued to experiment with microcomputers and the emerging technology of floppy disks. Intel lent him systems using the 8008 and 8080 processors, and in 1973, he developed the first high-level programming language for microprocessors, called PL/M. For Intel he also wrote an 8080 instruction set simulator named INTERP/80. He created CP/M the same year to enable the 8080 to control a floppy drive, combining for the first time all the essential components of a computer at the microcomputer scale. He demonstrated CP/M to Intel, but Intel had little interest and chose to market PL/M instead. Kildall and his wife Dorothy established a company, originally called "Intergalactic Digital Research" (later renamed Digital Research, Inc.), to market CP/M through advertisements in hobbyist magazines. Digital Research licensed CP/M for the IMSAI 8080, a popular clone of the Altair 8800. As more manufacturers licensed CP/M, it became a de facto standard and had to support an increasing number of hardware variations. In response, Kildall pioneered the concept of a BIOS, a set of simple programs stored in the computer hardware (ROM or EPROM chip) that enabled CP/M to run on different systems without modification. CP/M's quick success took Kildall by surprise, and he was slow to update it for high-density floppy disks and hard disk drives. After hardware manufacturers talked about creating a rival operating system, Kildall started a rush project to develop CP/M 2. By 1981, at the peak of its popularity, CP/M ran on different computer models and DRI had  million in yearly revenues. IBM, presided over by John R. 
Opel, approached Digital Research in 1980, at Bill Gates' suggestion, to negotiate the purchase of a forthcoming version of CP/M called CP/M-86 for the IBM PC. Gary had left the negotiations to his wife, Dorothy, as he usually did, while he and Tom Rolander, a colleague and the developer of the MP/M operating system, used Gary's private airplane to deliver software to manufacturer Bill Godbout. Before the IBM representatives would explain the purpose of their visit, they insisted that Dorothy sign a non-disclosure agreement. On the advice of DRI attorney Gerry Davis, Dorothy refused to sign the agreement without Gary's approval. Gary returned in the afternoon and tried to move the discussion with IBM forward, but accounts disagree on whether he signed the non-disclosure agreement and on whether he ever met with the IBM representatives. Various reasons have been given for the two companies failing to reach an agreement. DRI, which had only a few products, might have been unwilling to sell its main product to IBM for a one-time payment rather than its usual royalty-based plan. Dorothy might have believed that the company could not deliver CP/M-86 on IBM's proposed schedule, as the company was busy developing an implementation of the PL/I programming language for Data General. It is also possible that the IBM representatives were annoyed that DRI had spent hours on what they considered a routine formality. According to Kildall, the IBM representatives took the same flight to Florida that night that he and Dorothy took for their vacation, and they negotiated further on the flight, reaching a handshake agreement. IBM lead negotiator Jack Sams insisted that he never met Gary, and one IBM colleague has confirmed that Sams said so at the time. He accepted that someone else in his group might have been on the same flight, but noted that he flew back to Seattle to talk with Microsoft again. Sams related the story to Gates, who had already agreed to provide a BASIC interpreter and several other programs for the PC. Gates' impression of the story was that Gary capriciously "went flying", as he would later tell reporters. Sams left Gates with the task of finding a usable operating system, and a few weeks later he proposed using 86-DOS, an independently developed operating system from Seattle Computer Products (SCP) that implemented Kildall's CP/M API. Paul Allen negotiated a licensing deal with SCP. Allen had 86-DOS adapted for IBM's hardware, and IBM shipped it as IBM PC DOS. Kildall obtained a copy of PC DOS, examined it, and concluded that it infringed on CP/M. When he asked Gerry Davis what legal options were available, Davis told him that intellectual property law for software was not clear enough to sue. Instead, Kildall only threatened IBM with legal action, and IBM responded with a proposal to offer CP/M-86 as an option for the PC in return for a release of liability. Kildall accepted, believing that IBM's new system (like its previous personal computers) would not be a significant commercial success. When the IBM PC was introduced, IBM sold its operating system as an unbundled option. One of the operating system options was PC DOS, priced at . PC DOS was seen as a practically necessary option; most software titles required it, and without it the IBM PC was limited to its built-in Cassette BASIC. CP/M-86 shipped a few months later, six times more expensive at , but sold poorly against DOS and enjoyed far less software support. 
With the loss of the IBM deal, Gary and Dorothy found themselves under pressure to bring in more experienced management, and Gary's influence over the company waned. He worked on various experimental and research projects, such as a version of CP/M with multitasking (MP/M) and an implementation of the Logo programming language. He hoped that Logo, an educational dialect of LISP, would supplant BASIC in education, but it did not. After seeing a demonstration of the Apple Lisa, Kildall oversaw the creation of DRI's own graphical user interface, called GEM. Kildall resigned as CEO of Digital Research on 28 June 1985, but remained chairman of the board. Novell acquired DRI in 1991 in a deal that netted millions for Kildall. Kildall also pursued computing-related projects outside DRI. For seven years, from 1983 to 1990, he co-hosted a public television program on the side, called "Computer Chronicles", that followed trends in personal computing. In 1984 he started another company, "Activenture", which adapted optical disc technology for computer use. In early 1985 it was renamed "KnowledgeSet", and in June 1985 it released the first computer encyclopedia, a CD-ROM version of Grolier's "Academic American Encyclopedia" named "The Electronic Encyclopedia"; the company was later acquired by Banta Corporation. Kildall's final business venture, known as "Prometheus Light and Sound" (PLS) and based in Austin, Texas, developed a home PBX system that integrated land-line telephones with mobile phones. Kildall's colleagues recall him as creative, easygoing, and adventurous. In addition to flying, he loved sports cars, auto racing, and boating, and he had a lifelong love of the sea. Although Kildall preferred to leave the IBM affair in the past and to be known for his work before and afterward, he continually faced comparisons between himself and Bill Gates, as well as fading memories of his contributions. A legend grew around the fateful IBM-DRI meeting, encouraged by Gates and various journalists, suggesting that Kildall had irresponsibly taken the day off for a recreational flight, and he became tired of constantly having to refute that story. In later years, he occasionally expressed private bitterness at being overshadowed by Microsoft. Kildall was annoyed when the University of Washington asked him, as a distinguished graduate, to attend its computer science program anniversary in 1992 but gave the keynote speech to Gates, a Harvard dropout. In response, he started writing his memoir, "Computer Connections". The memoir, which he distributed only to a few friends, expressed his frustration that people did not seem to value elegance in software, and it said of Gates, "He is divisive. He is manipulative. He is a user. He has taken much from me and the industry." In an appendix he called DOS "plain and simple theft" because its first 26 system calls worked the same as CP/M's. He accused IBM of contriving the price difference between PC DOS and CP/M-86 in order to marginalize CP/M. The journalist Harold Evans used the memoir as a source for a chapter about Kildall in the 2004 book "They Made America", concluding that Microsoft had robbed Kildall of his inventions. IBM veterans from the PC project disputed the book's description of events, and Microsoft described it as "one-sided and inaccurate". In August 2016, Kildall's family made the first part of his memoir available to the public. Selling DRI to Novell had made Kildall a wealthy man, and he moved to the West Lake Hills suburb of Austin. 
His Austin house was a lakeside property with stalls for several sports cars and a video studio in the basement. Kildall owned and flew his own Learjet and had at least one boat on the lake. While in Austin he also participated in volunteer efforts to assist children with HIV/AIDS. He owned a mansion with a panoramic ocean view in Pebble Beach, California, near the headquarters of DRI. On July 8, 1994, Kildall fell at a Monterey, California, biker bar and hit his head. The exact circumstances of the injury remain unclear. He had been an alcoholic in his later years. Various sources have claimed he fell from a chair, fell down steps, or was assaulted because he had walked into the Franklin Street Bar & Grill wearing Harley-Davidson leathers. He checked in and out of the hospital twice, and died three days later at the Community Hospital of the Monterey Peninsula. An autopsy the next day did not conclusively determine a cause of death. A CP/M Usenet FAQ says he was concussed from the fall and died of a heart attack; the connection between the two is unclear. He is buried in Evergreen Washelli Memorial Park in north Seattle. Following the announcement of Kildall's death, Bill Gates commented that he was "one of the original pioneers of the PC revolution" and "a very creative computer scientist who did excellent work. Although we were competitors, I always had tremendous respect for his contributions to the PC industry. His untimely death was very unfortunate and his work will be missed." In March 1995, Kildall was posthumously honored by the Software Publishers Association (SPA) for his contributions to the microcomputer industry. In April 2014, the city of Pacific Grove installed a commemorative plaque outside Kildall's former residence, which also served as the early headquarters of Digital Research.
https://en.wikipedia.org/wiki?curid=12740
Gesta Danorum Gesta Danorum ("Deeds of the Danes") is a patriotic work of Danish history by the 12th-century author Saxo Grammaticus ("Saxo the Literate", literally "the Grammarian"). It is the most ambitious literary undertaking of medieval Denmark and is an essential source for the nation's early history. It is also one of the oldest known written documents about the history of Estonia and Latvia. Consisting of sixteen books written in Latin on the invitation of Archbishop Absalon, "Gesta Danorum" describes Danish history and to some degree Scandinavian history in general, from prehistory to the late 12th century. In addition, "Gesta Danorum" offers singular reflections on European affairs in the High Middle Ages from a unique Scandinavian perspective, supplementing what has been handed down by historians from Western and Southern Europe. The sixteen books, in prose with an occasional excursion into poetry, can be categorized into two parts: Books 1-9, which deal with Norse mythology and semi-legendary Danish history, and Books 10-16, which deal with medieval history. Book 9 ends with Gorm the Old. The last three books (14-16), which describe Danish conquests on the south shore of the Baltic Sea and wars against Slavic peoples (the Northern Crusades), are very valuable for the history of West Slavic tribes (Polabian Slavs, Pomeranians) and Slavic paganism. Book 14 contains a unique description of the temple on the island of Rügen. The first book is mostly Saxo's original work, sharing little with other primary works, but taking some inspiration from Ancient Greek epics. It very briefly covers the rule of the eponymous founder of the Danish nation, Dan, as well as his sons Humble and Lother and Dan's grandson Skiod. Skiod's son Gram is the first Danish king to be given some detail; his reign revolves around conquering Sweden and Finland, and he eventually dies in battle. Most of the book deals with the adventures of Hading, the son of Gram and the Finnish princess Signe: Denmark is lost to the Swedes, and Hading tries to reclaim it with the help of giants and Odin. The book concludes with Hading's suicide after hearing of his friend's death. When exactly "Gesta Danorum" was written is the subject of numerous works; however, it is generally agreed that "Gesta Danorum" was not finished before 1208. The last event described in the last book (Book 16) is King Canute VI of Denmark subduing Pomerania under Duke Bogislaw I, in 1186. However, the preface of the work, dedicated to Archbishop Anders Sunesen, mentions the Danish conquest of the areas north of the Elbe in 1208. Book 14, comprising nearly one-quarter of the text of the entire work, ends with Absalon's appointment as archbishop in 1178. Since this book is so large and Absalon has greater importance in it than King Valdemar I, it may have been written first and originally constituted a work on its own. It is possible that Saxo then enlarged it with Books 15 and 16, telling the story of King Valdemar I's last years and King Canute VI's first years. It is believed that Saxo then wrote Books 11, 12, and 13. Svend Aagesen's history of Denmark, "Brevis Historia Regum Dacie" (circa 1186), states that Saxo had decided to write about "the king-father and his sons", that is, King Sweyn Estridson and his sons, who are covered in Books 11, 12, and 13. He would later add the first ten books. This would also explain the 22 years between the last event described in the last book (Book 16) and the 1208 event described in the preface. 
The original manuscripts of the work are lost, except for four fragments: the Angers Fragment, Lassen Fragment, Kall-Rasmussen Fragment and Plesner Fragment. The Angers Fragment is the biggest fragment, and the only one attested to be in Saxo's own handwriting. The others are copies from ca. 1275. All four fragments are in the collection of the Danish Royal Library in Copenhagen, Denmark. The text has, however, survived. In 1510–1512, Christiern Pedersen, a Danish translator working in Paris, searched Denmark high and low for an existing copy of Saxo's works, which by that time was all but lost. By that time most knowledge of Saxo's work came from a summary located in "Chronica Jutensis", from around 1342, called "Compendium Saxonis". It is also in this summary that the name "Gesta Danorum" is found. The title Saxo himself used for his work is unknown. Christiern Pedersen finally found a copy in the collection of Archbishop Birger Gunnersen of Lund, Skåne (Skåne is now part of Sweden, but at the time was still part of Denmark), which the archbishop gladly lent him. With the help of printer Jodocus Badius, "Gesta Danorum" was refined and printed. The first printed edition and the oldest known complete text of Saxo's works is Christiern Pedersen's Latin edition, printed and published by Jodocus Badius in Paris, France, on 15 March 1514 under the title "Danorum Regum heroumque Historiae" ("History of the Kings and heroes of the Danes"). The edition features the following colophon: ..."impressit in inclyta Parrhisorum academia Iodocus Badius Ascensius Idibus Martiis. MDXIIII. Supputatione Romana." (the Ides of March, 1514). The full front page reads (with abbreviations expanded) in Latin: "Danorum Regum heroumque Historiae stilo eleganti a Saxone Grammatico natione Zialandico necnon Roskildensis ecclesiae praeposito, abhinc supra trecentos annos conscriptae et nunc primum literaria serie illustratae tersissimeque impressae." In Danish: "De danske Kongers og Heltes Historie, skrevet i pyntelig Stil for over 300 Aar siden af Saxo Grammaticus, en Sjællandsfar og Provst ved Kirken i Roskilde, og nu for første Gang oplyst ved et Register og omhyggeligt trykt". In English: "Histories of the Kings and heroes of the Danes, composed in elegant style by Saxo Grammaticus, a Zealander and also provost of the church of Roskilde, over three hundred years ago, and now for the first time illustrated and printed correctly in a learned compilation." The source of all existing translations and new editions is Christiern Pedersen's Latin "Danorum Regum heroumque Historiae". There exist a number of different translations today, some complete, some partial; "Gesta Danorum" has also been partially translated in other English, French and German releases. Certain aspects of "Gesta Danorum" formed the basis for William Shakespeare's play "Hamlet". It is thought that Shakespeare never read "Gesta Danorum", and instead had access to an auxiliary version of the tale describing the downfall of Amleth, Prince of Denmark. Saxo's version, told in Books 3 and 4, is very similar to that of Shakespeare's "Hamlet". In Saxo's version, two brothers, Orvendil and Fengi, are given the rule over Jutland by King Rørik Slyngebond of the Danes. Soon after, Orvendil marries King Rørik's daughter, Geruth (Gertrude in "Hamlet"). Amleth is their first and only child. Fengi becomes resentful of his brother's marriage, and also wants sole leadership of Jutland, and therefore murders Orvendil. 
After a very brief period of mourning, Fengi marries Geruth and declares himself sole leader of Jutland. Eventually, Amleth avenges his father's murder by plotting and carrying out the killing of his uncle, making himself the new and rightful King of Jutland. However, while Hamlet dies in Shakespeare's version just after his uncle's death, in Saxo's version Amleth survives and begins ruling his kingdom, going on to other adventures.
https://en.wikipedia.org/wiki?curid=12741
Garrison Keillor Gary Edward "Garrison" Keillor (born August 7, 1942) is an American author, storyteller, humorist, voice actor, and radio personality. He is best known as the creator of the Minnesota Public Radio (MPR) show "A Prairie Home Companion" (called "Garrison Keillor's Radio Show" in some international syndication), which he hosted from 1974 to 2016. Keillor created the fictional Minnesota town Lake Wobegon, the setting of many of his books, including "Lake Wobegon Days" and "". Other creations include Guy Noir, a detective voiced by Keillor who appeared in "A Prairie Home Companion" comic skits. Keillor is also the creator of the five-minute daily radio/podcast program "The Writer's Almanac", which pairs one or two poems of his choice with a script about important literary, historical, and scientific events that coincided with that date in history. In November 2017, Minnesota Public Radio cut all business ties with Keillor after an allegation of inappropriate behavior with a freelance writer for "A Prairie Home Companion". On April 13, 2018, MPR and Keillor announced a settlement that allows archives of "A Prairie Home Companion" and "The Writer's Almanac" to be publicly available again, and soon thereafter, Keillor began publishing new episodes of "The Writer's Almanac" on his website. Keillor was born in Anoka, Minnesota, the son of Grace Ruth (née Denham) and John Philip Keillor. His father was a carpenter and postal worker who was half-Canadian with English ancestry; Keillor's paternal grandfather was from Kingston, Ontario. His maternal grandparents were Scottish emigrants from Glasgow. Keillor's family belonged to the Plymouth Brethren, an Evangelical Christian movement that he has since left. In 2006, he told "Christianity Today" that he was attending the St. John the Evangelist Episcopal church in Saint Paul, Minnesota, after previously attending a Lutheran church in New York. Keillor graduated from Anoka High School in 1960 and from the University of Minnesota with a bachelor's degree in English in 1966. During college, he began his broadcasting career on the student-operated radio station known today as Radio K. In his 2004 book "Homegrown Democrat: A Few Plain Thoughts from the Heart of America", Keillor mentions some of his noteworthy ancestors, including Joseph Crandall, who was an associate of Roger Williams, who founded Rhode Island and the first American Baptist church; and Prudence Crandall, who founded the first African-American women's school in America. Garrison Keillor started his professional radio career in November 1969 with Minnesota Educational Radio (MER), later Minnesota Public Radio (MPR), which today distributes programs under the American Public Media (APM) brand. He hosted a weekday drive-time broadcast called "A Prairie Home Entertainment" on KSJR FM at St. John's University in Collegeville. The show's eclectic music was a major divergence from the station's usual classical fare. During this time he submitted fiction to "The New Yorker" magazine, where his first story for that publication, "Local Family Keeps Son Happy," appeared in September 1970. Keillor resigned from "The Morning Program" in February 1971 in protest of what he considered interference with his musical programming; as part of his protest, he played nothing but the Beach Boys' "Help Me, Rhonda" during one broadcast. When he returned to the station in October, the show was dubbed "A Prairie Home Companion". 
Keillor has attributed the idea for the live Saturday night radio program to his 1973 assignment to write about the Grand Ole Opry for "The New Yorker", but he had already begun showcasing local musicians on the morning show, despite limited studio space. In August 1973, MER announced plans to broadcast a Saturday night version of "A Prairie Home Companion" with live musicians. "A Prairie Home Companion" ("PHC") debuted as an old-style variety show before a live audience on July 6, 1974; it featured guest musicians and a cadre of cast members doing musical numbers and comic skits replete with elaborate live sound effects. The show is punctuated by spoof commercial spots for the show's fictitious sponsors such as Powdermilk Biscuits, the Ketchup Advisory Board, and the Professional Organization of English Majors (POEM); it presents parodic serial melodramas, such as "The Adventures of Guy Noir, Private Eye" and "The Lives of the Cowboys." Keillor voices Noir, the cowboy Lefty, and other recurring characters, and provides lead or backup vocals for some of the show's musical numbers. The show airs from the Fitzgerald Theater in St. Paul. After the show's intermission, Keillor reads clever and often humorous greetings to friends and family at home, submitted by members of the theater audience in exchange for an honorarium. Also in the second half of the show, Keillor delivers a monologue, "The News from Lake Wobegon", about a fictitious town based in part on Keillor's own hometown of Anoka, Minnesota, and on Freeport and other small towns in Stearns County, Minnesota, where he lived in the early 1970s. Lake Wobegon is a quintessentially Minnesota small town characterized by the narrator as a place "... where all the women are strong, all the men are good-looking, and all the children are above average." The original "PHC" ran until 1987, when Keillor ended it to focus on other projects. In 1989, he launched a new live radio program from New York City, "The American Radio Company of the Air", which had essentially the same format as "PHC". In 1992, he moved ARC back to St. Paul, and a year later changed the name back to "A Prairie Home Companion"; it has remained a fixture of Saturday night radio broadcasting ever since. On a typical broadcast of "A Prairie Home Companion", Keillor's name is not mentioned unless a guest addresses him by name, although some sketches feature Keillor as his alter ego, Carson Wyler. In the closing credits, which Keillor reads, he gives himself no billing or credit except "written by Sarah Bellum," a joking reference to his own brain. Keillor regularly takes the radio company on the road to broadcast from popular venues around the United States; the touring production typically features local celebrities and skits incorporating local color. In April 2000, he took the program to Edinburgh, Scotland, producing two performances in the city's Queen's Hall, which were broadcast by BBC Radio. He toured Scotland with the program to celebrate its 25th anniversary. (In the UK, Ireland, Australia, and New Zealand, the program is known as "Garrison Keillor's Radio Show".) Keillor has produced broadcast performances similar to "PHC" but without the "Prairie Home Companion" brand, as in his 2008 appearance at the Oregon Bach Festival. He was also the host of "The Writer's Almanac" from 1993 to 2017, which, like "PHC", was produced and distributed by American Public Media. 
In a March 2011 interview, Keillor announced that he would be retiring from "A Prairie Home Companion" in 2013, but in a December 2011 interview with the "Sioux City Journal", Keillor said: "The show is going well. I love doing it. Why quit?" During an interview on July 20, 2015, Keillor announced his intent to retire from the show after the 2015–2016 season, saying, "I have a lot of other things that I want to do. I mean, nobody retires anymore. Writers never retire. But this is my last season. This tour this summer is the farewell tour." Keillor's final episode of the show was recorded live for an audience of 18,000 fans at the Hollywood Bowl in California on July 1, 2016, and broadcast the next day, ending 42 seasons of the show. After the performance, President Barack Obama phoned Keillor to congratulate him. The show continued on October 15, 2016, with Chris Thile as its host. At age thirteen, Keillor adopted the pen name "Garrison" to distinguish his personal life from his professional writing. He commonly uses "Garrison" in public and in other media. Keillor has been called "[o]ne of the most perceptive and witty commentators about Midwestern life" by Randall Balmer in "Encyclopedia of Evangelicalism." He has written numerous magazine and newspaper articles and more than a dozen books for adults as well as children. In addition to writing for "The New Yorker," he has written for "The Atlantic Monthly" and "National Geographic." He has also written for Salon.com and authored an advice column there under the name "Mr. Blue." Following a heart operation, he resigned on September 4, 2001, his last column being titled "Every dog has his day". In 2004 Keillor published a collection of political essays, "Homegrown Democrat: A Few Plain Thoughts from the Heart of America," and in June 2005 he began a column called "The Old Scout", which ran at Salon.com and in syndicated newspapers. The column went on hiatus in April 2010 so that he "... [could] finish a screenplay and start writing a novel." Keillor wrote the screenplay for the 2006 movie "A Prairie Home Companion," directed by Robert Altman. He also appears in the movie. On November 1, 2006, Keillor opened an independent bookstore, "Common Good Books, G. Keillor, Prop." in the Blair Arcade Building at the southwest corner of Selby and N. Western Avenues in the Cathedral Hill area in the Summit-University neighborhood of Saint Paul, Minnesota. Upon opening the bookstore, Keillor wrote a poem to mark the occasion. In April 2012, the store moved to a new location on Snelling Avenue across from Macalester College in the Macalester-Groveland neighborhood. Probably owing in part to his distinctive North-Central accent, Keillor is often used as a voice-over actor and has made a number of notable appearances in that role. In "Slate", Sam Anderson called Keillor "very clearly a genius. His range and stamina alone are incredible—after 30 years, he rarely repeats himself—and he has the genuine wisdom of a Cosby or Mark Twain." But Keillor's "willful simplicity," Anderson wrote, "is annoying because, after a while, it starts to feel prescriptive. Being a responsible adult doesn't necessarily mean speaking slowly about tomatoes." Anderson also noted that in 1985, when "Time" magazine called Keillor the funniest man in America, Bill Cosby said, "That's true if you're a pilgrim." Keillor's style, particularly his speaking voice, has often been parodied. Keillor is a member of the Democratic-Farmer-Labor Party. He is tall. 
He considers himself a loner and prefers not to make eye contact with people. Though not diagnosed, he also considers himself to be on the high-functioning end of the autism spectrum. He spoke about his experiences as an autistic person in his keynote address at the 19th Annual Minnesota Autism Conference in 2014. Keillor has been married three times. He was married to Mary Guntzel from 1965 to 1976; they had one son, Jason (born 1969). From 1985 to 1990 he was married to Ulla Skaerved, a former exchange student from Denmark at Keillor's high school whom he re-encountered at a class reunion. He married classical string player Jenny Lind Nilsson (born 1957), who is also from Anoka, in 1995. They have one daughter, Maia Grace Keillor (born December 29, 1997). Between his first and second marriages, Keillor was romantically involved with Margaret Moos, who worked as a producer of "A Prairie Home Companion". On September 7, 2009, Keillor was briefly hospitalized after suffering a minor stroke. He returned to work a few days later. In 2006, after a visit to a United Methodist church in Highland Park, Texas, Keillor created a local controversy with his remarks about the event, including the rhetorical suggestion of a connection between event participants and supporters of torture and a statement creating an impression of political intimidation: "I walked in, was met by two burly security men ... and within 10 minutes was told by three people that this was the Bushes' church and that it would be better if I didn't talk about politics." In response, the lecture series coordinator said the two "burly security men" were a local policeman and the church's own security supervisor, both present because the agreement with Keillor's publisher specified that the venue provide security. In addition, the coordinator said that Keillor arrived at the church, declined an introduction, and took the stage without an opportunity to mingle with the audience, so it was unclear when these warnings might have been dispensed. The publicist concurred, saying that Keillor did not have contact with any church members or people in the audience before he spoke. Supposedly, before Keillor's remarks, participants at the event had considered the visit to have been cordial and warm. Asked to respond, Keillor stuck to his story, describing the people who advised him not to discuss politics and saying that he did not have security guards at other stops on the tour. In 2007, Keillor wrote a column that in part criticized "stereotypical" gay parents, who he said were "sardonic fellows with fussy hair who live in over-decorated apartments with a striped sofa and a small weird dog and who worship campy performers." Keillor later responded to the strong reactions of many readers. In 2008, Keillor created a controversy in St. Paul when he filed a lawsuit against his neighbor's plan to build an addition on her home, citing his need for "light and air" and a view of "open space and beyond". Keillor's home is significantly larger than others in his neighborhood, and it would still have been significantly larger than his neighbor's house even with the planned addition. Keillor came to an undisclosed settlement with his neighbor shortly after the story became public. In 2009, one of Keillor's "Old Scout" columns contained a reference to "lousy holiday songs by Jewish guys" and a complaint about "Silent Night" as rewritten by Unitarians, upsetting some readers. 
A Unitarian minister named Cynthia Landrum responded, "Listening to him talk about us over the years, it's becoming more and more evident that he isn't laughing with us—he's laughing at us", while Jeff Jacoby of the "Boston Globe" called Keillor "cranky and intolerant". On November 29, 2017, the "Star Tribune" reported that Minnesota Public Radio was terminating all business relationships with Keillor as a result of "allegations of his inappropriate behavior with an individual who worked with him." In January 2018, MPR CEO Jon McTaggart elaborated that they had received allegations of "dozens" of sexually inappropriate incidents from the individual, including requests for sexual contact. Keillor has denied any wrongdoing and said his firing stems from an incident when he touched a woman's bare back while trying to console her. He stated that he apologized to her soon after, that they had already made up, and that he was surprised to hear the allegations when her lawyer called. In its statement of termination, MPR announced that Keillor would keep his executive credit for the show, but that since he owns the trademark for the phrase "prairie home companion", they would cease rebroadcasting episodes of "A Prairie Home Companion" featuring Keillor and remove the trademarked phrase from the radio show hosted by Chris Thile. MPR also eliminated its business connections to PrairieHome.org and stopped distributing Keillor's daily program "The Writer's Almanac". "The Washington Post" also canceled Keillor's weekly column after learning that he had continued writing columns, including a controversial piece criticizing Al Franken's resignation over sexual misconduct allegations, without revealing that he himself was under investigation at MPR. Several fans wrote to MPR to protest his firing, and within the month, 153 members had canceled their memberships because of it. In January 2018, Keillor announced he was in mediation with MPR over the firing. On January 23, 2018, MPR News reported further on the investigation after interviewing almost 60 people who had worked with Keillor. The story described other alleged sexual misconduct by Keillor, and a $16,000 severance check for a woman who was asked to sign a confidentiality agreement to prevent her from talking about her time at MPR (she refused and never deposited the check). Keillor received a letter from the MPR CEO, Jon McTaggart, dated April 5, 2018, confirming that both sides wanted archives of "A Prairie Home Companion" and "The Writer's Almanac" to be publicly available again. In April 2018, MPR and Keillor announced a settlement under which MPR would restore the online archives. Also due to the allegations of inappropriate behavior, Keillor's segment in the PBS series "Finding Your Roots" episode that aired on December 19, 2017, was replaced with an older segment featuring Maya Rudolph. Keillor's work in print includes numerous books, columns, and essay collections.
https://en.wikipedia.org/wiki?curid=12743
Generalization A generalization is a form of abstraction whereby common properties of specific instances are formulated as general concepts or claims. Generalizations posit the existence of a domain or set of elements, as well as one or more common characteristics shared by those elements (thus creating a conceptual model). As such, they are the essential basis of all valid deductive inferences (particularly in logic, mathematics and science), where the process of verification is necessary to determine whether a generalization holds true for any given situation. Generalization can also refer to the process of identifying the parts of a whole as belonging to the whole. The parts, which might be unrelated when left on their own, may be brought together as a group, hence belonging to the whole by establishing a common relation between them. However, the parts cannot be generalized into a whole until a common relation is established among all parts. This does not mean that the parts are unrelated, only that no common relation has been established yet for the generalization. The concept of generalization has broad application in many connected disciplines, and might sometimes have a more specific meaning in a specialized context (e.g. generalization in psychology, generalization in learning). In general, given two related concepts "A" and "B", "A" is a "generalization" of "B" (equivalently, "B" is a special case of "A") if and only if both of the following hold: every instance of concept "B" is also an instance of concept "A", and there are instances of concept "A" that are not instances of concept "B". For example, the concept "animal" is a generalization of the concept "bird", since every bird is an animal, but not all animals are birds (dogs, for instance). For more, see Specialisation (biology). The connection of "generalization" to "specialization" (or "particularization") is reflected in the contrasting words hypernym and hyponym. A hypernym, as a generic, stands for a class or group of equally ranked items, such as the term "tree", which stands for equally ranked items such as "peach" and "oak", and the term "ship", which stands for equally ranked items such as "cruiser" and "steamer". In contrast, a hyponym is one of the items included in the generic, such as "peach" and "oak", which are included in "tree", and "cruiser" and "steamer", which are included in "ship". A hypernym is superordinate to a hyponym, and a hyponym is subordinate to a hypernym. An animal is a generalization of a mammal, a bird, a fish, an amphibian and a reptile. Generalization has a long history in cartography as an art of creating maps for different scales and purposes. Cartographic generalization is the process of selecting and representing information of a map in a way that adapts to the scale of the display medium of the map. In this way, every map has, to some extent, been generalized to match the criteria of display. This includes small cartographic scale maps, which cannot convey every detail of the real world. As a result, cartographers must decide and then adjust the content within their maps to create a suitable and useful map that conveys the geospatial information within their representation of the world. Generalization is meant to be context-specific. That is to say, correctly generalized maps are those that emphasize the most important map elements, while still representing the world in the most faithful and recognizable way. 
The level of detail and importance of what remains on the map must outweigh the insignificance of the items that were generalized away, so as to preserve the distinguishing characteristics that make the map useful and important.
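The animal/bird example used earlier in this article corresponds directly to subtype relationships in a programming language: every instance of the special case is an instance of the generalization, but not the other way around. The sketch below is only an illustration of those two defining conditions, using hypothetical class names.

```python
# "Animal" is a generalization of "Bird": every Bird is an Animal,
# but not every Animal is a Bird (a Dog, for instance).
class Animal: ...
class Bird(Animal): ...
class Dog(Animal): ...

# Condition 1: every instance of the special case is an instance of the
# generalization.
print(issubclass(Bird, Animal))    # True
print(isinstance(Bird(), Animal))  # True

# Condition 2: the generalization has instances that are not instances of
# the special case.
print(issubclass(Animal, Bird))    # False
print(isinstance(Dog(), Bird))     # False, a Dog is an Animal but not a Bird
```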
https://en.wikipedia.org/wiki?curid=12746
Giacomo Puccini Giacomo Antonio Domenico Michele Secondo Maria Puccini (22 December 1858 – 29 November 1924) was an Italian opera composer who has been called "the greatest composer of Italian opera after Verdi". Puccini's early work was rooted in traditional late-19th-century romantic Italian opera. Later, he successfully developed his work in the realistic "verismo" style, of which he became one of the leading exponents. Puccini's most renowned works are "La bohème" (1896), "Tosca" (1900), "Madama Butterfly" (1904), and "Turandot" (1924), all of which remain standards of the operatic repertoire. Puccini was born Giacomo Antonio Domenico Michele Secondo Maria Puccini in Lucca, Italy, in 1858. He was the sixth of nine children of Michele Puccini (1813–1864) and Albina Magi (1830–1884). The Puccini family was established in Lucca as a local musical dynasty by Puccini's great-great-grandfather – also named Giacomo (1712–1781). This first Giacomo Puccini was "maestro di cappella" of the Cattedrale di San Martino in Lucca. He was succeeded in this position by his son, Antonio Puccini, and then by Antonio's son Domenico, and Domenico's son Michele (father of the subject of this article). Each of these men studied music at Bologna, and some took additional musical studies elsewhere. Domenico Puccini studied for a time under Giovanni Paisiello. Each composed music for the church. In addition, Domenico composed several operas, and Michele composed one opera. Puccini's father Michele enjoyed a reputation throughout northern Italy, and his funeral was an occasion of public mourning, at which the then-famed composer Giovanni Pacini conducted a Requiem. With the Puccini family having occupied the position of "maestro di cappella" for 124 years (1740–1864) by the time of Michele's death, it was anticipated that Michele's son Giacomo would occupy that position as well when he was old enough. However, when Michele Puccini died in 1864, his son Giacomo was only six years old, and thus not capable of taking over his father's job. As a child, he nevertheless participated in the musical life of the Cattedrale di San Martino, as a member of the boys' choir and later as a substitute organist. Puccini was given a general education at the seminary of San Michele in Lucca, and then at the seminary of the cathedral. One of Puccini's uncles, Fortunato Magi, supervised his musical education. Puccini received a diploma from the Pacini School of Music in Lucca in 1880, having studied there with his uncle Fortunato, and later with Carlo Angeloni, who had also instructed Alfredo Catalani. A grant from Queen Margherita, and assistance from another uncle, Nicholas Cerù, provided the funds necessary for Puccini to continue his studies at the Milan Conservatory, where he studied composition with Stefano Ronchetti-Monteviti, Amilcare Ponchielli, and Antonio Bazzini. Puccini studied at the conservatory for three years, sharing a room with Pietro Mascagni. In 1880, at the age of 21, Puccini composed his "Mass", which marks the culmination of his family's long association with church music in his native Lucca. Puccini wrote an orchestral piece called the "Capriccio sinfonico" as a thesis composition for the Milan Conservatory. Puccini's teachers Ponchielli and Bazzini were impressed by the work, and it was performed at a student concert at the conservatory on 14 July 1883, conducted by Franco Faccio. 
Puccini's work was favorably reviewed in the Milanese publication "Perseveranza", and thus Puccini began to build a reputation as a young composer of promise in Milanese music circles. After the premiere of the "Capriccio sinfonico", Ponchielli and Puccini discussed the possibility that Puccini's next work might be an opera. Ponchielli invited Puccini to stay at his villa, where Puccini was introduced to another young man named Ferdinando Fontana. Puccini and Fontana agreed to collaborate on an opera, for which Fontana would provide the libretto. The work, "Le Villi", was entered into a competition sponsored by the Sonzogno music publishing company in 1883 (the same competition in which Pietro Mascagni's "Cavalleria rusticana" was the winner in 1889). Although it did not win, "Le Villi" was later staged at the Teatro Dal Verme, premiering on 31 May 1884. G. Ricordi & Co. music publishers assisted with the premiere by printing the libretto without charge. Fellow students from the Milan Conservatory formed a large part of the orchestra. The performance was enough of a success that Casa Ricordi purchased the opera. Revised into a two-act version with an intermezzo between the acts, "Le Villi" was performed at La Scala in Milan on 24 January 1885. However, Ricordi did not publish the score until 1887, hindering further performance of the work. Giulio Ricordi, head of G. Ricordi & Co. music publishers, was sufficiently impressed with "Le Villi" and its young composer that he commissioned a second opera, which would result in "Edgar". Work was begun in 1884 when Fontana began working out the scenario for the libretto. Puccini finished primary composition in 1887 and orchestration in 1888. "Edgar" premiered at La Scala on 21 April 1889 to a lukewarm response. The work was withdrawn for revisions after its third performance. In a Milanese newspaper, Giulio Ricordi published a defense of Puccini's skill as a composer, while criticizing Fontana's libretto. A revised version met with success at the Teatro del Giglio in Puccini's native Lucca on 5 September 1891. In 1892, further revisions reduced the length of the opera from four acts to three, in a version that was well received in Ferrara and was performed in Turin and in Spain. Puccini made further revisions in 1901 and 1905, but the work never achieved popularity. Without the personal support of Ricordi, "Edgar" might have cost Puccini his career. Puccini had eloped with his former piano student, the married Elvira Gemignani, and Ricordi's associates were willing to turn a blind eye to his lifestyle as long as he was successful. When "Edgar" failed, they suggested to Ricordi that he should drop Puccini, but Ricordi said that he would stay with him and continued his allowance until his next opera. On commencing his next opera, "Manon Lescaut", Puccini announced that he would write his own libretto so that "no fool of a librettist" could spoil it. Ricordi persuaded him to accept Ruggero Leoncavallo as his librettist, but Puccini soon asked Ricordi to remove him from the project. Four other librettists were then involved with the opera, as Puccini constantly changed his mind about the structure of the piece. It was almost by accident that the final two, Luigi Illica and Giuseppe Giacosa, came together to complete the opera. "Manon Lescaut" premiered at the Teatro Regio in Turin on 2 February 1893. 
By coincidence, Puccini's first enduringly popular opera appeared within a week of the premiere of Verdi's last opera, "Falstaff", which was first performed on 9 February 1893. In anticipation of the premiere, "La Stampa" wrote that Puccini was a young man concerning whom "great hopes" had a real basis. Because of the failure of "Edgar", however, a failure of "Manon Lescaut" could have jeopardized Puccini's future as a composer. Although Giulio Ricordi, head of Casa Ricordi, was supportive of Puccini while "Manon Lescaut" was still in development, the Casa Ricordi board of directors was considering cutting off Puccini's financial support. In any event, ""Manon Lescaut" was Puccini's first and only uncontested triumph, acclaimed by critics and public alike." After the London premiere in 1894, George Bernard Shaw pronounced: "Puccini looks to me more like the heir of Verdi than any of his rivals." "Manon Lescaut" was a great success and established Puccini's reputation as the most promising rising composer of his generation, and the most likely "successor" to Verdi as the leading exponent of the Italian operatic tradition. Illica and Giacosa returned as librettists for Puccini for his next three operas, probably his greatest successes: "La bohème", "Tosca" and "Madama Butterfly". Puccini's next work after "Manon Lescaut" was "La bohème", a four-act opera based on the 1851 book by Henri Murger, "La Vie de Bohème". "La bohème" premiered in Turin in 1896, conducted by Arturo Toscanini. Within a few years, it had been performed in many of the leading opera houses of Europe, including in Britain, as well as in the United States. It was a popular success, and remains one of the most frequently performed operas ever written. The libretto of the opera, freely adapted from Murger's episodic novel, combines comic elements of the impoverished life of the young protagonists with the tragic aspects, such as the death of the young seamstress Mimì. Puccini's own life as a young man in Milan served as a source of inspiration for elements of the libretto. During his years as a conservatory student and in the years before "Manon Lescaut", he experienced poverty similar to that of the bohemians in "La bohème", including chronic shortage of necessities like food, clothing and money to pay rent. Although Puccini was granted a small monthly stipend by the Congregation of Charity in Rome, he frequently had to pawn his possessions to cover basic expenses. Early biographers such as Wakeling Dry and Eugenio Checchi, who were Puccini's contemporaries, drew express parallels between these incidents and particular events in the opera. Checchi cited a diary kept by Puccini while he was still a student, which recorded an occasion in which, as in Act 4 of the opera, a single herring served as a dinner for four people. Puccini himself commented: "I lived that "Bohème", when there wasn't yet any thought stirring in my brain of seeking the theme of an opera". Puccini's composition of "La bohème" was the subject of a public dispute between Puccini and fellow composer Ruggero Leoncavallo. In early 1893, the two composers discovered that they were both engaged in writing operas based on Murger's work. Leoncavallo had started his work first, and he and his music publisher claimed to have "priority" on the subject (although Murger's work was in the public domain). Puccini responded that he started his own work without having any knowledge of Leoncavallo's project, and wrote: "Let him compose. I will compose. 
The audience will decide." Puccini's opera premiered a year before that of Leoncavallo, and has been a perennial audience favorite, while Leoncavallo's version quickly faded into obscurity. Puccini's next work after "La bohème" was "Tosca" (1900), arguably Puccini's first foray into "verismo", the realistic depiction of many facets of real life including violence. Puccini had been considering an opera on this theme since he saw the play "Tosca" by Victorien Sardou in 1889, when he wrote to his publisher, Giulio Ricordi, begging him to get Sardou's permission for the work to be made into an opera: "I see in this "Tosca" the opera I need, with no overblown proportions, no elaborate spectacle, nor will it call for the usual excessive amount of music." The music of "Tosca" employs musical signatures for particular characters and emotions, which have been compared to Wagnerian leitmotivs, and some contemporaries saw Puccini as thereby adopting a new musical style influenced by Wagner. Others viewed the work differently. Rejecting the allegation that "Tosca" displayed Wagnerian influences, a critic reporting on the 20 February 1900 Torino premiere wrote: "I don't think you could find a more Puccinian score than this." On 25 February 1903, Puccini was seriously injured in a car crash during a nighttime journey on the road from Lucca to Torre del Lago. The car was driven by Puccini's chauffeur and was carrying Puccini, his future wife Elvira, and their son Antonio. It went off the road, fell several metres, and flipped over. Elvira and Antonio were flung from the car and escaped with minor injuries. Puccini's chauffeur, also thrown from the car, suffered a serious fracture of his femur. Puccini was pinned under the vehicle, with a severe fracture of his right leg and with a portion of the car pressing down on his chest. A doctor living near the scene of the crash, together with another person who came to investigate, saved Puccini from the wreckage. The injury did not heal well, and Puccini remained under treatment for months. During the medical examinations that he underwent it was also found that he was suffering from a form of diabetes. The accident and its consequences slowed Puccini's completion of his next work, "Madama Butterfly". The original version of "Madama Butterfly" premiered at La Scala on 17 February 1904 with Rosina Storchio in the title role. It was initially greeted with great hostility (probably largely owing to inadequate rehearsals). When Storchio's kimono accidentally lifted during the performance, some in the audience started shouting: "The butterfly is pregnant" and "There is the little Toscanini". The latter comment referred to her well-publicised affair with Arturo Toscanini. This version was in two acts; after its disastrous premiere, Puccini withdrew the opera, revising it for what was virtually a second premiere at Brescia in May 1904 and performances in Buenos Aires, London, the USA and Paris. In 1907, Puccini made his final revisions to the opera in a fifth version, which has become known as the "standard version". Today, the standard version of the opera is the version most often performed around the world. However, the original 1904 version is occasionally performed as well, and has been recorded. After 1904, Puccini's compositions were less frequent. In 1906 Giacosa died, and in 1909 there was a scandal after Puccini's wife, Elvira, falsely accused their maid Doria Manfredi of having an affair with Puccini. 
Finally, in 1912, the death of Giulio Ricordi, Puccini's editor and publisher, ended a productive period of his career. Puccini completed "La fanciulla del West", based on a play by David Belasco, in 1910. This was commissioned by, and first performed at, the Metropolitan Opera in New York on 10 December 1910 with Met stars Enrico Caruso and Emmy Destinn, for whom Puccini created the leading roles of Dick Johnson and Minnie. Toscanini, then the musical director of the Met, conducted. This was the first world premiere of an opera at the Met. The premiere was a great success. However, the compositional style employed in the opera, with few stand-alone arias, was criticized at the time. Some contemporaries also criticized the opera for failing to achieve an "American" tone. However, the opera has been acclaimed for its incorporation of advanced harmonic language and rhythmic complexity into the Italian operatic form. In addition, one aria from the opera, "Ch'ella mi creda", has become a staple of compilation albums by operatic tenors. It is said that during World War I, Italian soldiers sang this aria to maintain their spirits. The 2008 Italian film, "Puccini e la fanciulla" ("Puccini and the Girl"), is based on the period of his life when he was composing the opera. Puccini completed the score of "La rondine", to a libretto by Giuseppe Adami, in 1916 after two years of work, and it was premiered at the Grand Théâtre de Monte Carlo on 27 March 1917. The opera had been originally commissioned by Vienna's Carltheater; however, the outbreak of World War I prevented the premiere from being given there. Moreover, the firm of Ricordi had declined the score of the opera – Giulio Ricordi's son Tito was then in charge and he described the opera as "bad Lehár". It was taken up by their rival, Lorenzo Sonzogno, who arranged the first performance in neutral Monaco. The composer continued to work at revising this, the least known of his mature operas, until his death. "La rondine" was initially conceived as an operetta, but Puccini eliminated spoken dialogue, rendering the work closer in form to an opera. A modern reviewer described "La rondine" as "a continuous fabric of lilting waltz tunes, catchy pop-styled melodies, and nostalgic love music," while characterizing the plot as recycling characters and incidents from works like "La traviata" and "Die Fledermaus". In 1918, "Il trittico" premiered in New York. This work is composed of three one-act operas, each concerning the concealment of a death: a horrific episode ("Il tabarro") in the style of the Parisian Grand Guignol, a sentimental tragedy ("Suor Angelica"), and a comedy ("Gianni Schicchi"). "Turandot", Puccini's final opera, was left unfinished, and the last two scenes were completed by Franco Alfano based on the composer's sketches. The libretto for "Turandot" was based on a play of the same name by Carlo Gozzi. The music of the opera is heavily inflected with pentatonic motifs, intended to give the music an Asiatic flavor. "Turandot" contains a number of memorable stand-alone arias, among them "Nessun dorma". The libretto of "Edgar" was a significant factor in the failure of that opera. Thereafter, especially throughout his middle and late career, Puccini was extremely selective, and at times indecisive, in his choice of subject matter for new works. Puccini was deeply involved in the writing of the libretti themselves, requiring many iterative revisions in terms of both structure and text. 
Puccini's relationships with his librettists were at times very difficult. His publisher, Casa Ricordi, was frequently required to mediate disputes and impasses between them. Puccini explored many possible subjects that he ultimately rejected only after a significant amount of effort, such as the creation of a libretto, had been put into them. Among the subjects that Puccini seriously considered, but abandoned, were: "Cristoforo Sly", "Anima Allegra" (based on the play "El genio alegre" by Serafín and Joaquín Álvarez Quintero), "Two Little Wooden Shoes" ("I due zoccoletti") (a short story by Maria Louise Ramé, a.k.a. Ouida), the life of Marie Antoinette, Margherita da Cortona, and "Conchita" (based on the novel "La Femme et le pantin" – "The Woman and the Puppet", by Pierre Louÿs). Some of these abandoned subjects were taken up and turned into operas by other composers. For example, Franco Vittadini made an opera of "Anima Allegra", Mascagni's opera "Lodoletta" is derived from "Two Little Wooden Shoes", and Riccardo Zandonai eventually wrote "Conchita". From 1891 onwards, Puccini spent most of his time, when not traveling on business, at Torre del Lago, a small community about fifteen miles from Lucca situated between the Ligurian Sea and Lake Massaciuccoli, just south of Viareggio. Torre del Lago was the primary place for Puccini to indulge his love of hunting. "I love hunting, I love cars: and for these things, in the isolation of Torre del Lago, I keep the faith." ("Amo la caccia, adoro l’automobile: e a questo e a quella nelle solitudini di Torre del Lago serbo intera la mia fede.") By 1900, he had acquired land and built a villa on the lake, now known as the "Villa Museo Puccini". He lived there until 1921, when pollution produced by peat works on the lake forced him to move to Viareggio, a few kilometres north. After his death, a mausoleum was created in the Villa Puccini and the composer is buried there in the chapel, along with his wife and son, who died later. The Villa Museo was owned by his granddaughter, Simonetta Puccini, until her death, and is open to the public. An annual Festival Puccini is held at Torre del Lago. In the autumn of 1884, Puccini began a relationship with a married woman named Elvira Gemignani (née Bonturi, 1860–1930) in Lucca. Elvira's husband, Narciso Gemignani, was an "unrepentant womanizer", and Elvira's marriage was not a happy one. Elvira became pregnant by Puccini, and their son, Antonio (1886–1946), was born in Monza. Elvira left Lucca when the pregnancy began to show, and gave birth elsewhere to avoid gossip. Elvira, Antonio and Elvira's daughter by Narciso, Fosca (1880–1968), began to live with Puccini shortly afterwards. Narciso was killed by the husband of a woman with whom he had had an affair, dying on 26 February 1903, one day after Puccini's car accident. Only then, in early 1904, were Puccini and Elvira able to marry, and to legitimize Antonio. The marriage between Puccini and Elvira was also troubled by infidelity, as Puccini had frequent affairs himself, including with well-known singers such as Maria Jeritza, Emmy Destinn, Cesira Ferrani, and Hariclea Darclée. In 1906, while attending the opening of "Madama Butterfly" in Budapest, Puccini fell in love with Blanke Lendvai, the sister of Hungarian composer Ervin Lendvai (his friend and protégé for many years). Blanke and Puccini exchanged love letters until 1911, when he started an affair with German aristocrat Baroness Josephine von Stangel, which lasted for six years. 
In 1909, Puccini's wife Elvira publicly accused Doria Manfredi, a maid working for the Puccini family, of having an affair with the composer. Following the public accusation, Doria Manfredi committed suicide. An autopsy determined, however, that Doria had died a virgin, refuting the allegations made against her. Elvira Puccini was prosecuted for slander, and was sentenced to more than five months in prison, although a payment to the Manfredi family by Puccini spared Elvira from having to serve the sentence. Some music critics and interpreters of Puccini's work have speculated that the psychological effects of this incident on Puccini interfered with his ability to complete compositions later in his career, and also influenced the development of his characters such as Liù (from "Turandot"), a slave girl who dies tragically by suicide. According to documents found in the possession of a descendant of the Manfredi family, Nadia Manfredi, in 2007, Puccini was actually having an affair with Giulia Manfredi, Doria's cousin. Press reports at the time when these documents were discovered alleged that Nadia Manfredi was Puccini's granddaughter, by a son, Antonio Manfredi, born to Giulia. Unlike Wagner and Verdi, Puccini was not active in politics. Puccini biographer Mary Jane Phillips-Matz wrote: "Throughout this entire period [of World War I and its immediate aftermath], Puccini's interest in politics was close to zero, as it had been all his life, so far as one can judge. He seemed almost indifferent to everything from mayoral elections in Viareggio to cabinet appointments in Rome." Another biographer speculates that Puccini may have been, if he had a political philosophy, a monarchist. Puccini's indifference to politics caused him problems during World War I. Puccini's long-standing and close friendship with Toscanini was interrupted for nearly a decade because of an argument in the summer of 1914 (in the opening months of the war) during which Puccini remarked that Italy could benefit from German organization. Puccini was also criticized during the war for his work on "La rondine" under a 1913 commission contract with an Austrian theater after Italy and Austria-Hungary became opponents in the war in 1915 (although the contract was ultimately cancelled). Puccini did not participate in the public war effort, but privately rendered assistance to individuals and families affected by the war. In 1919, Puccini was commissioned to write music for an ode honoring Italy's victories in World War I. The work, "Inno a Roma" (Hymn to Rome), was to premiere on 21 April 1919, during a celebration of the anniversary of the founding of Rome. The premiere was delayed to 1 June 1919, when it was played at the opening of a gymnastics competition. Although not written for the fascists, "Inno a Roma" was widely played during Fascist street parades and public ceremonies. Puccini had some contact with Benito Mussolini and the Italian Fascist Party in the year preceding his death. In 1923 the Fascist Party in Viareggio made Puccini an honorary member and sent him a membership card. However, evidence that Puccini was actually a member of the Fascist party is ambiguous. The Italian Senate has traditionally included a small number of members appointed in recognition of their cultural contributions to the nation. Puccini hoped to attain this honor, which had been granted to Verdi, and undertook to use his connections to bring about the appointment. 
While honorary senators could vote, there is no indication that Puccini sought the appointment for this purpose. Puccini also wished to establish a national theater in Viareggio, a project which would require government support. Puccini met with Mussolini twice, in November and December 1923, seeking support for the theater project. While the theater project never came to fruition, Puccini was named Senator ("senatore a vita") a few months before his death. At the time Puccini met with Mussolini, Mussolini had been prime minister for approximately a year, and his party did not take full control of the Italian Parliament until the violence and irregularities of the 1924 general election. Puccini was no longer alive when Mussolini announced the end of representative government, and the beginning of a fascist dictatorship, in his speech before the Chamber of Deputies on 3 January 1925. A chain smoker of Toscano cigars and cigarettes, Puccini began to complain of chronic sore throats towards the end of 1923. A diagnosis of throat cancer led his doctors to recommend a new and experimental radiation therapy treatment, which was being offered in Brussels. Puccini and his wife never knew how serious the cancer was, as the news was revealed only to his son. Puccini died in Brussels on 29 November 1924, aged 65, from complications after the treatment; uncontrolled bleeding led to a heart attack the day after surgery. News of his death reached Rome during a performance of "La bohème". The opera was immediately stopped, and the orchestra played Chopin's "Funeral March" for the stunned audience. He was buried in Milan, in Toscanini's family tomb, but that was always intended as a temporary measure. In 1926 his son arranged for the transfer of his father's remains to a specially created chapel inside the Puccini villa at Torre del Lago. Most broadly, Puccini wrote in the style of the late-Romantic period of classical music. Music historians also refer to Puccini as a member of the "giovane scuola" ("young school"), a cohort of composers who came onto the Italian operatic scene as Verdi's career came to an end, such as Mascagni, Leoncavallo, and others mentioned below. Puccini is also frequently referred to as a "verismo" composer. Puccini's career extended from the end of the Romantic period into the modern period. He consciously attempted to 'update' his style to keep pace with new trends, but did not attempt to fully adopt a modern style. One critic, Anthony Davis, has stated: "Loyalty toward nineteenth-century Italian-opera traditions and, more generally, toward the musical language of his Tuscan heritage is one of the clearest features of Puccini's music." Davis also identifies, however, a "stylistic pluralism" in Puccini's work, including influences from "the German symphonic tradition, French harmonic and orchestrational traditions, and, to a lesser extent, aspects of Wagnerian chromaticism". In addition, Puccini frequently sought to introduce music or sounds from outside sources into his operas, such as his use of Chinese folk melodies in "Turandot". All of Puccini's operas have at least one set piece for a lead singer that is separate enough from its surroundings that it can be treated as a distinct aria, and most of his works have several of these. At the same time, Puccini's work continued the trend away from operas constructed from a series of set pieces, and instead used a more "through-composed" or integrated construction. His works are strongly melodic. 
In orchestration, Puccini frequently doubled the vocal line in unison or at octaves in order to emphasize and strengthen the melodic line. "Verismo" is a style of Italian opera that began in 1890 with the first performance of Mascagni's "Cavalleria rusticana", peaked in the early 1900s, and lingered into the 1920s. The style is distinguished by realistic – sometimes sordid or violent – depictions of everyday life, especially the life of the contemporary lower classes. It by and large rejects the historical or mythical subjects associated with Romanticism. "Cavalleria rusticana", "Pagliacci", and "Andrea Chénier" are uniformly considered to be "verismo" operas. Puccini's career as a composer is almost entirely coincident in time with the "verismo" movement. Only his "Le Villi" and "Edgar" preceded "Cavalleria rusticana". Some view Puccini as essentially a "verismo" composer, while others, although acknowledging that he took part in the movement to some degree, do not view him as a "pure" "verismo" composer. In addition, critics differ as to the degree to which particular operas by Puccini are, or are not, properly described as "verismo" operas. Two of Puccini's operas, "Tosca" and "Il tabarro", are universally considered to be "verismo" operas. Puccini scholar Mosco Carner places only two of Puccini's operas other than "Tosca" and "Il tabarro" within the "verismo" school: "Madama Butterfly" and "La fanciulla del West". Because few "verismo" works by composers other than Puccini continue to appear regularly on stage (the main exceptions being the aforementioned "Cavalleria rusticana", "Pagliacci", and "Andrea Chénier"), Puccini's contribution has had lasting significance to the genre. Both during his lifetime and in posterity, Puccini's success outstripped that of the other Italian opera composers of his time, and he has been matched in this regard by only a handful of composers in the entire history of opera. Between 2004 and 2018, Puccini ranked third (behind Verdi and Mozart) in the number of performances of his operas worldwide, as surveyed by Operabase. Three of his operas ("La bohème", "Tosca", and "Madama Butterfly") were amongst the 10 most frequently performed operas worldwide. Gustav Kobbé, the original author of "The Complete Opera Book", a standard reference work on opera, wrote in the 1919 edition: "Puccini is considered the most important figure in operatic Italy today, the successor of Verdi, if there is any." Other contemporaries shared this view. The Italian opera composers of his generation with whom Puccini was compared included Pietro Mascagni (1863–1945), Ruggero Leoncavallo (1857–1919), Umberto Giordano (1867–1948), Francesco Cilea (1866–1950), Baron Pierantonio Tasca (1858–1934), Gaetano Coronaro (1852–1908), and Alberto Franchetti (1860–1942). Only three composers, and three works, by Italian contemporaries of Puccini appear on the Operabase list of most-performed works: "Cavalleria rusticana" by Mascagni, "Pagliacci" by Ruggero Leoncavallo, and "Andrea Chénier" by Umberto Giordano. Kobbé contrasted Puccini's ability to achieve "sustained" success with the failure of Mascagni and Leoncavallo to produce more than merely "one sensationally successful short opera". By the time of Puccini's death in 1924, he had earned $4 million from his works. Although the popular success of Puccini's work is undeniable, and his mastery of the craft of composition has been consistently recognized, opinion among critics as to the artistic value of his work has always been divided. 
Grove Music Online described Puccini's strengths as a composer as follows: Puccini succeeded in mastering the orchestra as no other Italian had done before him, creating new forms by manipulating structures inherited from the great Italian tradition, loading them with bold harmonic progressions which had little or nothing to do with what was happening then in Italy, though they were in step with the work of French, Austrian and German colleagues. In his work on Puccini, Julian Budden describes Puccini as a gifted and original composer, noting the innovation hidden in the popularity of works such as "Che gelida manina". He describes the aria in musical terms (for example, the signature embedded in the harmony), and points out that its structure was rather unheard of at the time, having three distinct musical paragraphs that nonetheless form a complete and coherent whole. This boldness in musical experimentation was the essence of Puccini's style, as evidenced in his diverse settings and use of the motif to express ideas beyond those in the story and text. Puccini has, however, consistently been the target of condescension by some music critics who find his music insufficiently sophisticated or difficult. Some have explicitly condemned his efforts to please his audience, such as this contemporary Italian critic: He willingly stops himself at minor genius, stroking the taste of the public ... obstinately shunning too-daring innovation ... A little heroism, but not taken to great heights; a little bit of veristic comedy, but brief; a lot of sentiment and romantic idyll: this is the recipe in which he finds happiness. Budden attempted to explain the paradox of Puccini's immense popular success and technical mastery on the one hand, and the relative disregard in which his work has been held by academics on the other: No composer communicates more directly with an audience than Puccini. Indeed, for many years he has remained a victim of his own popularity; hence the resistance to his music in academic circles. Be it remembered, however, that Verdi's melodies were once dismissed as barrel-organ fodder. The truth is that music that appeals immediately to a public becomes subject to bad imitation, which can cast a murky shadow over the original. So long as counterfeit Puccinian melody dominated the world of sentimental operetta, many found it difficult to come to terms with the genuine article. Now that the current coin of light music has changed, the composer admired by Schoenberg, Ravel, and Stravinsky can be seen to emerge in his full stature. Puccini wrote orchestral pieces, sacred music, chamber music, solo music for piano and organ, and songs for voice and piano, most notably his 1880 mass "Messa di gloria", his "Preludio Sinfonico" of 1882, and his 1890 string quartet movement "Crisantemi". However, he is primarily known for his operas. Founded in 1996 in Lucca, the "Centro di studi Giacomo Puccini" embraces a wide range of approaches to the study of Puccini's work. In the USA, the American Center for Puccini Studies specializes in the presentation of unusual performing editions of the composer's works and introduces neglected or unknown Puccini pieces. It was founded in 2004 by the singer and director Harry Dunstan.
https://en.wikipedia.org/wiki?curid=12750
Gramophone (disambiguation) Gramophone or phonograph is a device for the mechanical recording and reproduction of sound. Gramophone may also refer to:
https://en.wikipedia.org/wiki?curid=12753
George Cukor George Dewey Cukor (; July 7, 1899 – January 24, 1983) was an American film director. He mainly concentrated on comedies and literary adaptations. His career flourished at RKO when David O. Selznick, the studio's Head of Production, assigned Cukor to direct several of RKO's major films, including "What Price Hollywood?" (1932), "A Bill of Divorcement" (1932), "Our Betters" (1933), and "Little Women" (1933). When Selznick moved to MGM in 1933, Cukor followed and directed "Dinner at Eight" (1933) and "David Copperfield" (1935) for Selznick and "Romeo and Juliet" (1936) and "Camille" (1936) for Irving Thalberg. He was replaced as the director of "Gone with the Wind" (1939), but he went on to direct "The Philadelphia Story" (1940), "Gaslight" (1944), "Adam's Rib" (1949), "Born Yesterday" (1950), "A Star Is Born" (1954), "Bhowani Junction" (1956), and won the Academy Award for Best Director for "My Fair Lady" (1964). He continued to work into the 1980s. Cukor was born on the Lower East Side of Manhattan in New York City, the younger child and only son of Hungarian-Jewish immigrants Viktor, an assistant district attorney, and Helén Ilona Gross. His parents selected his middle name in honor of Spanish–American War hero George Dewey. The family was not particularly religious (pork was a staple on the dinner table), and when he started attending temple as a boy, Cukor learned Hebrew phonetically, with no real understanding of the meaning of the words or what they represented. As a result, he was ambivalent about his faith and dismissive of old world traditions from childhood, and as an adult he embraced Anglophilia to remove himself even further from his roots. As a child, Cukor appeared in several amateur plays and took dance lessons, and at the age of seven he performed in a recital with David O. Selznick, who in later years became a mentor and friend. As a teenager, Cukor frequently was taken to the New York Hippodrome by his uncle. Infatuated with theatre, he often cut classes at DeWitt Clinton High School to attend afternoon matinees. During his senior year, he worked as a supernumerary with the Metropolitan Opera, earning 50¢ per appearance, and $1 if he was required to perform in blackface. Following his graduation in 1917, Cukor was expected to follow in his father's footsteps and pursue a career in law. He halfheartedly enrolled in the City College of New York, where he entered the Students Army Training Corps in October 1918. His military experience was limited; Germany surrendered in early November, and Cukor's duty ended after only two months. He left school shortly afterwards. Cukor obtained a job as an assistant stage manager and bit player with a touring production of "The Better 'Ole", a popular British musical based on Old Bill, a cartoon character created by Bruce Bairnsfather. In 1920, he became the stage manager for the Knickerbocker Players, a troupe that shuttled between Syracuse and Rochester, New York, and the following year he was hired as general manager of the newly formed Lyceum Players, an upstate summer stock company. In 1925 he formed the C.F. and Z. Production Company with Walter Folmer and John Zwicki, which gave him his first opportunity to direct. Following their first season, he made his Broadway directorial debut with "Antonia" by Hungarian playwright Melchior Lengyel, then returned to Rochester, where C.F. and Z. 
evolved into the Cukor-Kondolf Stock Company, a troupe that included Louis Calhern, Ilka Chase, Phyllis Povah, Frank Morgan, Reginald Owen, Elizabeth Patterson and Douglass Montgomery, all of whom worked with Cukor in later years in Hollywood. Lasting only one season with the company was Bette Davis. Cukor later recalled, "Her talent was apparent, but she did buck at direction. She had her own ideas, and though she only did bits and ingenue roles, she didn't hesitate to express them." For the next several decades, Davis claimed she was fired, and although Cukor never understood why she placed so much importance on an incident he considered so minor, he never worked with her again. For the next few years, Cukor alternated between Rochester in the summer months and Broadway in the winter. His direction of a 1926 stage adaptation of "The Great Gatsby" by Owen Davis brought him to the attention of the New York critics. Writing in the "Brooklyn Eagle", drama critic Arthur Pollock called it "an unusual piece of work by a director not nearly so well known as he should be." Cukor directed six more Broadway productions before departing for Hollywood in 1929. When Hollywood began to recruit New York theater talent for sound films, Cukor immediately answered the call. In December 1928, Paramount Pictures signed him to a contract that reimbursed him for his airfare and initially paid him $600 per week with no screen credit during a six-month apprenticeship. He arrived in Hollywood in February 1929, and his first assignment was to coach the cast of "River of Romance" to speak with an acceptable Southern accent. In October, the studio lent him to Universal Pictures to conduct the screen tests and work as a dialogue director for "All Quiet on the Western Front" which was released in 1930. That year he co-directed three films at Paramount, and his weekly salary was increased to $1500. He made his solo directorial debut with "Tarnished Lady" (1931) starring Tallulah Bankhead. Cukor was then assigned to "One Hour with You" (1932), an operetta with Maurice Chevalier and Jeanette MacDonald, when original director Ernst Lubitsch opted to concentrate on producing the film instead. At first the two men worked well together, but two weeks into filming Lubitsch began arriving on the set on a regular basis, and he soon began directing scenes with Cukor's consent. Upon the film's completion, Lubitsch approached Paramount general manager B. P. Schulberg and threatened to leave the studio if Cukor's name wasn't removed from the credits. When Schulberg asked him to cooperate, Cukor filed suit. He eventually settled for being billed as assistant director and then left Paramount to work with David O. Selznick at RKO Studios. Cukor quickly earned a reputation as a director who could coax great performances from actresses and he became known as a "woman's director", a title he resented. Despite this reputation, during his career, he oversaw more performances honored with the Academy Award for Best Actor than any other director: James Stewart in "The Philadelphia Story" (1940), Ronald Colman in "A Double Life" (1947), and Rex Harrison in "My Fair Lady" (1964). One of Cukor's earlier ingenues was actress Katharine Hepburn, who debuted in "A Bill of Divorcement" (1932) and whose looks and personality left RKO officials at a loss as to how to use her. Cukor directed her in several films, both successful, such as "Little Women" (1933) and "Holiday" (1938), and disastrous, such as "Sylvia Scarlett" (1935). 
Cukor and Hepburn became close friends off the set. Cukor was hired to direct "Gone with the Wind" by Selznick in 1936, even before the book was published. He spent the next two years involved with pre-production, including supervision of the numerous screen tests of actresses anxious to portray Scarlett O'Hara. Cukor favored Hepburn for the role, but Selznick, concerned about her reputation as "box office poison", would not consider her without a screen test, and the actress refused to film one. Of those who did, Cukor preferred Paulette Goddard, but her supposedly illicit relationship with Charlie Chaplin (they were, in fact, secretly married) concerned Selznick. Between his "Wind" chores, the director assisted with other projects. He filmed the cave scene for "The Adventures of Tom Sawyer" (1938), and, following the firing of its original director Richard Thorpe, Cukor spent a week on the set of "The Wizard of Oz" (1939). Although he filmed no footage, he made crucial changes to the look of Dorothy by eliminating Judy Garland's blonde wig and adjusting her makeup and costume, encouraging her to act in a more natural manner. Additionally, Cukor softened the Scarecrow's makeup and gave Margaret Hamilton a different hairstyle for the Wicked Witch of the West, as well as altering her makeup and other facial features. Cukor also suggested that the studio cast Jack Haley, on loan from 20th Century Fox, as the Tin Man. Cukor spent many hours coaching Vivien Leigh and Olivia de Havilland before the start of filming "Wind", but Clark Gable resisted his efforts to get him to master a Southern accent. However, despite rumors about Gable being uncomfortable with Cukor on the set, nothing in the internal memos of David O. Selznick indicates or suggests that Clark Gable played any role in Cukor's dismissal from the film. Rather, they show Selznick's mounting dissatisfaction with Cukor's slow pace and quality of work. From a private letter from journalist Susan Myrick to Margaret Mitchell in February 1939: "George [Cukor] finally told me all about it. He hated [leaving the production] very much he said but he could not do otherwise. In effect he said he is an honest craftsman and he cannot do a job unless he knows it is a good job and he feels the present job is not right. For days, he told me he has looked at the rushes and felt he was failing...the things did not click as it should. Gradually he became convinced that the script was the trouble...So George just told David he would not work any longer if the script was not better and he wanted the [Sidney] Howard script back...he would not let his name go out over a lousy picture...and bull-headed David said 'OK get out!'" Selznick had already been unhappy with Cukor ("a very expensive luxury") for not being more receptive to directing other Selznick assignments, even though Cukor had remained on salary since early 1937; and in a confidential memo written in September 1938, four months before principal photography began, Selznick flirted with the idea of replacing him with Victor Fleming. "I think the biggest black mark against our management to date is the Cukor situation and we can no longer be sentimental about it... We are a business concern and not patrons of the arts." Cukor was relieved of his duties, but he continued to work with Leigh and Olivia de Havilland off the set. Various rumors about the reasons behind his dismissal circulated throughout Hollywood. 
Selznick's friendship with Cukor had crumbled slightly when the director refused other assignments, including "A Star is Born" (1937) and "Intermezzo" (1939). Given that Gable and Cukor had worked together before (on "Manhattan Melodrama", 1934) and Gable had no objection to working with him then, and given Selznick's desperation to get Gable for Rhett Butler, if Gable had any objections to Cukor, certainly they would have been expressed before he signed his contract for the film. Yet, writer Gore Vidal, in his autobiography "Point to Point Navigation", recounted that Gable demanded that Cukor be fired from "Wind" because, according to Cukor, the young Gable had been a male hustler and Cukor had been one of his johns. This has been confirmed by Hollywood biographer E.J. Fleming, who has recounted that, during a particularly difficult scene, Gable erupted publicly, screaming: "I can't go on with this picture. I won't be directed by a fairy. I have to work with a real man." Cukor's dismissal from "Wind" freed him to direct "The Women" (1939), which has an all-female cast, followed by "The Philadelphia Story" (1940). He also directed Greta Garbo, another of his favorite actresses, in "Two-Faced Woman" (1941), her last film before she retired from the screen. In 1942, at the age of 43, Cukor enlisted in the Signal Corps. Following basic training at Fort Monmouth, he was assigned to the old Paramount studios in Astoria, Queens (where he had directed three films in the early 1930s), although he was permitted to lodge at the St. Regis Hotel in Manhattan. Working with Irwin Shaw, John Cheever and William Saroyan, among others, Cukor produced training and instructional films for army personnel. Because he lacked an officer's commission, he found it difficult to give orders and directions to his superiors. Despite his efforts to rise above the rank of private (he even called upon Frank Capra to intercede on his behalf), he never achieved officer's status or any commendations during his six months of service. In later years, Cukor suspected that his homosexuality had prevented him from receiving any advancement or honors, although rumors to that effect could not be confirmed. The remainder of the decade was a series of hits and misses for Cukor. Both "Two-Faced Woman" and "Her Cardboard Lover" (1942) were commercial failures. More successful were "A Woman's Face" (1941) with Joan Crawford and "Gaslight" (1944) with Ingrid Bergman and Charles Boyer. During this era, Cukor forged an alliance with screenwriters Garson Kanin and Ruth Gordon, who had met in Cukor's home in 1939 and married three years later. Over the course of seven years, the trio collaborated on seven films, including "A Double Life" (1947) starring Ronald Colman, "Adam's Rib" (1949), "Born Yesterday" (1950), "The Marrying Kind" (1952), and "It Should Happen to You" (1954), the last four featuring Judy Holliday, another Cukor favorite, who won the Academy Award for Best Actress for "Born Yesterday". In December 1952, Cukor was approached by Sid Luft, who proposed that the director helm a musical remake of "A Star is Born" (1937) with his then-wife Judy Garland in the lead role. Cukor had declined to direct the earlier film because it was too similar to his own "What Price Hollywood?" (1932), but the opportunity to direct his first Technicolor film and first musical, and to work with screenwriter Moss Hart and especially Garland, appealed to him, and he accepted. Getting the updated "A Star Is Born" (1954) to the screen proved to be a challenge. 
Cukor wanted Cary Grant for the male lead and went so far as to read the entire script with him, but Grant, while agreeing it was the role of a lifetime, steadfastly refused to do it, and Cukor never forgave him. The director then suggested either Humphrey Bogart or Frank Sinatra tackle the part, but Jack L. Warner rejected both. Stewart Granger was the front runner for a period of time, but he backed out when he was unable to adjust to Cukor's habit of acting out scenes as a form of direction. James Mason eventually was contracted, and filming began on October 12, 1953. As the months passed, Cukor was forced to deal not only with constant script changes but a very unstable Garland, who was plagued by chemical and alcohol dependencies, extreme weight fluctuations, and real and imagined illnesses. In March 1954, a rough cut still missing several musical numbers was assembled, and Cukor had mixed feelings about it. When the last scene finally was filmed in the early morning hours of July 28, 1954, Cukor already had departed the production and was unwinding in Europe. The first preview the following month ran 210 minutes and, despite ecstatic feedback from the audience, Cukor and editor Folmar Blangsted trimmed it to 182 minutes for its New York premiere in October. The reviews were the best of Cukor's career, but Warner executives, concerned the running time would limit the number of daily showings, made drastic cuts without Cukor, who had departed for Pakistan to scout locations for the epic "Bhowani Junction" in 1954-1955. At its final running time of 154 minutes, the film had lost musical numbers and crucial dramatic scenes, and Cukor called it "very painful." He was not included in the film's six Oscar nominations. Over the next ten years, Cukor directed a handful of films with varying success. "Les Girls" (1957) won the Golden Globe Award for Best Motion Picture – Musical or Comedy, and "Wild Is the Wind" (also 1957) earned Oscar nominations for Anna Magnani and Anthony Quinn, but neither "Heller in Pink Tights" nor "Let's Make Love" (both 1960) were box-office hits. Another project during this period was the ill-fated "Something's Got to Give", an updated remake of the comedy "My Favorite Wife" (1940). Cukor liked leading lady Marilyn Monroe but found it difficult to deal with her erratic work habits, frequent absences from the set, and the constant presence of Monroe's acting coach Paula Strasberg. After 32 days of shooting, the director had only 7½ minutes of usable film. Then Monroe travelled to New York to appear at a birthday celebration for President John F. Kennedy at Madison Square Garden, where she serenaded Kennedy. Studio documents released after Monroe's death confirmed that her appearance at the political fundraising event was approved by Fox executives. The production came to a halt when Cukor had filmed every scene not involving Monroe and the actress remained unavailable. 20th Century Fox executive Peter Levathes fired her and hired Lee Remick to replace her, prompting co-star Dean Martin to quit because his contract guaranteed he would be playing opposite Monroe. With the production already $2 million over budget and everyone back at the starting gate, the studio pulled the plug on the project. Less than two months later, Monroe was found dead in her home. Two years later, Cukor achieved one of his greatest successes with "My Fair Lady" (1964). 
Throughout filming there were mounting tensions between the director and designer Cecil Beaton, but Cukor was thrilled with leading lady Audrey Hepburn, although the crew was less enchanted with her diva-like demands. Although several reviews were critical of the film – Pauline Kael said it "staggers along" and Stanley Kauffmann thought Cukor's direction was like "a rich gravy poured over everything, not remotely as delicately rich as in the Asquith-Howard 1937 [sic] "Pygmalion"" — the film was a box office hit which won him the Academy Award for Best Director, the Golden Globe Award for Best Director, and the Directors Guild of America Award after having been nominated for each several times. Following "My Fair Lady", Cukor became less active. He directed Maggie Smith in "Travels with My Aunt" (1972) and helmed the critical and commercial flop "The Blue Bird" (1976), the first joint Soviet-American production. He reunited twice with Katharine Hepburn for the television movies "Love Among the Ruins" (1975) and "The Corn Is Green" (1979). He directed "Rich and Famous" (1981), his final film, with Jacqueline Bisset and Candice Bergen, at the age of 82. In 1976, Cukor was awarded the George Eastman Award, given by George Eastman House for distinguished contribution to the art of film. It was an open secret in Hollywood that Cukor was gay, at a time when society was against it, although he was discreet about his sexual orientation and "never carried it as a pin on his lapel," as producer Joseph L. Mankiewicz put it. He was a celebrated "bon vivant" whose luxurious home was the site of weekly Sunday afternoon parties attended by closeted celebrities and the attractive young men they met in bars and gyms and brought with them. At least once, in the midst of his reign at MGM, he was arrested on vice charges, but studio executives managed to get the charges dropped and all records of it expunged, and the incident was never publicized by the press. In the late 1950s, Cukor became involved with a considerably younger man named George Towers. He financed his education at the Los Angeles State College of Applied Arts and Sciences and the University of Southern California, from which Towers graduated with a law degree in 1967. That fall Towers married a woman, and his relationship with Cukor evolved into one of father and son, and for the remainder of Cukor's life the two remained very close. By the mid-1930s, Cukor was not only established as a prominent director but, socially, as an unofficial head of Hollywood's gay subculture. His home, redecorated in 1935 by gay actor-turned-interior designer William Haines with gardens designed by Florence Yoch and Lucile Council, was the scene of many gatherings for the industry's homosexuals. The close-knit group reputedly included Haines and his partner Jimmie Shields, writer W. Somerset Maugham, director James Vincent, screenwriter Rowland Leigh, costume designers Orry-Kelly and Robert Le Maire, and actors John Darrow, Anderson Lawler, Grady Sutton, Robert Seiter, and Tom Douglas. Frank Horn, secretary to Cary Grant, was also a frequent guest. Cukor's friends were of paramount importance to him and he kept his home filled with their photographs. 
Regular attendees at his soirées included Katharine Hepburn and Spencer Tracy, Joan Crawford and Douglas Fairbanks, Jr., Lauren Bacall and Humphrey Bogart, Claudette Colbert, Marlene Dietrich, Laurence Olivier and Vivien Leigh, actor Richard Cromwell, Stanley Holloway, Judy Garland, Gene Tierney, Noël Coward, Cole Porter, director James Whale, costume designer Edith Head, and Norma Shearer, especially after the death of her first husband Irving Thalberg. He often entertained literary figures like Sinclair Lewis, Theodore Dreiser, Hugh Walpole, Aldous Huxley and Ferenc Molnár. Frances Goldwyn, second wife of studio mogul Sam Goldwyn, long considered Cukor to be the love of her life, although their relationship remained platonic. According to biographer A. Scott Berg, Frances even arranged for Cukor's burial to be adjacent to her own plot at Forest Lawn Memorial Park Cemetery. The PBS series "American Masters" produced a comprehensive documentary about his life and work titled "On Cukor", directed by Robert Trachtenberg, in 2000. Cukor died of a heart attack on January 24, 1983, and was interred in Grave D, Little Garden of Constancy, Garden of Memory (private), Forest Lawn Memorial Park (Glendale), California. Records in probate court indicated his net worth at the time of his death was $2,377,720. In 2013, The Film Society of Lincoln Center presented a comprehensive weeks-long retrospective of his work entitled "The Discreet Charm of George Cukor." In 2019, Cukor's film "Gaslight" was selected by the Library of Congress for preservation in the National Film Registry for being "culturally, historically, or aesthetically significant".
https://en.wikipedia.org/wiki?curid=12754
Gas mask A gas mask is a mask used to protect the wearer from inhaling airborne pollutants and toxic gases. The mask forms a sealed cover over the nose and mouth, but may also cover the eyes and other vulnerable soft tissues of the face. Most gas masks are also respirators, though the word "gas mask" is often used to refer to military equipment (such as a field protective mask). The gas mask protects the user only from inhaling or ingesting an agent and from contact through the eyes (many agents act through eye contact). Most combined gas mask filters will last around 8 hours in a nuclear biological chemical (NBC) situation. Chemical-specific filters can last up to 20 hours in an NBC situation. Airborne toxic materials may be gaseous (for example, mustard gas and chlorine gas) or particulates (such as biological agents). Many filters include protection from both types. Gas masks are used in construction to protect against welding fumes, in demolition to protect against asbestos or other hazardous particles, and in the chemical industry when handling hazardous materials, as in making repairs to leaking equipment or cleaning up after spills; workers are usually issued gas masks as a precaution against leaks. During demonstrations and protests where tear gas or CS gas is employed by riot police, gas masks are commonly used by police and demonstrators alike. Aside from serving their functional purposes, gas masks are also used as emblems in industrial music, most notably in the drum and bass subgenre neurofunk. Gas masks are also worn by graffiti taggers, because the mask protects them from the paint canister's toxic fumes, and by urban explorers venturing into environments where hazardous materials, such as asbestos, may be present. The first gas masks mostly used circular lenses made either of glass, mica or cellulose acetate. With the exception of the latter, these materials were quite brittle and needed frequent replacement. Later on, the Triplex lens style (two layers of glass and one layer of cellulose acetate in between) became more popular, and alongside the simpler cellulose acetate they became the standard into the '30s. Panoramic lenses were not popular until the '30s, but there are some examples of those being used even during the war (Austro-Hungarian 15M). Later, polycarbonate started being used for its strength. Some have one or two filters screwed (via inlets) onto the gas mask, while others have a large "coffee can" filter connected to the gas mask by a hose; the latter arrangement is sometimes confused with an "air-supplied" respirator, in which an alternate supply of fresh air (such as oxygen tanks) is delivered. Absorption is the process of being drawn into a (usually larger) body or substrate, and adsorption is the process of deposition upon a surface. This can be used to remove both particulate and gaseous hazards. Although some form of reaction may take place, it is not necessary; the method may work by attractive charges. For example, if the target particles are positively charged, a negatively charged substrate may be used. Examples of substrates include activated carbon and zeolites. This effect can be very simple and highly effective, for example using a damp cloth to cover the mouth and nose while escaping a fire. While this method can be effective at trapping particulates produced by combustion, it does not filter out harmful gases which may be toxic or which displace the oxygen required for survival. 
Gas masks have a limited useful lifespan that is related to the absorbent capacity of the filter. Once the filter has been saturated with hazardous chemicals, it ceases to provide protection and the user may be injured. Most gas masks use sealing caps over the air intake and are stored in vacuum-sealed bags to prevent the filter from degrading before use, but the protective abilities also degrade as the filter ages or if it is exposed to moisture and heat. Very old unused gas mask filters from World War II may not be effective at all in protecting the user, and can even potentially cause harm to the user due to long-term changes in the filter's chemical composition. Some World War II gas masks contained chrysotile asbestos or crocidolite asbestos in their filters. It is not known for certain how long these materials were used in filters. Breathing blue asbestos in the factories resulted in the death of 10 percent of the workforce due to pleural and peritoneal mesothelioma. This rate was between 2.5 and 3.2 times the normal incidence of lung or respiratory cancers. Many stories have originated from various Russian gas masks and their filters that are now common in surplus stores; the GP-5 was often considered to have an asbestos filter. Although the filter is made so that the asbestos fibres cannot be breathed in if the filter layer is not damaged, these expired filters and others should be avoided because of the health risk. All Russian Cold War period gas mask filters contain asbestos and should be avoided. Modern gas masks are quite safe and do not use asbestos, but care is still needed when using them. Typically, masks using 40 mm connections are a more recent design. Rubber degrades with time, so new in-box "modern type" masks can be cracked and leak. Also, the US C2 canister (black) was shown to contain hexavalent chromium: the studies by the U.S. Army Chemical Corps showed that the levels in the filter were acceptable, but advised caution in its use, as hexavalent chromium is a carcinogen. The filter is selected according to the toxic compound. Each filter type protects against a particular hazard and is color-coded accordingly. Particle filters are often included, because in many cases the hazardous materials are in the form of mist, which is captured by the particle filter before it reaches the chemical adsorber. In Europe and in jurisdictions with similar rules, such as Russia and Australia, filter types are given suffix numbers to indicate their capacity: for non-particle hazards, class "1" is assumed if no number is given, and class "2" indicates a higher capacity. For particle filtration (P), a class from 1 to 3 is always given with the letter. In the US, only the particle part is further classified by NIOSH air filtration ratings. A filter type that can protect against multiple hazards is notated with the European symbols concatenated with each other. Examples include ABEK, ABEK-P3, and ABEK-HgP3. A2B2E2K2-P3 is the highest rating of filter available. An entirely different "multi/CBRN" filter class with an olive color is used in the US. Filtration may be aided with an air pump to improve wearer comfort. Filtration of air is only possible if there is sufficient oxygen in the first place. Thus, when handling asphyxiants, or when ventilation is poor or the hazards are unknown, filtration is not possible and air must be supplied (with an SCBA system) from a pressurized bottle as in scuba diving. 
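As a rough illustration of the European filter-designation scheme just described (hazard letters, optional capacity digits, and a P-number for particle filtration), the short Python sketch below parses markings such as "ABEK-HgP3" or "A2B2E2K2-P3" into their components. This is a minimal sketch under stated assumptions: the hazard descriptions in the mapping and the function name are illustrative, not taken from any standard or library.

import re

# Illustrative, assumed mapping of European filter-type letters to hazard
# families; simplified for this sketch and not an authoritative table.
HAZARD_TYPES = {
    "A": "organic gases and vapours",
    "B": "inorganic gases and vapours",
    "E": "acid gases",
    "K": "ammonia and ammonia derivatives",
    "Hg": "mercury vapour",
}

def parse_filter_code(code: str) -> dict:
    """Split a marking such as 'A2B2E2K2-P3' or 'ABEK-HgP3' into its parts.

    A gas-type letter with no digit is treated as class 1 (the default for
    non-particle hazards); the particle rating 'P' always carries a class 1-3.
    """
    gases = {}
    # Match the multi-letter symbol 'Hg' before the single letters, each
    # optionally followed by a capacity digit.
    for symbol, level in re.findall(r"(Hg|[ABEK])(\d?)", code):
        gases[symbol] = (int(level) if level else 1, HAZARD_TYPES[symbol])
    particle = re.search(r"P(\d)", code)
    return {
        "gases": gases,
        "particle_class": int(particle.group(1)) if particle else None,
    }

if __name__ == "__main__":
    for marking in ("ABEK", "ABEK-P3", "ABEK-HgP3", "A2B2E2K2-P3"):
        print(marking, "->", parse_filter_code(marking))

Run on the examples mentioned above, "A2B2E2K2-P3" parses to class-2 protection for each of the four gas families plus a class-3 particle filter, consistent with its description in the text as the highest available rating.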
A modern mask is typically constructed of an elastic polymer in various sizes. It is fitted with various adjustable straps which may be tightened to secure a good fit. Crucially, it is connected to a filter cartridge near the mouth either directly, or via a flexible hose. Some models contain drinking tubes which may be connected to a water bottle. Corrective lens inserts are also available for users who require them. Masks are typically tested for fit before use. After a mask is fitted, it is often tested by various challenge agents. Isoamyl acetate, a synthetic banana flavourant, and camphor are often used as innocuous challenge agents. In the military, tear gases such as CN, CS, and stannic chloride in a chamber may be used to give the users confidence in the efficiency of the mask. Another filtration approach relies on the fact that substances harmful to humans are usually more reactive than air. This method of separation uses some form of generally reactive substance (for example an acid) coating or supported by some solid material. Examples include synthetic resins. These can be created with different groups of atoms (usually called functional groups) that have different properties. Thus a resin can be tailored to a particular toxic group. When the harmful, more reactive substance comes in contact with the resin, it will bond to it, removing it from the air stream. It may also exchange with a less harmful substance at this site. Though it was crude, the hypo helmet was a stopgap measure for British troops in the trenches that offered at least some protection during a gas attack. As the months passed and poison gas was used more often, more sophisticated gas masks were developed and introduced. There are two main difficulties with gas mask design. According to "Popular Mechanics", "The common sponge was used in ancient Greece as a gas mask..." In 1785, Jean-François Pilâtre de Rozier invented a respirator. Primitive respirator examples were used by miners and introduced by Alexander von Humboldt as early as 1799, when he worked as a mining engineer in Prussia. The forerunner to the modern gas mask, invented in 1847 by Lewis P. Haslett, was a device that contained elements allowing breathing through a nose and mouthpiece, inhalation of air through a bulb-shaped filter, and a vent to exhale air back into the atmosphere. According to "First Facts", the "gas mask resembling the modern type was patented by Lewis Phectic Haslett of Louisville, Kentucky who received a patent on June 12, 1849." U.S. patent #6,529, issued to Haslett, described the first "Inhaler or Lung Protector" that filtered dust from the air. Early versions were constructed by the Scottish chemist John Stenhouse in 1854 and the physicist John Tyndall in the 1870s. Another early design was the "Safety Hood and Smoke Protector" invented by Garrett Morgan in 1912, and patented in 1914. It was a simple device consisting of a cotton hood with two hoses which hung down to the floor, allowing the wearer to breathe the safer air found there. In addition, moist sponges were inserted at the end of the hoses in order to better filter the air. This was later modified to include its own air supply, leading to World War I era gas masks. The First World War brought about the first need for mass-produced gas masks on both sides because of extensive use of chemical weapons. The German army successfully used poison gas for the first time against Allied troops at the Second Battle of Ypres, Belgium, on 22 April 1915. As an immediate response, cotton wool wrapped in muslin was issued to the troops by 1 May. 
This was followed by the Black Veil Respirator, invented by John Scott Haldane, a cotton pad soaked in an absorbent solution that was secured over the mouth using black cotton veiling. Seeking to improve on the Black Veil respirator, Cluny Macpherson created a mask made of chemical-absorbing fabric that fitted over the entire head: a canvas hood treated with chlorine-absorbing chemicals and fitted with a transparent mica eyepiece. Macpherson presented his idea to the British War Office Anti-Gas Department on 10 May 1915, with prototypes being developed soon after. The design was adopted by the British Army and introduced as the British Smoke Hood in June 1915; Macpherson was appointed to the War Office Committee for Protection against Poisonous Gases. More elaborate sorbent compounds were added later to further iterations of his helmet (the PH helmet), to defeat other respiratory poison gases in use such as phosgene, diphosgene and chloropicrin. In summer and autumn 1915, Edward Harrison, Bertram Lambert and John Sadd developed the Large Box Respirator. This canister gas mask had a tin can containing the absorbent materials, connected to the mask by a hose, and began to be issued in February 1916. A compact version, the Small Box Respirator, was made universal issue from August 1916. In the first gas masks of World War I, it was initially found that wood charcoal was a good absorbent of poison gases. Around 1918, it was found that charcoals made from the shells and seeds of various fruits and nuts such as coconuts, chestnuts, horse-chestnuts, and peach stones performed much better than wood charcoal. These waste materials were collected from the public in recycling programs to assist the war effort. The first effective filtering activated charcoal gas mask in the world was invented in 1915 by Russian chemist Nikolay Zelinsky. Also in World War I, since dogs were frequently used on the front lines, a special type of gas mask was developed that dogs were trained to wear. Other gas masks were developed during World War I and afterwards for horses in the various mounted units that operated near the front lines. In America, thousands of gas masks were produced for American as well as Allied troops. Mine Safety Appliances was a chief producer. This mask was later used widely in industry. The British Respirator, Anti-Gas (Light), was developed in 1943. It was made of plastic and rubber-like material that greatly reduced the weight and bulk compared to World War I gas masks, and it fitted the user's face more snugly and comfortably. The main improvement was replacing the separate hose-connected filter canister with a filter canister screwed onto the side of the gas mask, which could be replaced easily. Also, it had replaceable plastic lenses. Gas mask development since has mirrored the development of chemical agents in warfare, filling the need to protect against ever more deadly threats, biological weapons, and radioactive dust in the nuclear era. However, for agents that cause harm through contact or penetration of the skin, such as blister agents or nerve agents, a gas mask alone is not sufficient protection, and full protective clothing must be worn in addition to protect from contact with the atmosphere. 
For reasons of civil defense and personal protection, individuals often buy gas masks because they believe that they protect against the harmful effects of an attack with nuclear, biological, or chemical (NBC) agents; this is only partially true, as gas masks protect only against respiratory absorption. Most military gas masks are designed to be capable of protection against all NBC agents, but they can have filter canisters proof against those agents (heavier) or only against riot control agents and smoke (lighter and often used for training purposes); likewise, there are lightweight masks solely for use against riot control agents and not for NBC situations. Although thorough training and the availability of gas masks and other protective equipment can nullify the casualty-causing effects of an attack by chemical agents, troops who are forced to operate in full protective gear are less efficient in completing tasks, tire easily, and may be affected psychologically by the threat of attack by those weapons. During the Cold War, it was seen as inevitable that there would be a constant NBC threat on the battlefield, and so troops needed protection in which they could remain fully functional; thus, protective gear and especially gas masks have evolved to incorporate innovations in user comfort and in compatibility with other equipment (from drinking devices to artificial respiration tubes to communications systems). The gas mask has thus now arrived at a "fourth generation" of development. During the Iran–Iraq War (1980–88), Iraq developed its chemical weapons program with the help of European countries such as Germany and France and used those weapons on a large scale against Iranians and Iraqi Kurds. Iran was unprepared for chemical warfare. In 1984, Iran received gas masks from the Republic of Korea and East Germany, but the Korean masks were not suited to the faces of non-East Asian people, their filters lasted for only 15 minutes, and the 5,000 masks bought from East Germany proved to be not gas masks but spray-painting goggles. As late as 1986, Iranian diplomats still traveled in Europe to buy activated charcoal and filter models in order to produce defensive gear domestically. In April 1988, Iran started domestic production of gas masks at the Iran Yasa factories. Most civilians learned how to use gas masks through the civil defense department, but children received most of their gas mask education in school drills. Schools would implement gas mask education and training after the outbreak of a war and would strictly enforce the compulsory carrying of gas masks at all times. Gas mask and air raid drills were closely related, and children would be made to wear gas masks in everyday activities, including gymnastics. The wearing of gas masks in the classroom was especially difficult for teachers, as they had trouble distinguishing one child from another; the gas masks became as uniform as the students' uniforms. Other civilians learned the use of a gas mask through posters, pamphlets, and radio lectures, but children learned through cartoons and rhymes such as "coughs and sneezes spread diseases".
George Frideric Handel George Frideric (or Frederick) Handel (born Georg Friedrich Händel; 23 February 1685 (O.S.) [(N.S.) 5 March] – 14 April 1759) was a German, later British, Baroque composer who spent the bulk of his career in London, becoming well known for his operas, oratorios, anthems, concerti grossi and organ concertos. Handel received important training in Halle and worked as a composer in Hamburg and Italy before settling in London in 1712; he became a naturalised British subject in 1727. He was strongly influenced both by the great composers of the Italian Baroque and by the middle-German polyphonic choral tradition. He would become a major influence on composers of the Classical era such as Mozart and Beethoven. Within fifteen years, Handel had started three commercial opera companies to supply the English nobility with Italian opera. Musicologist Winton Dean writes that his operas show that "Handel was not only a great composer; he was a dramatic genius of the first order." As "Alexander's Feast" (1736) was well received, Handel made a transition to English choral works. After his success with "Messiah" (1742) he never composed an Italian opera again. Almost blind, and having lived in England for nearly fifty years, he died in 1759, a respected and rich man. His funeral was given full state honours, and he was buried in Westminster Abbey in London. Handel is regarded as one of the greatest composers of the Baroque era, with works such as "Messiah", "Water Music", and "Music for the Royal Fireworks" remaining steadfastly popular. One of his four coronation anthems, "Zadok the Priest" (1727), composed for the coronation of George II, has been performed at every subsequent British coronation, traditionally during the sovereign's anointing. Another of his English oratorios, "Solomon" (1748), has also remained popular, with the Sinfonia that opens act 3 (known more commonly as "The Arrival of the Queen of Sheba") featuring at the 2012 London Olympics opening ceremony. Handel composed more than forty opere serie in over thirty years, and since the late 1960s, with the revival of baroque music and historically informed musical performance, interest in Handel's operas has grown. Handel was born in 1685 in Halle, Duchy of Magdeburg (then part of Brandenburg-Prussia), to Georg Händel and Dorothea Taust. His father, aged sixty-three when George Frideric was born, was an eminent barber-surgeon who served the court of Saxe-Weissenfels and the Margraviate of Brandenburg. Georg Händel (senior) was the son of a coppersmith, Valentin Händel (1582–1636), who had emigrated from Eisleben in 1608 with his first wife Anna Belching, the daughter of a master coppersmith. They were Protestants and chose reliably Protestant Saxony over Silesia, a Habsburg possession, as religious tensions mounted in the years before the Thirty Years' War. Halle was a relatively prosperous city, home of a salt-mining industry and center of trade (and member of the Hanseatic League). A Margrave of Brandenburg became the administrator of the archiepiscopal territories of Magdeburg when they converted to Protestantism, and by the early 17th century he held his court in Halle, which attracted renowned musicians. Even the smaller churches all had "able organists and fair choirs", and the humanities and letters thrived (Shakespeare was performed in the theaters early in the 17th century). The Thirty Years' War brought extensive destruction to Halle, and by the 1680s it was impoverished. 
However, since the middle of the war the city had been under the administration of the Duke of Saxony, and soon after the end of the war he would bring musicians trained in Dresden to his court in Weissenfels. The arts and music, however, flourished only among the higher strata (not only in Halle but throughout Germany), of which Handel's family was not a member. Georg Händel (senior) was born at the beginning of the war, and was apprenticed to a barber in Halle at the age of 14, after his father died. When he was 20, he married the widow of the official barber-surgeon of a suburb of Halle, inheriting his practice. With this, Georg determinedly began the process of becoming self-made; by dint of his "conservative, steady, thrifty, unadventurous" lifestyle, he guided the five children he had with Anna who reached adulthood into the medical profession (except his youngest daughter, who married a government official). Anna died in 1682. Within a year Georg married again, this time to the daughter of a Lutheran minister, Pastor Georg Taust of the Church of St. Bartholomew in Giebichenstein, who himself came from a long line of Lutheran pastors. Handel was the second child of this marriage; the first son was stillborn. Two younger sisters were born after the birth of George Frideric: Dorothea Sophia, born 6 October 1687, and Johanna Christiana, born 10 January 1690. Early in his life Handel is reported to have attended the "Gymnasium" in Halle, where the headmaster was reputed to be an ardent musician. Whether Handel remained there, and if so for how long, is unknown, but many biographers suggest that he was withdrawn from school by his father, based on the characterization of him by Handel's first biographer, John Mainwaring. Mainwaring is the source for almost all of the information (little as it is) about Handel's childhood, and much of that information came from J.C. Smith, Jr., Handel's confidant and copyist. Whether it came from Smith or elsewhere, Mainwaring frequently relates misinformation. It is from Mainwaring that the portrait comes of Handel's father as implacably opposed to any musical education. Mainwaring writes that Georg Händel was "alarmed" at Handel's very early propensity for music and "took every measure to oppose it", including forbidding any musical instrument in the house and preventing Handel from going to any house where they might be found. This did nothing to dampen young Handel's inclination; in fact, it did the reverse. Mainwaring tells the story of Handel's secret attic spinet: Handel "found means to get a little clavichord privately convey'd to a room at the top of the house. To this room he constantly stole when the family was asleep". Although both John Hawkins and Charles Burney credited this tale, Schoelcher found it nearly "incredible" and a feat of "poetic imagination", and Lang considers it one of the unproven "romantic stories" that surrounded Handel's childhood. But Handel had to have had some experience with the keyboard to have made the impression in Weissenfels that resulted in his receiving formal musical training. Sometime between the ages of seven and nine, Handel accompanied his father to Weissenfels, where he came under the notice of one whom he thereafter regarded throughout his life as his benefactor, Duke Johann Adolf I. Somehow Handel made his way to the court organ in the palace chapel of the Holy Trinity, where he surprised everyone with his playing. 
Overhearing this performance and noting the youth of the performer caused the Duke, whose suggestions were not to be disregarded, to recommend to Georg Händel that Handel be given musical instruction. Handel's father engaged the organist at the Halle parish church, the young Friedrich Wilhelm Zachow, to instruct Handel. Zachow would be the only teacher that Handel ever had. Because of his church employment, Zachow was an organist "of the old school", reveling in fugues, canons and counterpoint. But he was also familiar with developments in music across Europe and his own compositions "embraced the new concerted, dramatic style". When Zachow discovered the talent of Handel, he introduced him "to a vast collection of German and Italian music, which he possessed, sacred and profane, vocal and instrumental compositions of different schools, different styles, and of every master". Many traits considered "Handelian" can be traced back to Zachow's music. At the same time Handel continued practice on the harpsichord, learned violin and organ, but according to Burney his special affection was for the "hautbois" (oboe). Schoelcher speculates that his youthful devotion to the instrument explains the large number of pieces he composed for oboe. With respect to instruction in composition, in addition to having Handel apply himself to traditional fugue and cantus firmus work, Zachow, recognizing Handel's precocious talents, systematically introduced Handel to the variety of styles and masterworks contained in his extensive library. He did this by requiring Handel to copy selected scores. "I used to write like the devil in those days", Handel recalled much later. Much of this copying was entered into a notebook that Handel maintained for the rest of his life. Although it has since disappeared, the notebook has been sufficiently described to understand what pieces Zachow wished Handel to study. Among the chief composers represented in this exercise book were Johann Krieger, an "old master" in the fugue and prominent organ composer, Johann Caspar Kerll, a representative of the "southern style" after his teacher Frescobaldi and imitated later by Handel, Johann Jakob Froberger, an "internationalist" also closely studied by Buxtehude and Bach, and Georg Muffat, whose amalgam of French and Italian styles and his synthesis of musical forms influenced Handel. Mainwaring writes that during this time Zachow had begun to have Handel assume some of his church duties. Zachow, Mainwaring asserts, was "often" absent, "from his love of company, and a chearful glass", and Handel therefore performed on organ frequently. What is more, according to Mainwaring, Handel began composing, at the age of nine, church services for voice and instruments "and from that time actually did compose a service every week for three years successively." Mainwaring ends this chapter of Handel's life by concluding that three or four years had been enough to allow Handel to surpass Zachow, and Handel had become "impatient for another situation"; "Berlin was the place agreed upon." Carelessness with dates or sequences (and possibly imaginative interpretation by Mainwaring) makes this period confused. Handel's father died on 11 February 1697. It was German custom for friends and family to compose funeral odes for a substantial burgher like Georg, and young Handel discharged his duty with a poem dated 18 February and signed with his name and (in deference to his father's wishes) "dedicated to the liberal arts." 
At the time Handel was studying either at Halle's Lutheran Gymnasium or the Latin School. Mainwaring has Handel traveling to Berlin the next year, 1698. The problem with Mainwaring as an authority for this date, however, is that he tells of how Handel's father communicated with the "king" during Handel's stay, declining the Court's offer to send Handel to Italy on a stipend, and that his father died "after his return from Berlin." But since Georg Händel died in 1697, either the date of the trip or Mainwaring's statements about Handel's father must be in error. Early biographers solved the problem by making the year of the trip 1696, then noting that at the age of 11 Handel would need a guardian, so they have Handel's father or a friend of the family accompany him, all the while puzzling over why the elder Handel, who wanted Handel to become a lawyer, would spend the sum to lead his son further into the temptation of music as a career. Schoelcher, for example, has Handel traveling to Berlin at 11, meeting both Bononcini and Attilio Ariosti in Berlin and then returning at the direction of his father. But Ariosti was not in Berlin before the death of Handel's father, and Handel could not have met Bononcini in Berlin before 1702. Modern biographers either accept the year as 1698, since most reliable older authorities agree with it, and discount what Mainwaring says about what took place during the trip, or assume that Mainwaring conflated two or more visits to Berlin, as he did with Handel's later trips to Venice. Perhaps to fulfill a promise to his father or simply because he saw himself as "dedicated to the liberal arts," on 10 February 1702 Handel matriculated at the University of Halle. That university had only recently been founded. In 1694 the Elector of Brandenburg, Frederick III (later King Frederick I of Prussia), created the school, largely to provide a lecture forum for the jurist Christian Thomasius, who had been expelled from Leipzig for his liberal views. Handel did not enroll in the faculty of law, although he almost certainly attended lectures. Thomasius was an intellectual and academic crusader who was the first German academic to lecture in German and also denounced witch trials. Lang believes that Thomasius instilled in Handel a "respect for the dignity and freedom of man's mind and the solemn majesty of the law," principles that would have drawn him to and kept him in England for half a century. There Handel also encountered the theologian and professor of Oriental languages August Hermann Francke, who was particularly solicitous of children, especially orphans. The orphanage Francke founded became a model for Germany, and it undoubtedly influenced Handel's own charitable impulse, as when he later assigned the rights of "Messiah" to London's Foundling Hospital. Shortly after commencing his university education, Handel (though Lutheran) on 13 March 1702 accepted the position of organist at the Calvinist Cathedral in Halle, the Domkirche, replacing J.C. Leporin, for whom he had acted as assistant. The position, a one-year probationary appointment, showed the foundation he had received from Zachow, for the office of church organist and cantor was a highly prestigious one. From it he received 5 thalers a year and lodgings in the run-down castle of Moritzburg. Around this same time Handel made the acquaintance of Telemann. Four years Handel's senior, Telemann was studying law in Leipzig and assisting cantor Johann Kuhnau (Bach's predecessor at the Thomaskirche there). 
Telemann recalled forty years later in an autobiography for Mattheson's "Grundlage": "The writing of the excellent Johann Kuhnau served as a model for me in fugue and counterpoint; but in fashioning melodic movements and examining them Handel and I were constantly occupied, frequently visiting each other as well as writing letters." Although Mainwaring records that Handel wrote weekly when assistant to Zachow, and although as probationary organist at the Domkirche part of his duty was to provide suitable music, no sacred compositions from his Halle period can now be identified. Mattheson, however, summarized his opinion of Handel's church cantatas written in Halle: "Handel in those days set very, very long arias and sheerly unending cantatas which, while not possessing the proper knack or correct taste, were perfect so far as harmony is concerned." Early chamber works do exist, but it is difficult to date any of them to Handel's time in Halle. Many historians until recently followed Chrysander and designated the six trio sonatas for two oboes and basso continuo as his first known compositions, supposedly written in 1696 (when Handel was 11). Lang doubts the dating, based on a handwritten date on a copy (1700) and stylistic considerations. Lang writes that the works "show thorough acquaintance with the distilled sonata style of the Corelli school" and are notable for "the formal security and the cleanness of the texture." Hogwood considers all of the oboe trio sonatas spurious and even suggests that some parts cannot be performed on oboe. That authentic manuscript sources do not exist and that Handel never recycled any material from these works make their authenticity doubtful. Other early chamber works were printed in Amsterdam in 1724 as opus 1, but it is impossible to tell which are early works in their original form, rather than later re-workings by Handel, a frequent practice of his. Handel's probationary appointment at the Domkirche expired in March 1703. By July Handel was in Hamburg. Since he left no explanation for the move, biographers have offered their own speculation. Burrows believes that the answer can be found by untangling Mainwaring's confused chronology of the trip to Berlin. Burrows dates this trip to 1702 or 1703 (after his father's death) and concludes that since Handel (through a "friend and relation" at the Berlin court) turned down Frederick's offer to subsidize his musical education in Italy (with the implicit understanding that he would become a court musician on his return), Handel was no longer able to expect preferment (whether as musician, lawyer or otherwise) within Brandenburg-Prussia. And since he was attracted to secular, dramatic music (by meeting the Italians Bononcini and Attilio Ariosti and through the influence of Telemann), Hamburg, a free city with an established opera company, was the logical choice. The question remains, however, why Handel rejected the King's offer, given that Italy was the center of opera. Lang suggests that, influenced by the teachings of Thomasius, Handel's character was such that he was unable to make himself subservient to anyone, even a king. Lang sees Handel as someone who could not accept class distinctions that required him to regard himself as a social inferior. "What Handel craved was personal freedom to raise himself out of his provincial milieu to a life of culture." Burrows notes that, like his father, Handel was able to accept royal (and aristocratic) favors without considering himself a court servant. 
And so given the embarrassed financial condition of his mother, Handel set off for Hamburg to obtain experience while supporting himself. In 1703 he accepted a position as violinist and harpsichordist in the orchestra of the Hamburg Oper am Gänsemarkt. There he met the composers Johann Mattheson, Christoph Graupner and Reinhard Keiser. His first two operas, "Almira" and "Nero", were produced in 1705. He produced two other operas, "Daphne" and "Florindo", in 1708. It is unclear whether Handel directed these performances. According to Mainwaring, in 1706 Handel travelled to Italy at the invitation of Ferdinando de' Medici. Other sources say Handel was invited by Gian Gastone de' Medici, whom Handel had met in 1703–1704 in Hamburg. De' Medici , who had a keen interest in opera, was trying to make Florence Italy's musical capital by attracting the leading talents of his day. In Italy Handel met librettist Antonio Salvi, with whom he later collaborated. Handel left for Rome and, since opera was (temporarily) banned in the Papal States, composed sacred music for the Roman clergy. His famous "Dixit Dominus" (1707) is from this era. He also composed cantatas in pastoral style for musical gatherings in the palaces of duchess Aurora Sanseverino (whom Mainwaring called "Donna Laura") one of the most influential patrons from the Kingdom of Naples, and cardinals Pietro Ottoboni, Benedetto Pamphili and Carlo Colonna. Two oratorios, "La resurrezione" and "Il trionfo del tempo", were produced in a private setting for Ruspoli and Ottoboni in 1709 and 1710, respectively. "Rodrigo", his first all-Italian opera, was produced in the Cocomero theatre in Florence in 1707. "Agrippina" was first produced in 1709 at Teatro San Giovanni Grisostomo in Venice, owned by the Grimanis. The opera, with a libretto by Cardinal Vincenzo Grimani, ran for 27 nights successively. The audience, thunderstruck with the grandeur and sublimity of his style, applauded for "Il caro Sassone" ("the dear Saxon" – referring to Handel's German origins). In 1710, Handel became "Kapellmeister" to German prince George, the Elector of Hanover, who in 1714 would become King George I of Great Britain and Ireland. He visited Anna Maria Luisa de' Medici and her husband in Düsseldorf on his way to London in 1710. With his opera "Rinaldo", based on "La Gerusalemme Liberata" by the Italian poet Torquato Tasso, Handel enjoyed great success, although it was composed quickly, with many borrowings from his older Italian works. This work contains one of Handel's favourite arias, "Cara sposa, amante cara", and the famous Lascia ch'io pianga. In 1712, Handel decided to settle permanently in England. In the summer of 1713 he lived at Mr Mathew Andrews' estate in Barn Elms, Surrey. He received a yearly income of £200 from Queen Anne after composing for her the "Utrecht Te Deum and Jubilate", first performed in 1713. One of his most important patrons was The 3rd Earl of Burlington and 4th Earl of Cork, a young and extremely wealthy member of an Anglo-Irish aristocratic family. While living in the mansion of Lord Burlington, Handel wrote "Amadigi di Gaula", a "magic" opera, about a damsel in distress, based on the tragedy by Antoine Houdar de la Motte. The conception of an opera as a coherent structure was slow to capture Handel's imagination and he composed no operas for five years. In July 1717 Handel's "Water Music" was performed more than three times on the River Thames for the King and his guests. 
It is said the compositions spurred reconciliation between Handel and the King, supposedly annoyed by the composer's abandonment of his Hanover post. In 1717 Handel became house composer at Cannons in Middlesex, where he laid the cornerstone for his future choral compositions in the "Chandos Anthems". Romain Rolland wrote that these anthems (or Psalms) stood in relation to Handel's oratorios, much the same way that the Italian cantatas stood to his operas: "splendid sketches of the more monumental works." Another work, which he wrote for The 1st Duke of Chandos, the owner of Cannons, was "Acis and Galatea": during Handel's lifetime it was his most performed work. Winton Dean wrote, "the music catches breath and disturbs the memory". In 1719 the Duke of Chandos became one of the composer's important patrons and main subscribers to his new opera company, the Royal Academy of Music, but his patronage declined after Chandos lost money in the South Sea bubble, which burst in 1720 in one of history's greatest financial cataclysms. Handel himself invested in South Sea stock in 1716, when prices were low and sold before 1720. In May 1719, The 1st Duke of Newcastle, the Lord Chamberlain, ordered Handel to look for new singers. Handel travelled to Dresden to attend the newly built opera. He saw "Teofane" by Antonio Lotti, and engaged members of the cast for the Royal Academy of Music, founded by a group of aristocrats to assure themselves a constant supply of baroque opera or opera seria. Handel may have invited John Smith, his fellow student in Halle, and his son Johann Christoph Schmidt, to become his secretary and amanuensis. By 1723 he had moved into a Georgian house at 25 Brook Street, which he rented for the rest of his life. This house, where he rehearsed, copied music and sold tickets, is now the Handel House Museum. During twelve months between 1724 and 1725, Handel wrote three outstanding and successful operas, "Giulio Cesare", "Tamerlano" and "Rodelinda". Handel's operas are filled with da capo arias, such as "Svegliatevi nel core". After composing "Silete venti", he concentrated on opera and stopped writing cantatas. "Scipio", from which the regimental slow march of the British Grenadier Guards is derived, was performed as a stopgap, waiting for the arrival of Faustina Bordoni. In 1727 Handel was commissioned to write four anthems for the Coronation ceremony of King George II. One of these, "Zadok the Priest", has been played at every British coronation ceremony since. In 1728 John Gay's "The Beggar's Opera", which made fun of the type of Italian opera Handel had popularised in London, premiered at Lincoln's Inn Fields Theatre and ran for 62 consecutive performances, the longest run in theatre history up to that time. After nine years the Royal Academy of Music ceased to function but Handel soon started a new company. The Queen's Theatre at the Haymarket (now Her Majesty's Theatre), established in 1705 by architect and playwright John Vanbrugh, quickly became an opera house. Between 1711 and 1739, more than 25 of Handel's operas premièred there. In 1729 Handel became joint manager of the theatre with John James Heidegger. Handel travelled to Italy to engage new singers and also composed seven more operas, among them the comic masterpiece "Partenope" and the "magic" opera "Orlando". After two commercially successful English oratorios "Esther" and "Deborah", he was able to invest again in the South Sea Company. 
Handel reworked his "Acis and Galatea" which then became his most successful work ever. Handel failed to compete with the Opera of the Nobility, who engaged musicians such as Johann Adolph Hasse, Nicolo Porpora and the famous castrato Farinelli. The strong support by Frederick, Prince of Wales caused conflicts in the royal family. In March 1734 Handel composed a wedding anthem "This is the day which the Lord hath made", and a serenata "Parnasso in Festa" for Anne, Princess Royal. Despite the problems the Opera of the Nobility was causing him at the time, Handel's neighbour in Brook Street, Mary Delany, reported on a party she invited Handel to at her house on 12 April 1734 where he was in good spirits:I had Lady Rich and her daughter, Lady Cath. Hanmer and her husband, Mr. and Mrs. Percival, Sir John Stanley and my brother, Mrs. Donellan, Strada [star soprano of Handel's operas] and Mr. Coot. Lord Shaftesbury begged of Mr. Percival to bring him, and being a profess'd friend of Mr. Handel (who was here also) was admitted; I never was so well entertained at an opera! Mr. Handel was in the best humour in the world, and played lessons and accompanied Strada and all the ladies that sang from seven o'clock till eleven. I gave them tea and coffee, and about half an hour after nine had a salver brought in of chocolate, mulled white wine and biscuits. Everybody was easy and seemed pleased. In 1733 the Earl of Essex received a letter with the following sentence: "Handel became so arbitrary a prince, that the Town murmurs." The board of chief investors expected Handel to retire when his contract ended, but Handel immediately looked for another theatre. In cooperation with John Rich he started his third company at Covent Garden Theatre. Rich was renowned for his spectacular productions. He suggested Handel use his small chorus and introduce the dancing of Marie Sallé, for whom Handel composed "Terpsicore". In 1735 he introduced organ concertos between the acts. For the first time Handel allowed Gioacchino Conti, who had no time to learn his part, to substitute arias. Financially, "Ariodante" was a failure, although he introduced ballet suites at the end of each act. "Alcina", his last opera with a magic content, and "Alexander's Feast or the Power of Music" based on John Dryden's "Alexander's Feast" starred Anna Maria Strada del Pò and John Beard. In April 1737, at age 52, Handel apparently suffered a stroke which disabled the use of four fingers on his right hand, preventing him from performing. In summer the disorder seemed at times to affect his understanding. Nobody expected that Handel would ever be able to perform again. But whether the affliction was rheumatism, a stroke or a nervous breakdown, he recovered remarkably quickly . To aid his recovery, Handel had travelled to Aachen, a spa in Germany. During six weeks he took long hot baths, and ended up playing the organ for a surprised audience. He wrote " Faramondo" in December 1737 and "Serse" in January 1738. "Deidamia", his last opera, a co-production with the Earl of Holderness, was performed three times in 1741. Handel gave up the opera business, while he enjoyed more success with his English oratorios. In 1738 he composed music for a musical clock with a pipe organ built by Charles Clay; it was bought by Gerrit Braamcamp and was in 2016 acquired by the Museum Speelklok in Utrecht. 
"Il trionfo del tempo e del disinganno", an allegory, Handel's first oratorio was composed in Italy in 1707, followed by "La resurrezione" in 1708 which uses material from the Bible. The circumstances of "Esther" and its first performance, possibly in 1718, are obscure. Another 12 years had passed when an act of piracy caused him to take up "Esther" once again. Three earlier performances aroused such interest that they naturally prompted the idea of introducing it to a larger public. Next came "Deborah", strongly coloured by the coronation anthems and "Athaliah", his first English Oratorio. In these three oratorios Handel laid the foundation for the traditional use of the chorus which marks his later oratorios. Handel became sure of himself, broader in his presentation, and more diverse in his composition. It is evident how much he learned from Arcangelo Corelli about writing for instruments, and from Alessandro Scarlatti about writing for the solo voice; but there is no single composer who taught him how to write for chorus. Handel tended more and more to replace Italian soloists by English ones. The most significant reason for this change was the dwindling financial returns from his operas. Thus a tradition was created for oratorios which was to govern their future performance. The performances were given without costumes and action; the singers appeared in their own clothes. In 1736 Handel produced "Alexander's Feast". John Beard appeared for the first time as one of Handel's principal singers and became Handel's permanent tenor soloist for the rest of Handel's life. The piece was a great success and it encouraged Handel to make the transition from writing Italian operas to English choral works. In "Saul", Handel was collaborating with Charles Jennens and experimenting with three trombones, a carillon and extra-large military kettledrums (from the Tower of London), to be sure "...it will be most excessive noisy". "Saul" and "Israel in Egypt" both from 1739 head the list of great, mature oratorios, in which the da capo aria became the exception and not the rule. "Israel in Egypt" consists of little else but choruses, borrowing from the "Funeral Anthem for Queen Caroline". In his next works Handel changed his course. In these works he laid greater stress on the effects of orchestra and soloists; the chorus retired into the background. "L'Allegro, il Penseroso ed il Moderato" has a rather diverting character; the work is light and fresh. During the summer of 1741, The 3rd Duke of Devonshire invited Handel to Dublin, capital of the Kingdom of Ireland, to give concerts for the benefit of local hospitals. His "Messiah" was first performed at the New Music Hall in Fishamble Street on 13 April 1742, with 26 boys and five men from the combined choirs of St Patrick's and Christ Church cathedrals participating. Handel secured a balance between soloists and chorus which he never surpassed. In 1747 Handel wrote his oratorio "Alexander Balus". This work was produced at Covent Garden Theatre in London, on 23 March 1748, and to the aria "Hark! hark! He strikes the golden lyre", Handel wrote the accompaniment for mandolin, harp, violin, viola, and violoncello. Another of his English oratorios, "Solomon", was first performed on 17 March 1749 at the Covent Garden Theatre. "Solomon" contains a short and lively instrumental passage for two oboes and strings in act 3, known as "The Arrival of the Queen of Sheba". The use of English soloists reached its height at the first performance of "Samson". 
The work is highly theatrical. The role of the chorus became increasingly important in his later oratorios. "Jephtha" was first performed on 26 February 1752; even though it was his last oratorio, it was no less a masterpiece than his earlier works. In 1749 Handel composed "Music for the Royal Fireworks"; 12,000 people attended the first performance. In 1750 he arranged a performance of "Messiah" to benefit the Foundling Hospital. The performance was considered a great success and was followed by annual concerts that continued throughout his life. In recognition of his patronage, Handel was made a governor of the Hospital the day after his initial concert. He bequeathed a copy of "Messiah" to the institution upon his death. His involvement with the Foundling Hospital is today commemorated with a permanent exhibition in London's Foundling Museum, which also holds the "Gerald Coke Handel Collection". In addition to the Foundling Hospital, Handel also gave to a charity that assisted impoverished musicians and their families. In August 1750, on a journey back from Germany to London, Handel was seriously injured in a carriage accident between The Hague and Haarlem in the Netherlands. In 1751 one eye started to fail. The cause was a cataract which was operated on by the great charlatan Chevalier Taylor. This did not improve his eyesight, but possibly made it worse. He was completely blind by 1752. He died in 1759 at home in Brook Street, at age 74. The last performance he attended was of "Messiah". Handel was buried in Westminster Abbey. More than three thousand mourners attended his funeral, which was given full state honours. Handel never married, and kept his personal life private. His initial will bequeathed the bulk of his estate to his niece Johanna, however four codicils distributed much of his estate to other relations, servants, friends and charities. Handel owned an art collection that was auctioned posthumously in 1760. The auction catalogue listed approximately seventy paintings and ten prints (other paintings were bequeathed). Handel's compositions include 42 operas, 25 oratorios, more than 120 cantatas, trios and duets, numerous arias, chamber music, a large number of ecumenical pieces, odes and serenatas, 18 concerti grossi and 12 organ concertos. His most famous work, the oratorio "Messiah" with its "Hallelujah" chorus, is among the most popular works in choral music and has become the centrepiece of the Christmas season. The Lobkowicz Palace in Prague holds Mozart's copy of "Messiah", complete with handwritten annotations. Among the works with opus numbers published and popularised in his lifetime are the Organ concertos Op. 4 and Op. 7, together with the Opus 3 and Opus 6 Concerti grossi; the latter incorporate an earlier organ concerto "The Cuckoo and the Nightingale" in which birdsong is imitated in the upper registers of the organ. Also notable are his 16 keyboard suites, especially "The Harmonious Blacksmith". Handel introduced previously uncommon musical instruments in his works: the viola d'amore and violetta marina ("Orlando"), the lute ("Ode for St. Cecilia's Day"), three trombones ("Saul"), clarinets or small high cornetts ("Tamerlano"), theorbo, French horn ("Water Music"), lyrichord, double bassoon, viola da gamba, carillon (bell chimes), positive organ, and harp ("Giulio Cesare", "Alexander's Feast"). The first published catalogue of Handel's works appeared as an appendix to Mainwaring's "Memoirs". 
Between 1787 and 1797 Samuel Arnold compiled a 180-volume collection of Handel's works—however, it was far from complete. Also incomplete was the collection produced between 1843 and 1858 by the English Handel Society (founded by Sir George Macfarren). The 105-volume "Händel-Gesellschaft" ("Handel Society") edition was published between 1858 and 1902 – mainly due to the efforts of Friedrich Chrysander. For modern performance, the realisation of the basso continuo reflects 19th century practice. Vocal scores drawn from the edition were published by Novello in London, but some scores, such as the vocal score to "Samson", are incomplete. The continuing "Hallische Händel-Ausgabe" edition was first inaugurated in 1955 in the Halle region in Saxony-Anhalt, Eastern Germany. It did not start as a critical edition, but after heavy criticism of the first volumes, which were performing editions without a critical apparatus (for example, the opera "Serse" was published with the title character recast as a tenor, reflecting pre-war German practice), it repositioned itself as a critical edition. Influenced in part by cold-war realities, editorial work was inconsistent: misprints are found in abundance and editors failed to consult important sources. In 1985 a committee was formed to establish better standards for the edition. The unification of Germany in 1990 removed communication problems, and the volumes issued have since shown a significant improvement in standards. Between 1978 and 1986 the German academic Bernd Baselt catalogued Handel's works in his "Händel-Werke-Verzeichnis" publication. The catalogue has achieved wide acceptance and is used as the modern numbering system, with each of Handel's works designated an "HWV" number, for example "Messiah" is catalogued as "HWV 56". Handel's works were collected and preserved by two men: Sir Samuel Hellier, a country squire whose musical acquisitions form the nucleus of the Shaw-Hellier Collection, and the abolitionist Granville Sharp. The catalogue accompanying the National Portrait Gallery exhibition marking the tercentenary of the composer's birth calls them two men of the late eighteenth century "who have left us solid evidence of the means by which they indulged their enthusiasm". With his English oratorios, such as "Messiah" and "Solomon", the coronation anthems, and other works including "Water Music" and "Music for the Royal Fireworks", Handel became a national icon in Britain, and featured in the BBC series, "The Birth of British Music: Handel – The Conquering Hero". After his death, Handel's Italian operas fell into obscurity, except for selections such as the aria from "Serse", "Ombra mai fu". The oratorios continued to be performed but not long after Handel's death they were thought to need some modernisation, and Mozart orchestrated German versions of "Messiah" and other works. Throughout the 19th century and first half of the 20th century, particularly in the Anglophone countries, his reputation rested primarily on his English oratorios, which were customarily performed by choruses of amateur singers on solemn occasions. The centenary of his death, in 1859, was celebrated by a performance of "Messiah" at The Crystal Palace, involving 2,765 singers and 460 instrumentalists, who played for an audience of about 10,000 people. Recent decades have revived his secular cantatas and what one might call 'secular oratorios' or 'concert operas'. Of the former, "Ode for St. 
Cecilia's Day" (1739) (set to texts by John Dryden) and "Ode for the Birthday of Queen Anne" (1713) are noteworthy. For his secular oratorios, Handel turned to classical mythology for subjects, producing such works as "Acis and Galatea" (1719), "Hercules" (1745) and "Semele" (1744). These works have a close kinship with the sacred oratorios, particularly in the vocal writing for the English-language texts. They also share the lyrical and dramatic qualities of Handel's Italian operas. As such, they are sometimes fully staged as operas. With the rediscovery of his theatrical works, Handel, in addition to his renown as an instrumentalist, orchestral writer, and melodist, is now perceived as one of opera's great musical dramatists. The original form of his name, Georg Friedrich Händel, is generally used in Germany and elsewhere, but he is known as "Haendel" in France. A different composer, Jacob Handl or Händl (1550–1591), is usually known by the Latin form Jacobus Gallus that appears in his publications. Handel has generally been accorded high esteem by fellow composers, both in his own time and since. Johann Sebastian Bach attempted, unsuccessfully, to meet Handel while he was visiting Halle. (Handel was born in the same year as Bach and Domenico Scarlatti.) Mozart is reputed to have said of him, "Handel understands affect better than any of us. When he chooses, he strikes like a thunderbolt." To Beethoven he was "the master of us all... the greatest composer that ever lived. I would uncover my head and kneel before his tomb." Beethoven emphasised above all the simplicity and popular appeal of Handel's music when he said, "Go to him to learn how to achieve great effects, by such simple means." Since 1831, when William Crotch raised the issue in his "Substance of Several Lectures on Music," scholars have extensively studied Handel's "borrowing" of music from other composers. Summarising the field in 2005, Richard Taruskin wrote that Handel "seems to have been the champion of all parodists, adapting both his own works and those of other composers in unparalleled numbers and with unparalleled exactitude." Among the composers whose music has been shown to have been re-used by Handel are Alessandro Stradella, Gottlieb Muffat, Alessandro Scarlatti, Domenico Scarlatti, Giacomo Carissimi, Georg Philipp Telemann, Carl Heinrich Graun, Leonardo Vinci, Jacobus Gallus, Francesco Antonio Urio, Reinhard Keiser, Francesco Gasparini, Giovanni Bononcini, William Boyce, Henry Lawes, Michael Wise, Agostino Steffani, Franz Johann Habermann, and numerous others. In an essay published in 1985, John H. Roberts demonstrated that Handel's borrowings were unusually frequent even for his own era, enough to have been criticised by contemporaries (notably Johann Mattheson); Roberts suggested several reasons for Handel's practice, including Handel's attempts to make certain works sound more up-to-date and, more radically, his "basic lack of facility in inventing original ideas" – though Roberts took care to argue that this does not "diminish Handel's stature", which should be "judged not by his methods, still less by his motives in employing them, but solely by the effects he achieves." After Handel's death, many composers wrote works based on or inspired by his music. The first movement from Louis Spohr's "Symphony No. 6", Op. 116, "The Age of Bach and Handel", resembles two melodies from Handel's "Messiah". 
In 1797 Ludwig van Beethoven published the "12 Variations in G major on ‘See the conqu’ring hero comes’ from Judas Maccabaeus by Handel", for cello and piano. In 1822 Beethoven composed "The Consecration of the House" overture, which also bears the influence of Handel. Guitar virtuoso Mauro Giuliani composed his "Variations on a Theme by Handel, Op. 107" for guitar, based on Handel's Suite No. 5 in E major, HWV 430, for harpsichord. In 1861, using a theme from the second of Handel's harpsichord suites, Johannes Brahms wrote the "Variations and Fugue on a Theme by Handel", Op. 24, one of his most successful works (praised by Richard Wagner). Several works by the French composer Félix-Alexandre Guilmant use Handel's themes, for example his "March on a Theme by Handel" uses a theme from "Messiah". French composer and flautist Philippe Gaubert wrote his "Petite marche" for flute and piano based on the fourth movement of Handel's Trio Sonata, Op. 5, No. 2, HWV 397. Argentine composer Luis Gianneo composed his "Variations on a Theme by Handel" for piano. In 1911, Australian-born composer and pianist Percy Grainger based one of his most famous works on the final movement of Handel's Suite No. 5 in E major (just like Giuliani). He first wrote some variations on the theme, which he titled "Variations on Handel's 'The Harmonious Blacksmith' ". Then he used the first sixteen bars of his set of variations to create "Handel in the Strand", one of his most beloved pieces, of which he made several versions (for example, the piano solo version from 1930). Arnold Schoenberg's Concerto for String Quartet and Orchestra in B-flat major (1933) was composed after Handel's Concerto Grosso, Op. 6/7. Handel is honoured with a feast day on 28 July in the liturgical calendar of the Episcopal Church, with Johann Sebastian Bach and Henry Purcell. In the Lutheran Calendar of Saints Handel and Bach share that date with Heinrich Schütz, and Handel and Bach are commemorated in the calendar of saints prepared by the Order of Saint Luke for the use of the United Methodist Church. The Book of Common Worship of the Presbyterian Church (USA) (Westminster John Knox Press, 2018) commemorates him on April 20. In 1942, Handel was the subject of the British biopic "The Great Mr. Handel" directed by Norman Walker and starring Wilfrid Lawson. It was made at Denham Studios by the Rank Organisation, and shot in Technicolor. He is also the central character in the television films "God Rot Tunbridge Wells!" (1985) and "Handel's Last Chance" (1996) and the stage play "All the Angels" (2015). Handel was portrayed by Jeroen Krabbé as the antagonist in the film "Farinelli" (1994).
Giovanni Pierluigi da Palestrina Giovanni Pierluigi da Palestrina (c. 1525 – 2 February 1594) was an Italian Renaissance composer of sacred music and the best-known 16th-century representative of the Roman School of musical composition. He had a long-lasting influence on the development of church and secular music in Europe, especially on the development of counterpoint, and his work is considered the culmination of Renaissance polyphony. Palestrina was born in the town of Palestrina, near Rome, then part of the Papal States, to Neapolitan parents, Santo and Palma Pierluigi, in 1525, presumably on 3 February. His mother died on 16 January 1536, when Palestrina was aged 10. Documents suggest that he first visited Rome in 1537, when he is listed as a chorister at the basilica of Santa Maria Maggiore. He studied with Robin Mallapert and Firmin Lebel. He spent most of his career in the city. Palestrina came of age as a musician under the influence of the northern European style of polyphony, which owed its dominance in Italy primarily to two influential Netherlandish composers, Guillaume Dufay and Josquin des Prez, who had spent significant portions of their careers there. Italy itself had yet to produce anyone of comparable fame or skill in polyphony. From 1544 to 1551, Palestrina was the organist of the Cathedral of St. Agapito, the principal church of his native city. In 1551 Pope Julius III (previously the Bishop of Palestrina) appointed Palestrina "maestro di cappella", or musical director, of the Cappella Giulia (Julian Chapel, in the sense of choir), the choir of the chapter of canons at St. Peter's Basilica. Palestrina dedicated to Julius III his first published compositions (1554), a book of Masses. It was the first book of Masses by a native composer, since in the Italian states of Palestrina's day, most composers of sacred music were from the Low Countries, France, or Spain. In fact the book was modeled on one by Cristóbal de Morales: the woodcut in the front is almost an exact copy of the one from the book by the Spanish composer. During the next decade, Palestrina held positions similar to his Julian Chapel appointment at other chapels and churches in Rome, notably St. John Lateran (1555–1560, a post previously held by Lassus), and Santa Maria Maggiore (1561–1566). In 1571 he returned to the Julian Chapel and remained at St Peter's for the rest of his life. The decade of the 1570s was difficult for him personally: he lost his brother, two of his sons, and his wife in three separate outbreaks of the plague (1572, 1575, and 1580, respectively). He seems to have considered becoming a priest at this time, but instead he remarried, this time to a wealthy widow. This finally gave him financial independence (he was not well paid as choirmaster) and he was able to compose prolifically until his death. He died in Rome of pleurisy on 2 February 1594. It is said that Palestrina died only one day before his 69th birthday. As was usual, Palestrina was buried on the same day he died, in a plain coffin with a lead plate on which was inscribed "Libera me Domine". A five-part psalm for three choirs was sung at the funeral. Palestrina's funeral was held at St. Peter's, and he was buried beneath the floor of the basilica. His tomb was later covered by new construction, and attempts to locate the site have been unsuccessful. Palestrina left hundreds of compositions, including 105 masses, 68 offertories, at least 140 madrigals and more than 300 motets. 
In addition, there are at least 72 hymns, 35 magnificats, 11 litanies, and four or five sets of lamentations. The "Gloria" melody from Palestrina's "Magnificat Tertii Toni" (1591) is widely used today in the resurrection hymn tune, "Victory" (The Strife Is O'er). His attitude toward madrigals was somewhat enigmatic: whereas in the preface to his collection of "Canticum canticorum" (Song of Songs) motets (1584) he renounced the setting of profane texts, only two years later he was back in print with Book II of his secular madrigals (some of these being among the finest compositions in the medium). He published just two collections of madrigals with profane texts, one in 1555 and another in 1586. The other two collections were spiritual madrigals, a genre beloved by the proponents of the Counter-Reformation. Palestrina's masses show how his compositional style developed over time. His "Missa sine nomine" seems to have been particularly attractive to Johann Sebastian Bach, who studied and performed it while writing the Mass in B minor. Most of Palestrina's masses appeared in thirteen volumes printed between 1554 and 1601, the last seven published after his death. One of his most important works, the "Missa Papae Marcelli" (Pope Marcellus Mass), has been historically associated with erroneous information involving the Council of Trent. According to this tale (which forms the basis of Hans Pfitzner's opera "Palestrina"), it was composed in order to persuade the Council of Trent that a draconian ban on the polyphonic treatment of text in sacred music (as opposed, that is, to a more directly intelligible homophonic treatment) was unnecessary. However, more recent scholarship shows that this mass was in fact composed before the cardinals convened to discuss the ban (possibly as much as 10 years before). Historical data indicates that the Council of Trent, as an official body, never actually banned any church music and failed to make any ruling or official statement on the subject. These stories originated from the unofficial points-of-view of some Council attendees who discussed their ideas with those not privy to the Council's deliberations. Those opinions and rumors have, over centuries, been transmuted into fictional accounts, put into print, and often incorrectly taught as historical fact. While Palestrina's compositional motivations are not known, he may have been quite conscious of the need for intelligible text; however, this was not to conform with any doctrine of the Counter-Reformation, because no such doctrine exists. His characteristic style remained consistent from the 1560s until the end of his life. Roche's hypothesis that Palestrina's seemingly dispassionate approach to expressive or emotive texts could have resulted from his having to produce many to order, or from a deliberate decision that any intensity of expression was unbecoming in church music, reflects modern expectations about expressive freedom and underestimates the extent to which the mood of Palestrina's settings is adapted to the liturgical occasions for which the texts were set, rather than the line-by-line meaning of the text, and depends on the distinctive characters of the church modes and variations in vocal grouping for expressive effect. Performing editions and recordings of Palestrina have tended to favour his works in the more familiar modes and standard (SATB) voicings, under-representing the expressive variety of his settings. 
There are two comprehensive editions of Palestrina's works: a 33-volume edition published by Breitkopf and Härtel in Leipzig, Germany, between 1862 and 1894, edited by Franz Xaver Haberl, and a 34-volume edition published in the mid-twentieth century by Fratelli Scalera in Rome, Italy, edited by R. Casimiri and others. One of the hallmarks of Palestrina's music is that dissonances are typically relegated to the "weak" beats in a measure. This produced a smoother and more consonant type of polyphony which is now considered to be definitive of late Renaissance music, given Palestrina's position as Europe's leading composer (along with Orlande de Lassus) in the wake of Josquin des Prez (d. 1521). The "Palestrina style" taught in college courses covering Renaissance counterpoint is often based on the codification by the 18th-century composer and theorist Johann Joseph Fux, published as "Gradus ad Parnassum" (Steps to Parnassus, 1725). Citing Palestrina as his model, Fux divided counterpoint into five "species" (hence the term "species counterpoint"), designed as exercises for the student, which deployed progressively more elaborate rhythmic combinations of voices while adhering to strict harmonic and melodic requirements. The method was widely adopted and was the main basis of contrapuntal training in the 19th century, but Fux had introduced a number of simplifications to the Palestrina style, notably the obligatory use of a "cantus firmus" in semibreves, which were corrected by later authors such as Knud Jeppesen and R. O. Morris. Palestrina's music conforms in many ways to Fux's rules, particularly in the fifth species, but does not fit his pedagogical format. The main insight, that the "pure" style of polyphony achieved by Palestrina followed an invariable set of stylistic and combinational requirements, was justified. Fux's manual was endorsed by his contemporary J.S. Bach, who himself arranged two of Palestrina's masses for performance. According to Fux, Palestrina had established and followed a set of basic guidelines. Fux omits to mention, however, the manner in which Palestrina's musical phrasing followed the syntax of the sentences he was setting to music, something not always observed by earlier composers. Also to be noticed in Palestrina is a great deal of tone painting. Elementary examples of this are descending musical motion on Latin words like "descendit" (descends), or a static musical or cadential moment on the words "de coelis" (from heaven). Palestrina was extremely famous in his day, and if anything, his reputation and influence increased after his death. J.S. Bach studied and hand-copied Palestrina's first book of "Masses", and in 1742 wrote his own adaptation of the Kyrie and Gloria of the "Missa sine nomine." Felix Mendelssohn placed him in the pantheon of the greatest musicians, writing, "I always get upset when some praise only Beethoven, others only Palestrina and still others only Mozart or Bach. All four of them, I say, or none at all." Conservative music of the Roman school continued to be written in Palestrina's style (which in the 17th century came to be known as the "prima pratica") by such students of his as Giovanni Maria Nanino, Ruggiero Giovanelli, Arcangelo Crivelli, Teofilo Gargari, Francesco Soriano, and Gregorio Allegri. As late as the 1750s, Palestrina's style was still the reference for composers working in the motet form, as can be seen from Francesco Barsanti's "Sei Antifones" 'in the style of Palestrina' (c. 1750; published by [Peter] Welcker, c. 1762). 
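To make the weak-beat treatment of dissonance described above more concrete, the following toy sketch (a modern illustration, not drawn from Fux's "Gradus ad Parnassum" or from any edition of Palestrina) flags vertical dissonances that fall on strong beats. The simplified interval classification, the choice of which beats count as strong, and the function name strong_beat_dissonances are all assumptions of the sketch, not rules stated in the text above:

# Toy sketch (illustrative only): flag vertical dissonances on strong beats,
# in the spirit of the weak-beat treatment of dissonance described above.

# Intervals are given in semitones above the lower voice, reduced modulo one octave.
CONSONANT = {0, 3, 4, 7, 8, 9}   # unison/octave, 3rds, perfect 5th, 6ths (simplified convention)

def strong_beat_dissonances(intervals, beats_per_measure=4):
    """Return (measure, beat) positions where a dissonance falls on a strong beat.

    `intervals` is a list of semitone distances between two voices, one per beat.
    Beats 1 and 3 of a 4-beat measure are treated as strong (an assumption of
    this sketch).
    """
    offenders = []
    for i, ivl in enumerate(intervals):
        measure, beat = divmod(i, beats_per_measure)
        strong = beat in (0, beats_per_measure // 2)
        if strong and (ivl % 12) not in CONSONANT:
            offenders.append((measure + 1, beat + 1))
    return offenders

# Example: a passing dissonance (2 semitones) on a weak beat is tolerated,
# but the same interval on the first beat of the next measure is flagged.
line = [7, 2, 4, 3, 2, 4, 7, 9]
print(strong_beat_dissonances(line))   # -> [(2, 1)]

A real stylistic checker would of course also have to model melodic motion, suspensions and cadential formulas; the sketch only illustrates the single rule of thumb about beat placement.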
Much research on Palestrina was done in the 19th century by Giuseppe Baini, who published a monograph in 1828 which made Palestrina famous again and reinforced the already existing legend that he was the "Saviour of Church Music" during the reforms of the Council of Trent. 20th- and 21st-century scholarship by and large retains the view that Palestrina was a strong and refined composer whose music represents a summit of technical perfection. Contemporary analysis has highlighted modern qualities in Palestrina's compositions, such as his exploration of colour and sonority, his use of sonic grouping in large-scale settings, his interest in vertical as well as horizontal organization, and his studied attention to text setting. These unique characteristics, together with effortless delivery and an indefinable "otherness", constitute to this day the attraction of Palestrina's work. In 2009, the German television channels ZDF/Arte produced a film about the composer, "Palestrina - Prince of Music", directed by Georg Brintrup.
https://en.wikipedia.org/wiki?curid=12776
Group velocity The group velocity of a wave is the velocity with which the overall envelope shape of the wave's amplitudes—known as the "modulation" or "envelope" of the wave—propagates through space. For example, if a stone is thrown into the middle of a very still pond, a circular pattern of waves with a quiescent center appears in the water, also known as a capillary wave. The expanding ring of waves is the wave group, within which one can discern individual wavelets of differing wavelengths traveling at different speeds. The shorter waves travel faster than the group as a whole, but their amplitudes diminish as they approach the leading edge of the group. The longer waves travel more slowly, and their amplitudes diminish as they emerge from the trailing boundary of the group. The group velocity is defined by the equation v_g ≡ ∂ω/∂k, where ω is the wave's angular frequency (usually expressed in radians per second), and k is the angular wavenumber (usually expressed in radians per meter). The phase velocity is v_p = ω/k. The function ω(k), which gives ω as a function of k, is known as the dispersion relation. One derivation of the formula for group velocity is as follows. Consider a wave packet α(x, t) as a function of position x and time t. Let A(k) be its Fourier transform at time t = 0, α(x, 0) = ∫ A(k) e^(ikx) dk. By the superposition principle, the wavepacket at any time t is α(x, t) = ∫ A(k) e^(i(kx - ω(k)t)) dk, where ω is implicitly a function of k. Assume that the wave packet is almost monochromatic, so that A(k) is sharply peaked around a central wavenumber k_0. Then, linearization gives ω(k) ≈ ω_0 + (k - k_0)ω'_0, where ω_0 = ω(k_0) and ω'_0 is ∂ω/∂k evaluated at k = k_0 (see next section for discussion of this step). Then, after some algebra, α(x, t) = e^(i(k_0 x - ω_0 t)) ∫ A(k) e^(i(k - k_0)(x - ω'_0 t)) dk. There are two factors in this expression. The first factor, e^(i(k_0 x - ω_0 t)), describes a perfect monochromatic wave with wavevector k_0, with peaks and troughs moving at the phase velocity ω_0/k_0 within the envelope of the wavepacket. The other factor, the integral, gives the envelope of the wavepacket. This envelope function depends on position and time "only" through the combination x - ω'_0 t. Therefore, the envelope of the wavepacket travels at velocity v_g = ω'_0 = ∂ω/∂k evaluated at k = k_0, which explains the group velocity formula. Part of the previous derivation is the Taylor series approximation that ω(k) ≈ ω_0 + (k - k_0)ω'_0. If the wavepacket has a relatively large frequency spread, or if the dispersion has sharp variations (such as due to a resonance), or if the packet travels over very long distances, this assumption is not valid, and higher-order terms in the Taylor expansion become important. As a result, the envelope of the wave packet not only moves, but also "distorts," in a manner that can be described by the material's group velocity dispersion. Loosely speaking, different frequency-components of the wavepacket travel at different speeds, with the faster components moving towards the front of the wavepacket and the slower moving towards the back. Eventually, the wave packet gets stretched out. This is an important effect in the propagation of signals through optical fibers and in the design of high-power, short-pulse lasers. The idea of a group velocity distinct from a wave's phase velocity was first proposed by W.R. Hamilton in 1839, and the first full treatment was by Rayleigh in his "Theory of Sound" in 1877. For light, the refractive index n, vacuum wavelength λ_0, and wavelength in the medium λ, are related by λ_0 = nλ and v_p = ω/k = c/n, with v_p the phase velocity. 
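As a quick numerical illustration of the defining relation v_g = ∂ω/∂k (an added sketch, not part of the original article), consider deep-water surface gravity waves, whose dispersion relation ω(k) = √(gk) makes the group velocity exactly half the phase velocity:

```python
# Added illustrative sketch (not from the article): checking v_g = d(omega)/dk
# numerically for deep-water surface gravity waves, where omega(k) = sqrt(g*k)
# and the group velocity is exactly half the phase velocity.
import math

g = 9.81  # gravitational acceleration, m/s^2

def omega(k):
    return math.sqrt(g * k)          # dispersion relation

def phase_velocity(k):
    return omega(k) / k              # v_p = omega / k

def group_velocity(k, dk=1e-6):
    # central finite-difference estimate of d(omega)/dk
    return (omega(k + dk) - omega(k - dk)) / (2 * dk)

k = 0.5  # rad/m
print(phase_velocity(k), group_velocity(k), group_velocity(k) / phase_velocity(k))
```

The finite-difference estimate reproduces the analytic ratio of one half to good accuracy, which is the behaviour responsible for the longer waves appearing to "emerge" at the back of the ring of waves described above.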
The group velocity, therefore, can be calculated by any of the following formulas: v_g = ∂ω/∂k = v_p + k(∂v_p/∂k) = v_p - λ(∂v_p/∂λ) = c/(n + ω ∂n/∂ω). For waves traveling through three dimensions, such as light waves, sound waves, and matter waves, the formulas for phase and group velocity are generalized in a straightforward way: v_p = (ω/|k|) k̂ and v_g = ∇_k ω(k), where ∇_k ω means the gradient of the angular frequency ω as a function of the wave vector k, and k̂ is the unit vector in direction k. If the waves are propagating through an anisotropic (i.e., not rotationally symmetric) medium, for example a crystal, then the phase velocity vector and group velocity vector may point in different directions. The group velocity is often thought of as the velocity at which energy or information is conveyed along a wave. In most cases this is accurate, and the group velocity can be thought of as the signal velocity of the waveform. However, if the wave is travelling through an absorptive or gainful medium, this does not always hold. In these cases the group velocity may not be a well-defined quantity, or may not be a meaningful quantity. In his text "Wave Propagation in Periodic Structures", Brillouin argued that in a dissipative medium the group velocity ceases to have a clear physical meaning. An example concerning the transmission of electromagnetic waves through an atomic gas is given by Loudon. Another example is mechanical waves in the solar photosphere: The waves are damped (by radiative heat flow from the peaks to the troughs), and related to that, the energy velocity is often substantially lower than the waves' group velocity. Despite this ambiguity, a common way to extend the concept of group velocity to complex media is to consider spatially damped plane wave solutions inside the medium, which are characterized by a "complex-valued" wavevector. Then, the imaginary part of the wavevector is arbitrarily discarded and the usual formula for group velocity is applied to the real part of the wavevector, i.e., v_g = (∂(Re k)/∂ω)^(-1). Or, equivalently, in terms of the real part n of the complex refractive index ñ = n + iκ, one has c/v_g = n + ω(∂n/∂ω). It can be shown that this generalization of group velocity continues to be related to the apparent speed of the peak of a wavepacket. The above definition is not universal, however: alternatively one may consider the time damping of standing waves (real k, complex ω), or allow group velocity to be a complex-valued quantity. Different considerations yield distinct velocities, yet all definitions agree for the case of a lossless, gainless medium. The above generalization of group velocity for complex media can behave strangely, and the example of anomalous dispersion serves as a good illustration. At the edges of a region of anomalous dispersion, v_g becomes infinite (surpassing even the speed of light in vacuum), and v_g may easily become negative (its sign opposes Re k) inside the band of anomalous dispersion. Since the 1980s, various experiments have verified that it is possible for the group velocity (as defined above) of laser light pulses sent through lossy materials, or gainful materials, to significantly exceed the speed of light in vacuum, c. The peaks of wavepackets were also seen to move faster than c. In all these cases, however, there is no possibility that signals could be carried faster than the speed of light in vacuum, since the high value of v_g does not help to speed up the true motion of the sharp wavefront that would occur at the start of any real signal. 
Essentially, the seemingly superluminal transmission is an artifact of the narrow-band approximation used above to define group velocity, and happens because of resonance phenomena in the intervening medium. In a wide-band analysis it is seen that the apparently paradoxical speed of propagation of the signal envelope is actually the result of local interference of a wider band of frequencies over many cycles, all of which propagate perfectly causally and at phase velocity. The result is akin to the fact that shadows can travel faster than light, even though the light casting them always propagates at light speed; since the phenomenon being measured is only loosely connected with causality, it does not necessarily respect the rules of causal propagation, even though under normal circumstances it does so, which gives rise to the common intuition.
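Stepping back from these subtleties, the simpler formulas earlier in the article can be made concrete with a small added sketch: it evaluates the group velocity of light as v_g = c/(n - λ·dn/dλ), with λ the vacuum wavelength. The two-term Cauchy refractive-index model and its coefficients (very roughly crown-glass-like) are assumptions made only for this example, not values from the article.

```python
# Added illustrative sketch: group velocity of light from a refractive-index
# model, using v_g = c / (n - lam * dn/dlam), with lam the vacuum wavelength.
# The Cauchy model and its coefficients are assumed for illustration only.
c = 299_792_458.0  # speed of light in vacuum, m/s

def n(lam_um):
    """Toy two-term Cauchy model n(lambda), with lambda in micrometres."""
    A, B = 1.5046, 0.0042
    return A + B / lam_um**2

def group_velocity(lam_um, dlam=1e-6):
    dn_dlam = (n(lam_um + dlam) - n(lam_um - dlam)) / (2 * dlam)
    n_group = n(lam_um) - lam_um * dn_dlam   # group index: n - lambda * dn/dlambda
    return c / n_group

lam = 0.589  # micrometres
print("phase velocity:", c / n(lam), "m/s   group velocity:", group_velocity(lam), "m/s")
```

Because dn/dλ is negative in normal dispersion, the group index n - λ·dn/dλ exceeds n, so in this toy model the pulse envelope travels more slowly than the phase fronts.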
https://en.wikipedia.org/wiki?curid=12778
Group action In mathematics, a group action on a space is a group homomorphism of a given group into the group of transformations of the space. Similarly, a group action on a mathematical structure is a group homomorphism of a group into the automorphism group of the structure. It is said that the group "acts" on the space or structure. If a group acts on a structure, it also acts on everything that is built on the structure. For example, the group of Euclidean isometries acts on Euclidean space and also on the figures drawn in it. In particular, it acts on the set of all triangles. Similarly, the group of symmetries of a polyhedron acts on the vertices, the edges, and the faces of the polyhedron. A group action on a (finite-dimensional) vector space is called a representation of the group. It allows one to identify many groups with subgroups of GL(n, K), the group of the invertible matrices of dimension n over a field K. The symmetric group S_n acts on any set with n elements by permuting the elements of the set. Although the group of all permutations of a set depends formally on the set, the concept of group action allows one to consider a single group for studying the permutations of all sets with the same cardinality. If "G" is a group with identity element "e" and "X" is a set, then a ("left") "group action" of "G" on "X" is a function G × X → X, written (g, x) ↦ g⋅x, that satisfies the following two axioms: identity, e⋅x = x for every "x" in "X"; and compatibility, g⋅(h⋅x) = (gh)⋅x for all "g", "h" in "G" and every "x" in "X". The group "G" is said to "act on" "X" (from the left). The set "X" is called a ("left") "G"-"set". From these two axioms, it follows that for every "g" in "G", the function which maps "x" in "X" to g⋅x is a bijective map from "X" to "X" (its inverse being the function which maps "x" to g⁻¹⋅x). Therefore, one may alternatively define a group action of "G" on "X" as a group homomorphism from "G" into the symmetric group Sym("X") of all bijections from "X" to "X". In complete analogy, a "right group action" of "G" on "X" can be defined as a function X × G → X, written (x, g) ↦ x⋅g, satisfying the axioms x⋅e = x and x⋅(gh) = (x⋅g)⋅h. The difference between left and right actions is in the order in which a product "gh" acts on "x"; for a left action, "h" acts first and is followed by "g", while for a right action, "g" acts first and is followed by "h". Because of the formula (gh)⁻¹ = h⁻¹g⁻¹, a left action can be constructed from a right action by composing with the inverse operation of the group. Also, a right action of a group "G" on "X" can be considered as a left action of its opposite group "G"^op on "X". Thus it is sufficient to only consider left actions without any loss of generality. When there is a natural correspondence between the set of group elements and the set of space transformations, a group can be interpreted as acting on the space in a canonical way. For example, the symmetric group of a finite set consists of all bijective transformations of that set; thus, applying any element of the permutation group to an element of the set will produce another (not necessarily distinct) element of the set. More generally, symmetry groups such as the homeomorphism group of a topological space or the general linear group of a vector space, as well as their subgroups, also admit canonical actions. For other groups, an interpretation of the group in terms of an action may have to be specified, either because the group does not act canonically on any space or because the canonical action is not the action of interest. 
For example, we can specify an action of the two-element cyclic group "C"2 = {0, 1} on the finite set {"a", "b", "c"} by specifying that 0 (the identity element) sends "a" ↦ "a", "b" ↦ "b", and "c" ↦ "c", and that 1 sends "a" ↦ "b", "b" ↦ "a", and "c" ↦ "c". This action is not canonical. The action of "G" on "X" may satisfy various additional properties, for example being transitive, faithful, or free, as defined below; related notions exist when "G" acts on a topological space "X" and when "X" is a non-zero module over a ring "R" and the action of "G" is "R"-linear. Consider a group "G" acting on a set "X". The "orbit" of an element "x" in "X" is the set of elements in "X" to which "x" can be moved by the elements of "G". The orbit of "x" is denoted by "G"⋅"x": G⋅x = {g⋅x : g ∈ G}. The defining properties of a group guarantee that the set of orbits of (points "x" in) "X" under the action of "G" form a partition of "X". The associated equivalence relation is defined by saying x ∼ y if and only if there exists a "g" in "G" with g⋅x = y. The orbits are then the equivalence classes under this relation; two elements "x" and "y" are equivalent if and only if their orbits are the same, that is, G⋅x = G⋅y. The group action is transitive if and only if it has exactly one orbit, that is, if there exists "x" in "X" with G⋅x = X. This is the case if and only if G⋅x = X for "all" "x" in "X" (given that "X" is non-empty). The set of all orbits of "X" under the action of "G" is written as "X"/"G" (or, less frequently: "G"\"X"), and is called the "quotient" of the action. In geometric situations it may be called the "orbit space", while in algebraic situations it may be called the space of "coinvariants", and written X_G, by contrast with the invariants (fixed points), denoted X^G: the coinvariants are a "quotient" while the invariants are a "subset." The coinvariant terminology and notation are used particularly in group cohomology and group homology, which use the same superscript/subscript convention. If "Y" is a subset of "X", one writes G⋅Y for the set {g⋅y : g ∈ G and y ∈ Y}. The subset "Y" is said "invariant under G" if G⋅Y = Y (which is equivalent to G⋅Y ⊆ Y). In that case, "G" also operates on "Y" by restricting the action to "Y". The subset "Y" is called "fixed under G" if g⋅y = y for all "g" in "G" and all "y" in "Y". Every subset that is fixed under "G" is also invariant under "G", but not conversely. Every orbit is an invariant subset of "X" on which "G" acts transitively. The action of "G" on "X" is "transitive" if and only if all elements are equivalent, meaning that there is only one orbit. A "G-invariant" element of "X" is an "x" in "X" such that g⋅x = x for all "g" in "G". The set of all such "x" is denoted X^G and called the "G-invariants" of "X". When "X" is a "G"-module, X^G is the zeroth cohomology group of "G" with coefficients in "X", and the higher cohomology groups are the derived functors of the functor of "G"-invariants. Given "g" in "G" and "x" in "X" with g⋅x = x, it is said that ""x" is a fixed point of "g"" or that ""g" fixes "x"". For every "x" in "X", the stabilizer subgroup of "G" with respect to "x" (also called the "isotropy group" or "little group") is the set of all elements in "G" that fix "x": G_x = {g ∈ G : g⋅x = x}. This is a subgroup of "G", though typically not a normal one. The action of "G" on "X" is free if and only if all stabilizers are trivial. The kernel "N" of the homomorphism with the symmetric group, G → Sym(X), is given by the intersection of the stabilizers G_x for all "x" in "X". If "N" is trivial, the action is said to be faithful (or effective). Let "x" and "y" be two elements in "X", and let "g" be a group element such that y = g⋅x. Then the two stabilizer groups G_x and G_y are related by G_y = g G_x g⁻¹. 
Proof: by definition, h ∈ G_y if and only if h⋅(g⋅x) = g⋅x. Applying g⁻¹ to both sides of this equality yields (g⁻¹hg)⋅x = x; that is, g⁻¹hg ∈ G_x. An opposite inclusion follows similarly by taking h ∈ G_x and supposing x = g⁻¹⋅y. The above says that the stabilizers of elements in the same orbit are conjugate to each other. Thus, to each orbit, we can associate a conjugacy class of a subgroup of "G" (that is, the set of all conjugates of the subgroup). Let (H) denote the conjugacy class of "H". Then the orbit "O" has type (H) if the stabilizer G_x of some/any "x" in "O" belongs to (H). A maximal orbit type is often called a principal orbit type. Orbits and stabilizers are closely related. For a fixed "x" in "X", consider the map "f":"G" → "X" given by "g" ↦ "g"·"x". By definition the image "f"("G") of this map is the orbit "G"·"x". The condition for two elements to have the same image is f(g) = f(h) if and only if g⋅x = h⋅x, that is, if and only if g⁻¹h ∈ G_x, or equivalently g G_x = h G_x. In other words, "g" and "h" lie in the same coset for the stabilizer subgroup G_x. Thus the fiber f⁻¹({y}) of "f" over any "y" in "G"·"x" is such a coset, and clearly every such coset occurs as a fiber. Therefore "f" defines a "bijection" between the set G/G_x of cosets for the stabilizer subgroup and the orbit "G"·"x", which sends g G_x ↦ g⋅x. This result is known as the "orbit-stabilizer theorem". If "G" is finite then the orbit-stabilizer theorem, together with Lagrange's theorem, gives |G⋅x| · |G_x| = |G|; in other words the length of the orbit of "x" times the order of its stabilizer is the order of the group. In particular that implies that the orbit length is a divisor of the group order. This result is especially useful since it can be employed for counting arguments (typically in situations where "X" is finite as well). A result closely related to the orbit-stabilizer theorem is Burnside's lemma: |X/G| = (1/|G|) Σ_{g ∈ G} |X^g|, where X^g is the set of points fixed by "g". This result is mainly of use when "G" and "X" are finite, when it can be interpreted as follows: the number of orbits is equal to the average number of points fixed per group element. Fixing a group "G", the set of formal differences of finite "G"-sets forms a ring called the Burnside ring of "G", where addition corresponds to disjoint union, and multiplication to Cartesian product. The notion of group action can be put in a broader context by using the "action groupoid" G′ = G ⋉ X associated to the group action, thus allowing techniques from groupoid theory such as presentations and fibrations. Further the stabilizers of the action are the vertex groups, and the orbits of the action are the components, of the action groupoid. For more details, see the book "Topology and groupoids" referenced below. This action groupoid comes with a morphism "p": "G′" → "G" which is a "covering morphism of groupoids". This allows a relation between such morphisms and covering maps in topology. If "X" and "Y" are two "G"-sets, a "morphism" from "X" to "Y" is a function f: X → Y such that f(g⋅x) = g⋅f(x) for all "g" in "G" and all "x" in "X". Morphisms of "G"-sets are also called "equivariant maps" or "G-maps". The composition of two morphisms is again a morphism. If a morphism "f" is bijective, then its inverse is also a morphism. In this case "f" is called an "isomorphism", and the two "G"-sets "X" and "Y" are called "isomorphic"; for all practical purposes, isomorphic "G"-sets are indistinguishable. With this notion of morphism, the collection of all "G"-sets forms a category; this category is a Grothendieck topos (in fact, assuming a classical metalogic, this topos will even be Boolean). 
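The orbit-stabilizer theorem and Burnside's lemma are easy to check by brute force on a small example. The sketch below is an added illustration (the choice of example, the cyclic group C_4 rotating 2-colourings of the four vertices of a square, is an assumption of mine, not taken from the article); it verifies that |G⋅x|·|G_x| = |G| for every x and that the number of orbits equals the average number of fixed points:

```python
# Added sketch: verify the orbit-stabilizer theorem and Burnside's lemma for
# the rotation group C_4 acting on 2-colourings of the vertices of a square.
from itertools import product

group = [0, 1, 2, 3]                      # rotations by 0, 90, 180, 270 degrees
X = list(product([0, 1], repeat=4))       # all 16 colourings, as 4-tuples of 0/1

def act(g, colouring):
    # rotating the square cyclically shifts the tuple of vertex colours
    return colouring[-g:] + colouring[:-g] if g else colouring

# Orbit-stabilizer theorem: |G.x| * |G_x| = |G| for every x
for x in X:
    orbit = {act(g, x) for g in group}
    stabilizer = [g for g in group if act(g, x) == x]
    assert len(orbit) * len(stabilizer) == len(group)

# Burnside's lemma: number of orbits = average number of fixed points
orbits = {frozenset(act(g, x) for g in group) for x in X}
fixed = sum(1 for g in group for x in X if act(g, x) == x)
assert len(orbits) == fixed // len(group) == 6
print(len(orbits), "orbits")
```

The six orbits correspond to the six essentially different "necklaces" of four beads in two colours under rotation, a standard counting application of Burnside's lemma.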
One often considers "continuous group actions": the group "G" is a topological group, "X" is a topological space, and the map G × X → X is continuous with respect to the product topology of G × X. The space "X" is also called a "G-space" in this case. This is indeed a generalization, since every group can be considered a topological group by using the discrete topology. All the concepts introduced above still work in this context; however, we define morphisms between "G"-spaces to be "continuous" maps compatible with the action of "G". The quotient "X"/"G" inherits the quotient topology from "X", and is called the "quotient space" of the action. The above statements about isomorphisms for regular, free and transitive actions are no longer valid for continuous group actions. If "X" is a regular covering space of another topological space "Y", then the action of the deck transformation group on "X" is properly discontinuous as well as being free. Every free, properly discontinuous action of a group "G" on a path-connected topological space "X" arises in this manner: the quotient map X → X/G is a regular covering map, and the deck transformation group is the given action of "G" on "X". Furthermore, if "X" is simply connected, the fundamental group of "X"/"G" will be isomorphic to "G". These results have been generalized in the book "Topology and Groupoids" referenced below to obtain the fundamental groupoid of the orbit space of a discontinuous action of a discrete group on a Hausdorff space, as, under reasonable local conditions, the orbit groupoid of the fundamental groupoid of the space. This allows calculations such as the fundamental group of the symmetric square of a space "X", namely the orbit space of the product of "X" with itself under the twist action of the cyclic group of order 2 sending (x, y) to (y, x). An action of a group "G" on a locally compact space "X" is "cocompact" if there exists a compact subset "A" of "X" such that G⋅A = X. For a properly discontinuous action, cocompactness is equivalent to compactness of the quotient space "X/G". The action of "G" on "X" is said to be "proper" if the mapping G × X → X × X that sends (g, x) ↦ (g⋅x, x) is a proper map. A group action of a topological group "G" on a topological space "X" is said to be "strongly continuous" if for all "x" in "X", the map g ↦ g⋅x is continuous with respect to the respective topologies. Such an action induces an action on the space of continuous functions on "X" by defining (g⋅f)(x) = f(g⁻¹⋅x) for every "g" in "G", "f" a continuous function on "X", and "x" in "X". Note that, while every continuous group action is strongly continuous, the converse is not in general true. The subspace of "smooth points" for the action is the subspace of "X" of points "x" such that the map g ↦ g⋅x is smooth, that is, it is continuous and all derivatives are continuous. We can also consider actions of monoids on sets, by using the same two axioms as above. This does not define bijective maps and equivalence relations however. See semigroup action. Instead of actions on sets, we can define actions of groups and monoids on objects of an arbitrary category: start with an object "X" of some category, and then define an action on "X" as a monoid homomorphism into the monoid of endomorphisms of "X". If "X" has an underlying set, then all definitions and facts stated above can be carried over. For example, if we take the category of vector spaces, we obtain group representations in this fashion. We can view a group "G" as a category with a single object in which every morphism is invertible. 
A (left) group action is then nothing but a (covariant) functor from "G" to the category of sets, and a group representation is a functor from "G" to the category of vector spaces. A morphism between G-sets is then a natural transformation between the group action functors. In analogy, an action of a groupoid is a functor from the groupoid to the category of sets or to some other category. In addition to continuous actions of topological groups on topological spaces, one also often considers smooth actions of Lie groups on smooth manifolds, regular actions of algebraic groups on algebraic varieties, and actions of group schemes on schemes. All of these are examples of group objects acting on objects of their respective category.
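In the same spirit as the reformulation earlier in the article of an action as a homomorphism into Sym(X), the following added sketch (reusing the C_2 example given above, with the permutations represented as Python dictionaries) builds the bijection attached to each group element and checks the homomorphism property:

```python
# Added sketch: the C_2 action on {"a","b","c"} as a homomorphism C_2 -> Sym(X).
from itertools import product

X = ["a", "b", "c"]

def act(g, x):
    # 1 swaps a and b and fixes c; 0 is the identity
    return {"a": "b", "b": "a", "c": "c"}[x] if g == 1 else x

def to_permutation(g):
    """The bijection x -> g.x induced by the group element g."""
    return {x: act(g, x) for x in X}

def compose(p, q):
    """Composition of permutations: (p o q)(x) = p(q(x))."""
    return {x: p[q[x]] for x in X}

# Homomorphism property: the permutation of g*h equals the composition of the
# permutations of g and h (the group operation in C_2 is addition mod 2).
for g, h in product([0, 1], repeat=2):
    assert to_permutation((g + h) % 2) == compose(to_permutation(g), to_permutation(h))
print("the action defines a homomorphism into Sym(X)")
```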
https://en.wikipedia.org/wiki?curid=12781
Gzip gzip is a file format and a software application used for file compression and decompression. The program was created by Jean-loup Gailly and Mark Adler as a free software replacement for the compress program used in early Unix systems, and intended for use by GNU (the "g" is from "GNU"). Version 0.1 was first publicly released on 31 October 1992, and version 1.0 followed in February 1993. The decompression of the "gzip" format can be implemented as a streaming algorithm, an important feature for Web protocols, data interchange and ETL (in standard pipes) applications. gzip is based on the DEFLATE algorithm, which is a combination of LZ77 and Huffman coding. DEFLATE was intended as a replacement for LZW and other patent-encumbered data compression algorithms which, at the time, limited the usability of "compress" and other popular archivers. "gzip" is often also used to refer to the gzip file format, which consists of a 10-byte header (containing a magic number, the compression method, flags, a timestamp and the originating operating system), optional extra headers such as the original filename, a body containing a DEFLATE-compressed payload, and an 8-byte trailer containing a CRC-32 checksum and the length of the original uncompressed data. Although its file format also allows for multiple such streams to be concatenated (a concatenation of gzipped files is decompressed as if it were a single file), gzip is normally used to compress just single files. Compressed archives are typically created by assembling collections of files into a single tar archive (also called a tarball), and then compressing that archive with gzip. The final compressed file usually has the extension .tar.gz or .tgz. gzip is not to be confused with the ZIP archive format, which also uses DEFLATE. The ZIP format can hold collections of files without an external archiver, but is less compact than compressed tarballs holding the same data, because it compresses files individually and cannot take advantage of redundancy between files (solid compression). Various implementations of the program have been written. The most commonly known is the GNU Project's implementation using Lempel-Ziv coding (LZ77). OpenBSD's version of gzip is actually the compress program, to which support for the gzip format was added in OpenBSD 3.4. The 'g' in this specific version stands for "gratis". FreeBSD, DragonFly BSD and NetBSD use a BSD-licensed implementation instead of the GNU version; it is actually a command-line interface for zlib intended to be compatible with the GNU implementation's options. These implementations originally come from NetBSD, and support decompression of bzip2 and the Unix pack format. An alternative compression program achieving 3-8% better compression is Zopfli. It achieves gzip-compatible compression using more exhaustive algorithms, at the expense of greater compression time. It does not affect decompression time. pigz, written by Mark Adler, is compatible with gzip and speeds up compression by using all available CPU cores and threads. The tar utility included in most Linux distributions can extract .tar.gz files by passing the -z option, e.g., tar -xzf file.tar.gz. zlib is an abstraction of the DEFLATE algorithm in library form which includes support both for the gzip file format and a lightweight data stream format in its API. The zlib stream format, DEFLATE, and the gzip file format were standardized respectively as RFC 1950, RFC 1951, and RFC 1952. The gzip format is used in HTTP compression, a technique used to speed up the sending of HTML and other content on the World Wide Web. It is one of the three standard formats for HTTP compression as specified in RFC 2616. This RFC also specifies a zlib format (called "DEFLATE"), which is equal to the gzip format except that gzip adds eleven bytes of overhead in the form of headers and trailers. 
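The difference between the two containers is easy to see from Python's standard library. The short sketch below (an added illustration, not from the article) wraps the same payload with zlib and with gzip and prints the resulting sizes, so the fixed header/trailer overhead of the gzip container shows up directly:

```python
# Added illustration: the zlib and gzip containers around the same DEFLATE data.
import gzip, zlib

payload = b"The quick brown fox jumps over the lazy dog. " * 20

as_zlib = zlib.compress(payload, 9)           # 2-byte header + DEFLATE + 4-byte Adler-32
as_gzip = gzip.compress(payload, 9, mtime=0)  # 10-byte header + DEFLATE + 8-byte CRC-32/length trailer

assert zlib.decompress(as_zlib) == payload
assert gzip.decompress(as_gzip) == payload
print(len(as_zlib), "bytes as zlib vs", len(as_gzip), "bytes as gzip")
```

Since both containers carry essentially the same DEFLATE body at the same compression level, the size difference printed here is dominated by the container framing rather than by the compression itself.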
Still, the gzip format is sometimes recommended over zlib because Internet Explorer does not implement the standard correctly and cannot handle the zlib format as specified in RFC 1950. zlib DEFLATE is used internally by the Portable Network Graphics (PNG) format. Since the late 1990s, bzip2, a file compression utility based on a block-sorting algorithm, has gained some popularity as a gzip replacement. It produces considerably smaller files (especially for source code and other structured text), but at the cost of memory and processing time (up to a factor of 4). AdvanceCOMP and 7-Zip can produce gzip-compatible files, using an internal DEFLATE implementation with better compression ratios than gzip itself—at the cost of more processor time compared to the reference implementation.
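As a closing illustration (added here, not part of the article), the multi-member behaviour described earlier, whereby concatenated gzip streams decompress as one file, can be demonstrated with Python's standard gzip module:

```python
# Added sketch of gzip round-tripping and of concatenated gzip members.
import gzip, io

part1 = gzip.compress(b"hello, ")
part2 = gzip.compress(b"world\n")

# Two gzip members glued together decompress as if they were one file.
with gzip.GzipFile(fileobj=io.BytesIO(part1 + part2)) as f:
    assert f.read() == b"hello, world\n"

# An ordinary single-stream round trip.
data = b"x" * 10_000
blob = gzip.compress(data)
assert gzip.decompress(blob) == data
print(len(data), "bytes ->", len(blob), "bytes")
```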
https://en.wikipedia.org/wiki?curid=12783
General anaesthetic General anaesthetics (or anesthetics, see spelling differences) are often defined as compounds that induce a loss of consciousness in humans or loss of righting reflex in animals. Clinical definitions are also extended to include an induced coma that causes a lack of awareness of painful stimuli, sufficient to facilitate surgical applications in clinical and veterinary practice. General anaesthetics do not act as analgesics and should also not be confused with sedatives. General anaesthetics are a structurally diverse group of compounds whose mechanisms encompass multiple biological targets involved in the control of neuronal pathways. The precise workings are the subject of some debate and ongoing research. General anesthetics elicit a state of general anesthesia. How this state should be defined remains somewhat controversial. General anesthetics, however, typically elicit several key reversible effects: immobility, analgesia, amnesia, unconsciousness, and reduced autonomic responsiveness to noxious stimuli. Drugs given to induce general anaesthesia can be administered either as gases or vapours (inhalational anaesthetics) or as injections (intravenous or even intramuscular anaesthetics). All of these agents share the property of being quite hydrophobic (i.e., as liquids, they are not freely miscible—or mixable—in water, and as gases they dissolve in oils better than in water). It is possible to deliver anaesthesia solely by inhalation or injection, but most commonly the two forms are combined, with an injection given to induce anaesthesia and a gas used to maintain it. Inhalational anaesthetic substances are either volatile liquids or gases, and are usually delivered using an anaesthesia machine. An anaesthesia machine allows composing a mixture of oxygen, anaesthetics and ambient air, delivering it to the patient and monitoring patient and machine parameters. Liquid anaesthetics are vapourised in the machine. Many compounds have been used for inhalation anaesthesia, but only a few are still in widespread use. Desflurane, isoflurane and sevoflurane are the most widely used volatile anaesthetics today. They are often combined with nitrous oxide. Older, less popular volatile anaesthetics include halothane, enflurane, and methoxyflurane. Researchers are also actively exploring the use of xenon as an anaesthetic. Injectable anaesthetics are used for the induction and maintenance of a state of unconsciousness. Anaesthetists prefer to use intravenous injections, as they are faster, generally less painful and more reliable than intramuscular or subcutaneous injections. Several intravenous agents are in wide use. Benzodiazepines are sedatives and are used in combination with other general anaesthetics. Induction and maintenance of general anesthesia, and the control of the various physiological side effects, are typically achieved through a combinatorial drug approach. Individual general anesthetics vary with respect to their specific physiological and cognitive effects. While general anesthesia induction may be facilitated by one general anesthetic, others may be used in parallel or subsequently to achieve and maintain the desired anesthetic state. The drug approach utilized is dependent upon the procedure and the needs of the healthcare providers. It is postulated that general anaesthetics exert their action by the activation of inhibitory central nervous system (CNS) receptors, and the inactivation of CNS excitatory receptors. 
The relative roles of different receptors are still under debate, but evidence exists for particular targets being involved with certain anaesthetics and drug effects. Several key molecular targets are thought to mediate these effects. Aside from the clinically advantageous effects of general anesthetics, there are a number of other physiological consequences mediated by this class of drug. Notably, a reduction in blood pressure can be facilitated by a variety of mechanisms, including reduced cardiac contractility and dilation of the vasculature. This drop in blood pressure may activate a reflexive increase in heart rate, due to a baroreceptor-mediated feedback mechanism. Some anesthetics, however, disrupt this reflex. Patients under general anesthesia are at greater risk of developing hypothermia, as the aforementioned vasodilation increases the heat lost via peripheral blood flow. By and large, these drugs reduce the internal body temperature threshold at which autonomic thermoregulatory mechanisms are triggered in response to cold. (On the other hand, the threshold at which thermoregulatory mechanisms are triggered in response to heat is typically increased.) Anesthetics typically affect respiration. Inhalational anesthetics elicit bronchodilation, an increase in respiratory rate, and reduced tidal volume. The net effect is decreased respiration, which must be managed by healthcare providers while the patient is under general anesthesia. The reflexes that function to alleviate airway obstructions are also dampened (e.g. gag and cough). Compounded with a reduction in lower esophageal sphincter tone, which increases the frequency of regurgitation, patients are especially prone to asphyxiation while under general anesthesia. Healthcare providers closely monitor individuals under general anesthesia and utilize a number of devices, such as an endotracheal tube, to ensure patient safety. General anesthetics also affect the chemoreceptor trigger zone and brainstem vomiting center, eliciting nausea and vomiting following treatment. Intravenously delivered general anesthetics are typically small and highly lipophilic molecules. These characteristics facilitate their rapid preferential distribution into the brain and spinal cord, which are both highly vascularized and lipophilic. It is here that the actions of these drugs lead to general anesthesia induction. Following distribution into the central nervous system (CNS), the anesthetic drug then diffuses out of the CNS into the muscles and viscera, followed by adipose tissues. In patients given a single injection of drug, this redistribution results in termination of general anesthesia. Therefore, following administration of a single anesthetic bolus, duration of drug effect is dependent solely upon the redistribution kinetics. The half-life of an anesthetic drug following a prolonged infusion, however, depends upon drug redistribution kinetics, drug metabolism in the liver, and the existing drug concentration in fat. When large quantities of an anesthetic drug have already been dissolved in the body's fat stores, this can slow its redistribution out of the brain and spinal cord, prolonging its CNS effects. For this reason, the half-lives of these infused drugs are said to be context-dependent. Generally, prolonged anesthetic drug infusions result in longer drug half-lives, slowed elimination from the brain and spinal cord, and delayed termination of general anesthesia. 
Minimal alveolar concentration (MAC) is the concentration of an inhalational anesthetic in the lungs that prevents 50% of patients from responding to surgical incision. This value is used to compare the potencies of various inhalational general anesthetics and impacts the partial pressure of the drug utilized by healthcare providers during general anesthesia induction and/or maintenance. Induction of anesthesia is facilitated by diffusion of an inhaled anesthetic drug into the brain and spinal cord. Diffusion throughout the body proceeds until the drug's partial pressure within the various tissues is equivalent to the partial pressure of the drug within the lungs. Healthcare providers can control the rate of anesthesia induction and final tissue concentrations of the anesthetic by varying the partial pressure of the inspired anesthetic. A higher drug partial pressure in the lungs will drive diffusion more rapidly throughout the body and yield a higher maximum tissue concentration. Respiratory rate and inspiratory volume will also affect the promptness of anesthesia onset, as will the extent of pulmonary blood flow. The partition coefficient of a gaseous drug is indicative of its relative solubility in various tissues. This metric is the relative drug concentration between two tissues when their partial pressures are equal (gas:blood, fat:blood, etc.). Inhalational anesthetics vary widely with respect to their tissue solubilities and partition coefficients. Anesthetics that are highly soluble require many molecules of drug to raise the partial pressure within a given tissue, as opposed to minimally soluble anesthetics which require relatively few. Generally, inhalational anesthetics that are minimally soluble reach equilibrium more quickly. Inhalational anesthetics that have a high fat:blood partition coefficient, however, reach equilibrium more slowly, due to the minimal vascularization of fat tissue, which serves as a large, slowly-filling reservoir for the drug. Inhaled anesthetics are eliminated via expiration, following diffusion into the lungs. This process is dependent largely upon the anesthetic blood:gas partition coefficient, tissue solubility, blood flow to the lungs, and patient respiratory rate and inspiratory volume. For gases that have minimal tissue solubility, termination of anesthesia generally occurs as rapidly as the onset of anesthesia. For gases that have high tissue solubility, however, termination of anesthesia is generally context-dependent. As with intravenous anesthetic infusions, prolonged delivery of highly soluble anesthetic gases generally results in longer drug half-lives, slowed elimination from the brain and spinal cord, and delayed termination of anesthesia. Metabolism of inhaled anesthetics is generally not a major route of drug elimination. Arab/Persian physicians introduced the use of preoperative anaesthetic compounds around the 9th century.
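To illustrate the qualitative point above that less soluble agents equilibrate faster, here is a deliberately crude, single-compartment wash-in sketch. It is an added toy model with arbitrary units, not a clinical model; the partition coefficients are only rough, textbook-style values used for comparison, and the uptake constant and compartment volume are made-up parameters.

```python
# Toy, heavily simplified wash-in model (illustration only, arbitrary units):
# the compartment's drug content is (partition coefficient) x (volume) x
# (partial pressure), and uptake is proportional to the partial-pressure
# gradient, so more soluble agents (larger blood:gas partition coefficient)
# approach the alveolar partial pressure more slowly.
def time_to_equilibrate(partition_coeff, fraction=0.9,
                        volume=5.0, uptake_const=1.0, dt=0.01):
    p_alv, p_blood, t = 1.0, 0.0, 0.0
    while p_blood < fraction * p_alv:
        # d(content)/dt = k * (P_alv - P_blood), content = lambda * V * P_blood
        p_blood += uptake_const * (p_alv - p_blood) / (partition_coeff * volume) * dt
        t += dt
    return t

for lam in (0.42, 0.65, 2.4):   # roughly desflurane-, sevoflurane-, halothane-like
    print(lam, round(time_to_equilibrate(lam), 1))
```

In this toy model the time constant scales linearly with the blood:gas partition coefficient, so the least soluble (desflurane-like) agent approaches the alveolar partial pressure several times faster than the most soluble (halothane-like) one, mirroring the qualitative behaviour described above.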
https://en.wikipedia.org/wiki?curid=12786
Geoffrey Chaucer Geoffrey Chaucer (; – 25 October 1400) was an English poet and author. Widely considered the greatest English poet of the Middle Ages, he is best known for "The Canterbury Tales". He has been called the "father of English literature", or, alternatively, the "father of English poetry". He was the first writer to be buried in what has since come to be called Poets' Corner, in Westminster Abbey. Chaucer also gained fame as a philosopher and astronomer, composing the scientific "A Treatise on the Astrolabe" for his 10-year-old son Lewis. He maintained a career in the civil service as a bureaucrat, courtier, diplomat, and member of parliament. Among Chaucer's many other works are "The Book of the Duchess", "The House of Fame", "The Legend of Good Women", and "Troilus and Criseyde". He is seen as crucial in legitimising the literary use of Middle English when the dominant literary languages in England were still French and Latin. Chaucer was born in London most likely in the early 1340s, though the precise date and location remain unknown. His father and grandfather were both London vintners, and several previous generations had been merchants in Ipswich. His family name is derived from the French "chausseur", meaning "shoemaker". In 1324, his father John Chaucer was kidnapped by an aunt in the hope of marrying the 12-year-old to her daughter in an attempt to keep property in Ipswich. The aunt was imprisoned and fined £250, now equivalent to about £, which suggests that the family was financially secure. John Chaucer married Agnes Copton, who inherited properties in 1349, including 24 shops in London from her uncle Hamo de Copton, who is described in a will dated 3 April 1354 and listed in the City Hustings Roll as "moneyer", said to be moneyer at the Tower of London. In the City Hustings Roll 110, 5, Ric II, dated June 1380, Chaucer refers to himself as "me Galfridum Chaucer, filium Johannis Chaucer, Vinetarii, Londonie", which translates as: "Geoffrey Chaucer, son of John Chaucer, vintners, London". While records concerning the lives of his contemporaries, William Langland and the Pearl Poet, are practically non-existent, since Chaucer was a public servant, his official life is very well documented, with nearly five hundred written items testifying to his career. The first of the "Chaucer Life Records" appears in 1357, in the household accounts of Elizabeth de Burgh, the Countess of Ulster, when he became the noblewoman's page through his father's connections, a common medieval form of apprenticeship for boys into knighthood or prestige appointments. The countess was married to Lionel, Duke of Clarence, the second surviving son of the king, Edward III, and the position brought the teenage Chaucer into the close court circle, where he was to remain for the rest of his life. He also worked as a courtier, a diplomat, and a civil servant, as well as working for the king from 1389 to 1391 as Clerk of the King's Works. In 1359, the early stages of the Hundred Years' War, Edward III invaded France and Chaucer travelled with Lionel of Antwerp, 1st Duke of Clarence, Elizabeth's husband, as part of the English army. In 1360, he was captured during the siege of Rheims. Edward paid £16 for his ransom, a considerable sum , and Chaucer was released. After this, Chaucer's life is uncertain, but he seems to have travelled in France, Spain, and Flanders, possibly as a messenger and perhaps even going on a pilgrimage to Santiago de Compostela. Around 1366, Chaucer married Philippa (de) Roet. 
She was a lady-in-waiting to Edward III's queen, Philippa of Hainault, and a sister of Katherine Swynford, who later (c. 1396) became the third wife of John of Gaunt. It is uncertain how many children Chaucer and Philippa had, but three or four are most commonly cited. His son, Thomas Chaucer, had an illustrious career, as chief butler to four kings, envoy to France, and Speaker of the House of Commons. Thomas's daughter, Alice, married the Duke of Suffolk. Thomas's great-grandson (Geoffrey's great-great-grandson), John de la Pole, Earl of Lincoln, was the heir to the throne designated by Richard III before he was deposed. Geoffrey's other children probably included Elizabeth Chaucy, a nun at Barking Abbey, Agnes, an attendant at Henry IV's coronation; and another son, Lewis Chaucer. Chaucer's "Treatise on the Astrolabe" was written for Lewis. According to tradition, Chaucer studied law in the Inner Temple (an Inn of Court) at this time. He became a member of the royal court of Edward III as a "valet de chambre", yeoman, or esquire on 20 June 1367, a position which could entail a wide variety of tasks. His wife also received a pension for court employment. He travelled abroad many times, at least some of them in his role as a valet. In 1368, he may have attended the wedding of Lionel of Antwerp to Violante Visconti, daughter of Galeazzo II Visconti, in Milan. Two other literary stars of the era were in attendance: Jean Froissart and Petrarch. Around this time, Chaucer is believed to have written "The Book of the Duchess" in honour of Blanche of Lancaster, the late wife of John of Gaunt, who died in 1369 of the plague. Chaucer travelled to Picardy the next year as part of a military expedition; in 1373 he visited Genoa and Florence. Numerous scholars such as Skeat, Boitani, and Rowland suggested that, on this Italian trip, he came into contact with Petrarch or Boccaccio. They introduced him to medieval Italian poetry, the forms and stories of which he would use later. The purposes of a voyage in 1377 are mysterious, as details within the historical record conflict. Later documents suggest it was a mission, along with Jean Froissart, to arrange a marriage between the future King Richard II and a French princess, thereby ending the Hundred Years War. If this was the purpose of their trip, they seem to have been unsuccessful, as no wedding occurred. In 1378, Richard II sent Chaucer as an envoy (secret dispatch) to the Visconti and to Sir John Hawkwood, English condottiere (mercenary leader) in Milan. It has been speculated that it was Hawkwood on whom Chaucer based his character the Knight in the "Canterbury Tales", for a description matches that of a 14th-century condottiere. A possible indication that his career as a writer was appreciated came when Edward III granted Chaucer "a gallon of wine daily for the rest of his life" for some unspecified task. This was an unusual grant, but given on a day of celebration, St George's Day, 1374, when artistic endeavours were traditionally rewarded, it is assumed to have been another early poetic work. It is not known which, if any, of Chaucer's extant works prompted the reward, but the suggestion of him as poet to a king places him as a precursor to later poets laureate. Chaucer continued to collect the liquid stipend until Richard II came to power, after which it was converted to a monetary grant on 18 April 1378. Chaucer obtained the very substantial job of comptroller of the customs for the port of London, which he began on 8 June 1374. 
He must have been suited for the role as he continued in it for twelve years, a long time in such a post at that time. His life goes undocumented for much of the next ten years, but it is believed that he wrote (or began) most of his famous works during this period. He was mentioned in law papers of 4 May 1380, involved in the raptus (rape or seizure) of Cecilia Chaumpaigne. What was meant is unclear, but the incident seems to have been resolved quickly with an exchange of money in June 1380 and did not leave a stain on Chaucer's reputation. It is not known if Chaucer was in the City of London at the time of the Peasants' Revolt, but if he was, he would have seen its leaders pass almost directly under his apartment window at Aldgate. While still working as comptroller, Chaucer appears to have moved to Kent, being appointed as one of the commissioners of peace for Kent, at a time when French invasion was a possibility. He is thought to have started work on "The Canterbury Tales" in the early 1380s. He also became a member of parliament for Kent in 1386, and attended the 'Wonderful Parliament' that year. He appears to have been present at most of the 71 days it sat, for which he was paid £24 9s. On 15 October that year, he gave a deposition in the case of "Scrope v. Grosvenor". There is no further reference after this date to Philippa, Chaucer's wife, and she is presumed to have died in 1387. He survived the political upheavals caused by the Lords Appellants, despite the fact that Chaucer knew some of the men executed over the affair quite well. On 12 July 1389, Chaucer was appointed the clerk of the king's works, a sort of foreman organising most of the king's building projects. No major works were begun during his tenure, but he did conduct repairs on Westminster Palace, St. George's Chapel, Windsor, continue building the wharf at the Tower of London, and build the stands for a tournament held in 1390. It may have been a difficult job, but it paid well: two shillings a day, more than three times his salary as a comptroller. Chaucer was also appointed keeper of the lodge at the King's park in Feckenham, which was a largely honorary appointment. In September 1390, records say that Chaucer was robbed and possibly injured while conducting the business, and he stopped working in this capacity on 17 June 1391. He began as Deputy Forester in the royal forest of Petherton Park in North Petherton, Somerset on 22 June. This was no sinecure, with maintenance an important part of the job, although there were many opportunities to derive profit. Richard II granted him an annual pension of 20 pounds in 1394 (roughly £25,000/US$33,000 in 2018 money), and Chaucer's name fades from the historical record not long after Richard's overthrow in 1399. The last few records of his life show his pension renewed by the new king, and his taking a lease on a residence within the close of Westminster Abbey on 24 December 1399. Henry IV renewed the grants assigned by Richard, but "The Complaint of Chaucer to his Purse" hints that the grants might not have been paid. The last mention of Chaucer is on 5 June 1400 when some money was paid which was owed to him. Chaucer died of unknown causes on 25 October 1400, although the only evidence for this date comes from the engraving on his tomb which was erected more than 100 years after his death. There is some speculation that he was murdered by enemies of Richard II or even on the orders of his successor Henry IV, but the case is entirely circumstantial. 
Chaucer was buried in Westminster Abbey in London, as was his right owing to his status as a tenant of the Abbey's close. In 1556, his remains were transferred to a more ornate tomb, making him the first writer interred in the area now known as Poets' Corner. Chaucer was a close friend of John of Gaunt, the wealthy Duke of Lancaster and father of Henry IV, and he served under Lancaster's patronage. Near the end of their lives, Lancaster and Chaucer became brothers-in-law when Chaucer married Philippa (Pan) de Roet in 1366, and Lancaster married Phillippa's sister Katherine Swynford (de Roet) in 1396. Chaucer's "Book of the Duchess" (also known as the "Deeth of Blaunche the Duchesse") was written in commemoration of Blanche of Lancaster, John of Gaunt's first wife. The poem refers to John and Blanche in allegory as the narrator relates the tale of "A long castel with walles white/Be Seynt Johan, on a ryche hil" (1318–1319) who is mourning grievously after the death of his love, "And goode faire White she het/That was my lady name ryght" (948–949). The phrase "long castel" is a reference to Lancaster (also called "Loncastel" and "Longcastell"), "walles white" is thought to be an oblique reference to Blanche, "Seynt Johan" was John of Gaunt's name-saint, and "ryche hil" is a reference to Richmond. These references reveal the identity of the grieving black knight of the poem as John of Gaunt, Duke of Lancaster and Earl of Richmond. "White" is the English translation of the French word "blanche", implying that the white lady was Blanche of Lancaster. Chaucer's short poem "Fortune", believed to have been written in the 1390s, is also thought to refer to Lancaster. "Chaucer as narrator" openly defies "Fortune", proclaiming that he has learned who his enemies are through her tyranny and deceit, and declares "my suffisaunce" (15) and that "over himself hath the maystrye" (14). "Fortune", in turn, does not understand Chaucer's harsh words to her for she believes that she has been kind to him, claims that he does not know what she has in store for him in the future, but most importantly, "And eek thou hast thy beste frend alyve" (32, 40, 48). Chaucer retorts, "My frend maystow nat reven, blind goddesse" (50) and orders her to take away those who merely pretend to be his friends. "Fortune" turns her attention to three princes whom she implores to relieve Chaucer of his pain and "Preyeth his beste frend of his noblesse/That to som beter estat he may atteyne" (78–79). The three princes are believed to represent the dukes of Lancaster, York, and Gloucester, and a portion of line 76 ("as three of you or tweyne") is thought to refer to the ordinance of 1390 which specified that no royal gift could be authorised without the consent of at least two of the three dukes. Most conspicuous in this short poem is the number of references to Chaucer's "beste frend". "Fortune" states three times in her response to the plaintiff, "And also, you still have your best friend alive" (32, 40, 48); she also refers to his "beste frend" in the envoy when appealing to his "noblesse" to help Chaucer to a higher estate. The narrator makes a fifth reference when he rails at "Fortune" that she shall not take his friend from him. Chaucer's attitudes toward the Church should not be confused with his attitudes toward Christianity. He seems to have respected and admired Christians and to have been one himself, though he also recognised that many people in the church were venal and corrupt. 
He writes in "Canterbury Tales", "now I beg all those that listen to this little treatise, or read it, that if there be anything in it that pleases them, they thank our Lord Jesus Christ for it, from whom proceeds all understanding and goodness." Chaucer's first major work was "The Book of the Duchess", an elegy for Blanche of Lancaster who died in 1368. Two other early works were "Anelida and Arcite" and "The House of Fame". He wrote many of his major works in a prolific period when he held the job of customs comptroller for London (1374 to 1386). His "Parlement of Foules", "The Legend of Good Women", and "Troilus and Criseyde" all date from this time. It is believed that he started "The Canterbury Tales" in the 1380s. Chaucer also translated Boethius' "Consolation of Philosophy" and "The Romance of the Rose" by Guillaume de Lorris (extended by Jean de Meun). Eustache Deschamps called himself a "nettle in Chaucer's garden of poetry". In 1385, Thomas Usk made glowing mention of Chaucer, and John Gower also lauded him. Chaucer's "Treatise on the Astrolabe" describes the form and use of the astrolabe in detail and is sometimes cited as the first example of technical writing in the English language, and it indicates that Chaucer was versed in science in addition to his literary talents. "The equatorie of the planetis" is a scientific work similar to the "Treatise" and sometimes ascribed to Chaucer because of its language and handwriting, an identification which scholars no longer deem tenable. Chaucer wrote in continental accentual-syllabic meter, a style which had developed in English literature since around the 12th century as an alternative to the alliterative Anglo-Saxon metre. Chaucer is known for metrical innovation, inventing the rhyme royal, and he was one of the first English poets to use the five-stress line, a decasyllabic cousin to the iambic pentameter, in his work, with only a few anonymous short works using it before him. The arrangement of these five-stress lines into rhyming couplets, first seen in his "The Legend of Good Women", was used in much of his later work and became one of the standard poetic forms in English. His early influence as a satirist is also important, with the common humorous device, the funny accent of a regional dialect, apparently making its first appearance in "The Reeve's Tale". The poetry of Chaucer, along with other writers of the era, is credited with helping to standardise the London Dialect of the Middle English language from a combination of the Kentish and Midlands dialects. This is probably overstated; the influence of the court, chancery and bureaucracy – of which Chaucer was a part – remains a more probable influence on the development of Standard English. Modern English is somewhat distanced from the language of Chaucer's poems owing to the effect of the Great Vowel Shift some time after his death. This change in the pronunciation of English, still not fully understood, makes the reading of Chaucer difficult for the modern audience. The status of the final "-e" in Chaucer's verse is uncertain: it seems likely that during the period of Chaucer's writing the final "-e" was dropping out of colloquial English and that its use was somewhat irregular. Chaucer's versification suggests that the final "-e" is sometimes to be vocalised, and sometimes to be silent; however, this remains a point on which there is disagreement. When it is vocalised, most scholars pronounce it as a schwa. 
Apart from the irregular spelling, much of the vocabulary is recognisable to the modern reader. Chaucer is also recorded in the "Oxford English Dictionary" as the first author to use many common English words in his writings. These words were probably frequently used in the language at the time but Chaucer, with his ear for common speech, is the earliest extant manuscript source. "Acceptable", "alkali", "altercation", "amble", "angrily", "annex", "annoyance", "approaching", "arbitration", "armless", "army", "arrogant", "arsenic", "arc", "artillery" and "aspect" are just some of the many English words first attested in Chaucer. Widespread knowledge of Chaucer's works is attested by the many poets who imitated or responded to his writing. John Lydgate was one of the earliest poets to write continuations of Chaucer's unfinished "Tales" while Robert Henryson's "Testament of Cresseid" completes the story of Cressida left unfinished in his "Troilus and Criseyde". Many of the manuscripts of Chaucer's works contain material from these poets and later appreciations by the Romantic era poets were shaped by their failure to distinguish the later "additions" from original Chaucer. Writers of the 17th and 18th centuries, such as John Dryden, admired Chaucer for his stories, but not for his rhythm and rhyme, as few critics could then read Middle English and the text had been butchered by printers, leaving a somewhat unadmirable mess. It was not until the late 19th century that the official Chaucerian canon, accepted today, was decided upon, largely as a result of Walter William Skeat's work. Roughly seventy-five years after Chaucer's death, "The Canterbury Tales" was selected by William Caxton to be one of the first books to be printed in England. Chaucer is sometimes considered the source of the English vernacular tradition. His achievement for the language can be seen as part of a general historical trend towards the creation of a vernacular literature, after the example of Dante, in many parts of Europe. A parallel trend in Chaucer's own lifetime was underway in Scotland through the work of his slightly earlier contemporary, John Barbour, and was likely to have been even more general, as is evidenced by the example of the Pearl Poet in the north of England. Although Chaucer's language is much closer to Modern English than the text of "Beowulf", such that (unlike that of "Beowulf") a Modern English-speaker with a large vocabulary of archaic words may understand it, it differs enough that most publications modernise his idiom. The following is a sample from the prologue of "The Summoner's Tale" that compares Chaucer's text to a modern translation: The first recorded association of Valentine's Day with romantic love is believed to be in Chaucer’s "Parliament of Fowls" (1382), a dream vision portraying a parliament for birds to choose their mates. Honoring the first anniversary of the engagement of fifteen-year-old King Richard II of England to fifteen-year-old Anne of Bohemia: For this was on seynt Volantynys dayWhan euery bryd comyth there to chese his makeOf euery kynde that men thinke mayAnd that so heuge a noyse gan they makeThat erthe & eyr & tre & euery lakeSo ful was that onethe was there spaceFor me to stonde, so ful was al the place. The poet Thomas Hoccleve, who may have met Chaucer and considered him his role model, hailed Chaucer as "the firste fyndere of our fair langage". John Lydgate referred to Chaucer within his own text "The Fall of Princes" as the "lodesterre … off our language". 
Around two centuries later, Sir Philip Sidney greatly praised "Troilus and Criseyde" in his own "Defence of Poesie". The large number of surviving manuscripts of Chaucer's works is testimony to the enduring interest in his poetry prior to the arrival of the printing press. There are 83 surviving manuscripts of the Canterbury Tales (in whole or part) alone, along with sixteen of "Troilus and Criseyde", including the personal copy of Henry IV. Given the ravages of time, it is likely that these surviving manuscripts represent hundreds since lost. Chaucer's original audience was a courtly one, and would have included women as well as men of the upper social classes. Yet even before his death in 1400, Chaucer's audience had begun to include members of the rising literate, middle and merchant classes, which included many Lollard sympathisers who may well have been inclined to read Chaucer as one of their own, particularly in his satirical writings about friars, priests, and other church officials. In 1464, John Baron, a tenant farmer in Agmondesham (Amersham in Buckinghamshire), was brought before John Chadworth, the Bishop of Lincoln, on charges of being a Lollard heretic; he confessed to owning a "boke of the Tales of Caunterburie" among other suspect volumes. William Caxton, the first English printer, was responsible for the first two folio editions of "The Canterbury Tales" which were published in 1478 and 1483. Caxton's second printing, by his own account, came about because a customer complained that the printed text differed from a manuscript he knew; Caxton obligingly used the man's manuscript as his source. Both Caxton editions carry the equivalent of manuscript authority. Caxton's edition was reprinted by his successor, Wynkyn de Worde, but this edition has no independent authority. Richard Pynson, the King's Printer under Henry VIII for about twenty years, was the first to collect and sell something that resembled an edition of the collected works of Chaucer; however, in the process, he introduced five previously printed texts that are now known not to be Chaucer's. (The collection is actually three separately printed texts, or collections of texts, bound together as one volume.) There is a likely connection between Pynson's product and William Thynne's a mere six years later. Thynne had a successful career from the 1520s until his death in 1546, as chief clerk of the kitchen of Henry VIII, one of the masters of the royal household. He spent years comparing various versions of Chaucer's works, and selected 41 pieces for publication. While there were questions over the authorship of some of the material, there is no doubt this was the first comprehensive view of Chaucer's work. "The Workes of Geffray Chaucer," published in 1532, was the first edition of Chaucer's collected works. His editions of "Chaucer's Works" in 1532 and 1542 were the first major contributions to the existence of a widely recognised Chaucerian canon. Thynne represents his edition as a book sponsored by and supportive of the king who is praised in the preface by Sir Brian Tuke. Thynne's canon brought the number of apocryphal works associated with Chaucer to a total of 28, even if that was not his intention. As with Pynson, once included in the "Works", pseudepigraphic texts stayed with those works, regardless of their first editor's intentions.
In the 16th and 17th centuries, Chaucer was printed more than any other English author, and he was the first author to have his works collected in comprehensive single-volume editions in which a Chaucer canon began to cohere. Some scholars contend that 16th-century editions of Chaucer's "Works" set the precedent for all other English authors in terms of presentation, prestige and success in print. These editions certainly established Chaucer's reputation, but they also began the complicated process of reconstructing and frequently inventing Chaucer's biography and the canonical list of works which were attributed to him. Probably the most significant aspect of the growing apocrypha is that, beginning with Thynne's editions, it began to include medieval texts that made Chaucer appear as a proto-Protestant Lollard, primarily the "Testament of Love" and "The Plowman's Tale". As "Chaucerian" works that were not considered apocryphal until the late 19th century, these medieval texts enjoyed a new life, with English Protestants carrying on the earlier Lollard project of appropriating existing texts and authors who seemed sympathetic—or malleable enough to be construed as sympathetic—to their cause. The official Chaucer of the early printed volumes of his "Works" was construed as a proto-Protestant as the same was done, concurrently, with William Langland and "Piers Plowman". The famous "Plowman's Tale" did not enter Thynne's "Works" until the second, 1542, edition. Its entry was surely facilitated by Thynne's inclusion of Thomas Usk's "Testament of Love" in the first edition. The "Testament of Love" imitates, borrows from, and thus resembles Usk's contemporary, Chaucer. ("Testament of Love" also appears to borrow from "Piers Plowman".) Since the "Testament of Love" mentions its author's part in a failed plot (book 1, chapter 6), his imprisonment, and (perhaps) a recantation of (possibly Lollard) heresy, all this was associated with Chaucer. (Usk himself was executed as a traitor in 1388.) John Foxe took this recantation of heresy as a defence of the true faith, calling Chaucer a "right Wiclevian" and (erroneously) identifying him as a schoolmate and close friend of John Wycliffe at Merton College, Oxford. (Thomas Speght is careful to highlight these facts in his editions and his "Life of Chaucer".) No other sources for the "Testament of Love" exist—there is only Thynne's construction of whatever manuscript sources he had. John Stow (1525–1605) was an antiquarian and also a chronicler. His edition of Chaucer's "Works" in 1561 brought the apocrypha to more than 50 titles. More were added in the 17th century, and they remained as late as 1810, well after Thomas Tyrwhitt pared the canon down in his 1775 edition. The compilation and printing of Chaucer's works was, from its beginning, a political enterprise, since it was intended to establish an English national identity and history that grounded and authorised the Tudor monarchy and church. What was added to Chaucer often helped represent him favourably to Protestant England. In his 1598 edition of the "Works", Speght (probably taking cues from Foxe) made good use of Usk's account of his political intrigue and imprisonment in the "Testament of Love" to assemble a largely fictional "Life of Our Learned English Poet, Geffrey Chaucer". Speght's "Life" presents readers with an erstwhile radical in troubled times much like their own, a proto-Protestant who eventually came round to the king's views on religion. 
Speght states, "In the second year of Richard the second, the King tooke Geffrey Chaucer and his lands into his protection. The occasion wherof no doubt was some daunger and trouble whereinto he was fallen by favouring some rash attempt of the common people." Under the discussion of Chaucer's friends, namely John of Gaunt, Speght further explains: Later, in "The Argument" to the "Testament of Love", Speght adds: Speght is also the source of the famous tale of Chaucer being fined for beating a Franciscan friar in Fleet Street, as well as a fictitious coat of arms and family tree. Ironically – and perhaps consciously so – an introductory, apologetic letter in Speght's edition from Francis Beaumont defends the unseemly, "low", and bawdy bits in Chaucer from an elite, classicist position. Francis Thynne noted some of these inconsistencies in his "Animadversions", insisting that Chaucer was not a commoner, and he objected to the friar-beating story. Yet Thynne himself underscores Chaucer's support for popular religious reform, associating Chaucer's views with his father William Thynne's attempts to include "The Plowman's Tale" and "The Pilgrim's Tale" in the 1532 and 1542 "Works". The myth of the Protestant Chaucer continues to have a lasting impact on a large body of Chaucerian scholarship. Though it is extremely rare for a modern scholar to suggest Chaucer supported a religious movement that did not exist until more than a century after his death, the predominance of this thinking for so many centuries meant that it was long taken for granted that Chaucer was at least hostile toward Catholicism. This assumption forms a large part of many critical approaches to Chaucer's works, including neo-Marxism. Alongside Chaucer's "Works", the most impressive literary monument of the period is John Foxe's "Acts and Monuments...". As with the Chaucer editions, it was critically significant to English Protestant identity and included Chaucer in its project. Foxe's Chaucer both derived from and contributed to the printed editions of Chaucer's "Works", particularly the pseudepigrapha. "Jack Upland" was first printed in Foxe's "Acts and Monuments", and then it appeared in Speght's edition of Chaucer's "Works". Speght's "Life of Chaucer" echoes Foxe's own account, which is itself dependent upon the earlier editions that added the "Testament of Love" and "The Plowman's Tale" to their pages. Like Speght's Chaucer, Foxe's Chaucer was also a shrewd (or lucky) political survivor. In his 1563 edition, Foxe "thought it not out of season … to couple … some mention of Geoffrey Chaucer" with a discussion of John Colet, a possible source for John Skelton's character Colin Clout. Probably referring to the 1542 Act for the Advancement of True Religion, Foxe said that he "marvel[s] to consider … how the bishops, condemning and abolishing all manner of English books and treatises which might bring the people to any light of knowledge, did yet authorise the works of Chaucer to remain still and to be occupied; who, no doubt, saw into religion as much almost as even we do now, and uttereth in his works no less, and seemeth to be a right Wicklevian, or else there never was any.
And that, all his works almost, if they be thoroughly advised, will testify (albeit done in mirth, and covertly); and especially the latter end of his third book of the Testament of Love … Wherein, except a man be altogether blind, he may espy him at the full: although in the same book (as in all others he useth to do), under shadows covertly, as under a visor, he suborneth truth in such sort, as both privily she may profit the godly-minded, and yet not be espied of the crafty adversary. And therefore the bishops, belike, taking his works but for jests and toys, in condemning other books, yet permitted his books to be read." It is significant, too, that Foxe's discussion of Chaucer leads into his history of "The Reformation of the Church of Christ in the Time of Martin Luther" when "Printing, being opened, incontinently ministered unto the church the instruments and tools of learning and knowledge; which were good books and authors, which before lay hid and unknown. The science of printing being found, immediately followed the grace of God; which stirred up good wits aptly to conceive the light of knowledge and judgment: by which light darkness began to be espied, and ignorance to be detected; truth from error, religion from superstition, to be discerned." Foxe downplays Chaucer's bawdy and amorous writing, insisting that it all testifies to his piety. Material that is troubling is deemed metaphoric, while the more forthright satire (which Foxe prefers) is taken literally. John Urry produced the first edition of the complete works of Chaucer in a Latin font, published posthumously in 1721. Included were several tales that, according to the editors, were printed for the first time, a biography of Chaucer, a glossary of old English words, and testimonials of other writers concerning Chaucer dating back to the 16th century. According to A. S. G. Edwards, "This was the first collected edition of Chaucer to be printed in roman type. The life of Chaucer prefixed to the volume was the work of the Reverend John Dart, corrected and revised by Timothy Thomas. The glossary appended was also mainly compiled by Thomas. The text of Urry's edition has often been criticised by subsequent editors for its frequent conjectural emendations, mainly to make it conform to his sense of Chaucer's metre. The justice of such criticisms should not obscure his achievement. His is the first edition of Chaucer for nearly a hundred and fifty years to consult any manuscripts and is the first since that of William Thynne in 1534 to seek systematically to assemble a substantial number of manuscripts to establish his text. It is also the first edition to offer descriptions of the manuscripts of Chaucer's works, and the first to print texts of 'Gamelyn' and 'The Tale of Beryn', works ascribed to, but not by, Chaucer." Although Chaucer's works had long been admired, serious scholarly work on his legacy did not begin until the late 18th century, when Thomas Tyrwhitt edited "The Canterbury Tales", and it did not become an established academic discipline until the 19th century. Scholars such as Frederick James Furnivall, who founded the Chaucer Society in 1868, pioneered the establishment of diplomatic editions of Chaucer's major texts, along with careful accounts of Chaucer's language and prosody. Walter William Skeat, who like Furnivall was closely associated with the "Oxford English Dictionary", established the base text of all of Chaucer's works with his edition, published by Oxford University Press. Later editions by John H.
Fisher and Larry D. Benson offered further refinements, along with critical commentary and bibliographies. With the textual issues largely addressed, if not resolved, attention turned to the questions of Chaucer's themes, structure, and audience. The "Chaucer Review" was founded in 1966 and has maintained its position as the pre-eminent journal of Chaucer studies. In 1994, literary critic Harold Bloom placed Chaucer among the greatest Western writers of all time, and in 1997 expounded on William Shakespeare's debt to the author. The following major works are in rough chronological order, but scholars still debate the dating of most of Chaucer's output, and works made up from a collection of stories may have been compiled over a long period.
https://en.wikipedia.org/wiki?curid=12787
History of Asia
The history of Asia can be seen as the collective history of several distinct peripheral coastal regions such as East Asia, South Asia, Southeast Asia and the Middle East, linked by the interior mass of the Eurasian steppe. The coastal periphery was the home to some of the world's earliest known civilizations and religions, with each of the three regions developing early civilizations around fertile river valleys. These valleys were fertile because the soil there was rich and could bear many root crops. The civilizations in Mesopotamia, India, and China shared many similarities and likely exchanged technologies and ideas such as mathematics and the wheel. Other notions such as that of writing likely developed individually in each area. Cities, states and then empires developed in these lowlands. The steppe region had long been inhabited by mounted nomads, and from the central steppes they could reach all areas of the Asian continent. The northern part of the continent, covering much of Siberia, was however inaccessible to the steppe nomads due to the dense forests and the tundra. These areas in Siberia were very sparsely populated. The centre and periphery were kept separate by mountains and deserts. The Caucasus, Himalaya, Karakum Desert, and Gobi Desert formed barriers that the steppe horsemen could only cross with difficulty. While technologically and culturally the city dwellers were more advanced, they could do little militarily to defend against the mounted hordes of the steppe. However, the lowlands did not have enough open grasslands to support a large horsebound force. Thus the nomads who conquered states in the Middle East were soon forced to adapt to the local societies. The spread of Islam ushered in the Islamic Golden Age and the Timurid Renaissance, which later influenced the age of the Islamic gunpowder empires. Asia's history features major developments seen in other parts of the world, as well as events that have affected those other regions. These include the trade of the Silk Road, which spread cultures, languages, religions, and diseases throughout Afro-Eurasia. Another major advancement was the innovation of gunpowder in medieval China, later developed by the gunpowder empires, mainly the Mughals and Safavids, which led to advanced warfare through the use of guns. A report by archaeologist Rakesh Tewari on Lahuradewa, India, shows new C14 datings that range between 9000 and 8000 BCE associated with rice, making Lahuradewa the earliest Neolithic site in all of South Asia. The prehistoric Beifudi site near Yixian in Hebei Province, China, contains relics of a culture contemporaneous with the Cishan and Xinglongwa cultures of about 8000–7000 BCE, neolithic cultures east of the Taihang Mountains, filling in an archaeological gap between the two Northern Chinese cultures. The total excavated area is more than 1,200 square meters, and the collection of neolithic findings at the site consists of two phases. Around 5500 BCE the Halafian culture appeared in Lebanon, Israel, Syria, Anatolia, and northern Mesopotamia, based upon dryland agriculture. In southern Mesopotamia were the alluvial plains of Sumer and Elam. Since there was little rainfall, irrigation systems were necessary. The Ubaid culture flourished from 5500 BCE. The Chalcolithic period (or Copper Age) began about 4500 BCE, then the Bronze Age began about 3500 BCE, replacing the Neolithic cultures.
The Indus Valley Civilization (IVC) was a Bronze Age civilization (3300–1300 BCE; mature period 2600–1900 BCE) which was centered mostly in the western part of the Indian Subcontinent; it is considered that an early form of Hinduism was practiced during this civilization. Some of the great cities of this civilization include Harappa and Mohenjo-daro, which had a high level of town planning and arts. The cause of the destruction of these regions around 1700 BCE is debatable, although evidence suggests it was caused by natural disasters (especially flooding). This era marks the Vedic period in India, which lasted from roughly 1500 to 500 BCE. During this period, the Sanskrit language developed and the Vedas were composed, epic hymns that told tales of gods and wars. This was the basis for the Vedic religion, which would eventually grow more sophisticated and develop into Hinduism. China and Vietnam were also centres of metalworking. Dating back to the Neolithic Age, the first bronze drums, called the Dong Son drums, have been uncovered in and around the Red River Delta regions of Vietnam and Southern China. These relate to the prehistoric Dong Son Culture of Vietnam. In Ban Chiang, Thailand (Southeast Asia), bronze artifacts have been discovered dating to 2100 BCE. In Nyaunggan, Burma, bronze tools have been excavated along with ceramics and stone artifacts. Dating is still broad (3500–500 BCE). The Iron Age saw the widespread use of iron tools, weaponry, and armor throughout the major civilizations of Asia. The Achaemenid dynasty of the Persian Empire, founded by Cyrus the Great, ruled an area from Greece and Turkey to the Indus River and Central Asia during the 6th to 4th centuries BCE. Persian politics included a tolerance for other cultures, a highly centralized government, and significant infrastructure developments. Later, in Darius the Great's rule, the territories were integrated, a bureaucracy was developed, nobility were assigned military positions, tax collection was carefully organized, and spies were used to ensure the loyalty of regional officials. The primary religion of Persia at this time was Zoroastrianism, developed by the philosopher Zoroaster. It introduced an early form of monotheism to the area. The religion banned animal sacrifice and the use of intoxicants in rituals, and introduced the concept of spiritual salvation through personal moral action, an end time, and both general and particular judgment with a heaven or hell. These concepts would heavily influence later emperors and the masses. More importantly, Zoroastrianism would be a significant precursor to the Abrahamic religions of Christianity, Islam, and Judaism. The Persian Empire was successful in establishing peace and stability throughout the Middle East and was a major influence in art, politics (affecting Hellenistic leaders), and religion. Alexander the Great conquered this dynasty in the 4th century BCE, creating the brief Hellenistic period. He was unable to establish stability and after his death, Persia broke into small, weak dynasties including the Seleucid Empire, followed by the Parthian Empire. By the end of the Classical age, Persia had been reconsolidated into the Sassanid Empire, also known as the second Persian Empire. The Roman Empire would later control parts of Western Asia. The Seleucid, Parthian and Sassanid dynasties of Persia dominated Western Asia for centuries.
The Maurya and Gupta empires are called the Golden Age of India and were marked by extensive inventions and discoveries in science, technology, art, religion, and philosophy that crystallized the elements of what is generally known as Indian culture. The religions of Hinduism and Buddhism, which began in the Indian subcontinent, were an important influence on South, East and Southeast Asia. By 600 BCE, India had been divided into 17 regional states that would occasionally feud amongst themselves. In 327 BCE, Alexander the Great came to India with a vision of conquering the whole world. He crossed northwestern India and created the province of Bactria but could not move further because his army wanted to return to their families. Shortly afterwards, the soldier Chandragupta Maurya began to take control of the Ganges river and soon established the Maurya Empire. The Maurya Empire (Sanskrit: मौर्य राजवंश, Maurya Rājavaṃśa) was a geographically extensive and powerful empire in ancient India, ruled by the Mauryan dynasty from 321 to 185 BCE. It was one of the world's largest empires in its time, stretching to the Himalayas in the north, what is now Assam in the east, probably beyond modern Pakistan in the west, and annexing Balochistan and much of what is now Afghanistan, at its greatest extent. South of the Mauryan empire was Tamilakam, an independent country dominated by three dynasties, the Pandyans, Cholas and Cheras. The government established by Chandragupta was led by an autocratic king, who primarily relied on the military to assert his power. It also made use of a bureaucracy and even sponsored a postal service. Chandragupta's grandson, Ashoka, greatly extended the empire by conquering most of modern-day India (save for the southern tip). He eventually converted to Buddhism, though, and began a peaceful reign in which he promoted the religion as well as humane methods throughout India. The Maurya Empire would disintegrate soon after Ashoka's death and was conquered by the Kushan invaders from the northwest, establishing the Kushan Empire. Their conversion to Buddhism caused the religion to be associated with foreigners, and its popularity therefore declined. The Kushan Empire would fall apart by 220 CE, creating more political turmoil in India. Then in 320, the Gupta Empire (Sanskrit: गुप्त राजवंश, Gupta Rājavanśha) was established and covered much of the Indian Subcontinent. Founded by Maharaja Sri-Gupta, the dynasty was the model of a classical civilization. Gupta kings united the area primarily through negotiation with local leaders and families as well as strategic intermarriage. Their rule covered less land than the Maurya Empire, but established greater stability. In 535, the empire ended when India was overrun by the Hunas. The Zhou dynasty had existed in China since 1029 BCE and would continue until 258 BCE. The Zhou dynasty had been using a feudal system, giving power to local nobility and relying on their loyalty in order to control its large territory. As a result, the Chinese government at this time tended to be very decentralized and weak, and there was often little the emperor could do to resolve national issues. Nonetheless, the government was able to retain its position with the creation of the Mandate of Heaven, which could establish an emperor as divinely chosen to rule. The Zhou additionally discouraged the human sacrifice of the preceding eras and unified the Chinese language.
Finally, the Zhou government encouraged settlers to move into the Yangtze River valley, thus creating the Chinese Middle Kingdom. But by 500 BCE, its political stability began to decline due to repeated nomadic incursions and internal conflict among the feuding princes and families. This was lessened by the many philosophical movements, starting with the life of Confucius. His philosophical writings (called Confucianism) concerning the respect of elders and of the state would later be popularly used in the Han dynasty. Additionally, Laozi's concepts of Taoism, including yin and yang and the innate duality and balance of nature and the universe, became popular throughout this period. Nevertheless, the Zhou Dynasty eventually disintegrated as the local nobles began to gain more power, and their conflict devolved into the Warring States period, from about 403 to 221 BCE. One leader eventually came out on top, Qin Shi Huang ("Shǐ Huángdì"), who overthrew the last Zhou emperor and established the Qin dynasty. The Qin dynasty (Chinese: 秦朝; pinyin: Qín Cháo) was the first ruling dynasty of Imperial China, lasting from 221 to 207 BCE. The new Emperor abolished the feudal system and directly appointed a bureaucracy that would rely on him for power. Shi Huang's imperial forces crushed any regional resistance, and they furthered the Chinese empire by expanding down to the South China Sea and northern Vietnam. Greater organization brought a uniform tax system, a national census, regulated road building (and cart width), standard measurements, standard coinage, and an official written and spoken language. Further reforms included new irrigation projects, the encouragement of silk manufacturing, and (most famously) the beginning of the construction of the Great Wall of China—designed to keep out the nomadic raiders who constantly harassed the Chinese people. However, Shi Huang was infamous for his tyranny, forcing laborers to build the Wall, ordering heavy taxes, and severely punishing all who opposed him. He oppressed Confucians and promoted Legalism, the idea that people were inherently evil, and that a strong, forceful government was needed to control them. Legalism was infused with realistic, logical views and rejected the pleasures of educated conversation as frivolous. All of this made Shi Huang extremely unpopular with the people. As the Qin began to weaken, various factions began to fight for control of China. The Han dynasty (simplified Chinese: 汉朝; traditional Chinese: 漢朝; pinyin: Hàn Cháo; 206 BCE – 220 CE) was the second imperial dynasty of China, preceded by the Qin Dynasty and succeeded by the Three Kingdoms (220–265 CE). Spanning over four centuries, the period of the Han Dynasty is considered a golden age in Chinese history. One of the Han dynasty's greatest emperors, Emperor Wu of Han, established a peace throughout China comparable to the Pax Romana seen in the Mediterranean a hundred years later. To this day, China's majority ethnic group refers to itself as the "Han people". The Han Dynasty was established when two peasants succeeded in rising up against Shi Huang's significantly weaker successor-son. The new Han government retained the centralization and bureaucracy of the Qin, but greatly reduced the repression seen before. They expanded their territory into Korea, Vietnam, and Central Asia, creating an even larger empire than the Qin.
The Han developed contacts with the Persian Empire in the Middle East and the Romans, through the Silk Road, with which they were able to trade many commodities—primarily silk. Many ancient civilizations were influenced by the Silk Road, which connected China, India, the Middle East and Europe. Han emperors like Wu also promoted Confucianism as the national "religion" (although it is debated by theologians as to whether it is defined as such or as a philosophy). Shrines devoted to Confucius were built and Confucian philosophy was taught to all scholars who entered the Chinese bureaucracy. The bureaucracy was further improved with the introduction of an examination system that selected scholars of high merit. These bureaucrats were often upper-class people educated in special schools, but their power was often checked by lower-class people brought into the bureaucracy through their skill. The Chinese imperial bureaucracy was very effective and highly respected by all in the realm and would last over 2,000 years. The Han government was highly organized and it commanded the military, judicial law (which used a system of courts and strict laws), agricultural production, the economy, and the general lives of its people. The government also promoted intellectual philosophy, scientific research, and detailed historical records. However, despite all of this impressive stability, central power began to lose control by the turn of the Common Era. As the Han Dynasty declined, many factors continued to pummel it into submission until China was left in a state of chaos. By 100 CE, philosophical activity slowed, and corruption ran rampant in the bureaucracy. Local landlords began to take control as the scholars neglected their duties, and this resulted in heavy taxation of the peasantry. Taoists began to gain significant ground and protested the decline. They started to proclaim magical powers and promised to save China with them; the Taoist Yellow Turban Rebellion in 184 (led by rebels in yellow scarves) failed but was able to weaken the government. The aforementioned Huns, combined with diseases, killed up to half of the population and officially ended the Han dynasty by 220. The ensuing period of chaos was terrible and lasted for three centuries, during which many weak regional rulers and dynasties failed to establish order in China. This period of chaos and attempts at order is commonly known as that of the Six Dynasties. The first part of this included the Three Kingdoms which started in 220 and describes the brief and weak successor "dynasties" that followed the Han. In 265, the Jin dynasty of China was founded and this soon split into two different empires in control of northwestern and southeastern China. In 420, the conquest and abdication of those two dynasties resulted in the first of the Southern and Northern Dynasties. The Northern and Southern Dynasties succeeded one another until finally, by 557, the Northern Zhou dynasty ruled the north and the Chen dynasty ruled the south. During this period, the Eastern world empires continued to expand through trade, migration and conquests of neighboring areas. Gunpowder was widely used as early as the 11th century, and the Chinese were using movable type printing five hundred years before Gutenberg created his press. Buddhism, Taoism, and Confucianism were the dominant philosophies of the Far East during the Middle Ages.
Marco Polo was not the first Westerner to travel to the Orient and return with amazing stories of this different culture, but his accounts published in the late 13th and early 14th centuries were the first to be widely read throughout Europe. The Arabian peninsula and the surrounding Middle East and Near East regions saw dramatic change during the Medieval era caused primarily by the spread of Islam and the establishment of the Arabian Empires. In the 5th century, the Middle East was separated into small, weak states; the two most prominent were the Sassanian Empire of the Persians in what is now Iran and Iraq, and the Byzantine Empire in Anatolia (modern-day Turkey). The Byzantines and Sassanians fought with each other continually, a reflection of the rivalry between the Roman Empire and the Persian Empire seen during the previous five hundred years. The fighting weakened both states, leaving the stage open to a new power. Meanwhile, the nomadic Bedouin tribes who dominated the Arabian desert saw a period of tribal stability, greater trade networking and a familiarity with Abrahamic religions or monotheism. While the Byzantine Roman and Sassanid Persian empires were both weakened by the Byzantine–Sasanian War of 602–628, a new power in the form of Islam grew in the Middle East under Muhammad in Medina. In a series of rapid Muslim conquests, the Rashidun army, led by the Caliphs and skilled military commanders such as Khalid ibn al-Walid, swept through most of the Middle East, taking more than half of Byzantine territory in the Arab–Byzantine wars and completely engulfing Persia in the Muslim conquest of Persia. It would be the Arab Caliphates of the Middle Ages that would first unify the entire Middle East as a distinct region and create the dominant ethnic identity that persists today. These Caliphates included the Rashidun Caliphate, Umayyad Caliphate, Abbasid Caliphate, and later the Seljuq Empire. After Muhammad introduced Islam, it jump-started Middle Eastern culture into an Islamic Golden Age, inspiring achievements in architecture, the revival of old advances in science and technology, and the formation of a distinct way of life. Muslims saved and spread Greek advances in medicine, algebra, geometry, astronomy, anatomy, and ethics that would later find their way back to Western Europe. The dominance of the Arabs came to a sudden end in the mid-11th century with the arrival of the Seljuq Turks, migrating south from the Turkic homelands in Central Asia. They conquered Persia, Iraq (capturing Baghdad in 1055), Syria, Palestine, and the Hejaz. This was followed by a series of invasions from Christian Western Europe. The fragmentation of the Middle East allowed combined forces, mainly from England, France, and the emerging Holy Roman Empire, to enter the region. In 1099 the knights of the First Crusade captured Jerusalem and founded the Kingdom of Jerusalem, which survived until 1187, when Saladin retook the city. Smaller crusader fiefdoms survived until 1291. In the early 13th century, a new wave of invaders, the armies of the Mongol Empire, swept through the region, sacking Baghdad in the Siege of Baghdad (1258) and advancing as far south as the border of Egypt in what became known as the Mongol conquests. The Mongols eventually retreated in 1335, but the chaos that ensued throughout the empire deposed the Seljuq Turks. In 1401, the region was further plagued by the Turko-Mongol conqueror Timur and his ferocious raids. By then, another group of Turks had arisen as well, the Ottomans.
The Mongol Empire conquered a large part of Asia in the 13th century, an area extending from China to Europe. Medieval Asia was the kingdom of the Khans. Never before had any person controlled as much land as Genghis Khan. He built his power unifying separate Mongol tribes before expanding his kingdom south and west. He and his grandson, Kublai Khan, controlled lands in China, Burma, Central Asia, Russia, Iran, the Middle East, and Eastern Europe. Estimates are that the Mongol armies reduced the population of China by nearly a third. Genghis Khan was a pagan who tolerated nearly every religion, and their culture often suffered the harshest treatment from Mongol armies. The Khan armies pushed as far west as Jerusalem before being defeated in 1260. The Indian early medieval age, 600 to 1200, is defined by regional kingdoms and cultural diversity. When Harsha of Kannauj, who ruled much of the Indo-Gangetic Plain from 606 to 647, attempted to expand southwards, he was defeated by the Chalukya ruler of the Deccan. When his successor attempted to expand eastwards, he was defeated by the Pala king of Bengal. When the Chalukyas attempted to expand southwards, they were defeated by the Pallavas from farther south, who in turn were opposed by the Pandyas and the Cholas from still farther south. Under the rule of Raja Raja Chola, the Cholas defeated their rivals and rose to become a regional power. The Cholas expanded northward and defeated the Eastern Chalukyas, Kalinga and the Palas. Under Rajendra Chola the Cholas created the first notable navy of the Indian subcontinent. The Chola navy extended the influence of the Chola empire to Southeast Asia. During this time, pastoral peoples whose land had been cleared to make way for the growing agricultural economy were accommodated within caste society, as were new non-traditional ruling classes. The Muslim conquest in the Indian subcontinent mainly took place from the 12th century onwards, though earlier Muslim conquests include the limited inroads into modern Afghanistan and Pakistan and the Umayyad campaigns in India, during the time of the Rajput kingdoms in the 8th century. Major economic and military powers such as the Delhi Sultanate and the Bengal Sultanate were established. The search for their wealth led to the voyages of Christopher Columbus. China saw the rise and fall of the Sui, Tang, Song, and Yuan dynasties and therefore improvements in its bureaucracy, the spread of Buddhism, and the advent of Neo-Confucianism. It was an unsurpassed era for Chinese ceramics and painting. Medieval architectural masterpieces such as the Great South Gate in Todaiji, Japan, and the Tien-ning Temple in Peking, China, are some of the surviving constructions from this era. A new powerful dynasty began to rise in the 580s, amongst the divided factions of China. This was started when an aristocrat named Yang Jian married his daughter into the Northern Zhou dynasty. He proclaimed himself Emperor Wen of Sui and appeased the nomadic military by abandoning the Confucian scholar-gentry. Emperor Wen soon led the conquest of the southern Chen Dynasty and united China once more under the Sui dynasty. The emperor lowered taxes and constructed granaries that he used to prevent famine and control the market. Later, Wen's son murdered him for the throne and declared himself Emperor Yang of Sui. Emperor Yang revived the Confucian scholars and the bureaucracy, much to the anger of the aristocrats and nomadic military leaders.
Yang became an excessive leader who overused China's resources for personal luxury and perpetuated exhaustive attempts to conquer Goguryeo. His military failures and neglect of the empire forced his own ministers to assassinate him in 618, ending the Sui Dynasty. Fortunately, one of Yang's most respected advisors, Li Yuan, was able to claim the throne quickly, preventing a chaotic collapse. He proclaimed himself Emperor Gaozu and established the Tang dynasty in 618. The Tang saw expansion of China through conquest to Tibet in the west, Vietnam in the south, and Manchuria in the north. Tang emperors also improved the education of scholars in the Chinese bureaucracy. A Ministry of Rites was established and the examination system was improved to better qualify scholars for their jobs. In addition, Buddhism became popular in China with two different strains between the peasantry and the elite, the Pure Land and Zen strains, respectively. Greatly supporting the spread of Buddhism was Empress Wu, who additionally claimed an unofficial "Zhou dynasty" and displayed China's tolerance of a woman ruler, which was rare at the time. However, Buddhism would also experience some backlash, especially from Confucianists and Taoists. This would usually involve criticism about how it was costing the state money, since the government was unable to tax Buddhist monasteries and additionally sent many grants and gifts to them. The Tang dynasty began to decline under the rule of Emperor Xuanzong, who began to neglect the economy and military and caused unrest amongst the court officials due to the excessive influence of his concubine, Yang Guifei, and her family. This eventually sparked a revolt in 755. Although the revolt failed, subduing it required involvement with the unruly nomadic tribes outside of China and distributing more power to local leaders—leaving the government and economy in a degraded state. The Tang dynasty officially ended in 907 and various factions led by the aforementioned nomadic tribes and local leaders would fight for control of China in the Five Dynasties and Ten Kingdoms period. By 960, most of China proper had been reunited under the Song dynasty, although it lost territories in the north and could not defeat one of the nomadic tribes there—the Liao dynasty of the highly sinicized Khitan people. From then on, the Song would have to pay tribute to avoid invasion and thus set the precedent for other nomadic kingdoms to oppress them. The Song also saw the revival of Confucianism in the form of Neo-Confucianism. This had the effect of putting the Confucian scholars at a higher status than aristocrats or Buddhists and also intensified the reduction of women's status and power. The infamous practice of foot binding developed in this period as a result. Eventually the Liao dynasty in the north was overthrown by the Jin dynasty of the Manchu-related Jurchen people. The new Jin kingdom invaded northern China, leaving the Song to flee farther south and creating the Southern Song dynasty in 1126. There, cultural life flourished. By 1227, the Mongols had conquered the Western Xia kingdom northwest of China. Soon the Mongols incurred upon the Jin empire of the Jurchens. Chinese cities were soon besieged by the Mongol hordes that showed little mercy for those who resisted, and the Southern Song Chinese were quickly losing territory. In 1271 the reigning great khan, Kublai Khan, proclaimed himself Emperor of China and officially established the Yuan Dynasty.
By 1290, all of China was under the control of the Mongols, marking the first time the country was ever completely conquered by a foreign invader; the new capital was established at Khanbaliq (modern-day Beijing). Kublai Khan segregated Mongol culture from Chinese culture by discouraging interactions between the two peoples, separating living spaces and places of worship, and reserving top administrative positions for Mongols, thus preventing Confucian scholars from continuing the bureaucratic system. Nevertheless, Kublai remained fascinated with Chinese thinking, surrounding himself with Chinese Buddhist, Taoist, or Confucian advisors. Mongol women displayed a contrasting independent nature compared to the Chinese women who continued to be suppressed. Mongol women often rode out on hunts or even to war. Kublai's wife, Chabi, was a perfect example of this; Chabi advised her husband on several political and diplomatic matters; she convinced him that the Chinese were to be respected and well-treated in order to make them easier to rule. However, this was not enough to affect Chinese women's position, and the increasingly Neo-Confucian successors of Kublai further repressed Chinese and even Mongol women. The Black Death, which would later ravage Western Europe, had its beginnings in Asia, where it wiped out large populations in China in 1331. The Three Kingdoms of Korea were Goguryeo in the north, Baekje in the southwest, and Silla in the southeast of the Korean peninsula. These three kingdoms acted as a cultural bridge between China and Japan, and through them Japan was able to absorb the splendid cultures of China. Prince Shōtoku of Japan was taught by two teachers, one from Baekje and the other from Goguryeo. When Japan invaded Silla, Goguryeo helped Silla defeat Japan. Baekje was the first of the three to reach its heyday, in the 5th century AD, with its capital at what is now Seoul. During its heyday, the kingdom established colonies overseas: Liaodong in China and Kyushu in Japan were colonies of Baekje during its short heyday. Goguryeo was the strongest kingdom of all and sometimes styled itself an empire. Its heyday was the 6th century. King Gwanggaeto expanded its territory to the north, so that Goguryeo dominated lands from the Korean peninsula to Manchuria. His son, King Jangsu, expanded its territory to the south, occupied Seoul, and moved the capital to Pyeongyang. Thanks to Jangsu's southward expansion, Goguryeo came to occupy almost three quarters of the Korean peninsula. Silla was the last to reach its heyday. King Jinheung pushed north and occupied Seoul, but this did not last long. Baekje became stronger, attacked Silla, and occupied more than 40 of Silla's cities, so that Silla could hardly survive. China's Sui dynasty invaded Goguryeo, beginning the Goguryeo–Sui War between Korea and China. Goguryeo won, and the Sui dynasty fell. The Tang dynasty then invaded Goguryeo again and helped Silla unify the peninsula; Silla, being the weakest of the three, had asked Tang for help. Goguryeo, Baekje, and Japan helped each other against the Tang–Silla alliance, but Baekje and Goguryeo fell. The Tang dynasty then betrayed Silla in order to occupy the whole Korean peninsula. During the Silla–Tang war, the people of fallen Baekje and Goguryeo helped Silla against the Chinese invasion, so Silla was able to defeat China and unify the peninsula. This war helped the Korean people unite in spirit. The remaining Goguryeo people established Balhae and won their war against Tang in the late 7th century AD.
Balhae was the northern state and Later Silla the southern state. Balhae was quite a strong kingdom, as its predecessor Goguryeo had been, and eventually the emperor of the Tang dynasty acknowledged Balhae as "a strong country in the East". It traded readily with Japan, China, and Silla, and both Balhae and Later Silla sent many students to study in China. Arab merchants also came to the Korean peninsula, and Korea became known as "Silla" in the western countries. Silla improved the Korean writing system called Idu letters, and Idu influenced the katakana of Japan. The Liao dynasty invaded Balhae in the early 10th century, and Balhae fell. The unified Korean kingdom, Later Silla, divided into three kingdoms again because of its corrupt central government; these were Later Goguryeo (also known as "Taebong"), Later Baekje, and Later Silla. Wang Geon, a general of Later Goguryeo, took the throne and changed the state's name to Goryeo, derived from the old and powerful kingdom of Goguryeo, and Goryeo reunified the peninsula. Goryeo reunited the Korean peninsula during the Later Three Kingdoms period and named itself an empire, though nowadays Goryeo is known as a kingdom. The name "Goryeo" was derived from Goguryeo, and the name Korea was derived from Goryeo. Goryeo took in people from fallen Balhae and also widened its territory to the north by fending off the Liao dynasty and attacking the Jurchen people. Goryeo developed a splendid culture; the first book printed with movable metal type, the Jikji, was also from Korea. Goryeo ware is one of the most famous legacies of this kingdom. Goryeo imported the Chinese system of government and developed it in its own way. During this period, laws were codified and a civil service system was introduced. Buddhism flourished and spread throughout the peninsula. The Tripitaka Koreana comprises 81,258 wooden printing blocks in total. It was made to keep Korea safe against the Mongolian invasion, and it is now recognized by UNESCO as world heritage. Goryeo won its battles against the Liao dynasty. Then the Mongol Empire invaded Goryeo; Goryeo did not disappear, but it had to submit to the Mongols. After 80 years, in the 14th century, as the Mongol Yuan dynasty lost power, King Gongmin tried to free Goryeo from Mongol control, even though his wife was also Mongolian. In the 14th century the Ming dynasty wanted Goryeo to submit to China, but Goryeo refused and decided to invade China instead. On the way to China, however, the Goryeo general Lee Sung-Gae turned back, overthrew Goryeo, and established a new dynasty, Joseon. He became Taejo of Joseon, that is, the first king of Joseon. Japan's medieval history began with the Asuka period, from around 600 to 710. The time was characterized by the Taika Reform and imperial centralization, both of which were a direct result of growing Chinese contact and influences. In 603, Prince Shōtoku of the Yamato dynasty began significant political and cultural changes. He issued the Seventeen-article constitution in 604, centralizing power towards the emperor (under the title "tenno", or heavenly sovereign) and removing the power to levy taxes from provincial lords. Shōtoku was also a patron of Buddhism and he encouraged building temples competitively. Shōtoku's reforms transitioned Japan to the Nara period (c. 710 to c. 794), with the moving of the Japanese capital to Nara in Honshu. This period saw the culmination of Chinese-style writing, etiquette, and architecture in Japan along with Confucian ideals to supplement the already present Buddhism. Peasants revered both Confucian scholars and Buddhist monks.
However, in the wake of the 735–737 Japanese smallpox epidemic, Buddhism gained the status of state religion and the government ordered the construction of numerous Buddhist temples, monasteries, and statues. The lavish spending, combined with the fact that many aristocrats did not pay taxes, put a heavy burden on the peasantry that caused poverty and famine. Eventually the Buddhist establishment got out of control, threatening to seize imperial power and causing Emperor Kanmu to move the capital to Heian-kyō to avoid a Buddhist takeover. This marked the beginning of the Heian period and the end of the Taika reform. With the Heian period (from 794 to 1185) came a decline of imperial power. Chinese influence also declined, as a result of its correlation with imperial centralization and the heavenly mandate, which came to be regarded as ineffective. By 838, the Japanese court discontinued its embassies in China; only traders and Buddhist monks continued to travel to China. Buddhism itself came to be considered more Japanese than Chinese, and persisted in being popular in Japan. Buddhist monks and monasteries continued their attempts to gather personal power in the courts, along with aristocrats. One particular noble family that dominated influence in the imperial bureaucracy was the Fujiwara clan. During this time cultural life in the imperial court flourished. There was a focus on beauty and social interaction, and writing and literature were considered refined. Noblewomen were cultured the same as noblemen, dabbling in creative works and politics. A prime example of both Japanese literature and women's role in high-class culture at this time was "The Tale of Genji", written by the lady-in-waiting Murasaki Shikibu. Popularization of wooden palaces and shōji sliding doors amongst the nobility also occurred. Loss of imperial power also led to the rise of provincial warrior elites. Small lords began to function independently. They administered laws, supervised public works projects, and collected revenue for themselves instead of the imperial court. Regional lords also began to build their own armies. These warriors were loyal only to their local lords and not the emperor, although the imperial government increasingly called them in to protect the capital. The regional warrior class developed into the samurai, which created its own culture, including specialized weapons such as the katana and a form of chivalry, bushido. The imperial government's loss of control in the second half of the Heian period allowed banditry to grow, requiring both feudal lords and Buddhist monasteries to procure warriors for protection. As imperial control over Japan declined, feudal lords also became more independent and seceded from the empire. These feudal states exploited the peasants living in them, reducing the farmers to an almost serf-like status. Peasants were also rigidly restricted from rising to the samurai class, being physically set apart by dress and weapon restrictions. As a result of their oppression, many peasants turned to Buddhism as a hope for reward in the afterlife for upright behavior. With the increase of feudalism, families in the imperial court began to depend on alliances with regional lords. The Fujiwara clan declined from power, replaced by a rivalry between the Taira clan and the Minamoto clan. This rivalry grew into the Genpei War in the early 1180s. This war saw the use of both samurai and peasant soldiers. For the samurai, battle was ritual and they often easily cut down the poorly trained peasantry.
The Minamoto clan proved successful due to their rural alliances. Once the Taira was destroyed, the Minamoto established a military government called the shogunate (or bakufu), centered in Kamakura. The end of the Genpei War and the establishment of the Kamakura shogunate marked the end of the Heian period and the beginning of the Kamakura period in 1185, solidifying feudal Japan. The Russian Empire began to expand into Asia from the 17th century, and would eventually take control of all of Siberia and most of Central Asia by the end of the 19th century. The Ottoman Empire controlled Anatolia, the Middle East, North Africa and the Balkans from the 16th century onwards. In the 17th century, the Manchu conquered China and established the Qing Dynasty. In the 16th century, the Mughal Empire controlled much of India and initiated the second golden age for India. China was the largest economy in the world for much of the time, followed by India until the 18th century. By 1368, Zhu Yuanzhang had proclaimed himself the Hongwu Emperor and established the Ming dynasty of China. Immediately, the new emperor and his followers drove the Mongols and their culture out of China and beyond the Great Wall. The new emperor was somewhat suspicious of the scholars that dominated China's bureaucracy, for he had been born a peasant and was uneducated. Nevertheless, Confucian scholars were necessary to China's bureaucracy and were reestablished, along with reforms that improved the exam system and made it more important than ever for entry into the bureaucracy. The exams became more rigorous, cheating was harshly punished, and those who excelled were more highly appraised. Finally, Hongwu also directed more power towards the role of emperor so as to end the corrupt influences of the bureaucrats. The Hongwu emperor, perhaps out of sympathy for the common folk, built many irrigation systems and other public projects that provided help for the peasant farmers. They were also allowed to cultivate and claim unoccupied land without having to pay any taxes, and labor demands were lowered. However, none of this was able to stop the rising landlord class that gained many privileges from the government and slowly gained control of the peasantry. Moneylenders foreclosed on peasant debt in exchange for mortgages and bought up farmland, forcing the farmers to become the landlords' tenants or to wander elsewhere for work. Also during this time, Neo-Confucianism intensified even more than in the previous two dynasties (the Song and Yuan). Focus on the superiority of elders over youth, men over women, and teachers over students resulted in minor discrimination against the "inferior" classes. The fine arts grew in the Ming era, with improved techniques in brush painting that depicted scenes of court, city or country life; people such as scholars or travelers; or the beauty of mountains, lakes, or marshes. The Chinese novel fully developed in this era, with classics such as "Water Margin", "Journey to the West", and "Jin Ping Mei" being written. The economy grew rapidly in the Ming Dynasty as well. The introduction of American crops such as maize, sweet potatoes, and peanuts allowed for cultivation of crops in infertile land and helped prevent famine. The population boom that began in the Song dynasty accelerated until China's population went from 80 or 90 million to 150 million in three centuries, culminating in 1600. This paralleled the market economy that was growing both internally and externally.
Silk, tea, ceramics, and lacquer-ware were produced by artisans that traded them in Asia and to Europeans. Westerners began to trade (with some Chinese-assigned limits), primarily in the port-towns of Macau and Canton. Although merchants benefited greatly from this, land remained the primary symbol of wealth in China and traders' riches were often put into acquiring more land. Therefore, little of this wealth was used in private enterprises that could have allowed China to develop the market economy that often accompanied the highly successful Western countries. In the interest of national glory, the Chinese began sending impressive junk ships across the South China Sea and the Indian Ocean. From 1403 to 1433, the Yongle Emperor commissioned expeditions led by the admiral Zheng He, a Muslim eunuch from China. Chinese junks carrying hundreds of soldiers, goods, and animals for zoos travelled to Southeast Asia, Persia, southern Arabia, and east Africa to show off Chinese power. Their prowess exceeded that of the Europeans of the time, and had these expeditions not ended, the world economy might be different from today. In 1433, the Chinese government decided that the cost of a navy was an unnecessary expense. The Chinese navy was slowly dismantled and focus on interior reform and military defense began. It was China's longstanding priority to protect itself from nomads, and it accordingly returned to that focus. The growing limits on the Chinese navy would leave the country vulnerable to foreign invasion by sea later on. As was inevitable, Westerners arrived on the Chinese east coast, primarily Jesuit missionaries, who reached the mainland in 1582. They attempted to convert the Chinese people to Christianity by first converting the top of the social hierarchy and allowing the lower classes to subsequently convert. To further gain support, many Jesuits adopted Chinese dress, customs, and language. Some Chinese scholars were interested in certain Western teachings and especially in Western technology. By the 1580s, Jesuit scholars like Matteo Ricci and Adam Schall amazed the Chinese elite with technological advances such as European clocks, improved calendars and cannons, and the accurate prediction of eclipses. Although some of the scholar-gentry converted, many were suspicious of the Westerners, whom they called "barbarians", and even resented them for the embarrassment they received at the hands of Western correction. Nevertheless, a small group of Jesuit scholars remained at the court to impress the emperor and his advisors. Near the end of the 1500s, the extremely centralized government that gave so much power to the emperor had begun to fail as more incompetent rulers took the mantle. Along with these weak rulers came increasingly corrupt officials who took advantage of the decline. Once more the public projects fell into disrepair due to neglect by the bureaucracy, resulting in floods, drought, and famine that rocked the peasantry. The famine soon became so terrible that some peasants resorted to selling their children into slavery to save them from starvation, or to eating bark, the feces of geese, or other people. Many landlords abused the situation by building large estates where desperate farmers would work and be exploited. In turn, many of these farmers resorted to flight, banditry, and open rebellion. All of this corresponded with the usual dynastic decline of China seen before, as well as the growing foreign threats.
In the mid-16th century, Japanese and ethnic Chinese pirates began to raid the southern coast, and neither the bureaucracy nor the military was able to stop them. The threat of the northern Manchu people also grew. The Manchu already formed a large state north of China when, in the early 17th century, a local leader named Nurhaci suddenly united them under the Eight Banners, the armies into which the rival clans were organized. The Manchus adopted many Chinese customs, in particular modeling their administration on the Chinese bureaucracy. Nevertheless, the Manchus at this point still remained a Chinese vassal. In 1644 the Chinese administration had become so weak that the 16th and last emperor, the Chongzhen Emperor, did not grasp the severity of an ensuing rebellion by local dissenters until the enemy had invaded the Forbidden City (his personal estate). He soon hanged himself in the imperial gardens. For a brief time, the rebels proclaimed the Shun dynasty, until a loyalist Ming official called in support from the Manchus to put down the new dynasty. The Shun dynasty ended within a year and the Manchu were now within the Great Wall. Taking advantage of the situation, the Manchus marched on the Chinese capital of Beijing. Within two decades all of China belonged to the Manchu and the Qing dynasty was established. In early-modern Korea, the nearly 500-year-old kingdom of Goryeo fell and the new Joseon dynasty rose on August 5, 1392. Taejo of Joseon changed the country's name from Goryeo to Joseon. The fourth king, Sejong the Great, created Hangul, the Korean alphabet, in 1443. He also advanced science and technology: during his reign Koreans produced sundials, water clocks, a rain gauge, star charts, maps of Korea, and detailed records of small villages. He expanded the territory to the north, so that Korea's present-day borders largely took shape in his era, and he attacked the Japanese pirates based on Tsushima Island who had long raided Korea. For these reasons he is often considered the greatest king in Korean history. The ninth king, Seongjong, completed the first comprehensive Korean law code in 1485, further improving culture and people's lives. In 1592, Japan under Toyotomi Hideyoshi invaded Korea, beginning the Imjin War. Joseon had enjoyed a long peace, much like the Pax Romana, and was unprepared; it lost battle after battle, the Japanese army took Seoul, and the whole Korean peninsula was in danger. But Yi Sun-sin, Korea's most renowned admiral, defeated the Japanese fleet off the southern coast, at one point with only 13 ships against 133, in the legendary Battle of Myeongnyang. The Ming dynasty then came to Joseon's aid and Japan was defeated, so Toyotomi Hideyoshi's campaign in Korea failed, and the Tokugawa shogunate later rose to power in Japan. Korea suffered greatly in the Imjin War. Not long afterwards, the Manchus invaded Joseon, in what are known as the Qing invasions of Joseon. The first invasion was meant to secure the Manchu rear, since Qing was at war with Ming and Ming's ally Joseon posed a threat; the second was meant to force Joseon into submission to Qing. Qing then defeated Ming and took all of China's territory, and Joseon, having lost the second war, also had to submit to Qing. After the Qing invasions, the princes of the Joseon dynasty spent their childhoods in China. A son of King Injo met Adam Schall in Beijing and hoped to introduce Western technology to the Korean people once he became king, but he died before he could take the throne. 
Another prince then became the 17th king of the Joseon dynasty, Hyojong, who sought revenge against Qing on behalf of his kingdom and the fallen Ming dynasty. Later kings such as Yeongjo and Jeongjo tried to improve their people's lives and to curb the unreasonable factional competition among officials. From the 17th to the 18th century, Joseon sent missions of diplomats and artists, called 'Tongshinsa', to Japan more than ten times. They were sent to share Korea's advanced culture, and Japanese people prized poems received from Korean nobles; at that time, Korea's standing was higher than Japan's. That relationship was reversed in the 19th century, when Japan became more powerful than both Korea and China, and Joseon instead sent envoys called 'Sooshinsa' to learn Japan's advanced technologies. After King Jeongjo's death, a few noble families controlled the whole kingdom in the early 19th century, and toward the end of that period Western powers began to encroach on Joseon. In 1876, Joseon was declared free of its obligations to Qing. This suited the Japanese Empire: with Joseon a fully independent kingdom, Japan could intervene in it more easily. After this, Joseon opened trade with the US and sent 'Sooshinsa' to Japan, 'Youngshinsa' to Qing, and 'Bobingsa' to the US and Europe. These missions brought many modern goods and ideas to the Korean peninsula. In early-modern Japan following the Sengoku period of "warring states", central government had been largely reestablished by Oda Nobunaga and Toyotomi Hideyoshi during the Azuchi–Momoyama period. After the Battle of Sekigahara in 1600, central authority fell to Tokugawa Ieyasu, who completed this process and received the title of "shōgun" in 1603. Society in the Japanese "Tokugawa period" (see Edo society), unlike the shogunates before it, was based on the strict class hierarchy originally established by Toyotomi Hideyoshi. The "daimyōs" (feudal lords) were at the top, followed by the warrior-caste of samurai, with the farmers, artisans, and merchants ranking below. Under the "Sakoku" policy, the country was strictly closed to foreigners, with few exceptions. Literacy rose in the two centuries of isolation. In some parts of the country, particularly smaller regions, "daimyōs" and samurai were more or less identical, since "daimyōs" might be trained as samurai, and samurai might act as local lords. Otherwise, the largely inflexible nature of this social stratification system unleashed disruptive forces over time. Taxes on the peasantry were set at fixed amounts which did not account for inflation or other changes in monetary value. As a result, the tax revenues collected by the samurai landowners were worth less and less over time. This often led to numerous confrontations between noble but impoverished samurai and well-to-do peasants. None, however, proved compelling enough to seriously challenge the established order until the arrival of foreign powers. In the Indian subcontinent, the Mughal Empire ruled most of India in the early 18th century. During the reigns of Emperor Shah Jahan and his son Aurangzeb, who governed under Islamic sharia, the empire reached its architectural and economic zenith, becoming the world's largest economy, worth over 25% of world GDP, and showing signs of proto-industrialization. Following major events such as Nader Shah's invasion of the Mughal Empire, the Battle of Plassey, the Battle of Buxar and the long Anglo-Mysore Wars, most of South Asia was colonised and governed by the British Empire, thus establishing the British Raj. 
The "classic period" ended with the death of Mughal Emperor Aurangzeb, although the dynasty continued for another 150 years. During this period, the Empire was marked by a highly centralized administration connecting the different regions. All the significant monuments of the Mughals, their most visible legacy, date to this period which was characterised by the expansion of Persian cultural influence in the Indian subcontinent, with brilliant literary, artistic, and architectural results. The Maratha Empire was located in the south west of present-day India and expanded greatly under the rule of the Peshwas, the prime ministers of the Maratha empire. In 1761, the Maratha army lost the Third Battle of Panipat against Shah Ahmad Durrani which halted imperial expansion and the empire was then divided into a confederacy of Maratha states. The European economic and naval powers pushed into Asia, first to do trading, and then to take over major colonies. The Dutch led the way followed by the British. Portugal had arrived first, but was too weak to maintain its small holdings and was largely pushed out, retaining only Goa and Macau. The British set up a private organization, the East India Company, which handled both trade and Imperial control of much of India. The commercial colonization of India commenced in 1757, after the Battle of Plassey, when the Nawab of Bengal surrendered his dominions to the British East India Company, in 1765, when the Company was granted the "diwani", or the right to collect revenue, in Bengal and Bihar, or in 1772, when the Company established a capital in Calcutta, appointed its first Governor-General, Warren Hastings, and became directly involved in governance. The Maratha states, following the Anglo-Maratha wars, eventually lost to the British East India Company in 1818 with the Third Anglo-Maratha War. The rule lasted until 1858, when, after the Indian rebellion of 1857 and consequent of the Government of India Act 1858, the British government assumed the task of directly administering India in the new British Raj. In 1819 Stamford Raffles established Singapore as a key trading post for Britain in their rivalry with the Dutch. However, their rivalry cooled in 1824 when an Anglo-Dutch treaty demarcated their respective interests in Southeast Asia. From the 1850s onwards, the pace of colonization shifted to a significantly higher gear. The Dutch East India Company (1800) and British East India Company (1858) were dissolved by their respective governments, who took over the direct administration of the colonies. Only Thailand was spared the experience of foreign rule, although, Thailand itself was also greatly affected by the power politics of the Western powers. Colonial rule had a profound effect on Southeast Asia. While the colonial powers profited much from the region's vast resources and large market, colonial rule did develop the region to a varying extent. The Great Game was a political and diplomatic confrontation between Great Britain and Russia over Afghanistan and neighbouring territories in Central and South Asia. It lasted from 1828 to 1907. There was no war, but there were many threats. Russia was fearful of British commercial and military inroads into Central Asia, and Britain was fearful of Russia threatening its largest and most important possession, India. This resulted in an atmosphere of distrust and the constant threat of war between the two empires. 
Britain made it a high priority to protect all the approaches to India, and the "great game" primarily describes how the British did this with respect to a possible Russian threat. Historians with access to the archives have concluded that Russia had no plans involving India, as the Russians repeatedly stated. The Great Game began in 1838 when Britain decided to gain control over the Emirate of Afghanistan and make it a protectorate, and to use the Ottoman Empire, the Persian Empire, the Khanate of Khiva, and the Emirate of Bukhara as buffer states between both empires. This would protect India and also key British sea trade routes by stopping Russia from gaining a port on the Persian Gulf or the Indian Ocean. Russia proposed Afghanistan as the neutral zone, and the final result was a division of Afghanistan with a neutral zone in the middle between Russian areas in the north and British areas in the south. Important episodes included the failed First Anglo-Afghan War of 1838, the First Anglo-Sikh War of 1845, the Second Anglo-Sikh War of 1848, the Second Anglo-Afghan War of 1878, and the annexation of Kokand by Russia. The 1901 novel "Kim" by Rudyard Kipling made the term popular and introduced the new implication of great power rivalry. It became even more popular after the 1979 advent of the Soviet–Afghan War. By 1644, the northern Manchu people had conquered China and once more established a foreign dynasty, the Qing Dynasty. The Manchu Qing emperors, especially the Kangxi Emperor, himself a Confucian scholar, remained largely conservative, retaining the bureaucracy and the scholars within it, as well as the Confucian ideals present in Chinese society. However, changes in the economy and new attempts at resolving certain issues occurred too. These included increased trade with Western countries that brought large amounts of silver into the Chinese economy in exchange for tea, porcelain, and silk textiles. This allowed a new merchant class, the compradors, to develop. In addition, repairs were done on existing dikes, canals, roadways, and irrigation works. This, combined with the lowering of taxes and government-assigned labor, was supposed to calm peasant unrest. However, the Qing failed to control the growing landlord class, which had begun to exploit the peasantry and abuse its position. By the late 18th century, both internal and external issues began to arise in Qing China's politics, society, and economy. The exam system by which scholars were assigned to the bureaucracy became increasingly corrupt; bribes and other forms of cheating allowed inexperienced and inept scholars to enter the bureaucracy, and this eventually caused rampant neglect of the peasantry, the military, and the previously mentioned infrastructure projects. Poverty and banditry steadily rose, especially in rural areas, and mass migrations in search of work occurred throughout China. The perpetually conservative government refused to make reforms that could resolve these issues. China saw its status reduced by what it perceived as parasitic trade with Westerners. Originally, European traders were at a disadvantage because the Chinese cared little for their goods, while European demand for Chinese commodities such as tea and porcelain only grew. In order to tip the trade imbalance in their favor, British merchants began to sell Indian opium to the Chinese. Not only did this sap Chinese bullion reserves, it also led to widespread drug addiction amongst the bureaucracy and society in general. 
A ban was placed on opium as early as 1729 by the Yongzheng Emperor, but little was done to enforce it. By the early 19th century, under the new Daoguang Emperor, the government began serious efforts to eradicate opium from Chinese society. Leading this endeavour were respected scholar-officials including Imperial Commissioner Lin Zexu. After Lin destroyed more than 20,000 chests of opium in the summer of 1839, Europeans demanded compensation for what they saw as unwarranted Chinese interference in their affairs. When it was not paid, the British declared war later the same year, starting what became known as the First Opium War. The outdated Chinese junks were no match for the advanced British gunboats, and soon the Yangzi River region came under threat of British bombardment and invasion. The emperor had no choice but to sue for peace, resulting in the exile of Lin and the making of the Treaty of Nanking, which ceded control of Hong Kong to the British and opened up trade and diplomacy with other Western countries, including Germany, France, and the USA. Northeast China came under the influence of Russia with the building of the Chinese Eastern Railway through Harbin to Vladivostok. The Empire of Japan replaced Russian influence in the region as a result of the Russo-Japanese War in 1904–1905, and Japan laid the South Manchurian Railway in 1906 to Port Arthur. During the Warlord Era in China, Zhang Zuolin established himself in Northeast China, but was murdered by the Japanese for being too independent. The former Chinese emperor, Puyi, was then placed on the throne to lead the Japanese puppet state of Manchukuo. In August 1945, the Soviet Union invaded the region. From 1945 to 1948, Northeast China was a base area for Mao Zedong's People's Liberation Army in the Chinese Civil War. With the encouragement of the Kremlin, the area was used as a staging ground during the Civil War for the Chinese Communists, who were victorious in 1949 and have controlled the region ever since. By the 19th century, the king of Joseon had become powerless, because the noble family of the king's wife had taken power and ruled the country as it pleased. Heungseon Daewongun, the father of Gojong, the 26th king of the Joseon dynasty, wanted to make the king powerful again even though he himself was not the king. As the father of the young king, he broke the power of the noble families and corrupt organizations, and the royal family regained its authority. But he also set out to rebuild Gyeongbokgung Palace in order to display royal power to the people, and he was criticized because he spent an enormous amount of money and caused inflation in doing so. His son, Gojong, the actual king, then took power himself. Gojong changed the nation's name to "Daehan Jeguk", meaning the Korean Empire, and promoted himself to emperor. The new empire adopted more Western technology and strengthened its military power, and it intended to become a neutral nation. Unfortunately, during the Russo-Japanese War Japan ignored this neutrality, and after defeating the Russian Empire it began to encroach on Korea. Japan first illegally stripped the Korean Empire of its diplomatic rights, but the Western countries ignored this encroachment because they knew Japan had become a strong power by defeating the Russian Empire. Emperor Gojong therefore sent envoys to the Dutch city of The Hague to let the world know that Japan had seized the empire's rights illegally, but the attempt failed because the envoys were not allowed into the conference hall. Japan then used this as a pretext to force Gojong from the throne. 
Three years later, in 1910, the Korean Empire was annexed into the Empire of Japan, the first time Korea had fallen under foreign rule since the Han dynasty invasion of 108 BC. The European powers had control of other parts of Asia by the early 20th century, such as British India, French Indochina, the Spanish East Indies, and Portuguese Macau and Goa. The Great Game between Russia and Britain was the struggle for power in the Central Asian region in the nineteenth century. The Trans-Siberian Railway, crossing Asia by rail, was completed by 1916. Parts of Asia remained free from European control, although not influence, such as Persia, Thailand and most of China. In the twentieth century, Imperial Japan expanded into China and Southeast Asia during the Second World War. After the war, many Asian countries became independent from European powers. During the Cold War, the northern parts of Asia were under communist control in the Soviet Union and the People's Republic of China, while Western allies formed pacts such as CENTO and SEATO. Conflicts such as the Korean War, Vietnam War and Soviet invasion of Afghanistan were fought between communists and anti-communists. In the decades after the Second World War, a massive restructuring plan drove Japan to become the world's second-largest economy, a phenomenon known as the Japanese post-war economic miracle. The Arab–Israeli conflict has dominated much of the recent history of the Middle East. After the Soviet Union's collapse in 1991, there were many new independent nations in Central Asia. Prior to World War II, China faced a civil war between Mao Zedong's Communist party and Chiang Kai-shek's nationalist party; the nationalists appeared to be in the lead. However, once the Japanese invaded in 1937, the two parties were forced into a temporary truce in order to defend China. The nationalists faced many military failures that caused them to lose territory and, subsequently, the respect of the Chinese masses. In contrast, the communists' use of guerilla warfare (led by Lin Biao) proved effective against the Japanese conventional methods and put the Communist Party on top by 1945. They also gained popularity for the reforms they were already applying in controlled areas, including land redistribution, education reforms, and widespread health care. Over the next four years, the nationalists were forced to retreat to the island of Taiwan (formerly known as Formosa), east of mainland China, where they remain today. In mainland China, the People's Republic of China was established by the Communist Party, with Mao Zedong as its state chairman. The communist government in China was defined by the party cadres. These hard-line officers controlled the People's Liberation Army, which itself controlled large amounts of the bureaucracy. This system was further controlled by the Central Committee, which additionally supported the state chairman, who was considered the head of the government. The People's Republic's foreign policies included the suppression of secession attempts in Mongolia and Tibet and the support of North Korea and North Vietnam in the Korean War and Vietnam War, respectively. Additionally, by 1960 China began to cut off its connections with the Soviet Union due to border disputes and an increasing Chinese sense of superiority, especially Mao's personal sense of superiority over the Soviet premier, Nikita Khrushchev. Today China plays an important role in world economics and politics; it is the world's second-largest economy and the second-fastest-growing economy. 
Around the time of the Korean War, Korea was divided into North and South. Syngman Rhee became the first president of South Korea, and Kim Il-sung became the supreme leader of North Korea. After the war, Syngman Rhee tried to rule South Korea as a dictator, prompting the April Revolution, and he was eventually exiled from the country. Park Chung-hee came to power through a military coup d'état and became president in 1963. He dispatched Republic of Korea Army forces to the Vietnam War, and during this era the South Korean economy overtook that of North Korea. Although Park Chung-hee improved the nation's economy, he ruled as a dictator and grew unpopular; he was eventually assassinated by Kim Jae-gyu. In 1979, Chun Doo-hwan took power in another military coup d'état. He violently suppressed resistance in the city of Gwangju, an event known as the Gwangju Uprising. Despite the uprising, Chun Doo-hwan became president, but the people resisted again in 1987 in a movement called the June Struggle. As a result of the Gwangju Uprising and the June Struggle, South Korea finally became a democratic republic in 1987. Roh Tae-woo (1988–93), Kim Young-sam (1993–98), Kim Dae-jung (1998–2003), Roh Moo-hyun (2003–2008), Lee Myung-bak (2008–2013), Park Geun-hye (2013–2017), and Moon Jae-in (2017–) were elected president in turn after 1987. In 1960, North Korea was far wealthier than South Korea, but by the 1970s South Korea had begun to overtake the North Korean economy. As of 2018, South Korea ranked tenth in the world by GDP.
History of the Americas The prehistory of the Americas (North, South, and Central America, and the Caribbean) begins with people migrating to these areas from Asia during the height of an Ice Age. These groups are generally believed to have been isolated from the people of the "Old World" until the coming of Europeans in the 10th century from Iceland led by Leif Erikson and with the voyages of Christopher Columbus in 1492. The ancestors of today's American Indigenous peoples were the Paleo-Indians; they were hunter-gatherers who migrated into North America. The most popular theory asserts that migrants came to the Americas via Beringia, the land mass now covered by the ocean waters of the Bering Strait. Small lithic stage peoples followed megafauna like bison, mammoth (now extinct), and caribou, thus gaining the modern nickname "big-game hunters." Groups of people may also have traveled into North America on shelf or sheet ice along the northern Pacific coast. Cultural traits brought by the first immigrants later evolved and spawned such cultures as Iroquois on North America and Pirahã of South America. These cultures later developed into civilizations. In many cases, these cultures expanded at a later date than their Old World counterparts. Cultures that may be considered advanced or civilized include Norte Chico, Cahokia, Zapotec, Toltec, Olmec, Maya, Aztec, Chimor, Mixtec, Moche, Mississippian, Puebloan, Totonac, Teotihuacan, Huastec people, Purépecha, Izapa, Mazatec, Muisca, and the Inca. After the voyages of Christopher Columbus in 1492, Spanish and later Portuguese, English, French and Dutch colonial expeditions arrived in the New World, conquering and settling the discovered lands, which led to a transformation of the cultural and physical landscape in the Americas. Spain colonized most of the Americas from present-day Southwestern United States, Florida and the Caribbean to the southern tip of South America. Portugal settled in what is mostly present-day Brazil while England established colonies on the Eastern coast of the United States, as well as the North Pacific coast and in most of Canada. France settled in Quebec and other parts of Eastern Canada and claimed an area in what is today the central United States. The Netherlands settled New Netherland (administrative centre New Amsterdam - now New York), some Caribbean islands and parts of Northern South America. European colonization of the Americas led to the rise of new cultures, civilizations and eventually states, which resulted from the fusion of Native American and European traditions, peoples and institutions. The transformation of American cultures through colonization is evident in architecture, religion, gastronomy, the arts and particularly languages, the most widespread being Spanish (376 million speakers), English (348 million) and Portuguese (201 million). The colonial period lasted approximately three centuries, from the early 16th to the early 19th centuries, when Brazil and the larger Hispanic American nations declared independence. The United States obtained independence from Great Britain much earlier, in 1776, while Canada formed a federal dominion in 1867 and received legal independence in 1931. Others remained attached to their European parent state until the end of the 19th century, such as Cuba and Puerto Rico which were linked to Spain until 1898. 
Smaller territories such as Guyana obtained independence in the mid-20th century, while certain Caribbean islands and French Guiana remain part of a European power to this day. The specifics of Paleo-Indian migration to and throughout the Americas, including the exact dates and routes traveled, are subject to ongoing research and discussion. The traditional theory has been that these early migrants moved into the Beringia land bridge between eastern Siberia and present-day Alaska around 40,000 – 17,000 years ago, when sea levels were significantly lowered due to the Quaternary glaciation. These people are believed to have followed herds of now-extinct Pleistocene megafauna along "ice-free corridors" that stretched between the Laurentide and Cordilleran ice sheets. Another route proposed is that, either on foot or using primitive boats, they migrated down the Pacific Northwest coast to South America. Evidence of the latter would since have been covered by a sea level rise of a hundred meters following the last ice age. Archaeologists contend that the Paleo-Indian migration out of Beringia (eastern Alaska) took place between 40,000 and around 16,500 years ago. This time range is a hot source of debate. The few points of agreement achieved to date are an origin in Central Asia, with widespread habitation of the Americas during the end of the last glacial period, or more specifically what is known as the late glacial maximum, around 16,000 – 13,000 years before present. The American Journal of Human Genetics released an article in 2007 stating "Here we show, by using 86 complete mitochondrial genomes, that all Indigenous American haplogroups, including Haplogroup X (mtDNA), were part of a single founding population." Amerindian groups in the Bering Strait region exhibit perhaps the strongest DNA or mitochondrial DNA relations to Siberian peoples. The genetic diversity of Amerindian indigenous groups increases with distance from the assumed entry point into the Americas. Certain west-to-east patterns of genetic diversity suggest, particularly in South America, that migration proceeded first down the west coast and then eastward. Geneticists have variously estimated that peoples of Asia and the Americas were part of the same population from 42,000 to 21,000 years ago. New studies shed light on the founding population of indigenous Americans, suggesting that their ancestry traces to both east Asians and western Eurasians who migrated to North America directly from Siberia. A 2013 study in the journal Nature reported that DNA found in the 24,000-year-old remains of a young boy in Mal’ta, Siberia, suggests that up to one-third of indigenous Americans may have ancestry that can be traced back to western Eurasians, who may have "had a more north-easterly distribution 24,000 years ago than commonly thought". Professor Kelly Graf said that "Our findings are significant at two levels. First, it shows that Upper Paleolithic Siberians came from a cosmopolitan population of early modern humans that spread out of Africa to Europe and Central and South Asia. Second, Paleoindian skeletons with phenotypic traits atypical of modern-day Native Americans can be explained as having a direct historical connection to Upper Paleolithic Siberia." A route through Beringia is seen as more likely than the Solutrean hypothesis. On October 3, 2014, the Oregon cave where the oldest DNA evidence of human habitation in North America was found was added to the National Register of Historic Places. 
The DNA, radiocarbon dated to 14,300 years ago, was found in fossilized human coprolites uncovered in the Paisley Five Mile Point Caves in south central Oregon. The Lithic stage or "Paleo-Indian period", is the earliest classification term referring to the first stage of human habitation in the Americas, covering the Late Pleistocene epoch. The time period derives its name from the appearance of "Lithic flaked" stone tools. Stone tools, particularly projectile points and scrapers, are the primary evidence of the earliest well known human activity in the Americas. Lithic reduction stone tools are used by archaeologists and anthropologists to classify cultural periods. Several thousand years after the first migrations, the first complex civilizations arose as hunter-gatherers settled into semi-agricultural communities. Identifiable sedentary settlements began to emerge in the so-called Middle Archaic period around 6000 BCE. Particular archaeological cultures can be identified and easily classified throughout the Archaic period. In the late Archaic, on the north-central coastal region of Peru, a complex civilization arose which has been termed the Norte Chico civilization, also known as Caral-Supe. It is the oldest known civilization in the Americas and one of the five sites where civilization originated independently and indigenously in the ancient world, flourishing between the 30th and 18th centuries BC. It pre-dated the Mesoamerican Olmec civilization by nearly two millennia. It was contemporaneous with the Egypt following the unification of its kingdom under Narmer and the emergence of the first Egyptian hieroglyphics. Monumental architecture, including earthwork platform mounds and sunken plazas have been identified as part of the civilization. Archaeological evidence points to the use of textile technology and the worship of common god symbols. Government, possibly in the form of theocracy, is assumed to have been required to manage the region. However, numerous questions remain about its organization. In archaeological nomenclature, the culture was pre-ceramic culture of the pre-Columbian Late Archaic period. It appears to have lacked ceramics and art. Ongoing scholarly debate persists over the extent to which the flourishing of Norte Chico resulted from its abundant maritime food resources, and the relationship that these resources would suggest between coastal and inland sites. The role of seafood in the Norte Chico diet has been a subject of scholarly debate. In 1973, examining the Aspero region of Norte Chico, Michael E. Moseley contended that a maritime subsistence (seafood) economy had been the basis of society and its early flourishing. This theory, later termed "maritime foundation of Andean Civilization" was at odds with the general scholarly consensus that civilization arose as a result of intensive grain-based agriculture, as had been the case in the emergence of civilizations in northeast Africa (Egypt) and southwest Asia (Mesopotamia). While earlier research pointed to edible domestic plants such as squash, beans, lucuma, guava, pacay, and camote at Caral, publications by Haas and colleagues have added avocado, achira, and maize (Zea Mays) to the list of foods consumed in the region. In 2013, Haas and colleagues reported that maize was a primary component of the diet throughout the period of 3000 to 1800 BC. Cotton was another widespread crop in Norte Chico, essential to the production of fishing nets and textiles. 
Jonathan Haas noted a mutual dependency, whereby "The prehistoric residents of the Norte Chico needed the fish resources for their protein and the fishermen needed the cotton to make the nets to catch the fish." In his 2005 book "1491", journalist Charles C. Mann surveyed the literature at the time, reporting a date "sometime before 3200 BC, and possibly before 3500 BC" as the beginning date for the formation of Norte Chico. He notes that the earliest date securely associated with a city is 3500 BC, at Huaricanga in the (inland) Fortaleza area. The Norte Chico civilization began to decline around 1800 BC as more powerful centers appeared to the south and north along its coast, and to the east within the Andes Mountains. After the decline of the Norte Chico civilization, several large, centralized civilizations developed in the Western Hemisphere: Chavin, Nazca, Moche, Huari, Quitus, Cañaris, Chimu, Pachacamac, Tiahuanaco, Aymara and Inca in the Central Andes (Ecuador, Peru and Bolivia); Muisca in Colombia; Taínos in the Dominican Republic (Hispaniola, Española) and parts of the Caribbean; and the Olmecs, Maya, Toltecs, Mixtecs, Zapotecs, Aztecs and Purepecha in southern North America (Mexico, Guatemala). The Olmec civilization was the first Mesoamerican civilization, beginning around 1600-1400 BC and ending around 400 BC. Mesoamerica is considered one of the six sites around the globe in which civilization developed independently and indigenously. This civilization is considered the mother culture of the Mesoamerican civilizations. The Mesoamerican calendar, numeral system, writing, and much of the Mesoamerican pantheon seem to have begun with the Olmec. Some elements of agriculture seem to have been practiced in Mesoamerica quite early. The domestication of maize is thought to have begun around 7,500 to 12,000 years ago. The earliest record of lowland maize cultivation dates to around 5100 BC. Agriculture continued to be mixed with a hunting-gathering-fishing lifestyle until quite late compared to other regions, but by 2700 BC, Mesoamericans were relying on maize, and living mostly in villages. Temple mounds and classes started to appear. By 1300–1200 BC, small centres coalesced into the Olmec civilization, which seems to have been a set of city-states, united in religious and commercial concerns. The Olmec cities had ceremonial complexes with earth/clay pyramids, palaces, stone monuments, aqueducts and walled plazas. The first of these centers was at San Lorenzo (until 900 BC). La Venta was the last great Olmec centre. Olmec artisans sculpted jade and clay figurines of jaguars and humans. Their iconic giant heads, believed to depict Olmec rulers, stood in every major city. The Olmec civilization ended in 400 BC, with the defacing and destruction of San Lorenzo and La Venta, two of the major cities. It nevertheless spawned many other states, most notably the Mayan civilization, whose first cities began appearing around 700-600 BC. Olmec influences continued to appear in many later Mesoamerican civilizations. Cities of the Aztecs, Mayas, and Incas were as large and organized as the largest in the Old World, with an estimated population of 200,000 to 350,000 in Tenochtitlan, the capital of the Aztec Empire. The market established in the city was said to have been the largest ever seen by the conquistadors when they arrived. The capital of the Cahokians, Cahokia, located near modern East St. Louis, Illinois, may have reached a population of over 20,000. 
At its peak, between the 12th and 13th centuries, Cahokia may have been the most populous city in North America. Monk's Mound, the major ceremonial center of Cahokia, remains the largest earthen construction of the prehistoric New World. These civilizations developed agriculture as well, breeding maize (corn) from having ears 2–5 cm in length to perhaps 10–15 cm in length. Potatoes, tomatoes, beans (greens), pumpkins, avocados, and chocolate are now the most popular of the pre-Columbian agricultural products. The civilizations did not develop extensive livestock as there were few suitable species, although alpacas and llamas were domesticated for use as beasts of burden and sources of wool and meat in the Andes. By the 15th century, maize was being farmed in the Mississippi River Valley after introduction from Mexico. The course of further agricultural development was greatly altered by the arrival of Europeans. Cahokia was a major regional chiefdom, with trade and tributary chiefdoms located in a range of areas from bordering the Great Lakes to the Gulf of Mexico. The Iroquois League of Nations or "People of the Long House", based in present-day upstate and western New York, had a confederacy model from the mid-15th century. It has been suggested that their culture contributed to political thinking during the development of the later United States government. Their system of affiliation was a kind of federation, different from the strong, centralized European monarchies. Leadership was restricted to a group of 50 sachem chiefs, each representing one clan within a tribe; the Oneida and Mohawk people had nine seats each; the Onondagas held fourteen; the Cayuga had ten seats; and the Seneca had eight. Representation was not based on population numbers, as the Seneca tribe greatly outnumbered the others. When a sachem chief died, his successor was chosen by the senior woman of his tribe in consultation with other female members of the clan; property and hereditary leadership were passed matrilineally. Decisions were not made through voting but through consensus decision making, with each sachem chief holding theoretical veto power. The Onondaga were the "firekeepers", responsible for raising topics to be discussed. They occupied one side of a three-sided fire (the Mohawk and Seneca sat on one side of the fire, the Oneida and Cayuga sat on the third side.) Elizabeth Tooker, an anthropologist, has said that it was unlikely the US founding fathers were inspired by the confederacy, as it bears little resemblance to the system of governance adopted in the United States. For example, it is based on inherited rather than elected leadership, selected by female members of the tribes, consensus decision-making regardless of population size of the tribes, and a single group capable of bringing matters before the legislative body. Long-distance trading did not prevent warfare and displacement among the indigenous peoples, and their oral histories tell of numerous migrations to the historic territories where Europeans encountered them. The Iroquois invaded and attacked tribes in the Ohio River area of present-day Kentucky and claimed the hunting grounds. Historians have placed these events as occurring as early as the 13th century, or in the 17th century Beaver Wars. Through warfare, the Iroquois drove several tribes to migrate west to what became known as their historically traditional lands west of the Mississippi River. 
Tribes originating in the Ohio Valley who moved west included the Osage, Kaw, Ponca and Omaha people. By the mid-17th century, they had resettled in their historical lands in present-day Kansas, Nebraska, Arkansas and Oklahoma. The Osage warred with Caddo-speaking Native Americans, displacing them in turn by the mid-18th century and dominating their new historical territories. The Pueblo people of what is now the Southwestern United States and northern Mexico lived in large, apartment-like structures of stone and adobe. They lived in Arizona, New Mexico, Utah, Colorado, and possibly surrounding areas. Chichimeca was the name that the Mexica (Aztecs) generically applied to a wide range of semi-nomadic peoples who inhabited the north of modern-day Mexico, and carried the same sense as the European term "barbarian". The name was adopted with a pejorative tone by the Spaniards when referring especially to the semi-nomadic hunter-gatherer peoples of northern Mexico.
History of Africa The history of Africa begins with the emergence of hominids, archaic humans and—at least 200,000 years ago—anatomically modern humans ("Homo sapiens"), in East Africa, and continues unbroken into the present as a patchwork of diverse and politically developing nation states. The earliest known recorded history arose in Ancient Egypt, and later in Nubia, the Sahel, the Maghreb and the Horn of Africa. Following the desertification of the Sahara, North African history became entwined with the Middle East and Southern Europe, while the Bantu expansion swept from modern day Cameroon (Central Africa) across much of the sub-Saharan continent in waves between around 1000 BC and AD 1, creating a linguistic commonality across much of the central and southern continent. During the Middle Ages, Islam spread west from Arabia to Egypt, crossing the Maghreb and the Sahel. Some notable pre-colonial states and societies in Africa include the Ajuran Empire, D'mt, Adal Sultanate, Alodia, Warsangali Sultanate, Kingdom of Nri, Nok culture, Mali Empire, Songhai Empire, Benin Empire, Oyo Empire, Kingdom of Lunda (Punu-yaka), Ashanti Empire, Ghana Empire, Mossi Kingdoms, Mutapa Empire, Kingdom of Mapungubwe, Kingdom of Sine, Kingdom of Sennar, Kingdom of Saloum, Kingdom of Baol, Kingdom of Cayor, Kingdom of Zimbabwe, Kingdom of Kongo, Empire of Kaabu, Kingdom of Ile Ife, Ancient Carthage, Numidia, Mauretania, and the Aksumite Empire. At its peak, prior to European colonialism, it is estimated that Africa had up to 10,000 different states and autonomous groups with distinct languages and customs. From the mid-7th century, the Arab slave trade saw Muslim Arabs enslave Africans. Following an armistice between the Rashidun Caliphate and the Kingdom of Makuria after the Second Battle of Dongola in 652 AD, they were transported, along with Asians and Europeans, across the Red Sea, Indian Ocean, and Sahara Desert. From the late 15th century, Europeans joined the slave trade, with the Portuguese taking the lead in partnership with other Europeans. This included the triangular trade, in which the Portuguese initially acquired slaves through trade and later by force as part of the Atlantic slave trade. They transported enslaved West, Central, and Southern Africans overseas. Subsequently, European colonization of Africa developed rapidly from around 10% (1870) to over 90% (1914) in the Scramble for Africa (1881–1914). However, following struggles for independence in many parts of the continent, as well as a weakened Europe after the Second World War, decolonization took place across the continent, culminating in the 1960 Year of Africa. Disciplines such as the recording of oral history, historical linguistics, archaeology and genetics have been vital in rediscovering the great African civilizations of antiquity. The first known hominids evolved in Africa. According to paleontology, the early hominids' skull anatomy was similar to that of the gorilla and the chimpanzee, great apes that also evolved in Africa, but the hominids had adopted a bipedal locomotion which freed their hands. This gave them a crucial advantage, enabling them to live in both forested areas and on the open savanna at a time when Africa was drying up and the savanna was encroaching on forested areas. This would have occurred 10 to 5 million years ago, but these claims are controversial, because biologists and geneticists date the appearance of anatomically modern humans to roughly the last 70,000 to 200,000 years. 
By 4 million years ago, several australopithecine hominid species had developed throughout Southern, Eastern and Central Africa. They were tool users and toolmakers. They scavenged for meat and were omnivores. By approximately 3.3 million years ago, primitive stone tools were first used to scavenge kills made by other predators and to harvest carrion and marrow from their bones. In hunting, "Homo habilis" was probably not capable of competing with large predators and was still more prey than hunter. "H. habilis" probably did steal eggs from nests and may have been able to catch small game and weakened larger prey (cubs and older animals). The tools were classed as Oldowan. Around 1.8 million years ago, "Homo ergaster" first appeared in the fossil record in Africa. From "Homo ergaster", "Homo erectus" evolved 1.5 million years ago. Some of the earlier representatives of this species were still fairly small-brained and used primitive stone tools, much like "H. habilis". The brain later grew in size, and "H. erectus" eventually developed a more complex stone tool technology called the Acheulean. Possibly the first hunters, "H. erectus" mastered the art of making fire and was the first hominid to leave Africa, colonizing most of Afro-Eurasia and perhaps later giving rise to "Homo floresiensis". Although some recent writers have suggested that "Homo georgicus" was the first and primary hominid ever to live outside Africa, many scientists consider "H. georgicus" to be an early and primitive member of the "H. erectus" species. The fossil record shows "Homo sapiens" (also known as "modern humans" or "anatomically modern humans") living in Africa by about 350,000-260,000 years ago. The earliest known "Homo sapiens" fossils include the Jebel Irhoud remains from Morocco (ca. 315,000 years ago), the Florisbad Skull from South Africa (ca. 259,000 years ago), and the Omo remains from Ethiopia (ca. 195,000 years ago). Scientists have suggested that "Homo sapiens" may have arisen between 350,000 and 260,000 years ago through a merging of populations in East Africa and South Africa. Evidence of a variety of behaviors indicative of behavioral modernity dates to the African Middle Stone Age, associated with early "Homo sapiens" and their emergence. Abstract imagery, widened subsistence strategies, and other "modern" behaviors have been discovered from that period in Africa, especially South, North, and East Africa. The Blombos Cave site in South Africa, for example, is famous for rectangular slabs of ochre engraved with geometric designs. Using multiple dating techniques, the site has been dated to around 77,000 and 100,000–75,000 years old. Ostrich egg shell containers engraved with geometric designs dating to 60,000 years ago were found at Diepkloof, South Africa. Beads and other personal ornaments that might be as much as 130,000 years old have been found in Morocco; the Cave of Hearths in South Africa has yielded a number of beads dating from significantly before 50,000 years ago, and shell beads dating to about 75,000 years ago have been found at Blombos Cave, South Africa. Specialized projectile weapons have also been found at various sites in Middle Stone Age Africa, including bone and stone arrowheads at South African sites such as Sibudu Cave (along with an early bone needle, also found at Sibudu) dating to approximately 60,000-70,000 years ago, and bone harpoons at the Central African site of Katanda dating to about 90,000 years ago. 
Evidence also exists for the systematic heat treating of silcrete stone to increase its flakeability for the purpose of toolmaking, beginning approximately 164,000 years ago at the South African site of Pinnacle Point and becoming common there for the creation of microlithic tools at about 72,000 years ago. Early stone-tipped projectile weapons (a characteristic tool of "Homo sapiens"), the stone tips of javelins or throwing spears, were discovered in 2013 at the Ethiopian site of Gademotta, and date to around 279,000 years ago. In 2008, an ochre processing workshop likely for the production of paints was uncovered dating to ca. 100,000 years ago at Blombos Cave, South Africa. Analysis shows that a liquefied pigment-rich mixture was produced and stored in the two abalone shells, and that ochre, bone, charcoal, grindstones and hammer-stones also formed a composite part of the toolkits. Evidence for the complexity of the task includes procuring and combining raw materials from various sources (implying they had a mental template of the process they would follow), possibly using pyrotechnology to facilitate fat extraction from bone, using a probable recipe to produce the compound, and the use of shell containers for mixing and storage for later use. Modern behaviors, such as the making of shell beads, bone tools and arrows, and the use of ochre pigment, are evident at a Kenyan site by 78,000-67,000 years ago. Expanding subsistence strategies beyond big-game hunting and the consequent diversity in tool types have been noted as signs of behavioral modernity. A number of South African sites have shown an early reliance on aquatic resources from fish to shellfish. Pinnacle Point, in particular, shows exploitation of marine resources as early as 120,000 years ago, perhaps in response to more arid conditions inland. Establishing a reliance on predictable shellfish deposits, for example, could reduce mobility and facilitate complex social systems and symbolic behavior. Blombos Cave and Site 440 in Sudan both show evidence of fishing as well. Taphonomic changes in fish skeletons from Blombos Cave have been interpreted as capture of live fish, clearly an intentional human behavior. Humans in North Africa (Nazlet Sabaha, Egypt) are known to have dabbled in chert mining, as early as ≈100,000 years ago, for the construction of stone tools. Evidence was found in 2018, dating to about 320,000 years ago, at the Kenyan site of Olorgesailie, of the early emergence of modern behaviors including: long-distance trade networks (involving goods such as obsidian), the use of pigments, and the possible making of projectile points. It is observed by the authors of three 2018 studies on the site that the evidence of these behaviors is approximately contemporary to the earliest known "Homo sapiens" fossil remains from Africa (such as at Jebel Irhoud and Florisbad), and they suggest that complex and modern behaviors began in Africa around the time of the emergence of "Homo sapiens". In 2019, further evidence of early complex projectile weapons in Africa was found at Adouma, Ethiopia, dated 80,000-100,000 years ago, in the form of points considered likely to belong to darts delivered by spear throwers. Around 65–50,000 years ago, the species' expansion out of Africa launched the colonization of the planet by modern human beings. By 10,000 BC, "Homo sapiens" had spread to most corners of Afro-Eurasia. Their dispersals are traced by linguistic, cultural and genetic evidence. 
The earliest physical evidence of astronomical activity appears to be a lunar calendar found on the Ishango bone, dated to between 23,000 and 18,000 BC, from what is now the Democratic Republic of the Congo. Scholars have argued that warfare was absent throughout much of humanity's prehistoric past, and that it emerged from more complex political systems as a result of sedentism, agricultural farming, and similar developments. However, findings at the site of Nataruk in Turkana County, Kenya, where the remains of 27 individuals who died as the result of an intentional attack by another group 10,000 years ago were discovered, suggest that inter-human conflict has a much longer history. Around 16,000 BC, from the Red Sea Hills to the northern Ethiopian Highlands, nuts, grasses and tubers were being collected for food. By 13,000 to 11,000 BC, people began collecting wild grains. This spread to Western Asia, which domesticated its wild grains, wheat and barley. Between 10,000 and 8000 BC, Northeast Africa was cultivating wheat and barley and raising sheep and cattle from Southwest Asia. A wet climatic phase in Africa turned the Ethiopian Highlands into a mountain forest. Omotic speakers domesticated enset around 6500–5500 BC. Around 7000 BC, the settlers of the Ethiopian highlands domesticated donkeys, and by 4000 BC domesticated donkeys had spread to Southwest Asia. Cushitic speakers, partially turning away from cattle herding, domesticated teff and finger millet between 5500 and 3500 BC. During the 10th millennium BP, pottery was developed throughout the southern Sahara and the Sahel. In the steppes and savannahs of the Sahara and Sahel in Northern West Africa, the Nilo-Saharan speakers and Mandé peoples started to collect and domesticate wild millet, African rice and sorghum between 8000 and 6000 BC. Later, gourds, watermelons, castor beans, and cotton were also collected and domesticated. The people started capturing wild cattle and holding them in circular thorn hedges, resulting in domestication. They also started making pottery and built stone settlements (e.g., Tichitt, Oualata). Fishing, using bone-tipped harpoons, became a major activity in the numerous streams and lakes formed from the increased rains. Mande peoples have been credited with the independent development of agriculture about 4000–3000 BC. In West Africa, the wet phase ushered in an expanding rainforest and wooded savanna from Senegal to Cameroon. Between 9000 and 5000 BC, Niger–Congo speakers domesticated the oil palm and raffia palm. Two seed plants, black-eyed peas and voandzeia (African groundnuts), were domesticated, followed by okra and kola nuts. Since most of the plants grew in the forest, the Niger–Congo speakers invented polished stone axes for clearing forest. Most of Southern Africa was occupied by pygmy peoples and Khoisan who engaged in hunting and gathering. Some of the oldest rock art was produced by them. For several hundred thousand years the Sahara has alternated between desert and savanna grassland in a 41,000-year cycle caused by changes ("precession") in the Earth's axis as it rotates around the Sun, which change the location of the North African monsoon. When the North African monsoon is at its strongest, annual precipitation and subsequent vegetation in the Sahara region increase, resulting in conditions commonly referred to as the "green Sahara". 
For a relatively weak North African monsoon, the opposite is true, with decreased annual precipitation and less vegetation resulting in a phase of the Sahara climate cycle known as the "desert Sahara". The Sahara has been a desert for several thousand years, and is expected to become green again in about 15,000 years' time (around AD 17,000). Just prior to Saharan desertification, the communities that developed south of Egypt, in what is now Sudan, were full participants in the Neolithic revolution and lived a settled to semi-nomadic lifestyle, with domesticated plants and animals. It has been suggested that megaliths found at Nabta Playa are examples of the world's first known archaeoastronomical devices, predating Stonehenge by some 1,000 years. The sociocultural complexity observed at Nabta Playa and expressed by different levels of authority within the society there has been suggested as forming the basis for the structure of both the Neolithic society at Nabta and the Old Kingdom of Egypt. By 5000 BC, Africa entered a dry phase, and the climate of the Sahara region gradually became drier. The population trekked out of the Sahara region in all directions, including towards the Nile Valley below the Second Cataract, where they made permanent or semipermanent settlements. A major climatic recession occurred, lessening the heavy and persistent rains in Central and Eastern Africa. Archaeological findings in Central Africa have been discovered dating back over 100,000 years. Extensive walled sites and settlements have recently been found at Zilum, Chad, southwest of Lake Chad, dating to the first millennium BC. Trade and improved agricultural techniques supported more sophisticated societies, leading to the early civilizations of Sao, Kanem, Bornu, Shilluk, Baguirmi, and Wadai. By around 1000 BC, Bantu migrants had reached the Great Lakes Region in Central Africa. Halfway through the first millennium BC, the Bantu had also settled as far south as what is now Angola. The first metals to be smelted in Africa were lead, copper, and bronze in the fourth millennium BC. Copper was smelted in Egypt during the predynastic period, and bronze came into use after 3,000 BC at the latest in Egypt and Nubia. Nubia was a major source of copper as well as gold. The use of gold and silver in Egypt dates back to the predynastic period. In the Aïr Mountains, present-day Niger, copper was smelted independently of developments in the Nile valley between 3,000 and 2,500 BC. The process used was unique to the region, indicating that it was not brought from outside the region; it became more mature by about 1,500 BC. By the 1st millennium BC, iron working had been introduced in Northwestern Africa, Egypt, and Nubia. According to Zangato and Holl, there is evidence of iron-smelting in the Central African Republic and Cameroon that may date back to 3,000 to 2,500 BC. In 670 BC, Nubians were pushed out of Egypt by Assyrians using iron weapons, after which the use of iron in the Nile valley became widespread. The theory of iron spreading to Sub-Saharan Africa via the Nubian city of Meroe is no longer widely accepted. Metalworking in West Africa has been dated as early as 2,500 BC at Egaro west of the Termit in Niger, and iron working was practiced there by 1,500 BC. Iron smelting has been dated to 2,000 BC in southeast Nigeria. In Central Africa, there is evidence that iron working may have been practiced as early as the 3rd millennium BC. 
Iron smelting was developed in the area between Lake Chad and the African Great Lakes between 1,000 and 600 BC, and in West Africa around 2,000 BC, long before it reached Egypt. Before 500 BC, the Nok culture on the Jos Plateau was already smelting iron. Iron smelting furnaces and slag have been excavated at sites in the Nsukka region of southeast Nigeria in what is now Igboland, dating to 2,000 BC at the site of Lejja (Eze-Uzomaka 2009) and to 750 BC at the site of Opi (Holl 2009). The site of Gbabiri (in the Central African Republic) has also yielded evidence of iron metallurgy, from a reduction furnace and blacksmith workshop, with earliest dates of 896–773 BC and 907–796 BC respectively. The ancient history of North Africa is inextricably linked to that of the Ancient Near East. This is particularly true of Ancient Egypt and Nubia. In the Horn of Africa, the Kingdom of Aksum ruled modern-day Eritrea, northern Ethiopia and the coastal area of the western part of the Arabian Peninsula. The Ancient Egyptians established ties with the Land of Punt in 2,350 BC. Punt was a trade partner of Ancient Egypt and it is believed that it was located in modern-day Somalia, Djibouti or Eritrea. Phoenician cities such as Carthage were part of the Mediterranean Iron Age and classical antiquity. Sub-Saharan Africa developed more or less independently in those times. After the desertification of the Sahara, settlement became concentrated in the Nile Valley, where numerous sacral chiefdoms appeared. The regions with the largest population pressure were in the Nile Delta region of Lower Egypt, in Upper Egypt, and also along the second and third cataracts of the Dongola Reach of the Nile in Nubia. This population pressure and growth was brought about by the cultivation of southwest Asian crops, including wheat and barley, and the raising of sheep, goats, and cattle. Population growth led to competition for farm land and the need to regulate farming. Regulation was established by the formation of bureaucracies among sacral chiefdoms. The first and most powerful of the chiefdoms was Ta-Seti, founded around 3,500 BC. The idea of sacral chiefdom spread throughout Upper and Lower Egypt. Later consolidation of the chiefdoms into broader political entities began to occur in Upper and Lower Egypt, culminating in the unification of Egypt into one political entity by Narmer (Menes) in 3,100 BC. Instead of being viewed as a sacral chief, he became a divine king. The henotheism, or worship of a single god within a polytheistic system, practiced in the sacral chiefdoms along Upper and Lower Egypt, became the polytheistic Ancient Egyptian religion. Bureaucracies became more centralized under the pharaohs, run by viziers, governors, tax collectors, generals, artists, and technicians. They engaged in tax collecting, organizing of labor for major public works, and building irrigation systems, pyramids, temples, and canals. During the Fourth Dynasty (2,620–2,480 BC), long distance trade was developed, with the Levant for timber, with Nubia for gold and skins, with Punt for frankincense, and also with the western Libyan territories. For most of the Old Kingdom, Egypt developed her fundamental systems, institutions and culture, always through the central bureaucracy and by the divinity of the Pharaoh. After the fourth millennium BC, Egypt started to extend direct military and political control over her southern and western neighbors.
By 2,200 BC, the Old Kingdom's stability was undermined by rivalry among the governors of the nomes who challenged the power of pharaohs and by invasions of Asiatics into the Nile Delta. The First Intermediate Period had begun, a time of political division and uncertainty. The Middle Kingdom of Egypt arose when Mentuhotep II of the Eleventh Dynasty unified Egypt once again between 2041 and 2016 BC, beginning with his conquest of the Tenth Dynasty in 2041 BC. Pyramid building resumed, long-distance trade re-emerged, and the center of power moved from Memphis to Thebes. Connections with the southern regions of Kush, Wawat and Irthet at the second cataract were made stronger. Then came the Second Intermediate Period, with the invasion of the Hyksos on horse-drawn chariots and utilizing bronze weapons, a technology heretofore unseen in Egypt. Horse-drawn chariots soon spread to the west in the inhabitable Sahara and North Africa. The Hyksos failed to hold on to their Egyptian territories and were absorbed by Egyptian society. This eventually led to one of Egypt's most powerful phases, the New Kingdom (1,580–1,080 BC), with the Eighteenth Dynasty. Egypt became a superpower controlling Nubia and Judea while exerting political influence on the Libyans to the west and on the Mediterranean. As before, the New Kingdom ended with invasion from the west by Libyan princes, leading to the Third Intermediate Period. Beginning with Shoshenq I, the Twenty-second Dynasty was established. It ruled for two centuries. To the south, Nubian independence and strength were being reasserted. This reassertion led to the conquest of Egypt by Nubia, begun by Kashta and completed by Piye (Piankhy, 751–730 BC) and Shabaka (716–695 BC). This was the birth of the Twenty-fifth Dynasty of Egypt. The Nubians tried to re-establish Egyptian traditions and customs. They ruled Egypt for a hundred years. This was ended by an Assyrian invasion, with Taharqa experiencing the full might of Assyrian iron weapons. The Nubian pharaoh Tantamani was the last of the Twenty-fifth dynasty. When the Assyrians and Nubians left, a new Twenty-sixth Dynasty emerged from Sais. It lasted until 525 BC, when Egypt was invaded by the Persians. Unlike the Assyrians, the Persians stayed. In 332 BC, Egypt was conquered by Alexander the Great. This was the beginning of the Ptolemaic dynasty, which ended with Roman conquest in 30 BC. Pharaonic Egypt had come to an end. Around 3,500 BC, one of the first sacral kingdoms to arise in the Nile was Ta-Seti, located in northern Nubia. Ta-Seti was a powerful sacral kingdom in the Nile Valley at the 1st and 2nd cataracts that exerted an influence over nearby chiefdoms and, based on pictorial representations, over Upper Egypt. Ta-Seti traded as far as Syro-Palestine, as well as with Egypt. Ta-Seti exported gold, copper, ostrich feathers, ebony and ivory to the Old Kingdom. By the 32nd century BC, Ta-Seti was in decline. After the unification of Egypt by Narmer in 3,100 BC, Ta-Seti was invaded by the Pharaoh Hor-Aha of the First Dynasty, destroying the final remnants of the kingdom. Ta-Seti is affiliated with the A-Group Culture known to archaeology. Small sacral kingdoms continued to dot the Nubian portion of the Nile for centuries after 3,000 BC. Around the latter part of the third millennium BC, there was further consolidation of the sacral kingdoms. Two kingdoms in particular emerged: the Sai kingdom, immediately south of Egypt, and the Kingdom of Kerma at the third cataract.
Sometime around the 18th century BC, the Kingdom of Kerma conquered the Kingdom of Sai, becoming a serious rival to Egypt. Kerma occupied a territory from the first cataract to the confluence of the Blue Nile, White Nile, and Atbarah River. About 1,575 to 1,550 BC, during the latter part of the Seventeenth Dynasty, the Kingdom of Kerma invaded Egypt. The Kingdom of Kerma allied itself with the Hyksos invasion of Egypt. Egypt eventually re-energized under the Eighteenth Dynasty and conquered the Kingdom of Kerma or Kush, ruling it for almost 500 years. The Kushites were Egyptianized during this period. By 1100 BC, the Egyptians had withdrawn from Kush. The region regained independence and reasserted its culture. Kush built a new religion around Amun and made Napata its spiritual center. In 730 BC, the Kingdom of Kush invaded Egypt, taking over Thebes and beginning the Nubian Empire. The empire extended from Palestine to the confluence of the Blue Nile, the White Nile, and the Atbara River. Around 670 BC, the Kushites were expelled from Egypt by iron-wielding Assyrians. Later, the administrative capital was moved from Napata to Meroë, developing a new Nubian culture. Initially, Meroites were highly Egyptianized, but they subsequently began to take on distinctive features. Nubia became a center of iron-making and cotton cloth manufacturing. Egyptian writing was replaced by the Meroitic alphabet. The lion god Apedemak was added to the Egyptian pantheon of gods. Trade links to the Red Sea increased, linking Nubia with Mediterranean Greece. Its architecture and art diversified, with pictures of lions, ostriches, giraffes, and elephants. Eventually, with the rise of Aksum, Nubia's trade links were broken and it suffered environmental degradation from the tree cutting required for iron production. In 350 AD, the Aksumite king Ezana brought Meroë to an end. The Egyptians referred to the people west of the Nile, ancestral to the Berbers, as Libyans. The Libyans were agriculturalists like the Mauri of Morocco and the Numidians of central and eastern Algeria and Tunisia. They were also nomadic, having the horse, and occupied the arid pastures and desert, like the Gaetuli. Berber desert nomads were typically in conflict with Berber coastal agriculturalists. The Phoenicians were Mediterranean seamen in constant search for valuable metals such as copper, gold, tin, and lead. They began to populate the North African coast with settlements, trading and mixing with the native Berber population. In 814 BC, Phoenicians from Tyre established the city of Carthage. By 600 BC, Carthage had become a major trading entity and power in the Mediterranean, largely through trade with tropical Africa. Carthage's prosperity fostered the growth of the Berber kingdoms, Numidia and Mauretania. Around 500 BC, Carthage provided a strong impetus for trade with Sub-Saharan Africa. Berber middlemen, who had maintained contacts with Sub-Saharan Africa since the desert had desiccated, utilized pack animals to transfer products from oasis to oasis. Danger lurked from the Garamantes of Fezzan, who raided caravans. Salt and metal goods were traded for gold, slaves, beads, and ivory. The Carthaginians were rivals to the Greeks and Romans. Carthage fought the Punic Wars, three wars with Rome: the First Punic War (264 to 241 BC), over Sicily; the Second Punic War (218 to 201 BC), in which Hannibal invaded Europe; and the Third Punic War (149 to 146 BC).
Carthage lost the first two wars, and in the third it was destroyed, becoming the Roman province of Africa, with the Berber Kingdom of Numidia assisting Rome. The Roman province of Africa became a major agricultural supplier of wheat, olives, and olive oil to imperial Rome via exorbitant taxation. Two centuries later, Rome brought the Berber kingdoms of Numidia and Mauretania under its authority. In the 420s AD, Vandals invaded North Africa and Rome lost her territories. The Berber kingdoms subsequently regained their independence. Christianity gained a foothold in Africa at Alexandria in the 1st century AD and spread to Northwest Africa. By 313 AD, with the Edict of Milan, all of Roman North Africa was Christian. Egyptians adopted Monophysite Christianity and formed the independent Coptic Church. Berbers adopted Donatist Christianity. Both groups refused to accept the authority of the Roman Catholic Church. As Carthaginian power grew, its impact on the indigenous population increased dramatically. Berber civilization was already at a stage in which agriculture, manufacturing, trade, and political organization supported several states. Trade links between Carthage and the Berbers in the interior grew, but territorial expansion also resulted in the enslavement or military recruitment of some Berbers and in the extraction of tribute from others. By the early 4th century BC, Berbers formed one of the largest elements, together with Gauls, of the Carthaginian army. In the Mercenary War (241–238 BC), mercenary soldiers of Carthage and their African allies rebelled. Berber soldiers joined after going unpaid following the defeat of Carthage in the First Punic War. Berbers succeeded in obtaining control of much of Carthage's North African territory, and they minted coins bearing the name Libyan, used in Greek to describe natives of North Africa. The Carthaginian state declined because of successive defeats by the Romans in the Punic Wars; in 146 BC the city of Carthage was destroyed. As Carthaginian power waned, the influence of Berber leaders in the hinterland grew. By the 2nd century BC, several large but loosely administered Berber kingdoms had emerged. Two of them were established in Numidia, behind the coastal areas controlled by Carthage. West of Numidia lay Mauretania, which extended across the Moulouya River in Morocco to the Atlantic Ocean. The high point of Berber civilization, unequaled until the coming of the Almoravids and Almohads more than a millennium later, was reached during the reign of Masinissa in the 2nd century BC. After Masinissa's death in 148 BC, the Berber kingdoms were divided and reunited several times. Masinissa's line survived until 24 AD, when the remaining Berber territory was annexed to the Roman Empire. The ancestors of the Somali people were an important link in the Horn of Africa connecting the region's commerce with the rest of the ancient world. Somali sailors and merchants were the main suppliers of frankincense, myrrh and spices, all of which were valuable luxuries to the Ancient Egyptians, Phoenicians, Mycenaeans and Babylonians. In the classical era, several flourishing Somali city-states such as Opone, Mosylon, Cape Guardafui, and Malao competed with the Sabaeans, Parthians and Axumites for the rich Indo–Greco-Roman trade. "Increases in urbanization and in the area under cultivation during Roman rule caused wholesale dislocations of the Berber society, forcing nomad tribes to settle or to move from their traditional rangelands.
Sedentary tribes lost their autonomy and connection with the land. Berber opposition to the Roman presence was nearly constant. The Roman emperor Trajan established a frontier in the south by encircling the Aurès and Nemencha mountains and building a line of forts from Vescera (modern Biskra) to Ad Majores (Hennchir Besseriani, southeast of Biskra). The defensive line extended at least as far as Castellum Dimmidi (modern Messaâd, southwest of Biskra), Roman Algeria's southernmost fort. Romans settled and developed the area around Sitifis (modern Sétif) in the 2nd century, but farther west the influence of Rome did not extend beyond the coast and principal military roads until much later." The Roman military presence in North Africa remained relatively small, consisting of about 28,000 troops and auxiliaries in Numidia and the two Mauretanian provinces. Starting in the 2nd century AD, these garrisons were manned mostly by local inhabitants. Aside from Carthage, urbanization in North Africa came in part with the establishment of settlements of veterans under the Roman emperors Claudius (reigned 41–54), Nerva (96–98), and Trajan (98–117). In Algeria such settlements included Tipasa, Cuicul or Curculum (modern Djemila, northeast of Sétif), Thamugadi (modern Timgad, southeast of Sétif), and Sitifis (modern Sétif). The prosperity of most towns depended on agriculture. Called the "granary of the empire", North Africa became one of the largest exporters of grain in the empire, shipping to the provinces which did not produce cereals, like Italy and Greece. Other crops included fruit, figs, grapes, and beans. By the 2nd century AD, olive oil rivaled cereals as an export item. The beginnings of the Roman imperial decline seemed less serious in North Africa than elsewhere. However, uprisings did take place. In 238 AD, landowners rebelled unsuccessfully against imperial fiscal policies. Sporadic tribal revolts in the Mauretanian mountains followed from 253 to 288, during the Crisis of the Third Century. The towns also suffered economic difficulties, and building activity almost ceased. The towns of Roman North Africa had a substantial Jewish population. Some Jews had been deported from Judea or Palestine in the 1st and 2nd centuries AD for rebelling against Roman rule; others had come earlier with Punic settlers. In addition, a number of Berber tribes had converted to Judaism. Christianity arrived in the 2nd century and soon gained converts in the towns and among slaves. More than eighty bishops, some from distant frontier regions of Numidia, attended the Council of Carthage in 256. By the end of the 4th century, the settled areas had become Christianized, and some Berber tribes had converted "en masse". A division in the church that came to be known as the Donatist heresy began in 313 among Christians in North Africa. The Donatists stressed the holiness of the church and refused to accept the authority of those who had surrendered the scriptures, when these were forbidden under the Emperor Diocletian (reigned 284–305), to administer the sacraments. The Donatists also opposed the involvement of Constantine the Great (reigned 306–337) in church affairs, in contrast to the majority of Christians, who welcomed official imperial recognition. The occasionally violent Donatist controversy has been characterized as a struggle between opponents and supporters of the Roman system.
The most articulate North African critic of the Donatist position, which came to be called a heresy, was Augustine, bishop of Hippo Regius. Augustine maintained that the unworthiness of a minister did not affect the validity of the sacraments because their true minister was Jesus Christ. In his sermons and books Augustine, who is considered a leading exponent of Christian dogma, evolved a theory of the right of orthodox Christian rulers to use force against schismatics and heretics. Although the dispute was resolved by a decision of an imperial commission in Carthage in 411, Donatist communities continued to exist as late as the 6th century. A decline in trade weakened Roman control. Independent kingdoms emerged in mountainous and desert areas, towns were overrun, and Berbers, who had previously been pushed to the edges of the Roman Empire, returned. During the Vandalic War, Belisarius, general of the Byzantine emperor Justinian I based in Constantinople, landed in North Africa in 533 with 16,000 men and within a year destroyed the Vandal Kingdom. Local opposition delayed full Byzantine control of the region for twelve years, however, and when imperial control came, it was but a shadow of the control exercised by Rome. Although an impressive series of fortifications was built, Byzantine rule was compromised by official corruption, incompetence, military weakness, and lack of concern in Constantinople for African affairs, which made it an easy target for the Arabs during the Early Muslim conquests. As a result, many rural areas reverted to Berber rule. The earliest state in Eritrea and northern Ethiopia, Dʿmt, dates from around the 8th and 7th centuries BC. D'mt traded through the Red Sea with Egypt and the Mediterranean, providing frankincense. Between the 5th and 3rd centuries BC, D'mt declined, and several successor states took its place. Later there was greater trade with South Arabia, mainly with the port of Saba. Adulis became an important commercial center for the Ethiopian Highlands. The interaction of the peoples in the two regions, the southern Arabian Sabaeans and the northern Ethiopians, resulted in the Ge'ez culture and language and eventual development of the Ge'ez script. Trade links increased and expanded from the Red Sea to the Mediterranean, with Egypt, Israel, Phoenicia, Greece, and Rome, to the Black Sea, and to Persia, India, and China. Aksum was known throughout those lands. By the 5th century BC, the region was very prosperous, exporting ivory, hippopotamus hides, gold dust, spices, and live elephants. It imported silver, gold, olive oil, and wine. Aksum manufactured glass crystal, brass, and copper for export. A powerful Aksum emerged, unifying parts of eastern Sudan, northern Ethiopia (Tigre), and Eritrea. Its kings built stone palatial buildings and were buried under megalithic monuments. By 300 AD, Aksum was minting its own coins in silver and gold. In 331 AD, King Ezana (320–350 AD) was converted to Miaphysite Christianity, which holds that Christ has one united divine-human nature, supposedly by Frumentius and Aedesius, who became stranded on the Red Sea coast. Some scholars believe the process was more complex and gradual than a simple conversion. Around 350, the time Ezana sacked Meroë, the Syrian monastic tradition took root within the Ethiopian church. In the 6th century Aksum was powerful enough to add Saba on the Arabian peninsula to her empire. At the end of the 6th century, the Sasanian Empire pushed Aksum out of the peninsula.
With the spread of Islam through Western Asia and Northern Africa, Aksum's trading networks in the Mediterranean faltered. The Red Sea trade diminished as it was diverted to the Persian Gulf and dominated by Arabs, causing Aksum to decline. By 800 AD, the capital was moved south into the interior highlands, and Aksum was much diminished. In the western Sahel the rise of settled communities occurred largely as a result of the domestication of millet and of sorghum. Archaeology points to sizable urban populations in West Africa beginning in the 2nd millennium BC. Symbiotic trade relations developed before the trans-Saharan trade, in response to the opportunities afforded by north–south diversity in ecosystems across deserts, grasslands, and forests. The agriculturists received salt from the desert nomads. The desert nomads acquired meat and other foods from pastoralists and farmers of the grasslands and from fishermen on the Niger River. The forest-dwellers provided furs and meat. Dhar Tichitt and Oualata in present-day Mauritania figure prominently among the early urban centers, dated to 2,000 BC. About 500 stone settlements litter the region in the former savannah of the Sahara. Their inhabitants fished and grew millet. It has been suggested by Augustin Holl that the Soninke of the Mandé peoples were likely responsible for constructing such settlements. Around 300 BC the region became more desiccated and the settlements began to decline, with their inhabitants most likely relocating to Koumbi Saleh. Architectural evidence and the comparison of pottery styles suggest that Dhar Tichitt was related to the subsequent Ghana Empire. Djenné-Djenno (in present-day Mali) was settled around 300 BC, and the town grew to house a sizable Iron Age population, as evidenced by crowded cemeteries. Living structures were made of sun-dried mud. By 250 BC Djenné-Djenno had become a large, thriving market town. Farther south, in central Nigeria, around 1,500 BC, the Nok culture developed on the Jos Plateau. It was a highly centralized community. The Nok people produced lifelike representations in terracotta, including human heads and human figures, elephants, and other animals. By 500 BC they were smelting iron. By 200 AD the Nok culture had vanished. Based on stylistic similarities with the Nok terracottas, the bronze figurines of the Yoruba kingdom of Ife and those of the Bini kingdom of Benin are now believed to be continuations of the traditions of the earlier Nok culture. The Bantu expansion was a significant movement of people in African history and in the settling of the continent. People speaking Bantu languages (a branch of the Niger–Congo family) began in the second millennium BC to spread from Cameroon eastward to the Great Lakes region. In the first millennium BC, Bantu languages spread from the Great Lakes to southern and east Africa. One early movement headed south to the upper Zambezi valley in the 2nd century BC. Then Bantu-speakers pushed westward to the savannahs of present-day Angola and eastward into Malawi, Zambia, and Zimbabwe in the 1st century AD. The second thrust from the Great Lakes was eastward, 2,000 years ago, expanding to the Indian Ocean coast, Kenya and Tanzania. The eastern group eventually met the southern migrants from the Great Lakes in Malawi, Zambia, and Zimbabwe. Both groups continued southward, with eastern groups continuing to Mozambique and reaching Maputo in the 2nd century AD, and expanding as far as Durban.
By the later first millennium AD, the expansion had reached the Great Kei River in present-day South Africa. Sorghum, a major Bantu crop, could not thrive under the winter rainfall of Namibia and the western Cape. Khoisan people inhabited the remaining parts of southern Africa. The Sao civilization flourished from about the sixth century BC to as late as the 16th century AD in Central Africa. The Sao lived by the Chari River south of Lake Chad in territory that later became part of present-day Cameroon and Chad. They are the earliest people to have left clear traces of their presence in the territory of modern Cameroon. Today, several ethnic groups of northern Cameroon and southern Chad, particularly the Sara people, claim descent from the civilization of the Sao. Sao artifacts show that they were skilled workers in bronze, copper, and iron. Finds include bronze sculptures and terracotta statues of human and animal figures, coins, funerary urns, household utensils, jewelry, highly decorated pottery, and spears. The largest Sao archaeological finds have occurred south of Lake Chad. The Kanem Empire was centered in the Chad Basin. It was known as the Kanem Empire from the 9th century AD onward and lasted as the independent kingdom of Bornu until 1893. At its height it encompassed an area covering not only much of Chad, but also parts of modern southern Libya, eastern Niger, northeastern Nigeria, northern Cameroon, parts of South Sudan and the Central African Republic. The history of the Empire is mainly known from the Royal Chronicle or "Girgam" discovered in 1851 by the German traveller Heinrich Barth. Kanem rose in the 8th century in the region to the north and east of Lake Chad. The Kanem empire went into decline, shrank, and in the 14th century was defeated by Bilala invaders from the Lake Fitri region. Around the 9th century AD, the central Sudanic Empire of Kanem, with its capital at Njimi, was founded by the Kanuri-speaking nomads. Kanem arose by engaging in the trans-Saharan trade. It exchanged slaves captured by raiding the south for horses from North Africa, which in turn aided in the acquisition of slaves. By the late 11th century, the Islamic Sayfawa (Saifawa) dynasty was founded by Humai (Hummay) ibn Salamna. The Sayfawa Dynasty ruled for 771 years, making it one of the longest-lasting dynasties in human history. In addition to trade, taxation of local farms around Kanem became a source of state income. Kanem reached its peak under "Mai" (king) Dunama Dibalemi ibn Salma (1210–1248). The empire reportedly was able to field 40,000 cavalry, and it extended from Fezzan in the north to the Sao state in the south. Islam became firmly entrenched in the empire. Pilgrimages to Mecca were common; Cairo had hostels set aside specifically for pilgrims from Kanem. The Kanuri people led by the Sayfawa migrated to the west and south of the lake, where they established the Bornu Empire. By the late 16th century the Bornu empire had expanded and recaptured the parts of Kanem that had been conquered by the Bilala. Satellite states of Bornu included the Damagaram in the west and Baguirmi to the southeast of Lake Chad. Around 1400, the Sayfawa Dynasty moved its capital to Bornu, a tributary state southwest of Lake Chad, with a new capital at Birni Ngazargamu. Overgrazing had caused the pastures of Kanem to become too dry. In addition, political rivalry from the Bilala clan was becoming intense.
Moving to Bornu better situated the empire to exploit the trans-Saharan trade and to widen its network in that trade. Links to the Hausa states were also established, providing horses and salt from Bilma for Akan gold. Mai Ali Gazi ibn Dunama (c. 1475–1503) defeated the Bilala, reestablishing complete control of Kanem. During the early 16th century, the Sayfawa Dynasty solidified its hold on the Bornu population after much rebellion. In the latter half of the 16th century, "Mai" Idris Alooma modernized Bornu's military, in contrast to the Songhai Empire. Turkish mercenaries were used to train the military. The Sayfawa rulers were the first monarchs south of the Sahara to import firearms. The empire controlled all of the Sahel from the borders of Darfur in the east to Hausaland to the west. Friendly relations were established with the Ottoman Empire via Tripoli. The "Mai" exchanged gifts with the Ottoman sultan. Not much is known about Bornu during the 17th and 18th centuries. During the 18th century, it became a center of Islamic learning. However, Bornu's army became outdated by not importing new arms, and Kanem-Bornu had also begun its decline. The power of the "mai" was undermined by droughts and famine that were becoming more intense, internal rebellion in the pastoralist north, growing Hausa power, and the importation of firearms which made warfare more bloody. By 1841, the last "mai" was deposed, bringing to an end the long-lived Sayfawa Dynasty. In its place, the al-Kanemi dynasty of the "shehu" rose to power. The Shilluk Kingdom was centered in South Sudan from the 15th century along a strip of land on the western bank of the White Nile, from Lake No to about 12° north latitude. The capital and royal residence was in the town of Fashoda. The kingdom was founded during the mid-15th century AD by its first ruler, Nyikang. During the 19th century, the Shilluk Kingdom faced decline following military assaults from the Ottoman Empire and later British and Sudanese colonization in Anglo-Egyptian Sudan. The Kingdom of Baguirmi existed as an independent state during the 16th and 17th centuries southeast of Lake Chad in what is now the country of Chad. Baguirmi emerged to the southeast of the Kanem-Bornu Empire. The kingdom's first ruler was Mbang Birni Besse. Later in his reign, the Bornu Empire conquered the state and made it a tributary. The Wadai Empire was centered on Chad and the Central African Republic from the 17th century. The Tunjur people founded the Wadai Kingdom to the east of Bornu in the 16th century. In the 17th century there was a revolt of the Maba people, who established a Muslim dynasty. At first Wadai paid tribute to Bornu and Darfur, but by the 18th century Wadai was fully independent and had become an aggressor against its neighbors. To the west of Bornu, by the 15th century the Kingdom of Kano had become the most powerful of the Hausa Kingdoms, in an unstable truce with the Kingdom of Katsina to the north. Both were absorbed into the Sokoto Caliphate during the Fulani Jihad of 1805, which threatened Bornu itself. Sometime between 1300 and 1400 AD, Kongolo Mwamba (Nkongolo) from the Balopwe clan unified the various Luba peoples, near Lake Kisale. He founded the Kongolo Dynasty, which was later ousted by Kalala Ilunga. Kalala expanded the kingdom west of Lake Kisale. A new centralized political system emerged, with spiritual kings ("balopwe") and a court council of head governors and sub-heads extending all the way down to village heads.
The "balopwe" was the direct communicator with the ancestral spirits and chosen by them. Conquered states were integrated into the system and represented in the court, with their titles. The authority of the "balopwe" resided in his spiritual power rather than his military authority. The army was relatively small. The Luba was able to control regional trade and collect tribute for redistribution. Numerous offshoot states were formed with founders claiming descent from the Luba. The Luba political system spread throughout Central Africa, southern Uganda, Rwanda, Burundi, Malawi, Zambia, Zimbabwe, and the western Congo. Two major empires claiming Luba descent were the Lunda Empire and Maravi Empire. The Bemba people and Basimba people of northern Zambia were descended from Luba migrants who arrived in Zambia during the 17th century. In the 1450s, a Luba from the royal family Ilunga Tshibinda married Lunda queen Rweej and united all Lunda peoples. Their son "mulopwe" Luseeng expanded the kingdom. His son Naweej expanded the empire further and is known as the first Lunda emperor, with the title "mwato yamvo" ("mwaant yaav", "mwant yav"), the Lord of Vipers. The Luba political system was retained, and conquered peoples were integrated into the system. The "mwato yamvo" assigned a "cilool" or "kilolo" (royal adviser) and tax collector to each state conquered. Numerous states claimed descent from the Lunda. The Imbangala of inland Angola claimed descent from a founder, Kinguri, brother of Queen Rweej, who could not tolerate the rule of "mulopwe" Tshibunda. "Kinguri" became the title of kings of states founded by Queen Rweej's brother. The Luena (Lwena) and Lozi (Luyani) in Zambia also claim descent from Kinguri. During the 17th century, a Lunda chief and warrior called Mwata Kazembe set up an Eastern Lunda kingdom in the valley of the Luapula River. The Lunda's western expansion also saw claims of descent by the Yaka and the Pende. The Lunda linked Central Africa with the western coast trade. The kingdom of Lunda came to an end in the 19th century when it was invaded by the Chokwe, who were armed with guns. By the 15th century AD, the farming Bakongo people ("ba" being the plural prefix) were unified as the Kingdom of Kongo under a ruler called the "manikongo", residing in the fertile Pool Malebo area on the lower Congo River. The capital was M'banza-Kongo. With superior organization, they were able to conquer their neighbors and extract tribute. They were experts in metalwork, pottery, and weaving raffia cloth. They stimulated interregional trade via a tribute system controlled by the "manikongo". Later, maize (corn) and cassava (manioc) would be introduced to the region via trade with the Portuguese at their ports at Luanda and Benguela. The maize and cassava would result in population growth in the region and other parts of Africa, replacing millet as a main staple. By the 16th century, the "manikongo" held authority from the Atlantic in the west to the Kwango River in the east. Each territory was assigned a "mani-mpembe" (provincial governor) by the "manikongo". In 1506, Afonso I (1506–1542), a Christian, took over the throne. Slave trading increased with Afonso's wars of conquest. About 1568 to 1569, the Jaga invaded Kongo, laying waste to the kingdom and forcing the "manikongo" into exile. In 1574, Manikongo Álvaro I was reinstated with the help of Portuguese mercenaries. During the latter part of the 1660s, the Portuguese tried to gain control of Kongo. 
Manikongo António I (1661–1665) and his Kongolese army of 5,000 were defeated by an Afro-Portuguese army at the Battle of Mbwila. The empire dissolved into petty polities, fighting among themselves over war captives to sell into slavery. Kongo gained captives from the Kingdom of Ndongo in wars of conquest. Ndongo was ruled by the "ngola". Ndongo would also engage in slave trading with the Portuguese, with São Tomé being a transit point to Brazil. The kingdom was not as welcoming as Kongo; it viewed the Portuguese with great suspicion and as an enemy. The Portuguese in the latter part of the 16th century tried to gain control of Ndongo but were defeated by the Mbundu. Ndongo experienced depopulation from slave raiding. The leaders established another state at Matamba, affiliated with Queen Nzinga, who put up a strong resistance to the Portuguese until coming to terms with them. The Portuguese settled along the coast as trade dealers, not venturing on conquest of the interior. Slavery wreaked havoc in the interior, with states initiating wars of conquest for captives. The Imbangala formed the slave-raiding state of Kasanje, a major source of slaves during the 17th and 18th centuries. The birth of Islam opposite Somalia's Red Sea coast meant that Somali merchants and sailors living on the Arabian Peninsula gradually came under the influence of the new religion through their converted Arab Muslim trading partners. With the migration of Muslim families from the Islamic world to Somalia in the early centuries of Islam, and the peaceful conversion of the Somali population by Somali Muslim scholars in the following centuries, the ancient city-states eventually transformed into Islamic Mogadishu, Berbera, Zeila, Barawa and Merka, which were part of the "Berber" (the medieval Arab term for the ancestors of the modern Somalis) civilization. The city of Mogadishu came to be known as the "City of Islam" and controlled the East African gold trade for several centuries. During this period, sultanates such as the Ajuran Empire and the Sultanate of Mogadishu, and republics like Barawa, Merca and Hobyo and their respective ports flourished and had a lucrative foreign commerce with ships sailing to and coming from Arabia, India, Venice, Persia, Egypt, Portugal and as far away as China. Vasco da Gama, who passed by Mogadishu in the 15th century, noted that it was a large city with houses four or five stories high and big palaces in its centre, in addition to many mosques with cylindrical minarets. In the 16th century, Duarte Barbosa noted that many ships from the Kingdom of Cambaya in modern-day India sailed to Mogadishu with cloth and spices, for which they in return received gold, wax, and ivory. Barbosa also highlighted the abundance of meat, wheat, barley, horses, and fruit in the coastal markets, which generated enormous wealth for the merchants. Mogadishu, the center of a thriving weaving industry known as "toob benadir" (specialized for the markets in Egypt and Syria), together with Merca and Barawa, served as a transit stop for Swahili merchants from Mombasa and Malindi and for the gold trade from Kilwa. Jewish merchants from the Strait of Hormuz brought their Indian textiles and fruit to the Somali coast to exchange for grain and wood. Trading relations were established with Malacca in the 15th century, with cloth, ambergris, and porcelain being the main commodities of the trade.
Giraffes, zebras, and incense were exported to the Ming Empire of China, which established Somali merchants as leaders in the commerce between Asia and Africa and influenced the Chinese language with borrowings from the Somali language in the process. Hindu merchants from Surat and southeast African merchants from Pate, seeking to bypass both the Portuguese blockade and Omani meddling, used the Somali ports of Merca and Barawa (which were out of the two powers' jurisdiction) to conduct their trade in safety. The Zagwe dynasty ruled many parts of modern Ethiopia and Eritrea from approximately 1137 to 1270. The name of the dynasty comes from the Cushitic-speaking Agaw of northern Ethiopia. From 1270 AD on, for many centuries, the Solomonic dynasty ruled the Ethiopian Empire. In the early 15th century Ethiopia sought to make diplomatic contact with European kingdoms for the first time since Aksumite times. A letter from King Henry IV of England to the Emperor of Abyssinia survives. In 1428, the Emperor Yeshaq I sent two emissaries to Alfonso V of Aragon, who sent return emissaries that failed to complete the journey. The first continuous relations with a European country began in 1508 with the Kingdom of Portugal under Emperor Lebna Dengel, who had just inherited the throne from his father. This proved to be an important development, for when the empire was subjected to the attacks of the Adal general and imam, Ahmad ibn Ibrahim al-Ghazi (called "Grañ", or "the Left-handed"), Portugal assisted the Ethiopian emperor by sending weapons and four hundred men, who helped his son Gelawdewos defeat Ahmad and re-establish his rule. This Abyssinian–Adal War was also one of the first proxy wars in the region, as the Ottoman Empire and Portugal took sides in the conflict. When Emperor Susenyos converted to Roman Catholicism in 1624, years of revolt and civil unrest followed, resulting in thousands of deaths. The Jesuit missionaries had offended the Orthodox faith of the local Ethiopians, and on June 25, 1632, Susenyos's son, Emperor Fasilides, declared the state religion to again be Ethiopian Orthodox Christianity and expelled the Jesuit missionaries and other Europeans. By 711 AD, the Umayyad Caliphate had conquered all of North Africa. By the 10th century, the majority of the population of North Africa was Muslim. By the 9th century AD, the unity brought about by the Islamic conquest of North Africa and the expansion of Islamic culture came to an end. Conflict arose as to who should be the successor of the Prophet. The Umayyads had initially taken control of the Caliphate, with their capital at Damascus. Later, the Abbasids took control, moving the capital to Baghdad. The Berber people, being independent in spirit and hostile to outside interference in their affairs and to Arab exclusivity in orthodox Islam, adopted Shi'ite and Kharijite Islam, both considered unorthodox and hostile to the authority of the Abbasid Caliphate. Numerous Kharijite kingdoms rose and fell during the 8th and 9th centuries, asserting their independence from Baghdad. In the early 10th century, Shi'ite groups from Syria, claiming descent from Muhammad's daughter Fatimah, founded the Fatimid Dynasty in the Maghreb. By 950, they had conquered all of the Maghreb and by 969 all of Egypt. They immediately broke away from Baghdad.
In an attempt to bring about a purer form of Islam among the Sanhaja Berbers, Abdallah ibn Yasin founded the Almoravid movement in present-day Mauritania and Western Sahara. The Sanhaja Berbers, like the Soninke, practiced an indigenous religion alongside Islam. Abdallah ibn Yasin found ready converts in the Lamtuna Sanhaja, who were dominated by the Soninke in the south and the Zenata Berbers in the north. By the 1040s, all of the Lamtuna had been converted to the Almoravid movement. With the help of Yahya ibn Umar and his brother Abu Bakr ibn Umar, the sons of the Lamtuna chief, the Almoravids created an empire extending from the Sahel to the Mediterranean. After the death of Abdallah ibn Yasin and Yahya ibn Umar, Abu Bakr split the empire in half, between himself and Yusuf ibn Tashfin, because it was too big to be ruled by one individual. Abu Bakr took the south to continue fighting the Soninke, and Yusuf ibn Tashfin took the north, expanding it to southern Spain. The death of Abu Bakr in 1087 saw a breakdown of unity and increased military dissension in the south. This caused a re-expansion of the Soninke. The Almoravids were once held responsible for bringing down the Ghana Empire in 1076, but this view is no longer credited. During the 10th through 13th centuries, there was a large-scale movement of bedouins out of the Arabian Peninsula. About 1050, a quarter of a million Arab nomads from Egypt moved into the Maghreb. Those following the northern coast were referred to as Banu Hilal. Those going south of the Atlas Mountains were the Banu Sulaym. This movement spread the use of the Arabic language, hastened the decline of the Berber language, and furthered the Arabisation of North Africa. Later an Arabised Berber group, the Hawwara, went south to Nubia via Egypt. In the 1140s, Abd al-Mu'min declared jihad on the Almoravids, charging them with decadence and corruption. He united the northern Berbers against the Almoravids, overthrowing them and forming the Almohad Empire. During this period, the Maghreb became thoroughly Islamised and saw the spread of literacy, the development of algebra, and the use of the number zero and decimals. By the 13th century, the Almohad empire had split into three rival states. Muslim states were largely extinguished in the Iberian Peninsula by the Christian kingdoms of Castile, Aragon, and Portugal. Around 1415, Portugal engaged in a "reconquista" of North Africa by capturing Ceuta, and in later centuries Spain and Portugal acquired other ports on the North African coast. In 1492, at the end of the Granada War, Spain defeated Muslims in the Emirate of Granada, effectively ending eight centuries of Muslim domination in southern Iberia. Portugal and Spain took the ports of Tangier, Algiers, Tripoli, and Tunis. This put them in direct competition with the Ottoman Empire, which re-took the ports using Turkish corsairs (pirates and privateers). The Turkish corsairs would use the ports for raiding Christian ships, a major source of booty for the towns. Technically, North Africa was under the control of the Ottoman Empire, but only the coastal towns were fully under Istanbul's control. Tripoli benefited from trade with Bornu. The pashas of Tripoli traded horses, firearms, and armor via Fezzan with the sultans of the Bornu Empire for slaves. In the 16th century, an Arab nomad tribe that claimed descent from Muhammad's daughter, the Saadis, conquered and united Morocco.
They prevented the Ottoman Empire from reaching the Atlantic and expelled Portugal from Morocco's western coast. Ahmad al-Mansur brought the state to the height of its power. He invaded Songhay in 1591, to control the gold trade, which had been diverted to the western coast of Africa for European ships and to the east, to Tunis. Morocco's hold on Songhay diminished in the 17th century. In 1603, after Ahmad's death, the kingdom split into the two sultanates of Fes and Marrakesh. Later it was reunited by Moulay al-Rashid, founder of the Alaouite Dynasty. His brother and successor, Ismail ibn Sharif (1672–1727), strengthened the unity of the country by importing slaves from the Sudan to build up the military. In 642 AD, the Rashidun Caliphate conquered Byzantine Egypt. Egypt under the Fatimid Caliphate was prosperous. Dams and canals were repaired, and wheat, barley, flax, and cotton production increased. Egypt became a major producer of linen and cotton cloth. Its Mediterranean and Red Sea trade increased. Egypt also minted a gold currency called the Fatimid dinar, which was used for international trade. The bulk of revenues came from taxing the fellahin (peasant farmers), and taxes were high. Tax collecting was leased to Berber overlords, who were soldiers who had taken part in the Fatimid conquest in 969 AD. The overlords paid a share to the caliphs and retained what was left. Eventually, they became landlords and constituted a settled land aristocracy. To fill the military ranks, Mamluk Turkish slave cavalry and Sudanese slave infantry were used. Berber freemen were also recruited. In the 1150s, tax revenues from farms diminished. The soldiers revolted and wreaked havoc in the countryside, slowed trade, and diminished the power and authority of the Fatimid caliphs. During the 1160s, Fatimid Egypt came under threat from European crusaders. Out of this threat, a Kurdish general named Ṣalāḥ ad-Dīn Yūsuf ibn Ayyūb (Saladin), with a small band of professional soldiers, emerged as an outstanding Muslim defender. Saladin defeated the Christian crusaders at Egypt's borders and recaptured Jerusalem in 1187. On the death of Al-Adid, the last Fatimid caliph, in 1171, Saladin became the ruler of Egypt, ushering in the Ayyubid Dynasty. Under his rule, Egypt returned to Sunni Islam, Cairo became an important center of Arab Islamic learning, and Mamluk slaves were increasingly recruited from Turkey and southern Russia for military service. Support for the military was tied to the "iqta", a form of land taxation in which soldiers were given ownership in return for military service. Over time, Mamluk slave soldiers became a very powerful landed aristocracy, to the point of getting rid of the Ayyubid dynasty in 1250 and establishing a Mamluk dynasty. The more powerful Mamluks were referred to as "amirs". For 250 years, Mamluks controlled all of Egypt under a military dictatorship. Egypt extended her territories to Syria and Palestine, thwarted the crusaders, and halted a Mongol invasion in 1260 at the Battle of Ain Jalut. Mamluk Egypt came to be viewed as a protector of Islam, and of Medina and Mecca. Eventually the "iqta" system declined and proved unreliable for providing an adequate military. The Mamluks started viewing their "iqta" as hereditary and became attuned to urban living. Farm production declined, and dams and canals lapsed into disrepair. Mamluk military skill and technology did not keep pace with the new technology of handguns and cannons.
With the rise of the Ottoman Empire, Egypt was easily defeated. In 1517, at the end of an Ottoman–Mamluk War, Egypt became part of the Ottoman Empire. The Istanbul government revived the "iqta" system. Trade was reestablished in the Red Sea, but it could not completely connect with the Indian Ocean trade because of growing Portuguese presence. During the 17th and 18th centuries, hereditary Mamluks regained power. The leading Mamluks were referred to as "beys". Pashas, or viceroys, represented the Istanbul government in name only, operating independently. During the 18th century, dynasties of pashas became established. The government was weak and corrupt. In 1798, Napoleon invaded Egypt. The local forces had little ability to resist the French conquest. However, the British Empire and the Ottoman Empire were able to remove the French occupation in 1801. These events marked the beginning of a 19th-century Anglo-French rivalry over Egypt. After Ezana of Aksum sacked Meroë, people associated with the site of Ballana moved into Nubia from the southwest and founded three kingdoms: Makuria, Nobatia, and Alodia. They would rule for 200 years. Makuria was above the third cataract, along the Dongola Reach with its capital at Dongola. Nobatia was to the north with its capital at Faras, and Alodia was to the south with its capital at Soba. Makuria eventually absorbed Nobatia. The people of the region converted to Monophysite Christianity around 500 to 600 CE. The church initially started writing in Coptic, then in Greek, and finally in Old Nubian, a Nilo-Saharan language. The church was aligned with the Egyptian Coptic Church. By 641, Egypt was conquered by the Rashidun Caliphate. This effectively blocked Christian Nubia and Aksum from Mediterranean Christendom. In 651–652, Arabs from Egypt invaded Christian Nubia. Nubian archers soundly defeated the invaders. The Baqt (or Bakt) Treaty was drawn up, recognizing Christian Nubia and regulating trade. The treaty controlled relations between Christian Nubia and Islamic Egypt for almost six hundred years. By the 13th century, Christian Nubia began its decline. The authority of the monarchy was diminished by the church and nobility. Arab bedouin tribes began to infiltrate Nubia, causing further havoc. "Fakirs" (holy men) practicing Sufism introduced Islam into Nubia. By 1366, Nubia had become divided into petty fiefdoms when it was invaded by Mamluks. During the 15th century, Nubia was open to Arab immigration. Arab nomads intermingled with the population and introduced Arab culture and the Arabic language. By the 16th century, Makuria and Nobatia had been Islamized. During the 16th century, Abdallah Jamma headed an Arab confederation that destroyed Soba, capital of Alodia, the last holdout of Christian Nubia. Later Alodia would fall under the Funj Sultanate. During the 15th century, Funj herders migrated north to Alodia and occupied it. Between 1504 and 1505, the Funj established their kingdom with its capital at Sennar; it later expanded, reaching its peak under Badi II Abu Daqn (c. 1644–1680). By the end of the 16th century, the Funj had converted to Islam. They pushed their empire westward to Kordofan. They expanded eastward, but were halted by Ethiopia. They controlled Nubia down to the 3rd Cataract. The economy depended on captured enemies to fill the army and on merchants travelling through Sennar. Under Badi IV (1724–1762), the army turned on the king, making him nothing but a figurehead. In 1821, the Funj were conquered by Muhammad Ali (1805–1849), Pasha of Egypt.
Settlements of Bantu-speaking peoples, who were iron-using agriculturists and herdsmen, were already well established south of the Limpopo River by the 4th century CE, displacing and absorbing the original Khoisan speakers. They slowly moved south, and the earliest ironworks in modern-day KwaZulu-Natal Province are believed to date from around 1050. The southernmost group was the Xhosa people, whose language incorporates certain linguistic traits from the earlier Khoi-San people, reaching the Great Fish River in today's Eastern Cape Province. The Kingdom of Mapungubwe was the first state in Southern Africa, with its capital at Mapungubwe. The state arose in the 12th century CE. Its wealth came from controlling the trade in ivory from the Limpopo Valley, copper from the mountains of northern Transvaal, and gold from the Zimbabwe Plateau between the Limpopo and Zambezi rivers, with the Swahili merchants at Chibuene. By the mid-13th century, Mapungubwe was abandoned. After the decline of Mapungubwe, Great Zimbabwe rose on the Zimbabwe Plateau. "Zimbabwe" means stone building. Great Zimbabwe was the first city in Southern Africa and was the center of an empire, consolidating lesser Shona polities. Stone building was inherited from Mapungubwe. These building techniques were enhanced and came into maturity at Great Zimbabwe, represented by the wall of the Great Enclosure. The dry-stack stone masonry technology was also used to build smaller compounds in the area. Great Zimbabwe flourished by trading with Swahili Kilwa and Sofala. The rise of Great Zimbabwe parallels the rise of Kilwa. Great Zimbabwe was a major source of gold. Its royal court lived in luxury, wore Indian cotton, surrounded themselves with copper and gold ornaments, and ate on plates from as far away as Persia and China. Around the 1420s and 1430s, Great Zimbabwe was in decline. The city was abandoned by 1450. Some have attributed the decline to the rise of the trading town Ingombe Ilede. A new chapter of Shona history ensued. Nyatsimba Mutota, a northern Shona king of the Karanga, engaged in conquest. He and his son Matope conquered the Zimbabwe Plateau, going through Mozambique to the east coast, linking the empire to the coastal trade. They called their empire "Wilayatu 'l Mu'anamutapah" or "mwanamutapa" (Lord of the Plundered Lands), or the Kingdom of Mutapa. "Monomotapa" was the Portuguese corruption. They did not build stone structures; the northern Shonas had no traditions of building in stone. After the death of Matope in 1480, the empire split into two small empires: Torwa in the south and Mutapa in the north. The split occurred over rivalry between two Shona lords, Changa and Togwa, and the "mwanamutapa" line. Changa was able to acquire the south, forming the Kingdom of Butua with its capital at Khami. The Mutapa Empire continued in the north under the "mwanamutapa" line. During the 16th century the Portuguese were able to establish permanent markets up the Zambezi River in an attempt to gain political and military control of Mutapa. They were partially successful. In 1628, a decisive battle allowed them to install a puppet "mwanamutapa" named Mavura, who signed treaties that gave favorable mineral export rights to the Portuguese. The Portuguese were successful in destroying the "mwanamutapa" system of government and undermining trade. By 1667, Mutapa was in decay. Chiefs would not allow digging for gold because of fear of Portuguese theft, and the population declined.
The Kingdom of Butua was ruled by a "changamire", a title derived from the founder, Changa. Later it became the Rozwi Empire. The Portuguese tried to gain a foothold but were driven out of the region in 1693 by Changamire Dombo. The 17th century was a period of peace and prosperity. The Rozwi Empire was destroyed in the 1830s by invading Nguni from Natal. By 1500 AD, most of southern Africa had established states. In northwestern Namibia, the Ovambo engaged in farming and the Herero engaged in herding. As cattle numbers increased, the Herero moved southward to central Namibia for grazing land. A related group, the Ovambanderu, expanded to Ghanzi in northwestern Botswana. The Nama, a Khoi-speaking, sheep-raising group, moved northward and came into contact with the Herero; this would set the stage for much conflict between the two groups. The expanding Lozi states pushed the Mbukushu, Subiya, and Yei to the Boteti, Okavango, and Chobe in northern Botswana. The development of Sotho–Tswana states based on the highveld, south of the Limpopo River, began around 1000 CE. The chief's power rested on cattle and his connection to the ancestors. This can be seen in the Toutswemogala Hill settlements with stone foundations and stone walls, north of the highveld and south of the Vaal River. Northwest of the Vaal River developed early Tswana states centered on towns of thousands of people. When disagreements or rivalry arose, different groups moved to form their own states. Southeast of the Drakensberg mountains lived Nguni-speaking peoples (Zulu, Xhosa, Swazi, and Ndebele). They were metalworkers, cultivators of millet, and cattle herders. They too engaged in state building, with new states developing from rivalry, disagreements, and population pressure causing movement into new regions. This 19th-century process of warfare, state building and migration later became known as the Mfecane (Nguni) or Difaqane (Sotho); its major catalyst was the consolidation of the Zulu Kingdom. The Khoisan lived in the southwestern Cape Province, where winter rainfall is plentiful. Earlier Khoisan populations were absorbed by Bantu peoples, such as the Sotho and Nguni, but the Bantu expansion stopped at the region with winter rainfall. Some Bantu languages have incorporated the click consonants of the Khoisan languages. The Khoisan traded with their Bantu neighbors, providing cattle, sheep, and hunted items. In return, their Bantu-speaking neighbors traded copper, iron, and tobacco. By the early 17th century, the Dutch East India Company was using Table Bay as a replenishing station for restocking water and purchasing meat from the Khoikhoi. The Khoikhoi received copper, iron, tobacco, and beads in exchange. In order to control the price of meat and stock and make service more consistent, the Dutch established a permanent settlement at Table Bay in 1652. They grew fresh fruit and vegetables and established a hospital for sick sailors. To increase produce, the Dutch decided to increase the number of farms at Table Bay by encouraging freeburgher "boers" (farmers) on lands worked initially by slaves from West Africa. The land was taken from Khoikhoi grazing land, triggering the first Khoikhoi-Dutch war in 1659. No victors emerged, but the Dutch assumed a "right of conquest" by which they claimed all of the Cape. In a series of wars pitting the Khoikhoi against each other, the Boers assumed all Khoikhoi land and claimed all their cattle. The second Khoikhoi-Dutch war (1673–1677) was a cattle raid.
The Khoikhoi also died in their thousands from European diseases. By the 18th century, the Cape Colony had grown, with slaves coming from Madagascar, Mozambique, and Indonesia. The settlement also started to expand northward, but Khoikhoi resistance, raids, and guerrilla warfare slowed the expansion during the 18th century. Boers who started to practice pastoralism were known as "trekboers". A common source of "trekboer" labor was orphan children who were captured during raids and whose parents had been killed. According to the theory of recent African origin of modern humans, the mainstream position held within the scientific community, all humans originate from either Southeast Africa or the Horn of Africa. During the first millennium CE, Nilotic and Bantu-speaking peoples moved into the region. Following the Bantu Migration, on the coastal section of Southeast Africa, a mixed Bantu community developed through contact with Muslim Arab and Persian traders, giving rise to the mixed Arab, Persian, and African Swahili city-states. The Swahili culture that emerged from these exchanges evinces many Arab and Islamic influences not seen in traditional Bantu culture, as do the many Afro-Arab members of the Bantu Swahili people. With its original speech community centered on the coastal parts of Tanzania (particularly Zanzibar) and Kenya, a seaboard referred to as the Swahili Coast, the Bantu Swahili language contains many Arabic language loan-words as a consequence of these interactions. The earliest Bantu inhabitants of the Southeast coast of Kenya and Tanzania encountered by these later Arab and Persian settlers have been variously identified with the trading settlements of Rhapta, Azania and Menouthias referenced in early Greek and Chinese writings from 50 AD to 500 AD, ultimately giving rise to the name for Tanzania. These early writings perhaps document the first wave of Bantu settlers to reach Southeast Africa during their migration. Historically, the Swahili people could be found as far north as northern Kenya and as far south as the Ruvuma River in Mozambique. Arab geographers referred to the Swahili coast as the land of the "zanj" (blacks). Although once believed to be the descendants of Persian colonists, the ancient Swahili are now recognized by most historians, historical linguists, and archaeologists as a Bantu people who had sustained important interactions with Muslim merchants, beginning in the late 7th and early 8th centuries AD. Medieval Swahili kingdoms are known to have had island trade ports, described by Greek historians as "metropolises", and to have established regular trade routes with the Islamic world and Asia. Ports such as Mombasa, Zanzibar, and Kilwa were known to Chinese sailors under Zheng He and medieval Islamic geographers such as the Berber traveller Abu Abdullah ibn Battuta. The main Swahili exports were ivory, slaves, and gold. They traded with Arabia, India, Persia, and China. The Portuguese arrived in 1498. On a mission to economically control and Christianize the Swahili coast, the Portuguese attacked Kilwa first in 1505 and other cities later. Because of Swahili resistance, the Portuguese attempt at establishing commercial control was never successful. By the late 17th century, Portuguese authority on the Swahili coast began to diminish. With the help of Omani Arabs, by 1729 the Portuguese presence had been removed. The Swahili coast eventually became part of the Sultanate of Oman. Trade recovered, but it did not regain the levels of the past.
The Urewe culture developed and spread in and around the Lake Victoria region of Africa during the African Iron Age. The culture's earliest dated artifacts are located in the Kagera Region of Tanzania, and it extended as far west as the Kivu region of the Democratic Republic of the Congo, as far east as the Nyanza and Western provinces of Kenya, and north into Uganda, Rwanda and Burundi. Sites from the Urewe culture date from the Early Iron Age, from the 5th century BC to the 6th century AD. The origins of the Urewe culture are ultimately in the Bantu expansion originating in Cameroon. Research into early Iron Age civilizations in Sub-Saharan Africa has been undertaken concurrently with studies in African linguistics on the Bantu expansion. The Urewe culture may correspond to the Eastern subfamily of Bantu languages, spoken by the descendants of the first wave of Bantu peoples to settle East Africa. At first sight, Urewe seems to be a fully developed civilization recognizable through its distinctive, stylish earthenware and highly technical and sophisticated ironworking techniques. Given our current level of knowledge, neither seems to have developed or altered for nearly 2,000 years. However, minor local variations in the ceramic ware can be observed. Urewe is the name of the site in Kenya brought to prominence through the publication in 1948 of Mary Leakey's archaeological findings. She described the early Iron Age period in the Great Lakes region in Central East Africa around Lake Victoria. Madagascar was apparently first settled by Austronesian speakers from Southeast Asia before the 6th century AD and subsequently by Bantu speakers from the east African mainland in the 6th or 7th century, according to archaeological and linguistic data. The Austronesians introduced banana and rice cultivation, and the Bantu speakers introduced cattle and other farming practices. About the year 1000, Arab and Indian trade settlements were established in northern Madagascar to exploit the Indian Ocean trade. By the 14th century, Islam was introduced on the island by traders. Madagascar functioned in the East African medieval period as a contact port for the other Swahili seaport city-states such as Sofala, Kilwa, Mombasa, and Zanzibar. Several kingdoms emerged from the 15th century onward: the Sakalava Kingdom (16th century) on the west coast, Tsitambala Kingdom (17th century) on the east coast, and Merina (15th century) in the central highlands. By the 19th century, Merina controlled the whole island. In 1500, the Portuguese were the first Europeans on the island, raiding the trading settlements. The British and later the French arrived. During the latter part of the 17th century, Madagascar was a popular transit point for pirates. Radama I (1810–1828) invited Christian missionaries in the early 19th century. Queen Ranavalona I "the Cruel" (1828–1861) banned the practice of Christianity in the kingdom, and an estimated 150,000 Christians perished. Under Radama II (1861–1863), Madagascar took a French orientation, with great commercial concessions given to the French. In 1895, in the second Franco-Hova War, the French invaded Madagascar, taking over Antsiranana (Diego Suarez) and declaring Madagascar a protectorate. Between the 14th and 15th centuries, large Southeast African kingdoms and states emerged, such as the Buganda and Karagwe Kingdoms of Uganda and Tanzania. By 1000 AD, numerous states had arisen on the Lake Plateau among the Great Lakes of East Africa.
Cattle herding, cereal growing, and banana cultivation were the economic mainstays of these states. The Ntusi and Bigo earthworks are representative of one of the first states, the Bunyoro kingdom, which oral tradition stipulates was part of the Empire of Kitara that dominated the whole Lakes region. A Luo ethnic elite, from the Bito clan, ruled over the Bantu-speaking Nyoro people. The society was essentially Nyoro in its culture, based on the evidence from pottery, settlement patterns, and economic specialization. The Bito clan claimed legitimacy by being descended from the Bachwezi clan, who were said to have ruled the Empire of Kitara. However, very little is known about Kitara; some scholars even question its historical existence. Most founding leaders of the various polities in the lake region seem to have claimed descent from the Bachwezi. The Buganda kingdom was founded by Kato Kimera around the 14th century AD. Kato Kintu may have migrated to the northwest of Lake Victoria as early as 1000 BC. Buganda was ruled by the "kabaka" with a "bataka" composed of the clan heads. Over time, the "kabakas" diluted the authority of the "bataka", with Buganda becoming a centralized monarchy. By the 16th century, Buganda was engaged in expansion but had a serious rival in Bunyoro. By the 1870s, Buganda was a wealthy nation-state. The "kabaka" ruled with his "Lukiko" (council of ministers). Buganda had a naval fleet of a hundred vessels, each manned by thirty men. Buganda supplanted Bunyoro as the most important state in the region. However, by the early 20th century, Buganda became a province of the British Uganda Protectorate. Southeast of Bunyoro, near Lake Kivu at the bottom of the western rift, the Kingdom of Rwanda was founded, perhaps during the 17th century. Tutsi (BaTutsi) pastoralists formed the elite, with a king called the "mwami". The Hutu (BaHutu) were farmers. Both groups spoke the same language, but there were strict social norms against intermarriage and interaction. According to oral tradition, the Kingdom of Rwanda was founded by Mwami Ruganzu II (Ruganzu Ndori) (c. 1600–1624), with his capital near Kigali. It took 200 years to attain a truly centralized kingdom under Mwami Kigeli IV (Kigeri Rwabugiri) (1840–1895). Subjugation of the Hutu proved more difficult than subduing the Tutsi. The last Tutsi chief submitted to Mwami Mutara II (Mutara Rwogera) (1802–1853) in 1852, but the last Hutu holdout was conquered in the 1920s by Mwami Yuhi V (Yuli Musinga) (1896–1931). South of the Kingdom of Rwanda was the Kingdom of Burundi. It was founded by the Tutsi chief Ntare Rushatsi (c. 1657–1705). Like Rwanda, Burundi was built on cattle raised by Tutsi pastoralists, crops from Hutu farmers, conquest, and political innovations. Under Mwami Ntare Rugaamba (c. 1795–1852), Burundi pursued an aggressive expansionist policy, one based more on diplomacy than force. The Maravi claimed descent from Karonga ("kalonga"), who took that title as king. The Maravi connected Central Africa to the east coastal trade, with Swahili Kilwa. By the 17th century, the Maravi Empire encompassed all the area between Lake Malawi and the mouth of the Zambezi River. One notable "karonga" was Mzura, who did much to extend the empire. Mzura made a pact with the Portuguese to establish a 4,000-man army to attack the Shona in return for aid in defeating his rival Lundi, a chief of the Zimba.
In 1623, he turned on the Portuguese and assisted the Shona. In 1640, he welcomed back the Portuguese for trade. The Maravi Empire did not long survive the death of Mzura. By the 18th century, it had broken into its previous polities. The Ghana Empire may have been an established kingdom as early as the 8th century AD, founded among the Soninke by Dinge Cisse. Ghana was first mentioned by the Arab geographer al-Fazari in the late 8th century. Ghana was inhabited by urban dwellers and rural farmers. The urban dwellers were the administrators of the empire, who were Muslims, and the "Ghana" (king), who practiced traditional religion. Two towns existed: one where the Muslim administrators and Berber-Arabs lived, connected by a stone-paved road to the king's residence. The rural dwellers lived in villages, which joined together into broader polities that pledged loyalty to the "Ghana." The "Ghana" was viewed as divine, and his physical well-being reflected on the whole society. Ghana converted to Islam around 1050, after conquering Aoudaghost. The Ghana Empire grew wealthy by taxing the trans-Saharan trade that linked Tiaret and Sijilmasa to Aoudaghost. Ghana controlled access to the goldfields of Bambouk, southeast of Koumbi Saleh. A percentage of salt and gold going through its territory was taken. The empire was not involved in production. By the 11th century, Ghana was in decline. It was once thought that the sacking of Koumbi Saleh by Berbers under the Almoravid dynasty in 1076 was the cause. This is no longer accepted. Several alternative explanations are cited. One important reason is the transfer of the gold trade east to the Niger River and the Taghaza Trail, and Ghana's consequent economic decline. Another reason cited is political instability through rivalry among the different hereditary polities. The empire came to an end in 1230, when Takrur in northern Senegal took over the capital. The Mali Empire began in the 13th century AD, when a Mande (Mandingo) leader, Sundiata (Lord Lion) of the Keita clan, defeated Soumaoro Kanté, king of the Sosso or southern Soninke, at the Battle of Kirina in c. 1235. Sundiata continued his conquest from the fertile forests and Niger Valley, east to the Niger Bend, north into the Sahara, and west to the Atlantic Ocean, absorbing the remains of the Ghana Empire. Sundiata took on the title of "mansa". He established the capital of his empire at Niani. Although the salt and gold trade continued to be important to the Mali Empire, agriculture and pastoralism were also critical. The growing of sorghum, millet, and rice was a vital function. On the northern borders of the Sahel, grazing cattle, sheep, goats, and camels were major activities. Mande society was organized around the village and land. A cluster of villages was called a "kafu", ruled by a "farma". The "farma" paid tribute to the "mansa". A dedicated army of elite cavalry and infantry maintained order, commanded by the royal court. A formidable force could be raised from tributary regions, if necessary. Conversion to Islam was a gradual process. The power of the "mansa" depended on upholding traditional beliefs and a spiritual foundation of power. Sundiata initially kept Islam at bay. Later "mansas" were devout Muslims but still acknowledged traditional deities and took part in traditional rituals and festivals, which were important to the Mande. Islam became a court religion under Sundiata's son Uli I (1225–1270).
"Mansa" Uli made a pilgrimage to Mecca, becoming recognized within the Muslim world. The court was staffed with literate Muslims as secretaries and accountants. Muslim traveller Ibn Battuta left vivid descriptions of the empire. Mali reached the peak of its power and extent in the 14th century, when "Mansa" Musa (1312–1337) made his famous "hajj" to Mecca with 500 slaves, each holding a bar of gold worth 500 mitqals. "Mansa" Musa's "hajj" devalued gold in Mamluk Egypt for a decade. He made a great impression on the minds of the Muslim and European world. He invited scholars and architects like Ishal al-Tuedjin (al-Sahili) to further integrate Mali into the Islamic world. The Mali Empire saw an expansion of learning and literacy. In 1285, Sakura, a freed slave, usurped the throne. This "mansa" drove the Tuareg out of Timbuktu and established it as a center of learning and commerce. The book trade increased, and book copying became a very respectable and profitable profession. Timbuktu and Djenné became important centers of learning within the Islamic world. After the reign of Mansa Suleyman (1341–1360), Mali began its spiral downward. Mossi cavalry raided the exposed southern border. Tuareg harassed the northern border in order to retake Timbuktu. Fulani (Fulbe) eroded Mali's authority in the west by establishing the independent Imamate of Futa Toro, a successor to the kingdom of Takrur. Serer and Wolof alliances were broken. In 1545 to 1546, the Songhai Empire took Niani. After 1599, the empire lost the Bambouk goldfields and disintegrated into petty polities. The Songhai people are descended from fishermen on the Middle Niger River. They established their capital at Kukiya in the 9th century AD and at Gao in the 12th century. The Songhai speak a Nilo-Saharan language. Sonni Ali, a Songhai, began his conquest by capturing Timbuktu in 1468 from the Tuareg. He extended the empire to the north, deep into the desert, pushed the Mossi further south of the Niger, and expanded southwest to Djenne. His army consisted of cavalry and a fleet of canoes. Sonni Ali was not a Muslim, and he was portrayed negatively by Berber-Arab scholars, especially for attacking Muslim Timbuktu. After his death in 1492, his heirs were deposed by General Muhammad Ture, a Muslim of Soninke origins Muhammad Ture (1493–1528) founded the Askiya Dynasty, "askiya" being the title of the king. He consolidated the conquests of Sonni Ali. Islam was used to extend his authority by declaring jihad on the Mossi, reviving the trans-Saharan trade, and having the Abbasid "shadow" caliph in Cairo declare him as caliph of Sudan. He established Timbuktu as a great center of Islamic learning. Muhammad Ture expanded the empire by pushing the Tuareg north, capturing Aïr in the east, and capturing salt-producing Taghaza. He brought the Hausa states into the Songhay trading network. He further centralized the administration of the empire by selecting administrators from loyal servants and families and assigning them to conquered territories. They were responsible for raising local militias. Centralization made Songhay very stable, even during dynastic disputes. Leo Africanus left vivid descriptions of the empire under Askiya Muhammad. Askiya Muhammad was deposed by his son in 1528. After much rivalry, Muhammad Ture's last son Askiya Daoud (1529–1582) assumed the throne. In 1591, Morocco invaded the Songhai Empire under Ahmad al-Mansur of the Saadi Dynasty in order to secure the goldfields of the Sahel. 
At the Battle of Tondibi, the Songhai army was defeated. The Moroccans captured Djenne, Gao, and Timbuktu, but they were unable to secure the whole region. Askiya Nuhu and the Songhai army regrouped at Dendi in the heart of Songhai territory, where a spirited guerrilla resistance sapped the resources of the Moroccans, who were dependent upon constant resupply from Morocco. Songhai split into several states during the 17th century. Morocco found its venture unprofitable. The gold trade had been diverted to Europeans on the coast. Most of the trans-Saharan trade was now diverted east to Bornu. Expensive equipment purchased with gold had to be sent across the Sahara, an unsustainable scenario. The Moroccans who remained married into the population and were referred to as "Arma" or "Ruma". They established themselves at Timbuktu as a military caste with various fiefs, independent from Morocco. Amid the chaos, other groups began to assert themselves, including the Fulani of Futa Toro, who encroached from the west. The Bambara Empire, one of the states that broke from Songhai, sacked Gao. In 1737, the Tuareg massacred the "Arma". The Fulani were a migratory people. They moved from Mauritania and settled in Futa Toro, Futa Djallon, and subsequently throughout the rest of West Africa. By the 14th century CE, they had converted to Islam. During the 16th century, they established themselves at Macina in southern Mali. During the 1670s, they declared jihads on non-Muslims. Several states were formed from these jihadist wars, at Futa Toro, Futa Djallon, Macina, Oualia, and Bundu. The most important of these states was the Sokoto Caliphate or Fulani Empire. In the Hausa state of Gobir, Usman dan Fodio (1754–1817) accused the Hausa leadership of practicing an impure version of Islam and of being morally corrupt. In 1804, he launched the Fulani War as a jihad among a population that was restless about high taxes and discontented with its leaders. Jihad fever swept northern Nigeria, with strong support among both the Fulani and the Hausa. Usman created an empire that included parts of northern Nigeria, Benin, and Cameroon, with Sokoto as its capital. He retired to teach and write and handed the empire to his son Muhammed Bello. The Sokoto Caliphate lasted until 1903, when the British conquered northern Nigeria. The Akan speak a Kwa language. The speakers of Kwa languages are believed to have come from East/Central Africa, before settling in the Sahel. By the 12th century, the Akan Kingdom of Bonoman (Bono State) was established. During the 13th century, when the gold mines in modern-day Mali started to dry up, Bonoman and later other Akan states began to rise to prominence as the major players in the gold trade. It was Bonoman and other Akan kingdoms like Denkyira, Akyem, and Akwamu that were the predecessors to what became the all-powerful Empire of Ashanti. When and how the Ashante got to their present location is debatable. What is known is that by the 17th century an Akan people were identified as living in a state called Kwaaman. The location of the state was north of Lake Bosomtwe. The state's revenue was mainly derived from trading in gold and kola nuts and clearing forest to plant yams. They built towns between the Pra and Ofin rivers. They formed alliances for defense and paid tribute to Denkyira, one of the more powerful Akan states at that time, along with Adansi and Akwamu.
During the 16th century, Ashante society experienced sudden changes, including population growth because of cultivation of New World plants such as cassava and maize and an increase in the gold trade between the coast and the north. By the 17th century, Osei Kofi Tutu I (c. 1695–1717), with the help of Okomfo Anokye, unified what became the Ashante into a confederation with the Golden Stool as a symbol of their unity and spirit. Osei Tutu engaged in a massive territorial expansion. He built up the Ashante army based on the Akan state of Akwamu, introducing new organization and turning a disciplined militia into an effective fighting machine. In 1701, the Ashante conquered Denkyira, giving them access to the coastal trade with Europeans, especially the Dutch. Opoku Ware I (1720–1745) engaged in further expansion, adding other southern Akan states to the growing empire. He turned north, adding Techiman, Banda, Gyaaman, and Gonja, states on the Black Volta. Between 1744 and 1745, "Asantehene" Opoku attacked the powerful northern state of Dagomba, gaining control of the important middle Niger trade routes. Kusi Obodom (1750–1764) succeeded Opoku. He solidified all the newly won territories. Osei Kwadwo (1764–1777) imposed administrative reforms that allowed the empire to be governed effectively and to continue its military expansion. Osei Kwame Panyin (1777–1803), Osei Tutu Kwame (1804–1807), and Osei Bonsu (1807–1824) continued territorial consolidation and expansion. The Ashante Empire included all of present-day Ghana and large parts of the Ivory Coast. The "ashantehene" inherited his position from his mother. He was assisted at the capital, Kumasi, by a civil service of men talented in trade, diplomacy, and the military, with a head called the "Gyaasehene". Men from Arabia, Sudan, and Europe were employed in the civil service, all of them appointed by the "ashantehene". At the capital and in other towns, the "ankobia" or special police were used as bodyguards to the "ashantehene", as sources of intelligence, and to suppress rebellion. Communication throughout the empire was maintained via a network of well-kept roads from the coast to the middle Niger and linking together other trade cities. For most of the 19th century, the Ashante Empire remained powerful. It was destroyed in 1900 by superior British weaponry and organization following the four Anglo-Ashanti wars. The Dahomey Kingdom was founded in the early 17th century when the Aja people of the Allada kingdom moved northward and settled among the Fon. They began to assert their power a few years later. In so doing they established the Kingdom of Dahomey, with its capital at Agbome. King Houegbadja (c. 1645–1685) organized Dahomey into a powerful centralized state. He declared all lands to be owned by the king and subject to taxation. Primogeniture in the kingship was established, neutralizing all input from village chiefs. A "cult of kingship" was established. A captive slave would be sacrificed annually to honor the royal ancestors. During the 1720s, the slave-trading states of Whydah and Allada were taken, giving Dahomey direct access to the slave coast and trade with Europeans. King Agaja (1708–1740) attempted to end the slave trade by keeping the slaves on plantations producing palm oil, but the European profits on slaves and Dahomey's dependency on firearms were too great. In 1730, under King Agaja, Dahomey was conquered by the Oyo Empire, and Dahomey had to pay tribute. Taxes on slaves were mostly paid in cowrie shells.
During the 19th century, palm oil was the main trading commodity. France conquered Dahomey during the Second Franco-Dahomean War (1892–1894) and established a colonial government there. Most of the troops who fought against Dahomey were native Africans. Traditionally, the Yoruba people viewed themselves as the inhabitants of a united empire, in contrast to the situation today, in which "Yoruba" is the cultural-linguistic designation for speakers of a language in the Niger–Congo family. The name comes from a Hausa word used to refer to the Oyo Empire. The first Yoruba state was Ile-Ife, said to have been founded around 1000 AD by a supernatural figure, the first "oni" Oduduwa. Oduduwa's sons would be the founders of the different city-states of the Yoruba, and his daughters would become the mothers of the various Yoruba "obas", or kings. Yoruba city-states were usually governed by an "oba" and an "iwarefa", a council of chiefs who advised the "oba." By the 18th century, the Yoruba city-states formed a loose confederation, with the "Oni" of Ife as the head and Ife as the capital. As time went on, the individual city-states became more powerful with their "obas" assuming more powerful spiritual positions and diluting the authority of the "Oni" of Ife. Rivalry became intense among the city-states. The Oyo Empire rose in the 16th century. The Oyo state had been conquered in 1550 by the kingdom of Nupe, which was in possession of cavalry, an important tactical advantage. The "alafin" (king) of Oyo was sent into exile. After returning, "Alafin" Orompoto (c. 1560–1580) built up an army based on heavily armed cavalry and long-service troops. This made them invincible in combat on the northern grasslands and in the thinly wooded forests. By the end of the 16th century, Oyo had added the western region of the Niger to the hills of Togo, the Yoruba of Ketu, Dahomey, and the Fon nation. A governing council served the empire, with clear executive divisions. Each acquired region was assigned a local administrator. Families served in king-making capacities. Oyo, as a northern Yoruba kingdom, served as middleman in the north–south trade, connecting the eastern forest of Guinea with the western and central Sudan, the Sahara, and North Africa. The Yoruba manufactured cloth, ironware, and pottery, which were exchanged for salt, leather, and, most importantly, horses from the Sudan to maintain the cavalry. Oyo remained strong for two hundred years. It became a protectorate of Great Britain in 1888, before further fragmenting into warring factions. The Oyo state ceased to exist as any sort of power in 1896. The Kwa Niger–Congo-speaking Edo people had established the Benin Empire by the middle of the 15th century. It was engaged in political expansion and consolidation from its very beginning. Under "Oba" (king) Ewuare (c. 1450–1480 AD), the state was organized for conquest. He solidified central authority and initiated 30 years of war with his neighbors. At his death, the Benin Empire extended to Dahomey in the west, to the Niger Delta in the east, along the west African coast, and to the Yoruba towns in the north. Ewuare's grandson "Oba" Esigie (1504–1550) eroded the power of the "uzama" (state council) and increased contact and trade with Europeans, especially with the Portuguese who provided a new source of copper for court art. The "oba" ruled with the advice of the "uzama", a council consisting of chiefs of powerful families and town chiefs of different guilds.
Later its authority was diminished by the establishment of administrative dignitaries. Women also wielded power: the queen mother who produced the future "oba" had immense influence. Benin was never a significant exporter of slaves, as Alan Ryder's book "Benin and the Europeans" showed. By the early 18th century, it was wracked by dynastic disputes and civil wars. However, it regained much of its former power in the reigns of Oba Eresoyen and Oba Akengbuda. After the 16th century, Benin mainly exported pepper, ivory, gum, and cotton cloth to the Portuguese and Dutch, who resold them to other African societies on the coast. In 1897, the British sacked the city. The Niger Delta comprised numerous city-states with numerous forms of government. These city-states were protected by the waterways and thick vegetation of the delta. The region was transformed by trade in the 17th century. The delta's city-states were comparable to those of the Swahili people in East Africa. Some, like Bonny, Kalabari, and Warri, had kings. Others, like Brass, were republics with small senates, and those at Cross River and Old Calabar were ruled by merchants of the "ekpe" society. The "ekpe" society regulated trade and made rules for its members, who were organized into what were known as house systems. Some of these houses, like the Pepples of Bonny, were well known in the Americas and Europe. The Igbo lived east of the delta (but with the Anioma on the west of the Niger River). The Kingdom of Nri rose in the 9th century, with the "Eze" Nri being its leader. It was a political entity composed of villages, and each village was autonomous and independent with its own territory and name, each recognized by its neighbors. Villages were democratic with all males and sometimes females a part of the decision-making process. Graves at Igbo-Ukwu (800 AD) contained brass artifacts of local manufacture and glass beads from Egypt or India, indicative of extraregional trade. By the 1850s, British and German missionaries and traders had penetrated present-day Namibia. Herero and Nama competed for guns and ammunition, providing cattle, ivory, and ostrich feathers. The Germans were more firmly established than the British in the region. By 1884, the Germans declared the coastal region from the Orange River to the Kunene River a German protectorate, part of German South West Africa. They pursued an aggressive policy of land expansion for white settlements. They exploited rivalry between the Nama and Herero. The Herero entered into an alliance with the Germans, thinking they could get an upper hand on the Nama. The Germans set up a garrison at the Herero capital and started allocating Herero land for white settlements, including the best grazing land in the central plateau, and made tax and labor demands. The Herero and Ovambanderu rebelled, but the rebellion was crushed and leaders were executed. Between 1896 and 1897, rinderpest crippled the backbone of the Herero and Nama economies and slowed white expansion. The Germans continued the policy of making Namibia a white settlement by seizing land and cattle, and even trying to export Herero labor to South Africa. In 1904, the Herero rebelled. German General Lothar von Trotha implemented an extermination policy at the Battle of Waterberg, which drove the Herero into the western Kalahari Desert. At the end of 1905, only 16,000 Herero were alive, out of a previous population of 80,000. Nama resistance was crushed in 1907.
All Nama and Herero cattle and land were confiscated from the very diminished population, with remaining Nama and Herero assuming a subordinate position. Labor had to be imported from among the Ovambo. A moment of great disorder in southern Africa was the "Mfecane", "the crushing." It was started by the northern Nguni kingdoms of Mthethwa, Ndwandwe, and Swaziland over scarce resources and famine. When Dingiswayo of Mthethwa died, Shaka of the Zulu people took over. He established the Zulu Kingdom, asserting authority over the Ndwandwe and pushing the Swazi north. The scattering Ndwandwe and Swazi caused the Mfecane to spread. During the 1820s, Shaka expanded the empire all along the Drakensberg foothills, with tribute being paid as far south as the Tugela and Umzimkulu rivers. He replaced the chiefs of conquered polities with "indunas", responsible to him. He introduced a centralized, dedicated, and disciplined military force not seen in the region, with a new weapon, the short stabbing spear. In 1828, Shaka was assassinated by his half-brother Dingane, who lacked the military genius and leadership skills of Shaka. Voortrekkers tried to occupy Zulu land in 1838. In the early months they were defeated, but the survivors regrouped at the Ncome River and soundly defeated the Zulu. However, the Voortrekkers dared not settle Zulu land. Dingane was killed in 1840 during a civil war. His brother Mpande took over and strengthened Zulu territories to the north. In 1879 the Zulu Kingdom was invaded by Britain in a quest to control all of South Africa. The Zulu Kingdom was victorious at the Battle of Isandlwana but was defeated at the Battle of Ulundi. One of the major states to emerge from the Mfecane was the Sotho Kingdom founded at Thaba Bosiu by Moshoeshoe I around 1821–1822. It was a confederation of different polities that accepted the absolute authority of Moshoeshoe. During the 1830s, the kingdom invited missionaries as a strategic means of acquiring guns and horses from the Cape. The Orange Free State slowly diminished the kingdom but never completely defeated it. In 1868, Moshoeshoe asked that the Sotho Kingdom be annexed by Britain, to save what remained of it. It became the British protectorate of Basutoland. The arrival of the ancestors of the Tswana-speakers who came to control the region (from the Vaal River to Botswana) has yet to be dated precisely, although AD 600 seems to be a consensus estimate. This massive cattle-raising complex prospered until 1300 AD or so. All these various peoples were connected to trade routes that ran via the Limpopo River to the Indian Ocean, and trade goods from Asia such as beads made their way to Botswana most likely in exchange for ivory, gold, and rhinoceros horn. The first written records relating to modern-day Botswana appear in 1824. What these records show is that the Bangwaketse had become the predominant power in the region. Under the rule of Makaba II, the Bangwaketse kept vast herds of cattle in well-protected desert areas, and used their military prowess to raid their neighbours. Other chiefdoms in the area, by this time, had capitals of 10,000 or so and were fairly prosperous. This equilibrium came to an end during the Mfecane period, 1823–1843, when a succession of invading peoples from South Africa entered the country. Although the Bangwaketse were able to defeat the invading Bakololo in 1826, over time all the major chiefdoms in Botswana were attacked, weakened, and impoverished.
The Bakololo and Amandebele raided repeatedly, and took large numbers of cattle, women, and children from the Batswana, most of whom were driven into the desert or sanctuary areas such as hilltops and caves. Only after 1843, when the Amandebele moved into western Zimbabwe, did this threat subside. During the 1840s and 1850s, trade with Cape Colony-based merchants opened up and enabled the Batswana chiefdoms to rebuild. The Bakwena, Bangwaketse, Bangwato and Batawana cooperated to control the lucrative ivory trade, and then used the proceeds to import horses and guns, which in turn enabled them to establish control over what is now Botswana. This process was largely complete by 1880, and thus the Bushmen, the Bakalanga, the Bakgalagadi, the Batswapong and other current minorities were subjugated by the Batswana. Following the Great Trek, Afrikaners from the Cape Colony established themselves on the borders of Botswana in the Transvaal. In 1852 a coalition of Tswana chiefdoms led by Sechele I resisted Afrikaner incursions, and after about eight years of intermittent tensions and hostilities, eventually came to a peace agreement in Potchefstroom in 1860. From that point on, the modern-day border between South Africa and Botswana was agreed on, and the Afrikaners and Batswana traded and worked together peacefully. In the 1820s, refugees from the Zulu expansion under Shaka came into contact with the Basotho people residing on the highveld. In 1823, those pressures caused one group of Basotho, the Kololo, to migrate north, past the Okavango Swamp and across the Zambezi into Barotseland, now part of Zambia. In 1845, the Kololo conquered Barotseland. At about the same time, the Boers began to encroach upon Basotho territory. After the Cape Colony had been ceded to Britain at the conclusion of the Napoleonic Wars, the "voortrekkers" ("pioneers") were farmers who opted to leave the former Dutch colony and move inland, where they eventually established independent polities. At the time of these developments, Moshoeshoe I gained control of the Basotho kingdoms of the southern Highveld. Universally praised as a skilled diplomat and strategist, he was able to weld the disparate refugee groups escaping the Difaqane into a cohesive nation. His inspired leadership helped his small nation to survive the dangers and pitfalls (the Zulu hegemony, the inward expansion of the voortrekkers and the designs of imperial Britain) that destroyed other indigenous South African kingdoms during the 19th century. In 1822, Moshoeshoe established his capital at Butha-Buthe, an easily defensible mountain in the northern Drakensberg mountains, laying the foundations of the eventual Kingdom of Lesotho. His capital was later moved to Thaba Bosiu. To deal with the encroaching voortrekker groups, Moshoeshoe encouraged French missionary activity in his kingdom. Missionaries sent by the Paris Evangelical Missionary Society provided the King with foreign affairs counsel and helped to facilitate the purchase of modern weapons. Aside from acting as state ministers, missionaries (primarily Casalis and Arbousset) played a vital role in delineating Sesotho orthography and printing Sesotho language materials between 1837 and 1855. The first Sesotho translation of the Bible appeared in 1878.
In 1868, after losing the western lowlands to the Boers during the Free State–Basotho Wars, Moshoeshoe successfully appealed to Queen Victoria to proclaim Lesotho (then known as Basutoland) a protectorate of Britain, and the British administration was placed in Maseru, the site of Lesotho's current capital. Local chieftains retained power over internal affairs while Britain was responsible for foreign affairs and the defence of the protectorate. In 1869, the British sponsored a process by which the borders of Basutoland were finally demarcated. While many clans had territory within Basutoland, large numbers of Sesotho speakers resided in areas allocated to the Orange Free State, the sovereign voortrekker republic that bordered the Basotho kingdom. By the 19th century, most Khoikhoi territory was under Boer control. The Khoikhoi had lost economic and political independence and had been absorbed into Boer society. The Boers spoke Afrikaans, a language or dialect derived from Dutch, and no longer called themselves Boers but Afrikaners. Some Khoikhoi were used as commandos in raids against other Khoikhoi and later Xhosa. A mixed Khoi, slave, and European population called the Cape Coloureds, who were outcasts within colonial society, also arose. Khoikhoi who lived far on the frontier included the Kora, Oorlams, and Griqua. In 1795, the British took over the Cape Colony from the Dutch. In the 1830s, Boers embarked on a journey of expansion, east of the Great Fish River into the Zuurveld. They were referred to as "Voortrekkers". They founded the republics of the Transvaal and Orange Free State, mostly in areas of sparse population that had been diminished by the "Mfecane/Difaqane". Unlike the Khoisan, the Bantu-speaking states were not conquered by the Afrikaners, owing to their greater population density and unity. Additionally, they began to arm themselves with guns acquired through trade at the Cape. In some cases, as in the Xhosa/Boer Wars, Boers were removed from Xhosa lands. It required a dedicated imperial military force to subdue the Bantu-speaking states. In 1902, the Boer republics were defeated by Britain in the Second Boer War. The defeat, however, consummated many Afrikaners' ambition: South Africa would be under white rule. The British placed all power (legislative, executive, administrative) in English and Afrikaner hands. Between 1878 and 1898, European states partitioned and conquered most of Africa. For 400 years, European nations had mainly limited their involvement to trading stations on the African coast. Few dared venture inland from the coast; those that did, like the Portuguese, often met defeats and had to retreat to the coast. Several technological innovations helped to overcome this 400-year pattern. One was the development of repeating rifles, which were easier and quicker to load than muskets. Artillery was being used increasingly. In 1885, Hiram S. Maxim developed the Maxim gun, the model of the modern-day machine gun. European states kept these weapons largely among themselves by refusing to sell them to African leaders. African germs took numerous European lives and deterred permanent settlements. Diseases such as yellow fever, sleeping sickness, yaws, and leprosy made Africa a very inhospitable place for Europeans. The deadliest disease was malaria, endemic throughout Tropical Africa. In 1854, the successful use of quinine as a malaria preventative, along with other medical innovations, helped to make conquest and colonization in Africa possible. Strong motives for conquest of Africa were at play.
Raw materials were needed for European factories. Europe in the early part of the 19th century was undergoing its Industrial Revolution. Nationalist rivalries and prestige were at play. Acquiring African colonies would show rivals that a nation was powerful and significant. These factors culminated in the Scramble for Africa. Knowledge of Africa increased. Numerous European explorers began to explore the continent. Mungo Park traversed the Niger River. James Bruce travelled through Ethiopia and located the source of the Blue Nile. Richard Francis Burton was the first European at Lake Tanganyika. Samuel White Baker explored the Upper Nile. John Hanning Speke located a source of the Nile at Lake Victoria. Other significant European explorers included Heinrich Barth, Henry Morton Stanley (coiner of the term "Dark Continent" for Africa in an 1878 book), Silva Porto, Alexandre de Serpa Pinto, René Caillié, Friedrich Gerhard Rohlfs, Gustav Nachtigal, George Schweinfurth, and Joseph Thomson. The most famous of the explorers was David Livingstone, who explored southern Africa and traversed the continent from the Atlantic at Luanda to the Indian Ocean at Quelimane. European explorers made use of African guides and servants, and established long-distance trading routes. Missionaries attempting to spread Christianity also increased European knowledge of Africa. Between 1884 and 1885, European nations met at the Berlin West Africa Conference to discuss the partitioning of Africa. It was agreed that European claims to parts of Africa would only be recognised if Europeans provided effective occupation. In a series of treaties in 1890–1891, colonial boundaries were completely drawn. All of Sub-Saharan Africa was claimed by European powers, except for Ethiopia (Abyssinia) and Liberia. The European powers set up a variety of different administrations in Africa, reflecting different ambitions and degrees of power. In some areas, such as parts of British West Africa, colonial control was tenuous and intended for simple economic extraction, strategic power, or as part of a long-term development plan. In other areas, Europeans were encouraged to settle, creating settler states in which a European minority dominated. Settlers only came to a few colonies in sufficient numbers to have a strong impact. British settler colonies included British East Africa (now Kenya), Northern and Southern Rhodesia (Zambia and Zimbabwe, respectively), and South Africa, which already had a significant population of European settlers, the Boers. France planned to settle Algeria and eventually incorporate it into the French state on an equal basis with the European provinces. Algeria's proximity across the Mediterranean allowed plans of this scale. In most areas, colonial administrations did not have the manpower or resources to fully administer the territory and had to rely on local power structures to help them. Various factions and groups within the societies exploited this European requirement for their own purposes, attempting to gain positions of power within their own communities by cooperating with Europeans. One aspect of this struggle included what Terence Ranger has termed the "invention of tradition." In order to legitimize their own claims to power in the eyes of both the colonial administrators and their own people, native elites would essentially manufacture "traditional" claims to power, or ceremonies. As a result, many societies were thrown into disarray by the new order.
Following the Scramble for Africa, an early but secondary focus for most colonial regimes was the suppression of slavery and the slave trade. By the end of the colonial period they were mostly successful in this aim, though slavery persists in parts of Africa. As a part of the Scramble for Africa, France had the establishment of a continuous west–east axis of the continent as an objective, in contrast with the British north–south axis. Tensions between Britain and France reached a flashpoint in Africa. At several points war was possible, but never happened. The most serious episode was the Fashoda Incident of 1898. French troops tried to claim an area in the Southern Sudan, and a much more powerful British force purporting to be acting in the interests of the Khedive of Egypt arrived to confront them. Under heavy pressure the French withdrew, securing British control over the area. The status quo was recognised by an agreement between the two states acknowledging British control over Egypt, while France became the dominant power in Morocco; overall, however, France had suffered a humiliating defeat. By the 1890s the European powers had divided up almost all of Africa (only Ethiopia and Liberia were independent). They ruled until after World War II, when forces of nationalism grew much stronger. In the 1950s and 1960s the colonial holdings became independent states. The process was usually peaceful, but there were several long, bitter, and bloody wars, as in Algeria, Kenya and elsewhere. Across Africa the powerful new force of nationalism drew upon the organizational skills that natives learned in the British, French, and other armies in the world wars. It led to organizations that were controlled or endorsed by neither the colonial powers nor the traditional local power structures that were collaborating with the colonial powers. Nationalistic organizations began to challenge both the traditional and the new colonial structures and finally displaced them. Leaders of nationalist movements took control when the European authorities exited; many ruled for decades or until they died. These structures included political, educational, religious, and other social organizations. In recent decades, many African countries have undergone the triumph and defeat of nationalistic fervor, changing in the process the locus of centralized state power and the patrimonial state. With the vast majority of the continent under the colonial control of European governments, the World Wars were significant events in the geopolitical history of Africa. Africa was a theater of war and saw fighting in both wars. More important in most regions, the total war footing of colonial powers impacted the governance of African colonies, through resource allocation, conscription, and taxation. In World War I there were several campaigns in Africa, including the Togoland Campaign, the Kamerun Campaign, the South West Africa campaign, and the East African campaign. In each, Allied forces, primarily British, but also French, Belgian, South African, and Portuguese, sought to force the Germans out of their African colonies. In each, German forces were badly outnumbered and, due to Allied naval superiority, were cut off from reinforcement or resupply. The Allies eventually conquered all German colonies; German forces in East Africa managed to avoid surrender throughout the war, though they could not hold any territory after 1917.
After World War I, former German colonies in Africa were taken over by France, Belgium, and the British Empire. After World War I, colonial powers continued to consolidate their control over their African territories. In some areas, particularly in Southern and East Africa, large settler populations were successful in pressing for additional devolution of administration, so-called "home rule" by the white settlers. In many cases, settler regimes were harsher on African populations, tending to see them more as a threat to political power, as opposed to colonial regimes which had generally endeavored to co-opt local populations into economic production. The Great Depression strongly affected Africa's non-subsistence economy, much of which was based on commodity production for Western markets. As demand increased in the late 1930s, Africa's economy rebounded as well. Africa was the site of one of the first instances of fascist territorial expansions in the 1930s. Italy had attempted to conquer Ethiopia in the 1890s but had been rebuffed in the First Italo-Ethiopian War. Ethiopia lay between two Italian colonies, Italian Somaliland and Eritrea, and was invaded in October 1935. With an overwhelming advantage in armor and aircraft, by May 1936, Italian forces had occupied the capital of Addis Ababa and effectively declared victory. Ethiopia and Italy's other colonies in the region were consolidated into Italian East Africa. Africa's size and geography gave it strategic importance during the war. North Africa was the scene of major British and American campaigns against Italy and Germany; East Africa was the scene of a major British campaign against Italy. The vast geography provided major transportation routes linking the United States to the Middle East and Mediterranean regions. The sea route around South Africa was heavily used even though it added 40 days to voyages that had to avoid the dangerous Suez region. Lend-Lease supplies to Russia often came this way. Internally, long-distance road and railroad connections facilitated the British war effort. The Union of South Africa had dominion status and was largely self-governing; the other British possessions were ruled by the Colonial Office, usually with close ties to local chiefs and kings. Italian holdings were the target of successful British military campaigns. The Belgian Congo and two other Belgian colonies were major exporters. In terms of numbers and wealth, the British controlled the richest portions of Africa and made extensive use not only of the geography but also of the manpower and natural resources. Civilian colonial officials made a special effort to upgrade the African infrastructure, promote agriculture, integrate colonial Africa with the world economy, and recruit over a half million soldiers. Before the war, Britain had made few plans for the utilization of Africa, but it quickly set up command structures. The Army set up the West Africa Command, which recruited 200,000 soldiers. The East Africa Command was created in September 1941 to support the overstretched Middle East Command. It provided the largest number of men, over 320,000, chiefly from Kenya, Tanganyika, and Uganda. The Southern Command was the domain of South Africa. The Royal Navy set up the South Atlantic Command based in Sierra Leone, which became one of the main convoy assembly points. The RAF Coastal Command had major submarine-hunting operations based in West Africa, while a smaller RAF command dealt with submarines in the Indian Ocean.
Ferrying aircraft from North America and Britain was the major mission of the Western Desert Air Force. In addition, smaller, more localized commands were set up throughout the war. Before 1939, the military establishments were very small throughout British Africa, and largely consisted of whites, who comprised under two percent of the population outside South Africa. As soon as the war began, new African units were set up, primarily by the Army. The new recruits were almost always volunteers, usually provided in close cooperation with local tribal leaders. During the war, military pay scales far exceeded what native civilians could earn, especially when food, housing, and clothing allowances were included. The largest numbers were in construction units, called Pioneer units, with over 82,000 soldiers. The RAF and Navy also did some recruiting. The volunteers did some fighting, a great deal of guard duty, and construction work. Some 80,000 served in the Middle East. A special effort was made not to challenge white supremacy, certainly before the war, and to a large extent during the war itself. Nevertheless, the soldiers were drilled and trained to European standards, given strong doses of propaganda, and learned leadership and organizational skills that proved essential to the formation of nationalistic and independence movements after 1945. There were minor episodes of discontent, but nothing serious, among the natives. Afrikaner nationalism was a factor in South Africa, but the pro-German Afrikaner prime minister was replaced in 1939 by Jan Smuts, an Afrikaner who was an enthusiastic supporter of the British Empire. His government closely cooperated with London and raised 340,000 volunteers (190,000 were white, or about one-third of the eligible white men). As early as 1857, the French established volunteer units of black soldiers in sub-Saharan Africa, termed the "tirailleurs senegalais." They served in military operations throughout the Empire, including 171,000 soldiers in World War I and 160,000 in World War II. About 90,000 became POWs in Germany. The veterans played a central role in the postwar independence movement in French Africa. French authorities in West Africa declared allegiance to the Vichy regime, as did the colony of French Gabon. Vichy forces defeated a Free French Forces invasion of French West Africa in the two battles of Dakar in July and September 1940. Gabon fell to Free France after the Battle of Gabon in November 1940, but West Africa remained under Vichy control until November 1942. Vichy forces tried to resist the overwhelming Allied landings in North Africa (Operation Torch) in November 1942. Vichy Admiral François Darlan suddenly switched sides and the fighting ended. The Allies gave Darlan control of North African French forces in exchange for support from both French North Africa and French West Africa. Vichy was now eliminated as a factor in Africa. Darlan was assassinated in December, and the two factions of Free French, led by Charles de Gaulle and Henri Giraud, jockeyed for power. De Gaulle finally won out. Since Germany had lost its African colonies following World War I, World War II did not reach Africa until Italy joined the war on June 10, 1940, controlling Libya and Italian East Africa.
With the fall of France on June 25, most of France's colonies in North and West Africa were controlled by the Vichy government, though much of Central Africa fell under Free French control after some fighting between Vichy and Free French forces at the Battle of Dakar and the Battle of Gabon. After the fall of France, Africa was the only active theater for ground combat until the Italian invasion of Greece in October. In the Western Desert campaign, Italian forces from Libya sought to overrun Egypt, controlled by the British. Simultaneously, in the East African campaign, Italian East African forces overran British Somaliland and some British outposts in Kenya and Anglo-Egyptian Sudan. When Italy's efforts to conquer Egypt (including the crucial Suez Canal) and Sudan fell short, they were unable to reestablish supply to Italian East Africa. Without the ability to reinforce or resupply and surrounded by Allied possessions, Italian East Africa was conquered by mainly British and South African forces in 1941. In North Africa, the Italians soon requested help from the Germans, who sent a substantial force under General Rommel. With German help, the Axis forces regained the upper hand but were unable to break through British defenses in two tries at El Alamein. In late 1942, Allied forces, mainly American and British, invaded French North Africa in Operation Torch, where Vichy French forces initially surprised them with their resistance but were convinced to stop fighting after three days. The second front relieved pressure on the British in Egypt, who began pushing west to meet up with the Torch forces, eventually pinning German and Italian forces in Tunisia, which was conquered by May 1943 in the Tunisia campaign, ending the war in Africa. The only other significant operations occurred in the French colony of Madagascar, which was invaded by the British in May 1942 to deny its ports to the Axis (potentially the Japanese who had reached the eastern Indian Ocean). The French garrisons in Madagascar surrendered in November 1942. The decolonization of Africa started with Libya in 1951, although Liberia, South Africa, Egypt and Ethiopia were already independent. Many countries followed in the 1950s and 1960s, with a peak in 1960 with the Year of Africa, which saw 17 African nations declare independence, including a large part of French West Africa. Most of the remaining countries gained independence throughout the 1960s, although some colonizers (Portugal in particular) were reluctant to relinquish sovereignty, resulting in bitter wars of independence which lasted for a decade or more. The last African countries to gain formal independence were Guinea-Bissau (1974), Mozambique (1975) and Angola (1975) from Portugal; Djibouti from France in 1977; Zimbabwe from the United Kingdom in 1980; and Namibia from South Africa in 1990. Eritrea later split off from Ethiopia in 1993. The Mau Mau Uprising took place in Kenya from 1952 until 1956 but was put down by British and local forces. A state of emergency remained in place until 1960. Kenya became independent in 1963, and Jomo Kenyatta served as its first president. The early 1960s also signaled the start of major clashes between the Hutus and the Tutsis in Rwanda and Burundi. In 1994 this culminated in the Rwandan genocide, a conflict in which over 800,000 people were murdered. Moroccan nationalism developed during the 1930s; the Istiqlal Party was formed, pushing for independence. In 1953, Sultan Mohammed V of Morocco called for independence.
On March 2, 1956, Morocco became independent of France. Mohammed V became ruler of independent Morocco. In 1954, Algerian nationalists formed the National Liberation Front (FLN) to fight for independence from France. This resulted in the Algerian War, which lasted until independence negotiations in 1962. Muhammad Ahmed Ben Bella was elected President of Algeria. Over a million French nationals, predominantly Pieds-Noirs, left the country, crippling the economy. In 1934, the "Neo Destour" (New Constitution) party was founded by Habib Bourguiba, pushing for independence in Tunisia. Tunisia became independent in 1956. Its "bey" was deposed and Habib Bourguiba elected President of Tunisia. Gamal Abdel Nasser helped depose the monarchy of Egypt in the Egyptian Revolution of 1952 and came to power as Prime Minister of Egypt in 1954. Muammar Gaddafi led the 1969 Libyan coup d'état which deposed Idris of Libya. Gaddafi remained in power until his death in the Libyan Civil War of 2011. Egypt was involved in several wars against Israel and was allied with other Arab countries. The first was the 1948 Arab–Israeli War, right after the state of Israel was founded. Egypt went to war again in the Six-Day War of 1967 and lost the Sinai Peninsula to Israel. They went to war yet again in the Yom Kippur War of 1973. In 1979, President of Egypt Anwar Sadat and Prime Minister of Israel Menachem Begin signed the Camp David Accords, which gave back the Sinai Peninsula to Egypt in exchange for the recognition of Israel. The accords are still in effect today. In 1981, Sadat was assassinated by members of the Egyptian Islamic Jihad under Khalid Islambouli. The assassins were Islamists who targeted Sadat for his signing of the Accords. In 1948 apartheid laws were introduced in South Africa by the dominant National Party. These were largely a continuation of existing policies; the difference was the policy of "separate development" (Apartheid). Where previous policies had only been disparate efforts to economically exploit the African majority, Apartheid represented an entire philosophy of separate racial goals, leading to both the divisive laws of 'petty apartheid,' and the grander scheme of African homelands. In 1994, Apartheid ended, and Nelson Mandela of the African National Congress was elected president after the 1994 South African general election, the country's first non-racial election. The central regions of Africa were traditionally regarded as the regions between Kilwa and the mouth of the Zambesi river. Due to its isolated position from the coasts, this area has received minimal attention from historians of Africa. It also had one of the most varied sets of European colonial powers, including Germany in Cameroon, Britain in the Northern Cameroons, Belgium in the Congo, and France in CAF. Owing to its terrain, among the main tropes regarding Central Africa are traversing its lands and the nature of its tropical environment. Since 1982, one of the main protracted issues within central Africa has been the ongoing secession movement of Ambazonia. The impasse between Cameroon and Ambazonia gained steam in 1992, when Fon Gorji-Dinka filed an international lawsuit against Cameroon claiming that Ambazonian territories were held illegally by the latter and that Cameroonian claims on them had no legal basis. Fifteen years later, this stalemate escalated when Ambazonia formally declared itself the Federal Republic of Ambazonia.
Following World War II, nationalist movements arose across West Africa, most notably in Ghana under Kwame Nkrumah. In 1957, Ghana became the first sub-Saharan colony to achieve its independence, followed the next year by France's colonies; by 1974, West Africa's nations were entirely autonomous. Since independence, many West African nations have been plagued by corruption and instability, with notable civil wars in Nigeria, Sierra Leone, Liberia, and Ivory Coast, and a succession of military coups in Ghana and Burkina Faso. Many states have failed to develop their economies despite enviable natural resources, and political instability is often accompanied by undemocratic government. The first historical studies in English appeared in the 1890s, and followed one of four approaches. 1) The territorial narrative was typically written by a veteran soldier or civil servant who gave heavy emphasis to what he had seen. 2) The "apologia" were essays designed to justify British policies. 3) Popularizers tried to reach a large audience. 4) Compendia appeared, designed to combine academic and official credentials. Professional scholarship appeared around 1900, and began with the study of business operations, typically using government documents and unpublished archives. The economic approach was widely practiced in the 1930s, primarily to provide descriptions of the changes underway in the previous half-century. In 1935, American historian William L. Langer published "The Diplomacy of Imperialism: 1890–1902", a book that is still widely cited. In 1939, Oxford professor Reginald Coupland published "The Exploitation of East Africa, 1856–1890: The Slave Trade and the Scramble", another popular treatment. World War II diverted most scholars to wartime projects and accounted for a pause in scholarship during the 1940s. By the 1950s many African students were studying in British universities; they created a demand for new scholarship and began to supply it themselves. Oxford University became the main center for African studies, with activity as well at Cambridge University and the London School of Economics. The perspective of British government policymakers or international business operations slowly gave way to a new interest in the activities of the natives, especially nationalistic movements and the growing demand for independence. The major breakthrough came from Ronald Robinson and John Andrew Gallagher, especially with their studies of the impact of free trade on Africa. In 1985 "The Oxford History of South Africa" (2 vols.) was published, attempting to synthesize the available materials. In 2013, "The Oxford Handbook of Modern African History" was published, bringing the scholarship up to date.
https://en.wikipedia.org/wiki?curid=14099
History of Oceania The History of Oceania includes the history of Australia, New Zealand, Hawaii, Papua New Guinea, Fiji and other Pacific island nations. The prehistory of Oceania is divided into the prehistory of each of its major areas: Polynesia, Micronesia, Melanesia, and Australasia, and these vary greatly as to when they were first inhabited by humans—from 70,000 years ago (Australasia) to 3,000 years ago (Polynesia). The Polynesian people are considered, on linguistic, archaeological and human genetic evidence, to be a subset of the sea-migrating Austronesian people; tracing Polynesian languages places their prehistoric origins in the Malay Archipelago, and ultimately, in Taiwan. Between about 3000 and 1000 BCE speakers of Austronesian languages began spreading from Taiwan into Island South-East Asia and on to the edges of western Micronesia and into Melanesia. Their ancestors are thought to have arrived in Taiwan from South China about 8,000 years ago, and they are distinct from the Han Chinese who now form the majority of people in China and Taiwan. There are three theories regarding the spread of humans across the Pacific to Polynesia, outlined by Kayser "et al." (2000). In the archaeological record there are well-defined traces of this expansion which allow the path it took to be followed and dated with some certainty. It is thought that by roughly 1400 BCE, "Lapita Peoples", so-named after their pottery tradition, appeared in the Bismarck Archipelago of north-west Melanesia. This culture is seen as having adapted and evolved through time and space since its emergence "Out of Taiwan". They had given up rice production, for instance, after encountering and adapting to breadfruit in the Bird's Head area of New Guinea. The most easterly site at which Lapita archaeological remains have so far been recovered is in Samoa. The site is at Mulifanua on Upolu. The Mulifanua site, where 4,288 pottery shards have been found and studied, has a "true" age of c. 1000 BCE based on C14 dating. A 2010 study places the beginning of the human archaeological sequences of Polynesia in Tonga at 900 BCE, the small differences in dates with Samoa being due to differences in radiocarbon dating technologies between 1989 and 2010, the Tongan site apparently predating the Samoan site by some few decades in real time. Within a mere three or four centuries between about 1300 and 900 BCE, the Lapita archaeological culture spread 6,000 kilometres further to the east from the Bismarck Archipelago, until it reached as far as Fiji, Tonga, and Samoa. The area of Tonga, Fiji, and Samoa served as a gateway into the rest of the Pacific region known as Polynesia. Ancient Tongan mythologies recorded by early European explorers report the islands of 'Ata and Tongatapu as the first islands hauled to the surface from the deep ocean by Maui. The "Tuʻi Tonga Empire" or "Tongan Empire" in Oceania are descriptions sometimes given to Tongan expansionism and projected hegemony dating back to 950 CE, but at its peak during the period 1200–1500. While modern researchers and cultural experts attest to widespread Tongan influence and evidence of transoceanic trade and exchange of material and non-material cultural artifacts, empirical evidence of a true political empire ruled for any length of time by successive rulers is lacking.
Modern archeology, anthropology and linguistic studies confirm widespread Tongan cultural influence ranging widely through East 'Uvea, Rotuma, Futuna, Samoa and Niue, parts of Micronesia (Kiribati, Pohnpei), Vanuatu, and New Caledonia and the Loyalty Islands, and while some academics prefer the term "maritime chiefdom", others argue that, while very different from examples elsewhere, "..."empire" is probably the most convenient term." Pottery found at Fijian sites shows that Fiji was settled around 3500 to 1000 BC, or possibly earlier, although the question of Pacific migration still lingers. It is believed that the Lapita people or the ancestors of the Polynesians settled the islands first but not much is known of what became of them after the Melanesians arrived; they may have had some influence on the new culture, and archaeological evidence shows that they would have then moved on to Tonga, Samoa and even Hawai'i. The first settlements in Fiji were started by voyaging traders and settlers from the west about 5000 years ago. Lapita pottery shards have been found at numerous excavations around the country. Aspects of Fijian culture are similar to the Melanesian culture of the western Pacific but have a stronger connection to the older Polynesian cultures. From east to west, Fiji has been a nation of many languages. Fiji's history was one of settlement but also of mobility. Over the centuries, a unique Fijian culture developed. Constant warfare and cannibalism between warring tribes were quite rampant and very much part of everyday life. In later centuries, the ferocity of the cannibal lifestyle deterred European sailors from going near Fijian waters, giving Fiji the name "Cannibal Isles"; as a result, Fiji remained unknown to the rest of the world. Early European visitors to Easter Island recorded the local oral traditions about the original settlers. In these traditions, Easter Islanders claimed that a chief Hotu Matu'a arrived on the island in one or two large canoes with his wife and extended family. They are believed to have been Polynesian. There is considerable uncertainty about the accuracy of this legend as well as the date of settlement. Published literature suggests the island was settled around 300–400 CE, or at about the time of the arrival of the earliest settlers in Hawaii. Some scientists say that Easter Island was not inhabited until 700–800 CE. This date range is based on glottochronological calculations and on three radiocarbon dates from charcoal that appears to have been produced during forest clearance activities. Moreover, a recent study which included radiocarbon dates from what is thought to be very early material suggests that the island was settled as recently as 1200 CE. This seems to be supported by a 2006 study of the island's deforestation, which could have started around the same time. A large, now extinct palm, "Paschalococos disperta", related to the Chilean wine palm ("Jubaea chilensis"), was one of the dominant trees, as attested by fossil evidence; this species, which occurred only on Easter Island, became extinct due to deforestation by the early settlers. Micronesia began to be settled several millennia ago, although there are competing theories about the origin and arrival of the first settlers. There are numerous difficulties with conducting archaeological excavations in the islands, due to their size, settlement patterns and storm damage. As a result, much evidence is based on linguistic analysis.
The earliest archaeological traces of civilization have been found on the island of Saipan, dated to 1500 BCE or slightly before. The ancestors of the Micronesians settled there over 4,000 years ago. A decentralized chieftain-based system eventually evolved into a more centralized economic and religious culture centered on Yap and Pohnpei. The prehistory of many Micronesian islands such as Yap is not known very well. On Pohnpei, pre-colonial history is divided into three eras: "Mwehin Kawa" or "Mwehin Aramas" (Period of Building, or Period of Peopling, before c. 1100); "Mwehin Sau Deleur" (Period of the Lord of Deleur, c. 1100 to c. 1628); and "Mwehin Nahnmwarki" (Period of the Nahnmwarki, c. 1628 to c. 1885). Pohnpeian legend recounts that the Saudeleur rulers, the first to bring government to Pohnpei, were of foreign origin. The Saudeleur centralized form of absolute rule is characterized in Pohnpeian legend as becoming increasingly oppressive over several generations. Arbitrary and onerous demands, as well as a reputation for offending Pohnpeian deities, sowed resentment among Pohnpeians. The Saudeleur Dynasty ended with the invasion of Isokelekel, another semi-mythical foreigner, who replaced the Saudeleur rule with the more decentralized "nahnmwarki" system in existence today. Isokelekel is regarded as the creator of the modern Pohnpeian "nahnmwarki" social system and the father of the Pohnpeian people. Construction of Nan Madol, a megalithic complex made from basalt lava logs in Pohnpei, began as early as 1200 CE. Nan Madol is offshore of Temwen Island near Pohnpei, consists of a series of small artificial islands linked by a network of canals, and is often called the "Venice of the Pacific". It is located near the island of Pohnpei and was the ceremonial and political seat of the Saudeleur Dynasty that united Pohnpei's estimated 25,000 people until its centralized system collapsed amid the invasion of Isokelekel. Isokelekel and his descendants initially occupied the stone city, but later abandoned it. The first people of the Northern Mariana Islands navigated to the islands at some period between 4000 BCE and 2000 BCE from South-East Asia. They became known as the Chamorros, and spoke an Austronesian language called Chamorro. The ancient Chamorro left a number of megalithic ruins, including Latte stone. The Refaluwasch, or Carolinian, people came to the Marianas in the 1800s from the Caroline Islands. Micronesian colonists gradually settled the Marshall Islands during the 2nd millennium BCE, with inter-island navigation made possible using traditional stick charts. The first settlers of Australia, New Guinea, and the large islands just to the east arrived between 50,000 and 30,000 years ago, when Neanderthals still roamed Europe. The original inhabitants of the group of islands now named Melanesia were likely the ancestors of the present-day Papuan-speaking people. Migrating from South-East Asia, they appear to have occupied these islands as far east as the main islands in the Solomon Islands archipelago, including Makira and possibly the smaller islands farther to the east. Particularly along the north coast of New Guinea and in the islands north and east of New Guinea, the Austronesian people, who had migrated into the area somewhat more than 3,000 years ago, came into contact with these pre-existing populations of Papuan-speaking peoples.
In the late 20th century, some scholars theorized a long period of interaction, which resulted in many complex changes in genetics, languages, and culture among the peoples. Kayser et al. proposed that, from this area, a very small group of people (speaking an Austronesian language) departed to the east to become the forebears of the Polynesian people. However, the theory is contradicted by the findings of a genetic study published by Temple University in 2008; based on genome scans and evaluation of more than 800 genetic markers among a wide variety of Pacific peoples, it found that neither Polynesians nor Micronesians have much genetic relation to Melanesians. Both groups are strongly related genetically to East Asians, particularly Taiwanese aborigines. It appeared that, having developed their sailing outrigger canoes, the Polynesian ancestors migrated from East Asia, moved through the Melanesian area quickly on their way, and kept going to eastern areas, where they settled. They left little genetic evidence in Melanesia. The study found a high rate of genetic differentiation and diversity among the groups living within the Melanesian islands, with the peoples distinguished by island, language, topography, and geography among the islands. Such diversity developed over their tens of thousands of years of settlement before the Polynesian ancestors ever arrived at the islands. For instance, populations developed differently in coastal areas, as opposed to those in more isolated mountainous valleys. Additional DNA analysis has taken research into new directions, as more human species have been discovered since the late 20th century. Based on his genetic studies of the Denisova hominin, an ancient human species discovered in 2010, Svante Pääbo claims that ancient human ancestors of the Melanesians interbred in Asia with these humans. He has found that people of New Guinea share 4–6% of their genome with the Denisovans, indicating this exchange. The Denisovans are considered cousin to the Neanderthals; both groups are now understood to have migrated out of Africa, with the Neanderthals going into Europe, and the Denisovans heading east about 400,000 years ago. This is based on genetic evidence from a fossil found in Siberia. The evidence from Melanesia suggests their territory extended into south Asia, where ancestors of the Melanesians developed. Melanesians of some islands are one of the few non-European peoples, and the only dark-skinned group of people outside Australia, known to have blond hair. Indigenous Australians are the original inhabitants of the Australian continent and nearby islands. Indigenous Australians migrated from Africa to Asia around 70,000 years ago and arrived in Australia around 50,000 years ago. The Torres Strait Islanders are indigenous to the Torres Strait Islands, which are at the northernmost tip of Queensland near Papua New Guinea. The term "Aboriginal" is traditionally applied to only the indigenous inhabitants of mainland Australia and Tasmania, along with some of the adjacent islands, i.e. the "first peoples". "Indigenous Australians" is an inclusive term used when referring to both Aboriginal and Torres Strait Islanders. The earliest definite human remains found to date are those of Mungo Man, which have been dated at about 40,000 years old, but the time of arrival of the ancestors of Indigenous Australians is a matter of debate among researchers, with estimates dating back as far as 125,000 years ago.
There is great diversity among different Indigenous communities and societies in Australia, each with its own unique mixture of cultures, customs and languages. In present-day Australia these groups are further divided into local communities. Oceania was first explored by Europeans from the 16th century onwards. Portuguese navigators, between 1512 and 1526, reached the Moluccas (by António de Abreu and Francisco Serrão in 1512), Timor, the Aru Islands (Martim A. Melo Coutinho), the Tanimbar Islands, some of the Caroline Islands (by Gomes de Sequeira in 1525), and west Papua New Guinea (by Jorge de Menezes in 1526). In 1519 a Castilian ('Spanish') expedition led by Ferdinand Magellan sailed down the east coast of South America, found and sailed through the strait that bears his name and on 28 November 1520 entered the ocean which he named "Pacific". The three remaining ships, led by Magellan and his captains Duarte Barbosa and João Serrão, then sailed north and caught the trade winds which carried them across the Pacific to the Philippines where Magellan was killed. One surviving ship led by Juan Sebastián Elcano returned west across the Indian Ocean and the other went north in the hope of finding the westerlies and reaching Mexico. Unable to find the right winds, it was forced to return to the East Indies. The Magellan-Elcano expedition achieved the first circumnavigation of the world and reached the Philippines, the Mariana Islands and other islands of Oceania. From 1527 to 1595 a number of other large Spanish expeditions crossed the Pacific Ocean, leading to the discovery of the Marshall Islands and Palau in the North Pacific, as well as Tuvalu, the Marquesas, the Solomon Islands archipelago, the Cook Islands and the Admiralty Islands in the South Pacific. In 1565, Spanish navigator Andrés de Urdaneta found a wind system that would allow ships to sail eastward from Asia, back to the Americas. From then until 1815 the annual Manila Galleons crossed the Pacific from Mexico to the Philippines and back, in the first transpacific trade route in history. Combined with the Spanish Atlantic or West Indies Fleet, the Manila Galleons formed one of the first global maritime exchange networks in human history, linking Seville in Spain with Manila in the Philippines, via Mexico. Later, in the quest for Terra Australis, Spanish explorers in the 17th century discovered the Pitcairn and Vanuatu archipelagos, and sailed the Torres Strait between Australia and New Guinea, named after navigator Luís Vaz de Torres. In 1668 the Spanish founded a colony on Guam as a resting place for west-bound galleons. For a long time this was the only non-coastal European settlement in the Pacific. The Dutch were the first non-natives to undisputedly explore and chart coastlines of Australia, Tasmania, New Zealand, Tonga, Fiji, Samoa, and Easter Island. The Verenigde Oostindische Compagnie (or VOC) was a major force behind Dutch exploration and discovery (c. 1590s–1720s) and Netherlandish cartography (c. 1570s–1670s). In the 17th century, the VOC's navigators and explorers charted almost three-quarters of the Australian coastline, except the east coast. Abel Tasman was the first known European explorer to reach the islands of Van Diemen's Land (now Tasmania) and New Zealand, and to sight the Fiji islands. His navigator François Visscher, and his merchant Isaack Gilsemans, mapped substantial portions of Australia, New Zealand, Tonga and the Fijian islands. On 24 November 1642 Abel Tasman sighted the west coast of Tasmania, north of Macquarie Harbour.
He named his discovery Van Diemen's Land after Antonio van Diemen, Governor-General of the Dutch East Indies. He then claimed formal possession of the land on 3 December 1642. After some exploration, Tasman had intended to proceed in a northerly direction but as the wind was unfavourable he steered east. On 13 December they sighted land on the north-west coast of the South Island, New Zealand, becoming the first Europeans to do so. Tasman named it "Staten Landt" on the assumption that it was connected to an island (Staten Island, Argentina) at the south of the tip of South America. Proceeding north and then east, he stopped to gather water, but one of his boats was attacked by Māori in a double-hulled waka (canoe), and four of his men were attacked and killed with mere. As Tasman sailed out of the bay he was again attacked, this time by 11 waka. The waka approached the "Zeehaen", which fired and hit one Māori, who fell down. Canister shot hit the side of a waka. Archeological research has shown the Dutch had tried to land at a major agricultural area, which the Māori may have been trying to protect. Tasman named the bay "Murderers' Bay" (now known as Golden Bay) and sailed north, but mistook Cook Strait for a bight (naming it "Zeehaen's Bight"). Two names he gave to New Zealand landmarks still endure, Cape Maria van Diemen and Three Kings Islands, but "Kaap Pieter Boreels" was renamed by Cook 125 years later to Cape Egmont. En route back to Batavia, Tasman came across the Tongan archipelago on 20 January 1643. While passing the Fiji Islands Tasman's ships came close to being wrecked on the dangerous reefs of the north-eastern part of the Fiji group. He charted the eastern tip of Vanua Levu and Cikobia before making his way back into the open sea. He eventually turned north-west to New Guinea, and arrived at Batavia on 15 June 1643. For over a century after Tasman's voyages, until the era of James Cook, Tasmania and New Zealand were not visited by Europeans—mainland Australia was visited, but usually only by accident. In 1766 the Royal Society engaged James Cook to travel to the Pacific Ocean to observe and record the transit of Venus across the Sun. The expedition sailed from England on 26 August 1768, rounded Cape Horn and continued westward across the Pacific to arrive at Tahiti on 13 April 1769, where the observations of the Venus Transit were made. Once the observations were completed, Cook opened the sealed orders which were additional instructions from the Admiralty for the second part of his voyage: to search the south Pacific for signs of the postulated rich southern continent of "Terra Australis". With the help of a Tahitian named Tupaia, who had extensive knowledge of Pacific geography, Cook managed to reach New Zealand on 6 October 1769, leading only the second group of Europeans to do so (after Abel Tasman over a century earlier, in 1642). Cook mapped the complete New Zealand coastline, making only some minor errors (such as calling Banks Peninsula an island, and thinking Stewart Island/Rakiura was a peninsula of the South Island). He also identified Cook Strait, which separates the North Island from the South Island, and which Tasman had not seen. Cook then voyaged west, reaching the south-eastern coast of Australia on 19 April 1770, and in doing so his expedition became the first recorded Europeans to have encountered its eastern coastline.
On 23 April he made his first recorded direct observation of indigenous Australians at Brush Island near Bawley Point, noting in his journal: ""…and were so near the Shore as to distinguish several people upon the Sea beach they appear'd to be of a very dark or black Colour but whether this was the real colour of their skins or the C[l]othes they might have on I know not"." On 29 April Cook and crew made their first landfall on the mainland of the continent at a place now known as the Kurnell Peninsula. It is here that James Cook made first contact with an Aboriginal tribe known as the Gweagal. After his departure from Botany Bay he continued northwards. After a grounding mishap on the Great Barrier Reef, the voyage continued, sailing through Torres Strait before returning to England via Batavia, the Cape of Good Hope, and Saint Helena. In 1772 the Royal Society commissioned Cook to search for the hypothetical Terra Australis again. On his first voyage, Cook had demonstrated by circumnavigating New Zealand that it was not attached to a larger landmass to the south. Although he charted almost the entire eastern coastline of Australia, showing it to be continental in size, the Terra Australis was believed by the Royal Society to lie further south. Cook commanded HMS "Resolution" on this voyage, while Tobias Furneaux commanded its companion ship, HMS "Adventure". Cook's expedition circumnavigated the globe at an extreme southern latitude, becoming one of the first to cross the Antarctic Circle (17 January 1773). In the Antarctic fog, "Resolution" and "Adventure" became separated. Furneaux made his way to New Zealand, where he lost some of his men during an encounter with Māori, and eventually sailed back to Britain, while Cook continued to explore the Antarctic, reaching 71°10'S on 31 January 1774. Cook almost encountered the mainland of Antarctica, but turned towards Tahiti to resupply his ship. He then resumed his southward course in a second fruitless attempt to find the supposed continent. On this leg of the voyage he brought a young Tahitian named Omai, who proved to be somewhat less knowledgeable about the Pacific than Tupaia had been on the first voyage. On his return voyage to New Zealand in 1774, Cook landed at the Friendly Islands, Easter Island, Norfolk Island, New Caledonia, and Vanuatu. Before returning to England, Cook made a final sweep across the South Atlantic from Cape Horn. He then turned north to South Africa, and from there continued back to England. His reports upon his return home put to rest the popular myth of Terra Australis. On his last voyage, Cook again commanded HMS "Resolution", while Captain Charles Clerke commanded HMS "Discovery". The voyage was ostensibly planned to return the Pacific Islander, Omai, to Tahiti, or so the public were led to believe. The trip's principal goal was to locate a North-West Passage around the American continent. After dropping Omai at Tahiti, Cook travelled north and in 1778 became the first European to visit the Hawaiian Islands. After his initial landfall in January 1778 at Waimea harbour, Kauai, Cook named the archipelago the "Sandwich Islands" after the fourth Earl of Sandwich—the acting First Lord of the Admiralty. From the Sandwich Islands Cook sailed north and then north-east to explore the west coast of North America north of the Spanish settlements in Alta California. Cook explored and mapped the coast all the way to the Bering Strait, on the way identifying what came to be known as Cook Inlet in Alaska.
In a single visit, Cook charted the majority of the North American north-west coastline on world maps for the first time, determined the extent of Alaska, and closed the gaps in Russian (from the West) and Spanish (from the South) exploratory probes of the Northern limits of the Pacific. Cook returned to Hawaii in 1779. After sailing around the archipelago for some eight weeks, he made landfall at Kealakekua Bay on Hawaiʻi Island, the largest island in the Hawaiian Archipelago. Cook's arrival coincided with the "Makahiki", a Hawaiian harvest festival of worship for the Polynesian god Lono. Coincidentally the form of Cook's ship, HMS "Resolution", or more particularly the mast formation, sails and rigging, resembled certain significant artefacts that formed part of the season of worship. Similarly, Cook's clockwise route around the island of Hawaii before making landfall resembled the processions that took place in a clockwise direction around the island during the Lono festivals. It has been argued (most extensively by Marshall Sahlins) that such coincidences were the reasons for Cook's (and to a limited extent, his crew's) initial deification by some Hawaiians who treated Cook as an incarnation of Lono. Though this view was first suggested by members of Cook's expedition, the idea that any Hawaiians understood Cook to be Lono, and the evidence presented in support of it, were challenged in 1992. After a month's stay, Cook resumed his exploration of the Northern Pacific. Shortly after leaving Hawaii Island, however, the "Resolution" foremast broke, so the ships returned to Kealakekua Bay for repairs. Tensions rose, and a number of quarrels broke out between the Europeans and Hawaiians. On 14 February 1779, at Kealakekua Bay, some Hawaiians took one of Cook's small boats. As thefts were quite common in Tahiti and the other islands, Cook's usual response would have been to take hostages until the stolen articles were returned. He attempted to take as hostage the King of Hawaiʻi, Kalaniʻōpuʻu. The Hawaiians prevented this, and Cook's men had to retreat to the beach. As Cook turned his back to help launch the boats, he was struck on the head by the villagers and then stabbed to death as he fell on his face in the surf. Hawaiian tradition says that he was killed by a chief named Kalaimanokahoʻowaha or Kanaʻina. The Hawaiians dragged his body away. Four of Cook's men were also killed and two others were wounded in the confrontation. The esteem which the islanders nevertheless held for Cook caused them to retain his body. Following their practice of the time, they prepared his body with funerary rituals usually reserved for the chiefs and highest elders of the society. The body was disembowelled, baked to facilitate removal of the flesh, and the bones were carefully cleaned for preservation as religious icons in a fashion somewhat reminiscent of the treatment of European saints in the Middle Ages. Some of Cook's remains, thus preserved, were eventually returned to his crew for a formal burial at sea. Clerke assumed leadership of the expedition. Following the death of Clerke, "Resolution" and "Discovery" returned home in October 1780 commanded by John Gore, a veteran of Cook's first voyage, and Captain James King. After their arrival in England, King completed Cook's account of the voyage. In 1789 the Mutiny on the Bounty against William Bligh led to several of the mutineers escaping the Royal Navy and settling on the Pitcairn Islands, which later became a British colony.
Britain also established colonies in Australia in 1788, New Zealand in 1840 and Fiji in 1874, with much of Oceania becoming part of the British Empire. The Gilbert Islands (now known as Kiribati) and the Ellice Islands (now known as Tuvalu) came under Britain's sphere of influence in the late 19th century. The Ellice Islands were administered as a British protectorate by a Resident Commissioner from 1892 to 1916 as part of the British Western Pacific Territories (BWPT), and later as part of the Gilbert and Ellice Islands colony from 1916 to 1974. Among the last islands in Oceania to be colonised was Niue (1900). In 1887, King Fata-a-iki, who reigned over Niue from 1887 to 1896, offered to cede sovereignty to the British Empire, fearing the consequences of annexation by a less benevolent colonial power. The offer was not accepted until 1900. Niue was a British protectorate, but the UK's direct involvement ended in 1901 when New Zealand annexed the island. French Catholic missionaries arrived on Tahiti in 1834; their expulsion in 1836 caused France to send a gunboat in 1838. In 1842, Tahiti and Tahuata were declared a French protectorate, to allow Catholic missionaries to work undisturbed. The capital of Papeete was founded in 1843. In 1880, France annexed Tahiti, changing the status from that of a protectorate to that of a colony. On 24 September 1853, under orders from Napoleon III, Admiral Febvrier Despointes took formal possession of New Caledonia and Port-de-France (Nouméa) was founded 25 June 1854. A few dozen free settlers settled on the west coast in the following years. New Caledonia became a penal colony, and from the 1860s until the end of the transportations in 1897, about 22,000 criminals and political prisoners were sent to New Caledonia, among them many Communards, including Henri de Rochefort and Louise Michel. Between 1873 and 1876, 4,200 political prisoners were "relegated" to New Caledonia. Only forty of them settled in the colony; the rest returned to France after being granted amnesty in 1879 and 1880. In the 1880s, France claimed the Tuamotu Archipelago, which formerly belonged to the Pōmare Dynasty, without formally annexing it. Having declared a protectorate over Tahuata in 1842, the French regarded the entire Marquesas Islands as French. In 1885, France appointed a governor and established a general council, thus giving it the proper administration for a colony. The islands of Rimatara and Rūrutu unsuccessfully lobbied for British protection in 1888, so in 1889 they were annexed by France. Postage stamps were first issued in the colony in 1892. The first official name for the colony was "Établissements de l'Océanie" (Settlements in Oceania); in 1903 the general council was changed to an advisory council and the colony's name was changed to "Établissements Français de l'Océanie" (French Settlements in Oceania). The Spanish explorer Alonso de Salazar landed in the Marshall Islands in 1529. They were later named by Krusenstern, after English explorer John Marshall, who visited them together with Thomas Gilbert in 1788, en route from Botany Bay to Canton (two ships of the First Fleet). The Marshall Islands were claimed by Spain in 1874. In November 1770, Felipe González de Ahedo commanded an expedition from the Viceroyalty of Peru that searched for Davis Land and Madre de Dios Island and looked for foreign naval activities. This expedition landed on "Isla de San Carlos" (Easter Island) and signed a treaty of annexation with the Rapa Nui chiefs.
In 1606 Luís Vaz de Torres explored the southern coast of New Guinea from Milne Bay to the Gulf of Papua including Orangerie Bay, which he named "Bahía de San Lorenzo". His expedition also discovered Basilaki Island, naming it "Tierra de San Buenaventura", which he claimed for Spain in July 1606. On 18 October his expedition reached the western part of the island in present-day Indonesia, and also claimed the territory for the King of Spain. A subsequent European claim occurred in 1828, when the Netherlands formally claimed the western half of the island as Netherlands New Guinea. In 1883, following a short-lived French annexation of New Ireland, the British colony of Queensland annexed south-eastern New Guinea. However, the Queensland government's superiors in the United Kingdom revoked the claim, and (formally) assumed direct responsibility in 1884, when Germany claimed north-eastern New Guinea as the protectorate of German New Guinea (also called Kaiser-Wilhelmsland). The first Dutch government posts were established in 1898 and in 1902: Manokwari on the north coast, Fak-Fak in the west and Merauke in the south at the border with British New Guinea. The German, Dutch and British colonial administrators each attempted to suppress the still-widespread practices of inter-village warfare and headhunting within their respective territories. In 1905 the British government transferred some administrative responsibility over south-east New Guinea to Australia (which renamed the area "Territory of Papua"); and in 1906, transferred all remaining responsibility to Australia. During World War I, Australian forces seized German New Guinea, which in 1920 became the Territory of New Guinea, to be administered by Australia under a League of Nations mandate. The territories under Australian administration became collectively known as the Territories of Papua and New Guinea (until February 1942). Germany established colonies in New Guinea in 1884, and Samoa in 1900. Following papal mediation and German compensation of $4.5 million, Spain recognized a German claim in 1885. Germany established a protectorate and set up trading stations on the islands of Jaluit and Ebon to carry out the flourishing copra (dried coconut meat) trade. Marshallese Iroij (high chiefs) continued to rule under indirect colonial German administration. The United States also expanded into the Pacific, beginning with Baker Island and Howland Island in 1857, and with Hawaii becoming a U.S. territory in 1898. Disagreements between the US, Germany and UK over Samoa led to the Tripartite Convention of 1899. Samoa aligned its interests with the United States in a Deed of Cession, signed by the "Tui Manúʻa" (supreme chief of Manúʻa) on 16 July 1904 at the Crown residence of the Tuimanuʻa called the "Faleula" in the place called Lalopua (from Official documents of the Tuimanuʻa government, 1893; Office of the Governor, 2004). Cession followed the Tripartite Convention of 1899 that partitioned the eastern islands of Samoa (including Tutuila and the Manúʻa Group) from the western islands of Samoa (including ʻUpolu and Savaiʻi). At the beginning of World War I, Japan assumed control of the Marshall Islands. The Japanese headquarters was established at the German center of administration, Jaluit. On 31 January 1944, during World War II, American forces landed on Kwajalein atoll and U.S. Marines and Army troops later took control of the islands from the Japanese on 3 February, following intense fighting on Kwajalein and Enewetak atolls.
In 1947, the United States, as the occupying power, entered into an agreement with the UN Security Council to administer much of Micronesia, including the Marshall Islands, as the Trust Territory of the Pacific Islands. During World War II, Japan occupied many Oceanic territories by wresting control from Western powers. The Samoan Crisis was a standoff between the United States, Imperial Germany and Great Britain from 1887 to 1889 over control of the Samoan Islands during the Samoan Civil War. The prime minister of the Kingdom of Hawaii, Walter M. Gibson, had long aimed to establish an empire in the Pacific. In 1887 his government sent the "homemade battleship" Kaimiloa to Samoa looking for an alliance against colonial powers. The venture ended in suspicion from the German Navy and embarrassment over the conduct of the crew. The 1889 incident involved three American warships and three German warships, SMS "Adler", SMS "Olga", and SMS "Eber", keeping each other at bay over several months in Apia harbor, which was monitored by the British warship HMS "Calliope". The standoff ended on 15 and 16 March when a cyclone wrecked all six warships in the harbor. "Calliope" was able to escape the harbor and survived the storm. Robert Louis Stevenson witnessed the storm and its aftermath at Apia and later wrote about what he saw. The Samoan Civil War continued, involving Germany, United States and Britain, eventually resulting, via the Tripartite Convention of 1899, in the partition of the Samoan Islands into American Samoa and German Samoa. The Asian and Pacific Theatre of World War I was a conquest of German colonial possessions in the Pacific Ocean and China. The most significant military action was the Siege of Tsingtao in what is now China, but smaller actions were also fought at the Battle of Bita Paka and the Siege of Toma in German New Guinea. All other German and Austrian possessions in Asia and the Pacific fell without bloodshed. Naval warfare was common; all of the colonial powers had naval squadrons stationed in the Indian or Pacific Oceans. These fleets operated by supporting the invasions of German-held territories and by destroying the East Asia Squadron. One of the first land offensives in the Pacific theatre was the Occupation of German Samoa in August 1914 by New Zealand forces. The campaign to take Samoa ended without bloodshed after over 1,000 New Zealanders landed on the German colony, supported by an Australian and French naval squadron. Australian forces attacked German New Guinea in September 1914: 500 Australians encountered 300 Germans and native policemen at the Battle of Bita Paka; the Allies won the day and the Germans retreated to Toma. A company of Australians and a British warship besieged the Germans and their colonial subjects, ending with a German surrender. After the fall of Toma, only minor German forces were left in New Guinea and these generally capitulated once met by Australian forces. In December 1914, one German officer near Angorum attempted to resist the occupation with thirty native police, but his force deserted him after they fired on an Australian scouting party, and he was subsequently captured. German Micronesia, the Marianas, the Carolines and the Marshall Islands also fell to Allied forces during the war. The Pacific front saw major action during the Second World War, mainly between the belligerents Japan and the United States.
The attack on Pearl Harbor was a surprise military strike conducted by the Imperial Japanese Navy against the United States naval base at Pearl Harbor, Hawaii, on the morning of 7 December 1941 (8 December in Japan). The attack led to the United States' entry into World War II. The attack was intended as a preventive action in order to keep the U.S. Pacific Fleet from interfering with military actions the Empire of Japan was planning in South-East Asia against overseas territories of the United Kingdom, the Netherlands, and the United States. There were simultaneous Japanese attacks on the U.S.-held Philippines and on the British Empire in Malaya, Singapore, and Hong Kong. The Japanese subsequently invaded New Guinea, the Solomon Islands and other Pacific islands. The Japanese were turned back at the Battle of the Coral Sea and the Kokoda Track campaign before they were finally defeated in 1945. Some of the most prominent Oceanic battlegrounds were the Solomon Islands campaign, the air raids on Darwin, the Kokoda Track, and the Borneo campaign. In 1940 the administration of French Polynesia recognised the Free French Forces and many Polynesians served in World War II. Unknown at the time to the French and Polynesians, the Konoe Cabinet in Imperial Japan on 16 September 1940 included French Polynesia among the many territories which were to become Japanese possessions in the post-war world—though in the course of the war in the Pacific the Japanese were not able to launch an actual invasion of the French islands. Some of the most intense fighting of the Second World War occurred in the Solomons. The most significant of the Allied Forces' operations against the Japanese Imperial Forces was launched on 7 August 1942, with simultaneous naval bombardments and amphibious landings on the Florida Islands at Tulagi and Red Beach on Guadalcanal. The Guadalcanal Campaign became an important and bloody campaign fought in the Pacific War as the Allies began to repulse Japanese expansion. Of strategic importance during the war were the coastwatchers operating in remote locations, often on Japanese-held islands, providing early warning and intelligence of Japanese naval, army and aircraft movements during the campaign. "The Slot" was a name for New Georgia Sound, when it was used by the Tokyo Express to supply the Japanese garrison on Guadalcanal. Of more than 36,000 Japanese on Guadalcanal, about 26,000 were killed or missing, 9,000 died of disease, and 1,000 were captured. The Kokoda Track campaign was a campaign consisting of a series of battles fought between July and November 1942 between Japanese and Allied—primarily Australian—forces in what was then the Australian territory of Papua. Following a landing near Gona, on the north coast of New Guinea, Japanese forces attempted to advance south overland through the mountains of the Owen Stanley Range to seize Port Moresby as part of a strategy of isolating Australia from the United States. Initially only limited Australian forces were available to oppose them, and after making rapid progress the Japanese South Seas Force clashed with under-strength Australian forces at Awala, forcing them back to Kokoda. A number of Japanese attacks were subsequently fought off by the Australian Militia, yet the Australians continued to withdraw over the Owen Stanley Range, down the Kokoda Track. In sight of Port Moresby itself, the Japanese began to run out of momentum against the Australians, who began to receive further reinforcements.
Having outrun their supply lines and suffered reverses at Guadalcanal, the Japanese were now on the defensive, marking the limit of their advance southwards. The Japanese subsequently withdrew to establish a defensive position on the north coast, but they were followed by the Australians who recaptured Kokoda on 2 November. Further fighting continued into November and December as the Australian and United States forces assaulted the Japanese beachheads, in what later became known as the Battle of Buna–Gona. Due to its low population, Oceania was a popular location for atmospheric and underground nuclear tests. Tests were conducted in various locations by the United Kingdom (Operation Grapple and Operation Antler), the United States (Bikini atoll and the Marshall Islands) and France (Moruroa), often with devastating consequences for the inhabitants. From 1946 to 1958, the Marshall Islands served as the Pacific Proving Grounds for the United States, and was the site of 67 nuclear tests on various atolls. The world's first hydrogen bomb, codenamed "Mike", was tested at the Enewetak atoll in the Marshall Islands on 1 November (local date) in 1952, by the United States. In 1954, fallout from the American Castle Bravo hydrogen bomb test in the Marshall Islands was such that the inhabitants of the Rongelap Atoll were forced to abandon their island. Three years later the islanders were allowed to return, but suffered abnormally high levels of cancer. They were evacuated again in 1985 and in 1996 given $45 million in compensation. A series of British tests were also conducted in the 1950s at Maralinga in South Australia, forcing the removal of the Pitjantjatjara and Yankunytjatjara peoples from their ancestral homelands. In 1962, France's early nuclear testing ground of Algeria became independent and the atoll of Moruroa in the Tuamotu Archipelago was selected as the new testing site. Moruroa atoll became notorious as a site of French nuclear testing, primarily because tests were carried out there after most Pacific testing had ceased. These tests were opposed by most other nations in Oceania. The last atmospheric test was conducted in 1974, and the last underground test in 1996. French nuclear testing in the Pacific was controversial in the 1980s; in 1985 French agents sank the "Rainbow Warrior" in Auckland to prevent it from arriving at the test site at Moruroa. In September 1995, France stirred up widespread protests by resuming nuclear testing at Fangataufa atoll after a three-year moratorium. The last test was on 27 January 1996. On 29 January 1996, France announced that it would accede to the Comprehensive Test Ban Treaty, and no longer test nuclear weapons. Fiji has suffered several coups d'état: military in 1987 and 2006 and civilian in 2000. All were ultimately due to ethnic tension between indigenous Fijians and Indo-Fijians, who originally came to the islands as indentured labour in the late nineteenth and early twentieth centuries. The 1987 coup followed the election of a multi-ethnic coalition, which Lieutenant Colonel Sitiveni Rabuka overthrew, claiming racial discrimination against ethnic Fijians. The coup was denounced by the United Nations and Fiji was expelled from the Commonwealth of Nations. The 2000 coup was essentially a repeat of the 1987 affair, although it was led by civilian George Speight, apparently with military support.
Commodore Frank Bainimarama, who was opposed to Speight, then took over and appointed a new Prime Minister. Speight was later tried and convicted for treason. Many indigenous Fijians were unhappy at the treatment of Speight and his supporters, feeling that the coup had been legitimate. In 2006 the Fijian parliament attempted to introduce a series of bills which would have, amongst other things, pardoned those involved in the 2000 coup. Bainimarama, concerned that the legal and racial injustices of the previous coups would be perpetuated, staged his own coup. It was internationally condemned, and Fiji was again suspended from the Commonwealth. In 2006 the then Australian Defence Minister, Brendan Nelson, warned Fijian officials of an Australian naval fleet in the vicinity of Fiji that would respond to any attacks against Australian citizens. The Australian government estimated that anywhere between 15,000 and 20,000 people could have died in the Bougainville Civil War. More conservative estimates put the number of combat deaths at 1,000–2,000. From 1975, there were attempts by the Bougainville Province to secede from Papua New Guinea. These were resisted by Papua New Guinea primarily because of the presence in Bougainville of the Panguna mine, which was vital to Papua New Guinea's economy. The Bougainville Revolutionary Army began attacking the mine in 1988, forcing its closure the following year. Further BRA activity led to the declaration of a state of emergency and the conflict continued until about 2005, when secessionist leader and self-proclaimed King of Bougainville Francis Ona died of malaria. Peacekeeping troops led by Australia have been in the region since the late 1990s, and a referendum on independence will be held in the 2010s. In 1946, French Polynesians were granted French citizenship and the islands' status was changed to an overseas territory; the islands' name was changed in 1957 to Polynésie Française (French Polynesia). Australia and New Zealand became dominions in the 20th century, adopting the Statute of Westminster in 1942 and 1947 respectively, marking their legislative independence from the United Kingdom. Hawaii became a U.S. state in 1959. Samoa became the first Pacific nation to gain independence in 1962, Fiji and Tonga became independent in 1970, with many other nations following in the 1970s and 1980s. The South Pacific Forum was founded in 1971, which became the Pacific Islands Forum in 2000. Bougainville Island, geographically part of the Solomon Islands archipelago but politically part of Papua New Guinea, tried unsuccessfully to become independent in 1975, and a civil war followed in the early 1990s, with it later being granted autonomy. On 1 May 1979, in recognition of the evolving political status of the Marshall Islands, the United States recognized the constitution of the Marshall Islands and the establishment of the Government of the Republic of the Marshall Islands. The constitution incorporates both American and British constitutional concepts. In 1977, French Polynesia was granted partial internal autonomy; in 1984, the autonomy was extended. French Polynesia became a full overseas collectivity of France in 2004. Between 2001 and 2007 Australia's Pacific Solution policy transferred asylum seekers to several Pacific nations, including the Nauru detention centre. Australia, New Zealand and other nations took part in the Regional Assistance Mission to Solomon Islands from 2003 after a request for aid.
https://en.wikipedia.org/wiki?curid=14104
Hanseatic League The Hanseatic League was a commercial and defensive confederation of merchant guilds and market towns in Northwestern and Central Europe. Growing from a few North German towns in the late 1100s, the league came to dominate Baltic maritime trade for three centuries along the coasts of Northern Europe. Hansa territories stretched from the Baltic to the North Sea and inland during the Late Middle Ages, and diminished slowly after 1450. "Hansa", later spelled "Hanse", was the Old High German word for a convoy, and this word was applied to bands of merchants traveling between the Hanseatic cities — whether by land or by sea. Merchant circles established the league to protect the guilds' economic interests and diplomatic privileges in their affiliated cities and countries, as well as along the trade routes which the merchants used. The Hanseatic cities had their own legal system and operated their own armies for mutual protection and aid. Despite this, the organization was not a state, nor could it be called a confederation of city-states; only a very small number of the cities within the league enjoyed autonomy and liberties comparable to those of a free imperial city. Exploratory trading adventures, raids, and piracy occurred early throughout the Baltic region; the sailors of Gotland sailed up rivers as far away as Novgorod. Scandinavians led international trade in the Baltic area before the Hanseatic League, establishing major trading hubs at Birka, Haithabu, and Schleswig by the 9th century CE. The later Hanseatic ports between Mecklenburg and Königsberg (present-day Kaliningrad) originally formed part of the Scandinavian-led Baltic trade-system. Historians generally trace the origins of the Hanseatic League to the rebuilding of the north German town of Lübeck in 1159 by the powerful Henry the Lion, Duke of Saxony and Bavaria, after he had captured the area from Adolf II, Count of Schauenburg and Holstein. More recent scholarship has de-emphasized the focus on Lübeck, viewing it as one of several regional trading centers. German cities achieved domination of trade in the Baltic with striking speed during the 13th century, and Lübeck became a central node in the seaborne trade that linked the areas around the North and Baltic seas. The hegemony of Lübeck peaked during the 15th century. Lübeck became a base for merchants from Saxony and Westphalia trading eastward and northward. Well before the term "Hanse" appeared in a document in 1267, merchants in different cities began to form guilds, or "Hansa", with the intention of trading with towns overseas, especially in the economically less-developed eastern Baltic. This area could supply timber, wax, amber, resins, and furs, along with rye and wheat brought down on barges from the hinterland to port markets. The towns raised their own armies, with each guild required to provide levies when needed. The Hanseatic cities came to the aid of one another, and commercial ships often had to be used to carry soldiers and their arms. Visby (on the island of Gotland) functioned as the leading centre in the Baltic before the Hansa. Sailing east, Visby merchants established a trading post at Novgorod called "Gutagard" (also known as "Gotenhof") in 1080. Merchants from northern Germany also stayed there in the early period of the Gotlander settlement. Later, in the first half of the 13th century, they established their own trading station further up-river in Novgorod.
In 1229 German merchants at Novgorod were granted certain privileges that made their positions more secure. Hansa societies worked to remove restrictions on trade for their members. The earliest extant documentary mention, although without a name, of a specific German commercial federation dates from 1157 in London. That year, the merchants of the Hansa in Cologne convinced King Henry II of England to exempt them from all tolls in London and allow them to trade at fairs throughout England. The "Queen of the Hansa", Lübeck, where traders were required to trans-ship goods between the North Sea and the Baltic, gained imperial privileges to become a free imperial city in 1226, as had Hamburg in 1189. In 1241 Lübeck, which had access to the Baltic and North seas' fishing grounds, formed an alliance—a precursor to the League—with Hamburg, another trading city, which controlled access to salt-trade routes from Lüneburg. The allied cities gained control over most of the salt-fish trade, especially the Scania Market; Cologne joined them in the Diet of 1260. In 1266 King Henry III of England granted the Lübeck and Hamburg Hansa a charter for operations in England, and the Cologne Hansa joined them in 1282 to form the most powerful Hanseatic colony in London. Much of the drive for this co-operation came from the fragmented nature of existing territorial governments, which failed to provide security for trade. Over the next 50 years, the Hansa solidified with formal agreements for confederation and co-operation covering the west and east trade routes. The principal city and linchpin remained Lübeck; with the first general diet of the Hansa held there in 1356, the Hanseatic League acquired an official structure. Lübeck's location on the Baltic provided access for trade with Scandinavia and Kievan Rus' (with its sea-trade center, Veliky Novgorod), putting it in direct competition with the Scandinavians who had previously controlled most of the Baltic trade-routes. A treaty with the Visby Hansa put an end to this competition: through this treaty the Lübeck merchants gained access to the inland Russian port of Novgorod, where they built a trading post or "Kontor" (literally: "office"). Although such alliances formed throughout the Holy Roman Empire, the league never became a closely managed formal organisation. Assemblies of the Hanseatic towns met irregularly in Lübeck for a "Hansetag" (Hanseatic day) from 1356 onwards, but many towns chose not to attend nor to send representatives, and decisions were not binding on individual cities. Over the period, a network of alliances grew to include a flexible roster of 70 to 170 cities. The league succeeded in establishing additional "Kontors" in Bruges (Flanders), Bergen (Norway), and London (England). These trading posts became significant enclaves. The London "Kontor", established in 1320, stood west of London Bridge near Upper Thames Street, on the site now occupied by Cannon Street station. It grew into a significant walled community with its own warehouses, weighhouse, church, offices and houses, reflecting the importance and scale of trading activity on the premises. The first reference to it as the Steelyard ("der Stahlhof") occurs in 1422. Starting with trade in coarse woollen fabrics, the Hanseatic League had the effect of bringing both commerce and industry to northern Germany. As trade increased, newer and finer woollen and linen fabrics, and even silks, were manufactured in northern Germany. 
The same refinement of products out of cottage industry occurred in other fields, e.g. etching, wood carving, armour production, engraving of metals, and wood-turning. The century-long monopolization of sea navigation and trade by the Hanseatic League ensured that the Renaissance arrived in northern Germany long before it did in the rest of Europe. In addition to the major "Kontors", individual Hanseatic ports had a representative merchant and warehouse. In England this happened in Boston, Bristol, Bishop's Lynn (now King's Lynn, which features the sole remaining Hanseatic warehouse in England), Hull, Ipswich, Norwich, Yarmouth (now Great Yarmouth), and York. The league primarily traded timber, furs, resin (or tar), flax, honey, wheat, and rye from the east to Flanders and England, with cloth (and, increasingly, manufactured goods) going in the other direction. Metal ore (principally copper and iron) and herring came southwards from Sweden. German colonists in the 12th and 13th centuries settled in numerous cities on and near the east Baltic coast, such as Elbing (Elbląg), Thorn (Toruń), Reval (Tallinn), Riga, and Dorpat (Tartu), which became members of the Hanseatic League, and some of which still retain many Hansa buildings and bear the style of their Hanseatic days. Most were granted Lübeck law ("Lübisches Recht"), after the league's most prominent town. The law provided that they had to appeal in all legal matters to Lübeck's city council. The Livonian Confederation, formed in 1435, incorporated modern-day Estonia and parts of Latvia and had its own Hanseatic parliament (diet); all of its major towns became members of the Hanseatic League. The dominant language of trade was Middle Low German, a dialect that had a significant impact on the languages of the countries involved in the trade, particularly the larger Scandinavian languages, Estonian, and Latvian. The league had a fluid structure, but its members shared some characteristics; most of the Hansa cities either started as independent cities or gained independence through the collective bargaining power of the league, though such independence remained limited. The Hanseatic free cities owed allegiance directly to the Holy Roman Emperor, without any intermediate tie of obligation to the local nobility. Another similarity involved the cities' strategic locations along trade routes. At the height of their power in the late 14th century, the merchants of the Hanseatic League succeeded in using their economic power and, sometimes, their military might (trade routes required protection and the league's ships sailed well-armed) to influence imperial policy. The league also wielded power abroad. Between 1361 and 1370 it waged war against Denmark. Initially unsuccessful, the Hanseatic towns allied in 1368 in the Confederation of Cologne, sacked Copenhagen and Helsingborg, and forced Valdemar IV, King of Denmark, and his son-in-law Haakon VI, King of Norway, to grant the league 15% of the profits from Danish trade in the subsequent peace treaty of Stralsund in 1370, thus gaining an effective trade and economic monopoly in Scandinavia. This favourable treaty marked the height of Hanseatic power. After the Danish-Hanseatic War (1426–1435) and the Bombardment of Copenhagen (1428), the Treaty of Vordingborg renewed the commercial privileges in 1435. The Hansa also waged a vigorous campaign against pirates.
Between 1392 and 1440 the league's maritime trade faced danger from raids by the Victual Brothers and their descendants, privateers hired in 1392 by Albert of Mecklenburg, King of Sweden, against Margaret I, Queen of Denmark. In the Dutch–Hanseatic War (1438–1441), the merchants of Amsterdam sought and eventually won free access to the Baltic and broke the Hanseatic monopoly. As an essential part of protecting their investment in ships and their cargoes, the League trained pilots and erected lighthouses. Most foreign cities confined the Hanseatic traders to certain trading areas and to their own trading posts. They seldom interacted with the local inhabitants, except when doing business. Many locals, merchant and noble alike, envied the power of the League and tried to diminish it. For example, in London, the local merchants exerted continuing pressure for the revocation of privileges. The refusal of the Hansa to offer reciprocal arrangements to their English counterparts exacerbated the tension. King Edward IV of England reconfirmed the league's privileges in the Treaty of Utrecht (1474) despite the latent hostility, in part thanks to the significant financial contribution the League made to the Yorkist side during the Wars of the Roses of 1455–1487. In 1597 Queen Elizabeth of England expelled the League from London, and the Steelyard closed the following year. Tsar Ivan III of Russia closed the Hanseatic "Kontor" at Novgorod in 1494. The very existence of the League and its privileges and monopolies created economic and social tensions that often spilled over into rivalries between League members. The economic crises of the late 15th century did not spare the Hansa. Nevertheless, its eventual rivals emerged in the form of the territorial states, whether new or revived, and not just in the west: Ivan III, Grand Prince of Moscow, ended the entrepreneurial independence of the Hansa's Novgorod "Kontor" in 1478; it closed completely and finally in 1494. New vehicles of credit were imported from Italy, where double-entry book-keeping had been developed (and was codified in print by Luca Pacioli in 1494), and outpaced the Hansa economy, in which silver coins changed hands rather than bills of exchange. In the 15th century, tensions between the Prussian region and the "Wendish" cities (Lübeck and its eastern neighbours) increased. Lübeck was dependent on its role as centre of the Hansa, being on the shore of the sea without a major river. It was at the entrance of the land route to Hamburg, but this land route could be bypassed by sea travel around Denmark and through the Kattegat. Prussia's main interest, on the other hand, was the export of bulk products like grain and timber, which were very important for England, the Low Countries, and, later on, also for Spain and Italy. In 1454, the year of the marriage of Elisabeth of Austria to the Jagiellonian king, the towns of the Prussian Confederation rose up against the dominance of the Teutonic Order and asked Casimir IV, King of Poland, for help. Gdańsk (Danzig), Thorn and Elbing became part of the Kingdom of Poland (within the region referred to from 1466 to 1569 as Royal Prussia) by the Second Peace of Thorn (1466). Poland in turn was heavily supported by the Holy Roman Empire through family connections and by military assistance under the Habsburgs. Kraków, then the capital of Poland, had a loose association with the Hansa.
The lack of customs borders on the River Vistula after 1466 helped Polish grain exports, transported to the sea down the Vistula, to grow steadily from the late 15th century into the 17th century. The Hansa-dominated maritime grain trade made Poland one of the main areas of its activity, helping Danzig to become the Hansa's largest city. The member cities took responsibility for their own protection. In 1567, a Hanseatic League agreement reconfirmed previous obligations and rights of league members, such as common protection and defense against enemies. The Prussian Quartier cities of Thorn, Elbing and Königsberg, together with Riga and Dorpat, also signed. When pressed by the King of Poland–Lithuania, Danzig remained neutral and would not allow ships running for Poland into its territory. They had to anchor somewhere else, such as at Pautzke (Puck). A major economic advantage for the Hansa was its control of the shipbuilding market, mainly in Lübeck and in Danzig. The Hansa sold ships everywhere in Europe, including Italy. They drove out the Dutch, because Holland wanted to favour Bruges as a huge staple market at the end of a trade route. When the Dutch started to become competitors of the Hansa in shipbuilding, the Hansa tried to stop the flow of shipbuilding technology from Hanseatic towns to Holland. Danzig, a trading partner of Amsterdam, attempted to forestall the decision. Dutch ships sailed to Danzig to take grain from the city directly, to the dismay of Lübeck. Hollanders also circumvented the Hanseatic towns by trading directly with north German princes in non-Hanseatic towns. Dutch freight costs were much lower than those of the Hansa, and the Hansa were excluded as middlemen. When Bruges, Antwerp and Holland all became part of the Duchy of Burgundy they actively tried to take over the monopoly of trade from the Hansa, and the staple market was transferred from Bruges to Amsterdam. The Dutch merchants aggressively challenged the Hansa and met with much success. Hanseatic cities in Prussia and Livonia supported the Dutch against the core cities of the Hansa in northern Germany. After several naval wars between Burgundy and the Hanseatic fleets, Amsterdam gained the position of leading port for Polish and Baltic grain from the late 15th century onwards. The Dutch regarded Amsterdam's grain trade as the mother of all trades ("Moedernegotie"). Nuremberg in Franconia developed an overland route to sell formerly Hansa-monopolised products from Frankfurt via Nuremberg and Leipzig to Poland and Russia, trading Flemish cloth and French wine in exchange for grain and furs from the east. The Hansa profited from the Nuremberg trade by allowing Nurembergers to settle in Hanseatic towns, which the Franconians exploited by taking over trade with Sweden as well. The Nuremberger merchant Albrecht Moldenhauer was influential in developing the trade with Sweden and Norway, and his sons Wolf Moldenhauer and Burghard Moldenhauer established themselves in Bergen and Stockholm, becoming leaders of the local Hanseatic activities. At the start of the 16th century, the league found itself in a weaker position than it had known for many years. The rising Swedish Empire had taken control of much of the Baltic Sea. Denmark had regained control over its own trade, the "Kontor" in Novgorod had closed, and the "Kontor" in Bruges had become effectively moribund. The individual cities making up the league had also started to put self-interest before their common Hanseatic interests.
Finally, the political authority of the German princes had started to grow, constraining the independence of the merchants and Hanseatic towns. The league attempted to deal with some of these issues: it created the post of Syndic in 1556 and elected Heinrich Sudermann as a permanent official with legal training, who worked to protect and extend the diplomatic agreements of the member towns. In 1557 and 1579 revised agreements spelled out the duties of towns and some progress was made. The Bruges "Kontor" moved to Antwerp and the Hansa attempted to pioneer new routes. However, the league proved unable to prevent the growing mercantile competition, and so a long decline commenced. The Antwerp "Kontor" closed in 1593, followed by the London "Kontor" in 1598. The Bergen "Kontor" continued until 1754; of all the "Kontore", only its buildings, the "Bryggen", survive. The gigantic warship "Adler von Lübeck" was constructed for use against Sweden during the Northern Seven Years' War (1563–70) but never saw military action, epitomizing the vain attempts of Lübeck to uphold its long-privileged commercial position in a changing economic and political climate. By the late 17th century, the league had imploded and could no longer deal with its own internal struggles. The social and political changes that accompanied the Protestant Reformation included the rise of Dutch and English merchants and the pressure of the Ottoman Empire upon the Holy Roman Empire and its trade routes. Only nine members attended the last formal meeting in 1669 and only three (Lübeck, Hamburg and Bremen) remained as members until its formal demise in 1862, on the eve of the creation in 1871 of the German Empire under Kaiser Wilhelm I. Hence, only Lübeck, Hamburg, and Bremen retain the words "Hanseatic City" in their official German titles. The members of the Hanseatic League were Low German merchants, whose towns, with the exception of Dinant, were Low German-speaking towns in which these merchants held citizenship. Not all towns with Low German merchant communities were members of the league (e.g., Emden, Memel (today Klaipėda), Viborg (today Vyborg) and Narva never joined). However, Hanseatic merchants could also come from settlements without German town law; the premise for league membership was birth to German parents, subjection to German law, and a commercial education. The league served to advance and defend the common interests of its heterogeneous members: commercial ambitions such as enhancement of trade, and political ambitions such as ensuring maximum independence from the noble territorial rulers. Decisions and actions of the Hanseatic League were the consequence of a consensus-based procedure. If an issue arose, the league's members were invited to participate in a central meeting, the "Tagfahrt" ("meeting ride", sometimes also referred to as "Hansetag", from 1358 onwards). The member communities then chose envoys ("Ratssendeboten") to represent their local consensus on the issue at the "Tagfahrt". Not every community sent an envoy; delegates were often entitled to represent a set of communities. Consensus-building on local and "Tagfahrt" levels followed the Low Saxon tradition of "Einung", where consensus was defined as absence of protest: after a discussion, the proposals which gained sufficient support were dictated aloud to the scribe and passed as binding "Rezess" if the attendees did not object; those favouring alternative proposals unlikely to get sufficient support were obliged to remain silent during this procedure.
If consensus could not be established on a certain issue, it was found instead in the appointment of a number of league members who were then empowered to work out a compromise. The Hanseatic "Kontore", which operated like an early stock exchange, each had their own treasury, court and seal. Like the guilds, the "Kontore" were led by "Ältermänner" ("eldermen", or English aldermen). The Stahlhof "Kontor", as a special case, had a Hanseatic and an English "Ältermann". In 1347 the "Kontor" of Bruges modified its statute to ensure an equal representation of the league's members. To that end, member communities from different regions were pooled into three circles ("Drittel", "third [part]"): the Wendish and Saxon "Drittel", the Westphalian and Prussian "Drittel", and the Gothlandian, Livonian and Swedish "Drittel". The merchants from their respective "Drittel" would then each choose two "Ältermänner" and six members of the Eighteen Men's Council ("Achtzehnmännerrat") to administer the "Kontor" for a set period of time. In 1356, during a Hanseatic meeting in preparation of the first "Tagfahrt", the league confirmed this statute. The league in general gradually adopted and institutionalized the division into "Drittel". The "Tagfahrt" or "Hansetag" was the only central institution of the Hanseatic League. However, with the division into "Drittel", the members of the respective subdivisions frequently held "Dritteltage" ("Drittel" meetings) to work out common positions which could then be presented at a "Tagfahrt". On a more local level, league members also met, and while such regional meetings were never formalized into a Hanseatic institution, they gradually gained importance in the process of preparing and implementing "Tagfahrt" decisions. From 1554, the division into "Drittel" was modified to reduce the circles' heterogeneity, to enhance the collaboration of the members on a local level and thus to make the league's decision-making process more efficient. The number of circles rose to four, and they were accordingly called "Quartiere" (quarters). This division was however not adopted by the "Kontore", who, for their purposes (like "Ältermänner" elections), grouped the league members in different ways (e.g., the division adopted by the Stahlhof in London in 1554 grouped the league members into "Dritteln", whereby Lübeck merchants represented the Wendish, Pomeranian, Saxon and several Westphalian towns, Cologne merchants represented the Cleves, Mark, Berg and Dutch towns, while Danzig merchants represented the Prussian and Livonian towns). The "Kontore" were foreign trading posts of the League, not cities that were Hanseatic members. Despite its collapse, several cities still maintained the link to the Hanseatic League. Dutch cities including Groningen, Deventer, Kampen, Zutphen and Zwolle, and a number of German cities including Bremen, Demmin, Greifswald, Hamburg, Lübeck, Lüneburg, Rostock, Stade, Stralsund and Wismar still call themselves "Hanse" cities (the German cities' car license plates are prefixed with "H", e.g. "HB" for "Hansestadt Bremen"). Hamburg and Bremen continue to style themselves officially as "free Hanseatic cities", with Lübeck named "Hanseatic City" (Rostock's football team is named F.C. Hansa Rostock in memory of the city's trading past).
For Lübeck in particular, this anachronistic tie to a glorious past remained especially important in the 20th century. In 1937, the Nazi Party removed this privilege through the Greater Hamburg Act, possibly because the "Senat" of Lübeck had not permitted Adolf Hitler to speak in Lübeck during his 1932 election campaign. He gave the speech instead in Bad Schwartau, a small village on the outskirts of Lübeck; subsequently, he referred to Lübeck only as "the small city close to Bad Schwartau". After the EU enlargement to the East in May 2004, some experts wrote about a resurrection of the Baltic Hansa. The legacy of the Hansa is remembered today in several names: the German airline Lufthansa (i.e., "Air Hansa"); F.C. Hansa Rostock; Hanze University of Applied Sciences, Groningen, Netherlands; the Hanze oil production platform, Netherlands; the Hansa Brewery in Bergen; and the Hanse Sail in Rostock. DDG Hansa was a major German shipping company from 1881 until its bankruptcy in 1980, Hansabank in the Baltic states has been rebranded as Swedbank, and Hansa-Park is one of the biggest theme parks in Germany. There are two museums in Europe dedicated specifically to the history of the Hanseatic League: the European Hansemuseum in Lübeck and the Hanseatic Museum and Schøtstuene in Bergen. In 1980, former Hanseatic League members established a "new Hanse" in Zwolle. This league is open to all former Hanseatic League members and cities that share a Hanseatic heritage. In 2012 the new Hanse had 187 members. These include twelve Russian cities, most notably Novgorod, which was a major Russian trade partner of the Hansa in the Middle Ages. The "new Hanse" fosters and develops business links, tourism and cultural exchange. The headquarters of the New Hansa is in Lübeck, Germany. The current President of the Hanseatic League of New Time is Bernd Saxe, Mayor of Lübeck. Each year one of the member cities of the New Hansa hosts the Hanseatic Days of New Time international festival. In 2006 King's Lynn became the first English member of the new Hanseatic League. It was joined by Hull in 2012 and Boston in 2016. The "New Hanseatic League" was established in February 2018 by finance ministers from Denmark, Estonia, Finland, Ireland, Latvia, Lithuania, the Netherlands and Sweden through the signing of a two-page foundational document which set out the countries' "shared views and values in the discussion on the architecture of the EMU."
https://en.wikipedia.org/wiki?curid=14105
Horror fiction Horror is a genre of speculative fiction which is intended to frighten, scare, disgust, or startle its readers by inducing feelings of horror and terror. Literary historian J. A. Cuddon defined the horror story as "a piece of fiction in prose of variable length... which shocks, or even frightens the reader, or perhaps induces a feeling of repulsion or loathing". It creates an eerie and frightening atmosphere. Horror is frequently supernatural, though it can also be non-supernatural. Often the central menace of a work of horror fiction can be interpreted as a metaphor for the larger fears of a society. The horror genre has ancient origins, with roots in folklore and religious traditions focusing on death, the afterlife, evil, the demonic and the principle of the thing embodied in the person. These were manifested in stories of beings such as demons, witches, vampires, werewolves and ghosts. European horror fiction became established through works of the Ancient Greeks and Ancient Romans. The well-known 19th-century novel "Frankenstein" was greatly influenced by the story of Hippolytus, whom Asclepius revives from death. Euripides wrote plays based on the story, "Hippolytos Kalyptomenos" and "Hippolytus". In the account of Cimon in Plutarch's "The Lives of the Noble Grecians and Romans", the author describes the spirit of the murderer Damon, who was himself murdered in a bathhouse in Chaeronea. Pliny the Younger tells the tale of Athenodorus Cananites, who bought a haunted house in Athens. Athenodorus was cautious, since the house was suspiciously inexpensive. While writing a book on philosophy, he was visited by a ghostly figure bound in chains. The figure disappeared in the courtyard; the following day, the magistrates dug up the spot and found an unmarked grave. Werewolf stories were popular in medieval French literature. One of Marie de France's twelve lais is a werewolf story titled "Bisclavret". The Countess Yolande commissioned a werewolf story titled "Guillaume de Palerme". Anonymous writers penned two werewolf stories, "Biclarel" and "Melion". Much horror fiction derives from the cruellest figures of the 15th century. Dracula can be traced to the Prince of Wallachia Vlad III, whose alleged war crimes were published in German pamphlets. A 1499 pamphlet published by Markus Ayrer is most notable for its woodcut imagery. The alleged serial-killer sprees of Gilles de Rais have been seen as the inspiration for "Bluebeard". The motif of the vampiress is most notably derived from the real-life noblewoman and murderess Elizabeth Bathory, and helped usher in the emergence of horror fiction in the 18th century, such as through László Turóczi's 1729 book "Tragica Historia". The 18th century saw the gradual development of Romanticism and the Gothic horror genre. It drew on the written and material heritage of the Late Middle Ages, finding its form with Horace Walpole's seminal and controversial 1764 novel, "The Castle of Otranto". In fact, the first edition was published disguised as an actual medieval romance from Italy, discovered and republished by a fictitious translator. Once the book was revealed as a contemporary work, many found it anachronistic, reactionary, or simply in poor taste, but it proved immediately popular. "Otranto" inspired "Vathek" (1786) by William Beckford, "A Sicilian Romance" (1790), "The Mysteries of Udolpho" (1794) and "The Italian" (1796) by Ann Radcliffe, and "The Monk" (1796) by Matthew Lewis.
A significant amount of horror fiction of this era was written by women and marketed towards a female audience, a typical scenario of the novels being a resourceful female menaced in a gloomy castle. The Gothic tradition blossomed into the genre that modern readers today call horror literature in the 19th century. Influential works and characters that continue resonating in fiction and film today saw their genesis in the Brothers Grimm's "Hänsel und Gretel" (1812), Mary Shelley's "Frankenstein" (1818), John Polidori's "The Vampyre" (1819), Charles Maturin's "Melmoth the Wanderer" (1820), Washington Irving's "The Legend of Sleepy Hollow" (1820), Jane C. Loudon's "The Mummy!" (1827), Victor Hugo's "The Hunchback of Notre Dame" (1831), Thomas Peckett Prest's "Varney the Vampire" (1847), the works of Edgar Allan Poe, the works of Sheridan Le Fanu, Robert Louis Stevenson's "Strange Case of Dr Jekyll and Mr Hyde" (1886), Oscar Wilde's "The Picture of Dorian Gray" (1890), H. G. Wells' "The Invisible Man" (1897), and Bram Stoker's "Dracula" (1897). Each of these works created an enduring icon of horror seen in later re-imaginings on the page, stage and screen. A proliferation of cheap periodicals around the turn of the century led to a boom in horror writing. For example, Gaston Leroux serialized his "Le Fantôme de l'Opéra" before it became a novel in 1910. One writer who specialized in horror fiction for mainstream pulps, such as "All-Story Magazine," was Tod Robbins, whose fiction deals with themes of madness and cruelty. Later, specialist publications emerged to give horror writers an outlet, prominent among them "Weird Tales" and "Unknown Worlds". Influential horror writers of the early 20th century made inroads in these media. In particular, the venerated horror author H. P. Lovecraft and his enduring Cthulhu Mythos transformed and popularized the genre of cosmic horror, and M. R. James is credited with redefining the ghost story in that era. The serial murderer became a recurring theme. Yellow journalism and sensationalist coverage of various murderers, such as Jack the Ripper and, to a lesser extent, Carl Panzram, Fritz Haarman, and Albert Fish, perpetuated this phenomenon. The trend continued in the postwar era, partly renewed after the murders committed by Ed Gein. In 1959, Robert Bloch, inspired by those murders, wrote "Psycho". The crimes committed in 1969 by the Manson family influenced the slasher theme in horror fiction of the 1970s. In 1981, Thomas Harris wrote "Red Dragon", introducing Dr. Hannibal Lecter. In 1988, the sequel to that novel, "The Silence of the Lambs", was published. Early cinema was inspired by many aspects of horror literature, and started a strong tradition of horror films and subgenres that continues to this day. Up until the graphic depictions of violence and gore on the screen commonly associated with 1960s and 1970s slasher films and splatter films, comic books such as those published by EC Comics (most notably "Tales From The Crypt") in the 1950s satisfied readers' quests for horror imagery that the silver screen could not provide. This imagery made these comics controversial, and as a consequence, they were frequently censored. The modern zombie tale dealing with the motif of the living dead harks back to works including H. P. Lovecraft's stories "Cool Air" (1925), "In The Vault" (1926), and "The Outsider" (1926), and Dennis Wheatley's "Strange Conflict" (1941).
Richard Matheson's novel "I Am Legend" (1954) influenced an entire genre of apocalyptic zombie fiction emblematized by the films of George A. Romero. In the late 1960s and early 1970s, the enormous commercial success of three books - "Rosemary's Baby" (1967) by Ira Levin, "The Exorcist" by William Peter Blatty, and "The Other" by Thomas Tryon - encouraged publishers to begin releasing numerous other horror novels, thus creating a "horror boom". One of the best-known late-20th century horror writers is Stephen King, known for "Carrie", "The Shining", "It", "Misery" and several dozen other novels and about 200 short stories. Beginning in the 1970s, King's stories have attracted a large audience, and in 2003 he was honored by the U.S. National Book Foundation with its Medal for Distinguished Contribution to American Letters. Other popular horror authors of the period included Anne Rice, Brian Lumley, Graham Masterton, James Herbert, Dean Koontz and Clive Barker. Achievements in horror fiction are recognized by numerous awards. The Horror Writers Association presents the Bram Stoker Awards for Superior Achievement, named in honor of Bram Stoker, author of the seminal horror novel "Dracula". The Australian Horror Writers Association presents annual Australian Shadows Awards. The International Horror Guild Award was presented annually to works of horror and dark fantasy from 1995 to 2008. The Shirley Jackson Awards are literary awards for outstanding achievement in the literature of psychological suspense, horror, and the dark fantastic. Other important awards for horror literature are included as subcategories within general awards for fantasy and science fiction, such as the Aurealis Award. Some writers of fiction normally classified as "horror" tend to dislike the term, considering it too lurid. They instead use the terms dark fantasy or Gothic fantasy for supernatural horror, or "psychological thriller" for non-supernatural horror.
https://en.wikipedia.org/wiki?curid=14109
Holomorphic function In mathematics, a holomorphic function is a complex-valued function of one or more complex variables that is, at every point of its domain, complex differentiable in a neighborhood of the point. The existence of a complex derivative in a neighbourhood is a very strong condition, for it implies that any holomorphic function is actually infinitely differentiable and equal, locally, to its own Taylor series ("analytic"). Holomorphic functions are the central objects of study in complex analysis. Though the term "analytic function" is often used interchangeably with "holomorphic function", the word "analytic" is defined in a broader sense to denote any function (real, complex, or of more general type) that can be written as a convergent power series in a neighbourhood of each point in its domain. The fact that all holomorphic functions are complex analytic functions, and vice versa, is a major theorem in complex analysis. Holomorphic functions are also sometimes referred to as "regular functions". A holomorphic function whose domain is the whole complex plane is called an entire function. The phrase "holomorphic at a point "z"0" means not just differentiable at "z"0, but differentiable everywhere within some neighbourhood of "z"0 in the complex plane. Given a complex-valued function "f" of a single complex variable, the derivative of "f" at a point "z"0 in its domain is defined by the limit f′("z"0) = lim_("z" → "z"0) ["f"("z") − "f"("z"0)] / ("z" − "z"0). This is the same as the definition of the derivative for real functions, except that all of the quantities are complex. In particular, the limit is taken as the complex number "z" approaches "z"0, and must have the same value for any sequence of complex values for "z" that approach "z"0 on the complex plane. If the limit exists, we say that "f" is complex-differentiable at the point "z"0. This concept of complex differentiability shares several properties with real differentiability: it is linear and obeys the product rule, quotient rule, and chain rule. If "f" is "complex differentiable" at "every" point "z"0 in an open set "U", we say that "f" is holomorphic on "U". We say that "f" is holomorphic at the point "z"0 if "f" is complex differentiable on some neighbourhood of "z"0. We say that "f" is holomorphic on some non-open set "A" if it is holomorphic in an open set containing "A". As a pathological non-example, the function given by "f"("z") = |"z"|² is complex differentiable at exactly one point ("z"0 = 0), and for this reason, it is "not" holomorphic at 0 because there is no open set around 0 on which "f" is complex differentiable. The relationship between real differentiability and complex differentiability is the following. If a complex function "f"("x" + i"y") = "u"("x", "y") + i"v"("x", "y") is holomorphic, then "u" and "v" have first partial derivatives with respect to "x" and "y", and satisfy the Cauchy–Riemann equations: ∂"u"/∂"x" = ∂"v"/∂"y" and ∂"u"/∂"y" = −∂"v"/∂"x"; or, equivalently, the Wirtinger derivative of "f" with respect to the complex conjugate of "z" is zero: ∂"f"/∂"z̄" = 0, which is to say that, roughly, "f" is functionally independent from the complex conjugate of "z". If continuity is not given, the converse is not necessarily true. A simple converse is that if "u" and "v" have "continuous" first partial derivatives and satisfy the Cauchy–Riemann equations, then "f" is holomorphic. A more satisfying converse, which is much harder to prove, is the Looman–Menchoff theorem: if "f" is continuous, "u" and "v" have first partial derivatives (but not necessarily continuous), and they satisfy the Cauchy–Riemann equations, then "f" is holomorphic.
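To make the criterion concrete, here is a standard textbook check of the Cauchy–Riemann equations for two simple functions, with "u" and "v" as defined above (an illustrative computation, not part of the article's own exposition):

f(z) = z^2 = (x^2 - y^2) + i\,(2xy):\qquad \frac{\partial u}{\partial x} = 2x = \frac{\partial v}{\partial y},\qquad \frac{\partial u}{\partial y} = -2y = -\frac{\partial v}{\partial x}\quad\text{for all }(x,y),

f(z) = |z|^2 = (x^2 + y^2) + i\cdot 0:\qquad \frac{\partial u}{\partial x} = 2x,\quad \frac{\partial v}{\partial y} = 0,\qquad \frac{\partial u}{\partial y} = 2y,\quad -\frac{\partial v}{\partial x} = 0,

so "z"² satisfies the equations everywhere and is entire, while |"z"|² satisfies them only at "x" = "y" = 0, in agreement with the non-example discussed above.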
The word "holomorphic" was introduced by two of Cauchy's students, Briot (1817–1882) and Bouquet (1819–1895), and derives from the Greek ὅλος ("holos") meaning "entire", and μορφή ("morphē") meaning "form" or "appearance". Today, the term "holomorphic function" is sometimes preferred to "analytic function", as the latter is a more general concept. This is also because an important result in complex analysis is that every holomorphic function is complex analytic, a fact that does not follow obviously from the definitions. The term "analytic" is however also in wide use. Because complex differentiation is linear and obeys the product, quotient, and chain rules; the sums, products and compositions of holomorphic functions are holomorphic, and the quotient of two holomorphic functions is holomorphic wherever the denominator is not zero. If one identifies C with R2, then the holomorphic functions coincide with those functions of two real variables with continuous first derivatives which solve the Cauchy–Riemann equations, a set of two partial differential equations. Every holomorphic function can be separated into its real and imaginary parts, and each of these is a solution of Laplace's equation on R2. In other words, if we express a holomorphic function "f"("z") as both "u" and "v" are harmonic functions, where v is the harmonic conjugate of u. Cauchy's integral theorem implies that the contour integral of every holomorphic function along a loop vanishes: Here "γ" is a rectifiable path in a simply connected open subset "U" of the complex plane C whose start point is equal to its end point, and is a holomorphic function. Cauchy's integral formula states that every function holomorphic inside a disk is completely determined by its values on the disk's boundary. Furthermore: Suppose "U" is an open subset of C, is a holomorphic function and the closed disk is completely contained in "U". Let γ be the circle forming the boundary of "D". Then for every "a" in the interior of "D": where the contour integral is taken counter-clockwise. The derivative "f"′("a") can be written as a contour integral using Cauchy's differentiation formula: for any simple loop positively winding once around "a", and for infinitesimal positive loops γ around "a". In regions where the first derivative is not zero, holomorphic functions are conformal in the sense that they preserve angles and the shape (but not size) of small figures. Every holomorphic function is analytic. That is, a holomorphic function "f" has derivatives of every order at each point "a" in its domain, and it coincides with its own Taylor series at "a" in a neighbourhood of "a". In fact, "f" coincides with its Taylor series at "a" in any disk centred at that point and lying within the domain of the function. From an algebraic point of view, the set of holomorphic functions on an open set is a commutative ring and a complex vector space. Additionally, the set of holomorphic functions in an open set U is an integral domain if and only if the open set U is connected. In fact, it is a locally convex topological vector space, with the seminorms being the suprema on compact subsets. From a geometric perspective, a function "f" is holomorphic at "z"0 if and only if its exterior derivative "df" in a neighbourhood "U" of "z"0 is equal to "f"′("z") "dz" for some continuous function "f"′. It follows from that "df"′ is also proportional to "dz", implying that the derivative "f"′ is itself holomorphic and thus that "f" is infinitely differentiable. 
Similarly, the fact that d("f" d"z") = "f"′ d"z" ∧ d"z" = 0 implies that any function "f" that is holomorphic on the simply connected region "U" is also integrable on "U". (For a path γ from "z"0 to "z" lying entirely in "U", define "F"_γ("z") = "F"0 + ∫_γ "f"("w") d"w"; in light of the Jordan curve theorem and the generalized Stokes' theorem, "F"_γ("z") is independent of the particular choice of path γ, and thus "F"("z") is a well-defined function on "U" having "F"("z"0) = "F"0 and d"F" = "f" d"z".) All polynomial functions in "z" with complex coefficients are holomorphic on C, and so are sine, cosine and the exponential function. (The trigonometric functions are in fact closely related to and can be defined via the exponential function using Euler's formula). The principal branch of the complex logarithm function is holomorphic on the set C ∖ {"z" ∈ R : "z" ≤ 0}. The square root function can be defined as √"z" = e^(log("z")/2) and is therefore holomorphic wherever the logarithm log("z") is. The function 1/"z" is holomorphic on C ∖ {0}. As a consequence of the Cauchy–Riemann equations, a real-valued holomorphic function must be constant. Therefore, the absolute value of "z", the argument of "z", the real part of "z" and the imaginary part of "z" are not holomorphic. Another typical example of a continuous function which is not holomorphic is the complex conjugate "z̄" formed by complex conjugation. The definition of a holomorphic function generalizes to several complex variables in a straightforward way. Let "D" denote an open subset of C^"n", and let "f" : "D" → C. The function "f" is analytic at a point "p" in "D" if there exists an open neighbourhood of "p" in which "f" is equal to a convergent power series in "n" complex variables. Define "f" to be holomorphic if it is analytic at each point in its domain. Osgood's lemma shows (using the multivariate Cauchy integral formula) that, for a continuous function "f", this is equivalent to "f" being holomorphic in each variable separately (meaning that if any "n" − 1 coordinates are fixed, then the restriction of "f" is a holomorphic function of the remaining coordinate). The much deeper Hartogs' theorem proves that the continuity hypothesis is unnecessary: "f" is holomorphic if and only if it is holomorphic in each variable separately. More generally, a function of several complex variables that is square integrable over every compact subset of its domain is analytic if and only if it satisfies the Cauchy–Riemann equations in the sense of distributions. Functions of several complex variables are in some basic ways more complicated than functions of a single complex variable. For example, the region of convergence of a power series is not necessarily an open ball; these regions are Reinhardt domains, the simplest example of which is a polydisk. However, they also come with some fundamental restrictions. Unlike functions of a single complex variable, the possible domains on which there are holomorphic functions that cannot be extended to larger domains are highly limited. Such a set is called a domain of holomorphy. A complex differential ("p",0)-form α is holomorphic if and only if its antiholomorphic Dolbeault derivative is zero: ∂̄α = 0. The concept of a holomorphic function can be extended to the infinite-dimensional spaces of functional analysis. For instance, the Fréchet or Gateaux derivative can be used to define a notion of a holomorphic function on a Banach space over the field of complex numbers.
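As a concrete instance of the integral formulas stated earlier (a standard worked example, assuming the positively oriented unit circle |"z"| = 1 as the contour and the entire function "f"("z") = e^"z", with "a" = 0):

f(0) = \frac{1}{2\pi i}\oint_{|z|=1} \frac{e^{z}}{z}\,dz = e^{0} = 1 \quad\Longrightarrow\quad \oint_{|z|=1} \frac{e^{z}}{z}\,dz = 2\pi i , \qquad f'(0) = \frac{1}{2\pi i}\oint_{|z|=1} \frac{e^{z}}{z^{2}}\,dz = 1 ,

consistent with e^"z" being its own derivative; the second equality is Cauchy's differentiation formula applied at "a" = 0.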
https://en.wikipedia.org/wiki?curid=14110
History of Algeria Much of the history of Algeria has taken place on the fertile coastal plain of North Africa, which is often called the Maghreb (or Maghrib). North Africa served as a transit region for people moving towards Europe or the Middle East; thus, the region's inhabitants have been influenced by populations from other areas, including the Carthaginians, Romans, and Vandals. The region was conquered by the Muslims in the early 8th century AD, but broke off from the Umayyad Caliphate after the Berber Revolt of 740. Later, various Berber, Arab and Persian Muslim states and Sunni, Shia or Ibadi communities were established and ruled parts of modern-day Algeria, including the Rustamids, Ifranids, Fatimids, Maghrawas, Zirids, Hammadids, Almoravids, Almohads, Hafsids, and Ziyyanids. During the Ottoman period, Algiers was the center of the Barbary slave trade, which led to many naval conflicts. The most recent major events in the country's history have been the Algerian War and the Algerian Civil War. Evidence of the early human occupation of Algeria is demonstrated by the discovery of 1.8-million-year-old Oldowan stone tools found at Ain Hanech in 1992. In 1954, 700,000-year-old fossilised "Homo erectus" bones were discovered by C. Arambourg at Ternefine. Neolithic civilization (marked by animal domestication and subsistence agriculture) developed in the Saharan and Mediterranean Maghrib between 6000 and 2000 BC. This type of economy, richly depicted in the Tassili n'Ajjer cave paintings in southeastern Algeria, predominated in the Maghrib until the classical period. The amalgam of peoples of North Africa coalesced eventually into a distinct native population, the Berbers, who lacked a written language and hence tended to be overlooked or marginalized in historical accounts. Since 4000 BC, the indigenous peoples of northern Africa successfully resisted Phoenician, Roman, Vandal, Byzantine, Turkish, and French invaders but accepted Islam between the 7th and 9th centuries, and Arabic is now the language spoken by a majority in the country. Phoenician traders arrived on the North African coast around 900 BC and established Carthage (in present-day Tunisia) around 800 BC. During the classical period, Berber civilization was already at a stage in which agriculture, manufacturing, trade, and political organization supported several states. Trade links between Carthage and the Berbers in the interior grew, but territorial expansion also resulted in the enslavement or military recruitment of some Berbers and in the extraction of tribute from others. The Carthaginian state declined because of successive defeats by the Romans in the Punic Wars, and in 146 BC, the city of Carthage was destroyed. As Carthaginian power waned, the influence of Berber leaders in the hinterland grew. By the 2nd century BC, several large but loosely administered Berber kingdoms had emerged. After that, King Masinissa managed to unify Numidia under his rule. Madghacen was a king of the independent kingdoms of the Numidians, between 12 and 3 BC. Christianity arrived in the 2nd century. By the end of the 4th century, the settled areas had become Christianized, and some Berber tribes had converted en masse. After the fall of the Western Roman Empire, Algeria came under the control of the Vandal Kingdom. Later, the Eastern Roman Empire (also known as the Byzantine Empire) conquered Algeria from the Vandals, incorporating it into the Praetorian prefecture of Africa and later the Exarchate of Africa.
From the 8th-century Umayyad conquest of North Africa led by Musa bin Nusayr, Arab colonization started. The 11th-century invasion of migrants from the Arabian Peninsula brought oriental tribal customs. The introduction of Islam and Arabic had a profound impact on North Africa. The new religion and language introduced changes in social and economic relations, and established links with the Arab world through acculturation and assimilation. According to historians of the Middle Ages, the Berbers are divided into two branches, both descended from their ancestor Mazigh. The two branches, Botr and Barnès, are divided into tribes, and each Maghreb region is made up of several tribes. The large Berber tribes or peoples are the Sanhaja, Houara, Zenata, Masmuda, Kutama, Awarba and Barghawata, among others. Each tribe is divided into sub-tribes. All these tribes were independent and made their own territorial decisions. Several Berber dynasties emerged during the Middle Ages in North Africa, Sudan, Andalusia, Italy, Mali, Niger, Senegal and Egypt. Ibn Khaldoun compiled a table of the Berber dynasties: the Zirid, Banu Ifran, Maghrawa, Almoravid, Hammadid, Almohad Caliphate, Marinid, Zayyanid, Wattasid, Meknes and Hafsid dynasties. The invasion of the Banu Hilal Arab tribes in the 11th century sacked Kairouan; the area under Zirid control was reduced to the coastal region, and the Arab conquests fragmented into petty Bedouin emirates. Arab military expeditions into the Maghreb between 642 and 669 resulted in the spread of Islam. The Umayyads (a Muslim dynasty based in Damascus from 661 to 750) recognised that the strategic necessity of dominating the Mediterranean dictated a concerted military effort on the North African front. By 711 Umayyad forces helped by Berber converts to Islam had conquered all of North Africa. In 750 the Abbasids succeeded the Umayyads as Muslim rulers and moved the caliphate to Baghdad. Under the Abbasids, the Berber Kharijite Sufri Banu Ifran opposed both the Umayyads and the Abbasids. Afterwards, the Rustumids (761–909) ruled most of the central Maghrib from Tahirt, southwest of Algiers. The imams gained a reputation for honesty, piety, and justice, and the court of Tahirt was noted for its support of scholarship. The Rustumid imams failed, however, to organise a reliable standing army, which opened the way for Tahirt's demise under the assault of the Fatimid dynasty. With their interest focused primarily on Egypt and Muslim lands beyond, the Fatimids left the rule of most of Algeria to the Zirids and Hammadids (972–1148), Berber dynasties that centered significant local power in Algeria for the first time, but which were still at war with the Banu Ifran (kingdom of Tlemcen) and the Maghrawa (942–1068). This period was marked by constant conflict, political instability, and economic decline. Following a large incursion of Arab Bedouin from Egypt beginning in the first half of the 11th century, the use of Arabic spread to the countryside, and sedentary Berbers were gradually Arabised. The Almoravid ("those who have made a religious retreat") movement developed early in the 11th century among the Sanhaja Berbers of southern Morocco. The movement's initial impetus was religious, an attempt by a tribal leader to impose moral discipline and strict adherence to Islamic principles on followers. But the Almoravid movement shifted to engaging in military conquest after 1054. By 1106, the Almoravids had conquered Morocco, the Maghreb as far east as Algiers, and Spain up to the Ebro River.
Like the Almoravids, the Almohads ("unitarians") found their inspiration in Islamic reform. The Almohads took control of Morocco by 1146, captured Algiers around 1151, and by 1160 had completed the conquest of the central Maghrib. The zenith of Almohad power occurred between 1163 and 1199. For the first time, the Maghrib was united under a local regime, but the continuing wars in Spain overtaxed the resources of the Almohads, and in the Maghrib their position was compromised by factional strife and a renewal of tribal warfare. In the central Maghrib, the Abdalwadids founded a dynasty that ruled the Kingdom of Tlemcen in Algeria. For more than 300 years, until the region came under Ottoman suzerainty in the 16th century, the Zayanids kept a tenuous hold in the central Maghrib. Many coastal cities asserted their autonomy as municipal republics governed by merchant oligarchies, tribal chieftains from the surrounding countryside, or the privateers who operated out of their ports. Nonetheless, Tlemcen, the "pearl of the Maghrib," prospered as a commercial center. The final triumph of the 700-year Christian reconquest of Spain was marked by the fall of Granada in 1492. Christian Spain imposed its influence on the Maghrib coast by constructing fortified outposts and collecting tribute. But Spain never sought to extend its North African conquests much beyond a few modest enclaves. Privateering was an age-old practice in the Mediterranean, and North African rulers engaged in it increasingly in the late 16th and early 17th centuries because it was so lucrative. Until the 17th century the Barbary pirates used galleys, but a Dutch renegade by the name of Zymen Danseker taught them the advantage of using sailing ships. Algiers became the privateering city-state par excellence, and two privateer brothers were instrumental in extending Ottoman influence in Algeria. At about the time Spain was establishing its presidios in the Maghrib, the Muslim privateer brothers Aruj and Khair ad Din (the latter known to Europeans as Barbarossa, or Red Beard) were operating successfully off Tunisia. In 1516 Aruj moved his base of operations to Algiers but was killed in 1518. Khair ad Din succeeded him as military commander of Algiers, and the Ottoman sultan gave him the title of beylerbey (provincial governor). Spanish expansionist policy in North Africa began with the Catholic Monarchs and the regent Cisneros, once the "Reconquista" in the Iberian Peninsula was finished. In that way, several towns and outposts on the Algerian coast were conquered and occupied: Mers El Kébir (1505), Oran (1509), Algiers (1510) and Bugia (1510). The Spanish conquest of Oran was won with much bloodshed: 4,000 Algerians were massacred, and up to 8,000 were taken prisoner. For about 200 years, Oran's inhabitants were virtually held captive in their fortress walls, ravaged by famine and plague; Spanish soldiers, too, were irregularly fed and paid. The Spaniards left Algiers in 1529, Bugia in 1554, and Mers El Kébir and Oran in 1708. The Spanish returned in 1732 when the armada of the Duke of Montemar was victorious in the Battle of Aïn-el-Turk and retook Oran and Mers El Kébir; the Spanish massacred many Muslim soldiers. In 1751 a Spanish adventurer named John Gascon obtained permission, vessels and fireworks to move against Algiers and set fire to the Algerian fleet by night. The plan, however, miscarried.
In 1775, Charles III of Spain sent a large force to attack Algiers under the command of Alejandro O'Reilly (who had led Spanish forces in crushing the French rebellion in Louisiana), resulting in a disastrous defeat. The Algerians suffered 5,000 casualties. The Spanish navy bombarded Algiers in 1784; over 20,000 cannonballs were fired, much of the city and its fortifications were destroyed and most of the Algerian fleet was sunk. Oran and Mers El Kébir were held until 1792, when they were sold by King Charles IV to the Bey of Algiers. Under Khair ad Din's regency, Algiers became the center of Ottoman authority in the Maghrib. For 300 years, Algeria was a province of the Ottoman Empire under a regency that had Algiers as its capital (see Dey). Subsequently, with the institution of a regular Ottoman administration, governors with the title of pasha ruled. Turkish was the official language, and Arabs and Berbers were excluded from government posts. In 1671 a new leader took power, adopting the title of dey. In 1710 the dey persuaded the sultan to recognize him and his successors as regent, replacing the pasha in that role. Although Algiers remained a part of the Ottoman Empire, the Ottoman government ceased to have effective influence there. European maritime powers paid the tribute demanded by the rulers of the privateering states of North Africa (Algiers, Tunis, Tripoli, and Morocco) to prevent attacks on their shipping. The Napoleonic wars of the early 19th century diverted the attention of the maritime powers from suppressing piracy. But when peace was restored to Europe in 1815, Algiers found itself at war with Spain, the Netherlands, Prussia, Denmark, Russia, and Naples. Algeria and surrounding areas, collectively known as the Barbary States, were responsible for piracy in the Mediterranean Sea, as well as the enslavement of Christians, actions which brought them into the First and Second Barbary Wars with the United States of America. North African boundaries have shifted during various stages of the conquests. The borders of modern Algeria were created by the French, whose colonization began in 1830 (the French invasion began on July 5). To benefit the French colonists (many of whom were not in fact of French origin but Italian, Maltese, and Spanish, and nearly all of whom lived in urban areas), northern Algeria was eventually organized into overseas departments of France, with representatives in the French National Assembly. France controlled the entire country, but the traditional Muslim population in the rural areas remained separated from the modern economic infrastructure of the European community. As a result of what the French considered an insult to the French consul in Algiers by the Dey in 1827, France blockaded Algiers for three years. In 1830, France invaded and occupied the coastal areas of Algeria, citing a diplomatic incident as casus belli. Hussein Dey went into exile. French colonization then gradually penetrated southwards, and came to have a profound impact on the area and its populations. The European conquest, initially accepted in the Algiers region, was soon met by a rebellion, led by Abdel Kadir, which took roughly a decade for the French troops to put down. By 1848 nearly all of northern Algeria was under French control, and the new government of the Second Republic declared the occupied lands an integral part of France.
Three "civil territories"—Algiers, Oran, and Constantine—were organized as French départements (local administrative units) under a civilian government. In addition to enduring the affront of being ruled by a foreign, non-Muslim power, many Algerians lost their lands to the new government or to colonists. Traditional leaders were eliminated, coopted, or made irrelevant, and the traditional educational system was largely dismantled; social structures were stressed to the breaking point. From 1856, native Muslims and Jews were viewed as French subjects not citizens. However, in 1865, Napoleon III allowed them to apply for full French citizenship, a measure that few took, since it involved renouncing the right to be governed by "sharia" law in personal matters, and was considered a kind of apostasy; in 1870, French citizenship was made automatic for Jewish natives, a move which largely angered many Muslims, which resulted in the Jews being seen as the accomplices of the colonial power by anti-colonial Algerians. Nonetheless, this period saw progress in health, some infrastructures, and the overall expansion of the economy of Algeria, as well as the formation of new social classes, which, after exposure to ideas of equality and political liberty, would help propel the country to independence. A new generation of Islamic leadership emerged in Algeria at the time of World War I and grew to maturity during the 1920s and 1930s. Various groups were formed in opposition to French rule, most notable the National Liberation Front (FLN) and the National Algerian Movement. "Colons" (colonists), or, more popularly, "pieds noirs" (literally, black feet) dominated the government and controlled the bulk of Algeria's wealth. Throughout the colonial era, they continued to block or delay all attempts to implement even the most modest reforms. But from 1933 to 1936, mounting social, political, and economic crises in Algeria induced the indigenous population to engage in numerous acts of political protest. The government responded with more restrictive laws governing public order and security. Algerian Muslims rallied to the French side at the start of World War II as they had done in World War I. But the colons were generally sympathetic to the collaborationist Vichy regime established following France's defeat by Nazi Germany. After the fall of the Vichy regime in Algeria (November 11, 1942) as a result of Operation Torch, the Free French commander in chief in North Africa slowly rescinded repressive Vichy laws, despite opposition by colon extremists. In March 1943, Muslim leader Ferhat Abbas presented the French administration with the Manifesto of the Algerian People, signed by 56 Algerian nationalist and international leaders. The manifesto demanded an Algerian constitution that would guarantee immediate and effective political participation and legal equality for Muslims. Instead, the French administration in 1944 instituted a reform package, based on the 1936 Viollette Plan, that granted full French citizenship only to certain categories of "meritorious" Algerian Muslims, who numbered about 60,000. In April 1945 the French had arrested the Algerian nationalist leader Messali Hadj. On May 1 the followers of his Parti du Peuple Algérien (PPA) participated in demonstrations which were violently put down by the police. Several Algerians were killed. The tensions between the Muslim and colon communities exploded on May 8, 1945, V-E Day. When a Muslim march was met with violence, marchers rampaged. 
The army and police responded by conducting a prolonged and systematic ratissage (literally, raking over) of suspected centers of dissidence. According to official French figures, 1,500 Muslims died as a result of these countermeasures. Other estimates vary from 6,000 to as high as 45,000 killed. Many nationalists drew the conclusion that independence could not be won by peaceful means, and so started organizing for violent rebellion. In August 1947, the French National Assembly approved the government-proposed Organic Statute of Algeria. This law called for the creation of an Algerian Assembly with one house representing Europeans and "meritorious" Muslims and the other representing the remaining 8 million or more Muslims. Muslim and colon deputies alike abstained or voted against the statute but for diametrically opposed reasons: the Muslims because it fell short of their expectations and the colons because it went too far. The Algerian War of Independence (1954–1962), brutal and long, was the most recent major turning point in the country's history. Although often fratricidal, it ultimately united Algerians and seared the value of independence and the philosophy of anticolonialism into the national consciousness. The abusive tactics of the French Army remain a controversial subject in France to this day. In the early morning hours of November 1, 1954, the National Liberation Front (Front de Libération Nationale—FLN) launched attacks throughout Algeria in the opening salvo of a war of independence. An important watershed in this war was the massacre of civilians by the FLN near the town of Philippeville in August 1955. The government claimed it killed 1,273 guerrillas in retaliation; according to the FLN, 12,000 Muslims perished in an orgy of bloodletting by the armed forces and police, as well as colon gangs. After Philippeville, all-out war began in Algeria. The FLN fought largely using guerrilla tactics whilst the French counter-insurgency tactics often included severe reprisals and repression. Eventually, protracted negotiations led to a cease-fire signed by France and the FLN on March 18, 1962, at Evian, France. The Evian accords also provided for continuing economic, financial, technical, and cultural relations, along with interim administrative arrangements until a referendum on self-determination could be held. The Evian accords guaranteed the religious and property rights of French settlers, but the perception that they would not be respected led to the exodus of one million "pieds-noirs" and "harkis". Between 350,000 and 1 million Algerians are estimated to have died during the war, and more than 2 million, out of a total Muslim population of 9 or 10 million, were made into refugees or forcibly relocated into government-controlled camps. Much of the countryside and agriculture was devastated, along with the modern economy, which had been dominated by urban European settlers (the "pieds-noirs"). French sources estimated that at least 70,000 Muslim civilians were killed, or abducted and presumed killed, by the FLN during the Algerian War. Nearly one million people of mostly French, Spanish and Italian descent were forced to flee the country at independence due to the unbridgeable rifts opened by the civil war and threats from units of the victorious FLN; along with them fled most Algerians of Jewish descent and those Muslim Algerians who had supported a French Algeria ("harkis"). Between 30,000 and 150,000 pro-French Muslims were also killed in Algeria by the FLN in post-war reprisals. 
The referendum was held in Algeria on 1 July 1962, and France declared Algeria independent on 3 July. On 8 September 1963, a constitution was adopted by referendum, and later that month, Ahmed Ben Bella was formally elected the first president, after receiving support from the military, led by Houari Boumédienne. The war for independence and its aftermath had severely disrupted Algeria's society and economy. In addition to the physical destruction, the exodus of the "colons" deprived the country of most of its managers, civil servants, engineers, teachers, physicians, and skilled workers. The homeless and displaced numbered in the hundreds of thousands, many suffering from illness, and some 70 percent of the workforce was unemployed. The months immediately following independence witnessed the pell-mell rush of Algerians, their government, and its officials to claim the property and jobs left behind by the Europeans. In the 1963 March Decrees, Ben Bella declared that all agricultural, industrial, and commercial properties previously owned and operated by Europeans were vacant, thereby legalizing confiscation by the state. A new constitution drawn up under close FLN supervision was approved by nationwide referendum in September 1963, and Ben Bella was confirmed as the party's choice to lead the country for a five-year term. The military played an important role in Ben Bella's administration. Since Ben Bella recognized the role that the military had played in bringing him to power, he appointed senior officers as ministers and to other important positions within the new state, including naming Boumédienne as defence minister. They played a core role in implementing the country's security and foreign policy. Under the new constitution, Ben Bella as president combined the functions of chief of state and head of government with those of supreme commander of the armed forces. He formed his government without needing legislative approval and was responsible for the definition and direction of its policies. There was no effective institutional check on his powers. Opposition leader Hocine Aït-Ahmed quit the National Assembly in 1963 to protest the increasingly dictatorial tendencies of the regime and formed a clandestine resistance movement, the Front of Socialist Forces (Front des Forces Socialistes—FFS), dedicated to overthrowing the Ben Bella regime by force. Late summer 1963 saw sporadic incidents attributed to the FFS. More serious fighting broke out a year later. The army moved quickly and in force to crush the rebellion. As minister of defense, Houari Boumédienne had no qualms about sending the army to put down regional uprisings because he felt they posed a threat to the state. Ben Bella also attempted to co-opt allies from among some of those regionalists, in order to undermine the ability of military commanders to influence foreign and security policy. Tensions increased between Houari Boumédienne and Ahmed Ben Bella, and on 19 June 1965 Boumédienne deposed Ben Bella in a military coup d'état that was both swift and bloodless, becoming head of state. Ben Bella "disappeared", and would not be seen again until he was released from house arrest in 1980 by Boumédienne's successor, Colonel Chadli Bendjedid. Boumédienne immediately dissolved the National Assembly and suspended the 1963 constitution. 
Political power resided in the Council of the Revolution, a predominantly military body intended to foster cooperation among various factions in the army and the party. Houari Boumédienne's position as head of government and of state was initially not secure, partly because of his lack of a significant power base outside the armed forces; he relied strongly on a network of former associates known as the Oujda group (after his posting as ALN leader in the Moroccan border town of Oujda during the war years), but he could not fully dominate the fractious regime. This situation may have accounted for his deference to collegial rule. Over Boumédienne's 11-year reign as Chairman of the Revolutionary Council, his government introduced two formal mechanisms for popular participation in politics: the People's Municipal Assembly ("Assemblée Populaires Communales") and the People's Provincial Assembly ("Assemblée Populaires de Wilaya"). Under his rule, leftist and socialist concepts were merged with Islam. Boumédienne also used Islam opportunistically to consolidate his power. On one hand, he made token concessions and cosmetic changes, such as putting Ahmed Taleb Ibrahimi in charge of national education in 1965, or adopting policies like criminalizing gambling, establishing Friday as the national holiday and dropping plans to introduce birth control, to paint an Islamic image of the new government. But on the other hand, the government also progressively repressed Islamic groups, for example ordering the dissolution of Al Qiyam. Following attempted coups—most notably that of chief-of-staff Col. Tahar Zbiri in December 1967—and a failed assassination attempt (April 25, 1968), Boumédienne consolidated power and forced military and political factions to submit. He took a systematic, authoritarian approach to state building, arguing that Algeria needed stability and an economic base before any political institutions. Eleven years after Houari Boumédienne took power, after much public debate, a long-promised new constitution was promulgated in November 1976. The Constitution restored the National Popular Assembly, which was given legislative, consent and oversight functions.
https://en.wikipedia.org/wiki?curid=14113
History of Zimbabwe Following the Lancaster House Agreement of 1979, there was a transition to internationally recognized majority rule in 1980; the United Kingdom ceremonially granted Zimbabwe independence on 18 April that year. In the 2000s Zimbabwe's economy began to deteriorate due to various factors, including the imposition of economic sanctions by western countries led by the United Kingdom and widespread corruption in government. Economic instability caused many Zimbabweans to move overseas or to neighboring countries. Prior to its recognized independence as Zimbabwe in 1980, the nation had been known by several names: Rhodesia, Southern Rhodesia and Zimbabwe Rhodesia. Prior to the arrival of Bantu speakers in present-day Zimbabwe, the region was populated by ancestors of the San people. The first Bantu-speaking farmers arrived during the Bantu expansion around 2000 years ago. These Bantu speakers were the makers of early Iron Age pottery belonging to the Silver Leaves or Matola tradition, third to fifth centuries A.D., found in southeast Zimbabwe. This tradition was part of the eastern stream of Bantu expansion (sometimes called Kwale) which originated west of the Great Lakes, spreading to the coastal regions of southeastern Kenya and northeastern Tanzania, and then southwards to Mozambique, southeastern Zimbabwe and Natal. More substantial in numbers in Zimbabwe were the makers of the Ziwa and Gokomere ceramic wares, of the fourth century A.D. Their early Iron Age ceramic tradition belonged to the highlands facies of the eastern stream, which moved inland to Malawi and Zimbabwe. Imports of beads have been found at Gokomere and Ziwa sites, possibly in return for gold exported to the coast. A later phase of the Gokomere culture was the Zhizo in southern Zimbabwe. Zhizo communities settled in the Shashe-Limpopo area in the tenth century. Their capital there was Schroda (just across the Limpopo River from Zimbabwe). Many fragments of ceramic figurines have been recovered from there, figures of animals and birds, and also fertility dolls. The inhabitants produced ivory bracelets and other ivory goods. Imported beads found there and at other Zhizo sites are evidence of trade, probably of ivory and skins, with traders on the Indian Ocean coast. Pottery belonging to a western stream of Bantu expansion (sometimes called Kalundu) has been found at sites in northeastern Zimbabwe, dated from the seventh century. (The western stream originated in the same area as the eastern stream: both belong to the same style system, called by Phillipson the Chifumbadze system, which has general acceptance by archaeologists.) The terms eastern and western streams represent the expansion of the Bantu speaking peoples in terms of their culture. Another question is the branches of the Bantu languages which they spoke. It seems that the makers of the Ziwa/Gokomere wares were not the ancestral speakers of the Shona languages of today's Zimbabwe, who did not arrive there until around the tenth century, from south of the Limpopo River, and whose ceramic culture belonged to the western stream. The linguist and historian Ehret believes that in view of the similarity of the Ziwa/Gokomere pottery to the Nkope of the ancestral Nyasa language speakers, the Ziwa/Gokomere people spoke a language closely related to the Nyasa group. Their language, whatever it was, was superseded by the ancestral Shona languages, although Ehret says that a set of Nyasa words occurs in central Shona dialects today. 
The evidence that the ancestral Shona speakers came from South Africa is that the ceramic styles associated with Shona speakers in Zimbabwe from the thirteenth to the seventeenth centuries can be traced back to western stream (Kalundu) pottery styles in South Africa. The Ziwa/Gokomere and Zhizo traditions were superseded by Leopards Kopje and Gumanye wares of the Kalundu tradition from the tenth century. Although the western stream Kalundu tradition was ancestral to Shona ceramic wares, the closest relationships of the ancestral Shona language, according to many linguists, were with a southern division of eastern Bantu – such languages as the southeastern languages (Nguni, Sotho-Tswana, Tsonga), Nyasa and Makwa. While it may well be the case that the people of the western stream spoke a language belonging to a wider Eastern Bantu division, it is a puzzle which remains to be resolved that they spoke a language most closely related to the languages just mentioned, all of which are today spoken in southeastern Africa. After the Shona-speaking people moved into present-day Zimbabwe, many different dialects developed over time in the different parts of the country. Among these was Kalanga. It is believed that Kalanga-speaking societies first emerged in the middle Limpopo valley in the 9th century before moving on to the Zimbabwean highlands. The Zimbabwean plateau eventually became the centre of subsequent Kalanga states. The Kingdom of Mapungubwe was the first in a series of sophisticated trade states developed in Zimbabwe by the time of the first European explorers from Portugal. They traded in gold, ivory and copper for cloth and glass. From about 1250 until 1450, Mapungubwe was eclipsed by the Kingdom of Zimbabwe. This Kalanga state further refined and expanded upon Mapungubwe's stone architecture, which survives to this day at the ruins of the kingdom's capital of Great Zimbabwe. From circa 1450 to 1760, Zimbabwe gave way to the Kingdom of Mutapa. This Kalanga state ruled much of the area that is known as Zimbabwe today, and parts of central Mozambique. It is known by many names, including the Mutapa Empire and Mwenemutapa, and was known for its gold trade routes with Arabs and the Portuguese. However, Portuguese settlers destroyed the trade and began a series of wars which left the empire in near collapse in the early 17th century. As a direct response to Portuguese aggression in the interior, a new Kalanga state emerged called the Rozwi Empire. Relying on centuries of military, political and religious development, the Rozwi (which means "destroyers") removed the Portuguese from the Zimbabwe plateau by force of arms. The Rozwi continued the stone building traditions of the Zimbabwe and Mapungubwe kingdoms while adding guns to its arsenal and developing a professional army to protect its trade routes and conquests. Around 1821, the Zulu general Mzilikazi of the Khumalo clan successfully rebelled against King Shaka and created his own clan, the Ndebele. The Ndebele fought their way northwards into the Transvaal, leaving a trail of destruction in their wake and beginning an era of widespread devastation known as the Mfecane. When Dutch trekboers converged on the Transvaal in 1836, they drove the tribe even further northward. By 1838, the Rozwi Empire, along with the other Shona states, had been conquered by the Ndebele. 
After losing their remaining South African lands in 1840, Mzilikazi and his tribe permanently settled the southwest of present-day Zimbabwe in what became known as Matabeleland, establishing Bulawayo as their capital. Mzilikazi then organised his society into a military system with regimental kraals, similar to those of Shaka, which was stable enough to repel further Boer incursions. During the pre-colonial period, the Ndebele social structure was stratified. It was composed mainly of three social groups: the Zansi, Enhla and Amahole. The Zansi comprised the ruling class, the original Khumalo people who migrated from south of the Limpopo with Mzilikazi. The Enhla and Amahole groups were made up of other tribes and ethnic groups who had been incorporated into the empire during the migration. However, with the passage of time, this stratification has slowly disappeared. The Ndebele people have long subscribed to the worship of Unkulunkulu as their supreme being. Their religious life in general—rituals, ceremonies, practices, devotion and loyalty—revolves around the worship of this Supreme Being. However, with the popularisation of Christianity and other religions, Ndebele traditional religion is now uncommon. Mzilikazi died in 1868 and, following a violent power struggle, was succeeded by his son, Lobengula. King Mzilikazi had established the Ndebele Kingdom, with Shona subjects paying tribute to him. This Ndebele Kingdom under King Lobengula faced the threat of colonialism, and Lobengula signed treaties with the European imperialists that led to the occupation of the present Zimbabwean state. In the 1880s, the British arrived with Cecil Rhodes' British South Africa Company. In 1888, British colonialist Cecil Rhodes obtained a concession for mining rights from King Lobengula of the Ndebele peoples. Cecil Rhodes presented this concession to persuade the government of the United Kingdom to grant a royal charter to his British South Africa Company (BSAC) over Matabeleland and its subject states such as Mashonaland. Rhodes sought permission to negotiate similar concessions covering all territory between the Limpopo River and Lake Tanganyika, then known as 'Zambesia'. In accordance with the terms of the aforementioned concessions and treaties, Cecil Rhodes promoted the colonisation of the region's land, with British control over labour as well as precious metals and other mineral resources. In 1895 the BSAC adopted the name 'Rhodesia' for the territory of Zambesia, in honour of Cecil Rhodes. In 1898, 'Southern Rhodesia' became the official denotation for the region south of the Zambezi, which later became Zimbabwe. The region to the north was administered separately by the BSAC and later named Northern Rhodesia (now Zambia). The Shona staged unsuccessful revolts (known as Chimurenga) against encroachment upon their lands by clients of BSAC and Cecil Rhodes in 1896 and 1897. Following the failed insurrections of 1896–97, the Ndebele and Shona groups became subject to Rhodes's administration, thus precipitating European settlement en masse, which led to land distribution disproportionately favouring Europeans and displacing the Shona, Ndebele, and other indigenous peoples. The colony's first formal constitution was drafted in 1899, and copied various pieces of legislation directly from that of the Union of South Africa; Rhodesia was meant to be, in many ways, a shadow colony of the Cape. 
Many within the administrative framework of the BSAC assumed that Southern Rhodesia, when its "development" was "suitably advanced", would "take its rightful place as a member of" the Union of South Africa after the Anglo-Boer War of 1899–1902, when the four South African colonies joined under the auspices of one flag and began to work towards the creation of a unified administrative structure. The territory was made open to white settlement, and these settlers were then in turn given considerable administrative powers, including a franchise that, while on the surface non-racial, ensured "a predominantly European electorate" which "operated to preclude Great Britain from modifying her policy in Southern Rhodesia and subsequently treating it as a territory inhabited mainly by Africans whose interests should be paramount and to whom British power should be transferred". Southern Rhodesia became a self-governing British colony in October 1923, subsequent to a referendum held the previous year. The British government took full command of the British South Africa Company's holdings, including both Northern and Southern Rhodesia. Northern Rhodesia retained its status as a colonial protectorate; Southern Rhodesia was given responsible self-government – with limitations and still annexed to the crown as a colony. Many studies of the country see it as a state that operated independently within the Commonwealth; nominally under the rule of the British crown, but technically able to do as it pleased. And in theory, Southern Rhodesia was able to govern itself, draft its own legislation, and elect its own parliamentary leaders. But in reality, this was self-government subject to supervision. Until the white minority settler government's declaration of unilateral independence in 1965, London remained in control of the colony's external affairs, and all legislation was subject to approval from the United Kingdom Government and the Queen. In 1930, the Land Apportionment Act divided rural land along racial lines, creating four types of land: white-owned land that could not be acquired by Africans; purchase areas for those Africans who could afford to purchase land; Tribal Trust Lands designated as the African reserves; and Crown lands owned by the state, reserved for future use and public parks. Fifty-one per cent of the land was given to approximately 50,000 white inhabitants, with 29.8 per cent left for over a million Africans. Many Rhodesians served on behalf of the United Kingdom during World War II, mainly in the East African Campaign against Axis forces in Italian East Africa. In 1953, in the face of African opposition, Britain consolidated the two Rhodesias with Nyasaland (now Malawi) in the ill-fated Federation of Rhodesia and Nyasaland, which was dominated by Southern Rhodesia. Growing African nationalism and general dissent, particularly in Nyasaland, persuaded the UK to dissolve the Federation in 1963, splitting it into three separate territories. As colonial rule was ending throughout the continent and as African-majority governments assumed control in neighbouring Northern Rhodesia and in Nyasaland, the white-minority Rhodesian government led by Ian Smith made a Unilateral Declaration of Independence (UDI) from the United Kingdom on 11 November 1965. The United Kingdom deemed this an act of rebellion, but did not re-establish control by force. The white minority government declared itself a republic in 1970. 
A civil war ensued, with Joshua Nkomo's ZAPU and Robert Mugabe's ZANU using assistance from the governments of Zambia and Mozambique. Although Smith's declaration was not recognised by the United Kingdom or any other foreign power, Southern Rhodesia dropped the designation "Southern" and claimed nation status as the Republic of Rhodesia in 1970, though this too was not recognised internationally. The country gained official independence as Zimbabwe on 18 April 1980. The government held independence celebrations in Rufaro stadium in Salisbury, the capital. Lord Christopher Soames, the last Governor of Southern Rhodesia, watched as Charles, Prince of Wales, gave a farewell salute and the Rhodesian Signal Corps played "God Save the Queen". Many foreign dignitaries also attended, including Prime Minister Indira Gandhi of India, President Shehu Shagari of Nigeria, President Kenneth Kaunda of Zambia, President Seretse Khama of Botswana, and Prime Minister Malcolm Fraser of Australia, representing the Commonwealth of Nations. Bob Marley sang 'Zimbabwe', a song he wrote, at the government's invitation in a concert at the country's independence festivities. President Shagari pledged $15 million at the celebration to train Zimbabweans in Zimbabwe and expatriates in Nigeria. Mugabe's government used part of the money to buy newspaper companies owned by South Africans, increasing the government's control over the media. The rest went to training students in Nigerian universities, government workers in the Administrative Staff College of Nigeria in Badagry, and soldiers in the Nigerian Defence Academy in Kaduna. Later that year Mugabe commissioned a report by the BBC on press freedom in Zimbabwe. The BBC issued its report on 26 June, recommending the privatisation of the Zimbabwe Broadcasting Corporation and its independence from political interests. Mugabe's government changed the capital's name from Salisbury to Harare on 18 April 1982 in celebration of the second anniversary of independence. The government renamed the main street in the capital, Jameson Avenue, in honour of Samora Machel, President of Mozambique. In 1992, a World Bank study indicated that more than 500 health centres had been built since 1980. The percentage of children vaccinated increased from 25% in 1980 to 67% in 1988 and life expectancy increased from 55 to 59 years. Enrolment increased by 232 per cent one year after primary education was made free and secondary school enrolment increased by 33 per cent in two years. These social policies led to an increase in the debt ratio. Several laws were passed in the 1980s in an attempt to reduce wage gaps. However, the gaps remained considerable. In 1988, the law gave women, at least in theory, the same rights as men. Previously, they could only take a few personal initiatives without the consent of their father or husband. The new Constitution provided for a President as Head of State with a Prime Minister as Head of Government. Reverend Canaan Banana served as the first President. The government amended the Constitution in 1987 to provide for an Executive President and abolished the office of Prime Minister. The constitutional changes came into effect on 1 January 1988 with Robert Mugabe as President. The bicameral Parliament of Zimbabwe had a directly elected House of Assembly and an indirectly elected Senate, partly made up of tribal chiefs. 
The Constitution established two separate voters' rolls, one for the black majority, who had 80% of the seats in Parliament, and the other for whites and other ethnic minorities, such as Coloureds, people of mixed race, and Asians, who held 20%. The government amended the Constitution in 1986, eliminating the separate voters' rolls and replacing the white seats with seats filled by nominated members. Many white MPs joined ZANU, which then reappointed them. In 1990 the government abolished the Senate and increased the House of Assembly's membership to include members nominated by the President. Prime Minister Mugabe kept Peter Walls, the head of the army, in his government and put him in charge of integrating the Zimbabwe People's Revolutionary Army (ZIPRA), the Zimbabwe African National Liberation Army (ZANLA), and the Rhodesian Army. While Western media outlets praised Mugabe's efforts at reconciliation with the white minority, tension soon developed. On 17 March 1980, after several unsuccessful assassination attempts, Mugabe asked Walls, "Why are your men trying to kill me?" Walls replied, "If they were my men you would be dead." BBC News interviewed Walls on 11 August 1980. He told the BBC that he had asked British Prime Minister Margaret Thatcher to annul the 1980 election prior to the official announcement of the result, on the grounds that Mugabe had used intimidation to win the election. Walls said Thatcher had not replied to his request. On 12 August, British government officials denied that the request had gone unanswered, saying that Antony Duff, Deputy Governor of Salisbury, had told Walls on 3 March that Thatcher would not annul the election. Minister of Information Nathan Shamuyarira said the government would not be "held ransom by racial misfits" and told "all those Europeans who do not accept the new order to pack their bags." He also said the government continued to consider taking "legal or administrative action" against Walls. Mugabe, returning from a visit with United States President Jimmy Carter in New York City, said, "One thing is quite clear—we are not going to have disloyal characters in our society." Walls returned to Zimbabwe after the interview, telling Peter Hawthorne of "Time" magazine, "To stay away at this time would have appeared like an admission of guilt." Mugabe drafted legislation that would exile Walls from Zimbabwe for life, and Walls moved to South Africa. Ethnic divisions soon came back to the forefront of national politics. Tension between ZAPU and ZANU erupted with guerrilla activity starting again in Matabeleland in south-western Zimbabwe. Nkomo (ZAPU) left for exile in Britain and did not return until Mugabe guaranteed his safety. In 1982 government security officials discovered large caches of arms and ammunition on properties owned by ZAPU, accusing Nkomo and his followers of plotting to overthrow the government. Mugabe fired Nkomo and his closest aides from the cabinet. Seven MPs, members of the Rhodesian Front, left Smith's party to sit as "independents" on 4 March 1982, signifying their dissatisfaction with his policies. As a result of what they saw as persecution of Nkomo and his party, PF-ZAPU supporters and army deserters began a campaign of dissidence against the government. Centred primarily in Matabeleland, home of the Ndebele, who were at the time PF-ZAPU's main followers, this dissidence continued through 1987. 
It involved attacks on government personnel and installations, armed banditry aimed at disrupting security and economic life in the rural areas, and harassment of ZANU-PF members. Because of the unsettled security situation immediately after independence and democratic sentiments, the government kept in force a "state of emergency". This gave the government widespread powers under the "Law and Order Maintenance Act," including the right to detain persons without charge, which it used quite widely. From 1983 to 1984 the government declared a curfew in areas of Matabeleland and sent in the army in an attempt to suppress members of the Ndebele tribe. The pacification campaign, known as the Gukurahundi, resulted in at least 20,000 civilian deaths perpetrated by an elite, North Korean-trained brigade. ZANU-PF increased its majority in the 1985 elections, winning 67 of the 100 seats. The majority gave Mugabe the opportunity to start making changes to the constitution, including those with regard to land restoration. Fighting did not cease until Mugabe and Nkomo reached an agreement in December 1987 whereby ZAPU became part of ZANU-PF and the government changed the constitution to make Mugabe the country's first executive president and Nkomo one of two vice-presidents. Elections in March 1990 resulted in another overwhelming victory for Mugabe and his party, which won 117 of the 120 election seats. Election observers estimated voter turnout at only 54% and found the campaign neither free nor fair, though balloting met international standards. Unsatisfied with a "de facto" one-party state, Mugabe called on the ZANU-PF Central Committee to support the creation of a "de jure" one-party state in September 1990 and lost. The government began further amending the constitution. The judiciary and human rights advocates fiercely criticised the first amendments, enacted in April 1991, because they restored corporal and capital punishment and denied recourse to the courts in cases of compulsory purchase of land by the government. The general health of the civilian population also began to decline significantly, and by 1997, 25% of the population of Zimbabwe had been infected by HIV, the virus that causes AIDS. During the 1990s students, trade unionists, and workers often demonstrated to express their discontent with the government. Students protested in 1990 against proposals for an increase in government control of universities and again in 1991 and 1992 when they clashed with police. Trade unionists and workers also criticised the government during this time. In 1992 police prevented trade unionists from holding anti-government demonstrations. In 1994 widespread industrial unrest weakened the economy. In 1996 civil servants, nurses, and junior doctors went on strike over salary issues. On 9 December 1997 a national strike paralysed the country. Mugabe was panicked by demonstrations by ZANLA ex-combatants, the war veterans who had been at the heart of the incursions 20 years earlier in the Bush War. He agreed to pay them large gratuities and pensions, which proved to be a wholly unproductive and unbudgeted financial commitment. The discontent with the government spawned draconian government crackdowns which in turn started to destroy both the fabric of the state and of society. This in turn brought with it further discontent within the population. Thus a vicious downward spiral commenced. 
Although many whites had left Zimbabwe after independence, mainly for neighbouring South Africa, those who remained continued to wield disproportionate control of some sectors of the economy, especially agriculture. In the late 1990s whites accounted for less than 1% of the population but owned 70% of arable land. Mugabe raised this issue of land ownership by white farmers. In a calculated move, he began forcible land redistribution, which brought the government into headlong conflict with the International Monetary Fund. Amid a severe drought in the region, the police and military were instructed not to stop the invasion of white-owned farms by the so-called 'war veterans' and youth militia. This led to a mass migration of white Zimbabweans out of Zimbabwe. At present almost no arable land is in the possession of white farmers. The economy was run along corporatist lines with strict governmental controls on all aspects of the economy. Controls were placed on wages and prices, and massive increases in government spending resulted in significant budget deficits. This experiment met with very mixed results; Zimbabwe fell further behind the first world, and unemployment rose. Some market reforms were attempted in the 1990s. A 40 per cent devaluation of the Zimbabwean dollar was allowed to occur and price and wage controls were removed. These policies also failed at that time. Growth, employment, wages, and social service spending contracted sharply, inflation did not improve, the deficit remained well above target, and many industrial firms, notably in textiles and footwear, closed in response to increased competition and high real interest rates. The incidence of poverty in the country increased during this time. However, Zimbabwe began experiencing a period of considerable political and economic upheaval in 1999. Opposition to President Mugabe and the ZANU-PF government grew considerably after the mid-1990s, in part due to worsening economic and human rights conditions brought about by crippling economic sanctions imposed by western countries led by Britain in response to land seizures from the white minority farmers. The Movement for Democratic Change (MDC) was established in September 1999 as an opposition party founded by trade unionist Morgan Tsvangirai. The MDC's first opportunity to test opposition to the Mugabe government came in February 2000, when a referendum was held on a draft constitution proposed by the government. Among its elements, the new constitution would have permitted President Mugabe to seek two additional terms in office, granted government officials immunity from prosecution, and authorised government seizure of white-owned land. The referendum was handily defeated. Shortly thereafter, the government, acting through a loosely organised group of war veterans (some of whom, judging from their age, were too young to have fought in the Chimurenga), sanctioned an aggressive land redistribution program often characterised by the forced expulsion of white farmers and violence against both farmers and farm employees. Parliamentary elections held in June 2000 were marred by localised violence, and claims of electoral irregularities and government intimidation of opposition supporters. Nonetheless, the MDC succeeded in capturing 57 of 120 seats in the National Assembly. Presidential elections were held in March 2002. 
In the months leading up to the poll, ZANU-PF, with the support of the army, security services, and especially the so-called 'war veterans' – very few of whom had actually fought in the Second Chimurenga against the Smith regime in the 1970s – set about wholesale intimidation and suppression of the MDC-led opposition. Despite strong international criticism, these measures, together with organised subversion of the electoral process, ensured a Mugabe victory. The government's behaviour drew strong criticism from the EU and the US, which imposed limited sanctions against the leading members of the Mugabe regime. Since the 2002 election, Zimbabwe has suffered further economic difficulty and growing political chaos. Divisions within the opposition MDC had begun to fester early in the decade, after Morgan Tsvangirai (the president of the MDC) was lured into a government sting operation that videotaped him talking of Mugabe's removal from power. He was subsequently arrested and put on trial on treason charges. This crippled his control of party affairs and raised questions about his competence. It also catalysed a major split within the party. In 2004 he was acquitted, but not until after suffering serious abuse and mistreatment in prison. The opposing faction was led by Welshman Ncube, who was the general secretary of the party. In mid-2004, vigilantes loyal to Tsvangirai began attacking members who were mostly loyal to Ncube, climaxing in a September raid on the party's Harare headquarters in which the security director was nearly thrown to his death. An internal party inquiry later established that aides to Tsvangirai had tolerated, if not endorsed, the violence. Divisive as the violence was, it was a debate over the rule of law that set off the party's final break-up in November 2005. These divisions severely weakened the opposition. In addition, the government employed its own operatives to both spy on each side and undermine each side via acts of espionage. The parliamentary elections held in March 2005, in which ZANU-PF won a two-thirds majority, were again criticised by international observers as being flawed. Mugabe's political operatives were thus able to weaken the opposition internally, and the security apparatus of the state was able to destabilise it externally by using violence in anti-Mugabe strongholds to prevent citizens from voting. Some voters were 'turned away' from polling stations despite having proper identification, further guaranteeing that the government could control the results. Additionally, Mugabe had started to appoint judges sympathetic to the government, making any judicial appeal futile. Mugabe was also able to appoint 30 of the members of parliament. As Senate elections approached, further opposition splits occurred. Ncube's supporters argued that the MDC should field a slate of candidates; Tsvangirai's argued for a boycott. When party leaders voted on the issue, Ncube's side narrowly won, but Tsvangirai declared that as president of the party he was not bound by the majority's decision. Again the opposition was weakened. As a result, the elections for a new Senate in November 2005 were largely boycotted by the opposition. Mugabe's party won 24 of the 31 constituencies where elections were held amid low voter turnout. Again, evidence surfaced of voter intimidation and fraud. In May 2005 the government began Operation Murambatsvina. 
It was officially billed to rid urban areas of illegal structures, illegal business enterprises, and criminal activities. In practice its purpose was to punish political opponents. The UN estimated that 700,000 people were left without jobs or homes as a result. Families and traders, especially at the beginning of the operation, were often given no notice before police destroyed their homes and businesses. Others were able to salvage some possessions and building materials but often had nowhere to go, despite the government's statement that people should be returning to their rural homes. Thousands of families were left unprotected in the open in the middle of Zimbabwe's winter. The government interfered with non-governmental organisation (NGO) efforts to provide emergency assistance to the displaced in many instances. Some families were removed to transit camps, where they had no shelter or cooking facilities and minimal food, supplies, and sanitary facilities. The operation continued into July 2005, when the government began a program to provide housing for the newly displaced. Human Rights Watch said the evictions had disrupted treatment for people with HIV/AIDS in a country where 3,000 died from the disease each week and about 1.3 million children had been orphaned. The operation was "the latest manifestation of a massive human rights problem that has been going on for years", said Amnesty International. As of September 2006, housing construction fell far short of demand, and there were reports that beneficiaries were mostly civil servants and ruling party loyalists, not those displaced. The government campaign of forced evictions continued in 2006, albeit on a lesser scale. In September 2005 Mugabe signed constitutional amendments that reinstituted a national senate (abolished in 1990) and that nationalised all land. This converted all ownership rights into leases. The amendments also ended the right of landowners to challenge government expropriation of land in the courts and marked the end of any hope of returning any land that had hitherto been grabbed by armed land invasions. Elections for the senate in November resulted in a victory for the government. The MDC split over whether to field candidates and partially boycotted the vote. In addition to low turnout there was widespread government intimidation. The split in the MDC hardened into factions, each of which claimed control of the party. The early months of 2006 were marked by food shortages and mass hunger. The sheer extremity of the situation was revealed by the fact that, in the courts, state witnesses said they were too weak from hunger to testify. In August 2006 runaway inflation forced the government to replace its existing currency with a revalued one. In December 2006, ZANU-PF proposed the "harmonisation" of the parliamentary and presidential election schedules in 2010; the move was seen by the opposition as an excuse to extend Mugabe's term as president until 2010. Morgan Tsvangirai was badly beaten on 12 March 2007 after being arrested and held at Machipisa Police Station in the Highfield suburb of Harare. The event garnered an international outcry and was considered particularly brutal and extreme, even by the standards of Mugabe's government. 
Kolawole Olaniyan, Director of Amnesty International's Africa Programme, said "We are very concerned by reports of continuing brutal attacks on opposition activists in Zimbabwe and call on the government to stop all acts of violence and intimidation against opposition activists". The economy shrank by 50% from 2000 to 2007. In September 2007 the inflation rate was put at almost 8,000%, the world's highest. There were frequent power and water outages. Harare's drinking water became unreliable in 2006 and as a consequence dysentery and cholera swept the city in December 2006 and January 2007. Unemployment in formal jobs was running at a record 80%. There was widespread hunger, manipulated by the government so that opposition strongholds suffered the most. Availability of bread was severely constrained after a poor wheat harvest and the closure of all bakeries. The country, which used to be one of Africa's richest, became one of its poorest. Many observers now view the country as a 'failed state'. The settlement of the Second Congo War brought home much of Zimbabwe's substantial military commitment, although some troops remained to secure the mining assets under their control. The government lacked the resources or machinery to deal with the ravages of the HIV/AIDS pandemic, which affected 25% of the population. With all this and the forced and violent removal of white farmers in a brutal land redistribution program, Mugabe earned himself widespread scorn in the international arena. The regime managed to cling to power by creating wealthy enclaves for government ministers and senior party members. For example, Borrowdale Brook, a suburb of Harare, is an oasis of wealth and privilege. It features mansions, manicured lawns, shops with fully stocked shelves containing an abundance of fruit and vegetables, big cars and a golf club, and is home to President Mugabe's out-of-town retreat. Zimbabwe's bakeries shut down in October 2007 and supermarkets warned that they would have no bread for the foreseeable future due to the collapse in wheat production after the seizure of white-owned farms. The ministry of agriculture also blamed power shortages for the wheat shortfall, saying that electricity cuts had affected irrigation and halved crop yields per acre. The power shortages arose because Zimbabwe relied on Mozambique for some of its electricity and, due to an unpaid bill of $35 million, Mozambique had reduced the amount of electrical power it supplied. On 4 December 2007, the United States imposed travel sanctions against 38 people with ties to President Mugabe because they "played a central role in the regime's escalated human rights abuses." On 8 December 2007, Mugabe attended a meeting of EU and African leaders in Lisbon, prompting UK Prime Minister Gordon Brown to decline to attend. While German chancellor Angela Merkel criticised Mugabe in her public comments, the leaders of other African countries offered him statements of support. The educational system in Zimbabwe, which was once regarded as among the best in Africa, went into crisis in 2007 because of the country's economic meltdown. One foreign reporter witnessed hundreds of children at Hatcliffe Extension Primary School in Epworth, west of Harare, writing in the dust on the floor because they had no exercise books or pencils. The high school exam system unravelled in 2007. Examiners refused to mark examination papers when they were offered just Z$79 a paper, enough to buy three small candies. 
Corruption crept into the system and may explain why, in January 2007, thousands of pupils received no marks for subjects they had entered, while others were deemed "excellent" in subjects they had not sat. However, the education system has since recovered and is again considered among the best in Southern Africa. Zimbabwe held a presidential election alongside a parliamentary election on 29 March 2008. The three major candidates were incumbent President Robert Mugabe of the Zimbabwe African National Union – Patriotic Front (ZANU-PF), Morgan Tsvangirai of the Movement for Democratic Change – Tsvangirai (MDC-T), and Simba Makoni, an independent. As no candidate received an outright majority in the first round, a second round was held on 27 June 2008 between Tsvangirai (with 47.9% of the first round vote) and Mugabe (43.2%). Tsvangirai withdrew from the second round a week before it was scheduled to take place, citing violence against his party's supporters. The second round went ahead, despite widespread criticism, and led to victory for Mugabe. Because of Zimbabwe's dire economic situation, the election was expected to provide President Mugabe with his toughest electoral challenge to date. Mugabe's opponents were critical of the handling of the electoral process, and the government was accused of planning to rig the election; Human Rights Watch said that the election was likely to be "deeply flawed". After the first round, but before the counting was completed, Jose Marcos Barrica, the head of the Southern African Development Community observer mission, described the election as "a peaceful and credible expression of the will of the people of Zimbabwe." No official results were announced for more than a month after the first round. The failure to release results was strongly criticised by the MDC, which unsuccessfully sought an order from the High Court to force their release. An independent projection placed Tsvangirai in the lead, but without the majority needed to avoid a second round. The MDC declared that Tsvangirai won a narrow majority in the first round and initially refused to participate in any second round. ZANU-PF said that Mugabe would participate in a second round; the party alleged that some electoral officials, in connection with the MDC, had fraudulently reduced Mugabe's score, and as a result a recount was conducted. After the recount and the verification of the results, the Zimbabwe Electoral Commission (ZEC) announced on 2 May that Tsvangirai won 47.9% and Mugabe won 43.2%, thereby necessitating a run-off, which was to be held on 27 June 2008. Despite Tsvangirai's continuing claims to have won a first-round majority, he agreed to participate in the second round. The period following the first round was marked by serious political violence, largely attributed to ZANU-PF. ZANU-PF blamed MDC supporters for perpetrating this violence; Western governments and prominent Western organisations blamed ZANU-PF. On 22 June 2008, Tsvangirai announced that he was withdrawing from the run-off, describing it as a "violent sham" and saying that his supporters risked being killed if they voted for him. The second round nevertheless went ahead as planned with Mugabe as the only actively participating candidate, although Tsvangirai's name remained on the ballot. Mugabe won the second round by an overwhelming margin and was sworn in for another term as President on 29 June. The international reaction to the second round varied. 
The United States and states of the European Union called for increased sanctions. On 11 July, the United Nations Security Council voted on a resolution to impose sanctions on Zimbabwe, but Russia and China vetoed it. The African Union called for a "government of national unity." Preliminary talks to set up conditions for official negotiations began between leading negotiators from both parties on 10 July, and on 22 July, the three party leaders met for the first time in Harare to express their support for a negotiated settlement of disputes arising out of the presidential and parliamentary elections. Negotiations between the parties officially began on 25 July and proceeded with very few details released from the negotiation teams in Pretoria, as the media were barred from the premises where the negotiations were taking place. The talks were mediated by South African President Thabo Mbeki. On 15 September 2008, the leaders of the 14-member Southern African Development Community witnessed the signing of the power-sharing agreement, brokered by Mbeki. With a symbolic handshake and warm smiles at the Rainbow Towers hotel in Harare, Mugabe and Tsvangirai signed the deal to end the violent political crisis. Under the agreement, Robert Mugabe would remain president, Morgan Tsvangirai would become prime minister, ZANU-PF and the MDC would share control of the police, Mugabe's ZANU-PF would command the army, and Arthur Mutambara would become deputy prime minister. In November 2008 the Air Force of Zimbabwe was sent in after some police officers began refusing orders to shoot the illegal miners at the Marange diamond fields. Up to 150 of the estimated 30,000 illegal miners were shot from helicopter gunships. In 2008 some Zimbabwean lawyers and opposition politicians from Mutare claimed that Air Force commander Perrance Shiri was the prime mover behind the military assaults on illegal diggers in the diamond mines in the east of Zimbabwe. Estimates of the death toll by mid-December ranged from 83, reported by the Mutare City Council based on a request for burial ground, to 140, estimated by the (then) opposition Movement for Democratic Change – Tsvangirai party. In January 2009, Morgan Tsvangirai announced that he would do as the leaders across Africa had insisted and join a coalition government as prime minister with his nemesis, President Robert Mugabe. On 11 February 2009 Tsvangirai was sworn in as the Prime Minister of Zimbabwe. By 2009 inflation had peaked at 500 billion per cent per year under the Mugabe government and the Zimbabwean currency was worthless. The opposition shared power with the Mugabe regime between 2009 and 2013; Zimbabwe switched to using the US dollar as its currency and the economy improved, reaching a growth rate of 10% per year. In 2013 the Mugabe government won an election which "The Economist" described as "rigged", doubled the size of the civil service and embarked on "misrule and dazzling corruption." However, the United Nations, African Union and SADC endorsed the elections as free and fair. By 2016 the economy had collapsed, nationwide protests took place throughout the country and the finance minister admitted, "Right now we literally have nothing." Bond notes were introduced to fight the biting cash crisis and liquidity crunch, but cash became scarce on the market in 2017. On Wednesday, 15 November 2017, the military placed President Mugabe under house arrest and removed him from power. The military stated that the president was safe. 
The military placed tanks around government buildings in Harare and blocked the main road to the airport. Public opinion in the capital favoured the dictator's removal, although people were uncertain whether he would simply be replaced by another dictatorship. "The Times" reported that Emmerson Mnangagwa had helped to orchestrate the coup. He had recently been sacked by Mugabe so that the path could be smoothed for Grace Mugabe to replace her husband. A Zimbabwean army officer, Major General Sibusiso Moyo, went on television to say the military was targeting "criminals" around President Mugabe but not actively removing the president from power. However, the head of the African Union described it as a coup. Ugandan writer Charles Onyango-Obbo stated on Twitter, "If it looks like a coup, walks like a coup and quacks like a coup, then it's a coup". Naunihal Singh, an assistant professor at the U.S. Naval War College and author of a book on military coups, described the situation in Zimbabwe as a coup. He tweeted that "'The President is safe' is a classic coup catch-phrase". Robert Mugabe resigned on 21 November 2017. Second Vice-President Phelekezela Mphoko became the Acting President. Emmerson Mnangagwa was sworn in as President on 24 November 2017. General elections were held on 30 July 2018 to elect the president and members of both houses of parliament. The ruling party, ZANU-PF, won the majority of seats in parliament, and incumbent President Emmerson Mnangagwa was declared the winner after receiving 50.8% of the vote. The opposition accused the government of rigging the vote. In subsequent riots by MDC supporters, the army opened fire and killed three people, while three others died of their injuries the following day. In January 2019, following a 130% increase in the price of fuel, thousands of Zimbabweans protested, and the government responded with a coordinated crackdown that resulted in hundreds of arrests and multiple deaths.
https://en.wikipedia.org/wiki?curid=14114
History of Russia The history of Russia begins with the histories of the East Slavs. The traditional start-date of specifically Russian history is the establishment of the Rus' state in the north in 862 ruled by Vikings. Staraya Ladoga and Novgorod became the first major cities of the new union of immigrants from Scandinavia with the Slavs and Finno-Ugrians. In 882 Prince Oleg of Novgorod seized Kiev, thereby uniting the northern and southern lands of the Eastern Slavs under one authority. The state adopted Christianity from the Byzantine Empire in 988, beginning the synthesis of Byzantine and Slavic cultures that defined Orthodox Slavic culture for the next millennium. Kievan Rus' ultimately disintegrated as a state due to the Mongol invasions in 1237–1240 along with the resulting deaths of about half the population of Rus'. After the 13th century, Moscow became a cultural center. The territories of the Grand Duchy of Moscow became the Tsardom of Russia in 1547. In 1721 Tsar Peter the Great renamed his state as the Russian Empire, hoping to associate it with historical and cultural achievements of ancient Rus' – in contrast to his policies oriented towards Western Europe. The state now extended from the eastern borders of the Polish-Lithuanian Commonwealth to the Pacific Ocean. Peasant revolts were common, and all were fiercely suppressed. The Emperor Alexander II abolished Russian serfdom in 1861, but the peasants fared poorly and revolutionary pressures grew. In the following decades, reform efforts such as the Stolypin reforms of 1906–1914, the constitution of 1906, and the State Duma (1906–1917) attempted to open and liberalize the economy and political system, but the Emperors refused to relinquish autocratic rule and resisted sharing their power. A combination of economic breakdown, war-weariness, and discontent with the autocratic system of government triggered revolution in Russia in 1917. The overthrow of the monarchy initially brought into office a coalition of liberals and moderate socialists, but their failed policies led to seizure of power by the communist Bolsheviks on 25 October 1917 (7 November New Style). Between 1922 and 1991 the history of Russia became essentially the history of the Soviet Union, effectively an ideologically-based state roughly conterminous with the Russian Empire before the 1918 Treaty of Brest-Litovsk. The approach to the building of socialism, however, varied over different periods in Soviet history: from the mixed economy and diverse society and culture of the 1920s through the command economy and repressions of the Joseph Stalin era to the "era of stagnation" from the 1960s to the 1980s. From its first years, government in the Soviet Union-based itself on the one-party rule of the Communists, as the Bolsheviks called themselves, beginning in March 1918. By the mid-1980s, with the weaknesses of Soviet economic and political structures becoming acute, Mikhail Gorbachev embarked on major reforms, which eventually led to the overthrow of the communist party and the breakup of the USSR, leaving Russia again on its own and marking the start of the history of post-Soviet Russia. The Russian Federation came into being in January 1992 as the legal successor to the USSR. Russia retained its nuclear arsenal but lost its superpower status. 
Scrapping the socialist central planning and state-ownership of property of the socialist era, new leaders, led by President Vladimir Putin (who first became President in 2000), took political and economic power after 2000 and engaged in an energetic foreign policy. Russia's 2014 annexation of the Crimean peninsula has led to economic sanctions imposed by the United States and the European Union. In 2006, 1.5-million-year-old Oldowan flint tools were discovered in the Dagestan Akusha region of the north Caucasus, demonstrating the presence of early humans in Russia from a very early time. The discovery of some of the earliest evidence for the presence of anatomically modern humans found anywhere in Europe was reported in 2007 from the deepest levels of the Kostenki archaeological site near the Don River in Russia, which has been dated to at least 40,000 years ago. Arctic Russia was reached by 40,000 years ago. That Russia was also home to some of the last surviving Neanderthals was revealed by the discovery of the partial skeleton of a Neanderthal infant in Mezmaiskaya cave in Adygea, which was carbon dated to only 29,000 years ago. In 2008, Russian archaeologists from the Institute of Archaeology and Ethnology of Novosibirsk, working at the site of Denisova Cave in the Altai Mountains of Siberia, uncovered a 40,000-year-old small bone fragment from the fifth finger of a juvenile hominin, which DNA analysis revealed to be a previously unknown species of human, which was named the Denisova hominin. During the prehistoric eras the vast steppes of Southern Russia were home to tribes of nomadic pastoralists. In classical antiquity, the Pontic Steppe was known as Scythia. Remnants of these long gone steppe cultures were discovered in the course of the 20th century in such places as Ipatovo, Sintashta, Arkaim, and Pazyryk. In the later part of the 8th century BCE, Greek merchants brought classical civilization to the trade emporiums in Tanais and Phanagoria. Gelonus was described by Herodotus as a huge (Europe's biggest) earth- and wood-fortified grad inhabited around 500 BC by Heloni and Budini. The Bosporan Kingdom was incorporated as part of the Roman province of Moesia Inferior from 63 to 68 AD, under Emperor Nero. At about the 2nd century AD Goths migrated to the Black Sea, and in the 3rd and 4th centuries AD, a semi-legendary Gothic kingdom of Oium existed in Southern Russia until it was overrun by Huns. Between the 3rd and 6th centuries AD, the Bosporan Kingdom, a Hellenistic polity which succeeded the Greek colonies, was also overwhelmed by successive waves of nomadic invasions, led by warlike tribes which would often move on to Europe, as was the case with the Huns and Turkish Avars. A Turkic people, the Khazars, ruled the lower Volga basin steppes between the Caspian and Black Seas through to the 8th century. Noted for their laws, tolerance, and cosmopolitanism, the Khazars were the main commercial link between the Baltic and the Muslim Abbasid empire centered in Baghdad. They were important allies of the Byzantine Empire, and waged a series of successful wars against the Arab Caliphates. In the 8th century, the Khazars embraced Judaism. Some of the ancestors of the modern Russians were the Slavic tribes, whose original home is thought by some scholars to have been the wooded areas of the Pripet Marshes. The Early East Slavs gradually settled Western Russia in two waves: one moving from Kiev towards present-day Suzdal and Murom and another from Polotsk towards Novgorod and Rostov. 
From the 7th century onwards, East Slavs constituted the bulk of the population in Western Russia and slowly but peacefully assimilated the native Finno-Ugric tribes, such as the Merya, the Muromians, and the Meshchera. Scandinavian Norsemen, known as Vikings in Western Europe and Varangians in the East, combined piracy and trade throughout Northern Europe. In the mid-9th century, they began to venture along the waterways from the eastern Baltic to the Black and Caspian Seas. According to the earliest Russian chronicle, a Varangian named Rurik was elected ruler ("knyaz") of Novgorod in about 860, before his successors moved south and extended their authority to Kiev, which had been previously dominated by the Khazars. Oleg, Rurik's son Igor and Igor's son Sviatoslav subsequently subdued all local East Slavic tribes to Kievan rule, destroyed the Khazar khaganate and launched several military expeditions to Byzantium and Persia. Thus, the first East Slavic state, Rus', emerged in the 9th century along the Dnieper River valley. A coordinated group of princely states with a common interest in maintaining trade along the river routes, Kievan Rus' controlled the trade route for furs, wax, and slaves between Scandinavia and the Byzantine Empire along the Volkhov and Dnieper Rivers. By the end of the 10th century, the minority Norse military aristocracy had merged with the native Slavic population, which also absorbed Greek Christian influences in the course of the multiple campaigns to loot Tsargrad, or Constantinople. One such campaign claimed the life of the foremost Slavic druzhina leader, Svyatoslav I, who was renowned for having crushed the power of the Khazars on the Volga. At the time, the Byzantine Empire was experiencing a major military and cultural revival; despite its later decline, its culture would have a continuous influence on the development of Russia in its formative centuries. By the 11th century, particularly during the reign of Yaroslav the Wise, Kievan Rus' displayed an economy and achievements in architecture and literature superior to those that then existed in the western part of the continent. Compared with the languages of European Christendom, the Russian language was little influenced by the Greek and Latin of early Christian writings. This was because Church Slavonic was used directly in liturgy instead. A nomadic Turkic people, the Kipchaks (also known as the Cumans), replaced the earlier Pechenegs as the dominant force in the south steppe regions neighbouring to Rus' at the end of the 11th century and founded a nomadic state in the steppes along the Black Sea (Desht-e-Kipchak). Repelling their regular attacks, especially in Kiev, which was just one day's ride from the steppe, was a heavy burden for the southern areas of Rus'. The nomadic incursions caused a massive influx of Slavs to the safer, heavily forested regions of the north, particularly to the area known as Zalesye. Kievan Rus' ultimately disintegrated as a state because of in-fighting between members of the princely family that ruled it collectively. Kiev's dominance waned, to the benefit of Vladimir-Suzdal in the north-east, Novgorod in the north, and Halych-Volhynia in the south-west. Conquest by the Mongol Golden Horde in the 13th century was the final blow. Kiev was destroyed. 
Halych-Volhynia would eventually be absorbed into the Polish–Lithuanian Commonwealth, while the Mongol-dominated Vladimir-Suzdal and independent Novgorod Republic, two regions on the periphery of Kiev, would establish the basis for the modern Russian nation. The invading Mongols accelerated the fragmentation of the Rus'. In 1223, the disunited southern princes faced a Mongol raiding party at the Kalka River and were soundly defeated. In 1237–1238 the Mongols burnt down the city of Vladimir (4 February 1238) and other major cities of northeast Russia, routed the Russians at the Sit' River, and then moved west into Poland and Hungary. By then they had conquered most of the Russian principalities. Only the Novgorod Republic escaped occupation and continued to flourish in the orbit of the Hanseatic League. The impact of the Mongol invasion on the territories of Kievan Rus' was uneven. The advanced city culture was almost completely destroyed. As older centers such as Kiev and Vladimir never recovered from the devastation of the initial attack, the new cities of Moscow, Tver and Nizhny Novgorod began to compete for hegemony in the Mongol-dominated Russia. Although a Russian army defeated the Golden Horde at Kulikovo in 1380, Mongol domination of the Russian-inhabited territories, along with demands of tribute from Russian princes, continued until about 1480. The Mongols held Russia and Volga Bulgaria in sway from their western capital at Sarai, one of the largest cities of the medieval world. The princes of southern and eastern Russia had to pay tribute to the Mongols of the Golden Horde, commonly called Tatars; but in return they received charters authorizing them to act as deputies to the khans. In general, the princes were allowed considerable freedom to rule as they wished, while the Russian Orthodox Church even experienced a spiritual revival under the guidance of Metropolitan Alexis and Sergius of Radonezh. The Mongols left their impact on the Russians in such areas as military tactics and transportation. Under Mongol occupation, Russia also developed its postal road network, census, fiscal system, and military organization. Daniil Aleksandrovich, the youngest son of Alexander Nevsky, founded the principality of Moscow (known as Muscovy in English), which first cooperated with and ultimately expelled the Tatars from Russia. Well-situated in the central river system of Russia and surrounded by protective forests and marshes, Moscow was at first only a vassal of Vladimir, but soon it absorbed its parent state. A major factor in the ascendancy of Moscow was the cooperation of its rulers with the Mongol overlords, who granted them the title of Grand Prince of Moscow and made them agents for collecting the Tatar tribute from the Russian principalities. The principality's prestige was further enhanced when it became the center of the Russian Orthodox Church. Its head, the Metropolitan, fled from Kiev to Vladimir in 1299 and a few years later established the permanent headquarters of the Church in Moscow under the original title of Kiev Metropolitan. By the middle of the 14th century, the power of the Mongols was declining, and the Grand Princes felt able to openly oppose the Mongol yoke. In 1380, at Kulikovo on the Don River, the Mongols were defeated, and although this hard-fought victory did not end Tatar rule of Russia, it did bring great fame to the Grand Prince Dmitry Donskoy. 
Moscow's leadership in Russia was now firmly based and by the middle of the 14th century its territory had greatly expanded through purchase, war, and marriage. In the 15th century, the grand princes of Moscow continued to consolidate Russian land to increase their population and wealth. The most successful practitioner of this process was Ivan III, who laid the foundations for a Russian national state. Ivan competed with his powerful northwestern rival, the Grand Duchy of Lithuania, for control over some of the semi-independent Upper Principalities in the upper Dnieper and Oka River basins. Through the defections of some princes, border skirmishes, and a long war with the Novgorod Republic, Ivan III was able to annex Novgorod and Tver. As a result, the Grand Duchy of Moscow tripled in size under his rule. During his conflict with Pskov, a monk named Filofei (Philotheus of Pskov) composed a letter to Ivan III, with the prophecy that the latter's kingdom would be the Third Rome. The Fall of Constantinople and the death of the last Greek Orthodox Christian emperor contributed to this new idea of Moscow as 'New Rome' and the seat of Orthodox Christianity, as did Ivan's 1472 marriage to Byzantine Princess Sophia Palaiologina. A contemporary of the Tudors and other "new monarchs" in Western Europe, Ivan proclaimed his absolute sovereignty over all Russian princes and nobles. Refusing further tribute to the Tatars, Ivan initiated a series of attacks that opened the way for the complete defeat of the declining Golden Horde, now divided into several Khanates and hordes. Ivan and his successors sought to protect the southern boundaries of their domain against attacks of the Crimean Tatars and other hordes. To achieve this aim, they sponsored the construction of the Great Abatis Belt and granted manors to nobles, who were obliged to serve in the military. The manor system provided a basis for an emerging cavalry based army. In this way, internal consolidation accompanied outward expansion of the state. By the 16th century, the rulers of Moscow considered the entire Russian territory their collective property. Various semi-independent princes still claimed specific territories, but Ivan III forced the lesser princes to acknowledge the grand prince of Moscow and his descendants as unquestioned rulers with control over military, judicial, and foreign affairs. Gradually, the Russian ruler emerged as a powerful, autocratic ruler, a tsar. The first Russian ruler to officially crown himself "Tsar" was Ivan IV. Ivan III tripled the territory of his state, ended the dominance of the Golden Horde over the Rus', renovated the Moscow Kremlin, and laid the foundations of the Russian state. Biographer Fennell concludes that his reign was "militarily glorious and economically sound," and especially points to his territorial annexations and his centralized control over local rulers. However, Fennell, the leading British specialist on Ivan III, argues that his reign was also "a period of cultural depression and spiritual barrenness. Freedom was stamped out within the Russian lands. By his bigoted anti-Catholicism Ivan brought down the curtain between Russia and the west. For the sake of territorial aggrandizement he deprived his country of the fruits of Western learning and civilization." The development of the Tsar's autocratic powers reached a peak during the reign of Ivan IV (1547–1584), known as "Ivan the Terrible". 
He strengthened the position of the monarch to an unprecedented degree, as he ruthlessly subordinated the nobles to his will, exiling or executing many on the slightest provocation. Nevertheless, Ivan is often seen as a farsighted statesman who reformed Russia as he promulgated a new code of laws (Sudebnik of 1550), established the first Russian feudal representative body (Zemsky Sobor), curbed the influence of the clergy, and introduced local self-management in rural regions. Although his long Livonian War for control of the Baltic coast and access to the sea trade ultimately proved a costly failure, Ivan managed to annex the Khanates of Kazan, Astrakhan, and Siberia. These conquests complicated the migration of aggressive nomadic hordes from Asia to Europe via the Volga and Urals. Through these conquests, Russia acquired a significant Muslim Tatar population and emerged as a multiethnic and multiconfessional state. Also around this period, the mercantile Stroganov family established a firm foothold in the Urals and recruited Russian Cossacks to colonise Siberia. In the later part of his reign, Ivan divided his realm in two. In the zone known as the "oprichnina", Ivan's followers carried out a series of bloody purges of the feudal aristocracy (whom he suspected of treachery after the betrayal of prince Kurbsky), culminating in the Massacre of Novgorod in 1570. This, combined with military losses, epidemics, and poor harvests, so weakened Russia that the Crimean Tatars were able to sack central Russian regions and burn down Moscow in 1571. In 1572 Ivan abandoned the "oprichnina". At the end of Ivan IV's reign the Polish–Lithuanian and Swedish armies carried out a powerful intervention in Russia, devastating its northern and northwest regions. The death of Ivan's childless son Feodor was followed by a period of civil wars and foreign intervention known as the "Time of Troubles" (1606–13). Extremely cold summers (1601–1603) wrecked crops, which led to the Russian famine of 1601–1603 and increased social disorganization. Boris Godunov's (Борис Годунов) reign ended in chaos, civil war combined with foreign intrusion, devastation of many cities and depopulation of the rural regions. The country, rocked by internal chaos, also attracted several waves of interventions by the Polish–Lithuanian Commonwealth. During the Polish–Muscovite War (1605–1618), Polish–Lithuanian forces reached Moscow and installed the impostor False Dmitriy I in 1605, then supported False Dmitry II in 1607. The decisive moment came when a combined Russian-Swedish army was routed by the Polish forces under hetman Stanisław Żółkiewski at the Battle of Klushino. As a result of the battle, the Seven Boyars, a group of Russian nobles, deposed the tsar Vasily Shuysky and recognized the Polish prince Władysław IV Vasa as the Tsar of Russia. The Poles then entered Moscow. Moscow revolted, but the riots there were brutally suppressed and the city was set on fire. The crisis provoked a patriotic national uprising against the invasion, both in 1611 and 1612. Finally, a volunteer army, led by the merchant Kuzma Minin and prince Dmitry Pozharsky, expelled the foreign forces from the capital. Russian statehood survived the "Time of Troubles" and the rule of weak or corrupt Tsars because of the strength of the government's central bureaucracy. Government functionaries continued to serve, regardless of the ruler's legitimacy or the faction controlling the throne. 
However, the "Time of Troubles" provoked by the dynastic crisis resulted in the loss of much territory to the Polish–Lithuanian Commonwealth in the Russo-Polish war, as well as to the Swedish Empire in the Ingrian War. In February 1613, with the chaos ended and the Poles expelled from Moscow, a national assembly, composed of representatives from fifty cities and even some peasants, elected Michael Romanov, the young son of Patriarch Filaret, to the throne. The Romanov dynasty ruled Russia until 1917. The immediate task of the new dynasty was to restore peace. Fortunately for Moscow, its major enemies, the Polish–Lithuanian Commonwealth and Sweden, were engaged in a bitter conflict with each other, which provided Russia the opportunity to make peace with Sweden in 1617 and to sign a truce with the Polish–Lithuanian Commonwealth in 1619. Recovery of lost territories began in the mid-17th century, when the Khmelnitsky Uprising (1648–57) in Ukraine against Polish rule brought about the Treaty of Pereyaslav, concluded between Russia and the Ukrainian Cossacks. According to the treaty, Russia granted protection to the Cossacks state in Left-bank Ukraine, formerly under Polish control. This triggered a prolonged Russo-Polish War (1654-1667), which ended with the Treaty of Andrusovo, where Poland accepted the loss of Left-bank Ukraine, Kiev and Smolensk. Rather than risk their estates in more civil war, the boyars cooperated with the first Romanovs, enabling them to finish the work of bureaucratic centralization. Thus, the state required service from both the old and the new nobility, primarily in the military. In return, the tsars allowed the boyars to complete the process of enserfing the peasants. In the preceding century, the state had gradually curtailed peasants' rights to move from one landlord to another. With the state now fully sanctioning serfdom, runaway peasants became state fugitives, and the power of the landlords over the peasants "attached" to their land had become almost complete. Together the state and the nobles placed an overwhelming burden of taxation on the peasants, whose rate was 100 times greater in the mid-17th century than it had been a century earlier. In addition, middle-class urban tradesmen and craftsmen were assessed taxes, and, like the serfs, they were forbidden to change residence. All segments of the population were subject to military levy and to special taxes. Riots amongst peasants and citizens of Moscow at this time were endemic, and included the Salt Riot (1648), Copper Riot (1662), and the Moscow Uprising (1682). By far the greatest peasant uprising in 17th-century Europe erupted in 1667. As the free settlers of South Russia, the Cossacks, reacted against the growing centralization of the state, serfs escaped from their landlords and joined the rebels. The Cossack leader Stenka Razin led his followers up the Volga River, inciting peasant uprisings and replacing local governments with Cossack rule. The tsar's army finally crushed his forces in 1670; a year later Stenka was captured and beheaded. Yet, less than half a century later, the strains of military expeditions produced another revolt in Astrakhan, ultimately subdued. Much of Russia's expansion occurred in the 17th century, culminating in the first Russian colonisation of the Pacific in the mid-17th century, the Russo-Polish War (1654–67) that incorporated left-bank Ukraine, and the Russian conquest of Siberia. 
Poland was divided in the 1790–1815 era, with much of the land and population going to Russia. Most of the 19th century growth came from adding territory in Asia, south of Siberia. Peter the Great (1672–1725) brought autocracy into Russia and played a major role in bringing his country into the European state system. Russia had now become the largest country in the world, stretching from the Baltic Sea to the Pacific Ocean. The vast majority of the land was unoccupied, and travel was slow. Much of its expansion had taken place in the 17th century, culminating in the first Russian settlement of the Pacific in the mid-17th century, the reconquest of Kiev, and the pacification of the Siberian tribes. However, a population of only 14 million was stretched across this vast landscape. With a short growing season grain yields trailed behind those in the West and potato farming was not yet widespread. As a result, the great majority of the population workforce was occupied with agriculture. Russia remained isolated from the sea trade and its internal trade, communication and manufacturing were seasonally dependent. Peter's first military efforts were directed against the Ottoman Turks. His aim was to establish a Russian foothold on the Black Sea by taking the town of Azov. His attention then turned to the north. Peter still lacked a secure northern seaport except at Archangel on the White Sea, whose harbor was frozen nine months a year. Access to the Baltic was blocked by Sweden, whose territory enclosed it on three sides. Peter's ambitions for a "window to the sea" led him in 1699 to make a secret alliance with the Polish–Lithuanian Commonwealth and Denmark against Sweden resulting in the Great Northern War. The war ended in 1721 when an exhausted Sweden sued for peace with Russia. Peter acquired four provinces situated south and east of the Gulf of Finland, thus securing his coveted access to the sea. There, in 1703, he had already founded the city that was to become Russia's new capital, Saint Petersburg, as a "window opened upon Europe" to replace Moscow, long Russia's cultural center. Russian intervention in the Commonwealth marked, with the Silent Sejm, the beginning of a 200-year domination of that region by the Russian Empire. In celebration of his conquests, Peter assumed the title of emperor, and the Russian Tsardom officially became the Russian Empire in 1721. Peter reorganized his government based on the latest Western models, molding Russia into an absolutist state. He replaced the old "boyar" Duma (council of nobles) with a nine-member senate, in effect a supreme council of state. The countryside was also divided into new provinces and districts. Peter told the senate that its mission was to collect tax revenues. In turn tax revenues tripled over the course of his reign. Administrative Collegia (ministries) were established in St. Petersburg, to replace the old governmental departments. In 1722 Peter promulgated his famous Table of ranks. As part of the government reform, the Orthodox Church was partially incorporated into the country's administrative structure, in effect making it a tool of the state. Peter abolished the patriarchate and replaced it with a collective body, the Holy Synod, led by a lay government official. Peter continued and intensified his predecessors' requirement of state service for all nobles. By this same time, the once powerful Persian Safavid Empire to the south was heavily declining. 
Taking advantage of the favourable situation, Peter launched the Russo-Persian War (1722–1723), known as "The Persian Expedition of Peter the Great" by Russian historiographers, in order to be the first Russian emperor to establish Russian influence in the Caucasus and Caspian Sea region. After considerable success and the capture of many provinces and cities in the Caucasus and northern mainland Persia, the Safavids were forced to hand over the territories to Russia. However, twelve years later, all the territories were ceded back to Persia, which was by then led by the charismatic military genius Nader Shah, as part of the Treaty of Resht and Treaty of Ganja and the Russo-Persian alliance against the Ottoman Empire, their common neighbouring rival. Peter the Great died in 1725, leaving an unsettled succession, but Russia had become a great power by the end of his reign. Peter I was succeeded by his second wife, Catherine I (1725–1727), who was merely a figurehead for a powerful group of high officials, then by his minor grandson, Peter II (1727–1730), then by his niece, Anna (1730–1740), daughter of Tsar Ivan V. The heir to Anna was soon deposed in a coup, and Elizabeth, daughter of Peter I, ruled from 1741 to 1762. During her reign, Russia took part in the Seven Years' War. Nearly forty years were to pass before a comparably ambitious ruler appeared on the Russian throne. Catherine II, "the Great" (r. 1762–1796), was a German princess who married the German heir to the Russian crown. He proved a weak ruler, and Catherine overthrew him in a coup in 1762, becoming queen regnant. Catherine enthusiastically supported the ideals of the Enlightenment, thus earning the status of an enlightened despot. She patronized the arts, science and learning. She contributed to the resurgence of the Russian nobility that began after the death of Peter the Great. Catherine promulgated the Charter to the Gentry, reaffirming the rights and freedoms of the Russian nobility and abolishing mandatory state service. She seized control of all the church lands, drastically reduced the size of the monasteries, and put the surviving clergy on a tight budget. Catherine spent heavily to promote an expansive foreign policy. She extended Russian political control over the Polish–Lithuanian Commonwealth through actions including her support of the Targowica Confederation. The cost of her campaigns, on top of the oppressive social system that required serfs to spend almost all of their time laboring on the land of their lords, provoked a major peasant uprising in 1773. Inspired by a Cossack named Pugachev, with the emphatic cry of "Hang all the landlords!", the rebels threatened to take Moscow until Catherine crushed the rebellion. Like the other enlightened despots of Europe, Catherine made certain of her own power and formed an alliance with the nobility. Catherine successfully waged war against the decaying Ottoman Empire and advanced Russia's southern boundary to the Black Sea. Then, by allying with the rulers of Austria and Prussia, she incorporated the territories of the Polish–Lithuanian Commonwealth, where after a century of Russian rule a non-Catholic, mainly Orthodox population prevailed, during the Partitions of Poland, pushing the Russian frontier westward into Central Europe. 
In accordance with the treaty that Russia had signed with the Georgians to protect them against any new invasion by their Persian suzerains and their further political aspirations, Catherine waged a new war against Persia in 1796 after Persia had again invaded Georgia and established rule over it about a year prior, and had expelled the newly established Russian garrisons in the Caucasus. Innovative tsars such as Peter the Great and Catherine the Great brought in Western experts, scientists, philosophers, and engineers. Powerful Russians resented the foreigners' privileged positions and alien ideas. The backlash was especially severe after the Napoleonic wars. It produced a powerful anti-western campaign that "led to a wholesale purge of Western specialists and their Russian followers in universities, schools, and government service." Russia was in a continuous state of financial crisis. While revenue rose from 9 million rubles in 1724 to 40 million in 1794, expenses grew more rapidly, reaching 49 million in 1794. The budget allocated 46 percent to the military, 20 percent to government economic activities, 12 percent to administration, and nine percent to the Imperial Court in St. Petersburg. The deficit required borrowing, primarily from Amsterdam; five percent of the budget was allocated to debt payments. Paper money was issued to pay for expensive wars, thus causing inflation. For its spending, Russia obtained a large and glorious army, a very large and complex bureaucracy, and a splendid court that rivaled Paris and London. However, the government was living far beyond its means, and 18th-century Russia remained "a poor, backward, overwhelmingly agricultural, and illiterate country." By the time of her death in 1796, Catherine's expansionist policy had made Russia into a major European power. Alexander I continued this policy, wresting Finland from the weakened kingdom of Sweden in 1809 and Bessarabia from the Ottomans in 1812. After Russian armies liberated allied Georgia from Persian occupation in 1802, they clashed with Persia over control and consolidation of Georgia, as well as over the Iranian territories that comprise modern-day Azerbaijan and Dagestan. They also became involved in the Caucasian War against the Caucasian Imamate. In 1813, the war with Persia concluded with a Russian victory, forcing Qajar Iran to cede swaths of its territories in the Caucasus to Russia, which drastically increased its territory in the region. To the south-west, Russia attempted to expand at the expense of the Ottoman Empire, using Georgia as its base for the Caucasus and Anatolian front. In European policy, Alexander I switched Russia back and forth four times in 1804–1812 from neutral peacemaker to anti-Napoleon to an ally of Napoleon, winding up in 1812 as Napoleon's enemy. In 1805, he joined Britain in the War of the Third Coalition against Napoleon, but after the massive defeat at the Battle of Austerlitz he switched and formed an alliance with Napoleon by the Treaty of Tilsit (1807) and joined Napoleon's Continental System. He fought a small-scale naval war against Britain, 1807–12. He and Napoleon could never agree, especially about Poland, and the alliance collapsed by 1810. Furthermore, Russia's economy had been hurt by Napoleon's Continental System, which cut off trade with Britain. As Esdaile notes, "Implicit in the idea of a Russian Poland was, of course, a war against Napoleon." 
Schroeder says Poland was the root cause of the conflict but Russia's refusal to support the Continental System was also a factor. The invasion of Russia was a catastrophe for Napoleon and his 450,000 invasion troops. One major battle was fought at Borodino; casualties were very high but it was indecisive and Napoleon was unable to engage and defeat the Russian armies. He attempted to force the Tsar to terms by capturing Moscow at the onset of winter, even though the French Army had already lost most of its men. The expectation proved futile. The Russians retreated, burning crops and food supplies in a scorched earth policy that multiplied Napoleon's logistic problems. Unprepared for winter warfare, 85%–90% of Napoleon's soldiers died from disease, cold, starvation or by ambush by peasant guerrilla fighters. As Napoleon's forces retreated, Russian troops pursued them into Central and Western Europe and finally captured Paris. Out of a total population of around 43 million people, Russia lost about 1.5 million in the year 1812; of these about 250,000 to 300,000 were soldiers and the rest peasants and serfs. After the final defeat of Napoleon in 1815, Alexander became known as the 'savior of Europe.' He presided over the redrawing of the map of Europe at the Congress of Vienna (1814–15), which made him the king of Congress Poland. He formed the Holy Alliance with Austria and Prussia, to suppress revolutionary movements in Europe that he saw as immoral threats to legitimate Christian monarchs. He helped Austria's Klemens von Metternich in suppressing all national and liberal movements. Although the Russian Empire would play a leading political role as late as 1848, its retention of serfdom precluded economic progress of any significant degree. As West European economic growth accelerated during the Industrial Revolution, sea trade and colonialism which had begun in the second half of the 18th century, Russia began to lag ever farther behind, undermining its ability to field strong armies. Russia's great power status obscured the inefficiency of its government, the isolation of its people, and its economic backwardness. Following the defeat of Napoleon, Alexander I was willing to discuss constitutional reforms, and though a few were introduced, no thoroughgoing changes were attempted. The tsar was succeeded by his younger brother, Nicholas I (1825–1855), who at the onset of his reign was confronted with an uprising. The background of this revolt lay in the Napoleonic Wars, when a number of well-educated Russian officers traveled in Europe in the course of the military campaigns, where their exposure to the liberalism of Western Europe encouraged them to seek change on their return to autocratic Russia. The result was the Decembrist Revolt (December 1825), the work of a small circle of liberal nobles and army officers who wanted to install Nicholas' brother as a constitutional monarch. But the revolt was easily crushed, leading Nicholas to turn away from liberal reforms and champion the reactionary doctrine "Orthodoxy, Autocracy, and Nationality". In 1826–1828 Russia fought another war against Persia. Russia lost almost all of its recently consolidated territories during the first year but gained them back and won the war on highly favourable terms. At the 1828 Treaty of Turkmenchay, Russia gained Armenia, Nakhchivan, Nagorno-Karabakh, Azerbaijan, and Iğdır. 
In the 1828–1829 Russo-Turkish War Russia invaded northeastern Anatolia and occupied the strategic Ottoman towns of Erzurum and Gumushane and, posing as protector and saviour of the Greek Orthodox population, received extensive support from the region's Pontic Greeks. Following a brief occupation, the Russian imperial army withdrew back into Georgia. By the 1830s, Russia had conquered all Persian territories and major Ottoman territories in the Caucasus. In 1831 Nicholas crushed the November Uprising in Poland. The Russian autocracy gave Polish artisans and gentry reason to rebel in 1863 by assailing the national core values of language, religion, and culture. The resulting January Uprising was a massive Polish revolt, which also was crushed. France, Britain and Austria tried to intervene in the crisis but were unable to do so. The Russian patriotic press used the Polish uprising to unify the Russian nation, claiming it was Russia's God-given mission to save Poland and the world. Poland was punished by losing its distinctive political and judicial rights, with Russianization imposed on its schools and courts. Tsar Nicholas I (reigned 1825–1855) lavished attention on his very large army; with a population of 60–70 million people, the army included a million men. They had outdated equipment and tactics, but the tsar, who dressed like a soldier and surrounded himself with officers, gloried in the victory over Napoleon in 1812 and took enormous pride in its smartness on parade. The cavalry horses, for example, were only trained in parade formations, and did poorly in battle. The glitter and braid masked profound weaknesses that he did not see. He put generals in charge of most of his civilian agencies regardless of their qualifications. An agnostic who won fame in cavalry charges was made supervisor of Church affairs. The Army became the vehicle of upward social mobility for noble youths from non-Russian areas, such as Poland, the Baltic, Finland and Georgia. On the other hand, many miscreants, petty criminals and undesirables were punished by local officials by enlisting them for life in the Army. The conscription system was highly unpopular with people, as was the practice of forcing peasants to house the soldiers for six months of the year. Curtiss finds that "The pedantry of Nicholas' military system, which stressed unthinking obedience and parade ground evolutions rather than combat training, produced ineffective commanders in time of war." His commanders in the Crimean War were old and incompetent, and indeed so were his muskets as the colonels sold the best equipment and the best food. Finally the Crimean War at the end of his reign demonstrated to the world what no one had previously realized: Russia was militarily weak, technologically backward, and administratively incompetent. Despite his grand ambitions toward the south and Ottoman Empire, Russia had not built its railroad network in that direction, and communications were bad. The bureaucracy was riddled with graft, corruption and inefficiency and was unprepared for war. The Navy was weak and technologically backward; the Army, although very large, was good only for parades, suffered from colonels who pocketed their men's pay, poor morale, and was even more out of touch with the latest technology as developed by Britain and France. As Fuller notes, "Russia had been beaten on the Crimean peninsula, and the military feared that it would inevitably be beaten again unless steps were taken to surmount its military weakness." 
As Western Europe modernized, after 1840 the issue for Russia became one of direction. Westernizers favored imitating Western Europe while others renounced the West and called for a return to the traditions of the past. The latter path was championed by Slavophiles, who heaped scorn on the "decadent" West. The Slavophiles were opponents of bureaucracy and preferred the collectivism of the medieval Russian "mir", or village community, to the individualism of the West. Westernizers formed an intellectual movement that deplored the backwardness of Russian culture and looked to western Europe for intellectual leadership. They were opposed by Slavophiles who denounced the West as too materialistic and instead promoted the spiritual depth of Russian traditionalism. A forerunner of the movement was Pyotr Chaadayev (1794–1856). He exposed the cultural isolation of Russia, from the perspective of Western Europe, in his "Philosophical Letters" of 1831. He cast doubt on the greatness of the Russian past, and ridiculed Orthodoxy for failing to provide a sound spiritual basis for the Russian mind. He called on Russia to emulate Western Europe, especially in rational and logical thought, its progressive spirit, its leadership in science, and indeed its leadership on the path to freedom. Vissarion Belinsky (1811–1848) and Alexander Herzen (1812–1870) were prominent Westernizers. Since the war against Napoleon, Russia had become deeply involved in the affairs of Europe as part of the "Holy Alliance." The Holy Alliance was formed to serve as the "policeman of Europe." However, being the policeman of Europe and maintaining the alliance required large armies. Prussia, Austria, Britain and France (the other members of the alliance) lacked large armies and needed Russia to supply the required numbers, which fit the philosophy of Nicholas I. When the Revolutions of 1848 swept Europe, however, Russia was quiet. The Tsar sent his army into Hungary in 1849 at the request of the Austrian Empire and broke the revolt there, while preventing its spread to Russian Poland. The Tsar cracked down on any signs of internal unrest. Russia expected that, in exchange for supplying the troops to be the policeman of Europe, it should have a free hand in dealing with the decaying Ottoman Empire—the "sick man of Europe." In 1853 Russia invaded Ottoman-controlled areas, leading to the Crimean War. Britain and France came to the rescue of the Ottomans. After a gruelling war fought largely in Crimea, with very high death rates from disease, the allies won. Historian Orlando Figes points to the long-term damage Russia suffered from the defeat. Tsar Nicholas died with his philosophy in dispute. One year earlier, Russia had become involved in the Crimean War, a conflict fought primarily in the Crimean peninsula. Since playing a major role in the defeat of Napoleon, Russia had been regarded as militarily invincible, but, once pitted against a coalition of the great powers of Europe, the reverses it suffered on land and sea exposed the weakness of Tsar Nicholas' regime. When Alexander II came to the throne in 1855, desire for reform was widespread. The most pressing problem confronting the Government was serfdom. In 1859, there were 23 million serfs (out of a total population of 67.1 million). 
In anticipation of civil unrest that could ultimately foment a revolution, Alexander II chose to preemptively abolish serfdom with the emancipation reform in 1861. Emancipation brought a supply of free labor to the cities and stimulated industry, and the middle class grew in number and influence. The freed peasants had to buy the land allotted to them from the landowners, with state assistance. The Government issued special bonds to the landowners for the land that they had lost, and collected a special tax from the peasants, called redemption payments, at a rate of 5% of the total cost of the allotted land yearly. All the land turned over to the peasants was owned collectively by the "mir", the village community, which divided the land among the peasants and supervised the various holdings. Alexander was the most successful Russian reformer since Peter the Great, and was responsible for numerous reforms besides abolishing serfdom. He reorganized the judicial system, setting up elected local judges, abolishing capital punishment, promoting local self-government through the zemstvo system, imposing universal military service, ending some of the privileges of the nobility, and promoting the universities. In foreign policy, he sold Alaska to the United States in 1867, fearing the remote colony would fall into British hands if there was another war. He modernized the military command system. He sought peace, and moved away from bellicose France when Napoleon III fell. He joined with Germany and Austria in the League of the Three Emperors that stabilized the European situation. The Russian Empire expanded in Siberia and in the Caucasus and made gains at the expense of China. Faced with an uprising in Poland in 1863, he stripped that land of its separate Constitution and incorporated it directly into Russia. To counter the rise of revolutionary and anarchist movements, he sent thousands of dissidents into exile in Siberia and was proposing additional parliamentary reforms when he was assassinated in 1881. In the late 1870s Russia and the Ottoman Empire again clashed in the Balkans. The Russo-Turkish War was popular among the Russian people, who supported the independence of their fellow Orthodox Slavs, the Serbs and the Bulgarians. However, the war increased tension with Austria-Hungary, which also had ambitions in the region. The tsar was disappointed by the results of the Congress of Berlin in 1878, but abided by the agreement. During this period Russia expanded its empire into Central Asia, which was rich in raw materials, conquering the khanates of Kokand, Bokhara, and Khiva, as well as the Trans-Caspian region. In the 1860s a movement known as Nihilism developed in Russia. The term was originally coined by Ivan Turgenev in his 1862 novel "Fathers and Sons". Nihilists favoured the destruction of human institutions and laws, based on the assumption that such institutions and laws are artificial and corrupt. At its core, Russian nihilism was characterized by the belief that the world lacks comprehensible meaning, objective truth, or value. For some time many Russian liberals had been dissatisfied by what they regarded as the empty discussions of the intelligentsia. The Nihilists questioned all old values and shocked the Russian establishment. They moved beyond being purely philosophical to becoming major political forces after becoming involved in the cause of reform. 
Their path was facilitated by the previous actions of the Decembrists, who revolted in 1825, and the financial and political hardship caused by the Crimean War, which caused many Russians to lose faith in political institutions. The Nihilists first attempted to convert the aristocracy to the cause of reform. Failing there, they turned to the peasants. Their campaign, which targeted the people instead of the aristocracy or the landed gentry, became known as the Populist movement. It was based upon the belief that the common people possessed the wisdom and peaceful ability to lead the nation. While the Narodnik movement was gaining momentum, the government quickly moved to extirpate it. In response to the growing reaction of the government, a radical branch of the Narodniks advocated and practiced terrorism. One after another, prominent officials were shot or killed by bombs. This represented the ascendancy of anarchism in Russia as a powerful revolutionary force. Finally, after several attempts, Alexander II was assassinated by anarchists in 1881, on the very day he had approved a proposal to call a representative assembly to consider new reforms in addition to the abolition of serfdom designed to ameliorate revolutionary demands. Unlike his father, the new tsar Alexander III (1881–1894) was throughout his reign a staunch reactionary who revived the maxim of "Orthodoxy, Autocracy, and National Character". A committed Slavophile, Alexander III believed that Russia could be saved from chaos only by shutting itself off from the subversive influences of Western Europe. In his reign Russia concluded the union with republican France to contain the growing power of Germany, completed the conquest of Central Asia, and exacted important territorial and commercial concessions from China. The tsar's most influential adviser was Konstantin Pobedonostsev, tutor to Alexander III and his son Nicholas, and procurator of the Holy Synod from 1880 to 1895. He taught his royal pupils to fear freedom of speech and press and to hate democracy, constitutions, and the parliamentary system. Under Pobedonostsev, revolutionaries were hunted down and a policy of Russification was carried out throughout the empire. Alexander was succeeded by his son Nicholas II (1894–1917). The Industrial Revolution, which began to exert a significant influence in Russia, was meanwhile creating forces that would finally overthrow the tsar. Politically, these opposition forces organized into three competing parties: The liberal elements among the industrial capitalists and nobility, who believed in peaceful social reform and a constitutional monarchy, founded the Constitutional Democratic party or "Kadets" in 1905. Followers of the Narodnik tradition established the Socialist-Revolutionary Party or "Esers" in 1901, advocating the distribution of land among those who actually worked it—the peasants. A third radical group founded the Russian Social Democratic Labour Party or "RSDLP" in 1898; this party was the primary exponent of Marxism in Russia. Gathering their support from the radical intellectuals and the urban working class, they advocated complete social, economic and political revolution. In 1903 the RSDLP split into two wings: the radical Bolsheviks, led by Vladimir Lenin, and the relatively moderate Mensheviks, led by Yuli Martov. 
The Mensheviks believed that Russian socialism would grow gradually and peacefully and that the tsar's regime should be succeeded by a democratic republic in which the socialists would cooperate with the liberal bourgeois parties. The Bolsheviks advocated the formation of a small elite of professional revolutionists, subject to strong party discipline, to act as the vanguard of the proletariat in order to seize power by force. The disastrous performance of the Russian armed forces in the Russo-Japanese War was a major blow to the Russian State and increased the potential for unrest. In January 1905, an incident known as "Bloody Sunday" occurred when Father Gapon led an enormous crowd to the Winter Palace in Saint Petersburg to present a petition to the tsar. When the procession reached the palace, Cossacks opened fire on the crowd, killing hundreds. The Russian masses were so aroused over the massacre that a general strike was declared demanding a democratic republic. This marked the beginning of the Russian Revolution of 1905. Soviets (councils of workers) appeared in most cities to direct revolutionary activity. In October 1905, Nicholas reluctantly issued the October Manifesto, which conceded the creation of a national Duma (legislature) to be called without delay. The right to vote was extended, and no law was to go into force without confirmation by the Duma. The moderate groups were satisfied; but the socialists rejected the concessions as insufficient and tried to organize new strikes. By the end of 1905, there was disunity among the reformers, and the tsar's position was strengthened for the time being. The Russian avant-garde was a large, influential wave of modern art that flourished in the Russian Empire and the Soviet Union, approximately from 1890 to 1930—although some have placed its beginning as early as 1850 and its end as late as 1960. The term covers many separate art movements of the era in painting, literature, music and architecture. Archduke Franz Ferdinand of Austria-Hungary was assassinated by Bosnian Serbs on 28 June 1914. Austria-Hungary followed with an ultimatum to Serbia, which was considered a Russian client-state, on 23 July. Russia had no treaty obligation to Serbia, and in a long-term perspective, Russia was militarily gaining on Germany and Austria-Hungary, and thus had an incentive to wait. Most Russian leaders wanted to avoid a war. However, in the present crisis they had the support of France, and they feared that the failure to support Serbia would lead to a loss of Russian credibility and a major political defeat for Russia's goals for a leadership role in the Balkans. Tsar Nicholas II mobilised Russian forces on 30 July 1914 to defend Serbia from Austria-Hungary. Christopher Clark states: "The Russian general mobilisation [of 30 July] was one of the most momentous decisions of the July crisis. This was the first of the general mobilisations. It came at the moment when the German government had not yet even declared the State of Impending War". Germany responded with her own mobilisation and declaration of war on 1 August 1914. At the opening of hostilities, the Russians took the offensive against both Germany and Austria-Hungary. The very large but poorly equipped Russian army fought tenaciously and desperately at times despite its lack of organization and very weak logistics. Casualties were enormous. By 1915, many soldiers were sent to the front unarmed, and told to pick up whatever weapons they could from the battlefield. 
Nevertheless, the Russian army fought on, and tied down large numbers of Germans and Austrians. When civilians showed a surge of patriotism, the tsar and his entourage failed to exploit it for military benefit. Instead, they relied on slow-moving bureaucracies. In areas where they did advance against the Austrians, they failed to rally the ethnic and religious minorities that were hostile to Austria, such as Poles. The tsar refused to cooperate with the national legislature, the Duma, and listened less to experts than to his wife, who was in thrall to her chief advisor, the so-called holy man Grigori Rasputin. More than two million refugees fled. Repeated military failures and bureaucratic ineptitude soon turned large segments of the population against the government. The German and Ottoman fleets prevented Russia from importing supplies and exporting goods through the Baltic and Black seas. By the middle of 1915 the impact of the war was demoralizing. Food and fuel were in short supply, casualties kept occurring, and inflation was mounting. Strikes increased among low-paid factory workers, and the peasants, who wanted land reforms, were restless. Meanwhile, elite distrust of the regime was deepened by reports that Rasputin was gaining influence; his assassination in late 1916 ended the scandal but did not restore the autocracy's lost prestige. The Tsarist system was completely overthrown in February 1917. Rabinowitch argues: The February 1917 revolution...grew out of prewar political and economic instability, technological backwardness, and fundamental social divisions, coupled with gross mismanagement of the war effort, continuing military defeats, domestic economic dislocation, and outrageous scandals surrounding the monarchy. In late February (3 March 1917), a strike occurred in a factory in the capital Petrograd (the new name for Saint Petersburg). On 23 February (8 March) 1917, thousands of female textile workers walked out of their factories protesting the lack of food and calling on other workers to join them. Within days, nearly all the workers in the city were idle, and street fighting broke out. The tsar ordered the Duma to disband, ordered strikers to return to work, and ordered troops to shoot at demonstrators in the streets. His orders triggered the February Revolution, especially when soldiers openly sided with the strikers. The tsar and the aristocracy fell on 2 March, as Nicholas II abdicated. To fill the vacuum of authority, the Duma declared a Provisional Government, headed by Prince Lvov, which was collectively known as the Russian Republic. Meanwhile, the socialists in Petrograd organized elections among workers and soldiers to form a soviet (council) of workers' and soldiers' deputies, as an organ of popular power that could pressure the "bourgeois" Provisional Government. In July, following a series of crises that undermined their authority with the public, the head of the Provisional Government resigned and was succeeded by Alexander Kerensky, who was more progressive than his predecessor but not radical enough for the Bolsheviks or many Russians discontented with the deepening economic crisis and the continuation of the war. While Kerensky's government marked time, the socialist-led soviet in Petrograd joined with soviets that formed throughout the country to create a national movement. 
The German government provided over 40 million gold marks to subsidize Bolshevik publications and activities subversive of the tsarist government, especially focusing on disgruntled soldiers and workers. In April 1917 Germany provided a special sealed train to carry Vladimir Lenin back to Russia from his exile in Switzerland. After many behind-the-scenes maneuvers, the soviets seized control of the government in November 1917 and drove Kerensky and his moderate provisional government into exile, in the events that would become known as the October Revolution. When the national Constituent Assembly (elected in December 1917) refused to become a rubber stamp of the Bolsheviks, it was dissolved by Lenin's troops and all vestiges of democracy were removed. With the handicap of the moderate opposition removed, Lenin was able to free his regime from the war problem by the harsh Treaty of Brest-Litovsk (1918) with Germany. Russia lost much of her western borderlands. However, when Germany was defeated the Soviet government repudiated the Treaty. The Bolshevik grip on power was by no means secure, and a lengthy struggle broke out between the new regime and its opponents, which included the Socialist Revolutionaries, right-wing "Whites", and large numbers of peasants. At the same time the Allied powers sent several expeditionary armies to support the anti-Communist forces in an attempt to force Russia to rejoin the world war. The Bolsheviks fought against both these forces and national independence movements in the former Russian Empire. By 1921, they had defeated their internal enemies and brought most of the newly independent states under their control, with the exception of Finland, the Baltic States, the Moldavian Democratic Republic (which joined Romania), and Poland (with whom they had fought the Polish–Soviet War). Finland also annexed the Pechenga region of the Russian Kola Peninsula; Soviet Russia and the allied Soviet republics conceded parts of their territory to Estonia (Petseri County and Estonian Ingria), Latvia (Pytalovo), and Turkey (Kars). Poland incorporated the contested territories of Western Belarus and Western Ukraine, formerly parts of the Russian Empire (except Galicia) east of the Curzon Line. Both sides regularly committed brutal atrocities against civilians. During the White Terror of the civil war era, for example, Petlyura's and Denikin's forces massacred 100,000 to 150,000 Jews in Ukraine and southern Russia. Hundreds of thousands of Jews were left homeless and tens of thousands became victims of serious illness. Estimates for the total number of people killed during the Red Terror carried out by the Bolsheviks vary widely. One source asserts that the total number of victims of repression and pacification campaigns could be 1.3 million, whereas others give estimates ranging from 10,000 in the initial period of repression to 50,000 to 140,000, and an estimate of 28,000 executions per year from December 1917 to February 1922. The most reliable estimations for the total number of killings put the number at about 100,000, whereas others suggest a figure of 200,000. The Russian economy was devastated by the war, with factories and bridges destroyed, cattle and raw materials pillaged, mines flooded and machines damaged. The droughts of 1920 and 1921, as well as the 1921 famine, worsened the disaster still further. Disease had reached pandemic proportions, with 3,000,000 dying of typhus alone in 1920. Millions more also died of widespread starvation. 
By 1922 there were at least 7,000,000 street children in Russia as a result of nearly ten years of devastation from the Great War and the civil war. Another one to two million people, known as the White émigrés, fled Russia, many with the White Gen. Pyotr Nikolayevich Wrangel—some through the Far East, others west into the newly independent Baltic countries. These émigrés included a large percentage of the educated and skilled population of Russia. The history of Russia between 1922 and 1991 is essentially the history of the Union of Soviet Socialist Republics, or Soviet Union. This ideologically based union, established in December 1922 by the leaders of the Russian Communist Party, was roughly coterminous with Russia before the Treaty of Brest-Litovsk. At that time, the new nation included four constituent republics: the Russian SFSR, the Ukrainian SSR, the Belarusian SSR, and the Transcaucasian SFSR. The constitution, adopted in 1924, established a federal system of government based on a succession of soviets set up in villages, factories, and cities in larger regions. This pyramid of soviets in each constituent republic culminated in the All-Union Congress of Soviets. However, while it appeared that the congress exercised sovereign power, this body was actually governed by the Communist Party, which in turn was controlled by the Politburo from Moscow, the capital of the Soviet Union, just as it had been under the tsars before Peter the Great. The period from the consolidation of the Bolshevik Revolution in 1917 until 1921 is known as the period of war communism. Land, all industry, and small businesses were nationalized, and the money economy was restricted. Strong opposition soon developed. The peasants wanted cash payments for their products and resented having to surrender their surplus grain to the government as a part of its civil war policies. Confronted with peasant opposition, Lenin began a strategic retreat from war communism known as the New Economic Policy (NEP). The peasants were freed from wholesale levies of grain and allowed to sell their surplus produce in the open market. Commerce was stimulated by permitting private retail trading. The state continued to be responsible for banking, transportation, heavy industry, and public utilities. Although the left opposition among the Communists criticized the rich peasants, or kulaks, who benefited from the NEP, the program proved highly beneficial and the economy revived. The NEP would later come under increasing opposition from within the party following Lenin's death in early 1924. Because the Russian Empire had included not only the region of Russia but also today's territories of Ukraine, Belarus, Poland, Lithuania, Estonia, Latvia, Moldavia and the Caucasian and Central Asian countries, it is possible to examine the process of firm formation in all those regions. One of the main determinants of firm creation in a given region of the Russian Empire may have been urban demand for goods and the supply of industrial and organizational skill. While the Russian economy was being transformed, the social life of the people underwent equally drastic changes. The Family Code of 1918 granted women equal status with men and permitted a couple to take either the husband's or the wife's name. Divorce no longer required a court procedure, and to make women completely free of the responsibilities of childbearing, abortion was made legal as early as 1920. As a side effect, the emancipation of women expanded the labor market. 
Girls were encouraged to secure an education and pursue a career in the factory or the office. Communal nurseries were set up for the care of small children, and efforts were made to shift the center of people's social life from the home to educational and recreational groups, the soviet clubs. The years from 1929 to 1939 comprised a tumultuous decade in Soviet history—a period of massive industrialization and internal struggles as Joseph Stalin established near total control over Soviet society, wielding virtually unrestrained power. Following Lenin's death, Stalin wrestled with rival factions in the Politburo, especially Leon Trotsky's, to gain control of the Soviet Union. By 1928, with the Trotskyists either exiled or rendered powerless, Stalin was ready to put a radical programme of industrialisation into action. In 1929 Stalin proposed the first five-year plan. Abolishing the NEP, it was the first of a number of plans aimed at swift accumulation of capital resources through the buildup of heavy industry, the collectivization of agriculture, and the restricted manufacture of consumer goods. For the first time in history a government controlled all economic activity. As a part of the plan, the government took control of agriculture through the state and collective farms ("kolkhozes"). By a decree of February 1930, about one million individual peasants ("kulaks") were forced off their land. Many peasants strongly opposed regimentation by the state, often slaughtering their herds when faced with the loss of their land. In some areas they revolted, and countless peasants deemed "kulaks" by the authorities were executed. The combination of bad weather, deficiencies of the hastily established collective farms, and massive confiscation of grain precipitated a serious famine, and several million peasants died of starvation, mostly in Ukraine, Kazakhstan and parts of southwestern Russia. The deteriorating conditions in the countryside drove millions of desperate peasants to the rapidly growing cities, fueling industrialization, and vastly increasing Russia's urban population in the space of just a few years. The plans achieved remarkable results in areas other than agriculture. Russia, by many measures the poorest nation in Europe at the time of the Bolshevik Revolution, now industrialized at a phenomenal rate, far surpassing Germany's pace of industrialization in the 19th century and Japan's earlier in the 20th century. While the Five-Year Plans were forging ahead, Stalin was establishing his personal power. The NKVD rounded up tens of thousands of Soviet citizens to face arrest, deportation, or execution. Of the six original members of the 1920 Politburo who survived Lenin, all were purged by Stalin. Old Bolsheviks who had been loyal comrades of Lenin, high officers in the Red Army, and directors of industry were liquidated in the Great Purges. Purges in other Soviet republics also helped centralize control in the USSR. Stalin's repressions led to the creation of a vast system of internal exile, of considerably greater dimensions than those set up in the past by the tsars. Draconian penalties were introduced and many citizens were prosecuted for fictitious crimes of sabotage and espionage. The labor provided by convicts working in the labor camps of the Gulag system became an important component of the industrialization effort, especially in Siberia. 
An estimated 18 million people passed through the Gulag system, and perhaps another 15 million had experience of some other form of forced labor. From the outset, the Soviet Union viewed with great alarm the 1933 accession to power in Germany of Hitler's fervently anti-Communist government, especially since Hitler proclaimed the Drang nach Osten as one of the major objectives in his vision of the German strategy of Lebensraum. The Soviets supported the Republicans in Spain who struggled against fascist German and Italian troops in the Spanish Civil War. In 1938–1939, immediately prior to WWII, the Soviet Union successfully fought against Imperial Japan in the Soviet–Japanese border conflicts in the Russian Far East, which led to Soviet-Japanese neutrality and the tense border peace that lasted until August 1945. In 1938 Germany annexed Austria and, together with major Western European powers, signed the Munich Agreement, following which Germany, Hungary and Poland divided parts of Czechoslovakia between themselves. German plans for further eastward expansion, as well as the lack of resolve from Western powers to oppose it, became more apparent. Despite the Soviet Union strongly opposing the Munich deal and repeatedly reaffirming its readiness to militarily back commitments given earlier to Czechoslovakia, the Western Betrayal led to the end of Czechoslovakia and further increased fears in the Soviet Union of a coming German attack. This led the Soviet Union to rush the modernization of its military industry and to carry out its own diplomatic maneuvers. In 1939 the Soviet Union signed the Molotov–Ribbentrop Pact: a non-aggression pact with Nazi Germany dividing Eastern Europe into two separate spheres of influence. Following the pact, the USSR normalized relations with Nazi Germany and resumed Soviet–German trade. On 17 September 1939, sixteen days after the start of World War II and with the victorious Germans having advanced deep into Polish territory, the Red Army invaded eastern Poland, stating as justification the "need to protect Ukrainians and Belarusians" there, after the "cessation of existence" of the Polish state. As a result, the Belarusian and Ukrainian Soviet republics' western borders were moved westward, and the new Soviet western border was drawn close to the original Curzon line. In the meantime negotiations with Finland over a Soviet-proposed land swap that would redraw the Soviet-Finnish border further away from Leningrad failed, and in December 1939 the USSR invaded Finland, beginning a campaign known as the Winter War (1939–40). The war took a heavy toll on the Red Army but forced Finland to sign the Moscow Peace Treaty and cede the Karelian Isthmus and Ladoga Karelia. In summer 1940 the USSR issued an ultimatum to Romania forcing it to cede the territories of Bessarabia and Northern Bukovina. At the same time, the Soviet Union also occupied the three formerly independent Baltic states (Estonia, Latvia and Lithuania). The peace with Germany was tense, as both sides were preparing for the military conflict, and abruptly ended when the Axis forces led by Germany swept across the Soviet border on 22 June 1941. By the autumn the German army had seized Ukraine, laid siege to Leningrad, and threatened to capture the capital, Moscow, itself. 
Although in December 1941 the Red Army drove the German forces back from Moscow in a successful counterattack, the Germans retained the strategic initiative for approximately another year and mounted a deep offensive toward the south-east, reaching the Volga and the Caucasus. However, two major German defeats at Stalingrad and Kursk proved decisive and reversed the course of the entire war, as the Germans never regained the strength to sustain their offensive operations and the Soviet Union recaptured the initiative for the rest of the conflict. By the end of 1943, the Red Army had broken through the German siege of Leningrad and liberated much of Ukraine, much of Western Russia and moved into Belarus. By the end of 1944, the front had moved beyond the 1939 Soviet frontiers into eastern Europe. Soviet forces drove into eastern Germany, capturing Berlin in May 1945. The war with Germany thus ended triumphantly for the Soviet Union. As agreed at the Yalta Conference, three months after Victory Day in Europe the USSR launched the Soviet invasion of Manchuria, defeating the Japanese troops there in the last Soviet battle of World War II. Although the Soviet Union was victorious in World War II, the war resulted in around 26–27 million Soviet deaths (estimates vary) and had devastated the Soviet economy. Some 1,710 towns and 70,000 settlements were destroyed. The occupied territories suffered from the ravages of German occupation and deportations of slave labor by Germany. Thirteen million Soviet citizens became victims of the repressive policies of Germany and its allies in occupied territories, where people died because of mass murders, famine, absence of elementary medical aid and slave labor. The Nazi Genocide of the Jews, carried out by German "Einsatzgruppen" along with local collaborators, resulted in the almost complete annihilation of the Jewish population over the entire territory temporarily occupied by Germany and its allies. During the occupation, the Leningrad region lost around a quarter of its population, Soviet Belarus lost from a quarter to a third of its population, and 3.6 million Soviet prisoners of war (of 5.5 million) died in German camps. Collaboration among the major Allies had won the war and was supposed to serve as the basis for postwar reconstruction and security. However, the conflict between Soviet and U.S. national interests, known as the Cold War, came to dominate the international stage in the postwar period. The Cold War emerged from a conflict between Stalin and U.S. President Harry Truman over the future of Eastern Europe during the Potsdam Conference in the summer of 1945. Russia had suffered three devastating Western onslaughts in the previous 150 years during the Napoleonic Wars, the First World War, and the Second World War, and Stalin's goal was to establish a buffer zone of states between Germany and the Soviet Union. Truman charged that Stalin had betrayed the Yalta agreement. With Eastern Europe under Red Army occupation, Stalin was also biding his time, as his own atomic bomb project was steadily and secretly progressing. In April 1949 the United States sponsored the North Atlantic Treaty Organization (NATO), a mutual defense pact in which most Western nations pledged to treat an armed attack against one nation as an assault on all. The Soviet Union established an Eastern counterpart to NATO in 1955, dubbed the Warsaw Pact. 
The division of Europe into Western and Soviet blocs later took on a more global character, especially after 1949, when the U.S. nuclear monopoly ended with the testing of a Soviet bomb and the Communist takeover in China. The foremost objectives of Soviet foreign policy were the maintenance and enhancement of national security and the maintenance of hegemony over Eastern Europe. The Soviet Union maintained its dominance over the Warsaw Pact through crushing the Hungarian Revolution of 1956, suppressing the Prague Spring in Czechoslovakia in 1968, and supporting the suppression of the Solidarity movement in Poland in the early 1980s. The Soviet Union opposed the United States in a number of proxy conflicts all over the world, including the Korean War and Vietnam War. As the Soviet Union continued to maintain tight control over its sphere of influence in Eastern Europe, the Cold War gave way to "Détente" and a more complicated pattern of international relations in the 1970s in which the world was no longer clearly split into two clearly opposed blocs. Less powerful countries had more room to assert their independence, and the two superpowers were partially able to recognize their common interest in trying to check the further spread and proliferation of nuclear weapons in treaties such as SALT I, SALT II, and the Anti-Ballistic Missile Treaty. U.S.–Soviet relations deteriorated following the beginning of the nine-year Soviet–Afghan War in 1979 and the 1980 election of Ronald Reagan, a staunch anti-communist, but improved as the communist bloc started to unravel in the late 1980s. With the collapse of the Soviet Union in 1991, Russia lost the superpower status that it had won in the Second World War. In the power struggle that erupted after Stalin's death in 1953, his closest followers lost out. Nikita Khrushchev solidified his position in a speech before the Twentieth Congress of the Communist Party in 1956 detailing Stalin's atrocities. In 1964 Khrushchev was removed from power by the Communist Party's Central Committee, which charged him with a host of errors that included Soviet setbacks such as the Cuban Missile Crisis. After a period of collective leadership with Alexei Kosygin and Nikolai Podgorny, Leonid Brezhnev, a veteran bureaucrat, took Khrushchev's place as Soviet leader. Brezhnev emphasized heavy industry, instituted the Soviet economic reform of 1965, and also attempted to ease relations with the United States. In the 1960s the USSR became a leading producer and exporter of petroleum and natural gas. Soviet science and industry peaked in the Khrushchev and Brezhnev years. The world's first nuclear power plant was established in 1954 in Obninsk, and the Baikal Amur Mainline was built. The Soviet space program, founded by Sergey Korolev, was especially successful. On 4 October 1957 the Soviet Union launched the first artificial satellite, Sputnik 1. On 12 April 1961 Yuri Gagarin became the first human to travel into space in the Soviet spaceship Vostok 1. Other achievements of the Soviet space program included the first photograph of the far side of the Moon, the exploration of Venus, the first spacewalk by Alexei Leonov, and the first spaceflight by a woman, Valentina Tereshkova. The Soviet Union also produced the world's first space station, Salyut, which in 1986 was replaced by Mir, the first consistently inhabited long-term space station, serving from 1986 to 2001. While all modernized economies were rapidly moving to computerization after 1965, the USSR fell further and further behind. 
Moscow's decision to copy the IBM 360 of 1965 proved a decisive mistake, for it locked scientists into an antiquated system they were unable to improve. They had enormous difficulties in manufacturing the necessary chips reliably and in quantity, in programming workable and efficient programs, in coordinating entirely separate operations, and in providing support to computer users. One of the greatest strengths of the Soviet economy was its vast supplies of oil and gas; world oil prices quadrupled in 1973–74 and rose again in 1979–1981, making the energy sector the chief driver of the Soviet economy, and the revenue was used to cover multiple weaknesses. At one point, Soviet Premier Alexei Kosygin told the head of oil and gas production, "things are bad with bread. Give me 3 million tons [of oil] over the plan." Former prime minister Yegor Gaidar, an economist looking back three decades later, wrote in 2007 about how these oil revenues had propped up the failing Soviet economy. Two developments dominated the decade that followed: the increasingly apparent crumbling of the Soviet Union's economic and political structures, and the patchwork attempts at reforms to reverse that process. After the rapid succession of former KGB Chief Yuri Andropov and Konstantin Chernenko, transitional figures with deep roots in Brezhnevite tradition, Mikhail Gorbachev implemented perestroika in an attempt to modernize Soviet communism, and made significant changes in the party leadership. However, Gorbachev's social reforms led to unintended consequences. His policy of "glasnost" facilitated public access to information after decades of government repression, and social problems received wider public attention, undermining the Communist Party's authority. "Glasnost" allowed ethnic and nationalist disaffection to reach the surface, and many constituent republics, especially the Baltic republics, Georgian SSR and Moldavian SSR, sought greater autonomy, which Moscow was unwilling to provide. In the revolutions of 1989 the USSR lost its allies in Eastern Europe. Gorbachev's attempts at economic reform were not sufficient, and the Soviet government left intact most of the fundamental elements of the communist economy. Suffering from low prices for petroleum and natural gas, the ongoing war in Afghanistan, and outdated industry and pervasive corruption, the Soviet planned economy proved to be ineffective, and by 1990 the Soviet government had lost control over economic conditions. Due to price control, there were shortages of almost all products, reaching their peak at the end of 1991, when people had to stand in long lines and were lucky to buy even the essentials. Control over the constituent republics was also relaxed, and they began to assert their national sovereignty over Moscow. The tension between the Soviet Union and Russian SFSR authorities came to be personified in the bitter power struggle between Gorbachev and Boris Yeltsin. Squeezed out of Union politics by Gorbachev in 1987, Yeltsin, who represented himself as a committed democrat, presented significant opposition to Gorbachev's authority. In a remarkable reversal of fortunes, he gained election as chairman of the Russian republic's new Supreme Soviet in May 1990. The following month, he secured legislation giving Russian laws priority over Soviet laws and withholding two-thirds of the budget. In the first Russian presidential election in 1991 Yeltsin became president of the Russian SFSR. At last Gorbachev attempted to restructure the Soviet Union into a less centralized state. 
However, on 19 August 1991, a coup against Gorbachev, organized by senior Soviet officials, was attempted. The coup faced wide popular opposition and collapsed in three days, but disintegration of the Union became imminent. The Russian government took over most of the Soviet Union's government institutions on its territory. Because of the dominant position of Russians in the Soviet Union, most gave little thought to any distinction between Russia and the Soviet Union before the late 1980s. In the Soviet Union, only the Russian SFSR lacked even the paltry instruments of statehood that the other republics possessed, such as its own republic-level Communist Party branch, trade union councils, Academy of Sciences, and the like. The Communist Party of the Soviet Union was banned in Russia in 1991–1992, although no lustration has ever taken place, and many of its members became top Russian officials. However, as the Soviet government was still opposed to market reforms, the economic situation continued to deteriorate. By December 1991, the shortages had resulted in the introduction of food rationing in Moscow and Saint Petersburg for the first time since World War II. Russia received humanitarian food aid from abroad. After the Belavezha Accords, the Supreme Soviet of Russia withdrew Russia from the Soviet Union on 12 December. The Soviet Union officially ended on 25 December 1991, and the Russian Federation (formerly the Russian Soviet Federative Socialist Republic) took power on 26 December. The Russian government lifted price controls in January 1992. Prices rose dramatically, but shortages disappeared. Although Yeltsin came to power on a wave of optimism, he never recovered his popularity after endorsing Yegor Gaidar's "shock therapy" of ending Soviet-era price controls, drastic cuts in state spending, and an open foreign trade regime in early 1992 (see Russian economic reform in the 1990s). The reforms immediately devastated the living standards of much of the population. In the 1990s Russia suffered an economic downturn that was, in some ways, more severe than what the United States or Germany had undergone six decades earlier in the Great Depression. Hyperinflation hit the ruble, due to monetary overhang from the days of the planned economy. Meanwhile, the profusion of small parties and their aversion to coherent alliances left the legislature chaotic. During 1993, Yeltsin's rift with the parliamentary leadership led to the September–October 1993 constitutional crisis. The crisis climaxed on 3 October, when Yeltsin chose a radical solution to settle his dispute with parliament: he called up tanks to shell the Russian White House, blasting out his opponents. As Yeltsin was taking the unconstitutional step of dissolving the legislature, Russia came close to a serious civil conflict. Yeltsin was then free to impose the current Russian constitution with strong presidential powers, which was approved by referendum in December 1993. The cohesion of the Russian Federation was also threatened when the republic of Chechnya attempted to break away, leading to the First and Second Chechen Wars. Economic reforms also consolidated a semi-criminal oligarchy with roots in the old Soviet system. Advised by Western governments, the World Bank, and the International Monetary Fund, Russia embarked on the largest and fastest privatization that the world had ever seen in order to reform the fully nationalized Soviet economy. By mid-decade, retail, trade, services, and small industry were in private hands. 
Most big enterprises were acquired by their old managers, engendering a newly rich class (the Russian tycoons) in league with criminal mafias or Western investors. Corporate raiders such as Andrei Volgin engaged in hostile takeovers of corrupt corporations by the mid-1990s. By the mid-1990s Russia had a system of multiparty electoral politics. But it was harder to establish a representative government because of two structural problems—the struggle between president and parliament and the anarchic party system. Meanwhile, the central government had lost control of the localities, bureaucracy, and economic fiefdoms, and tax revenues had collapsed. Still in a deep depression, Russia's economy was hit further by the financial crash of 1998. After the crisis, Yeltsin was at the end of his political career. Just hours before the first day of 2000, Yeltsin made a surprise announcement of his resignation, leaving the government in the hands of the little-known Prime Minister Vladimir Putin, a former KGB official and head of the FSB, the KGB's post-Soviet successor agency. In 2000, the new acting president defeated his opponents in the presidential election on 26 March, and won in a landslide four years later. In 2001, Putin discussed with Bill Clinton the possibility of Russia joining NATO, without result. International observers were alarmed by moves in late 2004 to further tighten the presidency's control over parliament, civil society, and regional officeholders. In 2008 Dmitri Medvedev, a former Gazprom chairman and Putin's chief of staff, was elected the new President of Russia. In 2012, Putin was once again elected President. Russia had difficulty attracting foreign direct investment and experienced large capital outflows. Russia's long-term problems include a shrinking workforce, rampant corruption, and underinvestment in infrastructure. Nevertheless, reversion to a socialist command economy seemed almost impossible. Russia ended 2006 with its eighth straight year of growth, averaging 6.7% annually since the financial crisis of 1998. Although high oil prices and a relatively cheap ruble initially drove this growth, since 2003 consumer demand and, more recently, investment have played a significant role. Russia is well ahead of most other resource-rich countries in its economic development, with a long tradition of education, science, and industry. In 2014, following a referendum in which separation was favored by a large majority of voters, the Russian leadership announced the accession of Crimea into the Russian Federation. The 2017–2018 Russian protests against alleged corruption in the federal Russian government took place in many Russian cities. In the first half of 2019 there were approximately 863 protests across the country.
https://en.wikipedia.org/wiki?curid=14115
History of Christianity The history of Christianity concerns the Christian religion, Christendom, and the Church with its various denominations, from the 1st century to the present. Christianity originated with the ministry of Jesus in the 1st century Roman province of Judea. According to the Gospels, Jesus was a Jewish teacher and healer who proclaimed the imminent kingdom of God and was crucified c. 30–33 AD. His followers believed that he was then raised from the dead and exalted by God, and would return soon at the inception of God's kingdom. The earliest followers of Jesus were apocalyptic Jewish Christians. The inclusion of gentiles in the developing early Christian Church caused a schism between Judaism and Jewish Christianity during the first two centuries of the Christian Era. In 313, Emperor Constantine I issued the Edict of Milan legalizing Christian worship. In 380, with the Edict of Thessalonica put forth under Theodosius I, the Roman Empire officially adopted Trinitarian Christianity as its state religion, and Christianity established itself as a predominantly Roman religion in the state church of the Roman Empire. Christological debates about the human and divine nature of Jesus consumed the Christian Church for a couple of centuries, and seven ecumenical councils were called to resolve these debates. Arianism was condemned at the First Council of Nicaea (325), which supported the Trinitarian doctrine as expounded in the Nicene Creed. In the early Middle Ages, missionary activities spread Christianity towards the west among Germanic peoples. During the High Middle Ages, eastern and western Christianity grew apart, leading to the East–West Schism of 1054. Growing criticism of the Roman Catholic ecclesiological structure and its behaviour led to the Protestant movement of the 16th century and the split of western Christianity. Since the Renaissance era, with colonialism inspired by the Church, Christianity has expanded throughout the world. Today there are more than two billion Christians worldwide, and Christianity has become the world's largest religion. Within the last century, as the influence of Christianity has waned in the West, it has rapidly grown in the East and the Global South, in China, South Korea and much of sub-Saharan Africa. The religious climate of 1st century Judea was diverse, with numerous Judaic sects. The ancient historian Josephus describes four prominent groups in the Judaism of the time: Pharisees, Sadducees, Essenes and Zealots. This led to unrest, and the 1st century BC and 1st century AD saw numerous charismatic religious leaders, including Yohanan ben Zakkai and Hanina ben Dosa, who contributed to what would become the Mishnah of rabbinic Judaism. Jewish messianism, and the Jewish messiah concept, has its roots in the apocalyptic literature of the 2nd century BC to 1st century BC, promising a future "anointed" leader (messiah or king) from the Davidic line to resurrect the Israelite Kingdom of God, in place of the foreign rulers of the time. The main sources of information regarding Jesus' life and teachings are the four canonical gospels, and to a lesser extent the Acts of the Apostles and the Pauline epistles. According to the Gospels, Jesus was a Jewish teacher and healer who was crucified c.30–33 AD. His followers believed that he was raised from the dead and exalted by God because of his faithfulness. Early Christianity is generally reckoned by church historians to begin with the ministry of Jesus (c. 27–30) and end with the First Council of Nicaea (325). 
It is typically divided into two periods: the "Apostolic Age" (c. 30–100, when the first apostles were still alive) and the "Ante-Nicene Period" (c. 100–325). The Apostolic Age is named after the Apostles and their missionary activities. It holds special significance in Christian tradition as the age of the direct apostles of Jesus. A primary source for the Apostolic Age is the Acts of the Apostles, but its historical accuracy is questionable and its coverage is partial, focusing especially, in its later chapters, on the ministry of Paul, and ending around 62 AD with Paul preaching in Rome under house arrest. The earliest followers of Jesus were apocalyptic Jewish Christians. The early Christian groups were strictly Jewish, such as the Ebionites and the early Christian community in Jerusalem, led by James, the brother of Jesus. According to the Acts of the Apostles, they described themselves as "disciples of the Lord" and [followers] "of the Way", and a settled community of disciples at Antioch was the first to be called "Christians". Some of the early Christian communities attracted gentile God-fearers, who already visited Jewish synagogues. The inclusion of gentiles posed a problem, as they could not fully observe the Halakha. Saul of Tarsus, commonly known as Paul the Apostle, persecuted the early Jewish Christians, then converted and started his mission among the gentiles. The main concern of Paul's letters is the inclusion of gentiles into God's New Covenant, deeming faith in Christ sufficient for righteousness. Because of this inclusion of gentiles, early Christianity changed in character and gradually grew apart from Judaism and Jewish Christianity during the first two centuries of the Christian Era. The Gospels and New Testament epistles contain early creeds and hymns, as well as accounts of the Passion, the empty tomb, and Resurrection appearances. Early Christianity slowly spread to pockets of believers among Aramaic-speaking peoples along the Mediterranean coast and also to the inland parts of the Roman Empire and beyond that into the Parthian Empire and the later Sasanian Empire, including Mesopotamia, which was dominated at different times and to varying extents by these empires. The ante-Nicene period (literally meaning "before Nicaea") was the period following the Apostolic Age down to the First Council of Nicaea in 325. By the beginning of the Nicene period, the Christian faith had spread throughout Western Europe and the Mediterranean Basin, and to North Africa and the East. A more formal Church structure grew out of the early communities, and variant Christian doctrines developed. Christianity grew apart from Judaism, creating its own identity by an increasingly harsh rejection of Judaism, of Jewish practices, and an acceptance of what some scholars have called religious anti-semitism; many modern-day Christian theologians, however, reject this accusation. The number of Christians grew by approximately 40% per decade during the first and second centuries. In the post-Apostolic church, bishops emerged as overseers of urban Christian populations, and a hierarchy of clergy gradually took on the form of "episkopos" (overseers, inspectors; and the origin of the term bishop) and "presbyters" (elders; and the origin of the term priest), and then "deacons" (servants). But this emerged slowly and at different times in different locations. Clement, a 1st-century bishop of Rome, refers to the leaders of the Corinthian church in his Epistle to the Corinthians as bishops and presbyters interchangeably. 
The New Testament writers also use the terms overseer and elders interchangeably and as synonyms. The Ante-Nicene period saw the rise of a great number of Christian sects, cults and movements with strong unifying characteristics lacking in the apostolic period. They had different interpretations of Scripture, particularly the divinity of Jesus and the nature of the Trinity. Many variations in this time defy neat categorizations, as various forms of Christianity interacted in a complex fashion to form the dynamic character of Christianity in this era. The Post-Apostolic period was diverse both in terms of beliefs and practices. In addition to the broad spectrum of general branches of Christianity, there was constant change and diversity that variably resulted in both internecine conflicts and syncretic adoption. The Pauline epistles were circulating in collected form by the end of the 1st century. By the early 3rd century, there existed a set of Christian writings similar to the current New Testament, though there were still disputes over the canonicity of Hebrews, James, II Peter, II and III John, and Revelation. By the 4th century, there existed unanimity in the West concerning the New Testament canon, and by the 5th century the East, with a few exceptions, had come to accept the Book of Revelation and thus had come into harmony on the matter of the canon. As Christianity spread, it acquired certain members from well-educated circles of the Hellenistic world; they sometimes became bishops. They produced two sorts of works: theological and "apologetic", the latter being works aimed at defending the faith by using reason to refute arguments against the veracity of Christianity. These authors are known as the Church Fathers, and study of them is called patristics. Notable early fathers include Ignatius of Antioch, Polycarp, Justin Martyr, Irenaeus, Tertullian, Clement of Alexandria, and Origen. Christian art emerged relatively late, and the first known Christian images emerge from about 200 AD, though there is some literary evidence that small domestic images were used earlier. The oldest known Christian paintings are from the Roman catacombs, dated to about 200, and the oldest Christian sculptures are from sarcophagi, dating to the beginning of the 3rd century. Although many Hellenistic Jews seem to have had images of religious figures, as at the Dura-Europos synagogue, the traditional Mosaic prohibition of "graven images" no doubt retained some effect, although never proclaimed by theologians. This early rejection of images, and the necessity to hide Christian practise from persecution, leaves few archaeological records regarding early Christianity and its evolution. There was no empire-wide persecution of Christians until the reign of Decius in the third century. The last and most severe persecution organised by the imperial authorities was the Diocletianic Persecution, 303–311. The Edict of Serdica was issued in 311 by the Roman Emperor Galerius, officially ending the persecution in the East. With the passage in 313 AD of the Edict of Milan, in which the Roman Emperors Constantine the Great and Licinius legalised the Christian religion, persecution of Christians by the Roman state ceased. How much Christianity Constantine adopted at this point is difficult to discern, but his accession was a turning point for the Christian Church. 
Constantine supported the Church financially, built various basilicas, granted privileges (e.g., exemption from certain taxes) to clergy, promoted Christians to some high offices, and returned confiscated property. Constantine played an active role in the leadership of the Church. In 316, he acted as a judge in a North African dispute concerning the Donatist controversy. More significantly, in 325 he summoned the Council of Nicaea, the first ecumenical council. Constantine thus established a precedent for the emperor as responsible to God for the spiritual health of his subjects, and thus with a duty to maintain orthodoxy. The emperor was to enforce doctrine, root out heresy, and uphold ecclesiastical unity. Julian, Constantine's nephew and the successor of Constantine's son Constantius II, renounced Christianity under the influence of his adviser Mardonius and embraced a Neo-platonic and mystical form of paganism, shocking the Christian establishment. He began reopening pagan temples, modifying them to resemble Christian traditions such as the episcopal structure and public charity (previously unknown in Roman paganism). Julian's short reign ended when he died while campaigning in the East. Arianism, the denial of the divinity of Christ propounded by Arius, was a popular doctrine of the 4th century. Although this doctrine was condemned as heresy and eventually eliminated by the Roman Church, it remained popular underground for some time. In the 4th century, Ulfilas, a Roman bishop and an Arian, was appointed as the first bishop to the Goths, the Germanic peoples in much of Europe at the borders of and within the Empire. Ulfilas spread Arian Christianity among the Goths, firmly establishing the faith among many of the Germanic tribes, thus helping to keep them culturally distinct. During this age, the first ecumenical councils were convened. They were mostly concerned with Christological disputes. The First Council of Nicaea (325) and the First Council of Constantinople (381) resulted in condemning Arian teachings as heresy and producing the Nicene Creed. On 27 February 380, with the Edict of Thessalonica put forth under Theodosius I, Gratian, and Valentinian II, the Roman Empire officially adopted Trinitarian Christianity as its state religion. Prior to this date, Constantius II and Valens had personally favoured Arian or Semi-Arian forms of Christianity, but Valens' successor Theodosius I supported the Trinitarian doctrine as expounded in the Nicene Creed. After its establishment, the Church adopted the same organisational boundaries as the Empire: geographical provinces, called dioceses, corresponding to imperial governmental territorial division. The bishops, who were located in major urban centres as per pre-legalisation tradition, thus oversaw each diocese. The bishop's location was his "seat", or "see". Among the sees, five came to hold special eminence: Rome, Constantinople, Jerusalem, Antioch, and Alexandria. The prestige of most of these sees depended in part on their apostolic founders, from whom the bishops were therefore the spiritual successors. Though the bishop of Rome was still held to be the first among equals, Constantinople was second in precedence as the new capital of the empire. Theodosius I decreed that others not believing in the preserved "faithful tradition", such as the Trinity, were to be considered practitioners of illegal heresy, and in 385, this resulted in the first case of capital punishment of a heretic, namely Priscillian. 
During the early 5th century, the School of Edessa had taught a Christological perspective stating that Christ's divine and human natures were distinct persons. A particular consequence of this perspective was that Mary could not be properly called the mother of God but could only be considered the mother of Christ. The most widely known proponent of this viewpoint was the Patriarch of Constantinople Nestorius. Since referring to Mary as the mother of God had become popular in many parts of the Church, this became a divisive issue. The Roman Emperor Theodosius II called for the Council of Ephesus (431), with the intention of settling the issue. The council ultimately rejected Nestorius' view. Many churches that followed the Nestorian viewpoint broke away from the Roman Church, causing a major schism. The Nestorian churches were persecuted, and many followers fled to the Sasanian Empire where they were accepted. The Sasanian (Persian) Empire had many Christian converts early in its history tied closely to the Syriac branch of Christianity. The Empire was officially Zoroastrian and maintained a strict adherence to this faith in part to distinguish itself from the religion of the Roman Empire (originally the pagan Roman religion and then Christianity). Christianity became tolerated in the Sasanian Empire, and as the Roman Empire increasingly exiled heretics during the 4th and 6th centuries, the Sasanian Christian community grew rapidly. By the end of the 5th century, the Persian Church was firmly established and had become independent of the Roman Church. This church evolved into what is today known as the Church of the East. In 451, the Council of Chalcedon was held to further clarify the Christological issues surrounding Nestorianism. The council ultimately stated that Christ's divine and human natures were separate but both part of a single entity, a viewpoint rejected by many churches that called themselves miaphysites. The resulting schism created a communion of churches, including the Armenian, Syrian, and Egyptian churches. Though efforts were made at reconciliation in the next few centuries, the schism remained permanent, resulting in what is today known as Oriental Orthodoxy. Monasticism is a form of asceticism whereby one renounces worldly pursuits and goes off alone as a hermit or joins a tightly organized community. It began early in the Church as a family of similar traditions, modelled upon Scriptural examples and ideals, and with roots in certain strands of Judaism. John the Baptist is seen as an archetypical monk, and monasticism was inspired by the organisation of the Apostolic community as recorded in Acts 2. Eremitic monks, or hermits, live in solitude, whereas cenobitics live in communities, generally in a monastery, under a rule (or code of practice) and are governed by an abbot. Originally, all Christian monks were hermits, following the example of Anthony the Great. However, the need for some form of organised spiritual guidance led Pachomius in 318 to organise his many followers in what was to become the first monastery. Soon, similar institutions were established throughout the Egyptian desert as well as the rest of the eastern half of the Roman Empire. Women were especially attracted to the movement. Central figures in the development of monasticism were Basil the Great in the East and, in the West, Benedict, who created the famous Rule of Saint Benedict, which would become the most common rule throughout the Middle Ages and the starting point for other monastic rules. 
The transition into the Middle Ages was a gradual and localised process. Rural areas rose as power centres whilst urban areas declined. Although a greater number of Christians remained in the East (Greek areas), important developments were underway in the West (Latin areas) and each took on distinctive shapes. The bishops of Rome, the popes, were forced to adapt to drastically changing circumstances. Maintaining only nominal allegiance to the emperor, they were forced to negotiate balances with the "barbarian rulers" of the former Roman provinces. In the East, the Church maintained its structure and character and evolved more slowly. The stepwise loss of Western Roman Empire dominance, replaced with foederati and Germanic kingdoms, coincided with early missionary efforts into areas not controlled by the collapsing empire. As early as the 5th century, missionary activities from Roman Britain into the Celtic areas (Scotland, Ireland and Wales) produced competing early traditions of Celtic Christianity that were later reintegrated under the Church in Rome. Prominent missionaries were Saints Patrick, Columba and Columbanus. The Anglo-Saxon tribes that invaded southern Britain some time after the Roman abandonment were initially pagan but were converted to Christianity by Augustine of Canterbury on the mission of Pope Gregory the Great. England soon became a missionary centre, and missionaries such as Wilfrid, Willibrord, Lullus and Boniface converted their Saxon relatives in Germania. The largely Christian Gallo-Roman inhabitants of Gaul (modern France) were overrun by the Franks in the early 5th century. The native inhabitants were persecuted until the Frankish King Clovis I converted from paganism to Roman Catholicism in 496. Clovis insisted that his fellow nobles follow suit, strengthening his newly established kingdom by uniting the faith of the rulers with that of the ruled. After the rise of the Frankish Kingdom and the stabilization of political conditions, the Western part of the Church increased its missionary activities, supported by the Merovingian kingdom as a means to pacify troublesome neighbouring peoples. After the foundation of a church in Utrecht by Willibrord, backlashes occurred when the pagan Frisian King Radbod destroyed many Christian centres between 716 and 719. In 717, the English missionary Boniface was sent to aid Willibrord, re-establishing churches in Frisia and continuing missions in Germany. Following a series of heavy military reverses against the Muslims, Byzantine Iconoclasm emerged in the early 8th century. In the 720s, the Byzantine Emperor Leo III the Isaurian banned the pictorial representation of Christ, saints, and biblical scenes. In the West, Pope Gregory III held two synods at Rome and condemned Leo's actions. The Byzantine Iconoclast Council, held at Hieria in 754, ruled that holy portraits were heretical. The movement destroyed much of the Christian church's early artistic history. The iconoclastic movement was later defined as heretical in 787 under the Second Council of Nicaea (the seventh ecumenical council) but had a brief resurgence between 815 and 842. The Carolingian Renaissance was a period of intellectual and cultural revival of literature, arts, and scriptural studies during the late 8th and 9th centuries, mostly during the reigns of Charlemagne and Louis the Pious, Frankish rulers. To address the problems of illiteracy among clergy and court scribes, Charlemagne founded schools and attracted the most learned men from all of Europe to his court. 
Tensions in Christian unity started to become evident in the 4th century. Two basic problems were involved: the nature of the primacy of the bishop of Rome and the theological implications of adding a clause to the Nicene Creed, known as the "filioque" clause. These doctrinal issues were first openly discussed in Photius's patriarchate. The Eastern churches viewed Rome's understanding of the nature of episcopal power as being in direct opposition to the Church's essentially conciliar structure and thus saw the two ecclesiologies as mutually antithetical. The other major irritant to Eastern Christendom was the Western use of the "Filioque" clause—meaning "and the Son"—in the Nicene Creed. This too developed gradually and entered the Creed over time. The issue was the addition by the West of the Latin clause "Filioque" to the Creed, as in "the Holy Spirit... who proceeds from the Father "and the Son"", where the original Creed, sanctioned by the councils and still used today by the Eastern Orthodox, simply states "the Holy Spirit, the Lord and Giver of Life, who proceeds from the Father." The Eastern Church argued that the phrase had been added unilaterally and therefore illegitimately, since the East had never been consulted. In addition to this ecclesiological issue, the Eastern Church also considered the "Filioque" clause unacceptable on dogmatic grounds. In the 9th century, a controversy arose between Eastern (Byzantine, Greek Orthodox) and Western (Latin, Roman Catholic) Christianity that was precipitated by the opposition of the Roman Pope Nicholas I to the appointment by the Byzantine Emperor Michael III of Photios I to the position of patriarch of Constantinople. Photios was refused an apology by the pope for previous points of dispute between the East and West. Photios refused to accept the supremacy of the pope in Eastern matters or accept the "Filioque" clause. The Latin delegation at the council of his consecration pressed him to accept the clause in order to secure their support. The controversy also involved Eastern and Western ecclesiastical jurisdictional rights in the Bulgarian church. Photios did make a concession on the issue of jurisdictional rights concerning Bulgaria, and the papal legates made do with his return of Bulgaria to Rome. This concession, however, was purely nominal, as Bulgaria's return to the Byzantine rite in 870 had already secured for it an autocephalous church. Without the consent of Boris I of Bulgaria, the papacy was unable to enforce any of its claims. The East–West Schism, or Great Schism, separated the Church into Western (Latin) and Eastern (Greek) branches, i.e., Western Catholicism and Eastern Orthodoxy. It was the first major division since certain groups in the East rejected the decrees of the Council of Chalcedon (see Oriental Orthodoxy) and was far more significant. Though normally dated to 1054, the East–West Schism was actually the result of an extended period of estrangement between Latin and Greek Christendom over the nature of papal primacy and certain doctrinal matters like the "filioque", but intensified by cultural and linguistic differences. From the 6th century onward, most of the monasteries in the West were of the Benedictine Order. Owing to the stricter adherence to a reformed Benedictine rule, the abbey of Cluny became the acknowledged leader of western monasticism from the later 10th century. 
Cluny created a large, federated order in which the administrators of subsidiary houses served as deputies of the abbot of Cluny and answered to him. The Cluniac spirit was a revitalising influence on the Norman church, at its height from the second half of the 10th century through the early 12th century. The next wave of monastic reform came with the Cistercian Movement. The first Cistercian abbey was founded in 1098, at Cîteaux Abbey. The keynote of Cistercian life was a return to a literal observance of the Benedictine rule, rejecting the developments of the Benedictines. The most striking feature in the reform was the return to manual labour, and especially to field-work. Inspired by Bernard of Clairvaux, the primary builder of the Cistercians, they became the main force of technological diffusion in medieval Europe. By the end of the 12th century, the Cistercian houses numbered 500, and at its height in the 15th century the order claimed to have close to 750 houses. Most of these were built in wilderness areas, and played a major part in bringing such isolated parts of Europe into economic cultivation. A third level of monastic reform was provided by the establishment of the Mendicant orders. Commonly known as friars, mendicants live under a monastic rule with traditional vows of poverty, chastity, and obedience, but they emphasise preaching, missionary activity, and education rather than the secluded life of the monastery. In the early 13th century, the Franciscan order was instituted by the followers of Francis of Assisi, and thereafter the Dominican order was begun by St. Dominic. The Investiture Controversy, or Lay investiture controversy, was the most significant conflict between secular and religious powers in medieval Europe. It began as a dispute in the 11th century between the Holy Roman Emperor Henry IV and Pope Gregory VII concerning who would appoint bishops (investiture). The end of lay investiture threatened to undercut the power of the Empire and the ambitions of noblemen. Bishoprics being merely lifetime appointments, a king could better control their powers and revenues than those of hereditary noblemen. Even better, he could leave the post vacant and collect the revenues, theoretically in trust for the new bishop, or give a bishopric to pay a helpful noble. The Church wanted to end lay investiture to end this and other abuses, to reform the episcopate and provide better pastoral care. Pope Gregory VII issued the "Dictatus Papae", which declared that the pope alone could appoint bishops. Henry IV's rejection of the decree led to his excommunication and a ducal revolt. Eventually Henry received absolution after dramatic public penance, though the Great Saxon Revolt and conflict of investiture continued. A similar controversy occurred in England between King Henry I and St. Anselm, Archbishop of Canterbury, over investiture and episcopal vacancy. The English dispute was resolved by the Concordat of London, 1107, where the king renounced his claim to invest bishops but continued to require an oath of fealty. This was a partial model for the Concordat of Worms ("Pactum Calixtinum"), which resolved the Imperial investiture controversy with a compromise that allowed secular authorities some measure of control but granted the selection of bishops to their cathedral canons. As a symbol of the compromise, both ecclesiastical and lay authorities invested bishops with, respectively, the staff and the ring. 
Generally, the Crusades refer to the campaigns in the Holy Land sponsored by the papacy against Muslim forces. There were other crusades against Islamic forces in southern Spain, southern Italy, and Sicily, as well as the campaigns of Teutonic Knights against pagan strongholds in north-eastern Europe. A few crusades were waged within Christendom against groups that were considered heretical and schismatic. The Holy Land had been part of the Roman Empire, and thus Byzantine Empire, until the Islamic conquests of the 7th and 8th centuries. Thereafter, Christians had generally been permitted to visit the sacred places in the Holy Land until 1071, when the Seljuk Turks closed Christian pilgrimages and assailed the Byzantines, defeating them at the Battle of Manzikert. Emperor Alexius I asked for aid from Pope Urban II against Islamic aggression. He probably expected money from the pope for the hiring of mercenaries. Instead, Urban II called upon the knights of Christendom in a speech made at the Council of Clermont on 27 November 1095, combining the idea of pilgrimage to the Holy Land with that of waging a holy war against infidels. The First Crusade captured Antioch in 1098 and then Jerusalem in 1099. The Second Crusade was called in 1145 after Edessa was taken by Islamic forces. Jerusalem was held until 1187 and the Third Crusade, famous for the battles between Richard the Lionheart and Saladin. The Fourth Crusade, begun by Innocent III in 1202, intended to retake the Holy Land but was soon subverted by Venetians who used the forces to sack the Christian city of Zara. When the crusaders arrived in Constantinople, they sacked the city and other parts of Asia Minor and established the Latin Empire of Constantinople in Greece and Asia Minor. This was effectively the last crusade sponsored by the papacy, with later crusades being sponsored by individuals. Jerusalem was held by the crusaders for nearly a century, while other strongholds in the Near East remained in Christian possession much longer. The crusades in the Holy Land ultimately failed to establish permanent Christian kingdoms. Islamic expansion into Europe remained a threat for centuries, culminating in the campaigns of Suleiman the Magnificent in the 16th century. Crusades in southern Spain, southern Italy, and Sicily eventually led to the demise of Islamic power in Europe. Teutonic Knights expanded Christian domains in Eastern Europe, and the much less frequent crusades within Christendom, such as the Albigensian Crusade, achieved their goal of maintaining doctrinal unity. The Medieval Inquisition was a series of inquisitions (Roman Catholic Church bodies charged with suppressing heresy) from around 1184, including the Episcopal Inquisition (1184–1230s) and later the Papal Inquisition (1230s). It was in response to movements within Europe considered apostate or heretical to Western Catholicism, in particular the Cathars and the Waldensians in southern France and northern Italy. These were the first inquisition movements of many that would follow. The inquisitions in combination with the Albigensian Crusade were fairly successful in ending heresy. Historian Thomas F. Madden has written about popular myths regarding the inquisition. Early evangelisation in Scandinavia was begun by Ansgar, Archbishop of Bremen, "Apostle of the North". Ansgar, a native of Amiens, was sent with a group of monks to Jutland in around 820 at the time of the pro-Christian King Harald Klak. 
The mission was only partially successful, and Ansgar returned two years later to Germany, after Harald had been driven out of his kingdom. In 829, Ansgar went to Birka on Lake Mälaren, Sweden, with his aide friar Witmar, and a small congregation was formed in 831 which included the king's steward Hergeir. Conversion was slow, however, and most Scandinavian lands were only completely Christianised at the time of rulers such as Saint Canute IV of Denmark and Olaf I of Norway in the years following AD 1000. The Christianisation of the Slavs was initiated by one of Byzantium's most learned churchmen—the patriarch Photios I of Constantinople. The Byzantine Emperor Michael III chose Cyril and Methodius in response to a request from King Rastislav of Moravia, who wanted missionaries that could minister to the Moravians in their own language. The two brothers spoke the local Slavonic vernacular and translated the Bible and many of the prayer books. As the translations prepared by them were copied by speakers of other dialects, the hybrid literary language Old Church Slavonic was created, which later evolved into Church Slavonic and is the common liturgical language still used by the Russian Orthodox Church and other Slavic Orthodox Christians. Methodius went on to convert the Serbs. Bulgaria had been a pagan country from its establishment in 681 until 864, when Boris I converted to Christianity. The reasons for that decision were complex; the most important factors were that Bulgaria was situated between two powerful Christian empires, Byzantium and East Francia; Christian doctrine particularly favoured the position of the monarch as God's representative on Earth, while Boris also saw it as a way to overcome the differences between Bulgars and Slavs. Bulgaria was officially recognised as a patriarchate by Constantinople in 927, Serbia in 1346, and Russia in 1589. All of these nations, however, had been converted long before these dates. The Avignon Papacy, sometimes referred to as the Babylonian Captivity, was a period from 1309 to 1378 during which seven popes resided in Avignon, in modern-day France. In 1309, Pope Clement V moved to Avignon in southern France. Confusion and political animosity waxed, as the prestige and influence of Rome waned without a resident pontiff. Troubles reached their peak in 1378 when Gregory XI died while visiting Rome. A papal conclave met in Rome and elected Urban VI, an Italian. Urban soon alienated the French cardinals, and they held a second conclave electing Robert of Geneva to succeed Gregory XI, beginning the Western Schism. John Wycliffe, an English scholar and alleged heretic best known for denouncing the corruptions of the Church, was a precursor of the Protestant Reformation. He emphasised the supremacy of the Bible and called for a direct relationship between man and God, without interference by priests and bishops. His followers played a role in the English Reformation. Jan Hus, a Czech theologian in Prague, was influenced by Wycliffe and spoke out against the corruptions he saw in the Church. He was a forerunner of the Protestant Reformation, and his legacy has become a powerful symbol of Czech culture in Bohemia. The Renaissance was a period of great cultural change and achievement, marked in Italy by a classical orientation and an increase of wealth through mercantile trade. The city of Rome, the papacy, and the papal states were all affected by the Renaissance.
On the one hand, it was a time of great artistic patronage and architectural magnificence, where the Church commissioned such artists as Michelangelo, Brunelleschi, Bramante, Raphael, Fra Angelico, Donatello, and da Vinci. On the other hand, wealthy Italian families often secured episcopal offices, including the papacy, for their own members, some of whom were known for immorality, such as Alexander VI and Sixtus IV. In addition to being the head of the Church, the pope became one of Italy's most important secular rulers, and pontiffs such as Julius II often waged campaigns to protect and expand their temporal domains. Furthermore, the popes, in a spirit of refined competition with other Italian lords, spent lavishly on both private luxuries and public works, repairing or building churches, bridges, and a magnificent system of aqueducts in Rome that still functions today. In 1453, Constantinople fell to the Ottoman Empire. The flight of Eastern Christians from Constantinople, and the Greek manuscripts they carried with them, was one of the factors that prompted the literary renaissance in the West at about this time. The Ottoman government followed Islamic law when dealing with the conquered Christian population. Christians were officially tolerated as people of the Book. As such, the Church's canonical and hierarchical organisation was not significantly disrupted, and its administration continued to function. One of the first things that Mehmet the Conqueror did was to allow the Church to elect a new patriarch, Gennadius Scholarius. However, these rights and privileges, including freedom of worship and religious organisation, were often established in principle but seldom corresponded to reality. Christians were viewed as second-class citizens, and the legal protections they depended upon were subject to the whims of the sultan and the Sublime Porte. The Hagia Sophia and the Parthenon, which had been Christian churches for nearly a millennium, were converted into mosques. Violent persecutions of Christians were common and reached their climax in the Armenian, Assyrian, and Greek genocides. In the early 16th century, attempts were made by the theologians Martin Luther and Huldrych Zwingli, along with many others, to reform the Church. They considered the root of corruptions to be doctrinal, rather than simply a matter of moral weakness or lack of ecclesiastical discipline, and thus advocated for God's autonomy in redemption, and against voluntaristic notions that salvation could be earned by people. The Reformation is usually considered to have started with the publication of the "Ninety-five Theses" by Luther in 1517, although there was no schism until the 1521 Diet of Worms. The edicts of the Diet condemned Luther and officially banned citizens of the Holy Roman Empire from defending or propagating his ideas. The word "Protestant" is derived from the Latin "protestatio", meaning "declaration", which refers to the letter of protestation by Lutheran princes against the decision of the Diet of Speyer in 1529, which reaffirmed the edict of the Diet of Worms ordering the seizure of all property owned by persons guilty of advocating Lutheranism. The term "Protestant" was not originally used by Reformation era leaders; instead, they called themselves "evangelical", emphasising the "return to the true gospel (Greek: "euangelion")."
Early protest was against corruptions such as simony, the holding of multiple church offices by one person at the same time, episcopal vacancies, and the sale of indulgences. The Protestant position also included sola scriptura, sola fide, the priesthood of all believers, Law and Gospel, and the two kingdoms doctrine. The three most important traditions to emerge directly from the Protestant Reformation were the Lutheran, Reformed, and Anglican traditions, though the latter group identifies as both "Reformed" and "Catholic", and some subgroups reject the classification as "Protestant." Unlike other reform movements, the English Reformation began under royal influence. Henry VIII considered himself a thoroughly Catholic king, and in 1521 he defended the papacy against Luther in a book he commissioned entitled "The Defence of the Seven Sacraments", for which Pope Leo X awarded him the title "Fidei Defensor" (Defender of the Faith). However, the king came into conflict with the papacy when he wished to annul his marriage with Catherine of Aragon, for which he needed papal sanction. Catherine, among many other noble relations, was the aunt of Emperor Charles V, the papacy's most significant secular supporter. The ensuing dispute eventually led to a break from Rome and the declaration of the King of England as head of the English Church, which saw itself as a Protestant Church navigating a middle way between Lutheranism and Reformed Christianity, but leaning more towards the latter. Consequently, England experienced periods of reform and also Counter-Reformation. Monarchs such as Edward VI, Lady Jane Grey, Mary I, and Elizabeth I, and Archbishops of Canterbury such as Thomas Cranmer and William Laud, pushed the Church of England in different directions over the course of only a few generations. What emerged was the Elizabethan Religious Settlement and a state church that considered itself both "Reformed" and "Catholic" but not "Roman", as well as other unofficial, more radical movements such as the Puritans. In terms of politics, the English Reformation included heresy trials, the exiling of Roman Catholic populations to Spain and other Roman Catholic lands, and censorship and prohibition of books. The Counter-Reformation was the response of the Catholic Church to the Protestant Reformation. In terms of meetings and documents, it consisted of the "Confutatio Augustana", the Council of Trent, the "Roman Catechism", and the "Defensio Tridentinæ fidei". In terms of politics, the Counter-Reformation included heresy trials, the exiling of Protestant populations from Catholic lands, the seizure of children from their Protestant parents for institutionalised Catholic upbringing, a series of wars, the "Index Librorum Prohibitorum" (the list of prohibited books), and the Spanish Inquisition. Although Protestants were excommunicated in an attempt to reduce their influence within the Catholic Church, at the same time they were persecuted during the Counter-Reformation, prompting some to live as crypto-Protestants (also termed Nicodemites), against the advice of John Calvin, who urged them to live their faith openly. Crypto-Protestants were documented as late as the 19th century in Latin America.
The Council of Trent (1545–1563), initiated by Pope Paul III, addressed certain ecclesiastical corruptions such as simony, absenteeism, nepotism, the holding of multiple church offices by one person, and other abuses. It also reasserted traditional practices and dogmatically articulated the traditional doctrines of the Church, such as the episcopal structure, clerical celibacy, the seven Sacraments, transubstantiation (the belief that during mass the consecrated bread and wine truly become the body and blood of Christ), the veneration of relics, icons, and saints (especially the Blessed Virgin Mary), the necessity of both faith and good works for salvation, the existence of purgatory, and the issuance (but not the sale) of indulgences. In other words, all Protestant doctrinal objections and changes were uncompromisingly rejected. The Council also fostered an interest in education for parish priests to increase pastoral care. Milan's Archbishop Saint Charles Borromeo set an example by visiting the remotest parishes and instilling high standards. Simultaneous to the Counter-Reformation, the Catholic Reformation consisted of improvements in art and culture, anti-corruption measures, the founding of the Jesuits, the establishment of seminaries, a reassertion of traditional doctrines and the emergence of new religious orders aimed at both moral reform and new missionary activity. Also part of this was the development of new yet orthodox forms of spirituality, such as that of the Spanish mystics and the French school of spirituality. The papacy of St. Pius V was known not only for its focus on halting heresy and worldly abuses within the Church, but also for its focus on improving popular piety in a determined effort to stem the appeal of Protestantism. Pius began his pontificate by giving large alms to the poor, to charities, and to hospitals, and the pontiff was known for consoling the poor and sick as well as supporting missionaries. The activities of these pontiffs coincided with a rediscovery of the ancient Christian catacombs in Rome. As Diarmaid MacCulloch states, "Just as these ancient martyrs were revealed once more, Catholics were beginning to be martyred afresh, both in mission fields overseas and in the struggle to win back Protestant northern Europe: the catacombs proved to be an inspiration for many to action and to heroism." Catholic missions were carried to new places beginning with the new Age of Discovery, and the Roman Catholic Church established missions in the Americas. The Galileo affair, in which Galileo Galilei came into conflict with the Roman Catholic Church over his support of Copernican astronomy, is often considered a defining moment in the history of the relationship between religion and science. In 1610, Galileo published his "Sidereus Nuncius (Starry Messenger)", describing the surprising observations that he had made with the new telescope. These and other discoveries exposed major difficulties with the understanding of the Heavens that had been held since antiquity, and raised new interest in radical teachings such as the heliocentric theory of Copernicus. In reaction, many scholars maintained that the motion of the Earth and immobility of the Sun were heretical, as they contradicted some accounts given in the Bible as understood at that time. Galileo's part in the controversies over theology, astronomy and philosophy culminated in his trial and sentencing in 1633, on grave suspicion of heresy.
The most famous colonisation by Protestants in the New World was that of English Puritans in North America. Unlike the Spanish or French, the English colonists made surprisingly little effort to evangelise the native peoples. The Puritans, or Pilgrims, left England so that they could live in an area with Puritanism established as the exclusive civic religion. Though they had left England because of the suppression of their religious practice, most Puritans had first settled in the Low Countries, but they found the licentiousness there, where the state hesitated to enforce religious practice, unacceptable, and thus they set out for the New World and the hope of a Puritan utopia. Revivalism refers to the Calvinist and Wesleyan revival, called the Great Awakening in North America, which saw the development of evangelical Congregationalist, Presbyterian, Baptist, and new Methodist churches. The First Great Awakening was a wave of religious enthusiasm among Protestants in the American colonies c. 1730–1740, emphasising the traditional Reformed virtues of Godly preaching, rudimentary liturgy, and a deep sense of personal guilt and redemption by Christ Jesus. Historian Sydney E. Ahlstrom saw it as part of a "great international Protestant upheaval" that also created pietism in Germany, the Evangelical Revival, and Methodism in England. It centred on reviving the spirituality of established congregations and mostly affected Congregational, Presbyterian, Dutch Reformed, German Reformed, Baptist, and Methodist churches, while also spreading within the slave population. The Second Great Awakening (1800–1830s), unlike the first, focused on the unchurched and sought to instill in them a deep sense of personal salvation as experienced in revival meetings. It also sparked the beginnings of groups such as the Mormons, the Restoration Movement and the Holiness movement. The Third Great Awakening began in 1857 and was most notable for taking the movement throughout the world, especially in English-speaking countries. The final group to emerge from the "great awakenings" in North America was Pentecostalism, which had its roots in the Methodist, Wesleyan, and Holiness movements, and began in 1906 on Azusa Street in Los Angeles. Pentecostalism would later lead to the Charismatic movement. Restorationism refers to the belief that a purer form of Christianity should be restored using the early church as a model. In many cases, restorationist groups believed that contemporary Christianity, in all its forms, had deviated from the true, original Christianity, which they then attempted to "reconstruct", often using the Book of Acts as a "guidebook" of sorts. Restorationists do not usually describe themselves as "reforming" a Christian church continuously existing from the time of Jesus, but as "restoring" the Church that they believe was lost at some point. "Restorationism" is often used to describe the Stone-Campbell Restoration Movement. The term "restorationist" is also used to describe the Jehovah's Witness movement, founded in the late 1870s by Charles Taze Russell. The term can also be used to describe the Latter Day Saint movement, including The Church of Jesus Christ of Latter-day Saints (LDS Church), the Community of Christ and numerous other Latter Day Saint sects. Latter Day Saints, also known as Mormons, believe that Joseph Smith was chosen to restore the original organisation established by Jesus, now "in its fullness", rather than to reform the church.
The Russian Orthodox Church held a privileged position in the Russian Empire, expressed in the motto of the late empire from 1833: Orthodoxy, Autocracy, and Nationality. Nevertheless, the Church reform of Peter I in the early 18th century had placed the Orthodox authorities under the control of the tsar. An ober-procurator appointed by the tsar ran the committee which governed the Church between 1721 and 1918: the Most Holy Synod. The Church became involved in the various campaigns of russification, and was accused of involvement in Russian anti-semitism, despite the lack of an official position on Judaism as such. The Bolsheviks and other Russian revolutionaries saw the Church, like the tsarist state, as an enemy of the people. Criticism of atheism was strictly forbidden and sometimes led to imprisonment. Some actions against Orthodox priests and believers included torture, being sent to prison camps, labour camps or mental hospitals, as well as execution. In the first five years after the Bolshevik revolution, 28 bishops and 1,200 priests were executed. Those killed included the Grand Duchess Elizabeth Fyodorovna, who was at this point a monastic. Murdered along with her were Grand Duke Sergei Mikhailovich Romanov; the Princes Ioann Konstantinovich, Konstantin Konstantinovich, Igor Konstantinovich and Vladimir Pavlovich Paley; Grand Duke Sergei's secretary, Fyodor Remez; and Varvara Yakovleva, a sister from the Grand Duchess Elizabeth's convent. Liberal Christianity, sometimes called liberal theology, is an umbrella term covering diverse, philosophically informed religious movements and moods within late 18th, 19th and 20th-century Christianity. The word "liberal" in liberal Christianity does not refer to a leftist "political" agenda or set of beliefs, but rather to the freedom of dialectic process associated with continental philosophy and other philosophical and religious paradigms developed during the Age of Enlightenment. Fundamentalist Christianity is a movement that arose mainly within British and American Protestantism in the late 19th century and early 20th century in reaction to modernism and certain liberal Protestant groups that denied doctrines considered fundamental to Christianity yet still called themselves "Christian." Thus, fundamentalism sought to re-establish tenets that could not be denied without relinquishing a Christian identity, the "fundamentals": inerrancy of the Bible, Sola Scriptura, the Virgin Birth of Jesus, the doctrine of substitutionary atonement, the bodily Resurrection of Jesus, and the imminent return of Jesus Christ. Under the state atheism of countries in the Eastern Bloc, Christians of many denominations experienced persecution, with many churches and monasteries being destroyed, as well as clergy being executed. The position of Christians affected by Nazism is highly complex. Pope Pius XI declared, in the encyclical "Mit brennender Sorge", that Fascist governments had hidden "pagan intentions" and expressed the irreconcilability of the Catholic position and totalitarian fascist state worship, which placed the nation above God, fundamental human rights, and dignity. His declaration that "Spiritually, [Christians] are all Semites" prompted the Nazis to give him the title "Chief Rabbi of the Christian World." Catholic priests were executed in concentration camps alongside Jews; for example, 2,600 Catholic priests were imprisoned in Dachau, and 2,000 of them were executed (cf. "Priesterblock").
A further 2,700 Polish priests were executed (a quarter of all Polish priests), and 5,350 Polish nuns were either displaced, imprisoned, or executed. Many Catholic laymen and clergy played notable roles in sheltering Jews during the Holocaust, including Pope Pius XII. The chief rabbi of Rome became a Catholic in 1945, and in honour of the actions the pope undertook to save Jewish lives, he took the name Eugenio (the pope's first name). A former Israeli consul in Italy claimed: "The Catholic Church saved more Jewish lives during the war than all the other churches, religious institutions, and rescue organisations put together." The relationship between Nazism and Protestantism, especially the German Lutheran Church, was complex. Though many Protestant church leaders in Germany supported the Nazis' growing anti-Jewish activities, some, such as Dietrich Bonhoeffer (a Lutheran pastor) of the Confessing Church, a movement within Protestantism that opposed Nazism, strongly resisted the Third Reich. Bonhoeffer was later found guilty of involvement in the conspiracy to assassinate Hitler and was executed. On 11 October 1962, Pope John XXIII opened the Second Vatican Council, the 21st ecumenical council of the Catholic Church. The council was "pastoral" in nature, emphasising and clarifying already defined dogma, revising liturgical practices, and providing guidance for articulating traditional Church teachings in contemporary times. The council is perhaps best known for its instructions that the Mass may be celebrated in the vernacular as well as in Latin. Ecumenism broadly refers to movements between Christian groups to establish a degree of unity through dialogue. Ecumenism is derived from Greek ("oikoumene"), which means "the inhabited world", but more figuratively something like "universal oneness." The movement can be distinguished into Catholic and Protestant movements, with the latter characterised by a redefined ecclesiology of "denominationalism" (which the Catholic Church, among others, rejects). Over the last century, moves have been made to reconcile the schism between the Catholic Church and the Eastern Orthodox churches. Although progress has been made, concerns over papal primacy and the independence of the smaller Orthodox churches have blocked a final resolution of the schism. On 30 November 1894, Pope Leo XIII published "Orientalium Dignitas". On 7 December 1965, a Joint Catholic-Orthodox Declaration of Pope Paul VI and the Ecumenical Patriarch Athenagoras I was issued lifting the mutual excommunications of 1054. Some of the most difficult questions in relations with the ancient Eastern Churches concern doctrine (e.g. the "filioque", scholasticism, functional purposes of asceticism, the essence of God, Hesychasm, the Fourth Crusade, the establishment of the Latin Empire, Uniatism, to note but a few) as well as practical matters such as the concrete exercise of the claim to papal primacy and how to ensure that ecclesiastical union would not mean mere absorption of the smaller Churches by the Latin component of the much larger Catholic Church (the most numerous single religious denomination in the world), and the stifling or abandonment of their own rich theological, liturgical and cultural heritage.
With respect to Catholic relations with Protestant communities, certain commissions were established to foster dialogue, and documents have been produced aimed at identifying points of doctrinal unity, such as the Joint Declaration on the Doctrine of Justification produced with the Lutheran World Federation in 1999. Ecumenical movements within Protestantism have focused on determining a list of doctrines and practices essential to being Christian and thus extending to all groups which fulfill these basic criteria a (more or less) co-equal status, with perhaps one's own group still retaining a "first among equals" standing. This process involved a redefinition of the idea of "the Church" from traditional theology. This ecclesiology, known as denominationalism, contends that each group (which fulfills the essential criteria of "being Christian") is a sub-group of a greater "Christian Church", itself a purely abstract concept with no direct representation, i.e., no group, or "denomination", claims to be "the Church." This ecclesiology is at variance with other groups that indeed consider themselves to be "the Church." The "essential criteria" generally consist of belief in the Trinity, belief that Jesus Christ is the only way to have forgiveness and eternal life, and that Jesus died and rose again bodily. In reaction to these developments, Christian fundamentalism was a movement to reject the radical influences of philosophical humanism as this was affecting the Christian religion. Especially targeting critical approaches to the interpretation of the Bible, and trying to block the inroads made into their churches by atheistic scientific assumptions, the fundamentalists began to appear in various denominations as numerous independent movements of resistance to the drift away from historic Christianity. Over time, the Fundamentalist Evangelical movement has divided into two main wings, with the label Fundamentalist following one branch, while Evangelical has become the preferred banner of the more moderate movement. Although both movements primarily originated in the English-speaking world, the majority of Evangelicals now live elsewhere in the world.
https://en.wikipedia.org/wiki?curid=14117
Hertz The hertz (symbol: Hz) is the derived unit of frequency in the International System of Units (SI) and is defined as one cycle per second. It is named after Heinrich Rudolf Hertz, the first person to provide conclusive proof of the existence of electromagnetic waves. Hertz are commonly expressed in multiples: kilohertz (10³ Hz, kHz), megahertz (10⁶ Hz, MHz), gigahertz (10⁹ Hz, GHz), terahertz (10¹² Hz, THz), petahertz (10¹⁵ Hz, PHz), exahertz (10¹⁸ Hz, EHz), and zettahertz (10²¹ Hz, ZHz). Some of the unit's most common uses are in the description of sine waves and musical tones, particularly those used in radio- and audio-related applications. It is also used to describe the clock speeds at which computers and other electronics are driven. The unit is sometimes also used as a representation of energy, via the photon energy equation ("E"="h"ν), with one hertz equivalent to "h" joules. The hertz is defined as one cycle per second. The International Committee for Weights and Measures defined the second as "the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom" and then added: "It follows that the hyperfine splitting in the ground state of the caesium 133 atom is exactly 9 192 631 770 hertz, ν(hfs Cs) = 9 192 631 770 Hz." The dimension of the unit hertz is 1/time (1/T). Expressed in base SI units it is 1/second (1/s). Problems can arise because the units of angular measure (cycle or radian) are omitted in SI. In English, "hertz" is also used as the plural form. As an SI unit, Hz can be prefixed; commonly used multiples are kHz (kilohertz, 10³ Hz), MHz (megahertz, 10⁶ Hz), GHz (gigahertz, 10⁹ Hz) and THz (terahertz, 10¹² Hz). One hertz simply means "one cycle per second" (typically that which is being counted is a complete cycle); 100 Hz means "one hundred cycles per second", and so on. The unit may be applied to any periodic event—for example, a clock might be said to tick at 1 Hz, or a human heart might be said to beat at 1.2 Hz. The occurrence rate of aperiodic or stochastic events is expressed in reciprocal second or inverse second (1/s or s⁻¹) in general or, in the specific case of radioactive decay, in becquerels. Whereas 1 Hz is 1 cycle per second, 1 Bq is 1 aperiodic radionuclide event per second. Even though angular velocity, angular frequency and the unit hertz all have the dimension 1/s, angular velocity and angular frequency are not expressed in hertz, but rather in an appropriate angular unit such as radians per second. Thus a disc rotating at 60 revolutions per minute (rpm) is said to be rotating at either 2π rad/s "or" 1 Hz, where the former measures the angular velocity and the latter reflects the number of "complete" revolutions per second. The conversion between a frequency "f" measured in hertz and an angular velocity "ω" measured in radians per second is "ω" = 2π"f", and conversely "f" = "ω"/2π. The hertz is named after the German physicist Heinrich Hertz (1857–1894), who made important scientific contributions to the study of electromagnetism. The name was established by the International Electrotechnical Commission (IEC) in 1930. It was adopted by the General Conference on Weights and Measures (CGPM) ("Conférence générale des poids et mesures") in 1960, replacing the previous name for the unit, "cycles per second" (cps), along with its related multiples, primarily "kilocycles per second" (kc/s) and "megacycles per second" (Mc/s), and occasionally "kilomegacycles per second" (kMc/s).
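The relations stated above can be illustrated with a minimal Python sketch (an illustration only, not part of the original article; the function and constant names and the example values are assumptions chosen for clarity): the period is the reciprocal of the frequency, the angular frequency is 2π times the frequency in hertz, and the photon-energy relation "E" = "h"ν maps one hertz to "h" joules.

import math

PLANCK_CONSTANT_J_S = 6.62607015e-34  # Planck's constant in joule-seconds (exact SI value)

def period_seconds(frequency_hz):
    # Period T = 1/f of a periodic event whose frequency is given in hertz.
    return 1.0 / frequency_hz

def angular_frequency_rad_s(frequency_hz):
    # Angular frequency (angular velocity) omega = 2*pi*f, in radians per second.
    return 2.0 * math.pi * frequency_hz

def photon_energy_joules(frequency_hz):
    # Photon energy E = h * nu; one hertz corresponds to h joules.
    return PLANCK_CONSTANT_J_S * frequency_hz

# A disc turning at 60 revolutions per minute completes one cycle per second:
# 1 Hz, a period of 1 s, and an angular frequency of about 6.283 rad/s.
print(period_seconds(1.0))            # 1.0
print(angular_frequency_rad_s(1.0))   # 6.283185307179586
print(photon_energy_joules(1.0))      # 6.62607015e-34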
The term "cycles per second" was largely replaced by "hertz" by the 1970s. One hobby magazine, "Electronics Illustrated", declared their intention to stick with the traditional kc., Mc., etc. units. Sound is a traveling longitudinal wave which is an oscillation of pressure. Humans perceive frequency of sound waves as pitch. Each musical note corresponds to a particular frequency which can be measured in hertz. An infant's ear is able to perceive frequencies ranging from 20 Hz to 20,000 Hz; the average adult human can hear sounds between 20 Hz and 16,000 Hz. The range of ultrasound, infrasound and other physical vibrations such as molecular and atomic vibrations extends from a few femtohertz into the terahertz range and beyond. Electromagnetic radiation is often described by its frequency—the number of oscillations of the perpendicular electric and magnetic fields per second—expressed in hertz. Radio frequency radiation is usually measured in kilohertz (kHz), megahertz (MHz), or gigahertz (GHz). Light is electromagnetic radiation that is even higher in frequency, and has frequencies in the range of tens (infrared) to thousands (ultraviolet) of terahertz. Electromagnetic radiation with frequencies in the low terahertz range (intermediate between those of the highest normally usable radio frequencies and long-wave infrared light) is often called terahertz radiation. Even higher frequencies exist, such as that of gamma rays, which can be measured in exahertz (EHz). (For historical reasons, the frequencies of light and higher frequency electromagnetic radiation are more commonly specified in terms of their wavelengths or photon energies: for a more detailed treatment of this and the above frequency ranges, see electromagnetic spectrum.) In computers, most central processing units (CPU) are labeled in terms of their clock rate expressed in megahertz (106 Hz) or gigahertz (109 Hz). This specification refers to the frequency of the CPU's master clock signal. This signal is a square wave, which is an electrical voltage that switches between low and high logic values at regular intervals. As the hertz has become the primary unit of measurement accepted by the general populace to determine the performance of a CPU, many experts have criticized this approach, which they claim is an easily manipulable benchmark. Some processors use multiple clock periods to perform a single operation, while others can perform multiple operations in a single cycle. For personal computers, CPU clock speeds have ranged from approximately 1 MHz in the late 1970s (Atari, Commodore, Apple computers) to up to 6 GHz in IBM POWER microprocessors. Various computer buses, such as the front-side bus connecting the CPU and northbridge, also operate at various frequencies in the megahertz range. Higher frequencies than the International System of Units provides prefixes for are believed to occur naturally in the frequencies of the quantum-mechanical vibrations of high-energy, or, equivalently, massive particles, although these are not directly observable and must be inferred from their interactions with other phenomena. By convention, these are typically not expressed in hertz, but in terms of the equivalent quantum energy, which is proportional to the frequency by the factor of Planck's constant.
https://en.wikipedia.org/wiki?curid=14121
Höðr Höðr ( ; often anglicized as Hod, Hoder, or Hodur) is a blind god and a son of Odin and Frigg in Norse mythology. Tricked and guided by Loki, he shot the mistletoe arrow which was to slay the otherwise invulnerable Baldr. According to the "Prose Edda" and the "Poetic Edda", the goddess Frigg, Baldr's mother, made everything in existence swear never to harm Baldr, except for the mistletoe, which she found too unimportant to ask (alternatively, which she found too young to demand an oath from). The gods amused themselves by trying weapons on Baldr and seeing them fail to do any harm. Loki, the mischief-maker, upon finding out about Baldr's one weakness, made a spear from mistletoe, and helped Höðr shoot it at Baldr. In reaction to this, Odin and the giantess Rindr gave birth to Váli, who grew to adulthood within a day and slew Höðr. The Danish historian Saxo Grammaticus recorded an alternative version of this myth in his "Gesta Danorum". In this version, the mortal hero Høtherus and the demi-god "Balderus" compete for the hand of Nanna. Ultimately, Høtherus slays Balderus. The name "Hǫðr", meaning 'warrior', is comparable with the Old English "heaðu-deór" ('brave, stout in war'). Like the Old Norse noun "hǫð" ('war, slaughter'), it stems from Proto-Germanic "*haþuz" ('battle'; compare with Old English "heaðo-", Old High German "hadu"-, Old Saxon "hathu"-). In the "Gylfaginning" part of Snorri Sturluson's Prose Edda Höðr is introduced in an ominous way. Höðr is not mentioned again until the prelude to Baldr's death is described. All things except the mistletoe (believed to be harmless) have sworn an oath not to harm Baldr, so the Æsir throw missiles at him for sport. The "Gylfaginning" does not say what happens to Höðr after this. In fact it specifically states that Baldr cannot be avenged, at least not immediately. It does seem, however, that Höðr ends up in Hel one way or another for the last mention of him in "Gylfaginning" is in the description of the post-Ragnarök world. Snorri's source of this knowledge is clearly "Völuspá" as quoted below. In the "Skáldskaparmál" section of the Prose Edda several kennings for Höðr are related. None of those kennings, however, are actually found in surviving skaldic poetry. Neither are Snorri's kennings for Váli, which are also of interest in this context. It is clear from this that Snorri was familiar with the role of Váli as Höðr's slayer, even though he does not relate that myth in the "Gylfaginning" prose. Some scholars have speculated that he found it distasteful, since Höðr is essentially innocent in his version of the story. Höðr is referred to several times in the Poetic Edda, always in the context of Baldr's death. The following strophes are from "Völuspá". This account seems to fit well with the information in the Prose Edda, but here the role of Baldr's avenging brother is emphasized. Baldr and Höðr are also mentioned in "Völuspá"'s description of the world after Ragnarök. The poem "Vafþrúðnismál" informs us that the gods who survive Ragnarök are Viðarr, Váli, Móði and Magni with no mention of Höðr and Baldr. The myth of Baldr's death is also referred to in another Eddic poem, "Baldrs draumar". Höðr is not mentioned again by name in the Eddas. He is, however, referred to in "Völuspá in skamma". The name of Höðr occurs several times in skaldic poetry as a part of warrior-kennings. Thus "Höðr brynju", "Höðr of byrnie", is a warrior and so is "Höðr víga", "Höðr of battle". 
Some scholars have found the fact that the poets should want to compare warriors with Höðr to be incongruous with Snorri's description of him as a blind god, unable to harm anyone without assistance. It is possible that this indicates that some of the poets were familiar with other myths about Höðr than the one related in "Gylfaginning" - perhaps some where Höðr has a more active role. On the other hand, the names of many gods occur in kennings and the poets might not have been particular in using any god name as a part of a kenning. In "Gesta Danorum" Hotherus is a human hero of the Danish and Swedish royal lines. He is gifted in swimming, archery, fighting and music and Nanna, daughter of King Gevarus falls in love with him. But at the same time Balderus, son of Othinus, has caught sight of Nanna bathing and fallen violently in love with her. He resolves to slay Hotherus, his rival. Out hunting, Hotherus is led astray by a mist and meets wood-maidens who control the fortunes of war. They warn him that Balderus has designs on Nanna but also tell him that he shouldn't attack him in battle since he is a demigod. Hotherus goes to consult with King Gevarus and asks him for his daughter. The king replies that he would gladly favour him but that Balderus has already made a like request and he does not want to incur his wrath. Gevarus tells Hotherus that Balderus is invincible but that he knows of one weapon which can defeat him, a sword kept by Mimingus, the satyr of the woods. Mimingus also has another magical artifact, a bracelet that increases the wealth of its owner. Riding through a region of extraordinary cold in a carriage drawn by reindeer, Hotherus captures the satyr with a clever ruse and forces him to yield his artifacts. Hearing about Hotherus's artifacts, Gelderus, king of Saxony, equips a fleet to attack him. Gevarus warns Hotherus of this and tells him where to meet Gelderus in battle. When the battle is joined, Hotherus and his men save their missiles while defending themselves against those of the enemy with a testudo formation. With his missiles exhausted, Gelderus is forced to sue for peace. He is treated mercifully by Hotherus and becomes his ally. Hotherus then gains another ally with his eloquent oratory by helping King Helgo of Hålogaland win a bride. Meanwhile, Balderus enters the country of king Gevarus armed and sues for Nanna. Gevarus tells him to learn Nanna's own mind. Balderus addresses her with cajoling words but is refused. Nanna tells him that because of the great difference in their nature and stature, since he is a demigod, they are not suitable for marriage. As news of Balderus's efforts reaches Hotherus, he and his allies resolve to attack Balderus. A great naval battle ensues where the gods fight on the side of Balderus. Thoro in particular shatters all opposition with his mighty club. When the battle seems lost, Hotherus manages to hew Thoro's club off at the haft and the gods are forced to retreat. Gelderus perishes in the battle and Hotherus arranges a funeral pyre of vessels for him. After this battle Hotherus finally marries Nanna. Balderus is not completely defeated and shortly afterwards returns to defeat Hotherus in the field. But Balderus's victory is without fruit for he is still without Nanna. Lovesick, he is harassed by phantoms in Nanna's likeness and his health deteriorates so that he cannot walk but has himself drawn around in a cart. After a while Hotherus and Balderus have their third battle and again Hotherus is forced to retreat. 
Weary of life because of his misfortunes, he plans to retire and wanders into the wilderness. In a cave he comes upon the same maidens he had met at the start of his career. Now they tell him that he can defeat Balderus if he gets a taste of some extraordinary food which had been devised to increase the strength of Balderus. Encouraged by this, Hotherus returns from exile and once again meets Balderus in the field. After a day of inconclusive fighting, he goes out during the night to spy on the enemy. He finds where Balderus's magical food is prepared and plays the lyre for the maidens preparing it. While they don't want to give him the food, they bestow on him a belt and a girdle which secure victory. Heading back to his camp, Hotherus meets Balderus and plunges his sword into his side. After three days, Balderus dies from his wound. Many years later, Bous, the son of Othinus and Rinda, avenges his brother by killing Hotherus in a duel. There are also two lesser-known Danish–Latin chronicles, the "Chronicon Lethrense" and the "Annales Lundenses", of which the latter is included in the former. These two sources provide a second euhemerized account of Höðr's slaying of Balder. It relates that Hother was the king of the Saxons, son of Hothbrod, the daughter of Hadding. Hother first slew Othen's (i.e., Odin's) son Balder in battle and then chased Othen and Thor. Finally, Othen's son Both killed Hother. Hother, Balder, Othen, and Thor were incorrectly considered to be gods. According to the Swedish mythologist and romantic poet Viktor Rydberg, the story of Baldr's death was taken from Húsdrápa, a poem composed by Ulfr Uggason around 990 AD at a feast thrown by the Icelandic Chief Óláfr Höskuldsson to celebrate the finished construction of his new home, Hjarðarholt, the walls of which were filled with symbolic representations of the Baldr myth among others. Rydberg suggested that Höðr was depicted with eyes closed and Loki guiding his aim to indicate that Loki was the true cause of Baldr's death and Höðr was only his "blind tool." Rydberg theorized that the author of the "Gylfaginning" then mistook the description of the symbolic artwork in the Húsdrápa as the actual tale of Baldr's death.
https://en.wikipedia.org/wiki?curid=14127
Herat Herāt (; Persian/Pashto: ) is the third-largest city of Afghanistan. It has a population of about 436,300, and serves as the capital of Herat Province, situated in the fertile valley of the Hari River in the western part of the country. It is linked with Kandahar, Kabul, and Mazar-i-Sharif via Highway 1 or the ring road. It is further linked to the city of Mashhad in neighboring Iran through the border town of Islam Qala, and to Mary in Turkmenistan to the north through the border town of Torghundi. Herat dates back to Avestan times and was traditionally known for its wine. The city has a number of historic sites, including the Herat Citadel and the Musalla Complex. During the Middle Ages Herat became one of the important cities of Khorasan, and it came to be known as the "Pearl of Khorasan". After the conquest of Tamerlane, the city became an important center of intellectual and artistic life in the Islamic world. Under the rule of Shah Rukh the city served as the focal point of the Timurid Renaissance, whose glory matched Florence of the Italian Renaissance as the center of a cultural rebirth. After the fall of the Timurid Empire, Herat was governed by various Afghan rulers from the early 18th century. In 1717, the city was occupied by Hotaki forces, who held it until they were expelled by the Afsharids in 1729. After Nader Shah's death and Ahmad Shah Durrani's rise to power in 1747, Herat became part of Afghanistan. It witnessed some political disturbances and military invasions during the first half of the 19th century, but the 1857 Treaty of Paris ended the hostilities of the Anglo-Persian War. Herat lies on the ancient trade routes of the Middle East, Central and South Asia, and today is a regional hub in western Afghanistan. The roads from Herat to Iran, Turkmenistan, and other parts of Afghanistan are still strategically important. As the gateway to Iran, it collects a high amount of customs revenue for Afghanistan. It also has an international airport. The city has a high residential density clustered around its core. However, vacant plots account for a higher percentage of the city (21%) than residential land use (18%), and agricultural land makes up the largest share of total land use (36%). Today the city is considered to be relatively safe. Herat dates back to ancient times (its exact age remains unknown). During the period of the Achaemenid Empire (ca. 550-330 BC), the surrounding district was known as 𐏃𐎼𐎡𐎺 "Haraiva" (in Old Persian), and in classical sources the region was correspondingly known as Aria (Areia). In the Zoroastrian Avesta, the district is mentioned as "Haroiva". The name of the district and its main town is derived from that of the chief river of the region, the Herey River (Old Dari "Hereyrud", "Silken Water"), which traverses the district and passes some distance south of modern Herāt. Herey is mentioned in Sanskrit as a yellow or golden colour, equivalent to the Persian "Zard", meaning gold (yellow). The naming of a region and its principal town after the main river is a common feature in this part of the world—compare the adjoining districts/rivers/towns of Arachosia and Bactria. The district "Aria" of the Achaemenid Empire is mentioned in the provincial lists that are included in various royal inscriptions, for instance, in the Behistun inscription of Darius I (ca. 520 BC). Representatives from the district are depicted in reliefs, e.g., at the royal Achaemenid tombs of Naqsh-e Rustam and Persepolis.
They are wearing Scythian-style dress (with a tunic and trousers tucked into high boots) and a twisted Bashlyk that covers their head, chin and neck. Hamdallah Mustawfi, author of the 14th-century work "The Geographical Part of the Nuzhat-al-Qulub", wrote about the city. Herodotus described Herat as "the bread-basket of Central Asia". At the time of Alexander the Great in 330 BC, Aria was obviously an important district. It was administered by a satrap called Satibarzanes, who was one of the three main Persian officials in the East of the Empire, together with the satrap Bessus of Bactria and Barsaentes of Arachosia. In late 330 BC, Alexander captured the Arian capital that was called Artacoana. The town was rebuilt and the citadel was constructed. Afghanistan became part of the Seleucid Empire. However, most sources suggest that Herat was predominantly Zoroastrian. It became part of the Parthian Empire in 167 BC. In the Sasanian period (226-652), 𐭧𐭥𐭩𐭥‎ "Harēv" is listed in an inscription on the Ka'ba-i Zartosht at Naqsh-e Rustam; and "Hariy" is mentioned in the Pahlavi catalogue of the provincial capitals of the empire. In around 430, the town is also listed as having a Christian community, with a Nestorian bishop. In the last two centuries of Sasanian rule, Aria (Herat) had great strategic importance in the endless wars between the Sasanians, the Chionites and the Hephthalites, who had been settled in the northern section of Afghanistan since the late 4th century. At the time of the Arab invasion in the middle of the 7th century, the Sasanian central power seemed already largely nominal in the province, in contrast with the role of the Hephthalite tribal lords, who were settled in the Herat region and in the neighboring districts, mainly in pastoral Bādghis and in Qohestān. It must be underlined, however, that Herat remained one of the three Sasanian mint centers in the east, the other two being Balkh and Marv. The Hephthalites from Herat and some unidentified Turks opposed the Arab forces in a battle of Qohestān in 651-52 AD, trying to block their advance on Nishāpur, but they were defeated. When the Arab armies appeared in Khorāsān in the 650s AD, Herāt was counted among the twelve capital towns of the Sasanian Empire. The Arab army under the general command of Ahnaf ibn Qais in its conquest of Khorāsān in 652 seems to have avoided Herāt, but it can be assumed that the city eventually submitted to the Arabs, since shortly afterwards an Arab governor is mentioned there. A treaty was drawn up in which the regions of Bādghis and Bushanj were included. As did many other places in Khorāsān, Herāt rebelled and had to be re-conquered several times. Another power that was active in the area in the 650s was Tang dynasty China, which had embarked on a campaign that culminated in the Conquest of the Western Turks. By 659–661, the Tang claimed a tenuous suzerainty over Herat, the westernmost point of Chinese power in its long history. This hold, however, would be ephemeral, with local Turkish tribes rising in rebellion in 665 and driving out the Tang. In 702 AD Yazid ibn al-Muhallab defeated certain Arab rebels, followers of Ibn al-Ash'ath, and forced them out of Herat. The city was the scene of conflicts between different groups of Muslims and Arab tribes in the disorders leading to the establishment of the Abbasid Caliphate. Herat was also a centre of the followers of Ustadh Sis.
In 870 AD, Yaqub ibn Layth Saffari, a local ruler of the Saffarid dynasty, conquered Herat and the rest of the nearby regions in the name of Islam. The region of Herāt was under the rule of King Nuh III, the seventh of the Samanid line—at the time of Sebük Tigin and his older son, Mahmud of Ghazni. The governor of Herāt was a noble by the name of "Faik", who was appointed by Nuh III. It is said that Faik was a powerful but insubordinate governor of Nuh III, and that he had been punished by Nuh III. Faik made overtures to Bogra Khan and Ughar Khan of Khorasan. Bogra Khan answered Faik's call, came to Herāt and became its ruler. The Samanids fled, betrayed at the hands of Faik, to whom the defence of Herāt had been entrusted by Nuh III. In 994, Nuh III invited Alp Tigin to come to his aid. Alp Tigin, along with Mahmud of Ghazni, defeated Faik and annexed Herāt, Nishapur and Tous. Herat was a great trading centre strategically located on trade routes from the Mediterranean to India and China. The city was noted for its textiles during the Abbasid Caliphate, according to many references by geographers. Herāt also had many learned sons such as Ansārī. The city is described by Estakhri and Ibn Hawqal in the 10th century as a prosperous town surrounded by strong walls with plenty of water sources, extensive suburbs, an inner citadel, a congregational mosque, and four gates, each gate opening to a thriving market place. The government building was outside the city at a distance of about a mile in a place called Khorāsānābād. A church was still visible in the countryside northeast of the town on the road to Balkh, and farther away on a hilltop stood a flourishing fire temple, called Sereshk, or Arshak according to Mustawfi. Herat was a part of the Taherid dominion in Khorāsān until the rise of the Saffarids in Sistān under Ya'qub-i Laith in 861, who, in 862, started launching raids on Herat before besieging and capturing it on 16 August 867, and again in 872. The Saffarids succeeded in expelling the Taherids from Khorasan in 873. The Sāmānid dynasty was established in Transoxiana by three brothers, Nuh, Yahyā, and Ahmad. Ahmad Sāmāni opened the way for the Samanid dynasty to the conquest of Khorāsān, including Herāt, which they were to rule for one century. The centralized Samanid administration served as a model for later dynasties. The Samanid power was destroyed in 999 by the Qarakhanids, who were advancing on Transoxiana from the northeast, and by the Ghaznavids, former Samanid retainers, attacking from the southeast. Sultan Maḥmud of Ghazni officially took control of Khorāsān in 998. Herat was one of the six Ghaznavid mints in the region. In 1040, Herat was captured by the Seljuk Empire. Then, in 1175, it was captured by the Ghurids of Ghor, and it came under the Khawarazm Empire in 1214. According to the account of Mustawfi, Herat flourished especially under the Ghurid dynasty in the 12th century. Mustawfi reported that there were "359 colleges in Herat, 12,000 shops all fully occupied, 6,000 bath-houses; besides caravanserais and mills, also a darwish convent and a fire temple". There were about 444,000 houses occupied by a settled population. The men were described as "warlike and carry arms", and they were Sunni Muslims. The great mosque of Herāt was built by Ghiyas ad-Din Ghori in 1201. In this period Herāt became an important center for the production of metal goods, especially in bronze, often decorated with elaborate inlays in precious metals.
Herat was invaded and destroyed by Genghis Khan's Mongol army in 1221. The city was destroyed a second time and remained in ruins from 1222 to about 1236. In 1244, a local prince, Shams al-Din Kart, was named ruler of Herāt by the Mongol governor of Khorāsān, and in 1255 he was confirmed in his rule by the founder of the Il-Khan dynasty, Hulagu. Shams al-Din founded a new dynasty and his successors, especially Fakhr-al-Din and Ghiyath al-Din, built many mosques and other buildings. The members of this dynasty were great patrons of literature and the arts. By this time Herāt became known as the "pearl of Khorasan". Timur took Herat in 1380 and brought the Kartid dynasty to an end a few years later. The city reached its greatest glory under the Timurid princes, especially Sultan Husayn Bayqara, who ruled Herat from 1469 until May 4, 1506. His chief minister, the poet and author in Persian and Turkish, Mir Ali-Shir Nava'i, was a great builder and patron of the arts. Under the Timurids, Herat assumed the role of the main capital of an empire that extended in the West as far as central Persia. As the capital of the Timurid empire, it boasted many fine religious buildings and was famous for its sumptuous court life and musical performance and its tradition of miniature paintings. On the whole, the period was one of relative stability, prosperity, and development of economy and cultural activities. It began with the nomination of Shahrokh, the youngest son of Timur, as governor of Herat in 1397. The reign of Shahrokh in Herat was marked by intense royal patronage, building activities, and promotion of manufacturing and trade, especially through the restoration and enlargement of Herat's bāzār. The present Musallah Complex, and many buildings such as the madrasa of Gawhar Shad, Ali Shir mahāl, many gardens, and others, date from this time. The village of Gazar Gah, over two km northeast of Herat, contained a shrine which was enlarged and embellished under the Timurids. The tomb of the poet and mystic Khwājah Abdullāh Ansārī (d. 1088) was first rebuilt by Shahrokh about 1425, and other famous men were buried in the shrine area. Herat was briefly captured by the Kara Koyunlu between 1458 and 1459. In 1507 Herat was occupied by the Uzbeks, but after much fighting the city was taken by Shah Isma'il, the founder of the Safavid dynasty, in 1510, and the Shamlu Qizilbash assumed the governorship of the area. Under the Safavids, Herat was again relegated to the position of a provincial capital, albeit one of particular importance. At the death of Shah Isma'il the Uzbeks again took Herat and held it until Shah Tahmasp retook it in 1528. The Persian king Abbas was born in Herat, and in Safavid texts, Herat is referred to as "a'zam-i bilād-i īrān", meaning "the greatest of the cities of Iran". In the 16th century, all future Safavid rulers, from Tahmasp I to Abbas I, were governors of Herat in their youth. By the early 18th century Herat was governed by various Hotaki and Abdali Afghans. After Nader Shah's death in 1747, Ahmad Shah Durrani took possession of the city, which became part of the Durrani Empire. In 1824, Herat became independent for several years when the Afghan Empire was split between the Durranis and Barakzais. The Persians besieged the city in 1837, but the British helped the Afghans in repelling them. In 1856, they invaded again, and briefly managed to retake the city; this led directly to the Anglo-Persian War.
In 1857 hostilities between the Persians and the British ended after the Treaty of Paris was signed, and the Persian troops withdrew from Herat. Afghanistan reconquered Herat in 1863 under Dost Muhammad Khan, two weeks before his death. One of the greatest tragedies for the Afghans and Muslims was the destruction, at British instigation, of the Islamic Musallah complex in Herat in 1885. The officially stated reason was to get a good line of sight for their artillery against Russian invaders who never came. This was but one small sidetrack in the Great Game, a century-long conflict between the British Empire and the Russian Empire in the 19th century. In the 1960s, engineers from the United States built Herat Airport, which was used by the Soviet forces during the Democratic Republic of Afghanistan in the 1980s. Even before the Soviet invasion at the end of 1979, there was a substantial presence of Soviet advisors in the city with their families. Between March 10 and March 20, 1979, the Afghan Army in Herāt, under the control of commander Ismail Khan, mutinied. Thousands of protesters took to the streets against the Khalq communist regime's oppression led by Nur Mohammad Taraki. The rebels led by Khan managed to oust the communists and take control of the city for three days, with some protesters killing any Soviet advisers they found. This shocked the government, which blamed the new administration of Iran, following the Iranian Revolution, for influencing the uprising. Reprisals by the government followed, and between 3,000 and 24,000 people (according to different sources) were killed, in what is called the 1979 Herat uprising, known in Persian as the "Qiam-e Herat". The city itself was recaptured with tanks and airborne forces, but at the cost of thousands of civilians killed. This massacre was the first of its kind since the country's independence in 1919, and was the bloodiest event preceding the Soviet–Afghan War. Herat suffered damage during the Soviet–Afghan War in the 1980s, especially on its western side. The province as a whole was one of the worst-hit. In April 1983, a series of Soviet bombings, described as "extremely heavy, brutal and prolonged", damaged half of the city and killed around 3,000 civilians. Ismail Khan was the leading mujahideen commander in Herāt fighting against the Soviet-backed government. After the communist government's collapse in 1992, Khan joined the new government and became governor of Herat Province. The city was relatively safe and was recovering and rebuilding from the damage caused in the Soviet–Afghan War. However, on September 5, 1995, the city was captured by the Taliban without much resistance, forcing Khan to flee. Herat became the first Persian-speaking city to be captured by the Taliban. The Taliban's strict enforcement of laws confining women at home and closing girls' schools alienated Heratis, who, like Kabulis, are traditionally more liberal and educated than other urban populations in the country. Two days of anti-Taliban protests in December 1996 were violently dispersed and led to the imposition of a curfew. In May 1999, a rebellion in Herat was crushed by the Taliban, who blamed Iran for causing it. After the U.S. invasion of Afghanistan, on November 12, 2001, it was captured from the Taliban by forces loyal to the Northern Alliance, and Ismail Khan returned to power (see Battle of Herat).
In 2004, Mirwais Sadiq, Aviation Minister of Afghanistan and the son of Ismail Khan, was ambushed and killed in Herat by a local rival group. More than 200 people were arrested on suspicion of involvement. In 2005, the International Security Assistance Force (ISAF) began establishing bases in and around the city. Its main mission was to train the Afghan National Security Forces (ANSF) and to help with the rebuilding of the country. Regional Command West, led by Italy, assisted the Afghan National Army (ANA) 207th Corps. Herat was one of the first seven areas to transition security responsibility from NATO to Afghan control; the Afghan security forces assumed that responsibility in July 2011. Due to the two countries' close relations, Iran began investing in the development of Herat's power, economic and education sectors. In the meantime, the United States built a consulate in Herat to help further strengthen its relations with Afghanistan. In addition to the usual services, the consulate works with local officials on development projects and on security issues in the region. Herat has a hot semi-arid climate (Köppen climate classification "BSh"). Precipitation is very low, and mostly falls in winter. Although Herat is approximately lower than Kandahar, the summer climate is more temperate, and the climate throughout the year is far from disagreeable, although winter temperatures are comparatively lower. From May to September, the wind blows from the northwest with great force. The winter is tolerably mild; snow melts as it falls, and even on the mountains does not lie long. Three years out of four it does not freeze hard enough for the people to store ice. The eastern reaches of the Hari River, including the rapids, are frozen hard in the winter, and people travel on the frozen river as on a road. India, Iran and Pakistan operate consulates in the city for trade, military and political links. Of the more than a dozen minarets that once stood in Herat, many have been toppled by war and neglect over the past century. Recently, however, everyday traffic has threatened many of the remaining unique towers by shaking the very foundations they stand on. Cars and trucks that drive on a road encircling the ancient city shake the ground every time they pass these historic structures. UNESCO personnel and Afghan authorities have been working to stabilize the Fifth Minaret. The population of Herat numbered approximately 436,300 in 2013. The city houses a multi-ethnic society, and speakers of the Persian language are in the majority. There is no current data on the precise ethnic composition of the city's population, but according to a 2003 map found in the National Geographic Magazine, Persian-speaking Tajik and Farsiwan peoples form the overwhelming majority of the city, comprising around 65% of the population. The remaining population comprises Pashtuns (30%), Hazaras (2%), Uzbeks (2%) and Turkmens (1%). Persian is the native language of Herat, and the local dialect – known by natives as "Herātī" – belongs to the "Khorāsānī" cluster within Persian. It is akin to the Persian dialects of eastern Iran, notably those of Mashhad and Khorasan Province, which borders Herat. This Persian dialect serves as the lingua franca of the city. The second language understood by many is Pashto, which is the native language of the Pashtuns. The local Pashto dialect spoken in Herat is a variant of western Pashto, which is also spoken in Kandahar and southern and western Afghanistan. 
Religiously, Sunni Islam is practiced by the majority, while Shias make up the minority. The city once had a Jewish community. About 280 Jewish families lived in Herat as of 1948, but most of them moved to Israel that year, and the community had disappeared by 1992. There are four former synagogues in the city's old quarter, which were neglected for decades and fell into disrepair. In the late 2000s, the buildings of the synagogues were renovated by the Aga Khan Trust for Culture, and three of them were turned into schools and nurseries, the Jewish community having vanished. The Jewish cemetery is being taken care of by Jalil Ahmed Abdelaziz. Herat International Airport was built by engineers from the United States in the 1960s and was used by the Soviet Armed Forces during the Soviet–Afghan War in the 1980s. It was bombed in late 2001 during Operation Enduring Freedom but was rebuilt within the next decade. The runway of the airport has been extended and upgraded, and as of August 2014 there were regularly scheduled direct flights to Delhi, Dubai, Mashhad, and various airports in Afghanistan. At least five airlines operated regularly scheduled direct flights to Kabul. Rail connections to and from Herat were proposed many times, during "The Great Game" of the 19th century and again in the 1970s and 1980s, but nothing came of them. In February 2002, Iran and the Asian Development Bank announced funding for a railway connecting Torbat-e Heydarieh in Iran to Herat. This was later changed to a cargo and passenger railway beginning in Khaf in Iran, with work on the Iranian side of the border starting in 2006. Construction is underway on the Afghan side, and the line was estimated to be completed by March 2018. There is also the prospect of an extension across Afghanistan to Sher Khan Bandar. The AH76 highway connects Herat to Maymana and the north. The AH77 connects it east towards Chaghcharan and north towards Mary in Turkmenistan. Highway 1 (part of Asian highway AH1) links it to Mashhad in Iran to the northwest, and south via the Kandahar–Herat Highway to Delaram.
https://en.wikipedia.org/wiki?curid=14128
Hedeby Hedeby (, Old Norse "Heiðabýr", German "Haithabu") was an important Danish Viking Age (8th to the 11th centuries) trading settlement near the southern end of the Jutland Peninsula, now in the Schleswig-Flensburg district of Schleswig-Holstein, Germany. It is the most important archaeological site in Schleswig-Holstein. Around 965, the chronicler Ibrahim ibn Ahmed At-Tartûschi visited Hedeby and described it as "a very large city at the very end of the world's ocean." The settlement developed as a trading centre at the head of a narrow, navigable inlet known as the Schlei, which connects to the Baltic Sea. The location was favorable because there is a short portage of less than 15 km to the Treene River, which flows into the Eider with its North Sea estuary. Goods and ships could be hauled overland on a corduroy road, giving an almost uninterrupted seaway between the Baltic and the North Sea and avoiding a dangerous and time-consuming circumnavigation of Jutland; this gave Hedeby a role similar to that of the later Lübeck. Hedeby was the second largest Nordic town during the Viking Age, after Uppåkra in present-day southern Sweden. The city of Schleswig was later founded on the other side of the Schlei. Hedeby was abandoned after its destruction in 1066. Hedeby was rediscovered in the late 19th century and excavations began in 1900. The Hedeby Museum was opened next to the site in 1985. The Old Norse name "Heiða-býr" simply translates to "heath-settlement" ("heiðr" = "heath" and "býr" = "yard; settlement, village, town"). The name is recorded in numerous spelling variants (Heinrich Karl Wilhelm Berghaus, "Schweden, Norwegen u. Dänemark die 3 skandinavischen Reiche", Hasselberg, 1858, p. 890). Sources from the 9th and 10th century AD also attest to the names "Sliesthorp" and "Sliaswich" (cf. "-thorp" vs. "-wich"), and the town of Schleswig still exists 3 km north of Hedeby. However, Æthelweard claimed in his Latin translation of the Anglo-Saxon Chronicle that the Saxons used "Slesuuic" and the Danes "Haithaby" to refer to the same town. Hedeby is first mentioned in the Frankish chronicles of Einhard (804), who was in the service of Charlemagne, but the town was probably founded around 770. In 808 the Danish king Godfred (Lat. Godofredus) destroyed a competing Slav trade centre named Reric, and it is recorded in the Frankish chronicles that he moved the merchants from there to Hedeby. This may have provided the initial impetus for the town to develop. The same sources record that Godfred strengthened the Danevirke, an earthen wall that stretched across the south of the Jutland peninsula. The Danevirke joined the defensive walls of Hedeby to form an east–west barrier across the peninsula, from the marshes in the west to the Schlei inlet leading into the Baltic in the east. The town itself was surrounded on its three landward sides (north, west, and south) by earthworks. At the end of the 9th century the northern and southern parts of the town were abandoned for the central section. Later a 9-metre (29 ft) high semi-circular wall was erected to guard the western approaches to the town. On the eastern side, the town was bordered by the innermost part of the Schlei inlet and the bay of Haddebyer Noor. Hedeby became a principal marketplace because of its geographical location on the major trade routes between the Frankish Empire and Scandinavia (north–south), and between the Baltic and the North Sea (east–west). 
Between 800 and 1000 the growing economic power of the Vikings led to its dramatic expansion as a major trading centre. Along with Birka and Schleswig, Hedeby's prominence as a major international trading hub served as a foundation of the Hanseatic League that would emerge by the 12th century. The following indicate the importance achieved by the town: A Swedish dynasty founded by Olof the Brash is said to have ruled Hedeby during the last decades of the 9th century and the first part of the 10th century. This was told to Adam of Bremen by the Danish king Sweyn Estridsson, and it is supported by three runestones found in Denmark. Two of them were raised by the mother of Olof's grandson Sigtrygg Gnupasson. The third runestone, discovered in 1796, is from Hedeby, the "Stone of Eric" (). It is inscribed with Norwegian-Swedish runes. It is, however, possible that Danes also occasionally wrote with this version of the younger futhark. Life was short and crowded in Hedeby. The small houses were clustered tightly together in a grid, with the east–west streets leading down to jetties in the harbour. People rarely lived beyond 30 or 40, and archaeological research shows that their later years were often painful due to crippling diseases such as tuberculosis. Yet make-up for men and rights for women provide surprises to the modern understanding. Al-Tartushi, a late 10th-century traveller from al-Andalus, provides one of the most colourful and often quoted descriptions of life in Hedeby. Al-Tartushi was from Cordoba in Spain, which had a significantly more wealthy and comfortable lifestyle than Hedeby. While Hedeby may have been significant by Scandinavian standards, Al-Tartushi was unimpressed: The town was sacked in 1050 by King Harald Hardrada of Norway during a conflict with King Sweyn II of Denmark. He set the town on fire by sending several burning ships into the harbour, the charred remains of which were found at the bottom of the Schlei during recent excavations. A Norwegian "skald", quoted by Snorri Sturluson, describes the sack as follows: In 1066 the town was sacked and burned by West Slavs. Following the destruction, Hedeby was slowly abandoned. People moved across the Schlei inlet, which separates the two peninsulas of Angeln and Schwansen, and founded the town of Schleswig. After the settlement was abandoned, rising waters contributed to the complete disappearance of all visible structures on the site. It was even forgotten where the settlement had been. This proved to be fortunate for later archaeological work at the site. Archaeological work began at the site in 1900 after the rediscovery of the settlement. Excavations were conducted for the next 15 years. Further excavations were carried out between 1930 and 1939. Archaeological work on the site was productive for two main reasons: that the site had never been built on since its destruction some 840 years earlier, and that the permanently waterlogged ground had preserved wood and other perishable materials. After the Second World War, in 1959, archaeological work was started again and has continued intermittently ever since. The embankments surrounding the settlement were excavated, and the harbour was partially dredged, during which the wreck of a Viking ship was discovered. Despite all this work, only 5% of the settlement (and only 1% of the harbour) has as yet been investigated. The most important finds resulting from the excavations are now on display in the adjoining Haithabu Museum. 
In 2005 an ambitious archaeological reconstruction program was initiated on the original site. Based on the results of archaeological analyses, exact copies of some of the original Viking houses have been rebuilt.
https://en.wikipedia.org/wiki?curid=14130
Hazaras The Hazaras (; ) are an ethnic group native to the mountainous region of Hazarajat in central Afghanistan. They speak the Hazaragi variant of Persian, one of the two official languages of Afghanistan. They are the third-largest ethnic group in Afghanistan and also make up a significant minority group in neighboring Pakistan, with a population of between 650,000 and 900,000, largely living in the region of Quetta. Hazaras are considered to be one of the most oppressed groups in Afghanistan, and their persecution dates back decades. Babur, founder of the Mughal Empire in the early 16th century, records the name "Hazara" in his autobiography. He referred to the populace of a region called "Hazaristan", located west of the Kabulistan region, east of Ghor, and north of Ghazni. The conventional theory is that the name "Hazara" derives from the Persian word for "thousand" ( ). It may be a translation of the Mongol word (or ), a military unit of 1,000 soldiers at the time of Genghis Khan. With time, the term "Hazar" could have been substituted for the Mongol word and now stands for the group of people, while the Hazaras in their native language call themselves ( ) and ( ). The origins of the Hazara have not been fully reconstructed. Significant Inner Asian descent—in historical context, Turkic and Mongol—is probable because their physical attributes, facial bone structures and parts of their culture and language resemble those of Mongolians and Central Asian Turks. Genetic analysis of the Hazara indicates partial Mongolian ancestry. Invading Mongols and Turco-Mongols mixed with the local Iranian population, forming a distinct group. For example, Nikudari Mongols settled in what is now Afghanistan and mixed with the native populations. A second wave of mostly Chagatai Mongols came from Central Asia and was followed by other Mongolic groups, associated with the Ilkhanate and the Timurids, all of whom settled in Hazarajat and mixed with the local population, forming a distinct group. The Hazara identity in Afghanistan is believed by many to have originated in the aftermath of the 1221 Siege of Bamyan. The first mentions of the Hazara are made by Babur in the early 16th century and later by the court historians of Shah Abbas of the Safavid dynasty. It is reported that they embraced Shia Islam between the end of the 16th and the beginning of the 17th century, during the Safavid period. Hazara men, along with those of other ethnic groups, were recruited into the army of Ahmad Shah Durrani in the 18th century. Some claim that in the mid‑18th century the Hazara were forced out of Helmand and the Arghandab District of Kandahar Province. During the second reign of Dost Mohammad Khan in the 19th century, Hazara from Hazarajat began to be taxed for the first time. However, for the most part they still managed to keep their regional autonomy until the subjugation under Abdur Rahman Khan began in the late 19th century. When the Treaty of Gandomak was signed and the Second Anglo-Afghan War ended in 1880, Abdur Rahman Khan set out to bring Hazarajat and Kafiristan under his control. He launched several campaigns in Hazarajat in response to resistance from the Hazara, during which his forces committed atrocities. The southern part of Hazarajat was spared as its people accepted his rule, while the other parts of Hazarajat rejected Abdur Rahman and instead supported his uncle, Sher Ali Khan. In response, Abdur Rahman waged war against the tribal leaders who rejected his policies and rule. 
This is known as the Hazara Uprisings. Abdur Rahman arrested Syed Jafar, chief of the Sheikh Ali Hazara tribe, and jailed him in Mazar-i-Sharif. These campaigns had a catastrophic impact on the demographics of Hazaras causing 60% of them to perish or become displaced. In 1901, Habibullah Khan, Abdur Rahman's successor, granted amnesty to all people who were exiled by his predecessor. However, the division between the Afghan government and the Hazara people was already made too deep under Abdur Rahman. Hazara continued to face severe social, economic and political discrimination through most of the 20th century. In 1933 King Mohammed Nadir Khan was assassinated by Abdul Khaliq Hazara. The Afghan government captured and executed him later, along with several of his innocent family members. Mistrust of the central government by the Hazaras and local uprisings continued. In particular, from 1945–1946, during Zahir Shah's rule, a revolt took place against new taxes that were exclusively imposed on the Hazara. The Kuchi nomads meanwhile not only were exempted from taxes, but also received allowances from the Afghan government. The angry rebels began capturing and killing government officials. In response, the central government sent a force to subdue the region and later removed the taxes. During the Soviet–Afghan War, the Hazarajat region did not see as much heavy fighting as other regions of Afghanistan. However, rival Hazara political factions fought. The division was between the "Tanzáim-i nasl-i naw-i Hazara", a party based in Quetta, of Hazara nationalists and secular intellectuals, and the Islamist parties in Hazarajat. By 1979, the Hazara-Islamist groups liberated Hazarajat from the central Soviet-backed Afghan government and later took entire control of Hazarajat away from the secularists. By 1984, after severe fighting, the secularist groups lost all their power to the Islamists. As the Soviets withdrew in 1989, the Islamist groups felt the need to broaden their political appeal and turned their focus to Hazara ethnic nationalism. This led to establishment of the Hizb-i-Wahdat, an alliance of all the Hazara resistance groups (except the "Harakat-i Islami"). In 1992 with the fall of Kabul, the "Harakat-i Islami" took sides with Burhanuddin Rabbani's government while the Hizb-i-Wahdat took sides with the opposition. The Hizb-i-Wahdat was eventually forced out of Kabul in 1995 when the Taliban movement captured and killed their leader Abdul Ali Mazari. With the Taliban's capture of Kabul in 1996, all the Hazara groups united with the new Northern Alliance against the common new enemy. However, it was too late and despite the fierce resistance Hazarajat fell to the Taliban by 1998. The Taliban had Hazarajat totally isolated from the rest of the world going as far as not allowing the United Nations to deliver food to the provinces of Bamyan, Ghor, Wardak, and Daykundi. Though Hazara played a role in the anti-Soviet movement, other Hazara participated in the new communist government, which actively courted Afghan minorities. Sultan Ali Kishtmand, a Hazara, served as prime minister of Afghanistan from 1981–1990 (with one brief interruption in 1988). The Ismaili Hazara of Baghlan Province likewise supported the communists, and their "pir" (religious leader) Jaffar Naderi led a pro-Communist militia in the region. 
During the years that followed, the Hazara suffered severe oppression, and many ethnic massacres, genocides and pogroms were carried out by the predominantly ethnic Pashtun Taliban, as documented by groups such as Human Rights Watch. These human rights abuses occurred not only in Hazarajat but across all districts controlled by the Taliban, particularly after their capture of Mazar-i-Sharif in 1998, when, after the massacre of some 8,000 civilians, the Taliban openly declared that the Hazara would be targeted. Following the 11 September 2001 attacks in the United States, British and American forces invaded Afghanistan. Many Hazara have become leaders in today's newly emerging Afghanistan. Hazara have also pursued higher education, enrolled in the army, and many hold top government positions. For example, Mohammad Mohaqiq, a Hazara from the Hizb-i-Wahdat party, ran in the 2004 presidential election in Afghanistan, and Karim Khalili became the Vice President of Afghanistan. A number of ministers and governors are Hazara, including Sima Samar, Habiba Sarabi, Sarwar Danish, Sayed Hussein Anwari, Abdul Haq Shafaq, Sayed Anwar Rahmati, and Qurban Ali Oruzgani. The mayor of Nili in Daykundi Province is Azra Jafari, who became the first female mayor in Afghanistan. Some other notable Hazara include: Sultan Ali Keshtmand, Abdul Wahed Sarābi, Ghulam Ali Wahdat, Sayed Mustafa Kazemi, Muhammad Arif Shah Jahan, Ghulam Husain Naseri, Abbas Noyan, Abbas Ibrahim Zada, Ramazan Bashardost, Ahmad Shah Ramazan, Ahmad Behzad, and Nasrullah Sadiqi Zada Nili. Ethnic Hazara make up 25% of the National Assembly of Afghanistan (Parliament), or 61 members. Although Afghanistan has historically been one of the poorest countries in the world, the Hazarajat region was kept even poorer and less developed by past governments. Since the ousting of the Taliban in late 2001, billions of dollars have poured into Afghanistan for reconstruction, and several large-scale reconstruction projects took place in Afghanistan from August 2012. For example, more than 5,000 kilometers of road pavement has been completed across Afghanistan, of which little was done in the central Hazarajat region. On the other hand, the Band-e Amir in the Bamyan Province became the first national park of Afghanistan. The road from Kabul to Bamyan was also built, along with new police stations, government institutions, hospitals, and schools in the Bamyan Province, Daykundi Province, and others. The first ski resort of Afghanistan was also established in Bamyan Province. An indication of discrimination is that Kuchis (Afghan nomads who have historically migrated from region to region depending on the season) are allowed to use Hazarajat pastures during the summer season. It is believed that allowing the Kuchis to use some of the grazing land in Hazarajat began during the rule of Abdur Rahman Khan. Living in mountainous Hazarajat, where little farmland exists, Hazara people rely on these pasture lands for their livelihood during the long and harsh winters. In 2007 some Kuchi nomads entered parts of Hazarajat to graze their livestock, and when the local Hazara resisted, a clash took place in which several people on both sides were killed with assault rifles. Such events continue to occur, even after the central government, including President Hamid Karzai, was forced to intervene. 
In late July 2012, a Hazara police commander in Uruzgan province reportedly rounded up and killed nine Pashtun civilians in revenge for the deaths of two local Hazara. The matter is being investigated by the Afghan government. The drive by President Hamid Karzai after the Peace Jirga to strike a deal with Taliban leaders caused deep unease in Afghanistan's minority communities, who fought the Taliban the longest and suffered the most during their rule. The leaders of the Tajik, Uzbek and Hazara communities, which together make up around 65% of the country's population, vowed to resist any return of the Taliban to power, referring to the large-scale massacres of Hazara civilians during the Taliban period. Genetically, the Hazara are a mixture of western Eurasian and eastern Eurasian components. Genetic research suggests that the Hazaras of Afghanistan cluster closely with the Uzbek population of the country, while both groups are at a notable distance from Afghanistan's Tajik and Pashtun populations. There is evidence of both paternal and maternal relation to Turkic peoples and Mongols. East Asian male and female ancestry is supported by studies in genetic genealogy as well. East Asian maternal haplogroups (mtDNA) make up about 35% and are virtually absent from bordering populations, suggesting that the male descendants of Turkic and Mongolian peoples were accompanied by women of East Asian ancestry. Non-East Asian mtDNA lineages among the Hazaras account for about 65%, most of which are West Eurasian and some South Asian. The most frequent paternal haplogroups found amongst the Pakistani Hazara were haplogroup C-M217 at 40% (10/25) and haplogroup R1b at 32% (8/25). One study of paternal DNA haplogroups in Afghanistan shows that the Y-DNA haplogroups R1a and C-M217 are the most common, followed by J2-M172 and L-M20. Some Hazaras also have the haplogroups R1a1a-M17, E1b1b1-M35, L-M20 and H-M69, which are common among Tajiks, Pashtuns and Indians. In one study, a small minority had the haplogroup B-M60, normally found in East Africa, and in one mtDNA study of Hazara, mtDNA haplogroup L (which is of African origin) was detected at a frequency of 7.5%. A recent study shows that the Uyghurs are closely related to the Hazara and that both are closer to various East Asian groups. The study also suggests a small but notable East Asian ancestry in other populations of Pakistan and India. The vast majority of Hazaras live in central Afghanistan, and significant numbers are also found in major cities and towns. Many Hazara men leave Hazarajat to work in cities, including in neighboring countries or abroad. The latest World Factbook estimates show that Hazara make up nine percent of the total Afghan population, but some sources claim, without citing a reference, that they are about 20 percent. In the 1970s, they were estimated by Louis Dupree at approximately 1,000,000. Alessandro Monsutti argues, in his recent anthropological book, that migration is the traditional way of life of the Hazara people, referring to the seasonal and historical migrations which have never ceased and do not seem to be dictated only by emergency situations such as war. Due to the decades of war in Afghanistan and the sectarian violence in Pakistan, many Hazaras have left their communities and settled in Australia, New Zealand, Canada, the United States, the United Kingdom and particularly Northern European countries such as Sweden and Denmark. 
Some go to these countries as exchange students, while others go through human smugglers, which sometimes costs them their lives. Since 2001, about 1,000 people have died in the ocean while trying to reach Australia by boat from Indonesia. Many of these were Hazaras, including women and small children who could not swim. A notable case was the Tampa affair, in which a shipload of refugees, mostly Hazara, was rescued by the Norwegian freighter MV "Tampa" and subsequently sent to Nauru. New Zealand agreed to take some of the refugees, and all but one of them were granted the right to stay. During the British expansion in the 19th century, Hazaras worked during the winter months in coal mines, in road construction and in other menial labor jobs in some cities of what is now Pakistan. The earliest record of Hazara in the areas of Pakistan is found in Broadfoot's Sappers company of 1835 in Quetta. This company had also participated in the First Anglo-Afghan War. Some Hazara also worked on agricultural farms in Sindh and in the construction of the Sukkur barrage. Haider Ali Karmal Jaghori was a prominent political thinker of the Hazara people in Pakistan, writing about the political history of the Hazara people. His work "Hazaraha wa Hazarajat Bastan Dar Aiyna-i-Tarikh" was published in Quetta in 1992, and another work, "Tarikh Milli Hazara" by Aziz Tughyan Hazara, was published in Quetta in 1984. Most Pakistani Hazaras today live in the city of Quetta, in Balochistan, Pakistan. Localities in the city of Quetta with prominent Hazara populations include Hazara Town and Mehr Abad, and Hazara tribes such as the "Sardar" are exclusively Pakistani. The literacy level among the Hazara community in Pakistan is relatively high compared to the Hazaras of Afghanistan, and they have integrated well into the social dynamics of the local society. Saira Batool, a Hazara woman, was one of the first female pilots in the Pakistan Air Force. Other notable Hazara include Qazi Mohammad Esa; General Musa Khan Hazara, who served as Commander in Chief of the Pakistani Army from 1958 to 1968; Air Marshal Sharbat Ali Changezi; Hussain Ali Yousafi, the slain chairman of the Hazara Democratic Party; Syed Nasir Ali Shah, MNA from Quetta; and his father Haji Sayed Hussain Hazara, who was a senator and member of Majlis-e-Shura during the Zia-ul-Haq era. Despite all of this, Hazaras are often targeted by militant groups such as the Lashkar-e-Jhangvi and others. "Activists say at least 800-1,000 Hazaras have been killed since 1999 and the pace is quickening. More than one hundred have been murdered in and around Quetta since January, according to Human Rights Watch." The political representation of the community is served by the Hazara Democratic Party, a secular liberal democratic party headed by Abdul Khaliq Hazara. Over many years, as a result of political unrest in Afghanistan, some Hazaras have migrated to Iran. The local Hazara population has been estimated at 500,000 people, of whom at least one third have spent more than half their lives in Iran. They have complained of discrimination in Iran. In March 2011, "Eurasia Daily Monitor" reported that representatives of the Hazara community in Iran had asked Mongolia to intervene in supporting their case with the Iranian government and to prevent forced repatriation to Afghanistan. Outside of Hazarajat, the Hazara have adopted the cultures of the cities where they dwell, resembling the customs and traditions of the Afghan Tajiks and Pashtuns. 
Traditionally the Hazara are highland farmers, and although sedentary, in the Hazarajat they have retained many of their own customs and traditions, some of which are more closely related to those of Central Asia than to those of the Afghan Tajiks. For instance, many Hazara musicians are widely hailed as being skilled in playing the dambura, a native, regional lute instrument similarly found in other Central Asian nations such as Tajikistan, Uzbekistan and Kazakhstan. The Hazara live in houses rather than tents; the Aimaqs and Aimaq Hazaras live in tents rather than houses. Hazara people living in Hazarajat (Hazaristan) areas speak the Hazaragi language of Afghanistan, which is infused with a significant number of Altaic loanwords, both Mongolic and Turkic. The primary differences between Dari and Hazaragi are the accent and Hazaragi's greater array of Altaic loanwords. Despite these differences, Hazaragi is mutually intelligible with Dari, one of the official languages of Afghanistan. Many of the urban Hazara in the larger cities such as Kabul and Mazar-i-Sharif no longer speak Hazaragi but speak standard literary Dari (usually the "Kābolī" dialect) or other regional varieties of Dari (for example the "Khorāsānī" dialect in the western region of Herat). Hazara are predominantly Shi'a Muslims, mostly of the Twelver sect, with some Ismaili. Since the majority of Afghans practice Sunni Islam, this may have contributed to the discrimination against the Hazara. The Hazara probably converted to Shi'ism during the first part of the 16th century, in the early days of the Safavid Dynasty. Nonetheless, a small number of Hazara are Sunni, such as the Aimaq Hazaras. Sunni Hazara have been attached to non-Hazara tribes (such as the Taimuris), while the Ismaili Hazara have always been kept separate from the rest of the Hazara on account of religious beliefs and political purposes. The Hazara people are organized into various tribes. They include the Sheikh Ali, Jaghori, Ghaznichi, Muhammad Khwaja, Behsudi, Uruzgani, Daikundi, Daizangi, Turkmani, Dai Mirdadi and others. The different tribes come from regions such as Parwan, Bamyan, Ghazni, Ghor, Urozgan, Daykundi and Maidan Wardak and have spread outwards from Hazarajat into Kabul and other parts of Afghanistan. Many Hazaras engage in a variety of sports, including football, volleyball, wrestling, martial arts, boxing, karate, taekwondo, judo, wushu and more. Pahlawan Ebrahim Khedri, a 62 kg wrestler, was the national champion in Afghanistan for two decades. Rohullah Nikpai won a bronze medal in taekwondo at the 2008 Beijing Olympics, beating world champion Juan Antonio Ramos of Spain 4–1 in a play-off final. It was Afghanistan's first-ever Olympic medal. He then won a second Olympic medal for Afghanistan at the London 2012 Games. Afghanistan's first female Olympic athlete, Friba Razayee, competed in judo at the 2004 Athens Olympics but was eliminated in the first round of competition. Another famous Hazara athlete, Syed Abdul Jalil Waiz, was the first badminton player to represent Afghanistan at the Asian Junior Championships, in 2005, where he produced the first win for his country, against Iraq, 15–13, 15–1. He has participated in several international championships since 2005 and achieved victories against Australia, the Philippines and Mongolia. Hamid Rahimi is a boxer from Afghanistan who lives in Germany. Famous Hazara football players include Zohib Islam Amiri, Ali Hazara, Moshtagh Yaghoubi, Mustafa Amini and Rahmat Akbari. 
Zohib Islam Amiri currently plays for the Afghanistan national football team. A Pakistani Hazara named Abrar Hussain, a former Olympic boxer, served as deputy director general of the Pakistan Sports Board. He represented Pakistan three times at the Olympics and won a gold medal at the 1990 Asian Games in Beijing. Some Hazara from Pakistan have also excelled in sports and have received numerous awards, particularly in boxing, football and field hockey. Qayum Changezi, a legendary Pakistani football player, was a Hazara. Young Hazaras, mostly from Quetta, are appearing in many sports across Pakistan. Rajab Ali Hazara captains the Pakistan under-16 football team.
https://en.wikipedia.org/wiki?curid=14131
Hawala Hawala or hewala ( , meaning "transfer" or sometimes "trust"), also known as ' in Persian, and ' or in Somali, is a popular and informal value transfer system based not on the movement of cash, or on telegraph or computer network wire transfers between banks, but instead on the performance and honour of a huge network of money brokers (known as "hawaladars"). While hawaladars are spread throughout the world, they are primarily located in the Middle East, North Africa, the Horn of Africa, and the Indian subcontinent, operating outside of, or parallel to, traditional banking, financial channels, and remittance systems. Hawala follows Islamic traditions but its use is not limited to Muslims. The hawala system originated in India. It has existed since the 8th century among Indian, Arab and Muslim traders along the Silk Road and beyond, as a protection against theft. It is believed to have arisen in the financing of long-distance trade around the emerging capital trade centers in the early medieval period. In South Asia, it appears to have developed into a fully-fledged money market instrument, which was only gradually replaced by the instruments of the formal banking system in the first half of the 20th century. "Hawala" itself influenced the development of agency in common law and in civil laws, such as the "aval" in French law and the "avallo" in Italian law. The words "aval" and "avallo" were themselves derived from "hawala". The transfer of debt, which was "not permissible under Roman law but became widely practiced in medieval Europe, especially in commercial transactions", was due to the large extent of the "trade conducted by the Italian cities with the Muslim world in the Middle Ages". Agency was also "an institution unknown to Roman law", as no "individual could conclude a binding contract on behalf of another as his agent". In Roman law, the "contractor himself was considered the party to the contract and it took a second contract between the person who acted on behalf of a principal and the latter in order to transfer the rights and the obligations deriving from the contract to him". On the other hand, Islamic law and the later common law "had no difficulty in accepting agency as one of its institutions in the field of contracts and of obligations in general". Today, hawala is probably used mostly for migrant workers' remittances to their countries of origin. In the most basic variant of the hawala system, money is transferred via a network of hawala brokers, or "hawaladars". It is the transfer of money without actually moving it; indeed, a common definition of the hawala system is "money transfer without money movement". According to author Sam Vaknin, while there are large hawaladar operators with networks of middlemen in cities across many countries, most hawaladars are small businesses who work at hawala as a sideline or moonlighting operation. In outline, hawala works as follows: (1) a customer ("A") approaches a hawala broker ("X") in one city and gives a sum of money that is to be transferred to a recipient ("B") in another, usually foreign, city. Along with the money, he usually specifies something like a password that will lead to the money being paid out. (2b) The hawala broker "X" calls another hawala broker "M" in the recipient's city, and informs "M" about the agreed password, or gives other disposition of the funds. 
Then, the intended recipient ("B"), who also has been informed by "A" about the password (2a), now approaches "M" and tells him the agreed password (3a). If the password is correct, then "M" releases the transferred sum to "B" (3b), usually minus a small commission. "X" now basically owes "M" the money that "M" had paid out to "B"; thus "M" has to trust "X"s promise to settle the debt at a later date. The unique feature of the system is that no promissory instruments are exchanged between the hawala brokers; the transaction takes place entirely on the honour system. As the system does not depend on the legal enforceability of claims, it can operate even in the absence of a legal and juridical environment. Trust and extensive use of connections are the components that distinguish it from other remittance systems. Hawaladar networks are often based on membership in the same family, village, clan, or ethnic group, and cheating is punished by effective ex-communication and "loss of honour"—leading to severe economic hardship. Informal records are produced of individual transactions, and a running tally of the amount owed by one broker to another is kept. Settlements of debts between hawala brokers can take a variety of forms (such as goods, services, properties, transfers of employees, etc.), and need not take the form of direct cash transactions. In addition to commissions, hawala brokers often earn their profits through bypassing official exchange rates. Generally, the funds enter the system in the source country's currency and leave the system in the recipient country's currency. As settlements often take place without any foreign exchange transactions, they can be made at other than official exchange rates. Hawala is attractive to customers because it provides a fast and convenient transfer of funds, usually with a far lower commission than that charged by banks. Its advantages are most pronounced when the receiving country applies unprofitable exchange rate regulations or when the banking system in the receiving country is less complex (e.g., due to differences in legal environment in places such as Afghanistan, Yemen, Somalia). Moreover, in some parts of the world it is the only option for legitimate fund transfers, and has even been used by aid organizations in areas where it is the best-functioning institution. Dubai has been prominent for decades as a welcoming hub for hawala transactions worldwide. The "hundi" is a financial instrument that developed on the Indian sub-continent for use in trade and credit transactions. Hundis are used as a form of remittance instrument to transfer money from place to place, as a form of credit instrument or IOU to borrow money and as a bill of exchange in trade transactions. The Reserve Bank of India describes the Hundi as "an unconditional order in writing made by a person directing another to pay a certain sum of money to a person named in the order." The word "angadia" means courier in Hindi, but also designates those who act as hawaladars within India. These people mostly act as a parallel banking system for businessmen. They charge a commission of around 0.2–0.5% per transaction from transferring money from one city to another. According to the CIA, with the dissolution of Somalia's formal banking system, many informal money transfer operators arose to fill the void. 
It estimates that such "hawaladars", "xawilaad" or "xawala" brokers are now responsible for the transfer of up to $1.6 billion per year in remittances to the country, most coming from working Somalis outside Somalia. Such funds have in turn had a stimulating effect on local business activity. The 2012 Tuareg rebellion left Northern Mali without an official money transfer service for months. The coping mechanisms that appeared were patterned on the hawala system. Some government officials assert that hawala can be used to facilitate money laundering, avoid taxation, and move wealth anonymously. As a result, it is illegal in some U.S. states, India, Pakistan, and some other countries. After the September 11 terrorist attacks, the American government suspected that some hawala brokers may have helped terrorist organizations transfer money to fund their activities, and the 9/11 Commission Report stated that "Al Qaeda frequently moved the money it raised by hawala". As a result of intense pressure from the U.S. authorities to introduce systematic anti-money laundering initiatives on a global scale, a number of hawala networks were closed down and a number of hawaladars were successfully prosecuted for money laundering. However, there is little evidence that these actions brought the authorities any closer to identifying and arresting a significant number of terrorists or drug smugglers. Experts emphasized that the overwhelming majority of those who used these informal networks were doing so for legitimate purposes, and simply chose to use a transaction medium other than state-supported banking systems. Today, the hawala system in Afghanistan is instrumental in providing financial services for the delivery of emergency relief and humanitarian and developmental aid for the majority of international and domestic NGOs, donor organizations, and development aid agencies. In November 2001, the Bush administration froze the assets of Al-Barakat, a Somali remittance hawala company used primarily by a large number of Somali immigrants. Many of its agents in several countries were initially arrested, though later freed after no concrete evidence against them was found. In August 2006 the last Al-Barakat representatives were taken off the U.S. terror list, though some assets remain frozen. The mass media has speculated that pirates from Somalia use the hawala system to move funds internationally, for example into neighboring Kenya, where these transactions are neither taxed nor recorded. In January 2010, the Kabul office of New Ansari Exchange, Afghanistan's largest hawala money transfer business, was closed following a raid by the Sensitive Investigative Unit, the country's national anti-political corruption unit, allegedly because this company was involved in laundering profits from the illicit opium trade and the moving of cash earned by government allied warlords through extortion and drug trafficking. Thousands of records were seized, from which links were found between money transfers by this company and political and business figures and NGOs in the country, including relatives of President Hamid Karzai. In August 2010, Karzai took control of the task force that staged the raid, and the US-advised anti-corruption group, the Major Crimes Task Force. He ordered a commission to review scores of past and current anti-corruption inquests.
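The basic flow described earlier — broker "X" collects cash and a password from sender "A", broker "M" pays recipient "B" against that password, and the two brokers settle an informal running balance later — can be sketched as a small ledger model. The sketch below is illustrative only: the broker names, the 1% commission, and the password check are assumptions made for the example, not features of any particular hawala network.

```python
# Minimal sketch of the hawala flow described above: broker X takes cash and a
# password from sender A, broker M pays recipient B against that password, and
# X's debt to M accumulates on an informal running tally settled later.
# The broker names, 1% commission, and settlement step are illustrative assumptions.

class HawalaBroker:
    def __init__(self, name, commission=0.01):
        self.name = name
        self.commission = commission
        self.pending = {}        # password -> amount payable to the recipient
        self.owed_to_peers = {}  # peer broker name -> running balance owed

    def accept_transfer(self, amount, password, payout_broker):
        """Sender side: take cash and tell the peer broker the agreed password."""
        payout = amount * (1 - self.commission)
        payout_broker.pending[password] = payout
        # No instrument changes hands; only an informal tally records the debt.
        self.owed_to_peers[payout_broker.name] = (
            self.owed_to_peers.get(payout_broker.name, 0) + payout)

    def pay_out(self, password):
        """Recipient side: release the funds if the recipient knows the password."""
        if password not in self.pending:
            raise ValueError("unknown password - no payout")
        return self.pending.pop(password)

# Example: A sends 1,000 units via broker X to B, who collects from broker M.
x, m = HawalaBroker("X"), HawalaBroker("M")
x.accept_transfer(1000, password="sparrow", payout_broker=m)
print(m.pay_out("sparrow"))    # 990.0 paid to B after commission
print(x.owed_to_peers["M"])    # 990.0 owed by X, settled later in cash, goods, etc.
```

The point of the sketch is that nothing but the informal owed_to_peers tally records X's obligation to M; the transaction rests entirely on trust and honour, as described above.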
https://en.wikipedia.org/wiki?curid=14132
Hydroponics Hydroponics is a type of horticulture and a subset of hydroculture: a method of growing plants without soil, using mineral nutrient solutions in a water solvent. Terrestrial plants may be grown with only their roots exposed to the nutrient liquid, or the roots may be physically supported by an inert medium such as perlite, gravel, or other substrates. Even with inert media, roots can cause changes in the rhizosphere pH, and root exudates can affect the rhizosphere biology. The nutrients used in hydroponic systems can come from many different sources, including (but not limited to) fish excrement, duck manure, purchased chemical fertilisers, or artificial nutrient solutions. Plants commonly grown hydroponically, on inert media, include tomatoes, peppers, cucumbers, lettuces, marijuana, and model plants like "Arabidopsis thaliana". Hydroponics offers many advantages, one of them being a decrease in water usage for agriculture. To grow of tomatoes using intensive farming methods requires of water; using hydroponics, ; and only using aeroponics. Since it takes much less water to grow produce, it could be possible in the future for growers in harsh environments with little accessible water to grow their own food. The earliest published work on growing terrestrial plants without soil was the 1627 book "Sylva Sylvarum" or 'A Natural History' by Francis Bacon, printed a year after his death. Water culture became a popular research technique after that. In 1699 John Woodward published his water culture experiments with spearmint. He found that plants in less-pure water sources grew better than plants in distilled water. By 1842, a list of nine elements believed to be essential for plant growth had been compiled, and the discoveries of the German botanists Julius von Sachs and Wilhelm Knop, in the years 1859–1875, resulted in the development of the technique of soilless cultivation. Growth of terrestrial plants without soil in mineral nutrient solutions was called solution culture. It quickly became a standard research and teaching technique and is still widely used. Solution culture is now considered a type of hydroponics where there is no inert medium. Around the 1930s plant scientists investigated diseases of certain plants, and thereby observed symptoms related to existing soil conditions. In this context, water culture experiments were undertaken with the hope of reproducing similar symptoms under controlled conditions. In 1929, William Frederick Gericke (August 30, 1882 – September 29, 1970) of the University of California at Berkeley began publicly promoting that solution culture be used for agricultural crop production. He first termed it aquaculture but later found that aquaculture was already applied to the culture of aquatic organisms. Gericke created a sensation by growing tomato vines high in his back yard in mineral nutrient solutions rather than soil. He introduced the term hydroponics, water culture, in 1937, proposed to him by , a phycologist with an extensive education in the classics. Hydroponics is derived from the neologism υδρωπονικά (from Greek ύδωρ = water and πονέω = cultivate), constructed in analogy to γεωπονικά (from Greek γαία = earth and πονέω = cultivate), geoponica, that which concerns agriculture, replacing γεω- (earth) with ὑδρο- (water). Gericke, however, did not appreciate that the time was not yet ripe for the general technical application of hydroponics. 
Reports of Gericke's work and his claims that hydroponics would revolutionize plant agriculture prompted a huge number of requests for further information. Gericke had been denied use of the University's greenhouses for his experiments due to the administration's skepticism, and when the University tried to compel him to release his preliminary nutrient recipes developed at home he requested greenhouse space and time to improve them using appropriate research facilities. While he was eventually provided greenhouse space, the University assigned Hoagland and Arnon to re-evaluate Gericke's claims and show his formula held no benefit over soil grown plant yields, a view held by Hoagland. In 1940, Gericke published the book, "Complete Guide to Soilless Gardening," after leaving his academic position in 1937 in a climate that was politically unfavorable. Therein, for the first time, he published his basic formula involving the macro- and micronutrient salts for hydroponically-grown plants. As a result of research of Gericke's claims by order of the University of California, Dennis Robert Hoagland and Daniel Israel Arnon wrote a classic 1938 agricultural bulletin, "The Water Culture Method for Growing Plants Without Soil," which made the claim that hydroponic crop yields were no better than crop yields with good-quality soils. Ultimately, crop yields would be limited by factors other than mineral nutrients, especially light. However, this study did not adequately appreciate that hydroponics has other key benefits including the fact that the roots of the plant have constant access to oxygen and that the plants have access to as much or as little water as they need. This is important as one of the most common errors when growing is overwatering and underwatering; and hydroponics prevents this from occurring as large amounts of water, which may drown root systems in soil, can be made available to the plant, and any water not used, is drained away, recirculated, or actively aerated, thus, eliminating anoxic conditions. In soil, a grower needs to be very experienced to know exactly how much water to feed the plant. Too much and the plant will be unable to access oxygen; too little and the plant will lose the ability to transport nutrients, which are typically moved into the roots while in solution. Hoagland's views and helpful support by the University prompted these two researchers to develop several new formulas for mineral nutrient solutions, universally known as Hoagland solution. Modified Hoagland solutions will continue to be used, as will the hydroponic techniques proposed by Gericke. One of the earliest successes of hydroponics occurred on Wake Island, a rocky atoll in the Pacific Ocean used as a refueling stop for Pan American Airlines. Hydroponics was used there in the 1930s to grow vegetables for the passengers. Hydroponics was a necessity on Wake Island because there was no soil, and it was prohibitively expensive to airlift in fresh vegetables. From 1943 to 1946, Daniel I. Arnon served as a major in the United States Army and used his prior expertise with plant nutrition to feed troops stationed on barren Ponape Island in the western Pacific by growing crops in gravel and nutrient-rich water because there was no arable land available. In the 1960s, Allen Cooper of England developed the Nutrient film technique. The Land Pavilion at Walt Disney World's EPCOT Center opened in 1982 and prominently features a variety of hydroponic techniques. 
In recent decades, NASA has done extensive hydroponic research for its Controlled Ecological Life Support System (CELSS). Hydroponics research mimicking a Martian environment uses LED lighting to grow in a different color spectrum with much less heat. Ray Wheeler, a plant physiologist at Kennedy Space Center's Space Life Science Lab, believes that hydroponics will create advances within space travel, as a bioregenerative life support system. In 2007, Eurofresh Farms in Willcox, Arizona, sold more than 200 million pounds of hydroponically grown tomatoes. Eurofresh has under glass and represents about a third of the commercial hydroponic greenhouse area in the U.S. Eurofresh tomatoes were pesticide-free, grown in rockwool with top irrigation. Eurofresh declared bankruptcy, and the greenhouses were acquired by NatureSweet Ltd. in 2013. As of 2017, Canada had hundreds of acres of large-scale commercial hydroponic greenhouses, producing tomatoes, peppers and cucumbers. Due to technological advancements within the industry and numerous economic factors, the global hydroponics market is forecast to grow from US$226.45 million in 2016 to US$724.87 million by 2023. There are two main variations for each medium: sub-irrigation and top irrigation. For all techniques, most hydroponic reservoirs are now built of plastic, but other materials have been used, including concrete, glass, metal, vegetable solids, and wood. The containers should exclude light to prevent algae and fungal growth in the nutrient solution. In static solution culture, plants are grown in containers of nutrient solution, such as glass Mason jars (typically in in-home applications), pots, buckets, tubs, or tanks. The solution is usually gently aerated but may be un-aerated. If un-aerated, the solution level is kept low enough that enough roots are above the solution so they get adequate oxygen. A hole is cut (or drilled) in the top of the reservoir for each plant; if it is a jar or tub, this may be the lid, but otherwise cardboard, foil, paper, wood or metal may be put on top. A single reservoir can be dedicated to a single plant, or to various plants. Reservoir size can be increased as plant size increases. A home-made system can be constructed from food containers or glass canning jars with aeration provided by an aquarium pump, aquarium airline tubing and aquarium valves. Clear containers are covered with aluminium foil, butcher paper, black plastic, or other material to exclude light, thus helping to eliminate the formation of algae. The nutrient solution is changed either on a schedule, such as once per week, or when the concentration drops below a certain level as determined with an electrical conductivity meter. Whenever the solution is depleted below a certain level, either water or fresh nutrient solution is added. A Mariotte's bottle, or a float valve, can be used to automatically maintain the solution level; a simple sketch of this maintenance routine appears below. In raft solution culture, plants are placed in a sheet of buoyant plastic that is floated on the surface of the nutrient solution. That way, the solution level never drops below the roots. In continuous-flow solution culture, the nutrient solution constantly flows past the roots. It is much easier to automate than static solution culture because sampling and adjustments to the temperature, pH, and nutrient concentrations can be made in a large storage tank that has the potential to serve thousands of plants. 
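The maintenance routine just described for static solution culture — replace the solution on a schedule or when conductivity falls too low, and top up when the level drops — can be expressed as a small decision helper. This is a minimal sketch; the interval, conductivity, and level thresholds are arbitrary example values chosen for illustration, not horticultural recommendations.

```python
# Illustrative decision helper for the static solution culture routine described
# above: replace the solution on a schedule or when conductivity falls too low,
# otherwise top up when the level in the reservoir drops.
# The threshold values below are arbitrary examples, not recommended settings.

def maintenance_action(days_since_change, ec_ms_cm, level_fraction,
                       change_interval_days=7, min_ec=1.2, min_level=0.6):
    """Return the next action for a static-culture reservoir.

    days_since_change: days since the solution was last replaced
    ec_ms_cm:          electrical conductivity reading (mS/cm)
    level_fraction:    current solution level as a fraction of full
    """
    if days_since_change >= change_interval_days or ec_ms_cm < min_ec:
        return "replace nutrient solution"
    if level_fraction < min_level:
        # Acceptable EC but low level: add water or dilute solution rather than
        # letting salts concentrate as water is lost to uptake and evaporation.
        return "top up with water or fresh nutrient solution"
    return "no action needed"

print(maintenance_action(days_since_change=3, ec_ms_cm=1.8, level_fraction=0.5))
print(maintenance_action(days_since_change=8, ec_ms_cm=1.8, level_fraction=0.9))
```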
A popular variation is the nutrient film technique, or NFT, whereby a very shallow stream of water containing all the dissolved nutrients required for plant growth is recirculated past the bare roots of plants in a watertight channel (gully); a thick root mat develops in the bottom of the channel, with an upper surface that, although moist, is in the air. As a result, an abundant supply of oxygen is provided to the roots of the plants. A properly designed NFT system is based on using the right channel slope, the right flow rate, and the right channel length. The main advantage of the NFT system over other forms of hydroponics is that the plant roots are exposed to adequate supplies of water, oxygen, and nutrients. In all other forms of production, there is a conflict between the supply of these requirements, since excessive or deficient amounts of one result in an imbalance in one or both of the others. NFT, because of its design, provides a system where all three requirements for healthy plant growth can be met at the same time, provided that the simple concept of NFT is always remembered and practised. The result of these advantages is that higher yields of high-quality produce are obtained over an extended period of cropping. A downside of NFT is that it has very little buffering against interruptions in the flow (e.g., power outages). But, overall, it is probably one of the more productive techniques. The same design characteristics apply to all conventional NFT systems. While slopes along channels of 1:100 have been recommended, in practice it is difficult to build a base for channels that is sufficiently true to enable nutrient films to flow without ponding in locally depressed areas. As a consequence, it is recommended that slopes of 1:30 to 1:40 are used. This allows for minor irregularities in the surface, but, even with these slopes, ponding and waterlogging may occur. The slope may be provided by the floor, or benches or racks may hold the channels and provide the required slope. Both methods are used and depend on local requirements, often determined by the site and crop requirements. As a general guide, flow rates for each gully should be one liter per minute. At planting, rates may be half this, and an upper limit of about 2 L/min appears to be the maximum. Flow rates beyond these extremes are often associated with nutritional problems. Depressed growth rates of many crops have been observed when channels exceed 12 meters in length. On rapidly growing crops, tests have indicated that, while oxygen levels remain adequate, nitrogen may be depleted over the length of the gully. As a consequence, channel length should not exceed 10–15 meters. In situations where this is not possible, the reductions in growth can be eliminated by placing another nutrient feed halfway along the gully and halving the flow rates through each outlet (a short design-check sketch of these guidelines follows below). Aeroponics is a system wherein roots are continuously or discontinuously kept in an environment saturated with fine drops (a mist or aerosol) of nutrient solution. The method requires no substrate and entails growing plants with their roots suspended in a deep air or growth chamber, with the roots periodically wetted with a fine mist of atomized nutrients. Excellent aeration is the main advantage of aeroponics. Aeroponic techniques have proven to be commercially successful for propagation, seed germination, seed potato production, tomato production, leaf crops, and micro-greens. 
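The NFT rules of thumb given above — a slope of roughly 1:30 to 1:40, about 1 L/min per gully with roughly 0.5–2 L/min as the working range, and channels kept to about 10–15 m — can be collected into a quick design check. The sketch below is a loose reading of those guidelines; the exact cut-offs are assumptions made for illustration, not a definitive specification.

```python
# Quick sanity check of an NFT channel design against the rules of thumb above:
# slope around 1:30 to 1:40, roughly 0.5-2 L/min per gully, and channel length
# kept to roughly 10-15 m (nitrogen may deplete along longer gullies).
# The cut-offs encoded here are illustrative interpretations of those guidelines.

def check_nft_design(slope_ratio, flow_l_per_min, channel_length_m):
    """slope_ratio is the denominator N in a 1:N slope (e.g. 35 for 1:35)."""
    issues = []
    if not 30 <= slope_ratio <= 40:
        issues.append("slope outside the recommended 1:30 to 1:40 range")
    if not 0.5 <= flow_l_per_min <= 2.0:
        issues.append("flow rate outside the ~0.5-2 L/min working range")
    if channel_length_m > 15:
        issues.append("channel longer than ~15 m; nutrients may deplete along "
                      "the gully - consider a mid-channel nutrient feed")
    return issues or ["design within the stated guidelines"]

# Example design: 1:35 slope, 1 L/min, 12 m channel.
for message in check_nft_design(slope_ratio=35, flow_l_per_min=1.0, channel_length_m=12):
    print(message)
```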
Since inventor Richard Stoner commercialized aeroponic technology in 1983, aeroponics has been implemented as an alternative to water-intensive hydroponic systems worldwide. One limitation of hydroponics is that water can hold only a limited amount of dissolved air, no matter whether aerators are utilized or not. Another distinct advantage of aeroponics over hydroponics is that any species of plant can be grown in a true aeroponic system because the microenvironment of an aeroponic system can be finely controlled. A further limitation of hydroponics is that certain species of plants can only survive for so long in water before they become waterlogged. The advantage of aeroponics is that suspended aeroponic plants receive 100% of the available oxygen and carbon dioxide to the root zone, stems, and leaves, thus accelerating biomass growth and reducing rooting times. NASA research has shown that aeroponically grown plants have an 80% increase in dry weight biomass (essential minerals) compared to hydroponically grown plants. Aeroponics used 65% less water than hydroponics. NASA also concluded that aeroponically grown plants require ¼ the nutrient input compared to hydroponics. Unlike hydroponically grown plants, aeroponically grown plants will not suffer transplant shock when transplanted to soil, and the method offers growers the ability to reduce the spread of disease and pathogens. Aeroponics is also widely used in laboratory studies of plant physiology and plant pathology. Aeroponic techniques have been given special attention from NASA since a mist is easier to handle than a liquid in a zero-gravity environment. Fogponics is a derivation of aeroponics wherein the nutrient solution is aerosolized by a diaphragm vibrating at ultrasonic frequencies. Solution droplets produced by this method tend to be 5–10 µm in diameter, smaller than those produced by forcing a nutrient solution through pressurized nozzles, as in aeroponics. The smaller size of the droplets allows them to diffuse through the air more easily, and deliver nutrients to the roots without limiting their access to oxygen. Passive sub-irrigation, also known as passive hydroponics, semi-hydroponics, or "hydroculture", is a method wherein plants are grown in an inert porous medium that transports water and fertilizer to the roots by capillary action from a separate reservoir as necessary, reducing labor and providing a constant supply of water to the roots. In the simplest method, the pot sits in a shallow solution of fertilizer and water or on a capillary mat saturated with nutrient solution. The various hydroponic media available, such as expanded clay and coconut husk, contain more air space than more traditional potting mixes, delivering increased oxygen to the roots, which is important in epiphytic plants such as orchids and bromeliads, whose roots are exposed to the air in nature. Additional advantages of passive hydroponics are the reduction of root rot and the additional ambient humidity provided through evaporation. Compared to traditional farming, hydroculture in a controlled environment yields roughly 10 times more crop per unit area, uses 13 times less water in one crop cycle, but on average uses about 100 times more energy per kilogram of produce. In its simplest form, an ebb-and-flow (flood and drain) system consists of a tray above a reservoir of nutrient solution. 
Either the tray is filled with growing medium (clay granules being the most common) and planted directly, or pots of medium are stood in the tray. At regular intervals, a simple timer causes a pump to fill the upper tray with nutrient solution, after which the solution drains back down into the reservoir. This keeps the medium regularly flushed with nutrients and air. Once the upper tray fills past the drain stop, it begins recirculating the water until the timer turns the pump off, and the water in the upper tray drains back into the reservoir. In a run-to-waste system, nutrient and water solution is periodically applied to the medium surface. The method was invented in Bengal in 1946; for this reason it is sometimes referred to as "The Bengal System". This method can be set up in various configurations. In its simplest form, a nutrient-and-water solution is manually applied one or more times per day to a container of inert growing media, such as rockwool, perlite, vermiculite, coco fibre, or sand. In a slightly more complex system, it is automated with a delivery pump, a timer and irrigation tubing to deliver nutrient solution with a delivery frequency that is governed by the key parameters of plant size, plant growing stage, climate, substrate, and substrate conductivity, pH, and water content. In a commercial setting, watering frequency is multi-factorial and governed by computers or PLCs. Commercial hydroponic production of large plants like tomatoes, cucumbers, and peppers uses one form or another of run-to-waste hydroponics. In environmentally responsible uses, the nutrient-rich waste is collected and processed through an on-site filtration system to be used many times, making the system very productive. Some bonsai are also grown in soil-free substrates (typically consisting of akadama, grit, diatomaceous earth and other inorganic components) and have their water and nutrients provided in a run-to-waste form. Deep water culture is a hydroponic method of plant production in which the plant roots are suspended in a solution of nutrient-rich, oxygenated water. Traditional methods favor the use of plastic buckets and large containers with the plant contained in a net pot suspended from the centre of the lid and the roots suspended in the nutrient solution. The solution is oxygen-saturated by an air pump combined with porous stones. With this method, the plants grow much faster because of the high amount of oxygen that the roots receive. The Kratky Method is similar to deep water culture, but uses a non-circulating water reservoir. "Top-fed" deep water culture is a technique involving delivering highly oxygenated nutrient solution directly to the root zone of plants. While deep water culture involves the plant roots hanging down into a reservoir of nutrient solution, in top-fed deep water culture the solution is pumped from the reservoir up to the roots (top feeding). The water is released over the plant's roots and then runs back into the reservoir below in a constantly recirculating system. As with deep water culture, there is an airstone in the reservoir, supplied with air by a pump outside the reservoir via a hose. The airstone helps add oxygen to the water. Both the air pump and the water pump run 24 hours a day. The biggest advantage of top-fed deep water culture over standard deep water culture is increased growth during the first few weeks. With deep water culture, there is a time when the roots have not reached the water yet. 
With top-fed deep water culture, the roots get easy access to water from the beginning and will grow to the reservoir below much more quickly than with a deep water culture system. Once the roots have reached the reservoir below, there is not a huge advantage of top-fed deep water culture over standard deep water culture. However, due to the quicker growth in the beginning, grow time can be reduced by a few weeks. A rotary hydroponic garden is a style of commercial hydroponics created within a circular frame which rotates continuously during the entire growth cycle of whatever plant is being grown. While system specifics vary, systems typically rotate once per hour, giving a plant 24 full turns within the circle each 24-hour period. Within the center of each rotary hydroponic garden can be a high-intensity grow light, designed to simulate sunlight, often with the assistance of a mechanized timer. Each day, as the plants rotate, they are periodically watered with a hydroponic growth solution to provide all nutrients necessary for robust growth. Due to the plants' continuous fight against gravity, plants typically mature much more quickly than when grown in soil or other traditional hydroponic growing systems. Because of the small footprint a rotary hydroponic system has, it allows for more plant material to be grown per square foot of floor space than other traditional hydroponic systems. One of the most obvious decisions hydroponic farmers have to make is which medium they should use. Different media are appropriate for different growing techniques. Baked clay pellets are suitable for hydroponic systems in which all nutrients are carefully controlled in water solution. The clay pellets are inert, pH-neutral, and do not contain any nutrient value. The clay is formed into round pellets and fired in rotary kilns at high temperatures. This causes the clay to expand, like popcorn, and become porous. It is light in weight, and does not compact over time. The shape of an individual pellet can be irregular or uniform depending on brand and manufacturing process. The manufacturers consider expanded clay to be an ecologically sustainable and re-usable growing medium because of its ability to be cleaned and sterilized, typically by washing in solutions of white vinegar, chlorine bleach, or hydrogen peroxide (H2O2), and rinsing completely. Another view is that clay pebbles are best not re-used even when they are cleaned, due to root growth that may enter the medium. Breaking open a clay pebble after a crop has been shown to reveal this growth. Growstones, made from glass waste, have both more air and water retention space than perlite and peat. This aggregate holds more water than parboiled rice hulls. Growstones by volume consist of 0.5 to 5% calcium carbonate – for a standard 5.1 kg bag of Growstones that corresponds to 25.8 to 258 grams of calcium carbonate. The remainder is soda-lime glass. Regardless of hydroponic demand, coconut coir is a natural byproduct derived from coconut processing. The outer husk of a coconut consists of fibers which are commonly used to make a myriad of items ranging from floor mats to brushes. After the long fibers are used for those applications, the dust and short fibers are merged to create coir. Coconuts absorb high levels of nutrients throughout their life cycle, so the coir must undergo a maturation process before it becomes a viable growth medium. This process removes salt, tannins and phenolic compounds through substantial water washing. 
Contaminated water is a byproduct of this process, as three hundred to six hundred liters of water are needed per cubic meter of coir. Additionally, this maturation can take up to six months, and one study concluded that the working conditions during the maturation process are dangerous and would be illegal in North America and Europe. Despite requiring this attention and posing health and environmental concerns, coconut coir has impressive material properties. When exposed to water, the brown, dry, chunky and fibrous material expands to nearly three to four times its original size. This characteristic, combined with coconut coir's water retention capacity and resistance to pests and diseases, makes it an effective growth medium. Used as an alternative to rock wool, coconut coir, also known as coir peat, offers optimized growing conditions. Parboiled rice husks (PBH) are an agricultural byproduct that would otherwise have little use. They decay over time, allow drainage, and retain even less water than growstones. A study showed that rice husks did not alter the effects of plant growth regulators. Perlite is a volcanic rock that has been superheated into very lightweight expanded glass pebbles. It is used loose or in plastic sleeves immersed in the water. It is also used in potting soil mixes to decrease soil density. Perlite has similar properties and uses to vermiculite but, in general, holds more air and less water and is buoyant. Like perlite, vermiculite is a mineral that has been superheated until it has expanded into light pebbles. Vermiculite holds more water than perlite and has a natural "wicking" property that can draw water and nutrients in a passive hydroponic system. If too much water and not enough air surrounds the plant's roots, it is possible to gradually lower the medium's water-retention capability by mixing in increasing quantities of perlite. Like perlite, pumice is a lightweight, mined volcanic rock that finds application in hydroponics. Sand is cheap and easily available. However, it is heavy, does not hold water very well, and must be sterilized between uses. Because sand is easily available and in high demand, shortages are an emerging concern as supplies are depleted. Gravel of the same type used in aquariums works well, though any small gravel can be used, provided it is washed first. Indeed, plants growing in a typical traditional gravel filter bed, with water circulated using electric powerhead pumps, are in effect being grown using gravel hydroponics. Gravel is inexpensive, easy to keep clean, drains well and will not become waterlogged. However, it is also heavy, and, if the system does not provide continuous water, the plant roots may dry out. Wood fibre, produced from steam friction of wood, is a very efficient organic substrate for hydroponics. It has the advantage that it keeps its structure for a very long time. Wood wool (i.e. wood slivers) has been used since the earliest days of hydroponics research. However, more recent research suggests that wood fibre may have detrimental effects on "plant growth regulators". Wool from shearing sheep is a little-used yet promising renewable growing medium. In a study comparing wool with peat slabs, coconut fibre slabs, perlite and rockwool slabs to grow cucumber plants, sheep wool had a greater air capacity of 70%, which decreased with use to a comparable 43%, and a water capacity that increased from 23% to 44% with use. 
Using sheep wool resulted in the greatest yield out of the tested substrates, while application of a biostimulator consisting of humic acid, lactic acid and Bacillus subtilis improved yields in all substrates. Rock wool (mineral wool) is the most widely used medium in hydroponics. Rock wool is an inert substrate suitable for both run-to-waste and recirculating systems. Rock wool is made from molten rock, basalt or 'slag' that is spun into bundles of single-filament fibres and bonded into a medium capable of capillary action; it is, in effect, protected from most common microbiological degradation. Rock wool is typically used only for the seedling stage, or with newly cut clones, but can remain with the plant base for its lifetime. Rock wool has many advantages and some disadvantages, the latter being possible mechanical skin irritancy whilst handling (1:1000). Flushing with cold water usually brings relief. Advantages include its proven efficiency and effectiveness as a commercial hydroponic substrate. Most of the rock wool sold to date is a non-hazardous, non-carcinogenic material, falling under Note Q of the European Union Classification, Labelling and Packaging (CLP) Regulation. Mineral wool products can be engineered to hold large quantities of water and air that aid root growth and nutrient uptake in hydroponics; their fibrous nature also provides a good mechanical structure to hold the plant stable. The naturally high pH of mineral wool makes it initially unsuitable for plant growth and requires "conditioning" to produce a wool with an appropriate, stable pH. Brick shards have similar properties to gravel. They have the added disadvantages of possibly altering the pH and requiring extra cleaning before reuse. Polystyrene packing peanuts are inexpensive, readily available, and have excellent drainage. However, they can be too lightweight for some uses. They are used mainly in closed-tube systems. Note that non-biodegradable polystyrene peanuts must be used; biodegradable packing peanuts will decompose into a sludge. Plants may absorb styrene and pass it to their consumers; this is a possible health risk. The formulation of hydroponic solutions is an application of plant nutrition, with nutrient deficiency symptoms mirroring those found in traditional soil-based agriculture. However, the underlying chemistry of hydroponic solutions can differ from soil chemistry in many significant ways. Important differences include: As in conventional agriculture, nutrients should be adjusted to satisfy Liebig's law of the minimum for each specific plant variety. Nevertheless, generally acceptable concentrations for nutrient solutions exist, with minimum and maximum concentration ranges for most plants being somewhat similar. Most nutrient solutions are mixed to have concentrations between 1,000 and 2,500 ppm. Acceptable concentrations for the individual nutrient ions, which comprise that total ppm figure, are summarized in the following table. For essential nutrients, concentrations below these ranges often lead to nutrient deficiencies while exceeding these ranges can lead to nutrient toxicity. Optimum nutrition concentrations for plant varieties are found empirically by experience or by plant tissue tests. Organic fertilizers can be used to supplement or entirely replace the inorganic compounds used in conventional hydroponic solutions. However, using organic fertilizers introduces a number of challenges that are not easily resolved. 
Examples include: Nevertheless, if precautions are taken, organic fertilizers can be used successfully in hydroponics. Examples of suitable materials, with their average nutritional contents tabulated in terms of percent dried mass, are listed in the following table. Micronutrients can be sourced from organic fertilizers as well. For example, composted pine bark is high in manganese and is sometimes used to fulfill that mineral requirement in hydroponic solutions. To satisfy requirements for National Organic Programs, pulverized, unrefined minerals (e.g. Gypsum, Calcite, and glauconite) can also be added to satisfy a plant's nutritional needs. In addition to chelating agents, humic acids can be added to increase nutrient uptake. Managing nutrient concentrations and pH values within acceptable ranges is essential for successful hydroponic horticulture. Common tools used to manage hydroponic solutions include: Chemical equipment can also be used to perform accurate chemical analyses of nutrient solutions. Examples include: Using chemical equipment for hydroponic solutions can be beneficial to growers of any background because nutrient solutions are often reusable. Because nutrient solutions are virtually never completely depleted, and should never be due to the unacceptably low osmotic pressure that would result, re-fortification of old solutions with new nutrients can save growers money and can control point source pollution, a common source for the eutrophication of nearby lakes and streams. Although pre-mixed concentrated nutrient solutions are generally purchased from commercial nutrient manufacturers by hydroponic hobbyists and small commercial growers, several tools exist to help anyone prepare their own solutions without extensive knowledge about chemistry. The free and open source tools HydroBuddy and HydroCal have been created by professional chemists to help any hydroponics grower prepare their own nutrient solutions. The first program is available for Windows, Mac and Linux while the second one can be used through a simple JavaScript interface. Both programs allow for basic nutrient solution preparation although HydroBuddy provides added functionality to use and save custom substances, save formulations and predict electrical conductivity values. Often mixing hydroponic solutions using individual salts is impractical for hobbyists or small-scale commercial growers because commercial products are available at reasonable prices. However, even when buying commercial products, multi-component fertilizers are popular. Often these products are bought as three part formulas which emphasize certain nutritional roles. For example, solutions for vegetative growth (i.e. high in nitrogen), flowering (i.e. high in potassium and phosphorus), and micronutrient solutions (i.e. with trace minerals) are popular. The timing and application of these multi-part fertilizers should coincide with a plant's growth stage. For example, at the end of an annual plant's life cycle, a plant should be restricted from high nitrogen fertilizers. In most plants, nitrogen restriction inhibits vegetative growth and helps induce flowering. With pest problems reduced and nutrients constantly fed to the roots, productivity in hydroponics is high; however, growers can further increase yield by manipulating a plant's environment by constructing sophisticated growrooms. To increase yield further, some sealed greenhouses inject CO2 into their environment to help improve growth and plant fertility.
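The nutrient-management ideas above — keeping total solution strength within the commonly quoted 1,000–2,500 ppm window and matching a multi-part fertilizer to the plant's growth stage — can be combined into one small illustrative routine. The stage names, formula labels, and cut-offs below are assumptions made for the sake of the sketch, not values taken from any commercial product:

```python
# Illustrative sketch: check total solution strength against the typical range
# and pick a fertilizer emphasis by growth stage. Stage names and formula
# descriptions are assumed, mirroring the roles described in the text.

PPM_MIN, PPM_MAX = 1000, 2500

FEED_BY_STAGE = {
    "vegetative": "grow formula (high nitrogen)",
    "flowering": "bloom formula (high potassium and phosphorus)",
    "late flowering": "low-nitrogen feed (restricting N helps induce flowering)",
}


def solution_status(total_ppm: float) -> str:
    """Classify total dissolved nutrient concentration against the usual range."""
    if total_ppm < PPM_MIN:
        return "too dilute - risk of nutrient deficiency"
    if total_ppm > PPM_MAX:
        return "too concentrated - risk of nutrient toxicity"
    return "within the typical range"


def feed_for_stage(stage: str) -> str:
    """Pick a fertilizer emphasis for the given growth stage."""
    return FEED_BY_STAGE.get(stage, "unknown stage - consult a feeding chart")


if __name__ == "__main__":
    print(solution_status(1800))        # within the typical range
    print(feed_for_stage("flowering"))  # bloom formula (high potassium and phosphorus)
```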
https://en.wikipedia.org/wiki?curid=14133
Henry Purcell Henry Purcell (September 1659 – 21 November 1695) was an English composer. Although it incorporated Italian and French stylistic elements, Purcell's was a uniquely English form of Baroque music. He is generally considered to be one of the greatest English composers; no later native-born English composer approached his fame until Edward Elgar, Ralph Vaughan Williams, William Walton and Benjamin Britten in the 20th century. Purcell was born in St Ann's Lane, Old Pye Street, Westminster – the area of London later known as Devil's Acre – in 1659. Henry Purcell Senior, whose older brother Thomas Purcell (died 1682) was a musician, was a gentleman of the Chapel Royal and sang at the coronation of King Charles II of England. Henry the elder had three sons: Edward, Henry and Daniel. Daniel Purcell, the youngest of the brothers, was also a prolific composer who wrote the music for much of the final act of "The Indian Queen" after Henry Purcell's death. Henry Purcell's family lived just a few hundred yards west of Westminster Abbey from 1659 onwards. After his father's death in 1664, Purcell was placed under the guardianship of his uncle Thomas, who showed him great affection and kindness. Thomas was himself a gentleman of His Majesty's Chapel, and arranged for Henry to be admitted as a chorister. Henry studied first under Captain Henry Cooke, Master of the Children, and afterwards under Pelham Humfrey, Cooke's successor. The composer Matthew Locke was a family friend and, particularly with his semi-operas, probably also had a musical influence on the young Purcell. Henry was a chorister in the Chapel Royal until his voice broke in 1673, when he became assistant to the organ-builder John Hingston, who held the post of keeper of wind instruments to the King. Purcell is said to have been composing at nine years old, but the earliest work that can be certainly identified as his is an ode for the King's birthday, written in 1670. (The dates for his compositions are often uncertain, despite considerable research.) It is assumed that the three-part song "Sweet tyranness, I now resign" was written by him as a child. After Humfrey's death, Purcell continued his studies under Dr John Blow. He attended Westminster School and in 1676 was appointed copyist at Westminster Abbey. Henry Purcell's earliest anthem "Lord, who can tell" was composed in 1678. It is a setting of a psalm that is prescribed for Christmas Day and also to be read at morning prayer on the fourth day of the month. In 1679, he wrote songs for John Playford's "Choice Ayres, Songs and Dialogues" and an anthem, the name of which is unknown, for the Chapel Royal. From an extant letter written by Thomas Purcell we learn that this anthem was composed for the exceptionally fine voice of the Rev. John Gostling, then at Canterbury, but afterwards a gentleman of His Majesty's Chapel. Purcell wrote several anthems at different times for Gostling's extraordinary basso profondo voice, which is known to have had a range of at least two full octaves, from D below the bass staff to the D above it. The dates of very few of these sacred compositions are known; perhaps the most notable example is the anthem "They that go down to the sea in ships." In gratitude for the providential escape of King Charles II from shipwreck, Gostling, who had been of the royal party, put together some verses from the Psalms in the form of an anthem and requested Purcell to set them to music. 
The challenging work opens with a passage which traverses the full extent of Gostling's range, beginning on the upper D and descending two octaves to the lower. In 1679, Blow, who had been appointed organist of Westminster Abbey 10 years before, resigned his office in favour of Purcell. Purcell now devoted himself almost entirely to the composition of sacred music, and for six years severed his connection with the theatre. However, during the early part of the year, probably before taking up his new office, he had produced two important works for the stage, the music for Nathaniel Lee's "Theodosius", and Thomas d'Urfey's "Virtuous Wife". Between 1680 and 1688 Purcell wrote music for seven plays. The composition of his chamber opera "Dido and Aeneas", which forms a very important landmark in the history of English dramatic music, has been attributed to this period, and its earliest production may well have predated the documented one of 1689. It was written to a libretto furnished by Nahum Tate, and performed in 1689 in cooperation with Josias Priest, a dancing master and the choreographer for the Dorset Garden Theatre. Priest's wife kept a boarding school for young gentlewomen, first in Leicester Fields and afterwards at Chelsea, where the opera was performed. It is occasionally considered the first genuine English opera, though that title is usually given to Blow's "Venus and Adonis": as in Blow's work, the action does not progress in spoken dialogue but in Italian-style recitative. Each work runs to less than one hour. At the time, "Dido and Aeneas" never found its way to the theatre, though it appears to have been very popular in private circles. It is believed to have been extensively copied, but only one song was printed by Purcell's widow in "Orpheus Britannicus", and the complete work remained in manuscript until 1840, when it was printed by the Musical Antiquarian Society under the editorship of Sir George Macfarren. The composition of "Dido and Aeneas" gave Purcell his first chance to write a sustained musical setting of a dramatic text. It was his only opportunity to compose a work in which the music carried the entire drama. The story of "Dido and Aeneas" derives from the original source in Virgil's epic the "Aeneid". Soon after Purcell's marriage, in 1682, on the death of Edward Lowe, he was appointed organist of the Chapel Royal, an office which he was able to hold simultaneously with his position at Westminster Abbey. His eldest son was born in this same year, but he was short-lived. His first printed composition, "Twelve Sonatas", was published in 1683. For some years after this, he was busy in the production of sacred music, odes addressed to the king and royal family, and other similar works. In 1685, he wrote two of his finest anthems, "I was glad" and "My heart is inditing," for the coronation of King James II. In 1690 he composed a setting of the birthday ode for Queen Mary, "Arise, my muse" and four years later wrote one of his most elaborate, important and magnificent works – a setting for another birthday ode for the Queen, written by Nahum Tate, entitled "Come Ye Sons of Art". In 1687, he resumed his connection with the theatre by furnishing the music for John Dryden's tragedy "Tyrannick Love". 
In this year, Purcell also composed a march and passepied called "Quick-step", which became so popular that Lord Wharton adapted the latter to the fatal verses of "Lillibullero"; and in or before January 1688, Purcell composed his anthem "Blessed are they that fear the Lord" by express command of the King. A few months later, he wrote the music for D'Urfey's play, "The Fool's Preferment". In 1690, he composed the music for Betterton's adaptation of Fletcher and Massinger's "Prophetess" (afterwards called "Dioclesian") and Dryden's "Amphitryon". In 1691, he wrote the music for what is sometimes considered his dramatic masterpiece, "King Arthur", or "The British Worthy ". In 1692, he composed "The Fairy-Queen" (an adaptation of Shakespeare's "A Midsummer Night's Dream"), the score of which (his longest for theatre) was rediscovered in 1901 and published by the Purcell Society. "The Indian Queen" followed in 1695, in which year he also wrote songs for Dryden and Davenant's version of Shakespeare's "The Tempest" (recently, this has been disputed by music scholars), probably including "Full fathom five" and "Come unto these yellow sands". "The Indian Queen" was adapted from a tragedy by Dryden and Sir Robert Howard. In these semi-operas (another term for which at the time was "dramatic opera"), the main characters of the plays do not sing but speak their lines: the action moves in dialogue rather than recitative. The related songs are sung "for" them by singers, who have minor dramatic roles. Purcell's "Te Deum" and "Jubilate Deo" were written for Saint Cecilia's Day, 1694, the first English "Te Deum" ever composed with orchestral accompaniment. This work was annually performed at St Paul's Cathedral until 1712, after which it was performed alternately with Handel's "Utrecht Te Deum and Jubilate" until 1743, when both works were replaced by Handel's "Dettingen Te Deum". He composed an anthem and two elegies for Queen Mary II's funeral, his "Funeral Sentences and Music for the Funeral of Queen Mary". Besides the operas and semi-operas already mentioned, Purcell wrote the music and songs for Thomas d'Urfey's "The Comical History of Don Quixote", "Bonduca", "The Indian Queen" and others, a vast quantity of sacred music, and numerous odes, cantatas, and other miscellaneous pieces. The quantity of his instrumental chamber music is minimal after his early career, and his keyboard music consists of an even more minimal number of harpsichord suites and organ pieces. In 1693, Purcell composed music for two comedies: "The Old Bachelor", and "The Double Dealer". Purcell also composed for five other plays within the same year. In July 1695, Purcell composed an ode for the Duke of Gloucester for his sixth birthday. The ode is titled "Who can from joy refrain?" Purcell's four-part sonatas were issued in 1697. In the final six years of his life, Purcell wrote music for forty-two plays. Purcell died in 1695 at his home in Marsham Street, at the height of his career. He is believed to have been 35 or 36 years old at the time. The cause of his death is unclear: one theory is that he caught a chill after returning home late from the theatre one night to find that his wife had locked him out. Another is that he succumbed to tuberculosis. The beginning of Purcell's will reads: Purcell is buried adjacent to the organ in Westminster Abbey. The music that he had earlier composed for Queen Mary's funeral was performed during his funeral as well. Purcell was universally mourned as "a very great master of music."  
Following his death, the officials at Westminster honoured him by unanimously voting that he be buried with no expense in the north aisle of the Abbey. His epitaph reads: "Here lyes Henry Purcell Esq., who left this life and is gone to that Blessed Place where only His harmony can be exceeded." Purcell fathered six children by his wife Frances, four of whom died in infancy. His wife, as well as his son Edward (1689–1740) and daughter Frances, survived him. His wife Frances died in 1706, having published a number of her husband's works, including the now famous collection called "Orpheus Britannicus", in two volumes, printed in 1698 and 1702, respectively. Edward was appointed organist of St Clement's, Eastcheap, London, in 1711 and was succeeded by his son Edward Henry Purcell (died 1765). Both men were buried in St Clement's near the organ gallery. Purcell worked in many genres, both in works closely linked to the court, such as symphony song, to the Chapel Royal, such as the symphony anthem, and the theatre. Among Purcell's most notable works are his opera "Dido and Aeneas" (1688), his semi-operas "Dioclesian" (1690), "King Arthur" (1691), "The Fairy-Queen" (1692) and "Timon of Athens" (1695), as well as the compositions "Hail! Bright Cecilia" (1692), "Come Ye Sons of Art" (1694) and "Funeral Sentences and Music for the Funeral of Queen Mary" (1695). After his death, Purcell was honoured by many of his contemporaries, including his old friend John Blow, who wrote "An Ode, on the Death of Mr. Henry Purcell (Mark how the lark and linnet sing)" with text by his old collaborator, John Dryden. William Croft's 1724 setting for the Burial Service, was written in the style of "the great Master". Croft preserved Purcell's setting of "Thou knowest Lord" (Z 58) in his service, for reasons "obvious to any artist"; it has been sung at every British state funeral ever since. More recently, the English poet Gerard Manley Hopkins wrote a famous sonnet entitled simply "Henry Purcell", with a headnote reading: "The poet wishes well to the divine genius of Purcell and praises him that, whereas other musicians have given utterance to the moods of man's mind, he has, beyond that, uttered in notes the very make and species of man as created both in him and in all men generally." Purcell also had a strong influence on the composers of the English musical renaissance of the early 20th century, most notably Benjamin Britten, who arranged many of Purcell's vocal works for voice(s) and piano in "Britten's Purcell Realizations", including from "Dido and Aeneas", and whose "The Young Person's Guide to the Orchestra" is based on a theme from Purcell's "Abdelazar". Stylistically, the aria "I know a bank" from Britten's opera "A Midsummer Night's Dream" is clearly inspired by Purcell's aria "Sweeter than Roses", which Purcell originally wrote as part of incidental music to Richard Norton's "Pausanias, the Betrayer of His Country". Purcell is honoured together with Johann Sebastian Bach and George Frideric Handel with a feast day on the liturgical calendar of the Episcopal Church (USA) on 28 July. In a 1940 interview Ignaz Friedman stated that he considered Purcell as great as Bach and Beethoven. In Victoria Street, Westminster, England, there is a bronze monument to Purcell, sculpted by Glynn Williams and erected in 1994. Purcell's works have been catalogued by Franklin Zimmerman, who gave them a number preceded by Z. 
A Purcell Club was founded in London in 1836 for promoting the performance of his music, but was dissolved in 1863. In 1876 a Purcell Society was founded, which published new editions of his works. A modern-day Purcell Club has been created, and provides guided tours and concerts in support of Westminster Abbey. Today there is a Henry Purcell Society of Boston, which performs his music in live concerts and, in response to the pandemic, has been streaming concerts online. There is a Purcell Society in London, which collects and studies Purcell manuscripts and musical scores, concentrating on producing revised versions of the scores of all his music. So strong was his reputation that a popular wedding processional was incorrectly attributed to Purcell for many years. The so-called "Purcell's Trumpet Voluntary" was in fact written around 1700 by a British composer named Jeremiah Clarke as the "Prince of Denmark's March". Music for the Funeral of Queen Mary was reworked by Wendy Carlos for the title music of the 1971 Stanley Kubrick film "A Clockwork Orange". The 1973 "Rolling Stone" review of Jethro Tull's "A Passion Play" compared the musical style of the album with that of Purcell. In 2009 Pete Townshend of The Who, an English rock band that established itself in the 1960s, identified Purcell's harmonies, particularly the use of suspension and resolution that Townshend had learned from producer Kit Lambert, as an influence on the band's music (in songs such as "Won't Get Fooled Again" (1971), "I Can See for Miles" (1967) and the very Purcellian intro to "Pinball Wizard"). Purcell's music was widely featured as background music in the Academy Award-winning 1979 film "Kramer vs. Kramer", with a soundtrack on CBS Masterworks Records. In the 21st century, the soundtrack of the 2005 film version of "Pride and Prejudice" features a dance titled "A Postcard to Henry Purcell". This is a version by composer Dario Marianelli of Purcell's "Abdelazar" theme. In the 2004 German-language film "Downfall", the music of Dido's Lament is used repeatedly as the collapse of the Third Reich unfolds. The 2012 film "Moonrise Kingdom" contains Benjamin Britten's version of the Rondeau in Purcell's "Abdelazar", created for his 1946 "The Young Person's Guide to the Orchestra". In 2013, the Pet Shop Boys released their single "Love Is a Bourgeois Construct", incorporating one of the same ground basses from "King Arthur" used by Nyman in his "Draughtsman's Contract" score. Olivia Chaney performs her adaptation of "There's Not a Swain" on her CD "The Longest River". The 1995 film "England, My England" tells the story of an actor who is himself writing a play about Purcell's life and music, and features many of his compositions.
https://en.wikipedia.org/wiki?curid=14135
Hydrophobe In chemistry, hydrophobicity is the physical property of a molecule that is seemingly repelled from a mass of water (known as a hydrophobe). (Strictly speaking, there is no repulsive force involved; it is an absence of attraction.) In contrast, hydrophiles are attracted to water. Hydrophobic molecules tend to be nonpolar and, thus, prefer other neutral molecules and nonpolar solvents. Because water molecules are polar, hydrophobes do not dissolve well among them. Hydrophobic molecules in water often cluster together, forming micelles. Water on hydrophobic surfaces will exhibit a high contact angle. Examples of hydrophobic molecules include the alkanes, oils, fats, and greasy substances in general. Hydrophobic materials are used for oil removal from water, the management of oil spills, and chemical separation processes to remove non-polar substances from polar compounds. Hydrophobic is often used interchangeably with lipophilic, "fat-loving". However, the two terms are not synonymous. While hydrophobic substances are usually lipophilic, there are exceptions, such as the silicones and fluorocarbons. The term "hydrophobe" comes from the Ancient Greek ὑδρόφόβος (hýdrophóbos), "having a horror of water". The hydrophobic interaction is mostly an entropic effect originating from the disruption of the highly dynamic hydrogen bonds between molecules of liquid water by the nonpolar solute, which forms a clathrate-like structure around the non-polar molecules. This structure is more highly ordered than free water molecules, because the water molecules arrange themselves to interact as much as possible with one another, and it therefore corresponds to a lower-entropy state; non-polar molecules consequently clump together to reduce the surface area exposed to water and so increase the entropy of the system. Thus, the two immiscible phases (hydrophilic vs. hydrophobic) will change so that their corresponding interfacial area will be minimal. This effect can be visualized in the phenomenon called phase separation. Superhydrophobic surfaces, such as the leaves of the lotus plant, are those that are extremely difficult to wet. The contact angle of a water droplet on such a surface exceeds 150°. This is referred to as the lotus effect, and is primarily a physical property related to interfacial tension, rather than a chemical property. In 1805, Thomas Young defined the contact angle θ by analyzing the forces acting on a fluid droplet resting on a solid surface surrounded by a gas, arriving at the relation now known as Young's equation, γ_SG = γ_SL + γ_LG cos θ, which balances the solid–gas, solid–liquid and liquid–gas interfacial tensions; θ can be measured using a contact angle goniometer. Wenzel determined that when the liquid is in intimate contact with a microstructured surface, θ will change to θ_W*, given by cos θ_W* = r cos θ, where r is the ratio of the actual area to the projected area. Wenzel's equation shows that microstructuring a surface amplifies the natural tendency of the surface. A hydrophobic surface (one that has an original contact angle greater than 90°) becomes more hydrophobic when microstructured – its new contact angle becomes greater than the original. However, a hydrophilic surface (one that has an original contact angle less than 90°) becomes more hydrophilic when microstructured – its new contact angle becomes less than the original. Cassie and Baxter found that if the liquid is suspended on the tops of microstructures, θ will change to θ_CB*, given by cos θ_CB* = φ(cos θ + 1) − 1, where φ is the area fraction of the solid that touches the liquid. Liquid in the Cassie–Baxter state is more mobile than in the Wenzel state. 
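As a worked illustration of the two wetting relations above, the following sketch computes the apparent contact angles predicted by the Wenzel and Cassie–Baxter equations for an assumed intrinsic contact angle, roughness ratio, and solid fraction; the numbers are illustrative, not measurements of any real surface:

```python
# Apparent contact angles on a rough surface from the Wenzel and
# Cassie-Baxter relations quoted above. Input values are assumed.

import math


def wenzel_angle(theta_deg: float, r: float) -> float:
    """Apparent angle from cos(theta_W*) = r * cos(theta)."""
    cos_w = r * math.cos(math.radians(theta_deg))
    cos_w = max(-1.0, min(1.0, cos_w))  # clamp: large r can push past the physical limit
    return math.degrees(math.acos(cos_w))


def cassie_baxter_angle(theta_deg: float, phi: float) -> float:
    """Apparent angle from cos(theta_CB*) = phi * (cos(theta) + 1) - 1."""
    cos_cb = phi * (math.cos(math.radians(theta_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(cos_cb))


if __name__ == "__main__":
    theta = 110.0  # intrinsic contact angle of the flat material, degrees (assumed)
    r = 1.8        # roughness ratio: actual area / projected area (assumed)
    phi = 0.2      # solid fraction in contact with the liquid (assumed)

    print(f"Wenzel:        {wenzel_angle(theta, r):.1f} degrees")           # ~128 degrees
    print(f"Cassie-Baxter: {cassie_baxter_angle(theta, phi):.1f} degrees")  # ~150 degrees
```

Comparing the two predicted angles in this way is precisely the comparison described in the next paragraph for judging which wetting state is more likely to exist.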
We can predict whether the Wenzel or Cassie–Baxter state should exist by calculating the new contact angle with both equations. By a minimization-of-free-energy argument, the relation that predicts the smaller new contact angle is the state most likely to exist. Stated in mathematical terms, for the Cassie–Baxter state to exist, the inequality cos θ < (φ − 1)/(r − φ) must be true. A recent alternative criterion for the Cassie–Baxter state asserts that the Cassie–Baxter state exists when the following two criteria are met: 1) contact line forces overcome body forces of unsupported droplet weight, and 2) the microstructures are tall enough to prevent the liquid that bridges microstructures from touching the base of the microstructures. A new criterion for the switch between Wenzel and Cassie-Baxter states has been developed recently based on surface roughness and surface energy. The criterion focuses on the air-trapping capability under liquid droplets on rough surfaces, which can tell whether Wenzel's model or Cassie-Baxter's model should be used for a certain combination of surface roughness and energy. Contact angle is a measure of static hydrophobicity, and contact angle hysteresis and slide angle are dynamic measures. Contact angle hysteresis is a phenomenon that characterizes surface heterogeneity. When a pipette injects a liquid onto a solid, the liquid will form some contact angle. As the pipette injects more liquid, the droplet will increase in volume, the contact angle will increase, but its three-phase boundary will remain stationary until it suddenly advances outward. The contact angle the droplet had immediately before advancing outward is termed the advancing contact angle. The receding contact angle is now measured by pumping the liquid back out of the droplet. The droplet will decrease in volume, the contact angle will decrease, but its three-phase boundary will remain stationary until it suddenly recedes inward. The contact angle the droplet had immediately before receding inward is termed the receding contact angle. The difference between advancing and receding contact angles is termed contact angle hysteresis and can be used to characterize surface heterogeneity, roughness, and mobility. Surfaces that are not homogeneous will have domains that impede motion of the contact line. The slide angle is another dynamic measure of hydrophobicity and is measured by depositing a droplet on a surface and tilting the surface until the droplet begins to slide. In general, liquids in the Cassie–Baxter state exhibit lower slide angles and contact angle hysteresis than those in the Wenzel state. Dettre and Johnson discovered in 1964 that the superhydrophobic lotus effect phenomenon was related to rough hydrophobic surfaces, and they developed a theoretical model based on experiments with glass beads coated with paraffin or TFE telomer. The self-cleaning property of superhydrophobic micro-nanostructured surfaces was reported in 1977. Perfluoroalkyl, perfluoropolyether, and RF plasma-formed superhydrophobic materials were developed, used for electrowetting, and commercialized for bio-medical applications between 1986 and 1995. Other technology and applications have emerged since the mid-1990s. A durable superhydrophobic hierarchical composition, applied in one or two steps, was disclosed in 2002, comprising nano-sized particles ≤ 100 nanometers overlaying a surface having micrometer-sized features or particles ≤ 100 micrometers. 
The larger particles were observed to protect the smaller particles from mechanical abrasion. In recent research, superhydrophobicity has been reported by allowing alkylketene dimer (AKD) to solidify into a nanostructured fractal surface. Many papers have since presented fabrication methods for producing superhydrophobic surfaces including particle deposition, sol-gel techniques, plasma treatments, vapor deposition, and casting techniques. Current opportunity for research impact lies mainly in fundamental research and practical manufacturing. Debates have recently emerged concerning the applicability of the Wenzel and Cassie–Baxter models. In an experiment designed to challenge the surface energy perspective of the Wenzel and Cassie–Baxter model and promote a contact line perspective, water drops were placed on a smooth hydrophobic spot in a rough hydrophobic field, a rough hydrophobic spot in a smooth hydrophobic field, and a hydrophilic spot in a hydrophobic field. Experiments showed that the surface chemistry and geometry at the contact line affected the contact angle and contact angle hysteresis, but the surface area inside the contact line had no effect. An argument that increased jaggedness in the contact line enhances droplet mobility has also been proposed. Many hydrophobic materials found in nature rely on Cassie's law and are biphasic on the submicrometer level with one component air. The lotus effect is based on this principle. Inspired by it, many functional superhydrophobic surfaces have been prepared. An example of a bionic or biomimetic superhydrophobic material in nanotechnology is nanopin film. One study presents a vanadium pentoxide surface that switches reversibly between superhydrophobicity and superhydrophilicity under the influence of UV radiation. According to the study, any surface can be modified to this effect by application of a suspension of rose-like V2O5 particles, for instance with an inkjet printer. Once again hydrophobicity is induced by interlaminar air pockets (separated by 2.1 nm distances). The UV effect is also explained. UV light creates electron-hole pairs, with the holes reacting with lattice oxygen, creating surface oxygen vacancies, while the electrons reduce V5+ to V3+. The oxygen vacancies are met by water, and it is this water absorbency by the vanadium surface that makes it hydrophilic. By extended storage in the dark, water is replaced by oxygen and hydrophilicity is once again lost. A significant majority of hydrophobic surfaces have their hydrophobic properties imparted by structural or chemical modification of a surface of a bulk material, through either coatings or surface treatments. That is to say, the presence of molecular species (usually organic) or structural features results in high contact angles of water. In recent years, rare earth oxides have been shown to possess intrinsic hydrophobicity. The intrinsic hydrophobicity of rare earth oxides depends on surface orientation and oxygen vacancy levels, and is naturally more robust than coatings or surface treatments, having potential applications in condensers and catalysts that can operate at high temperatures or corrosive environments. Hydrophobic concrete has been produced since the mid-20th century. Active recent research on superhydrophobic materials might eventually lead to more industrial applications. 
A simple routine of coating cotton fabric with silica or titania particles by sol-gel technique has been reported, which protects the fabric from UV light and makes it superhydrophobic. An efficient routine has been reported for making polyethylene superhydrophobic and thus self-cleaning. 99% of dirt on such a surface is easily washed away. Patterned superhydrophobic surfaces also have promise for lab-on-a-chip microfluidic devices and can drastically improve surface-based bioanalysis. In pharmaceuticals, hydrophobicity of pharmaceutical blends affects important quality attributes of final products, such as drug dissolution and hardness. Methods have been developed to measure the hydrophobicity of pharmaceutical materials.
https://en.wikipedia.org/wiki?curid=14136
Harley-Davidson Harley-Davidson, Inc., H-D, or Harley, is an American motorcycle manufacturer founded in 1903 in Milwaukee, Wisconsin. It was one of two major American motorcycle manufacturers to survive the Great Depression, along with Indian. The company has survived numerous ownership arrangements, subsidiary arrangements, periods of poor economic health and product quality, and intense global competition to become one of the world's largest motorcycle manufacturers and an iconic brand widely known for its loyal following. There are owner clubs and events worldwide, as well as a company-sponsored, brand-focused museum. Harley-Davidson is noted for a style of customization that gave rise to the chopper motorcycle style. The company traditionally marketed heavyweight, air-cooled cruiser motorcycles with engine displacements greater than 700 cc, but it has broadened its offerings to include more contemporary VRSC (2002) and middle-weight Street (2015) platforms. Harley-Davidson manufactures its motorcycles at factories in York, Pennsylvania; Milwaukee, Wisconsin; Kansas City, Missouri (closing); Manaus, Brazil; and Bawal, India. Construction of a new plant in Thailand began in late 2018. The company markets its products worldwide, and also licenses and markets merchandise under the Harley-Davidson brand, among them apparel, home decor and ornaments, accessories, toys, scale figures of its motorcycles, and video games based on its motorcycle line and the community. In 1901, the young William S. Harley drew up plans for a small engine with a displacement of 7.07 cubic inches (116 cc) and four-inch (102 mm) flywheels designed for use in a regular pedal-bicycle frame. Over the next two years, he and his childhood friend Arthur Davidson worked on their motor-bicycle using the northside Milwaukee machine shop at the home of their friend Henry Melk. It was finished in 1903 with the help of Arthur's brother Walter Davidson. Upon testing their power-cycle, Harley and the Davidson brothers found it unable to climb the hills around Milwaukee without pedal assistance, and they wrote off their first motor-bicycle as a valuable learning experiment. The three began work on a new and improved machine with an engine of 24.74 cubic inches (405 cc) and heavier flywheels. Its advanced loop-frame pattern was similar to the 1903 Milwaukee Merkel motorcycle designed by Joseph Merkel, later of Flying Merkel fame. The bigger engine and loop-frame design took it out of the motorized bicycle category and marked the path to future motorcycle designs. They also received help with their bigger engine from outboard motor pioneer Ole Evinrude, who was then building gas engines of his own design for automotive use on Milwaukee's Lake Street. The prototype of the new loop-frame Harley-Davidson was assembled in a shed in the Davidson family backyard. Most of the major parts, however, were made elsewhere, including some probably fabricated at the West Milwaukee railshops where oldest brother William A. Davidson was toolroom foreman. This prototype machine was functional by September 8, 1904, when it competed in a Milwaukee motorcycle race held at State Fair Park. Edward Hildebrand rode it and placed fourth in the race. In January 1905, the company placed small advertisements in the "Automobile and Cycle Trade Journal" offering bare Harley-Davidson engines to the do-it-yourself trade. By April, they were producing complete motorcycles on a very limited basis. That year, Harley-Davidson dealer Carl H. 
Lang of Chicago sold three bikes from the five built in the Davidson backyard shed. Years later, the company moved the original shed to the Juneau Avenue factory where it stood for many decades as a tribute. In 1906, Harley and the Davidson brothers built their first factory on Chestnut Street (later Juneau Avenue), at the current location of Harley-Davidson's corporate headquarters. The first Juneau Avenue plant was a single-story wooden structure. The company produced about 50 motorcycles that year. In 1907, William S. Harley graduated from the University of Wisconsin–Madison with a degree in mechanical engineering. That year, they expanded the factory with a second floor and later with facings and additions of Milwaukee pale yellow ("cream") brick. With the new facilities, production increased to 150 motorcycles in 1907. The company was officially incorporated that September. They also began selling their motorcycles to police departments around this time, a market that has been important to them ever since. In 1907, William A. Davidson quit his job as tool foreman for the Milwaukee Road railroad and joined the Motor Company. Production in 1905 and 1906 consisted entirely of single-cylinder models with 26.84 cubic inch (440 cc) engines. In February 1907, they displayed a prototype model at the Chicago Automobile Show with a 45-degree V-Twin engine. Very few V-Twin models were built between 1907 and 1910. These first V-Twins displaced 53.68 cubic inches (880 cc) and produced about double the power of the first singles, with a correspondingly higher top speed. Production jumped from 450 motorcycles in 1908 to 1,149 machines in 1909. In 1911, the company introduced an improved V-Twin model with a displacement of 49.48 cubic inches (811 cc) and mechanically operated intake valves, as opposed to the "automatic" intake valves used on earlier V-Twins that opened by engine vacuum. It was smaller than earlier twins but gave better performance. After 1913, the majority of bikes produced by Harley-Davidson were V-Twin models. In 1912, Harley-Davidson introduced their patented "Ful-Floteing Seat", which was suspended by a coil spring inside the seat tube. The spring tension could be adjusted to suit the rider's weight, and a generous amount of travel was available. Harley-Davidson used seats of this type until 1958. By 1913, the yellow brick factory had been demolished and a new five-story structure had been built on the site, which took up two blocks along Juneau Avenue and around the corner on 38th Street. Despite the competition, Harley-Davidson was already pulling ahead of Indian and dominated motorcycle racing after 1914. Production that year swelled to 16,284 machines. In 1917, the United States entered World War I and the military demanded motorcycles for the war effort. Harleys had already been used by the military in the Pancho Villa Expedition, but World War I was the first time the motorcycle was adopted for military issue, the British having done so first with the Model H produced by Triumph Motorcycles Ltd in 1915. The U.S. military purchased over 20,000 motorcycles from Harley-Davidson. Harley-Davidson launched a line of bicycles in 1917 in hopes of recruiting customers for its motorcycles. Models included the traditional diamond frame men's bicycle, a step-through frame 3–18 "Ladies Standard", and a 5–17 "Boy Scout" for youth. The effort was discontinued in 1923 because of disappointing sales. 
The bicycles were built for Harley-Davidson in Dayton, Ohio by the Davis Machine Company from 1917 to 1921, when Davis stopped manufacturing bicycles. By 1920, Harley-Davidson was the largest motorcycle manufacturer in the world, with 28,189 machines produced and dealers in 67 countries. In 1921, Otto Walker set a record on a Harley-Davidson as the first rider to win a motorcycle race at an average speed greater than 100 mph. Harley-Davidson put several improvements in place during the 1920s, such as a new 74 cubic inch (1,212.6 cc) V-Twin introduced in 1921, and the "teardrop" gas tank in 1925. They added a front brake in 1928, although only on the J/JD models. In the late summer of 1929, Harley-Davidson introduced its 45 cubic inch (737 cc) flathead V-Twin to compete with the Indian 101 Scout and the Excelsior Super X. This was the "D" model, produced from 1929 to 1931. Riders of Indian motorcycles derisively referred to it as the "three cylinder Harley" because the generator was upright and parallel to the front cylinder. The Great Depression began a few months after the introduction of this model. Harley-Davidson's sales fell from 21,000 in 1929 to 3,703 in 1933. Despite this, Harley-Davidson unveiled a new lineup for 1934, which included a flathead engine and Art Deco styling. In order to survive the remainder of the Depression, the company manufactured industrial powerplants based on their motorcycle engines. They also designed and built a three-wheeled delivery vehicle called the Servi-Car, which remained in production until 1973. In the mid-1930s, Alfred Rich Child opened a production line in Japan with the VL. The Japanese license-holder, Sankyo Seiyaku Corporation, severed its business relations with Harley-Davidson in 1936 and continued manufacturing the VL under the Rikuo name. A larger flathead engine was added to the line in 1935, by which time the single-cylinder motorcycles had been discontinued. In 1936, the 61E and 61EL models with the "Knucklehead" OHV engines were introduced. Valvetrain problems in early Knucklehead engines required a redesign halfway through its first year of production and retrofitting of the new valvetrain on earlier engines. By 1937, all Harley-Davidson flathead engines were equipped with dry-sump oil recirculation systems similar to the one introduced in the "Knucklehead" OHV engine. The revised V and VL models were renamed U and UL, the VH and VLH were renamed UH and ULH, and the R was renamed W. In 1941, the 74 cubic-inch "Knucklehead" was introduced as the F and the FL. The flathead UH and ULH models were discontinued after 1941, while the 74 cubic inch U & UL flathead models were produced up to 1948. One of only two American cycle manufacturers to survive the Great Depression, Harley-Davidson again produced large numbers of motorcycles for the US Army in World War II and resumed civilian production afterwards, producing a range of large V-twin motorcycles that were successful both on racetracks and for private buyers. Harley-Davidson, on the eve of World War II, was already supplying the Army with a military-specific version of its WL line, called the WLA. The A in this case stood for "Army". Upon the outbreak of war, the company, along with most other manufacturing enterprises, shifted to war work. More than 90,000 military motorcycles, mostly WLAs and WLCs (the Canadian version), were produced, many to be provided to allies. Harley-Davidson received two Army-Navy 'E' Awards, one in 1943 and the other in 1945, which were awarded for Excellence in Production. 
Shipments to the Soviet Union under the Lend-Lease program numbered at least 30,000. The WLAs produced during all four years of war production generally have 1942 serial numbers. Production of the WLA stopped at the end of World War II, but was resumed from 1950 to 1952 for use in the Korean War. The U.S. Army also asked Harley-Davidson to produce a new motorcycle with many of the features of BMW's side-valve and shaft-driven R71. Harley-Davidson largely copied the BMW engine and drive train and produced the shaft-driven 750 cc 1942 Harley-Davidson XA. It shared no dimensions, parts, or design concepts (except side valves) with any prior Harley-Davidson engine. Due to the superior cooling of the flat-twin engine with the cylinders across the frame, Harley's XA cylinder heads ran 100 °F (56 °C) cooler than its V-twins. The XA never entered full production: the motorcycle by that time had been eclipsed by the Jeep as the Army's general purpose vehicle, and the WLA—already in production—was sufficient for its limited police, escort, and courier roles. Only 1,000 were made, and the XA remains the only shaft-driven Harley-Davidson ever made. As part of war reparations, Harley-Davidson acquired the design of a small German motorcycle, the DKW RT 125, which they adapted, manufactured, and sold from 1948 to 1966. Various models were made, including the Hummer from 1955 to 1959, but they are all colloquially referred to as "Hummers" at present. BSA in the United Kingdom took the same design as the foundation of their BSA Bantam. In 1960, Harley-Davidson consolidated the Model 165 and Hummer lines into the Super-10, introduced the Topper scooter, and bought fifty percent of Aermacchi's motorcycle division. Importation of Aermacchi's 250 cc horizontal single began the following year. The bike bore Harley-Davidson badges and was marketed as the Harley-Davidson Sprint. The engine of the Sprint was increased to 350 cc in 1969 and would remain that size until 1974, when the four-stroke Sprint was discontinued. After the Pacer and Scat models were discontinued at the end of 1965, the Bobcat became the last of Harley-Davidson's American-made two-stroke motorcycles. The Bobcat was manufactured only in the 1966 model year. Harley-Davidson replaced their American-made lightweight two-stroke motorcycles with the Italian Aermacchi-built two-stroke powered M-65, M-65S, and Rapido. The M-65 had a semi-step-through frame and tank. The M-65S was an M-65 with a larger tank that eliminated the step-through feature. The Rapido was a larger bike with a 125 cc engine. The Aermacchi-built Harley-Davidsons became entirely two-stroke powered when the 250 cc two-stroke SS-250 replaced the four-stroke 350 cc Sprint in 1974. Harley-Davidson purchased full control of Aermacchi's motorcycle production in 1974 and continued making two-stroke motorcycles there until 1978, when they sold the facility to Cagiva, owned by the Castiglioni family. Established in 1918, the oldest continuously operating Harley-Davidson dealership outside of the United States is in Australia. Sales in Japan started in 1912; from 1929, Harley-Davidsons were produced in Japan under license by the company Rikuo (Rikuo Internal Combustion Company), at first under the Harley-Davidson name and using the company's tooling, and later under the name Rikuo. Production continued until 1958. In 1952, following their application to the U.S. 
Tariff Commission for a 40 percent tax on imported motorcycles, Harley-Davidson was charged with restrictive practices. In 1969, American Machine and Foundry (AMF) bought the company, streamlined production, and slashed the workforce. These tactics resulted in a labor strike, and the cost-cutting produced lower-quality bikes. The bikes were expensive and inferior in performance, handling, and quality to Japanese motorcycles. Sales and quality declined, and the company almost went bankrupt. The "Harley-Davidson" name was mocked as "Hardly Ableson", "Hardly Driveable", and "Hogly Ferguson", and the nickname "Hog" became pejorative. The early '70s saw the introduction in North America of what the motoring press called the Universal Japanese Motorcycle, which revolutionized the industry and made motorcycling in America more accessible during the 1970s and 1980s. In 1977, following the successful manufacture of the Liberty Edition to commemorate America's bicentennial in 1976, Harley-Davidson produced what has become one of its most controversial models, the Harley-Davidson Confederate Edition. The bike was essentially a stock Harley-Davidson with Confederate-specific paint and details. In 1981, AMF sold the company to a group of 13 investors led by Vaughn Beals and Willie G. Davidson for $80 million. Inventory was strictly controlled using the just-in-time system. In the early eighties, Harley-Davidson claimed that Japanese manufacturers were importing motorcycles into the US in such volume as to harm or threaten to harm domestic producers. After an investigation by the U.S. International Trade Commission, President Reagan in 1983 imposed a 45 percent tariff on imported bikes with engine capacities greater than 700 cc. Harley-Davidson subsequently rejected offers of assistance from Japanese motorcycle makers, although the company did offer to drop the request for the tariff in exchange for loan guarantees from the Japanese. Rather than trying to match the Japanese, the new management deliberately exploited the "retro" appeal of the machines, building motorcycles that adopted the look and feel of their earlier machines and the subsequent customizations of owners of that era. Many components such as brakes, forks, shocks, carburetors, electrics, and wheels were outsourced from foreign manufacturers; quality increased, technical improvements were made, and buyers slowly returned. Harley-Davidson bought the "Sub Shock" cantilever-swingarm rear suspension design from Missouri engineer Bill Davis and developed it into its Softail series of motorcycles, introduced in 1984 with the FXST Softail. In response to a possible loss of the motorcycle market due to the aging of baby-boomers, Harley-Davidson bought luxury motorhome manufacturer Holiday Rambler in 1986. In 1996, the company sold Holiday Rambler to the Monaco Coach Corporation. The "Sturgis" model, boasting a dual belt-drive, was introduced initially in 1980 and was made for three years. This bike was then brought back as a commemorative model in 1991. By 1990, with the introduction of the "Fat Boy", Harley-Davidson once again became the sales leader in the heavyweight (over 750 cc) market. At the time of the Fat Boy model introduction, a story rapidly spread that its silver paint job and other features were inspired by the B-29 bomber, and that the name Fat Boy was a combination of the names of the atomic bombs Fat Man and Little Boy. However, the Urban Legend Reference Pages lists this story as an urban legend. 
1993 and 1994 saw the replacement of FXR models with the Dyna (FXD), which became the sole rubber mount FX Big Twin frame in 1994. The FXR was revived briefly from 1999 to 2000 for special limited editions (FXR2, FXR3 & FXR4). Construction started on the $75 million, 130,000 square-foot (12,000 m2) Harley-Davidson Museum in the Menomonee Valley on June 1, 2006. It opened in 2008 and houses the company's vast collection of historic motorcycles and corporate archives, along with a restaurant, café and meeting space. Harley-Davidson's association with sportbike manufacturer Buell Motorcycle Company began in 1987 when they supplied Buell with fifty surplus XR1000 engines. Buell continued to buy engines from Harley-Davidson until 1993, when Harley-Davidson bought 49 percent of the Buell Motorcycle Company. Harley-Davidson increased its share in Buell to ninety-eight percent in 1998, and to complete ownership in 2003. In an attempt to attract newcomers to motorcycling in general and to Harley-Davidson in particular, Buell developed a low-cost, low-maintenance motorcycle. The resulting single-cylinder Buell Blast was introduced in 2000, and was made through 2009, which, according to Buell, was to be the final year of production. The Buell Blast was the training vehicle for the Harley-Davidson Rider's Edge New Rider Course from 2000 until May 2014, when the company re-branded the training academy and started using the Harley-Davidson Street 500 motorcycles. In those 14 years, more than 350,000 participants in the course learned to ride on the Buell Blast. On October 15, 2009, Harley-Davidson Inc. issued an official statement that it would be discontinuing the Buell line and ceasing production immediately. The stated reason was to focus on the Harley-Davidson brand. The company refused to consider selling Buell. Founder Erik Buell subsequently established Erik Buell Racing and continued to manufacture and develop the company's 1125RR racing motorcycle. In 1998 the first Harley-Davidson factory outside the US opened in Manaus, Brazil, taking advantage of the free economic zone there. The location was positioned to sell motorcycles in the southern hemisphere market. Harley-Davidson hired Elton John to headline their 100th anniversary event on September 1, 2003. Other performers included The Doobie Brothers, Kid Rock, and Tim McGraw. During its period of peak demand, during the late 1990s and early first decade of the 21st century, Harley-Davidson embarked on a program of expanding the number of dealerships throughout the country. At the same time, its current dealers typically had waiting lists that extended up to a year for some of the most popular models. Harley-Davidson, like the auto manufacturers, records a sale not when a consumer buys their product, but rather when it is delivered to a dealer. Therefore, it is possible for the manufacturer to inflate sales numbers by requiring dealers to accept more inventory than desired in a practice called channel stuffing. When demand softened following the unique 2003 model year, this news led to a dramatic decline in the stock price. In April 2004 alone, the price of HOG shares dropped from more than $60 to less than $40. Immediately prior to this decline, retiring CEO Jeffrey Bleustein profited $42 million on the exercise of employee stock options. Harley-Davidson was named as a defendant in numerous class action suits filed by investors who claimed they were intentionally defrauded by Harley-Davidson's management and directors. 
By January 2007, the price of Harley-Davidson shares reached $70. Starting around 2000, several police departments started reporting problems with high speed instability on the Harley-Davidson Touring motorcycles. A Raleigh, North Carolina police officer, Charles Paul, was killed when his 2002 police touring motorcycle crashed after reportedly experiencing a high speed wobble. The California Highway Patrol conducted testing of the Police Touring motorcycles in 2006. The CHP test riders reported experiencing wobble or weave instability while operating the motorcycles on the test track. On February 2, 2007, upon the expiration of their union contract, about 2,700 employees at Harley-Davidson Inc.'s largest manufacturing plant in York, Pennsylvania, went on strike after failing to agree on wages and health benefits. During the pendency of the strike, the company refused to pay for any portion of the striking employees' health care. The day before the strike, after the union voted against the proposed contract and to authorize the strike, the company shut down all production at the plant. The York facility employs more than 3,200 workers, both union and non-union. Harley-Davidson announced on February 16, 2007, that it had reached a labor agreement with union workers at its largest manufacturing plant, a breakthrough in the two-week-old strike. The strike disrupted Harley-Davidson's national production and was felt in Wisconsin, where 440 employees were laid off, and many Harley suppliers also laid off workers because of the strike. On July 11, 2008, Harley-Davidson announced they had signed a definitive agreement to acquire the MV Agusta Group for US$109 million (€70M). MV Agusta Group contains two lines of motorcycles: the high-performance MV Agusta brand and the lightweight Cagiva brand. The acquisition was completed on August 8. On October 15, 2009, Harley-Davidson announced that it would divest its interest in MV Agusta. Harley-Davidson Inc. sold Italian motorcycle maker MV Agusta to Claudio Castiglioni – a member of the family that had purchased Aermacchi from H-D in 1978 – for a reported 3 euros, ending the transaction in the first week of August 2010. Castiglioni was MV Agusta's former owner, and had been MV Agusta's chairman since Harley-Davidson bought it in 2008. As part of the deal, Harley-Davidson put $26M into MV Agusta's accounts, essentially giving Castiglioni $26M to take the brand. In August 2009, Harley-Davidson announced plans to enter the market in India and started selling motorcycles there in 2010. The company established a subsidiary, Harley-Davidson India, in Gurgaon, near Delhi, in 2011 and created an Indian dealer network. According to Interbrand, the value of the Harley-Davidson brand fell by 43 percent to $4.34 billion in 2009. The fall in value is believed to be connected to the 66 percent drop in the company profits in two-quarters of the previous year. On April 29, 2010, Harley-Davidson stated that they must cut $54 million in manufacturing costs from its production facilities in Wisconsin, and that they would explore alternative U.S. sites to accomplish this. The announcement came in the wake of a massive company-wide restructuring, which began in early 2009 and involved the closing of two factories, one distribution center, and the planned elimination of nearly 25 percent of its total workforce (around 3,500 employees). The company announced on September 14, 2010, that it would remain in Wisconsin. 
The classic Harley-Davidson engines are V-twin engines, with a 45° angle between the cylinders. The crankshaft has a single pin, and both pistons are connected to this pin through their connecting rods. This 45° angle is covered under several United States patents and is an engineering tradeoff that allows a large, high-torque engine in a relatively small space. It causes the cylinders to fire at uneven intervals and produces the choppy "potato-potato" sound so strongly linked to the Harley-Davidson brand. To simplify the engine and reduce costs, the V-twin ignition was designed to operate with a single set of points and no distributor. This is known as a dual-fire ignition system, causing both spark plugs to fire regardless of which cylinder is on its compression stroke, with the other spark plug firing on its cylinder's exhaust stroke, effectively "wasting a spark". The exhaust note is basically a throaty growling sound with some popping. The 45° design of the engine thus creates the following plug-firing sequence: the first cylinder fires, the second (rear) cylinder fires 315° later, and then there is a 405° gap until the first cylinder fires again, giving the engine its unique sound. Harley-Davidson has used various ignition systems throughout its history – be it the early points-and-condenser system (Big Twins and Sportsters up to 1978), the magneto ignition system used on some 1958 to 1969 Sportsters, the early electronic ignition with centrifugal mechanical advance weights (all models from mid-1978 to 1979), or the later electronic ignition with a transistorized ignition control module, more familiarly known as the black box or the brain (all models 1980 to present). Starting in 1995, the company introduced Electronic Fuel Injection (EFI) as an option for the 30th anniversary edition Electra Glide. EFI became standard on all Harley-Davidson motorcycles, including Sportsters, upon the introduction of the 2007 product line. In 1991, Harley-Davidson began to participate in the Sound Quality Working Group, founded by Orfield Labs, Bruel and Kjaer, TEAC, Yamaha, Sennheiser, SMS and Cortex. This was the nation's first group to share research on psychological acoustics. Later that year, Harley-Davidson participated in a series of sound quality studies at Orfield Labs, based on recordings taken at the Talladega Superspeedway, with the objective of lowering the sound level to meet EU standards while analytically capturing the "Harley Sound". This research resulted in the bikes that were introduced in compliance with EU standards for 1998. On February 1, 1994, the company filed a sound trademark application for the distinctive sound of the Harley-Davidson motorcycle engine: "The mark consists of the exhaust sound of applicant's motorcycles, produced by V-twin, common crankpin motorcycle engines when the goods are in use". Nine of Harley-Davidson's competitors filed comments opposing the application, arguing that cruiser-style motorcycles of various brands use a single-crankpin V-twin engine which produces a similar sound. These objections were followed by litigation. In June 2000, the company dropped efforts to federally register its trademark. The Revolution engine is based on the VR-1000 Superbike race program, co-developed by Harley-Davidson's Powertrain Engineering team and Porsche Engineering in Stuttgart, Germany. 
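The 315°/405° cadence described above follows directly from the shared crankpin and the 45° bank angle within a 720° four-stroke cycle. The short sketch below is only an illustration of that arithmetic, not Harley-Davidson engineering material; the function name is made up here, and the 90° case is included solely to show how the gaps scale with bank angle.

```python
# Minimal sketch of the firing-interval arithmetic for a four-stroke V-twin
# sharing a single crankpin. Assumes, as the text describes for the 45° Harley
# layout, that the rear cylinder's power stroke trails the front's by
# (360° - bank angle), so the two gaps must sum to the 720° cycle.

def firing_intervals(bank_angle_deg: float) -> tuple[float, float]:
    """Return the two successive spark intervals, in degrees of crank rotation."""
    cycle = 720.0                              # one four-stroke cycle = two revolutions
    front_to_rear = 360.0 - bank_angle_deg     # front cylinder fires, then the rear
    rear_to_front = cycle - front_to_rear      # gap before the front fires again
    return front_to_rear, rear_to_front

print(firing_intervals(45.0))  # (315.0, 405.0): the uneven "potato-potato" cadence
print(firing_intervals(90.0))  # (270.0, 450.0): a 90-degree twin, for comparison
```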
The Revolution engine is a liquid-cooled, dual-overhead-cam, internally counterbalanced 60-degree V-twin engine with a displacement of 69 cubic inch (1,130 cc), producing at 8,250 rpm at the crank, with a redline of 9,000 rpm. It was introduced for the new VRSC (V-Rod) line in 2001 for the 2002 model year, starting with the single VRSCA (V-Twin Racing Street Custom) model. The Revolution marks Harley's first collaboration with Porsche since the V4 Nova project, which, like the V-Rod, was a radical departure from Harley's traditional lineup until it was cancelled by AMF in 1981 in favor of the Evolution engine. A 1,250 cc Screamin' Eagle version of the Revolution engine was made available for 2005 and 2006, and was present thereafter in a single production model from 2005 to 2007. In 2008, the 1,250 cc Revolution Engine became standard for the entire VRSC line. Harley-Davidson claims at the crank for the 2008 VRSCAW model. The VRXSE "Destroyer" is equipped with a stroker (75 mm crank) Screamin' Eagle 79 cubic inch (1,300 cc) Revolution Engine, producing more than . 750 cc and 500 cc versions of the Revolution engine are used in Harley-Davidson's Street line of light cruisers. These motors, named the Revolution X, use a single overhead cam, screw-and-locknut valve adjustment, a single internal counterbalancer, and vertically split crankcases; all of these changes make them different from the original Revolution design. An extreme endurance test of the Revolution engine was performed on a dynamometer installation, simulating a run on the German Autobahn (highways without a general speed limit) from the Porsche research and development center in Weissach, near Stuttgart, to Düsseldorf. Numerous sample engines failed before one successfully passed the 500-hour nonstop run. This was the benchmark for the engineers to approve the start of production for the Revolution engine, which was documented in the Discovery Channel special Harley-Davidson: Birth of the V-Rod, October 14, 2001. The first Harley-Davidson motorcycles were powered by single-cylinder IOE engines with the inlet valve operated by engine vacuum, based on the DeDion-Bouton pattern. Singles of this type continued to be made until 1913, when a pushrod and rocker system was used to operate the overhead inlet valve on the single, a similar system having been used on their V-twins since 1911. Single-cylinder motorcycle engines were discontinued in 1918. Single-cylinder engines were reintroduced in 1925 as 1926 models. These singles were available either as flathead engines or as overhead valve engines until 1930, after which they were only available as flatheads. The flathead single-cylinder motorcycles were designated Model A for engines with magneto systems only and Model B for engines with battery and coil systems, while overhead valve versions were designated Model AA and Model BA respectively, and a magneto-only racing version was designated Model S. This line of single-cylinder motorcycles ended production in 1934. Modern Harley-branded motorcycles fall into one of seven model families: Touring, Softail, Dyna, Sportster, V-Rod, Street and LiveWire. These model families are distinguished by the frame, engine, suspension, and other characteristics. Touring models use Big-Twin engines and large-diameter telescopic forks. All Touring designations begin with the letters FL, "e.g.", FLHR (Road King) and FLTR (Road Glide). 
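Displacements in this and the following paragraphs are quoted in both cubic inches and cubic centimetres. As a quick cross-check before the individual model families are described, the snippet below applies the generic conversion factor (1 cu in = 16.387064 cc); it is a plain unit conversion, not Harley-Davidson data, and the helper name is illustrative only.

```python
# Generic unit check: 1 cubic inch = 16.387064 cc.
def cubic_inches_to_cc(ci: float) -> float:
    return ci * 16.387064

# Displacements quoted in this document, in cubic inches.
for ci in (45, 69, 74, 79, 88, 96, 103):
    print(f"{ci} cu in ~ {cubic_inches_to_cc(ci):.0f} cc")
# 69 cu in ~ 1131 cc and 79 cu in ~ 1295 cc, matching the rounded
# 1,130 cc and 1,300 cc figures quoted in the text; likewise 74 -> 1213,
# 88 -> 1442, 96 -> 1573 and 103 -> 1688 cc.
```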
The touring family, also known as "dressers" or "baggers", includes Road King, Road Glide, Street Glide and Electra Glide models offered in various trims. The Road Kings have a "retro cruiser" appearance and are equipped with a large clear windshield. Road Kings are reminiscent of big-twin models from the 1940s and 1950s. Electra Glides can be identified by their full front fairings. Most Electra Glides sport a fork-mounted fairing referred to as the "Batwing" due to its unmistakable shape. The Road Glide and Road Glide Ultra Classic have a frame-mounted fairing, referred to as the "Sharknose". The Sharknose includes a unique, dual front headlight. Touring models are distinguishable by their large saddlebags and rear coil-over air suspension, and they are the only models to offer full fairings with radios and CBs. All touring models use the same frame, first introduced with a Shovelhead motor in 1980, and carried forward with only modest upgrades until 2009, when it was extensively redesigned. The frame is distinguished by the location of the steering head in front of the forks and was the first H-D frame to rubber mount the drivetrain to isolate the rider from the vibration of the big V-twin. The frame was modified for the 1993 model year, when the oil tank went under the transmission and the battery was moved inboard from under the right saddlebag to under the seat. In 1997, the frame was again modified to allow for a larger battery under the seat and to lower seat height. In 2007, Harley-Davidson introduced the Twin Cam 96 engine, as well as the six-speed transmission to give the rider better speeds on the highway. In 2006, Harley introduced the FLHX Street Glide, a bike designed by Willie G. Davidson to be his personal ride, to its touring line. In 2008, Harley added anti-lock braking systems and cruise control as a factory-installed option on all touring models (standard on CVO and Anniversary models). Also new for 2008 is the fuel tank for all touring models. 2008 also brought throttle-by-wire to all touring models. For the 2009 model year, Harley-Davidson redesigned the entire touring range with several changes, including a new frame, new swingarm, a completely revised engine-mounting system, front wheels for all but the FLHRC Road King Classic, and a 2–1–2 exhaust. The changes result in greater load-carrying capacity, better handling, a smoother engine, longer range and less exhaust heat transmitted to the rider and passenger. Also released for the 2009 model year is the FLHTCUTG Tri-Glide Ultra Classic, the first three-wheeled Harley since the Servi-Car was discontinued in 1973. The model features a unique frame and a 103-cubic-inch (1,690 cc) engine exclusive to the trike. In 2014, Harley-Davidson released a redesign for specific touring bikes and called it "Project Rushmore". Changes include a new 103CI High Output engine, one-handed easy-open saddlebags and compartments, a new Boom! Box Infotainment system with either 4.3-inch (10 cm) or 6.5-inch (16.5 cm) screens featuring touchscreen functionality [6.5-inch (16.5 cm) models only], Bluetooth (media and phone with approved compatible devices), available GPS and SiriusXM, Text-to-Speech functionality (with approved compatible devices) and USB connectivity with charging. Other features include ABS with Reflex linked brakes, improved styling, halogen or LED lighting and upgraded passenger comfort. These big-twin motorcycles capitalize on Harley's strong emphasis on tradition. 
With the rear-wheel suspension hidden under the transmission, they are visually similar to the "hardtail" choppers popular in the 1960s and 1970s, as well as to bikes from their own earlier history. In keeping with that tradition, Harley offers Softail models with "Heritage" styling that incorporate design cues from throughout their history, and formerly offered "Springer" front ends on these Softail models from the factory. Softail models utilize the big-twin engine (F) and the Softail chassis (ST). Dyna-frame motorcycles were developed in the 1980s and early 1990s and debuted in the 1991 model year with the FXDB Sturgis, offered in limited-edition quantities. In 1992 the line continued with the limited-edition FXDB Daytona and a production model FXD Super Glide. The new DYNA frame featured big-twin engines and traditional styling. They can be distinguished from the Softail by the traditional coil-over suspension that connects the swingarm to the frame, and from the Sportster by their larger engines. On these models, the transmission also houses the engine's oil reservoir. Prior to 2006, Dyna models typically featured a narrow, XL-style 39 mm front fork and front wheel, as well as footpegs, which the manufacturer indicated by including the letter "X" in the model designation. This lineup traditionally included the Super Glide (FXD), Super Glide Custom (FXDC), Street Bob (FXDB), and Low Rider (FXDL). One exception was the Wide Glide (FXDWG), which featured thicker 41 mm forks and a narrow front wheel, but positioned the forks on wider triple-trees that gave a beefier appearance. In 2008, the Dyna Fat Bob (FXDF) was introduced to the Dyna lineup, featuring aggressive styling like a new 2–1–2 exhaust, twin headlamps, a 180 mm rear tire, and, for the first time in the Dyna lineup, a 130 mm front tire. For the 2012 model year, the Dyna Switchback (FLD) became the first Dyna to break the tradition of having an FX model designation, with floorboards, detachable painted hard saddlebags, a touring windshield, a headlight nacelle and a wide front tire with full fender. The new front end resembled the big-twin FL models from 1968 to 1971. The Dyna family used the 88-cubic-inch (1,440 cc) twin cam from 1999 to 2006. In 2007, the displacement was increased to 96 cubic inches (1,570 cc) as the factory increased the stroke to . For the 2012 model year, the manufacturer began to offer Dyna models with the 103-cubic-inch (1,690 cc) upgrade. All Dyna models use a rubber-mounted engine to isolate engine vibration. Harley discontinued the Dyna platform in 2017 for the 2018 model year, replacing it with a completely redesigned Softail chassis; some of the existing models previously released by the company under the Dyna nameplate have since been carried over to the new Softail line. Dyna models utilize the big-twin engine (F), footpegs noted as (X) with the exception of the 2012 FLD Switchback, a Dyna model which used floorboards as featured on the Touring (L) models, and the Dyna chassis (D). Therefore, except for the FLD from 2012 to 2016, all Dyna models have designations that begin with FXD, "e.g.", FXDWG (Dyna Wide Glide) and FXDL (Dyna Low Rider). Introduced in 1957, the Sportster family was conceived as racing motorcycles and was popular on dirt and flat-track race courses through the 1960s and 1970s. Smaller and lighter than the other Harley models, contemporary Sportsters make use of 883 cc or 1,200 cc Evolution engines and, though often modified, remain similar in appearance to their racing ancestors. 
Up until the 2003 model year, the engine on the Sportster was rigidly mounted to the frame. The 2004 Sportster received a new frame accommodating a rubber-mounted engine. This made the bike heavier and reduced the available lean angle, while it reduced the amount of vibration transmitted to the frame and the rider, providing a smoother ride for rider and passenger. In the 2007 model year, Harley-Davidson celebrated the 50th anniversary of the Sportster and produced a limited edition called the XL50, of which only 2000 were made for sale worldwide. Each motorcycle was individually numbered and came in one of two colors, Mirage Pearl Orange or Vivid Black. Also in 2007, electronic fuel injection was introduced to the Sportster family, and the Nightster model was introduced in mid-year. In 2009, Harley-Davidson added the Iron 883 to the Sportster line, as part of the Dark Custom series. In the 2008 model year, Harley-Davidson released the XR1200 Sportster in Europe, Africa, and the Middle East. The XR1200 had an Evolution engine tuned to produce , four-piston dual front disc brakes, and an aluminum swing arm. "Motorcyclist" featured the XR1200 on the cover of its July 2008 issue and was generally positive about it in their "First Ride" story, in which Harley-Davidson was repeatedly asked to sell it in the United States. One possible reason for the delayed availability in the United States was the fact that Harley-Davidson had to obtain the "XR1200" naming rights from Storz Performance, a Harley customizing shop in Ventura, Calif. The XR1200 was released in the United States in 2009 in a special color scheme including Mirage Orange highlighting its dirt-tracker heritage. The first 750 XR1200 models in 2009 were pre-ordered and came with a number 1 tag for the front of the bike, autographed by Kenny Coolbeth and Scott Parker and a thank you/welcome letter from the company, signed by Bill Davidson. The XR1200 was discontinued in model year 2013. Except for the street-going XR1000 of the 1980s and the XR1200, most Sportsters made for street use have the prefix XL in their model designation. For the Sportster Evolution engines used since the mid-1980s, there have been two engine sizes. Motorcycles with the smaller engine are designated XL883, while those with the larger engine were initially designated XL1100. When the size of the larger engine was increased from 1,100 cc to 1,200 cc, the designation was changed accordingly from XL1100 to XL1200. Subsequent letters in the designation refer to model variations within the Sportster range, e.g. the XL883C refers to an 883 cc Sportster Custom, while the XL1200S designates the now-discontinued 1200 Sportster Sport. Introduced in 2001 and produced until 2017, the VRSC muscle bike family bears little resemblance to Harley's more traditional lineup. Competing against Japanese and American muscle bikes in the upcoming muscle bike/power cruiser segment, the "V-Rod" makes use of an engine developed jointly with Porsche that, for the first time in Harley history, incorporates overhead cams and liquid cooling. The V-Rod is visually distinctive, easily identified by the 60-degree V-Twin engine, the radiator and the hydroformed frame members that support the round-topped air cleaner cover. The VRSC platform was also used for factory drag-racing motorcycles. In 2008, Harley added the anti-lock braking system as a factory installed option on all VRSC models. 
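The family prefixes introduced so far (FL for Touring, F plus ST for Softail, FXD for Dyna, XL for Sportster, and VRSC for the V-Rod line just introduced) lend themselves to a simple longest-prefix lookup. The sketch below is purely illustrative and entirely my own construction, not an official naming reference; it keys Touring on FLH/FLT rather than bare FL so that the FLD Switchback exception noted earlier does not collide, and it covers only designations that appear in this document.

```python
# Illustrative longest-prefix decoder for the model-family designations the
# text has introduced so far. The table is not an official Harley-Davidson
# reference and ignores further exceptions beyond those mentioned above.

FAMILY_PREFIXES = {
    "VRSC": "V-Rod (Revolution engine)",
    "FXST": "Softail",
    "FXD": "Dyna",
    "FLH": "Touring",
    "FLT": "Touring",
    "XL": "Sportster",
}

def family_of(designation: str) -> str:
    # Try longer prefixes first so that, e.g., "FXD..." is checked before "FL...".
    for prefix in sorted(FAMILY_PREFIXES, key=len, reverse=True):
        if designation.startswith(prefix):
            return FAMILY_PREFIXES[prefix]
    return "unknown (not covered by this sketch)"

for d in ("FLHR", "FLTR", "FXST", "FXDWG", "XL883C", "VRSCDX"):
    print(d, "->", family_of(d))
```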
Also in 2008, Harley increased the displacement of the stock engine from , which had previously been available only from Screamin' Eagle, and added a slipper clutch as standard equipment. VRSC models utilize the Revolution engine (VR), and the street versions are designated Street Custom (SC). After the VRSC prefix common to all street Revolution bikes, the next letter denotes the model, either A (base V-Rod: discontinued), AW (base V-Rod + W for Wide with a 240 mm rear tire), B (discontinued), D (Night Rod: discontinued), R (Street Rod: discontinued), SE and SEII (CVO Special Edition), or X (Special edition). Further differentiation within models is made with an additional letter, "e.g.", VRSCDX denotes the Night Rod Special. The VRXSE V-Rod Destroyer is Harley-Davidson's production drag-racing motorcycle, constructed to run the quarter mile in less than ten seconds. It is based on the same Revolution engine that powers the VRSC line, but the VRXSE uses the Screamin' Eagle 1,300 cc "stroked" incarnation, featuring a 75 mm crankshaft, 105 mm pistons, and 58 mm throttle bodies. The V-Rod Destroyer is not a street-legal motorcycle. As such, it uses "X" instead of "SC" to denote a non-street bike. "SE" denotes a CVO Special Edition. The Street, Harley-Davidson's newest platform and their first all-new platform in thirteen years, was designed to appeal to younger riders looking for a lighter bike at a cheaper price. The Street 750 model was launched in India at the 2014 Indian Auto Expo, Delhi-NCR, on February 5, 2014. The Street 750 weighs 218 kg and has a ground clearance of 144 mm, giving it the lowest weight and the highest ground clearance of Harley-Davidson motorcycles currently available. The Street 750 uses an all-new, liquid-cooled, 60° V-twin engine called the Revolution X. In the Street 750, the engine displaces and produces 65 Nm at 4,000 rpm. A six-speed transmission is used. The Street 750 and the smaller-displacement Street 500 have been available since late 2014. Street series motorcycles for the North American market will be built in Harley-Davidson's Kansas City, Missouri, plant, while those for other markets around the world will be built completely in their plant in Bawal, India. Harley-Davidson's "LiveWire", released in 2019, is their first electric vehicle. The high-voltage battery provides a minimum city range of 98 miles (158 km). The LiveWire targets a different type of customer than their classic V-twin powered motorcycles. In March 2020, a Harley-Davidson LiveWire was used to break the 24-hour distance record for an electric motorcycle. The bike traveled a reported 1,723 km (1,079 miles) in 23 hours and 48 minutes. The LiveWire offers a Level 1 slow recharge, which uses a regular wall outlet to refill an empty battery overnight, or a quick Level 3 DC Fast Charge. The Fast Charge fills the battery most of the way in about 40 minutes. Swiss rider Michel von Tell used the Level 3 charging to make the 24-hour ride. Custom Vehicle Operations (CVO) is a team within Harley-Davidson that produces limited-edition customizations of Harley's stock models. Every year since 1999, the team has selected two to five of the company's base models and added higher-displacement engines, performance upgrades, special-edition paint jobs, more chromed or accented components, audio system upgrades, and electronic accessories to create high-dollar, premium-quality customizations for the factory custom market. 
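As an aside before returning to the CVO models, the "stroked" Destroyer figures quoted above (75 mm crank, 105 mm pistons, roughly 1,300 cc or 79 cu in) are consistent with the textbook bore-and-stroke displacement formula for a two-cylinder engine. The sketch below is just that calculation under the assumption that the "75 mm crank" corresponds to a 75 mm stroke; it is not a Harley-Davidson specification sheet.

```python
import math

# Textbook displacement: (pi / 4) * bore^2 * stroke * number of cylinders.
# Assumption: the "75 mm crank" quoted in the text gives a 75 mm stroke.
def displacement_cc(bore_mm: float, stroke_mm: float, cylinders: int) -> float:
    cc_per_cylinder = math.pi / 4 * bore_mm ** 2 * stroke_mm / 1000.0  # mm^3 -> cc
    return cc_per_cylinder * cylinders

cc = displacement_cc(bore_mm=105, stroke_mm=75, cylinders=2)
print(f"{cc:.0f} cc (~{cc / 16.387064:.0f} cu in)")
# Roughly 1,299 cc, i.e. about 79 cu in, matching the figures in the text.
```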
The models most commonly upgraded in such a fashion are the Ultra Classic Electra Glide, which has been selected for CVO treatment every year from 2006 to the present, and the Road King, which was selected in 2002, 2003, 2007, and 2008. The Dyna, Softail, and VRSC families have also been selected for CVO customization. The Environmental Protection Agency conducted emissions-certification and representative emissions test in Ann Arbor, Michigan, in 2005. Subsequently, Harley-Davidson produced an "environmental warranty". The warranty ensures each owner that the vehicle is designed and built free of any defects in materials and workmanship that would cause the vehicle to not meet EPA standards. In 2005, the EPA and the Pennsylvania Department of Environmental Protection (PADEP) confirmed Harley-Davidson to be the first corporation to voluntarily enroll in the One Clean-Up Program. This program is designed for the clean-up of the affected soil and groundwater at the former York Naval Ordnance Plant. The program is backed by the state and local government along with participating organizations and corporations. Paul Gotthold, Director of Operations for the EPA, congratulated the motor company: Harley-Davidson also purchased most of Castalloy, a South Australian producer of cast motorcycle wheels and hubs. The South Australian government has set forth "protection to the purchaser (Harley-Davidson) against environmental risks". In August 2016 Harley-Davidson settled with the EPA for $12 million, without admitting wrongdoing, over the sale of after-market "super tuners". Super tuners were devices, marketed for competition, which enabled increased performance of Harley-Davidson products. However, the devices also modified the emission control systems, producing increased hydrocarbon and nitrogen oxide. Harley-Davidson is required to buy back and destroy any super tuners which do not meet Clean Air Act requirements and spend $3 million on air pollution mitigation. The Milwaukee Bucks of the National Basketball Association sport a Harley Davidson sponsor patch on their jerseys. According to a recent Harley-Davidson study, in 1987 half of all Harley riders were under age 35. Now, only 15 percent of Harley buyers are under 35, and as of 2005, the median age had risen to 46.7. In 2008, Harley-Davidson stopped disclosing the average age of riders; at this point it was 48 years old. In 1987, the median household income of a Harley-Davidson rider was $38,000. By 1997, the median household income for those riders had more than doubled, to $83,000. Many Harley-Davidson Clubs exist nowadays around the world; the oldest one, founded in 1928, is in Prague. Harley-Davidson attracts a loyal brand community, with licensing of the Harley-Davidson logo accounting for almost 5 percent of the company's net revenue ($41 million in 2004). Harley-Davidson supplies many American police forces with their motorcycle fleets. From its founding, Harley-Davidson had worked to brand its motorcycles as respectable and refined products, with ads that showed what motorcycling writer Fred Rau called "refined-looking ladies with parasols, and men in conservative suits as the target market". The 1906 Harley-Davidson's effective, and polite, muffler was emphasized in advertisements with the nickname "The Silent Gray Fellow". 
That began to shift in the 1960s, partially in response to the clean-cut motorcyclist portrayed in Honda's "You meet the nicest people on a Honda" campaign, when Harley-Davidson sought to draw a contrast with Honda by underscoring the more working-class, macho, and even slightly anti-social attitude associated with motorcycling's dark side. With the 1971 FX Super Glide, the company embraced, rather than distanced itself from, chopper style and the counterculture custom Harley scene. Their marketing cultivated the "bad boy" image of bikers and motorcycle clubs, and to a point, even outlaw or one-percenter motorcycle clubs. Beginning in 1920, a team of farm boys, including Ray Weishaar, who became known as the "hog boys", consistently won races. The group had a live hog as their mascot. Following a win, they would put the hog on their Harley and take a victory lap. In 1983, the Motor Company formed a club for owners of its product, taking advantage of the long-standing nickname by turning "hog" into the acronym HOG, for Harley Owners Group. Harley-Davidson attempted to trademark "hog", but lost a case against an independent Harley-Davidson specialist, The Hog Farm of West Seneca, New York, in 1999, when the appellate panel ruled that "hog" had become a generic term for large motorcycles and was therefore unprotectable as a trademark. On August 15, 2006, Harley-Davidson Inc. had its NYSE ticker symbol changed from HDI to HOG. Harley-Davidson FL "big twins" normally had heavy steel fenders, chrome trim, and other ornate and heavy accessories. After World War II, riders wanting more speed would often shorten the fenders or take them off completely to reduce the weight of the motorcycle. These bikes were called "bobbers" or sometimes "choppers", because parts considered unnecessary were chopped off. Those who made or rode choppers and bobbers, especially members of motorcycle clubs like the Hells Angels, referred to stock FLs as "garbage wagons".
https://en.wikipedia.org/wiki?curid=14142
Hiberno-English Hiberno-English (from Latin "Hibernia": "Ireland") or Irish English is the set of English dialects natively written and spoken within the island of Ireland (including both the Republic of Ireland and Northern Ireland). English was brought to Ireland as a result of the Norman invasion of Ireland in the late 12th century. Initially, it was mainly spoken in an area known as the Pale around Dublin, with mostly Irish spoken throughout the rest of the country. By the Tudor period, Irish culture and language had regained most of the territory lost to the invaders: even in the Pale, "all the common folk… for the most part are of Irish birth, Irish habit, and of Irish language". Some small pockets remained predominantly English-speaking; because of their sheer isolation, their dialects developed into the later (now extinct) dialects known as Yola in Wexford and Fingallian in Fingal, Dublin. These were no longer mutually intelligible with other English varieties. However, the Tudor conquest and colonisation of Ireland in the 16th century led to a second wave of immigration by English-speakers, the suppression and decline in status of the Irish language, and a marked, forced decrease in the use of Irish. By the mid-19th century, English was the majority language spoken in the country. It has retained this status to the present day, with even those whose first language is Irish being fluent in English as well. Today, only a little more than one per cent of the population speaks Irish natively. Of the 40% of the population who self-identified as speaking some Irish in 2016, 4% speak it daily outside the education system. English is one of two official languages, along with Irish, of the Republic of Ireland, and is the country's working language. Hiberno-English's spelling and pronunciation standards align with British rather than American English. However, Hiberno-English's diverse accents and some of its grammatical structures are unique, with some influence from the Irish language and a tendency to be phonologically conservative, retaining older features no longer common in the accents of England or North America. Phonologists today often divide Hiberno-English into four or five overarching classes of dialects or accents: Ulster accents, West and South-West Irish accents (including, for example, the Cork accent), various Dublin accents, and a supraregional accent that has been expanding only since the last quarter of the twentieth century. Ulster English (or Northern Irish English) here refers collectively to the varieties of the Ulster province, including Northern Ireland and neighbouring counties outside of Northern Ireland, which have been influenced by Ulster Irish as well as the Scots language, brought over by Scottish settlers during the Plantation of Ulster. Its main subdivisions are Mid-Ulster English, South Ulster English and Ulster Scots, the latter of which is arguably a separate language. Ulster varieties distinctly pronounce: West and South-West Irish English here refers to broad varieties of Ireland's West and South-West Regions. Accents of both regions are known for: South-West Irish English (often known, by specific county, as Cork English, Kerry English, or Limerick English) also features two major defining characteristics of its own. One is the pin–pen merger: the raising of to when before or (as in "again" or "pen"). 
The other is the intonation pattern of a slightly higher pitch followed by a significant drop in pitch on stressed long-vowel syllables (across multiple syllables or even within a single one), which is popularly heard in rapid conversation, by speakers of other English dialects, as a noticeable kind of undulating "sing-song" pattern. Dublin English is highly internally diverse and refers collectively to the Irish English varieties immediately surrounding and within the metropolitan area of Dublin. Modern-day Dublin English largely lies on a phonological continuum, ranging from a more traditional, lower-prestige, local urban accent on the one end to a more recently developing, higher-prestige, non-local (regional and even supraregional) accent on the other end, whose most advanced characteristics only first emerged in the late 1980s and 1990s. The accent that most strongly uses the traditional working-class features has been labelled by linguists as local Dublin English. Most speakers from Dublin and its suburbs, however, have accent features falling variously along the entire middle as well as the newer end of the spectrum, which together form what is called non-local Dublin English, spoken by middle- and upper-class natives of Dublin and the greater eastern Irish region surrounding the city. A subset of this variety, whose middle-class speakers mostly range in the middle section of the continuum, is called mainstream Dublin English. Mainstream Dublin English has become the basis of an accent that has otherwise become supraregional (see more below) everywhere except in the north of the country. The majority of Dubliners born since the 1980s (led particularly by females) have shifted towards the most innovative non-local accent, here called new Dublin English, which has gained ground over mainstream Dublin English and which is the most extreme variety in rejecting the local accent's traditional features. The varieties at either extreme of the spectrum, local and new Dublin English, are both discussed in further detail below. In the most general terms, all varieties of Dublin English have the following identifying sounds that are often distinct from the rest of Ireland, pronouncing: Local Dublin English (or popular Dublin English) here refers to a traditional, broad, working-class variety spoken in the Republic of Ireland's capital city of Dublin. It is the only Irish English variety that in earlier history was non-rhotic; however, it is today weakly rhotic. Known for diphthongisation of the and vowels, the local Dublin accent is also known for a phenomenon called "vowel breaking", in which , , and in closed syllables are "broken" into two syllables, approximating , , , and , respectively. Evolving as a fashionable outgrowth of the mainstream non-local Dublin English, new Dublin English (also, advanced Dublin English and, formerly, fashionable Dublin English) is a youthful variety that originally began in the early 1990s among the "avant-garde" and is now spoken by those aspiring to a non-local "urban sophistication". New Dublin English itself, first associated with affluent and middle-class inhabitants of southside Dublin, is probably now spoken by a majority of Dubliners born since the 1980s. 
It has replaced (yet was largely influenced by) moribund D4 English (often known as "Dublin 4" or "DART speak" or, mockingly, "Dortspeak"), which originated around the 1970s from Dubliners who rejected traditional notions of Irishness, regarding themselves as more trendy and sophisticated; however, particular aspects of the D4 accent became quickly noticed and ridiculed as sounding affected, causing these features to fall out of fashion by the 1990s. New Dublin English can have fur–fair, horse–hoarse, and witch–which mergers, while resisting the traditionally Irish cot–caught merger. Supraregional southern Irish English (sometimes, simply, supraregional Irish English or supraregional Hiberno-English) here refers to a variety crossing regional boundaries throughout all of the Republic of Ireland, except the north. As mentioned earlier, mainstream Dublin English of the early- to mid-1900s is the direct influence and catalyst for this variety. Most speakers born in the 1980s or later are showing fewer features of the twentieth-century mainstream supraregional form and more characteristics of an advanced supraregional variety that aligns clearly with the rapidly spreading new Dublin accent (see more above, under "Non-local Dublin English"). Ireland's supraregional dialect pronounces: The following charts list the vowels typical of each Irish English dialect as well as the several distinctive consonants of Irish English. Phonological characteristics of overall Irish English are given as well as categorisations into five major divisions of Hiberno-English: northern Ireland (or Ulster); West & South-West Ireland; local Dublin; new Dublin; and supraregional (southern) Ireland. Features of mainstream non-local Dublin English fall on a range between "local Dublin" and "new Dublin". The defining monophthongs of Irish English: The following pure vowel sounds are defining characteristics of Irish English: All pure vowels of various Hiberno-English dialects: Footnotes: In southside Dublin's once-briefly fashionable "Dublin 4" (or "Dortspeak") accent, the " and broad " set becomes rounded as [ɒː]. Unstressed syllable-final or is realised in Ulster accents uniquely as . Other notes: The defining diphthongs of Hiberno-English: The following gliding vowel (diphthong) sounds are defining characteristics of Irish English: All diphthongs of various Hiberno-English dialects: Footnotes: Due to the local Dublin accent's phenomenon of "vowel breaking", may be realised in that accent as in a closed syllable, and, in the same environment, may be realised as . The defining "r"-coloured vowels of Hiberno-English: The following "r"-coloured vowel features are defining characteristics of Hiberno-English: All "r"-coloured vowels of various Hiberno-English dialects: Footnotes: Every major accent of Irish English is rhotic (pronounces "r" after a vowel sound). The local Dublin accent is the only one that during an earlier time was non-rhotic, though it is usually very lightly rhotic today, with a few minor exceptions. The rhotic consonant in this and most other Irish accents is an approximant . The "r" sound of the mainstream non-local Dublin accent is more precisely a velarised approximant , while the "r" sound of the more recently emerging non-local Dublin (or "new Dublin") accent is more precisely a retroflex approximant . In southside Dublin's once-briefly fashionable "Dublin 4" (or "Dortspeak") accent, is realised as . 
In non-local Dublin's more recently emerging (or "new Dublin") accent, and may both be realised more rounded as . In local Dublin, West/South-West, and other very conservative and traditional Irish English varieties ranging from the south to the north, the phoneme is split into two distinct phonemes depending on spelling and preceding consonants, which have sometimes been represented as versus , and often more precisely pronounced as versus . As an example, the words "earn" and "urn" are not pronounced the same, as they are in most dialects of English around the world. In the local Dublin and West/South-West accents, when after a labial consonant (e.g. "fern"), when spelled as "ur" or "or" (e.g. "word"), or when spelled as "ir" after an alveolar stop (e.g. "dirt") are pronounced as ; in all other situations, is pronounced as . Example words include: In non-local Dublin, younger, and supraregional Irish accents, this split is seldom preserved, with both of the phonemes typically merged as . In rare few local Dublin varieties that are non-rhotic, is either lowered to or backed and raised to . The distinction between and is widely preserved in Ireland, so that, for example, "horse" and "hoarse" are not merged in most Irish English dialects; however, they are usually merged in Belfast and new Dublin. In local Dublin, due to the phenomenon of "vowel breaking" may in fact be realised as . The defining consonants of Hiberno-English: The consonants of Hiberno-English mostly align to the typical English consonant sounds. However, a few Irish English consonants have distinctive, varying qualities. The following consonant features are defining characteristics of Hiberno-English: Unique consonants in various Hiberno-English dialects: Footnotes: In traditional, conservative Ulster English, and are palatalised before a low front vowel. Local Dublin also undergoes cluster simplification, so that stop consonant sounds occurring after fricatives or sonorants may be left unpronounced, resulting, for example, in "poun(d)" and "las(t)". Rhoticity: Every major accent of Irish English is strongly rhotic (pronounces "r" after a vowel sound), though to a weaker degree with the local Dublin accent. The accents of local Dublin and some smaller eastern towns like Drogheda were historically non-rhotic and now only very lightly rhotic or variably rhotic, with the rhotic consonant being an alveolar approximant, . In extremely traditional and conservative accents (exemplified, for instance, in the speech of older speakers throughout the country, even in South-West Ireland, such as Mícheál Ó Muircheartaigh and Jackie Healy-Rae), the rhotic consonant, before a vowel sound, can also be an alveolar tap, . The rhotic consonant for the northern Ireland and new Dublin accents is a retroflex approximant, . Dublin's retroflex approximant has no precedent outside of northern Ireland and is a genuine innovation of the 1990s and 2000s. A guttural/uvular is found in north-east Leinster. Otherwise, the rhotic consonant of virtually all other Irish accents is the postalveolar approximant, . The symbol [θ̠] is used here to represent the voiceless alveolar non-sibilant fricative, sometimes known as a "slit fricative", whose articulation is described as being apico-alveolar. Overall, and are being increasingly merged in supraregional Irish English, for example, making "wine" and "whine" homophones, as in most varieties of English around the world. 
Other phonological characteristics of Irish English include that consonant clusters ending in before are distinctive: The naming of the letter "H" as "haitch" is standard. Due to Gaelic influence, an epenthetic schwa is sometimes inserted, perhaps as a feature of older and less careful speakers, e.g. "film" and "form" . A number of Irish-language loan words are used in Hiberno-English, particularly in an official state capacity. For example, the head of government is the Taoiseach, the deputy head is the Tánaiste, the parliament is the Oireachtas and its lower house is Dáil Éireann. Less formally, people also use loan words in day-to-day speech, although this has been on the wane in recent decades and among the young. Another group of Hiberno-English words are those "derived" from the Irish language. Some are words in English that have entered into general use, while others are unique to Ireland. These words and phrases are often Anglicised versions of words in Irish or direct translations into English. In the latter case, they often give a meaning to a word or phrase that is generally not found in wider English use. Another class of vocabulary found in Hiberno-English are words and phrases common in Old and Middle English, but which have since become obscure or obsolete in the modern English language generally. Hiberno-English has also developed particular meanings for words that are still in common use in English generally. In addition to the three groups above, there are also additional words and phrases whose origin is disputed or unknown. While this group may not be unique to Ireland, their usage is not widespread, and could be seen as characteristic of the language in Ireland. The syntax of the Irish language is quite different from that of English. Various aspects of Irish syntax have influenced Hiberno-English, though many of these idiosyncrasies are disappearing in suburban areas and among the younger population. The other major influence on Hiberno-English that sets it apart from modern English in general is the retention of words and phrases from Old- and Middle-English. Reduplication is an alleged trait of Hiberno-English strongly associated with Stage Irish and Hollywood films. Irish has no words that directly translate as "yes" or "no", and instead repeats the verb used in the question, negated if necessary, to answer. Hiberno-English uses "yes" and "no" less frequently than other English dialects as speakers can repeat the verb, positively or negatively, instead of (or in redundant addition to) using "yes" or "no". This is not limited only to the verb "to be": it is also used with "to have" when used as an auxiliary; and, with other verbs, the verb "to do" is used. This is most commonly used for intensification, especially in Ulster English. Irish indicates recency of an action by adding "after" to the present continuous (a verb ending in "-ing"), a construction known as the "hot news perfect" or "after perfect". The idiom for "I had done X when I did Y" is "I was after doing X when I did Y", modelled on the Irish usage of the compound prepositions , , and :  /  / . A similar construction is seen where exclamation is used in describing a recent event: When describing less astonishing or significant events, a structure resembling the German perfect can be seen: This correlates with an analysis of "H1 Irish" proposed by Adger & Mitrovic, in a deliberate parallel to the status of German as a V2 language. 
This recent-past construction has been directly adopted into Newfoundland English, where it is common in both formal and casual registers. In rural areas of the Avalon peninsula, where Newfoundland Irish was spoken until the early 20th century, it is the grammatical standard for describing whether or not an action has occurred. The reflexive version of pronouns is often used for emphasis or to refer indirectly to a particular person, etc., according to context. "Herself", for example, might refer to the speaker's boss or to the woman of the house. Use of "herself" or "himself" in this way often indicates that the speaker attributes some degree of arrogance or selfishness to the person in question. Note also the indirectness of this construction relative to, for example, "She's coming now". This reflexive pronoun can also be used to describe a partner: "I was with himself last night." or "How's herself doing?" There are some language forms that stem from the fact that there is no verb "to have" in Irish. Instead, possession is indicated in Irish by using the preposition "at" (in Irish, "ag"). To be more precise, Irish uses a prepositional pronoun that combines "ag" ("at") and "mé" ("me") to create "agam". In English, the verb "to have" is used, along with a "with me" or "on me" that derives from "Tá … agam." This gives rise to a frequent construction in which somebody who can speak a language "has" a language, with Hiberno-English having borrowed the grammatical form used in Irish. When describing something, many Hiberno-English speakers use the term "in it" where "there" would usually be used. This is due to the Irish word "ann" (pronounced "oun" or "on") fulfilling both meanings. Another idiom is describing this thing or that thing as "this man here" or "that man there", which also features in Newfoundland English in Canada. Conditionals have a greater presence in Hiberno-English due to the tendency to replace the simple present tense with the conditional (would) and the simple past tense with the conditional perfect (would have). Bring and take: Irish use of these words differs from that of British English because it follows the Irish grammar for "beir" and "tóg". English usage is determined by direction; person determines Irish usage. So, in English, one takes "from here to there", and brings it "to here from there". In Irish, a person takes only when accepting a transfer of possession of the object from someone else, and a person brings at all other times, irrespective of direction (to or from). The Irish equivalent of the verb "to be" has two present tenses, one (the present tense proper or "aimsir láithreach") for cases which are generally true or are true at the time of speaking and the other (the habitual present or "aimsir ghnáthláithreach") for repeated actions. Thus, "you are [now, or generally]" is "tá tú", but "you are [repeatedly]" is "bíonn tú". Both forms are used with the verbal noun (equivalent to the English present participle) to create compound tenses. This is similar to the distinction between "ser" and "estar" in Spanish or the use of the 'habitual be' in African-American Vernacular English. The corresponding usage in English is frequently found in rural areas, especially Mayo/Sligo in the west of Ireland and Wexford in the south-east, inner-city Dublin, and border areas of the North and the Republic. 
In this form, the verb "to be" in English is similar to its use in Irish, with a "does be/do be" (or "bees", although less frequently) construction to indicate the continuous, or habitual, present: This construction also surfaces in African American Vernacular English, as the famous habitual be. In old-fashioned usage, "it is" can be freely abbreviated "’tis", even as a standalone sentence. This also allows the double contraction "’tisn’t", for "it is not". Irish has separate forms for the second person singular ("tú") and the second person plural ("sibh"). Mirroring Irish, and almost every other Indo-European language, the plural "you" is also distinguished from the singular in Hiberno-English, normally by use of the otherwise archaic English word "ye"; the word "yous" (sometimes written as "youse") also occurs, but primarily only in Dublin and across Ulster. In addition, in some areas in Leinster, north Connacht and parts of Ulster, the hybrid word "ye-s", pronounced "yiz", may be used. The pronunciation differs, with that of the northwest being and the Leinster pronunciation being . The word "ye", "yis" or "yous", otherwise archaic, is still used in place of "you" for the second-person plural. "Ye'r", "Yisser" or "Yousser" are the possessive forms, e.g. "Where are yous going?" The verb "mitch" is very common in Ireland, indicating being truant from school. This word appears in Shakespeare (though he wrote in Early Modern English rather than Middle English), but is seldom heard these days in British English, although pockets of usage persist in some areas (notably South Wales, Devon, and Cornwall). In parts of Connacht and Ulster, "mitch" is often replaced by the verb "scheme", while in Dublin it is often replaced by "on the hop/bounce". Another usage familiar from Shakespeare is the inclusion of the second person pronoun after the imperative form of a verb, as in "Wife, go you to her ere you go to bed" (Romeo and Juliet, Act III, Scene IV). This is still common in Ulster: "Get youse your homework done or you're no goin' out!" In Munster, you will still hear children being told, "Up to bed, let ye". For influence from Scotland, see Ulster Scots and Ulster English. Now is often used at the end of sentences or phrases as a semantically empty word, completing an utterance without contributing any apparent meaning. Examples include "Bye now" (= "Goodbye"), "There you go now" (when giving someone something), "Ah now!" (expressing dismay), "Hold on now" (= "wait a minute"), "Now then" as a mild attention-getter, etc. This usage is universal among English dialects, but occurs more frequently in Hiberno-English. It is also used in the manner of the Italian 'prego' or German 'bitte', for example, a barman might say "Now, Sir." when delivering drinks. So is often used for emphasis ("I can speak Irish, so I can"), or it may be tacked onto the end of a sentence to indicate agreement, where "then" would often be used in Standard English ("Bye so", "Let's go so", "That's fine so", "We'll do that so"). The word is also used to contradict a negative statement ("You're not pushing hard enough" – "I am so!"). (This contradiction of a negative is also seen in American English, though not as often as "I am too", or "Yes, I am".) The practice of indicating emphasis with "so" and reduplicating the sentence's subject pronoun and auxiliary verb (is, are, have, has, can, etc.) 
such as in the initial example, is particularly prevalent in more northern dialects such as those of Sligo, Mayo and the counties of Ulster. Sure/Surely is often used as a tag word, emphasising the obviousness of the statement, roughly translating as but/and/well/indeed. It can be used as "to be sure" (but note that the other stereotype of "Sure and …" is not actually used in Ireland), or as in "Sure, I can just go on Wednesday" and "I will not, to be sure." The word is also used at the end of sentences (primarily in Munster), for instance "I was only here five minutes ago, sure!" and can express emphasis or indignation. In Ulster, the reply "Aye, surely" may be given to show strong agreement. To is often omitted from sentences where it would exist in British English. For example, "I'm not allowed go out tonight", instead of "I'm not allowed "to" go out tonight". Will is often used where British English would use "shall" or American English "should" (as in "Will I make us a cup of tea?"). The distinction between "shall" (for first-person simple future, and second- and third-person emphatic future) and "will" (second- and third-person simple future, first-person emphatic future), maintained by many in England, does not exist in Hiberno-English, with "will" generally used in all cases. Once is sometimes used in a different way from how it is used in other dialects; in this usage, it indicates a combination of logical and causal conditionality: "I have no problem laughing at myself once the joke is funny." Other dialects of English would probably use "if" in this situation.
https://en.wikipedia.org/wiki?curid=14144
Harmonic analysis Harmonic analysis is a branch of mathematics concerned with the representation of functions or signals as the superposition of basic waves, and the study and generalization of the notions of Fourier series and Fourier transforms (i.e. an extended form of Fourier analysis). In the past two centuries, it has become a vast subject with applications in areas as diverse as number theory, representation theory, signal processing, quantum mechanics, tidal analysis and neuroscience. The term "harmonics" originated as the Ancient Greek word "harmonikos", meaning "skilled in music". In physical eigenvalue problems, it began to mean waves whose frequencies are integer multiples of one another, as are the frequencies of the harmonics of music notes, but the term has been generalized beyond its original meaning. The classical Fourier transform on R^n is still an area of ongoing research, particularly concerning Fourier transformation on more general objects such as tempered distributions. For instance, if we impose some requirements on a distribution "f", we can attempt to translate these requirements in terms of the Fourier transform of "f". The Paley–Wiener theorem is an example of this. The Paley–Wiener theorem immediately implies that if "f" is a nonzero distribution of compact support (these include functions of compact support), then its Fourier transform is never compactly supported. This is a very elementary form of an uncertainty principle in a harmonic-analysis setting. Fourier series can be conveniently studied in the context of Hilbert spaces, which provides a connection between harmonic analysis and functional analysis. Many applications of harmonic analysis in science and engineering begin with the idea or hypothesis that a phenomenon or signal is composed of a sum of individual oscillatory components. Ocean tides and vibrating strings are common and simple examples. The theoretical approach is often to try to describe the system by a differential equation or system of equations to predict the essential features, including the amplitude, frequency, and phases of the oscillatory components. The specific equations depend on the field, but theories generally try to select equations that represent major principles that are applicable. The experimental approach is usually to acquire data that accurately quantifies the phenomenon. For example, in a study of tides, the experimentalist would acquire samples of water depth as a function of time at closely enough spaced intervals to see each oscillation and over a long enough duration that multiple oscillatory periods are likely included. In a study on vibrating strings, it is common for the experimentalist to acquire a sound waveform sampled at a rate at least twice that of the highest frequency expected and for a duration many times the period of the lowest frequency expected. For example, consider the sound waveform of a bass guitar playing an open string corresponding to an A note with a fundamental frequency of 55 Hz. The waveform appears oscillatory, but it is more complex than a simple sine wave, indicating the presence of additional waves. The different wave components contributing to the sound can be revealed by applying a mathematical analysis technique known as the Fourier transform. The resulting spectrum shows a prominent peak at 55 Hz, but also other peaks at 110 Hz, 165 Hz, and at other frequencies corresponding to integer multiples of 55 Hz. 
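The following is a minimal, self-contained sketch of that idea in Python (it uses a synthetic signal rather than the actual bass-guitar recording just described, and the amplitudes of the partials are arbitrary choices): a waveform built from sinusoids at 55 Hz and its integer multiples is passed through a discrete Fourier transform, and the strongest spectral peaks recover exactly those frequencies.

```python
# Sketch only: synthesise a signal whose partials sit at integer multiples of a
# 55 Hz fundamental, then locate the peaks of its discrete Fourier transform.
import numpy as np

fs = 8000                      # sampling rate in Hz, well above twice the highest partial
t = np.arange(0, 2.0, 1 / fs)  # two seconds of samples

# Arbitrary amplitudes for the fundamental and its first two harmonics.
signal = (1.0 * np.sin(2 * np.pi * 55 * t)
          + 0.6 * np.sin(2 * np.pi * 110 * t)
          + 0.3 * np.sin(2 * np.pi * 165 * t))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The three strongest peaks fall at the fundamental and its harmonics.
strongest = sorted(freqs[np.argsort(spectrum)[-3:]].tolist())
print(strongest)   # -> [55.0, 110.0, 165.0]
```

A real measurement would differ only in that the samples would come from a recorded waveform instead of being synthesised.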
In this case, 55 Hz is identified as the fundamental frequency of the string vibration, and the integer multiples are known as harmonics. One of the most modern branches of harmonic analysis, having its roots in the mid-20th century, is analysis on topological groups. The core motivating ideas are the various Fourier transforms, which can be generalized to a transform of functions defined on Hausdorff locally compact topological groups. The theory for abelian locally compact groups is called Pontryagin duality. Harmonic analysis studies the properties of that duality and of the Fourier transform and attempts to extend those features to different settings, for instance, to the case of non-abelian Lie groups. For general non-abelian locally compact groups, harmonic analysis is closely related to the theory of unitary group representations. For compact groups, the Peter–Weyl theorem explains how one may get harmonics by choosing one irreducible representation out of each equivalence class of representations. This choice of harmonics enjoys some of the useful properties of the classical Fourier transform in terms of carrying convolutions to pointwise products, or otherwise showing a certain understanding of the underlying group structure. See also: Non-commutative harmonic analysis. If the group is neither abelian nor compact, no general satisfactory theory is currently known ("satisfactory" means at least as strong as the Plancherel theorem). However, many specific cases have been analyzed, for example SL(n). In this case, representations in infinite dimensions play a crucial role.
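As a concrete illustration of the abelian case (a sketch only, not drawn from any particular reference): on the finite cyclic group Z/N the characters are the exponentials k -> exp(2*pi*i*j*k/N), the Fourier transform is the ordinary discrete Fourier transform, and the property of carrying convolutions to pointwise products mentioned above can be verified numerically.

```python
# Sketch: the DFT on the finite abelian group Z/N turns circular convolution
# into pointwise multiplication of transforms.
import numpy as np

N = 8
rng = np.random.default_rng(0)
f = rng.standard_normal(N)
g = rng.standard_normal(N)

# Circular convolution of f and g on Z/N.
conv = np.array([sum(f[m] * g[(k - m) % N] for m in range(N)) for k in range(N)])

lhs = np.fft.fft(conv)               # transform of the convolution
rhs = np.fft.fft(f) * np.fft.fft(g)  # pointwise product of the transforms
print(np.allclose(lhs, rhs))         # -> True
```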
https://en.wikipedia.org/wiki?curid=14147
Home run In baseball, a home run (abbreviated HR) is scored when the ball is hit in such a way that the batter is able to circle the bases and reach home safely in one play without any errors being committed by the defensive team in the process. In modern baseball, the feat is typically achieved by hitting the ball over the outfield fence between the foul poles (or making contact with either foul pole) without first touching the ground, resulting in an automatic home run. There is also the "inside-the-park" home run where the batter reaches home safely while the baseball is in play on the field. A home run with a high exit velocity and good launch angle is sometimes called a "no-doubter," because it leaves no doubt that it is going to leave the park when it leaves the bat. When a home run is scored, the batter is also credited with a hit and a run scored, and an RBI for each runner that scores, including himself. Likewise, the pitcher is recorded as having given up a hit and a run, with additional runs charged for each runner that scores other than the batter. Home runs are among the most popular aspects of baseball and, as a result, prolific home run hitters are usually the most popular among fans and consequently the highest paid by teams—hence the old saying, "Home run hitters drive Cadillacs, and singles hitters drive Fords" (coined, circa 1948, by veteran pitcher Fritz Ostermueller, by way of mentoring his young teammate, Ralph Kiner). In modern times a home run is most often scored when the ball is hit over the outfield wall between the foul poles (in fair territory) before it touches the ground (in flight), and without being caught or deflected back onto the field by a fielder. A batted ball is also a home run if it touches either foul pole or its attached screen before touching the ground, as the foul poles are by definition in fair territory. Additionally, many major-league ballparks have ground rules stating that a batted ball in flight that strikes a specified location or fixed object is a home run; this usually applies to objects that are beyond the outfield wall but are located such that it may be difficult for an umpire to judge. In professional baseball, a batted ball that goes over the outfield wall "after" touching the ground (i.e. a ball that bounces over the outfield wall) becomes an automatic double. This is colloquially referred to as a "ground rule double" even though it is uniform across all of Major League Baseball, per MLB rules 5.05(a)(6) through 5.05(a)(9). A fielder is allowed to reach over the wall to attempt to catch the ball as long as his feet are on or over the field during the attempt, and if the fielder successfully catches the ball while it is in flight the batter is out, even if the ball had already passed the vertical plane of the wall. However, since the fielder is not part of the field, a ball that bounces off a fielder (including his glove) and over the wall without touching the ground is still a home run. A fielder may not deliberately throw his glove, cap, or any other equipment or apparel to stop or deflect a fair ball, and an umpire may award a home run to the batter if a fielder does so on a ball that, in the umpire's judgment, would have otherwise been a home run (this is rare in modern professional baseball). A home run accomplished in any of the above manners is an automatic home run. 
The ball is dead, even if it rebounds back onto the field (e.g., from striking a foul pole), and the batter and any preceding runners cannot be put out at any time while running the bases. However, if one or more runners fail to touch a base or one runner passes another before reaching home plate, that runner or runners can be called out on appeal, though in the case of not touching a base a runner can go back and touch it if doing so won't cause them to be passed by another preceding runner and they have not yet touched the next base (or home plate in the case of missing third base). This stipulation is in Approved Ruling (2) of Rule 7.10(b). An inside-the-park home run occurs when a batter hits the ball into play and is able to circle the bases before the fielders can put him out. Unlike with an outside-the-park home run, the batter-runner and all preceding runners are liable to be put out by the defensive team at any time while running the bases. This can only happen if the ball does not leave the ballfield. In the early days of baseball, outfields were much more spacious, reducing the likelihood of an over-the-fence home run, while increasing the likelihood of an inside-the-park home run, as a ball getting past an outfielder had more distance that it could roll before a fielder could track it down. Modern outfields are much less spacious and more uniformly designed than in the game's early days; therefore, inside-the-park home runs are now a rarity. They usually occur when a fast runner hits the ball deep into the outfield and the ball bounces in an unexpected direction away from the nearest outfielder (e.g., off a divot in the grass or off the outfield wall), the nearest outfielder is injured on the play and cannot get to the ball, or an outfielder misjudges the flight of the ball in a way that he cannot quickly recover from the mistake (e.g., by diving and missing). The speed of the runner is crucial as even triples are relatively rare in most modern ballparks. If any defensive play on an inside-the-park home run is labeled an error by the official scorer, a home run is not scored; instead, it is scored as a single, double, or triple, and the batter-runner and any applicable preceding runners are said to have taken all additional bases on error. All runs scored on such a play, however, still count. An example of an unexpected bounce occurred during the 2007 Major League Baseball All-Star Game at AT&T Park in San Francisco on July 10, 2007. Ichiro Suzuki of the American League team hit a fly ball that caromed off the right-center field wall in the opposite direction from where National League right fielder Ken Griffey, Jr. was expecting it to go. By the time the ball was relayed, Ichiro had already crossed the plate standing up. This was the first inside-the-park home run in All-Star Game history, and led to Suzuki being named the game's Most Valuable Player. Home runs are often characterized by the number of runners on base at the time. A home run hit with the bases empty is seldom called a "one-run homer", but rather a solo home run, solo homer, or "solo shot". With one runner on base, two runs are scored (the baserunner and the batter) and thus the home run is often called a two-run homer or two-run shot. Similarly, a home run with two runners on base is a three-run homer or three-run shot. The term "four-run homer" is seldom used; instead, it is nearly always called a "grand slam". 
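The run accounting described above is mechanical enough to state as a tiny, purely illustrative helper (the function name and return format are hypothetical, not taken from any baseball library): on a home run the batter and every runner on base score, the batter is credited with that many RBIs, and the homer is conventionally named after the number of runs it produces.

```python
# Hypothetical helper, for illustration only: map runners on base to the runs
# scored (and RBIs credited to the batter) and the conventional name.
def describe_home_run(runners_on_base: int) -> tuple[int, str]:
    if not 0 <= runners_on_base <= 3:
        raise ValueError("a team can have at most three runners on base")
    runs = runners_on_base + 1   # every runner scores, plus the batter
    names = {1: "solo home run", 2: "two-run homer",
             3: "three-run homer", 4: "grand slam"}
    return runs, names[runs]

print(describe_home_run(0))   # -> (1, 'solo home run')
print(describe_home_run(3))   # -> (4, 'grand slam')
```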
Hitting a grand slam is the best possible result for the batter's turn at bat and the worst possible result for the pitcher and his team. A grand slam occurs when the bases are "loaded" (that is, there are base runners standing at first, second, and third base) and the batter hits a home run. According to "The Dickson Baseball Dictionary", the term originated in the card game of contract bridge. An inside-the-park grand slam is a grand slam that is also an inside-the-park home run, a home run without the ball leaving the field, and it is very rare, due to the relative rarity of loading the bases along with the significant rarity (nowadays) of inside-the-park home runs. On July 25, 1956, Roberto Clemente became the only MLB player to have ever scored a walk-off inside-the-park grand slam in a 9–8 Pittsburgh Pirates win over the Chicago Cubs, at Forbes Field. On April 23, 1999, Fernando Tatís made history by hitting two grand slams in one inning, both against Chan Ho Park of the Los Angeles Dodgers. With this feat, Tatís also set a Major League record with 8 RBI in one inning. On July 29, 2003 against the Texas Rangers, Bill Mueller of the Boston Red Sox became the only player in major league history to hit two grand slams in one game from opposite sides of the plate; he hit three home runs in that game, and his two grand slams were in consecutive at-bats. On August 25, 2011 the New York Yankees became the first team to hit three grand slams in one game vs the Oakland A's. The Yankees eventually won the game 22–9, after trailing 7–1. These types of home runs are characterized by the specific game situation in which they occur, and can theoretically occur on either an outside-the-park or inside-the-park home run. A walk-off home run is a home run hit by the home team in the bottom of the ninth inning, any extra inning, or other scheduled final inning, which gives the home team the lead and thereby ends the game. The term is attributed to Hall of Fame relief pitcher Dennis Eckersley, so named because after the run is scored, the losing team has to "walk off" the field. Two World Series have ended via the "walk-off" home run. The first was the 1960 World Series when Bill Mazeroski of the Pittsburgh Pirates hit a 9th inning solo home run in the 7th game of the series off New York Yankees pitcher Ralph Terry to give the Pirates the World Championship. The second time was the 1993 World Series when Joe Carter of the Toronto Blue Jays hit a 9th inning 3-run home run off Philadelphia Phillies pitcher Mitch Williams in Game 6 of the series, to help the Toronto Blue Jays capture their second World Series Championship in a row. Such a home run can also be called a "sudden death" or "sudden victory" home run. That usage has lessened as "walk-off home run" has gained favor. Along with Mazeroski's 1960 shot, the most famous walk-off or sudden-death homer would probably be the "Shot Heard 'Round the World" hit by Bobby Thomson to win the 1951 National League pennant for the New York Giants, along with many other game-ending home runs that famously ended some of the most important and suspenseful baseball games. A walk-off home run over the fence is an exception to baseball's one-run rule. Normally if the home team is tied or behind in the ninth or extra innings the game ends as soon as the home team scores enough runs to achieve a lead. 
If the home team has two outs in the inning, and the game is tied, the game will officially end either the moment the batter successfully reaches first base or the moment the runner touches home plate—whichever happens last. However, this is superseded by the "ground rule", which provides automatic doubles (when a ball-in-play hits the ground first then leaves the playing field) and home runs (when a ball-in-play leaves the playing field without ever touching the ground). In the latter case, all base runners including the batter are allowed to cross the plate. A leadoff home run is a home run hit by a team's first batter, the leadoff hitter, in the first inning of the game. In MLB, Rickey Henderson holds the career record with 81 leadoff home runs. Craig Biggio holds the National League career record with 53, third overall behind Henderson and Alfonso Soriano, who hit 54. As of 2018, Ian Kinsler held the career record among active players, with 48 leadoff home runs, which also ranked him fourth all-time. In 1996, Brady Anderson set a Major League record by hitting a leadoff home run in four consecutive games. When two consecutive batters each hit a home run, this is described as back-to-back home runs. It is still considered back-to-back even if both batters hit their home runs off different pitchers. A third batter hitting a home run is commonly referred to as back-to-back-to-back. Four home runs in a row by consecutive batters has only occurred nine times in the history of Major League Baseball. Following convention, this is called back-to-back-to-back-to-back. The most recent occurrence was on June 9, 2019, when the Washington Nationals hit four in a row against the San Diego Padres in Petco Park as Howie Kendrick, Trea Turner, Adam Eaton and Anthony Rendon homered off pitcher Craig Stammen. Stammen became the fifth pitcher to surrender back-to-back-to-back-to-back home runs, following Paul Foytack on July 31, 1963, Chase Wright on April 22, 2007, Dave Bush on August 10, 2010, and Michael Blazek on July 27, 2017. On August 14, 2008, the Chicago White Sox defeated the Kansas City Royals 9–2. In this game, Jim Thome, Paul Konerko, Alexei Ramírez, and Juan Uribe hit back-to-back-to-back-to-back home runs in that order. Thome, Konerko, and Ramirez hit their homers off Joel Peralta, while Uribe hit his off Rob Tejeda. The next batter, veteran backstop Toby Hall, tried in vain to hit the ball as far as possible, but his effort resulted in a strikeout. On April 22, 2007, the Boston Red Sox were trailing the New York Yankees 3–0 when Manny Ramirez, J. D. Drew, Mike Lowell and Jason Varitek hit back-to-back-to-back-to-back home runs to put them up 4–3. They eventually went on to win the game 7–6 after a three-run home run by Mike Lowell in the bottom of the 7th inning. On September 18, 2006, trailing 9–5 to the San Diego Padres in the 9th inning, Jeff Kent, J. D. Drew, Russell Martin, and Marlon Anderson of the Los Angeles Dodgers hit back-to-back-to-back-to-back home runs to tie the game. After giving up a run in the top of the 10th, the Dodgers won the game in the bottom of the 10th, on a walk-off two-run home run by Nomar Garciaparra. J. D. Drew has been part of two different sets of back-to-back-to-back-to-back home runs. In both occurrences, his homer was the second of the four. 
On September 30, 1997, in the sixth inning of Game One of the American League Division Series between the New York Yankees and Cleveland Indians, Tim Raines, Derek Jeter and Paul O'Neill hit back-to-back-to-back home runs for the Yankees. Raines' home run tied the game. New York went on to win 8–6. This was the first occurrence of three home runs in a row ever in postseason play. The Boston Red Sox repeated the feat in Game Four of the 2007 American League Championship Series, also against the Indians. The Indians returned the favor in Game One of the 2016 American League Division Series. Twice in MLB history have two brothers hit back-to-back home runs. On April 23, 2013, brothers Melvin Upton, Jr. (formerly B.J. Upton) and Justin Upton hit back-to-back home runs. The first time was on September 15, 1938, when Lloyd Waner and Paul Waner performed the feat. Simple back-to-back home runs are a relatively frequent occurrence. If a pitcher gives up a homer, he might have his concentration broken and might alter his normal approach in an attempt to "make up for it" by striking out the next batter with some fastballs. Sometimes the next batter will be expecting that and will capitalize on it. A notable back-to-back home run of that type in World Series play involved "Babe Ruth's called shot" in 1932, which was accompanied by various Ruthian theatrics, yet the pitcher, Charlie Root, was allowed to stay in the game. He delivered just one more pitch, which Lou Gehrig drilled out of the park for a back-to-back shot, after which Root was removed from the game. In Game 3 of the 1976 NLCS, George Foster and Johnny Bench hit back-to-back homers in the last of the ninth off Ron Reed to tie the game. The Series-winning run was scored later in the inning. Another notable pair of back-to-back home runs occurred on September 14, 1990, when Ken Griffey, Sr. and Ken Griffey, Jr. hit back-to-back home runs, off Kirk McCaskill, the only father-and-son duo to do so in Major League history. On May 2, 2002, Bret Boone and Mike Cameron of the Seattle Mariners hit back-to-back home runs off of starter Jon Rauch in the first inning of a game against the Chicago White Sox. The Mariners batted around in the inning, and Boone and Cameron came up to bat against reliever Jim Parque with two outs, again hitting back-to-back home runs and becoming the only pair of teammates to hit back-to-back home runs twice in the same inning. On June 19, 2012, José Bautista and Colby Rasmus hit back-to-back home runs and back-to-back-to-back home runs with Edwin Encarnación for a lead change in each instance. On July 23, 2017, Whit Merrifield, Jorge Bonifacio, and Eric Hosmer of the Kansas City Royals hit back-to-back-to-back home runs in the fourth inning against the Chicago White Sox. The Royals went on to win the game 5–4. On June 20, 2018, George Springer, Alex Bregman, and José Altuve of the Houston Astros hit back-to-back-to-back home runs in the sixth inning against the Tampa Bay Rays. The Astros went on to win the game 5–1. On April 3, 2018, the St. Louis Cardinals began the game against the Milwaukee Brewers with back-to-back homers from Dexter Fowler and Tommy Pham. Then in the bottom of the ninth, with two outs and the Cardinals leading 4–3, Christian Yelich homered to tie the game; and Ryan Braun hit the next pitch for a walk-off homer. This is the only major league game to begin and end with back-to-back homers. 
On May 5, 2019, Eugenio Suárez, Jesse Winker and Derek Dietrich of the Cincinnati Reds hit back-to-back-to-back home runs on three straight pitches against Jeff Samardzija of the San Francisco Giants in the bottom of the first inning. The record for consecutive home runs by a batter under any circumstances is four. Of the sixteen players (through 2012) who have hit four in one game, six have hit them consecutively. Twenty-eight other batters have hit four consecutive across two games. Because bases on balls do not count as at-bats, Ted Williams holds the record for consecutive home runs across the most games, four in four games played, during September 17–22, 1957, for the Red Sox. Williams hit a pinch-hit homer on the 17th; walked as a pinch-hitter on the 18th; there was no game on the 19th; hit another pinch-homer on the 20th; homered and then was lifted for a pinch-runner after at least one walk, on the 21st; and homered after at least one walk on the 22nd. All in all, he had four walks interspersed among his four homers. In World Series play, Reggie Jackson hit a record three in one Series game, the final game (Game 6) in 1977. But those three were part of a much more impressive feat. He walked on four pitches in the second inning of Game 6. Then he hit his three home runs on the first pitch of each of his next three at-bats, off three different pitchers (Hooton in the fourth inning, Sosa in the fifth, and Hough in the eighth). He had also hit one in his last at-bat of the previous game, giving him four home runs on four consecutive swings. The four in a row set the record for consecutive homers across two Series games. In Game 3 of the World Series in 2011, Albert Pujols hit three home runs, tying the record shared by Babe Ruth and Reggie Jackson. The St. Louis Cardinals went on to win the World Series in Game 7 at Busch Stadium. In Game 1 of the World Series in 2012, Pablo Sandoval of the San Francisco Giants hit three home runs in his first three at-bats of the Series. Nomar Garciaparra holds the record for consecutive home runs in the shortest time in terms of innings: three homers in two innings, on July 23, 2002, for the Boston Red Sox. An offshoot of hitting for the cycle, a "home run cycle" occurs when a player hits a solo home run, two-run home run, three-run home run, and grand slam all in one game. This is an extremely rare feat, as it requires the batter not only to hit four home runs in the game, but also to hit the home runs with a specific number of runners already on base. This is largely dependent on circumstances outside of the player's control, such as teammates' ability to get on base, and the order in which the player comes to bat in any particular inning. A further variant of the home run cycle would be the "natural home run cycle", should a batter hit the home runs in the specific order listed above. A home run cycle has never occurred in MLB, which has only had 18 instances of a player hitting four home runs in a game. Though multiple home run cycles have been recorded in collegiate baseball, the only known home run cycle in a professional baseball game belongs to Tyrone Horne, playing for the Arkansas Travelers in a Double-A level Minor League Baseball game against the San Antonio Missions on July 27, 1998. Major league players have come close to hitting a home run cycle, a notable example being Scooter Gennett of the Cincinnati Reds on June 6, 2017, when he hit four home runs against the St. Louis Cardinals. 
He hit a grand slam in the third inning, a two-run home run in the fourth inning, a solo home run in the sixth inning, and a two-run home run in the eighth inning. He had an opportunity for a three-run home run in the first inning, but drove in one run with a single in that at-bat. In the early days of the game, when the ball was less lively and the ballparks generally had very large outfields, most home runs were of the inside-the-park variety. The first home run ever hit in the National League was by Ross Barnes of the Chicago White Stockings (now known as the Chicago Cubs), in 1876. The home "run" was literally descriptive. Home runs over the fence were rare, and possible only in ballparks where a fence was fairly close. Hitters were discouraged from trying to hit home runs, with the conventional wisdom being that if they tried to do so they would simply fly out. This was a serious concern in the 19th century, because in baseball's early days a ball caught after one bounce was still an out. The emphasis was on place-hitting and what is now called "manufacturing runs" or "small ball". The home run's place in baseball changed dramatically when the live-ball era began after World War I. First, the materials and manufacturing processes improved significantly, making the now-mass-produced, cork-centered ball somewhat more lively. Batters such as Babe Ruth and Rogers Hornsby took full advantage of rules changes that were instituted during the 1920s, particularly prohibition of the spitball, and the requirement that balls be replaced when worn or dirty. These changes resulted in the baseball being easier to see and hit, and easier to hit out of the park. Meanwhile, as the game's popularity boomed, more outfield seating was built, shrinking the size of the outfield and increasing the chances of a long fly ball resulting in a home run. The teams with the sluggers, typified by the New York Yankees, became the championship teams, and other teams had to change their focus from the "inside game" to the "power game" in order to keep up. Before 1931, Major League Baseball considered a fair ball that bounced over an outfield fence to be a home run. The rule was changed to require the ball to clear the fence on the fly, and balls that reached the seats on a bounce became automatic doubles (often referred to as a ground rule double). The last "bounce" home run in MLB was hit by Al López of the Brooklyn Robins on September 12, 1930, at Ebbets Field. A carryover of the old rule is that if a player deflects a ball over the outfield fence in fair territory without it touching the ground, it is a home run, per MLB rule 5.05(a)(9). Additionally, MLB rule 5.05(a)(5) still stipulates that a ball hit over a fence in fair territory that is less than a specified distance from home plate "shall entitle the batter to advance to second base only", as some early ballparks had short dimensions. Also until circa 1931, the ball had to go not only over the fence in fair territory, but it had to land in the bleachers in fair territory or still be visibly fair when disappearing from view. The rule stipulated "fair when last seen" by the umpires. Photos from that era in ballparks, such as the Polo Grounds and Yankee Stadium, show ropes strung from the foul poles to the back of the bleachers, or a second "foul pole" at the back of the bleachers, in a straight line with the foul line, as a visual aid for the umpire. Ballparks still use a visual aid much like the ropes; a net or screen attached to the foul poles on the fair side has replaced the ropes. 
As with American football, where a touchdown once required a literal "touch down" of the ball in the end zone but now only requires the "breaking of the [vertical] plane" of the goal line, in baseball the ball need only "break the plane" of the fence in fair territory (unless the ball is caught by a player who is in play, in which case the batter is called out). Babe Ruth's 60th home run in 1927 was somewhat controversial, because it landed barely in fair territory in the stands down the right field line. Ruth lost a number of home runs in his career due to the when-last-seen rule. Bill Jenkinson, in "The Year Babe Ruth Hit 104 Home Runs", estimates that Ruth lost at least 50 and as many as 78 in his career due to this rule. Further, the rules once stipulated that an over-the-fence home run in a sudden-victory situation would only count for as many bases as was necessary to "force" the winning run home. For example, if a team trailed by two runs with the bases loaded, and the batter hit a fair ball over the fence, it only counted as a triple, because the runner immediately ahead of him had technically already scored the game-winning run. That rule was changed in the 1920s as home runs became increasingly frequent and popular. Babe Ruth's career total of 714 would have been one higher had that rule not been in effect in the early part of his career. Major League Baseball keeps running totals of all-time home runs by team, including teams no longer active (prior to 1900) as well as by individual players. Gary Sheffield hit the 250,000th home run in MLB history with a grand slam on September 8, 2008. Sheffield had hit MLB's 249,999th home run against Gio González in his previous at-bat. The all-time, verified professional baseball record for career home runs for one player, excluding the U.S. Negro Leagues during the era of segregation, is held by Sadaharu Oh. Oh spent his entire career playing for the Yomiuri Giants in Japan's Nippon Professional Baseball, later managing the Giants, the Fukuoka SoftBank Hawks and the 2006 World Baseball Classic Japanese team. Oh holds the all-time home run world record, having hit 868 home runs in his career. In Major League Baseball, the career record is 762, held by Barry Bonds, who broke Hank Aaron's record on August 7, 2007, when he hit his 756th home run at AT&T Park off pitcher Mike Bacsik. Only eight other major league players have hit as many as 600: Hank Aaron (755), Babe Ruth (714), Alex Rodriguez (696), Willie Mays (660), Albert Pujols (654), Ken Griffey, Jr. (630), Jim Thome (612), and Sammy Sosa (609); Pujols holds the record for active MLB players. The single season record is 73, set by Barry Bonds in 2001. Other notable single season records were achieved by Babe Ruth who hit 60 in 1927, Roger Maris, with 61 home runs in 1961, and Mark McGwire, who hit 70 in 1998. Negro League slugger Josh Gibson's Baseball Hall of Fame plaque says he hit "almost 800" home runs in his career. The "Guinness Book of World Records" lists Gibson's lifetime home run total at 800. Ken Burns' award-winning series, "Baseball", states that his actual total may have been as high as 950. Gibson's true total is not known, in part due to inconsistent record keeping in the Negro Leagues. The 1993 edition of the MacMillan "Baseball Encyclopedia" attempted to compile a set of Negro League records, and subsequent work has expanded on that effort. Those records demonstrate that Gibson and Ruth were of comparable power. 
The 1993 book had Gibson hitting 146 home runs in the 501 "official" Negro League games they were able to account for in his 17-year career, about 1 homer every 3.4 games. Babe Ruth, in 22 seasons (several of them in the dead-ball era), hit 714 in 2503 games, or 1 homer every 3.5 games. The large gap in the numbers for Gibson reflects the fact that Negro League clubs played far fewer league games and many more "barnstorming" or exhibition games during the course of a season than did the major league clubs of that era. Other legendary home run hitters include Jimmie Foxx, Mel Ott, Ted Williams, Mickey Mantle (who on September 10, 1960, reputedly hit "the longest home run ever" at an estimated distance of , although this was measured after the ball stopped rolling), Reggie Jackson, Harmon Killebrew, Ernie Banks, Mike Schmidt, Dave Kingman, Sammy Sosa (who hit 60 or more home runs in a season 3 times), Ken Griffey, Jr. and Eddie Mathews. In 1987, Joey Meyer of the Denver Zephyrs hit the longest verifiable home run in professional baseball history. The home run was measured at a distance of and was hit inside Denver's Mile High Stadium. Major League Baseball's longest verifiable home run distance is about , by Babe Ruth, to straightaway center field at Tiger Stadium (then called Navin Field, before it was double-decked), which landed nearly across the intersection of Trumbull and Cherry. The location where Hank Aaron's record 755th home run landed has been commemorated in Milwaukee. The spot sits outside Miller Park, where the Milwaukee Brewers currently play. Similarly, the point where Aaron's 715th homer landed, upon breaking Ruth's career record in 1974, is marked in the Turner Field parking lot. A red-painted seat in Fenway Park marks the landing place of the 502-ft home run Ted Williams hit in 1946, the longest measured homer in Fenway's history; a red stadium seat mounted on the wall of the Mall of America in Bloomington, Minnesota, marks the landing spot of Harmon Killebrew's record 520-foot shot in old Metropolitan Stadium. May 2019 saw 1,135 MLB home runs, the highest number of home runs in any single month in Major League Baseball history. During this month, 44.5% of all runs were scored on home runs, breaking the previous record of 42.3%. Replays "to get the call right" have been used only sporadically in the past, but the use of instant replay to determine "boundary calls"—home runs and foul balls—was not officially allowed until 2008. In a game on May 31, 1999, involving the St. Louis Cardinals and Florida Marlins, a hit by Cliff Floyd of the Marlins was initially ruled a double, then a home run, then was changed back to a double when umpire Frank Pulli decided to review video of the play. The Marlins protested that video replay was not allowed, but while the National League office agreed that replay was not to be used in future games, it declined the protest on the grounds it was a judgment call, and the play stood. In November 2007, the general managers of Major League Baseball voted in favor of implementing instant replay reviews on boundary home run calls. The proposal limited the use of instant replay to determining whether a boundary/home run call is: On August 28, 2008, instant replay review became available in MLB for reviewing calls in accordance with the above proposal. It was first utilized on September 3, 2008, in a game between the New York Yankees and the Tampa Bay Rays at Tropicana Field. 
Alex Rodriguez of the Yankees hit what appeared to be a home run, but the ball hit a catwalk behind the foul pole. It was at first called a home run, until Tampa Bay manager Joe Maddon argued the call, and the umpires decided to review the play. After 2 minutes and 15 seconds, the umpires came back and ruled it a home run. About two weeks later, on September 19, also at Tropicana Field, a boundary call was overturned for the first time. In this case, Carlos Peña of the Rays was given a ground rule double in a game against the Minnesota Twins after an umpire believed a fan reached into the field of play to catch a fly ball in right field. The umpires reviewed the play, determined the fan did not reach over the fence, and reversed the call, awarding Peña a home run. Aside from the two aforementioned reviews at Tampa Bay, replay was used four more times in the 2008 MLB regular season: twice at Houston, once at Seattle, and once at San Francisco. The San Francisco incident is perhaps the most unusual. Bengie Molina, the Giants' catcher, hit what was first called a single. Molina then was replaced in the game by Emmanuel Burriss, a pinch-runner, before the umpires re-evaluated the call and ruled it a home run. In this instance, though, Molina was not allowed to return to the game to complete the run, as he had already been replaced. Molina was credited with the home run and two RBIs, but not with the run scored, which went to Burriss instead. On October 31, 2009, in the fourth inning of Game 3 of the World Series, Alex Rodriguez hit a long fly ball that hit a camera protruding over the wall and into the field of play in deep right field. The ball ricocheted off the camera and re-entered the field, and the hit was initially ruled a double. However, after the umpires consulted with each other and watched the instant replay, the hit was ruled a home run, marking the first time an instant replay home run was hit in a playoff game. Career achievements
https://en.wikipedia.org/wiki?curid=14148
Harappa Harappa is an archaeological site in Punjab, Pakistan, about west of Sahiwal. The site takes its name from a modern village located near the former course of the Ravi River which now runs to the north. The current village of Harappa is less than from the ancient site. Although modern Harappa has a legacy railway station from the British Raj period, it is a small crossroads town of 15,000 people today. The site of the ancient city contains the ruins of a Bronze Age fortified city, which was part of the Indus Valley Civilisation centred in Sindh and the Punjab, and then the Cemetery H culture. The city is believed to have had as many as 23,500 residents and occupied about with clay brick houses at its greatest extent during the Mature Harappan phase (2600 BC – 1900 BC), which is considered large for its time. Per archaeological convention of naming a previously unknown civilisation by its first excavated site, the Indus Valley Civilisation is also called the Harappan Civilisation. The ancient city of Harappa was heavily damaged under British rule, when bricks from the ruins were used as track ballast in the construction of the Lahore–Multan Railway. In 2005, a controversial amusement park scheme at the site was abandoned when builders unearthed many archaeological artifacts during the early stages of building work. A plea from the Pakistani archaeologist Mohit Prem Kumar to the Ministry of Culture resulted in a restoration of the site. The Harappan Civilisation has its earliest roots in cultures such as that of Mehrgarh, approximately 6000 BC. The two greatest cities, Mohenjo-daro and Harappa, emerged circa 2600 BC along the Indus River valley in Punjab and Sindh. The civilisation, with a possible writing system, urban centers, and diversified social and economic system, was rediscovered in the 1920s after excavations at Mohenjo-daro in Sindh near Larkana, and Harappa, in west Punjab south of Lahore. A number of other sites stretching from the Himalayan foothills in east Punjab, India in the north, to Gujarat in the south and east, and to Pakistani Balochistan in the west have also been discovered and studied. Although the archaeological site at Harappa was damaged in 1857 when engineers constructing the Lahore-Multan railroad used brick from the Harappa ruins for track ballast, an abundance of artifacts have nevertheless been found. The bricks discovered were made of red sand, clay, and stones, and were baked at very high temperatures. As early as 1826, Harappa, located in west Punjab, attracted the attention of Daya Ram Sahni, who gets credit for preliminary excavations of Harappa. The Indus Valley civilisation was mainly an urban culture sustained by surplus agricultural production and commerce, the latter including trade with Sumer in southern Mesopotamia. Both Mohenjo-Daro and Harappa are generally characterized as having "differentiated living quarters, flat-roofed brick houses, and fortified administrative or religious centers." Although such similarities have given rise to arguments for the existence of a standardized system of urban layout and planning, the similarities are largely due to the presence of a semi-orthogonal type of civic layout, and a comparison of the layouts of Mohenjo-Daro and Harappa shows that they are, in fact, arranged in a quite dissimilar fashion. The weights and measures of the Indus Valley Civilisation, on the other hand, were highly standardized, and conform to a set scale of gradations. 
Distinctive seals were used, among other applications, perhaps for identification of property and shipment of goods. Although copper and bronze were in use, iron was not yet employed. "Cotton was woven and dyed for clothing; wheat, rice, and a variety of vegetables and fruits were cultivated; and a number of animals, including the humped bull, were domesticated," as well as "fowl for fighting". Wheel-made pottery—some of it adorned with animal and geometric motifs—has been found in profusion at all the major Indus sites. A centralized administration for each city, though not the whole civilisation, has been inferred from the revealed cultural uniformity; however, it remains uncertain whether authority lay with a commercial oligarchy. Harappans had many trade routes along the Indus River that went as far as the Persian Gulf, Mesopotamia, and Egypt. Some of the most valuable things traded were carnelian and lapis lazuli. What is clear is that Harappan society was not entirely peaceful, with the human skeletal remains demonstrating some of the highest rates of injury (15.5%) found in South Asian prehistory. Paleopathological analysis demonstrated that leprosy and tuberculosis were present at Harappa, with the highest prevalence of both disease and trauma present in the skeletons from Area G (an ossuary located south-east of the city walls). Furthermore, rates of cranio-facial trauma and infection increased through time demonstrating that the civilisation collapsed amid illness and injury. The bioarchaeologists who examined the remains have suggested that the combined evidence for differences in mortuary treatment and epidemiology indicate that some individuals and communities at Harappa were excluded from access to basic resources like health and safety, a basic feature of hierarchical societies worldwide. The excavators of the site have proposed the following chronology of Harappa's occupation: By far the most exquisite and obscure artifacts unearthed to date are the small, square steatite (soapstone) seals engraved with human or animal motifs. A large number of seals have been found at such sites as Mohenjo-Daro and Harappa. Many bear pictographic inscriptions generally thought to be a form of writing or script. Despite the efforts of philologists from all parts of the world, and despite the use of modern cryptographic analysis, the signs remain undeciphered. It is also unknown if they reflect proto-Dravidian or other non-Vedic language(s). The ascribing of Indus Valley Civilisation iconography and epigraphy to historically known cultures is extremely problematic, in part due to the rather tenuous archaeological evidence for such claims, as well as the projection of modern South Asian political concerns onto the archaeological record of the area. This is especially evident in the radically varying interpretations of Harappan material culture as seen from both Pakistan- and India-based scholars. In February 2006 a school teacher in the village of Sembian-Kandiyur in Tamil Nadu discovered a stone celt (tool) with an inscription estimated to be up to 3,500 years old. Indian epigraphist Iravatham Mahadevan postulated that the four signs were in the Indus script and called the find "the greatest archaeological discovery of a century in Tamil Nadu". Based on this evidence he goes on to suggest that the language used in the Indus Valley was of Dravidian origin. 
However, the absence of a Bronze Age in South India, contrasted with the knowledge of bronze-making techniques in the Indus Valley cultures, calls into question the validity of this hypothesis. Clay and stone tablets unearthed at Harappa, which were carbon-dated to 3300–3200 BC, contain trident-shaped and plant-like markings. "It is a big question as to if we can call what we have found true writing, but we have found symbols that have similarities to what became Indus script," said Dr. Richard Meadow of Harvard University, Director of the Harappa Archeological Research Project. This primitive writing is placed slightly earlier than primitive writings of the Sumerians of Mesopotamia, dated c. 3100 BC. These markings have similarities to what later became Indus script.
https://en.wikipedia.org/wiki?curid=14149
Hendecasyllable In poetry, a hendecasyllable is a line of eleven syllables. The term "hendecasyllabic" is used to refer to two different poetic meters, the older of which is quantitative and used chiefly in classical (Ancient Greek and Latin) poetry and the newer of which is accentual and used in medieval and modern poetry. The term is also often used when a line of iambic pentameter contains 11 syllables. The classical hendecasyllable is a quantitative meter used in Ancient Greece in Aeolic verse and in scolia, and later by the Roman poets Catullus and Martial. Each line has eleven syllables; hence the name, which comes from the Greek word for eleven. The heart of the line is the choriamb (¯ ˘ ˘ ¯). There are three different versions. The pattern of the Phalaecian (Latin: "hendecasyllabus phalaecius") is as follows (using "¯" for a long syllable, "˘" for a short and "−̆" for an "anceps" or variable syllable): Another form of hendecasyllabic verse is the "Alcaic" (Latin: "hendecasyllabus alcaicus"; used in the Alcaic stanza), which has the pattern: The third form of hendecasyllabic verse is the "Sapphic" (Latin: "hendecasyllabus sapphicus"; so named for its use in the Sapphic stanza), with the pattern: Forty-three of Catullus's poems are hendecasyllabic; for an example, see Catullus 1. The metre has been imitated in English, notably by Alfred Tennyson, Swinburne, and Robert Frost, cf. "For Once, Then, Something". Contemporary American poets Annie Finch ("Lucid Waking") and Patricia Smith ("The Reemergence of the Noose") have published recent examples. Poets wanting to capture the hendecasyllabic rhythm in English have simply transposed the pattern into its accentual-syllabic equivalent: ¯ ˘|¯ ˘|¯ ˘ ˘|¯ ˘|¯ ˘|, or trochee/trochee/dactyl/trochee/trochee, so that the long/short pattern becomes a stress/unstress pattern. Tennyson, however, maintained the quantitative features of the metre: The hendecasyllable ("endecasillabo") is the principal metre in Italian poetry. Its defining feature is a constant stress on the tenth syllable, so that the number of syllables in the verse may vary, equaling eleven in the usual case where the final word is stressed on the penultimate syllable. The verse also has a stress preceding the caesura, on either the fourth or sixth syllable. The first case is called "endecasillabo a minore", or lesser hendecasyllable, and has the first hemistich equivalent to a "quinario"; the second is called "endecasillabo a maiore", or greater hendecasyllable, and has a "settenario" as the first hemistich. There is a strong tendency for hendecasyllabic lines to end with feminine rhymes (causing the total number of syllables to be eleven, hence the name), but ten-syllable lines (""Ciò che 'n grembo a Benaco star non può"") and twelve-syllable lines (""Ergasto mio, perché solingo e tacito"") are encountered as well. Lines of ten or twelve syllables are more common in rhymed verse; "versi sciolti", which rely more heavily on a pleasant rhythm for effect, tend toward a stricter eleven-syllable format. As a novelty, lines longer than twelve syllables can be created by the use of certain verb forms and affixed enclitic pronouns (""Ottima è l'acqua; ma le piante abbeverinosene.""). Additional accents beyond the two mandatory ones provide rhythmic variation and allow the poet to express thematic effects. A line in which accents fall consistently on even-numbered syllables (""Al còr gentìl rempàira sèmpre amóre"") is called iambic ("giambico") and may be a greater or lesser hendecasyllable. 
This line is the simplest, commonest and most musical but may become repetitive, especially in longer works. Lesser hendecasyllables often have an accent on the seventh syllable ("fàtta di giòco in figùra d'amóre"). Such a line is called dactylic ("dattilico") and its less pronounced rhythm is considered particularly appropriate for representing dialogue. Another kind of greater hendecasyllable has an accent on the third syllable ("Se Mercé fosse amìca a' miei disìri") and is known as anapestic ("anapestico"). This sort of line has a crescendo effect and gives the poem a sense of speed and fluidity. It is considered improper for the lesser hendecasyllable to use a word accented on its antepenultimate syllable ("parola sdrucciola") for its mid-line stress. A line like "Più non sfavìllano quegli òcchi néri", which delays the caesura until after the sixth syllable, is not considered a valid hendecasyllable. Most classical Italian poems are composed in hendecasyllables, including the major works of Dante, Francesco Petrarca, Ludovico Ariosto, and Torquato Tasso. The rhyme systems used include terza rima, ottava, sonnet and canzone, and some verse forms use a mixture of hendecasyllables and shorter lines. From the early 16th century onward, hendecasyllables are often used without a strict system, with few or no rhymes, both in poetry and in drama. This is known as "verso sciolto". An early example is "Le Api" ("the bees") by Giovanni di Bernardo Rucellai, written around 1517 and published in 1525, which begins: Like other early Italian-language tragedies, the "Sophonisba" of Gian Giorgio Trissino (1515) is in blank hendecasyllables. Later examples can be found in the "Canti" of Giacomo Leopardi, where hendecasyllables are alternated with "settenari". The hendecasyllabic metre (Polish: "jedenastozgłoskowiec") was very popular in Polish poetry, especially in the seventeenth and eighteenth centuries, owing to strong Italian literary influence. It was used by Jan Kochanowski, Piotr Kochanowski (who translated "Jerusalem Delivered" by Torquato Tasso), Sebastian Grabowiecki, Wespazjan Kochowski and Stanisław Herakliusz Lubomirski. The greatest Polish Romantic poet, Adam Mickiewicz, set his poem "Grażyna" in this measure. The Polish hendecasyllable is widely used when translating English blank verse. The eleven-syllable line is normally defined by primary stresses on the fourth and tenth syllables and a caesura after the fifth syllable. Only rarely is it fully iambic. A popular form of Polish literature that employs the hendecasyllable is the Sapphic stanza: 11/11/11/5. The Polish hendecasyllable is often combined with an 8-syllable line: 11a/8b/11a/8b. Such a stanza was used by Mickiewicz in his ballads, as in the following example. The hendecasyllable (Portuguese: "hendecassílabo") is a common meter in Portuguese poetry. The best-known Portuguese poem composed in hendecasyllables is Luís de Camões' "Lusiads", which begins as follows: In Portuguese, the hendecasyllable meter is often called "decasyllable" ("decassílabo"), even when the work in question uses overwhelmingly feminine rhymes (as is the case with the "Lusiads"). The hendecasyllable ("endecasílabo") is less pervasive in Spanish poetry than in Italian or Portuguese, but it is commonly used with Italianate verse forms like sonnets and ottava rima.
An example of the latter is Alonso de Ercilla's epic "La Araucana", which opens as follows: Spanish dramatists often use hendecasyllables in tandem with shorter lines like heptasyllables, as can be seen in Rosaura's opening speech from Calderón's "La vida es sueño": The term "hendecasyllable" is sometimes used to describe a line of iambic pentameter with a feminine ending, as in the first line of John Keats's "Endymion:" "A thing of beauty is a joy for ever."
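Because the Italian rules above reduce to positional constraints on stress, they can be stated as a short procedure. The following Python sketch is purely illustrative: it assumes a line has already been divided into syllables with its stressed positions identified, it ignores complications such as synalepha and secondary accents, and the function name is invented rather than taken from any established prosody library.

```python
def classify_hendecasyllable(stressed_positions):
    """Classify an Italian hendecasyllable from its stressed syllable positions (1-based).

    Simplified model of the rules described above:
      - the tenth syllable must carry a stress;
      - a pre-caesura stress on the sixth syllable marks an "endecasillabo a maiore";
      - a pre-caesura stress on the fourth syllable marks an "endecasillabo a minore".
    """
    stresses = set(stressed_positions)
    if 10 not in stresses:
        return "not a hendecasyllable: no stress on the tenth syllable"
    if 6 in stresses:
        return "endecasillabo a maiore (greater hendecasyllable)"
    if 4 in stresses:
        return "endecasillabo a minore (lesser hendecasyllable)"
    return "stress on the tenth syllable only: hemistich type undetermined"

# Dante's "Nel mezzo del cammin di nostra vita" carries stresses on (at least)
# syllables 2, 6 and 10, so this crude check labels it "a maiore".
print(classify_hendecasyllable([2, 6, 10]))
```

On this crude model a line stressed on both the fourth and sixth syllables is reported as "a maiore"; a fuller treatment would locate the caesura itself rather than relying on stress positions alone.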
https://en.wikipedia.org/wiki?curid=14153
Hebrides The Hebrides (; , ; ) comprise a widespread and diverse archipelago off the west coast of mainland Scotland. There are two main groups: the Inner and Outer Hebrides. These islands have a long history of occupation dating back to the Mesolithic, and the culture of the residents has been affected by the successive influences of Celtic-speaking, Norse-speaking, and English-speaking peoples. This diversity is reflected in the names given to the islands, which are derived from the languages that have been spoken there in historic and perhaps prehistoric times. The Hebrides are the source of much of Scottish Gaelic literature and Gaelic music. Today the economy of the islands is dependent on crofting, fishing, tourism, the oil industry, and renewable energy. The Hebrides have lower biodiversity than mainland Scotland, but there is a significant presence of seals and seabirds. The earliest written references that have survived relating to the islands were made circa 77 AD by Pliny the Elder in his "Natural History", where he states that there are 30 ', and makes a separate reference to ', which Watson (1926) concludes is unequivocally the Outer Hebrides. Writing about 80 years later, in 140-150 AD, Ptolemy, drawing on the earlier naval expeditions of , writes that there are five ' (possibly meaning the Inner Hebrides) and '. Later texts in classical Latin, by writers such as , use the forms ' and '. The name ' recorded by Ptolemy may be pre-Celtic. Islay is Ptolemy's , the use of the "p" hinting at a Brythonic or Pictish tribal name, , although the root is not Gaelic. Woolf (2012) has suggested that ' may be "an Irish attempt to reproduce the word ' phonetically rather than by translating it" and that the tribe's name may come from the root ' meaning "horse". Watson (1926) also notes the possible relationship between ' and the ancient Irish Ulaid tribal name ' and the personal name of a king recorded in the "". The names of other individual islands reflect their complex linguistic history. The majority are Norse or Gaelic but the roots of several other Hebrides may have a pre-Celtic origin. Adomnán, the 7th century abbot of Iona, records Colonsay as "Colosus" and Tiree as "Ethica", both of which may be pre-Celtic names. The etymology of Skye is complex and may also include a pre-Celtic root. Lewis is "" in Old Norse and although various suggestions have been made as to a Norse meaning (such as "song house") the name is not of Gaelic origin and the Norse credentials are questionable. The earliest comprehensive written list of Hebridean island names was undertaken by Donald Monro in 1549, which in some cases also provides the earliest written form of the island name. The derivations of all of the inhabited islands of the Hebrides and some of the larger uninhabited ones are listed below. Lewis and Harris is the largest island in Scotland and the third largest of the British Isles, after Great Britain and Ireland. It incorporates Lewis in the north and Harris in the south, both of which are frequently referred to as individual islands, although they are joined by a land border. Remarkably, the island does not have a common name in either English or Gaelic and is referred to as "Lewis and Harris", "Lewis with Harris", "Harris with Lewis" etc. For this reason it is treated as two separate islands below. The derivation of Lewis may be pre-Celtic (see above) and the origin of Harris is no less problematic. 
In the Ravenna Cosmography, "Erimon" may refer to Harris (or possibly the Outer Hebrides as a whole). This word may derive from the ( "desert". The origin of Uist () is similarly unclear. There are various examples of Inner Hebridean island names that were originally Gaelic but have become completely replaced. For example, Adomnán records "Sainea", "Elena", "Ommon" and "Oideacha" in the Inner Hebrides, which names must have passed out of usage in the Norse era and whose locations are not clear. One of the complexities is that an island may have had a Celtic name, which was replaced by a similar-sounding Norse name, but then reverted to an essentially Gaelic name with a Norse "øy" or "ey" ending. See for example Rona below. The names of uninhabited islands follow the same general patterns as the inhabited islands. The following are the ten largest in the Hebrides and their outliers. The etymology of St Kilda, a small archipelago west of the Outer Hebrides, and its main island Hirta, is very complex. No saint is known by the name of Kilda, and various theories have been proposed for the word's origin, which dates from the late 16th century. Haswell-Smith (2004) notes that the full name "St Kilda" first appears on a Dutch map dated 1666, and that it may have been derived from Norse ' ("sweet wellwater") or from a mistaken Dutch assumption that the spring ' was dedicated to a saint (' is a tautological placename, consisting of the Gaelic and Norse words for "well", i.e. "well well"). The origin of the Gaelic for "Hirta"—', ', or '—which long pre-dates the use of "St Kilda", is similarly open to interpretation. Watson (1926) offers the Old Irish ', a word meaning "death", possibly relating to the dangerous seas. Maclean (1977), drawing on an Icelandic saga describing an early 13th-century voyage to Ireland that mentions a visit to the islands of ', speculates that the shape of Hirta resembles a stag, "" being "stags" in Norse. The etymology of small islands may be no less complex. In relation to , R. L. Stevenson believed that "black and dismal" was a translation of the name, noting that "as usual, in Gaelic, it is not the only one." The Hebrides were settled during the Mesolithic era around 6500 BC or earlier, after the climatic conditions improved enough to sustain human settlement. Occupation at a site on is dated to 8590 ±95 uncorrected radiocarbon years BP, which is amongst the oldest evidence of occupation in Scotland. There are many examples of structures from the Neolithic period, the finest example being the standing stones at Callanish, dating to the 3rd millennium BC. Cladh Hallan, a Bronze Age settlement on South Uist is the only site in the UK where prehistoric mummies have been found. In 55 BC, the Greek historian Diodorus Siculus wrote that there was an island called "Hyperborea" (which means "beyond the North Wind"), where a round temple stood from which the moon appeared only a little distance above the earth every 19 years. This may have been a reference to the stone circle at Callanish. A traveller called Demetrius of Tarsus related to Plutarch the tale of an expedition to the west coast of Scotland in or shortly before AD 83. He stated it was a gloomy journey amongst uninhabited islands, but he had visited one which was the retreat of holy men. He mentioned neither the druids nor the name of the island. The first written records of native life begin in the 6th century AD, when the founding of the kingdom of Dál Riata took place. 
This encompassed roughly what is now Argyll and Bute and Lochaber in Scotland and County Antrim in Ireland. The figure of Columba looms large in any history of Dál Riata, and his founding of a monastery on Iona ensured that the kingdom would be of great importance in the spread of Christianity in northern Britain. However, Iona was far from unique. Lismore in the territory of the Cenél Loairn, was sufficiently important for the death of its abbots to be recorded with some frequency and many smaller sites, such as on Eigg, Hinba, and Tiree, are known from the annals. North of Dál Riata, the Inner and Outer Hebrides were nominally under Pictish control, although the historical record is sparse. Hunter (2000) states that in relation to King Bridei I of the Picts in the sixth century: "As for Shetland, Orkney, Skye and the Western Isles, their inhabitants, most of whom appear to have been Pictish in culture and speech at this time, are likely to have regarded Bridei as a fairly distant presence.” Viking raids began on Scottish shores towards the end of the 8th century and the Hebrides came under Norse control and settlement during the ensuing decades, especially following the success of Harald Fairhair at the Battle of in 872. In the Western Isles Ketill Flatnose may have been the dominant figure of the mid 9th century, by which time he had amassed a substantial island realm and made a variety of alliances with other Norse leaders. These princelings nominally owed allegiance to the Norwegian crown, although in practice the latter's control was fairly limited. Norse control of the Hebrides was formalised in 1098 when Edgar of Scotland formally signed the islands over to Magnus III of Norway. The Scottish acceptance of Magnus III as King of the Isles came after the Norwegian king had conquered Orkney, the Hebrides and the Isle of Man in a swift campaign earlier the same year, directed against the local Norwegian leaders of the various island petty kingdoms. By capturing the islands Magnus imposed a more direct royal control, although at a price. His skald Bjorn Cripplehand recorded that in Lewis "fire played high in the heaven" as "flame spouted from the houses" and that in the Uists "the king dyed his sword red in blood". The Hebrides were now part of the Kingdom of the Isles, whose rulers were themselves vassals of the Kings of Norway. This situation lasted until the partitioning of the Western Isles in 1156, at which time the Outer Hebrides remained under Norwegian control while the Inner Hebrides broke out under Somerled, the Norse-Gael kinsman of the Manx royal house. Following the ill-fated 1263 expedition of Haakon IV of Norway, the Outer Hebrides and the Isle of Man were yielded to the Kingdom of Scotland as a result of the 1266 Treaty of Perth. Although their contribution to the islands can still be found in personal and place names, the archaeological record of the Norse period is very limited. The best known find is the Lewis chessmen, which date from the mid 12th century. As the Norse era drew to a close, the Norse-speaking princes were gradually replaced by Gaelic-speaking clan chiefs including the MacLeods of Lewis and Harris, Clan Donald and MacNeil of Barra. This transition did little to relieve the islands of internecine strife although by the early 14th century the MacDonald Lords of the Isles, based on Islay, were in theory these chiefs' feudal superiors and managed to exert some control. 
The Lords of the Isles ruled the Inner Hebrides as well as part of the Western Highlands as subjects of the King of Scots until John MacDonald, fourth Lord of the Isles, squandered the family's powerful position. A rebellion by his nephew, Alexander of Lochalsh provoked an exasperated James IV to forfeit the family's lands in 1493. In 1598, King James VI authorised some "Gentleman Adventurers" from Fife to civilise the "most barbarous Isle of Lewis". Initially successful, the colonists were driven out by local forces commanded by Murdoch and Neil MacLeod, who based their forces on in . The colonists tried again in 1605 with the same result, but a third attempt in 1607 was more successful and in due course Stornoway became a Burgh of Barony. By this time, Lewis was held by the Mackenzies of Kintail (later the Earls of Seaforth), who pursued a more enlightened approach, investing in fishing in particular. The Seaforths' royalist inclinations led to Lewis becoming garrisoned during the Wars of the Three Kingdoms by Cromwell's troops, who destroyed the old castle in Stornoway. With the implementation of the Treaty of Union in 1707, the Hebrides became part of the new Kingdom of Great Britain, but the clans' loyalties to a distant monarch were not strong. A considerable number of islesmen "came out" in support of the Jacobite Earl of Mar in the 1715 and again in the 1745 rising including Macleod of Dunvegan and MacLea of Lismore. The aftermath of the decisive Battle of Culloden, which effectively ended Jacobite hopes of a Stuart restoration, was widely felt. The British government's strategy was to estrange the clan chiefs from their kinsmen and turn their descendants into English-speaking landlords whose main concern was the revenues their estates brought rather than the welfare of those who lived on them. This may have brought peace to the islands, but in the following century it came at a terrible price. In the wake of the rebellion, the clan system was broken up and islands of the Hebrides became a series of landed estates. The early 19th century was a time of improvement and population growth. Roads and quays were built; the slate industry became a significant employer on Easdale and surrounding islands; and the construction of the Crinan and Caledonian canals and other engineering works such as Clachan Bridge improved transport and access. However, in the mid-19th century, the inhabitants of many parts of the Hebrides were devastated by the Clearances, which destroyed communities throughout the Highlands and Islands as the human populations were evicted and replaced with sheep farms. The position was exacerbated by the failure of the islands' kelp industry that thrived from the 18th century until the end of the Napoleonic Wars in 1815 and large scale emigration became endemic. As , a Gaelic poet from South Uist, wrote for his countrymen who were obliged to leave the Hebrides in the late 18th century, emigration was the only alternative to "sinking into slavery" as the Gaels had been unfairly dispossessed by rapacious landlords. In the 1880s, the "Battle of the Braes" involved a demonstration against unfair land regulation and eviction, stimulating the calling of the Napier Commission. Disturbances continued until the passing of the 1886 Crofters' Act. The Hebrides have a diverse geology ranging in age from Precambrian strata that are amongst the oldest rocks in Europe to Paleogene igneous intrusions. 
Raised shore platforms in the Hebrides are identified as strandflats formed possibly in Pliocene times and later modified by the Quaternary glaciations. The Hebrides can be divided into two main groups, separated from one another by the Minch to the north and the Sea of the Hebrides to the south. The Inner Hebrides lie closer to mainland Scotland and include Islay, Jura, Skye, Mull, Raasay, Staffa and the Small Isles. There are 36 inhabited islands in this group. The Outer Hebrides are a chain of more than 100 islands and small skerries located about west of mainland Scotland. There are 15 inhabited islands in this archipelago. The main islands include Barra, Benbecula, Berneray, Harris, Lewis, North Uist, South Uist, and St Kilda. In total, the islands have an area of approximately and a population of 44,759. A complication is that there are various descriptions of the scope of the Hebrides. The "Collins Encyclopedia of Scotland" describes the Inner Hebrides as lying "east of the Minch", which would include any and all offshore islands. There are various islands that lie in the sea lochs such as and that might not ordinarily be described as "Hebridean", but no formal definitions exist. In the past, the Outer Hebrides were often referred to as the "Long Isle" (). Today, they are also known as the "Western Isles", although this phrase can also be used to refer to the Hebrides in general. The Hebrides have a cool temperate climate that is remarkably mild and steady for such a northerly latitude, due to the influence of the Gulf Stream. In the Outer Hebrides the average temperature for the year is 6 °C (44 °F) in January and 14 °C (57 °F) in summer. The average annual rainfall in Lewis is and sunshine hours range from 1,100 – 1,200 "per annum" (13%). The summer days are relatively long, and May to August is the driest period. The residents of the Hebrides have spoken a variety of different languages during the long period of human occupation. It is assumed that Pictish must once have predominated in the northern Inner Hebrides and Outer Hebrides. The Scottish Gaelic language arrived from Ireland due to the growing influence of the kingdom of Dál Riata from the 6th century AD onwards, and became the dominant language of the southern Hebrides at that time. For a few centuries, the military might of the ' meant that Old Norse was prevalent in the Hebrides. North of , the place names that existed prior to the 9th century have been all but obliterated. The Old Norse name for the Hebrides during the Viking occupation was ', which means "Southern Isles"; in contrast to the ", or "Northern Isles" of Orkney and Shetland. South of , Gaelic place names are more common, and after the 13th century, Gaelic became the main language of the entire Hebridean archipelago. Due to Scots and English being favoured in government and the educational system, the Hebrides have been in a state of diglossia since at least the 17th century. The Highland Clearances of the 19th century accelerated the language shift away from Scottish Gaelic, as did increased migration and the continuing lower status of Gaelic speakers. Nevertheless, as late as the end of the 19th century, there were significant populations of monolingual Gaelic speakers, and the Hebrides still contain the highest percentages of Gaelic speakers in Scotland. This is especially true of the Outer Hebrides, where a slim majority speak the language. The Scottish Gaelic college, , is based on Skye and Islay. 
Ironically, given the status of the Western Isles as the last Gaelic-speaking stronghold in Scotland, the Gaelic language name for the islands – " – means "isles of the foreigners"; from the time when they were under Norse colonisation. For those who remained, new economic opportunities emerged through the export of cattle, commercial fishing and tourism. Nonetheless emigration and military service became the choice of many and the archipelago's populations continued to dwindle throughout the late 19th century and for much of the 20th century. Lengthy periods of continuous occupation notwithstanding, many of the smaller islands were abandoned. There were, however, continuing gradual economic improvements, among the most visible of which was the replacement of the traditional thatched blackhouse with accommodation of a more modern design and with the assistance of Highlands and Islands Enterprise many of the islands' populations have begun to increase after decades of decline. The discovery of substantial deposits of North Sea oil in 1965 and the renewables sector have contributed to a degree of economic stability in recent decades. For example, the Arnish yard has had a chequered history but has been a significant employer in both the oil and renewables industries. The widespread immigration of mainlanders, particularly non-Gaelic speakers, has been a subject of controversy. Many contemporary Gaelic musicians have roots in the Hebrides, including Julie Fowlis (North Uist), Catherine-Ann MacPhee (Barra), Kathleen MacInnes (South Uist), and Ishbel MacAskill (Lewis). All of these singers have repertoire based on the Hebridean tradition, such as ' and ' (waulking songs). This tradition includes many songs composed by little-known or anonymous poets before 1800, such as "", "" and "". Several of Runrig's songs are inspired by the archipelago; Calum and were raised on North Uist and Donnie Munro on Skye. The Gaelic poet spent much of his life in the Hebrides and often referred to them in his poetry, including in ' and '. The best known Gaelic poet of her era, (Mary MacPherson, 1821–98), embodied the spirit of the land agitation of the 1870s and 1880s. This, and her powerful evocation of the Hebrides—she was from Skye—has made her among the most enduring Gaelic poets. Allan MacDonald (1859–1905), who spent his adult life on Eriskay and South Uist, composed hymns and verse in honour of the Blessed Virgin, the Christ Child, and the Eucharist. In his secular poetry, MacDonald praised the beauty of Eriskay and its people. In his verse drama, ' ("The Old Wives' Parliament"), he lampooned the gossiping of his female parishioners and local marriage customs. In the 20th century, Murdo Macfarlane of Lewis wrote ', a well-known poem about the Gaelic revival in the Outer Hebrides. Sorley MacLean, the most respected 20th-century Gaelic writer, was born and raised on Raasay, where he set his best known poem, ', about the devastating effect of the Highland Clearances. , raised on South Uist and described by MacLean as "one of the few really significant living poets in Scotland, writing in any language" (West Highland Free Press, October 1992) wrote the Scottish Gaelic-language novel "" which was voted in the Top Ten of the 100 Best-Ever Books from Scotland. In some respects the Hebrides lack biodiversity in comparison to mainland Britain; for example, there are only half as many mammalian species. 
However, these islands provide breeding grounds for many important seabird species including the world's largest colony of northern gannets. Avian life includes the corncrake, red-throated diver, rock dove, kittiwake, tystie, Atlantic puffin, goldeneye, golden eagle and white-tailed sea eagle. The latter was re-introduced to Rùm in 1975 and has successfully spread to various neighbouring islands, including Mull. There is a small population of red-billed chough concentrated on the islands of Islay and Colonsay. Red deer are common on the hills and the grey seal and common seal are present around the coasts of Scotland. Colonies of seals are found on Oronsay and the Treshnish Isles. The rich freshwater streams contain brown trout, Atlantic salmon and water shrew. Offshore, minke whales, Killer whales, basking sharks, porpoises and dolphins are among the sealife that can be seen. Heather moor containing ling, bell heather, cross-leaved heath, bog myrtle and fescues is abundant and there is a diversity of Arctic and alpine plants including Alpine pearlwort and mossy cyphal. Loch Druidibeg on South Uist is a national nature reserve owned and managed by Scottish Natural Heritage. The reserve covers 1,677 hectares across the whole range of local habitats. Over 200 species of flowering plants have been recorded on the reserve, some of which are nationally scarce. South Uist is considered the best place in the UK for the aquatic plant slender naiad, which is a European Protected Species. Hedgehogs are not native to the Outer Hebrides—they were introduced in the 1970s to reduce garden pests—and their spread poses a threat to the eggs of ground nesting wading birds. In 2003, Scottish Natural Heritage undertook culls of hedgehogs in the area although these were halted in 2007 due to protests. Trapped animals were relocated to the mainland.
https://en.wikipedia.org/wiki?curid=14155
Hexameter Hexameter is a metrical line of verse consisting of six feet. It was the standard epic metre in classical Greek and Latin literature, such as in the "Iliad", "Odyssey" and "Aeneid". Its use in other genres of composition includes Horace's satires, Ovid's "Metamorphoses," and the Hymns of Orpheus. According to Greek mythology, hexameter was invented by Phemonoe, daughter of Apollo and the first Pythia of Delphi. In classical hexameter, the six feet follow these rules: A short syllable (υ) is a syllable with a short vowel and no consonant at the end. A long syllable (–) is a syllable that either has a long vowel, one or more consonants at the end (or a long consonant), or both. Spaces between words are not counted in syllabification, so for instance "cat" is a long syllable in isolation, but "cat attack" would be syllabified as short-short-long: "ca", "ta", "tack" (υ υ –). Variations of the sequence from line to line, as well as the use of caesura (logical full stops within the line), are essential in avoiding what may otherwise be a monotonous sing-song effect. Although the rules seem simple, it is hard to use classical hexameter in English, because English is a stress-timed language that condenses vowels and consonants between stressed syllables, while hexameter relies on the regular timing of the phonetic sounds. Languages having the latter property (i.e., languages that are not stress-timed) include Ancient Greek, Latin, Lithuanian and Hungarian. While the above classical hexameter has never enjoyed much popularity in English, where the standard metre is iambic pentameter, English poems have frequently been written in iambic hexameter. There are numerous examples from the 16th century and a few from the 17th; the most prominent of these is Michael Drayton's "Poly-Olbion" (1612) in couplets of iambic hexameter. An example from Drayton (marking the feet): In the 17th century the iambic hexameter, also called alexandrine, was used as a substitution in the heroic couplet, and as one of the types of permissible lines in lyrical stanzas and the Pindaric odes of Cowley and Dryden. Several attempts were made in the 19th century to naturalise the dactylic hexameter to English, by Henry Wadsworth Longfellow, Arthur Hugh Clough and others, none of them particularly successful. Gerard Manley Hopkins wrote many of his poems in six-foot iambic and sprung rhythm lines. In the 20th century a loose ballad-like six-foot line with a strong medial pause was used by William Butler Yeats. The iambic six-foot line has also been used occasionally, and an accentual six-foot line has been used by translators from the Latin and many poets. In the late 18th century the hexameter was adapted to the Lithuanian language by Kristijonas Donelaitis. His poem "Metai" ("The Seasons") is considered the most successful hexameter text in Lithuanian as yet. Hungarian is extremely well suited to hexameter (and other forms of poetry based on quantitative metre). It has been applied to Hungarian since 1541, introduced by the grammarian János Sylvester.
It can even occur spontaneously: A student may extricate themselves from failing to remember a poem by saying "I'm stuck here, unfortunately the rest won't come into my mind," which is a hexameter in Hungarian: Sándor Weöres included an ordinary nameplate text ("Gyula Tóth tinsmith and plumber") in one of his poems (this time, a pentameter): An inscription on a bar of chocolate went as follows ("milk chocolate with apricot and biscuit pieces"), another hexameter, noticed by the poet Dániel Varró: Due to this feature, hexameter has been widely used both in translated (Greek and Roman) and in original Hungarian poetry up to the twentieth century (e.g. by Miklós Radnóti).
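The classical scheme can likewise be checked mechanically. The Python sketch below tests a scansion string against the standard dactylic hexameter template, in which each of the first five feet is a dactyl (– υ υ) or a spondee (– –) and the sixth foot has two syllables, the second of them indifferent; that template is standard background rather than something spelled out above, and the example scansion of the "Aeneid" opening follows the conventional reading.

```python
import re

# Feet 1-5: dactyl ("-uu") or spondee ("--"); foot 6: long syllable plus anceps.
FOOT = r"(?:-uu|--)"
HEXAMETER = re.compile(rf"^{FOOT}{{5}}-[-u]$")

def is_dactylic_hexameter(scansion: str) -> bool:
    """Return True if a string of '-' (long) and 'u' (short) fits the hexameter template."""
    return bool(HEXAMETER.match(scansion))

# "Arma virumque cano, Troiae qui primus ab oris" is conventionally scanned
# dactyl, dactyl, spondee, spondee, dactyl, spondee:
print(is_dactylic_hexameter("-uu-uu-----uu--"))  # True
print(is_dactylic_hexameter("-u-u-u-u-u-u"))     # False: an iambic/trochaic pattern, not a hexameter
```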
https://en.wikipedia.org/wiki?curid=14160
HIV The human immunodeficiency viruses (HIV) are two species of "Lentivirus" (a subgroup of retrovirus) that infect humans. Over time, they cause acquired immunodeficiency syndrome (AIDS), a condition in which progressive failure of the immune system allows life-threatening opportunistic infections and cancers to thrive. Without treatment, average survival time after infection with HIV is estimated to be 9 to 11 years, depending on the HIV subtype. In most cases, HIV is a sexually transmitted infection and occurs by contact with or transfer of blood, pre-ejaculate, semen, and vaginal fluids. Research has shown (for both same-sex and opposite-sex couples) that HIV is untransmittable through condomless sexual intercourse if the HIV-positive partner has a consistently undetectable viral load. Non-sexual transmission can occur from an infected mother to her infant during pregnancy, during childbirth by exposure to her blood or vaginal fluid, and through breast milk. Within these bodily fluids, HIV is present as both free virus particles and virus within infected immune cells. HIV infects vital cells in the human immune system, such as helper T cells (specifically CD4+ T cells), macrophages, and dendritic cells. HIV infection leads to low levels of CD4+ T cells through a number of mechanisms, including pyroptosis of abortively infected T cells, apoptosis of uninfected bystander cells, direct viral killing of infected cells, and killing of infected CD4+ T cells by CD8+ cytotoxic lymphocytes that recognize infected cells. When CD4+ T cell numbers decline below a critical level, cell-mediated immunity is lost, and the body becomes progressively more susceptible to opportunistic infections, leading to the development of AIDS. HIV is a member of the genus "Lentivirus", part of the family "Retroviridae". Lentiviruses have many morphologies and biological properties in common. Many species are infected by lentiviruses, which are characteristically responsible for long-duration illnesses with a long incubation period. Lentiviruses are transmitted as single-stranded, positive-sense, enveloped RNA viruses. Upon entry into the target cell, the viral RNA genome is converted (reverse transcribed) into double-stranded DNA by a virally encoded enzyme, reverse transcriptase, that is transported along with the viral genome in the virus particle. The resulting viral DNA is then imported into the cell nucleus and integrated into the cellular DNA by a virally encoded enzyme, integrase, and host co-factors. Once integrated, the virus may become latent, allowing the virus and its host cell to avoid detection by the immune system, for an indeterminate amount of time. The HIV virus can remain dormant in the human body for up to ten years after primary infection; during this period the virus does not cause symptoms. Alternatively, the integrated viral DNA may be transcribed, producing new RNA genomes and viral proteins, using host cell resources, that are packaged and released from the cell as new virus particles that will begin the replication cycle anew. Two types of HIV have been characterized: HIV-1 and HIV-2. HIV-1 is the virus that was initially discovered and termed both lymphadenopathy associated virus (LAV) and human T-lymphotropic virus 3 (HTLV-III). HIV-1 is more virulent and more infective than HIV-2, and is the cause of the majority of HIV infections globally. The lower infectivity of HIV-2, compared to HIV-1, implies that fewer of those exposed to HIV-2 will be infected per exposure. 
Due to its relatively poor capacity for transmission, HIV-2 is largely confined to West Africa. HIV is different in structure from other retroviruses. It is roughly spherical with a diameter of about 120 nm, around 60 times smaller than a red blood cell. It is composed of two copies of positive-sense single-stranded RNA that codes for the virus's nine genes enclosed by a conical capsid composed of 2,000 copies of the viral protein p24. The single-stranded RNA is tightly bound to nucleocapsid proteins, p7, and enzymes needed for the development of the virion such as reverse transcriptase, proteases, ribonuclease and integrase. A matrix composed of the viral protein p17 surrounds the capsid ensuring the integrity of the virion particle. This is, in turn, surrounded by the viral envelope, that is composed of the lipid bilayer taken from the membrane of a human host cell when the newly formed virus particle buds from the cell. The viral envelope contains proteins from the host cell and relatively few copies of the HIV envelope protein, which consists of a cap made of three molecules known as glycoprotein (gp) 120, and a stem consisting of three gp41 molecules that anchor the structure into the viral envelope. The envelope protein, encoded by the HIV "env" gene, allows the virus to attach to target cells and fuse the viral envelope with the target cell's membrane releasing the viral contents into the cell and initiating the infectious cycle. As the sole viral protein on the surface of the virus, the envelope protein is a major target for HIV vaccine efforts. Over half of the mass of the trimeric envelope spike is N-linked glycans. The density is high as the glycans shield the underlying viral protein from neutralisation by antibodies. This is one of the most densely glycosylated molecules known and the density is sufficiently high to prevent the normal maturation process of glycans during biogenesis in the endoplasmic and Golgi apparatus. The majority of the glycans are therefore stalled as immature 'high-mannose' glycans not normally present on human glycoproteins that are secreted or present on a cell surface. The unusual processing and high density means that almost all broadly neutralising antibodies that have so far been identified (from a subset of patients that have been infected for many months to years) bind to, or are adapted to cope with, these envelope glycans. The molecular structure of the viral spike has now been determined by X-ray crystallography and cryogenic electron microscopy. These advances in structural biology were made possible due to the development of stable recombinant forms of the viral spike by the introduction of an intersubunit disulphide bond and an isoleucine to proline mutation (radical replacement of an amino acid) in gp41. The so-called SOSIP trimers not only reproduce the antigenic properties of the native viral spike, but also display the same degree of immature glycans as presented on the native virus. Recombinant trimeric viral spikes are promising vaccine candidates as they display less non-neutralising epitopes than recombinant monomeric gp120, which act to suppress the immune response to target epitopes. The RNA genome consists of at least seven structural landmarks (LTR, TAR, RRE, PE, SLIP, CRS, and INS), and nine genes ("gag", "pol", and "env", "tat", "rev", "nef", "vif", "vpr", "vpu", and sometimes a tenth "tev", which is a fusion of "tat", "env" and "rev"), encoding 19 proteins. 
Three of these genes, "gag", "pol", and "env", contain information needed to make the structural proteins for new virus particles. For example, "env" codes for a protein called gp160 that is cut in two by a cellular protease to form gp120 and gp41. The six remaining genes, "tat", "rev", "nef", "vif", "vpr", and "vpu" (or "vpx" in the case of HIV-2), are regulatory genes for proteins that control the ability of HIV to infect cells, produce new copies of virus (replicate), or cause disease. The two "tat" proteins (p16 and p14) are transcriptional transactivators for the LTR promoter acting by binding the TAR RNA element. The TAR may also be processed into microRNAs that regulate the apoptosis genes "ERCC1" and "IER3". The "rev" protein (p19) is involved in shuttling RNAs from the nucleus and the cytoplasm by binding to the RRE RNA element. The "vif" protein (p23) prevents the action of APOBEC3G (a cellular protein that deaminates cytidine to uridine in the single-stranded viral DNA and/or interferes with reverse transcription). The "vpr" protein (p14) arrests cell division at G2/M. The "nef" protein (p27) down-regulates CD4 (the major viral receptor), as well as the MHC class I and class II molecules. "Nef" also interacts with SH3 domains. The "vpu" protein (p16) influences the release of new virus particles from infected cells. The ends of each strand of HIV RNA contain an RNA sequence called a long terminal repeat (LTR). Regions in the LTR act as switches to control production of new viruses and can be triggered by proteins from either HIV or the host cell. The Psi element is involved in viral genome packaging and recognized by "gag" and "rev" proteins. The SLIP element () is involved in the frameshift in the "gag"-"pol" reading frame required to make functional "pol". The term viral tropism refers to the cell types a virus infects. HIV can infect a variety of immune cells such as CD4+ T cells, macrophages, and microglial cells. HIV-1 entry to macrophages and CD4+ T cells is mediated through interaction of the virion envelope glycoproteins (gp120) with the CD4 molecule on the target cells' membrane and also with chemokine co-receptors. Macrophage-tropic (M-tropic) strains of HIV-1, or non-syncytia-inducing strains (NSI; now called R5 viruses) use the "β"-chemokine receptor, CCR5, for entry and are thus able to replicate in both macrophages and CD4+ T cells. This CCR5 co-receptor is used by almost all primary HIV-1 isolates regardless of viral genetic subtype. Indeed, macrophages play a key role in several critical aspects of HIV infection. They appear to be the first cells infected by HIV and perhaps the source of HIV production when CD4+ cells become depleted in the patient. Macrophages and microglial cells are the cells infected by HIV in the central nervous system. In the tonsils and adenoids of HIV-infected patients, macrophages fuse into multinucleated giant cells that produce huge amounts of virus. T-tropic strains of HIV-1, or syncytia-inducing strains (SI; now called X4 viruses) replicate in primary CD4+ T cells as well as in macrophages and use the "α"-chemokine receptor, CXCR4, for entry. Dual-tropic HIV-1 strains are thought to be transitional strains of HIV-1 and thus are able to use both CCR5 and CXCR4 as co-receptors for viral entry. The "α"-chemokine SDF-1, a ligand for CXCR4, suppresses replication of T-tropic HIV-1 isolates. It does this by down-regulating the expression of CXCR4 on the surface of HIV target cells. 
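The co-receptor nomenclature described here (and taken up again in the next paragraph) can be summarised schematically. The Python sketch below simply maps which co-receptors a strain can use to the labels used in the text; it is a didactic simplification, and, as the article notes, co-receptor usage alone does not fully explain viral tropism.

```python
def coreceptor_tropism(uses_ccr5: bool, uses_cxcr4: bool) -> str:
    """Map co-receptor usage to the tropism labels used in the text (simplified)."""
    if uses_ccr5 and uses_cxcr4:
        return "dual-tropic (X4R5): can use both CCR5 and CXCR4 for entry"
    if uses_ccr5:
        return "M-tropic / NSI (R5): enters via CCR5; replicates in macrophages and CD4+ T cells"
    if uses_cxcr4:
        return "T-tropic / SI (X4): enters via CXCR4; replicates in primary CD4+ T cells and macrophages"
    return "no known co-receptor usage"

print(coreceptor_tropism(uses_ccr5=True, uses_cxcr4=False))  # typical primary isolate
print(coreceptor_tropism(uses_ccr5=True, uses_cxcr4=True))   # transitional dual-tropic strain
```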
M-tropic HIV-1 isolates that use only the CCR5 receptor are termed R5; those that use only CXCR4 are termed X4, and those that use both, X4R5. However, the use of co-receptors alone does not explain viral tropism, as not all R5 viruses are able to use CCR5 on macrophages for a productive infection and HIV can also infect a subtype of myeloid dendritic cells, which probably constitute a reservoir that maintains infection when CD4+ T cell numbers have declined to extremely low levels. Some people are resistant to certain strains of HIV. For example, people with the CCR5-Δ32 mutation are resistant to infection by the R5 virus, as the mutation leaves HIV unable to bind to this co-receptor, reducing its ability to infect target cells. Sexual intercourse is the major mode of HIV transmission. Both X4 and R5 HIV are present in the seminal fluid, which enables the virus to be transmitted from a male to his sexual partner. The virions can then infect numerous cellular targets and disseminate into the whole organism. However, a selection process leads to a predominant transmission of the R5 virus through this pathway. In patients infected with subtype B HIV-1, there is often a co-receptor switch in late-stage disease and T-tropic variants that can infect a variety of T cells through CXCR4. These variants then replicate more aggressively with heightened virulence that causes rapid T cell depletion, immune system collapse, and opportunistic infections that mark the advent of AIDS. HIV-positive patients acquire an enormously broad spectrum of opportunistic infections, which was particularly problematic prior to the onset of HAART therapies; however, the same infections are reported among HIV-infected patients examined post-mortem following the onset of antiretroviral therapies. Thus, during the course of infection, viral adaptation to the use of CXCR4 instead of CCR5 may be a key step in the progression to AIDS. A number of studies with subtype B-infected individuals have determined that between 40 and 50 percent of AIDS patients can harbour viruses of the SI and, it is presumed, the X4 phenotypes. HIV-2 is much less pathogenic than HIV-1 and is restricted in its worldwide distribution to West Africa. The adoption of "accessory genes" by HIV-2 and its more promiscuous pattern of co-receptor usage (including CD4-independence) may assist the virus in its adaptation to avoid innate restriction factors present in host cells. Adaptation to use normal cellular machinery to enable transmission and productive infection has also aided the establishment of HIV-2 replication in humans. A survival strategy for any infectious agent is not to kill its host, but ultimately become a commensal organism. Having achieved a low pathogenicity, over time, variants that are more successful at transmission will be selected. The HIV virion enters macrophages and CD4+ T cells by the adsorption of glycoproteins on its surface to receptors on the target cell followed by fusion of the viral envelope with the target cell membrane and the release of the HIV capsid into the cell. Entry to the cell begins through interaction of the trimeric envelope complex (gp160 spike) on the HIV viral envelope and both CD4 and a chemokine co-receptor (generally either CCR5 or CXCR4, but others are known to interact) on the target cell surface. Gp120 binds to integrin α4β7 activating LFA-1, the central integrin involved in the establishment of virological synapses, which facilitate efficient cell-to-cell spreading of HIV-1. 
The gp160 spike contains binding domains for both CD4 and chemokine receptors. The first step in fusion involves the high-affinity attachment of the CD4 binding domains of gp120 to CD4. Once gp120 is bound with the CD4 protein, the envelope complex undergoes a structural change, exposing the chemokine receptor binding domains of gp120 and allowing them to interact with the target chemokine receptor. This allows for a more stable two-pronged attachment, which allows the N-terminal fusion peptide gp41 to penetrate the cell membrane. Repeat sequences in gp41, HR1, and HR2 then interact, causing the collapse of the extracellular portion of gp41 into a hairpin shape. This loop structure brings the virus and cell membranes close together, allowing fusion of the membranes and subsequent entry of the viral capsid. After HIV has bound to the target cell, the HIV RNA and various enzymes, including reverse transcriptase, integrase, ribonuclease, and protease, are injected into the cell. During the microtubule-based transport to the nucleus, the viral single-strand RNA genome is transcribed into double-strand DNA, which is then integrated into a host chromosome. HIV can infect dendritic cells (DCs) by this CD4-CCR5 route, but another route using mannose-specific C-type lectin receptors such as DC-SIGN can also be used. DCs are one of the first cells encountered by the virus during sexual transmission. They are currently thought to play an important role by transmitting HIV to T cells when the virus is captured in the mucosa by DCs. The presence of FEZ-1, which occurs naturally in neurons, is believed to prevent the infection of cells by HIV. HIV-1 entry, as well as entry of many other retroviruses, has long been believed to occur exclusively at the plasma membrane. More recently, however, productive infection by pH-independent, clathrin-mediated endocytosis of HIV-1 has also been reported and was recently suggested to constitute the only route of productive entry. Shortly after the viral capsid enters the cell, an enzyme called reverse transcriptase liberates the positive-sense single-stranded RNA genome from the attached viral proteins and copies it into a complementary DNA (cDNA) molecule. The process of reverse transcription is extremely error-prone, and the resulting mutations may cause drug resistance or allow the virus to evade the body's immune system. The reverse transcriptase also has ribonuclease activity that degrades the viral RNA during the synthesis of cDNA, as well as DNA-dependent DNA polymerase activity that creates a sense DNA from the "antisense" cDNA. Together, the cDNA and its complement form a double-stranded viral DNA that is then transported into the cell nucleus. The integration of the viral DNA into the host cell's genome is carried out by another viral enzyme called integrase. The integrated viral DNA may then lie dormant, in the latent stage of HIV infection. To actively produce the virus, certain cellular transcription factors need to be present, the most important of which is NF-"κ"B (nuclear factor kappa B), which is upregulated when T cells become activated. This means that those cells most likely to be targeted, entered and subsequently killed by HIV are those actively fighting infection. During viral replication, the integrated DNA provirus is transcribed into RNA, some of which then undergo RNA splicing to produce mature messenger RNAs (mRNAs). 
These mRNAs are exported from the nucleus into the cytoplasm, where they are translated into the regulatory proteins Tat (which encourages new virus production) and Rev. As the newly produced Rev protein is produced it moves to the nucleus, where it binds to full-length, unspliced copies of virus RNAs and allows them to leave the nucleus. Some of these full-length RNAs function as new copies of the virus genome, while others function as mRNAs that are translated to produce the structural proteins Gag and Env. Gag proteins bind to copies of the virus RNA genome to package them into new virus particles. HIV-1 and HIV-2 appear to package their RNA differently. HIV-1 will bind to any appropriate RNA. HIV-2 will preferentially bind to the mRNA that was used to create the Gag protein itself. Two RNA genomes are encapsidated in each HIV-1 particle (see Structure and genome of HIV). Upon infection and replication catalyzed by reverse transcriptase, recombination between the two genomes can occur. Recombination occurs as the single-strand, positive-sense RNA genomes are reverse transcribed to form DNA. During reverse transcription, the nascent DNA can switch multiple times between the two copies of the viral RNA. This form of recombination is known as copy-choice. Recombination events may occur throughout the genome. Anywhere from two to 20 recombination events per genome may occur at each replication cycle, and these events can rapidly shuffle the genetic information that is transmitted from parental to progeny genomes. Viral recombination produces genetic variation that likely contributes to the evolution of resistance to anti-retroviral therapy. Recombination may also contribute, in principle, to overcoming the immune defenses of the host. Yet, for the adaptive advantages of genetic variation to be realized, the two viral genomes packaged in individual infecting virus particles need to have arisen from separate progenitor parental viruses of differing genetic constitution. It is unknown how often such mixed packaging occurs under natural conditions. Bonhoeffer "et al." suggested that template switching by reverse transcriptase acts as a repair process to deal with breaks in the single-stranded RNA genome. In addition, Hu and Temin suggested that recombination is an adaptation for repair of damage in the RNA genomes. Strand switching (copy-choice recombination) by reverse transcriptase could generate an undamaged copy of genomic DNA from two damaged single-stranded RNA genome copies. This view of the adaptive benefit of recombination in HIV could explain why each HIV particle contains two complete genomes, rather than one. Furthermore, the view that recombination is a repair process implies that the benefit of repair can occur at each replication cycle, and that this benefit can be realized whether or not the two genomes differ genetically. On the view that recombination in HIV is a repair process, the generation of recombinational variation would be a consequence, but not the cause of, the evolution of template switching. HIV-1 infection causes chronic inflammation and production of reactive oxygen species. Thus, the HIV genome may be vulnerable to oxidative damages, including breaks in the single-stranded RNA. For HIV, as well as for viruses in general, successful infection depends on overcoming host defensive strategies that often include production of genome-damaging reactive oxygen species. Thus, Michod "et al." 
suggested that recombination by viruses is an adaptation for repair of genome damages, and that recombinational variation is a byproduct that may provide a separate benefit. The final step of the viral cycle, assembly of new HIV-1 virions, begins at the plasma membrane of the host cell. The Env polyprotein (gp160) goes through the endoplasmic reticulum and is transported to the Golgi apparatus where it is cleaved by furin resulting in the two HIV envelope glycoproteins, gp41 and gp120. These are transported to the plasma membrane of the host cell where gp41 anchors gp120 to the membrane of the infected cell. The Gag (p55) and Gag-Pol (p160) polyproteins also associate with the inner surface of the plasma membrane along with the HIV genomic RNA as the forming virion begins to bud from the host cell. The budded virion is still immature as the gag polyproteins still need to be cleaved into the actual matrix, capsid and nucleocapsid proteins. This cleavage is mediated by the packaged viral protease and can be inhibited by antiretroviral drugs of the protease inhibitor class. The various structural components then assemble to produce a mature HIV virion. Only mature virions are then able to infect another cell. The classical process of infection of a cell by a virion can be called "cell-free spread" to distinguish it from a more recently recognized process called "cell-to-cell spread". In cell-free spread, virus particles bud from an infected T cell, enter the blood or extracellular fluid and then infect another T cell following a chance encounter. HIV can also disseminate by direct transmission from one cell to another by a process of cell-to-cell spread, for which two pathways have been described. Firstly, an infected T cell can transmit virus directly to a target T cell via a virological synapse. Secondly, an antigen-presenting cell (APC), such as a macrophage or dendritic cell, can transmit HIV to T cells by a process that either involves productive infection (in the case of macrophages) or capture and transfer of virions "in trans" (in the case of dendritic cells). Whichever pathway is used, infection by cell-to-cell transfer is reported to be much more efficient than cell-free virus spread. A number of factors contribute to this increased efficiency, including polarised virus budding towards the site of cell-to-cell contact, close apposition of cells, which minimizes fluid-phase diffusion of virions, and clustering of HIV entry receptors on the target cell towards the contact zone. Cell-to-cell spread is thought to be particularly important in lymphoid tissues where CD4+ T cells are densely packed and likely to interact frequently. Intravital imaging studies have supported the concept of the HIV virological synapse "in vivo". The many spreading mechanisms available to HIV contribute to the virus' ongoing replication in spite of anti-retroviral therapies. HIV differs from many viruses in that it has very high genetic variability. This diversity is a result of its fast replication cycle, with the generation of about 10^10 virions every day, coupled with a high mutation rate of approximately 3 × 10^-5 per nucleotide base per cycle of replication and recombinogenic properties of reverse transcriptase. This complex scenario leads to the generation of many variants of HIV in a single infected patient in the course of one day. This variability is compounded when a single cell is simultaneously infected by two or more different strains of HIV.
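A back-of-the-envelope calculation illustrates the scale implied by these figures. It is only a sketch: the round genome length of 10,000 nucleotides is an assumption (the actual HIV-1 genome is slightly shorter), and mutation events are treated as independent for simplicity.

```python
# Illustrative arithmetic using the rates quoted above plus an assumed genome length.
mutation_rate = 3e-5      # mutations per nucleotide per replication cycle (quoted above)
genome_length = 10_000    # nucleotides; assumed round figure
virions_per_day = 1e10    # new virions per day in untreated infection (quoted above)

mutations_per_genome = mutation_rate * genome_length
daily_mutant_genomes = mutations_per_genome * virions_per_day

print(f"expected point mutations per new genome: {mutations_per_genome:.2f}")       # ~0.3
print(f"order of magnitude of mutant genomes per day: {daily_mutant_genomes:.1e}")  # ~3e+09
```

Even on this crude model, billions of mutant genomes arise each day, which is the sense in which the text speaks of many variants being generated within a single patient in the course of one day.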
When simultaneous infection occurs, the genome of progeny virions may be composed of RNA strands from two different strains. This hybrid virion then infects a new cell where it undergoes replication. As this happens, the reverse transcriptase, by jumping back and forth between the two different RNA templates, will generate a newly synthesized retroviral DNA sequence that is a recombinant between the two parental genomes. This recombination is most obvious when it occurs between subtypes. The closely related simian immunodeficiency virus (SIV) has evolved into many strains, classified by the natural host species. SIV strains of the African green monkey (SIVagm) and sooty mangabey (SIVsmm) are thought to have a long evolutionary history with their hosts. These hosts have adapted to the presence of the virus, which is present at high levels in the host's blood, but evokes only a mild immune response, does not cause the development of simian AIDS, and does not undergo the extensive mutation and recombination typical of HIV infection in humans. In contrast, when these strains infect species that have not adapted to SIV ("heterologous" or similar hosts such as rhesus or cynomologus macaques), the animals develop AIDS and the virus generates genetic diversity similar to what is seen in human HIV infection. Chimpanzee SIV (SIVcpz), the closest genetic relative of HIV-1, is associated with increased mortality and AIDS-like symptoms in its natural host. SIVcpz appears to have been transmitted relatively recently to chimpanzee and human populations, so their hosts have not yet adapted to the virus. This virus has also lost a function of the "nef" gene that is present in most SIVs. For non-pathogenic SIV variants, "nef" suppresses T cell activation through the CD3 marker. "Nef"'s function in non-pathogenic forms of SIV is to downregulate expression of inflammatory cytokines, MHC-1, and signals that affect T cell trafficking. In HIV-1 and SIVcpz, "nef" does not inhibit T-cell activation and it has lost this function. Without this function, T cell depletion is more likely, leading to immunodeficiency. Three groups of HIV-1 have been identified on the basis of differences in the envelope ("env") region: M, N, and O. Group M is the most prevalent and is subdivided into eight subtypes (or clades), based on the whole genome, which are geographically distinct. The most prevalent are subtypes B (found mainly in North America and Europe), A and D (found mainly in Africa), and C (found mainly in Africa and Asia); these subtypes form branches in the phylogenetic tree representing the lineage of the M group of HIV-1. Co-infection with distinct subtypes gives rise to circulating recombinant forms (CRFs). In 2000, the last year in which an analysis of global subtype prevalence was made, 47.2% of infections worldwide were of subtype C, 26.7% were of subtype A/CRF02_AG, 12.3% were of subtype B, 5.3% were of subtype D, 3.2% were of CRF_AE, and the remaining 5.3% were composed of other subtypes and CRFs. Most HIV-1 research is focused on subtype B; few laboratories focus on the other subtypes. The existence of a fourth group, "P", has been hypothesised based on a virus isolated in 2009. The strain is apparently derived from gorilla SIV (SIVgor), first isolated from western lowland gorillas in 2006. HIV-2's closest relative is SIVsm, a strain of SIV found in sooty mangabees. 
Since HIV-1 is derived from SIVcpz, and HIV-2 from SIVsm, the genetic sequence of HIV-2 is only partially homologous to HIV-1 and more closely resembles that of SIVsm. Many HIV-positive people are unaware that they are infected with the virus. For example, in 2001 less than 1% of the sexually active urban population in Africa had been tested, and this proportion is even lower in rural populations. Furthermore, in 2001 only 0.5% of pregnant women attending urban health facilities were counselled, tested or received their test results. Again, this proportion is even lower in rural health facilities. Since donors may therefore be unaware of their infection, donor blood and blood products used in medicine and medical research are routinely screened for HIV. HIV-1 testing is initially done using an enzyme-linked immunosorbent assay (ELISA) to detect antibodies to HIV-1. Specimens with a non-reactive result from the initial ELISA are considered HIV-negative, unless new exposure to an infected partner or partner of unknown HIV status has occurred. Specimens with a reactive ELISA result are retested in duplicate. If the result of either duplicate test is reactive, the specimen is reported as repeatedly reactive and undergoes confirmatory testing with a more specific supplemental test (e.g., a polymerase chain reaction (PCR), western blot or, less commonly, an immunofluorescence assay (IFA)). Only specimens that are repeatedly reactive by ELISA and positive by IFA or PCR or reactive by western blot are considered HIV-positive and indicative of HIV infection. Specimens that are repeatedly ELISA-reactive occasionally provide an indeterminate western blot result, which may be either an incomplete antibody response to HIV in an infected person or nonspecific reactions in an uninfected person. Although IFA can be used to confirm infection in these ambiguous cases, this assay is not widely used. In general, a second specimen should be collected more than a month later and retested for persons with indeterminate western blot results. Although much less commonly available, nucleic acid testing (e.g., viral RNA or proviral DNA amplification methods) can also help diagnosis in certain situations. In addition, a few tested specimens might provide inconclusive results because of a low-quantity specimen. In these situations, a second specimen is collected and tested for HIV infection. Modern HIV testing is extremely accurate when the window period is taken into consideration. A single screening test is correct more than 99% of the time. The chance of a false-positive result in a standard two-step testing protocol is estimated to be about 1 in 250,000 in a low-risk population. Testing post-exposure is recommended immediately and then at six weeks, three months, and six months. The latest recommendations of the US Centers for Disease Control and Prevention (CDC) show that HIV testing must start with an immunoassay combination test for HIV-1 and HIV-2 antibodies and p24 antigen. A negative result rules out HIV exposure, while a positive one must be followed by an HIV-1/2 antibody differentiation immunoassay to detect which antibodies are present, giving rise to four possible scenarios.
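As an illustrative aside, the older screening-and-confirmation sequence described above (initial ELISA, duplicate retest, then a supplemental test such as a western blot, IFA or PCR) is essentially a small decision procedure. The sketch below restates it in Python; it is a simplified reading of the workflow as summarised in this section, not a clinical algorithm, and all names and fields are invented for the example.

```python
from dataclasses import dataclass

# Simplified sketch of the ELISA screening-and-confirmation workflow described
# above. All names are invented for illustration; real laboratory algorithms
# (e.g. the newer antigen/antibody combination approach) have further branches.

@dataclass
class Specimen:
    elisa_initial: bool                   # reactive on the initial ELISA?
    elisa_repeat: tuple = (False, False)  # duplicate retest results
    supplemental_positive: bool = False   # western blot / IFA / PCR result

def classify(s: Specimen) -> str:
    if not s.elisa_initial:
        return "HIV-negative (retest later only if a new exposure is suspected)"
    if not any(s.elisa_repeat):
        return "not repeatedly reactive; reported as negative"
    # Repeatedly reactive: confirm with a more specific supplemental test.
    if s.supplemental_positive:
        return "HIV-positive (repeatedly reactive ELISA confirmed by supplemental test)"
    return "indeterminate; collect a second specimen after more than a month and retest"

print(classify(Specimen(elisa_initial=True, elisa_repeat=(True, False),
                        supplemental_positive=True)))
```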
HIV/AIDS research includes all medical research that attempts to prevent, treat, or cure HIV/AIDS, as well as fundamental research about the nature of HIV as an infectious agent and AIDS as the disease caused by HIV. Many governments and research institutions participate in HIV/AIDS research. This research includes behavioral health interventions, such as research into sex education, and drug development, such as research into microbicides for sexually transmitted diseases, HIV vaccines, and anti-retroviral drugs. Other medical research areas include the topics of pre-exposure prophylaxis, post-exposure prophylaxis, circumcision and HIV, and accelerated aging effects. The management of HIV/AIDS normally includes the use of multiple antiretroviral drugs. In many parts of the world, HIV has become a chronic condition in which progression to AIDS is increasingly rare. HIV latency, and the consequent viral reservoir in CD4+ T cells, dendritic cells, as well as macrophages, is the main barrier to eradication of the virus. It is important to note that although HIV is highly virulent, transmission does not occur through sex when an HIV-positive person has a consistently undetectable viral load (<50 copies/ml) due to anti-retroviral treatment. Previously it was said the chance of transmission was "very low" or "negligible" (the "Swiss Statement"). However, following multiple studies, it is now clear that the chance of passing on HIV through sex is effectively zero where the HIV-positive person has a consistently undetectable viral load; this is known as U=U, "Undetectable=Untransmittable", also phrased as "can't pass it on". The studies demonstrating U=U are: Opposites Attract, PARTNER 1, PARTNER 2 (for male-male couples) and HPTN052 (for heterosexual couples) when "the partner living with HIV had a durably suppressed viral load." In these studies, couples where one partner was HIV-positive and one partner was HIV-negative were enrolled and regular HIV testing completed. In total, from the four studies, 4,097 couples were enrolled over four continents and 151,880 acts of condomless sex were reported; there were zero phylogenetically linked transmissions of HIV where the positive partner had an undetectable viral load. Following this, the U=U consensus statement advocating the use of "zero risk" was signed by hundreds of individuals and organisations, including the US CDC, the British HIV Association and "The Lancet" medical journal. The importance of the final results of the PARTNER 2 study was described by the medical director of the Terrence Higgins Trust as "impossible to overstate," while lead author Alison Rodger declared that the message that "undetectable viral load makes HIV untransmittable ... can help end the HIV pandemic by preventing HIV transmission". The authors summarised their findings in "The Lancet". This result is consistent with the conclusion presented by Anthony S. Fauci, the Director of the National Institute of Allergy and Infectious Diseases for the U.S. National Institutes of Health, and his team in a viewpoint published in the "Journal of the American Medical Association", that U=U is an effective HIV prevention method when an undetectable viral load is maintained. Genital herpes (HSV-2) reactivation in those infected with the virus has an associated increase in CCR-5 enriched CD4+ T cells as well as inflammatory dendritic cells in the submucosa of the genital skin. Tropism of HIV for CCR-5 positive cells explains the two- to threefold increase in HIV acquisition among persons with genital herpes. Daily antiviral (e.g. acyclovir) medication does not reduce the sub-clinical post-reactivation inflammation and therefore does not confer reduced risk of HIV acquisition.
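To see quantitatively why zero linked transmissions across so many reported acts supports the "effectively zero" wording above, one can apply the standard rule of three for zero observed events: with no events in n trials, an approximate 95% upper confidence bound on the per-event probability is 3/n. Treating the roughly 151,880 condomless acts quoted earlier as independent trials is a simplifying assumption layered on the pooled figures, not a calculation reported by the studies themselves:

```python
# Rule-of-three illustration for zero observed events: an approximate 95% upper
# confidence bound on the per-act transmission probability is 3 / n.
# The act count is the pooled figure quoted above; independence is assumed.

condomless_acts = 151_880
upper_bound_per_act = 3 / condomless_acts

print(f"Approximate 95% upper bound on per-act risk: {upper_bound_per_act:.1e}")
# Roughly 2e-05, i.e. at most about one transmission per 50,000 acts even in the
# statistically worst case consistent with observing zero linked transmissions.
```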
The first news story on "an exotic new disease" appeared May 18, 1981, in the gay newspaper "New York Native". AIDS was first clinically observed in 1981 in the United States. The initial cases were a cluster of injection drug users and gay men with no known cause of impaired immunity who showed symptoms of "Pneumocystis" pneumonia (PCP or PJP, the latter term recognizing that the causative agent is now called "Pneumocystis jirovecii"), a rare opportunistic infection that was known to occur in people with very compromised immune systems. Soon thereafter, additional gay men developed a previously rare skin cancer called Kaposi's sarcoma (KS). Many more cases of PJP and KS emerged, alerting the U.S. Centers for Disease Control and Prevention (CDC), and a CDC task force was formed to monitor the outbreak. The earliest retrospectively described case of AIDS is believed to have been in Norway beginning in 1966. In the beginning, the CDC did not have an official name for the disease, often referring to it by way of the diseases that were associated with it, for example, lymphadenopathy, the disease after which the discoverers of HIV originally named the virus. They also used "Kaposi's Sarcoma and Opportunistic Infections", the name by which a task force had been set up in 1981. In the general press, the term "GRID", which stood for gay-related immune deficiency, had been coined. The CDC, in search of a name and looking at the infected communities, coined "the 4H disease", as it seemed to single out homosexuals, heroin users, hemophiliacs, and Haitians. However, after determining that AIDS was not isolated to the gay community, it was realized that the term GRID was misleading, and "AIDS" was introduced at a meeting in July 1982. By September 1982 the CDC started using the name AIDS. In 1983, two separate research groups, led by the American Robert Gallo and the French investigator Luc Montagnier, independently declared that a novel retrovirus may have been infecting AIDS patients, and published their findings in the same issue of the journal "Science". Gallo claimed that a virus his group had isolated from a person with AIDS was strikingly similar in shape to other human T-lymphotropic viruses (HTLVs) his group had been the first to isolate. Gallo admitted in 1987 that the virus he claimed to have discovered in 1984 was in reality a virus sent to him from France the year before. Gallo's group called their newly isolated virus HTLV-III. Montagnier's group isolated a virus from a patient presenting with swelling of the lymph nodes of the neck and physical weakness, two classic symptoms of primary HIV infection. Contradicting the report from Gallo's group, Montagnier and his colleagues showed that core proteins of this virus were immunologically different from those of HTLV-I. Montagnier's group named their isolated virus lymphadenopathy-associated virus (LAV). As these two viruses turned out to be the same, in 1986 LAV and HTLV-III were renamed HIV. Another group working contemporaneously with the Montagnier and Gallo groups was that of Dr Jay Levy at the University of California, San Francisco. He independently discovered the AIDS virus in 1983 and named it the AIDS-associated retrovirus (ARV). This virus was very different from the virus reported by the Montagnier and Gallo groups. The ARV strains indicated, for the first time, the heterogeneity of HIV isolates and several of these remain classic examples of the AIDS virus found in the United States.
Both HIV-1 and HIV-2 are believed to have originated in non-human primates in West-central Africa, and are believed to have transferred to humans (a process known as zoonosis) in the early 20th century. HIV-1 appears to have originated in southern Cameroon through the evolution of SIVcpz, a simian immunodeficiency virus (SIV) that infects wild chimpanzees (HIV-1 descends from the SIVcpz endemic in the chimpanzee subspecies "Pan troglodytes troglodytes"). The closest relative of HIV-2 is SIVsmm, a virus of the sooty mangabey ("Cercocebus atys atys"), an Old World monkey living in littoral West Africa (from southern Senegal to western Côte d'Ivoire). New World monkeys such as the owl monkey are resistant to HIV-1 infection, possibly because of a genomic fusion of two viral resistance genes. HIV-1 is thought to have jumped the species barrier on at least three separate occasions, giving rise to the three groups of the virus, M, N, and O. There is evidence that humans who participate in bushmeat activities, either as hunters or as bushmeat vendors, commonly acquire SIV. However, SIV is a weak virus, and it is typically suppressed by the human immune system within weeks of infection. It is thought that several transmissions of the virus from individual to individual in quick succession are necessary to allow it enough time to mutate into HIV. Furthermore, due to its relatively low person-to-person transmission rate, it can only spread throughout the population in the presence of one or more high-risk transmission channels, which are thought to have been absent in Africa prior to the 20th century. Specific proposed high-risk transmission channels, allowing the virus to adapt to humans and spread throughout the society, depend on the proposed timing of the animal-to-human crossing. Genetic studies of the virus suggest that the most recent common ancestor of the HIV-1 M group dates back to circa 1910. Proponents of this dating link the HIV epidemic with the emergence of colonialism and growth of large colonial African cities, leading to social changes, including different patterns of sexual contact (especially multiple, concurrent partnerships), the spread of prostitution, and the concomitant high frequency of genital ulcer diseases (such as syphilis) in nascent colonial cities. While transmission rates of HIV during vaginal intercourse are typically low, they are increased manyfold if one of the partners suffers from a sexually transmitted infection resulting in genital ulcers. Early 1900s colonial cities were notable due to their high prevalence of prostitution and genital ulcers to the degree that as of 1928 as many as 45% of female residents of eastern Leopoldville were thought to have been prostitutes and as of 1933 around 15% of all residents of the same city were infected by one of the forms of syphilis. The earliest, well-documented case of HIV in a human dates back to 1959 in the Belgian Congo. The virus may have been present in the United States as early as the mid-to-late 1950s, as a sixteen-year-old male presented with symptoms in 1966 and died in 1969. An alternative view—unsupported by evidence—holds that unsafe medical practices in Africa during years following World War II, such as unsterile reuse of single-use syringes during mass vaccination, antibiotic, and anti-malaria treatment campaigns, were the initial vector that allowed the virus to adapt to humans and spread.
https://en.wikipedia.org/wiki?curid=14170
Mesopotamia Mesopotamia is a historical region of Western Asia situated within the Tigris–Euphrates river system, in the northern part of the Fertile Crescent, in the modern day roughly corresponding to most of Iraq, Kuwait, the eastern parts of Syria, Southeastern Turkey, and regions along the Turkish–Syrian and Iran–Iraq borders. The Sumerians and Akkadians (including Assyrians and Babylonians) dominated Mesopotamia from the beginning of written history to the fall of Babylon in 539 BC, when it was conquered by the Achaemenid Empire. It fell to Alexander the Great in 332 BC, and after his death, it became part of the Greek Seleucid Empire. Around 150 BC, Mesopotamia was under the control of the Parthian Empire. Mesopotamia became a battleground between the Romans and Parthians, with western parts of Mesopotamia coming under ephemeral Roman control. In AD 226, the eastern regions of Mesopotamia fell to the Sassanid Persians. The division of Mesopotamia between the Roman (Byzantine from AD 395) and Sassanid Empires lasted until the 7th-century Muslim conquest of Persia from the Sasanian Empire and the Muslim conquest of the Levant from the Byzantines. A number of primarily neo-Assyrian and Christian native Mesopotamian states existed between the 1st century BC and 3rd century AD, including Adiabene, Osroene, and Hatra. Mesopotamia is the site of the earliest developments of the Neolithic Revolution from around 10,000 BC. It has been identified as having "inspired some of the most important developments in human history, including the invention of the wheel, the planting of the first cereal crops, and the development of cursive script, mathematics, astronomy, and agriculture". The regional toponym "Mesopotamia" ('[land] between rivers') comes from the ancient Greek root words "mesos" ('middle') and "potamos" ('river') and translates to '(land) between rivers'. It is used throughout the Greek Septuagint to translate the Hebrew and Aramaic equivalent "Naharaim". An even earlier Greek usage of the name "Mesopotamia" is evident from "The Anabasis of Alexander", which was written in the late 2nd century AD but specifically refers to sources from the time of Alexander the Great. In the "Anabasis", Mesopotamia was used to designate the land east of the Euphrates in north Syria. The Aramaic term corresponded to a similar geographical concept. Later, the term Mesopotamia was more generally applied to all the lands between the Euphrates and the Tigris, thereby incorporating not only parts of Syria but also almost all of Iraq and southeastern Turkey. The neighbouring steppes to the west of the Euphrates and the western part of the Zagros Mountains are also often included under the wider term Mesopotamia. A further distinction is usually made between Northern or Upper Mesopotamia and Southern or Lower Mesopotamia. Upper Mesopotamia, also known as the "Jazira", is the area between the Euphrates and the Tigris from their sources down to Baghdad. Lower Mesopotamia is the area from Baghdad to the Persian Gulf and includes Kuwait and parts of western Iran. In modern academic usage, the term "Mesopotamia" often also has a chronological connotation. It is usually used to designate the area until the Muslim conquests, with names like "Syria", "Jazira", and "Iraq" being used to describe the region after that date. It has been argued that these later euphemisms are Eurocentric terms attributed to the region in the midst of various 19th-century Western encroachments.
Mesopotamia encompasses the land between the Euphrates and Tigris rivers, both of which have their headwaters in the Taurus Mountains. Both rivers are fed by numerous tributaries, and the entire river system drains a vast mountainous region. Overland routes in Mesopotamia usually follow the Euphrates because the banks of the Tigris are frequently steep and difficult. The climate of the region is semi-arid with a vast desert expanse in the north which gives way to a region of marshes, lagoons, mud flats, and reed banks in the south. In the extreme south, the Euphrates and the Tigris unite and empty into the Persian Gulf. The arid environment ranges from the northern areas of rain-fed agriculture to the south, where irrigation is essential if a surplus of energy returned on energy invested (EROEI) is to be obtained. This irrigation is aided by a high water table and by melting snows from the high peaks of the northern Zagros Mountains and from the Armenian Highlands, the source of the Tigris and Euphrates Rivers that give the region its name. The usefulness of irrigation depends upon the ability to mobilize sufficient labor for the construction and maintenance of canals, and this, from the earliest period, has assisted the development of urban settlements and centralized systems of political authority. Agriculture throughout the region has been supplemented by nomadic pastoralism, where tent-dwelling nomads herded sheep and goats (and later camels) from the river pastures in the dry summer months, out into seasonal grazing lands on the desert fringe in the wet winter season. The area is generally lacking in building stone, precious metals and timber, and so historically has relied upon long-distance trade of agricultural products to secure these items from outlying areas. In the marshlands to the south of the area, a complex water-borne fishing culture has existed since prehistoric times, and has added to the cultural mix. Periodic breakdowns in the cultural system have occurred for a number of reasons. The demand for labor has from time to time led to population increases that push the limits of the ecological carrying capacity, and should a period of climatic instability ensue, collapsing central government and declining populations can occur. Alternatively, military vulnerability to invasion from marginal hill tribes or nomadic pastoralists has led to periods of trade collapse and neglect of irrigation systems. Equally, centripetal tendencies amongst city-states have meant that central authority over the whole region, when imposed, has tended to be ephemeral, and localism has fragmented power into tribal or smaller regional units. These trends have continued to the present day in Iraq. The pre-history of the Ancient Near East begins in the Lower Paleolithic period. Writing emerged with a pictographic script in the Uruk IV period (c. 4th millennium BC), and the documented record of actual historical events, and the ancient history of lower Mesopotamia, commenced in the mid-third millennium BC with cuneiform records of early dynastic kings. This entire history ends with either the arrival of the Achaemenid Empire in the late 6th century BC, or with the Muslim conquest and the establishment of the Caliphate in the late 7th century AD, from which point the region came to be known as Iraq. In the long span of this period, Mesopotamia housed some of the world's most ancient, highly developed, and socially complex states.
The region was home to one of the four riverine civilizations where writing was invented, along with the Nile valley in Ancient Egypt, the Indus Valley Civilization in the Indian subcontinent, and the Yellow River in Ancient China. Mesopotamia housed historically important cities such as Uruk, Nippur, Nineveh, Assur and Babylon, as well as major territorial states such as the city of Eridu, the Akkadian kingdoms, the Third Dynasty of Ur, and the various Assyrian empires. Some of the important historical Mesopotamian leaders were Ur-Nammu (king of Ur), Sargon of Akkad (who established the Akkadian Empire), Hammurabi (who established the Old Babylonian state), and Ashur-uballit II and Tiglath-Pileser I (who established the Assyrian Empire). Scientists analysed DNA from the 8,000-year-old remains of early farmers found at an ancient graveyard in Germany. They compared the genetic signatures to those of modern populations and found similarities with the DNA of people living in today's Turkey and Iraq. The earliest language written in Mesopotamia was Sumerian, an agglutinative language isolate. Along with Sumerian, Semitic languages were also spoken in early Mesopotamia. Subartuan, a language of the Zagros perhaps related to the Hurro-Urartian language family, is attested in personal names, in names of rivers and mountains, and in various crafts. Akkadian came to be the dominant language during the Akkadian Empire and the Assyrian empires, but Sumerian was retained for administrative, religious, literary and scientific purposes. Different varieties of Akkadian were used until the end of the Neo-Babylonian period. Old Aramaic, which had already become common in Mesopotamia, then became the official provincial administration language of first the Neo-Assyrian Empire, and then the Achaemenid Empire: the official lect is called Imperial Aramaic. Akkadian fell into disuse, but both it and Sumerian were still used in temples for some centuries. The last Akkadian texts date from the late 1st century AD. Early in Mesopotamia's history (around the mid-4th millennium BC), cuneiform was invented for the Sumerian language. Cuneiform literally means "wedge-shaped", due to the triangular tip of the stylus used for impressing signs on wet clay. The standardized form of each cuneiform sign appears to have been developed from pictograms. The earliest texts (7 archaic tablets) come from the É, a temple dedicated to the goddess Inanna at Uruk, from a building labeled as Temple C by its excavators. The early logographic system of cuneiform script took many years to master. Thus, only a limited number of individuals were hired as scribes to be trained in its use. It was not until the widespread adoption of a syllabic script under Sargon's rule that significant portions of the Mesopotamian population became literate. Massive archives of texts were recovered from the archaeological contexts of Old Babylonian scribal schools, through which literacy was disseminated. During the third millennium BC, there developed a very intimate cultural symbiosis between the Sumerian and the Akkadian language users, which included widespread bilingualism. The influence of Sumerian on Akkadian (and vice versa) is evident in all areas, from lexical borrowing on a massive scale, to syntactic, morphological, and phonological convergence. This has prompted scholars to refer to Sumerian and Akkadian in the third millennium as a sprachbund.
Akkadian gradually replaced Sumerian as the spoken language of Mesopotamia somewhere around the turn of the 3rd and the 2nd millennium BC (the exact dating being a matter of debate), but Sumerian continued to be used as a sacred, ceremonial, literary, and scientific language in Mesopotamia until the 1st century AD. Libraries were extant in towns and temples during the Babylonian Empire. An old Sumerian proverb averred that "he who would excel in the school of the scribes must rise with the dawn." Women as well as men learned to read and write, and for the Semitic Babylonians, this involved knowledge of the extinct Sumerian language, and a complicated and extensive syllabary. A considerable amount of Babylonian literature was translated from Sumerian originals, and the language of religion and law long continued to be the old agglutinative language of Sumer. Vocabularies, grammars, and interlinear translations were compiled for the use of students, as well as commentaries on the older texts and explanations of obscure words and phrases. The characters of the syllabary were all arranged and named, and elaborate lists were drawn up. Many Babylonian literary works are still studied today. One of the most famous of these was the Epic of Gilgamesh, in twelve books, translated from the original Sumerian by a certain Sîn-lēqi-unninni, and arranged upon an astronomical principle. Each division contains the story of a single adventure in the career of Gilgamesh. The whole story is a composite product, although it is probable that some of the stories are artificially attached to the central figure. Mesopotamian mathematics and science was based on a sexagesimal (base 60) numeral system. This is the source of the 60-minute hour, the 24-hour day, and the 360-degree circle. The Sumerian calendar was based on the seven-day week. This form of mathematics was instrumental in early map-making. The Babylonians also had theorems on how to measure the area of several shapes and solids. They measured the circumference of a circle as three times the diameter and the area as one-twelfth the square of the circumference, which would be correct if π were fixed at 3. The volume of a cylinder was taken as the product of the area of the base and the height; however, the volume of the frustum of a cone or a square pyramid was incorrectly taken as the product of the height and half the sum of the bases. Also, there was a recent discovery in which a tablet used π as 25/8 (3.125, instead of 3.14159...). The Babylonians are also known for the Babylonian mile, which was a measure of distance equal to about seven modern miles (11 km). This measurement for distances eventually was converted to a time-mile used for measuring the travel of the Sun, therefore representing time. From Sumerian times, temple priesthoods had attempted to associate current events with certain positions of the planets and stars. This continued to Assyrian times, when Limmu lists were created as a year-by-year association of events with planetary positions, which, when they have survived to the present day, allow accurate associations of relative with absolute dating for establishing the history of Mesopotamia. The Babylonian astronomers were very adept at mathematics and could predict eclipses and solstices. Scholars thought that everything had some purpose in astronomy. Most of these related to religion and omens. Mesopotamian astronomers worked out a 12-month calendar based on the cycles of the moon.
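Returning briefly to the circle rules above, they can be checked directly against the modern formulas. The short Python sketch below is an illustration added here, not part of the historical record; the radius is an arbitrary example value, and the sketch compares the Babylonian rules (circumference three times the diameter, area one-twelfth of the squared circumference) and the 25/8 tablet value with the modern constants.

```python
import math

# Compare the Babylonian circle rules described above with the modern formulas.
# C = 3 * d and A = C**2 / 12 are the modern formulas with pi replaced by 3;
# 25/8 (= 3.125) is the closer approximation from the later tablet.
# The radius is an arbitrary example value.

r = 10.0
d = 2 * r

babylonian_circumference = 3 * d                      # C = 3d
babylonian_area = babylonian_circumference**2 / 12    # A = C^2 / 12 = 3 * r^2
tablet_pi = 25 / 8

print("circumference:", babylonian_circumference, "vs modern", 2 * math.pi * r)
print("area:         ", babylonian_area, "vs modern", math.pi * r**2)
print("pi estimates: ", 3, "and", tablet_pi, "vs modern", math.pi)

# Since C^2 / 12 = (pi * d)**2 / 12 = (pi / 3) * pi * r**2, the area rule agrees
# with the modern formula exactly when pi is replaced by 3, which is why the text
# says it "would be correct if pi were fixed at 3".
```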
They divided the year into two seasons: summer and winter. The origins of astronomy as well as astrology date from this time. During the 8th and 7th centuries BC, Babylonian astronomers developed a new approach to astronomy. They began studying philosophy dealing with the ideal nature of the early universe and began employing an internal logic within their predictive planetary systems. This was an important contribution to astronomy and the philosophy of science and some scholars have thus referred to this new approach as the first scientific revolution. This new approach to astronomy was adopted and further developed in Greek and Hellenistic astronomy. In Seleucid and Parthian times, the astronomical reports were thoroughly scientific; how much earlier their advanced knowledge and methods were developed is uncertain. The Babylonian development of methods for predicting the motions of the planets is considered to be a major episode in the history of astronomy. The only Greek-Babylonian astronomer known to have supported a heliocentric model of planetary motion was Seleucus of Seleucia (b. 190 BC). Seleucus is known from the writings of Plutarch. He supported Aristarchus of Samos' heliocentric theory where the Earth rotated around its own axis which in turn revolved around the Sun. According to Plutarch, Seleucus even proved the heliocentric system, but it is not known what arguments he used (except that he correctly theorized on tides as a result of Moon's attraction). Babylonian astronomy served as the basis for much of Greek, classical Indian, Sassanian, Byzantine, Syrian, medieval Islamic, Central Asian, and Western European astronomy. The oldest Babylonian texts on medicine date back to the Old Babylonian period in the first half of the 2nd millennium BC. The most extensive Babylonian medical text, however, is the "Diagnostic Handbook" written by the "ummânū", or chief scholar, Esagil-kin-apli of Borsippa, during the reign of the Babylonian king Adad-apla-iddina (1069-1046 BC). Along with contemporary Egyptian medicine, the Babylonians introduced the concepts of diagnosis, prognosis, physical examination, enemas, and prescriptions. In addition, the "Diagnostic Handbook" introduced the methods of therapy and aetiology and the use of empiricism, logic, and rationality in diagnosis, prognosis and therapy. The text contains a list of medical symptoms and often detailed empirical observations along with logical rules used in combining observed symptoms on the body of a patient with its diagnosis and prognosis. The symptoms and diseases of a patient were treated through therapeutic means such as bandages, creams and pills. If a patient could not be cured physically, the Babylonian physicians often relied on exorcism to cleanse the patient from any curses. Esagil-kin-apli's "Diagnostic Handbook" was based on a logical set of axioms and assumptions, including the modern view that through the examination and inspection of the symptoms of a patient, it is possible to determine the patient's disease, its aetiology, its future development, and the chances of the patient's recovery. Esagil-kin-apli discovered a variety of illnesses and diseases and described their symptoms in his "Diagnostic Handbook". These include the symptoms for many varieties of epilepsy and related ailments along with their diagnosis and prognosis. Mesopotamian people invented many technologies including metal and copper-working, glass and lamp making, textile weaving, flood control, water storage, and irrigation. 
They were also one of the first Bronze Age societies in the world. They developed from copper, bronze, and gold on to iron. Palaces were decorated with hundreds of kilograms of these very expensive metals. Also, copper, bronze, and iron were used for armor as well as for different weapons such as swords, daggers, spears, and maces. According to a recent hypothesis, the Archimedes' screw may have been used by Sennacherib, King of Assyria, for the water systems at the Hanging Gardens of Babylon and Nineveh in the 7th century BC, although mainstream scholarship holds it to be a Greek invention of later times. Later, during the Parthian or Sasanian periods, the Baghdad Battery, which may have been the world's first battery, was created in Mesopotamia. Ancient Mesopotamian religion was the first recorded. Mesopotamians believed that the world was a flat disc, surrounded by a huge, holed space, and above that, heaven. They also believed that water was everywhere, the top, bottom and sides, and that the universe was born from this enormous sea. In addition, Mesopotamian religion was polytheistic. Although the beliefs described above were held in common among Mesopotamians, there were also regional variations. The Sumerian word for universe is an-ki, which refers to the god An and the goddess Ki. Their son was Enlil, the air god. They believed that Enlil was the most powerful god. He was the chief god of the pantheon. The Sumerians also posed philosophical questions, such as: Who are we?, Where are we?, How did we get here?. They attributed answers to these questions to explanations provided by their gods. The numerous civilizations of the area influenced the Abrahamic religions, especially the Hebrew Bible; its cultural values and literary influence are especially evident in the Book of Genesis. Giorgio Buccellati believes that the origins of philosophy can be traced back to early Mesopotamian wisdom, which embodied certain philosophies of life, particularly ethics, in the forms of dialectic, dialogues, epic poetry, folklore, hymns, lyrics, prose works, and proverbs. Babylonian reason and rationality developed beyond empirical observation. The earliest form of logic was developed by the Babylonians, notably in the rigorous nonergodic nature of their social systems. Babylonian thought was axiomatic and is comparable to the "ordinary logic" described by John Maynard Keynes. Babylonian thought was also based on an open-systems ontology which is compatible with ergodic axioms. Logic was employed to some extent in Babylonian astronomy and medicine. Babylonian thought had a considerable influence on early Ancient Greek and Hellenistic philosophy. In particular, the Babylonian text "Dialogue of Pessimism" contains similarities to the agonistic thought of the Sophists, the Heraclitean doctrine of dialectic, and the dialogs of Plato, as well as a precursor to the Socratic method. The Ionian philosopher Thales was influenced by Babylonian cosmological ideas. Ancient Mesopotamians had ceremonies each month. The theme of the rituals and festivals for each month was determined by at least six important factors: Some songs were written for the gods but many were written to describe important events. Although music and songs amused kings, they were also enjoyed by ordinary people who liked to sing and dance in their homes or in the marketplaces. Songs were sung to children who passed them on to their children. 
Thus songs were passed on through many generations as an oral tradition until writing was more universal. These songs provided a means of passing on through the centuries highly important information about historical events. The Oud (Arabic: العود) is a small, stringed musical instrument used by the Mesopotamians. The oldest pictorial record of the Oud dates back to the Uruk period in Southern Mesopotamia over 5000 years ago. It is on a cylinder seal currently housed at the British Museum and acquired by Dr. Dominique Collon. The image depicts a female crouching with her instruments upon a boat, playing right-handed. This instrument appears hundreds of times throughout Mesopotamian history and again in ancient Egypt from the 18th dynasty onwards in long- and short-neck varieties. The oud is regarded as a precursor to the European lute. Its name is derived from the Arabic word العود al-‘ūd 'the wood', which is probably the name of the tree from which the oud was made. (The Arabic name, with the definite article, is the source of the word 'lute'.) Hunting was popular among Assyrian kings. Boxing and wrestling feature frequently in art, and some form of polo was probably popular, with men sitting on the shoulders of other men rather than on horses. They also played "majore", a game similar to rugby but played with a ball made of wood. They also played a board game similar to senet and backgammon, now known as the "Royal Game of Ur". Mesopotamia, as shown by successive law codes (those of Urukagina, Lipit-Ishtar and Hammurabi), became across its history more and more a patriarchal society, one in which men were far more powerful than women. For example, during the earliest Sumerian period, the "en", or high priest, of male gods was originally a woman, and that of female goddesses a man. Thorkild Jacobsen, as well as many others, has suggested that early Mesopotamian society was ruled by a "council of elders" in which men and women were equally represented, but that over time, as the status of women fell, that of men increased. As for schooling, only royal offspring and sons of the rich and professionals, such as scribes, physicians, and temple administrators, went to school. Most boys were taught their father's trade or were apprenticed out to learn a trade. Girls had to stay home with their mothers to learn housekeeping and cooking, and to look after the younger children. Some children would help with crushing grain or cleaning birds. Unusually for that time in history, women in Mesopotamia had rights. They could own property and, if they had good reason, get a divorce. Hundreds of graves have been excavated in parts of Mesopotamia, revealing information about Mesopotamian burial habits. In the city of Ur, most people were buried in family graves under their houses, along with some possessions. A few have been found wrapped in mats and carpets. Deceased children were put in big "jars" which were placed in the family chapel. Other remains have been found buried in common city graveyards. 17 graves have been found with very precious objects in them. It is assumed that these were royal graves. The rich of various periods have been discovered to have sought burial in Bahrain, identified with Sumerian Dilmun. Irrigated agriculture spread southwards from the Zagros foothills with the Samarra and Hadji Muhammed cultures, from about 5,000 BC.
Sumerian temples functioned as banks and developed the first large-scale system of loans and credit, but the Babylonians developed the earliest system of commercial banking. It was comparable in some ways to modern post-Keynesian economics, but with a more "anything goes" approach. In the early period down to Ur III temples owned up to one third of the available land, declining over time as royal and other private holdings increased in frequency. The word Ensi was used to describe the official who organized the work of all facets of temple agriculture. Villeins are known to have worked most frequently within agriculture, especially in the grounds of temples or palaces. The geography of southern Mesopotamia is such that agriculture is possible only with irrigation and good drainage, a fact which has had a profound effect on the evolution of early Mesopotamian civilization. The need for irrigation led the Sumerians, and later the Akkadians, to build their cities along the Tigris and Euphrates and the branches of these rivers. Major cities, such as Ur and Uruk, took root on tributaries of the Euphrates, while others, notably Lagash, were built on branches of the Tigris. The rivers provided the further benefits of fish (used both for food and fertilizer), reeds, and clay (for building materials). With irrigation, the food supply in Mesopotamia was comparable to the Canadian prairies. The Tigris and Euphrates River valleys form the northeastern portion of the Fertile Crescent, which also included the Jordan River valley and that of the Nile. Although land nearer to the rivers was fertile and good for crops, portions of land farther from the water were dry and largely uninhabitable. This is why the development of irrigation was very important for settlers of Mesopotamia. Other Mesopotamian innovations include the control of water by dams and the use of aqueducts. Early settlers of fertile land in Mesopotamia used wooden plows to soften the soil before planting crops such as barley, onions, grapes, turnips, and apples. Mesopotamian settlers were some of the first people to make beer and wine. As a result of the skill involved in farming in the Mesopotamian, farmers did not depend on slaves to complete farm work for them, but there were some exceptions. There were too many risks involved to make slavery practical (i.e. the escape/mutiny of the slave). Although the rivers sustained life, they also destroyed it by frequent floods that ravaged entire cities. The unpredictable Mesopotamian weather was often hard on farmers; crops were often ruined so backup sources of food such as cows and lambs were also kept. Over time the southernmost parts of Sumerian Mesopotamia suffered from increased salinity of the soils, leading to a slow urban decline and a centring of power in Akkad, further north. The geography of Mesopotamia had a profound impact on the political development of the region. Among the rivers and streams, the Sumerian people built the first cities along with irrigation canals which were separated by vast stretches of open desert or swamp where nomadic tribes roamed. Communication among the isolated cities was difficult and, at times, dangerous. Thus, each Sumerian city became a city-state, independent of the others and protective of its independence. At times one city would try to conquer and unify the region, but such efforts were resisted and failed for centuries. As a result, the political history of Sumer is one of almost constant warfare. 
Eventually Sumer was unified by Eannatum, but the unification was tenuous and failed to last as the Akkadians conquered Sumeria in 2331 BC only a generation later. The Akkadian Empire was the first successful empire to last beyond a generation and see the peaceful succession of kings. The empire was relatively short-lived, as the Babylonians conquered them within only a few generations. The Mesopotamians believed their kings and queens were descended from the City of Gods, but, unlike the ancient Egyptians, they never believed their kings were real gods. Most kings named themselves “king of the universe” or “great king”. Another common name was “shepherd”, as kings had to look after their people. When Assyria grew into an empire, it was divided into smaller parts, called provinces. Each of these were named after their main cities, like Nineveh, Samaria, Damascus, and Arpad. They all had their own governor who had to make sure everyone paid their taxes. Governors also had to call up soldiers to war and supply workers when a temple was built. He was also responsible for enforcing the laws. In this way, it was easier to keep control of a large empire. Although Babylon was quite a small state in the Sumerian, it grew tremendously throughout the time of Hammurabi's rule. He was known as "the lawmaker", and soon Babylon became one of the main cities in Mesopotamia. It was later called Babylonia, which meant "the gateway of the gods." It also became one of history's greatest centers of learning. With the end of the Uruk phase, walled cities grew and many isolated Ubaid villages were abandoned indicating a rise in communal violence. An early king Lugalbanda was supposed to have built the white walls around the city. As city-states began to grow, their spheres of influence overlapped, creating arguments between other city-states, especially over land and canals. These arguments were recorded in tablets several hundreds of years before any major war—the first recording of a war occurred around 3200 BC but was not common until about 2500 BC. An Early Dynastic II king (Ensi) of Uruk in Sumer, Gilgamesh (c. 2600 BC), was commended for military exploits against Humbaba guardian of the Cedar Mountain, and was later celebrated in many later poems and songs in which he was claimed to be two-thirds god and only one-third human. The later Stele of the Vultures at the end of the Early Dynastic III period (2600–2350 BC), commemorating the victory of Eannatum of Lagash over the neighbouring rival city of Umma is the oldest monument in the world that celebrates a massacre. From this point forwards, warfare was incorporated into the Mesopotamian political system. At times a neutral city may act as an arbitrator for the two rival cities. This helped to form unions between cities, leading to regional states. When empires were created, they went to war more with foreign countries. King Sargon, for example, conquered all the cities of Sumer, some cities in Mari, and then went to war with northern Syria. Many Assyrian and Babylonian palace walls were decorated with the pictures of the successful fights and the enemy either desperately escaping or hiding amongst reeds. City-states of Mesopotamia created the first law codes, drawn from legal precedence and decisions made by kings. The codes of Urukagina and Lipit Ishtar have been found. The most renowned of these was that of Hammurabi, as mentioned above, who was posthumously famous for his set of laws, the Code of Hammurabi (created c. 
1780 BC), which is one of the earliest sets of laws found and one of the best-preserved examples of this type of document from ancient Mesopotamia. He codified over 200 laws for Mesopotamia. Examination of the laws shows a progressive weakening of the rights of women, and increasing severity in the treatment of slaves. The art of Mesopotamia rivalled that of Ancient Egypt as the most grand, sophisticated and elaborate in western Eurasia from the 4th millennium BC until the Persian Achaemenid Empire conquered the region in the 6th century BC. The main emphasis was on various, very durable, forms of sculpture in stone and clay; little painting has survived, but what has suggests that painting was mainly used for geometrical and plant-based decorative schemes, though most sculpture was also painted. The Protoliterate period, dominated by Uruk, saw the production of sophisticated works like the Warka Vase and cylinder seals. The Guennol Lioness is an outstanding small limestone figure from Elam of about 3000–2800 BC, part man and part lion. A little later there are a number of figures of large-eyed priests and worshippers, mostly in alabaster and up to a foot high, who attended temple cult images of the deity, but very few of these have survived. Sculptures from the Sumerian and Akkadian period generally had large, staring eyes, and long beards on the men. Many masterpieces have also been found at the Royal Cemetery at Ur (c. 2650 BC), including the two figures of a "Ram in a Thicket", the "Copper Bull" and a bull's head on one of the Lyres of Ur. From the many subsequent periods before the ascendancy of the Neo-Assyrian Empire, Mesopotamian art survives in a number of forms: cylinder seals, relatively small figures in the round, and reliefs of various sizes, including cheap plaques of moulded pottery for the home, some religious and some apparently not. The Burney Relief is an unusually elaborate and relatively large (20 x 15 inches) terracotta plaque of a naked winged goddess with the feet of a bird of prey, and attendant owls and lions. It comes from the 18th or 19th century BC, and may also be moulded. Stone stelae, votive offerings, or ones probably commemorating victories and showing feasts, are also found from temples, which unlike more official ones lack inscriptions that would explain them; the fragmentary Stele of the Vultures is an early example of the inscribed type, and the Assyrian Black Obelisk of Shalmaneser III a large and solid late one. The conquest of the whole of Mesopotamia and much surrounding territory by the Assyrians created a larger and wealthier state than the region had known before, and very grandiose art in palaces and public places, no doubt partly intended to match the splendour of the art of the neighbouring Egyptian empire. The Assyrians developed a style of extremely large schemes of very finely detailed narrative low reliefs in stone for palaces, with scenes of war or hunting; the British Museum has an outstanding collection. They produced very little sculpture in the round, except for colossal guardian figures, often the human-headed lamassu, which are sculpted in high relief on two sides of a rectangular block, with the heads effectively in the round (and also five legs, so that both views seem complete). Even before dominating the region they had continued the cylinder seal tradition with designs which are often exceptionally energetic and refined.
The study of ancient Mesopotamian architecture is based on available archaeological evidence, pictorial representation of buildings, and texts on building practices. Scholarly literature usually concentrates on temples, palaces, city walls and gates, and other monumental buildings, but occasionally one finds works on residential architecture as well. Archaeological surface surveys have also allowed for the study of urban form in early Mesopotamian cities. Brick is the dominant material, as it was freely available locally, whereas building stone had to be brought a considerable distance to most cities. The ziggurat is the most distinctive form, and cities often had large gateways, of which the Ishtar Gate from Neo-Babylonian Babylon, decorated with beasts in polychrome brick, is the most famous, now largely in the Pergamon Museum in Berlin. The most notable architectural remains from early Mesopotamia are the temple complexes at Uruk from the 4th millennium BC, temples and palaces from the Early Dynastic period sites in the Diyala River valley such as Khafajah and Tell Asmar, the Third Dynasty of Ur remains at Nippur (Sanctuary of Enlil) and Ur (Sanctuary of Nanna), Middle Bronze Age remains at the Syrian-Turkish sites of Ebla, Mari, Alalakh, Aleppo and Kultepe, Late Bronze Age palaces at Bogazkoy (Hattusha), Ugarit, Ashur and Nuzi, and Iron Age palaces and temples at Assyrian (Kalhu/Nimrud, Khorsabad, Nineveh), Babylonian (Babylon), Urartian (Tushpa/Van, Kalesi, Cavustepe, Ayanis, Armavir, Erebuni, Bastam) and Neo-Hittite sites (Karkamis, Tell Halaf, Karatepe). Houses are mostly known from Old Babylonian remains at Nippur and Ur. Among the textual sources on building construction and associated rituals, Gudea's cylinders from the late 3rd millennium BC are notable, as are the Assyrian and Babylonian royal inscriptions from the Iron Age.
https://en.wikipedia.org/wiki?curid=20189
Miranda Richardson Miranda Jane Richardson (born 3 March 1958) is an English actress. She made her film debut playing Ruth Ellis in "Dance with a Stranger" (1985) and went on to receive Academy Award nominations for "Damage" (1992) and "Tom & Viv" (1994). A seven-time BAFTA Award nominee, she won the BAFTA Award for Best Actress in a Supporting Role for "Damage". She has also been nominated for seven Golden Globe Awards, winning twice for "Enchanted April" (1992) and the TV film "Fatherland" (1994). In 1996, one critic asserted that she is "the greatest actress of our time in any medium" after she appeared in "Orlando" at the Edinburgh Festival. Richardson began her career in 1979 and made her West End debut in the 1981 play "Moving", before being nominated for the 1987 Olivier Award for Best Actress for "A Lie of the Mind". Her television credits include "Blackadder" (1986–1989), "A Dance to the Music of Time" (1997), "Merlin" (1998), "The Lost Prince" (2003), "Gideon's Daughter" (2006), the sitcom "The Life and Times of Vivienne Vyle" (2007), and "Rubicon" (2010). She was nominated for the 2015 Primetime Emmy Award for Outstanding Narrator for "Operation Orangutan". Her other films include "Empire of the Sun" (1987), "The Crying Game" (1992), "The Apostle" (1997), "Sleepy Hollow" (1999), "Chicken Run" (2000), "The Hours" (2002), "Spider" (2002), "Harry Potter and the Goblet of Fire" (2005), "The Young Victoria" (2009), "Made in Dagenham" (2010), "Belle" (2013), and "Stronger" (2017). Richardson was born in Southport, England, to Marian Georgina (née Townsend), a housewife, and William Alan Richardson, a marketing executive, and was their second daughter. Richardson enrolled at the Bristol Old Vic Theatre School, where she studied alongside Daniel Day-Lewis and Jenny Seagrove, having started out with juvenile performances in "Cinderella" and "Lord Arthur Savile's Crime" at the Southport Dramatic Club. Richardson has enjoyed a successful and extensive theatre career, first joining Manchester Library Theatre in 1979 as an assistant stage manager, followed by a number of appearances in repertory theatre. Her London stage debut was in "Moving" at the Queen's Theatre in 1981. She found recognition in the West End for a series of stage performances, ultimately receiving an Olivier Award nomination for her performance in "A Lie of the Mind", and, in 1996, one critic asserted that she is "the greatest actress of our time in any medium" after she appeared in "Orlando" at the Edinburgh Festival. She returned to the London stage in May 2009 to play the lead role in Wallace Shawn's new play, "Grasses of a Thousand Colours" at the Royal Court Theatre. Richardson has said that she prefers new works rather than the classics because of the history which goes with them. In 1985, Richardson made her film debut as Ruth Ellis, the last woman to be hanged in the United Kingdom, in the biographical drama "Dance with a Stranger". Around the same time, Richardson played a comedic Queen Elizabeth I, aka Queenie, in the British television comedy "Blackadder II". Following "Dance with a Stranger", Richardson turned down numerous parts in which her character was unstable or disreputable, including the Glenn Close role in "Fatal Attraction". In this period, she appeared in "Empire of the Sun" (1987). In an episode of the TV series "The Storyteller" ("The Three Ravens", 1988), she played a witch. 
Meanwhile, she returned in guest roles in one episode each of "Blackadder the Third" (1987) and "Blackadder Goes Forth" (1989). She returned to play Queenie in the Christmas special "Blackadder's Christmas Carol" (1988) and, later, in a special edition for the millennium. Her portrayal of a troubled theatre-goer in "Secret Friends" (BBC 2, 1990) was described as "a miniature tour de force... Miranda Richardson's finest hour, all in ten minutes" ("The Sunday Times"). Other television roles include Pamela Flitton in "A Dance to the Music of Time" (1997), Miss Gilchrist in "St. Ives" (1998), Bettina the interior decorator in "Absolutely Fabulous", Queen Elspeth, Snow White's stepmother (2001), and Queen Mary in "The Lost Prince" (2003). Richardson has appeared in a number of high-profile supporting roles in film, including Vanessa Bell in "The Hours", Lady Van Tassel in "Sleepy Hollow" and Patsy Carpenter in "The Evening Star". She also won acclaim for her performances in "The Crying Game" and "Enchanted April", for which she won a Golden Globe. She received Academy Award nominations for her performances in "Damage" and "Tom & Viv". Her film credits also include "Kansas City" (1996), "The Apostle" (1997) and "Wah-Wah" (2005). In 2002, she performed a triple role in the thriller "Spider". Richardson also appeared as Queen Rosalind of Denmark in "The Prince and Me" and as the ballet mistress Madame Giry in the film version of the Andrew Lloyd Webber musical "The Phantom of the Opera" (2004). In 2005, she appeared in the role of Rita Skeeter, the toxic "Daily Prophet" journalist, in "Harry Potter and the Goblet of Fire". She also voiced Corky in "The Adventures of Bottle Top Bill and His Best Friend Corky" (2005), an Australian animated series for children. In 2006, she appeared in "Gideon's Daughter". She played Mrs. Claus in the film "Fred Claus" (2007). Richardson appeared in the BBC sitcom "The Life and Times of Vivienne Vyle". She appeared as a guest in "A Taste of My Life". In 2008, Richardson was cast in a leading role in the original AMC pilot "Rubicon". She plays Katherine Rhumor, a New York socialite who finds herself drawn into the central intrigue of a think tank after the death of her husband. Additionally, she played Labour politician Barbara Castle in the British film "Made in Dagenham". Richardson was cast as Queen Ulla in "Maleficent", where she was to play the titular character's aunt, but her role was cut from the film during post-production. In 2015, she played Sybil Birling in Helen Edmundson's BBC One adaptation of J. B. Priestley's "An Inspector Calls". She is interested in falconry.
https://en.wikipedia.org/wiki?curid=20192
Mecklenburg Mecklenburg (, Low German: "Mękel(n)borg" ) is a historical region in northern Germany comprising the western and larger part of the federal-state Mecklenburg-Vorpommern. The largest cities of the region are Rostock, Schwerin, Neubrandenburg, Wismar and Güstrow. The name Mecklenburg derives from a castle named "Mikilenburg" (Old Saxon: "big castle", hence its translation into New Latin and ), located between the cities of Schwerin and Wismar. In Slavic language it was known as "Veligrad", which also means "big castle". It was the ancestral seat of the House of Mecklenburg; for a time the area was divided into Mecklenburg-Schwerin and Mecklenburg-Strelitz among the same dynasty. Linguistically Mecklenburgers retain and use many features of Low German vocabulary or phonology. The adjective for the region is "Mecklenburgian" (); inhabitants are called Mecklenburgians (). Mecklenburg is known for its mostly flat countryside. Much of the terrain is boggy, with ponds, marshes and fields as common features, with small forests interspersed. The terrain changes as one moves north towards the Baltic Sea. Under the peat of Mecklenburg are sometimes found deposits of ancient lava flows. Traditionally, at least in the countryside, the stone from these flows is cut and used in the construction of homes, often in joint use with cement, brick and wood, forming a unique look to the exterior of country houses. Mecklenburg has productive farming, but the land is most suitable for grazing for livestock. Mecklenburg is the site of many prehistoric dolmen tombs. Its earliest organised inhabitants may have had Celtic origins. By no later than 100 BC the area had been populated by pre-Christian Germanic peoples. The traditional symbol of Mecklenburg, the grinning steer's head (Low German: "Ossenkopp", lit.: 'oxen's head', with "osse" being a synonym for steer and bull in Middle Low German), with an attached hide, and a crown above, may have originated from this period. It represents what early peoples would have worn, i.e. a steers's head as a helmet, with the hide hanging down the back to protect the neck from the sun, and overall as a way to instill fear in the enemy. From the 7th through the 12th centuries, Germanic Mecklenburg was ruled by Western Slavic overlords, newly arrived from the steppes. Among them were the Obotrites and other tribes that Frankish sources referred to as "Wends". The 11th-century founder of the Mecklenburger dynasty of Dukes and later Grand Dukes, which lasted until 1918, was Nyklot of the Obotrites. In the late 12th century, Henry the Lion, Duke of the Saxons, reconquered the region, took oaths from its local lords, and Christianized its people, in a precursor to the Northern Crusades. From the 12th to 14th centuries, large numbers of Germans and Flemings settled the area (Ostsiedlung), importing German law and improved agricultural techniques. The Wends who survived all warfare and devastation of the centuries before, including invasions of and expeditions into Saxony, Denmark and Liutizic areas as well as internal conflicts, were assimilated in the centuries thereafter. However, elements of certain names and words used in Mecklenburg speak to the lingering Slavic influence. An example would be the city of Schwerin, which was originally called "Zuarin" in Slavic. Another example is the town of Bresegard, the 'gard' portion of the town name deriving from the Slavic word 'grad', meaning city or town. 
Since the 12th century, the territory remained stable and relatively independent of its neighbours, one of the few German territories for which this is true. During the Reformation the Duke in Schwerin converted to Protestantism, and the Duchy of Mecklenburg followed in 1549. Like many German territories, Mecklenburg was sometimes partitioned and re-partitioned among different members of the ruling dynasty. In 1621 it was divided into the two duchies of Mecklenburg-Schwerin and Mecklenburg-Güstrow. With the extinction of the Güstrow line in 1701, the Güstrow lands were redivided, part going to the Duke of Mecklenburg-Schwerin, and part going to the new line of Mecklenburg-Strelitz. In 1815, the two Mecklenburgian duchies were raised to Grand Duchies, the Grand Duchy of Mecklenburg-Schwerin and the Grand Duchy of Mecklenburg-Strelitz, and subsequently existed separately as such in Germany under enlightened but absolute rule (constitutions being granted on the eve of World War I) until the revolution of 1918. Life in Mecklenburg could be quite harsh. Practices such as having to ask for permission from the Grand Duke to get married, or having to apply for permission to emigrate, lingered late into the history of Mecklenburg (until 1918), long after such practices had been abandoned in other German areas. Even as late as the latter half of the 19th century the Grand Duke personally owned half of the countryside. The last Duke abdicated in 1918, as monarchies fell throughout Europe. The Duke's ruling house reigned in Mecklenburg uninterrupted (except for two years) from its incorporation into the Holy Roman Empire until 1918. From 1918 to 1933, the duchies were free states in the Weimar Republic. Mecklenburg has traditionally been one of the poorer German regions. The reasons for this may be varied, but one factor stands out: agriculturally the land is poor and cannot produce at the same level as other parts of Germany. The two Mecklenburgs attempted to exist as independent states after 1918, but eventually failed as their dependence on the rest of the German lands became apparent. After three centuries of partition, Mecklenburg was united on 1 January 1934 by the German government. During World War II the Wehrmacht assigned Mecklenburg and Pomerania to Wehrkreis II under the command of "General der Infanterie" Werner Kienitz, with the headquarters at Stettin. Mecklenburg was assigned to an area headquartered at Schwerin, which was responsible for military units in Schwerin, Rostock, Parchim and Neustrelitz. After World War II, the Soviet occupation authorities in eastern Germany merged Mecklenburg with the smaller neighbouring region of Western Pomerania (German "Vorpommern") to form the state of Mecklenburg-Vorpommern. Mecklenburg contributed about two-thirds of the geographical size of the new state and the majority of its population. The new state also became the temporary or permanent home for many refugees expelled from former German territories seized by the Soviet Union and Poland after the war. The Soviets changed the name from "Mecklenburg-Western Pomerania" to "Mecklenburg" in 1947. In 1952, the East German government ended the independent existence of Mecklenburg, creating three districts ("Bezirke") out of its territory: Rostock, Schwerin and Neubrandenburg. During German reunification in 1990, the state of Mecklenburg-Vorpommern was revived, and is now one of the 16 states of the Federal Republic of Germany. 
The House of Mecklenburg was founded by Niklot, prince of the Obotrites, Chizzini and Circipani on the Baltic Sea, who died in 1160. His Christian progeny was recognized as a prince of the Holy Roman Empire in 1170 and as Duke of Mecklenburg on 8 July 1348. On 27 February 1658 the ducal house divided into two branches: Mecklenburg-Schwerin and Mecklenburg-Strelitz. The flags of both Mecklenburg duchies traditionally use the colours blue, yellow and red. The sequence, however, changed more than once over the past 300 years. In 1813 the duchies used yellow-red-blue; blue-yellow-red was ordered on 23 December 1863 for Schwerin and on 4 January 1864 for Strelitz. Mecklenburg-Schwerin, however, used white instead of yellow for flags at sea, by a law of 24 March 1855. Siebmachers Wappenbuch accordingly gives blue-white-red for Schwerin and blue-yellow-red for Strelitz. According to this source, the grand ducal house of Schwerin used a flag of 3.75 to 5.625 m with the middle arms on a white quadrant (1.75 m) in the middle. The middle arms show the shield of Mecklenburg as arranged in the 17th century: the county of Schwerin in the middle and, in the quartering, Mecklenburg (bull's head with hide), Rostock (griffin), the principality of Schwerin (griffin surmounting a green rectangle), Ratzeburg (cross surmounted by a crown), Stargard (arm with hand holding a ring) and Wenden (bull's head). The shield is supported by a bull and a griffin and surmounted by a royal crown. According to Siebmachers, the dukes of Strelitz used the blue-yellow-red flag with just the (oval) shield of Mecklenburg in the yellow band. Ströhl, in 1897, shows another arrangement: the grand duke of Mecklenburg-Schwerin flies a flag (4:5) with the figures from the shield of arms. The former Schwerin standard with the white quadrant is now ascribed to the grand dukes of Strelitz. Ströhl mentions a flag for the grand ducal house, established by law of 23 December 1863, with the middle arms in the yellow band, and he mentions a special sea flag, the same but with a white middle band. 'Berühmte Fahnen' furthermore shows a standard for Grand Duchess Alexandra of Mecklenburg-Schwerin, Princess of Hanover (1882–1963), showing her shield and that of Mecklenburg joined by the Order of the Wendish Crown in a white oval. At sea the yellow band in her flag was, of course, white. According to this source, the princes (dukes) of Mecklenburg-Schwerin had their own standard, showing the griffin of Rostock. A flat landscape, Mecklenburg is known for its farmlands – which produce quinoa, wheat, barley and maize – and its husbandry. Notably, the region is known for its Rinderkennzeichnungs- und Rindfleischetikettierungsüberwachungsaufgabenübertragungsgesetz (a cattle-labelling law famous for the length of its name) and for the Mecklenburger horse breed. Recently, with the upheavals and environmental disruptions created by globalisation, farmers in the region have been concerned about the introduction of potentially invasive species such as the greater rhea and the Asian hornet. 
Mecklenburg has seen a large increase in tourism since German reunification in 1990, particularly at its beaches and seaside resorts on the Baltic Sea ("German Riviera", Warnemünde, Boltenhagen, Heiligendamm, Kühlungsborn, Rerik and others), the Mecklenburg Lakeland ("Mecklenburgische Seenplatte") and the Mecklenburg Switzerland ("Mecklenburgische Schweiz") with their pristine nature, the old Hanseatic towns of Rostock, Greifswald, Stralsund and Wismar (the latter two being World Heritage Sites), well known for their medieval Brick Gothic buildings, and the former royal residences of Schwerin, Güstrow, Ludwigslust and Neustrelitz.
https://en.wikipedia.org/wiki?curid=20193
Management science Management science (MS) is the broad interdisciplinary study of problem solving and decision making in human organizations, with strong links to management, economics, business, engineering, management consulting, and other fields. It uses various scientific, research-based principles, strategies, and analytical methods, including mathematical modeling, statistics and numerical algorithms, to improve an organization's ability to enact rational and accurate management decisions by arriving at optimal or near-optimal solutions to complex decision problems. Management science helps businesses to achieve goals using various scientific methods. The field was initially an outgrowth of applied mathematics, where early challenges were problems relating to the optimization of systems which could be modeled linearly, i.e., determining the optima (maximum value of profit, assembly line performance, crop yield, bandwidth, etc., or minimum of loss, risk, costs, etc.) of some objective function. Today, management science encompasses any organizational activity for which a problem is structured in mathematical form to generate managerially relevant insights. Management science is concerned with a number of different areas of study: one is developing and applying models and concepts that may prove useful in helping to illuminate management issues and solve managerial problems. The models used can often be represented mathematically, but sometimes computer-based, visual or verbal representations are used as well or instead. Another area is designing and developing new and better models of organizational excellence. Management science research can be done on three levels. The management scientist's mandate is to use rational, systematic, science-based techniques to inform and improve decisions of all kinds. The techniques of management science are not restricted to business applications but may be applied to military and medical settings, public administration, charitable groups, political groups or community groups. Its origins can be traced to operations research, which became influential during World War II when the Allied forces recruited scientists of various disciplines to assist with military operations. In these early applications, the scientists used simple mathematical models to make efficient use of limited technologies and resources. The application of these models to the corporate sector became known as management science. In 1967 Stafford Beer characterized the field of management science as "the business use of operations research". Management science involves and overlaps with many other fields of study. Applications of management science are abundant in industries and organizations such as airlines, manufacturing companies, service organizations, military branches, and government. Management science has contributed insights and solutions to a vast range of problems and issues. Management science is also concerned with so-called "soft-operational analysis", which concerns methods for strategic planning, strategic decision support, and problem structuring methods (PSM). At this level of abstraction, mathematical modeling and simulation will not suffice. Therefore, since the late 20th century, new "non-quantified" modelling methods have been developed, including morphological analysis and various forms of influence diagrams.
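As an illustration of the kind of linearly modelled optimization problem described above, the following sketch maximizes the profit of a two-product mix subject to labour and material limits, using SciPy's linear-programming routine (assumed to be available). The products, coefficients and limits are invented purely for the example and are not drawn from any particular management-science text.

# Minimal product-mix sketch: maximize 40*x + 30*y subject to resource limits.
# All numbers are hypothetical; linprog minimizes, so the profits are negated.
from scipy.optimize import linprog

profit = [-40, -30]            # negated per-unit profits of products A and B
A_ub = [[2, 1],                # labour hours used per unit of A and B
        [1, 1]]                # material units used per unit of A and B
b_ub = [100, 80]               # available labour hours and material units

res = linprog(profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)         # optimal production plan (20, 60) and maximum profit 2600

The objective function here is exactly the "maximum value of profit" case mentioned above; swapping the sign of the objective turns it into a cost-minimization problem of the same form.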
https://en.wikipedia.org/wiki?curid=20200
Musical notation Music notation or musical notation is any system used to visually represent aurally perceived music played with instruments or sung by the human voice through the use of written, printed, or otherwise-produced symbols, including notation for durations of absence of sound such as rests. Types and methods of notation have varied between cultures and throughout history, and much information about ancient music notation is fragmentary. Even in the same time period, such as in the 2010s, different styles of music and different cultures use different music notation methods; for example, for professional classical music performers, sheet music using staves and noteheads is the most common way of notating music, but for professional country music session musicians, the Nashville Number System is the main method. The symbols used include ancient symbols and modern symbols made upon any media such as symbols cut into stone, made in clay tablets, made using a pen on papyrus or parchment or manuscript paper; printed using a printing press (c. 1400s), a computer printer (c. 1980s) or other printing or modern copying technology. Although many ancient cultures used symbols to represent melodies and rhythms, none of them were particularly comprehensive, and this has limited today's understanding of their music. The seeds of what would eventually become modern western notation were sown in medieval Europe, starting with the Catholic Church's goal for ecclesiastical uniformity. The church began notating plainchant melodies so that the same chants could be used throughout the church. Music notation developed further in the Renaissance and Baroque music eras. In the classical period (1750–1820) and the Romantic music era (1820–1900), notation continued to develop as new musical instrument technologies were developed. In the contemporary classical music of the 20th and 21st century, music notation has continued to develop, with the introduction of graphical notation by some modern composers and the use, since the 1980s, of computer-based score writer programs for notating music. Music notation has been adapted to many kinds of music, including classical music, popular music, and traditional music. The earliest form of musical notation can be found in a cuneiform tablet that was created at Nippur, in Babylonia (today's Iraq), in about 1400 BC. The tablet represents fragmentary instructions for performing music, that the music was composed in harmonies of thirds, and that it was written using a diatonic scale. A tablet from about 1250 BC shows a more developed form of notation. Although the interpretation of the notation system is still controversial, it is clear that the notation indicates the names of strings on a lyre, the tuning of which is described in other tablets. Although they are fragmentary, these tablets represent the earliest notated melodies found anywhere in the world. Ancient Greek musical notation was in use from at least the 6th century BC until approximately the 4th century AD; several complete compositions and fragments of compositions using this notation survive. The notation consists of symbols placed above text syllables. An example of a complete composition is the Seikilos epitaph, which has been variously dated between the 2nd century BC to the 2nd century AD. Three hymns by Mesomedes of Crete exist in manuscript. The Delphic Hymns, dated to the 2nd century BC, also use this notation, but they are not completely preserved. 
Ancient Greek notation appears to have fallen out of use around the time of the decline of the Western Roman Empire. Byzantine music once included music for court ceremonies, but has only survived as vocal church music within various Orthodox traditions of monodic (monophonic) chant written down in Byzantine round notation (see Macarie's "anastasimatarion" with the Greek text translated into Romanian and transliterated into Cyrillic script). Since the 6th century, Greek theoretical categories (melos, genos, harmonia, systema) played a key role in understanding and transmitting Byzantine music; the tradition of Damascus in particular had a strong impact on the pre-Islamic Near East, comparable to the impact coming from Persian music. The earliest evidence consists of papyrus fragments of Greek tropologia. These fragments just present the hymn text following a modal signature or key (like "" for "echos plagios protos" or "" for "echos devteros"). Unlike Western notation, the Byzantine neumes used since the 10th century were always related to modal steps (same modal degree, one degree lower, two degrees higher, etc.) in relation to such a clef or modal key (modal signatures). Originally this key or the incipit of a common melody was enough to indicate a certain melodic model given within the echos. Apart from ekphonetic notation, used only in lectionaries to indicate formulas used during scriptural lessons, melodic notation developed no earlier than the 9th and 10th centuries, when a theta, oxeia or diple was written under a certain syllable of the text wherever a longer melisma was expected. This primitive form was called "theta" or "diple notation". Today, one can study the evolution of this notation within Greek monastic chant books like those of the sticherarion and the heirmologion (Chartres notation was used rather on Mount Athos and in Constantinople, Coislin notation within the Patriarchates of Jerusalem and Alexandria), while there was another gestic notation originally used for the asmatikon (choir book) and kontakarion (book of the soloist or monophonaris) of the Constantinopolitan cathedral rite. The earliest books which have survived are "kondakars" in Slavonic translation, which already show a notation system of their own, known as Kondakarian notation. Like the Greek alphabet, notational signs are ordered left to right (though the direction could be adapted, as in certain Syriac manuscripts). The question of rhythm was entirely based on cheironomia (the interpretation of so-called great signs which derived from different chant books). These great signs indicated well-known melodic phrases given by gestures of the choirleaders of the cathedral rite. They once existed as part of an oral tradition, developed into Kondakarian notation, and during the 13th century became integrated into Byzantine round notation as a kind of universal notation system. Today the main difference between Western and Eastern neumes is that Eastern notation symbols are "differential" rather than absolute, i.e. they indicate pitch steps (rising, falling or at the same step), and the musicians deduce, from the score and the note they are presently singing, which interval is meant. These step symbols themselves, or better "phonic neumes", resemble brush strokes and are colloquially called "gántzoi" ("hooks") in Modern Greek. Notes as pitch classes or modal keys (usually memorised by modal signatures) are represented in written form only between these neumes (in manuscripts usually written in red ink). 
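Because Eastern neumes are described above as "differential", a melody's absolute scale degrees can be recovered from a starting degree plus the sequence of relative steps by a simple running sum. The following is only an illustrative sketch of that idea, not an actual neume decoder; the numeric step values and the example melody are assumptions made for the illustration.

# Illustrative sketch: reconstruct absolute scale degrees from relative steps.
def degrees_from_steps(start_degree, steps):
    """Return the absolute scale degrees implied by a list of signed steps."""
    degrees = [start_degree]
    for step in steps:               # e.g. +1 = one degree up, -2 = two degrees down
        degrees.append(degrees[-1] + step)
    return degrees

# A melody that rises two degrees, repeats the note, then falls one degree:
print(degrees_from_steps(1, [2, 0, -1]))   # -> [1, 3, 3, 2]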
In modern notation they simply serve as an optional reminder and modal and tempo directions have been added, if necessary. In Papadic notation medial signatures usually meant a temporary change into another echos. The so-called "great signs" were once related to cheironomic signs; according to modern interpretations they are understood as embellishments and microtonal attractions (pitch changes smaller than a semitone), both essential in Byzantine chant. Since Chrysanthos of Madytos there are seven standard note names used for "solfège" ("parallagē") "pá, vú, ghá, dhē, ké, zō, nē", while the older practice still used the four enechemata or intonation formulas of the four echoi given by the modal signatures, the authentic or "kyrioi" in ascending direction, and the plagal or "plagioi" in descending direction (Papadic Octoechos). With exception of "vú and zō" they do roughly correspond to Western solmization syllables as "re, mi, fa, sol, la, si, do". Byzantine music uses the eight natural, non-tempered scales whose elements were identified by "Ēkhoi", "sounds", exclusively, and therefore the absolute pitch of each note may slightly vary each time, depending on the particular "Ēkhos" used. Byzantine notation is still used in many Orthodox Churches. Sometimes cantors also use transcriptions into Western or Kievan staff notation while adding non-notatable embellishment material from memory and "sliding" into the natural scales from experience, but even concerning modern neume editions since the reform of Chrysanthos a lot of details are only known from an oral tradition related to traditional masters and their experience. In 1252, Safi al-Din al-Urmawi developed a form of musical notation, where rhythms were represented by geometric representation. Many subsequent scholars of rhythm have sought to develop graphical geometrical notations. For example, a similar geometric system was published in 1987 by Kjell Gustafson, whose method represents a rhythm as a two-dimensional graph. The scholar and music theorist Isidore of Seville, while writing in the early 7th century, considered that "unless sounds are held by the memory of man, they perish, because they cannot be written down." By the middle of the 9th century, however, a form of neumatic notation began to develop in monasteries in Europe as a mnemonic device for Gregorian chant, using symbols known as neumes; the earliest surviving musical notation of this type is in the "Musica disciplina" of Aurelian of Réôme, from about 850. There are scattered survivals from the Iberian Peninsula before this time, of a type of notation known as Visigothic neumes, but its few surviving fragments have not yet been deciphered. The problem with this notation was that it only showed melodic contours and consequently the music could not be read by someone who did not know the music already. Notation had developed far enough to notate melody, but there was still no system for notating rhythm. A mid-13th-century treatise, "De Mensurabili Musica", explains a set of six rhythmic modes that were in use at the time, although it is not clear how they were formed. These rhythmic modes were all in triple time and rather limited rhythm in chant to six different repeating patterns. This was a flaw seen by German music theorist Franco of Cologne and summarised as part of his treatise "Ars cantus mensurabilis" (the art of measured chant, or mensural notation). He suggested that individual notes could have their own rhythms represented by the shape of the note. 
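The geometric treatments of rhythm mentioned above (Safi al-Din's representation and Gustafson's two-dimensional graph) can be pictured, in a much simplified way, as marks placed on an evenly divided time line. The sketch below is only a generic illustration of that idea, not either author's actual method; the grid size and the example pattern are arbitrary.

# A rhythm pictured as marks on an evenly divided time line (generic illustration).
def rhythm_grid(onsets, slots=8):
    """Render a one-bar rhythm as a text grid; `onsets` lists the occupied slots."""
    return "".join("x" if i in onsets else "." for i in range(slots))

print(rhythm_grid({0, 3, 4, 6}))   # -> "x..xx.x."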
Not until the 14th century did something like the present system of fixed note lengths arise. The use of regular measures (bars) became commonplace by the end of the 17th century. The founder of what is now considered the standard music staff was Guido d'Arezzo, an Italian Benedictine monk who lived from about 991 until after 1033. He taught the use of solmization syllables based on a hymn to Saint John the Baptist, which begins Ut Queant Laxis and was written by the Lombard historian Paul the Deacon. Guido used the first syllable of each line, Ut, Re, Mi, Fa, Sol and La, to read notated music in terms of hexachords; they were not note names, and each could, depending on context, be applied to any note. In the 17th century, Ut was changed in most countries except France to the easily singable, open syllable Do, said to have been taken from the name of the Italian theorist Giovanni Battista Doni, though it may instead derive from the Latin word "Dominus", meaning "the Lord". Catholic monks developed the first forms of modern European musical notation in order to standardize liturgy throughout the worldwide Church, and an enormous body of religious music has been composed for it through the ages. This led directly to the emergence and development of European classical music, and its many derivatives. The Baroque style, which encompassed music, art, and architecture, was particularly encouraged by the post-Reformation Catholic Church as such forms offered a means of religious expression that was stirring and emotional, intended to stimulate religious fervor. Modern music notation is used by musicians of many different genres throughout the world. The staff acts as a framework upon which pitches are indicated by placing oval noteheads on the staff lines or between the lines. The pitch of the oval musical noteheads can be modified by accidentals. The duration (note length) is shown with different note values, which can be indicated by the notehead being a stemless hollow oval (a whole note or semibreve), a hollow rectangle or stemless hollow oval with one or two vertical lines on either side (a double whole note or breve), a stemmed hollow oval (a half note or minim), or a stemmed solid oval (a quarter note or crotchet), with added flags or beams indicating smaller subdivisions; additional symbols such as dots and ties lengthen the duration of a note. Notation is read from left to right, which makes setting music for right-to-left scripts difficult. A staff (or stave, in British English) of written music generally begins with a clef, which indicates the position of one particular note on the staff. The treble clef or G clef was originally a letter G, and it identifies the second line up on the five-line staff as the note G above middle C. The bass clef or F clef shows the position of the note F below middle C. While the treble and bass clefs are the most widely used, other clefs are used, such as the alto clef (used for viola and alto trombone music) and the tenor clef (used for some cello, tenor trombone, and double bass music). Notes representing a pitch outside of the scope of the five-line staff can be represented using ledger lines, which provide a single note with additional lines and spaces. Some instruments use mainly one clef, such as the violin and flute, which use the treble clef, and the double bass and tuba, which use the bass clef. Some instruments regularly use both clefs, such as piano and pipe organ. 
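The duration arithmetic described above (note values halving, each dot adding half of the value just added, and ties summing durations) can be written directly as fractions of a whole note. The following is a small sketch of that arithmetic only, not of any notation software; the function name and argument convention are my own.

# Sketch of note-length arithmetic, with the whole note as the unit.
from fractions import Fraction

def note_length(base, dots=0):
    """base: 1 = whole, 2 = half, 4 = quarter, 8 = eighth ...; dots: number of dots."""
    value = Fraction(1, base)
    added = value
    for _ in range(dots):
        added /= 2          # each successive dot adds half of the previous addition
        value += added
    return value

print(note_length(2, dots=1))              # dotted half note -> 3/4 of a whole note
print(note_length(4) + note_length(8))     # quarter tied to an eighth -> 3/8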
Following the clef, the key signature on a staff indicates the key of the piece or song by specifying that certain notes are flat or sharp throughout the piece, unless otherwise indicated with accidentals added before certain notes. When a sharp is placed before a note, this makes that note one semitone higher. When a flat is placed before a note, this makes that note one semitone lower. Double sharps and double flats are less common, but they are used. A double sharp is placed before a note to make it two semitones higher. A double flat is placed before a note to make it two semitones lower. A natural sign placed before a note renders that note in its "natural" form, which means that any sharps or flats applying to that note from the key signature or from accidentals are cancelled. Sometimes a courtesy accidental is used in music where it is not technically required, to remind the musician of what pitch the key signature requires. Following the key signature is the time signature. The time signature typically consists of two numbers, one of the most common being 4/4. The top "4" indicates that there are four beats per measure (also called bar). The bottom "4" indicates that each of those beats is a quarter note. Measures divide the piece into groups of beats, and the time signatures specify those groupings. 4/4 is used so often that it is also called "common time", and it may be indicated with a large C-like symbol rather than numbers. Other common time signatures are 3/4 (three beats per bar, with each beat being a quarter note); 2/4 (two beats per bar, with each beat being a quarter note); 6/8 (six beats per bar, with each beat being an eighth note); and 12/8 (twelve beats per bar, with each beat being an eighth note; in practice, the eighth notes are typically put into four groups of three, making it a compound time signature). Many other time signatures exist as well. Many short classical music pieces from the classical era and songs from traditional music and popular music are in one time signature for much or all of the piece. Music from the Romantic music era and later, particularly contemporary classical music and rock music genres such as progressive rock and the hardcore punk subgenre mathcore, may use mixed meter; songs or pieces change from one meter to another, for example alternating from bar to bar between two different meters. Directions to the player regarding matters such as tempo (e.g., Allegro, Andante, Largo, Vif, Lent, Modérément, Presto, etc.) and dynamics (pianississimo, pianissimo, piano, mezzopiano, mezzoforte, forte, fortissimo, fortississimo, etc.) appear above or below the staff. Terms indicating the musical expression or "feel" of a song or piece are indicated at the beginning of the piece and at any points where the mood changes (e.g., "Slow March", "Fast Swing", "Medium Blues", "Fougueux", "Feierlich", "Gelassen", "Piacevole", "Con slancio", "Majestic", "Hostile", etc.). For vocal music, lyrics are written near the pitches of the melody. For short pauses (breaths), retakes are added (indicated with a ' mark). In music for ensembles, a "score" shows music for all players together, with the staves for the different instruments and/or voices stacked vertically. The conductor uses the score while leading an orchestra, concert band, choir or other large ensemble. Individual performers in an ensemble play from "parts" which contain only the music played by an individual musician. A score can be constructed from a complete set of parts and vice versa. 
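Underlying the key signatures and accidentals described above is simple semitone arithmetic: the key signature supplies a default alteration for each letter name, and an explicit accidental (including a natural) overrides it. The sketch below illustrates this with MIDI-style pitch numbers; the function and data names are invented for the example and are not a standard API.

# Sketch of key-signature and accidental arithmetic on MIDI-style pitch numbers.
ALTER = {"#": +1, "b": -1, "##": +2, "bb": -2, "nat": 0}

def sounding_pitch(natural_pitch, letter, key_signature, accidental=None):
    """key_signature maps letter names (e.g. 'F') to '#' or 'b'."""
    if accidental is not None:                  # an explicit accidental wins
        return natural_pitch + ALTER[accidental]
    return natural_pitch + ALTER.get(key_signature.get(letter, "nat"), 0)

g_major = {"F": "#"}                            # G major: every F is sharpened
print(sounding_pitch(65, "F", g_major))         # F4 (65) -> F#4 (66)
print(sounding_pitch(65, "F", g_major, "nat"))  # a natural sign restores F4 (65)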
The process was laborious and time consuming when parts were hand-copied from the score, but since the development of scorewriter computer software in the 1980s, a score stored electronically can have parts automatically prepared by the program and quickly and inexpensively printed out using a computer printer. A ♭ in music lowers a pitch down one semitone. A ♯ in music raises a note one semitone. For example, a sharp on B would raise it to B♯ while a flat would lower it to B♭. "Jeongganbo" is a unique traditional musical notation system created during the time of Sejong the Great that was the first East Asian system to represent rhythm, pitch, and time. Among various kinds of Korean traditional music, Jeong-gan-bo targets a particular genre, Jeong-ak (). Jeong-gan-bo tells the pitch by writing the pitch's name down in a box called 'jeong-gan' (this is where the name comes from). One jeong-gan is one beat each, and it can be split into two, three or more to hold half beats and quarter beats, and more. This makes it easy for the reader to figure out the beat. Also, there are lots of markings indicating things such as ornaments. Most of these were later created by Ki-su Kim. The Samaveda text (1200 BC – 1000 BC) contains notated melodies, and these are probably the world's oldest surviving ones. The musical notation is written usually immediately above, sometimes within, the line of Samaveda text, either in syllabic or a numerical form depending on the Samavedic "Sakha" (school). The Indian scholar and musical theorist Pingala (c. 200 BC), in his "Chanda Sutra", used marks indicating long and short syllables to indicate meters in Sanskrit poetry. A rock inscription from circa 7th–8th century CE at Kudumiyanmalai, Tamil Nadu contains an early example of a musical notation. It was first identified and published by archaeologist/epigraphist D. R. Bhandarkar. Written in the Pallava-grantha script of the 7th century, it contains 38 horizontal lines of notations inscribed on a rectangular rock face (dimension of around 13 by 14 feet). Each line of the notation contains 64 characters (characters representing musical notes), written in groups of four notes. The basic characters for the seven notes, 'sa ri ga ma pa dha ni', are seen to be suffixed with the vowels a, i, u ,e. For example, in the place of 'sa', any one of 'sa', 'si', 'su' or 'se' is used. Similarly, in place of ri, any one of 'ra', 'ri', 'ru' or 're' is used. Horizontal lines divide the notation into 7 sections. Each section contains 4 to 7 lines of notation, with a title indicating its musical 'mode'. These modes may have been popular atleast from the 6th century CE and were incorporated into the Indian 'raga' system that developed later. But some of the unusual features seen in this notation have been given several non-conclusive interpretations by scholars. In the notation of Indian rāga, a solfege-like system called sargam is used. As in Western solfege, there are names for the seven basic pitches of a major scale (Shadja, Rishabha, Gandhara, Madhyama, Panchama, Dhaivata and Nishada, usually shortened to Sa Re Ga Ma Pa Dha Ni). The tonic of any scale is named Sa, and the dominant Pa. Sa is fixed in any scale, and Pa is fixed at a fifth above it (a Pythagorean fifth rather than an equal-tempered fifth). These two notes are known as achala swar ('fixed notes'). 
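Since Pa is described above as a Pythagorean fifth (frequency ratio 3:2) above Sa rather than an equal-tempered fifth, it is easy to check how small the difference between the two tunings is. A short sketch, assuming the standard definition of a cent as one hundredth of an equal-tempered semitone:

# Compare a Pythagorean fifth (3:2) with an equal-tempered fifth, in cents.
import math

pythagorean_fifth = 3 / 2
equal_tempered_fifth = 2 ** (7 / 12)          # seven equal semitones

cents = 1200 * math.log2(pythagorean_fifth / equal_tempered_fifth)
print(round(cents, 3))                        # ~1.955 cents: the two fifths nearly coincide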
Each of the other five notes, Re, Ga, Ma, Dha and Ni, can take a 'regular' (shuddha) pitch, which is equivalent to its pitch in a standard major scale (thus, shuddha Re, the second degree of the scale, is a whole-step higher than Sa), or an altered pitch, either a half-step above or half-step below the shuddha pitch. Re, Ga, Dha and Ni all have altered partners that are a half-step lower (Komal-"flat") (thus, komal Re is a half-step higher than Sa). Ma has an altered partner that is a half-step higher (teevra-"sharp") (thus, tivra Ma is an augmented fourth above Sa). Re, Ga, Ma, Dha and Ni are called vikrut swar ('movable notes'). In the written system of Indian notation devised by Ravi Shankar, the pitches are represented by Western letters. Capital letters are used for the achala swar, and for the higher variety of all the vikrut swar. Lowercase letters are used for the lower variety of the vikrut swar. Other systems exist for non-twelve-tone equal temperament and non-Western music, such as the Indian "Swaralipi". Znamenny Chant is a singing tradition used in the Russian Orthodox Church which uses a "hook and banner" notation. Znamenny Chant is unison, melismatic liturgical singing that has its own specific notation, called the "stolp" notation. The symbols used in the stolp notation are called ' (, 'hooks') or ' (, 'signs'). Often the names of the signs are used to refer to the stolp notation. Znamenny melodies are part of a system, consisting of Eight Modes (intonation structures; called glasy); the melodies are characterized by fluency and well-balancedness . There exist several types of Znamenny Chant: the so-called "Stolpovoy", "Malyj" (Little) and "Bolshoy" (Great) Znamenny Chant. Ruthenian Chant (Prostopinije) is sometimes considered a sub-division of the Znamenny Chant tradition, with the Muscovite Chant (Znamenny Chant proper) being the second branch of the same musical continuum. Znamenny Chants are not written with notes (the so-called linear notation), but with special signs, called "Znamëna" (Russian for "marks", "banners") or "Kryuki" ("hooks"), as some shapes of these signs resemble hooks. Each sign may include the following components: a large black hook or a black stroke, several smaller black 'points' and 'commas' and lines near the hook or crossing the hook. Some signs may mean only one note, some 2 to 4 notes, and some a whole melody of more than 10 notes with a complicated rhythmic structure. The stolp notation was developed in Kievan Rus' as an East Slavic refinement of the Byzantine neumatic musical notation. The most notable feature of this notation system is that it records transitions of the melody, rather than notes. The signs also represent a mood and a gradation of how this part of melody is to be sung (tempo, strength, devotion, meekness, etc.) Every sign has its own name and also features as a spiritual symbol. For example, there is a specific sign, called "little dove" (Russian: голубчик "(golubchik)"), which represents two rising sounds, but which is also a symbol of the Holy Ghost. Gradually the system became more and more complicated. This system was also ambiguous, so that almost no one, except the most trained and educated singers, could sing an unknown melody at sight. The signs only helped to reproduce the melody, not coding it in an unambiguous way. The earliest known examples of text referring to music in China are inscriptions on musical instruments found in the Tomb of Marquis Yi of Zeng (d. 433 B.C.). 
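The letter scheme attributed above to Ravi Shankar (capitals for Sa, Pa and the higher variety of each movable note, lowercase for the lower variety) can be mapped onto semitone offsets above Sa. The sketch below assumes a plain 12-semitone octave purely for illustration; actual intonation practice is subtler, and the mapping itself is my own simplification rather than a published table.

# Hedged sketch: sargam letters mapped to semitone offsets above Sa.
SARGAM_OFFSETS = {
    "S": 0,             # Sa (fixed)
    "r": 1, "R": 2,     # komal / shuddha Re
    "g": 3, "G": 4,     # komal / shuddha Ga
    "m": 5, "M": 6,     # shuddha / tivra Ma
    "P": 7,             # Pa (fixed)
    "d": 8, "D": 9,     # komal / shuddha Dha
    "n": 10, "N": 11,   # komal / shuddha Ni
}

def phrase_offsets(phrase):
    """Convert a space-separated sargam phrase into semitone offsets from Sa."""
    return [SARGAM_OFFSETS[s] for s in phrase.split()]

print(phrase_offsets("S R G m P D N"))   # shuddha ascent -> [0, 2, 4, 5, 7, 9, 11]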
Sets of 41 chimestones and 65 bells bore lengthy inscriptions concerning pitches, scales, and transposition. The bells still sound the pitches that their inscriptions refer to. Although no notated musical compositions were found, the inscriptions indicate that the system was sufficiently advanced to allow for musical notation. Two systems of pitch nomenclature existed, one for relative pitch and one for absolute pitch. For relative pitch, a solmization system was used. Gongche notation used Chinese characters for the names of the scale. Japanese music is highly diversified, and therefore requires various systems of notation. In Japanese shakuhachi music, for example, glissandos and timbres are often more significant than distinct pitches, whereas taiko notation focuses on discrete strokes. Ryukyuan sanshin music uses kunkunshi, a notation system of kanji with each character corresponding to a finger position on a particular string. Notation plays a relatively minor role in the oral traditions of Indonesia. However, in Java and Bali, several systems were devised beginning at the end of the 19th century, initially for archival purposes. Today the most widespread are cipher notations ("not angka" in the broadest sense) in which the pitches are represented with some subset of the numbers 1 to 7, with 1 corresponding to either highest note of a particular octave, as in Sundanese gamelan, or lowest, as in the kepatihan notation of Javanese gamelan. Notes in the ranges outside the central octave are represented with one or more dots above or below the each number. For the most part, these cipher notations are mainly used to notate the skeletal melody (the balungan) and vocal parts (gerongan), although transcriptions of the elaborating instrument variations are sometimes used for analysis and teaching. Drum parts are notated with a system of symbols largely based on letters representing the vocables used to learn and remember drumming patterns; these symbols are typically laid out in a grid underneath the skeletal melody for a specific or generic piece. The symbols used for drum notation (as well as the vocables represented) are highly variable from place to place and performer to performer. In addition to these current systems, two older notations used a kind of staff: the Solonese script could capture the flexible rhythms of the pesinden with a squiggle on a horizontal staff, while in Yogyakarta a ladder-like vertical staff allowed notation of the balungan by dots and also included important drum strokes. In Bali, there are a few books published of Gamelan gender wayang pieces, employing alphabetical notation in the old Balinese script. Composers and scholars both Indonesian and foreign have also mapped the slendro and pelog tuning systems of gamelan onto the western staff, with and without various symbols for microtones. The Dutch composer Ton de Leeuw also invented a three line staff for his composition "Gending". However, these systems do not enjoy widespread use. In the second half of the twentieth century, Indonesian musicians and scholars extended cipher notation to other oral traditions, and a diatonic scale cipher notation has become common for notating western-related genres (church hymns, popular songs, and so forth). 
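Cipher notations of the kepatihan type, as described above, mark octave displacement with dots above or below a digit. Since plain text has no "above" or "below", the following sketch adopts its own convention (a trailing ' for a dot above and , for a dot below) purely to illustrate the parsing; it is not a standard encoding of the notation.

# Parse cipher-notation tokens such as 3, 5', 6,, into (degree, octave offset).
def parse_cipher(token):
    """Return (scale degree 1-7, octave offset) for tokens like '3', "5'", '6,'."""
    degree = int(token[0])
    octave = token.count("'") - token.count(",")
    return degree, octave

for tok in ["1", "5'", "6,", "2''"]:
    print(tok, "->", parse_cipher(tok))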
Unlike the cipher notation for gamelan music, which uses a "fixed Do" (that is, 1 always corresponds to the same pitch, within the natural variability of gamelan tuning), Indonesian diatonic cipher notation is "moveable-Do" notation, so scores must indicate which pitch corresponds to the number 1 (for example, "1=C"). In pitch bracket notation music is written with melody lines and pitch brackets. Melody lines are like staff lines except they can change pitch by writing pitch brackets on them. Pitch brackets add or subtract scale steps to the melody line. The shape of the bracket (i.e. angle bracket), determines the number of scale steps to add. The direction of the bracket, opening or closing, determines whether to add or subtract scale steps. As a result of the mathematical nature of pitch bracket notation, arithmetic and algebra can be directly applied to the notation. Musical variations can be mathematically generated from their themes. Cipher notation systems assigning Arabic numerals to the major scale degrees have been used at least since the Iberian organ tablatures of the 16th-century and include such exotic adaptations as "Siffernotskrift". The one most widely in use today is the Chinese "Jianpu", discussed in the main article. Numerals can of course also be assigned to different scale systems, as in the Javanese "kepatihan" notation described above. Solfège is a way of assigning syllables to names of the musical scale. In order, they are today: "Do Re Mi Fa Sol La Ti Do" (for the octave). The classic variation is: "Do Re Mi Fa Sol La Si Do". The first Western system of functional names for the musical notes was introduced by Guido of Arezzo (c. 991 – after 1033), using the beginning syllables of the first six musical lines of the Latin hymn Ut queant laxis. The original sequence was "Ut Re Mi Fa Sol La", where each verse started a scale note higher. "Ut" later became "Do". The equivalent syllables used in Indian music are: "Sa Re Ga Ma Pa Dha Ni". See also: solfège, sargam, Kodály hand signs. Tonic sol-fa is a type of notation using the initial letters of solfège. The notes of the 12-tone scale can be written by their letter names A–G, possibly with a trailing sharp or flat symbol, such as A or B. Tablature was first used in the Middle Ages for organ music and later in the Renaissance for lute music. In most lute tablatures, a staff is used, but instead of pitch values, the lines of the staff represent the strings of the instrument. The frets to finger are written on each line, indicated by letters or numbers. Rhythm is written separately with one or another variation of standard note values indicating the duration of the fastest moving part. Few seem to have remarked on the fact that tablature combines in one notation system both the physical and technical requirements of play (the lines and symbols on them and in relation to each other representing the actual performance actions) with the unfolding of the music itself (the lines of tablature taken horizontally represent the actual temporal unfolding of the music). In later periods, lute and guitar music was written with standard notation. Tablature caught interest again in the late 20th century for popular guitar music and other fretted instruments, being easy to transcribe and share over the internet in ASCII format. Websites like OLGA have archives of text-based popular music tablature. Klavarskribo (sometimes shortened to klavar) is a music notation system that was introduced in 1931 by the Dutchman Cornelis Pot. 
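ASCII tablature of the kind shared over the internet, mentioned above, can be generated mechanically: each line of the "staff" is a string of the instrument, and a fret number is written at the moment that string is played. A minimal sketch, with an invented event format and standard guitar string names assumed:

# Render a list of (position, string, fret) events as ASCII guitar tablature.
def ascii_tab(events, length=16, strings=("e", "B", "G", "D", "A", "E")):
    """Strings are listed from the highest-sounding down, as is conventional in tab."""
    lines = {s: ["-"] * length for s in strings}
    for pos, string, fret in events:
        text = str(fret)
        lines[string][pos:pos + len(text)] = list(text)
    return "\n".join(f"{s}|{''.join(lines[s])}|" for s in strings)

print(ascii_tab([(1, "E", 0), (4, "A", 2), (7, "D", 2), (10, "G", 1)]))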
The name means "keyboard writing" in Esperanto. It differs from conventional music notation in a number of ways and is intended to be easily readable. Many klavar readers are from the Netherlands. Some chromatic systems have been created taking advantage of the layout of black and white keys of the standard piano keyboard. The "staff" is most widely referred to as "piano roll", created by extending the black and white piano keys. Over the past three centuries, hundreds of music notation systems have been proposed as alternatives to traditional western music notation. Many of these systems seek to improve upon traditional notation by using a "chromatic staff" in which each of the 12 pitch classes has its own unique place on the staff. Examples are the "Ailler-Brennink" notation, Jacques-Daniel Rochat's Dodeka music notation, Tom Reed's "Twinline" notation, Russell Ambrose's "Ambrose Piano Tabs", Paul Morris' "Clairnote", John Keller's "Express Stave", and José A. Sotorrio's "Bilinear Music Notation". These notation systems do not require the use of standard key signatures, accidentals, or clef signs. They also represent interval relationships more consistently and accurately than traditional notation. The Music Notation Project (formerly known as the Music Notation Modernization Association) has a website with information on many of these notation systems. The term 'graphic notation' refers to the contemporary use of non-traditional symbols and text to convey information about the performance of a piece of music. Practitioners include Christian Wolff, Earle Brown, Anthony Braxton, John Cage, Morton Feldman, Krzysztof Penderecki, Cornelius Cardew, and Roger Reynolds. The book Notations is another example of this kind of notation. Simplified Music Notation is an alternative form of musical notation designed to make sight-reading easier. It is based on classical staff notation, but incorporates sharps and flats into the shape of the note heads. Notes such as double sharps and double flats are written at the pitch they are actually played at, but preceded by symbols called "history signs" that show they have been transposed. Modified Stave Notation (MSN) is an alternative way of notating music for people who cannot easily read ordinary musical notation even if it is enlarged. Parsons code is used to encode music so that it can be easily searched. Braille music is a complete, well developed, and internationally accepted musical notation system that has symbols and notational conventions quite independent of print music notation. It is linear in nature, similar to a printed language and different from the two-dimensional nature of standard printed music notation. To a degree Braille music resembles musical markup languages such as MusicXML or NIFF. In integer notation, or the integer model of pitch, all pitch classes and intervals between pitch classes are designated using the numbers 0 through 11. The standard form of rap notation is the "flow diagram", where rappers line up their lyrics underneath "beat numbers". Hip-hop scholars also make use of the same flow diagrams that rappers use: the books "How to Rap" and "How to Rap 2" extensively use the diagrams to explain rap's triplets, flams, rests, rhyme schemes, runs of rhyme, and breaking rhyme patterns, among other techniques. Similar systems are used by musicologists Adam Krims in his book "Rap Music and the Poetics of Identity" and Kyle Adams in his work on rap's flow. 
As rap usually revolves around a strong 4/4 beat, with certain syllables aligned to the beat, all the notational systems have a similar structure: they all have four beat numbers at the top of the diagram, so that syllables can be written in-line with the beat. Many computer programs have been developed for creating music notation (called "scorewriters" or "music notation software"). Music may also be stored in various digital file formats for purposes other than graphic notation output. According to Philip Tagg and Richard Middleton, musicology and to a degree European-influenced musical practice suffer from a 'notational centricity', a methodology slanted by the characteristics of notation. A variety of 20th- and 21st-century composers have dealt with this problem, either by adapting standard Western musical notation or by using graphic notation. These include George Crumb, Luciano Berio, Krzystof Penderecki, Earl Brown, John Cage, Witold Lutoslawski, and others.
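A flow diagram of the kind described above can be laid out mechanically: four beat columns per bar, with each syllable placed under the beat it falls on. The sketch below uses a fixed column width and ignores finer subdivisions, both simplifications of my own; it is not taken from "How to Rap" or the other sources named above.

# Lay out syllables under four beat numbers, one column per beat.
def flow_diagram(bar, width=10):
    """bar: list of four lists of syllables, one list per beat."""
    header = "".join(str(beat + 1).ljust(width) for beat in range(4))
    row = "".join(" ".join(sylls).ljust(width) for sylls in bar)
    return header + "\n" + row

print(flow_diagram([["one", "two"], ["and"], ["three"], ["four", "more"]]))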
https://en.wikipedia.org/wiki?curid=20201
Meir Kahane Meir David HaKohen Kahane (; ; August 1, 1932 – November 5, 1990) was an Israeli-American ordained Orthodox rabbi, writer, and ultra-nationalist politician who served one term in Israel's Knesset. His legacy continues to influence militant and far-right political groups active today in Israel. Kahane publicized his "Kahanism" ideology, which he claimed was simply Torah Judaism based on "halakha" (Jewish law), through published works, weekly articles, speeches, debates on college campuses and in synagogues throughout the United States, and appearances on various televised programs and radio shows. He was an intense advocate for Jewish causes, such as organizing defense squads and patrols in Jewish neighborhoods and demanding for the Soviet Union to "release its oppressed Jews". He supported violence against those he regarded as enemies of the Jewish people, called for immediate Jewish mass migration to Israel to avoid a potential "Holocaust" in the United States, supported the restriction of Israel's democracy to its Jewish citizens, hoped that Israel would eventually adopt "halakha", and endorsed the annexation of the West Bank and Gaza Strip. Kahane proposed enforcing "halakha" as codified by Maimonides. Non-Jews wishing to dwell in Israel would have three options: remain as "resident strangers" with limited rights, leave Israel and receive compensation for their property, or be forcibly removed without compensation. While serving in the Knesset in the mid-1980s Kahane proposed numerous laws, none of which passed, to emphasize Judaism in public schools, reduce Israel's bureaucracy, forbid sexual relations between non-Jews and Jews, and end cultural meetings between Jewish and Arab students. In 1968, Kahane was one of the co-founders of the Jewish Defense League (JDL) in the United States. In 1971, he co-founded Kach ("Thus"), a new political party in Israel. The same year, he was convicted in New York for conspiracy to manufacture explosives and received a suspended sentence of five years. In 1984, he became a member of the Knesset, when Kach gained its only-ever seat in parliamentary elections. In 1988, after polls showed Kach gaining popularity, the Israeli government banned Kach for being "racist" and "anti-democratic" under the terms of a law that it had just passed. Kahane was assassinated in a New York hotel by an Egyptian-born U.S. citizen in November 1990. Martin David Kahane was born in Brooklyn, New York, in 1932 to an Orthodox Jewish family. His father, Yechezkel (Charles) Kahane, the author of the "Torah Yesharah", studied at Polish and Czech yeshivas, was involved in the Revisionist Zionist movement, and was a close friend of Ze'ev Jabotinsky. As a teenager, Kahane became an ardent admirer of Jabotinsky and Peter Bergson, who were frequent guests in his parents' home. He joined the Betar (Brit Trumpeldor) youth wing of Revisionist Zionism. He was active in protests against Ernest Bevin, the British Foreign Secretary who maintained restrictions on the immigration of Jews, even Nazi death camp survivors, to Palestine after the end of the Second World War. In 1947, Kahane was arrested for throwing eggs and tomatoes at Bevin, who was disembarking at Pier 84 on a visit to New York. A photo of the arrest appeared in the "New York Daily News". In 1954, he became the Mazkir (Secretary) of Greater New York City's 16 Bnei Akiva chapters. 
Kahane's formal education included elementary school at the Yeshiva of Flatbush, and he attended high school at both Abraham Lincoln High School and the Brooklyn Talmudical Academy. Kahane received his rabbinical ordination from the Mir Yeshiva, in Brooklyn, where he was especially admired by the head Rabbi Abraham Kalmanowitz, and he began going by his Hebrew name, Meir. He was fully conversant in the Tanakh (Jewish Bible), the Talmud, the Midrash and Jewish law. Subsequently, Kahane earned a B.A. in Political Science from Brooklyn College in 1954, a Bachelor of Law - LL.B. from New York Law School, and an M.A. in International Relations from New York University. In 1956, Kahane married Libby Blum, with whom he had four children: Tzipporah, Tova, Baruch, and Binyamin. Journalists Michael T. Kaufman and Robert I. Friedman have separately said that Kahane, under the alias of Michael King and while already married, proposed to 21-year old model Gloria Jean D'Argenio (who used the stage name Estelle Donna Evans) in 1966. Kahane allegedly sent a letter to D'Argenio in which he unilaterally ended their relationship. In response, D'Argenio jumped off the Queensboro Bridge and died of her injuries the next day. In 2008, Kahane's wife dismissed the incident as lacking proof. After D'Argenio's death, Kahane started the Estelle Donna Evans Foundation in her name. Kahane claimed D'Argenio had been his former secretary in his failed consulting operation, she had died of terminal cancer, and her “well-to-do” family had endowed the foundation. Robert Friedman reported, “In reality, Kahane used the money to help finance the JDL.” That meant two different things: funding the purchase of supplies for bombings and fattening his own wallet, spending lavishly on trips for himself. In 1958, Kahane became the rabbi of the Howard Beach Jewish Center in Queens, New York City. Although the synagogue was originally Conservative, rather than strictly Orthodox, the board of directors agreed to Kahane's conditions, which included resigning from the Conservative movement's United Synagogue of America, installing a partition separating men and women during prayer, instituting traditional prayers, and maintaining a kosher kitchen. At the Jewish Center, Kahane influenced many of the synagogue's youngsters to adopt a more observant lifestyle, which often troubled parents. He trained Arlo Guthrie for his bar mitzvah. When his contract was not renewed, he soon published an article entitled "End of the Miracle of Howard Beach". That was Kahane's first article in "The Jewish Press", an American Orthodox Jewish weekly for which he would continue to write for the rest of his life. Kahane also used the pen name David Sinai, and the pseudonyms Michael King, David Borac, and Martin Keene. In the late 1950s and the early 1960s, Kahane's life of secrecy and his strong anticommunism landed him a position as a consultant with the FBI. According to his wife, Libby, his assignment was to infiltrate the anticommunist John Birch Society and report his findings to the FBI. At some time in the late 1950s, Kahane assumed the persona of a Gentile, along with the pseudonym Michael King. Kahane began openly expressing his anticommunism. He and Joseph Churba created the July Fourth Movement, which was formed to counteract widespread opposition towards U.S. involvement in the Vietnam War. Subsequently, they coauthored the book "The Jewish Stake in Vietnam", an attempt to convince American Jews of the "evil of Communism". 
The introduction states that, "All Americans have a stake in this grim war against Communism... It is vital that Jews realize the threat to their very survival [should Communism succeed]." Churba had a major falling out with Kahane over the use of paramilitary activities, and they parted ways permanently. Churba went on to pursue his own career, joining the U.S. Air Force, writing many books on the Middle East, and eventually becoming one of Ronald Reagan's consultants. Kahane chose to fight for Jewish rights, and was willing to use extreme measures. He even attempted to acquire and grow biological weapons to use on a Soviet military installation. He began using the phrase "Never Again" and conceived the Jewish Star and fist insignia, a symbol resembling that of the Black Panther Party. However, Kahane himself opposed the Black Panthers because they had supported anti-Jewish riots in Massachusetts and had left-wing views. Kahane founded the Jewish Defense League (JDL) in New York City in 1968. Its self-described purpose was to protect Jews from local manifestations of anti-Semitism. The JDL said it was committed to five fundamental principles: The JDL favored civil rights for blacks, but opposed black anti-Semites and racism of any form. In 1971, the JDL formed an alliance with a black rights group in what Kahane termed "a turning point in Black-Jewish relations". The Anti-Defamation League claimed that Kahane "preached a radical form of Jewish nationalism which reflected racism, violence and political extremism" that was replicated by Irv Rubin, the JDL's successor to Kahane. A number of the JDL's members and leaders, including Kahane, were convicted of acts related to domestic terrorism. In 1971, Kahane was sentenced to a suspended five-year prison sentence for conspiring to manufacture explosives. In 1975, Kahane was arrested for leading the attack on the Soviet United Nations mission and injuring two officers, but he was released after being given summonses for disorderly conduct. Later the same year, Kahane was accused of conspiring to kidnap a Soviet diplomat, bomb the Iraqi embassy in Washington, and ship arms abroad from Israel. He was convicted of violating his probation for the 1971 bombing conviction and was sentenced to one year in prison. However, he served most of it in a hotel, with frequent unsupervised absences, because of a concession over the provision of kosher food. In a 1984 interview with "Washington Post" correspondent Carla Hall, Kahane admitted that the JDL "bombed the Russian [Soviet] mission in New York, the Russian cultural mission here [Washington] in 1971, the Soviet trade offices". In 1971, Kahane moved to Israel. At the time, he declared that he would focus on Jewish education. He later began gathering lists of Arab citizens of the State of Israel who were willing to emigrate for compensation, and eventually, he initiated protests that advocated the expulsion of Arabs from that country, and Israeli-occupied territories. In 1972, Jewish Defense League leaflets were distributed in Hebron, calling for the mayor to stand trial for the 1929 Hebron massacre. Kahane was arrested dozens of times. In 1971, he founded Kach, a political party that ran for the Knesset, the Israeli Parliament, during the 1973 general elections under the name "The League List". It won 12,811 votes (0.82%), just 2,857 (0.18%) short of the electoral threshold at the time (1%) for winning a Knesset seat. The party was even less successful in the 1977 elections, winning only 4,836 votes. 
In 1980, Kahane was arrested for the 62nd time since emigrating to Israel, and he was jailed for six months under a detention order based on allegations that he was planning armed attacks against Palestinians in response to the killings of Jewish settlers. Kahane was held in prison in Ramla, where he wrote the book "They Must Go". Kahane was banned from entering the United Kingdom in 1981. That year, his party again ran for the Knesset, but it did not win a seat, receiving only 5,128 votes. In 1984, the Israeli Central Elections Committee banned him from being a candidate on the grounds that Kach was a racist party, but the Supreme Court of Israel overturned the ban, ruling that the committee was not authorized to bar Kahane's candidacy. The Supreme Court suggested that the Knesset pass a law excluding racist parties from future elections. The Knesset responded in 1985 by amending the "Basic Law: The Knesset" to include a prohibition (paragraph 7a) against the registration of parties that explicitly or implicitly incite racism. In the 1984 legislative elections, Kahane's Kach party received 25,907 votes, gaining one seat in the Knesset, which was taken by Kahane. He refused to take the standard oath of office and insisted on adding a Biblical verse from Psalms to indicate that national laws were overruled by the Torah wherever the two conflicted. Kahane's legislative proposals focused on Jewish education, an open economy, transferring the Arab population out of the Land of Israel, revoking Israeli citizenship from non-Jews, and banning Jewish-Gentile marriages and sexual relations. While his popularity in Israel grew, Kahane was boycotted in the Knesset, where his speeches were often made to an assembly that was empty except for the duty chairman and the transcriptionist. Kahane's legislative proposals and motions of no-confidence against the government were ignored or rejected. Kahane often pejoratively called other Knesset members "Hellenists", a reference to Jews who assimilated into Greek culture after Judea's occupation by Alexander the Great. In 1987, Kahane opened a yeshiva ("HaRaayon HaYehudi") with funding from US supporters to teach "the Authentic Jewish Idea". Despite the boycott, his popularity grew among the Israeli public, especially among working-class Sephardi Jews. Polls showed that Kach would likely have received anywhere from four to twelve seats in the coming November 1988 elections. In 1985, the Knesset had passed the amendment to the Basic Law barring political parties that incited racism. The Central Elections Committee banned Kahane a second time, and he appealed to the Israeli Supreme Court. This time, however, the Supreme Court ruled in favor of the committee, disqualifying Kach from running in the 1988 legislative elections. Kahane was thus the first candidate in Israel to be barred from election for racism. The move was criticized as anti-democratic by Alan M. Dershowitz. Also in 1985, the Knesset passed a law providing that a Knesset member could hold only Israeli citizenship. As a result of this legislation, Kahane renounced his United States citizenship. After being barred from the Knesset, he tried to have his U.S. citizenship reinstated on the grounds that he had been compelled by the Knesset to relinquish it. It was not reinstated, but he was permitted to continue traveling to the United States. 
In November 1990, Kahane gave a speech to an audience of mostly Orthodox Jews from Brooklyn, in which he urged American Jews to move to Israel before it was "too late". As a crowd gathered around him in the second-floor lecture hall of Midtown Manhattan's New York Marriott East Side, Kahane was assassinated by El Sayyid Nosair, an Egyptian-born U.S. citizen who had trained in Pakistan. Nosair was initially charged with the murder but acquitted. He was later convicted of the murder in U.S. District Court in connection with his involvement in the 1993 World Trade Center bombing; prosecutors were able to try Nosair again for the killing because the federal indictment included it as part of the alleged terrorist conspiracy. He was sentenced to life imprisonment and later made a confession to federal agents. Kahane was buried on Har HaMenuchot in Jerusalem. He was eulogized by a number of prominent supporters in both the U.S. and Israel, including Rabbi Moshe Tendler and the Sephardic Chief Rabbi of Israel, Mordechai Eliyahu, who spoke of how little the people understood of Kahane's "true value". Kahane argued that there was a glory in Jewish destiny, which came through the observance of the Torah and "halakha" (Jewish law). He also noted, "Democracy and Judaism are not the same thing." Kahane also stressed the view that a Jewish state and a Western democracy were incompatible, since Western democracy is religion-blind, while a Jewish state is religion-oriented by its very name. He also warned of the danger of non-Jewish citizens becoming a majority and voting against the Jewish character of the state: "The question is as follows: if the Arabs settle among us and make enough children to become a majority, will Israel continue to be a Jewish state? Do we have to accept that the Arab majority will decide?" "Western democracy has to be ruled out. For me, that's cut and dried: There's no question of setting up democracy in Israel, because democracy means equal rights for all, irrespective of racial or religious origins." Kahane proposed an "exchange of populations" that would continue the Jewish exodus from Arab lands: "A total of some 750,000 Jews fled Arab lands since 1948. Surely it is time for Jews, worried over the huge growth of Arabs in Israel, to consider finishing the exchange of populations that began 35 years ago." Kahane proposed a $40,000 compensation plan for Arabs who would leave voluntarily, and forcible expulsion for those who "don't want to leave". He encouraged retaliatory violence against Arabs who attacked Jews: "I approve of anybody who commits such acts of violence. Really, I don't think that we can sit back and watch Arabs throwing rocks at buses whenever they feel like it. They must understand that a bomb thrown at a Jewish bus is going to mean a bomb thrown at an Arab bus." In some of his writings, Kahane argued that Israel should never start a war for territory but that if a war were launched against Israel, Biblical territory should be annexed. However, in an interview, he defined Israel's "minimal borders" as follows: "The southern boundary goes up to El Arish, which takes in all of northern Sinai, including Yamit. To the east, the frontier runs along the western part of the East Bank of the Jordan River, hence part of what is now Jordan. Eretz Yisrael also includes part of Lebanon and certain parts of Syria, and part of Iraq, all the way to the Euphrates River." 
When critics suggested that following Kahane's plans would mean a perpetual war between Jews and Arabs, Kahane responded, "There will be a perpetual war. With or without Kahane." Following Kahane's death, no leader emerged to replace him in the movement, although the idea of transferring populations, which was attributed mainly to Kahane, was subsequently incorporated into the political platforms of various political parties in Israel, such as Moledet (applying to Arab non-citizen residents of the West Bank) and Yisrael Beiteinu (in the form of population exchange). Two small Kahanist factions later emerged, one under the name "Kach" and the other under the name "Kahane Chai" (Hebrew: כהנא חי, literally "Kahane lives [on]"), the latter led by his younger son, Binyamin Ze'ev Kahane. Neither was permitted by the Central Elections Committee to participate in the Knesset elections. In 1994, after Kach supporter Baruch Goldstein killed 29 Palestinian Muslim worshippers in the Cave of the Patriarchs massacre in Hebron, the Israeli government declared both parties to be terrorist organizations. The US State Department also added Kach and Kahane Chai to its list of Foreign Terrorist Organizations. In the 2003 Knesset elections, Herut, which had split off from the National Union list, ran with Michael Kleiner and former Kach activist Baruch Marzel taking the top two spots on the list. The joint effort narrowly missed the 1.5% electoral threshold. In the following 2006 elections, the Jewish National Front, led by Baruch Marzel, fared better, but it also failed to pass the minimum threshold. A follower of Kahane who was involved with Kach for many years, Michael Ben-Ari, was elected to the Knesset in the 2009 elections on the renewed National Union list. He stood again in the 2013 elections as the second candidate on the list of Otzma LeYisrael, but the party failed to pass the minimum threshold. In 2007, the FBI released over a thousand documents relating to its daily surveillance of Kahane, dating back to the early 1960s. In 2015, Kahane's grandson, Meir Ettinger, was detained by Israeli law enforcement as the alleged leader of the radical Jewish group "The Revolt". In an online "manifesto" echoing some of his grandfather's teachings, Ettinger promoted the "dispossession of gentiles" who live in Israel and the establishment of a new "kingdom of Israel", a theocracy ruled according to the Halacha. Ettinger's writings condemned Israel's government, mainstream rabbis, and the IDF, and denounced Christian churches as "idolatry". In 2016, Kahane's widow claimed that modern Jewish extremists in Israel are not following the ideology of her late husband, arguing that, unlike them, Rabbi Kahane had a more mature approach that did not encourage illegal activities. In 2017, "The Forward" reported that some of Kahane's followers were aligning themselves with white nationalists and the alt-right. Other Kahanists declared that such moves did not reflect Kahane's teachings, pointing out that Kahane had worked together with African Americans.
https://en.wikipedia.org/wiki?curid=20202
Manatee Manatees (family Trichechidae, genus Trichechus) are large, fully aquatic, mostly herbivorous marine mammals sometimes known as sea cows. There are three accepted living species of Trichechidae, representing three of the four living species in the order Sirenia: the Amazonian manatee ("Trichechus inunguis"), the West Indian manatee ("Trichechus manatus"), and the West African manatee ("Trichechus senegalensis"). They measure up to long, weigh as much as , and have paddle-like flippers. The etymology of the name is unclear, with connections having been made to Latin ("hand"), and to a word – sometimes cited as "manati" – used by the Taíno, a pre-Columbian people of the Caribbean, meaning "breast". Manatees' other name, "sea cows", comes from the fact that they are slow plant-eaters, peaceful and similar to cows on land. They often graze on water plants in tropical seas. Manatees make up three of the four living species in the order Sirenia; the fourth is the Eastern Hemisphere's dugong. The Sirenia are thought to have evolved from four-legged land mammals more than 60 million years ago, with the closest living relatives being the Proboscidea (elephants) and Hyracoidea (hyraxes). The Amazonian manatee's hair color is brownish gray, and it has thick wrinkled skin, often with coarse hair, or "whiskers". Photos are rare; although very little is known about this species, scientists think it is similar to the West Indian manatee. Manatees weigh , and average in length, sometimes growing to and (the females tend to be larger and heavier). At birth, baby manatees weigh about each. The manatee has a large, flexible, prehensile upper lip, used to gather food and eat, and for social interaction and communication. Manatees have shorter snouts than their fellow sirenians, the dugongs. The lids of manatees' small, widely spaced eyes close in a circular manner. The adults have no incisor or canine teeth, just a set of cheek teeth, which are not clearly differentiated into molars and premolars. These teeth are repeatedly replaced throughout life, with new teeth growing at the rear as older teeth fall out from farther forward in the mouth, somewhat as elephants' teeth do. At any given time, a manatee typically has no more than six teeth in each jaw. Its tail is paddle-shaped, and is the clearest visible difference between manatees and dugongs; a dugong tail is fluked, similar in shape to that of a whale. The female manatee has two teats, one under each flipper, a characteristic that was used to make early links between the manatee and elephants. The manatee is unusual among mammals in having just six cervical vertebrae, a number that may be due to mutations in the homeotic genes. All other mammals have seven cervical vertebrae, apart from the two-toed and three-toed sloths. Like the horse, the manatee has a simple stomach but a large cecum, in which it can digest tough plant matter. Generally, the intestines are about 45 meters long, unusually long for an animal of the manatee's size. Fossil remains of manatee ancestors (sirenians) date back to the Early Eocene. Apart from mothers with their young, or males following a receptive female, manatees are generally solitary animals. Manatees spend approximately 50% of the day sleeping submerged, surfacing for air regularly at intervals of less than 20 minutes. The remainder of the time is mostly spent grazing in shallow waters at depths of . The Florida subspecies ("T. m. latirostris") has been known to live up to 60 years. 
Generally, manatees swim at about . However, they have been known to swim at up to in short bursts. Manatees are capable of understanding discrimination tasks and show signs of complex associative learning. They also have good long-term memory. They demonstrate discrimination and task-learning abilities similar to dolphins and pinnipeds in acoustic and visual studies. Manatees typically breed once every two years; generally only a single calf is born. Gestation lasts about 12 months, and weaning the calf takes a further 12 to 18 months, although females may have more than one estrous cycle per year. Manatees emit a wide range of sounds used in communication, especially between cows and their calves. Their ears are large internally, but the external openings are small and located about four inches behind each eye. Adults communicate to maintain contact and during sexual and play behaviors. Taste and smell, in addition to sight, sound, and touch, may also be forms of communication. Manatees are herbivores and eat over 60 different freshwater plants (e.g., floating hyacinth, pickerel weed, alligator weed, water lettuce, hydrilla, water celery, musk grass, mangrove leaves) and saltwater plants (e.g., sea grasses, shoal grass, manatee grass, turtle grass, widgeon grass, sea clover, and marine algae). Using its divided upper lip, an adult manatee will commonly eat up to 10%–15% of its body weight (about 50 kg) per day. Consuming such an amount requires the manatee to graze for up to seven hours a day. To cope with the high levels of cellulose in their plant-based diet, manatees utilize hindgut fermentation to help with the digestion process. Manatees have been known to eat small numbers of fish from nets. Manatees use their flippers to "walk" along the bottom whilst they dig for plants and roots in the substrate. When plants are detected, the flippers are used to scoop the vegetation toward the manatee's lips. The manatee has prehensile lips; the upper lip pad is split into left and right sides, which can move independently. The lips use seven muscles to manipulate and tear at plants. Manatees use their lips and front flippers to move the plants into the mouth. The manatee does not have front teeth; however, behind the lips, on the roof of the mouth, there are dense, ridged pads. These horny ridges, and the manatee's lower jaw, tear through ingested plant material. Manatees have four rows of teeth. There are 6 to 8 high-crowned, open-rooted molars located along each side of the upper and lower jaw, giving a total of 24 to 32 flat, rough-textured teeth. Eating gritty vegetation abrades the teeth, particularly the enamel crown; however, research indicates that the enamel structure in manatee molars is weak. To compensate for this, manatee teeth are continually replaced. When anterior molars wear down, they are shed. Posterior molars erupt at the back of the row and slowly move forward to replace them, much as on a conveyor belt, similarly to elephants. This process continues throughout the manatee's lifetime. The rate at which the teeth migrate forward depends on how quickly the anterior teeth abrade. Some studies indicate that the rate is about 1 cm/month, although other studies indicate 0.1 cm/month. Manatees inhabit the shallow, marshy coastal areas and rivers of the Caribbean Sea and the Gulf of Mexico ("T. manatus", West Indian manatee), the Amazon basin ("T. inunguis", Amazonian manatee), and West Africa ("T. senegalensis", West African manatee). 
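The tooth and intake figures quoted above can be checked with simple arithmetic; the following is a reconstruction from the numbers in this paragraph, and the adult body-mass range is an inference from them rather than a figure given in the text:

$$2\ \text{jaws} \times 2\ \text{sides} \times (6\ \text{to}\ 8)\ \text{molars per side} = 24\ \text{to}\ 32\ \text{teeth},$$

$$\frac{50\ \text{kg}}{0.15} \approx 333\ \text{kg} \quad\text{and}\quad \frac{50\ \text{kg}}{0.10} = 500\ \text{kg},$$

so a daily intake of about 50 kg at 10%–15% of body weight corresponds to an adult body mass of roughly 330–500 kg.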
West Indian manatees prefer warmer temperatures and are known to congregate in shallow waters. They frequently migrate through brackish water estuaries to freshwater springs. They cannot survive below about 15 °C (60 °F). Their natural source of warmth during winter is warm, spring-fed rivers. The coast of the state of Georgia is usually the northernmost range of the West Indian manatee, because its low metabolic rate does not protect it in cold water. Prolonged exposure to water below 20 °C (68 °F) can cause "cold stress syndrome" and death. Florida manatees can move freely between fresh water and salt water. Manatees have been seen as far north as Cape Cod, and in 1995 and again in 2006, one was seen in New York City and in Rhode Island's Narragansett Bay. A manatee was spotted in the Wolf River harbor near the Mississippi River in downtown Memphis in 2006, and was later found dead 10 miles downriver in McKellar Lake. Another manatee was found dead on a New Jersey beach in February 2020, considered especially unusual given the time of year; at the time of the manatee's discovery, the water temperature in the area was below 6.5 °C (43.7 °F). The West Indian manatee migrates into Florida rivers, such as the Crystal, the Homosassa, and the Chassahowitzka, whose headsprings are 22 °C (72 °F) all year. Between November and March each year, about 600 West Indian manatees gather in the rivers of Citrus County, Florida, such as those in the Crystal River National Wildlife Refuge. In winter, manatees often gather near the warm-water outflows of power plants along the Florida coast, instead of migrating south as they once did. Some conservationists are concerned that these manatees have become too reliant on these artificially warmed areas. The U.S. Fish and Wildlife Service is trying to find a new way to heat the water for manatees that depended on power plants that have since closed. Studies suggest that Florida manatees need access to fresh water for proper regulation of water and salts in their bodies. Accurate population estimates of the Florida manatee ("T. manatus") are difficult. The estimates have been called scientifically weak because they vary widely from year to year, with some areas showing increases, others decreases, and little strong evidence of increases except in two areas. Manatee counts are highly variable without an accurate way to estimate numbers: in Florida in 1996, a winter survey found 2,639 manatees; in 1997, a January survey found 2,229, and a February survey found 1,706. A statewide synoptic survey in January 2010 found 5,067 manatees living in Florida, the highest number recorded to that time. As of January 2016, the USFWS estimated the range-wide manatee population to be at least 13,000; as of January 2018, at least 6,100 were estimated to be in Florida. Population viability studies conducted in 1997 found that decreasing adult survival and eventual extinction were a probable future outcome for Florida manatees unless they received more protection. The U.S. Fish and Wildlife Service proposed downgrading the manatee's status from endangered to threatened in January 2016, after more than 40 years of the manatee being classified as endangered. The freshwater Amazonian manatee ("T. inunguis") inhabits the Amazon River and its tributaries, and never ventures into salt water. The West African manatee ("T. senegalensis") is found in coastal marine and estuarine habitats, and in freshwater river systems along the west coast of Africa from the Senegal River south to the Cuanza River in Angola. 
They live as far upriver on the Niger River as Koulikoro in Mali, 2,000 km from the coast. Compared with the threat posed by humans, predation does not present a significant threat to manatees. When threatened, the manatee's response is to dive as deeply as it can, suggesting that threats have most frequently come from land dwellers such as humans rather than from other water-dwelling creatures such as caimans or sharks. The main causes of death for manatees are human-related, such as habitat destruction and collisions with human-made objects. Natural causes of death include adverse temperatures, predation by crocodiles on young, and disease. Their slow-moving, curious nature, coupled with dense coastal development, has led to many violent collisions with propeller-driven boats and ships, frequently resulting in maiming, disfigurement, and even death. As a result, a large proportion of manatees exhibit spiral propeller scars on their backs, usually caused by larger vessels that do not have skegs in front of the propellers, as the smaller outboard and inboard-outboard recreational boats do. Individual manatees are now even identified by their scar patterns. Many manatees have been cut in two by large vessels such as ships and tugboats, even in the narrow channels of the highly populated lower St. Johns River. Some are concerned that the current situation is inhumane, with upwards of 50 scars and disfigurements from vessel strikes on a single manatee. Often, the lacerations lead to infections, which can prove fatal. Internal injuries stemming from impacts and from being trapped between hulls and docks have also been fatal. Recent testing shows that manatees may be able to hear speed boats and other watercraft approaching, depending on the frequencies the craft emit. However, a manatee may not be able to hear an approaching boat when it is occupied with day-to-day activities or distracted. The manatee has a tested frequency range of 8 kilohertz to 32 kilohertz; manatees thus hear at higher frequencies than would be expected for such large marine mammals. Many large boats emit very low frequencies, which confuse the manatee and may explain its lack of awareness around boats. The Lloyd's mirror effect results in low-frequency propeller sounds not being discernible near the surface, where most accidents occur. Research indicates that when a boat emits higher frequencies, manatees rapidly swim away from the danger. In 2003, a population model released by the United States Geological Survey predicted an extremely grave situation confronting the manatee in both the Southwest and Atlantic regions, where the vast majority of manatees are found. In 2009, of the 429 Florida manatees recorded dead, 97 were killed by commercial and recreational vessels, which broke the earlier record of 95 set in 2002. Another cause of manatee deaths is red tide, a term used for the proliferation, or "blooms", of the microscopic marine alga "Karenia brevis". This dinoflagellate produces brevetoxins that can have toxic effects on the central nervous system of animals. In 1996, a red tide was responsible for 151 manatee deaths. The bloom was present from early March to the end of April and killed approximately 15% of the known population of manatees along South Florida's western coast. Other blooms in 1982 and 2005 resulted in 37 and 44 deaths, respectively. 
Manatees can also be crushed or trapped in water control structures (navigation locks, floodgates, etc.) and are occasionally killed by entanglement in fishing gear, such as crab pot float lines, box traps, and shark nets. While humans are allowed to swim with manatees in one area of Florida, there have been numerous charges of people harassing and disturbing the manatees. According to the United States Fish and Wildlife Service, approximately 99 manatee deaths each year are related to human activities. In January 2016, there were 43 manatee deaths in Florida alone. All three species of manatee are listed by the World Conservation Union as vulnerable to extinction. However, the U.S. Fish and Wildlife Service (FWS) no longer considers the West Indian manatee to be "endangered", having downgraded its status to "threatened" in March 2017. It cited improvements to habitat conditions, population growth and reductions of threats as reasons for the change. The reclassification was met with controversy, with Florida congressman Vern Buchanan and groups such as the Save the Manatee Club and the Center for Biological Diversity expressing concerns that the change would have a detrimental effect on conservation efforts. The new classification will not affect current federal protections. West Indian manatees were originally classified as endangered in 1967, among the first class of species listed under U.S. endangered species legislation. The manatee population in the United States reached a low in the 1970s, when only a few hundred individuals lived in the nation. As of February 2016, 6,250 manatees were reported swimming in Florida's springs. It is illegal under federal and Florida law to injure or harm a manatee. The MV "Freedom Star" and MV "Liberty Star", ships used by NASA to tow space shuttle solid rocket boosters back to Kennedy Space Center, were propelled only by water jets to protect the endangered manatee population that inhabits regions of the Banana River, where the ships were based. Brazil outlawed hunting in 1973 in an effort to preserve the species. Deaths by boat strikes are still common. The oldest manatee in captivity was Snooty, at the South Florida Museum's Parker Manatee Aquarium in Bradenton, Florida. Born at the Miami Aquarium and Tackle Company on July 21, 1948, Snooty was one of the first recorded captive manatee births. Raised entirely in captivity, Snooty was never to be released into the wild. As such, he was the only manatee at the aquarium, and one of only a few captive manatees in the United States allowed to interact with human handlers. That made him uniquely suitable for manatee research and education. Snooty died suddenly two days after his 69th birthday, on July 23, 2017, when he was found in an underwater area used only to access plumbing for the exhibit's life-support system. The South Florida Museum's initial press release stated, "Early indications are that an access panel door that is normally bolted shut had somehow been knocked loose and that Snooty was able to swim in." There are a number of manatee rehabilitation centers in the United States. These include three government-run critical care facilities in Florida, at Lowry Park Zoo, Miami Seaquarium, and SeaWorld Orlando. After initial treatment at these facilities, the manatees are transferred to rehabilitation facilities before release. These include the Cincinnati Zoo and Botanical Garden, Columbus Zoo and Aquarium, Epcot's The Seas, the South Florida Museum, and Homosassa Springs Wildlife State Park. 
The Columbus Zoo was a founding member of the Manatee Rehabilitation Partnership in 2001. Since 1999, the zoo's Manatee Bay facility has helped rehabilitate 20 manatees. The Cincinnati Zoo has rehabilitated and released more than a dozen manatees since 1999. Manatees can also be viewed in a number of European zoos, such as Tierpark Berlin and Nuremberg Zoo in Germany, ZooParc de Beauval in France, and the Aquarium of Genoa in Italy. The River Safari in Singapore features seven of them. Since the 19th century, Georgetown, Guyana has kept West Indian manatees in its botanical garden, and later, its national park. In the 1910s and again in the 1950s, sugar estates in Guyana used manatees to keep their irrigation canals weed-free. Between the 1950s and 1970s, the Georgetown water treatment plant used manatees in its storage canals for the same purpose. The manatee has been linked to folklore about mermaids. In West African folklore, manatees were considered sacred and thought to have once been human; killing one was taboo and required penance. In the novel "Moby-Dick", Herman Melville distinguishes manatees ("Lamatins") from whales, stating, "I am aware that down to the present time, the fish styled Lamatins and Dugongs (Pig-fish and Sow-fish of the Coffins of Nantucket) are included by many naturalists among the whales. But as these pig-fish are a noisy, contemptible set, mostly lurking in the mouths of rivers, and feeding on wet hay, and especially as they do not spout, I deny their credentials as whales; and have presented them with their passports to quit the Kingdom of Cetology."
https://en.wikipedia.org/wiki?curid=20204
Marsupial Marsupials are any members of the mammalian infraclass Marsupialia. All extant marsupials are endemic to Australasia and the Americas. A distinctive characteristic common to these species is that most of the young are carried in a pouch. Well-known marsupials include kangaroos, wallabies, koalas, possums (Phalangeriformes), opossums, wombats, and Tasmanian devils. Some lesser-known marsupials are the dunnarts, potoroos, cuscuses and the extinct thylacine. Marsupials represent the clade originating from the last common ancestor of extant metatherians. Like other mammals in the Metatheria, they give birth to relatively undeveloped young that often reside in a pouch located on their mothers' abdomen for a certain amount of time. Close to 70% of the 334 extant species occur on the Australian continent (the mainland, Tasmania, New Guinea and nearby islands). The remaining 30% are found in the Americas, primarily in South America, with thirteen species in Central America and one in North America, north of Mexico. The word "marsupial" comes from "marsupium", the technical term for the abdominal pouch. It, in turn, is borrowed from Latin and ultimately from an ancient Greek word meaning "pouch". Marsupials are taxonomically identified as members of the mammalian infraclass Marsupialia, first described as a family under the order Pollicata by German zoologist Johann Karl Wilhelm Illiger in his 1811 work "Prodromus Systematis Mammalium et Avium". However, James Rennie, author of "The Natural History of Monkeys, Opossums and Lemurs" (1838), pointed out that the placement of five different groups of mammals – monkeys, lemurs, tarsiers, aye-ayes and marsupials (with the exception of kangaroos, which were placed under the order Salientia) – under a single order (Pollicata) did not appear to have a strong justification. In 1816, French zoologist Georges Cuvier classified all marsupials under the order Marsupialia. In 1997, researcher J. A. W. Kirsch and others accorded infraclass rank to Marsupialia. There are two primary divisions: American marsupials (Ameridelphia) and Australian marsupials (Australidelphia); Marsupialia is further divided into a number of orders, both living and extinct. Marsupialia comprises over 300 extant species, and several attempts have been made to interpret accurately the phylogenetic relationships among the different marsupial orders. Studies differ on whether Didelphimorphia or Paucituberculata is the sister group to all other marsupials. Though the order Microbiotheria (which has only one species, the monito del monte) is found in South America, morphological similarities suggest it is closely related to Australian marsupials. Molecular analyses in 2010 and 2011 identified Microbiotheria as the sister group to all Australian marsupials. However, the relations among the four Australidelphid orders are not as well understood. One cladogram of the relationships among the various marsupial orders is based on a 2015 phylogenetic study. DNA evidence supports a South American origin for marsupials, with Australian marsupials arising from a single Gondwanan migration of marsupials from South America, across Antarctica, to Australia. There are many small arboreal species in each group. The term "opossum" is used to refer to American species (though "possum" is a common abbreviation), while similar Australian species are properly called "possums". Marsupials have the typical characteristics of mammals, e.g., mammary glands, three middle ear bones, and true hair. 
There are, however, striking differences, as well as a number of anatomical features, that separate marsupials from eutherians. In addition to the front pouch, which contains multiple teats for the sustenance of their young, marsupials have other common structural features. Ossified patellae are absent in most modern marsupials (though a small number of exceptions are reported) and epipubic bones are present. Marsupials (and monotremes) also lack a gross communication (corpus callosum) between the right and left brain hemispheres. The skull has peculiarities in comparison with that of placental mammals. In general, the skull is relatively small and tight. Openings ("foramen lacrimale") are located in the front of the orbit. The cheekbone is enlarged and extends further to the rear. The angular extension ("processus angularis") of the lower jaw is bent toward the center. Another feature is the hard palate, which, in contrast to that of placental mammals, always has more openings (foramina). The teeth differ from those of placental mammals, in that all taxa except wombats have a different number of incisors in the upper and lower jaws. The early marsupials had a dental formula of 5/4 – 1/1 – 3/3 – 4/4; that is, per jaw half they have five maxillary or four mandibular incisors, one canine, three premolars and four molars, for a total of 50 teeth. Some taxa, such as the opossum, retain the original number of teeth. In other groups the number of teeth is reduced. The dental formula for Macropodidae (kangaroos and wallabies etc.) is 3/1 – (0 or 1)/0 – 2/2 – 4/4. Marsupials in many cases have 40 to 50 teeth, significantly more than placental mammals. The upper jaw has a high number of incisors, up to ten, and marsupials have more molars than premolars. A second set of teeth grows in only at the third premolar; all remaining teeth are already created as permanent teeth. A few general characteristics describe their skeleton. In addition to details in the construction of the ankle, the epipubic bones ("Ossa epubica") are characteristic: two bones projecting forward from the pubic bone of the pelvis. Since these are present in males and in pouchless species, it is believed that they originally had nothing to do with reproduction, but served as attachment points for muscles involved in the movement of the hind limbs. This could be an original feature of mammals, as epipubic bones are also found in monotremes. Marsupial reproductive organs differ from those of placental mammals, in that the reproductive tract is doubled. The females have two uteri and two vaginas, and before birth, a birth canal forms between them, the median vagina. The males have a split or double penis lying in front of the scrotum. A pouch is present in most, but not all, species. Many marsupials have a permanent pouch, whereas in others the pouch develops during gestation, as with the shrew opossum, where the young are hidden only by skin folds or in the fur of the mother. The arrangement of the pouch is variable to allow the offspring to receive maximum protection. Kangaroos, which hop upright, have a pouch opening at the front, while many others that walk or climb on all fours have the opening at the back. Usually, only females have a pouch, but the male water opossum has a pouch that is used to accommodate his genitalia while swimming or running. Marsupials have adapted to many habitats, reflected in the wide variety in their build. 
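The 50-tooth total quoted for early marsupials follows directly from the ancestral dental formula; as a worked check (a reconstruction from the formula above, not an additional claim):

$$\underbrace{2 \times (5 + 1 + 3 + 4)}_{\text{upper jaw, both halves}} + \underbrace{2 \times (4 + 1 + 3 + 4)}_{\text{lower jaw, both halves}} = 26 + 24 = 50\ \text{teeth}.$$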
The largest living marsupial, the red kangaroo, grows up to in height and in weight, but extinct genera, such as "Diprotodon", were significantly larger and heavier. The smallest members of this group are the marsupial mice, which often reach only in body length. Some species resemble placental mammals and are examples of convergent evolution. The extinct thylacine strongly resembled the placental wolf, hence its nickname "Tasmanian wolf". The ability to glide evolved both in marsupials (as with sugar gliders) and in some placental mammals (as with flying squirrels), developing independently in each lineage. Other groups such as the kangaroo, however, do not have clear placental counterparts, though they share similarities in lifestyle and ecological niches with ruminants. Marsupials' reproductive systems differ markedly from those of placental mammals. During embryonic development, a choriovitelline placenta forms in all marsupials. In bandicoots, an additional chorioallantoic placenta forms, although it lacks the chorionic villi found in eutherian placentas. The evolution of reproduction in marsupials, and speculation about the ancestral state of mammalian reproduction, have engaged discussion since the end of the 19th century. Both sexes possess a cloaca, which is connected to a urogenital sac used to store waste before expulsion. The bladder of marsupials functions as a site to concentrate urine and empties into the common urogenital sinus in both females and males. Most male marsupials, except for macropods and marsupial moles, have a bifurcated penis, separated into two columns, so that the penis has two ends corresponding to the females' two vaginas. The penis is used only during copulation and is separate from the urinary tract. It curves forward when erect, and when not erect, it is retracted into the body in an S-shaped curve. Neither marsupials nor monotremes possess a baculum. The shape of the glans penis varies among marsupial species. The male thylacine had a pouch that acted as a protective sheath, covering his external reproductive organs while he ran through thick brush. The shape of the urethral grooves of the males' genitalia is used to distinguish between "Monodelphis brevicaudata", "Monodelphis domestica", and "Monodelphis americana". The grooves divide into two separate channels that form the ventral and dorsal folds of the erectile tissue. Several species of dasyurid marsupials can also be distinguished by their penis morphology. The only accessory sex glands marsupials possess are the prostate and bulbourethral glands; there are no ampullae, seminal vesicles or coagulating glands. The prostate is proportionally larger in marsupials than in placental mammals. During the breeding season, the male tammar wallaby's prostate and bulbourethral gland enlarge. However, there does not appear to be any seasonal difference in the weight of the testes. Female marsupials have two lateral vaginas, which lead to separate uteri, but both open externally through the same orifice. A third canal, the median vagina, is used for birth. This canal can be transitory or permanent. Some marsupial species are able to store sperm in the oviduct after mating. Marsupials give birth at a very early stage of development; after birth, newborn marsupials crawl up the bodies of their mothers and attach themselves to a teat, which is located on the underside of the mother, either inside a pouch called the marsupium, or open to the environment. 
There they remain for a number of weeks, attached to the teat. The offspring are eventually able to leave the marsupium for short periods, returning to it for warmth, protection, and nourishment. Prenatal development differs between marsupials and placental mammals. Key aspects of the first stages of placental mammal embryo development, such as the inner cell mass and the process of compaction, are not found in marsupials. The cleavage stages of marsupial development are very variable between groups, and aspects of marsupial early development are not yet fully understood. An early birth removes a developing marsupial from its mother's body much sooner than in placental mammals; thus marsupials have not developed a complex placenta to protect the embryo from its mother's immune system. Though early birth puts the tiny newborn marsupial at a greater environmental risk, it significantly reduces the dangers associated with long pregnancies, as there is no need to carry a large fetus to full term in bad seasons. Marsupials are extremely altricial animals, needing to be intensely cared for immediately following birth (cf. precocial). Because newborn marsupials must climb up to their mother's teats, their front limbs and facial structures are much more developed than the rest of their bodies at the time of birth. This requirement has been argued to have resulted in the limited range of locomotor adaptations in marsupials compared to placentals. Marsupials must develop grasping forepaws during their early youth, making the evolutionary transition from these limbs into hooves, wings, or flippers, as some groups of placental mammals have done, more difficult. However, several marsupials do possess atypical forelimb morphologies, such as the hooved forelimbs of the pig-footed bandicoot, suggesting that the range of forelimb specialization is not as limited as assumed. An infant marsupial is known as a joey. Marsupials have a very short gestation period, usually around four to five weeks but as short as 12 days for some species, and the joey is born in an essentially fetal state. The blind, furless, miniature newborn, the size of a jelly bean, crawls across its mother's fur to make its way into the pouch, where it latches onto a teat for food. It will not re-emerge for several months, during which time it develops fully. After this period, the joey begins to spend increasing lengths of time out of the pouch, feeding and learning survival skills. However, it returns to the pouch to sleep, and if danger threatens, it will seek refuge in its mother's pouch for safety. Joeys stay in the pouch for up to a year in some species, or until the next joey is born. A marsupial joey is unable to regulate its own body temperature and relies upon an external heat source. Until the joey is well-furred and old enough to leave the pouch, a pouch temperature of must be constantly maintained. Joeys are born with "oral shields". In species without pouches or with rudimentary pouches, these are more developed than in forms with well-developed pouches, implying a role in keeping the young attached to the mother's teat. The first American marsupial that Europeans encountered was the common opossum. Vicente Yáñez Pinzón, commander of the "Niña" on Christopher Columbus' first voyage in the late 1400s, collected a female opossum with young in her pouch off the Brazilian coast. He presented them to the Spanish monarchs, though by then the young were lost and the female had died. 
The animal was noted for its strange pouch or "second belly", and how the offspring reached the pouch was a mystery. It was the Portuguese, however, who first described Australian marsupials. António Galvão, a Portuguese administrator in Ternate (1536–40), wrote a detailed account of the northern common cuscus ("Phalanger orientalis"). From the start of the 17th century, more accounts of marsupials arrived. For instance, a 1606 record of an animal, killed on the southern coast of New Guinea, described it as "in the shape of a dog, smaller than a greyhound", with a snakelike "bare scaly tail" and hanging testicles. The meat tasted like venison, and the stomach contained ginger leaves. This description appears to closely resemble the dusky pademelon ("Thylogale brunii"), in which case this would be the earliest European record of a member of the kangaroo family (Macropodidae). The relationships among the three extant divisions of mammals (monotremes, marsupials, and placentals) were long a matter of debate among taxonomists. Most morphological evidence, comparing traits such as the number and arrangement of teeth and the structure of the reproductive and waste elimination systems, as well as most genetic and molecular evidence, favors a closer evolutionary relationship between marsupials and placental mammals than either has with the monotremes. The ancestors of marsupials, part of a larger group called metatherians, probably split from those of placental mammals (eutherians) during the mid-Jurassic period, though no fossil evidence of metatherians themselves is known from this time. From DNA and protein analyses, the time of divergence of the two lineages has been estimated to be around 100 to 120 mya. Fossil metatherians are distinguished from eutherians by the form of their teeth; metatherians possess four pairs of molar teeth in each jaw, whereas eutherian mammals (including true placentals) never have more than three pairs. Using this criterion, the earliest known metatherian is "Sinodelphys szalayi", which lived in China around 125 mya. This makes it a contemporary of some early eutherian species that have been found in the same area. While placental fossils dominate in Asia, marsupial fossils occur in larger numbers in North America. The oldest metatherian fossils are found in present-day China. About 100 mya, the supercontinent Pangaea was in the process of splitting into the northern continent Laurasia and the southern continent Gondwana, with what would become China and Australia already separated by the Tethys Ocean. From there, metatherians spread westward into modern North America (still attached to Eurasia), where the earliest true marsupials are found. Marsupials are difficult to distinguish from other fossil mammals, as they are characterized by aspects of the reproductive system which do not normally fossilize (including pouches) and by subtle changes in the bone and tooth structure that show a metatherian is part of the marsupial crown group (the most exclusive group that contains all living marsupials). The earliest definite marsupial fossil belongs to the species "Peradectes minor", from the Paleocene of Montana, dated to about 65 million years ago. From their point of origin in Laurasia, marsupials spread to South America, which was possibly connected to North America at around 65 mya through a ridge that has since moved on to become the Caribbean Archipelago. 
Laurasian marsupials eventually died off, for reasons that are not entirely clear; convention has it that they disappeared due to competition with placentals, but this is no longer accepted to be the primary reason. Marsupials, "Peradectes", and the related Herpetotheriidae are nested within a clade of metatherians that also included a variety of Cretaceous North American taxa. In South America, the opossums evolved and developed a strong presence, and the Paleogene also saw the evolution of shrew opossums (Paucituberculata) alongside non-marsupial metatherian predators such as the borhyaenids and the saber-toothed "Thylacosmilus". South American niches for mammalian carnivores were dominated by these marsupial and sparassodont metatherians, which seem to have competitively excluded South American placentals from evolving carnivory. While placental predators were absent, the metatherians did have to contend with avian (terror bird) and terrestrial crocodylomorph competition. Marsupials were in turn excluded from large-herbivore niches in South America by the presence of native placental ungulates (now extinct) and xenarthrans (whose largest forms are also extinct). South America and Antarctica remained connected until 35 mya, as shown by the unique fossils found there. North and South America were disconnected until about three million years ago, when the Isthmus of Panama formed. This led to the Great American Interchange. Sparassodonts disappeared for unclear reasons; again, this has classically been attributed to competition from carnivoran placentals, but the last sparassodonts co-existed with a few small carnivorans like procyonids and canids, and disappeared long before the arrival of macropredatory forms like felids. Meanwhile, didelphimorphs (opossums) invaded Central America, with the Virginia opossum reaching as far north as Canada. Marsupials reached Australia via Antarctica about 50 mya, shortly after Australia had split off. This suggests a single dispersal event of just one species, most likely a relative of South America's monito del monte (a microbiothere, the only New World australidelphian). This progenitor may have rafted across the widening, but still narrow, gap between Australia and Antarctica. The journey must not have been easy; South American ungulate and xenarthran remains have been found in Antarctica, but these groups did not reach Australia. In Australia, marsupials radiated into the wide variety seen today, including not only omnivorous and carnivorous forms such as were present in South America, but also large herbivores. Modern marsupials appear to have reached the islands of New Guinea and Sulawesi relatively recently via Australia. A 2010 analysis of retroposon insertion sites in the nuclear DNA of a variety of marsupials confirmed that all living marsupials have South American ancestors. The branching sequence of marsupial orders indicated by the study puts Didelphimorphia in the most basal position, followed by Paucituberculata, then Microbiotheria, and ending with the radiation of Australian marsupials. This indicates that Australidelphia arose in South America and reached Australia after Microbiotheria split off. In Australia, terrestrial placental mammals disappeared early in the Cenozoic (their most recent known fossils being 55-million-year-old teeth resembling those of condylarths) for reasons that are not clear, allowing marsupials to dominate the Australian ecosystem. 
Extant native Australian terrestrial placental mammals (such as hopping mice) are relatively recent immigrants, arriving via island hopping from Southeast Asia. Genetic analysis suggests a divergence date between the marsupials and the placentals at . The ancestral number of chromosomes has been estimated to be 2n = 14. A newer hypothesis, based on new cranial and post-cranial marsupial fossils of "Djarthia murgonensis" from the early Eocene Tingamarra Local Fauna in Australia, suggests that South American microbiotheres resulted from a back-dispersal from eastern Gondwana; these fossils indicate that "Djarthia murgonensis" is the most plesiomorphic and the oldest unequivocal australidelphian known, and may represent the ancestral morphotype of the Australian marsupial radiation.
https://en.wikipedia.org/wiki?curid=20205
Manchester Manchester is a city and metropolitan borough in Greater Manchester, England. It is historically and traditionally a part of the county of Lancashire. It had a population of 547,627 as of 2018, making it the fifth most populous English district. It lies within the United Kingdom's second-most populous urban area, with a population of 2.5 million, and its third-most populous metropolitan area, with a population of 3.3 million. It is fringed by the Cheshire Plain to the south, the Pennines to the north and east, and an arc of towns with which it forms a continuous conurbation. The local authority for the city is Manchester City Council. The recorded history of Manchester began with the civilian settlement associated with the Roman fort of "Mamucium" or "Mancunium", which was established in about AD 79 on a sandstone bluff near the confluence of the rivers Medlock and Irwell. Although historically a part of Lancashire, areas of Cheshire south of the River Mersey were incorporated into Manchester in the 20th century; the first to be included, Wythenshawe, was added to the city in 1931. Throughout the Middle Ages Manchester remained a manorial township, but began to expand "at an astonishing rate" around the turn of the 19th century. Manchester's unplanned urbanisation was brought on by a boom in textile manufacture during the Industrial Revolution, and resulted in it becoming the world's first industrialised city. Manchester achieved city status in 1853. The Manchester Ship Canal opened in 1894, creating the Port of Manchester and directly linking the city to the Irish Sea, to the west. Its fortune declined after the Second World War, owing to deindustrialisation, but the IRA bombing in 1996 led to extensive investment and regeneration. Following successful redevelopment after the bombing, Manchester was the host city for the 2002 Commonwealth Games. Manchester was long the third most visited city in the UK, after London and Edinburgh; in 2019, it surpassed Edinburgh to become the second most visited city in the UK after London. It is notable for its architecture, culture, musical exports, media links, scientific and engineering output, social impact, sports clubs and transport connections. Manchester is a city of notable firsts. Manchester Liverpool Road railway station was the world's first inter-city passenger railway station and is the oldest surviving railway station. The city has also excelled in scientific advancement: it was at The University of Manchester, in 1917, that the scientist Ernest Rutherford first split the atom. Further achievements at the university include the work of Frederic C. Williams, Tom Kilburn and Geoff Tootill, who developed and built the world's first stored-program computer in 1948, and of Andre Geim and Konstantin Novoselov, who in 2004 first isolated and characterised graphene. The name "Manchester" originates from the Latin name "Mamucium" or its variant "Mancunio", and the citizens are still referred to as Mancunians. These names are generally thought to represent a Latinisation of an original Brittonic name. The generally accepted etymology of this name is that it comes from Brittonic *"mamm-" ("breast", in reference to a "breast-like hill"). However, more recent work suggests that it could come from *"mamma" ("mother", in reference to a local river goddess). Both usages are preserved in Insular Celtic languages, such as "mam" meaning "breast" in Irish and "mother" in Welsh. 
The suffix "-chester" is from Old English "ceaster" ("Roman fortification", itself a loanword from Latin "castra", "fort; fortified town"). The Brigantes were the major Celtic tribe in what is now known as Northern England; they had a stronghold in the locality at a sandstone outcrop on which Manchester Cathedral now stands, opposite the bank of the River Irwell. Their territory extended across the fertile lowland of what is now Salford and Stretford. Following the Roman conquest of Britain in the 1st century, General Agricola ordered the construction of a fort named Mamucium in the year 79 to ensure that Roman interests in Deva Victrix (Chester) and Eboracum (York) were protected from the Brigantes. Central Manchester has been permanently settled since this time. A stabilised fragment of the foundations of the final version of the Roman fort is visible in Castlefield. The Roman habitation of Manchester probably ended around the 3rd century; its civilian settlement appears to have been abandoned by the mid-3rd century, although the fort may have supported a small garrison until the late 3rd or early 4th century. After the Roman withdrawal and Saxon conquest, the focus of settlement shifted to the confluence of the Irwell and Irk sometime before the arrival of the Normans after 1066. Much of the wider area was laid waste in the subsequent Harrying of the North. In the Domesday Book, Manchester is recorded as lying within the hundred of Salford and held as tenant-in-chief by a Norman named Roger of Poitou; it was later held by the de Gresle family, lords of the manor and residents of Manchester Castle until 1215, before a manor house was built. By 1421, Thomas de la Warre had founded and constructed a collegiate church for the parish, now Manchester Cathedral; the domestic premises of the college house Chetham's School of Music and Chetham's Library. The library, which opened in 1653 and is still open to the public today, is the oldest free public reference library in the United Kingdom. Manchester is mentioned as having a market in 1282. Around the 14th century, Manchester received an influx of Flemish weavers, sometimes credited as the foundation of the region's textile industry. Manchester became an important centre for the manufacture and trade of woollens and linen, and by about 1540 had expanded to become, in John Leland's words, "The fairest, best builded, quickest, and most populous town of all Lancashire." The cathedral and Chetham's buildings are the only significant survivors of Leland's Manchester. During the English Civil War, Manchester strongly favoured the Parliamentary interest. Although the concession was not long-lasting, Cromwell granted it the right to elect its own MP. Charles Worsley, who sat for the city for only a year, was later appointed Major General for Lancashire, Cheshire and Staffordshire during the Rule of the Major Generals. He was a diligent puritan, turning out ale houses and banning the celebration of Christmas; he died in 1656. Significant quantities of cotton began to be used after about 1600, firstly in linen/cotton fustians, but by around 1750 pure cotton fabrics were being produced and cotton had overtaken wool in importance. The Irwell and Mersey were made navigable by 1736, opening a route from Manchester to the sea docks on the Mersey. The Bridgewater Canal, Britain's first wholly artificial waterway, was opened in 1761, bringing coal from mines at Worsley to central Manchester. The canal was extended to the Mersey at Runcorn by 1776. 
The combination of competition and improved efficiency halved the cost of coal and halved the transport cost of raw cotton. Manchester became the dominant marketplace for textiles produced in the surrounding towns. A commodities exchange, opened in 1729, and numerous large warehouses, aided commerce. In 1780, Richard Arkwright began construction of Manchester's first cotton mill. In the early 1800s, John Dalton formulated his atomic theory in Manchester. Manchester's history is concerned with textile manufacture during the Industrial Revolution. The great majority of cotton spinning took place in the towns of south Lancashire and north Cheshire, and Manchester was for a time the most productive centre of cotton processing. Manchester became known as the world's largest marketplace for cotton goods and was dubbed "Cottonopolis" and "Warehouse City" during the Victorian era. In Australia, New Zealand and South Africa, the term "manchester" is still used for household linen: sheets, pillow cases, towels, etc. The industrial revolution brought about huge change in Manchester and was key to the increase in Manchester's population. Manchester began expanding "at an astonishing rate" around the turn of the 19th century as people flocked to the city for work from Scotland, Wales, Ireland and other areas of England as part of a process of unplanned urbanisation brought on by the Industrial Revolution. It developed a wide range of industries, so that by 1835 "Manchester was without challenge the first and greatest industrial city in the world." Engineering firms initially made machines for the cotton trade, but diversified into general manufacture. Similarly, the chemical industry started by producing bleaches and dyes, but expanded into other areas. Commerce was supported by financial service industries such as banking and insurance. Trade, and feeding the growing population, required a large transport and distribution infrastructure: the canal system was extended, and Manchester became one end of the world's first intercity passenger railway—the Liverpool and Manchester Railway. Competition between the various forms of transport kept costs down. In 1878 the GPO (the forerunner of British Telecom) provided its first telephones to a firm in Manchester. The Manchester Ship Canal was built between 1888 and 1894, in some sections by canalisation of the Rivers Irwell and Mersey, running from Salford to Eastham Locks on the tidal Mersey. This enabled oceangoing ships to sail right into the Port of Manchester. On the canal's banks, just outside the borough, the world's first industrial estate was created at Trafford Park. Large quantities of machinery, including cotton processing plant, were exported around the world. A centre of capitalism, Manchester was once the scene of bread and labour riots, as well as calls for greater political recognition by the city's working and non-titled classes. One such gathering ended with the Peterloo massacre of 16 August 1819. The economic school of Manchester Capitalism developed there, and Manchester was the centre of the Anti-Corn Law League from 1838 onward. Manchester has a notable place in the history of Marxism and left-wing politics; being the subject of Friedrich Engels' work "The Condition of the Working Class in England in 1844"; Engels spent much of his life in and around Manchester, and when Karl Marx visited Manchester, they met at Chetham's Library. 
The economics books Marx was reading at the time can be seen in the library, as can the window seat where Marx and Engels would meet. The first Trades Union Congress was held in Manchester (at the Mechanics' Institute, David Street), from 2 to 6 June 1868. Manchester was an important cradle of the Labour Party and the Suffragette Movement. At that time, it seemed a place in which anything could happen—new industrial processes, new ways of thinking (the Manchester School, promoting free trade and "laissez-faire"), new classes or groups in society, new religious sects, and new forms of labour organisation. It attracted educated visitors from all parts of Britain and Europe. A saying capturing this sense of innovation survives today: "What Manchester does today, the rest of the world does tomorrow." Manchester's golden age was perhaps the last quarter of the 19th century. Many of the great public buildings (including Manchester Town Hall) date from then. The city's cosmopolitan atmosphere contributed to a vibrant culture, which included the Hallé Orchestra. In 1889, when county councils were created in England, the municipal borough became a county borough with even greater autonomy. Although the Industrial Revolution brought wealth to the city, it also brought poverty and squalor to a large part of the population. Historian Simon Schama noted that "Manchester was the very best and the very worst taken to terrifying extremes, a new kind of city in the world; the chimneys of industrial suburbs greeting you with columns of smoke". An American visitor taken to Manchester's blackspots saw "wretched, defrauded, oppressed, crushed human nature, lying and bleeding fragments". The number of cotton mills in Manchester itself reached a peak of 108 in 1853. Thereafter the number began to decline and Manchester was surpassed as the largest centre of cotton spinning by Bolton in the 1850s and Oldham in the 1860s. However, this period of decline coincided with the rise of the city as the financial centre of the region. Manchester continued to process cotton, and in 1913, 65% of the world's cotton was processed in the area. The First World War interrupted access to the export markets. Cotton processing in other parts of the world increased, often on machines produced in Manchester. Manchester suffered greatly from the Great Depression and the underlying structural changes that began to supplant the old industries, including textile manufacture. Like most of the UK, the Manchester area was mobilised extensively during the Second World War. For example, casting and machining expertise at Beyer, Peacock and Company's locomotive works in Gorton was switched to bomb making; Dunlop's rubber works in Chorlton-on-Medlock made barrage balloons; and just outside the city in Trafford Park, engineers Metropolitan-Vickers made Avro Manchester and Avro Lancaster bombers and Ford built the Rolls-Royce Merlin engines to power them. Manchester was thus the target of bombing by the Luftwaffe, and by late 1940 air raids were taking place against non-military targets. The biggest took place during the "Christmas Blitz" on the nights of 22/23 and 24 December 1940, when an estimated of high explosives plus over 37,000 incendiary bombs were dropped. A large part of the historic city centre was destroyed, including 165 warehouses, 200 business premises, and 150 offices. 376 were killed and 30,000 houses were damaged. Manchester Cathedral was among the buildings seriously damaged; its restoration took 20 years. 
Cotton processing and trading continued to fall in peacetime, and the exchange closed in 1968. By 1963 the port of Manchester was the UK's third largest, and employed over 3,000 men, but the canal was unable to handle the increasingly large container ships. Traffic declined, and the port closed in 1982. Heavy industry suffered a downturn from the 1960s and was greatly reduced under the economic policies followed by Margaret Thatcher's government after 1979. Manchester lost 150,000 jobs in manufacturing between 1961 and 1983. Regeneration began in the late 1980s, with initiatives such as the Metrolink, the Bridgewater Concert Hall, the Manchester Arena, and (in Salford) the rebranding of the port as Salford Quays. Two bids to host the Olympic Games were part of a process to raise the international profile of the city. Manchester has a history of attacks attributed to Irish Republicans, including the Manchester Martyrs of 1867, arson in 1920, a series of explosions in 1939, and two bombs in 1992. On Saturday 15 June 1996, the Provisional Irish Republican Army (IRA) carried out the 1996 Manchester bombing, the detonation of a large bomb next to a department store in the city centre. The largest bomb to be detonated on British soil, it injured over 200 people, heavily damaged nearby buildings, and broke windows a considerable distance away. The cost of the immediate damage was initially estimated at £50 million, but this was quickly revised upwards. The final insurance payout was over £400 million; many affected businesses never recovered from the loss of trade. Spurred by the investment after the 1996 bomb and aided by the XVII Commonwealth Games, the city centre has undergone extensive regeneration. New and renovated complexes such as The Printworks and the Corn Exchange have become popular shopping, eating and entertainment areas. Manchester Arndale is the UK's largest city-centre shopping centre. Large city sections from the 1960s have been demolished, re-developed or modernised with the use of glass and steel. Old mills have been converted into apartments. Hulme has undergone extensive regeneration, with million-pound loft-house apartments being developed. The 47-storey Beetham Tower was the tallest UK building outside London and the highest residential accommodation in Europe when completed in 2006. It was surpassed in 2018 by the South Tower of the Deansgate Square project, also in Manchester. In January 2007, the independent Casino Advisory Panel licensed Manchester to build the UK's only supercasino, but plans were abandoned in February 2008. On 22 May 2017, an Islamic terrorist carried out a bombing at an Ariana Grande concert in the Manchester Arena. The bomb killed 23 people, including the attacker, and injured over 800. It was the deadliest terrorist attack and first suicide bombing in Britain since the 7 July 2005 London bombings. It caused worldwide condemnation and led to the UK's terrorism threat level being raised to "critical" for the first time since 2007. Since around the turn of the 21st century, Manchester has been regarded as one of the candidates for the unofficial title of second city of the United Kingdom alongside Birmingham by sections of the international press, British public, and government ministers. The BBC reports that redevelopment of recent years has heightened claims that Manchester is the second city of the UK. Manchester and Birmingham traditionally compete as front runners for this unofficial title. The City of Manchester is governed by Manchester City Council. 
The Greater Manchester Combined Authority, with a directly elected mayor, has responsibilities for economic strategy and transport, amongst other areas, on a Greater Manchester-wide basis. Manchester has been a member of the English Core Cities Group since its inception in 1995. The town of Manchester was granted a charter by Thomas Grelley in 1301, but lost its borough status in a court case of 1359. Until the 19th century local government was largely in the hands of manorial courts, the last of which was dissolved in 1846. From a very early time, the township of Manchester lay within the historic or ceremonial county boundaries of Lancashire. Pevsner wrote "That [neighbouring] Stretford and Salford are not administratively one with Manchester is one of the most curious anomalies of England". A stroke of a Norman baron's pen is said to have divorced Manchester and Salford, though it was not Salford that became separated from Manchester; rather, it was Manchester, with its humbler line of lords, that was separated from Salford. It was this separation that resulted in Salford becoming the judicial seat of Salfordshire, which included the ancient parish of Manchester. Manchester later formed its own Poor Law Union using the name "Manchester". In 1792, Commissioners—usually known as "Police Commissioners"—were established for the social improvement of Manchester. Manchester regained its borough status in 1838, and comprised the townships of Beswick, Cheetham Hill, Chorlton upon Medlock and Hulme. By 1846, with increasing population and greater industrialisation, the Borough Council had taken over the powers of the "Police Commissioners". In 1853, Manchester was granted "city status" in the United Kingdom. In 1885, Bradford, Harpurhey, Rusholme and parts of Moss Side and Withington townships became part of the City of Manchester. In 1889, the city became a county borough, as did many larger Lancashire towns, and was therefore not governed by Lancashire County Council. Between 1890 and 1933, more areas that had been administered by Lancashire County Council were added to the city, including former villages such as Burnage, Chorlton-cum-Hardy, Didsbury, Fallowfield, Levenshulme, Longsight, and Withington. In 1931, the Cheshire civil parishes of Baguley, Northenden and Northen Etchells from the south of the River Mersey were added. In 1974, by way of the Local Government Act 1972, the City of Manchester became a metropolitan district of the metropolitan county of Greater Manchester. That year, Ringway, the village where Manchester Airport is located, was added to the city. In November 2014, it was announced that Greater Manchester would receive a new directly elected Mayor. The Mayor would have fiscal control over health, transport, housing and police in the area. Andy Burnham was elected as the first Mayor of Greater Manchester in 2017. At , northwest of London, Manchester lies in a bowl-shaped land area bordered to the north and east by the Pennines, an upland chain that runs the length of northern England, and to the south by the Cheshire Plain. Manchester is north-east of Liverpool and north-west of Sheffield, making the city the halfway point between the two. The city centre is on the east bank of the River Irwell, near its confluences with the Rivers Medlock and Irk, and is relatively low-lying, being between above sea level. The River Mersey flows through the south of Manchester. 
Much of the inner city, especially in the south, is flat, offering extensive views from many highrise buildings in the city of the foothills and moors of the Pennines, which can often be capped with snow in the winter months. Manchester's geographic features were highly influential in its early development as the world's first industrial city. These features are its climate, its proximity to a seaport at Liverpool, the availability of water power from its rivers, and its nearby coal reserves. The name Manchester, though officially applied only to the metropolitan district within Greater Manchester, has been applied to other, wider divisions of land, particularly across much of the Greater Manchester county and urban area. The "Manchester City Zone", "Manchester post town" and the "Manchester Congestion Charge" are all examples of this. For purposes of the Office for National Statistics, Manchester forms the most populous settlement within the Greater Manchester Urban Area, the United Kingdom's third-largest conurbation. There is a mix of high-density urban and suburban locations. The largest open space in the city, at around , is Heaton Park. Manchester is contiguous on all sides with several large settlements, except for a small section along its southern boundary with Cheshire. The M60 and M56 motorways pass through Northenden and Wythenshawe respectively in the south of Manchester. Heavy rail lines enter the city from all directions, the principal destination being Manchester Piccadilly station. Manchester experiences a temperate oceanic climate (Köppen: "Cfb"), like much of the British Isles, with warm summers and cool winters. Summer daytime temperatures regularly top 20 Celsius, quite often reaching 25 Celsius on sunny days during July and August in particular. In more recent years, temperatures have occasionally reached over 30 Celsius. There is regular but generally light precipitation throughout the year. The city's average annual rainfall is compared to a UK average of , and its mean rain days are 140.4 per annum, compared to the UK average of 154.4. Manchester has a relatively high humidity level, and this, along with abundant soft water, was one factor that led to advancement of the textile industry in the area. Snowfalls are not common in the city because of the urban warming effect but the West Pennine Moors to the north-west, South Pennines to the north-east and Peak District to the east receive more snow, which can close roads leading out of the city. They include the A62 via Oldham and Standedge, the A57, Snake Pass, towards Sheffield, and the Pennine section of the M62. The lowest temperature ever recorded in Manchester was on 7 January 2010. Manchester lies at the centre of a green belt region extending into the wider surrounding counties. This reduces urban sprawl, prevents towns in the conurbation from further convergence, protects the identity of outlying communities, and preserves nearby countryside. It is achieved by restricting inappropriate development within the designated areas and imposing stricter conditions on permitted building. Due to being already highly urban, the city contains limited portions of protected green-belt area within greenfield throughout the borough, with minimal development opportunities, at Clayton Vale, Heaton Park, Chorlton Water Park along with the Chorlton Ees & Ivy Green nature reserve and the floodplain surrounding the River Mersey, as well as the southern area around Manchester Airport. The green belt was first drawn up in 1961. 
Historically, the population of Manchester began to increase rapidly during the Victorian era, estimated at 354,930 for Manchester and 110,833 for Salford in 1865, and peaking at 766,311 in 1931. From then the population began to decrease rapidly, due to slum clearance and the increased building of social housing overspill estates, such as Hattersley and Langley, by Manchester City Council after the Second World War. The 2012 mid-year estimate for the population of Manchester was 510,700. This was an increase of 7,900, or 1.6 per cent, since the 2011 estimate. Since 2001, the population has grown by 87,900, or 20.8 per cent, making Manchester the third fastest-growing area in the 2011 census. The city experienced the greatest percentage population growth outside London, with an increase of 19 per cent to over 500,000. Manchester's population is projected to reach 532,200 by 2021, an increase of 5.8 per cent from 2011. This represents a slower rate of growth than the previous decade. The Greater Manchester Built-up Area in 2011 had an estimated population of 2,553,400. In 2012 an estimated 2,702,200 people lived in Greater Manchester. An estimated 6,547,000 people were thought in 2012 to live within of Manchester and 11,694,000 within . Between the beginning of July 2011 and the end of June 2012 (mid-year estimate date), births exceeded deaths by 4,800. Migration (internal and international) and other changes accounted for a net increase of 3,100 people between July 2011 and June 2012. Compared with Greater Manchester and with England, Manchester has a younger population, with a particularly large 20–35 age group. There were 76,095 undergraduate and postgraduate students at Manchester Metropolitan University, the University of Manchester and the Royal Northern College of Music in the 2011/2012 academic year. Since the 2001 census, the proportion of Christians in Manchester has fallen by 22 per cent from 62.4 per cent to 48.7 per cent. The proportion of those with no religious affiliation rose by 58.1 per cent from 16 per cent to 25.3 per cent, whilst the proportion of Muslims increased by 73.6 per cent from 9.1 per cent to 15.8 per cent. The Jewish population of Greater Manchester is the largest in Britain outside London. Of all households in Manchester, 0.23 per cent were same-sex civil partnership households, compared with an English national average of 0.16 per cent in 2011. In terms of ethnic composition, the City of Manchester has the highest non-white proportion of any district in Greater Manchester. Statistics from the 2011 census showed that 66.7 per cent of the population was White (59.3 per cent White British, 2.4 per cent White Irish, 0.1 per cent Gypsy or Irish Traveller, 4.9 per cent Other White – although the size of mixed European and British ethnic groups is unclear, there are reportedly over 25,000 Mancunians of at least partial Italian descent alone, which represents 5.5 per cent of its population). 4.7 per cent were mixed race (1.8 per cent White and Black Caribbean, 0.9 per cent White and Black African, 1.0 per cent White and Asian, 1.0 per cent other mixed), 17.1 per cent Asian (2.3 per cent Indian, 8.5 per cent Pakistani, 1.3 per cent Bangladeshi, 2.7 per cent Chinese, 2.3 per cent other Asian), 8.6 per cent Black (5.1 per cent African, 1.6 per cent other Black), 1.9 per cent Arab and 1.2 per cent of other ethnic heritage. 
Kidd identifies Moss Side, Longsight, Cheetham Hill, Rusholme, as centres of population for ethnic minorities. Manchester's Irish Festival, including a St Patrick's Day parade, is one of Europe's largest. There is also a well-established Chinatown in the city with a substantial number of oriental restaurants and Chinese supermarkets. The area also attracts large numbers of Chinese students to the city who, in attending the local universities, contribute to Manchester having the third-largest Chinese population in Europe. The Manchester Larger Urban Zone, a Eurostat measure of the functional city-region approximated to local government districts, had a population of 2,539,100 in 2004. In addition to Manchester itself, the LUZ includes the remainder of the county of Greater Manchester. The Manchester LUZ is the second largest within the United Kingdom, behind that of London. The Office for National Statistics does not produce economic data for the City of Manchester alone, but includes four other metropolitan boroughs, Salford, Stockport, Tameside, Trafford, in an area named Greater Manchester South, which had a GVA of £34.8 billion. The economy grew relatively strongly between 2002 and 2012, when growth was 2.3 per cent above the national average. The wider metropolitan economy is the second largest in the United Kingdom. It is ranked as a beta world city by the Globalization and World Cities Research Network. As the UK economy continues to recover from its 2008–2010 downturn, Manchester compares favourably according to recent figures. In 2012 it showed the strongest annual growth in business stock (5 per cent) of all core cities. The city had a relatively sharp increase in the number of business deaths, the largest increase in all the core cities, but this was offset by strong growth in new businesses, resulting in strong net growth. Manchester's civic leadership has a reputation for business acumen. It owns two of the country's four busiest airports and uses its earnings to fund local projects. Meanwhile, KPMG's competitive alternative report found that in 2012 Manchester had the 9th lowest tax cost of any industrialised city in the world, and fiscal devolution has come earlier to Manchester than to any other British city: it can keep half the extra taxes it gets from transport investment. KPMG's competitive alternative report also found that Manchester was Europe's most affordable city featured, ranking slightly better than the Dutch cities of Rotterdam and Amsterdam, which all have a cost-of-living index of less than 95. Manchester is a city of contrast, where some of the country's most deprived and most affluent neighbourhoods can be found. According to 2010 Indices of Multiple Deprivation, Manchester is the 4th most deprived local council in England. Unemployment throughout 2012–2013 averaged 11.9 per cent, which was above national average, but lower than some of the country's comparable large cities. On the other hand, Greater Manchester is home to more multi-millionaires than anywhere outside London, with the City of Manchester taking up most of the tally. In 2013 Manchester was ranked 6th in the UK for quality of life, according to a rating of the UK's 12 largest cities. Women fare better in Manchester than the rest of the country in comparative pay with men. The per hours-worked gender pay gap is 3.3 per cent compared with 11.1 per cent for Britain. 
37 per cent of the working-age population in Manchester have degree-level qualifications, as opposed to an average of 33 per cent across other core cities, although its schools under-perform slightly compared with the national average. Manchester has the largest UK office market outside London, according to GVA Grimley, with a quarterly office uptake (averaged over 2010–2014) of some 250,000 square feet – equivalent to the quarterly office uptake of Leeds, Liverpool and Newcastle combined and 90,000 square feet more than the nearest rival, Birmingham. The strong office market in Manchester has been partly attributed to "northshoring" (from "offshoring"), which entails the relocation or alternative creation of jobs away from the overheated South to areas where office space is possibly cheaper and the workforce market less saturated. According to 2019 property investment research, Manchester is rated as the No. 2 location for "Best Places To Invest in Property in the UK". This was attributed to a 5.6 per cent increase in house prices and local investment in infrastructure and in Manchester Airport. Manchester's buildings display a variety of architectural styles, ranging from Victorian to contemporary architecture. The widespread use of red brick characterises the city, much of the architecture of which harks back to its days as a global centre for the cotton trade. Just outside the immediate city centre are a large number of former cotton mills, some of which have been left virtually untouched since their closure, while many have been redeveloped as apartment buildings and office space. Manchester Town Hall, in Albert Square, was built in the Gothic revival style and is seen as one of the most important Victorian buildings in England. Manchester also has a number of skyscrapers built in the 1960s and 1970s, the tallest being the CIS Tower near Manchester Victoria station until the Beetham Tower was completed in 2006. The latter exemplifies a new surge in high-rise building. It includes a Hilton hotel, a restaurant and apartments. The largest skyscraper is now Deansgate Square South Tower, at 201 metres (659 feet). The Green Building, opposite Oxford Road station, is a pioneering eco-friendly housing project, while the recently completed One Angel Square is one of the most sustainable large buildings in the world. The award-winning Heaton Park in the north of the city borough is one of the largest municipal parks in Europe, covering of parkland. The city has 135 parks, gardens, and open spaces. Two large squares hold many of Manchester's public monuments. Albert Square has monuments to Prince Albert, Bishop James Fraser, Oliver Heywood, William Ewart Gladstone and John Bright. Piccadilly Gardens has monuments dedicated to Queen Victoria, Robert Peel, James Watt and the Duke of Wellington. The cenotaph in St Peter's Square is Manchester's main memorial to its war dead. Designed by Edwin Lutyens, it echoes the original on Whitehall in London. The Alan Turing Memorial in Sackville Park commemorates his role as the father of modern computing. A larger-than-life statue of Abraham Lincoln by George Gray Barnard in the eponymous Lincoln Square (having stood for many years in Platt Fields) was presented to the city by Mr and Mrs Charles Phelps Taft of Cincinnati, Ohio, to mark the part Lancashire played in the cotton famine and American Civil War of 1861–1865. A Concorde is on display near Manchester Airport. 
Manchester has six designated local nature reserves: Chorlton Water Park, Blackley Forest, Clayton Vale and Chorlton Ees, Ivy Green, Boggart Hole Clough and Highfield Country Park. Manchester Liverpool Road was the world's first purpose-built passenger and goods railway station and served as the Manchester terminus on the Liverpool and Manchester Railway – the world's first inter-city passenger railway. It is still extant and its buildings form part of the Museum of Science and Industry. Two of the city's four main line termini did not survive the 1960s: Manchester Central and Manchester Exchange each closed in 1969. In addition, Manchester Mayfield station closed to passenger services in 1960; its buildings and platforms are still extant, next to Piccadilly station, but are due to be redeveloped in the 2020s. Today, the city is well served by its rail network although it is now working to capacity, and is at the centre of an extensive county-wide railway network, including the West Coast Main Line, with two mainline stations: Manchester Piccadilly and Manchester Victoria. The Manchester station group – comprising Manchester Piccadilly, Manchester Victoria, Manchester Oxford Road and Deansgate – is the third busiest in the United Kingdom, with 44.9 million passengers recorded in 2017/2018. The High Speed 2 link to Birmingham and London is also planned, which if built will include a tunnel under Manchester on the final approach into an upgraded Piccadilly station. Recent improvements in Manchester as part of the Northern Hub in the 2010s have been numerous electrification schemes into and through Manchester, redevelopment of Victoria station and construction of the Ordsall Chord directly linking Victoria and Piccadilly. Work on two new through platforms at Piccadilly and an extensive upgrade at Oxford Road had not commenced as of 2019. Manchester city centre suffers from constrained rail capacity that frequently leads to delays and cancellations – a 2018 report found that all three major Manchester stations are among the top ten worst stations in the United Kingdom for punctuality, with Oxford Road deemed the worst in the country. Manchester became the first city in the UK to acquire a modern light rail tram system when the Manchester Metrolink opened in 1992. In 2016–2017, 37.8 million passenger journeys were made on the system. The present system mostly runs on former commuter rail lines converted for light rail use, and crosses the city centre via on-street tram lines. The network consists of eight lines with 99 stops. A new line to the Trafford Centre opened in 2020. Manchester city centre is also serviced by over a dozen heavy and light rail-based park and ride sites. The city has one of the most extensive bus networks outside London, with over 50 bus companies operating in the Greater Manchester region radiating from the city. In 2011, 80 per cent of public transport journeys in Greater Manchester were made by bus, amounting to 220 million passenger journeys each year. After deregulation in 1986, the bus system was taken over by GM Buses, which after privatisation was split into GM Buses North and GM Buses South. Later these were taken over by First Greater Manchester and Stagecoach Manchester. Much of the First Greater Manchester business was sold to Diamond Bus North West and Go North West in 2019. Go North West operate a three-route zero-fare bus service, called "Metroshuttle", which carries 2.8 million commuters a year around Manchester's business districts. 
Stagecoach Manchester is the Stagecoach Group's largest subsidiary and operates around 690 buses. Manchester Airport serves Manchester, Northern England and North Wales. The airport is the third busiest in the United Kingdom and the largest outside the London region. Services cover many destinations in Europe, North America, the Caribbean, Africa, the Middle East, and Asia (with more destinations from Manchester than any other airport in Britain). A second runway was opened in 2001 and there have been continued terminal improvements. The airport has the highest rating available, "Category 10", encompassing an elite group of airports able to handle "Code F" aircraft, including the Airbus A380 and Boeing 747-8. From September 2010 the airport became one of only 17 airports in the world, and the only UK airport other than Heathrow Airport and Gatwick Airport, to operate the Airbus A380. A smaller airfield, City Airport Manchester, exists to the west of Manchester city centre. It was Manchester's first municipal airport and became the site of the first air traffic control tower in the UK, and the first municipal airfield in the UK to be licensed by the Air Ministry. Today, private charter flights and general aviation use City Airport. It also has a flight school, and both the Greater Manchester Police Air Support Unit and the North West Air Ambulance have helicopters based there. An extensive canal network, including the Manchester Ship Canal, was built to carry freight from the Industrial Revolution onward; the canals are still maintained, though now largely repurposed for leisure use. In 2012, plans were approved to introduce a water taxi service between Manchester city centre and MediaCityUK at Salford Quays. The service ceased operating in June 2018, with the operator citing poor infrastructure. Bands that have emerged from the Manchester music scene include Van der Graaf Generator, Oasis, The Smiths, Joy Division and its successor group New Order, Buzzcocks, The Stone Roses, The Fall, The Durutti Column, 10cc, Godley & Creme, The Verve, Elbow, Doves, The Charlatans, M People, The 1975, Simply Red, Take That, Dutch Uncles, Everything Everything, Pale Waves and The Outfield. Manchester was credited as the main driving force behind British indie music of the 1980s, led by The Smiths and later including The Stone Roses, Happy Mondays, Inspiral Carpets, and James. The later groups came from what became known as the "Madchester" scene, which also centred on The Haçienda nightclub developed by the founder of Factory Records, Tony Wilson. Although from southern England, The Chemical Brothers subsequently formed in Manchester. Former Smiths frontman Morrissey, whose lyrics often refer to Manchester locations and culture, later found international success as a solo artist. Previously, notable Manchester acts of the 1960s included The Hollies, Herman's Hermits, and Davy Jones of the Monkees (famed in the mid-1960s for their albums and their American TV show), and the earlier Bee Gees, who grew up in Chorlton. Another notable contemporary band from near Manchester is The Courteeners, consisting of Liam Fray and four close friends. Singer-songwriter Ren Harvieu is also from Greater Manchester. Manchester has two symphony orchestras, the Hallé and the BBC Philharmonic, and a chamber orchestra, the Manchester Camerata. In the 1950s, the city was home to a so-called "Manchester School" of classical composers, comprising Harrison Birtwistle, Peter Maxwell Davies, David Ellis and Alexander Goehr. 
Manchester is a centre for musical education, home to the Royal Northern College of Music and Chetham's School of Music. Forerunners of the RNCM were the Northern School of Music (founded 1920) and the Royal Manchester College of Music (founded 1893), which merged in 1973. One of the earliest instructors and classical music pianists/conductors at the Royal Manchester College of Music, shortly after its founding, was the Russian-born Arthur Friedheim (1859–1932), after whom the music library at the famed Peabody Institute conservatory of music in Baltimore, Maryland, was later named. The main classical music venue was the Free Trade Hall on Peter Street until the opening in 1996 of the 2,500-seat Bridgewater Hall. Brass band music, a tradition in the north of England, is important to Manchester's musical heritage; some of the UK's leading bands, such as the CWS Manchester Band and the Fairey Band, are from Manchester and surrounding areas, and the Whit Friday brass-band contest takes place annually in the neighbouring areas of Saddleworth and Tameside. Manchester has a thriving theatre, opera and dance scene, with a number of large performance venues, including the Manchester Opera House and the Palace Theatre, which feature large-scale touring shows and West End productions, and the Royal Exchange Theatre, in Manchester's former cotton exchange, the largest theatre in the round in the UK. Smaller venues include the Contact Theatre and Z-arts in Hulme. The Dancehouse on Oxford Road is dedicated to dance productions. In 2014, HOME, a new custom-built arts complex, opened. Housing two theatre spaces, five cinemas and an art exhibition space, it replaced the Cornerhouse and The Library Theatre. Since 2007 the city has hosted the Manchester International Festival, a biennial international arts festival with a focus on original work, which has included major new commissions by artists including Björk. A government statement in 2014 announced a £78 million grant for a new "large-scale, ultra-flexible arts space" for the city. Later the council stated it had secured a further £32 million. The £110 million venue was confirmed in July 2016. The theatre, to be called The Factory, after Manchester's Factory Records, will provide a permanent home for the Manchester International Festival. It is due to open at the end of 2019. Manchester's museums celebrate Manchester's Roman history, rich industrial heritage and its role in the Industrial Revolution, the textile industry, the Trade Union movement, women's suffrage and football. A reconstructed part of the Roman fort of Mamucium is open to the public in Castlefield. The Science and Industry Museum, housed in the former Liverpool Road railway station, has a large collection of steam locomotives, industrial machinery, aircraft and a replica of the world's first stored-program computer (known as the Manchester Baby). The Museum of Transport displays a collection of historic buses and trams. Trafford Park in the neighbouring borough of Trafford is home to Imperial War Museum North. The Manchester Museum, opened to the public in the 1880s, has notable Egyptology and natural history collections. The municipally owned Manchester Art Gallery in Mosley Street houses a permanent collection of European painting and one of Britain's main collections of Pre-Raphaelite paintings. In the south of the city, the Whitworth Art Gallery displays modern art, sculpture and textiles and was voted Museum of the Year in 2015. 
Other exhibition spaces and museums in Manchester include Islington Mill in Salford, the National Football Museum at Urbis, Castlefield Gallery, the Manchester Costume Gallery at Platt Fields Park, the People's History Museum and the Manchester Jewish Museum. The work of the Stretford-born painter L. S. Lowry, known for "matchstick" paintings of industrial Manchester and Salford, can be seen in Manchester's city and Whitworth galleries, and at the Lowry arts centre in Salford Quays (in the neighbouring borough of Salford), which devotes a large permanent exhibition to his works. Manchester is a UNESCO City of Literature, known for a "radical literary history". Manchester in the 19th century featured in works highlighting the changes that industrialisation had brought. They include Elizabeth Gaskell's novel "Mary Barton: A Tale of Manchester Life" (1848), and studies such as "The Condition of the Working Class in England in 1844", written by Friedrich Engels while he was living and working in the city. Manchester was the meeting place of Engels and Karl Marx. The two began writing "The Communist Manifesto" in Chetham's Library – founded in 1653 and claiming to be the oldest public library in the English-speaking world. Elsewhere in the city, the John Rylands Library holds an extensive collection of early printing. The Rylands Library Papyrus P52, believed to be the earliest extant New Testament text, is on permanent display there. Letitia Landon's poem "Manchester" in Fisher's Drawing Room Scrap Book, 1835, records the rapid growth of the city and its cultural importance. Charles Dickens is reputed to have set his novel "Hard Times" in the city, and though it is partly modelled on Preston, it shows the influence of his friend Mrs Gaskell. Gaskell penned all her novels but "Mary Barton" at her home at 84 Plymouth Grove. Her house often played host to influential authors: Dickens, Charlotte Brontë, Harriet Beecher Stowe and Charles Eliot Norton, for example. It is now open as a literary museum. Charlotte Brontë began writing her novel "Jane Eyre" in 1846, while staying at lodgings in Hulme. She was accompanying her father Patrick, who was convalescing in the city after cataract surgery. She probably envisioned Manchester Cathedral churchyard as the burial place for Jane's parents and the birthplace of Jane herself. Also associated with the city is the Victorian poet and novelist Isabella Banks, famed for her 1876 novel "The Manchester Man". Anglo-American author Frances Hodgson Burnett was born in the city's Cheetham Hill district in 1849, and wrote much of her classic children's novel "The Secret Garden" while visiting nearby Salford's Buile Hill Park. Anthony Burgess is among the 20th-century writers who made Manchester their home; it was here that he wrote the dystopian satire "A Clockwork Orange" in 1962. Dame Carol Ann Duffy, Poet Laureate from 2009 to 2019, moved to the city in 1996 and lives in West Didsbury. The night-time economy of Manchester has expanded significantly since about 1993, with investment from breweries in bars, public houses and clubs, along with active support from the local authorities. The more than 500 licensed premises in the city centre have a capacity to deal with more than visitors, with 110,000–130,000 people visiting on a typical weekend night, making Manchester the most popular city for events at 79 per thousand people. The night-time economy has a value of about £100 million and supports 12,000 jobs. 
The Madchester scene of the 1980s, from which groups including New Order, The Smiths, The Stone Roses, the Happy Mondays, Inspiral Carpets, 808 State, James and The Charlatans emerged, was based on clubs such as The Haçienda. The period was the subject of the film "24 Hour Party People". Many of the big clubs suffered problems with organised crime at that time; Haslam describes one where staff were so completely intimidated that free admission and drinks were demanded (and given) and drugs were openly dealt. In 1988, Manchester was often referred to as Madchester for its rave scene. The Haçienda, owned by Tony Wilson's Factory Records, was given the catalogue number FAC51 and the official club name FAC51 The Haçienda. After a series of violent drug-related incidents, the club closed in 1998. Known for developing many influential 1980s acts, it also influenced the graphic design industry via Factory artists such as Peter Saville (PSA), Octavo (8vo) and Central Design Station. Memorabilia from the club holds a high value among collectors and fans of these artists and the club. Peter Saville was most notable for his minimalism, which still influences contemporary graphic design. Public houses in the Canal Street area have had an LGBTQ+ clientele since at least 1940, and now form the centre of Manchester's LGBTQ+ community. Since the opening of new bars and clubs, the area attracts 20,000 visitors each weekend and has hosted a popular festival, Manchester Pride, each August since 2003. There are three universities in the City of Manchester: the University of Manchester, Manchester Metropolitan University and the Royal Northern College of Music. The University of Manchester is the largest full-time non-collegiate university in the United Kingdom, created in 2004 by the merger of the Victoria University of Manchester, founded in 1904, and UMIST, founded in 1956, which had developed from the Mechanics' Institute founded, as indicated in the university's logo, in 1824. The University of Manchester includes the Manchester Business School, which offered the first MBA course in the UK in 1965. Manchester Metropolitan University was formed as Manchester Polytechnic on the merger of three colleges in 1970. It gained university status in 1992, and in the same year absorbed Crewe and Alsager College of Higher Education in South Cheshire. The University of Law, the largest provider of vocational legal training in Europe, has a campus in the city. The three universities are grouped around Oxford Road on the southern side of the city centre, which forms Europe's largest urban higher-education precinct. Together they had a combined population of 76,025 students in higher education as of 2015, although almost 6,000 of them were based at Manchester Metropolitan University's campuses at Crewe and Alsager in Cheshire. One of Manchester's notable secondary schools is Manchester Grammar School. Established in 1515 as a free grammar school next to what is now the Cathedral, it moved in 1931 to Old Hall Lane in Fallowfield, south Manchester, to accommodate the growing student body. In the post-war period, it was a direct grant grammar school (i.e. partially state funded), but it reverted to independent status in 1976 after abolition of the direct-grant system. Its previous premises are now used by Chetham's School of Music. There are three schools nearby: William Hulme's Grammar School, Withington Girls' School and Manchester High School for Girls. 
In 2010, the Manchester Local Education Authority was ranked last out of Greater Manchester's ten LEAs and 147th out of 150 LEAs in the country, based on the percentage of pupils attaining at least five A*–C grades at General Certificate of Secondary Education (GCSE) including maths and English (38.6 per cent compared with the national average of 50.7 per cent). The LEA also had the highest occurrence of absences: 11.11 per cent of "half-day sessions missed by pupils", well above the national average of 5.8 per cent. Of the schools in the LEA with 30 or more pupils, four had 90 per cent or more pupils achieving at least five A*–C grades at GCSE including maths and English: Manchester High School for Girls, St Bede's College, Manchester Islamic High School for Girls, and The King David High School. Three managed 25 per cent or less: Plant Hill Arts College, North Manchester High School for Boys, and Brookway High School and Sports College. Manchester is well known as a city of sport. Two decorated Premier League football clubs bear the city name – Manchester United and Manchester City. Manchester United play their home games at Old Trafford, in the Manchester suburb of Trafford, the largest club football ground in the United Kingdom. Manchester City's home ground is the City of Manchester Stadium (also known as the Etihad Stadium for sponsorship purposes); its former ground, Maine Road, was demolished in 2003. The City of Manchester Stadium was initially built as the main athletics stadium for the 2002 Commonwealth Games and was then reconfigured into a football stadium before Manchester City's arrival. Manchester has hosted domestic, continental and international football competitions at Fallowfield Stadium, Maine Road, Old Trafford and the City of Manchester Stadium. Competitions hosted in the city include the FIFA World Cup (1966), UEFA European Football Championship (1996), Olympic Football (2012), UEFA Champions League Final (2003), UEFA Cup Final (2008), four FA Cup Finals (1893, 1911, 1915, 1970) and three League Cup Finals (1977, 1978, 1984). First-class sporting facilities were built for the 2002 Commonwealth Games, including the City of Manchester Stadium, the National Squash Centre and the Manchester Aquatics Centre. Manchester has bid twice to host the Olympic Games, beaten by Atlanta for 1996 and Sydney for 2000. The National Cycling Centre includes a velodrome, a BMX arena and mountain bike trials, and is the home of British Cycling, UCI ProTeam Team Sky and Sky Track Cycling. The Manchester Velodrome was built as a part of the bid for the 2000 games and has become a catalyst for British success in cycling. The velodrome hosted the UCI Track Cycling World Championships for a record third time in 2008. The National Indoor BMX Arena (2,000 capacity) adjacent to the velodrome opened in 2011. The Manchester Arena hosted the FINA World Swimming Championships in 2008. Manchester Cricket Club evolved into Lancashire County Cricket Club, which plays at Old Trafford Cricket Ground, as do the Manchester Originals, a city-based cricket team founded in 2019 to play in the new cricket competition The Hundred, representing Lancashire and Manchester. Manchester also hosted the World Squash Championships in 2008 and the World Lacrosse Championship in July 2010. Recent sporting events hosted by Manchester include the 2013 Ashes series, the 2013 Rugby League World Cup and the 2015 Rugby World Cup. 
The ITV franchise Granada Television is partly headquartered on the old Granada Studios site in Quay Street and partly at a new location at MediaCityUK as part of the initial phase of its migration to Salford Quays. It produces "Coronation Street", local news and programmes for North West England. Although its influence has waned, Granada had been described as "the best commercial television company in the world". Manchester was one of the BBC's three main centres in England. Programmes including "Mastermind" and "Real Story" were made at New Broadcasting House. The "Cutting It" series was set in the city's Northern Quarter, and "The Street" and "Life on Mars" were also set in Manchester. The first edition of "Top of the Pops" was broadcast from a studio in Rusholme on New Year's Day 1964. Manchester was the regional base for BBC One North West Region programmes before it relocated to MediaCityUK in nearby Salford Quays. The Manchester television channel Channel M, owned by the Guardian Media Group, operated from 2000 but closed in 2012. Manchester is also covered by two internet television channels: Quays News and Manchester.tv. The city gained a new terrestrial channel in January 2014, when YourTV Manchester, which had won the OFCOM licence bid in February 2013, began broadcasting. In 2015, That's Manchester took over, going on air on 31 May on the Freeview channel 8 slot before moving to channel 7 in April 2016. The city has the highest number of local radio stations outside London, including BBC Radio Manchester, Hits Radio Manchester, Capital Manchester, Greatest Hits Manchester, Heart North West, Smooth North West, Gold, 96.2 The Revolution, NMFM (North Manchester FM) and XS Manchester. Student radio stations include Fuse FM at the University of Manchester and MMU Radio at Manchester Metropolitan University. A community radio network is coordinated by Radio Regen, with stations covering Ardwick, Longsight and Levenshulme (All FM 96.9) and Wythenshawe (Wythenshawe FM 97.2). Defunct radio stations include Sunset 102, which became Kiss 102 and then Galaxy Manchester, and KFM, which became Signal Cheshire (now Imagine FM). These stations and pirate radio played a significant role in the city's house music culture and the Madchester scene. "The Guardian" newspaper was founded in 1821 as "The Manchester Guardian". Its head office is still in the city, though many of its management functions were moved to London in 1964. Its sister publication, the "Manchester Evening News", has the largest circulation of any UK regional evening newspaper. The paper is free in the city centre on Thursdays and Fridays, but paid for in the suburbs. Despite its title, it is available all day. The MEN group distributes several local weekly free papers. For many years most national newspapers had offices in Manchester: "The Daily Telegraph", "Daily Express", "Daily Mail", "Daily Mirror", "The Sun". At its height, 1,500 journalists were employed, though in the 1980s office closures began and today the "second Fleet Street" is no more. An attempt to launch a northern daily newspaper, the "North West Times", which employed journalists made redundant by other titles, ended when the paper closed in 1988. 
Another attempt was made with the "North West Enquirer", which hoped to provide a true "regional" newspaper for the North West, much in the same vein as the "Yorkshire Post" does for Yorkshire or "The Northern Echo" does for the North East; it folded in October 2006. Manchester has formal twinning arrangements (or "friendship agreements") with several places. In addition, the British Council maintains a metropolitan centre in Manchester. Manchester is home to the largest group of consuls in the UK outside London. The expansion of international trade links during the Industrial Revolution led to the introduction of the first consuls in the 1820s and since then over 800, from all parts of the world, have been based in Manchester. Manchester hosts consular services for most of the north of England.
Margaret Murray Margaret Alice Murray (13 July 1863 – 13 November 1963) was an Anglo-Indian Egyptologist, archaeologist, anthropologist, historian, and folklorist. The first woman to be appointed as a lecturer in archaeology in the United Kingdom, she worked at University College London (UCL) from 1898 to 1935. She served as President of the Folklore Society from 1953 to 1955, and published widely over the course of her career. Born to a wealthy middle-class English family in Calcutta, British India, Murray divided her youth between India, Britain, and Germany, training as both a nurse and a social worker. Moving to London, in 1894 she began studying Egyptology at UCL, developing a friendship with department head Flinders Petrie, who encouraged her early academic publications and appointed her Junior Professor in 1898. In 1902–03 she took part in Petrie's excavations at Abydos, Egypt, there discovering the Osireion temple and the following season investigated the Saqqara cemetery, both of which established her reputation in Egyptology. Supplementing her UCL wage by giving public classes and lectures at the British Museum and Manchester Museum, it was at the latter in 1908 that she led the unwrapping of Khnum-nakht, one of the mummies recovered from the Tomb of the Two Brothers – the first time that a woman had publicly unwrapped a mummy. Recognising that British Egyptomania reflected the existence of a widespread public interest in Ancient Egypt, Murray wrote several books on Egyptology targeted at a general audience. Murray also became closely involved in the first-wave feminist movement, joining the Women's Social and Political Union and devoting much time to improving women's status at UCL. Unable to return to Egypt due to the First World War, she focused her research on the witch-cult hypothesis, the theory that the witch trials of Early Modern Christendom were an attempt to extinguish a surviving pre-Christian, pagan religion devoted to a Horned God. Although later academically discredited, the theory gained widespread attention and proved a significant influence on the emerging new religious movement of Wicca. From 1921 to 1931 Murray undertook excavations of prehistoric sites on Malta and Menorca and developed her interest in folkloristics. Awarded an honorary doctorate in 1927, she was appointed Assistant Professor in 1928 and retired from UCL in 1935. That year she visited Palestine to aid Petrie's excavation of Tall al-Ajjul and in 1937 she led a small excavation at Petra in Jordan. Taking on the presidency of the Folklore Society in later life, she lectured at such institutions as the University of Cambridge and City Literary Institute, and continued to publish in an independent capacity until her death. Murray's work in Egyptology and archaeology was widely acclaimed and earned her the moniker of "The Grand Old Woman of Egyptology", although after her death many of her contributions to the field were overshadowed by those of Petrie. Conversely, Murray's work in folkloristics and the history of witchcraft has been academically discredited and her methods in these areas heavily criticised. The influence of her witch-cult theory in both religion and literature has been examined by various scholars, and she herself has been dubbed the "Grandmother of Wicca". Margaret Murray was born on 13 July 1863 in Calcutta, Bengal Presidency, then a major military city in British India. 
A member of the wealthy British imperial elite, she lived in the city with her family: parents James and Margaret Murray, an older sister named Mary, and her paternal grandmother and great-grandmother. James Murray, born in India of English descent, was a businessman and manager of the Serampore paper mills who was thrice elected President of the Calcutta Chamber of Commerce. His wife, Margaret (née Carr), had moved to India from Britain in 1857 to work as a missionary, preaching Christianity and educating Indian women. She continued with this work after marrying James and giving birth to her two daughters. Although most of their lives were spent in the European area of Calcutta, which was walled off from the indigenous sectors of the city, Murray encountered members of indigenous society through her family's employment of 10 Indian servants and through childhood holidays to Mussoorie. The historian Amara Thornton has suggested that Murray's Indian childhood continued to exert an influence over her throughout her life, expressing the view that Murray could be seen as having a hybrid transnational identity that was both British and Indian. During her childhood, Murray never received a formal education, and in later life expressed pride in the fact that she had never had to sit an exam before entering university. In 1870, Margaret and her sister Mary were sent to Britain, there moving in with their uncle John, a vicar, and his wife Harriet at their home in Lambourn, Berkshire. Although John provided them with a strongly Christian education and a belief in the inferiority of women, both of which she would reject, he awakened Murray's interest in archaeology through taking her to see local monuments. In 1873, the girls' mother arrived in Europe and took them with her to Bonn in Germany, where they both became fluent in German. In 1875 they returned to Calcutta, staying there till 1877. They then moved with their parents back to England, where they settled in Sydenham, South London. There, they spent much time visiting The Crystal Palace, while their father worked at his firm's London office. In 1880, they returned to Calcutta, where Margaret remained for the next seven years. She became a nurse at the Calcutta General Hospital, which was run by the Sisters of the Anglican Sisterhood of Clower, and there was involved with the hospital's attempts to deal with a cholera outbreak. In 1887, she returned to England, moving to Rugby, Warwickshire, where her uncle John had moved, now widowed. Here she took up employment as a social worker dealing with local underprivileged people. When her father retired and moved to England, she moved into his house in Bushey Heath, Hertfordshire, living with him until his death in 1891. In 1893 she then travelled to Madras, Tamil Nadu, where her sister had moved to with her new husband. Encouraged by her mother and sister, Murray decided to enroll at the newly opened department of Egyptology at University College London (UCL) in Bloomsbury, Central London. Having been founded by an endowment from Amelia Blanford Edwards, one of the co-founders of the Egypt Exploration Fund (EEF), the department was run by the pioneering early archaeologist Sir William Flinders Petrie, and based in the Edwards Library of UCL's South Cloisters. Murray began her studies at UCL at age 30 in January 1894, as part of a class composed largely of other women and older men. 
There, she took courses in the Ancient Egyptian and Coptic languages which were taught by Francis Llewellyn Griffith and Walter Ewing Crum respectively. Murray soon got to know Petrie, becoming his copyist and illustrator and producing the drawings for the published report on his excavations at Qift, "Koptos". In turn, he aided and encouraged her to write her first research paper, "The Descent of Property in the Early Periods of Egyptian History", which was published in the "Proceedings of the Society for Biblical Archaeology" in 1895. Becoming Petrie's "de facto" though unofficial assistant, Murray began to give some of the linguistic lessons in Griffith's absence. In 1898 she was appointed to the position of Junior Lecturer, responsible for teaching the linguistic courses at the Egyptology department; this made her the first female lecturer in archaeology in the United Kingdom. In this capacity, she spent two days a week at UCL, devoting the other days to caring for her ailing mother. As time went on, she came to teach courses on Ancient Egyptian history, religion, and language. Among Murray's students – to whom she referred as "the Gang" – were several who went on to produce noted contributions to Egyptology, including Reginald Engelbach, Georgina Aitken, Guy Brunton, and Myrtle Broome. She supplemented her UCL salary by teaching evening classes in Egyptology at the British Museum. At this point, Murray had no experience in field archaeology, and so during the 1902–03 field season, she travelled to Egypt to join Petrie's excavations at Abydos. Petrie and his wife, Hilda Petrie, had been excavating at the site since 1899, having taken over the archaeological investigation from French Coptic scholar Émile Amélineau. Murray at first joined as site nurse, but was subsequently taught how to excavate by Petrie and given a senior position. This caused friction with some of the male excavators, who disliked the idea of taking orders from a woman. This experience, coupled with discussions with other female excavators (some of whom were active in the feminist movement), led Murray to adopt openly feminist viewpoints. While excavating at Abydos, Murray uncovered the Osireion, a temple devoted to the god Osiris which had been constructed by order of Pharaoh Seti I during the period of the New Kingdom. She published her site report as "The Osireion at Abydos" in 1904; in the report, she examined the inscriptions that had been discovered at the site to discern the purpose and use of the building. During the 1903–04 field season, Murray returned to Egypt, and at Petrie's instruction began her investigations at the Saqqara cemetery near to Cairo, which dated from the period of the Old Kingdom. Murray did not have legal permission to excavate the site, and instead spent her time transcribing the inscriptions from ten of the tombs that had been excavated during the 1860s by Auguste Mariette. She published her findings in 1905 as "Saqqara Mastabas I", although she would not publish translations of the inscriptions until 1937 as "Saqqara Mastabas II". Both "The Osireion at Abydos" and "Saqqara Mastabas I" proved to be very influential in the Egyptological community, with Petrie recognising Murray's contribution to his own career. On returning to London, Murray took an active role in the feminist movement, volunteering and financially donating to the cause and taking part in feminist demonstrations, protests, and marches.
Joining the Women's Social and Political Union, she was present at large marches like the Mud March of 1907 and the Women's Coronation Procession of June 1911. She concealed the militancy of her actions in order to retain the image of respectability within academia. Murray also pushed the professional boundaries for women throughout her own career, and mentored other women in archaeology and throughout academia. As women could not use the men's common room, she successfully campaigned for UCL to open a common room for women, and later ensured that a larger, better-equipped room was converted for the purpose; it was later renamed the Margaret Murray Room. At UCL, she became a friend of fellow female lecturer Winifred Smith, and together they campaigned to improve the status and recognition of women in the university, with Murray becoming particularly annoyed at female staff who were afraid of upsetting or offending the male university establishment with their demands. Feeling that students should get nutritious yet affordable lunches, for many years she sat on the UCL Refectory Committee. Various museums around the United Kingdom invited Murray to advise them on their Egyptological collections, resulting in her cataloguing the Egyptian artefacts owned by the Dublin National Museum, the National Museum of Antiquities in Edinburgh, and the Society of Antiquaries of Scotland, being elected a Fellow of the latter in thanks. Petrie had established connections with the Egyptological wing of Manchester Museum in Manchester, and it was there that many of his finds had been housed. Murray thus often travelled to the museum to catalogue these artefacts, and during the 1906–07 school year regularly lectured there. In 1907, Petrie excavated the Tomb of the Two Brothers, a Middle Kingdom burial of two Egyptian priests, Nakht-ankh and Khnum-nakht, and it was decided that Murray would carry out the public unwrapping of the latter's mummified body. Taking place at the museum in May 1908, it represented the first time that a woman had led a public mummy unwrapping and was attended by over 500 onlookers, attracting press attention. Murray was particularly keen to emphasise the importance that the unwrapping would have for the scholarly understanding of the Middle Kingdom and its burial practices, and lashed out against members of the public who saw it as immoral; she declared that "every vestige of ancient remains must be carefully studied and recorded without sentimentality and without fear of the outcry of the ignorant". She subsequently published a book about her analysis of the two bodies, "The Tomb of the Two Brothers", which remained a key publication on Middle Kingdom mummification practices into the 21st century. Murray was dedicated to public education, hoping to infuse Egyptomania with solid scholarship about Ancient Egypt, and to this end authored a series of books aimed at a general audience. In 1905 she published "Elementary Egyptian Grammar" which was followed in 1911 by "Elementary Coptic (Sahidic) Grammar". In 1913, she published "Ancient Egyptian Legends" for John Murray's "The Wisdom of the East" series. She was particularly pleased with the increased public interest in Egyptology that followed Howard Carter's discovery of the tomb of Pharaoh Tutankhamun in 1922. 
From at least 1911 until his death in 1940, Murray was a close friend of the anthropologist Charles Gabriel Seligman of the London School of Economics, and together they co-authored a variety of papers on Egyptology that were aimed at an anthropological audience. Many of these dealt with subjects that Egyptological journals would not publish, such as the "Sa" sign for the uterus, and thus were published in "Man", the journal of the Royal Anthropological Institute. It was at Seligman's recommendation that she was invited to become a member of the Institute in 1916. In 1914, Petrie launched the academic journal "Ancient Egypt", published through his own British School of Archaeology in Egypt (BSAE), which was based at UCL. Given that he was often away from London excavating in Egypt, Murray was left to operate as "de facto" editor much of the time. She also published many research articles in the journal and authored many of its book reviews, particularly of the German-language publications which Petrie could not read. The outbreak of the First World War in 1914, in which the United Kingdom went to war against Germany and the Ottoman Empire, meant that Petrie and other staff members were unable to return to Egypt for excavation. Instead, Petrie and Murray spent much of the time reorganising the artefact collections that they had amassed over the past decades. To aid Britain's war effort, Murray enrolled as a volunteer nurse in the Voluntary Aid Detachment of the College Women's Union Society, and for several weeks was posted to Saint-Malo in France. After being taken ill herself, she was sent to recuperate in Glastonbury, Somerset, where she became interested in Glastonbury Abbey and the folklore surrounding it which connected it to the legendary figure of King Arthur and to the idea that the Holy Grail had been brought there by Joseph of Arimathea. Pursuing this interest, she published the paper "Egyptian Elements in the Grail Romance" in the journal "Ancient Egypt", although few agreed with her conclusions and it was criticised for making unsubstantiated leaps with the evidence by the likes of Jessie Weston. Murray's interest in folklore led her to develop an interest in the witch trials of Early Modern Europe. In 1917, she published a paper in "Folklore", the journal of the Folklore Society, in which she first articulated her version of the witch-cult theory, arguing that the witches persecuted in European history were actually followers of "a definite religion with beliefs, ritual, and organization as highly developed as that of any cult in the end". She followed this up with papers on the subject in the journals "Man" and the "Scottish Historical Review". She articulated these views more fully in her 1921 book "The Witch-Cult in Western Europe", published by Oxford University Press after receiving a positive peer review by Henry Balfour, and which received both criticism and support on publication. Many reviews in academic journals were critical, with historians claiming that she had distorted and misinterpreted the contemporary records that she was using, but the book was nevertheless influential. As a result of her work in this area, she was invited to provide the entry on "witchcraft" for the fourteenth edition of the "Encyclopædia Britannica" in 1929. She used the opportunity to propagate her own witch-cult theory, failing to mention the alternate theories proposed by other academics.
Her entry would be included in the encyclopedia until 1969, becoming readily accessible to the public, and it was for this reason that her ideas on the subject had such a significant impact. It received a particularly enthusiastic reception by occultists such as Dion Fortune, Lewis Spence, Ralph Shirley, and J. W. Brodie Innes, perhaps because its claims regarding an ancient secret society chimed with similar claims common among various occult groups. Murray joined the Folklore Society in February 1927, and was elected to the society's council a month later, although she stood down in 1929. Murray reiterated her witch-cult theory in her 1933 book, "The God of the Witches", which was aimed at a wider, non-academic audience. In this book, she cut out or toned down what she saw as the more unpleasant aspects of the witch-cult, such as animal and child sacrifice, and began describing the religion in more positive terms as "the Old Religion". From 1921 to 1927, Murray led archaeological excavations on Malta, assisted by Edith Guest and Gertrude Caton Thompson. She excavated the Bronze Age megalithic monuments of Santa Sofia, Santa Maria tal-Bakkari, Għar Dalam, and Borġ in-Nadur, all of which were threatened by the construction of a new aerodrome. In this she was funded by the Percy Sladen Memorial Fund. Her resulting three-volume excavation report came to be seen as an important publication within the field of Maltese archaeology. During the excavations, she had taken an interest in the island's folklore, resulting in the 1932 publication of her book "Maltese Folktales", much of which was a translation of earlier stories collected by Manuel Magri and her friend Liza Galea. In 1932 Murray returned to Malta to aid in the cataloguing of the Bronze Age pottery collection held in Malta Museum, resulting in another publication, "Corpus of the Bronze Age Pottery of Malta". On the basis of her work in Malta, Louis C. G. Clarke, the curator of the Cambridge Museum of Ethnology and Anthropology, invited her to lead excavations on the island of Menorca from 1930 to 1931. With the aid of Guest, she excavated the talaiotic sites of Trepucó and Sa Torreta de Tramuntana, resulting in the publication of "Cambridge Excavations in Minorca". Murray also continued to publish works on Egyptology for a general audience, such as "Egyptian Sculpture" (1930) and "Egyptian Temples" (1931), which received largely positive reviews. In the summer of 1925 she led a team of volunteers to excavate Homestead Moat in Whomerle Wood near to Stevenage, Hertfordshire; she did not publish an excavation report and did not mention the event in her autobiography, with her motives for carrying out the excavation remaining unclear. In 1924, UCL promoted Murray to the position of assistant professor, and in 1927 she was awarded an honorary doctorate for her career in Egyptology. That year, Murray was tasked with guiding Mary of Teck, the Queen consort, around the Egyptology department during the latter's visit to UCL. The pressures of teaching had eased by this point, allowing Murray to spend more time travelling internationally; in 1920 she returned to Egypt and in 1929 visited South Africa, where she attended the meeting of the British Association for the Advancement of Science, whose theme was the prehistory of southern Africa. 
In the early 1930s she travelled to the Soviet Union, where she visited museums in Leningrad, Moscow, Kharkiv, and Kiev, and then in late 1935 she undertook a lecture tour of Norway, Sweden, Finland, and Estonia. Although having reached legal retirement age in 1927, and thus unable to be offered another five-year contract, Murray was reappointed on an annual basis each year until 1935. At this point, she retired, expressing the opinion that she was glad to leave UCL, for reasons that she did not make clear. In 1933, Petrie had retired from UCL and moved to Jerusalem in Mandatory Palestine with his wife; Murray therefore took over as editor of the "Ancient Egypt" journal, renaming it "Ancient Egypt and the East" to reflect its increasing research interest in the ancient societies that surrounded and interacted with Egypt. The journal folded in 1935, perhaps due to Murray's retirement. Murray then spent some time in Jerusalem, where she aided the Petries in their excavation at Tall al-Ajjul, a Bronze Age mound south of Gaza. During Murray's 1935 trip to Palestine, she had taken the opportunity to visit Petra in neighbouring Jordan. Intrigued by the site, in March and April 1937 she returned in order to carry out a small excavation in several cave dwellings at the site, subsequently writing both an excavation report and a guidebook on Petra. Back in England, from 1934 to 1940, Murray aided the cataloguing of Egyptian antiquities at Girton College, Cambridge, and also gave lectures in Egyptology at the university until 1942. During the Second World War, Murray evaded the Blitz of London by moving to Cambridge, where she volunteered for a group (probably the Army Bureau of Current Affairs or The British Way and Purpose) who educated military personnel to prepare them for post-war life. Based in the city, she embarked on research into the town's Early Modern history, examining documents stored in local parish churches, Downing College, and Ely Cathedral; she never published her findings. In 1945, she briefly became involved in the "Who put Bella in the Wych Elm?" murder case. After the war ended she returned to London, settling into a bedsit room in Endsleigh Street, which was close to University College London (UCL) and the Institute of Archaeology (then an independent institution, now part of UCL); she continued her involvement with the former and made use of the latter's library. On most days she visited the British Museum in order to consult their library, and twice a week she taught adult education classes on Ancient Egyptian history and religion at the City Literary Institute; upon her retirement from this position she nominated her former pupil, Veronica Seton-Williams, to replace her. Murray's interest in popularising Egyptology among the wider public continued; in 1949 she published "Ancient Egyptian Religious Poetry", her second work for John Murray's "The Wisdom of the East" series. That same year she also published "The Splendour That Was Egypt", in which she collated many of her UCL lectures. The book adopted a diffusionist perspective that argued that Egypt influenced Greco-Roman society and thus modern Western society. This was seen as a compromise between Petrie's belief that other societies influenced the emergence of Egyptian civilisation and Grafton Elliot Smith's highly unorthodox and heavily criticised hyperdiffusionist view that Egypt was the source of all global civilisation. The book received a mixed reception from the archaeological community. 
In 1953, Murray was appointed to the presidency of the Folklore Society following the resignation of former president Allan Gomme. The Society had initially approached John Mavrogordato for the post, but he had declined, with Murray accepting the nomination several months later. Murray remained President for two terms, until 1955. In her 1954 presidential address, "England as a Field for Folklore Research", she lamented what she saw as the English people's disinterest in their own folklore in favour of that from other nations. For the autumn 1961 issue of "Folklore", the society published a "festschrift" to Murray to commemorate her 98th birthday. The issue contained contributions from various scholars paying tribute to her – with papers dealing with archaeology, fairies, Near Eastern religious symbols, Greek folk songs – but notably not about witchcraft, potentially because no other folklorists were willing to defend her witch-cult theory. In May 1957, Murray had championed the archaeologist T. C. Lethbridge's controversial claims that he had discovered three pre-Christian chalk hill figures on Wandlebury Hill in the Gog Magog Hills, Cambridgeshire. Privately she expressed concern about the reality of the figures. Lethbridge subsequently authored a book championing her witch-cult theory in which he sought the cult's origins in pre-Christian culture. In 1960, she donated her collection of papers – including correspondences with a wide range of individuals across the country – to the Folklore Society Archive, where it is now known as "the Murray Collection". Crippled with arthritis, Murray had moved into a home in North Finchley, north London, where she was cared for by a retired couple who were trained nurses; from here she occasionally took taxis into central London to visit the UCL library. Amid failing health, in 1962 Murray moved into the Queen Victoria Memorial Hospital in Welwyn, Hertfordshire, where she could receive 24-hour care; she lived here for the final 18 months of her life. To mark her hundredth birthday, on 13 July 1963 a group of her friends, former students, and doctors gathered for a party at nearby Ayot St. Lawrence. Two days later, her doctor drove her to UCL for a second birthday party, again attended by many of her friends, colleagues, and former students; it was the last time that she visited the university. In "Man", the journal of the Royal Anthropological Institute, it was noted that Murray was "the only Fellow of the Institute to [reach their centenary] within living memory, if not in its whole history". That year she published two books; one was "The Genesis of Religion", in which she argued that humanity's first deities had been goddesses rather than male gods. The second was her autobiography, "My First Hundred Years", which received predominantly positive reviews. She died on 13 November 1963, and her body was cremated. The later folklorists Caroline Oates and Juliette Wood have suggested that Murray was best known for her witch-cult theory, with biographer Margaret S. Drower expressing the view that it was her work on this subject which "perhaps more than any other, made her known to the general public". It has been claimed that Murray's was the "first feminist study of the witch trials", as well as being the first to have actually "empowered the witches" by giving the (largely female) accused both free will and a voice distinct from that of their interrogators. 
The theory was faulty, in part because all of her academic training was in Egyptology, with no background knowledge in European history, but also because she exhibited a "tendency to generalize wildly on the basis of very slender evidence". Oates and Wood, however, noted that Murray's interpretations of the evidence fitted within wider perspectives on the past that existed at the time, stating that "Murray was far from isolated in her method of reading ancient ritual origins into later myths". In particular, her approach was influenced by the work of the anthropologist James Frazer, who had argued for the existence of a pervasive dying-and-resurrecting god myth, and she was also influenced by the interpretative approaches of E. O. James, Karl Pearson, Herbert Fleure, and Harold Peake. In "The Witch-Cult in Western Europe", Murray stated that she had restricted her research to Great Britain, although she made some recourse to sources from France, Flanders, and New England. She drew a division between what she termed "Operative Witchcraft", which referred to the performance of charms and spells with any purpose, and "Ritual Witchcraft", by which she meant "the ancient religion of Western Europe", a fertility-based faith that she also termed "the Dianic cult". She claimed that the cult had "very probably" once been devoted to the worship of both a male deity and a "Mother Goddess" but that "at the time when the cult is recorded the worship of the male deity appears to have superseded that of the female". In her argument, Murray claimed that the figure referred to as the Devil in the trial accounts was the witches' god, "manifest and incarnate", to whom the witches offered their prayers. She claimed that at the witches' meetings, the god would be personified, usually by a man or at times by a woman or an animal; when a human personified this entity, Murray claimed that they were usually dressed plainly, though they appeared in full costume for the witches' Sabbaths. Members joined the cult either as children or adults through what Murray called "admission ceremonies"; Murray asserted that applicants had to agree to join of their own free will, and agree to devote themselves to the service of their deity. She also claimed that in some cases, these individuals had to sign a covenant or were baptised into the faith. At the same time, she claimed that the religion was largely passed down hereditary lines. Murray described the religion as being divided into covens containing thirteen members, led by a coven officer who was often termed the "Devil" in the trial accounts, but who was accountable to a "Grand Master". According to Murray, the records of the coven were kept in a secret book, with the coven also disciplining its members, to the extent of executing those deemed traitors. Describing this witch-cult as "a joyous religion", she claimed that the two primary festivals that it celebrated were on May Eve and November Eve, although other dates of religious observation were 1 February and 1 August, the winter and summer solstices, and Easter. She asserted that the "General Meeting of all members of the religion" were known as Sabbaths, while the more private ritual meetings were known as Esbats. The Esbats, Murray claimed, were nocturnal rites that began at midnight, and were "primarily for business, whereas the Sabbath was purely religious". At the former, magical rites were performed both for malevolent and benevolent ends.
She also asserted that the Sabbath ceremonies involved the witches paying homage to the deity, renewing their "vows of fidelity and obedience" to him, and providing him with accounts of all the magical actions that they had conducted since the previous Sabbath. Once this business had been concluded, admissions to the cult or marriages were conducted, ceremonies and fertility rites took place, and then the Sabbath ended with feasting and dancing. Deeming Ritual Witchcraft to be "a fertility cult", she asserted that many of its rites were designed to ensure fertility and rain-making. She claimed that there were four types of sacrifice performed by the witches: blood-sacrifice, in which the neophyte writes their name in blood; the sacrifice of animals; the sacrifice of a non-Christian child to procure magical powers; and the sacrifice of the witches' god by fire to ensure fertility. She interpreted accounts of witches shapeshifting into various animals as being representative of a rite in which the witches dressed as specific animals which they took to be sacred. She asserted that accounts of familiars were based on the witches' use of animals, which she divided into "divining familiars" used in divination and "domestic familiars" used in other magic rites. Murray asserted that a pre-Christian fertility-based religion had survived the Christianisation process in Britain, although it came to be "practised only in certain places and among certain classes of the community". She believed that folkloric stories of fairies in Britain were based on a surviving race of dwarfs, who continued to live on the island up until the Early Modern period. She asserted that this race followed the same pagan religion as the witches, thus explaining the folkloric connection between the two. In the appendices to the book, she also alleged that Joan of Arc and Gilles de Rais were members of the witch-cult and were executed for it, a claim which has been refuted by historians, especially in the case of Joan of Arc. The later historian Ronald Hutton commented that "The Witch-Cult in Western Europe" "rested upon a small amount of archival research, with extensive use of printed trial records in 19th-century editions, plus early modern pamphlets and works of demonology". He also noted that the book's tone was generally "dry and clinical, and every assertion was meticulously footnoted to a source, with lavish quotation". It was not a bestseller; in its first thirty years, only 2,020 copies were sold. However, it led many people to treat Murray as an authority on the subject; in 1929, she was invited to provide the entry on "Witchcraft" for the "Encyclopædia Britannica", and used it to present her interpretation of the subject as if it were universally accepted in scholarship. It remained in the encyclopedia until being replaced in 1969. Murray followed "The Witch-Cult in Western Europe" with "The God of the Witches", published by the popular press Sampson Low in 1931; although similar in content, unlike her previous volume it was aimed at a mass market audience. The tone of the book also differed strongly from its predecessor, containing "emotionally inflated [language] and coloured with religious phraseology" and repeatedly referring to the witch-cult as "the Old Religion". In this book she also "cut out or toned down" many of the claims made in her previous volume which would have painted the cult in a bad light, such as those which discussed sex and the sacrifice of animals and children.
In this book she began to refer to the witches' deity as the Horned God, and asserted that it was an entity who had been worshipped in Europe since the Palaeolithic. She further asserted that in the Bronze Age, the worship of the deity could be found throughout Europe, Asia, and parts of Africa, claiming that the depiction of various horned figures from these societies proved this. Among the evidence cited were the horned figures found at Mohenjo-Daro, which are often interpreted as depictions of Pashupati, as well as the deities Osiris and Amon in Egypt and the Minotaur of Minoan Crete. Within continental Europe, she claimed that the Horned God was represented by Pan in Greece, by Cernunnos in Gaul, and in various Scandinavian rock carvings. Claiming that this divinity had been declared the Devil by the Christian authorities, she nevertheless asserted that his worship was attested in officially Christian societies right through to the Modern period, citing folkloric practices such as the Dorset Ooser and the Puck Fair as evidence of his veneration. In 1954, she published "The Divine King in England", in which she greatly expanded on the theory, taking influence from Frazer's "The Golden Bough", an anthropological book that made the claim that societies all over the world sacrificed their kings to the deities of nature. In her book, she claimed that this practice had continued into medieval England, and that, for instance, the death of William II was really a ritual sacrifice. No academic took the book seriously, and it was ignored by many of her supporters. Upon initial publication, Murray's thesis gained a favourable reception from many readers, including some significant scholars, albeit none who were experts in the witch trials. Historians of Early Modern Britain like George Norman Clark and Christopher Hill incorporated her theories into their work, although the latter subsequently distanced himself from the theory. For the 1961 reprint of "The Witch-Cult in Western Europe", the medieval historian Steven Runciman provided a foreword in which he accepted that some of Murray's "minor details may be open to criticism", but in which he was otherwise supportive of her thesis. Her theories were recapitulated by Arno Runeberg in his 1947 book "Witches, Demons and Fertility Magic" as well as by Pennethorne Hughes in his 1952 book "Witches". As a result, the Canadian historian Elliot Rose, writing in 1962, claimed that the Murrayite interpretations of the witch trials "seem to hold, at the time of writing, an almost undisputed sway at the higher intellectual levels", being widely accepted among "educated people". Rose suggested that Murray's theory gained such support partly because of her "imposing credentials" as a member of staff at UCL, a position that lent her theory greater legitimacy in the eyes of many readers. He further suggested that the Murrayite view was attractive to many as it confirmed "the general picture of pre-Christian Europe a reader of Frazer or [Robert] Graves would be familiar with". Similarly, Hutton suggested that the Murrayite theory proved popular because it "appealed to so many of the emotional impulses of the age", including "the notion of the English countryside as a timeless place full of ancient secrets", the literary popularity of Pan, the widespread belief that the majority of Britons had remained pagan long after the process of Christianisation, and the idea that folk customs represented pagan survivals.
At the same time, Hutton suggested, it seemed more plausible to many than the previously dominant rationalist idea that the witch trials were the result of mass delusion. Related to this, the folklorist Jacqueline Simpson suggested that part of the Murrayite theory's appeal was that it appeared to give a "sensible, demystifying, liberating approach to a longstanding but sterile argument" between the rationalists who denied that there had been any witches and those, like Montague Summers, who insisted that there had been a real Satanic conspiracy against Christendom in the Early Modern period replete with witches with supernatural powers. "How refreshing", noted the historian Hilda Ellis Davidson, "and exciting her first book was at that period. A new approach, and such a surprising one." Murray's theories never received support from experts in the Early Modern witch trials, and from her early publications onward many of her ideas were challenged by those who highlighted her "factual errors and methodological failings". Indeed, the majority of scholarly reviews of her work produced during the 1920s and 1930s were largely critical. George L. Burr reviewed both of her initial books on the witch-cult for the "American Historical Review". He stated that she was not acquainted with the "careful general histories by modern scholars" and criticised her for assuming that the trial accounts accurately reflected the accused witches' genuine experiences of witchcraft, regardless of whether those confessions had been obtained through torture and coercion. He also charged her with selectively using the evidence to serve her interpretation, for instance by omitting any supernatural or miraculous events that appear in the trial accounts. W. R. Halliday was highly critical in his review for "Folklore", as was E. M. Loeb in his review for "American Anthropologist". Soon after, one of the foremost specialists in the trial records, L'Estrange Ewen, brought out a series of books which rejected Murray's interpretation. Rose suggested that Murray's books on the witch-cult "contain an incredible number of minor errors of fact or of calculation and several inconsistencies of reasoning". He accepted that her case "could, perhaps, still be proved by somebody else, though I very much doubt it". Highlighting that there is a gap of about a thousand years between the Christianisation of Britain and the start of the witch trials there, he argues that there is no evidence for the existence of the witch-cult anywhere in the intervening period. He further criticises Murray for treating pre-Christian Britain as a socially and culturally monolithic entity, whereas in reality, it contained a diverse array of societies and religious beliefs. He also challenges Murray's claim that the majority of Britons in the Middle Ages remained pagan as "a view grounded on ignorance alone". Murray did not respond directly to the criticisms of her work, but reacted to her critics in a hostile manner; in later life she asserted that she eventually ceased reading reviews of her work, and believed that her critics were simply acting out of their own Christian prejudices against non-Christian religion. Simpson noted that despite these critical reviews, within the field of British folkloristics Murray's theories were permitted "to pass unapproved but unchallenged, either out of politeness or because nobody was really interested enough to research the topic".
As evidence, she noted that no substantial research articles on the subject of witchcraft were published in "Folklore" between Murray's in 1917 and Rossell Hope Robbins' in 1963. She also highlighted that when regional studies of British folklore were published in this period by folklorists like Theo Brown, Ruth Tongue, or Enid Porter, none adopted the Murrayite framework for interpreting witchcraft beliefs, thus evidencing her claim that Murray's theories were widely ignored by scholars of folkloristics. Murray's work was increasingly criticised following her death in 1963, with the definitive academic rejection of the Murrayite witch-cult theory occurring during the 1970s. During these decades, a variety of scholars across Europe and North America – such as Alan Macfarlane, Erik Midelfort, William Monter, Robert Muchembled, Gerhard Schormann, Bente Alver and Bengt Ankarloo – published in-depth studies of the archival records from the witch trials, leaving no doubt that those tried for witchcraft were not practitioners of a surviving pre-Christian religion. In 1971, the English historian Keith Thomas stated that on the basis of this research, there was "very little evidence to suggest that the accused witches were either devil-worshippers or members of a pagan fertility cult". He stated that Murray's conclusions were "almost totally groundless" because she ignored the systematic study of the trial accounts provided by Ewen and instead used sources very selectively to argue her point. In 1975, the historian Norman Cohn commented that Murray's "knowledge of European history, even of English history, was superficial and her grasp of historical method was non-existent", adding that her ideas were "firmly set in an exaggerated and distorted version of the Frazerian mould". That same year, the historian of religion Mircea Eliade described Murray's work as "hopelessly inadequate", containing "numberless and appalling errors". In 1996, the feminist historian Diane Purkiss stated that although Murray's thesis was "intrinsically improbable" and commanded "little or no allegiance within the modern academy", she felt that male scholars like Thomas, Cohn, and Macfarlane had unfairly adopted an androcentric approach by which they contrasted their own, male and methodologically sound interpretation against Murray's "feminised belief" about the witch-cult. Hutton stated that Murray had treated her source material with "reckless abandon", in that she had taken "vivid details of alleged witch practices" from "sources scattered across a great extent of space and time" and then declared them to be normative of the cult as a whole. Simpson outlined how Murray had selected her use of evidence very specifically, particularly by ignoring and/or rationalising any accounts of supernatural or miraculous events in the trial records, thereby distorting the events that she was describing. Thus, Simpson pointed out, Murray rationalised claims that the cloven-hoofed Devil appeared at the witches' Sabbath by stating that he was a man with a special kind of shoe, and similarly asserted that witches' claims to have flown through the air on broomsticks were actually based on their practice of either hopping along on broomsticks or smearing hallucinogenic salves onto themselves. Concurring with this assessment, the historian Jeffrey Burton Russell, writing with the independent author Brooks Alexander, stated that "Murray's use of sources, in general, is appalling". 
The pair went on to claim that "today, scholars are agreed that Murray was more than just wrong – she was completely and embarrassingly wrong on nearly all of her basic premises". The Italian historian Carlo Ginzburg has been cited as being willing to give "some slight support" to Murray's theory. Ginzburg stated that although her thesis had been "formulated in a wholly uncritical way" and contained "serious defects", it did contain "a kernel of truth". He stated his opinion that she was right in claiming that European witchcraft had "roots in an ancient fertility cult", something that he argued was vindicated by his work researching the "benandanti", an agrarian visionary tradition recorded in the Friuli district of Northeastern Italy during the 16th and 17th centuries. Several historians and folklorists have pointed out that Ginzburg's arguments are very different to Murray's: whereas Murray argued for the existence of a pre-Christian witches' cult whose members physically met during the witches' Sabbaths, Ginzburg argued that some of the European visionary traditions that were conflated with witchcraft in the Early Modern period had their origins in pre-Christian fertility religions. Moreover, other historians have expressed criticism of Ginzburg's interpretation of the "benandanti"; Cohn stated that there was "nothing whatsoever" in the source material to justify the idea that the "benandanti" were the "survival of an age-old fertility cult". Echoing these views, Hutton commented that Ginzburg's claim that the "benandanti" visionary traditions were a survival from pre-Christian practices was an idea resting on "imperfect material and conceptual foundations". He added that Ginzburg's "assumption" that "what was being dreamed about in the sixteenth century had in fact been acted out in religious ceremonies" dating to "pagan times", was entirely "an inference of his own" and not one supported by the documentary evidence. On researching the history of UCL's Egyptology department, the historian Rosalind M. Janssen stated that Murray was "remembered with gratitude and immense affection by all her former students. A wise and witty teacher, two generations of Egyptologists have forever been in her debt." Alongside teaching them, Murray was known to socialise with her UCL students outside of class hours. The archaeologist Ralph Merrifield, who knew Murray through the Folklore Society, described her as a "diminutive and kindly scholar, who radiated intelligence and strength of character into extreme old age". Davidson, who also knew Murray through the Society, noted that at their meetings "she would sit near the front, a bent and seemingly guileless old lady dozing peacefully, and then in the middle of a discussion would suddenly intervene with a relevant and penetrating comment which showed that she had missed not one word of the argument". The later folklorist Juliette Wood noted that many members of the Folklore Society "remember her fondly", adding that Murray had been "especially keen to encourage younger researchers, even those who disagreed with her ideas". One of Murray's friends in the Society, E. O. James, described her as a "mine of information and a perpetual inspiration ever ready to impart her vast and varied stores of specialised knowledge without reserve, or, be it said, much if any regard for the generally accepted opinions and conclusions of the experts!" Davidson described her as being "not at all assertive ... [she] never thrust her ideas on anyone. 
[In relation to her witch-cult theory,] she behaved in fact rather like someone who was a fully convinced member of some unusual religious sect, or perhaps, of the Freemasons, but never on any account got into arguments about it in public." The archaeologist Glyn Daniel observed that Murray remained mentally alert into her old age, commenting that "her vigour and forthrightness and ruthless energy never deserted her". Murray never married, instead devoting her life to her work, and for this reason, Hutton drew comparisons between her and two other prominent female British scholars of the period, Jane Harrison and Jessie Weston. Murray's biographer Kathleen L. Sheppard stated that she was deeply committed to public outreach, particularly when it came to Egyptology, and that as such she "wanted to change the means by which the public obtained knowledge about Egypt's history: she wished to throw open the doors to the scientific laboratory and invite the public in". She considered travel to be one of her favourite activities, although due to restraints on her time and finances she was unable to do this regularly; her salary remained small and the revenue from her books was meagre. Raised a devout Christian by her mother, Murray had initially become a Sunday School teacher to preach the faith, but after entering the academic profession she rejected religion, gaining a reputation among other members of the Folklore Society as a noted sceptic and a rationalist. She was openly critical of organised religion, although continued to maintain a personal belief in a God of some sort, relating in her autobiography that she believed in "an unseen over-ruling Power", "which science calls Nature and religion calls God". She was also a believer and a practitioner of magic, performing curses against those she felt deserved it; in one case she cursed a fellow academic, Jaroslav Černý, when she felt that his promotion to the position of Professor of Egyptology over her friend Walter Bryan Emery was unworthy. Her curse entailed mixing up ingredients in a frying pan, and was undertaken in the presence of two colleagues. In another instance, she was claimed to have created a wax image of Kaiser Wilhelm II and then melted it during the First World War. Ruth Whitehouse argues that, given Murray's lack of mention of such incidents in her autobiography and generally rational approach, a "spirit of mischief" as opposed to "a real belief in the efficacy of the spells" may have motivated her practice of magic. Hutton noted that Murray was one of the earliest women to "make a serious impact upon the world of professional scholarship", and the archaeologist Niall Finneran described her as "one of the greatest characters of post-war British archaeology". Upon her death, Daniel referred to her as "the Grand Old Woman of Egyptology", with Hutton noting that Egyptology represented "the core of her academic career". In 2014, Thornton referred to her as "one of Britain's most famous Egyptologists". However, according to the archaeologist Ruth Whitehouse, Murray's contributions to archaeology and Egyptology were often overlooked as her work was overshadowed by that of Petrie, to the extent that she was often thought of primarily as one of Petrie's assistants rather than as a scholar in her own right. 
By her retirement she had come to be highly regarded within the discipline, although, according to Whitehouse, Murray's reputation declined following her death, something that Whitehouse attributed to the rejection of her witch-cult theory and the general erasure of women archaeologists from the discipline's male-dominated history. In his obituary for Murray in "Folklore", James noted that her death was "an event of unusual interest and importance in the annals of the Folk-Lore Society in particular as well as in the wider sphere in which her influence was felt in so many directions and disciplines". However, later academic folklorists, such as Simpson and Wood, have cited Murray and her witch-cult theory as an embarrassment to their field, and to the Folklore Society specifically. Simpson suggested that Murray's position as President of the Society was a causal factor in the mistrustful attitude that many historians held toward folkloristics as an academic discipline, as they erroneously came to believe that all folklorists endorsed Murray's ideas. Similarly, Catherine Noble stated that "Murray caused considerable damage to the study of witchcraft". In 1935, UCL introduced the Margaret Murray Prize, awarded to the student who is deemed to have produced the best dissertation in Egyptology; it continued to be presented annually into the 21st century. In 1969, UCL named one of their common rooms in her honour, but it was converted into an office in 1989. In June 1983, Queen Elizabeth The Queen Mother visited the room and there was gifted a copy of Murray's "My First Hundred Years". UCL also hold two busts of Murray, one kept in the Petrie Museum and the other in the library of the UCL Institute of Archaeology. This sculpture was commissioned by one of her students, Violet MacDermot, and produced by the artist Stephen Rickard. UCL also possess a watercolour painting of Murray by Winifred Brunton; formerly exhibited in the Petrie Gallery, it was later placed into the Art Collection stores. In 2013, on the 150th anniversary of Murray's birth and the 50th of her death, the UCL Institute of Archaeology's Ruth Whitehouse described Murray as "a remarkable woman" whose life was "well worth celebrating, both in the archaeological world at large and especially in UCL". The historian of archaeology Rosalind M. Janssen titled her study of Egyptology at UCL "The First Hundred Years" "as a tribute" to Murray. Murray's friend Margaret Stefana Drower authored a short biography of her, which was included as a chapter in the 2004 edited volume on "Breaking Ground: Pioneering Women Archaeologists". In 2013, Lexington Books published "The Life of Margaret Alice Murray: A Woman's Work in Archaeology", a biography of Murray authored by Kathleen L. Sheppard, then an assistant professor at Missouri University of Science and Technology; the book was based upon Sheppard's doctoral dissertation produced at the University of Oklahoma. Although characterising it as being "written in a clear and engaging manner", one reviewer noted that Sheppard's book focuses on Murray the "scientist" and as such neglects to discuss Murray's involvement in magical practices and her relationship with Wicca. Murray's witch-cult theories provided the blueprint for the contemporary Pagan religion of Wicca, with Murray being referred to as the "Grandmother of Wicca". 
The Pagan studies scholar Ethan Doyle White stated that it was the theory which "formed the historical narrative around which Wicca built itself", for on its emergence in England during the 1940s and 1950s, Wicca claimed to be the survival of this witch-cult. Wicca's theological structure, revolving around a Horned God and Mother Goddess, was adopted from Murray's ideas about the ancient witch-cult, and Wiccan groups were named "covens" and their meetings termed "esbats", both words that Murray had popularised. As with Murray's witch-cult, Wicca's practitioners entered via an initiation ceremony; Murray's claims that witches wrote down their spells in a book may have been an influence on Wicca's Book of Shadows. Wicca's early system of seasonal festivities was also based on Murray's framework. Noting that there is no evidence of Wicca existing before the publication of Murray's books, Merrifield commented that for those in 20th century Britain who wished to form their own witches' covens, "Murray may have seemed the ideal fairy godmother, and her theory became the pumpkin coach that could transport them into the realm of fantasy for which they longed". The historian Philip Heselton suggested that the New Forest coven – the oldest alleged Wiccan group – was founded "circa" 1935 by esotericists aware of Murray's theory and who may have believed themselves to be reincarnated witch-cult members. It was Gerald Gardner, who claimed to be an initiate of the New Forest coven, who established the tradition of Gardnerian Wicca and popularised the religion; according to Simpson, Gardner was the only member of the Folklore Society to "wholeheartedly" accept Murray's witch-cult hypothesis. The two knew each other, with Murray writing the foreword to Gardner's 1954 book "Witchcraft Today", although in that foreword she did not explicitly specify whether she believed Gardner's claim that he had discovered a survival of her witch-cult. In 2005, Noble suggested that "Murray's name might be all but forgotten today if it were not for Gerald Gardner". Murray's witch-cult theories were likely also a core influence on the non-Gardnerian Wiccan traditions that were established in Britain and Australia between 1930 and 1970 by the likes of Bob Clay-Egerton, Robert Cochrane, Charles Cardell, and Rosaleen Norton. The prominent Wiccan Doreen Valiente eagerly searched for what she believed were other surviving remnants of the Murrayite witch-cult around Britain. Valiente remained committed to a belief in Murray's witch-cult after its academic rejection, and she described Murray as "a remarkable woman". In San Francisco during the late 1960s, Murray's writings were among the sources used by Aidan A. Kelly in the creation of his Wiccan tradition, the New Reformed Orthodox Order of the Golden Dawn. In Los Angeles during the early 1970s, they were used by Zsuzsanna Budapest when she was establishing her feminist-oriented tradition of Dianic Wicca. The Murrayite witch-cult theory also provided the basis for the ideas espoused in "Witchcraft and the Gay Counterculture", a 1978 book written by the American gay liberation activist Arthur Evans. Members of the Wiccan community gradually became aware of academia's rejection of the witch-cult theory. Accordingly, belief in its literal truth declined during the 1980s and 1990s, with many Wiccans instead coming to view it as a myth that conveyed metaphorical or symbolic truths.
Others insisted that the historical origins of the religion did not matter and that instead Wicca was legitimated by the spiritual experiences it gave to its participants. In response, Hutton authored "The Triumph of the Moon", a historical study exploring Wicca's early development; on publication in 1999 the book exerted a strong impact on the British Pagan community, further eroding belief in the Murrayite theory among Wiccans. Conversely, other practitioners clung on to the theory, treating it as an important article of faith and rejecting post-Murrayite scholarship on European witchcraft. Several prominent practitioners continued to insist that Wicca was a religion with origins stretching back to the Palaeolithic, but others rejected the validity of historical scholarship and emphasised intuition and emotion as the arbiter of truth. A few "counter-revisionist" Wiccans – among them Donald H. Frew, Jani Farrell-Roberts, and Ben Whitmore – published critiques in which they attacked post-Murrayite scholarship on matters of detail, but none defended Murray's original hypothesis completely. Simpson noted that the publication of the Murray thesis in the "Encyclopædia Britannica" made it accessible to "journalists, film-makers, popular novelists and thriller writers", who adopted it "enthusiastically". It influenced the work of Aldous Huxley and Robert Graves. It was also an influence on the American horror author H. P. Lovecraft, who cited "The Witch-Cult in Western Europe" in his writings about the fictional cult of Cthulhu. The author Sylvia Townsend Warner cited Murray's work on the witch-cult as an influence on her 1926 novel "Lolly Willowes", and sent a copy of her book to Murray in appreciation, with the two meeting for lunch shortly after. There was nevertheless some difference in their depictions of the witch-cult; whereas Murray had depicted an organised pre-Christian cult, Warner depicted a vague family tradition that was explicitly Satanic. In 1927, Warner lectured on the subject of witchcraft, exhibiting a strong influence from Murray's work. Analysing the relationship between Murray and Warner, the English literature scholar Mimi Winick characterised both as being "engaged in imagining new possibilities for women in modernity". A bibliography of Murray's published work was published in "Folklore" by Wilfrid Bonser in 1961; her friend Drower produced a posthumous limited bibliography in 2004, and another limited bibliography appeared in Kathleen L. Sheppard's 2013 biography of her.
Aoraki / Mount Cook Aoraki / Mount Cook is the highest mountain in New Zealand. Its height, as of 2014, is listed as . It lies in the Southern Alps, the mountain range which runs the length of the South Island. A popular tourist destination, it is also a favourite challenge for mountain climbers. Aoraki / Mount Cook consists of three summits, from South to North the Low Peak (), Middle Peak () and High Peak. The summits lie slightly south and east of the main divide of the Southern Alps, with the Tasman Glacier to the east and the Hooker Glacier to the southwest. The mountain is in the Aoraki / Mount Cook National Park, in the Canterbury region. The park was established in 1953 and along with Westland National Park, Mount Aspiring National Park and Fiordland National Park forms one of the UNESCO World Heritage Sites. The park contains more than 140 peaks standing over and 72 named glaciers, which cover 40 percent of its . The peak is located at the northern end of the Kirikirikatata / Mount Cook Range, where it meets with the main spine of the Main Divide, forming a massif between the Hooker Valley to the southwest and the Tasman Valley east of the mountain. These two valleys provide the closest easily accessible view points of Aoraki / Mount Cook. A lookout point at the end of the Hooker Valley Track located only 10 km from the peak has views of the entire mountainside. The settlement of Mount Cook Village, also referred to as "Aoraki / Mount Cook", is a tourist centre and base camp for the mountain. It is 7 km from the end of the Tasman Glacier and 15 km south of Aoraki / Mount Cook's summit. On clear days, Aoraki / Mount Cook is visible from the West Coast as far north as Greymouth, some 150 kilometres away, and from most of State Highway 80 along Lake Pukaki and State Highway 6 south of Lake Pukaki. The near horizontal ridge connecting the mountain's three summits forms a distinctive blocky shape when viewed from an eastern or western direction. Another popular view point is from Lake Matheson on the West Coast, described as the "view of views", where on calm days, the peaks of Aoraki / Mount Cook and Mt Tasman are reflected in Lake Matheson. Aoraki / Mount Cook receives substantial orographic precipitation throughout the year, as breezy, moisture-laden westerly winds dominate all year-round, bringing rainclouds from the Tasman Sea with them. Annual precipitation around the mountain ranges varies greatly as the local climate is dominated by the eastward movement of depressions and anticyclones from across the Tasman Sea. The Aoraki / Mount Cook massif is a major obstacle to the prevailing westerly winds as they push depressions and associated cold fronts of moist air from the subtropics in the northwest against the mountain range. As the air rises towards the peaks, it expands and cools, and forms clouds. Rain and snowfall are often heaviest around the level and can last for several days if the front is slow-moving. As a result of the local weather patterns, the western slopes of Aoraki / Mount Cook can receive well over of annual precipitation, whereas the nearby Mount Cook Village, only south of the mountain receives of rain or snowfall. While the weather on the eastern side of the mountain is generally better, rain or snow can quickly become widespread on that side as well if the wind turns to the south or southeast. This brings with it a rapid drop in temperature and poor visibility, adding to the difficult climbing conditions on Aoraki / Mount Cook. 
Temperatures at the mountain's base in the Hooker Valley generally fall just over 1 °C for every 200 metres of altitude gained. Higher up the mountain, semi-permanent snow and ice fields exist during winter. Winter and spring are usually less settled than summer and autumn. Anticyclones often bring days of settled weather in summer, or clear cold conditions in winter with severe frost. Aoraki is the name of a person in the traditions of the Ngāi Tahu iwi; an early name for the South Island is Te Waka o Aoraki (Aoraki's Canoe). In the past many believed it meant "Cloud Piercer", a romantic rendering of the name's components: "ao" (world, daytime, cloud, etc.) and "raki" or "rangi" (day, sky, weather, etc.). Historically, the Māori name has also been spelt Aorangi, using the standard Māori form. Aoraki / Mount Cook became known to Māori after their arrival in New Zealand some time around the 14th century CE. The first Europeans who may have seen Aoraki / Mount Cook were members of Abel Tasman's crew, who saw a "large land uplifted high" (probably some part of the Southern Alps) while off the west coast of the South Island, just north of present-day Greymouth, on 13 December 1642 during Tasman's first Pacific voyage. The English name of "Mount Cook" was given to the mountain in 1851 by Captain John Lort Stokes to honour Captain James Cook, who surveyed and circumnavigated the islands of New Zealand in 1770. Captain Cook did not sight the mountain during his exploration. Following the settlement between Ngāi Tahu and the Crown in 1998, the name of the mountain was officially changed from Mount Cook to Aoraki / Mount Cook to incorporate its historic Māori name, Aoraki. As part of the settlement, a number of South Island placenames were amended to incorporate their original Māori name. Signifying the importance of Aoraki / Mount Cook, it is the only one of these names where the Māori name precedes the English. Under the settlement the Crown agreed to return title to Aoraki / Mount Cook to Ngāi Tahu, who would then formally gift it back to the nation. Neither transfer has yet occurred; Ngāi Tahu can decide when this will happen. The Southern Alps in the South Island were formed by tectonic uplifting and pressure as the Pacific and Indo-Australian Plates collided along the island's western coast. The uplift continues, raising Aoraki / Mount Cook by a small amount on average each year. However, erosive forces are also powerful shapers of the mountains. The severe weather is due to the mountain's jutting into powerful westerly winds of the Roaring Forties, which run around approximately 45°S latitude, south of both Africa and Australia. The Southern Alps are the first obstacle the winds encounter after South America, having moved east across the Southern Ocean. The height of Aoraki / Mount Cook was established in 1881 by G. J. Roberts (from the west side) and in 1889 by T. N. Brodrick (from the Canterbury side); their measurements agreed closely. The height was reduced when approximately 12–14 million cubic metres of rock and ice fell off the northern peak on 14 December 1991. Two decades of erosion of the ice cap exposed after this collapse reduced the height by another 30 m to 3,724 m, as revealed by new GPS data from a University of Otago climbing expedition in November 2013. Aoraki / Mount Cook lies in the centre of the distinctive Alpine Fault, a 650 km long active fault in the Southern Alps. It is responsible for the uplift of Aoraki / Mt Cook and is believed to move every 100 to 300 years. It last moved in 1717.
The average annual rainfall in the surrounding lowlands, in particular to the west, is very high. This very high rainfall leads to temperate rainforests in these coastal lowlands and a reliable source of snow in the mountains to keep the glaciers flowing. These include the Tasman Glacier to the east of the mountain and the smaller Hooker Glacier immediately to its south. The vegetation in the valleys to the east, in particular the Tasman Valley, is noticeably less lush than that on the western slopes of the mountain. Forest would normally grow to about 1,300 m in this area, but a lack of soil due to scree, rock falls and the effects of glaciation prevents this in most localities around the mountain. Snow tussock and other alpine plants cling on at elevations as high as 1,900 m. Above the snowline, only lichen can be found amongst the rock, snowfields and ice that dominate the highest parts of Aoraki / Mt Cook. The first recorded attempt on the summit was made by the Irishman Rev. William S. Green, the Swiss hotelier Emil Boss and the Swiss mountain guide Ulrich Kaufmann on 2 March 1882, via the Tasman and Linda Glaciers. Mt Cook guidebook author Hugh Logan believes they came within 50 metres of the summit. The first known ascent was on 25 December 1894, when New Zealanders Tom Fyfe, John Michael (Jack) Clarke and George Graham reached the summit via the Hooker Valley and the north ridge. Despite an earlier failed attempt on 20 December, the local climbers were spurred on by their desire for the first ascent to be made by New Zealand mountaineers, amid reports that the American mountaineer Edward FitzGerald had his eye on the summit. The party reached the summit at approximately 1:30 pm, bounding up the last leg of the mountain full of excitement at reaching the top. The route they had successfully traversed was not repeated until the 100th ascent, over 60 years later in 1955. Swiss guide Matthias Zurbriggen of FitzGerald's party made the second ascent on 14 March 1895 from the Tasman Glacier side, via the ridge that now bears his name. This is credited as the first solo ascent, although Zurbriggen was accompanied part of the way up the ridge by J Adamson. After Zurbriggen's ascent it was another ten years before the mountain was climbed again. In February 1905 Jack Clarke, with four others, completed the third ascent following Zurbriggen's route; Clarke thereby became the first person to make a repeat ascent. The first woman to ascend the mountain was Freda Du Faur, an Australian, on 3 December 1910. Local guide George Bannister, a nephew of another guide, Pahikore Te Koeti Turanga of Ngāi Tahu, was the first Māori to successfully scale the peak, in 1912. A traverse of the three peaks was first accomplished in 1913 by Freda Du Faur and guides Peter and Alex Graham. This 'grand traverse' was repeated in January 1916 by Conrad Kain, guiding the 57-year-old Mrs. Jane Thomson, considered at the time "a marvellous feat unequalled for daring in the annals of the Southern Alps". Sir Edmund Hillary made his first ascent in January 1948. In February 1948, with Ruth Adams, Harry Ayres and Mick Sullivan, Hillary made the first ascent of the South Ridge to the Low Peak. To celebrate Hillary's life, the South Ridge was renamed Hillary Ridge in August 2011. Aoraki / Mount Cook is a technically challenging mountain with a high level of glaciation. Its level of difficulty is often underestimated and can change dramatically depending on weather, snow and ice conditions.
The climb crosses large crevasses and involves risks of ice and rock falls, avalanches and rapidly changing weather conditions. Since the early 20th century, around 80 people have died attempting to climb the mountain, making it New Zealand's deadliest peak. The climbing season traditionally runs from November to February, and hardly a season goes by without at least one fatality. According to Māori legend, Aoraki was a young boy who, along with his three brothers, was a son of Rakinui, the Sky Father. On their voyage around Papatūānuku, the Earth Mother, their canoe became stranded on a reef and tilted. Aoraki and his brothers climbed onto the top side of their canoe. However, the south wind froze them and turned them to stone. Their canoe became Te Waka o Aoraki, the South Island, and its prow became the Marlborough Sounds. Aoraki, the tallest, became the highest peak, and his brothers formed Kā Tiritiri o te Moana, the Southern Alps. Ngāi Tahu, the main iwi (tribe) of New Zealand's southern region, consider Aoraki the most sacred of the ancestors from whom they descend. Aoraki gives the iwi its sense of community and purpose, and the mountain remains the physical form of Aoraki and the link between the supernatural and the natural world.
https://en.wikipedia.org/wiki?curid=20212
Multiple-image Network Graphics Multiple-image Network Graphics (MNG) is a graphics file format, published in 2001, for animated images. Its specification is publicly documented and there are free software reference implementations available. MNG is closely related to the PNG image format. When PNG development started in early 1995, developers decided not to incorporate support for animation, because the majority of the PNG developers felt that overloading a single file type with both still and animation features is a bad design, both for users (who have no simple way of determining to which class a given image file belongs) and for web servers (which should use a MIME type starting with image/ for stills and video/ for animations—GIF notwithstanding). However, work soon started on MNG as an animation-supporting version of PNG. Version 1.0 of the MNG specification was released on 31 January 2001. Gwenview has native MNG support. GIMP can export images as MNG files. ImageMagick can create an MNG file from a series of PNG files. With the MNG plugin, IrfanView can read an MNG file. If MPlayer is linked against libmng, as it usually is, MPlayer and thus all graphical front-ends like Gnome MPlayer can display MNG files. Mozilla browsers and Netscape 6.0, 6.01 and 7.0 included native support for MNG until the code was removed in 2003 due to code size and little actual usage, causing complaints on the Mozilla development site. Mozilla later added support for APNG as a simpler alternative. Similarly, early versions of the Konqueror browser included MNG support, but it was later dropped. MNG support was never included in Google Chrome, Internet Explorer, Opera, or Safari. Web servers generally don't come pre-configured to support MNG files. The MNG developers had hoped that MNG would replace GIF for animated images on the World Wide Web, just as PNG had done for still images. However, with the expiration of the LZW patents and the existence of alternative file formats such as Flash and SVG, combined with a lack of MNG-supporting viewers and services, web usage was far less than expected. The structure of MNG files is essentially the same as that of PNG files, differing only in the slightly different signature (8A 4D 4E 47 0D 0A 1A 0A in hexadecimal, where 4D 4E 47 is ASCII for "MNG" – see Portable Network Graphics: File header) and the use of a much greater variety of chunks to support all the animation features that it provides. Images to be used in the animation are stored in the MNG file as encapsulated PNG or JNG images. Two versions of MNG of reduced complexity are also defined: MNG-LC (low complexity) and MNG-VLC (very low complexity). These allow applications to include some level of MNG support without having to implement the entire MNG specification, just as the SVG standard offers the "SVG Basic" and "SVG Tiny" subsets. MNG does not have a registered MIME media type, but video/x-mng or image/x-mng can be used. MNG animations may be included in HTML pages using the "object" or "embed" tag. MNG can be either lossy or lossless, depending on whether the frames are encoded in PNG (lossless) or JNG (lossy). The most common alternatives are Animated GIF and Adobe Flash, with video, a relative newcomer as an alternative to GIF, recently gaining momentum. Animated GIF images are restricted to 256 colors and are used in simple scenarios but are supported in all major web browsers.
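Because the MNG signature differs from PNG's only in its leading byte and in the ASCII letters of the format name, basic format detection reduces to comparing the first eight bytes of a file. The following is a minimal sketch of such a check, assuming a Node.js environment and hypothetical file names; it illustrates the signature layout described above and is not part of any MNG reference implementation.

```typescript
// Minimal sketch: distinguish MNG, PNG and JNG files by their 8-byte signatures.
// File names used below are hypothetical examples.
import { readFileSync } from "node:fs";

// First eight bytes of each format; MNG and JNG differ from PNG only in the
// leading byte and the ASCII letters of the format name.
const SIGNATURES: Record<string, readonly number[]> = {
  MNG: [0x8a, 0x4d, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a],
  PNG: [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a],
  JNG: [0x8b, 0x4a, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a],
};

function detectFormat(path: string): string {
  // Read the file and compare its first eight bytes against each known signature.
  const header = readFileSync(path).subarray(0, 8);
  for (const [name, sig] of Object.entries(SIGNATURES)) {
    if (sig.every((byte, i) => header[i] === byte)) {
      return name;
    }
  }
  return "unknown";
}

console.log(detectFormat("animation.mng")); // expected: "MNG" for a valid MNG file
```

The JNG signature is included only because MNG files may embed JNG frames, as noted above; a real tool would also need to parse the chunk stream that follows the signature.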
Adobe Flash is a common alternative for creating complex and/or interactive animations and is natively supported by Internet Explorer 10 and Google Chrome, although support is deprecated as of 2016. In web pages, it is possible to create pseudo-animations by writing JavaScript code that loads still PNG or JPEG images of each frame and displays them one by one for a specified time interval. Apart from requiring the user to have JavaScript support and choose not to disable it, this method can be CPU- and bandwidth-intensive for pages with more than one image, large images, or high framerates, and does not allow the animation to be saved in one image file or posted on image-based sites such as flickr or imageboards. Most web browsers support APNG, a non-standard extension to PNG for simple GIF-like animations. Another alternative is SVG images with embedded PNG or JPEG graphics, using SVG animation (if supported) or JavaScript to flip between images. Internet Explorer supports neither APNG nor SVG animation. Another approach uses CSS 3 features, notably CSS Animation, which now has some level of support in most major web browsers. CSS Sprites (providing several images as tiles in a single large image file) can be used as animations by varying which part of the large image is visible using CSS Animation or JavaScript.
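As a rough illustration of the JavaScript pseudo-animation technique described above, the sketch below flips a single image element through a set of pre-loaded frame images on a fixed interval. It is written in TypeScript for the browser, and the frame file names, element id and frame rate are hypothetical placeholders.

```typescript
// Minimal sketch of the pseudo-animation technique described above.
// Assumes an <img id="animation"> element exists in the page; names are placeholders.
const frameUrls: string[] = Array.from({ length: 10 }, (_, i) => `frame${i}.png`);
const intervalMs = 100; // show each frame for 100 ms (about 10 frames per second)

const img = document.getElementById("animation") as HTMLImageElement;
let current = 0;

// Preload every frame so switching the src does not trigger a new download each time.
const preloaded = frameUrls.map((url) => {
  const image = new Image();
  image.src = url;
  return image;
});

setInterval(() => {
  current = (current + 1) % preloaded.length;
  img.src = preloaded[current].src;
}, intervalMs);
```

This mirrors the trade-off noted above: every frame is a separate image download, so pre-loading keeps playback smooth at the cost of bandwidth, and the result cannot be saved as a single image file.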
https://en.wikipedia.org/wiki?curid=20213
Mississippi John Hurt John Smith Hurt (March 8, 1893 – November 2, 1966), better known as Mississippi John Hurt, was an American country blues singer and guitarist. Raised in Avalon, Mississippi, Hurt taught himself to play the guitar around the age of nine. He worked as a sharecropper and began playing at dances and parties, singing to a melodious fingerpicked accompaniment. His first recordings, made for Okeh Records in 1928, were commercial failures, and he continued to work as a farmer. Dick Spottswood and Tom Hoskins, a blues enthusiast, located Hurt in 1963 and persuaded him to move to Washington, D.C. He was recorded by the Library of Congress in 1964. This helped further the American folk music revival, which led to the rediscovery of many other bluesmen of Hurt's era. Hurt performed on the university and coffeehouse concert circuit with other Delta blues musicians who were brought out of retirement. He also recorded several albums for Vanguard Records. Hurt returned to Mississippi, where he died a year later, in Grenada. Material recorded by him has been re-released by many record labels. His songs have been recorded by Bob Dylan, Dave Van Ronk, Jerry Garcia, Beck, Doc Watson, John McCutcheon, Taj Mahal, Bruce Cockburn, David Johansen, Bill Morrissey, Gillian Welch, Josh Ritter, Chris Smither, Guthrie Thomas, Parsonsfield, and Rory Block. Hurt was born in Teoc, Carroll County, Mississippi, and raised in Avalon, Mississippi. He taught himself to play guitar at the age of nine, stealthily playing the guitar of a friend of his mother's, who often stayed at the Hurt home while courting a woman who lived nearby. As a youth he played old-time music for friends and at dances. He worked as a farmhand and sharecropper into the 1920s. His fast, highly syncopated style of playing was meant for dancing. On occasion, a medicine show would come through the area. Hurt recalled that one wanted to hire him: "One of them wanted me, but I said no because I just never wanted to get away from home." In 1923, he played with the fiddle player Willie Narmour as a substitute for Narmour's regular partner, Shell Smith. When Narmour got a chance to record for Okeh Records as a prize for winning first place in a 1928 fiddle contest, he recommended Hurt to Okeh producer Tommy Rockwell. After auditioning "Monday Morning Blues" at his home, Hurt took part in two recording sessions, in Memphis and New York City (see Discography below). While in Memphis, he recalled seeing "many, many blues singers ... Lonnie Johnson, Blind Lemon Jefferson, Bessie Smith, and lots, lots more." Hurt attempted further negotiations with Okeh to record again, but his records were commercial failures. Okeh went out of business during the Great Depression, and Hurt returned to Avalon and obscurity, working as a sharecropper and playing at local parties and dances. Hurt's renditions of "Frankie" and "Spike Driver Blues" were included in "The Anthology of American Folk Music" in 1952, which generated considerable interest in locating him. When a copy of "Avalon Blues" was discovered in 1963, it led musicologist Dick Spottswood to locate Avalon in an atlas and to ask Tom Hoskins, who was traveling that way, to enquire after Hurt. When Hoskins arrived in Avalon, the first person he asked directed him to Hurt's cabin. Hoskins persuaded an apprehensive Hurt to perform several songs for him, to ensure that he was genuine.
Hoskins was convinced and, seeing that Hurt's guitar playing skills were still intact, encouraged him to move to Washington, D.C., and perform for a broader audience. His performance at the 1963 Newport Folk Festival caused his star to rise in the folk revival occurring at that time. He performed extensively at colleges, concert halls, and coffeehouses and appeared on "The Tonight Show Starring Johnny Carson". He also recorded three albums for Vanguard Records. Much of his repertoire was also recorded for the Library of Congress. His fans particularly liked the ragtime songs "Salty Dog" and "Candy Man" and the blues ballads "Spike Driver Blues" (a variant of "John Henry") and "Frankie". Hurt's influence spanned several music genres, including blues, spirituals, country, bluegrass, folk, and contemporary rock and roll. A soft-spoken man, his nature was reflected in his work, which consisted of a mellow mix of country, blues, and old-time music. Hurt died of a heart attack on November 2, 1966, in hospital at Grenada, Mississippi. His last recordings had been made at a hotel in New York City in February and July of that year, and were not released until 1972 on the Vanguard LP "Last Sessions". Hurt used a fast, syncopated fingerpicking style of guitar playing that he taught himself. He was influenced by few other musicians, among whom was an elderly, unrecorded blues singer from the area where he lived, Rufus Hanks, who played twelve-string guitar and harmonica. He also recalled listening to the country singer Jimmie Rodgers. On occasion, Hurt would use an open tuning and a slide, as he did in his arrangement of "The Ballad of Casey Jones". According to the music critic Robert Christgau, "the school of John Fahey proceeded from his finger-picking, and while he's not the only quietly conversational singer in the modern folk tradition, no one else has talked the blues with such delicacy or restraint." There is a memorial to Hurt in Avalon, Mississippi. It is parallel to RR2, the rural road on which he grew up. The American singer-songwriter Tom Paxton, who met Hurt and played on the same bill with him at the Gaslight in Greenwich Village around 1963, wrote and recorded a song about him in 1977, "Did You Hear John Hurt?", which he still frequently plays in live performances. The first track of John Fahey's 1968 solo acoustic guitar album "Requia" is "Requiem for John Hurt". Fahey's posthumous live album, "The Great Santa Barbara Oil Slick", also features a version of the piece, entitled "Requiem for Mississippi John Hurt". The British folk and blues artist Wizz Jones recorded a tribute song, "Mississippi John", for his 1977 album "Magical Flight". The Delta blues artist Rory Block recorded the album "Avalon: A Tribute to Mississippi John Hurt", released in 2013 as part of her "Mentor Series". The New England singer-songwriter Bill Morrissey released the Grammy-nominated album "Songs of Mississippi John Hurt" in 1999. In 2017, John Hurt's life story was told in the award-winning documentary series "American Epic". The film featured unseen film footage of Hurt performing and being interviewed, and radically improved restorations of his 1920s recordings. Director Bernard MacMahon stated that Hurt "was the inspiration for "American Epic"". Hurt's life was also profiled in the series' accompanying book.
https://en.wikipedia.org/wiki?curid=20215
Moravia Moravia is a historical region in the east of the Czech Republic and one of the three historical Czech lands, alongside Bohemia and Czech Silesia. The medieval and early modern Margraviate of Moravia was a crown land of the Lands of the Bohemian Crown from 1348 to 1918, an imperial state of the Holy Roman Empire from 1004 to 1806, a crown land of the Austrian Empire from 1804 to 1867, and a part of Austria-Hungary from 1867 to 1918. Moravia was one of the five lands of Czechoslovakia at its founding in 1918; in 1928 it was merged with Czech Silesia, and it was dissolved during the abolition of the land system in 1949 following the communist coup d'état. Its area of 22,623.41 km2 is home to more than 3 million people. The people are historically known as Moravians, a subgroup of Czechs, the other group being called Bohemians. Moravia had also been home to a large German-speaking population until its expulsion in 1945. The land takes its name from the Morava river, its principal watercourse, which runs from north to south. Moravia's largest city and historical capital is Brno. Before being sacked by the Swedish army during the Thirty Years' War, Olomouc was another capital, and it is still the seat of the Roman Catholic Archdiocese of Olomouc. The region and former margraviate of Moravia, "Morava" in Czech, is named after its principal river, the Morava. It is theorized that the river's name is derived from Proto-Indo-European "*mori": "waters", or indeed any word denoting "water" or a "marsh". The German name for Moravia is "Mähren", from the river's German name "March". This could have a different etymology, as "march" was a term used in medieval times for an outlying territory, a border or a frontier (cf. English "march"). Moravia occupies most of the eastern part of the Czech Republic. Moravian territory is naturally strongly delimited, essentially coinciding with the Morava river basin, bounded by mountains in the west (the "de facto" main European continental divide) and partly in the east, where all its rivers rise. Moravia occupies an exceptional position in Central Europe. All the highlands in the west and east of this part of Europe run west–east, and therefore form a kind of filter, making north–south or south–north movement more difficult. Only Moravia, with the depression of the westernmost Outer Subcarpathia lying between the Bohemian Massif and the Outer Western Carpathians (holding a roughly constant angle of 30° to the meridian), provides a comfortable connection between the Danubian and Polish regions, and this area is thus of great importance for the possible migration routes of large mammals – from the periodically recurring seasonal migrations triggered by climatic oscillations in prehistory to the period when permanent settlement began. Moravia borders Bohemia in the west, Lower Austria in the south and southwest, Slovakia in the southeast, Poland along a very short stretch in the north, and Czech Silesia in the northeast. Its natural boundary is formed by the Sudetes mountains in the north, the Carpathians in the east and the Bohemian-Moravian Highlands in the west (the border runs from Králický Sněžník in the north, over Suchý vrch, across the Upper Svratka Highlands and Javořice Highlands, to the tripoint near Slavonice in the south). The Thaya river meanders along the border with Austria, and the tripoint of Moravia, Austria and Slovakia is at the confluence of the Thaya and Morava rivers. The northeast border with Silesia runs partly along the Moravice, Oder and Ostravice rivers.
Between 1782 and 1850, Moravia (then also known as "Moravia-Silesia") also included a small portion of the former province of Silesia – Austrian Silesia (when Frederick the Great annexed most of historic Silesia, the land of the upper and middle Oder river, to Prussia, Silesia's southernmost part remained with the Habsburgs). Today Moravia includes the South Moravian Region, the Zlín Region, the vast majority of the Olomouc Region, the southeastern half of the Vysočina Region, and parts of the Moravian-Silesian, Pardubice and South Bohemian regions. Geologically, Moravia covers a transitional area between the Bohemian Massif and the Carpathians (from the northwest to the southeast), and between the Danube basin and the North European Plain (from south to northeast). Its core geomorphological features are three wide valleys, namely the Dyje-Svratka Valley ("Dyjsko-svratecký úval"), the Upper Morava Valley ("Hornomoravský úval") and the Lower Morava Valley ("Dolnomoravský úval"). The first two form the westernmost part of the Outer Subcarpathia; the last is the northernmost part of the Vienna Basin. The valleys surround the low range of the Central Moravian Carpathians. The highest mountains of Moravia are situated on its northern border in Hrubý Jeseník; the highest peak is Praděd (1,491 m). Second highest is the massif of Králický Sněžník (1,424 m); third are the Moravian-Silesian Beskids in the far east, with Smrk (1,278 m), followed to the south by the Javorníky (1,072 m). The White Carpathians along the southeastern border rise up to 970 m at Velká Javořina. The spacious but moderate Bohemian-Moravian Highlands in the west reach 837 m at Javořice. The fluvial system of Moravia is very cohesive, as the region's border closely follows the watershed of the Morava river, and thus almost the entire area is drained by a single stream. Morava's biggest tributaries by far are the Thaya (Dyje) from the west and the Bečva from the east. The Morava and Thaya meet at the southernmost and lowest (148 m) point of Moravia. Small peripheral parts of Moravia belong to the catchment areas of the Elbe, the Váh and especially the Oder (in the northeast). The watershed line running along Moravia's border from west to north and east is part of the European Watershed. For centuries, there have been plans to build a waterway across Moravia to join the Danube and Oder river systems, using the natural route through the Moravian Gate. Evidence of the presence of members of the human genus, "Homo", dates back more than 600,000 years in the paleontological area of Stránská Skála. Attracted by suitable living conditions, early modern humans had settled in the region by the Paleolithic period. The Předmostí archeological (Cro-Magnon) site in Moravia is dated to between 24,000 and 27,000 years old. Caves in the Moravský kras were used by mammoth hunters. The Venus of Dolní Věstonice, the oldest ceramic figure in the world, was found during the excavation at Dolní Věstonice by Karel Absolon. Around 60 BC, the Celtic Volcae people withdrew from the region and were succeeded by the Germanic Quadi. Some of the events of the Marcomannic Wars took place in Moravia in AD 169–180. After the war exposed the weakness of Rome's northern frontier, half of the Roman legions (16 out of 33) were stationed along the Danube. In response to increasing numbers of Germanic settlers in frontier regions like Pannonia and Dacia, Rome planned to establish two new frontier provinces on the left bank of the Danube, Marcomannia and Sarmatia, covering today's Moravia and western Slovakia.
In the 2nd century AD, a Roman fortress stood on the vineyards hill known as and ("hillfort"), situated above the former village Mušov and above today's beach resort at Pasohlávky. During the reign of the Emperor Marcus Aurelius, the 10th Legion was assigned to control the Germanic tribes who had been defeated in the Marcomannic Wars. In 1927, the archeologist Gnirs, with the support of president Tomáš Garrigue Masaryk, began research on the site, located 80 km from Vindobona and 22 km to the south of Brno. The researchers found remnants of two masonry buildings, a "praetorium" and a "balneum" ("bath"), including a "hypocaustum". The discovery of bricks with the stamp of the Legio X Gemina and coins from the period of the emperors Antoninus Pius, Marcus Aurelius and Commodus facilitated dating of the locality. A variety of Germanic and major Slavic tribes crossed through Moravia during the Migration Period before Slavs established themselves in the 6th century AD. At the end of the 8th century, the Moravian Principality came into being in present-day south-eastern Moravia, Záhorie in south-western Slovakia and parts of Lower Austria. In 833 AD, this became the state of Great Moravia with the conquest of the Principality of Nitra (present-day Slovakia). Their first king was Mojmír I (ruled 830–846). Louis the German invaded Moravia and replaced Mojmír I with his nephew Rastiz who became St. Rastislav. St. Rastislav (846–870) tried to emancipate his land from the Carolingian influence, so he sent envoys to Rome to get missionaries to come. When Rome refused he turned to Constantinople to the Byzantine emperor Michael. The result was the mission of Saints Cyril and Methodius who translated liturgical books into Slavonic, which had lately been elevated by the Pope to the same level as Latin and Greek. Methodius became the first Moravian archbishop, but after his death the German influence again prevailed and the disciples of Methodius were forced to flee. Great Moravia reached its greatest territorial extent in the 890s under Svatopluk I. At this time, the empire encompassed the territory of the present-day Czech Republic and Slovakia, the western part of present Hungary (Pannonia), as well as Lusatia in present-day Germany and Silesia and the upper Vistula basin in southern Poland. After Svatopluk's death in 895, the Bohemian princes defected to become vassals of the East Frankish ruler Arnulf of Carinthia, and the Moravian state ceased to exist after being overrun by invading Magyars in 907. Following the defeat of the Magyars by Emperor Otto I at the Battle of Lechfeld in 955, Otto's ally Boleslaus I, the Přemyslid ruler of Bohemia, took control over Moravia. Bolesław I Chrobry of Poland annexed Moravia in 999, and ruled it until 1019, when the Přemyslid prince Bretislaus recaptured it. Upon his father's death in 1034, Bretislaus became the ruler of Bohemia. In 1055, he decreed that Bohemia and Moravia would be inherited together by primogeniture, although he also provided that his younger sons should govern parts (quarters) of Moravia as vassals to his oldest son. Throughout the Přemyslid era, junior princes often ruled all or part of Moravia from Olomouc, Brno or Znojmo, with varying degrees of autonomy from the ruler of Bohemia. Dukes of Olomouc often acted as the "right hand" of Prague dukes and kings, while Dukes of Brno and especially those of Znojmo were much more insubordinate. 
Moravia reached its height of autonomy in 1182, when Emperor Frederick I elevated Conrad II Otto of Znojmo to the status of a margrave, immediately subject to the emperor, independent of Bohemia. This status was short-lived: in 1186, Conrad Otto was forced to obey the supreme rule of Bohemian duke Frederick. Three years later, Conrad Otto succeeded to Frederick as Duke of Bohemia and subsequently canceled his margrave title. Nevertheless, the margrave title was restored in 1197 when Vladislaus III of Bohemia resolved the succession dispute between him and his brother Ottokar by abdicating from the Bohemian throne and accepting Moravia as a vassal land of Bohemian (i.e., Prague) rulers. Vladislaus gradually established this land as Margraviate, slightly administratively different from Bohemia. After the Battle of Legnica, the Mongols carried their raids into Moravia. The main line of the Přemyslid dynasty became extinct in 1306, and in 1310 John of Luxembourg became Margrave of Moravia and King of Bohemia. In 1333, he made his son Charles the next Margrave of Moravia (later in 1346, Charles also became the King of Bohemia). In 1349, Charles gave Moravia to his younger brother John Henry who ruled in the margraviate until his death in 1375, after him Moravia was ruled by his oldest son Jobst of Moravia who was in 1410 elected the Holy Roman King but died in 1411 (he is buried with his father in the Church of St. Thomas in Brno – the Moravian capital from which they both ruled). Moravia and Bohemia remained within the Luxembourg dynasty of Holy Roman kings and emperors (except during the Hussite wars), until inherited by Albert II of Habsburg in 1437. After his death followed the interregnum until 1453; land (as the rest of lands of the Bohemian Crown) was administered by the landfriedens ("landfrýdy"). The rule of young Ladislaus the Posthumous subsisted only less than five years and subsequently (1458) the Hussite George of Poděbrady was elected as the king. He again reunited all Czech lands (then Bohemia, Moravia, Silesia, Upper & Lower Lusatia) into one-man ruled state. In 1466, Pope Paul II excommunicated George and forbade all Catholics (i.e. about 15% of population) from continuing to serve him. The Hungarian crusade followed and in 1469 Matthias Corvinus conquered Moravia and proclaimed himself (with assistance of rebelling Bohemian nobility) as the king of Bohemia. The subsequent 21-year period of a divided kingdom was decisive for the rising awareness of a specific Moravian identity, distinct from that of Bohemia. Although Moravia was reunited with Bohemia in 1490 when Vladislaus Jagiellon, king of Bohemia, also became king of Hungary, some attachment to Moravian "freedoms" and resistance to government by Prague continued until the end of independence in 1620. In 1526, Vladislaus' son Louis died in battle and the Habsburg Ferdinand I was elected as his successor. After the death of King Louis II of Hungary and Bohemia in 1526, Ferdinand I of Austria was elected King of Bohemia and thus ruler of the Crown of Bohemia (including Moravia). The epoch 1526–1620 was marked by increasing animosity between Catholic Habsburg kings (emperors) and the Protestant Moravian nobility (and other Crowns') estates. Moravia, like Bohemia, was a Habsburg possession until the end of World War I. In 1573 the Jesuit University of Olomouc was established; this was the first university in Moravia. 
The establishment of a special papal seminary, Collegium Nordicum, made the university a centre of the Catholic Reformation and of the effort to revive Catholicism in Central and Northern Europe. The second largest group of students was from Scandinavia. Brno and Olomouc served as Moravia's capitals until 1641. As the only city to successfully resist the Swedish invasion, Brno became the sole capital following the capture of Olomouc. The Margraviate of Moravia had, from 1348 in Olomouc and Brno, its own Diet, or parliament, the "zemský sněm" ("Landtag" in German), whose deputies from 1905 onward were elected separately from the ethnically separate German and Czech constituencies. The oldest surviving theatre building in Central Europe, the Reduta Theatre, was established in 17th-century Moravia. Ottoman Turks and Tatars invaded the region in 1663, taking 12,000 captives. In 1740, Moravia was invaded by Prussian forces under Frederick the Great, and Olomouc was forced to surrender on 27 December 1741. A few months later the Prussians were repelled, mainly because of their unsuccessful siege of Brno in 1742. In 1758, Olomouc was besieged by the Prussians again, but this time its defenders forced the Prussians to withdraw following the Battle of Domstadtl. In 1777, a new Moravian bishopric was established in Brno, and the Olomouc bishopric was elevated to an archbishopric. In 1782, the Margraviate of Moravia was merged with Austrian Silesia into "Moravia-Silesia", with Brno as its capital. This lasted until 1850. Moravia was one of the 17 crown lands of the Cisleithanian part of Austria-Hungary after 1867. According to the Austro-Hungarian census of 1910, the proportion of Czechs in the population of Moravia at the time (2,622,000 people) was 71.8%, while the proportion of Germans was 27.6%. Following the break-up of the Austro-Hungarian Empire in 1918, Moravia became part of Czechoslovakia. As one of the five lands of Czechoslovakia, it had restricted autonomy. In 1928 Moravia ceased to exist as a territorial unit and was merged with Czech Silesia into the Moravian-Silesian Land (yet with the natural dominance of Moravia). By the Munich Agreement (1938), the southwestern and northern peripheries of Moravia, which had a German-speaking majority, were annexed by Nazi Germany, and during the German occupation of Czechoslovakia (1939–1945), the remnant of Moravia was an administrative unit within the Protectorate of Bohemia and Moravia. During World War II, Moravia lost 46,306 Jews (counted by religious affiliation). In 1945, after the end of World War II and the Allied defeat of Germany, Czechoslovakia expelled the ethnic German minority of Moravia to Germany and Austria. The Moravian-Silesian Land was restored with Moravia as part of it, and towns and villages that had been left by the former German inhabitants were resettled by Czech speakers. In 1949 the territorial division of Czechoslovakia was radically changed, as the Moravian-Silesian Land was abolished and the lands were replaced by "kraje" (regions), whose borders substantially differ from the historical Bohemian-Moravian border, so Moravia politically ceased to exist after more than 1,100 years (833–1949) of history. Although another administrative reform in 1960 implemented (among others) the North Moravian and South Moravian regions ("Severomoravský" and "Jihomoravský kraj"), with capitals in Ostrava and Brno respectively, their combined area only roughly corresponded to historical Moravia and, chiefly, there was no land or federal autonomy, unlike in Slovakia.
After the fall of the Soviet Union and the whole Eastern Bloc, the Czechoslovak Federal Assembly condemned the abolition of the Moravian-Silesian Land and in 1990 expressed its "firm conviction that this injustice will be corrected". However, after the breakup of Czechoslovakia into the Czech Republic and Slovakia in 1993, the Moravian area remained an integral part of Czech territory, and the latest administrative division of the Czech Republic (introduced in 2000) is similar to the administrative division of 1949. Nevertheless, the federalist or separatist movement in Moravia is completely marginal. The centuries-old historical Bohemian-Moravian border has been preserved up to now only by the Czech Roman Catholic administration, as the Ecclesiastical Province of Moravia corresponds to the former Moravian-Silesian Land. The popular perception of the Bohemian-Moravian border's location is distorted by the memory of the 1960 regions (whose boundaries are still partly in use). An area in South Moravia, around Hodonín and Břeclav, is part of the Vienna Basin. Petroleum and lignite are found there in abundance. The main economic centres of Moravia are Brno, Olomouc and Zlín, plus Ostrava, lying directly on the Moravian-Silesian border. As well as agriculture in general, Moravia is noted for its viticulture; it contains 94% of the Czech Republic's vineyards and is at the centre of the country's wine industry. Wallachia has at least a 400-year-old tradition of slivovitz making. The Czech automotive industry has also played a large role in Moravian industry; plants such as the one in Prostějov or Tatra in Kopřivnice produced many aerodynamic automobiles in the 20th century. Moravia is also the centre of the Czech firearm industry, as the vast majority of Czech firearms manufacturers (e.g. CZUB, Zbrojovka Brno, Czech Small Arms, Czech Weapons, ZVI, Great Gun) are based in Moravia. Almost all well-known Czech sporting, self-defence, military and hunting firearms come from Moravia. Also, Meopta rifle scopes are of Moravian origin. The original Bren gun was conceived here, as were the assault rifles CZ-805 BREN and Sa vz. 58, and the handguns CZ 75 and ZVI Kevin (also known as the "Micro Desert Eagle"). The Zlín Region hosts several aircraft manufacturers, namely Let Kunovice (also known as Aircraft Industries, a.s.), ZLIN AIRCRAFT a.s. Otrokovice (formerly well known as Moravan Otrokovice), Evektor-Aerotechnik and Czech Sport Aircraft. Sport aircraft are also manufactured in Jihlava by Jihlavan Airplanes/Skyleader. Aircraft production in the region started in the 1930s; there are signs of recovery in recent years, and production is expected to grow from 2013 onwards. Machinery has been the most important industrial sector in the region, especially in South Moravia, for many decades. The main centres of machinery production are Brno (Zbrojovka Brno, Zetor, První brněnská strojírna, Siemens), Blansko (ČKD Blansko, Metra), Adamov (ADAST), Kuřim (TOS Kuřim), Boskovice (Minerva, Novibra) and Břeclav (Otis Elevator Company), together with a large number of other variously sized machinery or machining factories, companies or workshops spread all over Moravia. The beginnings of the electrical industry in Moravia date back to 1918. The biggest centres of electrical production are Brno (VUES, ZPA Brno, EM Brno), Drásov, Frenštát pod Radhoštěm and Mohelnice (currently Siemens). The Moravians are generally a Slavic ethnic group who speak various (generally more archaic) dialects of Czech.
Before the expulsion of Germans from Moravia, the Moravian German minority also referred to themselves as "Moravians" ("Mährer"). Those expelled and their descendants continue to identify as Moravian. Some Moravians assert that Moravian is a language distinct from Czech; however, their position is not widely supported by academics or the public. Some Moravians identify as an ethnically distinct group; the majority consider themselves to be ethnically Czech. In the census of 1991 (the first census in history in which respondents were allowed to claim Moravian nationality), 1,362,000 people (13.2% of the Czech population) identified as being of Moravian nationality (or ethnicity). In some parts of Moravia (mostly in the centre and south), the majority of the population identified as Moravian rather than Czech. In the census of 2001, the number of Moravians had decreased to 380,000 (3.7% of the country's population). In the census of 2011, this number rose to 522,474 (4.9% of the Czech population). Moravia historically had a large minority of ethnic Germans, some of whom had arrived as early as the 13th century at the behest of the Přemyslid dynasty. Germans continued to come to Moravia in waves, culminating in the 18th century. They lived in the main city centres and in the countryside along the border with Austria (stretching up to Brno) and along the border with Silesia at Jeseníky, and also in two language islands, around Jihlava and around Moravská Třebová. After the Second World War, Czechoslovakia almost fully expelled them in retaliation for Nazi German efforts to create a Greater Germanic Reich in Central Europe. Moravia has been the birthplace of many notable people. Moravia can be divided, on the basis of dialect and folklore, into several ethnographic regions of comparable significance. In this sense, it is more heterogeneous than Bohemia. Significant parts of Moravia, usually those formerly inhabited by German speakers, are dialectally indistinct, as they have been resettled by people from various Czech (and Slovak) regions. The principal cultural regions of Moravia retain their own distinct dialects and folk traditions.
https://en.wikipedia.org/wiki?curid=20216
Murray Rothbard Murray Newton Rothbard (March 2, 1926 – January 7, 1995) was an American heterodox economist of the Austrian School, economic historian and political theorist. Rothbard was the founder and leading theoretician of anarcho-capitalism, a staunch advocate of historical revisionism and a central figure in the 20th-century American libertarian movement. He wrote over twenty books on political theory, revisionist history, economics, and other subjects. Rothbard argued that all services provided by the "monopoly system of the corporate state" could be provided more efficiently by the private sector and wrote that the state is "the organization of robbery systematized and writ large". He called fractional-reserve banking a form of fraud and opposed central banking. He categorically opposed all military, political, and economic interventionism in the affairs of other nations. According to his protégé Hans-Hermann Hoppe, "[t]here would be no anarcho-capitalist movement to speak of without Rothbard". Libertarian economist Jeffrey Herbener, who calls Rothbard his friend and "intellectual mentor", wrote that Rothbard received "only ostracism" from mainstream academia. Rothbard rejected mainstream economic methodologies and instead embraced the praxeology of his most important intellectual precursor, Ludwig von Mises. To promote his economic and political ideas, Rothbard joined Lew Rockwell and Burton Blumert in 1982 to establish the Mises Institute in Alabama. Rothbard's parents were David and Rae Rothbard, Jewish immigrants to the United States from Poland and Russia, respectively. David was a chemist. Murray attended Birch Wathen Lenox School, a private school in New York City. He later said he much preferred Birch Wathen to the "debasing and egalitarian public school system" he had attended in the Bronx. Rothbard wrote of having grown up as a "right-winger" (adherent of the "Old Right") among friends and neighbors who were "communists or fellow-travelers". He was a member of The New York Young Republican Club in his youth. Rothbard characterized his immigrant father as an individualist who embraced the American values of minimal government, free enterprise, private property and "a determination to rise by one's own merits"; Rothbard himself recalled: "[A]ll socialism seemed to me monstrously coercive and abhorrent". Rothbard attended Columbia University, where he received a Bachelor of Arts degree in mathematics in 1945 and a Ph.D. in economics in 1956. The delay in receiving his Ph.D. was due in part to conflict with his advisor, Joseph Dorfman, and in part to Arthur Burns's rejecting his dissertation. Burns was a longtime friend of the Rothbards and their neighbor at their Manhattan apartment building. It was only after Burns went on leave from the Columbia faculty to head President Eisenhower's Council of Economic Advisers that Rothbard's thesis was accepted and he received his doctorate. Rothbard later said that all his fellow students were extreme leftists and that he was one of only two Republicans at Columbia at the time. During the 1940s, Rothbard became acquainted with Frank Chodorov and read widely in libertarian-oriented works by Albert Jay Nock, Garet Garrett, Isabel Paterson, H. L. Mencken, and Austrian economist Ludwig von Mises. In the early 1950s, when Mises was teaching in the Wall Street division of the New York University Stern School of Business, Rothbard attended his unofficial seminar. Rothbard was greatly influenced by Mises's book "Human Action".
He attracted the attention of the William Volker Fund, a group that provided financial backing to promote right-wing ideologies in the 1950s and early 1960s. The Volker Fund paid Rothbard to write a textbook to explain "Human Action" in a form that could be used to introduce college undergraduates to Mises's views; a sample chapter he wrote on money and credit won Mises's approval. For ten years, the Volker Fund paid him a retainer as a "senior analyst". As Rothbard continued his work, he enlarged the project. The result was his book "Man, Economy, and State", published in 1962. Upon its publication, Mises praised Rothbard's work effusively. In 1953, Rothbard married JoAnn Schumacher (1928–1999), whom he called Joey, in New York City. JoAnn was his editor and a close adviser as well as hostess of his Rothbard Salon. They enjoyed a loving marriage and Rothbard often called her "the indispensable framework" of his life and achievements. According to Joey, the Volker Fund's patronage allowed Rothbard to work from home as a freelance theorist and pundit for the first 15 years of their marriage. The Volker Fund collapsed in 1962, leading Rothbard to seek employment from various New York academic institutions. He was offered a part-time position teaching economics to engineering students at Brooklyn Polytechnic Institute in 1966 at age 40. The institution had no economics department or economics majors and Rothbard derided its social science department as "Marxist", but Justin Raimondo writes that Rothbard liked teaching at Brooklyn Polytechnic because working only two days a week gave him freedom to contribute to developments in libertarian politics. Rothbard continued in this role until 1986. Then 60 years old, Rothbard left Brooklyn Polytechnic Institute for the Lee Business School at the University of Nevada, Las Vegas (UNLV), where he held the title of S.J. Hall Distinguished Professor of Economics, a chair endowed by a libertarian businessman. According to Rothbard's friend, colleague and fellow Misesian economist Hans-Hermann Hoppe, Rothbard led a "fringe existence" in academia, but he was able to attract a large number of "students and disciples" through his writings, thereby becoming "the creator and one of the principal agents of the contemporary libertarian movement". He kept his position at UNLV from 1986 until his death. Rothbard founded the Center for Libertarian Studies in 1976 and the "Journal of Libertarian Studies" in 1977. In 1982, he co-founded the Ludwig von Mises Institute in Auburn, Alabama, and was vice president of academic affairs until 1995. Rothbard also founded the Institute's "Review of Austrian Economics", a heterodox economics journal later renamed the "Quarterly Journal of Austrian Economics", in 1987. After Rothbard's death, Joey reflected on his happiness and bright spirit, saying, "he managed to make a living for 40 years without having to get up before noon. This was important to him". She recalled how Rothbard would begin every day with a phone conversation with his colleague Lew Rockwell: "Gales of laughter would shake the house or apartment, as they checked in with each other. Murray thought it was the best possible way to start a day". Rothbard was irreligious and agnostic about God, describing himself as a "mixture of an agnostic and a Reform Jew". Despite identifying as an agnostic and an atheist, he was critical of the "left-libertarian hostility to religion". 
In Rothbard's later years, many of his friends anticipated that he would convert to Catholicism, but he never did. The "New York Times" obituary called Rothbard "an economist and social philosopher who fiercely defended individual freedom against government intervention". In 1954, Rothbard, along with several other attendees of Mises's seminar, joined the circle of the novelist Ayn Rand, the founder of Objectivism. He soon parted from her, writing among other things that her ideas were not as original as she proclaimed, but similar to those of Aristotle, Thomas Aquinas and Herbert Spencer. In 1958, after the publication of Rand's novel "Atlas Shrugged", Rothbard wrote her a "fan letter", calling the book "an infinite treasure house" and "not merely the greatest novel ever written, [but] one of the very greatest books ever written, fiction or nonfiction". He also wrote: "[Y]ou introduced me to the whole field of natural rights and natural law philosophy", which prompted him to study "the glorious natural rights tradition". Rothbard rejoined Rand's circle for a few months, but soon broke with Rand again over various differences, including his defense of anarchism. Rothbard later satirized Rand's acolytes in his unpublished one-act farce "Mozart Was a Red".
https://en.wikipedia.org/wiki?curid=20217
Mel Brooks Mel Brooks (born Melvin Kaminsky; June 28, 1926) is an American director, writer, actor, comedian, producer and composer. He is known as a creator of broad film farces and comedic parodies. Brooks began his career as a comic and a writer for Sid Caesar's variety show "Your Show of Shows" (1950–54) alongside Woody Allen, Neil Simon, and Larry Gelbart. Together with Carl Reiner, he created the comic character The 2000 Year Old Man. He wrote, with Buck Henry, the hit television comedy series "Get Smart", which ran from 1965 to 1970. In middle age, Brooks became one of the most successful film directors of the 1970s, with many of his films being among the top 10 moneymakers of the year they were released. His best-known films include "The Producers" (1967), "The Twelve Chairs" (1970), "Blazing Saddles" (1974), "Young Frankenstein" (1974), "Silent Movie" (1976), "High Anxiety" (1977), "History of the World, Part I" (1981), "Spaceballs" (1987), and "" (1993). A musical adaptation of his first film, "The Producers", ran on Broadway from 2001 to 2007, and was remade into a musical film in 2005 by Brooks himself. In 2001, having previously won an Emmy, a Grammy and an Oscar, he joined a small list of EGOT winners with his Tony Award wins for "The Producers". He received a Kennedy Center Honor in 2009, a Hollywood Walk of Fame star in 2010, the 41st AFI Life Achievement Award in June 2013, a British Film Institute Fellowship in March 2015, a National Medal of Arts in September 2016, and a BAFTA Fellowship in February 2017. Three of his films ranked in the American Film Institute's list of the top 100 comedy films of the past 100 years (1900–2000), all of which ranked in the top 15 of the list: "Blazing Saddles" at number 6, "The Producers" at number 11, and "Young Frankenstein" at number 13. Brooks was married to the actress Anne Bancroft from 1964 until her death in 2005. Their son Max Brooks is an actor and author, known for his novel "" (2006). Brooks was born Melvyn Kaminsky on June 28, 1926, in Brooklyn, New York, to Max (1895–1929) and Kate (née Brookman) Kaminsky (1896–1989), and grew up in Williamsburg. His father's family were German Jews from Danzig (present-day Gdańsk, Poland); his mother's family were Jews from Kiev, in the Pale of Settlement of the Russian Empire (present-day Ukraine). He had three older brothers: Irving, Lenny, and Bernie. Brooks' father died of kidney disease at 34 when Brooks was 2 years old. He has said of his father's death, "There's an outrage there. I may be angry at God, or at the world, for that. And I'm sure a lot of my comedy is based on anger and hostility. Growing up in Williamsburg, I learned to clothe it in comedy to spare myself problems—like a punch in the face." Brooks was a small, sickly boy who often was bullied and teased by his classmates because of his size. He grew up in tenement housing. At age 9, Brooks went to a Broadway show with his uncle Joe—a taxi driver who drove the Broadway doormen back to Brooklyn for free and was given the tickets in gratitude—and saw "Anything Goes" with William Gaxton, Ethel Merman and Victor Moore at the Alvin Theater. After the show, he told his uncle that he was not going to work in the garment district like everyone else but was absolutely going into show business. When Brooks was 14 he gained employment as a pool tummler. Brooks kept his guests amused with his crazy antics. 
In a "Playboy" interview Brooks explained that one day he stood at the edge of a diving board wearing a large overcoat and 2 suitcases full of rocks who then announced: "Business is terrible! I can't go on!" before jumping, fully clothed into the pool. He was taught by Buddy Rich (who had also grown up in Williamsburg) how to play the drums and started to earn money as a musician when he was 14. During Brooks' time as a drummer he was given his first opportunity as a comedian at the age of 16 following an ill MC. During his teens, Melvyn Kaminsky officially changed his name to Mel Brooks, influenced by his mother's maiden name Brookman, after being confused with the trumpeter Max Kaminsky. After attending Abraham Lincoln High School for a year, Brooks graduated from Eastern District High School, studying for a year at Brooklyn College as a psychology major in 1946. Brooks was drafted into the United States Army in 1944. After scoring highly on the Army General Classification Test (a Stanford-Binet-type IQ test), he was sent to the elite Army Specialized Training Program (ASTP) at the Virginia Military Institute to be taught skills such as military engineering, foreign languages or medicine. Manpower shortages led the Army to disband the ASTP so Brooks returned to basic training at Fort Sill, Oklahoma in May 1944. Brooks served as a corporal in the 1104 Engineer Combat Battalion, 78th Infantry Division, defusing land mines as the allies advanced into Nazi Germany. With the end of the war in Europe, Brooks took part in organizing shows for captured Germans and American forces. After the war, Brooks started working in various Borscht Belt resorts and nightclubs in the Catskill Mountains as a drummer and pianist. After a regular comic at one of the nightclubs was too sick to perform one night, Brooks started working as a stand-up comic, telling jokes and doing movie-star impressions. He also began acting in summer stock in Red Bank, New Jersey, and did some radio work. He eventually worked his way up to the comically aggressive job of tummler (master entertainer) at Grossinger's, one of the Borscht Belt's most famous resorts. Brooks found more rewarding work behind the scenes, becoming a comedy writer for television. In 1949, his friend Sid Caesar hired Brooks to write jokes for the DuMont/NBC series "The Admiral Broadway Revue", paying him $50 a week. In 1950, Caesar created the revolutionary variety comedy series "Your Show of Shows" and hired Brooks as a writer along with Carl Reiner, Neil Simon, Danny Simon, and head writer Mel Tolkin. The writing staff proved widely influential. Reiner, as creator of "The Dick Van Dyke Show", based Morey Amsterdam's character Buddy Sorell on Brooks. Likewise, the film "My Favorite Year" (1982) is loosely based on Brooks' experiences as a writer on the show including an encounter with the actor Errol Flynn. Neil Simon's play "Laughter on the 23rd Floor" (1993) is also loosely based on the production of the show, and the character Ira Stone is based on Brooks. "Your Show of Shows" ended in 1954 when performer Imogene Coca left to host her own show. Caesar then created "Caesar's Hour" with most of the same cast and writers (including Brooks and adding Woody Allen and Larry Gelbart). "Caesar's Hour" ran from 1954 until 1957. Brooks and co-writer Reiner had become close friends and began to casually improvise comedy routines when they were not working. Reiner played the straight-man interviewer and set Brooks up as anything from a Tibetan monk to an astronaut. 
As Reiner explained: "In the evening, we'd go to a party and I'd pick a character for him to play. I never told him what it was going to be." On one of these occasions, Reiner's suggestion concerned a 2000-year-old man who had witnessed the crucifixion of Jesus Christ (who "came in the store but never bought anything"), had been married several hundred times, and had "over forty-two thousand children, and not one comes to visit me." At first Brooks and Reiner performed the routine only for friends, but by the late 1950s it had gained a reputation in New York City. Kenneth Tynan saw the comedy duo perform at a party in 1959 and wrote that Brooks "was the most original comic improvisor I had ever seen." In 1960, Brooks moved from New York to Hollywood. He and Reiner began performing the "2000 Year Old Man" act on "The Steve Allen Show". Their performances led to the release of the comedy album "2000 Years with Carl Reiner and Mel Brooks", which sold over a million copies in 1961. They eventually expanded their routine with two more albums in 1961 and 1962, a revival in 1973, a 1975 animated TV special, and a reunion album in 1998. At one point, when Brooks had financial and career struggles, the record sales from the 2000 Year Old Man were his chief source of income. Brooks adapted the 2000 Year Old Man character to create the 2500 Year Old Brewmaster for Ballantine Beer in the 1960s. Interviewed by Dick Cavett in a series of ads, the Brewmaster (in a German accent, as opposed to the 2000 Year Old Man's Yiddish accent) said he was inside the original Trojan horse and "could've used a six-pack of fresh air." Brooks was involved in the creation of the musical "All American", which debuted on Broadway in 1962. Brooks wrote the book of the musical, with lyrics by Lee Adams and music by Charles Strouse. The show starred Ray Bolger as a southern science professor at a large university who applies the principles of engineering to the college's football team, which then begins to win games. The show was directed by Joshua Logan, who script-doctored the second act and added a gay subtext to the plot. The show ran for 80 performances and received two Tony Award nominations. The animated short film "The Critic" (1963), a satire of arty, esoteric cinema, was conceived by Brooks and directed by Ernest Pintoff. Brooks supplied running commentary as the baffled moviegoer trying to make sense of the obscure visuals. The short film won the Academy Award for Best Animated Short Film. With comedy writer Buck Henry, Brooks created a TV comedy series titled "Get Smart", about a bumbling James Bond-inspired spy. Brooks explained: "I was sick of looking at all those nice sensible situation comedies. They were such distortions of life... I wanted to do a crazy, unreal comic-strip kind of thing about something besides a family. No one had ever done a show about an idiot before. I decided to be the first." The show starred Don Adams as Maxwell Smart, Agent 86. The series ran from 1965 until 1970, although Brooks had little involvement after the first season. "Get Smart" was highly rated for most of its run and won seven Emmy Awards, including Outstanding Comedy Series in 1968 and 1969. For several years, Brooks had been toying with a bizarre and unconventional idea for a musical comedy about Adolf Hitler. Brooks explored the idea as a novel and a play before finally writing a script. Eventually, he was able to find two producers to fund the project, Joseph E. 
Levine and Sidney Glazier, and made his first feature film, "The Producers" (1967). "The Producers" was so brazen in its satire that major studios would not touch it, nor would many exhibitors. Brooks finally found an independent distributor who released it as an art film, a specialized attraction. In 1968, Brooks received an Oscar for Best Original Screenplay for the film instead of such writers as Stanley Kubrick and John Cassavetes. "The Producers" became a smash underground hit, first on the nationwide college circuit, then in revivals and on home video. Brooks later turned it into a musical, which became hugely successful on Broadway, receiving an unprecedented twelve Tony awards. With the moderate financial success of the film "The Producers", Glazier financed Brooks' next film, "The Twelve Chairs" (1970). Loosely based on Ilf and Petrov's 1928 Russian novel of the same name about greedy materialism in post-revolutionary Russia, the film stars Ron Moody, Frank Langella, and Dom DeLuise as three men individually searching for a fortune in diamonds hidden in a set of 12 antique chairs. Brooks makes a cameo appearance as an alcoholic ex-serf who "yearns for the regular beatings of yesteryear." The film was shot in Yugoslavia with a budget of $1.5 million. The film received poor reviews and was not financially successful. Brooks then wrote an adaptation of Oliver Goldsmith's "She Stoops to Conquer", but was unable to sell the idea to any studio and believed that his career was over. In 1972, Brooks met agent David Begelman, who helped him set up a deal with Warner Brothers to hire Brooks (as well as Richard Pryor, Andrew Bergman, Norman Steinberg, and Al Uger) as a script doctor for an unproduced script called "Tex-X". Eventually, Brooks was hired as director for what became "Blazing Saddles" (1974), his third film. "Blazing Saddles" starred Cleavon Little, Gene Wilder, Harvey Korman, Slim Pickens, Madeline Kahn, Alex Karras, and Brooks himself, with cameos by Dom DeLuise and Count Basie. The film had music by Brooks and John Morris, and had a modest budget of $2.6 million. This film is a satire on the Western film genre and references older films such as "Destry Rides Again" (1939), "High Noon" (1952), "Once Upon a Time in the West" (1968), and "The Treasure of the Sierra Madre" (1948), as well as a surreal scene towards the end of the film referencing the extravagant musicals of Busby Berkeley. Upon its release, "Blazing Saddles" was the second-highest US grossing film of 1974, earning $119.5 million worldwide. Despite mixed reviews, the film was a success with younger audiences. It was nominated for three Academy Awards: Best Actress in a Supporting Role for Madeline Kahn, Best Film Editing, and Best Music, Original Song. The film won the Writers Guild of America Award for "Best Comedy Written Directly for the Screen" and in 2006 it was deemed "culturally, historically or aesthetically significant" by the Library of Congress and was selected for preservation in the National Film Registry. Brooks has said that the film "has to do with love more than anything else. I mean when that black guy rides into that Old Western town and even a little old lady says 'Up yours, nigger!', you know that his heart is broken. So it's really the story of that heart being mended." 
When Gene Wilder replaced Gig Young as the Waco Kid, he agreed to do so only on the condition that Brooks' next film would be an idea Wilder had been working on: a spoof of the Universal series of "Frankenstein" films from several decades earlier. After the filming of "Blazing Saddles" was completed, Wilder and Brooks began writing the script for "Young Frankenstein" and shot the film in the spring of 1974. It starred Wilder, Marty Feldman, Peter Boyle, Teri Garr, Madeline Kahn, Cloris Leachman and Kenneth Mars, with Gene Hackman in a cameo role. Brooks' voice can be heard three times: first as the wolf howl when the characters are on their way to the castle, second as the voice of Victor Frankenstein when the characters discover the laboratory, and third as the cat sound when Gene Wilder accidentally throws a dart out of the window in a scene with Kenneth Mars. Composer John Morris again provided the score, and Kenneth Strickfaden, the special-effects veteran of the Universal monster films, worked on the film. "Young Frankenstein" was the third-highest-grossing domestic film of 1974, just behind "Blazing Saddles". It earned $86 million worldwide and received two Academy Award nominations: Best Adapted Screenplay and Best Sound. It received some of the best reviews of Brooks' career, and even critic Pauline Kael liked the film, saying: "Brooks makes a leap up as a director because, although the comedy doesn't build, he carries the story through ... Brooks even has a satisfying windup, which makes this just about the only comedy of recent years that doesn't collapse." In 1975, at the height of his movie career, Brooks tried TV again with "When Things Were Rotten", a Robin Hood parody that lasted only 13 episodes. Nearly 20 years later, in response to the 1991 hit film "Robin Hood: Prince of Thieves", Brooks mounted another Robin Hood parody with "Robin Hood: Men in Tights" (1993). Brooks' film resurrected several pieces of dialogue from his TV series, as well as from earlier Brooks films. Brooks followed up his two hit films with an audacious idea: the first feature-length silent comedy in four decades. "Silent Movie" (1976) was written by Brooks and Ron Clark, starring Brooks in his first leading role, Dom DeLuise, Marty Feldman, Sid Caesar, Bernadette Peters, and, in cameo roles playing themselves, Paul Newman, Burt Reynolds, James Caan, Liza Minnelli, Anne Bancroft, and the famously silent mime Marcel Marceau, who ironically uttered the film's only word of audible dialogue: "Non!" Although not as successful as his previous two films, "Silent Movie" was a hit and grossed $36 million. Later that year, Brooks was named number 5 on a list of the Top Ten Box Office Stars. Brooks' parody of the films of Alfred Hitchcock in "High Anxiety" (1977) was written by Brooks, Ron Clark, Rudy De Luca, and Barry Levinson. It was the first movie produced by Brooks himself. It starred Brooks, Madeline Kahn, Cloris Leachman, Harvey Korman, Ron Carey, Howard Morris, and Dick Van Patten. The film satirizes such Hitchcock films as "Vertigo", "Spellbound", "Psycho", "The Birds", "North by Northwest", "Dial M for Murder", and "Suspicion". Brooks stars as Professor Richard H. (for Harpo) Thorndyke, a Nobel Prize-winning psychologist who also happens to suffer from "high anxiety". By 1980, Gene Siskel and Roger Ebert had referred to Mel Brooks and Woody Allen as "the two most successful comedy directors in the world today ... America's two funniest filmmakers." 
Released that year was the dramatic film "The Elephant Man", directed by David Lynch and produced by Brooks. Knowing that anyone seeing a poster reading "Mel Brooks presents "The Elephant Man"" would expect a comedy, he set up the company Brooksfilms. Brooksfilms has since produced a number of non-comedy films, including "Frances" (1982), "The Fly" (1986), and "84 Charing Cross Road" (1987), starring Anthony Hopkins and Anne Bancroft, along with comedies, including Richard Benjamin's "My Favorite Year" (1982), which was partially based on Mel Brooks' real life. Brooks had sought to purchase the rights to "84 Charing Cross Road" for his wife, Anne Bancroft, for many years. He also produced the comedy "Fatso" (1980), which Bancroft directed. In 1981, Brooks joked that the only genres that he had not spoofed were historical epics and Biblical spectacles. "History of the World, Part I" was a tongue-in-cheek look at human culture from the Dawn of Man to the French Revolution. The film was written, produced, and directed by Brooks, with narration by Orson Welles. This film was another modest financial hit, earning $31 million. It received mixed critical reviews. Critic Pauline Kael, who for years had been critical of Brooks, said: "Either you get stuck thinking about the bad taste or you let yourself laugh at the obscenity in the humor as you do Buñuel's perverse dirty jokes." Brooks produced and starred in (but did not write or direct) a remake of Ernst Lubitsch's 1942 film "To Be or Not to Be". Brooks' 1983 version was directed by Alan Johnson and starred Brooks, Anne Bancroft, Charles Durning, Tim Matheson, Jose Ferrer, and Christopher Lloyd. The film garnered international publicity by featuring a controversial song on its soundtrack, "To Be or Not to Be (The Hitler Rap)", satirizing German society in the 1940s, with Brooks playing Hitler. The second movie Brooks directed in the 1980s was "Spaceballs" (1987), a parody of science fiction films, mainly "Star Wars". The film starred Bill Pullman, John Candy, Rick Moranis, Daphne Zuniga, Dick Van Patten, Joan Rivers, Dom DeLuise, and Brooks. In 1989, Brooks (with co-executive producer Alan Spencer) made another attempt at television success with the sitcom "The Nutt House", which featured Brooks regulars Harvey Korman and Cloris Leachman and was originally broadcast on NBC, but the network aired only five of the eleven episodes produced before canceling the series. During the next decade, Brooks directed "Life Stinks" (1991), "Robin Hood: Men in Tights" (1993), and "Dracula: Dead and Loving It" (1995). "People" magazine suggested that "anyone in a mood for a hearty laugh couldn't do better than" "Robin Hood: Men in Tights", which gave fans a parody of Robin Hood films, especially "Robin Hood: Prince of Thieves". Like Brooks' other films, it is filled with one-liners and even the occasional breaking of the fourth wall. "Robin Hood: Men in Tights" was Brooks' second time exploring the life of Robin Hood, the first, as mentioned above, having been with his 1975 TV show, "When Things Were Rotten". "Life Stinks" was a financial and critical failure, but is notable as being the only film that Brooks directed that is neither a parody nor a film about other films or theater. ("The Twelve Chairs" was actually a parody of the original novel.) In the 2000s, Brooks worked on an animated series sequel to "Spaceballs" called "Spaceballs: The Animated Series", which premiered on September 21, 2008, on G4 TV. Brooks has also supplied vocal roles for animation. He voiced Bigweld, the master inventor, in the animated film "Robots" (2005), and in the later animated film "Mr. 
Peabody & Sherman" (2014), he had a cameo appearance as Albert Einstein. He returned to voice Dracula's father, Vlad, in "Hotel Transylvania 2" (2015) and "Hotel Transylvania 3: Summer Vacation" (2018). The Broadway musical adaptation of his film "The Producers" broke the Tony Award record with 12 wins, a record that had previously been held for 37 years by "Hello, Dolly!" with 10 wins. This success led to a 2005 big-screen version of the Broadway adaptation, with Matthew Broderick, Nathan Lane, Gary Beach, and Roger Bart reprising their stage roles, joined by new cast members Uma Thurman and Will Ferrell. In early April 2006, Brooks began composing the score to a Broadway musical adaptation of "Young Frankenstein", which he says is "perhaps the best movie [he] ever made." The world premiere ran at Seattle's Paramount Theatre from August 7 to September 1, 2007, after which the show opened on Broadway at the former Lyric Theatre (then named the Hilton Theatre) in New York on October 11, 2007. It earned mixed reviews from the critics. Brooks joked about the concept of a musical adaptation of "Blazing Saddles" in the final number in "Young Frankenstein", in which the full company sings, "next year, "Blazing Saddles"!" In 2010, Brooks confirmed that he was working on such an adaptation, saying that the musical could be finished within a year, though no creative team or plan has been announced. Brooks is one of the few people who have received an Oscar, an Emmy, a Tony, and a Grammy. He was awarded his first Grammy for Best Spoken Comedy Album in 1999 for his recording of "The 2000 Year Old Man in the Year 2000" with Carl Reiner. His two other Grammys came in 2002 for Best Musical Show Album for the cast album of "The Producers" and for Best Long Form Music Video for the DVD "Recording the Producers – A Musical Romp with Mel Brooks". He won his first of four Emmy Awards in 1967 for Outstanding Writing Achievement in Variety for a Sid Caesar special and went on to win three consecutive Emmys in 1997, 1998, and 1999 for Outstanding Guest Actor in a Comedy Series for his role of Uncle Phil on "Mad About You". Brooks won his Academy Award (Oscar) for Best Original Screenplay in 1968 for "The Producers". He won his three Tony Awards in 2001 for his work on the musical "The Producers": Best Musical, Best Original Musical Score, and Best Book of a Musical. Brooks won a Hugo Award and a Nebula Award for "Young Frankenstein". In a 2005 poll by Channel 4 to find "The Comedian's Comedian", he was voted No. 50 of the top 50 comedy acts ever by fellow comedians and comedy insiders. The American Film Institute (AFI) lists three of Brooks' films on its AFI's 100 Years...100 Laughs list: "Blazing Saddles" (#6), "The Producers" (#11), and "Young Frankenstein" (#13). On December 5, 2009, Brooks was one of five recipients of the 2009 Kennedy Center Honors at the John F. Kennedy Center for the Performing Arts in Washington, DC. He was inducted into the Hollywood Walk of Fame on April 23, 2010, with a motion pictures star located at 6712 Hollywood Boulevard. "American Masters" produced a biography of Brooks, which premiered May 20, 2013, on PBS. The AFI presented Brooks with its highest tribute, the AFI Life Achievement Award, in June 2013. In 2014, Brooks was honored in a handprint and footprint ceremony at TCL Chinese Theatre. His concrete handprints include a six-fingered left hand, as he wore a prosthetic finger when making his prints. On March 20, 2015, Brooks was awarded a Fellowship of the British Film Institute. 
Brooks was married to Florence Baum (1926–2008) from 1953 to 1962, their marriage ending in divorce. They had three children: Stephanie, Nicky, and Eddie. Brooks married stage, film and television actress Anne Bancroft in 1964, and they remained together until her death in 2005. They had met at a rehearsal for the "Perry Como Variety Show" in 1961, and were married three years later on August 5, 1964, at the Manhattan Marriage Bureau. Their son, Max Brooks, was born in 1972, and their grandson, Henry Michael Brooks, was born in 2005. In 2010, Brooks credited Bancroft with having been "the guiding force" behind his involvement in developing "The Producers" and "Young Frankenstein" for the musical theater, saying of an early meeting with her: "From that day, until her death ... we were glued together." Regarding religion, Brooks stated, "I'm rather secular. I'm basically Jewish. But I think I'm Jewish not because of the Jewish religion at all. I think it's the relationship with the people and the pride I have. The tribe surviving so many misfortunes, and being so brave and contributing so much knowledge to the world and showing courage." On Jewish cinema, Brooks said, "They can be anything and anywhere ... if there's a tribal thing, like, the 'please God, protect us' feeling ... we don't know where and how it's gonna come out. "Avatar" was a Jewish movie ... these people on the run, chasing, and being pursued." Todd Kaminsky, Brooks' great-nephew through his brother Lenny, is a New York state senator for Senate District 9 on Long Island and formerly represented Long Island's Assembly District 20 in the New York State Assembly.
https://en.wikipedia.org/wiki?curid=20218
Mycoplasma genitalium Mycoplasma genitalium (MG, commonly known as Mgen) is a small, pathogenic, sexually transmitted bacterium that lives on the skin cells of the urinary and genital tracts in humans. Medical reports published in 2007 and 2015 state that Mgen is becoming increasingly common. Resistance to multiple antibiotics is occurring, including to azithromycin, which until recently was the most reliable treatment. The bacterium was first isolated from the human urogenital tract in 1981, and was eventually identified as a new species of "Mycoplasma" in 1983. It can cause negative health effects in men and women. It also increases the risk of HIV spread, with higher occurrence in homosexual men and in those previously treated with azithromycin. Specifically, it causes urethritis in both men and women, and also cervicitis and pelvic inflammation in women. It presents clinical symptoms similar to those of "Chlamydia trachomatis" infection and has shown higher incidence rates than both "Chlamydia trachomatis" and "Neisseria gonorrhoeae" infections in some populations. Its complete genome sequence was published in 1995 (size 0.58 Mbp, with 475 genes). It was regarded as the cellular organism with the smallest genome size (in Mbp) until 2003, when a new species of Archaea, "Nanoarchaeum equitans", was sequenced (0.49 Mbp, with 540 genes). However, Mgen still has the smallest genome of any known (naturally occurring) self-replicating organism and thus is often the organism of choice in minimal genome research. The synthetic genome of Mgen, named "Mycoplasma genitalium" JCVI-1.0 (after the research centre, the J. Craig Venter Institute, where it was synthesised), was produced in 2008; it was the first genome of an organism to be chemically synthesised in full. In 2014, a protein from "M. genitalium" called Protein M was described. Infection with Mgen produces a combination of clinical symptoms, but can be asymptomatic. It causes inflammation in the urethra (urethritis) both in men and women, which is associated with mucopurulent discharge in the urinary tract and burning while urinating. In women, it causes cervicitis and pelvic inflammatory disease (PID), including endometritis and salpingitis. Women may also experience bleeding after sex, and the infection is also linked with tubal factor infertility. For men, the most common signs are painful urination or a watery discharge from the penis. Polymerase chain reaction analyses indicated that it is a cause of acute non-gonococcal urethritis (NGU) and probably chronic NGU. It is strongly associated with persistent and recurring NGU, and is responsible for 15 to 20 percent of symptomatic NGU cases in men. Unlike other "Mycoplasma", the infection is not associated with bacterial vaginosis. It is highly associated with the intensity of HIV infection. Some scientists are researching whether Mgen could play a role in the development of prostate and ovarian cancers and lymphomas in some individuals; these studies have yet to find conclusive evidence of a link. The genome of "M. genitalium" consists of 525 genes in one circular chromosome of 580,070 base pairs. Scott N. Peterson and his team at the University of North Carolina at Chapel Hill reported the first genetic map using pulsed-field gel electrophoresis in 1991. They performed an initial study of the genome using random sequencing in 1993, by which they found 100,993 nucleotides and 390 protein-coding genes. 
Collaborating with researchers at the Institute for Genomic Research, including Craig Venter, they completed the genome sequence in 1995 using shotgun sequencing. Only 470 predicted coding regions (out of 482 protein-encoding genes) were identified, including genes required for DNA replication, transcription and translation, DNA repair, cellular transport, and energy metabolism. It was the second complete bacterial genome ever sequenced, after "Haemophilus influenzae". In 2006, the team at the J. Craig Venter Institute reported that only 382 genes are essential for biological functions. The small genome of "M. genitalium" made it the organism of choice in The Minimal Genome Project, a study to find the smallest set of genetic material necessary to sustain life. There is a consistent association between "M. genitalium" infection and female reproductive tract syndromes. "M. genitalium" infection was significantly associated with increased risk of preterm birth, spontaneous abortion, cervicitis, and pelvic inflammatory disease. In addition, this pathogen may latently infect the chorionic villi tissues of pregnant women, thereby impacting pregnancy outcome. Infertility risk is also strongly associated with infection with "M. genitalium", although evidence suggests it is not associated with male infertility. When "M. genitalium" is a co-infectious agent, risk associations are stronger and statistically significant. "M. genitalium" is strongly associated with HIV-1. Recent research shows that the prevalence of Mgen is currently higher than that of other commonly occurring sexually transmitted infections (STIs). Mgen is a fastidious, slow-growing organism, which makes detection of the pathogen in clinical specimens and subsequent isolation extremely difficult. Lacking a cell wall, mycoplasmas are unaffected by commonly used antibiotics that target cell wall synthesis, such as penicillins. The absence of specific serological assays leaves nucleic acid amplification tests (NAAT) as the only viable option for detection of Mgen DNA or RNA. However, owing to the pathogen's rapid mutation rate, NAAT-positive samples should also be tested for macrolide resistance mutations, which are strongly correlated with azithromycin treatment failure. Mutations in the 23S rRNA gene of Mgen have been linked with clinical treatment failure and high-level in vitro macrolide resistance. Macrolide resistance-mediating mutations have been observed in 20-50% of cases in the UK, Denmark, Sweden, Australia, and Japan. Resistance is also developing to second-line antimicrobials such as the fluoroquinolones. According to the European guidelines, there are specific indications for commencing diagnosis of Mgen infection. Screening for Mgen with a combination of detection and macrolide resistance mutation testing will provide the information required to develop personalised antimicrobial treatments, in order to optimise patient management and control the spread of antimicrobial resistance (AMR). Owing to the widespread macrolide resistance, samples that are positive for Mgen should ideally be followed up with an assay capable of detecting mutations that mediate antimicrobial resistance. The 2016 European guideline on Mgen infections recommended complementing the molecular detection of Mgen with an assay capable of detecting macrolide resistance-associated mutations. The U.S. Centers for Disease Control and Prevention has one recommended regimen based on azithromycin and another based on doxycycline. 
As alternative regimens, the agency lists regimens based on erythromycin, erythromycin ethylsuccinate, ofloxacin, or levofloxacin. Treatment of "Mycoplasma genitalium" infections is becoming increasingly difficult due to rapidly growing antimicrobial resistance. Diagnosis and treatment are further hampered by the fact that "Mycoplasma genitalium" infections are not routinely tested for. Studies have demonstrated that a 5-day course of azithromycin has a superior cure rate compared to a single, larger dose. Further, a single dose of azithromycin can lead to the bacterium becoming resistant to azithromycin. Among Swedish patients, doxycycline was shown to be relatively ineffective (with a cure rate of 48% for women and 38% for men), and treatment with a single dose of azithromycin is not prescribed because it induces antimicrobial resistance. The five-day treatment with azithromycin showed no development of antimicrobial resistance. Based on these findings, UK doctors are moving to the 5-day azithromycin regimen. Doxycycline is also still used, and moxifloxacin is used as a second-line treatment in case doxycycline and azithromycin are not able to eradicate the infection. In patients in whom doxycycline, azithromycin, and moxifloxacin have all failed, pristinamycin has been shown to still be able to eradicate the infection. "Mycoplasma genitalium" was originally isolated in 1980 from urethral specimens of two male patients suffering from non-gonococcal urethritis in the genitourinary medicine (GUM) clinic at St Mary's Hospital, Paddington, London. It was reported in 1981 by a team led by Joseph G. Tully. Under electron microscopy, it appears as a flask-shaped cell with a narrow terminal portion that is crucial for its attachment to the host cell surfaces. The bacterial cell is slightly elongated, somewhat like a vase, and measures 0.6-0.7 μm in length, 0.3-0.4 μm at the broadest region, and 0.06-0.08 μm at the tip. The base is broad while the tip is stretched into a narrow neck, which terminates with a cap. The terminal region has a specialised region called the nap, which is absent in other "Mycoplasma". Serological tests indicated that the bacterium was not related to known species of "Mycoplasma". The comparison of genome sequences with other urinogenital bacteria, such as "M. hominis" and "Ureaplasma parvum", revealed that "M. genitalium" is significantly different, especially in the energy-generating pathways, although it shared a core genome of ~250 protein-encoding genes. On 6 October 2007, Craig Venter announced that a team of scientists led by Nobel laureate Hamilton Smith at the J. Craig Venter Institute had successfully constructed synthetic DNA with which they planned to make the first synthetic genome. Speaking to "The Guardian", Venter said that they had stitched together a DNA strand containing 381 genes, consisting of 580,000 base pairs, based on the genome of "M. genitalium". On 24 January 2008, they announced the successful synthesis of the complete bacterial genome, which they named "Mycoplasma genitalium" JCVI-1.0 (the name of the strain indicating the J. Craig Venter Institute and its specimen number). They synthesised and assembled the complete 582,970-base pair genome of the bacterium. The later stages of synthesis involved cloning the DNA into the bacterium "E. coli" for production and sequencing. This produced large fragments of approximately 144,000 base pairs, or about one-quarter of the whole genome. 
Finally, the fragments were assembled inside the yeast "Saccharomyces cerevisiae" into the complete genome of about 580,000 base pairs. The molecular size of the synthetic bacterial genome is 360,110 kilodaltons (kDa). Printed in 10-point font, the letters of the genome cover 147 pages. On 20 July 2012, Stanford University and the J. Craig Venter Institute announced, in the journal "Cell", the successful simulation of the complete life cycle of a "Mycoplasma genitalium" cell. The entire organism is modeled in terms of its molecular components, integrating all cellular processes into a single model. Using object-oriented programming to model the interactions of 28 categories of molecules, including DNA, RNA, proteins, and metabolites, and running on a 128-core Linux cluster, the simulation takes 10 hours for a single "M. genitalium" cell to divide once (about the same time the actual cell takes) and generates half a gigabyte of data. The discovery of Protein M, a new protein from "M. genitalium", was announced in February 2014. The protein was identified during investigations into the origin of multiple myeloma, a B-cell hematologic neoplasm. While investigating long-term "Mycoplasma" infection, researchers found that antibodies in the blood of multiple myeloma patients reacted with "M. genitalium". The antibody reactivity was due to a previously unknown protein, which was found to bind all types of human and nonhuman antibodies tested. The protein is about 50 kDa in size, and composed of 556 amino acids. Future research must focus on the development of novel antimicrobials and treatment algorithms that emphasize dual antimicrobial therapy and AMR testing in treatment protocols. Importantly, most patients with Mgen are treated syndromically, and this treatment is further compromised by emerging resistance to several antimicrobials. This also stresses the importance of evidence-based knowledge regarding the activity of novel antimicrobials against several pathogens that cause STIs. The rapid development of AMR in Mgen suggests that single-dose antimicrobial monotherapy may be inappropriate even for uncomplicated STIs. For Mgen, antimicrobial combination therapy and AMR testing, in conjunction with the development and evaluation of new classes of antimicrobials, are of utmost importance. Some of the novel antimicrobials, particularly the fluoroketolide solithromycin, might at least temporarily replace azithromycin in the treatment of Mgen. Ultimately, the only sustainable solution for controlling these infections might be the development of vaccines, a task that remains incredibly difficult, as most pathogens of commonly occurring STIs are unculturable.
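The whole-cell simulation described above integrates many independent process models that repeatedly update one shared representation of the cell's molecular state. The following Python fragment is only a minimal illustrative sketch of that general object-oriented pattern; it is not the Stanford/JCVI model, and all class names, parameters, and numbers here are hypothetical.

```python
# Illustrative sketch only: a toy, object-oriented "whole-cell" loop in the
# spirit of the simulation described above. It is NOT the published model;
# all class names, parameters, and numbers are hypothetical.

from dataclasses import dataclass


@dataclass
class CellState:
    """A tiny slice of cellular state: a few molecule categories and a clock."""
    time_s: int = 0
    dna_replicated_fraction: float = 0.0
    protein_count: int = 50_000
    metabolite_pool: float = 1.0  # arbitrary units of energy/precursors


class Process:
    """Each cellular process reads and updates the shared state once per step."""
    def step(self, state: CellState) -> None:
        raise NotImplementedError


class Metabolism(Process):
    def step(self, state: CellState) -> None:
        state.metabolite_pool += 0.05  # nutrient uptake replenishes the pool


class Translation(Process):
    def step(self, state: CellState) -> None:
        if state.metabolite_pool > 0.1:
            state.protein_count += 10      # synthesise a few proteins
            state.metabolite_pool -= 0.02  # at a metabolic cost


class DnaReplication(Process):
    def step(self, state: CellState) -> None:
        if state.metabolite_pool > 0.1:
            state.dna_replicated_fraction = min(
                1.0, state.dna_replicated_fraction + 1e-4)
            state.metabolite_pool -= 0.01


def simulate(processes: list[Process], dt_s: int = 1) -> CellState:
    """Advance the cell step by step until the chromosome is fully copied."""
    state = CellState()
    while state.dna_replicated_fraction < 1.0:
        for process in processes:  # every process sees the same shared state
            process.step(state)
        state.time_s += dt_s
    return state


if __name__ == "__main__":
    final = simulate([Metabolism(), Translation(), DnaReplication()])
    print(f"Division-ready after {final.time_s} simulated seconds "
          f"with {final.protein_count} proteins.")
```

The real model tracks far more state and far more processes, but the design idea is the same: each process is a self-contained module, and the cell's behaviour emerges from all of them acting on one shared state over many small time steps.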
https://en.wikipedia.org/wiki?curid=20219
Mummy A mummy is a dead human or an animal whose skin and organs have been preserved by either intentional or accidental exposure to chemicals, extreme cold, very low humidity, or lack of air, so that the recovered body does not decay further if kept in cool and dry conditions. Some authorities restrict the use of the term to bodies deliberately embalmed with chemicals, but the use of the word to cover accidentally desiccated bodies goes back to at least 1615 AD (see the section Etymology and meaning). Mummies of humans and animals have been found on every continent, both as a result of natural preservation through unusual conditions, and as cultural artifacts. Over one million animal mummies have been found in Egypt, many of which are cats. Many of the Egyptian animal mummies are sacred ibis, and radiocarbon dating suggests that the Egyptian ibis mummies that have been analyzed date from a time frame between approximately 450 and 250 BC. In addition to the mummies of ancient Egypt, deliberate mummification was a feature of several ancient cultures in areas of America and Asia with very dry climates. The Spirit Cave mummies of Fallon, Nevada, in North America were accurately dated at more than 9,400 years old. Before this discovery, the oldest known deliberate mummy was a child, one of the Chinchorro mummies found in the Camarones Valley, Chile, which dates to around 5050 BC. The oldest known naturally mummified human corpse is a severed head dated as 6,000 years old, found in 1936 AD at the site named Inca Cueva No. 4 in South America. The English word "mummy" is derived from medieval Latin "mumia", a borrowing of the medieval Arabic word "mūmiya" (مومياء), which in turn derives from the Persian word "mūm" (wax); the term meant an embalmed corpse as well as the bituminous embalming substance, and also meant "bitumen". The Medieval English term "mummy" was defined as "medical preparation of the substance of mummies", rather than the entire corpse, with Richard Hakluyt in 1599 AD complaining that "these dead bodies are the Mummy which the Phisistians and Apothecaries doe against our willes make us to swallow". These substances were defined as mummia. The OED defines a mummy as "the body of a human being or animal embalmed (according to the ancient Egyptian or some analogous method) as a preparation for burial", citing sources from 1615 AD onward. However, Chambers's "Cyclopædia" and the Victorian zoologist Francis Trevelyan Buckland define a mummy as follows: "A human or animal body desiccated by exposure to sun or air. Also applied to the frozen carcase of an animal imbedded in prehistoric snow". Wasps of the genus "Aleiodes" are known as "mummy wasps" because they wrap their caterpillar prey as "mummies". While interest in the study of mummies dates as far back as Ptolemaic Greece, most structured scientific study began at the beginning of the 20th century. Prior to this, many rediscovered mummies were sold as curiosities or for use in pseudoscientific novelties such as mummia. The first modern scientific examinations of mummies began in 1901, conducted by professors at the English-language Government School of Medicine in Cairo, Egypt. The first X-ray of a mummy came in 1903, when professors Grafton Elliot Smith and Howard Carter used the only X-ray machine in Cairo at the time to examine the mummified body of Thutmose IV. British chemist Alfred Lucas applied chemical analyses to Egyptian mummies during this same period, which yielded many results about the types of substances used in embalming. 
Lucas also made significant contributions to the analysis of Tutankhamun in 1922. Pathological study of mummies saw varying levels of popularity throughout the 20th century. In 1992, the First World Congress on Mummy Studies was held in Puerto de la Cruz on Tenerife in the Canary Islands. More than 300 scientists attended the Congress to share nearly 100 years of collected data on mummies. The information presented at the meeting triggered a new surge of interest in the subject, with one of the major results being the integration of biomedical and bioarchaeological information on mummies with existing databases. This was not possible prior to the Congress due to the unique and highly specialized techniques required to gather such data. In more recent years, CT scanning has become an invaluable tool in the study of mummification by allowing researchers to digitally "unwrap" mummies without risking damage to the body. The level of detail in such scans is so intricate that small linens used in tiny areas such as the nostrils can be digitally reconstructed in 3-D. Such modelling has been utilized to perform digital autopsies on mummies to determine cause of death and lifestyle, such as in the case of Tutankhamun. Mummies are typically divided into one of two distinct categories: anthropogenic or spontaneous. Anthropogenic mummies were deliberately created by the living for any number of reasons, the most common being for religious purposes. Spontaneous mummies, such as Ötzi, were created unintentionally due to natural conditions such as extremely dry heat or cold, or anaerobic conditions such as those found in bogs. While most individual mummies exclusively belong to one category or the other, there are examples of both types being connected to a single culture, such as those from the ancient Egyptian culture and the Andean cultures of South America. Until recently, it was believed that the earliest ancient Egyptian mummies were created naturally due to the environment in which they were buried. In 2014, an 11-year study by the University of York, Macquarie University, and the University of Oxford suggested that artificial mummification occurred 1,500 years earlier than first thought. This was confirmed in 2018, when tests on a 5,600-year-old mummy in Turin revealed that it had been deliberately mummified using linen wrappings and embalming oils made from conifer resin and aromatic plant extracts. The preservation of the dead had a profound effect on ancient Egyptian religion. Mummification was an integral part of the rituals for the dead beginning as early as the 2nd dynasty (about 2800 BC). Egyptians saw the preservation of the body after death as an important step to living well in the afterlife. As Egypt gained more prosperity, burial practices became a status symbol for the wealthy as well. This cultural hierarchy led to the creation of elaborate tombs and more sophisticated methods of embalming. By the 4th dynasty (about 2600 BC), Egyptian embalmers began to achieve "true mummification" through a process of evisceration. Much of this early experimentation with mummification in Egypt is unknown. The few documents that directly describe the mummification process date to the Greco-Roman period. The majority of the papyri that have survived only describe the ceremonial rituals involved in embalming, not the actual surgical processes involved. 
A text known as "The Ritual of Embalming" does describe some of the practical logistics of embalming; however, there are only two known copies, and each is incomplete. Depictions of mummification in images are apparently also very rare. The tomb of Tjay, designated TT23, is one of only two known that show the wrapping of a mummy (Riggs 2014). Another text that describes the processes used in later periods is Herodotus' "Histories". Written in Book 2 of the "Histories" is one of the most detailed descriptions of the Egyptian mummification process, including the mention of using natron in order to dehydrate corpses for preservation. However, these descriptions are short and fairly vague, leaving scholars to infer the majority of the techniques that were used by studying mummies that have been unearthed. By utilizing current advancements in technology, scientists have been able to uncover a plethora of new information about the techniques used in mummification. A series of CT scans performed on a 2,400-year-old mummy in 2008 revealed a tool that was left inside the cranial cavity of the skull. The tool was a rod, made of an organic material, that was used to break apart the brain to allow it to drain out of the nose. This discovery helped to dispel the claim within Herodotus' works that the rod had been a hook made of iron. Earlier experimentation in 1994 by researchers Bob Brier and Ronald Wade supported these findings. While attempting to replicate Egyptian mummification, Brier and Wade discovered that removal of the brain was much easier when the brain was liquefied and allowed to drain with the help of gravity, as opposed to trying to pull the organ out piece-by-piece with a hook. Through various methods of study over many decades, modern Egyptologists now have an accurate understanding of how mummification was achieved in ancient Egypt. The first and most important step was to halt the process of decomposition by removing the internal organs and washing out the body with a mix of spices and palm wine. The only organ left behind was the heart, as tradition held that the heart was the seat of thought and feeling and would therefore still be needed in the afterlife. After cleansing, the body was dried out with natron, placed inside the empty body cavity as well as outside on the skin. The internal organs were also dried and either sealed in individual jars or wrapped to be replaced within the body. This process typically took forty days. After dehydration, the mummy was wrapped in many layers of linen cloth. Within the layers, Egyptian priests placed small amulets to guard the decedent from evil. Once the mummy was completely wrapped, it was coated in a resin in order to keep the threat of moist air away. Resin was also applied to the coffin in order to seal it. The mummy was then sealed within its tomb, alongside the worldly goods that were believed to aid it in the afterlife. Aspergillus niger has been found in the mummies of ancient Egyptian tombs and can be inhaled when they are disturbed. For people today, mummification is one of the defining customs of ancient Egyptian society. The practice of preserving the human body is believed to be a quintessential feature of Egyptian life. Yet even mummification has a history of development and was accessible to different ranks of society in different ways during different periods. There were at least three different processes of mummification according to Herodotus. 
They range from "the most perfect" to the method employed by the "poorer classes". The most expensive process was to preserve the body by dehydration and protect against pests, such as insects. Almost all of the actions Herodotus described serve one of these two functions. First, the brain was removed from the cranium through the nose; the gray matter was discarded. Modern mummy excavations have shown that instead of an iron hook inserted through the nose as Herodotus claims, a rod was used to liquefy the brain via the cranium, which then drained out the nose by gravity. The embalmers then rinsed the skull with certain drugs that mostly cleared any residue of brain tissue and also had the effect of killing bacteria. Next, the embalmers made an incision along the flank with a sharp blade fashioned from an Ethiopian stone and removed the contents of the abdomen. Herodotus does not discuss the separate preservation of these organs and their placement either in special jars or back in the cavity, a process that was part of the most expensive embalming, according to archaeological evidence. The abdominal cavity was then rinsed with palm wine and an infusion of crushed, fragrant herbs and spices; the cavity was then filled with spices including myrrh, cassia, and, Herodotus notes, "every other sort of spice except frankincense", also to preserve the person. The body was further dehydrated by placing it in natron, a naturally occurring salt, for seventy days. Herodotus insists that the body did not stay in the natron longer than seventy days. Any shorter time and the body is not completely dehydrated; any longer, and the body is too stiff to move into position for wrapping. The embalmers then wash the body again and wrapped it with linen bandages. The bandages were covered with a gum that modern research has shown is both waterproofing agent and an antimicrobial agent. At this point, the body was given back to the family. These "perfect" mummies were then placed in wooden cases that were human-shaped. Richer people placed these wooden cases in stone sarcophagi that provided further protection. The family placed the sarcophagus in the tomb upright against the wall, according to Herodotus. The second process that Herodotus describes was used by middle-class people or people who "wish to avoid expense". In this method, an oil derived from cedar trees was injected with a syringe into the abdomen. A rectal plug prevented the oil from escaping. This oil probably had the dual purpose of liquefying the internal organs but also of disinfecting the abdominal cavity. (By liquefying the organs, the family avoided the expense of canopic jars and separate preservation.) The body was then placed in natron for seventy days. At the end of this time, the body was removed and the cedar oil, now containing the liquefied organs, was drained through the rectum. With the body dehydrated, it could be returned to the family. Herodotus does not describe the process of burial of such mummies, but they were perhaps placed in a shaft tomb. Poorer people used coffins fashioned from terracotta. The third and least-expensive method the embalmers offered was to clear the intestines with an unnamed liquid, injected as an enema. The body was then placed in natron for seventy days and returned to the family. Herodotus gives no further details. In Christian tradition, some bodies of saints are naturally conserved and venerated. 
In addition to the mummies of Egypt, there have been instances of mummies being discovered in other areas of the African continent. The bodies show a mix of anthropogenic and spontaneous mummification, with some being thousands of years old. The mummified remains of an infant were discovered during an expedition by archaeologist Fabrizio Mori to Libya during the winter of 1958–1959 in the natural cave structure of Uan Muhuggiag. After curious deposits and cave paintings were discovered on the surfaces of the cave, expedition leaders decided to excavate. Uncovered alongside fragmented animal bone tools was the mummified body of an infant, wrapped in animal skin and wearing a necklace made of ostrich egg shell beads. Professor Tongiorgi of the University of Pisa radiocarbon-dated the infant to between 5,000 and 8,000 years old. A long incision located on the right abdominal wall, and the absence of internal organs, indicated that the body had been eviscerated post-mortem, possibly in an effort to preserve the remains. A bundle of herbs found within the body cavity also supported this conclusion. Further research revealed that the child had been around 30 months old at the time of death, though gender could not be determined due to poor preservation of the sex organs. The first mummy to be discovered in South Africa was found in the Baviaanskloof Wilderness Area by Dr. Johan Binneman in 1999. Nicknamed Moses, the mummy was estimated to be around 2,000 years old. After the mummy was linked to the indigenous Khoi culture of the region, the National Council of Khoi Chiefs of South Africa began to make legal demands for its return, shortly after the body was moved to the Albany Museum in Grahamstown. The mummies of Asia are usually considered to be accidental. The decedents were buried in just the right place where the environment could act as an agent for preservation. This is particularly common in the desert areas of the Tarim Basin and Iran. Mummies have been discovered in more humid Asian climates; however, these are subject to rapid decay after being removed from the grave. Mummies from various dynasties throughout China's history have been discovered in several locations across the country. They are almost exclusively considered to be unintentional mummifications. Many areas in which mummies have been uncovered have warm, moist climates that are unfavourable for preservation. This makes the recovery of mummies a challenge, as exposure to the outside world can cause the bodies to decay in a matter of hours. An example of a Chinese mummy that was preserved despite being buried in an environment not conducive to mummification is Xin Zhui. Also known as Lady Dai, she was discovered in the early 1970s at the Mawangdui archaeological site in Changsha. She was the wife of the Marquis of Dai during the Han dynasty, who was buried with her, alongside another young man often considered to be a very close relative. However, Xin Zhui's body was the only one of the three to be mummified. Her corpse was so well-preserved that surgeons from the Hunan Provincial Medical Institute were able to perform an autopsy. The exact reason why her body was so completely preserved has yet to be determined. Among the mummies discovered in China are those termed Tarim mummies because of their discovery in the Tarim Basin. The dry desert climate of the basin proved to be an excellent agent for desiccation. 
For this reason, over 200 Tarim mummies, which are over 4,000 years old, were excavated from a cemetery in the present-day Xinjiang region. The mummies were found buried in upside-down boats with hundreds of 13-foot-long wooden poles in the place of tombstones. DNA sequence data show that the mummies had the Y-DNA haplogroup R1a, characteristic of western Eurasia in the area of East-Central Europe, Central Asia, and the Indus Valley. This has created a stir among the Turkic-speaking Uighur population of the region, who claim the area has always belonged to their culture, although scholars hold that the Uighurs did not move to the region from Central Asia until the 10th century. American Sinologist Victor H. Mair claims that "the earliest mummies in the Tarim Basin were exclusively Caucasoid, or Europoid", with "east Asian migrants arriving in the eastern portions of the Tarim Basin around 3,000 years ago", and also notes that it was not until 842 that the Uighur peoples settled in the area. Other mummified remains have been recovered from around the Tarim Basin at sites including Qäwrighul, Yanghai, Shengjindian, Shanpula (Sampul), Zaghunluq, and Qizilchoqa. As of 2012, at least eight mummified human remains have been recovered from the Douzlakh Salt Mine at Chehr Abad in northwestern Iran. Due to their salt preservation, these bodies are collectively known as Saltmen. Carbon-14 testing conducted in 2008 dated three of the bodies to around 400 BC. Later isotopic research on the other mummies returned similar dates; however, many of these individuals were found to be from a region not closely associated with the mine. Researchers determined that the mine had suffered a major collapse around this time, which likely caused the death of the miners. Since there is significant archaeological data that indicates the area was not actively inhabited during this time period, current consensus holds that the accident occurred during a brief period of temporary mining activity. In 1993, a team of Russian archaeologists led by Dr. Natalia Polosmak discovered the Siberian Ice Maiden, a Scytho-Siberian woman, on the Ukok Plateau in the Altai Mountains near the Mongolian border. The mummy was naturally frozen due to the severe climatic conditions of the Siberian steppe. Also known as Princess Ukok, the mummy was dressed in finely detailed clothing and wore an elaborate headdress and jewelry. Alongside her body were buried six decorated horses and a symbolic meal for her last journey. Her left arm and hand were tattooed with animal-style figures, including a highly stylized deer. The Ice Maiden has been a source of some recent controversy. The mummy's skin has suffered some slight decay, and the tattoos have faded since the excavation. Some residents of the Altai Republic, formed after the breakup of the Soviet Union, have requested the return of the Ice Maiden, who is currently stored in Novosibirsk in Siberia. Another Siberian mummy, a man, was discovered much earlier, in 1929. His skin was also marked with tattoos of two monsters resembling griffins, which decorated his chest, and three partially obliterated images which seem to represent two deer and a mountain goat on his left arm. Philippine mummies are called Kabayan mummies. They are an important part of Igorot culture and heritage. The mummies are found in areas such as Kabayan and Sagada, among others, and are dated between the 14th and 19th centuries. 
The European continent is home to a diverse spectrum of spontaneous and anthropogenic mummies. Some of the best-preserved mummies have come from bogs located across the region. The Capuchin monks that inhabited the area left behind hundreds of intentionally-preserved bodies that have provided insight into the customs and cultures of people from various eras. One of the oldest mummies (nicknamed Ötzi) was discovered on this continent. New mummies continue to be uncovered in Europe well into the 21st Century. The United Kingdom, the Republic of Ireland, Germany, the Netherlands, Sweden, and Denmark have produced a number of bog bodies, mummies of people deposited in sphagnum bogs, apparently as a result of murder or ritual sacrifices. In such cases, the acidity of the water, low temperature and lack of oxygen combined to tan the body's skin and soft tissues. The skeleton typically disintegrates over time. Such mummies are remarkably well preserved on emerging from the bog, with skin and internal organs intact; it is even possible to determine the decedent's last meal by examining stomach contents. The Haraldskær Woman was discovered by labourers in a bog in Jutland in 1835. She was erroneously identified as an early medieval Danish queen, and for that reason was placed in a royal sarcophagus at the Saint Nicolai Church, Vejle, where she currently remains. Another bog body, also from Denmark, known as the Tollund Man was discovered in 1950. The corpse was noted for its excellent preservation of the face and feet, which appeared as if the man had recently died. Only the head of Tollund Man remains, due to the decomposition of the rest of his body, which was not preserved along with the head. The mummies of the Canary Islands belong to the indigenous Guanche people and date to the time before 14th Century Spanish explorers settled in the area. All deceased people within the Guanche culture were mummified during this time, though the level of care taken with embalming and burial varied depending on individual social status. Embalming was carried out by specialized groups, organized according to gender, who were considered unclean by the rest of the community. The techniques for embalming were similar to those of the ancient Egyptians; involving evisceration, preservation, and stuffing of the evacuated bodily cavities, then wrapping of the body in animal skins. Despite the successful techniques utilized by the Guanche, very few mummies remain due to looting and desecration. The majority of mummies recovered in the Czech Republic come from underground crypts. While there is some evidence of deliberate mummification, most sources state that desiccation occurred naturally due to unique conditions within the crypts. The Capuchin Crypt in Brno contains three hundred years of mummified remains directly below the main altar. Beginning in the 18th Century when the crypt was opened, and continuing until the practice was discontinued in 1787, the Capuchin monks of the monastery would lay the deceased on a pillow of bricks on the ground. The unique air quality and topsoil within the crypt naturally preserved the bodies over time. Approximately fifty mummies were discovered in an abandoned crypt beneath the Church of St. Procopius of Sázava in Vamberk in the mid-1980s. Workers digging a trench accidentally broke into the crypt, which began to fill with waste water. 
The mummies quickly began to deteriorate, though thirty-four were able to be rescued and stored temporarily at the District Museum of the Orlické Mountains until they could be returned to the monastery in 2000. The mummies range in age and social status at time of death, with at least two children and one priest. The majority of the Vamberk mummies date from the 18th century. The Klatovy catacombs currently house an exhibition of Jesuit mummies, alongside some aristocrats, that were originally interred between 1674 and 1783. In the early 1930s, the mummies were accidentally damaged during repairs, resulting in the loss of 140 bodies. The newly updated airing system preserves the thirty-eight bodies that are currently on display. Apart from several bog bodies, Denmark has also yielded several other mummies, such as the three Borum Eshøj mummies, the Skrydstrup Woman and the Egtved Girl, who were all found inside burial mounds, or tumuli. In 1875, the Borum Eshøj grave mound was uncovered, which had been built around three coffins, which belonged to a middle aged man and woman as well as a man in his early twenties. Through examination, the woman was discovered to be around 50–60 years old. She was found with several artifacts made of bronze, consisting of buttons, a belt plate, and rings, showing she was of higher class. All of the hair had been removed from the skull later when farmers had dug through the casket. Her original hairstyle is unknown. The two men wore kilts, and the younger man wore a sheath which contained a bronze dagger. All three mummies were dated to 1351–1345 BC. The Skrydstrup Woman was unearthed from a tumulus in Southern Jutland, in 1935. Carbon-14 dating showed that she had died around 1300 BC; examination also revealed that she was around 18–19 years old at the time of death, and that she had been buried in the summertime. Her hair had been drawn up in an elaborate hairstyle, which was then covered by a horse hair hairnet made by the sprang technique. She was wearing a blouse and a necklace as well as two golden earrings, showing she was of higher class. The Egtved Girl, dated to 1370 BC, was also found inside a sealed coffin within a tumulus, in 1921. She was wearing a bodice and a skirt, including a belt and bronze bracelets. Found with the girl, at her feet, were the cremated remains of a child and, by her head, a box containing some bronze pins, a hairnet, and an awl. In 1994, 265 mummified bodies were found in the crypt of a Dominican church in Vác, Hungary from the 1729–1838 period. The discovery proved to be scientifically important, and by 2006 an exhibition was established in the Museum of Natural History in Budapest. Unique to the Hungarian mummies are their elaborately decorated coffins, with no two being exactly alike. The varied geography and climatology of Italy has led to many cases of spontaneous mummification. Italian mummies display the same diversity, with a conglomeration of natural and intentional mummification spread across many centuries and cultures. The oldest natural mummy in Europe was discovered in 1991 in the Ötztal Alps on the Austrian-Italian border. Nicknamed Ötzi, the mummy is a 5,300-year-old male believed to be a member of the Tamins-Carasso-Isera cultural group of South Tyrol. Despite his age, a recent DNA study conducted by Walther Parson of Innsbruck Medical University revealed Ötzi has 19 living genetic relatives. The Capuchin Catacombs of Palermo were built in the 16th century by the monks of Palermo's Capuchin monastery. 
Originally intended to hold the deliberately mummified remains of dead friars, interment in the catacombs became a status symbol for the local population in the following centuries. Burials continued until the 1920s, with one of the final burials being that of Rosalia Lombardo. In all, the catacombs host nearly 8000 mummies. (See: Catacombe dei Cappuccini) The most recent discovery of mummies in Italy came in 2010, when sixty mummified human remains were found in the crypt of the Conversion of St Paul church in Roccapelago di Pievepelago, Italy. Built in the 15th century as a cannon hold and later converted in the 16th century, the crypt had been sealed once it had reached capacity, leaving the bodies to be protected and preserved. The crypt was reopened during restoration work on the church, revealing the diverse array of mummies inside. The bodies were quickly moved to a museum for further study. The mummies of North America are often steeped in controversy, as many of these bodies have been linked to still-existing native cultures. While the mummies provide a wealth of historically significant data, native cultures and traditions often demand that the remains be returned to their original resting places. This has led to many legal actions by Native American councils, leading to most museums keeping mummified remains out of the public eye. Kwäday Dän Ts'ìnchi ("Long ago person found" in the Southern Tutchone language of the Champagne and Aishihik First Nations) was found in August 1999 by three First Nations hunters at the edge of a glacier in Tatshenshini-Alsek Provincial Park, British Columbia, Canada. According to the Kwäday Dän Ts'ìnchi Project, the remains are the oldest well preserved mummy discovered in North America. (The Spirit Cave mummy, although not well preserved, is much older.) Initial radiocarbon tests date the mummy to around 550 years old. In 1972, eight remarkably preserved mummies were discovered at an abandoned Inuit settlement called Qilakitsoq, in Greenland. The "Greenland Mummies" consisted of a six-month-old baby, a four-year-old boy, and six women of various ages, who died around 500 years ago. Their bodies were naturally mummified by the sub-zero temperatures and dry winds in the cave in which they were found. Intentional mummification in pre-Columbian Mexico was practiced by the Aztec culture. These bodies are collectively known as Aztec mummies. Genuine Aztec mummies were "bundled" in a woven wrap and often had their faces covered by a ceremonial mask. Public knowledge of Aztec mummies increased due to traveling exhibits and museums in the 19th and 20th centuries, though these bodies were typically naturally desiccated remains and not actually the mummies associated with Aztec culture. (See: Aztec mummy) Natural mummification has been known to occur in several places in Mexico; this includes the mummies of Guanajuato. A collection of these mummies, most of which date to the late 19th century, has been on display at "El Museo de las Momias" in the city of Guanajuato since 1970. The museum claims to have the smallest mummy in the world on display (a mummified fetus). It was thought that minerals in the soil had the preserving effect; however, the preservation may rather be due to the warm, arid climate. Mexican mummies are also on display in the small town of Encarnación de Díaz, Jalisco. Spirit Cave Man was discovered in 1940 during salvage work prior to guano mining activity that was scheduled to begin in the area. 
The mummy is a middle-aged male, found completely dressed and lying on a blanket made of animal skin. Radiocarbon tests in the 1990s dated the mummy to nearly 9,000 years old. The remains are currently held at the Nevada State Museum. There has been some controversy within the local Native American community, which began petitioning to have the remains returned and reburied in 1995. Mummies from Oceania are not limited to Australia. Mummified remains have also been found in New Zealand and the Torres Strait, though these mummies have historically been harder to examine and classify. Prior to the 20th century, most literature on mummification in the region was either silent or anecdotal. However, the boom of interest generated by the scientific study of Egyptian mummification led to more concentrated study of mummies in other cultures, including those of Oceania. The aboriginal mummification traditions found in Australia are thought to be related to those found in the Torres Strait islands, the inhabitants of which achieved a high level of sophisticated mummification techniques (See: Torres Strait). Australian mummies lack some of the technical sophistication of the Torres Strait mummies; however, many of the ritual aspects of the mummification process are similar. Full-body mummification was achieved by these cultures, but not to the level of artistic preservation found on the smaller islands. The reason for this seems to be easier transport of bodies by more nomadic tribes. The mummies of the Torres Strait show a considerably higher level of preservation technique as well as creativity compared to those found in Australia. The process began with removal of viscera, after which the bodies were set in a seated position on a platform and either left to dry in the sun or smoked over a fire in order to aid in desiccation. In the case of smoking, some tribes would collect the fat that drained from the body to mix with ocher to create red paint that would then be smeared back on the skin of the mummy. The mummies remained on the platforms, decorated with the clothing and jewelry they wore in life, before being buried. Some Māori tribes from New Zealand would keep mummified heads as trophies of tribal warfare; these are also known as Mokomokai. In the 19th century, many of the trophies were acquired by Europeans who found the tattooed skin to be a phenomenal curiosity. Westerners began to offer valuable commodities in exchange for the uniquely tattooed mummified heads. The heads were later put on display in museums, 16 of which were housed in France alone. In 2010, the Rouen City Hall of France returned one of the heads to New Zealand, despite earlier protests by the Culture Ministry of France. There is also evidence that some Māori tribes may have practiced full-body mummification, though the practice is not thought to have been widespread. The discussion of Māori mummification has been historically controversial, with some experts in past decades claiming that such mummies never existed. Contemporary science now acknowledges the existence of full-body mummification in the culture. There is still controversy, however, as to the nature of the mummification process. Some bodies appear to have been preserved spontaneously by the natural environment, while others exhibit signs of deliberate practices. The general modern consensus is that there was likely a mixture of both types of mummification, similar to that of the ancient Egyptian mummies. 
The South American continent contains some of the oldest mummies in the world, both deliberate and accidental. The bodies were preserved by the best agent for mummification: the environment. The Pacific coastal desert in Peru and Chile is one of the driest areas in the world and the dryness facilitated mummification. Rather than developing elaborate processes such as later-dynasty ancient Egyptians, the early South Americans often left their dead in naturally dry or frozen areas, though some did perform surgical preparation when mummification was intentional. Some of the reasons for intentional mummification in South America include memorialization, immortalization, and religious offerings. A large number of mummified bodies have been found in pre-Columbian cemeteries scattered around Peru. The bodies had often been wrapped for burial in finely-woven textiles. The Chinchorro mummies are the oldest intentionally prepared mummified bodies ever found. Beginning in 5th millennium BC and continuing for an estimated 3,500 years, all human burials within the Chinchorro culture were prepared for mummification. The bodies were carefully prepared, beginning with removal of the internal organs and skin, before being left in the hot, dry climate of the Atacama Desert, which aided in desiccation. A large number of Chinchorro mummies were also prepared by skilled artisans to be preserved in a more artistic fashion, though the purpose of this practice is widely debated. Several naturally-preserved, unintentional mummies dating from the Incan period (1438–1532 AD) have been found in the colder regions of Argentina, Chile, and Peru. These are collectively known as "ice mummies". The first Incan ice mummy was discovered in 1954 atop El Plomo Peak in Chile, after an eruption of the nearby volcano Sabancaya melted away ice that covered the body. The Mummy of El Plomo was a male child who was presumed to be wealthy due to his well-fed bodily characteristics. He was considered to be the most well-preserved ice mummy in the world until the discovery of Mummy Juanita in 1995. Mummy Juanita was discovered near the summit of Ampato in the Peruvian section of the Andes mountains by archaeologist Johan Reinhard. Her body had been so thoroughly frozen that it had not been desiccated; much of her skin, muscle tissue, and internal organs retained their original structure. She is believed to be a ritual sacrifice, due to the close proximity of her body to the Incan capital of Cusco, as well as the fact she was wearing highly intricate clothing to indicate her special social status. Several Incan ceremonial artifacts and temporary shelters uncovered in the surrounding area seem to support this theory. More evidence that the Inca left sacrificial victims to die in the elements, and later be unintentionally preserved, came in 1999 with the discovery of the Llullaillaco mummies on the border of Argentina and Chile. The three mummies are children, two girls and one boy, who are thought to be sacrifices associated with the ancient ritual of "qhapaq hucha". Recent biochemical analysis of the mummies has revealed that the victims had consumed increasing quantities of alcohol and coca, possibly in the form of chicha, in the months leading up to sacrifice. The dominant theory for the drugging reasons that, alongside ritual uses, the substances probably made the children more docile. Chewed coca leaves found inside the eldest child's mouth upon her discovery in 1999 supports this theory. 
The bodies of Inca emperors and wives were mummified after death. In 1533, the Spanish conquistadors of the Inca Empire viewed the mummies in the Inca capital of Cuzco. The mummies were displayed, often in lifelike positions, in the palaces of the deceased emperors and had a retinue of servants to care for them. The Spanish were impressed with the quality of the mummification, which involved removal of the organs, embalming, and freeze-drying. The population revered the mummies of the Inca emperors. This reverence seemed like idolatry to the Roman Catholic Spanish, and in 1550 they confiscated the mummies. The mummies were taken to Lima, where they were displayed in the San Andres Hospital. The mummies deteriorated in the humid climate of Lima and eventually they were either buried or destroyed by the Spanish. An attempt to find the mummies of the Inca emperors beneath the San Andres hospital in 2001 was unsuccessful. The archaeologists found a crypt, but it was empty. Possibly the mummies had been removed when the building was repaired after an earthquake. Monks whose bodies remain incorrupt without any traces of deliberate mummification are venerated by some Buddhists, who believe they were able to successfully mortify their flesh to death. Self-mummification was practiced until the late 1800s in Japan and has been outlawed since the early 1900s. Many Mahayana Buddhist monks were reported to know their time of death and left their last testaments; their students accordingly buried them sitting in the lotus position, placed in a vessel with drying agents (such as wood, paper, or lime) and surrounded by bricks, to be exhumed later, usually after three years. The preserved bodies would then be decorated with paint and adorned with gold. Bodies purported to be those of self-mummified monks are exhibited in several Japanese shrines, and it has been claimed that the monks, prior to their death, stuck to a sparse diet made up of salt, nuts, seeds, roots, pine bark, and "urushi" tea. In the 1830s, Jeremy Bentham, the founder of utilitarianism, left instructions to be followed upon his death that led to the creation of a sort of modern-day mummy. He asked that his body be displayed to illustrate how the "horror at dissection originates in ignorance"; once so displayed and lectured about, he asked that his body parts be preserved, including his skeleton (minus his skull, which, despite being mis-preserved, was displayed beneath his feet until theft required it to be stored elsewhere), which were to be dressed in the clothes he usually wore and "seated in a Chair usually occupied by me when living in the attitude in which I am sitting when engaged in thought". His body, outfitted with a wax head created because of problems preparing it as Bentham requested, is on open display at University College London. During the early 20th century, the Russian movement of Cosmism, as represented by Nikolai Fyodorovich Fyodorov, envisioned scientific resurrection of dead people. The idea was so popular that, after Vladimir Lenin's death, Leonid Krasin and Alexander Bogdanov suggested cryonically preserving his body and brain in order to revive him in the future. Necessary equipment was purchased abroad, but for a variety of reasons the plan was not realized. Instead, his body was embalmed and placed on permanent exhibition in the Lenin Mausoleum in Moscow, where it is displayed to this day. The mausoleum itself was modeled by Alexey Shchusev on the Pyramid of Djoser and the Tomb of Cyrus. 
In late 19th-century Venezuela, a German-born doctor named Gottfried Knoche conducted experiments in mummification at his laboratory in the forest near La Guaira. He developed an embalming fluid (based on an aluminum chloride compound) that mummified corpses without having to remove the internal organs. The formula for his fluid was never revealed and has not been discovered. Most of the several dozen mummies created with the fluid (including Knoche himself and members of his immediate family) have been lost or were severely damaged by vandals and looters. In 1975, an esoteric organization by the name of Summum introduced "Modern Mummification", a service that utilizes modern techniques along with aspects of ancient methods of mummification. The first person to formally undergo Summum's process of modern mummification was the founder of Summum, Summum Bonum Amen Ra, who died in January 2008. Summum is currently considered to be the only "commercial mummification business" in the world. In 2010, a team led by forensic archaeologist Stephen Buckley mummified Alan Billis using techniques based on 19 years of research into 18th-dynasty Egyptian mummification. The process was filmed for television, for the documentary "Mummifying Alan: Egypt's Last Secret". Billis made the decision to allow his body to be mummified after being diagnosed with terminal cancer in 2009. His body currently resides at London's Gordon Museum. Plastination is a technique used in anatomy to conserve bodies or body parts. The water and fat are replaced by certain plastics, yielding specimens that can be touched, do not smell or decay, and even retain most microscopic properties of the original sample. The technique was invented by Gunther von Hagens while working at the anatomical institute of Heidelberg University in 1978. Von Hagens has patented the technique in several countries and is heavily involved in its promotion, especially as the creator and director of the Body Worlds traveling exhibitions, exhibiting plastinated human bodies internationally. He also founded and directs the Institute for Plastination in Heidelberg. More than 40 institutions worldwide have facilities for plastination, mainly for medical research and study, and most are affiliated with the International Society for Plastination. In the Middle Ages, based on a mistranslation from the Arabic term for bitumen, it was thought that mummies possessed healing properties. As a result, it became common practice to grind Egyptian mummies into a powder to be sold and used as medicine. When actual mummies became unavailable, the sun-desiccated corpses of criminals, slaves and suicidal people were substituted by mendacious merchants. Mummies were said to have many healing properties. Francis Bacon and Robert Boyle recommended them for healing bruises and preventing bleeding. The trade in mummies seems to have been frowned upon by the Turkish authorities who ruled Egypt – several Egyptians were imprisoned for boiling mummies to make oil in 1424. However, mummies were in high demand in Europe and it was possible to buy them for the right price. John Sanderson, an English tradesman who visited Egypt in the 16th century, shipped six hundred pounds of mummy back to England. The practice developed into a wide-scale business that flourished until the late 16th century. Two centuries ago, mummies were still believed to have medicinal properties to stop bleeding, and were sold as pharmaceuticals in powdered form, as in mellified man. 
Artists also made use of Egyptian mummies: a brownish pigment known as mummy brown, based on "mummia" (sometimes alternatively called "caput mortuum", Latin for "death's head"), was originally obtained by grinding human and animal Egyptian mummies. It was most popular in the 17th century, but its use was discontinued in the early 19th century when its composition became generally known to artists, who replaced the pigment with a quite different blend that kept the original name (mummia or mummy brown) and yielded a similar tint, based on ground minerals (oxides and fired earths) and/or blends of powdered gums and oleoresins (such as myrrh and frankincense) as well as ground bitumen. These blends appeared on the market as forgeries of powdered mummy pigment but were ultimately considered acceptable replacements, once antique mummies were no longer permitted to be destroyed. Many thousands of mummified cats were also sent from Egypt to England to be processed for use in fertilizer. During the 19th century, following the discovery of the first tombs and artifacts in Egypt, Egyptology was a huge fad in Europe, especially in Victorian England. European aristocrats would occasionally entertain themselves by purchasing mummies, having them unwrapped, and holding observation sessions. The pioneer of this kind of entertainment in Britain was Thomas Pettigrew, known as "Mummy" Pettigrew because of this work. Such unrolling sessions destroyed hundreds of mummies, because exposure to the air caused them to disintegrate. The use of mummies as fuel for locomotives was documented by Mark Twain (likely as a joke), but the truth of the story remains debatable. During the American Civil War, mummy-wrapping linens were said to have been used to manufacture paper. Evidence for the reality of these claims is still equivocal. Researcher Ben Radford reports that, in her book "The Mummy Congress", Heather Pringle writes: "No mummy expert has ever been able to authenticate the story ... Twain seems to be the only published source – and a rather suspect one at that". Pringle also writes that there is no evidence for the "mummy paper" either. Radford also says that many journalists have not done a good job with their research, and while it is true that mummies were often not shown respect in the 1800s, there is no evidence for this rumor. While mummies were used in medicine, some researchers have questioned these other uses, such as making paper and paint, fueling locomotives and fertilizing land.
https://en.wikipedia.org/wiki?curid=20224
Melilla Melilla ( , , ; ) is a Spanish autonomous city located on the northwest coast of Africa, sharing a border with Morocco. It has an area of . Melilla is one of two permanently inhabited Spanish cities in mainland Africa, the other being nearby Ceuta. It was part of the Province of Málaga until 14 March 1995, when the city's Statute of Autonomy was passed. Melilla is one of the special territories of the European Union. As of 2019, Melilla had a population of 86,487. The population is chiefly divided between people of Iberian and Riffian extraction. There is also a small number of Sephardic Jews and Sindhi Hindus. Spanish and Riffian-Berber are the two most widely spoken languages, the former being the official language. Melilla, just like Ceuta and Spain's other remaining territories in Africa, is subject to an irredentist claim by Morocco. The original name (currently rendered as "Rusadir") was a Phoenician name, coming from the name given to the nearby Cape Three Forks. "Addir" meant "powerful". The formation of the name is similar to that of other names given in Antiquity to outlets along the North African coast, including "Rusguniae", "Rusubbicari", "Rusuccuru", "Rusippisir", "Rusigan" (Rachgoun), "Rusicade", "Ruspina", "Ruspe" or "Rsmlqr". Meanwhile, the etymology of the current city name (dating back to the 9th century, rendered as "Melilla" in Spanish) is uncertain. The site was an active apicultural location in the past, and the name has been related to honey; this is tentatively backed up by two ancient coins featuring a bee as well as the inscriptions and . Others relate the name to "discord" or "fever", or to an ancient Arab personality. The current Riffian name of Melilla is "Mřič" or "Mlilt", which means "the white one". It was a Phoenician and later Punic trade establishment under the name of Rusadir ("Rusaddir" for the Romans and "Russadeiron" () for the Greeks). Later Rome absorbed it as part of the Roman province of Mauretania Tingitana. Rusaddir is mentioned by Ptolemy (IV, 1) and Pliny (V, 18), who called it "oppidum et portus" (a fortified town and port). It was also cited by Mela (I, 33) as "Rusicada," and by the "Itinerarium Antonini". Rusaddir was said to have once been the seat of a bishop, but there is no record of any bishop of the purported see, which is not included in the Catholic Church's list of titular sees. As centuries passed, it was ruled by Vandal, Byzantine and Hispano-Visigothic bands. The political history is similar to that of towns in the region of the Moroccan Rif and southern Spain. Local rule passed through a succession of Amazigh, Phoenician, Punic, Roman, Umayyad, Cordobese, Idrisid, Almoravid, Almohad, Marinid, and then Wattasid rulers. During the 15th century, the city sank into decline, like most other cities of the Kingdom of Fez located along the Mediterranean coast, eclipsed by those along the Atlantic facade. Following the completion of the conquest of the Nasrid Kingdom of Granada by the Catholic Monarchs in 1492, their Secretary began to compile information about the sorry state of the North African coast with the prospect of potential territorial expansion in mind, sending field agents to investigate and subsequently reporting to the Catholic Monarchs that, by early 1494, locals had expelled the authority of the Sultan of Fez and had offered to pledge service. 
While the 1494 Treaty of Tordesillas put Melilla and Cazaza (until then reserved to the Portuguese) under the sphere of Castile, the conquest of the city had to wait, delayed by the occupation of Naples by Charles VIII of France. The Duke of Medina Sidonia, Juan Alfonso Pérez de Guzmán, promoted the seizure of the city, to be headed by , while the Catholic Monarchs, Isabella of Castile and Ferdinand of Aragon, endorsed the initiative, also providing the assistance of their artillery officer Francisco Ramírez de Madrid during the operation. The city was occupied on 17 September 1497 virtually without violence: located on the border between the Kingdom of Tlemcen and the Kingdom of Fez and fought over many times by those powers, it had been left abandoned and partially ruined. No large-scale expansion into the Kingdom of Fez ensued, and, barring the enterprises of Cardinal Cisneros along the coast at Mers El Kébir and Oran (on the Algerian coast) and at the rock of Badis (this one within the territorial scope of the Kingdom of Fez), the imperial impetus of the Hispanic Monarchy was eventually directed elsewhere, to the Italian Wars waged against France, and, particularly from 1519 onward, to the newly discovered continent across the Atlantic. Melilla was initially jointly administered by the House of Medina Sidonia and the Crown, and a 1498 settlement forced the former to station a 700-man garrison in Melilla and forced the latter to provide the city with a number of maravedíes and wheat "fanegas". The Crown's interest in the city decreased during the reign of Charles V. During the 16th century, soldiers stationed in Melilla were badly remunerated, leading to many desertions. During the late 17th century, Alaouite sultan Ismail Ibn Sharif attempted to conquer the city, taking the outer forts protecting the city in the 1680s and further unsuccessfully besieging the city in the 1690s. One Spanish officer reflected, "an hour in Melilla, from the point of view of merit, was worth more than thirty years of service to Spain." The current limits of the Spanish territory around the Melilla fortress were fixed by treaties with Morocco in 1859, 1860, 1861, and 1894. In the late 19th century, as Spanish influence expanded in this area, the Crown authorized Melilla as the only centre of trade on the Rif coast between Tetuan and the Algerian frontier. The value of trade increased, with goat skins, eggs and beeswax being the principal exports, and cotton goods, tea, sugar and candles being the chief imports. In 1893, the Rif Berbers launched the First Melillan campaign to take back this area; Spain sent 25,000 soldiers to defend against them. The conflict was also known as the "Margallo War", after Spanish General Juan García y Margallo, the Governor of Melilla, who was killed in the battle. In 1908 two companies under the protection of Bou Hmara, a chieftain then ruling the Rif region, started mining lead and iron some 20 kilometers (12.4 miles) from Melilla. They started to construct a railway between the port and the mines. In October of that year Bou Hmara's vassals revolted against him and raided the mines, which remained closed until June 1909. By July the workmen were again attacked and several were killed. Severe fighting between the Spaniards and the tribesmen followed, in the Second Melillan campaign. In 1910, with the Rif having submitted, the Spaniards restarted the mines and undertook harbor works at Mar Chica, but hostilities broke out again in 1911. 
On 22 July 1921, the Berbers, under the leadership of Abd el Krim, inflicted a grave defeat on the Spanish (see Battle of Annual). The Berbers controlled the area until 1926, when the Spanish Protectorate finally managed to regain it. The city was used as one of the staging grounds for the July 1936 military coup d'état that started the Spanish Civil War. A statue of Francisco Franco, the putschist general who assumed control of the Army of Africa in 1936, is still prominently featured; it is the last statue of Franco in Spain. On 6 November 2007, King Juan Carlos I and Queen Sofia visited the city, which caused a massive demonstration of support. The visit also sparked protests from the Moroccan government. It was the first time a Spanish monarch had visited Melilla in 80 years. Melilla (and Ceuta) have declared the Muslim holiday of Eid al-Adha, or Feast of the Sacrifice, an official public holiday from 2010 onward. This is the first time a non-Christian religious festival has been officially celebrated in Spain since the Reconquista. Melilla is located in the northwest of the African continent, on the shores of the Alboran Sea, a marginal sea forming the westernmost portion of the Mediterranean. The city layout is arranged in a wide semicircle around the beach and the Port of Melilla, on the eastern side of the peninsula of Cape Tres Forcas, at the foot of and around the mouth of the Río de Oro, an intermittent stream, above sea level. The urban nucleus was originally a fortress, Melilla la Vieja, built on a peninsular mound about in height. The Moroccan settlement of Beni Ansar lies immediately south of Melilla. The nearest Moroccan city is Nador, and the ports of Melilla and Nador are both within the same bay; nearby is the Bou Areg Lagoon. Melilla has a warm Mediterranean climate influenced by its proximity to the sea, giving it much cooler summers and more precipitation than inland areas deeper into Africa. The climate, in general, has much in common with the type found in southern coastal Spain on the European mainland, with relatively small temperature differences between seasons. The government bodies stipulated in the Statute of Autonomy are the Assembly of Melilla, the President of Melilla and the Council of Government. The assembly is a 25-member body whose members are elected through universal suffrage every four years in closed party lists following the schedule of local elections at the national level. Its members are called "local deputies", but they in fact hold the status of "concejales" (municipal councillors). Unlike regional legislatures (and akin to municipal councils), the assembly does not enjoy the right of initiative for primary legislation. The president of Melilla (often addressed as Mayor-President, who also exercises the roles of mayor, president of the Assembly, president of the Council of Government and representative of the city) is invested by the Assembly. After local elections, the president is invested by a qualified majority from among the leaders of the electoral lists; failing that, the leader of the most voted list at the election is invested in the office. In the case of a motion of no confidence, the president can only be ousted if a qualified majority votes for an alternative assembly member. The Council of Government is the traditional collegiate executive body for parliamentary systems. 
Unlike the municipal government boards in the standard "ayuntamientos", the members of the Council of Government (including the Vice-Presidents) do not need to be members of the assembly. Melilla is the city in Spain with the highest proportion of postal voting; vote buying (via mail-in ballots) is widely reported to be a common practice in the poor neighborhoods of Melilla. Court cases in this matter have involved the PP, the CPM and the PSOE. On 15 June 2019, following the May 2019 Melilla Assembly election, the regionalist and left-leaning party of Muslim and Amazigh persuasion, Coalition for Melilla (CPM, 8 seats), the Spanish Socialist Workers' Party (PSOE, 4 seats) and Citizens–Party of the Citizenry (Cs, 1 seat) voted in favour of the Cs candidate (Eduardo de Castro) for the Presidency of the Autonomous City, ousting Juan José Imbroda, from the People's Party (PP, 10 seats), who had been in office since 2000. Melilla is subdivided into eight districts ("distritos"), which are further subdivided into neighbourhoods ("barrios"). The Gross domestic product (GDP) of the autonomous community was 1.6 billion euros in 2018, accounting for 0.1% of Spanish economic output. GDP per capita adjusted for purchasing power was 19,900 euros or 66% of the EU27 average in the same year. Melilla was the NUTS2 region with the lowest GDP per capita in Spain. Melilla does not participate in the European Union Customs Union (EUCU). There is no VAT (IVA) tax, but a local reduced-rate tax called IPSI. Preserving its status as a free port, Melilla levies no tariffs on imports; the only tax concerning them is the IPSI. Exports to the Customs Union (including peninsular Spain) are, however, subject to the corresponding customs tariff and are taxed with the corresponding VAT. There are some special manufacturing taxes regarding electricity and transport, as well as complementary charges on tobacco and oil and fuel products. The principal industry is fishing. Cross-border commerce (legal or smuggled) and Spanish and European grants and wages are the other income sources. Melilla is regularly connected to the Iberian peninsula by air and sea traffic and is also economically connected to Morocco: most of its fruit and vegetables are imported across the border. Moroccans in the city's hinterland are attracted to it: 36,000 Moroccans cross the border daily to work, shop or trade goods. The port of Melilla offers several daily connections to Almería and Málaga. Melilla Airport offers daily flights to Almería, Málaga and Madrid. Spanish operators Air Europa and Iberia operate at Melilla's airport. Many people travelling between Europe and Morocco use the ferry links to Melilla, both for passengers and for freight. Because of this, the port and related companies form an important economic driver for the city. Melilla's Capilla de Santiago, or James's Chapel, by the city walls, is the only authentic Gothic structure in Africa. In the first quarter of the 20th century, Melilla became a thriving port benefitting from the recently established Protectorate of Spanish Morocco in the nearby Rif region. The new architectural style of "modernismo" was expressed by a new bourgeois class. This style, frequently referred to as the Catalan version of Art Nouveau, was extremely popular in the early part of the 20th century in Spain. The workshops inspired by the Catalan architect Enrique Nieto continued in the modernist style, even after Modernisme went out of fashion elsewhere. 
Accordingly, Melilla has the second most important concentration of Modernist works in Spain after Barcelona. Nieto was in charge of designing the main synagogue, the Central Mosque and various Catholic churches. Melilla has been praised as an example of multiculturalism, being a small city in which one can find four major religions represented. However, the Christian majority of the past, constituting around 65% of the population at one point, has been shrinking, while the number of native Muslim inhabitants has steadily increased to its present 45% of the population. The Jewish and Hindu communities have also been shrinking due to economic emigration to mainland Spain (notably Málaga and Madrid). Jews, who had lived in Melilla for centuries, have been leaving the city in recent years (from 20% of the population before World War II to less than 5% today). Most of the Jewish population has left for Israel and Venezuela. There is a small, autonomous, and commercially important Hindu community present in Melilla, which numbers about 100 members today. Melilla has been a popular destination for migrants seeking to enter the European Union. The border is secured by the Melilla border fence, a six-metre-tall double fence with watchtowers; yet migrants (in groups of tens or sometimes hundreds) storm the fence and manage to cross it from time to time. Detection wires, radar, and day/night vision cameras are planned to increase security and prevent irregular immigration. In February 2014, over 200 migrants from sub-Saharan Africa scaled a security fence to get into the Melilla migrant reception centre. The reception centre, built for 480 migrants, was already overcrowded with 1,300 people. In recent years, the Spanish government has urged Moroccan security forces to stem the flow of migrants traveling towards Melilla. In 2015, Moroccan police dispersed migrant camps in the forests surrounding Melilla by torching makeshift homes and arresting migrants. Since the 2014 incident, Spain has installed additional security measures, including increased fencing, camera surveillance systems, and a more salient troop presence. Attempted border crossings by migrants have decreased at both Melilla and Ceuta since their peak in 2015–2016; arrivals are down twenty-five percent since 2018. However, attempts by migrants to swarm the security fences at Melilla have been widely broadcast by Spanish media sources, creating a sense of urgency in mainland Spain. This fear over African migrants is seen by many as the main factor leading to the rise of Vox, Spain's populist party. Vox officials have frequently pointed to the immigration situation at Melilla and Ceuta as proof of a crisis at Spain's border. Melilla Airport is serviced by Air Nostrum, flying to the Spanish cities of Málaga, Madrid, Barcelona, Las Palmas de Gran Canaria, Palma de Mallorca, Granada, Badajoz, Sevilla and Almería. In April 2013, a local enterprise set up Melilla Airlines, flying from the city to Málaga. The city is linked to Málaga, Almería and Motril by ferry. Three roads connect Melilla and Morocco but require clearance through border checkpoints. Melilla is a surfing destination. The city's football club, UD Melilla, plays in the third tier of Spanish football, the Segunda División B. The club was founded in 1943 and since 1945 has played at the 12,000-seater Estadio Municipal Álvarez Claro. Until the other club was dissolved in 2012, UD Melilla played the Ceuta-Melilla derby against AD Ceuta. 
The clubs travelled to each other via the Spanish mainland to avoid entering Morocco. The second-highest ranked club in the city is Casino del Real CF of the fourth-tier Tercera División. The sport's governing body in the city is the Melilla Football Federation. The government of Morocco has repeatedly called for Spain to transfer the sovereignty of Ceuta and Melilla, along with uninhabited islets such as the islands of Alhucemas, the rock of Vélez and Perejil island, drawing comparisons with Spain's territorial claim to Gibraltar. In both cases, the national governments and local populations of the disputed territories reject these claims by a large majority. The Spanish position states that both Ceuta and Melilla are integral parts of Spain, and have been since the 16th century, centuries prior to Morocco's independence from France in 1956, whereas Gibraltar, being a British Overseas Territory, is not and never has been part of the United Kingdom. Both cities also have the same semi-autonomous status as regions on the Spanish mainland. Melilla has been under Spanish rule for longer than cities in northern Spain such as Pamplona or Tudela, and was conquered roughly in the same period as the last Muslim cities of Southern Spain such as Granada, Málaga, Ronda or Almería: Spain claims that the enclaves were established before the creation of the Kingdom of Morocco. Morocco denies these claims and maintains that the Spanish presence on or near its coast is a remnant of the colonial past which should be ended. The United Nations list of Non-Self-Governing Territories does not include these Spanish territories and the dispute remains bilaterally debated between Spain and Morocco. Melilla is twinned with:
https://en.wikipedia.org/wiki?curid=20226
Macaroni Macaroni (, Italian: maccheroni) is dry pasta shaped like narrow tubes. Made with durum wheat, macaroni is commonly cut in short lengths; curved macaroni may be referred to as elbow macaroni. Some home machines can make macaroni shapes, but like most pasta, macaroni is usually made commercially by large-scale extrusion. The curved shape is created by different speeds of extrusion on opposite sides of the pasta tube as it comes out of the machine. In North America, the word "macaroni" is often used synonymously with elbow-shaped macaroni, as it is the variety most often used in macaroni and cheese recipes. In Italy, the noun "maccheroni" refers to straight, tubular, square-ended "pasta corta" ("short-length pasta"). Maccheroni may also refer to long pasta dishes such as "maccheroni alla chitarra" and "frittata di maccheroni", which are prepared with long pasta like spaghetti. The name comes from Italian "maccheroni" , plural form of "maccherone". The many variants sometimes differ from each other because of the texture of each pasta: "rigatoni" and "tortiglioni", for example, have ridges down their lengths, while "chifferi", "lumache", "lumaconi", "pipe", "pipette", etc. refer to elbow-shaped pasta similar to macaroni in North American culture. However, the product as well as the name derive from the ancient Greek "Macaria". The academic consensus supports that the word is derived from the Greek μακαρία ("makaria"), a kind of barley broth which was served to commemorate the dead. In turn, that comes from μάκαρες ("makares") meaning "blessed dead", and ultimately from μακάριος ("makarios"), collateral of μάκαρ ("makar") which means "blessed, happy". However, the Italian linguist G. Alessio argues that the word can have two origins. The first is the Medieval Greek μακαρώνεια ("makarōneia") "dirge" (stated in sec. XIII by James of Bulgaria), which would mean "funeral meal" and then "food to serve" during this office (see modern Eastern Thrace's μαχαρωνιά - "macharōnia" in the sense of "rice-based dish served at the funeral"), in which case, the term would be composed of the double root of μακάριος "blessed" and αἰωνίος ("aiōnios"), "eternally". The second is the Greek μακαρία "barley broth", which would have added the suffix "-one". In his book "Delizia! The Epic History of Italians and their Food" (2007), John Dickie instead says that the word macaroni, and its earlier variants like "maccheroni", "comes from "maccare", meaning to pound or crush." The word first appears in English as "makerouns" in the 1390 "Forme of Cury" which records the earliest recipe for macaroni cheese. The word later came to be applied to overdressed dandies and was associated with foppish Italian fashions of dress and periwigs, as in the eighteenth-century British song "Yankee Doodle". The Russian language borrowed the word (as ) as a generic term for all varieties of pasta; this also holds for several other Slavic languages, as well as for Estonian, Turkish, Greek, and Brazilian Portuguese. In Iran, all sorts of pasta are collectively called "makaroni". As is the case with dishes made with other types of pasta, macaroni and cheese is a popular dish in North America, and is often made with elbow macaroni. The same dish, known simply as macaroni cheese, is also popular in Great Britain, where it originated. A sweet macaroni pudding, known as creamed macaroni, containing milk and sugar (and rather similar to a rice pudding) was also popular with the British during the Victorian era. 
It is still manufactured by Ambrosia and sold in UK supermarkets. In areas with large Chinese populations open to Western cultural influence such as Hong Kong, Macao, Malaysia and Singapore, the local Chinese have adopted macaroni as an ingredient for Chinese-style Western cuisine. In Hong Kong's "cha chaan teng" ("tea restaurants") and Southeast Asia's "kopi tiam" ("coffee shops"), macaroni is cooked in water and then rinsed to remove starch, and served in clear broth with ham or frankfurter sausages, peas, black mushrooms, and optionally eggs, reminiscent of noodle soup dishes. This is often a course for breakfast or light lunch fare. Macaroni has also been incorporated into Malay Malaysian cuisine where it is stir-fried akin to mee goreng using Asian seasoning similar to said noodle dish (i.e shallots, oyster sauce and chili paste).
https://en.wikipedia.org/wiki?curid=20229
Messenger RNA Messenger RNA (mRNA) is a single-stranded RNA molecule that corresponds to the genetic sequence of a gene and is read by the ribosome in the process of producing a protein. mRNA is created during the process of transcription, where the enzyme RNA polymerase converts genes into primary transcript mRNA (also known as pre-mRNA). This pre-mRNA usually still contains introns, regions that will not go on to code for the final amino acid sequence. These are removed in the process of RNA splicing, leaving only exons, regions that will encode the protein. This exon sequence constitutes mature mRNA. Mature mRNA is then read by the ribosome, and, utilising amino acids carried by transfer RNA (tRNA), the ribosome creates the protein. This process is known as translation. All of these processes form part of the central dogma of molecular biology, which describes the flow of genetic information in a biological system. As in DNA, genetic information in mRNA is contained in the sequence of nucleotides, which are arranged into codons consisting of three nucleotides each. Each codon codes for a specific amino acid, except the stop codons, which terminate protein synthesis. This process of translation of codons into amino acids requires two other types of RNA: transfer RNA, which recognises the codon and provides the corresponding amino acid, and ribosomal RNA (rRNA), the central component of the ribosome's protein-manufacturing machinery. The existence of mRNA was first suggested by Jacques Monod and François Jacob and was subsequently discovered by Jacob, Sydney Brenner and Matthew Meselson at the California Institute of Technology in 1961. The brief existence of an mRNA molecule begins with transcription, and ultimately ends in degradation. During its life, an mRNA molecule may also be processed, edited, and transported prior to translation. Eukaryotic mRNA molecules often require extensive processing and transport, while prokaryotic mRNA molecules do not. A molecule of eukaryotic mRNA and the proteins surrounding it are together called a messenger RNP. Transcription is the process by which RNA is copied from DNA. During transcription, RNA polymerase makes a copy of a gene from the DNA to mRNA as needed. This process is similar in eukaryotes and prokaryotes. One notable difference, however, is that eukaryotic RNA polymerase associates with mRNA-processing enzymes during transcription so that processing can proceed quickly after the start of transcription. The short-lived, unprocessed or partially processed product is termed "precursor mRNA", or "pre-mRNA"; once completely processed, it is termed "mature mRNA". Processing of mRNA differs greatly among eukaryotes, bacteria, and archaea. Non-eukaryotic mRNA is, in essence, mature upon transcription and requires no processing, except in rare cases. Eukaryotic pre-mRNA, however, requires several processing steps before its transport to the cytoplasm and its translation by the ribosome. The extensive processing of eukaryotic pre-mRNA that leads to the mature mRNA is RNA splicing, a mechanism by which introns or outrons (non-coding regions) are removed and exons (coding regions) are joined together. A "5' cap" (also termed an RNA cap, an RNA 7-methylguanosine cap, or an RNA m7G cap) is a modified guanine nucleotide that has been added to the "front" or 5' end of a eukaryotic messenger RNA shortly after the start of transcription. The 5' cap consists of a terminal 7-methylguanosine residue that is linked through a 5'-5'-triphosphate bond to the first transcribed nucleotide. 
Its presence is critical for recognition by the ribosome and protection from RNases. Cap addition is coupled to transcription, and occurs co-transcriptionally, such that each influences the other. Shortly after the start of transcription, the 5' end of the mRNA being synthesized is bound by a cap-synthesizing complex associated with RNA polymerase. This enzymatic complex catalyzes the chemical reactions that are required for mRNA capping. Synthesis proceeds as a multi-step biochemical reaction. In some instances, an mRNA will be edited, changing the nucleotide composition of that mRNA. An example in humans is the apolipoprotein B mRNA, which is edited in some tissues, but not others. The editing creates an early stop codon, which, upon translation, produces a shorter protein. Polyadenylation is the covalent linkage of a polyadenylyl moiety to a messenger RNA molecule. In eukaryotic organisms most messenger RNA (mRNA) molecules are polyadenylated at the 3' end, but recent studies have shown that short stretches of uridine (oligouridylation) are also common. The poly(A) tail and the protein bound to it aid in protecting mRNA from degradation by exonucleases. Polyadenylation is also important for transcription termination, export of the mRNA from the nucleus, and translation. mRNA can also be polyadenylated in prokaryotic organisms, where poly(A) tails act to facilitate, rather than impede, exonucleolytic degradation. Polyadenylation occurs during and/or immediately after transcription of DNA into RNA. After transcription has been terminated, the mRNA chain is cleaved through the action of an endonuclease complex associated with RNA polymerase. After the mRNA has been cleaved, around 250 adenosine residues are added to the free 3' end at the cleavage site. This reaction is catalyzed by polyadenylate polymerase. Just as in alternative splicing, there can be more than one polyadenylation variant of an mRNA. Polyadenylation site mutations also occur. The primary RNA transcript of a gene is cleaved at the poly-A addition site, and 100–200 A's are added to the 3’ end of the RNA. If this site is altered, an abnormally long and unstable mRNA construct will be formed. Another difference between eukaryotes and prokaryotes is mRNA transport. Because eukaryotic transcription and translation is compartmentally separated, eukaryotic mRNAs must be exported from the nucleus to the cytoplasm—a process that may be regulated by different signaling pathways. Mature mRNAs are recognized by their processed modifications and then exported through the nuclear pore by binding to the cap-binding proteins CBP20 and CBP80, as well as the transcription/export complex (TREX). Multiple mRNA export pathways have been identified in eukaryotes. In spatially complex cells, some mRNAs are transported to particular subcellular destinations. In mature neurons, certain mRNA are transported from the soma to dendrites. One site of mRNA translation is at polyribosomes selectively localized beneath synapses. The mRNA for Arc/Arg3.1 is induced by synaptic activity and localizes selectively near active synapses based on signals generated by NMDA receptors. Other mRNAs also move into dendrites in response to external stimuli, such as β-actin mRNA. Upon export from the nucleus, actin mRNA associates with ZBP1 and the 40S subunit. The complex is bound by a motor protein and is transported to the target location (neurite extension) along the cytoskeleton. Eventually ZBP1 is phosphorylated by Src in order for translation to be initiated. 
In developing neurons, mRNAs are also transported into growing axons and especially growth cones. Many mRNAs are marked with so-called "zip codes," which target their transport to a specific location. Because prokaryotic mRNA does not need to be processed or transported, translation by the ribosome can begin immediately after the end of transcription. Therefore, it can be said that prokaryotic translation is "coupled" to transcription and occurs "co-transcriptionally". Eukaryotic mRNA that has been processed and transported to the cytoplasm (i.e., mature mRNA) can then be translated by the ribosome. Translation may occur at ribosomes free-floating in the cytoplasm, or directed to the endoplasmic reticulum by the signal recognition particle. Therefore, unlike in prokaryotes, eukaryotic translation "is not" directly coupled to transcription. It is even possible in some contexts that reduced mRNA levels are accompanied by increased protein levels, as has been observed for mRNA/protein levels of EEF1A1 in breast cancer. Coding regions are composed of codons, which are decoded and translated (in eukaryotes usually into one and in prokaryotes usually into several) into proteins by the ribosome. Coding regions begin with the start codon and end with a stop codon. In general, the start codon is an AUG triplet and the stop codon is UAG ("amber"), UAA ("ochre"), or UGA ("opal"). The coding regions tend to be stabilised by internal base pairs, this impedes degradation. In addition to being protein-coding, portions of coding regions may serve as regulatory sequences in the pre-mRNA as exonic splicing enhancers or exonic splicing silencers. Untranslated regions (UTRs) are sections of the mRNA before the start codon and after the stop codon that are not translated, termed the five prime untranslated region (5' UTR) and three prime untranslated region (3' UTR), respectively. These regions are transcribed with the coding region and thus are exonic as they are present in the mature mRNA. Several roles in gene expression have been attributed to the untranslated regions, including mRNA stability, mRNA localization, and translational efficiency. The ability of a UTR to perform these functions depends on the sequence of the UTR and can differ between mRNAs. Genetic variants in 3' UTR have also been implicated in disease susceptibility because of the change in RNA structure and protein translation. The stability of mRNAs may be controlled by the 5' UTR and/or 3' UTR due to varying affinity for RNA degrading enzymes called ribonucleases and for ancillary proteins that can promote or inhibit RNA degradation. (See also, C-rich stability element.) Translational efficiency, including sometimes the complete inhibition of translation, can be controlled by UTRs. Proteins that bind to either the 3' or 5' UTR may affect translation by influencing the ribosome's ability to bind to the mRNA. MicroRNAs bound to the 3' UTR also may affect translational efficiency or mRNA stability. Cytoplasmic localization of mRNA is thought to be a function of the 3' UTR. Proteins that are needed in a particular region of the cell can also be translated there; in such a case, the 3' UTR may contain sequences that allow the transcript to be localized to this region for translation. Some of the elements contained in untranslated regions form a characteristic secondary structure when transcribed into RNA. These structural mRNA elements are involved in regulating the mRNA. Some, such as the SECIS element, are targets for proteins to bind. 
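The decoding rules just described can be illustrated with a short Python sketch. This is an illustrative sketch only, not part of the article's source material: it uses the standard genetic code, a toy transcription step in which the coding (sense) strand of DNA is simply rewritten with U in place of T, and an invented example sequence; the function and variable names are hypothetical and do not come from any particular software library.

    # Standard genetic code, one letter per codon, ordered U, C, A, G at each position.
    BASES = "UCAG"
    AMINO_ACIDS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
    CODON_TABLE = {a + b + c: AMINO_ACIDS[16 * i + 4 * j + k]
                   for i, a in enumerate(BASES)
                   for j, b in enumerate(BASES)
                   for k, c in enumerate(BASES)}

    def transcribe(coding_strand_dna):
        """Toy model of transcription: the coding strand is copied with T read as U."""
        return coding_strand_dna.upper().replace("T", "U")

    def translate(mrna):
        """Translate from the first AUG to the first in-frame stop codon ('*')."""
        start = mrna.find("AUG")
        if start == -1:
            return ""                     # no start codon, nothing is translated
        protein = []
        for i in range(start, len(mrna) - 2, 3):
            amino_acid = CODON_TABLE[mrna[i:i + 3]]
            if amino_acid == "*":         # UAG, UAA or UGA terminates synthesis
                break
            protein.append(amino_acid)
        return "".join(protein)

    if __name__ == "__main__":
        dna = "TTAATGGCTGAATCTAAACGGTAGCC"  # hypothetical coding-strand fragment
        mrna = transcribe(dna)              # -> "UUAAUGGCUGAAUCUAAACGGUAGCC"
        print(translate(mrna))              # -> "MAESKR"

Running the example prints "MAESKR", the short peptide encoded between the first AUG and the first in-frame stop codon (here UAG); the bases before the AUG and after the stop codon play the role of untranslated regions and are ignored by the decoder, mirroring the 5' UTR and 3' UTR described above.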
One class of mRNA element, the riboswitches, directly bind small molecules, changing their fold to modify levels of transcription or translation. In these cases, the mRNA regulates itself. The 3' poly(A) tail is a long sequence of adenine nucleotides (often several hundred) added to the 3' end of the pre-mRNA. This tail promotes export from the nucleus and translation, and protects the mRNA from degradation. An mRNA molecule is said to be monocistronic when it contains the genetic information to translate only a single protein chain (polypeptide). This is the case for most of the eukaryotic mRNAs. On the other hand, polycistronic mRNA carries several open reading frames (ORFs), each of which is translated into a polypeptide. These polypeptides usually have a related function (they often are the subunits composing a final complex protein) and their coding sequence is grouped and regulated together in a regulatory region, containing a promoter and an operator. Most of the mRNA found in bacteria and archaea is polycistronic, as is the human mitochondrial genome. Dicistronic or bicistronic mRNA encodes only two proteins. In eukaryotes mRNA molecules form circular structures due to an interaction between the eIF4E and poly(A)-binding protein, which both bind to eIF4G, forming an mRNA-protein-mRNA bridge. Circularization is thought to promote cycling of ribosomes on the mRNA leading to time-efficient translation, and may also function to ensure only intact mRNA are translated (partially degraded mRNA characteristically have no m7G cap, or no poly-A tail). Other mechanisms for circularization exist, particularly in virus mRNA. Poliovirus mRNA uses a cloverleaf section towards its 5' end to bind PCBP2, which binds poly(A)-binding protein, forming the familiar mRNA-protein-mRNA circle. Barley yellow dwarf virus has binding between mRNA segments on its 5' end and 3' end (called kissing stem loops), circularizing the mRNA without any proteins involved. RNA virus genomes (the + strands of which are translated as mRNA) are also commonly circularized. During genome replication the circularization acts to enhance genome replication speeds, cycling viral RNA-dependent RNA polymerase much the same as the ribosome is hypothesized to cycle. Different mRNAs within the same cell have distinct lifetimes (stabilities). In bacterial cells, individual mRNAs can survive from seconds to more than an hour. However, the lifetime averages between 1 and 3 minutes, making bacterial mRNA much less stable than eukaryotic mRNA. In mammalian cells, mRNA lifetimes range from several minutes to days. The greater the stability of an mRNA the more protein may be produced from that mRNA. The limited lifetime of mRNA enables a cell to alter protein synthesis rapidly in response to its changing needs. There are many mechanisms that lead to the destruction of an mRNA, some of which are described below. In general, in prokaryotes the lifetime of mRNA is much shorter than in eukaryotes. Prokaryotes degrade messages by using a combination of ribonucleases, including endonucleases, 3' exonucleases, and 5' exonucleases. In some instances, small RNA molecules (sRNA) tens to hundreds of nucleotides long can stimulate the degradation of specific mRNAs by base-pairing with complementary sequences and facilitating ribonuclease cleavage by RNase III. It was recently shown that bacteria also have a sort of 5' cap consisting of a triphosphate on the 5' end. 
Removal of two of the phosphates leaves a 5' monophosphate, causing the message to be destroyed by the exonuclease RNase J, which degrades 5' to 3'. Inside eukaryotic cells, there is a balance between the processes of translation and mRNA decay. Messages that are being actively translated are bound by ribosomes, the eukaryotic initiation factors eIF-4E and eIF-4G, and poly(A)-binding protein. eIF-4E and eIF-4G block the decapping enzyme (DCP2), and poly(A)-binding protein blocks the exosome complex, protecting the ends of the message. The balance between translation and decay is reflected in the size and abundance of cytoplasmic structures known as P-bodies. The poly(A) tail of the mRNA is shortened by specialized exonucleases that are targeted to specific messenger RNAs by a combination of cis-regulatory sequences on the RNA and trans-acting RNA-binding proteins. Poly(A) tail removal is thought to disrupt the circular structure of the message and destabilize the cap binding complex. The message is then subject to degradation by either the exosome complex or the decapping complex. In this way, translationally inactive messages can be destroyed quickly, while active messages remain intact. The mechanism by which translation stops and the message is handed off to decay complexes is not understood in detail. The presence of AU-rich elements in some mammalian mRNAs tends to destabilize those transcripts through the action of cellular proteins that bind these sequences and stimulate poly(A) tail removal. Loss of the poly(A) tail is thought to promote mRNA degradation by facilitating attack by both the exosome complex and the decapping complex. Rapid mRNA degradation via AU-rich elements is a critical mechanism for preventing the overproduction of potent cytokines such as tumor necrosis factor (TNF) and granulocyte-macrophage colony stimulating factor (GM-CSF). AU-rich elements also regulate the biosynthesis of proto-oncogenic transcription factors like c-Jun and c-Fos. Eukaryotic messages are subject to surveillance by nonsense-mediated decay (NMD), which checks for the presence of premature stop codons (nonsense codons) in the message. These can arise via incomplete splicing, V(D)J recombination in the adaptive immune system, mutations in DNA, transcription errors, leaky scanning by the ribosome causing a frame shift, and other causes. Detection of a premature stop codon triggers mRNA degradation by 5' decapping, 3' poly(A) tail removal, or endonucleolytic cleavage. In metazoans, small interfering RNAs (siRNAs) processed by Dicer are incorporated into a complex known as the RNA-induced silencing complex or RISC. This complex contains an endonuclease that cleaves perfectly complementary messages to which the siRNA binds. The resulting mRNA fragments are then destroyed by exonucleases. siRNA is commonly used in laboratories to block the function of genes in cell culture. It is thought to be part of the innate immune system as a defense against double-stranded RNA viruses. MicroRNAs (miRNAs) are small RNAs that typically are partially complementary to sequences in metazoan messenger RNAs. Binding of a miRNA to a message can repress translation of that message and accelerate poly(A) tail removal, thereby hastening mRNA degradation. The mechanism of action of miRNAs is the subject of active research. There are other ways by which messages can be degraded, including non-stop decay and silencing by Piwi-interacting RNA (piRNA), among others.
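The contrast in mRNA lifetimes noted above, minutes in bacteria versus many hours or days in mammalian cells, is often approximated as simple first-order (exponential) decay. The sketch below is a rough back-of-the-envelope illustration under that assumption; the half-lives are round example numbers, not measured values for any particular transcript.

```python
import math

# First-order decay: N(t) = N0 * exp(-k * t), with k = ln(2) / half_life.
# Half-lives are illustrative round numbers, not measurements.

def remaining_fraction(t_minutes: float, half_life_minutes: float) -> float:
    """Fraction of an mRNA population still intact after t_minutes."""
    k = math.log(2) / half_life_minutes
    return math.exp(-k * t_minutes)

bacterial_half_life = 2.0        # minutes (typical order of magnitude)
mammalian_half_life = 10 * 60.0  # minutes (about 10 hours, illustrative)

for t in (2, 10, 60):
    print(
        f"after {t:>3} min: "
        f"bacterial mRNA {remaining_fraction(t, bacterial_half_life):5.1%} left, "
        f"mammalian mRNA {remaining_fraction(t, mammalian_half_life):5.1%} left"
    )
# A transcript with a 2-minute half-life is nearly gone within an hour,
# while one with a 10-hour half-life is still largely intact, which is
# why short-lived messages let a cell re-program protein synthesis quickly.
```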
Full-length mRNA molecules have been proposed as therapeutics since the beginning of the biotech era, but there was little traction until the 2010s, when Moderna Therapeutics was founded and managed to raise almost a billion dollars in venture funding in its first three years. Theoretically, the administered mRNA sequence can cause a cell to make a protein, which in turn could directly treat a disease or could function as a vaccine; more indirectly, the protein could drive an endogenous stem cell to differentiate in a desired way. The primary challenges of RNA therapy center on delivering the RNA to the desired cells, even more than on determining what sequence to deliver. Naked RNA sequences naturally degrade after preparation; they may trigger the body's immune system to attack them as an invader; and they cannot pass through the cell membrane. Once within the cell, they must then escape the cell's transport machinery to take action within the cytoplasm, which houses the ribosomes that direct the manufacture of proteins.
https://en.wikipedia.org/wiki?curid=20232
Mount Saint Vincent University Mount Saint Vincent University, often referred to as The Mount, is a public, primarily undergraduate university located in Halifax, Nova Scotia, Canada, and was established in 1873. Mount Saint Vincent offers undergraduate programs in Arts, Science, Education, and Professional Studies. The Mount has 13 graduate degrees in areas including Applied Human Nutrition, School Psychology, Child and Youth Study, Education, Family Studies and Gerontology, Public Relations and Women's Studies. The Mount offers a doctorate program, a Ph.D. in Educational Studies, through a joint initiative with St. Francis Xavier University and Acadia University. The Mount offers more than 190 courses, over 10 full undergraduate degree programs, and four graduate degree programs online. The university attracts many students in part because of its small class sizes, specialty programs, and location. The Mount has Canada Research Chairs in Gender Identity and Social Practices as well as Food Security and Policy Change. This institution is unique nationwide as it has a Chair in learning disabilities, a Master of Public Relations program, a Bachelor of Science in Communication Studies, and numerous other programs, faculty, and research initiatives. Established by the Sisters of Charity of Saint Vincent de Paul as a women's college in 1873, the Mount was one of the few institutions of higher education for women in Canada at a time when women could not vote. The original purpose of the academy was to train novices and young sisters as teachers, but the Sisters also recognized a need to educate other young women. Over the ensuing years, the order developed a convent, schools, an orphanage, and health care facilities throughout the Halifax area, as well as across North America. Architect Charles Welsford West designed the Romanesque chapel and annex (1903–05) at Mount St. Vincent Academy (now University). He served as the Architect for Nova Scotia Public Works & Mines from 1932 to 1950. By 1912, the Sisters of Charity of Saint Vincent de Paul recognized the need to offer greater opportunity through university education and adopted a plan to establish a college for young women. It was two years later, in 1914, that the Sisters partnered with Dalhousie University, enabling Mount Saint Vincent to offer the first two years of a bachelor's degree program to be credited toward a Dalhousie degree. In 1925, the Nova Scotia Legislature awarded the Mount the right to grant its own degrees, making it the only independent women's college in the British Commonwealth. By 1951, degrees were offered in Arts, Secretarial Science, Music, Home Economics, Library Science, Nursing and Education. A new charter was granted in 1966 and the College became Mount Saint Vincent University, bringing forth the establishment of a Board of Governors and Senate. This was also a period of tremendous growth – with enrollment increases, new construction and new agreements. In 1967 the Mount began admitting men as students. The University continued to evolve with the expansion of programs during the 1970s and entered into several new fields, including Child Study, Public Relations, Gerontology, Tourism and Hospitality Management, Cooperative Education and Distance Education. In July 1988, the Sisters of Charity of Saint Vincent de Paul officially transferred ownership of the institution to the Board of Governors.
After a fire in 1951 burned down Mount Saint Vincent's solitary building, the people of Halifax came together to support students by providing alternative accommodations for their classes. In recognition of this generosity, the Sisters of Charity established a memorial holiday. Caritas Day, named after the Christian virtue of charity, takes place on the last Wednesday of January of each year. No classes are held on this day, and students are encouraged to volunteer their time instead. Caritas Day is an opportunity for students and faculty alike to connect with the Sisters of Charity and come together outside of class time in a setting that is both personally and academically beneficial. Mount Saint Vincent University offers over 40 undergraduate degrees in the Arts, Sciences and Professional Studies. Professional Studies programs include Applied Human Nutrition, Business Administration, Child and Youth Study, Family Studies and Gerontology, Information Technology, Public Relations, Non-profit Leadership and Tourism and Hospitality Management. All undergraduate programs are work-experience eligible, meaning any Mount student can take part in a work placement (practicum, co-op, internship) as part of their program. The Mount also offers diplomas in Business Administration and Tourism & Hospitality Management, and certificates in Accounting, Business Administration, Marketing, Proficiency in French and Non-profit Leadership. Following consolidation of post-secondary programs across Nova Scotia in the 1990s, the Mount became home to the only education program in the Halifax area. The faculty of Education is home to the only school psychology graduate program in Atlantic Canada. Graduates of this program are eligible to become registered psychologists in Nova Scotia and several other provinces in Canada. The Mount houses 16 research centres and institutes. The Department of Applied Human Nutrition has an accredited dietetic program. The program is accredited by Dietitians of Canada, and the university's graduates may subsequently become registered dietitians. Mount Saint Vincent University is the only university in Canada to offer a Master of Public Relations program (MPR). The MPR program graduated its first class in October 2009. The Canadian Public Relations Society (CPRS) recognizes MSVU's MPR program for excellence in PR education in its Pathways to the Profession guide. Academic programs are supported by a wide variety of electronic and print research resources in the MSVU Library. Research services include drop-in reference assistance, research appointments and classroom workshops. January 2019 marked the 40th anniversary of the Mount's co-operative education program. It is the longest-standing nationally accredited co-op program in the Maritime Provinces, having first offered an optional co-op in 1979 for students in the Bachelor of Business Administration program. Four decades later, more than 8,000 Business Administration, Public Relations, and Tourism & Hospitality Management students have taken their learning from the classroom to the workplace, completing paid work terms in industries related to their field of study (today co-op is a required part of the Public Relations and Tourism & Hospitality Management degrees). Since 2014, the Mount Co-op Office has also enabled experiential opportunities for Arts and Science students through an Arts & Science Internship Program.
Mount Saint Vincent University is home to the Centre for Women in Business, a not-for-profit university business development centre (UBDC) dedicated to assisting with entrepreneurial activities both within the university and throughout Nova Scotia. Founded in 1992 by the University's Department of Business & Tourism, this remains the only UBDC in Canada with a primary focus on women. The Centre has served more than 7,500 clients over the past 18 years. The Mount Saint Vincent University Art Gallery is located on the first floor of Seton Academic Centre. The gallery opened in 1971 as a resource to Mount Saint Vincent, communities served by the university, artists, Metro Halifax residents and art publics everywhere. Admission is always free of charge. MSVU Art Gallery reflects the University's educational aims by devoting a significant part of its activities to the representation of women as cultural subjects and producers. Its exhibitions explore various forms of cultural production, highlighting the achievements of Nova Scotian artists and themes relevant to academic programs offered by the university. The Mount was the first Nova Scotia university to add a wikuom to its campus facilities. First raised on June 12, 2017, the wikuom is a welcoming traditional Mi'kmaq space where both Indigenous and non-Indigenous communities can gather and learn together. The Mount is also home to the Aboriginal Student Centre (ASC), whose staff provide academic advising, counselling and other support services to students. The ASC hosts a number of events, including the Mount's Mid-Winter Feast, Blanket Exercises, Cultural Workshops, Mini-Mount Camps, and more. Home to the Mystics, the Mount competes in the Atlantic Colleges Athletic Association (ACAA) in Women's & Men's Basketball, Women's & Men's Soccer, Cross Country and Women's Volleyball. The Mystics hold championship titles in all sports, making them the most acclaimed team in the ACAA. During the 1995 G7 summit, Mount Saint Vincent University awarded an honorary Doctor of Laws degree to Hillary Clinton.
https://en.wikipedia.org/wiki?curid=20237
Minimal pair In phonology, minimal pairs are pairs of words or phrases in a particular language, spoken or signed, that differ in only one phonological element, such as a phoneme, toneme or chroneme, and have distinct meanings. They are used to demonstrate that two phones are two separate phonemes in the language. Many phonologists in the middle part of the 20th century had a strong interest in developing techniques for discovering the phonemes of unknown languages, and in some cases, they set up writing systems for the languages. The major work of Kenneth Pike on the subject is "Phonemics: a technique for reducing languages to writing". The minimal pair was an essential tool in the discovery process and was found by substitution or commutation tests. As an example for English vowels, the pair "let" + "lit" can be used to demonstrate that the phones [ɛ] (in "let") and [ɪ] (in "lit") actually represent distinct phonemes /ɛ/ and /ɪ/. An example for English consonants is the minimal pair of "pat" + "bat". The following table shows other pairs demonstrating the existence of various distinct phonemes in English. All of the possible minimal pairs for any language may be set out in the same way. Phonemic differentiation may vary between different dialects of a language, so a particular minimal pair in one accent may be a pair of homophones in another. That does not mean that one of the phonemes is absent in the accent with the homophones, only that it is not contrastive in the same range of contexts. In addition to the minimal pairs of vowels and consonants provided above, others may be found: Many languages show contrasts between long and short vowels and consonants. A distinctive difference in length is attributed by some phonologists to a unit called a chroneme. Thus, Italian has the following minimal pair that is based on long and short : However, in such a case it is not easy to decide whether a long vowel or consonant should be treated as having an added chroneme or simply as a geminate sound with phonemes. Classical Latin, German, some Italian dialects, almost all Uralic languages, Thai, and many other languages also have distinctive length in vowels. An example is the "cŭ/cū" minimal pair in the dialect that is spoken near Palmi (Calabria, Italy): In some languages like Italian, word-initial consonants are geminated after certain vowel-final words in the same prosodic unit. Sometimes, the phenomenon can create some syntactic-gemination minimal pairs: In the example, the graphical accent on "dà" is just a diacritical mark that does not change the pronunciation of the word itself. However, in some specific areas, like Tuscany, both phrases are pronounced and so can be distinguished only from the context. Minimal pairs for tone contrasts in tone languages can be established; some writers refer to that as a contrast involving a toneme. For example, Kono distinguishes high tone and low tone on syllables: Languages in which stress may occur in different positions within the word often have contrasts that can be shown in minimal pairs, as in Greek and Spanish: English speakers are able to hear the difference between, for example, "great ape" and "grey tape", but phonemically, the two phrases are identical: /ɡreɪteɪp/. The difference between the two phrases, which constitute a minimal pair, is said to be one of juncture.
At the word boundary, a "plus juncture" /+/ has been posited and said to be the factor conditioning allophones to allow distinctivity: in this example, the phrase "great ape" has an /eɪ/ diphthong shortened by pre-fortis clipping and, since it is not syllable-initial, a /t/ with little aspiration (realized in various ways depending on dialect); meanwhile in "grey tape", the /eɪ/ has its full length and the /t/ is aspirated [tʰ]. Only languages with allophonic differences associated with grammatical boundaries may have juncture as a phonological element. There is disagreement over whether or not French has phonological juncture: it seems likely that the difference between, for example, "des petits trous" (some little holes) and "des petites roues" (some little wheels), which are phonemically identical, is only perceptible in slow, careful speech. The principle of a simple binary opposition between the two members of a minimal pair may be extended to cover a minimal set in which a number of words differ from one another in terms of one phone in a particular position in the word. For example, the vowels /a/, /e/, /i/, /o/, /u/ of Swahili are shown to be distinct by the following set of words: "pata" 'hinge', "peta" 'bend', "pita" 'pass', "pota" 'twist', "puta" 'thrash'. However, establishing such sets is not always straightforward and may require very complex study of multiple oppositions as expounded by, for example, Nikolai Trubetzkoy. Minimal pairs were an important part of the theory of pronunciation teaching during its development in the period of structuralist linguistics, particularly in the 1940s and 1950s, and minimal pair drills were widely used to train students to discriminate among the phonemes of the target language. These drills took the form of minimal pair word drills and minimal pair sentence drills. For example, if the focus of a lesson was on the distinction /ɪ/ versus /ɛ/, learners might be asked to signal which sound they heard as the teacher pronounced lists of words with these phonemes such as "lid/led", "tin/ten", or "slipped/slept". Minimal pair sentence drills consisted of paired sentences such as "He slipped on the floor/He slept on the floor." Again, learners would be asked to distinguish which of the sentences they heard as the teacher read them aloud. Another use of minimal pair drills was in pair work. Here, one member of the pair would be responsible for listening to the other member read the minimal pair word or sentence aloud and would be tasked with identifying which phoneme was being produced. In this form of classroom practice, both the skills of perception and production were practiced. Later writers have criticized the approach as being artificial and lacking in relevance to language learners' needs. However, even today minimal pair listening and production drills remain a common tool for the teaching of segmental differences. Some writers have claimed that learners are likely not to hear differences between phones if the difference is not a phonemic one. One of the objectives of contrastive analysis of languages' sound systems was to identify points of likely difficulty for language learners that would arise from differences in phoneme inventories between the native language and the target language. However, experimental evidence for this claim is hard to find, and the claim should be treated with caution. In the past, signs were considered holistic forms without internal structure. However, the discovery in the mid-20th century that minimal pairs also exist in sign languages showed that sign languages have sublexical structure.
Signs consist of phonemes, which are specifications for location, movement, handshape, orientation, and non-manual elements. When signs differ in only one of these specifications, they form a minimal pair. For instance, the German Sign Language signs shoes and socks are identical in form apart from their handshapes.
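The commutation test that underlies minimal pairs, substituting a single segment while holding the rest of the word constant, lends itself to a simple mechanical search. The sketch below scans a toy lexicon of phonemic transcriptions for candidate minimal pairs; the lexicon and its one-symbol-per-phoneme transcriptions are illustrative assumptions, and a real analysis would draw on a curated pronunciation dictionary rather than plain strings.

```python
from itertools import combinations

# Toy lexicon of phonemic transcriptions (one symbol per phoneme here,
# purely for illustration; a real study would use a full dictionary).
lexicon = {
    "pat": "pæt",
    "bat": "bæt",
    "let": "lɛt",
    "lit": "lɪt",
    "led": "lɛd",
    "bad": "bæd",
}

def is_minimal_pair(a: str, b: str) -> bool:
    """True if two transcriptions have equal length and differ in
    exactly one segment (the commutation test)."""
    if len(a) != len(b):
        return False
    return sum(1 for x, y in zip(a, b) if x != y) == 1

pairs = [
    (w1, w2)
    for (w1, t1), (w2, t2) in combinations(lexicon.items(), 2)
    if is_minimal_pair(t1, t2)
]
print(pairs)
# For example, ('pat', 'bat') differ only in /p/ versus /b/, and
# ('let', 'lit') only in the vowel, which is evidence that those
# segments are separate phonemes.
```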
https://en.wikipedia.org/wiki?curid=20239
Miranda (moon) Miranda, also designated Uranus V, is the smallest and innermost of Uranus's five round satellites. It was discovered by Gerard Kuiper on 16 February 1948 at McDonald Observatory in Texas, and named after Miranda from William Shakespeare's play "The Tempest". Like the other large moons of Uranus, Miranda orbits close to its planet's equatorial plane. Because Uranus orbits the Sun on its side, Miranda's orbit is perpendicular to the ecliptic and shares Uranus' extreme seasonal cycle. At just 470 km in diameter, Miranda is one of the smallest closely observed objects in the Solar System that might be in hydrostatic equilibrium (spherical under its own gravity). The only close-up images of Miranda are from the "Voyager 2" probe, which made observations of Miranda during its Uranus flyby in January 1986. During the flyby, Miranda's southern hemisphere pointed towards the Sun, so only that part was studied. Miranda probably formed from an accretion disc that surrounded the planet shortly after its formation, and, like other large moons, it is likely differentiated, with an inner core of rock surrounded by a mantle of ice. Miranda has one of the most extreme and varied topographies of any object in the Solar System, including Verona Rupes, a 20-kilometer-high scarp that is the highest cliff in the Solar System, and chevron-shaped tectonic features called "coronae". The origin and evolution of this varied geology, the most of any Uranian satellite, are still not fully understood, and multiple hypotheses exist regarding Miranda's evolution. Miranda was discovered on 16 February 1948 by planetary astronomer Gerard Kuiper using the McDonald Observatory's Otto Struve Telescope. Its motion around Uranus was confirmed on 1 March 1948. It was the first satellite of Uranus discovered in nearly 100 years. Kuiper elected to name the object "Miranda" after the character in Shakespeare's "The Tempest", because the four previously discovered moons of Uranus, Ariel, Umbriel, Titania and Oberon, had all been named after characters of Shakespeare or Alexander Pope. However, the previous moons had been named specifically after fairies, whereas Miranda was a human. Subsequently, discovered satellites of Uranus were named after characters from Shakespeare and Pope, whether fairies or not. The moon is also designated Uranus V. Of Uranus's five round satellites, Miranda orbits closest to it, at roughly 129,000 km from the surface; about a quarter again as far as its most distant ring. Its orbital period is 34 hours, and, like that of the Moon, is synchronous with its rotation period, which means it always shows the same face to Uranus, a condition known as tidal locking. Miranda's orbital inclination (4.34°) is unusually high for a body so close to its planet, and roughly ten times that of the other major Uranian satellites. The reason for this is still uncertain; there are no mean-motion resonances between the moons that could explain it, leading to the hypothesis that the moons occasionally pass through secondary resonances, which at some point in the past led to Miranda being locked for a time into a 3:1 resonance with Umbriel, before chaotic behaviour induced by the secondary resonances moved it out of it again. In the Uranian system, due to the planet's lesser degree of oblateness, and the larger relative size of its satellites, escape from a mean-motion resonance is much easier than for satellites of Jupiter or Saturn. 
Miranda's orbit is the most inclined of any of Uranus's large satellites: at 4.232°, it is 10–20 times that of Titania, Ariel and Umbriel and 73 times that of Oberon. At 1.2 g/cm3, Miranda is the least dense of Uranus's round satellites. That density suggests a composition of more than 60% water ice. Miranda's surface may be mostly water ice, though it is far rockier than its corresponding satellites in the Saturn system, indicating that heat from radioactive decay may have led to internal differentiation, allowing silicate rock and organic compounds to settle in its interior. Miranda is too small for any internal heat to have been retained over the age of the Solar System. Miranda is the least spherical of Uranus's satellites, with an equatorial diameter 3% wider than its polar diameter. Only water has been detected so far on Miranda's surface, though it has been speculated that methane, ammonia, carbon monoxide or nitrogen may also exist at 3% concentrations. These bulk properties are similar to Saturn's moon Mimas, though Mimas is smaller, less dense, and more oblate. Precisely how a body as small as Miranda could have enough internal energy to produce the myriad geological features seen on its surface is not established with certainty, though the currently favoured hypothesis is that it was driven by tidal heating during a past time when it was in 3:1 orbital resonance with Umbriel. The resonance would have increased Miranda's orbital eccentricity to 0.1, and generated tidal friction due to the varying tidal forces from Uranus. As Miranda approached Uranus, tidal force increased; as it retreated, tidal force decreased, causing flexing that would have warmed Miranda's interior by 20 K, enough to trigger melting. The period of tidal flexing could have lasted for up to 100 million years. Also, if clathrate existed within Miranda, as has been hypothesised for the satellites of Uranus, it may have acted as an insulator, since it has a lower conductivity than water, increasing Miranda's temperature still further. Miranda may have also once been in a 5:3 orbital resonance with Ariel, which would have also contributed to its internal heating. However, the maximum heating attributable to the resonance with Umbriel was likely about three times greater. Due to Uranus's near-sideways orientation, only Miranda's southern hemisphere was visible to "Voyager 2" when it arrived. The observed surface has patchwork regions of broken terrain, indicating intense geological activity in Miranda's past, and is criss-crossed by huge canyons, believed to be the result of extensional tectonics; as liquid water froze beneath the surface, it expanded, causing the surface ice to split, creating graben. The canyons are hundreds of kilometers long and tens of kilometers wide. Miranda also has the largest known cliff in the Solar System, Verona Rupes, which has a height of about 20 kilometers. Some of Miranda's terrain is possibly less than 100 million years old based on crater counts, while sizeable regions possess crater counts that indicate ancient terrain. While crater counts suggest that the majority of Miranda's surface is old, with a similar geological history to the other Uranian satellites, few of those craters are particularly large, indicating that most must have formed after a major resurfacing event in its distant past. Craters on Miranda also appear to possess softened edges, which could be the result either of ejecta or of cryovolcanism.
The temperature at Miranda's south pole is roughly 85 K, a temperature at which pure water ice adopts the properties of rock. Also, the cryovolcanic material responsible for the surfacing is too viscous to have been pure liquid water, but too fluid to have been solid water. Rather, it is believed to have been a viscous, lava-like mixture of water and ammonia, which freezes at , or perhaps ethanol. Miranda's observed hemisphere contains three giant 'racetrack'-like grooved structures called coronae, each at least wide and up to deep, named Arden, Elsinore and Inverness after locations in Shakespeare's plays. Inverness is lower in altitude than the surrounding terrain (though domes and ridges are of comparable elevation), while Elsinore is higher. The relative sparsity of craters on their surfaces means they overlay the earlier cratered terrain. The coronae, which are unique to Miranda, initially defied easy explanation; one early hypothesis was that Miranda, at some time in its distant past (prior to any of the current cratering), had been completely torn to pieces, perhaps by a massive impact, and then reassembled in a random jumble. The heavier core material fell through the crust, and the coronae formed as the water re-froze. However, the currently favoured hypothesis is that they formed via extensional processes at the tops of diapirs, or upwellings of warm ice from within Miranda itself. The coronae are surrounded by rings of concentric faults with a similarly low crater count, suggesting these faults played a role in the formation of the coronae. If the coronae formed through downwelling from a catastrophic disruption, then the concentric faults would present as compressed. If they formed through upwelling, such as by diapirism, then they would be extensional tilt blocks, and present extensional features, as current evidence suggests they do. The concentric rings would have formed as ice moved away from the heat source. The diapirs may have changed the density distribution within Miranda, which could have caused Miranda to reorient itself, similar to a process believed to have occurred at Saturn's geologically active moon Enceladus. Evidence suggests the reorientation would have been as extreme as 60 degrees from the sub-Uranian point. The positions of all the coronae require a tidal heating pattern consistent with Miranda being solid, and lacking an internal liquid ocean. It is believed through computer modelling that Miranda may have an additional corona on the unimaged hemisphere. Miranda's apparent magnitude is +16.6, making it invisible to many amateur telescopes. Virtually all known information regarding its geology and geography was obtained during the flyby of Uranus made by "Voyager 2" on 25 January 1986. The closest approach of "Voyager 2" to Miranda was significantly less than its distances to all other Uranian moons. Of all the Uranian satellites, Miranda had the most visible surface. The discovery team had expected Miranda to resemble Mimas, and found themselves at a loss to explain the moon's unique geography in the 24-hour window before releasing the images to the press. In 2017, as part of its Planetary Science Decadal Survey, NASA evaluated the possibility of an orbiter to return to Uranus some time in the 2020s. Uranus was the preferred destination over Neptune due to favourable planetary alignments meaning shorter flight times.
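The inference reported earlier, that Miranda's bulk density of about 1.2 g/cm3 points to a composition of more than 60% water ice, can be reproduced with a simple two-component mixing estimate. The sketch below is only a rough illustration: it ignores porosity, compression and other ices, and the assumed component densities are round reference values rather than figures taken from the article.

```python
# Rough two-component estimate of Miranda's ice mass fraction from its
# bulk density. Assumed component densities are round reference values
# (water ice ~0.94 g/cm^3, silicate rock ~3.0 g/cm^3); porosity,
# compression and ammonia or other ices are ignored.

RHO_BULK = 1.2   # g/cm^3, Miranda's approximate bulk density
RHO_ICE = 0.94   # g/cm^3, assumed density of water ice
RHO_ROCK = 3.0   # g/cm^3, assumed density of silicate rock

def ice_mass_fraction(rho_bulk: float, rho_ice: float, rho_rock: float) -> float:
    """Solve 1/rho_bulk = f/rho_ice + (1 - f)/rho_rock for the ice
    mass fraction f (the component volumes are assumed simply to add)."""
    return (1 / rho_bulk - 1 / rho_rock) / (1 / rho_ice - 1 / rho_rock)

f_ice = ice_mass_fraction(RHO_BULK, RHO_ICE, RHO_ROCK)
print(f"estimated ice mass fraction: {f_ice:.0%}")   # about 68%
```

Under these assumptions the estimate lands at roughly two-thirds ice by mass, consistent with the "more than 60% water ice" figure quoted above.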
https://en.wikipedia.org/wiki?curid=20254
Mars in fiction Fictional representations of Mars have been popular for over a century. Interest in Mars has been stimulated by the planet's dramatic red color, by early scientific speculations that its surface conditions might be capable of supporting life, and by the possibility that Mars could be colonized by humans in the future. Almost as popular as stories about Mars are stories about Martians engaging in activity (frequently invasions) away from their home planet. In the 20th century, actual spaceflights to the planet Mars, including seminal events such as the first artificial object to impact the surface of Mars in 1971, and then later the first landing of "the first mechanized device to successfully operate on Mars" in 1976 (in the Viking program by the United States), inspired a great deal of interest in Mars-related fiction. Exploration of the planet has continued in the 21st century on to the present day. The following works of fiction deal with the planet itself, with any assumed Martian civilization as part of its planetary landscape. Mars has been seen as lying at the perfect distance from Earth to inspire ideas of different forms of life, and this allowed early works to fuel speculation about what Mars might hold. Mars as a subject of science fiction can be traced to Giovanni Schiaparelli's observations in 1877, and ideas about the planet have grown and changed as new information reshaped the way Mars is portrayed in the science fiction world. Several early modern writers, including Athanasius Kircher (1602–1680) and Emanuel Swedenborg (1688–1772), hypothesized contact with Mars. Early science fiction about Mars often involved the first voyages to the planet, sometimes as an invasion force, more often for the purposes of exploration. By the 1930s, stories about reaching Mars had become somewhat trite, and the focus shifted to Mars as an alien landscape. In the following stories, human contact and basic exploration had taken place sometime in the past; Mars is a setting rather than a goal. Mariner 4 in July 1965 found that Mars—contrary to expectations—is heavily cratered, with a very thin atmosphere. No canals were found; while scientists did not believe that Mars was a moist planet, the lack of surface water surprised them. Science fiction had so influenced real explorations of the planet, however—Carl Sagan was among the many fans who became scientists—that after Mariner 9 in 1971-1972, craters were named after Wells, Burroughs, and other authors. The Mariner and Viking space probes confirmed that the Martian environment is extremely hostile to life. By the 1970s, the ideas of canals and ancient civilizations had to be abandoned. Authors soon began writing stories based on the new Mars (frequently treating it as a desert planet). Most of these works feature humans struggling to tame the planet, and some of them refer to terraforming (using technology to transform a planet's environment to be Earthlike). A common theme, particularly among American writers, is that of a Martian colony fighting for independence from Earth. It appeared as early as Heinlein's "Red Planet" and is a major plot element in Greg Bear's "Moving Mars" and Kim Stanley Robinson's "Mars" trilogy. It is also part of the plot of the movie "Total Recall" and the television series "Babylon 5". Many video games also use this concept, such as the "Red Faction" and "Zone of the Enders" series. A historical rebellion of Mars against Earth is also mentioned in the "Star Trek" series of novels, which are not considered canon.
In the decades following Mariner and Apollo, the once-popular subgenre of realistic stories about a first expedition to Mars fell out of fashion, possibly due to the failure of the Apollo Program to continue on to Mars. The early 1990s saw a revival and re-envisioning of realistic novels about Mars expeditions. Early novels in this renaissance were Jack Williamson's novel "Beachhead" and Ben Bova's novel "Mars" (both 1992), which envisioned large-scale expeditions to Mars according to the thinking of the 1990s. These were followed by Gregory Benford's "The Martian Race" (1999), Geoffrey A. Landis's "Mars Crossing" (2000), and Robert Zubrin's "First Landing" (2002), which took as their starting points the smaller and more focused expedition strategies evolved in the late 1990s, mostly building on the concepts of Mars Direct. Several post-Mariner works are homages to the older phase of Mars fiction, circumventing the scientific picture of a dry and lifeless Mars with an unbreathable atmosphere through such science fiction generic staples as positing its future terraforming, or creating alternate history versions of Mars, where Burroughs' Barsoom, Bradbury's "Martian Chronicles" or "The War of the Worlds" are literal truth. Nostalgia for the older Mars also frequently appears in comics and role-playing games, particularly of the steampunk genre. Mission: SPACE is a ride at Walt Disney World's Epcot theme park about astronauts training in a flight simulation of the first mission to Mars. Along the way, riders experience changes in g-force, cryosleep, and a meteor shower. In the following works of fiction, the Martian setting is of secondary importance to the work as a whole. The "Doctor Who" television series has Mars as the uninhabitable homeworld of the Ice Warriors, a recurring adversary of the Second and Third Doctors from 1967 to 1974. In "Pyramids of Mars" (1975), the Fourth Doctor defeats Sutekh, last of the Osirians, who had been imprisoned for his crimes beneath a pyramid, with a signal to keep him paralyzed sent from a Martian pyramid. In "The Waters of Mars" (2009), an episode set on the planet itself, the Tenth Doctor implies that the Ice Warriors have become extinct. (This episode also introduces a viral, water-borne Martian named the Flood.) The episode is set in 2059, and implies that the first human colony on Mars will arrive in 2057, two years before the episode is set (as told in dialogue). The Martian is a favorite character of classical science fiction; he was frequently found away from his home planet, often invading Earth, but sometimes simply a lonely character representing alienness from his surroundings. Martians, other than human beings transplanted to Mars, became rare in fiction after Mariner, except in exercises of deliberate nostalgia – more frequently in some genres, such as comics and animation, than in written literature.
https://en.wikipedia.org/wiki?curid=20257
McIntosh (apple) The McIntosh ( ), McIntosh Red, or colloquially the Mac is an apple cultivar, the national apple of Canada. The fruit has red and green skin, a tart flavour, and tender white flesh, which ripens in late September. In the 20th century it was the most popular cultivar in Eastern Canada and New England, and is considered an all-purpose apple, suitable both for cooking and eating raw. Apple Inc. employee Jef Raskin named the Macintosh line of personal computers after the fruit. John McIntosh discovered the original McIntosh sapling on his Dundela farm in Upper Canada in 1811. He and his wife bred it, and the family started grafting the tree and selling the fruit in 1835. In 1870, it entered commercial production, and became common in northeastern North America after 1900. While still important in production, the fruit's popularity fell in the early 21st century in the face of competition from varieties such as the Gala. According to the US Apple Association website it is one of the fifteen most popular apple cultivars in the United States. The McIntosh or McIntosh Red (nicknamed the "Mac"), is the most popular apple cultivar in eastern Canada and the northeastern United States. It also sells well in eastern Europe. A spreading tree that is moderately vigorous, the McIntosh bears annually or in alternate years. The tree is hardy to at least USDA Hardiness zone 4a, or . 50% or more of its flowers die at or below. The McIntosh apple is a small- to medium-sized round fruit with a short stem. It has a red and green skin that is thick, tender, and easy to peel. Its white flesh is sometime tinged with green or pink and is juicy, tender, and firm, soon becoming soft. The flesh is easily bruised. The fruit is considered "all-purpose", suitable both for eating raw and for cooking. It is used primarily for dessert, and requires less time to cook than most cultivars. It is usually blended when used for juice. The fruit grows best in cool areas where nights are cold and autumn days are clear; otherwise, it suffers from poor colour and soft flesh, and tends to fall from the tree before harvest. It stores for two to three months in air, but is prone to scald, flesh softening, chilling sensitivity, and coprinus rot. It can become mealy when stored at temperatures below . The fruit is optimally stored in a controlled atmosphere in which temperatures are between , and air content is 1.5–4.5% oxygen and 1–5% carbon dioxide; under such conditions, the McIntosh will keep for five to eight months. The McIntosh is most commonly cultivated in Canada, the United States, and eastern Europe. The parentage of the McIntosh is unknown, but the Snow Apple (or Fameuse), Fall St Lawrence, and Alexander have been speculated. It is one of the top five apple cultivars used in cloning, and research indicates the McIntosh combines well for winter hardiness. If unsprayed, the McIntosh succumbs easily to apple scab, which may lead to entire crops being unmarketable. It has generally low susceptibility to fire blight, powdery mildew, cedar-apple rust, quince rust, and hawthorn rust. It is susceptible to fungal diseases such as "Nectria" canker, brown rot, black rot, race 1 of apple rust (but resists race 2). It is moderately resistant to "Pezicula" bark rot and "Alternaria" leaf blotch, and resists brown leaf spot well. The McIntosh is one of the most common cultivars used in cloning; a 1996 study found that the McIntosh was a parent in 101 of 439 cultivars selected, more than any other founding clone. 
It was used in over half of the Canadian cultivars selected, and was used extensively in the United States and Eastern Europe as well; rarely was it used elsewhere. Offspring of the McIntosh include: the Jersey Black hybrid the Macoun, the Newtown Pippin hybrid the Spartan, the Cortland; the Empire; the Jonamac, the Jersey Mac, the Lobo, the Melba, the Summered, the Tydeman's Red, and possibly the Paula Red. Apple trees were introduced to Canada at the Habitation at Port-Royal (modern Port Royal, Annapolis County, Nova Scotia) as early as 1606 by French settlers. Following its introduction, apple cultivation spread inland. The McIntosh's discoverer, John McIntosh (1777 – ), left his native Mohawk Valley home in New York State in 1796 to follow his love, Dolly Irwin, who had been taken to Upper Canada by her Loyalist parents. She had died by the time he found her, but he settled as a farmer in Upper Canada. He married Hannah Doran in 1801, and they farmed along the Saint Lawrence River until 1811, when McIntosh exchanged the land he had with his brother-in-law Edward Doran for a plot in Dundela. While clearing the overgrown plot McIntosh discovered some wild apple seedlings on his farm. He transplanted the seedlings next to his house. One of the seedlings bore particularly good fruit. The McIntosh grandchildren dubbed the fruit it produced "Granny's apple", as they often saw their grandmother taking care of the tree in the orchard. McIntosh was selling seedlings from the tree by 1820, but they did not produce fruit of the quality of the original. John McIntosh's son Allan (1815–1899) learned grafting about 1835; with this cloning the McIntoshes could maintain the distinctive properties of the fruit of the original tree. Allan and brother Sandy (1825–1906), nicknamed "Sandy the Grafter", increased production and promotion of the cultivar. Earliest sales were in 1835, and in 1836 the cultivar was renamed the "McIntosh Red"; it entered commercial production in 1870. The apple became popular after 1900, when the first sprays for apple scab were developed. A house fire damaged the original McIntosh tree in 1894; it last produced fruit in 1908, and died and fell over in 1910. Horticulturist William Tyrrell Macoun of the Central Experimental Farm in Ottawa is credited with popularizing the McIntosh in Canada. He stated the McIntosh needed "no words of praise", that it was "one of the finest appearing and best dessert apples grown". The Macoun, a hybrid of the McIntosh and Jersey Black grown by the Agricultural Experiment Station in Geneva, NY, was named for him in 1923. In the northeastern United States the McIntosh replaced a large number of Baldwins that were killed in a severe winter in 1933–34. In the late 1940s, Canadian ambassador to the United Nations Andrew McNaughton told Soviet Minister for Foreign Affairs Andrei Gromyko that the McIntosh Red was Canada's best apple. The McIntosh made up 40% of the Canadian apple market by the 1960s; and at least thirty varieties of McIntosh hybrid were known by 1970. Its popularity later waned in the face of competition from foreign imports; in the first decade of the 21st century, the Gala accounted for 33% of the apple market in Ontario to the McIntosh's 12%, and the Northern Spy had become the preferred apple for pies. Production remained important to Ontario, however, as of McIntoshes were produced in 2010. The original tree discovered by John McIntosh bore fruit for more than ninety years, and died in 1910. 
Horticulturalists from the Upper Canada Village heritage park saved cuttings from the last known first-generation McIntosh graft before it died in 2011 for producing clones. The McIntosh has been designated the national apple of Canada. A popular subscription funded a plaque placed from the original McIntosh tree in 1912. The Ontario Archaeological and Historic Sites Board replaced the plaque with a more descriptive one in 1962, and the Historic Sites and Monuments Board of Canada put up another in a park nearby in 2001, by a painted mural commemorating the fruit. Apple Inc. employee Jef Raskin named the Macintosh line of personal computers after the McIntosh. He deliberately misspelled the name to avoid conflict with the hi-fi equipment manufacturer McIntosh Laboratory. Apple's attempt in 1982 to trademark the name Macintosh was nevertheless denied due to the phonetic similarity between Apple's product and the name of the hi-fi manufacturer. Apple licensed the rights to the name in 1983, and bought the trademark in 1986. In 1995 the Royal Canadian Mint commissioned Toronto artist Roger Hill to design a commemorative silver dollar for release in 1996. Mint engraver Sheldon Beveridge engraved the image of a group of three McIntoshes and a McIntosh blossom which adorn one side with a ribbon naming the variety. An inscription on the edge reads "1796 Canada Dollar 1996". Issued sheathed in a silver cardboard sleeve in a black leatherette case, 133,779 pieces of the proof were sold, as well as 58,834 pieces of the uncirculated version in a plastic capsule and silver sleeve.
https://en.wikipedia.org/wiki?curid=20258
Machete A machete (; ) is a broad blade used either as an implement like an axe, or in combat like a short sword. The blade is typically long and usually under thick. In the Spanish language, the word is a diminutive form of the word "macho", which was used to refer to sledgehammers. In the English language, an equivalent term is matchet, though it is less commonly used. In the English-speaking Caribbean, such as Jamaica, Barbados, Guyana, and Grenada and in Trinidad and Tobago, the term "cutlass" is used for these agricultural tools. In various tropical and subtropical countries, the machete is frequently used to cut through rainforest undergrowth and for agricultural purposes (e.g. cutting sugar cane). Besides this, in Latin America a common use is for such household tasks as cutting large foodstuffs into pieces—much as a cleaver is used—or to perform crude cutting tasks, such as making simple wooden handles for other tools. It is common to see people using machetes for other jobs, such as splitting open coconuts, yard work, removing small branches and plants, chopping animals' food, and clearing bushes. Machetes are often considered tools and used by adults. However, many hunter–gatherer societies and cultures surviving through subsistence agriculture begin teaching babies to use sharp tools, including machetes, before their first birthdays. Because the machete is common in many tropical countries, it is often the weapon of choice for uprisings. For example, the Boricua Popular Army are unofficially called "macheteros" because of the machete-wielding laborers of sugar cane fields of past Puerto Rico. Many of the killings in the 1994 Rwandan genocide were performed with machetes, and they were the primary weapon used by the Interahamwe militias there. Machetes were also a distinctive tool and weapon of the Haitian "Tonton Macoute". In 1762, the Kingdom of Great Britain invaded Cuba in the Battle of Havana, and peasant guerrillas led by Pepe Antonio, a Guanabacoa councilman, used machetes in the defense of the city. The machete was also the most iconic weapon during the independence wars in that country (1868–1898), although it saw limited battlefield use. Carlos Manuel de Céspedes, owner of the sugar refinery "La Demajagua" near Manzanillo, freed his slaves on 10 October 1868. He proceeded to lead them, armed with machetes, in revolt against the Spanish government. The first cavalry charge using machetes as the primary weapon was carried out on 4 November 1868 by Máximo Gómez, a sergeant born in the Dominican Republic, who later became the general in chief of the Cuban Army. The machete was (and still is) a common side arm and tool for many ethnic groups in West Africa. Machetes in this role are referenced in Chinua Achebe's "Things Fall Apart". Some countries have a name for the blow of a machete; the Spanish "machetazo" is sometimes used in English. In the British Virgin Islands, Grenada, Jamaica, Saint Kitts and Nevis, Barbados, Saint Lucia, and Trinidad and Tobago, the word "planass" means to hit someone with the flat of the blade of a machete or cutlass. To strike with the sharpened edge is to "chop". Throughout the Caribbean, the term 'cutlass' refers to a laborers' cutting tool. The Brazilian Army's Instruction Center on Jungle Warfare developed a machete with a blade in length and a very pronounced clip point. 
This machete is issued with a 5-inch Bowie knife and a sharpening stone in the scabbard; collectively called a "jungle kit" ("Conjunto de Selva" in Portuguese); it is manufactured by Indústria de Material Bélico do Brasil (IMBEL). Many fictitious slashers have used it as a weapon in horror and fighting movies, the most well known and notorious being Jason Voorhees, from the "Friday the 13th" movie series and Quincy, from the Downtown Defenders (franchise). The tsakat is used primarily in southern Armenia and Artsakh when clearing areas or hiking. It's especially well suited for clearing the plentiful blackberry plants in these regions. The "panga" or "tapanga" is a variant used in East and Southern Africa. This name may be of Swahili etymology; not to be confused with the Panga fish. The "panga" blade broadens on the backside and has a length of . The upper inclined portion of the blade may be sharpened. This tool has been used as a weapon: during the Mau Mau Uprising; in the Rwandan Genocide; in South Africa particularly in the 1980s and early 1990s when the former province of Natal was wracked by conflict between the African National Congress and the Zulu-nationalist Inkatha Freedom Party. In the Philippines, the "bolo" is a very similar tool, but with the blade swelling just before the tip to make the knife even more efficient for chopping. Variations include the longer and more pointed "iták" intended for combat; this was used during the Philippine Revolution against the Spanish colonial authorities, later becoming a signature weapon of guerrillas in the Philippine–American War. Filipinos still use the "bolo" for everyday tasks, such as clearing vegetation and chopping various large foodstuffs. These are also commonly found in most Filipino kitchens, with some sets displayed on the walls and other sets for less practical use. The "bolo" is also used in training in "eskrima", the indigenous martial art of the Philippines. Other similar tools include the "parang" and the "golok" (from Malaysia and Indonesia); however, these tend to have shorter, thicker blades with a primary grind, and are more effective on woody vegetation. The Nepalese "kukri" is a curved blade that is often used for similar tasks. In Thailand, more variations exist, such as the "e-nep", or "nep", which translates as "leaf" (มีดเหน็บ). It may resemble some forms of Muslim blades like the "jambiya", or the Nepali "khukuri", having aspects of both with the up-swept tip and protruding belly. Another design found in Thailand is the "e-toh", which is prominent in Southern China, Laos, and other northern parts of South East Asia. Generally, "e-tohs" must have forward weighted tips, and are used around the home for splitting stove wood or chopping bone. The Chinese "dao", with its triangular tip, is found in Thailand as the "hua-tad" (หัวแตด), which translates roughly as "head chopper". The most common blade in Thailand is called the "pra", (พร้า) it can describe long straight designs, or billhook designs. The primary purpose of a "pra" is farm work and clearing vegetation. In the various regions of Ecuador, it is still used as an everyday tool in agricultural labors, such as clearing, chopping, cutting and felling. In the Pacific coast region, the machete has a long history of use and can be seen as part of the everyday dress of the rural male inhabitants, especially in the provinces of Manabi, Los Rios and Guayas. 
In its day, the machete and the skills related to it were seen as a token of manliness, and it was carried, sword-like, in ornamented sheaths made out of leather or in sashes around the waist. Its use was not limited to agriculture: it also had a double role as a ready-to-hand weapon for self-defense or attack. Although modern laws in Ecuador now prohibit its use as a weapon, there are still cases of vicious fighting or intimidation related to it. Being a part of the male dress, it also has a part in the cultural expressions of the coastal rural regions of Ecuador, such as dances, horse taming contests and skill exhibitions. In the southern Brazilian state of Rio Grande do Sul, machetes of Spanish make are widely used. The machete is used there to clear paths through the bush, and was used to fight against the Brazilian Empire in the Ragamuffin War. There, the machete is called "facão" or "facón" (literally "big knife"). Today, this region has a dance called the "dança dos facões" (machetes' dance) in which the dancers, who are always men, knock their machetes together while dancing, simulating a battle. "Maculelê", an Afro-Brazilian dance and martial art, can also be performed with "facões". This practice began in the city of Santo Amaro, Bahia, in the northeastern part of the country. In southern Mexico and Central America, the machete is widely used to clear bush, and often hundreds of "macheteros" are contracted to assist in clearing paths for the construction of new roads or structures. Many people in the rural regions own machetes to clear the constant overgrowth of jungle bush. In the recent drug cartel wars of the region, many homicides and decapitations are suspected of being committed with machetes or similar tools. The "taiga" is a machete of Russian origin that combines the functions of machetes, axes, knives, saws, and shovels into one tool. It is easily distinguished by the large swell at the end of the blade to facilitate chopping. The "taiga" is used by military air and special forces, including the "Spetsnaz". The modern machete is very similar to some forms of the medieval falchion, a short sword popular from the 13th century onwards. The cutting edge of the falchion is curved, widening toward the point, and has a straight, unsharpened back edge. The machete differs from the falchion mainly in the lack of a guard and a simpler hilt, though some machetes do have a guard for greater hand protection during work. The "kopis" is an ancient Greek weapon comparable to the machete. The "makhaira" is also similar, but was intended primarily to be a weapon rather than a tool. The "seax" is a Germanic weapon that is also similar in function, although different in shape. The "kukri" is a Nepalese curved blade used for many purposes similar to the machete. The "parang" is a Malaysian knife that many machetes are based on. The "grosse messer" is a large "medieval" knife, employed both as a tool and as a weapon. The "dao" is a traditional Chinese weapon resembling the machete. It is also known as "The General of All Weapons". The fascine knife is a somewhat similar tool and weapon used by European armies throughout the late 18th to early 20th centuries. The Spanish Army called its fascine knives "machetes". Whereas infantry were usually issued short sabres as side arms, engineers and artillerymen often received fascine knives, as besides being side arms they also served as useful tools for the construction of fortifications and other utilitarian tasks.
They differ from machetes in that they generally have far thicker, tapered blades optimized for chopping European vegetation (the thin, flat blade of the machete is better for soft plants found in tropical environments), sword-like hilts and guards, and sometimes a sawback-blade. Some later models could be fixed to rifles as bayonets as well. The katana, typically acquired through trade, was used by the Ainu people in a machete-like fashion rather than a weapon as it was originally intended to be. Both the materials used and the shape of the machete itself are important to make a good machete. In the past, the most famous manufacturer of machetes in Latin America and the Spanish-speaking Caribbean was Collins Company of Collinsville, Connecticut. The company was founded as Collins & Company in 1826 by Samuel W. Collins to make axes. Its first machetes were sold in 1845 and became so famous that all good machetes were called "un Collins". In the English-speaking Caribbean, Robert Mole & Sons of Birmingham, England, was long considered the manufacturer of agricultural cutlasses of the best quality. Some Robert Mole blades survive as souvenirs of travelers to Trinidad, Jamaica, and, less commonly, St. Lucia. Since the 1950s, however, manufacturing shortcuts have resulted in a quality decline of machetes. Today, most modern factory-made machetes are of very simple construction, consisting of a blade and full-length tang punched from a single piece of flat steel plate of uniform thickness (and thus lack a primary grind), and a simple grip of two plates of wood or plastic bolted or riveted together around the tang. Finally, both sides are ground down to a rough edge so that the purchaser can sharpen the blade to their specific geometry using a file. These machetes are occasionally provided with a simple cord loop as a sort of lanyard, and a canvas scabbard—although in some regions where machetes are valuable, commonly used tools, the users may make decorative leather scabbards for them. Toughness is important because of the twisting and impact forces that the relatively thin blade may encounter, while edge retention is secondary. Medium to high carbon spring steels, such as 1050 to 1095, are well suited to this application (with better machetes using the latter), and are relatively easy to sharpen. Most stainless steel machetes should be avoided, as many high-carbon stainless-steel machetes cannot stand up to repeated impacts, and will easily break if abused. In comparison to most other knives, which are commonly heat treated to a very high degree of hardness, many machete blades are tempered to maximum toughness, often nearly spring tempered. This results in a tougher blade, more resistant to chipping and breaking, with an edge that is easier to sharpen but does not retain sharpness as well, due to its lower hardness. A properly constructed machete will have a convex or flat primary bevel from the spine to the edge, which is formed by a secondary bevel. Better machetes will also have a slight distal taper. Colombia is the largest exporter of machetes worldwide. The flag of Angola features a machete, along with a cog-wheel. The machete is also a performance weapon used in variations of the Brazilian martial dance called "maculelê", often practiced by practitioners of "capoeira". Machetes are a distinctive characteristic in the folkloric dances of the state of Nayarit. 
It is also seen in the state of Durango, in the folkloric dance called "Danza de los Machetes", which consists of blindfolded dancers juggling machetes and pitching them at increasing speeds between one another. Traditional forms of fencing with machetes include grima in Colombia, "Juego del garrote" in Venezuela, and "tire machèt" in Haiti.
https://en.wikipedia.org/wiki?curid=20261
Mushroom A mushroom or toadstool is the fleshy, spore-bearing fruiting body of a fungus, typically produced above ground, on soil, or on its food source. The standard for the name "mushroom" is the cultivated white button mushroom, "Agaricus bisporus"; hence the word "mushroom" is most often applied to those fungi (Basidiomycota, Agaricomycetes) that have a stem (stipe), a cap (pileus), and gills (lamellae, sing. lamella) on the underside of the cap. "Mushroom" also describes a variety of other gilled fungi, with or without stems, therefore the term is used to describe the fleshy fruiting bodies of some Ascomycota. These gills produce microscopic spores that help the fungus spread across the ground or its occupant surface. Forms deviating from the standard morphology usually have more specific names, such as "bolete", "puffball", "stinkhorn", and "morel", and gilled mushrooms themselves are often called "agarics" in reference to their similarity to "Agaricus" or their order Agaricales. By extension, the term "mushroom" can also refer to either the entire fungus when in culture, the thallus (called a mycelium) of species forming the fruiting bodies called mushrooms, or the species itself. The terms "mushroom" and "toadstool" go back centuries and were never precisely defined, nor was there consensus on application. During the 15th and 16th centuries, the terms "mushrom, mushrum, muscheron, mousheroms, mussheron, or musserouns" were used. The term "mushroom" and its variations may have been derived from the French word "mousseron" in reference to moss ("mousse"). Delineation between edible and poisonous fungi is not clear-cut, so a "mushroom" may be edible, poisonous, or unpalatable. Cultural or social phobias of mushrooms and fungi may be related. The term "fungophobia" was coined by William Delisle Hay of England, who noted a national superstition or fear of "toadstools". The word "toadstool" has apparent analogies in Dutch "padde(n)stoel" (toad-stool/chair, mushroom) and German "Krötenschwamm" (toad-fungus, alt. word for panther cap). In German folklore and old fairy tales, toads are often depicted sitting on toadstool mushrooms and catching, with their tongues, the flies that are said to be drawn to the "Fliegenpilz", a German name for the toadstool, meaning "flies' mushroom". This is how the mushroom got another of its names, "Krötenstuhl" (a less-used German name for the mushroom), literally translating to "toad-stool". Identifying mushrooms requires a basic understanding of their macroscopic structure. Most are Basidiomycetes and gilled. Their spores, called basidiospores, are produced on the gills and fall in a fine rain of powder from under the caps as a result. At the microscopic level, the basidiospores are shot off basidia and then fall between the gills in the dead air space. As a result, for most mushrooms, if the cap is cut off and placed gill-side-down overnight, a powdery impression reflecting the shape of the gills (or pores, or spines, etc.) is formed (when the fruit body is sporulating). The color of the powdery print, called a spore print, is used to help classify mushrooms and can help to identify them. Spore print colors include white (most common), brown, black, purple-brown, pink, yellow, and creamy, but almost never blue, green, or red. 
While modern identification of mushrooms is quickly becoming molecular, the standard methods for identification are still used by most and have developed into a fine art harking back to medieval times and the Victorian era, combined with microscopic examination. The presence of juices upon breaking, bruising reactions, odors, tastes, shades of color, habitat, habit, and season are all considered by both amateur and professional mycologists. Tasting and smelling mushrooms carries its own hazards because of poisons and allergens. Chemical tests are also used for some genera. In general, identification to genus can often be accomplished in the field using a local mushroom guide. Identification to species, however, requires more effort; one must remember that a mushroom develops from a button stage into a mature structure, and only the latter can provide certain characteristics needed for the identification of the species. However, over-mature specimens lose features and cease producing spores. Many novices have mistaken humid water marks on paper for white spore prints, or discolored paper from oozing liquids on lamella edges for colored spored prints. Typical mushrooms are the fruit bodies of members of the order Agaricales, whose type genus is "Agaricus" and type species is the field mushroom, "Agaricus campestris". However, in modern molecularly defined classifications, not all members of the order Agaricales produce mushroom fruit bodies, and many other gilled fungi, collectively called mushrooms, occur in other orders of the class Agaricomycetes. For example, chanterelles are in the Cantharellales, false chanterelles such as "Gomphus" are in the Gomphales, milk-cap mushrooms ("Lactarius", "Lactifluus") and russulas ("Russula"), as well as "Lentinellus", are in the Russulales, while the tough, leathery genera "Lentinus" and "Panus" are among the Polyporales, but "Neolentinus" is in the Gloeophyllales, and the little pin-mushroom genus, "Rickenella", along with similar genera, are in the Hymenochaetales. Within the main body of mushrooms, in the Agaricales, are common fungi like the common fairy-ring mushroom, shiitake, enoki, oyster mushrooms, fly agarics and other Amanitas, magic mushrooms like species of "Psilocybe", paddy straw mushrooms, shaggy manes, etc. An atypical mushroom is the lobster mushroom, which is a deformed, cooked-lobster-colored parasitized fruitbody of a "Russula" or "Lactarius", colored and deformed by the mycoparasitic Ascomycete "Hypomyces lactifluorum". Other mushrooms are not gilled, so the term "mushroom" is loosely used, and giving a full account of their classifications is difficult. Some have pores underneath (and are usually called boletes), others have spines, such as the hedgehog mushroom and other tooth fungi, and so on. "Mushroom" has been used for polypores, puffballs, jelly fungi, coral fungi, bracket fungi, stinkhorns, and cup fungi. Thus, the term is more one of common application to macroscopic fungal fruiting bodies than one having precise taxonomic meaning. Approximately 14,000 species of mushrooms are described. A mushroom develops from a nodule, or pinhead, less than two millimeters in diameter, called a primordium, which is typically found on or near the surface of the substrate. It is formed within the mycelium, the mass of threadlike hyphae that make up the fungus. The primordium enlarges into a roundish structure of interwoven hyphae roughly resembling an egg, called a "button". 
The button has a cottony roll of mycelium, the universal veil, that surrounds the developing fruit body. As the egg expands, the universal veil ruptures and may remain as a cup, or volva, at the base of the stalk, or as warts or volval patches on the cap. Many mushrooms lack a universal veil, therefore they do not have either a volva or volval patches. Often, a second layer of tissue, the partial veil, covers the bladelike gills that bear spores. As the cap expands, the veil breaks, and remnants of the partial veil may remain as a ring, or annulus, around the middle of the stalk or as fragments hanging from the margin of the cap. The ring may be skirt-like as in some species of "Amanita", collar-like as in many species of "Lepiota", or merely the faint remnants of a cortina (a partial veil composed of filaments resembling a spiderweb), which is typical of the genus "Cortinarius". Mushrooms lacking partial veils do not form an annulus. The stalk (also called the stipe, or stem) may be central and support the cap in the middle, or it may be off-center and/or lateral, as in species of "Pleurotus" and "Panus". In other mushrooms, a stalk may be absent, as in the polypores that form shelf-like brackets. Puffballs lack a stalk, but may have a supporting base. Other mushrooms, such as truffles, jellies, earthstars, and bird's nests, usually do not have stalks, and a specialized mycological vocabulary exists to describe their parts. The way the gills attach to the top of the stalk is an important feature of mushroom morphology. Mushrooms in the genera "Agaricus", "Amanita", "Lepiota" and "Pluteus", among others, have free gills that do not extend to the top of the stalk. Others have decurrent gills that extend down the stalk, as in the genera "Omphalotus" and "Pleurotus". There are a great number of variations between the extremes of free and decurrent, collectively called attached gills. Finer distinctions are often made to distinguish the types of attached gills: adnate gills, which adjoin squarely to the stalk; notched gills, which are notched where they join the top of the stalk; adnexed gills, which curve upward to meet the stalk, and so on. These distinctions between attached gills are sometimes difficult to interpret, since gill attachment may change as the mushroom matures, or with different environmental conditions. A hymenium is a layer of microscopic spore-bearing cells that covers the surface of gills. In the nongilled mushrooms, the hymenium lines the inner surfaces of the tubes of boletes and polypores, or covers the teeth of spine fungi and the branches of corals. In the Ascomycota, spores develop within microscopic elongated, sac-like cells called asci, which typically contain eight spores in each ascus. The Discomycetes, which contain the cup, sponge, brain, and some club-like fungi, develop an exposed layer of asci, as on the inner surfaces of cup fungi or within the pits of morels. The Pyrenomycetes, tiny dark-colored fungi that live on a wide range of substrates including soil, dung, leaf litter, and decaying wood, as well as other fungi, produce minute, flask-shaped structures called perithecia, within which the asci develop. In the Basidiomycetes, usually four spores develop on the tips of thin projections called sterigmata, which extend from club-shaped cells called a basidia. The fertile portion of the Gasteromycetes, called a gleba, may become powdery as in the puffballs or slimy as in the stinkhorns. 
Interspersed among the asci are threadlike sterile cells called paraphyses. Similar structures called cystidia often occur within the hymenium of the Basidiomycota. Many types of cystidia exist, and assessing their presence, shape, and size is often used to verify the identification of a mushroom. The most important microscopic feature for identification of mushrooms is the spores. Their color, shape, size, attachment, ornamentation, and reaction to chemical tests often can be the crux of an identification. A spore often has a protrusion at one end, called an apiculus, which is the point of attachment to the basidium, termed the apical germ pore, from which the hypha emerges when the spore germinates. Many species of mushrooms seemingly appear overnight, growing or expanding rapidly. This phenomenon is the source of several common expressions in the English language including "to mushroom" or "mushrooming" (expanding rapidly in size or scope) and "to pop up like a mushroom" (to appear unexpectedly and quickly). In reality, all species of mushrooms take several days to form primordial mushroom fruit bodies, though they do expand rapidly by the absorption of fluids. The cultivated mushroom, as well as the common field mushroom, initially form a minute fruiting body, referred to as the pin stage because of their small size. Slightly expanded, they are called buttons, once again because of the relative size and shape. Once such stages are formed, the mushroom can rapidly pull in water from its mycelium and expand, mainly by inflating preformed cells that took several days to form in the primordia. Similarly, there are other mushrooms, like "Parasola plicatilis" (formerly "Coprinus plicatlis"), that grow rapidly overnight and may disappear by late afternoon on a hot day after rainfall. The primordia form at ground level in lawns in humid spaces under the thatch and after heavy rainfall or in dewy conditions balloon to full size in a few hours, release spores, and then collapse. They "mushroom" to full size. Not all mushrooms expand overnight; some grow very slowly and add tissue to their fruiting bodies by growing from the edges of the colony or by inserting hyphae. For example, "Pleurotus nebrodensis" grows slowly, and because of this combined with human collection, it is now critically endangered. Though mushroom fruiting bodies are short-lived, the underlying mycelium can itself be long-lived and massive. A colony of "Armillaria solidipes" (formerly known as "Armillaria ostoyae") in Malheur National Forest in the United States is estimated to be 2,400 years old, possibly older, and spans an estimated . Most of the fungus is underground and in decaying wood or dying tree roots in the form of white mycelia combined with black shoelace-like rhizomorphs that bridge colonized separated woody substrates. Raw brown mushrooms are 92% water, 4% carbohydrates, 2% protein and less than 1% fat. In a 100 gram (3.5 ounce) amount, raw mushrooms provide 22 calories and are a rich source (20% or more of the Daily Value, DV) of B vitamins, such as riboflavin, niacin and pantothenic acid, selenium (37% DV) and copper (25% DV), and a moderate source (10-19% DV) of phosphorus, zinc and potassium (table). They have minimal or no vitamin C and sodium content. The vitamin D content of a mushroom depends on postharvest handling, in particular the unintended exposure to sunlight. The US Department of Agriculture provided evidence that UV-exposed mushrooms contain substantial amounts of vitamin D. 
When exposed to ultraviolet (UV) light, even after harvesting, ergosterol in mushrooms is converted to vitamin D2, a process now used intentionally to supply fresh vitamin D mushrooms for the functional food grocery market. In a comprehensive safety assessment of producing vitamin D in fresh mushrooms, researchers showed that artificial UV light technologies were equally effective for vitamin D production as in mushrooms exposed to natural sunlight, and that UV light has a long record of safe use for production of vitamin D in food. Mushrooms are used extensively in cooking, in many cuisines (notably Chinese, Korean, European, and Japanese). Most mushrooms sold in supermarkets have been commercially grown on mushroom farms. The most popular of these, "Agaricus bisporus", is considered safe for most people to eat because it is grown in controlled, sterilized environments. Several varieties of "A. bisporus" are grown commercially, including whites, crimini, and portobello. Other cultivated species available at many grocers include "Hericium erinaceus", shiitake, maitake (hen-of-the-woods), "Pleurotus", and enoki. In recent years, increasing affluence in developing countries has led to a considerable growth in interest in mushroom cultivation, which is now seen as a potentially important economic activity for small farmers. China is a major edible mushroom producer. The country produces about half of all cultivated mushrooms, and around of mushrooms are consumed per person per year by 1.4 billion people. In 2014, Poland was the world's largest mushroom exporter, reporting an estimated annually. Separating edible from poisonous species requires meticulous attention to detail; there is no single trait by which all toxic mushrooms can be identified, nor one by which all edible mushrooms can be identified. People who collect mushrooms for consumption are known as mycophagists, and the act of collecting them for such is known as mushroom hunting, or simply "mushrooming". Even edible mushrooms may produce allergic reactions in susceptible individuals, from a mild asthmatic response to severe anaphylactic shock. Even the cultivated "A. bisporus" contains small amounts of hydrazines, the most abundant of which is agaritine (a mycotoxin and carcinogen). However, the hydrazines are destroyed by moderate heat when cooking. A number of species of mushrooms are poisonous; although some resemble certain edible species, consuming them could be fatal. Eating mushrooms gathered in the wild is risky and should only be undertaken by individuals knowledgeable in mushroom identification. Common best practice is for wild mushroom pickers to focus on collecting a small number of visually distinctive, edible mushroom species that cannot be easily confused with poisonous varieties. Many mushroom species produce secondary metabolites that can be toxic, mind-altering, antibiotic, antiviral, or bioluminescent. Although there are only a small number of deadly species, several others can cause particularly severe and unpleasant symptoms. Toxicity likely plays a role in protecting the function of the basidiocarp: the mycelium has expended considerable energy and protoplasmic material to develop a structure to efficiently distribute its spores. One defense against consumption and premature destruction is the evolution of chemicals that render the mushroom inedible, either causing the consumer to vomit the meal (see emetics), or to learn to avoid consumption altogether. 
In addition, due to the propensity of mushrooms to absorb heavy metals, including those that are radioactive, European mushrooms may, as late as 2008, have contained contamination from the 1986 Chernobyl disaster, and they continue to be studied.
Mushrooms with psychoactive properties have long played a role in various native medicine traditions in cultures all around the world. They have been used as a sacrament in rituals aimed at mental and physical healing, and to facilitate visionary states. One such ritual is the "velada" ceremony. A practitioner of traditional mushroom use is the "shaman" or "curandera" (priest-healer). Psilocybin mushrooms possess psychedelic properties. Commonly known as "magic mushrooms" or "shrooms", they are openly available in smart shops in many parts of the world, or on the black market in those countries that have outlawed their sale. Psilocybin mushrooms have been reported as facilitating profound and life-changing insights often described as mystical experiences. Recent scientific work has supported these claims, as well as the long-lasting effects of such induced spiritual experiences. Psilocybin, a naturally occurring chemical in certain psychedelic mushrooms such as "Psilocybe cubensis", is being studied for its ability to help people suffering from psychological disorders, such as obsessive–compulsive disorder. Minute amounts have been reported to stop cluster and migraine headaches. A double-blind study, done by the Johns Hopkins Hospital, showed psychedelic mushrooms could provide people with an experience of substantial personal meaning and spiritual significance. In the study, one third of the subjects reported that ingestion of psychedelic mushrooms was the single most spiritually significant event of their lives. Over two-thirds reported it among their five most meaningful and spiritually significant events. On the other hand, one-third of the subjects reported extreme anxiety. However, the anxiety went away after a short period of time. Psilocybin mushrooms have also been shown to be successful in treating addiction, specifically to alcohol and cigarettes.
A few species in the genus "Amanita", most recognizably "A. muscaria", but also "A. pantherina", among others, contain the psychoactive compound muscimol. The muscimol-containing chemotaxonomic group of "Amanitas" contains no amatoxins or phallotoxins and as such is not hepatotoxic, though if not properly cured these mushrooms are non-lethally neurotoxic due to the presence of ibotenic acid. "Amanita" intoxication is similar to that of Z-drugs in that it includes CNS depressant and sedative-hypnotic effects, but also dissociation and delirium in high doses.
Some mushrooms are used or studied as possible treatments for diseases, particularly their extracts, including polysaccharides, glycoproteins and proteoglycans. In some countries, extracts of polysaccharide-K, schizophyllan, polysaccharide peptide, or lentinan are government-registered adjuvant cancer therapies, even though clinical evidence of efficacy in humans has not been confirmed. Historically, in traditional Chinese medicine, mushrooms are believed to have medicinal value, although there is no evidence for such uses.
Mushrooms can be used for dyeing wool and other natural fibers. The chromophores of mushroom dyes are organic compounds that produce strong and vivid colors, and all colors of the spectrum can be achieved with mushroom dyes. Before the invention of synthetic dyes, mushrooms were the source of many textile dyes. 
Some fungi, types of polypores loosely called mushrooms, have been used as fire starters (known as tinder fungi). Mushrooms and other fungi play a role in the development of new biological remediation techniques (e.g., using mycorrhizae to spur plant growth) and filtration technologies (e.g. using fungi to lower bacterial levels in contaminated water).
https://en.wikipedia.org/wiki?curid=20264
Mainframe computer Mainframe computers or mainframes (colloquially referred to as "big iron") are computers used primarily by large organizations for critical applications; bulk data processing, such as census, industry and consumer statistics, and enterprise resource planning; and transaction processing. They are larger and have more processing power than some other classes of computers: minicomputers, servers, workstations, and personal computers. The term originally referred to the large cabinets called "main frames" that housed the central processing unit and main memory of early computers. Later, the term was used to distinguish high-end commercial machines from less-powerful units. Most large-scale computer-system architectures were established in the 1960s, but they continue to evolve. Mainframe computers are often used as servers.
Modern mainframe design is characterized less by raw computational speed and more by attributes such as reliability, availability, and serviceability. The high stability and reliability of these machines enable them to run uninterrupted for very long periods of time, with mean time between failures (MTBF) measured in decades. Mainframes have high availability, one of the primary reasons for their longevity, since they are typically used in applications where downtime would be costly or catastrophic. The term reliability, availability and serviceability (RAS) is a defining characteristic of mainframe computers. Proper planning and implementation are required to realize these features. In addition, mainframes are more secure than other computer types: the NIST vulnerabilities database, US-CERT, rates traditional mainframes such as IBM Z (previously called z Systems, System z and zSeries), Unisys Dorado and Unisys Libra as among the most secure, with vulnerabilities in the low single digits compared with thousands for Windows, UNIX, and Linux. Software upgrades usually require setting up the operating system or portions thereof, and are non-disruptive only when using virtualizing facilities such as IBM z/OS and Parallel Sysplex, or Unisys XPCL, which support workload sharing so that one system can take over another's application while it is being refreshed.
In the late 1950s, mainframes had only a rudimentary interactive interface (the console) and used sets of punched cards, paper tape, or magnetic tape to transfer data and programs. They operated in batch mode to support back office functions such as payroll and customer billing, most of which were based on repeated tape-based sorting and merging operations followed by line printing to preprinted continuous stationery. When interactive user terminals were introduced, they were used almost exclusively for applications (e.g. airline booking) rather than program development. Typewriter and Teletype devices were common control consoles for system operators through the early 1970s, although ultimately supplanted by keyboard/display devices.
By the early 1970s, many mainframes acquired interactive user terminals operating as timesharing computers, supporting hundreds of users simultaneously along with batch processing. Users gained access through keyboard/typewriter terminals and specialized text terminal CRT displays with integral keyboards, or later from personal computers equipped with terminal emulation software. By the 1980s, many mainframes supported graphic display terminals, and terminal emulation, but not graphical user interfaces. This form of end-user computing became obsolete in the 1990s due to the advent of personal computers provided with GUIs. 
After 2000, modern mainframes partially or entirely phased out classic "green screen" and color display terminal access for end-users in favour of Web-style user interfaces. The infrastructure requirements were drastically reduced during the mid-1990s, when CMOS mainframe designs replaced the older bipolar technology. IBM claimed that its newer mainframes reduced data center energy costs for power and cooling, and reduced physical space requirements compared to server farms. Modern mainframes can run multiple different instances of operating systems at the same time. This technique of virtual machines allows applications to run as if they were on physically distinct computers. In this role, a single mainframe can replace higher-functioning hardware services available to conventional servers. While mainframes pioneered this capability, virtualization is now available on most families of computer systems, though not always to the same degree or level of sophistication. Mainframes can add or hot swap system capacity without disrupting system function, with specificity and granularity to a level of sophistication not usually available with most server solutions. Modern mainframes, notably the IBM zSeries, System z9 and System z10 servers, offer two levels of virtualization: logical partitions (LPARs, via the PR/SM facility) and virtual machines (via the z/VM operating system). Many mainframe customers run two machines: one in their primary data center and one in their backup data center—fully active, partially active, or on standby—in case there is a catastrophe affecting the first building. Test, development, training, and production workload for applications and databases can run on a single machine, except for extremely large demands where the capacity of one machine might be limiting. Such a two-mainframe installation can support continuous business service, avoiding both planned and unplanned outages. In practice many customers use multiple mainframes linked either by Parallel Sysplex and shared DASD (in IBM's case), or with shared, geographically dispersed storage provided by EMC or Hitachi. Mainframes are designed to handle very high volume input and output (I/O) and emphasize throughput computing. Since the late 1950s, mainframe designs have included subsidiary hardware (called "channels" or "peripheral processors") which manage the I/O devices, leaving the CPU free to deal only with high-speed memory. It is common in mainframe shops to deal with massive databases and files. Gigabyte to terabyte-size record files are not unusual. Compared to a typical PC, mainframes commonly have hundreds to thousands of times as much data storage online, and can access it reasonably quickly. Other server families also offload I/O processing and emphasize throughput computing. Mainframe return on investment (ROI), like any other computing platform, is dependent on its ability to scale, support mixed workloads, reduce labor costs, deliver uninterrupted service for critical business applications, and several other risk-adjusted cost factors. Mainframes also have execution integrity characteristics for fault tolerant computing. For example, z900, z990, System z9, and System z10 servers effectively execute result-oriented instructions twice, compare results, arbitrate between any differences (through instruction retry and failure isolation), then shift workloads "in flight" to functioning processors, including spares, without any impact to operating systems, applications, or users. 
This hardware-level feature, also found in HP's NonStop systems, is known as lock-stepping, because both processors take their "steps" (i.e. instructions) together. Not all applications absolutely need the assured integrity that these systems provide, but many do, such as financial transaction processing. IBM, with z Systems, continues to be a major manufacturer in the mainframe market. Unisys manufactures ClearPath Libra mainframes, based on earlier Burroughs MCP products and ClearPath Dorado mainframes based on Sperry Univac OS 1100 product lines. In 2000, Hitachi co-developed the zSeries z900 with IBM to share expenses, but subsequently the two companies have not collaborated on new Hitachi models. Hewlett-Packard sells its unique NonStop systems, which it acquired with Tandem Computers and which some analysts classify as mainframes. Groupe Bull's GCOS, Stratus OpenVOS, Fujitsu (formerly Siemens) BS2000, and Fujitsu-ICL VME mainframes are still available in Europe, and Fujitsu (formerly Amdahl) GS21 mainframes globally. NEC with ACOS and Hitachi with AP10000-VOS3 still maintain mainframe hardware businesses in the Japanese market. The amount of vendor investment in mainframe development varies with market share. Fujitsu and Hitachi both continue to use custom S/390-compatible processors, as well as other CPUs (including POWER and Xeon) for lower-end systems. Bull uses a mixture of Itanium and Xeon processors. NEC uses Xeon processors for its low-end ACOS-2 line, but develops the custom NOAH-6 processor for its high-end ACOS-4 series. IBM also develops custom processors in-house, such as the zEC12. Unisys produces code compatible mainframe systems that range from laptops to cabinet-sized mainframes that use homegrown CPUs as well as Xeon processors. Furthermore, there exists a market for software applications to manage the performance of mainframe implementations. In addition to IBM, significant players in this market include BMC, Compuware, and CA Technologies. Several manufacturers and their successors produced mainframe computers from the late 1950s until the early 21st Century, with gradually decreasing numbers and a gradual transition to simulation on Intel chips rather than proprietary hardware. The US group of manufacturers was first known as "IBM and the Seven Dwarfs": usually Burroughs, UNIVAC, NCR, Control Data, Honeywell, General Electric and RCA, although some lists varied. Later, with the departure of General Electric and RCA, it was referred to as IBM and the BUNCH. IBM's dominance grew out of their 700/7000 series and, later, the development of the 360 series mainframes. The latter architecture has continued to evolve into their current zSeries mainframes which, along with the then Burroughs and Sperry (now Unisys) MCP-based and OS1100 mainframes, are among the few mainframe architectures still extant that can trace their roots to this early period. While IBM's zSeries can still run 24-bit System/360 code, the 64-bit zSeries and System z9 CMOS servers have nothing physically in common with the older systems. Notable manufacturers outside the US were Siemens and Telefunken in Germany, ICL in the United Kingdom, Olivetti in Italy, and Fujitsu, Hitachi, Oki, and NEC in Japan. The Soviet Union and Warsaw Pact countries manufactured close copies of IBM mainframes during the Cold War; the BESM series and Strela are examples of an independently designed Soviet computer. 
Shrinking demand and tough competition started a shakeout in the market in the early 1970s—RCA sold out to UNIVAC and GE sold its business to Honeywell; between 1986 and 1990 Honeywell was bought out by Bull; UNIVAC became a division of Sperry, which later merged with Burroughs to form Unisys Corporation in 1986. In 1984 estimated sales of desktop computers ($11.6 billion) exceeded mainframe computers ($11.4 billion) for the first time. IBM received the vast majority of mainframe revenue. During the 1980s, minicomputer-based systems grew more sophisticated and were able to displace the lower-end of the mainframes. These computers, sometimes called "departmental computers" were typified by the DEC VAX. In 1991, AT&T Corporation briefly owned NCR. During the same period, companies found that servers based on microcomputer designs could be deployed at a fraction of the acquisition price and offer local users much greater control over their own systems given the IT policies and practices at that time. Terminals used for interacting with mainframe systems were gradually replaced by personal computers. Consequently, demand plummeted and new mainframe installations were restricted mainly to financial services and government. In the early 1990s, there was a rough consensus among industry analysts that the mainframe was a dying market as mainframe platforms were increasingly replaced by personal computer networks. InfoWorld's Stewart Alsop infamously predicted that the last mainframe would be unplugged in 1996; in 1993, he cited Cheryl Currid, a computer industry analyst as saying that the last mainframe "will stop working on December 31, 1999", a reference to the anticipated Year 2000 problem (Y2K). That trend started to turn around in the late 1990s as corporations found new uses for their existing mainframes and as the price of data networking collapsed in most parts of the world, encouraging trends toward more centralized computing. The growth of e-business also dramatically increased the number of back-end transactions processed by mainframe software as well as the size and throughput of databases. Batch processing, such as billing, became even more important (and larger) with the growth of e-business, and mainframes are particularly adept at large-scale batch computing. Another factor currently increasing mainframe use is the development of the Linux operating system, which arrived on IBM mainframe systems in 1999 and is typically run in scores or up to c. 8,000 virtual machines on a single mainframe. Linux allows users to take advantage of open source software combined with mainframe hardware RAS. Rapid expansion and development in emerging markets, particularly People's Republic of China, is also spurring major mainframe investments to solve exceptionally difficult computing problems, e.g. providing unified, extremely high volume online transaction processing databases for 1 billion consumers across multiple industries (banking, insurance, credit reporting, government services, etc.) In late 2000, IBM introduced 64-bit z/Architecture, acquired numerous software companies such as Cognos and introduced those software products to the mainframe. IBM's quarterly and annual reports in the 2000s usually reported increasing mainframe revenues and capacity shipments. However, IBM's mainframe hardware business has not been immune to the recent overall downturn in the server hardware market or to model cycle effects. 
For example, in the 4th quarter of 2009, IBM's System z hardware revenues decreased by 27% year over year. But MIPS (millions of instructions per second) shipments increased 4% per year over the past two years. Alsop had himself photographed in 2000, symbolically eating his own words ("death of the mainframe"). In 2012, NASA powered down its last mainframe, an IBM System z9. However, IBM's successor to the z9, the z10, had led a New York Times reporter to state four years earlier that "mainframe technology—hardware, software and services—remains a large and lucrative business for I.B.M., and mainframes are still the back-office engines behind the world's financial markets and much of global commerce". While mainframe technology represented less than 3% of IBM's revenues, it "continue[d] to play an outsized role in Big Blue's results". In 2015, IBM launched the IBM z13, in June 2017 the IBM z14, and in September 2019 the latest version of the product, the IBM z15.
A supercomputer is a computer at the leading edge of data processing capability, with respect to calculation speed. Supercomputers are used for scientific and engineering problems (high-performance computing) which crunch numbers and data, while mainframes focus on transaction processing, and the two classes differ in several other respects as well. Nevertheless, mainframes and supercomputers cannot always be clearly distinguished; up until the early 1990s, many supercomputers were based on a mainframe architecture with supercomputing extensions. An example of such a system is the HITAC S-3800, which was instruction-set compatible with IBM System/370 mainframes and could run the Hitachi VOS3 operating system (a fork of IBM MVS). The S-3800 can therefore be seen as simultaneously a supercomputer and an IBM-compatible mainframe. In 2007, an amalgamation of the different technologies and architectures for supercomputers and mainframes led to the so-called gameframe.
https://en.wikipedia.org/wiki?curid=20266
Microsoft Excel Microsoft Excel is a spreadsheet developed by Microsoft for Windows, macOS, Android and iOS. It features calculation, graphing tools, pivot tables, and a macro programming language called Visual Basic for Applications. It has been a very widely applied spreadsheet for these platforms, especially since version 5 in 1993, and it has replaced Lotus 1-2-3 as the industry standard for spreadsheets. Excel forms part of the Microsoft Office suite of software. Microsoft Excel has the basic features of all spreadsheets, using a grid of "cells" arranged in numbered "rows" and letter-named "columns" to organize data manipulations like arithmetic operations. It has a battery of supplied functions to answer statistical, engineering, and financial needs. In addition, it can display data as line graphs, histograms and charts, and with a very limited three-dimensional graphical display. It allows sectioning of data to view its dependencies on various factors for different perspectives (using "pivot tables" and the "scenario manager"). It has a programming aspect, "Visual Basic for Applications", allowing the user to employ a wide variety of numerical methods, for example, for solving differential equations of mathematical physics, and then reporting the results back to the spreadsheet. It also has a variety of interactive features allowing user interfaces that can completely hide the spreadsheet from the user, so the spreadsheet presents itself as a so-called "application", or "decision support system" (DSS), via a custom-designed user interface, for example, a stock analyzer, or in general, as a design tool that asks the user questions and provides answers and reports. In a more elaborate realization, an Excel application can automatically poll external databases and measuring instruments using an update schedule, analyze the results, make a Word report or PowerPoint slide show, and e-mail these presentations on a regular basis to a list of participants. Excel was not designed to be used as a database. Microsoft allows for a number of optional command-line switches to control the manner in which Excel starts. Excel 2016 has 484 functions. Of these, 360 existed prior to Excel 2010. Microsoft classifies these functions in 14 categories. Of the 484 current functions, 386 may be called from VBA as methods of the object "WorksheetFunction" and 44 have the same names as VBA functions. The Windows version of Excel supports programming through Microsoft's Visual Basic for Applications (VBA), which is a dialect of Visual Basic. Programming with VBA allows spreadsheet manipulation that is awkward or impossible with standard spreadsheet techniques. Programmers may write code directly using the Visual Basic Editor (VBE), which includes a window for writing code, debugging code, and code module organization environment. The user can implement numerical methods as well as automating tasks such as formatting or data organization in VBA and guide the calculation using any desired intermediate results reported back to the spreadsheet. VBA was removed from Mac Excel 2008, as the developers did not believe that a timely release would allow porting the VBA engine natively to Mac OS X. VBA was restored in the next version, Mac Excel 2011, although the build lacks support for ActiveX objects, impacting some high level developer tools. A common and easy way to generate VBA code is by using the Macro Recorder. The Macro Recorder records actions of the user and generates VBA code in the form of a macro. 
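To give a concrete sense of the object-model calls that recorded macros and VBA programs make, the sketch below drives Excel from Python through its COM automation interface rather than from VBA itself. It is illustrative only: it assumes a Windows machine with Excel and the third-party pywin32 package installed, and the workbook contents and save path are invented for the example.

    # Minimal sketch: automating Excel via COM from Python (assumes Windows,
    # an installed copy of Excel, and the pywin32 package). The calls mirror
    # the Excel Object Model that VBA macros use (Workbooks, Worksheets, Range).
    import win32com.client

    excel = win32com.client.Dispatch("Excel.Application")
    excel.Visible = False                      # run without showing a window

    wb = excel.Workbooks.Add()                 # new workbook
    ws = wb.Worksheets(1)                      # first sheet

    ws.Range("A1").Value = "Quarter"           # write cell values
    ws.Range("B1").Value = "Revenue"
    ws.Range("A2").Value = "Q1"
    ws.Range("B2").Value = 1250
    ws.Range("A3").Value = "Q2"
    ws.Range("B3").Value = 1330
    ws.Range("B4").Formula = "=SUM(B2:B3)"     # an ordinary worksheet formula
    ws.Range("A1:B1").Font.Bold = True         # simple formatting

    wb.SaveAs(r"C:\temp\demo.xlsx")            # example path, adjust as needed
    wb.Close(SaveChanges=False)
    excel.Quit()

A macro recorded inside Excel produces the equivalent VBA statements directly, without any external driver.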
These actions can then be repeated automatically by running the macro. The macros can also be linked to different trigger types like keyboard shortcuts, a command button or a graphic. The actions in the macro can be executed from these trigger types or from the generic toolbar options. The VBA code of the macro can also be edited in the VBE. Certain features such as loop functions and screen prompt by their own properties, and some graphical display items, cannot be recorded but must be entered into the VBA module directly by the programmer. Advanced users can employ user prompts to create an interactive program, or react to events such as sheets being loaded or changed. Macro Recorded code may not be compatible with Excel versions. Some code that is used in Excel 2010 cannot be used in Excel 2003. Making a Macro that changes the cell colors and making changes to other aspects of cells may not be backward compatible. VBA code interacts with the spreadsheet through the Excel "Object Model", a vocabulary identifying spreadsheet objects, and a set of supplied functions or "methods" that enable reading and writing to the spreadsheet and interaction with its users (for example, through custom toolbars or "command bars" and "message boxes"). User-created VBA subroutines execute these actions and operate like macros generated using the macro recorder, but are more flexible and efficient. From its first version Excel supported end-user programming of macros (automation of repetitive tasks) and user-defined functions (extension of Excel's built-in function library). In early versions of Excel, these programs were written in a macro language whose statements had formula syntax and resided in the cells of special-purpose macro sheets (stored with file extension .XLM in Windows.) XLM was the default macro language for Excel through Excel 4.0. Beginning with version 5.0 Excel recorded macros in VBA by default but with version 5.0 XLM recording was still allowed as an option. After version 5.0 that option was discontinued. All versions of Excel, including Excel 2010 are capable of running an XLM macro, though Microsoft discourages their use. Excel supports charts, graphs, or histograms generated from specified groups of cells. The generated graphic component can either be embedded within the current sheet or added as a separate object. These displays are dynamically updated if the content of cells changes. For example, suppose that the important design requirements are displayed visually; then, in response to a user's change in trial values for parameters, the curves describing the design change shape, and their points of intersection shift, assisting the selection of the best design. Additional features are available using add-ins. Several are provided with Excel, including: Excel for the web is a free lightweight version of Microsoft Excel available as part of Office on the web, which also includes web versions of Microsoft Word and Microsoft PowerPoint. Versions of Excel up to 7.0 had a limitation in the size of their data sets of 16K (214 = ) rows. Versions 8.0 through 11.0 could handle 64K (216 = ) rows and 256 columns (28 as label 'IV'). Version 12.0 onwards, including the current Version 16.x, can handle over 1M (220 = ) rows, and (214 as label 'XFD') columns. Microsoft Excel up until 2007 version used a proprietary binary file format called Excel Binary File Format (.XLS) as its primary format. 
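The column labels quoted above ('IV' as the 256th column and 'XFD' as the 16,384th) follow a bijective base-26 lettering scheme. The small Python function below, written for illustration rather than taken from Excel itself, converts a 1-based column number to its letter label:

    def column_label(n: int) -> str:
        """Convert a 1-based column number to an Excel-style letter label
        (1 -> 'A', 26 -> 'Z', 27 -> 'AA', ...), using bijective base 26."""
        if n < 1:
            raise ValueError("column numbers start at 1")
        label = ""
        while n > 0:
            n, rem = divmod(n - 1, 26)     # shift by one so 'Z' maps cleanly
            label = chr(ord("A") + rem) + label
        return label

    assert column_label(256) == "IV"       # last column up to version 11.0
    assert column_label(16384) == "XFD"    # last column from version 12.0 on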
Excel 2007 uses Office Open XML as its primary file format, an XML-based format that followed after a previous XML-based format called "XML Spreadsheet" ("XMLSS"), first introduced in Excel 2002. Although supporting and encouraging the use of new XML-based formats as replacements, Excel 2007 remained backwards-compatible with the traditional binary formats. In addition, most versions of Microsoft Excel can read CSV, DBF, SYLK, DIF, and other legacy formats. Support for some older file formats was removed in Excel 2007; these file formats were mainly from DOS-based programs. OpenOffice.org has created documentation of the Excel format. Two epochs of the format exist: the 97-2003 OLE format, and the older stream format. Since then, Microsoft has made the Excel binary format specification freely available to download.
The "XML Spreadsheet" format introduced in Excel 2002 is a simple, XML-based format missing some more advanced features like storage of VBA macros. Though the intended file extension for this format is ".xml", the program also correctly handles XML files with a ".xls" extension. This feature is widely used by third-party applications (e.g. "MySQL Query Browser") to offer "export to Excel" capabilities without implementing the binary file format. A file in this format will be opened correctly by Excel whether it is saved as "Book1.xml" or as "Book1.xls"; a short sketch of generating such a file appears below.
Microsoft Excel 2007, along with the other products in the Microsoft Office 2007 suite, introduced new file formats. The first of these (.xlsx) is defined in the Office Open XML (OOXML) specification.
Windows applications such as Microsoft Access and Microsoft Word, as well as Excel, can communicate with each other and use each other's capabilities. The most common mechanism is Dynamic Data Exchange: although strongly deprecated by Microsoft, this is a common method to send data between applications running on Windows, with official MS publications referring to it as "the protocol from hell". As the name suggests, it allows applications to supply data to others for calculation and display. It is very common in financial markets, being used to connect to important financial data services such as Bloomberg and Reuters. OLE Object Linking and Embedding allows a Windows application to control another to enable it to format or calculate data. This may take the form of "embedding", where an application uses another to handle a task that it is more suited to; for example, a PowerPoint presentation may be embedded in an Excel spreadsheet or vice versa. Excel users can access external data sources via Microsoft Office features such as connections built with the Office Data Connection file format. Excel files themselves may be updated using a Microsoft-supplied ODBC driver. Excel can accept data in real time through several programming interfaces, which allow it to communicate with many data sources such as Bloomberg and Reuters (through add-ins such as Power Plus Pro). Alternatively, Microsoft Query provides ODBC-based browsing within Microsoft Excel.
Programmers have produced APIs to open Excel spreadsheets in a variety of applications and environments other than Microsoft Excel. These include opening Excel documents on the web using either ActiveX controls or plugins like the Adobe Flash Player. The Apache POI open-source project provides Java libraries for reading and writing Excel spreadsheet files. ExcelPackage is another open-source project that provides server-side generation of Microsoft Excel 2007 spreadsheets. 
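To make the "XML Spreadsheet" format described above concrete, the sketch below generates a minimal one-sheet XMLSS file from Python. The element names and the urn:schemas-microsoft-com:office:spreadsheet namespace are quoted from memory rather than from Microsoft's documentation, so they should be verified before being relied on; the data is invented for the example.

    # Rough sketch of writing a minimal XMLSS ("XML Spreadsheet 2003") file.
    # Element and namespace names are from memory, not the specification.
    ROWS = [["Name", "Score"], ["Ada", 95]]

    def cell(value):
        kind = "Number" if isinstance(value, (int, float)) else "String"
        return '    <Cell><Data ss:Type="{}">{}</Data></Cell>'.format(kind, value)

    lines = ['<?xml version="1.0"?>',
             '<Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet"',
             '          xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet">',
             ' <Worksheet ss:Name="Sheet1">',
             '  <Table>']
    for row in ROWS:
        lines.append('   <Row>')
        lines.extend(cell(v) for v in row)
        lines.append('   </Row>')
    lines += ['  </Table>', ' </Worksheet>', '</Workbook>']

    with open("Book1.xml", "w", encoding="utf-8") as fh:   # or "Book1.xls"
        fh.write("\n".join(lines))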
PHPExcel is a PHP library that converts Excel5, Excel 2003, and Excel 2007 formats into objects for reading and writing within a web application. Excel Services is a current .NET developer tool that can enhance Excel's capabilities. Excel spreadsheets can be accessed from Python with xlrd and openpyxl. js-xlsx and js-xls can open Excel spreadsheets from JavaScript.
Microsoft Excel protection offers several types of passwords. All passwords except the "password to open a document" can be removed instantly, regardless of the Microsoft Excel version used to create the document. These types of passwords are used primarily for shared work on a document. Such password-protected documents are not encrypted, and data derived from the set password is saved in the document's header. The "password to protect workbook" is an exception – when it is set, the document is encrypted with the standard password "VelvetSweatshop", but since this password is known to the public, it does not actually add any extra protection to the document. The only type of password that can prevent a trespasser from gaining access to a document is the "password to open a document". The cryptographic strength of this kind of protection depends strongly on the Microsoft Excel version that was used to create the document. In "Microsoft Excel 95" and earlier versions, the password to open is converted to a 16-bit key that can be instantly cracked. In "Excel 97/2000" the password is converted to a 40-bit key, which can also be cracked very quickly using modern equipment. As regards services that use rainbow tables (e.g. Password-Find), it takes up to several seconds to remove protection. In addition, password-cracking programs can brute-force attack passwords at a rate of hundreds of thousands of passwords a second, which not only lets them decrypt a document but also find the original password. In "Excel 2003/XP" the encryption is slightly better – a user can choose any encryption algorithm that is available in the system (see Cryptographic Service Provider). Due to the CSP, an "Excel" file can't be decrypted, and thus the "password to open" can't be removed, though the brute-force attack speed remains quite high. Nevertheless, the older "Excel 97/2000" algorithm is set by default, so users who do not change the default settings lack reliable protection of their documents. The situation changed fundamentally in "Excel 2007", where the modern AES algorithm with a 128-bit key started being used for encryption, and a 50,000-fold use of the hash function SHA1 reduced the speed of brute-force attacks down to hundreds of passwords per second. In "Excel 2010", the strength of the default protection was doubled through the use of a 100,000-fold SHA1 to convert a password to a key.
Microsoft Excel Viewer was a freeware program for viewing and printing spreadsheet documents created by Excel. The Microsoft Excel Viewer was retired in April 2018 in lieu of Excel Online. Excel Viewer is similar to Microsoft Word Viewer in functionality. (There is no current version for the Mac.) Excel Viewer is available for Microsoft Windows and Windows CE handheld PCs, such as the NEC MobilePro. It is also possible to open Excel files using certain online tools and services. Online Excel viewers do not require users to have Microsoft Excel installed.
In addition to issues with spreadsheets in general, other problems specific to Excel include numeric precision, misleading statistics functions, mod function errors, date limitations and more. 
Despite the use of 15-figure precision, Excel can display many more figures (up to thirty) upon user request. But the displayed figures are "not" those actually used in its computations, and so, for example, the difference of two numbers may differ from the difference of their displayed values. Although such departures are usually beyond the 15th decimal, exceptions do occur, especially for very large or very small numbers. Serious errors can occur if decisions are made based upon automated comparisons of numbers (for example, using the Excel "If" function), as equality of two numbers can be unpredictable. In the figure, the fraction 1/9000 is displayed in Excel. Although this number has a decimal representation that is an infinite string of ones, Excel displays only the leading 15 figures. In the second line, the number one is added to the fraction, and again Excel displays only 15 figures. In the third line, one is subtracted from the sum using Excel. Because the sum in the second line has only eleven 1's after the decimal, the difference when 1 is subtracted from this displayed value is three 0's followed by a string of eleven 1's. However, the difference reported by Excel in the third line is three 0's followed by a string of "thirteen" 1's and two extra erroneous digits. This is because Excel calculates with about half a digit more than it displays. Excel works with a modified 1985 version of the IEEE 754 specification. Excel's implementation involves conversions between binary and decimal representations, leading to accuracy that is on average better than one would expect from simple fifteen digit precision, but that can be worse. See the main article for details. Besides accuracy in user computations, the question of accuracy in Excel-provided functions may be raised. Particularly in the arena of statistical functions, Excel has been criticized for sacrificing accuracy for speed of calculation. As many calculations in Excel are executed using VBA, an additional issue is the accuracy of VBA, which varies with variable type and user-requested precision. The accuracy and convenience of statistical tools in Excel has been criticized, as mishandling missing data, as returning incorrect values due to inept handling of round-off and large numbers, as only selectively updating calculations on a spreadsheet when some cell values are changed, and as having a limited set of statistical tools. Microsoft has announced some of these issues are addressed in Excel 2010. Excel has issues with modulo operations. In the case of excessively large results, Excel will return the error warning instead of an answer. Excel includes February 29, 1900, incorrectly treating 1900 as a leap year, even though e.g. 2100 is correctly treated as a non-leap year. The bug originated from Lotus 1-2-3 (deliberately implemented to save computer memory), and was also purposely implemented in Excel, for the purpose of bug compatibility. This legacy has later been carried over into Office Open XML file format. Thus a (not necessarily whole) number greater than or equal to 61 interpreted as a date and time are the (real) number of days after December 30, 1899, 0:00, a non-negative number less than 60 is the number of days after December 31, 1899, 0:00, and numbers with whole part 60 represent the fictional day. Excel supports dates with years in the range 1900-9999, except that December 31, 1899, can be entered as 0 and is displayed as 0-jan-1900. 
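The floating-point and date behaviors described above can be sketched outside Excel. The Python fragment below uses ordinary IEEE 754 doubles, so it only approximates Excel's arithmetic (Excel layers its own display rules on top), and the serial-date conversion simply restates the rules given in the paragraph above; it is an illustration, not a reimplementation of Excel.

    from datetime import date, timedelta

    # Binary floating point: adding and then subtracting 1 loses low-order
    # bits, so the round trip does not return exactly 1/9000. Excel's
    # arithmetic differs in detail but exhibits the same class of error.
    x = 1 / 9000
    print((x + 1) - 1 == x)            # False: the round trip loses bits

    def excel_serial_to_date(serial: int) -> date:
        """Convert a whole-number Excel serial date (1900 date system) to a
        calendar date, following the rules described in the text above."""
        if serial < 0:
            raise ValueError("negative serial dates are not handled here")
        if serial < 60:                # days after December 31, 1899
            return date(1899, 12, 31) + timedelta(days=serial)
        if serial == 60:               # the fictional February 29, 1900
            raise ValueError("serial 60 is the nonexistent 1900-02-29")
        return date(1899, 12, 30) + timedelta(days=serial)   # 61 and above

    print(excel_serial_to_date(1))     # 1900-01-01
    print(excel_serial_to_date(59))    # 1900-02-28
    print(excel_serial_to_date(61))    # 1900-03-01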
Converting a fraction of a day into hours, minutes and days by treating it as a moment on the day January 1, 1900, does not work for a negative fraction.
If text is entered that happens to be in a form Excel interprets as a date, the text can be unintentionally changed to a standard date format. A similar problem occurs when a text happens to be in the form of a floating-point notation of a number. In these cases the original exact text cannot be recovered from the result. This issue has caused a well-known problem in the analysis of DNA, for example in bioinformatics. As first reported in 2004, genetic scientists found that Excel automatically and incorrectly converts certain gene names into dates. A follow-up study in 2016 found many peer-reviewed scientific journal papers had been affected and that "Of the selected journals, the proportion of published articles with Excel files containing gene lists that are affected by gene name errors is 19.6 %." Excel parses the copied and pasted data and sometimes changes them depending on what it thinks they are. For example, MARCH1 (Membrane Associated Ring-CH-type finger 1) gets converted to the date March 1 (1-Mar) and SEPT2 (Septin 2) is converted into September 2 (2-Sep), etc. While some secondary news sources reported this as a fault with Excel, the original authors of the 2016 paper placed the blame with the researchers misusing Excel. Entering an apostrophe before the name, as in 'MARCH1, prevents Excel from converting it to a date, as does formatting the cell as text before entering the name.
Certain functions return incorrect results when passed a string longer than 255 characters.
Microsoft Excel will not open two documents with the same name; instead it will display an error, because of calculation ambiguity with linked cells. If there is a cell ='[Book1.xlsx]Sheet1'!$G$33, and there are two books named "Book1" open, there is no way to tell which one the user means.
Microsoft originally marketed a spreadsheet program called Multiplan in 1982. Multiplan became very popular on CP/M systems, but on MS-DOS systems it lost popularity to Lotus 1-2-3. Microsoft released the first version of Excel for the Macintosh on September 30, 1985, and the first Windows version was 2.05 (to synchronize with the Macintosh version 2.2) in November 1987. Lotus was slow to bring 1-2-3 to Windows, and by the early 1990s Excel had started to outsell 1-2-3 and helped Microsoft achieve its position as a leading PC software developer. This accomplishment solidified Microsoft as a valid competitor and showed its future direction in developing GUI software. Microsoft maintained its advantage with regular new releases, every two years or so.
Excel 2.0 is the first version of Excel for the Intel platform; versions prior to 2.0 were only available on the Apple Macintosh. The first Windows version was labeled "2" to correspond to the Mac version, and included a run-time version of Windows. "BYTE" in 1989 listed Excel for Windows as among the "Distinction" winners of the BYTE Awards. The magazine stated that the port of the "extraordinary" Macintosh version "shines", with a user interface as good as or better than the original. Subsequent versions added toolbars, drawing capabilities, outlining, add-in support, 3D charts, and many more new features, and Excel 4.0 introduced auto-fill. Also, an easter egg in Excel 4.0 reveals a hidden animation of a dancing set of numbers 1 through 3, representing Lotus 1-2-3, which was then crushed by an Excel logo. 
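Returning to the automatic date-conversion issue described earlier in this section: when gene lists are written programmatically rather than typed in, the names can be stored as explicitly text-formatted cells. The sketch below uses the openpyxl library mentioned earlier; the gene list and file name are examples, and "@" is Excel's built-in Text number-format code.

    # Illustrative sketch: store gene symbols as text so that Excel does not
    # reinterpret names such as MARCH1 or SEPT2 as dates.
    # Requires the third-party openpyxl package; the file name is an example.
    from openpyxl import Workbook

    genes = ["MARCH1", "SEPT2", "DEC1"]

    wb = Workbook()
    ws = wb.active
    ws["A1"] = "Gene symbol"
    for i, name in enumerate(genes, start=2):
        c = ws.cell(row=i, column=1, value=name)
        c.number_format = "@"          # "@" marks the cell as Text
    wb.save("gene_list.xlsx")

This mirrors the manual workarounds mentioned above (a leading apostrophe, or formatting the cell as text before entry).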
With version 5.0, Excel included Visual Basic for Applications (VBA), a programming language based on Visual Basic which adds the ability to automate tasks in Excel and to provide user-defined functions (UDF) for use in worksheets. VBA is a powerful addition to the application and includes a fully featured integrated development environment (IDE). Macro recording can produce VBA code replicating user actions, thus allowing simple automation of regular tasks. VBA allows the creation of forms and in-worksheet controls to communicate with the user. The language supports use (but not creation) of ActiveX (COM) DLLs; later versions add support for class modules allowing the use of basic object-oriented programming techniques. The automation functionality provided by VBA made Excel a target for macro viruses. This caused serious problems until antivirus products began to detect these viruses. Microsoft belatedly took steps to prevent the misuse by adding the ability to disable macros completely, to enable macros when opening a workbook, or to trust all macros signed using a trusted certificate. Versions 5.0 to 9.0 of Excel contain various Easter eggs, including a "Hall of Tortured Souls", although since version 10 Microsoft has taken measures to eliminate such undocumented features from their products. Excel 5.0 was released in a 16-bit x86 version for Windows 3.1 and later in a 32-bit version for NT 3.51 (x86/Alpha/PowerPC). The next version, released in 1995 with Microsoft Office for Windows 95, is the first major version after Excel 5.0; there is no Excel 6.0, as all of the Office applications standardized on the same major version number. It was an internal rewrite to 32 bits, with almost no external changes but faster and more stable operation. Excel 97, included in Office 97 (for x86 and Alpha), was a major upgrade that introduced the paper clip Office Assistant and featured standard VBA in place of the internal Excel Basic. It introduced the now-removed Natural Language labels. This version of Excel includes a flight simulator as an Easter egg. Excel 2000, included in Office 2000, was a minor upgrade but introduced an improved clipboard that can hold multiple objects at once. The Office Assistant, whose frequent unsolicited appearance in Excel 97 had annoyed many users, became less intrusive. The release included in Office XP brought very minor enhancements, and the release included in Office 2003 brought minor enhancements, the most significant being the new Tables. Excel 2007, included in Office 2007, was a major upgrade from the previous version. Similar to other updated Office products, Excel 2007 used the new Ribbon menu system. This was different from what users were used to, and was met with mixed reactions. One study reported fairly good acceptance by users, except among highly experienced users and users of word-processing applications with a classical WIMP interface, but was less convinced in terms of efficiency and organization. However, an online survey reported that a majority of respondents had a negative opinion of the change, with advanced users being "somewhat more negative" than intermediate users, and users reporting a self-estimated reduction in productivity. Added functionality included the SmartArt set of editable business diagrams. Also added was improved management of named variables through the "Name Manager", and much-improved flexibility in formatting graphs, which allows (x, y) coordinate labeling and lines of arbitrary weight. Several improvements to pivot tables were introduced.
Like other Office products, the Office Open XML file formats were introduced, including ".xlsm" for a workbook with macros and ".xlsx" for a workbook without macros. In addition, many of the size limitations of previous versions were greatly increased. To illustrate, the number of rows was now 1,048,576 (2^20) and the number of columns was 16,384 (2^14; the far-right column is XFD). This changes what is a valid "A1" reference versus a named range. This version made more extensive use of multiple cores for the calculation of spreadsheets; however, VBA macros are not handled in parallel, and XLL add-ins were only executed in parallel if they were thread-safe and this was indicated at registration. Excel 2010, included in Office 2010, is the next major version after v12.0, as version number 13 was skipped; it brought minor enhancements and 64-bit support. Excel 2013 (included in Office 2013) and Excel 2016 (included in Office 2016) each added a number of new tools. Since then, Microsoft no longer releases Office or Excel in discrete versions. Instead, features are introduced automatically over time using Windows Update, and the version number remains 16.0. Thereafter, only the approximate dates when features appeared can be given. Excel Mobile is a spreadsheet program that can edit XLSX files. It can edit and format text in cells, calculate formulas, search within the spreadsheet, sort rows and columns, freeze panes, filter the columns, add comments, and create charts. It cannot add columns or rows except at the edge of the document, rearrange columns or rows, delete rows or columns, or add spreadsheet tabs. The 2007 version has the ability to use a full-screen mode to deal with limited screen resolution, as well as split panes to view different parts of a worksheet at one time. Protection settings, zoom settings, autofilter settings, certain chart formatting, hidden sheets, and other features are not supported on Excel Mobile, and will be modified upon opening and saving a workbook. In 2015, Excel Mobile became available for Windows 10 and Windows 10 Mobile on the Windows Store. Excel offers many user interface tweaks over the earliest electronic spreadsheets; however, the essence remains the same as in the original spreadsheet software, VisiCalc: the program displays cells organized in rows and columns, and each cell may contain data or a formula, with relative or absolute references to other cells. Excel 2.0 for Windows, which was modeled after its Mac GUI-based counterpart, indirectly expanded the installed base of the then-nascent Windows environment. Excel 2.0 was released a month before Windows 2.0, and the installed base of Windows was so low at that point in 1987 that Microsoft had to bundle a runtime version of Windows 1.0 with Excel 2.0. Unlike Microsoft Word, there never was a DOS version of Excel. Excel became the first spreadsheet to allow the user to define the appearance of spreadsheets (fonts, character attributes, and cell appearance). It also introduced intelligent cell recomputation, where only cells dependent on the cell being modified are updated (previous spreadsheet programs recomputed everything all the time or waited for a specific user command). Excel introduced auto-fill, the ability to drag and expand the selection box to automatically copy a cell or row contents to adjacent cells or rows, adjusting the copies intelligently by automatically incrementing cell references or contents. Excel also introduced extensive graphing capabilities.
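The 16,384-column limit mentioned above corresponds to the label XFD because Excel's column labels form a bijective base-26 numbering. A small Python sketch (the function name is my own) that converts a 1-based column index to its label:

    def column_label(n):
        """Convert a 1-based column index into an Excel-style column label."""
        label = ""
        while n > 0:
            n, rem = divmod(n - 1, 26)
            label = chr(ord("A") + rem) + label
        return label

    print(column_label(1), column_label(26), column_label(27), column_label(16384))  # A Z AA XFD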
Because Excel is widely used, it has been attacked by hackers. While Excel is not directly exposed to the Internet, if an attacker can get a victim to open a file in Excel, and there is an appropriate security bug in Excel, then the attacker can gain control of the victim's computer. The UK's GCHQ has a tool named TORNADO ALLEY for this purpose.
https://en.wikipedia.org/wiki?curid=20268
Michael Hutchence Michael Kelland John Hutchence (22 January 1960 – 22 November 1997) was an Australian musician, singer-songwriter and actor who co-founded the rock band INXS, which sold over 60 million records worldwide and was inducted into the ARIA Hall of Fame in 2001. Hutchence was the lead singer and lyricist of INXS from 1977 until his death. Hutchence was a member of the short-lived pop rock group Max Q. He also recorded some solo material and acted in feature films, including "Dogs in Space" (1986), "Frankenstein Unbound" (1990), and "Limp" (1997). Hutchence had a string of love affairs with prominent actresses, models and singers, and his private life was often reported in the Australian and international press. In July 1996, Hutchence and English television presenter Paula Yates had a daughter, Heavenly Hiraani Tiger Lily. On the morning of 22 November 1997, Hutchence was found dead in his hotel room in Sydney. His death was reported by the New South Wales Coroner to be the result of suicide by hanging. Michael Kelland John Hutchence was born on 22 January 1960, to Sydney businessman Kelland ("Kell") Frank Hutchence (1924-2002) and make-up artist Patricia Glassop (née Kennedy, 1926-2010). Kelland's parents were sea captain Frank Hutchence and Mabs from England who settled in Sydney in 1922. Michael joined elder half-sister Tina; both siblings were of Irish ancestry from their mother's side, as Patricia's father was from County Cork in Ireland. Following Kell's business interests, the Hutchence family moved to Brisbane (where younger brother Rhett was born) and later to Hong Kong. During the early years in Hong Kong, both boys attended Beacon Hill School in Kowloon Tong. While in Hong Kong, Michael showed promise as a swimmer before breaking his arm badly. He then began to show interest in poetry and performed his first song in a local toy store commercial. Michael attended King George V School during his early teens. The family returned to Sydney in 1972, buying a house in Belrose near the Northern Beaches. Hutchence attended Davidson High School, where he met and befriended Andrew Farriss. Around this time, Hutchence and Farriss spent a lot of time jamming in the garage with Andrew's brothers. Farriss then convinced Hutchence to join his band, Doctor Dolphin, alongside classmates Kent Kerny and Neil Sanders. Bass guitarist Garry Beers and drummer Geoff Kennelly from nearby Forest High School filled out the line-up. Hutchence's parents separated when he was 15; for a short time in 1976, he lived with his mother and half-sister Tina in California. Hutchence later returned to Sydney with his mother. In 1977, a new band, The Farriss Brothers, was formed with Tim Farriss on lead guitar, his younger brother Andrew as keyboardist, and youngest brother Jon on drums. Andrew brought Hutchence on board as a vocalist and Beers on bass guitar, and Tim brought in his former bandmate Kirk Pengilly to play guitar and saxophone. The band made their debut on 16 August 1977 at Whale Beach, 40 km (25 mi) north of Sydney. Hutchence, the Farriss brothers, Kerny, Sanders, Beers and Kennelly briefly performed as The Vegetables, singing "We Are the Vegetables". Ten months later, they returned to Sydney and recorded a set of demos. The Farriss Brothers regularly supported hard rockers Midnight Oil on the pub rock circuit, and were renamed as INXS in 1979. Their first performance under the new name was on 1 September at the Oceanview Hotel in Toukley. 
In May 1980, the group released their first single, "Simple Simon"/"We Are the Vegetables" which was followed by the debut album "INXS" in October. Their first Top 40 Australian hit on the Kent Music Report Singles Chart, "Just Keep Walking", was released in September 1980. Hutchence became the main spokesperson for the band. He co-wrote almost all of INXS's songs with Andrew Farriss. According to Hutchence, most of the songs on the band's second album, "Underneath the Colours", were written within a fairly short space of time: "Most bands shudder at the prospect of having 20 years to write their first album and four days to write their second. For us, though, it was good. It left less room for us to go off on all sorts of tangents". Soon after recording sessions for "Underneath the Colours" – produced by Richard Clapton – had finished, band members started work on outside projects. Hutchence recorded "Speed Kills", written by Don Walker of hard rockers Cold Chisel, for the "Freedom" (1982) film soundtrack, directed by Scott Hicks. It was Hutchence's first solo single and was released by WEA in early 1982. In March 1985, after Hutchence and INXS recorded their album "The Swing" (1984), WEA released the Australian version of "Dekadance", as a limited edition cassette only EP of six tracks including remixes from the album. The cassette also included a cover version of Nancy Sinatra and Lee Hazlewood's hit "Jackson", which Hutchence sang as a duet with Jenny Morris, a backing singer for "The Swing" sessions. The EP reached No 2 on the Kent Music Report Albums Chart. Hutchence provided vocals for new wave band Beargarden's 1985 single release. On 19 May, INXS won seven awards at the 1984 "Countdown" Music and Video Awards ceremony, including 'Best Songwriter' for Hutchence and Andrew, and 'Most Popular Male' for Hutchence. They performed "Burn for You", dressed in Akubras (a brand of hats) and Drizabones (a brand of outdoor coats/oilskin jackets) followed by Hutchence and Morris singing "Jackson" to close. In 1986, Hutchence played Sam, the lead male role, in the Australian film "Dogs in Space", directed by long-time INXS music video collaborator Richard Lowenstein. Sam's girlfriend, Anna, was portrayed by Saskia Post as a "fragile peroxide blonde in op-shop clothes". Hutchence provided four songs on the film's soundtrack. Also working on the film and its soundtrack, as music director, was Ollie Olsen (ex-Whirlywirld). Late in 1986, before commencing work on a new INXS album and while supposedly taking an eight-month break, the band's management decided to stage the Australian Made tour as a series of major outdoor concerts across the country. The roster featured INXS, Jimmy Barnes (Cold Chisel), Models, Divinyls, Mental as Anything, The Triffids and I'm Talking. To promote the tour, Hutchence and Barnes shared vocals on The Easybeats cover "Good Times" and "Laying Down the Law", which Barnes cowrote with Beers, Andrew Farriss, Jon Farriss, Hutchence and Pengilly. "Good Times" was used as the theme for the concert series of 1986–1987. It peaked at No. 2 on the Australian charts, and months later was featured in the Joel Schumacher film "The Lost Boys" and its soundtrack, allowing it to peak at No. 47 in the U.S. on 1 August 1987. Divinyls' lead singer Chrissie Amphlett enjoyed the tour and reconnected with Hutchence, stating that "[he] was a sweet man, who said in one interview that he wanted me to have his baby." 
In 1987, Hutchence provided vocals for Richard Clapton's album "Glory Road", which was produced by Jon Farriss. INXS released "Kick" in October 1987, and the album provided the band with worldwide popularity. "Kick" peaked at No. 1 in Australia, No. 3 on the US "Billboard" 200, No. 9 in UK, and No. 15 in Austria. The band's most successful studio album, "Kick" has been certified six times platinum by the RIAA and spawned four US top 10 singles ("New Sensation", "Never Tear Us Apart", "Devil Inside" and "Need You Tonight", the last of which reached the top of the US "Billboard" singles charts). According to "1001 Songs: The Great Songs of All Time and the Artists, Stories and Secrets Behind Them", the single "Need You Tonight" is not lyrically complex; it is Hutchence's performance where "he sings in kittenish whisper, gently drawing back with the incredible lust of a tiger hunting in the night" that makes the song "as sexy and funky as any white rock group has ever been". In September 1988, the band swept the MTV Video Music Awards with the video for "Need You Tonight/Mediate" winning in five categories. In 1989, Hutchence collaborated further with Olsen for the Max Q project, and was joined by members of Olsen's previous groups including Whirlywirld, No and Orchestra of Skin and Bone. They released a self-titled album and three singles, "Way of the World", "Sometimes" and "Monday Night by Satellite". Max Q disbanded in 1990. "Max Q" showed Hutchence exploring the darker side of his music and, with Olsen, he created "one of the most innovative dance music albums of the decade". Hutchence wrote most of the music and provided "an extraordinary performance ... it was one of the most significant statements Hutchence was to make". In 1990, Hutchence portrayed nineteenth-century Romantic poet Percy Shelley in Roger Corman's film version of "Frankenstein Unbound", which was based on a science fiction time travel story of the same name written by Brian Aldiss. In 1990, INXS released "X", which spawned more international hits such as "Suicide Blonde" and "Disappear" (both Top 10 in the US). "Suicide Blonde" peaked at No. 2 in Australia and No. 11 in the UK. Hutchence, with Andrew Farriss, wrote the song after Hutchence's then-girlfriend, Kylie Minogue, used the phrase "suicide blonde" to describe her look during her 1989 film, "The Delinquents"; the film depicted Minogue in a platinum blonde wig. Hutchence won the 'Best International Artist' at the 1991 BRIT Awards with INXS winning the related group award. Hutchence provided vocals for pub rockers Noiseworks' album, "Love Versus Money" (1991). "Welcome to Wherever You Are" was released by INXS in August 1992. It received good critical reviews and went to No. 1 in the UK. Hutchence and INXS faced reduced commercial success with "Full Moon, Dirty Hearts", especially in the U.S. The band took time off to rest and be with their families, while Hutchence remained in the public eye through his romances. He commenced work on a self-titled solo album in the mid-1990s. After a period of inactivity and releases that received lukewarm reviews, INXS recorded the band's 10th official album, "Elegantly Wasted", in 1996. Hutchence was a baritone. In 2013, News.com.au ranked Hutchence fourth in a list of the 15 greatest Australian singers of all time. Billboard described Hutchence as "charismatic," with a "seductive purr and [a] lithe, magnetic stage presence." 
Paul Donoughue of ABC.net.au wrote that Hutchence had "a phenomenal voice — moody, sexual, and dynamic, able to shift effortlessly from fragile to cocksure." Reviewing an INXS concert, Dave Simpson of "The Guardian" wrote, "Watching Hutchence, hair flailing, crotch thrusting, a mischievous smile forever creeping across his leathery face, I realised that here was a man born to be onstage, living and loving every minute, an explosion of sexual energy". Hutchence biographer Toby Creswell asserted that "Hutchence was, without question, one of the truly great frontmen — he expressed the music in a dynamic way that few others could." According to "People", Hutchence's "public brawls and onetime open drug use led London tabloids to dub him the 'wild man of rock.'" He was romantically linked to Kylie Minogue, Belinda Carlisle, Helena Christensen, and Kym Wilson. In August 1992, Helena Christensen and Hutchence were walking late at night on a street in Copenhagen after drinking heavily when he refused to move for a taxi. The taxi driver then assaulted him, causing him to fall backwards and hit his head on the roadway. Hutchence suffered a fractured skull in the altercation. Hutchence did not immediately seek medical assistance for the injury, instead waiting several days before seeing a doctor. As a result, his fractured skull left him with an almost complete loss of the sense of smell and significant loss of taste. This injury led to periods of depression and increased levels of aggression; he had not fully recovered after two weeks in a Copenhagen hospital. According to INXS bandmate Beers, Hutchence pulled a knife and threatened to kill him during the 1993 recording of "Full Moon, Dirty Hearts" on the isle of Capri. Beers said: "Over those six weeks, Michael threatened or physically confronted nearly every member of the band." In the mid-1990s, Hutchence became romantically involved with Paula Yates. He had met her in 1985, during an interview for her program, "The Tube". Yates interviewed him again in 1994 for her "Big Breakfast" show, and their affair was soon uncovered by the British press. At the time, Yates was married to The Boomtown Rats' lead singer and Live Aid organiser Bob Geldof. Media scrutiny was intense, and Hutchence assaulted a photographer who had followed them. Yates' separation from Geldof in February 1995 sparked a public and at times bitter custody battle over their daughters. Yates and Geldof divorced in May 1996. On 22 July 1996, Yates gave birth to Hutchence's daughter, Heavenly Hiraani Tiger Lily Hutchence. In September 1996, Yates and Hutchence made headlines when they were arrested for suspicion of drug possession after the family nanny reportedly found a small amount of opium in a shoebox underneath their bed. The case was later dropped due to lack of evidence. Hutchence and INXS went on a world tour to support the April 1997 release of "Elegantly Wasted". The final 20th anniversary tour was to occur in Australia in November and December. During the tour, Paula Yates planned to visit Hutchence with their daughter and Yates's three children, but Bob Geldof had taken legal action to prevent the visit. On the morning of 22 November 1997, Hutchence, aged 37, was found dead in Room 524 at the Ritz-Carlton hotel in Double Bay, Sydney. Actress Kym Wilson was the last person to see Hutchence alive, after partying with him in his hotel room prior to his death. 
Geldof and Yates each gave police statements on the phone calls they exchanged with Hutchence on the morning of his death; however, they did not volunteer their phone records. Yates's statement on 26 November indicated that she had informed Hutchence of the Geldof girls' custody hearing being adjourned until 17 December, which meant that Yates would not be able to bring Tiger and the Geldof girls to Australia for a visit as previously intended. According to Yates, Hutchence "was frightened and couldn't stand a minute more without his baby... [he] was terribly upset and he said, 'I don't know how I'll live without seeing Tiger'". Yates indicated that Hutchence said he was going to phone Geldof "to let the girls come to Australia". Geldof's police statements and evidence to the coroner indicated that Geldof did receive a call from Hutchence, who was "hectoring and abusive and threatening" during their phone conversation. The occupant in the room next to Hutchence's heard a loud male voice and swearing at about 5 am; the coroner was satisfied that this was Hutchence arguing with Geldof. At 9:54 am on 22 November, Hutchence spoke with a former girlfriend, Michèle Bennett; according to Bennett, Hutchence was crying, sounded upset, and told her he needed to see her. Bennett arrived at his hotel room door at about 10:40 am, but there was no response. Hutchence's body was discovered by a hotel maid at 11:50 am. Police reported that Hutchence was found "in a kneeling position facing the door. He had used his snakeskin belt to tie a knot on the automatic door closure at the top of the door, and had strained his head forward into the loop so hard that the buckle had broken." On 6 February 1998, after an autopsy and coronial inquest, New South Wales State Coroner, Derrick Hand, presented his report. The report ruled that Hutchence's death was suicide while depressed and under the influence of alcohol and other drugs. "An analysis report of Hutchence's blood [indicated] the presence of alcohol, cocaine, Prozac and prescription drugs." In producing his coroner's report, Hand had specifically considered the suggestions of accidental death (coupled with the fact that Hutchence left no suicide note), but had discounted them based on substantial evidence presented to the contrary. In a 1999 interview on "60 Minutes" (and in a documentary film on Channel 4), Yates claimed that Hutchence's death might have resulted from autoerotic asphyxiation; this claim contradicted her previous statements to police investigators and the coroner. On 27 November 1997, Hutchence's funeral was held at St Andrew's Cathedral, Sydney. His casket was carried out of the cathedral by members of INXS and by his younger brother, Rhett; "Never Tear Us Apart" was played in the background. Nick Cave, a friend of Hutchence, performed his 1997 song "Into My Arms" during the funeral and requested that television cameras be switched off. Rhett claimed in his 2004 book, "Total XS", that on the previous day at the funeral parlour, Yates had put a gram of heroin into Hutchence's pocket. Following Hutchence's death, INXS continued recording and performing until 2012. According to the Recording Industry Association of America (RIAA), INXS has sold 30 million units in the United States alone, making them the highest-selling Australian music act in the United States behind AC/DC. INXS has sold over 60 million records worldwide. INXS was inducted into the ARIA Hall of Fame in 2001. 
Hutchence's solo album, "Michael Hutchence", was released in October 1999. He had started on the album in 1995, recording songs in between INXS sessions; he had last worked on it three days prior to his death. The last song he recorded was "Possibilities". The album includes "Slide Away", a duet with U2's Bono; Bono's vocals were recorded after Hutchence's death. The 1999 movie "Limp" includes a cameo by Hutchence. On 18 June 2000, Patricia Glassop and Tina Schorr released their book, "Just a Man: The Real Michael Hutchence", which has been described as "an odd biography ... [that] combines the basic facts of Hutchence's early life ... with an almost too-intimate view of the authors' feelings". Paula Yates died on 17 September 2000 of an accidental heroin overdose; she was discovered in the presence of Hutchence's then-four-year-old daughter, Tiger. Soon after Yates's death, Geldof assumed foster custody of Tiger so that she could be brought up with her three older half-sisters, Fifi, Peaches and Pixie. In 2007, Tiger was adopted by Bob Geldof, the father of her half-sisters. As of 2019, Tiger's legal name is Heavenly Hiraani Tiger Lily Hutchence Geldof. On 12 December 2002, Hutchence's father, Kelland, died of cancer in Sydney. Kelland had helped create and maintain a memorial website for his son. On 20 August 2005, Melbourne's "The Age" reported on the disposition of Hutchence's estate and assets, estimated at between $10 million and $20 million but containing virtually nothing. The remainder of his estate had reportedly been sold off or swallowed in legal fees. In July 2009, Hutchence's mother, Patricia Glassop, protested that Geldof had prevented access to her granddaughter for three years. Glassop died on 21 September 2010. A documentary about Hutchence entitled "Michael Hutchence: The Last Rockstar" aired in 2017. In 2019, "Mystify: Michael Hutchence", another documentary about Hutchence's life, was released.
https://en.wikipedia.org/wiki?curid=20269
Motorola 68000 The Motorola 68000 ("sixty-eight-thousand"; also called the m68k or Motorola 68k, "sixty-eight-kay") is a 16/32-bit CISC microprocessor, introduced in 1979 by Motorola Semiconductor Products Sector. The design implements a 32-bit instruction set, with 32-bit registers and a 32-bit internal data bus. The address bus is 24 bits wide and does not use memory segmentation, which made it popular with programmers. Internally, it uses a 16-bit data ALU and two additional 16-bit ALUs used mostly for addresses, and has a 16-bit external data bus. For this reason, Motorola referred to it as a 16/32-bit processor. As one of the first widely available processors with a 32-bit instruction set, and running at relatively high speeds for the era, the 68k was a popular design through the 1980s. It was widely used in a new generation of personal computers with graphical user interfaces, including the Apple Macintosh, Commodore Amiga, Atari ST and many others. It competed primarily against the Intel 8088, found in the IBM PC, which it easily outperformed. The 68k and 8088 pushed other designs, like the Zilog Z8000 and others, into niche markets, and made Motorola a major player in the CPU space. The 68k was soon expanded with additional family members, implementing full 32-bit ALUs as part of the growing Motorola 68000 series. The original 68k is generally software forward-compatible with the rest of the line despite being limited to a 16-bit wide external bus. After 40 years in production, the 68000 architecture is still in use. Motorola's first widely produced CPU was the Motorola 6800. Although a capable design, it was eclipsed by more powerful designs, such as the Zilog Z80, and less powerful but faster designs, such as the MOS 6502. As the sales prospects of the 6800 dimmed, Motorola began a totally new design to replace it. This became the Motorola Advanced Computer System on Silicon project, or MACSS, begun in 1976. The MACSS aimed to develop an entirely new architecture without backward compatibility with the 6800. It ultimately does retain a bus protocol compatibility mode for existing 6800 peripheral devices, and a version with an 8-bit data bus was produced. However, the designers mainly focused on the future, or forward compatibility, which gives the 68000 design a head start against later 32-bit instruction set architectures. For instance, the CPU registers are 32 bits wide, though few self-contained structures in the processor itself operate on 32 bits at a time. The MACSS team drew heavily on the influence of minicomputer processor design, such as the PDP-11 and VAX systems, which are similarly microcode-based. In the mid-1970s, the 8-bit microprocessor manufacturers raced to introduce the 16-bit generation. National Semiconductor had been first with its IMP-16 and PACE processors in 1973–1975, but these had issues with speed. Intel had worked on their advanced 16/32-bit Intel iAPX 432 (alias 8800) since 1975 and their Intel 8086 since 1976 (it was introduced in 1978 but became widespread in the form of the almost identical 8088 in the IBM PC a few years later). Arriving late to the 16-bit arena afforded the new processor more transistors (roughly 40,000 active versus 20,000 active in the 8086), 32-bit macroinstructions, and acclaimed general ease of use. The original MC68000 was fabricated using an HMOS process with a 3.5 µm feature size. The chip was formally introduced in September 1979; initial samples were released in February 1980, with production chips available over the counter in November.
Initial speed grades are 4, 6, and 8 MHz. 10 MHz chips became available during 1981, and 12.5 MHz chips by June 1982. The 16.67 MHz "12F" version of the MC68000, the fastest version of the original HMOS chip, was not produced until the late 1980s. IBM considered the 68000 for the IBM PC but chose the Intel 8088 because the 68000 was not ready; Walden C. Rhines wrote that thus "Motorola, with its superior technology, lost the single most important design contest of the last 50 years". The 68k instruction set is particularly well suited to implement Unix, and the 68000 and its successors became the dominant CPUs for Unix-based workstations including Sun workstations and Apollo/Domain workstations. The 68000 also is used for mass-market computers such as the Apple Lisa, Macintosh, Amiga, and Atari ST. The 68000 is used in Microsoft Xenix systems, as well as an early NetWare Unix-based Server. The 68000 is used in the first generation of desktop laser printers, including the original Apple Inc. LaserWriter and the HP LaserJet. In 1982, the 68000 received a minor update to its ISA to support virtual memory and to conform to the Popek and Goldberg virtualization requirements. The updated chip is called the 68010. It also adds a new "loop mode" which speeds up small loops, and increases overall performance by about 10% at the same clock speeds. A further extended version, which exposes 31 bits of the address bus, was also produced in small quantities as the 68012. To support lower-cost systems and control applications with smaller memory sizes, Motorola introduced the 8-bit compatible MC68008, also in 1982. This is a 68000 with an 8-bit data bus and a smaller (20-bit) address bus. After 1982, Motorola devoted more attention to the 68020 and 88000 projects. Several other companies were second-source manufacturers of the HMOS 68000. These included Hitachi (HD68000), who shrank the feature size to 2.7 µm for their 12.5 MHz version, Mostek (MK68000), Rockwell (R68000), Signetics (SCN68000), Thomson/SGS-Thomson (originally EF68000 and later TS68000), and Toshiba (TMP68000). Toshiba was also a second-source maker of the CMOS 68HC000 (TMP68HC000). Encrypted variants of the 68000, being the Hitachi FD1089 and FD1094, store decryption keys for opcodes and opcode data in battery-backed memory and were used in certain Sega arcade systems including System 16 to prevent piracy and illegal bootleg games. The 68HC000, the first CMOS version of the 68000, was designed by Hitachi and jointly introduced in 1985. Motorola's version was called the MC68HC000, while Hitachi's was the HD68HC000. The 68HC000 was eventually offered at speeds of 8–20 MHz. Except for using CMOS circuitry, it behaved identically to the HMOS MC68000, but the change to CMOS greatly reduced its power consumption. The original HMOS MC68000 consumed around 1.35 watts at an ambient temperature of 25 °C, regardless of clock speed, while the MC68HC000 consumed only 0.13 watts at 8 MHz and 0.38 watts at 20 MHz. (Unlike CMOS circuits, HMOS still draws power when idle, so power consumption varies little with clock rate.) Apple selected the 68HC000 for use in the Macintosh Portable. Motorola replaced the MC68008 with the MC68HC001 in 1990. This chip resembled the 68HC000 in most respects, but its data bus could operate in either 16-bit or 8-bit mode, depending on the value of an input pin at reset. Thus, like the 68008, it could be used in systems with cheaper 8-bit memories. 
The later evolution of the 68000 focused on more modern embedded control applications and on-chip peripherals. The 68EC000 chip and SCM68000 core removed the M6800 peripheral bus, and excluded the MOVE from SR instruction from user mode programs, making the 68EC000 and 68SEC000 the only 68000 CPUs not 100% object code compatible with previous 68000 CPUs when run in User Mode. When run in Supervisor Mode, there was no difference. In 1996, Motorola updated the standalone core with fully static circuitry, drawing only 2 µW in low-power mode, calling it the MC68SEC000. Motorola ceased production of the HMOS MC68000 and MC68008 in 1996, but its spin-off company Freescale Semiconductor was still producing the MC68HC000, MC68HC001, MC68EC000, and MC68SEC000, as well as the MC68302 and MC68306 microcontrollers and later versions of the DragonBall family. The 68000's architectural descendants, the 680x0, CPU32, and Coldfire families, were also still in production. More recently, with the Sendai fab closure, all 68HC000, 68020, 68030, and 68882 parts have been discontinued, leaving only the 68SEC000 in production. After being succeeded by "true" 32-bit microprocessors, the 68000 was used as the core of many microcontrollers. In 1989, Motorola introduced the MC68302 communications processor. At its introduction, the 68000 was first used in high-priced systems, including multiuser microcomputers like the WICAT 150, early Alpha Microsystems computers, the Sage II / IV, and the Tandy TRS-80 Model 16, among others; single-user workstations such as Hewlett-Packard's HP 9000 Series 200 systems, the first Apollo/Domain systems, Sun Microsystems' Sun-1, and the Corvus Concept; and graphics terminals like Digital Equipment Corporation's VAXstation 100 and Silicon Graphics' IRIS 1000 and 1200. Unix systems rapidly moved to the more capable later generations of the 68k line, which remained popular in that market throughout the 1980s. By the mid-1980s, falling production cost made the 68000 viable for use in personal and home computers, starting with the Apple Lisa and Macintosh, and followed by the Commodore Amiga, Atari ST, and Sharp X68000. On the other hand, the Sinclair QL microcomputer was the most commercially important utilisation of the 68008, along with its derivatives, such as the ICL One Per Desk business terminal. Helix Systems (in Missouri, United States) designed an extension to the SWTPC SS-50 bus, the SS-64, and produced systems built around the 68008 processor. While the adoption of RISC and x86 displaced the 68000 series as a desktop/workstation CPU, the processor found substantial use in embedded applications. By the early 1980s, quantities of 68000 CPUs could be purchased for less than 30 USD per part. Video game manufacturers used the 68000 as the backbone of many arcade games and home game consoles: Atari's "Food Fight", from 1982, was one of the first 68000-based arcade games. Others included Sega's System 16, Capcom's CP System and CPS-2, and SNK's Neo Geo. By the late 1980s, the 68000 was inexpensive enough to power home game consoles, such as Sega's Mega Drive/Genesis console and also the Sega CD attachment for it (a Sega CD system has three CPUs, two of them 68000s). The 1993 multi-processor Atari Jaguar console used a 68000 as a support chip, although some developers used it as the primary processor due to familiarity. The 1994 multi-processor Sega Saturn console used the 68000 as a sound co-processor (much as the Mega Drive/Genesis uses the Z80 as a co-processor for sound and/or other purposes).
Certain arcade games (such as "Steel Gunner" and others based on Namco System 2) use a dual 68000 CPU configuration, and systems with a triple 68000 CPU configuration also exist (such as "Galaxy Force" and others based on the Sega Y Board), along with a quad 68000 CPU configuration, which has been used by Jaleco (one 68000 for sound has a lower clock rate compared to the other 68000 CPUs) for games such as "Big Run" and "Cisco Heat"; a fifth 68000 (at a different clock rate compared to the other 68000 CPUs) was additionally used in the Jaleco arcade game "Wild Pilot" for I/O processing. The 68000 also saw great success as an embedded controller. As early as 1981, laser printers such as the Imagen Imprint-10 were controlled by external boards equipped with the 68000. The first HP LaserJet, introduced in 1984, came with a built-in 8 MHz 68000. Other printer manufacturers adopted the 68000, including Apple with its introduction of the LaserWriter in 1985, the first PostScript laser printer. The 68000 continued to be widely used in printers throughout the rest of the 1980s, persisting well into the 1990s in low-end printers. The 68000 also saw success in the field of industrial control systems. Among the systems that benefited from having a 68000 or derivative as their microprocessor were families of programmable logic controllers (PLCs) manufactured by Allen-Bradley, Texas Instruments and subsequently, following the acquisition of that division of TI, by Siemens. Users of such systems do not accept product obsolescence at the same rate as domestic users, and it is entirely likely that despite having been installed over 20 years ago, many 68000-based controllers will continue in reliable service well into the 21st century. In a number of digital oscilloscopes from the 1980s, the 68000 has been used as a waveform display processor; some models, including the LeCroy 9400/9400A, also use the 68000 as a waveform math processor (including addition, subtraction, multiplication, and division of two waveforms/references/waveform memories), and some digital oscilloscopes using the 68000 (including the 9400/9400A) can also perform fast Fourier transform functions on a waveform. The 683XX microcontrollers, based on the 68000 architecture, are used in networking and telecom equipment, television set-top boxes, laboratory and medical instruments, and even handheld calculators. The MC68302 and its derivatives have been used in many telecom products from Cisco, 3Com, Ascend, Marconi, Cyclades and others. Past models of the Palm PDAs and the Handspring Visor used the DragonBall, a derivative of the 68000. AlphaSmart uses the DragonBall family in later versions of its portable word processors. Texas Instruments uses the 68000 in its high-end graphing calculators, the TI-89 and TI-92 series and Voyage 200. Early versions of these used a specialized microcontroller with a static 68EC000 core; later versions use a standard MC68SEC000 processor. A modified version of the 68000 formed the basis of the IBM XT/370 hardware emulator of the System 370 processor. The 68000 has a 24-bit external address bus, with two byte-select signals replacing A0. These 24 lines can therefore address 16 MB of physical memory with byte resolution. Address storage and computation use 32 bits internally; however, the 8 high-order address bits are ignored due to the physical lack of device pins. This allows it to run software written for a logically flat 32-bit address space, while accessing only a 24-bit physical address space.
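A minimal sketch, in Python, of the addressing consequence described above: only the low 24 bits of an internally 32-bit address reach the pins, so pointers that differ only in the high byte select the same physical location (the flat-memory framing here is a simplification):

    MASK_24BIT = 0xFFFFFF   # only 24 address lines leave the chip

    def bus_address(addr32):
        """Map a 32-bit internal address to the 24-bit address actually driven on the bus."""
        return addr32 & MASK_24BIT

    p, q = 0x00012345, 0x80012345                     # differ only in the ignored high byte
    print(hex(bus_address(p)), hex(bus_address(q)))   # both 0x12345
    print(bus_address(p) == bus_address(q))           # True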
Motorola's intent with the internal 32-bit address space was forward compatibility, making it feasible to write 68000 software that would take full advantage of later 32-bit implementations of the 68000 instruction set. However, this did not prevent programmers from writing forward incompatible software. "24-bit" software that discarded the upper address byte, or used it for purposes other than addressing, could fail on 32-bit 68000 implementations. For example, early (pre-7.0) versions of Apple's Mac OS used the high byte of memory-block master pointers to hold flags such as "locked" and "purgeable". Later versions of the OS moved the flags to a nearby location, and Apple began shipping computers which had "32-bit clean" ROMs beginning with the release of the 1989 Mac IIci. The 68000 family stores multi-byte integers in memory in big-endian order. The CPU has eight 32-bit general-purpose data registers (D0-D7), and eight address registers (A0-A7). The last address register is the stack pointer, and assemblers accept the label SP as equivalent to A7. This was a good number of registers at the time in many ways. It was small enough to allow the 68000 to respond quickly to interrupts (even in the worst case where all 8 data registers D0–D7 and 7 address registers A0–A6 needed to be saved, 15 registers in total), and yet large enough to make most calculations fast, because they could be done entirely within the processor without keeping any partial results in memory. (Note that an exception routine in supervisor mode can also save the user stack pointer A7, which would total 8 address registers. However, the dual stack pointer (A7 and supervisor-mode A7') design of the 68000 makes this normally unnecessary, except when a task switch is performed in a multitasking system.) Having two types of registers was mildly annoying at times, but not hard to use in practice. Reportedly, it allowed the CPU designers to achieve a higher degree of parallelism, by using an auxiliary execution unit for the address registers. The 68000 has a 16-bit status register. The upper 8 bits is the system byte, and modification of it is privileged. The lower 8 bits is the user byte, also known as the condition code register (CCR), and modification of it is not privileged. The 68000 comparison, arithmetic, and logic operations modify condition codes to record their results for use by later conditional jumps. The condition code bits are "zero" (Z), "carry" (C), "overflow" (V), "extend" (X), and "negative" (N). The "extend" (X) flag deserves special mention, because it is separate from the carry flag. This permits the extra bit from arithmetic, logic, and shift operations to be separated from the carry for flow-of-control and linkage. The designers attempted to make the assembly language orthogonal. That is, instructions are divided into operations and address modes, and almost all address modes are available for almost all instructions. There are 56 instructions and a minimum instruction size of 16 bits. Many instructions and addressing modes are longer to include additional address or mode bits. The CPU, and later the whole family, implements two levels of privilege. User mode gives access to everything except privileged instructions such as interrupt level controls. Supervisor privilege gives access to everything. An interrupt always becomes supervisory. The supervisor bit is stored in the status register, and is visible to user programs. 
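The separate "extend" flag exists so that multi-word arithmetic can be chained even while intervening instructions use the ordinary carry. A rough Python model of that chaining (ADD setting X from the carry out, ADDX adding it back in; semantics simplified from the description above):

    MASK32 = 0xFFFFFFFF

    def add32(a, b, x=0):
        """Add two 32-bit words plus an extend bit; return (32-bit result, new extend bit)."""
        total = a + b + x
        return total & MASK32, 1 if total > MASK32 else 0

    # A 64-bit addition built from two 32-bit adds, low words first.
    lo, x = add32(0xFFFFFFFF, 0x00000001)      # like ADD: carry out sets X
    hi, _ = add32(0x00000001, 0x00000002, x)   # like ADDX: X is added back in
    print(hex((hi << 32) | lo))                # 0x400000000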
An advantage of this system is that the supervisor level has a separate stack pointer. This permits a multitasking system to use very small stacks for tasks, because the designers do not have to allocate the memory required to hold the stack frames of a maximum stack-up of interrupts. The CPU recognizes seven interrupt levels. Levels 1 through 5 are strictly prioritized. That is, a higher-numbered interrupt can always interrupt a lower-numbered interrupt. In the status register, a privileged instruction allows one to set the current minimum interrupt level, blocking lower or equal priority interrupts. For example, if the interrupt level in the status register is set to 3, higher levels from 4 to 7 can cause an exception. Level 7 is a level-triggered non-maskable interrupt (NMI). Level 1 can be interrupted by any higher level. Level 0 means no interrupt. The level is stored in the status register, and is visible to user-level programs. Hardware interrupts are signalled to the CPU using three inputs that encode the highest pending interrupt priority. A separate encoder is usually required to encode the interrupts, though for systems that do not require more than three hardware interrupts it is possible to connect the interrupt signals directly to the encoded inputs at the cost of additional software complexity. The interrupt controller can be as simple as a 74LS148 priority encoder, or may be part of a VLSI peripheral chip such as the MC68901 Multi-Function Peripheral (used in the Atari ST range of computers and Sharp X68000), which also provided a UART, timer, and parallel I/O. The "exception table" (the table of interrupt vector addresses) is fixed at addresses 0 through 1023, permitting 256 32-bit vectors. The first vector (RESET) consists of two vectors, namely the starting stack address and the starting code address. Vectors 3 through 15 are used to report various errors: bus error, address error, illegal instruction, zero division, CHK and CHK2 vector, privilege violation (to block privilege escalation), and some reserved vectors that became line 1010 emulator, line 1111 emulator, and hardware breakpoint. Vector 24 starts the real interrupts: spurious interrupt (no hardware acknowledgement), and level 1 through level 7 autovectors, then the 16 TRAP vectors, then some more reserved vectors, then the user-defined vectors. Since at a minimum the starting code address vector must always be valid on reset, systems commonly included some nonvolatile memory (e.g. ROM) starting at address zero to contain the vectors and bootstrap code. However, for a general-purpose system it is desirable for the operating system to be able to change the vectors at runtime. This was often accomplished by either pointing the vectors in ROM to a jump table in RAM, or through use of bank switching to allow the ROM to be replaced by RAM at runtime. The 68000 does not meet the Popek and Goldberg virtualization requirements for full processor virtualization because it has a single unprivileged instruction "MOVE from SR", which allows user-mode software read-only access to a small amount of privileged state. The 68EC000 and 68SEC000, which are later derivatives of the 68000, do meet the requirements, however, as the MOVE from SR instruction is privileged on them. The same change was introduced on the 68010 and later CPUs. The 68000 is also unable to easily support virtual memory, which requires the ability to trap and recover from a failed memory access.
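A small Python restatement of the masking rule described above (level 7 treated as non-maskable, other levels taken only when they exceed the mask; this is a simplification of the full exception-priority logic):

    def interrupt_taken(pending_level, mask):
        """Is a pending interrupt of the given level accepted under the current mask?"""
        return pending_level == 7 or pending_level > mask

    print(interrupt_taken(3, 3))   # False: blocked, equal to the mask
    print(interrupt_taken(4, 3))   # True: above the mask
    print(interrupt_taken(7, 7))   # True: level 7 is effectively non-maskable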
The 68000 does provide a bus error exception which can be used to trap, but it does not save enough processor state to resume the faulted instruction once the operating system has handled the exception. Several companies did succeed in making 68000-based Unix workstations with virtual memory that worked by using two 68000 chips running in parallel on different phased clocks. When the "leading" 68000 encountered a bad memory access, extra hardware would interrupt the "main" 68000 to prevent it from also encountering the bad memory access. This interrupt routine would handle the virtual memory functions and restart the "leading" 68000 in the correct state to continue properly synchronized operation when the "main" 68000 returned from the interrupt. These problems were fixed in the next major revision of the 68k architecture, with the release of the MC68010. The Bus Error and Address Error exceptions push a large amount of internal state onto the supervisor stack in order to facilitate recovery, and the MOVE from SR instruction was made privileged. A new unprivileged "MOVE from CCR" instruction is provided for use in its place by user mode software; an operating system can trap and emulate user-mode MOVE from SR instructions if desired. The standard addressing modes include register, memory, and immediate forms, plus access to the status register and, in later models, other special registers. Most instructions have dot-letter suffixes, permitting operations to occur on 8-bit bytes (".b"), 16-bit words (".w"), and 32-bit longs (".l"). Like many CPUs of its era, the cycle timing of some instructions varied depending on the source operand(s). For example, the unsigned multiply instruction takes (38+2n) clock cycles to complete, where 'n' is equal to the number of bits set in the operand. To create a function that took a fixed cycle count required the addition of extra code after the multiply instruction, which would typically consume extra cycles for each bit that was not set in the original multiplication operand. Most instructions are dyadic, that is, the operation has a source and a destination, and the destination is changed. The 68EC000 is a low-cost version of the 68000 with a slightly different pinout, designed for embedded controller applications. The 68EC000 can have either an 8-bit or a 16-bit data bus, switchable at reset. The processors are available in a variety of speeds including 8 and 16 MHz configurations, producing 2,100 and 4,376 Dhrystones respectively. These processors have no floating-point unit, and it is difficult to implement an FPU coprocessor (MC68881/2) with one because the EC series lacks necessary coprocessor instructions. The 68EC000 was used as a controller in many audio applications, including Ensoniq musical instruments and sound cards, where it was part of the MIDI synthesizer. On Ensoniq sound boards, the controller provided several advantages compared to competitors without a CPU on board. The processor allowed the board to be configured to perform various audio tasks, such as MPU-401 MIDI synthesis or MT-32 emulation, without the use of a TSR program. This improved software compatibility, lowered CPU usage, and eliminated host system memory usage. The Motorola 68EC000 core was later used in the m68k-based DragonBall processors from Motorola/Freescale. It was also used as a sound controller in the Sega Saturn game console and as a controller for the HP JetDirect Ethernet controller boards for the mid-1990s LaserJet printers.
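The data-dependent timing of the unsigned multiply can be expressed directly from the (38 + 2n) rule quoted above; the sketch below (in Python, with a 16-bit source operand assumed) simply counts set bits:

    def mulu_cycles(source_operand):
        """Cycle count for MULU under the (38 + 2n) rule, n = set bits in the 16-bit source."""
        return 38 + 2 * bin(source_operand & 0xFFFF).count("1")

    print(mulu_cycles(0x0000))   # 38: no bits set
    print(mulu_cycles(0x8001))   # 42: two bits set
    print(mulu_cycles(0xFFFF))   # 70: all sixteen bits set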
A commonly cited 68000 assembly example is a subroutine that copies a null-terminated string of 8-bit characters to a destination string, converting all alphabetic characters to lower case. The subroutine establishes a call frame using register A6 as the frame pointer; this kind of calling convention supports reentrant and recursive code and is typically used by languages like C and C++. The subroutine then retrieves the parameters passed to it (the source and destination string pointers) from the stack. It then loops, reading an ASCII character (a single byte) from the source string, checking whether it is a capital alphabetic character and, if so, converting it into a lower-case character, otherwise leaving it as it is, then writing the character into the destination string. Finally, it checks whether the character was a null character; if not, it repeats the loop, otherwise it restores the previous stack frame (and the A6 register) and returns. The string pointers (registers A0 and A1) are auto-incremented in each iteration of the loop. In contrast, a stand-alone function can be written that runs even on the most restrictive version of AMS for the TI-89 series of calculators: it is kernel-independent, looks up no values in tables, files or libraries when executing, makes no system calls, needs no exception processing, uses a minimal number of registers, and does not need to save any. It is valid for historical Julian dates from 1 March 1 AD, or for Gregorian ones. In fewer than two dozen operations it calculates a day number compatible with ISO 8601 when called with the year, month and day stored at their corresponding locations.
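The original 68000 assembly for the calculator routine is not reproduced here, but the kind of day-number arithmetic it performs can be sketched in Python using a standard civil-calendar formulation (the epoch and naming are my choices, not taken from the routine):

    def days_from_civil(y, m, d):
        """Days since 1970-01-01 in the proleptic Gregorian calendar."""
        y -= m <= 2                                    # shift so the year starts in March
        era = y // 400                                 # Python floor division handles negatives
        yoe = y - era * 400                            # year of era, 0..399
        doy = (153 * (m + (9 if m <= 2 else -3)) + 2) // 5 + d - 1
        doe = yoe * 365 + yoe // 4 - yoe // 100 + doy  # day of era
        return era * 146097 + doe - 719468

    print(days_from_civil(1970, 1, 1))   # 0
    print(days_from_civil(2000, 3, 1))   # 11017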
https://en.wikipedia.org/wiki?curid=20270
Minicomputer A minicomputer, or colloquially mini, is a class of smaller computers that was developed in the mid-1960s and sold for much less than mainframe and mid-size computers from IBM and its direct competitors. In a 1970 survey, "The New York Times" suggested a consensus definition of a minicomputer as a machine costing less than (), with an input-output device such as a teleprinter and at least four thousand words of memory, that is capable of running programs in a higher level language, such as Fortran or BASIC. The class formed a distinct group with its own software architectures and operating systems. Minis were designed for control, instrumentation, human interaction, and communication switching as distinct from calculation and record keeping. Many were sold indirectly to original equipment manufacturers (OEMs) for final end use application. During the two decade lifetime of the minicomputer class (1965–1985), almost 100 companies formed and only a half dozen remained. When single-chip CPU microprocessors appeared, beginning with the Intel 4004 in 1971, the term "minicomputer" came to mean a machine that lies in the middle range of the computing spectrum, in between the smallest mainframe computers and the microcomputers. The term "minicomputer" is little used today; the contemporary term for this class of system is "midrange computer", such as the higher-end SPARC, Power ISA and Itanium-based systems from Oracle, IBM and Hewlett-Packard. The term "minicomputer" developed in the 1960s to describe the smaller computers that became possible with the use of transistors and core memory technologies, minimal instructions sets and less expensive peripherals such as the ubiquitous Teletype Model 33 ASR. They usually took up one or a few 19-inch rack cabinets, compared with the large mainframes that could fill a room. The definition of minicomputer is vague with the consequence that there are a number of candidates for the "first" minicomputer, ranging from the CDC 160 circa 1960 to the DEC PDP-8 circa 1965. An early and highly successful minicomputer was Digital Equipment Corporation's (DEC) 12-bit PDP-8, which was built using discrete transistors and cost from upwards when launched in 1964. Later versions of the PDP-8 took advantage of small-scale integrated circuits. The important precursors of the PDP-8 include the PDP-5, LINC, the TX-0, the TX-2, and the PDP-1. DEC gave rise to a number of minicomputer companies along Massachusetts Route 128, including Data General, Wang Laboratories, Apollo Computer, and Prime Computer. Minicomputers were also known as midrange computers. They grew to have relatively high processing power and capacity. They were used in manufacturing process control, telephone switching and to control laboratory equipment. In the 1970s, they were the hardware that was used to launch the computer-aided design (CAD) industry and other similar industries where a smaller dedicated system was needed. The 7400 series of TTL integrated circuits started appearing in minicomputers in the late 1960s. The 74181 arithmetic logic unit (ALU) was commonly used in the CPU data paths. Each 74181 had a bus width of four bits, hence the popularity of "bit-slice" architecture. Some scientific computers, such as the Nicolet 1080, would use the 7400 series in groups of five ICs (parallel) for their uncommon twenty bits architecture. The 7400 series offered data-selectors, multiplexers, three-state buffers, memories, etc. 
in dual in-line packages with one-tenth inch spacing, making major system components and architecture evident to the naked eye. Starting in the 1980s, many minicomputers used VLSI circuits. At the launch of the MITS Altair 8800 in 1975, "Radio Electronics" magazine referred to the system as a "minicomputer", although the term microcomputer soon became usual for personal computers based on single-chip microprocessors. At the time, microcomputers were 8-bit single-user, relatively simple machines running simple program-launcher operating systems like CP/M or MS-DOS, while minis were much more powerful systems that ran full multi-user, multitasking operating systems, such as VMS and Unix, and although the classical mini was a 16-bit computer, the emerging higher performance superminis were 32-bit. The decline of the minis happened due to the lower cost of microprocessor-based hardware, the emergence of inexpensive and easily deployable local area network systems, the emergence of the 68020, 80286 and the 80386 microprocessors, and the desire of end-users to be less reliant on inflexible minicomputer manufacturers and IT departments or "data centers". The result was that minicomputers and computer terminals were replaced by networked workstations, file servers and PCs in some installations, beginning in the latter half of the 1980s. During the 1990s, the change from minicomputers to inexpensive PC networks was cemented by the development of several versions of Unix and Unix-like systems that ran on the Intel x86 microprocessor architecture, including Solaris, Linux, FreeBSD, NetBSD and OpenBSD. Also, the Microsoft Windows series of operating systems, beginning with Windows NT, now included server versions that supported preemptive multitasking and other features required for servers. As microprocessors have become more powerful, the CPUs built up from multiple components – once the distinguishing feature differentiating mainframes and midrange systems from microcomputers – have become increasingly obsolete, even in the largest mainframe computers. Digital Equipment Corporation (DEC) was once the leading minicomputer manufacturer, at one time the second-largest computer company after IBM. But as the minicomputer declined in the face of generic Unix servers and Intel-based PCs, not only DEC, but almost every other minicomputer company including Data General, Prime, Computervision, Honeywell and Wang Laboratories, many based in New England (hence the end of the Massachusetts Miracle), also collapsed or merged. DEC was sold to Compaq in 1998, while Data General was acquired by EMC Corporation. Today only a few proprietary minicomputer architectures survive. The IBM System/38 operating system, which introduced many advanced concepts, lives on with IBM's AS/400. Realising the importance of the myriad lines of 'legacy code' (programs) written, 'AS' stands for 'Application System'. Great efforts were made by IBM to enable programs originally written for the System/34 and System/36 to be moved to the AS/400. The AS/400 was replaced by the iSeries, which was subsequently replaced by the System i. In 2008, the System i was replaced by the IBM Power Systems. By contrast, competing proprietary computing architectures from the early 1980s, such as DEC's VAX, Wang VS and Hewlett Packard's HP 3000 have long been discontinued without a compatible upgrade path. OpenVMS runs HP Alpha and Intel IA-64 (Itanium) CPU architectures. 
Tandem Computers, which specialized in reliable large-scale computing, was acquired by Compaq, and a few years afterward the combined entity merged with Hewlett-Packard. The NSK-based NonStop product line was re-ported from MIPS processors to Itanium-based processors branded as "HP Integrity NonStop Servers". As in the earlier migration from stack machines to MIPS microprocessors, all customer software was carried forward without source changes. Integrity NonStop continues to be HP's answer for the extreme scaling needs of its very largest customers. The NSK operating system, now termed NonStop OS, continues as the base software environment for the NonStop Servers and has been extended to include support for Java and integration with popular development tools like Visual Studio and Eclipse. A variety of companies emerged that built turnkey systems around minicomputers with specialized software and, in many cases, custom peripherals that addressed specialized problems such as computer-aided design, computer-aided manufacturing, process control, manufacturing resource planning, and so on. Many if not most minicomputers were sold through these original equipment manufacturers and value-added resellers. Several pioneering computer companies first built minicomputers, such as DEC, Data General, and Hewlett-Packard (HP), which now refers to its HP 3000 minicomputers as "servers" rather than "minicomputers". Although today's PCs and servers are clearly microcomputers physically, architecturally their CPUs and operating systems have developed largely by integrating features from minicomputers. In the software context, the relatively simple OSs for early microcomputers were usually inspired by minicomputer OSs (such as CP/M's similarity to Digital's single-user OS/8 and RT-11 and to its multi-user RSTS time-sharing system). Also, the multi-user OSs of today are often either inspired by, or directly descended from, minicomputer OSs: UNIX was originally a minicomputer OS, while the Windows NT kernel, the foundation for all current versions of Microsoft Windows, borrowed design ideas liberally from VMS. Many of the first generation of PC programmers were educated on minicomputer systems.
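The bit-slice construction mentioned above can be illustrated with a short simulation. The following Python sketch is purely illustrative: it is not a model of the 74181's actual function set, and the function names and the two operations shown are assumptions made for the example. It shows the essential idea that a narrow ALU slice with a carry-in and a carry-out can be replicated and chained to build a wider data path.

```python
# Illustrative sketch only: how four 4-bit ALU slices can be chained into a
# 16-bit data path, the idea behind "bit-slice" CPU designs. This is not a
# model of the 74181's real function set; the operations and names are
# invented for the example.

def alu_slice_4bit(a, b, carry_in, op):
    """One 4-bit slice: a and b are 0..15; returns (4-bit result, carry out)."""
    if op == "ADD":
        total = a + b + carry_in
        return total & 0xF, (total >> 4) & 1
    if op == "AND":
        return a & b, 0                     # logic operations produce no carry
    raise ValueError("unsupported operation")

def alu_16bit(a, b, op, carry_in=0):
    """Chain four slices, least-significant nibble first, rippling the carry."""
    result, carry = 0, carry_in
    for i in range(4):                      # slice 0 handles bits 0-3, and so on
        a_nibble = (a >> (4 * i)) & 0xF
        b_nibble = (b >> (4 * i)) & 0xF
        r, carry = alu_slice_4bit(a_nibble, b_nibble, carry, op)
        result |= r << (4 * i)
    return result, carry

if __name__ == "__main__":
    print(hex(alu_16bit(0x1234, 0x0FFF, "ADD")[0]))   # 0x2233
    print(alu_16bit(0xFFFF, 0x0001, "ADD"))           # (0, 1): carry out of bit 15
    print(hex(alu_16bit(0xF0F0, 0xFF00, "AND")[0]))   # 0xf000
```

In real minicomputer designs the carry chain was often handled by a dedicated look-ahead carry generator such as the 74182 rather than the simple ripple carry shown here, to keep the critical path fast.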
https://en.wikipedia.org/wiki?curid=20272
Mechanized infantry Mechanized infantry (or mechanised infantry) are infantry units equipped with armored personnel carriers (APCs) or infantry fighting vehicles (IFVs) for transport and combat (see also mechanized force). Mechanized infantry is distinguished from motorized infantry in that its vehicles provide a degree of protection from hostile fire, as opposed to the "soft-skinned" wheeled vehicles (trucks or jeeps) used by motorized infantry. Most APCs and IFVs are fully tracked or are all-wheel drive vehicles (6×6 or 8×8), for mobility across rough ground. Some nations distinguish between mechanized and armored (or armoured) infantry, designating troops carried by APCs as mechanized and those in IFVs as armored. The support weapons for mechanized infantry are also provided with motorized transport, or they are built directly into combat vehicles to keep pace with the mechanized infantry in combat. For units equipped with most types of APC or any type of IFV, fire support weapons, such as machine guns, autocannons, small-bore direct-fire howitzers, and anti-tank guided missiles, are often mounted directly on the infantry's own transport vehicles. Compared with "light" truck-mobile infantry, mechanized infantry can maintain rapid tactical movement and, if mounted in IFVs, have more integral firepower. Mechanized infantry requires more combat supplies (ammunition and especially fuel) and ordnance supplies (spare vehicle components), and a comparatively larger proportion of its manpower is needed to crew and maintain the vehicles. For example, most APCs mount a section of seven or eight infantrymen but have a crew of two. Most IFVs carry only six or seven infantry but require a crew of three. To be effective in the field, mechanized units also require many mechanics, with specialized maintenance and recovery vehicles and equipment. Some of the first mechanized infantry were German assault teams mounted on A7V tanks during World War I. The vehicles were exceptionally large so that they could carry sizeable assault teams, and they regularly carried infantry on board in addition to their already large crews, who were trained as stormtroopers. All machine-gun-armed A7V tanks carried two small flamethrowers for their dismounts to use, and an A7V would often carry a second officer to lead the assault team. During the Battle of St. Quentin, A7Vs were accompanied by 20 stormtroopers from the Rohr Assault Battalion, but it is unclear whether they were acting as dismounts or accompanying the tanks on foot. During the battle, tank crews were reported to have dismounted and attacked enemy positions with grenades and flamethrowers on numerous occasions. Another example of this method of fighting is the capture of Villers-Bretonneux, in which A7Vs suppressed the defenders with machine-gun fire while assault teams dismounted and attacked them with grenades. Towards the end of World War I, all the armies involved were faced with the problem of maintaining the momentum of an attack. Tanks, artillery, or infiltration tactics could all be used to break through an enemy defense, but almost all offensives launched in 1918 ground to a halt after a few days. The following infantry quickly became exhausted, and artillery, supplies and fresh formations could not be brought forward over the battlefields quickly enough to maintain pressure on the regrouping enemy. It was widely acknowledged that cavalry was too vulnerable to be used on most European battlefields, but many armies continued to deploy it. 
Motorized infantry could maintain rapid movement, but their trucks required either a good road network or firm open terrain, such as desert. They were unable to traverse a battlefield obstructed by craters, barbed wire, and trenches. Tracked or all-wheel drive vehicles were to be the solution. Following the war, development of mechanized forces remained largely theoretical for some time, but many nations began rearming in the 1930s. The British Army had established an Experimental Mechanized Force in 1927, but it failed to pursue that line because of budget constraints and the prior need to garrison the frontiers of the British Empire. Although some proponents of mobile warfare, such as J. F. C. Fuller, advocated building "tank fleets", others, such as Heinz Guderian in Germany, Adna R. Chaffee Jr. in the United States, and Mikhail Tukhachevsky in the Soviet Union, recognized that tank units required close support from infantry and other arms and that such supporting arms needed to maintain the same pace as the tanks. As the Germans rearmed in the 1930s, they equipped some infantry units in their new "Panzer" divisions with the half-track Sd.Kfz. 251, which could keep up with tanks on most terrain. The French Army also created "light mechanized" ("légère mécanisée") divisions in which some of the infantry units possessed small tracked carriers. Together with the motorization of the other infantry and support units, this gave both armies highly mobile combined-arms formations. The German doctrine was to use them to exploit breakthroughs in "Blitzkrieg" offensives, whereas the French envisaged them being used to shift reserves rapidly in a defensive battle. As World War II progressed, most major armies integrated tanks or assault guns with mechanized infantry, as well as other supporting arms such as artillery and engineers, as combined-arms units. Allied armored formations included a mechanized infantry element for combined-arms teamwork. For example, US armored divisions had a balance of three battalions each of tanks, armored infantry, and self-propelled artillery. The US armored infantry was fully equipped with M2 and M3 half-tracks. In the British and Commonwealth armies, "Type A" armoured brigades, intended for independent operations or to form part of armored divisions, had a "motor infantry" battalion mounted in Bren Carriers or, later, in lend-lease half-tracks. "Type B" brigades lacked a motor infantry component and were subordinated to infantry formations. The Canadian Army and, subsequently, the British Army used expedients such as the Kangaroo APC, usually for specific operations rather than to create permanent mechanized infantry formations. The first such operation was Operation Totalize in the Battle of Normandy, which failed to achieve its ultimate objectives but showed that mechanized infantry could incur far fewer casualties than dismounted troops in set-piece operations. The German Army, having introduced mechanized infantry in its "Panzer" divisions, later named such units "Panzergrenadier" units. In the middle of the war, it created entire mechanized infantry divisions and named them Panzergrenadier divisions. Because the German economy could not produce adequate numbers of its half-track APC, barely a quarter to a third of the infantry in Panzer or Panzergrenadier divisions were mechanized, except in a few favored formations. The rest were moved by truck. 
However, most German reconnaissance units in such formations were also primarily mechanized infantry and could undertake infantry missions when needed. The Allies generally used jeeps, armored cars, or light tanks for reconnaissance. The Red Army began the war while still in the process of reorganizing its armored and mechanized formations, most of which were destroyed during the first months of the German invasion of the Soviet Union. About a year later, the Soviets recreated division-sized mechanized infantry units, termed mechanized corps, usually with one tank brigade and three mechanized infantry brigades, with motorized supporting arms. They were generally used in the exploitation phase of offensives, as part of the prewar Soviet concept of deep operations. The Soviet Army also created several cavalry mechanized groups in which tanks, mechanized infantry and horsed cavalry were mixed. They, too, were used in the exploitation and pursuit phases of offensives. Red Army mechanized infantry were generally carried on tanks or trucks, with only a few dedicated lend-lease half-track APCs. The New Zealand Army ultimately fielded a division of roughly similar composition to a Soviet mechanized corps, which fought in the Italian Campaign, but it had little scope for mobile operations until near the end of the war. The Romanian Army fielded a mixed assortment of vehicles: 126 French-designed Renault UE Chenillettes, which were licence-built locally; 34 captured and refurbished Soviet armored tractors; 27 German-made armored half-tracks of the Sd.Kfz. 250 and Sd.Kfz. 251 types; over 200 Czechoslovak Tatra, Praga and Skoda trucks (the Tatra trucks were a model built specifically for the Romanian Army); and 300 German Horch 901 4x4 field cars. Sd.Kfz. 8 and Sd.Kfz. 9 half-tracks were also acquired, as well as nine vehicles of the Sd.Kfz. 10 type and 100 RSO/01 fully tracked tractors. The Romanians also produced five prototypes of an indigenous artillery tractor. In the postwar era, during the early years of the Cold War, the Soviet Army and NATO further developed the equipment and doctrine for mechanized infantry. With the exception of airborne formations, the Red Army mechanized all its infantry formations. Initially, wheeled APCs such as the BTR-152 were used, some of which lacked overhead protection and were therefore vulnerable to artillery fire. Even so, these vehicles gave the Soviet Army greater strategic flexibility because of the large land area and long borders of the Soviet Union and its allies in the Warsaw Pact. The US Army established the basic configuration of the tracked APC with the M75 and M59 before it adopted the lighter M113, which could be carried by the Lockheed C-130 Hercules and other transport aircraft. The vehicle gave infantry the same mobility as tanks but with much less effective armor protection (it did, however, offer nuclear, biological, and chemical protection). In the Vietnam War, the M113 was often fitted with extra armament and used as an "ad hoc" infantry fighting vehicle. Early operations by the Army of the Republic of Vietnam using the vehicle showed that troops were far more effective while mounted in the vehicles than when they dismounted, and American doctrine subsequently emphasized mounted tactics. The Americans ultimately deployed a mechanized brigade and ten mechanized battalions to Vietnam. Even more important for future developments was the Soviet BMP-1, which was the first true IFV. 
Its introduction prompted the development of similar vehicles in Western armies, such as the West German Marder and the American M2 Bradley. Unlike the APC, which was intended merely to transport the infantry from place to place under armor, the IFV possessed heavy firepower that could support the infantry in attack or defense. Many IFVs were also equipped with firing ports from which the infantry could fire their weapons from inside the vehicle, but these were generally not successful and have been dropped from modern IFVs. Soviet organization led to different tactics between the "light" and the "heavy" varieties of mechanized infantry. In the Soviet Army, a first-line "motor rifle" division from the 1970s onward usually had two regiments equipped with wheeled BTR-60 APCs and one with the tracked BMP-1 IFV. The "light" regiments were intended to make dismounted attacks on the division's flanks, while the BMP-equipped "heavy" regiment remained mounted and supported the division's tank regiment on the main axis of advance. Both types of infantry regiment were still officially titled "motor rifle" units. A line of development in the Soviet Armed Forces from the 1980s was the provision of specialized IFVs for use by the Russian Airborne Troops. The first of these was the BMD-1, which had the same firepower as the BMP-1 but could be carried in, or even parachuted from, standard Soviet transport aircraft. That made airborne formations into mechanized infantry at the cost of reduced "bayonet" strength, as the BMD could carry only three or at most four paratroopers in addition to its three-man crew. They were used in that role in the Soviet invasion of Afghanistan in 1979. At present, almost all infantry units from industrialized nations are provided with some type of motor transport. Infantry units equipped with IFVs rather than lighter vehicles are commonly designated as "heavy", indicating more combat power but also more costly long-range transportation requirements. In Operation Desert Shield, during the buildup phase of the First Gulf War, the U.S. Army was concerned about the lack of mobility, protection and firepower offered by existing rapid-deployment (i.e., airborne) formations, and also about the slowness of deploying regular armored units. The experience led the U.S. Army to form combat brigades based on the Stryker wheeled IFV. In the British Army, "heavy" units equipped with the Warrior IFV are described as "armoured infantry", and units with the Bulldog APC as "mechanised infantry". This convention is becoming widespread; for example, the French Army has "motorisées" units equipped with the wheeled VAB and "mécanisées" (armoured) units with the tracked AMX-10P. The transport and other logistic requirements have led many armies to adopt wheeled APCs when their existing stocks of tracked APCs require replacement. An example is the Canadian Army, which has used the LAV III wheeled IFV in fighting in Afghanistan. The Italian, Spanish and Swedish armies are adopting (and exporting) new indigenously produced tracked IFVs. The Swedish CV90 IFV in particular has been adopted by several armies. A recent trend seen in the Israel Defense Forces and the Armed Forces of the Russian Federation is the development and introduction of exceptionally well-armored APCs (HAPCs), such as the IDF Achzarit, which are converted from obsolete main battle tanks (such as the Soviet T-55). 
Such vehicles are usually expedients, and lack of space prevents the armament of an IFV from being carried in addition to an infantry section or squad. In the Russian Army, such vehicles were introduced for fighting in urban areas, where the risk from short-range infantry anti-tank weapons, such as the RPG-7, is highest, after Russian tank and motor rifle units suffered heavy losses fighting Chechen troops in Grozny during the First Chechen War in 1995. Many APCs and IFVs currently under development are intended for rapid deployment by aircraft. New technologies that promise reductions in weight, such as electric drive, may be incorporated. However, facing a similar threat in post-invasion Iraq to that which prompted the Russians to convert tanks into APCs, the occupying armies have found it necessary to apply extra armor to existing APCs and IFVs, which adds to the overall size and weight. Some of the latest designs (such as the German Puma) are intended to allow a light, basic model of the vehicle, which is air-transportable, to be fitted in the field with additional protection, thereby ensuring both strategic flexibility and survivability. It is generally accepted that single weapon system types are much less effective without the support of the full combined-arms team; the pre-World War II notion of "tank fleets" has proven to be as unsound as the World War I idea of unsupported infantry attacks. Though many nations' armored formations included an organic mechanized infantry component at the start of World War II, the proportion of mechanized infantry in such combined-arms formations was increased by most armies as the war progressed. The lesson was re-learned, first by the Pakistani Army in the 1965 war with India, in which the nation fielded two different types of armored division: one that was almost exclusively armor (the 1st) and another that was more balanced (the 6th). The latter division showed itself to be far more combat-capable than the former. Having achieved spectacular successes on the offensive with tank-heavy formations during the Six-Day War, the Israel Defense Forces found in the Yom Kippur War of 1973 that a doctrine relying primarily on tanks and aircraft was inadequate. As a makeshift remedy, paratroopers were provided with motorized transport and used as mechanized infantry in coordination with the armor.
https://en.wikipedia.org/wiki?curid=20282
Martin Fowler (software engineer) Martin Fowler (born 1963) is a British software developer, author and international public speaker on software development, specialising in object-oriented analysis and design, UML, patterns, and agile software development methodologies, including extreme programming. His 1999 book "Refactoring" popularised the practice of code refactoring. In 2004 he introduced the Presentation Model (PM), an architectural pattern. Fowler was born and grew up in Walsall, England, where he attended Queen Mary's Grammar School for his secondary education. He graduated from University College London in 1986. In 1994 he moved to the United States, where he lives near Boston, Massachusetts, in the suburb of Melrose. Fowler started working with software in the early 1980s. After leaving university in 1986, he worked in software development for Coopers & Lybrand until 1991. In 2000 he joined ThoughtWorks, a systems integration and consulting company, where he serves as Chief Scientist. Fowler has written nine books on the topic of software development (see "Publications"). He is a member of the Agile Alliance and helped create the Manifesto for Agile Software Development in 2001, along with 16 fellow signatories. He maintains a "bliki", a mix of blog and wiki. He popularised the term Dependency Injection as a form of Inversion of Control; a brief illustrative sketch of the idea follows below.
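The following Python sketch is illustrative only; the class and method names are invented for the example and are not drawn from Fowler's writing. It contrasts a class that constructs its own collaborator with one that receives it from outside (constructor injection), which is the sense in which dependency injection is a form of inversion of control: control over how a dependency is obtained moves out of the class that uses it.

```python
# Minimal illustrative sketch of constructor-based dependency injection.
# All class and method names here are invented for the example.

class SmtpMailer:
    def send(self, to, body):
        print(f"SMTP mail to {to}: {body}")

class ConsoleMailer:
    def send(self, to, body):
        print(f"[console] to {to}: {body}")

# Without injection: the service builds its own collaborator, so swapping it
# (for a test, or for a different mail transport) means editing this class.
class SignupServiceHardWired:
    def __init__(self):
        self.mailer = SmtpMailer()

    def register(self, email):
        self.mailer.send(email, "Welcome!")

# With injection: the collaborator is supplied from outside, inverting control
# over how the dependency is obtained.
class SignupService:
    def __init__(self, mailer):
        self.mailer = mailer

    def register(self, email):
        self.mailer.send(email, "Welcome!")

if __name__ == "__main__":
    SignupService(SmtpMailer()).register("a@example.com")
    SignupService(ConsoleMailer()).register("a@example.com")   # e.g. in a test
```

Because the collaborator is supplied by the caller, a test can pass in a stand-in such as ConsoleMailer without modifying SignupService itself.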
https://en.wikipedia.org/wiki?curid=20286