Junk science
The expression junk science is used to describe scientific data, research, or analysis considered by the person using the phrase to be spurious or fraudulent. The concept is often invoked in political and legal contexts where facts and scientific results have a great amount of weight in making a determination. It usually conveys a pejorative connotation that the research has been untowardly driven by political, ideological, financial, or otherwise unscientific motives.
The concept was popularized in the 1990s in relation to expert testimony in civil litigation. More recently, invoking the concept has been a tactic to criticize research on the harmful environmental or public health effects of corporate activities, and occasionally in response to such criticism. Author Dan Agin, in his book "Junk Science", harshly criticized those who deny the basic premise of global warming.
In some contexts, junk science is counterposed to the "sound science" or "solid science" that favors one's own point of view.
The phrase "junk science" appears to have been in use prior to 1985. A 1985 United States Department of Justice report by the Tort Policy Working Group noted:
The use of such invalid scientific evidence (commonly referred to as 'junk science') has resulted in findings of causation which simply cannot be justified or understood from the standpoint of the current state of credible scientific or medical knowledge.
In 1989, the climate scientist Jerry Mahlman (Director of the Geophysical Fluid Dynamics Laboratory) characterized the theory that global warming was due to solar variation (presented in "Scientific Perspectives on the Greenhouse Problem" by Frederick Seitz et al.) as "noisy junk science."
Peter W. Huber popularized the term with respect to litigation in his 1991 book "Galileo's Revenge: Junk Science in the Courtroom." The book has been cited in over 100 legal textbooks and references; as a consequence, some sources cite Huber as the first to coin the term. By 1997, the term had entered the legal lexicon as seen in an opinion by Supreme Court of the United States Justice John Paul Stevens:
An example of 'junk science' that should be excluded under the Daubert standard as too unreliable would be the testimony of a phrenologist who would purport to prove a defendant's future dangerousness based on the contours of the defendant's skull.
Lower courts have subsequently set guidelines for identifying junk science, such as the 2005 opinion of United States Court of Appeals for the Seventh Circuit Judge Easterbrook:
Positive reports about magnetic water treatment are not replicable; this plus the lack of a physical explanation for any effects are hallmarks of junk science.
As the subtitle of Huber's book, "Junk Science in the Courtroom", suggests, his emphasis was on the use or misuse of expert testimony in civil litigation. One prominent example cited in the book was litigation over casual contact in the spread of AIDS. A California school district sought to prevent a young boy with AIDS, Ryan Thomas, from attending kindergarten. The school district produced an expert witness, Steven Armentrout, who testified that a possibility existed that AIDS could be transmitted to schoolmates through yet undiscovered "vectors." However, five experts testified on behalf of Thomas that AIDS is not transmitted through casual contact, and the court affirmed the "solid science" (as Huber called it) and rejected Armentrout's argument.
In 1999, Paul Ehrlich and others advocated public policies to improve the dissemination of valid environmental scientific knowledge and discourage junk science:
The Intergovernmental Panel on Climate Change reports offer an antidote to junk science by articulating the current consensus on the prospects for climate change, by outlining the extent of the uncertainties, and by describing the potential benefits and costs of policies to address climate change.
In a 2003 study about changes in environmental activism regarding the Crown of the Continent Ecosystem, Pedynowski noted that junk science can undermine the credibility of science over a much broader scale because misrepresentation by special interests casts doubt on more defensible claims and undermines the credibility of all research.
In his 2006 book "Junk Science", Dan Agin emphasized two main causes of junk science: fraud, and ignorance. In the first case, Agin discussed falsified results in the development of organic transistors:
As far as understanding junk science is concerned, the important aspect is that both Bell Laboratories and the international physics community were fooled until someone noticed that noise records published by Jan Hendrik Schön in several papers were identical—which means physically impossible.
In the second case, he cites an example that demonstrates ignorance of statistical principles in the lay press:
Since no such proof is possible [that genetically modified food is harmless], the article in The New York Times was what is called a "bad rap" against the U.S. Department of Agriculture—a bad rap based on a junk-science belief that it's possible to prove a null hypothesis.
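The statistical point in that passage can be made concrete with a small simulation. The following Python sketch is purely illustrative (the coin example, sample size, and bias are assumptions, not drawn from Agin's book): a coin with a real but small bias usually survives a significance test of fairness, so failing to reject the null hypothesis is not proof that the null is true.

```python
import math
import random

def fair_coin_p_value(heads: int, n: int) -> float:
    """Two-sided z-test p-value against H0: P(heads) = 0.5."""
    se = math.sqrt(0.25 / n)                      # standard error under H0
    z = abs(heads / n - 0.5) / se
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # normal CDF at |z|
    return 2 * (1 - cdf)

random.seed(1)
n, true_p, trials = 100, 0.52, 1000               # a real but small bias (hypothetical)

# How often does a 100-toss study detect the bias at the 5% level?
rejections = sum(
    fair_coin_p_value(sum(random.random() < true_p for _ in range(n)), n) < 0.05
    for _ in range(trials)
)
print(f"H0 rejected in {rejections}/{trials} studies of a genuinely biased coin")
# Most studies fail to reject fairness even though H0 is false: absence
# of statistical significance is not evidence that the null is true.
```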
Agin asks the reader to step back from the rhetoric, as "how things are labeled does not make a science junk science." In its place, he offers that junk science is ultimately motivated by the desire to hide undesirable truths from the public.
John Stauber and Sheldon Rampton of "PR Watch" say the concept of junk science has come to be invoked in attempts to dismiss scientific findings that stand in the way of short-term corporate profits. In their book "Trust Us, We're Experts" (2001), they write that industries have launched multimillion-dollar campaigns to position certain theories as junk science in the popular mind, often failing to employ the scientific method themselves. For example, the tobacco industry has described research demonstrating the harmful effects of smoking and second-hand smoke as junk science, through the vehicle of various astroturf groups.
Theories more favorable to corporate activities are portrayed as "sound science." Past examples where "sound science" was invoked include the research into the toxicity of Alar, which was heavily criticized by antiregulatory advocates, and Herbert Needleman's research into low-dose lead poisoning. Needleman was accused of fraud and personally attacked.
Fox News commentator Steven Milloy often denigrates credible scientific research on topics like global warming, ozone depletion, and passive smoking as "junk science". The credibility of Milloy's website junkscience.com was questioned by Paul D. Thacker, a writer for "The New Republic", in the wake of evidence that Milloy had received funding from Philip Morris, RJR Tobacco, and Exxon Mobil. Thacker also noted that Milloy was receiving almost $100,000 a year in consulting fees from Philip Morris while he criticized the evidence regarding the hazards of second-hand smoke as junk science. Following the publication of this article, the Cato Institute, which had hosted the junkscience.com site, ceased its association with the site and removed Milloy from its list of adjunct scholars.
Tobacco industry documents reveal that Philip Morris executives conceived of the "Whitecoat Project" in the 1980s as a response to emerging scientific data on the harmfulness of second-hand smoke. The goal of the Whitecoat Project, as conceived by Philip Morris and other tobacco companies, was to use ostensibly independent "scientific consultants" to spread doubt in the public mind about scientific data through invoking concepts like junk science. According to epidemiologist David Michaels, Assistant Secretary of Energy for Environment, Safety, and Health in the Clinton Administration, the tobacco industry invented the "sound science" movement in the 1980s as part of their campaign against the regulation of second-hand smoke.
David Michaels has argued that, since the U.S. Supreme Court ruling in "Daubert v. Merrell Dow Pharmaceuticals, Inc.", lay judges have become "gatekeepers" of scientific testimony and, as a result, respected scientists have sometimes been unable to provide testimony so that corporate defendants are "increasingly emboldened" to accuse adversaries of practicing junk science.
In 1995, the Union of Concerned Scientists launched the Sound Science Initiative, a national network of scientists committed to debunking junk science through media outreach, lobbying, and developing joint strategies to participate in town meetings or public hearings. In its newsletter on Science and Technology in Congress, the American Association for the Advancement of Science also recognized the need for increased understanding between scientists and lawmakers: "Although most individuals would agree that sound science is preferable to junk science, fewer recognize what makes a scientific study 'good' or 'bad'." The American Dietetic Association, criticizing marketing claims made for food products, has created a list of "Ten Red Flags of Junk Science."
Individual scientists have also invoked the concept.
American psychologist Paul Cameron has been designated by the Southern Poverty Law Center as an anti-gay extremist and a purveyor of "junk science". Cameron's research has been heavily criticized for unscientific methods and distortions which attempt to link homosexuality with pedophilia. In one instance, Cameron claimed that lesbians are 300 times more likely to get into car accidents. The SPLC states his work has been continually cited in some sections of the media despite being discredited. Cameron was expelled from the American Psychological Association in 1983.
Source: https://en.wikipedia.org/wiki?curid=15627
James Cook
James Cook (7 November 1728 – 14 February 1779) was a British explorer, navigator, cartographer, and captain in the British Royal Navy. He made detailed maps of Newfoundland prior to making three voyages to the Pacific Ocean, during which he achieved the first recorded European contact with the eastern coastline of Australia and the Hawaiian Islands, and the first recorded circumnavigation of New Zealand.
Cook joined the British merchant navy as a teenager and the Royal Navy in 1755. He saw action in the Seven Years' War and subsequently surveyed and mapped much of the entrance to the Saint Lawrence River during the siege of Quebec, which brought him to the attention of the Admiralty and Royal Society. This acclaim came at a crucial moment in his career and the direction of British overseas exploration, and led to his commission in 1766 as commander of HM Bark "Endeavour" for the first of three Pacific voyages.
In these voyages, Cook sailed thousands of miles across largely uncharted areas of the globe. He mapped lands from New Zealand to Hawaii in the Pacific Ocean in greater detail and on a scale not previously charted by Western explorers. He surveyed and named features, and recorded islands and coastlines on European maps for the first time. He displayed a combination of seamanship, superior surveying and cartographic skills, physical courage, and an ability to lead men in adverse conditions.
Cook was attacked and killed in 1779 during his third exploratory voyage in the Pacific while attempting to kidnap the Island of Hawaii's monarch, Kalaniʻōpuʻu, in order to reclaim a cutter stolen from one of his ships. He left a legacy of scientific and geographical knowledge that influenced his successors well into the 20th century, and numerous memorials worldwide have been dedicated to him.
James Cook was born on 7 November 1728 (N.S.) in the village of Marton in Yorkshire and baptised on 14 November (N.S.) in the parish church of St Cuthbert, where his name can be seen in the church register. He was the second of eight children of James Cook (1693–1779), a Scottish farm labourer from Ednam in Roxburghshire, and his locally born wife, Grace Pace (1702–1765), from Thornaby-on-Tees. In 1736, his family moved to Airey Holme farm at Great Ayton, where his father's employer, Thomas Skottowe, paid for him to attend the local school. In 1741, after five years' schooling, he began work for his father, who had been promoted to farm manager. Despite not being formally educated, he became capable in mathematics, astronomy, and charting by the time of his "Endeavour" voyage. For leisure, he would climb a nearby hill, Roseberry Topping, enjoying the opportunity for solitude. Cooks' Cottage, his parents' last home, which he is likely to have visited, is now in Melbourne, Australia, having been moved from England and reassembled, brick by brick, in 1934.
In 1745, when he was 16, Cook moved to the fishing village of Staithes, to be apprenticed as a shop boy to grocer and haberdasher William Sanderson. Historians have speculated that this is where Cook first felt the lure of the sea while gazing out of the shop window.
After 18 months, not proving suited for shop work, Cook travelled to the nearby port town of Whitby to be introduced to friends of Sanderson's, John and Henry Walker. The Walkers, who were Quakers, were prominent local ship-owners in the coal trade. Their house is now the Captain Cook Memorial Museum. Cook was taken on as a merchant navy apprentice in their small fleet of vessels, plying coal along the English coast. His first assignment was aboard the collier "Freelove", and he spent several years on this and various other coasters, sailing between the Tyne and London. As part of his apprenticeship, Cook applied himself to the study of algebra, geometry, trigonometry, navigation and astronomy—all skills he would need one day to command his own ship.
His three-year apprenticeship completed, Cook began working on trading ships in the Baltic Sea. After passing his examinations in 1752, he soon progressed through the merchant navy ranks, starting with his promotion in that year to mate aboard the collier brig "Friendship". In 1755, within a month of being offered command of this vessel, he volunteered for service in the Royal Navy, when Britain was re-arming for what was to become the Seven Years' War. Despite the need to start back at the bottom of the naval hierarchy, Cook realised his career would advance more quickly in military service and entered the Navy at Wapping on 17 June 1755.
Cook married Elizabeth Batts, the daughter of Samuel Batts, keeper of the Bell Inn in Wapping and one of his mentors, on 21 December 1762 at St Margaret's Church, Barking, Essex. The couple had six children: James (1763–1794), Nathaniel (1764–1780, lost aboard HMS "Thunderer", which foundered with all hands in a hurricane in the West Indies), Elizabeth (1767–1771), Joseph (1768–1768), George (1772–1772) and Hugh (1776–1793, who died of scarlet fever while a student at Christ's College, Cambridge). When not at sea, Cook lived in the East End of London. He attended St Paul's Church, Shadwell, where his son James was baptised. Cook has no direct descendants—all of his children died before having children of their own.
Cook's first posting was with HMS "Eagle", serving as able seaman and master's mate under Captain Joseph Hamar for his first year aboard, and Captain Hugh Palliser thereafter. In October and November 1755, he took part in "Eagle"'s capture of one French warship and the sinking of another, following which he was promoted to boatswain in addition to his other duties. His first temporary command was in March 1756, when he was briefly master of "Cruizer", a small cutter attached to "Eagle" while on patrol.
In June 1757, Cook formally passed his master's examinations at Trinity House, Deptford, qualifying him to navigate and handle a ship of the King's fleet. He then joined the frigate HMS "Solebay" as master under Captain Robert Craig.
During the Seven Years' War, Cook served in North America as master aboard the fourth-rate Navy vessel HMS "Pembroke". With others in "Pembroke"'s crew, he took part in the major amphibious assault that captured the Fortress of Louisbourg from the French in 1758, and in the siege of Quebec City in 1759. Throughout his service he demonstrated a talent for surveying and cartography, and was responsible for mapping much of the entrance to the Saint Lawrence River during the siege, thus allowing General Wolfe to make his famous stealth attack during the 1759 Battle of the Plains of Abraham.
Cook's surveying ability was also put to use in mapping the jagged coast of Newfoundland in the 1760s, aboard HMS "Grenville". He surveyed the northwest stretch in 1763 and 1764, the south coast between the Burin Peninsula and Cape Ray in 1765 and 1766, and the west coast in 1767. At this time, Cook employed local pilots to point out the "rocks and hidden dangers" along the south and west coasts. During the 1765 season, four pilots were engaged at a daily pay of 4 shillings each: John Beck for the coast west of "Great St Lawrence", Morgan Snook for Fortune Bay, John Dawson for Connaigre and Hermitage Bay, and John Peck for the "Bay of Despair".
While in Newfoundland, Cook also conducted astronomical observations, in particular of the eclipse of the sun on 5 August 1766. By obtaining an accurate estimate of the times of the start and finish of the eclipse, and comparing these with the timings at a known position in England, it was possible to calculate the longitude of the observation site in Newfoundland. This result was communicated to the Royal Society in 1767.
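The arithmetic behind this method is straightforward: the Earth rotates 15 degrees of longitude per hour, so the difference between the local clock time of the eclipse and its clock time at the reference site in England converts directly to a longitude difference. A minimal Python sketch, using invented timings rather than Cook's recorded values:

```python
# Longitude from simultaneous-event timings, as in Cook's 1766 eclipse
# observation. The numbers below are hypothetical, for illustration only.

DEGREES_PER_HOUR = 360 / 24  # Earth's rotation: 15 degrees per hour

def longitude_west(local_hours: float, reference_hours: float) -> float:
    """Degrees west of the reference meridian (negative means east)."""
    return (reference_hours - local_hours) * DEGREES_PER_HOUR

# Suppose the eclipse began at 08:10 local apparent time, and tables gave
# 11:58 for the same instant at the reference site in England.
local = 8 + 10 / 60
reference = 11 + 58 / 60
print(f"{longitude_west(local, reference):.1f} degrees west")
# -> 57.0 degrees west, roughly the longitude of Newfoundland
```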
His five seasons in Newfoundland produced the first large-scale and accurate maps of the island's coasts and were the first scientific, large scale, hydrographic surveys to use precise triangulation to establish land outlines. They also gave Cook his mastery of practical surveying, achieved under often adverse conditions, and brought him to the attention of the Admiralty and Royal Society at a crucial moment both in his career and in the direction of British overseas discovery. Cook's maps were used into the 20th century, with copies being referenced by those sailing Newfoundland's waters for 200 years.
Following on from his exertions in Newfoundland, Cook wrote that he intended to go not only "farther than any man has been before me, but as far as I think it is possible for a man to go".
On 25 May 1768, the Admiralty commissioned Cook to command a scientific voyage to the Pacific Ocean. The purpose of the voyage was to observe and record the 1769 transit of Venus across the Sun which, when combined with observations from other places, would help to determine the distance of the Earth from the Sun. Cook, at age 39, was promoted to lieutenant to grant him sufficient status to take the command. For its part, the Royal Society agreed that Cook would receive a one hundred guinea gratuity in addition to his Naval pay.
The expedition sailed aboard HMS "Endeavour", departing England on 26 August 1768. Cook and his crew rounded Cape Horn and continued westward across the Pacific, arriving at Tahiti on 13 April 1769, where the observations of the Venus transit were made. However, the result of the observations was not as conclusive or accurate as had been hoped. Once the observations were completed, Cook opened the sealed orders, which were additional instructions from the Admiralty for the second part of his voyage: to search the south Pacific for signs of the postulated rich southern continent of "Terra Australis".
Cook then sailed to New Zealand, taking with him Tupaia, an exceptionally accomplished Tahitian aristocrat and priest who helped guide him through the Polynesian islands. Cook mapped the complete New Zealand coastline, making only some minor errors. He then voyaged west, reaching the southeastern coast of Australia on 19 April 1770, and in doing so his expedition became the first recorded Europeans to have encountered its eastern coastline.
On 23 April, he made his first recorded direct observation of Indigenous Australians at Brush Island near Bawley Point, noting in his journal: "... and were so near the Shore as to distinguish several people upon the Sea beach they appear'd to be of a very dark or black Colour but whether this was the real colour of their skins or the C[l]othes they might have on I know not." On 29 April, Cook and crew made their first landfall on the mainland of the continent at a place now known as the Kurnell Peninsula. Cook originally named the area "Stingray Bay", but later he crossed this out and named it "Botany Bay" after the unique specimens retrieved by the botanists Joseph Banks and Daniel Solander. It is here that Cook made first contact with an Aboriginal people known as the Gweagal.
After his departure from Botany Bay, he continued northwards. He stopped at Bustard Bay (now known as Seventeen Seventy) on 23 May 1770. On 24 May, Cook and Banks and others went ashore. Continuing north, on 11 June a mishap occurred when "Endeavour" ran aground on a shoal of the Great Barrier Reef, and then "nursed into a river mouth on 18 June 1770". The ship was badly damaged, and his voyage was delayed almost seven weeks while repairs were carried out on the beach (near the docks of modern Cooktown, Queensland, at the mouth of the Endeavour River). The voyage then continued and at about midday on 22 August 1770, they reached the northernmost tip of the coast and, without leaving the ship, Cook named it Cape York. Leaving the east coast, Cook turned west and nursed his battered ship through the dangerously shallow waters of Torres Strait. Searching for a vantage point, Cook saw a steep hill on a nearby island from the top of which he hoped to see "a passage into the Indian Seas". Cook named the island Possession Island, where he claimed the entire coastline that he had just explored as British territory. He returned to England via Batavia (modern Jakarta, Indonesia), where many in his crew succumbed to malaria, and then the Cape of Good Hope, arriving at the island of Saint Helena on 30 April 1771. The ship finally returned to England on 12 July 1771, anchoring in The Downs, with Cook going to Deal.
Cook's journals were published upon his return, and he became something of a hero among the scientific community. Among the general public, however, the aristocratic botanist Joseph Banks was a greater hero. Banks even attempted to take command of Cook's second voyage but removed himself from the voyage before it began, and Johann Reinhold Forster and his son Georg Forster were taken on as scientists for the voyage. Cook's son George was born five days before he left for his second voyage.
Shortly after his return from the first voyage, Cook was promoted in August 1771 to the rank of commander. In 1772, he was commissioned to lead another scientific expedition on behalf of the Royal Society, to search for the hypothetical Terra Australis. On his first voyage, Cook had demonstrated by circumnavigating New Zealand that it was not attached to a larger landmass to the south. Although he charted almost the entire eastern coastline of Australia, showing it to be continental in size, the Terra Australis was believed to lie further south. Despite this evidence to the contrary, Alexander Dalrymple and others of the Royal Society still believed that a massive southern continent should exist.
Cook commanded HMS "Resolution" on this voyage, while Tobias Furneaux commanded its companion ship, HMS "Adventure". Cook's expedition circumnavigated the globe at an extreme southern latitude, becoming one of the first to cross the Antarctic Circle on 17 January 1773. In the Antarctic fog, "Resolution" and "Adventure" became separated. Furneaux made his way to New Zealand, where he lost some of his men during an encounter with Māori, and eventually sailed back to Britain, while Cook continued to explore the Antarctic, reaching 71°10'S on 31 January 1774.
Cook almost encountered the mainland of Antarctica but turned towards Tahiti to resupply his ship. He then resumed his southward course in a second fruitless attempt to find the supposed continent. On this leg of the voyage, he brought a young Tahitian named Omai, who proved to be somewhat less knowledgeable about the Pacific than Tupaia had been on the first voyage. On his return voyage to New Zealand in 1774, Cook landed at the Friendly Islands, Easter Island, Norfolk Island, New Caledonia, and Vanuatu.
Before returning to England, Cook made a final sweep across the South Atlantic from Cape Horn and surveyed, mapped, and took possession for Britain of South Georgia, which had been explored by the English merchant Anthony de la Roché in 1675. Cook also discovered and named Clerke Rocks and the South Sandwich Islands ("Sandwich Land"). He then turned north to South Africa and from there continued back to England. His reports upon his return home put to rest the popular myth of Terra Australis.
Cook's second voyage marked a successful employment of Larcum Kendall's K1 copy of John Harrison's H4 marine chronometer, which enabled Cook to calculate his longitudinal position with much greater accuracy. Cook's log was full of praise for this timepiece, which he used to make charts of the southern Pacific Ocean that were so remarkably accurate that copies of them were still in use in the mid-20th century.
Upon his return, Cook was promoted to the rank of post-captain and given an honorary retirement from the Royal Navy, with a posting as an officer of the Greenwich Hospital. He reluctantly accepted, insisting that he be allowed to quit the post if an opportunity for active duty should arise. His fame extended beyond the Admiralty; he was made a Fellow of the Royal Society and awarded the Copley Gold Medal for completing his second voyage without losing a man to scurvy. Nathaniel Dance-Holland painted his portrait; he dined with James Boswell; he was described in the House of Lords as "the first navigator in Europe". But he could not be kept away from the sea. A third voyage was planned, and Cook volunteered to find the Northwest Passage. He travelled to the Pacific and hoped to travel east to the Atlantic, while a simultaneous voyage travelled the opposite route.
On his last voyage, Cook again commanded HMS "Resolution", while Captain Charles Clerke commanded HMS "Discovery". The voyage was ostensibly planned to return the Pacific Islander Omai to Tahiti; its principal goal, however, was to locate a Northwest Passage around the American continent. After dropping Omai at Tahiti, Cook travelled north and in 1778 became the first European to begin formal contact with the Hawaiian Islands. After his initial landfall in January 1778 at Waimea harbour, Kauai, Cook named the archipelago the "Sandwich Islands" after the fourth Earl of Sandwich—the acting First Lord of the Admiralty.
From the Sandwich Islands, Cook sailed north and then northeast to explore the west coast of North America north of the Spanish settlements in Alta California. He sighted the Oregon coast at approximately 44°30′ north latitude, naming Cape Foulweather, after the bad weather which forced his ships south to about 43° north before they could begin their exploration of the coast northward. He unknowingly sailed past the Strait of Juan de Fuca and soon after entered Nootka Sound on Vancouver Island. He anchored near the First Nations village of Yuquot. Cook's two ships remained in Nootka Sound from 29 March to 26 April 1778, in what Cook called Ship Cove, now Resolution Cove, at the south end of Bligh Island. Relations between Cook's crew and the people of Yuquot were cordial but sometimes strained. In trading, the people of Yuquot demanded much more valuable items than the usual trinkets that had been acceptable in Hawaii. Metal objects were much desired, but the lead, pewter, and tin traded at first soon fell into disrepute. The most valuable items which the British received in trade were sea otter pelts. During the stay, the Yuquot "hosts" essentially controlled the trade with the British vessels; the natives usually visited the British vessels at Resolution Cove instead of the British visiting the village of Yuquot at Friendly Cove.
After leaving Nootka Sound, Cook explored and mapped the coast all the way to the Bering Strait, on the way identifying what came to be known as Cook Inlet in Alaska. In a single visit, Cook charted the majority of the North American northwest coastline on world maps for the first time, determined the extent of Alaska, and closed the gaps in Russian (from the west) and Spanish (from the south) exploratory probes of the northern limits of the Pacific.
By the second week of August 1778, Cook was through the Bering Strait, sailing into the Chukchi Sea. He headed northeast up the coast of Alaska until he was blocked by sea ice at a latitude of 70°44′ north. Cook then sailed west to the Siberian coast, and then southeast down the Siberian coast back to the Bering Strait. By early September 1778 he was back in the Bering Sea to begin the trip to the Sandwich (Hawaiian) Islands. He became increasingly frustrated on this voyage and perhaps began to suffer from a stomach ailment; it has been speculated that this led to irrational behaviour towards his crew, such as forcing them to eat walrus meat, which they had pronounced inedible.
Cook returned to Hawaii in 1779. After sailing around the archipelago for some eight weeks, he made landfall at Kealakekua Bay on Hawai'i Island, the largest island in the Hawaiian archipelago. Cook's arrival coincided with the "Makahiki", a Hawaiian harvest festival of worship for the Polynesian god Lono. Coincidentally, the form of Cook's ship, HMS "Resolution", or more particularly the mast formation, sails and rigging, resembled certain significant artefacts that formed part of the season of worship. Similarly, Cook's clockwise route around the island of Hawaii before making landfall resembled the processions that took place in a clockwise direction around the island during the Lono festivals. It has been argued (most extensively by Marshall Sahlins) that such coincidences were the reasons for Cook's (and to a limited extent, his crew's) initial deification by some Hawaiians, who treated Cook as an incarnation of Lono. Though this view was first suggested by members of Cook's expedition, the idea that any Hawaiians understood Cook to be Lono, and the evidence presented in support of it, were challenged in 1992.
After a month's stay, Cook attempted to resume his exploration of the northern Pacific. Shortly after leaving Hawaii Island, however, "Resolution"s foremast broke, so the ships returned to Kealakekua Bay for repairs.
Tensions rose, and a number of quarrels broke out between the Europeans and Hawaiians at Kealakekua Bay. An unknown group of Hawaiians took one of Cook's small boats. On the evening the cutter was taken, the Hawaiians had become "insolent", even in the face of threats to fire upon them. Cook attempted to kidnap and ransom the King of Hawaiʻi, Kalaniʻōpuʻu.
The following day, 14 February 1779, Cook marched through the village to retrieve the king. Cook took the king (aliʻi nui) by his own hand and led him willingly away. One of Kalaniʻōpuʻu's favourite wives, Kanekapolei, and two chiefs approached the group as they were heading to the boats. They pleaded with the king not to go. An old kahuna (priest), chanting rapidly while holding out a coconut, attempted to distract Cook and his men as a large crowd began to form at the shore. The king began to understand that Cook was his enemy. As Cook turned his back to help launch the boats, he was struck on the head with a club by a chief named Kalaimanokahoʻowaha or Kanaʻina (namesake of Charles Kana'ina), and then stabbed to death by one of the king's attendants, Nuaa, as he fell on his face in the surf. The Hawaiians carried his body away towards the back of the town, still visible from the ship through a spyglass. Four marines, Corporal James Thomas, Private Theophilus Hinks, Private Thomas Fatchett and Private John Allen, were also killed, and two others were wounded in the confrontation.
The esteem which the islanders nevertheless held for Cook caused them to retain his body. Following their practice of the time, they prepared his body with funerary rituals usually reserved for the chiefs and highest elders of the society. The body was disembowelled, baked to facilitate removal of the flesh, and the bones were carefully cleaned for preservation as religious icons in a fashion somewhat reminiscent of the treatment of European saints in the Middle Ages. Some of Cook's remains, thus preserved, were eventually returned to his crew for a formal burial at sea.
Clerke assumed leadership of the expedition and made a final attempt to pass through the Bering Strait. He died of tuberculosis on 22 August 1779 and John Gore, a veteran of Cook's first voyage, took command of "Resolution" and of the expedition. James King replaced Gore in command of "Discovery". The expedition returned home, reaching England in October 1780. After their arrival in England, King completed Cook's account of the voyage.
David Samwell, who sailed with Cook on "Resolution", wrote of him: He was a modest man, and rather bashful; of an agreeable lively conversation, sensible and intelligent. In temper he was somewhat hasty, but of a disposition the most friendly, benevolent and humane. His person was above six feet high: and, though a good looking man, he was plain both in dress and appearance. His face was full of expression: his nose extremely well shaped: his eyes which were small and of a brown cast, were quick and piercing; his eyebrows prominent, which gave his countenance altogether an air of austerity.
The Australian Museum acquired its "Cook Collection" in 1894 from the Government of New South Wales. At that time the collection consisted of 115 artefacts collected on Cook's three voyages throughout the Pacific Ocean, during the period 1768–80, along with documents and memorabilia related to these voyages. Many of the ethnographic artefacts were collected at a time of first contact between Pacific Peoples and Europeans. In 1935 most of the documents and memorabilia were transferred to the Mitchell Library in the State Library of New South Wales. The provenance of the collection shows that the objects remained in the hands of Cook's widow Elizabeth Cook, and her descendants, until 1886. In this year John Mackrell, the great-nephew of Isaac Smith, Elizabeth Cook's cousin, organised the display of this collection at the request of the NSW Government at the Colonial and Indian Exhibition in London. In 1887 the London-based Agent-General for the New South Wales Government, Saul Samuel, bought John Mackrell's items and also acquired items belonging to the other relatives Reverend Canon Frederick Bennett, Mrs Thomas Langton, H.M.C. Alexander, and William Adams. The collection remained with the Colonial Secretary of NSW until 1894, when it was transferred to the Australian Museum.
Cook's 12 years sailing around the Pacific Ocean contributed much to Europeans' knowledge of the area. Several islands, such as the Hawaiian group, were encountered for the first time by Europeans, and his more accurate navigational charting of large areas of the Pacific was a major achievement. To create accurate maps, latitude and longitude must be accurately determined. Navigators had been able to work out latitude accurately for centuries by measuring the angle of the sun or a star above the horizon with an instrument such as a backstaff or quadrant. Longitude was more difficult to measure accurately because it requires precise knowledge of the time difference between points on the surface of the earth. The Earth turns a full 360 degrees relative to the sun each day. Thus longitude corresponds to time: 15 degrees every hour, or 1 degree every 4 minutes. Cook gathered accurate longitude measurements during his first voyage thanks to his navigational skills, the help of astronomer Charles Green, and the newly published "Nautical Almanac" tables, via the lunar distance method: measuring the angular distance from the moon to either the sun during daytime or one of eight bright stars during night-time to determine the time at the Royal Observatory, Greenwich, and comparing that to his local time, determined via the altitude of the sun, moon, or stars.
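As a sketch of that conversion (the times are hypothetical, and a real lunar-distance reduction also involves corrections for refraction and parallax that are omitted here): once Greenwich time is known, comparing it with local apparent time gives longitude at 15 degrees per hour.

```python
def longitude_degrees(greenwich_hours: float, local_hours: float) -> float:
    """Positive result = degrees west of Greenwich, at 15 degrees/hour."""
    return (greenwich_hours - local_hours) * 15.0

# Hypothetical example: it is local apparent noon (12:00) while the
# Nautical Almanac reduction gives 22:00 at Greenwich. The observer is
# 10 hours, i.e. 150 degrees, west of Greenwich -- mid-Pacific.
print(longitude_degrees(22.0, 12.0))  # -> 150.0
```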
On his second voyage, Cook used the K1 chronometer made by Larcum Kendall, which had the shape of a large pocket watch. It was a copy of the H4 clock made by John Harrison, which had proved to be the first to keep accurate time at sea when used on the ship "Deptford"'s journey to Jamaica in 1761–62. Cook succeeded in circumnavigating the world on his first voyage without losing a single man to scurvy, an unusual accomplishment at the time. He tested several preventive measures, most importantly the frequent replenishment of fresh food. For presenting a paper on this aspect of the voyage to the Royal Society, he was awarded the Copley Medal in 1776. Cook became the first European to have extensive contact with various peoples of the Pacific. He correctly postulated a link among all the Pacific peoples, despite their being separated by great ocean stretches (see Malayo-Polynesian languages). Cook theorised that Polynesians originated from Asia, which the scientist Bryan Sykes later verified. In New Zealand the coming of Cook is often used to signify the onset of colonisation, which officially began more than 70 years after his crew became the second group of Europeans to visit that archipelago.
Cook carried several scientists on his voyages; they made significant observations and discoveries. Two botanists, Joseph Banks and the Swede Daniel Solander, sailed on the first voyage. The two collected over 3,000 plant species. Banks subsequently strongly promoted British settlement of Australia, leading to the establishment of New South Wales as a penal settlement in 1788. Artists also sailed on Cook's first voyage. Sydney Parkinson was heavily involved in documenting the botanists' findings, completing 264 drawings before his death near the end of the voyage. They were of immense scientific value to British botanists. Cook's second expedition included William Hodges, who produced notable landscape paintings of Tahiti, Easter Island, and other locations. Several officers who served under Cook went on to distinctive accomplishments. William Bligh, Cook's sailing master, was given command of HMS "Bounty" in 1787 to sail to Tahiti and return with breadfruit. Bligh became known for the mutiny of his crew, which resulted in his being set adrift in 1789. He later became Governor of New South Wales, where he was the subject of another mutiny—the 1808 Rum Rebellion. George Vancouver, one of Cook's midshipmen, led a voyage of exploration to the Pacific Coast of North America from 1791 to 1794. In honour of Vancouver's former commander, his ship was named "Discovery". George Dixon, who sailed under Cook on his third expedition, later commanded an expedition of his own. Henry Roberts, a lieutenant under Cook, spent many years after that voyage preparing the detailed charts that went into Cook's posthumous atlas, published around 1784.
Cook's contributions to knowledge gained international recognition during his lifetime. In 1779, while the American colonies were fighting Britain for their independence, Benjamin Franklin wrote to captains of colonial warships at sea, recommending that if they came into contact with Cook's vessel, they were to "not consider her an enemy, nor suffer any plunder to be made of the effects contained in her, nor obstruct her immediate return to England by detaining her or sending her into any other part of Europe or to America; but that you treat the said Captain Cook and his people with all civility and kindness ... as common friends to mankind."
The first recorded circumnavigation of the world by an animal was by Cook's goat, who made that memorable journey twice: the first time on HMS "Dolphin", under Samuel Wallis, and then aboard "Endeavour". When they returned to England, Cook had the goat presented with a silver collar engraved with lines from Samuel Johnson: "Perpetui, ambita bis terra, praemia lactis Haec habet altrici Capra secunda Jovis." ("In fame scarce second to the nurse of Jove,/ This Goat, who twice the world had traversed round,/Deserving both her master's care and love,/Ease and perpetual pasture now has found." "The nurse of Jove" is a reference to the mythological Amalthea.) She was put to pasture on Cook's farm outside London and was reportedly admitted to the privileges of the Royal Naval Hospital at Greenwich. Cook's journal recorded the date of the goat's death: 28 March 1772.
A U.S. coin, the 1928 Hawaii Sesquicentennial half-dollar, carries Cook's image. Minted for the 150th anniversary of his discovery of the islands, its low mintage (10,008) has made this example of early United States commemorative coins both scarce and expensive. The site where he was killed in Hawaii was marked in 1874 by a white obelisk set on a small plot of chained-off beach. This land, although in Hawaii, was deeded to the United Kingdom by Princess Likelike and her husband, Archibald Scott Cleghorn, via the British Consul to Hawaii, James Hay Wodehouse, in 1877. A nearby town is named Captain Cook, Hawaii; several Hawaiian businesses also carry his name. The Apollo 15 Command/Service Module "Endeavour" was named after Cook's ship, HMS "Endeavour", as was the Space Shuttle "Endeavour". In addition, the first Crew Dragon capsule flown by SpaceX was named "Endeavour". Another shuttle, "Discovery", was named after Cook's HMS "Discovery".
The first institution of higher education in North Queensland, Australia, was named after him, with James Cook University opening in Townsville in 1970. Numerous institutions, landmarks and place names reflect the importance of Cook's contributions, including the Cook Islands, Cook Strait, Cook Inlet and the Cook crater on the Moon. Aoraki / Mount Cook, the highest summit in New Zealand, is named for him. Another Mount Cook is on the border between the U.S. state of Alaska and the Canadian Yukon territory, and is designated Boundary Peak 182 as one of the official Boundary Peaks of the Hay–Herbert Treaty. A life-size statue of Cook upon a column stands in Hyde Park located in the centre of Sydney. A large aquatic monument is planned for Cook's landing place at Botany Bay, Sydney.
One of the earliest monuments to Cook in the United Kingdom is located at The Vache, erected in 1780 by Admiral Hugh Palliser, a contemporary of Cook and one-time owner of the estate. A large obelisk was built in 1827 as a monument to Cook on Easby Moor overlooking his boyhood village of Great Ayton, along with a smaller monument at the former location of Cook's cottage. There is also a monument to Cook in the church of St Andrew the Great, St Andrew's Street, Cambridge, where his sons Hugh, a student at Christ's College, and James were buried. Cook's widow Elizabeth was also buried in the church and in her will left money for the memorial's upkeep. The 250th anniversary of Cook's birth was marked in 1978 by the opening of the Captain Cook Birthplace Museum at the site of his birthplace in Marton, located within Stewart Park. A granite vase just to the south of the museum marks the approximate spot where he was born. Tributes also abound in post-industrial Middlesbrough, including a primary school, a shopping square and "Bottle 'O Notes", a public artwork by Claes Oldenburg erected in the town's Central Gardens in 1993. Also named after Cook is James Cook University Hospital, a major teaching hospital which opened in 2003; a railway station serving it, named James Cook, opened in 2014.
The Royal Research Ship RRS "James Cook" was built in 2006 to replace the RRS "Charles Darwin" in the UK's Royal Research Fleet, and Stepney Historical Trust placed a plaque on Free Trade Wharf in the Highway, Shadwell to commemorate his life in the East End of London. In 2002, Cook was placed at number 12 in the BBC's poll of the 100 Greatest Britons.
Cook has been the subject of many literary works; one of the earliest was "Captain Cook" by Letitia Elizabeth Landon (L.E.L.). Kenneth Slessor's 1931 poem "Five Visions of Captain Cook" was, according to the poet Douglas Stewart, the "most dramatic break-through" in Australian poetry of the 20th century.
Source: https://en.wikipedia.org/wiki?curid=15630
John Baskerville
John Baskerville (baptised 28 January 1707 – 8 January 1775) was an English businessman, in areas including japanning and papier-mâché, but he is best remembered as a printer and type designer.
Baskerville was born in the village of Wolverley, near Kidderminster in Worcestershire and baptised on 28 January at Wolverley church. At the time of his birth this was considered the year 1706; it would now be considered early 1707. Baskerville established an early career teaching handwriting and is known to have offered his services cutting gravestones (a demonstration slab by him survives in the Library of Birmingham) before making a considerable fortune from the manufacture of lacquerwork items (japanning).
He practised as a printer in Birmingham, England. Baskerville was a member of the Royal Society of Arts, and an associate of some of the members of the Lunar Society. He directed his punchcutter, John Handy, in the design of many typefaces of broadly similar appearance. In 1757, Baskerville published a remarkable quarto edition of Virgil on wove paper, using his own type. It took three years to complete, but it made such an impact that he was appointed printer to the University of Cambridge the following year.
John Baskerville printed works for the University of Cambridge in 1758 and, although an atheist, printed a splendid folio Bible in 1763. His typefaces were greatly admired by Benjamin Franklin, a fellow printer. Baskerville's work was criticised by jealous competitors and soon fell out of favour, but since the 1920s many new fonts have been released by Linotype, Monotype, and other type foundries as revivals of his work, mostly called 'Baskerville'. Emigre released a popular revival of this typeface in 1996 called Mrs Eaves, named for Baskerville's wife, Sarah Eaves. Baskerville's most notable typeface, Baskerville, represents the peak of transitional typeface design and bridges the gap between Old Style and Modern type design.
Baskerville also was responsible for significant innovations in printing, paper and ink production. He worked with the paper maker James Whatman to produce a smoother, whiter paper which showcased his strong black type. Baskerville also pioneered a completely new style of typography, adding wide margins and leading between each line.
Baskerville died in January 1775 at his home, "Easy Hill". He requested that his body be buried in unconsecrated ground on his own property, in keeping with his rejection of Christianity.
However, in 1821 a canal was built through the land and his body was placed on show by the landowner until Baskerville's family and friends arranged to have it moved to the crypt of Christ Church, Birmingham. Christ Church was demolished in 1897 so his remains were then moved, with other bodies from the crypt, to consecrated catacombs at Warstone Lane Cemetery. In 1963 a petition was presented to Birmingham City Council requesting that he be reburied in unconsecrated ground according to his wishes.
Baskerville House was built on the grounds of "Easy Hill". In 1947, BBC radio broadcast a radio play about his burial, named "Hic Jacet: or The Corpse in the Crescent" by Neville Brendon Watts. The original recording was not preserved but a performance was staged by students at the Birmingham School of Acting in 2013 at the Typographic Hub Centre of Birmingham City University.
A Portland stone sculpture of the Baskerville typeface, "Industry and Genius", in his honour stands in front of Baskerville House in Centenary Square, Birmingham. It was created by local artist David Patten.
Source: https://en.wikipedia.org/wiki?curid=15632
Joseph Stalin
Joseph Vissarionovich Stalin (born Ioseb Besarionis dze Jugashvili; 18 December 1878 – 5 March 1953) was a Georgian revolutionary and Soviet politician who led the Soviet Union from the mid-1920s until 1953 as the general secretary of the Communist Party of the Soviet Union (1922–1952) and premier of the Soviet Union (1941–1953). Despite initially governing the Soviet Union as part of a collective leadership, he eventually consolidated power to become the country's "de facto" dictator by the 1930s. A communist ideologically committed to the Leninist interpretation of Marxism, Stalin formalised these ideas as Marxism–Leninism, while his own policies are known as Stalinism.
Born to a poor family in Gori in the Russian Empire (now Georgia), Stalin joined the Marxist Russian Social Democratic Labour Party as a youth. He edited the party's newspaper, "Pravda", and raised funds for Vladimir Lenin's Bolshevik faction via robberies, kidnappings, and protection rackets. Repeatedly arrested, he underwent several internal exiles. After the Bolsheviks seized power during the 1917 October Revolution and created a one-party state under Lenin's newly renamed Communist Party, Stalin joined its governing Politburo. Serving in the Russian Civil War before overseeing the Soviet Union's establishment in 1922, Stalin assumed leadership over the country following Lenin's 1924 death. Under Stalin, "Socialism in One Country" became a central tenet of the party's dogma. Through the Five-Year Plans, the country underwent agricultural collectivisation and rapid industrialisation, creating a centralised command economy. This led to significant disruptions in food production that contributed to the famine of 1932–33. To eradicate accused "enemies of the working class", Stalin instituted the "Great Purge", in which over a million were imprisoned and at least 700,000 executed between 1934 and 1939. By 1937, he had complete personal control over the party and state.
Stalin's government promoted Marxism–Leninism abroad through the Communist International and supported European anti-fascist movements during the 1930s, particularly in the Spanish Civil War. In 1939, it signed a non-aggression pact with Nazi Germany, resulting in the Soviet invasion of Poland. Germany ended the pact by invading the Soviet Union in 1941. Despite initial setbacks, the Soviet Red Army repelled the German incursion and captured Berlin in 1945, ending World War II in Europe. The Soviets annexed the Baltic states and helped establish Soviet-aligned governments throughout Central and Eastern Europe, China, and North Korea. The Soviet Union and the United States emerged from the war as global superpowers. Tensions arose between the Soviet-backed Eastern Bloc and U.S.-backed Western Bloc which became known as the Cold War. Stalin led his country through the post-war reconstruction, during which it developed a nuclear weapon in 1949. In these years, the country experienced another major famine and an anti-semitic campaign peaking in the doctors' plot. After Stalin's death in 1953, he was eventually succeeded by Nikita Khrushchev, who denounced him and initiated the de-Stalinisation of Soviet society.
Widely considered one of the 20th century's most significant figures, Stalin was the subject of a pervasive personality cult within the international Marxist–Leninist movement, which revered him as a champion of the working class and socialism. Since the dissolution of the Soviet Union in 1991, Stalin has retained popularity in Russia and Georgia as a victorious wartime leader who established the Soviet Union as a major world power. Conversely, his totalitarian government has been widely condemned for overseeing mass repressions, ethnic cleansing, deportations, hundreds of thousands of executions, and famines that killed millions.
Stalin was born in the Georgian town of Gori, then part of the Tiflis Governorate of the Russian Empire and home to a mix of Georgian, Armenian, Russian, and Jewish communities. He was born on 18 December [O.S. 6 December] 1878, and baptised on 29 December. His parents, Besarion Jughashvili and Ekaterine Geladze, were ethnically Georgian, and Stalin grew up speaking the Georgian language. He was their only child to survive past infancy, and was nicknamed "Soso", a diminutive of "Ioseb".
Besarion was a shoemaker and owned his own workshop; it was initially a financial success, but later fell into decline, and the family found itself living in poverty. Besarion became an alcoholic, and drunkenly beat his wife and son. Ekaterine and Stalin left the home by 1883, and began a wandering life, moving through nine different rented rooms over the next decade. In 1886, they moved into the house of a family friend, Father Christopher Charkviani. Ekaterine worked as a house cleaner and launderer, and was determined to send her son to school. In September 1888, Stalin enrolled at the Gori Church School, a place secured by Charkviani. Although he got into many fights, Stalin excelled academically, displaying talent in painting and drama classes, writing his own poetry, and singing as a choirboy. Stalin faced several severe health problems; an 1884 smallpox infection left him with facial pock scars, and aged 12, he was seriously injured after being hit by a phaeton, which was the likely cause of a lifelong disability to his left arm.
In August 1894, Stalin enrolled in the Spiritual Seminary in Tiflis, enabled by a scholarship that allowed him to study at a reduced rate. Here he joined 600 trainee priests who boarded at the institution. Stalin was again academically successful and gained high grades. He continued writing poetry; five of his poems were published under the pseudonym of "Soselo" in Ilia Chavchavadze's newspaper "Iveria" ('Georgia'). Thematically, they dealt with topics like nature, land, and patriotism. According to Stalin's biographer Simon Sebag Montefiore they became "minor Georgian classics", and were included in various anthologies of Georgian poetry over the coming years. As he grew older, Stalin lost interest in priestly studies, his grades dropped, and he was repeatedly confined to a cell for his rebellious behaviour. The seminary's journal noted that he declared himself an atheist, stalked out of prayers and refused to doff his hat to monks.
Stalin joined a forbidden book club at the school; he was particularly influenced by Nikolay Chernyshevsky's 1863 pro-revolutionary novel "What Is To Be Done?". Another influential text was Alexander Kazbegi's "The Patricide", with Stalin adopting the nickname "Koba" from that of the book's bandit protagonist. He also read "Capital", the 1867 book by German sociological theorist Karl Marx. Stalin devoted himself to Marx's socio-political theory, Marxism, which was then on the rise in Georgia, one of various forms of socialism opposed to the empire's governing Tsarist authorities. At night, he attended secret workers' meetings, and was introduced to Silibistro "Silva" Jibladze, the Marxist founder of Mesame Dasi ('Third Group'), a Georgian socialist group. Stalin left the seminary in April 1899 and never returned.
In October 1899, Stalin began work as a meteorologist at a Tiflis observatory. He attracted a group of supporters through his classes in socialist theory, and co-organised a secret workers' mass meeting for May Day 1900, at which he successfully encouraged many of the men to take strike action. By this point, the empire's secret police — the Okhrana — were aware of Stalin's activities within Tiflis' revolutionary milieu. They attempted to arrest him in March 1901, but he escaped and went into hiding, living off the donations of friends and sympathisers. Remaining underground, he helped plan a demonstration for May Day 1901, in which 3,000 marchers clashed with the authorities. He continued to evade arrest by using aliases and sleeping in different apartments. In November 1901, he was elected to the Tiflis Committee of the Russian Social Democratic Labour Party (RSDLP), a Marxist party founded in 1898.
That month, Stalin travelled to the port city of Batumi. His militant rhetoric proved divisive among the city's Marxists, some of whom suspected that he might be an "agent provocateur" working for the government. He found employment at the Rothschild refinery storehouse, where he twice co-organised workers' strikes. After several strike leaders were arrested, he co-organised a mass public demonstration which led to the storming of the prison; troops fired upon the demonstrators, 13 of whom were killed. Stalin organised a second mass demonstration on the day of their funeral, before being arrested in April 1902. Held first in Batumi Prison, and then Kutaisi Prison, Stalin was sentenced in mid-1903 to three years of exile in eastern Siberia.
Stalin left Batumi in October, arriving at the small Siberian town of Novaya Uda in late November. There, he lived in a two-room peasant's house, sleeping in the building's larder. He made two escape attempts; on the first he made it to Balagansk before returning due to frostbite. His second attempt, in January 1904, was successful and he made it to Tiflis. There, he co-edited a Georgian Marxist newspaper, "Proletariatis Brdzola" ("Proletarian Struggle"), with Philip Makharadze. He called for the Georgian Marxist movement to split off from its Russian counterpart, resulting in several RSDLP members accusing him of holding views contrary to the ethos of Marxist internationalism and calling for his expulsion from the party; he soon recanted his opinions. During his exile, the RSDLP had split between Vladimir Lenin's "Bolsheviks" and Julius Martov's "Mensheviks". Stalin detested many of the Mensheviks in Georgia and aligned himself with the Bolsheviks. Although Stalin established a Bolshevik stronghold in the mining town of Chiatura, Bolshevism remained a minority force in the Menshevik-dominated Georgian revolutionary scene.
In January 1905, government troops massacred protesters in Saint Petersburg. Unrest soon spread across the Russian Empire in what came to be known as the Revolution of 1905. Georgia was particularly affected. Stalin was in Baku in February when ethnic violence broke out between Armenians and Azeris; at least 2,000 were killed. He publicly lambasted the "pogroms against Jews and Armenians" as being part of Tsar Nicholas II's attempts to "buttress his despicable throne". Stalin formed a Bolshevik Battle Squad which he used to try to keep Baku's warring ethnic factions apart; he also used the unrest as a cover for stealing printing equipment. Amid the growing violence throughout Georgia he formed further Battle Squads, with the Mensheviks doing the same. Stalin's Squads disarmed local police and troops, raided government arsenals, and raised funds through protection rackets on large local businesses and mines. They launched attacks on the government's Cossack troops and pro-Tsarist Black Hundreds, co-ordinating some of their operations with the Menshevik militia.
In November 1905, the Georgian Bolsheviks elected Stalin as one of their delegates to a Bolshevik conference in Saint Petersburg. On arrival, he met Lenin's wife Nadezhda Krupskaya, who informed him that the venue had been moved to Tampere in the Grand Duchy of Finland. At the conference Stalin met Lenin for the first time. Although Stalin held Lenin in deep respect, he was vocal in his disagreement with Lenin's view that the Bolsheviks should field candidates for the forthcoming election to the State Duma; Stalin saw the parliamentary process as a waste of time. In April 1906, Stalin attended the RSDLP Fourth Congress in Stockholm; this was his first trip outside the Russian Empire. At the conference, the RSDLP—then led by its Menshevik majority—agreed that it would not raise funds using armed robbery. Lenin and Stalin disagreed with this decision, and later privately discussed how they could continue the robberies for the Bolshevik cause.
Stalin married Kato Svanidze in a church ceremony at Senaki in July 1906. In March 1907, she bore a son, Yakov. By that year—according to the historian Robert Service—Stalin had established himself as "Georgia's leading Bolshevik". He attended the Fifth RSDLP Congress, held in London in May–June 1907. After returning to Tiflis, Stalin organised the robbery of a large delivery of money to the Imperial Bank in June 1907. His gang ambushed the armed convoy in Yerevan Square with gunfire and home-made bombs. Around 40 people were killed, but all of his gang escaped alive.
After the heist, Stalin settled in Baku with his wife and son. There, Mensheviks confronted Stalin about the robbery and voted to expel him from the RSDLP, but he took no notice of them.
In Baku, Stalin secured Bolshevik domination of the local RSDLP branch, and edited two Bolshevik newspapers, "Bakinsky Proletary" and "Gudok" ("Whistle"). In August 1907, he attended the Seventh Congress of the Second International—an international socialist organisation—in Stuttgart, Germany. In November 1907, his wife died of typhus, and he left his son with her family in Tiflis. In Baku, he reassembled his gang, the Outfit, which continued to attack Black Hundreds and raised funds by running protection rackets, counterfeiting currency, and carrying out robberies. They also kidnapped the children of several wealthy figures to extract ransom money. In early 1908, he travelled to the Swiss city of Geneva to meet with Lenin and the prominent Russian Marxist Georgi Plekhanov, although the latter exasperated him.
In March 1908, Stalin was arrested and interned in Bailov Prison in Baku. There, he led the imprisoned Bolsheviks, organised discussion groups, and ordered the killing of suspected informants. He was eventually sentenced to two years exile in the village of Solvychegodsk, Vologda Province, arriving there in February 1909. In June, he escaped the village and made it to Kotlas disguised as a woman, and from there to Saint Petersburg. In March 1910, he was arrested again and sent back to Solvychegodsk. There he had affairs with at least two women; his landlady, Maria Kuzakova, later gave birth to his second son, Konstantin. In June 1911, Stalin was given permission to move to Vologda, where he stayed for two months, having a relationship with Pelageya Onufrieva. He escaped to Saint Petersburg, where he was arrested in September 1911, and sentenced to a further three-year exile in Vologda.
In January 1912, while Stalin was in exile, the first Bolshevik Central Committee had been elected at the Prague Conference. Shortly after the conference, Lenin and Grigory Zinoviev decided to co-opt Stalin to the committee. Still in Vologda, Stalin agreed, remaining a Central Committee member for the rest of his life. Lenin believed that Stalin, as a Georgian, would help secure support for the Bolsheviks from the Empire's minority ethnicities. In February 1912, Stalin again escaped to Saint Petersburg, tasked with converting the Bolshevik weekly newspaper, "Zvezda" ("Star") into a daily, "Pravda" ("Truth"). The new newspaper was launched in April 1912, although Stalin's role as editor was kept secret.
In May 1912, he was arrested again and imprisoned in the Shpalerny Prison, before being sentenced to three years exile in Siberia. In July, he arrived at the Siberian village of Narym, where he shared a room with fellow Bolshevik Yakov Sverdlov. After two months, Stalin and Sverdlov escaped back to Saint Petersburg.
During a brief period back in Tiflis, Stalin and the Outfit planned the ambush of a mail coach, during which most of the group—although not Stalin—were apprehended by the authorities. Stalin returned to Saint Petersburg, where he continued editing and writing articles for "Pravda".
After the October 1912 Duma elections resulted in six Bolsheviks and six Mensheviks being elected, Stalin wrote articles calling for reconciliation between the two Marxist factions, for which he was criticised by Lenin. In late 1912, he twice crossed into the Austro-Hungarian Empire to visit Lenin in Kraków, eventually bowing to Lenin's opposition to reunification with the Mensheviks. In January 1913, Stalin travelled to Vienna, where he conducted research into the 'national question' of how the Bolsheviks should deal with the Russian Empire's national and ethnic minorities. Lenin, who encouraged Stalin to write an article on the subject, wanted to attract these groups to the Bolshevik cause by offering them the right of secession from the Russian state, but at the same time hoped they would remain part of a future Bolshevik-governed Russia.
Stalin's finished article was titled "Marxism and the National Question", and was first published in the March, April, and May 1913 issues of the Bolshevik journal "Prosveshcheniye"; Lenin was very happy with it. According to the historian Simon Sebag Montefiore, this was "Stalin's most famous work". The article was published under the pseudonym "K. Stalin", a name he had been using since 1912. Derived from the Russian word for steel ("stal"), this has been translated as "Man of Steel"; Stalin may have intended it to imitate Lenin's pseudonym. Stalin retained this name for the rest of his life, possibly because it had been used on the article which established his reputation among the Bolsheviks.
In February 1913, Stalin was arrested while back in Saint Petersburg. He was sentenced to four years exile in Turukhansk, a remote part of Siberia from which escape was particularly difficult. In August, he arrived in the village of Monastyrskoe, although after four weeks was relocated to the hamlet of Kostino. In March 1914, concerned over a potential escape attempt, the authorities moved Stalin to the hamlet of Kureika on the edge of the Arctic Circle. In the hamlet, Stalin had a relationship with Lidia Pereprygia, who was thirteen at the time and thus a year under the legal age of consent in Tsarist Russia. In or about December 1914, Pereprygia gave birth to Stalin's child, although the infant soon died. She gave birth to another of his children, Alexander, circa April 1917. In Kureika, Stalin lived closely with the indigenous Tunguses and Ostyaks, and spent much of his time fishing.
While Stalin was in exile, Russia entered the First World War, and in October 1916 Stalin and other exiled Bolsheviks were conscripted into the Russian Army, leaving for Monastyrskoe. They arrived in Krasnoyarsk in February 1917, where a medical examiner ruled Stalin unfit for military service due to his crippled arm. Stalin was required to serve four more months of his exile, and he successfully requested that he serve it in nearby Achinsk. Stalin was in the city when the February Revolution took place; uprisings broke out in Petrograd—as Saint Petersburg had been renamed—and Tsar Nicholas II abdicated to escape being violently overthrown. The Russian Empire became a "de facto" republic, headed by a Provisional Government dominated by liberals. In a celebratory mood, Stalin travelled by train to Petrograd in March. There, Stalin and fellow Bolshevik Lev Kamenev assumed control of "Pravda", and Stalin was appointed the Bolshevik representative to the Executive Committee of the Petrograd Soviet, an influential council of the city's workers. In April, Stalin came third in the Bolshevik elections for the party's Central Committee; Lenin came first and Zinoviev came second. This reflected his senior standing in the party at the time.
Stalin helped organise the July Days uprising, an armed display of strength by Bolshevik supporters. After the demonstration was suppressed, the Provisional Government initiated a crackdown on the Bolsheviks, raiding "Pravda". During this raid, Stalin smuggled Lenin out of the newspaper's office and took charge of the Bolshevik leader's safety, moving him between Petrograd safe houses before smuggling him to Razliv. In Lenin's absence, Stalin continued editing "Pravda" and served as acting leader of the Bolsheviks, overseeing the party's Sixth Congress, which was held covertly. Lenin began calling for the Bolsheviks to seize power by toppling the Provisional Government in a "coup d'état". Stalin and fellow senior Bolshevik Leon Trotsky both endorsed Lenin's plan of action, but it was initially opposed by Kamenev and other party members. Lenin returned to Petrograd and secured a majority in favour of a "coup" at a meeting of the Central Committee on 10 October.
On 24 October, police raided the Bolshevik newspaper offices, smashing machinery and presses; Stalin salvaged some of this equipment to continue his activities. In the early hours of 25 October, Stalin joined Lenin in a Central Committee meeting in the Smolny Institute, from where the Bolshevik "coup"—the October Revolution—was directed. Bolshevik militia seized Petrograd's electric power station, main post office, state bank, telephone exchange, and several bridges. A Bolshevik-controlled ship, the "Aurora", opened fire on the Winter Palace; the Provisional Government's assembled delegates surrendered and were arrested by the Bolsheviks. Although he had been tasked with briefing the Bolshevik delegates of the Second Congress of Soviets about the developing situation, Stalin's role in the coup had not been publicly visible. Trotsky and other later Bolshevik opponents of Stalin used this as evidence that his role in the "coup" had been insignificant, although later historians reject this. According to the historian Oleg Khlevniuk, Stalin "filled an important role [in the October Revolution]... as a senior Bolshevik, member of the party's Central Committee, and editor of its main newspaper"; the historian Stephen Kotkin similarly noted that Stalin had been "in the thick of events" in the build-up to the "coup".
On 26 October, Lenin declared himself Chairman of a new government, the Council of People's Commissars ("Sovnarkom"). Stalin backed Lenin's decision not to form a coalition with the Mensheviks and the Socialist Revolutionary Party, although the Bolsheviks did form a coalition government with the Left Socialist Revolutionaries. Stalin became part of an informal foursome leading the government, alongside Lenin, Trotsky, and Sverdlov; of these, Sverdlov was regularly absent, and died in March 1919. Stalin's office was based near Lenin's in the Smolny Institute, and he and Trotsky were the only individuals allowed access to Lenin's study without an appointment. Although not as publicly well known as Lenin or Trotsky, Stalin's importance among the Bolsheviks grew. He co-signed Lenin's decrees shutting down hostile newspapers, and with Sverdlov chaired the sessions of the committee drafting a constitution for the new Russian Soviet Federative Socialist Republic. He strongly supported Lenin's formation of the Cheka security service and the subsequent Red Terror that it initiated; noting that state violence had proved an effective tool for capitalist powers, he believed that it would prove the same for the Soviet government. Unlike senior Bolsheviks like Kamenev and Nikolai Bukharin, Stalin never expressed concern about the rapid growth and expansion of the Cheka and the Terror.
Having dropped his editorship of "Pravda", Stalin was appointed the People's Commissar for Nationalities. He took Nadezhda Alliluyeva as his secretary, and at some point married her, although the wedding date is unknown. In November 1917, he signed the Decree on Nationality, according ethnic and national minorities living in Russia the right of secession and self-determination. The decree's purpose was primarily strategic; the Bolsheviks wanted to gain favour among ethnic minorities but hoped that the latter would not actually desire independence. That month, he travelled to Helsinki to talk with the Finnish Social-Democrats, granting Finland's request for independence in December. His department allocated funds for the establishment of presses and schools in the languages of various ethnic minorities. Socialist Revolutionaries denounced Stalin's talk of federalism and national self-determination as a front for Sovnarkom's centralising and imperialist policies.
Due to the ongoing First World War, in which Russia was fighting the Central Powers of Germany and Austria-Hungary, Lenin's government relocated from Petrograd to Moscow in March 1918. There, they based themselves in the Kremlin; it was here that Stalin, Trotsky, Sverdlov, and Lenin lived. Stalin supported Lenin's desire to sign an armistice with the Central Powers regardless of the cost in territory. Stalin thought it necessary because—unlike Lenin—he was unconvinced that Europe was on the verge of proletarian revolution. Lenin eventually convinced the other senior Bolsheviks of his viewpoint, resulting in the signing of the Treaty of Brest-Litovsk in March 1918. The treaty gave vast areas of land and resources to the Central Powers and angered many in Russia; the Left Socialist Revolutionaries withdrew from the coalition government over the issue. The governing RSDLP party was soon renamed, becoming the Russian Communist Party.
After the Bolsheviks seized power, both right and left-wing armies rallied against them, generating the Russian Civil War. To secure access to the dwindling food supply, in May 1918 Sovnarkom sent Stalin to Tsaritsyn to take charge of food procurement in southern Russia. Eager to prove himself as a commander, once there he took control of regional military operations. He befriended two military figures, Kliment Voroshilov and Semyon Budyonny, who would form the nucleus of his military and political support base. Believing that victory was assured by numerical superiority, he sent large numbers of Red Army troops into battle against the region's anti-Bolshevik White armies, resulting in heavy losses; Lenin was concerned by this costly tactic. In Tsaritsyn, Stalin commanded the local Cheka branch to execute suspected counter-revolutionaries, sometimes without trial, and—in contravention of government orders—purged the military and food collection agencies of middle-class specialists, some of whom he also executed. His use of state violence and terror was at a greater scale than most Bolshevik leaders approved of; for instance, he ordered several villages to be torched to ensure compliance with his food procurement program.
In December 1918, Stalin was sent to Perm to lead an inquiry into how Alexander Kolchak's White forces had been able to decimate Red troops based there. He returned to Moscow between January and March 1919, before being assigned to the Western Front at Petrograd. When the Red Third Regiment defected, he ordered the public execution of captured defectors. In September, he was returned to the Southern Front. During the war, he proved his worth to the Central Committee, displaying decisiveness, determination, and a willingness to take on responsibility in conflict situations. At the same time, he disregarded orders and repeatedly threatened to resign when affronted. He was reprimanded by Lenin at the 8th Party Congress for employing tactics which resulted in far too many deaths of Red Army soldiers. In November 1919, the government nonetheless awarded him the Order of the Red Banner for his wartime service.
The Bolsheviks won the Russian Civil War by the end of 1919. By that time, Sovnarkom had turned its attention to spreading proletarian revolution abroad, to this end forming the Communist International in March 1919; Stalin attended its inaugural ceremony. Although Stalin did not share Lenin's belief that Europe's proletariat were on the verge of revolution, he acknowledged that as long as it stood alone, Soviet Russia remained vulnerable. In December 1918, he drew up decrees recognising Marxist-governed Soviet republics in Estonia, Lithuania, and Latvia; during the civil war these Marxist governments were overthrown and the Baltic countries became fully independent of Russia, an act Stalin regarded as illegitimate. In February 1920, he was appointed to head the Workers' and Peasants' Inspectorate; that same month he was also transferred to the Caucasian Front.
Following earlier clashes between Polish and Russian troops, the Polish–Soviet War broke out in early 1920, with the Poles invading Ukraine and taking Kiev on 7 May. On 26 May, Stalin was moved to Ukraine, on the Southwest Front. The Red Army retook Kiev on 10 June, and soon forced the Polish troops back into Poland. On 16 July, the Central Committee decided to take the war into Polish territory. Lenin believed that the Polish proletariat would rise up to support the Russians against Józef Piłsudski's Polish government. Stalin had cautioned against this; he believed that nationalism would lead the Polish working-classes to support their government's war effort. He also believed that the Red Army was ill-prepared to conduct an offensive war and that it would give White Armies a chance to resurface in Crimea, potentially reigniting the civil war. Stalin lost the argument, after which he accepted Lenin's decision and supported it. Along the Southwest Front, he became determined to conquer Lviv; in focusing on this goal he disobeyed orders in early August to transfer his troops to assist Mikhail Tukhachevsky's forces that were attacking Warsaw.
In mid-August 1920, the Poles repulsed the Russian advance, and Stalin returned to Moscow to attend the Politburo meeting. In Moscow, Lenin and Trotsky blamed him for his behaviour in the Polish–Soviet War. Stalin felt humiliated and under-appreciated; on 17 August, he demanded to be relieved of his military duties, which was granted on 1 September. At the 9th Bolshevik Conference in late September, Trotsky accused Stalin of "strategic mistakes" in his handling of the war, claiming that Stalin had sabotaged the campaign by disobeying troop transfer orders. Lenin joined Trotsky in criticising him, and nobody spoke on his behalf at the conference. Stalin felt disgraced, and his antipathy toward Trotsky deepened. The Polish–Soviet War ended on 18 March 1921, when a peace treaty was signed in Riga.
The Soviet government sought to bring neighbouring states under its domination; in February 1921 it invaded the Menshevik-governed Georgia, while in April 1921, Stalin ordered the Red Army into Turkestan to reassert Russian state control. As People's Commissar for Nationalities, Stalin believed that each national and ethnic group should have the right to self-expression, facilitated through "autonomous republics" within the Russian state in which they could oversee various regional affairs. In taking this view, some Marxists accused him of bending too much to bourgeois nationalism, while others accused him of remaining too Russocentric by seeking to retain these nations within the Russian state.
Stalin's native Caucasus posed a particular problem due to its highly multi-ethnic mix. Stalin opposed the idea of separate Georgian, Armenian, and Azerbaijani autonomous republics, arguing that these would likely oppress ethnic minorities within their respective territories; instead he called for a Transcaucasian Socialist Federative Soviet Republic. The Georgian Communist Party opposed the idea, resulting in the Georgian Affair. In mid-1921, Stalin returned to the southern Caucasus, there calling on Georgian Communists to avoid the chauvinistic Georgian nationalism which marginalised the Abkhazian, Ossetian, and Adjarian minorities in Georgia. On this trip, Stalin met with his son Yakov, and brought him back to Moscow; Nadya had given birth to another of Stalin's sons, Vasily, in March 1921.
After the civil war, workers' strikes and peasant uprisings broke out across Russia, largely in opposition to Sovnarkom's food requisitioning project; as an antidote, Lenin introduced market-oriented reforms: the New Economic Policy (NEP). There was also internal turmoil in the Communist Party, as Trotsky led a faction calling for the abolition of trade unions; Lenin opposed this and Stalin helped rally opposition to Trotsky's position. Stalin also agreed to supervise the Department of Agitation and Propaganda in the Central Committee Secretariat. At the 11th Party Congress in 1922, Lenin nominated Stalin as the party's new General Secretary. Although concerns were expressed that adopting this new post on top of his others would overstretch his workload and give him too much power, Stalin was appointed to the position. For Lenin, it was advantageous to have a key ally in this crucial post.
In May 1922, a massive stroke left Lenin partially paralysed. Residing at his Gorki dacha, Lenin maintained his main connection to Sovnarkom through Stalin, who was a regular visitor. Lenin twice asked Stalin to procure poison so that he could commit suicide, but Stalin never did so. Despite this comradeship, Lenin disliked what he referred to as Stalin's "Asiatic" manner, and told his sister Maria that Stalin was "not intelligent". Lenin and Stalin argued on the issue of foreign trade; Lenin believed that the Soviet state should have a monopoly on foreign trade, but Stalin supported Grigori Sokolnikov's view that doing so was impractical at that stage. Another disagreement came over the Georgian Affair, with Lenin backing the Georgian Central Committee's desire for a Georgian Soviet Republic over Stalin's idea of a Transcaucasian one.
They also disagreed on the nature of the Soviet state. Lenin called for establishment of a new federation named the "Union of Soviet Republics of Europe and Asia", reflecting his desire for expansion across the two continents, and insisted that the Russian state should join this union on equal terms with the other Soviet states. Stalin believed this would encourage independence sentiment among non-Russians, instead arguing that ethnic minorities would be content as "autonomous republics" within the Russian Soviet Federative Socialist Republic. Lenin accused Stalin of "Great Russian chauvinism"; Stalin accused Lenin of "national liberalism". A compromise was reached, in which the federation would be renamed the "Union of Soviet Socialist Republics" (USSR). The USSR's formation was ratified in December 1922; although officially a federal system, all major decisions were taken by the governing Politburo of the Communist Party of the Soviet Union in Moscow.
Their differences also became personal; Lenin was particularly angered when Stalin was rude to his wife Krupskaya during a telephone conversation. In the final years of Lenin's life, Krupskaya provided governing figures with Lenin's Testament, a series of increasingly disparaging notes about Stalin. These criticised Stalin's rude manners and excessive power, suggesting that he should be removed from the position of General Secretary. Some historians have questioned whether Lenin ever produced these, suggesting instead that they may have been written by Krupskaya, who had personal differences with Stalin; Stalin, however, never publicly voiced concerns about their authenticity.
Lenin died in January 1924. Stalin took charge of the funeral and was one of its pallbearers; against the wishes of Lenin's widow, the Politburo embalmed his corpse and placed it within a mausoleum in Moscow's Red Square. It was incorporated into a growing personality cult devoted to Lenin, with Petrograd being renamed "Leningrad" that year. To bolster his image as a devoted Leninist, Stalin gave nine lectures at Sverdlov University on the "Foundations of Leninism", later published in book form. During the 13th Party Congress in May 1924, "Lenin's Testament" was read only to the leaders of the provincial delegations. Embarrassed by its contents, Stalin offered his resignation as General Secretary; this act of humility saved him and he was retained in the position.
As General Secretary, Stalin had a free hand in making appointments to his own staff, implanting his loyalists throughout the party and administration. Favouring new Communist Party members, many from worker and peasant backgrounds, over the "Old Bolsheviks", who tended to be university educated, he ensured he had loyalists dispersed across the country's regions. Stalin had much contact with young party functionaries, and the desire for promotion led many provincial figures to seek to impress Stalin and gain his favour. Stalin also developed close relations with the trio at the heart of the secret police (first the Cheka and then its replacement, the State Political Directorate): Felix Dzerzhinsky, Genrikh Yagoda, and Vyacheslav Menzhinsky. In his private life, he divided his time between his Kremlin apartment and a dacha at Zubalova; his wife gave birth to a daughter, Svetlana, in February 1926.
In the wake of Lenin's death, various protagonists emerged in the struggle to become his successor: alongside Stalin were Trotsky, Zinoviev, Kamenev, Bukharin, Alexei Rykov, and Mikhail Tomsky. Stalin saw Trotsky—whom he personally despised—as the main obstacle to his dominance within the party. While Lenin had been ill, Stalin had forged an anti-Trotsky alliance with Kamenev and Zinoviev. Although Zinoviev was concerned about Stalin's growing authority, he rallied behind him at the 13th Congress as a counterweight to Trotsky, who now led a party faction known as the Left Opposition. The Left Opposition believed the NEP conceded too much to capitalism; Stalin was called a "rightist" for his support of the policy. Stalin built up a retinue of his supporters in the Central Committee, while the Left Opposition were gradually removed from their positions of influence. He was supported in this by Bukharin, who like Stalin believed that the Left Opposition's proposals would plunge the Soviet Union into instability.
In late 1924, Stalin moved against Kamenev and Zinoviev, removing their supporters from key positions. In 1925, the two moved into open opposition to Stalin and Bukharin. At the 14th Party Congress in December, they launched an attack against Stalin's faction, but it was unsuccessful. Stalin in turn accused Kamenev and Zinoviev of reintroducing factionalism—and thus instability—into the party. In mid-1926, Kamenev and Zinoviev joined with Trotsky's supporters to form the United Opposition against Stalin; in October they agreed to stop factional activity under threat of expulsion, and later publicly recanted their views under Stalin's command. The factionalist arguments continued, with Stalin threatening to resign in October and then December 1926 and again in December 1927. In October 1927, Zinoviev and Trotsky were removed from the Central Committee; the latter was exiled to Kazakhstan and later deported from the country in 1929. Some of those United Opposition members who were repentant were later rehabilitated and returned to government.
Stalin was now the party's supreme leader, although he was not the head of government, a task he entrusted to key ally Vyacheslav Molotov. Other important supporters on the Politburo were Voroshilov, Lazar Kaganovich, and Sergo Ordzhonikidze, with Stalin ensuring his allies ran the various state institutions. According to Montefiore, at this point "Stalin was the leader of the oligarchs but he was far from a dictator". His growing influence was reflected in the naming of various locations after him; in June 1924 the Ukrainian mining town of Yuzovka became Stalino, and in April 1925, Tsaritsyn was renamed Stalingrad on the order of Mikhail Kalinin and Avel Enukidze.
In 1926, Stalin published "On Questions of Leninism". Here, he argued for the concept of "Socialism in One Country", which he presented as an orthodox Leninist perspective. It nevertheless clashed with established Bolshevik views that socialism could not be established in one country but could only be achieved globally through the process of world revolution. In 1927, there was some argument in the party over Soviet policy regarding China. Stalin had called for the Communist Party of China, led by Chen Duxiu, to ally itself with Chiang Kai-shek's Kuomintang (KMT) nationalists, viewing a Communist-Kuomintang alliance as the best bulwark against Japanese imperial expansionism. Instead, the KMT repressed the Communists and a civil war broke out between the two sides.
The Soviet Union lagged behind the industrial development of Western countries, and there had been a shortfall of grain; the 1927 harvest yielded only 70% of the grain produced in 1926. Stalin's government feared attack from Japan, France, the United Kingdom, Poland, and Romania. Many Communists, including in Komsomol, OGPU, and the Red Army, were eager to be rid of the NEP and its market-oriented approach; they had concerns about those who profited from the policy: affluent peasants known as "kulaks" and the small business owners or "Nepmen". At this point, Stalin turned against the NEP, putting him on a course to the "left" even of Trotsky or Zinoviev.
In early 1928, Stalin travelled to Novosibirsk, where he alleged that kulaks were hoarding their grain; he ordered that the kulaks be arrested and their grain confiscated, and brought much of the area's grain back to Moscow with him in February. At his command, grain procurement squads surfaced across Western Siberia and the Urals, with violence breaking out between these squads and the peasantry. Stalin announced that both kulaks and the "middle peasants" must be coerced into releasing their harvest. Bukharin and several other Central Committee members were angry that they had not been consulted about this measure, which they deemed rash. In January 1930, the Politburo approved the liquidation of the kulak class; accused kulaks were rounded up and exiled to other parts of the country or to concentration camps. Large numbers died during the journey. By July 1930, over 320,000 households had been affected by the de-kulakisation policy. According to Stalin biographer Dmitri Volkogonov, de-kulakisation was "the first mass terror applied by Stalin in his own country".
In 1929, the Politburo announced the mass collectivisation of agriculture, establishing both "kolkhozy" collective farms and "sovkhoz" state farms. Stalin barred kulaks from joining these collectives. Although officially voluntary, many peasants joined the collectives out of fear they would face the fate of the kulaks; others joined amid intimidation and violence from party loyalists.
By 1932, about 62% of households involved in agriculture were part of collectives, and by 1936 this had risen to 90%. Many of the collectivised peasants resented the loss of their private farmland, and productivity slumped. Famine broke out in many areas, with the Politburo frequently ordering the distribution of emergency food relief to these regions.
Armed peasant uprisings against de-kulakisation and collectivisation broke out in Ukraine, the northern Caucasus, southern Russia, and central Asia, reaching their apex in March 1930; these were suppressed by the Red Army. Stalin responded to the uprisings with an article insisting that collectivisation was voluntary and blaming any violence and other excesses on local officials. Although Bukharin and Stalin had been close for many years, Bukharin expressed concerns about these policies; he regarded them as a return to Lenin's old "war communism" policy and believed that they would fail. By mid-1928, he was unable to rally sufficient support in the party to oppose the reforms. In November 1929, Stalin removed him from the Politburo.
Officially, the Soviet Union had replaced the "irrationality" and "wastefulness" of a market economy with a planned economy organised along a long-term, precise, and scientific framework; in reality, Soviet economics were based on "ad hoc" commandments issued from the centre, often to meet short-term targets. In 1928, the first five-year plan was launched, its main focus on boosting heavy industry; it was finished a year ahead of schedule, in 1932. The USSR underwent a massive economic transformation. New mines were opened, new cities like Magnitogorsk constructed, and work on the White Sea-Baltic Canal begun. Millions of peasants moved to the cities, although urban house building could not keep up with the demand. Large debts were accrued purchasing foreign-made machinery.
Many of the major construction projects, including the White Sea-Baltic Canal and the Moscow Metro, were constructed largely through forced labour. The last elements of workers' control over industry were removed, with factory managers increasing their authority and receiving privileges and perks; Stalin defended wage disparity by pointing to Marx's argument that it was necessary during the lower stages of socialism. To promote the intensification of labour, a series of medals and awards as well as the Stakhanovite movement were introduced. Stalin's message was that socialism was being established in the USSR while capitalism was crumbling amid the Wall Street crash. His speeches and articles reflected his utopian vision of the Soviet Union rising to unparalleled heights of human development, creating a "new Soviet person".
In 1928, Stalin declared that class war between the proletariat and their enemies would intensify as socialism developed. He warned of a "danger from the right", including within the Communist Party itself. The first major show trial in the USSR was the Shakhty Trial of 1928, in which several middle-class "industrial specialists" were convicted of sabotage. From 1929 to 1930, further show trials were held to intimidate opposition: these included the Industrial Party Trial, Menshevik Trial, and Metro-Vickers Trial. Aware that the ethnic Russian majority might have concerns about being ruled by a Georgian, he promoted ethnic Russians throughout the state hierarchy and made the Russian language compulsory throughout schools and offices, albeit to be used in tandem with local languages in areas with non-Russian majorities. Nationalist sentiment among ethnic minorities was suppressed. Conservative social policies were promoted to enhance social discipline and boost population growth; this included a focus on strong family units and motherhood, the re-criminalisation of homosexuality, restrictions placed on abortion and divorce, and the abolition of the "Zhenotdel" women's department.
Stalin desired a "cultural revolution", entailing both the creation of a culture for the "masses" and the wider dissemination of previously elite culture. He oversaw the proliferation of schools, newspapers, and libraries, as well as the advancement of literacy and numeracy. "Socialist realism" was promoted throughout the arts, while Stalin personally wooed prominent writers, namely Maxim Gorky, Mikhail Sholokhov, and Aleksey Nikolayevich Tolstoy. He also expressed patronage for scientists whose research fitted within his preconceived interpretation of Marxism; he for instance endorsed the research of agrobiologist Trofim Lysenko despite the fact that it was rejected by the majority of Lysenko's scientific peers as pseudo-scientific. The government's anti-religious campaign was re-intensified, with increased funding given to the League of Militant Atheists. Christian, Muslim, and Buddhist clergy faced persecution. Many religious buildings were demolished, most notably Moscow's Cathedral of Christ the Saviour, destroyed in 1931 to make way for the (never completed) Palace of the Soviets. Religion retained an influence over much of the population; in the 1937 census, 57% of respondents identified as religious.
Throughout the 1920s and beyond, Stalin placed a high priority on foreign policy. He personally met with a range of Western visitors, including George Bernard Shaw and H. G. Wells, both of whom were impressed with him. Through the Communist International, Stalin's government exerted a strong influence over Marxist parties elsewhere in the world; initially, Stalin left the running of the organisation largely to Bukharin. At its 6th Congress in July 1928, Stalin informed delegates that the main threat to socialism came not from the right but from non-Marxist socialists and social democrats, whom he called "social fascists"; Stalin recognised that in many countries, the social democrats were the Marxist-Leninists' main rivals for working-class support. This preoccupation with opposing rival leftists concerned Bukharin, who regarded the growth of fascism and the far right across Europe as a far greater threat. After Bukharin's departure, Stalin placed the Communist International under the administration of Dmitry Manuilsky and Osip Piatnitsky.
Stalin faced problems in his family life. In 1929, his son Yakov unsuccessfully attempted suicide; his failure earned Stalin's contempt. His relationship with Nadya was also strained amid their arguments and her mental health problems. In November 1932, after a group dinner in the Kremlin in which Stalin flirted with other women, Nadya shot herself.
Publicly, the cause of death was given as appendicitis; Stalin also concealed the real cause of death from his children. Stalin's friends noted that he underwent a significant change following her suicide, becoming emotionally harder.
Within the Soviet Union, there was widespread civic disgruntlement against Stalin's government. Social unrest, previously restricted largely to the countryside, was increasingly evident in urban areas, prompting Stalin to ease some of his economic policies in 1932. In May 1932, he introduced a system of kolkhoz markets where peasants could trade their surplus produce. At the same time, penal sanctions became more severe; at Stalin's instigation, in August 1932 a decree was introduced meaning that the theft of even a handful of grain could be a capital offence. The second five-year plan had its production quotas reduced from those of the first, with the main emphasis now being on improving living conditions. It therefore emphasised the expansion of housing space and the production of consumer goods. Like its predecessor, this Plan was repeatedly amended to meet changing situations; there was for instance an increasing emphasis placed on armament production after Adolf Hitler became German Chancellor in 1933.
The Soviet Union experienced a major famine which peaked in the winter of 1932–33; between five and seven million people died. Worst affected were Ukraine and the North Caucasus, although the famine also affected Kazakhstan and several Russian provinces. Historians have long debated whether Stalin's government had intended the famine to occur or not; there are no known documents in which Stalin or his government explicitly called for starvation to be used against the population. The 1931 and 1932 harvests had been poor ones due to weather conditions, and had followed several years in which lower productivity had resulted in a gradual decline in output. Government policies—including the focus on rapid industrialisation, the socialisation of livestock, and the emphasis on sown areas over crop rotation—exacerbated the problem; the state had also failed to build reserve grain stocks for such an emergency. Stalin blamed the famine on hostile elements and wreckers within the peasantry; his government provided small amounts of food to famine-struck rural areas, although this was wholly insufficient to deal with the levels of starvation. The Soviet government believed that food supplies should be prioritized for the urban workforce; for Stalin, the fate of Soviet industrialisation was far more important than the lives of the peasantry. Grain exports, which were a major means of Soviet payment for machinery, declined heavily. Stalin would not acknowledge that his policies had contributed to the famine, the existence of which was denied to foreign observers.
In 1935–36, Stalin oversaw a new constitution; its dramatic liberal features were designed as propaganda weapons, for all power rested in the hands of Stalin and his Politburo. He declared that "socialism, which is the first phase of communism, has basically been achieved in this country". In 1938, "The History of the Communist Party of the Soviet Union (Bolsheviks)", colloquially known as the "Short Course", was released; the historian Robert Conquest later referred to it as the "central text of Stalinism". A number of authorised Stalin biographies were also published, although Stalin generally wanted to be portrayed as the embodiment of the Communist Party rather than have his life story explored. During the later 1930s, Stalin placed "a few limits on the worship of his own greatness". By 1938, Stalin's inner circle had gained a degree of stability, containing the personalities who would remain there until Stalin's death.
Seeking improved international relations, in 1934 the Soviet Union secured membership of the League of Nations, from which it had previously been excluded. Stalin initiated confidential communications with Hitler in October 1933, shortly after the latter came to power in Germany. Stalin admired Hitler, particularly his manoeuvres to remove rivals within the Nazi Party in the Night of the Long Knives. Stalin nevertheless recognised the threat posed by fascism and sought to establish better links with the liberal democracies of Western Europe; in May 1935, the Soviets signed a treaty of mutual assistance with France and Czechoslovakia. At the Communist International's 7th Congress, held in July–August 1935, the Soviet government encouraged Marxist–Leninists to unite with other leftists as part of a popular front against fascism. In turn, the anti-communist governments of Germany, Fascist Italy, and Japan signed the Anti-Comintern Pact of 1936.
When the Spanish Civil War broke out in July 1936, the Soviets sent 648 aircraft and 407 tanks to the left-wing Republican faction; these were accompanied by 3,000 Soviet troops and 42,000 members of the International Brigades set up by the Communist International. Stalin took a strong personal interest in the Spanish situation. Germany and Italy backed the Nationalist faction, which was ultimately victorious in March 1939. With the outbreak of the Second Sino-Japanese War in July 1937, the Soviet Union and China signed a non-aggression pact the following August. Stalin aided the Chinese, as the KMT and the Communists had suspended their civil war and formed the united front that Stalin had desired.
Stalin often gave conflicting signals regarding state repression. In May 1933, he released from prison many convicted of minor offences, ordering the security services not to enact further mass arrests and deportations. In September 1934, he launched a commission to investigate false imprisonments; that same month he called for the execution of workers at the Stalin Metallurgical Factory accused of spying for Japan. This mixed approach began to change in December 1934, after prominent party member Sergey Kirov was murdered. After the murder, Stalin became increasingly concerned by the threat of assassination, improved his personal security, and rarely went out in public. State repression intensified after Kirov's death; Stalin instigated this, reflecting his prioritisation of security above other considerations. Stalin issued a decree establishing NKVD troikas which could mete out rulings without involving the courts. In 1935, he ordered the NKVD to expel suspected counter-revolutionaries from urban areas; in early 1935, over 11,000 were expelled from Leningrad. In 1936, Nikolai Yezhov became head of the NKVD.
Stalin orchestrated the arrest of many former opponents in the Communist Party as well as sitting members of the Central Committee: denounced as Western-backed mercenaries, many were imprisoned or exiled internally. The first Moscow Trial took place in August 1936; Kamenev and Zinoviev were among those accused of plotting assassinations, found guilty in a show trial, and executed. The second Moscow Show Trial took place in January 1937, and the third in March 1938, in which Bukharin and Rykov were accused of involvement in the alleged Trotskyite-Zinovievite terrorist plot and sentenced to death. By late 1937, all remnants of collective leadership were gone from the Politburo, which was controlled entirely by Stalin.
There were mass expulsions from the party, with Stalin commanding foreign communist parties to also purge anti-Stalinist elements.
In May 1937, most members of the military Supreme Command were arrested, amid mass arrests throughout the military, often on fabricated charges. During the 1930s and 1940s, NKVD groups also assassinated defectors and opponents abroad; in August 1940, Trotsky was assassinated in Mexico, eliminating the last of Stalin's opponents among the former Party leadership. These purges replaced most of the party's old guard with younger officials who did not remember a time before Stalin's leadership and who were regarded as more personally loyal to him. Party functionaries readily carried out orders from above and sought to ingratiate themselves with Stalin to avoid becoming victims of the purge. Such functionaries often carried out a greater number of arrests and executions than the quotas set by Stalin's central government.
Repressions further intensified in December 1936 and remained at a high level until November 1938, a period known as the Great Purge. By the latter part of 1937, the purges had moved beyond the party and were affecting the wider population. In July 1937, the Politburo ordered a purge of "anti-Soviet elements" in society, targeting anti-Stalin Bolsheviks, former Mensheviks and Socialist Revolutionaries, priests, ex-White Army soldiers, and common criminals. That month, Stalin and Yezhov signed Order No. 00447, listing 268,950 people for arrest, of whom 75,950 were executed. He also initiated "national operations", the ethnic cleansing of non-Soviet ethnic groups—among them Poles, Germans, Latvians, Finns, Greeks, Koreans, and Chinese—through internal or external exile. During these years, approximately 1.6 million people were arrested, 700,000 were shot, and an unknown number died under NKVD torture.
Stalin initiated all key decisions during the Terror, personally directing many of its operations and taking an interest in their implementation. His motives in doing so have been much debated by historians. His personal writings from the period were — according to Khlevniuk — "unusually convoluted and incoherent", filled with claims about enemies encircling him. He was particularly concerned at the success that right-wing forces had in overthrowing the leftist Spanish government, fearing a domestic fifth column in the event of future war with Japan and Germany. The Great Terror ended when Yezhov was removed as the head of the NKVD, to be replaced by Lavrentiy Beria, a man totally devoted to Stalin. Yezhov was arrested in April 1939 and executed in 1940. The Terror damaged the Soviet Union's reputation abroad, particularly among sympathetic leftists. As it wound down, Stalin sought to deflect responsibility from himself, blaming its "excesses" and "violations of law" on Yezhov. According to historian James Harris, contemporary archival research shows that the motivation behind the purges was not Stalin attempting to establish his own personal dictatorship; evidence suggests he was committed to building the socialist state envisioned by Lenin. The real motivation for the terror, according to Harris, was an exaggerated fear of counterrevolution.
As a Marxist–Leninist, Stalin expected an inevitable conflict between competing capitalist powers; after Nazi Germany annexed Austria and then part of Czechoslovakia in 1938, he recognised that a war was looming. He sought to maintain Soviet neutrality, hoping that a German war against France and Britain would lead to Soviet dominance in Europe. Militarily, the Soviets also faced a threat from the east, with Soviet troops clashing with the expansionist Japanese in the latter part of the 1930s. Stalin initiated a military build-up, with the Red Army more than doubling in size between January 1939 and June 1941, although in the haste to expand, many of its officers were poorly trained. Between 1940 and 1941, he also purged the military, leaving it with a severe shortage of trained officers when war broke out.
As Britain and France seemed unwilling to commit to an alliance with the Soviet Union, Stalin decided that a better deal could be had with the Germans. On 3 May 1939, he replaced his western-oriented foreign minister Maxim Litvinov with Vyacheslav Molotov. In May 1939, Germany began negotiations with the Soviets, proposing that Eastern Europe be divided between the two powers. Stalin saw this as an opportunity both for territorial expansion and for temporary peace with Germany. In August 1939, the Soviet Union signed the Molotov–Ribbentrop Pact, a non-aggression pact with Germany negotiated by Molotov and German foreign minister Joachim von Ribbentrop. A week later, Germany invaded Poland, prompting the UK and France to declare war on Germany. On 17 September, the Red Army entered eastern Poland, officially to restore order amid the collapse of the Polish state. On 28 September, Germany and the Soviet Union exchanged some of their newly conquered territories; Germany gained the linguistically Polish-dominated areas of Lublin Province and part of Warsaw Province while the Soviets gained Lithuania. A German–Soviet Frontier Treaty was signed shortly after, in Stalin's presence. The two states continued trading, undermining the British blockade of Germany.
The Soviets further demanded parts of eastern Finland, but the Finnish government refused. The Soviets invaded Finland in November 1939, yet despite numerical inferiority, the Finns kept the Red Army at bay. International opinion backed Finland, with the Soviets being expelled from the League of Nations. Embarrassed by their inability to defeat the Finns, the Soviets signed an interim peace treaty, in which they received territorial concessions from Finland. In June 1940, the Red Army occupied the Baltic states, which were forcibly merged into the Soviet Union in August; they also invaded and annexed Bessarabia and northern Bukovina, parts of Romania. The Soviets sought to forestall dissent in these new East European territories with mass repressions. One of the most noted instances was the Katyn massacre of April and May 1940, in which around 22,000 members of the Polish armed forces, police, and intelligentsia were executed.
The speed of the German victory over and occupation of France in mid-1940 took Stalin by surprise. He increasingly focused on appeasement with the Germans to delay any conflict with them. After the Tripartite Pact was signed by Axis Powers Germany, Japan and Italy, in October 1940, Stalin proposed that the USSR also join the Axis alliance. To demonstrate peaceful intentions toward Germany, in April 1941 the Soviets signed a neutrality pact with Japan. Although "de facto" head of government for a decade and a half, Stalin concluded that relations with Germany had deteriorated to such an extent that he needed to deal with the problem as "de jure" head of government as well: on 6 May, Stalin replaced Molotov as Premier of the Soviet Union.
In June 1941, Germany invaded the Soviet Union, initiating the war on the Eastern Front. Although intelligence agencies had repeatedly warned him of Germany's intentions, Stalin was taken by surprise. He formed a State Defence Committee, which he headed as Supreme Commander, as well as a military Supreme Command (Stavka), with Georgy Zhukov as its Chief of Staff. The German tactic of "blitzkrieg" was initially highly effective; the Soviet air force in the western borderlands was destroyed within two days. The German Wehrmacht pushed deep into Soviet territory; soon, Ukraine, Byelorussia, and the Baltic states were under German occupation, Leningrad was under siege, and Soviet refugees were flooding into Moscow and surrounding cities. By July, Germany's Luftwaffe was bombing Moscow, and by October the Wehrmacht was amassing for a full assault on the capital. Plans were made for the Soviet government to evacuate to Kuibyshev, although Stalin decided to remain in Moscow, believing his flight would damage troop morale. The German advance on Moscow was halted after two months of battle in increasingly harsh weather conditions.
Going against the advice of Zhukov and other generals, Stalin emphasised attack over defence. In June 1941, he ordered a scorched earth policy of destroying infrastructure and food supplies before the Germans could seize them, also commanding the NKVD to kill around 100,000 political prisoners in areas the Wehrmacht approached. He purged the military command; several high-ranking figures were demoted or reassigned and others were arrested and executed. With Order No. 270, Stalin commanded soldiers risking capture to fight to the death, describing those who were captured as traitors; among those taken prisoner of war by the Germans was Stalin's son Yakov, who died in their custody. Stalin issued Order No. 227 in July 1942, which directed that those retreating without authorisation would be placed in "penal battalions" used as cannon fodder on the front lines. Amid the fighting, both the German and Soviet armies disregarded the law of war set forth in the Geneva Conventions; the Soviets heavily publicised Nazi massacres of communists, Jews, and Romani. Stalin exploited Nazi anti-Semitism, and in April 1942 he sponsored the Jewish Anti-Fascist Committee (JAC) to garner Jewish and foreign support for the Soviet war effort.
The Soviets allied with the United Kingdom and United States; although the US joined the war against Germany in 1941, little direct American assistance reached the Soviets until late 1942. Responding to the invasion, the Soviets intensified their industrial enterprises in central Russia, focusing almost entirely on production for the military. They achieved high levels of industrial productivity, outstripping that of Germany. During the war, Stalin was more tolerant of the Russian Orthodox Church, allowing it to resume some of its activities and meeting with Patriarch Sergius in September 1943. He also permitted a wider range of cultural expression, notably allowing formerly suppressed writers and artists like Anna Akhmatova and Dmitri Shostakovich to disseminate their work more widely. The Internationale was dropped as the country's national anthem, to be replaced with a more patriotic song. The government increasingly promoted Pan-Slavist sentiment, while encouraging increased criticism of cosmopolitanism, particularly the idea of "rootless cosmopolitanism", an approach with particular repercussions for Soviet Jews. Comintern was dissolved in 1943, and Stalin encouraged foreign Marxist–Leninist parties to emphasise nationalism over internationalism to broaden their domestic appeal.
In April 1942, Stalin overrode Stavka by ordering the Soviets' first serious counter-attack, an attempt to seize German-held Kharkov in eastern Ukraine. This attack proved unsuccessful. That year, Hitler shifted his primary goal from overall victory on the Eastern Front to securing the oil fields of the southern Soviet Union, which were crucial to a long-term German war effort. While Red Army generals saw evidence that Hitler would shift efforts south, Stalin considered this to be a flanking move in a renewed effort to take Moscow. In June 1942, the German Army began a major offensive in Southern Russia, threatening Stalingrad; Stalin ordered the Red Army to hold the city at all costs. This resulted in the protracted Battle of Stalingrad. In December 1942, he placed Konstantin Rokossovski in charge of holding the city. In February 1943, the German troops attacking Stalingrad surrendered. The Soviet victory there marked a major turning point in the war; in commemoration, Stalin declared himself Marshal of the Soviet Union.
By November 1942, the Soviets had begun to repulse the Germans' strategically important southern campaign; despite 2.5 million Soviet casualties in that effort, it permitted the Soviets to take the offensive for most of the rest of the war on the Eastern Front. Germany attempted an encirclement attack at Kursk, which the Soviets successfully repulsed. By the end of 1943, the Soviets had recovered half of the territory taken by the Germans between 1941 and 1942. Soviet military-industrial output had also increased substantially from late 1941 to early 1943 after Stalin moved factories well to the east of the front, safe from German invasion and aerial assault.
In Allied countries, Stalin was increasingly depicted in a positive light over the course of the war. In 1941, the London Philharmonic Orchestra performed a concert to celebrate his birthday, and in 1942, "Time" magazine named him "Man of the Year". When Stalin learned that people in Western countries affectionately called him "Uncle Joe", he was initially offended, regarding it as undignified. There remained mutual suspicions between Stalin, British Prime Minister Winston Churchill, and U.S. President Franklin D. Roosevelt, who were together known as the "Big Three". Churchill flew to Moscow to visit Stalin in August 1942 and again in October 1944. Stalin scarcely left Moscow throughout the war, frustrating Roosevelt and Churchill with his reluctance to travel to meet them.
In November 1943, Stalin met with Churchill and Roosevelt in Tehran, a location of Stalin's choosing. There, Stalin and Roosevelt got on well, with both desiring the post-war dismantling of the British Empire. At Tehran, the trio agreed that, to prevent Germany from regaining military strength, the German state should be broken up. Roosevelt and Churchill also agreed to Stalin's demand that the German city of Königsberg be declared Soviet territory. Stalin was impatient for the UK and US to open a Western Front to take the pressure off the East; they eventually did so in mid-1944. Stalin insisted that, after the war, the Soviet Union should incorporate the portions of Poland it had occupied pursuant to the Molotov–Ribbentrop Pact with Germany, which Churchill opposed. Discussing the fate of the Balkans, later in 1944 Churchill agreed to Stalin's suggestion that after the war, Bulgaria, Romania, Hungary, and Yugoslavia would come under the Soviet sphere of influence while Greece would come under that of the West.
In 1944, the Soviet Union made significant advances across Eastern Europe toward Germany, including Operation Bagration, a massive offensive in the Byelorussian SSR against the German Army Group Centre. That year the German armies were pushed out of the Baltic states, which were then re-annexed by the Soviet Union. As the Red Army reconquered the Caucasus and Crimea, various ethnic groups living in the region—the Kalmyks, Chechens, Ingush, Karachay, Balkars, and Crimean Tatars—were accused of having collaborated with the Germans. Using the idea of collective responsibility as a basis, Stalin's government abolished their autonomous republics and between late 1943 and 1944 deported the majority of their populations to Central Asia and Siberia. Over one million people were deported as a result of the policy.
In February 1945, the three leaders met at the Yalta Conference. Roosevelt and Churchill conceded to Stalin's demand that Germany pay the Soviet Union 20 billion dollars in reparations, and that his country be permitted to annex Sakhalin and the Kurile Islands in exchange for entering the war against Japan. An agreement was also made that a post-war Polish government should be a coalition consisting of both communist and conservative elements. Privately, Stalin sought to ensure that Poland would come fully under Soviet influence. The Red Army withheld assistance to Polish resistance fighters battling the Germans in the Warsaw Uprising, with Stalin believing that any victorious Polish militants could interfere with his aspirations to dominate Poland through a future Marxist government. Although concealing his desires from the other Allied leaders, Stalin placed great emphasis on capturing Berlin first, believing that this would enable him to bring more of Europe under long-term Soviet control. Churchill was concerned that this was the case, and unsuccessfully tried to convince the US that the Western Allies should pursue the same goal.
In April 1945, the Red Army seized Berlin, Hitler committed suicide, and Germany surrendered in May. Stalin had wanted Hitler captured alive; he had Hitler's remains brought to Moscow to prevent them becoming a relic for Nazi sympathisers. As the Red Army conquered German territory, it discovered the extermination camps that the Nazi administration had run. Many Soviet soldiers engaged in looting, pillaging, and rape, both in Germany and parts of Eastern Europe, and Stalin refused to punish the offenders. After receiving a complaint about this from the Yugoslav communist Milovan Djilas, Stalin asked how, after experiencing the traumas of war, a soldier could "react normally? And what is so awful in his having fun with a woman, after such horrors?"
With Germany defeated, Stalin switched focus to the war with Japan, transferring half a million troops to the Far East. He was pressed by his allies to enter the war and wanted to cement the Soviet Union's strategic position in Asia. On 8 August, between the US atomic bombings of Hiroshima and Nagasaki, the Soviet army invaded Japanese-occupied Manchuria and defeated the Kwantung Army. These events contributed to the Japanese surrender and the war's end. Soviet forces continued to expand until they occupied all their territorial concessions, but the U.S. rebuffed Stalin's desire for the Red Army to take a role in the Allied occupation of Japan.
Stalin attended the Potsdam Conference in July–August 1945, alongside his new British and U.S. counterparts, Prime Minister Clement Attlee and President Harry Truman. At the conference, Stalin repeated previous promises to Churchill that he would refrain from a "Sovietization" of Eastern Europe. Stalin pushed for reparations from Germany without regard for the minimum supplies German citizens needed to survive, which worried Truman and Churchill, who thought that Germany would become a financial burden for Western powers. He also pushed for "war booty", which would permit the Soviet Union to directly seize property from conquered nations without quantitative or qualitative limitation; a clause was added permitting this to occur with some limitations. Germany was divided into four occupation zones: Soviet, U.S., British, and French, with Berlin itself—located within the Soviet zone—likewise subdivided.
After the war, Stalin was—according to Service—at the "apex of his career". Within the Soviet Union he was widely regarded as the embodiment of victory and patriotism. His armies controlled Central and Eastern Europe up to the River Elbe.
In June 1945, Stalin adopted the title of Generalissimus, and stood atop Lenin's Mausoleum to watch a celebratory parade led by Zhukov through Red Square. At a banquet held for army commanders, he described the Russian people as "the outstanding nation" and "leading force" within the Soviet Union, the first time that he had unequivocally endorsed the Russians over other Soviet nationalities. In 1946, the state published Stalin's "Collected Works". In 1947, it brought out a second edition of his official biography, which eulogised him to a greater extent than its predecessor. He was quoted in "Pravda" on a daily basis and pictures of him remained pervasive on the walls of workplaces and homes.
Despite his strengthened international position, Stalin was cautious about internal dissent and desire for change among the population. He was also concerned about his returning armies, who had been exposed to a wide range of consumer goods in Germany, many of which they had looted and brought back with them. In this he recalled the 1825 Decembrist Revolt by Russian soldiers returning from having defeated France in the Napoleonic Wars. He ensured that returning Soviet prisoners of war went through "filtration" camps as they arrived in the Soviet Union, in which 2,775,700 were interrogated to determine whether they were traitors; about half were then imprisoned in labour camps. In the Baltic states, where there was much opposition to Soviet rule, de-kulakisation and de-clericalisation programmes were initiated, resulting in 142,000 deportations between 1945 and 1949. The Gulag system of labour camps was expanded further. By January 1953, three percent of the Soviet population was imprisoned or in internal exile, with 2.8 million in "special settlements" in isolated areas and another 2.5 million in camps, penal colonies, and prisons.
The NKVD was ordered to catalogue the scale of destruction during the war. It established that 1,710 Soviet towns and 70,000 villages had been destroyed, and recorded that between 26 and 27 million Soviet citizens had been killed, with millions more wounded, malnourished, or orphaned. In the war's aftermath, some of Stalin's associates suggested modifications to government policy. Post-war Soviet society was more tolerant than its pre-war phase in various respects: Stalin allowed the Russian Orthodox Church to retain the churches it had opened during the war, and academia and the arts were permitted greater freedom than they had enjoyed prior to 1941. Recognising the need for drastic steps to combat inflation and promote economic regeneration, in December 1947 Stalin's government devalued the ruble and abolished the ration-book system. Capital punishment was abolished in 1947 but reinstated in 1950.
Stalin's health was deteriorating, and heart problems forced a two-month vacation in the latter part of 1945.
He grew increasingly concerned that senior political and military figures might try to oust him; he prevented any of them from becoming powerful enough to rival him and had their apartments bugged with listening devices. He demoted Molotov, and increasingly favoured Beria and Malenkov for key positions. In 1949, he brought Nikita Khrushchev from Ukraine to Moscow, appointing him a Central Committee secretary and the head of the city's party branch. In the Leningrad Affair, the city's leadership was purged amid accusations of treachery; executions of many of the accused took place in 1950.
In the post-war period there were often food shortages in Soviet cities, and the USSR experienced a major famine from 1946 to 1947. Sparked by a drought and the ensuing bad harvest of 1946, it was exacerbated by government policy on food procurement, including the state's decision to build up stocks and export food internationally rather than distributing it to famine-hit areas. Current estimates indicate that between one million and 1.5 million people died from malnutrition or disease as a result. While agricultural production stagnated, Stalin focused on a series of major infrastructure projects, including the construction of hydroelectric plants, canals, and railway lines running to the polar north. Much of this was constructed by prison labour.
In the aftermath of the Second World War, the British Empire declined, leaving the U.S. and USSR as the dominant world powers. Tensions among these former Allies grew, resulting in the Cold War. Although Stalin publicly described the British and U.S. governments as aggressive, he thought war with them unlikely in the near term, expecting several decades of peace. He nevertheless secretly intensified Soviet research into nuclear weaponry, intent on creating an atom bomb. Still, Stalin recognised the undesirability of nuclear conflict, saying in 1949 that "atomic weapons can hardly be used without spelling the end of the world." He personally took a keen interest in the development of the weapon. In August 1949, the bomb was successfully tested in the deserts outside Semipalatinsk in Kazakhstan. Stalin also initiated a new military build-up; the Soviet army was expanded from 2.9 million soldiers, as it stood in 1949, to 5.8 million by 1953.
The US began pushing its interests on every continent, acquiring air force bases in Africa and Asia and ensuring pro-U.S. regimes took power across Latin America. In June 1947 it launched the Marshall Plan, through which it sought to undermine Soviet hegemony in eastern Europe; the financial assistance it offered was conditional on recipient countries opening their markets to trade, a condition the US knew the Soviets would never accept.
The Allies demanded that Stalin withdraw the Red Army from northern Iran. He initially refused, leading to an international crisis in 1946, but one year later he relented and moved the Soviet troops out.
Stalin also tried to maximise Soviet influence on the world stage, unsuccessfully pushing for Libya—recently liberated from Italian occupation—to become a Soviet protectorate. He sent Molotov as his representative to San Francisco to take part in negotiations to form the United Nations, insisting that the Soviets have a place on the Security Council. In April 1949, the Western powers established the North Atlantic Treaty Organisation (NATO), an international military alliance of capitalist countries. Within Western countries, Stalin was increasingly portrayed as the "most evil dictator alive" and compared to Hitler.
Stalin edited and rewrote sections of "Falsifiers of History", published first as a series of "Pravda" articles in February 1948 and then in book form. Written in response to public revelations of the 1939 Soviet alliance with Germany, it focused on blaming Western powers for the war. He erroneously claimed that the initial German advance in the early part of the war was not a result of Soviet military weakness, but rather a deliberate Soviet strategic retreat. In December 1949, celebrations took place to mark Stalin's seventieth birthday (albeit a year late, as he had actually been born in 1878), at which he attended an event in the Bolshoi Theatre alongside Marxist–Leninist leaders from across Europe and Asia.
After the war, Stalin sought to retain Soviet dominance across Eastern Europe while expanding Soviet influence in Asia. Wary of the Western Allies' response, he avoided immediately installing Communist Party governments across Eastern Europe, instead initially ensuring that Marxist–Leninists were placed in coalition ministries. In contrast to his approach to the Baltic states, he rejected proposals to merge the new communist states into the Soviet Union, recognising them instead as independent nation-states.
He was faced with the problem that there were few Marxists left in Eastern Europe, most having been killed by the Nazis. He demanded that war reparations be paid by Germany and its Axis allies Hungary, Romania, and the Slovak Republic. Aware that these countries had been pushed toward socialism through invasion rather than by proletarian revolution, Stalin referred to them not as "dictatorships of the proletariat" but as "people's democracies", suggesting that in these countries there was a pro-socialist alliance combining the proletariat, peasantry, and lower middle class.
Churchill observed that an "Iron Curtain" had been drawn across Europe, separating the east from the west. In September 1947, a meeting of East European communist leaders was held in Szklarska Poręba, Poland, at which Cominform was formed to co-ordinate the Communist Parties across Eastern Europe, as well as those in France and Italy. Stalin did not personally attend the meeting, sending Zhdanov in his place. Various East European communists also visited Stalin in Moscow, where he offered advice on their ideas; for instance, he cautioned against the Yugoslav idea of a Balkan federation incorporating Bulgaria and Albania. Stalin had a particularly strained relationship with the Yugoslav leader Josip Broz Tito, due to the latter's continued calls for a Balkan federation and for Soviet aid for the communist forces in the ongoing Greek Civil War. In March 1948, Stalin launched an anti-Tito campaign, accusing the Yugoslav communists of adventurism and of deviating from Marxist–Leninist doctrine. At the second Cominform conference, held in Bucharest in June 1948, East European communist leaders all denounced Tito's government, accusing it of fascism and of serving Western capitalism. Stalin ordered several attempts on Tito's life and contemplated invading Yugoslavia.
Stalin suggested that a unified but demilitarised German state be established, hoping that it would either come under Soviet influence or remain neutral. When the US and UK remained opposed to this, Stalin sought to force their hand by blockading Berlin in June 1948. He gambled that the others would not risk war, but they airlifted supplies into West Berlin until May 1949, when Stalin relented and ended the blockade. In September 1949 the Western powers transformed Western Germany into an independent Federal Republic of Germany; in response, the Soviets formed East Germany into the German Democratic Republic in October. In accordance with their earlier agreements, the Western powers expected Poland to become an independent state with free democratic elections. Instead, the Soviets merged the various Polish socialist parties into the Polish United Workers' Party, and vote rigging was used to ensure that it secured office. The 1947 Hungarian elections were likewise rigged, with the Hungarian Working People's Party taking control. In Czechoslovakia, where the communists enjoyed a level of popular support, they became the largest party in the 1946 elections. The monarchies of Bulgaria and Romania were abolished. Across Eastern Europe, the Soviet model was enforced, with the termination of political pluralism, agricultural collectivisation, and investment in heavy industry. The aim was economic autarky within the Eastern Bloc.
In October 1949, Mao took power in China; Marxist governments now controlled a third of the world's land mass. Privately, Stalin revealed that he had underestimated the Chinese Communists' ability to win the civil war, having instead encouraged them to make another peace with the KMT. In December 1949, Mao visited Stalin. Initially Stalin refused to repeal the Sino-Soviet Treaty of 1945, which significantly benefited the Soviet Union over China, although in January 1950 he relented and agreed to sign a new treaty between the two countries. Stalin was concerned that Mao might follow Tito's example by pursuing a course independent of Soviet influence, and made it known that if displeased he would withdraw assistance from China; the Chinese desperately needed that assistance after decades of civil war.
At the end of the Second World War, the Soviet Union and the United States divided up the Korean Peninsula, formerly a Japanese colonial possession, along the 38th parallel, setting up a communist government in the north and a pro-Western government in the south. North Korean leader Kim Il-sung visited Stalin in March 1949 and again in March 1950; he wanted to invade the south, and although Stalin was initially reluctant to provide support, he eventually agreed by May 1950. The North Korean Army launched the Korean War by invading the south in June 1950, making swift gains and capturing Seoul. Both Stalin and Mao believed that a swift victory would ensue. The U.S. went to the UN Security Council—which the Soviets were boycotting over its refusal to recognise Mao's government—and secured military support for the South Koreans. U.S.-led forces pushed the North Koreans back. Wanting to avoid direct Soviet conflict with the U.S., Stalin convinced the Chinese to aid the North.
The Soviet Union was one of the first nations to extend diplomatic recognition to the newly created state of Israel in 1948. When the Israeli ambassador Golda Meir arrived in the USSR, Stalin was angered by the Jewish crowds who gathered to greet her, and further angered by Israel's growing alliance with the U.S. After falling out with Israel, Stalin launched an anti-Jewish campaign within the Soviet Union and the Eastern Bloc. In November 1948, he abolished the JAC, and show trials took place for some of its members. The Soviet press attacked Zionism, Jewish culture, and "rootless cosmopolitanism", with growing levels of anti-Semitism being expressed across Soviet society. Stalin's increasing tolerance of anti-Semitism may have stemmed from his growing Russian nationalism, or from the recognition that anti-Semitism had proved a useful mobilising tool for Hitler and could serve him likewise; he may also have increasingly viewed the Jewish people as a "counter-revolutionary" nation whose members were loyal to the U.S. There were rumours, never substantiated, that Stalin was planning to deport all Soviet Jews to the Jewish Autonomous Region in Birobidzhan, eastern Siberia.
In his later years, Stalin was in poor health. He took increasingly long holidays; in 1950 and again in 1951 he spent almost five months vacationing at his Abkhazian dacha. Stalin nevertheless mistrusted his doctors; in January 1952 he had one imprisoned for suggesting that he should retire to improve his health. In September 1952, several Kremlin doctors were arrested for allegedly plotting to kill senior politicians in what came to be known as the Doctors' Plot; the majority of the accused were Jewish. He ordered the arrested doctors to be tortured to extract confessions. In November, the Slánský trial took place in Czechoslovakia, in which 13 senior Communist Party figures, 11 of them Jewish, were accused and convicted of being part of a vast Zionist-American conspiracy to subvert Eastern Bloc governments. That same month, a much-publicised trial of accused Jewish industrial wreckers took place in Ukraine. Earlier, in 1951, Stalin had initiated the Mingrelian affair, a purge of the Georgian branch of the Communist Party which resulted in over 11,000 deportations.
From 1946 until his death, Stalin gave only three public speeches, two of which lasted just a few minutes. The amount of written material that he produced also declined. In 1950, Stalin published the article "Marxism and Problems of Linguistics", which reflected his interest in questions of Russian nationhood.
In 1952, Stalin's last book, "Economic Problems of Socialism in the USSR", was published; it sought to provide a guide to leading the country after his death. In October 1952, Stalin gave a ninety-minute speech at the Central Committee plenum, in which he emphasised what he regarded as the leadership qualities necessary in the future and highlighted the weaknesses of various potential successors, particularly Molotov and Mikoyan. That year, he also abolished the Politburo, replacing it with a larger body which he called the Presidium.
On 1 March 1953, Stalin's staff found him semi-conscious on the bedroom floor of his Volynskoe dacha; he had suffered a cerebral hemorrhage. He was moved onto a couch and remained there for three days, during which he was hand-fed using a spoon, given various medicines and injections, and treated with leeches. Svetlana and Vasily were called to the dacha on 2 March; the latter was drunk and angrily shouted at the doctors, resulting in him being sent home. Stalin died on 5 March 1953. According to Svetlana, it had been "a difficult and terrible death". An autopsy revealed that he had died of a cerebral hemorrhage and that his cerebral arteries had been severely damaged by atherosclerosis. It is possible that Stalin was murdered; Beria has been suspected, although no firm evidence has ever appeared.
Stalin's death was announced on 6 March. The body was embalmed and placed on display in Moscow's House of Unions for three days; the crowds were so large that a crush killed around 100 people. The funeral involved the body being laid to rest in Lenin's Mausoleum in Red Square on 9 March; hundreds of thousands attended. That month saw a surge in arrests for "anti-Soviet agitation" as those celebrating Stalin's death came to police attention. The Chinese government instituted a period of official mourning for Stalin's death.
Stalin left no anointed successor nor a framework within which a transfer of power could take place. The Central Committee met on the day of his death, with Malenkov, Beria, and Khrushchev emerging as the party's key figures. The system of collective leadership was restored, and measures were introduced to prevent any one member from again attaining autocratic domination. The collective leadership included the following eight senior members of the Presidium of the Central Committee of the Communist Party of the Soviet Union, listed according to the order of precedence formally presented on 5 March 1953: Georgy Malenkov, Lavrentiy Beria, Vyacheslav Molotov, Kliment Voroshilov, Nikita Khrushchev, Nikolai Bulganin, Lazar Kaganovich, and Anastas Mikoyan. Reforms to the Soviet system were implemented immediately. Economic reform scaled back the mass construction projects, placed a new emphasis on house building, and eased the levels of taxation on the peasantry to stimulate production. The new leaders sought rapprochement with Yugoslavia and a less hostile relationship with the U.S., pursuing a negotiated end to the Korean War in July 1953. The doctors who had been imprisoned were released, and the anti-Semitic purges ceased. A mass amnesty for those imprisoned for non-political crimes was issued, halving the country's inmate population, while the state security and Gulag systems were reformed, with torture being banned in April 1953.
Stalin claimed to have embraced Marxism at the age of fifteen, and it served as the guiding philosophy throughout his adult life; according to Kotkin, Stalin held "zealous Marxist convictions", while Montefiore suggested that Marxism held a "quasi-religious" value for Stalin. Although he never became a Georgian nationalist, during his early life elements from Georgian nationalist thought blended with Marxism in his outlook. The historian Alfred J. Rieber noted that he had been raised in "a society where rebellion was deeply rooted in folklore and popular rituals". Stalin believed in the need to adapt Marxism to changing circumstances; in 1917, he declared that "there is dogmatic Marxism and there is creative Marxism. I stand on the ground of the latter". Volkogonov believed that Stalin's Marxism was shaped by his "dogmatic turn of mind", suggesting that this had been instilled in the Soviet leader during his education in religious institutions. According to scholar Robert Service, Stalin's "few innovations in ideology were crude, dubious developments of Marxism". Some of these derived from political expediency rather than any sincere intellectual commitment; Stalin would often turn to ideology "post hoc" to justify his decisions. Stalin referred to himself as a "praktik", meaning that he was more of a practical revolutionary than a theoretician.
As a Marxist and an extreme anti-capitalist, Stalin believed in an inevitable "class war" between the world's proletariat and bourgeoisie. He believed that the working classes would prove successful in this struggle and would establish a dictatorship of the proletariat, regarding the Soviet Union as an example of such a state. He also believed that this proletarian state would need to introduce repressive measures against foreign and domestic "enemies" to ensure the full crushing of the propertied classes, and thus that the class war would intensify with the advance of socialism. As a propaganda tool, the shaming of "enemies" explained all inadequate economic and political outcomes, the hardships endured by the populace, and military failures. The new state would then be able to ensure that all citizens had access to work, food, shelter, healthcare, and education, with the wastefulness of capitalism eliminated by a new, standardised economic system. According to Sandle, Stalin was "committed to the creation of a society that was industrialised, collectivised, centrally planned and technologically advanced."
Stalin adhered to the Leninist variant of Marxism. In his book "Foundations of Leninism", he stated that "Leninism is the Marxism of the epoch of imperialism and of the proletarian revolution". He claimed to be a loyal Leninist, although he was—according to Service—"not a blindly obedient Leninist". Stalin respected Lenin, but not uncritically, and spoke out when he believed that Lenin was wrong. During the period of his revolutionary activity, Stalin regarded some of Lenin's views and actions as the self-indulgent activities of a spoiled émigré, deeming them counterproductive for those Bolshevik activists based within the Russian Empire itself. After the October Revolution, they continued to have differences. Whereas Lenin believed that all countries across Europe and Asia would readily unite as a single state following proletarian revolution, Stalin argued that national pride would prevent this, and that different socialist states would have to be formed; in his view, a country like Germany would not readily submit to being part of a Russian-dominated federal state. After Lenin's death, Stalin relied heavily on Lenin's writings—far more so than those of Marx and Engels—to guide him in affairs of state. Stalin biographer Oleg Khlevniuk nevertheless believed that the pair developed a "strong bond" over the years, while Kotkin suggested that Stalin's friendship with Lenin was "the single most important relationship in Stalin's life". Stalin adopted the Leninist view of the need for a revolutionary vanguard which could lead the proletariat rather than being led by it. Leading this vanguard, he believed that the Soviet peoples needed a strong, central figure—akin to a Tsar—whom they could rally around. In his words, "the people need a Tsar, whom they can worship and for whom they can live and work". He read about, and admired, two Tsars in particular: Ivan the Terrible and Peter the Great. In the personality cult constructed around him, he was known as the "vozhd", an equivalent to the Italian "duce" and the German "Führer".
Stalinism was a development of Leninism, and while Stalin avoided using the term "Marxism–Leninism–Stalinism", he allowed others to do so. Following Lenin's death, Stalin contributed to the theoretical debates within the Communist Party, namely by developing the idea of "Socialism in One Country". This concept was intricately linked to factional struggles within the party, particularly against Trotsky. Stalin first developed the idea in December 1924 and elaborated upon it in his writings of 1925–26. His doctrine held that socialism could be completed in Russia, but that its final victory there could not be guaranteed because of the threat from capitalist intervention. For this reason, he retained the Leninist view that world revolution was still a necessity to ensure the ultimate victory of socialism. Although retaining the Marxist belief that the state would wither away as socialism transformed into pure communism, he believed that the Soviet state would remain until the final defeat of international capitalism. This concept synthesised Marxist and Leninist ideas with nationalist ideals, and served to discredit Trotsky—who promoted the idea of "permanent revolution"—by presenting the latter as a defeatist with little faith in Russian workers' abilities to construct socialism.
Stalin viewed nations as contingent entities which were formed by capitalism and could merge into others. Ultimately he believed that all nations would merge into a single, global human community, and regarded all nations as inherently equal. In his work, he stated that "the right of secession" should be offered to the ethnic minorities of the Russian Empire, but that they should not be encouraged to take that option. He was of the view that if they became fully autonomous, they would end up being controlled by the most reactionary elements of their communities; as an example he cited the largely illiterate Tatars, whom he claimed would end up dominated by their mullahs. Stalin argued that the Jews possessed a "national character" but were not a "nation" and were thus unassimilable. He argued that Jewish nationalism, particularly Zionism, was hostile to socialism. According to Khlevniuk, Stalin reconciled Marxism with great-power imperialism, and his expansion of the empire made him a worthy successor to the Russian tsars. Service argued that Stalin's Marxism was imbued with a great deal of Russian nationalism. According to Montefiore, Stalin's embrace of the Russian nation was pragmatic, as the Russians were the core of the population of the USSR; it was not a rejection of his Georgian origins. Stalin's push for Soviet westward expansion into eastern Europe resulted in accusations of Russian imperialism.
Ethnically Georgian, Stalin grew up speaking the Georgian language, and did not begin learning Russian until the age of eight or nine. He remained proud of his Georgian identity, and throughout his life retained a heavy Georgian accent when speaking Russian. According to Montefiore, despite Stalin's affinity for Russia and Russians, he remained profoundly Georgian in his lifestyle and personality. Stalin's colleagues described him as "Asiatic", and he told a Japanese journalist that "I am not a European man, but an Asian, a Russified Georgian". Service also noted that Stalin "would never be Russian", could not credibly pass as one, and never tried to pretend that he was. Montefiore was of the view that "after 1917, [Stalin] became quadri-national: Georgian by nationality, Russian by loyalty, internationalist by ideology, Soviet by citizenship."
Stalin had a soft voice, and when speaking Russian did so slowly, carefully choosing his phrasing. In private he used coarse language, although he avoided doing so in public. He was described as a poor orator; according to Volkogonov, his speaking style was "simple and clear, without flights of fancy, catchy phrases or platform histrionics". He rarely spoke before large audiences and preferred to express himself in written form. His writing style was similar, characterised by simplicity, clarity, and conciseness. Throughout his life, he used various nicknames and pseudonyms, including "Koba", "Soselo", and "Ivanov", adopting "Stalin" in 1912; based on the Russian word for "steel", it has often been translated as "Man of Steel".
In adulthood, Stalin was of modest height; to appear taller, he wore stacked shoes and stood on a small platform during parades. His mustached face was pock-marked from a childhood bout of smallpox, something airbrushed out of published photographs. He was born with a webbed left foot, and his left arm had been permanently injured in childhood, probably as the result of being hit by a horse-drawn carriage at the age of 12, leaving it shorter than his right and lacking in flexibility.
During his youth, Stalin cultivated a scruffy appearance in rejection of middle-class aesthetic values. He grew his hair long and often wore a beard; his clothing was typically a traditional Georgian "chokha" or a red satin shirt with a grey coat and red fedora. From mid-1918 until his death he favoured military-style clothing, in particular long black boots, light-coloured collarless tunics, and a gun. He was a lifelong smoker, using both a pipe and cigarettes. He had few material demands and lived plainly, with simple and inexpensive clothing and furniture; his interest was in power rather than wealth.
As Soviet leader, Stalin typically awoke around 11 am, with lunch served between 3 and 5 pm and dinner no earlier than 9 pm; he then worked late into the evening. He often dined with other Politburo members and their families. As leader, he rarely left Moscow except to go to one of his dachas; he disliked travel and refused to fly. His choice of favoured holiday house changed over the years, although he holidayed in southern parts of the USSR every year from 1925 to 1936 and again from 1945 to 1951. Along with other senior figures, he had a dacha at Zubalova, 35 km outside Moscow, although he ceased using it after Nadezhda's suicide in 1932. After 1932, he favoured holidays in Abkhazia, being a friend of its leader, Nestor Lakoba. In 1934, his new Kuntsevo Dacha was built; 9 km from the Kremlin, it became his primary residence. In 1935 he began using a new dacha provided for him by Lakoba at Novy Afon, and in 1936 he had the Kholodnaya Rechka dacha built on the Abkhazian coast, designed by Miron Merzhanov.
Trotsky and several other Soviet figures promoted the idea that Stalin was a mediocrity. This gained widespread acceptance outside the Soviet Union during his lifetime but was misleading. According to Montefiore, "it is clear from hostile and friendly witnesses alike that Stalin was always exceptional, even from childhood". Stalin had a complex mind, great self-control, and an excellent memory. He was a hard worker, and displayed a keen desire to learn; when in power, he scrutinised many details of Soviet life, from film scripts to architectural plans and military hardware. According to Volkogonov, "Stalin's private life and working life were one and the same"; he did not take days off from political activities.
Stalin could play different roles to different audiences, and was adept at deception, often deceiving others as to his true motives and aims. Several historians have seen it appropriate to follow Lazar Kaganovich's description of there being "several Stalins" as a means of understanding his multi-faceted personality. He was a good organiser, with a strategic mind, and judged others according to their inner strength, practicality, and cleverness. He acknowledged that he could be rude and insulting, although rarely raised his voice in anger; as his health deteriorated in later life he became increasingly unpredictable and bad tempered. Despite his tough-talking attitude, he could be very charming; when relaxed, he cracked jokes and mimicked others. Montefiore suggested that this charm was "the foundation of Stalin's power in the Party".
Stalin was ruthless, temperamentally cruel, and had a propensity for violence high even among the Bolsheviks. He lacked compassion, something Volkogonov suggested might have been accentuated by his many years in prison and exile, although he was capable of acts of kindness to strangers, even amid the Great Terror. He was capable of self-righteous indignation, and was resentful, vindictive, and vengeful, holding onto grievances against others for many years. By the 1920s, he was also suspicious and conspiratorial, prone to believing that people were plotting against him and that there were vast international conspiracies behind acts of dissent. He never attended torture sessions or executions, although Service thought Stalin "derived deep satisfaction" from degrading and humiliating people and keeping even close associates in a state of "unrelieved fear". Montefiore thought Stalin's brutality marked him out as a "natural extremist"; Service suggested he had a paranoid or sociopathic personality disorder. Other historians linked his brutality not to any personality trait, but to his unwavering commitment to the survival of the Soviet Union and the international Marxist–Leninist cause.
Keenly interested in the arts, Stalin admired artistic talent. He protected several Soviet writers, such as Mikhail Bulgakov, even when their work was labelled harmful to his regime. He enjoyed music, owned around 2,700 albums, and frequently attended the Bolshoi Theatre during the 1930s and 1940s. His taste in music and theatre was conservative, favouring classical drama, opera, and ballet over what he dismissed as experimental "formalism". He also favoured classical forms in the visual arts, disliking avant-garde styles like cubism and futurism. He was a voracious reader, with a library of over 20,000 books; little of it was fiction, although he could cite passages from Alexander Pushkin, Nikolay Nekrasov, and Walt Whitman by heart. He favoured historical studies, keeping up with debates in the study of Russian, Mesopotamian, ancient Roman, and Byzantine history. An autodidact, he claimed to read as many as 500 pages a day, and Montefiore regarded him as an intellectual. Stalin also enjoyed watching films late at night at cinemas installed in the Kremlin and his dachas. He favoured the Western genre; his favourite film was the 1938 picture "Volga Volga".
Stalin was a keen and accomplished billiards player, and collected watches. He also enjoyed practical jokes; for instance, he would place a tomato on the seat of a Politburo member and wait for him to sit on it. At social events he encouraged singing, as well as alcohol consumption, hoping that others would drunkenly reveal their secrets to him. As an infant, Stalin displayed a love of flowers, and later in life he became a keen gardener. His dacha at Volynskoe had a park, with Stalin devoting much attention to its agricultural activities.
Stalin publicly condemned anti-Semitism, although he was repeatedly accused of it. People who knew him, such as Khrushchev, suggested that he long harboured negative sentiments toward Jews, and anti-Semitic trends in his policies were further fuelled by his struggle against Trotsky. After Stalin's death, Khrushchev claimed that Stalin had encouraged him to incite anti-Semitism in Ukraine, allegedly telling him that "the good workers at the factory should be given clubs so they can beat the hell out of those Jews." In 1946, Stalin allegedly said privately that "every Jew is a potential spy." Conquest stated that although Stalin had Jewish associates, he promoted anti-Semitism. Service cautioned that there was "no irrefutable evidence" of anti-Semitism in Stalin's published work, although his private statements and public actions were "undeniably reminiscent of crude antagonism towards Jews"; he added that throughout Stalin's lifetime, the Georgian "would be the friend, associate or leader of countless individual Jews". According to Beria, Stalin had affairs with several Jewish women.
Friendship was important to Stalin, and he used it to gain and maintain power. Kotkin observed that Stalin "generally gravitated to people like himself: parvenu intelligentsia of humble background". He gave nicknames to his favourites, for instance referring to Yezhov as "my blackberry". Stalin was sociable and enjoyed a joke. According to Montefiore, Stalin's friendships "meandered between love, admiration, and venomous jealousy". While head of the Soviet Union he remained in contact with many of his old friends in Georgia, sending them letters and gifts of money.
According to Montefiore, in his early life Stalin "rarely seems to have been without a girlfriend". He was sexually promiscuous, although rarely talked about his sex life. Montefiore noted that Stalin's favoured types were "young, malleable teenagers or buxom peasant women", who would be supportive and unchallenging toward him. According to Service, Stalin "regarded women as a resource for sexual gratification and domestic comfort". Stalin married twice and had several offspring.
Stalin married his first wife, Ekaterina Svanidze, in 1906. According to Montefiore, theirs was "a true love match"; Volkogonov suggested that she was "probably the one human being he had really loved". When she died, Stalin said "This creature softened my heart of stone." They had a son, Yakov, who often frustrated and annoyed Stalin. Yakov had a daughter, Galina, before fighting for the Red Army in the Second World War. He was captured by the German Army and then committed suicide.
Stalin's second wife was Nadezhda Alliluyeva; theirs was not an easy relationship, and they often fought. They had two biological children—a son, Vasily, and a daughter, Svetlana—and adopted another son, Artyom Sergeev, in 1921. During his marriage to Nadezhda, Stalin had affairs with many other women, most of whom were fellow revolutionaries or their wives. Nadezhda suspected as much, and committed suicide in 1932. Stalin regarded Vasily as spoiled and often chastised his behaviour; as Stalin's son, Vasily was nevertheless swiftly promoted through the ranks of the Red Army and permitted a lavish lifestyle. Conversely, Stalin had an affectionate relationship with Svetlana during her childhood, and was also very fond of Artyom. In later life, he disapproved of Svetlana's various suitors and husbands, putting a strain on his relationship with her. After the Second World War, he made little time for his children, and his family played a decreasingly important role in his life. After Stalin's death, Svetlana changed her surname from Stalin to Allilueva and defected to the U.S.
After Nadezhda's death, Stalin became increasingly close to his sister-in-law Zhenya Alliluyeva; Montefiore believed that they were probably lovers. There are unproven rumours that from 1934 onward he had a relationship with his housekeeper Valentina Istomina. Stalin had at least two illegitimate children, although he never recognised them as his. One of them, Konstantin Kuzakov, later taught philosophy at the Leningrad Military Mechanical Institute but never met his father. The other, Alexander, was the son of Lidia Pereprygia; he was raised as the son of a peasant fisherman, and the Soviet authorities made him swear never to reveal that Stalin was his biological father.
The historian Robert Conquest stated that Stalin "perhaps [...] determined the course of the twentieth century" more than any other individual. Biographers like Service and Volkogonov have considered him an outstanding and exceptional politician; Montefiore labelled Stalin "that rare combination: both 'intellectual' and killer", a man who was "the ultimate politician" and "the most elusive and fascinating of the twentieth-century titans". According to the historian Kevin McDermott, interpretations of Stalin range from "the sycophantic and adulatory to the vitriolic and condemnatory". For most Westerners and anti-communist Russians, he is viewed overwhelmingly negatively as a mass murderer; for significant numbers of Russians and Georgians, he is regarded as a great statesman and state-builder.
Stalin strengthened and stabilised the Soviet Union; Service suggested that without him the country might have collapsed long before 1991. In under three decades, Stalin transformed the Soviet Union into a major industrial world power, one which could "claim impressive achievements" in terms of urbanisation, military strength, education, and Soviet pride. Under his rule, the average Soviet life expectancy grew due to improved living conditions, nutrition, and medical care; mortality rates also declined. Although millions of Soviet citizens despised him, support for Stalin was nevertheless widespread throughout Soviet society. Yet whether Stalin was necessary for the Soviet Union's economic development has been questioned; it has been argued that his policies from 1928 onward may have limited, rather than driven, the country's growth.
Stalin's Soviet Union has been characterised as a totalitarian state, with Stalin its authoritarian leader. Various biographers have described him as a dictator, an autocrat, or accused him of practicing Caesarism. Montefiore argued that while Stalin initially ruled as part of a Communist Party oligarchy, in 1934 the Soviet government transformed from this oligarchy into a personal dictatorship, with Stalin only becoming "absolute dictator" between March and June 1937, when senior military and NKVD figures were eliminated. According to Kotkin, Stalin "built a personal dictatorship within the Bolshevik dictatorship". In both the Soviet Union and elsewhere he came to be portrayed as an "Oriental despot". The biographer Dmitri Volkogonov characterised him as "one of the most powerful figures in human history", while McDermott stated that Stalin had "concentrated unprecedented political authority in his hands", and Service noted that by the late 1930s, Stalin "had come closer to personal despotism than almost any monarch in history".
McDermott nevertheless cautioned against "over-simplistic stereotypes"—promoted in the fiction of writers like Aleksandr Solzhenitsyn, Vasily Grossman, and Anatoly Rybakov—that portrayed Stalin as an omnipotent and omnipresent tyrant who controlled every aspect of Soviet life through repression and totalitarianism. Service similarly warned of the portrayal of Stalin as an "unimpeded despot", noting that "powerful though he was, his powers were not limitless", and his rule depended on his willingness to conserve the Soviet structure he had inherited. Kotkin observed that Stalin's ability to remain in power relied on him having a majority in the Politburo at all times. Khlevniuk noted that at various points, particularly when Stalin was old and frail, there were "periodic manifestations" in which the party oligarchy threatened his autocratic control. Stalin denied to foreign visitors that he was a dictator, stating that those who labelled him such did not understand the Soviet governance structure.
A vast literature devoted to Stalin has been produced. During Stalin's lifetime, his approved biographies were largely hagiographic in content. Stalin ensured that these works gave very little attention to his early life, particularly because he did not wish to emphasise his Georgian origins in a state numerically dominated by Russians. Since his death many more biographies have been written, although until the 1980s these relied largely on the same sources of information. Under Mikhail Gorbachev's Soviet administration various previously classified files on Stalin's life were made available to historians, at which point Stalin became "one of the most urgent and vital issues on the public agenda" in the Soviet Union. After the dissolution of the Union in 1991, the rest of the archives were opened to historians, resulting in much new information about Stalin coming to light, and producing a flood of new research.
Leninists remain divided in their views on Stalin; some view him as Lenin's authentic successor, while others believe he betrayed Lenin's ideas by deviating from them. The socio-economic nature of Stalin's Soviet Union has also been much debated, varyingly being labelled a form of state socialism, state capitalism, bureaucratic collectivism, or a totally unique mode of production. Socialist writers like Volkogonov have acknowledged that Stalin's actions damaged "the enormous appeal of socialism generated by the October Revolution".
With a high number of excess deaths occurring under his rule, Stalin has been labelled "one of the most notorious figures in history". These deaths occurred as a result of collectivisation, famine, terror campaigns, disease, war, and conditions in the Gulag. As the majority of excess deaths under Stalin were not direct killings, the exact number of victims of Stalinism is difficult to calculate, as scholars lack consensus on which deaths can be attributed to the regime.
Official records reveal 799,455 documented executions in the Soviet Union between 1921 and 1953; 681,692 of these were carried out between 1937 and 1938, the years of the Great Purge. According to Michael Ellman, however, the best modern estimate for the number of repression deaths during the Great Purge is 950,000–1.2 million, a figure which includes executions and deaths in detention or soon after release. In addition, while archival data shows that 1,053,829 people perished in the Gulag from 1934 to 1953, the current historical consensus is that of the 18 million people who passed through the Gulag system from 1930 to 1953, between 1.5 and 1.7 million died as a result of their incarceration. The historian and archival researcher Stephen G. Wheatcroft and Michael Ellman attribute roughly 3 million deaths to the Stalinist regime, including executions and deaths from criminal negligence. Wheatcroft and the historian R. W. Davies estimate famine deaths at 5.5–6.5 million, while the scholar Steven Rosefielde gives a figure of 8.7 million. The American historian Timothy D. Snyder, summarising in 2011 the data that became available after the opening of the Soviet archives in the 1990s, concluded that Stalin's regime was responsible for 9 million deaths, 6 million of them deliberate killings. He notes that this estimate is far lower than the figures of 20 million or above made before access to the archives.
Historians continue to debate whether the 1932–33 Ukrainian famine—known in Ukraine as the Holodomor—should be called a genocide. Twenty-six countries officially recognise it under the legal definition of genocide. In 2006, the Ukrainian Parliament declared it to be such, and in 2010 a Ukrainian court posthumously convicted Stalin, Lazar Kaganovich, Stanislav Kosior, and other Soviet leaders of genocide. Popular among some Ukrainian nationalists is the idea that Stalin consciously organised the famine to suppress national desires among the Ukrainian people. This interpretation has been rejected by more recent historical studies, which have articulated the view that, while Stalin's policies contributed significantly to the high mortality rate, there is no evidence that Stalin or the Soviet government consciously engineered the famine. The idea that this was a targeted attack on the Ukrainians is complicated by the widespread suffering the famine also inflicted on other Soviet peoples, including the Russians; within Ukraine, ethnic Poles and Bulgarians died in similar proportions to ethnic Ukrainians. Despite the lack of clear intent on Stalin's part, the historian Norman Naimark noted that though there may not be sufficient "evidence to convict him in an international court of justice as a genocidaire [...] that does not mean that the event itself cannot be judged as genocide".
Shortly after his death, the Soviet Union went through a period of de-Stalinisation. Malenkov denounced the Stalin personality cult, which was subsequently criticised in "Pravda". In 1956, Khrushchev gave his "Secret Speech", titled "On the Cult of Personality and Its Consequences", to a closed session of the Party's 20th Congress, denouncing Stalin for both his mass repression and his personality cult. He repeated these denunciations at the 22nd Party Congress in October 1961. That month, Stalin's body was removed from the mausoleum and buried in the Kremlin Wall Necropolis, the location marked only by a simple bust. Stalingrad was renamed Volgograd.
Khrushchev's de-Stalinisation process in Soviet society ended when he was replaced as leader by Leonid Brezhnev in 1964; the latter introduced a level of re-Stalinisation within the Soviet Union. In 1969 and again in 1979, plans were proposed for a full rehabilitation of Stalin's legacy, but on both occasions were defeated by critics within the Soviet and international Marxist–Leninist movement. Gorbachev saw the total denunciation of Stalin as necessary for the regeneration of Soviet society. After the fall of the Soviet Union in 1991, the first President of the new Russian Federation, Boris Yeltsin, continued Gorbachev's denunciation of Stalin but added to it a denunciation of Lenin. His successor, Vladimir Putin, did not seek to rehabilitate Stalin but emphasised the celebration of Soviet achievements under Stalin's leadership rather than the Stalinist repressions; however, in October 2017 Putin opened the Wall of Grief memorial in Moscow, noting that the "terrible past" would neither be "justified by anything" nor "erased from the national memory".
Amid the social and economic turmoil of the post-Soviet period, many Russians viewed Stalin as having overseen an era of order, predictability, and pride. He remains a revered figure among many Russian nationalists, who feel nostalgic about the Soviet victory over Nazi Germany in World War II, and he is regularly invoked approvingly within both Russia's far left and far right. In the 2008 "Name of Russia" television show, Stalin was voted the third most notable personality in Russian history. Polling by the Levada Center suggests that Stalin's popularity has grown since 2015, with 46% of Russians expressing a favourable view of him in 2017 and 51% in 2019. At the same time, there has been a growth in pro-Stalinist literature in Russia, much of it relying upon the misrepresentation or fabrication of source material. In this literature, Stalin's repressions are regarded either as a necessary measure to defeat "enemies of the people" or as the result of lower-level officials acting without Stalin's knowledge.
The only part of the former Soviet Union where admiration for Stalin has remained consistently widespread is Georgia. Many Georgians resent criticism of Stalin, the most famous figure from their nation's modern history; a 2013 survey by Tbilisi University found 45% of Georgians expressing "a positive attitude" to him. Some positive sentiment can also be found elsewhere in the former Soviet Union. A 2012 survey commissioned by the Carnegie Endowment found 38% of Armenians concurring that their country "will always have need of a leader like Stalin". In early 2010 a new monument to Stalin was erected in Zaporizhia, Ukraine; in December of that year unknown persons cut off its head, and in 2011 it was destroyed in an explosion. In a 2016 Kiev International Institute of Sociology poll, 38% of respondents had a negative attitude to Stalin, 26% a neutral one and 17% a positive one, with 19% refusing to answer.
|
https://en.wikipedia.org/wiki?curid=15641
|
Johnny Unitas
John Constantine Unitas (May 7, 1933 – September 11, 2002), nicknamed "Johnny U" and "the Golden Arm", was an American professional football player in the National Football League (NFL). He spent the majority of his career playing for the Baltimore Colts. He was a record-setting quarterback, and the NFL's most valuable player in 1959, 1964, and 1967.
For 52 years, he held the record for most consecutive games with a touchdown pass (set between 1956 and 1960), until it was broken in 2012 by Drew Brees. Unitas was the prototype of the modern era marquee quarterback, with a strong passing game, media fanfare, and widespread popularity. He has been consistently listed as one of the greatest NFL players of all time.
John Constantine Unitas was born in Pittsburgh in 1933 to Francis J. Unitas and Helen Superfisky, both of Lithuanian descent; he grew up in the Mount Washington neighborhood. When Unitas was five years old, his father died of cardiovascular renal disease complicated by pneumonia, leaving the young boy to be raised by his mother, who worked two jobs to support the family. His surname was a result of a phonetic transliteration of a common Lithuanian last name "Jonaitis". Attending St. Justin's High School in Pittsburgh, Unitas played halfback and quarterback.
In his younger years, Unitas dreamed about being part of the Notre Dame Fighting Irish football team, but when he tried out for the team, coach Frank Leahy said that he was just too skinny and he would "get murdered" if he was put on the field.
Instead, he attended the University of Louisville. In his four-year career as a Louisville Cardinal, Unitas completed 245 passes for 3,139 yards and 27 touchdowns. Reportedly, Unitas weighed pounds on his first day of practice. His first start was in the fifth game of the 1951 season against St. Bonaventure, where he completed 11 consecutive passes and threw three touchdowns to give the Cardinals a 21–19 lead. Louisville ended up losing the game 22–21 on a disputed field goal, but had found a new starting quarterback. Unitas completed 12 of 19 passes for 240 yards and four touchdowns in a 35–28 victory over Houston. The team finished the season 5–5 overall and 4–1 with Unitas starting. He completed 46 of 99 passes for 602 yards and nine touchdowns.
By the 1952 season, the university decided to de-emphasize sports. The new president at Louisville, Dr. Philip Grant Davidson, reduced the amount of athletic aid and tightened academic standards for athletes. As a result, 15 returning players could not meet the new standards and lost their scholarships. Unitas kept his by taking on a new elective: square dancing. In 1952, coach Frank Camp switched the team to two-way football. Unitas not only played safety or linebacker on defense and quarterback on offense, but also returned kicks and punts on special teams. The Cardinals won their first game against Wayne State and then beat Florida State in the second. Unitas completed 16 of 21 passes for 198 yards and three touchdowns against Florida State; it was said that he put on such a show in that game that he threw a pass under his legs for 15 yards. The rest of the season was a struggle for the Cardinals, who finished 3–5. Unitas completed 106 of 198 passes for 1,540 yards and 12 touchdowns.
The team won their first game in 1953, against Murray State, and lost the rest for a record of 1–7. One of the most memorable games of the season came in a 59–6 loss against Tennessee. Unitas completed 9 of 19 passes for 73 yards, rushed 9 times for 52 yards, returned six kickoffs for 85 yards and one punt for three yards, and made 86 percent of the team's tackles. The only touchdown the team scored came in the fourth quarter, when Unitas faked a pitch to the running back and ran the ball 23 yards for a touchdown. Unitas was hurt later in the fourth quarter while trying to run the ball. On his way off the field, he received a standing ovation. When he got to the locker room he was so tired that his jersey and shoulder pads had to be cut off because he could not lift his arms. Louisville ended the season with a 20–13 loss to Eastern Kentucky. Unitas completed 49 of 95 passes for 470 yards and three touchdowns.
Unitas was elected captain for the 1954 season, but due to an early injury did not see much playing time. His first start was the third game of the season, against Florida State. Of the 34-man team, 21 were freshmen. The 1954 Cardinals went 3–6, with their last win at home against Morehead State. Unitas was slowed by so many injuries in his senior year that his 527 passing yards finished second on the team to Jim Houser's 560.
After his collegiate career, the Pittsburgh Steelers of the NFL drafted Unitas in the ninth round. However, he was released before the season began as the odd man out among four quarterbacks trying to fill three spots. Steelers' head coach Walt Kiesling had made up his mind about Unitas; he thought he was not smart enough to quarterback an NFL team, and he was not given any snaps in practice with the Steelers. Among those edging out Unitas was Ted Marchibroda, future longtime NFL head coach. Out of pro football, Unitas—by this time married—worked in construction in Pittsburgh to support his family. On the weekends, he played quarterback, safety and punter on a local semi-professional team called the Bloomfield Rams for $6 a game.
In 1956, Unitas joined the Baltimore Colts of the NFL under legendary coach Weeb Ewbank, after being asked at the last minute to join Bloomfield Rams lineman Jim Deglau, a Croatian steelworker with a life much like Unitas's, at the latter's scheduled Colts tryout. The pair borrowed money from friends to pay for the gas to make the trip. Deglau later told a reporter after Unitas's death, "[His] uncle told him not to come. [He] was worried that if he came down and the Colts passed on him, it would look bad (to other NFL teams)." The Colts signed Unitas, much to the chagrin of the Cleveland Browns, who had hoped to claim the former Steeler quarterback.
Unitas made his NFL debut with an inauspicious "mop-up" appearance against Detroit, going 0-for-2 with one interception. Two weeks later, starting quarterback George Shaw suffered a broken leg against the Chicago Bears. In his first serious action, Unitas's initial pass was intercepted and returned for a touchdown. He then botched a hand-off on his next play, and the Bears recovered the fumble. Unitas rebounded quickly from that 58–27 loss, leading the Colts to an upset of Green Bay and their first win over Cleveland. He threw nine touchdown passes that year, including one in the season finale that started his record 47-game streak. His 55.6-percent completion mark was a rookie record.
In 1957, his first season as the Colts full-time starter at quarterback, Unitas finished first in the NFL in passing yards (2,550) and touchdown passes (24) as he helped lead the Colts to a 7–5 record, the first winning record in franchise history. At season's end, Unitas received the Jim Thorpe Trophy as the NFL's Most Valuable Player by the Newspaper Enterprise Association (NEA).
Unitas continued his prowess in 1958, passing for 2,007 yards and 19 touchdowns as the Colts won the Western Conference title. The Colts won the NFL championship under his leadership on December 28, 1958, by defeating the New York Giants 23–17 in sudden-death overtime on a touchdown by fullback Alan Ameche. It was the first overtime game in NFL history, and is often referred to as the "greatest game ever played". The game, nationally televised by NBC, has been credited with sparking the rise in popularity of professional football during the 1960s.
In 1959, Unitas was named the NFL's MVP by the Associated Press (AP) for the first time, as well as United Press International's player of the year, after leading the NFL in passing yards (2,899), touchdown passes (32), and completions (193). He then led the Colts to a repeat championship, beating the Giants again 31–16 in the title game.
As the 1960s began, the Colts' fortunes (and win totals) declined. Injuries to key players such as Alan Ameche, Raymond Berry, and Lenny Moore were a contributing factor. Unitas's streak of 47 straight games with at least one touchdown pass ended against the Los Angeles Rams in week 11 of the 1960 season. In spite of this, he topped the 3,000-yard passing mark for the first time and led the league in touchdown passes for the fourth consecutive season.
After three middle-of-the-pack seasons, Colts owner Carroll Rosenbloom fired Weeb Ewbank and replaced him with Don Shula, who at the time was the youngest head coach in NFL history (33 years of age when he was hired). The Colts finished 8–6 in Shula's first season at the helm, good enough for only third place in the NFL's Western Conference, but they did end the season on a strong note by winning their final three games. The season was very successful for Unitas personally, as he led the NFL in passing yards with a career-best total of 3,481 and also led in completions with 237.
In the 1964 season the Colts returned to the top of the Western Conference. After dropping their season opener to the Minnesota Vikings, the Colts ran off 10 straight victories to finish with a 12–2 record. The season was one of Unitas's best as he finished with 2,824 yards passing, a league-best 9.26 yards per pass attempt, 19 touchdown passes and only 6 interceptions. He was named the NFL's Most Valuable Player by the AP and UPI for a second time. However, the season ended on a disappointing note for the Colts as they were upset by the Cleveland Browns in the 1964 NFL Championship Game, losing 27–0.
Unitas resumed his torrid passing in 1965, throwing for 2,530 yards and 23 touchdowns and finishing with a league-high and career-best 97.1 passer rating. But he was lost for the balance of the season due to a knee injury in a week 12 loss to the Bears. Backup quarterback Gary Cuozzo also suffered a season-ending injury the following week, and running back Tom Matte filled in as the emergency quarterback for the regular-season finale and in a heartbreaking playoff loss to the Packers. The Colts and Packers finished in a tie for first place in the Western Conference, and a one-game playoff was played in Green Bay to decide who would be the conference representative in the 1965 NFL Championship Game. The Colts lost in overtime 13–10, due in large part to a game-tying field goal by Don Chandler that many say was incorrectly ruled good.
Unitas, healthy once more, threw for 2,748 yards and 22 touchdowns in 1966 in a return to Pro Bowl form. However, he posted a league-high 24 interceptions.
After once again finishing second in the Western Conference in 1966, the Colts rebounded to finish 11–1–2 in 1967, tying the Los Angeles Rams for the NFL's best record. In winning his third MVP award from the AP and UPI in 1967 (and his second from the NEA), Unitas had a league-high 58.5 completion percentage and passed for 3,428 yards and 20 touchdowns. He openly complained about having tennis elbow, and he threw eight interceptions and only three touchdown passes in the final five games. Once again, the season ended in heartbreak for the Colts, as they were shut out of the newly instituted four-team NFL playoff after losing the divisional tiebreaker to the Rams, a 34–10 rout in the regular-season finale.
In the final game of the 1968 preseason, the muscles in Unitas's arm were torn when he was hit by a member of the Dallas Cowboys defense. Unitas wrote in his autobiography that he felt his arm had initially been injured by the use of the "night ball" that the NFL was testing for better TV visibility during night games. In a post-game interview the previous year, he had noted having constant pain in his elbow for several years. He spent most of the season on the bench while the Colts marched to a league-best 13–1 record behind backup quarterback and eventual 1968 NFL MVP Earl Morrall. Although he was injured through most of the season, Unitas came off the bench to play in Super Bowl III, the famous game in which Joe Namath guaranteed a New York Jets win despite conventional wisdom. Unitas's insertion was a desperation move in an attempt to reassert the NFL's dominance over the upstart AFL. Although the Colts had won the NFL Championship in 1968, they lost the Super Bowl to the AFL champion Jets, becoming the first NFL champions who were not also deemed world champions. Unitas helped put together the Colts' only score, a touchdown late in the game. He also drove the Colts into scoring position following the touchdown and a successful onside kick, but for reasons that remain unknown, head coach Don Shula eschewed a field goal attempt, which (if successful) would have cut the Jets' lead to 16–10. Despite not playing until late in the third quarter, Unitas still finished the game with more passing yards than the team's starter, Earl Morrall.
After an off-season of rehabilitation on his elbow, Unitas rebounded in 1969, passing for 2,342 yards and 12 touchdowns with 20 interceptions. But the Colts finished with a disappointing 8–5–1 record, and missed the playoffs.
In 1970, the NFL and AFL merged into one league, and the Colts moved to the new American Football Conference, along with the Cleveland Browns and the Pittsburgh Steelers. Unitas threw for 2,213 yards and 14 touchdowns while leading the Colts to an 11–2–1 season. In their first rematch with the Jets, Unitas and Namath threw a combined nine interceptions in a 29–22 Colts win. Namath threw 62 passes and broke his hand on the final play of the game, ending his season.
Unitas threw for 390 yards, three touchdowns, and no interceptions in AFC playoff victories over the Cincinnati Bengals and the Oakland Raiders. In Super Bowl V against the Dallas Cowboys, he was knocked out of the game with a rib injury in the second quarter, soon after throwing a 75-yard touchdown pass (setting a then-Super Bowl record) to John Mackey. However, he had also tossed two interceptions before his departure from the game. Earl Morrall came in to lead the team to a last second, 16–13 victory.
In 1971, Unitas split playing time with Morrall, throwing only three touchdown passes. He started both playoff games: a win over the Cleveland Browns, and the AFC Championship game against the Miami Dolphins, which the Colts lost 21–0. Unitas threw three interceptions in that game, one of which was returned for a touchdown by safety Dick Anderson.
The 1972 season saw the Colts decline into mediocrity. After losing the season opener, Unitas played in the second and final regular-season meeting with "Broadway" Joe Namath; the first, in 1970, had been won by the Colts, 29–22. The rematch, a memorable one, took place on September 24, 1972, at Memorial Stadium. Unitas threw for 376 yards and three touchdowns, but Namath upstaged him again, bombing the Colts for 496 yards and six touchdowns in a 44–34 Jets victory – their first over Baltimore since the 1970 merger. After losing four of their first five games, the Colts fired head coach Don McCafferty and benched Unitas.
One of the more memorable moments in football history came in Unitas's last game in a Colts uniform at Memorial Stadium, against the Buffalo Bills. He was not the starter, but with the Colts blowing out the Bills 28–0 behind Marty Domres, Unitas entered the game after fans chanted "We want Unitas!!!" and head coach John Sandusky carried out a plan to convince Unitas that the starting quarterback was injured. Unitas came onto the field and threw two passes, the second a long touchdown to wide receiver Eddie Hinton that proved to be his last pass as a Colt. The Colts won the game 35–7.
Unitas was traded to the San Diego Chargers in 1973 after posting a 5–9 record in 1972 with Baltimore, but he was far past his prime. He replaced former Chargers quarterback John Hadl. In Baltimore he was replaced by Marty Domres, who had been acquired from the Chargers in August 1972; Domres was ultimately replaced by LSU's Bert Jones, drafted with the number two overall pick in 1973. Unitas started the season with a 38–0 loss to the Washington Redskins, throwing for just 55 yards with three interceptions and taking eight sacks. His final victory as a starter came against the Buffalo Bills in week two, when he went 10-of-18 for 175 yards, two touchdown passes, and no interceptions in a 34–7 Chargers rout. Unitas was clearly no longer the player he had been, and many questioned his role as a starter after a loss to the Bengals in week three. Two weeks later, he threw two first-half interceptions, passed for only 19 yards, and went 2-for-9 against the Pittsburgh Steelers. He was then replaced by rookie quarterback and future Hall of Famer Dan Fouts. After posting a 1–3 record as a starter, Unitas retired in the preseason of 1974.
Unitas finished his 18 NFL seasons with 2,830 completions in 5,186 attempts for 40,239 yards and 290 touchdowns, with 253 interceptions. He also rushed for 1,777 yards and 13 touchdowns. Plagued by arm trouble in his later seasons, he threw more interceptions (64) than touchdowns (38) in 1968–1973. After averaging 215.8 yards per game in his first 12 seasons, his production fell to 124.4 in his final six. His passer rating plummeted from 82.9 to 60.4 for the same periods. Even so, Unitas set many passing records during his career. He was the first quarterback to throw for more than 40,000 yards, despite playing during an era when NFL teams played shorter seasons of 12 or 14 games (as opposed to today's 16-game seasons) and prior to modern passing-friendly rules implemented in 1978. His 32 touchdown passes in 1959 were a record at the time, making Unitas the first quarterback to hit the 30 touchdown mark in a season. His 47-game consecutive touchdown streak between 1956 and 1960 was a record considered by many to be unbreakable. The streak stood for 52 years before being broken by New Orleans Saints quarterback Drew Brees in a game against the San Diego Chargers on October 7, 2012.
After his playing days were finished, Unitas settled in Baltimore, where he raised his family while also pursuing a career in broadcasting, doing color commentary for NFL games on CBS in the 1970s. He was elected to the Pro Football Hall of Fame in 1979. When Robert Irsay moved the Colts franchise to Indianapolis in 1984, a move reviled to this day in Baltimore as "Bob Irsay's Midnight Ride", Unitas was so outraged that he cut all ties to the relocated team (though his No. 19 jersey is still retired by the Colts), declaring himself strictly a "Baltimore" Colt for the remainder of his life. Some other prominent old-time Colts followed his lead, although many attended the 1975 team's reunion at Lucas Oil Stadium in Indianapolis in 2009; a total of 39 players from that team attended, including Bert Jones and Lydell Mitchell. Unitas asked the Pro Football Hall of Fame on numerous occasions (including on Roy Firestone's "Up Close") to remove his display unless it was listed as belonging to the Baltimore Colts. The Hall of Fame has never complied with the request. Unitas donated his Colts memorabilia to the Babe Ruth Museum in Baltimore; it is now on display in the Sports Legends Museum at Camden Yards.
Unitas was inducted into the American Football Association's Semi Pro Football Hall of Fame in 1987.
Unitas actively lobbied for another NFL team to come to Baltimore. After the Cleveland Browns moved to Baltimore in 1996 and changed their name to the Ravens, he and some of the other old-time Colts attended the Ravens' first game ever against the Raiders on Opening Day at Memorial Stadium. He was frequently seen on the Ravens' sidelines at home games (most prominently in 1998 when the now-Indianapolis Colts played the Ravens) and received a thunderous ovation every time he was pictured on each of the huge widescreens at M&T Bank Stadium. He was often seen on the 30-yard line on the Ravens side. When the NFL celebrated its first 50 years, Unitas was voted the league's best player. Retired Bears quarterback Sid Luckman said of Unitas, "He was better than me, better than Sammy Baugh, better than anyone."
Unitas lived most of the final years of his life severely hobbled. Due to an elbow injury suffered during his playing career, he had only very limited use of his right hand, and could not perform any physical activity more strenuous than golf due to his artificial knees.
"Source":
At the age of 21 on November 20, 1954, Unitas married his high school sweetheart Dorothy Hoelle; they lived in Towson and had five children before divorcing. Unitas's second wife was Sandra Lemon, whom he married on June 26, 1972; they had three children, lived in Baldwin, and remained married until his death.
On September 11, 2002, Unitas died from a heart attack while working out at the Kernan Physical Therapy Center (now The University of Maryland Rehabilitation & Orthopaedic Institute) in Baltimore. Between his death and October 4, 2002, 56,934 people signed an online petition urging the Baltimore Ravens to rename the Ravens' home stadium (owned by the State of Maryland) after Unitas. These requests were unsuccessful since the lucrative naming rights had already been leased by the Ravens to Buffalo-based M&T Bank. However, on October 20, 2002, the Ravens dedicated the front area of the stadium's main entrance as Unitas Plaza and unveiled a statue of Unitas as the centerpiece of the plaza.
Towson University, where Unitas was a major fund-raiser and which his children attended, named its football and lacrosse complex Johnny Unitas Stadium in recognition of both his football career and service to the university.
Toward the end of his life, Unitas brought media attention to the many permanent physical disabilities that he and his fellow players had suffered during their careers, before heavy padding and other safety features became common. Unitas himself had lost almost all use of his right hand, with the middle finger and thumb noticeably disfigured from being repeatedly broken during games.
Unitas is buried at Dulaney Valley Memorial Gardens in Timonium, Maryland.
|
https://en.wikipedia.org/wiki?curid=15644
|
Julian calendar
The Julian calendar, proposed by Julius Caesar in 708 Ab urbe condita (AUC) (46 BC), was a reform of the Roman calendar. It took effect on 1 January 45 BC (AUC 709), by edict. It was designed with the aid of Greek mathematicians and astronomers such as Sosigenes of Alexandria.
The calendar was the predominant calendar in the Roman world, most of Europe, and in European settlements in the Americas and elsewhere, until it was gradually replaced by the Gregorian calendar, promulgated in 1582 by Pope Gregory XIII. The Julian calendar is still used in parts of the Eastern Orthodox Church and in parts of Oriental Orthodoxy as well as by the Berbers.
The Julian calendar has two types of year: a normal year of 365 days and a leap year of 366 days. They follow a simple cycle of three normal years and one leap year, giving an average year that is 365.25 days long. That is more than the actual solar year value of 365.24219 days, which means the Julian calendar gains a day about every 128 years.
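The 128-year figure follows directly from the two year lengths; as a rough check: 365.25 − 365.24219 = 0.00781 days of excess per year, and 1 / 0.00781 ≈ 128 years for the excess to accumulate to a full day.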
For any given event during the years from 1901 to 2099 inclusive, its date according to the Julian calendar is 13 days behind its corresponding Gregorian date.
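Because the offset is a constant 13 days throughout that range, conversion within it is plain date arithmetic. A minimal sketch in Python (the function names are illustrative, not from any standard library; the proleptic-Gregorian "datetime.date" is used here only as a day counter):

    from datetime import date, timedelta

    OFFSET = timedelta(days=13)  # constant throughout 1901-2099 only

    def julian_to_gregorian(d: date) -> date:
        # A Julian calendar date is 13 days behind its Gregorian
        # equivalent in this range, so add the offset.
        return d + OFFSET

    def gregorian_to_julian(d: date) -> date:
        return d - OFFSET

    # Example: 25 December (Julian) falls on 7 January (Gregorian).
    assert julian_to_gregorian(date(2023, 12, 25)) == date(2024, 1, 7)

Outside 1901–2099 the offset changes at Gregorian century years, so a fixed shift like this would give wrong answers.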
The ordinary year in the previous Roman calendar consisted of 12 months, for a total of 355 days. In addition, a 27- or 28-day intercalary month, the Mensis Intercalaris, was sometimes inserted between February and March. This intercalary month was formed by inserting 22 or 23 days after the first 23 days of February; the last five days of February, which counted down toward the start of March, became the last five days of Intercalaris. The net effect was to add 22 or 23 days to the year, forming an intercalary year of 377 or 378 days. Some say the "mensis intercalaris" always had 27 days and began on either the first or the second day after the Terminalia (23 February).
According to the later writers Censorinus and Macrobius, the ideal intercalary cycle consisted of ordinary years of 355 days alternating with intercalary years, alternately 377 and 378 days long. In this system, the average Roman year would have had 366¼ days over four years, giving it an average drift of one day per year relative to any solstice or equinox. Macrobius describes a further refinement whereby, in one 8-year period within a 24-year cycle, there were only three intercalary years, each of 377 days (thus 11 intercalary years out of 24). This refinement averages the length of the year to 365.25 days over 24 years.
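Both averages can be checked directly. Over the basic four-year cycle: (355 + 377 + 355 + 378) / 4 = 1465 / 4 = 366.25 days. For the 24-year refinement, assuming the sixteen years outside the special 8-year period alternate as usual (four 377-day and four 378-day intercalary years), the cycle contains 13 ordinary and 11 intercalary years: 13 × 355 + 7 × 377 + 4 × 378 = 4615 + 2639 + 1512 = 8766 days, and 8766 / 24 = 365.25 days exactly.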
In practice, intercalations did not occur systematically according to any of these ideal systems, but were determined by the pontifices. So far as can be determined from the historical evidence, they were much less regular than these ideal schemes suggest. They usually occurred every second or third year, but were sometimes omitted for much longer, and occasionally occurred in two consecutive years.
If managed correctly this system could have allowed the Roman year to stay roughly aligned to a tropical year. However, since the pontifices were often politicians, and because a Roman magistrate's term of office corresponded with a calendar year, this power was prone to abuse: a pontifex could lengthen a year in which he or one of his political allies was in office, or refuse to lengthen one in which his opponents were in power.
If too many intercalations were omitted, as happened after the Second Punic War and during the Civil Wars, the calendar would drift out of alignment with the tropical year. Moreover, because intercalations were often determined quite late, the average Roman citizen often did not know the date, particularly if he were some distance from the city. For these reasons, the last years of the pre-Julian calendar were later known as "years of confusion". The problems became particularly acute during the years of Julius Caesar's pontificate before the reform, 63–46 BC, when there were only five intercalary months (instead of eight), none of which were during the five Roman years before 46 BC.
Caesar's reform was intended to solve this problem permanently, by creating a calendar that remained aligned to the sun without any human intervention. This proved useful very soon after the new calendar came into effect. Varro used it in 37 BC to fix calendar dates for the start of the four seasons, which would have been impossible only 8 years earlier. A century later, when Pliny dated the winter solstice to 25 December because the sun entered the 8th degree of Capricorn on that date, this stability had become an ordinary fact of life.
Although the approximation of 365¼ days for the tropical year had been known for a long time, ancient solar calendars had used less precise periods, resulting in gradual misalignment of the calendar with the seasons.
The octaeteris, a cycle of 8 lunar years popularised by Cleostratus (and also commonly attributed to Eudoxus) which was used in some early Greek calendars, notably in Athens, is 1.53 days longer than eight mean Julian years. The length of nineteen years in the cycle of Meton was 6,940 days, six hours longer than nineteen mean Julian years. The mean Julian year was the basis of the 76-year cycle devised by Callippus (a student under Eudoxus) to improve the Metonic cycle.
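These comparisons are simple arithmetic. Nineteen mean Julian years are 19 × 365.25 = 6,939.75 days, so the 6,940-day Metonic cycle runs a quarter-day (six hours) long. Taking the mean synodic month as about 29.5306 days, the 99 lunations of the octaeteris come to roughly 99 × 29.5306 ≈ 2,923.5 days, against 8 × 365.25 = 2,922 days for eight Julian years, a surplus of about 1.5 days.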
In Persia (Iran), after the reform of the Persian calendar by the introduction of the Persian Zoroastrian (i.e. Young Avestan) calendar in 503 BC and afterwards, the first day of the year (1 Farvardin = Nowruz) slipped against the vernal equinox at the rate of approximately one day every four years.
Likewise in the Egyptian calendar, a fixed year of 365 days was in use, drifting by one day against the sun in four years. An unsuccessful attempt to add an extra day every fourth year was made in 238 BC (Decree of Canopus). Caesar probably experienced this "wandering" or "vague" calendar in that country. He landed in the Nile delta in October 48 BC and soon became embroiled in the Ptolemaic dynastic war, especially after Cleopatra managed to be "introduced" to him in Alexandria.
Caesar imposed a peace, and a banquet was held to celebrate the event. Lucan depicted Caesar talking to a wise man called Acoreus during the feast, stating his intention to create a calendar more perfect than that of Eudoxus (Eudoxus was popularly credited with having determined the length of the year to be 365¼ days). But the war soon resumed, and Caesar was attacked by the Egyptian army for several months until he achieved victory. He then enjoyed a long cruise on the Nile with Cleopatra before leaving the country in June 47 BC.
Caesar returned to Rome in 46 BC and, according to Plutarch, called in the best philosophers and mathematicians of his time to solve the problem of the calendar. Pliny says that Caesar was aided in his reform by the astronomer Sosigenes of Alexandria who is generally considered the principal designer of the reform. Sosigenes may also have been the author of the astronomical almanac published by Caesar to facilitate the reform. Eventually, it was decided to establish a calendar that would be a combination between the old Roman months, the fixed length of the Egyptian calendar, and the days of Greek astronomy. According to Macrobius, Caesar was assisted in this by a certain Marcus Flavius.
Caesar's reform only applied to the Roman calendar. However, in the following decades many of the local civic and provincial calendars of the empire and neighbouring client kingdoms were aligned to the Julian calendar by transforming them into calendars with years of 365 days with an extra day intercalated every four years. The reformed calendars typically retained many features of the unreformed calendars. In many cases, the New Year was not on 1 January, the leap day was not on the bissextile day, the old month names were retained, the lengths of the reformed months did not match the lengths of Julian months, and, even if they did, their first days did not match the first day of the corresponding Julian month. Nevertheless, since the reformed calendars had fixed relationships to each other and to the Julian calendar, the process of converting dates between them became quite straightforward, through the use of conversion tables known as "hemerologia". Several of the reformed calendars are only known through surviving hemerologia.
The three most important of these calendars are the Alexandrian calendar, the Asian calendar and the Syro-Macedonian calendar. Other reformed calendars are known from Cappadocia, Cyprus and the cities of Syria and Palestine. Most reformed calendars were adopted under Augustus, though the calendar of Nabatea was reformed after the kingdom became the Roman province of Arabia in AD 106. There is no evidence that local calendars were aligned to the Julian calendar in the western empire. Unreformed calendars continued to be used in Gaul, Greece, Macedon, the Balkans and parts of Palestine, most notably in Judea.
The Alexandrian calendar adapted the Egyptian calendar by adding a 6th epagomenal day as the last day of the year in every fourth year, falling on 29 August preceding a Julian bissextile day. It was otherwise identical to the Egyptian calendar. The first leap day was in 22 BC, and they occurred every four years from the beginning, even though Roman leap days occurred every three years at this time (see Leap year error). This calendar influenced the structure of several other reformed calendars, such as those of the cities of Gaza and Ascalon in Palestine, Salamis in Cyprus, and the province of Arabia. It was adopted by the Coptic church and remains in use both as the liturgical calendar of the Coptic church and as the civil calendar of Ethiopia.
The Asian calendar was an adaptation of the Macedonian calendar used in the province of Asia and, with minor variations, in nearby cities and provinces. It is known in detail through the survival of decrees promulgating it issued in 8 BC by the proconsul Paullus Fabius Maximus. It renamed the first month Dios as "Kaisar", and arranged the months such that each month started on the ninth day before the kalends of the corresponding Roman month; thus the year began on 23 September, Augustus' birthday. Since Greek months typically had 29 or 30 days, the extra day of 31-day months was named "Sebaste"—the emperor's day—and was the first day of these months. The leap day was a second Sebaste day in the month of Xandikos, i.e., 24 February. This calendar remained in use at least until the middle of the fifth century AD.
The Syro-Macedonian calendar was an adaptation of the Macedonian calendar used in Antioch and other parts of Syria. The months were exactly aligned to the Julian calendar, but they retained their Macedonian names and the year began in Dios = November until the fifth century, when the start of the year was moved to Gorpiaios = September.
These reformed calendars generally remained in use until the fifth or sixth century. Around that time most of them were replaced as civil calendars by the Julian calendar, but with a year starting in September to reflect the year of the indiction cycle.
The Julian calendar spread beyond the borders of the Roman Empire through its use as the Christian liturgical calendar. When a people or a country was converted to Christianity, they generally also adopted the Christian calendar of the church responsible for conversion. Thus, Christian Nubia and Ethiopia adopted the Alexandrian calendar, while Christian Europe adopted the Julian calendar, in either the Catholic or Orthodox variant. Starting in the 16th century, European settlements in the Americas and elsewhere likewise inherited the Julian calendar of the mother country, until they adopted the Gregorian reform. The last country to adopt the Julian calendar was the Ottoman Empire, which used it for financial purposes for some time under the name Rumi calendar and dropped the "escape years" which tied it to Muslim chronology in 1840.
The first step of the reform was to realign the start of the calendar year (1 January) to the tropical year by making 46 BC (708 AUC) 445 days long, compensating for the intercalations which had been missed during Caesar's pontificate. This year had already been extended from 355 to 378 days by the insertion of a regular intercalary month in February. When Caesar decreed the reform, probably shortly after his return from the African campaign in late Quintilis (July), he added 67 more days by inserting two extraordinary intercalary months between November and December.
These months are called "Intercalaris Prior" and "Intercalaris Posterior" in letters of Cicero written at the time; there is no basis for the statement sometimes seen that they were called "Undecimber" and "Duodecimber", terms that arose in the 18th century over a millennium after the Roman Empire's collapse. Their individual lengths are unknown, as is the position of the Nones and Ides within them.
Because 46 BC was the last of a series of irregular years, this extra-long year was, and is, referred to as the "last year of confusion". The new calendar began operation after the realignment had been completed, in 45 BC.
The Julian months were formed by adding ten days to a regular pre-Julian Roman year of 355 days, creating a regular Julian year of 365 days. Two extra days were added to January, Sextilis (August) and December, and one extra day was added to April, June, September and November. February was not changed in ordinary years, and so continued to be the traditional 28 days. Thus, the ordinary (i.e., non-leap year) lengths of all of the months were set by the Julian calendar to the same values they still hold today. (See Sacrobosco's theory on month lengths (below) for stories purporting otherwise.)
The Julian reform did not change the method used to count days of the month in the pre-Julian calendar, based on the Kalends, Nones and Ides, nor did it change the positions of these three dates within the months. Macrobius states that the extra days were added immediately before the last day of each month to avoid disturbing the position of the established religious ceremonies relative to the Nones and Ides of the month. However, since Roman dates after the Ides of the month counted down toward the start of the next month, the extra days had the effect of raising the initial value of the count of the day following the Ides in the lengthened months. Thus, in January, Sextilis and December the 14th day of the month became a.d. XIX Kal. instead of a.d. XVII Kal., while in April, June, September and November it became a.d. XVIII Kal.
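The effect on the count is mechanical: days after the Ides were numbered by an inclusive countdown to the Kalends of the next month, so lengthening a month raises the number attached to each later day. A minimal sketch of the counting rule in Python (the helper name is illustrative):

    def days_before_kalends(month_length: int, day: int) -> int:
        # Inclusive Roman count from a given day (after the Ides) to the
        # Kalends of the next month; both endpoints are counted, hence +2.
        return month_length - day + 2

    # 14 December in the 29-day pre-Julian December vs. the 31-day Julian one:
    assert days_before_kalends(29, 14) == 17  # a.d. XVII Kal. Ian.
    assert days_before_kalends(31, 14) == 19  # a.d. XIX Kal. Ian.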
Romans of the time born after the Ides of a month responded differently to the effect of this change on their birthdays. Mark Antony kept his birthday on 14 January, which changed its date from a.d. XVII Kal. Feb to a.d. XIX Kal. Feb, a date that had previously not existed. Livia kept the date of her birthday unchanged at a.d. III Kal. Feb., which moved it from 28 to 30 January, a day that had previously not existed. Augustus kept his on 23 September, but both the old date (a.d. VIII Kal. Oct.) and the new (a.d. IX Kal. Oct.) were celebrated in some places.
The inserted days were all initially characterised as "dies fasti" (F – see Roman calendar). The character of a few festival days was changed. In the early Julio-Claudian period a large number of festivals were decreed to celebrate events of dynastic importance, which caused the character of the associated dates to be changed to NP. However, this practice was discontinued around the reign of Claudius, and the practice of characterising days fell into disuse around the end of the first century AD: the Antonine jurist Gaius speaks of "dies nefasti" as a thing of the past.
The old intercalary month was abolished. The new leap day was dated as "ante diem bis sextum Kalendas Martias" ('the sixth doubled day before the Kalends of March'), usually abbreviated as "a.d. bis VI Kal. Mart."; hence it is called in English the bissextile day. The year in which it occurred was termed "annus bissextus", in English the bissextile year.
There is debate about the exact position of the bissextile day in the early Julian calendar. The earliest direct evidence is a statement of the 2nd century jurist Celsus, who states that there were two halves of a 48-hour day, and that the intercalated day was the "posterior" half. An inscription from AD 168 states that "a.d. V Kal. Mart." was the day after the bissextile day. The 19th century chronologist Ideler argued that Celsus used the term "posterior" in a technical fashion to refer to the earlier of the two days, which requires the inscription to refer to the whole 48-hour day as the bissextile. Some later historians share this view. Others, following Mommsen, take the view that Celsus was using the ordinary Latin (and English) meaning of "posterior". A third view is that neither half of the 48-hour "bis sextum" was originally formally designated as intercalated, but that the need to do so arose as the concept of a 48-hour day became obsolete.
There is no doubt that the bissextile day eventually became the earlier of the two days for most purposes. In 238 Censorinus stated that it was inserted after the Terminalia (23 February) and was followed by the last five days of February, i.e., a.d. VI, V, IV, III and prid. Kal. Mart. (which would be 24 to 28 February in a common year and the 25th to 29th in a leap year). Hence he regarded the bissextum as the first half of the doubled day. All later writers, including Macrobius about 430, Bede in 725, and other medieval computists (calculators of Easter) followed this rule, as does the liturgical calendar of the Roman Catholic Church. However, Celsus' definition continued to be used for legal purposes. It was incorporated into Justinian's Digest, and in the English statute "De anno et die bissextili" of 1236, which was not formally repealed until 1879.
The effect of the bissextile day on the nundinal cycle is not discussed in the sources. According to Dio Cassius, a leap day was inserted in 41 BC to ensure that the first market day of 40 BC did not fall on 1 January, which implies that the old 8-day cycle was not immediately affected by the Julian reform. However, he also reports that in AD 44, and on some previous occasions, the market day was changed to avoid a conflict with a religious festival. This may indicate that a single nundinal letter was assigned to both halves of the 48-hour bissextile day by this time, so that the Regifugium and the market day might fall on the same date but on different days. In any case, the 8-day nundinal cycle began to be displaced by the 7-day week in the first century AD, and dominical letters began to appear alongside nundinal letters in the fasti.
During the late Middle Ages days in the month came to be numbered in consecutive day order. Consequently, the leap day was considered to be the last day in February in leap years, i.e., 29 February, which is its current position.
The Julian reform set the lengths of the months to their modern values. However, a 13th-century scholar, Sacrobosco, proposed a different explanation for the lengths of Julian months which is still widely repeated but is certainly wrong.
According to Sacrobosco, the month lengths for ordinary years in the Roman Republican calendar, from January to December, were 30, 29, 30, 29, 30, 29, 30, 29, 30, 29, 30 and 29 days, for a total of 354 days.
Sacrobosco then thought that Julius Caesar added one day to every month except February, a total of 11 more days, giving the ordinary Julian year of 365 days; a single leap day could now be added to this extra-short February. The months, January to December, would then have run 31, 29 (30 in leap years), 31, 30, 31, 30, 31, 30, 31, 30, 31 and 30 days.
He then said Augustus changed this by taking one day from February to add it to Sextilis, and then modifying the alternation of the following months, to 31, 28 (29 in leap years), 31, 30, 31, 30, 31, 31, 30, 31, 30 and 31 days,
so that the length of "Augustus" (August) would not be shorter than (and therefore inferior to) the length of "Iulius" (July), giving us the irregular month lengths which are still in use.
There is abundant evidence disproving this theory. First, a wall painting of a Roman calendar predating the Julian reform has survived, which confirms the literary accounts that the months were already irregular before Julius Caesar reformed them, with an ordinary year of 355 days, not 354, and with month lengths, from January to December, of 29, 28, 31, 29, 31, 29, 31, 29, 29, 31, 29 and 29 days.
Also, the Julian reform did not change the dates of the Nones and Ides. In particular, the Ides were late (on the 15th rather than 13th) in March, May, July and October, showing that these months always had 31 days in the Roman calendar, whereas Sacrobosco's theory requires that March, May and July were originally 30 days long and that the length of October was changed from 29 to 30 days by Caesar and to 31 days by Augustus. Further, Sacrobosco's theory is explicitly contradicted by the 3rd and 5th century authors Censorinus and Macrobius, and it is inconsistent with seasonal lengths given by Varro, writing in 37 BC, before Sextilis was renamed for Augustus in 8 BC, with the 31-day Sextilis given by an Egyptian papyrus from 24 BC, and with the 28-day February shown in the "Fasti Caeretani", which is dated before 12 BC.
The Julian calendar has two types of year: "normal" years of 365 days and "leap" years of 366 days. There is a simple cycle of three "normal" years followed by a leap year and this pattern repeats forever without exception. The Julian year is, therefore, on average 365.25 days long. Consequently, the Julian year drifts over time with respect to the tropical (solar) year (365.24217 days).
Although Greek astronomers had known, at least since Hipparchus, a century before the Julian reform, that the tropical year was slightly shorter than 365.25 days, the calendar did not compensate for this difference. As a result, the calendar year gains about three days every four centuries compared to observed equinox times and the seasons. This discrepancy was largely corrected by the Gregorian reform of 1582. The Gregorian calendar has the same months and month lengths as the Julian calendar, but, in the Gregorian calendar, year numbers evenly divisible by 100 are not leap years, except that those evenly divisible by 400 remain leap years. (Even then, the Gregorian calendar diverges from astronomical observations by one day in 3,030 years).
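The two leap-year rules differ only at century years, which makes them easy to state in code. A minimal sketch in Python (for year numbers AD; the function names are illustrative):

    def is_leap_julian(year: int) -> bool:
        # Julian rule: every fourth year, without exception.
        return year % 4 == 0

    def is_leap_gregorian(year: int) -> bool:
        # Gregorian rule: century years are leap years only if divisible by 400.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    # 1900 shows the difference: a Julian leap year but not a Gregorian one;
    # 2000 was a leap year in both calendars.
    assert is_leap_julian(1900) and not is_leap_gregorian(1900)
    assert is_leap_julian(2000) and is_leap_gregorian(2000)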
The difference in the average length of the year between the Julian (365.25 days) and Gregorian (365.2425 days) calendars is 0.002%, making the Julian year 10.8 minutes longer. This difference has accumulated over the roughly 1,600 years since the basis for calculating the date of Easter was determined at the First Council of Nicaea. As a result, from 29 February "Julian" (13 March "Gregorian") 1900 until 28 February "Julian" (13 March "Gregorian") 2100, the "Julian" calendar is 13 days behind the "Gregorian" calendar; one day after (i.e. on 29 February "Julian" or 14 March "Gregorian"), the difference will be 14 days.
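Both figures can be verified directly: 365.25 − 365.2425 = 0.0075 days, and 0.0075 × 24 × 60 = 10.8 minutes per year; over 400 years the gap grows by 0.0075 × 400 = 3 days, the three century leap days (such as 1700, 1800 and 1900) that the Gregorian calendar omits.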
Although the new calendar was much simpler than the pre-Julian calendar, the pontifices initially added a leap day every three years, instead of every four. There are accounts of this in Solinus, Pliny, Ammianus, Suetonius, and Censorinus.
Macrobius gives an account of the introduction of the Julian calendar: the pontifices, counting inclusively, at first inserted the leap day every third year rather than every fourth, and Augustus, to undo the accumulated error, suspended intercalation for twelve years before allowing the four-year cycle to run as Caesar had intended.
Scholars have had different ideas as to how the leap years actually ran. The scheme described above is that of Scaliger (1583), who established that the Augustan reform was instituted in 8 BC. Each proposed reconstruction implies a proleptic Julian date for the first day of Caesar's reformed calendar (Kal. Ian. AUC 709) and a first Julian date on which the Roman calendar date matches the Julian calendar after the completion of Augustus' reform.
Alexander Jones says that the correct Julian calendar was in use in Egypt in 24 BC, implying that the first day of the reform in both Egypt and Rome, 1 January 45 BC, was the Julian date 1 January if 45 BC was a leap year and 2 January if it was not. This necessitates fourteen leap days up to and including AD 8 if 45 BC was a leap year and thirteen if it was not.
Pierre Brind'Amour argued that "only one day was intercalated between 1/1/45 and 1/1/40 (disregarding a momentary 'fiddling' in December of 41) to avoid the nundinum falling on Kal. Ian."
By the systems of Scaliger, Ideler and Bünting, the leap years prior to the suspension happen to be BC years that are divisible by 3, just as, after leap year resumption, they are the AD years divisible by 4.
In 1999, a papyrus was discovered which gives the dates of astronomical phenomena in 24 BC in both the Egyptian and Roman calendars. From the introduction of the Alexandrian reform, Egypt had two calendars: the old Egyptian, in which every year had 365 days, and the new Alexandrian, in which every fourth year had 366 days. Up to the first Alexandrian leap day, the date in both calendars was the same. The dates in the Alexandrian and Julian calendars are in one-to-one correspondence except for the period from 29 August in the year preceding a Julian leap year to the following 24 February. From a comparison of the astronomical data with the Egyptian and Roman dates, Alexander Jones concluded that the Egyptian astronomers (as opposed to travellers from Rome) used the correct Julian calendar.
An inscription has been discovered which orders a new calendar to be used in the Province of Asia to replace the previous Greek lunar calendar; the decree survives in two modern translations. On one translation, the decree is historically consistent: the proconsul decreed that the first day of the year in the new calendar should be Augustus' birthday, a.d. IX Kal. Oct., with every month beginning on the ninth day before the kalends. The date of introduction, the day after 14 Peritius, was 1 Dystrus, the next month. The month after that was Xanthicus; thus Xanthicus began on a.d. IX Kal. Mart., and normally contained 31 days. In leap year, however, it contained an extra "Sebaste day", the Roman leap day, and thus had 32 days. From the lunar nature of the old calendar we can fix the starting date of the new one as 24 January 5 BC in the Julian calendar, which was a leap year. Thus from inception the dates of the reformed Asian calendar are in one-to-one correspondence with the Julian.
Another translation of this inscription would move the starting date back three years to 8 BC, and from the lunar synchronism back to 26 January (Julian). But since the corresponding Roman date in the inscription is 24 January, this must be according to the incorrect calendar which, in 8 BC, Augustus had ordered to be corrected by the omission of leap days. As the authors of the previous paper point out, with the correct four-year cycle being used in Egypt and the three-year cycle abolished in Rome, it is unlikely that Augustus would have ordered the three-year cycle to be introduced in Asia.
The Julian reform did not immediately cause the names of any months to be changed. The old intercalary month was abolished and replaced with a single intercalary day at the same point (i.e., five days before the end of February). January continued to be the first month of the year.
The Romans later renamed months after Julius Caesar and Augustus, renaming Quintilis as "Iulius" (July) in 44 BC and Sextilis as "Augustus" (August) in 8 BC. Quintilis was renamed to honour Caesar because it was the month of his birth. According to a "senatus consultum" quoted by Macrobius, Sextilis was renamed to honour Augustus because several of the most significant events in his rise to power, culminating in the fall of Alexandria, occurred in that month.
Other months were renamed by other emperors, but apparently none of the later changes survived their deaths. In AD 37, Caligula renamed September as "Germanicus" after his father; in AD 65, Nero renamed April as "Neroneus", May as "Claudius" and June as "Germanicus"; and in AD 84 Domitian renamed September as "Germanicus" and October as "Domitianus". Commodus was unique in renaming all twelve months after his own adopted names (January to December): "Amazonius", "Invictus", "Felix", "Pius", "Lucius", "Aelius", "Aurelius", "Commodus", "Augustus", "Herculeus", "Romanus", and "Exsuperatorius". The emperor Tacitus is said to have ordered that September, the month of his birth and accession, be renamed after him, but the story is doubtful since he did not become emperor before November 275. Similar honorific month names were implemented in many of the provincial calendars that were aligned to the Julian calendar.
Other name changes were proposed but were never implemented. Tiberius rejected a senatorial proposal to rename September as "Tiberius" and October as "Livius", after his mother Livia. Antoninus Pius rejected a senatorial decree renaming September as "Antoninus" and November as "Faustina", after his empress.
Much more lasting than the ephemeral month names of the post-Augustan Roman emperors were the Old High German names introduced by Charlemagne. According to his biographer, Charlemagne gave all of the months agricultural German names. These names were used until the 15th century, over 700 years after his rule, and continued, with some modifications, to see some use as "traditional" month names until the late 18th century. The names (January to December) were: "Wintarmanoth" ("winter month"), "Hornung", "Lentzinmanoth" ("spring month", "Lent month"), "Ostarmanoth" ("Easter month"), "Wonnemanoth" ("joy-month", a corruption of "Winnimanoth", "pasture-month"), "Brachmanoth" ("fallow-month"), "Heuuimanoth" ("hay month"), "Aranmanoth" ("reaping month"), "Witumanoth" ("wood month"), "Windumemanoth" ("vintage month"), "Herbistmanoth" ("harvest month"), and "Heilagmanoth" ("holy month").
The calendar month names used in western and northern Europe, in Byzantium, and by the Berbers, were derived from the Latin names. However, in eastern Europe older seasonal month names continued to be used into the 19th century, and in some cases are still in use, in many languages, including: Belarusian, Bulgarian, Croatian, Czech, Finnish, Georgian, Lithuanian, Macedonian, Polish, Romanian, Slovene, Ukrainian. When the Ottoman Empire adopted the Julian calendar, in the form of the Rumi calendar, the month names reflected Ottoman tradition.
The principal method used by the Romans to identify a year for dating purposes was to name it after the two consuls who took office in it, the eponymous period in question being the consular year. Beginning in 153 BC, consuls began to take office on 1 January, thus synchronizing the commencement of the consular and calendar years. The calendar year has begun in January and ended in December since about 450 BC according to Ovid or since about 713 BC according to Macrobius and Plutarch (see Roman calendar). Julius Caesar did not change the beginning of either the consular year or the calendar year. In addition to consular years, the Romans sometimes used the regnal year of the emperor, and by the late 4th century documents were also being dated according to the 15-year cycle of the indiction. In 537, Justinian required that henceforth the date must include the name of the emperor and his regnal year, in addition to the indiction and the consul, while also allowing the use of local eras.
In 309 and 310, and from time to time thereafter, no consuls were appointed. When this happened, the consular date was given a count of years since the last consul (so-called "post-consular" dating). After 541, only the reigning emperor held the consulate, typically for only one year in his reign, and so post-consular dating became the norm. Similar post-consular dates were also known in the west in the early 6th century. The system of consular dating, long obsolete, was formally abolished in the law code of Leo VI, issued in 888.
Only rarely did the Romans number the year from the founding of the city (of Rome), "ab urbe condita" (AUC). This method was used by Roman historians to determine the number of years from one event to another, not to date a year. Different historians had several different dates for the founding. The "Fasti Capitolini", an inscription containing an official list of the consuls which was published by Augustus, used an epoch of 752 BC. The epoch used by Varro, 753 BC, has been adopted by modern historians. Indeed, Renaissance editors often added it to the manuscripts that they published, giving the false impression that the Romans numbered their years. Most modern historians tacitly assume that it began on the day the consuls took office, and ancient documents such as the "Fasti Capitolini" which use other AUC systems do so in the same way. However, Censorinus, writing in the 3rd century AD, states that, in his time, the AUC year began with the Parilia, celebrated on 21 April, which was regarded as the actual anniversary of the foundation of Rome.
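The arithmetic linking the two systems is straightforward: on Varro's epoch of 753 BC, and ignoring the complication of where the AUC year begins, a year n BC corresponds to AUC 754 − n and a year m AD to AUC 753 + m, so Caesar's reform of 46 BC falls in AUC 708 and its first day of operation, in 45 BC, in AUC 709.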
Many local eras, such as the Era of Actium and the Spanish Era, were adopted for the Julian calendar or its local equivalent in the provinces and cities of the Roman Empire. Some of these were used for a considerable time. Perhaps the best known is the Era of Martyrs, sometimes also called "Anno Diocletiani" (after Diocletian), which was associated with the Alexandrian calendar and often used by the Alexandrian Christians to number their Easters during the 4th and 5th centuries, and continues to be used by the Coptic and Ethiopian churches.
In the eastern Mediterranean, the efforts of Christian chronographers such as Annianus of Alexandria to date the Biblical creation of the world led to the introduction of Anno Mundi eras based on this event. The most important of these was the Etos Kosmou, used throughout the Byzantine world from the 10th century and in Russia until 1700. In the west, the kingdoms succeeding the empire initially used indictions and regnal years, alone or in combination. The chronicler Prosper of Aquitaine, in the fifth century, used an era dated from the Passion of Christ, but this era was not widely adopted. Dionysius Exiguus proposed the system of Anno Domini in 525. This era gradually spread through the western Christian world, once the system was adopted by Bede in the eighth century.
The Julian calendar was also used in some Muslim countries. The Rumi calendar, the Julian calendar used in the later years of the Ottoman Empire, adopted an era derived from the lunar AH year corresponding to AD 1840; because the year count was thereafter applied to solar Julian years, the effective Rumi epoch works out to AD 585. In recent years, some users of the Berber calendar have adopted an era starting in 950 BC, the approximate date that the Libyan pharaoh Sheshonq I came to power in Egypt.
The Roman calendar began the year on 1 January, and this remained the start of the year after the Julian reform. However, even after local calendars were aligned to the Julian calendar, they started the new year on different dates. The Alexandrian calendar in Egypt started on 29 August (30 August after an Alexandrian leap year). Several local provincial calendars were aligned to start on the birthday of Augustus, 23 September. The indiction caused the Byzantine year, which used the Julian calendar, to begin on 1 September; this date is still used in the Eastern Orthodox Church for the beginning of the liturgical year. When the Julian calendar was adopted in AD 988 by Vladimir I of Kiev, the year was numbered Anno Mundi 6496, beginning on 1 March, six months after the start of the Byzantine Anno Mundi year with the same number. In 1492 (AM 7000), Ivan III, according to church tradition, realigned the start of the year to 1 September, so that AM 7000 only lasted for six months in Russia, from 1 March to 31 August 1492.
During the Middle Ages 1 January retained the name "New Year's Day" (or an equivalent name) in all western European countries (affiliated with the Roman Catholic Church), since the medieval calendar continued to display the months from January to December (in twelve columns containing 28 to 31 days each), just as the Romans had. However, most of those countries began their numbered year on 25 December (the Nativity of Jesus), 25 March (the Incarnation of Jesus), or even Easter, as in France (see the Liturgical year article for more details).
In Anglo-Saxon England, the year most commonly began on 25 December, which, as (approximately) the winter solstice, had marked the start of the year in pagan times, though 25 March (the equinox) is occasionally documented in the 11th century. Sometimes the start of the year was reckoned as 24 September, the start of the so-called "western indiction" introduced by Bede. These practices changed after the Norman conquest. From 1087 to 1155 the English year began on 1 January, and from 1155 to 1751 began on 25 March. In 1752 it was moved back to 1 January. (See Calendar (New Style) Act 1750).
Even before 1752, 1 January was sometimes treated as the start of the new year – for example by Pepys – while the "year starting 25th March was called the Civil or Legal Year". To reduce misunderstandings about the date, it was not uncommon for a date between 1 January and 24 March to be written as "1661/62", indicating to the reader that the year was 1661 when counted from March and 1662 when counted from January. (For more detail, see Dual dating).
Most western European countries shifted the first day of their numbered year to 1 January while they were still using the Julian calendar, "before" they adopted the Gregorian calendar, many during the 16th century. The following table shows the years in which various countries adopted 1 January as the start of the year. Eastern European countries, with populations showing allegiance to the Orthodox Church, began the year on 1 September from about 988. The Rumi calendar used in the Ottoman Empire began the civil year on 1 March until 1918.
The Julian calendar has been replaced as the civil calendar by the Gregorian calendar in all countries which officially used it apart from Greece, which now uses the Revised Julian calendar. Turkey switched (for fiscal purposes) on 16 February/1 March 1917. Russia changed on 1/14 February 1918. Greece made the change for civil purposes on 16 February/1 March 1923, but the national day (25 March), which was a religious holiday, was to remain on the old calendar. This provision became redundant on 10/23 March the following year when the calendar systems of church and state were unified. Most Christian denominations in the west and areas evangelised by western churches have made the change to Gregorian for their liturgical calendars to align with the civil calendar.
A calendar similar to the Julian one, the Alexandrian calendar, is the basis for the Ethiopian calendar, which is still the civil calendar of Ethiopia. Egypt converted from the Alexandrian calendar to Gregorian on 1 Thaut 1592/11 September 1875.
During the changeover between calendars and for some time afterwards, dual dating was used in documents and gave the date according to both systems. In contemporary as well as modern texts that describe events during the period of change, it is customary to clarify to which calendar a given date refers by using an O.S. or N.S. suffix (denoting Old Style, Julian or New Style, Gregorian).
The Julian calendar was in general use in Europe and northern Africa until 1582, when Pope Gregory XIII promulgated the Gregorian calendar. Reform was required because the Julian scheme adds too many leap days with respect to the astronomical seasons. On average, the astronomical solstices and the equinoxes advance by 10.8 minutes per year against the Julian year. As a result, the calculated date of Easter gradually moved out of alignment with the March equinox.
Hipparchus, and presumably Sosigenes, were aware of the discrepancy, although not of its correct value, and it was evidently felt to be of little importance at the time of the Julian reform (46 BC). However, the error accumulated significantly over time: the Julian calendar gained a day every 128 years. By 1582, it was ten days out of alignment from where it supposedly had been in 325, at the time of the Council of Nicaea.
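These figures can be checked directly from the mean year lengths. The following Python sketch uses an approximate mean tropical year of 365.2422 days (the true value varies slightly over the centuries):

```python
# Mean year lengths in days; the tropical-year value is an approximation.
JULIAN_YEAR = 365.25
TROPICAL_YEAR = 365.2422

drift_per_year = JULIAN_YEAR - TROPICAL_YEAR   # ~0.0078 days per year

# One full day of drift accumulates roughly every 128 years...
print(1 / drift_per_year)                      # ~128.2

# ...so between the Council of Nicaea (325) and 1582, about ten days accrued.
print((1582 - 325) * drift_per_year)           # ~9.8
```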
The Gregorian calendar was soon adopted by most Catholic countries (e.g., Spain, Portugal, Poland, most of Italy). Protestant countries followed later, and some countries of eastern Europe even later. In the British Empire (including the American colonies), Wednesday 2 September 1752 was followed by Thursday 14 September 1752. For 12 years from 1700, Sweden used a modified Julian calendar before adopting the Gregorian calendar in 1753.
Since the Julian and Gregorian calendars were long used simultaneously, although in different places, calendar dates in the transition period are often ambiguous, unless it is specified which calendar was being used. In some circumstances, double dates might be used, one in each calendar. The notation "Old Style" (O.S.) is sometimes used to indicate a date in the Julian calendar, as opposed to "New Style" (N.S.), which either represents the Julian date with the start of the year as 1 January or a full mapping onto the Gregorian calendar. This notation is used to clarify dates from countries which continued to use the Julian calendar after the Gregorian reform, such as Great Britain, which did not switch to the reformed calendar until 1752, or Russia, which did not switch until 1918. This is why the Russian Revolution of 7 November 1917 (N.S.) is known as the October Revolution: it began on 25 October (O.S.).
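The mapping between the two calendars can be made concrete with a small conversion routine. A minimal sketch, assuming the fixed-day ("Rata Die") formula for the proleptic Julian calendar from Dershowitz and Reingold's "Calendrical Calculations"; Python's datetime module counts days in the proleptic Gregorian calendar, and the function names here are illustrative only:

```python
from datetime import date

def is_julian_leap(year):
    # Proleptic Julian rule for CE years: every fourth year is a leap year.
    return year % 4 == 0

def ordinal_from_julian(year, month, day):
    # Day number ("Rata Die") of a Julian-calendar date, after Dershowitz
    # & Reingold; day 1 is 1 January AD 1 in the proleptic Gregorian calendar.
    adjustment = 0 if month <= 2 else (-1 if is_julian_leap(year) else -2)
    return (-2 + 365 * (year - 1) + (year - 1) // 4
            + (367 * month - 362) // 12 + adjustment + day)

def julian_to_gregorian(year, month, day):
    # datetime.date.fromordinal interprets the count as proleptic Gregorian.
    return date.fromordinal(ordinal_from_julian(year, month, day))

# 25 October 1917 (O.S.) falls on 7 November 1917 (N.S.) -- hence the
# "October" Revolution.
print(julian_to_gregorian(1917, 10, 25))  # 1917-11-07
```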
Throughout the long transition period, the Julian calendar has continued to diverge from the Gregorian. This has happened in whole-day steps, as leap days which were dropped in certain centennial years in the Gregorian calendar continued to be present in the Julian calendar. Thus, in the year 1700 the difference increased to 11 days; in 1800, to 12; and in 1900, to 13. Since 2000 was a leap year according to both the Julian and Gregorian calendars, the difference of 13 days did not change in that year. This difference of 13 days will persist until Saturday 28 February 2100 (Julian), i.e. Saturday 13 March 2100 (Gregorian), since 2100 is "not" a Gregorian leap year but "is" a Julian leap year; from the next day the difference will be 14 days: Sunday 29 February (Julian) will be Sunday 14 March (Gregorian), and the day after that, Monday 1 March (Julian), will fall on Monday 15 March (Gregorian).
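These whole-day steps follow directly from the Gregorian centennial rule, so the gap can be computed rather than tabulated. A minimal sketch, assuming the standard formula (for January and February dates, pass the previous year, since each step occurs at the end of the Julian February):

```python
def julian_gregorian_gap(year):
    # Difference in days (Gregorian date minus Julian date) for dates from
    # 1 March of the given year onward; derived from the rule that Gregorian
    # centennial years are leap years only when divisible by 400.
    century = year // 100
    return century - century // 4 - 2

for y in (1582, 1700, 1800, 1900, 2000, 2100):
    print(y, julian_gregorian_gap(y))   # 10, 11, 12, 13, 13, 14
```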
Although most Eastern Orthodox countries (most of them in eastern or southeastern Europe) had adopted the Gregorian calendar by 1924, their national churches had not. The "Revised Julian calendar", endorsed by a synod in Constantinople in May 1923, consists of a solar part which is and will remain identical to the Gregorian calendar until the year 2800, and a lunar part which calculated Easter astronomically at Jerusalem. All Orthodox churches refused to accept the lunar part, and so they continue to celebrate Easter according to the Julian calendar, with the exception of the Finnish Orthodox Church, which uses the Gregorian "Paschalion".
The solar part of the Revised Julian calendar was accepted by only some Orthodox churches. Those that did accept it, with hope for improved dialogue and negotiations with the western denominations, were the Ecumenical Patriarchate of Constantinople, the Patriarchates of Alexandria, Antioch, the Orthodox Churches of Greece, Cyprus, Romania, Poland (from 1924 to 2014; it is still permitted to use the Revised Julian calendar in parishes that want it), Bulgaria (in 1963), and the Orthodox Church in America (although some OCA parishes are permitted to use the Julian calendar). Thus these churches celebrate the Nativity on the same day that western Christians do, 25 December Gregorian until 2799.
The Orthodox Churches of Jerusalem, Russia, Serbia, Montenegro, Poland (from 15 June 2014), North Macedonia, Georgia, Ukraine, and the Greek Old Calendarists and other groups continue to use the Julian calendar, thus they celebrate the Nativity on 25 December "Julian" (which is 7 January "Gregorian" until 2100). The Russian Orthodox Church has some parishes in the West which celebrate the Nativity on 25 December "Gregorian" until 2799.
Parishes of the Bulgarian Diocese of the Orthodox Church in America, both before and after the 1976 transfer of that diocese from the Russian Orthodox Church Outside Russia to the Orthodox Church in America, were permitted to use this date. Some Old Calendarist groups which stand in opposition to the state churches of their homelands will use the Great Feast of the Theophany (6 January "Julian"/19 January "Gregorian") as a day for religious processions and the Great Blessing of Waters, to publicise their cause.
Most branches of the Eastern Orthodox Church use the Julian calendar for calculating the date of Easter, upon which the timing of all the other moveable feasts depends. Some such churches have adopted the Revised Julian calendar for the observance of fixed feasts, while other Orthodox churches retain the Julian calendar for all purposes.
The Oriental Orthodox Churches generally use the local calendar of their homelands. However, when calculating the Nativity Feast, most observe the Julian calendar. This was traditionally for the sake of unity throughout Christendom. In the west, some Oriental Orthodox Churches either use the Gregorian calendar or are permitted to observe the Nativity according to it.
The Armenian Patriarchate of Jerusalem of the Armenian Apostolic Orthodox Church uses the Julian calendar, while the rest of the Armenian Church uses the Gregorian calendar. Both celebrate the Nativity as part of the Feast of Theophany according to their respective calendars.
The Julian calendar is still used by the Berbers of the Maghreb in the form of the Berber calendar.
https://en.wikipedia.org/wiki?curid=15651
John Quincy Adams
John Quincy Adams (July 11, 1767 – February 23, 1848) was an American statesman, diplomat, lawyer, and diarist who served as the sixth president of the United States, from 1825 to 1829. He previously served as the eighth United States Secretary of State from 1817 to 1825. During his long diplomatic and political career, Adams also served as an ambassador, and as a member of the United States Senate and United States House of Representatives representing Massachusetts. He was the eldest son of John Adams, who served as the second US president from 1797 to 1801, and First Lady Abigail Adams. Initially a Federalist like his father, he won election to the presidency as a member of the Democratic-Republican Party, and in the mid-1830s became affiliated with the Whig Party.
Born in what is now Quincy, Massachusetts (then part of the town of Braintree), Adams spent much of his youth in Europe, where his father served as a diplomat. After returning to the United States, Adams established a successful legal practice in Boston. In 1794, President George Washington appointed Adams as the U.S. ambassador to the Netherlands, and Adams would serve in high-ranking diplomatic posts until 1801, when Thomas Jefferson took office as president. Federalist leaders in Massachusetts arranged for Adams's election to the United States Senate in 1802, but Adams broke with the Federalist Party over foreign policy and was denied re-election. In 1809, Adams was appointed as the U.S. ambassador to Russia by President James Madison, a member of the Democratic-Republican Party. Adams held diplomatic posts for the duration of Madison's presidency, and he served as part of the American delegation that negotiated an end to the War of 1812. In 1817, newly-elected President James Monroe selected Adams as his Secretary of State. In that role, Adams negotiated the Adams–Onís Treaty, which provided for the American acquisition of Florida. He also helped formulate the Monroe Doctrine, which became a key tenet of U.S. foreign policy.
The 1824 presidential election was contested by Adams, Andrew Jackson, William H. Crawford, and Henry Clay, all of whom were members of the Democratic-Republican Party. As no candidate won a majority of the electoral vote, the House of Representatives held a contingent election to determine the president, and Adams won that contingent election with the support of Clay. As president, Adams called for an ambitious agenda that included federally-funded infrastructure projects, the establishment of a national university, and engagement with the countries of Latin America, but many of his initiatives were defeated in Congress. During Adams's presidency, the Democratic-Republican Party polarized into two major camps: one group, known as the National Republican Party, supported President Adams, while the other group, known as the Democratic Party, was led by Andrew Jackson. The Democrats proved to be more effective political organizers than Adams and his National Republican supporters, and Jackson decisively defeated Adams in the 1828 presidential election.
Rather than retiring from public service, Adams won election to the House of Representatives, where he would serve from 1831 to his death in 1848. He joined the Anti-Masonic Party in the early 1830s before becoming a member of the Whig Party, which united those opposed to President Jackson. During his time in Congress, Adams became increasingly critical of slavery and of the Southern leaders whom he believed controlled the Democratic Party. He was particularly opposed to the annexation of Texas and the Mexican–American War, which he saw as a war to extend slavery. He also led the repeal of the "gag rule", which had prevented the House of Representatives from debating petitions to abolish slavery. Historians generally concur that Adams was one of the greatest diplomats and secretaries of state in American history. They typically rank him as a mediocre president who had an ambitious agenda but could not get it passed by Congress.
John Quincy Adams was born on July 11, 1767, to John and Abigail Adams (née Smith) in a part of Braintree, Massachusetts that is now Quincy. He was named for his mother's maternal grandfather, Colonel John Quincy, after whom Quincy, Massachusetts, is named. Young Adams was educated by private tutors – his cousin James Thaxter and his father's law clerk, Nathan Rice. He soon began to exhibit his literary skills, and in 1779 he initiated a diary which he kept until just before he died in 1848. Until the age of ten, Adams grew up on the family farm in Braintree, largely in the care of his mother. Though frequently absent due to his participation in the American Revolution, John Adams maintained a correspondence with his son, encouraging him to read works by authors such as Thucydides and Hugo Grotius. With his father's encouragement, Adams would also translate classical authors such as Virgil, Horace, Plutarch, and Aristotle.
In 1778, Adams and his father departed for Europe, where John Adams would serve as part of American diplomatic missions in France and the Netherlands. During this period, Adams studied French, Greek, and Latin, and attended several schools, including Leiden University. In 1781, Adams traveled to Saint Petersburg, Russia, where he served as the secretary of American diplomat Francis Dana. He returned to the Netherlands in 1783, and accompanied his father to Great Britain in 1784. Though Adams enjoyed Europe, he and his family decided he needed to return to the United States to complete his education and eventually launch a political career.
Adams returned to the United States in 1785 and earned admission as a member of the junior class of Harvard College the following year. He was elected to Phi Beta Kappa and excelled academically, graduating second in his class in 1787. After graduating from Harvard, he studied law with Theophilus Parsons in Newburyport, Massachusetts from 1787 to 1789. Adams initially opposed the ratification of the United States Constitution, but he ultimately came to accept the document, and in 1789 his father was elected as the first Vice President of the United States. In 1790, Adams opened his own legal practice in Boston. Despite some early struggles, he was successful as an attorney and established financial independence from his parents.
Adams initially avoided becoming directly involved in politics, instead focusing on building his legal career. In 1791, he wrote a series of pseudonymously-published essays arguing that Britain provided a better governmental model than France. Two years later, he published another series of essays attacking Edmond-Charles Genêt, a French diplomat who sought to undermine President George Washington's policy of neutrality in the French Revolutionary Wars. In 1794, Washington appointed Adams as the U.S. ambassador to the Netherlands; Adams considered declining the role but ultimately took the position on the advice of his father. While abroad, Adams continued to urge neutrality, arguing that the United States would benefit economically by staying out of the ongoing French Revolutionary Wars. His chief duty as the ambassador to the Netherlands was to secure and maintain loans essential to U.S. finances. On his way to the Netherlands, he met with John Jay, who was then negotiating the Jay Treaty with Great Britain. Adams supported the Jay Treaty, but it proved unpopular with many in the United States, contributing to a growing partisan split between the Federalist Party of Alexander Hamilton and the Democratic-Republican Party of Thomas Jefferson.
Adams spent the winter of 1795–96 in London, where he met Louisa Catherine Johnson, the second daughter of American merchant Joshua Johnson. In April 1796, Louisa accepted Adams's proposal of marriage. Adams's parents disapproved of his decision to marry a woman who had grown up in England, but he informed his parents that he would not reconsider his decision. Adams initially wanted to delay his wedding to Louisa until he returned to the United States, but they were married in All Hallows-by-the-Tower in July 1797. Shortly after the wedding, Joshua Johnson fled England to escape his creditors, and Adams did not receive the dowry that Johnson had promised him, much to the embarrassment of Louisa. Nonetheless, Adams noted in his own diary that he had no regrets about his decision to marry Louisa.
In 1796, Washington appointed Adams as the U.S. ambassador to Portugal. Later in that same year, John Adams defeated Jefferson in the 1796 presidential election. When the elder Adams became president, he appointed his son as the U.S. ambassador to Prussia. Though concerned that his appointment would be criticized as nepotistic, Adams accepted the position and traveled to the Prussian capital of Berlin with his wife and his younger brother, Thomas Boylston Adams. The State Department charged Adams with developing commercial relations with Prussia and Sweden, but President Adams also asked his son to write him frequently about affairs in Europe. In 1799, Adams negotiated a new trade agreement between the United States and Prussia, though he was never able to complete an agreement with Sweden. He frequently wrote to family members in the United States, and in 1801 his letters about the Prussian region of Silesia were published in a book titled "Letters on Silesia". In the 1800 presidential election, Jefferson defeated John Adams, and both Adams and his son left office in early 1801.
On his return to the United States, Adams re-established a legal practice in Boston, and in April 1802 he was elected to the Massachusetts Senate. In November of that same year he ran unsuccessfully for the United States House of Representatives. In February 1803, the Massachusetts legislature elected Adams to the United States Senate. Though somewhat reluctant to affiliate with any political party, Adams joined the Federalist minority in Congress. Like his Federalist colleagues, he opposed the impeachment of Associate Justice Samuel Chase, an outspoken supporter of the Federalist Party.
Adams had strongly opposed Jefferson's 1800 presidential candidacy, but he gradually became alienated from the Federalist Party. His disaffection was driven by the party's declining popularity, disagreements over foreign policy, and Adams's hostility to Timothy Pickering, a Federalist Party leader whom Adams viewed as overly favorable to Britain. Unlike other New England Federalists, Adams supported the Jefferson administration's Louisiana Purchase and generally favored expansionist policies. Adams was the lone Federalist in Congress to vote for the Non-importation Act of 1806, which was designed to punish Britain for its attacks on American shipping during the ongoing Napoleonic Wars. Adams became increasingly frustrated with the unwillingness of other Federalists to condemn British actions, including impressment, and he moved closer to the Jefferson administration. After Adams supported the Embargo Act of 1807, the Federalist-controlled Massachusetts legislature elected Adams's successor several months before the end of his term and Adams resigned from the Senate shortly thereafter.
While a member of the Senate, Adams served as a professor of logic at Brown University and as the Boylston Professor of Rhetoric and Oratory at Harvard University. Adams's devotion to classical rhetoric shaped his response to public issues, and he would remain inspired by those rhetorical ideals long after the neo-classicism and deferential politics of the founding generation were eclipsed by the commercial ethos and mass democracy of the Jacksonian Era. Many of Adams's idiosyncratic positions were rooted in his abiding devotion to the Ciceronian ideal of the citizen-orator "speaking well" to promote the welfare of the polis. He was also influenced by the classical republican ideal of civic eloquence espoused by British philosopher David Hume. Adams adapted these classical republican ideals of public oratory to the American debate, viewing its multilevel political structure as ripe for "the renaissance of Demosthenic eloquence." His "Lectures on Rhetoric and Oratory" (1810) looks at the fate of ancient oratory, the necessity of liberty for it to flourish, and its importance as a unifying element for a new nation of diverse cultures and beliefs. Just as civic eloquence failed to gain popularity in Britain, in the United States interest faded in the second decade of the 19th century, as the "public spheres of heated oratory" disappeared in favor of the private sphere.
After resigning from the Senate, Adams was ostracized by Massachusetts Federalist leaders, but he declined Democratic-Republican entreaties to seek office. In 1809, he argued before the Supreme Court of the United States in the case of "Fletcher v. Peck", and the Supreme Court ultimately agreed with Adams's argument that the Constitution's Contract Clause prevented the state of Georgia from invalidating a land sale to out-of-state companies. Later that year, President James Madison appointed Adams as the first United States Minister to Russia. Though Adams had only recently broken with the Federalist Party, his support of Jefferson's foreign policy had earned him goodwill with the Madison Administration. Adams was well-qualified for the role after his experiences in Europe generally and Russia specifically.
After a difficult passage through the Baltic Sea, Adams arrived in the Russian capital of St. Petersburg in October 1809. He quickly established a productive working relationship with Russian official Nikolay Rumyantsev and eventually befriended Tsar Alexander I of Russia. Adams continued to favor American neutrality between France and Britain during the Napoleonic Wars. Louisa was initially distraught at the prospect of living in Russia, but she became a popular figure at the Russian court. From his diplomatic post, Adams observed the French Emperor Napoleon's invasion of Russia, which ended in defeat for the French. In February 1811, Adams was nominated by President Madison as an Associate Justice of the United States Supreme Court. The nomination was unanimously confirmed by the Senate, but Adams declined the seat, preferring a career in politics and diplomacy, so Joseph Story took the seat instead.
Adams had long feared that the United States would enter a war it could not win against Britain, and by early 1812 he saw such a war as inevitable due to the constant British attacks on American shipping and the British practice of impressment. In mid-1812, the United States declared war against Britain, beginning the War of 1812. Tsar Alexander attempted to mediate the conflict between Britain and the United States, and President Madison appointed Adams, Secretary of the Treasury Albert Gallatin, and Federalist Senator James A. Bayard to a delegation charged with negotiating an end to the war. Gallatin and Bayard arrived in St. Petersburg in July 1813, but the British declined Tsar Alexander's offer of mediation. Hoping to commence the negotiations at another venue, Adams left Russia in April 1814. Negotiations finally began in mid-1814 in Ghent, where Adams, Gallatin, and Bayard were joined by two additional American delegates, Jonathan Russell and former Speaker of the House Henry Clay. Adams, the nominal head of the delegation, got along well with Gallatin, Bayard, and Russell, but he occasionally clashed with Clay.
The British delegation initially treated the United States as a defeated power, demanding the creation of an Indian barrier state from American territory near the Great Lakes. The American delegation unanimously rejected this offer, and their negotiating position was bolstered by the American victory in the Battle of Plattsburgh. By November 1814, the government of Lord Liverpool decided to seek an end to hostilities with the U.S. on the basis of "status quo ante bellum". Adams and his fellow commissioners had hoped for similar terms, even though a return to the status quo would mean the continuation of British practice of impressment. The treaty was signed on December 24, 1814. The United States did not gain any concessions from the treaty but could boast that it had survived a war against the strongest power in the world. Following the signing of the treaty, Adams traveled to Paris, where he witnessed first-hand the Hundred Days of Napoleon's restoration.
In May 1815, Adams learned that President Madison had appointed him as the U.S. ambassador to Britain. With the aid of Clay and Gallatin, Adams negotiated a limited trade agreement with Britain. Following the conclusion of the trade agreement, much of Adams's time as ambassador was spent helping stranded American sailors and prisoners of war. In pursuit of national unity, newly-elected President James Monroe decided a Northerner would be optimal for the position of Secretary of State, and he chose the respected and experienced Adams for the role. Having spent several years in Europe, Adams returned to the United States in August 1817.
Adams served as Secretary of State throughout Monroe's eight-year presidency, from 1817 to 1825. Many of his successes as secretary, such as the Convention of 1818 with Great Britain, the Transcontinental Treaty with Spain, and the Monroe Doctrine, were not preplanned strategy but responses to unexpected events. Adams wanted to delay American recognition of the newly independent republics of Latin America to avoid the risk of war with Spain and its European allies. However, Andrew Jackson's military campaign in Florida, and Henry Clay's threats in Congress, forced Spain to cut a deal, which Adams negotiated successfully. Biographer James Lewis says, "He managed to play the cards that he had been dealt – cards that he very clearly had not wanted – in ways that forced the Spanish cabinet to recognize the weakness of its own hand." Apart from the Monroe Doctrine, his last four years as Secretary of State were less successful, since he was absorbed in his presidential campaign and refused to make compromises with other nations that might have weakened his candidacy; the result was a small-scale trade war, but a successful election to the White House.
Taking office in the aftermath of the War of 1812, Adams thought that the country had been fortunate in avoiding territorial losses, and he prioritized avoiding another war with a European power, particularly Britain. He also sought to avoid exacerbating sectional tensions, which had been a major issue for the country during the War of 1812. One of the major challenges confronting Adams was how to respond to the power vacuum in Latin America that arose from Spain's weakness following the Peninsular War. In addition to his foreign policy role, Adams held several domestic duties, including overseeing the 1820 Census.
Monroe and Adams agreed on most of the major foreign policy issues: both favored neutrality in the Latin American wars of independence, peace with Great Britain, denial of a trade agreement with the French, and expansion, peacefully if possible, into the North American territories of the Spanish Empire. The president and his secretary of state developed a strong working relationship, and while Adams often influenced Monroe's policies, he respected that Monroe made the final decisions on major issues. Monroe met regularly with his five-person cabinet, which initially consisted of Adams, Secretary of the Treasury William H. Crawford, Secretary of War John C. Calhoun, Secretary of the Navy Benjamin Crowninshield, and Attorney General William Wirt. Adams developed a strong respect for Calhoun but believed that Crawford was unduly focused on succeeding Monroe in 1824.
During his time as ambassador to Britain, Adams had begun negotiations over several contentious issues that had not been solved by the War of 1812 or the Treaty of Ghent. In 1817, the two countries agreed to the Rush–Bagot Treaty, which limited naval armaments on the Great Lakes. Negotiations between the two powers continued, resulting in the Treaty of 1818, which defined the Canada–United States border west of the Great Lakes. The boundary was set at the 49th parallel to the Rocky Mountains, while the territory to the west of the mountains, known as Oregon Country, would be jointly occupied. The agreement represented a turning point in United Kingdom–United States relations, as the U.S. turned its attention to its southern and western borders and British fears over American expansionism waned.
When Adams took office, Spanish possessions bordered the United States to the South and West. In the South, Spain retained control of Florida, which the U.S. had long sought to purchase. Spain struggled to control the Indian tribes active in Florida, and some of those tribes raided U.S. territory. In the West, New Spain bordered the territory acquired by the U.S. in the Louisiana Purchase, but no clear boundary had been established between U.S. and Spanish territory. After taking office, Adams began negotiations with Luis de Onís, the Spanish minister to the United States, for the purchase of Florida and the settlement of a border between the U.S. and New Spain. The negotiations were interrupted by an escalation of the Seminole War, and in December 1818 Monroe ordered General Andrew Jackson to enter Florida and retaliate against Seminoles that had raided Georgia. Exceeding his orders, Jackson captured the Spanish outposts of St. Marks and Pensacola and executed two Englishmen. While the rest of the cabinet was outraged by Jackson's actions, Adams defended them as necessary to the country's self-defense, and he eventually convinced Monroe and most of the cabinet to support Jackson. Adams informed Spain that Jackson had been compelled to act by Spain's failure to police its own territory, and he advised Spain to either secure the region or sell it to the United States. The British, meanwhile, declined to risk their recent rapprochement with the United States, and did not make a major diplomatic issue out of Jackson's execution of two British nationals.
Negotiations between Spain and the United States continued, and Spain agreed to cede Florida. The determination of the western boundary of the United States proved more difficult. American expansionists favored setting the border at the Rio Grande, but Spain, intent on protecting its colony of Mexico from American encroachment, insisted on setting the boundary at the Sabine River. At Monroe's direction, Adams agreed to the Sabine River boundary, but he insisted that Spain cede its claims on Oregon Country. Adams was deeply interested in establishing American control over the Oregon Country, partly because he believed that control of that region would spur trade with Asia. The acquisition of Spanish claims to the Pacific Northwest also allowed the Monroe administration to pair the acquisition of Florida, which was chiefly sought by Southerners, with territorial gains favored primarily by those in the North. After extended negotiations, Spain and the United States agreed to the Adams–Onís Treaty, which was ratified in February 1821. Adams was deeply proud of the treaty, though he privately was concerned by the potential expansion of slavery into the newly-acquired territories. In 1824, the Monroe administration would further bolster U.S. claims to Oregon by reaching the Russo-American Treaty of 1824, which set the southern border of Russian Alaska at the parallel 54°40′ north.
As the Spanish Empire continued to fracture during Monroe's second term, Adams and Monroe became increasingly concerned that the "Holy Alliance" of Prussia, Austria, and Russia would seek to bring Spain's erstwhile colonies under their control. In 1822, following the conclusion of the Adams–Onís Treaty, the Monroe administration recognized the independence of several Latin American countries, including Argentina and Mexico. In 1823, British Foreign Secretary George Canning suggested that the U.S. and Britain should work together to preserve the independence of these fledgling republics. The cabinet debated whether to accept the offer, but Adams opposed it. Instead, Adams urged Monroe to publicly declare U.S. opposition to any European attempt to colonize or re-take control of territory in the Americas, while also committing the U.S. to neutrality in European affairs. In his December 1823 annual message to Congress, Monroe laid out the Monroe Doctrine, which was largely built upon Adams's ideas. In issuing the Monroe Doctrine, the United States displayed a new level of assertiveness in international relations, as the doctrine represented the country's first claim to a sphere of influence. It also marked the country's shift in psychological orientation away from Europe and towards the Americas. Debates over foreign policy would no longer center on relations with Britain and France, but would instead focus on western expansion and relations with Native Americans. The doctrine became one of the foundational principles of U.S. foreign policy.
Immediately upon becoming Secretary of State, Adams emerged as one of Monroe's most likely successors, as the last three presidents had all served in the role at some point before taking office. As the 1824 election approached, Henry Clay, John C. Calhoun (who later dropped out of the race), and William H. Crawford appeared to be Adams's primary competition to succeed Monroe. Crawford favored state sovereignty and a strict constructionist view of the Constitution, while Clay, Calhoun, and Adams embraced federally-funded internal improvements, high tariffs, and the Second Bank of the United States, which was also known as the national bank. Because the Federalist Party had all but collapsed after the War of 1812, all the major presidential candidates were members of the Democratic-Republican Party. Adams felt that his own election as president would vindicate his father, while also allowing him to pursue an ambitious domestic policy. Though he lacked the charisma of his competitors, Adams was widely respected and benefited from the lack of other prominent Northern political leaders.
Adams's top choice for the role of vice president was General Andrew Jackson; Adams noted that "the Vice-Presidency was a station in which [Jackson] could hang no one, and in which he would need to quarrel with no one." However, as the 1824 election approached, Jackson jumped into the race for president. While the other candidates based their candidacies on their long tenure as congressmen, ambassadors, or members of the cabinet, Jackson's appeal rested on his military service, especially in the Battle of New Orleans. The congressional nominating caucus had decided upon previous Democratic-Republican presidential nominees, but it had become largely discredited by 1824. Candidates were instead nominated by state legislatures or nominating conventions, and Adams received the endorsement of the New England legislatures. The regional strength of each candidate played an important role in the election; Adams was popular in New England, Clay and Jackson were strong in the West, and Jackson and Crawford competed for the South.
In the 1824 presidential election, Jackson won a plurality in the Electoral College, taking 99 of the 261 electoral votes, while Adams won 84, Crawford won 41, and Clay took 37. Calhoun, meanwhile, won a majority of the electoral votes for vice president. Adams nearly swept the electoral votes of New England and won a majority of the electoral votes in New York, but he won a total of just six electoral votes from the slave states. Most of Jackson's support came from slave-holding states, but he also won New Jersey, Pennsylvania, and some electoral votes from the Northwest. As no candidate won a majority of the electoral votes, the House was required to hold a contingent election under the terms of the Twelfth Amendment. The House would decide among the top three electoral vote winners, with each state's delegation having one vote; thus, unlike his three rivals, Clay was not eligible to be elected by the House.
Adams knew that his own victory in the contingent election would require the support of Clay, who wielded immense influence in the House of Representatives. Though they were quite different in temperament and had clashed in the past, Adams and Clay shared similar views on national issues. By contrast, Clay viewed Jackson as a dangerous demagogue, and he was unwilling to support Crawford due to the latter's health issues. Adams and Clay met before the contingent election, and Clay agreed to support Adams in the election. Adams also met with Federalists such as Daniel Webster, promising that he would not deny governmental positions to members of their party. On February 9, 1825, Adams won the contingent election on the first ballot, taking 13 of the 24 state delegations. Adams won the House delegations of all the states in which he or Clay had won a majority of the electoral votes, as well as the delegations of Illinois, Louisiana, and Maryland. Adams's victory made him the first child of a president to serve as president himself. After the election, many of Jackson's supporters claimed that Adams and Clay had reached a "Corrupt Bargain" whereby Adams promised Clay the position of Secretary of State in return for Clay's support.
Adams was inaugurated on March 4, 1825. He took the oath of office on a book of constitutional law, instead of the more traditional Bible. In his inaugural address, he adopted a post-partisan tone, promising that he would avoid party-building and politically-motivated appointments. He also proposed an elaborate program of "internal improvements": roads, ports, and canals. Though some worried about the constitutionality of such federal projects, Adams argued that the General Welfare Clause provided for broad constitutional authority. He promised that he would ask Congress to authorize many such projects.
Adams presided over a harmonious and productive cabinet that he met with on a weekly basis. Like Monroe, Adams sought a geographically-balanced cabinet that would represent the various party factions, and he asked the members of the Monroe cabinet to remain in place for his own administration. Samuel L. Southard of New Jersey stayed on as Secretary of the Navy, William Wirt kept his post of Attorney General, and John McLean of Ohio continued to serve as the Postmaster General, an important position that was not part of the cabinet. Adams's first choices for Secretary of War and Secretary of the Treasury were Andrew Jackson and William Crawford, but each declined to serve in the administration. Adams instead selected James Barbour of Virginia, a prominent supporter of Crawford, to lead the War Department. Leadership of the Treasury Department went to Richard Rush of Pennsylvania, who would become a prominent advocate of internal improvements and protective tariffs within the administration. Adams chose Henry Clay as Secretary of State, angering those who believed that Clay had offered his support in the 1824 election for the most prestigious position in the cabinet. Though Clay would later regret accepting the position since it reinforced the "Corrupt Bargain" accusation, Clay's strength in the West and interest in foreign policy made him a natural choice for the top cabinet position.
In his 1825 annual message to Congress, Adams presented a comprehensive and ambitious agenda. He called for major investments in internal improvements as well as the creation of a national university, a naval academy, and a national astronomical observatory. Noting the healthy status of the treasury and the possibility for more revenue via land sales, Adams argued for the completion of several projects that were in various stages of construction or planning, including a road from Washington to New Orleans. He also proposed the establishment of a Department of the Interior as a new cabinet-level department that would preside over these internal improvements. Adams hoped to fund these measures primarily through Western land sales, rather than increased taxes or public debt. The domestic agenda of Adams and Clay, which would come to be known as the American System, was designed to unite disparate regional interests in the promotion of a thriving national economy.
Adams's programs faced opposition from various quarters. Many disagreed with his broad interpretation of the constitution and preferred that power be concentrated in state governments rather than the federal government. Others disliked interference from any level of government and were opposed to central planning. Some in the South feared that Adams was secretly an abolitionist and that he sought to suborn the states to the federal government. Most of the president's proposals were defeated in Congress. Adams's ideas for a national university, national observatory, and the establishment of a uniform system of weights and measures never received congressional votes. His proposal for the creation of a naval academy won the approval of the Senate, but was defeated in the House; opponents objected to the naval academy's cost and worried that the establishment of such an institution would "produce degeneracy and corruption of the public morality." Adams's proposal to establish a national bankruptcy law was also defeated.
Unlike other aspects of his domestic agenda, Adams won congressional approval for several ambitious infrastructure projects. Between 1824 and 1828, the United States Army Corps of Engineers conducted surveys for a bevy of potential roads, canals, railroads, and improvements in river navigation. Adams presided over major repairs and further construction on the National Road, and shortly after he left office the National Road extended from Cumberland, Maryland to Zanesville, Ohio. The Adams administration also saw the beginning of the Chesapeake and Ohio Canal; the construction of the Chesapeake and Delaware Canal and the Louisville and Portland Canal around the falls of the Ohio; the connection of the Great Lakes to the Ohio River system in Ohio and Indiana; and the enlargement and rebuilding of the Dismal Swamp Canal in North Carolina. Additionally, the first passenger railroad in the United States, the Baltimore and Ohio Railroad, was constructed during Adams's presidency. Though many of these projects were undertaken by private actors, the government often provided money or land to aid the completion of such projects.
In the immediate aftermath of the 1825 contingent election, Jackson was gracious to Adams. Nevertheless, Adams's appointment of Clay rankled Jackson, who received a flood of letters encouraging him to run. In 1825, Jackson accepted the presidential nomination of the Tennessee legislature for the 1828 election. Though he had been close with Adams during Monroe's presidency, Vice President Calhoun was also politically alienated from the president by the appointment of Clay, since that appointment established Clay as the natural heir to Adams. Adams's ambitious December 1825 annual message to Congress further galvanized the opposition, with important figures such as Francis Preston Blair of Kentucky and Thomas Hart Benton of Missouri breaking with the Adams administration. By the end of the first session of the 19th United States Congress, an anti-Adams congressional coalition consisting of Jacksonians (led by Benton and Hugh Lawson White), Crawfordites (led by Martin Van Buren and Nathaniel Macon), and Calhounites (led by Robert Y. Hayne and George McDuffie) had emerged. Aside from Clay, Adams lacked strong supporters outside of the North, and Edward Everett, John Taylor, and Daniel Webster served as his strongest advocates in Congress. Supporters of Adams began calling themselves National Republicans, while supporters of Jackson began calling themselves Democrats. In the press, they were often described as "Adams Men" and "Jackson Men."
In the 1826 elections, Adams's opponents picked up seats throughout the country, as allies of Adams failed to coordinate among themselves. Pro-Adams Speaker of the House John Taylor was replaced by Andrew Stevenson, a Jackson supporter; as Adams himself noted, the U.S. had never seen a Congress that was firmly under the control of political opponents of the president. After the elections, Van Buren and Calhoun agreed to throw their support behind Jackson in 1828, with Van Buren bringing along many of Crawford's supporters. Though Jackson did not articulate a detailed political platform in the same way that Adams did, his coalition was united in opposition to Adams's reliance on government planning. Adams, meanwhile, clung to the hope of a non-partisan nation, and he refused to make full use of the power of patronage to build up his own party structure.
During the first half of his administration, Adams avoided taking a strong stand on tariffs, partly because he wanted to avoid alienating his allies in the South and New England. After Jacksonians took power in 1827, they devised a tariff bill designed to appeal to Western states while instituting high rates on imported materials important to the economy of New England. It is unclear whether Van Buren, who shepherded the bill through Congress, meant for the bill to pass, or if he had deliberately designed it to force Adams and his allies to oppose it. Regardless, Adams signed the Tariff of 1828, which opponents dubbed the "Tariff of Abominations". Adams was denounced in the South, and he received little credit for the tariff in the North.
Adams sought the gradual assimilation of Native Americans via consensual agreements, a priority shared by few whites in the 1820s. Yet Adams was also deeply committed to the westward expansion of the United States. Settlers on the frontier, constantly seeking to move westward, cried for a more expansionist policy that disregarded the concerns of Native Americans. Early in his term, Adams suspended the Treaty of Indian Springs after learning that the Governor of Georgia, George Troup, had forced the treaty on the Muscogee. Adams signed a new treaty with the Muscogee in January 1826 that allowed the Muscogee to stay but ceded most of their land to Georgia. Troup refused to accept its terms, and authorized all Georgian citizens to evict the Muscogee. A showdown between Georgia and the federal government was only averted after the Muscogee agreed to a third treaty. Though many saw Troup as unreasonable in his dealings with the federal government and the Native Americans, the administration's handling of the incident alienated those in the Deep South who favored immediate Indian removal.
One of the major foreign policy goals of the Adams administration was the expansion of American trade. His administration reached reciprocity treaties with a number of nations, including Denmark, the Hanseatic League, the Scandinavian countries, Prussia, and the Federal Republic of Central America. The administration also reached commercial agreements with the Kingdom of Hawaii and the Kingdom of Tahiti. Agreements with Denmark and Sweden opened their colonies to American trade, but Adams was especially focused on opening trade with the British West Indies. The United States had reached a commercial agreement with Britain in 1815, but that agreement excluded British possessions in the Western Hemisphere. In response to U.S. pressure, the British had begun to allow a limited amount of American imports to the West Indies in 1823, but U.S. leaders continued to seek an end to Britain's protective Imperial Preference system. In 1825, Britain banned U.S. trade with the British West Indies, dealing a blow to Adams's prestige. The Adams administration negotiated extensively with the British to lift this ban, but the two sides were unable to come to an agreement. Despite the loss of trade with the British West Indies, the other commercial agreements secured by Adams helped expand the overall volume of U.S. exports.
Aside from an unsuccessful attempt to purchase Texas from Mexico, President Adams did not seek to expand into Latin America or North America. Adams and Clay instead sought engagement with Latin America to prevent it from falling under the British Empire's economic influence. As part of this goal, the administration favored sending a U.S. delegation to the Congress of Panama, an 1826 conference of New World republics organized by Simón Bolívar. Clay and Adams hoped that the conference would inaugurate a "Good Neighborhood Policy" among the independent states of the Americas. However, the funding for a delegation and the confirmation of delegation nominees became entangled in a political battle over Adams's domestic policies, with opponents such as Van Buren impeding the process of confirming a delegation. Van Buren saw the Panama Congress as an unwelcome deviation from the more isolationist foreign policy established by President Washington, while many Southerners opposed involvement with any conference attended by delegates of Haiti, a republic that had been established through a slave revolt. Though the U.S. delegation finally won confirmation from the Senate, it never reached the Congress of Panama due to the Senate's delay.
The Jacksonians formed an effective party apparatus that adopted many modern campaign techniques. Rather than focusing on issues, they emphasized Jackson's popularity and the supposed corruption of Adams and the federal government. Jackson himself described the campaign as a "struggle between the virtue of the people and executive patronage." Adams, meanwhile, refused to adapt to the new reality of political campaigns, and he avoided public functions and refused to invest in pro-administration tools such as newspapers. In early 1827, Jackson was publicly accused of having encouraged his wife, Rachel, to desert her first husband. In response, followers of Jackson attacked Adams's personal life, and the campaign turned increasingly nasty. The Jacksonian press portrayed Adams as an out-of-touch elitist, while pro-Adams newspapers attacked Jackson's past involvement in various duels and scuffles, portraying him as too emotional and impetuous for the presidency. Though Adams and Clay had hoped that the campaign would focus on the American System, it was instead dominated by the personalities of Jackson and Adams.
Vice President Calhoun joined Jackson's ticket, while Adams turned to Secretary of the Treasury Richard Rush as his running mate. The 1828 election thus marked the first time in U.S. history that a presidential ticket composed of two Northerners faced off against a presidential ticket composed of two Southerners. In the election, Jackson won 178 of the 261 electoral votes and just under 56 percent of the popular vote. Jackson won 50.3 percent of the popular vote in the free states, but 72.6 percent of the vote in the slave states. No future presidential candidate would match Jackson's proportion of the popular vote until Theodore Roosevelt's 1904 campaign, while Adams's loss made him the second one-term president, after his own father. By 1828, only two states did not hold a popular vote for president, and the number of votes in the 1828 election was triple that in the 1824 election. This increase in votes was due not only to the recent wave of democratization, but also because of increased interest in elections and the growing ability of the parties to mobilize voters.
Adams considered permanently retiring from public life after his 1828 defeat, and he was deeply hurt by the suicide of his son, George Washington Adams, in 1829. He was appalled by many of the Jackson administration's actions, including its embrace of the spoils system. Though they had once maintained a cordial relationship, Adams and Jackson each came to loathe the other in the decades after the 1828 election. Adams grew bored of his retirement and still felt that his career was unfinished, so he ran for and won a seat in the United States House of Representatives in the 1830 elections. His election went against the generally held opinion, shared by his own wife and youngest son, that former presidents should not run for public office. Nonetheless, he would win election to nine terms, serving from 1831 until his death in 1848. Adams and Andrew Johnson are the only former presidents to serve in Congress. After winning election, Adams became affiliated with the Anti-Masonic Party, partly because the National Republican Party's leadership in Massachusetts included many of the former Federalists that Adams had clashed with earlier in his career. The Anti-Masonic Party originated as a movement against Freemasonry, but it developed into the country's first third party and embraced a general program of anti-elitism.
Adams expected a light workload when he returned to Washington at 64 years old, but Speaker Andrew Stevenson selected Adams as chairman of the Committee on Commerce and Manufactures. Though he identified as a member of the Anti-Masonic Party, Congress was broadly polarized into allies of Jackson and opponents of Jackson, and Adams generally aligned with the latter camp. Stevenson, an ally of Jackson, expected that the committee chairmanship would keep Adams busy defending the tariff even while the Jacksonian majority on the committee would prevent Adams from accruing any real power. As chairman of the committee charged with writing tariff laws, Adams became an important player in the Nullification Crisis, which stemmed largely from Southern objections to the high rates imposed by the Tariff of 1828. South Carolina leaders argued that states could nullify federal laws, and they announced that the federal government would be barred from enforcing the tariff in their state. Adams helped pass the Tariff of 1832, which lowered rates, but not enough to mollify the South Carolina nullifiers. The crisis was ended when Clay and Calhoun agreed to another tariff bill, the Tariff of 1833, which further lowered tariff rates. Adams was appalled by the Nullification Crisis's outcome, as he felt that the Southern states had unfairly benefited from challenging federal law. After the crisis, Adams increasingly came to believe that Southerners exercised an undue degree of influence over the federal government, largely through their control of Jackson's Democratic Party.
The Anti-Masonic Party nominated Adams in the 1833 Massachusetts gubernatorial election in a four-way race between Adams, the National Republican candidate, the Democratic candidate, and a candidate of the Working Men's Party. The National Republican candidate, John Davis, won 40% of the vote, while Adams finished in second place with 29%. Because no candidate won a majority of the vote, the state legislature decided the election. Rather than seek election by the legislature, Adams withdrew his name from contention, and the legislature selected Davis. Adams was nearly elected to the Senate in 1835 by a coalition of Anti-Masons and National Republicans, but his support for Jackson in a minor foreign policy matter annoyed National Republican leaders enough that they dropped their support for his candidacy. After 1835, Adams never again sought higher office, focusing instead on his service in the House of Representatives.
In the mid-1830s, the Anti-Masonic Party, the National Republicans, and other groups opposed to Jackson coalesced into the Whig Party. In the 1836 presidential election Democrats put forward Martin Van Buren, while the Whigs fielded multiple presidential candidates. Because he disdained all the major party contenders for president, Adams did not take part in the campaign; Van Buren won the election. Nonetheless, Adams became aligned with the Whig Party in Congress. Adams generally opposed the initiatives of President Van Buren, long a political adversary, though they maintained a cordial public relationship.
The Republic of Texas won its independence from Mexico in the Texas Revolution of 1835–1836. Texas had largely been settled by Americans from the Southern United States, and many of those settlers owned slaves despite an 1829 Mexican law that abolished slavery. Many in the United States and Texas thus favored the admission of Texas into the union as a slave state. Adams considered the issue of Texas to be "a question of far deeper root and more overshadowing branches than any or all others that agitate the country", and he emerged as one of the leading congressional opponents of annexation. Adams had sought to acquire Texas when he served as secretary of state, but he now argued that, because Mexico had since abolished slavery, the acquisition of Texas would transform the region from free territory into a slave state. He also feared that the annexation of Texas would encourage Southern expansionists to pursue other potential slave states, including Cuba. Adams's strong stance may have played a role in discouraging Van Buren from pushing for the annexation of Texas during his presidency.
Whig nominee William Henry Harrison defeated Van Buren in the 1840 presidential election, and the Whigs gained control of both houses of Congress for the first time. Despite his low regard for Harrison as a person, Adams was enthusiastic about the new Whig administration and the end of the long-standing Democratic dominance of the federal government. However, Harrison died in April 1841 and was succeeded by Vice President John Tyler, a Southerner who, unlike Adams, Henry Clay, and many other prominent Whigs, did not embrace the American System. Adams saw Tyler as an agent of "the slave-driving, Virginia, Jeffersonian school, principled against all improvement." After Tyler vetoed a bill to restore the national bank, Whig congressmen expelled Tyler from the party. Adams was appointed chairman of a special committee that explored impeaching Tyler, and he presented a scathing report on Tyler that argued that his actions warranted impeachment. The impeachment process did not move forward, though, in large part because the Whigs did not believe that the Senate would vote to remove Tyler from office.
Tyler made the annexation of Texas the main foreign policy priority of the later stages of his administration. He attempted to win ratification of an annexation treaty in 1844, but, to Adams's surprise and relief, the treaty was rejected by the Senate. The annexation of Texas became the central issue of the 1844 presidential election, and Southerners blocked the nomination of Van Buren at the 1844 Democratic National Convention due to the latter's opposition to annexation; the party instead nominated James K. Polk, an acolyte of Andrew Jackson. Though he once again did not take part in the campaigning, Adams was deeply disappointed that Polk defeated his old ally, Henry Clay, in the 1844 election. He attributed the outcome of the election partly to the Liberty Party, a small, abolitionist third party that may have siphoned votes from Clay in the crucial state of New York. After the election, Tyler, whose term would end in March 1845, once again submitted annexation to Congress, this time in the form of a joint resolution requiring only simple majorities in both houses. Adams strongly attacked the measure, arguing that the annexation of Texas would involve the United States in "a war for slavery." Despite Adams's opposition, both houses of Congress approved the resolution, with most Democrats voting for annexation and most Whigs voting against it. Texas thus joined the United States as a slave state in 1845.
Adams had served with James K. Polk in the House of Representatives, and Adams loathed the new president, seeing him as another expansionist, pro-slavery Southern Democrat. Adams favored the annexation of the entirety of Oregon Country, a disputed region occupied by both the United States and Britain, and was disappointed when President Polk signed the Oregon Treaty, which divided the land between the two claimants at the 49th parallel. Polk's expansionist aims were centered instead on the Mexican province of Alta California, and he attempted to buy the province from Mexico. The Mexican government refused to sell California or recognize the independence and subsequent American annexation of Texas. Polk deployed a military detachment led by General Zachary Taylor to back up his assertion that the Rio Grande constituted the southern border of both Texas and the United States. After Taylor's forces clashed with Mexican soldiers north of the Rio Grande, Polk asked for a declaration of war in May 1846, asserting that Mexico had invaded American territory. Though some Whigs questioned whether Mexico had started an aggressive war, both houses of Congress declared war, with the House voting 174-to-14 to approve the declaration. Adams, who believed that Polk was seeking to wage an offensive war to expand slavery, was one of the 14 dissenting votes. After the start of the war, he supported the Wilmot Proviso, an unsuccessful legislative proposal that would have banned slavery in any territory ceded by Mexico. After 1846, ill health increasingly affected Adams, but he continued to oppose the Mexican–American War until his death in 1848.
In the 1830s, slavery emerged as an increasingly polarizing issue in the United States. A longtime opponent of slavery, Adams used his new role in Congress to fight it, and he became the most prominent national leader opposing slavery. After one of his reelection victories, he said that he must "bring about a day prophesied when slavery and war shall be banished from the face of the earth."
In 1836, partially in response to Adams's consistent presentation of citizen petitions requesting the abolition of slavery in the District of Columbia, the House of Representatives imposed a "gag rule" that immediately tabled any petitions about slavery. The rule was favored by Democrats and Southern Whigs but was largely opposed by Northern Whigs like Adams. In late 1836, Adams began a campaign to ridicule slave owners and the gag rule. He frequently attempted to present anti-slavery petitions, often in ways that provoked strong reactions from Southern representatives. Though the gag rule remained in place, the discussion ignited by his actions and the attempts of others to quiet him raised questions of the right to petition, the right to legislative debate, and the morality of slavery. Adams fought actively against the gag rule for another seven years, eventually moving the resolution that led to its repeal in 1844.
In 1841, at the request of Lewis Tappan and Ellis Gray Loring, Adams joined the case of "United States v. The Amistad". Adams went before the Supreme Court on behalf of African slaves who had revolted and seized the Spanish ship "Amistad". Adams appeared on February 24, 1841, and spoke for four hours. His argument succeeded: the Court ruled that the Africans were free and they returned to their homes.
Adams also became a leading force for the promotion of science. In 1829, British scientist James Smithson died, leaving his fortune for the "increase and diffusion of knowledge": his will stated that, should his nephew, Henry James Hungerford, die without heirs, the Smithson estate would go to the government of the United States to create an "Establishment for the increase & diffusion of Knowledge among men." After the nephew died without heirs in 1835, President Andrew Jackson informed Congress of the bequest, which amounted to about US$500,000 ($75,000,000 in 2008 U.S. dollars after inflation). Adams saw the bequest as a chance to realize his dream of building a national institution of science and learning, and he became Congress's primary supporter of the future Smithsonian Institution.
The money was invested in shaky state bonds, which quickly defaulted. After heated debate in Congress, Adams successfully argued for restoring the lost funds with interest. Though Congress wanted to use the money for other purposes, Adams persuaded it to preserve the money for an institution of science and learning. Congress also debated whether the federal government had the authority to accept the gift, though with Adams leading the initiative, Congress decided to accept the legacy bequeathed to the nation and pledged the faith of the United States to the charitable trust on July 1, 1836. Partly due to Adams's efforts, Congress voted to establish the Smithsonian Institution in 1846. A nonpolitical board of regents was established to lead the institution, which included a museum, art gallery, library, and laboratory.
In mid-November 1846, the 78-year-old former president suffered a stroke that left him partially paralyzed. After a few months of rest, he made a full recovery and resumed his duties in Congress. When Adams entered the House chamber on February 13, 1848, everyone "stood up and applauded."
On February 21, 1848, the House of Representatives was discussing the matter of honoring U.S. Army officers who served in the Mexican–American War. Adams had been a vehement critic of the war, and as congressmen rose to say, "Aye!" in favor of the measure, he instead yelled, "No!" He then rose to answer a question put forth by Speaker of the House Robert Charles Winthrop. Immediately thereafter, Adams collapsed, having suffered a massive cerebral hemorrhage. Two days later, on February 23, he died at 7:20 p.m. with his wife at his side in the Speaker's Room inside the Capitol Building in Washington, D.C.; his only living child, Charles Francis, did not arrive in time to see his father alive. His last words were "This is the last of earth. I am content." Among those present at his death was Abraham Lincoln, then a freshman representative from Illinois.
His original interment was temporary, in the public vault at the Congressional Cemetery in Washington, D.C. Later, he was interred in the family burial ground in Quincy, Massachusetts, across from the First Parish Church, called Hancock Cemetery. After Louisa's death in 1852, his son had his parents reinterred in the expanded family crypt in the United First Parish Church across the street, next to John and Abigail. Both tombs are viewable by the public. Adams's original tomb at Hancock Cemetery is still there and marked simply "J.Q. Adams".
Adams and Louisa had three sons and a daughter. Their daughter, Louisa, was born in 1811 but died in 1812. They named their first son George Washington Adams (1801–1829) after the first president. This decision upset Adams's mother, and, by her account, his father as well. Both George and their second son, John (1803–1834), led troubled lives and died in early adulthood. George, who had long suffered from alcoholism, died in 1829 after going overboard on a steamboat; it is not clear whether he fell or purposely jumped from the boat. John, who ran an unprofitable flour and grist mill owned by his father, died of an unknown illness in 1834. Adams's youngest son, Charles Francis Adams Sr., was an important leader of the "Conscience Whigs", a Northern, anti-slavery faction of the Whig Party. Charles served as the Free Soil Party's vice presidential candidate in the 1848 presidential election and later became a prominent member of the Republican Party.
Adams's personality and political beliefs were much like his father's. He always preferred secluded reading to social engagements, and several times had to be pressured by others to remain in public service. Historian Paul Nagel states that, like Abraham Lincoln after him, Adams often suffered from depression, for which he sought some form of treatment in his early years. Adams thought his depression was due to the high expectations demanded of him by his father and mother. Throughout his life he felt inadequate and socially awkward because of his depression, and was constantly bothered by his physical appearance. He was closer to his father, with whom he spent much of his early life abroad, than he was to his mother. During his youth, while the American Revolution was underway, his mother told her children what their father was doing and what he was risking, and because of this Adams grew to greatly respect his father. His relationship with his mother was rocky; she had high expectations of him and was afraid her children might die as alcoholics, like her brother. His biographer, Nagel, concludes that his mother's disapproval of Louisa Johnson motivated him to marry Johnson in 1797, despite Adams's reservations that Johnson, like his mother, had a strong personality.
Though Adams wore a powdered wig in his youth, he abandoned this fashion and became the first president to adopt a short haircut instead of long hair tied in a queue and to regularly wear long trousers instead of knee breeches. It has been suggested that John Quincy Adams had the highest I.Q. of any U.S. president. Dean Simonton, a professor of psychology at UC Davis, estimated his I.Q. score at 165.
Adams is widely regarded as one of the most effective diplomats and secretaries of state in American history, but scholars generally rank him as an average president. Adams is remembered as a man eminently qualified for the presidency, yet hopelessly weakened in his presidential leadership potential because of the 1824 election. Most importantly, Adams is remembered as a poor politician in an era when politics had begun to matter more. He spoke of trying to serve as a man above the "baneful weed of party strife" at the precise moment in history when the Second Party System was emerging with nearly revolutionary force. Biographer and historian William J. Cooper notes that Adams "does not loom large in the American imagination", but that he has received more public attention since the late 20th century due to his anti-slavery stances. Cooper writes that Adams was the first "major public figure" to publicly question whether the United States could remain united so long as the institution of slavery persisted. Historian Daniel Walker Howe writes that Adams's "intellectual ability and courage were above reproach, and his wisdom in perceiving the national interest has stood the test of time." Historians have often included Adams among the leading conservatives of his day. Russell Kirk, however, sees Adams as a flawed conservative who was imprudent in opposing slavery.
John Quincy Adams Birthplace is now part of Adams National Historical Park and open to the public. Adams House, one of twelve undergraduate residential Houses at Harvard University, is named for John Adams, John Quincy Adams, and other members of the Adams family associated with Harvard. In 1870, Charles Francis built the first presidential library in the United States, to honor his father. The Stone Library includes over 14,000 books written in twelve languages. The library is located in the "Old House" at Adams National Historical Park in Quincy, Massachusetts.
Adams's middle name of Quincy has been used by several locations in the United States, including the town of Quincy, Illinois. Adams County, Illinois and Adams County, Indiana are also named after Adams. Adams County, Iowa and Adams County, Wisconsin were each named for either John Adams or John Quincy Adams.
Some sources contend that in 1843 Adams sat for the earliest confirmed photograph of a U.S. president, although others maintain that William Henry Harrison had posed even earlier for his portrait, in 1841. The original daguerreotype is in the collection of the National Portrait Gallery of the Smithsonian Institution.
Adams occasionally is featured in the mass media. In the PBS miniseries "The Adams Chronicles" (1976), he was portrayed by David Birney, William Daniels, Marcel Trenchard, Steven Grover and Mark Winkworth. He was also portrayed by Anthony Hopkins in the 1997 film "Amistad", and again by Ebon Moss-Bachrach and Steven Hinkle in the 2008 HBO television miniseries "John Adams"; the HBO series received criticism for needless historical and temporal distortions in its portrayal.
|
https://en.wikipedia.org/wiki?curid=15654
|
Jurassic
The Jurassic (from the Jura Mountains) is a geologic period and system that spanned 56 million years from the end of the Triassic Period, about 201 million years ago (Mya), to the beginning of the Cretaceous Period, about 145 Mya. The Jurassic constitutes the middle period of the Mesozoic Era, also known as the Age of Reptiles. The start of the period was marked by the major Triassic–Jurassic extinction event. Two other extinction events occurred during the period: the Pliensbachian-Toarcian extinction in the Early Jurassic, and the Tithonian event at the end; neither event ranks among the "Big Five" mass extinctions, however.
The Jurassic period is divided into three epochs: Early, Middle, and Late. Similarly, in stratigraphy, the Jurassic is divided into the Lower Jurassic, Middle Jurassic, and Upper Jurassic series of rock formations.
The Jurassic is named after the Jura Mountains in the European Alps, where limestone strata from the period were first identified.
By the beginning of the Jurassic, the supercontinent Pangaea had begun rifting into two landmasses: Laurasia to the north, and Gondwana to the south. This created more coastlines and shifted the continental climate from dry to humid, and many of the arid deserts of the Triassic were replaced by lush rainforests.
On land, the fauna transitioned from the Triassic fauna, dominated by both dinosauromorph and crocodylomorph archosaurs, to one dominated by dinosaurs alone. The first birds also appeared during the Jurassic, having evolved from a branch of theropod dinosaurs. Other major events include the appearance of the earliest lizards, and the evolution of therian mammals, including primitive placentals. Crocodilians made the transition from a terrestrial to an aquatic mode of life. The oceans were inhabited by marine reptiles such as ichthyosaurs and plesiosaurs, while pterosaurs were the dominant flying vertebrates.
The name "Jura" is derived from the Celtic root "*jor" via Gaulish "*iuris" "wooded mountain", which, borrowed into Latin as a place name, evolved into "Juria" and finally "Jura".
The chronostratigraphic term "Jurassic" is directly linked to the Jura Mountains, a mountain range mainly following the course of the France–Switzerland border. During a tour of the region in 1795, Alexander von Humboldt recognized the mainly limestone-dominated Jura Mountains as a separate formation that had not been included in the established stratigraphic system defined by Abraham Gottlob Werner, and he named it "Jura-Kalkstein" ('Jura limestone') in 1799.
Thirty years later, in 1829, the French naturalist Alexandre Brongniart published a survey on the different terrains that constitute the crust of the Earth. In this book, Brongniart referred to the terrains of the Jura Mountains as "terrains jurassiques", thus coining and publishing the term for the first time.
The Jurassic period is divided into three epochs: Early, Middle, and Late. Similarly, in stratigraphy, the Jurassic is divided into the Lower Jurassic, Middle Jurassic, and Upper Jurassic series of rock formations, also known as "Lias", "Dogger" and "Malm" in Europe. The separation of the term Jurassic into three sections originated with Leopold von Buch. The faunal stages from youngest to oldest are: the Tithonian, Kimmeridgian, and Oxfordian (Late Jurassic); the Callovian, Bathonian, Bajocian, and Aalenian (Middle Jurassic); and the Toarcian, Pliensbachian, Sinemurian, and Hettangian (Early Jurassic).
During the early Jurassic period, the supercontinent Pangaea broke up into the northern supercontinent Laurasia and the southern supercontinent Gondwana; the Gulf of Mexico opened in the new rift between North America and what is now Mexico's Yucatán Peninsula. The Jurassic North Atlantic Ocean was relatively narrow, while the South Atlantic did not open until the following Cretaceous period, when Gondwana itself rifted apart. The Tethys Sea closed, and the Neotethys basin appeared. Climates were warm, with no evidence of glaciation. As in the Triassic, there was apparently no land over either pole, and no extensive ice caps existed.
The Jurassic geological record is good in western Europe, where extensive marine sequences indicate a time when much of that future landmass was submerged under shallow tropical seas; famous locales include the Jurassic Coast World Heritage Site in southern England and the renowned late Jurassic "lagerstätten" of Holzmaden and Solnhofen in Germany. In contrast, the North American Jurassic record is the poorest of the Mesozoic, with few outcrops at the surface. Though the epicontinental Sundance Sea left marine deposits in parts of the northern plains of the United States and Canada during the late Jurassic, most exposed sediments from this period are continental, such as the alluvial deposits of the Morrison Formation.
The Jurassic was a time of calcite sea geochemistry in which low-magnesium calcite was the primary inorganic marine precipitate of calcium carbonate. Carbonate hardgrounds were thus very common, along with calcitic ooids, calcitic cements, and invertebrate faunas with dominantly calcitic skeletons.
The first of several massive batholiths were emplaced in the North American Cordillera beginning in the mid-Jurassic, marking the Nevadan orogeny. Important Jurassic exposures are also found in Russia, India, South America, Japan, Australasia and the United Kingdom.
In Africa, Early Jurassic strata are distributed in a similar fashion to Late Triassic beds, with more common outcrops in the south and less common fossil beds to the north, which are dominated by tracks. As the Jurassic proceeded, larger and more iconic groups of dinosaurs like sauropods and ornithopods proliferated in Africa. Middle Jurassic strata are neither well represented nor well studied in Africa. Late Jurassic strata are also poorly represented apart from the spectacular Tendaguru fauna in Tanzania. The Late Jurassic life of Tendaguru is very similar to that found in western North America's Morrison Formation.
During the Jurassic period, the primary vertebrates living in the sea were fish and marine reptiles. The latter include ichthyosaurs, which were at the peak of their diversity, plesiosaurs, pliosaurs, and marine crocodiles of the families Teleosauridae and Metriorhynchidae. Numerous turtles could be found in lakes and rivers.
In the invertebrate world, several new groups appeared, including rudists (a reef-forming variety of bivalves) and belemnites. Calcareous sabellids ("Glomerula") appeared in the Early Jurassic. The Jurassic also had diverse encrusting and boring (sclerobiont) communities, and it saw a significant rise in the bioerosion of carbonate shells and hardgrounds. Especially common is the ichnogenus (trace fossil) "Gastrochaenolites".
During the Jurassic period, about four or five of the twelve clades of planktonic organisms that exist in the fossil record either experienced a massive evolutionary radiation or appeared for the first time.
On land, various archosaurian reptiles remained dominant. The Jurassic was a golden age for the large herbivorous dinosaurs known as the sauropods—"Camarasaurus", "Apatosaurus", "Diplodocus", "Brachiosaurus", and many others—that roamed the land late in the period; their foraging grounds were either the prairies of ferns, palm-like cycads and bennettitales, or the higher coniferous growth, according to their adaptations. The smaller ornithischian herbivores, such as stegosaurs and small ornithopods, were less prominent but played important roles. They were preyed upon by large theropods such as "Ceratosaurus", "Megalosaurus", "Torvosaurus" and "Allosaurus", all of which belonged to the 'lizard-hipped' or saurischian branch of the dinosaurs.
During the Late Jurassic, the first avialans, like "Archaeopteryx", evolved from small coelurosaurian dinosaurs. In the air, pterosaurs were common; they ruled the skies, filling many ecological roles now taken by birds, and may have already produced some of the largest flying animals of all time. Within the undergrowth were various types of early mammals, as well as tritylodonts, lizard-like sphenodonts, and early lissamphibians. The rest of the Lissamphibia evolved in this period, introducing the first salamanders and caecilians.
The arid, continental conditions characteristic of the Triassic steadily eased during the Jurassic period, especially at higher latitudes; the warm, humid climate allowed lush jungles to cover much of the landscape. Gymnosperms were relatively diverse during the Jurassic period. Conifers in particular dominated the flora, as they had during the Triassic; they were the most diverse group and constituted the majority of large trees.
Extant conifer families that flourished during the Jurassic included the Araucariaceae, Cephalotaxaceae, Pinaceae, Podocarpaceae, Taxaceae and Taxodiaceae. The extinct Mesozoic conifer family Cheirolepidiaceae dominated low latitude vegetation, as did the shrubby Bennettitales. Cycads, similar to palm trees, were also common, as were ginkgos and dicksoniaceous tree ferns in the forest. Smaller ferns were probably the dominant undergrowth. Caytoniaceous seed ferns were another group of important plants during this time and are thought to have been shrub to small-tree sized. Ginkgo plants were particularly common in the mid- to high northern latitudes. In the Southern Hemisphere, podocarps were especially successful, while ginkgos and Czekanowskiales were rare.
In the oceans, modern coralline algae appeared for the first time. However, they would be caught up in another major extinction, the one at the end of the following Cretaceous Period.
Since the early 1990s, the term "Jurassic" has been popularised by the "Jurassic Park" franchise, which began with Michael Crichton's 1990 novel of the same title and its 1993 film adaptation.
|
https://en.wikipedia.org/wiki?curid=15655
|
John Wyndham
John Wyndham Parkes Lucas Beynon Harris (; 10 July 1903 – 11 March 1969) was an English science fiction writer best known for his works published under the pen name John Wyndham, although he also used other combinations of his names, such as John Beynon and Lucas Parkes. Some of his works were set in post-apocalyptic landscapes. His best known works include "The Day of the Triffids" (1951) and "The Midwich Cuckoos" (1957), the latter filmed twice as "Village of the Damned".
Wyndham was born in the village of Dorridge near Knowle, Warwickshire (now West Midlands), England, the son of George Beynon Harris, a barrister, and Gertrude Parkes, the daughter of a Birmingham ironmaster.
His early childhood was spent in Edgbaston in Birmingham, but when he was 8 years old his parents separated. His father then sued the Parkes family for "the custody, control and society" of his wife and family in an unusual and high-profile court case, which he lost. Following this embarrassment, Gertrude left Birmingham to live in a series of boarding houses and spa hotels.
He and his younger brother, the writer Vivian Beynon Harris, spent the rest of their childhood at a number of English preparatory and public schools, including Blundell's School in Tiverton, Devon, during World War I. His longest and final stay was at Bedales School near Petersfield in Hampshire (1918–21), which he left at the age of 18, and where he blossomed and was happy.
After leaving school, Wyndham tried several careers, including farming, law, commercial art and advertising, but mostly relied on an allowance from his family. He eventually turned to writing for money in 1925. In 1927, he published the detective novel "The Curse of the Burdens" as John B. Harris, and by 1931, he was selling short stories and serial fiction to American science fiction magazines. His debut short story, 1931's "Worlds To Barter", appeared under the byline John B. Harris; subsequent stories through about 1935 were credited to John Beynon Harris. By mid-1935, the Harris surname was dropped, and his work was signed as by John Beynon. Three novels by Beynon were published in 1935/36, two of them works of science fiction, the other being a detective story. He also used the pen name Wyndham Parkes for one short story in the UK's "Fantasy Magazine" in 1939, as 'John Beynon' had already been credited for another story in the same issue.
During these years, he lived at the Penn Club, London, which had been opened in 1920 by the remaining members of the Friends Ambulance Unit, and which had been partly funded by the Quakers. The intellectual and political mix of pacifists, socialists and communists continued to inform his views on social engineering and feminism.
Whilst there, he met his future wife, Grace Wilson, a teacher. They embarked on a long-term love affair but did not marry, partly because of the marriage bar.
During World War II, Wyndham first served as a censor in the Ministry of Information. His experiences as a firewatcher during the London Blitz, and later as a member of the Home Guard, would be recreated after the war in "The Day of the Triffids."
He then joined the British Army, serving as a corporal cipher operator in the Royal Corps of Signals. He participated in the Normandy landings, going ashore a few days after D-Day.
He was attached to the XXX Corps, which took part in some of the heaviest fighting, including surrounding the trapped German army in the Falaise Pocket.
His wartime letters to his long-time partner Grace Wilson are now held at the University of Liverpool archive. He wrote at length of his struggles with his conscience, his doubts about humanity and his fears of the inevitability of war. He also wrote passionately of his love for her, and of his fear that he would return so tainted by the war that she would not be able to love him.
After the war, Wyndham returned to writing, still using the pen name John Beynon. Inspired by the success of his younger brother, who had four novels published starting in 1948, he altered his writing style and, by 1951, using the John Wyndham pen name for the first time, wrote the novel "The Day of the Triffids". His pre-war writing career was not mentioned in the book's publicity, and people were allowed to assume that it was a first novel from a previously unknown writer.
The book proved to be an enormous success and established Wyndham as an important exponent of science fiction. During his lifetime, he wrote and published six more novels under the name John Wyndham, and used that name professionally from 1951 forward. His 1959 novel "The Outward Urge" was credited to John Wyndham and Lucas Parkes, but "Lucas Parkes" is yet another pseudonym for Wyndham himself. In addition, two collections of stories published in the 1950s came out under Wyndham's name but included several pre-1951 stories originally published as by John Beynon.
In 1963, he married Grace Isobel Wilson, whom he had known for more than 20 years; the couple remained married until he died. He and Grace lived for several years in separate rooms at the Penn Club, London, and later lived near Petersfield, Hampshire, just outside the grounds of Bedales School. In "Trouble with Lichen", Wyndham explores the issues around women being forced by their biology to choose between careers and love.
He died in 1969, aged 65, at his home in Petersfield, survived by his wife and his brother. Subsequently, some of his unsold work was published and his earlier work was re-published. His archive was acquired by Liverpool University.
On 24 May 2015 an alley in Hampstead that appears in "The Day of the Triffids" was formally named Triffid Alley as a memorial to him.
Many of John Wyndham's short stories also appeared under later variant titles or pen names.
John Wyndham's reputation rests mainly on the first four of the novels published in his lifetime under that name. "The Day of the Triffids" remains his best-known work, but some readers consider that "The Chrysalids" was really his best. It is set in the far future of a post-nuclear dystopia, where women's fertility is compromised and they are severely oppressed if they give birth to "mutants". David Mitchell, author of "Cloud Atlas", wrote of it: "One of the most thoughtful post-apocalypse novels ever written. Wyndham was a true English visionary, a William Blake with a science doctorate."
The ideas of "The Chrysalids" are echoed in "The Handmaid's Tale". Its author, Margaret Atwood, has acknowledged Wyndham as an influence, and wrote an introduction to a new edition of "Chocky" in which she said the intelligent alien babies of "The Midwich Cuckoos" entered her dreams.
He also wrote several short stories, ranging from hard science fiction to whimsical fantasy. A few have been filmed: "Consider Her Ways", "Random Quest", "Dumb Martian", "A Long Spoon", "Jizzle" (filmed as "Maria") and "Time to Rest" (filmed as "No Place Like Earth"). There is also a radio version of "Survival".
Brian Aldiss, another British science fiction writer, disparagingly labelled some of them "cosy catastrophes", especially "The Day of the Triffids". This became a cliché about his work, but it has been refuted by many more recent critics. L. J. Hurst pointed out that in "Triffids" the main character witnesses several murders, suicides and misadventures, and is frequently in mortal danger himself. Margaret Atwood wrote: "one might as well call World War II—of which Wyndham was a veteran—a 'cozy' war because not everyone died in it."
Many other writers have acknowledged Wyndham as an influence, including Alex Garland, whose screenplay for "28 Days Later" draws heavily on "The Day of the Triffids".
|
https://en.wikipedia.org/wiki?curid=15656
|
Jerzy Kosiński
Jerzy Kosiński (; June 14, 1933 – May 3, 1991), born Józef Lewinkopf, was a Polish-American novelist and two-time President of the American Chapter of P.E.N., who wrote primarily in English. Born in Poland, he survived World War II and, as a young man, immigrated to the U.S., where he became a citizen.
He was known for various novels, among them "Being There" (1970) and "The Painted Bird" (1965), which were adapted as films in 1979 and 2019 respectively.
Kosiński was born Józef Lewinkopf to Jewish parents in Łódź, Poland. As a child during World War II, he lived in central Poland under a false identity, Jerzy Kosiński, which his father gave to him. Eugeniusz Okoń, a Roman Catholic priest, issued him a forged baptismal certificate, and the Lewinkopf family survived the Holocaust thanks to local villagers who offered assistance to Polish Jews, often at great risk: the penalty for helping Jews in German-occupied Poland was death for all members of the helper's family, and sometimes for the inhabitants of the entire village. Kosiński's father was assisted not only by town leaders and clergymen, but also by individuals such as Marianna Pasiowa, a member of an underground network that helped Jews evade capture. The family lived openly in Dąbrowa Rzeczycka, near Stalowa Wola, and attended church in nearby Wola Rzeczycka, with the support of villagers in Kępa Rzeczycka. For a time, they were sheltered by a Catholic family in Rzeczyca Okrągła. Jerzy even served as an altar boy in the local church.
After the war ended, Kosiński and his parents moved to Jelenia Góra. By age 22, he had earned two graduate degrees, in history and sociology, at the University of Łódź. He then became a teaching assistant at the Polish Academy of Sciences. Kosiński also studied in the Soviet Union, and served as a sharpshooter in the Polish Army.
To migrate to the United States in 1957, he created a fake foundation, which supposedly sponsored him. He later claimed he forged the letters from prominent communist authorities guaranteeing his loyal return to Poland, as were then required for anyone leaving the country.
Kosiński first worked at odd jobs to get by, including driving a truck, and he managed to graduate from Columbia University. He became an American citizen in 1965. He also received grants from the Guggenheim Fellowship in 1967 and the Ford Foundation in 1968. In 1970, he won the American Academy of Arts and Letters award for literature. The grants allowed him to write a political non-fiction book that opened new doors of opportunity. He became a lecturer at Yale, Princeton, Davenport, and Wesleyan Universities.
In 1962, Kosiński married an American steel heiress, Mary Hayward Weir. They divorced four years later. Weir died in 1968 from brain cancer, leaving Kosiński out of her will. He would fictionalize his marriage in his novel "Blind Date", speaking of Weir under the pseudonym Mary-Jane Kirkland. Kosiński later, in 1968, married Katherina "Kiki" von Fraunhofer (1933–2007), a marketing consultant and a member of the Bavarian nobility.
Towards the end of his life, Kosiński suffered from multiple illnesses and was also under attack from journalists who accused him of plagiarism. By his late 50s, he was suffering from an irregular heartbeat as well as severe physical and nervous exhaustion.
He died by suicide on May 3, 1991, ingesting a lethal amount of alcohol and drugs and suffocating himself with a plastic bag wrapped around his head. His suicide note read: "I am going to put myself to sleep now for a bit longer than usual. Call it Eternity."
Kosiński's novels have appeared on "The New York Times" Best Seller list, and have been translated into over 30 languages, with total sales estimated at 70 million in 1991.
"The Painted Bird", Kosiński's controversial 1965 novel, is a fictional account that depicts the personal experiences of a boy of unknown religious and ethnic background who wanders around unidentified areas of Eastern Europe during World War II and takes refuge among a series of people, many of whom are brutally cruel and abusive, either to him or to others.
Soon after the book was published in the US, Kosiński was accused by the then-Communist Polish government of being anti-Polish, especially following the regime's 1968 anti-Semitic campaign. The book was banned in Poland from its initial publication until the fall of the Communist government in 1989. When it was finally printed, thousands of Poles in Warsaw lined up for as long as eight hours to purchase copies of the work autographed by Kosiński. Polish literary critic and University of Warsaw professor Paweł Dudziak remarked that "in spite of the unclear role of its author, "The Painted Bird" is an achievement in English literature." He stressed that since the book is a work of fiction and does not document real-world events, accusations of anti-Polish sentiment may result only from taking it too literally.
The book received recommendations from Elie Wiesel, who wrote in "The New York Times Book Review" that it was "one of the best ... Written with deep sincerity and sensitivity." Richard Kluger, reviewing it for "Harper's Magazine", wrote: "Extraordinary ... literally staggering ... one of the most powerful books I have ever read." Jonathan Yardley, reviewing it for "The Miami Herald", wrote: "Of all the remarkable fiction that emerged from World War II, nothing stands higher than Jerzy Kosiński's "The Painted Bird". A magnificent work of art, and a celebration of the individual will. No one who reads it will forget it; no one who reads it will be unmoved by it."
However, reception of the book was not uniformly positive. After it was translated into Polish, it was read by the people with whom the Lewinkopf family had lived during the war. They recognized the names of Jewish children they had sheltered (children who also survived the war), depicted in the novel as victims of abuse at the hands of characters based on their rescuers. According to Iwo Cyprian Pogonowski, "The Painted Bird" was Kosiński's most successful attempt at profiteering from the Holocaust by maintaining an aura of a chronicle. In addition, several claims that Kosiński committed plagiarism in writing "The Painted Bird" were leveled against him (see the criticism discussion below).
"Steps" (1968), a novel comprising scores of loosely connected vignettes, won the U.S. National Book Award for Fiction.
American novelist David Foster Wallace described "Steps" as a "collection of unbelievably creepy little allegorical tableaux done in a terse elegant voice that's like nothing else anywhere ever". Wallace continued in praise: "Only Kafka's fragments get anywhere close to where Kosiński goes in this book, which is better than everything else he ever did combined." Samuel Coale, in a 1974 discussion of Kosiński's fiction, wrote that "the narrator of "Steps", for instance, seems to be nothing more than a disembodied voice howling in some surrealistic wilderness."
One of Kosiński's most significant works is "Being There" (1970), a satirical view of the absurd reality of America's media culture. It is the story of Chance the gardener, a man with few distinctive qualities who emerges from nowhere and suddenly becomes the heir to the throne of a Wall Street tycoon and a presidential policy adviser. His simple and straightforward responses to popular concerns are praised as visionary despite the fact that no one actually understands what he is really saying. Many questions surround his mysterious origins, and filling in the blanks in his background proves impossible.
The novel was made into a 1979 movie directed by Hal Ashby, and starring Peter Sellers, who was nominated for an Academy Award for the role, and Melvyn Douglas, who won the award for Best Supporting Actor. The screenplay was co-authored by award-winning screenwriter Robert C. Jones with Kosiński. The film won the 1981 British Academy of Film and Television Arts (Film) Best Screenplay Award, as well as the 1980 Writers Guild of America Award (Screen) for Best Comedy Adapted from Another Medium. It was also nominated for the 1980 Golden Globe Best Screenplay Award (Motion Picture).
According to Eliot Weinberger, an American writer, essayist, editor and translator, Kosiński was not the author of "The Painted Bird". Weinberger alleged in his 2000 book "Karmic Traces" that Kosiński was not fluent in English at the time of its writing.
In a review of "Jerzy Kosiński: A Biography" by James Park Sloan, D. G. Myers, Associate Professor of English at Texas A&M University wrote "For years Kosinski passed off "The Painted Bird" as the true story of his own experience during the Holocaust. Long before writing it he regaled friends and dinner parties with macabre tales of a childhood spent in hiding among the Polish peasantry. Among those who were fascinated was Dorothy de Santillana, a senior editor at Houghton Mifflin, to whom Kosiński confided that he had a manuscript based on his experiences. Upon accepting the book for publication Santillana said, 'It is my understanding that, fictional as the material may sound, it is straight autobiography'." Although he backed away from this claim, Kosiński never wholly disavowed it.
M. A. Orthofer addressed Weinberger's assertion: "Kosinski was, in many respects, a fake – possibly near as genuine a one as Weinberger could want. (One aspect of the best fakes is the lingering doubt that, possibly, there is some authenticity behind them – as is the case with Kosinski.) Kosinski famously liked to pretend he was someone he wasn't (as do many of the characters in his books), he occasionally published under a pseudonym, and, apparently, he plagiarized and forged left and right."
Kosiński himself addressed these claims in the introduction to the 1976 reissue of "The Painted Bird", saying that "Well-intentioned writers, critics, and readers sought facts to back up their claims that the novel was autobiographical. They wanted to cast me in the role of spokesman for my generation, especially for those who had survived the war; but for me, survival was an individual action that earned the survivor the right to speak only for himself. Facts about my life and my origins, I felt, should not be used to test the book's authenticity, any more than they should be used to encourage readers to read "The Painted Bird". Furthermore, I felt then, as I do now, that fiction and autobiography are very different modes."
In June 1982, a "Village Voice" report by Geoffrey Stokes and Eliot Fremont-Smith accused Kosiński of plagiarism, claiming that much of his work was derivative of prewar books unfamiliar to English-speaking readers, and that "Being There" was a plagiarism of "Kariera Nikodema Dyzmy" — "The Career of Nicodemus Dyzma" — a 1932 Polish bestseller by Tadeusz Dołęga-Mostowicz. They also alleged Kosiński wrote "The Painted Bird" in Polish, and had it secretly translated into English. The report claimed that Kosiński's books had actually been ghost-written by "assistant editors", finding stylistic differences among Kosiński's novels. Kosiński, according to them, had depended upon his freelance editors for "the sort of composition that we usually call writing." American biographer James Sloan notes that New York poet, publisher and translator George Reavey claimed to have written "The Painted Bird" for Kosiński.
The article presented a more realistic picture of Kosiński's life during the Holocaust, a view which was supported by biographers Joanna Siedlecka and Sloan. It asserted that "The Painted Bird", assumed to be semi-autobiographical, was largely a work of fiction, and showed that rather than wandering the Polish countryside, as his fictional character did, Kosiński spent the war years in hiding with Polish Catholics.
Terence Blacker, a successful English publisher (who helped publish Kosiński's books) and author of children's books and mysteries for adults, wrote an article published in "The Independent" in 2002:
The significant point about Jerzy Kosiński was that ... his books ... had a vision and a voice consistent with one another and with the man himself. The problem was perhaps that he was a successful, worldly author who played polo, moved in fashionable circles and even appeared as an actor in Warren Beatty's "Reds". He seemed to have had an adventurous and rather kinky sexuality which, to many, made him all the more suspect. All in all, he was a perfect candidate for the snarling pack of literary hangers-on to turn on. There is something about a storyteller becoming rich and having a reasonably full private life that has a powerful potential to irritate so that, when things go wrong, it causes a very special kind of joy.
Myers, in his review of Sloan's biography, argued that this theory explains much: the reckless driving, the abuse of small dogs, the thirst for fame, the fabrication of personal experience, the secretiveness about how he wrote, the denial of his Jewish identity. 'There was a hollow space at the center of Kosinski that had resulted from denying his past,' Sloan writes, 'and his whole life had become a race to fill in that hollow space before it caused him to implode, collapsing inward upon himself like a burnt-out star.' On this theory, Kosinski emerges as a classic borderline personality, frantically defending himself against ... all-out psychosis.
Journalist John Corry wrote a 6,000-word feature article in "The New York Times" in November 1982, responding and defending Kosiński, which appeared on the front page of the Arts and Leisure section. Among other things, Corry alleged that reports claiming that "Kosinski was a plagiarist in the pay of the C.I.A. were the product of a Polish Communist disinformation campaign."
Kosiński himself responded that he had never maintained that the book was autobiographical, even though years earlier he confided to Houghton Mifflin editor Santillana that his manuscript "draws upon a childhood spent, by the casual chances of war, in the remotest villages of Eastern Europe." In 1988, he wrote "The Hermit of 69th Street", in which he sought to demonstrate the absurdity of investigating prior work by inserting footnotes for practically every term in the book. "Ironically," wrote theatre critic Lucy Komisar, "possibly his only true book ... about a successful author who is shown to be a fraud."
Despite repudiation of the "Village Voice" allegations in detailed articles in "The New York Times", "The Los Angeles Times", and other publications, Kosiński remained tainted. "I think it contributed to his death," said Zbigniew Brzezinski, a friend and fellow Polish emigrant.
Kosiński appeared 12 times on "The Tonight Show Starring Johnny Carson" during 1971–73 and on "The Dick Cavett Show" in 1974, was a guest on the talk radio show of Long John Nebel, posed half-naked for a cover photograph by Annie Leibovitz for "The New York Times Magazine" in 1982, and presented the Oscar for screenwriting in 1982.
He also played the role of Bolshevik revolutionary and Politburo member Grigory Zinoviev in Warren Beatty's film "Reds". The "Time" magazine critic wrote: "As Reed's Soviet nemesis, novelist Jerzy Kosinski acquits himself nicely–a tundra of ice against Reed's all-American fire." "Newsweek" complimented Kosiński's "delightfully abrasive" performance.
Kosiński was friends with Roman Polanski, with whom he attended the National Film School in Łódź, and said he narrowly missed being at Polanski and Sharon Tate's house on the night Tate was murdered by Charles Manson's followers in 1969, due to lost luggage. His novel "Blind Date" portrayed the Manson murders.
In 1984, Polanski denied Kosiński's story in his autobiography. Journalist John Taylor of "New York" magazine believes Polanski was mistaken. "Although it was a single sentence in a 461-page book, reviewers focused on it. But the accusation was untrue: Jerzy and Kiki "had" been invited to stay with Tate the night of the Manson murders, and they missed being killed as well only because they stopped in New York en route from Paris because their luggage had been misdirected." Taylor based this conclusion on a letter that a friend of Kosiński's wrote to the "Times", published in the "Book Review", describing the detailed plans he and Jerzy had made to meet that weekend at Polanski's house on Cielo Drive. The letter referenced was written by Clement Biddle Wood.
Kosiński was also friends with Wojciech Frykowski and Abigail Folger. He introduced the couple to each other.
Svetlana Alliluyeva, who had a friendship with Kosiński, is introduced as a character in his novel "Blind Date".
Kosiński wrote his novel "Pinball" (1982) for his friend George Harrison, having conceived of the idea for the book at least ten years before writing it.
Kosiński practiced the photographic arts, with one-man exhibitions to his credit at Warsaw's Crooked Circle Gallery (1957) and the Andre Zarre Gallery in New York (1988). He watched surgeries and read to terminally ill patients.
Kosiński was also very interested in polo, and compared himself to a character from his novel "Passion Play": "The character, Fabian, is at the mercy of his aging and his sexual obsession. It's my calling card. I'm 46. I'm like Fabian."
"National Book Awards – 1969". National Book Foundation. Retrieved 2012-03-28. (With essay by Harold Augenbraum from the Awards 60-year anniversary blog.)
He is the subject of the off-Broadway play "More Lies About Jerzy" (2001), written by Davey Holmes and originally starring Jared Harris as the Kosiński-inspired character "Jerzy Lesnewski". The most recent production was staged at the New End Theatre in London, starring George Layton.
He also appears as one of the 'literary golems' (ghosts) in Thane Rosenbaum's novel "The Golems of Gotham".
|
https://en.wikipedia.org/wiki?curid=15657
|
Jeep
Jeep is a brand of American automobile and also a division of FCA US LLC (formerly Chrysler Group, LLC), a wholly owned subsidiary of the Italian-American corporation Fiat Chrysler Automobiles. Jeep has been part of Chrysler since 1987, when Chrysler acquired the Jeep brand, along with remaining assets, from its previous owner American Motors Corporation (AMC).
Jeep's product range consists solely of sport utility vehicles – both crossovers and fully off-road worthy SUVs and models, including one pickup truck. Previously, Jeep's range included other pickups, as well as small vans and a few roadsters. Some of Jeep's vehicles—such as the Grand Cherokee—reach into the luxury SUV segment, a market segment the 1963 Wagoneer is considered to have started. Jeep sold 1.4 million SUVs globally in 2016, up from 500,000 in 2008, two-thirds of them in North America, and was Fiat-Chrysler's best-selling brand in the U.S. during the first half of 2017. In the U.S. alone, over 2,400 dealerships hold franchise rights to sell Jeep-branded vehicles, and if Jeep were spun off into a separate company, it is estimated to be worth between $22 and $33.5 billion—slightly "more" than all of FCA (US).
Prior to 1940 the term "jeep" had been used as U.S. Army slang for new recruits or vehicles, but the World War II "jeep" that went into production in 1941 specifically tied the name to this light military 4x4, arguably making them the oldest four-wheel drive mass-production vehicles now known as SUVs. The Jeep became the primary light four-wheel-drive vehicle of the United States Armed Forces and the Allies during World War II, as well as the postwar period. The term became common worldwide in the wake of the war. Doug Stewart noted:
"The spartan, cramped, and unstintingly functional jeep became the ubiquitous World War II four-wheeled personification of Yankee ingenuity and cocky, can-do determination." It is the precursor of subsequent generations of military light utility vehicles such as the Humvee, and inspired the creation of civilian analogs such as the original Series I Land Rover. Many Jeep variants serving similar military and civilian roles have since been designed in other nations.
The Jeep marque has been headquartered in Toledo, Ohio, ever since Willys-Overland launched production of the first CJ, or Civilian Jeep, branded models there in 1945. Its replacement, the conceptually consistent Jeep Wrangler series, has remained in production since 1986. With its solid axles and open top, the Wrangler has been called the Jeep model that is as central to the brand's identity as the rear-engined 911 is to Porsche.
At least two Jeep models (the CJ-5 and the SJ Wagoneer) enjoyed extraordinary three-decade production runs of a single body generation.
In lowercase, the term "jeep" continues to be used as a generic term for vehicles inspired by the Jeep that are suitable for use on rough terrain.
When it became clear that the United States would be involved in the European theater of World War II, the Army contacted 135 companies to create working prototypes of a four-wheel drive reconnaissance car. Only two companies responded: American Bantam Car Company and Willys-Overland. The Army set a seemingly impossible deadline of 49 days to supply a working prototype. Willys asked for more time, but was refused. The Bantam Car Company had only a skeleton staff left on the payroll and solicited Karl Probst, a talented freelance designer from Detroit. After turning down Bantam's initial request, Probst responded to an Army request and began work on July 17, 1940, initially without salary.
Probst laid out full plans in just two days for the Bantam prototype known as the BRC or Bantam Reconnaissance Car, working up a cost estimate the next day. Bantam's bid was submitted on July 22, complete with blueprints. Much of the vehicle could be assembled from off-the-shelf automotive parts, and custom four-wheel drivetrain components were to be supplied by Spicer. The hand-built prototype was completed in Butler, Pennsylvania, and driven to Camp Holabird, Maryland, on September 23 for Army testing. The vehicle met all the Army's criteria except engine torque.
The Army thought that the Bantam company was too small to supply the required number of vehicles, so it supplied the Bantam design to Willys and Ford, and encouraged them to modify the design. The resulting Ford "Pygmy" and Willys "Quad" prototypes looked very similar to the Bantam BRC prototype, and Spicer supplied very similar four-wheel drivetrain components to all three manufacturers.
1,500 units of each model (Bantam BRC-40, Ford GP, and Willys MA) were built and extensively field-tested. After the Army revised its original weight specification upward to a more attainable maximum (including oil and water), Willys-Overland's chief engineer Delmar "Barney" Roos modified the design to use Willys's heavy but powerful "Go Devil" engine, and won the initial production contract. The Willys version became the standard Jeep design, designated the model MB, and was built at the company's plant in Toledo, Ohio. The familiar pressed-metal Jeep grille was a Ford design feature and was incorporated into the final design by the Army.
Because the US War Department required a large number of vehicles in a short time, Willys-Overland granted the US Government a non-exclusive license to allow another company to manufacture vehicles using Willys' specifications. The Army chose Ford as the second supplier, building Jeeps to the Willys design; Willys supplied Ford with a complete set of plans and specifications. American Bantam, the creator of the first Jeep, built approximately 2,700 of them to the BRC-40 design, but spent the rest of the war building heavy-duty trailers for the Army.
Final production-version Jeeps built by Willys-Overland were the Model MB, while those built by Ford were the Model GPW ("G" = government vehicle, "P" designated the 80-inch wheelbase, and "W" = the Willys engine design). There were subtle differences between the two. Ford marked every component (including bolt heads) with an "F", and early on also embossed its name in large trademark-script letters in the rear panel of its jeeps. Willys followed Ford's pattern by stamping 'Willys' into several body parts, but the U.S. government objected to this practice, and both companies stopped in 1942. Despite wartime advertising in which both car and component manufacturers touted their contributions to the production of the successful jeeps, no "Jeep"-branded vehicles were built until the 1945 Willys CJ-2A.
The cost per vehicle trended upward as the war continued, from US$648.74 per unit under the first Willys contract (Ford's was $782.59 per unit). Willys-Overland and Ford, under the direction of Charles E. Sorensen (Vice-President of Ford during World War II), produced about 640,000 Jeeps for the war effort, accounting for approximately 18% of all wheeled military vehicles built in the U.S. during the war.
Jeeps were used by every service of the U.S. military. An average of 145 were supplied to every Army infantry regiment. Jeeps were used for many purposes, including cable laying, saw milling, and firefighting pumping, and as field ambulances and tractors; with suitable wheels, they could even run on railway tracks. An amphibious jeep, the model GPA, or "seep" (Sea Jeep), was built by Ford in modest numbers, but it could not be considered a success – it was neither a good off-road vehicle nor a good boat. As part of the war effort, nearly 30% of all Jeep production was supplied to Great Britain and the Soviet Red Army.
The Jeep has been widely imitated around the world, including in France by Delahaye and by Hotchkiss et Cie (after 1954, Hotchkiss manufactured Jeeps under license from Willys), and in Japan by Mitsubishi Motors and Toyota. The Land Rover was inspired by the Jeep. The utilitarian good looks of the original Jeep have been hailed by industrial designers and museum curators alike. The Museum of Modern Art described the Jeep as a masterpiece of functionalist design, and has periodically exhibited the Jeep as part of its collection. Pulitzer Prize-winning war correspondent Ernie Pyle called the jeep, along with the Coleman G.I. Pocket Stove, "the two most important pieces of noncombat equipment ever developed." Jeeps became even more famous following the war, as they became available on the surplus market. Some ads claimed to offer "Jeeps still in the factory crate." This legend persisted for decades, despite the fact that Jeeps were never shipped from the factory in crates (although Ford did "knock down" Jeeps for easier shipping, which may have perpetuated the myth).
The "Jeepney" is a unique type of taxi or bus created in the Philippines. The first Jeepneys were military-surplus MBs and GPWs, left behind in the war-ravaged country following World War II and Filipino independence. Jeepneys were built from Jeeps by lengthening and widening the rear "tub" of the vehicle, allowing them to carry more passengers. Over the years, Jeepneys have become the most ubiquitous symbol of the modern Philippines, even as they have been decorated in more elaborate and flamboyant styles by their owners. Most Jeepneys today are scratch-built by local manufacturers, using different powertrains.
Aside from Jeepneys, backyard assemblers in the Philippines construct replica Jeeps with stainless steel bodies and surplus parts, and are called "owner-type jeeps" (as jeepneys are also called "passenger-type jeeps").
In the United States military, the Jeep has been supplanted by a number of vehicles (e.g. Ford's M151) of which the latest is the Humvee.
After World War II, Jeep began to experiment with new designs, including a model that could drive under water. On February 1, 1950, contract N8ss-2660 was approved for 1,000 units "especially adapted for general reconnaissance or command communications" and "constructed for short period underwater operation such as encountered in landing and fording operations." The engine was modified with a snorkel system so that the engine could properly breathe under water.
In 1965, Jeep developed the M715 1.25-ton army truck, a militarized version of the civilian J-series Jeep truck, which served extensively in the Vietnam War. It had heavier full-floating axles and a foldable, vertical, flat windshield. It still serves in other countries today, and is produced under license by Kia.
Many explanations of the origin of the word "jeep" have proven difficult to verify. The most widely held theory is that the military designation "GP" (for "Government Purposes" or "General Purpose") was slurred into the word "Jeep" in the same way that the contemporary "HMMWV" (for "High-Mobility Multi-purpose Wheeled Vehicle") has become known as the Humvee. Joe Frazer, Willys-Overland President from 1939 to 1944, claimed to have coined the word "jeep" by slurring the initials G.P. However, there are no known contemporaneous uses of "GP" in this sense before later attempts to create a "backronym."
A more detailed view, popularized by R. Lee Ermey on his television series "Mail Call", disputes this "slurred GP" origin, saying that the vehicle was designed for specific duties, was never referred to as "General Purpose", and that the average jeep-driving GI would have been highly unlikely to know that designation. The Ford GPW abbreviation actually meant G for government use, P to designate its wheelbase and W to indicate its Willys-Overland designed engine. Ermey suggests that soldiers at the time were so impressed with the new vehicles that they informally named them after Eugene the Jeep, a character in the "Thimble Theatre" comic strip and cartoons created by E. C. Segar as early as mid-March 1936. Eugene the Jeep was Popeye's "jungle pet" and was "small, able to move between dimensions and could solve seemingly impossible problems."
The word "jeep" however, was used as early as World War I, as US Army slang for new uninitiated recruits, or by mechanics to refer to new unproven vehicles. In 1937, tractors which were supplied by Minneapolis Moline to the US Army were called jeeps. A precursor of the Boeing B-17 Flying Fortress was also referred to as the jeep.
"Words of the Fighting Forces" by Clinton A. Sanders, a dictionary of military slang, published in 1942, in the library at The Pentagon gives this definition:
This definition is supported by the use of the term "jeep carrier" to refer to the Navy's small escort carriers.
Early in 1941, Willys-Overland demonstrated the vehicle's off-road capability by having test driver Irving "Red" Hausmann, who had recently heard soldiers at Fort Holabird calling it a "jeep," drive it up the steps of the United States Capitol. When asked by syndicated columnist Katharine Hillyer of the "Washington Daily News" (or by a bystander, according to another account) what it was called, Hausmann answered, "It's a jeep."
Katharine Hillyer's article was published nationally on February 19, 1941, and included a captioned picture of the vehicle.
Although the term was also military slang for vehicles that were untried or untested, this exposure caused all other jeep references to fade, leaving the 4x4 with the name.
The "Jeep" brand has gone through many owners, starting with Willys-Overland, which filed the original trademark application for the "Jeep" brand-name in February 1943. To help establish the term as a Willys brand, the firm campaigned with advertisements emphasizing Willys' prominent contribution to the Jeep that helped win the war. Willys' application initially met with years of opposition, primarily from Bantam, but also from Minneapolis-Moline. The Federal Trade Commission initially ruled in favor of Bantam in May 1943, largely ignoring Minneapolis-Moline's claim, and continued to scold Willys-Overland after the war for its advertising. The FTC even slapped the company with a formal complaint, to cease and desist any claims that it "created or designed" the Jeep — Willys was only allowed to advertise its contribution to the Jeep's development.
Willys nevertheless proceeded to produce the first Civilian Jeep (CJ) branded vehicles in 1945, and simply claimed the "Jeep" name as its trademark in 1946. As the only company that continuously produced "Jeep" vehicles after the war, Willys-Overland was eventually granted "Jeep" as a registered trademark in June 1950. Aside from Willys, King Features Syndicate has held a trademark on the name "Jeep" for its comics since August 1936.
Willys had also seriously considered the brand name "AGRIJEEP", and was granted a trademark for it in December 1944, but the civilian production models from 1945 on were instead marketed as the "Universal Jeep", reflecting a wider range of uses beyond farming.
A division of FCA US LLC, the most recent successor company to the Jeep brand, now holds trademark status on the name "Jeep" and the distinctive 7-slot front grille design. The original 9-slot grille associated with all World War II jeeps was designed by Ford for their GPW, and because it weighed less than the original "Slat Grille" of Willys (an arrangement of flat bars), was incorporated into the "standardized jeep" design.
The history of the HMMWV (Humvee) has ties with Jeep. In 1971, Jeep's Defense and Government Products Division was turned into AM General, a wholly owned subsidiary of American Motors Corporation, which also owned Jeep. In 1979, while still owned by American Motors, AM General began the first steps toward designing the Humvee. AM General also continued manufacturing the two-wheel-drive DJ, which Jeep created in 1953. The General Motors Hummer and Chrysler Jeep later battled in U.S. courts over the right to use seven slots in their respective radiator grilles. Chrysler Jeep claims exclusive rights to the seven vertical slots as the sole remaining assignee of the succession of companies since Willys, which gave its postwar jeeps seven slots instead of Ford's nine-slot wartime design.
Jeep advertising has always emphasized the brand's vehicles' off-road capabilities. Today, the Wrangler is one of the few remaining four-wheel-drive vehicles with solid front and rear axles. These axles are known for their durability, strength, and articulation. New Wranglers come with a Dana 44 rear differential and a Dana 30 front differential. The upgraded Rubicon model of the JK Wrangler is equipped with electronically activated locking differentials, Dana 44 axles front and rear with 4.10 gears, a 4:1 transfer case, electronic sway bar disconnect and heavy duty suspension.
Another benefit of solid axle vehicles is they tend to be easier and cheaper to "lift" with aftermarket suspension systems. This increases the distance between the axle and chassis of the vehicle. By increasing this distance, larger tires can be installed, which will increase the ground clearance, allowing it to traverse even larger and more difficult obstacles. In addition to higher ground clearance, many owners aim to increase suspension articulation or "flex" to give their Jeeps greatly improved off-road capabilities. Good suspension articulation keeps all four wheels in contact with the ground and maintains traction.
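As a rough illustration of the geometry behind fitting larger tires (a back-of-envelope sketch, not from the original text, assuming clearance is measured under the axle housing and that the wheel centre sits at half the tire's diameter): the clearance gained under the axle is half of the increase in tire diameter.

\[
\Delta h = \frac{D_{\text{new}} - D_{\text{old}}}{2},
\qquad \text{e.g. } \frac{35\,\text{in} - 33\,\text{in}}{2} = 1\,\text{in of extra clearance.}
\]

Note that the lift kit itself does not raise the axles; it raises the chassis relative to them, creating the wheel-well room that makes the larger tires possible.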
Useful features of the smaller Jeeps are their short wheelbases, narrow frames, and ample approach, breakover, and departure angles, allowing them to fit into places where full-size four-wheel drives have difficulty.
After the war, Willys did not resume production of its passenger-car models, choosing instead to concentrate on Jeeps and Jeep-branded vehicles, launching the Jeep Station Wagon in 1946, the Jeep Truck in 1947, and the Jeepster in 1948. An attempt to re-enter the passenger-car market in 1952 with the Willys Aero sedan proved unsuccessful, and ended with the company's acquisition by Kaiser Motors in 1953 for $60 million. Kaiser initially called the merged company "Willys Motors"; by the end of 1955 it had dropped the Willys Aero, as well as Kaiser's own passenger cars, to sell Jeeps exclusively, and in 1963 it renamed itself Kaiser-Jeep.
American Motors Corporation (AMC) in turn purchased Kaiser's money-losing Jeep operations in 1970; this time $70 million changed hands. The utility vehicles complemented AMC's passenger-car business by sharing components, achieving volume efficiencies, and capitalizing on Jeep's international and government markets. In 1971 AMC spun off Jeep's commercial, postal and military vehicle lines into a separate subsidiary, AM General – the company that later developed the M998 Humvee. In 1976 Jeep introduced the CJ-7, replacing the CJ-6 in North America, and crossed 100,000 civilian units in annual global sales for the first time.
The French automaker Renault began investing in AMC in 1979. Renault began selling Jeeps through their European dealerships soon thereafter, beginning in Belgium and France, gradually supplanting a number of independent importers. During this period Jeep introduced the XJ Cherokee, its first unibody SUV; and global sales topped 200,000 for the first time in 1985. However, the replacement of the CJ Jeeps by the new Wrangler line in 1986 marked the start of a different era. By 1987, the automobile markets had changed and Renault itself was experiencing financial troubles.
At the same time, Chrysler Corporation wanted to capture the Jeep brand, as well as other assets of AMC, so Chrysler bought out AMC in 1987, shortly after the Jeep CJ-7 had been replaced with the AMC-designed Wrangler YJ. After more than 40 years, the four-wheel-drive utility-vehicle brand that had been a profitable niche for smaller automakers fell into the hands of one of the Big Three, and Jeep was the only AMC brand continued by Chrysler after the acquisition. Chrysler subsequently merged with Daimler-Benz in 1998 and was folded into DaimlerChrysler, which eventually sold most of its interest in Chrysler to a private equity company in 2007. Chrysler and the Jeep division operated under Chrysler Group LLC until December 15, 2014, when Chrysler was folded into Fiat Chrysler Automobiles, with the stateside division operating as 'FCA US LLC'.
Jeeps have been built under licence by various manufacturers around the world, including Mahindra in India, EBRO in Spain, and several companies in South America. Mitsubishi built more than 30 models in Japan between 1953 and 1998; most were based on the CJ-3B model of the original Willys-Kaiser design.
Toledo, Ohio has been the headquarters of the Jeep brand since its inception, and the city has always been proud of this heritage. Although Jeeps are no longer produced in the same Toledo Complex as the World War II originals, two streets in the vicinity of the old plant are named Willys Parkway and Jeep Parkway. The Jeep Wrangler and Jeep Cherokee are currently built in the city, in separate facilities not far from the site of the original Willys-Overland plant.
American Motors set up the first automobile-manufacturing joint venture in the People's Republic of China on January 15, 1984. The result was Beijing Jeep Corporation, Ltd., in partnership with Beijing Automobile Industry Corporation, to produce the Jeep Cherokee (XJ) in Beijing. Manufacture continued after Chrysler's buyout of AMC, and the joint venture later became part of DaimlerChrysler and DaimlerChrysler China Invest Corporation. The original 1984 XJ model was updated and sold as the "Jeep 2500" toward the end of its production run, which ended after 2005.
While Jeeps have been built in India under licence by Mahindra & Mahindra since the 1960s, Jeep entered the Indian market directly in 2016, starting with the release of the Wrangler and Grand Cherokee in the country.
The CJ (for "Civilian Jeep") series were literally the first "Jeep" branded vehicles sold commercially to the civilian public, beginning in 1945 with the CJ-2A, followed by the CJ-3A in 1949 and the CJ-3B in 1953. These early Jeeps are frequently referred to as "flat-fenders" because their front fenders were completely flat and straight, no different than on the original WW II model (the Willys MB and identical Ford GPW). The CJ-4 exists only as a single 1951 prototype, and constitutes the "missing link" between the flat-fendered CJ-2A and CJ-3A/B, and the subsequent Jeeps with new bodies, featuring rounded fenders and hoods, beginning with the 1955 CJ-5, first introduced as the military Willys MD (or M38A1). The restyled body was mostly prompted to clear the taller new overhead-valve Hurricane engine.
With over 300,000 of the wagon and its variants built in the U.S., the Jeep Station Wagon was one of Willys' most successful post-World War II models. Its production coincided with consumers' move to the suburbs.
The Jeepster, introduced in 1948, was directly based on the rear-wheel-drive Jeep Station Wagon chassis and shared many of the same parts.
(Jeepster) Commando
From 1955 onwards Willys offered two-wheel-drive versions of its CJ Jeeps for commercial use, called DJ models (for 'Dispatcher Jeep'), in both open and closed body styles. A well-known version was the right-hand-drive model with sliding side doors, used by the US Postal Service.
In 1961 the range was expanded with the 'Fleetvan' delivery-van, based on DJ Jeeps.
Fleetvan Jeep
SUV models (1962-1991)
Pickup models (1962-1988)
The Jeep brand currently produces five models, although eight vehicles in total are sold under the brand name or use the Jeep logo.
Jeeps have been built and/or assembled around the world by various companies.
Jeep is also a licensed brand of outdoor-lifestyle apparel. There are reportedly between 600 and 1,500 such outlets in China, vastly outnumbering Jeep auto dealerships in the country.
In April 2012 Jeep signed a shirt sponsorship deal worth €35m ($45m) with Italian football club Juventus.
In August 2014 Jeep signed a sponsorship deal with Greek football club AEK Athens F.C.
Jamaica
Jamaica is an island country situated in the Caribbean Sea. It is the third-largest island of the Greater Antilles and of the Caribbean as a whole (after Cuba and Hispaniola). Jamaica lies south of Cuba and west of Hispaniola (the island containing the countries of Haiti and the Dominican Republic); the British Overseas Territory of the Cayman Islands lies to the north-west.
Originally inhabited by the indigenous Arawak and Taíno peoples, the island came under Spanish rule following the arrival of Christopher Columbus in 1494. Many of the indigenous people were either killed or died of diseases to which they had no immunity, and the Spanish thus forcibly transplanted large numbers of African slaves to Jamaica as labourers. The island remained a possession of Spain until 1655, when England (later Great Britain) conquered it, renaming it "Jamaica". Under British colonial rule Jamaica became a leading sugar exporter, with a plantation economy dependent on the African slaves and later their descendants. The British fully emancipated all slaves in 1838, and many freedmen chose to have subsistence farms rather than to work on plantations. Beginning in the 1840s, the British began utilising Chinese and Indian indentured labour to work on plantations. The island achieved independence from the United Kingdom on 6 August 1962.
With around 2.8 million people, Jamaica is the third-most populous Anglophone country in the Americas (after the United States and Canada), and the fourth-most populous country in the Caribbean. Kingston is the country's capital and largest city. The majority of Jamaicans are of Sub-Saharan African ancestry, with significant European, East Asian (primarily Chinese), Indian, Lebanese, and mixed-race minorities. Due to a high rate of emigration for work since the 1960s, there is a large Jamaican diaspora, particularly in Canada, the United Kingdom, and the United States. The country has a global influence that belies its small size; it was the birthplace of the Rastafari religion and reggae music (and associated genres such as dub, ska and dancehall), and it is internationally prominent in sports, most notably cricket, sprinting and athletics.
Jamaica is an upper-middle income country with an economy heavily dependent on tourism; it has an average of 4.3 million tourists a year. Politically it is a Commonwealth realm, with Elizabeth II as its queen. Her appointed representative in the country is the Governor-General of Jamaica, an office held by Patrick Allen since 2009. Andrew Holness has served as Prime Minister of Jamaica since March 2016. Jamaica is a parliamentary constitutional monarchy with legislative power vested in the bicameral Parliament of Jamaica, consisting of an appointed Senate and a directly elected House of Representatives.
The indigenous people, the Yamaye (also known as the Taíno), called the island "Xaymaca" in an Arawakan language, meaning the "Land of Wood and Water" or the "Land of Springs".
Colloquially Jamaicans refer to their home island as the "Rock". Slang names such as "Jamrock", "Jamdown" ("Jamdung" in Jamaican Patois), or briefly "Ja", have derived from this.
Humans have inhabited Jamaica from as early as 4000–1000 BC. Little is known of these early peoples. Another group, known as the "Redware people" after their pottery, arrived circa 600 AD, followed by the Arawak–Taíno circa 800 AD, who most likely came from South America. They practised an agrarian and fishing economy, and at their height are thought to have numbered some 60,000 people, grouped into around 200 villages headed by "caciques" (chiefs). The south coast of Jamaica was the most populated area, especially around what is now known as Old Harbour.
Though often thought to have become extinct following contact with Europeans, the Taíno in fact still inhabited Jamaica when the English took control of the island in 1655. Some fled into interior regions, merging with African Maroon communities. Today, only a tiny number of Jamaican natives, known as Yamaye, remain. The Jamaican National Heritage Trust is attempting to locate and document any remaining evidence of the Taíno.
Christopher Columbus was the first European to see Jamaica, claiming the island for Spain after landing there in 1494 on his second voyage to the Americas. His probable landing point was Dry Harbour, now called Discovery Bay; Columbus named St. Ann's Bay "Saint Gloria", as the site of his first sighting of the land. He returned in 1503, but was shipwrecked, and he and his crew were forced to live on Jamaica for a year while waiting to be rescued. One and a half kilometres west of St. Ann's Bay is the site of the first Spanish settlement on the island, Sevilla, which was established in 1509 by Juan de Esquivel but abandoned around 1524 because it was deemed unhealthy. The capital was moved to Spanish Town, then called "St. Jago de la Vega", around 1534 (in present-day St. Catherine). Meanwhile, the Taínos began dying in large numbers, both from introduced diseases to which they had no immunity and from enslavement by the Spanish. As a result, the Spanish began importing slaves from Africa to the island. Many slaves managed to escape, forming autonomous communities in remote and easily defended areas in the interior of Jamaica, mixing with the remaining Taíno; these communities became known as Maroons. Small numbers of Jews also came to live on the island. By the early 17th century it is estimated that no more than 2,500–3,000 people lived on Jamaica.
The English began taking an interest in the island and, following a failed attempt to conquer Santo Domingo on Hispaniola, Sir William Penn and General Robert Venables led an invasion of Jamaica in 1655. Battles at Ocho Rios in 1657 and the Rio Nuevo in 1658 resulted in Spanish defeats; in 1660 the Maroons began supporting the English and the Spanish defeat was secured.
When the English captured Jamaica, the Spanish colonists fled after freeing their slaves. Many of these slaves dispersed into the mountains, joining the already established Maroon communities. During the centuries of slavery, the Maroons established free communities in the mountainous interior of Jamaica, where they maintained their freedom and independence for generations. Meanwhile, the Spanish made several attempts to re-capture the island, prompting the British to support pirates attacking Spanish ships in the Caribbean; as a result, piracy became rampant on Jamaica, with the city of Port Royal becoming notorious for its lawlessness. Spain later recognised English possession of the island with the Treaty of Madrid (1670), after which the English authorities sought to rein in the worst excesses of the pirates.
In 1660, the population of Jamaica was about 4,500 whites and 1,500 blacks. By the early 1670s, as the English developed sugar cane plantations worked by large numbers of slaves, black Africans formed a majority of the population. The Irish in Jamaica also formed a large part of the island's early population, making up two-thirds of the white population on the island in the late 17th century, twice that of the English population. They were brought in as indentured labourers and soldiers after the conquest of 1655. The majority of Irish were transported by force as political prisoners of war from Ireland as a result of the ongoing Wars of the Three Kingdoms. Migration of large numbers of Irish to the island continued into the 18th century.
A limited form of local government was introduced with the creation of the House of Assembly of Jamaica in 1664; however, it represented only a tiny number of rich plantation owners. In 1692, the colony was rocked by an earthquake that resulted in several thousand deaths and the almost complete destruction of Port Royal.
During the 1700s the economy boomed, based largely on sugar and other crops such as coffee, cotton and indigo. All these crops were worked by black slaves, who lived short and often brutal lives with no rights, being the property of a small planter-class. A large slave rebellion, known as Tacky's War, broke out in 1760 but was defeated by the British. During this period the British also attempted to consolidate their control over the island by defeating the Maroons, who continued to live in the interior under leaders such as Cudjoe and Queen Nanny. The First Maroon War (1728 – 1739/40) ended in stalemate, as did a second conflict in 1795–96; however, as a result of these wars many Maroons were expelled to Nova Scotia and, later, Sierra Leone.
By the beginning of the 19th century, Jamaica's dependence on slave labour and a plantation economy had resulted in black people outnumbering white people by a ratio of almost 20 to 1. Although the British had outlawed the importation of slaves, some were still smuggled in from Spanish colonies, as well as directly from Africa. While planning the abolition of slavery, the British Parliament passed laws to improve conditions for slaves. They banned the use of whips in the field and the flogging of women; informed planters that slaves were to be allowed religious instruction; and required a free day during each week when slaves could sell their produce, prohibiting Sunday markets so that slaves could attend church. The House of Assembly in Jamaica resented and resisted the new laws. Members, with membership then restricted to European-Jamaicans, claimed that the slaves were content and objected to Parliament's interference in island affairs. Slave owners feared possible revolts if conditions were lightened.
The British abolished the slave trade in 1807, but not the institution itself. In 1831 a huge slave rebellion, known as the Baptist War, broke out, led by the Baptist preacher Samuel Sharpe. The rebellion resulted in hundreds of deaths and the destruction of many plantations, and led to ferocious reprisals by the plantocracy class. As a result of rebellions such as these, as well as the efforts of abolitionists, the British outlawed slavery in the empire in 1834, with full emancipation from chattel slavery declared in 1838. The population in 1834 was 371,070, of whom 15,000 were white, 5,000 free black, 40,000 "coloured" or free people of colour (mixed race), and 311,070 slaves. The resulting labour shortage prompted the British to begin to "import" indentured servants to supplement the labour pool, as many freedmen resisted working on the plantations. Workers recruited from India began arriving in 1845, and Chinese workers in 1854; many of their South Asian and Chinese descendants continue to reside in Jamaica today.
Over the next 20 years, several epidemics of cholera, scarlet fever, and smallpox hit the island, killing almost 60,000 people (about 10 per day). Nevertheless, in 1871 the census recorded a population of 506,154 people, of whom 246,573 were male and 259,581 female. Their races were recorded as 13,101 white, 100,346 coloured (mixed black and white), and 392,707 black. This period was marked by an economic slump, with many Jamaicans living in poverty. Dissatisfaction with this, and continued racial discrimination and marginalisation of the black majority, led to the outbreak of the Morant Bay rebellion in 1865, led by Paul Bogle, which was put down by Governor John Eyre with such brutality that he was recalled from his position. His successor, John Peter Grant, enacted a series of social, financial and political reforms whilst aiming to uphold firm British rule over the island, which became a Crown Colony in 1866. In 1872 the capital was transferred from Spanish Town to Kingston.
In 1907 Jamaica was struck by an earthquake — this, and the subsequent fire, caused immense destruction in Kingston and the deaths of 800–1,000 people.
Unemployment and poverty remained a problem for many Jamaicans. Various movements seeking political change arose as a result, most notably the Universal Negro Improvement Association and African Communities League founded by Marcus Garvey in 1917. As well as seeking greater political rights and an improvement for the condition of workers, Garvey was also a prominent Pan-Africanist and proponent of the Back-to-Africa movement. He was also one of the chief inspirations behind Rastafari, a religion founded in Jamaica in the 1930s that combined Christianity with an Afrocentric theology focused on the figure of Haile Selassie, Emperor of Ethiopia. Despite occasional persecution, Rastafari grew to become an established faith on the island, later spreading abroad.
The Great Depression of the 1930s hit Jamaica hard. As part of the British West Indian labour unrest of 1934–39, Jamaica saw numerous strikes, culminating in a strike in 1938 that turned into a full-blown riot.
As a result, the British government instituted a commission to look into the causes of the disturbances; their report recommended political and economic reforms in Britain's Caribbean colonies. A new House of Representatives was established in 1944, elected by universal adult suffrage. During this period Jamaica's two-party system emerged, with the creation of the Jamaican Labour Party (JLP) under Alexander Bustamante and the People's National Party (PNP) under Norman Manley.
Jamaica slowly gained increasing autonomy from the United Kingdom. In 1958 it became a province in the Federation of the West Indies, a federation of several of Britain's Caribbean colonies. Membership of the Federation proved to be divisive, however, and a referendum on the issue saw a slight majority voting to leave. After leaving the Federation, Jamaica attained full independence on 6 August 1962. The new state retained, however, its membership in the Commonwealth of Nations (with the Queen as head of state) and adopted a Westminster-style parliamentary system. Bustamante, at the age of 78, became the country's first prime minister.
Strong economic growth, averaging approximately 6% per annum, marked the first ten years of independence under conservative JLP governments; these were led by successive Prime Ministers Alexander Bustamante, Donald Sangster (who died of natural causes within two months of taking office) and Hugh Shearer. The growth was fuelled by high levels of private investment in bauxite/alumina, tourism, the manufacturing industry and, to a lesser extent, the agricultural sector. In terms of foreign policy Jamaica became a member of the Non-Aligned Movement, seeking to retain strong ties with Britain and the United States whilst also developing links with Communist states such as Cuba.
The optimism of the first decade was accompanied by a growing sense of inequality among many Afro-Jamaicans, and a concern that the benefits of growth were not being shared by the urban poor, many of whom ended up living in crime-ridden shanty towns in Kingston. This, combined with the effects of a slowdown in the global economy in 1970, led to the voters electing the PNP under Michael Manley in 1972. Manley's government enacted various social reforms, such as a higher minimum wage, land reform, legislation for women's equality, greater housing construction and an increase in educational provision. Internationally he improved ties with the Communist bloc and vigorously opposed the apartheid regime in South Africa. However, the economy faltered in this period due to a combination of internal and external factors (such as the oil shocks). The rivalry between the JLP and PNP became intense, and political and gang-related violence grew significantly in this period.
By 1980, Jamaica's gross national product had declined to some 25% below its 1972 level. Seeking change, Jamaicans voted the JLP back into power in 1980 under Edward Seaga. Firmly anti-Communist, Seaga cut ties with Cuba and sent troops to support the US invasion of Grenada in 1983. The economic deterioration, however, continued into the mid-1980s, exacerbated by a number of factors. The largest and third-largest alumina producers, Alpart and Alcoa, closed; there was a significant reduction in production by the second-largest producer, Alcan; and Reynolds Jamaica Mines, Ltd. left the Jamaican industry. There was also a decline in tourism, which was important to the economy. Owing to rising foreign and local debt, accompanied by large fiscal deficits, the government sought International Monetary Fund (IMF) financing, which was dependent on implementing various austerity measures. These resulted in strikes in 1985 and a decline in support for the Seaga government, exacerbated by criticism of the government's response to the devastation caused by Hurricane Gilbert in 1988. Having de-emphasised socialism and adopted a more centrist position, Michael Manley and the PNP were re-elected in 1989.
The PNP went on to win a string of elections, under Prime Ministers Michael Manley (1989–1992), P. J. Patterson (1992–2005) and Portia Simpson-Miller (2005–2007). During this period various economic reforms were introduced, such as deregulating the finance sector and floating the Jamaican dollar, as well as greater investment in infrastructure, whilst also retaining a strong social safety net. Political violence, so prevalent in the previous two decades, declined significantly. In 2007 the PNP was defeated by the JLP, ending 18 years of PNP rule; Bruce Golding became the new prime minister. Golding's tenure (2007–2010) was dominated by the effects of the global recession, as well as the fallout from an attempt by Jamaican police and military to arrest drug lord Christopher Coke in 2010, which erupted in violence resulting in over 70 deaths. As a result of this incident Golding resigned and was replaced by Andrew Holness in 2011. Holness was defeated in the 2011 Jamaican general election, which saw Portia Simpson-Miller return to power, but he began a second term after winning the 2016 Jamaican general election.
Independence, however widely celebrated in Jamaica, has been questioned in the early 21st century. In 2011, a survey showed that approximately 60% of Jamaicans believe that the country would have been better off had it remained a British colony, with only 17% believing it would have been worse off, citing as problems years of social and fiscal mismanagement in the country.
Jamaica is a parliamentary democracy and constitutional monarchy. The head of state is the Queen of Jamaica (currently Elizabeth II), represented locally by the Governor-General of Jamaica. The governor-general is nominated by the Prime Minister of Jamaica and the entire Cabinet and then formally appointed by the monarch. All the members of the Cabinet are appointed by the governor-general on the advice of the prime minister. The monarch and the governor-general serve largely ceremonial roles, apart from their reserve powers for use in certain constitutional crisis situations. The position of the monarch has been a matter of continuing debate in Jamaica for many years; currently both major political parties are committed to transitioning to a Republic with a President.
Jamaica's current constitution was drafted in 1962 by a bipartisan joint committee of the Jamaican legislature. It came into force with the Jamaica Independence Act, 1962 of the United Kingdom parliament, which gave Jamaica independence.
The Parliament of Jamaica is bicameral, consisting of the House of Representatives (Lower House) and the Senate (Upper House). Members of the House (known as Members of Parliament or "MPs") are directly elected, and the member of the House of Representatives who, in the governor-general's best judgement, is best able to command the confidence of a majority of the members of that House, is appointed by the governor-general to be the prime minister. Senators are nominated jointly by the prime minister and the parliamentary Leader of the Opposition and are then appointed by the governor-general.
The Judiciary of Jamaica operates on a common law system derived from English law and British Commonwealth precedents. The court of final appeal is the Judicial Committee of the Privy Council, though during the 2000s parliament attempted to replace it with the Caribbean Court of Justice.
Jamaica has traditionally had a two-party system, with power often alternating between the People's National Party (PNP) and the Jamaica Labour Party (JLP). The party with current administrative and legislative power is the Jamaica Labour Party, which holds a one-seat parliamentary majority. There are also several minor parties that have yet to gain a seat in parliament; the largest of these is the National Democratic Movement (NDM).
Jamaica is divided into 14 parishes, which are grouped into three historic counties that have no administrative relevance.
In the context of local government the parishes are designated "Local Authorities". These local authorities are further styled as "Municipal Corporations", which are either city municipalities or town municipalities. Any new city municipality must have a population of at least 50,000, while a town municipality must meet a minimum population set by the Minister of Local Government. There are currently no town municipalities.
The local governments of the parishes of Kingston and St. Andrew are consolidated as the city municipality of the Kingston & St. Andrew Municipal Corporation. The newest city municipality is the Municipality of Portmore, created in 2003; although geographically located within the parish of St. Catherine, it is governed independently.
The Jamaica Defence Force (JDF) is the small but professional military force of Jamaica. The JDF is based on the British military model with similar organisation, training, weapons and traditions. Once chosen, officer candidates are sent to one of several British or Canadian basic officer courses depending on the arm of service. Enlisted soldiers are given basic training at Up Park Camp or JDF Training Depot, Newcastle, both in St. Andrew. As with the British model, NCOs are given several levels of professional training as they rise up the ranks. Additional military schools are available for speciality training in Canada, the United States and the United Kingdom.
The JDF is directly descended from the British Army's West India Regiment, formed during the colonial era. The West India Regiment was used extensively by the British Empire in policing the empire from 1795 to 1926. Other units in the JDF's heritage include the early colonial Jamaica Militia and the Kingston Infantry Volunteers of World War I, which were reorganised into the Jamaica Infantry Volunteers in World War II. The West India Regiment was re-formed in 1958 as part of the West Indies Federation; after the dissolution of the Federation, the JDF was established.
The Jamaica Defence Force (JDF) comprises an infantry Regiment and Reserve Corps, an Air Wing, a Coast Guard fleet and a supporting Engineering Unit. The infantry regiment contains the 1st, 2nd and 3rd (National Reserve) battalions. The JDF Air Wing is divided into three flight units, a training unit, a support unit and the JDF Air Wing (National Reserve). The Coast Guard is divided between seagoing crews and support crews who conduct maritime safety and maritime law enforcement as well as defence-related operations.
The role of the support battalion is to provide numbers to boost combat strength and to deliver competency training that maintains the readiness of the force. The 1st Engineer Regiment was formed in response to increased demand for military engineers; its role is to provide engineering services whenever and wherever they are needed. The Headquarters JDF contains the JDF Commander and Command Staff, as well as Intelligence, Judge Advocate, Administrative and Procurement sections.
In recent years the JDF has been called on to assist the nation's police, the Jamaica Constabulary Force (JCF), in fighting drug smuggling and a rising crime rate which includes one of the highest murder rates in the world. JDF units actively conduct armed patrols with the JCF in high-crime areas and known gang neighbourhoods. There has been vocal controversy as well as support of this JDF role. In early 2005, an Opposition leader, Edward Seaga, called for the merger of the JDF and JCF. This has not garnered support in either organisation nor among the majority of citizens. In 2017, Jamaica signed the UN treaty on the Prohibition of Nuclear Weapons.
Jamaica is the third-largest island in the Caribbean. It lies between latitudes 17° and 19°N, and longitudes 76° and 79°W. Mountains dominate the interior: the Don Figuerero, Santa Cruz, and May Day mountains in the west, the Dry Harbour Mountains in the centre, and the John Crow Mountains and Blue Mountains in the east, the latter containing Blue Mountain Peak, Jamaica's tallest mountain at 2,256 m. They are surrounded by a narrow coastal plain. Jamaica has only two cities: Kingston, the capital and centre of business, on the south coast, and Montego Bay, one of the best-known cities in the Caribbean for tourism, on the north coast. Kingston Harbour is the seventh-largest natural harbour in the world, which contributed to the city being designated the capital in 1872. Other towns of note include Portmore, Spanish Town, Savanna la Mar, Mandeville and the resort towns of Ocho Ríos, Port Antonio and Negril.
Tourist attractions include Dunn's River Falls in St. Ann, YS Falls in St. Elizabeth, the Blue Lagoon in Portland, believed to be the crater of an extinct volcano, and Port Royal, site of a major earthquake in 1692 that helped form the island's Palisadoes tombolo.
Among the variety of terrestrial, aquatic and marine ecosystems are dry and wet limestone forests, rainforest, riparian woodland, wetlands, caves, rivers, seagrass beds and coral reefs. The authorities have recognised the tremendous significance and potential of the environment and have designated some of the more 'fertile' areas as 'protected'. Among the island's protected areas are the Cockpit Country, Hellshire Hills, and Litchfield forest reserves. In 1992, Jamaica's first marine park was established in Montego Bay. Portland Bight Protected Area was designated in 1999. The following year Blue and John Crow Mountains National Park was created, covering a large wilderness area that supports thousands of tree and fern species and rare animals.
There are several small islands off Jamaica's coast, most notably those in Portland Bight such as Pigeon Island, Salt Island, Dolphin Island, Long Island, Great Goat Island and Little Goat Island, and also Lime Cay located further east. Much further out — some 50–80 km off the south coast — lie the very small Morant Cays and Pedro Cays.
The climate in Jamaica is tropical, with hot and humid weather, although higher inland regions are more temperate. Some regions on the south coast, such as the Liguanea Plain and the Pedro Plains, are relatively dry rain-shadow areas.
Jamaica lies in the hurricane belt of the Atlantic Ocean and because of this, the island sometimes suffers significant storm damage. Hurricanes Charlie and Gilbert hit Jamaica directly in 1951 and 1988, respectively, causing major damage and many deaths. In the 2000s, hurricanes Ivan, Dean, and Gustav also brought severe weather to the island.
Jamaica's climate is tropical, supporting diverse ecosystems with a wealth of plants and animals. Its plant life has changed considerably over the centuries; when the Spanish arrived in 1494, except for small agricultural clearings, the country was deeply forested. The European settlers cut down the great timber trees for building and ships' supplies, and cleared the plains, savannas, and mountain slopes for intense agricultural cultivation. Many new plants were introduced including sugarcane, bananas, and citrus trees.
Today, Jamaica is home to about 3,000 species of native flowering plants (of which over 1,000 are endemic and 200 are species of orchid), thousands of species of non-flowering flora, and about 20 botanical gardens, some of which are several hundred years old. Areas of heavy rainfall also contain stands of bamboo, ferns, ebony, mahogany, and rosewood. Cactus and similar dry-area plants are found along the south and southwest coastal area. Parts of the west and southwest consist of large grasslands, with scattered stands of trees.
Jamaica's fauna, typical of the Caribbean, includes highly diversified wildlife with many endemic species. As on other oceanic islands, the native land mammals are mostly several species of bats, of which at least three endemic species are found only in Cockpit Country, one of them at risk. Other species of bat include the fig-eating and hairy-tailed bats. The only non-bat native mammal extant in Jamaica is the Jamaican hutia, locally known as the coney. Introduced mammals such as the wild boar and the small Asian mongoose are also common. Jamaica is also home to about 50 species of reptiles, the largest of which is the American crocodile; however, it is present only within the Black River and a few other areas. Lizards such as anoles and iguanas, and snakes such as racers and the Jamaican boa (the largest snake on the island), are common in areas such as the Cockpit Country. None of Jamaica's eight species of native snakes is venomous.
Jamaica is home to about 289 species of birds, of which 27 are endemic, including the endangered black-billed parrot and the Jamaican blackbird, both of which are found only in Cockpit Country. It is also the indigenous home of four species of hummingbird (three of which are found nowhere else in the world): the black-billed streamertail, the Jamaican mango, the vervain hummingbird, and the red-billed streamertail. The red-billed streamertail, known locally as the "doctor bird", is Jamaica's national symbol. Other notable species include the Jamaican tody and the greater flamingo.
One species of freshwater turtle is native to Jamaica, the Jamaican slider, which is found only on Jamaica and a few islands in the Bahamas. In addition, many types of frogs are common on the island, especially treefrogs.
Jamaican waters contain considerable resources of fresh- and saltwater fish. The chief varieties of saltwater fish are kingfish, jack, mackerel, whiting, bonito, and tuna. Fish that occasionally enter freshwater and estuarine environments include snook, jewfish, mangrove snapper, and mullets. Fish that spend the majority of their lives in Jamaica's fresh waters include many species of livebearers, killifish, freshwater gobies, the mountain mullet, and the American eel. Tilapia have been introduced from Africa for aquaculture, and are very common. Also visible in the waters surrounding Jamaica are dolphins, parrotfish, and the endangered manatee.
Insects and other invertebrates are abundant, including the world's largest centipede, the Amazonian giant centipede. Jamaica is the home to about 150 species of butterflies and moths, including 35 indigenous species and 22 subspecies. It is also the native home to the Jamaican swallowtail, the western hemisphere's largest butterfly.
Coral reef ecosystems are important because they provide people with a source of livelihood, food, recreation, and medicinal compounds, and they protect the land on which people live. Jamaica relies on the ocean and its ecosystems for its development, but its marine life is under pressure from several directions. The island's geological origin, topographical features and seasonal high rainfall make it susceptible to a range of natural hazards that can affect the coastal and oceanic environments, including storm surge, slope failures (landslides), earthquakes, floods and hurricanes. Coral reefs in the Negril Marine Park (NMP) have been increasingly impacted by nutrient pollution and macroalgal blooms following decades of intensive development as a major tourist destination. Tourism itself adds pressure: the Jamaican tourism industry accounts for 32% of total employment and 36% of the country's GDP, and is largely based on sun, sea and sand, the last two of these attributes being dependent on healthy coral reef ecosystems. Because Jamaica alone is unable to fund the management of its marine ecosystem, a study was developed to determine whether tourists would be willing to help financially. Healthy oceans, coasts and freshwater ecosystems are crucial for economic growth and food production, and are also fundamental to global efforts to mitigate climate change, which in turn affects the ocean and the life within it. According to the OECD, oceans contribute $1.5 trillion annually in value added to the overall economy, and a developing island country derives much of its revenue from the sea; if Jamaica's oceans cease to function well, the well-being of the island and its people will deteriorate.
Pollution reaches the sea from run-off, sewage systems, and garbage, most of it carried into the ocean by rain and floods, changing the quality and balance of coastal waters. Poor coastal water quality has adversely affected fisheries, tourism and mariculture, as well as undermining the biological sustainability of the living resources of ocean and coastal habitats. Jamaica imports and exports many goods through its waters; imports include petroleum and petroleum products, bringing risks of accidents at sea and of spills during local and international transport. Oil spills disrupt marine life by introducing chemicals that do not belong there. Oil spills are not the only form of pollution that occurs in Jamaica: solid waste disposal mechanisms on the island are currently inadequate, and solid waste is washed into the water by rainfall. Solid waste is harmful to wildlife, particularly birds, fish and turtles that feed at the surface of the water and mistake floating debris for food. For example, plastic can become caught around the necks of birds and turtles, tightening as they grow and making it difficult for them to eat and breathe, while pieces of plastic, metal, and glass can be mistaken for food by fish. Each Jamaican generates about 1 kg (2 lb) of waste per day; only 70% of this is collected by the National Solid Waste Management Authority (NSWMA), and the remaining 30% is either burnt or disposed of in gullies and waterways.
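To put those waste figures in perspective, here is a rough back-of-envelope calculation (not from the original text; it simply combines the 1 kg per person per day and 70% collection figures above with the population of about 2.8 million cited later in this article):

\[
2{,}812{,}000 \times 1\,\text{kg/day} \approx 2{,}812\,\text{tonnes/day},
\qquad
(1 - 0.70) \times 2{,}812\,\text{tonnes/day} \approx 844\,\text{tonnes/day}
\]

On these assumptions, roughly 840 tonnes of waste would go uncollected each day, ending up burnt or in gullies and waterways.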
Policies are being put into place to help preserve the ocean and the life below water. The goal of integrated coastal zone management (ICZM) is to improve the quality of life of human communities that depend on coastal resources, while maintaining the biological diversity and productivity of coastal ecosystems. Development itself can harm the ocean ecosystem: over-building, driven by powerful market forces as well as poverty among some sectors of the population, and destructive exploitation contribute to the decline of ocean and coastal resources. The aim is therefore to develop practices that sustain both people's livelihoods and the ocean ecosystem, including sustainable fisheries practices, sustainable mariculture techniques, sustainable management of shipping, and sustainable tourism practices. Tourism is the number-one source of foreign exchange earnings in Jamaica and, as such, is vital to the national economy, yet tourists are typically unaware of local environmental issues and of how their visits affect them. Practices such as providing sewage treatment facilities for all tourist areas, determining the carrying capacity of the environment prior to planning tourism activities, and providing alternative types of tourist activity can help achieve the desired results, reducing the current pressure on the resources that support traditional tourism. A study was conducted to see how tourists could help provide sustainable financing for ocean and coastal management in Jamaica; instead of tourist fees, the charges would be called environmental fees. The study aimed to inform the relevant stakeholders of the feasibility of implementing environmental fees, as well as the likely impact of such revenue-generating instruments on current tourist visitation rates to the island, since a user-fee system would help fund environmental management and protection. The results show that tourists have a high consumer surplus associated with a vacation in Jamaica, and a significantly lower willingness to pay for a tourism tax than for an environmental tax; the "label" of the tax, and respondents' awareness of the institutional mechanisms for environmental protection and tourism, were important to their decision framework. The challenge is to set a tax high enough to fund environmental management and protection but low enough to keep tourists coming: the study showed that an environmental tax of US$1 per person would not cause a significant decline in visitation rates and would generate revenues of US$1.7 million.
Jamaica's diverse ethnic roots are reflected in the national motto 'Out of Many One People'. Most of the population of 2,812,000 (July 2018 est.) are of African or partially African descent, with many being able to trace their origins to the West African countries of Ghana and Nigeria. Other major ancestral areas are Europe, South Asia, and East Asia. It is uncommon for Jamaicans to identify themselves by race as is prominent in other countries such as the United States, with most Jamaicans seeing Jamaican nationality as an identity in and of itself, identifying as simply being 'Jamaican' regardless of ethnicity. A study found that the average admixture on the island was 78.3% Sub-Saharan African, 16.0% European, and 5.7% East Asian.
The Jamaican Maroons of Accompong and other settlements are the descendants of African slaves who fled the plantations for the interior, where they set up their own autonomous communities. Many Maroons continue to have their own traditions and speak their own language, known locally as Kromanti.
Asians form the second-largest group and include Indo-Jamaicans and Chinese Jamaicans. Most are descended from indentured workers brought by the British colonial government to fill labour shortages following the abolition of slavery in 1838. Prominent Indian Jamaicans include jockey Shaun Bridgmohan, who was the first Jamaican in the Kentucky Derby, NBC Nightly News journalist Lester Holt, and Miss Jamaica World and Miss Universe winner Yendi Phillips. The southwestern parish of Westmoreland is famous for its large population of Indo-Jamaicans. Along with their Indian counterparts, Chinese Jamaicans have also played an integral part in Jamaica's community and history. Prominent descendants of this group include Canadian billionaire investor Michael Lee-Chin, supermodels Naomi Campbell and Tyson Beckford, and VP Records founder Vincent "Randy" Chin.
There are about 20,000 Jamaicans who have Lebanese and Syrian ancestry. Most were Christian immigrants who fled the Ottoman occupation of Lebanon in the early 19th century. Eventually their descendants became very successful politicians and businessmen. Notable Jamaicans from this group include former Jamaican Prime Minister Edward Seaga, Jamaican politician and former Miss World Lisa Hanna, Jamaican politicians Edward Zacca and Shahine Robinson, and hotelier Abraham Elias Issa.
In 1835, Lord Seaford gave 500 acres of his 10,000-acre estate in Westmoreland for the Seaford Town German settlement. Today most of the town's residents are of full or partial German descent.
The first wave of English immigrants arrived on the island in 1655 after conquering the Spanish, and they have historically been the dominant group. Prominent descendants from this group include former American Governor of New York David Paterson, Sandals Hotels owner Gordon Butch Stewart, United States Presidential Advisor and "mother" of the Pell Grant Lois Rice, and former United States National Security Advisor and Ambassador to the United Nations Susan Rice. The first Irish immigrants came to Jamaica in the 1600s as war prisoners and, later, indentured labourers. Their descendants include two of Jamaica's National Heroes: Prime Ministers Michael Manley and Alexander Bustamante. Along with the English and the Irish, the Scots are another group that has made a significant impact on the island. According to the Scotland Herald newspaper, Jamaica has more people with the Campbell surname than Scotland itself, and it also has the highest percentage of Scottish surnames outside of Scotland; Scottish surnames account for about 60% of the surnames in the Jamaican phone books. The first Jamaican inhabitants from Scotland were exiled "rebels". Later, they were followed by ambitious businessmen who divided their time between their great country estates in Scotland and the island. As a result, many of the slave-owning plantations on the island were owned by Scottish men, and a large number of mixed-race Jamaicans can claim Scottish ancestry. High immigration from Scotland continued until well after independence. Today, notable Scottish-Jamaicans include the businessman John Pringle, former American Secretary of State Colin Powell, and American actress Kerry Washington.
There is also a significant Portuguese Jamaican population, predominantly of Sephardic Jewish heritage, primarily located in Saint Elizabeth Parish in southwest Jamaica. The first Jews arrived as explorers from Spain in the 15th century, after being forced to convert to Christianity or face death. A small number of them became slave owners and even famous pirates. Judaism eventually became very influential in Jamaica, as can be seen today in the many Jewish cemeteries around the country. During the Holocaust, Jamaica became a refuge for Jews fleeing persecution in Europe. Famous Jewish descendants include the dancehall artist Sean Paul; Chris Blackwell, former record producer and founder of Island Records; and Jacob De Cordova, founder of the Jamaica Gleaner newspaper.
In recent years immigration has increased, coming mainly from China, Haiti, Cuba, Colombia, and elsewhere in Latin America; 20,000 Latin Americans reside in Jamaica. In 2016, Prime Minister Andrew Holness suggested making Spanish Jamaica's second official language. About 7,000 Americans also reside in Jamaica. Notable Americans with connections to the island include fashion icon Ralph Lauren, philanthropist Daisy Soros, Blackstone's Schwarzman family, the family of the late Lieutenant Governor of Delaware John W. Rollins, fashion designer Vanessa Noel, investor Guy Stuart, Edward and Patricia Falkenberg, and iHeartMedia CEO Bob Pittman, all of whom hold annual charity events to support the island.
Jamaica is regarded as a bilingual country, with two major languages in use by the population. The official language is English, which is "used in all domains of public life", including the government, the legal system, the media, and education. However, the primary spoken language is an English-based creole called Jamaican Patois (or Patwa). The two exist in a dialect continuum, with speakers using a different register of speech depending on context and whom they are speaking to. 'Pure' Patois, though sometimes seen as merely a particularly aberrant dialect of English, is essentially mutually unintelligible with standard English and is best thought of as a separate language. A 2007 survey by the Jamaican Language Unit found that 17.1 percent of the population were monolingual in Jamaican Standard English (JSE), 36.5 percent were monolingual in Patois, and 46.4 percent were bilingual, although earlier surveys had pointed to a greater degree of bilingualism (up to 90 percent). The Jamaican education system has only recently begun to offer formal instruction in Patois, while retaining JSE as the "official language of instruction".
Additionally, some Jamaicans use one or more of Jamaican Sign Language (JSL), American Sign Language (ASL) or the indigenous Jamaican Country Sign Language (Konchri Sain). Both JSL and ASL are rapidly replacing Konchri Sain for a variety of reasons.
Many Jamaicans have emigrated to other countries, especially to the United Kingdom, the United States, and Canada. In the case of the United States, about 20,000 Jamaicans per year are granted permanent residence. There has also been emigration of Jamaicans to other Caribbean countries such as Cuba, Puerto Rico, Guyana, and The Bahamas. It was estimated in 2004 that up to 2.5 million Jamaicans and people of Jamaican descent live abroad.
Jamaicans in the United Kingdom number an estimated 800,000 making them by far the country's largest African-Caribbean group. Large-scale migration from Jamaica to the UK occurred primarily in the 1950s and 1960s when the country was still under British rule. Jamaican communities exist in most large UK cities. Concentrations of expatriate Jamaicans are quite considerable in numerous cities in the United States, including New York City, Buffalo, the Miami metro area, Atlanta, Chicago, Orlando, Tampa, Washington, D.C., Philadelphia, Hartford, Providence and Los Angeles. In Canada, the Jamaican population is centred in Toronto, with smaller communities in cities such as Hamilton, Montreal, Winnipeg, Vancouver and Ottawa. Jamaican Canadians comprise about 30% of the entire Black Canadian population.
A notable though much smaller group of emigrants are Jamaicans in Ethiopia. These are mostly Rastafarians, in whose theological worldview Africa is the promised land, or 'Zion', or more specifically Ethiopia, due to the reverence in which former Ethiopian Emperor Haile Selassie is held. Most live in the small town of Shashamane about 150 miles (240 km) south of the capital Addis Ababa.
When Jamaica gained independence in 1962, the murder rate was 3.9 per 100,000 inhabitants, one of the lowest in the world. By 2009, the rate was 62 per 100,000 inhabitants, one of the highest in the world. Gang violence became a serious problem, with organised crime being centred around Jamaican posses or 'Yardies'. Jamaica has had one of the highest murder rates in the world for many years, according to UN estimates. Some areas of Jamaica, particularly poor areas in Kingston, Montego Bay and elsewhere experience high levels of crime and violence.
There were 1,682 reported murders in 2009 and 1,428 in 2010. After a strategic anti-crime programme was launched, the murder rate continued to fall after 2011, following the downward trend begun in 2010. In 2012, the Ministry of National Security reported a 30 percent decrease in murders. Nevertheless, in 2017 murders rose by 22% over the previous year.
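For readers comparing the raw counts above with the per-capita rates quoted earlier, a short sketch shows how a rate per 100,000 is derived. The 2009 population figure is an assumption, chosen to be consistent with the ~2.8 million 2018 estimate cited earlier in this article.

```python
# Deriving a murder rate per 100,000 from raw counts.
# Assumption (not from the source): a population of ~2.7 million
# in 2009, consistent with the 2018 estimate of ~2.8 million.
population_2009 = 2_700_000
murders_2009 = 1_682   # reported murders, from the text above

rate_per_100k = murders_2009 / population_2009 * 100_000
print(f"2009 rate: {rate_per_100k:.1f} per 100,000")
# ~62 per 100,000, matching the figure quoted for 2009
```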
Many Jamaicans are hostile towards LGBT and intersex people, and mob attacks against gay people have been reported. Numerous high-profile dancehall and ragga artists have produced songs featuring explicitly homophobic lyrics. Male homosexuality is illegal and punishable by prison time.
Christianity is the largest religion practised in Jamaica. About 70% of the population are Protestants; Roman Catholics are just 2%. According to the 2001 census, the country's largest Protestant denominations are the Church of God (24%), Seventh-day Adventist Church (11%), Pentecostal (10%), Baptist (7%), Anglican (4%), United Church (2%), Methodist (2%), Moravian (1%) and Plymouth Brethren (1%). Bedwardism is a form of Christianity native to the island, sometimes viewed as a separate faith. The Christian faith gained acceptance as British Christian abolitionists and Baptist missionaries joined educated former slaves in the struggle against slavery.
The Rastafari movement has 29,026 adherents, according to the 2011 census, with 25,325 Rastafarian males and 3,701 Rastafarian females. The faith originated in Jamaica in the 1930s and though rooted in Christianity it is heavily Afrocentric in its focus, revering figures such as the Jamaican black nationalist Marcus Garvey and Haile Selassie, the former Emperor of Ethiopia. Rastafari has since spread across the globe, especially to areas with large black or African diasporas.
Various faiths and traditional religious practices derived from Africa are practised on the island, notably Kumina, Convince, Myal and Obeah.
Other religions in Jamaica include Jehovah's Witnesses (2% of the population), the Bahá'í faith, which counts perhaps 8,000 adherents and 21 Local Spiritual Assemblies, Mormonism, Buddhism, and Hinduism. The Hindu festival of Diwali is celebrated yearly among the Indo-Jamaican community.
There is also a small population of about 200 Jews, who describe themselves as Liberal-Conservative. The first Jews in Jamaica trace their roots back to early 15th-century Spain and Portugal. Kahal Kadosh Shaare Shalom, also known as the United Congregation of Israelites, is a historic synagogue in the city of Kingston. Originally built in 1912, it is the official and only remaining Jewish place of worship on the island. The once-abundant Jewish population has voluntarily converted to Christianity over time. Shaare Shalom is one of the few synagogues in the world with sand-covered floors and is a popular tourist destination.
Other small groups include Muslims, who claim 5,000 adherents. The Muslim holidays of Ashura (known locally as Hussay or Hosay) and Eid have been celebrated throughout the island for hundreds of years. In the past, every plantation in each parish celebrated Hosay. Today it has been called an Indian carnival and is perhaps most well known in Clarendon where it is celebrated each August. People of all religions attend the event, showing mutual respect.
Though Jamaica is a small nation, its culture has a strong global presence. The musical genres reggae, ska, mento, rocksteady, dub, and, more recently, dancehall and ragga all originated in the island's vibrant, popular urban recording industry. These have themselves gone on to influence numerous other genres, such as punk rock (through reggae and ska), dub poetry, New Wave, two-tone, reggaeton, jungle, drum and bass, dubstep, grime and American rap music. Some rappers, such as The Notorious B.I.G., Busta Rhymes, and Heavy D, are of Jamaican descent.
Bob Marley is probably the best-known Jamaican musician; with his band The Wailers he had a string of hits in the 1960s and 1970s, popularising reggae internationally and going on to sell millions of records. Many other internationally known artists were born in Jamaica, including Millie Small, Lee "Scratch" Perry, Gregory Isaacs, Half Pint, Protoje, Peter Tosh, Bunny Wailer, Big Youth, Jimmy Cliff, Dennis Brown, Desmond Dekker, Beres Hammond, Beenie Man, Shaggy, Grace Jones, Shabba Ranks, Super Cat, Buju Banton, Sean Paul, I Wayne, Bounty Killer and many others. Bands that came from Jamaica include Black Uhuru, Third World Band, Inner Circle, Chalice Reggae Band, Culture, Fab Five and Morgan Heritage.
The journalist and author H. G. de Lisser (1878–1944) used his native country as the setting for his many novels. Born in Falmouth, Jamaica, de Lisser worked as a reporter for the "Jamaica Times" at a young age and in 1920 began publishing the magazine "Planters' Punch". "The White Witch of Rosehall" is one of his better-known novels. He was named Honorary President of the Jamaican Press Association; he worked throughout his professional career to promote the Jamaican sugar industry.
Roger Mais (1905–1955), a journalist, poet, and playwright, wrote many short stories, plays, and novels, including "The Hills Were Joyful Together" (1953), "Brother Man" (1954), and "Black Lightning" (1955).
Ian Fleming (1908 – 1964), who had a home in Jamaica where he spent considerable time, repeatedly used the island as a setting in his James Bond novels, including "Live and Let Die", "Doctor No", "For Your Eyes Only", "The Man with the Golden Gun", and "Octopussy and The Living Daylights". In addition, James Bond uses a Jamaica-based cover in "Casino Royale". So far, the only James Bond film adaptation to have been set in Jamaica is "Doctor No". Filming for the fictional island of San Monique in "Live and Let Die" took place in Jamaica.
The novelist Marlon James (born 1970) has published three novels: "John Crow's Devil" (2005), "The Book of Night Women" (2009) and "A Brief History of Seven Killings" (2014), winner of the 2015 Man Booker Prize.
Jamaica has a history in the film industry dating from the early 1960s. A look at delinquent youth in Jamaica is presented in the 1970s musical crime film "The Harder They Come", starring Jimmy Cliff as a frustrated (and psychopathic) reggae musician who descends into a murderous crime spree. Other notable Jamaican films include "Countryman", "Rockers", "Dancehall Queen", "One Love", "Shottas", "Out the Gate", "Third World Cop" and "Kingston Paradise". Jamaica is also often used as a filming location, such as the James Bond film "Dr. No" (1962), "Cocktail" (1988) starring Tom Cruise, and the 1993 Disney comedy "Cool Runnings", which is loosely based on the true story of Jamaica's first bobsled team trying to make it in the Winter Olympics.
The island is famous for its Jamaican jerk spice, curries, and rice and peas, which are integral to Jamaican cuisine. Jamaica is also home to Red Stripe beer and Jamaican Blue Mountain Coffee.
("From the Jamaica Information Service")
Sport is an integral part of national life in Jamaica and the island's athletes tend to perform to a standard well above what might ordinarily be expected of such a small country. While the most popular local sport is cricket, on the international stage Jamaicans have tended to do particularly well at track and field athletics.
Jamaica has produced some of the world's most famous cricketers, including George Headley, Courtney Walsh, and Michael Holding. The country was one of the venues of the 2007 Cricket World Cup, and the West Indies cricket team is one of 10 ICC full member teams that participate in international Test cricket. The Jamaica national cricket team competes regionally and also provides players for the West Indies team. Sabina Park is the only Test venue on the island, but the Greenfield Stadium is also used for cricket. Chris Gayle is the most renowned batsman from Jamaica currently representing the West Indies cricket team.
Since independence Jamaica has consistently produced world class athletes in track and field. In Jamaica involvement in athletics begins at a very young age and most high schools maintain rigorous athletics programs with their top athletes competing in national competitions (most notably the VMBS Girls and Boys Athletics Championships) and international meets (most notably the Penn Relays). In Jamaica it is not uncommon for young athletes to attain press coverage and national fame long before they arrive on the international athletics stage.
Over the past six decades Jamaica has produced dozens of world-class sprinters, including Olympic and World Champion Usain Bolt, world record holder in the men's 100 m (9.58 s) and 200 m (19.19 s). Other noteworthy Jamaican sprinters include Arthur Wint, the first Jamaican Olympic gold medalist; Donald Quarrie, Olympic champion and former 200 m world record holder; Elaine Thompson, double Olympic champion in the 100 m and 200 m at Rio 2016; Roy Anthony Bridge, part of the International Olympic Committee; Merlene Ottey; Delloreen Ennis-London; Shelly-Ann Fraser-Pryce, former World and two-time Olympic 100 m champion; Kerron Stewart; Aleen Bailey; Juliet Cuthbert; three-time Olympic gold medalist Veronica Campbell-Brown; Sherone Simpson; Brigitte Foster-Hylton; Yohan Blake; Herb McKenley; Olympic gold medalists George Rhoden and Deon Hemmings; and Asafa Powell, former 100 m world record holder, two-time 100 m Olympic finalist, and gold medalist in the men's 2008 Olympic 4 × 100 m relay. The American Olympic champion Sanya Richards-Ross was also born in Jamaica.
Jamaica has also produced several world class amateur and professional boxers including Trevor Berbick and Mike McCallum. First-generation Jamaican athletes have continued to make a significant impact on the sport internationally, especially in the United Kingdom where the list of top British boxers born in Jamaica or of Jamaican parents includes Lloyd Honeyghan, Chris Eubank, Audley Harrison, David Haye, Lennox Lewis and Frank Bruno, Donovan "Razor" Ruddock, Mike Tyson, and Floyd Mayweather Jr., whose maternal grandfather is Jamaican.
Association football and horse racing are other popular sports in Jamaica. The national football team qualified for the 1998 FIFA World Cup. Horse racing was Jamaica's first sport, brought in the 1700s by British immigrants to satisfy their longing for their favourite pastime back home. During slavery, Afro-Jamaican slaves were considered the best horse jockeys. Today, horse racing provides jobs for about 20,000 people, including horse breeders, groomers, and trainers. Several Jamaicans are known internationally for their success in horse racing, including Richard DePass, who once held the Guinness world record for the most wins in a day, Canadian awards winner George HoSang, and American award winners Charlie Hussey, Andrew Ramgeet, and Barrington Harvey. Additionally, hundreds of Jamaicans are employed in the United States, Canada, and the United Kingdom as exercise riders and groomers.
Race car driving is also a popular sport in Jamaica with several car racing tracks and racing associations across the country.
The Jamaica national bobsled team was once a serious contender in the Winter Olympics, beating many well-established teams. Chess and basketball are widely played in Jamaica and are supported by the Jamaica Chess Federation (JCF) and the Jamaica Basketball Federation (JBF), respectively. Netball is also very popular on the island, with the Jamaica national netball team called The Sunshine Girls consistently ranking in the top five in the world.
Rugby league has been played in Jamaica since 2006.
The Jamaica national rugby league team is made up of players who play in Jamaica and in UK-based professional and semi-professional clubs (notably in the Super League and Championship). In November 2018, the Jamaican rugby league team qualified for the Rugby League World Cup for the first time ever, after defeating the USA and Canada. Jamaica will play in the 2021 Rugby League World Cup in England.
According to ESPN, the highest paid Jamaican professional athlete in 2011 was Justin Masterson, starting pitcher for the baseball team Cleveland Indians in the United States.
The emancipation of the slaves heralded the establishment of an education system for the masses. Prior to emancipation there were few schools for educating locals and many sent their children off to England to access quality education. After emancipation the West Indian Commission granted a sum of money to establish Elementary Schools, now known as "All Age Schools". Most of these schools were established by the churches. This was the genesis of the modern Jamaican school system.
Presently, several categories of schools exist.
Additionally, there are many community and teacher training colleges.
Education is free from the early childhood to secondary levels. Opportunities also exist for those who cannot afford further education in the vocational arena, through the Human Employment and Resource Training–National Training Agency (HEART Trust-NTA) programme, which is open to the entire working-age population, and through an extensive scholarship network for the various universities.
Students are taught Spanish in school from the primary level upwards; about 40–45% of educated people in Jamaica know some form of Spanish.
Jamaica is a mixed economy with both state enterprises and private sector businesses. Major sectors of the Jamaican economy include agriculture, mining, manufacturing, tourism, petroleum refining, and financial and insurance services. Tourism and mining are the leading earners of foreign exchange, and services, led by tourism, account for roughly half of the economy's income. An estimated 4.3 million foreign tourists visit Jamaica every year. According to the World Bank, Jamaica is an upper-middle income country that, like its Caribbean neighbours, is vulnerable to the effects of climate change, flooding, and hurricanes. In 2018, Jamaica represented the CARICOM Caribbean Community at the G20 and the G7 annual meetings. In 2019 Jamaica reported its lowest unemployment rate in 50 years.
Supported by multilateral financial institutions, Jamaica has, since the early 1980s, sought to implement structural reforms aimed at fostering private sector activity and increasing the role of market forces in resource allocation. Since 1991, the government has followed a programme of economic liberalisation and stabilisation by removing exchange controls, floating the exchange rate, cutting tariffs, stabilising the Jamaican dollar, reducing inflation and removing restrictions on foreign investment. Emphasis has been placed on maintaining strict fiscal discipline, greater openness to trade and financial flows, market liberalisation and reduction in the size of government. During this period, a large share of the economy was returned to private sector ownership through divestment and privatisation programmes. The free-trade zones at Kingston, Montego Bay and Spanish Town allow duty-free importation, tax-free profits, and free repatriation of export earnings.
Jamaica's economy grew strongly after the years of independence, but then stagnated in the 1980s, due to heavy falls in the price of bauxite and fluctuations in the price of agricultural exports. The financial sector was troubled in 1994, with many banks and insurance companies suffering heavy losses and liquidity problems. According to the Commonwealth Secretariat, "The government set up the Financial Sector Adjustment Company (Finsac) in January 1997 to assist these banks and companies, providing funds in return for equity, and acquired substantial holdings in banks and insurance companies and related companies... From 2001, once it had restored these banks and companies to financial health, Finsac divested them." However, the intervention exacerbated the problem and left the country with a large external debt. The Government of Jamaica remains committed to lowering inflation, with a long-term objective of bringing it in line with that of its major trading partners.
In 1996 and 1997 there was a decrease in GDP, largely due to significant problems in the financial sector and, in 1997, a severe island-wide drought (the worst in 70 years) and a hurricane that drastically reduced agricultural production. In 1997 and 1998 the nominal public sector deficit reached a high of about 8 percent of GDP, before easing to 4½ percent of GDP in 1999 and 2000. The economy in 1997 was marked by low levels of import growth, high levels of private capital inflows and relative stability in the foreign exchange market.
Recent economic performance shows the Jamaican economy is recovering. Agricultural production, an important engine of growth, increased 5.5% in 2001 compared to the corresponding period in 2000, signalling the first positive growth rate in the sector since January 1997. In 2018, Jamaica reported a 7.9% increase in corn, a 6.1% increase in plantains, a 10.4% increase in bananas, a 2.2% increase in pineapples, a 13.3% increase in dasheen, a 24.9% increase in coconuts, and a 10.6% increase in whole milk production. Bauxite and alumina production increased 5.5% from January to December 1998, compared to the corresponding period in 1997. January's bauxite production recorded a 7.1% increase relative to January 1998, and continued expansion of alumina production through 2009 is planned by Alcoa. Jamaica is the fifth-largest exporter of bauxite in the world, after Australia, China, Brazil and Guinea. The country also exports limestone, of which it holds large deposits, and the government is currently implementing plans to increase its extraction.
A Canadian company, Carube Copper Corp, has found and confirmed, "...the existence of at least seven significant Cu/Au porphyry systems (in St. Catherine)." They have estimated that, "The porphyry distribution found at Bellas Gate is similar to that found in the Northparkes mining district of New South Wales, Australia (which was) sold to China in 2013 for US$820 million." Carube noted that Jamaica's geology, "... is similar to that of Chile, Argentina and the Dominican Republic — all productive mining jurisdictions." Mining on the sites began in 2017.
Tourism, which is the largest foreign exchange earner, showed improvement as well. In 1999 total visitor arrivals were 2 million, an increase of 100,000 from the previous year. Since 2017, Jamaica's tourism has risen sharply, reaching an average of 4.3 million tourists per year. Jamaica's largest tourist markets are North America, South America, and Europe. In 2017, Jamaica recorded a 91.3% increase in stopover visitors from Southern and Western Europe (and a 41% increase in stopover arrivals from January to September 2017 over the same period from the previous year), with Germany, Portugal and Spain registering the highest percentage gains. In 2018, Jamaica won several World Travel Awards in Portugal, winning the "Chairman's Award for Global Tourism Innovation", "Best Tourist Board in the Caribbean", "Best Honeymoon Destination", "Best Culinary Destination", "World's Leading Beach Destination" and "World's Leading Cruise Destination". Two months later, at the Travvy Tourism Awards held in New York City, Jamaica's Tourism Minister Edmund Bartlett received the inaugural Chairman's Award for "Global Tourism Innovation for the Development of the Global Tourism Resilience and Crisis Management Centre (GTRCM)". Bartlett also won the Pacific Travel Writers Association's award in Germany as the "2018 Best Tourism Minister of the Year".
Petrojam, Jamaica's national and only petroleum refinery, is co-owned by the Government of Venezuela. Petrojam "operates a 35,000 barrel per day hydro-skimming refinery, to produce Automotive Diesel Oil; Heavy Fuel Oil; Kerosene/Jet Fuel, Liquid Petroleum Gas (LPG), Asphalt and Gasoline." Customers include the power industry, aircraft refuellers, and local marketing companies. On 20 February 2019, the Jamaican Government voted to retake ownership of Venezuela's 49% share.
Jamaica's agricultural exports are sugar, bananas, cocoa, coconut, molasses, oranges, limes, grapefruit, rum, yams, allspice (of which it is the world's largest and "most exceptional quality" exporter), and Blue Mountain Coffee, which is considered a world-renowned gourmet brand.
Jamaica has a wide variety of industrial and commercial activities. The aviation industry is able to perform most routine aircraft maintenance, except for heavy structural repairs. There is a considerable amount of technical support for transport and agricultural aviation. Jamaica has a considerable amount of industrial engineering, light manufacturing, including metal fabrication, metal roofing, and furniture manufacturing. Food and beverage processing, glassware manufacturing, software and data processing, printing and publishing, insurance underwriting, music and recording, and advanced education activities can be found in the larger urban areas. The Jamaican construction industry is entirely self-sufficient, with professional technical standards and guidance.
Since the first quarter of 2006, the economy of Jamaica has undergone a period of strong growth. With inflation for the 2006 calendar year down to 6.0% and unemployment down to 8.9%, nominal GDP grew by an unprecedented 2.9%. An investment programme in island transportation and utility infrastructure, and gains in the tourism, mining, and service sectors, all contributed to this figure. All projections for 2007 showed an even higher potential for economic growth, with all estimates over 3.0% and hampered only by urban crime and public policies.
In 2006, Jamaica became part of the CARICOM Single Market and Economy (CSME) as one of the pioneering members.
The global economic downturn had a significant impact on the Jamaican economy for the years 2007 to 2009, resulting in negative economic growth. The government implemented a new debt management initiative, the Jamaica Debt Exchange (JDX), on 14 January 2010. The initiative saw holders of Government of Jamaica (GOJ) bonds exchange their high-interest instruments for bonds with lower yields and longer maturities. The offer was taken up by over 95% of local financial institutions and was deemed a success by the government.
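The mechanics of such a debt exchange are easy to illustrate. The sketch below uses entirely hypothetical figures (the text gives no actual JDX coupon rates or maturities): swapping a bond's coupon down while extending its maturity lowers the government's annual interest bill at the cost of a longer repayment horizon.

```python
# Hypothetical illustration of a JDX-style bond exchange.
# All figures are invented for illustration; the actual JDX terms
# are not stated in the text above.
principal = 100_000_000           # face value of bonds exchanged (J$)
old_coupon, old_years = 0.17, 3   # high-yield, short-maturity bond
new_coupon, new_years = 0.12, 7   # lower-yield, longer-maturity bond

old_annual_interest = principal * old_coupon
new_annual_interest = principal * new_coupon

print(f"Annual interest before: J${old_annual_interest:,.0f}")
print(f"Annual interest after:  J${new_annual_interest:,.0f}")
print(f"Annual saving:          J${old_annual_interest - new_annual_interest:,.0f}")
print(f"Maturity extends from {old_years} to {new_years} years")
```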
Owing to the success of the JDX programme, the Bruce Golding-led government successfully entered into a borrowing arrangement with the IMF on 4 February 2010 for the amount of US$1.27 billion. The loan agreement was for a period of three years.
In April 2014, the Governments of Jamaica and China signed the preliminary agreements for the first phase of the Jamaican Logistics Hub (JLH) – the initiative that aims to position Kingston as the fourth node in the global logistics chain, joining Rotterdam, Dubai and Singapore, and serving the Americas. The project, when completed, is expected to provide many jobs for Jamaicans, economic zones for multinational companies, and much-needed economic growth to alleviate the country's heavy debt-to-GDP ratio. Strict adherence to the IMF's refinancing programme and preparations for the JLH have favourably affected Jamaica's credit rating and outlook from the three biggest rating agencies. In 2018, Moody's and Standard & Poor's upgraded Jamaica's credit-rating outlooks to "stable" and "positive" respectively.
The transport infrastructure in Jamaica consists of roadways, railways and air transport, with roadways forming the backbone of the island's internal transport system.
The Jamaican road network is extensive, and much of it is paved. The Jamaican Government has, since the late 1990s and in cooperation with private investors, embarked on a campaign of infrastructural improvement projects, one of which includes the creation of a system of freeways, the first such access-controlled roadways of their kind on the island, connecting the main population centres. This project has so far seen the completion of the first stretches of freeway.
Railways in Jamaica no longer enjoy the prominent position they once did, having been largely replaced by roadways as the primary means of transport. Only a small portion of the island's railway network remains in operation, currently used to transport bauxite. On 13 April 2011, a limited passenger service was resumed between May Pen, Spanish Town and Linstead.
There are three international airports in Jamaica with modern terminals, long runways, and the navigational equipment required to accommodate the large jet aircraft used in modern air travel: Norman Manley International Airport in Kingston; Ian Fleming International Airport in Boscobel, Saint Mary Parish; and the island's largest and busiest airport, Sir Donald Sangster International Airport in the resort city of Montego Bay. Manley and Sangster International airports are home to the country's national airline, Air Jamaica. In addition there are local commuter airports at Tinson Pen (Kingston), Port Antonio, and Negril, which cater to internal flights only. Many other small, rural centres are served by private airstrips on sugar estates or bauxite mines.
Owing to its location in the Caribbean Sea in the shipping lane to the Panama Canal, and its relative proximity to large markets in North America and emerging markets in Latin America, Jamaica receives a high volume of container-shipping traffic. The container terminal at the Port of Kingston has undergone a large expansion in capacity in recent years to handle growth both already realised and projected in coming years. Montego Freeport in Montego Bay also handles a variety of cargo, similar to (though more limited than) the Port of Kingston, mainly agricultural products.
There are several other ports positioned around the island, including Port Esquivel in St. Catherine (WINDALCO), Rocky Point in Clarendon, Port Kaiser in St. Elizabeth, Port Rhoades in Discovery Bay, Reynolds Pier in Ocho Rios, and Boundbrook Port in Port Antonio.
To aid the navigation of shipping, Jamaica operates nine lighthouses. They are maintained by the Port Authority of Jamaica, an agency of the Ministry of Transport and Works.
Jamaica depends on petroleum imports to satisfy its national energy needs. Many test sites have been explored for oil, but no commercially viable quantities have been found. The most convenient sources of imported oil and motor fuels (diesel, gasoline, and jet fuel) are from Mexico and Venezuela.
Jamaica's electrical power is produced by diesel (bunker oil) generators located in Old Harbour. Other smaller power stations (most owned by the Jamaica Public Service Company, the island's electricity provider) support the island's electrical grid including the Hunts Bay Power Station, the Bogue Power Station, the Rockfort Power Station and small hydroelectric plants on the White River, Rio Bueno, Morant River, Black River (Maggotty) and Roaring River. A wind farm, owned by the Petroleum Corporation of Jamaica, was established at Wigton, Manchester.
Jamaica has successfully operated a SLOWPOKE-2 nuclear reactor of 20 kW capacity since the early 1980s, but there are no plans to expand nuclear power at present.
Jamaica imports oil energy products daily, including asphalt and lubrication products. Just 20% of imported fuels are used for road transportation, the rest being used by the bauxite industry, electricity generation, and aviation. 30,000 barrels per day of crude imports are processed into various motor fuels and asphalt by the Petrojam Refinery in Kingston.
Jamaica produces enormous quantities of drinking alcohol (at least 5% water content), most of which appears to be consumed as beverages, and none used as motor fuel. Facilities exist to refine hydrous ethanol feedstock into anhydrous ethanol (0% water content), but as of 2007, the process appeared to be uneconomic and the production plant was idle.
Jamaica has a fully digital telephone communication system with a mobile penetration of over 95%.
The country's two mobile operators – FLOW Jamaica (formerly LIME, bMobile and Cable and Wireless Jamaica) and Digicel Jamaica – have spent millions on network upgrades and expansion. The newest operator, Digicel, was granted a licence in 2001 to operate mobile services in the newly liberalised telecom market that had once been the sole domain of the incumbent FLOW (then Cable and Wireless Jamaica) monopoly. Digicel opted for the more widely used GSM wireless system, while a past operator, Oceanic (which became Claro Jamaica and later merged with Digicel Jamaica in 2011), opted for the CDMA standard. FLOW (formerly "LIME", before the Columbus Communications merger), which had begun with the TDMA standard, upgraded to GSM in 2002, decommissioned TDMA in 2006, and used GSM exclusively until it launched its 3G network in 2009. Both operators currently provide islandwide coverage with HSPA+ (3G) technology. Currently, only Digicel offers LTE to its customers, and only in Kingston and Montego Bay; FLOW Jamaica has committed to launching LTE in those same cities in short order.
A new entrant to the Jamaican communications market, Flow Jamaica, laid a new submarine cable connecting Jamaica to the United States. This new cable increases the total number of submarine cables connecting Jamaica to the rest of the world to four. Cable and Wireless Communications (parent company of LIME) acquired the company in late 2014 and replaced their brand LIME with FLOW. FLOW Jamaica currently has the most broadband and cable subscribers on the island and also has 1 million mobile subscribers, second to Digicel (which had, at its peak, over 2 million mobile subscriptions on its network).
Digicel entered the broadband market in 2010 by offering WiMAX broadband, capable of up to 6 Mbit/s per subscriber. To further their broadband share post-LIME/FLOW merger in 2014, the company introduced a new broadband service called Digicel Play, which is Jamaica's second FTTH offering (after LIME's deployment in selected communities in 2011). It is currently only available in the parishes of Kingston, Portmore and St. Andrew. It offers speeds of up to 200 Mbit/s down, 100 Mbit/s up via a pure fibre optic network. Digicel's competitor, FLOW Jamaica, has a network consisting of ADSL, Coaxial and Fibre to the Home (inherited from LIME) and only offers speeds up to 100 Mbit/s. FLOW has committed to expanding its Fibre offering to more areas in order to combat Digicel's entrance into the market.
In January 2016, it was announced that the Office of Utilities Regulation (OUR), the Ministry of Science, Technology, Energy and Mining (MSTEM) and the Spectrum Management Authority (SMA) had approved the licensing of another mobile operator. The identity of this entrant was ascertained on 20 May 2016, when the Jamaican Government named the new carrier as Symbiote Investments Limited, operating under the name Caricel. The company will focus on 4G LTE data offerings, going live first in the Kingston Metropolitan Area and expanding to the rest of Jamaica thereafter.
https://en.wikipedia.org/wiki?curid=15660
History of Jamaica
The Caribbean island of Jamaica was inhabited by Arawak tribes prior to the arrival of Columbus in 1494. Early inhabitants named the land "Xaymaca", meaning "Land of wood and water". The Spanish enslaved the Arawak, who were so ravaged by conflict with the Europeans and by foreign diseases that nearly the entire native population was extinct by 1600. The Spanish also transported hundreds of West African people to the island.
In 1655, the English invaded Jamaica, defeating the Spanish colonists. African slaves took advantage of the political turmoil and escaped to the island's interior, forming independent communities (known as the Maroons). Meanwhile, on the coast, the English built the settlement of Port Royal, which became a base of operations for pirates and privateers, including Captain Henry Morgan.
In the 19th century, sugar cane replaced piracy as British Jamaica's main source of income. The sugar industry was labour-intensive, and the British brought hundreds of thousands of enslaved Africans to Jamaica. By 1850 the black Jamaican population outnumbered the white population by a ratio of twenty to one. Enslaved Jamaicans mounted over a dozen major uprisings during the 18th century, including Tacky's revolt in 1760. There were also periodic skirmishes between the British and the mountain communities, culminating in the First Maroon War of the 1730s and the Second Maroon War of 1795.
The first inhabitants of Jamaica probably came from islands to the east in two waves of migration. About 600 CE the culture known as the "Redware people" arrived; little is known of them beyond the red pottery they left. Alligator Pond in Manchester Parish and Little River in St. Ann Parish are among the earliest known sites of these Ostionoid people, who lived near the coast and extensively hunted turtles and fish.
Around 800 CE, the Arawak arrived, eventually settling throughout the island. Living in villages ruled by tribal chiefs called caciques, they sustained themselves on fishing and the cultivation of maize and cassava. At the height of their civilization, their population is estimated to have numbered as many as 60,000.
The Arawak brought from South America a system of raising yuca known as "conuco." To add nutrients to the soil, the Arawak burned local bushes and trees and heaped the ash into large mounds, into which they then planted yuca cuttings. Most Arawak lived in large circular buildings ("bohios"), constructed with wooden poles, woven straw, and palm leaves. The Arawak spoke an Arawakan language and did not have writing. Some of the words used by them, such as "barbacoa" ("barbecue"), "hamaca" ("hammock"), "kanoa" ("canoe"), "tabaco" ("tobacco"), "yuca", "batata" ("sweet potato"), and "juracán" ("hurricane"), have been incorporated into Spanish and English.
Christopher Columbus is believed to be the first European to reach Jamaica. He landed on the island on 5 May 1494, during his second voyage to the Americas. Columbus returned to Jamaica during his fourth voyage to the Americas. He had been sailing around the Caribbean nearly a year when a storm beached his ships in St. Ann's Bay, Jamaica, on 25 June 1503. For a year Columbus and his men remained stranded on the island, finally departing in June 1504.
The Spanish crown granted the island to the Columbus family, but for decades it was something of a backwater, valued chiefly as a supply base for food and animal hides. In 1509 Juan de Esquivel founded the first permanent European settlement, the town of Sevilla la Nueva (New Seville), on the north coast. A decade later, Friar Bartolomé de las Casas wrote to Spanish authorities about Esquivel's conduct during the Higüey massacre of 1503.
In 1534 the capital was moved to Villa de la Vega (later Santiago de la Vega), now called Spanish Town. This settlement served as the capital of both Spanish and English Jamaica, from its founding in 1534 until 1872, after which the capital was moved to Kingston.
The Spanish enslaved many of the Arawak; some escaped, but most died from European diseases and overwork. The Spaniards also introduced the first African slaves. By the early 17th century, when virtually no Taino remained in the region, the population of the island was about 3,000, including a small number of African slaves. Disappointed in the lack of gold on the isle, the Spanish mainly used Jamaica as a military base to supply colonising efforts in the mainland Americas.
The Spanish colonists did not bring women in the first expeditions and took Taíno women for their common-law wives, resulting in mestizo children. Sexual violence against the Taíno women by the Spanish was also common.
Although the Taino referred to the island as "Xaymaca", the Spanish gradually changed the name to "Jamaica". In the so-called Admiral's map of 1507 the island was labeled as "Jamaiqua" and in Peter Martyr's work "Decades" of 1511, he referred to it as both "Jamaica" and "Jamica".
In late 1654, English leader Oliver Cromwell launched the "Western Design" armada against Spain's colonies in the Caribbean. In April 1655, General Robert Venables led the armada in an attack on Spain's fort at Santo Domingo, Hispaniola. After the Spanish repulsed this poorly executed attack, the English force sailed for Jamaica, the only Spanish West Indies island without new defensive works. In May 1655, around 7,000 English soldiers landed near Jamaica's capital, Spanish Town, and soon overwhelmed the small number of Spanish troops (at the time, Jamaica's entire population numbered only around 2,500). Spain never recaptured Jamaica, losing the Battle of Ocho Rios in 1657 and the Battle of Rio Nuevo in 1658. The turning point came in 1660, when some Spanish runaway slaves, who became Jamaican Maroons, switched sides from the Spanish to the English. For England, Jamaica was to be the 'dagger pointed at the heart of the Spanish Empire,' although in fact it was then a possession of little economic value.
Cromwell increased the island's European population by sending indentured servants and prisoners to Jamaica. Due to the wars in Ireland at this time, two-thirds of this 17th-century European population was Irish. Tropical diseases nevertheless kept the number of Europeans under 10,000 until about 1740. Although the African slave population in the 1670s and 1680s never exceeded 10,000, by the end of the 17th century imports of slaves had increased the black population to at least five times the number of whites. Thereafter, Jamaica's African population did not increase significantly in number until well into the 18th century, in part because ships coming from the west coast of Africa preferred to unload at the islands of the Eastern Caribbean. At the beginning of the 18th century, the number of slaves in Jamaica did not exceed 45,000, but by 1800 it had increased to over 300,000.
Beginning with the Stuart monarchy's appointment of a civil governor to Jamaica in 1661, political patterns were established that lasted well into the 20th century. The second governor, Lord Windsor, brought with him in 1662 a proclamation from the king giving Jamaica's non-slave populace the rights of English citizens, including the right to make their own laws. Although he spent only ten weeks in Jamaica, Lord Windsor laid the foundations of a governing system that was to last for two centuries: a crown-appointed governor acting with the advice of a nominated council in the legislature. The legislature consisted of the governor and an elected but highly unrepresentative House of Assembly. For years, the planter-dominated Assembly was in continual conflict with the various governors and the Stuart kings; there were also contentious factions within the assembly itself. For much of the 1670s and 1680s, Charles II and James II and the assembly feuded over such matters as the purchase of slaves from ships not run by the royal English trading company. The last Stuart governor, Christopher Monck, 2nd Duke of Albemarle, who was more interested in treasure hunting than in planting, turned the planter oligarchy out of office. After the duke's death in 1688, the planters, who had fled Jamaica to London, succeeded in lobbying James II to order a return to the pre-Albemarle political arrangement (the local control of Jamaican planters belonging to the assembly).
Following the 1655 conquest, Spain repeatedly attempted to recapture Jamaica. In response, in 1657, Governor Edward D'Oyley invited the Brethren of the Coast to come to Port Royal and make it their home port. The Brethren was made up of a group of pirates who were descendants of cattle-hunting "boucaniers" (later Anglicised to buccaneers), who had turned to piracy after being robbed by the Spanish (and subsequently thrown out of Hispaniola). These pirates concentrated their attacks on Spanish shipping, whose interests were considered the major threat to the town. These pirates later became legal English privateers who were given letters of marque by Jamaica's governor. Around the same time that pirates were invited to Port Royal, England launched a series of attacks against Spanish shipping vessels and coastal towns. By sending the newly appointed privateers after Spanish ships and settlements, England had successfully set up a system of defense for Port Royal. Jamaica became a haven of privateers, buccaneers, and occasionally outright pirates: Christopher Myngs, Edward Mansvelt, and most famously, Henry Morgan.
England gained formal possession of Jamaica from Spain in 1670 through the Treaty of Madrid. Removing the pressing need for constant defense against Spanish attack, this change served as an incentive to planting. This settlement also improved the supply of slaves and resulted in more protection, including military support, for the planters against foreign competition. As a result, the sugar monoculture and slave-worked plantation society spread across Jamaica throughout the 18th century, decreasing Jamaica's dependence on privateers for protection and funds.
However, the English colonial authorities continued to have difficulties suppressing the Spanish Maroons, who made their homes in the mountainous interior, and mounted periodic raids on estates and towns, such as Spanish Town. The Karmahaly Maroons continued to stay in the forested mountains, and periodically fought the English.
Another blow to Jamaica's partnership with privateers was the violent earthquake which destroyed much of Port Royal on 7 June 1692. Two-thirds of the town sank into the sea immediately after the main shock. After the earthquake, the town was partially rebuilt but the colonial government was relocated to Spanish Town, which had been the capital under Spanish rule. Port Royal was further devastated by a fire in 1703 and a hurricane in 1722. Most of the sea trade moved to Kingston. By the late 18th century, Port Royal was largely abandoned.
In the mid-17th century, sugarcane had been brought into the British West Indies by the Dutch, from Brazil. Upon landing in Jamaica and other islands, they quickly urged local growers to change their main crops from cotton and tobacco to sugarcane. With depressed prices of cotton and tobacco, due mainly to stiff competition from the North American colonies, the farmers switched, leading to a boom in the Caribbean economies. Sugar was quickly snapped up by the British, who used it in cakes and to sweeten tea. In the 18th century, sugar replaced piracy as Jamaica's main source of income. The sugar industry was labour-intensive, and the British brought hundreds of thousands of enslaved Africans to Jamaica. By 1832, the median-size plantation in Jamaica had about 150 slaves, and nearly one of every four bondsmen lived on units that had at least 250 slaves. In "The Book of Night Women", author Marlon James indicates that the ratio of slave owners to enslaved Africans was 1:33. James also depicts atrocities that slave owners subjected slaves to, and violent resistance from the slaves; numerous slaves died in pursuit of freedom. After slavery was abolished in 1834, sugarcane plantations used a variety of forms of labour, including workers imported from India under contracts of indenture.
When the British captured Jamaica in 1655, the Spanish colonists fled, leaving a large number of African slaves. These former Spanish slaves created three "Palenques", or settlements. Former slaves organised under the leadership of Juan de Serras allied with the Spanish guerrillas on the western end of the Cockpit Country, while those under Juan de Bolas established themselves in modern-day Clarendon Parish and served as a "black militia" for the English. The third chose to join those who had previously escaped from the Spanish to live and intermarry with the Arawak people. Each group of Maroons established distinct independent communities in the mountainous interior of Jamaica. They survived by subsistence farming and periodic raids of plantations. Over time, the Maroons came to control large areas of the Jamaican interior. Early in the 18th century, the Maroons took a heavy toll on the British troops and local militia sent against them in the interior, in what came to be known as the First Maroon War.
The First Maroon War came to an end with a 1739–40 agreement between the Maroons and the British government. The Maroons were to remain in their five main towns (Accompong; Cudjoe's Town, also called Trelawny Town; Nanny Town, later known as Moore Town; Scott's Hall; and Charles Town), living under their own rulers and a British supervisor. In exchange, they agreed not to harbour new runaway slaves, but rather to help catch them. This last clause in the treaty naturally caused a split between the Maroons and the rest of the black population, although from time to time runaways from the plantations still found their way into Maroon settlements. Another provision of the agreement was that the Maroons would serve to protect the island from invaders, as the Maroons were revered by the British as skilled warriors. The person responsible for the compromise with the British was the Leeward Maroon leader, Cudjoe, a short, almost dwarf-like man who for years fought skillfully and bravely to maintain his people's independence. As he grew older, however, Cudjoe became increasingly disillusioned. He quarrelled with his lieutenants and with other Maroon groups. He felt that the only hope for the future was honourable peace with the enemy, which was just what the British were thinking. The 1739 treaty should be seen in this light. A year later, the even more rebellious Windward Maroons of Trelawny Town also agreed to sign a treaty under pressure from both white Jamaicans and the Leeward Maroons.
In May 1760, Tacky, a slave overseer on the Frontier plantation in Saint Mary Parish, led a group of enslaved Africans in taking over the Frontier and Trinity plantations while killing their enslavers. They then marched to the storeroom at Fort Haldane, where the munitions to defend the town of Port Maria were kept. After killing the storekeeper, Tacky and his men stole nearly 4 barrels of gunpowder and 40 firearms with shot, before marching on to overrun the plantations at Heywood Hall and Esher. By dawn, hundreds of other slaves had joined Tacky and his followers. At Ballard's Valley, the rebels stopped to rejoice in their success. One slave from Esher decided to slip away and sound the alarm. Obeahmen (Caribbean witch doctors) quickly circulated around the camp dispensing a powder that they claimed would protect the men from injury in battle and loudly proclaimed that an Obeahman could not be killed. Confidence was high. Soon there were 70 to 80 mounted militia on their way along with some Maroons from Scott's Hall, who were bound by treaty to suppress such rebellions. When the militia learned of the Obeahman's boast of not being able to be killed, an Obeahman was captured, killed and hung with his mask, ornaments of teeth and bone and feather trimmings at a prominent place visible from the encampment of rebels. Many of the rebels, confidence shaken, returned to their plantations. Tacky and 25 or so men decided to fight on. Tacky and his men went running through the woods being chased by the Maroons and their legendary marksman, Davy the Maroon. While running at full speed, Davy shot Tacky and cut off his head as evidence of his feat, for which he would be richly rewarded. Tacky's head was later displayed on a pole in Spanish Town until a follower took it down in the middle of the night. The rest of Tacky's men were found in a cave near Tacky Falls, having committed suicide rather than going back to slavery.
In 1795, the Second Maroon War was instigated when two Maroons were flogged by a black slave for allegedly stealing two pigs. When six Maroon leaders came to the British to present their grievances, the British took them as prisoners. This sparked an eight-month conflict, spurred by the fact that the Maroons felt they were being mistreated under the terms of Cudjoe's Treaty of 1739, which had ended the First Maroon War. The fighting settled into a bloody five-month stalemate: the 5,000 British troops and militia outnumbered the Maroons ten to one, but the mountainous and forested topography of Jamaica proved ideal for guerrilla warfare. The Maroons surrendered in December 1795. A treaty signed that December between Major General George Walpole and the Maroon leaders established that the Maroons would beg on their knees for the King's forgiveness, return all runaway slaves, and be relocated elsewhere in Jamaica. The governor of Jamaica ratified the treaty but gave the Maroons only three days to present themselves to beg forgiveness, on 1 January 1796. Suspicious of British intentions, most of the Maroons did not surrender until mid-March. The British used this contrived breach of the treaty as a pretext to deport the entire population of Trelawny Town to Nova Scotia. After a few years, the Maroons were deported again, to the new British settlement of Sierra Leone in West Africa.
In 1831, the enslaved Baptist preacher Samuel Sharpe led a strike among enslaved workers demanding more freedom and a working wage of "half the going wage rate." Upon refusal of their demands, the strike escalated into a full rebellion, in part because Sharpe had also made military preparations with a rebel military group known as the Black Regiment, led by a slave known as Colonel Johnson of Retrieve Estate, about 150 strong with 50 guns among them. Colonel Johnson's Black Regiment clashed with a local militia led by Colonel Grignon at Old Montpelier on December 28. The militia retreated to Montego Bay while the Black Regiment advanced through estates in the hills, inviting more slaves to join while burning houses, fields and other properties, setting off a trail of fires through the Great River Valley from Westmoreland and St. Elizabeth to St. James.
The Baptist War, as it became known, was the largest slave uprising in the British West Indies, lasting ten days and mobilising as many as 60,000 of Jamaica's 300,000 slaves. The rebellion was suppressed by British forces under the control of Sir Willoughby Cotton. The reaction of the Jamaican government and plantocracy was far more brutal than the rebellion itself. Approximately five hundred slaves were killed in total: 207 during the revolt, and between 310 and 340 afterwards through "various forms of judicial executions", at times for quite minor offences (one recorded execution indicates the crime being the theft of a pig; another, a cow). An 1853 account by Henry Bleby described how three or four simultaneous executions were commonly observed; bodies would be allowed to pile up until workhouse slaves carted them away at night and buried them in mass graves outside town. The brutality of the plantocracy during the revolt is thought to have accelerated the process of emancipation, with initial measures beginning in 1833.
Because of the loss of property and life in the 1831 Baptist War rebellion, the British Parliament held two inquiries. Their reports on conditions contributed greatly to the abolition movement and passage of the 1833 law to abolish slavery as of August 1, 1834, throughout the British Empire. The Jamaican slaves were bound (indentured) to their former owners' service, albeit with a guarantee of rights, until 1838 under what was called the "Apprenticeship System".
With the abolition of the slave trade in 1808 and of slavery itself in 1834, however, the island's sugar- and slave-based economy faltered. The period after emancipation in 1834 was initially marked by a conflict between the plantocracy and elements in the Colonial Office over the extent to which individual freedom should be coupled with political participation for blacks. In 1840 the assembly changed the voting qualifications in a way that enabled a majority of blacks and people of mixed race (browns or mulattos) to vote. But neither the change in the political system nor the abolition of slavery altered the planters' chief interest, which lay in the continued profitability of their estates, and they continued to dominate the elitist assembly. Nevertheless, at the end of the 19th century and in the early years of the 20th century, the crown began to allow some Jamaicans – mostly local merchants, urban professionals, and artisans – into the appointed councils.
Tensions resulted in the October 1865 Morant Bay rebellion led by Paul Bogle. The rebellion was sparked on 7 October, when a black man was put on trial and imprisoned for allegedly trespassing on a long-abandoned plantation. During the proceedings, James Geoghegon, a black spectator, disrupted the trial, and in the police's attempts to seize him and remove him from the courthouse, a fight broke out between the police and other spectators. While pursuing Geoghegon, two policemen were beaten with sticks and stones. The following Monday arrest warrants were issued for several men for rioting, resisting arrest, and assaulting the police. Among them was the Baptist preacher Paul Bogle. A few days later, on 11 October, Bogle marched with a group of protesters to Morant Bay. When the group arrived at the courthouse they were met by a small and inexperienced volunteer militia. The crowd began pelting the militia with rocks and sticks, and the militia opened fire on the group, killing seven black protesters before retreating.
Governor John Eyre sent government troops, under Brigadier-General Alexander Nelson, to hunt down the poorly armed rebels and bring Paul Bogle back to Morant Bay for trial. The troops met with no organised resistance, but regardless they killed blacks indiscriminately, most of whom had not been involved in the riot or rebellion: according to one soldier, "we slaughtered all before us… man or woman or child". In the end, 439 black Jamaicans were killed directly by soldiers, and 354 more (including Paul Bogle) were arrested and later executed, some without proper trials. Paul Bogle was executed "either the same evening he was tried or the next morning." Other punishments included the flogging of over 600 men and women (including some pregnant women) and long prison sentences, and thousands of homes belonging to black Jamaicans were burned down without any justifiable reason.
George William Gordon, a Jamaican businessman and politician who had been critical of Governor John Eyre and his policies, was later arrested by Eyre, who believed Gordon had been behind the rebellion. Despite having very little to do with it, Gordon was eventually executed. Though he was arrested in Kingston, he was transferred by Eyre to Morant Bay, where he could be tried under martial law. Gordon's trial and execution under martial law raised constitutional issues back in Britain, where concerns emerged about whether British dependencies should be ruled under the government of law or through military licence. The speedy trial saw Gordon hanged on 23 October, just two days after his trial had begun. He and William Bogle, Paul's brother, "were both tried together, and executed at the same time."
During most of the 18th century, a monocrop economy based on sugarcane production for export flourished. In the last quarter of the century, however, the Jamaican sugar economy declined as famines, hurricanes, colonial wars, and wars of independence disrupted trade. By the 1820s, Jamaican sugar had become less competitive with that from high-volume producers such as Cuba, and production subsequently declined. By 1882 sugar output was less than half the level achieved in 1828. A major reason for the decline was the British Parliament's 1807 abolition of the slave trade, under which the transportation of slaves to Jamaica after 1 March 1808 was forbidden; the abolition of the slave trade was followed by the abolition of slavery in 1834 and full emancipation within four years. Unable to convert the ex-slaves into a sharecropping tenant class similar to the one established in the post-Civil War South of the United States, planters became increasingly dependent on wage labour and began recruiting workers abroad, primarily from India, China, and Sierra Leone. Many of the former slaves settled in peasant or small farm communities in the interior of the island, the "yam belt," where they engaged in subsistence and some cash crop farming.
The second half of the 19th century was a period of severe economic decline for Jamaica. Low crop prices, droughts, and disease led to serious social unrest, culminating in the Morant Bay rebellion of 1865. However, renewed British administration after the 1865 rebellion, in the form of crown colony status, resulted in some social and economic progress as well as investment in the physical infrastructure. Agricultural development was the centrepiece of restored British rule in Jamaica. In 1868 the first large-scale irrigation project was launched. In 1895 the Jamaica Agricultural Society was founded to promote more scientific and profitable methods of farming. Also in the 1890s, the Crown Lands Settlement Scheme was introduced, a land reform program of sorts, which allowed small farmers to purchase two hectares or more of land on favourable terms.
Between 1865 and 1930, the character of landholding in Jamaica changed substantially, as sugar declined in importance. As many former plantations went bankrupt, some land was sold to Jamaican peasants under the Crown Lands Settlement, whereas other cane fields were consolidated by dominant British producers, most notably the firm Tate and Lyle. Although the concentration of land and wealth in Jamaica was not as drastic as in the Spanish-speaking Caribbean, by the 1920s the typical sugar plantation on the island had increased to an average of 266 hectares. But, as noted, small-scale agriculture in Jamaica survived the consolidation of land by the sugar powers. The number of small holdings in fact tripled between 1865 and 1930, thus retaining a large portion of the population as peasantry. Most of the expansion in small holdings took place before 1910, with farms averaging between two and twenty hectares.
The rise of the banana trade during the second half of the 19th century also changed production and trade patterns on the island. Bananas were first exported in 1867, and banana farming grew rapidly thereafter. By 1890, bananas had replaced sugar as Jamaica's principal export. Production rose from 5 million stems (32 percent of exports) in 1897 to an average of 20 million stems a year in the 1920s and 1930s, or over half of domestic exports. As with sugar, the presence of American companies, like the well-known United Fruit Company in Jamaica, was a driving force behind renewed agricultural exports. The British also became more interested in Jamaican bananas than in the country's sugar. Expansion of banana production, however, was hampered by serious labour shortages. The rise of the banana economy took place amidst a general exodus of up to 11,000 Jamaicans a year.
In 1846 Jamaican planters, adversely affected by the loss of slave labour, suffered a crushing blow when Britain passed the Sugar Duties Act, eliminating Jamaica's traditionally favoured status as Britain's primary supplier of sugar. The Jamaica House of Assembly stumbled from one crisis to another until the collapse of the sugar trade, when racial and religious tensions came to a head during the Morant Bay rebellion of 1865. Although suppressed ruthlessly, the severe rioting so alarmed the planters that the two-centuries-old assembly voted to abolish itself and asked for the establishment of direct British rule. In 1866 the new governor John Peter Grant arrived to implement a series of reforms which accompanied the transition to a crown colony. The government consisted of the Legislative Council and the executive Privy Council containing members of both chambers of the House of Assembly, but the Colonial Office exercised effective power through a presiding British governor. The council included a few handpicked prominent Jamaicans for the sake of appearance only. In the late 19th century, crown colony rule was modified; representation and limited self-rule were reintroduced gradually into Jamaica after 1884. The colony's legal structure was reformed along the lines of English common law and county courts, and a constabulary force was established. The smooth working of the crown colony system depended on a good understanding and an identity of interests between the governing officials, who were British, and most of the nonofficial, nominated members of the Legislative Council, who were Jamaicans. The elected members of this body were in a permanent minority and without any influence or administrative power. The unstated alliance – based on shared colour, attitudes, and interest – between the British officials and the Jamaican upper class was reinforced in London, where the West India Committee lobbied for Jamaican interests. Jamaica's white or near-white propertied class continued to hold the dominant position in every respect; the vast majority of the black population remained poor and unenfranchised.
Until it was disestablished in 1870, the Church of England in Jamaica was the established church. It represented the white English community. It received funding from the colonial government and was given responsibility for providing religious instruction to the slaves. It was challenged by Methodist missionaries from England, and the Methodists in turn were denounced as troublemakers. The Church of England in Jamaica established the Jamaica Home and Foreign Missionary Society in 1861; its mission stations multiplied, with financial help from religious organisations in London. The Society sent its own missionaries to West Africa. Baptist missions grew rapidly, thanks to missionaries from England and the United States, and became the largest denomination by 1900. Baptist missionaries denounced the apprenticeship system as a form of slavery. In the 1870s and 1880s the Methodists opened a high school and a theological college. Other Protestant groups included the Moravians, Presbyterians, Congregationalists, Seventh-day Adventists, and the Church of God, among others. There were several thousand Roman Catholics. The population was largely Christian by 1900, and most families were linked with a church or a Sunday school. Traditional pagan practices, such as witchcraft, persisted in unorganised fashion.
In 1872, the government passed an act to transfer government offices from Spanish Town to Kingston. Kingston had been founded as a refuge for survivors of the 1692 earthquake that destroyed Port Royal. The town did not begin to grow until after the further destruction of Port Royal by fire in 1703. Surveyor John Goffe drew up a plan for the town based on a grid bounded by North, East, West and Harbour Streets. By 1716 it had become the largest town and the centre of trade for Jamaica. The government sold land to people with the regulation that they purchase no more than the amount of the land that they had owned in Port Royal, and only land on the sea front. Gradually wealthy merchants began to move their residences from above their businesses to the farm lands north on the plains of Liguanea. In 1755 the governor, Sir Charles Knowles, had decided to transfer the government offices from Spanish Town to Kingston, but some thought Kingston an unsuitable location for the Assembly, given its proximity to moral distractions, and the next governor rescinded the Act. However, by 1780 the population of Kingston was 11,000, and the merchants began lobbying for the administrative capital to be transferred from Spanish Town, which was by then eclipsed by the commercial activity in Kingston. The 1907 Kingston earthquake destroyed much of the city. Considered by many writers of that time one of the world's deadliest earthquakes, it resulted in the deaths of over eight hundred Jamaicans and destroyed the homes of over ten thousand more.
Marcus Mosiah Garvey, a black activist and trade unionist, founded the Universal Negro Improvement Association and African Communities League in 1914, one of Jamaica's first political parties (the People's Political Party) in 1929, and a workers' association in the early 1930s. Garvey also promoted the Back-to-Africa movement, which called for those of African descent to return to the homelands of their ancestors. Garvey pleaded, to no avail, with the colonial government to improve living conditions for indigenous peoples in the West Indies.
Garvey, a controversial figure, had been the target of a four-year investigation by the United States government. He was convicted of mail fraud in 1923 and had served most of a five-year term in an Atlanta penitentiary when he was deported to Jamaica in 1927. Garvey left the colony in 1935 to live in the United Kingdom, where he died heavily in debt five years later. He was proclaimed Jamaica's first national hero in the 1960s after Edward P.G. Seaga, then a government minister, arranged the return of his remains to Jamaica. In 1987 Jamaica petitioned the United States Congress to pardon Garvey on the basis that the federal charges brought against him were unsubstantiated and unjust.
The Rastafari movement, a new religion, emerged among impoverished and socially disenfranchised Afro-Jamaican communities in 1930s Jamaica. Its Afrocentric ideology was largely a reaction against Jamaica's then-dominant British colonial culture. It was influenced by both Ethiopianism and the Back-to-Africa movement promoted by black nationalist figures like Marcus Garvey. The movement developed after several Christian clergymen, most notably Leonard Howell, proclaimed that the crowning of Haile Selassie as Emperor of Ethiopia in 1930 fulfilled a Biblical prophecy. By the 1950s, Rastafari's counter-cultural stance had brought the movement into conflict with wider Jamaican society, including violent clashes with law enforcement. In the 1960s and 1970s it gained increased respectability within Jamaica and greater visibility abroad through the popularity of Rasta-inspired reggae musicians like Bob Marley. Enthusiasm for Rastafari declined in the 1980s, following the deaths of Haile Selassie and Marley.
The Great Depression caused sugar prices to slump in 1929 and led to the return of many Jamaicans who had been working abroad. Economic stagnation, discontent with unemployment, low wages, high prices, and poor living conditions caused social unrest in the 1930s. Uprisings began on the Frome Sugar Estate in the western parish of Westmoreland and quickly spread east to Kingston. Jamaica, in particular, set the pace for the region in its demands for economic development from British colonial rule.
Because of disturbances in Jamaica and the rest of the region, the British in 1938 appointed the Moyne Commission. An immediate result of the Commission was the Colonial Development and Welfare Act, which provided for the expenditure of approximately £1 million a year for twenty years on coordinated development in the British West Indies. Concrete actions, however, were not implemented to deal with Jamaica's massive structural problems.
The rise of nationalism, as distinct from island identification or desire for self-determination, is generally dated to the 1938 labour riots that affected both Jamaica and the islands of the Eastern Caribbean. William Alexander Bustamante, a moneylender in the capital city of Kingston who had formed the Jamaica Trade Workers and Tradesmen Union (JTWTU) three years earlier, captured the imagination of the black masses with his messianic personality, even though he himself was light-skinned, affluent, and aristocratic. Bustamante emerged from the 1938 strikes and other disturbances as a populist leader and the principal spokesperson for the militant urban working class, and in that year, using the JTWTU as a stepping stone, he founded the Bustamante Industrial Trade Union (BITU), which inaugurated Jamaica's workers movement.
A distant cousin of Bustamante's, Norman W. Manley, concluded as a result of the 1938 riots that the real basis for national unity in Jamaica lay in the masses. Unlike the union-oriented Bustamante, however, Manley was more interested in access to, and control over, state power, as well as political rights for the masses. On 18 September 1938, he inaugurated the People's National Party (PNP), which had begun as a nationalist movement supported by the mixed-race middle class and the liberal sector of the business community, with leaders who were highly educated members of the upper middle class. The 1938 riots spurred the PNP to unionise labour, although it would be several years before the PNP formed major labour unions. The party concentrated its earliest efforts on establishing a network both in urban areas and in banana-growing rural parishes, later working on building support among small farmers and in areas of bauxite mining.
The PNP adopted a socialist ideology in 1940 and later joined the Socialist International, allying itself formally with the social democratic parties of Western Europe. Although guided by socialist principles, Manley was not a doctrinaire socialist. PNP socialism during the 1940s was similar to British Labour Party ideas on state control of the factors of production, equality of opportunity, and a welfare state, although a left-wing element in the PNP held more orthodox Marxist views and worked for the internationalisation of the trade union movement through the Caribbean Labour Congress. In those formative years of Jamaican political and union activity, relations between Manley and Bustamante were cordial. Manley defended Bustamante in court against charges brought by the British for his labour activism in the 1938 riots and looked after the BITU during Bustamante's imprisonment.
Bustamante had political ambitions of his own, however. In 1942, while still incarcerated, he founded a political party to rival the PNP, called the Jamaica Labour Party (JLP). The new party, whose leaders were of a lower class than those of the PNP, was supported by conservative businessmen and 60,000 dues-paying BITU members, who encompassed dock and sugar plantation workers and other unskilled urban labourers. On his release in 1943, Bustamante began building up the JLP. Meanwhile, several PNP leaders organised the leftist-oriented Trade Union Congress (TUC). Thus, from an early stage in modern Jamaica, unionised labour was an integral part of organised political life.
For the next quarter century, Bustamante and Manley competed for centre stage in Jamaican political affairs, the former espousing the cause of the "barefoot man"; the latter, "democratic socialism," a loosely defined political and economic theory aimed at achieving a classless system of government. Jamaica's two founding fathers projected quite different popular images. Bustamante, lacking even a high school diploma, was an autocratic, charismatic, and highly adept politician; Manley was an athletic, Oxford-trained lawyer, Rhodes scholar, humanist, and liberal intellectual. Although considerably more reserved than Bustamante, Manley was well liked and widely respected. He was also a visionary nationalist who became the driving force behind the crown colony's quest for independence.
Following the 1938 disturbances in the West Indies, London sent the Moyne Commission to study conditions in the British Caribbean territories. Its findings led in the early 1940s to better wages and a new constitution. Issued on 20 November 1944, the Constitution modified the crown colony system and inaugurated limited self-government based on the Westminster model and universal adult suffrage. It also embodied the island's principles of ministerial responsibility and the rule of law. Thirty-one percent of the population participated in the 1944 elections. The JLP – helped by its promises to create jobs, its practice of dispensing public funds in pro-JLP parishes, and the PNP's relatively radical platform – won an 18-percent majority of the votes over the PNP, as well as 22 seats in the 32-member House of Representatives, with 5 going to the PNP and 5 to other short-lived parties. In 1945 Bustamante took office as Jamaica's first premier (the pre-independence title for head of government).
Under the new charter, the British governor, assisted by the six-member Privy Council and ten-member Executive Council, remained responsible solely to the crown. The Jamaican Legislative Council became the upper house, or Senate, of the bicameral Parliament. House members were elected by adult suffrage from single-member electoral districts called constituencies. Despite these changes, ultimate power remained concentrated in the hands of the governor and other high officials.
After World War II, Jamaica began a relatively long transition to full political independence. Jamaicans preferred British culture over American, but they had a love-hate relationship with the British and resented British domination, racism, and the dictatorial Colonial Office. Britain gradually granted the colony more self-government under periodic constitutional changes. Jamaica's political patterns and governmental structure were shaped during two decades of what was called "constitutional decolonisation," the period between 1944 and independence in 1962.
Having seen how little popular appeal the PNP's 1944 campaign position had, the party shifted toward the centre in 1949 and remained there until 1974. The PNP actually won a 0.8-percent majority of the votes over the JLP in the 1949 election, although the JLP won a majority of the House seats. In the 1950s, the PNP and JLP became increasingly similar in their sociological composition and ideological outlook. During the cold war years, socialism became an explosive domestic issue. The JLP exploited it among property owners and churchgoers, attracting more middle-class support. As a result, PNP leaders diluted their socialist rhetoric, and in 1952 the PNP moderated its image by expelling four prominent leftists who had controlled the TUC. The PNP then formed the more conservative National Workers Union (NWU). Henceforth, PNP socialism meant little more than national planning within a framework of private property and foreign capital. The PNP retained, however, a basic commitment to socialist precepts, such as public control of resources and a more equitable income distribution. Manley's PNP came to office for the first time after winning the 1955 elections with an 11-percent majority over the JLP and 50.5 percent of the popular vote.
Amendments to the constitution that took effect in May 1953 reconstituted the Executive Council and provided for eight ministers to be selected from among House members. The first ministries were subsequently established. These amendments also enlarged the limited powers of the House of Representatives and made elected members of the governor's executive council responsible to the legislature. Manley, elected chief minister beginning in January 1955, accelerated the process of decolonisation during his able stewardship. Further progress toward self-government was achieved under constitutional amendments in 1955 and 1956, and cabinet government was established on 11 November 1957.
Assured by British declarations that independence would be granted to a collective West Indian state rather than to individual colonies, Manley supported Jamaica's joining nine other British territories in the West Indies Federation, established on 3 January 1958. Manley became the island's premier after the PNP again won a decisive victory in the general election in July 1959, securing thirty of forty-five House seats.
Membership in the federation remained an issue in Jamaican politics. Bustamante, reversing his previously supportive position, warned of the financial implications of membership – Jamaica was responsible for 43 percent of the federation's financing – and of the inequity of Jamaica's proportional representation in the federation's House of Assembly. Manley's PNP favoured staying in the federation, but he agreed to hold a referendum in September 1961 to decide the issue. When 54 percent of the electorate voted to withdraw, Jamaica left the federation, which dissolved in 1962 after Trinidad and Tobago also pulled out. Manley believed that the rejection of his pro-federation policy in the 1961 referendum called for a renewed mandate from the electorate, but the JLP won the election of early 1962 by a narrow margin. Bustamante assumed the premiership that April, and Manley spent his remaining few years in politics as leader of the opposition.
Jamaica received its independence on 6 August 1962. The new nation retained, however, its membership in the Commonwealth of Nations and adopted a Westminster-style parliamentary system. Bustamante, at the age of 78, became the new nation's first prime minister.
Jamaica continues to be a Commonwealth realm, with the British monarch serving as Queen of Jamaica and head of state.
An extensive period of postwar growth transformed Jamaica into an increasingly industrial society. This pattern was accelerated with the export of bauxite beginning in the 1950s. The economic structure shifted from a dependence on agriculture that in 1950 accounted for 30.8 percent of GDP to an agricultural contribution of 12.9 percent in 1960 and 6.7 percent in 1970. During the same period, the contribution to GDP of mining increased from less than 1 percent in 1950 to 9.3 percent in 1960 and 12.6 percent in 1970.
Bustamante's government also continued the state's repression of Rastafarians. In the Coral Gardens incident, one prominent example of state violence against Rastafarians, a violent confrontation between Rastafarians and police at a gas station led Bustamante to order the police and military to "bring in all Rastas, dead or alive." Fifty-four years later, following a government investigation into the incident, the government of Jamaica issued an apology, taking unequivocal responsibility for the Bustamante government's actions and making significant financial reparations to surviving victims of the incident.
Jamaica's reggae music developed from Ska and rocksteady in the 1960s. The shift from rocksteady to reggae was illustrated by the organ shuffle pioneered by Jamaican musicians like Jackie Mittoo and Winston Wright and featured in transitional singles "Say What You're Saying" (1967) by Clancy Eccles and "People Funny Boy" (1968) by Lee "Scratch" Perry. The Pioneers' 1968 track "Long Shot (Bus' Me Bet)" has been identified as the earliest recorded example of the new rhythm sound that became known as reggae.
Early 1968 was when the first "bona fide" reggae records were released: "Nanny Goat" by Larry Marshall and "No More Heartaches" by The Beltones. That same year, the newest Jamaican sound began to spawn big-name imitators in other countries. American artist Johnny Nash's 1968 hit "Hold Me Tight" has been credited with first putting reggae in the American listener charts. Around the same time, reggae influences were starting to surface in rock and pop music, one example being 1968's "Ob-La-Di, Ob-La-Da" by The Beatles. Other significant reggae pioneers include Prince Buster, Desmond Dekker and Ken Boothe.
The Wailers, a band started by Bob Marley, Peter Tosh and Bunny Wailer in 1963, is perhaps the most recognised band that made the transition through all three stages of early Jamaican popular music: ska, rocksteady and reggae. The Wailers went on to release some of the earliest reggae records with producer Lee "Scratch" Perry. After the Wailers disbanded in 1974, Marley pursued a solo career that culminated in the release of the album "Exodus" in 1977, which established his worldwide reputation and cemented his status as one of the world's best-selling artists of all time, with sales of more than 75 million records. He was a committed Rastafari who infused his music with a sense of spirituality.
In the election of 1972, the PNP's Michael Manley defeated the JLP's unpopular incumbent Prime Minister Hugh Shearer. Under Manley, Jamaica established a minimum wage for all workers, including domestic workers. In 1974, Manley proposed free education from primary school to university. The introduction of universally free secondary education was a major step in removing the institutional barriers to private sector and preferred government jobs that required secondary diplomas. The PNP government in 1974 also formed the Jamaica Movement for the Advancement of Literacy (JAMAL), which administered adult education programs with the goal of involving 100,000 adults a year.
Land reform expanded under his administration. Historically, land tenure in Jamaica had been rather inequitable. Project Land Lease, introduced in 1973, attempted an integrated rural development approach, providing tens of thousands of small farmers with land, technical advice, inputs such as fertilisers, and access to credit. An estimated 14 percent of idle land was redistributed through this program, much of which had been abandoned during the post-war urban migration or purchased by large bauxite companies.
The minimum voting age was lowered to 18 years, and equal pay for women was introduced. Maternity leave was also introduced, and the government abolished the legal stigma of illegitimacy. The Masters and Servants Act was abolished, and a Labour Relations and Industrial Disputes Act provided workers and their trade unions with enhanced rights. The National Housing Trust was established, providing "the means for most employed people to own their own homes," and greatly stimulated housing construction, with more than 40,000 houses built between 1974 and 1980.
Subsidised meals, transportation and uniforms for schoolchildren from disadvantaged backgrounds were introduced, together with free education at primary, secondary, and tertiary levels. Special employment programmes were also launched, together with programmes designed to combat illiteracy. Increases in pensions and poor relief were carried out, along with a reform of local government taxation, an increase in youth training, an expansion of day care centres, and an upgrading of hospitals.
A workers' participation programme was introduced, together with a new mental health law and the family court. Free health care for all Jamaicans was introduced, while health clinics and a paramedical system were established in rural areas. Various clinics were also set up to facilitate access to medical drugs. Spending on education was significantly increased, while the number of doctors and dentists in the country rose.
The One Love Peace Concert was a large concert held in Kingston on April 22, 1978, during a period of intense political violence between the rival Jamaica Labour Party and People's National Party. The concert reached its peak during Bob Marley & The Wailers' performance of "Jammin'", when Marley joined the hands of political rivals Michael Manley (PNP) and Edward Seaga (JLP).
In the 1980 election, Edward Seaga and the JLP won by an overwhelming majority – 57 percent of the popular vote and 51 of the 60 seats in the House of Representatives. Seaga immediately began to reverse the policies of his predecessor, privatising industry and seeking closer ties with the United States. Seaga was one of the first foreign heads of government to visit newly elected US president Ronald Reagan early the next year, and was one of the architects of the Reagan-sponsored Caribbean Basin Initiative. He delayed fulfilling his promise to cut diplomatic relations with Cuba until a year later, when he accused the Cuban government of giving asylum to Jamaican criminals.
Seaga supported the collapse of the Marxist regime in Grenada and the subsequent US-led invasion of that island in October 1983. On the back of the Grenada invasion, Seaga called snap elections at the end of 1983, which Manley's PNP boycotted. His party thus controlled all seats in parliament. In an unusual move, because the Jamaican constitution required an opposition in the appointed Senate, Seaga appointed eight independent senators to form an official opposition.
Seaga lost much of his US support when he was unable to deliver on his early promises of removing the bauxite levy, and his domestic support also plummeted. Articles attacking Seaga appeared in the US media and foreign investors left the country. Rioting in 1987 and 1988, the continued high popularity of Michael Manley, and complaints of governmental incompetence in the wake of the devastation of the island by Hurricane Gilbert in 1988, also contributed to his defeat in the 1989 elections.
In 1988, Hurricane Gilbert produced a storm surge and brought heavy rain to the mountainous areas of Jamaica, causing inland flash flooding; 49 people died. Prime Minister Edward Seaga stated that the hardest-hit areas, near where Gilbert made landfall, looked "like Hiroshima after the atom bomb." The storm left US$4 billion (in 1988 dollars) in damage from destroyed crops, buildings, houses, roads, and small aircraft. Two people caught in mudslides triggered by Gilbert had to be rescued and were sent to hospital; both were reported to be fine. No planes were going in and out of Kingston, and telephone lines were jammed from Jamaica to Florida.
As Gilbert lashed Kingston, its winds knocked down power lines, uprooted trees, and flattened fences. On the north coast, waves hit Ocho Rios, a popular tourist resort where hotels were evacuated. Kingston's airport reported severe damage to its aircraft, and all Jamaica-bound flights were cancelled at Miami International Airport. Unofficial estimates state that at least 30 people were killed around the island. Estimated property damage reached more than $200 million. More than 100,000 houses were destroyed or damaged and the country's banana crop was largely destroyed. Hundreds of miles of roads and highways were also heavily damaged. Reconnaissance flights over remote parts of Jamaica reported that 80 percent of the homes on the island had lost their roofs. The poultry industry was also wiped out; the damage from agricultural loss reached $500 million (1988 USD). Hurricane Gilbert was the most destructive storm in the history of Jamaica and the most severe storm since Hurricane Charlie in 1951.
Jamaica's film industry was born in 1972 with the release of "The Harder They Come", the first feature-length film made by Jamaicans. It starred reggae singer Jimmy Cliff, was directed by Perry Henzell, and was produced by Island Records founder Chris Blackwell. The film is famous for its reggae soundtrack that is said to have "brought reggae to the world". Jamaica's other popular films include 1976's "Smile Orange", 1982's "Countryman", 1991's "The Lunatic", 1997's "Dancehall Queen", and 1999's "Third World Cop". Major figures in the Jamaican film industry include actors Paul Campbell and Carl Bradshaw, actress Audrey Reid, and producer Chris Blackwell.
The 1989 election was the first election contested by the People's National Party since 1980, as they had boycotted the 1983 snap election. Prime Minister Edward Seaga announced the election date on January 15, 1989 at a rally in Kingston. He cited emergency conditions caused by Hurricane Gilbert in 1988 as the reason for extending the parliamentary term beyond its normal five-year mandate.
The date and tone of the election were shaped in part by Hurricane Gilbert, which made landfall in September 1988 and devastated the island, causing almost $1 billion worth of damage, wiping out banana and coffee crops, and destroying thousands of homes. Both parties engaged in campaigning through the distribution of relief supplies, a hallmark of the Jamaican patronage system. Political commentators noted that prior to the hurricane, Edward Seaga and the JLP trailed Michael Manley and the PNP by twenty points in opinion polls. The ability to provide relief as the party in power allowed Seaga to improve his standing among voters and erode the inevitability of Manley's victory. However, scandals related to the relief effort cost Seaga and the JLP some of the gains made immediately following the hurricane, among them National Security Minister Errol Anderson personally controlling a warehouse full of disaster relief supplies and candidate Joan Gordon-Webley distributing American-donated flour in sacks bearing her picture.
The election was characterised by a narrower ideological difference between the two parties on economic issues. Michael Manley facilitated his comeback by moderating his leftist positions and admitting mistakes made as Prime Minister: he said he had erred in involving government in economic production, and that he had abandoned all thoughts of nationalising industry. He stated that the PNP would continue the market-oriented policies of the JLP government, but with a more participatory approach. Prime Minister Edward Seaga ran on his record of economic growth and the reduction of unemployment in Jamaica, using the campaign slogan "Don't Let Them Wreck It Again" in reference to Manley's earlier tenure as Prime Minister. During his own tenure, Seaga had emphasised the need to tighten public sector spending, cutting close to 27,000 public sector jobs in 1983 and 1984; he shifted his plans as elections neared, with a promise to spend J$1 billion on a five-year Social Well-Being Programme that would build new hospitals and schools in Jamaica. Foreign policy also played a role in the 1989 election. Seaga emphasised his relations with the United States, a relationship which saw Jamaica receive considerable economic aid from the U.S. and additional loans from international institutions. Manley pledged better relations with the United States while at the same time pledging to restore the diplomatic relations with Cuba that had been cut under Seaga. During Manley's earlier time as Prime Minister, Jamaican-American relations had frayed significantly as a result of his economic policies and close relations with Cuba.
The PNP was re-elected, and Manley's second term focused on liberalising Jamaica's economy, with the pursuit of a free-market programme that stood in marked contrast to the interventionist economic policies of Manley's first government. Various measures were, however, undertaken to cushion the negative effects of liberalisation. A Social Support Programme was introduced to provide welfare assistance for poor Jamaicans, focusing on direct employment, training, and credit for much of the population. The government also announced a 50% increase in the number of food stamps for the most vulnerable groups, including pregnant women, nursing mothers, and children. A small number of community councils were also created. In addition, a limited land reform programme was carried out that leased and sold land to small farmers, and land plots were granted to hundreds of farmers. The government also had an admirable record in housing provision, while measures were also taken to protect consumers from illegal and unfair business practices.
In 1992, citing health reasons, Manley stepped down as Prime Minister and PNP leader. His former Deputy Prime Minister, Percival Patterson, assumed both offices. Patterson led efforts to strengthen the country's social protection and security systems – a critical element of his economic and social policy agenda to mitigate and reduce poverty and social deprivation. His massive investments in the modernisation of Jamaica's infrastructure and the restructuring of the country's financial sector are widely credited with having led to Jamaica's greatest period of investment in tourism, mining, ICT and energy since the 1960s. He also ended Jamaica's 18-year borrowing relationship with the International Monetary Fund, allowing the country greater latitude in the pursuit of its economic policies.
Patterson led the PNP to resounding victories in the 1993 and 1997 elections. Patterson called the 1997 election in November 1997, when his People's National Party was ahead in the opinion polls, inflation had fallen substantially and the national football team had just qualified for the 1998 World Cup. The previous election in 1993 had seen the People's National Party win 52 of the 60 seats.
A record 197 candidates contested the election, with a new political party, the National Democratic Movement, standing in most of the seats. The National Democratic Movement had been founded in 1995 by a former Labour Party chairman, Bruce Golding, after a dispute over the leadership of the Jamaica Labour Party.
The 1997 election was largely free of violence compared to previous elections, although it began with an incident in which rival motorcades from the main parties were fired on. It was the first election in Jamaica attended by a team of international election monitors, drawn from the Carter Center and including Jimmy Carter, Colin Powell and former heavyweight boxing world champion Evander Holyfield. Just before the election the two main party leaders made a joint appeal for people to avoid marring the election with violence. Election day itself saw one death and four injuries relating to the election; by comparison, the 1980 election had seen over 800 deaths.
In winning the election, the People's National Party became the first party to win three consecutive terms. The opposition Jamaica Labour Party gained only two more seats in Parliament, though its leader Edward Seaga held his seat for the ninth time in a row. The National Democratic Movement failed to win any seats, despite a pre-election prediction that it would manage to win one.
The 2002 election was a victory for the People's National Party, but their number of seats fell from 50 to 34 (out of 60 total). PNP leader P. J. Patterson retained his position as Prime Minister, becoming the first political leader to win three successive elections. Patterson stepped down on 26 February 2006, and was replaced by Portia Simpson-Miller, Jamaica's first female Prime Minister.
The 2007 elections had originally been scheduled for August 27, 2007 but were delayed to September 3 due to Hurricane Dean. The preliminary results indicated a slim victory for the opposition Jamaica Labour Party led by Bruce Golding, whose margin grew from 31–29 to 33–27 after official recounts. The JLP thus ended 18 years of unbroken PNP governance.
In the 1990s, Jamaica and other Caribbean banana producers argued for the continuation of their preferential access to EU markets, notably the United Kingdom. They feared that otherwise the EU would be flooded with cheap bananas from the Central American plantations, with devastating effects on several Caribbean economies. Negotiations led in 1993 to the EU agreeing to maintain the Caribbean producers' preferential access until the end of Lomé IV, pending possible negotiation on an extension. In 1995, the United States government petitioned the World Trade Organization to investigate whether the Lomé IV convention had violated WTO rules. In 1996, the WTO Dispute Settlement Body ruled in favour of the complainants, effectively ending the cross-subsidies that had benefited ACP countries for many years. The US remained unsatisfied, however, and insisted that all preferential trade agreements between the EU and ACP countries should cease. The WTO Dispute Settlement Body established another panel to discuss the issue and concluded that agreements between the EU and ACP were indeed not compatible with WTO regulations. Finally, the EU negotiated with the US through the WTO to reach an agreement.
In tourism, after a decrease in volume following the 11 September attacks in the U.S., the number of tourists going to Jamaica eventually rebounded, with the island now receiving over a million tourists each year. Services now account for over 60 percent of Jamaica's GDP and one of every four workers in Jamaica works in tourism or services. However, according to the World Bank, around 80% of the money tourism makes in Jamaica does not stay on the island, but goes instead to the multinational resorts.
The 2007 Cricket World Cup was the first time the ICC Cricket World Cup had been held in the Caribbean. The Jamaican Government spent US$81 million on "on the pitch" expenses, including refurbishing Sabina Park and constructing a new multi-purpose facility in Trelawny, financed through a loan from China. Another US$20 million was budgeted for "off the pitch" expenses, putting the tally at more than US$100 million, or JM$7 billion. This put the reconstruction cost of Sabina Park at US$46 million, whilst the Trelawny stadium cost US$35 million. The total amount of money spent on stadiums was at least US$301 million. The 2007 World Cup organisers were criticised for restrictions on outside food, signs, replica kits and musical instruments, despite Caribbean cricketing customs, with authorities being accused of "running [cricket and cricketing traditions] out of town, then sanitising it out of existence". Sir Viv Richards echoed the concerns. The ICC was also condemned for high prices for tickets and concessions, which were considered unaffordable for the local population in many of the locations. In a tragic turn of events, Pakistan coach Bob Woolmer was found dead on 18 March 2007, one day after his team's defeat to Ireland put them out of the running for the World Cup. Jamaican police performed an autopsy which was deemed inconclusive; the following day police announced that the death was suspicious and ordered a full investigation. An initial finding gave the cause of death as "manual strangulation", and the case was handled as a murder. After a lengthy investigation, however, the Jamaican police withdrew the murder finding and confirmed that Woolmer had died of natural causes.
In sprinting, Jamaicans began their domination of the 100 metres world record in 2005, when Jamaica's Asafa Powell set the record at 9.77 seconds in June of that year; he held the record until May 2008, lowering it to 9.74 seconds along the way. At the 2008 Summer Olympics in Beijing, Jamaica's athletes reached new heights, nearly doubling the country's total gold medal count and breaking the nation's record for the number of medals earned in a single games. Usain Bolt won three of Jamaica's six gold medals at Beijing, breaking an Olympic and world record in each of the three events in which he competed. Shelly-Ann Fraser led an unprecedented Jamaican sweep of the medals in the women's 100 m.
Although Jamaican dancehall music originated in the late 1970s, it greatly increased in popularity in the late 1980s and 1990s. Initially dancehall was a more sparse version of reggae than the roots style, which had dominated much of the 1970s. Two of the biggest stars of the early dancehall era were Yellowman and Eek-a-Mouse. Dancehall brought a new generation of producers, including Linval Thompson, Gussie Clarke and Jah Thomas. In the mid-1980s, digital instrumentation became more prevalent, changing the sound considerably, with digital dancehall (or "ragga") becoming increasingly characterised by faster rhythms.
In the early 1990s songs by Dawn Penn, Shabba Ranks, Patra and Chaka Demus and Pliers were the first dancehall megahits in the US and abroad. Other varieties of dancehall achieved crossover success outside of Jamaica during the mid-to-late 1990s. In the 1990s, dancehall came under increasing criticism for anti-gay lyrics such as those found in Buju Banton's 1988 hit "Boom Bye Bye," which is about shooting a gay man in the head: "It's like boom bye bye / Inna batty boy head / Rude boy nah promote no nasty man / Dem haffi dead."
The early 2000s saw the success of newer charting acts such as Elephant Man, Tanya Stephens, and Sean Paul. Dancehall made a resurgence within the pop market in the late 2000s, with songs by Konshens, Mr. Vegas, Popcaan, Mavado, Vybz Kartel, and Beenie Man, among others. In 2011, Vybz Kartel – at the time one of dancehall's biggest stars – was arrested for the murder of Clive "Lizard" Williams. In 2014 he was sentenced to life in prison after a 65-day trial, the longest in Jamaican history.
Politically and socially, the 2010s in Jamaica have been shaped by the Tivoli Incursion, a 2010 gun battle between police and the gang of Christopher "Dudus" Coke. Over seventy Jamaicans were killed during the fighting, and the inquiry into police actions during the incursion continues today.
Coke took over the "Shower Posse" gang of Tivoli Gardens from his father, Lester "Jim Brown" Coke, in the 1990s. Under Christopher Coke's leadership, the gang trafficked drugs and dabbled in visa fraud (using a high-school athletics team) and extortion, charging small traders in the nearby market for "protection money". The gang had close political ties: Tivoli Gardens is part of the Kingston Western parliamentary district, a seat held for years by Edward Seaga, long-time leader of the JLP. That connection helped Coke expand into construction, with his company winning numerous government contracts. Within Tivoli Gardens, the gang operated as a government unto itself.
On 23 May 2010, Jamaican security forces began searching for Coke after the United States requested his extradition, and gunmen loyal to him attacked several police stations in response. The violence, which largely took place over 24–25 May, killed at least 73 civilians and wounded at least 35 others. Four soldiers and police were also killed, and more than 500 arrests were made, as Jamaican police and soldiers fought gunmen in the Tivoli Gardens district of Kingston.
Coke was eventually captured on 23 June, after initial rumours that he was attempting to surrender to the United States. Kingston police arrested Coke on the outskirts of the city, apparently while the Reverend Al Miller was helping negotiate his surrender to the United States Embassy. In 2011, Coke pleaded guilty to racketeering and drug-related charges in a New York federal court, and on 8 June 2012 he was sentenced to 23 years in prison.
In the four years following Coke's capture, Jamaica's murder rate decreased by nearly half. However, the murder rate remains one of the highest in the world and Jamaica's morgues have not been able to keep up. The lack of facilities to store and study murder victims has been one of the reasons that few murders are solved, with the conviction rate for homicides standing at around five percent. In 2007, following the botched investigation into the death of Pakistan cricket coach Bob Woolmer, who died unexpectedly while the island hosted the sport's world cup, Jamaican politicians debated the need for a modern public morgue.
The Tivoli Incursion and LGBT rights were both major issues in the 2011 election.
Although the JLP survived an election called shortly after the 2010 Tivoli Gardens incident, the date of the 2011 election was set the following year as 29 December. Major local media outlets viewed the election as "too close to call", though as Simpson-Miller campaigned in key constituencies the gap widened to favour the PNP. Days before the election, Simpson-Miller came out fully in favour of LGBT rights in a televised debate, saying that she "has no problem giving certain positions of authority to a homosexual as long as they show the necessary level of competence for the post." However, since taking power her government has not attempted to repeal the laws which criminalise homosexuality.
In 2012, Dane Lewis launched a legal challenge to Jamaica's Offences Against the Person Act of 1864, commonly known as the "buggery" laws, on the grounds that they are unconstitutional and promote homophobia throughout the Caribbean. The legal challenge was taken to the Inter-American Commission on Human Rights. The Act does not formally ban homosexuality, but section 76 provides for up to 10 years' imprisonment, with or without hard labour, for anyone convicted of the "abominable crime of buggery committed either with mankind or any animal". Two further sections outlaw attempted buggery and gross indecency between two men.
LGBT rights returned to Jamaican headlines the next year, following the murder in July 2013 of a 16-year-old boy who attended a party in women's clothing. Advocates called for the repeal of the nearly 150-year-old anti-sodomy law that bans anal sex, legislation which critics say helps spur anti-LGBT violence.
In 2013, the International Monetary Fund announced a $1 billion loan to help Jamaica meet large debt payments. The loan required the Jamaican government to institute a pay freeze amounting to a 20% real-terms cut in wages. Jamaica is one of the world's most indebted countries and spends around half of its annual national budget on debt repayments.
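The "20% real-terms cut" is the compound effect of inflation on a frozen nominal wage. A minimal sketch of that arithmetic follows; the inflation rates used are hypothetical placeholders, not the actual Jamaican figures behind the quoted number.

```python
# Illustrative only: how a nominal pay freeze compounds into a real-terms
# wage cut. The inflation rates below are assumed placeholders, not the
# actual Jamaican rates behind the 20% figure quoted above.

inflation_by_year = [0.09, 0.08, 0.07]  # assumed annual inflation rates

real_wage = 1.0  # nominal wage frozen at 1.0
for rate in inflation_by_year:
    real_wage /= 1 + rate  # each year of inflation erodes purchasing power

print(f"Real wage after {len(inflation_by_year)} frozen years: {real_wage:.2f}")
print(f"Real-terms cut: {(1 - real_wage) * 100:.0f}%")
# -> about a 21% cut with these assumed rates
```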
The 2010s look to be a difficult decade for Jamaica's sugarcane industry. After a brief increase in sugar prices, the outlook for Jamaican sugar took a hit in 2015 when the EU began moving towards ending its cap on European sugar beet production. Jamaica exports 25% of the sugar it produces to Britain, and prices for Jamaican sugar are expected to fall once the cap on the EU's subsidised sugar beet industry ends.
However, marijuana may become a new cash crop and tourist draw for Jamaica, depending on future legislation. On 25 February 2015, the Jamaican House of Representatives passed a law decriminalizing possession of up to 2 ounces of cannabis. The new law includes provisions legalizing the cultivation of up to five plants for personal use, as well as setting up regulations for the cultivation and distribution of cannabis for medical and religious purposes.
https://en.wikipedia.org/wiki?curid=15661
Geography of Jamaica
Jamaica lies south of Cuba and west of Haiti. At its greatest extent, Jamaica is long, and its width varies between . Jamaica has a small area of . However, Jamaica is the largest island of the Commonwealth Caribbean and the third largest of the Greater Antilles, after Cuba and Hispaniola. Many small islands are located along the south coast of Jamaica, such as the Port Royal Cays. Southwest of mainland Jamaica lies Pedro Bank, an area of shallow seas, with a number of cays (low islands or reefs), extending generally east to west for over . To the southeast lies Morant Bank, with the Morant Cays, from Morant Point, the easternmost point of mainland Jamaica. Alice Shoal, southwest of the main island of Jamaica, falls within the Jamaica–Colombia Joint Regime. It has an Exclusive Economic Zone of .
Jamaica and the other islands of the Antilles evolved from an arc of ancient volcanoes that rose from the sea millions of years ago. During periods of submersion, thick layers of limestone were laid down over the old igneous and metamorphic rock. In many places, the limestone is thousands of feet thick. The country can be divided into three landform regions: the central mountain chain formed by igneous and metamorphic rocks; the karst limestone hills in the Cockpit area; the low-lying coastal plains and interior valleys.
The highest area is the Blue Mountains range. These eastern mountains are formed by a central ridge of metamorphic rock running northwest to southeast from which many long spurs jut to the north and south. For a distance of over , the crest of the ridge exceeds . The highest point is Blue Mountain Peak at . The Blue Mountains rise to these elevations from the coastal plain in the space of about , thus producing one of the steepest general gradients in the world. In this part of the country, the old metamorphic rock reveals itself through the surrounding limestone. To the north of the Blue Mountains lies the strongly tilted limestone plateau forming the John Crow Mountains. This range rises to elevations of over . To the west, in the central part of the country, are two high rolling plateaus: the Dry Harbour Mountains to the north and the Manchester Plateau to the south. Between the two, the land is rugged and here, also, the limestone layers are broken by the older rocks. Streams that rise in the region flow outward and sink soon after reaching the limestone layers.
The limestone plateau covers two-thirds of the country, so that karst formations dominate the island. Karst is formed by the erosion of the limestone in solution. Sinkholes, caves and caverns, disappearing streams, hummocky hills, and terra rosa (residual red) soils in the valleys are distinguishing features of a karst landscape; all these are present in Jamaica. To the west of the mountains is the rugged terrain of the Cockpit Country, one of the world's most dramatic examples of karst topography.
The Cockpit Country is pockmarked with steep-sided hollows, as much as deep in places, which are separated by conical hills and ridges. On the north, the main defining feature is the fault-based "Escarpment", a long ridge that extends from Flagstaff in the west, through Windsor in the centre, to Campbells and the start of the Barbecue Bottom Road (B10). The Barbecue Bottom Road, which runs north-south, high along the side of a deep, fault-based valley in the east, is the only drivable route across the Cockpit Country. However, there are two old, historical trails that cross further west, the Troy Trail, and the Quick Step Trail, both of which are seldom used and difficult to find. In the southwest, near Quick Step, is the district known as the "Land of Look Behind," so named because Spanish horsemen venturing into this region of hostile runaway slaves were said to have ridden two to a mount, one rider facing to the rear to keep a precautionary watch. Where the ridges between sinkholes in the plateau area have dissolved, flat-bottomed basins or valleys have been formed that are filled with terra rosa soils, some of the most productive on the island. The largest basin is the Vale of Clarendon, long and wide. Queen of Spains Valley, Nassau Valley, and Cave Valley were formed by the same process.
The coastline of Jamaica is one of many contrasts. The northeast shore is severely eroded by the ocean. There are many small inlets in the rugged coastline, but no coastal plain of any extent. A narrow strip of plains along the northern coast offers calm seas and white sand beaches. Behind the beaches is a flat raised plain of uplifted coral reef.
The southern coast has small stretches of plains lined by black sand beaches. These are backed by cliffs of limestone where the plateaus end. In many stretches with no coastal plain, the cliffs drop straight to the sea. In the southwest, broad plains stretch inland for a number of kilometres. The Black River courses through the largest of these plains. The swamplands of the Great Morass and the Upper Morass fill much of the plains. The western coastline contains the island's finest beaches.
Two types of climate are found in Jamaica. An upland tropical climate prevails on the windward side of the mountains, whereas a semiarid climate predominates on the leeward side. Warm trade winds from the east and northeast bring rainfall throughout the year. The rainfall is heaviest from May to October, with peaks in those two months. The average rainfall is per year. Rainfall is much greater in the mountain areas facing the north and east, however. Where the higher elevations of the John Crow Mountains and the Blue Mountains catch the rain from the moisture-laden winds, rainfall exceeds per year. Since the southwestern half of the island lies in the rain shadow of the mountains, it has a semiarid climate and receives fewer than of rainfall annually.
Temperatures in Jamaica are fairly constant throughout the year, averaging in the lowlands and at higher elevations. Temperatures may dip to below at the peaks of the Blue Mountains. The island receives, in addition to the northeast trade winds, refreshing onshore breezes during the day and cooling offshore breezes at night. These are known on Jamaica as the "Doctor Breeze" and the "Undertaker's Breeze," respectively.
Jamaica lies in the Atlantic hurricane belt; as a result, the island sometimes experiences significant storm damage. Powerful hurricanes which have hit the island directly causing death and destruction include Hurricane Charlie in 1951 and Hurricane Gilbert in 1988. Several other powerful hurricanes have passed near to the island with damaging effects. In 1980, for example, Hurricane Allen destroyed nearly all Jamaica's banana crop. Hurricane Ivan (2004) swept past the island causing heavy damage and a number of deaths; in 2005, Hurricanes Dennis and Emily brought heavy rains to the island. A Category 4 hurricane, Hurricane Dean, caused some deaths and heavy damage to Jamaica in August 2007.
The first recorded hurricane to hit Jamaica was in 1519, and the island has been struck by tropical cyclones regularly since. During two of the coldest periods of the last 250 years (the 1780s and 1810s), the frequency of hurricanes in the Jamaica region was unusually high. Another peak of activity occurred in the 1910s, the coldest decade of the 20th century. On the other hand, hurricane formation was greatly diminished from 1968 to 1994, a period that coincided with the great Sahel drought.
Although most of Jamaica's native vegetation has been stripped in order to make room for cultivation, some areas have been left virtually undisturbed since the time of Columbus. Indigenous vegetation can be found along the northern coast from Rio Bueno to Discovery Bay, in the highest parts of the Blue Mountains, and in the heart of the Cockpit Country.
As in the case of vegetation, considerable loss of wildlife has occurred, beginning with the settlement of native peoples in the region millennia ago. For example, the Caribbean monk seal, the only pinniped ever known from the Caribbean, once occurred in Jamaican waters but has since been driven to extinction. The mongoose (Herpestes auropunctatus), introduced to Jamaica in 1872 to reduce rat populations that damaged commercial sugarcane (Saccharum officinarum) crops, preys on several Jamaican species, including the critically endangered Jamaican iguana (Cyclura collei), and has been implicated in the historical population declines and extinctions of many others. Other wildlife species inhabiting the island include the West Indian manatee (Trichechus manatus), the American crocodile (Crocodylus acutus), and the endemic Homerus swallowtail (Papilio homerus), the largest butterfly species in the Western Hemisphere.
https://en.wikipedia.org/wiki?curid=15662
Demographics of Jamaica
This article is about the demographic features of the population of Jamaica, including population density, education level, health of the populace, economic status, religious affiliations, and other aspects.
According to , the total population was in , compared to only 1,403,000 in 1950. The proportion of children below the age of 15 in 2010 was 29%, 63.1% of the population was between 15 and 65 years of age, and 7.8% was 65 years or older.
Structure of the population (census of 4 April 2011):
Source: "UN World Population Prospects"
The following demographic statistics are from the CIA World Factbook unless otherwise referenced.
https://en.wikipedia.org/wiki?curid=15663
Economy of Jamaica
The economy of Jamaica is heavily reliant on services, accounting for 70% of the country's GDP. Jamaica has natural resources, primarily bauxite, and an ideal climate conducive to agriculture and tourism. The discovery of bauxite in the 1940s and the subsequent establishment of the bauxite-alumina industry shifted Jamaica's economy away from sugar and bananas. By the 1970s, Jamaica had emerged as a world leader in the export of these minerals as foreign investment increased.
Weakness in the financial sector, speculation, and lower levels of investment erode confidence in the productive sector. The government continues its efforts to raise new sovereign debt in local and international financial markets in order to meet its U.S. dollar debt obligations, to mop up liquidity to maintain the exchange rate and to help fund the current budget deficit.
Jamaican Government economic policies encourage foreign investment in areas that earn or save foreign exchange, generate employment, and use local raw materials. The government also provides a wide range of incentives to investors.
Free trade zones have stimulated investment in garment assembly, light manufacturing, and data entry by foreign firms. However, over the last 5 years, the garment industry has suffered from reduced export earnings, continued factory closures, and rising unemployment. The Government of Jamaica hopes to encourage economic activity through a combination of privatization, financial sector restructuring, reduced interest rates, and by boosting tourism and related productive activities.
In April 2014, the Governments of Jamaica and China signed the preliminary agreements for the first phase of the Jamaican Logistics Hub (JLH) - the initiative that aims to position Kingston as the fourth node in the global logistics chain, joining Rotterdam, Dubai and Singapore, and serving the Americas. The project, when completed, is expected to provide many jobs for Jamaicans, economic zones for multinational companies, and much-needed economic growth to alleviate the country's heavy debt-to-GDP ratio. Strict adherence to the IMF's refinancing programme and preparations for the JLH have favourably affected Jamaica's credit rating and outlook from the three biggest rating agencies.
Before independence, Jamaica's economy was largely focused on agriculture with the vast majority of the labour force engaged in the production of sugar, bananas, and tobacco. According to one study, 18th century Jamaica had the highest wealth inequality in the world, as a very small, slave-owning elite was extremely wealthy while the rest of the population lived on the edge of subsistence.
These products were mainly exported to the United Kingdom, Canada, and the United States of America. Jamaica's trade relationships expanded substantially from 1938 to 1946, with total imports almost doubling from £6,485,000 to £12,452,000. After 1962, the Jamaican government pushed for industrialisation. Bauxite/alumina, energy, and tourism shrank in 1998 and 1999; in 2000, Jamaica experienced its first year of positive growth since 1995.
Inflation fell from 25% in 1995 to single digits in 2000, reaching a multidecade low of 4.3% in 2004. Through periodic intervention in the market, the central bank has also prevented any abrupt drop in the exchange rate. The Jamaican dollar has nevertheless been slipping, resulting in an average exchange rate of J$73.40 per US$1.00 and J$136.2 per €1.00 (February 2011). In addition, inflation has been trending upward since 2004 and was projected to reach a double-digit rate of 12–13% in 2008 due to a combination of unfavorable weather damaging crops, increasing agricultural imports, and high energy prices.
Over the last 30 years, real per capita GDP increased at an average of just one percent per year, making Jamaica one of the slowest growing developing countries in the world.
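To make the "one percent per year" claim concrete, a quick compounding check (a minimal sketch; the figures come from the sentence above):

```python
# Compound growth check for the claim above: ~1% real per capita GDP
# growth per year, sustained for 30 years.

annual_growth = 0.01
years = 30

cumulative = (1 + annual_growth) ** years
print(f"Cumulative growth over {years} years: {(cumulative - 1) * 100:.1f}%")
# -> about 34.8%, i.e. average living standards rose by only about a
#    third over an entire generation
```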
To reverse this trajectory, the Government of Jamaica embarked on a comprehensive and ambitious program of reforms for which it has garnered national and international support: a four-year Extended Fund Facility (EFF) from the International Monetary Fund (IMF) providing a support package of US$932 million, and World Bank Group and Inter-American Development Bank (IDB) programs providing US$510 million each to facilitate the Government's economic reform agenda to stabilize the economy, reduce debt, and create the conditions for growth and resilience. In addition, the International Finance Corporation (IFC) and Multilateral Investment Guarantee Agency (MIGA) will continue to support private sector development.
The reform program is beginning to bear fruit: institutional reforms and measures to improve the environment for the private sector have started to restore confidence in the Jamaican economy. Jamaica jumped 27 places to 58 among 189 economies worldwide in the 2015 Doing Business ranking, the country's credit rating has improved, and the Government successfully raised more than US$2 billion in the international capital markets in 2014 and 2015.
Despite some revival, economic growth is still low: the Jamaican Government is forecasting real gross domestic product (GDP) growth of 1.9 per cent for the fiscal year 2015/2016 and the country continues to be confronted by serious social issues that predominantly affect youth, such as high levels of crime and violence and high unemployment. Jamaica, which had seen its poverty rate drop almost 20 percent over two decades, saw it increase by eight percent in a few years.
The unemployment rate in Jamaica is about 13.2% (April 2015, Statistical Institute of Jamaica), with youth unemployment more than twice the national rate (38%). However, among Jamaica's assets are its skilled labor force and strong social and governance indicators.
Agricultural production is an important contributor to Jamaica's economy, but it is vulnerable to extreme weather, such as hurricanes, and to competition from neighbouring countries such as the USA. Other difficulties faced by farmers include thefts from the farm, known as praedial larceny. Agricultural production accounted for 7.4% of GDP in 1997, providing employment for nearly a quarter of the country's workforce. Jamaica's agriculture, together with forestry and fishing, accounted for about 6.6% of GDP in 1999. Sugar has been produced in Jamaica for centuries and is the nation's dominant agricultural export, produced in nearly every parish. The production of raw sugar in the year 2000 was estimated at 175,000 tons, a decrease from 290,000 tons in 1978.
Jamaican agriculture has been less prominent in GDP in the 2000s than other industries, hitting an all-time low between 2004 and 2008. This may have been a reaction to increased competition as international trade policies were enacted. For example, as NAFTA was enacted in 1993, a significant amount of Caribbean exports to the United States diminished, outcompeted by Latin American exports. Another example is the third phase of the Banana Import Regime, under which EU nations had first given priority in banana imports to previously colonized nations. Under pressure from the World Trade Organization, the EU policy was altered to provide a non-discriminatory trade agreement, and Jamaica's banana industry was easily outpriced by American companies exporting Latin American goods. Jamaica's agriculture industry is now bouncing back, growing from 6.6% of GDP to 7.2%.
Sugar formed 7.1% of exports in 1999, and Jamaica accounted for about 4.8% of total Caribbean sugar production. Sugar production also yields by-products such as molasses and rum, and some wallboard is made from bagasse.
Banana production in 1999 was 130,000 tons. Bananas formed 2.4% of exports in 1999, and Jamaica accounted for around 7.5% of total Caribbean banana production. Jamaica stopped exporting bananas in 2008 after several years of hurricanes devastated the plantations.
Coffee is mainly grown around the Blue Mountains and in hilly areas. One type in particular, Jamaican Blue Mountain Coffee, is considered among the best in the world because at those heights in the Blue Mountains, the cooler climate causes the berries to take longer to ripen and the beans develop more of the substances which on roasting give coffee its flavor. Coffee formed 1.9% of exports in 1999. The picking season lasts from August to March. The coffee is exported from Kingston.
Cocoa is grown throughout Jamaica, and local sales absorb about one-third of the output, which is made into instant drinks and confectionery. Citrus fruit is mainly grown in the central parts of Jamaica, particularly between elevations of 1,000 and 2,500 feet. The picking season lasts from November to April. Two factories in Bog Walk produce fruit juices, canned fruit, essential oils and marmalade. Coconuts are grown on the northern and eastern coasts, providing enough copra to supply factories making butterine, margarine, lard, edible oil and laundry soap.
Other export crops include pimento, ginger, tobacco, sisal, and various fruits. Rice is grown for local consumption in swampy areas around the Black River and around Long Bay in Hanover and Westmoreland parishes.
As tastes have changed in Jamaica in favor of more meat and packaged food the national food import bill has grown to the point that it threatens the health of the economy. The government has responded by encouraging gardening and farming, a response which has had limited success. For example, the percentage of potatoes grown locally has increased, but imports of french fries have continued at a high level.
Pastures form a good percentage of the land in Jamaica, and many properties specialize in cattle rearing. Livestock holdings were 400,000 head of cattle, 440,000 goats, and 180,000 hogs. Although numbers of livestock are increasing, this is not enough to meet the requirements of a growing population. Dairying has increased since the erection of a condensed milk factory at Bog Walk in 1940. Even so, the supply of dairy products does not meet local requirements, and there are large imports of powdered milk, butter and cheese.
The fishing industry grew during the 1900s, primarily from the focus on inland fishing. Several thousand fishermen make a living from fishing. The shallow waters and cays off the south coast are richer than the northern waters. Other fishermen live on the Pedro Cays, to the south of Jamaica.
Jamaica supplies about half of its fish requirements; major imports of frozen and salted fish come from the USA and Canada.
The total catch in 2000 was 5,676 tons, a decrease from 11,458 tons in 1997. The catch was mainly marine fish, along with freshwater species such as carp and barbel, as well as crustaceans and molluscs.
By the late 1890s, only of Jamaica's original of forest remained. Roundwood production was 881,000 cu m (31.1 million cu ft) in 2000. About 68% of the timber cut in 2000 was used as fuel wood, while 32% was used for industrial purposes. The forests that once covered Jamaica now exist only in mountainous areas; they supply only 20% of the island's timber requirements. The remaining forest is protected from further exploitation, and other accessible mountain areas are being reforested, mainly with pines, mahoe and mahogany.
Jamaica was the third-leading producer of bauxite and alumina in 1998, producing 12.6 million tons of bauxite, accounting for 10.4% of world production, and 3.46 million tons of alumina, accounting for 7.4% of world production. 8.54 million tons of bauxite were mined in 2012 and 10.2 million tons in 2011.
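As a sanity check on the 1998 share figures, dividing Jamaica's output by its stated share of world production gives the implied world totals (a minimal sketch; the values are taken from the paragraph above):

```python
# Back-of-the-envelope check of the 1998 figures quoted above: Jamaica's
# output divided by its share of world production implies world totals.

jamaica_bauxite_mt = 12.6   # million tons of bauxite, 1998
bauxite_share = 0.104       # 10.4% of world production

jamaica_alumina_mt = 3.46   # million tons of alumina, 1998
alumina_share = 0.074       # 7.4% of world production

print(f"Implied world bauxite output: {jamaica_bauxite_mt / bauxite_share:.0f} million tons")
print(f"Implied world alumina output: {jamaica_alumina_mt / alumina_share:.0f} million tons")
# -> about 121 million tons of bauxite and 47 million tons of alumina
```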
Mining and quarrying made up 4.1% of the nation's gross domestic product in 1999. Bauxite and alumina formed 55.2% of exports in 1999 and are the second-leading money earner after tourism. Jamaica has reserves of over 2 billion tonnes, which are expected to last 100 years. Bauxite is found in the central parishes of St. Elizabeth, Manchester, Clarendon, St. Catherine, St. Ann, and Trelawny. There are four alumina plants and six mines.
Jamaica has deposits of several million tons of gypsum on the southern slopes of the Blue Mountains. Jamaica produced 330,441 tons of gypsum in the year 2000, some of which was used in the local cement industry and in the manufacturing of building materials.
Other minerals present in Jamaica include marble, limestone, and silica, as well as ores of copper, lead, zinc, manganese and iron. Some of these are worked in small quantities. Petroleum has been sought, but so far none has been found.
The manufacturing sector is an essential contributor to the Jamaican economy, accounting for 13.9% of GDP in 1999. Jamaican manufacturing includes food processing, oil refining, chemicals, construction materials, plastic goods, paints, pharmaceuticals, cartons, leather goods and cigars, as well as assembled electronics, textiles and apparel. The garment industry is a major employer of locals; garments formed 12.9% of exports in 1999, earning US$159 million. Chemicals formed 3.3% of exports in 1999, earning US$40 million.
An oil refinery located near Kingston converts crude petroleum obtained from Venezuela into gasoline and other products, mainly for local use. The construction industry is growing as new hotels and attractions are built for tourism; construction and installation formed 10.4% of GDP in 1999.
Imported manufactured goods formed 30.3% of imports in 1999, costing US$877 million.
Since the launch of the Jamaican Logistics Hub initiative, various economic zones have been proposed throughout the country to assemble goods from other parts of the world for distribution to the Americas.
Tourism is tied with remittances as Jamaica's top source of revenue. The tourism industry earns over 50 percent of the country's total foreign exchange earnings and provides about one-fourth of all jobs in Jamaica. Most tourist activity is centered on the island's northern coast, including the communities of Montego Bay, Ocho Rios, and Port Antonio, as well as in Negril on the island's western tip.
Some destinations include Ocho Rios, Green Grotto Caves, Y.S. Falls and Appleton Estate. Most of the tourist sites are landmarks as well as home to many Jamaicans. Many of the most frequented sites lie by water, such as rivers and beaches, where fishermen make a living from seafood. One of the most famous beach towns in Jamaica is Ocho Rios, located in the parish of Saint Ann on the north coast; once a fishing village, it now attracts millions of tourists yearly, drawn by the food and culture found there.
Another famous location that attracts millions yearly is Dunn's River Falls, located in Ocho Rios; this waterfall is approximately 600 feet long and runs off into the sea. Many hotels and restaurants operate around the site, and street vendors sell food around the clock. Another well-known beach town is Negril, the party capital of the country, noted for its nightlife.
Seeking to join Singapore, Dubai and Rotterdam as the fourth node in global logistics, the Jamaican Government has embarked on a restructuring of the economy, aiming to use the island's location at the centre of north–south and east–west shipping lanes to make Jamaica the logistics hub of the Western Hemisphere for global logistics companies, serving a market of 800 million and becoming the gateway to Europe and Africa. With the establishment of the Logistics Hub, Jamaica is set to become an important part of the global value chain. Preliminary agreements with the Chinese government were signed in April 2014, marking the first step in the restructuring of the Jamaican economy.
Over the course of the last decade, Jamaica has made immense strides in developing its Information and Communications Technology (ICT) infrastructure. Today, the island is a highly competitive and attractive business destination and a regional leader in Information Technology Export Services (ITES).
As the largest English speaking territory in the Caribbean, Jamaica is the region's leading contact centre location with over 30 information communications technology/business process outsourcing (ICT/BPO) companies operating in the country employing 11,500 full-time agents.
Jamaican tax rates are favourable by world standards; the brackets are as follows:
Separate Tax Rates apply for foreign nationals.
The following table shows the main economic indicators for 1980–2018.
https://en.wikipedia.org/wiki?curid=15665
Telecommunications in Jamaica
Telecommunications in Jamaica include radio, television, fixed and mobile telephones, and the Internet.
Country Code: +1-876, +1-658
International Call Prefix: 011 (outside NANP)
Calls from Jamaica to the US, Canada, and other NANP Caribbean nations are dialed as 1 + NANP area code + 7-digit number. Calls from Jamaica to non-NANP countries are dialed as 011 + country code + phone number with local area code (see the sketch below).
Number Format: nxx-xxxx
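A minimal sketch of the outbound dialing rules described above; the function and its inputs are illustrative, not an official numbering-plan API, and the caller is assumed to know whether the destination is in the NANP.

```python
# Hypothetical helper illustrating the outbound dialing rules above.

def dial_from_jamaica(number: str, country_code: str = "") -> str:
    """Return the digit string a caller in Jamaica would dial.

    For NANP destinations (US, Canada, other NANP Caribbean nations),
    leave country_code empty and include the area code in `number`.
    """
    if not country_code:
        return "1" + number                   # 1 + NANP area code + 7 digits
    return "011" + country_code + number      # 011 + country code + number

# NANP example: a New York number is dialed as 1 + 212 + 555-0100
print(dial_from_jamaica("2125550100"))                     # -> 12125550100
# Non-NANP example: a UK (country code 44) London number
print(dial_from_jamaica("2079460000", country_code="44"))  # -> 011442079460000
```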
Jamaica has a fully digital telephone communication system.
The country has had three mobile operators: Cable and Wireless (once marketed as LIME – Landline, Internet, Mobile and Entertainment – and now named FLOW); Digicel; and Oceanic Digital (operating as MiPhone and, from late 2008, as Claro) until that carrier was acquired and its spectrum sold to Digicel. All have spent millions on network upgrades and expansion. Both Digicel and Oceanic Digital were granted licences in 2001 to operate mobile services in the newly liberalised telecom market that had once been the sole domain of the incumbent Cable and Wireless monopoly. Digicel opted for the more widely used GSM wireless system, while Oceanic opted for the CDMA standard. Cable and Wireless, which had begun with the TDMA standard, subsequently upgraded to GSM and currently utilises both standards on its network.
With wireless usage increasing, landlines supplied by Cable and Wireless have declined from just over half a million to roughly three hundred thousand as of 2006. In a bid to regain market share, Cable and Wireless launched a new landline service called HomeFone Prepaid that allows customers to pay for the minutes they use rather than a set monthly fee, much like prepaid wireless service.
Two more licenses were auctioned by the Jamaican government to provide mobile services on the island, including one that was previously owned by AT&T Wireless but never utilized, and one new license.
Another entrant to the Jamaican communications market, FLOW, laid a new submarine cable connecting Jamaica to the United States, increasing the total number of submarine cables connecting Jamaica to the rest of the world to four. The company's parent was acquired by Cable and Wireless Communications in November 2014, with the deal finalized in March 2015. The new FLOW was re-launched as a successor to LIME and the old Flow on August 31, 2015, offering mobile, fixed voice, fixed broadband and TV services, making it the first quad-play provider in Jamaica. The company runs a vast copper network (inherited from LIME) islandwide as well as a hybrid fibre-coaxial network (from the old Flow) in the metropolitan areas of Kingston and Montego Bay. It also has small fibre-to-the-home operations in certain sections of St. James that began in 2011 (under LIME). On the mobile side, the company completed its 4G HSPA+ rollout (capable of speeds up to 21 Mbit/s) across the island in November 2015 and announced plans to move to LTE within 2016. However, Digicel became the first LTE network operator in Jamaica, going live with its network on June 9, 2016.
There are no government restrictions on access to the Internet or credible reports that the government monitors e-mail or Internet chat rooms without judicial oversight.
The law provides for freedom of speech and press, and the government generally respects these rights in practice. An independent press, generally effective judicial protection, and a functioning democratic political system combine to ensure freedom of speech and press. The independent media are active and express a wide variety of views without restriction. Broadcast media were largely state owned, but open to pluralistic points of view. Although the constitution prohibits arbitrary interference with privacy, family, home, or correspondence, in practice the police conduct searches without warrants.
A law decriminalizing defamation was passed by the Jamaican House of Representatives in November 2013 after being approved unanimously by the Senate the previous July. It took six years to amend the libel and slander laws, which – although little used – made media offences punishable by imprisonment.
https://en.wikipedia.org/wiki?curid=15666
Transport in Jamaica
Transport in Jamaica consists of roadways, railways, ship and air transport, with roadways forming the backbone of the island's internal transport system.
The Jamaican road network consists of almost 21,000 kilometres of roads, of which over 15,000 kilometres is paved. The Jamaican Government has, since the late 1990s and in cooperation with private investors, embarked on a campaign of infrastructural improvement projects, one of which includes the creation of a system of freeways, the first such access-controlled roadways of their kind on the island, connecting the main population centres of the island. This project has so far seen the completion of 33 kilometres of freeway.
The Highway 2000 project, which seeks ultimately to link Kingston with Montego Bay and the north coast, is currently undergoing a series of phases/legs. Phase 1 is the highway network between Kingston and Mandeville which itself has been divided into sub-phases: Phase 1a (Kingston-Bushy Park (in actuality, Kingston-Sandy Bay) highway and the upgrade of the Portmore Causeway) which was completed June 2006, and Phase 1b (Sandy Bay-Williamsfield). Phase 2a is the highway between Old Harbour and Ocho Rios, and Phase 2b is the highway between Mandeville and Montego Bay.
"total:" .
"paved:" .
"unpaved:" (1997 est.).
Railways in Jamaica, as in many other countries, no longer enjoy the prominent position they once did, having been largely replaced by roadways as the primary means of transport. Of the 272 kilometres of railway found in Jamaica, only 57 kilometres remain in operation, currently used to transport bauxite.
In 2008, with increasing traffic congestion, moves were being made to reconstruct old railway lines.
"total:" 370 km
"standard gauge:" 370 km gauge. Of these, 207 km belong to the Jamaica Railway Corporation in common carrier service but are no longer operational. The other 163 km is privately owned and used to transport bauxite.
There are two international airports in Jamaica with modern terminals, long runways, and the navigational equipment required to accommodate the large jet aircraft used in modern air travel: Norman Manley International Airport in the capital, Kingston and Sangster International Airport in the resort city of Montego Bay. Both airports are home to the country's national airline, Air Jamaica. In addition there are local commuter airports at Tinson Pen (Kingston), Port Antonio, Ocho Rios, Mandeville, and Negril which cater to internal flights only. The Ian Fleming International Airport opened in February 2011 to serve the Ocho Rios - Port Antonio area. Many other small, rural centres are served by private fields on sugar estates or bauxite mines.
Owing to its location in the Caribbean Sea in the shipping lane to the Panama Canal and relative proximity to large markets in North America and emerging markets in Latin America, Jamaica receives high container traffic. The container terminal at the Port of Kingston has undergone large expansion in capacity in recent years to handle growth both already realised as well as that which is projected in coming years.
There are several other ports positioned around the island, including the alumina ports, Port Esquivel in St. Catherine (WINDALCO), Rocky Point in Clarendon and Port Kaiser in St. Elizabeth. Port Rhoades in Discovery Bay is responsible for transporting bauxite dried at the adjacent Kaiser plant. Reynolds Pier in Ocho Rios is responsible for exporting sugar. Montego Freeport in Montego Bay also handles a variety of cargo like (though more limited than) the Port of Kingston, mainly agricultural products. Boundbrook Port in Port Antonio exports bananas. There are also three cruise ship piers along the island, in Ocho Rios, Montego Bay and Port Antonio.
The Kingston port is situated in the Kingston Harbour which is the 7th largest natural (i.e. not man made) harbour in the world.
As the island is a large exporter of bauxite, there is considerable freighter traffic. To aid navigation, Jamaica operates nine lighthouses.
Petroleum products: .
https://en.wikipedia.org/wiki?curid=15667
Foreign relations of Jamaica
Jamaica has diplomatic relations with most nations and is a member of the United Nations and the Organization of American States. Jamaica chairs the Working Group on Smaller Economies.
Jamaica is an active member of the Commonwealth of Nations, the Non-Aligned Movement, and the Group of 77 (G-77). Jamaica is a beneficiary of the Lomé Conventions, through which the European Union (EU) grants trade preferences to selected states in Africa, the Caribbean, and the Pacific, and it played a leading role in the negotiations of the successor agreement in Fiji in 2000.
Disputes - international:
none
Illicit drugs:
Transshipment point for cocaine from Central and South America to North America and Europe; illicit cultivation of cannabis; government has an active manual cannabis eradication program
The Ministry of Foreign Affairs and Foreign Trade is the government ministry responsible for handling Jamaica's external relations and foreign trade.
Historically, Jamaica has had close ties with the UK. Trade, financial, and cultural relations with the United States are now predominant. Jamaica is linked with the other countries of the English-speaking Caribbean through the Caribbean Community (CARICOM), and more broadly through the Association of Caribbean States (ACS). Jamaica has served two 2-year terms on the United Nations Security Council, in 1979-80 and in 2000-2001.
In the follow-on meetings to the December 1994 Summit of the Americas, Jamaica—together with Uruguay—was given the responsibility of coordinating discussions on invigorating civil society.
Jamaica maintains economic and cultural relations with Taiwan via Taipei Economic and Cultural Office in Canada.
Jamaica has been a member state of the Commonwealth of Nations since 1962, when it became an independent Commonwealth realm.
African, Caribbean and Pacific Group of States,
Caricom,
CCC,
Caribbean Development Bank,
United Nations Economic Commission for Latin America and the Caribbean,
Food and Agriculture Organization,
G-15,
G-33,
G-77,
Inter-American Development Bank,
International Atomic Energy Agency,
International Bank for Reconstruction and Development,
International Civil Aviation Organization,
International Red Cross and Red Crescent Movement,
International Fund for Agricultural Development,
International Finance Corporation,
International Federation of Red Cross and Red Crescent Societies,
International Hydrographic Organization (pending member),
International Labour Organization,
International Monetary Fund,
International Telecommunication Union,
Intelsat,
Interpol,
International Olympic Committee,
International Organization for Migration,
International Organization for Standardization,
Latin American Economic System,
Non-Aligned Movement,
Organization of American States,
OPANAL,
Organisation for the Prohibition of Chemical Weapons,
United Nations,
UN Security Council (temporary),
United Nations Conference on Trade and Development,
UNESCO,
United Nations Industrial Development Organization,
Universal Postal Union,
World Health Organization,
World Intellectual Property Organization,
World Meteorological Organization,
World Tourism Organization,
World Trade Organization
https://en.wikipedia.org/wiki?curid=15669
Science and technology in Jamaica
Since the late 20th century, the Jamaican government has set an agenda to push the development of technology in Jamaica. The goal is to make Jamaica a significant player in the arena of information technology.
Jamaica was among the earliest developing countries to craft scientific legislation to guide the use of science and technology in exploiting domestic natural resources. In fact, the island was among the first in the American hemisphere to gain electricity, build a railway, and use research results to boost sugar cane production. The Jamaican Science and Technology Policy has two missions: 1) to improve science, technology and engineering, and 2) to leverage their use to meet societal needs.
Efforts to develop the island's science and technology education system, through institutions such as the University of Technology, have been successful, but it has been difficult to translate the results into domestic technologies, products and services because of national budgetary constraints.
https://en.wikipedia.org/wiki?curid=15670
Jan Mayen
Jan Mayen () is a Norwegian volcanic island in the Arctic Ocean, with no permanent population. It is long (southwest-northeast) and in area, partly covered by glaciers (an area of around the Beerenberg volcano). It has two parts: larger northeast Nord-Jan and smaller Sør-Jan, linked by a wide isthmus. It lies northeast of Iceland (495 km (305 mi) NE of Kolbeinsey), east of central Greenland and west of the North Cape, Norway. The island is mountainous, the highest summit being the Beerenberg volcano in the north. The isthmus is the location of the two largest lakes of the island, Sørlaguna (South Lagoon), and Nordlaguna (North Lagoon). A third lake is called Ullerenglaguna (Ullereng Lagoon). Jan Mayen was formed by the Jan Mayen hotspot.
Although administered separately, in the ISO 3166-1 standard Jan Mayen and Svalbard are collectively designated as "Svalbard and Jan Mayen", with the two-letter country code "SJ".
The island and the circa-1960s NATO base are frequently referenced in the Kathryn Bigelow film, "".
Jan Mayen Island has one exploitable natural resource, gravel, from the site at Trongskaret. Other than this, economic activity is limited to providing services for employees of Norway's radio communications and meteorological stations located on the island. Jan Mayen has one unpaved airstrip, Jan Mayensfield, which is about long. The coast has no ports or harbours, only offshore anchorages.
There are important fishing resources, and the existence of Jan Mayen establishes a large exclusive economic zone around it. A dispute between Norway and Denmark regarding the fishing exclusion zone between Jan Mayen and Greenland was settled in 1988 granting Denmark the greater area of sovereignty. Geologists suspect significant deposits of petroleum and natural gas lie below Jan Mayen's surrounding seafloors.
Jan Mayen Island is an integral part of the Kingdom of Norway. Since 1995, Jan Mayen has been administered by the County Governor ("fylkesmann") of the northern Norwegian county of Nordland, to which it is closest. However, some authority over Jan Mayen has been assigned to the station commander of the Norwegian Defence Logistics Organisation, a branch of the Norwegian Armed Forces.
The only inhabitants on the island are personnel working for the Norwegian Armed Forces and the Norwegian Meteorological Institute. Eighteen people spend the winter on the island, but the population may roughly double (to 35) during the summer, when heavy maintenance is performed. Personnel serve either six months or one year, and are exchanged twice a year, in April and October. The support crew, including mechanics, cooks, and a nurse, are among the military personnel. The military personnel operated a LORAN-C base until it closed at the end of 2015. Both the LORAN transmitter and the meteorological station are located a few kilometres from the settlement of Olonkinbyen (Olonkin City), where all personnel live.
Transport to the island is provided by C-130 Hercules military transport planes operated by the Royal Norwegian Air Force that land at Jan Mayensfield's gravel runway. The planes fly in from Bodø Main Air Station eight times a year. Since the airport does not have any instrument landing capabilities, good visibility is required, and it is not uncommon for the planes to have to return to Bodø, two hours away, without landing. For heavy goods, freight ships visit during the summer, but since there are no harbours, the ships must anchor.
The island has no indigenous population, but is assigned the ISO 3166-1 alpha-2 country code SJ (together with Svalbard). It uses the Internet country code top-level domain (ccTLD) .no (.sj is allocated but not used) and data code JN. Jan Mayen has telephone and internet connection over satellite, using Norwegian telephone numbers (country code 47). Its amateur radio call sign prefix is JX. It has a postal code, NO-8099 JAN MAYEN, but delivery time varies, especially during the winter.
Between the fifth and ninth centuries (400–900 AD), numerous communities of monks originating in Ireland (Papar) navigated throughout the north Atlantic in leather boats, exploring and sometimes settling in distant islands where their monastic communities could be separated from close contact with others. Strong indicators exist of their presence in the Faroe Islands and Iceland before the arrival of the Vikings, and medieval chronicles such as the famous "Voyage of Saint Brendan the Abbot" testify to the extensive interest in exploration at the time. A modern-day trans-Atlantic journey proved the ability of the early navigators to reach all lands of the north Atlantic even further from Ireland than Jan Mayen – and, given favorable winds, at a speed roughly equal to that of modern yachts. Though quite feasible, there is nevertheless no direct physical trace of medieval landings or settlement on Jan Mayen.
The land named "Svalbarð" ("cold coast") by the Vikings in the early medieval book Landnámabók may have been Jan Mayen (instead of Spitsbergen, renamed Svalbard by the Norwegians in modern times); the distance from Iceland to "Svalbarð" mentioned in this book is two days' sailing (with favorable winds), consistent with the approximate to Jan Mayen and not with the minimum to Spitsbergen. However much Jan Mayen may have been known in Europe at that time, it was subsequently forgotten for some centuries.
In the 17th century, many claims of the island's rediscovery were made, spurred by the rivalry on the Arctic whaling grounds, and the island received many names. According to Thomas Edge, an early 17th-century whaling captain who was often inaccurate, "William Hudson" discovered the island in 1608 and named it "Hudson's Touches" (or "Tutches"). However, Henry Hudson could only have come by on his voyage in 1607 (if he had made an illogical detour) and he made no mention of it in his journal. Douglas Hunter, in "Half Moon" (2009), believes Hudson may not have mentioned his supposed discovery of the island because he was "loath to address a crew insurrection that might well have erupted at that time, when the men realized where he was trying to take them." This is, however, merely speculation on Hunter's part. There is absolutely no evidence to support such a claim.
According to William Scoresby (1820: p.154), referring to the mistaken belief that the Dutch had discovered the island in 1611, Hull whalers discovered the island "about the same time" and named it "Trinity Island". Muller (1874: pp.190–191) took this to mean they had come upon Jan Mayen in 1611 or 1612, which was repeated by many subsequent authors. There were, in fact, no Hull whalers in either of these years, the first Hull whaling expedition having been sent to the island only in 1616 (see below). As with the previous claim made by Edge, there is no cartographical or written proof for this supposed discovery.
The first verified discoveries of Jan Mayen, by three separate expeditions, occurred in the summer of 1614, probably within one month of each other. The Dutchman Fopp Gerritsz, whilst in command of a whaling expedition sent out by the Englishman John Clarke, of Dunkirk, claimed (in 1631) to have discovered the island on June 28 and named it "Isabella". In January the "Noordsche Compagnie" (Northern Company), modelled on the Dutch East India Company, had been established to support Dutch whaling in the Arctic. Two of its ships, financed by merchants from Amsterdam and Enkhuizen, reached Jan Mayen in July 1614. The captains of these ships—Jan Jacobszoon May van Schellinkhout on the "Gouden Cath" (Golden Cat), and Jacob de Gouwenaer on the "Orangienboom" (Orange Tree)—named it "Mr. Joris Eylant" after the Dutch cartographer Joris Carolus who was on board and mapped the island. The captains acknowledged that a third Dutch ship, the "Cleyn Swaentgen" (Little Swan) captained by Jan Jansz Kerckhoff and financed by "Noordsche Compagnie" shareholders from Delft, had already been at the island when they arrived. They had assumed the latter, who named the island "Maurits Eylandt" (or Mauritius) after Maurice of Nassau, Prince of Orange, would report their discovery to the States General. However, the Delft merchants had decided to keep the discovery secret and returned in 1615 to hunt for their own profit. The ensuing dispute was only settled in 1617, though both companies were allowed to whale at Jan Mayen in the meantime.
In 1615, the English whaler Robert Fotherby went ashore. Apparently thinking he had made a new discovery, he named the island "Sir Thomas Smith's Island" and the volcano "Mount Hakluyt". On a map of c. 1634, Jean Vrolicq renamed the island "Île de Richelieu".
Jan Mayen first appeared on Willem Jansz Blaeu's 1620 edition map of Europe, originally published by Cornelis Doedz in 1606. Blaeu, who lived in Amsterdam, named it "Jan Mayen" after captain Jan Jacobszoon May van Schellinkhout of the Amsterdam-financed "Gouden Cath". Blaeu made the first detailed map of the island in his famous "Zeespiegel" atlas of 1623, establishing its current name.
From 1615 to 1638, Jan Mayen was used as a whaling base by the Dutch "Noordsche Compagnie", which had been given a monopoly on whaling in the Arctic regions by the States General in 1614. Only two ships, one from the "Noordsche Compagnie", and the other from the Delft merchants, were off Jan Mayen in 1615. The following year a score of vessels were sent to the island. The "Noordsche Compagnie" sent eight ships escorted by three warships under Jan Jacobsz. Schrobop; while the Delft merchants sent up five ships under Adriaen Dircksz. Leversteyn, son of one of the above merchants. There were also two ships from Dunkirk sent by John Clarke, as well as a ship each from London and Hull.
Heertje Jansz, master of the "Hope", of Enkhuizen, wrote a day-by-day account of the season. The ships took two weeks to reach Jan Mayen, arriving early in June. On 15 June they met the two English ships, which Schrobop allowed to remain, on condition they gave half their catch to the Dutch. The ships from Dunkirk were given the same conditions. By late July the first ship had left with a full cargo of whale oil; the rest left early in August, several filled with oil.
That year 200 men were seasonally living and working on the island at six temporary whaling stations (spread along the northwest coast). During the first decade of whaling more than ten ships visited Jan Mayen each year, while in the second period (1624 and later) five to ten ships were sent. With the exception of a few ships from Dunkirk, which came to the island in 1617 and were either driven away or forced to give a third of their catch to the Dutch, only the Dutch and merchants from Hull sent up ships to Jan Mayen from 1616 onward. In 1624 ten wooden houses were built in South Bay. About this time the Dutch appear to have abandoned the temporary stations consisting of tents of sail and crude furnaces, replacing them with two semi-permanent stations with wooden storehouses and dwellings and large brick furnaces, one in the above-mentioned South Bay and the other in the North Bay. In 1628 two forts were built to protect the stations. Among the sailors active at Jan Mayen was the later admiral Michiel Adriaensz de Ruyter. In 1633, at the age of 26, he was for the first time listed as an officer aboard "de Groene Leeuw" (The Green Lion). He again went to Jan Mayen in 1635, aboard the same ship.
In 1632 the "Noordsche Compagnie" expelled the Danish-employed Basque whalers from Spitsbergen. In revenge, the latter sailed to Jan Mayen, where the Dutch had left for the winter, to plunder the Dutch equipment and burn down the settlements and factories. Captain Outger Jacobsz of Grootebroek was asked to stay the next winter (1633/34) on Jan Mayen with six shipmates to defend the island. While a group with the same task survived the winter on Spitsbergen, all seven on Jan Mayen died of scurvy or trichinosis (from eating raw polar bear meat) combined with the harsh conditions.
During the first phase of whaling the hauls were generally good, some exceptional. For example, Mathijs Jansz. Hoepstock caught 44 whales in Hoepstockbukta in 1619, which produced 2,300 casks of whale oil. During the second phase the hauls were much lower. While 1631 turned out to be a very good season, the following year, due to the weather and ice, only eight whales were caught. In 1633 eleven ships managed to catch just 47 whales; while a meager 42 were caught by the same number in 1635. The bowhead whale was locally hunted to near-extinction around 1640 (approximately 1000 had been killed and processed on the island), at which time Jan Mayen was abandoned and stayed uninhabited for two and a half centuries.
During the International Polar Year 1882–1883 the Austro-Hungarian North Pole Expedition stayed one year at Jan Mayen. The expedition performed extensive mapping of the area, their maps being of such quality that they were used until the 1950s. The Austrian polar station on Jan Mayen Island was built and equipped in 1882 fully at Count Wilczek's own expense.
Polar bears appear on Jan Mayen, although in diminished numbers compared with earlier times. Between 1900 and 1920, there were a number of Norwegian trappers spending winters on Jan Mayen, hunting Arctic foxes in addition to some polar bears. But the exploitation soon made the profits decline, and the hunting ended. Polar bears are genetically distinguishable in this region of the Arctic from those living elsewhere.
The League of Nations gave Norway jurisdiction over the island, and in 1921 Norway opened the first meteorological station. The Norwegian Meteorological Institute annexed the middle part of the island for Norway in 1922 and the whole island in 1926 when Hallvard Devold was head of the weather observations base on the island. On 27 February 1930, the island was made "de jure" a part of the Kingdom of Norway.
During World War II, continental Norway was invaded and occupied by Germany in spring 1940. The four-man team on Jan Mayen stayed at their posts and in an act of defiance began sending their weather reports to the United Kingdom instead of Norway. The British codenamed Jan Mayen 'Island X' and attempted to reinforce it with troops to counteract any German attack. The Norwegian patrol boat ran aground on Nansenflua, one of the island's many uncharted lava reefs, and the 68-man crew abandoned ship and joined the Norwegian team on shore. The British expedition commander, prompted by the loss of the gunboat, decided to abandon Jan Mayen until the following spring and radioed for a rescue ship. Within a few days a ship arrived and evacuated the four Norwegians and their would-be reinforcements after demolishing the weather station to prevent it from falling into German hands. The Germans attempted to land a weather team on the island on 16 November 1940. The German naval trawler carrying the team crashed on the rocks just off Jan Mayen after a patrolling British destroyer had picked them up on radar. This was not a coincidence: the German plan had been compromised from the beginning, with British wireless interceptors of the Radio Security Service following the communications of the Abwehr (the German intelligence service) concerning the operation, and the destroyer had been waiting. Most of the crew struggled ashore and were taken prisoner by a landing party from the destroyer.
The Allies returned to the island on 10 March 1941, when the Norwegian ship "Veslekari", escorted by the patrol boat "Honningsvaag", dropped 12 Norwegian weathermen on the island. The team's radio transmissions soon betrayed its presence to the Axis, and German planes from Norway began to bomb and strafe Jan Mayen whenever weather would permit it, though they did little damage. Soon supplies and reinforcements arrived and even some anti-aircraft guns, giving the island a garrison of a few dozen weathermen and soldiers. By 1941, Germany had given up hope of evicting the Allies from the island and the constant air raids stopped.
On 7 August 1942, a German Focke-Wulf Fw 200 "Condor", probably on a mission to bomb the station, smashed into the nearby mountainside of Danielssenkrateret in fog, killing all nine crew members. In 1950, the wreck of another German plane with four crew members was discovered on the southwest side of the island. In 1943, the Americans established a radio locating station named Atlantic City in the north to try to locate German radio bases in Greenland.
After the war, the meteorological station was located at Atlantic City, but moved in 1949 to a new location. Radio Jan Mayen also served as an important radio station for ship traffic in the Arctic Ocean. In 1959, NATO decided to build the LORAN-C network in the Atlantic Ocean, and one of the transmitters had to be on Jan Mayen. By 1961, the new military installations, including a new airfield, were operational.
For some time, scientists doubted whether there could be any activity in the Beerenberg volcano, but in 1970 the volcano erupted, adding more land mass to the island during the three to four weeks the eruption lasted. Further eruptions followed in 1973 and 1985. During an eruption, the sea temperature around the island may rise well above its usual level of just above freezing.
Historic stations and huts on the island are Hoyberg, Vera, Olsbu, Puppebu (cabin), Gamlemetten or Gamlestasjonen (the old weather station), Jan Mayen Radio, Helenehytta, Margarethhytta, and Ulla (a cabin at the foot of the Beerenberg).
A regulation dating from 2010 renders the island a nature reserve under Norwegian jurisdiction. The aim of this regulation is to ensure the preservation of a pristine Arctic island and the marine life nearby, including the ocean floor. Landings by boat are permitted only at one small part of the island, named Båtvika (Boat Bay). No commercial airline serves the island, so the only way to arrive by air is by chartered plane, and permission for such landings must be obtained in advance. Permission to stay on the island must likewise be obtained in advance, and is generally limited to a few days (or even hours). Putting up a tent or setting up camp is prohibited. There is a separate regulation for the stay of foreigners.
Jan Mayen consists of two geographically distinct parts. Nord-Jan has a round shape and is dominated by the high Beerenberg volcano with its large ice cap, which can be divided into twenty individual outlet glaciers, the largest of which is Sørbreen. South-Jan is narrow, comparatively flat and unglaciated; its highest elevation is Rudolftoppen. The station and living quarters are located on South-Jan. The island lies at the northern end of the Jan Mayen Microcontinent. The microcontinent was originally part of the Greenland Plate, but now forms part of the Eurasian Plate.
The island was identified as an Important Bird Area (IBA) by BirdLife International because it is a breeding site for large numbers of seabirds, supporting populations of northern fulmars (78,000–160,000 pairs), little auks (10,000–100,000 pairs), thick-billed guillemots (74,000–147,000 pairs) and black guillemots (100–1,000 pairs).
Jan Mayen has a hyperoceanic polar climate, similar to Greenland and Svalbard, with a Köppen classification of "ET". The Gulf Stream's powerful influence makes seasonal temperature variation extremely small considering the latitude of the island, with only a modest difference between the warmest month (August) and the coldest (February), but it also makes the island extremely cloudy, with little sunshine even during the continuous polar day. The deep snow cover prevents any permafrost from developing despite a mean annual temperature slightly below freezing.
|
https://en.wikipedia.org/wiki?curid=15673
|
Jarvis Island
Jarvis Island (; formerly known as Bunker Island or Bunker's Shoal) is an uninhabited coral island located in the South Pacific Ocean, about halfway between Hawaii and the Cook Islands. It is an unincorporated, unorganized territory of the United States, administered by the United States Fish and Wildlife Service of the United States Department of the Interior as part of the National Wildlife Refuge system. Unlike most coral atolls, the lagoon on Jarvis is wholly dry.
Jarvis is one of the Line Islands and for statistical purposes is also grouped as one of the United States Minor Outlying Islands. Jarvis Island is the largest of three U.S. equatorial possessions, along with Baker Island and Howland Island.
While a few offshore anchorage spots are marked on maps, Jarvis Island has no ports or harbors, and swift currents are a hazard. There is a boat landing area in the middle of the western shoreline near a crumbling day beacon, and another near the southwest corner of the island. The center of Jarvis Island is a dried lagoon where deep guano deposits accumulated; these were mined for about 20 years during the nineteenth century. The island has a tropical desert climate, with high daytime temperatures, constant wind, and strong sun. Nights, however, are quite cool. The ground is mostly sandy and reaches 23 feet (7 meters) at its highest point. Because of the island's distance from other large landmasses, the Jarvis Island high point is the 36th most isolated peak in the world. The low-lying coral island has long been noted as hard to sight from small ships and is surrounded by a narrow fringing reef.
Jarvis Island is one of two United States territories in the southern hemisphere (the other is American Samoa). Located just south of the equator, Jarvis has no known natural freshwater lens and scant rainfall. This creates a very bleak, flat landscape without any plants larger than shrubs. There is no evidence that the island has ever supported a self-sustaining human population. Its sparse bunch grass, prostrate vines and low-growing shrubs are primarily a nesting, roosting, and foraging habitat for seabirds, shorebirds, and marine wildlife.
Jarvis Island was once one of the largest seabird breeding colonies in the tropical ocean, but guano mining and the introduction of rodents ruined much of the island's native wildlife. Just eight breeding species were recorded in 1982, compared with thirteen in 1996 and fourteen in 2004. The Polynesian storm petrel returned after an absence of over 40 years, and the number of brown noddies multiplied from just a few birds in 1982 to nearly 10,000. Just twelve gray-backed terns were recorded in 1982, but by 2004 over 200 nests were found on the island.
Jarvis Island was submerged underwater during the latest interglacial period, roughly 125,000 years ago, when sea levels were 5–10 meters higher than today. As the sea level declined, the horseshoe-shaped lagoon was formed in the center of Jarvis Island.
Jarvis Island is located in the Samoa Time Zone (UTC -11:00), the same time zone as American Samoa, Kingman Reef, Midway Atoll and Palmyra Atoll.
The island's first known sighting by Europeans was on August 21, 1821, by the British ship "Eliza Francis" (or "Eliza Frances"), owned by Edward, Thomas and William Jarvis and commanded by Captain Brown. The island was visited by whaling vessels until the 1870s.
The US Exploring Expedition surveyed the island in 1841. In March 1857 the island was claimed for the United States under the Guano Islands Act and formally annexed on February 27, 1858.
The American Guano Company, incorporated in 1857, established claims in respect of Baker Island and Jarvis Island, which were recognized under the U.S. Guano Islands Act of 1856. Beginning in 1858, several support structures were built on Jarvis Island, along with a two-story, eight-room "superintendent's house" featuring an observation cupola and wide verandahs. Tram tracks were laid down for bringing mined guano to the western shore. One of the first loads was taken by Samuel Gardner Wilder.
For the following twenty-one years, Jarvis was commercially mined for guano, which was sent to the United States as fertilizer, but the island was abruptly abandoned in 1879, leaving behind about a dozen buildings and 8,000 tonnes of mined guano.
New Zealand entrepreneurs, including photographer Henry Winkelmann, then made unsuccessful attempts to continue guano extraction on Jarvis, and the two-story house was sporadically inhabited during the early 1880s. Squire Flockton was left alone on the island as caretaker for several months and committed suicide there in 1883, apparently from gin-fueled despair. His wooden grave marker was a carved plank which could be seen in the island's tiny four-grave cemetery for decades.
John T. Arundel & Co. resumed mining guano from 1886 to 1899. The United Kingdom annexed the island on June 3, 1889. Phosphate and copra entrepreneur John T. Arundel visited the island in 1909 on the maiden voyage of the "S.S. Ocean Queen", and near the beach landing on the western shore members of the crew built a pyramidal day beacon made from slats of wood, painted white. The beacon was still standing in 1935 and remained until at least 1942.
On August 30, 1913, the barquentine "Amaranth" (C. W. Nielson, captain) was carrying a cargo of coal from Newcastle, New South Wales, to San Francisco when it wrecked on Jarvis' southern shore. Ruins of ten wooden guano-mining buildings, the two-story house among them, could still be seen by the "Amaranth" crew, who left Jarvis aboard two lifeboats. One reached Pago Pago, American Samoa, and the other made Apia in Western Samoa. The ship's scattered remains were noted and scavenged for many years, and rounded fragments of coal from the "Amaranth"'s hold were still being found on the south beach in the late 1930s.
Jarvis Island was reclaimed by the United States government and colonized from March 26, 1935, onwards, under the American Equatorial Islands Colonization Project (see also Howland Island and Baker Island). President Franklin D. Roosevelt assigned administration of the island to the U.S. Department of the Interior on May 13, 1936. Starting out as a cluster of large, open tents pitched next to the still-standing white wooden day beacon, the Millersville settlement on the island's western shore was named after an official of the federal Bureau of Air Commerce. The settlement grew into a group of shacks built mostly with wreckage from the "Amaranth" (lumber from which was also used by the young Hawaiian colonists to build surfboards), but later, stone and wood dwellings were built and equipped with refrigeration, radio equipment, and a weather station. A crude aircraft landing area was cleared on the northeast side of the island, and a T-shaped marker intended to be seen from the air was made from gathered stones, but no airplane is known to have ever landed there.
At the beginning of World War II, an Imperial Japanese Navy submarine surfaced off the west coast of the island. Believing that it was a U.S. Navy submarine which had come to fetch them, the four young colonists rushed down the steep western beach in front of Millersville towards the shore. The submarine answered their waves with fire from its deck gun, but no one was hurt in the attack. On February 7, 1942, the USCGC "Taney" evacuated the colonists, then shelled and burned the dwellings. The roughly cleared landing area on the island's northeast end was later shelled by the Japanese, leaving crater holes.
Jarvis was visited by scientists during the International Geophysical Year from July 1957 until November 1958. In January 1958 all scattered building ruins from both the nineteenth century guano diggings and the 1935–1942 colonization attempt were swept away without a trace by a severe storm which lasted several days and was witnessed by the scientists. When the IGY research project ended the island was abandoned again. By the early 1960s a few sheds, a century of accumulated trash, the scientists' house from the late 1950s and a solid, short lighthouse-like day beacon built two decades before were the only signs of human habitation on Jarvis.
On June 27, 1974, Secretary of the Interior Rogers Morton created Jarvis Island National Wildlife Refuge, which was expanded in 2009 to add the submerged lands surrounding the island. The refuge now includes the island itself and the surrounding waters. Along with six other islands, Jarvis was administered by the U.S. Fish and Wildlife Service as part of the Pacific Remote Islands National Wildlife Refuge Complex. In January 2009, that entity was upgraded to the Pacific Remote Islands Marine National Monument by President George W. Bush.
A feral cat population, descended from cats likely brought by colonists in the 1930s, caused serious disruption to the island's wildlife and vegetation. Removal efforts began in the mid-1960s, and the cats were completely eradicated by 1990. Nineteenth-century tram track remains can be seen in the dried lagoon bed at the island's center, and the late 1930s-era lighthouse-shaped day beacon still stands on the western shore at the site of Millersville.
Public entry to Jarvis Island, including by U.S. citizens, requires a special-use permit and is generally restricted to scientists and educators. The U.S. Fish and Wildlife Service and the United States Coast Guard periodically visit the island.
There is no airport on the island, nor does the island contain any large terminal or port. There is a day beacon near the middle of the west coast. Some offshore anchorage is available.
As a U.S. territory, the defense of Jarvis Island is the responsibility of the United States. All laws of the United States are applicable on the island.
|
https://en.wikipedia.org/wiki?curid=15683
|
Jersey
Jersey, officially the Bailiwick of Jersey (Jèrriais: "Bailliage dé Jèrri"), is a British Crown dependency located near the coast of Normandy, France. It is the second-closest of the Channel Islands to France, after Alderney.
Jersey was part of the Duchy of Normandy, whose dukes went on to become kings of England from 1066. After Normandy was lost by the kings of England in the 13th century, and the ducal title surrendered to France, Jersey and the other Channel Islands remained attached to the English crown.
The bailiwick consists of the island of Jersey, the largest of the Channel Islands, along with surrounding uninhabited islands and rocks collectively named Les Dirouilles, Les Écréhous, Les Minquiers, Les Pierres de Lecq, and other reefs. Although the bailiwicks of Jersey and Guernsey are often referred to collectively as the Channel Islands, the "Channel Islands" are not a constitutional or political unit. Jersey has a separate relationship to the Crown from the other Crown dependencies of Guernsey and the Isle of Man, although all are held by the monarch of the United Kingdom.
Jersey is a self-governing parliamentary democracy under a constitutional monarchy, with its own financial, legal and judicial systems, and the power of self-determination. The Lieutenant Governor on the island is the personal representative of the Queen.
Jersey is not part of the United Kingdom, and has an international identity separate from that of the UK, but the UK is constitutionally responsible for the defence of Jersey. The definition of United Kingdom in the British Nationality Act 1981 is interpreted as including the UK and the Islands together. The European Commission confirmed in a written reply to the European Parliament in 2003 that Jersey was within the Union as a European Territory for whose external relationships the UK is responsible. Jersey was not fully part of the European Union but had a special relationship with it, notably being treated as within the European Community for the purposes of free trade in goods.
British cultural influence on the island is evident in its use of English as the main language and the British pound as its primary currency, although some people still speak or understand Jèrriais, the local form of the Norman language, and place names with French or Norman origins abound. Additional British cultural commonalities include driving on the left, access to the BBC and ITV regions, a school curriculum following that of England, and the popularity of British sports, including cricket.
The Channel Islands are mentioned in the Antonine Itinerary as the following: "Sarnia", "Caesarea", "Barsa", "Silia" and "Andium", but Jersey cannot be identified specifically because none corresponds directly to the present names. The name "Caesarea" has been used as the Latin name for Jersey (also in its French version "Césarée") since William Camden's "Britannia", and is used in titles of associations and institutions today. The Latin name "Caesarea" was also applied to the colony of New Jersey as "Nova Caesarea".
"Andium", "Agna" and "Augia" were used in antiquity.
Scholars variously surmise that "Jersey" and "Jèrri" derive from "jarð" (Old Norse for "earth") or "jarl" (earl), or perhaps a personal name, "Geirr" ("Geirr's Island"). The ending "-ey" denotes an island (as in Guernsey or Surtsey).
Jersey history is influenced by its strategic location between the northern coast of France and the southern coast of England; the island's recorded history extends over a thousand years.
La Cotte de St Brelade is a Palaeolithic site inhabited before rising sea levels transformed Jersey into an island. Jersey was a centre of Neolithic activity, as demonstrated by the concentration of dolmens. Evidence of Bronze Age and early Iron Age settlements can be found in many locations around the island.
Archaeological evidence of Roman influence has been found, in particular at Les Landes, the coastal headland site at Le Pinacle, where remains of a primitive structure are attributed to Gallo-Roman temple worship ("fanum").
Jersey was part of Neustria, with the same Gallo-Frankish population as the continental mainland. Jersey, the whole of the Channel Islands and the Cotentin peninsula (probably with the Avranchin) came formally under the control of the duke of Brittany during the Viking invasions, because the king of the Franks was unable to defend them; they remained, however, in the archbishopric of Rouen. Jersey was invaded by Vikings in the 9th century. In 933 it was annexed to the future Duchy of Normandy, together with the other Channel Islands, Cotentin and Avranchin, by William Longsword, count of Rouen, and it became one of the Norman Islands. When William's descendant, William the Conqueror, conquered England in 1066, the Duchy of Normandy and the kingdom of England were governed under one monarch. The Dukes of Normandy owned considerable estates on the island, and Norman families living on those estates established many of the historical Norman-French Jersey family names. King John lost all his territories in mainland Normandy in 1204 to King Philip II Augustus, but retained possession of Jersey and the other Channel Islands.
In the Treaty of Paris (1259), the English king formally surrendered his claim to the duchy of Normandy and ducal title, and since then the islands have been internally self-governing territories of the English crown and latterly the British crown.
On 7 October 1406, 1,000 French men-at-arms led by Pero Niño invaded Jersey, landing at St Aubin's Bay. They defeated the 3,000 defenders but failed to capture the island.
In the late 16th century, islanders travelled across the North Atlantic to participate in the Newfoundland fisheries. In recognition for help given to him during his exile in Jersey in the 1640s, King Charles II of England gave Vice Admiral Sir George Carteret, bailiff and governor, a large grant of land in the American colonies in between the Hudson and Delaware rivers, which he promptly named New Jersey. It is now a state in the United States.
Aware of the military importance of Jersey, the British government had ordered that the bailiwick be heavily fortified. On 6 January 1781, a French invasion force of 2,000 men set out to take over the island, but only half of the force arrived and landed. The Battle of Jersey lasted about half an hour, with the English successfully defending the island. There were about thirty casualties on each side, and the English took 600 French prisoners who were subsequently sent to England. The French commanders were slain.
Trade laid the foundations of prosperity, aided by neutrality between England and France. The Jersey way of life involved agriculture, milling, fishing, shipbuilding and production of woollen goods. 19th-century improvements in transport links brought tourism to the island.
During the Second World War, some citizens were evacuated to the UK, but most remained. Jersey was occupied by Germany from 1 July 1940 until 9 May 1945, when Germany surrendered. During this time the Germans constructed many fortifications using Soviet slave labour. After the D-Day landings in June 1944, supplies from mainland France were cut off, and food on the island became scarce. The SS "Vega" was sent to the island carrying Red Cross supplies and news of the success of the Allied advance in Europe. The Channel Islands were among the last places in Europe to be liberated. 9 May is celebrated as the island's Liberation Day, with celebrations in Liberation Square.
Jersey's unicameral legislature is the States Assembly. It includes 49 elected members: 8 senators (elected on an island-wide basis), 12 Connétables (often called 'constables', heads of parishes) and 29 deputies (representing constituencies), all elected for four-year terms as from the October 2011 elections. There are also five non-voting members appointed by the Crown: the Bailiff, the Lieutenant Governor of Jersey, the Dean of Jersey, the Attorney General and Solicitor General. Jersey has one of the lowest voter turnouts internationally, with just 33% of the electorate voting in 2005, putting it well below the 77% European average for that year.
The Council of Ministers, consisting of a Chief Minister and nine ministers, makes up part of the Government of Jersey. Each minister may appoint up to two assistant ministers. A Chief Executive is head of the civil service. Some government functions are carried out in the island's 12 parishes.
The Bailiff is President (presiding officer) of the States Assembly, head of the judiciary and as civic head of the island carries out various ceremonial roles.
As one of the Crown dependencies, Jersey is autonomous and self-governing, with its own independent legal, administrative and fiscal systems. In 1973, the Royal Commission on the Constitution set out the duties of the Crown as including: ultimate responsibility for the 'good government' of the Crown dependencies; ratification of island legislation by Order in Council (Royal Assent); international representation, subject to consultation with the island authorities before concluding any agreement which would apply to them; ensuring the islands meet their international obligations; and defence.
Queen Elizabeth II reigns in Jersey as Queen of the United Kingdom and her other Realms and Territories.
"The Crown" is defined by the Law Officers of the Crown as the "Crown in right of Jersey". The Queen's representative and adviser in the island is the Lieutenant Governor of Jersey. He is a point of contact between Jersey ministers and the United Kingdom government and carries out executive functions in relation to immigration control, deportation, naturalisation and the issue of passports. Since 13 March 2017, the incumbent Lieutenant Governor has been Sir Stephen Dalton.
Jersey is a distinct jurisdiction for the purposes of conflict of laws, separate from the other Channel Islands, England and Wales, Scotland and Northern Ireland.
Jersey law has been influenced by several different legal traditions, in particular Norman customary law, English common law and modern French civil law. Jersey's legal system is therefore described as 'mixed' or 'pluralistic', and sources of law are in both French and English, although since the 1950s the main working language of the legal system has been English.
The principal court is the Royal Court, with appeals to the Jersey Court of Appeal and, ultimately, to the Judicial Committee of the Privy Council. The Bailiff is head of the judiciary; the Bailiff and the Deputy Bailiff are appointed by the Crown. Other members of the island's judiciary are appointed by the Bailiff.
Administratively, Jersey is divided into 12 parishes, all of which border the sea. They are named after the Christian saints to whom their ancient parish churches were dedicated.
The parishes of Jersey are further divided into "vingtaines" (or, in St. Ouen, "cueillettes"), historic divisions that are today used chiefly for purposes of local administration and electoral constituency.
The Connétable is the head of each parish, elected at a public election for a four-year term to run the parish and to represent the municipality in the States Assembly. The Procureur du Bien Public (two in each parish) is the legal and financial representative of the parish (elected at a public election since 2003 in accordance with the "Public Elections (Amendment) (Jersey) Law 2003"; formerly an Assembly of Electors of each parish elected the Procureurs in accordance with the "Loi (1804) au sujet des assemblées paroissiales"). A Procureur du Bien Public is elected for three years as a public trustee for the funds and property of the parish and may contract when authorised by a Parish Assembly. The Parish Assembly is the decision-making body of local government in each parish; it consists of all entitled voters of the parish.
Each parish elects its own force of Honorary Police consisting of "Centeniers", "Vingteniers" and Constable's Officers. Centeniers are elected at a public election within each parish for a term of three years to undertake policing within the parish. The Centenier is the only officer authorised to charge and bail offenders. Formerly, the senior Centenier of each parish (entitled the "Chef de Police") deputised for the Connétable in the States of Jersey when the Connétable was unable to attend a sitting of the States. This function has now been abolished.
Although diplomatic representation is reserved to the Crown, Jersey has been developing its own international identity over recent years. It negotiates directly with foreign governments on matters within the competence of the Government of Jersey. Jersey maintains the "Bureau des Iles Anglo-Normandes" in Caen, France, a permanent non-diplomatic representation. A similar office, the "Maison de Normandie" in St. Helier, represents the "Conseil général" of Manche and the "Regional Council" of Normandy. It also houses the Consulate of France. In July 2009, a Channel Islands Tunnel was proposed to connect Jersey with Lower Normandy.
Jersey is a member of the British-Irish Council, the Commonwealth Parliamentary Association and the Assemblée parlementaire de la Francophonie. Jersey wants to become a full member of the Commonwealth in its own right.
In 2007, the Chief Minister and the UK Lord Chancellor signed an agreement that established a framework for the development of the international identity of Jersey.
In January 2011, the Chief Minister designated one of his assistant ministers as having responsibility for external relations; he is now often described as the island's 'foreign minister'.
Tax information exchange agreements (TIEAs) have been signed directly by the island with several countries.
Jersey is neither a Member State of the European Union, nor does it have a European Union Association Agreement. It did, however, have a relationship with the EU governed by article 355(5)(c) TFEU, giving effect to Protocol 3 to the UK's Treaty of Accession in 1972. However, Jersey did not appear on the list of European States and Territories outside the Union and the Communities prepared by the European Council and the Commission. This is a result of the manner of implementation of the Treaty arrangements under the Act of Accession in 1972. Jersey would otherwise have been fully within the European Communities, like Gibraltar, as a European territory for whose external relations the United Kingdom was responsible, but its position was limited to the Protocol 3 arrangements under article 355 TFEU to reflect the then existing relationship with the United Kingdom.
Under Protocol 3, Jersey was part of the European Union Customs Union of the European Community. The common customs tariff, levies and other agricultural import measures applied to trade between the island and non-Member States, and there was free movement of goods and trade between the island and Member States. EU rules on freedom of movement for workers did not apply in Jersey. However, Article 4 of the Protocol required the island's authorities to give the same treatment to all natural and legal persons of the Communities. In Pereira, the ECJ held that the scope of this article included any matter governed by the Treaties in a territory where the Treaties are fully applicable. The island was therefore within the scope of the Treaties to a limited extent, as a European territory. The inference, which the French Ambassador and finance minister had attempted to draw, that the island was outside the European Union and Communities without qualification is therefore simplistic and, in law, false; the German blacklisting of the island had to be hastily revoked when this was pointed out. As a result, Jersey was not part of the single market in financial services and was not required to implement EU Directives on such matters as the movement of capital, company law or money laundering. However, the island's close proximity to the U.K. (135 km to the south) and its close association with the U.K.'s financial sector have come under increasing scrutiny in recent years, with several mainstream publications (e.g., "The Wall Street Journal") labelling the island a tax haven.
British citizens whose only connection is with Jersey, and not with the United Kingdom or another member state of the European Union, were not considered by the Jersey States to be European Union citizens. They have 'Islander status' and their Jersey-issued British passports are endorsed with the words "the holder is not entitled to benefit from EU provisions relating to employment or establishment". This will continue until the end of the Brexit transition period.
However, it is not yet clear whether the citizenship rights in articles 18 and 21 TFEU were partly available to them as British citizens, given the limited restriction of their rights under article 2 of the Protocol. That restriction on the exercise of certain freedoms did not apply to all Community or Union rights. The freedom of movement under the prior EC régime was and remains a separate set of rights from the citizenship rights under articles 20 and 21 TFEU, which include the right to move and reside; those are primary citizenship rights, not a mere freedom. Perfecting this might not require a Treaty change, merely a preliminary ruling from the CJEU and supplementary implementation measures from the Council, given the effective right of entrance and residence granted to EU nationals via Article 4 of the Protocol. Jersey residents did not have a right to vote in elections for the European Parliament. Jersey and Guernsey jointly opened an office in Brussels in 2010 to promote their common interests with European Union institutions.
The effect of the UK leaving the European Union is uncertain. The UK has confirmed that the Crown dependencies' position will be argued in the Brexit negotiations.
The question of an independent Jersey has been discussed from time to time in the States Assembly. In 2005–08, a working group of the States Assembly examined the options for independence, concluding that Jersey 'is equipped to face the challenges of independence' but making no recommendations. Proposals for Jersey independence continue to be discussed outside the States.
In October 2012, the Council of Ministers issued a "Common policy for external relations" which noted "that it is not Government policy to seek independence from the United Kingdom, but rather to ensure that Jersey is prepared if it were in the best interests of islanders to do so". On the basis of the established principles the Council of Ministers decided to "ensure that Jersey is prepared for external change that may affect the island's formal relationship with the United Kingdom and/or European Union".
Jersey is an island of 66,436 vergées, including reclaimed land and intertidal zone. It lies in the English Channel, off the Cotentin Peninsula in Normandy, France, and south of Great Britain. It is the largest and southernmost of the Channel Islands, with a maximum land elevation of 143 m (469 ft) above sea level.
The two central parishes, St. John and St. Lawrence, occupy the centre of the island and offer many direct routes from north to south through a number of valleys, including Waterworks Valley.
The climate is an oceanic climate with mild winters and mild to warm summers.
The Atlantic Ocean has a moderating effect on temperature in Jersey, as water has a much greater specific heat capacity than air and tends to heat and cool slowly throughout the year. This has a warming influence on coastal areas in winter and a cooling influence in summer. The highest temperature recorded was 36.0 °C (96.8 °F) on 9 August 2003 and again on 23 July 2019, and the lowest temperature recorded was −10.3 °C (13.5 °F) on 5 January 1894. By comparison, higher temperatures occur in the mainland United Kingdom, where 38.5 °C (101.3 °F) was recorded in Faversham, Kent, on 10 August 2003. The impact of the Atlantic Ocean and coastal winds ensures that Jersey is slightly cooler than the southern and central parts of England during the summer months.
Snow falls rarely in Jersey; some years pass with no snowfall at all.
The terrain consists of a plateau sloping from long sandy bays in the south to rugged cliffs in the north. The plateau is cut by valleys running generally north-south.
The official climate averages for Jersey for 1981–2010 are those recorded at Jersey Airport, located outside St. Helier.
Jersey's economy is based on financial services (40% of GVA in 2012), tourism and hospitality (hotels, restaurants, bars, transport and communications totalling 8.4% of GVA in 2012), retail and wholesale (7% of GVA in 2012), construction (6.2% of GVA in 2012) and agriculture (1.3% of GVA in 2012).
Thanks to specialisation in a few high-return sectors, at purchasing power parity Jersey has high economic output per capita, substantially ahead of all of the world's large developed economies. Gross national income in 2009 was £3.7 billion (approximately £40,000 per head of population). However, this is not indicative of each individual resident's purchasing power, and the actual standard of living in Jersey is comparable to that in the United Kingdom outside central London. The island is recognised as one of the leading offshore financial centres. The growth of this sector however has not been without its controversies as Jersey has been characterised by critics and detractors as a place in which the "leadership has essentially been captured by global finance, and whose members will threaten and intimidate anyone who dissents." In June 2005 the States introduced the Competition (Jersey) Law 2005 to regulate competition and stimulate economic growth. This competition law was based on that of other jurisdictions.
Tourism supports not only hotels, but also retail and services: in 2015 there were 717,600 visitors spending £243 million. Duty-free goods are available for purchase on travel to and from the island.
In 2009, 57% of the island's area was agricultural land (an increase on 2008). Major agricultural products are potatoes and dairy produce; agriculture's share of GVA increased 5% in 2009, a fifth successive year of growth. Jersey cattle are a small breed of cow widely known for its rich milk and cream; the quality of its meat is also appreciated on a small scale. The herd total in 2009 was 5,090 animals. Fisheries and aquaculture make use of Jersey's marine resources, to a total value of over £6 million in 2009.
Farmers and growers often sell surplus food and flowers in boxes on the roadside, relying on the honesty of customers to drop the correct change into the money box and take what they want. In the 21st century, diversification of agriculture and amendments in planning strategy have led to farm shops replacing many of the roadside stalls.
53,460 people were employed in Jersey: 24% in financial and legal services; 16% in wholesale and retail trades; 16% in the public sector; 10% in education, health and other private sector services; 10% in construction and quarrying; and 9% in hotels, restaurants and bars.
Jersey, along with Guernsey, has its own lottery, the Channel Islands Lottery, launched in 1975.
On 18 February 2005, Jersey was granted Fairtrade Island status.
Until the 20th century, the States relied on indirect taxation to finance the administration of Jersey. The levying of "impôts" (duties) different from those of the United Kingdom was granted by Charles II and remained in the hands of the Assembly of Governor, Bailiff and Jurats until 1921, when that body's tax-raising powers were transferred to the Assembly of the States, leaving the Assembly of Governor, Bailiff and Jurats to serve simply as a licensing bench for the sale of alcohol (this fiscal reform also stripped the Lieutenant-Governor of most of his remaining effective administrative functions). The Income Tax Law of 1928, introducing income tax, was the first law drafted entirely in English. Income tax has been levied at a flat rate of 20%, a rate set by the occupying Germans during the Second World War.
Because value added tax (VAT) has not been levied in the island, luxury goods have often been cheaper than in the UK or in France, providing an incentive for tourism from neighbouring countries. The absence of VAT has also led to the growth of the fulfilment industry, whereby low-value luxury items, such as videos, lingerie and contact lenses, are exported, avoiding VAT on arrival and thus undercutting local prices on the same products. In 2005, the Government of Jersey announced limits on licences granted to non-resident companies trading in this way. Low-value consignment relief provided the mechanism for VAT-free imports from the Channel Islands to the UK until 1 April 2012, when the UK government withdrew the relief.
Although Jersey does not have VAT, the Government of Jersey introduced a goods and services tax (GST) on 6 May 2008, at a standard rate of 3%; the rate was raised to 5% on 1 June 2011. Even with GST at only 5%, shopping in Jersey remains far more expensive than in the UK. Unlike VAT, GST also applies to food.
Jersey is not subject to European Union fiscal legislation, and its "Zero/Ten" corporate tax legislation will be compliant with the Code of Conduct on business taxation once the deemed distribution and attribution anti-avoidance legislation, which was apparently criticised by certain unnamed members of the Code of Conduct Group (a subsidiary body of ECOFIN), is removed. The Code of Conduct Group, at least in theory, keeps most of its documentation and discussion confidential. The European Commission has confirmed that the Code is not a legal instrument and is therefore not legally binding, becoming of limited "political" authority only once a unanimous report has been adopted by the Group at the end of the Presidency concerned.
Jersey is considered a tax haven by some organisations: for example, the Financial Secrecy Index ranks Jersey 18th. Jersey does not feature, however, in the March 2019 revised EU list of non-cooperative jurisdictions for tax purposes.
Jersey issues its own postage stamps and its own banknotes and coins, which circulate alongside UK coinage, Bank of England notes, Scottish notes and Guernsey currency within the island. Jersey currency is not legal tender outside Jersey; however, in the United Kingdom it is "acceptable tender" and can be surrendered at banks within that country in exchange for Bank of England-issued currency on a like-for-like basis.
The main currency of Jersey is the pound, although the euro is accepted in many places because of the island's location.
Pound coins are issued, but are much less widely used than pound notes. Designs on the reverse of Jersey pound coins include historic ships built in Jersey and a series of the twelve parishes' crests. The milled edge of Jersey pound coins bears a motto meaning "Island of Jersey". Two-pound coins are also issued, but in very small quantities.
In July 2014, the Jersey Financial Services Commission approved the establishment of the world's first regulated Bitcoin fund, at a time when the digital currency was being accepted by some local businesses.
Censuses have been undertaken in Jersey since 1821. In the 2011 census, the total resident population was estimated to be 97,857, of whom 34% live in Saint Helier, the island's only town. Approximately half the island's population was born in Jersey; 31% of the population were born elsewhere in the British Isles, 7% in continental Portugal or Madeira, 8% in other European countries and 4% elsewhere.
The people of Jersey are often called Islanders or, in individual terms, Jerseyman or Jerseywoman. Some Jersey-born people identify as British.
Jersey belongs to the Common Travel Area and the definition of "United Kingdom" in the British Nationality Act 1981 is interpreted as including the UK and the Islands together.
For immigration and nationality purposes, the United Kingdom generally treats Jersey as though it were part of the UK. Jersey is constitutionally entitled to restrict immigration by non-Jersey residents, but control of immigration at the point of entry cannot be introduced for British, certain Commonwealth and EEA nationals without change to existing international law. Immigration is therefore controlled by a mixture of restrictions on those without "residential status" purchasing or renting property in the island and restrictions on employment. Migration policy is to move to a registration system to integrate residential and employment status. Jersey maintains its own immigration and border controls. United Kingdom immigration legislation may be extended to Jersey by order in council (subject to exceptions and adaptations) following consultation with Jersey and with Jersey's consent. Although Jersey citizens are full British citizens, an endorsement restricting the right of establishment in European Union states other than the UK is placed in the passports of British citizens connected solely with the Channel Islands and Isle of Man. Those who have a parent or grandparent born in the United Kingdom, or who have lived in the United Kingdom for five years, are not subject to this restriction.
Historical large-scale immigration was facilitated by the introduction of steamships (from 1823). By 1840, up to 5,000 English people, mostly half-pay officers and their families, had settled in Jersey. In the aftermath of the revolutions of 1848, Polish, Russian, Hungarian, Italian and French political refugees came to Jersey. Following Louis Napoléon's coup of 1851, more French "proscrits" arrived. By the end of the 19th century, well-to-do British families, attracted by the lack of income tax, were settling in Jersey in increasing numbers, establishing St Helier as a predominantly English-speaking town.
Seasonal work in agriculture depended mostly on Bretons and mainland Normans from the 19th century. The growth of tourism attracted staff from the United Kingdom. Following liberation in 1945, agricultural workers were mostly recruited from the United Kingdom, as the demands of reconstruction in mainland Normandy and Brittany absorbed domestic labour there.
Until the 1960s, the population had been relatively stable for decades at around 60,000 (excluding the Occupation years). Economic growth then spurred immigration and a rise in population, which by 2013 had reached about 100,000. From the 1960s, Portuguese workers arrived, mostly working initially in seasonal industries in agriculture and tourism.
Immigration has helped give aspects of Jersey a distinct urban character, particularly in and around the parish of St Helier, which contributes much to ongoing debates between development and sustainability throughout the island.
Religion in Jersey has a complex history, drawn largely from different Christian denominations. In 2015, Jersey's first national survey of religion found that two fifths of Jersey people have no religion, with only small numbers belonging to non-Christian religions. In total, 54% said they had some form of religion, and 7% were not sure. Of those who specified a denomination of Christianity, roughly equal proportions identified as 'Catholic' or 'Roman Catholic' (43%) and as 'Anglican' or 'Church of England' (44%); the remaining eighth (13%) gave another Christian denomination.
The established church is the Church of England, from 2015 under the See of Canterbury (previously under the Winchester diocese). Methodism found its traditional stronghold in the countryside. There is also a substantial Roman Catholic minority in Jersey. There are two Catholic private combined primary and secondary schools, both in Saint Saviour: De La Salle College, an all-boys school, and Beaulieu Convent School, an all-girls school; FCJ primary school is also in St. Saviour. A Catholic order of Sisters has a presence in school life.
Until the 19th century, indigenous Jèrriais – a variety of Norman – was the language of the island, though French was used for official business. During the 20th century, British cultural influence saw an intense language shift take place and Jersey today is predominantly English-speaking. Jèrriais nonetheless survives; around 2,600 islanders (three percent) are reckoned to be habitual speakers, and some 10,000 (12 percent) in all claim some knowledge of the language, particularly amongst the elderly in rural parishes. There have been efforts to revive Jèrriais in schools, and the highest number of declared Jèrriais speakers is in the capital.
The dialects of Jèrriais differ in phonology and, to a lesser extent, lexis between parishes, with the most marked differences to be heard between those of the west and east. Many place names are in Jèrriais, and French and English place names are also to be found. Anglicisation of the place names increased apace with the migration of English people to the island.
Some Neolithic carvings are the earliest works of artistic character to be found in Jersey. Only fragmentary wall-paintings remain from the rich mediaeval artistic heritage, after the wholesale iconoclasm of the Calvinist Reformation of the 16th century.
The island is particularly famous for the Battle of Flowers, a carnival held annually since 1902. Other festivals include "La Fête dé Noué" (Christmas festival), "La Faîs'sie d'Cidre" (cidermaking festival), the Battle of Britain air display, Jersey Live Music Festival, Branchage Film Festival, food festivals, and parish events.
The island's patron saint is Saint Helier.
BBC Radio Jersey provides a radio service, and BBC Channel Islands News with headquarters in Jersey provides a joint television news service with Guernsey. ITV Channel Television is a regional ITV franchise shared with the Bailiwick of Guernsey but with its headquarters in Jersey.
Channel 103 is a commercial radio station. Bailiwick Radio broadcasts two music services, Classics and Hits, online at bailiwickradio.com, via Apple and Android apps, and on TuneIn. Radio Youth FM is an internet radio station run by young people.
Bailiwick Express is one of Jersey's digital online news sources.
Jersey has only one newspaper, the "Jersey Evening Post", which is printed six days a week, and has been in publication since 1890.
The traditional folk music of Jersey was common in country areas until the mid-20th century. It cannot be separated from the musical traditions of continental Europe, and the majority of songs and tunes that have been documented have close parallels or variants, particularly in France. Most of the surviving traditional songs are in French, with a minority in Jèrriais.
In contemporary music, Nerina Pallot has enjoyed international success. Music festivals include Jersey Live, Weekender, Rock in the Park, Avanchi presents Jazz in July, the music section of the Jersey Eisteddfod and the Liberation Jersey Music Festival.
In 1909, T. J. West established the first cinema in the Royal Hall in St. Helier, which became known as West's Cinema in 1923 (demolished 1977). The first talking picture, "The Perfect Alibi", was shown on 30 December 1929 at the Picture House in St. Helier. The Jersey Film Society was founded on 11 December 1947 at the Café Bleu, West's Cinema. The large Art Deco Forum Cinema opened in 1935; during the German occupation it was used for German propaganda films.
The Odeon Cinema opened on 2 June 1952 and was rebranded in the early 21st century as the Forum cinema. Its owners, however, struggled to meet tough competition from the Cineworld Cinemas group, which opened a 10-screen multiplex on reclaimed land at the waterfront centre in St. Helier in December 2002, and the Odeon closed its doors in late 2008. The Odeon is now a listed building.
Since 1997, Kevin Lewis (formerly of the Cine Centre and the New Forum) has arranged the Jersey Film Festival, a charity event showing the latest releases and classic films outdoors in 35 mm on a big screen. The festival is regularly held in Howard Davis Park, St Saviour.
First held in 2008, the Branchage Jersey International Film Festival attracts filmmakers from all over the world.
Seafood has traditionally been important to the cuisine of Jersey: mussels (called "moules" in the island), oysters, lobster and crabs (especially spider crabs), as well as ormers and conger.
Jersey milk being very rich, cream and butter have played a large part in insular cooking (see Channel Island milk). However, there is no indigenous tradition of cheese making, contrary to the custom of mainland Normandy, although some cheese is produced commercially. Jersey fudge, mostly imported and made with milk from overseas Jersey cattle herds, is a popular food product with tourists.
Jersey Royal potatoes are the local variety of new potato, and the island is famous for its early crop of chats (small potatoes) from the south-facing côtils (steeply sloping fields). They were originally grown using vraic (seaweed) as a natural fertiliser, giving them their own individual taste, though only a small proportion of those grown on the island still use this method. They are eaten in a variety of ways, often simply boiled and served with butter or, when less fresh, fried in butter.
Apples were historically an important crop. "Bourdélots" are apple dumplings, but the most typical speciality is black butter ("lé nièr beurre"), a dark spicy spread prepared from apples, cider and spices. Cider used to be an important export; after decline and near-disappearance in the late 20th century, apple production is being increased and promoted. Besides cider, apple brandy is produced. Other alcoholic drinks produced include wine and, from 2013, the first commercial vodkas made from Jersey Royal potatoes.
Among other traditional dishes are cabbage loaf, Jersey wonders ("les mèrvelles"), fliottes, bean crock ("les pais au fou"), nettle ("ortchie") soup, vraic buns.
In its own right Jersey participates in the Commonwealth Games and in the biennial Island Games, which it first hosted in 1997 and more recently in 2015.
In sporting events in which Jersey does not have international representation, islanders with high athletic skill may choose to compete for any of the British Home Nations when those nations compete separately; there are, however, restrictions on subsequent transfers to represent another Home Nation.
Jersey is an associate member of the International Cricket Council (ICC). The Jersey cricket team plays in the Inter-insular match among others. The Jersey cricket team competed in the World Division 4, held in Tanzania in October 2008, after recently finishing as runners-up and therefore being promoted from the World Division 5 held in Jersey. They also competed in the European Division 2, held in Guernsey during August 2008. The youth cricket teams have been promoted to play in the European Division 1 alongside Ireland, Scotland, Denmark, the Netherlands and Guernsey. In two tournaments at this level Jersey have finished 6th.
For horse racing, Les Landes Racecourse can be found at Les Landes in St. Ouen, next to the ruins of Grosnez Castle.
The Jersey Football Association supervises football in Jersey. The Jersey Football Combination has nine teams in its top division. The Jersey national football team plays in the annual Muratti competition, among others.
Rugby union in Jersey comes under the auspices of the Jersey Rugby Association (JRA), which is a member of the Rugby Football Union of England. Jersey Reds compete in the English rugby union system; after four promotions in five seasons, the last three of which were consecutive, they competed in the second-level RFU Championship in 2012–13.
Jersey has two public indoor swimming pools. Swimming in the sea, windsurfing and other marine sports are practised. Jersey Swimming Club have organised an annual swim from Elizabeth Castle to Saint Helier Harbour for over 50 years. A round-island swim is a major challenge that a select number of swimmers have achieved. The Royal Channel Island Yacht Club is based in Jersey.
There is one facility for extreme sports and some facilities for youth sports. Jersey has one un-roofed skateboarding park. Coastal cliffs provide opportunities for rock climbing.
Two professional golfers from Jersey have won the Open Championship seven times between them; Harry Vardon won six times and Ted Ray won once. Vardon and Ray also won the U.S. Open once each. Harry Vardon's brother, Tom Vardon, had wins on various European tours.
'Jersey Sport', an independent body that promotes sport in Jersey and supports clubs, was launched in 2017.
Wace, a Norman poet of the 12th century, is Jersey's earliest known author. Printing arrived in Jersey only in the 1780s, but the island supported a multitude of regular publications in French (and Jèrriais) and English throughout the 19th century, in which poetry, most usually topical and satirical, flourished (see Jèrriais literature). The first Jèrriais book to be published was "Rimes et Poésies Jersiaises de divers auteurs réunies et mises en ordre", edited by Abraham Mourant in 1865. Writers born in Jersey include Elinor Glyn, John Lemprière, Philippe Le Sueur Mourant, Robert Pipon Marett and Augustus Asplet Le Gros. Frederick Tennyson and Gerald Durrell were among authors who made Jersey their home. Contemporary authors based in Jersey include Jack Higgins.
The Government of Jersey provides education through state schools (including a fee-paying option at secondary level) and also supports private schools. The Jersey curriculum follows England's National Curriculum, with a few adaptations for the island; for example, all Year 4 students study a six-week Jersey Studies course.
Jersey has a college of further education and university centre, Highlands College. As well as offering part-time and evening courses, Highlands is also a sixth form provider, working alongside Hautlieu School which offers the only non-fee-paying sixth form, and works collaboratively with a range of organisations including the Open University, University of Plymouth and London South Bank University. In particular students can study at Highlands for the two-year foundation degree in financial services and for a BSc in social sciences, both validated by the University of Plymouth.
The Institute of Law is Jersey's law school, providing a course for students seeking to qualify as Jersey advocates and solicitors. It also provides teaching for students enrolled on the University of London LLB degree programme, via the International Programmes. The Institute of Law also runs a 'double degree' course: students can obtain the LLB from the University of London and a "Licence en droit M1" from Toulouse 1 Capitole University, combining four years of study in both English and French. The Open University supports students in Jersey, but they pay higher fees than UK students. Private sector higher education providers include the Jersey International Business School.
Three areas of land are protected for their ecological or geological interest as Sites of Special Interest (SSI). Jersey has four designated Ramsar sites: Les Pierres de Lecq; Les Minquiers; Les Écréhous and Les Dirouilles; and the south-east coast of Jersey (a large area of intertidal zone).
Jersey is the home of the Jersey Zoo (formerly known as the Durrell Wildlife Park) founded by the naturalist, zookeeper and author Gerald Durrell.
Four species of small mammal are considered native: the wood mouse ("Apodemus sylvaticus"), the Jersey bank vole ("Myodes glareolus caesarius"), the lesser white-toothed shrew ("Crocidura suaveolens") and the French shrew ("Sorex coronatus"). Three wild mammals are well-established introductions: the rabbit (introduced in the mediaeval period), the red squirrel and the hedgehog (both introduced in the 19th century). The stoat ("Mustela erminea") became extinct in Jersey between 1976 and 2000. The green lizard ("Lacerta bilineata") is a protected species of reptile; Jersey is its only habitat in the British Isles.
The red-billed chough "Pyrrhocorax pyrrhocorax" became extinct in Jersey around 1900, when changes in farming and grazing practices led to a decline in the coastal slope habitat required by this species. Birds on the Edge, a project between the Government of Jersey, Durrell Wildlife Conservation Trust and Jersey National Trust, is working to restore Jersey's coastal habitats and reinstate the red-billed chough (and other bird species) to the island.
Jersey is the only place in the British Isles where the agile frog "Rana dalmatina" is found. The remaining population of agile frogs on Jersey is very small and is restricted to the south west of the island. The species is the subject of an ongoing programme to save it from extinction in Jersey via a collaboration between the Government of Jersey, Durrell Wildlife Conservation Trust and Jersey Amphibian and Reptile Group (JARG), with support and sponsorship from several other organisations. The programme includes captive breeding and release, public awareness and habitat restoration activities.
Trees generally considered native are the alder ("Alnus glutinosa"), silver birch ("Betula pendula"), sweet chestnut ("Castanea sativa"), hazel ("Corylus avellana"), hawthorn ("Crataegus monogyna"), beech ("Fagus sylvatica"), ash ("Fraxinus excelsior"), aspen ("Populus tremula"), wild cherry ("Prunus avium"), blackthorn ("Prunus spinosa"), holm oak ("Quercus ilex"), oak ("Quercus robur"), sallow ("Salix cinerea"), elder ("Sambucus nigra"), elm ("Ulmus" spp.) and medlar ("Mespilus germanica"). Among notable introduced species, the cabbage palm ("Cordyline australis") has been planted in coastal areas and may be seen in many gardens.
Notable marine species include the ormer, conger, bass, undulate ray, grey mullet, ballan wrasse and garfish. Marine mammals include the bottlenosed dolphin and grey seal.
Historically the island has given its name to a variety of overly large cabbage, the Jersey cabbage, also known as Jersey kale or cow cabbage.
Japanese knotweed ("Fallopia japonica") is an invasive species that threatens Jersey's biodiversity. It is easily recognisable by its hollow stems and the small white flowers produced in late summer. Other non-native species on the island include the Colorado beetle, burnet rose and oak processionary moth.
Emergency services are provided by the States of Jersey Police with the support of the Honorary Police as necessary, States of Jersey Ambulance Service, Jersey Fire and Rescue Service and the Jersey Coastguard. The Jersey Fire and Rescue Service and the Royal National Lifeboat Institution operate an inshore rescue and lifeboat service; Channel Islands Air Search provides rapid response airborne search of the surrounding waters.
The States of Jersey Fire Service was formed in 1938 when the States took over the Saint Helier Fire Brigade, which had been formed in 1901. The first lifeboat was equipped, funded by the States, in 1830. The RNLI established a lifeboat station in 1884. Border security and customs controls are undertaken by the States of Jersey Customs and Immigration Service. Jersey has adopted the 112 emergency number alongside its existing 999 emergency number.
|
https://en.wikipedia.org/wiki?curid=15693
|
History of Jersey
The island of Jersey and the other Channel Islands represent the last remnants of the medieval Duchy of Normandy that held sway in both France and England. Jersey lies in the Bay of Mont Saint-Michel and is the largest of the Channel Islands. It has enjoyed self-government since the division of the Duchy of Normandy in 1204.
The earliest evidence of human activity in Jersey dates to about 250,000 years ago (before Jersey became an island) when bands of nomadic hunters used the caves at La Cotte de St Brelade as a base for hunting mammoth and woolly rhinoceros.
Rising sea levels cut Jersey off from the mainland, and it has been an island for approximately 6,000 years; at its current extremes it measures ten miles east to west and six miles north to south. Engravings dating from at least 12,000 BC, in the Ice Age period, have been found, showing occupation by Homo sapiens.
Evidence also exists of settled communities in the Neolithic period, which is marked by the building of the ritual burial sites known as dolmens. The number, size, and visible locations of these megalithic monuments (especially La Hougue Bie) have suggested that social organisation over a wide area, including surrounding coasts, was required for the construction. Archaeological evidence also shows that trading links with Brittany and the south coast of England existed during this time.
Evidence of occupation and wealth has been discovered in the form of hoards. In 1889, during construction of a house in Saint Helier, a 746 g gold torc of Irish origin was unearthed. A Bronze Age hoard consisting of 110 implements, mostly spears and swords, was discovered in Saint Lawrence in 1976, probably a smith's stock. Hoards of coins have been discovered at La Marquanderie in Saint Brelade, Le Câtel in Trinity, and Le Câtillon in Grouville (1957).
In June 2012, two metal detectorists announced that they had uncovered what could be Europe's largest hoard of Iron Age Celtic coins: some 70,000 late Iron Age and Roman coins. The hoard is thought to have belonged to the Curiosolitae tribe fleeing Julius Caesar's armies around 50 to 60 BC.
In October 2012, another metal detectorist reported an earlier Bronze Age find, the Trinity Hoard.
Although Jersey was part of the Roman world, there is a lack of evidence to give a better understanding of the island during the Gallo-Roman period and early Middle Ages. The tradition that the island was called "Caesarea" by the Romans appears to have no basis in fact. The Roman name for the Channel Islands was "I. Lenuri" (Lenur Islands); they were occupied by Britons during their migration to Brittany (5th–6th centuries).
Various saints, such as the Celts Samson of Dol and Branwalator (Brelade), were active in the region. Tradition has it that Saint Helier, from Tongeren in modern-day Belgium, first brought Christianity to the island in the 6th century; part of the walls of the Fishermen's Chapel dates from this period. Charlemagne sent his emissary to the island (at that time called "Angia", also spelt "Agna") in 803. A chapel built around 911 now forms part of the nave of the Parish Church of St Clement.
The island took the name Jersey as a result of Viking activity in the area between the 9th and 10th centuries. The Channel Islands remained politically linked to Brittany until 933, when William Longsword, Duke of Normandy, seized the Cotentin and the islands and added them to his domain. In 1066, Duke William II of Normandy defeated Harold at Hastings to become King of England; however, he continued to rule his French possessions as a separate entity, since fealty for them was owed, as Duke, to the King of France.
According to the Rolls of the Norman Exchequer, in 1180 Jersey was divided for administrative purposes into three ministeria: "de Gorroic", "de Groceio" and "de Crapau Doit" (possibly containing four parishes each). This was a time of building or extending churches, with most parish churches in the island being built or rebuilt in a Norman style chosen by the abbey or priory to which each church had been granted; St Mary and St Martin, for example, were given to Cerisy Abbey.
The islands remained part of the Duchy of Normandy until 1204, when King Philip II Augustus of France conquered the duchy from King John of England; thanks to Pierre de Préaux, who decided to support King John, the islands remained in the personal possession of the English king and were described as being a Peculiar of the Crown. The so-called "Constitutions of King John" are the foundation of modern self-government.
From 1204 onwards, the Channel Islands ceased to be a peaceful backwater and became a potential flashpoint on the international stage between England and France. In the Treaty of Paris (1259), the King of France gave up his claim to the Channel Islands, a claim based upon his position as feudal overlord of the Duke of Normandy. The King of England gave up his claim to mainland Normandy and appointed a Warden, a position now termed Lieutenant Governor of Jersey, and a bailiff to govern in his stead. The Channel Islands were never absorbed into the Kingdom of England. However, the churches in Jersey were left under the control of the Diocese of Coutances for another 300 years.
The existing Norman customs and laws were allowed to continue, with the exception that the ultimate head of the legal system was the King of England rather than the Duke of Normandy. There was no attempt to introduce English law. The law was conducted through 12 jurats, constables ("connétable") and a bailiff ("Baillé"). These titles have different meanings and duties to those in England.
Mont Orgueil castle was built at this time to serve as a royal fortress and military base. This was needed because the island had few defences and had previously been suppressed by a fleet commanded by Eustace the Monk, a French exile who worked for the English King until 1212, when he changed sides and raided the Channel Islands on behalf of the French King. A "warden" was appointed to represent the King in the island; the title was sometimes rendered "Captain" and later became "Governor" of the island. The duties were primarily military, with the power, on the King's behalf, to appoint a bailiff, who was normally an islander. Any oppression by a bailiff or a warden was to be resolved locally or, failing that, by appeal to the King, who appointed commissioners to report on disputes.
During the Hundred Years' War, the island was attacked many times resulting in the formal creation of the Island Militia in 1337, which was compulsory for the next 600 years for all men of military age. In March 1338, a French force landed on Jersey, intent on capturing the island. Although the island was overrun, Mont Orgueil remained in English hands. The French remained until September, when they sailed off to conquer Guernsey, Alderney, and Sark. In 1339, the French returned, allegedly with 8,000 men in 17 Genoese galleys and 35 French ships. Again, they failed to take the castle and, after causing damage, withdrew.
The Black Death reached the island in 1348, ravaging the population. England's change to written English was not taken up in Jersey, where Norman-French continued in use until the 20th century. In July 1373, Bertrand du Guesclin overran Jersey and besieged Mont Orgueil. His troops succeeded in breaching the outer defences, forcing the garrison back to the keep. The garrison agreed to surrender if not relieved by Michaelmas, and du Guesclin sailed back to Brittany, leaving a small force to carry on the siege. An English relief fleet arrived in time. On 7 October 1406, 1,000 French men-at-arms led by Pero Niño, a Castilian nobleman turned corsair, invaded Jersey, landing at St Aubin's Bay; they defeated the 3,000 defenders but failed to capture the island.
The rise of Joan of Arc inspired France to evict the English from mainland France, with the exception of Calais, putting Jersey back in the front line. The French did not succeed in capturing Jersey during the Hundred Years' War, but after a secret deal between Margaret of Anjou and Pierre de Brézé to gain French support for the Lancastrian cause during the Wars of the Roses, the French captured Mont Orgueil in the summer of 1461 and held it until 1468, when Yorkist forces and local militia recaptured the castle.
Due to the island's strategic importance to the English crown, the islanders were able to negotiate, over a number of centuries, the right to retain privileges and improve on certain benefits, such as trade rights, from the King.
During the 16th century, ideas of the reformation of the church, coupled with Henry VIII of England's split with the Catholic faith, resulted in the islanders adopting the Protestant religion; in 1569 the churches moved under the control of the Diocese of Winchester. Calvinism in Jersey meant that life became very austere: laws were strictly enforced and punishment for wrongdoers was severe, but education was improved.
The excommunication of Elizabeth I of England by the Pope increased the military threat to the island, and the increasing use of gunpowder on the battlefield meant that the fortifications on the island had to be adapted. A new fortress was built to defend St Aubin's Bay; the new Elizabeth Castle was named after the queen by Sir Walter Raleigh when he was governor. The island militia was reorganised on a parish basis, and each parish had two cannon, usually housed in the church; one of the St Peter cannon can still be seen at the bottom of Beaumont Hill.
One of the favourable trade deals with England was the ability to import wool (England needed an export market but was at war with most of Europe). The production of knitwear in the island reached such a scale that it threatened the island's ability to produce its own food, so laws were passed regulating who could knit with whom and when. The use of "jersey" as a synonym for a sweater shows the industry's importance. The islanders also became involved with the Newfoundland fisheries at this time: the boats left the island in February or March, following a church service in St Brelade's church, and did not return until September or October. Colonies were established in Newfoundland.
During the 1640s, England, Ireland and Scotland were embroiled in the War of the Three Kingdoms. The civil war also divided Jersey, and while the sympathy of islanders lay with Parliament, the de Carterets (see Sir George Carteret and Sir Philippe de Carteret II) held the island for the king.
The Prince of Wales, the future Charles II, visited the island in 1646 and again in October 1649, following the trial and execution of his father, Charles I. In the Royal Square in St. Helier on 17 February 1649, Charles was publicly proclaimed king after his father's death (following the first public proclamation in Edinburgh on 5 February 1649). Parliamentarian forces eventually captured the island in 1651, with Elizabeth Castle surrendering seven weeks later. In recognition of all the help given to him during his exile, Charles II gave George Carteret, Bailiff and governor, a large grant of land in the American colonies, which he promptly named New Jersey, now part of the United States of America.
Towards the end of the 17th century, Jersey strengthened its links with the Americas when many islanders emigrated to New England and north east Canada. The Jersey merchants built up a thriving business empire in the Newfoundland and Gaspé fisheries. Companies such as Robins and the Le Boutilliers set up thriving businesses.
By the 1720s, a discrepancy in coinage values between Jersey and France was threatening economic stability. The States of Jersey therefore resolved to devalue the liard to six to the sou. The legislation to that effect implemented in 1729 caused popular riots that shook the establishment. The devaluation was therefore cancelled.
The Chamber of Commerce founded 24 February 1768 is the oldest in the Commonwealth.
The "Code" of 1771 laid down for the first time in one place the extant laws of Jersey, and from this time, the functions of the Royal Court and the States of Jersey were delimited, with sole legislative power vested in the States.
Methodism arrived in Jersey in 1774, brought by fishermen returning from Newfoundland. Conflict with the authorities ensued when men refused to attend militia drill when that coincided with chapel meetings. The Royal Court attempted to proscribe Methodist meetings, but King George III refused to countenance such interference with liberty of religion. The first Methodist minister in Jersey was appointed in 1783, and John Wesley preached in Jersey in August 1789, his words being interpreted into the vernacular for the benefit of those from the country parishes. The first building constructed specifically for Methodist worship was erected in St. Ouen in 1809.
The 18th century was a period of political tension between Britain and France, as the two nations clashed all over the world as their ambitions grew. Because of its position, Jersey was more or less on a continuous war footing.
During the American War of Independence, two attempted invasions of the island were made. In 1779, the Prince of Nassau was prevented from landing at St Ouen's Bay; on 6 January 1781, a force led by Baron de Rullecourt captured St Helier in a daring dawn raid, but was defeated by a British army led by Major Francis Peirson in the Battle of Jersey. A short-lived peace was followed by the French Revolutionary Wars and the Napoleonic Wars which, by the time they had ended, had changed Jersey forever. In 1799–1800, over 6,000 Russian troops under the command of Charles du Houx de Vioménil were quartered in Jersey after an evacuation of Holland.
The first printing press was introduced to Jersey in 1784.
The number of English-speaking soldiers stationed in the island, together with the retired officers and English-speaking labourers who came to the island in the 1820s, led the town gradually to move towards an English-speaking culture.
The "livre tournois" had been used as the legal currency for centuries. However, it was abolished during the French Revolutionary period. Although the coins were no longer minted, they remained the legal currency in Jersey until 1837, when dwindling supplies and consequent difficulties in trade and payment obliged the adoption of the pound sterling as legal tender.
The military roads constructed (on occasion at gunpoint in the face of opposition from landowners) by the governor, General George Don, to link coastal fortifications with St. Helier harbour had an unexpected effect on agriculture once peace restored reliable trade links. Farmers in previously isolated valleys were able to swiftly transport crops grown in the island's microclimate to waiting ships and then on to the markets of London and Paris ahead of the competition. In conjunction with the introduction of steamships and the development of the French and British railway systems, Jersey's agriculture was no longer as isolated as before.
The population of Jersey rose rapidly, from 47,544 in 1841 to 56,078 twenty years later, despite a 20% mortality rate amongst newborn children. Life expectancy was 35 years. Both immigration and emigration increased.
The town expanded, with many new streets and houses in a Georgian style, and in 1843 it was agreed to erect street-name signs. The Theatre Royal was built, as were Victoria College in 1852 and extensions to the harbour, which were called Victoria Harbour. Jersey issued its first coins in 1841 and exhibited 34 items at The Great Exhibition in 1851; the world's first pillar box was installed in 1852, and a paid police force was created in 1854.
Two railways were opened: the Jersey Western Railway in 1870 and the Jersey Eastern Railway in 1874. The western railway ran from St Helier to La Corbière and the eastern railway from St Helier to Gorey Pier; the two railways were never connected. Buses started running on the island in the 1920s, and the railways could not cope with the competition: the eastern railway closed in 1926 and the western railway in 1936, after a fire disaster that year.
Jersey was the fourth-largest shipbuilding area in the 19th-century British Isles, building over 900 vessels around the island. Shipbuilding declined with the coming of iron ships and steam. A number of Jersey banks, guarantors of an industry both onshore and off, failed in 1873 and 1886, even causing strife and discord in far-flung societies. The population fell slightly in the 20 years to 1881.
In the late 19th century, as the former thriving cider and wool industries declined, island farmers benefited from the development of two luxury products: Jersey cattle and Jersey Royal potatoes. The former was the product of careful and selective breeding programmes; the latter arose entirely by chance.
The anarchist philosopher, Peter Kropotkin, who visited the Channel Islands in 1890, 1896, and 1903, described the agriculture of Jersey in "The Conquest of Bread".
The 19th century also saw the rise of tourism as an important industry (linked with the improvement in passenger ships) which reached its climax in the period from the end of the Second World War to the 1980s.
Elementary education became obligatory in 1899, and free in 1907. The years before the First World War saw the foundation of cultural institutions, the Battle of Flowers and the Jersey Eisteddfod. The first aeroplanes arrived in Jersey in 1912.
In 1914, the British garrison was withdrawn at the start of the war and the militia were mobilised. Jersey men served in the British and French armed forces. Numbers of German prisoners of war were interned in Jersey. The influenza epidemic of 1918 added to the toll of war.
In 1919, imperial measurements replaced, for the most part, the traditional Jersey system of weights and measures; women aged over 30 were given the vote; and the endowments of the ancient grammar schools were repurposed as scholarships for Victoria College.
In 1921, the visit of King George V was the occasion for the design of the parish crests.
In 1923, the British government asked Jersey to contribute an annual sum towards the costs of the Empire. The States of Jersey refused and offered instead a one-off contribution to war costs. After negotiations, Jersey's one-off contribution was accepted.
The first motor car had arrived in 1899, and by the 1930s, competition from motor buses had rendered the railways unprofitable, with final closure coming in 1935 (except for the later German reintroduction of rail during the military occupation). Jersey Airport was opened in 1937 to replace the use of the beach of Saint Aubin's bay as an airstrip at low tide.
English was first permitted in debates in the States of Jersey in 1901, and the first legislation to be drawn up primarily in English was the Income Tax Law of 1928.
Following the withdrawal of defences by the British government and German bombardment, Jersey was occupied by German troops between 1940 and 1945; the Channel Islands were the only British soil occupied by German troops in World War II. About 8,000 islanders were evacuated, 1,200 islanders were deported to camps in Germany, and over 300 islanders were sentenced to the prisons and concentration camps of mainland Europe; twenty died as a result. The islanders endured near-starvation in the winter of 1944–45, after the Channel Islands had been cut off from German-occupied Europe by Allied forces advancing from the Normandy beachheads; it was avoided only by the arrival of the Red Cross supply ship "Vega" in December 1944. Liberation Day, 9 May, is marked as a public holiday.
The event which has had the most far-reaching effect on Jersey in modern times is the growth of the finance industry in the island from the 1960s onwards. With the release of the Paradise Papers, it was learned that two non-U.S. subsidiaries of Apple were domiciled in Jersey for one year (2015).
|
https://en.wikipedia.org/wiki?curid=15694
|
Geography of Jersey
This article describes the geography of Jersey, an island territory in the English Channel. The island of Jersey has an area of 119 square kilometres, with 70 kilometres of coastline. Jersey also claims a territorial sea and an exclusive fishing zone.
Jersey is the largest and southernmost of the Channel Islands. It is located north of Brittany and west of the Cotentin Peninsula in Normandy. About 30% of the population of the island is concentrated in Saint Helier, which is a parish and the capital town of the island.
Besides the main island, the bailiwick includes other islets and reefs with no permanent population: Les Écréhous, Les Minquiers, Les Pierres de Lecq, Les Dirouilles.
The highest point in the island is Les Platons, on the north coast. Parts of the parish of St Clement in the south were previously below sea level, but the construction of a seawall and infilling of low land has probably left only a few pockets of land below mean sea level. The terrain is generally low-lying on the south coast, with some rocky headlands, rising gradually to rugged cliffs along the north coast. On the west coast there are sand dunes. Small valleys run north to south across the island. Very large tidal variation exposes large expanses of sand and rock to the southeast at low tide.
Snow falls rarely in Jersey; some years pass with no snowfall at all.
The island's main natural resource is arable land: 66% of the land is used as such, and the remaining 34% is used for other purposes.
Current environmental issues for Jersey include waste disposal, air pollution and traffic.
|
https://en.wikipedia.org/wiki?curid=15695
|
Demographics of Jersey
This article is about the demographic features of the population of Jersey, including population density, ethnicity, education level, health of the populace, economic status, religious affiliations and other aspects of the population.
The Bailiwick of Jersey is a British Crown dependency off the coast of Normandy, France.
The following demographic statistics are from the CIA World Factbook, unless otherwise indicated.
In the 2011 census, the total resident population was 97,857, of whom 34% lived in Saint Helier, the island's only town. The total is increasing at the rate of approximately 1,000 per year on average, and the resident population is calculated to have passed the 100,000 level during the course of 2013. The latest estimate is 107,800 in 2019.
Age structure (2013 est.):
0–14 years: 16.1% (male 7,998; female 7,453)
15–24 years: 14.9% (male 7,243; female 7,000)
25–54 years: 41.5% (male 19,854; female 19,836)
55–64 years: 12.1% (male 5,619; female 5,954)
65 years and over: 15.4% (male 6,307; female 8,468)
Age structure (2000 est.):
0–14 years: 18% (male 8,140; female 7,563)
15–64 years: 68% (male 30,036; female 30,329)
65 years and over: 14% (male 5,454; female 7,393)
Population growth rate: 0.49% (average 1991–2005)
Birth rate: 11.0 births/1,000 population (2005)
Death rate: 8.5 deaths/1,000 population (2005)
Net migration rate: 2.81 migrant(s)/1,000 population (2000 est.)
Infant mortality rate: 4 deaths/1,000 live births (2005)
Life expectancy at birth (2000 est.): total population: 78.48 years; male: 76.07 years; female: 81.07 years
Total fertility rate: 1.56 children born/woman (2000 est.)
Nationality: noun: Jerseyman, Jerseywoman; adjective: Jersey
Ethnic groups: British and Norman-French descent; Portuguese and Polish minorities.
Religions: Anglican, Roman Catholic, Baptist, Methodist and Presbyterian.
Languages: English (official), French (official), Jèrriais (unofficial: spoken more commonly in country districts, used as a first language by around 2,600 people). Portuguese is commonly found, e.g. in notices in telephone boxes, through use by migrant workers.
82% of children in state schools achieve their reading targets – the UK average is 90%.
|
https://en.wikipedia.org/wiki?curid=15696
|
Politics of Jersey
Politics of the Bailiwick of Jersey takes place in a framework of a parliamentary representative democratic constitution.
As one of the Crown Dependencies, Jersey is autonomous and self-governing, with its own independent legal, administrative and fiscal systems.
The legislature is the States Assembly.
Executive powers are mainly exercised by a Chief Minister and nine ministers, known collectively as the Council of Ministers, which is part of the Government of Jersey. Other executive powers are exercised by the Connétable and Parish Assembly in each of the twelve parishes.
Elizabeth II's traditional title as Head of State is Duke of Normandy. "The Crown" is defined by the Law Officers of the Crown as the "Crown in right of Jersey". The Queen's representative and adviser in the island is the Lieutenant Governor of Jersey. He is a point of contact between Jersey ministers and the United Kingdom government and carries out executive functions in relation to immigration control, deportation, naturalisation and the issue of passports. Since 2017, the incumbent Lieutenant Governor has been Sir Stephen Dalton.
The Crown (not the government or parliament of Jersey) appoints the Lieutenant Governor, the Bailiff, Deputy Bailiff, Attorney General and Solicitor General. In practice, the process of appointment involves a panel in Jersey which selects a preferred candidate whose name is communicated to the UK Ministry of Justice for approval before a formal recommendation is made to the Queen.
Jersey has an unwritten constitution arising from the Treaty of Paris (1259). When Henry III and the King of France came to terms over the Duchy of Normandy, all lands except the Channel Islands recognised the suzerainty of the King of France. The Channel Islands however were never absorbed into the Kingdom of England by any Act of Union and exist as "peculiars of the Crown".
Campaigns for constitutional reform during the 19th century successfully called for: the replacement of lay Jurats with professional judges in the Royal Court to decide questions of law; the establishment of a Police Court (later known as the Magistrate's Court); the creation of a Petty Debts Court; a professional, salaried police force for St Helier in addition to the Honorary Police; and the reform of "archaic procedure of the Royal Court for criminal trials". In 1845, the elected office of deputy was created though this did little to redress the disparity of representation between the rural and urban parishes: in 1854 St Helier contained over half of the island's population, yet was able to elect only three out of the 14 deputies.
Two significant constitutional reforms took place during the 20th century. In 1946, the States of Jersey drew up plans for change following the German Occupation, which were examined by a Committee of the Privy Council. No change was made to the functions of the Bailiff. The twelve Jurats were removed from the assembly of the States of Jersey and replaced by twelve senators elected on an island-wide basis who would have no judicial functions. The twelve Rectors also lost their place in the States assembly. No reforms were made to the role of the Deputies in the assembly. The second major reforms took place in December 2005, when the States of Jersey Law 2005 came into force. This created a system of ministerial government to replace the previous committee-based administration.
Constitutional reforms continue to be debated in the island.
In 2009, the States assembly rejected proposals by the Privileges and Procedures Committee to simplify the electoral system by keeping the 12 Connétables and introducing 37 deputies elected to six "super-constituencies". In 2010, the States assembly agreed to holding elections for all seats on a single date and to cut the number of Senators from 12 to 8.
In December 2010, a committee chaired by Lord Carswell recommended changes to the role of the Bailiff, in particular that the Bailiff should cease to be the presiding officer of the States assembly.
Following widespread criticisms of the system of ministerial government introduced in December 2005, the States assembly agreed in March 2011 to establish an independent electoral commission to review the make-up of the assembly and government.
In April 2011, Deputy Le Claire lodged au Greffe a request for the Chief Minister to produce, for debate, a draft written "Constitution for Jersey"; the States assembly did not support this idea.
Within the United Kingdom government, responsibility for relations between Jersey (and the other Crown dependencies) and the United Kingdom lies with the Crown Dependencies Branch within the International Directorate of the Ministry of Justice, which has a core team of three officials, with four others and four lawyers available when required.
In 2010, the House of Commons Justice Committee, conducting an inquiry into the Crown dependencies, found that the Jersey government and those of the other islands were "with some important caveats, content with their relationship with the Ministry of Justice". Tensions have, however, arisen from time to time. In the 1980s, there were discussions about a financial contribution from Jersey towards the United Kingdom's costs in relation to defence and international representation. In March 2009, the House of Lords Constitution Committee criticised UK government proposals in the Borders, Citizenship and Immigration Bill dealing with the Common Travel Area, concluding that "the policy-making process ... has not been informed by any real appreciation of the constitutional status of the Crown dependencies or the rights of free movement of Islanders". In 2009, the UK cancelled the reciprocal health agreement with Jersey, though a new one came into effect in April 2011. From 2005 to 2011, there were protracted dealings between Jersey and the United Kingdom over Jersey's "zero-ten" tax regime and whether it would be acceptable to the European Union.
Although Jersey is for most day-to-day purposes entirely self-governing in relation to its internal affairs, the Crown retains residual responsibility for the "good government" of the island. The UK government has consistently adopted a "non-interventionist policy", and following the "high degree of consensus amongst academics, legal advisers, politicians and officials" would only intervene "in the event of a fundamental breakdown in public order or the rule of law, endemic corruption in the government or other extreme circumstances".
The 1973 Kilbrandon Report stated that "In international law the United Kingdom Government is responsible for the Islands' international relations" and "also responsible for the defence of the Islands". The United Kingdom is responsible for Jersey's international relations as an aspect of the island's status as a Crown dependency. It is now normal practice for the UK to consult the Jersey government and seek their consent before entering into treaty obligations affecting the island.
Since 2000, Jersey's "external personality" has developed, recognised in the preamble to the States of Jersey Law 2005 which refers to "an increasing need for Jersey to participate in matters of international affairs". In 2007, the Chief Minister of Jersey and the UK government agreed an "International Identity Framework", setting out the modern relationship between the United Kingdom and Jersey. The United Kingdom now issues "Letters of Entrustment" to the Jersey government, which delegate power to Jersey to negotiate international agreements on its own behalf and sign treaties in Jersey's own name rather than through the United Kingdom. This development was "strongly supported" by the House of Commons Justice Committee in its March 2010 report on the Crown Dependencies. In January 2011 Senator Freddie Cohen was appointed as Assistant Chief Minister with responsibility for UK and International Relations (in effect, Jersey's first Foreign Minister).
Jersey was neither a Member State nor an Associate Member of the European Union. It did, however, have a relationship with the EU governed by Protocol 3 to the UK's Treaty of Accession of 1972.
In relation to the Council of Europe, Jersey – as a territory for which the United Kingdom is responsible in international law – has been bound by the European Convention on Human Rights since the UK acceded to the treaty in 1951. The Human Rights (Jersey) Law 2000 makes Convention rights part of Jersey law and is based closely on the United Kingdom's Human Rights Act 1998.
During the 1980s, the question of Jersey making an annual contribution towards the United Kingdom's costs of defence and international representation undertaken on behalf of Jersey was raised. In 1987, the States of Jersey made an interim payment of £8 million while the matter was discussed. The outcome of debates within the island was that the contribution should take the form of maintaining a Territorial Army unit in Jersey. The Jersey Field Squadron (Militia), attached to the Royal Monmouthshire Royal Engineers (Militia), deploys individuals on operations in support of British Forces.
The question of Jersey's independence has been discussed from time to time in the Assembly of the States of Jersey. In 1999, a member of the government said that 'Independence is an option open to the Island if the circumstances should justify this' but the government 'does not believe independence is appropriate in the present circumstances and does not see the circumstances arising in the foreseeable future when it would be appropriate'. In 2000, Senator Paul Le Claire called for a referendum on independence, a proposal which failed to win any significant support. The Policy and Resources Committee of the States of Jersey established the Constitutional Review Group in July 2005, chaired by Sir Philip Bailhache, with terms of reference 'to conduct a review and evaluation of the potential advantages and disadvantages for Jersey in seeking independence from the United Kingdom or other incremental change in the constitutional relationship, while retaining the Queen as Head of State'. The Group's "Second Interim Report" was presented to the States by the Council of Ministers in June 2008. The report concluded that 'Jersey is equipped to face the challenges of independence' but 'whether those steps should be taken is not within the remit of this paper'.
Proposals for Jersey independence have subsequently been discussed at an international conference held in Jersey, organised by the "Jersey and Guernsey Law Review". The former Bailiff, Sir Philip Bailhache has called for changes to the Channel Islands' relationship with the United Kingdom government, arguing that 'at the very least, we should be ready for independence if we are placed in a position where that course was the only sensible option'.
In October 2012 the Council of Ministers issued a "Common policy for external relations" that set out a number of principles for the conduct of external relations in accordance with existing undertakings and agreements. This document noted that Jersey "is a self-governing, democratic country with the power of self-determination" and "that it is not Government policy to seek independence from the United Kingdom, but rather to ensure that Jersey is prepared if it were in the best interests of Islanders to do so". On the basis of the established principles the Council of Ministers decided to "ensure that Jersey is prepared for external change that may affect the Island’s formal relationship with the United Kingdom and/or European Union".
The parliamentary body responsible for adopting legislation and scrutinising the Council of Ministers is the States Assembly. Forty-nine elected members (8 Senators, 29 Deputies and 12 Connétables) sit in the unicameral assembly. There are also five non-elected, non-voting members appointed by the Crown (the Bailiff, the Lieutenant Governor, the Dean of Jersey, the Attorney General and the Solicitor General).
Decisions in the States are taken by majority vote of the elected members present and voting. The States of Jersey Law 2005 removed the Bailiff's casting vote and the Lieutenant Governor's power of veto. Although formally organised party politics plays no role in the States of Jersey assembly, members often vote together in two main blocs: a minority of members holding broadly progressive views and critical of the Council of Ministers, versus a majority of members of conservative ideology who support the Council of Ministers.
Scrutiny panels of backbench members of the assembly have been established to examine (i) economic affairs, (ii) environment, (iii) corporate services, (iv) education and home affairs and (v) health, social security and housing. The real utility of the panels is said to be "that of independent critique which holds ministers to account and constructively engages with policy which is deficient".
According to constitutional convention United Kingdom legislation may be extended to Jersey by Order in Council at the request of the Island's government. Whether an Act of the United Kingdom Parliament may expressly apply to the Island as regards matters of self-government, or whether this historic power is now in abeyance, is a matter of legal debate. The States of Jersey Law 2005 established that no United Kingdom Act or Order in Council may apply to the Bailiwick without being referred to the States of Jersey.
Historically, Jersey had a "committee-based system of administration embracing all public service functions and guaranteeing extensive involvement in policy-making for most members" of the States Assembly.
The report of a review committee chaired by Sir Cecil Clothier criticised this system of government, finding it incapable of developing high-level strategy, efficient policy-coordination or effective political leadership. The States of Jersey Law 2005 introduced a ministerial system of government. Executive powers are now exercised by a Chief Minister and nine ministers, known collectively as the Council of Ministers responsible to the States Assembly.
The Chief Minister is elected from amongst the elected members of the States. Ministers may then be proposed by the Chief Minister or by any other elected member, the final decision being made by the States assembly.
Cabinet collective responsibility among members of the Council of Ministers is a feature of the 2015 "Code of Conduct for Ministers". However, ministers retain the right to present their own policy to the assembly of the States in their capacity as a member of the assembly in domains not concerning Council policy.
The overall direction of government as agreed by the Council of Ministers is published periodically as a "strategic plan", the current one being the "Common Strategic Policy 2018 to 2022". These plans are debated and approved by the assembly of the States of Jersey and translated into action by a series of business plans for each department.
Several departments, each headed by a minister, are responsible for developing policy within the framework of the strategic plan and for implementing services. They include: the Chief Minister's Department; Economic Development; Education, Sport and Culture; Health and Social Services; Home Affairs; Housing; Environment; Social Security; Transport and Technical Services; and Treasury and Resources.
In 2000, the Clothier report noted that "over the centuries Jersey has had many parties, by which one means only a coming together of like minds to achieve a particular objective. Once achieved, the binding purpose has disappeared and the group pursuing it has dissolved. Such a grouping is not a true political party because it lacks the cement of a common philosophy of government, having only a narrow objective to hold it together until the objective is either attained or lost". Various parties have been formed over the years in Jersey, but since the 1950s the majority of candidates have stood for election unaffiliated to any political party.
Historically, two parties dominated Jersey politics. Originating in the 1770s, the "Jeannot party" formed around the radical lawyer and Connétable, Jean Dumaresq, who opposed the cabal of Jurats who surrounded Lieutenant-Bailiff Charles Lemprière (whose supporters became known as the "Charlot party"). The Jeannots rapidly adopted the nickname of "Magots" (cheese mites) after their opponents boasted of aiming to crush them like mites.
The Charlots and Magots contested power at elections until in 1819 the progressive Magots adopted the rose as their emblem, while the conservative Charlots wore laurel leaves. The symbolism soon became entrenched to the extent that gardens displayed their owners' allegiances, and pink or green paintwork also showed political sympathies. Still today in Jersey, the presence of established laurels or rose gardens in old houses gives a clue to the past party adherence of former owners, and the chair of the Constable of Saint Helier in the Assembly Room of the Parish Hall still sports the carved roses of a former incumbent.
In order to help control voting in Jersey, it was not unknown for citizens to find themselves taken and stranded on the Écréhous until after voting had taken place. By the time of the introduction of the secret ballot in 1891, party politics had waned.
"Blues" and "Reds" contested local elections into the 1920s, but Islandwide party politics lay dormant until the post-Occupation elections under the new Constitution of 1948.
The first election under the new constitution saw a struggle for dominance between the Jersey Democratic Movement and the Jersey Progressive Party, led by Cyril Le Marquand. Having achieved the political reforms it advocated the Progressive Party soon folded as an organisation, while the Democratic Movement, incorporating the tiny Communist Party of Jersey, continued in existence as a campaigning social movement until the late 20th century.
The Jersey Green Party succeeded in having candidates elected in the 1980s, but there were difficulties in maintaining a successful party structure in a consensus government system. Former Senator Stuart Syvret was often reported to be a Green and represented the Jersey Greens in the Green Islands Network.
The prospect of ministerial government and the creation of an executive and opposition, led to the formation of two political parties – the Jersey Democratic Alliance and the Centre Party – in preparation for the 2005 elections. A group called "Elect Jersey 2005" worked to assist some independent candidates prepare for the elections. None of the party-affiliated candidates was successful in the October senatorial elections; three JDA members standing as independents were elected as deputies in November 2005 along with two members of the Centre Party who had similarly stood as independents. The Centre Party was wound up in 2007.
In 2008, legislation was passed to require registration of political parties who wished to endorse candidates for election as a senator, deputy or Connétable.
In the 2008 elections for senators, the JDA fielded two candidates, two candidates stood as members of the campaign group "Jersey 2020" (focusing on environmental issues) and two for "Time4Change/Reform": none was successful. In the subsequent deputies' elections, four JDA candidates were successful, but three of them subsequently left the party and continued to sit as independents. In August 2011, the JDA announced that party members would stand only as independents in the October 2011 elections. A branch of the 'Liberal Democrats Abroad' was formed in the island in November 2011.
On 4 July 2014, Reform Jersey became Jersey's only registered political party when it was registered in the Royal Court. The party contested the 2014 general election, in which three of its eight candidates were elected.
Jersey is divided into twelve administrative districts known as parishes. All have access to the sea and are named after the saints to whom their ancient parish churches are dedicated.
The parishes of Jersey are further divided into "vingtaines" (or, in St. Ouen, "cueillettes"), divisions which are historic and nowadays mostly used for purposes of electoral constituency in municipal elections. These elections are held to elect the members of the Parish municipality. Each parish has an Honorary Police force of elected, unpaid civilians who exercise police and prosecution powers.
Elections for Senators and Deputies occur at fixed four-yearly intervals, historically in October. From 2018, elections will be held in May every fourth year.
At a local level, the Connétables (or 'constables') are elected for four years. Other posts in parish municipalities vary in length from one to three years and elections take place at a Parish Assembly on a majority basis. It has been some time since parties contested elections at this level, other than for the position of Connétable who uniquely has a role in both the national assembly and in local government.
Jersey, as a polity dominated by independents, has always had a number of pressure groups. Many ad hoc lobby groups form in response to a single issue and then dissolve once the concerns have been dealt with. However, a number of pressure groups actively work to influence government decisions on a range of issues. For example, in 2012 the National Trust engaged in a pressure campaign against development of the Plemont headland. The Trust was supported by the majority of the island's senior politicians, including the Chief Minister, but a proposition made in the States of Jersey for the States to compulsorily purchase the headland and sell it to the Trust was defeated in a vote on 13 December 2012. The outcome of the vote was 24 in favour of acquisition, 25 against, with one member absent and one declaring an interest.
|
https://en.wikipedia.org/wiki?curid=15697
|
Economy of Jersey
The economy of Jersey is largely driven by international financial services and legal services, which accounted for 40.5% of total GVA in 2010. Other sectors include construction, retail, agriculture, tourism and telecommunications.
In 2008, Jersey's gross national income per capita was among the highest in the world.
In 2011 the island's economy, as measured by GVA, declined by 1% to £3.6 billion.
Jersey-based financial organisations provide services to customers worldwide. In June 2008 it was reported that 12,070 people were employed full-time within this sector. The Royal Bank of Canada (RBC) is a major employer, with some 900 staff employed in Jersey as of March 2009.
Finance-sector profits rose to about £1.5 billion in 2007, representing a real-terms increase of 12% on 2006.
Jersey is one of the top worldwide offshore financial centres and is described by some as a tax haven. It attracts deposits from customers outside the island who seek the advantages such places offer, such as reduced tax burdens. Its taxation laws have been widely criticised by various people and groups; however, the former Chief Minister of Jersey, Terry Le Sueur, has countered these criticisms, saying that Jersey is "among cooperative finance centres". In September 2013 the UK Prime Minister, David Cameron, said it was no longer fair to refer to any of the overseas territories or Crown dependencies as tax havens, as they had taken action to ensure that they have fair and open tax systems (House of Commons Hansard, 9 September 2013). Jersey's information privacy law also provides exemptions that other European countries do not: for example, trusts do not have to disclose as much information to beneficiaries about the use of their personal data as is normally required under such laws.
Jersey's finance industry featured in a BBC Panorama documentary, titled "Tax me if you can", first broadcast on 2 February 2009.
On 4 February 2009 Jersey Finance officially announced its intention to open a new representative office in London.
At the end of 2008 deposits in Jersey banks totalled £206 billion, down £6.2 billion from £212.3 billion at the start of the year.
The first regulated Bitcoin fund was established in Jersey in July 2014, with the approval of the Jersey Financial Services Commission, after island leaders expressed a desire for Jersey to become a global centre for digital currencies. At the time of the establishment of the fund by a Jersey-based hedge fund company, Bitcoin was already being accepted by some local businesses.
Construction represented 5.2% of GVA during 2007. In June 2008 it was reported that 4,980 people were employed full-time in the construction and quarrying sector.
St. Helier, and the Waterfront area in particular, has seen much redevelopment during the early 21st century, with several projects in planning or under construction during 2009. Developments include a leisure complex, the Radisson hotel, and a new central bus station, Liberation Station. As of 2009 there were plans to sink the A1 road to provide building sites above it for offices, possibly for financial companies.
Construction grew by 8% over the period 2006 to 2007.
As of June 2008 there were 6,610 persons in full-time employment within Jersey's wholesale and retail trades.
Retail and wholesale grew by around 5% during 2007.
Sandpiper C.I. Limited operates a chain of stores in Jersey; its franchises include well-known names such as Marks & Spencer, Iceland and Costa Coffee.
A number of online retailers and fulfilment houses operate from the Channel Islands, including Jersey, supplying a variety of low-value goods such as CDs, DVDs, video games and gadgets. Residents of the EU chose to order goods from Jersey so as to benefit from a tax relief known as Low-value consignment relief (LVCR); UK residents in particular took advantage of this situation.
A local company, play.com, grew substantially during the time that LVCR applied to Jersey. Notably, Amazon UK also took advantage of the relief by dispatching some low-value items from Jersey.
In April 2012 the UK Government changed the law to end the Channel Islands' exploitation of LVCR, meaning that UK residents have to pay the full VAT amount on items imported from the Channel Islands. Some goods are still sold and distributed from Jersey despite these changes.
Potatoes, cauliflower, tomatoes, and flowers are important export crops, shipped mostly to the United Kingdom. The Jersey breed of dairy cattle is known worldwide and represents an important export income earner. Milk products go to the UK and other EU countries.
As of 2009 the Jersey Royal potato is the biggest crop export.
Jersey saw a boom in tourism during the post-World War II years. This boom has been winding down since the late-1980s. Many of the larger hotels, which were constructed during the boom, have now been demolished.
Visitors to the island arrive either by sea at Saint Helier, or by air at Jersey Airport. These routes are subsidised by the States of Jersey. Exact figures for subsidies are not in the public domain.
The average visitor length of stay fell from 5.7 nights in 1997 to 4.3 nights in 2010.
Most tourist attractions are operated by private companies, including companies owned or funded by the States of Jersey. Elizabeth Castle, for example, is controlled by Jersey Heritage. Some other attractions are owned by the National Trust for Jersey.
In 2011 visitor numbers rose by 0.6%, with a notable increase in visitors from Germany and France. It was reported that tourist and business visitors spent a total of £242m while on the island.
This sector accounted for 4% of GVA during 2007.
Most of the telecoms infrastructure is owned by Jersey Telecom.
In 2008, most goods imported and exported were transported by Huelin-Renouf, Condor Logistics, and other smaller operators, via either Saint Helier harbour, or Jersey Airport.
During the period 1984 to 1994, British Channel Island Ferries were responsible for much shipping to and from the United Kingdom.
Jersey shares The International Stock Exchange (TISE) with Guernsey, where it is based.
The workforce in Jersey tends to increase during the summer months, with around 3,500 more people employed in the summer of 2008 than in the winter of 2007. These seasonal workers are mostly employed in agriculture, hotels, restaurants and bars.
Until the 19th century, cider was the largest agricultural export, with up to a quarter of the agricultural land given over to orchards. Exports of cider from Jersey to England were substantial in 1839, but by 1870 they had slumped. Beer had replaced cider as a fashionable drink in the main export markets, and even the home market had switched to beer as the population became more urban. Potatoes overtook cider as the most important crop in Jersey in the 1840s. Small-scale cider production on farms for domestic consumption, particularly by seasonal workers from Brittany and mainland Normandy, was maintained, but by the mid-20th century production dwindled until only eight farms were producing cider for their own consumption in 1983. The number of orchards had been reduced to such a level that the destruction of trees in the Great Storm of 1987 demonstrated how close the island had come to losing many of its traditional cider-apple varieties. A concerted effort was made to identify and preserve surviving varieties, and new orchards were planted. As part of diversification, farmers have moved into commercial cider production, and the cider tradition is celebrated and marketed as a heritage experience.
The knitting of woollen garments was a thriving industry for Jersey during the 17th and 18th centuries.
Jersey was the fourth-largest shipbuilding area in the British Isles in the 19th century. See History of Jersey.
Jersey pounds per US dollar - 0.55 (2005), 0.6981 (January 2002), 0.6944 (2001), 0.6596 (2000), 0.6180 (1999), 0.6037 (1998), 0.6106 (1997); the Jersey pound is at par with the British pound.
Until the twentieth century, the States relied on indirect taxation to finance the administration of Jersey. The levying of "impôts" (duties) was in the hands of the Assembly of Governor, Bailiff and Jurats until 1921, when that body's tax-raising powers were transferred to the Assembly of the States, leaving the Assembly of Governor, Bailiff and Jurats to serve simply as a licensing bench for the sale of alcohol. The Income Tax Law of 1928, which introduced income tax, was the first law drafted entirely in English. Income tax has been levied at a flat rate of 20% for decades.
Historically, no VAT was levied in Jersey, with the result that luxury goods have often been cheaper than in the UK or in France. This provided an incentive for tourism from neighbouring countries. The absence of VAT also led to the growth of a fulfilment industry, whereby low-value luxury items, such as videos, lingerie and contact lenses are exported in a manner avoiding VAT on arrival, thus undercutting local prices on the same products. In 2005 the States of Jersey announced limits on licences granted to non-resident companies trading in this way. The States of Jersey introduced a goods and services tax (GST) in 2008. Although this is a form of VAT, it has been charged at a much lower rate than UK or French VAT, and as such Jersey's fulfilment industry continues.
The new GST was introduced to fill a 'black hole' in the budget created by the new 0/10 tax regime, which replaced the old system under which foreign investors were exempt from corporation tax while Jersey-resident companies paid a 20% rate. The 0/10 regime exempts all businesses except those in financial services from corporation tax (0%), while financial services businesses pay a low 10% rate. Because revenue from the 0/10 regime falls short of the revenue of the original system, Jersey was left with a budget deficit of several million pounds.
To fill the deficit created by these changes to Jersey's tax structure, the States of Jersey introduced GST.
GST is added to most goods and services, which has raised the cost of living. Those hit hardest by GST are people on the lowest incomes; to help prevent islanders from falling below the poverty line, the States of Jersey introduced an Income Support service in January 2008.
It is arguable that the main beneficiaries of Jersey's new tax structure are the owners of large businesses outside financial services, and of businesses that support the financial services sector, since they pay no corporation tax while still benefiting from the island's business environment.
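For illustration only, the sketch below encodes the 0/10 structure described above. The rates follow the text (0% general, 10% financial services, versus the old 20% resident rate), but the sector labels, figures, and function names are hypothetical; this is in no way an official tax calculation.

```python
# Illustrative sketch of Jersey's "0/10" corporation tax structure as
# described above. Sector labels and figures are hypothetical examples.

RATES = {
    "financial_services": 0.10,  # financial services companies pay 10%
    "other": 0.00,               # all other businesses pay 0%
}
OLD_RESIDENT_RATE = 0.20         # pre-2008 rate on Jersey-resident companies

def corporation_tax(profit: float, sector: str) -> float:
    """Tax due on a company's profit under the 0/10 regime."""
    return profit * RATES[sector]

profit = 1_000_000
# A resident trading company that once owed 20% now owes nothing,
# which is the revenue shortfall ("black hole") the text describes.
print(profit * OLD_RESIDENT_RATE)                     # 200000.0 (old system)
print(corporation_tax(profit, "other"))               # 0.0      (0/10)
print(corporation_tax(profit, "financial_services"))  # 100000.0 (0/10)
```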
|
https://en.wikipedia.org/wiki?curid=15698
|
Transport in Jersey
This article details the variety of means of transport in Jersey, Channel Islands.
Historically, public railway services in the island were provided by two railway companies: the Jersey Railway (JR) and the Jersey Eastern Railway (JER).
The mostly coastal lines operated out of St Helier and ran across the southern part of the island, reaching Gorey Harbour in the east and la Corbière in the west. Each company had its own terminus station in St Helier.
After closure, most of the infrastructure was removed and today little evidence remains of these railways. A small number of former station buildings are still standing, including St Helier Weighbridge, which is now in use as the Liberty Wharf shopping centre, and St Aubin railway station, which is used today as the Parish Hall of Saint Brélade. Part of the former Jersey Railway line from St Aubin to Corbière has been converted into a rail trail for cyclists and walkers.
During the German military occupation 1940–1945, light railways were re-established by the Germans for the purpose of supplying coastal fortifications. A one-metre gauge line was laid down following the route of the former Jersey Railway from Saint Helier to La Corbière, with a branch line connecting the stone quarry at Ronez in Saint John. A 60 cm line ran along the west coast, and another was laid out heading east from Saint Helier to Gorey. The first line was opened in July 1942, the ceremony being disrupted by passively resisting Jersey spectators. The German railway infrastructure was dismantled after the Liberation in 1945.
Two railways operate at the Pallot Heritage Steam Museum: a standard gauge heritage steam railway, and a narrow gauge pleasure line operated by steam-outline diesel motive power.
Highways:
"total:"
577 km (1995)
"paved:"
NA km
"unpaved:"
NA km
Buses are operated by CT Plus Jersey, a local subsidiary of HCT Group. Bus service routes radiate from the Liberation Station in St Helier.
In 2012, it was announced that CT Plus would take over the operation of the bus service, commencing on 2 January 2013, ending 10 years of Connex service in Jersey. This new service is called LibertyBus.
Jersey has a well-signposted Island Cycle Network. A traffic-free route for cyclists and pedestrians links Saint Helier to La Corbière; a branch of this route passes Jersey Airport and ends at St Peter's village.
Driving is on the left-hand side. The maximum speed limit throughout the island is 40 mph (64 km/h), with lower limits on certain stretches of road, such as 20/30 mph (32/48 km/h) in built-up areas and 15 mph (24 km/h) on roads designated as "green lanes".
Visitors wishing to drive must possess a Certificate of Insurance or an International Green Card, a valid Driving Licence or International Driving Permit (UK International Driving Permits are not valid). Photocopies are not acceptable. A nationality plate must be displayed on the back of visiting vehicles.
It is an offence to hold a mobile phone whilst driving a moving vehicle. Where fitted, all passengers inside a vehicle must wear a seat belt at all times, regardless of whether they are sitting in the front or the rear.
The penalties for drinking and driving in Jersey are a fine of up to £2,000 or six months in prison for a first offence, plus unlimited disqualification from driving. It is an offence to drive whilst under the influence of drugs. Since July 2014 it has also been illegal to smoke in any vehicle carrying passengers under the age of 18.
Parking on single yellow lines is prohibited and liable to a fine.
Most parking, including on-street parking, operates using either Paycards or PaybyPhone (generally accepted at the same car parks) and is indicated with the Paycard symbol. Paycards are available to purchase at many stores across the island and are used by scratching off panels to record details such as the time of arrival.
Paycards are sold in denominations of 1, 2 and 4 units, and car parks where Paycards are required display signs indicating how many units are needed for a given period of time.
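As a rough sketch of the unit arithmetic described above (the one-unit-per-hour tariff is invented for the example; actual tariffs are whatever a given car park's signs state):

```python
# Hypothetical illustration of the Paycard unit system described above.
from math import ceil

def units_needed(hours: float, units_per_hour: int = 1) -> int:
    """Whole units required for a stay, rounding partial hours up."""
    return ceil(hours * units_per_hour)

def cards_to_scratch(units: int, denominations=(4, 2, 1)) -> dict:
    """Greedy split of the required units across 4-, 2- and 1-unit cards."""
    split = {}
    for d in denominations:
        split[d], units = divmod(units, d)
    return split

print(units_needed(2.5))    # 3 units for a 2.5-hour stay at 1 unit/hour
print(cards_to_scratch(3))  # {4: 0, 2: 1, 1: 1}
```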
Certain car parks, such as the Waterfront, Sand Street and Ports of Jersey Car Parks use automatic number plate recognition or ticket technology with a pay upon exit system.
There are four main residents’ and business parking zones within St Helier.
In some roads on the outskirts of St Helier and in the harbours, and also in some car parks in St Brelade, parking is free but controlled by parking discs (time wheels) – obtainable from the Town Hall for a small charge.
Seaports and harbours:
Saint Helier is the island's main port; others include Gorey, Saint Aubin, La Rocque, and Bonne Nuit. The island's principal sea links are with Granville (Manche), Southampton, Poole, and St Malo.
On 20 August 2013, Huelin-Renouf, which had operated a "lift-on lift-off" container service for 80 years between the Port of Southampton and the Port of Jersey, ceased trading. Senator Alan Maclean, a Jersey politician, had previously tried, to no avail, to save the 90-odd jobs the company provided. On 20 September, it was announced that Channel Island Lines would continue the service and would purchase the MV "Huelin Dispatch" from Associated British Ports, who in turn had purchased her from the receiver in the bankruptcy. The new operator was to be funded by Rockayne Limited, a closely held association of Jersey businesspeople.
Passenger-only access to France is provided by ferry service, to either Barneville-Carteret, Granville or Dielette.
A service to St Malo was formerly provided by another operator but is now run by Condor Ferries, which operates "Commodore Goodwill", a large ro-ro vessel, to Portsmouth and maintains multiple ro-ro connections to Poole and St Malo.
|
https://en.wikipedia.org/wiki?curid=15700
|
Johnston Atoll
Johnston Atoll, also known as Kalama Atoll to Native Hawaiians, is an unincorporated territory of the United States currently administered by the United States Fish and Wildlife Service. Johnston Atoll is a National Wildlife Refuge and is closed to public entry. Limited access for management needs is only by Letter of Authorization from the U.S. Air Force and Special Use Permit from the U.S. Fish and Wildlife Service.
For nearly 70 years, the atoll was under the control of the U.S. military. During that time, it was variously used as a naval refueling depot, an airbase, a nuclear and biological weapons testing site, a secret missile base, and a chemical weapon and Agent Orange storage and disposal site. These activities left the area environmentally contaminated and remediation and monitoring continue.
Johnston Atoll is one of the world's most isolated atolls. It has been the most visited of the U.S. Pacific Marine Monuments as Continental Micronesia previously made stopovers at Johnston Atoll Airport. The island was annexed under the name Kalama by King Kamehameha IV of Hawai'i.
Johnston Atoll is a deserted atoll (save for visiting USFWS personnel) in the North Pacific Ocean located southwest of the island of Hawai'i, and is grouped as one of the United States Minor Outlying Islands. The atoll, which sits on a coral reef platform, has four islands. Johnston (or Kalama) Island and Sand Island are both enlarged natural features, while "Akau" (North) and "Hikina" (East) are two artificial islands formed by coral dredging. By 1964, dredge-and-fill operations had enlarged Johnston Island well beyond its original size, enlarged Sand Island, and added the two new islands, North and East.
The four islands make up the atoll's total land area. Due to the atoll's tilt, much of the reef on the southeast portion has subsided; even though the atoll lacks an encircling reef crest, the reef crest on the northwest portion does enclose a shallow lagoon.
The climate is tropical but generally dry. Northeast trade winds are consistent and there is little seasonal temperature variation. Rising from sea level to a high point at Summit Peak, the islands contain some low-growing vegetation and palm trees on mostly flat terrain, and have no natural fresh water resources.
It is a dry atoll with less than 20 inches of annual rainfall.
The first Western record of the atoll was made on September 2, 1796, when the Boston-based American brig "Sally" accidentally grounded on a shoal near the islands. The ship's captain, Joseph Pierpont, published his experience in several American newspapers the following year, giving an accurate position of Johnston and Sand Island along with part of the reef. However, he did not name or lay claim to the area, which Native Hawaiians referred to as "iwi poʻo mokupuni". The islands were not officially named until Captain Charles J. Johnston of the Royal Navy sighted them on December 14, 1807. The ship's journal recorded: “on the 14th [December 1808] made a new discovery, viz. two very low islands, in lat. 16º 52´ N. long. 190º 26´ E., having a dangerous reef to the east of them, and the whole not exceeding four miles in extent”.
The Guano Islands Act, enacted on August 18, 1856, was federal legislation passed by the U.S. Congress that enabled citizens of the U.S. to take possession of islands containing guano deposits. In 1858, William Parker and R. F. Ryan chartered the schooner "Palestine" specifically to find Johnston Atoll. They located guano on the atoll in March 1858 and proceeded to claim the island. By 1858, Johnston Atoll was claimed by both the United States and the Kingdom of Hawaii. In June 1858, Samuel Allen, sailing on the "Kalama", tore down the U.S. flag and raised the Hawaiian flag, renaming the atoll Kalama. The larger island was renamed Kalama Island, and the nearby smaller island was called Cornwallis.
Returning on July 27, 1858, the captain of the "Palestine" again hoisted the American flag and reasserted the rights of the United States; on this visit the "Palestine" also left two crew members on the island to gather phosphate. The same day, while the ship was at the atoll and the two men were still ashore, a proclamation of Kamehameha IV declared the annexation of the island to Hawaii, stating that it was "derelict and abandoned." Later that year, King Kamehameha revoked the lease granted to Samuel Allen when he learned that the atoll had previously been claimed by the United States. This did not, however, prevent the Hawaiian Territory from making use of the atoll or asserting ownership.
By 1890, the atoll's guano deposits had been almost entirely depleted (mined out) by U.S. interests operating under the Guano Islands Act. In 1892, a British vessel surveyed and mapped the island, in the hope that it might be suitable as a telegraph cable station. On January 16, 1893, the Hawaiian Legation at London reported a diplomatic conference over this temporary occupation of the island. However, the Kingdom of Hawaii was overthrown on January 17, 1893. When Hawaii was annexed by the United States in 1898, during the Spanish–American War, the name of Johnston Island was omitted from the list of Hawaiian Islands. On September 11, 1909, Johnston was leased by the Territory of Hawaii to a private citizen for fifteen years. A board shed was built on the southeast side of the larger island, and a small tramline was run up the slope of the low hill to facilitate the removal of guano. Apparently neither the quantity nor the quality of the guano was sufficient to pay for gathering it, and the project was soon abandoned.
The Tanager Expedition was a joint expedition, sponsored by the U.S. Department of Agriculture and the Bishop Museum of Hawaii, which visited the atoll in 1923. The expedition consisted of two teams accompanied by destroyer convoys, the first departing Honolulu on July 7, 1923 aboard the "Whippoorwill", which conducted the first survey of Johnston Island in the 20th century. Aerial survey and mapping flights over Johnston were conducted with a Douglas DT-2 floatplane carried on her fantail, which was hoisted into the water for takeoff. From July 10–22, 1923, the atoll was recorded in a pioneering aerial photography project. A second vessel left Honolulu on July 16 and joined the "Whippoorwill" to complete the survey, then traveled to Wake Island to conduct surveys there. Tents were pitched on the southwest beach of fine white sand, and a rather thorough biological survey was made of the island. Hundreds of sea birds of a dozen kinds were the principal inhabitants, together with lizards, insects, and hermit crabs. The reefs and shallow water abounded with fish and other marine life.
On June 29, 1926, by executive order, President Calvin Coolidge established Johnston Island Reservation as a federal bird refuge and placed it under the control of the U.S. Department of Agriculture, as a "refuge and breeding ground for native birds." Johnston Atoll was added to the United States National Wildlife Refuge system in 1926 and renamed the Johnston Island National Wildlife Refuge in 1940. The Johnston Atoll National Wildlife Refuge was established to protect the tropical ecosystem and the wildlife that it harbors.
However, the Department of Agriculture had no ships, and the Navy was interested in the atoll for strategic reasons, so by executive order on December 29, 1934, President Franklin D. Roosevelt placed the islands under the "control and jurisdiction of the Secretary of the Navy for administrative purposes," but subject to use as a refuge and breeding ground for native birds, under the Department of the Interior.
On February 14, 1941, President Franklin Roosevelt issued an executive order creating naval defense areas in the central Pacific territories. The order established the "Johnston Island Naval Defensive Sea Area", which encompassed the territorial waters between the extreme high-water marks and the three-mile marine boundaries surrounding the atoll. The "Johnston Island Naval Airspace Reservation" was also established to restrict access to the airspace over the naval defense sea area. Only U.S. government ships and aircraft were permitted to enter the naval defense areas at Johnston unless authorized by the Secretary of the Navy.
In 1990, two full-time U.S. Fish and Wildlife Service personnel, a Refuge Manager and a biologist, were stationed on Johnston Atoll to handle the increase in biological, contaminant, and resource conflict activities.
After the military mission on the island ended in 2004, the atoll was administered by the Pacific Remote Islands National Wildlife Refuge Complex. The outer islets and water rights were managed cooperatively by the Fish and Wildlife Service, with some of the actual Johnston Island land mass remaining under the control of the United States Air Force (USAF) for environmental remediation and the Defense Threat Reduction Agency (DTRA) for plutonium cleanup purposes. On January 6, 2009, under the authority of section 2 of the Antiquities Act, President George W. Bush established the Pacific Remote Islands Marine National Monument to administer and protect Johnston Island along with six other Pacific islands. The national monument includes Johnston Atoll National Wildlife Refuge within its boundaries and contains both land and an extensive surrounding water area. The administration of President Barack Obama in 2014 extended the protected area to encompass the entire Exclusive Economic Zone by banning all commercial fishing activities. Under a 2017 review of national monuments designated or expanded since 1996, then-Secretary of the Interior Ryan Zinke recommended permitting fishing outside the 12-mile limit.
On December 29, 1934, President Franklin D. Roosevelt by executive order transferred control of Johnston Atoll to the United States Navy under the 14th Naval District, Pearl Harbor, in order to establish an air station, and also to the Department of the Interior to administer the bird refuge. In 1948, the USAF assumed control of the atoll.
During the Operation Hardtack nuclear test series from April 22 to August 19, 1958, administration of Johnston Atoll was assigned to the Commander of Joint Task Force 7. After the tests were completed, the island reverted to the command of the US Air Force.
From 1963 to 1970, the Navy's Joint Task Force 8 and the Atomic Energy Commission (AEC) held joint operational control of the island during high-altitude nuclear testing operations.
In 1970, operational control was handed back to the Air Force until July 1973, when the Defense Nuclear Agency was given host-management responsibility by the Secretary of Defense. Over the years, that agency's sequential incarnations have been the Defense Atomic Support Agency (DASA) from 1959 to 1971, the Defense Nuclear Agency (DNA) from 1971 to 1996, and the Defense Special Weapons Agency (DSWA) from 1996 to 1998. In 1998, the Defense Special Weapons Agency and selected elements of the Office of the Secretary of Defense were combined to form the Defense Threat Reduction Agency (DTRA). In 1999, host-management responsibility transferred from the Defense Threat Reduction Agency back to the Air Force, until the Air Force mission ended in 2004 and the base was closed.
In 1935, personnel from the US Navy's Patrol Wing Two carried out minor construction to develop the atoll for seaplane operations. In 1936, the Navy began the first of many changes to enlarge the atoll's land area: it erected some buildings and a boat landing on Sand Island and blasted coral to clear a seaplane landing area. Several seaplane flights were made from Hawaii to Johnston, such as that of a squadron of six aircraft in November 1935.
In November 1939, further work was commenced on Sand Island by civilian contractors to allow the operation of one squadron of patrol planes with tender support. Part of the lagoon was dredged, and the excavated material was used to make a parking area connected by a causeway to Sand Island. Three seaplane landing areas, one main landing and two cross-landings, were cleared and dredged. Sand Island had barracks built for 400 men, a mess hall, an underground hospital, a radio station, water tanks and a steel control tower. In December 1943 additional parking area was added to the seaplane base.
On May 26, 1942, a United States Navy Consolidated PBY-5 Catalina wrecked at Johnston Atoll. The Catalina pilot made a normal power landing and immediately applied throttle for take-off. At a speed of about fifty knots the plane swerved to the left and then continued into a violent waterloop. The hull of the plane was broken open and the Catalina sank immediately.
After the war on March 27, 1949, a PBY-6A Catalina had to make a forced landing during flight from Kwajalein to Johnston Island. The plane was damaged beyond repair and the crew of 11 was rescued nine hours later by a Navy ship which sank the plane by gunfire.
During 1958, a proposed support agreement for Navy Seaplane operations at Johnston Island was under discussion though it was never completed because a requirement for the operation failed to materialize.
By September 1941, construction of an airfield on Johnston Island had commenced. A runway was built together with two 400-man barracks, two mess halls, a cold-storage building, an underground hospital, a fresh-water plant, shop buildings, and fuel storage. The runway was complete by December 7, 1941; in December 1943 the 99th Naval Construction Battalion arrived at the atoll and lengthened it, and the runway was subsequently lengthened and improved further as the island was enlarged.
During WWII Johnston Atoll was used as a refueling base for submarines, and also as an aircraft refueling stop for American bombers transiting the Pacific Ocean, including the Boeing B-29 Enola Gay. By 1944, the atoll was one of the busiest air transport terminals in the Pacific. Air Transport Command aeromedical evacuation planes stopped at Johnston en route to Hawaii. Following V-J Day on August 14, 1945, Johnston Atoll saw the flow of men and aircraft that had been coming from the mainland into the Pacific turn around. By 1947, over 1,300 B-29 and B-24 bombers had passed through the Marianas, Kwajalein, Johnston Island, and Oahu en route to Mather Field and civilian life.
Following WWII, Johnston Atoll Airport was used commercially by Continental Air Micronesia, touching down between Honolulu and Majuro. When an aircraft landed, it was surrounded by armed soldiers and the passengers were not allowed to leave the aircraft. Aloha Airlines also made weekly scheduled flights to the island carrying civilian and military personnel; in the 1990s there were flights almost daily, and some days saw up to three arrivals. Just before the movement of chemical munitions to Johnston Atoll, the Surgeon General of the Public Health Service reviewed the shipment and the Johnston Atoll storage plans. His recommendations caused the Secretary of Defense in December 1970 to issue instructions suspending missile launches and all non-essential aircraft flights. As a result, Air Micronesia service was immediately discontinued, and missile firings were terminated with the exception of two 1975 satellite launches deemed critical to the island's mission.
The runway was needed many times for emergency landings by both civil and military aircraft. Once decommissioned, it could no longer be counted as a potential emergency landing site when planning flight routes across the Pacific Ocean. As of 2003, the airfield at Johnston Atoll consisted of an unmaintained, closed single asphalt/concrete runway (5/23), a parallel taxiway, and a large paved ramp along the southeast side of the runway.
In February 1941 Johnston Atoll was designated a Naval Defensive Sea Area and Airspace Reservation. On the day the Japanese struck Pearl Harbor, December 7, 1941, a U.S. Navy warship out of her home port of Pearl Harbor was making a simulated bombardment at Johnston Island; Japan's strike occurred as the ship was unloading marines, civilians and stores on the atoll. On December 15, 1941, the atoll was shelled from outside the reef by a Japanese submarine that had been part of the attack on Pearl Harbor eight days earlier. Several buildings, including the power station, were hit, but no personnel were injured. Additional Japanese shelling occurred on December 22 and 23, 1941. On each occasion, Johnston Atoll's coastal artillery guns returned fire, driving off the submarine.
In July 1942, the civilian contractors at the atoll were replaced by 500 men from the 5th and 10th Naval Construction Battalions, who expanded the fuel storage and water production at the base and built additional facilities. The 5th Battalion departed in January 1943. In December 1943 the 99th Naval Construction Battalion arrived at the atoll, lengthened the runway and added further parking to the seaplane base.
On January 25, 1957, the Department of Treasury was granted a 5-year permit for the United States Coast Guard (USCG) to operate and maintain a Long Range Aid to Navigation (LORAN) transmitting station on Johnston Atoll. Two years later in December 1959, the Secretary of Defense approved the Secretary of the Treasury's request to use Sand Island for U.S. Coast Guard LORAN A and C station sites. The USCG was granted permission to install a LORAN A and C station on Sand Island to be staffed by U.S. Coast Guard personnel through June 30, 1992. The permit for a LORAN station to operate on Johnston Island was terminated in 1962. On November 1, 1957, a new United States Coast Guard LORAN-A station was commissioned. By 1958, the Coast Guard LORAN Station at Johnston Island began transmitting on a 24-hour basis, thus establishing a new LORAN rate in the Central Pacific. The new rate between Johnston Island and French Frigate Shoals gave a higher order of accuracy for fixing positions in the steamship lanes from Oahu, Hawaii, to Midway Island. In the past, this was impossible in some areas along this important shipping route. The original U.S. Coast Guard LORAN-A Station on Johnston Island ceased operations on June 30, 1961 when the new station on nearby Sand Island began transmitting using a larger 180 foot antenna.
The LORAN-C station was disestablished on July 1, 1992, and all Coast Guard personnel, electronic equipment, and property departed the atoll that month. Buildings on Sand Island were transferred to other activities. LORAN whip antennas on Johnston and Sand Islands were removed, and the 625-foot LORAN tower and antenna were demolished on December 3, 1992. The LORAN A and C station and buildings on Sand Island were then dismantled and removed.
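For readers unfamiliar with LORAN, the principle behind the "rate" mentioned above is hyperbolic positioning: a receiver measures the difference in arrival time between synchronized pulses from a pair of stations, which fixes a constant difference in distance to those stations and hence a hyperbolic line of position; a second station pair gives a second line, and the intersection is the fix. A minimal sketch of the core time-to-distance conversion follows, with an invented timing figure:

```python
# Minimal sketch of the LORAN principle: a measured time difference
# between pulses from two stations corresponds to a constant difference
# in distance to those stations, placing the receiver on a hyperbola.
# The 100-microsecond reading below is invented for illustration.

C = 299_792_458.0  # propagation speed of the radio pulses, m/s

def range_difference_m(time_difference_s: float) -> float:
    """Distance difference implied by a measured time difference."""
    return C * time_difference_s

print(range_difference_m(100e-6))  # ~29979 m: a hyperbola of ~30 km offset
```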
Between 1958 and 1975, Johnston Atoll was used as an American national nuclear test site for atmospheric and extremely high-altitude nuclear explosions reaching outer space. In 1958, Johnston Atoll was the location of two "Hardtack I" nuclear test firings: "Teak", conducted on August 1, 1958, and "Orange", conducted on August 12, 1958. Both tests detonated 3.8-megaton hydrogen bombs lofted to high altitude by rockets launched from Johnston Atoll.
Johnston Island was also used as the launch site of 124 sounding rockets reaching high altitudes. These carried scientific instruments and telemetry equipment, either in support of the nuclear bomb tests or for experimental antisatellite technology.
Eight PGM-17 Thor missiles deployed by the U.S. Air Force (USAF) were launched from Johnston Island in 1962 as part of "Operation Fishbowl," a part of "Operation Dominic" nuclear weapons tests in the Pacific. The first launch in "Operation Fishbowl" was a successful research and development launch with no warhead. In the end, "Operation Fishbowl" produced four successful high-altitude detonations: "Starfish Prime," "Checkmate," "Bluegill Triple Prime," and "Kingfish." In addition, it produced one atmospheric nuclear explosion, "Tightrope."
On July 9, 1962, "Starfish Prime" produced a 1.4-megaton explosion using a W49 warhead detonated at very high altitude. It created a very brief fireball visible over a wide area, plus bright artificial auroras visible in Hawaii for several minutes. "Starfish Prime" also produced an electromagnetic pulse that disrupted some electric power and communication systems in Hawaii, and it pumped enough radiation into the Van Allen belts to destroy or damage seven satellites in orbit.
The final Fishbowl launch that used a Thor missile carried the 400-kiloton "Kingfish" warhead up to its detonation altitude. "Tightrope", the final test of Operation Fishbowl, was launched on a nuclear-armed Nike Hercules missile on November 3, 1962 and detonated at a lower altitude than the other tests; although officially one of the Operation Fishbowl tests, it is for that reason sometimes not listed among the high-altitude nuclear tests:
"At Johnston Island, there was an intense white flash. Even with high-density goggles, the burst was too bright to view, even for a few seconds. A distinct thermal pulse was felt on bare skin. A yellow-orange disc was formed, and transformed itself into a purple doughnut. A glowing purple cloud was faintly visible for a few minutes." The nuclear yield was reported in most official documents as “less than 20 kilotons.” One report by the U.S. government reported the yield of the "Tightrope" test as 10 kilotons. Seven sounding rockets were launched from Johnston Island in support of the "Tightrope" test, and this was the final American nuclear atmospheric test.
The "Fishbowl" series included four failures, all of which were deliberately disrupted by range safety officers when the missiles' systems failed during launch and were aborted. The second launch of the Fishbowl series, "Bluegill", carried an active warhead. Bluegill was "lost" by a defective range safety tracking radar and had to be destroyed 10 minutes after liftoff even though it probably ascended successfully. The subsequent nuclear weapon launch failures from Johnston Atoll caused serious contamination to the island and surrounding areas with weapons-grade plutonium and americium that remains an issue to this day.
The failure of the "Bluegill" launch created in effect a dirty bomb but did not release the nuclear warhead's plutonium debris onto Johnston Atoll as the missile fell into the ocean south of the island and was not recovered. However, the "Starfish", "Bluegill Prime", and "Bluegill Double Prime" test launch failures in 1962 scattered radioactive debris over Johnston Island contaminating it, the lagoon, and Sand Island with plutonium for decades.
"Starfish", a high altitude Thor launched nuclear test scheduled for June 20, 1962, was the first to contaminate the atoll. The rocket with the 1.45-megaton Starfish device (W49 warhead and the MK-4 Re-entry vehicle) on its nose was launched that evening, but the Thor missile engine cut out only 59 seconds after launch. The range safety officer sent a destruct signal 65 seconds after launch, and the missile was destroyed at approximately altitude. The warhead high explosive detonated in 1-point safe fashion, destroying the warhead without producing nuclear yield. Large pieces of the plutonium contaminated missile including pieces of the warhead, booster rocket, engine, re-entry vehicle and missile parts fell back on Johnston Island. More wreckage along with plutonium contamination was found on nearby Sand Island.
"Bluegill Prime," the second attempt to launch the payload which failed last time was scheduled for 23:15 (local) on July 25, 1962. It too was a genuine disaster and caused the most serious plutonium contamination on the island. The Thor missile was carrying one pod, two re-entry vehicles and the W50 nuclear warhead. The missile engine malfunctioned immediately after ignition, and the range safety officer fired the destruct system while the missile was still on the launch pad. The Johnston Island launch complex was demolished in the subsequent explosions and fire which burned through the night. The launch emplacement and portions of the island were contaminated with radioactive plutonium spread by the explosion, fire and wind-blown smoke.
Afterward, the Johnston Island launch complex was heavily damaged and contaminated with plutonium, and missile launches and nuclear testing halted while the radioactive debris was removed, contaminated soils were recovered and the launch emplacement was rebuilt. Three months of repairs, decontamination, and rebuilding of LE1 as well as the backup pad LE2 were necessary before tests could resume. In an effort to continue the testing program, U.S. troops were sent in to do a rapid cleanup: they scrubbed down the revetments and launch pad, carted away debris and removed the top layer of coral around the contaminated launch pad. The plutonium-contaminated rubbish was dumped in the lagoon, polluting the surrounding marine environment, and more than 550 drums of contaminated material were dumped in the ocean off Johnston in 1964–1965. At the time of the Bluegill Prime disaster, the top fill around the launch pad was scraped up by a bulldozer and grader and dumped into the lagoon to make a ramp, so the rest of the debris could be loaded onto landing craft for dumping out in the ocean. An estimated 10 percent of the plutonium from the test device was in the fill used to make the ramp. The ramp was later covered and placed into a landfill on the island during 1962 dredging to extend the island. The lagoon was again dredged in 1963–1964 and used to expand Johnston Island, recontaminating additional portions of the island.
On October 15, 1962, the "Bluegill Double Prime" test also misfired. The rocket malfunctioned 90 seconds into the flight and was destroyed at a height of 109,000 feet. U.S. Defense Department officials confirmed that the destruction of the rocket contributed to the radioactive pollution on the island.
In 1963, the U.S. Senate ratified the Limited Test Ban Treaty, which contained a provision known as "Safeguard C". Safeguard C was the basis for maintaining Johnston Atoll as a "ready to test" above-ground nuclear testing site should atmospheric nuclear testing ever be deemed to be necessary again. In 1993, Congress appropriated no funds for the Johnston Atoll "Safeguard C" mission, bringing it to an end.
Program 437 turned the PGM-17 Thor into an operational anti-satellite (ASAT) weapon system, a capability that was kept top secret even after it was deployed. The Program 437 mission was approved for development by U.S. Secretary of Defense Robert McNamara on November 20, 1962 and based at the atoll. Program 437 used modified Thor missiles that had been returned from deployment in Great Britain and was the second deployed U.S. operational nuclear anti-satellite system. Eighteen more suborbital Thor launches took place from Johnston Island's launch emplacements during the 1964–1975 period in support of Program 437 and its Alternate Payload (AP) mission. In 1965–1966 four Program 437 Thors were launched with "Alternate Payloads" for satellite inspection, evidently an elaboration of the system to allow visual verification of the target before destroying it. These flights may have been related to the late-1960s Program 922, a non-nuclear version of Thor with infrared homing and a high-explosive warhead. Thors were kept positioned and active near the two Johnston Island launch pads after 1964. However, partly because of the Vietnam War, in October 1970 the Department of Defense transferred Program 437 to standby status as an economy measure. The Strategic Arms Limitation Talks led to the Anti-Ballistic Missile Treaty, which prohibited "interference with national means of verification", meaning that ASATs were not allowed, by treaty, to attack Soviet reconnaissance satellites. The Thors were removed from Johnston Atoll and stored in mothballed war-reserve condition at Vandenberg Air Force Base from 1970. The anti-satellite mission of the Johnston Island facilities ceased on August 10, 1974, and the program was officially discontinued on April 1, 1975, when any possibility of restoring the ASAT program was finally terminated.
The Space Detection and Tracking System (SPADATS) was operated by North American Aerospace Defense Command (NORAD) along with the U.S. Air Force Spacetrack system, the Navy Space Surveillance System and the Canadian Forces Air Defence Command Satellite Tracking Unit. The Smithsonian Astrophysical Observatory also operated a dozen 3.5-ton Baker-Nunn camera systems (none at Johnston) for cataloguing man-made satellites. The U.S. Air Force had ten Baker-Nunn camera stations around the world, mostly from 1960 to 1977, with a phase-out beginning in 1964.
The Baker-Nunn space camera station was constructed along the causeway on Sand Island and was functioning by 1965. The USAF 18th Surveillance Squadron operated the camera there until 1975, when a contract to operate the four remaining Air Force stations was awarded to Bendix Field Engineering Corporation. In about 1977, the camera at Sand Island was moved to Daegu, South Korea. Baker-Nunn cameras were rendered obsolete with the initial operational capability of three GEODSS optical tracking sites at Daegu, Korea; Mount Haleakala, Maui; and White Sands Missile Range. A fourth site became operational in 1985 at Diego Garcia, and a proposed fifth site in Portugal was cancelled. The Daegu site was later closed due to encroaching city lights. GEODSS tracked satellites at night, though the MIT Lincoln Laboratory test site, co-located with Site 1 at White Sands, did track asteroids in daytime as a proof of concept in the early 1980s.
The Satellite and Missile Observation System (SAMOS-E), or "E-6", was a relatively short-lived series of United States visual reconnaissance satellites in the early 1960s. SAMOS was also known by the unclassified terms Program 101 and Program 201. The Air Force program was used as a cover for the initial development of the Central Intelligence Agency's Key Hole (including Corona and Gambit) reconnaissance satellite systems. Imaging was performed with film cameras and television surveillance from polar low Earth orbits, with film canisters returned via capsule and parachute and mid-air retrieval. SAMOS was first launched in 1960 but was not operational until 1963; all of the missions were launched from Vandenberg AFB.
During the early months of the SAMOS program it was essential not only to hide the Corona and GAMBIT technical efforts under a screen of SAMOS activity, but also to make the orbital vehicle portions of the two systems resemble one another in outward appearance. Some of the configuration details of SAMOS were therefore decided less by engineering logic than by the need to camouflage GAMBIT, so that, in theory, a GAMBIT could be launched without alerting many people to its real nature.
Problems relative to tracking networks, communications, and recovery were resolved with the decision in late February 1961 to use Johnston Island as the film capsule descent and recovery zone for the program.
On July 10, 1961, work was initiated on four buildings of the Johnston Island Recovery Operations Center for the National Reconnaissance Office. Crews from the Johnston Atoll facility would recover the parachuting film-canister capsules in mid-air, using radar-equipped JC-130 aircraft fitted with a specialized recovery apparatus.
The recovery center was also responsible for collecting the radioactive scientific data pods dropped from missiles following launch and nuclear detonation.
In the lead-up to biological warfare testing in the Pacific under Project 112 and Project SHAD, a new virus was discovered during the Pacific Ocean Biological Survey Program by teams from the Smithsonian's Division of Birds aboard a U.S. Army tugboat involved in the program. (The effort was initially to be called the Pacific Ornithological Observation Project, but the name was changed for obvious reasons.) First isolated in 1964, the tick-borne virus was discovered in "Ornithodoros capensis" ticks found in a nest of the common noddy ("Anous stolidus") on Sand Island, Johnston Atoll. It was designated "Johnston Atoll virus" and is related to the influenza viruses.
In February, March, and April 1965 Johnston Atoll was used to launch biological attacks against U.S. Army and Navy vessels south-west of Johnston Island in vulnerability, defense and decontamination tests conducted by the Deseret Test Center during Project SHAD under Project 112. Test DTC 64-4 (Deseret Test Center) was originally called "RED BEVA" (Biological EVAluation), though the name was later changed to "Shady Grove", likely for operational security reasons. The biological agents released during this test included "Francisella tularensis" (formerly called "Pasteurella tularensis") (Agent UL), the causative agent of tularemia; "Coxiella burnetii" (Agent OU), the causative agent of Q fever; and "Bacillus globigii" (Agent BG). During Project SHAD, "Bacillus globigii" was used to simulate biological warfare agents (such as anthrax) because it was then considered a contaminant with little health consequence to humans; however, it is now considered a human pathogen. Ships equipped with the E-2 multi-head disseminator and A-4C aircraft equipped with Aero 14B spray tanks released live pathogenic agents in nine aerial and four surface trials in phase B of the test series, from February 12 to March 15, 1965, and in four aerial trials in phase D of the test series, from March 22 to April 3, 1965.
According to Project SHAD veteran Jack Alderson, who commanded the Army tugs, area three at Johnston Atoll was located at the most downwind part of the island and consisted of a collapsible Nissen hut used for weapons preparation and some communications.
In 1970, Congress redefined the island's military mission as the storage and destruction of chemical weapons. The United States Army leased land on the atoll to store chemical weapons held in Okinawa, Japan. Johnston Atoll became a chemical weapons storage site in 1971, holding about 6.6 percent of the U.S. military chemical weapon arsenal. The chemical weapons were brought from Okinawa with the re-deployment of the 267th Chemical Company and consisted of rockets, mines, artillery projectiles, and bulk 1-ton containers filled with sarin, Agent VX, vomiting agent, and blister agents such as mustard gas. Chemical weapons from West Germany and World War II era weapons from the Solomon Islands were also stored on the island after 1990. Chemical agents were stored in the high-security Red Hat Storage Area (RHSA), which included hardened igloos in the weapon storage area, the Red Hat building (#850), two Red Hat hazardous waste warehouses (#851 and #852), an open storage area, and security entrances and guard towers.
Some of the other weapons stored at the site were shipped from U.S. stockpiles in West Germany in 1990. These shipments followed a 1986 agreement between the U.S. and West Germany to move the munitions. Merchant ships carrying the munitions left Germany under Operation Golden Python and Operation Steel Box in October 1990 and arrived at Johnston Island November 6, 1990. Although the ships were unloaded within nine days, the unpacking and storing of munitions continued into 1991. The remainder of the chemical weapons was a small number of World War II era weapons shipped from the Solomon Islands.
Agent Orange was brought to Johnston Atoll from South Vietnam and Gulfport, Mississippi in 1972 under Operation Pacer IVY and stored on the northwest corner of the island, officially the Herbicide Orange Storage site but dubbed the "Agent Orange Yard". The Agent Orange was eventually destroyed during Operation Pacer HO on the Dutch incineration ship MT "Vulcanus" in the summer of 1977. The U.S. Environmental Protection Agency (EPA) reported that 1,800,000 gallons of Herbicide Orange were stored at Johnston Atoll and that an additional 480,000 gallons stored at Gulfport, Mississippi were brought to Johnston Atoll for destruction. Leaking barrels during storage and spills during re-drumming operations contaminated both the storage area and the lagoon with herbicide residue and its toxic contaminant 2,3,7,8-tetrachlorodibenzo-p-dioxin.
The Army's Johnston Atoll Chemical Agent Disposal System (JACADS) was the first full-scale chemical weapons disposal facility. Built to incinerate chemical munitions on the island, it was planned starting in 1981; construction began in 1985 and was completed five years later. Following completion of construction and facility characterization, JACADS began operational verification testing (OVT) in June 1990. From 1990 until 1993, the Army conducted four planned periods of OVT, as required by Public Law 100-456. OVT was completed in March 1993, having demonstrated that the reverse-assembly incineration technology was effective and that JACADS operations met all environmental parameters. The OVT process enabled the Army to gain critical insight into the factors that establish a safe and effective rate of destruction for all munitions and agent types. Only after this critical testing period did the Army proceed with full-scale disposal operations at JACADS. Transition to full-scale operations started in May 1993, but the facility did not begin full-scale operations until August 1993.
All of the chemical weapons once stored on Johnston Island were demilitarized and the agents incinerated at JACADS, with the process completed in 2000, followed by the destruction of legacy hazardous waste material associated with chemical weapon storage and cleanup. JACADS was demolished by 2003, and the island was stripped of its remaining infrastructure and environmentally remediated.
In 2003, structures and facilities, including those used in JACADS, were removed, and the runway was marked closed. The last flight out for official personnel was June 15, 2004. After this date, the base was completely deserted, with the only structures left standing being the Joint Operations Center (JOC) building at the east end of the runway, chemical bunkers in the weapon storage area and at least one Quonset hut.
Built in 1964, the JOC is a windowless four-floor concrete and steel administration building constructed to withstand a Category 4 tropical cyclone as well as atmospheric nuclear tests. The building remains standing but was gutted entirely in 2004 during an asbestos abatement project, and all doors of the JOC except one have been welded shut. Attached to the ground floor is a side building that served as a decontamination facility, containing three long snaking corridors and 55 shower heads through which personnel could walk during decontamination.
Rows of bunkers in the Red Hat Storage Area remain intact; however, an agreement was established between the U.S. Army and EPA Region IX on August 21, 2003, that the Munitions Demilitarization Building (MDB) at JACADS would be demolished and the bunkers in the RHSA used for disposal of construction rubble and debris. After placement of the debris inside the bunkers, they were secured and the entries blocked with a concrete block barrier (a.k.a. King Tut Block) to prevent access to the bunker interior.
Over the years, leaks of Agent Orange as well as chemical weapon leaks occurred in the weapon storage area, where caustic chemicals such as sodium hydroxide were used to mitigate toxic agents during cleanup. Larger spills of nerve and mustard agent within the MDB at JACADS also took place, and small releases of chemical weapon components from JACADS were cited by the EPA. Multiple studies of the Johnston Atoll environment and ecology have been conducted, and the atoll is likely the most-studied island in the Pacific.
Studies at the atoll on the impact of PCB contamination in reef damselfish ("Abudefduf sordidus") demonstrated that embryonic abnormalities could be used as a metric for comparing contaminated and uncontaminated areas. Some PCB contamination in the lagoon was traced to Coast Guard disposal practices of PCB-laden electrical transformers.
In 1962, plutonium pollution following three failed nuclear missile launches was heaviest near the destroyed launch emplacement, in the lagoon offshore of the launch pad, and near Sand Island. The contaminated launch site was stripped, the debris gathered and buried in the island's 1962 expansion. A comprehensive radiological survey was completed in 1980 to record transuranic contamination remaining from the 1962 THOR missile aborts. The Air Force also initiated research on methods to remove dioxin contamination from soil resulting from leakage of the stored herbicide Agent Orange. Since then, U.S. defense authorities have surveyed the island in a series of studies.
Contaminated structures were dismantled and isolated within the former Thor Launch Emplacement No. 1 (LE-1) as a start for the cleanup program. About 45,000 tons of soil contaminated with radioactive isotopes was collected and placed into a fenced area on the north of the island, known as the Radiological Control Area and heavily contaminated with plutonium. The "Pluto Yard" occupies the site of LE-1, where the 1962 missile explosion occurred and where a highly contaminated loading ramp, built for loading plutonium-contaminated debris onto small boats for dumping at sea, was buried. Remediation included a plutonium "mining" operation called the Johnston Atoll Plutonium Contaminated Soil Cleanup Project. The collected radioactive soil and other debris was buried in a landfill created within the former LE-1 area from June 2002 through November 11, 2002. Remediation of the Radiological Control Area included the construction of a 61-centimeter-thick cap of coral sealing the landfill, and permanent markers were placed at each corner to identify the landfill area.
The atoll was put up for auction via the U.S. General Services Administration (GSA) in 2005 before the offering was withdrawn. The stripped Johnston Island was briefly offered for sale, with several deed restrictions, as a "residence or vacation getaway" with potential use for "eco-tourism" by the GSA's Office of Real Property Utilization and Disposal. The proposed sale included the unique postal ZIP code 96558, formerly assigned to the Armed Forces in the Pacific, but did not include running water, electricity, or reactivation of the closed runway. The details of the offering were outlined on GSA's website and in a newsletter of the Center for Land Use Interpretation as unusual real estate listing #6384, Johnston Island.
On August 22, 2006, Johnston Island was struck by Hurricane Ioke, whose eastern eye-wall passed directly over the atoll with destructive winds. Twelve people were on the island when the hurricane struck, part of a crew sent to deliver a USAF contractor who was sampling groundwater contamination levels. All 12 survived, and one wrote a first-hand account of taking shelter from the storm in the JOC building.
On December 9, 2007, the United States Coast Guard swept the runway at Johnston Island clear of debris and used it in the rescue of an ill Taiwanese fisherman, who was evacuated to Oahu, Hawaii. The fisherman had been transferred from the Taiwanese fishing vessel "Sheng Yi Tsai No. 166" to the Coast Guard buoy tender "Kukui" on December 6, 2007. He was transported to the island and then picked up by a Coast Guard HC-130 Hercules rescue plane from Kodiak, Alaska.
Since the base was closed, the atoll has been visited by many vessels crossing the Pacific, as the deserted atoll has a strong lure due to the activities once performed there. Visitors have blogged about stopping there during a trip, or have posted photos of their visits.
In 2010, a Fish and Wildlife survey team identified a swarm of Anoplolepis ants that had invaded the island. The crazy ants are particularly destructive to the native wildlife, and needed to be eradicated. The "Crazy Ant Strike Team" project was led by the U.S. Fish and Wildlife Service, who achieved a 99% reduction in ant numbers by 2013 and who continue to work towards a full eradication of the species from the atoll. The team camped in a bunker that was previously used as a fallout shelter and office.
Johnston Atoll has never had any indigenous inhabitants, although during the late 20th century an average of about 300 American military personnel and 1,000 civilian contractors were present at any given time.
The primary means of transportation to the island was the airport, which had a paved military runway; alternatively, ships could arrive via a pier and a ship channel through the atoll's coral reef system. The islands were wired with 13 outgoing and 10 incoming commercial telephone lines, a 60-channel submarine cable, 22 DSN circuits by satellite, an AUTODIN with standard remote terminal, a digital telephone switch, a Military Affiliate Radio System (MARS) station, UHF/VHF air-ground radio, and a link to the Pacific Consolidated Telecommunications Network (PCTN) satellite. Amateur radio operators occasionally transmitted from the island using the KH3 call-sign prefix. The United States Undersea Cable Corporation was awarded contracts to lay underwater cable in the Pacific; a cable known as "Wet Wash C" was laid in 1966 between Makua, Oahu, Hawaii and the Johnston Island Air Force Base. A cable ship surveyed the route and laid the cable, which included 45 repeaters. The cables were manufactured by the Simplex Wire and Cable Company, with the repeaters supplied by Felten and Guilleaume. In 1993 a satellite communication ground station was added to augment the atoll's communications capability.
Johnston Atoll's economic activity was limited to providing services to American military and contractor personnel residing on the island. The island was regularly resupplied by ship or barge, and all foodstuffs and manufactured goods were imported. The base had six 2.5-megawatt (MW) electrical generators using diesel engines. The runway was also available to commercial airlines for emergency landings (a fairly common event), and for many years it was a regular stop on Continental Micronesia airline's "island hopper" service between Hawaii and the Marshall Islands.
There were no official license plates issued for use on Johnston Atoll. U.S. government vehicles carried U.S. government license plates, and private vehicles retained the plates of the jurisdiction in which they were registered. According to reputable license plate collectors, a number of "Johnston Atoll license plates" were created as souvenirs, and have even been sold online to collectors, but they were never officially issued.
About 300 species of fish have been recorded from the reefs and inshore waters of the atoll, which is also visited by green turtles and Hawaiian monk seals. Seabird species recorded as breeding on the atoll include Bulwer's petrel, wedge-tailed shearwater, Christmas shearwater, white-tailed tropicbird, red-tailed tropicbird, brown booby, red-footed booby, masked booby, great frigatebird, spectacled tern, sooty tern, brown noddy, black noddy and white tern. It is visited by migratory shorebirds, including the Pacific golden plover, wandering tattler, bristle-thighed curlew, ruddy turnstone and sanderling. It has been suggested that humpback whales may use the surrounding waters as a breeding ground, albeit in small numbers and with irregular occurrence so far. Many other cetaceans possibly migrate through the area; the most notably confirmed species is Cuvier's beaked whale.
|
https://en.wikipedia.org/wiki?curid=15704
|
Geography of Jordan
Jordan is situated geographically in Southwest Asia, south of Syria, west of Iraq, northwest of Saudi Arabia and east of Israel and the West Bank; politically, the area has also been referred to in the West as the Middle East or Near East. The territory of Jordan now covers about 89,342 square kilometers.
Between 1950 and the Six-Day War in 1967, although not widely recognized, Jordan claimed and administered an additional territory encompassing the West Bank; in 1988, with the Israeli occupation continuing, King Hussein relinquished Jordan's claim to the West Bank in favor of the Palestinians.
Jordan is landlocked except at its southern extremity, where nearly 26 kilometers of shoreline along the Gulf of Aqaba provide access to the Red Sea.
Geographic coordinates:
Except for small sections of the borders with Israel and Syria, Jordan's international boundaries do not follow well-defined natural features of the terrain. The country's boundaries were established by various international agreements and with the exception of the border with Israel, none was in dispute in early 1989.
Jordan's boundaries with Syria, Iraq, and Saudi Arabia do not have the special significance that the border with Israel does; these borders have not always hampered tribal nomads in their movements, yet for a few groups borders did separate them from traditional grazing areas. Jordan's boundary with Saudi Arabia (delimited by a series of agreements between the United Kingdom and the government of what eventually became Saudi Arabia) was first formally defined in the Hadda Agreement of 1925.
In 1965 Jordan and Saudi Arabia concluded an agreement that realigned and delimited the boundary. Jordan gained 19 kilometers of land on the Gulf of Aqaba and 6,000 square kilometers of territory in the interior, and 7,000 square kilometers of Jordanian-administered, landlocked territory was ceded to Saudi Arabia. The new boundary enabled Jordan to expand its port facilities and established a zone in which the two parties agreed to share petroleum revenues equally if oil were discovered. The agreement also protected the pasturage and watering rights of nomadic tribes inside the exchanged territories.
The country consists mainly of a plateau divided into ridges by valleys and gorges, and a few mountainous areas. West of the plateau, the land descends to form the East Bank of the Jordan Rift Valley. The valley is part of the north-south Great Rift Valley, and its successive depressions are Lake Tiberias (the Sea of Galilee), the Jordan Valley, the Dead Sea, the Arabah, and the Gulf of Aqaba at the Red Sea. Jordan's western border follows the bottom of the rift. Although the region is earthquake-prone, no severe shocks have been recorded for several centuries.
By far the greatest part of the East Bank is desert, displaying the land forms and other features associated with great aridity. Most of this land is part of the Syrian Desert and northern Arabian Desert. There are broad expanses of sand and dunes, particularly in the south and southeast, together with salt flats. Occasional jumbles of sandstone hills or low mountains support only meager and stunted vegetation that thrives for a short period after the scanty winter rains. These areas support little life and are the least populated regions of Jordan.
The drainage network is coarse and incised. In many areas the relief provides no eventual outlet to the sea, so that sedimentary deposits accumulate in basins where moisture evaporates or is absorbed in the ground. Toward the depression in the western part of the East Bank, the desert rises gradually into the Jordanian Highlands—a steppe country of high, deeply cut limestone plateaus with an average elevation of about 900 meters. Occasional summits in this region reach 1,200 meters in the northern part and exceed 1,700 meters in the southern part; the highest peak in the region is Jabal Ramm at 1,754 meters (the highest peak in all of Jordan, Jabal Umm al Dami at 1,854 meters, lies in a remote part of southern Jordan). These highlands are an area of long-settled villages.
The western edge of this plateau country forms an escarpment along the eastern side of the Jordan River-Dead Sea depression and its continuation south of the Dead Sea. Most of the wadis that provide drainage from the plateau country into the depression carry water only during the short season of winter rains. Sharply incised with deep, canyon-like walls, whether flowing or dry the wadis can be formidable obstacles to travel.
The Jordan River is short, but from its mountain headwaters (approximately 160 kilometers north of the river's mouth at the Dead Sea) the riverbed drops from an elevation of about 3,000 meters above sea level to more than 400 meters below sea level. Before reaching Jordanian territory the river forms the Sea of Galilee, the surface of which is 212 meters below sea level. The Jordan River's principal tributary is the Yarmouk River. Near the junction of the two rivers, the Yarmouk forms the boundary between Israel on the northwest, Syria on the northeast, and Jordan on the south. The Zarqa River, the second main tributary of the Jordan River, flows and empties entirely within the East Bank.
A 380-kilometer-long rift valley runs from the Yarmouk River in the north to Al Aqaba in the south. The northern part, from the Yarmouk River to the Dead Sea, is commonly known as the Jordan Valley. It is divided into eastern and western parts by the Jordan River. Bordered by a steep escarpment on both the eastern and the western side, the valley reaches a maximum width of twenty-two kilometers at some points. The valley is properly known as "Al Ghawr" or "Al Ghor" (the depression, or valley).
The Rift Valley on the southern side of the Dead Sea is known as the Southern "Ghawr" and the Wadi al Jayb (popularly known as the Wadi al Arabah). The Southern Ghawr runs from Wadi al Hammah, on the south side of the Dead Sea, to Ghawr Faya, about twenty-five kilometers south of the Dead Sea. Wadi al Jayb is 180 kilometers long, from the southern shore of the Dead Sea to Al Aqaba in the south. The valley floor varies in level. In the south, it reaches its lowest level at the Dead Sea (more than 400 meters below sea level), rising in the north to just above sea level. Evaporation from the sea is extreme due to year-round high temperatures. The water contains about 250 grams of dissolved salts per liter at the surface and reaches the saturation point at 110 meters.
The Dead Sea occupies the deepest depression on the land surface of the earth. The depth of the depression is accentuated by the surrounding mountains and highlands that rise to elevations of 800 to 1,200 meters above sea level. The sea's greatest depth is about 430 meters, and it thus reaches a point more than 825 meters below sea level. A drop in the level of the sea has caused the former Lisan Peninsula to become a land bridge dividing the sea into separate northern and southern basins.
The major characteristic of the climate is the contrast between a relatively rainy season from November to April and very dry weather for the rest of the year. With hot, dry, uniform summers and cool, variable winters during which practically all of the precipitation occurs, the country has a Mediterranean-style climate.
In general, the farther inland from the Mediterranean Sea a given part of the country lies, the greater are the seasonal contrasts in temperature and the less rainfall. Atmospheric pressures during the summer months are relatively uniform, whereas the winter months bring a succession of marked low pressure areas and accompanying cold fronts. These cyclonic disturbances generally move eastward from over the Mediterranean Sea several times a month and result in sporadic precipitation.
Most of the East Bank receives scant rainfall and may be classified as a dry desert or steppe region. Where the ground rises to form the highlands east of the Jordan Valley, precipitation increases, with the north receiving more rain than the south. The Jordan Valley, lying in the lee of high ground on the West Bank, forms a narrow climatic zone that receives moderate rainfall in its northern reaches; rain dwindles markedly toward the head of the Dead Sea.
The country's long summer reaches a peak during August. January is usually the coolest month. The fairly wide ranges of temperature during a twenty-four-hour period are greatest during the summer months and tend to increase with higher elevation and distance from the Mediterranean seacoast. Daytime temperatures during the summer months are frequently very high. In contrast, the winter months—November to April—bring moderately cool and sometimes cold weather. Except in the rift depression, frost is fairly common during the winter, and it may take the form of snow at the higher elevations of the northwestern highlands. Snow usually falls a couple of times a year in western Amman.
For a month or so before and after the summer dry season, hot, dry air from the desert, drawn by low pressure, produces strong winds from the south or southeast that sometimes reach gale force. Known in the Middle East by various names, including the khamsin, this dry, sirocco-style wind is usually accompanied by great dust clouds. Its onset is heralded by a hazy sky, a falling barometer, and a drop in relative humidity to about 10 percent. Within a few hours the temperature may rise sharply. These windstorms ordinarily last a day or so, cause much discomfort, and destroy crops by desiccating them.
The shamal, another wind of some significance, comes from the north or northwest, generally at intervals between June and September. Remarkably steady during daytime hours but becoming a breeze at night, the shamal may blow for as long as nine days out of ten and then repeat the process. It originates as a dry continental mass of polar air that is warmed as it passes over the Eurasian landmass. The dryness allows intense heating of the Earth's surface by the sun, resulting in high daytime temperatures that moderate after sunset.
Area:
"total:" 89,342 km²
"land:" 88,802 km²
"water:" 540 km²
Land boundaries:
"total:" 1,744 km
"border countries:"
Iraq 179 km, Israel 307 km, Saudi Arabia 731 km, Syria 379 km, West Bank 148 km
Coastline: 26 km
"note:"
Jordan also borders the Dead Sea.
Maritime claims:
"territorial sea:"
Elevation extremes:
"lowest point:" Dead Sea −408 m
"highest point:" Jabal Umm ad Dami 1,854 m
Natural resources: phosphates, potash, oil shale
Land use:
"arable land:"
2.41%
"permanent crops:"
0.97%
"other:"
96.62% (2012)
Irrigated land:
788.6 km² (2004)
Total renewable water resources:
0.94 km³ (2011)
Freshwater withdrawal (domestic/industrial/agricultural):
"total:"
0.94 km³/yr (31%/4%/65%)
"per capita:"
166 m³/yr (2005)
Natural hazards:
droughts; occasional minor earthquakes in areas close to the Jordan Rift Valley
Environment – current issues:
limited natural fresh water resources and water stress; deforestation; overgrazing; soil erosion; desertification
Environment – international agreements:
"party to:"
Biodiversity, Climate Change, Climate Change-Kyoto Protocol, Desertification, Endangered Species, Hazardous Wastes, Law of the Sea, Marine Dumping, Ozone Layer Protection, Wetlands
|
https://en.wikipedia.org/wiki?curid=15716
|
Demographics of Jordan
Jordan has a population of approximately 9,531,712 inhabitants (females: 47%; males: 53%) as of 2015. Jordanians are the citizens of Jordan, who share a common Levantine Semitic ancestry. Some 98 percent of Jordanians are Arabs, while the remaining 2 percent belong to other ethnic minorities. Around 2.9 million residents were non-citizens, a figure that includes refugees and legal and illegal immigrants. Jordan's annual population growth rate stood at 2.05% in 2017, with an average of three children per woman. There were 1,977,534 households in Jordan in 2015, with an average of 4.8 persons per household.
The official language is Arabic, while English is the second most widely spoken language among Jordanians; it is also widely used in commerce and government. In 2016, about 84% of Jordan's population lived in urban towns and cities. Many Jordanians and people of Jordanian descent live across the world, mainly in the United States, United Arab Emirates, Canada, France, Sweden and Spain.
In 2016, Jordan was named as the largest refugee hosting country per capita in the world, followed by Turkey, Pakistan and Lebanon. The kingdom of Jordan hosts refugees mainly from Palestine, Syria, Iraq and many other countries. There are also hundreds of thousands of workers from Egypt, Indonesia and South Asia, who work as domestic and construction workers.
The territory of Jordan can be defined by the history of its creation after the end of World War I, when the League of Nations redrew the borders of the Eastern Mediterranean littoral. The ensuing decisions, most notably the Sykes–Picot Agreement, created Mandatory Palestine. In September 1922, Transjordan was formally identified as a subdivision of Mandatory Palestine after the League of Nations approved the British Transjordan memorandum, which stated that the Mandate east of the Jordan River would be excluded from all the provisions dealing with Jewish settlement west of the Jordan River.
Arab Jordanians are either descended from families and clans who were living in the cities and towns in Transjordan prior to the 1948 war, most notably in the governorates of Jerash, Ajlun, Balqa, Irbid, Madaba, Al Karak, Aqaba, Amman and some other towns in the country, or from the Palestinian families who sought refuge in Jordan at different times in the 20th century, mostly during and after the wars of 1948 and 1967. Most of the native Christian population in the country belongs to this ethnicity or to the Bedouin ethnicity, along with some other Arab groups, mostly from Syria and Iraq.
The Druze people are believed to constitute about 0.5% of the total population of Jordan, which is around 32,000. The Druze, who refer to themselves as al-Muwahhideen, or "believers in one God," are concentrated in the rural, mountainous areas west and north of Amman.
The other group of Jordanians is descended from Bedouins (of whom less than 1% live a nomadic lifestyle). Bedouin settlements are concentrated in the desert areas in the south and east of the country.
There were an estimated 5,000 Armenians living within the country in 2009. An estimated 4,500 of these are members of the Armenian Apostolic Church, and predominantly speak the Western dialect of the Armenian language. This population makes up the majority of non-Arab Christians in the country.
There is an Assyrian refugee population in Jordan. Many Assyrians have arrived in Jordan as refugees since the invasion of Iraq, making up a large part of the Iraqi refugees.
By the end of the 19th century, the Ottoman Authorities directed the Circassian immigrants to settle in Jordan. The Circassians are Sunni Muslims and are estimated to number 20,000 to 80,000 persons.
There are about 10,000 Chechens estimated to reside in Jordan.
Jordan is home to 2,175,491 registered Palestine refugees. Of those, 634,182 have not been given Jordanian citizenship. Jordan also hosts around 1.4 million Syrian refugees who fled to the country due to the Syrian Civil War since 2011. About 31,163 Yemenis and 22,700 Libyan refugees live in Jordan as of January 2015. There are thousands of Lebanese refugees who came to Jordan when civil strife and the 2006 war broke out in their native country. Up to 1 million Iraqis came to Jordan following the Iraq War in 2003; in 2015, their number was 130,911. About 2,500 Iraqi Mandaean refugees have been resettled in Jordan.
Jordan prides itself on its health services, some of the best in the region. Qualified medics, a favourable investment climate and Jordan's stability have contributed to the success of this sector.
Jordan has a very advanced education system. The school education system comprises two years of pre-school education, ten years of compulsory basic education, and two years of secondary academic or vocational education, after which the students sit for the General Certificate of Secondary Education Exam (Tawjihi). Students may attend either private or public schools.
Access to higher education is open to holders of the General Secondary Education Certificate, who can then choose between private community colleges, public community colleges or universities (public and private). The credit-hour system, which entitles students to select courses according to a study plan, is implemented at universities. There are 10 public universities, 17 private universities, and 51 community colleges. The growth in the number of universities has been accompanied by a significant increase in enrollment: nearly 236,000 students are enrolled in public and private universities, of whom about 28,000 are of Arab or other foreign nationalities.
Source: "UN World Population Prospects"
The following demographic statistics are from the CIA World Factbook, unless otherwise indicated.
10,086,876 (according to the Population Clock, as of February 14, 2018).
Structure of the population
Structure of the population (01.10.2004) (Census)
Structure of the population (31.12.2013) (Estimates) (Excluding foreigners, including registered Palestinian refugees):
Births and deaths
Fertility Rate (The Demographic Health Survey)
Fertility Rate (TFR) (Wanted Fertility Rate) and CBR (Crude Birth Rate):
Fertility Rate (TFR) (Wanted Fertility Rate) by nationality
15–24 years (in 2015):
15 years and older (in 2015):
One World Values Survey reported 51.4% of Jordanians responded that they would prefer not to have neighbors of a different race.
|
https://en.wikipedia.org/wiki?curid=15717
|
Politics of Jordan
The politics of Jordan takes place in a framework of a parliamentary monarchy, whereby the Prime Minister of Jordan is head of government, and of a multi-party system. Jordan is a constitutional monarchy based on the constitution promulgated on January 8, 1952. The king exercises his power through the government he appoints which is responsible before the Parliament.
King Abdullah II of Jordan has been sovereign since the death of his father in 1999. Omar Razzaz has been Prime Minister since 4 June 2018.
The Constitution of Jordan vests executive authority in the king and in his cabinet. The king signs and executes or vetoes all laws; a veto by the king may be overridden by a two-thirds vote of both houses of parliament. The king may also suspend or dissolve parliament at his discretion and shorten or lengthen the term of session, as he most recently did in November 2009. He appoints and may dismiss all judges by decree, approves amendments to the constitution after their passage by both houses of parliament, declares war, and acts as the supreme commander of the armed forces. Cabinet decisions, court judgments, and the national currency are issued in his name. The Cabinet, led by a prime minister, was formerly appointed by the king, but following the 2011 Jordanian protests, King Abdullah agreed to an elected cabinet. The cabinet is responsible to the Chamber of Deputies on matters of general policy; a two-thirds vote of "no confidence" by the Chamber can force the cabinet to resign.
Legislative power rests in the bicameral National Assembly ("Majlis al-Umma"). The Chamber of Deputies ("Majlis al-Nuwaab") has 130 members, elected for four-year terms in single-seat constituencies, with 15 seats reserved for women elected by a special electoral college, nine for Christians and three for Chechens/Circassians. While the Chamber of Deputies is elected by the people, its main legislative abilities are limited to approving, rejecting, or amending legislation, with little power to initiate laws. The Assembly of Senators ("Majlis al-Aayan") has 65 members appointed by the King for a four-year term. The Assembly of Senators is responsible to the Chamber of Deputies and can be removed by a "vote of no confidence".
Political factions or blocs in the Jordanian parliament change with each parliamentary election and typically involve one of the following affiliations: a democratic Marxist/socialist faction, a mainstream liberal faction, a moderate-pragmatic faction, a mainstream conservative faction, and an extreme conservative faction (such as the Islamic Action Front).
The Jordanian Chamber of Deputies is known for brawls between its members, including acts of violence and the use of weapons. In September 2013 Representative Talal al-Sharif tried to shoot one of his colleagues with an assault rifle while at the parliamentary premises.
The judiciary is completely independent of the other two branches of the government. The constitution provides for three categories of courts—civil (in this case meaning "regular"), religious, and special. Regular courts consist of both civil and criminal varieties: at the first level, the First Instance or Conciliation Courts; at the second level, the Appellate (Appeals) Courts; and the Cassation Court, which is the highest judicial authority in the kingdom. There are two types of religious courts: Sharia courts, which enforce the provisions of Islamic law and civil status, and tribunals of other religious communities officially recognized in Jordan.
King Hussein ruled Jordan from 1953 to 1999, surviving a number of challenges to his rule, drawing on the loyalty of his military, and serving as a symbol of unity and stability for both the Jordanian and Palestinian communities in Jordan. King Hussein ended martial law in 1989 and lifted the suspension of political parties that had been imposed following the loss of the West Bank to Israel in order to preserve the status quo in Jordan. In 1989 and 1993, Jordan held free and fair parliamentary elections. Controversial changes in the election law led Islamist parties to boycott the 1997, 2011 and 2013 elections.
King Abdullah II succeeded his father Hussein following the latter's death in February 1999. Abdullah moved quickly to reaffirm Jordan's peace treaty with Israel and its relations with the United States. Abdullah, during the first year in power, refocused the government's agenda on economic reform.
Jordan's continuing structural economic difficulties, burgeoning population, and more open political environment led to the emergence of a variety of political parties. Moving toward greater independence, Jordan's parliament has investigated corruption charges against several regime figures and has become the major forum in which differing political views, including those of political Islamists, are expressed.
On February 1, 2012, it was announced that King Abdullah had dismissed his government. This has been interpreted as a pre-emptive move in the context of the Tunisian Jasmine Revolution and unfolding events in nearby Egypt.
King Abdullah II and the Jordanian Government began the process of decentralization, with the Madaba governorate as the pilot project, on the regional level dividing the nation into three regions: North, Central, and South. The Greater Amman Municipality will be excluded from the plan but it will set up a similar decentralization process. Each region will have an elected council that will handle the political, social, legal, and economic affairs of its area. This decentralization process is part of Jordan's Democratization Program.
Jordan ranked 47th out of 180 nations in the Corruption Perceptions Index. The Constitution of Jordan states that no member of Parliament can have any financial or business dealings with the government and no member of the royal family can be in the government. However, corruption remains a problem in Jordan despite progress. Corruption cases are examined by the Anti-Corruption Commission and then referred to the judiciary for legal action. Corruption in Jordan takes the form of nepotism, favouritism, and bribery.
Administratively, Jordan is divided into twelve governorates ("muhafazat", singular—"muhafazah"), each headed by a governor appointed by the king. They are the sole authorities for all government departments and development projects in their respective areas:
ABEDA, ACC, AFESD, AL, AMF, CAEU, CCC, CTBTO, EBRD, ESCWA, FAO, G-77, IAEA, IBRD, ICAO, ICC, ICFTU, ICRM, IDA, IDB, IFAD, IFC, IFRCS, ILO, IMF, IMO, Intelsat, Interpol, IOC, IOM (observer), ISO (correspondent), ITU, NAM, OIC, OPCW, OSCE (partner), PCA, UN, UNCTAD, UNESCO, UNIDO, UNMIBH, UNMIK, UNMOP, UNMOT, UNOMIG, UNRWA, UNTAET, UNWTO, UPU, WFTU, WHO, WIPO, WMO, WTO, WTrO
|
https://en.wikipedia.org/wiki?curid=15718
|
Lucent
Lucent Technologies, Inc., was an American multinational telecommunications equipment company headquartered in Murray Hill, New Jersey, in the United States. It was established on September 30, 1996, through the divestiture of the former AT&T Technologies business unit of AT&T Corporation, which included Western Electric and Bell Labs.
Lucent was merged with Alcatel SA of France on December 1, 2006, forming Alcatel-Lucent. Alcatel-Lucent was absorbed by Nokia in January 2016.
Lucent means "they shine" in Latin. The name was applied in 1996 at the time of the split from AT&T.
Both the name and, later, the logo were widely criticised, internally and externally. Corporate communications and business cards included the strapline 'Bell Labs Innovations' in a bid to retain the prestige of the internationally famous research lab within a new business under an as-yet unknown name.
This same linguistic root also gives Lucifer, "the light bearer" (from lux, 'light', and ferre, 'to bear'), who is also a character in Dante's epic poem "Inferno". Shortly after the Lucent renaming in 1996, Lucent's Plan 9 project released a development of their work as the Inferno OS in 1997. This extended the 'Lucifer' and Dante references as a series of punning names for the components of Inferno - Dis, Limbo, Charon and Styx (9P Protocol). When the rights to Inferno were sold in 2000, the company Vita Nuova Holdings was formed to represent them. This continues the Dante theme, although moving away from his "Divine Comedy" to the poem "La Vita Nuova".
The Lucent logo, the Innovation Ring, was designed by Landor Associates, a prominent San Francisco-based branding consultancy. One source inside Lucent says that the logo is a Zen Buddhist symbol for "eternal truth", the Enso, turned 90 degrees and modified. Another source says it represents the mythic ouroboros, a snake holding its tail in its mouth. Lucent's logo also has been said to represent constant re-creating and re-thinking. Carly Fiorina picked the logo because her mother was a painter and she rejected the sterile geometric logos of most high tech companies.
After the logo was compared in the media to the ring a coffee mug leaves on paper, a "Dilbert" comic strip showed Dogbert as an overpaid consultant designing a new company logo; he takes a piece of paper that his coffee cup was sitting on and calls it the "Brown Ring of Quality". A telecommunication commentator referred to the logo as "a big red zero" and predicted financial losses.
One of the primary reasons AT&T Corporation chose to spin off its equipment manufacturing business was to permit it to profit from sales to competing telecommunications providers; these customers had previously shown reluctance to purchase from a direct competitor. Bell Labs brought prestige to the new company, as well as the revenue from thousands of patents.
At the time of its spinoff, Lucent was placed under the leadership of Henry Schacht, who was brought in to oversee its transition from an arm of AT&T into an independent corporation. Richard McGinn, who was serving as President and COO, succeeded Schacht as CEO in 1997 while Schacht remained chairman of the board. Lucent became a "darling" stock of the investment community in the late 1990s, and its split-adjusted spinoff price of $7.56/share rose to a high of $84. Its market capitalization reached a high of $258 billion, and it was at the time the most widely held company with 5.3 million shareholders.
In 1997, Lucent acquired Milpitas-based voicemail market leader Octel Communications Corporation for $2.1 billion, a move which immediately rendered the Business Systems Group profitable. By 1999 Lucent stock continued to soar and in that year Lucent acquired Ascend Communications, an Alameda, California–based manufacturer of communications equipment for US$24 billion. Lucent held discussions to acquire Juniper Networks but decided instead to build its own routers.
In 1997, Lucent acquired Livingston Enterprises Inc. for $650 million in stock. Livingston was known most for the creation of the RADIUS protocol and their PortMaster product that was used widely by dial-up internet service providers.
In 1995, Carly Fiorina led corporate operations. In that capacity, she reported to Lucent chief executive Henry B. Schacht. She played a key role in planning and implementing the 1996 initial public offering and the launch strategy for the new stock and company. Under her guidance, the spin-off raised $3 billion.
Later in 1996, Fiorina was appointed president of Lucent's consumer products sector, reporting to president and chief operating officer Rich McGinn. In 1997, she was named group president for Lucent's $19 billion global service-provider business, overseeing marketing and sales for the company's largest customer segment. That year, Fiorina chaired a $2.5 billion joint venture between Lucent's consumer communications business and Royal Philips Electronics, under the name Philips Consumer Communications (PCC). The focus of the venture was to bring both companies into the top three in technology, distribution, and brand recognition.
Ultimately, the project struggled and dissolved a year later after it garnered only 2% market share in mobile phones. Losses were at $500 million on sales of $2.5 billion. As a result of the failed joint venture, Philips announced the closure of one-quarter of the company's 230 factories worldwide, and Lucent closed down its wireless handset portion of the venture. Analysts suggested that the joint venture's failure was due to a combination of technology and management problems. Upon the end of the joint venture, PCC sent 5,000 employees back to Philips, many of whom were laid off, and 8,400 employees back to Lucent.
Under Fiorina, the company added 22,000 jobs and revenues seemed to grow from $19 billion to $38 billion. However, the real driver of Lucent's surging sales under Fiorina was the practice of lending money to its own customers. According to "Fortune" magazine, "In a neat bit of accounting magic, money from the loans began to appear on Lucent's income statement as new revenue while the dicey debt got stashed on its balance sheet as an allegedly solid asset". Lucent's stock price grew 10-fold.
At the start of 2000, Lucent's "private bubble" burst, while competitors like Nortel Networks and Alcatel were still going strong; it would be many months before the rest of the telecom industry bubble collapsed. Previously Lucent had 14 straight quarters where it exceeded analysts' expectations, leading to high expectations for the 15th quarter, ending Dec. 31, 1999. On January 6, 2000, Lucent made the first of a string of announcements that it had missed its quarterly estimates, as CEO Rich McGinn grimly announced that Lucent had run into special problems during that quarter—including disruptions in its optical networking business—and reported flat revenues and a big drop in profits. That caused the stock to plunge by 28%, shaving $64 billion off of the company's market capitalization. When it was later revealed that it had used dubious accounting and sales practices to generate some of its earlier quarterly numbers, Lucent fell from grace. It was said that "Rich McGinn couldn't accept Lucent's fall from its early triumphs." He described himself once as imposing "audacious" goals on his managers, believing the stretch for performance would produce dream results. Henry Schacht defended the corporate culture that McGinn created and also noted that McGinn did not sell any Lucent shares while serving as CEO. In November 2000, the company disclosed to the Securities and Exchange Commission that it had a $125 million accounting error for the third quarter of 2000, and by December 2000 it reported it had overstated its revenues for its latest quarter by nearly $700 million. Although no wrongdoing was found on his part, McGinn was forced to resign as CEO and he was replaced by Schacht on an interim basis. Subsequently, its CFO, Deborah Hopkins, left the company in May 2001 with Lucent's stock at $9.06 whereas at the time she was hired it was at $46.82.
In 2001 there were merger discussions between Lucent and Alcatel, which would have seen Lucent acquired at its current market price without a premium; the newly combined entity would have been headquartered in Murray Hill. However, these negotiations collapsed when Schacht insisted on an equal 7-7 split of the merged company's board of directors, while Alcatel chief executive officer Serge Tchuruk wanted 8 of the 14 board seats for Alcatel due to it being in a stronger position. The failure of the merger talks caused Lucent's share price to collapse, and by October 2002 the stock price had bottomed at 55 cents per share.
Patricia Russo, formerly Lucent's EVP of the Corporate Office, who had left for Eastman Kodak to serve as COO, was named permanent chairman and CEO of Lucent in 2002, succeeding Schacht, who remained on the board of directors.
In April 2000, Lucent sold its Consumer Products unit to VTech and Consumer Phone Services. In October 2000, Lucent spun off its Business Systems arm into Avaya, Inc., and in June 2002, it spun off its microelectronics division into Agere Systems. The spinoffs of enterprise networking and wireless, the industry's key growth businesses from 2003 onward, meant that Lucent no longer had the capacity to serve this market.
Lucent was reduced to 30,500 employees, down from about 165,000 employees at its zenith. The layoffs of so many experienced employees meant that the company was in a weakened position and unable to reestablish itself when the market recovered in 2003. By early 2003, Lucent's market value was $15.6 billion (which includes $6.8 billion of current value for two companies that Lucent had recently spun off, Avaya and Agere Systems), making the shares worth around $2.13, a far cry from its dotcom bubble peak of around $84, when Lucent was worth $258 billion.
Lucent continued to be active in the areas of telephone switching, optical, data and wireless networking.
On April 2, 2006, Lucent announced a merger agreement with Alcatel, which was 1.5 times the size of Lucent. Serge Tchuruk became non-executive chairman, and Russo served as CEO of the newly merged company, Alcatel-Lucent, until they were both forced to resign at the end of 2008. The merger failed to produce the expected synergies, and there were significant write-downs of Lucent's assets that Alcatel purchased.
Lucent was divided into several core groups:
The Murray Hill anechoic chamber, built in 1940, is the world's oldest wedge-based anechoic chamber. Its thick exterior concrete and brick walls keep outside noise from entering the chamber. The chamber absorbs over 99.995% of the incident acoustic energy above 200 Hz. At one time the Murray Hill chamber was cited in the Guinness Book of World Records as the world's quietest room. Inside it, the sounds of one's own skeletal joints and heartbeat become prominently audible.
The Murray Hill facility was the global headquarters of Lucent Technologies and has the largest copper roof in the world. When Lucent Technologies was experiencing financial troubles in 2000 and 2001, one out of every three fluorescent lights was turned off in the facility; the same was done in the Naperville, Illinois, and Allentown, Pennsylvania, facilities for a while. The facility had a cricket field and featured a nearby station from which enthusiasts could control RC airplanes and helicopters.
|
https://en.wikipedia.org/wiki?curid=18157
|
Lupercalia
Lupercalia was an ancient, possibly pre-Roman pastoral annual festival, observed in the city of Rome from the 13th to the 15th of February to avert evil spirits and purify the city, releasing health and fertility. Lupercalia was also called "dies Februatus", after the instruments of purification called "februa", which gave February "(Februarius)" its name.
The festival was later known as Februa ("Purifications" or "Purgings") after the februa, the instruments of purification used on the day. It gave its name to Juno Februalis, Februlis, or Februata in her role as its patron deity, to a god called Februus, and to February ("Februarius"), the month during which it occurred. Ovid connects the word to an Etruscan term for "purging". Some sources connect the Latin word for fever ("febris") with the same idea of purification or purging, due to the sweating commonly seen in association with fevers.
The name "Lupercalia" was believed in antiquity to evince some connection with the Ancient Greek festival of the Arcadian Lykaia, a wolf festival (, "lýkos"; ), and the worship of "Lycaean Pan", assumed to be a Greek equivalent to Faunus, as instituted by Evander. Justin describes a cult image of "the Lycaean god, whom the Greeks call Pan and the Romans Lupercus", as nude, save for a goatskin girdle. It stood in the Lupercal, the cave where tradition held that Romulus and Remus were suckled by the she-wolf (Lupa). The cave lay at the foot of the Palatine Hill, on which Romulus was thought to have founded Rome.
The rites were confined to the Lupercal cave, the Palatine Hill, and the Forum, all of which were central locations in Rome's foundation myth. Near the cave stood a sanctuary of Rumina, goddess of breastfeeding; and the wild fig-tree ("Ficus Ruminalis") to which Romulus and Remus were brought by the divine intervention of the river-god Tiberinus; some Roman sources name the wild fig tree "caprificus", literally "goat fig". Like the cultivated fig, its fruit is pendulous, and the tree exudes a milky sap if cut, which makes it a good candidate for a cult of breastfeeding.
The Lupercalia had its own priesthood, the "Luperci" ("brothers of the wolf"), whose institution and rites were attributed either to the Arcadian culture-hero Evander, or to Romulus and Remus, erstwhile shepherds who had each established a group of followers. The "Luperci" were young men ("iuvenes"), usually between the ages of 20 and 40. They formed two religious "collegia" (associations) based on ancestry; the "Quinctiliani" (named after gens Quinctia) and the "Fabiani" (named after "gens" Fabia). Each college was headed by a "magister". In 44 BC, a third college, the "Juliani", was instituted in honor of Julius Caesar; its first "magister" was Mark Antony. The college of "Juliani" disbanded or lapsed following Caesar's assassination, and was not re-established in the reforms of his successor, Augustus. In the Imperial era, membership of the two traditional "collegia" was opened to "iuvenes" of equestrian status.
At the Lupercal altar, a male goat (or goats) and a dog were sacrificed by one or another of the "Luperci", under the supervision of the Flamen dialis, Jupiter's chief priest. An offering was also made of salted mealcakes, prepared by the Vestal Virgins. After the blood sacrifice, two "Luperci" approached the altar. Their foreheads were anointed with blood from the sacrificial knife, then wiped clean with wool soaked in milk, after which they were expected to smile and/or laugh.
The sacrificial feast followed, after which the Luperci cut thongs (known as "februa") from the flayed skin of the animal, and ran with these, naked or near-naked, along the old Palatine boundary, in an anticlockwise direction around the hill. In Plutarch's description of the Lupercalia, written during the early Empire,
...many of the noble youths and of the magistrates run up and down through the city naked, for sport and laughter striking those they meet with shaggy thongs. And many women of rank also purposely get in their way, and like children at school present their hands to be struck, believing that the pregnant will thus be helped in delivery, and the barren to pregnancy.
The "Luperci" completed their circuit of the Palatine, then returned to the "Lupercal" cave.
The Februa was of ancient and possibly Sabine origin. After February was added to the Roman calendar, Februa occurred on its fifteenth day. Of its various rituals, the most important came to be those of the Lupercalia. The Romans themselves attributed the instigation of the Lupercalia to Evander, a culture hero from Arcadia who was credited with bringing the Olympic pantheon, Greek laws and alphabet to Italy, where he founded the city of Pallantium on the future site of Rome, 60 years before the Trojan War.
Lupercalia was celebrated in parts of Italy and Gaul; "Luperci" are attested by inscriptions at Velitrae, Praeneste, Nemausus (modern Nîmes) and elsewhere. The ancient cult of the Hirpi Sorani ("wolves of Soranus", from Sabine "hirpus" "wolf"), who practiced at Mt. Soracte, north of Rome, had elements in common with the Roman Lupercalia.
Descriptions of the Lupercalia festival of 44 BC attest to its continuity; Julius Caesar used it as the backdrop for his (possibly staged) public refusal of a golden crown offered to him by Mark Antony. The Lupercal cave was restored or rebuilt by Augustus, and has been speculated to be identical with a grotto discovered in 2007, below the remains of Augustus' residence; according to scholarly consensus, the grotto is a nymphaeum, not the Lupercal. The Lupercalia festival is marked on a calendar of 354 alongside traditional and Christian festivals. Despite the banning in 391 of all non-Christian cults and festivals, Lupercalia was celebrated by the nominally Christian populace on a regular basis into the reign of the emperor Anastasius. Pope Gelasius I (494–96) claimed that only the "vile rabble" were involved in the festival and sought its forceful abolition; the Senate protested that the Lupercalia was essential to Rome's safety and well-being. This prompted Gelasius' scornful suggestion that "If you assert that this rite has salutary force, celebrate it yourselves in the ancestral fashion; run nude yourselves that you may properly carry out the mockery".
There is no contemporary evidence to support the popular notions that Gelasius abolished the Lupercalia, or that he, or any other prelate, replaced it with the Feast of the Purification of the Blessed Virgin Mary. A literary association between Lupercalia and the romantic elements of Saint Valentine's Day dates back to Chaucer and poetic traditions of courtly love.
Horace's Ode III, 18 alludes to the Lupercalia. The festival, or its associated rituals, gave its name to the Roman month of February ("Februarius") and thence to the modern month. The Roman god Februus personified both the month and purification, but seems to postdate both.
William Shakespeare's play "Julius Caesar" begins during the Lupercalia. Mark Antony is instructed by Caesar to strike his wife Calpurnia, in the hope that she will be able to conceive.
Research published in 2019 suggests that the word Leprechaun derives from "Lupercus".
In the second season of the Netflix series "Chilling Adventures of Sabrina", the witches celebrate Lupercalia.
|
https://en.wikipedia.org/wiki?curid=18158
|
List of agnostics
Listed here are persons who have identified themselves as theologically agnostic. Also included are individuals who have expressed the view that the veracity of a god's existence is unknown or inherently unknowable.
"To be sure, when she wrote her groundbreaking book, Friedan considered herself an "agnostic" Jew, unaffiliated with any religious branch or institution." Kirsten Fermaglich, "American Dreams and Nazi Nightmares: Early Holocaust Consciousness and Liberal America, 1957–1965" (2007), page 59.
"Kafka was also alienated from his own heritage by his parent's perfunctory religious practice and minimal social formality in the Jewish community, though his style and influence is sometimes attributed to Jewish folk lore. Kafka eventually declared himself a socialist atheist, Spinoza, Darwin and Nietzsche some of his influences." C. D. Merriman, Franz Kafka.
"Nabokov is a self-affirmed agnostic in matters religious, political, and philosophical." Donald E. Morton, "Vladimir Nabokov" (1974), p. 8.
On his religious beliefs:
ANNO: "I don't belong to any kind of organized religion, so I guess I could be considered agnostic. Japanese spiritualism holds that there is kami (spirit) in everything, and that's closer to my own beliefs." Anno's Roundtable Discussion.
"Henry Fonda claims to be an agnostic. Not an atheist but a doubter." Howard Teichmann, "Fonda: My Life", p. 303.
"The publication of Darwin's "Origin of Species" totally transformed his intellectual life, giving him a sense of evolutionary process without which much of his later work would have been unimaginable. Galton became a "religious agnostic", recognising the social value of religion but not its transcendental basis." Robert Peel, Sir Francis Galton FRS (1822–1911) – The Legacy of His Ideas - .
"'You really can't know,' answered Bill Nye the Controversial Guy." Steve Wartenberg, The Morning Call, 6 April 2006.
"Now Ibn al-Haytham was a devout Muslim – that is, he was a supernaturalist. He studied science because he considered that by doing this he could better understand the nature of the god that he believed in – he thought that a supernatural agent had created the laws of nature. The same is true of virtually all the leading scientists in the Western world, such as Galileo and Newton, who lived after al-Haytham, until about the middle of the twentieth century. There were a few exceptions – Pierre Laplace, Siméon Poisson, Albert Einstein, Paul Dirac and Marie Curie were naturalists for example." John Ellis, "How Science Works: Evolution: A Student Primer", p. 13.
"Both Enrico and Leo were agnostics." Nina Byers, Fermi and Szilard.
|
https://en.wikipedia.org/wiki?curid=18166
|
Linked list
In computer science, a linked list is a linear collection of data elements whose order is not given by their physical placement in memory; instead, each element points to the next. It is a data structure consisting of a collection of nodes which together represent a sequence. In its most basic form, each node contains data and a reference (in other words, a "link") to the next node in the sequence. This structure allows for efficient insertion or removal of elements from any position in the sequence during iteration. More complex variants add additional links, allowing more efficient insertion or removal of nodes at arbitrary positions. A drawback of linked lists is that access time is linear (and difficult to pipeline); faster access, such as random access, is not feasible. Arrays have better cache locality than linked lists.
"A linked list whose nodes contain two fields: an integer value and a link to the next node. The last node is linked to a terminator used to signify the end of the list."
Linked lists are among the simplest and most common data structures. They can be used to implement several other common abstract data types, including lists, stacks, queues, associative arrays, and S-expressions, though it is not uncommon to implement those data structures directly without using a linked list as the basis.
The principal benefit of a linked list over a conventional array is that the list elements can be easily inserted or removed without reallocation or reorganization of the entire structure because the data items need not be stored contiguously in memory or on disk, while restructuring an array at run-time is a much more expensive operation. Linked lists allow insertion and removal of nodes at any point in the list, and allow doing so with a constant number of operations by keeping the link previous to the link being added or removed in memory during list traversal.
On the other hand, since simple linked lists by themselves do not allow random access to the data or any form of efficient indexing, many basic operations—such as obtaining the last node of the list, finding a node that contains a given datum, or locating the place where a new node should be inserted—may require iterating through most or all of the list elements. The advantages and disadvantages of using linked lists are given below. Linked lists are dynamic, so the length of a list can increase or decrease as necessary. Each node does not necessarily follow the previous one physically in memory.
Linked lists were developed in 1955–1956 by Allen Newell, Cliff Shaw and Herbert A. Simon at RAND Corporation as the primary data structure for their Information Processing Language. IPL was used by the authors to develop several early artificial intelligence programs, including the Logic Theory Machine, the General Problem Solver, and a computer chess program. Reports on their work appeared in IRE Transactions on Information Theory in 1956, and several conference proceedings from 1957 to 1959, including Proceedings of the Western Joint Computer Conference in 1957 and 1958, and Information Processing (Proceedings of the first UNESCO International Conference on Information Processing) in 1959. The now-classic diagram consisting of blocks representing list nodes with arrows pointing to successive list nodes appears in "Programming the Logic Theory Machine" by Newell and Shaw in Proc. WJCC, February 1957. Newell and Simon were recognized with the ACM Turing Award in 1975 for having "made basic contributions to artificial intelligence, the psychology of human cognition, and list processing".
The problem of machine translation for natural language processing led Victor Yngve at Massachusetts Institute of Technology (MIT) to use linked lists as data structures in his COMIT programming language for computer research in the field of linguistics. A report on this language entitled "A programming language for mechanical translation" appeared in Mechanical Translation in 1958.
LISP, standing for list processor, was created by John McCarthy in 1958 while he was at MIT and in 1960 he published its design in a paper in the Communications of the ACM, entitled "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I". One of LISP's major data structures is the linked list.
By the early 1960s, the utility of both linked lists and languages which use these structures as their primary data representation was well established. Bert Green of the MIT Lincoln Laboratory published a review article entitled "Computer languages for symbol manipulation" in IRE Transactions on Human Factors in Electronics in March 1961 which summarized the advantages of the linked list approach. A later review article, "A Comparison of list-processing computer languages" by Bobrow and Raphael, appeared in Communications of the ACM in April 1964.
Several operating systems developed by Technical Systems Consultants (originally of West Lafayette, Indiana, and later of Chapel Hill, North Carolina) used singly linked lists as file structures. A directory entry pointed to the first sector of a file, and succeeding portions of the file were located by traversing pointers. Systems using this technique included Flex (for the Motorola 6800 CPU), mini-Flex (same CPU), and Flex9 (for the Motorola 6809 CPU). A variant, developed by TSC for and marketed by Smoke Signal Broadcasting in California, used doubly linked lists in the same manner.
The TSS/360 operating system, developed by IBM for the System 360/370 machines, used a doubly linked list for its file system catalog. The directory structure was similar to Unix, where a directory could contain files and other directories and extend to any depth.
Each record of a linked list is often called an 'element' or 'node'.
The field of each node that contains the address of the next node is usually called the 'next link' or 'next pointer'. The remaining fields are known as the 'data', 'information', 'value', 'cargo', or 'payload' fields.
The 'head' of a list is its first node. The 'tail' of a list may refer either to the rest of the list after the head, or to the last node in the list. In Lisp and some derived languages, the next node may be called the 'cdr' (pronounced "could-er") of the list, while the payload of the head node may be called the 'car'.
Singly linked lists contain nodes which have a data field as well as 'next' field, which points to the next node in line of nodes. Operations that can be performed on singly linked lists include insertion, deletion and traversal.
"A singly linked list whose nodes contain two fields: an integer value and a link to the next node"The following code demonstrates how to add a new node with data "value" to the end of a singly linked list:
#include <stdlib.h>

typedef struct node {
    int data;
    struct node *next;
} *node;

/* Appends a new node holding "value" to the end of the list and
   returns the (possibly new) head; allocation-failure handling is
   omitted for brevity. */
node addNode(node head, int value) {
    node temp = malloc(sizeof *temp);   /* create the new node */
    temp->data = value;
    temp->next = NULL;                  /* it becomes the last node */
    if (head == NULL)
        return temp;                    /* empty list: new node is the head */
    node p = head;
    while (p->next != NULL)
        p = p->next;                    /* walk to the current last node */
    p->next = temp;                     /* link the old tail to the new node */
    return head;
}
In a 'doubly linked list', each node contains, besides the next-node link, a second link field pointing to the 'previous' node in the sequence. The two links may be called 'forward' and 'backward', or 'next' and 'prev' ('previous').
"A doubly linked list whose nodes contain three fields: an integer value, the link forward to the next node, and the link backward to the previous node"
A technique known as XOR-linking allows a doubly linked list to be implemented using a single link field in each node. However, this technique requires the ability to do bit operations on addresses, and therefore may not be available in some high-level languages.
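As a rough sketch of the trick (the field and function names are assumed for illustration): the single link field stores the XOR of the previous and next nodes' addresses, so a traversal must carry a pair of adjacent pointers to decode each step:

#include <stdint.h>

typedef struct xnode {
    int data;
    uintptr_t link;   /* address(prev) XOR address(next) */
} xnode;

/* Recover the successor of cur, given the node we arrived from.
   Passing the arguments in the opposite order walks the list backward. */
xnode *xorNext(xnode *prev, xnode *cur) {
    return (xnode *)(cur->link ^ (uintptr_t)prev);
}

A traversal starts at either end with prev = NULL; because the same field serves both directions, the list behaves as doubly linked at the storage cost of a singly linked one.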
Many modern operating systems use doubly linked lists to maintain references to active processes, threads, and other dynamic objects. A common strategy for rootkits to evade detection is to unlink themselves from these lists.
In a 'multiply linked list', each node contains two or more link fields, each field being used to connect the same set of data records in a different order (e.g., by name, by department, by date of birth, etc.). While doubly linked lists can be seen as special cases of multiply linked lists, the fact that the two or more orders are opposite to each other leads to simpler and more efficient algorithms, so they are usually treated as a separate case.
In the last node of a list, the link field often contains a null reference, a special value used to indicate the lack of further nodes. A less common convention is to make it point to the first node of the list; in that case, the list is said to be 'circular' or 'circularly linked'; otherwise, it is said to be 'open' or 'linear'.
"A circular linked list"
In the case of a circular doubly linked list, the first node also points to the last node of the list.
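As an illustrative sketch (reusing the "node" typedef from the singly linked example above), the main practical difference shows up in the traversal condition: the loop ends when the walk returns to its starting node rather than when it reaches a NULL link:

/* Visit every element of a circular singly linked list exactly once. */
void forEach(node start, void (*visit)(int)) {
    if (start == NULL)
        return;               /* an empty circular list */
    node p = start;
    do {
        visit(p->data);
        p = p->next;
    } while (p != start);     /* stop on returning to the start */
}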
In some implementations an extra 'sentinel' or 'dummy' node may be added before the first data record or after the last one. This convention simplifies and accelerates some list-handling algorithms, by ensuring that all links can be safely dereferenced and that every list (even one that contains no data elements) always has a "first" and "last" node.
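For illustration (assuming the doubly linked dnode sketched earlier, arranged as a circular list anchored by a sentinel), removal needs no NULL checks and no special cases for the first, last, or only element:

/* An empty list is just the sentinel linked to itself. */
void initList(dnode *sentinel) {
    sentinel->prev = sentinel;
    sentinel->next = sentinel;
}

/* Every real node always has live neighbours (at worst the sentinel
   itself), so unlinking never dereferences a NULL pointer. */
void removeNode(dnode *n) {
    n->prev->next = n->next;
    n->next->prev = n->prev;
}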
An empty list is a list that contains no data records. This is usually the same as saying that it has zero nodes. If sentinel nodes are being used, the list is usually said to be empty when it has only sentinel nodes.
The link fields need not be physically part of the nodes. If the data records are stored in an array and referenced by their indices, the link field may be stored in a separate array with the same indices as the data records.
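A minimal sketch of that layout (the array names and the -1 end-marker are assumed conventions, not from the article):

enum { N = 100 };
int data[N];         /* the records themselves                   */
int nextIndex[N];    /* link field kept in a separate array      */
int head = -1;       /* index of the first record; -1 when empty */

/* Walk the list: the record after data[i] is data[nextIndex[i]]. */
int sum(void) {
    int total = 0;
    for (int i = head; i != -1; i = nextIndex[i])
        total += data[i];
    return total;
}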
Since a reference to the first node gives access to the whole list, that reference is often called the 'address', 'pointer', or 'handle' of the list. Algorithms that manipulate linked lists usually get such handles to the input lists and return the handles to the resulting lists. In fact, in the context of such algorithms, the word "list" often means "list handle". In some situations, however, it may be convenient to refer to a list by a handle that consists of two links, pointing to its first and last nodes.
The alternatives listed above may be arbitrarily combined in almost every way, so one may have circular doubly linked lists without sentinels, circular singly linked lists with sentinels, etc.
As with most choices in computer programming and design, no method is well suited to all circumstances. A linked list data structure might work well in one case, but cause problems in another. This is a list of some of the common tradeoffs involving linked list structures.
A "dynamic array" is a data structure that allocates all elements contiguously in memory, and keeps a count of the current number of elements. If the space reserved for the dynamic array is exceeded, it is reallocated and (possibly) copied, which is an expensive operation.
Linked lists have several advantages over dynamic arrays. Insertion or deletion of an element at a specific point of a list, assuming that we already have a pointer to the appropriate node (the one before the removal or insertion point), is a constant-time operation (otherwise, without this reference, it is O(n)), whereas insertion in a dynamic array at a random location will require moving half of the elements on average, and all the elements in the worst case. While one can "delete" an element from an array in constant time by somehow marking its slot as "vacant", this causes fragmentation that impedes the performance of iteration.
Moreover, arbitrarily many elements may be inserted into a linked list, limited only by the total memory available; while a dynamic array will eventually fill up its underlying array data structure and will have to reallocate—an expensive operation, one that may not even be possible if memory is fragmented, although the cost of reallocation can be averaged over insertions, and the cost of an insertion due to reallocation would still be amortized O(1). This helps with appending elements at the array's end, but inserting into (or removing from) middle positions still carries prohibitive costs due to data moving to maintain contiguity. An array from which many elements are removed may also have to be resized in order to avoid wasting too much space.
On the other hand, dynamic arrays (as well as fixed-size array data structures) allow constant-time random access, while linked lists allow only sequential access to elements. Singly linked lists, in fact, can be easily traversed in only one direction. This makes linked lists unsuitable for applications where it's useful to look up an element by its index quickly, such as heapsort. Sequential access on arrays and dynamic arrays is also faster than on linked lists on many machines, because they have optimal locality of reference and thus make good use of data caching.
Another disadvantage of linked lists is the extra storage needed for references, which often makes them impractical for lists of small data items such as characters or boolean values, because the storage overhead for the links may exceed by a factor of two or more the size of the data. In contrast, a dynamic array requires only the space for the data itself (and a very small amount of control data). It can also be slow, and with a naïve allocator, wasteful, to allocate memory separately for each new element, a problem generally solved using memory pools.
Some hybrid solutions try to combine the advantages of the two representations. Unrolled linked lists store several elements in each list node, increasing cache performance while decreasing memory overhead for references. CDR coding does both these as well, by replacing references with the actual data referenced, which extends off the end of the referencing record.
A good example that highlights the pros and cons of using dynamic arrays vs. linked lists is the implementation of a program that solves the Josephus problem. The Josephus problem is an election method that works by having a group of people stand in a circle. Starting at a predetermined person, one counts around the circle "n" times. Once the "n"th person is reached, one removes them from the circle and has the members close the circle. The process is repeated until only one person is left, and that person wins the election. This shows the strengths and weaknesses of a linked list vs. a dynamic array: if the people are viewed as connected nodes in a circular linked list, the linked list can delete nodes easily (it only has to rearrange the links between nodes). However, the linked list is poor at finding the next person to remove, since it must traverse the list until it reaches that person. A dynamic array, on the other hand, is poor at deleting nodes (or elements), as it cannot remove one element without individually shifting all later elements up the list by one. However, it is exceptionally easy to find the "n"th person in the circle by directly referencing them by their position in the array.
The list ranking problem concerns the efficient conversion of a linked list representation into an array. Although trivial for a conventional computer, solving this problem by a parallel algorithm is complicated and has been the subject of much research.
A balanced tree has similar memory access patterns and space overhead to a linked list while permitting much more efficient indexing, taking O(log n) time instead of O(n) for a random access. However, insertion and deletion operations are more expensive due to the overhead of tree manipulations to maintain balance. Schemes exist for trees to automatically maintain themselves in a balanced state: AVL trees or red-black trees.
While doubly linked and circular lists have advantages over singly linked linear lists, linear lists offer some advantages that make them preferable in some situations.
A singly linked linear list is a recursive data structure, because it contains a pointer to a "smaller" object of the same type. For that reason, many operations on singly linked linear lists (such as merging two lists, or enumerating the elements in reverse order) often have very simple recursive algorithms, much simpler than any solution using iterative commands. While those recursive solutions can be adapted for doubly linked and circularly linked lists, the procedures generally need extra arguments and more complicated base cases.
Linear singly linked lists also allow tail-sharing, the use of a common final portion of sub-list as the terminal portion of two different lists. In particular, if a new node is added at the beginning of a list, the former list remains available as the tail of the new one—a simple example of a persistent data structure. Again, this is not true with the other variants: a node may never belong to two different circular or doubly linked lists.
In particular, end-sentinel nodes can be shared among singly linked non-circular lists. The same end-sentinel node may be used for "every" such list. In Lisp, for example, every proper list ends with a link to a special node, denoted by "nil" or "()", whose "car" and "cdr" links point to itself. Thus a Lisp procedure can safely take the "car" or "cdr" of "any" list.
The advantages of the fancy variants are often limited to the complexity of the algorithms, not in their efficiency. A circular list, in particular, can usually be emulated by a linear list together with two variables that point to the first and last nodes, at no extra cost.
Doubly linked lists require more space per node (unless one uses XOR-linking), and their elementary operations are more expensive; but they are often easier to manipulate because they allow fast and easy sequential access to the list in both directions. In a doubly linked list, one can insert or delete a node in a constant number of operations given only that node's address. To do the same in a singly linked list, one must have the address of the "pointer" to that node, which is either the handle for the whole list (in the case of the first node) or the link field in the "previous" node. Some algorithms require access in both directions. On the other hand, doubly linked lists do not allow tail-sharing and cannot be used as persistent data structures.
A circularly linked list may be a natural option to represent arrays that are naturally circular, e.g. the corners of a polygon, a pool of buffers that are used and released in FIFO ("first in, first out") order, or a set of processes that should be time-shared in round-robin order. In these applications, a pointer to any node serves as a handle to the whole list.
With a circular list, a pointer to the last node gives easy access also to the first node, by following one link. Thus, in applications that require access to both ends of the list (e.g., in the implementation of a queue), a circular structure allows one to handle the structure by a single pointer, instead of two.
A circular list can be split into two circular lists, in constant time, by giving the addresses of the last node of each piece. The operation consists in swapping the contents of the link fields of those two nodes. Applying the same operation to any two nodes in two distinct lists joins the two lists into one. This property greatly simplifies some algorithms and data structures, such as the quad-edge and face-edge.
The simplest representation for an empty "circular" list (when such a thing makes sense) is a null pointer, indicating that the list has no nodes. Without this choice, many algorithms have to test for this special case, and handle it separately. By contrast, the use of null to denote an empty "linear" list is more natural and often creates fewer special cases.
A sentinel node may simplify certain list operations, by ensuring that the next or previous nodes exist for every element, and that even empty lists have at least one node. One may also use a sentinel node at the end of the list, with an appropriate data field, to eliminate some end-of-list tests. For example, when scanning the list looking for a node with a given value "x", setting the sentinel's data field to "x" makes it unnecessary to test for end-of-list inside the loop, as shown in the sketch below. Another example is merging two sorted lists: if their sentinels have data fields set to +∞, the choice of the next output node does not need special handling for empty lists.
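A minimal Python sketch of the data-field trick, assuming a singly linked list whose last node is a dedicated sentinel (the "firstNode" and "sentinel" names are illustrative):

    def find(lst, x):
        lst.sentinel.data = x                  # plant the target in the sentinel
        node = lst.firstNode
        while node.data != x:                  # no separate end-of-list test needed
            node = node.next
        return node if node is not lst.sentinel else None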
However, sentinel nodes use up extra space (especially in applications that use many short lists), and they may complicate other operations (such as the creation of a new empty list).
However, if the circular list is used merely to simulate a linear list, one may avoid some of this complexity by adding a single sentinel node to every list, between the last and the first data nodes. With this convention, an empty list consists of the sentinel node alone, pointing to itself via the next-node link. The list handle should then be a pointer to the last data node, before the sentinel, if the list is not empty; or to the sentinel itself, if the list is empty.
The same trick can be used to simplify the handling of a doubly linked linear list, by turning it into a circular doubly linked list with a single sentinel node. However, in this case, the handle should be a single pointer to the dummy node itself.
When manipulating linked lists in-place, care must be taken to not use values that you have invalidated in previous assignments. This makes algorithms for inserting or deleting linked list nodes somewhat subtle. This section gives pseudocode for adding or removing nodes from singly, doubly, and circularly linked lists in-place. Throughout we will use "null" to refer to an end-of-list marker or sentinel, which may be implemented in a number of ways.
Our node data structure will have two fields. We also keep a variable "firstNode" which always points to the first node in the list, or is "null" for an empty list.
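A minimal Python sketch of such a record, with "None" playing the role of "null" (the names are illustrative):

    class Node:
        def __init__(self, data, next=None):
            self.data = data                   # the value stored in this node
            self.next = next                   # link to the following node, or None

    class LinkedList:
        def __init__(self):
            self.firstNode = None              # None stands for "null": the list starts empty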
Traversal of a singly linked list is simple, beginning at the first node and following each "next" link until we come to the end:
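In Python, given a "LinkedList" instance "lst" as sketched above:

    node = lst.firstNode
    while node is not None:                    # follow each "next" link until the end
        print(node.data)
        node = node.next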
The following code inserts a node after an existing node in a singly linked list. Inserting a node before an existing one cannot be done directly; instead, one must keep track of the previous node and insert a node after it.
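A sketch of such a routine, continuing the names above:

    def insert_after(node, new_node):
        # Splice new_node in between node and its successor.
        new_node.next = node.next
        node.next = new_node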
Inserting at the beginning of the list requires a separate function. This requires updating "firstNode".
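A sketch of that function:

    def insert_beginning(lst, new_node):
        new_node.next = lst.firstNode          # the old first node (or None) follows the new one
        lst.firstNode = new_node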
Similarly, we have functions for removing the node "after" a given node, and for removing a node from the beginning of the list. To find and remove a particular node, one must again keep track of the previous element.
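Sketches of both removal routines:

    def remove_after(node):
        obsolete = node.next                   # the node being removed
        node.next = obsolete.next

    def remove_beginning(lst):
        obsolete = lst.firstNode
        lst.firstNode = obsolete.next          # becomes None when removing the last node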
Notice that "removeBeginning()" sets "list.firstNode" to "null" when removing the last node in the list.
Since we can't iterate backwards, efficient "insertBefore" or "removeBefore" operations are not possible. Inserting into a list before a specific node requires traversing the list, which would have a worst case running time of O(n).
Appending one linked list to another can be inefficient unless a reference to the tail is kept as part of the List structure, because we must traverse the entire first list in order to find the tail, and then append the second list to this. Thus, if two linearly linked lists are each of length "n", list appending has asymptotic time complexity of O("n"). In the Lisp family of languages, list appending is provided by the "append" procedure.
Many of the special cases of linked list operations can be eliminated by including a dummy element at the front of the list. This ensures that there are no special cases for the beginning of the list and renders both "insertBeginning()" and "removeBeginning()" unnecessary. In this case, the first useful data in the list will be found at "list.firstNode.next".
In a circularly linked list, all nodes are linked in a continuous circle, without using "null." For lists with a front and a back (such as a queue), one stores a reference to the last node in the list. The "next" node after the last node is the first node. Elements can be added to the back of the list and removed from the front in constant time.
Circularly linked lists can be either singly or doubly linked.
Both types of circularly linked lists benefit from the ability to traverse the full list beginning at any given node. This often allows us to avoid storing "firstNode" and "lastNode", although if the list may be empty we need a special representation for the empty list, such as a "lastNode" variable which points to some node in the list or is "null" if it's empty; we use such a "lastNode" here. This representation significantly simplifies adding and removing nodes with a non-empty list, but empty lists are then a special case.
Assuming that "someNode" is some node in a non-empty circular singly linked list, this code iterates through that list starting with "someNode":
Notice that the test "while node ≠ someNode" must be at the end of the loop. If the test was moved to the beginning of the loop, the procedure would fail whenever the list had only one node.
This function inserts a node "newNode" into a circular linked list after a given node "node". If "node" is null, it assumes that the list is empty.
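A sketch of this function (a circular-list variant of the earlier "insert_after"):

    def insert_after(node, new_node):
        if node is None:                       # empty list: the new node links to itself
            new_node.next = new_node
        else:
            new_node.next = node.next
            node.next = new_node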
Suppose that "L" is a variable pointing to the last node of a circular linked list (or null if the list is empty). To append "newNode" to the "end" of the list, one may do
To insert "newNode" at the "beginning" of the list, one may do
Languages that do not support any type of reference can still create links by replacing pointers with array indices. The approach is to keep an array of records, where each record has integer fields indicating the index of the next (and possibly previous) node in the array. Not all nodes in the array need be used. If records are also not supported, parallel arrays can often be used instead.
As an example, consider the following linked list record that uses arrays instead of pointers:
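A Python sketch of such a record (the field names are illustrative; −1 serves as the "no further node" marker):

    from dataclasses import dataclass

    @dataclass
    class Entry:
        next: int                              # array index of the next entry, or -1
        prev: int                              # array index of the previous entry, or -1
        name: str
        balance: float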
A linked list can be built by creating an array of these structures, and an integer variable to store the index of the first element.
Links between elements are formed by placing the array index of the next (or previous) cell into the Next or Prev field within a given element. For example:
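A plausible reconstruction of such an example, consistent with the description in the next paragraph (a list kept sorted by name, with "listHead" set to 2 and entries 3 and 5 through 7 unused):

    records = [None] * 8                       # the array holding the Entry records
    records[0] = Entry(next=1,  prev=4,  name="Jones, John",     balance=123.45)
    records[1] = Entry(next=-1, prev=0,  name="Smith, Joseph",   balance=234.56)
    records[2] = Entry(next=4,  prev=-1, name="Adams, Adam",     balance=0.00)
    records[4] = Entry(next=0,  prev=2,  name="Ignatz, Michael", balance=1000.00)
    listHead = 2                               # list order by name: 2, 4, 0, 1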
In the above example, "listHead" would be set to 2, the location of the first entry in the list. Notice that entries 3 and 5 through 7 are not part of the list. These cells are available for any additions to the list. By creating a "listFree" integer variable, a free list could be created to keep track of what cells are available. If all entries are in use, the size of the array would have to be increased or some elements would have to be deleted before new entries could be stored in the list.
The following code would traverse the list and display names and account balance:
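In Python, continuing the example above:

    i = listHead
    while i >= 0:                              # -1 marks the end of the list
        print(records[i].name, records[i].balance)
        i = records[i].next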
When faced with a choice, the advantages of this approach include: the entire list is relocatable, since the index links remain valid when the whole array is moved in memory or serialized to disk; and locality of reference is improved, because the nodes occupy one contiguous block of memory.
This approach has one main disadvantage, however: it creates and manages a private memory space for its nodes. This leads to the following issues: the implementation is more complex; growing the pool means growing or reallocating the whole array; and slots freed within the pool can be reused only by this list, not by the rest of the program.
For these reasons, this approach is mainly used for languages that do not support dynamic memory allocation. These disadvantages are also mitigated if the maximum size of the list is known at the time the array is created.
Many programming languages such as Lisp and Scheme have singly linked lists built in. In many functional languages, these lists are constructed from nodes, each called a "cons" or "cons cell". The cons has two fields: the "car", a reference to the data for that node, and the "cdr", a reference to the next node. Although cons cells can be used to build other data structures, this is their primary purpose.
In languages that support abstract data types or templates, linked list ADTs or templates are available for building linked lists. In other languages, linked lists are typically built using references together with records.
When constructing a linked list, one is faced with the choice of whether to store the data of the list directly in the linked list nodes, called "internal storage", or merely to store a reference to the data, called "external storage". Internal storage has the advantage of making access to the data more efficient, requiring less storage overall, having better locality of reference, and simplifying memory management for the list (its data is allocated and deallocated at the same time as the list nodes).
External storage, on the other hand, has the advantage of being more generic, in that the same data structure and machine code can be used for a linked list no matter what the size of the data is. It also makes it easy to place the same data in multiple linked lists. Although with internal storage the same data can be placed in multiple lists by including multiple "next" references in the node data structure, it would then be necessary to create separate routines to add or delete cells based on each field. It is possible to create additional linked lists of elements that use internal storage by using external storage, and having the cells of the additional linked lists store references to the nodes of the linked list containing the data.
In general, if a set of data structures needs to be included in linked lists, external storage is the best approach. If a set of data structures need to be included in only one linked list, then internal storage is slightly better, unless a generic linked list package using external storage is available. Likewise, if different sets of data that can be stored in the same data structure are to be included in a single linked list, then internal storage would be fine.
Another approach that can be used with some languages involves having different data structures, but all have the initial fields, including the "next" (and "prev" if double linked list) references in the same location. After defining separate structures for each type of data, a generic structure can be defined that contains the minimum amount of data shared by all the other structures and contained at the top (beginning) of the structures. Then generic routines can be created that use the minimal structure to perform linked list type operations, but separate routines can then handle the specific data. This approach is often used in message parsing routines, where several types of messages are received, but all start with the same set of fields, usually including a field for message type. The generic routines are used to add new messages to a queue when they are received, and remove them from the queue in order to process the message. The message type field is then used to call the correct routine to process the specific type of message.
Suppose you wanted to create a linked list of families and their members. Using internal storage, the structure might look like the following:
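A minimal Python sketch of internal storage (names are illustrative): each node carries its data directly, and each family node also heads a list of its members.

    class Member:
        def __init__(self, first_name, age, next=None):
            self.first_name = first_name       # data stored directly in the node
            self.age = age
            self.next = next                   # next member of the same family

    class Family:
        def __init__(self, last_name, address, members=None, next=None):
            self.last_name = last_name         # data stored directly in the node
            self.address = address
            self.members = members             # head of this family's member list
            self.next = next                   # next family in the list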
To print a complete list of families and their members using internal storage, we could write:
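Continuing the sketch, with "families" as the handle of the family list:

    family = families                          # head of the list of families
    while family is not None:
        print(family.last_name, family.address)
        member = family.members
        while member is not None:
            print("   ", member.first_name, member.age)
            member = member.next
        family = family.next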
Using external storage, we would create the following structures:
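With external storage, one generic node type holds only a link and a reference to the data record (again, names are illustrative):

    class ListNode:
        def __init__(self, data, next=None):
            self.data = data                   # reference to the record stored in this node
            self.next = next

    class Member:
        def __init__(self, first_name, age):
            self.first_name = first_name
            self.age = age

    class Family:
        def __init__(self, last_name, address, members=None):
            self.last_name = last_name
            self.address = address
            self.members = members             # a ListNode chain whose data are Member records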
To print a complete list of families and their members using external storage, we could write:
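Continuing the external-storage sketch ("family_list" is assumed to be the head node; Python needs no explicit cast, but the extraction step discussed next is still visible as ".data"):

    fam_node = family_list                     # head ListNode of the list of families
    while fam_node is not None:
        family = fam_node.data                 # extract the record from the node
        print(family.last_name, family.address)
        mem_node = family.members
        while mem_node is not None:
            member = mem_node.data
            print("   ", member.first_name, member.age)
            mem_node = mem_node.next
        fam_node = fam_node.next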
Notice that when using external storage, an extra step is needed to extract the record from the node and cast it into the proper data type. This is because both the list of families and the list of members within the family are stored in two linked lists using the same data structure ("node"), and this language does not have parametric types.
As long as the number of families that a member can belong to is known at compile time, internal storage works fine. If, however, a member needed to be included in an arbitrary number of families, with the specific number known only at run time, external storage would be necessary.
Finding a specific element in a linked list, even if it is sorted, normally requires O("n") time (linear search). This is one of the primary disadvantages of linked lists over other data structures. In addition to the variants discussed above, below are two simple ways to improve search time.
In an unordered list, one simple heuristic for decreasing average search time is the "move-to-front heuristic", which simply moves an element to the beginning of the list once it is found. This scheme, handy for creating simple caches, ensures that the most recently used items are also the quickest to find again.
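A sketch of the heuristic on the singly linked list used earlier:

    def find_move_to_front(lst, x):
        prev, node = None, lst.firstNode
        while node is not None and node.data != x:
            prev, node = node, node.next
        if node is not None and prev is not None:
            prev.next = node.next              # unlink the found node ...
            node.next = lst.firstNode          # ... and reinsert it at the front
            lst.firstNode = node
        return node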
Another common approach is to "index" a linked list using a more efficient external data structure. For example, one can build a red-black tree or hash table whose elements are references to the linked list nodes. Multiple such indexes can be built on a single list. The disadvantage is that these indexes may need to be updated each time a node is added or removed (or at least, before that index is used again).
A random access list is a list with support for fast random access to read or modify any element in the list. One possible implementation is a skew binary random access list using the skew binary number system, which involves a list of trees with special properties; this allows worst-case constant time head/cons operations, and worst-case logarithmic time random access to an element by index. Random access lists can be implemented as persistent data structures.
Random access lists can be viewed as immutable linked lists in that they likewise support the same O(1) head and tail operations.
A simple extension to random access lists is the min-list, which provides an additional operation that yields the minimum element in the entire list in constant time (without mutation complexities).
Both stacks and queues are often implemented using linked lists, and simply restrict the type of operations which are supported.
The skip list is a linked list augmented with layers of pointers for quickly jumping over large numbers of elements, and then descending to the next layer. This process continues down to the bottom layer, which is the actual list.
A binary tree can be seen as a type of linked list where the elements are themselves linked lists of the same nature. The result is that each node may include a reference to the first node of one or two other linked lists, which, together with their contents, form the subtrees below that node.
An unrolled linked list is a linked list in which each node contains an array of data values. This leads to improved cache performance, since more list elements are contiguous in memory, and reduced memory overhead, because less metadata needs to be stored for each element of the list.
A hash table may use linked lists to store the chains of items that hash to the same position in the hash table.
A heap shares some of the ordering properties of a linked list, but is almost always implemented using an array. Instead of references from node to node, the next and previous data indexes are calculated using the current data's index.
A self-organizing list rearranges its nodes based on some heuristic which reduces search times for data retrieval by keeping commonly accessed nodes at the head of the list.
|
https://en.wikipedia.org/wiki?curid=18167
|
Logic gate
A logic gate is an idealized or physical electronic device implementing a Boolean function, a logical operation performed on one or more binary inputs that produces a single binary output. Depending on the context, the term may refer to an ideal logic gate, one that has for instance zero rise time and unlimited fan-out, or it may refer to a non-ideal physical device (see Ideal and real op-amps for comparison).
Logic gates are primarily implemented using diodes or transistors acting as electronic switches, but can also be constructed using vacuum tubes, electromagnetic relays (relay logic), fluidic logic, pneumatic logic, optics, molecules, or even mechanical elements. With amplification, logic gates can be cascaded in the same way that Boolean functions can be composed, allowing the construction of a physical model of all of Boolean logic, and therefore, all of the algorithms and mathematics that can be described with Boolean logic.
Logic circuits include such devices as multiplexers, registers, arithmetic logic units (ALUs), and computer memory, all the way up through complete microprocessors, which may contain more than 100 million gates. In modern practice, most gates are made from MOSFETs (metal–oxide–semiconductor field-effect transistors).
Compound logic gates AND-OR-Invert (AOI) and OR-AND-Invert (OAI) are often employed in circuit design because their construction using MOSFETs is simpler and more efficient than the sum of the individual gates.
In reversible logic, Toffoli gates are used.
A functionally complete logic system may be composed of relays, valves (vacuum tubes), or transistors. The simplest family of logic gates uses bipolar transistors, and is called resistor–transistor logic (RTL). Unlike simple diode logic gates (which do not have a gain element), RTL gates can be cascaded indefinitely to produce more complex logic functions. RTL gates were used in early integrated circuits. For higher speed and better density, the resistors used in RTL were replaced by diodes resulting in diode–transistor logic (DTL). Transistor–transistor logic (TTL) then supplanted DTL. As integrated circuits became more complex, bipolar transistors were replaced with smaller field-effect transistors (MOSFETs); see PMOS and NMOS. To reduce power consumption still further, most contemporary chip implementations of digital systems now use CMOS logic. CMOS uses complementary (both n-channel and p-channel) MOSFET devices to achieve a high speed with low power dissipation.
For small-scale logic, designers now use prefabricated logic gates from families of devices such as the TTL 7400 series by Texas Instruments, the CMOS 4000 series by RCA, and their more recent descendants. Increasingly, these fixed-function logic gates are being replaced by programmable logic devices, which allow designers to pack many mixed logic gates into a single integrated circuit. The field-programmable nature of programmable logic devices such as FPGAs has reduced the 'hard' property of hardware; it is now possible to change the logic design of a hardware system by reprogramming some of its components, thus allowing the features or function of a hardware implementation of a logic system to be changed. Other types of logic gates include, but are not limited to:
Electronic logic gates differ significantly from their relay-and-switch equivalents. They are much faster, consume much less power, and are much smaller (all by a factor of a million or more in most cases). Also, there is a fundamental structural difference. The switch circuit creates a continuous metallic path for current to flow (in either direction) between its input and its output. The semiconductor logic gate, on the other hand, acts as a high-gain voltage amplifier, which sinks a tiny current at its input and produces a low-impedance voltage at its output. It is not possible for current to flow between the output and the input of a semiconductor logic gate.
Another important advantage of standardized integrated circuit logic families, such as the 7400 and 4000 families, is that they can be cascaded. This means that the output of one gate can be wired to the inputs of one or several other gates, and so on. Systems with varying degrees of complexity can be built without great concern of the designer for the internal workings of the gates, provided the limitations of each integrated circuit are considered.
The output of one gate can only drive a finite number of inputs to other gates, a number called the 'fan-out limit'. Also, there is always a delay, called the 'propagation delay', from a change in input of a gate to the corresponding change in its output. When gates are cascaded, the total propagation delay is approximately the sum of the individual delays, an effect which can become a problem in high-speed circuits. Additional delay can be caused when many inputs are connected to an output, due to the distributed capacitance of all the inputs and wiring and the finite amount of current that each output can provide.
The binary number system was refined by Gottfried Wilhelm Leibniz (published in 1705), influenced by the binary system of the ancient "I Ching". Leibniz established that using the binary system combined the principles of arithmetic and logic.
In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits. Eventually, vacuum tubes replaced relays for logic operations. Lee De Forest's modification, in 1907, of the Fleming valve can be used as a logic gate. Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of "Tractatus Logico-Philosophicus" (1921). Walther Bothe, inventor of the coincidence circuit, got part of the 1954 Nobel Prize in physics, for the first modern electronic AND gate in 1924. Konrad Zuse designed and built electromechanical logic gates for his computer Z1 (from 1935–38).
From 1934 to 1936, NEC engineer Akira Nakashima introduced switching circuit theory in a series of papers showing that two-valued Boolean algebra, which he discovered independently, can describe the operation of switching circuits. His work was later cited by Claude E. Shannon, who elaborated on the use of Boolean algebra in the analysis and design of switching circuits in 1937. Using this property of electrical switches to implement logic is the fundamental concept that underlies all electronic digital computers. Switching circuit theory became the foundation of digital circuit design, as it became widely known in the electrical engineering community during and after World War II, with theoretical rigor superseding the "ad hoc" methods that had prevailed previously.
Metal-oxide-semiconductor (MOS) logic originates from the MOSFET (metal-oxide-semiconductor field-effect transistor), invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959. They first demonstrated both PMOS logic and NMOS logic in 1960. Both types were later combined and adapted into complementary MOS (CMOS) logic by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor in 1963.
Active research is taking place in molecular logic gates.
There are two sets of symbols for elementary logic gates in common use, both defined in ANSI/IEEE Std 91-1984 and its supplement ANSI/IEEE Std 91a-1991. The "distinctive shape" set, based on traditional schematics, is used for simple drawings and derives from United States Military Standard MIL-STD-806 of the 1950s and 1960s. It is sometimes unofficially described as "military", reflecting its origin. The "rectangular shape" set, based on ANSI Y32.14 and other early industry standards as later refined by IEEE and IEC, has rectangular outlines for all types of gate and allows representation of a much wider range of devices than is possible with the traditional symbols. The IEC standard, IEC 60617-12, has been adopted by other standards, such as EN 60617-12:1999 in Europe, BS EN 60617-12:1999 in the United Kingdom, and DIN EN 60617-12:1998 in Germany.
The mutual goal of IEEE Std 91-1984 and IEC 60617-12 was to provide a uniform method of describing the complex logic functions of digital circuits with schematic symbols. These functions were more complex than simple AND and OR gates. They ranged from medium-scale circuits, such as a 4-bit counter, to large-scale circuits, such as a microprocessor.
IEC 617-12 and its successor IEC 60617-12 do not explicitly show the "distinctive shape" symbols, but do not prohibit them. These are, however, shown in ANSI/IEEE 91 (and 91a) with this note: "The distinctive-shape symbol is, according to IEC Publication 617, Part 12, not preferred, but is not considered to be in contradiction to that standard." IEC 60617-12 correspondingly contains the note (Section 2.1) "Although non-preferred, the use of other symbols recognized by official national standards, that is distinctive shapes in place of symbols [list of basic gates], shall not be considered to be in contradiction with this standard. Usage of these other symbols in combination to form complex symbols (for example, use as embedded symbols) is discouraged." This compromise was reached between the respective IEEE and IEC working groups to permit the IEEE and IEC standards to be in mutual compliance with one another.
A third style of symbols, DIN 40700 (1976), was in use in Europe and is still widely used in European academia.
In the 1980s, schematics were the predominant method to design both circuit boards and custom ICs known as gate arrays. Today custom ICs and the field-programmable gate array are typically designed with Hardware Description Languages (HDL) such as Verilog or VHDL.
Output comparison of 1-input logic gates.
Output comparison of 2-input logic gates.
Charles Sanders Peirce (during 1880–81) showed that NOR gates alone (or alternatively NAND gates alone) can be used to reproduce the functions of all the other logic gates, but his work on it was unpublished until 1933. The first published proof was by Henry M. Sheffer in 1913, so the NAND logical operation is sometimes called Sheffer stroke; the logical NOR is sometimes called "Peirce's arrow". Consequently, these gates are sometimes called "universal logic gates".
By use of De Morgan's laws, an "AND" function is identical to an "OR" function with negated inputs and outputs. Likewise, an "OR" function is identical to an "AND" function with negated inputs and outputs. A NAND gate is equivalent to an OR gate with negated inputs, and a NOR gate is equivalent to an AND gate with negated inputs.
This leads to an alternative set of symbols for basic gates that use the opposite core symbol ("AND" or "OR") but with the inputs and outputs negated. Use of these alternative symbols can make logic circuit diagrams much clearer and help to show accidental connection of an active high output to an active low input or vice versa. Any connection that has logic negations at both ends can be replaced by a negationless connection and a suitable change of gate or vice versa. Any connection that has a negation at one end and no negation at the other can be made easier to interpret by instead using the De Morgan equivalent symbol at either of the two ends. When negation or polarity indicators on both ends of a connection match, there is no logic negation in that path (effectively, bubbles "cancel"), making it easier to follow logic states from one symbol to the next. This is commonly seen in real logic diagrams – thus the reader must not get into the habit of associating the shapes exclusively as OR or AND shapes, but also take into account the bubbles at both inputs and outputs in order to determine the "true" logic function indicated.
A De Morgan symbol can show more clearly a gate's primary logical purpose and the polarity of its nodes that are considered in the "signaled" (active, on) state. Consider the simplified case where a two-input NAND gate is used to drive a motor when either of its inputs are brought low by a switch. The "signaled" state (motor on) occurs when either one OR the other switch is on. Unlike a regular NAND symbol, which suggests AND logic, the De Morgan version, a two negative-input OR gate, correctly shows that OR is of interest. The regular NAND symbol has a bubble at the output and none at the inputs (the opposite of the states that will turn the motor on), but the De Morgan symbol shows both inputs and output in the polarity that will drive the motor.
De Morgan's theorem is most commonly used to implement logic gates as combinations of only NAND gates, or as combinations of only NOR gates, for economic reasons.
Logic gates can also be used to store data. A storage element can be constructed by connecting several gates in a "latch" circuit. More complicated designs that use clock signals and that change only on a rising or falling edge of the clock are called edge-triggered "flip-flops". Formally, a flip-flop is called a bistable circuit, because it has two stable states which it can maintain indefinitely. The combination of multiple flip-flops in parallel, to store a multiple-bit value, is known as a register. When using any of these gate setups the overall system has memory; it is then called a sequential logic system since its output can be influenced by its previous state(s), i.e. by the "sequence" of input states. In contrast, the output from combinational logic is purely a combination of its present inputs, unaffected by the previous input and output states.
These logic circuits are known as computer memory. They vary in performance, based on factors of speed, complexity, and reliability of storage, and many different types of designs are used based on the application.
A three-state logic gate is a type of logic gate that can have three different outputs: high (H), low (L) and high-impedance (Z). The high-impedance state plays no role in the logic, which is strictly binary. These devices are used on buses of the CPU to allow multiple chips to send data. A group of three-state gates driving a line with a suitable control circuit is basically equivalent to a multiplexer, which may be physically distributed over separate devices or plug-in cards.
In electronics, a high output would mean the output is sourcing current from the positive power terminal (positive voltage). A low output would mean the output is sinking current to the negative power terminal (zero voltage). High impedance would mean that the output is effectively disconnected from the circuit.
Since the 1990s, most logic gates are made in CMOS (complementary metal oxide semiconductor) technology that uses both NMOS and PMOS transistors. Often millions of logic gates are packaged in a single integrated circuit.
There are several logic families with different characteristics (power consumption, speed, cost, size) such as: RDL (resistor–diode logic), RTL (resistor-transistor logic), DTL (diode–transistor logic), TTL (transistor–transistor logic) and CMOS. There are also sub-variants, e.g. standard CMOS logic vs. advanced types using still CMOS technology, but with some optimizations for avoiding loss of speed due to slower PMOS transistors.
Non-electronic implementations are varied, though few of them are used in practical applications. Many early electromechanical digital computers, such as the Harvard Mark I, were built from relay logic gates, using electro-mechanical relays. Logic gates can be made using pneumatic devices, such as the Sorteberg relay or mechanical logic gates, including on a molecular scale. Logic gates have been made out of DNA (see DNA nanotechnology) and used to create a computer called MAYA (see MAYA-II). Logic gates can be made from quantum mechanical effects (though quantum computing usually diverges from boolean design; see quantum logic gate). Photonic logic gates use nonlinear optical effects.
In principle any method that leads to a gate that is functionally complete (for example, either a NOR or a NAND gate) can be used to make any kind of digital logic circuit. Note that the use of 3-state logic for bus systems is not needed, and can be replaced by digital multiplexers, which can be built using only simple logic gates (such as NAND gates, NOR gates, or AND and OR gates).
|
https://en.wikipedia.org/wiki?curid=18168
|
Linear search
In computer science, a linear search or sequential search is a method for finding an element within a list. It sequentially checks each element of the list until a match is found or the whole list has been searched.
A linear search runs in at worst linear time and makes at most "n" comparisons, where "n" is the length of the list. If each element is equally likely to be searched, then linear search has an average case of ("n" + 1)/2 comparisons, but the average case can be affected if the search probabilities for each element vary. Linear search is rarely practical because other search algorithms and schemes, such as the binary search algorithm and hash tables, allow significantly faster searching for all but short lists.
A linear search sequentially checks each element of the list until it finds an element that matches the target value. If the algorithm reaches the end of the list, the search terminates unsuccessfully.
Given a list "L" of "n" elements with values or records "L[0]", ..., "L[n−1]", and target value "T", the following subroutine uses linear search to find the index of the target "T" in "L".
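A direct Python rendering of this subroutine (returning −1 on failure is one common convention):

    def linear_search(L, T):
        i = 0
        while i < len(L):                      # first comparison: is i still a valid index?
            if L[i] == T:                      # second comparison: does L[i] equal T?
                return i                       # successful search
            i += 1
        return -1                              # the target is not in the list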
The basic algorithm above makes two comparisons per iteration: one to check if "L[i]" equals "T", and the other to check if "i" still points to a valid index of the list. By adding an extra record to the list (a sentinel value) that equals the target, the second comparison can be eliminated until the end of the search, making the algorithm faster. The search will reach the sentinel if the target is not contained within the list.
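A Python sketch of the sentinel variant; the extra record is appended and then removed again:

    def linear_search_sentinel(L, T):
        n = len(L)
        L.append(T)                            # the sentinel guarantees the loop terminates
        i = 0
        while L[i] != T:                       # only one comparison per iteration
            i += 1
        L.pop()                                # remove the sentinel again
        return i if i < n else -1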
If the list is ordered such that "L[0]" ≤ "L[1]" ≤ ... ≤ "L[n−1]", the search can establish the absence of the target more quickly by concluding the search once "L[i]" exceeds the target. This variation requires a sentinel that is greater than the target.
For a list with "n" items, the best case is when the value is equal to the first element of the list, in which case only one comparison is needed. The worst case is when the value is not in the list (or occurs only once at the end of the list), in which case "n" comparisons are needed.
If the value being sought occurs "k" times in the list, and all orderings of the list are equally likely, the expected number of comparisons is "n" if "k" = 0, and ("n" + 1)/("k" + 1) if 1 ≤ "k" ≤ "n".
For example, if the value being sought occurs once in the list, and all orderings of the list are equally likely, the expected number of comparisons is ("n" + 1)/2. However, if it is "known" that it occurs once, then at most "n" − 1 comparisons are needed, and the expected number of comparisons is ("n" + 2)("n" − 1)/(2"n")
(for example, for "n" = 2 this is 1, corresponding to a single if-then-else construct).
Either way, asymptotically the worst-case cost and the expected cost of linear search are both O("n").
The performance of linear search improves if the desired value is more likely to be near the beginning of the list than to its end. Therefore, if some values are much more likely to be searched than others, it is desirable to place them at the beginning of the list.
In particular, when the list items are arranged in order of decreasing probability, and these probabilities are geometrically distributed, the cost of linear search is only O(1).
Linear search is usually very simple to implement, and is practical when the list has only a few elements, or when performing a single search in an un-ordered list.
When many values have to be searched in the same list, it often pays to pre-process the list in order to use a faster method. For example, one may sort the list and use binary search, or build an efficient search data structure from it. Should the content of the list change frequently, repeated re-organization may be more trouble than it is worth.
As a result, even though in theory other search algorithms may be faster than linear search (for instance binary search), in practice even on medium-sized arrays (around 100 items or less) it might be infeasible to use anything else. On larger arrays, it only makes sense to use other, faster search methods if the data is large enough, because the initial time to prepare (sort) the data is comparable to many linear searches.
|
https://en.wikipedia.org/wiki?curid=18171
|
Land mine
A land mine is an explosive device concealed under or on the ground and designed to destroy or disable enemy targets, ranging from combatants to vehicles and tanks, as they pass over or near it. Such a device is typically detonated automatically by way of pressure when a target steps on it or drives over it, although other detonation mechanisms are also sometimes used. A land mine may cause damage by direct blast effect, by fragments that are thrown by the blast, or by both.
The use of land mines is controversial because of their potential as indiscriminate weapons. They can remain dangerous many years after a conflict has ended, harming civilians and the economy. 78 countries are contaminated with land mines and 15,000–20,000 people are killed every year while countless more are maimed. Approximately 80% of land mine casualties are civilian, with children as the most affected age group. Most killings occur in times of peace. With pressure from a number of campaign groups organised through the International Campaign to Ban Landmines, a global movement to prohibit their use led to the 1997 Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on their Destruction, also known as the "Ottawa Treaty". To date, 164 nations have signed the treaty, but these do not include China, the Russian Federation, and the United States.
In the Anti-Personnel Mine Ban Convention (also known as the Ottawa Treaty) and the Protocol on Mines, Booby-Traps and Other Devices, a "mine" is defined as a "munition designed to be placed under, on or near the ground or other surface area and to be exploded by the presence, proximity or contact of a person or vehicle." Similar in function is the "booby-trap", which the Protocol defines as "any device or material which is designed, constructed or adapted to kill or injure and which functions unexpectedly when a person disturbs or approaches an apparently harmless object or performs an apparently safe act." Such actions might include opening a door or picking up an object. Normally, mines are mass-produced and placed in groups, while booby traps are improvised and deployed one at a time. Also, booby traps can be non-explosive devices such as the punji stick. Overlapping both categories is the "improvised explosive device" (IED), which is "a device placed or fabricated in an improvised manner incorporating explosive material, destructive, lethal, noxious, incendiary, pyrotechnic materials or chemicals designed to destroy, disfigure, distract or harass. They may incorporate military stores, but are normally devised from non-military components." Some meet the definition of mines or booby traps and are also referred to as improvised, artisanal or locally manufactured mines. Other types of IED are remotely activated, so are not considered mines.
"Remotely delivered mines" are dropped from an aircraft or carried by devices such as artillery shells or rockets. Another type of remotely delivered explosive is the "cluster munition", a device that releases several submunitions ("bomblets") over a large area. If they do not explode, they are referred to as "unexploded ordnance" (UXO), along with unexploded artillery shells and other explosive devices that were not manually placed (that is, mines and booby traps are not UXOs). "Explosive remnants of war" (ERW) include UXO and "abandoned explosive ordnance" (AXO), devices that were never used and were left behind after a conflict.
Land mines are divided into two types: anti-tank mines, which are designed to disable tanks or other vehicles; and anti-personnel mines, which are designed to injure or kill people.
The history of land mines can be divided up into three main phases: In the ancient world, buried spikes provided many of the same functions as modern mines. Mines using gunpowder as the explosive were used from the Ming Dynasty to the American Civil War. Subsequently, high explosives were developed and used in land mines.
Some fortifications in the Roman Empire were surrounded by a series of hazards buried in the ground. These included "goads", foot-long pieces of wood with iron hooks on their ends; "lilia" (lilies, so named after their appearance), which were pits in which sharpened logs were arranged in a five-point pattern; and "abatis", fallen trees with sharpened branches facing outwards. As with modern land mines, they were "victim-operated", often concealed, and formed zones that were wide enough so that the enemy could not do much harm from outside, but were under fire (from spear throws, in this case) if they attempted to remove the obstacles. A notable use of these defenses was by Julius Caesar in the Battle of Alesia. His forces were besieging Vercingetorix, the leader of the Gauls, but Vercingetorix managed to send for reinforcements. To maintain the siege and defend against the reinforcements, Caesar formed a line of fortifications on both sides, and they played an important role in his victory. Lilies were also used by Scots against the English at the Battle of Bannockburn in 1314, and by Germans at the Battle of Passchendaele in the First World War.
A more easily deployed defense used by the Romans was the caltrop, a weapon about 12–15 cm across with four sharp spikes that are oriented so that when it is thrown on the ground, one spike always points up. As with modern antipersonnel mines, caltrops are designed to disable soldiers rather than kill them; they are also more effective in stopping mounted forces, who lack the advantage of being able to carefully scrutinize each step they take (though forcing foot-mounted forces to take the time to do so has benefits in and of itself). They were used by the Jin Dynasty in China at the Battle of Zhongdu to slow down the advance of Genghis Khan's army; Joan of Arc was wounded by one in the Siege of Orléans; in Japan they are known as tetsu-bishi and were used by ninjas from the fourteenth century onwards. Caltrops are still strung together and used as roadblocks in some modern conflicts.
Starting in the ninth century, the Chinese began centuries of experiments that resulted in gunpowder, an explosive mixture of sulfur, charcoal and potassium nitrate. Gunpowder was first used in battle in the thirteenth century. An "enormous bomb", credited to Lou Qianxia, was used in 1277 by the Chinese at the Battle of Zhongdu, although it probably had little effect. Gunpowder was difficult to use in mines because it is hygroscopic, easily absorbing water from the atmosphere, and when wet is no longer explosive.
A 14th-century military treatise, the "Huolongjing" (Fire Dragon Manual), describes hollow cast iron cannonball shells filled with gunpowder. The wad of the mine was made of hard wood, carrying three different fuses in case of defective connection to the touch hole. These fuses were long and lit by hand, so they required carefully timed calculation of enemy movements.
The "Huolongjing" also describes land mines that were set off by enemy movement. A nine-foot length of bamboo was waterproofed by wrapping it in cowhide and covering it with oil. It was filled with compressed gunpowder and lead or iron pellets, sealed with wax and concealed in a trench. The triggering mechanism was not fully described until the early 17th century. When the enemy stepped onto hidden boards, they dislodge a pin, causing a weight to fall. A cord attached to the weight was wrapped around a drum attached to two steel wheels; when the weight fell, the wheels struck sparks against flint, igniting a set of fuses to multiple mines. A similar mechanism was used in the first wheellock musket in Europe as sketched by Leonardo da Vinci around 1500 AD.
Another victim-operated device was the "underground sky-soaring thunder", which lured bounty hunters with halberds, pikes, and lances planted in the ground. If they pulled on one of these weapons, the butt end disturbed a bowl underneath and a slow-burning incandescent material in the bowl ignited the fuses.
The fuse mechanisms for the above devices were cumbersome and unreliable. By the time Europeans arrived in China, landmines were largely forgotten.
At Augsburg in 1573, three centuries after the Chinese invented the first pressure-operated mine, a German military engineer by the name of Samuel Zimmermann invented the "Fladdermine" (flying mine). It consisted of a few pounds of black powder buried near the surface and was activated by stepping on it or tripping a wire that made a flintlock fire. Such mines were deployed on the slope in front of a fort. They were used during the Franco-Prussian War but were probably not very effective because a flintlock does not work for long when left untended.
Another device, the fougasse, was not victim-operated or mass-produced, but it was a precursor of modern fragmentation mines and the claymore mine. It consisted of a cone-shaped hole with gunpowder at the bottom, covered either by rocks and scrap iron ("stone fougasse") or mortar shells, similar to large black powder hand grenades ("shell fougasse"). It was triggered by a flintlock connected to a tripwire on the surface. It could sometimes cause heavy casualties, but required high maintenance due to the susceptibility of black powder to dampness. Consequently, it was mainly employed in the defenses of major fortifications, in which role it was used in several European wars of the eighteenth century and the American Revolution.
One of the greatest limitations of early land mines was the unreliable fuses and their susceptibility to dampness. This changed with the invention of the safety fuse. Later, "Command initiation", the ability to detonate a charge immediately instead of waiting several minutes for a fuse to burn, became possible after electricity was developed. An electrical current sent down a wire could ignite the charge with a spark. The Russians claim first use of this technology in the Russo-Turkish War of 1828–1829, and with it the fougasse remained useful until it was superseded by the claymore in the 1960s.
Victim-activated mines were also unreliable because they relied on a flintlock to ignite the explosive. The percussion cap, developed in the early 19th century, made them much more reliable, and pressure-operated mines were deployed on land and sea in the Crimean War (1853–1856).
During the American Civil War, the Confederate brigadier general Gabriel J. Rains deployed thousands of "torpedoes" consisting of artillery shells with pressure caps, beginning with the Battle of Yorktown in 1862. As a Captain, Rains had earlier employed explosive booby traps during the Seminole Wars in Florida in 1840. Over the course of the war, mines only caused a few hundred casualties, but they had a large effect on morale and slowed down the advance of Union troops. Many on both sides considered the use of mines barbaric, and in response, generals in the Union Army forced Confederate prisoners to remove the mines.
Starting in the 19th century, more powerful explosives than gunpowder were developed, often for non-military reasons such as blasting train tunnels in the Alps and Rockies. Guncotton, up to four times more powerful than gunpowder, was invented by Christian Schönbein in 1846. It was dangerous to make until Frederick Augustus Abel developed a safe method in 1865. From the 1870s to the First World War, it was the standard explosive used by the British military.
In 1847, Ascanio Sobrero invented nitroglycerine to treat angina pectoris and it turned out to be a much more powerful explosive than guncotton. It was very dangerous to use until Alfred Nobel found a way to incorporate it in a solid mixture called dynamite and developed a safe detonator. Even then, dynamite needed to be stored carefully or it could form crystals that detonated easily. Thus, the military still preferred guncotton.
In 1863, the German chemical industry developed trinitrotoluene (TNT). This had the advantage that it was difficult to detonate, so it could withstand the shock of firing by artillery pieces. It was also advantageous for land mines for several reasons: it was not detonated by the shock of shells landing nearby; it was lightweight, unaffected by damp, and stable under a wide range of conditions; it could be melted to fill a container of any shape, and it was cheap to make. Thus, it became the standard explosive in mines after the First World War.
In their colonial conflicts, the British had fewer scruples about using mines than the Americans had in the Civil War. The British used mines in the Siege of Khartoum to hold off a much larger Sudanese Mahdist force for ten months; in the end, however, the town was taken and its defenders massacred. In the Second Boer War (1899–1902), they succeeded in holding Mafeking against Boer forces with the help of a mixture of real and fake minefields, and they laid mines alongside railroad tracks to discourage sabotage.
In the Russo-Japanese War of 1904–1905, both sides used land and sea mines, although the effect on land mainly affected morale. The naval mines were far more effective, destroying several battleships.
One sign of the increasing power of explosives used in land mines was that, by the First World War, a mine burst into about 1,000 high-velocity fragments; in the Franco-Prussian War (1870–1871), it had been only 20 to 30 fragments. Nevertheless, antipersonnel mines were not a big factor in the war because machine guns, barbed wire and rapid-fire artillery were far more effective defenses. An exception was in Africa (now Tanzania and Namibia), where the warfare was much more mobile.
Towards the end of the war, the British started to use tanks to break through trench defenses. The Germans responded with anti-tank guns and mines. Improvised mines gave way to mass-produced mines consisting of wooden boxes filled with guncotton, and minefields were standardized to stop masses of tanks from advancing.
Between the world wars, the future Allies did little work on land mines, but the Germans developed a series of anti-tank mines, the "Tellermines" (plate mines). They also developed the "Schrapnellmine" (also known as the S-mine), the first bouncing mine. When triggered, this jumped up to about waist height and exploded, sending thousands of steel balls in all directions. Triggered by pressure, trip wires or electronics, it could harm soldiers within an area of about 2,800 square feet.
Tens of millions of mines were laid in the Second World War, particularly in the deserts of North Africa and the steppes of Eastern Europe, where the open ground favored tanks. However, the first country to use them was Finland. They were defending against a much larger Soviet force with over 6,000 tanks, twenty times the number the Finns had; but they had terrain that was broken up by lakes and forests, so tank movement was restricted to roads and tracks. Their defensive line, the Mannerheim Line, integrated these natural defenses with mines, including simple fragmentation mines mounted on stakes.
While the Germans were advancing rapidly using "blitzkrieg" tactics, they did not make much use of mines. After 1942, however, they were on the defensive and became the most inventive and systematic users of mines. Their production shot up and they began inventing new types of mines as the Allies found ways to counter the existing ones. To make it more difficult to remove antitank mines, they surrounded them with S-mines and added anti-handling devices that would explode when soldiers tried to lift them. They also took a formal approach to laying mines and they kept detailed records of the locations of mines.
In the Second Battle of El Alamein in 1942, the Germans prepared for an Allied attack by laying about half a million mines in two fields running across the entire battlefield and five miles deep. Nicknamed the Devil's gardens, they were covered by 88 mm anti-tank guns and small-arms fire. The Allies prevailed, but at the cost of over half their tanks; 20 percent of the losses were caused by mines.
The Soviets learned the value of mines from their war with Finland, and when Germany invaded, they made heavy use of them, manufacturing over 67 million. At the Battle of Kursk, which put an end to the German advance, they laid over a million mines in eight belts with an overall depth of 35 kilometres.
Mines forced tanks to slow down and wait for soldiers to go ahead and remove the mines. The main method of breaching minefields involved prodding the dirt with a bayonet or stick at an angle of 30 degrees (to avoid putting pressure on the top of the mine and detonating it). Since all mines at the beginning of the war had metal casings, metal detectors could be used to speed up the locating of mines. A Polish officer, Józef Kosacki, developed a portable mine detector known as the Polish mine detector. To counter the detector, Germans developed mines with wooden casings, the Schu-mine 42 (antipersonnel) and Holzmine 42 (anti-tank). Effective, cheap and easy to make, the "schu" mine became the most common mine in the war. Mine casings were also made of glass, concrete and clay. The Russians developed a mine with a pressed-cardboard casing, the PMK40, and the Italians made an anti-tank mine out of bakelite. In 1944, the Germans created the Topfmine, an entirely non-metallic mine. They ensured that they could detect their own mines by covering them with radioactive sand, but the Allies did not find this out until after the war.
Several mechanical methods for clearing mines were tried. Heavy rollers were attached to tanks or cargo trucks, but they did not last long and their weight made the tanks considerably slower. Tanks and bulldozers pushed ploughs that pushed aside any mines to a depth of 30 cm. The Bangalore torpedo, a long thin tube filled with explosives, had been invented in 1912 and was used to clear barbed wire; larger versions such as the Snake and the Conger were developed but were not very effective. One of the best options was the flail: chains with weights on their ends, attached to rotating drums. The first version, the Scorpion, was attached to the Matilda tank and used in the Second Battle of El Alamein. The Crab, attached to the Sherman tank, was faster (2 kilometers per hour); it was used on D-Day and in its aftermath.
During the Cold War, the members of NATO were concerned about massive armored attacks by the Soviet Union. They planned for a minefield stretching across the entire West German border, and developed new types of mine. The British designed an anti-tank mine, the Mark 7, to defeat rollers by detonating the second time it was pressed. It also had a 0.7-second delay so the tank would be directly over the mine. They also developed the first scatterable mine, the No. 7 ("Dingbat"). The Americans used the M6 antitank mine and tripwire-operated bouncing antipersonnel mines such as the M2 and M16.
In the Korean War, land mine use was dictated by the steep terrain, narrow valleys, forest cover and lack of developed roads. This made tanks less effective and more easily stopped by mines. However, mines laid near roads were often easy to spot. In response to this problem, the US developed the M24, a mine that was placed off to the side of the road. When triggered by a tripwire, it fired a rocket. However, the mine was not available until after the war.
The Chinese had a lot of success with massed infantry attacks. The extensive forest cover limited the range of machine guns, but anti-personnel mines were effective. However, mines were poorly recorded and marked, often becoming as much a hazard to allies as enemies. Tripwire-operated mines were not defended by pressure mines; the Chinese were often able to disable them and reuse them against UN forces.
Looking for more destructive mines, the Americans developed the Claymore, a directional fragmentation mine that hurls steel balls in a 60-degree arc at a lethal speed of 1,200 metres per second. They also developed a pressure-operated mine, the M14 ("toe-popper"). These, too, were ready too late for the Korean War.
In 1948, the British developed the No. 6 antipersonnel mine, a minimum-metal mine with a narrow diameter, making it difficult to detect with metal detectors or prodding. Its three-pronged pressure piece inspired the nickname "Carrot Mine". However, it was unreliable in wet conditions. In the 1960s, the Canadians developed a similar but more reliable mine, the C3A1 ("Elsie"), which the British army adopted. The British also developed the L9 Bar Mine, a wide anti-tank mine with a rectangular shape, which covered more area, allowing a minefield to be laid four times as fast as with previous mines. They also upgraded the Dingbat to the Ranger, a plastic mine fired from a truck-mounted discharger that could launch 72 mines at a time.
In the 1950s, the US Operation Doan Brook studied the feasibility of delivering mines by air. This led to three types of air-delivered mine. Wide Area Anti-Personnel Mines (WAAPMs) were small steel spheres that discharged tripwires when they hit the ground; each dispenser held 540 mines. The BLU-43 Dragontooth was small and had a flattened W shape to slow its descent, while the Gravel mine was larger. Both were packed by the thousand into bombs. All three were designed to inactivate after a period of time, but any that failed to do so presented a safety hazard. Over 37 million Gravel mines were produced between 1967 and 1968, and when they were dropped in places like Vietnam, their locations were unmarked and unrecorded. A similar problem was presented by unexploded cluster munitions.
The next generation of scatterable mines arose in response to the increasing mobility of war. The Germans developed the Skorpion system, which scattered AT2 mines from a tracked vehicle. The Italians developed a helicopter delivery system that could rapidly switch between SB-33 anti-personnel mines and SB-81 anti-tank mines. The US developed a range of systems called the Family of Scatterable Mines (FASCAM) that could deliver mines by fast jet, artillery, helicopter and ground launcher.
In the First World War, the Germans developed a device, nicknamed the "Yperite Mine" by the British, that they left behind in abandoned trenches and bunkers. It was detonated by a delayed charge, spreading mustard gas ("Yperite"). In the Second World War, they developed a modern chemical mine, the Sprüh-Büchse 37 (Bounding Gas Mine 37), but never used it. The United States developed the M1 chemical mine, which used mustard gas, in 1939, and the M23 chemical mine, which used the VX nerve agent, in 1960. The Soviets developed the KhF, a "bounding chemical mine". The French had chemical mines, and the Iraqis were believed to have them before the invasion of Kuwait. In 1997, the Chemical Weapons Convention came into force, prohibiting the use of chemical weapons and mandating their destruction. As of 30 April 2019, 97% of the declared stockpiles of chemical weapons had been destroyed.
For a few decades during the Cold War, the U.S. developed atomic demolition munitions, often referred to as nuclear land mines. These were portable nuclear bombs that could be placed by hand, and could be detonated remotely or with a timer. Some of these were deployed in Europe. Governments in West Germany, Turkey and Greece wanted to have nuclear minefields as a defense against attack from the Warsaw Pact. However, such weapons were politically and tactically infeasible, and by 1989 the last of these munitions was retired. The British also had a project, codenamed Blue Peacock, to develop nuclear mines to be buried in Germany; the project was cancelled in 1958.
A conventional land mine consists of a casing that is mostly filled with the main charge. It has a firing mechanism such as a pressure plate; this triggers a detonator or igniter, which in turn sets off a booster charge. There may be additional firing mechanisms in anti-handling devices.
A land mine can be triggered by a number of things, including pressure, movement, sound, magnetism and vibration. Anti-personnel mines commonly use the pressure of a person's foot as a trigger, but tripwires are also frequently employed. Most modern anti-vehicle mines use a magnetic trigger so that they can detonate even if the tires or tracks do not touch them. Advanced mines are able to sense the difference between friendly and enemy types of vehicles by way of a built-in signature catalog. This will theoretically enable friendly forces to use the mined area while denying the enemy access.
Many mines combine the main trigger with a touch or tilt trigger to prevent enemy engineers from defusing it. Land mine designs tend to use as little metal as possible to make searching with a metal detector more difficult; land mines made mostly of plastic have the added advantage of being very inexpensive.
Some types of modern mines are designed to self-destruct, or chemically render themselves inert after a period of weeks or months to reduce the likelihood of civilian casualties at the conflict's end. These self-destruct mechanisms are not absolutely reliable, and most land mines laid historically are not equipped in this manner.
There is a common misperception that a landmine is armed by stepping on it and triggered only by stepping off, a device that provides dramatic tension in films. In fact, the initial pressure will detonate the mine: mines are designed to kill or maim, not to make someone stand very still until the mine can be disarmed.
Anti-handling devices detonate the mine if someone attempts to lift, shift or disarm it. The intention is to hinder deminers by discouraging any attempts to clear minefields. There is a degree of overlap between the function of a boobytrap and an anti-handling device insofar as some mines have optional fuze pockets into which standard pull or pressure-release boobytrap firing devices can be screwed. Alternatively, some mines may mimic a standard design, but actually be specifically intended to kill deminers, such as the MC-3 and PMN-3 variants of the PMN mine. Anti-handling devices can be found on both anti-personnel mines and anti-tank mines, either as an integral part of their design or as improvised add-ons. For this reason, the standard render safe procedure for mines is often to destroy them on site without attempting to lift them.
Anti-tank mines were created not long after the invention of the tank in the First World War. At first improvised, they soon gave way to purpose-built designs. Set off when a tank passes, they attack the tank at one of its weaker areas, the tracks. They are designed to immobilize or destroy vehicles and their occupants. In U.S. military terminology, destroying the vehicle is referred to as a catastrophic kill, while merely disabling its movement is referred to as a mobility kill.
Anti-tank mines are typically larger than anti-personnel mines and require more pressure to detonate. The high trigger pressure prevents them from being set off by infantry or smaller vehicles of lesser importance. More modern anti-tank mines use shaped charges to focus and increase the armor penetration of the explosives.
Anti-personnel mines are designed primarily to kill or injure people, as opposed to vehicles. They are often designed to injure rather than kill in order to increase the logistical support (evacuation, medical) burden on the opposing force. Some types of anti-personnel mines can also damage the tracks or wheels of armored vehicles.
In the asymmetric warfare conflicts and civil wars of the 21st century, improvised explosive devices (IEDs) have partially supplanted conventional landmines as the source of injury to dismounted (pedestrian) soldiers and civilians. IEDs are used mainly by insurgents and terrorists against regular armed forces and civilians. Injuries from anti-personnel IEDs were recently reported in BMJ Open to be far worse than those from landmines, resulting in multiple limb amputations and lower body mutilation.
Land mines were designed for two main uses: to create defensive tactical barriers that slow an attacking force and channel it into prepared kill zones, and to deny the enemy use of valuable terrain, resources or facilities.
Land mines are currently used in large quantities mostly for this first purpose, thus their widespread use in the demilitarized zones (DMZs) of likely flashpoints such as Cyprus, Afghanistan and Korea. As of 2013, the only governments that still laid land mines were Myanmar in its internal conflict, and Syria in its civil war.
In military science, minefields are considered a defensive or harassing weapon, used to slow the enemy down, to help deny certain terrain to the enemy, to focus enemy movement into kill zones, or to reduce morale by randomly attacking material and personnel. In some engagements during World War II, anti-tank mines accounted for half of all vehicles disabled.
Since combat engineers with mine-clearing equipment can clear a path through a minefield relatively quickly, mines are usually considered effective only if covered by fire.
The extents of minefields are often marked with warning signs and cloth tape to prevent friendly troops and non-combatants from entering them; conversely, terrain can sometimes be denied using dummy minefields. Most forces carefully record the location and disposition of their own minefields, because warning signs can be destroyed or removed, and minefields should eventually be cleared. Minefields may also have marked or unmarked safe routes to allow friendly movement through them.
Placing minefields without marking and recording them for later removal is considered a war crime under Protocol II of the Convention on Certain Conventional Weapons, which is itself an annex to the Geneva Conventions.
Artillery and aircraft scatterable mines allow minefields to be placed in front of moving formations of enemy units, including the reinforcement of minefields or other obstacles that have been breached by enemy engineers. They can also be used to cover the retreat of forces disengaging from the enemy, or for interdiction of supporting units to isolate front line units from resupply. In most cases these minefields consist of a combination of anti-tank and anti-personnel mines, with the anti-personnel mines making removal of the anti-tank mines more difficult. Mines of this type used by the United States are designed to self-destruct after a preset period of time, reducing the requirement for mine clearing to only those mines whose self-destruct system did not function. Some designs of these scatterable mines require an electrical charge (capacitor or battery) to detonate. After a certain period of time, either the charge dissipates, leaving them effectively inert, or the circuitry is designed so that, upon reaching a low charge level, the device is triggered, destroying the mine.
None of the conventional tactics and norms of mine warfare applies when mines are employed in a guerrilla role.
Land mines were commonly deployed by insurgents during the South African Border War, leading directly to the development of the first dedicated mine-protected armoured vehicles in South Africa. Namibian insurgents used anti-tank mines to throw South African military convoys into disarray before attacking them. To discourage detection and removal efforts, they also laid anti-personnel mines directly parallel to the anti-tank mines. This initially resulted in heavy South African military and police casualties, as the vast extent of road network vulnerable to insurgent sappers made comprehensive daily detection and clearance efforts impractical. The only other viable option was the adoption of mine-protected vehicles, which could remain mobile on the roads with little risk to their passengers even if a mine detonated. South Africa is widely credited with inventing the v-hull, a vee-shaped hull for armoured vehicles that deflects mine blasts away from the passenger compartment.
During the ongoing Syrian Civil War, Iraqi Civil War (2014–2017) and Yemeni Civil War (2015–present) landmines have been used for both defensive and guerrilla purposes.
Minefields may be laid by several means. The preferred, but most labour-intensive, way is to have engineers bury the mines, since this will make the mines practically invisible and reduce the number of mines needed to deny the enemy an area. Mines can be laid by specialized mine-laying vehicles. Mine-scattering shells may be fired by artillery from a distance of several tens of kilometers.
Mines may be dropped from helicopters or airplanes, or ejected from cluster bombs or cruise missiles.
Anti-tank minefields can be scattered with anti-personnel mines to make clearing them manually more time-consuming; and anti-personnel minefields are scattered with anti-tank mines to prevent the use of armored vehicles to clear them quickly. Some anti-tank mine types are also able to be triggered by infantry, giving them a dual purpose even though their main and official intention is to work as anti-tank weapons.
Some minefields are specifically booby-trapped to make clearing them more dangerous. Mixed anti-personnel and anti-tank minefields, anti-personnel mines "under" anti-tank mines, and fuses separated from mines have all been used for this purpose. Often, single mines are backed by a secondary device, designed to kill or maim personnel tasked with clearing the mine.
Multiple anti-tank mines have been buried in stacks of two or three with the bottom mine fuzed, in order to multiply the penetrating power. Since the mines are buried, the ground directs the energy of the blast in a single direction—through the bottom of the target vehicle or on the track.
Another specific use is to mine an aircraft runway immediately after it has been bombed in order to delay or discourage repair. Some cluster bombs combine these functions. One example was the British JP233 cluster bomb which includes munitions to damage (crater) the runway as well as anti-personnel mines in the same cluster bomb. As a result of the anti-personnel mine ban it was withdrawn from British Royal Air Force service, and the last stockpiles of the mine were destroyed on 19 October 1999.
Metal detectors were first used for demining, after their invention by the Polish officer Józef Kosacki. His invention, known as the Polish mine detector, was used by the Allies alongside mechanical methods, to clear the German mine fields during the Second Battle of El Alamein when 500 units were shipped to Field Marshal Montgomery's Eighth Army.
The Nazis used captured civilians who were chased across minefields to detonate the explosives. According to Laurence Rees "Curt von Gottberg, the SS-Obergruppenführer who, during 1943, conducted another huge anti-partisan action called Operation Kottbus on the eastern border of Belarus, reported that 'approximately two to three thousand local people were blown up in the clearing of the minefields'."
Whereas the placing and arming of mines is relatively inexpensive and simple, the process of detecting and removing them is typically expensive, slow, and dangerous. This is especially true of irregular warfare where mines were used on an ad hoc basis in unmarked areas. Anti-personnel mines are most difficult to find, due to their small size and the fact that many are made almost entirely of non-metallic materials specifically to escape detection.
Manual clearing remains the most effective technique for clearing mine fields, although hybrid techniques involving the use of animals and robots are being developed. Animals are desirable due to their strong sense of smell, which is more than capable of detecting a land mine. Animals like rats and dogs can also differentiate between other metal objects and land mines because they can be trained to detect the explosive agent itself.
Other techniques involve the use of geo-location technologies. A joint team of researchers at the University of New South Wales and Ohio State University is working to develop a system based on multi-sensor integration.
The laying of land mines has inadvertently led to a positive development in the Falkland Islands. Mine fields laid near the sea during the Falklands War have become favorite places for penguins, which do not weigh enough to detonate the mines. Therefore, they can breed safely, free of human intrusion. These odd sanctuaries have proven so popular and lucrative for ecotourism that efforts exist to prevent removal of the mines.
The use of land mines is controversial because they are indiscriminate weapons, harming soldier and civilian alike. They remain dangerous after the conflict in which they were deployed has ended, killing and injuring civilians and rendering land impassable and unusable for decades. To make matters worse, many factions have not kept accurate records (or any at all) of the exact locations of their minefields, making removal efforts painstakingly slow. These facts pose serious difficulties in many developing nations where the presence of mines hampers resettlement, agriculture, and tourism. The International Campaign to Ban Landmines campaigned successfully to prohibit their use, culminating in the 1997 Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on their Destruction, known informally as the Ottawa Treaty.
The Treaty came into force on 1 March 1999. The treaty was the result of the leadership of the Governments of Canada, Norway, South Africa and Mozambique working with the "International Campaign to Ban Landmines", launched in 1992. The campaign and its leader, Jody Williams, won the Nobel Peace Prize in 1997 for its efforts.
The treaty focuses specifically on anti-personnel mines and does not cover anti-tank mines, cluster bombs or claymore-type mines operated in command mode. Anti-personnel mines pose the greatest long-term (post-conflict) risk to humans and animals, since they are typically designed to be triggered by any movement or a pressure of only a few kilograms, whereas anti-tank mines require much more weight (or a combination of factors that would exclude humans). Existing stocks must be destroyed within four years of signing the treaty.
Signatories of the Ottawa Treaty agree that they will not use, produce, stockpile or trade in anti-personnel land mines. In 1997, there were 122 signatories; as of early 2016, 162 countries had joined the Treaty. Thirty-six countries, including the People's Republic of China, the Russian Federation and the United States, which together may hold tens of millions of stockpiled antipersonnel mines, are not party to the Convention. The United States did not sign because the treaty lacks an exception for the Korean Demilitarized Zone.
There is a clause in the treaty, Article 3, which permits countries to retain land mines for use in training or development of countermeasures. Sixty-four countries have taken this option.
As an alternative to an outright ban, ten countries follow regulations contained in a 1996 amendment of Protocol II of the Convention on Conventional Weapons (CCW); they include China, Finland, India, Israel, Morocco, Pakistan, South Korea and the United States. Sri Lanka, which had adhered to this regulation, announced in 2016 that it would join the Ottawa Treaty.
Before the Ottawa Treaty was adopted, the Arms Project of Human Rights Watch identified "almost 100 companies and government agencies in 48 countries" that had manufactured "more than 340 types of antipersonnel landmines in recent decades." Five to ten million mines were produced per year with a value of $50 to $200 million. The largest producers were probably China, Italy and the Soviet Union. The companies involved included giants such as Daimler-Benz, the Fiat Group, the Daewoo Group, RCA and General Electric.
As of 2017, the "Landmine & Cluster Munition Monitor" identified four countries that were "likely to be actively producing" land mines: India, Myanmar, Pakistan and South Korea. Another seven states reserved the right to make them but were probably not doing so: China, Cuba, Iran, North Korea, Russia, Singapore, and Vietnam.
Throughout the world there are millions of hectares that are contaminated with land mines.
From 1999 to 2017, the "Landmine Monitor" has recorded over 120,000 casualties from mines, IEDs and explosive remnants of war; it estimates that another 1,000 per year go unrecorded. The estimate for all time is over half a million. In 2017, at least 2,793 were killed and 4,431 injured. 87% of the casualties were civilians and 47% were children (less than 18 years old). The largest numbers of casualties were in Afghanistan (2,300), Syria (1,906), and Ukraine (429).
Natural disasters can have a significant impact on efforts to demine areas of land. For example, the floods that occurred in Mozambique in 1999 and 2000 may have displaced hundreds of thousands of land mines left from the war. Uncertainty about their locations delayed recovery efforts.
According to a study by Asmeret Asefaw Berhe, land degradation caused by land mines "can be classified into five groups: access denial, loss of biodiversity, micro-relief disruption, chemical composition, and loss of productivity". The effects of an explosion depend on: "(i) the objectives and methodological approaches of the investigation; (ii) concentration of mines in a unit area; (iii) chemical composition and toxicity of the mines; (iv) previous uses of the land and (v) alternatives that are available for the affected populations."
The most prominent ecological issue associated with landmines (or fear of them) is denial of access to vital resources (where "access" refers to the ability to use resources, in contrast to "property", the right to use them). The presence and fear of presence of even a single landmine can discourage access for agriculture, water supplies and possibly conservation measures. Reconstruction and development of important structures such as schools and hospitals are likely to be delayed, and populations may shift to urban areas, increasing overcrowding and the risk of spreading diseases.
Access denial can have positive effects on the environment. When a mined area becomes a "no-man's land", plants and vegetation have a chance to grow and recover. For example, formerly arable lands in Nicaragua returned to forests and remained undisturbed after the establishment of landmines. Similarly, the penguins of the Falkland Islands have benefited because they are not heavy enough to trigger the mines. However, these benefits last only as long as animals, falling tree limbs and the like do not detonate the mines. In addition, long idle periods could "potentially end up creating or exacerbating loss of productivity", particularly on land of low quality.
Landmines can threaten biodiversity by wiping out vegetation and wildlife during explosions or demining; this extra burden can push threatened and endangered species to extinction. They have also been used by poachers to target endangered species. Displaced refugees hunt animals for food and destroy habitats by building shelters.
Shrapnel, or abrasions of bark or roots caused by detonated mines, can cause the slow death of trees and provide entry sites for wood-rotting fungi. When landmines make land unavailable for farming, residents resort to the forests to meet all of their survival needs. This exploitation furthers the loss of biodiversity.
Near mines that have exploded or decayed, soils tend to be contaminated, particularly with heavy metals. Products of the explosives, both organic and inorganic substances, are most likely to be "long lasting, water-soluble and toxic even in small amounts". They can be introduced either "directly or indirectly into soil, water bodies, microorganisms and plants with drinking water, food products or during respiration".
Toxic compounds can also find their way into bodies of water and accumulate in land animals, fish and plants. They can act "as a nerve poison to hamper growth", with deadly effect.
https://en.wikipedia.org/wiki?curid=18172
Labour economics
Labour economics seeks to understand the functioning and dynamics of the markets for wage labour. Labour is a commodity that is supplied by labourers in exchange for a wage paid by demanding firms.
Labour markets or job markets function through the interaction of workers and employers. Labour economics looks at the suppliers of labour services (workers) and the demanders of labour services (employers), and attempts to understand the resulting pattern of wages, employment, and income. Labour markets are normally geographically bounded, but the rise of the internet has brought about a 'planetary labour market' in some sectors.
Labour is a measure of the work done by human beings. It is conventionally contrasted with such other factors of production as land and capital. Some theories focus on human capital (referring to the skills that workers possess, not necessarily their actual work). Labour is unique to study because it is a special type of good that cannot be separated from the owner (i.e. the work cannot be separated from the person who does it). A labour market is also different from other markets in that workers are the suppliers and firms are the demanders.
There are two sides to labour economics. Labour economics can generally be seen as the application of microeconomic or macroeconomic techniques to the labour market. Microeconomic techniques study the role of individuals and individual firms in the labour market. Macroeconomic techniques look at the interrelations between the labour market, the goods market, the money market, and the foreign trade market, and at how these interactions influence macro variables such as employment levels, participation rates, aggregate income and gross domestic product.
The Labour force (LF) is defined as the number of people of working age, who are either employed or actively looking for work (unemployed). The labour force participation rate (LFPR) is the number of people in the labour force divided by the size of the adult civilian non-institutional population (or by the population of working age that is not institutionalized), LFPR = LF/Population.
The non-labour force includes those who are not looking for work, those who are institutionalized (such as in prisons or psychiatric wards), stay-at-home spouses, children not of working age, and those serving in the military. The unemployment level is defined as the labour force minus the number of people currently employed. The unemployment rate is defined as the level of unemployment divided by the labour force. The employment rate is defined as the number of people currently employed divided by the adult population (or by the population of working age). In these statistics, self-employed people are counted as employed.
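As a minimal illustration of these definitions, the sketch below computes the headline rates for a hypothetical population; all of the figures are invented for the example.

```python
# Illustrative calculation of the labour statistics defined above.
# The population figures are hypothetical, chosen only to make the arithmetic visible.

adult_population = 1_000_000   # civilian non-institutional population of working age
employed = 570_000
unemployed = 30_000            # not employed, but actively looking for work

labour_force = employed + unemployed                 # LF
lfpr = labour_force / adult_population               # LFPR = LF / Population
unemployment_rate = unemployed / labour_force
employment_rate = employed / adult_population

print(f"Labour force:        {labour_force:,}")            # 600,000
print(f"Participation rate:  {lfpr:.1%}")                  # 60.0%
print(f"Unemployment rate:   {unemployment_rate:.1%}")     # 5.0%
print(f"Employment rate:     {employment_rate:.1%}")       # 57.0%
```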
The skills required in a labour force can vary from individual to individual, as well as from firm to firm. Some firms have specific skills they are interested in, limiting the labour force to certain criteria. A firm requiring specific skills will help determine the size of the market.
Variables like employment level, unemployment level, labour force, and unfilled vacancies are called stock variables because they measure a quantity at a point in time. They can be contrasted with flow variables which measure a quantity over a duration of time. Changes in the labour force are due to flow variables such as natural population growth, net immigration, new entrants, and retirements. Changes in unemployment depend on inflows (non-employed people starting to look for jobs and employed people who lose their jobs that are looking for new ones) and outflows (people who find new employment and people who stop looking for employment). When looking at the overall macroeconomy, several types of unemployment have been identified, which can be separated into two categories of natural and unnatural unemployment.
"Natural Unemployment"
"Unnatural Unemployment"
Neoclassical economists view the labour market as similar to other markets in that the forces of supply and demand jointly determine the price (in this case the wage rate) and quantity (in this case the number of people employed).
However, the labour market differs from other markets (like the markets for goods or the financial market) in several ways. In particular, the labour market may act as a non-clearing market. While according to neoclassical theory most markets quickly attain a point of equilibrium without excess supply or demand, this may not be true of the labour market: it may have a persistent level of unemployment. Contrasting the labour market to other markets also reveals persistent compensating differentials among similar workers.
Models that assume perfect competition in the labour market, as discussed below, conclude that workers earn their marginal product of labour.
Households are suppliers of labour. In microeconomic theory, people are assumed to be rational and seeking to maximize their utility function. In the labour market model, their utility function expresses trade-offs in preference between leisure time and income from time used for labour. However, they are constrained by the hours available to them.
Let "w" denote the hourly wage, "k" denote total hours available for labour and leisure, "L" denote the chosen number of working hours, π denote income from non-labour sources, and "A" denote leisure hours chosen. The individual's problem is to maximise utility "U", which depends on total income available for spending on consumption and also depends on the time spent in leisure, subject to a time constraint, with respect to the choices of labour time and leisure time:
This is shown in the graph below, which illustrates the trade-off between allocating time to leisure activities and allocating it to income-generating activities. The linear constraint indicates that every additional hour of leisure undertaken requires the loss of an hour of labour and thus of the fixed amount of goods that that labour's income could purchase. Individuals must choose how much time to allocate to leisure activities and how much to working. This allocation decision is informed by the indifference curve labelled IC1. The curve indicates the combinations of leisure and work that will give the individual a specific level of utility. The point where the highest indifference curve is just tangent to the constraint line (point A), illustrates the optimum for this supplier of labour services.
If consumption is measured by the value of income obtained, this diagram can be used to show a variety of interesting effects. This is because the absolute value of the slope of the budget constraint is the wage rate. The point of optimisation (point A) reflects the equivalency between the wage rate and the marginal rate of substitution of leisure for income (the absolute value of the slope of the indifference curve). Because the marginal rate of substitution of leisure for income is also the ratio of the marginal utility of leisure (MUL) to the marginal utility of income (MUY), one can conclude:
MUL/MUY = w,
where "Y" is total income and the right side is the wage rate.
"Effects of a wage increase"
If the wage rate increases, this individual's constraint line pivots up from X,Y1 to X,Y2. He/she can now purchase more goods and services. His/her utility will increase from point A on IC1 to point B on IC2.
To understand what effect this might have on the decision of how many hours to work, one must look at the income effect and substitution effect.
The wage increase shown in the previous diagram can be decomposed into two separate effects. The pure income effect is shown as the movement from point A to point C in the next diagram. Consumption increases from YA to YC and – since the diagram assumes that leisure is a normal good – leisure time increases from XA to XC. (Employment time decreases by the same amount as leisure increases.)
"The Income and Substitution effects of a wage increase"
But that is only part of the picture. As the wage rate rises, the worker will substitute away from leisure and into the provision of labour—that is, will work more hours to take advantage of the higher wage rate, or in other words substitute away from leisure because of its higher opportunity cost. This substitution effect is represented by the shift from point C to point B. The net impact of these two effects is shown by the shift from point A to point B. The relative magnitude of the two effects depends on the circumstances. In some cases, such as the one shown, the substitution effect is greater than the income effect (in which case more time will be allocated to working), but in other cases, the income effect will be greater than the substitution effect (in which case less time is allocated to working). The intuition behind this latter case is that the individual decides that the higher earnings on the previous amount of labour can be "spent" by purchasing more leisure.
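The decomposition can be made concrete with a small numerical sketch. The Cobb-Douglas utility function, the wage rates and the time endowment below are hypothetical choices, not taken from the text; the sketch uses the Slutsky method of compensating non-labour income so that the original bundle is just affordable at the new wage.

```python
# A numerical sketch of the income/substitution decomposition for a wage rise,
# assuming (hypothetically) Cobb-Douglas preferences U(Y, A) = Y**a * A**(1 - a).

K = 16.0    # hours per day available for work or leisure
PI = 10.0   # non-labour income
A_PARAM = 0.6

def hours_worked(w, pi=PI):
    """Utility-maximising labour hours: optimal leisure is (1 - a) * M / w,
    where M = w*K + pi is 'full income'."""
    leisure = (1 - A_PARAM) * (w * K + pi) / w
    return K - leisure

w1, w2 = 10.0, 15.0
L1, L2 = hours_worked(w1), hours_worked(w2)

# Slutsky compensation: adjust non-labour income so the original bundle is
# just affordable at the new wage; moving to this compensated optimum
# isolates the substitution effect.
L_comp = hours_worked(w2, pi=PI + (w1 - w2) * L1)

print(f"substitution effect: {L_comp - L1:+.2f} hours")  # work more: leisure is pricier
print(f"income effect:       {L2 - L_comp:+.2f} hours")  # work less: the worker is richer
print(f"net change:          {L2 - L1:+.2f} hours")
```

With these particular numbers the substitution effect slightly outweighs the income effect, so hours worked rise on net; other parameter choices would reverse this, as the text describes.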
"The Labour Supply curve"
If the substitution effect is greater than the income effect, an individual's supply of labour services will increase as the wage rate rises, which is represented by a positive slope in the labour supply curve (as at point E in the adjacent diagram, which exhibits a positive wage elasticity). This positive relationship is increasing until point F, beyond which the income effect dominates the substitution effect and the individual starts to reduce the number of labour hours he supplies (point G) as wage increases; in other words, the wage elasticity is now negative.
The direction of the slope may change more than once for some individuals, and the labour supply curve is different for different individuals.
Other variables that affect the labour supply decision, and can be readily incorporated into the model, include taxation, welfare, work environment, and income as a signal of ability or social contribution.
A firm's labour demand is based on its marginal physical product of labour (MPPL). This is defined as the additional output (or physical product) that results from an increase of one unit of labour (or from an infinitesimal increase in labour). (See also Production theory basics.)
Labour demand is a derived demand; that is, hiring labour is not desired for its own sake but rather because it aids in producing output, which contributes to an employer's revenue and hence profits. The demand for an additional amount of labour depends on the Marginal Revenue Product (MRP) and the marginal cost (MC) of the worker. With a perfectly competitive goods market, the MRP is calculated by multiplying the price of the end product or service by the Marginal Physical Product of the worker. If the MRP is greater than a firm's Marginal Cost, then the firm will employ the worker since doing so will increase profit. The firm only employs however up to the point where MRP=MC, and not beyond, in neoclassical economic theory.
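A minimal sketch of this hiring rule follows, assuming a hypothetical concave production function and a perfectly competitive goods market (so that MRP equals the product price times the marginal physical product); the price, wage and production function are invented for the example.

```python
# The neoclassical hiring rule: employ workers while MRP >= MC (the wage).
import math

PRICE = 5.0     # price of the product (perfectly competitive goods market)
WAGE = 40.0     # marginal cost of one more worker

def output(workers: int) -> float:
    """Total output; concave, so each extra worker adds less than the last."""
    return 100 * math.sqrt(workers)

def marginal_revenue_product(workers: int) -> float:
    # MRP = price of output * marginal physical product of the nth worker
    return PRICE * (output(workers) - output(workers - 1))

n = 1
while marginal_revenue_product(n) >= WAGE:  # hire up to the point where MRP = MC
    n += 1
print(f"profit-maximising headcount: {n - 1}")   # 39 with these parameters
```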
The MRP of the worker is affected by other inputs to production with which the worker can work (e.g. machinery), often aggregated under the term "capital". It is typical in economic models for greater availability of capital for a firm to increase the MRP of the worker, all else equal. Education and training are counted as "human capital". Since the amount of physical capital affects MRP, and since financial capital flows can affect the amount of physical capital available, MRP and thus wages can be affected by financial capital flows within and between countries, and the degree of capital mobility within and between countries.
According to neoclassical theory, over the relevant range of outputs, the marginal physical product of labour is declining (law of diminishing returns). That is, as more and more units of labour are employed, their additional output begins to decline.
Additionally, although the MRP is a good way of expressing an employer's demand, other factors such as social group formation can affect demand, as well as the labour supply. This constantly restructures exactly what a labour market is, and can cause problems for theories of inflation.
"A firm's labour demand in the short run (D) and a horizontal supply curve (S)"
The marginal revenue product of labour can be used as the demand for labour curve for this firm in the short run. In competitive markets, a firm faces a perfectly elastic supply of labour which corresponds with the wage rate and the marginal resource cost of labour (W = SL = MFCL). In imperfect markets, the diagram would have to be adjusted because MFCL would then be equal to the wage rate divided by marginal costs. Because optimum resource allocation requires that marginal factor costs equal marginal revenue product, this firm would demand L units of labour as shown in the diagram.
The demand for labour of this firm can be summed with the demand for labour of all other firms in the economy to obtain the aggregate demand for labour. Likewise, the supply curves of all the individual workers (mentioned above) can be summed to obtain the aggregate supply of labour. These supply and demand curves can be analysed in the same way as any other industry demand and supply curves to determine equilibrium wage and employment levels.
Wage differences exist, particularly in mixed and fully/partly flexible labour markets. For example, the wages of a doctor and a port cleaner, both employed by the NHS, differ greatly. There are various factors concerning this phenomenon. This includes the MRP of the worker. A doctor's MRP is far greater than that of the port cleaner. In addition, the barriers to becoming a doctor are far greater than that of becoming a port cleaner. To become a doctor takes a lot of education and training which is costly, and only those who excel in academia can succeed in becoming doctors. The port cleaner, however, requires relatively less training. The supply of doctors is therefore significantly less elastic than that of port cleaners. Demand is also inelastic as there is a high demand for doctors and medical care is a necessity, so the NHS will pay higher wage rates to attract the profession.
Some labour markets have a single employer and thus do not satisfy the perfect competition assumption of the neoclassical model above. The model of a monopsonistic labour market gives a lower quantity of employment and a lower equilibrium wage rate than does the competitive model.
In many real-life situations, the assumption of perfect information is unrealistic. An employer does not necessarily know how hard workers are working or how productive they are. This provides an incentive for workers to shirk from providing their full effort, called moral hazard. Since it is difficult for the employer to identify the hard-working and the shirking employees, there is no incentive to work hard and productivity falls overall, leading to the hiring of more workers and a lower unemployment rate.
One solution used to avoid moral hazard is stock options, which grant employees the chance to benefit directly from a firm's success. However, this solution has attracted criticism, as executives with large stock-option packages have been suspected of acting to over-inflate share values to the detriment of the long-run welfare of the firm. Another solution, foreshadowed by the rise of temporary workers in Japan and the firing of many of these workers in response to the financial crisis of 2008, is more flexible job contracts and terms that encourage employees to work less than full-time by partially compensating for the loss of hours, relying on workers to adapt their working time in response to job requirements and economic conditions instead of the employer trying to determine how much work is needed to complete a given task and overestimating.
Another aspect of uncertainty results from the firm's imperfect knowledge of worker ability. If a firm is unsure about a worker's ability, it pays a wage assuming that the worker's ability is the average of similar workers. This wage undercompensates high-ability workers, which may drive them away from the labour market while attracting low-ability workers. Such a phenomenon, called adverse selection, can sometimes lead to market collapse.
One way firms combat adverse selection is signalling, pioneered by Michael Spence, whereby employers use various characteristics of applicants to differentiate between high-ability and low-ability workers. One common signal is education: employers assume that high-ability workers will have higher levels of education, and can then compensate them with higher wages. However, signalling does not always work, and it may appear to an external observer that education has raised the marginal product of labour without this necessarily being true.
One of the major research achievements of the 1990-2010 period was the development of a framework with dynamic search, matching, and bargaining.
At the micro level, one sub-discipline eliciting increased attention in recent decades is analysis of internal labour markets, that is, "within" firms (or other organisations), studied in personnel economics from the perspective of personnel management. By contrast, external labour markets "imply that workers move somewhat fluidly between firms and wages are determined by some aggregate process where firms do not have significant discretion over wage setting." The focus is on "how firms establish, maintain, and end employment relationships and on how firms provide incentives to employees," including models and empirical work on incentive systems and as constrained by economic efficiency and risk/incentive tradeoffs relating to personnel compensation.
Inequality and discrimination in the workplace can have many effects on workers.
In the context of labour economics, inequality usually refers to the unequal distribution of earnings between households. Inequality is commonly measured by economists using the Gini coefficient. This coefficient does not have a concrete meaning on its own but is used to compare inequality across regions: the higher the Gini coefficient, the greater the inequality in a region. Over time, inequality has, on average, been increasing. This is due to numerous factors, including labour supply and demand shifts as well as institutional changes in the labour market. On the supply and demand side, demand for skilled workers has risen faster than the supply of skilled workers relative to unskilled workers, and technological changes have increased productivity; these forces push up wages for skilled labour while unskilled workers' wages stagnate or decline. As for the institutional changes, a decrease in union power and a declining real minimum wage (both of which reduce unskilled workers' wages), together with tax cuts for the wealthy, all increase the inequality gap between groups of earners. A sketch of the Gini computation follows.
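For illustration, the sketch below computes the Gini coefficient for two hypothetical five-household regions using a standard rank-based formula; the income figures are invented.

```python
# The Gini coefficient: 0 means every household earns the same; values near 1
# mean earnings are concentrated in a few households.

def gini(incomes):
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    # Standard formula based on the ranks of the sorted incomes.
    weighted = sum(rank * x for rank, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

equal_region = [30_000] * 5
unequal_region = [5_000, 10_000, 20_000, 40_000, 150_000]

print(f"equal region:   {gini(equal_region):.3f}")    # 0.000
print(f"unequal region: {gini(unequal_region):.3f}")  # 0.569
```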
As for discrimination, it is the difference in pay attributable to demographic differences between people, such as gender, race, ethnicity, religion or sexual orientation, even though these factors do not affect the worker's productivity. Many regions and countries have enacted government policies to combat discrimination, including discrimination in the workplace. Discrimination can be modelled and measured in numerous ways. The Oaxaca decomposition is a common method used to calculate the amount of discrimination that exists when wages differ between groups of people; it aims to separate the part of the wage difference that arises from differences in skills from the part that arises from differences in the returns to those skills (see the formula after this paragraph). Another way of modelling wage discrimination in the workplace is Gary Becker's taste models. Under employer discrimination, the employer does not hire the minority worker because the perceived cost of hiring that worker is higher than the cost of hiring a non-minority worker, which reduces hiring of the minority. Another taste model is employee discrimination, which does not cause a decline in the hiring of minorities but instead produces a more segregated workforce, because the prejudiced worker feels that they should be paid more to work next to the worker they are prejudiced against, or that they are not being paid an equal amount as that worker. One more taste model involves customer discrimination, whereby the employers themselves are not prejudiced but believe that their customers might be, so the employer is less likely to hire the minority worker if the worker would interact with prejudiced customers. Becker developed many other taste models beyond these to explain discrimination that causes differences in hiring and wages in the labour market.
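In its common two-fold textbook form (a standard statement, not quoted from this article), the Oaxaca decomposition splits the mean wage gap between a group A and a group B as:

$$\bar{W}_A - \bar{W}_B \;=\; \underbrace{(\bar{X}_A - \bar{X}_B)'\hat{\beta}_A}_{\text{explained by skills}} \;+\; \underbrace{\bar{X}_B'(\hat{\beta}_A - \hat{\beta}_B)}_{\text{differences in returns}},$$

where $\bar{X}$ are average observable characteristics (education, experience) and $\hat{\beta}$ are the returns estimated from group-specific wage regressions; the second, "unexplained" term is the part conventionally attributed to discrimination.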
Many sociologists, political economists, and heterodox economists claim that labour economics tends to lose sight of the complexity of individual employment decisions. These decisions, particularly on the supply side, are often loaded with considerable emotional baggage and a purely numerical analysis can miss important dimensions of the process, such as social benefits of a high income or wage rate regardless of the marginal utility from increased consumption or specific economic goals.
From the perspective of mainstream economics, neoclassical models are not meant to serve as a full description of the psychological and subjective factors that go into a given individual's employment relations, but as a useful approximation of human behaviour in the aggregate, which can be fleshed out further by the use of concepts such as information asymmetry, transaction costs, contract theory etc.
Also missing from most labour market analyses is the role of unpaid labour such as unpaid internships where workers with little or no experience are allowed to work a job without pay so that they can gain experience in a particular profession. Even though this type of labour is unpaid it can nevertheless play an important part in society if not abused by employers. The most dramatic example is child raising. However, over the past 25 years an increasing literature, usually designated as the economics of the family, has sought to study within household decision making, including joint labour supply, fertility, child-raising, as well as other areas of what is generally referred to as home production.
The labour market, as institutionalised under today's market economic systems, has been criticised, especially by both mainstream socialists and anarcho-syndicalists, who utilise the term wage slavery as a pejorative for wage labour. Socialists draw parallels between the trade of labour as a commodity and slavery. Cicero is also known to have suggested such parallels.
According to Noam Chomsky, analysis of the psychological implications of wage slavery goes back to the Enlightenment era. In his 1791 book "On the Limits of State Action", classical liberal thinker Wilhelm von Humboldt explained how "whatever does not spring from a man's free choice, or is only the result of instruction and guidance, does not enter into his very nature; he does not perform it with truly human energies, but merely with mechanical exactness" and so when the labourer works under external control, "we may admire what he does, but we despise what he is." Both the Milgram and Stanford experiments have been found useful in the psychological study of wage-based workplace relations.
The American philosopher John Dewey posited that until "industrial feudalism" is replaced by "industrial democracy," politics will be "the shadow cast on society by big business". Thomas Ferguson has postulated in his investment theory of party competition that the undemocratic nature of economic institutions under capitalism causes elections to become occasions when blocs of investors coalesce and compete to control the state.
As per anthropologist David Graeber, the earliest wage labour contracts we know about were in fact contracts for the rental of chattel slaves (usually the owner would receive a share of the money, and the slave, another, with which to maintain his or her living expenses.) Such arrangements, according to Graeber, were quite common in New World slavery as well, whether in the United States or Brazil. C. L. R. James argued that most of the techniques of human organisation employed on factory workers during the industrial revolution were first developed on slave plantations.
Additionally, Marxists posit that labour-as-commodity, which is how they regard wage labour, provides an absolutely fundamental point of attack against capitalism. "It can be persuasively argued," noted one concerned philosopher, "that the conception of the worker's labour as a commodity confirms Marx's stigmatisation of the wage system of private capitalism as 'wage-slavery;' that is, as an instrument of the capitalist's for reducing the worker's condition to that of a slave, if not below it."
https://en.wikipedia.org/wiki?curid=18178
Lammas
Lammas Day (Anglo-Saxon "hlaf-mas", "loaf-mass"), also known as Loaf Mass Day, is a Christian holiday celebrated in some English-speaking countries in the Northern Hemisphere on 1 August. It is a festival to mark the annual wheat harvest, and is the first harvest festival of the year. The name originates from the word "loaf" in reference to bread and "Mass" in reference to the primary Christian liturgy celebrating Holy Communion.
On Loaf Mass Day, it is customary to bring to a Christian church a loaf made from the new crop, which began to be harvested at Lammastide, falling at the halfway point between the summer solstice and the autumnal equinox in September. Christians also hold church processions to bakeries, where those working there are blessed by Christian clergy.
Lammas historically coincided with the feast of St. Peter in Chains, commemorating St. Peter's miraculous deliverance from prison, but in the liturgical reform of 1969 the feast of St. Alphonsus Liguori was transferred to this day, the day of St. Alphonsus' death.
While Loaf Mass Day is traditionally a Christian holy day, Lughnasadh is celebrated by Neopagans around the same time.
The writer Ann Lewin has described a key practice of the Christian feast of Lammas (Loaf Mass Day) and its importance in the Christian calendar in relation to other feasts of the church year.
In the Church of England, a Protestant denomination that is the mother church of the Anglican Communion, during the celebration of the Mass, "The Lammas loaf, or part of it, may be used as the bread of the Eucharist, or the Lammas loaf and the eucharistic bread may be kept separate."
The loaf is blessed, and in Anglo-Saxon England it might be employed afterwards in protective rituals: a book of Anglo-Saxon charms directed that the Lammas bread be broken into four bits, which were to be placed at the four corners of the barn, to protect the garnered grain.
In many parts of England, tenants were bound to present freshly harvested wheat to their landlords on or before the first day of August. In the "Anglo-Saxon Chronicle", where it is referred to regularly, it is called "the feast of first fruits". The blessing of first fruits was performed annually in both the Eastern Christian and Western Christian Churches on the first or the sixth of August (the latter being the feast of the Transfiguration of Christ).
In medieval times the feast was sometimes known in England and Scotland as the "Gule of August", but the meaning of "gule" is unclear. Ronald Hutton suggests, following the 18th-century Welsh clergyman and antiquary John Pettingall, that it is merely an Anglicisation of "Gŵyl Awst", the Welsh name of the "feast of August". The "OED" and most etymological dictionaries give it a more circuitous origin similar to "gullet": from an Old French diminutive meaning "throat, neck", ultimately from Latin "gula", "throat".
Several antiquaries, beginning with John Brady, offered a back-construction by which the feast was originally known as "Lamb-mass", under the undocumented supposition that tenants of the Cathedral of York, dedicated to St. Peter ad Vincula, of which this is the feast, would have been required to bring a live lamb to the church, or, in John Skinner's words, "because Lambs then grew out of season". This is a folk etymology, of which the "OED" notes that it was "subsequently felt as if from LAMB + MASS".
For many villeins, the wheat must have run low in the days before Lammas, and the new harvest began a season of plenty, of hard work and company in the fields, reaping in teams. Thus there was a spirit of celebratory play.
In the medieval agricultural year, Lammas also marked the end of the hay harvest that had begun after Midsummer. At the end of hay-making, a sheep would be loosed in the meadow among the mowers, to be kept by whoever could catch it.
In Shakespeare's "Romeo and Juliet" (1.3.19) it is observed of Juliet, "Come Lammas Eve at night shall she [Juliet] be fourteen." Since Juliet was born on Lammas Eve, she came before the harvest festival, which is significant because her life ended before she could reap what she had sown and enjoy the bounty of the harvest, in this case the full consummation and enjoyment of her love with Romeo.
Another well-known cultural reference is the opening of "The Battle of Otterburn": "It fell about the Lammas tide when the muir-men win their hay".
William Hone speaks in "The Every-Day Book" (1838) of a later festive Lammas day sport common among Scottish farmers near Edinburgh. He says that they "build towers...leaving a hole for a flag-pole in the centre so that they may raise their colours." When the flags over the many peat-constructed towers were raised, farmers would go to others' towers and attempt to "level them to the ground." A successful attempt would bring great praise. However, people were allowed to defend their towers, and so everyone was provided with a "tooting-horn" to alert nearby country folk of the impending attack and the battle would turn into a "brawl." According to Hone, more than four people had died at this festival and many more were injured. At the day's end, races were held, with prizes given to the townspeople.
Lughnasadh is the name used for one of the eight sabbats in the Neopagan Wheel of the Year. It is the first of the three autumn harvest festivals, the other two being the autumn equinox (also called Mabon) and Samhain. In the Northern Hemisphere it takes place around 1 August, while in the Southern Hemisphere it is celebrated around 1 February.
Lammas is one of the Scottish quarter days.
"Lammas leaves" or "Lammas growth" refers to a second crop of leaves produced in high summer by some species of trees in temperate countries to replace those lost to insect damage. They often differ slightly in shape, texture and/or hairiness from the earlier leaves.
A low-impact development project at Tir y Gafel, Glandwr, Pembrokeshire, the Lammas Ecovillage, is a collective initiative of nine self-built homes. It was the first such project to obtain planning permission under a predecessor of what is now the sixth national planning guidance for sustainable rural communities, originally proposed by the One Planet Council.
Exeter in Devon is one of the few towns in England that still celebrates its Lammas Fair and has a processional custom which stretches back over 900 years, led by the Lord Mayor. During the fair a white glove on a pole decorated with garlands is raised above the Guildhall. The fair now takes place on the first Thursday in July.
The "Doctor Who" serial "The Image of the Fendahl" takes place on Lammas Eve.
In the "Inspector Morse" episode "Day of the Devil", Lammas Day is presented as a Satanic (un)holy day, "the Devil's day".
Katherine Kurtz's alternate World War II fantasy "history" takes its title, "Lammas Night", from pagan tradition surrounding the first of August and the Divine Right of Kings.
The English football club Staines Lammas F.C. is named after the festival.
https://en.wikipedia.org/wiki?curid=18179
Longmeadow, Massachusetts
Longmeadow is a town in Hampden County, Massachusetts, in the United States. The population was 15,784 at the 2010 census.
Longmeadow was first settled in 1644 and officially incorporated on October 17, 1783. The town was originally farmland within the limits of Springfield. It remained relatively pastoral until the street railway was built, when the population tripled over a fifteen-year period. After Interstate 91 was built in the wetlands on the west side of town, the population tripled again between 1960 and 1975.
During the 19th and early 20th centuries, Longmeadow was best known as the site from which Longmeadow brownstone was mined. Several famous American buildings, including Princeton University's Neo-Gothic library, are made of Longmeadow brownstone. In 1894, the more populous and industrialized "East Village" portion of the town containing the brownstone quarries split off to become East Longmeadow.
Designed by the famed golf course architect Donald Ross in 1922, the Longmeadow Country Club was the proving ground for golf equipment designed and manufactured by the Spalding Co. of Chicopee. Bobby Jones, a consultant for Spalding, was a member in good standing at LCC and made a number of his instructional films there in the 1930s.
Longmeadow is located in the western part of the state, just south of the city of Springfield, and is bordered on the west by the Connecticut River and Agawam, on the east by East Longmeadow, and on the south by Enfield, Connecticut. It lies north of Hartford.
More than 30% of the town is permanent open space. Conservation areas on the west side of town include land bordering the Connecticut River. The area supports a wide range of wildlife including deer, beaver, wild turkeys, foxes, and eagles. Springfield's Forest Park, the largest city park in New England, forms the northern border of the town. The private Twin Hills and public Franconia golf courses, plus town athletic fields and conservation land, cover nearly two-thirds of the eastern border of the town. Two large public parks, the Longmeadow Country Club, and three conservation areas account for the bulk of the remaining formal open space. Almost 20% of the houses in town are in proximity to a "dingle", a tree-lined, steep-sided sandy ravine with a wetland at the bottom that provides a privacy barrier between yards.
Longmeadow has a town common, commonly referred to as "The Green", located along U.S. Route 5 on the west side of town. Roughly 100 houses dating to before 1900, most of them within the historic district, are located near the town green. Longmeadow's Town Green is a historic district on the National Register of Historic Places, surrounded by a number of buildings dating back to the 18th and 19th centuries. Longmeadow is unique in that the town green has maintained its residential purpose and resisted commercial pressure. The National Register lists its current function as domestic and landscape, with the sub-functions of park and single dwelling. Houses along the photogenic main street (Longmeadow Street) are set back farther than in most towns of similar residential density. The town has three recently remodeled elementary schools, two middle schools, and one high school. The commercial center of town is an area called "The Longmeadow Shops", which includes restaurants and clothing stores.
According to the United States Census Bureau, 5.34% of the town's total area is water and the remainder is land.
As of the census of 2000, there were 15,633 people, 5,734 households, and 4,432 families residing in the town, with 5,879 housing units. The racial makeup of the town was 95.42% White, 0.69% African American, 0.05% Native American, 2.90% Asian, 0.06% Pacific Islander, 0.26% from other races, and 0.62% from two or more races. Hispanic or Latino of any race were 1.09% of the population.
There were 5,734 households out of which 37.1% had children under the age of 18 living with them, 69.1% were married couples living together, 6.4% had a female householder with no husband present, and 22.7% were non-families. 20.4% of all households were made up of individuals and 14.0% had someone living alone who was 65 years of age or older. The average household size was 2.66 and the average family size was 3.09.
In the town, the population was spread out with 26.8% under the age of 18, 4.6% from 18 to 24, 22.0% from 25 to 44, 28.7% from 45 to 64, and 17.8% who were 65 years of age or older. The median age was 43 years. For every 100 females, there were 87.7 males. For every 100 females age 18 and over, there were 82.0 males.
The median income for a household in the town was $109,586, and the median income for a family was $115,578. Males had a median income of $68,238 versus $40,890 for females. The per capita income for the town was $48,949. About 1.0% of families and 2.1% of the population were below the poverty line, including 0.3% of those under age 18 and 8.3% of those age 65 or over.
The town is chartered with an Open Town Meeting form of government, led by a five-member Select Board elected by the town. The public school system is governed by the School Committee, which is made up of seven voting members elected by the town, the superintendent of schools, two assistant superintendents, a secretary, and a student representative.
The Longmeadow public school system operates six schools. Blueberry Hill School, Center School, and Wolf Swamp Road School are K−5 elementary schools. Williams Middle School and Glenbrook Middle School serve grades 6–8. Longmeadow High School serves all students in the town in grades 9 through 12. The town's elementary schools have recently been rebuilt; statements of interest for improvements to the two middle schools and Longmeadow High School were filed with the Massachusetts School Building Authority in 2007. In 2010, the voters of Longmeadow approved a 2.5% budget override to support the construction of a new $78 million high school, with the town receiving an estimated $34 million in state funds to be used towards the new construction. The new high school was completed and opened to students on February 26, 2013. After students and faculty had moved in, demolition of the old school began and was completed by June 2013. The school had its grand opening in September 2013, with both the brand-new building and the renovated business and administration wing open.
Longmeadow also hosts two private parochial schools, the Lubavitcher Yeshiva Academy (LYA) and St. Mary's Academy. LYA was established in 1946 in response to the Greater Springfield Jewish community's need for a quality Jewish day school. In 1999, LYA became the first Jewish day school to be accredited by the New England Association of Schools and Colleges (NEASC). The school serves more than 90 students each year from across the spectrum of Jewish life, including Orthodox, Conservative, Reform, and unaffiliated families. St. Mary's Academy, located behind St. Mary's Church, serves Catholic students from pre-kindergarten through grade 8.
Approximately 50% of the students at Longmeadow High School participate in the music program. The choruses have won numerous gold medals at the MICCA competition. The jazz ensemble has won numerous gold medals as well, but no longer competes. The honors chorus "Lyrics" has won numerous awards and has traveled to many places around the world on tours, such as Italy and Sweden. The wind ensemble and symphony orchestra have had the honor of performing in Indianapolis, Boston (Boston Symphony Hall), and New York (Carnegie Hall). In 2010, Longmeadow was awarded The American Prize in Orchestral Performance. The music program's crowning achievement has been receiving three national Grammy Awards based on the high level of excellence maintained throughout all groups in the music program.
Longmeadow also contains the 46-acre primary campus for Bay Path University, a private undergraduate and graduate institution.
https://en.wikipedia.org/wiki?curid=18182
Body relative direction
Body relative directions (also known as egocentric coordinates) are geometrical orientations relative to a body, such as that of a human.
The most common ones are: left and right; forward(s) and backward(s); up and down.
They form three pairs of orthogonal axes.
Since definitions of left and right based on the geometry of the natural environment are unwieldy, in practice the meaning of relative direction words is conveyed through tradition, acculturation, education, and direct reference. One common definition of up and down uses gravity and the planet Earth as a frame of reference. Since there is a very noticeable force of gravity acting between the Earth and any other nearby object, down is defined as the direction in which an object moves relative to the Earth when it is allowed to fall freely, and up is defined as the opposite direction. Another common definition uses a human body, standing upright, as a frame of reference; in that case, up is the direction from feet to head, perpendicular to the surface of the Earth. In most cases, then, up is simply the direction opposite to the pull of gravity.
In situations where a common frame of reference is needed, it is most common to use an egocentric view. A simple example is road signage. Another example is stage blocking, where "stage left", "stage right", "upstage", and "downstage" are, by convention, defined from the actor's point of view, though up and down stage do not follow gravitational conventions of up and down. An example of a non-egocentric view is page layout, where the relative terms "upper half", "left margin", etc. are defined in terms of the observer but employed in reverse by a type compositor, returning to an egocentric view. In medicine and science, where precise definitions are crucial, relative directions (left and right) are the sides of the organism, not those of the observer. The same is true in heraldry, where left and right in a coat of arms are treated as if the shield were being held by the armiger; to avoid confusion, the Latin terms "dexter" and "sinister" are employed for right and left. Proper right and proper left are terms mainly used to describe artistic images, and overcome the potential confusion that a figure's "own" right or "proper right" hand is on the left as the viewer sees it from the front.
Forward and backward may be defined by referring to an object's or person's motion: forward is the direction in which the object is moving, and backward is the opposite direction. Alternatively, "forward" may be the direction pointed by the observer's nose, with "backward" defined as the direction from the nose toward the rear of the skull. With respect to a ship, "forward" indicates the relative position of any object lying in the direction the ship is pointing. For symmetrical objects, it is necessary to define forward and backward in terms of the expected direction of motion. Many mass transit trains are built symmetrically with paired control booths, so definitions of forward, backward, left, and right are temporary.
Given a significant distance from the magnetic poles, one can work out which hand is which using a magnetic compass and the sun. Facing the sun before noon, the north pointer of the compass points toward the "left" hand; after noon, it points toward the "right".
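This rule of thumb is simple enough to express as a short program. The following is a minimal sketch in Python; the function name and the use of local solar time are illustrative assumptions, not part of any standard:

```python
def hand_toward_north(local_solar_hour: float) -> str:
    """Which hand the compass's north pointer indicates while facing the sun.

    Before local solar noon the sun lies east of the north-south meridian,
    so an observer facing the sun has north toward the left hand; after
    noon the sun lies west of the meridian and north is toward the right.
    Assumes the observer is far from the magnetic poles, as the text notes.
    """
    if local_solar_hour < 12:
        return "left"
    if local_solar_hour > 12:
        return "right"
    return "ambiguous"  # sun on the meridian: north is directly ahead or behind


print(hand_toward_north(9.5))   # left
print(hand_toward_north(15.0))  # right
```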
A right-hand rule is one common way to relate the three principal directions. For many years a fundamental question in physics was whether a left-hand rule would be equivalent. Many natural structures, including human bodies, follow a certain "handedness", but it was widely assumed that nature did not distinguish the two possibilities. This changed with the discovery of parity violations in particle physics. If a sample of cobalt-60 atoms is magnetized so that they spin counterclockwise around some axis, the beta radiation resulting from their nuclear decay will be preferentially directed opposite that axis. Since counterclockwise may be defined in terms of up, forward, and right, this experiment unambiguously differentiates left from right using only natural elements: if they were reversed, or if the atoms spun clockwise, the radiation would follow the spin axis instead of being opposite to it.
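The way a right-hand rule ties the three axis pairs together can be made concrete with a vector cross product. The sketch below is illustrative only; the axis assignments (x = forward, y = left, z = up) are assumptions chosen for the example:

```python
import numpy as np

# An illustrative right-handed frame: x = forward, y = left, z = up.
forward = np.array([1, 0, 0])
left = np.array([0, 1, 0])
up = np.array([0, 0, 1])

# Under the right-hand rule, crossing forward with up points rightward.
right = np.cross(forward, up)
print(right)  # [ 0 -1  0], i.e. the negative of "left"
assert (right == -left).all()

# Reversing the order of the product flips the sign -- the same mirror
# asymmetry that the cobalt-60 experiment detects in nature.
assert (np.cross(up, forward) == left).all()
```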
Bow, stern, port, and starboard, fore and aft are nautical terms that convey an impersonal relative direction in the context of the moving frame of persons aboard a ship. The need for impersonal terms is most clearly seen in a rowing shell where the majority of the crew face aft ("backwards"), hence the oars to their right are actually on the port side of the boat. Rowers eschew the terms left, right, port and starboard in favor of stroke-side and bow-side. The usage derives from the tradition of having the stroke (the rower closest to the stern of the boat) oar on the port side of the boat.
Most human cultures use relative directions for reference, but there are exceptions. Australian Aboriginal peoples like the Guugu Yimithirr, Kaiadilt and Thaayorre have no words denoting the egocentric directions in their language; instead, they exclusively refer to cardinal directions, even when describing small-scale spaces. For instance, if they wanted someone to move over on the car seat to make room, they might say "move a bit to the east". To tell someone where exactly they left something in their house, they might say, "I left it on the southern edge of the western table." Or they might warn a person to "look out for that big ant just north of your foot". Other peoples "from Polynesia to Mexico and from Namibia to Bali" similarly have predominantly "geographic languages". American Sign Language makes heavy use of geographical direction through absolute orientation.
Left-right discrimination (LRD) refers to a person's ability to differentiate between left and right. The inability to accurately differentiate between left and right is known as left-right confusion (LRC). According to research performed by John R. Clarke of Drexel University, LRC affects approximately 15% of the population. People who suffer from LRC can typically perform daily navigational tasks, such as driving according to road signs or following a map, but may have difficulty performing actions that require a precise understanding of directional commands, such as ballroom dancing.
Data regarding LRC prevalence is primarily based on behavioral studies, self-assessments, and surveys. Gormley and Brydges found that in a group of 800 adults, 17% of women and 9% of men reported difficulty differentiating between left and right. Such studies suggest that women are more prone to LRC than men, with women reporting greater difficulty in both the accuracy and the speed of their responses.
The Bergen Left-Right Discrimination (BLRD) test is designed to measure individual performance in LRD accuracy. However, this test has been criticized for incorporating tasks that require the use of additional strategies, such as mental rotation (MR). Because men have been shown to consistently outperform women in MR tasks, tests involving the use of this particular strategy may present alternative cognitive demands and lead to inaccurate assessment of LRD performance. An extended version of the BLRD test was designed to allow for differential evaluation of LRD and MR abilities, in which subtests were created with either high or low demands on mental rotation. Results from these studies did not find sex differences in LRD performance when mental rotation demands were low. Another study found that sex differences in left-right discrimination existed in terms of self-reported difficulty, but not in actual tested ability.
Alternatively, studies focused on LRD as a phenomenon distinct from MR concluded that there are sex differences present in LRD. Scientists controlled for MR demands, potential menstrual cycle effects, and other hormone fluctuations, and determined that the neurocognitive mechanisms that support LRD are different for men and women. This research revealed that inferior parietal and right angular gyrus activation were correlated with LRD performance in both men and women. Women also demonstrated increased prefrontal activation, but did not exhibit greater bilateral activation. Additionally, no correlation was found between LRD accuracy and brain activation, or between brain activation and reaction time, for either sex. These results indicate that there are sex differences in the neurocognitive mechanisms underlying LRD performance; however, findings did not suggest that women are more prone to LRC than men.
Humans are constantly making decisions about spatial relations; however, some spatial relations, such as left-right, are commonly confused, while other spatial relations, such as up-down, above-below, and front-back, are seldom, if ever, mistaken. The ability to categorize and compartmentalize space is an essential tool for navigating this 3D world; an ability shown to develop in early infancy. Infant ability to visually match above-below and left-right relations appears to diminish in early toddlerhood, as language acquisition may complicate verbal labeling. Children learn to verbally discriminate between above-below relations around the age of three, and learn left-right linguistic labels between the ages of six and seven; however, these classifications may only exist in the linguistic context. In other words, children may learn the terms for left and right without having developed a cognitive representation to allow for the accurate application of such spatial distinctions.
Research seeks to explain the neural activity associated with left-right discrimination, attempting to identify differences in the encoding, consolidation, and retrieval of left-right versus above-below relations. One study found that neural activity patterns for left-right and above-below distinctions are represented differently in the brain, leading to the theory that these spatial judgements are supported by separate cognitive mechanisms. Experiments used magnetoencephalography (MEG) to record neural activity during a computerized nonverbal task, examining left-right and above-below differences in encoding and working memory. Results showed differences in neural activity patterns in the right cerebellum, right superior temporal gyrus, and left temporoparietal junction during the encoding phase, and indicated differential neural activity in the inferior parietal, right superior temporal, and right cerebellum regions in the working memory tests.
Although some individuals may struggle with LRD more than others, discriminating between left and right in the face of distraction has been shown to impair even the most proficient individual's accuracy. This issue is of particular importance to medical students, clinicians and health care professionals, for whom distraction in the workplace and LRD inaccuracy can lead to severe consequences, including laterality errors and wrong-side surgeries. Laterality errors in aviation may lead to equally devastating results, such as a major airline crash.
Distraction has a significant impact on LRD accuracy, and the type of distraction can alter the magnitude of these effects. For example, cognitive distraction, which occurs when an individual is not directly focused on the task at hand, has a more profound effect on LRD performance than auditory distraction, such as the presence of continuous ambient noise. Additionally, in the field of health care, it has been noted that mental rotation is often involved in making left-right distinctions, such as when a medical practitioner is facing their patient and must adjust for the opposite left-right relations.
https://en.wikipedia.org/wiki?curid=18183
Lizard
Lizards are a widespread group of squamate reptiles, with over 6,000 species, ranging across all continents except Antarctica, as well as most oceanic island chains. The group is paraphyletic as it excludes the snakes and Amphisbaenia; some lizards are more closely related to these two excluded groups than they are to other lizards. Lizards range in size from chameleons and geckos a few centimeters long to the 3 meter long Komodo dragon.
Most lizards are quadrupedal, running with a strong side-to-side motion. Others are legless and have long, snake-like bodies. Some, such as the forest-dwelling "Draco" lizards, are able to glide. Lizards are often territorial, the males fighting off other males and signalling, often with bright colours, to attract mates and to intimidate rivals. Lizards are mainly carnivorous, often being sit-and-wait predators; many smaller species eat insects, while the Komodo dragon eats mammals as big as water buffalo.
Lizards make use of a variety of antipredator adaptations, including venom, camouflage, reflex bleeding, and the ability to sacrifice and regrow their tails.
The adult length of species within the suborder ranges from a few centimeters for chameleons such as "Brookesia micra" and geckos such as "Sphaerodactylus ariasae" to around three meters in the case of the largest living varanid lizard, the Komodo dragon. Most lizards are fairly small animals.
Lizards typically have rounded torsos, elevated heads on short necks, four limbs and long tails, although some are legless. Lizards and snakes share a movable quadrate bone, distinguishing them from the rhynchocephalians, which have more rigid diapsid skulls. Some lizards such as chameleons have prehensile tails, assisting them in climbing among vegetation.
As in other reptiles, the skin of lizards is covered in overlapping scales made of keratin. This provides protection from the environment and reduces water loss through evaporation, an adaptation that enables lizards to thrive in some of the driest deserts on earth. The skin is tough and leathery, and is shed (sloughed) as the animal grows. Unlike snakes, which shed the skin in a single piece, lizards slough their skin in several pieces. The scales may be modified into spines for display or protection, and some species have bony osteoderms underneath the scales.
The dentitions of lizards reflect their wide range of diets, including carnivorous, insectivorous, omnivorous, herbivorous, nectivorous, and molluscivorous. Species typically have uniform teeth suited to their diet, but several species have variable teeth, such as cutting teeth in the front of the jaws and crushing teeth in the rear. Most species are pleurodont, though agamids and chameleons are acrodont.
The tongue can be extended outside the mouth, and is often long. In the beaded lizards, whiptails and monitor lizards, the tongue is forked and used mainly or exclusively to sense the environment, continually flicking out to sample the environment, and back to transfer molecules to the vomeronasal organ responsible for chemosensation, analogous to but different from smell or taste. In geckos, the tongue is used to lick the eyes clean: they have no eyelids. Chameleons have very long sticky tongues which can be extended rapidly to catch their insect prey.
Three lineages, the geckos, anoles, and chameleons, have modified the scales under their toes to form adhesive pads, highly prominent in the first two groups. The pads are composed of millions of tiny setae (hair-like structures) which fit closely to the substrate to adhere using van der Waals forces; no liquid adhesive is needed. In addition, the toes of chameleons are divided into two opposed groups on each foot (zygodactyly), enabling them to perch on branches as birds do.
Aside from legless lizards, most lizards are quadrupedal and move using gaits with alternating movement of the right and left limbs and substantial body bending. This body bending prevents significant respiration during movement, limiting their endurance, in a mechanism called Carrier's constraint. Several species can run bipedally, and a few can prop themselves up on their hindlimbs and tail while stationary. Several small species, such as those in the genus "Draco", can glide, attaining considerable distances while gradually losing height. Some species, like geckos and chameleons, adhere to vertical surfaces including glass and ceilings. Some species, like the common basilisk, can run across water.
Lizards make use of their senses of sight, touch, olfaction and hearing like other vertebrates. The balance of these varies with the habitat of different species; for instance, skinks that live largely covered by loose soil rely heavily on olfaction and touch, while geckos depend largely on acute vision for their ability to hunt and to evaluate the distance to their prey before striking. Monitor lizards have acute vision, hearing, and olfactory senses. Some lizards make unusual use of their sense organs: chameleons can steer their eyes in different directions, sometimes providing non-overlapping fields of view, such as forwards and backwards at once. Lizards lack external ears, having instead a circular opening in which the tympanic membrane (eardrum) can be seen. Many species rely on hearing for early warning of predators, and flee at the slightest sound.
As in snakes and many mammals, all lizards have a specialised olfactory system, the vomeronasal organ, used to detect pheromones. Monitor lizards transfer scent from the tip of their tongue to the organ; the tongue is used only for this information-gathering purpose, and is not involved in manipulating food.
Some lizards, particularly iguanas, have retained a photosensory organ on the top of their heads called the parietal eye, a basal ("primitive") feature also present in the tuatara. This "eye" has only a rudimentary retina and lens and cannot form images, but is sensitive to changes in light and dark and can detect movement. This helps lizards detect predators stalking them from above.
Until 2006 it was thought that among lizards, only the Gila monster and the Mexican beaded lizard were venomous. However, several species of monitor lizards, including the Komodo dragon, produce powerful venom in their oral glands. Lace monitor venom, for instance, causes swift loss of consciousness and extensive bleeding through its pharmacological effects, both lowering blood pressure and preventing blood clotting. Nine classes of toxin known from snakes are produced by lizards. The range of actions provides the potential for new medicinal drugs based on lizard venom proteins.
Genes associated with venom toxins have been found in the salivary glands of a wide range of lizards, including species traditionally thought of as non-venomous, such as iguanas and bearded dragons. This suggests that these genes evolved in the common ancestor of lizards and snakes, some 200 million years ago (forming a single clade, the Toxicofera). However, most of these putative venom genes were "housekeeping genes" found in all cells and tissues, including skin and cloacal scent glands. The genes in question may thus be evolutionary precursors of venom genes.
Recent studies (2013 and 2014) on the lung anatomy of the savannah monitor and green iguana found them to have a unidirectional airflow system, which involves the air moving in a loop through the lungs when breathing. This was previously thought to only exist in the archosaurs (crocodilians and birds). This may be evidence that unidirectional airflow is an ancestral trait in diapsids.
As with all amniotes, lizards rely on internal fertilisation, and copulation involves the male inserting one of his hemipenes into the female's cloaca. The majority of species are oviparous (egg-laying). The female deposits the eggs in a protective structure like a nest or crevice, or simply on the ground. Depending on the species, clutch size can vary from 4–5 percent of the female's body weight to 40–50 percent, and clutches range from one or a few large eggs to dozens of small ones.
In most lizards, the eggs have leathery shells to allow for the exchange of water, although more arid-living species have calcified shells to retain water. Inside the eggs, the embryos use nutrients from the yolk. Parental care is uncommon and the female usually abandons the eggs after laying them. Brooding and protection of eggs does occur in some species. The female prairie skink uses respiratory water loss to maintain the humidity of the eggs, which facilitates embryonic development. In lace monitors, the young hatch after close to 300 days, and the female returns to help them escape the termite mound where the eggs were laid.
Around 20 percent of lizard species reproduce via viviparity (live birth). This is particularly common in anguimorphs. Viviparous species give birth to relatively developed young which look like miniature adults, with embryos nourished via a placenta-like structure. A minority of lizards exhibit parthenogenesis (reproduction from unfertilised eggs). These species consist entirely of females, which reproduce asexually with no need for males. This is known to occur in various species of whiptail lizards. Parthenogenesis has also been recorded in species that normally reproduce sexually: a captive female Komodo dragon produced a clutch of eggs despite being separated from males for over two years.
Sex determination in lizards can be temperature-dependent. The temperature of the eggs' micro-environment can determine the sex of the hatched young: low temperature incubation produces more females while higher temperatures produce more males. However, some lizards have sex chromosomes and both male heterogamety (XY and XXY) and female heterogamety (ZW) occur.
The majority of lizard species are active during the day, though some are active at night, notably geckos. As ectotherms, lizards have a limited ability to regulate their body temperature, and must seek out and bask in sunlight to gain enough heat to become fully active.
Most social interactions among lizards are between breeding individuals. Territoriality is common and is correlated with species that use sit-and-wait hunting strategies. Males establish and maintain territories that contain resources which attract females and which they defend from other males. Important resources include basking, feeding, and nesting sites as well as refuges from predators. The habitat of a species affects the structure of territories, for example, rock lizards have territories atop rocky outcrops. Some species may aggregate in groups, enhancing vigilance and lessening the risk of predation for individuals, particularly for juveniles. Agonistic behaviour typically occurs between sexually mature males over territory or mates and may involve displays, posturing, chasing, grappling and biting.
Lizards signal both to attract mates and to intimidate rivals. Visual displays include body postures and inflation, push-ups, bright colours, mouth gaping and tail wagging. Male anoles and iguanas have dewlaps, or skin flaps, which come in various sizes, colours and patterns, and the expansion of the dewlap, along with head-bobs and body movements, adds to the visual signal. Some species have deep blue dewlaps and communicate with ultraviolet signals. Blue-tongued skinks will flash their tongues as a threat display. Chameleons are known to change their complex colour patterns when communicating, particularly during agonistic encounters. They tend to show brighter colours when displaying aggression and darker colours when they submit or "give up".
Several gecko species are brightly coloured; some species tilt their bodies to display their coloration. In certain species, brightly coloured males turn dull when not in the presence of rivals or females. While it is usually males that display, in some species females also use such communication. In the bronze anole, head-bobs are a common form of communication among females, the speed and frequency varying with age and territorial status. Chemical cues or pheromones are also important in communication. Males typically direct signals at rivals, while females direct them at potential mates. Lizards may be able to recognise individuals of the same species by their scent.
Acoustic communication is less common in lizards. Hissing, a typical reptilian sound, is mostly produced by larger species as part of a threat display, accompanying gaping jaws. Some groups, particularly geckos, snake-lizards, and some iguanids, can produce more complex sounds and vocal apparatuses have independently evolved in different groups. These sounds are used for courtship, territorial defense and in distress, and include clicks, squeaks, barks and growls. The mating call of the male tokay gecko is heard as "tokay-tokay!". Tactile communication involves individuals rubbing against each other, either in courtship or in aggression. Some chameleon species communicate with one another by vibrating the substrate that they are standing on, such as a tree branch or leaf.
Lizards are found worldwide, excluding the far north, Antarctica, and some islands, and at a wide range of elevations from sea level upward. They prefer warmer, tropical climates but are adaptable and can live in all but the most extreme environments. Lizards also exploit a number of habitats; most primarily live on the ground, but others may live on rocks, in trees, underground and even in water. The marine iguana is adapted for life in the sea.
The majority of lizard species are predatory, and the most common prey items are small, terrestrial invertebrates, particularly insects. Many species are sit-and-wait predators, though others may be more active foragers. Chameleons prey on numerous insect species, such as beetles, grasshoppers and winged termites, as well as spiders. They rely on persistence and ambush to capture these prey: an individual perches on a branch and stays perfectly still, with only its eyes moving. When an insect lands, the chameleon focuses its eyes on the target and slowly moves towards it before projecting its long sticky tongue which, when hauled back, brings the attached prey with it. Geckos feed on crickets, beetles, termites and moths.
Termites are an important part of the diets of some species of Autarchoglossa since, as social insects, they can be found in large numbers in one spot. Ants may form a prominent part of the diet of some lizards, particularly among the lacertas. Horned lizards are also well known for specializing on ants. Due to their small size and indigestible chitin, ants must be consumed in large amounts, and ant-eating lizards have larger stomachs than even herbivorous ones. Species of skink and alligator lizards eat snails, and their powerful jaws and molar-like teeth are adapted for breaking the shells.
Larger species, such as monitor lizards, can feed on larger prey including fish, frogs, birds, mammals and other reptiles. Prey may be swallowed whole or torn into smaller pieces. Both bird and reptile eggs may also be consumed. Gila monsters and beaded lizards climb trees to reach both the eggs and young of birds; despite being venomous, these species rely on their strong jaws to kill prey. Mammalian prey typically consists of rodents and leporids, though the Komodo dragon can kill prey as large as water buffalo. Komodo dragons are prolific scavengers, and a single decaying carcass can attract several from a considerable distance; a dragon is capable of consuming a carcass in 17 minutes.
Around 2 percent of lizard species, including many iguanids, are herbivores. Adults of these species eat plant parts like flowers, leaves, stems and fruit, while juveniles eat more insects. Plant parts can be hard to digest, and, as they get closer to adulthood, juvenile iguanas eat faeces from adults to acquire the microflora necessary for their transition to a plant-based diet. Perhaps the most herbivorous species is the marine iguana which dives to forage for algae, kelp and other marine plants. Some non-herbivorous species supplement their insect diet with fruit, which is easily digested.
Lizards have a variety of antipredator adaptations, including running and climbing, venom, camouflage, tail autotomy, and reflex bleeding.
Lizards exploit a variety of different camouflage methods. Many lizards are disruptively patterned. In some species, such as Aegean wall lizards, individuals vary in colour, and select rocks which best match their own colour to minimise the risk of being detected by predators. The Moorish gecko is able to change colour for camouflage: when a light-coloured gecko is placed on a dark surface, it darkens within an hour to match the environment. The chameleons in general use their ability to change their coloration for signalling rather than camouflage, but some species such as Smith's dwarf chameleon do use active colour change for camouflage purposes.
The flat-tail horned lizard's body is coloured like its desert background, and is flattened and fringed with white scales to minimise its shadow.
Many lizards, including geckos and skinks, are capable of shedding their tails (autotomy). The detached tail, sometimes brilliantly coloured, continues to writhe after detaching, distracting the predator's attention from the fleeing prey. Lizards partially regenerate their tails over a period of weeks; some 326 genes are involved in regenerating lizard tails. The fish-scale gecko "Geckolepis megalepis" sheds patches of skin and scales if grabbed.
Many lizards attempt to escape from danger by running to a place of safety; for example, wall lizards can run up walls and hide in holes or cracks. Horned lizards adopt differing defences for specific predators. They may play dead to deceive a predator that has caught them; attempt to outrun the rattlesnake, which does not pursue prey; or stay still, relying on their cryptic coloration, against "Masticophis" whip snakes, which can catch even swift prey. If caught, some species such as the greater short-horned lizard puff themselves up, making their bodies hard for a narrow-mouthed predator like a whip snake to swallow. Finally, horned lizards can squirt blood at cat and dog predators from a pouch beneath their eyes, over a considerable distance; the blood tastes foul to these attackers.
The earliest known fossil remains of a lizard belong to the iguanian species "Tikiguania estesi", found in the Tiki Formation of India, which dates to the Carnian stage of the Triassic period, about 220 million years ago. However, doubt has been raised over the age of "Tikiguania" because it is almost indistinguishable from modern agamid lizards. The "Tikiguania" remains may instead be late Tertiary or Quaternary in age, having been washed into much older Triassic sediments. Lizards are most closely related to the Rhynchocephalia, which appeared in the Late Triassic, so the earliest lizards probably appeared at that time. Mitochondrial phylogenetics suggest that the first lizards evolved in the late Permian. It had been thought on the basis of morphological data that iguanid lizards diverged from other squamates very early on, but molecular evidence contradicts this.
Mosasaurs probably evolved from an extinct group of aquatic lizards known as aigialosaurs in the Early Cretaceous. Dolichosauridae is a family of Late Cretaceous aquatic varanoid lizards closely related to the mosasaurs.
The position of the lizards and other Squamata among the reptiles was studied using fossil evidence by Rainer Schoch and Hans-Dieter Sues in 2015. Lizards form about 60% of the extant non-avian reptiles.
Both the snakes and the Amphisbaenia (worm lizards) are clades deep within the Squamata (the smallest clade that contains all the lizards), so "lizard" is paraphyletic.
The cladogram is based on genomic analysis by Wiens and colleagues in 2012 and 2016. Excluded taxa are shown in upper case on the cladogram.
In the 13th century, lizards were recognized in Europe as part of a broad category of "reptiles" that consisted of a miscellany of egg-laying creatures, including "snakes, various fantastic monsters, […], assorted amphibians, and worms", as recorded by Vincent of Beauvais in his "Mirror of Nature". The seventeenth century saw changes in this loose description. The name Sauria was coined by James Macartney (1802); it was the Latinisation of the French name "Sauriens", coined by Alexandre Brongniart (1800) for an order of reptiles in the classification proposed by the author, containing lizards and crocodilians, later discovered not to be each other's closest relatives. Later authors used the term "Sauria" in a more restricted sense, i.e. as a synonym of Lacertilia, a suborder of Squamata that includes all lizards but excludes snakes. This classification is rarely used today because Sauria so defined is a paraphyletic group. It was defined as a clade by Jacques Gauthier, Arnold G. Kluge and Timothy Rowe (1988) as the group containing the most recent common ancestor of archosaurs and lepidosaurs (the groups containing crocodiles and lizards, as per Macartney's original definition) and all its descendants. A different definition was formulated by Michael deBraga and Olivier Rieppel (1997), who defined Sauria as the clade containing the most recent common ancestor of Choristodera, Archosauromorpha, Lepidosauromorpha and all their descendants. However, these uses have not gained wide acceptance among specialists.
Suborder Lacertilia (Sauria) – (lizards)
Lizards have frequently evolved convergently, with multiple groups independently developing similar morphology and ecological niches. "Anolis" ecomorphs have become a model system in evolutionary biology for studying convergence. Limbs have been lost or reduced independently over two dozen times across lizard evolution, including in the Anniellidae, Anguidae, Cordylidae, Dibamidae, Gymnophthalmidae, Pygopodidae, and Scincidae; snakes are just the most famous and species-rich group of Squamata to have followed this path.
Most lizard species are harmless to humans. Only the largest lizard species, the Komodo dragon, has been known to stalk, attack, and, on occasion, kill humans. An eight-year-old Indonesian boy died from blood loss after an attack in 2007.
Numerous species of lizard are kept as pets, including bearded dragons, iguanas, anoles, and geckos (such as the popular leopard gecko).
Lizards appear in myths and folktales around the world. In Australian Aboriginal mythology, Tarrotarro, the lizard god, split the human race into male and female, and gave people the ability to express themselves in art. A lizard king named Mo'o features in Hawaii and other cultures of Polynesia. In the Amazon, the lizard is the king of beasts, while among the Bantu of Africa, the god Unkulunkulu sent a chameleon to tell humans they would live forever, but the chameleon was held up, and another lizard brought a different message, that the time of humanity was limited. A popular legend in Maharashtra tells the tale of how a common Indian monitor, with ropes attached, was used to scale the walls of the fort in the Battle of Sinhagad. In the Bhojpuri-speaking region of India and Nepal, there is a belief among children that touching a skink's tail three (or five) times with the shortest finger brings money.
Green iguanas are eaten in Central America, where they are sometimes referred to as "chicken of the tree" after their habit of resting in trees and their supposedly chicken-like taste, while spiny-tailed lizards are eaten in Africa. In North Africa, "Uromastyx" species are considered "dhaab" or 'fish of the desert' and eaten by nomadic tribes.
Lizards such as the Gila monster produce toxins with medical applications. Gila toxin reduces plasma glucose; the substance is now synthesised for use in the anti-diabetes drug exenatide (Byetta). Another toxin from Gila monster saliva has been studied for use as an anti-Alzheimer's drug.
Lizards in many cultures share the symbolism of snakes, especially as an emblem of resurrection. This may have derived from their regular moulting. The motif of lizards on Christian candle holders probably alludes to the same symbolism.
According to Jack Tresidder, in Egypt and the Classical world they were beneficial emblems, linked with wisdom. In African, Aboriginal and Melanesian folklore they are linked to cultural heroes or ancestral figures.
https://en.wikipedia.org/wiki?curid=18184
Book of Leviticus
The Book of Leviticus is the third book of the Torah and of the Old Testament; scholars generally agree that it developed over a long period of time, reaching its present form during the Persian period, between 538 and 332 BC.
Most of its chapters (1–7, 11–27) consist of God's speeches to Moses, which God commands Moses to repeat to the Israelites. This takes place within the story of the Israelites' Exodus after they escaped Egypt and reached Mt. Sinai (Exodus 19:1). The Book of Exodus narrates how Moses led the Israelites in building the Tabernacle (Exodus 35–40) with God's instructions (Exodus 25–31). Then in Leviticus, God tells the Israelites and their priests how to make offerings in the Tabernacle and how to conduct themselves while camped around the holy tent sanctuary. Leviticus takes place during the month or month-and-a-half between the completion of the Tabernacle (Exodus 40:17) and the Israelites' departure from Sinai (Numbers 1:1, 10:11).
The instructions of Leviticus emphasize ritual, legal and moral practices rather than beliefs. Nevertheless, they reflect the world view of the creation story in Genesis 1 that God wishes to live with humans. The book teaches that faithful performance of the sanctuary rituals can make that possible, so long as the people avoid sin and impurity whenever possible. The rituals, especially the sin and guilt offerings, provide the means to gain forgiveness for sins (Leviticus 4–5) and purification from impurities (Leviticus 11–16) so that God can continue to live in the Tabernacle in the midst of the people.
The English name Leviticus comes from the Latin "Leviticus", which is in turn from the Greek Λευιτικόν, "Leuitikon", referring to the priestly tribe of the Israelites, "Levi". The Greek expression is in turn a variant of the rabbinic Hebrew "torat kohanim", "law of priests", as many of its laws relate to priests.
In Hebrew the book is called "Vayikra", from the opening of the book, "va-yikra", "And He [God] called."
The outlines from commentaries are similar, though not identical; compare those of Wenham, Hartley, Milgrom, and Watts.
I. Laws on sacrifice (1:1–7:38)
II. Institution of the priesthood (8:1–10:20)
III. Uncleanliness and its treatment (11:1–15:33)
IV. Day of Atonement: purification of the tabernacle from the effects of uncleanliness and sin (ch. 16)
V. Prescriptions for practical holiness (the Holiness Code, chs. 17–26)
VI. Redemption of votive gifts (ch. 27)
Chapters 1–5 describe the various sacrifices from the sacrificers' point of view, although the priests are essential for handling the blood. Chapters 6–7 cover much the same ground, but from the point of view of the priest who, as the one actually carrying out the sacrifice and dividing the "portions", needs to know how it is to be done. Sacrifices are divided between God, the priest, and the offerers, although in some cases the entire sacrifice is a single portion to God—i.e., burnt to ashes.
Chapters 8–10 describe how Moses consecrates Aaron and his sons as the first priests, the first sacrifices, and God's destruction of two of Aaron's sons for ritual offenses. The purpose is to underline the character of altar priesthood (i.e., those priests with power to offer sacrifices to God) as an Aaronite privilege, and the responsibilities and dangers of their position.
With sacrifice and priesthood established, chapters 11–15 instruct the lay people on purity (or cleanliness). Eating certain animals produces uncleanliness, as does giving birth; certain skin diseases (but not all) are unclean, as are certain conditions affecting walls and clothing (mildew and similar conditions); and genital discharges, including female menses and male gonorrhea, are unclean. The reasoning behind the food rules is obscure; for the rest, the guiding principle seems to be that all these conditions involve a loss of "life force", usually but not always blood.
Leviticus 16 concerns the Day of Atonement. This is the only day on which the High Priest is to enter the holiest part of the sanctuary, the holy of holies. He is to sacrifice a bull for the sins of the priests, and a goat for the sins of the laypeople. The priest is to send a second goat into the desert to "Azazel", bearing the sins of the whole people. Azazel may be a wilderness-demon, but its identity is mysterious.
Chapters 17–26 are the Holiness code. It begins with a prohibition on all slaughter of animals outside the Temple, even for food, and then prohibits a long list of sexual contacts and also child sacrifice. The "holiness" injunctions which give the code its name begin with the next section: there are penalties for the worship of Molech, consulting mediums and wizards, cursing one's parents and engaging in unlawful sex. Priests receive instruction on mourning rituals and acceptable bodily defects. The punishment for blasphemy is death, and there is the setting of rules for eating sacrifices; there is an explanation of the calendar, and there are rules for sabbatical and Jubilee years; there are rules for oil lamps and bread in the sanctuary; and there are rules for slavery. The code ends by telling the Israelites they must choose between the law and prosperity on the one hand, or, on the other, horrible punishments, the worst of which will be expulsion from the land.
Chapter 27 is a disparate and probably late addition concerning the dedication of persons and things to the Lord and how vows may be redeemed rather than fulfilled.
The majority of scholars have concluded that the Pentateuch received its final form during the Persian period (538–332 BC). Nevertheless, Leviticus had a long period of growth before reaching that form.
The entire composition of the book of Leviticus is Priestly literature. Most scholars see chapters 1–16 (the Priestly code) and chapters 17–26 (the Holiness code) as the work of two related schools, but while the Holiness material employs the same technical terms as the Priestly code, it broadens their meaning from pure ritual to the theological and moral, turning the ritual of the Priestly code into a model for the relationship of Israel to God: as the tabernacle, which is kept apart from uncleanliness, becomes holy by the presence of the Lord, so He will dwell among Israel when Israel receives purification (becomes holy) and separates from other peoples. The ritual instructions in the Priestly code apparently grew from priests giving instruction and answering questions about ritual matters; the Holiness code (or H) was once regarded as a separate document that later became part of Leviticus, but it seems better to think of the Holiness authors as editors who worked with the Priestly code and actually produced Leviticus as we now have it.
Many scholars argue that the rituals of Leviticus have a theological meaning concerning Israel's relationship with its God. Jacob Milgrom was especially influential in spreading this view. He maintained that the priestly regulations in Leviticus expressed a rational system of theological thought. The writers expected them to be put into practice in Israel's temple, so the rituals would express this theology, as well as ethical concern for the poor. Milgrom also argued that the book's purity regulations (chaps. 11–15) have a basis in ethical thinking. Many other interpreters have followed Milgrom in exploring the theological and ethical implications of Leviticus's regulations (e.g. Marx, Balentine), though some have questioned how systematic they really are. Ritual, therefore, is not taking a series of actions for their own sake, but a means of maintaining the relationship between God, the world, and humankind.
The main function of the priests is service at the altar, and only the sons of Aaron are priests in the full sense. (Ezekiel also distinguishes between altar-priests and lower Levites, but in Ezekiel the altar-priests are sons of Zadok instead of sons of Aaron; many scholars see this as a remnant of struggles between different priestly factions in First Temple times, finding resolution by the Second Temple into a hierarchy of Aaronite altar-priests and lower-level Levites, including singers, gatekeepers and the like).
In chapter 10, God kills Nadab and Abihu, the oldest sons of Aaron, for offering "strange incense". Aaron has two sons left. Commentators have read various messages in the incident: a reflection of struggles between priestly factions in the post-Exilic period (Gerstenberger); or a warning against offering incense outside the Temple, where there might be the risk of invoking strange gods (Milgrom). In any case, the sanctuary has been polluted by the bodies of the two dead priests, leading into the next theme: holiness.
Ritual purity is essential for an Israelite to be able to approach Yahweh and remain part of the community. Uncleanliness threatens holiness; chapters 11–15 review the various causes of uncleanliness and describe the rituals which will restore cleanliness; cleanliness is to be maintained through observance of the rules on sexual behaviour, family relations, land ownership, worship, sacrifice, and holy days.
Yahweh dwells with Israel in the holy of holies. All of the priestly ritual focuses on Yahweh and the construction and maintenance of a holy space, but sin generates impurity, as do everyday events such as childbirth and menstruation; impurity pollutes the holy dwelling place. Failure to ritually purify the sacred space could result in God leaving, which would be disastrous.
Through sacrifice, the priest "makes atonement" for sin and the offerer receives forgiveness (but only if God accepts the sacrifice—forgiveness comes only from God). Atonement rituals involve the pouring or sprinkling of blood as the symbol of the life of the victim: the blood has the power to wipe out or absorb the sin. The two-part division of the book structurally reflects the role of atonement: chapters 1–16 call for the establishment of the institution for atonement, and chapters 17–27 call for the life of the atoned community in holiness.
The consistent theme of chapters 17–26 is the repeated phrase, "Be holy, for I the Lord your God am holy." Holiness in ancient Israel had a different meaning than in contemporary usage: it might have been regarded as the "god-ness" of God, an invisible but physical and potentially dangerous force. Specific objects, or even days, can be holy, but they derive holiness from being connected with God—the seventh day, the tabernacle, and the priests all derive their holiness from God. As a result, Israel had to maintain its own holiness in order to live safely alongside God.
Holiness is required for the possession of the Promised Land (Canaan), where the Jews will become a holy people: "You shall not do as they do in the land of Egypt where you dwelt, and you shall not do as they do in the land of Canaan to which I am bringing you...You shall do my ordinances and keep my statutes...I am the Lord, your God" (Leviticus 18:3).
Leviticus, as part of the Torah, became the law book of Jerusalem's Second Temple as well as of the Samaritan temple. Its influence is evident among the Dead Sea Scrolls, which included fragments of seventeen manuscripts of Leviticus dating from the third to the first centuries BC. Many other Qumran scrolls cite the book, especially the Temple Scroll and 4QMMT.
Jews and Christians have not observed Leviticus's instructions for animal offerings since the first century AD. Because of the destruction of the temple in Jerusalem in 70 AD, Jewish worship has focused on prayer and the study of Torah. Nevertheless, Leviticus constitutes a major source of Jewish law and is traditionally the first book children learn in the Rabbinic system of education. There are two main Midrashim on Leviticus—the halakhic one (Sifra) and a more aggadic one (Vayikra Rabbah).
The New Testament, particularly the Epistle to the Hebrews, uses ideas and images from Leviticus to describe Christ as the high priest who offers his own blood as a sin offering. Therefore, Christians do not make animal offerings either, as Gordon Wenham summarized: "With the death of Christ the only sufficient "burnt offering" was offered once and for all, and therefore the animal sacrifices which foreshadowed Christ's sacrifice were made obsolete."
Christians generally hold that the New Covenant supersedes (i.e., replaces) the Old Testament's ritual laws, including many of the rules in Leviticus. Christians therefore have usually not observed Leviticus's rules regarding diet, purity, and agriculture. Christian teachings have differed, however, as to where to draw the line between ritual and moral regulations.
|
https://en.wikipedia.org/wiki?curid=18187
|
L. Frank Baum
L. Frank Baum (1856–1919) was an American author best known for his children's books, particularly "The Wonderful Wizard of Oz" and its sequels. His works anticipated such century-later commonplaces as television, augmented reality, laptop computers ("The Master Key"), wireless telephones ("Tik-Tok of Oz"), women in high-risk and action-heavy occupations ("Mary Louise in the Country"), and the ubiquity of advertising on clothing ("Aunt Jane's Nieces at Work").
Baum was born in Chittenango, New York, in 1856 into a devout Methodist family. He had German, Scots-Irish and English ancestry. He was the seventh of nine children of Cynthia Ann (née Stanton) and Benjamin Ward Baum, only five of whom survived into adulthood. "Lyman" was the name of his father's brother, but he always disliked it and preferred his middle name "Frank".
His father succeeded in many businesses, including barrel-making, oil drilling in Pennsylvania, and real estate. Baum grew up on his parents' expansive estate called Rose Lawn, which he fondly recalled as a sort of paradise. Rose Lawn was located in Mattydale, New York. Frank was a sickly, dreamy child, tutored at home with his siblings. From the age of 12, he spent two miserable years at Peekskill Military Academy but, after being severely disciplined for daydreaming, he had a possibly psychogenic heart attack and was allowed to return home.
Baum started writing early in life, possibly prompted by his father buying him a cheap printing press. He had always been close to his younger brother Henry (Harry) Clay Baum, who helped in the production of "The Rose Lawn Home Journal". The brothers published several issues of the journal, including advertisements from local businesses, which they gave to family and friends for free. By the age of 17, Baum had established a second amateur journal called "The Stamp Collector", printed an 11-page pamphlet called "Baum's Complete Stamp Dealers' Directory", and started a stamp dealership with friends.
At 20, Baum took up the national craze of breeding fancy poultry. He specialized in raising the Hamburg chicken. In March 1880, he established a monthly trade journal, "The Poultry Record", and in 1886, when Baum was 30 years old, his first book was published: "The Book of the Hamburgs: A Brief Treatise upon the Mating, Rearing, and Management of the Different Varieties of Hamburgs".
Baum had a flair for being the center of fun in the household, even during times of financial difficulty. His selling of fireworks made the Fourth of July memorable: his skyrockets, Roman candles, and other fireworks filled the sky, while many people around the neighborhood gathered in front of the house to watch the displays. Christmas was even more festive. Baum dressed as Santa Claus for the family, and his father would place the Christmas tree behind a curtain in the front parlor so that Baum could talk to everyone while decorating the tree unseen. He maintained this tradition all his life.
Baum embarked on his lifetime infatuation—and wavering financial success—with the theater. A local theatrical company duped him into replenishing their stock of costumes on the promise of leading roles coming his way. Disillusioned, Baum left the theater — temporarily — and went to work as a clerk in his brother-in-law's dry goods company in Syracuse. This experience may have influenced his story "The Suicide of Kiaros", first published in the literary journal "The White Elephant": one day a fellow clerk was found dead in a locked storeroom, probably a suicide.
Baum could never stay away long from the stage. He performed in plays under the stage names of Louis F. Baum and George Brooks. In 1880, his father built him a theater in Richburg, New York, and Baum set about writing plays and gathering a company to act in them. "The Maid of Arran", a melodrama with songs based on William Black's novel "A Princess of Thule", proved a modest success. Baum wrote the play and composed songs for it (making it a prototypical musical, as its songs relate to the narrative), and acted in the leading role. His aunt Katharine Gray played his character's aunt. She was the founder of Syracuse Oratory School, and Baum advertised his services in her catalog to teach theater, including stage business, play writing, directing, translating (French, German, and Italian), revision, and operettas.
On November 9, 1882, Baum married Maud Gage, a daughter of Matilda Joslyn Gage, a famous women's suffrage and feminist activist. While Baum was touring with "The Maid of Arran", the theater in Richburg caught fire during a production of Baum's ironically titled parlor drama "Matches", destroying the theater along with the only known copies of many of Baum's scripts, including "Matches", as well as the company's costumes.
In July 1888, Baum and his wife moved to Aberdeen, Dakota Territory, where he opened a store called "Baum's Bazaar". His habit of giving out wares on credit eventually bankrupted the store, so Baum turned to editing the local newspaper "The Aberdeen Saturday Pioneer", where he wrote the column "Our Landlady". Following the death of Sitting Bull at the hands of Indian agency police, Baum urged the wholesale extermination of all America's native peoples in a column that he wrote on December 20, 1890. On January 3, 1891 he returned to the subject in an editorial response to the Wounded Knee Massacre:
The Pioneer has before declared that our only safety depends upon the total extirmination of the Indians. Having wronged them for centuries, we had better, in order to protect our civilization, follow it up by one more wrong and wipe these untamed and untamable creatures from the face of the earth.
A recent analysis of these editorials has challenged their literal interpretation, suggesting that the actual intent of Baum was to generate sympathy for the Indians via obnoxious argument, ostensibly promoting the contrary position.
Baum's description of Kansas in "The Wonderful Wizard of Oz" is based on his experiences in drought-ridden South Dakota. During much of this time, Matilda Joslyn Gage was living in the Baum household. While Baum was in South Dakota, he sang in a quartet which included James Kyle, who became one of the first Populist (People's Party) Senators in the U.S.
Baum's newspaper failed in 1891, and he, Maud, and their four sons moved to the Humboldt Park section of Chicago, where Baum took a job reporting for the "Evening Post". Beginning in 1897, he founded and edited a magazine called "The Show Window", later known as the "Merchants Record and Show Window", which focused on store window displays, retail strategies and visual merchandising. The major department stores of the time created elaborate Christmastime fantasies, using clockwork mechanisms that made people and animals appear to move. The former "Show Window" magazine is still in operation, now known as "VMSD" magazine (visual merchandising + store design), based in Cincinnati. In 1900, Baum published a book about window displays in which he stressed the importance of mannequins in drawing customers. He also had to work as a traveling salesman.
In 1897, he wrote and published "Mother Goose in Prose", a collection of Mother Goose rhymes written as prose stories and illustrated by Maxfield Parrish. "Mother Goose" was a moderate success and allowed Baum to quit his sales job (which had had a negative impact on his health). In 1899, Baum partnered with illustrator W. W. Denslow to publish "Father Goose, His Book", a collection of nonsense poetry. The book was a success, becoming the best-selling children's book of the year.
In 1900, Baum and Denslow (with whom he shared the copyright) published "The Wonderful Wizard of Oz" to much critical acclaim and financial success. The book was the best-selling children's book for two years after its initial publication. Baum went on to write thirteen more novels based on the places and people of the Land of Oz.
Two years after "Wizard" publication, Baum and Denslow teamed up with composer Paul Tietjens and director Julian Mitchell to produce a musical stage version of the book under Fred R. Hamlin. Baum and Tietjens had worked on a musical of "The Wonderful Wizard of Oz" in 1901 and based closely upon the book, but it was rejected. This stage version opened in Chicago in 1902 (the first to use the shortened title "The Wizard of Oz"), then ran on Broadway for 293 stage nights from January to October 1903. It returned to Broadway in 1904, where it played from March to May and again from November to December. It successfully toured the United States with much of the same cast, as was done in those days, until 1911, and then became available for amateur use. The stage version starred Anna Laughlin as Dorothy Gale, alongside David C. Montgomery and Fred Stone as the Tin Woodman and Scarecrow respectively, which shot the pair to instant fame.
The stage version differed quite a bit from the book, and was aimed primarily at adults. Toto was replaced with Imogene the Cow, and Tryxie Tryfle (a waitress) and Pastoria (a streetcar operator) were added as fellow cyclone victims. The Wicked Witch of the West was eliminated entirely from the script, and the plot became about how the four friends were allied with the usurping Wizard and were hunted as traitors to Pastoria II, the rightful King of Oz. It is unclear how much control or influence Baum had on the script; it appears that many of the changes were written by Baum himself, against his own wishes, because of contractual requirements with Hamlin. Jokes in the script, mostly written by Glen MacDonough, called for explicit references to President Theodore Roosevelt, Senator Mark Hanna, Rev. Andrew Danquer, and oil magnate John D. Rockefeller. Although use of the script was rather free-form, the line about Hanna was ordered dropped as soon as Hamlin got word of his death in 1904.
Beginning with the success of the stage version, most subsequent versions of the story, including newer editions of the novel, have been titled "The Wizard of Oz", rather than using the full, original title. In more recent years, restoring the full title has become increasingly common, particularly to distinguish the novel from the Hollywood film.
Baum wrote a new Oz book, "The Marvelous Land of Oz", with a view to making it into a stage production titled "The Woggle-Bug", but Montgomery and Stone balked at appearing when the original was still running. The Scarecrow and Tin Woodman were therefore omitted from this adaptation, which critics saw as a self-rip-off; it proved a major flop before it could reach Broadway. He also worked for years on a musical version of "Ozma of Oz", which eventually became "The Tik-Tok Man of Oz". This did fairly well in Los Angeles, but not well enough to convince producer Oliver Morosco to mount a production in New York. He also began a stage version of "The Patchwork Girl of Oz", but this was ultimately realized as a film.
With the success of "Wizard" on page and stage, Baum and Denslow hoped for further success and published "Dot and Tot of Merryland" in 1901. The book was one of Baum's weakest, and its failure further strained his faltering relationship with Denslow. It was their last collaboration. Baum worked primarily with John R. Neill on his fantasy work beginning in 1904, but Baum met Neill few times (all before he moved to California) and often found Neill's art not humorous enough for his liking. He was particularly offended when Neill published "The Oz Toy Book: Cut-outs for the Kiddies" without authorization.
Baum reportedly designed the chandeliers in the Crown Room of the Hotel del Coronado; however, that attribution has yet to be corroborated. Several times during the development of the Oz series, Baum declared that he had written his last Oz book and devoted himself to other works of fantasy fiction based in other magical lands, including "The Life and Adventures of Santa Claus" and "Queen Zixi of Ix". However, he returned to the series each time, persuaded by popular demand, letters from children, and the failure of his new books. Even so, his other works remained very popular after his death, with "The Master Key" appearing on "St. Nicholas Magazine"'s survey of readers' favorite books well into the 1920s.
In 1905, Baum declared plans for an Oz amusement park. In an interview, he mentioned buying Pedloe Island off the coast of California to turn it into an Oz park. However, there is no evidence that he purchased such an island, and no one has ever been able to find any island whose name even resembles Pedloe in that area. Nevertheless, Baum stated to the press that he had discovered a Pedloe Island off the coast of California and that he had purchased it to be "the Marvelous Land of Oz," intending it to be "a fairy paradise for children." Eleven-year-old Dorothy Talbot of San Francisco was reported to be ascendant to the throne on March 1, 1906, when the Palace of Oz was expected to be completed. Baum planned to live on the island, with administrative duties handled by the princess and her all-child advisers. Plans included statues of the Scarecrow, the Tin Woodman, Jack Pumpkinhead, and H.M. Woggle-Bug, T.E. After the failure of "The Woggle-Bug", which played at the Garrick Theatre in 1905, Baum abandoned his Oz park project.
Because of his lifelong love of theatre, he financed elaborate musicals, often to his financial detriment. One of Baum's worst financial endeavors was "The Fairylogue and Radio-Plays" (1908), which combined a slideshow, film, and live actors with a lecture by Baum, as if he were giving a travelogue of Oz. Baum ran into trouble and could not pay his debts to the company that produced the films. He did not return to a stable financial situation until several years later, after he sold the royalty rights to many of his earlier works, including "The Wonderful Wizard of Oz". This resulted in the M.A. Donahue Company publishing cheap editions of his early works with advertising which purported that Baum's newer output was inferior to the less expensive books that they were releasing. Baum had shrewdly transferred most of his property into Maud's name, except for his clothing, his typewriter, and his library (mostly of children's books, such as the fairy tales of Andrew Lang, whose portrait he kept in his study)—all of which, he successfully argued, were essential to his occupation. Maud handled the finances anyway, and thus Baum lost much less than he could have.
Baum made use of several pseudonyms for some of his other, non-Oz books, the best known being "Edith Van Dyne", under which he wrote the "Aunt Jane's Nieces" series.
Baum also anonymously wrote "The Last Egyptian: A Romance of the Nile". He continued theatrical work with Harry Marston Haldeman's men's social group The Uplifters, for which he wrote several plays for various celebrations. He also wrote the group's parodic by-laws. The group, which also included Will Rogers, was proud to have had Baum as a member and posthumously revived many of his works despite their ephemeral intent. Many of these plays' titles are known, but only "The Uplift of Lucifer" is known to survive (it was published in a limited edition in the 1960s). Prior to that, his last produced play was "The Tik-Tok Man of Oz" (based on "Ozma of Oz" and the basis for "Tik-Tok of Oz"), a modest success in Hollywood that producer Oliver Morosco decided did not do well enough to take to Broadway. Morosco, incidentally, quickly turned to film production, as did Baum.
In 1914, Baum started his own film production company The Oz Film Manufacturing Company, which came as an outgrowth of the Uplifters. He served as its president and principal producer and screenwriter. The rest of the board consisted of Louis F. Gottschalk, Harry Marston Haldeman, and Clarence R. Rundel. The films were directed by J. Farrell MacDonald, with casts that included Violet MacMillan, Vivian Reed, Mildred Harris, Juanita Hansen, Pierre Couderc, Mai Welles, Louise Emmons, J. Charles Haydon, and early appearances by Harold Lloyd and Hal Roach. Silent film actor Richard Rosson appeared in one of the films (Rosson's younger brother Harold Rosson was the cinematographer on "The Wizard of Oz", released in 1939). After little success probing the unrealized children's film market, Baum acknowledged his authorship of "The Last Egyptian" and made a film of it (portions of which are included in "Decasia"), but the Oz name had become box office poison for the time being, and even a name change to Dramatic Feature Films and transfer of ownership to Frank Joslyn Baum did not help. Baum invested none of his own money in the venture, unlike "The Fairylogue and Radio-Plays", but the stress probably took its toll on his health.
On May 5, 1919, Baum suffered a stroke. The following day he slipped into a coma but briefly awoke to speak his last words to his wife: "Now we can cross the Shifting Sands." He died on May 6, 1919, and was buried in Glendale's Forest Lawn Memorial Park Cemetery.
His final Oz book, "Glinda of Oz", was published on July 10, 1920, a year after his death. The Oz series was continued long after his death by other authors, notably Ruth Plumly Thompson, who wrote an additional twenty-one Oz books.
Baum's avowed intention with the Oz books and his other fairy tales was to retell tales such as those found in the works of the Brothers Grimm and Hans Christian Andersen: to recast them in an American vein, update them, avoid stereotypical characters such as dwarfs or genies, and remove the association of violence and moral teachings. His first Oz books contained a fair amount of violence, but it decreased as the series progressed; in "The Emerald City of Oz", Ozma objects to the use of violence even against the Nomes who threaten Oz with invasion. His introduction is often cited as the beginning of the sanitization of children's stories, although he did not do a great deal more than eliminate harsh moral lessons.
Another traditional element that Baum intentionally omitted was the emphasis on romance. He considered romantic love to be uninteresting to young children, as well as largely incomprehensible. In "The Wonderful Wizard of Oz", the only elements of romance lay in the back story of the Tin Woodman and his love for Nimmie Amee, which explains his condition but does not otherwise affect the tale, and in that of Gayelette and the enchantment of the Winged Monkeys. The only other stories with such elements were "The Scarecrow of Oz" and "Tik-Tok of Oz", both based on dramatizations, which Baum regarded warily until his readers accepted them.
Sally Roesch Wagner of The Matilda Joslyn Gage Foundation has published a pamphlet titled "The Wonderful Mother of Oz" describing how Matilda Gage's feminist politics were sympathetically channeled by Baum into his Oz books. Much of the politics in the Republican "Aberdeen Saturday Pioneer" dealt with trying to convince the populace to vote for women's suffrage. Baum was the secretary of Aberdeen's Woman's Suffrage Club. Susan B. Anthony visited Aberdeen and stayed with the Baums. Nancy Tystad Koupal notes an apparent loss of interest in editorializing after Aberdeen failed to pass the bill for women's enfranchisement.
Some of Baum's contacts with suffragists of his day seem to have inspired much of his second Oz story "The Marvelous Land of Oz". In this story, General Jinjur leads the girls and women of Oz in a revolt, armed with knitting needles; they succeed and make the men do the household chores. Jinjur proves to be an incompetent ruler, but a female advocating gender equality is ultimately placed on the throne. His Edith Van Dyne stories depict girls and young women engaging in traditionally masculine activities, including the "Aunt Jane's Nieces", "The Flying Girl" and its sequel, and his girl sleuth Josie O'Gorman from The Bluebird Books.
During the period surrounding the 1890 Ghost Dance movement and Wounded Knee Massacre, Baum wrote two editorials about Native Americans for the "Aberdeen Saturday Pioneer" which have provoked controversy in recent times because of his assertion that the safety of white settlers depended on the wholesale genocide of American Indians. Sociologist Robert Venables has argued that Baum was not using sarcasm in the editorials.
The first piece was published on December 20, 1890, five days after the killing of the Lakota Sioux holy man, Sitting Bull. The piece opined that with Sitting Bull's death, "the nobility of the Redskin" had been extinguished, and that the safety of the frontier would not be established until there was "total annihilation" of the remaining Native Americans, who, he claimed, lived as "miserable wretches." Baum said that their extermination should not be regretted, and that their elimination would "do justice to the manly characteristics" of the early Native Americans.
Baum wrote a second editorial following the Wounded Knee Massacre of December 29, 1890; it was published on January 3, 1891. Baum alleged that the weak leadership of General Nelson A. Miles over the Native Americans had led to a "terrible loss of blood" to American soldiers, in a "battle" which had been a disgrace to the Department of War. He found that the "disaster" could easily have been prevented with proper preparations. Baum reiterated that he believed, because of the history of mistreatment of Native Americans, that extermination of the "untamed and untamable" tribes was necessary to protect American settlers. Baum ended the editorial with the following anecdote: "An eastern contemporary, with a grain of wisdom in its wit, says that 'when the whites win a fight, it is a victory, and when the Indians win it, it is a massacre.'"
These two editorials have haunted his modern legacy. In 2006, two descendants of Baum apologized to the Sioux nation for any hurt that their ancestor had caused.
The short story "The Enchanted Buffalo" claims to be a legend of a tribe of bison, and states that a key element made it into legends of Native American tribes. Baum mentions his characters' distaste for a Hopi snake dance in "Aunt Jane's Nieces and Uncle John", but also deplores the horrible situation of Indian Reservations. "Aunt Jane's Nieces on the Ranch" has a hard-working Mexican present himself as an exception to counter Anglo stereotypes of Mexican laziness. Baum's mother-in-law and Woman's Suffrage leader Matilda Joslyn Gage had great influence over Baum's views. Gage was initiated into the Wolf Clan and admitted into the Iroquois Council of Matrons for her outspoken respect and sympathy for Native American people; it would seem unlikely that Baum could have harbored animosity for them in his mature years.
Numerous political references to the "Wizard" appeared early in the 20th century. Henry Littlefield, an upstate New York high school history teacher, wrote a scholarly article in 1964 that was the first full-fledged interpretation of the novel as an extended political allegory of the politics and characters of the 1890s, paying special attention to the Populist metaphors and the debates over silver and gold. Baum was a Republican and an avid supporter of women's suffrage, and it is thought that he did not support the political ideals of either the Populist movement of 1890–1892 or the Bryanite silver crusade of 1896–1900. He published a poem in support of William McKinley.
Since 1964, many scholars, economists, and historians have expanded on Littlefield's interpretation, pointing to multiple similarities between the characters (especially as depicted in Denslow's illustrations) and stock figures from editorial cartoons of the period. Littlefield himself wrote to "The New York Times" letters-to-the-editor section, spelling out that his theory had no basis in fact, but that his original point was "not to label Baum, or to lessen any of his magic, but rather, as a history teacher at Mount Vernon High School, to invest turn-of-the-century America with the imagery and wonder I have always found in his stories."
Baum's newspaper had addressed politics in the 1890s, and Denslow was an editorial cartoonist as well as an illustrator of children's books. A series of political references is included in the 1902 stage version, such as references by name to the President, to a powerful senator, and to John D. Rockefeller for providing the oil needed by the Tin Woodman. Scholars have found few political references in Baum's Oz books after 1902.
Baum himself was asked whether his stories had hidden meanings, but he always replied that they were written to "please children".
Baum was originally a Methodist, but he joined the Episcopal Church in Aberdeen to participate in community theatricals. Later, he and his wife were encouraged by Matilda Joslyn Gage to become members of the Theosophical Society in 1892. Baum's beliefs are often reflected in his writing. The only mention of a church in his Oz books is the porcelain one which the Cowardly Lion breaks in the Dainty China Country in "The Wonderful Wizard of Oz". The Baums sent their older sons to "Ethical Culture Sunday School" in Chicago, which taught morality, not religion.
1921's "The Royal Book of Oz" was posthumous attributed to Baum but was entirely the work of Ruth Plumly Thompson.
|
https://en.wikipedia.org/wiki?curid=18188
|
Lake Ladoga
Lake Ladoga (known earlier in Finnish as "Nevajärvi") is a freshwater lake located in the Republic of Karelia and Leningrad Oblast in northwestern Russia, in the vicinity of Saint Petersburg.
It is the largest lake located entirely in Europe, the second largest lake after Baikal in Russia, and the 14th largest freshwater lake by area in the world. "Ladoga Lacus", a methane lake on Saturn's moon Titan, is named after the lake.
In one of Nestor's chronicles from the 12th century a lake called "the Great Nevo" is mentioned, a clear link to the Neva River and possibly further to Finnish "nevo" 'sea' or "neva" 'bog, quagmire'.
Ancient Norse sagas and Hanseatic treaties both mention a city on the lake, named in Old Norse "Aldeigja" or "Aldoga". Since the beginning of the 14th century this hydronym was commonly known as "Ladoga". According to T. N. Jackson, it can be taken "almost for granted that the name of Ladoga first referred to the river, then the city, and only then the lake". She therefore considers the primary hydronym Ladoga to originate in the eponymous inflow to the lower reaches of the Volkhov River, whose early Finnic name was "Alodejoki", 'river of the lowlands'.
The Germanic toponym ("Aldeigja" ~ "Aldoga") was soon borrowed by the Slavic population and transformed by means of the Old East Slavic metathesis "ald- → lad-" into "Ladoga". That the Old Norse word served as the intermediary between the Finnish and the Old East Slavic forms is fully supported by archeology, since the Scandinavians first appeared in Ladoga in the early 750s, that is, a couple of decades before the Slavs.
Other hypotheses about the origin of the name derive it from words meaning 'wave' and 'wavy', or from the Russian dialectal word алодь, meaning 'open lake, extensive water field'. Eugene Helimski, by contrast, offers an etymology rooted in Germanic. In his opinion, the primary name of the lake was 'old source', associated with the open sea, in contrast to the name of the Neva River (flowing from Lake Ladoga), which would derive from the Germanic expression for 'the new'. Through the intermediate form "*Aldaugja", Old Norse "Aldeigja" came about, referring to the city of Ladoga.
The lake has an average surface area of 17,891 km2 (excluding the islands). Its north-to-south length is 219 km and its average width is 83 km; the average depth is 51 m, although it reaches a maximum of 230 m in the north-western part. Basin area: 276,000 km2, volume: 837 km3 (earlier estimated as 908 km3). There are around 660 islands, with a total area of about 435 km2. Ladoga is, on average, 5 m above sea level. Most of the islands, including the famous Valaam archipelago, Kilpola and Konevets, are situated in the northwest of the lake.
Separated from the Baltic Sea by the Karelian Isthmus, it drains into the Gulf of Finland via the Neva River.
Lake Ladoga is navigable, being a part of the Volga-Baltic Waterway connecting the Baltic Sea with the Volga River. The Ladoga Canal bypasses the lake in the south, connecting the Neva to the Svir.
The basin of Lake Ladoga includes about 50,000 lakes and 3,500 rivers longer than 10 km. About 85% of the water inflow is due to tributaries, 13% is due to precipitation, and 2% is due to underground waters.
Geologically, the Lake Ladoga depression is a graben and syncline structure of Proterozoic age (Precambrian). This "Ladoga–Pasha structure", as it is known, hosts Jotnian sediments. During the Pleistocene glaciations the depression was partially stripped of its sedimentary rock fill by glacial overdeepening. During the Last Glacial Maximum, about 17,000 years BP, the lake likely served as a channel that concentrated ice of the Fennoscandian Ice Sheet into an ice stream that fed glacier lobes further east.
Deglaciation following the Weichselian glaciation took place in the Lake Ladoga basin between 12,500 and 11,500 radiocarbon years BP. Lake Ladoga was initially part of the Baltic Ice Lake (70–80 m above present sea level), a historical freshwater stage of the Baltic Sea. It is possible, though not certain, that Ladoga was isolated from it during the regression of the subsequent Yoldia Sea brackish stage (10,200–9,500 BP). The isolation threshold should be at Heinjoki to the east of Vyborg, where the Baltic Sea and Ladoga were connected by a strait or a river outlet at least until the formation of the River Neva, and possibly even much later, until the 12th century AD or so.
At 9,500 BP, Lake Onega, previously draining into the White Sea, started emptying into Ladoga via the River Svir. Between 9,500 and 9,100 BP, during the transgression of Ancylus Lake, the next freshwater stage of the Baltic, Ladoga certainly became part of it, even if the two had not been connected immediately before. During the subsequent regression of Ancylus Lake, around 8,800 BP, Ladoga became isolated.
Ladoga slowly transgressed in its southern part due to uplift of the Baltic Shield in the north. It has been hypothesized, but not proven, that waters of the Litorina Sea, the next brackish-water stage of the Baltic, occasionally invaded Ladoga between 7,000 and 5,000 BP. Around 5,000 BP the waters of the Saimaa Lake penetrated Salpausselkä and formed a new outlet, River Vuoksi, entering Lake Ladoga in the northwestern corner and raising its level by 1–2 m.
The River Neva originated when the Ladoga waters at last broke through the threshold at Porogi into the lower portions of Izhora River, then a tributary of the Gulf of Finland, between 4,000 and 2,000 BP. Dating of some sediments in the northwestern part of Lake Ladoga suggests it happened at 3,100 radiocarbon years BP (3,410–3,250 calendar years BP).
Lake Ladoga is rich in fish. Forty-eight forms (species and infraspecific taxa) of fish have been encountered in the lake, including roach, carp bream, zander, European perch, ruffe, an endemic variety of smelt, two varieties of "Coregonus albula" (vendace), eight varieties of "Coregonus lavaretus", a number of other Salmonidae and, albeit rarely, the endangered Atlantic sturgeon (formerly confused with the European sea sturgeon). Commercial fishing was once a major industry, but it has been hurt by overfishing. After the war, between 1945 and 1954, the total annual catch increased and reached a maximum of 4,900 tonnes. However, unbalanced fishery led to a drastic decrease in the catch in 1955–1963, at times to 1,600 tonnes per year. Trawling has been forbidden in Lake Ladoga since 1956, and some other restrictions were imposed. The situation gradually recovered, and in 1971–1990 the catch ranged between 4,900 and 6,900 tonnes per year, about the same level as the total catch in 1938. Fish farms and recreational fishing are developing.
It has its own endemic ringed seal subspecies known as the Ladoga seal.
Since the beginning of the 1960s Ladoga has become considerably eutrophic.
Nizhnesvirsky Natural Reserve is situated along the shore of Lake Ladoga immediately to the north of the mouth of the River Svir.
The Ladoga has a population of Arctic char that is genetically close to the chars of Lake Sommen and Lake Vättern in southern Sweden.
In the Middle Ages, the lake formed a vital part of the trade route from the Varangians to the Eastern Roman Empire, with the Norse emporium at Staraya Ladoga defending the mouth of the Volkhov since the 8th century. In the course of the Swedish–Novgorodian Wars, the area was disputed between the Novgorod Republic and Sweden. In the early 14th century, the fortresses of Korela (Kexholm) and Oreshek (Nöteborg) were established along the banks of the lake.
The ancient Valaam Monastery was founded on the island of Valaam, the largest in Lake Ladoga; it was abandoned between 1611 and 1715, magnificently restored in the 18th century, and evacuated to Finland during the Winter War in 1940. Monastic activities on Valaam resumed in 1989. Other historic cloisters in the vicinity are the Konevets Monastery, which sits on Konevets island, and the Alexander-Svirsky Monastery, which preserves fine samples of medieval Muscovite architecture.
During the Ingrian War, a fraction of the Ladoga coast was occupied by Sweden. In 1617, by the Treaty of Stolbovo, the northern and western coasts were ceded by Russia to Sweden. In 1721, after the Great Northern War, they were returned to Russia by the Treaty of Nystad. In the 18th century, the Ladoga Canal was built to bypass the lake, which was prone to winds and storms that destroyed hundreds of cargo ships.
Later, from around 1812 to 1940, the lake was shared between Finland and Russia. According to the conditions of the 1920 Tartu Peace Treaty, militarization of the lake was severely restricted. However, both Soviet Russia and Finland had flotillas on Ladoga (see also Finnish Ladoga Naval Detachment). After the Winter War (1939–40), according to the Moscow Peace Treaty, Ladoga, previously shared with Finland, became an internal basin of the Soviet Union.
During the Continuation War (1941–44) not only Finnish and Soviet, but also German and Italian vessels operated there (see also Naval Detachment K and Regia Marina). Under these circumstances, during much of the Siege of Leningrad (1941–44), Lake Ladoga provided the only access to the besieged city as a section of the eastern shore remained in Soviet hands. Supplies were transported into Leningrad with trucks on winter roads over the ice, the "Road of Life", and by boat in the summer. After World War II, Finland lost the Karelia region again to the USSR, and all Finnish citizens were evacuated from the ceded territory. Ladoga became an internal Soviet basin once again. The northern shore, Ladoga Karelia with the town of Sortavala, is now part of the Republic of Karelia. The western shore, Karelian Isthmus, became part of Leningrad Oblast.
|
https://en.wikipedia.org/wiki?curid=18189
|
Language family
A language family is a group of languages related through descent from a common "ancestral language" or "parental language", called the proto-language of that family. The term "family" reflects the tree model of language origination in historical linguistics, which makes use of a metaphor comparing languages to people in a biological family tree, or in a subsequent modification, to species in a phylogenetic tree of evolutionary taxonomy. Linguists therefore describe the "daughter languages" within a language family as being "genetically related".
According to "Ethnologue" there are 7,117 living human languages distributed in 142 different language families. A "living language" is simply one that is currently used as the primary form of communication of a group of people. There are also many dead languages, or languages which have no native speakers living, and extinct languages, which have no native speakers and no descendant languages. Finally, there are some languages that are insufficiently studied to be classified, and probably some which are not even known to exist outside their respective speech communities.
Membership of languages in a language family is established by research in comparative linguistics. Sister languages are said to descend "genetically" from a common ancestor. Speakers of a language family belong to a common speech community. The divergence of a proto-language into daughter languages typically occurs through geographical separation, with the original speech community gradually evolving into distinct linguistic units. Individuals belonging to other speech communities may also adopt languages from a different language family through the language shift process.
Genealogically related languages present shared retentions; that is, features of the proto-language (or reflexes of such features) that cannot be explained by chance or borrowing (convergence). Membership in a branch or group within a language family is established by shared innovations; that is, common features of those languages that are not found in the common ancestor of the entire family. For example, Germanic languages are "Germanic" in that they share vocabulary and grammatical features that are not believed to have been present in the Proto-Indo-European language. These features are believed to be innovations that took place in Proto-Germanic, a descendant of Proto-Indo-European that was the source of all Germanic languages.
Language families can be divided into smaller phylogenetic units, conventionally referred to as "branches" of the family because the history of a language family is often represented as a tree diagram. A family is a monophyletic unit; all its members derive from a common ancestor, and all attested descendants of that ancestor are included in the family. (Thus, the term "family" is analogous to the biological term "clade".)
Some taxonomists restrict the term "family" to a certain level, but there is little consensus on how to do so. Those who affix such labels also subdivide branches into "groups", and groups into "complexes". A top-level (i.e., the largest) family is often called a "phylum" or "stock". The closer the branches are to each other, the more closely the languages are related. This means that if a branch of a proto-language lies four branches down and also has a sister language, then the two sister languages are more closely related to each other than either is to the common ancestral proto-language.
The term "macrofamily" or "superfamily" is sometimes applied to proposed groupings of language families whose status as phylogenetic units is generally considered to be unsubstantiated by accepted historical linguistic methods. For example, the Celtic, Germanic, Slavic, Italic, and Indo-Iranian language families are branches of a larger Indo-European language family. There is a remarkably similar pattern shown by the linguistic tree and the genetic tree of human ancestry
that was verified statistically. Languages interpreted in terms of the putative phylogenetic tree of human languages are transmitted to a great extent vertically (by ancestry) as opposed to horizontally (by spatial diffusion).
Some close-knit language families, and many branches within larger families, take the form of dialect continua in which there are no clear-cut borders that make it possible to unequivocally identify, define, or count individual languages within the family. However, when the differences between the speech of different regions at the extremes of the continuum are so great that there is no mutual intelligibility between them, as occurs in Arabic, the continuum cannot meaningfully be seen as a single language.
A speech variety may also be considered either a language or a dialect depending on social or political considerations. Thus, different sources, especially over time, can give wildly different numbers of languages within a certain family. Classifications of the Japonic family, for example, range from one language (a language isolate with dialects) to nearly twenty—until the classification of Ryukyuan as separate languages within a Japonic language family rather than dialects of Japanese, the Japanese language itself was considered a language isolate and therefore the only language in its family.
Most of the world's languages are known to be related to others. Those that have no known relatives (or for which family relationships are only tentatively proposed) are called language isolates, essentially language families consisting of a single language. An example is Basque. In general, it is assumed that language isolates have relatives or had relatives at some point in their history but at a time depth too great for linguistic comparison to recover them.
A language isolated in its own branch within a family, such as Albanian and Armenian within Indo-European, is often also called an isolate, but the meaning of the word "isolate" in such cases is usually clarified with a modifier. For instance, Albanian and Armenian may be referred to as an "Indo-European isolate". By contrast, so far as is known, the Basque language is an absolute isolate: it has not been shown to be related to any other language despite numerous attempts. Another well-known isolate is Mapudungun, the Mapuche language from the Araucanian language family in Chile. A language may be said to be an isolate currently but not historically if related but now extinct relatives are attested. The Aquitanian language, spoken in Roman times, may have been an ancestor of Basque, but it could also have been a sister language to the ancestor of Basque. In the latter case, Basque and Aquitanian would form a small family together. (Ancestors are not considered to be distinct members of a family.)
A proto-language can be thought of as a mother language (not to be confused with a mother tongue, which is one that a specific person has been exposed to from birth), being the root from which all languages in the family stem. The common ancestor of a language family is seldom known directly since most languages have a relatively short recorded history. However, it is possible to recover many features of a proto-language by applying the comparative method, a reconstructive procedure worked out by the 19th-century linguist August Schleicher. This can demonstrate the validity of many of the proposed families in the list of language families. For example, the reconstructible common ancestor of the Indo-European language family is called "Proto-Indo-European". Proto-Indo-European is not attested by written records and so is conjectured to have been spoken before the invention of writing.
Shared innovations acquired by borrowing or other means are not considered genetic and have no bearing on the language family concept. It has been asserted, for example, that many of the more striking features shared by Italic languages (Latin, Oscan, Umbrian, etc.) might well be "areal features". However, very similar-looking alterations in the systems of long vowels in the West Germanic languages greatly postdate any possible notion of a proto-language innovation (and cannot readily be regarded as "areal", either, since English and continental West Germanic were not a linguistic area). In a similar vein, there are many similar unique innovations in Germanic, Baltic and Slavic that are far more likely to be areal features than traceable to a common proto-language. But legitimate uncertainty about whether shared innovations are areal features, coincidence, or inheritance from a common ancestor leads to disagreement over the proper subdivisions of any large language family.
A sprachbund is a geographic area having several languages that feature common linguistic structures. The similarities between those languages are caused by language contact, not by chance or common origin, and are not recognized as criteria that define a language family. An example of a sprachbund would be the Indian subcontinent.
The concept of language families is based on the historical observation that languages develop dialects, which over time may diverge into distinct languages. However, linguistic ancestry is less clear-cut than familiar biological ancestry, in which species do not crossbreed. It is more like the evolution of microbes, with extensive lateral gene transfer: Quite distantly related languages may affect each other through language contact, which in extreme cases may lead to languages with no single ancestor, whether they be creoles or mixed languages. In addition, a number of sign languages have developed in isolation and appear to have no relatives at all. Nonetheless, such cases are relatively rare and most well-attested languages can be unambiguously classified as belonging to one language family or another, even if this family's relation to other families is not known.
|
https://en.wikipedia.org/wiki?curid=18190
|
Looe Island
Looe Island (its Cornish name means "island of the monk's enclosure"), also known as St George's Island, and historically St Michael's Island, is a small island a mile from the mainland town of Looe off Cornwall, England.
According to local legend, Joseph of Arimathea landed here with the Christ Child. Some scholars, including Glyn Lewis, suggest the island could be Ictis, the location described by Diodorus Siculus as a centre for the tin trade in pre-Roman Britain.
The island is now owned and managed by the Cornwall Wildlife Trust charity; access is carefully managed for the benefit of wildlife, and landing is only possible via the Trust's authorized boatman. The waters around the island are a marine nature reserve and form part of the Looe Voluntary Marine Conservation Area (VMCA). First established in 1995, the Looe VMCA covers nearly 5 km of coastline and aims to protect the coastal and marine wildlife around Looe.
People have been living on Looe Island since the Iron Age. Evidence of early habitation includes pieces of Roman amphorae as well as stone boat anchors and Roman coins. In the Dark Ages, the island was used as a seat of early Christian settlement. The child Jesus was believed to have visited the island with his uncle, Joseph of Arimathea, who traded with the Cornish tin traders. Looe Island therefore became a place of pilgrimage for early Christians, and a small thatched-roof chapel was built there during this time.
In the later medieval period, the island came under the overall control of Glastonbury Abbey, with the Prior of Lammana being directly responsible for its governance; the island's chapel was under the care of two Benedictine monks until 1289 when the property was sold to a local landowner. The priory was replaced by a domestic chapel served by a secular priest until the Dissolution of the Monasteries in 1536 when it became property of the Crown. From the 13th to the 16th centuries it was known as St Michael's Island but after the dissolution of the monasteries, it was rededicated in 1594 as St George's Island.
Through the 17th and 18th centuries the island was used by smugglers to avoid the British Government's revenue cutters out of Plymouth and Falmouth. The Old Guildhall Museum in Looe holds information and research about the smuggling families of Looe Island, and information is also available in the more recent publications about the island.
In the 20th century, Looe island was owned (and inhabited) by two sisters, Babs and Evelyn Atkins, who wrote two books: "We Bought An Island" and its sequel "Tales From Our Cornish Island" . They chronicle the purchase of the island and what it was like to live there. Evelyn died in 1997 at the age of 87; Babs continued to live on the island until her death in 2004, at the age of 86. On her death, the island was bequeathed to the Cornwall Wildlife Trust; it will be preserved as a nature reserve in perpetuity. The adjoining islet, formerly known as Little Island, now renamed Trelawny Island and connected by a small bridge, was bequeathed by Miss Atkins back to the Trelawny family, who previously owned Looe Island from 1743 to 1921.
Situated in the English Channel, about one mile from East Looe in the direction of Polperro, it is about in area and a mile (1.6 km) in circumference. Its highest point is above sea level. Looe Island, like much of south west England, has a mild climate with frost and snow being rare.
The island is owned and managed by the Cornwall Wildlife Trust. This is a non-profit-making venture, the landing fees and other income being devoted to conserving the island's natural environment and providing facilities. The island is open during the summer to day visitors arriving by the Trust's boat. After a short welcome talk, visitors are directed to the small visitor centre, from where they can pick up a copy of the self-guided trail. Visitors have some two hours on the island, and all trips are subject to tides and weather/sea state. While the island is normally accessible only by the Cornwall Wildlife Trust's boat, at extremely low spring tides it is possible to make the journey on foot across the slippery, seaweed-covered rocky sea floor. Those who do so, however, must remain on the beach and promptly head back to the mainland.
In 2008, Channel 4's archaeology series "Time Team" visited the island to carry out an investigation into its early Christian history. They excavated the sites of Christian chapels built on both the island and on the mainland opposite. During their dig they found the remains of a Benedictine chapel that was built in c.1139 by monks from Glastonbury Abbey, a reliquary, graves and the remains of much earlier Anglo-Romano places of worship built of wood with dating evidence suggesting use by Christians before the reign of Constantine the Great.
In 1994/95 Andrew Hugill composed "Island Symphony", an electro-acoustic piece utilising sampled sounds sourced over the Internet together with natural sounds recorded on the island itself.
|
https://en.wikipedia.org/wiki?curid=18194
|
LaTeX
LaTeX, stylized within the system as LaTeX, is a document preparation system. When writing, the writer uses plain text as opposed to the formatted text found in "What You See Is What You Get" word processors like Microsoft Word, LibreOffice Writer and Apple Pages. The writer uses markup tagging conventions to define the general structure of a document (such as article, book, and letter), to stylise text throughout a document (such as bold and italics), and to add citations and cross-references. A TeX distribution such as TeX Live or MiKTeX is used to produce an output file (such as PDF or DVI) suitable for printing or digital distribution.
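For illustration, a minimal input file exercising these conventions might look like the following sketch (the title, author, section label, and citation key are placeholder names, not taken from any particular manual):
\documentclass{article}            % general structure: an article
\title{An Example Document}        % placeholder title
\author{A. Author}                 % placeholder author
\begin{document}
\maketitle
\section{Introduction}\label{sec:intro}   % sectioning markup
Some \textbf{bold} and \emph{italic} text, a citation~\cite{lamport86},
and a cross-reference to Section~\ref{sec:intro}.
\end{document}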
LaTeX is widely used in academia for the communication and publication of scientific documents in many fields, including mathematics, statistics, computer science, engineering, physics, economics, linguistics, quantitative psychology, philosophy, and political science. It also has a prominent role in the preparation and publication of books and articles that contain complex multilingual materials, such as Sanskrit and Greek. LaTeX uses the TeX typesetting program for formatting its output, and is itself written in the TeX macro language.
LaTeX can be used as a standalone document preparation system, or as an intermediate format. In the latter role, for example, it is sometimes used as part of a pipeline for translating DocBook and other XML-based formats to PDF. The typesetting system offers programmable desktop publishing features and extensive facilities for automating most aspects of typesetting and desktop publishing, including numbering and cross-referencing of tables and figures, chapter and section headings, the inclusion of graphics, page layout, indexing and bibliographies.
Like TeX, LaTeX started as a writing tool for mathematicians and computer scientists, but even from early in its development, it has also been taken up by scholars who needed to write documents that include complex math expressions or non-Latin scripts, such as Arabic, Devanagari and Chinese.
LaTeX is intended to provide a high-level, descriptive markup language that accesses the power of TeX in an easier way for writers. In essence, TeX handles the layout side, while LaTeX handles the content side for document processing. LaTeX comprises a collection of TeX macros and a program to process LaTeX documents, and because the plain TeX formatting commands are elementary, it provides authors with ready-made commands for formatting and layout requirements such as chapter headings, footnotes, cross-references and bibliographies.
LaTeX was originally written in the early 1980s by Leslie Lamport at SRI International. The current version is LaTeX2e (stylised as LaTeX2ε). LaTeX is free software and is distributed under the LaTeX Project Public License (LPPL).
LaTeX attempts to follow the design philosophy of separating presentation from content, so that authors can focus on the content of what they are writing without attending simultaneously to its visual appearance. In preparing a LaTeX document, the author specifies the logical structure using simple, familiar concepts such as "chapter", "section", "table", "figure", etc., and lets the LaTeX system handle the formatting and layout of these structures. As a result, it encourages the separation of the layout from the content — while still allowing manual typesetting adjustments whenever needed. This concept is similar to the mechanism by which many word processors allow styles to be defined globally for an entire document, or the use of Cascading Style Sheets in styling HTML documents.
The LaTeX system is a markup language that handles typesetting and rendering, and can be arbitrarily extended by using the underlying macro language to develop custom macros such as new environments and commands. Such macros are often collected into "packages", which can then be made available to address some specific typesetting need, such as the formatting of complex mathematical expressions or graphics (e.g., the use of the "align" environment provided by the "amsmath" package to produce aligned equations).
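For instance, a short sketch of such an extension defines one custom command and uses the "align" environment from "amsmath":
\documentclass{article}
\usepackage{amsmath}                     % provides the align environment
\newcommand{\abs}[1]{\lvert #1 \rvert}   % a custom one-argument macro
\begin{document}
\begin{align}
  \abs{x + y} &\le \abs{x} + \abs{y} \\
              &\le 2\max(\abs{x}, \abs{y})
\end{align}
\end{document}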
In order to create a document in LaTeX, you first write a file, say "myfile.tex", using your preferred text editor. Then you give your "myfile.tex" file as input to the TeX program (with the LaTeX macros loaded), which prompts TeX to write out a file suitable for onscreen viewing or printing. This write-format-preview cycle is one of the chief ways in which working with LaTeX differs from the What-You-See-Is-What-You-Get (WYSIWYG) style of document editing. It is similar to the code-compile-execute cycle known to computer programmers. Today, many LaTeX-aware editing programs make this cycle a simple matter of pressing a single key, while showing the output preview on the screen beside the input window. Some online LaTeX editors even automatically refresh the preview, while other online tools provide incremental editing in-place, mixed in with the preview in a streamlined single window.
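On the command line, one plausible form of this cycle (using the hypothetical file name "myfile.tex" from above) is:
pdflatex myfile.tex    # format: writes myfile.pdf for previewing
# ... view myfile.pdf, edit myfile.tex, then rerun:
pdflatex myfile.tex    # a second pass also resolves cross-references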
The example below shows the LaTeX input and corresponding output:
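A representative input of this kind is the following reconstructed sketch, which uses the relativistic energy formula as its display equation:
\documentclass{article}
\begin{document}
The energy of a body with rest mass $m$ moving at speed $v$ is
\begin{equation}
  E = \frac{mc^2}{\sqrt{1 - \frac{v^2}{c^2}}}
\end{equation}
where $c$ is the speed of light.
\end{document}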
Note how the equation for "E" (shown in the example code above) was typeset by the markup
E = \frac{mc^2}{\sqrt{1 - \frac{v^2}{c^2}}}
where the square root is denoted by "\sqrt", and the fractions by "\frac".
The characters T, E, X in the name come from the Greek capital letters tau, epsilon, and chi, as the name of TeX derives from the Greek τέχνη (skill, art, technique); for this reason, TeX's creator Donald Knuth promotes a pronunciation ending in a voiceless velar fricative as in Modern Greek, similar to the ch in loch. Lamport remarks that "TeX is usually pronounced "tech", making "lah"-teck, lah-"teck", and "lay"-teck the logical choices; but language is not always logical, so "lay-tecks" is also possible."
The name is traditionally printed in running text with a special typographical logo: LaTeX.
In media where the logo cannot be precisely reproduced in running text, the word is typically given the unique capitalization "LaTeX". Alternatively, the TeX, LaTeX and XeTeX logos can also be rendered via pure CSS and XHTML for use in graphical web browsers, by following the specifications of the internal "\LaTeX" logo macro.
LaTeX is typically distributed along with plain TeX under a free software license: the LaTeX Project Public License (LPPL). The LPPL is not compatible with the GNU General Public License, as it requires that modified files must be clearly differentiable from their originals (usually by changing the filename); this was done to ensure that files that depend on other files will produce the expected behavior and avoid dependency hell. The LPPL is DFSG compliant as of version 1.3. As free software, LaTeX is available on most operating systems, which include UNIX (Solaris, HP-UX, AIX), BSD (FreeBSD, macOS, NetBSD, OpenBSD), Linux (Red Hat, Debian, Arch, Gentoo), Windows, DOS, RISC OS, AmigaOS and Plan9.
As a macro package, LaTeX provides a set of macros for TeX to interpret. There are many other macro packages for TeX, including Plain TeX, GNU Texinfo, AMSTeX, and ConTeXt.
When TeX "compiles" a document, it follows (from the user's point of view) the following processing sequence: Macros → TeX → Driver → Output. Different implementations of each of these steps are typically available in TeX distributions. Traditional TeX will output a DVI file, which is usually converted to a PostScript file. More recently, Hàn Thế Thành and others have written a new implementation of TeX called pdfTeX, which also outputs to PDF and takes advantage of features available in that format. The XeTeX engine developed by Jonathan Kew, on the other hand, merges modern font technologies and Unicode with TeX.
The default font for LaTeX is Knuth's Computer Modern, which gives default documents created with LaTeX the same distinctive look as those created with plain TeX. XeTeX allows the use of OpenType and TrueType (that is, outlined) fonts for output files.
There are also many editors for LaTeX, some of which are offline, source-code-based while others are online, partial-WYSIWYG-based. For more, see Comparison of TeX editors.
LaTeX2e is the current version of LaTeX, having replaced LaTeX 2.09 in 1994. LaTeX3, which was started in the early 1990s, is a long-term development project. Planned features include improved syntax, hyperlink support, a new user interface, access to arbitrary fonts, and new documentation.
There are numerous commercial implementations of the entire TeX system. System vendors may add extra features like additional typefaces and telephone support. LyX is a free, WYSIWYM visual document processor that uses LaTeX as a back-end. TeXmacs is a free, WYSIWYG editor with similar functionalities to LaTeX, but with a different typesetting engine. Other WYSIWYG editors that produce LaTeX include Scientific Word on MS Windows, and BaKoMa TeX on Windows, Mac and Linux.
A number of community-supported TeX distributions are available, including TeX Live (multi-platform), teTeX (deprecated in favor of TeX Live, UNIX), fpTeX (deprecated), MiKTeX (Windows), proTeXt (Windows), MacTeX (TeX Live with the addition of Mac specific programs), gwTeX (Mac OS X) (deprecated), OzTeX (Mac OS Classic), AmigaTeX (no longer available), PasTeX (AmigaOS, available on the Aminet repository), and Auto-Latex Equations (Google Docs add-on that supports MathJax LaTeX commands).
LaTeX documents (".tex" files) can be opened with any text editor. They consist of plain text and do not contain hidden formatting codes or binary instructions. Additionally, TeX documents can be shared by rendering the LaTeX file to Rich Text Format (".rtf") or XML. This can be done using the free software programs LaTeX2RTF or TeX4ht. LaTeX can also be rendered to PDF files using the LaTeX extension pdfLaTeX. LaTeX files containing Unicode text can be processed into PDFs with the "inputenc" package, or by the TeX extensions XeLaTeX and LuaLaTeX.
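As a sketch of these conversion paths (exact option syntax varies between tool versions):
latex2rtf myfile       # LaTeX source to Rich Text Format via LaTeX2RTF
htlatex myfile.tex     # LaTeX source to HTML/XML via TeX4ht
xelatex myfile.tex     # Unicode-capable PDF via XeLaTeX
lualatex myfile.tex    # Unicode-capable PDF via LuaLaTeX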
LaTeX has become the de facto standard for typesetting mathematical expressions in scientific documents. Hence, there are several conversion tools focusing on mathematical LaTeX expressions, such as converters to MathML or to computer algebra systems.
LaTeX was created in the early 1980s by Leslie Lamport, when he was working at SRI. He needed to write TeX macros for his own use, and thought that with a little extra effort he could make a general package usable by others. Peter Gordon, an editor at Addison-Wesley, convinced him to write a LaTeX user's manual for publication (Lamport was initially skeptical that anyone would pay money for it); it came out in 1986 and sold hundreds of thousands of copies. Meanwhile, Lamport released versions of his LaTeX macros in 1984 and 1985. On 21 August 1989, at a TeX Users Group (TUG) meeting at Stanford, Lamport agreed to turn over maintenance and development of LaTeX to Frank Mittelbach. Mittelbach, along with Chris Rowley and Rainer Schöpf, formed the LaTeX3 team; in 1994, they released LaTeX 2e, the current standard version, and continue working on LaTeX3.
|
https://en.wikipedia.org/wiki?curid=18195
|
Lebesgue measure
In measure theory, a branch of mathematics, the Lebesgue measure, named after French mathematician Henri Lebesgue, is the standard way of assigning a measure to subsets of "n"-dimensional Euclidean space. For "n" = 1, 2, or 3, it coincides with the standard measure of length, area, or volume. In general, it is also called n"-dimensional volume, n"-volume, or simply volume. It is used throughout real analysis, in particular to define Lebesgue integration. Sets that can be assigned a Lebesgue measure are called Lebesgue-measurable; the measure of the Lebesgue-measurable set "A" is here denoted by "λ"("A").
Henri Lebesgue described this measure in the year 1901, followed the next year by his description of the Lebesgue integral. Both were published as part of his dissertation in 1902.
The Lebesgue measure is often denoted by "dx", but this should not be confused with the distinct notion of a volume form.
Given a subset "E" ⊆ R, with the length of an open interval "I" = ("a", "b") given by ℓ("I") = "b" − "a", the Lebesgue outer measure λ*("E") is defined as
\lambda^{*}(E) = \inf \Bigl\{ \sum_{k=1}^{\infty} \ell(I_k) \,:\, (I_k)_{k \in \mathbb{N}} \text{ is a sequence of open intervals with } E \subseteq \bigcup_{k=1}^{\infty} I_k \Bigr\}.
The Lebesgue measure is defined on the Lebesgue "σ"-algebra, which is the collection of all sets "E" which satisfy the "Carathéodory criterion", requiring that for every "A" ⊆ R,
\lambda^{*}(A) = \lambda^{*}(A \cap E) + \lambda^{*}(A \cap E^{c}).
For any set in the Lebesgue "σ"-algebra, its Lebesgue measure is given by its Lebesgue outer measure: λ("E") = λ*("E").
Sets that are not included in the Lebesgue "σ"-algebra are not Lebesgue-measurable. Such sets do exist (e.g. Vitali sets); i.e., the Lebesgue "σ"-algebra is strictly contained in the power set of R.
The first part of the definition states that the subset "E" of the real numbers is reduced to its outer measure by coverage by sets of open intervals. Each of these sets of intervals covers "E" in the sense that, when the intervals are combined together by union, they contain "E". The total length of any covering interval set can easily overestimate the measure of "E", because "E" is a subset of the union of the intervals, and so the intervals may include points which are not in "E". The Lebesgue outer measure emerges as the greatest lower bound (infimum) of the lengths from among all possible such sets. Intuitively, it is the total length of those interval sets which fit "E" most tightly and do not overlap.
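For example, a single point has outer measure zero: for any ε > 0, covering {"a"} by the single open interval ("a" − ε/2, "a" + ε/2) gives
\lambda^{*}(\{a\}) \le \ell\bigl((a - \varepsilon/2,\ a + \varepsilon/2)\bigr) = \varepsilon,
and since ε was arbitrary, λ*({"a"}) = 0.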
The preceding characterizes the Lebesgue outer measure. Whether this outer measure translates to the Lebesgue measure proper depends on an additional condition. This condition is tested by taking subsets "A" of the real numbers, using "E" as an instrument to split "A" into two partitions: the part of "A" which intersects with "E" and the remaining part of "A" which is not in "E" (the set difference of "A" and "E"). These partitions of "A" are subject to the outer measure. If, for all possible such subsets "A" of the real numbers, the partitions of "A" cut apart by "E" have outer measures whose sum is the outer measure of "A", then the outer Lebesgue measure of "E" gives its Lebesgue measure. Intuitively, this condition means that the set "E" must not have some curious properties which cause a discrepancy in the measure of another set when "E" is used as a "mask" to "clip" that set, hinting at the existence of sets for which the Lebesgue outer measure does not give the Lebesgue measure. (Such sets are, in fact, not Lebesgue-measurable.)
The Lebesgue measure on R"n" has the following properties: any Cartesian product of intervals "I"1 × ⋯ × "I""n" is Lebesgue-measurable and λ("I"1 × ⋯ × "I""n") = |"I"1| ⋯ |"I""n"|; λ is countably additive, i.e., the measure of a countable disjoint union of measurable sets is the sum of their measures; λ is translation-invariant; the Lebesgue-measurable sets form a "σ"-algebra containing all Borel sets; and λ is complete, i.e., every subset of a null set is Lebesgue-measurable (with measure zero).
All the above may be succinctly summarized as follows: the Lebesgue-measurable sets form a "σ"-algebra containing all products of intervals, and λ is the unique complete, translation-invariant measure on that "σ"-algebra with λ([0, 1] × ⋯ × [0, 1]) = 1.
The Lebesgue measure also has the property of being "σ"-finite.
A subset of R"n" is a "null set" if, for every ε > 0, it can be covered with countably many products of "n" intervals whose total volume is at most ε. All countable sets are null sets.
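To see why, enumerate a countable set "A" = {"a"1, "a"2, ...} and, for a given ε > 0, cover each point "a""k" by an interval of length ε/2"k"; the total length of the cover satisfies
\sum_{k=1}^{\infty} \frac{\varepsilon}{2^{k}} = \varepsilon,
so λ*("A") ≤ ε for every ε > 0, and hence λ("A") = 0.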
If a subset of R"n" has Hausdorff dimension less than "n" then it is a null set with respect to "n"-dimensional Lebesgue measure. Here Hausdorff dimension is relative to the Euclidean metric on R"n" (or any metric Lipschitz equivalent to it). On the other hand, a set may have topological dimension less than "n" and have positive "n"-dimensional Lebesgue measure. An example of this is the Smith–Volterra–Cantor set which has topological dimension 0 yet has positive 1-dimensional Lebesgue measure.
In order to show that a given set "A" is Lebesgue-measurable, one usually tries to find a "nicer" set "B" which differs from "A" only by a null set (in the sense that the symmetric difference ("A" − "B") ∪ ("B" − "A") is a null set) and then show that "B" can be generated using countable unions and intersections from open or closed sets.
The modern construction of the Lebesgue measure is an application of Carathéodory's extension theorem. It proceeds as follows.
Fix "n" ∈ {1, 2, 3, ...}. A box in R"n" is a set of the form
B = [a_1, b_1] \times [a_2, b_2] \times \cdots \times [a_n, b_n]
where "b""i" ≥ "a""i", and the product symbol here represents a Cartesian product. The volume of this box is defined to be
\operatorname{vol}(B) = (b_1 - a_1)(b_2 - a_2) \cdots (b_n - a_n).
For "any" subset "A" of R"n", we can define its outer measure "λ"*("A") by:
We then define the set "A" to be Lebesgue-measurable if for every subset "S" of R"n",
\lambda^{*}(S) = \lambda^{*}(S \cap A) + \lambda^{*}(S \setminus A).
These Lebesgue-measurable sets form a "σ"-algebra, and the Lebesgue measure is defined by λ("A") = λ*("A") for any Lebesgue-measurable set "A".
The existence of sets that are not Lebesgue-measurable is a consequence of a certain set-theoretical axiom, the axiom of choice, which is independent from many of the conventional systems of axioms for set theory. The Vitali theorem, which follows from the axiom, states that there exist subsets of R that are not Lebesgue-measurable. Assuming the axiom of choice, non-measurable sets with many surprising properties have been demonstrated, such as those of the Banach–Tarski paradox.
In 1970, Robert M. Solovay showed that the existence of sets that are not Lebesgue-measurable is not provable within the framework of Zermelo–Fraenkel set theory in the absence of the axiom of choice (see Solovay's model).
The Borel measure agrees with the Lebesgue measure on those sets for which it is defined; however, there are many more Lebesgue-measurable sets than there are Borel measurable sets. The Borel measure is translation-invariant, but not complete.
The Haar measure can be defined on any locally compact group and is a generalization of the Lebesgue measure (R"n" with addition is a locally compact group).
The Hausdorff measure is a generalization of the Lebesgue measure that is useful for measuring the subsets of R"n" of lower dimensions than "n", like submanifolds, for example, surfaces or curves in R3 and fractal sets. The Hausdorff measure is not to be confused with the notion of Hausdorff dimension.
It can be shown that there is no infinite-dimensional analogue of Lebesgue measure.
|
https://en.wikipedia.org/wiki?curid=18198
|
Lake Champlain
Lake Champlain (Abenaki: "Pitawbagok") is a natural freshwater lake in North America, mainly within the borders of the United States (in the states of Vermont and New York) but partially situated across the Canada–U.S. border, in the Canadian province of Quebec.
The New York portion of the Champlain Valley includes the eastern portions of Clinton County and Essex County. Most of this area is part of the Adirondack Park. There are recreational facilities in the park and along the relatively undeveloped coastline of Lake Champlain. The cities of Plattsburgh, New York and Burlington, Vermont are on the lake's western and eastern shores, respectively, and the Town of Ticonderoga, New York is in the region's southern part. The Quebec portion is in the regional county municipalities of Le Haut-Richelieu and Brome-Missisquoi. There are a number of islands in the lake; the largest include Grand Isle, Isle La Motte, and North Hero, all part of Grand Isle County, Vermont.
The Champlain Valley is the northernmost unit of a landform system known as the Great Appalachian Valley, which stretches between Quebec, Canada, to the north, and Alabama, US, to the south. The Champlain Valley is a physiographic section of the larger Saint Lawrence Valley, which in turn is part of the larger Appalachian physiographic division.
Lake Champlain is one of numerous large lakes scattered in an arc through Labrador, in Canada, the northern United States, and the Northwest Territories of Canada. It is the thirteenth largest lake by area in the US. Approximately in area, the lake is long and across at its widest point, and has a maximum depth of approximately . The lake varies seasonally from about above mean sea level.
Lake Champlain is in the Lake Champlain Valley between the Green Mountains of Vermont and the Adirondack Mountains of New York, drained northward by the Richelieu River into the St. Lawrence River at Sorel-Tracy, Quebec, northeast and downstream of Montreal, Quebec. It also receives the waters of Lake George, so its basin collects waters from the northwestern slopes of the Green Mountains and the northernmost eastern peaks of the Adirondack Mountains.
Lake Champlain drains nearly half of Vermont, and approximately 250,000 people get their drinking water from the lake.
The lake is fed in Vermont by the LaPlatte, Lamoille, Missisquoi, Poultney, and Winooski rivers, along with Lewis Creek, Little Otter Creek, and Otter Creek. In New York, it is fed by the Ausable, Boquet, Great Chazy, La Chute, Little Ausable, Little Chazy, Salmon, and Saranac rivers, along with Putnam Creek. In Quebec, it is fed by the Pike River.
It is connected to the Hudson River by the Champlain Canal.
Parts of the lake freeze each winter, and in some winters the entire lake surface freezes, referred to as "closing". In July and August, the lake temperature reaches an average of .
The Chazy Reef is an extensive Ordovician carbonate rock formation that extends from Tennessee to Quebec and Newfoundland. It occurs in prominent outcropping at Goodsell Ridge, Isle La Motte, the northernmost island in Lake Champlain.
The oldest reefs are around "The Head" of the south end of the island; slightly younger reefs are found at the Fisk Quarry, and the youngest (the famous coral reefs) are in fields to the north. Together, these three sites provide a unique narrative of events that took place over 450 million years ago in the ocean in the Southern Hemisphere, long before Lake Champlain's emergence 20,000 years ago.
The lake has long acted as a border between indigenous nations much as it is today between the states of New York and Vermont. The lake is located at the frontier between Abenaki and Mohawk (Iroquois Confederacy) traditional territories. The official toponym for the lake according to the orthography established by the Grand Council of Wanab-aki Nation is Pitawbagok (alternative orthographies include Petonbowk and Bitawbagok), meaning 'middle lake', 'lake in between' or 'double lake'.
The Mohawk name in modern orthography as standardized in 1993 is Kaniatarakwà:ronte, meaning "a bulged lake" or "lake with a bulge in it". An alternate name is Kaniá:tare tsi kahnhokà:ronte (phonetic English spelling "Caniaderi Guarunte"), meaning 'door of the country' or 'lake to the country'. The lake is an important eastern gateway to Iroquois Confederacy lands.
The lake was named after the French explorer Samuel de Champlain, who encountered it in July 1609. While the ports of Burlington, Vermont, Port Henry, New York, and Plattsburgh, New York today are primarily used by small craft, ferries, and lake cruise ships, they were of substantial commercial and military importance in the 18th and 19th centuries.
New France allocated concessions all along lake Champlain to French settlers and built forts to defend the waterways. In colonial times, Lake Champlain was used as a water (or, in winter, ice) passage between the Saint Lawrence and Hudson valleys. Travelers found it easier to journey by boats and sledges on the lake rather than go overland on unpaved and frequently mud-bound roads. The lake's northern tip at Saint-Jean-sur-Richelieu, Quebec (known as St. John in colonial times under British rule), is a short distance from Montreal, Quebec. The southern tip at Whitehall (Skenesborough in revolutionary times) is a short distance from Saratoga, Glens Falls, and Albany, New York.
Forts were built at Ticonderoga and Crown Point (Fort St. Frederic) to control passage on the lake in colonial times. Important battles were fought at Ticonderoga in 1758 and 1775. During the Revolutionary War, the British and Americans conducted a frenetic shipbuilding race through the spring and summer of 1776, at opposite ends of the lake, and fought a significant naval engagement on October 11 at the Battle of Valcour Island. While it was a tactical defeat for the Americans, and the small fleet led by Benedict Arnold was almost destroyed, the Americans gained a strategic victory; the British invasion was delayed long enough so the approach of winter prevented the fall of these forts until the following year. In this period, the Continental Army gained strength and was victorious at Saratoga.
At the start of the Revolutionary War, British forces occupied the Champlain Valley. However, it did not take long for rebel leaders to realize the importance of controlling Lake Champlain. Early in the war, the colonial militias attempted to expel the British from Boston; however, this undertaking could not be achieved without heavy artillery. The British forts at Ticonderoga and Crown Point, on Lake Champlain, were known to have ample supplies of artillery and were weakly manned by the British. Thus, the colonial militias devised a plan to take control of the two forts and bring the guns back to the fight in Boston.
The necessity of controlling the two forts at Ticonderoga and Crown Point placed Lake Champlain as a strategic arena during the Revolutionary War. By taking control of these forts, Americans not only gained heavy artillery, but control of a vast water highway, as well: Lake Champlain provided a direct invasion route to British Canada. However, had the British controlled the lake, they could have divided the colonies of New England and further depleted the Continental Army.
The Continental Army's first offensive action took place in May 1775, three weeks after the Battles of Lexington and Concord. Ethan Allen, accompanied by 200 Green Mountain Boys, was ordered to capture Fort Ticonderoga and retrieve supplies for the fight in Boston. Benedict Arnold shared the command with Allen, and in early May 1775, they captured Fort Ticonderoga, Crown Point, and the southern Loyalist settlement of Skenesborough. As a result of Allen’s offensive attack on the Champlain Valley in 1775, the American forces controlled the Lake Champlain waterway.
The Continental Army realized the strategic advantage of controlling Lake Champlain, as it leads directly to the heart of Quebec. Immediately after taking Forts Ticonderoga and Crown Point, the Americans began planning an attack on British Canada. The American siege of Quebec was a two-pronged assault and occurred throughout the winter of 1775–1776. Brigadier General Richard Montgomery led the first assault up the Champlain Valley into Canada, while Benedict Arnold led a second army to Quebec via the Maine wilderness.
Despite the strategic advantage of controlling a direct route to Quebec by way of the Champlain Valley, the American siege of British Canada during the winter of 1775 failed. The Continental Army mistakenly assumed they would receive support from the Canadians upon their arrival at Quebec. This was not the case, and the rebel army struggled to take Quebec with diminishing supplies, support, and harsh northern winter weather.
The Continental Army was forced to camp outside Quebec’s walls for the winter, with reinforcements from New York, Pennsylvania, Massachusetts, New Hampshire, and Connecticut allowing the soldiers to maintain their siege of the city. The reinforcements traveled hundreds of miles (kilometres) up the frozen Lake Champlain and St. Lawrence River, but were too late and too few to influence a successful siege of Quebec. In May 1776, with the arrival of a British convoy carrying 10,000 British and Hessian troops to Canada, the Continental forces retreated back down the Champlain Valley to reevaluate their strategy.
"I know of no better method than to secure the important posts of Ticonderoga and Crown Point, and by building a number of armed vessels to command the lakes, otherwise the forces now in Canada will be brought down upon us as quick as possible, having nothing to oppose them… They will doubtless try to construct some armed vessels and then endeavor to penetrate the country toward New York." (Brigadier General John Sullivan to George Washington, June 24, 1776).
Both British and American forces spent the summer of 1776 building their naval fleets at opposite ends of Lake Champlain. By October 1776, the Continental Army had 16 operating naval vessels on Lake Champlain, a great increase over the four small ships it had at the beginning of the summer. General Benedict Arnold commanded the American naval fleet on Lake Champlain, which was composed of volunteers and soldiers drafted from the Northern Army. In sharp contrast to the Continental navy, experienced Royal Navy officers, British seamen, and Hessian artillerymen manned the British fleet on Lake Champlain. By the end of the summer of 1776, the opposing armies were prepared to battle over the strategic advantage of controlling Lake Champlain.
On October 11, 1776, the British and American naval fleets met on the western side of Valcour Island, on Lake Champlain. American General Benedict Arnold established the location, as it provided the Continental fleet with a natural defensive position. The British and American vessels engaged in combat for much of the day, only stopping due to the impending nightfall.
After a long day of combat, the American fleet was in worse shape than the experienced British Navy. Upon ceasefire, Arnold called a council of war with his fellow officers, proposing to escape the British fleet via rowboats under the cover of night. As the British burned Arnold's flagship, the "Royal Savage," to the east, the Americans rowed past the British lines.
The following morning, the British learned of the Americans' escape and set out after the fleeing Continental vessels. On October 13, the British fleet caught up to the struggling American ships near Split Rock Mountain. With no hope of fighting off the powerful British navy, Arnold ordered his men to run their five vessels aground in Ferris Bay, Panton, Vermont. The depleted Continental army escaped on land back to Fort Ticonderoga and Mount Independence; however, they no longer controlled the Lake Champlain waterway.
The approaching winter of 1776–1777 restricted British movement along the recently controlled Lake Champlain. As the British abandoned Crown Point and returned to Canada for the winter, the Americans reduced their garrisons in the Champlain Valley from 13,000 to 2,500 soldiers.
In early 1777, British General John Burgoyne led 8,000 troops from Canada, down Lake Champlain, and into the Champlain Valley. The goal of this invasion was to divide the New England colonies, thus forcing the Continental Army into a separated fight on multiple fronts. Lake Champlain provided Burgoyne with protected passage deep into the American colonies. Burgoyne’s army reached Fort Ticonderoga and Mount Independence in late June 1777. During the night of July 5, the American forces fled Ticonderoga as the British took control of the fort. However, Burgoyne’s southern campaign did not go uncontested.
On October 7, 1777, American General Horatio Gates, who occupied Bemis Heights, met Burgoyne’s army at the Second Battle of Freeman’s Farm. At Freeman’s Farm, Burgoyne’s army suffered its final defeat and ended their invasion south into the colonies. Ten days later, on October 17, 1777, British General Burgoyne surrendered his army at Saratoga. This defeat was instrumental to the momentum of the Revolutionary War, as the defeat of the British army along the Champlain-Hudson waterway convinced France to ally with the American army.
Following the failed British campaign led by General Burgoyne, the British still maintained control over the Champlain waterway for the duration of the Revolutionary War. The British used the Champlain waterway to supply raids across the Champlain Valley from 1778 to 1780, and Lake Champlain permitted direct transportation of supplies from the British posts at the northern end of the lake.
With the end of the Revolutionary War in 1783, the British naval fleet on Lake Champlain retreated up to St. John’s. However, British troops garrisoned at Fort Dutchman's Point (North Hero, Vermont) and Fort au Fer (Champlain, New York) on Lake Champlain, did not leave until the 1796 Jay Treaty.
Eager to take back control of Lake Champlain following the end of the Revolutionary War, Americans flocked to settle the Champlain Valley. Many individuals emigrated from Massachusetts and other New England colonies, such as Salmon Dutton, a settler of Cavendish, Vermont. Dutton emigrated in 1782 and worked as a surveyor, town official, and toll road owner. His home had a dooryard garden, typical of mid-19th century New England village homes, and his experience settling in the Champlain Valley depicts the industries and lifestyles surrounding Lake Champlain following the Revolutionary War.
Similar to the experience of Salmon Dutton, former colonial militia Captain Hezekiah Barnes settled in Charlotte, Vermont, in 1787. Following the war, Barnes also worked as a road surveyor; he also established an inn and trading post in Charlotte, along the main trade route from Montreal down Lake Champlain. Barnes’s stagecoach inn was built in traditional Georgian style, with 10 fireplaces and a ballroom on the interior, and a wraparound porch on the outside. In 1800, Continental Army Captain Benjamin Harrington established a distillery business in Shelburne, Vermont, which supplied his nearby inn. Furthermore, Captain Stevens and Jeremiah Trescott built a water-powered sawmill in South Royalton, Vermont, in the late 1700s. These individual accounts shed light on the significance of Lake Champlain during the post-Revolutionary War period.
During the War of 1812, British and American forces faced each other in the Battle of Lake Champlain, also known as the Battle of Plattsburgh, fought on September 11, 1814. This ended the final British invasion of the northern states during the War of 1812. It was fought just prior to the signing of the Treaty of Ghent, and the American victory denied the British any leverage to demand exclusive control over the Great Lakes or territorial gains against the New England states.
Three US Naval ships have been named after this battle: , , and a cargo ship used during World War I.
Following the War of 1812, the U.S. Army began construction on "Fort Blunder", an unnamed fortification built at the northernmost end of Lake Champlain to protect against attacks from British Canada. Its nickname came from a surveying error: the initial phase of construction on the fort turned out to be taking place on a point north of the Canada–U.S. border. Once this error was spotted, construction was abandoned. Locals scavenged materials used in the abandoned fort for use in their homes and public buildings.
By the Webster-Ashburton Treaty of 1842, the Canada–U.S. border was adjusted northward to include the strategically important site of "Fort Blunder" on the US side. In 1844, work was begun to replace the remains of the 1812-era fort with a massive new Third System masonry fortification, known as Fort Montgomery. Portions of this fort are still standing.
In the early 19th century, the construction of the Champlain Canal connected Lake Champlain to the Hudson River system, allowing north-south commerce by water from New York City to Montreal and Atlantic Canada.
In 1909, 65,000 people celebrated the 300th anniversary of the French discovery of the lake. Attending dignitaries included President William Howard Taft, along with representatives from France, Canada, and the United Kingdom.
In 1929, then-New York Governor Franklin Roosevelt and Vermont Governor John Weeks dedicated the first bridge to span the lake, built from Crown Point to Chimney Point. This bridge lasted until December 2009. Severe deterioration was found, and the bridge was demolished and replaced with the Lake Champlain Bridge, which opened in November 2011.
On February 19, 1932, boats were able to sail on Lake Champlain; it was the first winter in which the lake was known to have remained free of ice.
Lake Champlain briefly became the nation's sixth Great Lake on March 6, 1998, when President Clinton signed Senate Bill 927. This bill, which was led by U.S. Senator Patrick Leahy of Vermont and reauthorized the National Sea Grant Program, contained a line declaring Lake Champlain to be a Great Lake. This status enabled its neighboring states to apply for additional federal research and education funds allocated to these national resources. However, following a small uproar, the Great Lake status was rescinded on March 24 (although New York and Vermont universities continue to receive funds to monitor and study the lake).
In 1609, Samuel de Champlain wrote that he saw a lake monster long, as thick as a man's thigh, with silver-gray scales a dagger could not penetrate. The alleged monster had jaws with sharp and dangerous teeth. Native Americans claimed to have seen similar monsters long. This mysterious creature is likely the original Lake Champlain monster. The monster has been memorialized in sports teams' names and mascots, e.g., the Vermont Lake Monsters, the state's minor league baseball team, whose mascot is Champ. A Vermont Historical Society publication recounts the story and offers possible explanations for accounts of the so-called monster: "floating logs, schools of large sturgeons diving in a row, or flocks of black birds flying close to the water."
A pollution prevention, control, and restoration plan for Lake Champlain was first endorsed in October 1996 by the governors of New York and Vermont, and the regional administrators of the United States Environmental Protection Agency (EPA). In April 2003, the plan was updated, and Quebec signed onto it. The plan is being implemented by the Lake Champlain Basin Program and its partners at the state, provincial, federal, and local levels. Renowned as a model for interstate and international cooperation, its primary goals are to reduce phosphorus inputs to Lake Champlain, reduce toxic contamination, minimize the risks to humans from water-related health hazards, and control the introduction, spread, and impact of non-native nuisance species to preserve the integrity of the Lake Champlain ecosystem.
Senior staff who helped organize the Environmental Protection Agency in 1970 recall that International Paper was one of the first companies to call upon the brand new agency, because it was being pushed by both New York and Vermont with regard to a discharge of pollution into Lake Champlain.
Agricultural and urban runoff from the watershed or drainage basin is the primary source of excess phosphorus, which exacerbates algae blooms in Lake Champlain. The most problematic blooms have been cyanobacteria, commonly called blue-green algae, in the northeastern part of the Lake, primarily Missisquoi Bay.
To reduce phosphorus runoff to this part of the lake, Vermont and Quebec agreed to reduce their inputs by 60% and 40%, respectively, by an agreement signed in 2002. While agricultural sources (manure and fertilizers) are the primary sources of phosphorus (about 70%) in the Missisquoi basin, runoff from developed land and suburbs is estimated to contribute about 46% of the phosphorus runoff basin-wide to Lake Champlain, and agricultural lands contributed about 38%.
In 2002, the cleanup plan noted that the lake had the capacity to absorb of phosphorus each year. In 2009, a judge noted that were still flowing in annually, more than twice what the lake could handle. Sixty municipal and industrial sewage plants discharge processed waste from the Vermont side.
In 2008, the EPA expressed concerns to the State of Vermont that the Lake's cleanup was not progressing fast enough to meet the original cleanup goal of 2016. The state, however, cites its Clean and Clear Action Plan as a model that will produce positive results for Lake Champlain.
In 2007, Vermont banned phosphates for dishwasher use starting in 2010. This will prevent an estimated from flowing into the lake. While this represents 0.6% of the phosphate pollution, it took US$1.9 million to remove the pollutant from treated wastewater, an EPA requirement.
Despite concerns about pollution, Lake Champlain is safe for swimming, fishing, and boating. It is considered a world-class fishery for salmonid species (Lake trout and Atlantic salmon) and bass. About 81 fish species live in the lake, and more than 300 bird species rely on it for habitat and as a resource during migrations.
By 2008, at least six institutions were monitoring lake water health.
In 2001, scientists estimated that farming contributed 38% of the phosphorus runoff. By 2010, environmentally conscious farming practices, though enforced by law, had yet to make a measurable positive contribution to lake cleanliness. A federally funded study was started to analyze this problem and to arrive at a solution.
Biologists have been trying to control lampreys in the lake since at least 1985. Lampreys are native to the area but have expanded in population to such an extent that in 2006 they wounded nearly all lake trout and 70–80% of salmon. The use of pesticides against the lamprey has reduced wounding rates to 35% for salmon and 31% for lake trout; the goal was 15% for salmon and 25% for lake trout.
The federal and state governments originally budgeted US$18 million for lake programs for 2010. This was later supplemented by an additional US$6.5 million from the federal government.
In 2010, the estimate of cormorant population, now classified as a nuisance species because they take so much of the lake fish, ranged from 14,000 to 16,000. A Fish and Wildlife commissioner said the ideal population would be 3,300 or about . Cormorants had disappeared from the lake (and all northern lakes) due to the use of DDT in the 1940s and 1950s, which made their eggs more fragile and reduced breeding populations.
Ring-billed gulls are also considered a nuisance, and measures have been taken to reduce their population. Authorities are trying to encourage the return of black crowned night herons, cattle egrets, and great blue herons, which disappeared during the time DDT was being widely used.
In 1989, UNESCO designated the area around Lake Champlain as the Champlain-Adirondack Biosphere Reserve.
The Alburgh Peninsula (also known as the Alburgh Tongue), extending south from the Quebec shore of the lake into Vermont, and Province Point, the southernmost tip of a small promontory approximately in size a few miles (kilometres) to the northeast of the town of East Alburgh, Vermont, are connected by land to the rest of the state only via Canada. This is a distinction shared with the state of Alaska, Point Roberts, Washington, and the Northwest Angle in Minnesota. All of these are practical exclaves of the United States contiguous with Canada. Unlike the other cases, highway bridges across the lake provide direct access to the Alburgh peninsula from within the United States (from three directions).
Two roadways cross over the lake, connecting Vermont and New York: the Lake Champlain Bridge between Crown Point, New York, and Chimney Point, Vermont, and the Rouses Point Bridge, carrying US Route 2 between Rouses Point, New York, and Alburgh, Vermont.
In 2009, the Champlain Bridge had been used by 3,400 drivers per day, and driving around the southern end of the lake added two hours to the trip. Ferry service was re-established to take some of the traffic burden. On December 28, 2009, the bridge was destroyed by a controlled demolition. A new bridge was rapidly constructed by a joint state commitment, opening on November 7, 2011.
North of Ticonderoga, New York, the lake widens appreciably; ferry service is operated by the Lake Champlain Transportation Company at three crossings: Grand Isle, Vermont – Plattsburgh, New York; Burlington, Vermont – Port Kent, New York; and Charlotte, Vermont – Essex, New York.
While the old bridge was being demolished and the new one constructed, the Lake Champlain Transportation Company operated a free, 24-hour ferry from just south of the bridge to Chimney Point in Vermont, at the expense of the states of New York and Vermont, which paid about $10 per car.
The most southerly crossing is the Fort Ticonderoga Ferry, connecting Ticonderoga, New York with Shoreham, Vermont, just north of the historic fort.
Four significant railroad crossings were built over the lake. As of 2016, only one remains.
Now called Colchester Park, the main three-mile (5 km) causeway has been adapted and preserved as a recreation area for cyclists, runners, and anglers. Two smaller marble rock-landfill causeways were also erected as part of this line: one connecting Grand Isle to North Hero, and another spanning from North Hero to Alburgh.
Lake Champlain has been connected to the Erie Canal via the Champlain Canal since the canal's official opening September 9, 1823, the same day as the opening of the Erie Canal from Rochester on Lake Ontario to Albany. It connects to the St. Lawrence River via the Richelieu River, with the Chambly Canal bypassing rapids on the river since 1843. Together with these waterways the lake is part of the Lakes to Locks Passage. The Lake Champlain Seaway, a project to use the lake to bring ocean-going ships from New York City to Montreal, was proposed in the late 19th century and considered as late as the 1960s, but rejected for various reasons. The lake is also part of the 740-mile Northern Forest Canoe Trail, which begins in Old Forge, New York, and ends in Fort Kent, Maine.
Burlington, Vermont (pop. 42,217, 2010 Census) is the largest city on the lake. The 2nd and 3rd most populated cities/towns are Plattsburgh, New York, and South Burlington, Vermont, respectively. The fourth-largest community is the town of Colchester.
Lake Champlain contains roughly 80 islands, three of which comprise four entire Vermont towns (most of Grand Isle County). The largest islands are Grand Isle, North Hero, and Isle La Motte.
All active navigational aids on the American portion of the lake are maintained by USCG Burlington station, along with those on international Lake Memphremagog to the east.
Aids to navigation on the Canadian portion of the lake are maintained by the Canadian Coast Guard.
There are a number of parks in the Lake Champlain region in both New York and Vermont.
Those on the New York side of the lake include Point Au Roche State Park, whose grounds offer hiking and cross-country skiing trails and a public beach, and Ausable Point State Park. Cumberland Bay State Park is located on Cumberland Head, with a campground, city beach, and sports fields.
There are various parks along the lake on the Vermont side, including Sand Bar State Park in Milton, featuring a natural sand beach, swimming, canoe and kayak rentals, food concession, picnic grounds and a play area. At , Grand Isle State Park contains camping facilities, a sand volleyball court, a nature walk trail, a horseshoe pit and a play area. Button Bay State Park in Ferrisburgh features campsites, picnic areas, a nature center and a swimming pool. Burlington's Waterfront Park is a revitalized industrial area.
Coast Guard Station Burlington provides "Search and Rescue, Law Enforcement and Ice Rescue services 24 hours a day, 365 days a year." Services are also provided by local, state, and federal governments bordering on the lake, including the U.S. Border Patrol, Royal Canadian Mounted Police, Vermont State Police, New York State Police Marine Detail, and Vermont Fish and Wildlife wardens.
|
https://en.wikipedia.org/wiki?curid=18201
|
Lambda calculus
Lambda calculus (also written as λ-calculus) is a formal system in mathematical logic for expressing computation based on function abstraction and application using variable binding and substitution. It is a universal model of computation that can be used to simulate any Turing machine. It was introduced by the mathematician Alonzo Church in the 1930s as part of his research into the foundations of mathematics.
Lambda calculus consists of constructing lambda terms and performing reduction operations on them. In the simplest form of lambda calculus, terms are built using only the following rules: a variable "x" is itself a valid lambda term; if "t" is a lambda term and "x" is a variable, then λ"x"."t" is a lambda term (called an abstraction); and if "t" and "s" are lambda terms, then ("t s") is a lambda term (called an application),
producing expressions such as: (λ"x".λ"y".(λ"z".(λ"x"."z x") (λ"y"."z y")) ("x y")). Parentheses can be dropped if the expression is unambiguous. For some applications, terms for logical and mathematical constants and operations may be included.
The reduction operations include: α-conversion, which changes bound variables, e.g. λ"x"."x" to λ"y"."y"; and β-reduction, which applies a function to its argument, e.g. (λ"x"."t") "s" to "t"["x" := "s"].
If De Bruijn indexing is used, then α-conversion is no longer required as there will be no name collisions. If repeated application of the reduction steps eventually terminates, then by the Church–Rosser theorem it will produce a β-normal form.
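For example (a small worked reduction), the term (λ"x".λ"y"."x") "a" "b" reaches a β-normal form in two steps:
(λ"x".λ"y"."x") "a" "b" → (λ"y"."a") "b" → "a"
By the Church–Rosser theorem, any other order of reduction would arrive at the same normal form "a".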
Variable names are not needed if using a universal lambda function, such as Iota and Jot, which can create any function behavior by calling it on itself in various combinations.
Lambda calculus is Turing complete, that is, it is a universal model of computation that can be used to simulate any Turing machine. Its namesake, the Greek letter lambda (λ), is used in lambda expressions and lambda terms to denote binding a variable in a function.
Lambda calculus may be "untyped" or "typed". In typed lambda calculus, functions can be applied only if they are capable of accepting the given input's "type" of data. Typed lambda calculi are "weaker" than the untyped lambda calculus, which is the primary subject of this article, in the sense that "typed lambda calculi can express less" than the untyped calculus can, but on the other hand typed lambda calculi allow more things to be proved; in the simply typed lambda calculus it is, for example, a theorem that every evaluation strategy terminates for every simply typed lambda-term, whereas evaluation of untyped lambda-terms need not terminate. One reason there are many different typed lambda calculi has been the desire to do more (of what the untyped calculus can do) without giving up on being able to prove strong theorems about the calculus.
Lambda calculus has applications in many different areas in mathematics, philosophy, linguistics, and computer science. Lambda calculus has played an important role in the development of the theory of programming languages. Functional programming languages implement the lambda calculus. Lambda calculus is also a current research topic in category theory.
The lambda calculus was introduced by mathematician Alonzo Church in the 1930s as part of an investigation into the foundations of mathematics. The original system was shown to be logically inconsistent in 1935 when Stephen Kleene and J. B. Rosser developed the Kleene–Rosser paradox.
Subsequently, in 1936 Church isolated and published just the portion relevant to computation, what is now called the untyped lambda calculus. In 1940, he also introduced a computationally weaker, but logically consistent system, known as the simply typed lambda calculus.
Until the 1960s, when its relation to programming languages was clarified, the lambda calculus was only a formalism. Thanks to Richard Montague and other linguists' applications in the semantics of natural language, the lambda calculus has come to enjoy a respectable place in both linguistics and computer science.
There is a bit of controversy over the reason for Church's use of the Greek letter lambda (λ) as the notation for function-abstraction in the lambda calculus, perhaps in part due to conflicting explanations by Church himself. According to Cardone and Hindley (2006):
By the way, why did Church choose the notation “λ”? In [an unpublished 1964 letter to Harald Dickson] he stated clearly that it came from the notation “x̂” used for class-abstraction by Whitehead and Russell, by first modifying “x̂” to “∧x” to distinguish function-abstraction from class-abstraction, and then changing “∧” to “λ” for ease of printing.
This origin was also reported in [Rosser, 1984, p.338]. On the other hand, in his later years Church told two enquirers that the choice was more accidental: a symbol was needed and λ just happened to be chosen.
Dana Scott has also addressed this controversy in various public lectures.
Scott recounts that he once posed a question about the origin of the lambda symbol to Church's son-in-law John Addison, who then wrote his father-in-law a postcard:
Dear Professor Church,
Russell had the iota operator, Hilbert had the epsilon operator. Why did you choose lambda for your operator?
According to Scott, Church's entire response consisted of returning the postcard with the following annotation: "eeny, meeny, miny, moe".
Computable functions are a fundamental concept within computer science and mathematics. The lambda calculus provides a simple semantics for computation, enabling properties of computation to be studied formally. The lambda calculus incorporates two simplifications that make this semantics simple.
The first simplification is that the lambda calculus treats functions "anonymously", without giving them explicit names. For example, the function
square_sum("x", "y") = "x"² + "y"²
can be rewritten in "anonymous form" as
("x", "y") ↦ "x"² + "y"²
(read as "a tuple of "x" and "y" is mapped to "x"² + "y"²"). Similarly,
id("x") = "x"
can be rewritten in anonymous form as
"x" ↦ "x"
where the input is simply mapped to itself.
The second simplification is that the lambda calculus only uses functions of a single input. An ordinary function that requires two inputs, for instance the square_sum function, can be reworked into an equivalent function that accepts a single input, and as output returns "another" function, that in turn accepts a single input. For example,
("x", "y") ↦ "x"² + "y"²
can be reworked into
"x" ↦ ("y" ↦ "x"² + "y"²)
This method, known as currying, transforms a function that takes multiple arguments into a chain of functions each with a single argument.
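In lambda notation this currying can be sketched directly (assuming a primitive + on numbers, which the pure lambda calculus does not itself provide): writing add = λ"x".λ"y".("x" + "y"), one has
add 5 2 → (λ"y".(5 + "y")) 2 → 5 + 2 = 7.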
Function application of the square_sum function to the arguments (5, 2) yields at once
(("x", "y") ↦ "x"² + "y"²)(5, 2) = 5² + 2² = 29,
whereas evaluation of the curried version requires one more step
(("x" ↦ ("y" ↦ "x"² + "y"²))(5))(2) = ("y" ↦ 5² + "y"²)(2) = 5² + 2² = 29
to arrive at the same result.
The lambda calculus consists of a language of lambda terms, which is defined by a certain formal syntax, and a set of transformation rules, which allow manipulation of the lambda terms. These transformation rules can be viewed as an equational theory or as an operational definition.
As described above, all functions in the lambda calculus are anonymous functions, having no names. They only accept one input variable, with currying used to implement functions with several variables.
The syntax of the lambda calculus defines some expressions as valid lambda calculus expressions and some as invalid, just as some strings of characters are valid C programs and some are not. A valid lambda calculus expression is called a "lambda term".
The following three rules give an inductive definition that can be applied to build all syntactically valid lambda terms: a variable "x" is itself a valid lambda term; if "M" is a lambda term and "x" is a variable, then (λ"x"."M") is a lambda term (an "abstraction"); and if "M" and "N" are lambda terms, then ("M N") is a lambda term (an "application").
Nothing else is a lambda term. Thus a lambda term is valid if and only if it can be obtained by repeated application of these three rules. However, some parentheses can be omitted according to certain rules. For example, the outermost parentheses are usually not written. See "Notation", below.
An abstraction λ"x"."M" is a definition of an anonymous function that is capable of taking a single input "x" and substituting it into the expression "M".
It thus defines an anonymous function that takes "x" and returns "M". For example, λ"x".("x"² + 2) is an abstraction for the function "f"("x") = "x"² + 2 using the term "x"² + 2 for "M". The definition of a function with an abstraction merely "sets up" the function but does not invoke it. The abstraction binds the variable "x" in the term "M".
An application ("M N") represents the application of a function "M" to an input "N", that is, it represents the act of calling function "M" on input "N" to produce "M"("N").
There is no concept in lambda calculus of variable declaration. In a definition such as λ"x".("x" + "y") (i.e. "f"("x") = "x" + "y"), the lambda calculus treats "y" as a variable that is not yet defined. The abstraction λ"x".("x" + "y") is syntactically valid, and represents a function that adds its input to the yet-unknown "y".
Bracketing may be used and may be needed to disambiguate terms. For example, λ"x".((λ"x"."x") "x") and (λ"x".(λ"x"."x")) "x" denote different terms (although they coincidentally reduce to the same value). Here, the first example defines a function whose body applies the inner identity function to the bound variable "x", while the second example is the application of the outermost function to the input "x", which returns the inner function. Therefore, both examples evaluate to the identity function λ"x"."x".
In lambda calculus, functions are taken to be 'first class values', so functions may be used as the inputs, or be returned as outputs from other functions.
For example, λ"x"."x" represents the identity function, "x" ↦ "x", and (λ"x"."x") "y" represents the identity function applied to "y". Further, λ"x"."y" represents the constant function "x" ↦ "y", the function that always returns "y", no matter the input. In lambda calculus, function application is regarded as left-associative, so that "x y z" means ("x y") "z".
There are several notions of "equivalence" and "reduction" that allow lambda terms to be "reduced" to "equivalent" lambda terms.
A basic form of equivalence, definable on lambda terms, is alpha equivalence. It captures the intuition that the particular choice of a bound variable, in an abstraction, does not (usually) matter.
For instance, λx.x and λy.y are alpha-equivalent lambda terms, and they both represent the same function (the identity function).
The terms x and y are not alpha-equivalent, because they are not bound in an abstraction.
In many presentations, it is usual to identify alpha-equivalent lambda terms.
The following definitions are necessary in order to be able to define β-reduction:
The free variables of a term are those variables not bound by an abstraction. The set of free variables of an expression is defined inductively:
1. the free variables of a variable x are just {x};
2. the set of free variables of λx.M is the set of free variables of M, but with x removed;
3. the set of free variables of (M N) is the union of the set of free variables of M and the set of free variables of N.
For example, the lambda term representing the identity, λx.x, has no free variables, but the function λx.(y x) has a single free variable, y.
Suppose M, N and r are lambda terms, and x and y are variables.
The notation M[x := r] indicates substitution of r for x in M in a "capture-avoiding" manner. This is defined so that:
1. x[x := r] = r;
2. y[x := r] = y, if x ≠ y;
3. (M N)[x := r] = (M[x := r]) (N[x := r]);
4. (λx.M)[x := r] = λx.M (the binder shadows x, so the body is untouched);
5. (λy.M)[x := r] = λy.(M[x := r]), if x ≠ y and y is not among the free variables of r (the "freshness condition").
For example, (λx.x)[y := N] = λx.(x[y := N]) = λx.x, and ((λx.y) x)[x := N] = ((λx.y)[x := N]) (x[x := N]) = (λx.y) N.
The freshness condition (requiring that y is not among the free variables of r) is crucial in order to ensure that substitution does not change the meaning of functions.
For example, consider a substitution that ignores the freshness condition: (λx.y)[y := x] = λx.(y[y := x]) = λx.x. This substitution turns the constant function λx.y into the identity λx.x.
In general, failure to meet the freshness condition can be remedied by alpha-renaming with a suitable fresh variable.
For example, switching back to our correct notion of substitution, in (λx.y)[y := x] the abstraction can be renamed with a fresh variable z, to obtain (λz.y)[y := x] = λz.x, and the meaning of the function is preserved by substitution.
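Continuing the Python term representation sketched earlier, capture-avoiding substitution with alpha-renaming can be written as follows (free_vars and fresh are helper names invented for this example):

    import itertools

    def free_vars(t):
        if isinstance(t, Var):
            return {t.name}
        if isinstance(t, Abs):
            return free_vars(t.body) - {t.param}
        return free_vars(t.func) | free_vars(t.arg)

    def fresh(avoid):
        # the first of x0, x1, x2, ... that is not in `avoid`
        return next(f"x{i}" for i in itertools.count() if f"x{i}" not in avoid)

    def subst(t, x, r):
        # capture-avoiding substitution t[x := r]
        if isinstance(t, Var):
            return r if t.name == x else t
        if isinstance(t, App):
            return App(subst(t.func, x, r), subst(t.arg, x, r))
        if t.param == x:                 # the binder shadows x: nothing to substitute
            return t
        if t.param in free_vars(r):      # freshness condition violated:
            z = fresh(free_vars(r) | free_vars(t.body) | {x})
            t = Abs(z, subst(t.body, t.param, Var(z)))   # alpha-rename first
            return subst(t, x, r)
        return Abs(t.param, subst(t.body, x, r))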
The β-reduction rule states that an application of the form (λx.M) N reduces to the term M[x := N]. The notation (λx.M) N → M[x := N] is used to indicate that (λx.M) N β-reduces to M[x := N].
For example, for every N, (λx.x) N → x[x := N] = N. This demonstrates that λx.x really is the identity.
Similarly, (λx.y) N → y[x := N] = y, which demonstrates that λx.y is a constant function.
The lambda calculus may be seen as an idealised version of a functional programming language, like Haskell or Standard ML.
Under this view, β-reduction corresponds to a computational step. This step can be repeated by additional β-reductions until there are no more applications left to reduce. In the untyped lambda calculus, as presented here, this reduction process may not terminate.
For instance, consider the term Ω = (λx.x x) (λx.x x).
Here (λx.x x) (λx.x x) → (x x)[x := λx.x x] = (λx.x x) (λx.x x).
That is, the term reduces to itself in a single β-reduction, and therefore the reduction process will never terminate.
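The same non-terminating term can be written down in Python, where evaluating it exhausts the call stack rather than looping forever (a sketch for illustration only):

    omega = lambda x: x(x)      # ω = λx.x x
    # Omega = omega(omega) would keep reducing to itself; in Python the
    # corresponding call never returns and raises RecursionError instead.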
Another aspect of the untyped lambda calculus is that it does not distinguish between different kinds of data.
For instance, it may be desirable to write a function that only operates on numbers. However, in the untyped lambda calculus, there is no way to prevent a function from being applied to truth values, strings, or other non-number objects.
Lambda expressions are composed of:
- variables v₁, v₂, ...;
- the abstraction symbols λ (lambda) and . (dot);
- parentheses ().
The set of lambda expressions, Λ, can be defined inductively:
1. If x is a variable, then x ∈ Λ.
2. If x is a variable and M ∈ Λ, then (λx.M) ∈ Λ.
3. If M, N ∈ Λ, then (M N) ∈ Λ.
Instances of rule 2 are known as "abstractions" and instances of rule 3 are known as "applications".
To keep the notation of lambda expressions uncluttered, the following conventions are usually applied:
- Outermost parentheses are dropped: M N instead of (M N).
- Applications are assumed to be left-associative: M N P may be written instead of ((M N) P).
- The body of an abstraction extends as far right as possible: λx.M N means λx.(M N) and not (λx.M) N.
- A sequence of abstractions is contracted: λx.λy.λz.N is abbreviated as λxyz.N.
The abstraction operator, λ, is said to bind its variable wherever it occurs in the body of the abstraction. Variables that fall within the scope of an abstraction are said to be "bound". In an expression λ"x"."M", the part λ"x" is often called "binder", as a hint that the variable "x" is getting bound by appending λ"x" to "M". All other variables are called "free". For example, in the expression λ"y"."x x y", "y" is a bound variable and "x" is a free variable. Also a variable is bound by its nearest abstraction. In the following example the single occurrence of "x" in the expression is bound by the second lambda: λ"x"."y" (λ"x"."z x").
The set of "free variables" of a lambda expression, "M", is denoted as FV("M") and is defined by recursion on the structure of the terms, as follows:
An expression that contains no free variables is said to be "closed". Closed lambda expressions are also known as "combinators" and are equivalent to terms in combinatory logic.
The meaning of lambda expressions is defined by how expressions can be reduced.
There are three kinds of reduction:
- α-conversion: changing bound variables;
- β-reduction: applying functions to their arguments;
- η-reduction: capturing a notion of extensionality.
We also speak of the resulting equivalences: two expressions are "α-equivalent", if they can be α-converted into the same expression. β-equivalence and η-equivalence are defined similarly.
The term "redex", short for "reducible expression", refers to subterms that can be reduced by one of the reduction rules. For example, (λ"x"."M") "N" is a β-redex in expressing the substitution of "N" for "x" in "M". The expression to which a redex reduces is called its "reduct"; the reduct of (λ"x"."M") "N" is "M"["x" := "N"].
If "x" is not free in "M", λ"x"."M x" is also an η-redex, with a reduct of "M".
α-conversion, sometimes known as α-renaming, allows bound variable names to be changed. For example, α-conversion of λ"x"."x" might yield λ"y"."y". Terms that differ only by α-conversion are called "α-equivalent". Frequently, in uses of lambda calculus, α-equivalent terms are considered to be equivalent.
The precise rules for α-conversion are not completely trivial. First, when α-converting an abstraction, the only variable occurrences that are renamed are those that are bound to the same abstraction. For example, an α-conversion of λ"x".λ"x"."x" could result in λ"y".λ"x"."x", but it could "not" result in λ"y".λ"x"."y". The latter has a different meaning from the original. This is analogous to the programming notion of variable shadowing.
Second, α-conversion is not possible if it would result in a variable getting captured by a different abstraction. For example, if we replace "x" with "y" in λ"x".λ"y"."x", we get λ"y".λ"y"."y", which is not at all the same.
In programming languages with static scope, α-conversion can be used to make name resolution simpler by ensuring that no variable name masks a name in a containing scope (see α-renaming to make name resolution trivial).
In the De Bruijn index notation, any two α-equivalent terms are syntactically identical.
Substitution, written "M"["V" := "N"], is the process of replacing all "free" occurrences of the variable "V" in the expression "M" with expression "N". Substitution on terms of the lambda calculus is defined by recursion on the structure of terms, as follows (note: x and y are only variables while M and N are any lambda expression):
1. x[x := N] = N;
2. y[x := N] = y, if x ≠ y;
3. (M₁ M₂)[x := N] = (M₁[x := N]) (M₂[x := N]);
4. (λx.M)[x := N] = λx.M;
5. (λy.M)[x := N] = λy.(M[x := N]), if x ≠ y, provided y ∉ FV(N).
To substitute into an abstraction, it is sometimes necessary to α-convert the expression. For example, it is not correct for (λ"x"."y")["y" := "x"] to result in λ"x"."x", because the substituted "x" was supposed to be free but ended up being bound. The correct substitution in this case is λ"z"."x", up to α-equivalence. Substitution is defined uniquely up to α-equivalence.
β-reduction captures the idea of function application. β-reduction is defined in terms of substitution: the β-reduction of (λ"V"."M") "N" is "M"["V" := "N"].
For example, assuming some encoding of 2, 7, ×, we have the following β-reduction: (λ"n"."n" × 2) 7 → 7 × 2.
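Using the term representation and subst function sketched earlier, a single β-reduction step on an outermost redex could look like this (beta_step is a name invented for this example):

    def beta_step(t):
        # reduce (λx.M) N to M[x := N] if t is a β-redex; otherwise return t
        if isinstance(t, App) and isinstance(t.func, Abs):
            return subst(t.func.body, t.func.param, t.arg)
        return t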
β-reduction can be seen to be the same as the concept of "local reducibility" in natural deduction, via the Curry–Howard isomorphism.
η-reduction expresses the idea of extensionality, which in this context is that two functions are the same if and only if they give the same result for all arguments. η-reduction converts between λ"x"."f" "x" and "f" whenever "x" does not appear free in "f".
η-reduction can be seen to be the same as the concept of "local completeness" in natural deduction, via the Curry–Howard isomorphism.
For the untyped lambda calculus, β-reduction as a rewriting rule is neither strongly normalising nor weakly normalising.
However, it can be shown that β-reduction is confluent when working up to α-conversion (i.e. we consider two normal forms to be equal if it is possible to α-convert one into the other).
Therefore, both strongly normalising terms and weakly normalising terms have a unique normal form. For strongly normalising terms, any reduction strategy is guaranteed to yield the normal form, whereas for weakly normalising terms, some reduction strategies may fail to find it.
The basic lambda calculus may be used to model booleans, arithmetic, data structures and recursion, as illustrated in the following sub-sections.
There are several possible ways to define the natural numbers in lambda calculus, but by far the most common are the Church numerals, which can be defined as follows:
0 := λf.λx.x
1 := λf.λx.f x
2 := λf.λx.f (f x)
3 := λf.λx.f (f (f x))
and so on. Or using the alternative syntax presented above in "Notation":
0 := λfx.x
1 := λfx.f x
2 := λfx.f (f x)
3 := λfx.f (f (f x))
A Church numeral is a higher-order function—it takes a single-argument function "f", and returns another single-argument function. The Church numeral "n" is a function that takes a function "f" as argument and returns the "n"-th composition of "f", i.e. the function "f" composed with itself "n" times. This is denoted "f"^("n") and is in fact the "n"-th power of "f" (considered as an operator); "f"^(0) is defined to be the identity function. Such repeated compositions (of a single function "f") obey the laws of exponents, which is why these numerals can be used for arithmetic. (In Church's original lambda calculus, the formal parameter of a lambda expression was required to occur at least once in the function body, which made the above definition of 0 impossible.)
One way of thinking about the Church numeral "n", which is often useful when analysing programs, is as an instruction 'repeat "n" times'. For example, using the PAIR and NIL functions defined below, one can define a function that constructs a (linked) list of "n" elements all equal to "x" by repeating 'prepend another "x" element' "n" times, starting from an empty list. The lambda term is
λn.λx.n (PAIR x) NIL
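Church numerals translate directly into Python lambdas; the sketch below (with helper names church and to_int invented for the example) shows the 'repeat n times' reading:

    zero = lambda f: lambda x: x                  # 0 := λf.λx.x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))

    def church(n):                                # the numeral for a native int n
        c = zero
        for _ in range(n):
            c = succ(c)
        return c

    def to_int(c):                                # apply "add one" c times to 0
        return c(lambda k: k + 1)(0)

    # 'repeat 3 times': prepend another "x" element, starting from the empty list
    assert church(3)(lambda lst: ["x"] + lst)([]) == ["x", "x", "x"]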
By varying what is being repeated, and varying what argument that function being repeated is applied to, a great many different effects can be achieved.
We can define a successor function, which takes a Church numeral "n" and returns "n" + 1 by adding another application of "f", where '(mf)x' means the function 'f' is applied 'm' times on 'x':
SUCC := λn.λf.λx.f (n f x)
Because the "m"-th composition of "f" composed with the "n"-th composition of "f" gives the "m"+"n"-th composition of "f", addition can be defined as follows:
PLUS can be thought of as a function taking two natural numbers as arguments and returning a natural number; it can be verified that
PLUS 2 3
and
5
are β-equivalent lambda expressions. Since adding "m" to a number "n" can be accomplished by adding 1 "m" times, an alternative definition is:
PLUS := λm.λn.m SUCC n
Similarly, multiplication can be defined as
MULT := λm.λn.λf.m (n f)
Alternatively
MULT := λm.λn.m (PLUS n) 0
since multiplying "m" and "n" is the same as repeating the add "n" function "m" times and then applying it to zero.
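Reusing church and to_int from the numeral sketch above, these arithmetic definitions can be checked directly in Python:

    plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))   # PLUS
    mult = lambda m: lambda n: lambda f: m(n(f))                   # MULT

    assert to_int(plus(church(2))(church(3))) == 5
    assert to_int(mult(church(2))(church(3))) == 6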
Exponentiation has a rather simple rendering in Church numerals, namely
POW := λb.λe.e b
The predecessor function defined by PRED "n" = "n" − 1 for a positive integer "n" and PRED 0 = 0 is considerably more difficult. The formula
PRED := λn.λf.λx.n (λg.λh.h (g f)) (λu.x) (λu.u)
can be validated by showing inductively that if "T" denotes (λ"g".λ"h"."h" ("g" "f")), then "T"^("n") (λ"u"."x") = (λ"h"."h"("f"^("n"−1)("x"))) for "n" > 0. Two other definitions of PRED are given below, one using conditionals and the other using pairs. With the predecessor function, subtraction is straightforward. Defining
SUB := λm.λn.n PRED m,
SUB "m" "n" yields "m" − "n" when "m" > "n" and 0 otherwise.
By convention, the following two definitions (known as Church booleans) are used for the boolean values TRUE and FALSE:
TRUE := λx.λy.x
FALSE := λx.λy.y
(Note that FALSE is equivalent to the Church numeral zero defined above.)
Then, with these two lambda terms, we can define some logic operators (these are just possible formulations; other expressions are equally correct):
AND := λp.λq.p q p
OR := λp.λq.p p q
NOT := λp.p FALSE TRUE
IFTHENELSE := λp.λa.λb.p a b
We are now able to compute some logic functions, for example:
AND TRUE FALSE
≡ (λp.λq.p q p) TRUE FALSE →β TRUE FALSE TRUE
≡ (λx.λy.x) FALSE TRUE →β FALSE
and we see that AND TRUE FALSE is equivalent to FALSE.
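The Church booleans and the AND computation above can likewise be checked in Python (a direct transcription, for illustration):

    TRUE = lambda x: lambda y: x      # TRUE := λx.λy.x
    FALSE = lambda x: lambda y: y     # FALSE := λx.λy.y
    AND = lambda p: lambda q: p(q)(p)
    OR = lambda p: lambda q: p(p)(q)
    NOT = lambda p: p(FALSE)(TRUE)

    assert AND(TRUE)(FALSE) is FALSE   # AND TRUE FALSE reduces to FALSE
    assert OR(TRUE)(FALSE) is TRUE
    assert NOT(FALSE) is TRUE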
A "predicate" is a function that returns a boolean value. The most fundamental predicate is ISZERO, which returns TRUE if its argument is the Church numeral 0, and FALSE if its argument is any other Church numeral:
The following predicate tests whether the first argument is less-than-or-equal-to the second:
LEQ := λm.λn.ISZERO (SUB m n),
and since "m" = "n" if LEQ "m" "n" and LEQ "n" "m", it is straightforward to build a predicate for numerical equality.
The availability of predicates and the above definition of TRUE and FALSE make it convenient to write "if-then-else" expressions in lambda calculus. For example, the predecessor function can be defined as:
PRED := λn.n (λg.λk.ISZERO (g 1) k (PLUS (g k) 1)) (λv.0) 0
which can be verified by showing inductively that "n" (λ"g".λ"k".ISZERO ("g" 1) "k" (PLUS ("g" "k") 1)) (λ"v".0) is the add "n" − 1 function for "n" > 0.
A pair (2-tuple) can be defined in terms of TRUE and FALSE, by using the Church encoding for pairs. For example, PAIR encapsulates the pair ("x","y"), FIRST returns the first element of the pair, and SECOND returns the second:
PAIR := λx.λy.λf.f x y
FIRST := λp.p TRUE
SECOND := λp.p FALSE
A linked list can be defined as either NIL for the empty list, or the PAIR of an element and a smaller list, with
NIL := λx.TRUE
NULL := λp.p (λx.λy.FALSE)
where the predicate NULL tests for the value NIL. (Alternatively, with NIL := FALSE, the construct "l" (λ"h".λ"t".λ"z".deal_with_head_"h"_and_tail_"t") (deal_with_nil) obviates the need for an explicit NULL test).
As an example of the use of pairs, the shift-and-increment function that maps ("m", "n") to ("n", "n" + 1) can be defined as
Φ := λx.PAIR (SECOND x) (SUCC (SECOND x))
which allows us to give perhaps the most transparent version of the predecessor function:
PRED := λn.FIRST (n Φ (PAIR 0 0))
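Putting the pair encoding together with the numerals from the earlier Python sketches gives a runnable version of this predecessor (PHI is the shift-and-increment function; all other names reuse the sketches above):

    PAIR = lambda x: lambda y: lambda f: f(x)(y)
    FIRST = lambda p: p(TRUE)
    SECOND = lambda p: p(FALSE)

    PHI = lambda p: PAIR(SECOND(p))(succ(SECOND(p)))      # (m, n) -> (n, n + 1)
    PRED = lambda n: FIRST(n(PHI)(PAIR(zero)(zero)))

    assert to_int(PRED(church(4))) == 3
    assert to_int(PRED(church(0))) == 0                   # PRED 0 = 0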
There is a considerable body of programming idioms for lambda calculus. Many of these were originally developed in the context of using lambda calculus as a foundation for programming language semantics, effectively using lambda calculus as a low-level programming language. Because several programming languages include the lambda calculus (or something very similar) as a fragment, these techniques also see use in practical programming, but may then be perceived as obscure or foreign.
In lambda calculus, a library would take the form of a collection of previously defined functions, which as lambda-terms are merely particular constants. The pure lambda calculus does not have a concept of named constants since all atomic lambda-terms are variables, but one can emulate having named constants by setting aside a variable as the name of the constant, using abstraction to bind that variable in the main body, and apply that abstraction to the intended definition. Thus to use "f" to mean "M" (some explicit lambda-term) in "N" (another lambda-term, the "main program"), one can say
(λf.N) M
Authors often introduce syntactic sugar, such as let, to permit writing the above in the more intuitive order
let f = M in N
By chaining such definitions, one can write a lambda calculus "program" as zero or more function definitions, followed by one lambda-term using those functions that constitutes the main body of the program.
A notable restriction of this let is that the name "f" is not defined in "M", since "M" is outside the scope of the abstraction binding "f"; this means a recursive function definition cannot be used as the "M" with let. The more advanced letrec syntactic sugar construction that allows writing recursive function definitions in that naive style instead additionally employs fixed-point combinators.
Recursion is the definition of a function using the function itself. Lambda calculus cannot express this as directly as some other notations: all functions are anonymous in lambda calculus, so we can't refer to a value which is yet to be defined, inside the lambda term defining that same value. However, recursion can still be achieved by arranging for a lambda expression to receive itself as its argument value, for example in (λ"x"."x" "x") "E".
Consider the factorial function F("n") recursively defined by
F("n") = 1, if "n" = 0; else "n" × F("n" − 1).
In the lambda expression which is to represent this function, a "parameter" (typically the first one) will be assumed to receive the lambda expression itself as its value, so that calling it – applying it to an argument – will amount to recursion. Thus to achieve recursion, the intended-as-self-referencing argument (called "r" here) must always be passed to itself within the function body, at a call point:
G := λr.λn.(1, if n = 0; else n × (r r (n−1)))
with r r x = F x = G r x to hold, so r = G and
F := G G = (λx.x x) G
The self-application achieves replication here, passing the function's lambda expression on to the next invocation as an argument value, making it available to be referenced and called there.
This solves it but requires re-writing each recursive call as self-application. We would like to have a generic solution, without a need for any re-writes:
G := λr.λn.(1, if n = 0; else n × (r (n−1)))
F := FIX G
Given a lambda term with first argument representing recursive call (e.g. G here), the "fixed-point" combinator FIX will return a self-replicating lambda expression representing the recursive function (here, F). The function does not need to be explicitly passed to itself at any point, for the self-replication is arranged in advance, when it is created, to be done each time it is called. Thus the original lambda expression (FIX G) is re-created inside itself, at call-point, achieving self-reference.
In fact, there are many possible definitions for this FIX operator, the simplest of them being:
Y := λg.(λx.g (x x)) (λx.g (x x))
In the lambda calculus, Y "g" is a fixed-point of "g", as it expands to:
Y g
(λh.(λx.h (x x)) (λx.h (x x))) g
(λx.g (x x)) (λx.g (x x))
g ((λx.g (x x)) (λx.g (x x)))
g (Y g)
Now, to perform our recursive call to the factorial function, we would simply call (Y G) "n", where "n" is the number we are calculating the factorial of. Given "n" = 4, for example, this gives:
(Y G) 4
G (Y G) 4
(λn.(1, if n = 0; else n × ((Y G) (n−1)))) 4
1, if 4 = 0; else 4 × ((Y G) 3)
4 × (G (Y G) 3)
4 × (3 × ((Y G) 2))
4 × (3 × (2 × ((Y G) 1)))
4 × (3 × (2 × (1 × ((Y G) 0))))
4 × (3 × (2 × (1 × 1)))
24
Every recursively defined function can be seen as a fixed point of some suitably defined function closing over the recursive call with an extra argument, and therefore, using Y, every recursively defined function can be expressed as a lambda expression. In particular, we can now cleanly define the subtraction, multiplication and comparison predicate of natural numbers recursively.
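Under Python's applicative-order (eager) evaluation, Y itself never terminates, so a runnable sketch has to use the eta-expanded variant known as the Z combinator; native numbers stand in for Church numerals here for readability:

    # Z := λg.(λx.g (λv.x x v)) (λx.g (λv.x x v))
    Z = lambda g: (lambda x: g(lambda v: x(x)(v)))(lambda x: g(lambda v: x(x)(v)))

    G = lambda r: lambda n: 1 if n == 0 else n * r(n - 1)
    factorial = Z(G)    # the fixed point of G

    assert factorial(4) == 24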
Certain terms have commonly accepted names:
I := λx.x
K := λx.λy.x
S := λx.λy.λz.x z (y z)
B := λx.λy.λz.x (y z)
C := λx.λy.λz.x z y
W := λx.λy.x y y
ω := λx.x x
Ω := ω ω
Y := λg.(λx.g (x x)) (λx.g (x x))
Several of these have direct applications in the "elimination of abstraction" that turns lambda terms into combinator calculus terms.
If "N" is a lambda-term without abstraction, but possibly containing named constants (combinators), then there exists a lambda-term "T"("x","N") which is equivalent to λ"x"."N" but lacks abstraction (except as part of the named constants, if these are considered non-atomic). This can also be viewed as anonymising variables, as "T"("x","N") removes all occurrences of "x" from "N", while still allowing argument values to be substituted into the positions where "N" contains an "x". The conversion function "T" can be defined by:
In either case, a term of the form "T"("x","N") "P" can reduce by having the initial combinator I, K, or S grab the argument "P", just like β-reduction of (λ"x"."N") "P" would do. I returns that argument. K throws the argument away, just like (λ"x"."N") would do if "x" has no free occurrence in "N". S passes the argument on to both subterms of the application, and then applies the result of the first to the result of the second.
The combinators B and C are similar to S, but pass the argument on to only one subterm of an application (B to the "argument" subterm and C to the "function" subterm), thus saving a subsequent K if there is no occurrence of "x" in one subterm. In comparison to B and C, the S combinator actually conflates two functionalities: rearranging arguments, and duplicating an argument so that it may be used in two places. The W combinator does only the latter, yielding the B, C, K, W system as an alternative to SKI combinator calculus.
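The basic combinators are one-liners in Python, and the classic identity S K K = I can be checked directly (a sketch for illustration):

    I = lambda x: x
    K = lambda x: lambda y: x
    S = lambda x: lambda y: lambda z: x(z)(y(z))

    # S K K a = K a (K a) = a, so S K K behaves like I
    assert S(K)(K)("a") == "a"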
A typed lambda calculus is a typed formalism that uses the lambda-symbol (λ) to denote anonymous function abstraction. In this context, types are usually objects of a syntactic nature that are assigned to lambda terms; the exact nature of a type depends on the calculus considered (see kinds below). From a certain point of view, typed lambda calculi can be seen as refinements of the untyped lambda calculus but from another point of view, they can also be considered the more fundamental theory and "untyped lambda calculus" a special case with only one type.
Typed lambda calculi are foundational programming languages and are the base of typed functional programming languages such as ML and Haskell and, more indirectly, typed imperative programming languages. Typed lambda calculi play an important role in the design of type systems for programming languages; here typability usually captures desirable properties of the program, e.g. the program will not cause a memory access violation.
Typed lambda calculi are closely related to mathematical logic and proof theory via the Curry–Howard isomorphism and they can be considered as the internal language of classes of categories, e.g. the simply typed lambda calculus is the language of Cartesian closed categories (CCCs).
A function "F": N → N of natural numbers is a computable function if and only if there exists a lambda expression "f" such that for every pair of "x", "y" in N, "F"("x")="y" if and only if "f" "x" =β "y", where "x" and "y" are the Church numerals corresponding to "x" and "y", respectively and =β meaning equivalence with β-reduction. This is one of the many ways to define computability; see the Church–Turing thesis for a discussion of other approaches and their equivalence.
There is no algorithm that takes as input any two lambda expressions and outputs TRUE or FALSE depending on whether or not the two expressions are equivalent. More precisely, no computable function can decide the equivalence. This was historically the first problem for which undecidability could be proven. As usual for such a proof, "computable" means computable by any model of computation that is Turing complete.
Church's proof first reduces the problem to determining whether a given lambda expression has a "normal form". A normal form is an equivalent expression that cannot be reduced any further under the rules imposed by the form. Then he assumes that this predicate is computable, and can hence be expressed in lambda calculus. Building on earlier work by Kleene and constructing a Gödel numbering for lambda expressions, he constructs a lambda expression "e" that closely follows the proof of Gödel's first incompleteness theorem. If "e" is applied to its own Gödel number, a contradiction results.
As pointed out by Peter Landin's 1965 paper "A Correspondence between ALGOL 60 and Church's Lambda-notation", sequential procedural programming languages can be understood in terms of the lambda calculus, which provides the basic mechanisms for procedural abstraction and procedure (subprogram) application.
For example, in Lisp the "square" function can be expressed as a lambda expression as follows:
(lambda (x) (* x x))
The above example is an expression that evaluates to a first-class function. The symbol lambda creates an anonymous function, given a list of parameter names, (x) – just a single argument in this case – and an expression that is evaluated as the body of the function, (* x x). Anonymous functions are sometimes called lambda expressions.
For example, Pascal and many other imperative languages have long supported passing subprograms as arguments to other subprograms through the mechanism of function pointers. However, function pointers are not a sufficient condition for functions to be first-class datatypes, because a function is a first-class datatype if and only if new instances of the function can be created at run-time. Such run-time creation of functions is supported in Smalltalk, JavaScript, and more recently in Scala, Eiffel ("agents"), C# ("delegates") and C++11, among others.
Whether a term is normalising or not, and how much work needs to be done in normalising it if it is, depends to a large extent on the reduction strategy used. The distinction between reduction strategies relates to the distinction in functional programming languages between eager evaluation and lazy evaluation.
Applicative order is not a normalising strategy. The usual counterexample is as follows: define Ω = ωω where ω = λ"x"."xx". This entire expression contains only one redex, namely the whole expression; its reduct is again Ω. Since this is the only available reduction, Ω has no normal form (under any evaluation strategy). Using applicative order, the expression KIΩ = (λ"x".λ"y"."x") (λ"x"."x")Ω is reduced by first reducing Ω to normal form (since it is the rightmost redex), but since Ω has no normal form, applicative order fails to find a normal form for KIΩ.
In contrast, normal order is so called because it always finds a normalising reduction, if one exists. In the above example, KIΩ reduces under normal order to "I", a normal form. A drawback is that redexes in the arguments may be copied, resulting in duplicated computation (for example, (λ"x"."xx") ((λ"x"."x")"y") reduces to ((λ"x"."x")"y") ((λ"x"."x")"y") using this strategy; now there are two redexes, so full evaluation needs two more steps, but if the argument had been reduced first, there would now be none).
The positive tradeoff of using applicative order is that it does not cause unnecessary computation, if all arguments are used, because it never substitutes arguments containing redexes and hence never needs to copy them (which would duplicate work). In the above example, in applicative order (λ"x"."xx") ((λ"x"."x")"y") reduces first to (λ"x"."xx")"y" and then to the normal form "yy", taking two steps instead of three.
Most "purely" functional programming languages (notably Miranda and its descendants, including Haskell), and the proof languages of theorem provers, use "lazy evaluation", which is essentially the same as call by need. This is like normal order reduction, but call by need manages to avoid the duplication of work inherent in normal order reduction using "sharing". In the example given above, (λ"x"."xx") ((λ"x"."x")"y") reduces to ((λ"x"."x")"y") ((λ"x"."x")"y"), which has two redexes, but in call by need they are represented using the same object rather than copied, so when one is reduced the other is too.
While the idea of β-reduction seems simple enough, it is not an atomic step, in that it must have a non-trivial cost when estimating computational complexity. To be precise, one must somehow find the location of all of the occurrences of the bound variable "V" in the expression "E", implying a time cost, or one must keep track of these locations in some way, implying a space cost. A naïve search for the locations of "V" in "E" is "O"("n") in the length "n" of "E". This has led to the study of systems that use explicit substitution. Sinot's director strings offer a way of tracking the locations of free variables in expressions.
The Church–Rosser property of the lambda calculus means that evaluation (β-reduction) can be carried out in "any order", even in parallel. This means that various nondeterministic evaluation strategies are relevant. However, the lambda calculus does not offer any explicit constructs for parallelism. One can add constructs such as Futures to the lambda calculus. Other process calculi have been developed for describing communication and concurrency.
In Lévy's 1988 paper "Sharing in the Evaluation of lambda Expressions", he defines a notion of optimal sharing, such that no work is "duplicated". For example, performing a β-reduction in normal order on (λ"x"."xx") (II) reduces it to II (II). The argument II is duplicated by the application to the first lambda term. If the reduction was done in an applicative order first, we save work because work is not duplicated: (λ"x"."xx") (II) reduces to (λ"x"."xx") I. On the other hand, using applicative order can result in redundant reductions or even possibly never reduce to normal form. For example, performing a β-reduction in normal order on (λ"f".f I) (λy.(λ"x"."xx") (y I)) yields (λy.(λ"x"."xx") (y I)) I, (λ"x"."xx") (II) which we know we can do without duplicating work. Doing the same but in applicative order yields (λ"f".f I) (λy.y I (y I)), (λy.y I (y I)) I, I I (I I), and now work is duplicated.
Lévy shows the existence of lambda terms where there "does not exist" a sequence of reductions which reduces them without duplicating work. The below lambda term is such an example.
It is composed of three similar terms, x=((λg. ... ) (λh.y)) and y=((λf. ...) (λw.z) ), and finally z=λw.(h(w(λy.y))). There are only two possible β-reductions to be done here, on x and on y. Reducing the outer x term first results in the inner y term being duplicated, and each copy will have to be reduced, but reducing the inner y term first will duplicate its argument z, which will cause work to be duplicated when the values of h and w are made known. Incidentally, the above term reduces to the identity function (λy.y), and is constructed by making wrappers which make the identity function available to the binders g=λh..., f=λw..., h=λx.x (at first), and w=λz.z (at first), all of which are applied to the innermost term λy.y.
The precise notion of duplicated work relies on noticing that after the first reduction of I I is done, the value of the other I I can be determined, because they have the same structure (and in fact they have exactly the same values), and result from a common ancestor. Such similar structures can each be assigned a label that can be tracked across reductions. If a name is assigned to the redex that produces all the resulting II terms, then all duplicated occurrences of II can be tracked and reduced in one go. However, it is not obvious in advance whether a redex will produce the II term. Identifying the structures that are similar in different parts of a lambda term can involve a complex algorithm and can possibly have a complexity equal to the history of the reduction itself.
While Lévy defines the notion of optimal sharing, he does not provide an algorithm to do it. In Vincent van Oostrom, Kees-Jan van de Looij, and Marijn Zwitserlood's paper "Lambdascope: Another optimal implementation of the lambda-calculus", they provide such an algorithm by transforming lambda terms into interaction nets, which are then reduced. Roughly speaking, the resulting reduction is optimal because every term that would have the same labels as per Lévy's paper would also be the same graph in the interaction net. In the paper, they mention that their prototype implementation of Lambdascope performs as well as the "optimised" version of the reference optimal higher order machine BOHM.
More details can be found in the short article About the efficient reduction of lambda terms.
The fact that lambda calculus terms act as functions on other lambda calculus terms, and even on themselves, led to questions about the semantics of the lambda calculus. Could a sensible meaning be assigned to lambda calculus terms? The natural semantics was to find a set "D" isomorphic to the function space "D" → "D", of functions on itself. However, no nontrivial such "D" can exist, by cardinality constraints because the set of all functions from "D" to "D" has greater cardinality than "D", unless "D" is a singleton set.
In the 1970s, Dana Scott showed that, if only continuous functions were considered, a set or domain "D" with the required property could be found, thus providing a model for the lambda calculus.
This work also formed the basis for the denotational semantics of programming languages.
These extensions are in the lambda cube:
These formal systems are extensions of lambda calculus that are not in the lambda cube:
These formal systems are variations of lambda calculus:
These formal systems are related to lambda calculus:
Monographs/textbooks for graduate students:
"Some parts of this article are based on material from FOLDOC, used with ."
|
https://en.wikipedia.org/wiki?curid=18203
|
Lossy compression
In information technology, lossy compression or irreversible compression is the class of data encoding methods that uses inexact approximations and partial data discarding to represent the content. These techniques are used to reduce data size for storing, handling, and transmitting content. Different versions of the same photo of a cat, compressed to increasing degrees of approximation, become coarser as more details are removed. This is opposed to lossless data compression (reversible data compression), which does not degrade the data. The amount of data reduction possible using lossy compression is much higher than through lossless techniques.
Well-designed lossy compression technology often reduces file sizes significantly before degradation is noticed by the end-user. Even when noticeable by the user, further data reduction may be desirable (e.g., for real-time communication, to reduce transmission times, or to reduce storage needs). The most widely used lossy compression algorithm is the discrete cosine transform (DCT), first published by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974. Recently, a new family of sinusoidal-hyperbolic transform functions, which have properties and performance comparable with the DCT, has been proposed for lossy compression.
Lossy compression is most commonly used to compress multimedia data (audio, video, and images), especially in applications such as streaming media and internet telephony. By contrast, lossless compression is typically required for text and data files, such as bank records and text articles. It can be advantageous to make a master lossless file from which additional copies can then be produced. This avoids basing new compressed copies on a lossy source file, which would yield additional artifacts and further unnecessary information loss.
It is possible to compress many types of digital data in a way that reduces the size of a computer file needed to store it, or the bandwidth needed to transmit it, with no loss of the full information contained in the original file. A picture, for example, is converted to a digital file by considering it to be an array of dots and specifying the color and brightness of each dot. If the picture contains an area of the same color, it can be compressed without loss by saying "200 red dots" instead of "red dot, red dot, ...(197 more times)..., red dot."
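The "200 red dots" idea is run-length encoding; a minimal Python sketch shows the lossless round trip:

    from itertools import groupby

    def rle_encode(dots):
        return [(color, sum(1 for _ in run)) for color, run in groupby(dots)]

    def rle_decode(pairs):
        return [color for color, count in pairs for _ in range(count)]

    row = ["red"] * 200 + ["blue"] * 3
    assert rle_encode(row) == [("red", 200), ("blue", 3)]
    assert rle_decode(rle_encode(row)) == row      # no information is lost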
The original data contains a certain amount of information, and there is a lower limit to the size of file that can carry all the information. Basic information theory says that there is an absolute limit in reducing the size of this data. When data is compressed, its entropy per unit size increases, but this density cannot increase indefinitely. As an intuitive example, most people know that a compressed ZIP file is smaller than the original file, but repeatedly compressing the same file will not reduce the size to nothing. Most compression algorithms can recognize when further compression would be pointless and would in fact increase the size of the data.
In many cases, files or data streams contain more information than is needed for a particular purpose. For example, a picture may have more detail than the eye can distinguish when reproduced at the largest size intended; likewise, an audio file does not need a lot of fine detail during a very loud passage. Developing lossy compression techniques as closely matched to human perception as possible is a complex task. Sometimes the ideal is a file that provides exactly the same perception as the original, with as much digital information as possible removed; other times, perceptible loss of quality is considered a valid trade-off for the reduced data.
The terms 'irreversible' and 'reversible' are preferred over 'lossy' and 'lossless' respectively for some applications, such as medical image compression, to circumvent the negative implications of 'loss'. The type and amount of loss can affect the utility of the images. Artifacts or undesirable effects of compression may be clearly discernible yet the result still useful for the intended purpose. Or lossy compressed images may be 'visually lossless', or in the case of medical images, so-called Diagnostically Acceptable Irreversible Compression (DAIC) may have been applied.
Some forms of lossy compression can be thought of as an application of transform coding, which is a type of data compression used for digital images, digital audio signals, and digital video. The transformation is typically used to enable better (more targeted) quantization. Knowledge of the application is used to choose information to discard, thereby lowering its bandwidth. The remaining information can then be compressed via a variety of methods. When the output is decoded, the result may not be identical to the original input, but is expected to be close enough for the purpose of the application.
The most common form of lossy compression is a transform coding method, the discrete cosine transform (DCT), which was first published by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974. DCT is the most widely used form of lossy compression, for popular image compression formats (such as JPEG), video coding standards (such as MPEG and H.264/AVC) and audio compression formats (such as MP3 and AAC).
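A toy version of DCT-based lossy coding can be sketched in a few lines of Python, assuming NumPy and SciPy are available; discarding the high-frequency coefficients is the lossy step:

    import numpy as np
    from scipy.fft import dct, idct

    samples = np.array([50.0, 52.0, 51.0, 49.0, 53.0, 50.0, 48.0, 51.0])
    coeffs = dct(samples, norm="ortho")
    coeffs[4:] = 0.0                        # discard high-frequency detail (lossy)
    approx = idct(coeffs, norm="ortho")     # close to, but not identical to, the input
    print(np.round(approx, 1))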
In the case of audio data, a popular form of transform coding is perceptual coding, which transforms the raw data to a domain that more accurately reflects the information content. For example, rather than expressing a sound file as the amplitude levels over time, one may express it as the frequency spectrum over time, which corresponds more accurately to human audio perception. While data reduction (compression, be it lossy or lossless) is a main goal of transform coding, it also allows other goals: one may represent data more accurately for the original amount of space – for example, in principle, if one starts with an analog or high-resolution digital master, an MP3 file of a given size should provide a better representation than a raw uncompressed audio in WAV or AIFF file of the same size. This is because uncompressed audio can only reduce file size by lowering bit rate or depth, whereas compressing audio can reduce size while maintaining bit rate and depth. This compression becomes a selective loss of the least significant data, rather than losing data across the board. Further, a transform coding may provide a better domain for manipulating or otherwise editing the data – for example, equalization of audio is most naturally expressed in the frequency domain (boost the bass, for instance) rather than in the raw time domain.
From this point of view, perceptual encoding is not essentially about "discarding" data, but rather about a "better representation" of data. Another use is for backward compatibility and graceful degradation: in color television, encoding color via a luminance-chrominance transform domain (such as YUV) means that black-and-white sets display the luminance, while ignoring the color information. Another example is chroma subsampling: the use of color spaces such as YIQ, used in NTSC, allows one to reduce the resolution on the components to accord with human perception – humans have highest resolution for black-and-white (luma), lower resolution for mid-spectrum colors like yellow and green, and lowest for red and blue – thus NTSC displays approximately 350 pixels of luma per scanline, 150 pixels of yellow vs. green, and 50 pixels of blue vs. red, which are proportional to human sensitivity to each component.
Lossy compression formats suffer from generation loss: repeatedly compressing and decompressing the file will cause it to progressively lose quality. This is in contrast with lossless data compression, where data will not be lost via the use of such a procedure. Information-theoretical foundations for lossy data compression are provided by rate-distortion theory. Much like the use of probability in optimal coding theory, rate-distortion theory heavily draws on Bayesian estimation and decision theory in order to model perceptual distortion and even aesthetic judgment.
There are two basic lossy compression schemes:
- In lossy transform codecs, samples of picture or sound are taken, chopped into small segments, transformed into a new basis space, and quantized. The resulting quantized values are then entropy coded.
- In lossy predictive codecs, previous and/or subsequent decoded data is used to predict the current sound sample or image frame. The error between the predicted data and the real data, together with any extra information needed to reproduce the prediction, is then quantized and coded.
In some systems the two techniques are combined, with transform codecs being used to compress the error signals generated by the predictive stage.
The advantage of lossy methods over lossless methods is that in some cases a lossy method can produce a much smaller compressed file than any lossless method, while still meeting the requirements of the application. Lossy methods are most often used for compressing sound, images or videos. This is because these types of data are intended for human interpretation where the mind can easily "fill in the blanks" or see past very minor errors or inconsistencies – ideally lossy compression is transparent (imperceptible), which can be verified via an ABX test. Data files using lossy compression are smaller in size and thus cost less to store and to transmit over the Internet, a crucial consideration for streaming video services such as Netflix and streaming audio services such as Spotify.
A study conducted by the Audio Engineering Library concluded that lossy compression formats such as MP3 have distinct effects on timbral and emotional characteristics, tending to strengthen negative emotional qualities and weaken positive ones. The study further noted that the trumpet is the instrument most affected by compression, while the horn is the least affected.
When a user acquires a lossily compressed file (for example, to reduce download time), the retrieved file can be quite different from the original at the bit level while being indistinguishable to the human ear or eye for most practical purposes. Many compression methods focus on the idiosyncrasies of human physiology, taking into account, for instance, that the human eye can see only certain wavelengths of light. The psychoacoustic model describes how sound can be highly compressed without degrading perceived quality. Flaws caused by lossy compression that are noticeable to the human eye or ear are known as compression artifacts.
The compression ratio (that is, the size of the compressed file compared to that of the uncompressed file) of lossy video codecs is nearly always far superior to that of the audio and still-image equivalents.
An important caveat about lossy compression is that editing lossily compressed files causes digital generation loss from the re-encoding (formally, transcoding). This can be avoided by only producing lossy files from (lossless) originals and only editing (copies of) original files, such as images in raw image format instead of JPEG. If data which has been compressed lossily is decoded and compressed losslessly, the size of the result can be comparable with the size of the data before lossy compression, but the data already lost cannot be recovered. When deciding to use lossy conversion without keeping the original, one should remember that format conversion may be needed in the future to achieve compatibility with software or devices (format shifting), or to avoid paying patent royalties for decoding or distribution of compressed files.
By modifying the compressed data directly without decoding and re-encoding, some editing of lossily compressed files without degradation of quality is possible. Editing which reduces the file size as if it had been compressed to a greater degree, but without more loss than this, is sometimes also possible.
The primary programs for lossless editing of JPEGs are jpegtran, the derived exiftran (which also preserves Exif information), and Jpegcrop (which provides a Windows interface).
These allow the image to be cropped, rotated in 90-degree steps, flipped, or converted to grayscale (by dropping the chrominance channel). While unwanted information is destroyed, the quality of the remaining portion is unchanged.
Some other transforms are possible to some extent, such as joining images with the same encoding (composing side by side, as on a grid) or pasting images (such as logos) onto existing images (both via Jpegjoin), or scaling.
Some changes can be made to the compression without re-encoding, such as optimizing the compression (reducing file size without changing the decoded image) and converting between progressive and non-progressive encoding.
The freeware Windows-only IrfanView has some lossless JPEG operations in its JPG_TRANSFORM plugin.
Metadata, such as ID3 tags, Vorbis comments, or Exif information, can usually be modified or removed without modifying the underlying data.
One may wish to downsample or otherwise decrease the resolution of the represented source signal and the quantity of data used for its compressed representation without re-encoding, as in bitrate peeling, but this functionality is not supported in all designs, as not all codecs encode data in a form that allows less important detail to simply be dropped. Some well-known designs that have this capability include JPEG 2000 for still images and H.264/MPEG-4 AVC based Scalable Video Coding for video. Such schemes have also been standardized for older designs, such as JPEG images with progressive encoding, and MPEG-2 and MPEG-4 Part 2 video, although those prior schemes had limited success in terms of adoption into real-world common usage. Without this capability, which is often the case in practice, producing a representation with lower resolution or lower fidelity than a given one requires either starting with the original source signal and encoding again, or starting with a compressed representation and then decompressing and re-encoding it (transcoding), though the latter tends to cause digital generation loss.
Another approach is to encode the original signal at several different bitrates, and then either choose which to use (as when streaming over the internet – as in RealNetworks' "SureStream" – or offering varying downloads, as at Apple's iTunes Store), or broadcast several, where the best that is successfully received is used, as in various implementations of hierarchical modulation. Similar techniques are used in mipmaps, pyramid representations, and more sophisticated scale space methods. Some audio formats feature a combination of a lossy format and a lossless correction which, when combined, reproduce the original signal; the correction can be stripped, leaving a smaller, lossily compressed, file. Such formats include MPEG-4 SLS (Scalable to Lossless), WavPack, OptimFROG DualStream, and DTS-HD Master Audio in lossless (XLL) mode.
Researchers have (semi-seriously) performed lossy compression on text by either using a thesaurus to substitute short words for long ones, or generative text techniques, although these sometimes fall into the related category of lossy data conversion.
A general kind of lossy compression is to lower the resolution of an image, as in image scaling, particularly decimation. One may also remove less important ("lower information") parts of an image, such as by seam carving. Many media transforms, such as Gaussian blur, are, like lossy compression, irreversible: the original signal cannot be reconstructed from the transformed signal. However, in general these will have the same size as the original, and are not a form of compression. Lowering resolution has practical uses, as the NASA New Horizons craft transmitted thumbnails of its encounter with Pluto–Charon before it sent the higher-resolution images. Another solution for slow connections is the use of image interlacing, which progressively defines the image. Thus a partial transmission is enough to preview the final image at a lower resolution, without creating a separate scaled-down version alongside the full version.
|
https://en.wikipedia.org/wiki?curid=18208
|
Lossless compression
Lossless compression is a class of data compression algorithms that allows the original data to be perfectly reconstructed from the compressed data. By contrast, lossy compression permits reconstruction only of an approximation of the original data, though usually with greatly improved compression rates (and therefore reduced media sizes).
Lossless data compression is used in many applications. For example, it is used in the ZIP file format and in the GNU tool gzip. It is also often used as a component within lossy data compression technologies (e.g. lossless mid/side joint stereo preprocessing by MP3 encoders and other lossy audio encoders).
Lossless compression is used in cases where it is important that the original and the decompressed data be identical, or where deviations from the original data would be unfavourable. Typical examples are executable programs, text documents, and source code. Some image file formats, like PNG or GIF, use only lossless compression, while others like TIFF and MNG may use either lossless or lossy methods. Lossless audio formats are most often used for archiving or production purposes, while smaller lossy audio files are typically used on portable players and in other cases where storage space is limited or exact replication of the audio is unnecessary.
Most lossless compression programs do two things in sequence: the first step generates a "statistical model" for the input data, and the second step uses this model to map input data to bit sequences in such a way that "probable" (e.g. frequently encountered) data will produce shorter output than "improbable" data.
The primary encoding algorithms used to produce bit sequences are Huffman coding (also used by DEFLATE) and arithmetic coding. Arithmetic coding achieves compression rates close to the best possible for a particular statistical model, which is given by the information entropy, whereas Huffman compression is simpler and faster but produces poor results for models that deal with symbol probabilities close to 1.
There are two primary ways of constructing statistical models: in a "static" model, the data is analyzed and a model is constructed, then this model is stored with the compressed data. This approach is simple and modular, but has the disadvantage that the model itself can be expensive to store, and also that it forces using a single model for all data being compressed, and so performs poorly on files that contain heterogeneous data. "Adaptive" models dynamically update the model as the data is compressed. Both the encoder and decoder begin with a trivial model, yielding poor compression of initial data, but as they learn more about the data, performance improves. Most popular types of compression used in practice now use adaptive coders.
Lossless compression methods may be categorized according to the type of data they are designed to compress. While, in principle, any general-purpose lossless compression algorithm ("general-purpose" meaning that they can accept any bitstring) can be used on any type of data, many are unable to achieve significant compression on data that are not of the form for which they were designed to compress. Many of the lossless compression techniques used for text also work reasonably well for indexed images.
These techniques take advantage of the specific characteristics of images such as the common phenomenon of contiguous 2-D areas of similar tones.
Every pixel but the first is replaced by the difference to its left neighbor. This leads to small values having a much higher probability than large values.
This is often also applied to sound files, and can compress files that contain mostly low frequencies and low volumes.
For images, this step can be repeated by taking the difference to the top pixel, and then in videos, the difference to the pixel in the next frame can be taken.
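A Python sketch of this prediction step for one row of pixels (or one run of audio samples); the encoder stores small differences, and decoding reverses the process exactly:

    def delta_encode(samples):
        out, prev = [], 0
        for s in samples:
            out.append(s - prev)    # smooth signals yield mostly small values
            prev = s
        return out

    def delta_decode(deltas):
        out, total = [], 0
        for d in deltas:
            total += d
            out.append(total)
        return out

    row = [100, 101, 101, 102, 103]
    assert delta_decode(delta_encode(row)) == row   # the step itself is lossless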
A hierarchical version of this technique takes neighboring pairs of data points, stores their difference and sum, and on a higher level with lower resolution continues with the sums. This is called the discrete wavelet transform. JPEG2000 additionally uses data points from other pairs and multiplication factors to mix them into the difference. These factors must be integers, so that the result is an integer under all circumstances. So the values are increased, increasing file size, but the distribution of values ideally becomes more peaked, which aids the subsequent entropy coding.
The adaptive encoding uses the probabilities from the previous sample in sound encoding, from the left and upper pixel in image encoding, and additionally from the previous frame in video encoding. In the wavelet transformation, the probabilities are also passed through the hierarchy.
Many of these methods are implemented in open-source and proprietary tools, particularly LZW and its variants. Some algorithms are patented in the United States and other countries and their legal usage requires licensing by the patent holder. Because of patents on certain kinds of LZW compression, and in particular licensing practices by patent holder Unisys that many developers considered abusive, some open source proponents encouraged people to avoid using the Graphics Interchange Format (GIF) for compressing still image files in favor of Portable Network Graphics (PNG), which combines the LZ77-based deflate algorithm with a selection of domain-specific prediction filters. However, the patents on LZW expired on June 20, 2003.
Many of the lossless compression techniques used for text also work reasonably well for indexed images, but there are other techniques that do not work for typical text that are useful for some images (particularly simple bitmaps), and other techniques that take advantage of the specific characteristics of images (such as the common phenomenon of contiguous 2-D areas of similar tones, and the fact that color images usually have a preponderance of a limited range of colors out of those representable in the color space).
As mentioned previously, lossless sound compression is a somewhat specialized area. Lossless sound compression algorithms can take advantage of the repeating patterns shown by the wave-like nature of the data – essentially using autoregressive models to predict the "next" value and encoding the (hopefully small) difference between the expected value and the actual data. If the difference between the predicted and the actual data (called the "error") tends to be small, then certain difference values (like 0, +1, −1 etc. on sample values) become very frequent, which can be exploited by encoding them in few output bits.
It is sometimes beneficial to compress only the differences between two versions of a file (or, in video compression, of successive images within a sequence). This is called delta encoding (from the Greek letter Δ, which in mathematics, denotes a difference), but the term is typically only used if both versions are meaningful outside compression and decompression. For example, while the process of compressing the error in the above-mentioned lossless audio compression scheme could be described as delta encoding from the approximated sound wave to the original sound wave, the approximated version of the sound wave is not meaningful in any other context.
By operation of the pigeonhole principle, no lossless compression algorithm can efficiently compress all possible data. For this reason, many different algorithms exist that are designed either with a specific type of input data in mind or with specific assumptions about what kinds of redundancy the uncompressed data are likely to contain.
Some of the most common lossless compression algorithms are listed below.
See this list of lossless video codecs.
Cryptosystems often compress data (the "plaintext") "before" encryption for added security. When properly implemented, compression greatly increases the unicity distance by removing patterns that might facilitate cryptanalysis. However, many ordinary lossless compression algorithms produce headers, wrappers, tables, or other predictable output that might instead make cryptanalysis easier. Thus, cryptosystems must utilize compression algorithms whose output does not contain these predictable patterns.
Genetics compression algorithms (not to be confused with genetic algorithms) are the latest generation of lossless algorithms that compress data (typically sequences of nucleotides) using both conventional compression algorithms and specific algorithms adapted to genetic data. In 2012, a team of scientists from Johns Hopkins University published the first genetic compression algorithm that does not rely on external genetic databases for compression. HAPZIPPER was tailored for HapMap data and achieves over 20-fold compression (95% reduction in file size), providing 2- to 4-fold better compression, much faster than leading general-purpose compression utilities.
Genomic sequence compression algorithms, also known as DNA sequence compressors, explore the fact that DNA sequences have characteristic properties, such as inverted repeats. The most successful compressors are XM and GeCo. For eukaryotes XM is slightly better in compression ratio, though for sequences larger than 100 MB its computational requirements are impractical.
Self-extracting executables contain a compressed application and a decompressor. When executed, the decompressor transparently decompresses and runs the original application. This is especially often used in demo coding, where competitions are held for demos with strict size limits, as small as 1k.
This type of compression is not strictly limited to binary executables, but can also be applied to scripts, such as JavaScript.
Lossless compression algorithms and their implementations are routinely tested in head-to-head benchmarks. There are a number of better-known compression benchmarks. Some benchmarks cover only the data compression ratio, so winners in these benchmarks may be unsuitable for everyday use due to the slow speed of the top performers. Another drawback of some benchmarks is that their data files are known, so some program writers may optimize their programs for best performance on a particular data set. The winners on these benchmarks often come from the class of context-mixing compression software.
Matt Mahoney, in the February 2010 edition of his free booklet "Data Compression Explained", additionally lists a number of other benchmarks.
The Compression Ratings website published a chart summary of the "frontier" in compression ratio and time.
The Compression Analysis Tool is a Windows application that enables end users to benchmark the performance characteristics of streaming implementations of LZF4, DEFLATE, ZLIB, GZIP, BZIP2 and LZMA using their own data. It produces measurements and charts with which users can compare the compression speed, decompression speed and compression ratio of the different compression methods, and examine how the compression level, buffer size and flushing operations affect the results.
Lossless data compression algorithms cannot guarantee compression for all input data sets. In other words, for any lossless data compression algorithm, there will be an input data set that does not get smaller when processed by the algorithm, and for any lossless data compression algorithm that makes at least one file smaller, there will be at least one file that it makes larger. This is easily proven with elementary mathematics using a counting argument, as follows:
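Suppose, for contradiction, that some lossless algorithm makes at least one file smaller and no file larger. There are $2^{N+1} - 1$ distinct files of length at most $N$ bits (including the empty file), and the algorithm must map them to distinct outputs, which by assumption also have length at most $N$. An injective map from a finite set into itself is a bijection onto it, and a bijection of this set onto itself preserves the total length of all the files; but the total length would have strictly decreased, since at least one file shrank and none grew. This contradiction shows that any algorithm that shrinks some file must lengthen some other file.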
Any lossless compression algorithm that makes some files shorter must necessarily make some files longer, but it is not necessary that those files become "very much" longer. Most practical compression algorithms provide an "escape" facility that can turn off the normal coding for files that would become longer by being encoded. In theory, only a single additional bit is required to tell the decoder that the normal coding has been turned off for the entire input; however, most encoding algorithms use at least one full byte (and typically more than one) for this purpose. For example, DEFLATE compressed files never need to grow by more than 5 bytes per 65,535 bytes of input.
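The escape idea can be illustrated with a minimal wrapper (an invented format for illustration, not DEFLATE's actual framing), in which a single flag byte records whether the payload was compressed or stored verbatim, bounding worst-case growth at one byte:

```python
import os
import zlib

def pack(data: bytes) -> bytes:
    compressed = zlib.compress(data)
    if len(compressed) < len(data):
        return b"\x01" + compressed   # flag 1: payload is compressed
    return b"\x00" + data             # flag 0: payload stored verbatim

def unpack(blob: bytes) -> bytes:
    flag, payload = blob[0], blob[1:]
    return zlib.decompress(payload) if flag == 1 else payload

noise = os.urandom(1024)              # high-entropy, effectively incompressible
assert unpack(pack(noise)) == noise
assert len(pack(noise)) <= len(noise) + 1   # growth bounded by one byte
```

Real formats amortize a similar flag over fixed-size blocks rather than whole files, which is why DEFLATE's worst case is a few bytes per 65,535-byte block rather than one byte per file.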
In fact, if we consider files of length N and assume that all files are equally probable, then for any lossless compression that reduces the size of some file, the expected length of a compressed file (averaged over all possible files of length N) must necessarily be greater than N. So if we know nothing about the properties of the data we are compressing, we might as well not compress it at all. A lossless compression algorithm is useful only when we are more likely to compress certain types of files than others; then the algorithm could be designed to compress those types of data better.
Thus, the main lesson from the argument is not that one risks big losses, but merely that one cannot always win. To choose an algorithm always means implicitly to select a "subset" of all files that will become usefully shorter. This is the theoretical reason why we need to have different compression algorithms for different kinds of files: there cannot be any algorithm that is good for all kinds of data.
The "trick" that allows lossless compression algorithms, used on the type of data they were designed for, to consistently compress such files to a shorter form is that the files the algorithms are designed to act on all have some form of easily modeled redundancy that the algorithm is designed to remove, and thus belong to the subset of files that that algorithm can make shorter, whereas other files would not get compressed or even get bigger. Algorithms are generally quite specifically tuned to a particular type of file: for example, lossless audio compression programs do not work well on text files, and vice versa.
In particular, files of random data cannot be consistently compressed by any conceivable lossless data compression algorithm: indeed, this result is used to "define" the concept of randomness in algorithmic complexity theory.
It is provably impossible to create an algorithm that can losslessly compress any data. While there have been many claims over the years of companies achieving "perfect compression", in which an arbitrary number "N" of random bits can always be compressed to "N" − 1 bits, such claims can be safely discarded without even examining the details of the purported compression scheme. Such an algorithm contradicts fundamental laws of mathematics because, if it existed, it could be applied repeatedly to losslessly reduce any file to length 0. Allegedly "perfect" compression algorithms are often derisively referred to as "magic" compression algorithms for this reason.
On the other hand, it has also been proven that there is no algorithm to determine whether a file is incompressible in the sense of Kolmogorov complexity. Hence it is possible that any particular file, even if it appears random, may be significantly compressed, even including the size of the decompressor. An example is the digits of the mathematical constant "pi", which appear random but can be generated by a very small program. However, even though it cannot be determined whether a particular file is incompressible, a simple theorem about incompressible strings shows that over 99% of files of any given length cannot be compressed by more than one byte (including the size of the decompressor).
Abstractly, a compression algorithm can be viewed as a function on sequences (normally of octets). Compression is successful if the resulting sequence is shorter than the original sequence (and the instructions for the decompression map). For a compression algorithm to be lossless, the compression map must form an injection from "plain" to "compressed" bit sequences.
The pigeonhole principle prohibits a bijection between the collection of sequences of length "N" and any subset of the collection of sequences of length "N"−1. Therefore, it is not possible to produce a lossless algorithm that reduces the size of every possible input sequence.
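Formally: for a lossless scheme the compression map $C$ must be injective, i.e. $C(x) = C(y)$ implies $x = y$. Since there are $2^N$ bit sequences of length $N$ but only $2^{N-1}$ of length $N - 1$, no injective map can send every length-$N$ sequence to a sequence of length $N - 1$.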
Most everyday files are relatively 'sparse' in an information entropy sense, and thus, most lossless algorithms a layperson is likely to apply on regular files compress them relatively well. This may, through misapplication of intuition, lead some individuals to conclude that a well-designed compression algorithm can compress "any" input, thus, constituting a "magic compression algorithm".
Real compression algorithm designers accept that streams of high information entropy cannot be compressed, and accordingly include facilities for detecting and handling this condition. An obvious way of detection is applying a raw compression algorithm and testing whether its output is smaller than its input. Sometimes, detection is made by heuristics; for example, a compression application may consider files whose names end in ".zip", ".arj" or ".lha" uncompressible without any more sophisticated detection. A common way of handling this situation is to store the input, or uncompressible parts of the input, verbatim in the output, minimizing the compression overhead. For example, the zip data format specifies the 'compression method' of 'Stored' for input files that have been copied into the archive verbatim.
Mark Nelson, in response to claims of magic compression algorithms appearing in comp.compression, has constructed a 415,241 byte binary file of highly entropic content, and issued a public challenge of $100 to anyone to write a program that, together with its input, would be smaller than his provided binary data yet be able to reconstitute it without error.
The FAQ for the comp.compression newsgroup contains a challenge by Mike Goldman offering $5,000 for a program that can compress random data. Patrick Craig took up the challenge, but rather than compressing the data, he split it up into separate files, all of which ended in the digit "5", which was not stored as part of each file. Omitting this character allowed the resulting files (plus, in accordance with the rules, the size of the program that reassembled them) to be smaller than the original file. However, no actual compression took place: the information stored in the names of the files was necessary to reassemble them in the correct order into the original file, and this information was not taken into account in the file size comparison. The files themselves are thus not sufficient to reconstitute the original file; the file names are also necessary. Patrick Craig agreed that no meaningful compression had taken place, but argued that the wording of the challenge did not actually require it. A full history of the event, including discussion on whether or not the challenge was technically met, is on Patrick Craig's web site.
https://en.wikipedia.org/wiki?curid=18209
Larry Niven
Laurence van Cott Niven (; born April 30, 1938) is an American science fiction writer. His best-known works are "Ringworld" (1970), which received Hugo, Locus, Ditmar, and Nebula awards, and, with Jerry Pournelle, "The Mote in God's Eye" (1974) and "Lucifer's Hammer" (1977). The Science Fiction and Fantasy Writers of America named him the 2015 recipient of the Damon Knight Memorial Grand Master Award. His work is primarily hard science fiction, using big science concepts and theoretical physics. It also often includes elements of detective fiction and adventure stories. His fantasy includes the series "The Magic Goes Away", rational fantasy dealing with magic as a non-renewable resource.
Niven was born in Los Angeles. He is a great-grandson of Edward L. Doheny, an oil tycoon who drilled the first successful well in the Los Angeles City Oil Field in 1892 and was subsequently implicated in the Teapot Dome scandal. He briefly attended the California Institute of Technology and graduated with a Bachelor of Arts in mathematics (with a minor in psychology) from Washburn University in Topeka, Kansas in 1962. He also completed a year of graduate work in mathematics at the University of California, Los Angeles. On September 6, 1969, he married Marilyn Joyce "Fuzzy Pink" Wisowaty, a science fiction and Regency literature fan. He is an agnostic.
Niven is the author of numerous science fiction short stories and novels, beginning with his 1964 story "The Coldest Place". In this story, the coldest place concerned is the dark side of Mercury, which at the time the story was written was thought to be tidally locked with the Sun (it was found to rotate in a 2:3 resonance after Niven received payment for the story, but before it was published).
Algis Budrys said in 1968 that Niven becoming a top writer despite the New Wave was evidence that "trends are for second-raters". In addition to the Nebula award in 1970 and the Hugo and Locus awards in 1971 for "Ringworld", Niven won the Hugo Award for Best Short Story for "Neutron Star" in 1967. He won the same award in 1972, for "Inconstant Moon", and in 1975 for "The Hole Man". In 1976, he won the Hugo Award for Best Novelette for "The Borderland of Sol".
Niven has written scripts for three science fiction television series: the original "Land of the Lost" series; "Star Trek: The Animated Series", for which he adapted his early story "The Soft Weapon"; and "The Outer Limits", for which he adapted his story "Inconstant Moon" into an episode of the same name.
Niven has also written for the DC Comics character Green Lantern, including in his stories hard science fiction concepts such as universal entropy and the redshift effect.
He has included limited psi gifts (mind over matter) in some of his characters, like Gil Hamilton's psychic arm, which can reach only as far as a corporeal arm could, though it can, for example, reach through solid materials and manipulate objects on the other side, or through videophone screens; or Matt Keller's ability to make people not notice him.
Several of his stories predicted the black market in transplant organs ("organlegging").
Many of Niven's stories—sometimes called the Tales of Known Space—take place in his Known Space universe, in which humanity shares the several habitable star systems nearest to the Sun with over a dozen alien species, including the aggressive feline Kzinti and the very intelligent but cowardly Pierson's Puppeteers, which are frequently central characters. The "Ringworld" series is part of the Tales of Known Space, and Niven has shared the setting with other writers since a 1988 anthology, "The Man-Kzin Wars" (Baen Books, jointly edited with Jerry Pournelle and Dean Ing). There have been several volumes of short stories and novellas.
Niven has also written a logical fantasy series "The Magic Goes Away", which utilizes an exhaustible resource called "mana" to power a rule-based "technological" magic. "The Draco Tavern" series of short stories take place in a more light-hearted science fiction universe, and are told from the point of view of the proprietor of an omni-species bar. The whimsical "Svetz" series consists of a collection of short stories, "The Flight of the Horse", and a novel, "Rainbow Mars", which involve a nominal time machine sent back to retrieve long-extinct animals, but which travels, in fact, into alternative realities and brings back mythical creatures such as a Roc and a Unicorn. Much of his writing since the 1970s has been in collaboration, particularly with Jerry Pournelle and Steven Barnes, but also Brenda Cooper and Edward M. Lerner.
One of Niven's best known humorous works is "Man of Steel, Woman of Kleenex", in which he uses real-world physics to underline the difficulties of Superman and a human woman (Lois Lane or Lana Lang) mating.
Niven appeared in the 1980 science documentary film "Target... Earth?"
In Niven's novel "Ringworld", he envisions a Ringworld: a band of material, roughly a million miles wide, of approximately the same diameter as Earth's orbit, rotating around a star. The idea originated in Niven's attempts to imagine a more efficient version of a Dyson sphere, which could produce the effect of surface gravity through rotation. Given that spinning a Dyson sphere would result in the atmosphere pooling around the equator, the Ringworld removes all the extraneous parts of the structure, leaving a spinning band landscaped on the sun-facing side, with the atmosphere and inhabitants kept in place through centrifugal force and high perimeter walls (rim walls). After publication of "Ringworld", two friends, Dan Alderson and Ctein, told Niven that the Ringworld was dynamically unstable: if the center of rotation drifts away from the central sun, gravitational forces will not "re-center" it, allowing the ring to eventually contact the sun and be destroyed. Niven used this as a core plot element in the sequel novel, "The Ringworld Engineers".
This idea proved influential, serving as an alternative to a full Dyson sphere that required fewer assumptions (such as artificial gravity) and allowed a day/night cycle to be introduced (through the use of a smaller ring of "shadow squares", rotating between the ring and its sun). This was further developed by Iain M. Banks in his Culture series, which features smaller ringworld-like megastructures called Orbitals that orbit a star rather than encircling it entirely (actual "Rings" and Dyson "Spheres" are also mentioned but are much rarer). Alastair Reynolds also uses ringworlds in his 2008 novel "House of Suns". The Ringworld-like namesake of the "Halo" video game series is the eponymous Halo megastructure/superweapon.
In the trading card game "Magic: The Gathering", the card Nevinyrral's Disk uses his name, spelled backwards. This tribute was paid because the game's system, in which mana from lands is used to power spells, was inspired by his book "The Magic Goes Away".
According to author Michael Moorcock, in 1967, Niven was among those Science Fiction Writers of America members who voiced opposition to the Vietnam War. However, in 1968 Niven's name appeared in a pro-war ad in "Galaxy Science Fiction".
Niven was an adviser to Ronald Reagan on the creation of the Strategic Defense Initiative antimissile policy, as part of the Citizens' Advisory Council on National Space Policy – as covered in the BBC documentary "Pandora's Box" by Adam Curtis.
In 2007, Niven, in conjunction with a group of science fiction writers known as SIGMA, led by Pournelle, began advising the U.S. Department of Homeland Security as to future trends affecting terror policy and other topics.
Larry Niven is also known in science fiction fandom for "Niven's Law": "There is no cause so right that one cannot find a fool following it." Over the course of his career Niven has added to this first law a list of Niven's Laws which he describes as "how the Universe works" as far as he can tell.
https://en.wikipedia.org/wiki?curid=18210
Linux distribution
A Linux distribution (often abbreviated as distro) is an operating system made from a software collection that is based upon the Linux kernel and, often, a package management system. Linux users usually obtain their operating system by downloading one of the Linux distributions, which are available for a wide variety of systems ranging from embedded devices (for example, OpenWrt) and personal computers (for example, Linux Mint) to powerful supercomputers (for example, Rocks Cluster Distribution).
A typical Linux distribution comprises a Linux kernel, GNU tools and libraries, additional software, documentation, a window system (the most common being the X Window System), a window manager, and a desktop environment.
Most of the included software is free and open-source software made available both as compiled binaries and in source code form, allowing modifications to the original software. Usually, Linux distributions optionally include some proprietary software that may not be available in source code form, such as binary blobs required for some device drivers.
A Linux distribution may also be described as a particular assortment of application and utility software (various GNU tools and libraries, for example), packaged together with the Linux kernel in such a way that its capabilities meet the needs of many users. The software is usually adapted to the distribution and then packaged into software packages by the distribution's maintainers. The software packages are available online in so-called repositories, which are storage locations usually distributed around the world. Apart from glue components, such as the distribution installers (for example, Debian-Installer and Anaconda) and the package management systems, very few packages are originally written from the ground up by the maintainers of a Linux distribution.
Almost six hundred Linux distributions exist, with close to five hundred of those in active development. Because of the huge availability of software, distributions have taken a wide variety of forms, including those suitable for use on desktops, servers, laptops, netbooks, mobile phones and tablets, as well as minimal environments typically for use in embedded systems. There are commercially backed distributions, such as Fedora (Red Hat), openSUSE (SUSE) and Ubuntu (Canonical Ltd.), and entirely community-driven distributions, such as Debian, Slackware, Gentoo and Arch Linux. Most distributions come ready to use and pre-compiled for a specific instruction set, while some distributions (such as Gentoo) are distributed mostly in source code form and compiled locally during installation.
Linus Torvalds developed the Linux kernel and distributed its first version, 0.01, in 1991. Linux was initially distributed as source code only, and later as a pair of downloadable floppy disk images: one bootable and containing the Linux kernel itself, and the other with a set of GNU utilities and tools for setting up a file system. Since the installation procedure was complicated, especially in the face of growing amounts of available software, distributions sprang up to simplify this.
Early distributions included MCC Interim Linux, Softlanding Linux System (SLS), TAMU Linux and Yggdrasil Linux/GNU/X.
The two oldest distribution projects still in active development started in 1993. The SLS distribution was not well maintained, so in July 1993 a new distribution, called Slackware and based on SLS, was released by Patrick Volkerding. Also dissatisfied with SLS, Ian Murdock set out to create a free distribution by founding Debian, which had its first release in December 1993.
Users were attracted to Linux distributions as alternatives to the DOS and Microsoft Windows operating systems on IBM PC compatible computers, Mac OS on the Apple Macintosh, and proprietary versions of Unix. Most early adopters were familiar with Unix from work or school. They embraced Linux distributions for their low (if any) cost, and availability of the source code for most or all of the software included.
As of 2017, Linux has become more popular in server and embedded devices markets than in the desktop market. For example, Linux is used on over 50% of web servers, whereas its desktop market share is about 3.7%.
Many Linux distributions provide an installation system akin to that provided with other modern operating systems. On the other hand, some distributions, including Gentoo Linux, provide only the binaries of a basic kernel, compilation tools, and an installer; the installer compiles all the requested software for the specific architecture of the user's computer, using these tools and the provided source code.
Distributions are normally segmented into "packages". Each package contains a specific application or service. Examples of packages are a library for handling the PNG image format, a collection of fonts or a web browser.
The package is typically provided as compiled code, with installation and removal of packages handled by a package management system (PMS) rather than a simple file archiver. Each package intended for such a PMS contains meta-information such as a package description, version, and "dependencies". The package management system can evaluate this meta-information to allow package searches, to perform an automatic upgrade to a newer version, to check that all dependencies of a package are fulfilled, and/or to fulfill them automatically.
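A toy sketch of how such meta-information can be used to compute an installation order follows (the package names and the repository structure are invented for the example; real package managers also resolve versions, conflicts and circular dependencies):

```python
# Sketch of dependency resolution: a depth-first traversal of declared
# dependencies yields an order in which each package is installed only
# after everything it depends on.

repo = {
    "webbrowser": ["libpng", "fonts"],
    "libpng": ["zlib"],
    "fonts": [],
    "zlib": [],
}

def install_order(package, repo, seen=None):
    if seen is None:
        seen = []
    for dep in repo[package]:
        if dep not in seen:
            install_order(dep, repo, seen)
    if package not in seen:
        seen.append(package)
    return seen

print(install_order("webbrowser", repo))
# ['zlib', 'libpng', 'fonts', 'webbrowser']
```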
Although Linux distributions typically contain much more software than proprietary operating systems, it is normal for local administrators to also install software not included in the distribution. An example would be a newer version of a software application than that supplied with a distribution, or an alternative to that chosen by the distribution (for example, KDE Plasma Workspaces rather than GNOME or vice versa for the user interface layer). If the additional software is distributed in source-only form, this approach requires local compilation. However, if additional software is locally added, the "state" of the local system may fall out of synchronization with the state of the package manager's database. If so, the local administrator will be required to take additional measures to ensure the entire system is kept up to date. The package manager may no longer be able to do so automatically.
Most distributions install packages, including the kernel and other core operating system components, in a predetermined configuration. Few now require or even permit configuration adjustments at first install time. This makes installation less daunting, particularly for new users, but is not always acceptable. For specific requirements, much software must be carefully configured to be useful, to work correctly with other software, or to be secure, and local administrators are often obliged to spend time reviewing and reconfiguring assorted software.
Some distributions go to considerable lengths to specifically adjust and customize most or all of the software included in the distribution. Not all do so. Some distributions provide configuration tools to assist in this process.
By replacing "everything" provided in a distribution, an administrator may reach a "distribution-less" state: everything was retrieved, compiled, configured, and installed locally. It is possible to build such a system from scratch, avoiding a distribution altogether. One needs a way to generate the first binaries until the system is "self-hosting". This can be done via compilation on another system capable of building binaries for the intended target (possibly by cross-compilation). For example, see Linux From Scratch.
In broad terms, Linux distributions may be commercial or non-commercial; designed for enterprise users, power users, or home users; general-purpose or highly specialized; targeted at servers, desktops, or embedded devices; and supported on many hardware architectures or tied to a specific platform.
The diversity of Linux distributions is due to technical, organizational, and philosophical variation among vendors and users. The permissive licensing of free software means that any user with sufficient knowledge and interest can customize an existing distribution or design one to suit his or her own needs.
Rolling Linux distributions are kept current using small and frequent updates. The software contained in a rolling distribution's stack is, however, usually still developed and versioned upstream as standard releases.
Rolling releases can be either fully rolling, partially rolling, or optionally rolling.
The terms "partially rolling" and "partly rolling" (along with synonyms "semi-rolling" and "half-rolling"), "fully rolling", "truly rolling" and "optionally rolling" are all standard terms used by software developers and users.
Repositories of rolling distributions usually contain very recent software releases – often the latest stable software releases available. They have pseudo-releases and installation media that are simply snapshots of the software distribution at the time of the release of the installation image. Typically, a rolling release operating system installed from an older installation medium can be fully updated post-installation to a current state.
There are pros and cons to both standard release and rolling release software development methodologies.
In terms of the software development process, standard releases require significant development effort to be spent on keeping old versions up to date, because bug fixes must be propagated back from the newest development branch, rather than focusing that effort on the newest branch alone. Also, unlike rolling releases, standard releases require more than one code branch to be developed and maintained, which increases the software development and software maintenance workload of the software developers and software maintainers.
On the other hand, software features and technology planning are easier in standard releases due to a better understanding of upcoming features in the next version(s) rather than simply the whim of the developers at any given time. Software release cycles can also be synchronized with those of major upstream software projects, such as desktop environments.
As far as the user experience, standard releases are often viewed as more stable and bug-free since software conflicts can be more easily addressed and the software stack more thoroughly tested and evaluated, during the software development cycle. For this reason, they tend to be the preferred choice in enterprise environments and mission-critical tasks.
However, rolling releases offer more current software, which can also provide increased stability and fewer software bugs, along with the additional benefits of new features, greater functionality, faster running speeds, and improved system and application security. Regarding software security, the rolling release model can have advantages in timely security updates, fixing system or application security bugs and vulnerabilities that standard releases may otherwise have to wait until the next release to address, or patch across multiple maintained versions. In a rolling release distribution, where the user has chosen to run it as a highly dynamic system, the constant flux of software packages can, however, introduce new unintended vulnerabilities.
A "live" distribution is a Linux distribution that can be booted from removable storage media such as optical discs or USB flash drives, instead of being installed on and booted from a hard disk drive. The portability of installation-free distributions makes them advantageous for applications such as demonstrations, borrowing someone else's computer, rescue operations, or as installation media for a standard distribution.
When the operating system is booted from a read-only medium such as a CD or DVD, any user data that needs to be retained between sessions cannot be stored on the boot device but must be written to another storage device, such as a USB flash drive or a hard disk drive.
Many Linux distributions provide a "live" form in addition to their conventional form, which is a network-based or removable-media image intended to be used only for installation; such distributions include SUSE, Ubuntu, Linux Mint, MEPIS and Fedora. Some distributions, including Knoppix, Puppy Linux, Devil-Linux, SuperGamer and SliTaz GNU/Linux, are designed primarily for live use. Additionally, some minimal distributions can be run directly from as little space as one floppy disk without the need to change the contents of the system's hard disk drive.
The website DistroWatch lists many Linux distributions, and displays some of the ones that have the most web traffic on the site. The Wikimedia Foundation released an analysis of the browser User Agents of visitors to WMF websites until 2015, which includes details of the most popular Operating System identifiers, including some Linux distributions. Many of the popular distributions are listed below.
Lightweight Linux distributions are those that have been designed with support for older hardware in mind, allowing older hardware to still be used productively, or for maximum possible speed on newer hardware by leaving more resources available for use by applications. Examples include Tiny Core Linux, Puppy Linux and SliTaz.
Other distributions target specific niches, such as routers (for example, OpenWrt), home theater PCs, firewalls, computer security, and educational or scientific use.
Whether Google's Android counts as a Linux distribution is a matter of definition. It uses the Linux kernel, so the Linux Foundation and Chris DiBona, Google's open source chief, agree that Android is a Linux distribution; others, such as Google engineer Patrick Brady, disagree by noting the lack of support for many GNU tools in Android, including glibc.
Other non-GNU distributions include CyanogenMod, its fork LineageOS, Android-x86 and, more recently, Tizen and Mer/Sailfish OS.
The Free Standards Group is an organization formed by major software and hardware vendors that aims to improve interoperability between different distributions. Among their proposed standards are the Linux Standard Base, which defines a common ABI and packaging system for Linux, and the Filesystem Hierarchy Standard, which recommends a standard directory layout, notably the basic directory names found on the root of the tree of any Linux filesystem. Those standards, however, see limited use, even among the distributions developed by members of the organization.
The diversity of Linux distributions means that not all software runs on all distributions, depending on what libraries and other system attributes are required. Packaged software and software repositories are usually specific to a particular distribution, though cross-installation is sometimes possible on closely related distributions.
The process of constantly switching between distributions is often referred to as "distro hopping". Virtual machines such as VirtualBox and VMware Workstation virtualize hardware allowing users to test live media on a virtual machine. Some websites like DistroWatch offer lists of popular distributions, and link to screenshots of operating systems as a way to get a first impression of various distributions.
There are tools available to help people select an appropriate distribution, such as several versions of the Linux Distribution Chooser, and the universal package search tool "whohas". There are easy ways to try out several Linux distributions before deciding on one: Multi Distro is a Live CD that contains nine space-saving distributions.
There are several ways to install a Linux distribution. Nowadays, the most common method of installing Linux is by booting from a live USB memory stick, which can be created by using a USB image writer application and an ISO image downloaded from one of the various Linux distribution websites. DVD discs, CD discs, network installations and even other hard drives can also be used as "installation media".
Early Linux distributions were installed using sets of floppies but this has been abandoned by all major distributions. Nowadays most distributions offer CD and DVD sets with the vital packages on the first disc and less important packages on later ones. They usually also allow installation over a network after booting from either a set of floppies or a CD with only a small amount of data on it.
New users tend to begin by partitioning a hard drive in order to keep their previously installed operating system. The Linux distribution can then be installed on its own separate partition without affecting previously saved data.
In a Live CD setup, the computer boots the entire operating system from CD without first installing it on the computer's hard disk. Some distributions have a Live CD "installer", where the computer boots the operating system from the disk, and then proceeds to install it onto the computer's hard disk, providing a seamless transition from the OS running from the CD to the OS running from the hard disk.
Both servers and personal computers that come with Linux already installed are available from vendors including Hewlett-Packard, Dell and System76.
On embedded devices, Linux is typically held in the device's firmware and may or may not be consumer-accessible.
Anaconda, one of the more popular installers, is used by Red Hat Enterprise Linux, Fedora (which uses the Fedora Media Writer) and other distributions to simplify the installation process. Debian, Ubuntu and many others use Debian-Installer.
Some distributions let the user install Linux on top of their current system, such as WinLinux or coLinux. Linux is installed to the Windows hard disk partition, and can be started from inside Windows itself.
Virtual machines (such as VirtualBox or VMware) also make it possible for Linux to be run inside another OS. The VM software simulates a separate computer onto which the Linux system is installed. After installation, the virtual machine can be booted as if it were an independent computer.
Various tools are also available to perform full dual-boot installations from existing platforms without a CD, most notably UNetbootin and Wubi.
Some specific proprietary software products are not available in any form for Linux. As of September 2015, the Steam gaming service has 1,500 games available on Linux, compared to 2,323 games for Mac and 6,500 Windows games. Emulation and API-translation projects like Wine and CrossOver make it possible to run non-Linux-based software on Linux systems, either by emulating a proprietary operating system or by translating proprietary API calls (e.g., calls to Microsoft's Win32 or DirectX APIs) into native Linux API calls. A virtual machine can also be used to run a proprietary OS (like Microsoft Windows) on top of Linux.
Computer hardware is usually sold with an operating system other than Linux already installed by the original equipment manufacturer (OEM). In the case of IBM PC compatibles the OS is usually Microsoft Windows; in the case of Apple Macintosh computers it has always been a version of Apple's OS, currently macOS; Sun Microsystems sold SPARC hardware with Solaris installed; video game consoles such as the Xbox, PlayStation, and Wii each have their own proprietary OS. This limits Linux's market share: consumers are unaware that an alternative exists, they must make a conscious effort to use a different operating system, and they must either perform the actual installation themselves, or depend on support from a friend, relative, or computer professional.
However, it is possible to buy hardware with Linux already installed. Lenovo, Hewlett-Packard, Dell, Affordy, Purism, and System76 all sell general-purpose Linux laptops, and custom-order PC manufacturers will also build Linux systems (but possibly with the Windows key on the keyboard). Fixstars Solutions (formerly Terra Soft) sells Macintosh computers and PlayStation 3 consoles with Yellow Dog Linux installed.
It is more common to find embedded devices sold with Linux as the default manufacturer-supported OS, including the Linksys NSLU2 NAS device, TiVo's line of personal video recorders, and Linux-based cellphones (including Android smartphones), PDAs, and portable music players.
The current Microsoft Windows license lets the manufacturer determine the refund policy. With previous versions of Windows, it was possible to obtain a refund if the manufacturer failed to provide the refund by litigation in the small claims courts. On 15 February 1999, a group of Linux users in Orange County, California held a "Windows Refund Day" protest in an attempt to pressure Microsoft into issuing them refunds. In France, the Linuxfrench and AFUL (French speaking Libre Software Users' Association) organizations along with free software activist Roberto Di Cosmo started a "Windows Detax" movement, which led to a 2006 petition against "racketiciels" (translation: Racketware) with 39,415 signatories and the DGCCRF branch of the French government filing several complaints against bundled software. On March 24, 2014, a new international petition was launched by AFUL on the Avaaz platform, translated into several languages and supported by many organizations around the world.
There are no official figures on popularity, adoption, downloads or installed base of Linux distributions.
There are also no official figures for the total number of Linux systems, partly due to the difficulty of quantifying the number of PCs running Linux (see Desktop Linux#Measuring adoption), since many users download Linux distributions. Hence, the sales figures for Linux systems and commercial Linux distributions indicate a much lower number of Linux systems and a lower level of Linux adoption than is actually the case; this is mainly due to Linux being free and open-source software that can be downloaded free of charge. The Linux Counter Project kept a running estimate of the number of Linux systems, but did not distinguish between rolling release and standard release distributions. It ceased operation in August 2018, though a few related blog posts were created through October 2018.
Desktop usage statistics for particular Linux distributions were collected and published in January 2020 by the Linux Hardware Project.
https://en.wikipedia.org/wiki?curid=18212
Los Angeles Dodgers
The Los Angeles Dodgers are an American professional baseball team based in Los Angeles, California. The Dodgers compete in Major League Baseball (MLB) as a member club of the National League (NL) West division. Established in 1883 in the New York City borough of Brooklyn, the team relocated to Los Angeles before the 1958 season. They played four seasons at the Los Angeles Memorial Coliseum before moving to Dodger Stadium, their current home, in 1962.
The Dodgers have won six World Series championships and twenty-three National League pennants. Eleven NL MVP Award winners have played for the Dodgers, winning a total of fourteen MVP Awards; eight Cy Young Award winners have also pitched for the Dodgers, winning a total of twelve Cy Young Awards. The team has had eighteen Rookie of the Year Award winners, more than twice as many as the second-place New York Yankees' eight, including four consecutive Rookies of the Year from 1979 to 1982 and five in a row from 1992 to 1996.
In the early 20th century, the team, then sometimes called the Brooklyn Robins after manager Wilbert Robinson, won league pennants in 1916 and 1920, losing the World Series both times, first to Boston and then Cleveland. In the 1930s, the team officially adopted the Dodgers nickname, which had been in use since the 1890s, named after the Brooklyn pedestrians who dodged the streetcars in the city.
In 1941, the Dodgers captured their third National League pennant, only to lose to the New York Yankees. This marked the onset of the Dodgers–Yankees rivalry, as the Dodgers would face them in their next six World Series appearances. Led by Jackie Robinson, the first black Major League Baseball player of the modern era, and three-time National League Most Valuable Player Roy Campanella, also signed out of the Negro Leagues, the Dodgers captured their first World Series title in 1955 by defeating the Yankees for the first time, a story notably described in the 1972 book "The Boys of Summer".
Following the 1957 season, the team left Brooklyn. In just their second season in Los Angeles, the Dodgers won their second World Series title, beating the Chicago White Sox in six games in 1959. Spearheaded by the dominant pitching of Sandy Koufax and Don Drysdale, the Dodgers captured three pennants in the 1960s and won two more World Series titles, sweeping the Yankees in four games in 1963 and edging the Minnesota Twins in seven in 1965. The 1963 sweep was their second victory against the Yankees, and their first against them as a Los Angeles team. The Dodgers won four more pennants in 1966, 1974, 1977 and 1978, but lost in each World Series appearance. They went on to win the World Series again in 1981, thanks in part to pitching sensation Fernando Valenzuela.
The early 1980s were affectionately dubbed "Fernandomania." In 1988, another pitching hero, Orel Hershiser, again led them to a World Series victory, aided by one of the most memorable home runs of all time, hit by injured star outfielder Kirk Gibson, who came off the bench to pinch-hit with two outs in the bottom of the ninth inning of Game 1, in his only appearance of the series. The Dodgers won the pennant in 2017 and 2018, but lost the World Series to the Houston Astros and Boston Red Sox respectively.
The Dodgers share a fierce rivalry with the San Francisco Giants, dating back to when the two franchises played in New York City. Both teams moved west for the 1958 season. Both the Brooklyn/Los Angeles Dodgers and the New York/San Francisco Giants have appeared in the World Series 20 times. The Giants have won two more World Series titles, eight to the Dodgers' six; when the two teams were based in New York, the Giants won five World Series championships and the Dodgers one. Since the move to California, the Dodgers have won five World Series while the Giants have won three.
The Dodgers were founded in 1883 as the Brooklyn Atlantics, taking the name of a defunct team that had played in Brooklyn before them. The team joined the American Association in 1884 and won the AA championship in 1889 before joining the National League in 1890. They promptly won the NL Championship their first year in the League. The team was known alternatively as the Bridegrooms, Grooms, Superbas, Robins, and Trolley Dodgers before officially becoming the Brooklyn Dodgers in the 1930s.
In Brooklyn, the Dodgers won the NL pennant twelve times (1890, 1899, 1900, 1916, 1920, 1941, 1947, 1949, 1952, 1953, 1955, 1956) and the World Series in 1955. After moving to Los Angeles, the team won National League pennants in 1959, 1963, 1965, 1966, 1974, 1977, 1978, 1981, 1988, 2017, and 2018, with World Series championships in 1959, 1963, 1965, 1981 and 1988. In all, the Dodgers have appeared in 20 World Series: 9 in Brooklyn and 11 in Los Angeles.
For most of the first half of the 20th century, no Major League Baseball team employed an African American player. Jackie Robinson became the first African American to play for a Major League Baseball team when he played his first major league game on April 15, 1947, as a member of the Brooklyn Dodgers. This was mainly due to general manager Branch Rickey's efforts. The deeply religious Rickey's motivation appears to have been primarily moral, although business considerations were also a factor. Rickey was a member of The Methodist Church, the antecedent denomination to The United Methodist Church of today, which was a strong advocate for social justice and active later in the American Civil Rights Movement.
This event was the harbinger of the integration of professional sports in the United States, the concomitant demise of the Negro Leagues, and is regarded as a key moment in the history of the American Civil Rights Movement. Robinson was an exceptional player, a speedy runner who sparked the team with his intensity. He was the inaugural recipient of the Rookie of the Year award, which is now named the Jackie Robinson Award in his honor. The Dodgers' willingness to integrate, when most other teams refused to, was a key factor in their 1947–1956 success. They won six pennants in those 10 years with the help of Robinson, three-time MVP Roy Campanella, Cy Young Award winner Don Newcombe, Jim Gilliam and Joe Black. Robinson would eventually go on to become the first African-American elected to the Baseball Hall of Fame in 1962.
Real estate investor Walter O'Malley acquired majority ownership of the Dodgers in 1950, when he bought the 25 percent share of co-owner Branch Rickey and became allied with the widow of another equal partner, Mrs. John L. Smith. Before long, he was working to buy new land in Brooklyn to build a more accessible and profitable ballpark than the aging Ebbets Field. Beloved as it was, Ebbets Field was no longer well served by its aging infrastructure, and the Dodgers could no longer sell out the park even in the heat of a pennant race, despite largely dominating the National League from 1946 to 1957.
O'Malley wanted to build a new, state of the art stadium in Brooklyn. But City Planner Robert Moses and New York politicians refused to grant him the eminent domain authority required to build pursuant to O'Malley's plans. To put pressure on the city, during the 1955 season, O'Malley announced that the team would play seven regular season games and one exhibition game at Jersey City's Roosevelt Stadium in 1956. Moses and the City considered this an empty threat, and did not believe O'Malley would go through with moving the team from New York City.
After teams began to travel to and from games by air instead of train, it became possible to include locations in the far west. Los Angeles officials attended the 1956 World Series hoping to persuade the Washington Senators to move to the West Coast. When O'Malley heard that LA was looking for a club, he sent word to the Los Angeles officials that he was interested in talking. LA offered him what New York would not: a chance to buy land suitable for building a ballpark, and to own that ballpark, giving him complete control over all revenue streams. When the news came out, NYC Mayor Robert F. Wagner, Jr. and Moses made an offer to build a ballpark on the World's Fair Grounds in Queens that would be shared by the Giants and Dodgers. However, O'Malley was interested in a park only under his own conditions, and the plans for a new stadium in Brooklyn seemed like a pipe dream. O'Malley decided to move the Dodgers to California, convincing Giants owner Horace Stoneham to move to San Francisco instead of Minneapolis to keep the Giants-Dodgers rivalry alive on the West Coast.
The Dodgers played their final game at Ebbets Field on September 24, 1957, which the Dodgers won 2–0 over the Pittsburgh Pirates.
New York remained a one-team town with the New York Yankees until 1962, when Joan Payson founded the New York Mets and brought National League baseball back to the city. The Mets adopted the blue background used by the Dodgers, honoring their New York NL forebears with a blend of Dodgers blue and Giants orange.
The Dodgers were the first Major League Baseball team to play in Los Angeles. On April 18, 1958, the Dodgers played their first LA game, defeating the newly relocated San Francisco Giants, formerly of New York, 6–5, before 78,672 fans at the Los Angeles Memorial Coliseum. Catcher Roy Campanella, left partially paralyzed in an off-season accident, was never able to play in Los Angeles.
Construction on Dodger Stadium was completed in time for Opening Day 1962. With its clean, simple lines and its picturesque setting amid hills and palm trees, the ballpark quickly became an icon of the Dodgers and their new California lifestyle. O'Malley was determined that there would not be a bad seat in the house, achieving this by cantilevered grandstands that have since been widely imitated. More importantly for the team, the stadium's spacious dimensions, along with other factors, gave defense an advantage over offense and the Dodgers moved to take advantage of this by assembling a team that would excel with its pitching.
Since moving to Los Angeles, the Dodgers have won 11 more National League Championships and five World Series rings.
The Dodgers' official history reports that the term "Trolley Dodgers" was attached to the Brooklyn ballclub due to the complex maze of trolley cars that weaved its way through the borough of Brooklyn.
In 1892, the city of Brooklyn (Brooklyn was an independent city until annexed by New York City in 1898) began replacing its slow-moving, horse-drawn trolley lines with the faster, more powerful electric trolley lines. Within less than three years, by the end of 1895, electric trolley accidents in Brooklyn had resulted in more than 130 deaths and maimed well over 500 people. Brooklyn's high profile, the significant number of widely reported accidents, and a trolley strike in early 1895, combined to create a strong association in the public's mind between Brooklyn and trolley dodging.
Sportswriters started using the name "Trolley Dodgers" to refer to the Brooklyn team early in the 1895 season. The name was shortened to, on occasion, the "Brooklyn Dodgers" as early as 1898.
Sportswriters in the early 20th century began referring to the Dodgers as the "Bums", in reference to the team's fans and possibly because of the "street character" nature of Jack Dawkins, the "Artful Dodger" in Charles Dickens' "Oliver Twist". Newspaper cartoonist Willard Mullin used a drawing of famous clown Emmett Kelly to depict "Dem Bums": the team would later use "Weary Willie" in promotional images, and Kelly himself was a club mascot during the 1950s.
Other team names used by the franchise were the Atlantics, Grays, Grooms, Bridegrooms, Superbas and Robins. All of these nicknames were used by fans and sportswriters to describe the team, but not in any official capacity. The team's legal name was the Brooklyn Base Ball Club. However, the Trolley Dodger nickname was used throughout this period, simultaneously with these other nicknames, by fans and sportswriters of the day. The team did not use the name in any formal sense until 1932, when the word "Dodgers" appeared on team jerseys. The "conclusive shift" came in 1933, when both home and road jerseys for the team bore the name "Dodgers".
Examples of how the many popularized names of the team were used are available from newspaper articles before 1932. A New York Times article describing a game in 1916 starts out: "Jimmy Callahan, pilot of the Pirates, did his best to wreck the hopes the Dodgers have of gaining the National League pennant", but then goes on to comment: "the only thing that saved the Superbas from being toppled from first place was that the Phillies lost one of the two games played". What is interesting about the use of these two nicknames is that most baseball statistics sites and baseball historians generally now refer to the pennant-winning 1916 Brooklyn team as the Robins. A 1918 New York Times article uses the nickname in its title: "Buccaneers Take Last From Robins", but the subtitle of the article reads: "Subdue The Superbas By 11 To 4, Making Series An Even Break".
Another example of the use of the many nicknames is found on the program issued at Ebbets Field for the 1920 World Series, which identifies the matchup in the series as "Dodgers vs. Indians" despite the fact that the Robins nickname had been in consistent use for around six years. The "Robins" nickname was derived from the name of their Hall of Fame manager, Wilbert Robinson, who led the team from 1914 to 1931.
The Dodgers' uniform has remained relatively unchanged since the 1930s. The home jersey is white with "Dodgers" written in script across the chest in royal blue. The road jersey is gray with "Los Angeles" written in script across the chest in royal blue. The word "Dodgers" was first used on the front of the team's home jersey in 1933; the uniform was then white with red pinstripes and a stylized "B" on the left shoulder. The Dodgers also wore green-outlined uniforms and green caps throughout the 1937 season but reverted to blue the following year.
The current design was created in 1939, and has remained the same ever since with only cosmetic changes. Originally intended for the 1951 World Series for which the ballclub failed to qualify, red numbers under the "Dodgers" script were added to the home uniform in 1952. The road jersey also has a red uniform number under the script. When the franchise moved from Brooklyn to Los Angeles, the city name on the road jersey changed, and the stylized "B" was replaced with the interlocking "LA" on the caps in 1958. In 1970, the Dodgers removed the city name from the road jerseys and had "Dodgers" on both the home and away uniforms. The city script returned to the road jerseys in 1999, and the tradition-rich Dodgers flirted with an alternate uniform for the first time since 1944 (when all-blue satin uniforms were introduced). These 1999 alternate jerseys had a royal top with the "Dodgers" script in white across the chest, and the red number on the front. These were worn with white pants and a new cap with silver brim, top button and Dodger logo. These alternates proved unpopular and the team abandoned them after only one season. In 2014, the Dodgers introduced an alternate road jersey: a gray version with the "Dodgers" script instead of the city name. In 2018, the Dodgers wore their 60th anniversary patch to honor the 60 years of being in Los Angeles.
The Dodgers have been groundbreaking in their signing of players from Asia, mainly Japan, South Korea, and Taiwan. Former owner Peter O'Malley began reaching out in 1980 by starting clinics in China and South Korea, building baseball fields in two Chinese cities, and in 1998 becoming the first major league team to open an office in Asia. The Dodgers were the second team to start a Japanese player in recent history, pitcher Hideo Nomo, the first team to start a South Korean player, pitcher Chan Ho Park, and the first to start a Taiwanese player, Chin-Feng Chen. In addition, they were the first team to send out three Asian pitchers, from different Asian countries, in one game: Park, Hong-Chih Kuo of Taiwan, and Takashi Saito of Japan. In the 2008 season, the Dodgers had the most Asian players on their roster of any major league team, with five. They included Japanese pitchers Takashi Saito and Hiroki Kuroda; South Korean pitcher Chan Ho Park; and Taiwanese pitcher Hong-Chih Kuo and infielder Chin-Lung Hu. In 2005, the Dodgers' Hee Seop Choi became the first Asian player to compete in the Home Run Derby. For the 2013 season, the Dodgers signed starting pitcher Hyun-Jin Ryu to a six-year, $36 million contract, after posting a bid of nearly $27 million to acquire him from the KBO's Hanwha Eagles. For the 2016 season, the Dodgers signed starting pitcher Kenta Maeda to an eight-year, $25 million contract, after posting a bid of $20 million to acquire him from the NPB's Hiroshima Toyo Carp.
The Dodgers' rivalry with the San Francisco Giants dates back to the 19th century, when the two teams were based in New York; the rivalry with the New York Yankees took place when the Dodgers were based in New York, but was revived with their East Coast/West Coast World Series battles in 1963, 1977, 1978, and 1981. The Dodgers rivalry with the Philadelphia Phillies also dates back to their days in New York, but was most fierce during the 1970s, 1980s, and 2000s. The Dodgers also had a heated rivalry with the Cincinnati Reds during the 1970s, 1980s and early 1990s. The rivalry with the Los Angeles Angels and the San Diego Padres dates back to the Angels' and Padres' respective inaugural seasons (Angels in 1961, Padres in 1969). Regional proximity is behind the rivalries with both the Angels and the Padres.
The Dodgers–Giants rivalry is one of the longest-standing rivalries in U.S. baseball.
The feud between the Dodgers and the San Francisco Giants began in the late 19th century when both clubs were based in New York City, with the Dodgers playing in Brooklyn and the Giants playing at the Polo Grounds in Manhattan. After the 1957 season, Dodgers owner Walter O'Malley moved the team to Los Angeles for financial and other reasons. Along the way, he managed to convince Giants owner Horace Stoneham—who was considering moving his team to Minnesota—to preserve the rivalry by bringing his team to California as well. New York baseball fans were stunned and heartbroken by the move. Given that the cities of Los Angeles and San Francisco have been bitter rivals in economic, cultural, and political arenas for over a century and a half, the new venue in California became fertile ground for its transplantation.
Each team's ability to endure for over a century while moving across an entire continent, as well as the rivalry's leap from a cross-city to a cross-state engagement, have led to the rivalry being considered one of the greatest in American sports history.
Unlike many other historic baseball match-ups in which one team remains dominant for most of their history, the Dodgers–Giants rivalry has exhibited a persistent balance in the respective successes of the two teams. While the Giants have more wins in franchise history, the Dodgers and Giants are tied for the most National League pennants at 23, though the Giants have won eight World Series titles, while the Dodgers have won six. The 2010 World Series was the Giants' first championship since moving to California, while the Dodgers had won five World Series titles since their move, their last title coming in the 1988 World Series.
The Freeway Series refers to games played between the Dodgers and the Los Angeles Angels. It takes its name from the massive freeway system of the greater Los Angeles metropolitan area, home to both teams; one can travel from one team's stadium to the other simply by driving along Interstate 5. The term is akin to "Subway Series", which refers to meetings between New York City baseball teams, and it also inspired the official name of the region's NHL rivalry, the "Freeway Face-Off."
Animosity between the teams' fanbases grew stronger in 2005, when new Angels owner Arte Moreno changed the name of his ball club from the "Anaheim Angels" to the "Los Angeles Angels of Anaheim". Since the city of Anaheim is located roughly 30 miles from downtown Los Angeles, the Angels franchise was ridiculed throughout the league for the contradictory name, especially by Dodgers owner Frank McCourt, who filed a formal complaint with commissioner Bud Selig. Once the complaint was denied, McCourt devised a T-shirt mocking the crosstown rivals that read "The Los Angeles Dodgers of Los Angeles", which remains popular among the fanbase to this day.
The Dodgers–Yankees rivalry is one of the most well-known rivalries in Major League Baseball. The two teams have met eleven times in the World Series, more times than any other pair from the American and National Leagues. The initial significance was embodied in the two teams' proximity in New York City, when the Dodgers initially played in Brooklyn. After the Dodgers moved to Los Angeles in 1958, the rivalry retained its significance as the two teams represented the dominant cities on each coast of the United States, and since the 1980s, the two largest cities in the United States.
Although the rivalry's significance arose from the two teams' numerous World Series meetings, the Yankees and Dodgers have not met in the World Series since 1981. They did not play each other in a non-exhibition game again until 2004, when they met in a three-game interleague series. Their most recent meeting was in August 2019, when the Yankees won two out of three games in Los Angeles.
The Dodgers have a loyal fanbase, evidenced by the fact that they were the first MLB team to attract more than 3 million fans in a season (in 1978), and they accomplished that feat six more times before any other franchise did it once. The Dodgers drew at least 3 million fans for 15 consecutive seasons from 1996 to 2010, the longest such streak in all of MLB. On July 3, 2007, Dodgers management announced that total franchise attendance, dating back to 1901, had reached 175 million, a record for all professional sports. In 2007, the Dodgers set a franchise record for single-season attendance, attracting over 3.8 million fans, and in 2009 they led MLB in total attendance. The Dodger baseball cap is consistently among the top three in sales. During the 2011–2012 period, then-owner Frank McCourt went through a contentious divorce in which ownership of the team was disputed; he ultimately paid his wife $131 million as part of the settlement. As a result, the payroll was low for a big-market team, crippling the Dodgers in the free-agent market, and the team's performance waned amid the distracting front-office drama, resulting in lower attendance.
Given the team's proximity to Hollywood, numerous celebrities can often be seen attending home games at Dodger Stadium. Celebrities such as co-owner Magic Johnson, Mary Hart, Larry King, Tiger Woods, Alyssa Milano and Shia LaBeouf are known to sit at field box seats behind home plate where they sign autographs for fellow Dodger fans. Actor Bryan Cranston is a lifelong Dodger fan.
The Dodgers set the world record for the largest attendance at a single baseball game during an exhibition against the Boston Red Sox on March 28, 2008, at the Los Angeles Memorial Coliseum in honor of the Dodgers' 50th anniversary in Los Angeles, with 115,300 fans in attendance. All proceeds from the game benefited the official charity of the Dodgers, ThinkCure!, which supports cancer research at Children's Hospital Los Angeles and City of Hope. Most Dodgers fans come from Southern California and parts of southern Nevada, but there are also strong pockets of Dodger support in Mexico and throughout Asia, and the team's away games around the US usually attract substantial numbers of expatriate and traveling fans.
Vin Scully called Dodgers games from 1950 to 2016. His longtime partners were Jerry Doggett (1956–1987) and Ross Porter (1977–2004). In 1976, Scully was selected by Dodgers fans as the Most Memorable Personality (on the field or off) in the team's history, and he is a recipient of the Baseball Hall of Fame's Ford C. Frick Award for broadcasters (1982). Unlike the modern style in which multiple sportscasters hold an on-air conversation (usually with one functioning as play-by-play announcer and the other[s] as color commentators), Scully, Doggett, and Porter generally called games solo, trading off inning by inning. In the 1980s and 1990s, Scully would call the entire radio broadcast except for the third and seventh innings, allowing the other Dodger commentators to broadcast an inning each.
When Doggett retired after the 1987 season, he was replaced by Hall of Fame Dodgers pitcher Don Drysdale, who had previously broadcast games for the California Angels and Chicago White Sox. Drysdale died in his hotel room following a heart attack before a game in Montreal in 1993; it was a difficult broadcast for Scully and Porter, who could not mention the death on-air until Drysdale's family had been notified and the official announcement made. Drysdale was replaced by former Dodgers outfielder Rick Monday. Porter's tenure ended after the 2004 season, after which a format of play-by-play announcers and color commentators was installed, led by Monday and newcomer Charley Steiner. Scully, however, continued to announce solo.
Scully called roughly 100 games per season (all home games plus road games in California and Arizona), appearing on both the flagship radio station KLAC and the television channel SportsNet LA. Scully was simulcast for the first three innings of each of his appearances, then announced only for the TV audience. If Scully was calling the game, Steiner took over play-by-play on radio beginning with the fourth inning, with Monday as color commentator. If Scully was not calling the game, Steiner and Orel Hershiser called the entire game on television while Monday and Kevin Kennedy did the same on radio. When the Dodgers were in post-season play, Scully called the first three and last three innings of the radio broadcast alone, and Steiner and Monday handled the middle innings. Vin Scully retired from calling games in 2016; his 67-year tenure with the Dodgers was the longest of any broadcaster with a single sports team. In 2017, Dodgers management selected Joe Davis to handle play-by-play on television, with Orel Hershiser as color commentator.
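The rotation amounted to a fixed set of rules about who held the radio call in a given inning. The sketch below is a purely illustrative Python rendering of those rules as described above, not any real scheduling system; the function name and the simplifications (nine innings, no rainouts or extra innings) are invented for illustration.

```python
def radio_booth(scully_calling: bool, inning: int, postseason: bool = False) -> list:
    """Who has the Dodgers radio call in a given inning, per the rotation above.

    Illustrative sketch only: innings beyond nine and other edge cases
    are ignored for simplicity.
    """
    if postseason:
        # Scully alone called the first three and last three innings on radio;
        # Steiner and Monday handled the middle innings.
        return ["Scully"] if inning <= 3 or inning >= 7 else ["Steiner", "Monday"]
    if scully_calling:
        # Scully's TV call was simulcast on radio for the first three innings,
        # after which Steiner (play-by-play) and Monday (color) took over radio.
        return ["Scully (simulcast)"] if inning <= 3 else ["Steiner", "Monday"]
    # Games Scully did not call: Monday and Kevin Kennedy had the radio broadcast.
    return ["Monday", "Kennedy"]
```

For example, `radio_booth(True, 2)` yields the simulcast, while `radio_booth(True, 5)` hands the radio call to Steiner and Monday, matching the description above.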
The Dodgers also broadcast on radio in Spanish, with play-by-play voiced by another Frick Award winner, Jaime Jarrín, who has been with the Dodgers since 1959. The color analyst for some games is former Dodger pitcher Fernando Valenzuela, for whom Jarrín once translated post-game interviews. The Spanish-language radio flagship station is KTNQ.
Koufax, Campanella, and Robinson were the first Dodgers to have their numbers retired, in a ceremony at Dodger Stadium on June 4, 1972. This was the year in which Koufax was inducted into the Baseball Hall of Fame; Robinson and Campanella were already Hall-of-Famers.
Alston's number was retired in the year following his retirement as the Dodgers manager, six years before he was inducted into the Hall of Fame.
Gilliam died suddenly in 1978 after a 28-year career with the Dodgers organization. The Dodgers retired his number two days after his death, prior to Game 1 of the 1978 World Series. As of 2018, he is the only non-Hall-of-Famer to have his number retired by the Dodgers (Alston's number was retired before he was elected to the Hall of Fame).
Beginning in 1980, the Dodgers have retired the numbers of longtime Dodgers (Snider, Reese, Drysdale, Lasorda, and Sutton) during the seasons in which each was inducted into the Hall of Fame.
In 1997, 50 years after he broke the color barrier and 25 years after the Dodgers retired his number, Robinson's No. 42 was retired throughout Major League Baseball; he is the only major league player to have been so honored. Starting in the 2007 season, Jackie Robinson Day (April 15, commemorating Opening Day of Robinson's rookie season of 1947) has featured many or all players and coaches wearing the number 42 as a tribute to Robinson.
The Dodgers have not issued the number 34 since the departure of Fernando Valenzuela in 1991, although it has not been officially retired.
Since 1884, the Dodgers have had a total of 31 managers, the most recent being Dave Roberts, who was appointed following the 2015 postseason after the departure of Don Mattingly.
Over the nearly 43 years from 1954 to mid-1996, the Dodgers employed only two managers, Walter Alston and Tommy Lasorda, both of whom are in the Hall of Fame. During this entire period of extraordinary stability, the Dodgers were family-owned, by Walter O'Malley and then his son Peter O'Malley. It was during this era that the Dodgers won 11 National League pennants and all six of their World Series championships.
Managers of the Los Angeles Dodgers (1958–present) have included Walter Alston, Tommy Lasorda, Don Mattingly, and, currently, Dave Roberts.
During their time in Brooklyn, stadium organist Gladys Goodding became so well known that fans would joke that she was "the only Dodger who played every game without an error".
Since their move from Brooklyn to Los Angeles in 1958, the Dodgers have employed a handful of well-known public address announcers, the most famous of whom was John Ramsey, the PA voice of the Dodgers from 1958 until his retirement in 1982. Ramsey was also well known for announcing at other venerable Los Angeles venues, including the Los Angeles Memorial Coliseum, the Sports Arena, and the Forum. He died in 1990.
From 1958 to 1982, Doug Moore, Philip Petty, and Dennis Packer served as back-up voices for John Ramsey for the Dodgers, California Angels, Los Angeles Chargers, USC football and Los Angeles Rams. Packer was Ramsey's primary backup for the Los Angeles Lakers and Los Angeles Kings until Ramsey's retirement from the Forum in 1978. Thereafter, Packer became the public address announcer for the Lakers, Kings, indoor soccer and indoor tennis events at the Forum.
Nick Nickson, a radio broadcaster for the Los Angeles Kings, replaced John Ramsey as the Dodger Stadium public address announcer in 1983 and served in that capacity through the 1989 season, when he left to work with the Kings full-time.
Dennis Packer and Pete Arbogast were emulators of John Ramsey, using the same stentorian style of announcing for which Ramsey was famous. Packer and Arbogast shared the stadium announcing chores for the 1994 FIFA World Cup matches at the Rose Bowl. Arbogast won the Dodgers job on the day Ramsey died, January 25, 1990, by performing a verbatim imitation of Ramsey's opening and closing remarks that had been standard at each game. Arbogast's replacement, in 1994, was Mike Carlucci, who remained the Dodgers' PA voice until he resigned in 2001 to concentrate on his voiceover and acting career along with his Olympics announcing duties.
From 2002 to 2014, the Dodgers public address announcer was Eric Smith, who also announces for the Los Angeles Clippers and USC Trojans.
On April 3, 2015, the Dodgers announced that former radio broadcaster Todd Leitz had been hired as their new public address announcer. Leitz had been an anchor and news reporter at KNX 1070 AM in Los Angeles for 10 years, and a news reporter at KABC 790 for two years.
From 1988 to 2015, Nancy Bea enjoyed popularity behind the Dodger Stadium keyboard similar to that of Gladys Goodding. Since her retirement in 2015, Bea's replacement and the current organist is Dieter Ruehle, who also plays at Staples Center for Los Angeles Kings games.
Vin Scully is permanently honored in the Baseball Hall of Fame's "Scribes & Mikemen" exhibit as a result of winning the Ford C. Frick Award in 1982. Frick Award recipients are not official members of the Hall.
Sue Falsone was the first female physical therapist in Major League Baseball and, from 2012 to 2013, the first female head athletic trainer.
The Los Angeles Dodgers farm system consists of nine minor league affiliates.
https://en.wikipedia.org/wiki?curid=18213
Louis Andriessen
Louis Andriessen (born 6 June 1939) is a Dutch composer and pianist based in Amsterdam. He is a lecturer at the Royal Conservatory of The Hague and was the recipient of the Gaudeamus International Composers Award in 1959.
Andriessen was born in Utrecht into a musical family, the son of the composer Hendrik Andriessen (1892–1981), brother of composers Jurriaan Andriessen (1925–1996) and Caecilia Andriessen (1931–2019) and nephew of Willem Andriessen (1887–1964).
Andriessen originally studied with his father and Kees van Baaren at the Royal Conservatory of The Hague, before embarking upon two years of study with Italian composer Luciano Berio in Milan and Berlin. He later joined the faculty of the Royal Conservatory.
In 1969, Andriessen co-founded STEIM in Amsterdam. He also helped found the instrumental groups Orkest de Volharding and Hoketus, both of which performed compositions of the same names. He later became closely involved with the Schönberg and Asko ensembles and inspired the formation of the British ensemble Icebreaker.
Andriessen was married to guitarist Jeanette Yanikian (1935–2008). They were a couple for over 40 years and were married in 1996.
He is currently married to violinist Monica Germino.
Andriessen's early works show experimentation with various contemporary trends: post-war serialism ("Series", 1958), pastiche ("Anachronie I", 1966–67), and tape ("Il Duce", 1973). His reaction to what he perceived as the conservatism of much of the Dutch contemporary music scene quickly moved him to form a radically alternative musical aesthetic of his own. Since the early 1970s he has refused to write for conventional symphony orchestras, instead writing for his own idiosyncratic instrumental combinations, which often retain some traditional orchestral instruments alongside electric guitars, electric basses, and congas.
Andriessen's mature music combines the influences of jazz, American minimalism, Igor Stravinsky, and Claude Vivier. His harmonic writing eschews the consonant modality of much minimalism, preferring post-war European dissonance, often crystallised into large blocks of sound. Large-scale pieces such as "De Staat" ['Republic'] (1972–76), for example, are influenced by the energy of the big-band music of Count Basie and Stan Kenton and the repetitive procedures of Steve Reich, both combined with bright, clashing dissonances. Andriessen's music thus departs from post-war European serialism and its offshoots. He has also played a role in providing alternatives to traditional performance practice, often specifying forceful, rhythmic articulations and amplified, non-vibrato singing.
Other notable works include "Workers Union" (1975), a melodically indeterminate piece "for any loud sounding group of instruments"; "Mausoleum" (1979) for 2 baritones and large ensemble; "De Tijd" ['Time'] (1979–81) for female singers and ensemble; "De Snelheid" ['Velocity'] (1982–83), for 3 amplified ensembles; "De Materie" ['Matter'] (1984–88), a large four-part work for voices and ensemble; collaborations with filmmaker and librettist Peter Greenaway on the film "M is for Man, Music, Mozart" and the operas "Rosa: A Horse Drama" (1994) and "Writing to Vermeer" (1998); and "La Passione" (2000–02) for female voice, violin and ensemble.
Andriessen's primary publishers are Boosey & Hawkes and Donemus.
https://en.wikipedia.org/wiki?curid=18214
Leonard Peltier
Leonard Peltier (born September 12, 1944) is an American indigenous rights activist and an enrolled member of the Turtle Mountain Chippewa who is also of Lakota and Dakota descent. Extradited from Canada on the basis of a witness statement, he was convicted in a controversial 1977 trial and sentenced to two consecutive terms of life imprisonment for the first-degree murder of two Federal Bureau of Investigation (FBI) agents in a June 26, 1975, shooting on the Pine Ridge Indian Reservation in South Dakota. As detailed in "In the Spirit of Crazy Horse", his trial and conviction are considered highly controversial, and Amnesty International has raised concerns about their fairness.
Peltier ran for President of the United States in 2004, winning the nomination of the Peace and Freedom Party and receiving 27,607 votes; he was on the ballot only in California. He is currently running for Vice President of the United States in 2020, on the Party for Socialism and Liberation and Peace and Freedom Party tickets, with veteran socialist activist Gloria La Riva. Peltier is a member of the American Indian Movement (AIM).
Peltier's indictment and conviction have been the subject of much contention; Amnesty International placed his case under the "Unfair Trials" category of its "Annual Report: USA 2010". In his 1999 memoir, Peltier admitted to involvement in the shootout but denied killing the FBI agents.
Peltier is incarcerated at the United States Penitentiary, Coleman in Florida. Peltier became eligible for parole in 1993; his next scheduled parole hearing will be in July 2024, when Peltier will be 79. On January 18, 2017, the Office of the Pardon Attorney announced that President Barack Obama had denied Peltier's application for clemency. Peltier was next eligible for commutation in 2018. Barring appeals, parole, or presidential clemency, Peltier will remain in prison for the rest of his life.
Peltier was born on September 12, 1944, at the Turtle Mountain Indian Reservation of the Turtle Mountain Chippewa near Belcourt, North Dakota, into a family of thirteen children. His parents divorced when he was four years old, after which Leonard and his sister Betty Ann lived with their paternal grandparents, Alex and Mary Dubois-Peltier, on the Turtle Mountain Indian Reservation. In September 1953, at the age of nine, Leonard was enrolled at the Wahpeton Indian School in Wahpeton, North Dakota, an Indian boarding school run by the Bureau of Indian Affairs (BIA), where he remained away from home; the school forced assimilation to white American culture by requiring English and forbidding Native American cultural practices. He graduated from Wahpeton in May 1957 and attended the Flandreau Indian School in Flandreau, South Dakota; after finishing the ninth grade, he returned to the Turtle Mountain Reservation to live with his father. Peltier later obtained a general equivalency diploma.
In 1965, Peltier relocated to Seattle, Washington, where in his twenties he worked as a welder and construction worker and was the co-owner of an auto shop. The co-owners used the upper level of the building as a stopping place for American Indians who had alcohol addiction issues or had recently finished prison sentences. The halfway house took a financial toll on the shop, however, and it eventually closed.
In Seattle, Peltier became involved in a variety of causes championing Native American civil rights. In the early 1970s, he learned about the factional tensions at the Pine Ridge Indian Reservation in South Dakota between supporters of Richard Wilson, elected tribal chairman in 1972, and traditionalist members of the tribe, and he became an official member of the American Indian Movement (AIM) in 1972. Wilson had created a private militia, known as the Guardians of the Oglala Nation (GOON), whose members were reputed to have attacked political opponents. Protests over a failed impeachment hearing of Wilson contributed to the armed takeover of Wounded Knee by AIM and Lakota activists in February 1973, which resulted in a 71-day siege by federal forces known as the Wounded Knee incident; the occupiers demanded Wilson's resignation. Peltier, however, spent most of the occupation in a Milwaukee jail, charged with attempted murder. When Peltier secured bail at the end of April, he took part in an AIM protest outside the federal building in Milwaukee and was on his way to Wounded Knee with a group delivering supplies when the incident ended.
In 1975, Peltier traveled to the Pine Ridge Indian Reservation as a member of AIM to try to help reduce the continuing violence among political opponents. At the time, he was a fugitive, with a warrant issued in Milwaukee, Wisconsin. It charged him with unlawful flight to avoid prosecution for the attempted murder of an off-duty Milwaukee police officer, a crime of which he was acquitted in February 1978. During this time period, Peltier had seven children from two marriages and adopted two children.
On June 26, 1975, Special Agents Jack R. Coler and Ronald A. Williams of the Federal Bureau of Investigation (FBI) were on the Pine Ridge Indian Reservation searching for a young man named Jimmy Eagle, who was wanted for questioning in connection with the recent assault and robbery of two local ranch hands. Eagle had been involved in a physical altercation with a friend, during which he had stolen a pair of leather cowboy boots. At approximately 11:50 a.m., Williams and Coler, driving two separate unmarked cars, spotted, reported, and followed a red pick-up truck which matched the description of Eagle's.
Soon after his initial report, Williams radioed into a local dispatch that he and Coler had come under fire from the occupants of the vehicle. Williams radioed that they would be killed if reinforcements did not arrive. He next radioed that they both had been shot. FBI Special Agent Gary Adams was the first to respond to Williams' call for assistance, and he also came under gunfire; Adams was unable to reach Coler and Williams in time, and both agents died within the first ten minutes of gunfire. At about 4:25 p.m., authorities recovered the bodies of Williams and Coler from their vehicles.
The FBI reported that Williams had received a defensive wound to his right hand (as he attempted to shield his face) from a bullet which passed through his hand into his head, killing him instantly. Williams received two gunshot injuries, to his body and foot, prior to the contact shot that killed him. Coler, incapacitated from earlier bullet wounds, had been shot twice in the head. In total, 125 bullet holes were found in the agents' vehicles, many from a .223 Remington (5.56 mm) rifle. The shooters took apart Williams's car and stole four guns belonging to the agents.
Leonard Peltier provided numerous alibis, to different people, about his activities on the morning of the attacks. In an interview with the author Peter Matthiessen ("In the Spirit of Crazy Horse" 1983), Peltier described working on a car in Oglala, claiming to have driven back to the Jumping Bull Compound about an hour before the shooting started. In an interview with Lee Hill, he described being awakened in the tent city at the ranch by the sound of gunshots. To Harvey Arden, for "Prison Writings", he described enjoying a beautiful morning before he heard the firing.
At least three men were arrested in connection with the shooting: Peltier, Robert Robideau, and Darrelle "Dino" Butler, all AIM members who were present on the Jumping Bull compound at the time of the shootings.
On September 5, 1975, Agent Coler's .308 rifle and handgun and Agent Williams's handgun were recovered from an automobile in the vicinity of Butler's arrest location. During the hunt for the suspects, the FBI forwarded to law enforcement a description of a recreational vehicle (RV) and a Plymouth station wagon recently purchased by Peltier. The RV was stopped by an Oregon State Trooper, but the driver, later identified as Peltier, fled on foot after a brief shootout. Peltier's thumbprint and Agent Coler's handgun were both discovered under the RV's front seat.
On September 10, 1975, AIM members Robert Robideau, Norman Charles, and Michael Anderson were injured in the explosion of a station wagon on the Kansas Turnpike close to Wichita. Agent Coler's .308 rifle and an AR-15 rifle were found in the burned vehicle.
On December 22, 1975, Peltier was named to the FBI Ten Most Wanted Fugitives list. On February 6, 1976, Peltier was arrested after being found in a friend's cabin in Hinton, Alberta. In December 1976, he was extradited from Canada based on documents submitted by the FBI that Warren Allmand, Canada's Solicitor General at the time, would later state contained false information.
One of those documents was an affidavit signed by Myrtle Poor Bear, a local Native American woman. While Poor Bear stated that she had been Peltier's girlfriend at the time and had watched the killings, Peltier and others at the scene said that Poor Bear did not know Peltier and was not present during the murders. Poor Bear admitted to lying to the FBI but said that the agents interviewing her had coerced her into making the claims. When Poor Bear tried to testify against the FBI, the judge barred her testimony on grounds of mental incompetence.
Peltier fought extradition to the United States, even as Robideau and Butler were found not guilty on the grounds of self-defense by a federal jury in Cedar Rapids, Iowa. Peltier returned too late to be tried with Robideau and Butler and he was subsequently tried separately.
Peltier's trial was held in Fargo, North Dakota, where a jury convicted him of the murders of Coler and Williams. Unlike the jury at the Butler and Robideau trial, the Fargo jury was informed that the two FBI agents had been killed by close-range shots to their heads while they were already defenseless from previous gunshot wounds; consequently, Peltier could not present a self-defense argument that might have led to an acquittal. The jurors also saw autopsy and crime-scene photographs of the two agents, which had not been shown to the jury at Cedar Rapids. In April 1977, Peltier was convicted and sentenced to two consecutive life sentences.
On July 20, 1979, Peltier and two other inmates escaped from the Federal Correctional Institution, Lompoc. One inmate was shot dead by a guard outside the prison, and the other was recaptured about 90 minutes later. Peltier remained at large until he was captured by a search party three days later near Santa Maria, California, after robbing a farmer who notified authorities. He was in possession of a Ruger Mini-14 rifle at the time of his capture but was apprehended without incident. On December 22, 1979, Peltier was convicted and sentenced to serve a consecutive five-year sentence for escape and a consecutive two-year sentence for being a felon in possession of a firearm.
Peltier has made a number of appeals against his murder convictions.
In 1986, Federal Appeals Judge Gerald W. Heaney concluded, "When all is said and done ... a few simple but very important facts remain. The casing introduced into evidence had in fact been extracted from the Wichita AR-15." In his 1999 memoir, Peltier admitted that he fired at the agents, but denied that he fired the fatal shots that killed them.
A cartridge case from the Wichita AR-15 was found in the trunk of Agent Coler's car, and admitted as evidence at Peltier's trial in Fargo, North Dakota. Also admitted as evidence was the fact that Peltier was the only person in possession of an AR-15 rifle during the shooting.
The journalist Scott Anderson said that in a 1995 interview with Peltier, he sought answers to the contradictions he had found in Peltier's accounts of the incident on June 26, 1975. When asked about the guns he carried that day, Peltier listed a .30-30, a .303, a .306, a .250 and a .22, but he did not remember the AR-15.
The former United States Attorney General Ramsey Clark has served "pro bono" as one of Peltier's lawyers and has aided in filing a series of appeals on Peltier's behalf. Clark identifies the evidence used against Peltier as "fabricated, circumstantial ... mis-used, concealed, and perverted." In all appeals, the conviction and sentence have been affirmed by the 8th Circuit Court of Appeals. The last two appeals were "Peltier v. Henman", 997 F. 2d 461 in July 1993 and "United States v. Peltier", 446 F.3d 911 (8th Cir. 2006) (Peltier IV) in 2006.
Numerous doubts have been raised over Peltier's guilt and the fairness of his trial, based on allegations and inconsistencies regarding the FBI's and the prosecution's handling of the case.
Peltier's conviction sparked great controversy and has drawn criticism from a number of sources. Numerous appeals have been filed on his behalf; none of the resulting rulings has been made in his favor. Peltier is considered by the AIM to be a political prisoner and has received support from individuals and groups including Nelson Mandela, Rigoberta Menchú, Soviet Peace Committee, Amnesty International, the United Nations High Commissioner for Human Rights, the Zapatista Army of National Liberation, Tenzin Gyatso (the 14th Dalai Lama), Mikhail Gorbachev, Zack de la Rocha, Rage Against the Machine, the European Parliament, the Belgian Parliament, the Italian Parliament, the Kennedy Memorial Center for Human Rights, Archbishop Desmond Tutu, and Rev. Jesse Jackson.
Peltier's supporters have asserted that he did not commit the murders, and that he either had no knowledge of the murders (as he told CNN in 1999), or that he has knowledge implicating others which he will never reveal, or (as told in Peter Matthiessen's "In the Spirit of Crazy Horse", 1983) that he approached and searched the agents, but did not execute them.
The film "Incident at Oglala" (1992) included the AIM activist Robert Robideau saying the FBI agents had been shot by a 'Mr X'. When Peltier was interviewed about 'Mr X', he said he knew who the man was. Dino Butler, in a 1995 interview with E.K. Caldwell of "News From Indian Country", said that 'Mr X' had been invented as the murderer in an attempt to achieve Peltier's release. In a 2001 interview with "News From Indian Country", Bernie Lafferty said that she had witnessed Peltier's referring to his murder of one of the agents.
In 1999, Peltier filed a "habeas corpus" petition, but it was rejected by the 10th Circuit Court on November 4, 2003. Near the end of the Clinton administration in 2001, rumors began circulating that Bill Clinton was considering granting Peltier clemency. Opponents of Peltier campaigned against his possible clemency; about 500 FBI agents and families protested outside the White House, and FBI director Louis Freeh sent a letter opposing Peltier's clemency to the White House. Clinton did not grant Peltier clemency. In 2002, Peltier filed a civil rights lawsuit in the U.S. District Court for the District of Columbia against the FBI, Louis Freeh and FBI agents who had participated in the campaign against his clemency petition, alleging that they "engaged in a systematic and officially sanctioned campaign of misinformation and disinformation." On March 22, 2004, the suit was dismissed. In January 2009, President George W. Bush denied Peltier's clemency petition before leaving office.
In 2016, Peltier's attorneys filed a clemency application with the White House's Office of the Pardon Attorney, and his supporters organized a campaign to convince President Barack Obama to commute Peltier's sentence; the campaign included appeals by Pope Francis and by James Reynolds, a senior attorney and former US Attorney who supervised the prosecution against Peltier in the appeal period following his initial trial. In a letter to the United States Department of Justice, Reynolds wrote that clemency was "in the best interest of justice in considering the totality of all matters involved". In a subsequent letter to the "Chicago Tribune", Reynolds added that the case against Peltier "was a very thin case that likely would not be upheld by courts today. It is a gross overstatement to label Peltier a 'cold-blooded murderer' on the basis of the minimal proof that survived the appeals in his case." On January 18, 2017, two days before President Obama left office, the Office of the Pardon Attorney announced that Obama had denied Peltier's application for clemency. On June 8, 2018, KFGO Radio in Fargo, North Dakota, reported that Peltier had filed a formal clemency request with President Trump; KFGO obtained and published a letter sent by Peltier's attorney to the White House.
In January 2002 in "News from Indian Country", the publisher Paul DeMain wrote an editorial stating that an "unnamed delegation" had told him that Peltier murdered the FBI agents. DeMain described the delegation as "grandfathers and grandmothers, AIM activists, Pipe carriers and others who have carried a heavy unhealthy burden within them that has taken its toll." DeMain said he was told the motive for the execution-style murder of the AIM activist Anna Mae Aquash in December 1975 "allegedly was her knowledge that Leonard Peltier had shot the two agents, as he was convicted." DeMain did not accuse Peltier of participation in the Aquash murder. In 2003, two Native American men were indicted and later convicted of the murder.
On May 1, 2003, Peltier sued DeMain for libel for similar statements about the case published on March 10, 2003, in "News from Indian Country". On May 25, 2004, Peltier withdrew the suit after he and DeMain settled the case. DeMain issued a statement saying he did not think Peltier was given a fair trial for the two murder convictions nor did he think Peltier was connected to Anna Mae Aquash's death. DeMain did not retract his allegations that Peltier was guilty of the murders of the FBI agents and that the motive for Aquash's murder was the fear that she might inform on the activist.
Bruce Ellison, Leonard Peltier's lawyer since the 1970s, invoked his Fifth Amendment rights against self-incrimination and refused to testify at the 2003 federal grand jury hearings on charges against Arlo Looking Cloud and John Graham for the murder of Aquash. Ellison also refused to testify at Looking Cloud's trial in 2004. During the trial, the federal prosecutor named Ellison as a co-conspirator in the Aquash case. Witnesses said that Ellison participated in interrogating Aquash about being an informant on December 11, 1975, shortly before her murder.
In February 2004, Fritz Arlo Looking Cloud, an Oglala Sioux, was tried and convicted for the murder of Aquash. At Looking Cloud's trial, the federal prosecution argued that AIM's suspicion of Aquash stemmed from her having heard Peltier admit to the murders. Darlene "Kamook" Nichols, former wife of the AIM leader Dennis Banks, was a witness for the prosecution. She testified that in late 1975, Peltier told her and a small group of AIM fugitive activists about shooting the FBI agents; at the time, all were fleeing law enforcement after the Pine Ridge shootout. The other fugitives included her sister Bernie Nichols, Dennis Banks, and Anna Mae Aquash, among several others. Bernie Nichols-Lafferty testified with a similar account of Peltier's statement.
Earlier in 1975, the AIM member Douglass Durham had been revealed to be an FBI agent and dismissed from the organization. AIM leaders were fearful of infiltration. Other witnesses have testified that, once Aquash was suspected of being an informant, Peltier interrogated her while holding a gun to her head. Peltier and David Hill were said to have Aquash participate in bomb-making so that her fingerprints would be on the bombs. Prosecutors alleged in court documents that the trio planted these bombs at two power plants on the Pine Ridge Indian Reservation on Columbus Day 1975.
During the trial, Nichols acknowledged receiving $42,000 from the FBI in connection with her cooperation on the case. She said it was compensation for travel expenses to collect evidence and moving expenses to be farther from her ex-husband Dennis Banks, whom she feared because she had implicated him as a witness. Peltier has claimed that Kamook Nichols committed perjury with her testimony.
On June 26, 2007, the Supreme Court of British Columbia ordered the extradition of John Graham to the United States to stand trial for his alleged role in the murder of Aquash. He was eventually tried by the state of South Dakota in 2010. During his trial, Darlene "Kamook" Ecoffey said Peltier told both her and Aquash that he had killed the FBI agents in 1975. Ecoffey testified under oath, "He (Peltier) held his hand like this," she said, pointing her index finger like a gun, "and he said 'that (expletive) was begging for his life but I shot him anyway.'" Graham was convicted of murdering Aquash and sentenced to life in prison.
Peltier was the candidate for the Peace and Freedom Party in the 2004 election for President of the United States. While numerous states have laws that prohibit prison inmates convicted of felonies from voting (Maine and Vermont are exceptions), the United States Constitution has no prohibition against felons being elected to federal offices, including President. The Peace and Freedom Party secured ballot status for Peltier only in California, where his presidential candidacy received 27,607 votes, approximately 0.2% of the vote in that state.
He is running for Vice President in the 2020 election as the running mate of Gloria La Riva with the Party for Socialism and Liberation.
In a February 27, 2006, decision, U.S. District Judge William Skretny ruled that the FBI did not have to release five of 812 documents relating to Peltier and held at their Buffalo field office. He ruled that the particular documents were exempted on the grounds of "national security and FBI agent/informant protection". In his opinion, Judge Skretny wrote, "Plaintiff has not established the existence of bad faith or provided any evidence contradicting (the FBI's) claim that the release of these documents would endanger national security or would impair this country's relationship with a foreign government." In response, Michael Kuzma, a member of Peltier's defense team, said, "We're appealing. It's incredible that it took him 254 days to render a decision." Kuzma further said, "The pages we were most intrigued about revolved around a teletype from Buffalo ... a three-page document that seems to indicate that a confidential source was being advised by the FBI not to engage in conduct that would compromise attorney-client privilege." Peltier's supporters have tried to obtain more than 100,000 pages of documents from FBI field offices, claiming that the files should have been turned over at the time of his trial or following a Freedom of Information Act (FOIA) request filed soon after.
On January 13, 2009, Peltier was beaten by inmates at the United States Penitentiary, Canaan, where he had been transferred from USP Lewisburg. He was sent back to Lewisburg, where he remained until the fall of 2011 when he was transferred to a federal penitentiary in Florida. As of 2016, Leonard Peltier is housed at Coleman Federal Correctional Complex in Coleman, Florida.
A controversial statue of Peltier was created by political artist Rigo 23. After it was installed on the grounds of American University, officials removed it following outcry.
It was reported by Joseph Corré that the last words of his father, Malcolm McLaren, were "Free Leonard Peltier".
https://en.wikipedia.org/wiki?curid=18217
LambdaMOO
LambdaMOO is an online community of the variety called a MOO. It is the oldest MOO still active today.
"LambdaMOO" was founded in late 1990 or early 1991 by Pavel Curtis at Xerox PARC. Now hosted in the state of Washington, it is operated and administered entirely on a volunteer basis. Guests are allowed, and membership is free to anyone with an e-mail address.
"LambdaMOO" gained some notoriety when Julian Dibbell wrote a book called "My Tiny Life" describing his experiences there. Over its history, "LambdaMOO" has been highly influential in the examination of virtual-world social issues.
LambdaMOO has its roots in the 1978–1980 work by Roy Trubshaw and Richard Bartle to create and expand the concept of the Multi-User Dungeon (MUD), a form of virtual community. Around 1987–1988, the expansion of the global internet allowed more users to experience MUDs. Pavel Curtis at Xerox PARC noted that they were "almost exclusively for recreational purposes" and set out to explore whether the MUD could be non-recreational. He developed the "LambdaMOO" server software, which implements the MOO programming language; this software was subsequently made available to the public. Several starter databases, known as cores, are available for MOOs; "LambdaMOO" itself uses the LambdaCore database. The "Lambda" name comes from Curtis's own username on earlier MUD systems.
LambdaMOO can refer to the software, the server, or the community of users.
"LambdaMOO" central geography was based on Pavel Curtis's California home. New players and guests traditionally connected in "The Coat Closet", but a second area, "The Linen Closet" (specially programmed as a silent area) was later added as an alternative connection point. The coat closet opens onto the center of the house in The Living Room, a common hangout and place for conversation; its fixtures include a fireplace (where things can be roasted), The Living Room Couch (which periodically causes players' objects to 'fall through' to underneath the couch), and a pet Cockatoo who repeats overheard phrases (which is sometimes found with its beak gagged). Occasionally, the Cockatoo is replaced with a more seasonal creature: a Turkey near Thanksgiving, a Raven near Halloween, et cetera.
To the north of the Living Room is the Entrance Hall, the Front Yard, and a limited residential area along LambdaStreet. There is an extensive subterranean complex located down the manhole, including a sewage system. Players walking to the far west along LambdaStreet may be given the option to 'jump off the edge of the world', which disables access to their account for three months.
To the south of the Living Room is a pool deck, a hot tub, and some of the extensive grounds of the mansion, featuring gardens, hot air balloon landing pads, open fields, fishing holes, and the like.
To the northwest of the living room are the laundry room, garage, dining room, smoking room, drawing room, housekeeper's quarters, and kitchen.
To the east of the entry hall, hallways provide access to some individual rooms, the Linen Closet, and to the eastern wing of the house. In the eastern wing can be found the Library of online books, the Museum of generic objects (which account-holders may create instances of), and an extensive area for the "LambdaMOO" RPG.
Since the creation of the original LambdaMOO map, many users have expanded the MOO by making additional rooms with the command "@dig".
While most MOOs are run by administrative fiat, in the summer of 1993 "LambdaMOO" implemented a petition/ballot mechanism, allowing the community to propose and vote on new policies and other administrative actions. A petition may be created by anyone eligible to participate in politics (those who have maintained accounts at the MOO for at least 30 days), can be signed by other players, and may then be submitted for administrative 'vetting'. Once vetted, the petition has a limited time to collect enough signatures to become valid and be made into a ballot. Ballots are then voted on; those with a 66% approval rating pass and are implemented. The system evolved considerably over the years: the wizards eventually took back the powers they had passed to the players, but they maintain the ballot system as a way for the community to express its opinions.
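Read procedurally, the mechanism is a small state machine: petition, vetting, signature collection, ballot, vote. The following Python sketch models those stages using the thresholds given above; the class and method names are hypothetical and do not correspond to anything in the actual MOO codebase.

```python
from dataclasses import dataclass, field

ELIGIBILITY_DAYS = 30       # account age required to participate in politics
APPROVAL_THRESHOLD = 0.66   # ballots pass with a 66% approval rating

@dataclass
class Petition:
    """Hypothetical model of LambdaMOO's petition/ballot workflow."""
    title: str
    vetted: bool = False                       # set by administrative 'vetting'
    signatures: set = field(default_factory=set)
    votes: dict = field(default_factory=dict)  # voter name -> True (yes) / False (no)

    @staticmethod
    def eligible(account_age_days: int) -> bool:
        return account_age_days >= ELIGIBILITY_DAYS

    def sign(self, player: str, account_age_days: int) -> None:
        # Players may sign before or after vetting; only eligible players count.
        if self.eligible(account_age_days):
            self.signatures.add(player)

    def becomes_ballot(self, required_signatures: int) -> bool:
        # Within its time limit, a vetted petition with enough signatures
        # becomes a ballot that the community votes on.
        return self.vetted and len(self.signatures) >= required_signatures

    def passes(self) -> bool:
        yes = sum(1 for v in self.votes.values() if v)
        return bool(self.votes) and yes / len(self.votes) >= APPROVAL_THRESHOLD
```

Under this model, a ballot receiving 200 yes votes out of 300 cast (about 67%) passes, while one receiving 195 of 300 (65%) fails.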
The population of "LambdaMOO" numbered close to 10,000 around 1994, with over 300 actively connected at any time.
https://en.wikipedia.org/wiki?curid=18221
Lorica segmentata
The lorica segmentata is a type of personal armour that was used by soldiers of the Roman Empire, consisting of metal strips ("girth hoops" fashioned into circular bands) fastened to internal leather straps. The Latin name translates to "segmented cuirass"; it was first applied in the 16th century, and the Roman-era name for the armour is unknown.
The plates of "lorica segmentata" armour were soft iron inside and (some at least) were mild steel on the outside, making the plates hardened against damage without becoming brittle. This case hardening was done deliberately by packing organic matter tightly around them and heating them in a forge, transferring carbon from the burnt materials into the surface of the metal.
The strips were arranged horizontally on the body, overlapping downwards, and they surrounded the torso in two halves, being fastened at the front and back. The upper body and shoulders were protected by additional strips ("shoulder guards") and breast- and backplates. The form of the armour allowed it to be stored very compactly, since it was possible to separate it into four sections, each of which would collapse on itself into a compact mass. The fitments that closed the various plate sections together (buckles, lobate hinges, hinged straps, tie-hooks, tie-rings, etc.) were made of brass. In later variants dating from around AD 75–80, the fastenings of the armour were simplified. Bronze hinges were removed in favour of simple rivets, belt fastenings utilised small hooks, and the lowest two girdle plates were replaced by one broad plate.
During the time of their use, this style of armour evolved and changed, the currently recognised types being the Dangstetten-Kalkriese-Vindonissa type (9 BC to AD 43), the Corbridge-Carnuntum type (AD 69 to 100), and the Newstead type (AD 164 to 180), named after their places of discovery. There was, however, considerable overlap between these types in use, and the Corbridge and Newstead types are often found at the same sites (e.g. at Caerleon in Wales, Carnuntum in Austria, Carlisle in England, and León in Spain). It is possible that there was a fourth type, covering the body with segmented armour joined to scale shoulder defences, but this is known only from one badly damaged statue from Alba Iulia in Romania. The currently accepted range for the use of the armour is from about 14 BC (a depiction on the arch of Susa) to the late 3rd century AD (León). Its use was geographically widespread, but mail ("lorica hamata") may have been more common at all times.
The question as to precisely who used the armour is debated. There is a clear difference in armour between the two corps shown on Trajan's Column, a monument erected in Rome in 113 to commemorate the conquest of Dacia by Emperor Trajan (ruled 98–117), whose bas-reliefs are a key source for Roman military equipment. "Auxilia" are generally shown wearing mail cuirasses and carrying oval shields, while legionaries are uniformly depicted wearing the "lorica segmentata" and carrying the curved rectangular shield. On this basis, it has been supposed that "lorica segmentata" was used by legionaries only. However, some historians consider Trajan's Column an unreliable historical source due to its inaccurate and stylised portrayal of Roman armour: "it is probably safest to interpret the Column reliefs as 'impressions', rather than accurate representations."
The view that "auxilia" were light troops originates from Vegetius' comment that ""auxilia" are always joined as light troops with the legions in the line". It is true that some specialist units in the "auxilia", such as Syrian archers and Numidian cavalry wore light armour (or none). But they were a small minority of the "auxilia". Most auxiliary "cohortes" contained heavy infantry similar to legionaries.
However, on another Trajanic monument (the Adamclisi "Tropaeum") the "lorica segmentata" does not appear at all, and legionaries and "auxilia" alike are depicted wearing either mail or scales ("lorica squamata"). Some experts are of the opinion that the Adamclisi monument is a more accurate portrayal of the situation, the "segmentata" used rarely, maybe only for set-piece battles and parades. This viewpoint considers the figures in Trajan's Column to be highly stereotyped, in order to distinguish clearly between different types of troops. In any event, both corps were equipped with the same weapons: "gladius" (a close-combat stabbing sword) and javelins, although the type of javelin known as "pilum" seems to have been provided to legionaries only. Goldsworthy points out that the equipment of both corps were roughly equal in weight.
In recent years archaeologists have found fittings of "loricae segmentatae" in many fort sites that are thought to have been garrisoned by only auxiliary troops, "i.e.", where the legions were "not" based. If the legions were, indeed, broken up and distributed around all these small bases, then it implies a tactical use of the legions that has not previously been considered. Hitherto, the legions were regarded as shock troops employed only "en masse" and not broken up into detachments. M.C. Bishop, however, has argued that there is a need to examine the way in which the various troop types were armed and deduce from this what their battle roles were, rather than trying to consider who wore what. The legions were armed and trained for close-order combat while the auxiliary forces, just as numerous, were more accustomed to open order fighting, although they could be employed as the legions were (e.g. at Mons Graupius) if circumstances demanded this.
During the 3rd century, all "peregrini" were granted Roman citizenship, and therefore legionaries lost their social superiority. The "lorica segmentata" eventually disappeared from Roman use, although it appears to have still been in use into the early 4th century, being depicted in the Arch of Constantine erected in 315 during the reign of Constantine I to commemorate his military achievements. (However, it has been argued that these depictions are from an earlier monument by Marcus Aurelius, from which Constantine incorporated portions into his Arch.)
The "lorica segmentata" has come to be viewed as iconic of the Roman legions in popular culture. The tendency to portray Roman legionaries clad in this type of armour often extends to periods of time that are too early or too late in history.
https://en.wikipedia.org/wiki?curid=18223
Known Space
Known Space is the fictional setting of about a dozen science fiction novels and several collections of short stories written by Larry Niven. It has also become a shared universe via the spin-off "Man-Kzin Wars" anthologies. ISFDB catalogs all works set in this fictional universe under the series name Tales of Known Space, the title of a 1975 collection of Niven's short stories. The first-published work in the series, and Niven's first published piece, was "The Coldest Place", which appeared in the December 1964 issue of "If" magazine, edited by Frederik Pohl; it was also the earliest-published work in the 1975 collection.
The stories span approximately one thousand years of future history, from the first human explorations of the Solar System to the colonization of dozens of nearby systems. Late in the series, Known Space is an irregularly shaped "bubble" about 60 light-years across.
Within the Tales of Known Space, the epithet "Known Space" refers to a relatively small region in the Milky Way galaxy, one centered on Earth. In the future that the series depicts, spanning roughly the third millennium, humans have explored this region and colonized many of its worlds. Contact has been made with other species, such as the two-headed Pierson's Puppeteers and the aggressive felinoid Kzinti. Stories in the Known Space series include events and places outside of the region called "Known Space" such as the Ringworld, the Pierson's Puppeteers' Fleet of Worlds and the Pak homeworld.
The Tales were originally conceived as two separate series, the "Belter" stories set roughly from 2000 to 2350 CE and the "Neutron Star" / "Ringworld" stories set in 2651 CE and later. The earlier, Belter period features solar-system colonization and slower-than-light travel with fusion-powered and Bussard ramjet ships. The later, Neutron Star period features faster-than-light ships using "hyperdrive". Niven implicitly joined the two settings as a single fictional universe in the short story "A Relic of the Empire" ("If", December 1966), by using background elements of the Slaver civilization from the "Belter" series as a plot element in the faster-than-light setting. In the late 1980s—having written almost no Tales of Known Space in more than a decade—Niven opened the 300-year gap in the Known Space timeline as a shared universe, and the stories of the "Man-Kzin Wars" volumes fill in that history, bridging the two settings.
One aspect of the "Known Space" universe is that most of the early human colonies are on planets suboptimal for "Homo sapiens". During the first phase of human interstellar colonization (i.e. before humanity acquired FTL), simple robotic probes were sent to nearby stars to assess their planets for habitation. The programming of these probes was flawed: they sent back a "good for colonization" message if they found a habitable "point", rather than a habitable "planet". Sleeper ships containing human colonists were sent to the indicated star systems. Too often, those colonists had to make the best of a bad situation.
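The flaw amounts to a classic any-versus-all error in the probes' acceptance test. A minimal Python sketch of the idea follows; the survey format, helper names, and threshold are invented for illustration and are not drawn from the stories.

```python
def habitable(point: dict) -> bool:
    """Stub: True if this single surveyed point could support humans."""
    return point.get("habitable", False)

def flawed_probe_verdict(survey_points: list) -> bool:
    # The bug as described: report "good for colonization"
    # if *any* surveyed point is habitable.
    return any(habitable(p) for p in survey_points)

def intended_probe_verdict(survey_points: list, min_fraction: float = 0.5) -> bool:
    # What the programmers presumably meant: require a substantial
    # fraction of the planet to be habitable before sending sleeper ships.
    ok = sum(habitable(p) for p in survey_points)
    return ok / len(survey_points) >= min_fraction

# A world with a single habitable oasis passes the flawed check but not the intended one:
survey = [{"habitable": True}] + [{"habitable": False}] * 99
assert flawed_probe_verdict(survey) is True
assert intended_probe_verdict(survey) is False
```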
The series features a number of "superscience" inventions which figure as plot devices. Stories earlier in the timeline feature technology such as Bussard ramjets and drouds (wires capable of directly stimulating the pleasure centers of the brain), and explore how organ-transplantation technology enables the new crime of "organlegging" (as well as the general sociological effects of widespread transplantation). Later stories feature hyperdrive, invulnerable starship hulls, stasis fields, molecular monofilaments, transfer booths (teleporters used only on planetary surfaces), the lifespan-extending drug boosterspice, and the tasp, a development of the wirehead droud that works without direct contact.
Boosterspice is a compound that increases the longevity and reverses aging of human beings. With the use of boosterspice, humans can easily live hundreds of years and, theoretically, indefinitely.
Developed by the Institute of Knowledge on Jinx, it is said to be made from genetically engineered ragweed (although early stories have it ingested in the form of edible seeds). In "Ringworld's Children", it is suggested boosterspice may actually be adapted from Tree-of-Life, without the symbiotic virus that enabled hominids to metamorphose from Pak Breeder stage to Pak Protector stage (mutated Pak breeders were the ancestors of both "Homo sapiens" and the hominids of the Ringworld).
On the Ringworld, there is an analogous (and apparently more potent) compound developed from Tree-of-Life, but the two are mutually incompatible; in "The Ringworld Engineers", Louis Wu learns that the character Halrloprillalar died while in ARM custody after leaving the Ringworld, as a result of having taken boosterspice after having used the Ringworld equivalent. Boosterspice only works on "Homo sapiens", whereas the Tree-of-Life compound will work on any hominid descended from the Pak.
Faster-than-light (FTL) propulsion, or hyperdrive, was obtained from the Outsiders at the end of the First Man-Kzin War. In addition to winning the war for humanity, it allowed the re-integration of all the human colonies, which had previously been separated by distance. Standard hyperdrive covers a distance of one light-year every three days (121.75 × c). A more advanced Quantum II hyperdrive, introduced later in the timeline, covers the same distance in one and a quarter minutes (420,768 × c).
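Both multiples of c follow directly from the stated travel times; as a check (simple arithmetic, not anything stated in the stories), taking one year as 365.25 days:

$$\frac{365.25\ \text{days}}{3\ \text{days}} = 121.75, \qquad \frac{365.25 \times 1440\ \text{min}}{1.25\ \text{min}} = \frac{525{,}960}{1.25} = 420{,}768,$$

so a ship covering one light-year in three days averages 121.75 c, and one covering it in 75 seconds averages 420,768 c.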
In Niven's first novel, "World of Ptavvs", the hyperdrive used by the Thrintun required a ship to be travelling faster than 93% of the speed of light. However, this is the only time that hyperdrive is described this way.
In the vast majority of "Known Space" material, hyperdrive requires that a ship be outside a star's gravity well to use. Ships which activate hyperdrive close to a star are likely to disappear without a trace. This effect is regarded as a limitation based on the laws of physics. In Niven's novel "Ringworld's Children" the Ringworld itself is converted into a gigantic Quantum II hyperdrive and launched into hyperspace while within its star's gravity well. "Ringworld's Children" reveals that there is life in hyperspace around gravity wells and that hyperspace predators eat spaceships which appear in hyperspace close to large masses, thus explaining why a structure as large as the Ringworld can safely engage the hyperdrive in a star's gravity well.
One phenomenon travellers in hyperspace can experience is the so-called 'blind spot': should they look through a porthole or camera screen, the walls around the porthole, or the sides of the view screen, appear to expand and 'cover up the outside'. The phenomenon results from hyperspace being so fundamentally different from normal 'Einstein' space that a traveller's senses cannot truly comprehend it; instead, the observer 'sees' a form of nothingness that can be hypnotic and dangerous.
Staring too long into the blind spot can induce insanity, so as a precaution all view ports on ships are covered when a ship enters hyperspace.
The Puppeteer firm General Products produces an invulnerable starship hull, known simply as a General Products Hull. The hulls are impervious to any type of matter or energy, with the exception of antimatter (which destroys the hull), gravitation, and visible light (which passes through the hull). While the hulls themselves are invulnerable, this does not guarantee that their contents are likewise protected. For example, though a high-speed impact with the surface of a planet or star may cause no harm to the hull, the occupants will be crushed unless they are protected by additional measures such as a stasis field or a gravity-compensating field.
In "Fleet of Worlds", the characters tour a General Products factory and receive clues that allow them to destroy a General Products hull from the inside using only a high-powered interstellar communications laser. In "Juggler of Worlds", the Puppeteers, attempting to surmise how this was done without antimatter, identify another technique which can be used to destroy the otherwise invulnerable hulls, one which does suggest some potential defense options.
On Earth in the mid-21st century, it became possible to transplant any organ from any person to another, with the exception of brain and central nervous system tissue. Individuals were categorized according to their so-called "rejection spectrum" which allowed doctors to counter any immune system responses to the new organs, allowing transplants to "take" for life. It also enabled the crime of "organlegging" which lasted well into the 24th century.
A Slaver stasis field creates a bubble of space/time disconnected from the entropy gradient of the rest of the universe. Time slows effectively to a stop for an object in stasis, at a ratio of some billions of years outside to a second inside. An object in stasis is invulnerable to anything occurring outside the field, as well as being preserved indefinitely. A stasis field may be recognized by its perfectly reflecting surface, so perfect that it reflects 100% of all radiation and particles, including neutrinos. However, one stasis field cannot exist inside another. This is used in "World of Ptavvs", in which humans develop stasis field technology and realize that a mirrored artifact known as the "Sea Statue" must actually be an alien in a stasis field. They place it with a human envoy, who is a telepath, and envelop both in a second field; in doing so, they unleash the last living member of the Slaver species on the world.
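The quoted ratio can be given a rough numerical footing (a back-of-the-envelope conversion, not a figure from the stories): one billion years is about

$$10^{9} \times 3.156 \times 10^{7}\ \text{s} \approx 3.2 \times 10^{16}\ \text{s},$$

so 'billions of years outside to a second inside' implies a time ratio on the order of $10^{16}$ to $10^{17}$ to one.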
Stepping disks are a fictional teleportation technology. They were invented by the Pierson's Puppeteers, and their existence is not generally known to other races until the events of "The Ringworld Engineers".
The stepping disks are an outgrowth and improvement of the transfer booth technology used by humans and other Known Space races. Unlike the booths, the disks do not require an enclosed chamber, and somehow can differentiate between solid masses and air, for example. They also have a far greater range than transfer booths, extending several astronomical units.
Several limitations to stepping disks are mentioned in the Ringworld novels. If there is a difference in velocity between two disks, any matter transferred between them must be accelerated by the disk accordingly. If there is not enough energy to do so, the transfer cannot take place. This becomes a problem with disks that are a significant distance apart on the Ringworld surface, as they will have different velocities: same speed, different direction.
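The scale of the problem follows from circular-motion geometry (a worked illustration; the rim speed below is the roughly 1,230 km/s spin usually quoted for the Ringworld in the novels, used here as an assumption). Two points on a ring spinning with rim speed $v$, separated by a central angle $\theta$, differ in velocity by

$$|\Delta \vec{v}| = 2v \sin(\theta/2),$$

so disks a quarter of the ring apart ($\theta = 90^{\circ}$) would require roughly $2 \times 1230 \times \sin 45^{\circ} \approx 1{,}740$ km/s of velocity change for anything transferred between them.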
Transfer booths are an inexpensive form of teleportation. Short-range booths are similar in appearance to an old-style telephone booth: one enters, "dials" one's desired destination, and is immediately deposited in a corresponding booth at the destination. Longer-range booths operate similarly, but are housed in former airports because they require "equipment to compensate for the difference in rotational velocity between different points on the Earth". They are inexpensive: a trip anywhere on Earth costs only a "tenth-star" (presumably equivalent to a dime). The technology was introduced by one of Gregory Pelton's ancestors, and was apparently bought from the Puppeteers and based on their technology.
Some individuals in the stories display limited paranormal or "psionic" abilities. Gil Hamilton can move objects with his mind using his phantom arm, which he gained after losing an arm in an asteroid mining accident; when he finally had the arm replaced from an organ bank on Earth, the ability persisted. "Plateau Eyes" (introduced in "A Gift From Earth") is an ability to hide in plain sight by causing others not to notice the possessor. Population control is tight on Earth, but these abilities can gain the possessor a license to have more children. The Pierson's Puppeteers engineer a lottery for child licenses on Earth to increase the occurrence of "Luck", which they believe is a paranormal human ability that has enabled humanity to defeat races such as the Kzinti. In "Ringworld", the character Teela Brown is said to have this ability (although possibly not to the same extent as others who avoided being included in the expedition).
The ARM is the police force of the United Nations. ARM originated as an acronym for "Amalgamation of Regional Militia", though this is not a term in current usage by the time of the "Known Space" novels. An agent of the ARM, Gil Hamilton, is the protagonist of Niven's sci-fi detective stories, a series-within-a-series gathered in the collection "Flatlander". (Confusingly, "Flatlander" is also the name of an unrelated "Known Space" story.)
Their basic function is to enforce mandatory birth control on overcrowded Earth, and to restrict research which might lead to dangerous weapons. In short, the ARM hunts down women who have illegal pregnancies and suppresses all new technologies. They also hunt organleggers, especially in the era of the "organ bank problem". Among the many technologies they control and outlaw are all trained forms of armed and unarmed combat. By the 25th century, ARM agents were kept in an artificially induced state of paranoid schizophrenia to enhance their usefulness as law enforcement officials, which led to them sometimes being referred to as "schizes". Agents with natural tendencies toward paranoia were medicated into docility during their off-duty hours, through the aforementioned science of psychistry (see "Madness Has Its Place" and "Juggler of Worlds").
Their jurisdiction is limited to the Earth-Moon system; other human colonies have their own militia. Nevertheless, in many "Known Space" stories, ARM agents operate or exert influence in other human star systems through the "Bureau of Alien Affairs" (see "In the Hall of the Mountain King", "Procrustes", "The Borderland of Sol", and "Neutron Star"). These interventions begin following the Man-Kzin Wars and the introduction of hyperdrive, presumably as part of a general re-integration of human societies.
The Tales of Known Space were first published primarily as short stories or serials in science fiction magazines. Generally, the short fiction was subsequently released in one or more collections, and the serial novels as books. Some of the shorter novels (novellas) published in magazines were expanded as, or incorporated in, book-length novels. There are also two or three short stories which share common themes and some background elements with "Known Space" stories, but which are not considered a part of the "Known Space" universe: "One Face" (1965) and "Bordered in Black" (1966), both in the 1979 collection "Convergent Series", and possibly "The Color of Sunfire", published online.
In the "Known Space" stories, Niven had created a number of technological devices (GP hull, stasis field, Ringworld material) which, combined with the "Teela Brown gene", made it very difficult to construct engaging stories beyond a certain date—the combination of factors made it tricky to produce any kind of creditable threat/problem without complex contrivances. Niven demonstrated this, to his own satisfaction, with "Safe at Any Speed" (1967). He used the setting for much less short fiction after 1968 and much less for novels after two published in 1980. Late in that decade, Niven invited other authors to participate in a series of shared-universe novels, with the Man-Kzin Wars as their setting. The first volume was published in 1988.
"Ringworld" (1970) won the annual Nebula, Hugo, and Locus best novel awards.
"Protector" (1973) and "The Ringworld Engineers" (1980) were nominated for the Hugo and Locus Awards.
Niven has described his fiction as "playground equipment", encouraging fans to speculate and extrapolate on the events described. Fans have debated, for example, who built the Ringworld (Pak Protectors and the Outsiders being the traditional favorites, but see "Ringworld's Children" for a possibly definitive answer), and what happened to the Tnuctipun. Niven also states that this is not an invitation to violate his copyrights, warning potential publishers and editors not to proceed without permission.
Niven was also reported to have said that "Known Space should be seen as a possible future history told by people that may or may not have all their facts right."
The author also published an "outline" for a story which would "destroy" the Known Space series (or, more precisely, reveal much of the Known Space background to be an in-universe hoax), in an article entitled "Down in Flames". Although the article is written as though Niven intended to write the story, he later wrote that the article was only an elaborate joke, and he never intended to write such a novel. The article itself notes that the outline was made obsolete by the publication of "Ringworld". "Down in Flames" was the result of a conversation between Norman Spinrad and Niven in 1968, but by the time of its first publication in 1977 some of the concepts had been invalidated by Niven's writings between '68 and '77. (A further edited version of the outline was published in "N-Space" in 1990.)
https://en.wikipedia.org/wiki?curid=18224
La Jetée
La Jetée () is a 1962 French science fiction featurette directed by Chris Marker and associated with the Left Bank artistic movement. Constructed almost entirely from still photos, it tells the story of a post-nuclear war experiment in time travel. It is 28 minutes long and shot in black and white.
It won the Prix Jean Vigo for short film. The 1995 science fiction film "12 Monkeys" was inspired by and borrows several concepts directly from "La Jetée", as does the 2015 "12 Monkeys" television series developed from the film.
A man (Davos Hanich) is a prisoner in the aftermath of World War III in post-apocalyptic Paris, where survivors live underground in the "Palais de Chaillot" galleries. Scientists research time travel, hoping to send test subjects to different time periods "to call past and future to the rescue of the present". They have difficulty finding subjects who can mentally withstand the shock of time travel. The scientists eventually settle upon the prisoner; his key to the past is a vague but obsessive memory from his pre-war childhood of a woman (Hélène Châtelain) he had seen on the observation platform ("the jetty") at Orly Airport shortly before witnessing a startling incident there. He did not understand exactly what happened, but knew he had seen a man die.
After several attempts, he reaches the pre-war period. He meets the woman from his memory, and they develop a romantic relationship. After his successful passages to the past, the experimenters attempt to send him into the far future. In a brief meeting with the technologically advanced people of the future, he is given a power unit sufficient to regenerate his own destroyed society.
Upon his return, with his mission accomplished, he discerns that he is to be executed by his jailers. He is contacted by the people of the future, who offer to help him escape to their time permanently; but he asks instead to be returned to the pre-war time of his childhood, hoping to find the woman again. He is returned to the past, placed on the jetty at the airport, and it occurs to him that the child version of himself is probably also there at the same time. He is more concerned with locating the woman, and quickly spots her. However, as he rushes to her, he notices an agent of his jailers who has followed him and realizes the agent is about to kill him. In his final moments, he comes to understand that the incident he witnessed as a child, which has haunted him ever since, was his own death.
"La Jetée" is constructed almost entirely from optically printed photographs playing out as a photomontage of varying rhythm. It contains only one brief shot (of the woman mentioned above sleeping and suddenly waking up) originating on a motion-picture camera, this due to the fact that Marker could only afford to hire one for an afternoon. The stills were taken with a Pentax Spotmatic and the motion-picture segment was shot with a 35 mm Arriflex. The film has no dialogue aside from small sections of muttering in German and people talking in an airport terminal. The story is told by a voice-over narrator. The scene in which the hero and the woman look at a cut-away trunk of a tree is a reference to Alfred Hitchcock's 1958 film "Vertigo" which Marker also references in his 1983 film "Sans soleil".
The editing of "La Jetée," adds to the intensity of the film. With the use of cut-ins and fade-outs, it produces the eerie and unsettling nature adding to the theme of the apocalyptic destruction of World World III. Director of "12 Monkeys", Terry Gilliam, describes the editing as "simply poetic" in the combination of editing and soundtrack that is used in the short film.
As the film plays out as a photomontage, the only continuous variable is the sound. The sound is minimal, taking the form of narration, orchestral score and sound effects. The rhythmic patterns of the soundtrack act as a framework that adds to the intensity of the film: "The dissolve is synchronized with the sound. As the story moves from the past to the present, La Jetee creates mental continuity." The soundtrack also adds to the illusion of movement within the film and the change of time.
In "Black and Blue", her study of postwar French fiction, Carol Mavor describes "La Jetée" as taking "place in a no-place (u-topia) in no-time (u-chronia)" which she connects to the time and place of the fairy tale. She goes on to say "even the sound of the title resonates with the fairy-tale surprise of finding oneself in another world: La Jetée evokes 'là j'étais' (there I was)". By "u-topia", Mavor does not refer to "utopia" as the word is commonly used; she also describes an ambiguity of dystopia/utopia in the film: "It is dystopia with the hope of utopia, or is it utopia cut by the threat of dystopia."
Tor Books blogger Jake Hinkson summed up his interpretation in the title of an essay about the film, "There's No Escape Out of Time".
Hinkson also addresses the symbolic use of imagery: "The Man is blindfolded with some kind of padded device and he sees images. The Man is chosen for this assignment because ... he has maintained a sharp mind because of his attachment to certain images. Thus a film told through the use of still photos becomes about looking at images." He further observes that Marker himself did not refer to "La Jetée" as a film, but as a photo novel.
Yannis Karpouzis offers a structuralist analysis of "La Jetée", examining it as an intermedial artwork: Chris Marker creates an "archive" of objects and conditions that have a photographic quality of their own and are described with the same predicates as pictures. The dialogue between the media (photography and cinematography) and the filmic signifiers (film stills, storyline and narration) is constantly in the backdrop.
In 2010, "Time" ranked "La Jetée" first in its list of "Top 10 time-travel movies". In 2012, in correspondence with the "Sight & Sound" Poll, the British Film Institute deemed "La Jetée" as the 50th greatest film of all time.
In 1963, "La Jetée" won the Prix Jean Vigo for best short film.
In 1963, "La Jetée" was part of the Locarno International Film Festival.
In 2009, the film was featured in the "Buenos Aires Festival Internacional de Cine Independiente".
"La Jetée" was featured in the "Cine//B Film Festival" in 2011.
The International Documentary Film Festival Amsterdam had "La Jetée" as a featured film in 2019.
Science fiction writer William Gibson considers the film one of his main influences.
The video for Sigue Sigue Sputnik's 1989 single "Dancerama" is also an homage to "La Jetée".
The film is one of the influences on the video for David Bowie's "Jump They Say" (1993).
Terry Gilliam's "12 Monkeys" (1995) was inspired by and takes several concepts directly from "La Jetée" (acknowledging this debt in the opening credits).
The 2003 short film "La puppé" is both an homage to and a parody of "La Jetée".
The 2007 Mexican film "Year of the Nail", which is told entirely through still photographs, was inspired by "La Jetée".
Kode9 in collaboration with Ms. Haptic, Marcel Weber (aka MFO), and Lucy Benson created an homage to "La Jetée" in 2011, for the Unsound Festival.
Northern Irish rock band Two Door Cinema Club screened the film at the launch party for their 2016 album "Gameshow". The final track on the album, "Je viens de la", is inspired by "La Jetée" and describes the journey of the film's protagonist.
The film was included in "1001 Movies You Must See Before You Die", edited by producer Steven Schneider.
In 1996, Zone Books released a book which reproduced the film's original images along with the script in both English and French.
In Region 2, the film is available with English subtitles in the "La Jetée/Sans soleil" digipack released by Arte Video. In Region 1, the Criterion Collection has released a "La Jetée/Sans soleil" combination DVD / Blu-ray, which features the option of hearing the English or French narration.
https://en.wikipedia.org/wiki?curid=18230
Little penguin
The little penguin ("Eudyptula minor") is the smallest species of penguin. It grows to an average of 33 cm (13 in) in height and 43 cm (17 in) in length, though specific measurements vary by subspecies. It is found on the coastlines of southern Australia and New Zealand, with possible records from Chile. In Australia, they are often called fairy penguins because of their small size. In New Zealand, they are more commonly known as little blue penguins or blue penguins owing to their slate-blue plumage; they are also known by their Māori name: kororā.
The little penguin was first described by German naturalist Johann Reinhold Forster in 1781. Several subspecies are known, but a precise classification of these is still a matter of dispute. The holotypes of the subspecies "E. m. variabilis" and "Eudyptula minor chathamensis" are in the collection of the Museum of New Zealand Te Papa Tongarewa. The white-flippered penguin is sometimes considered a subspecies, sometimes a distinct species, and sometimes a morph.
Genetic analyses indicate that the Australian and Otago (southeastern coast of South Island) little penguins may constitute a distinct species. In this case the specific name "minor" would devolve on it, with the specific name "novaehollandiae" suggested for the other populations. This interpretation suggests that "E. novaehollandiae" individuals arrived in New Zealand between AD 1500 and 1900 while the local "E. minor" population had declined, leaving a genetic opening for a new species.
Mitochondrial and nuclear DNA evidence suggests the split between "Eudyptula" and "Spheniscus" occurred around 25 million years ago, with the ancestors of the white-flippered and little penguins diverging about 2.7 million years ago.
Like those of all penguins, the little penguin's wings have developed into flippers used for swimming. The little penguin typically grows to between 30 and 33 cm (12 and 13 in) tall and usually weighs about 1.5 kg (3.3 lb). The head and upper parts are blue in colour, with slate-grey ear coverts fading to white underneath, from the chin to the belly. The flippers are blue in colour. The dark grey-black beak is 3–4 cm long, the irises are pale silvery-grey, bluish-grey or hazel, and the feet are pink above with black soles and webbing. An immature individual has a shorter bill and lighter upperparts.
Like most seabirds, little penguins have a long lifespan. The average for the species is 6.5 years, but flipper-ringing experiments have shown, in very exceptional cases, lifespans of up to 25 years in captivity.
The little penguin breeds along the entire coastline of New Zealand (including the Chatham Islands) and southern Australia (including roughly 20,000 pairs on Babel Island). Australian colonies exist in New South Wales, Victoria, Tasmania, South Australia, Western Australia and the Jervis Bay Territory. Little penguins have also been reported from Chile (where they are known as "pingüino pequeño" or "pingüino azul") (Isla Chañaral 1996; Playa de Santo Domingo, San Antonio, 16 March 1997) and South Africa, but it is unclear whether these birds were vagrants. As new colonies continue to be discovered, rough estimates of the world population circa 2011 were around 350,000–600,000 animals.
Overall, little penguin populations in New Zealand have been decreasing. Some colonies have become extinct and others continue to be at risk. Some new colonies have been established in urban areas. The species is not considered endangered in New Zealand, with the exception of the white-flippered subspecies found only on Banks Peninsula and nearby Motunau Island. Since the 1960s, the mainland population has declined by 60–70%, though a small increase has occurred on Motunau Island. A colony exists in Wellington Harbour on Matiu/Somes Island.
Australian little penguin colonies primarily exist on offshore islands, where they are protected from feral terrestrial predators and human disturbance. Colonies are found from Port Stephens in northern New South Wales around the southern coast to Fremantle, Western Australia. Foraging penguins have occasionally been seen as far north as Southport, Queensland and Shark Bay, Western Australia.
An endangered population of little penguins exists at Manly, in Sydney's North Harbour. The population is protected under the NSW Threatened Species Conservation Act 1995 and has been managed in accordance with a Recovery Plan since the year 2000. The population once numbered in the hundreds, but has decreased to around 60 pairs of birds. The decline is believed to be mainly due to loss of suitable habitat, attacks by foxes and dogs and disturbance at nesting sites.
The largest colony in New South Wales is on Montague Island. Up to 8000 breeding pairs are known to nest there each year. Additional colonies exist on the Tollgate Islands in Batemans Bay.
Additional colonies exist in the Five Islands Nature Reserve, offshore from Port Kembla, and at Boondelbah Island, Cabbage Tree Island and the Broughton Islands off Port Stephens.
A population of about 5,000 breeding pairs exists on Bowen Island, up from 500 pairs in 1979 and 1,500 pairs in 1985. During this time, the island was privately leased. The island was vacated in 1986 and is currently controlled by the federal government.
In South Australia, many little penguin colony declines have been identified across the state. In some cases, colonies have declined to extinction (including the Neptune Islands, West Island, Wright Island, Pullen Island and several colonies on western Kangaroo Island), while others have declined from thousands of animals to a few (Granite Island and Kingscote). The only known mainland colony exists at Bunda Cliffs on the state's far west coast, though colonies have existed historically on Yorke Peninsula. A report released in 2011 presented evidence supporting the listing of the statewide population, or the more closely monitored sub-population from Gulf St Vincent, as Vulnerable under South Australia's "National Parks & Wildlife Act 1972". As of 2014, the little penguin is not listed as a species of conservation concern, despite ongoing declines at many colonies.
Tasmanian little penguin population estimates range from 110,000 to 190,000 breeding pairs, of which fewer than 5% are found on mainland Tasmania. Ever-increasing human pressure is predicted to result in the extinction of colonies on mainland Tasmania.
The largest colony of little penguins in Victoria is located at Phillip Island, where the nightly 'parade' of penguins across Summerland Beach has been a major tourist destination, and more recently a major conservation effort, since the 1920s. Phillip Island is home to an estimated 32,000 breeding adults. Little penguins can also be seen in the vicinity of the St Kilda, Victoria pier and breakwater. The breakwater is home to a colony of little penguins which have been the subject of a conservation study since 1986.
Little penguin habitats also exist at a number of other locations, including London Arch and The Twelve Apostles along the Great Ocean Road, Wilson's Promontory and Gabo Island.
The largest colony of little penguins in Western Australia is believed to be located on Penguin Island, where an estimated 1,000 pairs nest during winter. Penguins are also known to nest on Garden Island and Carnac Island which lie north of Penguin Island. Many islands along Western Australia's southern coast are likely to support little penguin colonies, though the status of these populations is largely unknown. An account of little penguins on Bellinger Island published in 1928 numbered them in their thousands. Visiting naturalists in November 1986 estimated the colony at 20 breeding pairs. The account named another substantial colony 12 miles from Bellinger Island and the same distance from Cape Pasley. Little penguins are known to breed on some islands of the Recherche Archipelago, including Woody Island where day-tripping tourists can view the animals. A penguin colony exists on Mistaken Island in King George Sound near Albany. Historical accounts of little penguins on Newdegate Island at the mouth of Deep River and on Breaksea Island near Torbay also exist. West Australian little penguins have been found to forage as far as 150 miles north of Geraldton (south of Denham and Shark Bay).
Little penguins are diurnal and, like many penguin species, spend the largest part of their day swimming and foraging at sea. During the breeding and chick-rearing seasons, little penguins leave their nest at sunrise, forage for food throughout the day and return to their nests just after dusk. Thus sunlight, moonlight and artificial lights can affect colony-attendance behaviour. Increased wind speeds also reduce the penguins' foraging efficiency when provisioning chicks, for reasons not yet understood. Little penguins preen their feathers to keep them waterproof, rubbing a tiny drop of oil onto every feather from a special gland above the tail.
Tagged or banded birds later recaptured or found deceased have shown that individual birds can travel great distances during their lifetimes. In 1984, a penguin that had been tagged at Gabo Island in eastern Victoria was found dead at Victor Harbor in South Australia. Another little penguin was found near Adelaide in 1970 after being tagged at Phillip Island in Victoria the previous year. In 1996, a banded penguin was found dead at Middleton. It had been banded in 1991 at Troubridge Island in Gulf St Vincent, South Australia.
The little penguin's foraging range is quite limited in terms of distance from shore when compared to seabirds that can fly.
Little penguins feed by hunting small clupeoid fish, cephalopods and crustaceans, for which they travel and dive quite extensively including to the sea floor. Researcher Tom Montague studied a Victorian population for two years in order to understand its feeding patterns. Montague's analysis revealed a penguin diet consisting of 76% fish and 24% squid. Nineteen fish species were recorded, with pilchard and anchovy dominating. The fish were usually less than 10 cm long and often post-larval or juvenile. Less common little penguin prey include: crab larvae, eels, jellyfish and seahorses. In New Zealand, important little penguin prey items include arrow squid, slender sprat, Graham's gudgeon, red cod and ahuru.
Since the year 2000, the diet of Port Phillip Bay's little penguins has consisted mainly of barracouta, anchovy and arrow squid. Pilchards featured more prominently in southern Australian little penguin diets before the mass sardine mortality events of the 1990s, which affected sardine stocks over 5,000 kilometres of coastline. Jellyfish, including species in the genera "Chrysaora" and "Cyanea", were found to be actively sought-out food items, though they had previously been thought to be only accidentally ingested. Similar preferences were found in the Adélie penguin, yellow-eyed penguin and Magellanic penguin. An important crustacean in the little penguin diet is the krill "Nyctiphanes australis", which surface-swarms during the day.
Little penguins are generally inshore feeders. The use of data loggers has shown that in the diving behaviour of little penguins, 50% of dives go no deeper than 2 m, and the mean diving time is 21 seconds. In the 1980s, the average little penguin dive time was estimated at 23–24 seconds. The maximum recorded depth and time submerged are 66.7 metres and 90 seconds respectively.
Little penguins play an important role in the ecosystem, not only as a predator but also as a host to parasites. Recent studies have described a new species of feather mite that feeds on the preen oil on the penguin's feathers. Little penguins preen their mates to strengthen social bonds and remove parasites, especially from the partner's head, where self-preening is difficult.
Little penguins reach sexual maturity at different ages. The female matures at two years old and the male at three years old.
Between June and August, males return to shore to renovate or dig new burrows and display to attract a mate for the season. Males compete for partners with their displays. Breeding occurs annually, but the timing and duration of the breeding season varies from location to location and from year to year. Breeding occurs during spring and summer when oceans are most productive and food is plentiful.
Little penguins remain faithful to their partner during a breeding season and whilst hatching eggs. At other times of the year they tend to swap burrows. They exhibit site fidelity to their nesting colonies and nesting sites over successive years. Little penguins can breed as isolated pairs, in colonies, or semi-colonially.
Penguins' nests vary depending on the available habitat. They are established close to the sea in sandy burrows excavated by the birds' feet or dug previously by other animals. Nests may also be made in caves, rock crevices, under logs, or in or under a variety of man-made structures including nest boxes, pipes, stacks of wood or timber, and buildings. Nests have occasionally been observed to be shared with prions, while some burrows are occupied by short-tailed shearwaters and little penguins in alternating seasons. In the 1980s, little was known about competition for burrows between bird species.
The timing of breeding seasons varies across the species' range. In the 1980s, the first egg laid at a penguin colony on Australia's eastern coast could be expected to come as early as May or as late as October. Eastern Australian populations (including at Phillip Island, Victoria) lay their eggs from July to December. In South Australia's Gulf St. Vincent, eggs are laid between April and October and south of Perth in Western Australia, peak egg-laying occurred in June and continued until mid-October (based on observations from the 1980s).
Male and female birds share incubating and chick-rearing duties. They are the only species of penguin capable of producing more than one clutch of eggs per breeding season, but few populations do so. In ideal conditions, a penguin pair is capable of raising two or even three clutches of eggs over an extended season, which can last between eight and twenty-eight weeks.
The one or two (on rare occasions, three) white or lightly mottled brown eggs are laid between one and four days apart. Each egg typically weighs around 55 grams at time of laying. Incubation takes up to 36 days. Chicks are brooded for 18–38 days and fledge after 7–8 weeks. On Australia's east coast, chicks are raised from August to March. In Gulf St. Vincent, chicks are raised from June through November.
Little penguins typically return to their colonies to feed their chicks at dusk. The birds tend to come ashore in small groups to provide some defence against predators, which might otherwise pick off individuals. In Australia, the strongest colonies are usually on cat-free and fox-free islands. However, the population on Granite Island (which is a fox, cat and dog-free island) has been severely depleted, from around 2000 penguins in 2001 down to 22 in 2015. Granite Island is connected to the mainland via a timber causeway.
Predation by native animals is not considered a threat to little penguin populations, as these predators' diets are diverse. Large native reptiles including the tiger snake and Rosenberg's goanna are known to take little penguin chicks and blue-tongued lizards are known to take eggs. At sea, little penguins are eaten by long-nosed fur seals. A study conducted by researchers from the South Australian Research and Development Institute found that roughly 40 percent of seal droppings in South Australia's Granite Island area contained little penguin remains. Other marine predators include sharks and barracouta.
Little penguins are also preyed upon by white-bellied sea eagles. These large birds of prey are endangered in South Australia and are not considered a threat to colony viability there. Other avian predators include kelp gulls, Pacific gulls, brown skuas and currawongs.
In Victoria, at least one penguin death has been attributed to a water rat.
A mass mortality event occurred in Port Phillip Bay in March 1935. The event coincided with moulting and deaths were attributed to fatigue. Another event occurred at Phillip Island in Victoria in 1940. The population there was believed to have fallen from 2000 birds to 200. Dead birds were allegedly in healthy-looking condition so speculation pointed to a disease or pathogen.
Citizens have raised concerns about mass mortality of penguins alleging a lack of official interest in the subject. Discoveries of dead penguins in Australia should be reported to the corresponding state's environment department. In South Australia, a mortality register was established in 2011.
Little penguins have long been a curiosity to humans and to children in particular. Captive animals are often exhibited in zoos. Over time attitudes towards penguins have evolved from direct exploitation (for meat, skins and eggs) to the development of tourism ventures, conservation management and the protection of both birds and their habitat.
During the 19th and 20th centuries, little penguins were shot for "sport", killed for their skins, captured for amusement and eaten by ship-wrecked sailors and castaways to avoid starvation. Their eggs were also collected for human consumption by indigenous and non-indigenous people. In 1831, N. W. J. Robinson noted that penguins were typically soaked in water for many days to tenderise the meat before eating.
One of the colonies raided for penguin skins was Lady Julia Percy Island in Victoria. The following directions for preparing penguin skin were published in "The Chronicle" in 1904:

'F.W.M.,' Port Lincoln. — To clean penguin skins, scrape off as much fat as you can with a blunt knife. Then peg the skin out carefully, stretching it well. Let it remain in the sun till most of the fat is dried out of it, then rub with a compound of powdered alum, salt, and pepper in about equal proportions. Continue to rub this on at intervals until the skin becomes soft and pliable.

An Australian taxidermist was once commissioned to make a woman's hat for a cocktail party from the remains of a dead little penguin. The newspaper described it as "a smart little toque of white and black feathers, with black flippers set at a jaunty angle on the crown."
In the 20th century, little penguins have been maliciously attacked by humans, used as bait to catch Southern rock lobster, used to free snagged fishing tackle, killed as incidental bycatch by fishermen using nets, and killed by vehicle strikes on roads and on the water.
In the late 20th and 21st centuries, more mutually beneficial relationships between penguins and humans developed. The sites of some breeding colonies have become carefully-managed tourist destinations which provide an economic boost for coastal and island communities in Australia and New Zealand. These locations also often provide facilities and volunteer staff to support population surveys, habitat improvement works and little penguin research programs.
At Phillip Island, Victoria, a viewing area has been established at the Phillip Island Nature Park to allow visitors to view the nightly "penguin parade". Lights and concrete stands have been erected to allow visitors to see but not photograph or film the birds interacting in their colony (flash photography can blind or scare them). In 1987, more international visitors viewed the penguins coming ashore at Phillip Island than visited Uluru. In the financial year 1985–86, 350,000 people saw the event, and at that time audience numbers were growing 12% annually.
In Bicheno, Tasmania, evening penguin viewing tours are offered by a local tour operator at a rookery on private land. A similar sunset tour is offered at Low Head, near the mouth of the Tamar River on Tasmania's north coast. Observation platforms exist near some of Tasmania's other little penguin colonies, including Bruny Island and Lillico Beach near Devonport.
South of Perth, Western Australia, visitors to Penguin Island are able to view penguin feeding within a penguin rehabilitation centre and may also encounter wild penguins ashore in their natural habitat. The island is accessible via a short passenger ferry ride, and visitors depart the island before dusk to protect the colony from disturbance.
Visitors to Kangaroo Island, South Australia, have nightly opportunities to observe penguins at the Kangaroo Island Marine Centre in Kingscote and at the Penneshaw Penguin Centre. Granite Island at Victor Harbor, South Australia continues to offer guided tours at dusk, despite its colony dropping from thousands in the 1990s to dozens in 2014. There is also a Penguin Centre located on the island where the penguins can be viewed in captivity.
In the Otago, New Zealand town of Oamaru, visitors may view the birds returning to their colony at dusk. In Oamaru it is not uncommon for penguins to nest within the cellars and foundations of local shorefront properties, especially in the old historic precinct of the town. More recently, little penguin viewing facilities have been established at Pilots Beach on the Otago Peninsula in Dunedin, New Zealand. Here visitors are guided by volunteer wardens to watch penguins returning to their burrows at dusk.
Food availability appears to strongly influence the survival and breeding success of little penguin populations across their range.
Variation in prey abundance and distribution from year to year causes young birds to be washed up dead from starvation or in weak condition. The problem is not confined to young birds, and has been observed throughout the 20th century. The breeding season of 1984–1985 in Australia was particularly bad, with minimal breeding success. Eggs were deserted prior to hatching and many chicks starved to death. Malnourished penguin carcasses were found washed up on beaches, and the trend continued the following year. In April 1986, approximately 850 dead penguins were found washed ashore in south-western Victoria. The phenomenon was ascribed to a lack of available food.
There are two seasonal peaks in the discovery of dead little penguins in Victoria. The first follows moult and the second occurs in mid-winter. Moulting penguins are under stress, and some return to the water in a weak condition afterwards. Mid-winter marks the season of lowest prey availability, thus increasing the probability of malnutrition and starvation.
In 1990, 24 dead penguins were found in the Encounter Bay area in South Australia during a week spanning late April to early May. A State government park ranger explained that many of the birds were juvenile and had starved after moulting.
In 1995 pilchard mass mortality events occurred, which reduced the penguins' available prey and resulted in starvation and breeding failure. Another similar event occurred in 1999. Both mortality events were attributed to an exotic pathogen which spread across the entire Australian population of the fish, reducing the breeding biomass by 70%. Crested tern and gannet populations also suffered following these events.
In 1995, 30 dead penguins were found ashore between Waitpinga and Chiton Rocks in the Encounter Bay area. The birds had suffered severe bacterial infections, and the mortalities may have been linked to the mass mortality of pilchards that resulted from the spread of an exotic pathogen that year.
In the late 1980s, it was believed that penguins did not compete with the fishing industry, despite anchovy being commercially caught. That assertion was made prior to the establishment and development of South Australia's commercial pilchard fishery in the 1990s. In South Africa, the overfishing of species of preferred penguin prey has caused Jackass penguin populations to decline. Overfishing is a potential (but not proven) threat to the little penguin.
Introduced mammalian predators present the greatest terrestrial risk to little penguins and include cats, dogs, rats, foxes, ferrets and stoats.
Due to their diminutive size and the introduction of new predators, some colonies have been reduced in size by as much as 98% in just a few years, such as the small colony on Middle Island, near Warrnambool, Victoria, which was reduced from approximately 600 penguins in 2001 to fewer than 10 in 2005. Because of this threat of colony collapse, conservationists successfully pioneered an experimental technique using Maremma Sheepdogs to protect the colony and fend off would-be predators, with numbers reaching 100 by 2017.
Uncontrolled dogs or feral cats can have sudden and severe impacts on penguin colonies (more than the penguin's natural predators) and may kill many individuals. Examples of colonies affected by dog attacks include Manly, New South Wales, Penneshaw, South Australia, Red Chapel Beach, Wynyard and Low Head, Tasmania, Penguin Island, Western Australia and Little Kaiteriteri Beach, New Zealand. Cats have been recorded preying on penguin chicks at Emu Bay on Kangaroo Island in South Australia. Paw prints at an attack site at Freeman's Knob, Encounter Bay, showed that the dog responsible was small, roughly the size of a terrier. The single attack may have rendered the small colony extinct.
A suspected stoat or ferret attack at Doctor's Point near Dunedin, New Zealand claimed the lives of 29 little blue penguins in November 2014.
Foxes have been known to prey on little penguins since at least the early 20th century. A fox was believed responsible for the deaths of 53 little penguins over several nights on Granite Island in 1994. In June 2015, 26 penguins from the Manly colony were killed in 11 days. A fox believed responsible was eventually shot in the area and an autopsy is expected to prove or disprove its involvement. In November 2015 a fox entered the little penguin enclosure at the Melbourne Zoo and killed 14 penguins, prompting measures to further "fox proof" the enclosure.
The impacts of human habitation in proximity to little penguin colonies include collisions with vehicles, direct harassment, burning and clearing of vegetation and housing development. In 1950, roughly a hundred little penguins were allegedly burned to death near The Nobbies at Port Phillip Bay during a grass fire lit intentionally by a grazier for land management purposes. It was later reported that the figure had been overstated. The matter was resolved when the grazier offered to return land to the custody of the State for the future protection of the colony.
The Conservation Council of Western Australia has expressed opposition to the proposed development of a marina and canals at Mangles Bay, in close proximity to penguin colonies at Penguin Island and Garden Island. Researcher Belinda Cannell of Murdoch University found that over a quarter of penguins found dead in the area had been killed by boats. Carcasses had been found with heads, flippers or feet cut off, cuts on their backs and ruptured organs. The development would increase boat traffic and result in more penguin deaths.
Penguins are vulnerable to interference by humans, especially while they are ashore during moult or nesting periods.
In 1930 in Tasmania, it was believed that little penguins were competing with mutton-birds, which were being commercially exploited. An "open season" in which penguins would be permitted to be killed was planned in response to requests from members of the mutton-birding industry.
In the 1930s, an arsonist was believed to have started a fire on Rabbit Island near Albany, Western Australia, a known little penguin rookery. Visitors later reported finding dead penguins there with their feet burned off.
In 1949, penguins on Phillip Island in Victoria became victims of human cruelty, with some kicked and others thrown off a cliff and shot at. These acts of cruelty prompted the state government to fence off the rookeries. In 1973, ten dead penguins and fifteen young seagulls were found dead on Wright Island in Encounter Bay, South Australia. It was believed that they were killed by people poking sticks down burrows before scattering the dead bodies around. In 1983 one penguin was found dead and another injured at Encounter Bay, both by human interference. The injured bird was euthanased.
More recent examples of destructive interference can be found at Granite Island, where in 1994 a penguin chick was taken from a burrow and abandoned on the mainland, a burrow containing penguin chicks was trampled, and litter was discarded down active burrows. In 1998, two incidents in six months resulted in penguin deaths; the latter, which occurred in May, saw 13 penguins apparently kicked to death. In March 2016, two little penguins were kicked and attacked by humans during separate incidents at the St Kilda colony, Victoria.
In 2018, 20-year-old Tasmanian man Joshua Leigh Jeffrey was fined $82.50 in court costs and sentenced to 49 hours of community service at Burnie Magistrates Court after killing nine little penguins at Sulphur Creek in North West Tasmania on 1 January 2016 by beating them with a stick. Dr Eric Woehler from conservation group Birds Tasmania denounced the perceived leniency of the sentence, which he said placed minimal value on Tasmania's wildlife and set an "unwelcome precedent". Following an appeal by prosecutors, Jeffrey had his sentence doubled on 15 October 2018. The office of the Director of Public Prosecutions said it considered the original sentence to be manifestly inadequate. The original sentence was set aside, and Jeffrey was sentenced to two months in prison, suspended on the condition of him committing no offences for a year that are punishable by imprisonment. His community order was also doubled to 98 hours.
Also in 2018, a dozen little penguin carcasses were found in a garbage bin at Low Head, Tasmania prompting an investigation into the causes of death.
Some little penguins are drowned when amateur fishermen set gill nets near penguin colonies. Discarded fishing line can also present an entanglement risk and contact can result in physical injury, reduced mobility or drowning. In 2014, a group of 25 dead little penguins was found on Altona Beach in Victoria. Necropsies concluded that the animals had died after becoming entangled in net fishing equipment, prompting community calls for a ban on net fishing in Port Phillip Bay.
In the 20th century, little penguins were intentionally shot or caught by fishermen to use as bait in pots for catching crayfish (Southern rock lobster) or by line fishermen. Colonies were targeted for this purpose in various parts of Tasmania including Bruny Island and West Island, South Australia.
A study in Perth from 2003 to 2012 found that the main cause of mortality was trauma, most likely from watercraft, leading to a recommendation for management strategies to avoid watercraft strikes.
Oil spills can be lethal for penguins and other sea birds. Oil is toxic when ingested and penguins' buoyancy and the insulative quality of their plumage is damaged by contact with oil. Little penguin populations have been significantly affected during two major oil spills at sea: the "Iron Baron" oil spill off Tasmania's north coast in 1995 and the grounding of the "Rena" off New Zealand in 2011. In 2005, a 10-year post-mortem reflection on the "Iron Baron" incident estimated penguin fatalities at 25,000. The "Rena" incident killed 2,000 seabirds (including little penguins) directly, and killed an estimated 20,000 in total based on wider ecosystem impacts.
Another oil spill or dumping event claimed the lives of up to 120 little penguins, which were found oiled, deceased and ashore near Warrnambool in 1990. A further 104 penguins were taken into care for cleaning. The waters west of Cape Otway were polluted with bunker oil. The source was unknown at the time, and an investigation was started into three potentially responsible vessels. Earlier oil spill or oil dumping events affected little penguins at various locations in the 1920s, 1930s, 1940s and 1950s.
Plastics are swallowed by little penguins, who mistake them for prey items. They present a choking hazard and also occupy space in the animal's stomach. Indigestible material in a penguin's stomach can contribute to malnutrition or starvation. Other larger plastic items, such as bottle packaging rings, can become entangled around penguins' necks, affecting their mobility.
Heat waves can result in mass mortality episodes at nesting sites, as the penguins have poor physiological adaptations towards losing heat. Climate change is recognised as a threat, though currently it is assessed to be less significant than others. Efforts are being made to protect penguins in Australia from the likely future increased occurrence of extreme heat events.
Variation in the timing of seasonal ocean upwelling events, such as the Bonney Upwelling, which provide abundant nutrients vital to the growth and reproduction of primary producers at the base of the food chain, may adversely affect prey availability, and the timing and success or failure of little penguin breeding seasons.
Little penguins are protected from various threats under different legislation in different jurisdictions.
On land, little penguins are vulnerable to attack from domestic and feral dogs and cats. Attacks at Encounter Bay, on Kangaroo Island, at Manly, in Tasmania and in New Zealand have resulted in significant impacts on different populations. Management strategies to mitigate the risk of attack include establishing dog-free zones near penguin colonies and introducing regulations requiring dogs to remain on leashes at all times in adjacent areas.
Little penguins on Middle Island off Warrnambool, Victoria were subject to heavy predation by foxes, which were able to reach the island at low tide by a tidal sand bridge. The deployment of Maremma sheepdogs to protect the penguin colony has deterred the foxes and enabled the penguin population to rebound. This is in addition to the support from groups of volunteers who work to protect the penguins from attack at night. The first Maremma sheepdog to prove the concept was Oddball, whose story inspired a feature film of the same name, released in 2015. In December 2015, the BBC reported, "The current dogs patrolling Middle Island are Eudy and Tula, named after the scientific term for the fairy penguin: Eudyptula. They are the sixth and seventh dogs to be used and a new puppy is being trained up [...] to start work in 2016."
In Sydney, snipers have been used to protect a colony of little penguins. This effort is in addition to support from local volunteers who work to protect the penguins from attack at night.
Several efforts have been made to improve breeding sites on Kangaroo Island, including augmenting habitat with artificial burrows and revegetation work. The Knox School's habitat restoration efforts were filmed and broadcast in 2008 by "Totally Wild".
In 2019, concrete nesting "huts" were made for the little penguins of Lion Island in the mouth of the Hawkesbury River in New South Wales, Australia. The island had been ravaged by a fire which began with a lightning strike and destroyed 85% of the penguins' natural habitat.
Weed control undertaken by the Friends of Five Islands in New South Wales helps improve prospects of breeding success for seabirds, including the little penguin. The main problem species on the Five Islands are kikuyu grass and coastal morning glory. The weeding work has resulted in increasing numbers of little penguin burrows in the areas weeded and the return of the white-faced storm petrel to the island after a 56-year breeding absence.
Zoological exhibits featuring purpose-built enclosures for little penguins can be seen in Australia at the Adelaide Zoo, Melbourne Zoo, the National Zoo & Aquarium in Canberra, Perth Zoo, Caversham Wildlife Park (Perth), Ballarat Wildlife Park, Sea Life Sydney Aquarium and the Taronga Zoo in Sydney. Enclosures include nesting boxes or similar structures for the animals to retire into, a reconstruction of a pool and in some cases, a transparent aquarium wall to allow patrons to view the animals underwater while they swim.
A little penguin exhibit exists at Sea World, on the Gold Coast, Queensland, Australia. In early March 2007, 25 of the 37 penguins died from an unknown toxin following a change of gravel in their enclosure. The cause of the deaths was never identified, and the 12 surviving penguins were not returned to the enclosure where they had become ill. A new enclosure for the little penguin colony was opened at Sea World in 2008.
In New Zealand, little penguin exhibits exist at the Auckland Zoo, the Wellington Zoo and the National Aquarium of New Zealand. Since 2017, the National Aquarium of New Zealand has featured a monthly "Penguin of the Month" board, declaring two of their resident animals the "Naughty" and "Nice" penguin for that month. Photos of the board have gone viral and gained the aquarium a large worldwide social media following.
A colony of little blue penguins exists at the New England Aquarium in Boston, Massachusetts. The penguins are one of three species on exhibit and are part of the Association of Zoos and Aquariums' Species Survival Plan for little blue penguins. Little penguins can also be seen at the Louisville Zoo and the Bronx Zoo.
Linus Torvalds, the original creator of Linux (a popular operating system kernel), was once pecked by a little penguin while on holiday in Australia. Reportedly, this encounter encouraged Torvalds to select Tux as the official Linux mascot.
A Linux kernel programming challenge called the Eudyptula Challenge has attracted thousands of participants; its anonymous creator (or creators) uses the name "Little Penguin".
Penny the Little Penguin was the mascot for the 2007 FINA World Swimming Championships held in Melbourne, Victoria.
|
https://en.wikipedia.org/wiki?curid=18232
|
Lake Balaton
Lake Balaton is a freshwater lake in the Transdanubian region of Hungary. It is the largest lake in Central Europe and one of the region's foremost tourist destinations. The Zala River provides the largest inflow of water to the lake, and the canalised Sió is the only outflow.
The mountainous region of the northern shore is known both for its historic character and as a major wine region, while the flat southern shore is known for its resort towns. Balatonfüred and Hévíz developed early as resorts for the wealthy, but it was not until the late 19th century that landowners, ruined by "Phylloxera" attacking their grape vines, began building summer homes to rent out to the burgeoning middle classes.
Unlike all other Hungarian lake names, which bear the identifying suffix "-tó" ("lake"), Lake Balaton is referred to in Hungarian with a definite article, i.e. "a Balaton" ("the Balaton"). It was called "lacus Pelsodis" or "Pelso" by the Romans. The name is Indo-European in origin (cf. Czech "pleso" 'sinkhole, deep end of a lake'), later replaced by the Slavic *"bolto" (Czech "bláto", Slovak "blato", Polish "błoto") meaning 'mud, swamp' (from earlier Proto-Slavic *"boltьno").
In January 846 Slavic prince Pribina began to build a fortress as his seat of power and several churches in the region of Lake Balaton, in a territory of modern Zalavár surrounded by forests and swamps along the river Zala. His well fortified castle and capital of Balaton Principality that became known as "Blatnohrad" or "Moosburg" ("Swamp Fortress") served as a bulwark both against the Bulgarians and the Moravians.
The German name for the lake is "Plattensee". It is unlikely that the Germans named the lake so for being shallow, since the adjective "platt" is a Greek loanword that was borrowed via French and entered the general German vocabulary in the 17th century. It is also noteworthy that the average depth of Balaton (about 3.2 m) is not extraordinary for the area (cf. the average depth of the neighbouring Neusiedler See, which is roughly 1 m).
Lake Balaton influences precipitation in the surrounding area, which receives more precipitation than most of Hungary, resulting in more cloudy days and less extreme temperatures. The lake's surface freezes during winters. The microclimate around Lake Balaton has also made the region ideal for viticulture. The Mediterranean-like climate, combined with the soil (containing volcanic rock), has made the region notable for its production of wines since the Roman period two thousand years ago.
While a few settlements on Lake Balaton, including Balatonfüred and Hévíz, have long been resort centres for the Hungarian aristocracy, it was only in the late 19th century that the Hungarian middle class began to visit the lake. The construction of railways in 1861 and 1909 increased tourism substantially, but the post-war boom of the 1950s was much larger.
By the turn of the 20th century, Balaton had become a center of research by Hungarian biologists, geologists, hydrologists, and other scientists, leading to the country's first biological research institute being built on its shore in 1927.
The last major German offensive of World War II, Operation Frühlingserwachen, was conducted in the region of Lake Balaton in March 1945, and is referred to as "the Lake Balaton Offensive" in many British histories of the war. The battle was a German attack by Sepp Dietrich's Sixth Panzer Army and the Hungarian Third Army between 6 March and 16 March 1945, and ended in a Red Army victory. Several Ilyushin Il-2 wrecks have been pulled out of the lake after having been shot down during the later months of the war.
During the 1960s and 1970s, Balaton became a major tourist destination due to focused government efforts, causing the number of overnight guests in local hotels and campsites to increase from 700,000 in July 1965 to two million in July 1975. Weekend visitors to the region, including tens of thousands from Budapest, reached more than 600,000 by 1975. It was visited by ordinary working Hungarians, especially on subsidised holiday excursions for labor union members. It also attracted many East Germans and other residents of the Eastern Bloc. West Germans could also visit, making Balaton a common meeting place for families and friends separated by the Berlin Wall until 1989. The collapse of the Soviet Union after 1991 and the dismantling of the labor unions caused a gradual but steady reduction in the numbers of lower-paid Hungarian visitors.
The major resorts around the lake are Siófok, Keszthely, and Balatonfüred. Zamárdi, another resort town on the southern shore, has been the site of Balaton Sound, a notable electronic music festival since 2007. Balatonkenese has hosted numerous traditional gastronomic events. Siófok is known for attracting young people to it because of its large clubs. Keszthely is the site of the Festetics Palace and Balatonfüred is a historical bathing town which hosts the annual Anna Ball.
The peak tourist season extends from June until the end of August. The average water temperature during the summer is 25 °C (77 °F), which makes bathing and swimming popular on the lake. Most of the beaches consist of either grass, rocks, or the silty sand that also makes up most of the bottom of the lake. Many resorts have artificial sandy beaches and all beaches have step access to the water. Other tourist attractions include sailing, fishing, and other water sports, as well as visiting the countryside and hills, wineries on the north coast, and nightlife on the south shore. The Tihany Peninsula is a historical district. Badacsony is a volcanic mountain and wine-growing region as well as a lakeside resort. The lake is almost completely surrounded by separated bike lanes to facilitate bicycle tourism.
Although the peak season at the lake is the summer, Balaton is also frequented during the winter, when visitors go ice-fishing or even skate, sledge, or ice-sail on the lake if it freezes over.
Sármellék International Airport provides air service to Balaton (although most service is only seasonal).
Other resort towns include: Balatonalmádi, Balatonboglár, Balatonlelle, Fonyód and Vonyarcvashegy.
From east to west:
Balatonfőkajár - Balatonakarattya - Balatonkenese - Balatonfűzfő - Balatonalmádi - Alsóörs - Paloznak - Csopak - Balatonarács - Balatonfüred - Tihany - Aszófő - Örvényes - Balatonudvari - Fövenyes - Balatonakali - Zánka - Balatonszepezd - Szepezdfürdő - Révfülöp - Pálköve - Ábrahámhegy - Balatonrendes - Badacsonytomaj - Badacsony - Badacsonytördemic - Szigliget - Balatonederics - Balatongyörök - Vonyarcvashegy - Gyenesdiás - Keszthely
From east to west:
Balatonakarattya - Balatonaliga - Balatonvilágos - Sóstó - Szabadifürdő - Siófok - Széplak - Zamárdi - Szántód - Balatonföldvár - Balatonszárszó - Balatonszemes - Balatonlelle - Balatonboglár - Fonyód - Fonyód–Alsóbélatelep - Bélatelep - Balatonfenyves - Balatonmáriafürdő - Balatonkeresztúr - Balatonberény - Fenékpuszta
|
https://en.wikipedia.org/wiki?curid=18233
|
Libro de los juegos
The Libro de los Juegos ("Book of games"), or Libro de axedrez, dados e tablas ("Book of chess, dice and tables", in Old Spanish), commissioned by Alfonso X of Castile, Galicia and León and completed in his scriptorium in Toledo in 1283, is an exemplary piece of Alfonso's medieval literary legacy.
The book consists of ninety-seven leaves of parchment, many with color illustrations, and contains 150 miniatures. The text is a treatise that addresses the playing of three types of games: a game of skill (chess); a game of chance (dice); and a third game, backgammon, which combines elements of both skill and chance. The book contains the earliest known description of these games. The games are discussed in the final section of the book on both an astronomical and an astrological level. Examined further, the text can also be read as an allegorical initiation tale and as a metaphysical guide for leading a balanced, prudent, and virtuous life. In addition to the didactic, though not overly moralistic, aspect of the text, the manuscript's illustrations reveal a rich cultural, social, and religious complexity.
It is one of the most important documents for researching the history of board games. The only known original is held in the library of the monastery of El Escorial near Madrid in Spain. The book is bound in sheepskin and is 40 cm high and 28 cm wide (16 in × 11 in). A 1334 copy is held in the library of the Spanish Royal Academy of History in Madrid.
Alfonso was likely influenced by his contact with scholars in the Arab world. Unlike many contemporary texts on the topic, he does not engage the games in the text with moralistic arguments; instead, he portrays them in an astrological context. He conceives of gaming as a dichotomy between the intellect and chance. The book is divided into three parts reflecting this: the first on chess (a game purely of abstract strategy), the second on dice (with outcomes controlled strictly by chance), and the last on tables (combining elements of both). The text may have been influenced by Frederick II's text on falconry.
The Libro de juegos contains an extensive collection of writings on chess, with over 100 chess problems and variants. Among its more notable entries is a depiction of what Alfonso calls the "ajedrex de los quatro tiempos" ("chess of the four seasons"). This game is a chess variant for four players, described as representing a conflict between the four elements and the four humors. The chessmen are marked correspondingly in green, red, black, and white, and pieces are moved according to the roll of dice. Alfonso also describes a game entitled "astronomical chess", played on a board of seven concentric circles, divided radially into twelve areas, each associated with a constellation of the Zodiac.
The book describes the rules for a number of games in the tables family. One notable entry is "todas tablas", which has an identical starting position to modern backgammon and follows the same rules for movement and bearoff. Alfonso also describes a variant played on a board with seven points in each table. Players rolled seven-sided dice to determine the movement of pieces, an example of Alfonso's preference for the number seven.
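For a feel of the chance element in this variant, the toy sketch below simulates rolls of such seven-sided dice. It is purely illustrative: the function name is invented here, and the variant's full movement rules are not modeled.

```python
# Toy simulation of the seven-sided dice described above (illustrative
# only; Alfonso's full movement rules are not reproduced).
import random

def roll_seven_sided(n_dice=2):
    """Return the faces shown by n fair seven-sided dice."""
    return [random.randint(1, 7) for _ in range(n_dice)]

print(roll_seven_sided())  # e.g. [3, 7]
```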
The miniatures in the "Libro de juegos" vary between half- and full-page illustrations. The half-page miniatures typically occupy the upper half of a folio, with text explaining the game "problem" solved in the image occupying the bottom half. The back or second (verso) side of Folio 1, in a half-page illustration, depicts the initial stages of the creation of the "Libro de juegos", accompanied by text on the bottom half of the page, and the front or first (recto) side of Folio 2 depicts the transmission of the game of chess from an Indian Philosopher-King to three followers. The full-page illustrations are almost exclusively on the verso side of later folios and are faced by accompanying text on the recto side of the following folio. The significance of the change in miniature size and placement may indicate images of special emphasis, could merely function as a narrative or didactic technique, or could indicate different artisans at work in Alfonso's scriptorium as the project developed over time.
Having multiple artisans working on the "Libro de juegos" would have been a typical practice for medieval chanceries and scriptoria, where the labor of producing a manuscript was divided amongst individuals of varying capacities, for example the positions of scribe, draftsman, and apprentice cutting pages. But in addition to performing different tasks, various artisans could have labored at the same job, such as the work of illustration in the "Libro de juegos", thereby revealing a variety of hands or styles. The "Libro de Juegos" offers such evidence in the difference in size between the half- and full-page illustrations in addition to changes in framing techniques amongst the folios: geometrical frames with embellished corners, architectural frames established by loosely perspectival rooftops and colonnades, and games played under tents. Other stylistic variances are found in figural representation, in facial types, and in a repertoire of different postures assumed by the players in different folios in the manuscript.
For example, in a comparison of two miniatures, found on Folios 53v and 76r, examples of these different styles are apparent, although the trope of a pair of gamers is maintained. In Folio 53v, two men are playing chess, both wearing turbans and robes. Although they may be seated on rugs on the ground, as suggested by the ceramic containers that are placed on or in front of the rug near the man on the right side of the board, the figures' seated positions, which are full frontal with knees bent at right angles, suggest that they are seated on stools or perhaps upholstered benches. The figures' robes display a Byzantine conservatism, with their modeled three-dimensionality and allusion to a Classical style, yet the iconic hand gestures are reminiscent of a Romanesque energy and theatricality. Although the figures are seated with their knees and torsos facing front, their shoulders and heads rotate in three-quarter profile toward the center of the page, the chess board, and each other. The proximal, inner arm of each player (the arm that is closest to the board) is raised in a speaking gesture; the distal, outside arms of the players are also raised and are bent at the elbows, creating a partial crossing of each player's torso as the hands lift in speaking gestures. The faces reveal a striking specificity of subtle detail, particular to a limited number of miniatures throughout the "Libro de juegos", perhaps indicative of a particular artist's hand. These details include full cheeks, realistic wrinkles around the eyes and across the brow, and a red, full-lipped mouth that hints at the Gothic affectations in figural representation coming out of France during the late twelfth and early thirteenth centuries.
The style in the miniature in Folio 76v is markedly different from the style in Folio 53v. In this case, the framed miniature contains two men, perhaps Spanish, with uncovered wavy light brown hair that falls to the jaw line. The men seem young, as the player on the left has no facial hair and his face is unlined. In both folios, both pairs of players are playing backgammon and seem to be well-dressed, although there is no addition of gold detailing to their robes as seen in the wardrobes of aristocratic players in other miniatures. These players are seated on the ground, leaning on pillows that are placed next to a backgammon board. In this miniature, the figure on the left side of the board faces the reader, while the figure on the right leans in to the board with his back to the reader. In other words, each player is leaning on his left elbow, using his right hand to reach across his body to play. In the miniatures of this style, the emphasis seems to be more on the posture of the player than the detail of their faces; this crossed, lounging style is only found in the folios of the "Libro de tablas", the third section of the "Libro de juegos" which explains the game of backgammon, again perhaps indicative of the work of a particular artist.
Other visual details contemporaneous with Alfonso's court and social and cultural milieu infuse the "Libro de juegos". Although some of the miniatures are framed by simple rectangles with corners embellished by the golden castles and lions of Castile and León, others are framed by medieval Spanish architectural motifs, including Gothic and Mudéjar arcades of columns and arches. At times, the figural depictions are hierarchical, especially in scenes with representations of Alfonso, where the king is seated on a raised throne while dictating to scribes or meting out punishments to gamblers. Yet a contemporary atmosphere of Spanish "convivencia" is evoked by the inclusion of nobility, rogues, vagrants, young and old, men, women, and Christian, Muslim, and Jewish characters. Alfonso himself is depicted throughout the text, both as participant and spectator, and both as an older and as a younger man. The pages are filled with many social classes and ethnicities in various stages of solving the challenges presented by games.
The "Libro de juegos" can be divided into three parts: the games and problems it explores textually, the actual illuminations themselves, and the metaphysical allegories, where an analysis of the texts and illuminations reveals the movements of the macrocosmos of the universe and the microcosmos of man. The symbolism within the medieval illuminations, as explained by the accompanying texts, reveal allusions to medieval literature, art, science, law and philosophy. Intended as a didactic text, the manuscript functions as a manual that documents and explains how and why one plays games ranging from pure, intellectual strategy (chess), to games of pure chance (dice), to games that incorporate both elements (backgammon). Conceivably, Alfonso hoped to elucidate for himself how to better play the game of life, while also providing a teaching tool for others. The game of "ajedrex", or chess, is not the only game explained in the "Libro de Juegos", but it does occupy the primary position in the text and is given the most attention to detail.
In the thirteenth century, chess had been played in Europe for almost two hundred years, having been introduced into Europe by Arabs around the year 1000. The Arabs had become familiar with the game as early as the eighth century, when the Islamic empire conquered Persia, where the game of chess is alleged to have originated. It is said that a royal advisor had invented the game in order to teach his king prudence without having to overtly correct him. As Arab contact with the West expanded, so too did the game and its various permutations, and by the twelfth century chess was becoming an entertaining diversion among a growing population of Europeans, including some scholars, clergy, the aristocracy, and the merchant classes. Thus, by the thirteenth century, the iconography and symbolism associated with chess would have been accessible and familiar to Alfonso and his literate court culture, who may have had access to Alfonso's private library and manuscripts, including the "Libro de juegos".
The "Libro de juegos" manuscript was a Castilian translation of Arabic texts, which were themselves translations of Persian manuscripts. The visual trope portrayed in the "Libro de juegos" miniatures is seen in other European transcriptions of the Arabic translations, most notably the German Carmina Burana Manuscript: two figures, one on either side of the board, with the board tilted up to reveal to the readers the moves made by the players. The juxtaposition of chess and dice in Arabic tradition, indicating the opposing values of skill (chess) and ignorance (dice), was given a different spin in Alfonso's manuscript, however. As Alfonso elucidates in the opening section of the "Libro de Juegos", the "Libro de ajedrex" (Book of chess) demonstrates the value of the intellect, the "Libro de los dados" (Book of dice) illustrates that chance has supremacy over pure intellect, and the" Libro de las tablas" (Book of tables) celebrates a conjoined use of both intellect and chance. Further, the iconographic linkage between chess and kingship in the Western tradition continued to evolve and became symbolic of kingly virtues, including skill, prudence, and intelligence.
Most of the work accomplished in Alfonso's scriptorium consisted of translations into the Castilian vernacular from Arabic translations of Greek texts or classical Jewish medicinal texts. As a result, very few original works were produced by this scholar-king, relative to the huge amount of work that was translated under his auspices. This enormous focus on translation was perhaps an attempt by Alfonso to continue the legacy of academic openness in Castile, initiated by Islamic rulers in Córdoba, where the emirates had also employed armies of translators in order to fill their libraries with Arabic translations of classic Greek texts. Alfonso was successful in promoting Castilian society and culture through his emphasis on the use of Galaico-Portuguese and Castilian, in academic, juridical, diplomatic, literary, and historical works. This emphasis also had the effect of reducing the universality of his translated works and original academic writings, as Latin was the "lingua franca" in both Iberia and Europe; yet Alfonso never desisted in his promotion of the Castilian vernacular.
In 1243, Alfonso captured the Kingdom of Murcia, on the Mediterranean coast south of Valencia, for his father, King Ferdinand III, under whom the kingdoms of Castile and León had been united, bringing the northern half of the Iberian Peninsula under one Christian throne. With the Christian re-conquest of the Peninsula underway, inroads into Islamic territories were successfully incorporating lands previously held by the "taifa" kingdoms. The arts and sciences prospered in the Kingdom of Castile under the confluence of Latin and Arabic traditions of academic curiosity as Alfonso sponsored scholars, translators, and artists of all three religions of the Book (Jewish, Christian, and Muslim) in his chanceries and scriptoria. Clerical and secular scholars from Europe turned their eyes to the Iberian Peninsula as the arts and sciences prospered in an early Spanish "renaissance" under the patronage of Alfonso X, who continued the tradition of (relatively) enlightened and tolerant "convivencia" established by the Muslim emirate several centuries earlier.
As an inheritor of a dynamic mixture of Arabic and Latin culture, Alfonso was steeped in the rich heritage of humanistic philosophy, and the production of his "Libro de juegos" reveals the compendium of world views that comprised the eclectic thirteenth century admixture of faith and science. According to this approach, man's actions could be traced historically and his failures and successes could be studied as lessons to be applied to his future progress. These experiences can be played out and studied as they are lived, or as game moves played and analyzed in the pages of the "Libro de juegos". It is a beautiful and luxurious document, rich not only in workmanship but also in the amount of scholarship of multiple medieval disciplines that are integrated in its pages.
|
https://en.wikipedia.org/wiki?curid=18234
|
Lithium citrate
Lithium citrate (Li3C6H5O7) is a chemical compound of lithium and citrate that is used as a mood stabilizer in psychiatric treatment of manic states and bipolar disorder. There is extensive pharmacology of lithium, the active component of this salt.
Lithia water contains various lithium salts, including the citrate. An early version of Coca-Cola available in pharmacies' soda fountains called Lithia Coke was a mixture of Coca-Cola syrup and lithia water. The soft drink 7Up was originally named "Bib-Label Lithiated Lemon-Lime Soda" when it was formulated in 1929 because it contained lithium citrate. The beverage was a patent medicine marketed as a cure for hangover. Lithium citrate was removed from 7Up in 1948.
|
https://en.wikipedia.org/wiki?curid=18236
|
Lithium carbonate
Lithium carbonate is an inorganic compound, the lithium salt of carbonate, with the formula Li2CO3. This white salt is widely used in the processing of metal oxides and treatment of mood disorders.
For the treatment of bipolar disorder, it is on the World Health Organization's List of Essential Medicines, the most important medications needed in a basic health system.
Lithium carbonate is an important industrial chemical. It forms low-melting fluxes with silica and other materials. Glasses derived from lithium carbonate are useful in ovenware. Lithium carbonate is a common ingredient in both low-fire and high-fire ceramic glaze. Its alkaline properties are conducive to changing the state of metal oxide colorants in glaze, particularly red iron oxide (Fe2O3). Cement sets more rapidly when prepared with lithium carbonate, and it is useful for tile adhesives. When added to aluminium trifluoride, it forms LiF, which gives a superior electrolyte for the processing of aluminium. It is also used in the manufacture of most lithium-ion battery cathodes, which are made of lithium cobalt oxide.
In 1843, lithium carbonate was used as a new solvent for stones in the bladder. In 1859, some doctors recommended a therapy with lithium salts for a number of ailments, including gout, urinary calculi, rheumatism, mania, depression, and headache. In 1948, John Cade discovered the antimanic effects of lithium ions. This finding led lithium, specifically lithium carbonate, to be used to treat mania associated with bipolar disorder.
Lithium carbonate is used to treat mania, the elevated phase of bipolar disorder. Lithium ions interfere with ion transport processes (see “sodium pump”) that relay and amplify messages carried to the cells of the brain. Mania is associated with irregular increases in protein kinase C (PKC) activity within the brain. Lithium carbonate and sodium valproate, another drug traditionally used to treat the disorder, act in the brain by inhibiting PKC's activity and help to produce other compounds that also inhibit the PKC. Lithium carbonate's mood-controlling properties are not fully understood.
Taking lithium salts has risks and side effects. Extended use of lithium to treat various mental disorders has been known to lead to acquired nephrogenic diabetes insipidus. Lithium intoxication can affect the central nervous system and renal system and can be lethal.
Unlike sodium carbonate, which forms at least three hydrates, lithium carbonate exists only in the anhydrous form. Its solubility in water is low relative to other lithium salts. The isolation of lithium from aqueous extracts of lithium ores capitalizes on this poor solubility. Its apparent solubility increases 10-fold under a mild pressure of carbon dioxide; this effect is due to the formation of the metastable bicarbonate, which is more soluble:
Li2CO3 + CO2 + H2O ⇌ 2 LiHCO3
The extraction of lithium carbonate at high pressures of carbon dioxide and its precipitation upon depressurizing is the basis of the Quebec process.
Lithium carbonate can also be purified by exploiting its diminished solubility in hot water. Thus, heating a saturated aqueous solution causes crystallization of Li2CO3.
Lithium carbonate, like the other carbonates of group 1, does not decarboxylate readily; Li2CO3 decomposes at temperatures around 1300 °C.
Lithium is extracted primarily from two sources: pegmatite crystals and lithium salts from brine pools. About 30,000 tons were produced in 1989. Lithium carbonate also exists naturally as the rare mineral zabuyelite.
Lithium carbonate is generated by combining lithium peroxide with carbon dioxide. This reaction is the basis of certain air purifiers, e.g., in spacecraft, used to absorb carbon dioxide:
2 Li2O2 + 2 CO2 → 2 Li2CO3 + O2
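As a rough stoichiometric illustration of that absorption reaction (a back-of-envelope sketch, not a figure from the source), each mole of lithium peroxide takes up one mole of carbon dioxide:

```python
# Stoichiometry sketch for 2 Li2O2 + 2 CO2 -> 2 Li2CO3 + O2:
# one mole of CO2 is absorbed per mole of Li2O2.
M_LI2O2 = 45.88  # g/mol, approximate molar mass of lithium peroxide
M_CO2 = 44.01    # g/mol, molar mass of carbon dioxide

kg_co2_per_kg_li2o2 = M_CO2 / M_LI2O2
print(f"~{kg_co2_per_kg_li2o2:.2f} kg of CO2 absorbed per kg of Li2O2")
```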
In recent years many junior mining companies have begun exploration of lithium projects throughout North America, South America and Australia to identify economic deposits that can potentially bring new supplies of lithium carbonate online to meet the growing demand for the product.
In April 2017 MGX Minerals reported it had received independent confirmation of its rapid lithium extraction process to recover lithium and other valuable minerals from oil and gas wastewater brine.
Natural lithium carbonate is known as zabuyelite. This mineral is connected with deposits of some salt lakes and some pegmatites.
|
https://en.wikipedia.org/wiki?curid=18237
|
Lunar Roving Vehicle
The Lunar Roving Vehicle (LRV) is a battery-powered four-wheeled rover used on the Moon in the last three missions of the American Apollo program (15, 16, and 17) during 1971 and 1972. They are popularly known as "Moon buggies", a play on the words "dune buggy".
Built by Boeing, each LRV has a mass of 210 kg (460 lb) without payload. It could carry a maximum payload of 490 kg (1,080 lb), including two astronauts, equipment, and lunar samples, and was designed for a top speed of 13 km/h (8 mph), although it actually achieved a top speed of 18 km/h (11.2 mph) on its last mission, Apollo 17. They were transported to the Moon folded up in the Lunar Module's Quadrant 1 Bay. After being unpacked, they were driven an average distance of 30 km on each of the three missions, without major incident.
These three LRVs remain on the Moon.
The concept of a lunar rover predated Apollo, with a 1952–1954 series in "Collier's Weekly" magazine by Wernher von Braun and others, "Man Will Conquer Space Soon!" In this, von Braun described a six-week stay on the Moon, featuring 10-ton tractor trailers for moving supplies.
In 1956, Mieczysław G. Bekker published two books on land locomotion. At the time, Bekker was a University of Michigan professor and a consultant to the U.S. Army Tank-Automotive Command's Land Locomotion Laboratory. The books provided much of the theoretical base for future lunar vehicle development.
In the February 1964 issue of "Popular Science", von Braun, then director of NASA's Marshall Space Flight Center (MSFC), discussed the need for a lunar surface vehicle, and revealed that studies had been underway at Marshall in conjunction with Lockheed, Bendix, Boeing, General Motors, Brown Engineering, Grumman, and Bell Aerospace.
Beginning in the early 1960s, a series of studies centering on lunar mobility were conducted under Marshall. This began with the lunar logistics system (LLS), followed by the mobility laboratory (MOLAB), then the lunar scientific survey module (LSSM), and finally the mobility test article (MTA). In early planning for the Apollo program, it had been assumed that two Saturn V launch vehicles would be used for each lunar mission: one for sending the crew aboard a Lunar Surface Module (LSM) to lunar orbit, landing, and returning, and a second for sending an LSM-Truck (LSM-T) with all of the equipment, supplies, and transport vehicle for use by the crew while on the surface. All of the first Marshall studies were based on this dual-launch assumption, allowing a large, heavy, roving vehicle.
Grumman and Northrop, in the fall of 1962, began to design pressurized-cabin vehicles, with electric motors for each wheel. At about this same time Bendix and Boeing started their own internal studies on lunar transportation systems. Mieczysław Bekker, now with General Motors Defense Research Laboratories at Santa Barbara, California, was completing a study for NASA's Jet Propulsion Laboratory on a small, uncrewed lunar roving vehicle for the Surveyor program. Ferenc Pavlics, originally from Hungary, used a wire-mesh design for "resilient wheels," a design that would be followed in future small rovers.
In early 1963, NASA selected Marshall for studies in an Apollo Logistics Support System (ALSS). Following reviews of all earlier efforts, this resulted in a 10-volume report. Included was the need for a pressurized vehicle accommodating two men with their expendables and instruments, for traverses up to two weeks in duration. In June 1964, Marshall awarded contracts to Bendix and to Boeing, with GM's lab designated as the vehicle technology subcontractor. Bell Aerospace was already under contract for studies of Lunar Flying Vehicles.
Even as the Bendix and Boeing studies were underway, Marshall was examining a less ambitious surface exploration activity, the LSSM. This would be composed of a fixed, habitable shelter–laboratory with a small lunar-traversing vehicle that could either carry one man or be remotely controlled. This mission would still require a dual launch with the moon vehicle carried on the "lunar truck". Marshall's Propulsion and Vehicle Engineering (P&VE) lab contracted with Hayes International to make a preliminary study of the shelter and its related vehicle. Because of the potential need for an enclosed vehicle for enlarged future lunar explorations, those design efforts continued for some time, and resulted in several full-scale test vehicles.
With pressure from Congress to hold down Apollo costs, Saturn V production was reduced, allowing only a single launch per mission. Any roving vehicle would have to fit on the same lunar module as the astronauts. In November 1964, two-rocket models were put on indefinite hold, but Bendix and Boeing were given study contracts for small rovers. The name of the lunar excursion module was changed to simply the lunar module, indicating that the capability for powered "excursions" away from a lunar-lander base did not yet exist. There could be no mobile lab — the astronauts would work out of the LM. Marshall continued to also examine uncrewed robotic rovers that could be controlled from the Earth.
From the beginnings at Marshall, the Brown Engineering Company of Huntsville, Alabama had participated in all of the lunar mobility efforts. In 1965, Brown became the prime support contractor for Marshall's P&VE Laboratory. With an urgent need to determine the feasibility of a two-man self-contained lander, von Braun bypassed the usual procurement process and had P&VE's Advanced Studies Office directly task Brown to design, build, and test a prototype vehicle. While Bendix and Boeing would continue to refine concepts and designs for a lander, test model rovers were vital for Marshall human factors studies involving spacesuit-clad astronauts interfacing with power, telemetry, navigation, and life-support rover equipment.
Brown's team made full use of the earlier small-rover studies, and commercially available components were incorporated wherever possible. The selection of wheels was of great importance, and almost nothing was known at that time about the lunar surface. The Marshall Space Sciences Laboratory (SSL) was responsible for predicting surface properties, and Brown was also prime support contractor for this lab; Brown set up a test area to examine a wide variety of wheel-surface conditions. To simulate Pavlics' "resilient wheel," a four-foot-diameter inner tube wrapped with nylon ski rope was used. On the small test rover, each wheel had a small electric motor, with overall power provided by standard truck batteries. A roll bar gave protection from overturn accidents.
In early 1966, Brown's vehicle became available for examining human factors and other testing. Marshall built a small test track with craters and rock debris where the several different mock-ups were compared; it became obvious that a small rover would be best for the proposed missions. The test vehicle was also operated in remote mode to determine characteristics that might be dangerous to the driver, such as acceleration, bounce-height, and turn-over tendency as it traveled at higher speeds and over simulated obstacles. The test rover's performance under one-sixth gravity was obtained through flights on a KC-135A aircraft in a Reduced Gravity parabolic maneuver; among other things, the need for a very soft wheel and suspension combination was shown. Although Pavlics' wire-mesh wheels were not initially available for the reduced gravity tests, the mesh wheels were tested on various soils at the Waterways Experiment Station of the U.S. Army Corps of Engineers at Vicksburg, Mississippi. Later, when wire-mesh wheels were tested on low-g flights, the need for wheel fenders to reduce dust contamination was found. The model was also extensively tested at the U.S. Army's Yuma Proving Ground in Arizona, as well as the Army's Aberdeen Proving Ground in Maryland.
During 1965 and 1967, the Summer Conference on Lunar Exploration and Science brought together leading scientists to assess NASA's planning for exploring the Moon and to make recommendations. One of their findings was that the LSSM was critical to a successful program and should be given major attention. At Marshall, von Braun established a Lunar Roving Task Team, and in May 1969, NASA approved the Manned Lunar Rover Vehicle Program as a Marshall hardware development. Saverio F. "Sonny" Morea was named Lunar Roving Vehicle Project Manager.
On 11 July 1969, just before the successful Moon landing of Apollo 11, a request for proposal for the final development and building of the Apollo LRV was released by Marshall. Boeing, Bendix, Grumman, and Chrysler submitted proposals. Following three months of proposal evaluation and negotiations, Boeing was selected as the Apollo LRV prime contractor on 28 October 1969. Boeing would manage the LRV project under Henry Kudish in Huntsville, Alabama. As a major subcontractor, the General Motors' Defense Research Laboratories in Santa Barbara, California, would furnish the mobility system (wheels, motors, and suspension); this effort would be led by GM Program Manager Samuel Romano and Ferenc Pavlics. Boeing in Seattle, Washington, would furnish the electronics and navigation system. Vehicle testing would take place at the Boeing facility in Kent, Washington, and the chassis manufacturing and overall assembly would be at the Boeing facility in Huntsville.
The first cost-plus-incentive-fee contract to Boeing was for $19,000,000 and called for delivery of the first LRV by 1 April 1971. Cost overruns, however, led to a final cost of $38,000,000, which was about the same as NASA's original estimate. Four lunar rovers were built, one each for Apollo missions 15, 16, and 17; and one used for spare parts after the cancellation of further Apollo missions. Other LRV models were built: a static model to assist with human factors design; an engineering model to design and integrate the subsystems; two one-sixth gravity models for testing the deployment mechanism; a one-gravity trainer to give the astronauts instruction in the operation of the rover and allow them to practice driving it; a mass model to test the effect of the rover on the LM structure, balance, and handling; a vibration test unit to study the LRV's durability and handling of launch stresses; and a qualification test unit to study integration of all LRV subsystems. A paper by Saverio Morea gives details of the LRV system and its development.
LRVs were used for greater surface mobility during the Apollo J-class missions, Apollo 15, Apollo 16, and Apollo 17. The rover was first used on 31 July 1971, during the Apollo 15 mission. This greatly expanded the range of the lunar explorers; previous crews were restricted to short walking distances around the landing site due to the bulky space suit equipment required to sustain life in the lunar environment. The range, however, was operationally restricted so that the crew remained within walking distance of the lunar module, in case the rover broke down at any point. The rovers were designed with a top speed of about 13 km/h (8 mph), although Eugene Cernan recorded a maximum speed of 18 km/h (11.2 mph), giving him the (unofficial) lunar land-speed record.
The LRV was developed in only 17 months and performed all its functions on the Moon with no major anomalies. Scientist-astronaut Harrison Schmitt of Apollo 17 said, "The Lunar Rover proved to be the reliable, safe and flexible lunar exploration vehicle we expected it to be. Without it, the major scientific discoveries of Apollo 15, 16, and 17 would not have been possible; and our current understanding of lunar evolution would not have been possible."
The LRVs experienced some minor problems. The rear fender extension on the Apollo 16 LRV was lost during the mission's second extra-vehicular activity (EVA) at station 8 when John Young bumped into it while going to assist Charles Duke. The dust thrown up from the wheel covered the crew, the console, and the communications equipment. High battery temperatures and resulting high power consumption ensued. No repair attempt was mentioned.
The fender extension on the Apollo 17 LRV broke when accidentally bumped by Eugene Cernan with a hammer handle. Cernan and Schmitt taped the extension back in place, but due to the dusty surfaces, the tape did not adhere and the extension was lost after about one hour of driving, causing the astronauts to be covered with dust. For their second EVA, a replacement "fender" was made with some EVA maps, duct tape, and a pair of clamps from inside the Lunar Module that were nominally intended for the moveable overhead light. This repair was later undone so that the clamps could be taken inside for the return launch. The maps were brought back to Earth and are now on display at the National Air and Space Museum. The abrasion from the dust is evident on some portions of the makeshift fender.
The color TV camera mounted on the front of the LRV could be remotely operated by Mission Control in pan and tilt axes as well as zoom. This allowed far better television coverage of the EVA than the earlier missions. On each mission, at the conclusion of the astronauts' stay on the surface, the commander drove the LRV to a position away from the Lunar Module so that the camera could record the ascent stage launch. The camera operator in Mission Control experienced difficulty in timing the various delays so that the LM ascent stage was in frame through the launch. On the third and final attempt (Apollo 17), the launch and ascent were successfully tracked.
NASA's rovers, left behind, are among the artificial objects on the Moon, as are the Soviet Union's uncrewed rovers, "Lunokhod 1" and "Lunokhod 2".
The Apollo Lunar Roving Vehicle was an electric-powered vehicle designed to operate in the low-gravity vacuum of the Moon and to be capable of traversing the lunar surface, allowing the Apollo astronauts to extend the range of their surface extravehicular activities. Three LRVs were used on the Moon: one on Apollo 15 by astronauts David Scott and Jim Irwin, one on Apollo 16 by John Young and Charles Duke, and one on Apollo 17 by Eugene Cernan and Harrison Schmitt. The mission commander served as the driver, occupying the left-hand seat of each LRV. Features are available in papers by Morea, Baker, and Kudish.
The Lunar Roving Vehicle had a mass of 210 kg (460 lb) and was designed to hold a payload of 490 kg (1,080 lb). In the approximately one-sixth g on the lunar surface, its weight, both empty (curb weight) and fully loaded (gross vehicle weight), was correspondingly about one-sixth of the Earth value. The frame was 3.1 m (10 ft) long with a wheelbase of 2.3 m (7.5 ft). The height of the vehicle was 1.14 m (3.7 ft). The frame was made of 2219 aluminium alloy tubing welded assemblies and consisted of a three-part chassis that was hinged in the center so it could be folded up and hung in the Lunar Module Quadrant 1 bay, which was kept open to space by omission of the outer skin panel. It had two side-by-side foldable seats made of tubular aluminium with nylon webbing and aluminum floor panels. An armrest was mounted between the seats, and each seat had adjustable footrests and a Velcro-fastened seat belt. A large mesh dish antenna was mounted on a mast on the front center of the rover. The suspension consisted of a double horizontal wishbone with upper and lower torsion bars and a damper unit between the chassis and upper wishbone. Fully loaded, the LRV had a ground clearance of 36 cm (14 in).
The wheels were designed and manufactured by General Motors Defense Research Laboratories in Santa Barbara, California. Ferenc Pavlics was given special recognition by NASA for developing the "resilient wheel". They consisted of a spun aluminum hub and an 81.8 cm (32.2 in) diameter, 23 cm (9 in) wide tire made of zinc-coated woven steel strands attached to the rim and discs of formed aluminum. Titanium chevrons covered 50% of the contact area to provide traction. Inside the tire was a smaller bump-stop frame to protect the hub. Dust guards were mounted above the wheels. Each wheel had its own electric drive made by Delco, a direct current (DC) series-wound motor capable of 0.25 hp (190 W) at 10,000 rpm, attached to the wheel via an 80:1 harmonic drive, and a mechanical brake unit. Each wheel could free-wheel in case of drive failure.
Maneuvering capability was provided by front and rear steering motors, each a series-wound DC motor. The front and rear wheels could pivot in opposite directions to achieve a tight turning radius of 3.1 m (10 ft), or could be decoupled so that only the front or rear wheels were used for steering.
Power was provided by two 36-volt silver-zinc potassium hydroxide non-rechargeable batteries with a charge capacity of 121 A·h each (a total of 242 A·h), yielding a range of 92 km (57 mi). These were used to power the drive and steering motors and also a 36-volt utility outlet mounted on the front of the LRV to power the communications relay unit or the TV camera. LRV batteries and electronics were passively cooled, using change-of-phase wax thermal capacitor packages and reflective, upward-facing radiating surfaces. While driving, the radiators were covered with mylar blankets to minimize dust accumulation. When stopped, the astronauts would open the blankets and manually remove excess dust from the cooling surfaces with hand brushes.
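Those figures imply a modest total energy budget. The arithmetic below is a back-of-envelope sketch assuming the nominal 36 V held across discharge; actual usable energy would have depended on load and temperature.

```python
# Back-of-envelope energy estimate from the battery figures above.
VOLTAGE_V = 36      # nominal battery voltage
CAPACITY_AH = 121   # charge capacity per battery, amp-hours
NUM_BATTERIES = 2

total_charge_ah = CAPACITY_AH * NUM_BATTERIES  # 242 A·h total
total_energy_wh = VOLTAGE_V * total_charge_ah  # ~8,712 W·h

print(f"Total charge: {total_charge_ah} A·h")
print(f"Approximate stored energy: {total_energy_wh / 1000:.1f} kWh")
```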
A T-shaped hand controller situated between the two seats controlled the four drive motors, two steering motors, and brakes. Moving the stick forward powered the LRV forward, moving it left or right turned the vehicle left or right, and pulling backwards activated the brakes. Activating a switch on the handle before pulling back put the LRV into reverse. Pulling the handle all the way back activated a parking brake. The control and display modules were situated in front of the handle and gave information on speed, heading, pitch, and power and temperature levels.
Navigation was based on continuously recording direction and distance through use of a directional gyro and odometer and feeding this data to a computer that would keep track of the overall direction and distance back to the LM. There was also a Sun-shadow device that could give a manual heading based on the direction of the Sun, using the fact that the Sun moved very slowly in the sky.
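The dead-reckoning principle is straightforward to sketch. The snippet below integrates heading and odometer readings into a position estimate and then derives the bearing and range back to the LM; the sample readings are hypothetical, and the flight computer's actual algorithm and interfaces are not documented here.

```python
# Minimal dead-reckoning sketch: integrate gyro heading + odometer
# distance into an (x, y) estimate, then compute the route home.
import math

def advance(x, y, heading_deg, distance_m):
    """Move distance_m along heading_deg (clockwise from lunar north)."""
    t = math.radians(heading_deg)
    return x + distance_m * math.sin(t), y + distance_m * math.cos(t)

x = y = 0.0  # start at the Lunar Module
for heading_deg, dist_m in [(45, 500), (90, 300), (180, 400)]:  # sample legs
    x, y = advance(x, y, heading_deg, dist_m)

range_m = math.hypot(x, y)
bearing_deg = math.degrees(math.atan2(-x, -y)) % 360
print(f"Back to LM: {range_m:.0f} m at bearing {bearing_deg:.0f} degrees")
```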
The LRV was used during the lunar surface operations of Apollo 15, 16 and 17, the J missions of the Apollo program. On each mission, the LRV was used on three separate EVAs, for a total of nine lunar traverses, or sorties. During operation, the Commander (CDR) always drove, while the Lunar Module Pilot (LMP) was a passenger who assisted with navigation.
An operational constraint on the use of the LRV was that the astronauts had to be able to walk back to the LM if the LRV failed at any time during the EVA (called the "Walkback Limit"). Traverses were therefore limited in how far from the LM they could range, both at the start of and at any time during an EVA: the crews drove to the farthest point first and worked their way back, so that as life-support consumables were depleted, the remaining walk-back distance shrank with them. This constraint was relaxed during the longest traverse on Apollo 17, based on the demonstrated reliability of the LRV and spacesuits on previous missions. A paper by Burkhalter and Sharp provides details on usage.
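That constraint reduces to simple arithmetic: the allowable radius from the LM is the distance the crew could still cover on foot with the consumables remaining. The sketch below uses illustrative parameter values, not actual mission rules.

```python
# Illustrative "Walkback Limit": the crew must always be able to walk
# home if the rover fails. All parameter values here are assumptions.
def max_radius_km(consumable_hours_left, walk_speed_kmh, margin_h=1.0):
    """Farthest allowable distance from the LM, keeping a safety margin."""
    usable_h = max(0.0, consumable_hours_left - margin_h)
    return usable_h * walk_speed_kmh

# e.g. 5 h of life support remaining at a ~2 km/h suited walking pace:
print(max_radius_km(5.0, 2.0))  # 8.0 km allowable radius
```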
Astronaut deployment of the LRV from the LM's open Quadrant 1 bay was achieved with a system of pulleys and braked reels using ropes and cloth tapes. The rover was folded and stored in the bay with the underside of the chassis facing out. One astronaut would climb the egress ladder on the LM and release the rover, which would then be slowly tilted out by the second astronaut on the ground through the use of reels and tapes. As the rover was let down from the bay, most of the deployment was automatic. The rear wheels folded out and locked in place. When they touched the ground, the front of the rover could be unfolded, the wheels deployed, and the entire frame let down to the surface by pulleys.
The rover components locked into place upon opening. Cabling, pins, and tripods would then be removed and the seats and footrests raised. After switching on all the electronics, the vehicle was ready to back away from the LM.
Four flight-ready LRVs were manufactured, as well as several others for testing and training. Three were transported to and left on the Moon via the Apollo 15, 16, and 17 missions, with the fourth rover used for spare parts on the first three following the cancellation of Apollo 18. Since only the upper stages of the lunar modules could return to lunar orbit from the surface, the vehicles, along with the lower stages, were abandoned. As a result, the only lunar rovers on display are test vehicles, trainers, and mock-ups. The rover used on Apollo 15 was left at Hadley-Apennine; the rover used on Apollo 16 was left at Descartes; and the rover used on Apollo 17 was left at Taurus-Littrow, where it was seen by the Lunar Reconnaissance Orbiter during passes in 2009 and 2011.
Several rovers were created for testing, training, or validation purposes. The engineering mockup is on display at the Museum of Flight in Seattle, Washington. The Qualification Test Unit is on display at the National Air and Space Museum in Washington, D.C. The rover used for vibration testing is on display in the Davidson Saturn V Center at the U.S. Space & Rocket Center in Huntsville, Alabama. Additional test units are on display at the Johnson Space Center in Houston, Texas, and the Kennedy Space Center Visitor Complex in Cape Canaveral, Florida. Replicas of rovers are on display at the National Museum of Naval Aviation in Pensacola, Florida, the Evergreen Aviation & Space Museum in McMinnville, Oregon, and the Kansas Cosmosphere and Space Center in Hutchinson, Kansas. A replica on loan from the Smithsonian Institution is on display at Epcot at the Walt Disney World Resort near Orlando, Florida.
|
https://en.wikipedia.org/wiki?curid=18238
|
Labyrinth
In Greek mythology, the Labyrinth was an elaborate, confusing structure designed and built by the legendary artificer Daedalus for King Minos of Crete at Knossos. Its function was to hold the Minotaur, the monster eventually killed by the hero Theseus. Daedalus had so cunningly made the Labyrinth that he could barely escape it after he built it.
Although early Cretan coins occasionally exhibit branching (multicursal) patterns, the single-path (unicursal) seven-course "Classical" design without branching or dead ends became associated with the Labyrinth on coins as early as 430 BC, and similar non-branching patterns became widely used as visual representations of the Labyrinth – even though both logic and literary descriptions make it clear that the Minotaur was trapped in a complex branching maze. Even as the designs became more elaborate, visual depictions of the mythological Labyrinth from Roman times until the Renaissance are almost invariably unicursal. Branching mazes were reintroduced only when hedge mazes became popular during the Renaissance.
In English, the term "labyrinth" is generally synonymous with "maze". As a result of the long history of unicursal representation of the mythological Labyrinth, however, many contemporary scholars and enthusiasts observe a distinction between the two. In this specialized usage "maze" refers to a complex branching multicursal puzzle with choices of path and direction, while a unicursal "labyrinth" has only a single path to the center. A labyrinth in this sense has an unambiguous route to the center and back and presents no navigational challenge.
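This specialized usage has a crisp graph-theoretic reading: a unicursal labyrinth is a single unbranched path, while a multicursal maze contains junctions where the walker must choose. The sketch below, with made-up node labels, is one minimal way to express the distinction.

```python
# Unicursal vs. multicursal walkways as graph properties (illustrative).
def is_unicursal(adjacency):
    """True if the walkway graph is one unbranched path: exactly two
    endpoints of degree 1, and every other node of degree 2."""
    degrees = [len(neighbors) for neighbors in adjacency.values()]
    return degrees.count(1) == 2 and all(d in (1, 2) for d in degrees)

labyrinth = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # one long corridor
maze = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}       # a choice at node 1

print(is_unicursal(labyrinth))  # True  -> no navigational challenge
print(is_unicursal(maze))       # False -> a branching puzzle
```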
Unicursal labyrinths appeared as designs on pottery or basketry, as body art, and in etchings on walls of caves or churches. The Romans created many primarily decorative unicursal designs on walls and floors in tile or mosaic. Many labyrinths set in floors or on the ground are large enough that the path can be walked. Unicursal patterns have been used historically both in group ritual and for private meditation, and are increasingly found for therapeutic use in hospitals and hospices.
"Labyrinth" is a word of pre-Greek origin, which the Greeks associated with the palace of Knossos in Crete, excavated by Arthur Evans early in the 20th century. The word appears in a Linear B inscription as (). As early as 1892 Maximilian Mayer suggested that "labyrinthos" might derive from "labrys", a Lydian word for "double-bladed axe". Evans suggested that the palace at Knossos was the original labyrinth, and since the double axe motif appears in the palace ruins, he asserted that "labyrinth" could be understood to mean "the house of the double axe". This designation may not have been limited to Knossos, since the same symbols were discovered in other palaces in Crete. However Nilsson observes that in Crete the "double axe" is not a weapon and always accompanies goddesses or women and not a male god.
Beekes finds the relation with "labrys" speculative, and suggests instead a relation with a Greek word meaning 'narrow street'. The original Minoan word appears to refer to labyrinthine grottoes, such as those seen at Gortyn. Pliny the Elder's four examples of labyrinths are all complex underground structures, and this appears to have been the standard Classical understanding of the word. It is also possible that the word labyrinth is derived from the Egyptian "loperohunt", meaning palace or temple by the lake. The Egyptian labyrinth near Lake Moeris is described by Herodotus and Strabo. By the 4th century BC, Greek vase painters also represented the Labyrinth by the familiar "Greek key" patterns of endlessly running meanders.
When the Bronze Age site at Knossos was excavated by explorer Arthur Evans, the complexity of the architecture prompted him to suggest that the palace had been the Labyrinth of Daedalus. Evans found various bull motifs, including an image of a man leaping over the horns of a bull, as well as depictions of a labrys carved into the walls. On the strength of a passage in the "Iliad", it has been suggested that the palace was the site of a dancing-ground made for Ariadne by the craftsman Daedalus.
|
https://en.wikipedia.org/wiki?curid=18245
|
Lyon & Healy
Lyon & Healy Harps, Inc. is an American musical instrument manufacturer based in Chicago, Illinois and is a subsidiary of Salvi Harps. Today best known for concert harps, the company's Chicago headquarters and manufacturing facility contains a showroom and concert hall. George W. Lyon and Patrick J. Healy began the company in 1864 as a sheet music shop. By the end of the 19th century, they manufactured a wide range of musical instruments—including not only harps, but guitars, mandolins, banjos, ukuleles and various brass and percussion instruments.
Today, Lyon & Healy harps are widely played by professional musicians, since they are one of the few makers of harps for orchestral use—which are known as "concert harps" or "pedal harps". Lyon & Healy also makes smaller "folk harps" or lever harps (based on traditional Irish and Scottish instruments) that use levers to change string pitch instead of pedals. In the 1980s, Lyon & Healy also began to manufacture electroacoustic harps and, later, solid body electric harps.
George W. Lyon, a native of Northborough, Massachusetts, and Patrick J. Healy, born in Mallow, Ireland, founded the company in 1864 after moving from Boston to start a sheet music shop for music publisher Oliver Ditson. Determining Lyon & Healy's history is complicated because its building and company records were destroyed in two fires, including the Great Chicago Fire of 1871. Two smaller fires did little damage to the firm and did not result in the loss of records.
Company letters and trade catalogs do not provide exact dates for when Lyon & Healy began manufacturing instruments. An article in the "Musical Courier" states that Lyon & Healy began manufacturing instruments in 1885. Clearly, Lyon & Healy was making fretted string instruments in the 1880s, with Washburn (guitars, mandolins, banjos, and zithers) as their premier line. The company may also have been manufacturing bowed string instruments by the 1900s, if not earlier.
According to Vintage Guitar magazine, "Circa 1900, the firm was so large it manufactured under a host of sub-brands; Washburn is perhaps the most recognized, though Leland, Lakeside, and American Conservatory are still seen."
Lyon & Healy also made various percussion instruments. Later, Lyon & Healy began manufacturing brass instruments, possibly as early as the 1890s. Lyon & Healy also repaired instruments, and offered engraving services. Complicating matters still further, Lyon & Healy engraved instruments that it retailed but did not actually manufacture. In its 1892 catalog, it claimed that it manufactured 100,000 instruments annually.
The company is known to have made other instruments, including reed organs and pianos. Lyon & Healy evidently began manufacturing these instruments around 1876 in its factories in Chicago and nearby cities. George W. Lyon patented his cottage upright in 1878 and it was sold under the Lyon & Healy name.
Lyon retired in 1889 and Healy became the company's first president. That year, Lyon & Healy built their first harp. Healy wanted to develop a harp better suited to the rigors of the American climate than available European models. They successfully produced a harp notable for its strength, pitch reliability, and freedom from unwanted vibration. Previously, most harps in North America were made by small groups of craftsmen in France, England, Ireland, or Italy.
In 1890, Lyon & Healy introduced the Style 23 Harp, still a popular and recognizable design. It has 47 strings, highly decorative floral carving on the top of the column, base, and feet, and a fleur de lis pattern at the bottom of the column. It is available in a gold version. It is tall, and weighs about . Lyon & Healy produces one of the most ornate and elaborate harps in the world, the Louis XV, which includes carvings of leaves, flowers, scrolls, and shells along its neck and kneeblock, as well the soundboard edges.
Lyon would later form a new company with E.A. Potter called Lyon & Potter, and remained there until his death on January 12, 1894. Healy died of pneumonia on April 3, 1905.
In the 1890s the company—which used the slogan "Everything in music"—began building pipe organs. In 1894 Robert J. Bennett came to Lyon & Healy from the Hutchings company of Boston to head their organ department. The largest surviving Lyon & Healy pipe organ is at the Our Lady of Sorrows Basilica in Chicago. It is a large organ of four manuals and 57 ranks of pipes.
They also made small pipe organs. An example survives at St. Mary's Catholic Church in Aspen, Colorado. It is a two-manual tracker with a 30-note straight pedalboard and seven ranks. It is believed to have been built around 1900, and can still be pumped by hand.
By the 1900s, Lyon & Healy was one of the largest music publishers in the world, and was a major producer of musical instruments. However, In late 1920s, Lyon & Healy sold its brass musical instrument manufacturing branch (see "New Langwill Index"). In the 1970s, the firm concentrated solely upon making and selling harps.
In 1928, Lyon & Healy introduced one of the most unusual harps ever mass-produced, the "Salzedo Model". The company designed it in collaboration with the harpist Carlos Salzedo. It an Art Deco style instrument that incorporates bold red and white lines on the soundboard to create a stylized and distinct appearance.
In the 1960s, Lyon & Healy introduced a smaller lever harp, the "Troubadour", a 36-string harp for young beginners with smaller hands, and for casual players. This harp stands , and weighs .
In the late 1970s, Steinway & Sons (then owned by CBS) purchased Lyon & Healy and soon after closed all retail stores, which sold sheet music and musical instruments, to focus on harp production.
By 1985, Lyon & Healy also made folk harps, also known as "Irish harps", which are even smaller than the Troubadour. The ""Shamrock model folk harp"" has 34 strings. It stands tall with its legs. The legs can be removed so the player can hold the instrument lap—style on the knees. It weighs about . It features Celtic designs on the soundboard. An Irish or folk harp player is sometimes called a "harper" rather than "harpist".
DePaul University now owns the company's former building on Wabash Avenue. Lyon & Healy harps are still made in Chicago, Illinois, at 168 North Ogden Avenue. The building was once home to the recording studios of Orlando R. Marsh.
Wood in harp construction varies by instrument, but Sitka Spruce (Picea sitchensis) is the most common soundboard wood. Various Lyon & Healy guitars, mandolins, and many other instrument types reside in major musical instrument museums in the U.S. and Europe.
Lyon & Healy now primarily manufactures four types of harps—the "lever harp", "petite pedal harp", "semi-grande pedal harp", and "concert grand harp". They also make limited numbers of "special harps" called "concert grands". Lyon & Healy makes electric lever harps in nontraditional colors such as pink, green, blue, and red.
Lyon & Healy Corporation is a musical product distribution company in North America representing brands such as Delta, Relish Guitars, SIM1, Paoletti Guitars and Acus Sound Engineering. Lyon & Healy Corporation aims to become the leading provider of premium quality musical instruments and accessories.
|
https://en.wikipedia.org/wiki?curid=18246
|
Lamborghini
Automobili Lamborghini S.p.A. () is an Italian brand and manufacturer of luxury sports cars and SUVs based in Sant'Agata Bolognese. The company is owned by the Volkswagen Group through its subsidiary Audi.
Ferruccio Lamborghini, an Italian manufacturing magnate, founded Automobili Ferruccio Lamborghini S.p.A. in 1963 to compete with established marques, including Ferrari. The company was noted for using a rear mid-engine, rear-wheel-drive layout. Lamborghini grew rapidly during its first decade, but sales plunged in the wake of the 1973 worldwide financial downturn and the oil crisis. The firm's ownership changed three times after 1973, including a bankruptcy in 1978. The American Chrysler Corporation took control of Lamborghini in 1987 and sold it to Malaysian investment group Mycom Setdco and Indonesian group V'Power Corporation in 1994. In 1998, Mycom Setdco and V'Power sold Lamborghini to the Volkswagen Group, where it was placed under the control of the group's Audi division.
Under Audi's ownership, new products and model lines were introduced to the brand's portfolio and brought to market, and the brand's productivity increased. In the late 2000s, during the worldwide financial crisis and the subsequent economic crisis, Lamborghini's sales dropped by nearly 50 percent.
Lamborghini currently produces the V12-powered Aventador and the V10-powered Huracán, along with the Urus SUV powered by a twin-turbo V8 engine. In addition, the company produces V12 engines for offshore powerboat racing. Lamborghini Trattori, founded in 1948 by Ferruccio Lamborghini, is headquartered in Pieve di Cento, Italy and continues to produce tractors.
Reality star and singer Elettra Lamborghini, a granddaughter of founder Ferruccio Lamborghini, is an heiress to the Lamborghini fortune.
Italian manufacturing magnate Ferruccio Lamborghini founded the company in 1963 with the objective of producing a refined grand touring car to compete with offerings from established marques such as Ferrari. The company's first models, such as the 350 GT, were released in the mid-1960s. Lamborghini was noted for the 1966 Miura sports coupé, which used a rear mid-engine, rear-wheel-drive layout.
Lamborghini grew rapidly during its first ten years, but sales fell in the wake of the 1973 worldwide financial downturn and the oil crisis. Ferruccio Lamborghini sold the company to Georges-Henri Rossetti and René Leimer and retired in 1974. The company went bankrupt in 1978, and was placed in the receivership of brothers Jean-Claude and Patrick Mimran in 1980. The Mimrans purchased the company out of receivership by 1984 and invested heavily in its expansion. Under the Mimrans' management, Lamborghini's model line was expanded from the Countach to include the Jalpa sports car and the LM002 high performance off-road vehicle.
The Mimrans sold Lamborghini to the Chrysler Corporation in 1987. After replacing the Countach with the Diablo and discontinuing the Jalpa and the LM002, Chrysler sold Lamborghini to Malaysian investment group Mycom Setdco and Indonesian group V'Power Corporation in 1994. In 1998, Mycom Setdco and V'Power sold Lamborghini to the Volkswagen Group, where it was placed under the control of the group's Audi division. New products and model lines were introduced to the brand's portfolio and brought to market, and the brand's productivity increased. In the late 2000s, during the worldwide financial crisis and the subsequent economic crisis, Lamborghini's sales dropped by nearly 50 percent.
As of the 2018 model year, Lamborghini's automobile product range consists of three model lines, two of which are mid-engine two-seat sports cars while the third is a front-engined, all-wheel-drive SUV. The V12-powered Aventador line consists of the LP 740-4 Aventador S coupé and roadster. The V10-powered Huracán line currently includes the all-wheel-drive LP 610-4 coupé and spyder, the lower-cost rear-wheel-drive LP 580-2 coupé and spyder, and the most powerful, track-oriented LP 640-4 Performante coupé and spyder. With the intention of doubling its sales volume by 2019, Lamborghini also added an SUV named Urus to its line-up, which is powered by a twin-turbo V8 engine and utilises a front-engine, all-wheel-drive layout.
Motori Marini Lamborghini produces a large V12 marine engine block for use in World Offshore Series Class 1 powerboats. A Lamborghini branded marine engine displaces approximately and outputs approximately .
In the mid-1980s, Lamborghini produced a limited-production run of a 1,000 cc sports motorcycle. UK weekly newspaper "Motor Cycle News" reported in 1994 – when featuring an example available through an Essex motorcycle retailer – that 24 examples were produced with a Lamborghini alloy frame having adjustable steering head angle, Kawasaki GPz1000RX engine/transmission unit, Ceriani front forks and Marvic wheels. The bodywork was plastic and fully integrated with front fairing merged into fuel tank and seat cover ending in a rear tail-fairing. The motorcycles were designed by Lamborghini stylists and produced by French business Boxer Bikes.
Lamborghini licenses its brand to manufacturers that produce a variety of Lamborghini-branded consumer goods including scale models, clothing, accessories, bags, electronics and laptop computers.
In contrast to his rival Enzo Ferrari, Ferruccio Lamborghini had decided early on that there would be no factory-supported racing of Lamborghinis, viewing motorsport as too expensive and too draining on company resources. This was unusual for the time, as many sports car manufacturers sought to demonstrate their cars' speed, reliability, and technical superiority through motorsport participation. Enzo Ferrari in particular was known for considering his road car business mostly a source of funding for his participation in motor racing. Ferruccio's policy led to tensions between him and his engineers, many of whom were racing enthusiasts; some had previously worked at Ferrari. When Dallara, Stanzani, and Wallace began dedicating their spare time to the development of the P400 prototype, they designed it to be a road car with racing potential, one that could win on the track and also be driven on the road by enthusiasts. When Ferruccio discovered the project, he allowed them to go ahead, seeing it as a potential marketing device for the company, while insisting that it would not be raced. The P400 went on to become the Miura. The closest the company came to building a true race car under Lamborghini's supervision were a few highly modified prototypes, including those built by factory test driver Bob Wallace, such as the Miura SV-based "Jota" and the Jarama S-based "Bob Wallace Special".
In the mid-1970s, while Lamborghini was under the management of Georges-Henri Rossetti, Lamborghini entered into an agreement with BMW to develop, then manufacture, 400 cars for BMW in order to meet Group 4 homologation requirements. BMW lacked experience developing a mid-engined vehicle and believed that Lamborghini's experience in that area would make it an ideal partner. Due to its shaky finances, Lamborghini fell behind schedule developing the car's structure and running gear. When Lamborghini failed to deliver working prototypes on time, BMW took the program in-house, finishing development without Lamborghini. BMW contracted with Baur to produce the car, which BMW named the M1, delivering the first vehicle in October 1978.
In 1985, Lamborghini's British importer developed the Countach QVX, in conjunction with Spice Engineering, for the 1986 Group C championship season. One car was built, but lack of sponsorship caused it to miss the season. The QVX competed in only one race, the non-championship 1986 Southern Suns 500 km race at Kyalami in South Africa, driven by Tiff Needell. Despite the car finishing better than it started, sponsorship could once again not be found and the programme was cancelled.
Lamborghini was an engine supplier in Formula One for the 1989 through 1993 seasons. It supplied engines to Larrousse (1989–1990, 1992–1993), Lotus (1990), Ligier (1991), Minardi (1992), and to the Modena team in 1991. While the latter is commonly referred to as a factory team, the company saw itself as a supplier, not a backer. The 1992 Larrousse–Lamborghini was largely uncompetitive but noteworthy in its tendency to spew oil from its exhaust system. Cars following closely behind the Larrousse were commonly coloured yellowish-brown by the end of the race. Lamborghini's best result was achieved with Larrousse at the 1990 Japanese Grand Prix, when Aguri Suzuki finished third on home soil.
In late 1991, a Lamborghini Formula One motor was used in the Konrad KM-011 Group C sports car, but the car only lasted a few races before the project was canceled. The same engine, re-badged as a Chrysler (Lamborghini's then-parent company), was tested by McLaren towards the end of the 1993 season, with the intent of using it during the 1994 season. Although driver Ayrton Senna was reportedly impressed with the engine's performance, McLaren pulled out of negotiations, choosing a Peugeot engine instead, and Chrysler ended the project.
Two racing versions of the Diablo were built for the Diablo Supertrophy, a single-model racing series held annually from 1996 to 1999. In the first year, the model used in the series was the Diablo SVR, while the Diablo 6.0 GTR was used for the remaining three years. Lamborghini developed the Murciélago R-GT as a production racing car to compete in the FIA GT Championship, the Super GT Championship and the American Le Mans Series in 2004. The car's highest placing in any race that year was the opening round of the FIA GT Championship at Valencia, where the car entered by Reiter Engineering finished third from a fifth-place start. In 2006, during the opening round of the Super GT championship at Suzuka, a car run by the Japan Lamborghini Owners Club garnered the first victory (in class) by an R-GT. A GT3 version of the Gallardo has been developed by Reiter Engineering. A Murciélago R-GT entered by All-Inkl.com racing, driven by Christophe Bouchut and Stefan Mücke, won the opening round of the FIA GT Championship held at Zhuhai International Circuit, achieving the first major international race victory for Lamborghini.
The world of bullfighting is a key part of Lamborghini's identity. In 1962, Ferruccio Lamborghini visited the Seville ranch of Don Eduardo Miura, a renowned breeder of Spanish fighting bulls. Lamborghini, a Taurus himself, was so impressed by the majestic Miura animals that he decided to adopt a raging bull as the emblem for the automaker he would soon found.
After producing two cars with alphanumeric designations, Lamborghini once again turned to the bull breeder for inspiration. Don Eduardo was filled with pride when he learned that Ferruccio had named a car for his family and their line of bulls; the fourth Miura to be produced was unveiled to him at his ranch in Seville.
The automaker would continue to draw upon the bullfighting connection in future years. The Islero was named for the Miura bull that killed the famed bullfighter Manolete in 1947. "Espada" is the Spanish word for sword, sometimes used to refer to the bullfighter himself. The Jarama's name carried a special double meaning; though it was intended to refer only to the historic bullfighting region in Spain, Ferruccio was concerned about confusion with the also historic Jarama motor racing track.
After christening the Urraco after a bull breed, in 1974 Lamborghini broke from tradition, naming the Countach not for a bull but for (), a Piedmontese expletive. Legend has it that stylist Nuccio Bertone uttered the word in surprise when he first saw the Countach prototype, "Project 112". The LM002 (LM for Lamborghini Militaire) sport utility vehicle and the Silhouette (named after the popular racing category of the time) were other exceptions to the tradition.
The Jalpa of 1982 was named for a bull breed; Diablo, for the Duke of Veragua's ferocious bull famous for fighting an epic battle against El Chicorro in Madrid in 1869; Murciélago, the legendary bull whose life was spared by El Lagartijo for his performance in 1879; Gallardo, named for one of the five ancestral castes of the Spanish fighting bull breed; and Reventón, the bull that defeated young Mexican "torero" Félix Guzmán in 1943. The Estoque concept of 2008 was named for the estoc, the sword traditionally used by matadors during bullfights.
Throughout its history, Lamborghini has envisioned and presented a variety of concept cars, beginning in 1963 with the very first Lamborghini prototype, the 350GTV. Other famous models include Bertone's 1967 Marzal, 1974 Bravo, and 1980 Athon, Chrysler's 1987 Portofino, the Italdesign-styled Cala from 1995, and the Zagato-built Raptor from 1996.
A retro-styled Lamborghini Miura concept car, the first creation of chief designer Walter de'Silva, was presented in 2006. President and CEO Stephan Winkelmann denied that the concept would be put into production, saying that the Miura concept was "a celebration of our history, but Lamborghini is about the future. Retro design is not what we are here for. So we won't do the [new] Miura."
At the 2008 Paris Motor Show, Lamborghini revealed the Estoque, a four-door sedan concept. Although there had been much speculation regarding the Estoque's eventual production, Lamborghini management has not made a decision regarding production of what might be the first four-door car to roll out of the Sant'Agata factory.
At the 2010 Paris Motor Show, Lamborghini unveiled the Sesto Elemento. The concept car is made almost entirely of carbon fibre, making it extremely light, with a weight of . The Sesto Elemento shares the same V10 engine found in the Lamborghini Gallardo. With the Sesto Elemento, Lamborghini hopes to signal a shift in the company's direction from making supercars focused on top speed to producing more agile, track-focused cars. The concept car accelerates from 0–62 mph (100 km/h) in 2.5 seconds and can reach a top speed of over 180 mph.
At the 2012 Geneva Motor Show, Lamborghini unveiled the Aventador J – a roofless, windowless version of the Lamborghini Aventador. The Aventador J uses the same 700 hp engine and seven-speed transmission as the standard Aventador.
At the 2012 Beijing Motor Show, Lamborghini unveiled the Urus SUV. This is the first SUV built by Lamborghini since the LM002.
As part of the celebration of 50 years of Lamborghini, the company created the Egoista. The Egoista is a single-seater intended for one person's driving, and only one example is to be made.
At the 2014 Paris Motor Show, Lamborghini unveiled the Asterion LPI910-4 hybrid concept car. Named after the half-man, half-bull hybrid (Minotaur) of Greek legend, it is the first hybrid Lamborghini in the history of the company. It utilizes the Huracán's 5.2-litre V10 producing , along with one electric motor mounted on the transaxle and an additional two on the front axle, developing an additional . This puts the combined power at . The 0–100 km/h (62 mph) time is claimed to be just above 3 seconds, with a claimed top speed of .
As of 2011, Lamborghini is structured as a wholly owned subsidiary of AUDI AG named Automobili Lamborghini S.p.A.
Automobili Lamborghini S.p.A. controls several principal subsidiaries: Ducati Motor Holding S.p.A., a manufacturer of motorcycles; Italdesign Giugiaro S.p.A., a 90.1%-owned design and prototyping firm that provides services to the entire Volkswagen Group; MML S.p.A. (Motori Marini Lamborghini), a manufacturer of marine engine blocks; and Volkswagen Group Italia S.p.A. (formerly Autogerma S.p.A.), which sells Audi and other Volkswagen Group vehicles in Italy.
The Lamborghini headquarters and main production site is located in Sant'Agata Bolognese, Italy. With the launch of its Urus SUV, the production site expanded from 80,000 to 160,000 square meters.
By sales, the most important markets in 2004 for Lamborghini's sports cars were the U.S. (41%), Germany (13%), Great Britain (9%) and Japan (8%). Prior to the launch of the Gallardo in 2003, Lamborghini produced approximately 400 vehicles per year; in 2011 Lamborghini produced 1,711 vehicles.
Automóviles Lamborghini Latinoamérica S.A. de C.V. (Lamborghini Automobiles of Latin America Public Limited Company) is an authorized distributor and manufacturer of Lamborghini-branded vehicles and merchandise in Latin America and South America.
In 1995, Indonesian corporation MegaTech, Lamborghini's owner at the time, entered into distribution and license agreements with Mexican businessman Jorge Antonio Fernandez Garcia. The agreements give Automóviles Lamborghini Latinoamérica S.A. de C.V. the exclusive distributorship of Lamborghini vehicles and branded merchandise in Latin America and South America. Under the agreements, Automóviles Lamborghini is also allowed to manufacture Lamborghini vehicles and market them worldwide under the Lamborghini brand.
Automóviles Lamborghini has produced two rebodied versions of the Diablo called the Eros and the Coatl. In 2015, Automóviles Lamborghini transferred the IP-rights to the Coatl foundation (chamber of commerce no. 63393700) in The Netherlands in order to secure these rights and to make them more marketable. The company has announced the production of a speedboat called the Lamborghini Glamour.
A two-storey museum attached to the headquarters covers the history of Lamborghini cars and sport utility vehicles, showcasing a variety of modern and vintage models. The museum uses displays of cars, engines and photos to provide a history and to review important milestones of Lamborghini.
A 9,000 square-foot museum about Ferruccio Lamborghini houses several cars, industrial prototypes, sketches, personal objects and family photos from Ferruccio's early life.
|
https://en.wikipedia.org/wiki?curid=18271
|
Lotus 1-2-3
Lotus 1-2-3 is a discontinued spreadsheet program from Lotus Software (later part of IBM). It was the IBM PC's first killer application, was hugely popular in the 1980s, and contributed significantly to the success of the IBM PC.
The first spreadsheet, VisiCalc, had helped launch the Apple II as one of the earliest personal computers in business use. With IBM's entry into the market, VisiCalc was slow to respond, and when its publisher did, it launched what was essentially a straight port of the existing system in spite of the IBM PC's greatly expanded hardware capabilities. Lotus's solution was marketed as a three-in-one integrated solution, which handled spreadsheet calculations, database functionality, and graphical charts, hence the name "1-2-3", though how much database capability it offered was debatable, given the sparse memory available. 1-2-3 quickly overtook VisiCalc, as well as Multiplan and SuperCalc, two VisiCalc competitors.
1-2-3 was the spreadsheet standard throughout the 1980s and into the 1990s, part of an unofficial set of three stand-alone office automation products, along with dBase and WordPerfect, that together formed a complete business platform. With the acceptance of Windows 3.0, the market for desktop software grew even more. None of the major spreadsheet developers had seriously considered the graphical user interface to supplement their DOS offerings, and so they responded slowly to Microsoft's own graphical-based products, Excel and Word. Lotus was surpassed by Microsoft in the early 1990s and never recovered. IBM purchased Lotus in 1995 and continued to sell Lotus offerings, only officially ending sales in 2013.
VisiCalc was launched in 1979 on the Apple II and immediately became a best-seller. Compared to earlier programs, VisiCalc allowed one to easily construct free-form calculation systems for practically any purpose, the limitations being primarily memory- and speed-related. The application was so compelling that there were numerous stories of people buying Apple II machines just to run it. VisiCalc's runaway success on the Apple led to direct, bug-compatible ports to other platforms, including the Atari 8-bit family, the Commodore PET and many others. This included the IBM PC when it launched in 1981, where it quickly became another best-seller, with an estimated 300,000 sales in the first six months on the market.
There were well known problems with VisiCalc, and several competitors appeared to address some of these issues. One early example was 1980's SuperCalc, which solved the problem of circular references, while a slightly later example was Microsoft Multiplan from 1981, which offered larger sheets and other improvements. In spite of these, and others, VisiCalc continued to outsell them all.
The Lotus Development Corporation was founded by Mitchell Kapor, a friend of the developers of VisiCalc. 1-2-3 was originally written by Jonathan Sachs, who had written two spreadsheet programs previously while working at Concentric Data Systems, Inc. To aid its growth in the UK, and possibly elsewhere, Lotus 1-2-3 was the very first computer software to be promoted with television consumer advertising.
Lotus 1-2-3 was released on 26 January 1983, and immediately overtook VisiCalc in sales. Unlike Microsoft Multiplan, it stayed very close to the model of VisiCalc, including the "A1" letter-and-number cell notation and the slash-menu structure. It was cleanly programmed and relatively bug-free, gained speed from being written completely in x86 assembly language (this remained the case for all DOS versions until 3.0, when Lotus switched to C), and wrote directly to video memory rather than use the slow DOS and/or BIOS text output functions.
Among other novelties, Lotus introduced a graph maker that could display several forms of graphs (including pie charts, bar charts, and line charts) but required the user to have a graphics card. At this early stage, the only video boards available for the PC were IBM's Color/Graphics Adapter and the Monochrome Display and Printer Adapter; the latter did not support any graphics. However, because the two video boards used different RAM and port addresses, both could be installed in the same machine, and Lotus took advantage of this by supporting a "split" screen mode whereby the user could display the worksheet portion of 1-2-3 on the sharper monochrome video and the graphics on the CGA display.
The initial release of 1-2-3 supported only three video setups, CGA, MDA (in which case the graph maker was not available) or dual monitor mode. However, a few months later support was added for Hercules Computer Technology's Hercules Graphics Adapter which was a clone of the MDA that allowed bitmap mode. The ability to have high-resolution text and graphics capabilities (at the expense of color) proved extremely popular and Lotus 1-2-3 is credited with popularizing the Hercules graphics card.
Subsequent releases of Lotus 1-2-3 supported more video standards as time went on, including EGA, AT&T/Olivetti, and VGA. Significantly, support for the PCjr/Tandy modes was never added and users of those machines were limited to CGA graphics.
The early versions of 1-2-3 used key-disk copy protection. While the program could be installed on a hard disk, the user had to insert the original floppy disk when starting 1-2-3. This protection scheme was easily cracked and only a minor inconvenience for home users, but it proved a serious nuisance in an office setting. Starting with Release 3.0, Lotus no longer used copy protection. However, Release 2.2 and higher required the user to "initialize" the System disk with a name and company name so as to customize the copy of the program. This was an irreversible process unless one had made an exact copy of the original disk, making it difficult to change the names and transfer the program to someone else.
The reliance on the specific hardware of the IBM PC led to 1-2-3 being utilized as one of the two stress test applications, along with Microsoft Flight Simulator, for true 100% compatibility when PC clones appeared in the early 1980s. 1-2-3 required two disk drives and at least 192K of memory, which made it incompatible with the IBM PCjr; Lotus produced a version for the PCjr that was on two cartridges but otherwise identical.
By early 1984 the software was a killer app for the IBM PC and compatibles, while hurting sales of computers that could not run it. "They're looking for 1-2-3. Boy, are they looking for 1-2-3!" "InfoWorld" wrote. Noting that computer purchasers did not want PC compatibility as much as compatibility with certain PC software, the magazine suggested "let's tell it like it is. Let's not say 'PC compatible,' or even 'MS-DOS compatible.' Instead, let's say '1-2-3 compatible.'" PC clones' advertising did often prominently state that they were compatible with 1-2-3. An Apple II software company promised that its spreadsheet had "the power of 1-2-3". Because spreadsheets use large amounts of memory, 1-2-3 helped popularize greater RAM capacities in PCs, and especially the advent of expanded memory, which allowed more than 640K to be accessed.
Lotus 1-2-3 inspired imitators, the first of which was Mosaic Software's "The Twin", written in the fall of 1985 largely in the C language, followed by VP-Planner, which was backed by Adam Osborne. These were able not only to read 1-2-3 files, but also to execute many or most macro programs by incorporating the same command structure. Copyright law had first been understood to cover only the source code of a program. After the success of lawsuits claiming that the very "look and feel" of a program was covered, Lotus sought to ban any program which had a compatible command and menu structure. Program commands had not previously been considered covered, but the commands of 1-2-3 were embedded in the words of the menu displayed on the screen. Lotus won its three-year court battle against Paperback Software International and Mosaic Software Inc. in 1990. However, when it sued Borland over its Quattro Pro spreadsheet in Lotus v. Borland, a six-year battle that ended at the Supreme Court in 1996, the final ruling appeared to support narrowing the applicability of copyright law to software: the lower court's decision that it was not a copyright violation merely to have a compatible command menu or language was upheld, but only via stalemate. In 1995, the First Circuit found that command menus are an uncopyrightable "method of operation" under section 102(b) of the Copyright Act. The 1-2-3 menu structure (for example, / File Erase) was itself an advanced version of the single-letter menus introduced in VisiCalc. When the case came before the Supreme Court, the justices deadlocked 4–4. This meant that Borland emerged victorious, but the extent to which copyright law applies to computer software went unaddressed and undefined.
Microsoft's early spreadsheet Multiplan eventually gave way to Excel, which debuted on the Macintosh in 1985. It arrived on PCs with the release of Windows 2.x in 1987, but as Windows was not yet popular, it posed no serious threat to Lotus's stranglehold on spreadsheet sales. However, Lotus suffered technical setbacks in this period. Version 3 of Lotus 1-2-3, fully converted from its original assembly language to the more portable C language, was delayed by more than a year as the totally new 1-2-3 had to be made portable across platforms and fully compatible with existing macro sets and file formats. The inability to fit the larger code size of compiled C into lower-powered machines forced the company to split its spreadsheet offerings, with 1-2-3 Release 3 only for higher-end machines, and a new version 2.2, based on the 2.01 assembler code base, available for PCs without extended memory. By the time these versions were released in 1989, Microsoft had eroded much of Lotus's market share.
During the early 1990s, Windows grew in popularity and along with it Excel, which gradually displaced Lotus from its leading position. A planned total revamp of 1-2-3 for Windows fell apart, and all that the company could manage was a Windows adaptation of its existing spreadsheet, with no changes except the graphical interface. Additionally, several versions of 1-2-3 had different features and slightly different interfaces.
1-2-3's intended successor, Lotus Symphony, was Lotus's entry into the anticipated "integrated software" market. It was intended to expand the rudimentary all-in-one 1-2-3 into a fully fledged spreadsheet, graph, database and word processor for DOS, but none of the integrated packages ever really succeeded. 1-2-3 migrated to the Windows platform as part of Lotus SmartSuite.
IBM's continued development and marketing of Lotus SmartSuite and OS/2 during the 1990s placed it in direct competition with Microsoft Office and Microsoft Windows, respectively. As a result, Microsoft "punished the IBM PC Company with higher prices, a late license for Windows 95, and the withholding of technical and marketing support." IBM was not granted OEM rights for Windows 95 until 15 minutes prior to the release of Windows 95 on 24 August 1995. Because of this uncertainty, IBM machines were sold without Windows 95, while Compaq, HP, and other companies sold machines with Windows 95 from day one.
On 11 June 2013, IBM announced it would withdraw the Lotus brand: IBM Lotus 123 Millennium Edition V9.x, IBM Lotus SmartSuite 9.x V9.8.0, and Organizer V6.1.0. IBM stated, "Customers will no longer be able to receive support for these offerings after 30 September 2014. No service extensions will be offered. There will be no replacement programs."
The name "1-2-3" stemmed from the product's integration of three main capabilities. Along with being a spreadsheet, it also offered integral charting/graphing and rudimentary database operations.
Data features included sorting data in any defined rectangle, by order of information in one or two columns in the rectangular area. Justifying text in a range into paragraphs allowed it to be used as a primitive word processor.
It had keyboard-driven pop-up menus as well as one-key commands, making it fast to operate. It was also user-friendly, introducing an early instance of context-sensitive help accessed by the F1 key.
Macros in version one and add-ins (introduced in version 2.0) contributed much to 1-2-3's popularity, allowing dozens of outside vendors to sell macro packages and add-ins ranging from dedicated financial worksheets like F9 to full-fledged word processors. In the single-tasking MS-DOS, 1-2-3 was sometimes used as a complete office suite. All major graphics standards were supported; initially CGA and Hercules, and later EGA, AT&T, and VGA. Early versions used the filename extension "WKS". In version 2.0, the extension changed first to "WK1", then "WK2". This later became "WK3" for version 3.0 and "WK4" for version 4.0.
Version 2 introduced macros with syntax and commands similar in complexity to an advanced BASIC interpreter, as well as string variable expressions. Later versions supported multiple worksheets and were written in C. The charting/graphing routines were written in Forth by Jeremy Sagan (son of Carl Sagan) and the printing routines by Paul Funk (founder of Funk Software).
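To make the keystroke-macro idea concrete, the following is a minimal sketch in Python (an illustration, not Lotus code) of how such a macro string could be tokenized, assuming only the conventions described in the sources on 1-2-3 macros: a macro is stored as label text, a tilde stands for Enter, and braces enclose special key names such as {GOTO}.

    def tokenize_macro(macro: str):
        # Split a 1-2-3-style keystroke macro into (kind, value) tokens:
        # '{NAME}' is a special key, '~' replays Enter, and any other
        # character is a literal keystroke fed to the menus or the cell.
        tokens = []
        i = 0
        while i < len(macro):
            ch = macro[i]
            if ch == "{":
                end = macro.index("}", i)
                tokens.append(("KEY", macro[i + 1:end]))
                i = end + 1
            elif ch == "~":
                tokens.append(("KEY", "ENTER"))
                i += 1
            else:
                tokens.append(("CHAR", ch))
                i += 1
        return tokens

    # "{GOTO}B2~" would move the cell pointer to B2 and press Enter.
    print(tokenize_macro("{GOTO}B2~"))

Run against "{GOTO}B2~", the sketch yields the key GOTO, the literal characters B and 2, and ENTER, mirroring the way 1-2-3 replayed a macro cell one keystroke at a time.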
Early editions of 1-2-3 for DOS were primarily written in x86 assembly language; later DOS editions were primarily written in C.
After previewing "1-2-3" on the IBM PC in 1982, "BYTE" called it "modestly revolutionary" for elegantly combining spreadsheet, database, and graphing functions. It praised the application's speed and ease of use, stating that with the built-in help screens and tutorial "1-2-3 is one of the few pieces of software that can literally be used by anybody. You can buy 1-2-3 and [an IBM PC] and be running the two together the same day". "PC Magazine" in 1983 called 1-2-3 "a powerful and impressive program ... as a spreadsheet, it's excellent", and attributed its very fast performance to being written in assembly language.
|
https://en.wikipedia.org/wiki?curid=18273
|
Light pollution
Light pollution is the presence of anthropogenic and artificial light in the night environment. It is exacerbated by excessive, misdirected or obtrusive use of light, but even carefully used light fundamentally alters natural conditions. As a major side-effect of urbanization, it is blamed for compromising health, disrupting ecosystems and spoiling aesthetic environments.
Light pollution is the presence of artificial light in otherwise dark conditions. The term is most commonly used in relation to the outdoor environment, but is also used to refer to artificial light indoors. Adverse consequences are multiple; some of them may not be known yet. Light pollution competes with starlight in the night sky for urban residents, interferes with astronomical observatories, and, like any other form of pollution, disrupts ecosystems and has adverse health effects.
Light pollution is a side-effect of industrial civilization. Its sources include building exterior and interior lighting, advertising, outdoor area lighting (such as car parks), offices, factories, streetlights, and illuminated sporting venues. It is most severe in highly industrialized, densely populated areas of North America, Europe, and Japan and in major cities in the Middle East and North Africa like Tehran and Cairo, but even relatively small amounts of light can be noticed and create problems. Awareness of the deleterious effects of light pollution began early in the 20th century, but efforts to address effects did not begin until the 1950s. In the 1980s a global dark-sky movement emerged with the founding of the International Dark-Sky Association (IDA). There are now such educational and advocacy organizations in many countries worldwide.
Energy conservation advocates contend that light pollution must be addressed by changing the habits of society, so that lighting is used more efficiently, with less waste and less creation of unwanted or unneeded illumination. Several industry groups also recognize light pollution as an important issue. For example, the Institution of Lighting Engineers in the United Kingdom provides its members with information about light pollution, the problems it causes, and how to reduce its impact. However, recent research points out that energy efficiency alone is not enough to reduce light pollution, because of the rebound effect.
Since not everyone is irritated by the same lighting sources, it is common for one person's light "pollution" to be light that is desirable for another. One example of this is found in advertising, where an advertiser wishes for particular lights to be bright and visible, even though others find them annoying. Other cases are more clear-cut. For instance, light that "accidentally" crosses a property boundary and annoys a neighbour is generally wasted and polluting light.
Disputes are still common when deciding appropriate action, and differences in opinion over what light is considered reasonable, and over who should be responsible, mean that negotiation must sometimes take place between parties. Where objective measurement is desired, light levels can be quantified by field measurement or mathematical modeling, with results typically displayed as an isophote map or light contour map. Authorities have also taken a variety of measures for dealing with light pollution, depending on the interests, beliefs and understandings of the society involved. Measures range from doing nothing at all to implementing strict laws and regulations about how lights may be installed and used.
Light pollution is caused by inefficient or unnecessary use of artificial light. Specific categories of light pollution include light trespass, over-illumination, glare, light clutter, and skyglow. A single offending light source often falls into more than one of these categories.
Light trespass occurs when unwanted light enters one's property, for instance, by shining over a neighbour's fence. A common light trespass problem occurs when a strong light enters the window of one's home from the outside, causing problems such as sleep deprivation. A number of cities in the U.S. have developed standards for outdoor lighting to protect the rights of their citizens against light trespass. To assist them, the International Dark-Sky Association has developed a set of model lighting ordinances.
The International Dark-Sky Association was started to reduce the light going up into the sky, which reduces the visibility of stars (see Skyglow below). This is any light which is emitted more than 90° above nadir. By limiting light at this 90° mark, the association has also reduced the light output in the 80–90° range, which creates most of the light trespass issues.
U.S. federal agencies may also enforce standards and process complaints within their areas of jurisdiction. For instance, in the case of light trespass by white strobe lighting from communication towers in excess of FAA minimum lighting requirements, the Federal Communications Commission maintains an Antenna Structure Registration database which citizens may use to identify offending structures, and provides a mechanism for processing citizen inquiries and complaints. The U.S. Green Building Council (USGBC) has also incorporated a credit for reducing the amount of light trespass and sky glow into its environmentally friendly building standard known as LEED.
Light trespass can be reduced by selecting light fixtures which limit the amount of light emitted more than 80° above the nadir. The IESNA definitions include full cutoff (0%), cutoff (10%), and semi-cutoff (20%). (These definitions also include limits on light emitted above 90° to reduce sky glow.)
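As an illustration of these fixture classes, the quoted percentages can be turned into a simple classifier. The following Python sketch is a simplification that uses only the share of lamp output emitted more than 80° above the nadir; the full IESNA definitions also bound the light emitted above 90°, as the parenthetical above notes.

    def iesna_cutoff_class(pct_above_80: float) -> str:
        # Classify a fixture by the simplified shares quoted above:
        # full cutoff (0%), cutoff (10%) and semi-cutoff (20%) of
        # light emitted more than 80 degrees above the nadir.
        if pct_above_80 == 0:
            return "full cutoff"
        if pct_above_80 <= 10:
            return "cutoff"
        if pct_above_80 <= 20:
            return "semi-cutoff"
        return "non-cutoff"

    print(iesna_cutoff_class(7.5))  # -> "cutoff"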
Over-illumination is the excessive use of light. Within the United States specifically, over-illumination wastes energy equivalent to approximately two million barrels of oil per day. This is based upon U.S. consumption of the equivalent of of petroleum. It is further noted in the same U.S. Department of Energy (DOE) source that over 30% of all primary energy is consumed by the commercial, industrial and residential sectors. Energy audits of existing buildings demonstrate that the lighting component of residential, commercial and industrial uses consumes about 20–40% of the energy in those sectors, varying with region and land use. (Residential lighting consumes only 10–30% of the energy bill, while for commercial buildings lighting is the major use.) Thus lighting energy accounts for about four or five million barrels of oil (equivalent) per day. Again, energy audit data demonstrate that about 30–60% of energy consumed in lighting is unneeded or gratuitous.
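The chain of estimates above can be reproduced with rough arithmetic. The sketch below assumes a total U.S. primary energy consumption of about 47 million barrels of oil equivalent per day (roughly 100 quads per year); that total is an assumption for illustration, not a figure from the cited DOE source.

    # Rough re-derivation of the lighting-energy estimates in the text.
    US_PRIMARY_BOE_PER_DAY = 47e6  # assumed total, barrels of oil equivalent per day
    SECTOR_SHARE = 0.30            # commercial + industrial + residential (per the text)
    LIGHTING_SHARE = 0.30          # mid-range of the 20-40% audit figure
    WASTED_SHARE = 0.50            # mid-range of the 30-60% audit figure

    lighting_boe = US_PRIMARY_BOE_PER_DAY * SECTOR_SHARE * LIGHTING_SHARE
    wasted_boe = lighting_boe * WASTED_SHARE
    print(f"lighting: {lighting_boe / 1e6:.1f} Mbbl/day, wasted: {wasted_boe / 1e6:.1f} Mbbl/day")
    # -> about 4.2 and 2.1 million barrels/day, consistent with the
    #    "four or five million" and "two million" figures quoted above.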
An alternative calculation starts with the fact that commercial building lighting consumes in excess of 81.68 terawatt-hours (1999 data) of electricity, according to the DOE. Thus commercial lighting alone consumes about four to five million barrels per day (equivalent) of petroleum, in line with the alternate rationale above to estimate U.S. lighting energy consumption. Even among developed countries there are large differences in patterns of light use. American cities emit three to five times more light to space per capita compared to German cities.
Over-illumination stems from several factors. Most of these can be readily corrected with available, inexpensive technology, and with resolution of landlord/tenant practices that create barriers to rapid correction of these matters. Most importantly, public awareness would need to improve for industrialized countries to realize the large payoff in reducing over-illumination.
In certain cases an over-illumination lighting technique may be needed. For example, indirect lighting is often used to obtain a "softer" look, since hard direct lighting is generally found less desirable for certain surfaces, such as skin. The indirect lighting method is perceived as more cozy and suits bars, restaurants and living quarters. It is also possible to block the direct lighting effect by adding softening filters or other solutions, though intensity will be reduced.
Glare can be categorized into different types; one such classification is described in a book by Bob Mizon, coordinator for the British Astronomical Association's Campaign for Dark Skies.
According to Mario Motta, president of the Massachusetts Medical Society, "...glare from bad lighting is a public-health hazard—especially the older you become. Glare light scattering in the eye causes loss of contrast and leads to unsafe driving conditions, much like the glare on a dirty windshield from low-angle sunlight or the high beams from an oncoming car." In essence bright and/or badly shielded lights around roads can partially blind drivers or pedestrians and contribute to accidents.
The blinding effect is caused in large part by reduced contrast due to light scattering in the eye by excessive brightness, or to reflection of light from dark areas in the field of vision, with luminance similar to the background luminance. This kind of glare is a particular instance of disability glare, called veiling glare. (This is not the same as loss of accommodation of night vision which is caused by the direct effect of the light itself on the eye.)
Light clutter refers to excessive groupings of lights. Groupings of lights may generate confusion, distract from obstacles (including those that they may be intended to illuminate), and potentially cause accidents. Clutter is particularly noticeable on roads where the street lights are badly designed, or where brightly lit advertisements surround the roadways. Depending on the motives of the person or organization that installed the lights, their placement and design can even be intended to distract drivers, and can contribute to accidents.
Another source of light pollution is artificial satellites. With the future increase in numbers of satellite constellations, such as OneWeb and Starlink, it is feared, especially by the astronomical community such as the IAU, that light pollution will increase significantly, besides other problems of satellite overcrowding.
Measuring the effect of sky glow on a global scale is a complex procedure. The natural atmosphere is not completely dark, even in the absence of terrestrial sources of light and illumination from the Moon. This is caused by two main sources: "airglow" and "scattered light".
At high altitudes, primarily above the mesosphere, there is enough short-wavelength UV radiation from the sun to cause ionization. When the ions collide with electrically neutral particles they recombine and emit photons in the process, causing airglow. The degree of ionization is sufficiently large to allow a constant emission of radiation even during the night, when the upper atmosphere is in the Earth's shadow. Lower in the atmosphere, all the solar photons with energies above the ionization potential of N2 and O2 have already been absorbed by the higher layers, and thus no appreciable ionization occurs.
Apart from emitting light, the sky also scatters incoming light, primarily from distant stars and the Milky Way, but also the zodiacal light, sunlight that is reflected and backscattered from interplanetary dust particles.
The amount of airglow and zodiacal light is quite variable (depending, amongst other things, on sunspot activity and the solar cycle) but given optimal conditions the darkest possible sky has a brightness of about 22 magnitudes per square arcsecond. If a full moon is present, the sky brightness increases to about 18 magnitudes per square arcsecond depending on local atmospheric transparency, 40 times brighter than the darkest sky. In densely populated areas a sky brightness of 17 magnitudes per square arcsecond is not uncommon, or as much as 100 times brighter than is natural.
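These "times brighter" factors follow directly from the standard astronomical magnitude relation (a worked check, not a figure from a source). For two surface brightnesses m_1 and m_2, the brightness ratio is

    \frac{b_1}{b_2} = 10^{0.4\,(m_2 - m_1)}

so a full-moon sky at m_1 = 18 against a dark sky at m_2 = 22 gives 10^{0.4 \times 4} \approx 40, and an urban sky at m_1 = 17 gives 10^{0.4 \times 5} = 100.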
To precisely measure how bright the sky gets, nighttime satellite imagery of the Earth is used as raw input for the number and intensity of light sources. These are put into a physical model of scattering due to air molecules and aerosols to calculate cumulative sky brightness. Maps that show the enhanced sky brightness have been prepared for the entire world.
Inspection of the area surrounding Madrid reveals that the effects of light pollution caused by a single large conglomeration can be felt up to away from the center.
Global effects of light pollution are also made obvious. The entire area consisting of southern England, the Netherlands, Belgium, western Germany, and northern France has a sky brightness of at least two to four times normal. The only places in continental Europe where the sky can attain its natural darkness are in northern Scandinavia and on islands far from the continent.
In North America the situation is comparable. There is a significant problem with light pollution ranging from the Canadian Maritime Provinces to the American Southwest. The International Dark-Sky Association works to designate areas that have high quality night skies. These areas are supported by communities and organizations that are dedicated to reducing light pollution (e.g. Dark-sky preserve). The National Park Service Natural Sounds and Night Skies Division has measured night sky quality in national park units across the U.S. Sky quality in the U.S. ranges from pristine (Capitol Reef National Park and Big Bend National Park) to severely degraded (Santa Monica Mountains National Recreation Area and Biscayne National Park). The National Park Service Night Sky Program monitoring database is available online (2015).
The Bortle scale is a nine-level system for measuring how much light pollution there is in the sky. A rating of five or lower is dark enough for the Milky Way to be visible, while a rating of one is "pristine", the darkest possible.
Light pollution in Hong Kong was declared the 'worst on the planet' in March 2013.
In June 2016, it was estimated that one third of the world's population could no longer see the Milky Way, including 80% of Americans and 60% of Europeans. Singapore was found to be the most light-polluted country in the world.
Medical research on the effects of excessive light on the human body suggests that a variety of adverse health effects may be caused by light pollution or excessive light exposure, and some lighting design textbooks use human health as an explicit criterion for proper interior lighting. Health effects of over-illumination or improper spectral composition of light may include: increased headache incidence, worker fatigue, medically defined stress, decrease in sexual function and increase in anxiety. Likewise, animal models have demonstrated that unavoidable light produces adverse effects on mood and anxiety. For those who need to be awake at night, light at night also has an acute effect on alertness and mood.
In 2007, "shift work that involves circadian disruption" was listed as a probable carcinogen by the World Health Organization's International Agency for Research on Cancer. (IARC Press release No. 180). Multiple studies have documented a correlation between night shift work and the increased incidence of breast and prostate cancer. One study which examined the link between exposure to artificial light at night (ALAN) and levels of breast cancer in South Korea found that regions which had the highest levels of ALAN reported the highest number of cases of breast cancer. Seoul, which had the highest levels of light pollution, had 34.4% more cases of breast cancer than Ganwon-do, which had the lowest levels of light pollution. This suggested a high correlation between ALAN and the prevalence of breast cancer. It was also found that there was no correlation between other types of cancer such as cervical or lung cancer and ALAN levels.
A more recent discussion (2009), written by Professor Steven Lockley, Harvard Medical School, can be found in the CfDS handbook "Blinded by the Light?". Chapter 4, "Human health implications of light pollution" states that "...light intrusion, even if dim, is likely to have measurable effects on sleep disruption and melatonin suppression. Even if these effects are relatively small from night to night, continuous chronic circadian, sleep and hormonal disruption may have longer-term health risks". The New York Academy of Sciences hosted a meeting in 2009 on Circadian Disruption and Cancer. Red light suppresses melatonin the least.
In June 2009, the American Medical Association developed a policy in support of control of light pollution. News about the decision emphasized glare as a public health hazard leading to unsafe driving conditions. Especially in the elderly, glare produces loss of contrast, obscuring night vision.
When artificial light affects organisms and ecosystems, it is called ecological light pollution. While light at night can be beneficial, neutral, or damaging for individual species, its presence invariably disturbs ecosystems. For example, some species of spiders avoid lit areas, while other species are happy to build their webs directly on lamp posts. Since lamp posts attract many flying insects, the spiders that do not mind light gain an advantage over the spiders that avoid it. This is a simple example of the way in which species frequencies and food webs can be disturbed by the introduction of light at night.
Light pollution poses a serious threat in particular to nocturnal wildlife, having negative impacts on plant and animal physiology. It can confuse animal navigation, alter competitive interactions, change predator-prey relations, and cause physiological harm. The rhythm of life is orchestrated by the natural diurnal patterns of light and dark, so disruption to these patterns impacts the ecological dynamics.
Studies suggest that light pollution around lakes prevents zooplankton, such as "Daphnia", from eating surface algae, causing algal blooms that can kill off the lakes' plants and lower water quality. Light pollution may also affect ecosystems in other ways. For example, lepidopterists and entomologists have documented that nighttime light may interfere with the ability of moths and other nocturnal insects to navigate. Night-blooming flowers that depend on moths for pollination may be affected by night lighting, as there is no replacement pollinator that would not be affected by the artificial light. This can lead to species decline of plants that are unable to reproduce, and change an area's long-term ecology. Among nocturnal insects, fireflies (Coleoptera: Lampyridae, Phengodidae and Elateridae) are especially interesting study objects for light pollution, since they depend on their own light to reproduce and, consequently, are very sensitive to environmental levels of light. Fireflies are well known and interesting to the general public (unlike many other insects), are easily spotted by nonexperts and, due to their sensitivity and rapid response to environmental changes, are good bioindicators for artificial night lighting. Massive insect declines have been suggested as being at least partially mediated by artificial lights at night.
A 2009 study also suggests deleterious impacts on animals and ecosystems because of perturbation of polarized light or artificial polarization of light (even during the day, because the direction of natural polarization of sunlight and its reflection is a source of information for many animals). This form of pollution is named polarized light pollution (PLP). Unnatural polarized light sources can trigger maladaptive behaviors in polarization-sensitive taxa and alter ecological interactions.
Lights on tall structures can disorient migrating birds. Estimates by the U.S. Fish and Wildlife Service of the number of birds killed after being attracted to tall towers range from four to five million per year to an order of magnitude higher. The Fatal Light Awareness Program (FLAP) works with building owners in Toronto, Ontario, Canada and other cities to reduce mortality of birds by turning out lights during migration periods.
Similar disorientation has also been noted for bird species migrating close to offshore production and drilling facilities. Studies carried out by Nederlandse Aardolie Maatschappij b.v. (NAM) and Shell have led to the development and trial of new lighting technologies in the North Sea. In early 2007, the lights were installed on the Shell production platform L15. The experiment proved a great success: the number of birds circling the platform declined by 50 to 90%.
Sea turtle hatchlings emerging from nests on beaches are another casualty of light pollution. It is a common misconception that hatchling sea turtles are attracted to the moon. Rather, they find the ocean by moving away from the dark silhouette of dunes and their vegetation, a behavior with which artificial lights interfere. The breeding activity and reproductive phenology of toads, however, are cued by moonlight. Juvenile seabirds may also be disoriented by lights as they leave their nests and fly out to sea. Amphibians and reptiles are also affected by light pollution. Introduced light sources during normally dark periods can disrupt levels of melatonin production. Melatonin is a hormone that regulates photoperiodic physiology and behaviour. Some species of frogs and salamanders utilize a light-dependent "compass" to orient their migratory behaviour to breeding sites. Introduced light can also cause developmental irregularities, such as retinal damage, reduced juvenile growth, premature metamorphosis, reduced sperm production, and genetic mutation.
In September 2009, the 9th European Dark-Sky Symposium in Armagh, Northern Ireland had a session on the environmental effects of light at night (LAN). It dealt with bats, turtles, the "hidden" harms of LAN, and many other topics. The environmental effects of LAN were mentioned as early as 1897, in a "Los Angeles Times" article. The following is an excerpt from that article, called "Electricity and English songbirds":
Astronomy is very sensitive to light pollution. The night sky viewed from a city bears no resemblance to what can be seen from dark skies. Skyglow (the scattering of light in the atmosphere at night) reduces the contrast between stars and galaxies and the sky itself, making it much harder to see fainter objects. This is one factor that has caused newer telescopes to be built in increasingly remote areas.
Even under apparently clear night skies, considerable stray light can become visible at the longer exposure times used in astrophotography. Software can reduce the stray light, but object detail is lost in the process. The following picture of the area around the Pinwheel Galaxy (Messier 101), with an apparent magnitude of 7.5 and all stars down to an apparent magnitude of 10, was taken in Berlin, pointing close to the zenith, with a fast lens (f-number 1.2) and an exposure time of five seconds at an exposure index of ISO 12800.
Some astronomers use narrow-band "nebula filters", which allow only specific wavelengths of light commonly seen in nebulae, or broad-band "light pollution filters", which are designed to reduce (but not eliminate) the effects of light pollution by filtering out spectral lines commonly emitted by sodium- and mercury-vapor lamps, thus enhancing contrast and improving the view of dim objects such as galaxies and nebulae. Unfortunately, these light pollution reduction (LPR) filters are not a cure for light pollution. LPR filters reduce the brightness of the object under study and this limits the use of higher magnifications. LPR filters work by blocking light of certain wavelengths, which alters the color of the object, often creating a pronounced green cast. Furthermore, LPR filters work only on certain object types (mainly emission nebulae) and are of little use on galaxies and stars. No filter can match the effectiveness of a dark sky for visual or photographic purposes.
Light pollution affects the visibility of diffuse sky objects like nebulae and galaxies more than stars, due to their low surface brightness. Most such objects are rendered invisible in heavily light-polluted skies above major cities. A simple method for estimating the darkness of a location is to look for the Milky Way, which from truly dark skies appears bright enough to cast a shadow.
In addition to skyglow, light trespass can impact observations when artificial light directly enters the tube of the telescope and is reflected from non-optical surfaces until it eventually reaches the eyepiece. This direct form of light pollution causes a glow across the field of view, which reduces contrast. Light trespass also makes it hard for a visual observer to become sufficiently adapted to the dark. The usual measures to reduce this glare, if reducing the light directly is not an option, include flocking the telescope tube and accessories to reduce reflection, and putting a light shield (also usable as a dew shield) on the telescope to reduce light entering from angles other than those near the target. Under these conditions, some astronomers prefer to observe under a black cloth to ensure maximum adaptation to the dark.
A study presented at the American Geophysical Union meeting in San Francisco found that light pollution destroys nitrate radicals thus preventing the normal night time reduction of atmospheric smog produced by fumes emitted from cars and factories. The study was presented by Harald Stark from the National Oceanic and Atmospheric Administration.
At night, the polarization of the moonlit sky is very strongly reduced in the presence of urban light pollution, because scattered urban light is not strongly polarized. Polarized moonlight cannot be seen by humans, but is believed to be used by many animals for navigation.
Reducing light pollution implies many things, such as reducing sky glow, reducing glare, reducing light trespass, and reducing clutter. The method for best reducing light pollution, therefore, depends on exactly what the problem is in any given instance. Possible solutions include:
The use of "full cutoff" lighting fixtures, as much as possible, is advocated by most campaigners for the reduction of light pollution. It is also commonly recommended that lights be spaced appropriately for maximum efficiency, and that number of luminaires being used as well as the wattage of each luminaire match the needs of the particular application (based on local lighting design standards).
Full cutoff fixtures first became available in 1959 with the introduction of General Electric's M100 fixture.
A full cutoff fixture, when correctly installed, reduces the chance for light to escape above the plane of the horizontal. Light released above the horizontal may sometimes be lighting an intended target, but often serves no purpose. When it enters into the atmosphere, light contributes to sky glow. Some governments and organizations are now considering, or have already implemented, full cutoff fixtures in street lamps and stadium lighting.
The use of full cutoff fixtures helps to reduce sky glow by preventing light from escaping above the horizontal. Full cutoff typically reduces the visibility of the lamp and reflector within a luminaire, so the effects of glare are also reduced. Campaigners also commonly argue that full cutoff fixtures are more efficient than other fixtures, since light that would otherwise have escaped into the atmosphere may instead be directed towards the ground. However, full cutoff fixtures may also trap more light in the fixture than other types of luminaires, corresponding to lower luminaire efficiency, suggesting that a re-design of some luminaires may be necessary.
The use of full cutoff fixtures can allow for lower wattage lamps to be used in the fixtures, producing the same or sometimes a better effect, due to being more carefully controlled. In every lighting system, some sky glow also results from light reflected from the ground. This reflection can be reduced, however, by being careful to use only the lowest wattage necessary for the lamp, and setting spacing between lights appropriately. Assuring luminaire setback is greater than 90° from highly reflective surfaces also diminishes reflectance.
A common criticism of full cutoff lighting fixtures is that they are sometimes not as aesthetically pleasing. This is most likely because historically there has not been a large market specifically for full cutoff fixtures, and because people typically like to see the source of illumination. Because their light distribution is tightly controlled, full cutoff fixtures sometimes also require expertise to install for maximum effect.
The effectiveness of using full cutoff roadway lights to combat light pollution has also been called into question. According to design investigations, luminaires with full cutoff distributions (as opposed to "cutoff" or "semi-cutoff" distributions) have to be closer together to meet the same light level, uniformity, and glare requirements specified by the IESNA. These simulations optimized the height and spacing of the lights while constraining the overall design to meet the IESNA requirements, and then compared the total uplight and energy consumption of different luminaire designs and powers. Cutoff designs performed better than full cutoff designs, and semi-cutoff performed better than either. This indicates that, in roadway installations, the over-illumination or poor uniformity produced by full cutoff fixtures may be more detrimental than the direct uplight created by fewer cutoff or semi-cutoff fixtures. Therefore, the overall performance of existing systems could be improved more by reducing the number of luminaires than by switching to full cutoff designs.
However, using the definition of "light pollution" from some Italian regional bills (i.e., "every irradiance of artificial light outside competence areas and particularly upward toward the sky"), only full cutoff designs prevent light pollution. The Italian Lombardy region, where only full cutoff design is allowed (Lombardy act no. 17/2000, promoted by Cielobuio-coordination for the protection of the night sky), in 2007 had the lowest per capita energy consumption for public lighting in Italy. The same legislation also imposes a minimum distance between street lamps of about four times their height, so full cutoff street lamps are the best solution to reduce both light pollution and electrical power usage.
Several different types of light sources exist, each having different properties that affect their appropriateness for certain tasks, particularly efficiency and spectral power distribution. It is often the case that inappropriate light sources have been selected for a task, either due to ignorance or because more sophisticated light sources were unavailable at the time of installation. Therefore, badly chosen light sources often contribute unnecessarily to light pollution and energy waste. By re-assessing and changing the light sources used, it is often possible to reduce energy use and pollutive effects while simultaneously greatly improving efficiency and visibility.
Some types of light sources are listed in order of energy efficiency in the table below (figures are approximate maintained values), and include relative visual skyglow impacts.
Many astronomers request that nearby communities use low pressure sodium lights or amber aluminium gallium indium phosphide (AlGaInP) LEDs as much as possible, because the principal wavelength emitted is comparably easy to work around or, in rare cases, filter out. The low cost of operating sodium lights is another feature. In 1980, for example, San Jose, California, replaced all street lamps with low pressure sodium lamps, whose light is easier for the nearby Lick Observatory to filter out. Similar programs are now in place in Arizona and Hawaii. Such yellow light sources also have significantly less visual skyglow impact, so they reduce visual sky brightness and improve star visibility for everyone.
Disadvantages of low pressure sodium lighting are that fixtures must usually be larger than competing fixtures, and that color cannot be distinguished, due to its emitting principally a single wavelength of light (see security lighting). Due to the substantial size of the lamp, particularly in higher wattages such as 135 W and 180 W, control of light emissions from low pressure sodium luminaires is more difficult. For applications requiring more precise direction of light (such as narrow roadways), the native lamp efficacy advantage of this lamp type is decreased and may be entirely lost compared to high pressure sodium lamps. Allegations that this also leads to higher amounts of light pollution from luminaires running these lamps arise principally because of older luminaires with poor shielding, still widely in use in the UK and in some other locations. Modern low-pressure sodium fixtures with better optics and full shielding, together with the decreased skyglow impacts of yellow light, preserve the luminous efficacy advantage of low-pressure sodium and result in most cases in less energy consumption and less visible light pollution. Unfortunately, due to continued lack of accurate information, many lighting professionals continue to disparage low-pressure sodium, contributing to its decreased acceptance and specification in lighting standards and therefore its use. Another disadvantage of low-pressure sodium lamps is that some people find the characteristic yellow light very displeasing aesthetically.
Because of the increased sensitivity of the human eye to blue and green wavelengths when viewing low-luminances (the Purkinje effect) in the night sky, different sources produce dramatically different amounts of visible skyglow from the same amount of light sent into the atmosphere.
In some cases, evaluation of existing plans has determined that more efficient lighting plans are possible. For instance, light pollution can be reduced by turning off unneeded outdoor lights and lighting stadiums only when there are people inside. Timers are especially valuable for this purpose. One of the world's first coordinated legislative efforts to reduce the adverse effect of this pollution on the environment began in Flagstaff, Arizona, in the U.S. There, more than three decades of ordinance development has taken place, with the full support of the population, government, community advocates, and major local observatories, including the United States Naval Observatory Flagstaff Station. Each component helps to educate, protect and enforce the imperatives to intelligently reduce detrimental light pollution.
One example of a lighting plan assessment can be seen in a report originally commissioned by the Office of the Deputy Prime Minister in the United Kingdom, and now available through the Department for Communities and Local Government. The report details a plan to be implemented throughout the UK, for designing lighting schemes in the countryside, with a particular focus on preserving the environment.
In another example, the city of Calgary has recently replaced most residential street lights with models that are comparably energy efficient. The motivation is primarily operation cost and environmental conservation. The costs of installation are expected to be regained through energy savings within six to seven years.
The Swiss Agency for Energy Efficiency (SAFE) uses a concept that promises to be of great use in the diagnosis and design of road lighting, "consommation électrique spécifique" (CES), which can be translated into English as "specific electric power consumption" (SEC). Based on observed lighting levels in a wide range of Swiss towns, SAFE has defined target values for electric power consumption per metre for roads of various categories; it currently recommends an SEC of two to three watts per metre for roads less than ten metres wide (four to six for wider roads). Such a measure provides an easily applicable environmental protection constraint on conventional "norms", which usually are based on the recommendations of lighting manufacturing interests that may not take environmental criteria into account. In view of ongoing progress in lighting technology, target SEC values will need to be periodically revised downwards.
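As a rough illustration of how such a target functions as a design check, here is a minimal sketch (in Python) that compares a road-lighting plan's power draw per metre with the SAFE targets quoted above; the lamp wattage and pole spacing are hypothetical values chosen only for the example.

```python
# Minimal sketch: checking a road-lighting design against SAFE's
# specific electric power consumption (SEC) targets quoted above.
# The lamp wattage and pole spacing below are hypothetical examples.

def sec_watts_per_metre(lamp_watts: float, spacing_m: float) -> float:
    """Specific electric consumption: luminaire power divided by pole spacing."""
    return lamp_watts / spacing_m

def within_safe_target(sec: float, road_width_m: float) -> bool:
    # SAFE targets: 2-3 W/m for roads under 10 m wide, 4-6 W/m for wider roads.
    upper_limit = 3.0 if road_width_m < 10 else 6.0
    return sec <= upper_limit

# Example: 70 W lamps every 30 m on an 8 m wide road -> about 2.3 W/m.
sec = sec_watts_per_metre(lamp_watts=70, spacing_m=30)
print(f"SEC = {sec:.2f} W/m, within target: {within_safe_target(sec, road_width_m=8)}")
```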
A newer method for predicting and measuring various aspects of light pollution was described in the journal Lighting Research & Technology (September 2008). Scientists at Rensselaer Polytechnic Institute's Lighting Research Center have developed a comprehensive method called Outdoor Site-Lighting Performance (OSP), which allows users to quantify, and thus optimize, the performance of existing and planned lighting designs and applications to minimize excessive or obtrusive light leaving the boundaries of a property. OSP can be used by lighting engineers immediately, particularly for the investigation of glow and trespass (glare analyses are more complex to perform and current commercial software does not readily allow them), and can help users compare several lighting design alternatives for the same site.
In the effort to reduce light pollution, researchers have developed a "Unified System of Photometry", which is a way to measure how much or what kind of street lighting is needed. The Unified System of Photometry allows light fixtures to be designed to reduce energy use while maintaining or improving perceptions of visibility, safety, and security. There was a need to create a new system of light measurement at night because the biological way in which the eye's rods and cones process light is different in nighttime conditions versus daytime conditions. Using this new system of photometry, results from recent studies have indicated that replacing traditional, yellowish, high-pressure sodium (HPS) lights with "cool" white light sources, such as induction, fluorescent, ceramic metal halide, or LEDs can actually reduce the amount of electric power used for lighting while maintaining or improving visibility in nighttime conditions.
The International Commission on Illumination, also known as the CIE from its French title, la Commission Internationale de l'Eclairage, will soon be releasing its own form of unified photometry for outdoor lighting.
|
https://en.wikipedia.org/wiki?curid=18279
|
Lagrangian point
In celestial mechanics, the Lagrangian points (also Lagrange points, L-points, or libration points) are the points near two large bodies in orbit where a smaller object will maintain its position relative to the large orbiting bodies. At other locations, a small object would go into its own orbit around one of the large bodies, but at the Lagrangian points the gravitational forces of the two large bodies, the centripetal force of orbital motion, and (for certain points) the Coriolis acceleration all match up in a way that causes the small object to maintain a stable or nearly stable position relative to the large bodies.
There are five such points, labeled L1 to L5, all in the orbital plane of the two large bodies, for each given combination of two orbital bodies. For instance, there are five Lagrangian points L1 to L5 for the Sun–Earth system, and in a similar way there are five "different" Lagrangian points for the Earth–Moon system. L1, L2, and L3 are on the line through the centers of the two large bodies, while L4 and L5 each act as the third vertex of an equilateral triangle formed with the centers of the two large bodies. L4 and L5 are stable, which implies that objects can orbit around them in a rotating coordinate system tied to the two large bodies.
Several planets have trojan satellites near their L4 and L5 points with respect to the Sun. Jupiter has more than a million of these trojans. Artificial satellites have been placed at L1 and L2 with respect to the Sun and Earth, and with respect to the Earth and the Moon. The Lagrangian points have been proposed for uses in space exploration.
The three collinear Lagrange points (L1, L2, L3) were discovered by Leonhard Euler a few years before Joseph-Louis Lagrange discovered the remaining two.
In 1772, Lagrange published an "Essay on the three-body problem". In the first chapter he considered the general three-body problem. From that, in the second chapter, he demonstrated two special constant-pattern solutions, the collinear and the equilateral, for any three masses, with circular orbits.
The five Lagrangian points are labeled and defined as follows:
The L1 point lies on the line defined by the two large masses "M"1 and "M"2, and between them. It is the point where the gravitational attraction of "M"2 partially cancels "M"1's gravitational attraction. An object that orbits the Sun more closely than Earth would normally have a shorter orbital period than Earth, but that ignores the effect of Earth's own gravitational pull. If the object is directly between Earth and the Sun, then Earth's gravity counteracts some of the Sun's pull on the object, and therefore increases the orbital period of the object. The closer to Earth the object is, the greater this effect is. At the L1 point, the orbital period of the object becomes exactly equal to Earth's orbital period. L1 is about 1.5 million kilometers from Earth, or 0.01 au, 1/100th the distance to the Sun.
The L2 point lies on the line through the two large masses, beyond the smaller of the two. Here, the gravitational forces of the two large masses balance the centrifugal effect on a body at L2. On the opposite side of Earth from the Sun, the orbital period of an object would normally be greater than that of Earth. The extra pull of Earth's gravity decreases the orbital period of the object, and at the L2 point that orbital period becomes equal to Earth's. Like L1, L2 is about 1.5 million kilometers or 0.01 au from Earth.
The L3 point lies on the line defined by the two large masses, beyond the larger of the two. Within the Sun–Earth system, the L3 point exists on the opposite side of the Sun, a little outside Earth's orbit. This placement occurs because the Sun is also affected by Earth's gravity and so orbits around the two bodies' barycenter, which is well inside the body of the Sun. An object at Earth's distance from the Sun would have an orbital period of one year if only the Sun's gravity is considered. But an object on the opposite side of the Sun from Earth and directly in line with both "feels" Earth's gravity adding slightly to the Sun's, and must therefore orbit slightly farther from the barycenter in order to have the same 1-year period. It is at the L3 point that the combined pull of Earth and Sun causes the object to orbit with the same period as Earth, in effect orbiting an Earth+Sun mass with the Earth–Sun barycenter at one focus of its orbit.
The L4 and L5 points lie at the third corners of the two equilateral triangles in the plane of orbit whose common base is the line between the centers of the two masses, such that the point lies ahead of (L4) or behind (L5) the smaller mass with regard to its orbit around the larger mass.
The triangular points (L4 and L5) are stable equilibria, provided that the ratio of "M"1/"M"2 is greater than 24.96. This is the case for the Sun–Earth system, the Sun–Jupiter system, and, by a smaller margin, the Earth–Moon system. When a body at these points is perturbed, it moves away from the point, but the factor opposite of that which is increased or decreased by the perturbation (either gravity or angular momentum-induced speed) will also increase or decrease, bending the object's path into a stable, kidney-bean-shaped orbit around the point (as seen in the corotating frame of reference).
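As a quick illustration of this criterion, the following sketch checks the mass ratio for three familiar systems; the masses are standard reference values assumed here, not figures from this article.

```python
# Sketch: checking the L4/L5 stability criterion M1/M2 > 24.96.
# Masses (kg) are standard reference values, assumed for illustration.
SYSTEMS = {
    "Sun-Earth":   (1.989e30, 5.972e24),
    "Sun-Jupiter": (1.989e30, 1.898e27),
    "Earth-Moon":  (5.972e24, 7.342e22),
}
CRITICAL_RATIO = 24.96

for name, (m1, m2) in SYSTEMS.items():
    ratio = m1 / m2
    verdict = "stable" if ratio > CRITICAL_RATIO else "unstable"
    print(f"{name}: M1/M2 = {ratio:,.1f} -> L4/L5 {verdict}")
```

The Earth–Moon ratio of roughly 81 clears the threshold by the smallest margin of the three, matching the "by a smaller margin" qualification above.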
In contrast to L4 and L5, where stable equilibrium exists, the points L1, L2, and L3 are positions of unstable equilibrium. Any object orbiting at L1, L2, or L3 will tend to fall out of orbit; it is therefore rare to find natural objects there, and spacecraft inhabiting these areas must employ station keeping in order to maintain their position.
It is common to find objects at or orbiting the L4 and L5 points of natural orbital systems. These are commonly called "trojans". In the 20th century, asteroids discovered orbiting at the Sun–Jupiter L4 and L5 points were named after characters from Homer's "Iliad". Asteroids at the L4 point, which leads Jupiter, are referred to as the "Greek camp", whereas those at the L5 point are referred to as the "Trojan camp".
Other examples of natural objects orbiting at Lagrange points:
Lagrangian points are the constant-pattern solutions of the restricted three-body problem. For example, given two massive bodies in orbits around their common barycenter, there are five positions in space where a third body, of comparatively negligible mass, could be placed so as to maintain its position relative to the two massive bodies. As seen in a rotating reference frame that matches the angular velocity of the two co-orbiting bodies, the gravitational fields of the two massive bodies combine to provide the centripetal force at the Lagrangian points, allowing the smaller third body to remain relatively stationary with respect to the first two.
The location of L1 is the solution to the following equation, gravitation providing the centripetal force:

\frac{M_1}{(R - r)^2} = \frac{M_2}{r^2} + \left( \frac{M_1}{M_1 + M_2} R - r \right) \frac{M_1 + M_2}{R^3}
where "r" is the distance of the L1 point from the smaller object, "R" is the distance between the two main objects, and "M"1 and "M"2 are the masses of the large and small object, respectively. (The quantity in parentheses on the right is the distance of L1 from the center of mass.) Solving this for "r" involves solving a quintic function, but if the mass of the smaller object ("M"2) is much smaller than the mass of the larger object ("M"1) then and are at approximately equal distances "r" from the smaller object, equal to the radius of the Hill sphere, given by:
This distance can be described as being such that the orbital period, corresponding to a circular orbit with this distance as radius around "M"2 in the absence of "M"1, is that of "M"2 around "M"1, divided by \sqrt{3} \approx 1.73:

T_{s,M_2}(r) = \frac{T_{M_2,M_1}(R)}{\sqrt{3}}
The location of L2 is the solution to the following equation, gravitation providing the centripetal force:

\frac{M_1}{(R + r)^2} + \frac{M_2}{r^2} = \left( \frac{M_1}{M_1 + M_2} R + r \right) \frac{M_1 + M_2}{R^3}
with parameters defined as for the L1 case. Again, if the mass of the smaller object ("M"2) is much smaller than the mass of the larger object ("M"1) then L2 is at approximately the radius of the Hill sphere, given by:

r \approx R \sqrt[3]{\frac{M_2}{3 M_1}}
The location of L3 is the solution to the following equation, gravitation providing the centripetal force:

\frac{M_1}{(R - r)^2} + \frac{M_2}{(2R - r)^2} = \left( \frac{M_2}{M_1 + M_2} R + R - r \right) \frac{M_1 + M_2}{R^3}
with parameters "M"1,2 and "R" defined as for the L1 and L2 cases, and "r" now indicates the distance of L3 from the position of the smaller object, if it were rotated 180 degrees about the larger object. If the mass of the smaller object ("M"2) is much smaller than the mass of the larger object ("M"1) then:
The reason these points are in balance is that, at L4 and L5, the distances to the two masses are equal. Accordingly, the gravitational forces from the two massive bodies are in the same ratio as the masses of the two bodies, and so the resultant force acts through the barycenter of the system; additionally, the geometry of the triangle ensures that the resultant acceleration is to the distance from the barycenter in the same ratio as for the two massive bodies. The barycenter being both the center of mass and center of rotation of the three-body system, this resultant force is exactly that required to keep the smaller body at the Lagrange point in orbital equilibrium with the other two larger bodies of the system. (Indeed, the third body need not have negligible mass.) The general triangular configuration was discovered by Lagrange in work on the three-body problem.
The radial acceleration "a" of an object in orbit at a point along the line passing through both bodies is given by:

a = -\frac{G M_1}{r^2} \operatorname{sgn}(r) + \frac{G M_2}{(R - r)^2} \operatorname{sgn}(R - r) + \frac{G \left( (M_1 + M_2) \, r - M_2 R \right)}{R^3}
where "r" is the distance from the large body "M"1 and sgn("x") is the sign function of "x". The terms in this function represent respectively: force from "M"1; force from "M"2; and centrifugal force. The points L3, L1, L2 occur where the acceleration is zero — see chart at right.
Although the L1, L2, and L3 points are nominally unstable, there are (unstable) periodic orbits called "halo" orbits around these points in a three-body system. A full "n"-body dynamical system such as the Solar System does not contain these periodic orbits, but does contain quasi-periodic (i.e. bounded but not precisely repeating) orbits following Lissajous-curve trajectories. These quasi-periodic Lissajous orbits are what most Lagrangian-point space missions have used until now. Although they are not perfectly stable, a modest effort of station keeping keeps a spacecraft in a desired Lissajous orbit for a long time. Also, for Sun–Earth L1 missions, it is preferable for the spacecraft to be in a large-amplitude Lissajous orbit around L1 rather than to stay at L1, because the line between Sun and Earth has increased solar interference on Earth–spacecraft communications. Similarly, a large-amplitude Lissajous orbit around L2 keeps a probe out of Earth's shadow and therefore ensures continuous illumination of its solar panels.
The L4 and L5 points are stable provided that the mass of the primary body (e.g. the Earth) is at least 25 times the mass of the secondary body (e.g. the Moon). The Earth is over 81 times the mass of the Moon (the Moon is 1.23% of the mass of the Earth). Although the L4 and L5 points sit at the top of a "hill" of the effective potential, they are nonetheless stable. The reason for the stability is a second-order effect: as a body moves away from the exact Lagrange position, Coriolis acceleration (which depends on the velocity of an orbiting object and cannot be modeled as a contour map) curves the trajectory into a path around (rather than away from) the point.
This table lists sample values of L1, L2, and L3 within the Solar System. Calculations assume the two bodies orbit in a perfect circle with separation equal to the semimajor axis and that no other bodies are nearby. Distances are measured from the larger body's center of mass, with L3 showing a negative location. The percentage columns show how the distances compare to the semimajor axis. For example, for the Moon, L1 is located at 84.9% of the Earth–Moon distance from Earth's center, i.e. 15.1% in front of the Moon; L2 is located at 116.8% of the Earth–Moon distance, i.e. 16.8% beyond the Moon; and L3 is located at 99.3% of the Earth–Moon distance in the opposite direction, i.e. 0.7084% in front of the Moon's 'negative' position.
Sun–Earth L1 is suited for making observations of the Sun–Earth system. Objects here are never shadowed by Earth or the Moon and, if observing Earth, always view the sunlit hemisphere. The first mission of this type was the 1978 International Sun Earth Explorer 3 (ISEE-3) mission, used as an interplanetary early warning storm monitor for solar disturbances. Since June 2015, DSCOVR has orbited the L1 point. L1 is also useful for space-based solar telescopes, because it provides an uninterrupted view of the Sun, and any space weather (including the solar wind and coronal mass ejections) reaches L1 a few hours before Earth. Solar telescopes currently located around L1 include the Solar and Heliospheric Observatory and the Advanced Composition Explorer.
Sun–Earth L2 is a good spot for space-based observatories. Because an object around L2 will maintain the same relative position with respect to the Sun and Earth, shielding and calibration are much simpler. It is, however, slightly beyond the reach of Earth's umbra, so solar radiation is not completely blocked at L2. Spacecraft generally orbit around L2, avoiding partial eclipses of the Sun to maintain a constant temperature. From locations near L2, the Sun, Earth and Moon are relatively close together in the sky; this means that a large sunshade with the telescope on the dark side can allow the telescope to cool passively to around 50 K, which is especially helpful for infrared astronomy and observations of the cosmic microwave background. The James Webb Space Telescope is due to be positioned at L2.
Sun–Earth L3 was a popular place to put a "Counter-Earth" in pulp science fiction and comic books. Once space-based observation became possible via satellites and probes, it was shown to hold no such object. The Sun–Earth L3 point is unstable and could not contain a natural object, large or small, for very long. This is because the gravitational forces of the other planets are stronger than that of Earth (Venus, for example, comes within 0.3 AU of this point every 20 months).
A spacecraft orbiting near Sun–Earth L5 would be able to closely monitor the evolution of active sunspot regions before they rotate into a geoeffective position, so that a 7-day early warning could be issued by the NOAA Space Weather Prediction Center. Moreover, a satellite near Sun–Earth L5 would provide very important observations not only for Earth forecasts, but also for deep space support (Mars predictions and manned missions to near-Earth asteroids). In 2010, spacecraft transfer trajectories to Sun–Earth L5 were studied and several designs were considered.
Missions to Lagrangian points generally orbit the points rather than occupy them directly.
Another interesting and useful property of the collinear Lagrangian points and their associated Lissajous orbits is that they serve as "gateways" to control the chaotic trajectories of the Interplanetary Transport Network.
Earth–Moon L1 allows comparatively easy access to lunar and Earth orbits with minimal change in velocity, an advantage for positioning a half-way manned space station intended to help transport cargo and personnel to the Moon and back.
Earth–Moon L2 has been used for a communications satellite covering the Moon's far side, for example Queqiao, launched in 2018, and would be "an ideal location" for a propellant depot as part of the proposed depot-based space transportation architecture.
Scientists at the B612 Foundation were planning to use Venus's L3 point to position their planned Sentinel telescope, which aimed to look back towards Earth's orbit and compile a catalogue of near-Earth asteroids.
In 2017, NASA proposed the idea of positioning a magnetic dipole shield at the Sun–Mars L1 point for use as an artificial magnetosphere for Mars. The idea is that this would protect the planet's atmosphere from the Sun's radiation and solar wind.
International Sun Earth Explorer 3 (ISEE-3) began its mission at the Sun–Earth L1 before leaving to intercept a comet in 1982. The Sun–Earth L1 is also the point to which the Reboot ISEE-3 mission was attempting to return the craft as the first phase of a recovery mission (as of September 25, 2014, all efforts had failed and contact was lost).
Solar and Heliospheric Observatory (SOHO) is stationed in a halo orbit at L1, and the Advanced Composition Explorer (ACE) in a Lissajous orbit, also at L1. WIND is also at L1.
Deep Space Climate Observatory (DSCOVR), launched on 11 February 2015, began orbiting L1 on 8 June 2015 to study the solar wind and its effects on Earth. DSCOVR is unofficially known as GORESAT, because it carries a camera always oriented to Earth and capturing full-frame photos of the planet similar to the Blue Marble. This concept was proposed by then-Vice President of the United States Al Gore in 1998 and was a centerpiece in his film "An Inconvenient Truth".
LISA Pathfinder (LPF) was launched on 3 December 2015 and arrived at L1 on 22 January 2016, where, among other experiments, it tested the technology needed by (e)LISA to detect gravitational waves. LISA Pathfinder used an instrument consisting of two small gold alloy cubes.
Spacecraft at the Sun–Earth L2 point are in a Lissajous orbit until decommissioned, when they are sent into a heliocentric graveyard orbit.
|
https://en.wikipedia.org/wiki?curid=18285
|
Lucid dream
A lucid dream is a dream during which the dreamer is aware that they are dreaming. During a lucid dream, the dreamer may gain some amount of control over the dream characters, narrative, and environment; however, this is not actually necessary for a dream to be described as lucid. Lucid dreaming has been studied and reported for many years. Prominent figures from ancient to modern times have been fascinated by lucid dreams and have sought ways to better understand their causes and purpose. Many different theories have emerged as a result of scientific research on the subject, and the phenomenon has also appeared in popular culture. Further developments in psychological research have pointed to ways in which this form of dreaming may be utilized as a form of sleep therapy.
The term 'lucid dream' was coined by Dutch author and psychiatrist Frederik van Eeden in his 1913 article "A Study of Dreams", though descriptions of dreamers being aware that they are dreaming predate the term. Van Eeden studied his own dreams from 1896, writing down those that seemed most important to him; 352 of these were what is now known as lucid dreams. Based on the data he had collected, he distinguished seven types of dreams: initial dreams, pathological, ordinary dreaming, vivid dreaming, demoniacal, general dream-sensations, and lucid dreaming. Van Eeden said the seventh type, lucid dreaming, was the most interesting and the most worthy of careful observation. He studied lucid dreaming between January 20, 1898, and December 26, 1912. Describing this state of dreaming, van Eeden wrote: 'you are completely aware of your surroundings and are able to direct your actions freely, yet the sleep is stimulating and uninterrupted.'
In Eastern thought, cultivating the dreamer's ability to be aware that he or she is dreaming is central to both the ancient Indian Hindu practice of Yoga nidra and the Tibetan Buddhist practice of dream Yoga. The cultivation of such awareness was common practice among early Buddhists.
Early references to the phenomenon are also found in ancient Greek writing. For example, the philosopher Aristotle wrote: 'often when one is asleep, there is something in consciousness which declares that what then presents itself is but a dream'. Meanwhile, the physician Galen of Pergamon used lucid dreams as a form of therapy. In addition, a letter written by Saint Augustine of Hippo in 415 AD tells the story of a dreamer, Doctor Gennadius, and refers to lucid dreaming.
Philosopher and physician Sir Thomas Browne (1605–1682) was fascinated by dreams and described his own ability to lucid dream in his "Religio Medici", stating: '...yet in one dream I can compose a whole Comedy, behold the action, apprehend the jests and laugh my self awake at the conceits thereof'.
Samuel Pepys in his diary entry for 15 August 1665 records a dream, stating: "I had my Lady Castlemayne in my arms and was admitted to use all the dalliance I desired with her, and then dreamt that this could not be awake, but that it was only a dream".
In 1867, the French sinologist Marie-Jean-Léon, Marquis d'Hervey de Saint Denys anonymously published "Les Rêves et Les Moyens de Les Diriger; Observations Pratiques" ('Dreams and the ways to direct them; practical observations'), in which he describes his own experiences of lucid dreaming, and proposes that it is possible for anyone to learn to dream consciously.
In 1913, Dutch psychiatrist and writer Frederik (Willem) van Eeden (1860–1932) coined the term 'lucid dream' in an article entitled "A Study of Dreams".
Some have suggested that the term is a misnomer because van Eeden was referring to a phenomenon more specific than a lucid dream. Van Eeden intended the term lucid to denote "having insight", as in the phrase "a lucid interval" applied to someone in temporary remission from a psychosis, rather than as a reference to the perceptual quality of the experience, which may or may not be clear and vivid.
In 1968, Celia Green analyzed the main characteristics of such dreams, reviewing previously published literature on the subject and incorporating new data from participants of her own. She concluded that lucid dreams were a category of experience quite distinct from ordinary dreams and said they were associated with rapid eye movement sleep (REM sleep). Green was also the first to link lucid dreams to the phenomenon of false awakenings.
Lucid dreaming was subsequently researched by asking dreamers to perform pre-determined physical responses while experiencing a dream, including eye movement signals.
In 1980, Stephen LaBerge at Stanford University developed such techniques as part of his doctoral dissertation. In 1985, LaBerge performed a pilot study that showed that time perception while counting during a lucid dream is about the same as during waking life. Lucid dreamers counted out ten seconds while dreaming, signaling the start and the end of the count with a pre-arranged eye signal measured with electrooculogram recording. LaBerge's results were confirmed by German researchers D. Erlacher and M. Schredl in 2004.
In a further study by Stephen LaBerge, four subjects were compared either singing while dreaming or counting while dreaming. LaBerge found that the right hemisphere was more active during singing and the left hemisphere was more active during counting.
Neuroscientist J. Allan Hobson has hypothesized what might be occurring in the brain while lucid. The first step to lucid dreaming is recognizing one is dreaming. This recognition might occur in the dorsolateral prefrontal cortex, which is one of the few areas deactivated during REM sleep and where working memory occurs. Once this area is activated and the recognition of dreaming occurs, the dreamer must be cautious to let the dream continue but be conscious enough to remember that it is a dream. While maintaining this balance, the amygdala and parahippocampal cortex might be less intensely activated. To continue the intensity of the dream hallucinations, it is expected the pons and the parieto-occipital junction stay active.
Using electroencephalography (EEG) and other polysomnographical measurements, LaBerge and others have shown that lucid dreams begin in the Rapid Eye Movement (REM) stage of sleep. LaBerge also proposes that there are higher amounts of beta-1 frequency band (13–19 Hz) brain wave activity experienced by lucid dreamers, hence there is an increased amount of activity in the parietal lobes making lucid dreaming a conscious process.
Paul Tholey, a German Gestalt psychologist and a professor of psychology and sports science, originally studied dreams in order to answer the question of whether one dreams in colour or in black and white. In his phenomenological research, he outlined an epistemological frame using critical realism. Tholey instructed his test subjects to continuously suspect waking life to be a dream, so that such a habit would manifest itself during dreams. He called this technique for inducing lucid dreams the "Reflexionstechnik" (reflection technique). Subjects learned to have such lucid dreams; they observed their dream content and reported it soon after awakening. Tholey could thereby examine the cognitive abilities of dream figures. Nine trained lucid dreamers were directed to set other dream figures arithmetic and verbal tasks during lucid dreaming. Dream figures who agreed to perform the tasks proved more successful in verbal than in arithmetic tasks. Tholey discussed his scientific results with Stephen LaBerge, who has a similar approach.
A study was conducted to see whether it was possible to attain the ability to lucid dream through a drug. In 2018, galantamine was given to 121 patients in a double-blind, placebo-controlled trial, the only one of its kind. Some participants reported as much as a 42 percent increase in their ability to lucid dream compared with self-reports from the previous six months, and ten people experienced a lucid dream for the first time. It is theorized that galantamine allows acetylcholine (ACh) to build up, leading to greater recollection and awareness during dreaming.
Other researchers suggest that lucid dreaming is not a state of sleep, but of brief wakefulness, or "micro-awakening". Experiments by Stephen LaBerge used "perception of the outside world" as a criterion for wakefulness while studying lucid dreamers, and their sleep state was corroborated with physiological measurements. LaBerge's subjects experienced their lucid dreams while in a state of REM, which critics felt may mean that the subjects were fully awake. J. Allan Hobson responded that lucid dreaming must be a state of both waking and dreaming.
Philosopher Norman Malcolm has argued against the possibility of checking the accuracy of dream reports, pointing out that "the only criterion of the truth of a statement that someone has had a certain dream is, essentially, his saying so."
Paul Tholey laid the epistemological basis for the research of lucid dreams, proposing seven different conditions of clarity that a dream must fulfill in order to be defined as a lucid dream:
Later, in 1992, a study by Deirdre Barrett examined whether lucid dreams contained four "corollaries" of lucidity:
Barrett found less than a quarter of lucidity accounts exhibited all four.
Subsequently, Stephen LaBerge studied the prevalence of being able to control the dream scenario among lucid dreams, and found that while dream control and dream awareness are correlated, neither requires the other. LaBerge found dreams that exhibit one clearly without the capacity for the other; also, in some dreams where the dreamer is lucid and aware they could exercise control, they choose simply to observe.
In 2016, a meta-analytic study by David Saunders and colleagues of 34 lucid dreaming studies, taken from a period of 50 years, demonstrated that 55% of a pooled sample of 24,282 people claimed to have experienced lucid dreams at least once in their lifetime. Furthermore, of those who stated that they did experience lucid dreams, approximately 23% reported experiencing them on a regular basis, as often as once a month or more. In a 2004 study on lucid dream frequency and personality, a moderate correlation between nightmare frequency and frequency of lucid dreaming was demonstrated. Some lucid dreamers have also reported that nightmares are a trigger for dream lucidity. Previous studies have reported that lucid dreaming is more common among adolescents than adults.
A 2015 study showed that people who had practiced meditation for a long time tended to have more lucid dreams. Julian Mutz and Amir-Homayoun Javadi claimed that “Lucid dreaming is a hybrid state of consciousness with features of both waking and dreaming” in a review they published in Neuroscience of Consciousness in 2017.
Mutz and Javadi found that during lucid dreaming, there is increased activity in the dorsolateral prefrontal cortex, the bilateral frontopolar prefrontal cortex, the precuneus, the inferior parietal lobules, and the supramarginal gyrus, all areas associated with higher cognitive functions including working memory, planning, and self-consciousness. The researchers also found that during a lucid dream, "levels of self-determination" were similar to those that people experience during states of wakefulness, and that lucid dreamers can control only limited aspects of their dream at once.
Mutz and Javadi also have stated that by studying lucid dreaming further, scientists could learn more about various types of consciousness, which happen to be less easy to separate and research at other times.
It has been suggested that those who suffer from nightmares could benefit from the ability to be aware they are indeed dreaming. A pilot study performed in 2006 showed that lucid dreaming therapy treatment was successful in reducing nightmare frequency. This treatment consisted of exposure to the idea, mastery of the technique, and lucidity exercises. It was not clear what aspects of the treatment were responsible for the success of overcoming nightmares, though the treatment as a whole was said to be successful.
Australian psychologist Milan Colic has explored the application of principles from narrative therapy to clients' lucid dreams, to reduce the impact not only of nightmares during sleep but also depression, self-mutilation, and other problems in waking life. Colic found that therapeutic conversations could reduce the distressing content of dreams, while understandings about life—and even characters—from lucid dreams could be applied to their lives with marked therapeutic benefits.
Psychotherapists have applied lucid dreaming as a part of therapy. Studies have shown that, by inducing a lucid dream, recurrent nightmares can be alleviated. It is unclear whether this alleviation is due to lucidity or the ability to alter the dream itself. A 2006 study performed by Victor Spoormaker and Van den Bout evaluated the validity of lucid dreaming treatment (LDT) in chronic nightmare sufferers. LDT is composed of exposure, mastery and lucidity exercises. Results of lucid dreaming treatment revealed that the nightmare frequency of the treatment groups had decreased. In another study, Spoormaker, Van den Bout, and Meijer (2003) investigated lucid dreaming treatment for nightmares by testing eight subjects who received a one-hour individual session, which consisted of lucid dreaming exercises. The results of the study revealed that the nightmare frequency had decreased and the sleep quality had slightly increased.
Holzinger, Klösch, and Saletu managed a psychotherapy study under the working name of 'Cognition during dreaming – a therapeutic intervention in nightmares', which included 40 subjects, men and women, 18–50 years old, whose quality of life was significantly impaired by nightmares. The test subjects were administered Gestalt group therapy, and 24 of them were also taught by Holzinger to enter the state of lucid dreaming, purposefully, in order to change the course of their nightmares. The subjects then reported that the prevalence of their nightmares had diminished from 2–3 times a week to 2–3 times per month.
In her book "The Committee of Sleep", Deirdre Barrett describes how some experienced lucid dreamers have learned to remember specific practical goals such as artists looking for inspiration seeking a show of their own work once they become lucid or computer programmers looking for a screen with their desired code. However, most of these dreamers had many experiences of failing to recall waking objectives before gaining this level of control.
"Exploring the World of Lucid Dreaming" by Stephen LaBerge and Howard Rheingold (1990) discusses creativity within dreams and lucid dreams, including testimonials from a number of people who claim they have used the practice of lucid dreaming to help them solve a number of creative issues, from an aspiring parent thinking of potential baby names to a surgeon practicing surgical techniques. The authors discuss how creativity in dreams could stem from "conscious access to the contents of our unconscious minds"; access to "tacit knowledge" - the things we know but can't explain, or things we know but are unaware that we know.
Films like "Dreamscape" (1984), "Waking Life" (2001), "Paprika" (2006), "Inception" (2010), "Lucid Dream" (2017) and "118" (2019) refer to lucid dreaming. The 1999 hit song "Higher" by American rock band Creed was directly inspired by lead singer Scott Stapp's lucid dreaming experience.
Though lucid dreaming can be beneficial to a number of aspects of life, some risks have been suggested. Those who have never had a lucid dream may not understand what is happening when they experience one for the first time. Individuals who experience lucid dreams could begin to feel isolated from others due to feeling different. It could become more difficult over time to wake up from a lucid dream. Someone struggling with certain mental illnesses could find it hard to tell the difference between reality and the dream.
Long term risks with lucid dreaming have not been extensively studied.
Some people experience something like sleep paralysis, which is a state in between dreaming and waking. A person experiencing sleep paralysis cannot move, is aware that they are awake, and yet may still be experiencing hallucinations from their dream. A report in the journal "Consciousness and Cognition" identifies three common types of hallucinations: an intruder in the room, a crushing feeling on the chest or back, and a feeling of flying or levitating. Sleep paralysis is relatively uncommon, with about 7.6% of the general population having experienced it at least once.
Notes
|
https://en.wikipedia.org/wiki?curid=18286
|
Light-emitting diode
A light-emitting diode (LED) is a semiconductor light source that emits light when current flows through it. Electrons in the semiconductor recombine with electron holes, releasing energy in the form of photons. The color of the light (corresponding to the energy of the photons) is determined by the energy required for electrons to cross the band gap of the semiconductor. White light is obtained by using multiple semiconductors or a layer of light-emitting phosphor on the semiconductor device.
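The band-gap relationship can be made concrete with the photon-energy formula λ = hc/E. The short sketch below evaluates it for a few LED materials; the band-gap energies are typical textbook values assumed here, not figures from this article.

```python
# Sketch: emitted wavelength from the semiconductor band gap, lambda = h*c / E_g.
# The band-gap energies below are typical textbook values for LED materials.
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

for material, eg_ev in (("GaAs (infrared)", 1.42),
                        ("GaAsP (red)", 1.9),
                        ("InGaN (blue)", 2.7)):
    wavelength_nm = H * C / (eg_ev * EV) * 1e9
    print(f"{material}: Eg = {eg_ev} eV -> ~{wavelength_nm:.0f} nm")
```

The roughly 870 nm result for GaAs is consistent with the near-infrared emission of the earliest practical LEDs described in the history that follows.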
Appearing as practical electronic components in 1962, the earliest LEDs emitted low-intensity infrared (IR) light. Infrared LEDs are used in remote-control circuits, such as those used with a wide variety of consumer electronics. The first visible-light LEDs were of low intensity and limited to red. Modern LEDs are available across the visible, ultraviolet (UV), and infrared wavelengths, with high light output.
Early LEDs were often used as indicator lamps, replacing small incandescent bulbs, and in seven-segment displays. Recent developments have produced high-output white light LEDs suitable for room and outdoor area lighting. LEDs have led to new displays and sensors, while their high switching rates are useful in advanced communications technology.
LEDs have many advantages over incandescent light sources, including lower energy consumption, longer lifetime, improved physical robustness, smaller size, and faster switching. LEDs are used in applications as diverse as aviation lighting, automotive headlamps, advertising, general lighting, traffic signals, camera flashes, lighted wallpaper, horticultural grow lights, and medical devices.
Unlike a laser, the light emitted from an LED is neither spectrally coherent nor even highly monochromatic. However, its spectrum is sufficiently narrow that it appears to the human eye as a pure (saturated) color. Also unlike most lasers, its radiation is not spatially coherent, so it cannot approach the very high brightnesses characteristic of lasers.
Electroluminescence as a phenomenon was discovered in 1907 by the British experimenter H. J. Round of Marconi Labs, using a crystal of silicon carbide and a cat's-whisker detector. Russian inventor Oleg Losev reported creation of the first LED in 1927. His research was distributed in Soviet, German and British scientific journals, but no practical use was made of the discovery for several decades.
In 1936, Georges Destriau observed that electroluminescence could be produced when zinc sulphide (ZnS) powder is suspended in an insulator and an alternating electrical field is applied to it. In his publications, Destriau often referred to luminescence as Losev-Light. Destriau worked in the laboratories of Madame Marie Curie, also an early pioneer in the field of luminescence with research on radium.
Hungarian Zoltán Bay together with György Szigeti pre-empted LED lighting in Hungary in 1939 by patenting a lighting device based on SiC, with an option on boron carbide, that emitted white, yellowish white, or greenish white depending on impurities present.
Kurt Lehovec, Carl Accardo, and Edward Jamgochian explained these first LEDs in 1951 using an apparatus employing SiC crystals with a current source of a battery or a pulse generator, and with a comparison to a variant, pure crystal, in 1953.
Rubin Braunstein of the Radio Corporation of America reported on infrared emission from gallium arsenide (GaAs) and other semiconductor alloys in 1955. Braunstein observed infrared emission generated by simple diode structures using gallium antimonide (GaSb), GaAs, indium phosphide (InP), and silicon-germanium (SiGe) alloys at room temperature and at 77 kelvins.
In 1957, Braunstein further demonstrated that the rudimentary devices could be used for non-radio communication across a short distance. As noted by Kroemer, Braunstein "…had set up a simple optical communications link: Music emerging from a record player was used via suitable electronics to modulate the forward current of a GaAs diode. The emitted light was detected by a PbS diode some distance away. This signal was fed into an audio amplifier and played back by a loudspeaker. Intercepting the beam stopped the music. We had a great deal of fun playing with this setup." This setup presaged the use of LEDs for optical communication applications.
In September 1961, while working at Texas Instruments in Dallas, Texas, James R. Biard and Gary Pittman discovered near-infrared (900 nm) light emission from a tunnel diode they had constructed on a GaAs substrate. By October 1961, they had demonstrated efficient light emission and signal coupling between a GaAs p-n junction light emitter and an electrically isolated semiconductor photodetector. On August 8, 1962, Biard and Pittman filed a patent titled "Semiconductor Radiant Diode" based on their findings, which described a zinc-diffused p–n junction LED with a spaced cathode contact to allow for efficient emission of infrared light under forward bias. After establishing the priority of their work based on engineering notebooks predating submissions from G.E. Labs, RCA Research Labs, IBM Research Labs, Bell Labs, and Lincoln Lab at MIT, the U.S. patent office issued the two inventors the patent for the GaAs infrared light-emitting diode (U.S. Patent US3293513), the first practical LED. Immediately after filing the patent, Texas Instruments (TI) began a project to manufacture infrared diodes. In October 1962, TI announced the first commercial LED product (the SNX-100), which employed a pure GaAs crystal to emit an 890 nm light output. In October 1963, TI announced the first commercial hemispherical LED, the SNX-110.
The first visible-spectrum (red) LED was developed in 1962 by Nick Holonyak, Jr. while working at General Electric. Holonyak first reported his LED in the journal "Applied Physics Letters" on December 1, 1962. M. George Craford, a former graduate student of Holonyak, invented the first yellow LED and improved the brightness of red and red-orange LEDs by a factor of ten in 1972. In 1976, T. P. Pearsall designed the first high-brightness, high-efficiency LEDs for optical fiber telecommunications by inventing new semiconductor materials specifically adapted to optical fiber transmission wavelengths.
The first commercial visible-wavelength LEDs were commonly used as replacements for incandescent and neon indicator lamps, and in seven-segment displays, first in expensive equipment such as laboratory and electronics test equipment, then later in such appliances as calculators, TVs, radios, telephones, as well as watches (see list of signal uses).
Until 1968, visible and infrared LEDs were extremely costly, on the order of US$200 per unit, and so had little practical use.
Hewlett-Packard (HP) was engaged in research and development (R&D) on practical LEDs between 1962 and 1968, by a research team under Howard C. Borden, Gerald P. Pighini and Mohamed M. Atalla at HP Associates and HP Labs. During this time, Atalla launched a material science investigation program on gallium arsenide (GaAs), gallium arsenide phosphide (GaAsP) and indium arsenide (InAs) devices at HP, and they collaborated with Monsanto Company on developing the first usable LED products. The first usable LED products were HP's LED display and Monsanto's LED indicator lamp, both launched in 1968. Monsanto was the first organization to mass-produce visible LEDs, using GaAsP in 1968 to produce red LEDs suitable for indicators. Monsanto had previously offered to supply HP with GaAsP, but HP decided to grow its own GaAsP. In February 1969, Hewlett-Packard introduced the HP Model 5082-7000 Numeric Indicator, the first LED device to use integrated circuit (integrated LED circuit) technology. It was the first intelligent LED display, and was a revolution in digital display technology, replacing the Nixie tube and becoming the basis for later LED displays.
Atalla left HP and joined Fairchild Semiconductor in 1969. He was the vice president and general manager of the Microwave & Optoelectronics division, from its inception in May 1969 up until November 1971. He continued his work on LEDs, proposing they could be used for indicator lights and optical readers in 1971. In the 1970s, commercially successful LED devices at less than five cents each were produced by Fairchild Optoelectronics. These devices employed compound semiconductor chips fabricated with the planar process (developed by Jean Hoerni, based on Atalla's surface passivation method). The combination of planar processing for chip fabrication and innovative packaging methods enabled the team at Fairchild led by optoelectronics pioneer Thomas Brandt to achieve the needed cost reductions. LED producers continue to use these methods.
The early red LEDs were bright enough only for use as indicators, as the light output was not enough to illuminate an area. Readouts in calculators were so small that plastic lenses were built over each digit to make them legible. Later, other colors became widely available and appeared in appliances and equipment.
Early LEDs were packaged in metal cases similar to those of transistors, with a glass window or lens to let the light out. Modern indicator LEDs are packaged in transparent molded plastic cases, tubular or rectangular in shape, and often tinted to match the device color. Infrared devices may be dyed to block visible light. More complex packages have been adapted for efficient heat dissipation in high-power LEDs. Surface-mounted LEDs further reduce the package size. LEDs intended for use with fiber optic cables may be provided with an optical connector.
The first blue-violet LED using magnesium-doped gallium nitride was made at Stanford University in 1972 by Herb Maruska and Wally Rhines, doctoral students in materials science and engineering. At the time Maruska was on leave from RCA Laboratories, where he collaborated with Jacques Pankove on related work. In 1971, the year after Maruska left for Stanford, his RCA colleagues Pankove and Ed Miller demonstrated the first blue electroluminescence from zinc-doped gallium nitride, though the subsequent device Pankove and Miller built, the first actual gallium nitride light-emitting diode, emitted green light. In 1974 the U.S. Patent Office awarded Maruska, Rhines and Stanford professor David Stevenson a patent for their work in 1972 (U.S. Patent US3819974 A). Today, magnesium-doping of gallium nitride remains the basis for all commercial blue LEDs and laser diodes. In the early 1970s, these devices were too dim for practical use, and research into gallium nitride devices slowed.
In August 1989, Cree introduced the first commercially available blue LED based on the indirect bandgap semiconductor, silicon carbide (SiC). SiC LEDs had very low efficiency, no more than about 0.03%, but did emit in the blue portion of the visible light spectrum.
In the late 1980s, key breakthroughs in GaN epitaxial growth and p-type doping ushered in the modern era of GaN-based optoelectronic devices. Building upon this foundation, Theodore Moustakas at Boston University patented a method for producing high-brightness blue LEDs using a new two-step process in 1991.
Two years later, in 1993, high-brightness blue LEDs were demonstrated by Shuji Nakamura of Nichia Corporation using a gallium nitride growth process. In parallel, Isamu Akasaki and Hiroshi Amano in Nagoya were working on developing the key GaN deposition process on sapphire substrates and demonstrating p-type doping of GaN. This new development revolutionized LED lighting, making high-power blue light sources practical and leading to the development of technologies like Blu-ray.
Nakamura was awarded the 2006 Millennium Technology Prize for his invention.
Nakamura, Hiroshi Amano and Isamu Akasaki were awarded the Nobel Prize in Physics in 2014 for the invention of the blue LED. In 2015, a US court ruled that three companies had infringed Moustakas's prior patent, and ordered them to pay licensing fees of not less than US$13 million.
In 1995, Alberto Barbieri at the Cardiff University Laboratory (GB) investigated the efficiency and reliability of high-brightness LEDs and demonstrated a "transparent contact" LED using indium tin oxide (ITO) on (AlGaInP/GaAs).
In 2001 and 2002, processes for growing gallium nitride (GaN) LEDs on silicon were successfully demonstrated. In January 2012, Osram demonstrated high-power InGaN LEDs grown on silicon substrates commercially, and GaN-on-silicon LEDs are in production at Plessey Semiconductors. As of 2017, some manufacturers use SiC as the substrate for LED production, but sapphire is more common, as its properties are the most similar to those of gallium nitride, reducing the need for patterning the sapphire wafer (patterned wafers are known as epi wafers). Samsung, the University of Cambridge, and Toshiba have performed research into GaN-on-Si LEDs; Toshiba has stopped, possibly because of low yields. Some groups pursue direct epitaxy, which is difficult on silicon, while others, such as the University of Cambridge, use multi-layer structures to reduce lattice mismatch and differences in thermal expansion, avoiding cracking of the LED chip at high temperatures (e.g. during manufacturing), reducing heat generation, and increasing luminous efficiency. Epitaxy (or patterned sapphire) can be carried out with nanoimprint lithography. GaN is often deposited by metalorganic vapour-phase epitaxy (MOCVD), and lift-off processes are also used.
Even though white light can be created using individual red, green and blue LEDs, this results in poor color rendering, since only three narrow bands of wavelengths of light are being emitted. The attainment of high-efficiency blue LEDs was quickly followed by the development of the first white LED. In this device a Y3Al5O12:Ce (known as "YAG" or Ce:YAG) cerium-doped phosphor coating produces yellow light through fluorescence. The combination of that yellow with remaining blue light appears white to the eye. Using different phosphors produces green and red light through fluorescence. The resulting mixture of red, green and blue is perceived as white light, with improved color rendering compared to the wavelengths from the blue LED/YAG phosphor combination.
The first white LEDs were expensive and inefficient. However, the light output of LEDs has increased exponentially. The latest research and development has been propagated by Japanese manufacturers such as Panasonic and Nichia, and by Korean and Chinese manufacturers such as Samsung and Kingsun, among others. This trend of increasing output has been called Haitz's law after Dr. Roland Haitz.
Light output and efficiency of blue and near-ultraviolet LEDs rose and the cost of reliable devices fell. This led to relatively high-power white-light LEDs for illumination, which are replacing incandescent and fluorescent lighting.
Experimental white LEDs have been demonstrated to produce 303 lumens per watt of electricity (lm/W); some can last up to 100,000 hours. Commercially available LEDs have an efficiency of up to 223 lm/W. Compared to incandescent bulbs, this is a huge increase in electrical efficiency, and even though LEDs are more expensive to purchase, their overall lifetime cost is significantly lower than that of incandescent bulbs.
The LED chip is encapsulated inside a small, plastic, white mold. It can be encapsulated using resin (polyurethane-based), silicone, or epoxy containing (powdered) cerium-doped YAG phosphor. After the solvents are allowed to evaporate, the LEDs are often tested and placed on tapes for SMT placement equipment for use in LED light bulb production. Encapsulation is performed after probing, dicing, die transfer from wafer to package, and wire bonding or flip-chip mounting, perhaps using indium tin oxide (ITO), a transparent electrical conductor. In this case, the bond wire(s) are attached to the ITO film that has been deposited on the LEDs.
Some "remote phosphor" LED light bulbs use a single plastic cover with YAG phosphor for several blue LEDs, instead of using phosphor coatings on single chip white LEDs.
In a light emitting diode, the recombination of electrons and electron holes in a semiconductor produces light (be it infrared, visible or UV), a process called "electroluminescence". The wavelength of the light depends on the energy band gap of the semiconductors used. Since these materials have a high index of refraction, design features of the devices such as special optical coatings and die shape are required to efficiently emit light.
By selection of different semiconductor materials, single-color LEDs can be made that emit light in a narrow band of wavelengths from near-infrared through the visible spectrum and into the ultraviolet range. As the wavelengths become shorter, because of the larger band gap of these semiconductors, the operating voltage of the LED increases.
Blue LEDs have an active region consisting of one or more InGaN quantum wells sandwiched between thicker layers of GaN, called cladding layers. By varying the relative In/Ga fraction in the InGaN quantum wells, the light emission can in theory be varied from violet to amber.
Aluminium gallium nitride (AlGaN) of varying Al/Ga fraction can be used to manufacture the cladding and quantum well layers for ultraviolet LEDs, but these devices have not yet reached the level of efficiency and technological maturity of InGaN/GaN blue/green devices. If un-alloyed GaN is used in this case to form the active quantum well layers, the device emits near-ultraviolet light with a peak wavelength centred around 365 nm. Green LEDs manufactured from the InGaN/GaN system are far more efficient and brighter than green LEDs produced with non-nitride material systems, but practical devices still exhibit efficiency too low for high-brightness applications.
With AlGaN and AlGaInN, even shorter wavelengths are achievable. Near-UV emitters at wavelengths around 360–395 nm are already cheap and often encountered, for example, as black light lamp replacements for inspection of anti-counterfeiting UV watermarks in documents and bank notes, and for UV curing. While substantially more expensive, shorter-wavelength diodes are commercially available for wavelengths down to 240 nm. As the photosensitivity of microorganisms approximately matches the absorption spectrum of DNA, with a peak at about 260 nm, UV LEDs emitting at 250–270 nm are expected in prospective disinfection and sterilization devices. Recent research has shown that commercially available UVA LEDs (365 nm) are already effective disinfection and sterilization devices.
UV-C wavelengths were obtained in laboratories using aluminium nitride (210 nm), boron nitride (215 nm) and diamond (235 nm).
There are two primary ways of producing white light-emitting diodes. One is to use individual LEDs that emit the three primary colors (red, green, and blue) and then mix them to form white light. The other is to use a phosphor material to convert monochromatic light from a blue or UV LED to broad-spectrum white light, similar to a fluorescent lamp. The yellow phosphor consists of cerium-doped YAG crystals suspended in the package or coated on the LED. This YAG phosphor causes white LEDs to look yellow when off, and the space between the crystals allows some blue light to pass through. Alternatively, white LEDs may use other phosphors, like magnesium-doped potassium fluorosilicate (PFS), or other engineered phosphors. PFS assists in red light generation and is used in conjunction with conventional Ce:YAG phosphor. In LEDs with PFS phosphor, some blue light passes through the phosphors, the Ce:YAG phosphor converts blue light to green and red light, and the PFS phosphor converts blue light to red light. The color temperature of the LED can be controlled by changing the concentration of the phosphors.
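The arithmetic behind the phosphor approach (and behind why a little blue plus a lot of yellow looks white) can be sketched by additively mixing two sources in CIE 1931 XYZ space. The chromaticity coordinates and luminance ratio below are approximate values assumed for illustration, not measurements of any particular device:

```python
# Sketch of additive color mixing for a phosphor-converted white LED:
# blue pump light plus yellow phosphor emission, mixed in CIE 1931 XYZ space.
# The xy chromaticities and luminance ratio are approximate, for illustration.

def xy_to_XYZ(x, y, Y):
    """Tristimulus values from chromaticity (x, y) and luminance Y."""
    X = x * Y / y
    Z = (1 - x - y) * Y / y
    return X, Y, Z

def mix(sources):
    """Additively mix (x, y, Y) sources; return the mixture's chromaticity."""
    Xs, Ys, Zs = 0.0, 0.0, 0.0
    for x, y, Y in sources:
        X, Y_, Z = xy_to_XYZ(x, y, Y)
        Xs, Ys, Zs = Xs + X, Ys + Y_, Zs + Z
    s = Xs + Ys + Zs
    return Xs / s, Ys / s

# Roughly: a royal-blue InGaN pump and yellow Ce:YAG emission. The phosphor
# carries nearly all of the luminance, because the eye is far more sensitive
# to yellow than to blue.
blue   = (0.157, 0.018, 1.0)    # x, y, relative luminance
yellow = (0.444, 0.555, 47.0)

print("mixture chromaticity (x, y):", mix([blue, yellow]))
```

With the phosphor carrying nearly all of the luminance, the mixture lands near the white point, at roughly (0.33, 0.34).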
The 'whiteness' of the light produced is engineered to suit the human eye. Because of metamerism, it is possible to have quite different spectra that appear white. However, the appearance of objects illuminated by that light may vary as the spectrum varies. This is the issue of color rendition, quite separate from color temperature. An orange or cyan object could appear with the wrong color and much darker as the LED or phosphor does not emit the wavelength it reflects. The best color rendition LEDs use a mix of phosphors, resulting in less efficiency but better color rendering.
Mixing red, green, and blue sources to produce white light needs electronic circuits to control the blending of the colors. Since LEDs have slightly different emission patterns, the color balance may change depending on the angle of view, even if the RGB sources are in a single package, so RGB diodes are seldom used to produce white lighting. Nonetheless, this method has many applications because of the flexibility of mixing different colors, and in principle, this mechanism also has higher quantum efficiency in producing white light.
There are several types of multicolor white LEDs: di-, tri-, and tetrachromatic white LEDs. Key factors among these different methods include color stability, color rendering capability, and luminous efficacy. Often, higher efficacy means lower color rendering, presenting a trade-off between the luminous efficacy and color rendering. For example, dichromatic white LEDs have the best luminous efficacy (120 lm/W) but the lowest color rendering capability. Conversely, tetrachromatic white LEDs have excellent color rendering capability but often poor luminous efficacy. Trichromatic white LEDs are in between, having both good luminous efficacy (>70 lm/W) and fair color rendering capability.
One of the challenges is the development of more efficient green LEDs. The theoretical maximum for green LEDs is 683 lumens per watt but as of 2010 few green LEDs exceed even 100 lumens per watt. The blue and red LEDs approach their theoretical limits.
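The 683 lm/W ceiling is the luminous efficacy of monochromatic light at 555 nm, where the eye's photopic sensitivity peaks. The sketch below uses a crude Gaussian stand-in for the CIE luminosity function (the real curve is tabulated, not Gaussian) to show why a green emitter sits at the theoretical maximum while blue and red emitters cannot approach it:

```python
import math

# Luminous efficacy of radiation (lm/W) for a narrow-band source, using a
# crude Gaussian approximation of the CIE photopic luminosity function.
# The true peak is 683 lm/W at 555 nm; the width here is a rough fit.

def v_lambda(nm: float) -> float:
    """Approximate photopic sensitivity, peak 1.0 at 555 nm (illustrative)."""
    return math.exp(-0.5 * ((nm - 555.0) / 45.0) ** 2)

def efficacy_lm_per_watt(nm: float) -> float:
    return 683.0 * v_lambda(nm)

for nm in (450, 530, 555, 650):
    print(f"{nm} nm: ~{efficacy_lm_per_watt(nm):.0f} lm/W of emitted light")
```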
Multicolor LEDs also offer a new means to form light of different colors. Most perceivable colors can be formed by mixing different amounts of three primary colors. This allows precise dynamic color control. However, this type of LED's emission power decays exponentially with rising temperature, resulting in a substantial change in color stability. Such problems inhibit industrial use. Multicolor LEDs without phosphors cannot provide good color rendering because each LED is a narrowband source. LEDs without phosphor, while a poorer solution for general lighting, are the best solution for displays, either backlighting of LCDs or direct LED-based pixels.
Dimming a multicolor LED source to match the characteristics of incandescent lamps is difficult because manufacturing variations, age, and temperature change the actual color value output. Emulating the appearance of dimming incandescent lamps may require a feedback system with a color sensor to actively monitor and control the color.
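A minimal sketch of such a feedback loop follows; read_color_sensor() is a hypothetical stand-in for real sensor hardware, and the drift factors merely simulate channel-dependent temperature or aging effects:

```python
# Minimal sketch of closed-loop color control for an RGB LED source:
# a proportional controller nudges each channel's drive level toward a
# target sensor reading. read_color_sensor() is a hypothetical stand-in.

TARGET = (0.85, 0.80, 0.70)   # desired (R, G, B) sensor reading
GAIN = 0.3                    # proportional gain; kept small to avoid oscillation

def read_color_sensor(drive):
    # Hypothetical plant model: each channel's output drifts by a different
    # factor (e.g. from temperature); a real system would read hardware here.
    drift = (0.9, 1.05, 1.0)
    return tuple(d * k for d, k in zip(drive, drift))

drive = [0.5, 0.5, 0.5]       # initial PWM duty cycles, 0..1
for step in range(50):
    reading = read_color_sensor(drive)
    for i in range(3):
        error = TARGET[i] - reading[i]
        drive[i] = min(1.0, max(0.0, drive[i] + GAIN * error))

print("settled duty cycles:", [round(d, 3) for d in drive])
```

A proportional controller like this slowly re-balances the duty cycles as the channels drift; a production design would add integral action and gamut limits.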
This method involves coating LEDs of one color (mostly blue LEDs made of InGaN) with phosphors of different colors to form white light; the resultant LEDs are called phosphor-based or phosphor-converted white LEDs (pcLEDs). A fraction of the blue light undergoes the Stokes shift, which transforms it from shorter wavelengths to longer. Depending on the original LED's color, various color phosphors are used. Using several phosphor layers of distinct colors broadens the emitted spectrum, effectively raising the color rendering index (CRI).
Phosphor-based LEDs have efficiency losses due to heat loss from the Stokes shift and also other phosphor-related issues. Their luminous efficacies compared to normal LEDs depend on the spectral distribution of the resultant light output and the original wavelength of the LED itself. For example, the luminous efficacy of a typical YAG yellow phosphor based white LED ranges from 3 to 5 times the luminous efficacy of the original blue LED because of the human eye's greater sensitivity to yellow than to blue (as modeled in the luminosity function). Due to the simplicity of manufacturing, the phosphor method is still the most popular method for making high-intensity white LEDs. The design and production of a light source or light fixture using a monochrome emitter with phosphor conversion is simpler and cheaper than a complex RGB system, and the majority of high-intensity white LEDs presently on the market are manufactured using phosphor light conversion.
Among the challenges being faced to improve the efficiency of LED-based white light sources is the development of more efficient phosphors. As of 2010, the most efficient yellow phosphor is still the YAG phosphor, with less than 10% Stokes shift loss. Losses attributable to internal optical losses due to re-absorption in the LED chip and in the LED packaging itself typically account for another 10% to 30% of efficiency loss. Currently, in the area of phosphor LED development, much effort is being spent on optimizing these devices for higher light output and higher operation temperatures. For instance, the efficiency can be raised by adapting better package design or by using a more suitable type of phosphor. A conformal coating process is frequently used to address the issue of varying phosphor thickness.
Some phosphor-based white LEDs encapsulate InGaN blue LEDs inside phosphor-coated epoxy. Alternatively, the LED might be paired with a remote phosphor, a preformed polycarbonate piece coated with the phosphor material. Remote phosphors provide more diffuse light, which is desirable for many applications. Remote phosphor designs are also more tolerant of variations in the LED emissions spectrum. A common yellow phosphor material is cerium-doped yttrium aluminium garnet (Ce3+:YAG).
White LEDs can also be made by coating near-ultraviolet (NUV) LEDs with a mixture of high-efficiency europium-based phosphors that emit red and blue, plus copper and aluminium-doped zinc sulfide (ZnS:Cu, Al) that emits green. This is a method analogous to the way fluorescent lamps work. This method is less efficient than blue LEDs with YAG:Ce phosphor, as the Stokes shift is larger, so more energy is converted to heat, but yields light with better spectral characteristics, which render color better. Due to the higher radiative output of the ultraviolet LEDs than of the blue ones, both methods offer comparable brightness. A concern is that UV light may leak from a malfunctioning light source and cause harm to human eyes or skin.
Another method for producing experimental white-light LEDs used no phosphors at all. It was based on homoepitaxially grown zinc selenide (ZnSe) on a ZnSe substrate that simultaneously emitted blue light from its active region and yellow light from the substrate.
A new style of wafers composed of gallium-nitride-on-silicon (GaN-on-Si) is being used to produce white LEDs using 200-mm silicon wafers. This avoids the typical costly sapphire substrate in relatively small 100- or 150-mm wafer sizes. The sapphire apparatus must be coupled with a mirror-like collector to reflect light that would otherwise be wasted. It was predicted that by 2020, 40% of all GaN LEDs would be made with GaN-on-Si. Manufacturing large sapphire material is difficult, while large silicon material is cheaper and more abundant. For LED companies, shifting from sapphire to silicon should require minimal investment.
In an organic light-emitting diode (OLED), the electroluminescent material composing the emissive layer of the diode is an organic compound. The organic material is electrically conductive due to the delocalization of pi electrons caused by conjugation over all or part of the molecule, and the material therefore functions as an organic semiconductor. The organic materials can be small organic molecules in a crystalline phase, or polymers.
The potential advantages of OLEDs include thin, low-cost displays with a low driving voltage, wide viewing angle, and high contrast and color gamut. Polymer LEDs have the added benefit of printable and flexible displays. OLEDs have been used to make visual displays for portable electronic devices such as cellphones, digital cameras, lighting and televisions.
LEDs are made in different packages for different applications. A single or a few LED junctions may be packed in one miniature device for use as an indicator or pilot lamp. An LED array may include controlling circuits within the same package, which may range from a simple resistor to a blinking or color-changing control, or an addressable controller for RGB devices. Higher-powered white-emitting devices are mounted on heat sinks and are used for illumination. Alphanumeric displays in dot-matrix or bar formats are widely available. Special packages permit connection of LEDs to optical fibers for high-speed data communication links.
These are mostly single-die LEDs used as indicators, and they come in various sizes from 2 mm to 8 mm, in through-hole and surface-mount packages. Typical current ratings range from around 1 mA to above 20 mA. Multiple LED dies attached to a flexible backing tape form an LED strip light.
Common package shapes include round, with a domed or flat top, rectangular with a flat top (as used in bar-graph displays), and triangular or square with a flat top. The encapsulation may also be clear or tinted to improve contrast and viewing angle. Infrared devices may have a black tint to block visible light while passing infrared radiation.
Ultra-high-output LEDs are designed for viewing in direct sunlight.
5 V and 12 V LEDs are ordinary miniature LEDs that incorporate a series resistor for direct connection to a 5 V or 12 V supply.
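For ordinary LEDs without a built-in resistor, the series resistor is sized with Ohm's law across the voltage left over after the LED's forward drop. A small sketch, assuming a typical 2 V forward drop and a 10 mA operating current for illustration:

```python
# Series resistor sizing for an indicator LED on a fixed supply:
# R = (V_supply - V_forward) / I_forward. Values are typical; check the datasheet.

def series_resistor_ohms(v_supply, v_forward, i_forward_a):
    return (v_supply - v_forward) / i_forward_a

for v_supply in (5.0, 12.0):
    r = series_resistor_ohms(v_supply, v_forward=2.0, i_forward_a=0.010)
    p = 0.010 ** 2 * r   # power dissipated in the resistor
    print(f"{v_supply} V supply: R = {r:.0f} ohm, resistor dissipates {p*1000:.0f} mW")
```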
High-power LEDs (HP-LEDs) or high-output LEDs (HO-LEDs) can be driven at currents from hundreds of mA to more than an ampere, compared with the tens of mA for other LEDs. Some can emit over a thousand lumens. LED power densities up to 300 W/cm2 have been achieved. Since overheating is destructive, the HP-LEDs must be mounted on a heat sink to allow for heat dissipation. If the heat from an HP-LED is not removed, the device fails in seconds. One HP-LED can often replace an incandescent bulb in a flashlight, or be set in an array to form a powerful LED lamp.
Some well-known HP-LEDs in this category are the Nichia 19 series, Lumileds Rebel LED, Osram Opto Semiconductors Golden Dragon, and Cree XLamp. As of September 2009, some HP-LEDs manufactured by Cree exceed 105 lm/W.
Examples of Haitz's law (which predicts an exponential rise in the light output and efficacy of LEDs over time) are the Cree XP-G series LED, which achieved 105 lm/W in 2009, and the Nichia 19 series, with a typical efficacy of 140 lm/W, released in 2010.
LEDs developed by Seoul Semiconductor can operate on AC power without a DC converter. For each half-cycle, part of the LED emits light and part is dark, and this is reversed during the next half-cycle. The efficacy of this type of HP-LED is typically 40 lm/W. A large number of LED elements in series may be able to operate directly from line voltage. In 2009, Seoul Semiconductor released a high DC voltage LED named 'Acrich MJT', capable of being driven from AC power with a simple controlling circuit. The low power dissipation of these LEDs affords them more flexibility than the original AC LED design.
Flashing LEDs are used as attention-seeking indicators without requiring external electronics. Flashing LEDs resemble standard LEDs, but they contain an integrated voltage regulator and a multivibrator circuit that causes the LED to flash with a typical period of one second. In diffused-lens LEDs, this circuit is visible as a small black dot. Most flashing LEDs emit light of one color, but more sophisticated devices can flash between multiple colors and even fade through a color sequence using RGB color mixing.
Bi-color LEDs contain two different LED emitters in one case. There are two types of these. One type consists of two dies connected to the same two leads antiparallel to each other. Current flow in one direction emits one color, and current in the opposite direction emits the other color. The other type consists of two dies with separate leads for both dies and another lead for the common anode or cathode, so that they can be controlled independently. The most common bi-color combination is red/traditional green; however, other available combinations include amber/traditional green, red/pure green, red/blue, and blue/pure green.
Tri-color LEDs contain three different LED emitters in one case. Each emitter is connected to a separate lead so they can be controlled independently. A four-lead arrangement is typical with one common lead (anode or cathode) and an additional lead for each color. Others, however, have only two leads (positive and negative) and have a built-in electronic controller.
RGB LEDs consist of one red, one green, and one blue LED. By independently adjusting each of the three, RGB LEDs are capable of producing a wide color gamut. Unlike dedicated-color LEDs, however, these do not produce pure wavelengths. Modules may not be optimized for smooth color mixing.
Decorative-multicolor LEDs incorporate several emitters of different colors supplied by only two lead-out wires. Colors are switched internally by varying the supply voltage.
Alphanumeric LEDs are available in seven-segment, starburst, and dot-matrix format. Seven-segment displays handle all numbers and a limited set of letters. Starburst displays can display all letters. Dot-matrix displays typically use 5×7 pixels per character. Seven-segment LED displays were in widespread use in the 1970s and 1980s, but rising use of liquid crystal displays, with their lower power needs and greater display flexibility, has reduced the popularity of numeric and alphanumeric LED displays.
Digital RGB addressable LEDs contain their own "smart" control electronics. In addition to power and ground, these provide connections for data-in, data-out, and sometimes a clock or strobe signal. These are connected in a daisy chain. Data sent to the first LED of the chain can control the brightness and color of each LED independently of the others. They are used where a combination of maximum control and minimum visible electronics are needed such as strings for Christmas and LED matrices. Some even have refresh rates in the kHz range, allowing for basic video applications. These devices are known by their part number (WS2812 being common) or a brand name such as NeoPixel.
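The daisy-chain data format can be illustrated by laying out the byte stream for a short WS2812 chain, as in the sketch below. The 24-bit GRB framing is standard for that part; the precise sub-microsecond pulse timing that actually puts these bytes on the wire is omitted:

```python
# Sketch of the data layout for a WS2812-style addressable LED chain:
# 24 bits per LED, green-red-blue byte order, most significant bit first.
# Each LED latches the first 24 bits it sees and forwards the rest, so the
# first color in the list controls the first LED in the chain.

def ws2812_frame(colors_rgb):
    """Build the raw byte sequence for a list of (r, g, b) tuples, 0-255 each."""
    frame = bytearray()
    for r, g, b in colors_rgb:
        frame += bytes((g, r, b))   # WS2812 expects GRB order
    return bytes(frame)

pixels = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]  # red, green, blue
print(ws2812_frame(pixels).hex(" "))
# -> 00 ff 00 ff 00 00 00 00 ff
```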
An LED filament consists of multiple LED chips connected in series on a common longitudinal substrate that forms a thin rod reminiscent of a traditional incandescent filament. These are being used as a low-cost decorative alternative to traditional light bulbs that are being phased out in many countries. The filaments use a rather high voltage, allowing them to work efficiently with mains voltages. Often a simple rectifier and capacitive current limiting are employed to create a low-cost replacement for a traditional light bulb without the complexity of the low-voltage, high-current converter that single-die LEDs need. Usually, they are packaged in a bulb similar to the lamps they were designed to replace, and filled with inert gas to remove heat efficiently.
Surface-mounted LEDs are frequently produced in chip on board (COB) arrays, allowing better heat dissipation than with a single LED of comparable luminous output. The LEDs can be arranged around a cylinder, and are called "corn cob lights" because of the rows of yellow LEDs.
The current in an LED or other diodes rises exponentially with the applied voltage (see Shockley diode equation), so a small change in voltage can cause a large change in current. Current through the LED must be regulated by an external circuit such as a constant current source to prevent damage. Since most common power supplies are (nearly) constant-voltage sources, LED fixtures must include a power converter, or at least a current-limiting resistor. In some applications, the internal resistance of small batteries is sufficient to keep current within the LED rating.
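The exponential sensitivity is easy to see numerically. The sketch below evaluates the Shockley equation with illustrative parameter values, not taken from any particular device:

```python
import math

# The Shockley diode equation: current grows exponentially with voltage,
# which is why LEDs need current limiting rather than a fixed voltage.
# Parameter values are illustrative only.

I_S = 1e-18      # saturation current, A (illustrative)
N = 2.0          # ideality factor, typically 1-2 for LEDs
V_T = 0.02585    # thermal voltage at ~300 K, volts

def diode_current(v: float) -> float:
    return I_S * (math.exp(v / (N * V_T)) - 1.0)

for v in (1.8, 1.9, 2.0):
    print(f"{v:.1f} V -> {diode_current(v)*1000:.2f} mA")
```

Here a 0.2 V increase multiplies the current roughly fifty-fold, which is why a resistor or constant-current driver, rather than a fixed voltage, must set the operating point.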
An LED will light only when voltage is applied in the forward direction of the diode. No current flows and no light is emitted if voltage is applied in the reverse direction. If the reverse voltage exceeds the breakdown voltage, a large current flows and the LED will be damaged. If the reverse current is sufficiently limited to avoid damage, the reverse-conducting LED is a useful noise diode.
Certain blue LEDs and cool-white LEDs can exceed safe limits of the so-called blue-light hazard as defined in eye safety specifications such as "ANSI/IESNA RP-27.1–05: Recommended Practice for Photobiological Safety for Lamp and Lamp Systems". One study showed no evidence of a risk in normal use at domestic illuminance, and that caution is only needed for particular occupational situations or for specific populations. In 2006, the International Electrotechnical Commission published "IEC 62471 Photobiological safety of lamps and lamp systems", replacing the application of early laser-oriented standards for classification of LED sources.
While LEDs have the advantage over fluorescent lamps of not containing mercury, they may contain other hazardous metals such as lead and arsenic.
In 2016 the American Medical Association (AMA) issued a statement concerning the possible adverse influence of blueish street lighting on the sleep-wake cycle of city-dwellers. Industry critics claim exposure levels are not high enough to have a noticeable effect.
LED packages are compact and are easily attached to printed circuit boards.
LED uses fall into four major categories: visual signals and indicators, illumination, data communication and signaling, and sensing or light detection.
The low energy consumption, low maintenance and small size of LEDs has led to uses as status indicators and displays on a variety of equipment and installations. Large-area LED displays are used as stadium displays, dynamic decorative displays, and dynamic message signs on freeways. Thin, lightweight message displays are used at airports and railway stations, and as destination displays for trains, buses, trams, and ferries.
One-color light is well suited for traffic lights and signals, exit signs, emergency vehicle lighting, ships' navigation lights, and LED-based Christmas lights.
Because of their long life, fast switching times, and visibility in broad daylight due to their high output and focus, LEDs have been used in automotive brake lights and turn signals. Their use in brake lights improves safety, because they reach full brightness about 0.1 second faster than an incandescent bulb, giving drivers behind more time to react. In a dual-intensity circuit (rear markers and brakes), if the LEDs are not pulsed at a fast enough frequency, they can create a phantom array, where ghost images of the LED appear when the eyes scan quickly across the array. White LED headlamps are beginning to appear. Using LEDs has styling advantages, because LEDs can form much thinner lights than incandescent lamps with parabolic reflectors.
Because low-output LEDs are relatively cheap, they are also used in many temporary applications such as glowsticks, throwies, and the photonic textile Lumalive. Artists have also used LEDs for LED art.
With the development of high-efficiency and high-power LEDs, it has become possible to use LEDs in lighting and illumination. To encourage the shift to LED lamps and other high-efficiency lighting, in 2008 the US Department of Energy created the L Prize competition. The Philips Lighting North America LED bulb won the first competition on August 3, 2011, after successfully completing 18 months of intensive field, lab, and product testing.
Efficient lighting is needed for sustainable architecture. As of 2011, some LED bulbs provide up to 150 lm/W, and even inexpensive low-end models typically exceed 50 lm/W, so that a 6-watt LED could achieve the same results as a standard 40-watt incandescent bulb. The lower heat output of LEDs also reduces demand on air-conditioning systems. Worldwide, LEDs are being rapidly adopted to displace less efficient sources such as incandescent lamps and CFLs, reducing electrical energy consumption and its associated emissions. Solar-powered LEDs are used as street lights and in architectural lighting.
The mechanical robustness and long lifetime of LEDs suit them to automotive lighting on cars, motorcycles, and bicycles. LED street lights are employed on poles and in parking garages. In 2007, the Italian village of Torraca was the first place to convert its street lighting to LEDs.
Cabin lighting on recent Airbus and Boeing jetliners uses LED lighting. LEDs are also being used in airport and heliport lighting. LED airport fixtures currently include medium-intensity runway lights, runway centerline lights, taxiway centerline and edge lights, guidance signs, and obstruction lighting.
LEDs are also used as a light source for DLP projectors, and to backlight LCD televisions (referred to as LED TVs) and laptop displays. RGB LEDs raise the color gamut by as much as 45%. Screens for TV and computer displays can be made thinner using LEDs for backlighting.
LEDs are small, durable and need little power, so they are used in handheld devices such as flashlights. LED strobe lights or camera flashes operate at a safe, low voltage, instead of the 250+ volts commonly found in xenon flashlamp-based lighting. This is especially useful in cameras on mobile phones, where space is at a premium and bulky voltage-raising circuitry is undesirable.
LEDs are used for infrared illumination in night vision uses including security cameras. A ring of LEDs around a video camera, aimed forward into a retroreflective background, allows chroma keying in video productions.
LEDs are used in mining operations, as cap lamps to provide light for miners. Research has been done to improve LEDs for mining, to reduce glare and to increase illumination, reducing risk of injury to the miners.
LEDs are increasingly finding uses in medical and educational applications, for example as mood enhancement. NASA has even sponsored research for the use of LEDs to promote health for astronauts.
Light can be used to transmit data and analog signals. For example, white lighting LEDs can be used in systems that assist people in navigating enclosed spaces while searching for rooms or objects.
Assistive listening devices in many theaters and similar spaces use arrays of infrared LEDs to send sound to listeners' receivers. Light-emitting diodes (as well as semiconductor lasers) are used to send data over many types of fiber optic cable, from digital audio over TOSLINK cables to the very high bandwidth fiber links that form the Internet backbone. For some time, computers were commonly equipped with IrDA interfaces, which allowed them to send and receive data to nearby machines via infrared.
Because LEDs can cycle on and off millions of times per second, very high data bandwidth can be achieved.
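The simplest such scheme is on-off keying: the LED is switched fully on for a 1 bit and off for a 0 bit. A minimal sketch of the encoding and decoding:

```python
# Minimal sketch of on-off keying (OOK), the simplest way an LED can carry
# data: the LED is on for a 1 bit and off for a 0 bit.

def encode_ook(data: bytes):
    """Yield one on/off level per bit, most significant bit first."""
    for byte in data:
        for bit in range(7, -1, -1):
            yield (byte >> bit) & 1

def decode_ook(levels):
    """Reassemble bytes from a sequence of on/off levels."""
    levels = list(levels)
    out = bytearray()
    for i in range(0, len(levels) - 7, 8):
        byte = 0
        for level in levels[i:i + 8]:
            byte = (byte << 1) | level
        out.append(byte)
    return bytes(out)

message = b"LED"
assert decode_ook(encode_ook(message)) == message
print(list(encode_ook(b"L")))  # 'L' = 0x4C -> [0, 1, 0, 0, 1, 1, 0, 0]
```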
Machine vision systems often require bright and homogeneous illumination, so that features of interest are easier to process; LEDs are often used for this purpose.
Barcode scanners are the most common example of machine vision applications, and many of those scanners use red LEDs instead of lasers. Optical computer mice use LEDs as a light source for the miniature camera within the mouse.
LEDs are useful for machine vision because they provide a compact, reliable source of light. LED lamps can be turned on and off to suit the needs of the vision system, and the shape of the beam produced can be tailored to match the system's requirements.
The discovery of radiative recombination in aluminium gallium nitride (AlGaN) alloys by the U.S. Army Research Laboratory (ARL) led to the conceptualization of UV LEDs to be incorporated in light-induced fluorescence sensors used for biological-agent detection. In 2004, the Edgewood Chemical Biological Center (ECBC) initiated the effort to create a biological detector named TAC-BIO. The program capitalized on Semiconductor UV Optical Sources (SUVOS) developed by the Defense Advanced Research Projects Agency (DARPA).
UV-induced fluorescence is one of the most robust techniques used for rapid real-time detection of biological aerosols. The first UV sensors were lasers, which lacked in-field practicality. To address this, DARPA incorporated SUVOS technology to create a low-cost, small, lightweight, low-power device. The TAC-BIO detector's response time was one minute from when it sensed a biological agent. It was also demonstrated that the detector could be operated unattended indoors and outdoors for weeks at a time.
Aerosolized biological particles fluoresce and scatter light under a UV light beam. Observed fluorescence depends on the applied wavelength and the biochemical fluorophores within the biological agent. UV-induced fluorescence offers a rapid, accurate, efficient, and logistically practical way to detect biological agents, because it is reagentless: it requires no added chemical to produce a reaction, uses no consumables, and produces no chemical byproducts.
Additionally, TAC-BIO can reliably discriminate between threat and non-threat aerosols. It was claimed to be sensitive enough to detect low concentrations, but not so sensitive that it would cause false positives. The particle counting algorithm used in the device converted raw data into information by counting the photon pulses per unit of time from the fluorescence and scattering detectors, and comparing the value to a set threshold.
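A toy version of that thresholding idea is sketched below; the window length, threshold, and pulse counts are invented for illustration and bear no relation to the actual detector's parameters:

```python
# Sketch of the threshold test described above: count photon pulses per time
# bin from the fluorescence channel and flag an alarm when the count within
# a rolling window exceeds a preset threshold. Numbers are invented.

from collections import deque

WINDOW = 10          # number of time bins in the rolling window
THRESHOLD = 120      # alarm if pulses in the window exceed this

def detect(pulse_counts_per_bin):
    window = deque(maxlen=WINDOW)
    for t, count in enumerate(pulse_counts_per_bin):
        window.append(count)
        if sum(window) > THRESHOLD:
            return t  # first time bin at which the alarm fires
    return None

background = [8] * 20                  # ordinary background aerosol
event = [8] * 10 + [40] * 5 + [8] * 5  # a burst of fluorescent particles

print("background alarm at:", detect(background))  # None
print("event alarm at:", detect(event))
```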
The original TAC-BIO was introduced in 2010, while the second-generation TAC-BIO GEN II was designed in 2015 to be more cost-efficient, as plastic parts were used. Its small, lightweight design allows it to be mounted to vehicles, robots, and unmanned aerial vehicles. The second-generation device could also be used as an environmental detector to monitor air quality in hospitals, airplanes, or even households to detect fungus and mold.
The light from LEDs can be modulated very quickly so they are used extensively in optical fiber and free space optics communications. This includes remote controls, such as for television sets, where infrared LEDs are often used. Opto-isolators use an LED combined with a photodiode or phototransistor to provide a signal path with electrical isolation between two circuits. This is especially useful in medical equipment where the signals from a low-voltage sensor circuit (usually battery-powered) in contact with a living organism must be electrically isolated from any possible electrical failure in a recording or monitoring device operating at potentially dangerous voltages. An optoisolator also lets information be transferred between circuits that don't share a common ground potential.
Many sensor systems rely on light as the signal source. LEDs are often ideal as a light source due to the requirements of the sensors. The Nintendo Wii's sensor bar uses infrared LEDs. Pulse oximeters use them for measuring oxygen saturation. Some flatbed scanners use arrays of RGB LEDs rather than the typical cold-cathode fluorescent lamp as the light source. Having independent control of three illuminated colors allows the scanner to calibrate itself for more accurate color balance, and there is no need for warm-up. Further, its sensors only need be monochromatic, since at any one time the page being scanned is only lit by one color of light.
Since LEDs can also be used as photodiodes, they can be used for both photo emission and detection. This could be used, for example, in a touchscreen that registers reflected light from a finger or stylus. Many materials and biological systems are sensitive to, or dependent on, light. Grow lights use LEDs to increase photosynthesis in plants, and bacteria and viruses can be removed from water and other substances using UV LEDs for sterilization.
Deep-UV LEDs, with a spectral range of 247 nm to 386 nm, have other applications, such as water/air purification, surface disinfection, epoxy curing, free-space non-line-of-sight communication, high-performance liquid chromatography, UV curing and printing, phototherapy, medical/analytical instrumentation, and DNA absorption.
LEDs have also been used as a medium-quality voltage reference in electronic circuits. The forward voltage drop (about 1.7 V for a red LED, or 1.2 V for an infrared one) can be used instead of a Zener diode in low-voltage regulators. Red LEDs have the flattest I/V curve above the knee. Nitride-based LEDs have a fairly steep I/V curve and are useless for this purpose. Although LED forward voltage is far more current-dependent than that of a Zener diode, Zener diodes with breakdown voltages below 3 V are not widely available.
The progressive miniaturization of low-voltage lighting technology, such as LEDs and OLEDs, suitable for incorporation into low-thickness materials, has fostered experimentation in combining light sources and wall-covering surfaces for interior walls in the form of LED wallpaper.
Further improvements in LED efficiency hinge on ongoing developments in materials such as phosphors and quantum dots.
The process of down-conversion (the method by which materials convert more-energetic photons to different, less energetic colors) also needs improvement. For example, the red phosphors that are used today are thermally sensitive and need to be improved in that aspect so that they do not color-shift and experience efficiency drop-off with temperature. Red phosphors could also benefit from a narrower spectral width, to emit more lumens and become more efficient at converting photons.
In addition, work remains to be done in the realms of current efficiency droop, color shift, system reliability, light distribution, dimming, thermal management, and power supply performance.
A new family of LEDs is based on the semiconductors called perovskites. In 2018, less than four years after their discovery, the ability of perovskite LEDs (PLEDs) to produce light from electrons already rivaled that of the best-performing OLEDs. They have a potential for cost-effectiveness, as they can be processed from solution, a low-cost and low-tech method, which might allow large-area perovskite-based devices to be made at extremely low cost. Their efficiency can be improved by eliminating non-radiative losses, in other words, by eliminating recombination pathways that do not produce photons; by solving the outcoupling problem (prevalent in thin-film LEDs); or by balancing charge-carrier injection to increase the EQE (external quantum efficiency). The most up-to-date PLED devices have broken the performance barrier by pushing the EQE above 20%.
In 2018, Cao et al. and Lin et al. independently published two papers on developing perovskite LEDs with EQE greater than 20%, making these two papers a milestone in PLED development. Their devices have a similar planar structure, i.e., the active layer (perovskite) is sandwiched between two electrodes. To achieve a high EQE, they not only reduced non-radiative recombination, but also used their own, subtly different, methods to improve the EQE.
Cao and his colleagues targeted the outcoupling problem: the optical physics of thin-film LEDs causes the majority of light generated by the semiconductor to be trapped in the device. To achieve this goal, they demonstrated that solution-processed perovskites can spontaneously form submicrometre-scale crystal platelets, which can efficiently extract light from the device. These perovskites are formed simply by introducing amino-acid additives into the perovskite precursor solutions. In addition, their method is able to passivate perovskite surface defects and reduce non-radiative recombination. Therefore, by improving the light outcoupling and reducing non-radiative losses, Cao and his colleagues successfully achieved a PLED with EQE up to 20.7%.
Lin and his colleagues used a different approach to generate high EQE. Instead of modifying the microstructure of the perovskite layer, they adopted a new strategy for managing the compositional distribution in the device, an approach that simultaneously provides high luminescence and balanced charge injection. In other words, they still used a flat emissive layer, but tried to optimize the balance of electrons and holes injected into the perovskite, so as to make the most efficient use of the charge carriers. Moreover, in the perovskite layer, the crystals are perfectly enclosed by a MABr additive (where MA is CH3NH3). The MABr shell passivates the non-radiative defects that would otherwise be present in perovskite crystals, reducing non-radiative recombination. Therefore, by balancing charge injection and decreasing non-radiative losses, Lin and his colleagues developed a PLED with EQE up to 20.3%.
Devices called "nanorods" are a form of LEDs that can also detect and absorb light. They consist of a quantum dot directly contacting two semiconductor materials (instead of just one as in a traditional LED). One semiconductor allows movement of positive charge and one allows movement of negative charge. They can emit light, sense light, and collect energy. The nanorod gathers electrons while the quantum dot shell gathers positive charges so the dot emits light. When the voltage is switched the opposite process occurs and the dot absorbs light. By 2017 the only color developed was red.
https://en.wikipedia.org/wiki?curid=18290
Lev Kuleshov
Lev Vladimirovich Kuleshov (1899 – 29 March 1970) was a Russian and Soviet filmmaker and film theorist, one of the founders of the world's first film school, the Moscow Film School. He was given the title People's Artist of the RSFSR in 1969. He was intimately involved in the development of the style of filmmaking known as Soviet montage, especially its psychological underpinning, including the use of editing and the cut to emotionally influence the audience, a principle known as the Kuleshov effect. He also developed the theory of creative geography, which is the use of the action around a cut to connect otherwise disparate settings into a cohesive narrative.
Lev Kuleshov was born in 1899 into an intellectual Russian family. His father, Vladimir Sergeevich Kuleshov, was of noble heritage; he studied art at the Moscow School of Painting, Sculpture and Architecture despite his own father's disapproval. He then married Pelagia Alexandrovna Shubina, a village schoolteacher who had been raised in an orphanage, which only led to more confrontation. They had two sons: Boris and Lev.
Around the time Lev Kuleshov was born, the family went broke, lost its estate, and moved to Tambov, where they lived modestly. In 1911 Vladimir Kuleshov died; three years later Lev and his mother moved to Moscow, where his elder brother was studying and working as an engineer. Lev Kuleshov decided to follow in his father's footsteps and entered the Moscow School of Painting, although he did not finish it. In 1916 he applied to work at the film company led by Aleksandr Khanzhonkov. He produced scenery for Yevgeni Bauer's pictures, such as "The King of Paris", "For Happiness" and others. With time Kuleshov became more interested in film theory. He co-directed his first movie, "Twilight", in 1917. His next film was released under Soviet patronage.
During 1918–1920 he covered the Russian Civil War with a documentary crew. In 1919 he headed the first Soviet film courses at the National Film School. Kuleshov may well be the very first film theorist as he was a leader in the Soviet montage theory – developing his theories of editing before those of Sergei Eisenstein (briefly a student of Kuleshov). He contributed the article "Kinematografichesky naturshchik" to the first issue of "Zrelishcha" in 1922. Among his other notable students were Vsevolod Pudovkin, Boris Barnet, Mikhail Romm, Sergey Komarov, Porfiri Podobed, Vladimir Fogel and Aleksandra Khokhlova who became his wife. For Kuleshov, the essence of the cinema was editing, the juxtaposition of one shot with another. To illustrate this principle, he created what has come to be known as the Kuleshov Effect. In this now-famous editing exercise, shots of an actor were intercut with various meaningful images (a casket, a bowl of soup, etc.) in order to show how editing changes viewers' interpretations of images. Another one of his famous inventions was creative geography, also known as artificial landscape. Those techniques were described in his book "The Basics of Film Direction" (1941) which was later translated into many languages.
In addition to his theoretical and teaching work, Kuleshov directed a number of feature-length films. Among his most notable works are the action-comedy "The Extraordinary Adventures of Mr. West in the Land of the Bolsheviks" (1924), the psychological drama "By the Law" (1926), adapted from the short story by Jack London, and the biographical drama "The Great Consoler" (1933), based on O. Henry's life and works. In 1934 and 1935 Kuleshov went to Tajikistan to direct "Dokhunda", a movie based on the novel by the Tajik national poet Sadriddin Ayni, but the project was regarded with suspicion by the authorities as possibly exciting Tajik nationalism, and was stopped. No footage survives.
After directing his last film in 1943, Kuleshov served as an artistic director and an academic rector at VGIK where he worked for the next 25 years. He was a member of the jury at the 27th Venice International Film Festival, as well as a special guest during other international film festivals.
Lev Kuleshov died in Moscow in 1970. He was buried at the Novodevichy Cemetery. He was survived by his wife Aleksandra Khokhlova (1897–1985), an actress, film director and educator, granddaughter of Pavel Tretyakov and Sergey Botkin, and by Aleksandra's son from her first marriage.
https://en.wikipedia.org/wiki?curid=18292
Legacy system
In computing, a legacy system is an old method, technology, computer system, or application program, "of, relating to, or being a previous or outdated computer system," yet still in use. Often referencing a system as "legacy" means that it paved the way for the standards that would follow it. This can also imply that the system is out of date or in need of replacement.
The first use of the term "legacy" to describe computer systems probably occurred in the 1970s. By the 1980s it was commonly used to refer to existing computer systems, distinguishing them from the design and implementation of new systems. "Legacy" was often heard during a conversion process, for example, when moving data from the legacy system to a new database.
While this term may indicate that some engineers may feel that a system is out of date, a legacy system can continue to be used for a variety of reasons. It may simply be that the system still provides for the users' needs. In addition, the decision to keep an old system may be influenced by economic reasons such as return on investment challenges or vendor lock-in, the inherent challenges of change management, or a variety of other reasons other than functionality. Backward compatibility (such as the ability of newer systems to handle legacy file formats and character encodings) is a goal that software developers often include in their work.
Even if it is no longer used, a legacy system may continue to impact the organization due to its historical role. Historic data may not have been converted into the new system format and may exist within the new system with the use of a customized schema crosswalk, or may exist only in a data warehouse. In either case, the effect on business intelligence and operational reporting can be significant. A legacy system may include procedures or terminology which are no longer relevant in the current context, and may hinder or confuse understanding of the methods or technologies used.
Organizations can have compelling reasons for keeping a legacy system: it may still meet the users' needs, replacement may be costly or risky, or the organization may be locked in to the vendor.
Legacy systems are considered to be potentially problematic by some software engineers for several reasons.
Where it is impossible to replace legacy systems through the practice of application retirement, it is still possible to enhance (or "re-face") them. Most such development goes into adding new interfaces to a legacy system. The most prominent technique is to provide a Web-based interface to a terminal-based mainframe application. This may reduce staff productivity due to slower response times and slower mouse-based operator actions, yet it is often seen as an "upgrade", because the interface style is familiar to unskilled users and is easy for them to use. John McCormick discusses such strategies, which involve middleware.
Printing improvements are problematic because legacy software systems often add no formatting instructions, or they use protocols that are not usable with modern PC/Windows printers. A print server can be used to intercept the data and translate it to a more modern format. Rich Text Format (RTF) or PostScript documents may be created in the legacy application and then interpreted by a PC before being printed.
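As a hedged sketch of the kind of translation such a print server might perform, the following Python function wraps raw legacy line-printer output in minimal PostScript; the page metrics and the sample report are illustrative only:

```python
# Hedged sketch: wrap plain legacy printer output in minimal PostScript
# so a modern printer can render it. Page metrics are illustrative.

def text_to_postscript(lines, font="Courier", size=10, margin=40):
    """Return a single-page PostScript program that prints `lines`."""
    out = ["%!PS-Adobe-3.0", f"/{font} findfont {size} scalefont setfont"]
    y = 792 - margin                      # US Letter is 792 points tall
    for line in lines:
        # Escape characters that are special inside PostScript strings.
        escaped = (line.replace("\\", r"\\")
                       .replace("(", r"\(")
                       .replace(")", r"\)"))
        out.append(f"{margin} {y} moveto ({escaped}) show")
        y -= size + 2                     # simple fixed line spacing
    out.append("showpage")
    return "\n".join(out)

report = ["INVENTORY REPORT 1999-12-31", "ITEM 001   QTY 42"]
print(text_to_postscript(report))
```

A real print server would also handle pagination, control codes, and character-set translation, but the principle is the same: intercept the raw stream and re-emit it in a format the modern device understands.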
Biometric security measures are difficult to implement on legacy systems. A workable solution is to use a telnet or http proxy server to sit between users and the mainframe to implement secure access to the legacy application.
The change being undertaken in some organizations is to switch to automated business process (ABP) software which generates complete systems. These systems can then interface to the organizations' legacy systems and use them as data repositories. This approach can provide a number of significant benefits: the users are insulated from the inefficiencies of their legacy systems, and the changes can be incorporated quickly and easily in the ABP software.
Model-driven reverse and forward engineering approaches can be also used for the improvement of legacy software.
Andreas Hein, of the Technical University of Munich, researched the use of legacy systems in space exploration. According to Hein, legacy systems are attractive for reuse if an organization has the capabilities for verification, validation, testing, and operational history. These capabilities must be integrated into various software life cycle phases such as development, implementation, usage, or maintenance. For software systems, the capabilities to use and maintain the system are crucial; otherwise, the system will become less and less understandable and maintainable.
According to Hein, verification, validation, testing, and operational history increase the confidence in a system's reliability and quality. However, accumulating this history is often expensive. NASA's now-retired Space Shuttle program used a large amount of 1970s-era technology, and replacement was cost-prohibitive because of the expensive requirement for flight certification. The original hardware had completed the expensive integration and certification requirement for flight, but any new equipment would have had to go through that entire process again. This long and detailed process required extensive tests of the new components in their new configurations before a single unit could be used in the Space Shuttle program. Thus any new system that started the certification process would have become a "de facto" legacy system by the time it was approved for flight.
Additionally, the entire Space Shuttle system, including ground and launch vehicle assets, was designed to work together as a closed system. Since the specifications did not change, all of the certified systems and components performed well in the roles for which they were designed. Even before the Shuttle was scheduled to be retired in 2010, NASA found it advantageous to keep using many pieces of 1970s technology rather than to upgrade those systems and recertify the new components.
The term "legacy support" is often used in conjunction with legacy systems. The term may refer to a feature of modern software. For example, Operating systems with "legacy support" can detect and use older hardware. The term may also be used to refer to a business function; e.g. a software or hardware vendor that is supporting, or providing software maintenance, for older products.
A "legacy" product may be a product that is no longer sold, has lost substantial market share, or is a version of a product that is not current. A legacy product may have some advantage over a modern product making it appealing for customers to keep it around. A product is only truly "obsolete" if it has an advantage to nobody – if no person making a rational decision would choose to acquire it new.
The term "legacy mode" often refers specifically to backward compatibility. A software product that is capable of performing as though it were a previous version of itself, is said to be "running in legacy mode." This kind of feature is common in operating systems and internet browsers, where many applications depend on these underlying components.
The computer mainframe era saw many applications running in legacy mode. In the modern business computing environment, n-tier or 3-tier architectures are more difficult to place into legacy mode, as they include many components making up a single system.
Virtualization technology is a recent innovation allowing legacy systems to continue to operate on modern hardware by running older operating systems and browsers on a software system that emulates legacy hardware.
Programmers have borrowed the term "brownfield" from the construction industry, where previously developed land (often polluted and abandoned) is described as "brownfield".
There is an alternate favorable opinion — growing since the end of the Dotcom bubble in 1999 — that legacy systems are simply computer systems in working use:
IT analysts estimate that the cost of replacing business logic is about five times that of reuse, even discounting the risk of system failures and security breaches. Ideally, businesses would never have to rewrite most core business logic: "debits = credits" is a perennial requirement.
The IT industry is responding with "legacy modernization" and "legacy transformation": refurbishing existing business logic with new user interfaces, sometimes using screen scraping and service-enabled access through web services. These techniques allow organizations to understand their existing code assets (using discovery tools), provide new user and application interfaces to existing code, improve workflow, contain costs, minimize risk, and enjoy classic qualities of service (near 100% uptime, security, scalability, etc.).
This trend also invites reflection on what makes legacy systems so durable. Technologists are relearning the importance of sound architecture from the start, to avoid costly and risky rewrites. The most common legacy systems tend to be those which embraced well-known IT architectural principles, with careful planning and strict methodology during implementation. Poorly designed systems often don't last, both because they wear out and because their inherent faults invite replacement. Thus, many organizations are rediscovering the value of both their legacy systems and the theoretical underpinnings of those systems.
|
https://en.wikipedia.org/wiki?curid=18295
|
Lunar eclipse
A lunar eclipse occurs when the Moon moves into the Earth's shadow. This can occur only when the Sun, Earth, and Moon are exactly or very closely aligned (in syzygy), with Earth between the other two. A lunar eclipse can occur only on the night of a full moon. The type and length of a lunar eclipse depend on the Moon's proximity to either node of its orbit.
During a total lunar eclipse, Earth completely blocks direct sunlight from reaching the Moon. The only light reflected from the lunar surface has been refracted by Earth's atmosphere. This light appears reddish for the same reason that a sunset or sunrise does: the Rayleigh scattering of bluer light. Due to this reddish color, a totally eclipsed Moon is sometimes called a blood moon.
Unlike a solar eclipse, which can only be viewed from a relatively small area of the world, a lunar eclipse may be viewed from anywhere on the night side of Earth. A total lunar eclipse can last up to nearly 2 hours, while a total solar eclipse lasts only up to a few minutes at any given place, due to the smaller size of the Moon's shadow. Also unlike solar eclipses, lunar eclipses are safe to view without any eye protection or special precautions, as they are dimmer than the full Moon.
Earth's shadow can be divided into two distinctive parts: the umbra and penumbra. Earth totally occludes direct solar radiation within the umbra, the central region of the shadow. However, since the Sun's diameter appears about one-quarter of Earth's in the lunar sky, the planet only partially blocks direct sunlight within the penumbra, the outer portion of the shadow.
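The geometry can be made concrete with a back-of-the-envelope calculation (round values, not a precise ephemeris). By similar triangles, Earth's umbral cone extends behind the planet to a length

$$L = \frac{D\,r_E}{r_S - r_E} \approx \frac{(1.496\times 10^{8}\ \text{km})(6371\ \text{km})}{696{,}000\ \text{km} - 6371\ \text{km}} \approx 1.4\times 10^{6}\ \text{km},$$

where $r_S$ and $r_E$ are the solar and terrestrial radii and $D$ is the Earth–Sun distance. Since the Moon orbits at only about 384,000 km, it sits well inside the umbral cone, which is why it can be completely immersed in shadow.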
A penumbral lunar eclipse occurs when the Moon passes through Earth's penumbra. The penumbra causes a subtle dimming of the lunar surface, which is only visible to the naked eye when about 70% of the Moon's diameter has immersed into Earth's penumbra. A special type of penumbral eclipse is a total penumbral lunar eclipse, during which the Moon lies exclusively within Earth's penumbra. Total penumbral eclipses are rare, and when these occur, the portion of the Moon closest to the umbra may appear slightly darker than the rest of the lunar disk.
A partial lunar eclipse occurs when only a portion of the Moon enters Earth's umbra, while a total lunar eclipse occurs when the entire Moon enters the planet's umbra. The Moon's average orbital speed is about 1.03 km/s (3,700 km/h), or a little more than its diameter per hour, so totality may last up to nearly 107 minutes. Nevertheless, the total time between the first and the last contacts of the Moon's limb with Earth's shadow is much longer and could last up to four hours.
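These durations can be checked with rough figures (approximate values, not an eclipse calculation). The umbra's diameter at the Moon's distance is about 9,000 km, some 2.6 lunar diameters, and the Moon's speed relative to the shadow (its orbital speed less the shadow's own eastward drift along the ecliptic) is roughly 3,400 km/h, so

$$t_{\text{totality}} \approx \frac{d_{\text{umbra}} - d_{\text{Moon}}}{v_{\text{rel}}} \approx \frac{9000\ \text{km} - 3475\ \text{km}}{3400\ \text{km/h}} \approx 1.6\ \text{h},$$

consistent with the stated maximum of nearly 107 minutes for a central passage.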
The relative distance of the Moon from Earth at the time of an eclipse can affect the eclipse's duration. In particular, when the Moon is near apogee, the farthest point from Earth in its orbit, its orbital speed is at its slowest. The diameter of Earth's umbra does not decrease appreciably over the range of the Moon's orbital distances, so a total eclipse occurring with the Moon near apogee lengthens the duration of totality.
A central lunar eclipse is a total lunar eclipse during which the Moon passes through the centre of Earth's shadow, contacting the antisolar point. This type of lunar eclipse is relatively rare.
A selenelion or selenehelion occurs when both the Sun and an eclipsed Moon can be observed at the same time. This can occur only just before sunset or just after sunrise, when both bodies will appear just above the horizon at nearly opposite points in the sky. This arrangement has led to the phenomenon being also called a horizontal eclipse.
Typically it can be seen from high ridges at sunrise or sunset. Although the Moon is in Earth's umbra, the Sun and the eclipsed Moon can be seen simultaneously because atmospheric refraction causes each body to appear higher in the sky than its true geometric position.
The Danjon scale, devised by André Danjon, rates the overall darkness of lunar eclipses from L = 0 (a very dark eclipse, with the Moon almost invisible at mid-totality) to L = 4 (a very bright copper-red or orange eclipse).
There is often confusion between a solar eclipse and a lunar eclipse. While both involve the Sun, Earth, and Moon, they are very different in geometry: in a solar eclipse the Moon's shadow falls on Earth, whereas in a lunar eclipse Earth's shadow falls on the Moon.
The Moon does not completely darken as it passes through the umbra because of the refraction of sunlight by Earth's atmosphere into the shadow cone; if Earth had no atmosphere, the Moon would be completely dark during the eclipse. The reddish coloration arises because sunlight reaching the Moon must pass through a long and dense layer of Earth's atmosphere, where it is scattered. Shorter wavelengths are more likely to be scattered by the air molecules and small particles; thus, the longer wavelengths predominate by the time the light rays have penetrated the atmosphere. Human vision perceives this resulting light as red. This is the same effect that causes sunsets and sunrises to turn the sky a reddish color. An alternative way of conceiving this scenario is to realize that, as viewed from the Moon, the Sun would appear to be setting (or rising) behind Earth.
The amount of refracted light depends on the amount of dust or clouds in the atmosphere; this also controls how much light is scattered. In general, the dustier the atmosphere, the more that other wavelengths of light will be removed (compared to red light), leaving the resulting light a deeper red color. This causes the resulting coppery-red hue of the Moon to vary from one eclipse to the next. Volcanoes are notable for expelling large quantities of dust into the atmosphere, and a large eruption shortly before an eclipse can have a large effect on the resulting color.
Several cultures have myths related to lunar eclipses or allude to the lunar eclipse as being a good or bad omen. The Egyptians saw the eclipse as a sow swallowing the Moon for a short time; other cultures viewed the eclipse as the Moon being swallowed by other animals, such as a jaguar in Mayan tradition or a three-legged toad in China. Some societies thought a demon was swallowing the Moon and that they could chase it away by throwing stones and curses at it. The ancient Greeks, ahead of their time, held that the Earth was round and used Earth's curved shadow on the Moon during a lunar eclipse as evidence. Some Hindus believe in the importance of bathing in the Ganges River following an eclipse because it will help to achieve salvation.
Similarly to the Mayans, the Incans believed that lunar eclipses occurred when a jaguar would eat the Moon, which is why a blood moon looks red. The Incans also believed that once the jaguar finished eating the Moon, it could come down and devour all the animals on Earth, so they would take spears and shout at the Moon to keep it away.
The ancient Mesopotamians believed that a lunar eclipse was when the Moon was being attacked by seven demons. This attack was more than just one on the Moon, however, for the Mesopotamians linked what happened in the sky with what happened on the land, and because the king of Mesopotamia represented the land, the seven demons were thought to be also attacking the king. In order to prevent this attack on the king, the Mesopotamians made someone pretend to be the king so they would be attacked instead of the true king. After the lunar eclipse was over, the substitute king was made to disappear (possibly by poisoning).
In some Chinese cultures, people would ring bells to prevent a dragon or other wild animals from biting the Moon. In the nineteenth century, during a lunar eclipse, the Chinese navy fired its artillery because of this belief. In the Book of Songs, from the Zhou Dynasty, the sight of a red Moon engulfed in darkness was believed to foreshadow famine or disease.
Certain lunar eclipses have been referred to as "blood moons" in popular articles, but this is not a scientifically recognized term. The term has been given two separate, but overlapping, meanings.
The first, and simpler, meaning relates to the reddish color a totally eclipsed Moon takes on to observers on Earth. As sunlight penetrates the atmosphere of Earth, the gaseous layer filters and refracts the rays in such a way that the green to violet wavelengths on the visible spectrum scatter more strongly than the red, thus giving the Moon a reddish cast.
The second meaning of "blood moon" has been derived from this apparent coloration by two fundamentalist Christian pastors, Mark Blitz and John Hagee. They claimed that the 2014–15 "lunar tetrad" of four lunar eclipses coinciding with the feasts of Passover and Tabernacles matched the "moon turning to blood" described in the Book of Joel of the Hebrew Bible. This tetrad was claimed to herald the Second Coming of Christ and the Rapture as described in the Book of Revelation on the date of the first of the eclipses in this sequence on April 15, 2014.
At least two lunar eclipses and as many as five occur every year, although total lunar eclipses are significantly less common. If the date and time of an eclipse is known, the occurrences of upcoming eclipses are predictable using an eclipse cycle, like the saros.
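Since one saros is about 6,585⅓ days (roughly 18 years and 11 days), successive eclipses in the same series can be sketched with simple date arithmetic; the Python below is a minimal illustration with a hypothetical seed datetime, and a real prediction would use a proper ephemeris:

```python
# Minimal sketch of eclipse-cycle arithmetic: eclipses one saros apart
# (about 6585 days 7 h 43 min) belong to the same series and repeat
# with very similar geometry. The seed datetime is illustrative only.
from datetime import datetime, timedelta

SAROS = timedelta(days=6585, hours=7, minutes=43)

def same_series(seed: datetime, count: int):
    """Yield `count` successive eclipse times in the seed's saros series."""
    t = seed
    for _ in range(count):
        t += SAROS
        yield t

seed = datetime(2000, 1, 21, 4, 44)   # illustrative seed eclipse (UTC)
for t in same_series(seed, 3):
    print(t.isoformat(" "))
```

The roughly eight-hour remainder beyond a whole number of days means each successive eclipse in a series occurs about a third of a day later, and so is best seen from a region roughly 120° further west.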
Eclipses occur only during an eclipse season, when the Sun appears to pass near either node of the Moon's orbit.
|
https://en.wikipedia.org/wiki?curid=18298
|
Latin alphabet
The Latin or Roman alphabet is the writing system originally used by the ancient Romans to write the Latin language.
The term "Latin alphabet" may refer to either the alphabet used to write Latin (as described in this article) or other alphabets based on the Latin script, which is the basic set of letters common to the various alphabets descended from the classical Latin alphabet, such as the English alphabet. These Latin-script alphabets may discard letters, like the Rotokas alphabet or add new letters, like the Danish and Norwegian alphabets. Letter shapes have evolved over the centuries, including the development in Medieval Latin of lower-case, forms which did not exist in the Classical period alphabet.
Due to its use in writing Germanic, Romance, and other languages, first in Europe and then in other parts of the world, and due to its use in romanizing the writing of other languages, it has become widespread (see Latin script). It is also used officially in Asian countries such as China (separately from its ideographic writing), Malaysia, Indonesia, and Vietnam, and has been adopted by the Baltic states and some Slavic states.
The Latin alphabet evolved from the visually similar Etruscan alphabet, which evolved from the Cumaean Greek version of the Greek alphabet, which was itself descended from the Phoenician alphabet, which in turn derived from Egyptian hieroglyphics. The Etruscans ruled early Rome; their alphabet evolved in Rome over successive centuries to produce the Latin alphabet.
During the Middle Ages, the Latin alphabet was used (sometimes with modifications) for writing Romance languages, which are direct descendants of Latin, as well as Celtic, Germanic, Baltic and some Slavic languages. With the age of colonialism and Christian evangelism, the Latin script spread beyond Europe, coming into use for writing indigenous American, Australian, Austronesian, Austroasiatic and African languages. More recently, linguists have also tended to prefer the Latin script or the International Phonetic Alphabet (itself largely based on the Latin script) when transcribing or creating written standards for non-European languages, such as the African reference alphabet.
Classical Latin does not appear to have used diacritics (accents and the like); among major modern European languages, English is the only one that uses no diacritics for native words.
Although Latin did not use diacritical signs, signs of truncation, often placed above a truncated word or at its end, were very common, as were abbreviations and smaller overlapping letters. Abbreviation reduced the number of letters to be engraved when a text was cut in stone, and saved space, which was very precious, when it was written on paper or parchment. This practice continued into the Middle Ages; hundreds of symbols and abbreviations exist, varying from century to century.
It is generally believed that the Latin alphabet used by the Romans was derived from the Old Italic alphabet used by the Etruscans.
That alphabet was derived from the Euboean alphabet used at Cumae, which in turn was derived from the Phoenician alphabet.
Latin included 21 different characters. The letter ⟨C⟩ was the western form of the Greek gamma, but it was used for the sounds /ɡ/ and /k/ alike, possibly under the influence of Etruscan, which might have lacked any voiced plosives. Later, probably during the 3rd century BC, the letter ⟨Z⟩ – unneeded to write Latin properly – was replaced with the new letter ⟨G⟩, a ⟨C⟩ modified with a small vertical stroke, which took its place in the alphabet. From then on, ⟨G⟩ represented the voiced plosive /ɡ/, while ⟨C⟩ was generally reserved for the voiceless plosive /k/. The letter ⟨K⟩ was used only rarely, in a small number of words such as "Kalendae", often interchangeably with ⟨C⟩.
After the Roman conquest of Greece in the 1st century BC, Latin adopted the Greek letters ⟨Y⟩ and ⟨Z⟩ (or, in the case of ⟨Z⟩, readopted it) to write Greek loanwords, placing them at the end of the alphabet. An attempt by the emperor Claudius to introduce three additional letters did not last. Thus it was during the classical Latin period that the Latin alphabet contained 23 letters.
The Latin names of some of these letters are disputed; for example, ⟨H⟩ may have been called "aha" or "aca". In general the Romans did not use the traditional (Semitic-derived) names as in Greek: the names of the plosives were formed by adding /eː/ to their sound (except for ⟨K⟩ and ⟨Q⟩, which needed different vowels to be distinguished from ⟨C⟩), and the names of the continuants consisted either of the bare sound or of the sound preceded by /e/.
The letter ⟨Y⟩, when introduced, was probably called "hy" as in Greek, the name upsilon not being in use yet, but this was changed to "i Graeca" ("Greek i") as Latin speakers had difficulty distinguishing its foreign sound /y/ from /i/. ⟨Z⟩ was given its Greek name, zeta. This scheme has continued to be used by most modern European languages that have adopted the Latin alphabet. For the Latin sounds represented by the various letters, see Latin spelling and pronunciation; for the names of the letters in English, see English alphabet.
Diacritics were not regularly used, but they did occur sometimes, the most common being the apex, used to mark long vowels, which had previously sometimes been written doubled. However, rather than take an apex, the letter ⟨i⟩ was written taller (the "long i"). For example, the phrase today transcribed "Lūciī a fīliī" was written in inscriptions with apices and tall i's.
The primary mark of punctuation was the interpunct, which was used as a word divider, though it fell out of use after 200 AD.
Old Roman cursive script, also called majuscule cursive and capitalis cursive, was the everyday form of handwriting used for writing letters, by merchants writing business accounts, by schoolchildren learning the Latin alphabet, and even emperors issuing commands. A more formal style of writing was based on Roman square capitals, but cursive was used for quicker, informal writing. It was most commonly used from about the 1st century BC to the 3rd century, but it probably existed earlier than that. It led to Uncial, a majuscule script commonly used from the 3rd to 8th centuries AD by Latin and Greek scribes.
New Roman cursive script, also known as minuscule cursive, was in use from the 3rd century to the 7th century, and uses letter forms that are more recognizable to modern eyes: ⟨a⟩, ⟨b⟩, ⟨d⟩, and ⟨e⟩ had taken a more familiar shape, and the other letters were proportionate to each other. This script evolved into the medieval scripts known as Merovingian and Carolingian minuscule.
It was not until the Middle Ages that the letter ⟨W⟩ (originally a ligature of two ⟨V⟩s) was added to the Latin alphabet, to represent sounds from the Germanic languages which did not exist in medieval Latin, and only after the Renaissance did the convention of treating ⟨i⟩ and ⟨u⟩ as vowels, and ⟨j⟩ and ⟨v⟩ as consonants, become established. Prior to that, the former had been merely allographs of the latter.
With the fragmentation of political power, the style of writing changed and varied greatly throughout the Middle Ages, even after the invention of the printing press. Early deviations from the classical forms were the uncial script, a development of the Old Roman cursive, and various so-called minuscule scripts that developed from New Roman cursive, of which the Carolingian minuscule was the most influential, introducing the lower case forms of the letters, as well as other writing conventions that have since become standard.
The languages that use the Latin script generally use capital letters to begin paragraphs and sentences and proper nouns. The rules for capitalization have changed over time, and different languages have varied in their rules for capitalization. Old English, for example, was rarely written with even proper nouns capitalized, whereas Modern English writers and printers of the 17th and 18th century frequently capitalized most and sometimes all nouns, which is still systematically done in Modern German, e.g. in the preamble and all of the United States Constitution: "We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America."
The Latin alphabet spread, along with the Latin language, from the Italian Peninsula to the lands surrounding the Mediterranean Sea with the expansion of the Roman Empire. The eastern half of the Empire, including Greece, Anatolia, the Levant, and Egypt, continued to use Greek as a lingua franca, but Latin was widely spoken in the western half, and as the western Romance languages evolved out of Latin, they continued to use and adapt the Latin alphabet.
With the spread of Western Christianity during the Middle Ages, the script was gradually adopted by the peoples of northern Europe who spoke Celtic languages (displacing the Ogham alphabet) or Germanic languages (displacing earlier Runic alphabets), Baltic languages, as well as by the speakers of several Uralic languages, most notably Hungarian, Finnish and Estonian. The Latin alphabet came into use for writing the West Slavic languages and several South Slavic languages, as the people who spoke them adopted Roman Catholicism.
Later, it was adopted by non-Catholic countries. Romanian, most of whose speakers are Orthodox, was the first major language to switch from Cyrillic to Latin script, doing so in the 19th century, although Moldova only did so after the Soviet collapse.
It has also been increasingly adopted by Turkic-speaking countries, beginning with Turkey in the 1920s. After the Soviet collapse, Azerbaijan, Turkmenistan, and Uzbekistan all switched from Cyrillic to Latin. The government of Kazakhstan announced in 2015 that the Latin alphabet would replace Cyrillic as the writing system for the Kazakh language by 2025.
The spread of the Latin alphabet among previously illiterate peoples has inspired the creation of new writing systems, such as the Avoiuli alphabet in Vanuatu, which replaces the letters of the Latin alphabet with alternative symbols.
|
https://en.wikipedia.org/wiki?curid=18306
|
Lugh
Lugh or Lug (Old Irish: "Lug"; Modern Irish: "Lú") is one of the most prominent gods in Irish mythology. A member of the Tuatha Dé Danann, Lugh is portrayed as a warrior, a king, a master craftsman and a savior. He is associated with skill and mastery in multiple disciplines, including the arts. He is also associated with oaths, truth and the law, and therefore with rightful kingship. Lugh is linked with the harvest festival of Lughnasadh, which bears his name. His most common epithets are "Lámfada" ("of the long arm," possibly for his skill with a spear or his ability as a ruler) and "Samildánach" ("equally skilled in many arts").
In mythology, Lugh is the son of Cian and Ethniu (or Ethliu). He is the maternal grandson of the Fomorian tyrant Balor, whom Lugh kills in the "Battle of Mag Tuired". His foster-father is the sea god Manannán. Lugh's son is the hero Cú Chulainn, who is believed to be an incarnation of Lugh.
Lugh has several magical possessions. He wields an unstoppable fiery spear, a sling stone, and owns a hound named "Failinis". He is said to have invented "fidchell" (a Gaelic equivalent of chess), ball games, and horse racing.
He corresponds to the pan-Celtic god Lugus, and his Welsh counterpart is Lleu Llaw Gyffes. He has also been equated with Mercury.
The meaning of Lugh's name is still a matter of debate. Some scholars propose that it derives from the Proto-Indo-European root "*(h2)lewgh-", meaning "to bind by oath" (compare Old Irish "luige" and Welsh "llw", both meaning "oath, vow, act of swearing" and derived from a suffixed Proto-Celtic form, "*lugiyo-", "oath"), suggesting he was originally a god of oaths and sworn contracts. When Balor meets Lugh in the Second Battle of Moytura, he calls Lugh a "babbler". In the past, his name was generally believed to come from the Proto-Indo-European root "*leuk-", "flashing light", so from Victorian times he has often been considered a sun god, similar to the Greco-Roman Apollo. However, the figure of Lugh from Irish literature seems to be a better match with the Celtic Mercury as described by Julius Caesar in his "De Bello Gallico". There are serious phonological issues with deriving the name from Proto-Indo-European "*leuk-", notably that Proto-Indo-European *k never produced Proto-Celtic *g; for this reason, modern specialists in Celtic languages no longer accept this etymology.
Lugh is typically described as a youthful warrior. In the brief narrative "Baile in Scáil" Lugh is described as being very large and very beautiful and also as a spear-wielding horseman.
When he appears before the wounded Cú Chulainn in the Táin Bó Cúalnge, he is described as follows:
"A man fair and tall, with a great head of curly yellow hair. He has a green mantle wrapped about him and a brooch of white silver in the mantle over his breast. Next to his white skin, he wears a tunic of royal satin with red-gold insertion reaching to his knees. He carries a black shield with a hard boss of white-bronze. In his hand a five-pointed spear and next to it a forked javelin. Wonderful is the play and sport and diversion that he makes (with these weapons). But none accosts him and he accosts none as if no one could see him."
Elsewhere Lugh is described as a young, tall man with bright red cheeks, white sides, a bronze-coloured face, and blood-coloured hair.
Finally, in "The Fate of the Children of Turenn", Lugh is described by Bres as follows:
Then arose Breas, the son of Balar, and he said: "It is a wonder to me", said he, "that the sun should rise in the west today, and in the east every other day". "It would be better that it were so", said the Druids. "What else is it?" said he. "The radiance of the face of Lugh of the Long Arms", said they.
Lugh's father is Cian of the Tuatha Dé Danann, and his mother is Ethniu: daughter of Balor, of the Fomorians. In "Cath Maige Tuired" their union is a dynastic marriage following an alliance between the Tuatha Dé and the Fomorians. In the "Lebor Gabála Érenn," Cian gives the boy to Tailtiu, queen of the Fir Bolg, in fosterage. In the Dindsenchas, Lugh, the foster-son of Tailtiu, is described as the "son of the Dumb Champion". In the poem Baile Suthain Sith Eamhna Lugh is called "descendant of the poet."
A folktale told to John O'Donovan by Shane O'Dugan of Tory Island in 1835 recounts the birth of a grandson of Balor who grows up to kill his grandfather. The grandson is unnamed, his father is called Mac Cinnfhaelaidh and the manner of his killing of Balor is different, but it has been taken as a version of the birth of Lugh, and was adapted as such by Lady Gregory. In this tale, Balor hears a druid's prophecy that he will be killed by his own grandson. To prevent this he imprisons his only daughter in the Tór Mór (great tower) of Tory Island. She is cared for by twelve women, who are to prevent her ever meeting or even learning of the existence of men. On the mainland, Mac Cinnfhaelaidh owns a magic cow who gives such abundant milk that everyone, including Balor, wants to possess her. While the cow is in the care of Mac Cinnfhaelaidh's brother Mac Samthainn, Balor appears in the form of a little red-haired boy and tricks him into giving him the cow. Looking for revenge, Mac Cinnfhaelaidh calls on a "leanan sídhe" (fairy woman) called Biróg, who transports him by magic to the top of Balor's tower, where he seduces Eithne. In time she gives birth to triplets, which Balor gathers up in a sheet and sends to be drowned in a whirlpool. The messenger drowns two of the babies but unwittingly drops one child into the harbour, where he is rescued by Biróg. She takes him to his father, who gives him to his brother, Gavida the smith, in fosterage.
There may be further triplism associated with his birth. His father in the folktale is one of a triad of brothers, Mac Cinnfhaelaidh, Gavida, and Mac Samthainn, whereas in the "Lebor Gabála", his father Cian is mentioned alongside his brothers Cú and Cethen. Two characters called Lugaid, a popular medieval Irish name thought to derive from Lugh, have three fathers: Lugaid Riab nDerg (Lugaid of the Red Stripes) was the son of the three "Findemna" or fair triplets, and Lugaid mac Con Roí was also known as "mac Trí Con", "son of three hounds". In Ireland's other great "sequestered maiden" story, the tragedy of Deirdre, the king's intended is carried off by three brothers, who are hunters with hounds. The canine imagery continues with Cian's brother Cú ("hound"), another Lugaid, Lugaid Mac Con (son of a hound), and Lugh's son Cúchulainn ("Culann's Hound"). A fourth Lugaid was Lugaid Loígde, a legendary King of Tara and ancestor of (or inspiration for) Lugaid Mac Con.
As a young man Lugh travels to Tara to join the court of King Nuada of the Tuatha Dé Danann. The doorkeeper will not let him in unless he has a skill he can use to serve the king. He offers his services as a wright, a smith, a champion, a swordsman, a harpist, a hero, a poet, historian, a sorcerer, and a craftsman, but each time is rejected as the Tuatha Dé Danann already have someone with that skill. When Lugh asks if they have anyone with all those skills simultaneously, the doorkeeper has to admit defeat, and Lugh joins the court and is appointed Chief Ollam of Ireland. He wins a flagstone-throwing contest against Ogma, the champion, and entertains the court with his harp. The Tuatha Dé Danann are, at that time, oppressed by the Fomorians, and Lugh is amazed how meekly they accept their oppression. Nuada wonders if this young man could lead them to freedom. Lugh is given command over the Tuatha Dé Danann, and he begins making preparations for war.
Tuireann and Cian, Lugh's father, are old enemies, and one day Tuireann's sons Brian, Iuchar, and Iucharba spot Cian in the distance and decide to kill him. They find him hiding in the form of a pig, but Cian tricks the brothers into allowing him to transform back into a man before they kill him, giving Lugh the legal right to claim compensation for a father rather than just a pig. When they try to bury him, the ground spits his body back twice before keeping him down, and eventually confesses to Lugh that it is a grave. Lugh holds a feast and invites the brothers, and during it he asks them what they would demand as compensation for the murder of their father. They reply that death would be the only just demand, and Lugh agrees. He then accuses them of the murder of his father, Cian, and sets them a series of seemingly impossible quests. The brothers go on an adventure and achieve them all except the last one, which will surely kill them. Despite Tuireann's pleas, Lugh demands that they proceed and, when they are all fatally wounded, he denies them the use of one of the items they have retrieved, a magic pigskin which heals all wounds. They die of their wounds, and Tuireann dies of grief over their bodies.
Using the magic artefacts the sons of Tuireann have gathered, Lugh leads the Tuatha Dé Danann in the Second Battle of Mag Tuireadh against the Fomorians. Prior to the battle, Lugh asks each man and woman in his army what art he or she will bring to the fray; he then addresses his army in a speech that elevates each warrior's spirit to that of a king or lord. Nuada is killed in the battle by Balor. Lugh faces Balor, who opens his terrible, poisonous eye that kills all it looks upon, but Lugh shoots a sling-stone that drives the eye out the back of Balor's head, killing him and wreaking havoc on the Fomorian army behind. After the victory Lugh finds Bres, the half-Fomorian former king of the Tuatha Dé Danann, alone and unprotected on the battlefield, and Bres begs for his life. If he is spared, he promises, he will ensure that the cows of Ireland always give milk. The Tuatha Dé Danann refuse the offer. He then promises four harvests a year, but the Tuatha Dé Danann say one harvest a year suits them. Lugh spares his life on the condition that he teach the Tuatha Dé Danann how and when to plough, sow, and reap.
Lugh instituted an event similar to the Olympic games called the Assembly of Talti which finished on Lughnasadh (1 August) in memory of his foster-mother, Tailtiu, at the town that bears her name (now Teltown, County Meath). He likewise instituted Lughnasadh fairs in the areas of Carman and Naas in honour of Carman and Nás, the eponymous tutelary goddess of these two regions. Horse races and displays of martial arts were important activities at all three fairs. However, Lughnasadh itself is a celebration of Lugh's triumph over the spirits of the Otherworld who had tried to keep the harvest for themselves. It survived long into Christian times and is still celebrated under a variety of names. "Lúnasa" is now the Irish name for the month of August.
According to a poem of the "dindsenchas", Lugh was responsible for the death of Bres. He made 300 wooden cows and filled them with a bitter, poisonous red liquid which was then "milked" into pails and offered to Bres to drink. Bres, who was under an obligation not to refuse hospitality, drank it down without flinching, and it killed him.
Lugh is said to have invented the board game fidchell.
One of his wives, Buach, had an affair with Cermait, son of the Dagda. Lugh killed him in revenge, but Cermait's sons, Mac Cuill, Mac Cecht, and Mac Gréine, killed Lugh in return, spearing him through the foot and then drowning him in Loch Lugborta in County Westmeath. He had ruled for forty years. Cermait was later revived by his father, the Dagda, who used the smooth or healing end of his staff to bring Cermait back to life.
Lugh is given the matriname "mac Ethlenn" or "mac Ethnenn" ("son of Ethliu or Ethniu", his mother) and the patriname "mac Cein" ("son of Cian", his father).
He had several wives, including Buí (AKA Buach or Bua "Victory") and Nás, daughters of Ruadri Ruad, king of Britain. Buí lived and was buried at Knowth (Cnogba). Nás was buried at Naas, County Kildare, which is named after her. Lugh had a son, Ibic "of the horses", by Nás. It is said that Nás dies with the noise of combat, therefore it is difficult to know where she dies. Lugh's daughter or sister was Ebliu, who married Fintan. By the mortal Deichtine, Lugh was father to the hero Cú Chulainn.
Lug possessed a number of magical items, retrieved by the sons of Tuirill Piccreo in Middle Irish redactions of the Lebor Gabála; not all of them are listed here. The late narrative "Fate of the Children of Tuireann" not only gives a list of items gathered for Lugh, but also endows him with such gifts from the sea god Manannán as the sword Fragarach, the horse Enbarr (Aonbarr), the boat "Sguaba Tuinne" ("Wave-Sweeper"), his armour, and his helmet.
Lugh's spear, according to the text of The Four Jewels of the Tuatha Dé Danann, was said to be impossible to overcome, and was taken to Ireland from Gorias (or Findias).
Lugh obtained the Spear of Assal as an "éric" (fine) imposed on the children of Tuirill Piccreo (or Biccreo), according to the short account in the "Lebor Gabála", which adds that the incantation "Ibar" ("yew") made the cast always hit its mark, and "Athibar" ("re-yew") caused the spear to return.
In a full narrative version called "Oidhead Chloinne Tuireann" (The Fate of the Children of Tuireann), from copies no earlier than the 17th century, Lugh demands the spear named "Ar-éadbair" or "Areadbhair", which belonged to Pisear, king of Persia; its tip had to be kept immersed in a pot of water to keep it from igniting, a property similar to that of the Lúin of Celtchar. This spear is also called "Slaughterer" in translation.
There is yet another name that Lugh's spear goes by: "a [yew] tree, the finest of the wood", occurring in an inserted verse within "The Fate of the Children of Tuireann". "The famous yew of the wood" is also the name given to Lugh's spear in a tract which alleges that it, the Lúin of Celtchar, and the spear Crimall that blinded Cormac Mac Airt were one and the same weapon (tract in TCD MS 1336 (H 3. 17), col. 723).
Lugh's projectile weapon, whether a dart or missile, was envisioned as a symbolic lightning-weapon.
Lugh's sling rod was the rainbow, and the Milky Way was called "Lugh's Chain", according to popular writer Charles Squire. Squire adds that Lugh's spear, which needed no wielding, was alive and thirsted so for blood that only by steeping its head in a sleeping-draught of pounded fresh poppy leaves could it be kept at rest. When battle was near, it was drawn out; then it roared and struggled against its thongs, and fire flashed from it; once slipped from the leash, it tore through the ranks of the enemy, never tiring of slaying.
According to the brief accounts in the Lebor Gabála Érenn, Lugh used the "sling-stone" ("cloich tabaill") to slay his grandfather, Balor the Strong-Smiter, in the Battle of Magh Tuired. The narrative "Cath Maige Tuired", preserved in a unique 16th-century copy, words it slightly differently, saying that Lugh used the sling-stone to destroy the evil eye of Balor of the Piercing Eye (Bolur Birugderc).
The ammunition that Lugh used was not just a stone but a "tathlum", according to a certain poem in Egerton MS. 1782 (olim W. Monck Mason MS.).
The poem describes the composition of this tathlum: it was formed from the blood of toads, bears, lions and vipers, and of the neck-base of Osmuinn, mixed with the sands of the Armorian Sea and the Red Sea.
Lugh is also seen girt with the Freagarthach (better known as Fragarach), the sword of Manannán, in the assembly of the Tuatha Dé Danann in the "Fate of the Children of Tuireann".
Lugh had a horse named Aenbharr which could fare over both land and sea. Like much of his equipment, it was furnished to him by the sea god Manannán mac Lir. When the Children of Tuireann asked to borrow this horse, Lugh begrudged them, saying it would not be proper to make a loan of a loan. Consequently, Lugh was unable to refuse their request to use his currach (coracle), the boat "Sguaba Tuinne" ("Wave-Sweeper").
In the Lebor Gabála, Gainne and Rea were the names of the pair of horses belonging to the king of the isle of Sicily on the Tyrrhenian Sea, which Lug demanded as éric from the sons of Tuirill Briccreo.
Failinis was the name of the whelp of the King of Ioruaidhe that Lugh demanded as éiric (a forfeit) in the "Oidhead Chloinne Tuireann". This concurs with the name of the hound mentioned in an "Ossianic Ballad", sometimes referred to by its opening line, "They came here as a band of three". In the ballad, the hound is called Ṡalinnis (Shalinnis) or Failinis (in the Lismore text), and belonged to a threesome from Iruaide whom the Fianna encounter. It is described as "the ancient grayhound... that had been with Lugh of the Mantles, / Given him by the sons of Tuireann Bicreann".
Lugh corresponds to the pan-Celtic god Lugus, and his Welsh counterpart is Lleu Llaw Gyffes. He has also been equated with Mercury. Sometimes he is interpreted as a storm god and, less often today, as a sun god. Others have noted in Lugh's slaying of Balor a Celtic analogue of the slaying of Baldr by Loki. Lugh's mastery of all arts has led many to link him with the unnamed Gaulish god Julius Caesar identifies with Mercury, whom he describes as the "inventor of all the arts". Caesar describes the Gaulish Mercury as the most revered deity in Gaul, overseeing journeys and business transactions.
St. Mologa has been theorized to be a Christian continuation of the god Lugh.
The County of Louth in Ireland is named after the village of Louth, which is in turn named after the god Lugh. Historically, the place name has had various spellings: "Lugmad", "Lughmhaigh", and "Lughmhadh"; "Lú" is the modern simplified spelling. Other places named for Lugh include the cairn at Seelewey (Suidhe Lughaidh, or Lug's Seat), Dunlewey, and Rath-Lugaidh in Carney, Sligo. Seelewey was located in Moyturra Chonlainn and, according to local folklore, was a place where giants used to gather in olden days.
|
https://en.wikipedia.org/wiki?curid=18307
|
Lanthanide
The lanthanide () or lanthanoid () series of chemical elements comprises the 15 metallic chemical elements with atomic numbers 57–71, from lanthanum through lutetium. These elements, along with the chemically similar elements scandium and yttrium, are often collectively known as the rare earth elements.
The informal chemical symbol Ln is used in general discussions of lanthanide chemistry to refer to any lanthanide. All but one of the lanthanides are f-block elements, corresponding to the filling of the 4f electron shell; depending on the source, either lanthanum or lutetium is considered a d-block element, but is included due to its chemical similarities with the other 14. All lanthanide elements form trivalent cations, Ln3+, whose chemistry is largely determined by the ionic radius, which decreases steadily from lanthanum to lutetium.
They are called lanthanides because the elements in the series are chemically similar to lanthanum. Both lanthanum and lutetium have been labeled as group 3 elements, because they have a single valence electron in the 5d shell. However, both elements are often included in discussions of the chemistry of lanthanide elements. Lanthanum is the more often omitted of the two, because its placement as a group 3 element is somewhat more common in texts and for semantic reasons: since "lanthanide" means "like lanthanum", it has been argued that lanthanum cannot logically be a lanthanide, but IUPAC acknowledges its inclusion based on common usage.
In presentations of the periodic table, the lanthanides and the actinides are customarily shown as two additional rows below the main body of the table, with placeholders or else a selected single element of each series (either lanthanum and actinium, or lutetium and lawrencium) shown in a single cell of the main table, between barium and hafnium, and radium and rutherfordium, respectively. This convention is entirely a matter of aesthetics and formatting practicality; a rarely used wide-formatted periodic table inserts the lanthanide and actinide series in their proper places, as parts of the table's sixth and seventh rows (periods).
The 1985 International Union of Pure and Applied Chemistry "Red Book" (p. 45) recommends that "lanthanoid" be used rather than "lanthanide", since the ending "-ide" normally indicates a negative ion. However, owing to wide current use, "lanthanide" is still allowed.
Together with the two elements at the top of group 3, scandium and yttrium, the trivial name "rare earths" is sometimes used to describe all the lanthanides; a definition of rare earths including the group 3, lanthanide, and actinide elements is also occasionally seen, and rarely Sc + Y + lanthanides + thorium. The "earth" in the name "rare earths" arises from the minerals from which they were isolated, which were uncommon oxide-type minerals. However, the use of the name is deprecated by IUPAC, as the elements are neither rare in abundance nor "earths" (an obsolete term for water-insoluble strongly basic oxides of electropositive metals incapable of being smelted into metal using late 18th century technology). Group 2 is known as the alkaline earth elements for much the same reason.
The "rare" in the "rare earths" name has much more to do with the difficulty of separating out each of the individual lanthanide elements than scarcity of any of them. By way of the Greek "dysprositos" for "hard to get at," element 66, dysprosium was similarly named; lanthanum itself is named after a word for "hidden." The elements 57 (La) to 71 (Lu) are very similar chemically to one another and frequently occur together in nature, often anywhere from three to all 15 of the lanthanides (along with yttrium as a 16th) occur in minerals such as samarskite, monazite and many others which can also contain the other two group 3 elements as well as thorium and occasionally other actinides as well. A majority of the rare earths were discovered at the same mine in Ytterby, Sweden and four of them are named (yttrium, ytterbium, erbium, terbium) after the city and a fifth *(holmium) after Stockholm; scandium is named after Scandinavia, thulium after the old name Thule, and the immediately-following group 4 element (number 72) hafnium is named for the Latin name of the city of Copenhagen.
Samarskite (a mineral which is the source of the name of the element samarium) and other similar minerals in particular also have these elements in association with the nearby metals tantalum, niobium, hafnium, zirconium, vanadium, and titanium from group 4 and group 5, often in similar oxidation states. Monazite is a phosphate of numerous group 3 + lanthanide + actinide metals and is mined especially for its thorium content and for specific rare earths, especially lanthanum, yttrium, and cerium. Cerium and lanthanum, as well as other members of the rare earth series, are often produced as a metal called mischmetal, containing a variable mixture of these elements with cerium and lanthanum predominating; it has direct uses, such as lighter flints and other spark sources, which do not require extensive purification of one of these metals. There are also rare earth-bearing minerals based on group 2 elements, such as yttrocalcite, yttrocerite, and yttrofluorite, which vary in their content of yttrium, cerium, and lanthanum in particular, as well as in varying amounts of the others. Other lanthanide/rare earth minerals include bastnäsite, florencite, chernovite, perovskite, xenotime, cerite, gadolinite, lanthanite, fergusonite, polycrase, blomstrandine, håleniusite, miserite, loparite, lepersonnite, and euxenite, all of which have a range of relative element concentrations and may bear the symbol of a predominating element, such as monazite-(Ce). Group 3 elements do not occur as native-element minerals in the fashion of gold, silver, tantalum, and many others on Earth, but may do so in lunar regolith. Very rare cerium, lanthanum, and presumably other lanthanide/group 3 halides, feldspars, and garnets are also known to exist.
All of this is the result of the order in which the electron shells of these elements are filled—the outermost has the same configuration for all of them, and a deeper shell is progressively filled with electrons as the atomic number increases from 57 towards 71. For many years, mixtures of more than one rare earth were considered to be single elements; neodymium and praseodymium, for example, were long thought to be the single element didymium. Very small differences in solubility are used in solvent-extraction and ion-exchange purification methods for these elements, which require a great deal of repetition to obtain purified metal. The refined metals and their compounds show subtle and sometimes stark differences among themselves in electronic, electrical, optical, and magnetic properties, which account for their many niche uses.
By way of examples of the term meaning the above considerations rather than their scarcity, cerium is the 26th most abundant element in the Earth's crust and more abundant than copper, neodymium is more abundant than gold; thulium (the second least common naturally occurring lanthanide) is more abundant than iodine, which is itself common enough for biology to have evolved critical usages thereof, and even the lone radioactive element in the series, promethium, is more common than the two rarest naturally occurring elements, francium and astatine, combined. Despite their abundance, even the technical term "lanthanides" could be interpreted to reflect a sense of elusiveness on the part of these elements, as it comes from the Greek λανθανειν ("lanthanein"), "to lie hidden". However, if not referring to their natural abundance, but rather to their property of "hiding" behind each other in minerals, this interpretation is in fact appropriate. The etymology of the term must be sought in the first discovery of lanthanum, at that time a so-called new rare earth element "lying hidden" in a cerium mineral, and it is an irony that lanthanum was later identified as the first in an entire series of chemically similar elements and could give name to the whole series. The term "lanthanide" was introduced by Victor Goldschmidt in 1925.
(A table of properties appeared here, listing each lanthanide's electron configuration between the initial [Xe] core and the final 6s2 shell, together with its crystal structure. Notably, samarium has a close-packed structure like the other lanthanides, but with an unusual nine-layer repeat.)
Gschneider and Daane (1988) attribute the trend in melting point, which increases across the series from lanthanum (920 °C) to lutetium (1622 °C), to the extent of hybridization of the 6s, 5d, and 4f orbitals. The hybridization is believed to be at its greatest for cerium, which has the lowest melting point of all, 795 °C.
The lanthanide metals are soft; their hardness increases across the series. Europium stands out, as it has the lowest density in the series at 5.24 g/cm3 and the largest metallic radius in the series at 208.4 pm. It can be compared to barium, which has a metallic radius of 222 pm. It is believed that the metal contains the larger Eu2+ ion and that there are only two electrons in the conduction band. Ytterbium also has a large metallic radius, and a similar explanation is suggested.
The resistivities of the lanthanide metals are relatively high, ranging from 29 to 134 μΩ·cm. These values can be compared to a good conductor such as aluminium, which has a resistivity of 2.655 μΩ·cm.
With the exceptions of La, Yb, and Lu (which have no unpaired f electrons), the lanthanides are strongly paramagnetic, and this is reflected in their magnetic susceptibilities. Gadolinium becomes ferromagnetic below 16 °C (its Curie point). The other heavier lanthanides – terbium, dysprosium, holmium, erbium, thulium, and ytterbium – become ferromagnetic at much lower temperatures.
The colors of lanthanide complexes originate almost entirely from charge transfer interactions between the metal and the ligand. f → f transitions are symmetry forbidden (or Laporte-forbidden), which is also true of transition metals. However, transition metals are able to use vibronic coupling to break this rule. The valence orbitals in lanthanides are almost entirely non-bonding, so little effective vibronic coupling takes place; hence the spectra from f → f transitions are much weaker and narrower than those from d → d transitions. In general this makes the colors of lanthanide complexes far fainter than those of transition metal complexes. f → f transitions are not possible for the f1 and f13 configurations of Ce3+ and Yb3+, and thus these ions are colorless in aqueous solution.
Going across the lanthanides in the periodic table, the 4f orbitals are usually being filled. The effect of the 4f orbitals on the chemistry of the lanthanides is profound and is the factor that distinguishes them from the transition metals. There are seven 4f orbitals, and there are two different ways in which they are depicted: as a "cubic set" or as a general set. The cubic set is "f""z3", "f""xz2", "f""yz2", "f""xyz", "f""z(x2−y2)", "f""x(x2−3y2)" and "f""y(3x2−y2)". The 4f orbitals penetrate the [Xe] core and are isolated, and thus they do not participate in bonding. This explains why crystal field effects are small and why they do not form π bonds. As there are seven 4f orbitals, the number of unpaired electrons can be as high as 7, which gives rise to the large magnetic moments observed for lanthanide compounds. Measuring the magnetic moment can be used to investigate the 4f electron configuration, and this is a useful tool in providing an insight into the chemical bonding. The lanthanide contraction, i.e. the reduction in size of the Ln3+ ion from La3+ (103 pm) to Lu3+ (86.1 pm), is often explained by the poor shielding of the 5s and 5p electrons by the 4f electrons.
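In this free-ion picture, the measured moment can be compared against the standard Landé prediction; a worked example for Gd3+ (4f7, so L = 0, S = 7/2, J = 7/2 and g_J = 2):

$$\mu_{\text{eff}} = g_J\sqrt{J(J+1)}\,\mu_B, \qquad \mu_{\text{eff}}(\mathrm{Gd}^{3+}) = 2\sqrt{\tfrac{7}{2}\cdot\tfrac{9}{2}}\,\mu_B \approx 7.94\,\mu_B.$$

Because the 4f electrons are shielded from the ligands, such free-ion values generally agree with experiment far better for lanthanide complexes than the analogous free-ion estimates do for transition metals, where the crystal field quenches much of the orbital contribution.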
The electronic structure of the lanthanide elements, with minor exceptions, is [Xe]6s24fn. The chemistry of the lanthanides is dominated by the +3 oxidation state, and in LnIII compounds the 6s electrons and (usually) one 4f electron are lost and the ions have the configuration [Xe]4fm. All the lanthanide elements exhibit the oxidation state +3. In addition, Ce3+ can lose its single f electron to form Ce4+ with the stable electronic configuration of xenon. Also, Eu3+ can gain an electron to form Eu2+ with the f7 configuration that has the extra stability of a half-filled shell. Other than Ce(IV) and Eu(II), none of the lanthanides are stable in oxidation states other than +3 in aqueous solution. Promethium is effectively a man-made element, as all its isotopes are radioactive with half-lives shorter than 20 years.
In terms of reduction potentials, the Ln0/3+ couples are nearly the same for all lanthanides, ranging from −1.99 (for Eu) to −2.35 V (for Pr). Thus these metals are highly reducing, with reducing power similar to alkaline earth metals such as Mg (−2.36 V).
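To put these potentials on an energy scale (using the standard relation ΔG° = −nFE°; an illustration, not a figure from the source), a Ln3+/Ln0 potential of about −2.3 V implies that reduction of the ion to the metal is strongly unfavourable, and hence that oxidation of the metal is correspondingly favourable:

```latex
\Delta G^{\circ} = -nFE^{\circ}
  = -(3)(96\,485\ \mathrm{C\,mol^{-1}})(-2.3\ \mathrm{V})
  \approx +6.7 \times 10^{2}\ \mathrm{kJ\,mol^{-1}}
```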
The ionization energies for the lanthanides can be compared with aluminium. In aluminium the sum of the first three ionization energies is 5139 kJ·mol−1, whereas the lanthanides fall in the range 3455–4186 kJ·mol−1. This correlates with the highly reactive nature of the lanthanides.
The sum of the first two ionization energies for europium, 1632 kJ·mol−1, can be compared with that of barium, 1468.1 kJ·mol−1, and europium's third ionization energy is the highest of the lanthanides. The sum of the first two ionization energies for ytterbium is the second lowest in the series, and its third ionization energy is the second highest. The high third ionization energies for Eu and Yb correlate with the half-filled 4f7 and completely filled 4f14 configurations, and with the stability afforded by such configurations due to exchange energy. Europium and ytterbium form salt-like compounds with Eu2+ and Yb2+, for example the salt-like dihydrides. Both europium and ytterbium dissolve in liquid ammonia forming solutions of Ln2+(NH3)x, again demonstrating their similarities to the alkaline earth metals.
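As a quick arithmetic check on the alkaline-earth comparison (an illustration, not from the source), europium's first two ionizations cost only about 11% more than barium's:

```latex
\frac{1632 - 1468.1}{1468.1} \approx 0.11
```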
The relative ease with which the fourth electron can be removed in cerium and (to a lesser extent) praseodymium explains why Ce(IV) and Pr(IV) compounds can be formed; for example, CeO2 is formed rather than Ce2O3 when cerium reacts with oxygen.
The similarity in ionic radius between adjacent lanthanide elements makes it difficult to separate them from each other in naturally occurring ores and other mixtures. Historically, the very laborious processes of cascading and fractional crystallization were used. Because the lanthanide ions have slightly different radii, the lattice energy of their salts and the hydration energies of the ions will be slightly different, leading to a small difference in solubility. Salts of the formula Ln(NO3)3·2NH4NO3·4H2O can be used for such separations. Industrially, the elements are separated from each other by solvent extraction. Typically an aqueous solution of nitrates is extracted into kerosene containing tri-"n"-butylphosphate. The strength of the complexes formed increases as the ionic radius decreases, so solubility in the organic phase increases. Complete separation can be achieved continuously by use of countercurrent exchange methods. The elements can also be separated by ion-exchange chromatography, making use of the fact that the stability constant for formation of EDTA complexes increases from log K ≈ 15.5 for [La(EDTA)]− to log K ≈ 19.8 for [Lu(EDTA)]−.
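The EDTA stability constants quoted above imply a very large selectivity between the two ends of the series; as a back-of-the-envelope figure (arithmetic on the quoted values, not a sourced number):

```latex
\frac{K_{[\mathrm{Lu(EDTA)}]^{-}}}{K_{[\mathrm{La(EDTA)}]^{-}}}
  = 10^{\,19.8 - 15.5} = 10^{4.3} \approx 2 \times 10^{4}
```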
When in the form of coordination complexes, lanthanides exist overwhelmingly in their +3 oxidation state, although particularly stable 4f configurations can also give +4 (Ce, Tb) or +2 (Eu, Yb) ions. All of these forms are strongly electropositive and thus lanthanide ions are hard Lewis acids. The oxidation states are also very stable; with the exceptions of SmI2 and cerium(IV) salts, lanthanides are not used for redox chemistry. 4f electrons have a high probability of being found close to the nucleus and are thus strongly affected as the nuclear charge increases across the series; this results in a corresponding decrease in ionic radii referred to as the lanthanide contraction.
The low probability of the 4f electrons existing at the outer region of the atom or ion permits little effective overlap between the orbitals of a lanthanide ion and any binding ligand. Thus lanthanide complexes typically have little or no covalent character and are not influenced by orbital geometries. The lack of orbital interaction also means that varying the metal typically has little effect on the complex (other than size), especially when compared to transition metals. Complexes are held together by weaker electrostatic forces which are omni-directional and thus the ligands alone dictate the symmetry and coordination of complexes. Steric factors therefore dominate, with coordinative saturation of the metal being balanced against inter-ligand repulsion. This results in a diverse range of coordination geometries, many of which are irregular, and also manifests itself in the highly fluxional nature of the complexes. As there is no energetic reason to be locked into a single geometry, rapid intramolecular and intermolecular ligand exchange will take place. This typically results in complexes that rapidly fluctuate between all possible configurations.
Many of these features make lanthanide complexes effective catalysts. Hard Lewis acids are able to polarise bonds upon coordination and thus alter the electrophilicity of compounds, with a classic example being the Luche reduction. The large size of the ions coupled with their labile ionic bonding allows even bulky coordinating species to bind and dissociate rapidly, resulting in very high turnover rates; thus excellent yields can often be achieved with loadings of only a few mol%. The lack of orbital interactions combined with the lanthanide contraction means that the lanthanides change in size across the series but that their chemistry remains much the same. This allows for easy tuning of the steric environments and examples exist where this has been used to improve the catalytic activity of the complex and change the nuclearity of metal clusters.
Despite this, the use of lanthanide coordination complexes as homogeneous catalysts is largely restricted to the laboratory, and there are currently few examples of them being used on an industrial scale. Lanthanides exist in many forms other than coordination complexes, and many of these are industrially useful. In particular, lanthanide metal oxides are used as heterogeneous catalysts in various industrial processes.
The trivalent lanthanides mostly form ionic salts. The trivalent ions are hard acceptors and form more stable complexes with oxygen-donor ligands than with nitrogen-donor ligands. The larger ions are 9-coordinate in aqueous solution, [Ln(H2O)9]3+, but the smaller ions are 8-coordinate, [Ln(H2O)8]3+. There is some evidence that the later lanthanides have more water molecules in the second coordination sphere. Complexation with monodentate ligands is generally weak because it is difficult to displace water molecules from the first coordination sphere. Stronger complexes are formed with chelating ligands because of the chelate effect; an example is the tetra-anion derived from 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA).
The most common divalent derivatives of the lanthanides are for Eu(II), which achieves a favorable f7 configuration. Divalent halide derivatives are known for all of the lanthanides. They are either conventional salts or are Ln(III) "electride"-like salts. The simple salts include YbI2, EuI2, and SmI2. The electride-like salts, described as Ln3+, 2I−, e−, include LaI2, CeI2 and GdI2. Many of the iodides form soluble complexes with ethers, e.g. TmI2(dimethoxyethane)3. Samarium(II) iodide is a useful reducing agent. Ln(II) complexes can be synthesized by transmetalation reactions. The normal range of oxidation states can be expanded via the use of sterically bulky cyclopentadienyl ligands, in this way many lanthanides can be isolated as Ln(II) compounds.
Ce(IV) in ceric ammonium nitrate is a useful oxidizing agent. Ce(IV) is the exception, owing to its tendency to attain the stable xenon configuration by emptying its 4f shell; otherwise, tetravalent lanthanides are rare. However, Tb(IV) and Pr(IV) complexes have recently been shown to exist.
Lanthanide metals react exothermically with hydrogen to form dihydrides, LnH2. With the exception of Eu and Yb, whose dihydrides resemble those of Ba and Ca (non-conducting, transparent, salt-like compounds), the lanthanides form black, pyrophoric, conducting compounds in which the metal sub-lattice is face-centred cubic and the H atoms occupy tetrahedral sites. Further hydrogenation produces a trihydride which is non-stoichiometric, non-conducting, and more salt-like. The formation of the trihydride is associated with a volume increase of 8–10%, and this is linked to greater localization of charge on the hydrogen atoms, which become more anionic (H− hydride anion) in character.
The only tetrahalides known are the tetrafluorides of cerium, praseodymium, terbium, neodymium and dysprosium, the last two known only under matrix isolation conditions.
All of the lanthanides form trihalides with fluorine, chlorine, bromine and iodine. They are all high-melting and predominantly ionic in nature. The fluorides are only slightly soluble in water and are not sensitive to air; this contrasts with the other halides, which are air-sensitive, readily soluble in water, and react at high temperature to form oxohalides.
The trihalides are important because the pure metals can be prepared from them. In the gas phase the trihalides are planar or approximately planar; the lighter lanthanides have a lower proportion of dimers, the heavier lanthanides a higher proportion. The dimers have a similar structure to Al2Cl6.
Some of the dihalides are conducting while the rest are insulators. The conducting forms can be considered as LnIII electride compounds where the electron is delocalised into a conduction band, Ln3+(X−)2(e−). All of the diiodides have relatively short metal–metal separations. The CuTi2 structure of the lanthanum, cerium and praseodymium diiodides, along with HP-NdI2, contains 44 nets of metal and iodine atoms with short metal–metal bonds (393–386 pm, La–Pr). These compounds should be considered to be two-dimensional metals (two-dimensional in the same way that graphite is). The salt-like dihalides include those of Eu, Dy, Tm, and Yb. The formation of a relatively stable +2 oxidation state for Eu and Yb is usually explained by the stability (exchange energy) of the half-filled (4f7) and fully filled (4f14) shells. GdI2 possesses the layered MoS2 structure, is ferromagnetic, and exhibits colossal magnetoresistance.
The sesquihalides Ln2X3 and the Ln7I12 compounds contain metal clusters: discrete Ln6I12 clusters in Ln7I12, and condensed clusters forming chains in the sesquihalides. Scandium forms a similar cluster compound with chlorine, Sc7Cl12. Unlike many transition metal clusters, these lanthanide clusters do not have strong metal–metal interactions, owing to the low number of valence electrons involved; instead they are stabilised by the surrounding halogen atoms.
LaI is the only known monohalide. Prepared from the reaction of LaI3 and La metal, it has a NiAs type structure and can be formulated La3+ (I−)(e−)2.
All of the lanthanides form sesquioxides, Ln2O3. The lighter/larger lanthanides adopt a hexagonal 7-coordinate structure while the heavier/smaller ones adopt a cubic 6-coordinate "C-M2O3" structure. All of the sesquioxides are basic, and absorb water and carbon dioxide from air to form carbonates, hydroxides and hydroxycarbonates. They dissolve in acids to form salts.
Cerium forms a stoichiometric dioxide, CeO2, where cerium has an oxidation state of +4. CeO2 is basic and dissolves with difficulty in acid to form Ce4+ solutions, from which CeIV salts can be isolated, for example the hydrated nitrate Ce(NO3)4·5H2O. CeO2 is used as an oxidation catalyst in catalytic converters. Praseodymium and terbium form non-stoichiometric oxides containing LnIV, although more extreme reaction conditions can produce stoichiometric (or near-stoichiometric) PrO2 and TbO2.
Europium and ytterbium form salt-like monoxides, EuO and YbO, which have a rock salt structure. EuO is ferromagnetic at low temperatures, and is a semiconductor with possible applications in spintronics. A mixed EuII/EuIII oxide Eu3O4 can be produced by reducing Eu2O3 in a stream of hydrogen. Neodymium and samarium also form monoxides, but these are shiny conducting solids, although the existence of samarium monoxide is considered dubious.
All of the lanthanides form hydroxides, Ln(OH)3. With the exception of lutetium hydroxide, which has a cubic structure, they have the hexagonal UCl3 structure. The hydroxides can be precipitated from solutions of LnIII. They can also be formed by the reaction of the sesquioxide, Ln2O3, with water, but although this reaction is thermodynamically favorable it is kinetically slow for the heavier members of the series. Fajans' rules indicate that the smaller Ln3+ ions will be more polarizing and their salts correspondingly less ionic. The hydroxides of the heavier lanthanides become less basic, for example Yb(OH)3 and Lu(OH)3 are still basic hydroxides but will dissolve in hot concentrated NaOH.
All of the lanthanides form Ln2Q3 (Q = S, Se, Te). The sesquisulfides can be produced by reaction of the elements or (with the exception of Eu2S3) by sulfidizing the oxide (Ln2O3) with H2S. The sesquisulfides, Ln2S3, generally lose sulfur when heated and can form a range of compositions between Ln2S3 and Ln3S4. The sesquisulfides are insulators, but some of the Ln3S4 phases are metallic conductors (e.g. Ce3S4), formulated (Ln3+)3(S2−)4(e−), while others (e.g. Eu3S4 and Sm3S4) are semiconductors. Structurally, the sesquisulfides adopt structures that vary according to the size of the Ln metal: the lighter and larger lanthanides favor 7-coordinate metal atoms, the heaviest and smallest lanthanides (Yb and Lu) favor 6-coordination, and the rest adopt structures with a mixture of 6- and 7-coordination. Polymorphism is common amongst the sesquisulfides. The colors of the sesquisulfides vary from metal to metal and depend on the polymorphic form. The colors of the γ-sesquisulfides are La2S3, white/yellow; Ce2S3, dark red; Pr2S3, green; Nd2S3, light green; Gd2S3, sand; Tb2S3, light yellow; and Dy2S3, orange. The shade of γ-Ce2S3 can be varied by doping with Na or Ca, with hues ranging from dark red to yellow, and Ce2S3-based pigments are used commercially and are seen as low-toxicity substitutes for cadmium-based pigments.
All of the lanthanides form monochalcogenides, LnQ (Q = S, Se, Te). The majority of the monochalcogenides are conducting, indicating a formulation LnIIIQ2−(e−) where the electron is in a conduction band. The exceptions are SmQ, EuQ and YbQ, which are semiconductors or insulators but exhibit a pressure-induced transition to a conducting state.
Compounds LnQ2 are known, but these do not contain LnIV; they are LnIII compounds containing polychalcogenide anions.
Oxysulfides Ln2O2S are well known. They all have the same structure, with 7-coordinate Ln atoms and with 3 sulfur and 4 oxygen atoms as near neighbours.
Doping these with other lanthanide elements produces phosphors. As an example, gadolinium oxysulfide, Gd2O2S doped with Tb3+ produces visible photons when irradiated with high energy X-rays and is used as a scintillator in flat panel detectors.
When mischmetal, an alloy of lanthanide metals, is added to molten steel to remove oxygen and sulfur, stable oxysulfides are produced that form an immiscible solid.
All of the lanthanides form a mononitride, LnN, with the rock salt structure. The mononitrides have attracted interest because of their unusual physical properties. SmN and EuN are reported as being "half metals". NdN, GdN, TbN and DyN are ferromagnetic, SmN is antiferromagnetic. Applications in the field of spintronics are being investigated.
CeN is unusual, as it is a metallic conductor, contrasting with the other nitrides and also with the other cerium pnictides. A simple description is Ce4+N3−(e–), but the interatomic distances are a better match for the trivalent state than for the tetravalent state. A number of different explanations have been offered.
The nitrides can be prepared by the reaction of lanthanide metals with nitrogen. Some nitride is produced along with the oxide when lanthanide metals are ignited in air. Alternative methods of synthesis are a high-temperature reaction of lanthanide metals with ammonia, or the decomposition of lanthanide amides, Ln(NH2)3. Achieving pure stoichiometric compounds, and crystals with low defect density, has proved difficult. The lanthanide nitrides are sensitive to air and hydrolyse, producing ammonia.
The other pnictogens – phosphorus, arsenic, antimony and bismuth – also react with the lanthanide metals to form monopnictides, LnQ. Additionally, a range of other compounds can be produced with varying stoichiometries, such as LnP2, LnP5, LnP7, Ln3As, Ln5As3 and LnAs2.
Carbides of varying stoichiometries are known for the lanthanides, and non-stoichiometry is common. All of the lanthanides form LnC2 and Ln2C3, which both contain C2 units. The dicarbides, with the exception of EuC2, are metallic conductors with the calcium carbide structure and can be formulated as Ln3+C22−(e–). The C–C bond length is longer than that in CaC2, which contains the C22− anion, indicating that the antibonding orbitals of the C22− anion are involved in the conduction band. These dicarbides hydrolyse to form hydrogen and a mixture of hydrocarbons. EuC2, and to a lesser extent YbC2, hydrolyse differently, producing a higher percentage of acetylene (ethyne). The sesquicarbides, Ln2C3, can be formulated as Ln4(C2)3. These compounds adopt the Pu2C3 structure, which has been described as having C22− anions in bisphenoid holes formed by eight near Ln neighbours. The lengthening of the C–C bond is less marked in the sesquicarbides than in the dicarbides, with the exception of Ce2C3.
Other carbon-rich stoichiometries are known for some lanthanides: Ln3C4 (Ho–Lu), containing C, C2 and C3 units; Ln4C7 (Ho–Lu), containing C atoms and C3 units; and Ln4C5 (Gd–Ho), containing C and C2 units.
Metal-rich carbides contain interstitial C atoms and no C2 or C3 units. These are Ln4C3 (Tb and Lu), Ln2C (Dy, Ho, Tm) and Ln3C (Sm–Lu).
All of the lanthanides form a number of borides. The "higher" borides (LnBx where x > 12) are insulators/semiconductors, whereas the lower borides are typically conducting. The lower borides have stoichiometries of LnB2, LnB4, LnB6 and LnB12. The range of borides formed by the lanthanides can be compared to those formed by the transition metals: the boron-rich borides are typical of the lanthanides (and groups 1–3), whereas the transition metals tend to form metal-rich, "lower" borides. The lanthanide borides are typically grouped together with the group 3 metals, with which they share many similarities of reactivity, stoichiometry and structure; collectively these are then termed the rare-earth borides.
Many methods of producing lanthanide borides have been used, amongst them are direct reaction of the elements; the reduction of Ln2O3 with boron; reduction of boron oxide, B2O3, and Ln2O3 together with carbon; reduction of metal oxide with boron carbide, B4C. Producing high purity samples has proved to be difficult. Single crystals of the higher borides have been grown in a low melting metal (e.g. Sn, Cu, Al).
Diborides, LnB2, have been reported for Sm, Gd, Tb, Dy, Ho, Er, Tm, Yb and Lu. All have the same AlB2 structure, containing a graphitic layer of boron atoms. Low-temperature ferromagnetic transitions occur for Tb, Dy, Ho and Er; TmB2 is ferromagnetic at 7.2 K.
Tetraborides, LnB4, have been reported for all of the lanthanides except europium; all have the same UB4 structure. The structure has a boron sub-lattice consisting of chains of octahedral B6 clusters linked by boron atoms. The unit cell decreases in size successively from LaB4 to LuB4. The tetraborides of the lighter lanthanides melt with decomposition to LnB6. Attempts to make EuB4 have failed. The LnB4 compounds are good conductors and typically antiferromagnetic.
Hexaborides, LnB6, have been reported for all of the lanthanides. They all have the CaB6 structure, containing B6 clusters. They are non-stoichiometric owing to cation defects. The hexaborides of the lighter lanthanides (La–Sm) melt without decomposition; EuB6 decomposes to boron and metal, and the heavier lanthanides decompose to LnB4, with the exception of YbB6, which decomposes to form YbB12. The stability has in part been correlated to differences in volatility between the lanthanide metals. In EuB6 and YbB6 the metals have an oxidation state of +2, whereas in the rest of the lanthanide hexaborides it is +3. This rationalises the differences in conductivity: the extra electrons in the LnIII hexaborides enter conduction bands. EuB6 is a semiconductor and the rest are good conductors. LaB6 and CeB6 are thermionic emitters, used, for example, in scanning electron microscopes.
Dodecaborides, LnB12, are formed by the heavier, smaller lanthanides, but not by the lighter, larger metals La–Eu. With the exception of YbB12 (where Yb takes an intermediate valence and is a Kondo insulator), the dodecaborides are all metallic compounds. They all have the UB12 structure, containing a three-dimensional framework of cuboctahedral B12 clusters.
The higher boride LnB66 is known for all lanthanide metals. The composition is approximate, as the compounds are non-stoichiometric. They all have a similar complex structure with over 1600 atoms in the unit cell. The cubic boron sub-lattice contains super-icosahedra made up of a central B12 icosahedron surrounded by 12 others, B12(B12)12. Other complex higher borides, LnB50 (Tb, Dy, Ho, Er, Tm, Lu) and LnB25 (Gd, Tb, Dy, Ho, Er), are known, and these contain boron icosahedra in the boron framework.
Lanthanide–carbon σ bonds are well known; however, as the 4f electrons have a low probability of existing at the outer region of the atom, there is little effective orbital overlap, resulting in bonds with significant ionic character. As such, organolanthanide compounds exhibit carbanion-like behavior, unlike the behavior of transition metal organometallic compounds. Because of their large size, lanthanides tend to form more stable organometallic derivatives with bulky ligands, giving compounds such as Ln[CH(SiMe3)2]3. Analogues of uranocene are derived from dilithiocyclooctatetraene, Li2C8H8. Organic lanthanide(II) compounds are also known, such as Cp*2Eu.
All the trivalent lanthanide ions, except lanthanum and lutetium, have unpaired f electrons. However, the magnetic moments deviate considerably from the spin-only values because of strong spin-orbit coupling. The maximum number of unpaired electrons is 7, in Gd3+, with a magnetic moment of 7.94 B.M., but the largest magnetic moments, at 10.4–10.7 B.M., are exhibited by Dy3+ and Ho3+. However, in Gd3+ all the electrons have parallel spin, and this property is important for the use of gadolinium complexes as contrast agents in MRI scans.
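A minimal Python sketch (an illustration using standard textbook formulas, not material from the source) reproduces these numbers. Gd3+ has no orbital contribution (L = 0), so the spin-only formula √(n(n+2)) applies, whereas Dy3+ and Ho3+ need the full treatment μeff = gJ√(J(J+1)); the ground terms assumed below (8S7/2 for Gd3+, 6H15/2 for Dy3+, 5I8 for Ho3+) are the standard ones for these ions.

```python
from math import sqrt

def mu_spin_only(n):
    """Spin-only moment in Bohr magnetons: sqrt(n(n+2)) for n unpaired electrons."""
    return sqrt(n * (n + 2))

def lande_g(S, L, J):
    """Lande g-factor for a ground term with quantum numbers S, L, J."""
    return 1.5 + (S * (S + 1) - L * (L + 1)) / (2 * J * (J + 1))

def mu_eff(S, L, J):
    """Effective moment g_J * sqrt(J(J+1)) in Bohr magnetons."""
    return lande_g(S, L, J) * sqrt(J * (J + 1))

# Gd3+ (4f7, ground term 8S7/2): L = 0, so both treatments agree.
print(round(mu_spin_only(7), 2))       # 7.94
print(round(mu_eff(3.5, 0, 3.5), 2))   # 7.94

# Dy3+ (4f9, 6H15/2) and Ho3+ (4f10, 5I8): spin-orbit coupling dominates.
print(round(mu_eff(2.5, 5, 7.5), 2))   # 10.65
print(round(mu_eff(2.0, 6, 8.0), 2))   # 10.61
```

The computed Dy3+ and Ho3+ values land inside the 10.4–10.7 B.M. window quoted above, illustrating why f-block moments cannot be predicted from unpaired-electron counts alone.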
Crystal field splitting is rather small for the lanthanide ions and is less important than spin-orbit coupling in regard to energy levels. Transitions of electrons between f orbitals are forbidden by the Laporte rule. Furthermore, because of the "buried" nature of the f orbitals, coupling with molecular vibrations is weak. Consequently, the spectra of lanthanide ions are rather weak and the absorption bands are similarly narrow. Glass containing holmium oxide and holmium oxide solutions (usually in perchloric acid) have sharp optical absorption peaks in the spectral range 200–900 nm and can be used as a wavelength calibration standard for optical spectrophotometers, and are available commercially.
As f-f transitions are Laporte-forbidden, once an electron has been excited, decay to the ground state will be slow. This makes lanthanide ions suitable for use in lasers, because population inversion is then easy to achieve. The Nd:YAG laser is one that is widely used. Europium-doped yttrium vanadate was the first red phosphor to enable the development of color television screens. Lanthanide ions have notable luminescent properties due to their unique 4f orbitals. Laporte-forbidden f-f transitions can be activated by excitation of a bound "antenna" ligand. This leads to sharp emission bands throughout the visible, NIR, and IR, and relatively long luminescence lifetimes.
The lanthanide contraction is responsible for the great geochemical divide that splits the lanthanides into light- and heavy-lanthanide-enriched minerals, the latter almost inevitably associated with and dominated by yttrium. This divide is reflected in the first two "rare earths" that were discovered: yttria (1794) and ceria (1803). The geochemical divide has put more of the light lanthanides in the Earth's crust, but more of the heavy members in the Earth's mantle. The result is that although large rich ore-bodies enriched in the light lanthanides are found, correspondingly large ore-bodies for the heavy members are few. The principal ores are monazite and bastnäsite. Monazite sands usually contain all the lanthanide elements, but the heavier elements are lacking in bastnäsite. The lanthanides obey the Oddo–Harkins rule: odd-numbered elements are less abundant than their even-numbered neighbors.
Three of the lanthanide elements have radioactive isotopes with long half-lives (138La, 147Sm and 176Lu) that can be used to date minerals and rocks from Earth, the Moon and meteorites.
Lanthanide elements and their compounds have many uses, but the quantities consumed are relatively small in comparison to other elements. About 15,000 tonnes per year of lanthanides are consumed as catalysts and in the production of glasses; this corresponds to about 85% of lanthanide production. From the perspective of value, however, applications in phosphors and magnets are more important.
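On the figures quoted here (simple arithmetic, not a sourced statistic), total annual lanthanide production would be of the order of:

```latex
\frac{15\,000\ \mathrm{t\,yr^{-1}}}{0.85} \approx 1.8 \times 10^{4}\ \mathrm{t\,yr^{-1}}
```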
The devices in which lanthanide elements are used include superconductors, samarium–cobalt and neodymium–iron–boron high-flux rare-earth magnets, magnesium alloys, electronic polishers, refining catalysts and hybrid-car components (primarily batteries and magnets). Lanthanide ions are used as the active ions in luminescent materials for optoelectronics applications, most notably in lasers. Erbium-doped fiber amplifiers are significant devices in optical-fiber communication systems. Phosphors with lanthanide dopants are also widely used in cathode ray tube technology, such as television sets. The earliest color television CRTs had a poor-quality red; europium as a phosphor dopant made good red phosphors possible. Yttrium iron garnet (YIG) spheres can act as tunable microwave resonators. Lanthanide oxides are mixed with tungsten to improve its high-temperature properties for TIG welding, replacing thorium, which was mildly hazardous to work with. Many defense-related products also use lanthanide elements, such as night-vision goggles and rangefinders. The SPY-1 radar used in some Aegis-equipped warships and the hybrid propulsion systems of some warships both use rare-earth magnets in critical capacities.
The price for lanthanum oxide used in fluid catalytic cracking has risen from $5 per kilogram in early 2010 to $140 per kilogram in June 2011.
Most lanthanides are widely used in lasers and as (co-)dopants in doped-fiber optical amplifiers; for example, in Er-doped fiber amplifiers, which are used as repeaters in the terrestrial and submarine fiber-optic transmission links that carry internet traffic. These elements absorb ultraviolet and infrared radiation and are commonly used in the production of sunglass lenses.
The complex Gd(DOTA) is used in magnetic resonance imaging.
As mentioned in the industrial applications section above, lanthanide metals are particularly useful in technologies that take advantage of their reactivity to specific wavelengths of light. Certain life science applications take advantage of the unique luminescence properties of lanthanide ion complexes (Ln(III) chelates or cryptates). These are well-suited for this application due to their large Stokes shifts and extremely long emission lifetimes (from microseconds to milliseconds) compared to more traditional fluorophores (e.g., fluorescein, allophycocyanin, phycoerythrin, and rhodamine). The biological fluids or serum commonly used in these research applications contain many compounds and proteins which are naturally fluorescent. Therefore, the use of conventional, steady-state fluorescence measurement presents serious limitations in assay sensitivity. Long-lived fluorophores, such as lanthanides, combined with time-resolved detection (a delay between excitation and emission detection) minimizes prompt fluorescence interference.
Time-resolved fluorometry (TRF) combined with fluorescence resonance energy transfer (FRET) offers a powerful tool for drug discovery researchers: Time-Resolved Fluorescence Resonance Energy Transfer or TR-FRET. TR-FRET combines the low background aspect of TRF with the homogeneous assay format of FRET. The resulting assay provides an increase in flexibility, reliability and sensitivity in addition to higher throughput and fewer false positive/false negative results.
This method involves two fluorophores: a donor and an acceptor. Excitation of the donor fluorophore (in this case, the lanthanide ion complex) by an energy source (e.g. a flash lamp or laser) produces an energy transfer to the acceptor fluorophore if they are within a given proximity to each other (known as the Förster radius). The acceptor fluorophore in turn emits light at its characteristic wavelength.
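The distance dependence at work here is commonly modelled by the Förster relation (a standard formula, not given in the source), where E is the transfer efficiency, r the donor–acceptor separation, and R0 the Förster radius, at which E = 1/2:

```latex
E = \frac{1}{1 + (r/R_0)^6}
```

The sixth-power dependence is what makes TR-FRET such a sharp proximity probe: efficiency falls from roughly 98% at r = R0/2 to roughly 2% at r = 2R0.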
The two most commonly used lanthanides in life science assays are shown below along with their corresponding acceptor dye as well as their excitation and emission wavelengths and resultant Stokes shift (separation of excitation and emission wavelengths).
Current research indicates that lanthanide elements can potentially be used as anticancer agents. The main role of the lanthanides in these studies is to inhibit the proliferation of cancer cells. Specifically, cerium and lanthanum have been studied for their roles as anti-cancer agents.
One of the specific elements from the lanthanide group that has been tested and used is cerium (Ce). Studies have used a protein–cerium complex to observe the effect of cerium on cancer cells; the hope was to inhibit cell proliferation and promote cytotoxicity. Transferrin receptors in cancer cells, such as those in breast cancer cells and epithelial cervical cells, promote the cell proliferation and malignancy of the cancer. Transferrin is a protein used to transport iron into cells and is needed to aid cancer cells in DNA replication; it acts as an iron-dependent growth factor for the cancerous cells. Cancer cells have much higher levels of transferrin receptors than normal cells and are very dependent on iron for their proliferation. Cerium has shown results as an anti-cancer agent due to its similarities in structure and biochemistry to iron: it may bind in place of iron to transferrin and then be brought into the cancer cells by transferrin-receptor-mediated endocytosis. Cerium binding to transferrin in place of iron inhibits the transferrin activity in the cell, creating a toxic environment for the cancer cells and causing a decrease in cell growth. This is the proposed mechanism for cerium's effect on cancer cells, though the real mechanism by which cerium inhibits cancer cell proliferation may be more complex. Specifically, in HeLa cancer cells studied in vitro, cell viability was decreased after 48 to 72 hours of cerium treatment. Cells treated with cerium alone showed decreases in cell viability, but cells treated with both cerium and transferrin showed more significant inhibition of cellular activity.
Another specific element that has been tested and used as an anti-cancer agent is lanthanum, more specifically lanthanum chloride (LaCl3). The lanthanum ion is used to affect the levels of the microRNAs let-7a and miR-34a in a cell throughout the cell cycle. When the lanthanum ion was introduced to cells in vivo or in vitro, it inhibited the rapid growth and induced apoptosis of the cancer cells (specifically cervical cancer cells). This effect was caused by the regulation of let-7a and miR-34a by the lanthanum ions. The mechanism for this effect is still unclear, but it is possible that the lanthanum acts in a similar way to cerium, binding to a ligand necessary for cancer cell proliferation.
Due to their sparse distribution in the Earth's crust and low aqueous solubility, the lanthanides have a low availability in the biosphere and for a long time were not known to naturally form part of any biological molecules. In 2007 a novel methanol dehydrogenase that strictly uses lanthanides as enzymatic cofactors was discovered in a bacterium from the phylum Verrucomicrobia, "Methylacidiphilum fumariolicum". This bacterium was found to survive only if lanthanides are present in its environment. Compared to most other non-dietary elements, non-radioactive lanthanides are classified as having low toxicity.
|
https://en.wikipedia.org/wiki?curid=18308
|
Henry I of England
Henry I (c. 1068 – 1 December 1135), also known as Henry Beauclerc, was King of England from 1100 to his death in 1135. He was the fourth son of William the Conqueror and was educated in Latin and the liberal arts. On William's death in 1087, Henry's elder brothers Robert Curthose and William Rufus inherited Normandy and England, respectively, but Henry was left landless. He purchased the County of Cotentin in western Normandy from Robert, but his brothers deposed him in 1091. He gradually rebuilt his power base in the Cotentin and allied himself with William against Robert.
Present at the place where his brother William died in a hunting accident in 1100, Henry seized the English throne, promising at his coronation to correct many of William's less popular policies. He married Matilda of Scotland and they had two surviving children, William Adelin and Empress Matilda; he also had many illegitimate children by his many mistresses. Robert, who invaded in 1101, disputed Henry's control of England; this military campaign ended in a negotiated settlement that confirmed Henry as king. The peace was short-lived, and Henry invaded the Duchy of Normandy in 1105 and 1106, finally defeating Robert at the Battle of Tinchebray. Henry kept Robert imprisoned for the rest of his life. Henry's control of Normandy was challenged by Louis VI of France, Baldwin VII of Flanders and Fulk V of Anjou, who promoted the rival claims of Robert's son, William Clito, and supported a major rebellion in the Duchy between 1116 and 1119. Following Henry's victory at the Battle of Brémule, a favourable peace settlement was agreed with Louis in 1120.
Considered by contemporaries to be a harsh but effective ruler, Henry skilfully manipulated the barons in England and Normandy. In England, he drew on the existing Anglo-Saxon system of justice, local government and taxation, but also strengthened it with additional institutions, including the royal exchequer and itinerant justices. Normandy was also governed through a growing system of justices and an exchequer. Many of the officials who ran Henry's system were "new men" of obscure backgrounds, rather than from families of high status, who rose through the ranks as administrators. Henry encouraged ecclesiastical reform, but became embroiled in a serious dispute in 1101 with Archbishop Anselm of Canterbury, which was resolved through a compromise solution in 1105. He supported the Cluniac order and played a major role in the selection of the senior clergy in England and Normandy.
Henry's son William drowned in the "White Ship" disaster of 1120, throwing the royal succession into doubt. Henry took a second wife, Adeliza of Louvain, in the hope of having another son, but their marriage was childless. In response to this, he declared his daughter Matilda his heir and married her to Geoffrey of Anjou. The relationship between Henry and the couple became strained, and fighting broke out along the border with Anjou. Henry died on 1 December 1135 after a week of illness. Despite his plans for Matilda, the King was succeeded by his nephew, Stephen of Blois, resulting in a period of civil war known as the Anarchy.
Henry was probably born in England in 1068, in either the summer or the last weeks of the year, possibly in the town of Selby in Yorkshire. His father was William the Conqueror, the Duke of Normandy who had invaded England in 1066 to become the King of England, establishing lands stretching into Wales. The invasion had created an Anglo-Norman ruling class, many with estates on both sides of the English Channel. These Anglo-Norman barons typically had close links to the kingdom of France, which was then a loose collection of counties and smaller polities, under only the nominal control of the king. Henry's mother, Matilda of Flanders, was the granddaughter of Robert II of France, and she probably named Henry after her uncle, King Henry I of France.
Henry was the youngest of William and Matilda's four sons. Physically he resembled his older brothers Robert Curthose, Richard and William Rufus, being, as historian David Carpenter describes, "short, stocky and barrel-chested," with black hair. As a result of their age differences and Richard's early death, Henry would have probably seen relatively little of his older brothers. He probably knew his sister Adela well, as the two were close in age. There is little documentary evidence for his early years; historians Warren Hollister and Kathleen Thompson suggest he was brought up predominantly in England, while Judith Green argues he was initially brought up in the Duchy. He was probably educated by the Church, possibly by Bishop Osmund, the King's chancellor, at Salisbury Cathedral; it is uncertain if this indicated an intent by his parents for Henry to become a member of the clergy. It is also uncertain how far Henry's education extended, but he was probably able to read Latin and had some background in the liberal arts. He was given military training by an instructor called Robert Achard, and Henry was knighted by his father on 24 May 1086.
In 1087, William was fatally injured during a campaign in the Vexin. Henry joined his dying father near Rouen in September, where the King partitioned his possessions among his sons. The rules of succession in western Europe at the time were uncertain; in some parts of France, primogeniture, in which the eldest son would inherit a title, was growing in popularity. In other parts of Europe, including Normandy and England, the tradition was for lands to be divided up, with the eldest son taking patrimonial lands – usually considered to be the most valuable – and younger sons given smaller, or more recently acquired, partitions or estates.
In dividing his lands, William appears to have followed the Norman tradition, distinguishing between Normandy, which he had inherited, and England, which he had acquired through war. William's second son, Richard, had died in a hunting accident, leaving Henry and his two brothers to inherit William's estate. Robert, the eldest, despite being in armed rebellion against his father at the time of his death, received Normandy. England was given to William Rufus, who was in favour with the dying king. Henry was given a large sum of money, usually reported as £5,000, with the expectation that he would also be given his mother's modest set of lands in Buckinghamshire and Gloucestershire. William's funeral at Caen was marred by angry complaints from a local man, and Henry may have been responsible for resolving the dispute by buying off the protester with silver.
Robert returned to Normandy, expecting to have been given both the Duchy and England, to find that William Rufus had crossed the Channel and been crowned king. The two brothers disagreed fundamentally over the inheritance, and Robert soon began to plan an invasion of England to seize the kingdom, helped by a rebellion by some of the leading nobles against William Rufus. Henry remained in Normandy and took up a role within Robert's court, possibly either because he was unwilling to side openly with William Rufus, or because Robert might have taken the opportunity to confiscate Henry's inherited money if he had tried to leave. William Rufus sequestered Henry's new estates in England, leaving Henry landless.
In 1088, Robert's plans for the invasion of England began to falter, and he turned to Henry, proposing that his brother lend him some of his inheritance, which Henry refused. Henry and Robert then came to an alternative arrangement, in which Robert would make Henry the count of western Normandy, in exchange for £3,000. Henry's lands formed a new countship based around a delegation of the ducal authority in the Cotentin, but extended across the Avranchin, with control over the bishoprics of both. This also gave Henry influence over two major Norman leaders, Hugh d'Avranches and Richard de Redvers, and the abbey of Mont Saint-Michel, whose lands spread out further across the Duchy. Robert's invasion force failed to leave Normandy, leaving William Rufus secure in England.
Henry quickly established himself as count, building up a network of followers from western Normandy and eastern Brittany, whom historian John Le Patourel has characterised as "Henry's gang". His early supporters included Roger of Mandeville, Richard of Redvers, Richard d'Avranches and Robert Fitzhamon, along with the churchman Roger of Salisbury. Robert attempted to go back on his deal with Henry and re-appropriate the county, but Henry's grip was already sufficiently firm to prevent this. Robert's rule of the Duchy was chaotic, and parts of Henry's lands became almost independent of central control from Rouen.
During this period, neither William nor Robert seems to have trusted Henry. Waiting until the rebellion against William Rufus was safely over, Henry returned to England in July 1088. He met with the King but was unable to persuade him to grant him their mother's estates, and travelled back to Normandy in the autumn. While he had been away, however, Odo, Bishop of Bayeux, who regarded Henry as a potential competitor, had convinced Robert that Henry was conspiring against the duke with William Rufus. On landing, Odo seized Henry and imprisoned him in Neuilly-la-Forêt, and Robert took back the county of the Cotentin. Henry was held there over the winter, but in the spring of 1089 the senior elements of the Normandy nobility prevailed upon Robert to release him.
Although no longer formally the Count of Cotentin, Henry continued to control the west of Normandy. The struggle between his brothers continued. William Rufus continued to put down resistance to his rule in England, but began to build a number of alliances against Robert with barons in Normandy and neighbouring Ponthieu. Robert allied himself with Philip I of France. In late 1090 William Rufus encouraged Conan Pilatus, a powerful burgher in Rouen, to rebel against Robert; Conan was supported by most of Rouen and made appeals to the neighbouring ducal garrisons to switch allegiance as well.
Robert issued an appeal for help to his barons, and Henry was the first to arrive in Rouen in November. Violence broke out, leading to savage, confused street fighting as both sides attempted to take control of the city. Robert and Henry left the castle to join the battle, but Robert then retreated, leaving Henry to continue the fighting. The battle turned in favour of the ducal forces and Henry took Conan prisoner. Henry was angry that Conan had turned against his feudal lord. He had him taken to the top of Rouen Castle and then, despite Conan's offers to pay a huge ransom, threw him off the top of the castle to his death. Contemporaries considered Henry to have acted appropriately in making an example of Conan, and Henry became famous for his exploits in the battle.
In the aftermath, Robert forced Henry to leave Rouen, probably because Henry's role in the fighting had been more prominent than his own, and possibly because Henry had asked to be formally reinstated as the count of the Cotentin. In early 1091, William Rufus invaded Normandy with a sufficiently large army to bring Robert to the negotiating table. The two brothers signed a treaty at Rouen, granting William Rufus a range of lands and castles in Normandy. In return, William Rufus promised to support Robert's attempts to regain control of the neighbouring county of Maine, once under Norman control, and help in regaining control over the Duchy, including Henry's lands. They nominated each other as heirs to England and Normandy, excluding Henry from any succession while either one of them lived.
War now broke out between Henry and his brothers. Henry mobilised a mercenary army in the west of Normandy, but as William Rufus and Robert's forces advanced, his network of baronial support melted away. Henry focused his remaining forces at Mont Saint-Michel, where he was besieged, probably in March 1091. The site was easy to defend, but lacked fresh water. The chronicler William of Malmesbury suggested that when Henry's water ran short, Robert allowed his brother fresh supplies, leading to remonstrations between Robert and William Rufus. The events of the final days of the siege are unclear: the besiegers had begun to argue about the future strategy for the campaign, but Henry then abandoned Mont Saint-Michel, probably as part of a negotiated surrender. He left for Brittany and crossed over into France.
Henry's next steps are not well documented; one chronicler, Orderic Vitalis, suggests that he travelled in the French Vexin, along the Normandy border, for over a year with a small band of followers. By the end of the year, Robert and William Rufus had fallen out once again, and the Treaty of Rouen had been abandoned. In 1092, Henry and his followers seized the Normandy town of Domfront. Domfront had previously been controlled by Robert of Bellême, but the inhabitants disliked his rule and invited Henry to take over the town, which he did in a bloodless coup. Over the next two years, Henry re-established his network of supporters across western Normandy, forming what Judith Green terms a "court in waiting". By 1094, he was allocating lands and castles to his followers as if he were the Duke of Normandy. William Rufus began to support Henry with money, encouraging his campaign against Robert, and Henry used some of this to construct a substantial castle at Domfront.
William Rufus crossed into Normandy to take the war to Robert in 1094, and when progress stalled, called upon Henry for assistance. Henry responded, but travelled to London instead of joining the main campaign further east in Normandy, possibly at the request of the King, who in any event abandoned the campaign and returned to England. Over the next few years, Henry appears to have strengthened his power base in western Normandy, visiting England occasionally to attend at William Rufus's court. In 1095 Pope Urban II called the First Crusade, encouraging knights from across Europe to join. Robert joined the Crusade, borrowing money from William Rufus to do so, and granting the King temporary custody of his part of the Duchy in exchange. The King appeared confident of regaining the remainder of Normandy from Robert, and Henry appeared ever closer to William Rufus. They campaigned together in the Norman Vexin between 1097 and 1098.
On the afternoon of 2 August 1100, the King went hunting in the New Forest, accompanied by a team of huntsmen and a number of the Norman nobility, including Henry. An arrow, possibly shot by the baron Walter Tirel, hit and killed William Rufus. Numerous conspiracy theories have been put forward suggesting that the King was killed deliberately; most modern historians reject these, as hunting was a risky activity, and such accidents were common. Chaos broke out, and Tirel fled the scene for France, either because he had shot the fatal arrow, or because he had been incorrectly accused and feared that he would be made a scapegoat for the King's death.
Henry rode to Winchester, where an argument ensued as to who now had the best claim to the throne. William of Breteuil championed the rights of Robert, who was still abroad, returning from the Crusade, and to whom Henry and the barons had given homage in previous years. Henry argued that, unlike Robert, he had been born to a reigning king and queen, thereby giving him a claim under the right of porphyrogeniture. Tempers flared, but Henry, supported by Henry de Beaumont and Robert of Meulan, held sway and persuaded the barons to follow him. He occupied Winchester Castle and seized the royal treasury.
Henry was hastily crowned king in Westminster Abbey on 5 August by Maurice, the Bishop of London, as Anselm, the Archbishop of Canterbury, had been exiled by William Rufus, and Thomas, the Archbishop of York, was in the north of England at Ripon. In accordance with English tradition and in a bid to legitimise his rule, Henry issued a coronation charter laying out various commitments. The new king presented himself as having restored order to a trouble-torn country. He announced that he would abandon William Rufus's policies towards the Church, which had been seen as oppressive by the clergy; he promised to prevent royal abuses of the barons' property rights, and assured a return to the gentler customs of Edward the Confessor; he asserted that he would "establish a firm peace" across England and ordered "that this peace shall henceforth be kept".
In addition to his existing circle of supporters, many of whom were richly rewarded with new lands, Henry quickly co-opted many of the existing administration into his new royal household. William Giffard, William Rufus's chancellor, was made the Bishop of Winchester, and the prominent sheriffs Urse d'Abetot, Haimo Dapifer and Robert Fitzhamon continued to play a senior role in government. By contrast, the unpopular Ranulf Flambard, the Bishop of Durham and a key member of the previous regime, was imprisoned in the Tower of London and charged with corruption. The late king had left many church positions unfilled, and Henry set about nominating candidates to these, in an effort to build further support for his new government. The appointments needed to be consecrated, and Henry wrote to Anselm, apologising for having been crowned while the Archbishop was still in France and asking him to return at once.
On 11 November 1100 Henry married Matilda, the daughter of Malcolm III of Scotland. Henry was now around 31 years old, but late marriages for noblemen were not unusual in the 11th century. The pair had probably first met earlier in the previous decade, possibly being introduced through Bishop Osmund of Salisbury. Historian Warren Hollister argues that Henry and Matilda were emotionally close, but their union was also certainly politically motivated. Matilda had originally been named Edith, an Anglo-Saxon name, and was a member of the West Saxon royal family, being the niece of Edgar the Ætheling, the great-granddaughter of Edmund Ironside and a descendant of Alfred the Great. For Henry, marrying Matilda gave his reign increased legitimacy, and for Matilda, an ambitious woman, it was an opportunity for high status and power in England.
Matilda had been educated in a sequence of convents, however, and may well have taken the vows to formally become a nun, which formed an obstacle to the marriage progressing. She did not wish to be a nun and appealed to Anselm for permission to marry Henry, and the Archbishop established a council at Lambeth Palace to judge the issue. Despite some dissenting voices, the council concluded that although Matilda had lived in a convent, she had not actually become a nun and was therefore free to marry, a judgement that Anselm then affirmed, allowing the marriage to proceed. Matilda proved an effective queen for Henry, acting as a regent in England on occasion, addressing and presiding over councils, and extensively supporting the arts. The couple soon had two children, Matilda, born in 1102, and William Adelin, born in 1103; it is possible that they also had a second son, Richard, who died young. Following the birth of these children, Matilda preferred to remain based in Westminster while Henry travelled across England and Normandy, either for religious reasons or because she enjoyed being involved in the machinery of royal governance.
Henry had a considerable sexual appetite and enjoyed a substantial number of sexual partners, resulting in many illegitimate children, at least nine sons and 13 daughters, many of whom he appears to have recognised and supported. It was normal for unmarried Anglo-Norman noblemen to have sexual relations with prostitutes and local women, and kings were also expected to have mistresses. Some of these relationships occurred before Henry was married, but many others took place after his marriage to Matilda. Henry had a wide range of mistresses from a range of backgrounds, and the relationships appear to have been conducted relatively openly. He may have chosen some of his noble mistresses for political purposes, but the evidence to support this theory is limited.
By early 1101, Henry's new regime was established and functioning, but many of the Anglo-Norman elite still supported his brother Robert, or would be prepared to switch sides if Robert appeared likely to gain power in England. In February, Flambard escaped from the Tower of London and crossed the Channel to Normandy, where he injected fresh direction and energy into Robert's attempts to mobilise an invasion force. By July, Robert had formed an army and a fleet, ready to move against Henry in England. Raising the stakes in the conflict, Henry seized Flambard's lands and, with the support of Anselm, Flambard was removed from his position as bishop. The King held court in April and June, where the nobility renewed their oaths of allegiance to him, but their support still appeared partial and shaky.
With the invasion imminent, Henry mobilised his forces and fleet outside Pevensey, close to Robert's anticipated landing site, training some of them personally in how to counter cavalry charges. Despite English levies and knights owing military service to the Church arriving in considerable numbers, many of his barons did not appear. Anselm intervened with some of the doubters, emphasising the religious importance of their loyalty to Henry. Robert unexpectedly landed further up the coast at Portsmouth on 20 July with a modest force of a few hundred men, but these were quickly joined by many of the barons in England. However, instead of marching into nearby Winchester and seizing Henry's treasury, Robert paused, giving Henry time to march west and intercept the invasion force.
The two armies met at Alton, Hampshire, where peace negotiations began, possibly initiated by either Henry or Robert, and probably supported by Flambard. The brothers then agreed to the Treaty of Alton, under which Robert released Henry from his oath of homage and recognised him as king; Henry renounced his claims on western Normandy, except for Domfront, and agreed to pay Robert £2,000 a year for life; if either brother died without a male heir, the other would inherit his lands; the barons whose lands had been seized by either the King or the Duke for supporting his rival would have them returned, and Flambard would be reinstated as bishop; the two brothers would campaign together to defend their territories in Normandy. Robert remained in England for a few months more with Henry before returning to Normandy.
Despite the treaty, Henry set about inflicting severe penalties on the barons who had stood against him during the invasion. William de Warenne, the Earl of Surrey, was accused of fresh crimes, which were not covered by the Alton amnesty, and was banished from England. In 1102 Henry then turned against Robert of Bellême and his brothers, the most powerful of the barons, accusing him of 45 different offences. Robert escaped and took up arms against Henry. Henry besieged Robert's castles at Arundel, Tickhill and Shrewsbury, pushing down into the south-west to attack Bridgnorth. His power base in England broken, Robert accepted Henry's offer of banishment and left the country for Normandy.
Henry's network of allies in Normandy became stronger during 1103. He arranged the marriages of his illegitimate daughters, Juliane and Matilda, to Eustace of Breteuil and Rotrou III, Count of Perche, respectively, the latter union securing the Norman border. Henry attempted to win over other members of the Norman nobility, granting English estates and making lucrative offers to key Norman lords. Duke Robert continued to fight Robert of Bellême, but the Duke's position worsened, until by 1104 he had to ally himself formally with Bellême to survive. Arguing that the Duke had broken the terms of their treaty, the King crossed over the Channel to Domfront, where he met with senior barons from across Normandy, eager to ally themselves with him. He confronted the Duke and accused him of siding with his enemies, before returning to England.
Normandy continued to disintegrate into chaos. In 1105, Henry sent his friend Robert Fitzhamon and a force of knights into the Duchy, apparently to provoke a confrontation with Duke Robert. Fitzhamon was captured, and Henry used this as an excuse to invade, promising to restore peace and order. Henry had the support of most of the neighbouring counts around Normandy's borders, and King Philip of France was persuaded to remain neutral. Henry occupied western Normandy, and advanced east on Bayeux, where Fitzhamon was held. The city refused to surrender, and Henry besieged it, burning it to the ground. Terrified of meeting the same fate, the town of Caen switched sides and surrendered, allowing Henry to advance on Falaise, Calvados, which he took with some casualties. His campaign stalled, and the King instead began peace discussions with Robert. The negotiations were inconclusive and the fighting dragged on until Christmas, when Henry returned to England.
Henry invaded again in July 1106, hoping to provoke a decisive battle. After some initial tactical successes, he turned south-west towards the castle of Tinchebray. He besieged the castle and Duke Robert, supported by Robert of Bellême, advanced from Falaise to relieve it. After attempts at negotiation failed, the Battle of Tinchebray took place, probably on 28 September. The battle lasted around an hour, and began with a charge by Duke Robert's cavalry; the infantry and dismounted knights of both sides then joined the battle. Henry's reserves, led by Elias I, Count of Maine, and Alan IV, Duke of Brittany, attacked the enemy's flanks, routing first Bellême's troops and then the bulk of the ducal forces. Duke Robert was taken prisoner, but Bellême escaped.
Henry mopped up the remaining resistance in Normandy, and Duke Robert ordered his last garrisons to surrender. Reaching Rouen, Henry reaffirmed the laws and customs of Normandy and took homage from the leading barons and citizens. The lesser prisoners taken at Tinchebray were released, but the Duke and several other leading nobles were imprisoned indefinitely. The Duke's son, William Clito, was only three years old and was released to the care of Helias of Saint-Saens, a Norman baron. Henry reconciled himself with Robert of Bellême, who gave up the ducal lands he had seized and rejoined the royal court. Henry had no way of legally removing the Duchy from his brother, and initially Henry avoided using the title "duke" at all, emphasising that, as the King of England, he was only acting as the guardian of the troubled Duchy.
Henry inherited the kingdom of England from William Rufus, giving him a claim of suzerainty over Wales and Scotland, and acquired the Duchy of Normandy, a complex entity with troubled borders. The borders between England and Scotland were still uncertain during Henry's reign, with Anglo-Norman influence pushing northwards through Cumbria, but his relationship with King David I of Scotland was generally good, partially due to Henry's marriage to David's sister Matilda. In Wales, Henry used his power to coerce and charm the indigenous Welsh princes, while Norman Marcher Lords pushed across the valleys of South Wales. Normandy was controlled via various interlocking networks of ducal, ecclesiastical and family contacts, backed by a growing string of important ducal castles along the borders. Alliances and relationships with neighbouring counties along the Norman border were particularly important to maintaining the stability of the Duchy.
Henry ruled through the various barons and lords in England and Normandy, whom he manipulated skilfully for political effect. Political friendships, termed "amicitia" in Latin, were important during the 12th century, and Henry maintained a wide range of these, mediating between his friends in various factions across his realm when necessary, and rewarding those who were loyal to him. He also had a reputation for punishing those barons who stood against him, and he maintained an effective network of informers and spies who reported to him on events. Henry was a harsh, firm ruler, but not excessively so by the standards of the day. Over time, he increased the degree of his control over the barons, removing his enemies and bolstering his friends until the "reconstructed baronage", as historian Warren Hollister describes it, was predominantly loyal and dependent on the King.
Henry's itinerant royal court comprised various parts. At the heart was his domestic household, called the "domus"; a wider grouping was termed the "familia regis", and formal gatherings of the court were termed "curia". The "domus" was divided into several parts. The chapel, headed by the chancellor, looked after the royal documents, the chamber dealt with financial affairs and the master-marshal was responsible for travel and accommodation. The "familia regis" included Henry's mounted household troops, up to several hundred strong, who were drawn from a wide range of social backgrounds and could be deployed across England and Normandy as required. Initially Henry continued his father's practice of regular crown-wearing ceremonies at his "curia", but these became less frequent as the years passed. Henry's court was grand and ostentatious, financing the construction of large new buildings and castles, with a range of precious gifts on display, including his private menagerie of exotic animals, which he kept at Woodstock Palace. Despite being a lively community, Henry's court was more tightly controlled than those of previous kings. Strict rules controlled personal behaviour and prohibited members of the court from pillaging neighbouring villages, as had been the norm under William Rufus.
Henry was responsible for a substantial expansion of the royal justice system. In England, Henry drew on the existing Anglo-Saxon system of justice, local government and taxes, but strengthened it with additional central governmental institutions. Roger of Salisbury began to develop the royal exchequer after 1110, using it to collect and audit revenues from the King's sheriffs in the shires. Itinerant justices began to emerge under Henry, travelling around the country managing eyre courts, and many more laws were formally recorded. Henry gathered increasing revenue from the expansion of royal justice, both from fines and from fees. The first Pipe Roll that is known to have survived dates from 1130, recording royal expenditures. Henry reformed the coinage in 1107, 1108 and 1125, inflicting harsh corporal punishments on coiners found guilty of debasing the currency. In Normandy, he restored law and order after 1106, operating through a body of Norman justices and an exchequer system similar to that in England. Norman institutions grew in scale and scope under Henry, although less quickly than in England. Many of the officials who ran Henry's system were termed "new men", relatively low-born individuals who rose through the ranks as administrators, managing justice or the royal revenues.
Henry's ability to govern was intimately bound up with the Church, which formed the key to the administration of both England and Normandy, and this relationship changed considerably over the course of his reign. William the Conqueror had reformed the English Church with the support of his Archbishop of Canterbury, Lanfranc, who became a close colleague and advisor to the King. Under William Rufus this arrangement had collapsed, the King and Archbishop Anselm had become estranged and Anselm had gone into exile. Henry also believed in Church reform, but on taking power in England he became embroiled in the investiture controversy.
The argument concerned who should invest a new bishop with his staff and ring: traditionally, this had been carried out by the King in a symbolic demonstration of royal power, but Pope Urban II had condemned this practice in 1099, arguing that only the papacy could carry out this task, and declaring that the clergy should not give homage to their local temporal rulers. Anselm returned to England from exile in 1100 having heard Urban's pronouncement, and informed Henry that he would be complying with the Pope's wishes. Henry was in a difficult position. On the one hand, the symbolism and homage were important to him; on the other, he needed Anselm's support in his struggle with his brother Duke Robert.
Anselm stuck firmly to the letter of the papal decree, despite Henry's attempts to persuade him to give way in return for a vague assurance of a future royal compromise. Matters escalated, with Anselm going back into exile and Henry confiscating the revenues of his estates. Anselm threatened excommunication, and in July 1105 the two men finally negotiated a solution. A distinction was drawn between the secular and ecclesiastical powers of the prelates, under which Henry gave up his right to invest his clergy, but retained the custom of requiring them to come and do homage for the temporalities, the landed properties they held in England. Despite this argument, the pair worked closely together, combining to deal with Duke Robert's invasion of 1101, for example, and holding major reforming councils in 1102 and 1108.
A long-running dispute between the Archbishops of Canterbury and York flared up under Anselm's successor, Ralph d'Escures. Canterbury, traditionally the senior of the two establishments, had long argued that the Archbishop of York should formally promise to obey their Archbishop, but York argued that the two episcopates were independent within the English Church and that no such promise was necessary. Henry supported the primacy of Canterbury, to ensure that England remained under a single ecclesiastical administration, but the Pope preferred the case of York. The matter was complicated by Henry's personal friendship with Thurstan, the Archbishop of York, and the King's desire that the case should not end up in a papal court, beyond royal control. Henry needed the support of the Papacy in his struggle with Louis of France, however, and therefore allowed Thurstan to attend the Council of Rheims in 1119, where Thurstan was then consecrated by the Pope with no mention of any duty towards Canterbury. Henry believed that this went against assurances Thurstan had previously made and exiled him from England until the King and Archbishop came to a negotiated solution the following year.
Even after the investiture dispute, Henry continued to play a major role in the selection of new English and Norman bishops and archbishops. He appointed many of his officials to bishoprics and, as historian Martin Brett suggests, "some of his officers could look forward to a mitre with all but absolute confidence". Henry's chancellors, and those of his queens, became bishops of Durham, Hereford, London, Lincoln, Winchester and Salisbury. Henry increasingly drew on a wider range of these bishops as advisors – particularly Roger of Salisbury – breaking with the earlier tradition of relying primarily on the Archbishop of Canterbury. The result was a cohesive body of administrators through which Henry could exercise careful influence, holding general councils to discuss key matters of policy. This stability shifted slightly after 1125, when he began to inject a wider range of candidates into the senior positions of the Church, often with more reformist views, and the impact of this generation would be felt in the years after Henry's death.
Like other rulers of the period, Henry donated to the Church and patronised various religious communities, but contemporary chroniclers did not consider him an unusually pious king. His personal beliefs and piety may, however, have developed during the course of his life. Henry had always taken an interest in religion, but in his later years he may have become much more concerned about spiritual affairs. If so, the major shifts in his thinking would appear to have occurred after 1120, when his son William Adelin died, and 1129, when his daughter's marriage teetered on the verge of collapse.
As a proponent of religious reform, Henry gave extensively to reformist groups within the Church. He was a keen supporter of the Cluniac order, probably for intellectual reasons. He donated money to the abbey at Cluny itself, and after 1120 gave generously to Reading Abbey, a Cluniac establishment. Construction on Reading began in 1121, and Henry endowed it with rich lands and extensive privileges, making it a symbol of his dynastic line. He also promoted the conversion of communities of clerks into Augustinian canons, the foundation of leper hospitals, the expansion of nunneries, and the charismatic orders of the Savigniacs and Tironensians. He was an avid collector of relics, sending an embassy to Constantinople in 1118 to collect Byzantine items, some of which were donated to Reading Abbey.
Normandy faced an increased threat from France, Anjou and Flanders after 1108. King Louis VI succeeded to the French throne in 1108 and began to reassert central royal power. Louis demanded Henry give homage to him and that two disputed castles along the Normandy border be placed into the control of neutral castellans. Henry refused, and Louis responded by mobilising an army. After some arguments, the two kings negotiated a truce and retreated without fighting, leaving the underlying issues unresolved. Fulk V assumed power in Anjou in 1109 and began to rebuild Angevin authority. He inherited the county of Maine, but refused to recognise Henry as his feudal lord and instead allied himself with Louis. Robert II of Flanders also briefly joined the alliance, before his death in 1111.
In 1108, Henry betrothed his six-year-old daughter, Matilda, to Henry V, the future Holy Roman Emperor. For King Henry, this was a prestigious match; for Henry V, it was an opportunity to restore his financial situation and fund an expedition to Italy, as he received a dowry of £6,666 from England and Normandy. Raising this money proved challenging, and required the implementation of a special "aid", or tax, in England. Matilda was crowned German queen in 1110.
Henry responded to the French and Angevin threat by expanding his own network of supporters beyond the Norman borders. Some Norman barons deemed unreliable were arrested or dispossessed, and Henry used their forfeited estates to bribe his potential allies in the neighbouring territories, in particular Maine. Around 1110, Henry attempted to arrest the young William Clito, but William's mentors moved him to the safety of Flanders before he could be taken. At about this time, Henry probably began to style himself as the duke of Normandy. Robert of Bellême turned against Henry once again, and when he appeared at Henry's court in 1112 in a new role as a French ambassador, he was arrested and imprisoned.
Rebellions broke out in France and Anjou between 1111 and 1113, and Henry crossed into Normandy to support his nephew, Count Theobald II of Blois, who had sided against Louis in the uprising. In a bid to isolate Louis diplomatically, Henry betrothed his young son, William Adelin, to Fulk's daughter Matilda, and married his illegitimate daughter Matilda to Duke Conan III of Brittany, creating alliances with Anjou and Brittany respectively. Louis backed down and in March 1113 met with Henry near Gisors to agree a peace settlement, giving Henry the disputed fortresses and confirming Henry's overlordship of Maine, Bellême and Brittany.
Meanwhile, the situation in Wales was deteriorating. Henry had conducted a campaign in South Wales in 1108, pushing out royal power in the region and colonising the area around Pembroke with Flemings. By 1114, some of the resident Norman lords were under attack, while in Mid-Wales, Owain ap Cadwgan blinded one of the political hostages he was holding, and in North Wales Gruffudd ap Cynan threatened the power of the Earl of Chester. Henry sent three armies into Wales that year, with Gilbert Fitz Richard leading a force from the south, Alexander, King of Scotland, pressing from the north and Henry himself advancing into Mid-Wales. Owain and Gruffudd sued for peace, and Henry accepted a political compromise. He reinforced the Welsh Marches with his own appointees, strengthening the border territories.
Concerned about the succession, Henry sought to persuade Louis VI to accept his son, William Adelin, as the legitimate future Duke of Normandy, in exchange for his son's homage. Henry crossed into Normandy in 1115 and assembled the Norman barons to swear loyalty; he also came close to negotiating a settlement with Louis, affirming William's right to the Duchy in exchange for a large sum of money. However, Louis, backed by his ally Baldwin of Flanders, instead declared that he considered William Clito the legitimate heir to the Duchy.
War broke out after Henry returned to Normandy with an army to support Theobald of Blois, who was under attack from Louis. Henry and Louis raided each other's towns along the border, and a wider conflict then broke out, probably in 1116. Henry was pushed onto the defensive as French, Flemish and Angevin forces began to pillage the Normandy countryside. Amaury III of Montfort and many other barons rose up against Henry, and there was an assassination plot from within his own household. Henry's wife, Matilda, died in early 1118, but the situation in Normandy was sufficiently pressing that Henry was unable to return to England for her funeral.
Henry responded by mounting campaigns against the rebel barons and deepening his alliance with Theobald. Baldwin of Flanders was wounded in battle and died in September 1118, easing the pressure on Normandy from the north-east. Henry attempted to crush a revolt in the city of Alençon, but was defeated by Fulk and the Angevin army. Forced to retreat from Alençon, Henry's position deteriorated alarmingly, as his resources became overstretched and more barons abandoned his cause. Early in 1119, Eustace of Breteuil and Henry's daughter, Juliana, threatened to join the baronial revolt. Hostages were exchanged in a bid to avoid conflict, but relations broke down and both sides mutilated their captives. Henry attacked and took the town of Breteuil, Eure, despite Juliana's attempt to kill her father with a crossbow. In the aftermath, Henry dispossessed the couple of almost all of their lands in Normandy.
Henry's situation improved in May 1119 when he enticed Fulk to switch sides by finally agreeing to marry William Adelin to Fulk's daughter, Matilda, and paying Fulk a large sum of money. Fulk left for the Levant, leaving the County of Maine in Henry's care, and the King was free to focus on crushing his remaining enemies. During the summer Henry advanced into the Norman Vexin, where he encountered Louis's army, resulting in the Battle of Brémule. Henry appears to have deployed scouts and then organised his troops into several carefully formed lines of dismounted knights. Unlike Henry's forces, the French knights remained mounted; they hastily charged the Anglo-Norman positions, breaking through the first rank of the defences but then becoming entangled in Henry's second line of knights. Surrounded, the French army began to collapse. In the melee, Henry was hit by a sword blow, but his armour protected him. Louis and William Clito escaped from the battle, leaving Henry to return to Rouen in triumph.
The war slowly petered out after this battle, and Louis took the dispute over Normandy to Pope Callixtus II's council in Reims that October. Henry faced a number of French complaints concerning his acquisition and subsequent management of Normandy, and despite being defended by Geoffrey, the Archbishop of Rouen, Henry's case was shouted down by the pro-French elements of the council. Callixtus declined to support Louis, however, and merely advised the two rulers to seek peace. Amaury de Montfort came to terms with Henry, but Henry and William Clito failed to find a mutually satisfactory compromise. In June 1120, Henry and Louis formally made peace on terms advantageous to the King of England: William Adelin gave homage to Louis, and in return Louis confirmed William's rights to the Duchy.
Henry's succession plans were thrown into chaos by the sinking of the "White Ship" on 25 November 1120. Henry had left the port of Barfleur for England in the early evening, leaving William Adelin and many of the younger members of the court to follow on that night in a separate vessel, the "White Ship". Both the crew and passengers were drunk and, just outside the harbour, the ship hit a submerged rock. The ship sank, killing as many as 300 people, with only one survivor, a butcher from Rouen. Henry's court was initially too scared to report William's death to the King. When he was finally told, he collapsed with grief.
The disaster left Henry with no legitimate son, his various nephews now the closest possible male heirs. Henry announced he would take a new wife, Adeliza of Louvain, opening up the prospect of a new royal son, and the two were married at Windsor Castle in January 1121. Henry appears to have chosen her because she was attractive and came from a prestigious noble line. Adeliza seems to have been fond of Henry and joined him in his travels, probably to maximise the chances of her conceiving a child. The "White Ship" disaster initiated fresh conflict in Wales, where the drowning of Richard, Earl of Chester, encouraged a rebellion led by Maredudd ap Bleddyn. Henry intervened in North Wales that summer with an army and, although he was hit by a Welsh arrow, the campaign reaffirmed royal power across the region.
Henry's alliance with Anjou – which had been based on his son William marrying Fulk's daughter Matilda – began to disintegrate. Fulk returned from the Levant and demanded that Henry return Matilda and her dowry, a range of estates and fortifications in Maine. Matilda left for Anjou, but Henry argued that the dowry had in fact originally belonged to him before it came into the possession of Fulk, and so declined to hand the estates back to Anjou. Fulk married his daughter Sibylla to William Clito, and granted them Maine. Once again, conflict broke out, as Amaury de Montfort allied himself with Fulk and led a revolt along the Norman-Anjou border in 1123. Amaury was joined by several other Norman barons, headed by Waleran de Beaumont, one of the sons of Henry's old ally, Robert of Meulan.
Henry dispatched Robert of Gloucester and Ranulf le Meschin to Normandy and then intervened himself in late 1123. He began the process of besieging the rebel castles, before wintering in the Duchy. In the spring of 1124, campaigning began again. At the battle of Bourgthéroulde, Odo Borleng, castellan of Bernay, Eure, led the King's army; he received intelligence that the rebels were departing from their base at Beaumont-le-Roger, allowing him to ambush them as they traversed the Brotonne forest. Waleran charged the royal forces, but his knights were cut down by Odo's archers and the rebels were quickly overwhelmed. Waleran was captured, but Amaury escaped. Henry mopped up the remainder of the rebellion, blinding some of the rebel leaders – considered, at the time, a more merciful punishment than execution – and recovering the last rebel castles. He paid Pope Callixtus a large amount of money in exchange for the Papacy annulling the marriage of William Clito and Sibylla on the grounds of consanguinity.
Henry and Adeliza did not conceive any children, generating prurient speculation as to the possible explanation, and the future of the dynasty appeared at risk. Henry may have begun to look among his nephews for a possible heir. He may have considered Stephen of Blois as a possible option and, perhaps in preparation for this, he arranged a beneficial marriage for Stephen to a wealthy heiress, Matilda. Theobald of Blois, his close ally, may have also felt that he was in favour with Henry. William Clito, who was King Louis's preferred choice, remained opposed to Henry and was therefore unsuitable. Henry may have also considered his own illegitimate son, Robert of Gloucester, as a possible candidate, but English tradition and custom would have looked unfavourably on this.
Henry's plans shifted when the Empress Matilda's husband, the Emperor Henry, died in 1125. The King recalled his daughter to England the next year and declared that, should he die without a male heir, she was to be his rightful successor. The Anglo-Norman barons were gathered together at Westminster at Christmas 1126, where they swore to recognise Matilda and any future legitimate heir she might have. Putting forward a woman as a potential heir in this way was unusual: opposition to Matilda continued to exist within the English court, and Louis was vehemently opposed to her candidacy.
Fresh conflict broke out in 1127, when the childless Charles I, Count of Flanders, was murdered, creating a local succession crisis. Backed by King Louis, William Clito was chosen by the Flemings to become their new ruler. This development potentially threatened Normandy, and Henry began to finance a proxy war in Flanders, promoting the claims of William's Flemish rivals. In an effort to disrupt the French alliance with William, Henry mounted an attack into France in 1128, forcing Louis to cut his aid to William. William died unexpectedly in July, removing the last major challenger to Henry's rule and bringing the war in Flanders to a halt. Without William, the baronial opposition in Normandy lacked a leader. A fresh peace was made with France, and Henry was finally able to release the remaining prisoners from the revolt of 1123, including Waleran of Meulan, who was rehabilitated into the royal court.
Meanwhile, Henry rebuilt his alliance with Fulk of Anjou, this time by marrying Matilda to Fulk's eldest son, Geoffrey. The pair were betrothed in 1127 and married the following year. It is unknown whether Henry intended Geoffrey to have any future claim on England or Normandy, and he was probably keeping his son-in-law's status deliberately uncertain. Similarly, although Matilda was granted a number of Normandy castles as part of her dowry, it was not specified when the couple would actually take possession of them. Fulk left Anjou for Jerusalem in 1129, declaring Geoffrey the Count of Anjou and Maine. The marriage proved difficult, as the couple did not particularly like each other and the disputed castles proved a point of contention, resulting in Matilda returning to Normandy later that year. Henry appears to have blamed Geoffrey for the separation, but in 1131 the couple were reconciled. Much to Henry's pleasure and relief, Matilda then gave birth to two sons, Henry and Geoffrey, in 1133 and 1134 respectively.
Relations between Henry, Matilda, and Geoffrey became increasingly strained during the King's final years. Matilda and Geoffrey suspected that they lacked genuine support in England. In 1135 they urged Henry to hand over the royal castles in Normandy to Matilda whilst he was still alive, and insisted that the Norman nobility swear immediate allegiance to her, thereby giving the couple a more powerful position after Henry's death. Henry angrily declined to do so, probably out of concern that Geoffrey would try to seize power in Normandy. A fresh rebellion broke out amongst the barons in southern Normandy, led by William III, Count of Ponthieu, whereupon Geoffrey and Matilda intervened in support of the rebels.
Henry campaigned throughout the autumn, strengthening the southern frontier, and then travelled to Lyons-la-Forêt in November to enjoy some hunting, still apparently healthy. There he fell ill – according to the chronicler Henry of Huntingdon, he ate too many ("a surfeit of") lampreys against his physician's advice – and his condition worsened over the course of a week. Once the condition appeared terminal, Henry gave confession and summoned Archbishop Hugh of Amiens, who was joined by Robert of Gloucester and other members of the court. In accordance with custom, preparations were made to settle Henry's outstanding debts and to revoke outstanding sentences of forfeiture. The King died on 1 December 1135, and his corpse was taken to Rouen accompanied by the barons, where it was embalmed; his entrails were buried locally at the priory of Notre-Dame du Pré, and the preserved body was taken on to England, where it was interred at Reading Abbey.
Despite Henry's efforts, the succession was disputed. When news began to spread of the King's death, Geoffrey and Matilda were in Anjou supporting the rebels in their campaign against the royal army, which included a number of Matilda's supporters such as Robert of Gloucester. Many of these barons had taken an oath to stay in Normandy until the late king was properly buried, which prevented them from returning to England. The Norman nobility discussed declaring Theobald of Blois king. Theobald's younger brother, Stephen of Blois, quickly crossed from Boulogne to England, however, accompanied by his military household. Hugh Bigod dubiously testified that Henry, on his deathbed, had released the barons from their oath to Matilda, and with the help of his brother, Henry of Blois, Stephen seized power in England and was crowned king on 22 December. Matilda did not give up her claim to England and Normandy, appealing at first to the Pope against the decision to allow the coronation of Stephen, and then invading England to start a prolonged civil war, known as the Anarchy, between 1135 and 1153.
Historians have drawn on a range of sources on Henry, including the accounts of chroniclers; other documentary evidence, including early pipe rolls; and surviving buildings and architecture. The three main chroniclers to describe the events of Henry's life were William of Malmesbury, Orderic Vitalis, and Henry of Huntingdon, but each incorporated extensive social and moral commentary into their accounts and borrowed a range of literary devices and stereotypical events from other popular works. Other chroniclers include Eadmer, Hugh the Chanter, Abbot Suger, and the authors of the Welsh "Brut". Not all royal documents from the period have survived, but there are a number of royal acts, charters, writs, and letters, along with some early financial records. Some of these have since been discovered to be forgeries, and others had been subsequently amended or tampered with.
Late medieval historians seized on the accounts of selected chroniclers regarding Henry's education and gave him the title of Henry "Beauclerc", a theme echoed in the analysis of Victorian and Edwardian historians such as Francis Palgrave and Henry Davis. The historian Charles David dismissed this argument in 1929, showing the more extreme claims for Henry's education to be without foundation. Modern histories of Henry commenced with Richard Southern's work in the early 1960s, followed by extensive research during the rest of the 20th century into a wide number of themes from his reign in England, and a much more limited number of studies of his rule in Normandy. Only two major, modern biographies of Henry have been produced, C. Warren Hollister's posthumous volume in 2001, and Judith Green's 2006 work.
Interpretation of Henry's personality by historians has altered over time. Earlier historians such as Austin Poole and Richard Southern considered Henry a cruel, draconian ruler. More recent historians, such as Hollister and Green, view his implementation of justice much more sympathetically, particularly when set against the standards of the day, but even Green has noted that Henry was "in many respects highly unpleasant", and Alan Cooper has observed that many contemporary chroniclers were probably too scared of the King to voice much criticism. Historians have also debated the extent to which Henry's administrative reforms genuinely constituted an introduction of what Hollister and John Baldwin have termed systematic, "administrative kingship", or whether his outlook remained fundamentally traditional.
Henry's burial at Reading Abbey is marked by a local cross and a plaque, but the abbey was slowly demolished during the Dissolution of the Monasteries in the 16th century. The exact location of the tomb is uncertain, but the most likely site is now in a built-up area of central Reading, on the site of the former abbey choir. A plan to locate his remains was announced in March 2015, with support from English Heritage and Philippa Langley, who assisted in the successful discovery and exhumation of Richard III.
In addition to Matilda and William, Henry possibly had a short-lived son, Richard, with his first wife, Matilda of Scotland. Henry and his second wife, Adeliza of Louvain, had no children.
Henry had a number of illegitimate children by various mistresses.
|
https://en.wikipedia.org/wiki?curid=14179
|
Hentai
Outside of Japan, hentai ("pervert") is anime and manga pornography. In Japanese, however, "hentai" is not a genre of media but any type of perverse or bizarre sexual desire or act. For example, outside of Japan a work of animation depicting lesbian sex might be described as "yuri hentai", but in Japan it would just be described as "yuri".
The word is short for "hentai seiyoku", "perverse sexual desire". The original meaning of "hentai" in the Japanese language is a transformation or metamorphosis. The implication of perversion or paraphilia was derived from there. Both meanings can be easily distinguished in context.
"Hentai" is a kanji compound of ("hen"; "change", "weird", or "strange") and ("tai"; "appearance" or "condition"). It also means "perversion" or "abnormality", especially when used as an adjective. It is the shortened form of the phrase which means "sexual perversion". The character "hen" is catch-all for queerness as a peculiarity—it does not carry an explicit sexual reference. While the term has expanded in use to cover a range of publications including homosexual publications, it remains primarily a heterosexual term, as terms indicating homosexuality entered Japan as foreign words. Japanese pornographic works are often simply tagged as , meaning "prohibited to those not yet 18 years old", and . Less official terms also in use include , , and the English initialism AV (for "adult video"). Usage of the term "hentai" does not define a genre in Japan.
"Hentai" is defined differently in English. The "Oxford Dictionary Online" defines it as "a subgenre of the Japanese genres of manga and anime, characterized by overtly sexualized characters and sexually explicit images and plots." The origin of the word in English is unknown, but AnimeNation's John Oppliger points to the early 1990s, when a "Dirty Pair" erotic "doujinshi" (self-published work) titled "H-Bomb" was released, and when many websites sold access to images culled from Japanese erotic visual novels and games. The earliest English use of the term traces back to the rec.arts.anime boards; with a 1990 post concerning Happosai of "Ranma ½" and the first discussion of the meaning in 1991. A 1995 glossary on the rec.arts.anime boards contained reference to the Japanese usage and the evolving definition of hentai as "pervert" or "perverted sex". "The Anime Movie Guide", published in 1997, defines as the initial sound of hentai (i.e., the name of the letter "H", as pronounced in Japanese); it included that ecchi was "milder than hentai". A year later it was defined as a genre in "Good Vibrations Guide to Sex". At the beginning of 2000, "hentai" was listed as the 41st most-popular search term of the internet, while "anime" ranked 99th. The attribution has been applied retroactively to works such as "Urotsukidōji", "La Blue Girl", and "Cool Devices". "Urotsukidōji" had previously been described with terms such as "Japornimation", and "erotic grotesque", prior to being identified as hentai.
The history of the word "hentai" has its origins in science and psychology. By the middle of the Meiji era, the term appeared in publications to describe unusual or abnormal traits, including paranormal abilities and psychological disorders. A translation of German sexologist Richard von Krafft-Ebing's text "Psychopathia Sexualis" originated the concept of "hentai seiyoku", a "perverse or abnormal sexual desire", and the term was popularized outside psychology, as in the case of Mori Ōgai's 1909 novel "Vita Sexualis". Continued interest in "hentai seiyoku" resulted in numerous journals and publications offering sexual advice, which circulated among the public and served to establish the sexual connotation of "hentai" as perverse. Any perverse or abnormal act could be hentai, such as committing "shinjū" (love suicide). It was Nakamura Kokyo's journal "Abnormal Psychology" which started the popular sexology boom in Japan, which would see the rise of other popular journals like "Sexuality and Human Nature", "Sex Research" and "Sex". Originally, Tanaka Kogai wrote articles for "Abnormal Psychology", but it would be Tanaka's own journal "Modern Sexuality" which became one of the most popular sources of information about erotic and neurotic expression. "Modern Sexuality" was created to promote fetishism, S&M, and necrophilia as facets of modern life. The ero-guro movement and its depiction of perverse, abnormal and often erotic undertones were a response to interest in "hentai seiyoku".
Following World War II, Japan took a new interest in sexualization and public sexuality. Mark McLelland observes that the term "hentai" found itself shortened to "H", with the English pronunciation "etchi", referring to lewdness without carrying the stronger connotation of abnormality or perversion. By the 1950s, the "hentai seiyoku" publications had become their own genre and included fetish and homosexual topics. By the 1960s, the homosexual content was dropped in favor of subjects like sadomasochism and stories of lesbianism targeted to male readers. The late 1960s brought a sexual revolution which expanded and solidified the normalization of the term's identity in Japan, one that continues to exist today through publications such as "Bessatsu Takarajima"'s "Hentai-san ga iku" series.
With the usage of "hentai" as any erotic depiction, the history of these depictions is split into their media. Japanese artwork and comics serve as the first example of hentai material, coming to represent the iconic style after the publication of Azuma Hideo's "Cybele" in 1979. Japanese animation (anime) had its first hentai, in both definitions, with the 1984 release of Wonderkid's "Lolita Anime", overlooking the erotic and sexual depictions in 1969's "One Thousand and One Arabian Nights" and the bare-breasted Cleopatra in 1970's "Cleopatra" film. Erotic games, another area of contention, has its first case of the art style depicting sexual acts in 1985's "Tenshitachi no Gogo". In each of these mediums, the broad definition and usage of the term complicates its historic examination.
Depictions of sex and abnormal sex can be traced back through the ages, predating the term "hentai". "Shunga", a Japanese term for erotic art, is thought to have existed in some form since the Heian period. From the 16th to the 19th centuries, "shunga" works were suppressed by "shōguns". A well-known example is "The Dream of the Fisherman's Wife", which depicts a woman being stimulated by two octopuses. "Shunga" production fell with the introduction of pornographic photographs in the late 19th century.
To define erotic manga, a definition for manga is needed. While the "Hokusai Manga" uses the term "manga" in its title, it does not depict the story-telling aspect common to modern manga, as the images are unrelated. Due to the influence of pornographic photographs in the 19th and 20th centuries, manga artwork tended to depict realistic characters. Osamu Tezuka helped define the modern look and form of manga, and was later proclaimed the "God of Manga". His debut work "New Treasure Island" was released in 1947 as a comic book through Ikuei Publishing and sold over 400,000 copies, though it was the popularity of Tezuka's "Astro Boy", "Metropolis", and "Jungle Emperor" manga that would come to define the medium. This story-driven manga style is distinctly unique from comic strips like "Sazae-san", and story-driven works came to dominate "shōjo" and "shōnen" magazines.
Adult themes in manga have existed since the 1940s, but some of these depictions were more realistic than the cartoon-cute characters popularized by Tezuka. Early well-known "ero-gekiga" releases were "Ero Mangatropa" (1973), "Erogenica" (1975), and "Alice" (1977). The distinct shift in the style of Japanese pornographic comics from realistic to cartoon-cute characters is credited to Hideo Azuma, "The Father of Lolicon". In 1979, he penned "Cybele", which offered the first depictions of sexual acts between cute, unrealistic Tezuka-style characters. This would start a pornographic manga movement. The lolicon boom of the 1980s saw the rise of magazines such as the anthologies "Lemon People" and "Petit Apple Pie".
The publication of erotic materials in the United States can be traced back to at least 1990, when IANVS Publications printed its first "Anime Shower Special". In March 1994, Antarctic Press released "Bondage Fairies", an English translation of "Insect Hunter".
Because there are fewer animated productions, most erotic works have been retroactively tagged as "hentai" since the coining of the term in English. "Hentai" is typically defined as consisting of excessive nudity and graphic sexual intercourse, whether or not it is perverse. The term "ecchi" is typically related to fanservice, with no sexual intercourse being depicted.
Two early works escape being defined as hentai, but contain erotic themes. This is likely due to the obscurity and unfamiliarity of the works, which arrived in the United States and faded from public focus a full 20 years before importation and surging interest coined the Americanized term "hentai". The first is the 1969 film "One Thousand and One Arabian Nights", which faithfully includes erotic elements of the original story. In 1970, "Cleopatra" was the first animated film to carry an X rating, but it was mislabeled as erotica in the United States.
The "Lolita Anime" series is typically identified as the first erotic anime and original video animation (OVA); it was released in 1984 by Wonder Kids. Containing eight episodes, the series focused on underage sex and rape, and included one episode containing BDSM bondage. Several sub-series were released in response, including a second "Lolita Anime" series released by Nikkatsu. It has not been officially licensed or distributed outside of its original release.
The "Cream Lemon" franchise of works ran from 1984 to 2005, with a number of them entering the American market in various forms. "The Brothers Grime" series released by Excalibur Films contained "Cream Lemon" works as early as 1986. However, they were not billed as anime and were introduced during the same time that the first underground distribution of erotic works began.
The American release of licensed erotic anime was first attempted in 1991 by Central Park Media, with "I Give My All", but it never occurred. In December 1992, "Devil Hunter Yohko" was the first risqué ("ecchi") title released by A.D. Vision. While it contains no sexual intercourse, it pushes the limits of the "ecchi" category with sexual dialogue, nudity and one scene in which the heroine is about to be raped.
It was Central Park Media's 1993 release of "Urotsukidoji" which brought the first hentai film to American viewers. Often cited for inventing the tentacle rape subgenre, it contains extreme depictions of violence and monster sex. As such, it is acknowledged for being the first to depict tentacle sex on screen. When the film premiered in the United States, it was described as being "drenched in graphic scenes of perverse sex and ultra-violence".
Following this release, a wealth of pornographic content began to arrive in the United States, with companies such as A.D. Vision, Central Park Media and Media Blasters releasing licensed titles under various labels. A.D. Vision's label SoftCel Pictures released 19 titles in 1995 alone. Another label, Critical Mass, was created in 1996 to release an unedited edition of "Violence Jack". When A.D. Vision's hentai label SoftCel Pictures shut down in 2005, most of its titles were acquired by Critical Mass. Following the bankruptcy of Central Park Media in 2009, the licenses for all Anime 18-related products and movies were transferred to Critical Mass.
The term "eroge" (erotic game) literally defines any erotic game, but has become synonymous with video games depicting the artistic styles of anime and manga. The origins of "eroge" began in the early 1980s, while the computer industry in Japan was struggling to define a computer standard with makers like NEC, Sharp, and Fujitsu competing against one another. The PC98 series, despite lacking in processing power, CD drives and limited graphics, came to dominate the market, with the popularity of "eroge" games contributing to its success.
Because of vague definitions of what constitutes an "erotic game", there are several possible candidates for the first "eroge". If the definition simply requires adult themes, the first such game was "Softporn Adventure", a text-based comedic game from On-Line Systems released in America in 1981 for the Apple II. If "eroge" is defined by the first graphical depiction of Japanese adult themes, it would be Koei's 1982 release of "Night Life", in which sexual intercourse is depicted through simple graphic outlines. Notably, "Night Life" was not intended to be erotic so much as an instructional guide "to support married life". A series of "undressing" games appeared as early as 1983, such as "Strip Mahjong". The first anime-styled erotic game was "Tenshitachi no Gogo", released in 1985 by JAST. In 1988, ASCII released the first erotic role-playing game, "Chaos Angel". In 1989, AliceSoft released the turn-based role-playing game "Rance" and ELF released "Dragon Knight".
In the late 1980s, "eroge" began to stagnate under high prices and the majority of games containing uninteresting plots and mindless sex. ELF's 1992 release of "Dōkyūsei" came as customer frustration with "eroge" was mounting and spawned a new genre of games called dating sims. "Dōkyūsei" was unique because it had no defined plot and required the player to build a relationship with different girls in order to advance the story. Each girl had her own story, but the prospect of consummating a relationship required the girl growing to love the player; there was no easy sex.
The term "visual novel" is vague, with Japanese and English definitions classifying the genre as a type of interactive fiction game driven by narration and limited player interaction. While the term is often retroactively applied to many games, it was Leaf that coined the term with their "Leaf Visual Novel Series" (LVNS) with the 1996 release of "Shizuku" and "Kizuato". The success of these two dark "eroge" games would be followed by the third and final installment of the LVNS, the 1997 romantic "eroge" "To Heart". "Eroge" visual novels took a new emotional turn with Tactics' 1998 release "". Key's 1999 release of "Kanon" proved to be a major success and would go on to have numerous console ports, two manga series and two anime series.
Japanese laws have impacted depictions of works since the Meiji Restoration, but these predate the common definition of hentai material. Since becoming law in 1907, Article 175 of the Criminal Code of Japan has forbidden the publication of obscene materials. Specifically, depictions of male–female sexual intercourse and pubic hair are considered obscene, but bare genitalia is not. As censorship is required for published works, the most common representations are the blurring dots on pornographic videos and "bars" or "lights" on still images. In 1986, Toshio Maeda sought to get past censorship of depictions of sexual intercourse by creating tentacle sex. This led to the large number of works containing sexual intercourse with monsters, demons, robots, and aliens, whose genitals look different from men's. While Western views attribute hentai to any explicit work, it was the products of this censorship which became not only the first titles legally imported to America and Europe, but the first successful ones. While the American release was uncut, the United Kingdom's release of "Urotsukidoji" removed many of the scenes of violence and tentacle rape.
It was also because of this law that artists, prior to 1991, depicted characters with a minimum of anatomical detail and without pubic hair. Part of the ban was lifted when Nagisa Oshima prevailed over the obscenity charges at his trial for his film "In the Realm of the Senses". Though not strictly enforced, the lifting of this ban did not extend to anime and manga, as they were not deemed artistic exceptions.
Alterations of material or censorship and banning of works are common. The US release of "La Blue Girl" altered the age of the heroine from 16 to 18, removed sex scenes with a dwarf ninja named Nin-nin, and removed the Japanese blurring dots. "La Blue Girl" was outright rejected by UK censors who refused to classify it and prohibited its distribution. In 2011, the Liberal Democratic Party of Japan sought a ban on the subgenre "lolicon".
The most prolific consumers of hentai are men. "Eroge" games in particular combine three favored media—cartoons, pornography and gaming—into an interactive experience. The hentai genre engages a wide audience that expands yearly, one that desires better quality, stronger storylines, and works which push the creative envelope. Nobuhiro Komiya, a manga censor, states that the unusual and extreme depictions in hentai are not about perversion so much as they are an example of the profit-oriented industry. Anime depicting normal sexual situations enjoys less market success than anime that breaks social norms, such as sex at schools or bondage.
According to clinical psychologist Megha Hazuria Gorem, "Because toons are a kind of final fantasy, you can make the person look the way you want him or her to look. Every fetish can be fulfilled." Sexologist Narayan Reddy noted of "eroge", "Animators make new games because there is a demand for them, and because they depict things that the gamers do not have the courage to do in real life, or that might just be illegal, these games are an outlet for suppressed desire."
The hentai genre can be divided into numerous subgenres, the broadest of which encompasses heterosexual and homosexual acts. Hentai that features mainly heterosexual interactions occurs in both male-targeted ("ero" or "dansei-muke") and female-targeted ("ladies' comics") forms. Those that feature mainly homosexual interactions are known as "yaoi" or "Boys' Love" (male–male) and "yuri" (female–female). Both "yaoi" and, to a lesser extent, "yuri" are generally aimed at members of the opposite sex from the persons depicted. While "yaoi" and "yuri" are not always explicit, their pornographic history and association remain. "Yaoi" pornographic usage has remained strong in textual form through fanfiction. The definition of "yuri" has begun to be replaced by the broader definition of "lesbian-themed animation or comics".
Hentai is perceived as "dwelling" on sexual fetishes. These include dozens of fetish and paraphilia related subgenres, which can be further classified with additional terms, such as heterosexual or homosexual types.
Many works are focused on depicting the mundane and the impossible across every conceivable act and situation, no matter how fantastical. One subgenre of hentai is "futanari" (hermaphroditism), which most often features a female with a penis or penis-like appendage in place of, or in addition to, a vulva. Futanari characters are primarily depicted as having sex with other women and will almost always be submissive with a male; exceptions include Yonekura Kengo's work, which features female empowerment and domination over males.
|
https://en.wikipedia.org/wiki?curid=14183
|
Henry VII of England
Henry VII (; 28 January 1457 – 21 April 1509) was the King of England and Lord of Ireland from his seizure of the crown on 22 August 1485 to his death. He was the first monarch of the House of Tudor.
Henry attained the throne when his forces defeated King Richard III at the Battle of Bosworth Field, the culmination of the Wars of the Roses. He was the last king of England to win his throne on the field of battle. He cemented his claim by marrying Elizabeth of York, daughter of Richard's brother Edward IV. Henry was successful in restoring the power and stability of the English monarchy after the civil war.
Henry is credited with a number of administrative, economic and diplomatic initiatives. His supportive policy toward England's wool industry and his standoff with the Low Countries had long-lasting benefit to the whole English economy. He paid very close attention to detail, and instead of spending lavishly he concentrated on raising new revenues. New taxes stabilised the government's finances, although a commission after his death found widespread abuses in the tax collection process. After a reign of nearly 24 years, he was peacefully succeeded by his son, Henry VIII.
Henry VII was born at Pembroke Castle on 28 January 1457 to Margaret Beaufort, Countess of Richmond. His father, Edmund Tudor, 1st Earl of Richmond, died three months before his birth.
Henry's paternal grandfather, Owen Tudor, originally from the Tudors of Penmynydd, Isle of Anglesey in Wales, had been a page in the court of Henry V. He rose to become one of the "Squires to the Body to the King" after military service at the Battle of Agincourt. Owen is said to have secretly married the widow of Henry V, Catherine of Valois. One of their sons was Edmund Tudor, father of Henry VII. Edmund was created Earl of Richmond in 1452, and "formally declared legitimate by Parliament".
Henry's main claim to the English throne derived from his mother through the House of Beaufort. Henry's mother, Lady Margaret Beaufort, was a great-granddaughter of John of Gaunt, the Duke of Lancaster and fourth son of Edward III, and his third wife Katherine Swynford. Katherine was Gaunt's mistress for about 25 years. When they married in 1396 they already had four children, including Henry's great-grandfather John Beaufort. Thus, Henry's claim was somewhat tenuous; it was from a woman, and by illegitimate descent. In theory, the Portuguese and Castilian royal families had a better claim as descendants of Catherine of Lancaster, the daughter of John of Gaunt and his second wife Constance of Castile.
Gaunt's nephew Richard II legitimised Gaunt's children by Katherine Swynford by Letters Patent in 1397. In 1407, Henry IV, Gaunt's son by his first wife, issued new Letters Patent confirming the legitimacy of his half-siblings, but also declaring them ineligible for the throne. Henry IV's action was of doubtful legality, as the Beauforts were previously legitimised by an Act of Parliament, but it further weakened Henry's claim.
Nonetheless, by 1483 Henry was the senior male Lancastrian claimant remaining after the deaths (in battle, by murder, or by execution) of Henry VI, his son Edward of Westminster, Prince of Wales, and the males of the other Beaufort line of descent through Lady Margaret's uncle, the 2nd Duke of Somerset.
Henry also made some political capital out of his Welsh ancestry in attracting military support and safeguarding his army's passage through Wales on its way to the Battle of Bosworth. He came from an old, established Anglesey family that claimed descent from Cadwaladr, in legend, the last ancient British king, and on occasion Henry displayed the red dragon of Cadwaladr. He took it, as well as the standard of St. George, on his procession through London after the victory at Bosworth. A contemporary writer and Henry's biographer, Bernard André, also made much of Henry's Welsh descent.
In reality, his hereditary connections to the Welsh aristocracy were not strong. He was descended in the paternal line, through several generations, from Ednyfed Fychan, the seneschal (steward) of Gwynedd, and through this seneschal's wife from Rhys ap Tewdwr, the King of Deheubarth in South Wales. His more immediate ancestor, Tudur ap Goronwy, had aristocratic land rights, but his sons, who were first cousins of Owain Glyndŵr, sided with Owain in his revolt. One son was executed and the family land was forfeited. Another son, Henry's great-grandfather, became a butler to the Bishop of Bangor. Owen Tudor, the son of the butler, like the children of other rebels, was provided for by Henry V, a circumstance that facilitated his access to Queen Catherine of Valois. Notwithstanding this lineage, to the bards of Wales Henry was a candidate for Y Mab Darogan, "The Son of Prophecy" who would free the Welsh from oppression.
In 1456, Henry's father Edmund Tudor was captured while fighting for Henry VI in South Wales against the Yorkists. He died in Carmarthen Castle, three months before Henry was born. Henry's uncle Jasper Tudor, the Earl of Pembroke and Edmund's younger brother, undertook to protect the young widow, who was 13 years old when she gave birth to Henry. When Edward IV became King in 1461, Jasper Tudor went into exile abroad. Pembroke Castle, and later the Earldom of Pembroke, were granted to the Yorkist William Herbert, who also assumed the guardianship of Margaret Beaufort and the young Henry.
Henry lived in the Herbert household until 1469, when Richard Neville, Earl of Warwick (the "Kingmaker"), went over to the Lancastrians. Herbert was captured fighting for the Yorkists and executed by Warwick. When Warwick restored Henry VI in 1470, Jasper Tudor returned from exile and brought Henry to court. When the Yorkist Edward IV regained the throne in 1471, Henry fled with other Lancastrians to Brittany, where he spent most of the next 14 years under the protection of Francis II, Duke of Brittany. Following the Battle of Tewkesbury in 1471, Edward IV had prepared to order Henry's extraction and probable execution. In November 1476, Francis fell ill and his principal advisers became more amenable to negotiating with the English king; Henry was handed over and escorted to the Breton port of Saint-Malo. While there, he feigned stomach cramps and in the confusion fled into a monastery. The townspeople took exception to the English party's behaviour and Francis recovered from his illness; a small band of scouts then rescued Henry.
By 1483, Henry's mother was actively promoting him as an alternative to Richard III, despite her being married to Lord Stanley, a Yorkist. At Rennes Cathedral on Christmas Day 1483, Henry pledged to marry Elizabeth of York, the eldest daughter of Edward IV, who was also Edward's heir since the presumed death of her brothers, the Princes in the Tower, King Edward V and his brother Richard of Shrewsbury, Duke of York. With money and supplies borrowed from his host, Francis II, Duke of Brittany, Henry tried to land in England, but his conspiracy unravelled, resulting in the execution of his primary co-conspirator, the Duke of Buckingham. Now supported by Francis II's prime minister, Pierre Landais, Richard III attempted to extradite Henry from Brittany, but Henry escaped to France. He was welcomed by the French, who readily supplied him with troops and equipment for a second invasion.
Henry gained the support of the Woodvilles, in-laws of the late Edward IV, and sailed with a small French and Scottish force, landing at Mill Bay near Dale, Pembrokeshire. He marched toward England accompanied by his uncle Jasper and the Earl of Oxford. Wales was traditionally a Lancastrian stronghold, and Henry owed the support he gathered to his Welsh birth and ancestry, being directly descended, through his father, from Rhys ap Gruffydd. He amassed an army of about 5,000 soldiers.
Henry devised a plan to seize the throne by engaging Richard quickly because Richard had reinforcements in Nottingham and Leicester. Though outnumbered, Henry's Lancastrian forces decisively defeated Richard's Yorkist army at the Battle of Bosworth Field on 22 August 1485. Several of Richard's key allies, such as the Earl of Northumberland and William and Thomas Stanley, crucially switched sides or left the battlefield. Richard III's death at Bosworth Field effectively ended the Wars of the Roses.
As king, Henry was styled "by the Grace of God, King of England and France and Lord of Ireland". On his succession, Henry became entitled to bear the Royal Arms of England. After his marriage, Henry used as his emblem the red and white rose, which became known as the Tudor rose.
To secure his hold on the throne, Henry declared himself king by right of conquest retroactively from 21 August 1485, the day before Bosworth Field. Thus, anyone who had fought for Richard against him would be guilty of treason and Henry could legally confiscate the lands and property of Richard III, while restoring his own. Henry spared Richard's nephew and designated heir, the Earl of Lincoln, and made Margaret Plantagenet, a Yorkist heiress, Countess of Salisbury "suo jure". He took care not to address the baronage or summon Parliament until after his coronation, which took place in Westminster Abbey on 30 October 1485. After his coronation Henry issued an edict that any gentleman who swore fealty to him would, notwithstanding any previous attainder, be secure in his property and person.
Henry honoured his pledge of December 1483 to marry Elizabeth of York. They were third cousins, as both were great-great-grandchildren of John of Gaunt. Henry and Elizabeth were married on 18 January 1486 at Westminster Abbey. The marriage unified the warring houses and gave his children a strong claim to the throne. The unification of the houses of York and Lancaster by this marriage is symbolised by the heraldic emblem of the Tudor rose, a combination of the white rose of York and the red rose of Lancaster. It also ended future discussion as to whether the descendants of the fourth son of Edward III, Edmund, Duke of York, whose line had married into that of Philippa, heiress of the second son, Lionel, Duke of Clarence, had a superior or inferior claim to those of the third son, John of Gaunt, whose line had held the throne for three generations.
In addition, Henry had Parliament repeal "Titulus Regius", the statute that declared Edward IV's marriage invalid and his children illegitimate, thus legitimising his wife. Amateur historians Bertram Fields and Sir Clements Markham have claimed that he may have been involved in the murder of the Princes in the Tower, as the repeal of "Titulus Regius" gave the Princes a stronger claim to the throne than his own. Alison Weir, however, points out that the Rennes ceremony, two years earlier, was possible only if Henry and his supporters were certain that the Princes were already dead.
Henry secured his crown principally by dividing and undermining the power of the nobility, especially through the aggressive use of bonds and recognisances to secure loyalty. He also enacted laws against livery and maintenance, the great lords' practice of having large numbers of "retainers" who wore their lord's badge or uniform and formed a potential private army.
While he was still in Leicester after the Battle of Bosworth Field, Henry was already taking precautions to prevent any rebellion against his reign. Before leaving Leicester for London, Henry dispatched Robert Willoughby to Sheriff Hutton in Yorkshire to have the ten-year-old Edward, Earl of Warwick, arrested and taken to the Tower of London. Edward was the son of George, Duke of Clarence, and as such a potential rival claimant to the new king's throne. Even so, Henry was threatened by several active rebellions over the next few years. The first was the rebellion of the Stafford brothers and Viscount Lovell of 1486, which collapsed without fighting.
In 1487, Yorkists led by Lincoln rebelled in support of Lambert Simnel, a boy claimed to be the Earl of Warwick, son of Edward IV's brother Clarence (the real Warwick being, in fact, a prisoner in the Tower). The rebellion began in Ireland, where the traditionally Yorkist nobility, headed by the powerful Gerald FitzGerald, 8th Earl of Kildare, proclaimed Simnel king and provided troops for his invasion of England. The rebellion was defeated and Lincoln killed at the Battle of Stoke. Henry showed remarkable clemency to the surviving rebels: he pardoned Kildare and the other Irish nobles, and he made the boy Simnel a servant in the royal kitchen, where he was in charge of roasting meats on a spit.
In 1490, a young Fleming, Perkin Warbeck, appeared and claimed to be Richard, the younger of the "Princes in the Tower". Warbeck won the support of Edward IV's sister Margaret of Burgundy. He led attempted invasions of Ireland in 1491 and England in 1495, and persuaded James IV of Scotland to invade England in 1496. In 1497 Warbeck landed in Cornwall with a few thousand troops, but was soon captured and executed.
When the King's agents searched the property of William Stanley (Chamberlain of the Household with direct access to Henry VII) they found a bag of coins amounting to around £10,000 and a collar of livery with Yorkist garnishings. Stanley was accused of supporting Warbeck's cause, arrested and later executed. In response to this threat within his own household, the King instituted more rigid security for access to his person.
In 1499, Henry had the Earl of Warwick executed. However, he spared Warwick's elder sister Margaret. She survived until 1541, when she was executed by Henry VIII.
Henry married Elizabeth of York with the hope of uniting the Yorkist and Lancastrian sides of the Plantagenet dynastic disputes, and he was largely successful. However, such a level of paranoia persisted that anyone with blood ties to the Plantagenets (John de la Pole, Earl of Lincoln, for example) was suspected of coveting the throne.
For most of Henry VII's reign Edward Story was Bishop of Chichester. Story's register still exists and, according to the 19th-century historian W.R.W. Stephens, "affords some illustrations of the avaricious and parsimonious character of the king". It seems that the king was skilful at extracting money from his subjects on many pretexts, including that of war with France or war with Scotland. The money so extracted added to the king's personal fortune rather than being used for the stated purpose.
Unlike his predecessors, Henry VII came to the throne without personal experience in estate management or financial administration. But during his reign he became a fiscally prudent monarch who restored the fortunes of an effectively bankrupt exchequer. Henry VII introduced stability to the financial administration of England by keeping the same financial advisors throughout his reign. For instance, except for the first few months of the reign, Lord Dynham and Thomas Howard, 2nd Duke of Norfolk, were the only two holders of the office of Lord High Treasurer of England.
Henry VII improved tax collection within the realm by introducing ruthlessly efficient mechanisms of taxation. He was supported in this effort by his chancellor, Archbishop John Morton, whose "Morton's Fork" was a catch-22 method of ensuring that nobles paid increased taxes: those nobles who spent little must have saved much, and thus they could afford the increased taxes; on the other hand, those nobles who spent much obviously had the means to pay the increased taxes. Royal government was also reformed with the introduction of the King's Council, which kept the nobility in check.
The capriciousness and lack of due process that indebted many tarnished Henry's legacy; the practices were ended soon after his death, when a commission revealed widespread abuses. According to the contemporary historian Polydore Vergil, simple "greed" underscored the means by which royal control was over-asserted in Henry's final years. Henry VIII executed Richard Empson and Edmund Dudley, his father's two most hated tax collectors, on trumped-up charges of treason.
Henry VII also established the pound avoirdupois as a standard of weight; it later became part of the Imperial and customary systems of units.
Henry VII's policy was both to maintain peace and to create economic prosperity. Up to a point, he succeeded. The Treaty of Redon was signed in February 1489 between Henry and representatives of Brittany. Under the terms of the accord, Henry sent 6,000 troops to fight (at the expense of Brittany) under the command of Lord Daubeney. The purpose of the agreement was to prevent France from annexing Brittany. According to John M. Currin, the treaty redefined Anglo-Breton relations: Henry began a new policy to recover Guyenne and other lost Plantagenet claims in France, and the treaty marked a shift from neutrality over the French invasion of Brittany to active intervention against it.
Henry later concluded a treaty with France at Étaples that brought money into the coffers of England and ensured the French would not support pretenders to the English throne, such as Perkin Warbeck. The treaty came at a price, however: Henry had mounted a minor invasion of Brittany in November 1492 to obtain it. Determined to keep Brittany out of French hands, he had signed an alliance with Spain to that end and sent 6,000 troops to France. The confused, fractious nature of Breton politics undermined his efforts, which finally failed after three sizeable expeditions, at a cost of £24,000. As France was becoming more concerned with the Italian Wars, however, the French were happy to agree to the Treaty of Étaples; Henry had pressured them by laying siege to Boulogne in October 1492.
Henry had been under the financial and physical protection of the French throne or its vassals for most of his life before he became king. To strengthen his position, however, he subsidised shipbuilding, thereby bolstering the navy (he commissioned Europe's first ever – and the world's oldest surviving – dry dock at Portsmouth in 1495) and improving trading opportunities.
Henry VII was one of the first European monarchs to recognise the importance of the newly united Spanish kingdom; he concluded the Treaty of Medina del Campo, by which his son, Arthur Tudor, was married to Catherine of Aragon. He also concluded the Treaty of Perpetual Peace with Scotland (the first treaty between England and Scotland for almost two centuries), which betrothed his daughter Margaret to King James IV of Scotland. By this marriage, Henry VII hoped to break the Auld Alliance between Scotland and France. Though this was not achieved during his reign, the marriage eventually led to the union of the English and Scottish crowns under Margaret's great-grandson, James VI and I, following the death of Henry's granddaughter Elizabeth I.
He also formed an alliance with Holy Roman Emperor Maximilian I (1493–1519) and persuaded Pope Innocent VIII to issue a papal bull of excommunication against all pretenders to Henry's throne.
Henry VII was much enriched by trading alum, which was used in the wool and cloth trades as a chemical fixative for dyeing fabrics. Since alum was mined in only one area in Europe (Tolfa, Italy), it was a scarce commodity and therefore especially valuable to its land holder, the pope. With the English economy heavily invested in wool production, Henry VII became involved in the alum trade in 1486. With the assistance of the Italian merchant banker Lodovico della Fava and the Italian banker Girolamo Frescobaldi, Henry VII became deeply involved in the trade by licensing ships, obtaining alum from the Ottoman Empire, and selling it to the Low Countries and in England. This trade made an expensive commodity cheaper, which raised opposition from Pope Julius II, since the Tolfa mine was a part of papal territory and had given the Pope monopoly control over alum.
Henry's most successful diplomatic achievement as regards the economy was the "Magnus Intercursus" ("great agreement") of 1496. In 1494, Henry embargoed trade (mainly in wool) with the Netherlands in retaliation for Margaret of Burgundy's support for Perkin Warbeck. The Merchant Adventurers, the company which enjoyed the monopoly of the Flemish wool trade, relocated from Antwerp to Calais. At the same time, Flemish merchants were ejected from England. The stand-off eventually paid off for Henry. Both parties realised they were mutually disadvantaged by the reduction in commerce. Its restoration by the "Magnus Intercursus" was very much to England's benefit in removing taxation for English merchants and significantly increasing England's wealth. In turn, Antwerp became an extremely important trade entrepôt (transshipment port), through which, for example, goods from the Baltic, spices from the east and Italian silks were exchanged for English cloth.
In 1506, Henry extorted the Treaty of Windsor from Philip the Handsome of Burgundy. Philip had been shipwrecked on the English coast, and while Henry's guest, was bullied into an agreement so favourable to England at the expense of the Netherlands that it was dubbed the "Malus Intercursus" ("evil agreement"). France, Burgundy, the Holy Roman Empire, Spain and the Hanseatic League all rejected the treaty, which was never in force. Philip died shortly after the negotiations.
Henry's principal problem was to restore royal authority in a realm recovering from the Wars of the Roses. There were too many powerful noblemen and, as a consequence of the system of so-called bastard feudalism, each had what amounted to private armies of indentured retainers (mercenaries masquerading as servants).
He was content to allow the nobles their regional influence if they were loyal to him. For instance, the Stanley family had control of Lancashire and Cheshire, upholding the peace on the condition that they stayed within the law. In other cases, he brought his over-powerful subjects to heel by decree. He passed laws against "livery" (the upper classes' flaunting of their adherents by giving them badges and emblems) and "maintenance" (the keeping of too many male "servants"). These laws were used shrewdly in levying fines upon those whom he perceived as threats.
However, his principal weapon was the Court of Star Chamber. This revived an earlier practice of using a small (and trusted) group of the Privy Council as a personal or Prerogative Court, able to cut through the cumbersome legal system and act swiftly. Serious disputes involving the use of personal power, or threats to royal authority, were thus dealt with.
Henry VII used Justices of the Peace on a large, nationwide scale. They were appointed for every shire and served for a year at a time. Their chief task was to see that the laws of the country were obeyed in their area. Their powers and numbers steadily increased during the time of the Tudors, never more so than under Henry's reign. Despite this, Henry was keen to constrain their power and influence, applying the same principles to the Justices of the Peace as he did to the nobility: a system of bonds and recognisances like that used against the gentry and nobles who tried to exert undue influence over these local officials.
All Acts of Parliament were overseen by the Justices of the Peace. For example, Justices of the Peace could replace suspect jurors in accordance with the 1495 act preventing the corruption of juries. They were also in charge of various administrative duties, such as the checking of weights and measures.
By 1509, Justices of the Peace were key enforcers of law and order for Henry VII. They were unpaid, which, compared with the costs of a modern police force, meant a far smaller tax bill. Local gentry saw the office as one of local influence and prestige and were therefore willing to serve. Overall, this was a successful area of policy for Henry, both in terms of efficiency and as a method of reducing the corruption endemic within the nobility of the Middle Ages.
In 1502, Henry VII's life took a difficult personal turn: many people he was close to died in quick succession. His first son and heir apparent, Arthur, Prince of Wales, died suddenly at Ludlow Castle, very likely from a viral respiratory illness known at the time as the "English sweating sickness". This made Henry, Duke of York, heir apparent to the throne. The King, normally a reserved man who rarely showed much emotion in public unless angry, surprised his courtiers by his intense grief and sobbing at his son's death. His concern for the Queen is evidence that the marriage was a happy one, as is his reaction to her death the following year, when he shut himself away for several days, refusing to speak to anyone.
Henry VII wanted to maintain the Spanish alliance. He therefore arranged a papal dispensation from Pope Julius II for Prince Henry to marry his brother's widow Catherine, a relationship that would otherwise have precluded marriage in the Roman Catholic Church. In 1503, Queen Elizabeth died in childbirth, so King Henry had the dispensation also permit him to marry Catherine himself. After obtaining the dispensation, Henry had second thoughts about the marriage of his son and Catherine. Catherine's mother Isabella I of Castile had died and Catherine's sister Joanna had succeeded her; Catherine was therefore the daughter of only one reigning monarch, and so less desirable as a spouse for Henry VII's heir apparent. The marriage did not take place during his lifetime. In any case, at the time his father arranged the marriage to Catherine of Aragon, the future Henry VIII was too young to contract marriage under canon law and would remain ineligible until he turned fourteen.
Henry made half-hearted plans to remarry and beget more heirs, but these never came to anything. In 1505 he was sufficiently interested in a potential marriage to Joan, the recently widowed Queen of Naples, that he sent ambassadors to Naples to report on the 27-year-old's physical suitability. The wedding never took place, and the physical description Henry sent with his ambassadors of what he desired in a new wife matched that of Elizabeth. Records show that after 1503 the Tower of London was never again used as a royal residence by Henry, and all royal births under Henry VIII took place in palaces. Henry was shattered by the loss of Elizabeth. He is one of only a handful of British kings with no known mistress, and it was very unusual for the times that he did not remarry: his son Henry was the only heir left, and Arthur's death had put the House of Tudor in a more precarious political position.
During his lifetime the nobility often jeered at him for re-centralising power in London, and the early 17th-century historian Francis Bacon was ruthlessly critical of the methods by which he enforced tax law; but it is equally true that Henry was meticulous in keeping detailed records of his personal finances, down to the last halfpenny. These records, and one account book detailing the expenses of his queen, survive in the British National Archives, as do accounts of courtiers and many of the king's own letters. The accounting books make clear that, until the death of his wife, Henry was a more doting father and husband than was widely known, and that his outwardly austere personality belied a devotion to his family. Letters to relatives have an affectionate tone not captured by official state business, as evidenced by many written to his mother Margaret. Many of the entries show a man who loosened his purse strings generously for his wife and children, and not just on necessities: in spring 1491 he spent a great amount of gold on a lute for his daughter Mary; the following year he spent money on a lion for Elizabeth's menagerie. With Elizabeth's death, the possibilities for such family indulgences greatly diminished. Immediately afterwards, Henry became very sick and nearly died himself, allowing only his mother, Margaret Beaufort, near him: he had "privily departed to a solitary place, and would that no man should resort unto him." Worse still, Henry's elder daughter Margaret had previously been betrothed to James IV, King of Scotland, and within months of her mother's death she had to be escorted to the border by her father; he would never see her again. Margaret Tudor wrote letters to her father declaring her homesickness, but Henry could do nothing but mourn the loss of his family and honour the terms of the peace treaty he had agreed with the King of Scotland.
Henry VII died of tuberculosis at Richmond Palace on 21 April 1509 and was buried in the chapel he commissioned in Westminster Abbey next to his wife, Elizabeth. He was succeeded by his second son, Henry VIII (reigned 1509–47). His mother survived him but died two months later on 29 June 1509.
Henry is the first English king of whose appearance good contemporary visual records in realistic portraits exist that are relatively free of idealisation. At 27, he was tall and slender, with small blue eyes, which were said to have a noticeable animation of expression, and noticeably bad teeth in a long, sallow face beneath very fair hair. Amiable and high-spirited, Henry was friendly if dignified in manner, and it was clear to everyone that he was extremely intelligent. His biographer, Professor Chrimes, credits him – even before he had become king – with "a high degree of personal magnetism, ability to inspire confidence, and a growing reputation for shrewd decisiveness". On the debit side, he may have looked a little delicate as he suffered from poor health.
Historians have always compared Henry VII with his continental contemporaries, especially Louis XI of France and Ferdinand II of Aragon. By 1600 historians emphasised Henry's wisdom in drawing lessons in statecraft from other monarchs. In 1622 Francis Bacon published his "History of the Reign of King Henry VII". By 1900 the "New Monarchy" interpretation stressed the common factors that in each country led to the revival of monarchical power. This approach raised puzzling questions about similarities and differences in the development of national states. In the late 20th century, a model of European state formation became prominent in which Henry resembles Louis and Ferdinand less closely.
https://en.wikipedia.org/wiki?curid=14186
Henry VIII of England
Henry VIII (28 June 1491 – 28 January 1547) was King of England from 1509 until his death in 1547. Henry is best known for his six marriages, and, in particular, his efforts to have his first marriage (to Catherine of Aragon) annulled. His disagreement with Pope Clement VII on the question of such an annulment led Henry to initiate the English Reformation, separating the Church of England from papal authority. He appointed himself the Supreme Head of the Church of England and dissolved convents and monasteries, for which he was excommunicated. Henry is also known as "the father of the Royal Navy," as he invested heavily in the navy, increasing its size from a few to more than 50 ships, and established the Navy Board.
Domestically, Henry is known for his radical changes to the English Constitution, ushering in the theory of the divine right of kings. He also greatly expanded royal power during his reign. He frequently used charges of treason and heresy to quell dissent, and those accused were often executed without a formal trial by means of bills of attainder. He achieved many of his political aims through the work of his chief ministers, some of whom were banished or executed when they fell out of his favour. Thomas Wolsey, Thomas More, Thomas Cromwell, Richard Rich, and Thomas Cranmer all figured prominently in his administration.
Henry was an extravagant spender, using the proceeds from the dissolution of the monasteries and acts of the Reformation Parliament. He also converted the money that was formerly paid to Rome into royal revenue. Despite the money from these sources, he was continually on the verge of financial ruin due to his personal extravagance, as well as his numerous costly and largely unsuccessful wars, particularly with King Francis I of France, Holy Roman Emperor Charles V, James V of Scotland and the Scottish regency under the Earl of Arran and Mary of Guise. At home, he oversaw the legal union of England and Wales with the Laws in Wales Acts 1535 and 1542, and he was the first English monarch to rule as King of Ireland following the Crown of Ireland Act 1542.
Henry's contemporaries considered him an attractive, educated, and accomplished king. He has been described as "one of the most charismatic rulers to sit on the English throne". He was an author and composer. As he aged, however, he became severely overweight and his health suffered, causing his death in 1547. He is frequently characterised in his later life as a lustful, egotistical, harsh and insecure king. He was succeeded by his son Edward VI.
Born 28 June 1491 at the Palace of Placentia in Greenwich, Kent, Henry Tudor was the third child and second son of Henry VII and Elizabeth of York. Of the young Henry's six (or seven) siblings, only three – Arthur, Prince of Wales; Margaret; and Mary – survived infancy. He was baptised by Richard Fox, the Bishop of Exeter, at a church of the Observant Franciscans close to the palace. In 1493, at the age of two, Henry was appointed Constable of Dover Castle and Lord Warden of the Cinque Ports. He was subsequently appointed Earl Marshal of England and Lord Lieutenant of Ireland at age three, and was made a Knight of the Bath soon after. The day after the ceremony he was created Duke of York and a month or so later made Warden of the Scottish Marches. In May 1495, he was appointed to the Order of the Garter. These appointments of a small child were made so that his father could keep personal control of lucrative positions and avoid sharing them with established families. Henry was given a first-rate education from leading tutors, becoming fluent in Latin and French, and learning at least some Italian. Not much is known about his early life – save for his appointments – because he was not expected to become king. In November 1501, Henry also played a considerable part in the ceremonies surrounding his brother's marriage to Catherine of Aragon, the youngest surviving child of King Ferdinand II of Aragon and Queen Isabella I of Castile. As Duke of York, Henry used the arms of his father as king, differenced by a "label of three points ermine". He was further honoured, on 9 February 1506, by Holy Roman Emperor Maximilian I who made him a Knight of the Golden Fleece.
In 1502, Arthur died at the age of 15, possibly of sweating sickness, just 20 weeks after his marriage to Catherine. Arthur's death thrust all his duties upon his younger brother, the 10-year-old Henry. After a little debate, Henry became the new Duke of Cornwall in October 1502, and the new Prince of Wales and Earl of Chester in February 1503. Henry VII gave the boy few tasks. Young Henry was strictly supervised and did not appear in public. As a result, he ascended the throne "untrained in the exacting art of kingship".
Henry VII renewed his efforts to seal a marital alliance between England and Spain by offering his second son in marriage to Arthur's widow Catherine. Both Isabella and Henry VII were keen on the idea, which had arisen very shortly after Arthur's death. On 23 June 1503, a treaty was signed for their marriage, and they were betrothed two days later. A papal dispensation was only needed for the "impediment of public honesty" if the marriage had not been consummated, as Catherine and her duenna claimed, but Henry VII and the Spanish ambassador set out instead to obtain a dispensation for "affinity", which took account of the possibility of consummation. Cohabitation was not possible because Henry was too young. Isabella's death in 1504, and the ensuing problems of succession in Castile, complicated matters. Catherine's father, Ferdinand, preferred her to stay in England, but Henry VII's relations with him had deteriorated. Catherine was therefore left in limbo for some time, culminating in Prince Henry's rejection of the marriage as soon as he was able, at the age of 14. Ferdinand's solution was to make his daughter ambassador, allowing her to stay in England indefinitely. Devout, she began to believe that it was God's will that she marry the prince despite his opposition.
Henry VII died on 21 April 1509, and the 17-year-old Henry succeeded him as king. Soon after his father's burial on 10 May, Henry suddenly declared that he would indeed marry Catherine, leaving unresolved several issues concerning the papal dispensation and a missing part of the marriage portion. The new king maintained that it had been his father's dying wish that he marry Catherine. Whether or not this was true, it was certainly convenient. Emperor Maximilian I had been attempting to marry his granddaughter (and Catherine's niece) Eleanor to Henry; she had now been jilted. Henry's wedding to Catherine was kept low-key and was held at the friars' church in Greenwich on 11 June 1509. On 23 June 1509, Henry led the now 23-year-old Catherine from the Tower of London to Westminster Abbey for their coronation, which took place the following day. It was a grand affair: the king's passage was lined with tapestries and laid with fine cloth. Following the ceremony, there was a grand banquet in Westminster Hall. As Catherine wrote to her father, "our time is spent in continuous festival".
Two days after his coronation, Henry arrested his father's two most unpopular ministers, Sir Richard Empson and Edmund Dudley. They were charged with high treason and were executed in 1510. Politically motivated executions would remain one of Henry's primary tactics for dealing with those who stood in his way. Henry also returned to the public some of the money supposedly extorted by the two ministers. By contrast, Henry's view of the House of York – potential rival claimants for the throne – was more moderate than his father's had been. Several who had been imprisoned by his father, including the Marquess of Dorset, were pardoned. Others (most notably Edmund de la Pole) went unreconciled; de la Pole was eventually beheaded in 1513, an execution prompted by his brother Richard's siding against the king.
Soon after, Catherine conceived, but the child, a girl, was stillborn on 31 January 1510. About four months later, Catherine again became pregnant. On New Year's Day 1511, the child – Henry – was born. After the grief of losing their first child, the couple were pleased to have a boy and festivities were held, including a two-day joust known as the Westminster Tournament. However, the child died seven weeks later. Catherine had two stillborn sons in 1513 and 1515, but gave birth in February 1516 to a girl, Mary. Relations between Henry and Catherine had been strained, but they eased slightly after Mary's birth.
Although Henry's marriage to Catherine has since been described as "unusually good", it is known that Henry took mistresses. It was revealed in 1510 that Henry had been conducting an affair with one of the sisters of Edward Stafford, 3rd Duke of Buckingham, either Elizabeth or Anne Hastings, Countess of Huntingdon. The most significant mistress for about three years, starting in 1516, was Elizabeth Blount. Blount is one of only two completely undisputed mistresses; some consider this a small number for a virile young king. Exactly how many Henry had is disputed: David Loades believes Henry had mistresses "only to a very limited extent", whilst Alison Weir believes there were numerous other affairs. There is no evidence that Catherine protested, and in 1518 she fell pregnant again; the child, another girl, was stillborn. Blount gave birth in June 1519 to Henry's illegitimate son, Henry FitzRoy. The young boy was made Duke of Richmond in June 1525 in what some thought was one step on the path to his eventual legitimisation. In 1533, FitzRoy married Mary Howard, but died childless three years later. At the time of Richmond's death in June 1536, Parliament was enacting the Second Succession Act, which could have allowed him to become king.
In 1510, France, with a fragile alliance with the Holy Roman Empire in the League of Cambrai, was winning a war against Venice. Henry renewed his father's friendship with Louis XII of France, an issue that divided his council. Certainly, war with the combined might of the two powers would have been exceedingly difficult. Shortly thereafter, however, Henry also signed a pact with Ferdinand. After Pope Julius II created the anti-French Holy League in October 1511, Henry followed Ferdinand's lead and brought England into the new League. An initial joint Anglo-Spanish attack was planned for the spring to recover Aquitaine for England, the first step towards making Henry's dream of ruling France a reality. The attack, however, following a formal declaration of war in April 1512, was not led by Henry personally and was a considerable failure; Ferdinand used it simply to further his own ends, and it strained the Anglo-Spanish alliance. Nevertheless, the French were pushed out of Italy soon after, and the alliance survived, with both parties keen to win further victories over the French. Henry then pulled off a diplomatic coup by convincing the Emperor to join the Holy League. Remarkably, Henry had also secured the promised title of "Most Christian King of France" from Julius and possibly coronation by the Pope himself in Paris, if only Louis could be defeated.
On 30 June 1513, Henry invaded France, and his troops defeated a French army at the Battle of the Spurs – a relatively minor result, but one which was seized on by the English for propaganda purposes. Soon after, the English took Thérouanne and handed it over to Maximilian; Tournai, a more significant settlement, followed. Henry had led the army personally, complete with a large entourage. His absence from the country, however, had prompted his brother-in-law, James IV of Scotland, to invade England at the behest of Louis. Nevertheless, the English army, overseen by Queen Catherine, decisively defeated the Scots at the Battle of Flodden on 9 September 1513. Among the dead was the Scottish king, thus ending Scotland's brief involvement in the war. These campaigns had given Henry a taste of the military success he so desired. However, despite initial indications, he decided not to pursue a 1514 campaign. He had been supporting Ferdinand and Maximilian financially during the campaign but had received little in return; England's coffers were now empty. With the replacement of Julius by Pope Leo X, who was inclined to negotiate for peace with France, Henry signed his own treaty with Louis: his sister Mary would become Louis' wife, having previously been pledged to the younger Charles, and peace was secured for eight years, a remarkably long time.
Charles V ascended the thrones of both Spain and the Holy Roman Empire following the deaths of his grandfathers, Ferdinand in 1516 and Maximilian in 1519. Francis I likewise became king of France upon the death of Louis in 1515, leaving three relatively young rulers and an opportunity for a clean slate. The careful diplomacy of Cardinal Thomas Wolsey had resulted in the Treaty of London in 1518, aimed at uniting the kingdoms of western Europe in the wake of a new Ottoman threat, and it seemed that peace might be secured. Henry met Francis I on 7 June 1520 at the Field of the Cloth of Gold near Calais for a fortnight of lavish entertainment. Both hoped for friendly relations in place of the wars of the previous decade. The strong air of competition laid to rest any hopes of a renewal of the Treaty of London, however, and conflict was inevitable. Henry had more in common with Charles, whom he met once before and once after Francis. Charles brought the Empire into war with France in 1521; Henry offered to mediate, but little was achieved and by the end of the year Henry had aligned England with Charles. He still clung to his previous aim of restoring English lands in France, but also sought to secure an alliance with Burgundy, then part of Charles' realm, and the continued support of Charles. A small English attack in the north of France made up little ground. Charles defeated and captured Francis at Pavia and could dictate peace, but he believed he owed Henry nothing. Sensing this, Henry decided to take England out of the war before his ally, signing the Treaty of the More on 30 August 1525.
During his marriage to Catherine of Aragon, Henry conducted an affair with Mary Boleyn, Catherine's lady-in-waiting. There has been speculation that Mary's two children, Henry Carey and Catherine Carey, were fathered by Henry, but this has never been proved, and the King never acknowledged them as he did in the case of Henry FitzRoy. In 1525, as Henry grew more impatient with Catherine's inability to produce the male heir he desired, he became enamoured of Boleyn's sister, Anne Boleyn, then a charismatic young woman of 25 in the Queen's entourage. Anne, however, resisted his attempts to seduce her, and refused to become his mistress as her sister had. It was in this context that Henry considered his three options for finding a dynastic successor and hence resolving what came to be described at court as the King's "great matter". These options were legitimising Henry FitzRoy, which would take the intervention of the pope and would be open to challenge; marrying off Mary as soon as possible and hoping for a grandson to inherit directly, but Mary was considered unlikely to conceive before Henry's death; or somehow rejecting Catherine and marrying someone else of child-bearing age. Probably because it opened the possibility of marrying Anne, the third option was ultimately the most attractive to the 34-year-old Henry, and it soon became the King's absorbing desire to annul his marriage to the now 40-year-old Catherine. It was a decision that would lead Henry to reject papal authority and initiate the English Reformation.
Henry's precise motivations and intentions over the coming years are not widely agreed on. Henry himself, at least in the early part of his reign, was a devout and well-informed Catholic to the extent that his 1521 publication "Assertio Septem Sacramentorum" ("Defence of the Seven Sacraments") earned him the title of "Fidei Defensor" (Defender of the Faith) from Pope Leo X. The work represented a staunch defence of papal supremacy, albeit one couched in somewhat contingent terms. It is not clear exactly when Henry changed his mind on the issue as he grew more intent on a second marriage. Certainly, by 1527 he had convinced himself that Catherine had produced no male heir because their union was "blighted in the eyes of God." Indeed, in marrying Catherine, his brother's wife, he had acted contrary to Leviticus 20:21, an impediment that, Henry now believed, the Pope had never had the authority to dispense with. It was this argument Henry took to Pope Clement VII in 1527 in the hope of having his marriage to Catherine annulled, forgoing at least one less openly defiant line of attack. In going public, all hope of tempting Catherine to retire to a nunnery or otherwise stay quiet was lost. Henry sent his secretary, William Knight, to appeal directly to the Holy See by way of a deceptively worded draft papal bull. Knight was unsuccessful; the Pope could not be misled so easily.
Other missions concentrated on arranging an ecclesiastical court to meet in England, with a representative from Clement VII. Though Clement agreed to the creation of such a court, he never had any intention of empowering his legate, Lorenzo Campeggio, to decide in Henry's favour. This bias was perhaps the result of pressure from Emperor Charles V, Catherine's nephew, though it is not clear how far this influenced either Campeggio or the Pope. After less than two months of hearing evidence, Clement called the case back to Rome in July 1529, from which it was clear that it would never re-emerge. With the chance for an annulment lost and England's place in Europe forfeit, Cardinal Wolsey bore the blame. He was charged with "praemunire" in October 1529 and his fall from grace was "sudden and total". Briefly reconciled with Henry (and officially pardoned) in the first half of 1530, he was charged once more in November 1530, this time for treason, but died while awaiting trial. After a short period in which Henry took government upon his own shoulders, Sir Thomas More took on the role of Lord Chancellor and chief minister. Intelligent and able, but also a devout Catholic and opponent of the annulment, More initially cooperated with the king's new policy, denouncing Wolsey in Parliament.
A year later, Catherine was banished from court, and her rooms were given to Anne. Anne was an unusually educated and intellectual woman for her time, and was keenly absorbed and engaged with the ideas of the Protestant Reformers, though the extent to which she herself was a committed Protestant is much debated. When Archbishop of Canterbury William Warham died, Anne's influence and the need to find a trustworthy supporter of the annulment had Thomas Cranmer appointed to the vacant position. This was approved by the Pope, unaware of the King's nascent plans for the Church.
In the winter of 1532, Henry met with Francis I at Calais and enlisted the support of the French king for his new marriage. Immediately upon returning to Dover in England, Henry, now 41, and Anne went through a secret wedding service. She soon became pregnant, and there was a second wedding service in London on 25 January 1533. On 23 May 1533, Cranmer, sitting in judgment at a special court convened at Dunstable Priory to rule on the validity of the king's marriage to Catherine of Aragon, declared the marriage of Henry and Catherine null and void. Five days later, on 28 May 1533, Cranmer declared the marriage of Henry and Anne to be valid. Catherine was formally stripped of her title as queen, becoming instead "princess dowager" as the widow of Arthur. In her place, Anne was crowned queen consort on 1 June 1533. The queen gave birth to a daughter slightly prematurely on 7 September 1533. The child was christened Elizabeth, in honour of Henry's mother, Elizabeth of York.
Following the marriage, there was a period of consolidation taking the form of a series of statutes of the Reformation Parliament aimed at finding solutions to any remaining issues, whilst protecting the new reforms from challenge, convincing the public of their legitimacy, and exposing and dealing with opponents. Although the canon law was dealt with at length by Cranmer and others, these acts were advanced by Thomas Cromwell, Thomas Audley and the Duke of Norfolk and indeed by Henry himself. With this process complete, in May 1532 More resigned as Lord Chancellor, leaving Cromwell as Henry's chief minister. With the Act of Succession 1533, Catherine's daughter, Mary, was declared illegitimate; Henry's marriage to Anne was declared legitimate; and Anne's issue was declared next in the line of succession. With the Act of Supremacy in 1534, Parliament also recognised the King's status as head of the church in England and, with the Act in Restraint of Appeals in 1532, abolished the right of appeal to Rome. It was only then that Pope Clement took the step of excommunicating Henry and Thomas Cranmer, although the excommunication was not made official until some time later.
The king and queen were not pleased with married life. The royal couple enjoyed periods of calm and affection, but Anne refused to play the submissive role expected of her. The vivacity and opinionated intellect that had made her so attractive as an illicit lover made her too independent for the largely ceremonial role of a royal wife and it made her many enemies. For his part, Henry disliked Anne's constant irritability and violent temper. After a false pregnancy or miscarriage in 1534, he saw her failure to give him a son as a betrayal. As early as Christmas 1534, Henry was discussing with Cranmer and Cromwell the chances of leaving Anne without having to return to Catherine. Henry is traditionally believed to have had an affair with Margaret ("Madge") Shelton in 1535, although historian Antonia Fraser argues that Henry in fact had an affair with her sister Mary Shelton.
Opposition to Henry's religious policies was quickly suppressed in England. A number of dissenting monks, including the first Carthusian Martyrs, were executed and many more pilloried. The most prominent resisters included John Fisher, Bishop of Rochester, and Sir Thomas More, both of whom refused to take the oath to the King. Neither Henry nor Cromwell sought to have the men executed; rather, they hoped that the two might change their minds and save themselves. Fisher openly rejected Henry as the Supreme Head of the Church, but More was careful to avoid openly breaking the Treasons Act of 1534, which (unlike later acts) did not forbid mere silence. Both men were subsequently convicted of high treason, however – More on the evidence of a single conversation with Richard Rich, the Solicitor General. Both were duly executed in the summer of 1535.
These suppressions, as well as the Dissolution of the Lesser Monasteries Act of 1536, in turn contributed to more general resistance to Henry's reforms, most notably in the Pilgrimage of Grace, a large uprising in northern England in October 1536. Some 20,000 to 40,000 rebels were led by Robert Aske, together with parts of the northern nobility. Henry VIII promised the rebels a pardon and thanked them for raising the issues, and Aske told them they had been successful and could disperse and go home. But Henry saw the rebels as traitors and did not feel obliged to keep his promise; when further violence occurred, he was quick to revoke his offer of clemency. The leaders, including Aske, were arrested and executed for treason. In total, about 200 rebels were executed, and the disturbances ended.
On 8 January 1536, news reached the king and the queen that Catherine of Aragon had died. The following day, Henry dressed all in yellow, with a white feather in his bonnet. The queen was pregnant again, and she was aware of the consequences if she failed to give birth to a son. Later that month, the King was unhorsed in a tournament and was badly injured; it seemed for a time that his life was in danger. When news of this accident reached the queen, she went into shock and miscarried a male child that was about 15 weeks old, on the day of Catherine's funeral, 29 January 1536. For most observers, this personal loss was the beginning of the end of this royal marriage.
Although the Boleyn family still held important positions on the Privy Council, Anne had many enemies, including the Duke of Suffolk. Even her own uncle, the Duke of Norfolk, had come to resent the way she exercised her power. The Boleyns preferred France over the Emperor as a potential ally, but the King's favour had swung towards the latter (partly because of Cromwell), damaging the family's influence. Also opposed to Anne were supporters of reconciliation with Princess Mary (among them the former supporters of Catherine), who had reached maturity. A second annulment was now a real possibility, although it is commonly believed that it was Cromwell's anti-Boleyn influence that led opponents to look for a way of having her executed.
Anne's downfall came shortly after she had recovered from her final miscarriage. Whether it was primarily the result of allegations of conspiracy, adultery, or witchcraft remains a matter of debate among historians. Early signs of a fall from grace included the King's new mistress, the 28-year-old Jane Seymour, being moved into new quarters, and Anne's brother, George Boleyn, being refused the Order of the Garter, which was instead given to Nicholas Carew. Between 30 April and 2 May, five men, including Anne's brother, were arrested on charges of treasonable adultery and accused of having sexual relationships with the queen. Anne was also arrested, accused of treasonous adultery and incest. Although the evidence against them was unconvincing, the accused were found guilty and condemned to death. George Boleyn and the other accused men were executed on 17 May 1536. At 8 am on 19 May 1536, Anne was executed on Tower Green.
The day after Anne's execution in 1536 the 45-year-old Henry became engaged to Seymour, who had been one of the Queen's ladies-in-waiting. They were married ten days later at the Palace of Whitehall in London, in the Queen's closet, by Bishop Gardiner. On 12 October 1537, Jane gave birth to a son, Prince Edward, the future Edward VI. The birth was difficult, and the queen died on 24 October 1537 from an infection and was buried in Windsor. The euphoria that had accompanied Edward's birth became sorrow, but it was only over time that Henry came to long for his wife. At the time, Henry recovered quickly from the shock. Measures were immediately put in place to find another wife for Henry, which, at the insistence of Cromwell and the court, were focused on the European continent.
With Charles V distracted by the internal politics of his many kingdoms and external threats, and Henry and Francis on relatively good terms, domestic and not foreign policy issues had been Henry's priority in the first half of the 1530s. In 1536, for example, Henry granted his assent to the Laws in Wales Act 1535, which legally annexed Wales, uniting England and Wales into a single nation. This was followed by the Second Succession Act (the Act of Succession 1536), which declared Henry's children by Jane to be next in the line of succession and declared both Mary and Elizabeth illegitimate, thus excluding them from the throne. The king was also granted the power to further determine the line of succession in his will, should he have no further issue. However, when Charles and Francis made peace in January 1539, Henry became increasingly paranoid, perhaps as a result of receiving a constant list of threats to the kingdom (real or imaginary, minor or serious) supplied by Cromwell in his role as spymaster. Enriched by the dissolution of the monasteries, Henry used some of his financial reserves to build a series of coastal defences and set some aside for use in the event of a Franco-German invasion.
Having considered the matter, Cromwell, now Earl of Essex, suggested Anne, the 25-year-old sister of the Duke of Cleves, who was seen as an important ally in case of a Roman Catholic attack on England, for the duke's religious position fell between Lutheranism and Catholicism. Hans Holbein the Younger was dispatched to Cleves to paint a portrait of Anne for the king. Despite speculation that Holbein painted her in an overly flattering light, it is more likely that the portrait was accurate; Holbein remained in favour at court. After seeing Holbein's portrait, and urged on by the complimentary description of Anne given by his courtiers, the 49-year-old king agreed to wed Anne. However, it was not long before Henry wished to annul the marriage so he could marry another. Anne did not argue, and confirmed that the marriage had never been consummated. Anne's previous betrothal to the Duke of Lorraine's son Francis provided further grounds for the annulment. The marriage was subsequently dissolved, and Anne received the title of "The King's Sister", two houses and a generous allowance. It was soon clear that Henry had fallen for the 17-year-old Catherine Howard, the Duke of Norfolk's niece, the politics of which worried Cromwell, for Norfolk was a political opponent.
Shortly after, the religious reformers (and protégés of Cromwell) Robert Barnes, William Jerome and Thomas Garret were burned as heretics. Cromwell, meanwhile, fell out of favour, although it is unclear exactly why, for there is little evidence of differences of domestic or foreign policy. Despite his role, he was never formally accused of being responsible for Henry's failed marriage. Cromwell was now surrounded by enemies at court, with Norfolk also able to draw on his niece's position. Cromwell was charged with treason, selling export licences, granting passports, and drawing up commissions without permission, and may also have been blamed for the failure of the foreign policy that accompanied the attempted marriage to Anne. He was subsequently attainted and beheaded.
On 28 July 1540 (the same day Cromwell was executed), Henry married the young Catherine Howard, a first cousin of Anne Boleyn and former lady-in-waiting to Anne of Cleves. He was absolutely delighted with his new queen, and awarded her the lands of Cromwell and a vast array of jewellery. Soon after the marriage, however, Queen Catherine had an affair with the courtier Thomas Culpeper. She also employed Francis Dereham, who had previously been informally engaged to her and had an affair with her prior to her marriage, as her secretary. The court was informed of her affair with Dereham whilst Henry was away; Thomas Cranmer was dispatched to investigate, and he brought evidence of Queen Catherine's previous affair with Dereham to the king's notice. Though Henry originally refused to believe the allegations, Dereham confessed. It took another meeting of the council, however, before Henry believed the accusations against Dereham and went into a rage, blaming the council before consoling himself in hunting. When questioned, the queen could have admitted a prior contract to marry Dereham, which would have made her subsequent marriage to Henry invalid, but she instead claimed that Dereham had forced her to enter into an adulterous relationship. Dereham, meanwhile, exposed Queen Catherine's relationship with Culpeper. Culpeper and Dereham were both executed, and Catherine too was beheaded on 13 February 1542.
In 1538, the chief minister Thomas Cromwell pursued an extensive campaign against what his government termed "idolatry" practised under the old religion, culminating in September with the dismantling of the shrine of St. Thomas Becket at Canterbury. As a consequence, the king was excommunicated by Pope Paul III on 17 December of the same year. In 1540, Henry sanctioned the complete destruction of shrines to saints. In 1542, England's remaining monasteries were all dissolved, and their property transferred to the Crown. Abbots and priors lost their seats in the House of Lords; only archbishops and bishops remained. Consequently, the Lords Spiritual—as members of the clergy with seats in the House of Lords were known—were for the first time outnumbered by the Lords Temporal.
The 1539 alliance between Francis and Charles had soured, eventually degenerating into renewed war. With Catherine of Aragon and Anne Boleyn dead, relations between Charles and Henry improved considerably, and Henry concluded a secret alliance with the Emperor and decided to enter the Italian War in favour of his new ally. An invasion of France was planned for 1543. In preparation for it, Henry moved to eliminate the potential threat of Scotland under the youthful James V. The Scots were defeated at the Battle of Solway Moss on 24 November 1542, and James died on 15 December. Henry now hoped to unite the crowns of England and Scotland by marrying his son Edward to James' successor, Mary. The Scottish Regent Lord Arran agreed to the marriage in the Treaty of Greenwich on 1 July 1543, but it was rejected by the Parliament of Scotland on 11 December. The result was eight years of war between England and Scotland, a campaign later dubbed "the Rough Wooing". Despite several peace treaties, unrest continued in Scotland until Henry's death.
Despite the early success with Scotland, Henry hesitated to invade France, annoying Charles. Henry finally went to France in June 1544 with a two-pronged attack. One force under Norfolk ineffectively besieged Montreuil. The other, under Suffolk, laid siege to Boulogne. Henry later took personal command, and Boulogne fell on 18 September 1544. However, Henry had refused Charles' request to march against Paris. Charles' own campaign fizzled, and he made peace with France that same day. Henry was left alone against France, unable to make peace. Francis attempted to invade England in the summer of 1545, but reached only the Isle of Wight before being repulsed in the Battle of the Solent. With both sides out of money, France and England signed the Treaty of Camp on 7 June 1546. Henry secured Boulogne for eight years, after which the city was to be returned to France for 2 million crowns (£750,000). Henry needed the money; the 1544 campaign had cost £650,000, and England was once again bankrupt.
Henry married his last wife, the wealthy widow Catherine Parr, in July 1543. A reformer at heart, she argued with Henry over religion. Ultimately, Henry remained committed to an idiosyncratic mixture of Catholicism and Protestantism; the reactionary mood which had gained ground following the fall of Cromwell had neither eliminated his Protestant streak nor been overcome by it. Parr helped reconcile Henry with his daughters, Mary and Elizabeth. In 1543, the Third Succession Act put them back in the line of succession after Edward. The same act allowed Henry to determine further succession to the throne in his will.
Late in life, Henry became obese and had to be moved about with the help of mechanical inventions. He was covered with painful, pus-filled boils and possibly suffered from gout. His obesity and other medical problems can be traced to the jousting accident in 1536 in which he suffered a leg wound. The accident re-opened and aggravated a previous injury he had sustained years earlier, to the extent that his doctors found it difficult to treat. The chronic wound festered for the remainder of his life and became ulcerated, thus preventing him from maintaining the level of physical activity he had previously enjoyed. The jousting accident is also believed to have caused Henry's mood swings, which may have had a dramatic effect on his personality and temperament.
The theory that Henry suffered from syphilis has been dismissed by most historians. Historian Susan Maclean Kybett ascribes his demise to scurvy, which is caused by a lack of fresh fruits and vegetables. Alternatively, his wives' pattern of pregnancies and his mental deterioration have led some to suggest that the king may have been Kell positive and suffered from McLeod syndrome. According to another study, Henry VIII's history and body morphology may have been the result of traumatic brain injury after his 1536 jousting accident, which in turn led to a neuroendocrine cause of his obesity. This analysis identifies growth hormone deficiency (GHD) as the source not only of his increased adiposity but also of the significant behavioural changes noted in his later years, including his multiple marriages.
Henry's obesity hastened his death at the age of 55, which occurred on 28 January 1547 in the Palace of Whitehall, on what would have been his father's 90th birthday. The tomb he had planned (with components taken from the tomb intended for Cardinal Wolsey) was only partly constructed and would never be completed. (The sarcophagus and its base were later removed and used for Lord Nelson's tomb in the crypt of St. Paul's Cathedral.) Henry was interred in a vault at St George's Chapel, Windsor Castle, next to Jane Seymour. Over a hundred years later, King Charles I (1625–1649) was buried in the same vault.
Upon Henry's death, he was succeeded by his son Edward VI. Since Edward was then only nine years old, he could not rule directly. Instead, Henry's will designated 16 executors to serve on a council of regency until Edward reached the age of 18. The executors chose Edward Seymour, 1st Earl of Hertford, Jane Seymour's elder brother, to be Lord Protector of the Realm. If Edward died childless, the throne was to pass to Mary, Henry VIII's daughter by Catherine of Aragon, and her heirs. If Mary's issue failed, the crown was to go to Elizabeth, Henry's daughter by Anne Boleyn, and her heirs. Finally, if Elizabeth's line became extinct, the crown was to be inherited by the descendants of Henry VIII's deceased younger sister, Mary, the Greys. The descendants of Henry's sister Margaret – the Stuarts, rulers of Scotland – were thereby excluded from the succession. This final provision failed when James VI of Scotland became King of England in 1603.
Henry cultivated the image of a Renaissance man, and his court was a centre of scholarly and artistic innovation and glamorous excess, epitomised by the Field of the Cloth of Gold. He scouted the country for choirboys, taking some directly from Wolsey's choir, and introduced Renaissance music into court. Musicians included Benedict de Opitiis, Richard Sampson, Ambrose Lupo, and Venetian organist Dionisio Memo, and Henry himself kept a considerable collection of instruments. He was skilled on the lute and could play the organ, and he was a talented player of the virginals. He could also sight-read music and sing well. He was an accomplished musician, author, and poet; his best known piece of music is "Pastime with Good Company" ("The Kynges Ballade"), and he is reputed to have written "Greensleeves" but probably did not.
Henry was an avid gambler and dice player, and he excelled at sports, especially jousting, hunting, and real tennis. He was also known for his strong defence of conventional Christian piety. He was involved in the construction and improvement of several significant buildings, including Nonsuch Palace, King's College Chapel, Cambridge, and Westminster Abbey in London. Many of the existing buildings which he improved were properties confiscated from Wolsey, such as Christ Church, Oxford, Hampton Court Palace, the Palace of Whitehall, and Trinity College, Cambridge.
Henry was an intellectual, the first English king with a modern humanist education. He read and wrote English, French, and Latin, and owned a large library. He annotated many books and published one of his own, and he had numerous pamphlets and lectures prepared to support the reformation of the church. Richard Sampson's "Oratio" (1534), for example, was an argument for absolute obedience to the monarchy and claimed that the English church had always been independent from Rome. At the popular level, theatre and minstrel troupes funded by the crown travelled around the land to promote the new religious practices; the pope and Catholic priests and monks were mocked as foreign devils, while the glorious king was hailed as a brave and heroic defender of the true faith. Henry worked hard to present an image of unchallengeable authority and irresistible power.
Henry was a large, well-built athlete: tall, strong, and broad in proportion, and he excelled at jousting and hunting. These were more than pastimes; they were political devices which served multiple goals, enhancing his athletic royal image, impressing foreign emissaries and rulers, and conveying his ability to suppress any rebellion. He arranged a jousting tournament at Greenwich in 1517 where he wore gilded armour and gilded horse trappings, and outfits of velvet, satin, and cloth of gold with pearls and jewels. It suitably impressed foreign ambassadors, one of whom wrote home that "the wealth and civilisation of the world are here, and those who call the English barbarians appear to me to render themselves such". Henry finally retired from jousting in 1536 after a heavy fall from his horse left him unconscious for two hours, but he continued to sponsor two lavish tournaments a year. He then began to gain weight and lost the trim, athletic figure that had made him so handsome, and his courtiers began dressing in heavily padded clothes to emulate and flatter him. His health rapidly declined near the end of his reign.
The power of Tudor monarchs, including Henry, was 'whole' and 'entire'; they claimed to rule by the grace of God alone. The crown could also rely on the exclusive use of those functions that constituted the royal prerogative. These included acts of diplomacy (including royal marriages), declarations of war, management of the coinage, the issue of royal pardons and the power to summon and dissolve parliament as and when required. Nevertheless, as evident during Henry's break with Rome, the monarch worked within established limits, whether legal or financial, that forced him to work closely with both the nobility and parliament (representing the gentry).
In practice, Tudor monarchs used patronage to maintain a royal court that included formal institutions such as the Privy Council as well as more informal advisers and confidants. Both the rise and fall of court nobles could be swift: although the often-quoted figure of 72,000 executions of thieves during his reign is inflated, Henry did undoubtedly execute at will, burning or beheading two of his wives, twenty peers, four leading public servants, six close attendants and friends, one cardinal (John Fisher) and numerous abbots. Among those who were in favour at any given point in Henry's reign, one could usually be identified as a chief minister, though one of the enduring debates in the historiography of the period has been the extent to which those chief ministers controlled Henry rather than vice versa. In particular, historian G. R. Elton has argued that one such minister, Thomas Cromwell, led a "Tudor revolution in government" quite independent of the king, whom Elton presented as an opportunistic, essentially lazy participant in the nitty-gritty of politics. Where Henry did intervene personally in the running of the country, Elton argued, he mostly did so to its detriment. The prominence and influence of faction in Henry's court is similarly discussed in the context of at least five episodes of Henry's reign, including the downfall of Anne Boleyn.
From 1514 to 1529, Thomas Wolsey (1473–1530), a cardinal of the established Church, oversaw domestic and foreign policy for the young king from his position as Lord Chancellor. Wolsey centralised the national government and extended the jurisdiction of the conciliar courts, particularly the Star Chamber. The Star Chamber's overall structure remained unchanged, but Wolsey used it to provide for much-needed reform of the criminal law. The power of the court itself did not outlive Wolsey, however, since no serious administrative reform was undertaken and its role was eventually devolved to the localities. Wolsey helped fill the gap left by Henry's declining participation in government (particularly in comparison to his father) but did so mostly by imposing himself in the King's place. His use of these courts to pursue personal grievances, and particularly to treat delinquents as though they were mere examples of a whole class worthy of punishment, angered the rich, who were annoyed as well by his enormous wealth and ostentatious living. Following Wolsey's downfall, Henry took full control of his government, although at court numerous complex factions continued their attempts to destroy one another.
Thomas Cromwell (c. 1485–1540) also came to define Henry's government. Returning to England from the continent in 1514 or 1515, Cromwell soon entered Wolsey's service. He turned to law, also picking up a good knowledge of the Bible, and was admitted to Gray's Inn in 1524. He became Wolsey's "man of all work". Cromwell, driven in part by his religious beliefs, attempted to reform the body politic of the English government through discussion and consent, and through the vehicle of continuity, not outward change. Many, including Thomas Audley, saw him as the man who could bring about their shared aims. By 1531, Cromwell and those associated with him were already responsible for the drafting of much legislation. Cromwell's first office was that of master of the King's jewels in 1532, from which he began to invigorate the government finances. By this point, Cromwell's power as an efficient administrator, in a Council full of politicians, exceeded what Wolsey had achieved.
Cromwell did much work through his many offices to remove the tasks of government from the Royal Household (and ideologically from the personal body of the King) and into a public state. He did so, however, in a haphazard fashion that left several remnants, not least because he needed to retain Henry's support, his own power, and the possibility of actually achieving the plan he set out. Cromwell made the various income streams put in place by Henry VII more formal and assigned largely autonomous bodies for their administration. The role of the King's Council was transferred to a reformed Privy Council, much smaller and more efficient than its predecessor. A difference emerged between the financial health of the king and that of the country. Cromwell's fall, moreover, undermined much of his bureaucracy, which had required his hand to keep order among the many new bodies and to prevent the profligate spending that strained relations as well as finances. Cromwell's reforms ground to a halt in 1539, the initiative lost, and he failed to secure the passage of an enabling act, the Proclamation by the Crown Act 1539. He too was executed, on 28 July 1540.
Henry inherited a vast fortune and a prosperous economy from his father Henry VII, who had been frugal and careful with money. This fortune was estimated to be £1,250,000 (£375 million by today's standards). By comparison, however, the reign of Henry was a near-disaster in financial terms. Although he further augmented his royal treasury through the seizure of church lands, Henry's heavy spending and long periods of mismanagement damaged the economy.
Much of this wealth was spent by Henry on maintaining his court and household, including many of the building works he undertook on royal palaces. Henry hung 2,000 tapestries in his palaces; by comparison, James V of Scotland hung just 200. Henry took pride in showing off his collection of weapons, which included exotic archery equipment, 2,250 pieces of land ordnance and 6,500 handguns. Tudor monarchs had to fund all the expenses of government out of their own income. This income came from the Crown lands that Henry owned as well as from customs duties like tonnage and poundage, granted by parliament to the king for life. During Henry's reign the revenues of the Crown remained constant (around £100,000 a year), but were eroded by inflation and rising prices brought about by war. Indeed, war and Henry's dynastic ambitions in Europe exhausted the surplus he had inherited from his father by the mid-1520s.
Whereas Henry VII had not involved Parliament in his affairs very much, Henry VIII had to turn to Parliament during his reign for money, in particular for grants of subsidies to fund his wars. The Dissolution of the Monasteries provided a means to replenish the treasury, and as a result the Crown took possession of monastic lands worth £120,000 (£36 million) a year. The Crown had profited a small amount in 1526 when Wolsey had put England onto a gold, rather than silver, standard, and had debased the currency slightly. Cromwell debased the currency more significantly, starting in Ireland in 1540. The English pound halved in value against the Flemish pound between 1540 and 1551 as a result. The nominal profit made was significant, helping to bring income and expenditure together, but it had a catastrophic effect on the overall economy of the country. In part, it helped to bring about a period of very high inflation from 1544 onwards.
Henry is generally credited with initiating the English Reformation – the process of transforming England from a Catholic country to a Protestant one – though his progress at the elite and mass levels is disputed, and the precise narrative is not widely agreed upon. Certainly, in 1527, Henry, until then an observant and well-informed Catholic, appealed to the Pope for an annulment of his marriage to Catherine. No annulment was immediately forthcoming, since the papacy was now under the control of Charles V, Catherine's nephew. The traditional narrative gives this refusal as the trigger for Henry's rejection of papal supremacy (which he had previously defended). Yet, as E. L. Woodward put it, Henry's determination to divorce Catherine was the occasion rather than the cause of the English Reformation, so that "neither too much nor too little must be made of this divorce." Historian A. F. Pollard has also argued that even if Henry had not needed an annulment, he may have come to reject papal control over the governance of England purely for political reasons. Indeed, Henry needed a son to secure the Tudor dynasty and avert the risk of civil war over a disputed succession.
In any case, between 1532 and 1537, Henry instituted a number of statutes that dealt with the relationship between king and pope and hence the structure of the nascent Church of England. These included the Statute in Restraint of Appeals (passed 1533), which extended the charge of "praemunire" against all who introduced papal bulls into England, potentially exposing them to the death penalty if found guilty. Other acts included the Supplication against the Ordinaries and the Submission of the Clergy, which recognised Royal Supremacy over the church. The Ecclesiastical Appointments Act 1534 required the clergy to elect bishops nominated by the Sovereign. The Act of Supremacy in 1534 declared that the King was "the only Supreme Head on Earth of the Church of England", and the Treasons Act 1534 made it high treason, punishable by death, to refuse the Oath of Supremacy acknowledging the King as such. Similarly, following the passage of the Act of Succession 1533, all adults in the Kingdom were required to acknowledge the Act's provisions (declaring Henry's marriage to Anne legitimate and his marriage to Catherine illegitimate) by oath; those who refused were subject to imprisonment for life, and any publisher or printer of literature alleging that the marriage to Anne was invalid was subject to the death penalty. Finally, the Peter's Pence Act was passed; it reiterated that England had "no superior under God, but only your Grace" and that Henry's "imperial crown" had been diminished by "the unreasonable and uncharitable usurpations and exactions" of the Pope. The King had much support from the Church under Cranmer.
Henry, to Thomas Cromwell's annoyance, insisted on parliamentary time to discuss questions of faith, which he achieved through the Duke of Norfolk. This led to the passing of the Act of Six Articles, whereby six major questions were all answered by asserting religious orthodoxy, thus restraining the reform movement in England. It was followed by the beginnings of a reformed liturgy and of the Book of Common Prayer, which would take until 1549 to complete. The victory won by religious conservatives did not translate into much change in personnel, however, and Cranmer remained in his position. Overall, the rest of Henry's reign saw a subtle movement away from religious orthodoxy, helped in part by the deaths of prominent figures from before the break with Rome, especially the executions of Thomas More and John Fisher in 1535 for refusing to renounce papal authority. Henry established a new political theology of obedience to the crown that was continued for the next decade. It reflected Martin Luther's new interpretation of the fourth commandment ("Honour thy father and mother"), brought to England by William Tyndale. The founding of royal authority on the Ten Commandments was another important shift: reformers within the Church used the Commandments' emphasis on faith and the word of God, while conservatives emphasised the need for dedication to God and doing good. The reformers' efforts lay behind the publication of the "Great Bible" in 1539 in English. Protestant Reformers still faced persecution, particularly over objections to Henry's annulment. Many fled abroad, including the influential Tyndale, who was eventually executed and his body burned at Henry's behest.
When taxes once payable to Rome were transferred to the Crown, Cromwell saw the need to assess the taxable value of the Church's extensive holdings as they stood in 1535. The result was an extensive compendium, the "Valor Ecclesiasticus". In September of the same year, Cromwell commissioned a more general visitation of religious institutions, to be undertaken by four appointed visitors. The visitation focussed almost exclusively on the country's religious houses, with largely negative conclusions. In addition to reporting back to Cromwell, the visitors made the lives of the monks more difficult by enforcing strict behavioural standards. The result was to encourage self-dissolution. In any case, the evidence gathered by Cromwell led swiftly to the beginning of the state-enforced dissolution of the monasteries, with all religious houses worth less than £200 a year vested by statute in the crown in January 1536. After a short pause, surviving religious houses were transferred one by one to the Crown and then to new owners, and the dissolution was confirmed by a further statute in 1539. By January 1540 no such houses remained: some 800 had been dissolved. The process had been efficient, with minimal resistance, and brought the crown some £90,000 a year. The extent to which the dissolution of all houses was planned from the start is debated by historians; there is some evidence that major houses were originally intended only to be reformed. Cromwell's actions transferred a fifth of England's landed wealth to new hands. The programme was designed primarily to create a landed gentry beholden to the crown, which would use the lands much more efficiently. Although little opposition to the supremacy could be found in England's religious houses, they had links to the international church and were an obstacle to further religious reform.
Response to the reforms was mixed. The religious houses had been the only support of the impoverished, and the reforms alienated much of the population outside London, helping to provoke the great northern rising of 1536–1537, known as the Pilgrimage of Grace. Elsewhere the changes were accepted and welcomed, and those who clung to Catholic rites kept quiet or moved in secrecy. They would re-emerge during the reign of Henry's daughter Mary (1553–1558).
Apart from permanent garrisons at Berwick, Calais, and Carlisle, England's standing army numbered only a few hundred men, a number increased only slightly by Henry. Henry's invasion force of 1513, some 30,000 men, was composed of billmen and longbowmen at a time when the other European nations were moving to hand guns and pikemen. The difference in capability was at this stage not significant, however, and Henry's forces had new armour and weaponry. They were also supported by battlefield artillery and the war wagon, relatively recent innovations, and several large and expensive siege guns. The invasion force of 1544 was similarly well-equipped and organised, although battlefield command was entrusted to the dukes of Suffolk and Norfolk, which in the case of the latter produced disastrous results at Montreuil.
Henry's break with Rome incurred the threat of a large-scale French or Spanish invasion. To guard against this, in 1538, he began to build a chain of expensive, state-of-the-art defences, along Britain's southern and eastern coasts from Kent to Cornwall, largely built of material gained from the demolition of the monasteries. These were known as Henry VIII's Device Forts. He also strengthened existing coastal defence fortresses such as Dover Castle and, at Dover, Moat Bulwark and Archcliffe Fort, which he personally visited for a few months to supervise. Wolsey had many years before conducted the censuses required for an overhaul of the system of militia, but no reform resulted. In 1538–39, Cromwell overhauled the shire musters, but his work mainly served to demonstrate how inadequate they were in organisation. The building works, including that at Berwick, along with the reform of the militias and musters, were eventually finished under Queen Mary.
Henry is traditionally cited as one of the founders of the Royal Navy. Technologically, Henry invested in large cannon for his warships, an idea that had taken hold in other countries, to replace the smaller serpentines in use. He also flirted with designing ships personally – although his contribution to larger vessels, if any, is not known, it is believed that he influenced the design of rowbarges and similar galleys. Henry was also responsible for the creation of a permanent navy, with the supporting anchorages and dockyards. Tactically, Henry's reign saw the Navy move away from boarding tactics to employ gunnery instead. The Tudor navy was enlarged to fifty ships (the "Mary Rose" was one of them), and Henry was responsible for the establishment of the "council for marine causes" to oversee the maintenance and operation of the Navy, which became the basis for the later Admiralty.
At the beginning of Henry's reign, Ireland was effectively divided into three zones: the Pale, where English rule was unchallenged; Leinster and Munster, the so-called "obedient land" of Anglo-Irish peers; and the Gaelic Connaught and Ulster, with merely nominal English rule. Until 1513, Henry continued the policy of his father, allowing Irish lords to rule in the king's name and accepting the steep divisions between the communities. However, upon the death of the 8th Earl of Kildare, governor of Ireland, fractious Irish politics combined with a more ambitious Henry to cause trouble. When Thomas Butler, 7th Earl of Ormond, died, Henry recognised one successor for Ormond's English, Welsh and Scottish lands, whilst in Ireland another took control. Kildare's successor, the 9th Earl, was replaced as Lord Lieutenant of Ireland by Thomas Howard, Earl of Surrey, in 1520. Surrey's ambitious aims were costly but ineffective; English rule became trapped between winning the Irish lords over with diplomacy, as favoured by Henry and Wolsey, and a sweeping military occupation as proposed by Surrey. Surrey was recalled in 1521, with Piers Butler – one of the claimants to the Earldom of Ormond – appointed in his place. Butler proved unable to control opposition, including that of Kildare. Kildare was appointed chief governor in 1524, resuming his dispute with Butler, which had previously been in a lull. Meanwhile, the Earl of Desmond, an Anglo-Irish peer, had turned his support to Richard de la Pole as pretender to the English throne; when in 1528 Kildare failed to take suitable actions against him, Kildare was once again removed from his post.
The Desmond situation was resolved on his death in 1529, which was followed by a period of uncertainty. This was effectively ended with the appointment of Henry FitzRoy, Duke of Richmond and the king's son, as lord lieutenant. Richmond had never before visited Ireland, his appointment marking a break with past policy. For a time it looked as if peace might be restored with the return of Kildare to Ireland to manage the tribes, but the effect was limited and the Irish parliament was soon rendered ineffective. Ireland began to receive the attention of Cromwell, who had supporters of Ormond and Desmond promoted. Kildare, on the other hand, was summoned to London; after some hesitation, he departed for London in 1534, where he would face charges of treason. His son, Thomas, Lord Offaly, was more forthright, denouncing the king, who was by this time mired in marital problems, and leading a "Catholic crusade" against him. Offaly had the Archbishop of Dublin murdered and besieged Dublin. He led a mixture of Pale gentry and Irish tribes, although he failed to secure the support of Lord Darcy, a sympathiser, or of Charles V. What was effectively a civil war was ended with the intervention of 2,000 English troops – a large army by Irish standards – and the execution of Offaly (his father was already dead) and his uncles.
Although the Offaly revolt was followed by a determination to rule Ireland more closely, Henry was wary of drawn-out conflict with the tribes, and a royal commission recommended that the only relationship with the tribes was to be promises of peace, their land protected from English expansion. The man to lead this effort was Sir Antony St Leger, as Lord Deputy of Ireland, who would remain in the post beyond Henry's death. Until the break with Rome, it was widely believed that Ireland was a Papal possession granted as a mere fiefdom to the English king, so in 1541 Henry asserted England's claim to the Kingdom of Ireland free from papal overlordship. This change did, however, also allow a policy of peaceful reconciliation and expansion: the Lords of Ireland would grant their lands to the King, before having them returned as fiefdoms. The incentive to comply with Henry's request was an accompanying barony, and thus a right to sit in the Irish House of Lords, which was to run in parallel with England's. The Irish law of the tribes did not suit such an arrangement, because the chieftain did not have the required rights; this made progress tortuous, and the plan was abandoned in 1543, not to be replaced.
The complexities and sheer scale of Henry's legacy ensured that, in the words of Betteridge and Freeman, "throughout the centuries, Henry has been praised and reviled, but he has never been ignored". Historian J. D. Mackie similarly sought to sum up Henry's personality and its impact on his achievements and popularity.
A particular focus of modern historiography has been the extent to which the events of Henry's life (including his marriages, foreign policy and religious changes) were the result of his own initiative and, if they were, whether they were the result of opportunism or of a principled undertaking by Henry. The traditional interpretation of those events was provided by historian A. F. Pollard, who in 1902 presented his own, largely positive, view of the king, lauding him "as the king and statesman who, whatever his personal failings, led England down the road to parliamentary democracy and empire". Pollard's interpretation remained dominant until the publication of the doctoral thesis of G. R. Elton in 1953.
Elton's book, "The Tudor Revolution in Government", maintained Pollard's positive interpretation of the Henrician period as a whole, but reinterpreted Henry himself as a follower rather than a leader. For Elton, it was Cromwell and not Henry who undertook the changes in government – Henry was shrewd, but lacked the vision to follow a complex plan through. Henry was little more, in other words, than an "ego-centric monstrosity" whose reign "owed its successes and virtues to better and greater men about him; most of its horrors and failures sprang more directly from [the king]".
Although the central tenets of Elton's thesis have since been questioned, it has consistently provided the starting point for much later work, including that of J. J. Scarisbrick, his student. Scarisbrick largely kept Elton's regard for Cromwell's abilities but returned agency to Henry, whom Scarisbrick considered to have ultimately directed and shaped policy. For Scarisbrick, Henry was a formidable, captivating man who "wore regality with a splendid conviction". The effect of endowing Henry with this ability, however, was largely negative in Scarisbrick's eyes: to him, the Henrician period was one of upheaval and destruction, and those in charge were worthy more of blame than of praise. Even among more recent biographers, including David Loades, David Starkey and John Guy, there has ultimately been little consensus on the extent to which Henry was responsible for the changes he oversaw, or on the correct assessment of those he did bring about.
This lack of clarity about Henry's control over events has contributed to the variation in the qualities ascribed to him: religious conservative or dangerous radical; lover of beauty or brutal destroyer of priceless artefacts; friend and patron or betrayer of those around him; chivalry incarnate or ruthless chauvinist. One traditional approach, favoured by Starkey and others, is to divide Henry's reign into two halves: in the first, a Henry dominated by positive qualities (politically inclusive, pious, athletic but also intellectual) presided over a period of stability and calm; in the latter, a "hulking tyrant" presided over a period of dramatic, sometimes whimsical, change. Other writers have tried to merge Henry's disparate personality into a single whole; Lacey Baldwin Smith, for example, considered him an egotistical borderline neurotic given to great fits of temper and deep and dangerous suspicions, with a mechanical and conventional, but deeply held, piety, and having at best a mediocre intellect.
Many changes were made to the royal style during his reign. Henry originally used the style "Henry the Eighth, by the Grace of God, King of England, France and Lord of Ireland". In 1521, pursuant to a grant from Pope Leo X rewarding Henry for his "Defence of the Seven Sacraments", the royal style became "Henry the Eighth, by the Grace of God, King of England and France, Defender of the Faith and Lord of Ireland". Following Henry's excommunication, Pope Paul III rescinded the grant of the title "Defender of the Faith", but an Act of Parliament (35 Hen 8 c 3) declared that it remained valid; and it continues in royal usage to the present day, as evidenced by the letters FID DEF or F.D. on all British coinage. Henry's motto was "Coeur Loyal" ("true heart"), and he had this embroidered on his clothes in the form of a heart symbol and with the word "loyal". His emblem was the Tudor rose and the Beaufort portcullis. As king, Henry's arms were the same as those used by his predecessors since Henry IV: "Quarterly, Azure three fleurs-de-lys Or (for France) and Gules three lions passant guardant in pale Or (for England)".
In 1535, Henry added the "supremacy phrase" to the royal style, which became "Henry the Eighth, by the Grace of God, King of England and France, Defender of the Faith, Lord of Ireland and of the Church of England in Earth Supreme Head". In 1536, the phrase "of the Church of England" changed to "of the Church of England and also of Ireland". In 1541, Henry had the Irish Parliament change the title "Lord of Ireland" to "King of Ireland" with the Crown of Ireland Act 1542, after being advised that many Irish people regarded the Pope as the true head of their country, with the Lord acting as a mere representative. The reason the Irish regarded the Pope as their overlord was that Ireland had originally been given to King Henry II of England by Pope Adrian IV in the 12th century as a feudal territory under papal overlordship. The meeting of Irish Parliament that proclaimed Henry VIII as King of Ireland was the first meeting attended by the Gaelic Irish chieftains as well as the Anglo-Irish aristocrats. The style "Henry the Eighth, by the Grace of God, King of England, France and Ireland, Defender of the Faith and of the Church of England and also of Ireland in Earth Supreme Head" remained in use until the end of Henry's reign.
Haryana
Haryana is one of the 28 states of India, located in the northern part of the country. It was carved out of the former state of East Punjab on 1 November 1966 on a linguistic basis. It is ranked 22nd in terms of area, with less than 1.4% of India's land area. Chandigarh is the state capital, Faridabad in the National Capital Region is the most populous city of the state, and Gurugram is a leading financial hub of the NCR, home to major Fortune 500 companies. Haryana has 6 administrative divisions, 22 districts, 72 sub-divisions, 93 revenue tehsils, 50 sub-tehsils, 140 community development blocks, 154 cities and towns, 6,848 villages, and 6,222 village panchayats.
As the largest per capita recipient of investment in India since 2000, and one of the wealthiest and most economically developed regions in South Asia, Haryana has the fifth highest per capita income among Indian states and territories, more than double the national average for 2018–19. Haryana's GSDP is the 12th largest in India and grew at 12.96% between 2012 and 2017. There are some 30 special economic zones (SEZs), mainly located within the industrial corridor projects connecting the National Capital Region (NCR). Faridabad has been described as the eighth fastest-growing city in the world and the third fastest-growing in India. In services, Gurugram ranks first in India in IT growth rate and existing technology infrastructure, and second in startup ecosystem, innovation and livability. Haryana ranks 7th among Indian states on the human development index.
The Indus Valley Civilization, among the world's oldest and largest ancient civilisations, left sites at Rakhigarhi village in Hisar district and Bhirrana in Fatehabad district that are 9,000 years old. Rich in history, monuments, heritage, flora and fauna, human resources and tourism, with a well-developed economy, national highways and state roads, the state is bordered by Himachal Pradesh to the north-east, by the river Yamuna along its eastern border with Uttar Pradesh, and by Rajasthan to the west and south, while the Ghaggar-Hakra River flows along its northern border with Punjab. Since Haryana surrounds the country's capital Delhi on three sides (north, west and south), a large area of the state is included in the economically important National Capital Region for the purposes of planning and development.
The name Haryana is found in the works of the 12th-century AD Apabhramsha writer Vibudh Shridhar (VS 1189–1230). The name Haryana has been derived from the Sanskrit words "Hari" (the Hindu god Vishnu) and "ayana" (home), meaning "the Abode of God". However, scholars such as Muni Lal, Murli Chand Sharma, HA Phadke and Sukhdev Singh Chib believe that the name comes from a compound of the words "Hari" (Sanskrit "Harit", "green") and "Aranya" (forest).
The villages of Rakhigarhi in Hisar district and Bhirrana in Fatehabad district are home to the largest and one of the world's oldest ancient Indus Valley Civilization sites, dated at over 9,000 years old. Evidence of paved roads, a drainage system, a large-scale rainwater collection storage system, terracotta brick and statue production, and skilled metal working (in both bronze and precious metals) have been uncovered. According to archaeologists, Rakhigarhi may be the origin of Harappan civilisation, which arose in the Ghaggar basin in Haryana and gradually moved to the Indus valley.
The south of Haryana is the claimed location of the Vedic Brahmavarta region.
Ancient bronze and stone idols of Jain Tirthankara were found in archaeological expeditions in Badli, Bhiwani (Ranila, Charkhi Dadri and Badhra), Dadri, Gurgaon (Ferozepur Jhirka), Hansi, Hisar (Agroha), Kasan, Nahad, Narnaul, Pehowa, Rewari, Rohad, Rohtak (Asthal Bohar) and Sonepat in Haryana.
The Pushyabhuti dynasty ruled parts of northern India in the 7th century with its capital at Thanesar; Harsha was a prominent king of the dynasty. The Tomara dynasty ruled the south Haryana region in the 10th century; Anangpal Tomar was a prominent king among the Tomaras.
After the sack of Bhatner fort during the Timurid conquests of India in 1398, Timur attacked and sacked the cities of Sirsa, Fatehabad, Sunam, Kaithal and Panipat. When he reached the town of Sarsuti (Sirsa), the residents, who were mostly non-Muslims, fled and were chased by a detachment of Timur's troops, with thousands of them killed and looted by the troops. From there he travelled to Fatehabad, whose residents fled, and a large number of those remaining in the town were massacred. The Ahirs resisted him at Ahruni but were defeated, with thousands killed and many taken prisoner, while the town was burnt to ashes. From there he travelled to Tohana, whose Jat inhabitants were described as robbers by Sharaf ad-Din Ali Yazdi. They tried to resist but were defeated and fled. Timur's army pursued and killed 200 Jats, while taking many more as prisoners. He then sent a detachment to chase the fleeing Jats and killed 2,000 of them, while their wives and children were enslaved and their property plundered. Timur proceeded to Kaithal, whose residents were massacred and plundered, destroying all villages along the way. On the next day he came to Assandh, whose residents, "fire-worshippers" according to Yazdi, had fled to Delhi. Next, he travelled to and subdued Tughlaqpur fort and Salwan before reaching Panipat, whose residents had already fled. He then marched on to Loni fort.
Hemu claimed royal status after defeating Akbar's Mughal forces on 7 October 1556 in the Battle of Delhi and assumed the ancient title of Vikramaditya.
The area that is now Haryana has been ruled by some of the major empires of India. Panipat is known for three seminal battles in the history of India. In the First Battle of Panipat (1526), Babur defeated the Lodis. In the Second Battle of Panipat (1556), Akbar defeated the local Haryanvi Hindu Emperor of Delhi, who belonged to Rewari. Hem Chandra Vikramaditya had earlier won 22 battles across India from Punjab to Bengal, defeating Mughals and Afghans. Hemu had defeated Akbar's forces twice at Agra and the Battle of Delhi in 1556 to become the last Hindu Emperor of India with a formal coronation at Purana Quila in Delhi on 7 October 1556. In the Third Battle of Panipat (1761), the Afghan king Ahmad Shah Abdali defeated the Marathas.
Haryana as a state came into existence on 1 November 1966 under the Punjab Reorganisation Act (1966). The Indian government set up the Shah Commission under the chairmanship of Justice JC Shah on 23 April 1966 to divide the existing state of Punjab and determine the boundaries of the new state of Haryana after consideration of the languages spoken by the people. The commission delivered its report on 31 May 1966 whereby the then-districts of Hisar, Mahendragarh, Gurgaon, Rohtak and Karnal were to be a part of the new state of Haryana. Further, the tehsils of Jind and Narwana in the Sangrur district – along with Naraingarh, Ambala and Jagadhri – were to be included.
The commission recommended that the tehsil of Kharar, which includes Chandigarh, the state capital of Punjab, should be a part of Haryana. However, Kharar was given to Punjab. The city of Chandigarh was made a union territory, serving as the capital of both Punjab and Haryana.
Bhagwat Dayal Sharma became the first Chief Minister of Haryana.
According to the 2011 census, of Haryana's total population of 25,350,000, Hindus (87.46%) constitute the majority, with Muslims (7.03%, mainly Meos) and Sikhs (4.91%) the largest minorities.
Muslims are mainly found in Nuh district. Haryana has the second largest Sikh population in India after Punjab, and they mostly live in the districts adjoining Punjab, such as Sirsa, Jind, Fatehabad, Kaithal, Kurukshetra, Ambala and Panchkula.
The official language of Haryana is Hindi.
Several regional languages or dialects, often subsumed under Hindi, are spoken in the state. Predominant among them is Haryanvi (also known as Bangru), whose territory encompasses the central and eastern portions of Haryana. Hindustani is spoken in the northeast, Bagri in the west, and Ahirwati, Mewati and Braj Bhasha in the south.
There are also significant numbers of speakers of Urdu and Punjabi, the latter of which was recognised in 2010 as a second official language of Haryana for government and administrative purposes. After the state's formation, Tamil was made the state's "second language" – to be taught in schools – but it was not the "second official language" for official communication. Due to a lack of students, the language ultimately stopped being taught.
There are also some speakers of several major regional languages of neighbouring states or other parts of the subcontinent, like Bengali, Bhojpuri, Marwari, Mewari, Nepali and Saraiki, as well as smaller communities of speakers of languages that are dispersed across larger regions, like Bauria, Bazigar, Gujari, Gade Lohar, Oadki, and Sansi.
Haryana has its own unique traditional folk music, folk dances, saang (folk theatre), cinema, belief system such as Jathera (ancestral worship), and arts such as Phulkari and Shisha embroidery.
The folk music and dances of Haryana grew out of the cultural needs of its primarily agrarian and martial communities.
The main types of Haryanvi musical folk theatre are Saang, Rasa lila and Ragini. The Saang and Ragini forms of theatre were popularised by Lakhmi Chand.
Haryanvi folk dances and music have fast, energetic movements. Three popular categories of dance are festive-seasonal, devotional, and ceremonial-recreational. The festive-seasonal dances and songs include Gogaji/Gugga, Holi, Phaag, Sawan and Teej. The devotional dances and songs include Chaupaiya, Holi, Manjira, Ras Leela and Raginis. The ceremonial-recreational dances and songs are of the following types: legendary bravery (Kissa and Ragini of male warriors and female Satis), love and romance (Been and its variant Nāginī dance, and Ragini), and ceremonial (Dhamal Dance, Ghoomar, Jhoomar (male), Khoria, Loor, and Ragini).
Haryanvi folk music is based on day-to-day themes, and the injection of earthy humour enlivens the feel of the songs. Haryanvi music takes two main forms, "classical folk music" and "desi folk music" (the country music of Haryana), and is sung in the form of ballads about love, valour and bravery, the harvest, happiness, and the pangs of parting lovers.
Classical Haryanvi folk music is based on Indian classical music. Hindustani classical ragas, learnt in the gharana parampara of the guru–shishya tradition, are used to sing songs of heroic bravery (such as the Alha-Khand (1163–1202 CE) about the bravery of Alha and Udal, and of Jaimal and Patta of Maharana Udai Singh II), songs of worship, and festive seasonal songs (such as Teej, Holi and Phaag songs of the Phalgun month near Holi). Bravery songs are sung in a high pitch.
Desi Haryanvi folk music is based on Raag Bhairvi, Raag Bhairav, Raag Kafi, Raag Jaijaivanti, Raag Jhinjhoti and Raag Pahadi, and is used, in celebration of community bonhomie, to sing seasonal songs, ballads, ceremonial songs (weddings, etc.) and related religious legendary tales such as Puran Bhagat. Songs of relationships, love and life are sung in a medium pitch. Ceremonial and religious songs are sung in a low pitch. Young girls and women usually sing entertaining and fast seasonal, love, relationship and friendship related songs such as Phagan (songs for the eponymous season/month), Katak (songs for the eponymous season/month), Samman (songs for the eponymous season/month), bande-bandi (male-female duet songs) and sathne (songs of sharing heartfelt feelings among female friends). Older women usually sing devotional Mangal Geet (auspicious songs) and ceremonial songs such as Bhajan, Bhat (a wedding gift to the mother of the bride or groom by her brother), Sagai, Ban (the Hindu wedding ritual where pre-wedding festivities start), Kuan-Poojan (a custom performed to welcome the birth of a child by worshipping the well or source of drinking water), Sanjhi and Holi festival songs.
For Haryanvi people, music and dance are a great way of breaking down societal differences, as folk singers are highly esteemed and are sought after and invited to events, ceremonies and special occasions regardless of their caste or status. These inter-caste songs are fluid in nature, never personalised for any specific caste, and are sung collectively by women from different strata, castes and dialects. The songs transform fluidly in dialect, style and wording; this adaptive style can be seen in the adoption of tunes from Bollywood film songs into Haryanvi songs. Despite this continuous, fluid transformation, Haryanvi songs have a distinct style of their own, as explained above.
With the rise of a strongly socio-economic metropolitan culture in the emerging urban centre of Gurgaon (Gurugram), Haryana is also witnessing community participation in public art and city beautification. Several landmarks across Gurgaon are decorated with public murals and graffiti carrying culturally cohesive themes, standing as testimony to a lived folk sentiment in Haryana.
As per a survey, 81% of the people of Haryana are vegetarian, and the regional cuisine features the staples of roti, saag, vegetarian sabzi and milk products such as ghee, milk, lassi and kheer.
Haryana has a concept of 36 Jātis, or communities. Castes such as Jat, Rajpoot, Gujjar, Saini, Pasi, Ahir, Ror, Mev, Vishnoi and Harijan are some of the most notable of these 36 Jātis.
Haryana is a landlocked state in northern India, lying between 27°39' and 30°35' N latitude and between 74°28' and 77°36' E longitude. The total geographical area of the state is 4.42 million ha, which is 1.4% of the geographical area of the country. The altitude of Haryana varies between 700 and 3,600 ft (200 to 1,200 metres) above sea level. Haryana has only 4% of its area under forests (compared to 21.85% nationally). Karoh Peak, a mountain peak in the Sivalik Hills range of the greater Himalayas near the Morni Hills area of Panchkula district, is the highest point in Haryana.
Himachal Pradesh
Himachal Pradesh ("snow-laden province") is a state in the northern part of India. Situated in the Western Himalayas, it is one of the eleven mountain states and is characterized by an extreme landscape featuring several peaks and extensive river systems. Himachal Pradesh shares borders with the State of Jammu and Kashmir to the north, and the states of Punjab to the west, Haryana to the southwest, and Uttarakhand and Uttar Pradesh to the south. The state also has a border with the autonomous region of Tibet to the east.
The predominantly mountainous region comprising the present-day Himachal Pradesh has been inhabited since pre-historic times having witnessed multiple waves of human migration from other areas. Through its history, the region was mostly ruled by local kingdoms some of which accepted the suzerainty of larger empires. Prior to India's independence from the British, Himachal comprised the hilly regions of Punjab Province of British India. After independence, many of the hilly territories were organized as the Chief Commissioner's province of Himachal Pradesh which later became a union territory. In 1966, hilly areas of neighboring Punjab state were merged into Himachal and it was ultimately granted full statehood in 1971.
Himachal Pradesh is spread across valleys with many perennial rivers flowing through them. Almost 90% of the state's population lives in rural areas. Agriculture, horticulture, hydropower and tourism are important constituents of the state's economy. The hilly state is almost universally electrified with 99.5% of the households having electricity as of 2016. The state was declared India's second open-defecation-free state in 2016. According to a survey of CMS – India Corruption Study 2017, Himachal Pradesh is India's least corrupt state.
Tribes such as the Koli and Kaibarta (the various Austro-Dravidian tribes), Hali, Dagi, Dhaugri, Dasa, Khasa, Kanaura, and Kirat inhabited the region from the prehistoric era. The foothills of the modern state of Himachal Pradesh were inhabited by people from the Indus valley civilisation which flourished between 2250 and 1750 B.C. The Kols and Kaibarttas along with other similar Austro-Dravidian ethnic groups are believed to be the original inhabitants of the hills of present-day Himachal Pradesh, followed by the Bhotas and Kiratas.
During the Vedic period, several small republics known as "Janapada" existed which were later conquered by the Gupta Empire. After a brief period of supremacy by King Harshavardhana, the region was divided into several local powers headed by chieftains, including some Rajput principalities. These kingdoms enjoyed a large degree of independence and were invaded by the Delhi Sultanate a number of times. Mahmud Ghaznavi conquered Kangra at the beginning of the 11th century. Timur and Sikander Lodi also marched through the lower hills of the state and captured a number of forts and fought many battles. Several hill states acknowledged Mughal suzerainty and paid regular tribute to the Mughals.
The Kingdom of Gorkha conquered many kingdoms and came to power in Nepal in 1768. They consolidated their military power and began to expand their territory. Gradually, the Kingdom of Nepal annexed Sirmour and Shimla. Under the leadership of Amar Singh Thapa, the Nepali army laid siege to Kangra. They managed to defeat Sansar Chand Katoch, the ruler of Kangra, in 1806 with the help of many provincial chiefs. However, the Nepali army could not capture Kangra fort which came under Maharaja Ranjeet Singh in 1809. After the defeat, they expanded towards the south of the state. However, Raja Ram Singh, Raja of Siba State, captured the fort of Siba from the remnants of Lahore Darbar in Samvat 1846, during the First Anglo-Sikh War.
They came into direct conflict with the British along the "tarai" belt, after which the British expelled them from the provinces of the Satluj. The British gradually emerged as the paramount power in the region. In the revolt of 1857, or the first Indian war of independence, arising from a number of grievances against the British, the people of the hill states were not as politically active as were those in other parts of the country. They and their rulers, with the exception of Bushahr, remained more or less inactive. Some, including the rulers of Chamba, Bilaspur, Bhagal and Dhami, rendered help to the British government during the revolt.
The British territories came under the British Crown after Queen Victoria's proclamation of 1858. The states of Chamba, Mandi and Bilaspur made good progress in many fields during the British rule. During World War I, virtually all rulers of the hill states remained loyal and contributed to the British war effort, both in the form of men and materials. Among these were the states of Kangra, Jaswan, Datarpur, Guler, Rajgarh, Nurpur, Chamba, Suket, Mandi, and Bilaspur.
After independence, the Chief Commissioner's Province of Himachal Pradesh was organised on 15 April 1948 as a result of the integration of 28 petty princely states (including feudal princes and "zaildars") in the promontories of the western Himalayas. These were known as the Simla Hills States and four Punjab southern hill states under the Himachal Pradesh (Administration) Order, 1948 under Sections 3 and 4 of the Extra-Provincial Jurisdiction Act, 1947 (later renamed as the Foreign Jurisdiction Act, 1947 vide A.O. of 1950). The State of Bilaspur was merged into Himachal Pradesh on 1 July 1954 by the Himachal Pradesh and Bilaspur (New State) Act, 1954.
Himachal became a Part 'C' state on 26 January 1950 with the implementation of the Constitution of India, and a Lieutenant Governor was appointed. The Legislative Assembly was elected in 1952. Himachal Pradesh became a union territory on 1 November 1956. Some areas of Punjab State (namely the Simla, Kangra, Kullu and Lahul and Spiti districts; Nalagarh tehsil of Ambala district; the Lohara, Amb and Una kanungo circles; part of the Santokhgarh kanungo circle and certain other specified areas of Una tehsil of Hoshiarpur district; and parts of the Dhar Kalan kanungo circle of Pathankot tehsil of Gurdaspur district) were merged with Himachal Pradesh on 1 November 1966 on the enactment by Parliament of the Punjab Reorganisation Act, 1966. On 18 December 1970, the State of Himachal Pradesh Act was passed by Parliament, and the new state came into being on 25 January 1971, a day celebrated every year as Himachal Pradesh Statehood Day. Himachal thus became the 18th state of the Indian Union, with Dr. Yashwant Singh Parmar as its first chief minister.
Himachal is a mountainous state in the western Himalayas. Most of the state lies in the foothills of the Dhauladhar Range. At 6,816 m, Reo Purgyil is the highest mountain peak in the state of Himachal Pradesh.
The drainage system of Himachal is composed of both rivers and glaciers. Himalayan rivers criss-cross the entire mountain chain.
Himachal Pradesh provides water to both the Indus and Ganges basins. The drainage systems of the region are the Chandra Bhaga or the Chenab, the Ravi, the Beas, the Sutlej, and the Yamuna. These rivers are perennial and are fed by snow and rainfall. They are protected by an extensive cover of natural vegetation.
Due to extreme variation in elevation, great variation occurs in the climatic conditions of Himachal. The climate varies from hot and sub-humid tropical in the southern tracts to cold, alpine, and glacial in the higher northern and eastern mountain ranges. The state's winter capital, Dharamsala, receives very heavy rainfall, while areas like Lahaul and Spiti are cold and almost rainless. Broadly, Himachal experiences three seasons: summer, winter, and the rainy season. Summer lasts from mid-April until the end of June, and most parts become very hot, except the alpine zone, which experiences a mild summer. Winter lasts from late November until mid-March, and snowfall is common in alpine tracts.
Himachal Pradesh is one of the states that lie in the Indian Himalayan Region (IHR), one of the richest reservoirs of biological diversity in the world. As of 2002, the IHR was undergoing large-scale, indiscriminate extraction of wild medicinal herbs, endangering many of its high-value gene stocks. To address this, a workshop on "Endangered Medicinal Plant Species in Himachal Pradesh" was held in 2002, attended by forty experts from diverse disciplines.
According to the 2003 Forest Survey of India report, legally defined forest areas constitute 66.52% of the area of Himachal Pradesh. Vegetation in the state is dictated by elevation and precipitation. The state is endowed with a high diversity of medicinal and aromatic plants. The Lahaul-Spiti region, being a cold desert, supports unique plants of medicinal value including "Ferula jaeschkeana", "Hyoscyamus niger", "Lancea tibetica", and "Saussurea bracteata".
Himachal is also said to be the fruit bowl of the country, with orchards being widespread. Meadows and pastures are also seen clinging to steep slopes. After the winter season, the hillsides and orchards bloom with wild flowers, while gladioli, carnations, marigolds, roses, chrysanthemums, tulips and lilies are carefully cultivated. The Himachal Pradesh Horticultural Produce Marketing and Processing Corporation Ltd. (HPMC) is a state body that markets fresh and processed fruits.
Himachal Pradesh has around 463 bird, 77 mammal, 44 reptile and 80 fish species. Great Himalayan National Park, a UNESCO World Heritage Site, and Pin Valley National Park are the national parks located in the state. The state also has 30 wildlife sanctuaries and 3 conservation reserves.
The Legislative Assembly of Himachal Pradesh has no pre-constitution history. The State itself is a post-independence creation. It came into being as a centrally administered territory on 15 April 1948 from the integration of thirty erstwhile princely states.
Himachal Pradesh is governed through a parliamentary system of representative democracy, a feature the state shares with other Indian states. Universal suffrage is granted to residents. The legislature consists of elected members and special office bearers such as the Speaker and the Deputy Speaker who are elected by the members. Assembly meetings are presided over by the Speaker or the Deputy Speaker in the Speaker's absence. The judiciary is composed of the Himachal Pradesh High Court and a system of lower courts.
Executive authority is vested in the Council of Ministers headed by the Chief Minister. The Governor is the titular head of state, appointed by the President of India. The leader of the party or coalition with a majority in the Legislative Assembly is appointed as Chief Minister by the governor, and the Council of Ministers is appointed by the governor on the advice of the Chief Minister. The Council of Ministers reports to the Legislative Assembly. The Assembly is unicameral with 68 Members of the Legislative Assembly (MLAs). Terms of office run for five years, unless the Assembly is dissolved before the completion of the term. Auxiliary authorities known as "panchayats", for which local body elections are regularly held, govern local affairs.
In the assembly elections held in November 2017, the BJP secured an absolute majority, winning 44 of the 68 seats while the Congress won only 21 of the 68 seats. Jai Ram Thakur was sworn in as Himachal Pradesh's Chief Minister for the first time in Shimla on 27 December 2017.
The state of Himachal Pradesh is divided into 12 districts which are grouped into three divisions: Shimla, Kangra and Mandi. The districts are further divided into 69 subdivisions, 78 blocks and 145 tehsils.
The era of planning in Himachal Pradesh started in 1951, along with the rest of India, with the implementation of the first five-year plan. The First Plan allocated ₹52.7 million to Himachal Pradesh. More than 50% of this expenditure was incurred on transport and communication, while the power sector got a share of just 4.6%, though this had steadily increased to 7% by the Third Plan. Expenditure on agriculture and allied activities increased from 14.4% in the First Plan to 32% in the Third Plan, showing a progressive decline afterwards, from 24% in the Fourth Plan to less than 10% in the Tenth Plan. Expenditure on the energy sector was 24.2% of the total in the Tenth Plan.
The total GDP for 2005–06 was estimated at ₹254 billion as against ₹230 billion in 2004–05, an increase of 10.5%. The GDP for fiscal 2015–16 was estimated at ₹1.110 trillion, which increased to ₹1.247 trillion in 2016–17, recording growth of 6.8%. The per capita income increased from ₹130,067 in 2015–16 to ₹147,277 in 2016–17. The state government's advance estimates for fiscal 2017–18 put the total GDP and per capita income at ₹1.359 trillion and ₹158,462 respectively. As of 2018, Himachal is the 22nd-largest state economy in India by gross domestic product and has the 13th-highest per capita income among the states and union territories of India.
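As a rough sanity check, the 2005–06 figure follows from the standard year-on-year growth formula (a sketch using the rounded values quoted above; the published estimates presumably use unrounded figures, and the 6.8% quoted for 2016–17 presumably reflects real growth at constant prices rather than the nominal change):

$$\text{growth} = \frac{\text{GDP}_{t} - \text{GDP}_{t-1}}{\text{GDP}_{t-1}} \times 100 = \frac{254 - 230}{230} \times 100 \approx 10.4\%,$$

which agrees with the quoted 10.5% up to rounding.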
Himachal Pradesh also ranks as the second-best performing state in the country on human development indicators after Kerala. One of the Indian government's key initiatives to tackle unemployment is the National Rural Employment Guarantee Act (NREGA). The participation of women in the NREGA has been observed to vary across different regions of the nation. As of the year 2009–2010, Himachal Pradesh joined the category of high female participation, recording a 46% share of NREGS (National Rural Employment Guarantee Scheme) work days to women. This was a drastic increase from the 13% that was recorded in 2006–2007.
Agriculture accounts for 9.4% of the net state domestic product. It is the main source of income and employment in Himachal. About 90% of the population of Himachal depends directly upon agriculture, which provides direct employment to 62% of the state's total workers. The main cereals grown include wheat, maize, rice and barley, with the major cropping systems being maize-wheat, rice-wheat and maize-potato-wheat. Pulses, fruits, vegetables and oilseeds are among the other crops grown in the state. Land husbandry initiatives such as the Mid-Himalayan Watershed Development Project, which includes the Himachal Pradesh Reforestation Project (HPRP), the world's largest clean development mechanism (CDM) undertaking, have improved agricultural yields and productivity, and raised rural household incomes.
Apple is the principal cash crop of the state, grown mainly in the districts of Shimla, Kinnaur, Kullu, Mandi, Chamba and some parts of Sirmaur and Lahaul-Spiti, with an average annual production of five lakh tonnes and a per-hectare yield of 8 to 10 tonnes. Apple cultivation constitutes 49% of the total area under fruit crops and 85% of total fruit production in the state, with an estimated economy of ₹3,500 crore. Apples from Himachal are exported to other Indian states and even to other countries. In 2011–12, the total area under apple cultivation was 1.04 lakh hectares, up from 90,347 hectares in 2000–01. According to the provisional estimates of the Ministry of Agriculture & Farmers Welfare, the annual apple production in Himachal for fiscal 2015–16 stood at 7.53 lakh tonnes, making it India's second-largest apple-producing state after Jammu and Kashmir.
Hydropower is one of the major sources of income generation for the state. The state has an abundance of hydropower resources because of the presence of various perennial rivers. Many high-capacity hydropower plants have been constructed, producing surplus electricity that is sold to other states such as Delhi, Punjab and West Bengal. The income generated from exporting electricity to other states is provided as a subsidy to consumers within the state. The rich hydropower resources of Himachal have made the state almost universally electrified, with around 94.8% of households receiving electricity as of 2001, compared to the national average of 55.9%. Himachal's hydroelectric potential is, however, yet to be fully utilised: the identified potential for the state is 27,436 MW across five river basins, while the installed hydroelectric capacity in 2016 was 10,351 MW.
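To put the last two figures in proportion (a quick sketch using only the numbers quoted above):

$$\frac{10{,}351\ \text{MW}}{27{,}436\ \text{MW}} \approx 0.38,$$

that is, only about 38% of the identified hydroelectric potential had been harnessed as of 2016.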
Tourism in Himachal Pradesh is a major contributor to the state's economy and growth. The Himalayas attract tourists from all over the world. Hill stations like Shimla, Manali, Dharamshala, Dalhousie, Chamba, Khajjiar, Kullu and Kasauli are popular destinations for both domestic and foreign tourists. The state also has many important Hindu pilgrimage sites, with prominent temples like Naina Devi Temple, Bajreshwari Mata Temple, Jwala Ji Temple, Chintpurni, Chamunda Devi Temple, Baijnath Temple, Bhimakali Temple, Bijli Mahadev and Jakhoo Temple. Manimahesh Lake, situated in the Bharmour region of Chamba district, is the venue of an annual Hindu pilgrimage trek held in the month of August which attracts lakhs of devotees. The state is also referred to as "Dev Bhoomi" (literally "Abode of Gods") due to its mention as such in ancient Hindu texts and the presence of a large number of historical temples in the state.
Locally, Himachal is called the Land of the Gods because of its many popular temples to Hindu deities, although the neighbouring state of Uttarakhand is also famous as a land of the gods owing to major Hindu shrines such as Badrinath (one of the Char Dham), Kedarnath and Adi Kailash, the sources of holy rivers like the Ganges and the Yamuna, and the holy cities of Haridwar and Rishikesh.
Himachal is also known for adventure tourism activities like ice skating in Shimla, paragliding in Bir Billing and the Solang valley, rafting in Kullu, skiing in Manali, boating in Bilaspur, and trekking, horse riding and fishing in different parts of the state. Shimla, the state's capital, is home to Asia's only natural ice-skating rink. Spiti Valley in Lahaul and Spiti district, situated at an altitude of over 3,000 metres with its picturesque landscapes, is an important destination for adventure seekers. The region also has some of the oldest Buddhist monasteries in the world.
Himachal hosted the first Paragliding World Cup in India from 24 to 31 October 2015. The venue was Bir Billing, 70 km from the tourist town of Macleod Ganj, in the heart of Himachal's Kangra district. Bir Billing is the centre for aero sports in Himachal and is considered among the best sites for paragliding. Buddhist monasteries, treks to tribal villages and mountain biking are other local attractions.
Himachal has three domestic airports, in the Kangra, Kullu and Shimla districts. Air routes connect the state with Delhi and Chandigarh.
The only broad-gauge railway line in the state connects Una Himachal railway station to Nangal Dam in Punjab and runs all the way to Daulatpur. The track has been electrified since 1999.
Himachal is known for its narrow-gauge railways. One is the Kalka–Shimla Railway, a UNESCO World Heritage Site, and the other is the Pathankot–Jogindernagar line. The Kalka–Shimla Railway passes through many tunnels and bridges, while the Pathankot–Jogindernagar line meanders through a maze of hills and valleys.
Roads are the major mode of transport in the state's hilly terrain. The road network includes eight National Highways (NH) and 19 State Highways. Hamirpur district has the highest road density in the country. Some roads are closed during the winter and monsoon seasons due to snow and landslides. The state-owned Himachal Road Transport Corporation, with a fleet of over 3,100 buses, operates services connecting important cities and towns with villages within the state and also on various interstate routes. In addition, around 5,000 private buses ply in the state.
Himachal Pradesh has a total population of 6,864,602, including 3,481,873 males and 3,382,729 females, according to the Census of India 2011. The state holds only 0.57% of India's total population and recorded decadal growth of 12.81%. The scheduled castes and scheduled tribes account for 25.19% and 5.71% of the population respectively. The sex ratio stood at 972 females per 1,000 males, a marginal increase from 968 in 2001. The child sex ratio increased from 896 in 2001 to 909 in 2011. The total fertility rate (TFR) per woman in 2015 stood at 1.7, one of the lowest in India.
In the census, the state is placed 21st on the population chart, followed by Tripura at 22nd place. Kangra district was top-ranked with a population strength of 1,507,223 (21.98%), Mandi district 999,518 (14.58%), Shimla district 813,384 (11.86%), Solan district 576,670 (8.41%), Sirmaur district 530,164 (7.73%), Una district 521,057 (7.60%), Chamba district 518,844 (7.57%), Hamirpur district 454,293 (6.63%), Kullu district 437,474 (6.38%), Bilaspur district 382,056 (5.57%), Kinnaur district 84,298 (1.23%) and Lahaul Spiti 31,528 (0.46%).
The life expectancy at birth in Himachal Pradesh increased significantly, from 52.6 years in 1970–75 (above the national average of 49.7 years) to 72.0 years in 2011–15 (above the national average of 68.3 years). The infant mortality rate stood at 40 in 2010, and the crude birth rate declined from 37.3 in 1971 to 16.9 in 2010, below the national average of 26.5 in 1998. The crude death rate was 6.9 in 2010. Himachal Pradesh's literacy rate almost doubled between 1981 and 2011, and the state is one of the most literate in India, with a literacy rate of 83.78% as of 2011.
Hindi is the official language of Himachal Pradesh and is spoken by the majority of the population as a lingua franca. Sanskrit is the additional official language of the state. Most of the population, however, natively speak one or another of the Himachali languages (locally also known as "Pahari"), a subgroup of the Indo-Aryan languages that includes Mandeali, Kangri, Kullu, Bilaspuri and others. Additional Indo-Aryan languages spoken in the state include Hindi, Punjabi (native to 4.4% of the population), Nepali (1.3%) and Kashmiri (0.8%). In parts of the state there are speakers of Tibeto-Burman languages like Kinnauri (1.2%), Tibetan (0.3%), Lahauli (0.16%), Pattani (0.12%), and others.
Hinduism is the major religion in Himachal Pradesh. More than 95% of the total population adheres to the Hindu faith, the distribution of which is evenly spread throughout the state. Himachal Pradesh has the highest proportion of Hindu population among all the states and union territories in India.
Other religions that form a small percentage are Islam, Sikhism and Buddhism. Muslims are mainly concentrated in the Sirmaur, Chamba, Una and Solan districts, where they form 2.53–6.27% of the population. Sikhs mostly live in towns and cities and constitute 1.16% of the state population. The Buddhists, who constitute 1.15%, are mainly natives and tribals from Lahaul and Spiti, where they form a majority of 62%, and Kinnaur, where they form 21.5%.
Himachal Pradesh was one of the few states that had remained largely untouched by external customs, mainly due to its difficult terrain. With remarkable economic and social advancement, the state has changed rapidly. Like other Indian states, Himachal Pradesh is a multireligious, multicultural and multilingual state. Western Pahari languages, also known as Himachali languages, are widely spoken in the state. Some of the most commonly spoken individual languages are Kangri, Mandeali, Kulvi, Chambeali, Bharmauri and Kinnauri.
The Hindu communities residing in Himachal include the "Brahmins", "Kayasthas", "Rajputs", "Sunars", "Kannets", "Rathis" and "Kolis". The tribal population of the state consists mainly of "Gaddis", "Gujjars", "Kanauras", "Pangwalas", "Bhots", "Swanglas" and "Lahaulas".
Himachal is well known for its handicrafts. The carpets, leather works, Kullu shawls, Kangra paintings, Chamba Rumals, stoles, embroidered grass footwear ("Pullan chappal"), silver jewellery, metal ware, knitted woolen socks, "Pattoo", basketry of cane and bamboo ("Wicker" and "Rattan") and woodwork are among the notable ones. Of late, the demand for these handicrafts has increased within and outside the country.
Himachali caps of various colour bands are also a well-known local artwork and are often treated as a symbol of Himachali identity. The colour of a Himachali cap has long been an indicator of political loyalties in the hill state, with Congress party leaders like Virbhadra Singh donning caps with a green band and the rival BJP leader Prem Kumar Dhumal wearing a cap with a maroon band. The former has served six terms as Chief Minister of the state, while the latter is a two-time Chief Minister. Local music and dance also reflect the cultural identity of the state. Through their dance and music, the Himachali people entreat their gods during local festivals and other special occasions.
Apart from national fairs and festivals, there are regional fairs and festivals, including the temple fairs in nearly every region, that are of great significance to Himachal Pradesh. The Kullu Dussehra festival is nationally known. The day-to-day cuisine of Himachalis is similar to that of the rest of northern India, with Punjabi and Tibetan influences. Lentils ("Dāl"), rice ("Chāwal" or "Bhāț"), vegetables ("Sabzī") and chapati (wheat flatbread) form the staple food of the local population. Non-vegetarian food is more widely accepted in Himachal Pradesh than elsewhere in India, partly due to the scarcity of fresh vegetables in the hilly terrain of the state. Himachali specialities include "Siddu", "Babru", "Khatta", "Mhanee", "Channa Madra", "Patrode", "Mah Ki Dal", "Chamba-Style Fried Fish", "Kullu Trout", "Chha Gosht", "Pahadi Chicken", "Sepu Badi", "Auriya Kaddu", "Aloo Palda", "Pateer", "Makki Ki Roti" and "Sarson Ka Saag", "Chouck", "Bhagjery" and "Chutney" of Til.
At the time of independence, Himachal Pradesh had a literacy rate of 8%, one of the lowest in the country. By 2011, the literacy rate had surged past 82%, making Himachal one of the most literate states in the country. There are over 10,000 primary schools, 1,000 secondary schools and more than 1,300 high schools in the state. In meeting the constitutional obligation to make primary education compulsory, Himachal became the first state in India to make elementary education accessible to every child. Himachal Pradesh is an exception to the nationwide gender bias in education levels, with a female literacy rate of around 76%; school enrolment and participation rates for girls are almost universal at the primary level. While higher levels of education do reflect a gender-based disparity, Himachal is still significantly ahead of other states at bridging the gap. Hamirpur district in particular stands out for high literacy rates across all metrics of measurement.
The state government has played an instrumental role in the rise of literacy by spending a significant proportion of the state's GDP on education. During the first six five-year plans, most of the development expenditure on education was devoted to quantitative expansion, but after the seventh five-year plan the state government shifted its emphasis to qualitative improvement and the modernisation of education. In an effort to raise the number of teaching staff at primary schools, it appointed over 1,000 teacher aides through the Vidya Upasak Yojna in 2001. The Sarva Shiksha Abhiyan is another government initiative that not only aims for universal elementary education but also encourages communities to engage in the management of schools. The Rashtriya Madhyamik Shiksha Abhiyan, launched in 2009, is a similar scheme focused on improving access to quality secondary education.
The standard of education in the state has reached a considerably high level compared with other states in India, with several reputed institutes for higher education. The Baddi University of Emerging Sciences and Technologies, Indian Institute of Technology Mandi, Indian Institute of Management Sirmaur, Himachal Pradesh University in Shimla, Central University of Himachal Pradesh in Dharamsala, National Institute of Technology Hamirpur, Indian Institute of Information Technology Una, Alakh Prakash Goyal University, Maharaja Agrasen University and Himachal Pradesh National Law University are some of the notable universities in the state. Indira Gandhi Medical College and Hospital in Shimla, Dr. Rajendra Prasad Government Medical College in Kangra, Rajiv Gandhi Government Post Graduate Ayurvedic College in Paprola and the Homoeopathic Medical College & Hospital in Kumarhatti are the prominent medical institutes in the state. Besides these, there is a Government Dental College in Shimla, the state's first recognised dental institute.
The state government has also decided to start three major nursing colleges to develop the state's healthcare system. CSK Himachal Pradesh Krishi Vishwavidyalaya, Palampur, is one of the most renowned hill agriculture institutes in the world. Dr. Yashwant Singh Parmar University of Horticulture and Forestry has earned a unique distinction in India for imparting teaching, research and extension education in horticulture, forestry and allied disciplines. Further, the state-run Jawaharlal Nehru Government Engineering College was inaugurated in 2006 at Sundernagar.
Himachal Pradesh also hosts a campus of the prestigious fashion college, National Institute of Fashion Technology (NIFT) in Kangra.
https://en.wikipedia.org/wiki?curid=14190
History of medicine
The history of medicine shows how societies have changed in their approach to illness and disease from ancient times to the present. Early medical traditions include those of Babylon, China, Egypt and India. Sushruta, from India, introduced the concepts of medical diagnosis and prognosis. The Hippocratic Oath was written in ancient Greece in the 5th century BCE, and is a direct inspiration for oaths of office that physicians swear upon entry into the profession today. In the Middle Ages, surgical practices inherited from the ancient masters were improved and then systematized in Rogerius's "The Practice of Surgery". Universities began systematic training of physicians around 1220 CE in Italy.
The invention of the microscope was a consequence of improved understanding during the Renaissance. Prior to the 19th century, humorism (also known as humoralism) was thought to explain the cause of disease, but it was gradually replaced by the germ theory of disease, leading to effective treatments and even cures for many infectious diseases. Military doctors advanced the methods of trauma treatment and surgery. Public health measures were developed especially in the 19th century as the rapid growth of cities required systematic sanitary measures. Advanced research centers opened in the early 20th century, often connected with major hospitals. The mid-20th century was characterized by new biological treatments, such as antibiotics. These advancements, along with developments in chemistry, genetics, and radiography, led to modern medicine. Medicine was heavily professionalized in the 20th century, and new careers opened to women as nurses (from the 1870s) and as physicians (especially after 1970).
Although there is little record to establish when plants were first used for medicinal purposes (herbalism), the use of plants, as well as clays and soils, as healing agents is ancient. Over time, through emulation of the behavior of fauna, a medicinal knowledge base developed and was passed between generations. Even earlier, Neanderthals may have engaged in medical practices.
As tribal cultures developed specialized castes, shamans and apothecaries fulfilled the role of healer.
The first known dentistry dates to c. 7000 BC in Baluchistan, where Neolithic dentists used flint-tipped drills and bowstrings. The first known trepanning operation was carried out c. 5000 BC in Ensisheim, France. A possible amputation was carried out c. 4900 BC in Buthiers-Boulancourt, France.
The ancient Mesopotamians had no distinction between "rational science" and magic. When a person became ill, doctors would prescribe both magical formulas to be recited and medicinal treatments. The earliest medical prescriptions appear in Sumerian during the Third Dynasty of Ur (c. 2112 BC – c. 2004 BC). The oldest Babylonian texts on medicine date back to the Old Babylonian period in the first half of the 2nd millennium BCE. The most extensive Babylonian medical text, however, is the "Diagnostic Handbook" written by the "ummânū", or chief scholar, Esagil-kin-apli of Borsippa, during the reign of the Babylonian king Adad-apla-iddina (1069–1046 BCE). Along with the Egyptians, the Babylonians introduced the practice of diagnosis, prognosis, physical examination, and remedies. In addition, the "Diagnostic Handbook" introduced methods of therapy and etiology. The text contains a list of medical symptoms and often detailed empirical observations, along with logical rules used in combining the symptoms observed on a patient's body with a diagnosis and prognosis. The "Diagnostic Handbook" was based on a logical set of axioms and assumptions, including the modern view that through the examination and inspection of a patient's symptoms, it is possible to determine the patient's disease, its cause and future development, and the chances of recovery. The symptoms and diseases of a patient were treated through therapeutic means such as bandages, herbs and creams.
In East Semitic cultures, the main medicinal authority was a kind of exorcist-healer known as an "āšipu". The profession was generally passed down from father to son and was held in extremely high regard. Of less frequent recourse was another kind of healer known as an "asu", who corresponds more closely to a modern physician and treated physical symptoms using primarily folk remedies composed of various herbs, animal products, and minerals, as well as potions, enemas, and ointments or poultices. These physicians, who could be either male or female, also dressed wounds, set limbs, and performed simple surgeries. The ancient Mesopotamians also practiced prophylaxis and took measures to prevent the spread of disease.
Mental illnesses were well known in ancient Mesopotamia, where diseases and mental disorders were believed to be caused by specific deities. Because hands symbolized control over a person, mental illnesses were known as "hands" of certain deities. One psychological illness was known as "Qāt Ištar", meaning "Hand of Ishtar". Others were known as "Hand of Shamash", "Hand of the Ghost", and "Hand of the God". Descriptions of these illnesses, however, are so vague that it is usually impossible to determine which illnesses they correspond to in modern terminology. Mesopotamian doctors kept detailed records of their patients' hallucinations and assigned spiritual meanings to them. A patient who hallucinated that he was seeing a dog was predicted to die, whereas if he saw a gazelle, he would recover. The royal family of Elam was notorious for its members frequently suffering from insanity. Erectile dysfunction was recognized as being rooted in psychological problems.
Ancient Egypt developed a large, varied and fruitful medical tradition. Herodotus described the Egyptians as "the healthiest of all men, next to the Libyans", because of the dry climate and the notable public health system that they possessed. According to him, "the practice of medicine is so specialized among them that each physician is a healer of one disease and no more." Although Egyptian medicine, to a considerable extent, dealt with the supernatural, it eventually developed a practical use in the fields of anatomy, public health, and clinical diagnostics.
Medical information in the Edwin Smith Papyrus may date to a time as early as 3000 BC. Imhotep in the 3rd dynasty is sometimes credited with being the founder of ancient Egyptian medicine and with being the original author of the "Edwin Smith Papyrus", detailing cures, ailments and anatomical observations. The "Edwin Smith Papyrus" is regarded as a copy of several earlier works and was written c. 1600 BC. It is an ancient textbook on surgery almost completely devoid of magical thinking and describes in exquisite detail the "examination, diagnosis, treatment," and "prognosis" of numerous ailments.
The Kahun Gynaecological Papyrus treats women's complaints, including problems with conception. Thirty-four cases detailing diagnosis and treatment survive, some of them fragmentary. Dating to 1800 BCE, it is the oldest surviving medical text of any kind.
Medical institutions, referred to as "Houses of Life", are known to have been established in ancient Egypt as early as 2200 BC.
The Ebers Papyrus is the oldest written text mentioning enemas. Many medications were administered by enemas and one of the many types of medical specialists was an Iri, the Shepherd of the Anus.
The earliest known physician is also credited to ancient Egypt: Hesy-Ra, "Chief of Dentists and Physicians" to King Djoser in the 27th century BCE. The earliest known woman physician, Peseshet, practiced in ancient Egypt at the time of the 4th dynasty. Her title was "Lady Overseer of the Lady Physicians". In addition to her supervisory role, Peseshet trained midwives at an ancient Egyptian medical school in Sais.
The Atharvaveda, a sacred text of Hinduism dating from the Early Iron Age, is one of the first Indian texts dealing with medicine. The Atharvaveda also contains prescriptions of herbs for various ailments. The use of herbs to treat ailments would later form a large part of Ayurveda.
Ayurveda, meaning the "complete knowledge for long life", is another medical system of India. Its two most famous texts belong to the schools of Charaka and Sushruta. The earliest foundations of Ayurveda were built on a synthesis of traditional herbal practices together with a massive addition of theoretical conceptualizations, new nosologies and new therapies dating from about 600 BCE onwards, coming out of the communities of thinkers which included the Buddha and others.
According to the compendium of Charaka, the Charakasamhitā, health and disease are not predetermined, and life may be prolonged by human effort. The compendium of Suśruta, the Suśrutasamhitā, defines the purpose of medicine as curing the diseases of the sick, protecting the healthy, and prolonging life. Both of these ancient compendia include details of the examination, diagnosis, treatment, and prognosis of numerous ailments. The Suśrutasamhitā is notable for describing procedures on various forms of surgery, including rhinoplasty, the repair of torn ear lobes, perineal lithotomy, cataract surgery, and several other excisions and surgical procedures. Most remarkable is Sushruta's penchant for scientific classification:
His medical treatise consists of 184 chapters in which 1,120 conditions are listed, including injuries and illnesses relating to aging and mental illness.
The Ayurvedic classics mention eight branches of medicine: kāyācikitsā (internal medicine), śalyacikitsā (surgery including anatomy), śālākyacikitsā (eye, ear, nose, and throat diseases), kaumārabhṛtya (pediatrics with obstetrics and gynaecology), bhūtavidyā (spirit and psychiatric medicine), agada tantra (toxicology with treatments of stings and bites), rasāyana (science of rejuvenation), and vājīkaraṇa (aphrodisiac and fertility). Apart from learning these, the student of Āyurveda was expected to know ten arts that were indispensable in the preparation and application of his medicines: distillation, operative skills, cooking, horticulture, metallurgy, sugar manufacture, pharmacy, analysis and separation of minerals, compounding of metals, and preparation of alkalis. The teaching of various subjects was done during the instruction of relevant clinical subjects. For example, the teaching of anatomy was a part of the teaching of surgery, embryology was a part of training in pediatrics and obstetrics, and the knowledge of physiology and pathology was interwoven in the teaching of all the clinical disciplines.
The normal length of the student's training appears to have been seven years, though the physician was expected to continue to learn.
As an alternative form of medicine in India, Unani medicine found deep roots and royal patronage during medieval times. It progressed during the Delhi Sultanate and Mughal periods. Unani medicine is very close to Ayurveda. Both are based on the theory of the presence of the elements (in Unani, considered to be fire, water, earth, and air) in the human body. According to followers of Unani medicine, these elements are present in different fluids, and their balance leads to health while their imbalance leads to illness.
By the 18th century CE, Sanskrit medical wisdom still dominated. Muslim rulers built large hospitals in 1595 in Hyderabad, and in Delhi in 1719, and numerous commentaries on ancient texts were written.
China also developed a large body of traditional medicine. Much of the philosophy of traditional Chinese medicine derived from empirical observations of disease and illness by Taoist physicians and reflects the classical Chinese belief that individual human experiences express causative principles effective in the environment at all scales. These causative principles, whether material, essential, or mystical, correlate as the expression of the natural order of the universe.
The foundational text of Chinese medicine is the Huangdi neijing (or "Yellow Emperor's Inner Canon"), written between the 5th and 3rd centuries BCE. Near the end of the 2nd century CE, during the Han dynasty, Zhang Zhongjing wrote a "Treatise on Cold Damage", which contains the earliest known reference to the "Neijing Suwen". The Jin dynasty practitioner and advocate of acupuncture and moxibustion, Huangfu Mi (215–282), also quotes the Yellow Emperor in his "Jiayi jing", c. 265. During the Tang dynasty, the "Suwen" was expanded and revised and is now the best extant representation of the foundational roots of traditional Chinese medicine. Traditional Chinese medicine, based on the use of herbal medicine, acupuncture, massage and other forms of therapy, has been practiced in China for thousands of years.
In the 18th century, during the Qing dynasty, there was a proliferation of popular books as well as more advanced encyclopedias on traditional medicine. Jesuit missionaries introduced Western science and medicine to the royal court, although the Chinese physicians ignored them.
Finally, in the 19th century, Western medicine was introduced at the local level by Christian medical missionaries from the London Missionary Society (Britain), the Methodist Church (Britain) and the Presbyterian Church (US). In 1839, Benjamin Hobson (1816–1873) set up a highly successful Wai Ai Clinic in Guangzhou, China. The Hong Kong College of Medicine for Chinese was founded in 1887 by the London Missionary Society, with its first graduate (in 1892) being Sun Yat-sen, who later led the Chinese Revolution (1911). The Hong Kong College of Medicine for Chinese was the forerunner of the School of Medicine of the University of Hong Kong, which started in 1911.
Because of the social custom that men and women should not be near one another, the women of China were reluctant to be treated by male doctors. The missionaries therefore sent women doctors such as Dr. Mary Hannah Fulton (1854–1927). Supported by the Foreign Missions Board of the Presbyterian Church (US), she founded in 1902 the first medical college for women in China, the Hackett Medical College for Women, in Guangzhou.
When reading the Chinese classics, it is important for scholars to examine these works from the Chinese perspective. Historians have noted two key aspects of Chinese medical history: understanding conceptual differences when translating the term "shén", and observing the history from the perspective of cosmology rather than biology.
In Chinese classical texts, the term "shén" is the closest historical translation to the English word "body", because it sometimes refers to the physical human body in terms of being weighed or measured, but the term is to be understood as an "ensemble of functions" encompassing both the human psyche and emotions. This concept of the human body is opposed to the European duality of a separate mind and body. It is critical for scholars to understand the fundamental differences in concepts of the body in order to connect the medical theory of the classics to the "human organism" it is explaining.
Chinese scholars established a correlation between the cosmos and the "human organism". The basic components of cosmology, qi, yin yang and the Five Phase theory, were used to explain health and disease in texts such as the Huangdi neijing. Yin and yang are the changing factors in cosmology, with qi as the vital force or energy of life. The Five Phase theory ("Wu Xing") of the Han dynasty contains the elements wood, fire, earth, metal, and water. By understanding medicine from a cosmological perspective, historians better understand Chinese medical and social classifications such as gender, which was defined by a domination or remission of yang in terms of yin.
These two distinctions are imperative when analyzing the history of traditional Chinese medical science.
A majority of Chinese medical history written after the classical canons comes in the form of primary-source case studies, in which academic physicians recorded the illness of a particular person and the healing techniques used, as well as their effectiveness. Historians have noted that Chinese scholars wrote these studies instead of "books of prescriptions or advice manuals"; in their historical and environmental understanding, no two illnesses were alike, so the healing strategies of the practitioner were unique every time to the specific diagnosis of the patient. Medical case studies existed throughout Chinese history, but "individually authored and published case history" was a prominent creation of the Ming dynasty. An example of such case studies is the literati physician Cheng Congzhou's collection of 93 cases, published in 1644.
Around 800 BCE, Homer in "The Iliad" gives descriptions of wound treatment by the two sons of Asklepios, the admirable physicians Podaleirius and Machaon, and one acting doctor, Patroclus. Because Machaon is wounded and Podaleirius is in combat, Eurypylus asks Patroclus to "cut out this arrow from my thigh, wash off the blood with warm water and spread soothing ointment on the wound". Asklepios, like Imhotep, becomes a god of healing over time.
Temples dedicated to the healer-god Asclepius, known as "Asclepieia" (singular "Asclepieion"), functioned as centers of medical advice, prognosis, and healing. At these shrines, patients would enter a dream-like state of induced sleep known as "enkoimesis", not unlike anesthesia, in which they either received guidance from the deity in a dream or were cured by surgery. Asclepeia provided carefully controlled spaces conducive to healing and fulfilled several of the requirements of institutions created for healing. In the Asclepeion of Epidaurus, three large marble boards dated to 350 BCE preserve the names, case histories, complaints, and cures of about 70 patients who came to the temple with a problem and shed it there. Some of the surgical cures listed, such as the opening of an abdominal abscess or the removal of traumatic foreign material, are realistic enough to have taken place, but with the patient in a state of enkoimesis induced with the help of soporific substances such as opium. Alcmaeon of Croton wrote on medicine between 500 and 450 BCE. He argued that channels linked the sensory organs to the brain, and it is possible that he discovered one type of channel, the optic nerves, by dissection.
A towering figure in the history of medicine was the physician Hippocrates of Kos (c. 460 – c. 370 BCE), considered the "father of modern medicine." The Hippocratic Corpus is a collection of around seventy early medical works from ancient Greece strongly associated with Hippocrates and his students. Most famously, the Hippocratics invented the Hippocratic Oath for physicians. Contemporary physicians swear an oath of office which includes aspects found in early editions of the Hippocratic Oath.
Hippocrates and his followers were first to describe many diseases and medical conditions. Though humorism (humoralism) as a medical system predates 5th-century Greek medicine, Hippocrates and his students systematized the thinking that illness can be explained by an imbalance of blood, phlegm, black bile, and yellow bile. Hippocrates is given credit for the first description of clubbing of the fingers, an important diagnostic sign in chronic suppurative lung disease, lung cancer and cyanotic heart disease. For this reason, clubbed fingers are sometimes referred to as "Hippocratic fingers". Hippocrates was also the first physician to describe the Hippocratic face in "Prognosis". Shakespeare famously alludes to this description when writing of Falstaff's death in Act II, Scene iii of "Henry V".
Hippocrates began to categorize illnesses as acute, chronic, endemic and epidemic, and to use terms such as "exacerbation, relapse, resolution, crisis, paroxysm, peak, and convalescence."
Another of Hippocrates's major contributions may be found in his descriptions of the symptomatology, physical findings, surgical treatment and prognosis of thoracic empyema, i.e. suppuration of the lining of the chest cavity. His teachings remain relevant to present-day students of pulmonary medicine and surgery. Hippocrates was the first documented person to practise cardiothoracic surgery, and his findings are still valid.
Some of the techniques and theories developed by Hippocrates are now put into practice by the fields of Environmental and Integrative Medicine. These include recognizing the importance of taking a complete history which includes environmental exposures as well as foods eaten by the patient which might play a role in his or her illness.
Two great Alexandrians laid the foundations for the scientific study of anatomy and physiology: Herophilus of Chalcedon and Erasistratus of Ceos. Other Alexandrian surgeons gave us ligature (hemostasis), lithotomy, hernia operations, ophthalmic surgery, plastic surgery, methods of reduction of dislocations and fractures, tracheotomy, and mandrake as an anaesthetic. Some of what we know of them comes from Celsus and Galen of Pergamum.
Herophilus of Chalcedon, working at the medical school of Alexandria, placed intelligence in the brain and connected the nervous system to motion and sensation. Herophilus also distinguished between veins and arteries, noting that the latter pulse while the former do not. He and his contemporary, Erasistratus of Ceos, researched the role of veins and nerves, mapping their courses across the body. Erasistratus connected the increased complexity of the surface of the human brain compared to other animals to its superior intelligence. He sometimes employed experiments to further his research, at one time repeatedly weighing a caged bird and noting its weight loss between feeding times. In Erasistratus' physiology, air enters the body, is drawn by the lungs into the heart, where it is transformed into vital spirit, and is then pumped by the arteries throughout the body. Some of this vital spirit reaches the brain, where it is transformed into animal spirit, which is then distributed by the nerves.
The Greek Galen (c. 129–216 AD) was one of the greatest physicians of the ancient world, studying and traveling widely in ancient Rome. He dissected animals to learn about the body, and performed many audacious operations—including brain and eye surgeries—that were not tried again for almost two millennia. In "Ars medica" ("Arts of Medicine"), he explained mental properties in terms of specific mixtures of the bodily parts.
Galen's medical works were regarded as authoritative until well into the Middle Ages. He left a physiological model of the human body that became the mainstay of the medieval physician's university anatomy curriculum, but it suffered greatly from stasis and intellectual stagnation because some of Galen's ideas were incorrect: he had never dissected a human body, since Greek and Roman taboos meant that dissection was usually banned in ancient times.
In 1523 Galen's "On the Natural Faculties" was published in London. In the 1530s Belgian anatomist and physician Andreas Vesalius launched a project to translate many of Galen's Greek texts into Latin. Vesalius's most famous work, "De humani corporis fabrica" was greatly influenced by Galenic writing and form.
The Romans invented numerous surgical instruments, including the first instruments unique to women, as well as the surgical uses of forceps, scalpels, cautery, cross-bladed scissors, the surgical needle, the sound, and specula. Romans also performed cataract surgery.
The Roman army physician Dioscorides (c. 40–90 CE) was a Greek botanist and pharmacologist. He wrote the encyclopedia "De Materia Medica", describing over 600 herbal cures, forming an influential pharmacopoeia which was used extensively for the following 1,500 years.
Early Christians in the Roman Empire incorporated medicine into their theology, ritual practices, and metaphors.
Byzantine medicine encompasses the common medical practices of the Byzantine Empire from about 400 AD to 1453 AD. Byzantine medicine was notable for building upon the knowledge base developed by its Greco-Roman predecessors. In preserving medical practices from antiquity, Byzantine medicine influenced Islamic medicine as well as fostering the Western rebirth of medicine during the Renaissance.
Byzantine physicians often compiled and standardized medical knowledge into textbooks. Their records tended to include both diagnostic explanations and technical drawings. The Medical Compendium in Seven Books, written by the leading physician Paul of Aegina, survived as a particularly thorough source of medical knowledge. This compendium, written in the late seventh century, remained in use as a standard textbook for the following 800 years.
Late antiquity ushered in a revolution in medical science, and historical records often mention civilian hospitals (although battlefield medicine and wartime triage were recorded well before Imperial Rome). Constantinople stood out as a center of medicine during the Middle Ages, which was aided by its crossroads location, wealth, and accumulated knowledge.
The first ever known example of separating conjoined twins occurred in the Byzantine Empire in the 10th century. The next recorded example came many centuries later, in Germany in 1689.
The Byzantine Empire's neighbors, the Persian Sassanid Empire, also made noteworthy contributions, mainly through the establishment of the Academy of Gondeshapur, which was "the most important medical center of the ancient world during the 6th and 7th centuries." In addition, Cyril Elgood, British physician and historian of medicine in Persia, commented that thanks to medical centers like the Academy of Gondeshapur, "to a very large extent, the credit for the whole hospital system must be given to Persia."
The Islamic civilization rose to primacy in medical science as its physicians contributed significantly to the field of medicine, including anatomy, ophthalmology, pharmacology, pharmacy, physiology, and surgery. The Arabs were influenced by ancient Indian, Persian, Greek, Roman and Byzantine medical practices and helped develop them further. Galen and Hippocrates were pre-eminent authorities. The translation of 129 of Galen's works into Arabic by the Nestorian Christian Hunayn ibn Ishaq and his assistants, and in particular Galen's insistence on a rational, systematic approach to medicine, set the template for Islamic medicine, which rapidly spread throughout the Arab Empire. Its most famous physicians included the Persian polymaths Muhammad ibn Zakarīya al-Rāzi and Avicenna, who wrote more than 40 works on health, medicine, and well-being. Taking leads from Greece and Rome, Islamic scholars kept both the art and science of medicine alive and moving forward. The Persian polymath Avicenna has also been called the "father of medicine". He wrote "The Canon of Medicine", which became a standard medical text at many medieval European universities and is considered one of the most famous books in the history of medicine. "The Canon of Medicine" presents an overview of the contemporary medical knowledge of the medieval Islamic world, which had been influenced by earlier traditions including Greco-Roman medicine (particularly Galen), Persian medicine, Chinese medicine and Indian medicine. The Persian physician al-Rāzi was one of the first to question the Greek theory of humorism, which nevertheless remained influential in both medieval Western and medieval Islamic medicine. Some volumes of al-Rāzi's work "Al-Mansuri", namely "On Surgery" and "A General Book on Therapy", became part of the medical curriculum in European universities. Additionally, he has been described as a doctor's doctor, the father of pediatrics, and a pioneer of ophthalmology. For example, he was the first to recognize the reaction of the eye's pupil to light.
After AD 400, the study and practice of medicine in the Western Roman Empire went into deep decline. Medical services were provided, especially for the poor, in the thousands of monastic hospitals that sprang up across Europe, but the care was rudimentary and mainly palliative. Most of the writings of Galen and Hippocrates were lost to the West, with the summaries and compendia of St. Isidore of Seville being the primary channel for transmitting Greek medical ideas. The Carolingian renaissance brought increased contact with Byzantium and a greater awareness of ancient medicine, but only with the twelfth-century renaissance and the new translations coming from Muslim and Jewish sources in Spain, and the fifteenth-century flood of resources after the fall of Constantinople did the West fully recover its acquaintance with classical antiquity.
Greek and Roman taboos had meant that dissection was usually banned in ancient times, but this changed in the Middle Ages: medical teachers and students at Bologna began to open human bodies, and Mondino de Luzzi (c. 1275–1326) produced the first known anatomy textbook based on human dissection.
Wallis identifies a prestige hierarchy with university-educated physicians on top, followed by learned surgeons; craft-trained surgeons; barber surgeons; itinerant specialists such as dentists and oculists; empirics; and midwives.
The first medical schools were opened in the 9th century, most notably the Schola Medica Salernitana at Salerno in southern Italy. Its cosmopolitan influences from Greek, Latin, Arabic, and Hebrew sources gave it an international reputation as the Hippocratic City. Students from wealthy families came for three years of preliminary studies and five of medical studies. Under the laws of Frederick II, who founded the University of Naples in 1224 and improved the Schola Salernitana, medicine in Sicily developed markedly in the period between 1200 and 1400 (the so-called Sicilian Middle Ages), so much so that a true school of Jewish medicine was created there.
As a result, after a legal examination, a Jewish Sicilian woman, Virdimura, wife of the physician Pasquale of Catania, gained the historical distinction of being the first woman officially trained to practise the medical profession.
By the thirteenth century, the medical school at Montpellier began to eclipse the Salernitan school. In the 12th century, universities were founded in Italy, France, and England, which soon developed schools of medicine. The University of Montpellier in France and Italy's University of Padua and University of Bologna were leading schools. Nearly all the learning was from lectures and readings in Hippocrates, Galen, Avicenna, and Aristotle. In later centuries, the importance of universities founded in the late Middle Ages gradually increased, e.g. Charles University in Prague (established in 1348), Jagiellonian University in Cracow (1364), University of Vienna (1365), Heidelberg University (1386) and University of Greifswald (1456).
The underlying principle of most medieval medicine was Galen's theory of humours. This was derived from the ancient medical works and dominated all Western medicine until the 19th century. The theory stated that within every individual there were four humours, or principal fluids: black bile, yellow bile, phlegm, and blood. These were produced by various organs in the body, and they had to be in balance for a person to remain healthy. Too much phlegm in the body, for example, caused lung problems, and the body tried to cough up the phlegm to restore a balance. The balance of humours could be achieved by diet, medicines, and blood-letting using leeches. The four humours were also associated with the four seasons: black bile with autumn, yellow bile with summer, phlegm with winter, and blood with spring.
Healing included both physical and spiritual therapeutics, such as the right herbs, a suitable diet, clean bedding, and the sense that care was always at hand. Other procedures used to help patients included the Mass, prayers, relics of saints, and music used to calm a troubled mind or quickened pulse.
In 1376, in Sicily, under the laws of Frederick II, which provided for an examination by a royal commission of physicians, the first qualification to practise medicine was granted to a woman: Virdimura, a Jewess of Catania, whose document is preserved in the Italian national archives in Palermo.
The Renaissance brought an intense focus on scholarship to Christian Europe. A major effort to translate the Arabic and Greek scientific works into Latin emerged. Europeans gradually became experts not only in the ancient writings of the Romans and Greeks, but in the contemporary writings of Islamic scientists. During the later centuries of the Renaissance came an increase in experimental investigation, particularly in the field of dissection and body examination, thus advancing our knowledge of human anatomy.
The development of modern neurology began in the 16th century in Italy and France with Niccolò Massa, Jean Fernel, Jacques Dubois and Andreas Vesalius. Vesalius described in detail the anatomy of the brain and other organs; he had little knowledge of the brain's function, thinking that it resided mainly in the ventricles. Over his lifetime he corrected over 200 of Galen's mistakes. Understanding of medical sciences and diagnosis improved, but with little direct benefit to health care. Few effective drugs existed, beyond opium and quinine. Folklore cures and potentially poisonous metal-based compounds were popular treatments.
Independently of Ibn al-Nafis, Michael Servetus rediscovered the pulmonary circulation, but this discovery did not reach the public because it was first written down in the "Manuscript of Paris" in 1546, and later published in 1553 in the theological work for which he paid with his life. The description was later perfected by Renaldus Columbus and Andrea Cesalpino.
In 1628 the English physician William Harvey made a ground-breaking discovery when he correctly described the circulation of the blood in his "Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus". Before this time the most useful manual in medicine used both by students and expert physicians was Dioscorides' "De Materia Medica", a pharmacopoeia.
Bacteria and protists were first observed with a microscope by Antonie van Leeuwenhoek in 1676, initiating the scientific field of microbiology.
Paracelsus (1493–1541) was an erratic and abusive innovator who rejected Galen and bookish knowledge, calling for experimental research, with heavy doses of mysticism, alchemy and magic mixed in. He rejected sacred magic (miracles) under Church auspices and looked for cures in nature. He preached, but he also pioneered the use of chemicals and minerals in medicine. His hermetical view was that sickness and health in the body relied on the harmony of man (the microcosm) and Nature (the macrocosm). He took an approach different from those before him, using this analogy not in the manner of soul-purification but to argue that humans must maintain certain balances of minerals in their bodies, and that certain illnesses of the body had chemical remedies that could cure them. Most of his influence came after his death. Paracelsus is a highly controversial figure in the history of medicine: most experts hail him as a father of modern medicine for shaking off religious orthodoxy and inspiring many researchers; others say he was more a mystic than a scientist and downplay his importance.
University training of physicians began in the 13th century.
The University of Padua was founded about 1220 by walkouts from the University of Bologna, and began teaching medicine in 1222. It played a leading role in the identification and treatment of diseases and ailments, specializing in autopsies and the inner workings of the body. Starting in 1595, Padua's famous anatomical theatre drew artists and scientists studying the human body during public dissections. The intensive study of Galen led to critiques of Galen modeled on his own writing, as in the first book of Vesalius's "De humani corporis fabrica." Andreas Vesalius held the chair of Surgery and Anatomy ("explicator chirurgiae") and in 1543 published his anatomical discoveries in "De Humani Corporis Fabrica". He portrayed the human body as an interdependent system of organ groupings. The book triggered great public interest in dissections and caused many other European cities to establish anatomical theatres.
At the University of Bologna the training of physicians began in 1219. The Italian city attracted students from across Europe. Taddeo Alderotti built a tradition of medical education that established the characteristic features of Italian learned medicine and was copied by medical schools elsewhere. Turisanus (d. 1320) was his student. The curriculum was revised and strengthened in 1560–1590. A representative professor was Julius Caesar Aranzi (Arantius) (1530–89). He became Professor of Anatomy and Surgery at the University of Bologna in 1556, where he established anatomy as a major branch of medicine for the first time. Aranzi combined anatomy with a description of pathological processes, based largely on his own research, Galen, and the work of his contemporary Italians. Aranzi discovered the 'Nodules of Aranzio' in the semilunar valves of the heart and wrote the first description of the levator palpebrae superioris and the coracobrachialis muscles. His books (in Latin) covered surgical techniques for conditions ranging from hydrocephalus, nasal polyps, goitre and tumours to phimosis, ascites, haemorrhoids, anal abscess and fistulae.
Catholic women played large roles in health and healing in medieval and early modern Europe. A life as a nun was a prestigious role; wealthy families provided dowries for their daughters, and these funded the convents, while the nuns provided free nursing care for the poor.
The Catholic elites provided hospital services because of their theology of salvation that good works were the route to heaven. The Protestant reformers rejected the notion that rich men could gain God's grace through good works—and thereby escape purgatory—by providing cash endowments to charitable institutions. They also rejected the Catholic idea that the poor patients earned grace and salvation through their suffering. Protestants generally closed all the convents and most of the hospitals, sending women home to become housewives, often against their will. On the other hand, local officials recognized the public value of hospitals, and some were continued in Protestant lands, but without monks or nuns and in the control of local governments.
In London, the crown allowed two hospitals to continue their charitable work, under nonreligious control of city officials. The convents were all shut down but Harkness finds that women—some of them former nuns—were part of a new system that delivered essential medical services to people outside their family. They were employed by parishes and hospitals, as well as by private families, and provided nursing care as well as some medical, pharmaceutical, and surgical services.
Meanwhile, in Catholic lands such as France, rich families continued to fund convents and monasteries, and enrolled their daughters as nuns who provided free health services to the poor. Nursing was a religious role for the nurse, and there was little call for science.
During the Age of Enlightenment, the 18th century, science was held in high esteem and physicians upgraded their social status by becoming more scientific. The health field was crowded with self-trained barber-surgeons, apothecaries, midwives, drug peddlers, and charlatans.
Across Europe medical schools relied primarily on lectures and readings. The final year student would have limited clinical experience by trailing the professor through the wards. Laboratory work was uncommon, and dissections were rarely done because of legal restrictions on cadavers. Most schools were small, and only Edinburgh, Scotland, with 11,000 alumni, produced large numbers of graduates.
In Britain, there were but three small hospitals after 1550. Pelling and Webster estimate that in London in the 1580 to 1600 period, out of a population of nearly 200,000 people, there were about 500 medical practitioners. Nurses and midwives are not included. There were about 50 physicians, 100 licensed surgeons, 100 apothecaries, and 250 additional unlicensed practitioners. In the last category about 25% were women. All across Britain—and indeed all of the world—the vast majority of the people in city, town or countryside depended for medical care on local amateurs with no professional training but with a reputation as wise healers who could diagnose problems and advise sick people what to do—and perhaps set broken bones, pull a tooth, give some traditional herbs or brews or perform a little magic to cure what ailed them.
The London Dispensary opened in 1696, the first clinic in the British Empire to dispense medicines to poor sick people. The innovation was slow to catch on, but new dispensaries were opened in the 1770s. In the colonies, small hospitals opened in Philadelphia in 1752, New York in 1771, and Boston (Massachusetts General Hospital) in 1811.
Guy's Hospital, the first great British hospital with a modern foundation, opened in 1721 in London, with funding from the businessman Thomas Guy. It had been preceded by St Bartholomew's Hospital and St Thomas's Hospital, both medieval foundations. A bequest of £200,000 by William Hunt in 1829 funded expansion for an additional hundred beds at Guy's. Samuel Sharp (1709–78), a surgeon at Guy's Hospital from 1733 to 1757, was internationally famous; his "A Treatise on the Operations of Surgery" (1st ed., 1739) was the first British study focused exclusively on operative technique.
English physician Thomas Percival (1740–1804) wrote a comprehensive system of medical conduct, "Medical Ethics; or, a Code of Institutes and Precepts, Adapted to the Professional Conduct of Physicians and Surgeons" (1803) that set the standard for many textbooks.
In the Spanish Empire, the viceregal capital of Mexico City was a site of medical training for physicians and of the creation of hospitals. Epidemic disease had decimated indigenous populations starting with the early sixteenth-century Spanish conquest of the Aztec empire, when a black auxiliary in the armed forces of the conqueror Hernán Cortés, with an active case of smallpox, set off a virgin soil epidemic among indigenous peoples, Spanish allies and enemies alike. Aztec emperor Cuitlahuac died of smallpox. Disease was a significant factor in the Spanish conquest elsewhere as well.
Medical education instituted at the Royal and Pontifical University of Mexico chiefly served the needs of urban elites. Male and female "curanderos", or lay practitioners, attended to the ills of the popular classes. The Spanish crown began regulating the medical profession just a few years after the conquest, setting up the Royal Tribunal of the Protomedicato, a board for licensing medical personnel, in 1527. Licensing became more systematic after 1646, with physicians, druggists, surgeons, and bleeders requiring a license before they could practice publicly. Crown regulation of medical practice became more general across the Spanish empire.
Elites and the popular classes alike called on divine intervention in personal and society-wide health crises, such as the epidemic of 1737. The intervention of the Virgin of Guadalupe was depicted in a scene of dead and dying Indians, with elites on their knees praying for her aid. In the late eighteenth century, the crown began implementing secularizing policies on the Iberian peninsula and its overseas empire to control disease more systematically and scientifically.
Botanical medicines also became popular during the 16th, 17th, and 18th centuries. Spanish pharmaceutical books of this period contain medicinal recipes consisting of spices, herbs, and other botanical products. For example, nutmeg oil was documented as curing stomach ailments, and cardamom oil was believed to relieve intestinal ailments. With the rise of the global trade market, spices, herbs, and many other goods indigenous to particular territories began to appear in locations across the globe. Herbs and spices were especially popular for their utility in cooking and in medicines. As a result of this popularity and the increased demand for spices, some areas in Asia, like China and Indonesia, became hubs for spice cultivation and trade. The Spanish Empire also wanted to benefit from the international spice trade, so it looked towards its American colonies.
The Spanish American colonies became an area where the Spanish searched for new spices and indigenous American medicinal recipes. The Florentine Codex, a 16th-century ethnographic research study in Mesoamerica by the Spanish Franciscan friar Bernardino de Sahagún, is a major contribution to the history of Nahua medicine. The Spanish did discover many spices and herbs new to them, some of which were reportedly similar to Asian spices. A Spanish physician by the name of Nicolás Monardes studied many of the American spices coming into Spain. He documented many of the new American spices and their medicinal properties in his survey "Historia medicinal de las cosas que se traen de nuestras Indias Occidentales". For example, Monardes describes the "Long Pepper" (Pimienta luenga), found along the coasts of the countries now known as Panama and Colombia, as a pepper that was more flavorful, healthy, and spicy than the Eastern black pepper. The Spanish interest in American spices can first be seen in the commissioning of the "Libellus de Medicinalibus Indorum Herbis", a Spanish-American codex describing indigenous American spices and herbs and the ways these were used in natural Aztec medicines. The codex was commissioned in 1552 by Francisco de Mendoza, the son of Antonio de Mendoza, the first Viceroy of New Spain. Francisco de Mendoza was interested in studying the properties of these herbs and spices so that he would be able to profit from the trade in them and in the medicines that could be produced from them.
Francisco de Mendoza recruited the help of Monardes in studying the traditional medicines of the indigenous people living in what were then the Spanish colonies. Monardes researched these medicines and performed experiments to discover the possibilities of spice cultivation and medicine creation in the Spanish colonies. The Spanish transplanted some herbs from Asia, but only a few foreign crops were successfully grown in the Spanish colonies. One notable crop brought from Asia and successfully grown in the Spanish colonies was ginger, which was considered Hispaniola's leading crop at the end of the 16th century. The Spanish Empire did profit from cultivating herbs and spices, but it also introduced pre-Columbian American medicinal knowledge to Europe. Other Europeans were inspired by the actions of Spain and decided to establish botanical transplant systems in colonies they controlled; however, these subsequent attempts were not successful.
The practice of medicine changed in the face of rapid advances in science, as well as new approaches by physicians. Hospital doctors began much more systematic analysis of patients' symptoms in diagnosis. Among the more powerful new techniques were anaesthesia, and the development of both antiseptic and aseptic operating theatres. Effective cures were developed for certain endemic infectious diseases. However the decline in many of the most lethal diseases was due more to improvements in public health and nutrition than to advances in medicine.
Medicine was revolutionized in the 19th century and beyond by advances in chemistry, laboratory techniques, and equipment. Old ideas of infectious disease epidemiology were gradually replaced by advances in bacteriology and virology.
In the 1830s in Italy, Agostino Bassi traced the silkworm disease muscardine to microorganisms. Meanwhile, in Germany, Theodor Schwann led research on alcoholic fermentation by yeast, proposing that living microorganisms were responsible.
Leading chemists, such as Justus von Liebig, seeking solely physicochemical explanations, derided this claim and alleged that Schwann was regressing to vitalism.
In 1847 in Vienna, Ignaz Semmelweis (1818–1865), dramatically reduced the death rate of new mothers (due to childbed fever) by requiring physicians to clean their hands before attending childbirth, yet his principles were marginalized and attacked by professional peers. At that time most people still believed that infections were caused by foul odors called miasmas.
Eminent French scientist Louis Pasteur confirmed Schwann's fermentation experiments in 1857 and afterwards supported the hypothesis that yeast were microorganisms. Moreover, he suggested that such a process might also explain contagious disease. In 1860, Pasteur's report on bacterial fermentation of butyric acid motivated fellow Frenchman Casimir Davaine to identify a similar species (which he called "bacteridia") as the pathogen of the deadly disease anthrax. Others dismissed "bacteridia" as a mere byproduct of the disease. British surgeon Joseph Lister, however, took these findings seriously and subsequently introduced antisepsis to wound treatment in 1865.
German physician Robert Koch, noting fellow German Ferdinand Cohn's report of a spore stage of a certain bacterial species, traced the life cycle of Davaine's "bacteridia", identified spores, inoculated laboratory animals with them, and reproduced anthrax—a breakthrough for experimental pathology and germ theory of disease. Pasteur's group added ecological investigations confirming spores' role in the natural setting, while Koch published a landmark treatise in 1878 on the bacterial pathology of wounds. In 1881, Koch reported discovery of the "tubercle bacillus", cementing germ theory and Koch's acclaim.
Upon the outbreak of a cholera epidemic in Alexandria, Egypt, two medical missions went to investigate and attend the sick; one was sent out by Pasteur and the other was led by Koch. Koch's group returned in 1883, having successfully discovered the cholera pathogen. In Germany, however, Koch's bacteriologists had to vie against Max von Pettenkofer, Germany's leading proponent of miasmatic theory. Pettenkofer conceded bacteria's causal involvement, but maintained that other, environmental factors were required to turn it pathogenic, and opposed water treatment as a misdirected effort amid more important ways to improve public health. The massive cholera epidemic in Hamburg in 1892 devastated Pettenkofer's position, and yielded German public health to "Koch's bacteriology".
On losing the 1883 rivalry in Alexandria, Pasteur switched research direction and introduced his third vaccine, the rabies vaccine, the first vaccine for humans since Jenner's for smallpox. From across the globe, donations poured in, funding the founding of the Pasteur Institute, the world's first biomedical institute, which opened in 1888. Along with Koch's bacteriologists, Pasteur's group, which preferred the term "microbiology", led medicine into the new era of "scientific medicine" built upon bacteriology and germ theory. Adapted from Jakob Henle, Koch's steps to confirm a species' pathogenicity became famous as "Koch's postulates". Although his proposed tuberculosis treatment, tuberculin, seemingly failed, it was soon used to test for infection with the involved species. In 1905, Koch was awarded the Nobel Prize in Physiology or Medicine, and he remains renowned as the founder of medical microbiology.
Women had always served in ancillary roles, and as midwives and healers. The professionalization of medicine forced them increasingly to the sidelines. As hospitals multiplied, they relied in Europe on orders of Roman Catholic nun-nurses, and on German Protestant and Anglican deaconesses, in the early 19th century. These women were trained in traditional methods of physical care that involved little knowledge of medicine. The breakthrough to professionalization based on knowledge of advanced medicine was led by Florence Nightingale in England. She resolved to provide more advanced training than she saw on the Continent. At Kaiserswerth, where the first German nursing schools were founded in 1836 by Theodor Fliedner, she said, "The nursing was nil and the hygiene horrible." Britain's male doctors preferred the old system, but Nightingale won out and her Nightingale Training School opened in 1860 and became a model. The Nightingale solution depended on the patronage of upper-class women, and they proved eager to serve. Royalty became involved. In 1902 the wife of the British king took control of the nursing unit of the British army, became its president, and renamed it after herself as the Queen Alexandra's Royal Army Nursing Corps; when she died the next queen became president. Today its Colonel in Chief is Sophie, Countess of Wessex, the daughter-in-law of Queen Elizabeth II. In the United States, upper-middle-class women who already supported hospitals promoted nursing. The new profession proved highly attractive to women of all backgrounds, and schools of nursing opened in the late 19th century. They soon became a function of large hospitals, where they provided a steady stream of low-paid idealistic workers. The International Red Cross began operations in numerous countries in the late 19th century, promoting nursing as an ideal profession for middle-class women.
The Nightingale model was widely copied. Linda Richards (1841–1930) studied in London and became the first professionally trained American nurse. She established nursing training programs in the United States and Japan, and created the first system for keeping individual medical records for hospitalized patients. The Russian Orthodox Church sponsored seven orders of nursing sisters in the late 19th century. They ran hospitals, clinics, almshouses, pharmacies, and shelters as well as training schools for nurses. In the Soviet era (1917–1991), with the aristocratic sponsors gone, nursing became a low-prestige occupation based in poorly maintained hospitals.
It was very difficult for women to become doctors in any field before the 1970s. Elizabeth Blackwell (1821–1910) became the first woman to formally study and practice medicine in the United States. She was a leader in women's medical education. While Blackwell viewed medicine as a means for social and moral reform, her student Mary Putnam Jacobi (1842–1906) focused on curing disease. At a deeper level of disagreement, Blackwell felt that women would succeed in medicine because of their humane female values, but Jacobi believed that women should participate as the equals of men in all medical specialties, using identical methods, values and insights. In the Soviet Union, although the majority of medical doctors were women, they were paid less than the mostly male factory workers.
Paris (France) and Vienna were the two leading medical centers on the Continent in the era 1750–1914.
In the 1770s–1850s Paris became a world center of medical research and teaching. The "Paris School" emphasized that teaching and research should be based in large hospitals and promoted the professionalization of the medical profession and the emphasis on sanitation and public health. A major reformer was Jean-Antoine Chaptal (1756–1832), a physician who was Minister of Internal Affairs. He created the Paris Hospital, health councils, and other bodies.
Louis Pasteur (1822–1895) was one of the most important founders of medical microbiology. He is remembered for his remarkable breakthroughs in the causes and preventions of diseases. His discoveries reduced mortality from puerperal fever, and he created the first vaccines for rabies and anthrax. His experiments supported the germ theory of disease. He was best known to the general public for inventing a method to treat milk and wine in order to prevent them from causing sickness, a process that came to be called pasteurization. He is regarded as one of the three main founders of microbiology, together with Ferdinand Cohn and Robert Koch. He worked chiefly in Paris and in 1887 founded the Pasteur Institute there to perpetuate his commitment to basic research and its practical applications. As soon as his institute was created, Pasteur brought together scientists with various specialties. The first five departments were directed by Emile Duclaux (general microbiology research) and Charles Chamberland (microbe research applied to hygiene), as well as a biologist, Ilya Ilyich Mechnikov (morphological microbe research), and two physicians, Jacques-Joseph Grancher (rabies) and Emile Roux (technical microbe research). One year after the inauguration of the Institut Pasteur, Roux set up the first course of microbiology ever taught in the world, then entitled "Cours de Microbie Technique" (Course of microbe research techniques). It became the model for numerous research centers around the world named "Pasteur Institutes".
The First Viennese School of Medicine, 1750–1800, was led by the Dutchman Gerard van Swieten (1700–1772), who aimed to put medicine on new scientific foundations—promoting unprejudiced clinical observation, botanical and chemical research, and introducing simple but powerful remedies. When the Vienna General Hospital opened in 1784, it at once became the world's largest hospital and physicians acquired a facility that gradually developed into the most important research centre. Progress ended with the Napoleonic wars and the government shutdown in 1819 of all liberal journals and schools; this caused a general return to traditionalism and eclecticism in medicine.
Vienna was the capital of a diverse empire and attracted not just Germans but Czechs, Hungarians, Jews, Poles and others to its world-class medical facilities. After 1820 the Second Viennese School of Medicine emerged with the contributions of physicians such as Carl Freiherr von Rokitansky, Josef Škoda, Ferdinand Ritter von Hebra, and Ignaz Philipp Semmelweis. Basic medical science expanded and specialization advanced. Furthermore, the first dermatology, eye, as well as ear, nose, and throat clinics in the world were founded in Vienna. The textbook of ophthalmologist Georg Joseph Beer (1763–1821) "Lehre von den Augenkrankheiten" combined practical research and philosophical speculations, and became the standard reference work for decades.
After 1871 Berlin, the capital of the new German Empire, became a leading center for medical research. Robert Koch (1843–1910) was a representative leader. He became famous for isolating "Bacillus anthracis" (1877), the "Tuberculosis bacillus" (1882) and "Vibrio cholerae" (1883) and for his development of Koch's postulates. He was awarded the Nobel Prize in Physiology or Medicine in 1905 for his tuberculosis findings. Koch is one of the founders of microbiology, inspiring such major figures as Paul Ehrlich and Gerhard Domagk.
In the American Civil War (1861–65), as was typical of the 19th century, more soldiers died of disease than in battle, and even larger numbers were temporarily incapacitated by wounds, disease and accidents. Conditions were poor in the Confederacy, where doctors and medical supplies were in short supply. The war had a dramatic long-term impact on medicine in the U.S., from surgical technique to hospitals to nursing and to research facilities. Weapon development, particularly the appearance of the Springfield Model 1861, mass-produced and much more accurate than muskets, led generals to underestimate the risks of long-range rifle fire; such risks were exemplified in the death of John Sedgwick and the disastrous Pickett's Charge. The rifles could shatter bone, forcing amputation, and longer ranges meant casualties were sometimes not quickly found. Evacuation of the wounded from the Second Battle of Bull Run took a week. As in earlier wars, untreated casualties sometimes survived unexpectedly due to maggots debriding the wound, an observation which led to the surgical use of maggots, still a useful method in the absence of effective antibiotics.
The hygiene of the training and field camps was poor, especially at the beginning of the war when men who had seldom been far from home were brought together for training with thousands of strangers. First came epidemics of the childhood diseases of chicken pox, mumps, whooping cough, and, especially, measles. Operations in the South meant a dangerous and new disease environment, bringing diarrhea, dysentery, typhoid fever, and malaria. There were no antibiotics, so the surgeons prescribed coffee, whiskey, and quinine. Harsh weather, bad water, inadequate shelter in winter quarters, poor policing of camps, and dirty camp hospitals took their toll.
This was a common scenario in wars from time immemorial, and conditions faced by the Confederate army were even worse. The Union responded by building army hospitals in every state. What was different in the Union was the emergence of skilled, well-funded medical organizers who took proactive action, especially in the much enlarged United States Army Medical Department, and the United States Sanitary Commission, a new private agency. Numerous other new agencies also targeted the medical and morale needs of soldiers, including the United States Christian Commission as well as smaller private agencies.
The U.S. Army learned many lessons and in August 1886, it established the Hospital Corps.
A major breakthrough in epidemiology came with the introduction of statistical maps and graphs. They allowed careful analysis of seasonality in disease incidence, and the maps allowed public health officials to identify critical loci for the dissemination of disease. John Snow in London developed the methods. In 1849, he observed that the symptoms of cholera, which had already claimed around 500 lives within a month, were vomiting and diarrhoea. He concluded that the source of contamination must be through ingestion, rather than inhalation as was previously thought. It was this insight that resulted in the removal of the handle of the Broad Street pump in 1854, after which deaths from cholera plummeted. English nurse Florence Nightingale pioneered analysis of large amounts of statistical data, using graphs and tables, regarding the condition of thousands of patients in the Crimean War, in order to evaluate the efficacy of hospital services. Her methods proved convincing and led to reforms in military and civilian hospitals, usually with the full support of the government.
By the late 19th and early 20th centuries, English statisticians led by Francis Galton, Karl Pearson and Ronald Fisher developed mathematical tools such as correlations and hypothesis tests that made possible much more sophisticated analysis of statistical data.
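For orientation, a standard modern statement of Pearson's product-moment correlation coefficient, the centerpiece of those tools (the formula below is the textbook form, not a quotation from Pearson's own papers), is

    r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2} \, \sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}}

where x_i and y_i are paired observations with means \bar{x} and \bar{y}; values of r near +1 or -1 indicate a strong linear association, and values near 0 indicate little or none.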
During the U.S. Civil War the Sanitary Commission collected enormous amounts of statistical data, and opened up the problems of storing information for fast access and mechanically searching for data patterns. The pioneer was John Shaw Billings (1838–1913). A senior surgeon in the war, Billings built the Library of the Surgeon General's Office (now the National Library of Medicine), the centerpiece of modern medical information systems. Billings figured out how to mechanically analyze medical and demographic data by turning facts into numbers and punching the numbers onto cardboard cards that could be sorted and counted by machine. The applications were developed by his assistant Herman Hollerith; Hollerith invented the punch card and counter-sorter system that dominated statistical data manipulation until the 1970s. Hollerith's company became International Business Machines (IBM) in 1911.
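As a rough modern illustration of what the punch-card counter-sorter automated (the record fields below are invented for the example, not drawn from Billings's actual medical schema), tabulation amounts to encoding each record as categorical fields and counting identical patterns:

    from collections import Counter

    # Each punch card is modeled as a tuple of categorical fields;
    # the categories (sex, age band, diagnosis) are hypothetical.
    cards = [
        ("M", "20-29", "typhoid"),
        ("F", "20-29", "typhoid"),
        ("M", "30-39", "dysentery"),
        ("M", "20-29", "typhoid"),
    ]

    # The counter-sorter grouped cards with identical punch patterns and
    # counted them; in software this is a frequency tabulation.
    tally = Counter(cards)
    for pattern, count in sorted(tally.items()):
        print(pattern, count)

Sorting the cards before counting is essentially what the mechanical sorter did; the Counter simply collapses that sort-and-count cycle into a single pass.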
Johns Hopkins Hospital, founded in 1889, originated several modern medical practices, including residency and rounds.
European ideas of modern medicine were spread widely through the world by medical missionaries and by the dissemination of textbooks. Japanese elites enthusiastically embraced Western medicine after the Meiji Restoration of the 1860s. They had been prepared, however, by their knowledge of Dutch and German medicine, for they had some contact with Europe through the Dutch. Highly influential on Japanese obstetrics was the 1765 edition of Hendrik van Deventer's pioneering work "Nieuw Ligt" ("A New Light"), especially through Katakura Kakuryo's publication in 1799 of "Sanka Hatsumo" ("Enlightenment of Obstetrics"). A cadre of Japanese physicians began to interact with Dutch doctors, who introduced smallpox vaccinations. By 1820 Japanese ranpô medical practitioners not only translated Dutch medical texts, they integrated their readings with clinical diagnoses. These men became leaders of the modernization of medicine in their country. They broke from Japanese traditions of closed medical fraternities and adopted the European approach of an open community of collaboration based on expertise in the latest scientific methods.
Kitasato Shibasaburō (1853–1931) studied bacteriology in Germany under Robert Koch. In 1891 he founded the Institute of Infectious Diseases in Tokyo, which introduced the study of bacteriology to Japan. He and the French researcher Alexandre Yersin went to Hong Kong in 1894, where Kitasato confirmed Yersin's discovery that the bacterium "Yersinia pestis" is the agent of the plague. In 1897 he isolated and described the organism that caused dysentery. He became the first dean of medicine at Keio University, and the first president of the Japan Medical Association.
Japanese physicians immediately recognized the value of X-rays. They were able to purchase the equipment locally from the Shimadzu Company, which developed, manufactured, marketed, and distributed X-ray machines after 1900. Japan not only adopted German methods of public health in the home islands, but implemented them in its colonies, especially Korea and Taiwan, and after 1931 in Manchuria. A heavy investment in sanitation resulted in a dramatic increase in life expectancy.
Until the nineteenth century, the care of the insane was largely a communal and family responsibility rather than a medical one. The vast majority of the mentally ill were treated in domestic contexts with only the most unmanageable or burdensome likely to be institutionally confined. This situation was transformed radically from the late eighteenth century as, amid changing cultural conceptions of madness, a new-found optimism in the curability of insanity within the asylum setting emerged. Increasingly, lunacy was perceived less as a physiological condition than as a mental and moral one to which the correct response was persuasion, aimed at inculcating internal restraint, rather than external coercion. This new therapeutic sensibility, referred to as moral treatment, was epitomised in French physician Philippe Pinel's quasi-mythological unchaining of the lunatics of the Bicêtre Hospital in Paris and realised in an institutional setting with the foundation in 1796 of the Quaker-run York Retreat in England.
From the early nineteenth century, as lay-led lunacy reform movements gained in influence, ever more state governments in the West extended their authority and responsibility over the mentally ill. Small-scale asylums, conceived as instruments to reshape both the mind and behaviour of the disturbed, proliferated across these regions. By the 1830s, moral treatment, together with the asylum itself, became increasingly medicalised and asylum doctors began to establish a distinct medical identity with the establishment in the 1840s of associations for their members in France, Germany, the United Kingdom and America, together with the founding of medico-psychological journals. Medical optimism in the capacity of the asylum to cure insanity soured by the close of the nineteenth century as the growth of the asylum population far outstripped that of the general population. Processes of long-term institutional segregation, allowing for the psychiatric conceptualisation of the natural course of mental illness, supported the perspective that the insane were a distinct population, subject to mental pathologies stemming from specific medical causes. As degeneration theory grew in influence from the mid-nineteenth century, heredity was seen as the central causal element in chronic mental illness, and, with national asylum systems overcrowded and insanity apparently undergoing an inexorable rise, the focus of psychiatric therapeutics shifted from a concern with treating the individual to maintaining the racial and biological health of national populations.
Emil Kraepelin (1856–1926) introduced new medical categories of mental illness, which eventually came into psychiatric usage despite their basis in behavior rather than pathology or underlying cause. Shell shock among frontline soldiers exposed to heavy artillery bombardment was first diagnosed by British Army doctors in 1915. By 1916, similar symptoms were also noted in soldiers not exposed to explosive shocks, leading to questions as to whether the disorder was physical or psychiatric. In the 1920s surrealist opposition to psychiatry was expressed in a number of surrealist publications. In the 1930s several controversial medical practices were introduced including inducing seizures (by electroshock, insulin or other drugs) or cutting parts of the brain apart (leucotomy or lobotomy). Both came into widespread use by psychiatry, but there were grave concerns and much opposition on grounds of basic morality, harmful effects, or misuse.
In the 1950s new psychiatric drugs, notably the antipsychotic chlorpromazine, were designed in laboratories and slowly came into preferred use. Although often accepted as an advance in some ways, there was some opposition, due to serious adverse effects such as tardive dyskinesia. Patients often opposed psychiatry and refused or stopped taking the drugs when not subject to psychiatric control. There was also increasing opposition to the use of psychiatric hospitals, and attempts to move people back into the community on a collaborative user-led group approach ("therapeutic communities") not controlled by psychiatry. Campaigns against masturbation had been conducted in the Victorian era and elsewhere. Lobotomy remained in use until the 1970s to treat schizophrenia; it was denounced by the anti-psychiatric movement in the 1960s and later.
The ABO blood group system was discovered in 1901, and the Rhesus blood group system in 1937, facilitating blood transfusion.
During the 20th century, large-scale wars were attended with medics and mobile hospital units which developed advanced techniques for healing massive injuries and controlling infections rampant in battlefield conditions. During the Mexican Revolution (1910–1920), General Pancho Villa organized hospital trains for wounded soldiers. Boxcars marked "Servicio Sanitario" ("sanitary service") were re-purposed as surgical operating theaters and areas for recuperation, and staffed by up to 40 Mexican and U.S. physicians. Severely wounded soldiers were shuttled back to base hospitals. Canadian physician Norman Bethune, M.D. developed a mobile blood-transfusion service for frontline operations in the Spanish Civil War (1936–1939), but ironically, he himself died of blood poisoning.
Thousands of scarred troops created the need for improved prosthetic limbs and expanded techniques in plastic surgery or reconstructive surgery. Those practices were combined to broaden cosmetic surgery and other forms of elective surgery.
During the First World War, Alexis Carrel and Henry Dakin developed the Carrel-Dakin method of treating wounds with an irrigation of Dakin's solution, a germicide which helped prevent gangrene.
The war spurred the usage of Roentgen's X-rays and the electrocardiograph for the monitoring of internal bodily functions. This was followed in the inter-war period by the development of the first anti-bacterial agents, the sulpha drugs (sulphonamides).
Public health measures became particularly important during the 1918 flu pandemic, which killed at least 50 million people around the world. It became an important case study in epidemiology. Bristow shows there was a gendered response of health caregivers to the pandemic in the United States. Male doctors were unable to cure the patients, and they felt like failures. Women nurses also saw their patients die, but they took pride in their success in fulfilling their professional role of caring for, ministering, comforting, and easing the last hours of their patients, and helping the families of the patients cope as well.
From 1917 to 1923, the American Red Cross moved into Europe with a battery of long-term child health projects. It built and operated hospitals and clinics, and organized antituberculosis and antityphus campaigns. A high priority involved child health programs such as clinics, better baby shows, playgrounds, fresh air camps, and courses for women on infant hygiene. Hundreds of U.S. doctors, nurses, and welfare professionals administered these programs, which aimed to reform the health of European youth and to reshape European public health and welfare along American lines.
The advances in medicine made a dramatic difference for Allied troops, while the Germans and especially the Japanese and Chinese suffered from a severe lack of newer medicines, techniques and facilities. Harrison finds that the chances of recovery for a badly wounded British infantryman were as much as 25 times better than in the First World War.
Unethical human subject research, and killing of patients with disabilities, peaked during the Nazi era, with Nazi human experimentation and Aktion T4 during the Holocaust as the most significant examples. Many of the details of these and related events were the focus of the Doctors' Trial. Subsequently, principles of medical ethics, such as the Nuremberg Code, were introduced to prevent a recurrence of such atrocities. After 1937, the Japanese Army established programs of biological warfare in China. In Unit 731, Japanese doctors and research scientists conducted large numbers of vivisections and experiments on human beings, mostly Chinese victims.
Starting in World War II, DDT was used as insecticide to combat insect vectors carrying malaria, which was endemic in most tropical regions of the world. The first goal was to protect soldiers, but it was widely adopted as a public health device. In Liberia, for example, the United States had large military operations during the war and the U.S. Public Health Service began the use of DDT for indoor residual spraying (IRS) and as a larvicide, with the goal of controlling malaria in Monrovia, the Liberian capital. In the early 1950s, the project was expanded to nearby villages. In 1953, the World Health Organization (WHO) launched an antimalaria program in parts of Liberia as a pilot project to determine the feasibility of malaria eradication in tropical Africa. However these projects encountered a spate of difficulties that foreshadowed the general retreat from malaria eradication efforts across tropical Africa by the mid-1960s.
The World Health Organization was founded in 1948 as a United Nations agency to improve global health. In most of the world, life expectancy has improved since then, reaching about 67 years worldwide, and well above 80 years in some countries. Eradication of infectious diseases is an international effort, and several new vaccines have been developed during the post-war years, against infections such as measles, mumps, several strains of influenza and human papilloma virus. The long-known vaccine against smallpox finally eradicated the disease in the 1970s, and rinderpest was wiped out in 2011. Eradication of polio is underway. Tissue culture is important for the development of vaccines. Despite the early success of antiviral vaccines and antibacterial drugs, antiviral drugs were not introduced until the 1970s. Through the WHO, the international community has developed a response protocol against epidemics, deployed during the SARS epidemic in 2003, the spread of influenza A virus subtype H5N1 from 2004, and the Ebola virus epidemic in West Africa.
As infectious diseases have become less lethal, and the most common causes of death in developed countries are now tumors and cardiovascular diseases, these conditions have received increased attention in medical research. Tobacco smoking as a cause of lung cancer was first researched in the 1920s, but was not widely supported by publications until the 1950s. Cancer treatment has been developed with radiotherapy, chemotherapy and surgical oncology.
Oral rehydration therapy has been extensively used since the 1970s to treat cholera and other diarrhea-inducing infections.
The sexual revolution included taboo-breaking research in human sexuality such as the 1948 and 1953 Kinsey reports, invention of hormonal contraception, and the normalization of abortion and homosexuality in many countries. Family planning has promoted a demographic transition in most of the world. With threatening sexually transmitted infections, not least HIV, use of barrier contraception has become imperative. The struggle against HIV has improved antiretroviral treatments.
X-ray imaging was the first kind of medical imaging, and later ultrasonic imaging, CT scanning, MR scanning and other imaging methods became available.
Genetics have advanced with the discovery of the DNA molecule, genetic mapping and gene therapy. Stem cell research took off in the 2000s, with stem cell therapy as a promising method.
Evidence-based medicine is a modern concept, not introduced into the literature until the 1990s.
Prosthetics have improved. In 1958, Arne Larsson in Sweden became the first patient to depend on an artificial cardiac pacemaker. He died in 2001 at age 86, having outlived its inventor, the surgeon, and 26 pacemakers. Lightweight materials as well as neural prosthetics emerged at the end of the 20th century.
Cardiac surgery was revolutionized in 1948 as open-heart surgery was introduced for the first time since 1925.
In 1954 Joseph Murray, J. Hartwell Harrison and others accomplished the first kidney transplantation. Transplantations of other organs, such as heart, liver and pancreas, were also introduced during the later 20th century. The first partial face transplant was performed in 2005, and the first full one in 2010. By the end of the 20th century, microtechnology had been used to create tiny robotic devices to assist microsurgery using micro-video and fiber-optic cameras to view internal tissues during surgery with minimally invasive practices.
Laparoscopic surgery was broadly introduced in the 1990s. Natural orifice surgery has followed. Remote surgery is another recent development, with the transatlantic Lindbergh operation in 2001 as a groundbreaking example.
Hanover
Hanover is the capital and largest city of the German state of Lower Saxony. Its 535,061 (2017) inhabitants make it the thirteenth-largest city in Germany as well as the third-largest city in Northern Germany after Hamburg and Bremen. The city lies at the confluence of the River Leine and its tributary the Ihme, in the south of the North German Plain, and is the largest city in the Hannover–Braunschweig–Göttingen–Wolfsburg Metropolitan Region. It is the fifth-largest city in the Low German dialect area after Hamburg, Dortmund, Essen and Bremen.
Before it became the capital of Lower Saxony in 1946, Hanover was the capital of the Principality of Calenberg (1636–1692), the Electorate of Hanover (1692–1814), the Kingdom of Hanover (1814–1866), the Province of Hanover of the Kingdom of Prussia (1868–1918), the Province of Hanover of the Free State of Prussia (1918–1946) and the State of Hanover (1946). From 1714 to 1837 Hanover was, by personal union, the family seat of the Hanoverian Kings of the United Kingdom of Great Britain and Ireland, under their title of Duke of Brunswick-Lüneburg (later described as Elector of Hanover).
The city is a major crossing point of railway lines and motorways (Autobahnen), connecting European main lines in both the east-west (Berlin–Ruhr area/Düsseldorf/Cologne) and north-south (Hamburg–Frankfurt/Stuttgart/Munich) directions. Hannover Airport lies north of the city, in Langenhagen, and is Germany's ninth-busiest airport. The city's most notable institutes of higher education are the Hannover Medical School with its university hospital (Klinikum der Medizinischen Hochschule Hannover) and the Leibniz University Hannover.
The Hanover fairground, owing to numerous extensions, especially for the Expo 2000, is the largest in the world. Hanover hosts annual commercial trade fairs such as the Hanover Fair and up to 2018 the CeBIT. The IAA Commercial Vehicles show takes place every two years. It is the world's leading trade show for transport, logistics and mobility. Every year Hanover hosts the Schützenfest Hannover, the world's largest marksmen's festival, and the Oktoberfest Hannover.
'Hanover' is the traditional English spelling. The German spelling (with a double n) is becoming more popular in English; recent editions of encyclopedias prefer the German spelling, and the local government uses the German spelling on English websites. The English pronunciation, with stress on the first syllable, is applied to both the German and English spellings, which is different from German pronunciation, with stress on the second syllable and a long second vowel. The traditional English spelling is still used in historical contexts, especially when referring to the British House of Hanover.
Hanover was founded in medieval times on the east bank of the River Leine. Its original name "Honovere" may mean 'high (river)bank', though this is debated (cf. "das Hohe Ufer"). Hanover was a small village of ferrymen and fishermen that became a comparatively large town in the 13th century, receiving town privileges in 1241, owing to its position at a natural crossroads. As overland travel was relatively difficult its position on the upper navigable reaches of the river helped it to grow by increasing trade. It was connected to the Hanseatic League city of Bremen by the Leine and was situated near the southern edge of the wide North German Plain and north-west of the Harz mountains, so that east-west traffic such as mule trains passed through it. Hanover was thus a gateway to the Rhine, Ruhr and Saar river valleys, their industrial areas which grew up to the southwest and the plains regions to the east and north, for overland traffic skirting the Harz between the Low Countries and Saxony or Thuringia.
In the 14th century the main churches of Hanover were built, as well as a city wall with three city gates. The beginning of industrialization in Germany led to trade in iron and silver from the northern Harz Mountains, which increased the city's importance.
In 1636 George, Duke of Brunswick-Lüneburg, ruler of the Brunswick-Lüneburg principality of Calenberg, moved his residence to Hanover. The Dukes of Brunswick-Lüneburg were elevated by the Holy Roman Emperor to the rank of Prince-Elector in 1692, and this elevation was confirmed by the Imperial Diet in 1708. Thus the principality was upgraded to the Electorate of Brunswick-Lüneburg, colloquially known as the Electorate of Hanover after Calenberg's capital (see also: House of Hanover). Its Electors later became monarchs of Great Britain (and from 1801 of the United Kingdom of Great Britain and Ireland). The first of these was George I Louis, who acceded to the British throne in 1714. The last British monarch who reigned in Hanover was William IV. Semi-Salic law, which required succession by the male line if possible, forbade the accession of Queen Victoria in Hanover. As a male-line descendant of George I, Queen Victoria was herself a member of the House of Hanover. Her descendants, however, bore her husband's titular name of Saxe-Coburg-Gotha. Three kings of Great Britain, or the United Kingdom, were concurrently also Electoral Princes of Hanover.
During the time of the personal union of the crowns of the United Kingdom and Hanover (1714–1837) the monarchs rarely visited the city. In fact during the reigns of the final three joint rulers (1760–1837) there was only one short visit, by George IV in 1821. From 1816 to 1837 Viceroy Adolphus represented the monarch in Hanover.
During the Seven Years' War the Battle of Hastenbeck was fought near the city on 26 July 1757. The French army defeated the Hanoverian Army of Observation, leading to the city's occupation as part of the Invasion of Hanover. It was recaptured by Anglo-German forces led by Ferdinand of Brunswick the following year.
After Napoleon imposed the Convention of Artlenburg (Convention of the Elbe) on July 5, 1803, about 35,000 French soldiers occupied Hanover. The Convention also required the disbanding of the army of Hanover. However, George III did not recognize the Convention of the Elbe. As a result, a great number of soldiers from Hanover eventually emigrated to Great Britain, where the King's German Legion was formed. Troops from Hanover and Brunswick were the only German forces to oppose France consistently throughout the entire Napoleonic Wars. The Legion later played an important role in the Peninsular War and the Battle of Waterloo in 1815. The Congress of Vienna in 1815 elevated the electorate to the Kingdom of Hanover. The capital town Hanover expanded to the western bank of the Leine and has since grown considerably.
In 1837, the personal union of the United Kingdom and Hanover ended because William IV's heir in the United Kingdom was female (Queen Victoria), and Hanover could be inherited only by male heirs. Thus, Hanover passed to William IV's brother, Ernest Augustus, and remained a kingdom until 1866, when it was annexed by Prussia during the Austro-Prussian War. Although Hanover won the Battle of Langensalza, Prussia employed Moltke the Elder's Kesselschlacht (encirclement battle) to surround and destroy the Hanoverian army. The city of Hanover became the capital of the Prussian Province of Hanover. After the annexation, the people of Hanover generally opposed the Prussian government.
To Hanover's industry, however, the new connection with Prussia meant an improvement in business. The introduction of free trade promoted economic growth and led to the boom years of the Gründerzeit (the founders' era). Between 1879 and 1902 Hanover's population grew from 87,600 to 313,940.
In 1842 the first horse-drawn railway was inaugurated, and from 1893 an electric tram operated. In 1887 Hanover-born Emile Berliner invented the gramophone record and the gramophone.
After 1937 the Lord Mayor and the state commissioners of Hanover were members of the NSDAP (Nazi party). A large Jewish population then existed in Hanover. In October 1938, 484 Hanoverian Jews of Polish origin were expelled to Poland, including the Grynszpan family. However, Poland refused to accept them, leaving them stranded at the border with thousands of other Polish-Jewish deportees, fed only intermittently by the Polish Red Cross and Jewish welfare organisations. The Grynszpans' son Herschel Grynszpan was in Paris at the time. When he learned of what was happening, he drove to the German embassy in Paris and shot the German diplomat Eduard Ernst vom Rath, who died shortly afterwards.
The Nazis took this act as a pretext to stage a nationwide pogrom known as Kristallnacht (9 November 1938). On that day, the synagogue of Hanover, designed in 1870 by Edwin Oppler in neo-Romanesque style, was burnt by the Nazis.
In September 1941, through the "Action Lauterbacher" plan, a ghettoisation of the remaining Hanoverian Jewish families began. Even before the Wannsee Conference, on 15 December 1941, the first Jews from Hanover were deported to Riga. A total of 2,400 people were deported, and very few survived. During the war seven concentration camps were constructed in Hanover, in which many Jews were confined. Of the approximately 4,800 Jews who had lived in Hannover in 1938, fewer than 100 were still in the city when troops of the United States Army arrived on 10 April 1945 to occupy Hanover at the end of the war. Today, a memorial at the Opera Square is a reminder of the persecution of the Jews in Hanover.
After the war a large group of Orthodox Jewish survivors of the nearby Bergen-Belsen concentration camp settled in Hanover.
As an important railroad and road junction and production centre, Hanover was a major target for strategic bombing during World War II, including the Oil Campaign. Targets included the AFA (Stöcken), the Deurag-Nerag refinery (Misburg), the Continental plants (Vahrenwald and Limmer), the United light metal works (VLW) in Ricklingen and Laatzen (today Hanover fairground), the Hanover/Limmer rubber reclamation plant, the Hanomag factory (Linden) and the tank factory "M.N.H. Maschinenfabrik Niedersachsen" (Badenstedt). Residential areas were also targeted, and more than 6,000 civilians were killed by the Allied bombing raids. More than 90% of the city centre was destroyed in a total of 88 bombing raids. After the war, the Aegidienkirche was not rebuilt and its ruins were left as a war memorial.
The Allied ground advance into Germany reached Hanover in April 1945. The US 84th Infantry Division captured the city on 10 April 1945.
Hanover was in the British zone of occupation of Germany and became part of the new state (Land) of Lower Saxony in 1946.
Today Hanover is a Vice-President City of Mayors for Peace, an international mayoral organisation mobilising cities and citizens worldwide to abolish nuclear weapons by the year 2020.
Hanover has an oceanic climate (Köppen: "Cfb") regardless of which isotherm is used. Although the city is not on the coast, the predominant air masses still come from the ocean, unlike places further east or in south-central Germany.
One of Hanover's most famous sights is the "Royal Gardens of Herrenhausen". Its "Great Garden" is an important European baroque garden. The palace itself was largely destroyed by Allied bombing but was reconstructed and reopened in 2013. Among the points of interest is the "Grotto", whose interior was designed by the French artist Niki de Saint Phalle. The Great Garden consists of several parts and contains Europe's highest garden fountain. The historic "Garden Theatre" has hosted the musicals of the German rock musician Heinz Rudolf Kunze.
Also at Herrenhausen, the "Berggarten" is a botanical garden with the most varied collection of orchids in Europe. Some points of interest are the "Tropical House", the "Cactus House", the "Canary House" and the "Orchid House", and free-flying birds and butterflies. Near the entrance to the Berggarten is the historic "Library Pavillon". The "Mausoleum" of the Guelphs is also located in the Berggarten. Like the Great Garden, the Berggarten also consists of several parts, for example the "Paradies" and the "Prairie Garden". The "Georgengarten" is an English landscape garden. The "Leibniz Temple" and the "Georgen Palace" are two points of interest there.
The landmark of Hanover is the New Town Hall ("Neues Rathaus"). Inside the building are four scale models of the city. An elevator, unique in the world for its diagonal, arched course, travels up the large dome at a 17-degree angle to an observation deck.
The "Hanover Zoo" received the Park Scout Award for the fourth year running in 2009/10, placing it among the best zoos in Germany. The zoo consists of several theme areas: Sambesi, Meyers Farm, Gorilla-Mountain, Jungle-Palace, and Mullewapp. Some smaller areas are Australia, the wooded area for wolves, and the so-called swimming area with many seabirds. There is also a tropical house, a jungle house, and a show arena. The new Canadian-themed area, Yukon Bay, opened in 2010. In 2010 the Hanover Zoo had over 1.6 million visitors. There is also the "Sea Life Centre Hanover", which is the first tropical aquarium in Germany.
Another point of interest is the "Old Town". In the centre are the large Marktkirche (Church St. Georgii et Jacobi, preaching venue of the bishop of the Lutheran Landeskirche Hannovers) and the "Old Town Hall". Nearby are the "Leibniz House", the "Nolte House", and the "Beguine Tower". The "Kreuz-Church-Quarter" around the "Kreuz Church" contains many little lanes. Nearby is the old royal sports hall, now called the "Ballhof" theatre. On the edge of the Old Town are the "Market Hall", the "Leine Palace", and the ruin of the "Aegidien Church" which is now a monument to the victims of war and violence. Through the "Marstall Gate" the bank of the river "Leine" can be reached; the "Nanas" of Niki de Saint Phalle are located here. They are part of the "Mile of Sculptures", which starts from Trammplatz, leads along the river bank, crosses Königsworther Square, and ends at the entrance of the Georgengarten. Near the Old Town is the district of Calenberger Neustadt where the Catholic Basilica Minor of "St. Clemens", the "Reformed Church" and the Lutheran Neustädter Hof- und Stadtkirche St. Johannis are located.
Some other popular sights are the "Waterloo Column", the "Laves House", the "Wangenheim Palace", the "Lower Saxony State Archives", the "Hanover Playhouse", the "Kröpcke Clock", the "Anzeiger Tower Block", the "Administration Building of the NORD/LB", the "Cupola Hall" of the Congress Centre, the "Lower Saxony Stock", the "Ministry of Finance", the "Garten Church", the "Luther Church", the "Gehry Tower" (designed by the American architect Frank O. Gehry), the specially designed "Bus Stops", the "Opera House", the "Central Station", the "Maschsee" lake and the city forest "Eilenriede", which is one of the largest of its kind in Europe. With around 40 parks, forests and gardens, a couple of lakes, two rivers and one canal, Hanover offers a large variety of leisure activities.
Since 2007 the historic "Leibniz Letters", which can be viewed in the "Gottfried Wilhelm Leibniz Library", have been on UNESCO's Memory of the World Register.
Outside the city centre is the "EXPO-Park", the former site of EXPO 2000. Some points of interest are the "Planet M.", the former "German Pavillon", some nations' vacant pavilions, the "Expowale", the "EXPO-Plaza" and the "EXPO-Gardens" (Parc Agricole, EXPO-Park South and the Gardens of change). The fairground can be reached by the "Exponale", one of the largest pedestrian bridges in Europe.
The "Hanover fairground" is the largest exhibition centre in the world.
It provides 496,000 square metres of covered indoor space, 58,000 square metres of open-air space, and 27 halls and pavilions, many of which are architectural highlights. It also offers the Convention Center with its 35 function rooms, glassed-in areas between halls, grassy park-like recreation zones, and its own heliport. Two important sights on the fairground are the "Hermes Tower" (88.8 metres high) and the "EXPO Roof", the largest wooden roof in the world.
In the district of Anderten is the "European Cheese Centre", the only Cheese Experience Centre in Europe. Another tourist sight in Anderten is the "Hindenburg Lock", which was the biggest lock in Europe at the time of its construction in 1928. The "Tiergarten" (literally the "animals' garden") in the district of Kirchrode is a large forest originally used for deer and other game for the king's table.
The 282-metre-high "Telemax" in the district of Groß-Buchholz is the tallest building in Lower Saxony and the tallest television tower in northern Germany. Other notable towers are the "VW-Tower" in the city centre and the towers of the former medieval defensive belt: the "Döhrener Tower", the "Lister Tower" and the "Horse Tower".
The 36 most important sights of the city centre are connected by a red line painted on the pavement. This so-called "Red Thread" marks out a walk that starts at the Tourist Information Office and ends on Ernst-August-Square in front of the central station. There is also a guided sightseeing bus tour through the city.
Hanover is headquarters for several Protestant organizations, including the World Communion of Reformed Churches, the Evangelical Church in Germany, the Reformed Alliance, the United Evangelical Lutheran Church of Germany, and the Independent Evangelical-Lutheran Church.
In 2015, 31.1% of the population were Protestant and 13.4% were Roman Catholic. The remaining 55.5% were non-religious or belonged to other faiths.
The "Historic Museum" describes the history of Hanover, from the medieval settlement "Honovere" to the world-famous Exhibition City of today. The museum focuses on the period from 1714 to 1834 when Hanover had a strong relationship with the British royal house.
With more than 4,000 members, the "Kestnergesellschaft" is the largest art society in Germany. It hosts exhibitions ranging from classical modernism to contemporary art, with a particular focus on film, video, contemporary music and architecture, room installations, and large presentations of contemporary painting, sculpture and video art.
The "Kestner-Museum" is located in the "House of 5.000 windows". The museum is named after August Kestner and exhibits 6,000 years of applied art in four areas: Ancient cultures, ancient Egypt, applied art and a valuable collection of historic coins.
The "KUBUS" is a forum for contemporary art. It features mostly exhibitions and projects of famous and important artists from Hanover.
The "Kunstverein Hannover" (Art Society Hanover) shows contemporary art and was established in 1832 as one of the first art societies in Germany. It is located in the "Künstlerhaus" (House of artists). There are around 7 international monografic and thematic Exhibitions in one year.
The "Lower Saxony State Museum" is the largest museum in Hanover. The "State Gallery" shows the European Art from the 11th to the 20th century, the "Nature Department" shows the zoology, geology, botanic, geology and a "Vivarium" with fishes, insects, reptiles and amphibians. The "Primeval Department" shows the primeval history of Lower Saxony and the "Folklore Department" shows the cultures from all over the world.
The "Sprengel Museum" shows the art of the 20th century. It is one of the most notable art museums in Germany. The focus is put on the classical modernist art with the collection of "Kurt Schwitters", works of German expressionism, and French cubism, the cabinet of abstracts, the graphics and the department of photography and media. Furthermore, the museum shows the famous works of the French artist Niki de Saint-Phalle.
The "Theatre Museum" shows an exhibition of the history of the theatre in Hanover from the 17th century up to now: opera, concert, drama and ballet. The museum also hosts several touring exhibitions during the year.
The "Wilhelm Busch Museum" is the "German Museum of Caricature and Critical Graphic Arts". The collection of the works of Wilhelm Busch and the extensive collection of cartoons and critical graphics is unique in Germany. Furthermore, the museum hosts several exhibitions of national and international artists during the year.
The "Münzkabinett der TUI-AG" is a cabinet of coins. The "Polizeigeschichtliche Sammlung Niedersachsen" is the largest police museum in Germany. Textiles from all over the world are displayed in the "Museum for Textile Art". The "EXPOseeum" is the museum of the world exhibition "EXPO 2000 Hannover". Carpets and objects from the Orient are displayed in the "Oriental Carpet Museum". The "Museum for the Visually Impaired" is a rarity in Germany; the only other one of its kind is in Berlin. The "Museum of Veterinary Medicine" is unique in Germany. The "Museum for Energy History" describes the 150-year history of the application of energy. The "Heimat-Museum Ahlem" shows the history of the district of Ahlem, the "Mahn- und Gedenkstätte Ahlem" describes the history of the Jewish people in Hanover, and the "Stiftung Ahlers Pro Arte / Kestner Pro Arte" shows modern art. Modern art is also the main topic of the "Kunsthalle Faust", the "Nord/LB Art Gallery" and the "Foro Artistico / Eisfabrik".
Some leading art events in Hanover are the "Long Night of the museums" and the "Zinnober Kunstvolkslauf" which features all the galleries in Hanover.
Those interested in astronomy can visit the "Observatory Geschwister Herschel" on the Lindener Mountain or the small planetarium inside the Bismarck School.
Around 40 theatres are located in Hanover. The "Opera House", the "Schauspielhaus" (Play House), the "Ballhof eins", the "Ballhof zwei" and the "Cumberlandsche Galerie" belong to the "Lower Saxony State Theatre". The "Theater am Aegi" is Hanover's big venue for musicals, shows and guest performances. The "Neues Theater" (New Theatre) is Hanover's boulevard theatre. The "Theater für Niedersachsen" is another big theatre in Hanover, with its own musical company. Some of the most important musical productions are the rock musicals of the German rock musician Heinz Rudolf Kunze, which take place at the "Garden Theatre" in the Great Garden.
Some important theatre-events are the "Tanztheater International", the "Long Night of the Theatres", the "Festival Theaterformen" and the "International Competition for Choreographers".
Hanover's leading cabaret stage is the "GOP Variety Theatre", located in the "Georgs Palace". Other well-known cabaret stages are the "Variety Marlene", the "Uhu-Theatre", the theatre "Die Hinterbühne", the "Rampenlicht Variety" and the revue stage "TAK". The most important cabaret event is the "Kleines Fest im Großen Garten" (Little Festival in the Great Garden), the most successful cabaret festival in Germany, featuring artists from around the world. Other important events are the "Calenberger Cabaret Weeks", the "Hanover Cabaret Festival" and the "Wintervariety".
Hanover has two symphony orchestras: The Lower Saxon State Orchestra Hanover and the NDR Radiophilharmonie (North German Radio Philharmonic Orchestra). Two notable choirs have their homes in Hanover: the Girls Choir Hanover (Mädchenchor Hannover) and the Knabenchor Hannover (Boys Choir Hanover).
Two major international competitions for classical music have been held in Hanover.
The rock bands Scorpions and Fury in the Slaughterhouse are originally from Hanover. Acclaimed DJ Mousse T also has his main recording studio in the area. Rick J. Jordan, a member of the band Scooter, was born here in 1968. Lena, winner of the 2010 Eurovision Song Contest, is also from Hanover.
Hannover 96 (nicknamed "Die Roten", "The Reds") is the top local football team and currently plays in the 2. Bundesliga. Home games are played at the HDI-Arena, which hosted matches in the 1974 and 2006 World Cups and at Euro 1988. The reserve team, Hannover 96 II, plays in the fourth league; its home games were played in the traditional Eilenriedestadion until the move to the HDI-Arena required by DFL directives. Arminia Hannover is another traditional football club in Hanover, which played in the first league for years and now plays in the Niedersachsen-West Liga (Lower Saxony League West); its home matches are played in the Rudolf-Kalweit-Stadium.
The Hannover Indians are the local ice hockey team. They play in the third tier. Their home games are played at the traditional Eisstadion am Pferdeturm. The Hannover Scorpions played in Hanover in Germany's top league until 2013 when they sold their license and moved to Langenhagen.
Hanover was one of the rugby union capitals in Germany. The first German rugby team was founded in Hanover in 1878. Hanover-based teams dominated the German rugby scene for a long time. DRC Hannover plays in the first division, and "SV Odin von 1905" as well as SG 78/08 Hannover play in the second division.
The first German fencing club was founded in Hanover in 1862. Today there are three additional fencing clubs in Hanover.
The Hannover Korbjäger are the city's top basketball team. They play their home games at the IGS Linden.
Hanover is a centre for water sports. Thanks to the Maschsee, the rivers Ihme and Leine, and the Mittellandkanal, Hanover hosts sailing schools, yacht schools, waterski clubs, rowing clubs, canoe clubs and paddle clubs. The water polo team WASPO W98 plays in the first division.
The Hannover Regents play in the third division of the baseball Bundesliga. The Hannover Grizzlies, Armina Spartans and Hannover Stampeders are the local American football teams.
The Hannover Marathon is the biggest running event in Hanover, with more than 11,000 participants and usually around 200,000 spectators. Other important running events are the Gilde Stadtstaffel (relay), the Sport-Check Nachtlauf (night run), the Herrenhäuser Team-Challenge, the Hannoversche Firmenlauf (company run) and the Silvesterlauf (New Year's Eve run).
Hanover also hosts an important international cycle race: The "Nacht von Hannover" (night of Hanover). The race takes place around the Market Hall.
The Maschsee hosts the International Dragon Boat Races and the Canoe Polo Tournament, and many regattas take place during the year. "Head of the River Leine" on the river Leine is one of the biggest rowing regattas in Hanover. One of Germany's most successful dragon boat teams, the All Sports Team Hannover, which has won more than 100 medals in national and international competitions since its foundation in 2000, practises on the Maschsee in the heart of Hanover. The team received the award "Team of the Year 2013" in Lower Saxony.
Some other important sport events are the Lower Saxony Beach Volleyball Tournament, the international horse show "German Classics" and the international ice hockey tournament Nations Cup.
Hanover is one of the leading exhibition cities in the world. It hosts more than 60 international and national exhibitions every year. The most popular ones are the "CeBIT", the "Hanover Fair", the "Domotex", the "Ligna", the "IAA Nutzfahrzeuge" and the "Agritechnica". Hanover also hosts a huge number of congresses and symposiums like the "International Symposium on Society and Resource Management."
Hanover also hosts the "Schützenfest Hannover", the largest marksmen's fun fair in the world, which takes place once a year from late June to early July (in 2014, July 4 to 13). Founded in 1529, it consists of more than 260 rides and inns, five large beer tents and a big entertainment programme. The highlight of this fun fair is the long "Parade of the Marksmen", with more than 12,000 participants from all over the world, including around 5,000 marksmen, 128 bands, and more than 70 wagons, carriages, and other festival vehicles, making it the longest procession in Europe. Around 2 million people visit this fun fair every year. The landmark of the fair is the biggest transportable Ferris wheel in the world.
Hanover also hosts one of the two largest spring festivals in Europe, with around 180 rides and inns, two large beer tents, and around 1.5 million visitors each year. The Oktoberfest Hannover is the second largest Oktoberfest in the world, with around 160 rides and inns, two large beer tents and around 1 million visitors each year.
The "Maschsee Festival" takes place around the Maschsee Lake. Each year around 2 million visitors come to enjoy live music, comedy, cabaret, and much more. It is the largest Volksfest of its kind in Northern Germany. The Great Garden hosts every year the "International Fireworks Competition", and the "International Festival Weeks Herrenhausen," with music and cabaret performances. The "Carnival Procession" is around long and consists of 3.000 participants, around 30 festival vehicles and around 20 bands and takes place every year.
Other festivals include the Festival "Feuer und Flamme" (Fire and Flames), the "Gartenfestival" (Garden Festival), the "Herbstfestival" (Autumn Festival), the "Harley Days", the "Steintor Festival" (Steintor is a party area in the city centre) and the "Lister-Meile-Festival" (Lister Meile is a large pedestrian area).
Hanover also hosts food-oriented festivals including the "Wine Festival" and the "Gourmet Festival". It also hosts some special markets. The "Old Town Flea Market" is the oldest flea market in Germany and the "Market for Art and Trade" has a high reputation. Some other major markets include the "Christmas Markets of the City of Hanover" in the Old Town and city centre, and the Lister Meile.
The city's central station, Hannover Hauptbahnhof, is a hub of the German high-speed ICE network. It is the starting point of the Hanover-Würzburg high-speed rail line and also the central hub for the Hanover S-Bahn. It offers many international and national connections.
Hanover and its surrounding area are served by Hanover/Langenhagen International Airport (IATA: HAJ; ICAO: EDDV).
Hanover is also an important hub of Germany's Autobahn network; the junction of two major autobahns, the A2 and the A7, is at "Kreuz Hannover-Ost" on the northeastern edge of the city.
Local autobahns are the A 352 (a shortcut between the A7 to the north and the A2 to the west, also known as the "airport autobahn" because it passes Hanover Airport) and the A 37.
The Schnellweg "(en: expressway)" system, a number of Bundesstraße roads, forms a structure loosely resembling a large ring road together with A2 and A7. The roads are B 3, B 6 and Bundesstraße 65|B 65, called Westschnellweg (B6 on the northern part, B3 on the southern part), Messeschnellweg (B3, becomes A37 near Burgdorf, crosses A2, becomes B3 again, changes to B6 at "Seelhorster Kreuz", then passes the Hanover fairground as B6 and becomes A37 again before merging into A7) and Südschnellweg (starts out as B65, becomes B3/B6/B65 upon crossing "Westschnellweg", then becomes B65 again at "Seelhorster Kreuz").
Hanover has an extensive Stadtbahn and bus system, operated by üstra. The city is famous for its designer buses and tramways, the TW 6000 and TW 2000 trams being the most well-known examples.
Cycle paths are very common in the city centre, and during off-peak hours bicycles may be taken on trams and buses.
Various industrial businesses are located in Hanover. The Volkswagen Commercial Vehicles (VWN) Transporter factory at Hanover-Stöcken is the biggest employer in the region, operating a large plant at the northern edge of town adjoining the Mittellandkanal and the A2 motorway. Volkswagen shares a coal-burning power plant with a factory of the German tyre and automobile parts manufacturer Continental AG. Continental AG, founded in Hanover in 1871, is one of the city's major companies. Since 2008 a takeover has been in progress: the Schaeffler Group from Herzogenaurach (Bavaria) holds the majority of Continental's stock but was required, owing to the financial crisis, to deposit the options as securities with banks.
The audio equipment company Sennheiser and the travel group TUI AG are both based in Hanover. Hanover is home to many insurance companies, including Talanx, VHV Group, and Concordia Insurance. The major global reinsurance company Hannover Re also has its headquarters east of the city centre.
In 2012, the city generated a GDP of €29.5 billion, which is equivalent to €74,822 per employee. The gross value of production in 2012 was €26.4 billion, which is equivalent to €66,822 per employee.
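Both per-employee figures above are consistent with a single implied workforce size; a quick sanity check (the euro amounts come from the text, while the derived workforce numbers are ours, not from the source):

```python
# Consistency check of the 2012 figures quoted above.
gdp_total = 29.5e9         # GDP in euros
production_total = 26.4e9  # gross value of production in euros

workers_from_gdp = gdp_total / 74_822          # ~394,000
workers_from_production = production_total / 66_822  # ~395,000

# Both quotients imply essentially the same workforce.
print(f"{workers_from_gdp:,.0f}")
print(f"{workers_from_production:,.0f}")
```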
Around 300,000 employees were counted in 2014. Of these, 189,000 had their primary residence in Hanover, while 164,892 commuted into the city every day.
In 2014 the city was home to 34,198 businesses, of which 9,342 were registered in the German Trade Register and 24,856 counted as small businesses. Hence, more than half of the metropolitan area's 17,485 businesses in the German Trade Register are located in Hanover.
Hannoverimpuls GmbH is a joint business development company of the city and the Region of Hannover. Founded in 2003, it supports the start-up, growth and relocation of businesses in the Hanover Region. The focus is on seven sectors that stand for sustainable economic growth: automotive, energy solutions, information and communications technology, life sciences, optical technologies, creative industries and production engineering.
A range of programmes supports companies from the key industries in their expansion plans in Hannover or abroad. Three regional centres specifically promote international economic relations with Russia, India and Turkey.
Leibniz University Hannover is the largest institution of higher education in Hanover, drawing students from around the world. The city also hosts several other universities and important schools, among them the Hannover Medical Research School, opened in 2003 to attract students with a biology background from around the world, as well as one University of Applied Sciences and Arts.
The "Schulbiologiezentrum Hannover" maintains practical biology schools in four locations (Botanischer Schulgarten Burg, Freiluftschule Burg, Zooschule Hannover, and Botanischer Schulgarten Linden). The University of Veterinary Medicine Hanover also maintains its own botanical garden specializing in medicinal and poisonous plants, the Heil- und Giftpflanzengarten der Tierärztlichen Hochschule Hannover.
The following is a selection of famous Hanover natives, personalities connected with the city, and honorary citizens:
Hanover is twinned with:
https://en.wikipedia.org/wiki?curid=14197
Handheld game console
A handheld game console, or simply handheld console, is a small, portable self-contained video game console with a built-in screen, game controls and speakers. Handheld game consoles are smaller than home video game consoles and contain the console, screen, speakers, and controls in one unit, allowing people to carry them and play them at any time or place.
In 1976, Mattel introduced the first handheld electronic game with the release of "Auto Race". Later, several companies, including Coleco and Milton Bradley, made their own single-game, lightweight tabletop or handheld electronic game devices. The oldest true handheld game console with interchangeable cartridges is the Milton Bradley Microvision, released in 1979.
Nintendo is credited with popularizing the handheld console concept with the release of the Game Boy in 1989 and continues to dominate the handheld console market.
The origins of handheld game consoles are found in handheld and tabletop electronic game devices of the 1970s and early 1980s. These electronic devices are capable of playing only a single game, they fit in the palm of the hand or on a tabletop, and they may make use of a variety of video displays such as LED, VFD, or LCD. In 1978, handheld electronic games were described by "Popular Electronics" magazine as "nonvideo electronic games" and "non-TV games" as distinct from devices that required use of a television screen. Handheld electronic games, in turn, find their origins in the synthesis of previous handheld and tabletop electro-mechanical devices such as Waco's "Electronic Tic-Tac-Toe" (1972), Cragstan's "Periscope-Firing Range" (1951), and the emerging optoelectronic-display-driven calculator market of the early 1970s. This synthesis happened in 1976, when "Mattel began work on a line of calculator-sized sports games that became the world's first handheld electronic games. The project began when Michael Katz, Mattel's new product category marketing director, told the engineers in the electronics group to design a game the size of a calculator, using LED (light-emitting diode) technology."
The result was the 1976 release of "Auto Race". It was followed by "Football" in 1977; the two games were so successful that according to Katz, "these simple electronic handheld games turned into a '$400 million category.'" Mattel would later win the honor of being recognized by the industry for innovation in handheld game device displays. Soon, other manufacturers including Coleco, Parker Brothers, Milton Bradley, Entex, and Bandai began following up with their own tabletop and handheld electronic games.
In 1979 the LCD-based Microvision, designed by Smith Engineering and distributed by Milton-Bradley, became the first handheld game console and the first to use interchangeable game cartridges. The Microvision game "Cosmic Hunter" (1981) also introduced the concept of a directional pad on handheld gaming devices, and is operated by using the thumb to manipulate the on-screen character in any of four directions.
In 1979, Gunpei Yokoi, traveling on a bullet train, saw a bored businessman playing with an LCD calculator by pressing the buttons. Yokoi then thought of an idea for a watch that doubled as a miniature game machine for killing time. Starting in 1980, Nintendo began to release a series of electronic games designed by Yokoi called the Game & Watch games. Taking advantage of the technology used in the credit-card-sized calculators that had appeared on the market, Yokoi designed the series of LCD-based games to include a digital time display in the corner of the screen. For later, more complicated Game & Watch games, Yokoi invented a cross-shaped directional pad, or "D-pad", for control of on-screen characters. Yokoi also included his directional pad on the NES controllers, and the cross-shaped thumb controller soon became standard on game console controllers and has been ubiquitous across the video game industry since. When Yokoi began designing Nintendo's first handheld game console, he came up with a device that married the elements of his Game & Watch devices and the Famicom console, including both items' D-pad controller. The result was the Nintendo Game Boy.
In 1982, the Bandai LCD Solarpower was the first solar-powered gaming device. Some of its games, such as the horror-themed "Terror House", featured two LCD panels, one stacked on the other, for an early 3D effect. In 1983, Takara Tomy's Tomytronic 3D simulated 3D by having two LCD panels lit by external light through a window on top of the device, making it the first dedicated home video 3D hardware.
The late 1980s and early 1990s saw the beginnings of the modern day handheld game console industry, after the demise of the Microvision. As backlit LCD game consoles with color graphics consume a lot of power, they were not battery-friendly like the non-backlit original Game Boy whose monochrome graphics allowed longer battery life. By this point, rechargeable battery technology had not yet matured and so the more advanced game consoles of the time such as the Sega Game Gear and Atari Lynx did not have nearly as much success as the Game Boy.
Even though third-party rechargeable batteries were available for the battery-hungry alternatives to the Game Boy, these batteries employed a nickel-cadmium process and had to be completely discharged before being recharged to ensure maximum efficiency; lead-acid batteries could be used with automobile circuit limiters (cigarette lighter plug devices), but these had mediocre portability. The later NiMH batteries, which do not share this requirement for maximum efficiency, were not released until the late 1990s, years after the Game Gear, Atari Lynx, and original Game Boy had been discontinued. At the time, even technologically superior handhelds were held back by batteries with very low mAh ratings, since cells with high energy density were not yet available.
Modern game systems such as the Nintendo DS and PlayStation Portable have rechargeable Lithium-Ion batteries with proprietary shapes. Other seventh-generation consoles such as the GP2X use standard alkaline batteries. Because the mAh rating of alkaline batteries has increased since the 1990s, the power needed for handhelds like the GP2X may be supplied by relatively few batteries.
Nintendo released the Game Boy on April 21, 1989 (September 1990 for the UK). The design team headed by Gunpei Yokoi had also been responsible for the Game & Watch system, as well as the Nintendo Entertainment System games "Metroid" and "Kid Icarus". The Game Boy came under scrutiny from some industry critics, who said that the monochrome screen was too small and the processing power inadequate. The design team had felt that low initial cost and battery economy were more important concerns, and compared to the Microvision, the Game Boy was a huge leap forward.
Yokoi recognized that the Game Boy needed a killer app—at least one game that would define the console, and persuade customers to buy it. In June 1988, Minoru Arakawa, then-CEO of Nintendo of America saw a demonstration of the game "Tetris" at a trade show. Nintendo purchased the rights for the game, and packaged it with the Game Boy system as a launch title. It was almost an immediate hit. By the end of the year more than a million units were sold in the US. As of March 31, 2005, the Game Boy and Game Boy Color combined to sell over 118 million units worldwide.
In 1987, Epyx created the Handy Game, a device that would become the Atari Lynx in 1989. It was the first color handheld console ever made, as well as the first with a backlit screen. It also featured networking support with up to 17 other players and advanced hardware that allowed zooming and scaling of sprites. The Lynx could also be turned upside down to accommodate left-handed players. However, all these features came at a very high price point, which drove consumers to seek cheaper alternatives. The Lynx was also very unwieldy, consumed batteries very quickly, and lacked the third-party support enjoyed by its competitors. Due to its high price, short battery life, production shortages, a dearth of compelling games, and Nintendo's aggressive marketing campaign, and despite a redesign in 1991, the Lynx became a commercial failure. Despite this, companies like Telegames helped keep the system alive long past its commercial relevance, and after new owner Hasbro released the rights to develop for the public domain, independent developers like Songbird managed to release new commercial games for the system every year until 2004's "Winter Games".
The TurboExpress is a portable version of the TurboGrafx, released in 1990 for $249.99 (the price was briefly raised to $299.99, soon dropped back to $249.99, and by 1992 it was $199.99). Its Japanese equivalent is the PC Engine GT.
It was the most advanced handheld of its time and can play all the TurboGrafx-16's games, which come on small, credit-card-sized media called HuCards. It has a 66 mm (2.6 in) screen, the same size as the original Game Boy's but at a much higher resolution, and can display 64 sprites at once, 16 per scanline, in 512 colors, although the hardware can only handle 481 simultaneous colors. It has 8 kilobytes of RAM and runs its HuC6280 CPU at 1.79 or 7.16 MHz.
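The 512-color figure matches a master palette with 3 bits per RGB channel; a minimal arithmetic sketch (the bit-depth is inferred from the 512-color total, not stated in the text):

```python
# A 512-colour master palette corresponds to 3 bits per RGB channel
# (8 intensity levels each for red, green and blue).
bits_per_channel = 3
levels = 2 ** bits_per_channel       # 8 levels per channel
palette_size = levels ** 3           # all R/G/B combinations

print(palette_size)  # 512
```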
The optional "TurboVision" TV tuner includes RCA audio/video input, allowing users to use TurboExpress as a video monitor. The "TurboLink" allowed two-player play. "Falcon", a flight simulator, included a "head-to-head" dogfight mode that can only be accessed via TurboLink. However, very few TG-16 games offered co-op play modes especially designed with the TurboExpress in mind.
The Bitcorp Gamate is one of the first handheld game systems created in response to the Nintendo Game Boy. It was released in Asia in 1990 and distributed worldwide by 1991.
Like the Sega Game Gear, it was horizontally oriented and, like the Game Boy, required 4 AA batteries. Unlike many later Game Boy clones, its internal components were professionally assembled (no "glop-top" chips). Unfortunately, the system's fatal flaw was its screen: even by the standards of the day it was rather difficult to use, suffering from motion blur problems similar to those commonly complained about on first-generation Game Boys. Likely because of this, sales were quite poor, and Bitcorp closed by 1992. However, new games continued to be published for the Asian market, possibly as late as 1994. The total number of games released for the system remains unknown.
Gamate games were designed for stereo sound, but the console is only equipped with a mono speaker.
The Game Gear, produced by Sega, was the third color handheld console, after the Lynx and the TurboExpress. Released in Japan in 1990 and in North America and Europe in 1991, it was based on the Master System, which gave Sega the ability to quickly create Game Gear games from its large library of Master System games. While never reaching the level of success enjoyed by Nintendo, the Game Gear proved to be a fairly durable competitor, lasting longer than any other Game Boy rival.
While the Game Gear is most frequently seen in black or navy blue, it was also released in a variety of additional colors: red, light blue, yellow, clear, and violet. All of these variations were released in small quantities and frequently only in the Asian market.
Following Sega's success with the Game Gear, they began development on a successor during the early 1990s, which was intended to feature a touchscreen interface, many years before the Nintendo DS. However, such a technology was very expensive at the time, and the handheld itself was estimated to have cost around $289 were it to be released. Sega eventually chose to shelve the idea and instead release the Genesis Nomad, a handheld version of the Genesis, as the successor.
The Watara Supervision was released in 1992 in an attempt to compete with the Nintendo Game Boy. The first model was designed very much like a Game Boy, but was grey in color and had a slightly larger screen. The second model was made with a hinge across the center and could be bent slightly to provide greater comfort for the user. While the system enjoyed a modest degree of success, it never impacted the sales of Nintendo or Sega. The Supervision was redesigned a final time as "The Magnum". Released in limited quantities, it was roughly equivalent to the Game Boy Pocket and was available in three colors: yellow, green and grey. Watara designed many of the games itself, but did receive some third-party support, most notably from Sachen.
A TV adapter was available in both PAL and NTSC formats that could map the Supervision's black-and-white palette to 4 colors, similar in some regards to Nintendo's Super Game Boy.
The Hartung Game Master is an obscure handheld released at an unknown point in the early 1990s. Its graphics capabilities were far below those of most of its contemporaries, similar in complexity to the Atari 2600. It was available in black, white, and purple, and was frequently rebranded by its distributors, such as Delplay, Videojet and Systema.
The exact number of games released is not known, but is likely around 20. The system most frequently turns up in Europe and Australia.
By this time, the lack of significant development in Nintendo's product line had begun to allow more advanced systems, such as the Neo Geo Pocket Color and the WonderSwan Color, to be developed.
The Nomad was released in October 1995 in North America only. The release was five years into the market span of the Genesis, with an existing library of more than 500 Genesis games. According to former Sega of America research and development head Joe Miller, the Nomad was not intended to be the Game Gear's replacement and believes that there was little planning from Sega of Japan for the new handheld. Sega was supporting five different consoles: Saturn, Genesis, Game Gear, Pico, and the Master System, as well as the Sega CD and 32X add-ons. In Japan, the Mega Drive had never been successful and the Saturn was more successful than Sony's PlayStation, so Sega Enterprises CEO Hayao Nakayama decided to focus on the Saturn. By 1999, the Nomad was being sold at less than a third of its original price.
The Game Boy Pocket, released in 1996, is a redesigned version of the original Game Boy with the same features. Notably, it is smaller and lighter. It comes in several colors: red, yellow, green, black, clear, silver, blue, and pink. It has space for two AAA batteries, which provide approximately 10 hours of game play. The screen was changed to a true black-and-white display, rather than the "pea soup" monochromatic display of the original Game Boy, with improved response time that mostly eliminated ghosting. Although, like its predecessor, the Game Boy Pocket has no backlight to allow play in a darkened area, it notably improved visibility.
The first model of the Game Boy Pocket did not have an LED to show battery levels, but the feature was added due to public demand. The Game Boy Pocket was not a new software platform; it played the same software as the original Game Boy model.
The Game.com (pronounced in TV commercials as "game com", not "game dot com", and not capitalized in marketing material) is a handheld game console released by Tiger Electronics in September 1997. It featured many new ideas for handheld consoles and was aimed at an older target audience, sporting PDA-style features and functions such as a touch screen and stylus. However, Tiger hoped it would also challenge Nintendo's Game Boy and gain a following among younger gamers too. Unlike other handheld game consoles, the first game.com consoles included two slots for game cartridges, which would not happen again until the Tapwave Zodiac, the DS and DS Lite, and could be connected to a 14.4 kbit/s modem. Later models had only a single cartridge slot.
The Game Boy Color (also referred to as GBC or CGB) is Nintendo's successor to the Game Boy and was released on October 21, 1998, in Japan and in November of the same year in the United States. It features a color screen and is slightly bigger than the Game Boy Pocket. Its processor is twice as fast as the original Game Boy's, and it has twice as much memory. It also had an infrared communications port for wireless linking, which did not appear in later versions of the Game Boy line, such as the Game Boy Advance.
The Game Boy Color was a response to pressure from game developers for a new system, as they felt that the Game Boy, even in its latest incarnation, the Game Boy Pocket, was insufficient. The resulting product was backward compatible, a first for a handheld console system, and leveraged the large library of games and large installed base of the predecessor system. This became a major feature of the Game Boy line, since it allowed each new launch to begin with a significantly larger library than any of its competitors. As of March 31, 2005, the Game Boy and Game Boy Color combined to sell 118.69 million units worldwide.
The console is capable of displaying up to 56 different colors simultaneously on screen from its palette of 32,768, and can add basic four-color shading to games that had been developed for the original Game Boy. It can also give the sprites and backgrounds separate colors, for a total of more than four colors.
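Both figures in this paragraph follow from simple arithmetic: 32,768 is a 15-bit RGB palette (5 bits per channel), and 56 is consistent with eight 4-color background palettes plus eight sprite palettes of 3 visible colors each. A brief sketch; the palette-layout breakdown is the conventional hardware description, assumed here rather than stated in the text:

```python
# 15-bit RGB palette: 5 bits each for red, green and blue.
master_palette = 2 ** (5 * 3)
print(master_palette)  # 32768

# Assumed layout consistent with the 56-colour figure:
# 8 background palettes of 4 colours, plus 8 sprite palettes of
# 4 entries each, one of which is transparent (3 visible colours).
simultaneous = 8 * 4 + 8 * 3
print(simultaneous)  # 56
```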
The Neo Geo Pocket Color (or NGPC) was released in 1999 in Japan, and later that year in the United States and Europe. It is a 16-bit color handheld game console designed by SNK, the maker of the Neo Geo home console and arcade machine. It came after SNK's original Neo Geo Pocket monochrome handheld, which debuted in 1998 in Japan.
In 2000, following SNK's purchase by the Japanese pachinko manufacturer Aruze, the Neo Geo Pocket Color was dropped from both the US and European markets, purportedly due to commercial failure.
The system seemed well on its way to being a success in the U.S.; it was more successful than any Game Boy competitor since Sega's Game Gear, but was hurt by several factors, such as SNK's infamous lack of communication with third-party developers and anticipation of the Game Boy Advance. The cost-cutting decision to ship U.S. games in cardboard boxes, rather than the hard plastic cases used for Japanese and European releases, may also have hurt U.S. sales.
The WonderSwan Color is a handheld game console designed by Bandai and released on December 9, 2000, in Japan. Although the WonderSwan Color was slightly larger and heavier (by 7 mm and 2 g) than the original WonderSwan, the color version featured 512 kB of RAM and a larger color LCD screen. In addition, the WonderSwan Color is compatible with the original WonderSwan's library of games.
Prior to the WonderSwan's release, Nintendo had a virtual monopoly on the Japanese handheld video game market. After the release of the WonderSwan Color, Bandai took approximately 8% of the market share in Japan, partly due to its low price of 6,800 yen (approximately US$65). Another reason for the WonderSwan's success in Japan was that Bandai managed to strike a deal with Square to port the original Famicom "Final Fantasy" games with improved graphics and controls. However, with the popularity of the Game Boy Advance and the reconciliation between Square and Nintendo, the WonderSwan Color and its successor, the SwanCrystal, quickly lost their competitive advantage.
The 2000s saw a major leap in innovation, particularly in the second half with the release of the DS and PSP.
In 2001, Nintendo released the Game Boy Advance (GBA or AGB), which added two shoulder buttons, a larger screen, and more computing power than the Game Boy Color.
The design was revised two years later when the Game Boy Advance SP (GBA SP), a more compact version, was released. The SP features a "clamshell" design (folding open and closed, like a laptop computer), as well as a frontlit color display and rechargeable battery. Despite the smaller form factor, the screen remained the same size as that of the original. In 2005, the Game Boy Micro was released. This revision sacrifices screen size and backwards compatibility with previous Game Boys for a dramatic reduction in total size and a brighter backlit screen. A new SP model with a backlit screen was released in some regions around the same time.
Along with the Nintendo GameCube, the GBA also introduced the concept of "connectivity": using a handheld system as a console controller. A handful of games use this feature, most notably "Animal Crossing", "Pac-Man Vs.", "Final Fantasy Crystal Chronicles", and "Metroid Prime".
As of December 31, 2007, the GBA, GBA SP, and the Game Boy Micro combined have sold 80.72 million units worldwide.
The original GP32 was released in 2001 by the South Korean company Game Park, a few months after the launch of the Game Boy Advance. It featured a 32-bit, 133 MHz CPU, an MP3 and DivX player, and an e-book reader. SmartMedia cards were used for storage and could hold up to 128 MB of content downloaded through a USB cable from a PC. The GP32 was redesigned in 2003, when a front-lit screen was added and the new version was called the GP32 FLU (Front Light Unit). In summer 2004 another redesign, the GP32 BLU, added a backlit screen. This version of the handheld was planned for release outside South Korea and was released in parts of Europe, for example in Spain, where VirginPlay was the distributor. While not a commercial success on the level of the mainstream handhelds (only 30,000 units were sold), it ended up being used mainly as a platform for user-made applications and emulators of other systems, being popular with developers and more technically adept users.
Nokia released the N-Gage in 2003. It was designed as a combination MP3 player, cellphone, PDA, radio, and gaming device. The system received much criticism alleging defects in its physical design and layout, including its vertically oriented screen and the need to remove the battery to change game cartridges. The best known complaint was "sidetalking": the placement of the phone's speaker and receiver on an edge of the device instead of one of the flat sides, causing users to appear as if they were speaking into a taco.
The N-Gage QD was later released to address the design flaws of the original. However, certain features available in the original N-Gage, including MP3 playback, FM radio reception, and USB connectivity were removed.
The second generation of N-Gage launched on April 3, 2008, as a service for selected Nokia smartphones.
The Cybiko is a Russian handheld computer introduced in May 2000 by David Yang's company and designed for teenage audiences, featuring its own two-way radio text messaging system. It has over 430 "official" freeware games and applications. To support the text messaging system, it features a QWERTY keyboard used with a stylus. An MP3 player add-on was made for the unit, as well as a SmartMedia card reader. The company stopped manufacturing the units after two product versions and only a few years on the market. Cybikos can communicate with each other up to a maximum range of 300 metres (0.19 miles), and several Cybikos can chat with each other in a wireless chatroom.
Cybiko Classic:
There were two models of the Cybiko Classic. Visually, the only difference was that the original version had a power switch on the side, whilst the updated version used the "escape" key for power management. Internally, the two models differed in their internal memory and the location of the firmware.
Cybiko Xtreme:
The Cybiko Xtreme was the second-generation Cybiko handheld. It featured various improvements over the original Cybiko, such as a faster processor, more RAM, more ROM, a new operating system, a new keyboard layout and case design, greater wireless range, a microphone, improved audio output, and smaller size.
In 2003, Tapwave released the Zodiac, designed to be a PDA/handheld game console hybrid. It supported photos, movies, music, Internet, and documents. The Zodiac ran a special version of Palm OS 5 (5.2T) that supported its dedicated gaming buttons and graphics chip. Two versions were available, the Zodiac 1 and 2, differing in memory and looks. The Zodiac line ended in July 2005 when Tapwave declared bankruptcy.
The Nintendo DS was released in November 2004. Among its new features were the incorporation of two screens, a touchscreen, wireless connectivity, and a microphone port. As with the Game Boy Advance SP, the DS features a clamshell design, with the two screens aligned vertically on either side of the hinge.
The DS's lower screen is touch sensitive, designed to be pressed with a stylus, a user's finger or a special "thumb pad" (a small plastic pad attached to the console's wrist strap, which can be affixed to the thumb to simulate an analog stick). More traditional controls include four face buttons, two shoulder buttons, a D-pad, and "Start" and "Select" buttons. The console also features online capabilities via the Nintendo Wi-Fi Connection and ad-hoc wireless networking for multiplayer games with up to sixteen players. It is backwards-compatible with all Game Boy Advance games, but not games designed for the Game Boy or Game Boy Color.
In January 2006, Nintendo revealed an updated version of the DS: the Nintendo DS Lite (released on March 2, 2006, in Japan), with a smaller form factor (42% smaller and 21% lighter than the original Nintendo DS), a cleaner design, longer battery life, and brighter, higher-quality displays with adjustable brightness. It is also able to connect wirelessly with Nintendo's Wii console.
On October 2, 2008, Nintendo announced the Nintendo DSi, with larger, 3.25-inch screens and two integrated cameras. It has an SD card storage slot in place of the Game Boy Advance slot, plus internal flash memory for storing downloaded games. It was released on November 1, 2008, in Japan, in Australia on April 2, 2009, in Europe on April 3, 2009, and in North America on April 5, 2009. On October 29, 2009, Nintendo announced a larger version of the DSi, called the DSi XL, which was released on November 21, 2009.
As of December 31, 2009, the Nintendo DS, Nintendo DS Lite, and Nintendo DSi combined have sold 125.13 million units worldwide.
The GameKing is a handheld game console released by the Chinese company TimeTop in 2004. The first model, while original in design, owes a large debt to Nintendo's Game Boy Advance. The second model, the GameKing 2, is believed to have been inspired by Sony's PSP; it was also upgraded with a backlit screen, albeit with a distracting background transparency (which can be removed by opening up the console). A color model, the GameKing 3, apparently exists, but was only made for a brief time and was difficult to purchase outside Asia. Whether intentionally or not, the GameKing has the most primitive graphics of any handheld released since the Game Boy of 1989.
As many of the games have an "old school" simplicity, the device has developed a small cult following. The GameKing's speaker is quite loud, and the cartridges' sophisticated looping soundtracks (sampled from other sources) are seemingly at odds with its primitive graphics.
TimeTop made at least one additional device sometimes labeled as "GameKing", but while it seems to possess more advanced graphics, it is essentially an emulator that plays a handful of multi-carts (like the GB Station Light II). Outside Asia (especially China), however, the GameKing remains relatively unheard of, due to the enduring popularity of Japanese handhelds such as those manufactured by Nintendo and Sony.
The PlayStation Portable (officially abbreviated PSP) is a handheld game console manufactured and marketed by Sony Computer Entertainment. Development of the console was first announced during E3 2003, and it was unveiled on May 11, 2004, at a Sony press conference before E3 2004. The system was released in Japan on December 12, 2004, in North America on March 24, 2005, and in the PAL region on September 1, 2005.
The PlayStation Portable is the first handheld video game console to use an optical disc format, Universal Media Disc (UMD), for distribution of its games. UMD Video discs with movies and television shows were also released. The PSP utilized the Sony/SanDisk Memory Stick Pro Duo format as its primary storage medium. Other distinguishing features of the console include its large viewing screen, multi-media capabilities, and connectivity with the PlayStation 3, other PSPs, and the Internet.
Tiger's Gizmondo came out in the UK in March 2005 and was released in the U.S. in October 2005. It was designed to play music, movies and games, had a camera for taking and storing photos, offered GPS functions and Internet capabilities, and included a phone for sending text and multimedia messages. Email was promised at launch but was never delivered before the downfall of Gizmondo, and ultimately of Tiger Telematics, in early 2006. Users hoped that a second service pack would add this functionality, but the unreleased Service Pack B did not activate e-mail.
The GP2X is an open-source, Linux-based handheld video game console and media player created by GamePark Holdings of South Korea, designed for homebrew developers as well as commercial developers. It is commonly used to run emulators for game consoles such as Neo-Geo, Genesis, Master System, Game Gear, Amstrad CPC, Commodore 64, Nintendo Entertainment System, TurboGrafx-16, MAME and others.
A new version called the "F200" was released on October 30, 2007, and features a touchscreen, among other changes. It was followed by the GP2X Wiz (2009) and the GP2X Caanoo (2010).
The Dingoo A-320 is a micro-sized gaming handheld that resembles the Game Boy Micro and is open to game development. It also supports music playback, emulators (8-bit and 16-bit) and video playback with its own interface, much like the PSP, and includes an onboard radio and a recording program. It is currently available in two colors, white and black. Other similar products from the same manufacturer are the Dingoo A-330 (also known as Geimi), the Dingoo A-360, the Dingoo A-380 (available in pink, white and black) and the more recently released Dingoo A-320E.
The PSP Go is a version of the PlayStation Portable handheld game console manufactured by Sony. It was released on October 1, 2009, in American and European territories, and on November 1 in Japan. It was revealed prior to E3 2009 through Sony's Qore VOD service. Although its design is significantly different from other PSPs, it was not intended to replace the PSP 3000, which Sony continued to manufacture, sell, and support. On April 20, 2011, Sony announced that the PSP Go would be discontinued so that the company could concentrate on the PlayStation Vita. Sony later said that only the European and Japanese versions were being cut, and that the console would still be available in the US.
Unlike previous PSP models, the PSP Go does not feature a UMD drive, but instead has 16 GB of internal flash memory to store games, video, pictures, and other media. This can be extended by up to 32 GB with the use of a Memory Stick Micro (M2) flash card. Also unlike previous PSP models, the PSP Go's rechargeable battery is not removable or replaceable by the user. The unit is 43% lighter and 56% smaller than the original PSP-1000, and 16% lighter and 35% smaller than the PSP-3000. It has a 3.8" 480 × 272 LCD (compared to the larger 4.3" 480 × 272 pixel LCD on previous PSP models). The screen slides up to reveal the main controls. The overall shape and sliding mechanism are similar to that of Sony's mylo COM-2 internet device.
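Because the PSP Go keeps the 480 × 272 resolution of earlier models on a smaller 3.8-inch panel, its pixel density is correspondingly higher. A small illustrative calculation using the standard diagonal-PPI formula (the resulting ppi values are derived here, not figures from the source):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch, from the resolution and the screen diagonal."""
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(480, 272, 3.8)))  # ~145 ppi on the PSP Go
print(round(ppi(480, 272, 4.3)))  # ~128 ppi on earlier PSP models
```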
The Pandora is a handheld game console/UMPC/PDA hybrid designed to take advantage of existing open source software and to be a target for home-brew development. It runs a full distribution of Linux, and in functionality is like a small PC with gaming controls. It is developed by OpenPandora, which is made up of former distributors and community members of the GP32 and GP2X handhelds.
OpenPandora began taking pre-orders for one batch of 4000 devices in November 2008 and after manufacturing delays, began shipping to customers on May 21, 2010.
The FC-16 Go is a portable Super NES hardware clone manufactured by Yobo Gameware in 2009. It features a 3.5-inch display, two wireless controllers, and CRT cables that allow cartridges to be played on a television screen. Unlike other Super NES clone consoles, it has region tabs that only allow NTSC North American cartridges to be played. Later revisions feature stereo sound output, larger shoulder buttons, and a slightly re-arranged button, power, and A/V output layout.
The Nintendo 3DS is the successor to Nintendo's DS handheld. The autostereoscopic device can project stereoscopic three-dimensional effects without the active shutter or passive polarized glasses that most current 3D televisions require. The 3DS was released in Japan on February 26, 2011; in Europe on March 25, 2011; in North America on March 27, 2011; and in Australia on March 31, 2011. The system features backward compatibility with Nintendo DS series software, including Nintendo DSi software. It also features an online service called the Nintendo eShop, launched on June 6, 2011, in North America and June 7, 2011, in Europe and Japan, which allows owners to download games, demos, applications and information on upcoming film and game releases. On November 24, 2011, a limited edition "Legend of Zelda 25th Anniversary 3DS" was released, containing a unique Cosmo Black unit decorated with gold Legend of Zelda imagery, along with a copy of "The Legend of Zelda: Ocarina of Time 3D".
There are also other models including the Nintendo 2DS and the New Nintendo 3DS, the latter with a larger (XL/LL) variant, like the original Nintendo 3DS. The 2DS also has a successor, called the New Nintendo 2DS XL.
The Sony Ericsson Xperia PLAY is a handheld game console smartphone produced by Sony Ericsson under the Xperia smartphone brand. The device runs Android 2.3 Gingerbread, and is the first to be part of the PlayStation Certified program, which means that it can play PlayStation Suite games. The device is a horizontally sliding phone with its original form resembling the Xperia X10, while the slider below resembles the slider of the PSP Go. The slider features a D-pad on the left side, a set of the four standard PlayStation face buttons on the right, a long rectangular touchpad in the middle, start and select buttons on the bottom right corner, a menu button on the bottom left corner, and two shoulder buttons (L and R) on the back of the device. It is powered by a 1 GHz Qualcomm Snapdragon processor and a Qualcomm Adreno 205 GPU, and features a display measuring 4.0 inches (100 mm) (854 × 480), an 8-megapixel camera, 512 MB of RAM, 8 GB of internal storage, and a micro-USB connector. It supports microSD cards, versus the Memory Stick variants used in PSP consoles. The device was revealed officially for the first time in a Super Bowl ad on Sunday, February 6, 2011. On February 13, 2011, at Mobile World Congress (MWC) 2011, it was announced that the device would be shipping globally in March 2011, with a launch lineup of around 50 software titles.
The PlayStation Vita is the successor to Sony's PlayStation Portable (PSP) handheld series. It was released in Japan on December 17, 2011, and in Europe, Australia, and North and South America on February 22, 2012.
The handheld includes two analog sticks, a 5-inch (130 mm) OLED/LCD multi-touch capacitive touchscreen, and supports Bluetooth, Wi-Fi and optional 3G. Internally, the PS Vita features a quad-core ARM Cortex-A9 MPCore processor and a quad-core SGX543MP4+ graphics processing unit, as well as LiveArea software as its main user interface, which succeeds the XrossMediaBar.
The device is fully backwards-compatible with PlayStation Portable games digitally released on the PlayStation Network via the PlayStation Store. However, PSone Classics and PS2 titles were not compatible at the time of the initial public release in Japan. The Vita's dual analog sticks are supported in selected PSP games, and the graphics of PSP releases are up-scaled, with a smoothing filter to reduce pixelation.
On September 20, 2018, Sony announced at Tokyo Game Show 2018 that the Vita would be discontinued in 2019, ending its hardware production. Production of Vita hardware officially ended on March 1, 2019.
The Razer Switchblade was a prototype pocket-sized gaming device, similar in size to a Nintendo DSi XL, designed to run Windows 7. It featured a multi-touch LCD screen and an adaptive keyboard whose keys changed depending on the game the user was playing. It was also to feature a full mouse.
It was first unveiled on January 5, 2011, at the Consumer Electronics Show (CES), where it won the Best of CES 2011 People's Voice award. No release date was ever announced, and the project appears to have been suspended indefinitely.
Project Shield is a handheld system developed by Nvidia and announced at CES 2013. It runs Android 4.2 and uses the Nvidia Tegra 4 SoC. The hardware includes a 5-inch multi-touch screen with support for HD (720p) graphics. The console also allows the streaming of games running on a compatible desktop or laptop PC.
The Nvidia Shield Portable received a mixed reception from critics. Reviewers generally praised the performance of the device but criticized its cost and the lack of worthwhile games. Engadget's review noted the system's "extremely impressive PC gaming", but also that, due to its high price, the device was "a hard sell as a portable game console", especially when compared to similar handhelds on the market. CNET's Eric Franklin wrote in his review: "The Nvidia Shield is an extremely well made device, with performance that pretty much obliterates any mobile product before it; but like most new console launches, there is currently a lack of available games worth your time." Eurogamer's comprehensive review concluded: "In the here and now, the first-gen Shield Portable is a gloriously niche, luxury product - the most powerful Android system on the market by a clear stretch and possessing a unique link to PC gaming that's seriously impressive in beta form, and can only get better."
The Nintendo Switch is a hybrid console that can either be used in a handheld form, or inserted into a docking station attached to a television to play on a bigger screen. The Switch features two detachable wireless controllers, called Joy-Con, which can be used individually or attached to a grip to provide a traditional gamepad form. A handheld-only revision named Nintendo Switch Lite was released on September 20, 2019.
The Switch Lite had sold about 1.95 million units worldwide by September 30, 2019, only 10 days after its launch. Nintendo also announced a brand new color, coral, which came to the market on March 20, 2020.
|
https://en.wikipedia.org/wiki?curid=14199
|
Henry Bruce, 1st Baron Aberdare
Henry Austin Bruce, 1st Baron Aberdare (16 April 1815 – 25 February 1895) was a British Liberal Party politician, who served in government most notably as Home Secretary (1868–1873) and as Lord President of the Council.
Henry Bruce was born at Duffryn, Aberdare, Glamorganshire, the son of John Bruce, a Glamorganshire landowner, and his first wife Sarah, daughter of Reverend Hugh Williams Austin. John Bruce's original family name was Knight, but on coming of age in 1805 he assumed the name of Bruce: his mother, through whom he inherited the Duffryn estate, was the daughter of William Bruce, high sheriff of Glamorganshire.
Henry was educated from the age of twelve at the Bishop Gore School, Swansea (Swansea Grammar School). In 1837 he was called to the bar from Lincoln's Inn. Shortly after he had begun to practice, the discovery of coal beneath the Duffryn and other Aberdare Valley estates brought his family great wealth. From 1847 to 1854 Bruce was stipendiary magistrate for Merthyr Tydfil and Aberdare, resigning the position in the latter year, after entering parliament as Liberal member for Merthyr Tydfil.
Bruce was returned unopposed as MP for Merthyr Tydfil in December 1852, following the death of Sir John Guest. He did so with the enthusiastic support of the late member's political allies, notably the iron masters of Dowlais, and he was thereafter regarded by his political opponents, most notably in the Aberdare Valley, as their nominee. Even so, Bruce's parliamentary record demonstrated support for liberal policies, with the exception of the ballot. The electorate in the constituency at this time remained relatively small, excluding the vast majority of the working classes.
Significantly, however, Bruce's relationship with the miners of the Aberdare Valley, in particular, deteriorated as a result of the Aberdare Strike of 1857–8. In a speech to a large audience of miners at the Aberdare Market Hall, Bruce sought to strike a conciliatory tone in persuading the miners to return to work. In a second speech, however, he delivered a broadside against the trade union movement generally, referring to the violence engendered elsewhere as a result of strikes and to alleged examples of intimidation and violence in the immediate locality. The strike damaged his reputation and may well have contributed to his eventual election defeat ten years later. In 1855, Bruce was appointed a trustee of the Dowlais Iron Company and played a role in the further development of the iron industry.
In November 1862, after nearly ten years in Parliament, he became Under-Secretary of State for the Home Department, and held that office until April 1864. He became a Privy Councillor and a Charity Commissioner for England and Wales in 1864, when he was moved to be Vice-President of the Council of Education.
At the 1868 General Election, Merthyr Tydfil became a two-member constituency with a much-increased electorate as a result of the Second Reform Act of 1867. Since the formation of the constituency, Merthyr Tydfil had dominated representation as the vast majority of the electorate lived in the town and its vicinity, whereas there was a much lower number of electors in the neighbouring Aberdare Valley. During the 1850s and 1860s, however, the population of Aberdare grew rapidly, and the franchise changes in 1867 gave the vote to large numbers of miners in that valley. Amongst these new electors, Bruce remained unpopular as a result of his actions during the 1857-8 dispute. Initially, it appeared that the Aberdare iron master, Richard Fothergill, would be elected to the second seat alongside Bruce. However, the appearance of a third Liberal candidate, Henry Richard, a nonconformist radical popular in both Merthyr and Aberdare, left Bruce on the defensive and he was ultimately defeated, finishing in third place behind both Richard and Fothergill.
After losing his seat, Bruce was elected for Renfrewshire on 25 January 1869 and was made Home Secretary by William Ewart Gladstone. His tenure of this office was conspicuous for a reform of the licensing laws: he was responsible for the Licensing Act 1872, which made the magistrates the licensing authority, increased the penalties for misconduct in public-houses and shortened the hours for the sale of drink. In 1873 Bruce relinquished the home secretaryship, at Gladstone's request, to become Lord President of the Council, and was elevated to the peerage as Baron Aberdare, of Duffryn in the County of Glamorgan, on 23 August that year. As a Gladstonian Liberal, Aberdare had hoped for a more radical measure that would have retained existing licence holders for a further ten years while preventing any new applicants. The Act's unpopularity pricked his nonconformist conscience, for, like Gladstone himself, he had a strong leaning towards temperance. He had already pursued the 'moral improvement' of miners through regulations attempting to further ban boys from the pits. The Trade Union Act 1871 was another, more liberal, reform, giving further rights to unions and protection from malicious prosecution.
The defeat of the Liberal government in the following year terminated Lord Aberdare's official political life, and he subsequently devoted himself to social, educational and economic questions. Education became one of Lord Aberdare's main interests in later life. His interest had been shown by the speech on Welsh education which he had made on 5 May 1862. In 1880, he was appointed to chair the Departmental Committee on Intermediate and Higher Education in Wales and Monmouthshire, whose report ultimately led to the Welsh Intermediate Education Act of 1889. The report also stimulated the campaign for the provision of university education in Wales. In 1883, Lord Aberdare was elected the first president of the University College of South Wales and Monmouthshire. In his inaugural address he declared that the framework of Welsh education would not be complete until there was a University of Wales. The University was eventually founded in 1893 and Aberdare became its first chancellor.
In 1876 he was elected a Fellow of the Royal Society; from 1878 to 1891 he was president of the Royal Historical Society; and in 1881 he became president of both the Royal Geographical Society and the Girls' Day School Trust. In 1888 he headed the commission that established the Official Table of Drops, listing how far a person of a particular weight should be dropped when hanged for a capital offence (the only method of 'judicial execution' in the United Kingdom at that time) to ensure an instant and painless death by cleanly breaking the neck between the 2nd and 3rd vertebrae, an 'exacting science' eventually brought to perfection by Chief Executioner Albert Pierrepoint. Prisoners' health, clothing and discipline were a particular concern of his even at the end of his career, and in the Lords he spoke at some length on the prison rules system to the Home Affairs Committee chaired by Arthur Balfour. Aberdare had always maintained a healthy skepticism about the intemperate working classes: in 1878, urging greater vigilance against the vice of excessive drinking, he took evidence on the habitual imbibing of miners and railway colliers, and the committee sought special legislation based on a link between Sunday opening and absenteeism established in 1868. Aberdare had been interested in the plight of working-class drinkers since Gladstone had appointed him Home Secretary; the defeat by the Tory 'beerage' and the publicans of the Licensing Bill, which had been drafted to limit hours and protect the public, persuaded the convinced Anglican ever more firmly of the iniquities of the trade.
In 1882 he began a connection with West Africa which lasted the rest of his life, by accepting the chairmanship of the National African Company, formed by Sir George Goldie, which in 1886 received a charter under the title of the Royal Niger Company and in 1899 was taken over by the British government, its territories being constituted the protectorate of Nigeria. West African affairs, however, by no means exhausted Lord Aberdare's energies, and it was principally through his efforts that a charter was obtained in 1894 for the University College of South Wales and Monmouthshire, a constituent institution of the University of Wales and now Cardiff University. Lord Aberdare, who in 1885 was made a Knight Grand Cross of the Order of the Bath, presided over several Royal Commissions at different times.
Henry Bruce married firstly Annabella, daughter of Richard Beadon, of Clifton by Annabella A'Court, sister of 1st Baron Heytesbury, on 6 January 1846. They had one son and three daughters.
After her death on 28 July 1852, he married secondly, on 17 August 1854, Norah Creina Blanche, youngest daughter of Lt-Gen Sir William Napier, KCB, the historian of the Peninsular War, whose biography he edited, by Caroline Amelia, second daughter of Gen. the Hon Henry Edward Fox, son of the Earl of Ilchester. They had seven daughters and two sons.
Lord Aberdare died at his London home, 39 Princes Gardens, South Kensington, on 25 February 1895, aged 79, and was succeeded in the barony by his only son by his first marriage, Henry. He was survived by his wife, Lady Aberdare, born 1827, who died on 27 April 1897. She was a proponent of women's education and active in the establishment of Aberdare Hall in Cardiff.
Henry Austin Bruce is buried at Aberffrwd Cemetery in Mountain Ash, Wales. His large family plot is surrounded by a chain, and his grave is a simple Celtic cross with double plinth and kerb. The inscription reads: "To God the Judge of all and to the spirits of just men made perfect."
|
https://en.wikipedia.org/wiki?curid=14201
|
Halophile
Halophiles, named after the Greek word for "salt-loving", are extremophiles that thrive in high salt concentrations. While most halophiles are classified into the "Archaea" domain, there are also bacterial halophiles and some eukaryotic species, such as the alga "Dunaliella salina" and the fungus "Wallemia ichthyophaga". Some well-known species give off a red color, from carotenoid compounds and the retinal-based pigment bacteriorhodopsin. Halophiles can be found in water bodies with salt concentrations five times greater than that of the ocean, such as the Great Salt Lake in Utah, Owens Lake in California, the Dead Sea, and evaporation ponds. They are theorized to be a possible candidate for extremophiles living in the salty subsurface water ocean of Jupiter's moon Europa and other similar moons.
Halophiles are categorized by the extent of their halotolerance: slight, moderate, or extreme. Slight halophiles prefer 0.3 to 0.8 M salt (1.7 to 4.7%; seawater is 0.6 M, or 3.5%), moderate halophiles 0.8 to 3.4 M (4.7 to 20%), and extreme halophiles 3.4 to 5.1 M (20 to 30%). Halophiles require sodium chloride (salt) for growth, in contrast to halotolerant organisms, which do not require salt but can grow under saline conditions.
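As a quick check on these figures, molarity converts to percent weight per volume by multiplying by the molar mass of NaCl (58.44 g/mol) and dividing by ten (grams per litre to grams per 100 mL). The sketch below is illustrative only; the function names are ours, and the category boundaries are simply the ranges quoted above:

```python
# Convert NaCl molarity to % w/v and classify halotolerance.
NACL_MOLAR_MASS = 58.44  # g/mol

def molar_to_percent_wv(molarity: float) -> float:
    """M * 58.44 gives g/L; dividing by 10 gives g per 100 mL, i.e. % w/v."""
    return molarity * NACL_MOLAR_MASS / 10.0

def classify_halophile(molarity: float) -> str:
    """Categorize an organism by its preferred salt concentration."""
    if 0.3 <= molarity < 0.8:
        return "slight halophile"
    if 0.8 <= molarity < 3.4:
        return "moderate halophile"
    if 3.4 <= molarity <= 5.1:
        return "extreme halophile"
    return "outside the halophile ranges"

print(f"Seawater (0.6 M): {molar_to_percent_wv(0.6):.1f}% w/v")  # ~3.5%
print(classify_halophile(4.0))  # extreme halophile
```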
High salinity represents an extreme environment in which relatively few organisms have been able to adapt and survive. Most halophilic and all halotolerant organisms expend energy to exclude salt from their cytoplasm to avoid protein aggregation ('salting out'). To survive the high salinities, halophiles employ two differing strategies to prevent desiccation through osmotic movement of water out of their cytoplasm. Both strategies work by increasing the internal osmolarity of the cell. The first strategy is employed by the majority of halophilic bacteria, some archaea, yeasts, algae, and fungi; the organism accumulates organic compounds in the cytoplasm—osmoprotectants which are known as compatible solutes. These can be either synthesised or accumulated from the environment. The most common compatible solutes are neutral or zwitterionic, and include amino acids, sugars, polyols, betaines, and ectoines, as well as derivatives of some of these compounds.
The second, more radical adaptation involves selectively absorbing potassium (K+) ions into the cytoplasm. This adaptation is restricted to the moderately halophilic bacterial order "Halanaerobiales", the extremely halophilic archaeal family "Halobacteriaceae", and the extremely halophilic bacterium "Salinibacter ruber". The presence of this adaptation in three distinct evolutionary lineages suggests convergent evolution of this strategy, it being unlikely to be an ancient characteristic retained in only scattered groups or passed on through massive lateral gene transfer. The primary reason for this is the entire intracellular machinery (enzymes, structural proteins, etc.) must be adapted to high salt levels, whereas in the compatible solute adaptation, little or no adjustment is required to intracellular macromolecules; in fact, the compatible solutes often act as more general stress protectants, as well as just osmoprotectants.
Of particular note are the extreme halophiles or haloarchaea (often known as halobacteria), a group of archaea, which require at least a 2 M salt concentration and are usually found in saturated solutions (about 36% w/v salts). These are the primary inhabitants of salt lakes, inland seas, and evaporating ponds of seawater, such as the deep salterns, where they tint the water column and sediments bright colors. These species most likely perish if they are exposed to anything other than a very high-concentration, salt-conditioned environment. These prokaryotes require salt for growth. The high concentration of sodium chloride in their environment limits the availability of oxygen for respiration. Their cellular machinery is adapted to high salt concentrations by having charged amino acids on their surfaces, allowing the retention of water molecules around these components. They are heterotrophs that normally respire by aerobic means. Most halophiles are unable to survive outside their high-salt native environments. Many halophiles are so fragile that when they are placed in distilled water, they immediately lyse from the change in osmotic conditions.
Halophiles use a variety of energy sources and can be aerobic or anaerobic; anaerobic halophiles include phototrophic, fermentative, sulfate-reducing, homoacetogenic, and methanogenic species.
The Haloarchaea, and particularly the family Halobacteriaceae, are members of the domain "Archaea", and comprise the majority of the prokaryotic population in hypersaline environments. Currently, 15 recognised genera are in the family. The domain Bacteria (mainly "Salinibacter ruber") can comprise up to 25% of the prokaryotic community, but is more commonly a much lower percentage of the overall population. At times, the alga "Dunaliella salina" can also proliferate in this environment.
A comparatively wide range of taxa has been isolated from saltern crystalliser ponds, including members of these genera: "Haloferax", "Halogeometricum", "Halococcus", "Haloterrigena", "Halorubrum", "Haloarcula", and "Halobacterium". However, the viable counts in these cultivation studies have been small when compared to total counts, and the numerical significance of these isolates has been unclear. Only recently has it become possible to determine the identities and relative abundances of organisms in natural populations, typically using PCR-based strategies that target 16S small subunit ribosomal ribonucleic acid (16S rRNA) genes. While comparatively few studies of this type have been performed, results from these suggest that some of the most readily isolated and studied genera may not in fact be significant in the "in situ" community. This is seen in cases such as the genus "Haloarcula", which is estimated to make up less than 0.1% of the "in situ" community, but commonly appears in isolation studies.
Comparative genomic and proteomic analyses have shown that distinct molecular signatures exist for the environmental adaptation of halophiles. At the protein level, halophilic species are characterized by low hydrophobicity, an overrepresentation of acidic residues, an underrepresentation of Cys, lower propensities for helix formation, and higher propensities for coil structure. The cores of these proteins are less hydrophobic; DHFR, for example, was found to have narrower β-strands.
At the DNA level, the halophiles exhibit distinct dinucleotide and codon usage.
"Halobacteriaceae" is a family that includes a large part of halophilic archaea. The genus "Halobacterium" under it has a high tolerance for elevated levels of salinity. Some species of halobacteria have acidic proteins that resist the denaturing effects of salts. "Halococcus" is another genus of the family Halobacteriaceae.
Some hypersaline lakes are a habitat for numerous families of halophiles. For example, the Makgadikgadi Pans in Botswana form a vast, seasonal, high-salinity water body that supports halophilic species within the diatom genus "Nitzschia" in the family Bacillariaceae, as well as species within the genus "Lovenula" in the family Diaptomidae. Owens Lake in California also contains a large population of the halophilic archaeon "Halobacterium halobium".
"Wallemia ichthyophaga" is a basidiomycetous fungus, which requires at least 1.5 M sodium chloride for "in vitro" growth, and it thrives even in media saturated with salt. Obligate requirement for salt is an exception in fungi. Even species that can tolerate salt concentrations close to saturation (for example "Hortaea werneckii") in almost all cases grow well in standard microbiological media without the addition of salt.
The fermentation of salty foods (such as soy sauce, Chinese fermented beans, salted cod, salted anchovies, sauerkraut, etc.) often involves halophiles as either essential ingredients or accidental contaminants. One example is "Chromohalobacter beijerinckii", found in salted beans preserved in brine and in salted herring. "Tetragenococcus halophilus" is found in salted anchovies and soy sauce.
"Artemia" is a ubiquitous genus of small halophilic crustaceans living in salt lakes (such as Great Salt Lake) and solar salterns that can exist in water approaching the precipitation point of NaCl (340 g/L) and can withstand strong osmotic shocks due to its mitigating strategies for fluctuating salinity levels, such as its unique larval salt gland and osmoregulatory capacity.
North Ronaldsay sheep are a breed of sheep originating from Orkney, Scotland. They have limited access to freshwater sources on the island and their only food source is seaweed. They have adapted to handle salt concentrations that would kill other breeds of sheep.
|
https://en.wikipedia.org/wiki?curid=14204
|
Herbert A. Simon
Herbert Alexander Simon (June 15, 1916 – February 9, 2001) was an American economist, political scientist and cognitive psychologist whose primary research interest was decision-making within organizations; he is best known for the theories of "bounded rationality" and "satisficing". He received the Nobel Prize in Economics in 1978 and the Turing Award in 1975. His research was noted for its interdisciplinary nature, spanning the fields of cognitive science, computer science, public administration, management, and political science. He was at Carnegie Mellon University for most of his career, from 1949 to 2001.
Notably, Simon was among the pioneers of several modern-day scientific domains such as artificial intelligence, information processing, decision-making, problem-solving, organization theory, and complex systems. He was among the earliest to analyze the architecture of complexity and to propose a preferential attachment mechanism to explain power law distributions.
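Simon's preferential attachment mechanism is simple enough to sketch in a few lines. In the toy simulation below (the parameter names and the value of alpha are illustrative, not taken from Simon's papers), each new item founds a new class with probability alpha and otherwise joins an existing class with probability proportional to that class's current size, which produces the heavy-tailed, power-law-like size distribution he described:

```python
import random
from collections import Counter

def simon_process(n_items: int = 100_000, alpha: float = 0.1, seed: int = 0) -> Counter:
    """Simulate a Simon-style preferential attachment process.

    Picking a uniformly random earlier item and copying its class selects a
    class with probability proportional to its size (k members -> k chances).
    """
    rng = random.Random(seed)
    classes = [0]          # item index -> class id; one seed item in class 0
    next_class_id = 1
    for _ in range(n_items - 1):
        if rng.random() < alpha:
            classes.append(next_class_id)        # found a brand-new class
            next_class_id += 1
        else:
            classes.append(rng.choice(classes))  # preferential attachment
    return Counter(classes)

sizes = simon_process()
print(sizes.most_common(5))  # a few huge classes; the vast majority stay tiny
```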
Herbert Alexander Simon was born in Milwaukee, Wisconsin on June 15, 1916. Simon's father, Arthur Simon (1881–1948), was a Jewish electrical engineer who came to the United States from Germany in 1903 after earning his engineering degree at Technische Hochschule Darmstadt. An inventor, Arthur also was an independent patent attorney. Simon's mother, Edna Marguerite Merkel (1888-1969), was an accomplished pianist whose ancestors came from Prague and Cologne. Simon's European ancestors were piano makers, goldsmiths, and vintners. Like his father, Simon's mother also came from a family with Jewish, Lutheran, and Catholic backgrounds.
Simon attended Milwaukee Public Schools, where he developed an interest in science and established himself as an atheist. While attending middle school, Simon wrote a letter to the editor of the "Milwaukee Journal" defending the civil liberties of atheists. Unlike most children, Simon's family introduced him to the idea that human behavior could be studied scientifically; his mother's younger brother, Harold Merkel (1892–1922), who studied economics at the University of Wisconsin–Madison under John R. Commons, became one of his earliest influences. Through Harold's books on economics and psychology, Simon discovered social science. Among his earliest influences, Simon cited Norman Angell for his book "The Great Illusion" and Henry George for his book "Progress and Poverty". While attending high school, Simon joined the debate team, where he argued "from conviction, rather than cussedness" in favor of George's single tax.
In 1933, Simon entered the University of Chicago, and, following his early influences, decided to study social science and mathematics. Simon was interested in studying biology but chose not to pursue the field because of his "color-blindness and awkwardness in the laboratory". At an early age, Simon learned he was color blind and discovered the external world is not the same as the perceived world. While in college, Simon focused on political science and economics. Simon's most important mentor was Henry Schultz, an econometrician and mathematical economist. Simon received both his B.A. (1936) and his Ph.D. (1943) in political science from the University of Chicago, where he studied under Harold Lasswell, Nicolas Rashevsky, Rudolf Carnap, Henry Schultz, and Charles Edward Merriam. After enrolling in a course on "Measuring Municipal Governments," Simon became a research assistant for Clarence Ridley, and the two co-authored "Measuring Municipal Activities: A Survey of Suggested Criteria for Appraising Administration" in 1938. Simon's studies led him to the field of organizational decision-making, which became the subject of his doctoral dissertation.
After graduating with his undergraduate degree, Simon obtained a research assistantship in municipal administration which turned into a directorship at the University of California, Berkeley.
From 1942 to 1949, Simon was a professor of political science and also served as department chairman at Illinois Institute of Technology in Chicago. There, he began participating in the seminars held by the staff of the Cowles Commission who at that time included Trygve Haavelmo, Jacob Marschak, and Tjalling Koopmans. He thus began an in-depth study of economics in the area of institutionalism. Marschak brought Simon in to assist in the study he was currently undertaking with Sam Schurr of the "prospective economic effects of atomic energy".
From 1949 to 2001, Simon was on the faculty at Carnegie Mellon. In 1949, he became a professor of administration and chairman of the Department of Industrial Management at Carnegie Tech (later to become Carnegie Mellon University). Simon later also taught psychology and computer science at the same university, occasionally visiting other universities.
Seeking to replace the highly simplified classical approach to economic modeling, Simon became best known for his theory of corporate decision-making set out in his book "Administrative Behavior", in which he grounded his concepts in an approach that recognized the multiple factors contributing to decision making. His interest in organization and administration not only led him to serve three times as a university department chairman, but also saw him play a large part in the creation of the Economic Cooperation Administration in 1948, the administrative team that delivered Marshall Plan aid for the U.S. government, serve on President Lyndon Johnson's Science Advisory Committee, and work with the National Academy of Sciences. Simon made a great number of contributions to both economic analysis and applications. Because of this, his work can be found in a number of economic literary works, with contributions to areas such as mathematical economics (including the Hawkins–Simon theorem), human rationality, the behavioral study of firms, the theory of causal ordering, and the analysis of the parameter identification problem in econometrics.
"Administrative Behavior", first published in 1947, and updated across the years was based on Simon's doctoral dissertation. It served as the foundation for his life's work. The centerpiece of this book is the behavioral and cognitive processes of humans making rational choices, that is, decisions. By his definition, an operational administrative decision should be correct and efficient, and it must be practical to implement with a set of coordinated means.
Simon recognized that a theory of administration is largely a theory of human decision making, and as such must be based on both economics and on psychology. He states:
Contrary to the "homo economicus" stereotype, Simon argued that alternatives and consequences may be partly known, and means and ends imperfectly differentiated, incompletely related, or poorly detailed.
Simon defined the task of rational decision making as selecting the alternative that results in the most preferred set of all the possible consequences. The correctness of administrative decisions was thus measured by:
The task of choice was divided into three required steps:
Any given individual or organization attempting to implement this model in a real situation would be unable to comply with the three requirements. Simon argued that knowledge of all alternatives, or all consequences that follow from each alternative is impossible in many realistic cases.
Simon attempted to determine the techniques and/or behavioral processes that a person or organization could bring to bear to achieve approximately the best result given limits on rational decision making. Simon writes:
Simon therefore describes work in terms of an economic framework conditioned on human cognitive limitations: Economic man and Administrative man.
"Administrative Behavior" addresses a wide range of human behaviors, cognitive abilities, management techniques, personnel policies, training goals and procedures, specialized roles, criteria for evaluation of accuracy and efficiency, and all of the ramifications of communication processes. Simon is particularly interested in how these factors influence the making of decisions, both directly and indirectly.
Simon argued that the two outcomes of a choice require monitoring and that many members of the organization would be expected to focus on adequacy, but that administrative management must pay particular attention to the efficiency with which the desired result was obtained.
Simon followed Chester Barnard, who pointed out that "the decisions that an individual makes as a member of an organization are quite distinct from his personal decisions". Personal choices may determine whether an individual joins a particular organization, and they continue to be made in his or her extra-organizational private life. As a member of an organization, however, that individual makes decisions not in relation to personal needs and results, but in an impersonal sense as part of the organizational intent, purpose, and effect. Organizational inducements, rewards, and sanctions are all designed to form, strengthen, and maintain this identification.
Simon saw two universal elements of human social behavior as key to creating the possibility of organizational behavior in human individuals: authority (addressed in Chapter VII: The Role of Authority) and loyalties and identification (addressed in Chapter X: Loyalties, and Organizational Identification).
Authority is a well-studied, primary mark of organizational behavior, straightforwardly defined in the organizational context as the ability and right of an individual of higher rank to guide the decisions of an individual of lower rank. The actions, attitudes, and relationships of the dominant and subordinate individuals constitute components of role behavior that may vary widely in form, style, and content, but do not vary in the expectation of obedience by the one of superior status, and willingness to obey from the subordinate.
Loyalty was defined by Simon as the "process whereby the individual substitutes organizational objectives (service objectives or conservation objectives) for his own aims as the value-indices which determine his organizational decisions". This entailed evaluating alternative choices in terms of their consequences for the group rather than only for oneself or one's family.
Decisions can be complex admixtures of facts and values. Information about facts, especially empirically-proven facts or facts derived from specialized experience, are more easily transmitted in the exercise of authority than are the expressions of values. Simon is primarily interested in seeking identification of the individual employee with the organizational goals and values. Following Lasswell, he states that "a person identifies himself with a group when, in making a decision, he evaluates the several alternatives of choice in terms of their consequences for the specified group". A person may identify himself with any number of social, geographic, economic, racial, religious, familial, educational, gender, political, and sports groups. Indeed, the number and variety are unlimited. The fundamental problem for organizations is to recognize that personal and group identifications may either facilitate or obstruct correct decision making for the organization. A specific organization has to determine deliberately, and specify in appropriate detail and clear language, its own goals, objectives, means, ends, and values.
Simon has been critical of traditional economics' elementary understanding of decision-making, and argues it "is too quick to build an idealistic, unrealistic picture of the decision-making process and then prescribe on the basis of such unrealistic picture". His contributions to research in the area of administrative decision-making have become increasingly mainstream in the business community.
Herbert Simon rediscovered path diagrams, which had been invented by Sewall Wright around 1920 (Judea Pearl and Dana Mackenzie, "The Book of Why", p. 79).
Simon was a pioneer in the field of artificial intelligence, creating with Allen Newell the Logic Theory Machine (1956) and the General Problem Solver (GPS) (1957) programs. GPS may possibly be the first method developed for separating problem solving strategy from information about particular problems. Both programs were developed using the Information Processing Language (IPL) (1956) developed by Newell, Cliff Shaw, and Simon. Donald Knuth mentions the development of list processing in IPL, with the linked list originally called "NSS memory" for its inventors. In 1957, Simon predicted that computer chess would surpass human chess abilities within "ten years" when, in reality, that transition took about forty years.
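The linked list credited to IPL survives essentially unchanged in modern languages. A minimal sketch in Python follows (illustrative only; IPL itself was a far lower-level language, and none of these names come from IPL):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """One list cell: a value plus a pointer to the next cell (or None)."""
    value: object
    next: Optional["Node"] = None

def push(head: Optional[Node], value: object) -> Node:
    """Prepend a value in O(1) by allocating one new cell; returns the new head."""
    return Node(value, head)

# Build the list c -> b -> a, then walk it.
head = push(push(push(None, "a"), "b"), "c")
node = head
while node is not None:
    print(node.value)
    node = node.next
```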
In the early 1960s psychologist Ulric Neisser asserted that while machines are capable of replicating "cold cognition" behaviors such as reasoning, planning, perceiving, and deciding, they would never be able to replicate "hot cognition" behaviors such as pain, pleasure, desire, and other emotions. Simon responded to Neisser's views in 1963 by writing a paper on emotional cognition, which he updated in 1967 and published in "Psychological Review". Simon's work on emotional cognition was largely ignored by the artificial intelligence research community for several years, but subsequent work on emotions by Sloman and Picard helped refocus attention on Simon's paper and eventually, made it highly influential on the topic.
Simon also collaborated with James G. March on several works in organization theory.
With Allen Newell, Simon developed a theory for the simulation of human problem solving behavior using production rules. The study of human problem solving required new kinds of human measurements and, with Anders Ericsson, Simon developed the experimental technique of verbal protocol analysis. Simon was interested in the role of knowledge in expertise. He said that to become an expert on a topic required about ten years of experience and he and colleagues estimated that expertise was the result of learning roughly 50,000 chunks of information. A chess expert was said to have learned about 50,000 chunks or chess position patterns.
He was awarded the ACM Turing Award, along with Allen Newell, in 1975. "In joint scientific efforts extending over twenty years, initially in collaboration with J. C. (Cliff) Shaw at the RAND Corporation, and with numerous faculty and student colleagues at Carnegie Mellon University, they have made basic contributions to artificial intelligence, the psychology of human cognition, and list processing."
Simon was interested in how humans learn and, with Edward Feigenbaum, he developed the EPAM (Elementary Perceiver and Memorizer) theory, one of the first theories of learning to be implemented as a computer program. EPAM was able to explain a large number of phenomena in the field of verbal learning. Later versions of the model were applied to concept formation and the acquisition of expertise. With Fernand Gobet, he expanded the EPAM theory into the CHREST computational model. The theory explains how simple chunks of information form the building blocks of schemata, which are more complex structures. CHREST has been used predominantly to simulate aspects of chess expertise.
Simon has been credited for revolutionary changes in microeconomics. He is responsible for the concept of organizational decision-making as it is known today. He also was the first to discuss this concept in terms of uncertainty; i.e., it is impossible to have perfect and complete information at any given time to make a decision. While this notion was not entirely new, Simon is best known for its origination. It was in this area that he was awarded the Nobel Prize in 1978.
At the Cowles Commission, Simon's main goal was to link economic theory to mathematics and statistics. His main contributions were to the fields of general equilibrium and econometrics. He was greatly influenced by the marginalist debate that began in the 1930s. The popular work of the time argued that it was not apparent empirically that entrepreneurs needed to follow the marginalist principles of profit-maximization/cost-minimization in running organizations. The argument went on to note that profit maximization was not accomplished, in part, because of the lack of complete information. In decision-making, Simon believed that agents face uncertainty about the future and costs in acquiring information in the present. These factors limit the extent to which agents may make a fully rational decision; thus they possess only "bounded rationality" and must make decisions by "satisficing", or choosing that which might not be optimal, but which will make them happy enough. Bounded rationality is a central theme in behavioral economics. It is concerned with the ways in which the actual decision-making process influences decisions. Theories of bounded rationality relax one or more assumptions of standard expected utility theory.
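Satisficing is easy to state procedurally. The sketch below is a minimal illustration, not Simon's own formulation; the aspiration level and the search order are assumptions of the example. The agent examines alternatives one at a time and stops at the first whose payoff meets its aspiration level, rather than enumerating everything in search of the maximum:

```python
from typing import Callable, Iterable, Optional, TypeVar

T = TypeVar("T")

def satisfice(options: Iterable[T],
              utility: Callable[[T], float],
              aspiration: float) -> Optional[T]:
    """Return the first option that is 'good enough' (meets the aspiration level).

    Unlike maximizing, this need not enumerate all options, which is the point:
    under bounded rationality, search itself is costly.
    """
    for option in options:
        if utility(option) >= aspiration:
            return option
    return None  # aspiration unmet; a real agent might lower it and retry

# Example: accept the first price quote at or below 100 rather than the best one.
quotes = [112, 98, 91, 87, 104]
print(satisfice(quotes, utility=lambda p: -p, aspiration=-100))  # 98, not 87
```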
Further, Simon emphasized that psychologists invoke a "procedural" definition of rationality, whereas economists employ a "substantive" definition. Gustavo Barros argued that the procedural rationality concept does not have a significant presence in the economics field and has never had nearly as much weight as the concept of bounded rationality. However, in an earlier article, Bhargava (1997) noted the importance of Simon's arguments and emphasized that there are several applications of the "procedural" definition of rationality in econometric analyses of data on health. In particular, economists should employ "auxiliary assumptions" that reflect the knowledge in the relevant biomedical fields and guide the specification of econometric models for health outcomes.
Simon was also known for his research on industrial organization. He determined that the internal organization of firms and the external business decisions thereof, did not conform to the neoclassical theories of "rational" decision-making. Simon wrote many articles on the topic over the course of his life, mainly focusing on the issue of decision-making within the behavior of what he termed "bounded rationality". "Rational behavior, in economics, means that individuals maximize their utility function under the constraints they face (e.g., their budget constraint, limited choices, ...) in pursuit of their self-interest. This is reflected in the theory of subjective expected utility. The term, bounded rationality, is used to designate rational choice that takes into account the cognitive limitations of both knowledge and cognitive capacity. Bounded rationality is a central theme in behavioral economics. It is concerned with the ways in which the actual decision-making process influences decisions. Theories of bounded rationality relax one or more assumptions of standard expected utility theory".
Simon determined that the best way to study these areas was through computer simulations. As such, he developed an interest in computer science. Simon's main interests in computer science were in artificial intelligence, human–computer interaction, principles of the organization of humans and machines as information processing systems, the use of computers to study (by modeling) philosophical problems of the nature of intelligence and of epistemology, and the social implications of computer technology.
In his youth, Simon took an interest in land economics and Georgism, an idea known at the time as "single tax". The system is meant to redistribute unearned economic rent to the public and improve land use. In 1979, Simon still maintained these ideas and argued that land value tax should replace taxes on wages.
Some of Simon's economic research was directed toward understanding technological change in general and the information processing revolution in particular.
Simon's work has strongly influenced John Mighton, developer of a program that has achieved significant success in improving mathematics performance among elementary and high school students. Mighton cites a 2000 paper by Simon and two coauthors that counters arguments by French mathematics educator, Guy Brousseau, and others suggesting that excessive practice hampers children's understanding:
He received many top-level honors in life, including becoming a fellow of the American Academy of Arts and Sciences in 1959; election as a Member of the National Academy of Sciences in 1967; APA Award for Distinguished Scientific Contributions to Psychology (1969); the ACM's Turing Award for making "basic contributions to artificial intelligence, the psychology of human cognition, and list processing" (1975); the Nobel Memorial Prize in Economics "for his pioneering research into the decision-making process within economic organizations" (1978); the National Medal of Science (1986); the APA's Award for Outstanding Lifetime Contributions to Psychology (1993); ACM fellow (1994); and IJCAI Award for Research Excellence (1995).
Simon was a prolific writer and authored 27 books and almost a thousand papers. As of 2016, Simon was the most cited person in artificial intelligence and cognitive psychology on Google Scholar. With almost a thousand highly cited publications, he was one of the most influential social scientists of the twentieth century.
Simon married Dorothea Pye in 1938; their marriage lasted 63 years, until his death, and they had three children, Katherine, Peter, and Barbara. In January 2001, Simon underwent surgery at UPMC Presbyterian to remove a cancerous tumor in his abdomen. Although the surgery was successful, he later succumbed to the complications that followed. His wife died in 2002.
From 1950 to 1955, Simon studied mathematical economics and during this time, together with David Hawkins, discovered and proved the Hawkins–Simon theorem on the "conditions for the existence of positive solution vectors for input-output matrices". He also developed theorems on near-decomposability and aggregation. Having begun to apply these theorems to organizations, by 1954 Simon determined that the best way to study problem-solving was to simulate it with computer programs, which led to his interest in computer simulation of human cognition. He was among the first members of the Society for General Systems Research, founded during the 1950s.
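The Hawkins–Simon theorem has a compact standard statement (paraphrased here in textbook form, not quoted from the original paper). For an input-output economy with a non-negative coefficient matrix A, the system (I - A)x = d has a non-negative output vector x for every non-negative final demand d exactly when all leading principal minors of I - A are positive:

```latex
% Hawkins–Simon condition, A >= 0 elementwise, k = 1, ..., n:
(I - A)\,x = d \ \text{ has } x \ge 0 \ \text{ for every } d \ge 0
\iff
\det\!\begin{pmatrix}
1 - a_{11} & -a_{12}    & \cdots & -a_{1k} \\
-a_{21}    & 1 - a_{22} & \cdots & -a_{2k} \\
\vdots     &            & \ddots & \vdots  \\
-a_{k1}    & -a_{k2}    & \cdots & 1 - a_{kk}
\end{pmatrix} > 0
\quad \text{for all } k = 1, \dots, n .
```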
Simon had a keen interest in the arts and was a pianist. He was a friend of Robert Lepper and Richard Rappaport, who painted Simon's commissioned portrait at Carnegie Mellon University. He was also a keen mountain climber. As a testament to his wide interests, he at one point taught an undergraduate course on the French Revolution.
|
https://en.wikipedia.org/wiki?curid=14205
|
Hematite
Hematite, also spelled haematite, is a common iron oxide with the formula Fe2O3 and is widespread in rocks and soils. Hematite crystallizes in the rhombohedral lattice system, and it has the same crystal structure as ilmenite and corundum. Hematite and ilmenite form a complete solid solution at high temperatures.
Hematite is colored black to steel or silver-gray, brown to reddish-brown, or red. It is mined as the main ore of iron. Varieties include "kidney ore", "martite" (pseudomorphs after magnetite), "iron rose" and "specularite" (specular hematite). While these forms vary, they all have a rust-red streak. Hematite is harder than pure iron, but much more brittle. Maghemite is a hematite- and magnetite-related oxide mineral.
Large deposits of hematite are found in banded iron formations. Gray hematite is typically found in places that can have still, standing water or mineral hot springs, such as those in Yellowstone National Park in North America. The mineral can precipitate out of water and collect in layers at the bottom of a lake, spring, or other standing water. Hematite can also occur without water, usually as the result of volcanic activity.
Clay-sized hematite crystals can also occur as a secondary mineral formed by weathering processes in soil, and along with other iron oxides or oxyhydroxides such as goethite, is responsible for the red color of many tropical, ancient, or otherwise highly weathered soils.
The name hematite is derived from the Greek word for blood, "haima", due to the red coloration found in some varieties of hematite. The color of hematite lends itself to use as a pigment. The English name of the stone is derived from Middle French "hématite pierre", which was imported from Latin "lapis haematites" in the 15th century, which in turn originated from Ancient Greek "haimatitēs lithos" ("blood-red stone").
Ochre is a clay that is colored by varying amounts of hematite, varying between 20% and 70%. Red ochre contains unhydrated hematite, whereas yellow ochre contains hydrated hematite (Fe2O3 · H2O). The principal use of ochre is for tinting with a permanent color.
Writing in red chalk made of this mineral was one of the earliest in human history. The powdery mineral was first used 164,000 years ago by the people of Pinnacle Point, possibly for social purposes. Hematite residues are also found in graves from 80,000 years ago. Red chalk mines dating from 5000 BC, belonging to the Linear Pottery culture of the Upper Rhine, have been found near Rydno in Poland and Lovas in Hungary.
Rich deposits of hematite have been found on the island of Elba that have been mined since the time of the Etruscans.
Hematite is an antiferromagnetic material below the Morin transition, a canted antiferromagnet or weakly ferromagnetic material between the Morin transition and its Néel temperature, and paramagnetic above that.
The magnetic structure of α-hematite was the subject of considerable discussion and debate during the 1950s, as it appeared to be ferromagnetic with a high Curie temperature but an extremely small magnetic moment (0.002 Bohr magnetons). Adding to the surprise was a transition, on cooling, to a phase with no net magnetic moment. It was shown that the system is essentially antiferromagnetic, but that the low symmetry of the cation sites allows spin–orbit coupling to cause canting of the moments when they are in the plane perpendicular to the "c" axis. The disappearance of the moment on cooling through this transition is caused by a change in the anisotropy which causes the moments to align along the "c" axis; in this configuration, spin canting does not reduce the energy. The magnetic properties of bulk hematite differ from those of its nanoscale counterparts. For example, the Morin transition temperature of hematite decreases with decreasing particle size. The suppression of this transition has been observed in hematite nanoparticles and is attributed to the presence of impurities, water molecules and defects in the crystal lattice. Hematite is part of a complex solid-solution oxyhydroxide system having various contents of water, hydroxyl groups and vacancy substitutions that affect the mineral's magnetic and crystal chemical properties. Two other end-members are referred to as protohematite and hydrohematite.
Enhanced magnetic coercivities for hematite have been achieved by dry-heating a two-line ferrihydrite precursor prepared from solution. The resulting hematite exhibited temperature-dependent magnetic coercivity. The origin of these high coercivity values has been interpreted as a consequence of the subparticle structure induced by the different particle and crystallite size growth rates at increasing annealing temperature. These differences in growth rates translate into a progressive development of a subparticle structure at the nanoscale. At lower temperatures (350–600 °C), single particles crystallize; at higher temperatures (600–1000 °C), the growth of crystalline aggregates with a subparticle structure is favored.
Hematite is present in the waste tailings of iron mines. A recently developed process, magnetation, uses magnets to glean waste hematite from old mine tailings in Minnesota's vast Mesabi Range iron district. Falu red is a pigment used in traditional Swedish house paints. Originally, it was made from tailings of the Falu mine.
The spectral signature of hematite was seen on the planet Mars by the infrared spectrometer on the NASA "Mars Global Surveyor" and "2001 Mars Odyssey" spacecraft in orbit around Mars. The mineral was seen in abundance at two sites on the planet, the Terra Meridiani site, near the Martian equator at 0° longitude, and the Aram Chaos site near the Valles Marineris. Several other sites also showed hematite, such as Aureum Chaos. Because terrestrial hematite is typically a mineral formed in aqueous environments or by aqueous alteration, this detection was scientifically interesting enough that the second of the two Mars Exploration Rovers was sent to a site in the Terra Meridiani region designated Meridiani Planum. In-situ investigations by the "Opportunity" rover showed a significant amount of hematite, much of it in the form of small spherules that were informally named "blueberries" by the science team. Analysis indicates that these spherules are apparently concretions formed from a water solution.
"Knowing just how the hematite on Mars was formed will help us characterize the past environment and determine whether that environment was favorable for life".
Hematite's popularity in jewelry rose in England during the Victorian era, due to its use in mourning jewelry. Certain types of hematite- or iron-oxide-rich clay, especially Armenian bole, have been used in gilding. Hematite is also used in art such as in the creation of intaglio engraved gems. Hematine is a synthetic material sold as "magnetic hematite".
|
https://en.wikipedia.org/wiki?curid=14207
|
Holocene extinction
The Holocene extinction, otherwise referred to as the sixth mass extinction or Anthropocene extinction, is an ongoing extinction event of species during the present Holocene epoch (with the more recent time sometimes called Anthropocene) as a result of human activity. The included extinctions span numerous families of plants and animals, including mammals, birds, reptiles, amphibians, fishes and invertebrates. With widespread degradation of highly biodiverse habitats such as coral reefs and rainforests, as well as other areas, the vast majority of these extinctions are thought to be "undocumented", as the species are undiscovered at the time of their extinction, or no one has yet discovered their extinction. The current rate of extinction of species is estimated at 100 to 1,000 times higher than natural background rates.
The Holocene extinction includes the disappearance of large land animals known as megafauna, starting at the end of the last glacial period. Megafauna outside of the African mainland (thus excluding Madagascar), which did not evolve alongside humans, proved highly sensitive to the introduction of new predation, and many died out shortly after early humans began spreading and hunting across the Earth (many African species have also gone extinct in the Holocene, but – with few exceptions – megafauna of the mainland was largely unaffected until a few hundred years ago). These extinctions, occurring near the Pleistocene–Holocene boundary, are sometimes referred to as the Quaternary extinction event.
The most popular theory is that human overhunting of species added to existing stress conditions, as the extinctions coincided with human emergence. Although there is debate regarding how much human predation affected their decline, certain population declines have been directly correlated with human activity, such as the extinction events of New Zealand and Hawaii. Aside from humans, climate change may have been a driving factor in the megafaunal extinctions, especially at the end of the Pleistocene.
Ecologically, humanity has been noted as an unprecedented "global superpredator" that consistently preys on the adults of other apex predators, and has worldwide effects on food webs. There have been extinctions of species on every land mass and in every ocean: there are many famous examples within Africa, Asia, Europe, Australia, North and South America, and on smaller islands. Overall, the Holocene extinction can be linked to the human impact on the environment. It continues into the 21st century, with meat consumption, overfishing, ocean acidification, and the decline in amphibian populations being a few broad examples of a cosmopolitan decline in biodiversity. Human population growth and increasing per capita consumption are considered to be its primary drivers.
The 2019 "Global Assessment Report on Biodiversity and Ecosystem Services", published by the United Nations' Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services, posits that roughly one million species of plants and animals face extinction within decades as the result of human actions.
The Holocene extinction is also known as the "sixth extinction", as it is possibly the sixth mass extinction event, after the Ordovician–Silurian extinction events, the Late Devonian extinction, the Permian–Triassic extinction event, the Triassic–Jurassic extinction event, and the Cretaceous–Paleogene extinction event. Mass extinctions are characterized by the loss of at least 75% of species within a geologically short period of time. There is no general agreement on where the Holocene, or anthropogenic, extinction begins and where the Quaternary extinction event (which includes the climate change that ended the last ice age) ends, or whether they should be considered separate events at all. Some have suggested that anthropogenic extinctions may have begun as early as when the first modern humans spread out of Africa between 200,000 and 100,000 years ago; this is supported by rapid megafaunal extinction following recent human colonisation in Australia, New Zealand and Madagascar, as might be expected when any large, adaptable predator (invasive species) moves into a new ecosystem. In many cases, it is suggested that even minimal hunting pressure was enough to wipe out large fauna, particularly on geographically isolated islands. Only during the most recent parts of the extinction have plants also suffered large losses.
In "The Future of Life" (2002), Edward Osborne Wilson of Harvard calculated that, if the current rate of human disruption of the biosphere continues, one-half of Earth's higher lifeforms will be extinct by 2100. A 1998 poll conducted by the American Museum of Natural History found that 70% of biologists acknowledge an ongoing anthropogenic extinction event. At present, the rate of extinction of species is estimated at 100 to 1,000 times higher than the background extinction rate, the historically typical rate of extinction (in terms of the natural evolution of the planet); also, the current rate of extinction is 10 to 100 times higher than in any of the previous mass extinctions in the history of Earth. One scientist estimates the current extinction rate may be 10,000 times the background extinction rate, although most scientists predict a much lower extinction rate than this outlying estimate. Theoretical ecologist Stuart Pimm stated that the extinction rate for plants is 100 times higher than normal.
In a pair of studies published in 2015, extrapolation from observed extinction of Hawaiian snails led to the conclusion that 7% of all species on Earth may have been lost already.
There is widespread consensus among scientists that human activity is accelerating the extinction of many animal species through the destruction of habitats, the consumption of animals as resources, and the elimination of species that humans view as threats or competitors. But some contend that this biotic destruction has yet to reach the level of the previous five mass extinctions. Stuart Pimm, for example, asserts that the sixth mass extinction "is something that hasn't happened yet – we are on the edge of it." In November 2017, a statement, titled "World Scientists’ Warning to Humanity: A Second Notice", led by eight authors and signed by 15,364 scientists from 184 countries asserted that, among other things, "we have unleashed a mass extinction event, the sixth in roughly 540 million years, wherein many current life forms could be annihilated or at least committed to extinction by the end of this century."
The abundance of species extinctions considered anthropogenic, or due to human activity, has sometimes (especially when referring to hypothesized future events) been collectively called the "Anthropocene extinction". "Anthropocene" is a term introduced in 2000. Some now postulate that a new geological epoch has begun, with the most abrupt and widespread extinction of species since the Cretaceous–Paleogene extinction event 66 million years ago.
The term "anthropocene" is being used more frequently by scientists, and some commentators may refer to the current and projected future extinctions as part of a longer Holocene extinction. The Holocene–Anthropocene boundary is contested, with some commentators asserting significant human influence on climate for much of what is normally regarded as the Holocene Epoch. Other commentators place the Holocene–Anthropocene boundary at the industrial revolution and also say that "[f]ormal adoption of this term in the near future will largely depend on its utility, particularly to earth scientists working on late Holocene successions."
It has been suggested that human activity has made the period starting from the mid-20th century different enough from the rest of the Holocene to constitute a new geological epoch, known as the Anthropocene, a term which was considered for inclusion in the timeline of Earth's history by the International Commission on Stratigraphy in 2016. To tie the extinction event to human activity, scientists must determine exactly when anthropogenic greenhouse gas emissions began to measurably alter natural atmospheric levels on a global scale, and when these alterations caused changes to global climate. Using chemical proxies from Antarctic ice cores, researchers have estimated the fluctuations of carbon dioxide (CO2) and methane (CH4) in the Earth's atmosphere during the late Pleistocene and Holocene epochs. These estimates generally indicate that the peak of the Anthropocene occurred within the previous two centuries: typically beginning with the Industrial Revolution, when the highest greenhouse gas levels were recorded.
The Holocene extinction is mainly caused by human activities. Extinction of animals, plants, and other organisms caused by human actions may go as far back as the late Pleistocene, over 12,000 years ago. There is a correlation between megafaunal extinction and the arrival of humans, and human population growth and per-capita consumption growth, prominently in the past two centuries, are regarded as the underlying causes of extinction.
Human civilization was founded on and grew from agriculture. The more land used for farming, the greater the population a civilization could sustain, and the subsequent spread of farming led to widespread habitat conversion.
Habitat destruction by humans, including oceanic devastation through overfishing and contamination, and the modification and destruction of vast tracts of land and river systems around the world to meet solely human-centered ends (with 13 percent of Earth's ice-free land surface now used as row-crop agricultural sites, 26 percent as pastures, and 4 percent as urban-industrial areas), has replaced the original local ecosystems. The sustained conversion of biodiversity-rich forests and wetlands into poorer fields and pastures (of lesser carrying capacity for wild species) over the last 10,000 years has considerably reduced the Earth's carrying capacity for wild birds, among other organisms, in both population size and species count.
Other, related human causes of the extinction event include deforestation, hunting, pollution, the introduction in various regions of non-native species, and the widespread transmission of infectious diseases spread through livestock and crops. Humans both create and destroy crop cultivar and domesticated animal varieties. Advances in transportation and industrial farming have led to monoculture and the extinction of many cultivars. The use of certain plants and animals for food has also resulted in their extinction, including silphium and the passenger pigeon.
Some scholars assert that the emergence of capitalism as the dominant economic system has accelerated ecological exploitation and destruction, and has also exacerbated mass species extinction. CUNY professor David Harvey, for example, posits that the neoliberal era "happens to be the era of the fastest mass extinction of species in the Earth's recent history".
Megafauna were once found on every continent and on large islands such as New Zealand and Madagascar, but are now almost exclusively found in Africa; Australia and the islands just mentioned, by contrast, experienced population crashes and trophic cascades shortly after the arrival of the earliest human settlers. It has been suggested that the African megafauna survived because they evolved alongside humans. The timing of South American megafaunal extinction appears to precede human arrival, although the possibility that human activity at the time impacted the global climate enough to cause such an extinction has been suggested.
It has been noted, in the face of such evidence, that humans are unique in ecology as an unprecedented "global superpredator", regularly preying on large numbers of fully grown terrestrial and marine apex predators, and with a great deal of influence over food webs and climatic systems worldwide. Although significant debate exists as to how much human predation and indirect effects contributed to prehistoric extinctions, certain population crashes have been directly correlated with human arrival. A 2018 study published in "PNAS" found that since the dawn of human civilization, 83% of wild mammals, 80% of marine mammals, 50% of plants and 15% of fish have vanished. Currently, livestock make up 60% of the biomass of all mammals on Earth, followed by humans (36%) and wild mammals (4%). As for birds, 70% are domesticated, such as poultry, whereas only 30% are wild.
Recent investigations of hunter-gatherer landscape burning have major implications for the current debate about the timing of the Anthropocene and the role that humans may have played in the production of greenhouse gases prior to the Industrial Revolution. Studies of early hunter-gatherers raise questions about the current use of population size or density as a proxy for the amount of land clearance and anthropogenic burning that took place in pre-industrial times. Scientists have questioned the correlation between population size and early territorial alterations. Ruddiman and Ellis' 2009 research paper makes the case that early farmers used more land per capita than growers later in the Holocene, who intensified their labor to produce more food per unit of area (thus, per laborer); it argues that rice agriculture implemented thousands of years ago by relatively small populations created significant environmental impacts through large-scale deforestation.
While a number of human-derived factors are recognized as contributing to rising atmospheric concentrations of CH4 (methane) and CO2 (carbon dioxide), deforestation and the territorial clearance practices associated with agricultural development may have contributed most to these concentrations globally. Scientists employing a variety of archaeological and paleoecological data argue that the processes contributing to substantial human modification of the environment began many thousands of years ago on a global scale, and thus did not originate with the Industrial Revolution. In 2003, palaeoclimatologist William Ruddiman advanced the then-uncommon hypothesis that in the early Holocene, 11,000 years ago, atmospheric carbon dioxide and methane levels fluctuated in a pattern different from that of the Pleistocene epoch before it. He argued that the pattern of significant CO2 decline during the last ice age of the Pleistocene contrasts with the Holocene, in which CO2 rose dramatically around 8,000 years ago and CH4 about 3,000 years after that. This contrast implies that the rise of greenhouse gases in the atmosphere was caused by the growth of human agriculture during the Holocene, such as the anthropogenic expansion of land use and irrigation.
Human arrival in the Caribbean around 6,000 years ago is correlated with the extinction of many species. Examples include many different genera of ground and arboreal sloths across all islands. These sloths were generally smaller than those found on the South American continent. "Megalocnus" were the largest genus at up to , "Acratocnus" were medium-sized relatives of modern two-toed sloths endemic to Cuba, "Imagocnus" also of Cuba, "Neocnus" and many others.
Recent research, based on archaeological and paleontological digs on 70 different Pacific islands, has shown that numerous species became extinct as people moved across the Pacific, starting 30,000 years ago in the Bismarck Archipelago and Solomon Islands. It is currently estimated that some 2,000 Pacific bird species have gone extinct since the arrival of humans, representing a 20% drop in the biodiversity of birds worldwide.
The first settlers are thought to have arrived in the Hawaiian Islands between 300 and 800 CE, with Europeans arriving in the 16th century. Hawaii is notable for its endemism of plants, birds, insects, mollusks and fish; 30% of its organisms are endemic. Many of its species are endangered or have gone extinct, primarily due to accidentally introduced species and livestock grazing. Over 40% of its bird species have gone extinct, and it is the location of 75% of extinctions in the United States. Extinction has increased in Hawaii over the last 200 years and is relatively well documented, with extinctions among native snails used as estimates for global extinction rates.
Australia was once home to a large assemblage of megafauna, with many parallels to those found on the African continent today. Australia's fauna is characterised by primarily marsupial mammals, and many reptiles and birds, all existing as giant forms until recently. Humans arrived on the continent very early, about 50,000 years ago. The extent to which human arrival contributed is controversial; the climatic drying of Australia 40,000–60,000 years ago was an unlikely cause, as it was less severe in speed or magnitude than previous regional climate changes that had failed to kill off megafauna. Extinctions in Australia have continued from original settlement until today, in both plants and animals, while many more animals and plants have declined or are endangered.
Due to the older timeframe and the soil chemistry on the continent, very little subfossil preservation evidence exists relative to elsewhere. However, all genera weighing over 100 kilograms, and six of seven genera weighing between 45 and 100 kilograms, went extinct continent-wide around 46,400 years ago (4,000 years after human arrival). This, together with the fact that megafauna survived until a later date on the island of Tasmania following the establishment of a land bridge, suggests direct hunting or anthropogenic ecosystem disruption such as fire-stick farming as likely causes. The first evidence of direct human predation leading to extinction in Australia was published in 2016.
Within 500 years of the arrival of humans between 2,500 and 2,000 years ago, nearly all of Madagascar's distinct, endemic and geographically isolated megafauna became extinct. The largest animals, of more than , went extinct very shortly after the first human arrival, with large and medium-sized species dying out after prolonged hunting pressure from an expanding human population moving into more remote regions of the island around 1,000 years ago. Smaller fauna experienced initial increases due to decreased competition, and then subsequent declines over the last 500 years. All fauna weighing over died out. The primary reasons for this are human hunting and habitat loss from early aridification, both of which persist and threaten Madagascar's remaining taxa today.
The eight or more species of elephant birds, giant flightless ratites in the genera "Aepyornis", "Vorombe", and "Mullerornis", went extinct from over-hunting, as did 17 species of lemur, known as giant subfossil lemurs. Some of these lemurs typically weighed over , and fossils have provided evidence of human butchery on many species.
New Zealand is characterised by its geographic isolation and island biogeography, having been separated from mainland Australia for 80 million years. It was the last large land mass to be colonised by humans. The arrival of Polynesian settlers around the 12th century resulted in the extinction of all of the islands' megafaunal birds within several hundred years. The last moa, large flightless ratites, became extinct within 200 years of the arrival of human settlers. The Polynesians also introduced the Polynesian rat. This may have put some pressure on other birds, but at the time of early European contact (18th century) and colonisation (19th century) the bird life was still prolific. With them, the Europeans brought ship rats, possums, cats and mustelids, which decimated native bird life, some of which had evolved flightlessness and ground-nesting habits while others had no defensive behaviors, having had no endemic mammalian predators. The kakapo, the world's biggest parrot, which is flightless, now exists only in managed breeding sanctuaries. New Zealand's national emblem, the kiwi, is on the endangered bird list.
There has been a debate as to the extent to which the disappearance of megafauna at the end of the last glacial period can be attributed to human activities by hunting, or even by slaughter of prey populations. Discoveries at Monte Verde in South America and at Meadowcroft Rock Shelter in Pennsylvania have caused a controversy regarding the Clovis culture. There would likely have been human settlements prior to the Clovis culture, and the history of humans in the Americas may extend back many thousands of years before it. The amount of correlation between human arrival and megafauna extinction is still debated: for example, on Wrangel Island in Siberia the extinction of dwarf woolly mammoths (approximately 2000 BCE) did not coincide with the arrival of humans, nor did megafaunal mass extinction on the South American continent, although it has been suggested that climate changes induced by anthropogenic effects elsewhere in the world may have contributed.
Comparisons are sometimes made between recent extinctions (approximately since the industrial revolution) and the Pleistocene extinction near the end of the last glacial period. The latter is exemplified by the extinction of large herbivores such as the woolly mammoth and the carnivores that preyed on them. Humans of this era actively hunted the mammoth and the mastodon, but it is not known if this hunting was the cause of the subsequent massive ecological changes, widespread extinctions and climate changes.
The ecosystems encountered by the first Americans had not been exposed to human interaction, and may have been far less resilient to human-made changes than the ecosystems encountered by industrial-era humans. Therefore, the actions of the Clovis people, despite seeming insignificant by today's standards, could indeed have had a profound effect on ecosystems and wildlife that were entirely unused to human influence.
Africa experienced the smallest decline in megafauna compared to the other continents, presumably because Afroeurasian megafauna evolved alongside humans and thus developed a healthy fear of them, unlike the comparatively tame animals of other continents. Unlike on other continents, the megafauna of Eurasia went extinct over a relatively long period of time, possibly due to climate fluctuations fragmenting and decreasing populations, leaving them vulnerable to over-exploitation, as with the steppe bison ("Bison priscus"). The warming of the Arctic region caused the rapid decline of grasslands, which had a negative effect on the grazing megafauna of Eurasia. Most of what was once mammoth steppe has been converted to mire, rendering the environment incapable of supporting megafauna, notably the woolly mammoth.
One of the main theories for the extinction is climate change: a change in climate near the end of the late Pleistocene stressed the megafauna to the point of extinction. Some scientists favor abrupt climate change as the catalyst for the extinction of the megafauna at the end of the Pleistocene, while many others believe increased hunting by early modern humans also played a part; still others suggest that the two interacted. However, the annual mean temperature of the current interglacial period over the last 10,000 years is no higher than that of previous interglacial periods, yet some of the same megafauna survived similar temperature increases. In the Americas, a controversial explanation for the shift in climate is presented under the Younger Dryas impact hypothesis, which states that the impact of comets cooled global temperatures.
Megafauna play a significant role in the lateral transport of mineral nutrients in an ecosystem, tending to translocate them from areas of high to those of lower abundance. They do so by their movement between the time they consume the nutrient and the time they release it through elimination (or, to a much lesser extent, through decomposition after death). In South America's Amazon Basin, it is estimated that such lateral diffusion was reduced over 98% following the megafaunal extinctions that occurred roughly 12,500 years ago. Given that phosphorus availability is thought to limit productivity in much of the region, the decrease in its transport from the western part of the basin and from floodplains (both of which derive their supply from the uplift of the Andes) to other areas is thought to have significantly impacted the region's ecology, and the effects may not yet have reached their limits. The extinction of the mammoths allowed grasslands they had maintained through grazing habits to become birch forests. The new forest and the resulting forest fires may have induced climate change. Such disappearances might be the result of the proliferation of modern humans; some recent studies favor this theory.
Large populations of megaherbivores have the potential to contribute greatly to the atmospheric concentration of methane, which is an important greenhouse gas. Modern ruminant herbivores produce methane as a byproduct of foregut fermentation in digestion, and release it through belching or flatulence. Today, around 20% of annual methane emissions come from livestock methane release. In the Mesozoic, it has been estimated that sauropods could have emitted 520 million tons of methane to the atmosphere annually, contributing to the warmer climate of the time (up to 10 °C warmer than at present). This large emission follows from the enormous estimated biomass of sauropods, and because methane production of individual herbivores is believed to be almost proportional to their mass.
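The near-proportionality of methane output to body mass lends itself to a back-of-the-envelope check. The sketch below is a minimal reconstruction, assuming the published allometric scaling for non-ruminant herbivores (roughly 0.18 × mass^0.97 litres per day) and round population figures consistent with the sauropod estimate quoted above; every parameter is an illustrative assumption rather than a measured value.

```python
# Rough reconstruction of the sauropod methane estimate described above.
# Scaling and population figures are assumptions for illustration only.

def methane_litres_per_day(mass_kg: float) -> float:
    """Daily methane output of one non-ruminant herbivore, in litres."""
    return 0.18 * mass_kg ** 0.97

MASS_KG = 20_000             # assumed mass of a mid-sized adult sauropod
DENSITY_PER_KM2 = 10         # assumed animals per square kilometre
VEGETATED_AREA_KM2 = 75e6    # assumed Mesozoic vegetated land area
CH4_KG_PER_LITRE = 7.16e-4   # methane density, ~0.716 g/L at 0 deg C

# Per-animal output: litres/day -> kg/day -> tonnes/year
per_animal_t_per_year = (
    methane_litres_per_day(MASS_KG) * CH4_KG_PER_LITRE * 365 / 1000
)
global_population = DENSITY_PER_KM2 * VEGETATED_AREA_KM2
global_mt_per_year = per_animal_t_per_year * global_population / 1e6

print(f"{per_animal_t_per_year:.2f} t CH4/yr per animal")  # ~0.7
print(f"{global_mt_per_year:.0f} Mt CH4/yr globally")      # ~520
```

With these assumed inputs, each animal emits roughly 0.7 tonnes of methane per year, and the global total lands near the 520-million-ton figure cited above, showing how strongly the estimate depends on the assumed population density.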
Recent studies have indicated that the extinction of megafaunal herbivores may have caused a reduction in atmospheric methane. This hypothesis is relatively new. One study examined the methane emissions from the bison that occupied the Great Plains of North America before contact with European settlers. The study estimated that the removal of the bison caused a decrease in methane emissions of as much as 2.2 million tons per year. Another study examined the change in the methane concentration in the atmosphere at the end of the Pleistocene epoch after the extinction of megafauna in the Americas. After early humans migrated to the Americas about 13,000 BP, their hunting and other associated ecological impacts led to the extinction of many megafaunal species there. Calculations suggest that this extinction decreased methane production by about 9.6 million tons per year. This suggests that the absence of megafaunal methane emissions may have contributed to the abrupt climatic cooling at the onset of the Younger Dryas. The decrease in atmospheric methane that occurred at that time, as recorded in ice cores, was 2–4 times more rapid than any other decrease in the last half million years, suggesting that an unusual mechanism was at work.
The hyperdisease hypothesis, proposed by Ross MacPhee in 1997, states that the megafaunal die-off was due to an indirect transmission of diseases by newly arriving aboriginal humans. According to MacPhee, aboriginals or animals travelling with them, such as domestic dogs or livestock, introduced one or more highly virulent diseases into new environments whose native populations had no immunity to them, eventually leading to their extinction. K-selected animals, such as the now-extinct megafauna, are especially vulnerable to diseases, as opposed to r-selected animals, which have a shorter gestation period and a higher population size. Humans are thought to be the sole cause, as other, earlier migrations of animals into North America from Eurasia did not cause extinctions.
There are many problems with this theory, as such a disease would have to meet several criteria: it would have to be able to sustain itself in an environment with no hosts, have a high infection rate, and be extremely lethal, with a mortality rate of 50–75%. A disease has to be very virulent to kill off all the individuals in a genus or species, and even a disease as virulent as West Nile fever is unlikely to have caused extinction.
However, diseases have been the cause for some extinctions. The introduction of avian malaria and avipoxvirus, for example, have had a negative impact on the endemic birds of Hawaii.
The loss of species from ecological communities, defaunation, is primarily driven by human activity. This has resulted in empty forests, ecological communities depleted of large vertebrates. This is not to be confused with extinction, as it includes both the disappearance of species and declines in abundance. Defaunation effects were first implied at the Symposium of Plant-Animal Interactions at the University of Campinas, Brazil in 1988 in the context of Neotropical forests. Since then, the term has gained broader usage in conservation biology as a global phenomenon.
Big cat populations have severely declined over the last half-century and could face extinction in the following decades. According to IUCN estimates: lions are down to 25,000, from 450,000; leopards are down to 50,000, from 750,000; cheetahs are down to 12,000, from 45,000; tigers are down to 3,000 in the wild, from 50,000. A December 2016 study by the Zoological Society of London, Panthera Corporation and Wildlife Conservation Society showed that cheetahs are far closer to extinction than previously thought, with only 7,100 remaining in the wild, and crammed within only 9% of their historic range. Human pressures are to blame for the cheetah population crash, including prey loss due to overhunting by people, retaliatory killing from farmers, habitat loss and the illegal wildlife trade.
The term pollinator decline refers to the reduction in abundance of insect and other animal pollinators in many ecosystems worldwide beginning at the end of the twentieth century, and continuing into the present day. Pollinators, which are necessary for 75% of food crops, are declining globally in both abundance and diversity. A 2017 study led by Radboud University's Hans de Kroon indicated that the biomass of insect life in Germany had declined by three-quarters in the previous 25 years. Participating researcher Dave Goulson of Sussex University stated that their study suggested that humans are making large parts of the planet uninhabitable for wildlife. Goulson characterized the situation as an approaching "ecological Armageddon", adding that "if we lose the insects then everything is going to collapse." As of 2019, 40% of insect species are in decline, and a third are endangered. The most significant drivers in the decline of insect populations are associated with intensive farming practices, along with pesticide use and climate change.
Various species are predicted to become extinct in the near future, among them the rhinoceros, nonhuman primates, pangolins, and giraffes. Hunting alone threatens bird and mammalian populations around the world. The direct killing of megafauna for meat and body parts is the primary driver of their destruction, with 70% of the 362 megafauna species in decline as of 2019. Mammals in particular have suffered such severe losses as the result of human activity that it could take several million years for them to recover. According to the WWF's 2016 "Living Planet Report", global wildlife populations have declined 58% since 1970, primarily due to habitat destruction, over-hunting and pollution. They project that if current trends continue, 67% of wildlife could disappear by 2020. In their 2018 report, the WWF found that overconsumption of resources by the global population has destroyed 60% of animal populations since 1970, and that this continued destruction of wildlife is an emergency which threatens the survival of human civilization. The 189 countries that are signatories to the Convention on Biological Diversity (Rio Accord) have committed to preparing a Biodiversity Action Plan, a first step at identifying specific endangered species and habitats, country by country.
A June 2020 study published in "PNAS" posits that the contemporary extinction crisis "may be the most serious environmental threat to the persistence of civilization, because it is irreversible" and that its acceleration "is certain because of the still fast growth in human numbers and consumption rates." The study found that more than 500 vertebrate species are poised to be lost in the next two decades.
Recent extinctions are more directly attributable to human influences, whereas prehistoric extinctions can be attributed to other factors, such as global climate change. The International Union for Conservation of Nature (IUCN) characterises 'recent' extinctions as those that have occurred past a cut-off point of 1500, and at least 875 species went extinct between then and 2012. Some species, such as the Père David's deer and the Hawaiian crow, are extinct in the wild, and survive solely in captive populations. Other species, such as the Florida panther, are ecologically extinct, surviving in such low numbers that they essentially have no impact on the ecosystem. Other populations are only locally extinct (extirpated), still existing elsewhere but reduced in distribution, as with the extinction of gray whales in the Atlantic and of the leatherback sea turtle in Malaysia.
Most recently, insect populations have experienced rapid, surprising declines. Insects have declined at an annual rate of 2.5% over the last 25–30 years. The most severe effects may be in Puerto Rico, where ground insect biomass has declined by 98% in the previous 35 years. Butterflies and moths are experiencing some of the most severe effects: butterfly species have declined by 58% on farmland in England. In the last ten years, 40% of insect species and 22% of mammal species have disappeared. Germany is experiencing a 75% decline in insect biomass. Climate change and agriculture are believed to be the most significant contributors to the change.
A 2019 study published in "Nature Communications" found that rapid biodiversity loss is impacting larger mammals and birds to a much greater extent than smaller ones, with the body mass of such animals expected to shrink by 25% over the next century. Over the past 125,000 years, the average body size of wildlife has fallen by 14% as human actions eradicated megafauna on all continents with the exception of Africa. Another 2019 study published in "Biology Letters" found that extinction rates are perhaps much higher than previously estimated, in particular for bird species.
Global warming is widely accepted as being a contributor to extinction worldwide, in a similar way that previous extinction events have generally included a rapid change in global climate and meteorology. It is also expected to disrupt sex ratios in many reptiles which have temperature-dependent sex determination.
The clearing of land for palm oil plantations releases carbon emissions held in the peatlands of Indonesia. Palm oil mainly serves as a cheap cooking oil, and also as a (controversial) biofuel. Damage to peatland contributes to 4% of global greenhouse gas emissions, and 8% of those caused by burning fossil fuels. Palm oil cultivation has also been criticized for other environmental impacts, including deforestation, which has threatened critically endangered species such as the orangutan and the tree-kangaroo. The IUCN stated in 2016 that the species could go extinct within a decade if measures are not taken to preserve the rainforests in which they live.
Some scientists and academics assert that industrial agriculture and the growing demand for meat are contributing to significant global biodiversity loss, since these are significant drivers of deforestation and habitat destruction; species-rich habitats, such as significant portions of the Amazon region, are being converted to agriculture for meat production. A 2017 study by the World Wildlife Fund (WWF) found that 60% of biodiversity loss can be attributed to the vast scale of feed crop cultivation required to rear tens of billions of farm animals. Moreover, a 2006 report by the Food and Agriculture Organization (FAO) of the United Nations, "Livestock's Long Shadow", also found that the livestock sector is a "leading player" in biodiversity loss. More recently, in 2019, the IPBES "Global Assessment Report on Biodiversity and Ecosystem Services" attributed much of this ecological destruction to agriculture and fishing, with the meat and dairy industries having a very significant impact. Since the 1970s food production has soared in order to feed a growing human population and bolster economic growth, but at a huge price to the environment and other species. The report says some 25% of the earth's ice-free land is used for cattle grazing. A 2020 study published in "Nature Communications" warned that human impacts from housing, industrial agriculture and in particular meat consumption are wiping out 50 billion years of Earth's evolutionary history and driving to extinction some of the "most unique animals on the planet", in the words of lead author Rikki Gumbs, among them the aye-aye lemur, the Chinese crocodile lizard and the pangolin.
Rising levels of carbon dioxide are resulting in influx of this gas into the ocean, increasing its acidity. Marine organisms which possess calcium carbonate shells or exoskeletons experience physiological pressure as the carbonate reacts with acid. For example, this is already resulting in coral bleaching on various coral reefs worldwide, which provide valuable habitat and maintain a high biodiversity. Marine gastropods, bivalves and other invertebrates are also affected, as are the organisms that feed on them. According to a 2018 study published in "Science", global Orca populations are poised to collapse due to toxic chemical and PCB pollution. PCBs are still leaking into the sea in spite of being banned for decades.
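The underlying mechanism is standard carbonate chemistry (summarized here for context, not drawn from the studies cited): dissolved CO2 forms carbonic acid, the extra protons convert carbonate ions to bicarbonate, and calcifying organisms are left with less carbonate for shell-building:

$$\mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^-}, \qquad \mathrm{H^+ + CO_3^{2-} \rightleftharpoons HCO_3^-},$$
$$\mathrm{Ca^{2+} + CO_3^{2-} \rightleftharpoons CaCO_3\ (shells\ and\ skeletons)}.$$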
Some researchers suggest that by 2050 there could be more plastic than fish in the oceans by weight, with about of plastic being discharged into the oceans annually. Single-use plastics, such as plastic shopping bags, make up the bulk of this, and can often be ingested by marine life, such as with sea turtles. These plastics can degrade into microplastics, smaller particles that can affect a larger array of species. Microplastics make up the bulk of the Great Pacific Garbage Patch, and their smaller size is detrimental to cleanup efforts.
In March 2019, "Nature Climate Change" published a study by ecologists from Yale University, who found that over the next half century, human land use will reduce the habitats of 1,700 species by up to 50%, pushing them closer to extinction. That same month "PLOS Biology" published a similar study drawing on work at the University of Queensland, which found that "more than 1,200 species globally face threats to their survival in more than 90% of their habitat and will almost certainly face extinction without conservation intervention".
Overhunting can reduce the local population of game animals by more than half, as well as reducing population density, and may lead to extinction for some species. Populations located nearer to villages are significantly more at risk of depletion. Several conservationist organizations, among them IFAW and HSUS, assert that trophy hunters, particularly from the United States, are playing a significant role in the decline of giraffes, which they refer to as a "silent extinction".
The surge in the mass killings by poachers involved in the illegal ivory trade along with habitat loss is threatening African elephant populations. In 1979, their populations stood at 1.7 million; at present there are fewer than 400,000 remaining. Prior to European colonization, scientists believe Africa was home to roughly 20 million elephants. According to the Great Elephant Census, 30% of African elephants (or 144,000 individuals) disappeared over a seven-year period, 2007 to 2014. African elephants could become extinct by 2035 if poaching rates continue.
Fishing has had a devastating effect on marine organism populations for several centuries even before the explosion of destructive and highly effective fishing practices like trawling. Humans are unique among predators in that they regularly prey on other adult apex predators, particularly in marine environments; bluefin tuna, blue whales, North Atlantic right whales and over fifty species of sharks and rays are vulnerable to predation pressure from human fishing, in particular commercial fishing. A 2016 study published in "Science" concludes that humans tend to hunt larger species, and this could disrupt ocean ecosystems for millions of years. A 2020 study published in "Science Advances" found that around 18% of marine megafauna, including iconic species such as the Great white shark, are at risk of extinction from human pressures over the next century. In a worst case scenario, 40% could go extinct over the same time period.
The decline of amphibian populations has also been identified as an indicator of environmental degradation. As well as habitat loss, introduced predators and pollution, Chytridiomycosis, a fungal infection accidentally spread by human travel, globalization and the wildlife trade, has caused severe population drops of over 500 amphibian species, and perhaps 90 extinctions, including (among many others) the extinction of the golden toad in Costa Rica and the Gastric-brooding frog in Australia. Many other amphibian species now face extinction, including the reduction of Rabb's fringe-limbed treefrog to an endling, and the extinction of the Panamanian golden frog in the wild. Chytrid fungus has spread across Australia, New Zealand, Central America and Africa, including countries with high amphibian diversity such as cloud forests in Honduras and Madagascar. "Batrachochytrium salamandrivorans" is a similar infection currently threatening salamanders. Amphibians are now the most endangered vertebrate group, having existed for more than 300 million years through three other mass extinctions.
Millions of bats in the US have been dying off since 2012 due to a fungal infection spread from European bats, which appear to be immune. Population drops have been as great as 90% within five years, and extinction of at least one bat species is predicted. There is currently no form of treatment, and such declines have been described as "unprecedented" in bat evolutionary history by Alan Hicks of the New York State Department of Environmental Conservation.
Between 2007 and 2013, over ten million beehives were abandoned due to colony collapse disorder, which causes worker bees to abandon the queen. Though no single cause has gained widespread acceptance by the scientific community, proposals include infections with "Varroa" and "Acarapis" mites; malnutrition; various pathogens; genetic factors; immunodeficiencies; loss of habitat; changing beekeeping practices; or a combination of factors.
Some leading scientists have advocated for the global community to designate as protected areas 30 percent of the planet by 2030, and 50 percent by 2050, in order to mitigate the contemporary extinction crisis as the human population is projected to grow to 10 billion by the middle of the century. Human consumption of food and water resources is also projected to double by this time.
In November 2018, the UN's biodiversity chief Cristiana Pașca Palmer urged people around the world to put pressure on governments to implement significant protections for wildlife by 2020, as rampant biodiversity loss is a "silent killer" as dangerous as global warming, but has received little attention by comparison. She says that "It’s different from climate change, where people feel the impact in everyday life. With biodiversity, it is not so clear but by the time you feel what is happening, it may be too late." In January 2020, the UN Convention on Biological Diversity drafted a Paris-style plan to stop biodiversity and ecosystem collapse by setting a deadline of 2030 to protect 30% of the earth's land and oceans and reduce pollution by 50%, with the goal of allowing for the restoration of ecosystems by 2050. The world failed to meet similar targets for 2020 set by the Convention during a summit in Japan in 2010.
Some scientists have proposed keeping extinctions below 20 per year for the next century as a global target to reduce species loss, which is the biodiversity equivalent of the 2 °C climate target, although it is still much higher than the normal background rate of two per year prior to anthropogenic impacts on the natural world.
|
https://en.wikipedia.org/wiki?curid=14208
|
Harrison Narcotics Tax Act
The Harrison Narcotics Tax Act (Ch. 1, ) was a United States federal law that regulated and taxed the production, importation, and distribution of opiates and coca products. The act was proposed by Representative Francis Burton Harrison of New York and was approved on December 17, 1914.
"An Act To provide for the registration of, with collectors of internal revenue, and to impose a special tax on all persons who produce, import, manufacture, compound, deal in, dispense, sell, distribute, or give away opium or coca leaves, their salts, derivatives, or preparations, and for other purposes." The courts interpreted this to mean that physicians could prescribe narcotics to patients in the course of normal treatment, but not for the treatment of addiction.
The Harrison Anti-Narcotic legislation consisted of three U.S. House bills imposing restrictions on the availability and consumption of the psychoactive drug opium. Two of these bills passed conjointly with the third, the Opium and Coca Leaves Trade Restrictions Act.
Although distribution and use were technically illegal, cocaine could still legally be distributed, sold, and used by registered companies and individuals.
Following the Spanish–American War the U.S. acquired the Philippines from Spain. At that time, opium addiction constituted a significant problem in the civilian population of the Philippines.
Charles Henry Brent was an American Episcopal bishop who served as Missionary Bishop of the Philippines beginning in 1901. He convened a Commission of Inquiry, known as the Brent Commission, for the purpose of examining alternatives to a licensing system for opium addicts. The Commission recommended that narcotics should be subject to international control. The recommendations of the Brent Commission were endorsed by the United States Department of State and in 1906 President Theodore Roosevelt called for an international conference, the International Opium Commission, which was held in Shanghai in February 1909. A second conference was held at The Hague in May 1911, and out of it came the first international drug control treaty, the International Opium Convention of 1912.
In the 1800s, opiates and cocaine were mostly unregulated drugs. In the 1890s, the Sears & Roebuck catalogue, which was distributed to millions of American homes, offered a syringe and a small amount of cocaine for $1.50. On the other hand, as early as 1880, some states and localities had already passed laws against smoking opium, at least in public; the Los Angeles Herald, for example, mentioned that city's law against opium smoking.
At the beginning of the 20th century, cocaine began to be linked to crime. In 1900, the "Journal of the American Medical Association" published an editorial stating, "Negroes in the South are reported as being addicted to a new form of vice – that of 'cocaine sniffing' or the 'coke habit.'" Some newspapers later claimed cocaine use caused blacks to rape white women and was improving their pistol marksmanship. Chinese immigrants were blamed for importing the opium-smoking habit to the U.S. The 1903 blue-ribbon citizens' panel, the Committee on the Acquirement of the Drug Habit, concluded, "If the Chinaman cannot get along without his dope we can get along without him."
Theodore Roosevelt appointed Dr. Hamilton Wright as the first Opium Commissioner of the United States in 1908. In 1909, Wright attended the International Opium Commission in Shanghai as the American delegate. He was accompanied by Charles Henry Brent, the Episcopal Bishop. On March 12, 1911, Wright was quoted in an article in "The New York Times": "Of all the nations of the world, the United States consumes most habit-forming drugs per capita. Opium, the most pernicious drug known to humanity, is surrounded, in this country, with far fewer safeguards than any other nation in Europe fences it with." He further claimed that "it has been authoritatively stated that cocaine is often the direct incentive to the crime of rape by the negroes of the South and other sections of the country". He also stated that "one of the most unfortunate phases of smoking opium in this country is the large number of women who have become involved and were living as common-law wives or cohabitating with Chinese in the Chinatowns of our various cities".
|
https://en.wikipedia.org/wiki?curid=14210
|