Common ostrich
The common ostrich ("Struthio camelus") or simply ostrich, is a species of large flightless bird native to certain large areas of Africa. It is one of two extant species of ostriches, the only living members of the genus "Struthio" in the ratite order of birds. The other is the Somali ostrich ("Struthio molybdophanes"), which was recognized as a distinct species by BirdLife International in 2014 having been previously considered a very distinctive subspecies of ostrich.
The common ostrich belongs to the order Struthioniformes, which formerly contained all the ratites, such as the kiwis, emus, rheas, and cassowaries. However, recent genetic analysis has found that the group is not monophyletic, as it is paraphyletic with respect to the tinamous, so ostriches are now classified as the only members of the order. Phylogenetic studies have shown that the ostrich is the sister group to all other members of Palaeognathae, and thus the flighted tinamous are the sister group to the extinct moa. It is distinctive in its appearance, with a long neck and legs, and can run for a long time at a speed of 55 km/h (34 mph), with short bursts up to about 70 km/h (43 mph), the fastest land speed of any bird. The common ostrich is the largest living species of bird and lays the largest eggs of any living bird (the extinct elephant birds of Madagascar and the giant moa of New Zealand laid larger eggs).
The common ostrich's diet consists mainly of plant matter, though it also eats invertebrates. It lives in nomadic groups of 5 to 50 birds. When threatened, the ostrich will either hide itself by lying flat against the ground, or run away. If cornered, it can attack with a kick of its powerful legs. Mating patterns differ by geographical region, but territorial males fight for a harem of two to seven females.
The common ostrich is farmed around the world, particularly for its feathers, which are decorative and are also used as feather dusters. Its skin is used for leather products and its meat is marketed commercially, with its leanness a common marketing point.
Common ostriches usually weigh from 63 to 145 kilograms (139–320 lb), or as much as two adult humans. The Masai ostriches of East Africa ("S. c. massaicus") averaged 115 kg (254 lb) in males and 100 kg (220 lb) in females, while the nominate subspecies, the North African ostrich ("S. c. camelus"), was found to average 111 kg (245 lb) in unsexed adults. Exceptional male ostriches (in the nominate subspecies) can weigh up to 156.8 kg (346 lb). At sexual maturity (two to four years), male common ostriches can be from 2.1 to 2.75 m (6 ft 11 in to 9 ft 0 in) in height, while female common ostriches range from 1.7 to 2.0 m (5 ft 7 in to 6 ft 7 in) tall. New chicks are fawn in color, with dark brown spots. During the first year of life, chicks grow at about 25 cm (9.8 in) per month. At one year of age, common ostriches weigh approximately 45 kilograms (99 lb). Their lifespan is up to 40–45 years.
The feathers of adult males are mostly black, with white primaries and a white tail. However, the tail of one subspecies is buff. Females and young males are grayish-brown and white. The head and neck of both male and female ostriches is nearly bare, with a thin layer of down. The skin of the female's neck and thighs is pinkish gray, while the male's is gray or pink dependent on subspecies.
The long neck and legs keep their head up to 2.8 m (9 ft) above the ground, and their eyes are said to be the largest of any land vertebrate, at 50 mm (2.0 in) in diameter, helping them to see predators at a great distance. The eyes are shaded from sunlight from above. However, the head and bill are relatively small for the birds' huge size, with the bill measuring 12 to 14.3 cm (4.7 to 5.6 in).
Their skin varies in color depending on the subspecies, with some having light or dark gray skin and others having pinkish or even reddish skin. The strong legs of the common ostrich are unfeathered and show bare skin, with the tarsus (the lowest upright part of the leg) being covered in scales: red in the male, black in the female. The tarsus of the common ostrich is the largest of any living bird, measuring 39 to 53 cm (15 to 21 in) in length. The bird has just two toes on each foot (most birds have four), with the nail on the larger, inner toe resembling a hoof. The outer toe has no nail. The reduced number of toes appears to be an adaptation for running, useful for getting away from predators. Common ostriches can run at speeds over 70 km/h (43 mph) and can cover 3 to 5 m (9.8 to 16.4 ft) in a single stride. The wings reach a span of about 2 metres (6 ft 7 in), and the wing chord measurement of 90 cm (35 in) is around the same size as that of the largest flying birds.
The feathers lack the tiny hooks that lock together the smooth external feathers of flying birds, and so are soft and fluffy and serve as insulation. Common ostriches can tolerate a wide range of temperatures. In much of their habitat, temperatures vary as much as 40 °C (72 °F) between night and day. Their temperature control relies in part on behavioral thermoregulation. For example, they use their wings to cover the naked skin of the upper legs and flanks to conserve heat, or leave these areas bare to release heat. The wings also function as stabilizers to give better maneuverability when running. Tests have shown that the wings are actively involved in rapid braking, turning and zigzag maneuvers. They have 50–60 tail feathers, and their wings have 16 primary, four alular and 20–23 secondary feathers.
The common ostrich's sternum is flat, lacking the keel to which wing muscles attach in flying birds. The beak is flat and broad, with a rounded tip. Like all ratites, the ostrich has no crop, and it also lacks a gallbladder. It has three stomachs, and the caecum is long. Unlike all other living birds, the common ostrich secretes urine separately from faeces: all other birds store the urine and faeces combined in the coprodeum, but the ostrich stores the faeces in the terminal rectum. It also has unique pubic bones that are fused to hold its gut. Unlike most birds, the males have a copulatory organ, which is retractable and 20 cm (8 in) long. Their palate differs from that of other ratites in that the sphenoid and palatal bones are unconnected.
The common ostrich was originally described by Carl Linnaeus from Sweden in his 18th-century work, "Systema Naturae" under its current binomial name. Its scientific name is derived from Latin, "struthio" meaning "ostrich" and "camelus" meaning "camel", alluding to its dry habitat.
The common ostrich belongs to the ratite order Struthioniformes. Other members include rheas, emus, cassowaries, moa, kiwi and the largest known bird ever, the now-extinct elephant bird ("Aepyornis"). However, the classification of the ratites as a single order has always been questioned, with the alternative classification restricting the Struthioniformes to the ostrich lineage and elevating the other groups.
Four living subspecies are recognized:
The Somali ostrich is now considered a full species by the Tree of Life Project, "The Clements Checklist of Birds of the World", and BirdLife International. A few authorities, including the "Howard and Moore Complete Checklist of the Birds of the World", do not recognize it as separate. Mitochondrial DNA haplotype comparisons suggest that it diverged from the other ostriches not quite 4 mya due to the formation of the East African Rift. Hybridization with the subspecies that evolved southwest of its range, "S. c. massaicus", has apparently been prevented on a significant scale by ecological separation: the Somali ostrich prefers bushland, where it browses middle-height vegetation for food, while the Masai ostrich is, like the other subspecies, a grazing bird of the open savanna and "miombo" habitat.
The population from Río de Oro was once separated as "Struthio camelus spatzi" because its eggshell pores were shaped like a teardrop and not round. However, as there is considerable variation of this character and there were no other differences between these birds and adjacent populations of "S. c. camelus", the separation is no longer considered valid. This population disappeared in the latter half of the 20th century. There were 19th-century reports of the existence of small ostriches in North Africa; these are referred to as Levaillant's ostrich ("Struthio bidactylus") but remain a hypothetical form not supported by material evidence.
Common ostriches formerly occupied Africa north and south of the Sahara, East Africa, Africa south of the rainforest belt, and much of Asia Minor. Today common ostriches prefer open land and are native to the savannas and Sahel of Africa, both north and south of the equatorial forest zone. In southwest Africa they inhabit the semi-desert or true desert. Farmed common ostriches in Australia have established feral populations. The Arabian ostriches in the Near and Middle East were hunted to extinction by the middle of the 20th century. Attempts to reintroduce the common ostrich into Israel have failed. Common ostriches have occasionally been seen inhabiting islands on the Dahlak Archipelago, in the Red Sea near Eritrea.
Research conducted by the Birbal Sahni Institute of Palaeobotany in India found molecular evidence that ostriches lived in India 25,000 years ago. DNA tests on fossilized eggshells recovered from eight archaeological sites in the states of Rajasthan, Gujarat and Madhya Pradesh found 92% genetic similarity between the eggshells and the North African ostrich, so these could have been fairly distant relatives.
Common ostriches normally spend the winter months in pairs or alone; only 16 percent of common ostrich sightings were of more than two birds. During the breeding season, and sometimes during extreme rainless periods, ostriches live in nomadic groups of five to 100 birds (led by a top hen) that often travel together with other grazing animals, such as zebras or antelopes. Ostriches are diurnal, but may be active on moonlit nights. They are most active early and late in the day. The male common ostrich's territory is between 2 and 20 km2 (0.77 and 7.7 sq mi).
With their acute eyesight and hearing, common ostriches can sense predators such as lions from far away. When being pursued by a predator, they have been known to reach speeds in excess of 70 km/h (43 mph) and can maintain a steady speed of 50 km/h (31 mph), which makes the common ostrich the world's fastest two-legged animal. When lying down and hiding from predators, the birds lay their heads and necks flat on the ground, making them appear like a mound of earth from a distance, aided by the heat haze in their hot, dry habitat.
When threatened, common ostriches run away, but they can cause serious injury and death with kicks from their powerful legs. Their legs can only kick forward.
Contrary to popular belief, ostriches do not bury their heads in sand to avoid danger. This myth likely began with Pliny the Elder (AD 23–79), who wrote that ostriches "imagine, when they have thrust their head and neck into a bush, that the whole of their body is concealed." This may have been a misunderstanding of their sticking their heads in the sand to swallow sand and pebbles to help digest their fibrous food, or, as National Geographic suggests, of the defensive behavior of lying low, so that they may appear from a distance to have their head buried. Another possible origin for the myth lies with the fact that ostriches keep their eggs in holes in the sand instead of nests, and must rotate them using their beaks during incubation; digging the hole, placing the eggs, and rotating them might each be mistaken for an attempt to bury their heads in the sand.
They mainly feed on seeds, shrubs, grass, fruit and flowers; occasionally they also eat insects such as locusts. Lacking teeth, they swallow pebbles that act as gastroliths to grind food in the gizzard. When eating, they fill their gullet with food, which is passed down the esophagus in the form of a ball called a bolus. The bolus may be as much as 210 ml (7.1 US fl oz). After passing through the neck (there is no crop) the food enters the gizzard, where it is worked on by the aforementioned pebbles. The gizzard can hold as much as 1,300 g (2.9 lb), of which up to 45% may be sand and pebbles. Common ostriches can go without drinking for several days, using metabolic water and moisture in ingested plants, but they enjoy liquid water and frequently take baths where it is available. They can survive losing up to 25% of their body weight through dehydration.
Common ostriches become sexually mature when they are 2 to 4 years old; females mature about six months earlier than males. As with other birds, an individual may reproduce several times over its lifetime. The mating season begins in March or April and ends sometime before September. The mating process differs in different geographical regions. Territorial males typically boom in defense of their territory and harem of two to seven hens; the successful male may then mate with several females in the area, but will only form a pair bond with a 'major' female.
The cock performs with his wings, alternating wing beats, until he attracts a mate. They go to the mating area, and he maintains privacy by driving away all intruders. They graze until their behavior is synchronized; then the feeding becomes secondary and the process takes on a ritualistic appearance. The cock then excitedly flaps alternate wings again and starts poking at the ground with his bill. He then violently flaps his wings to symbolically clear out a nest in the soil. Then, while the hen runs a circle around him with lowered wings, he winds his head in a spiral motion. She drops to the ground and he mounts for copulation. Common ostriches raised entirely by humans may direct their courtship behavior not at other ostriches, but toward their human keepers.
The female common ostrich lays her fertilized eggs in a single communal nest, a simple pit, 30 to 60 cm (12–24 in) deep and 3 m (9.8 ft) wide, scraped in the ground by the male. The dominant female lays her eggs first, and when it is time to cover them for incubation she discards extra eggs from the weaker females, leaving about 20 in most cases. A female common ostrich can distinguish her own eggs from the others in a communal nest. Ostrich eggs are the largest of all eggs, though they are actually the smallest eggs relative to the size of the adult bird: on average they are 15 cm (5.9 in) long, 13 cm (5.1 in) wide, and weigh 1.4 kilograms (3.1 lb), over 20 times the weight of a chicken's egg and only 1 to 4% the size of the female. They are glossy cream-colored, with thick shells marked by small pits.
The eggs are incubated by the females by day and by the males by night. This uses the coloration of the two sexes to escape detection of the nest, as the drab female blends in with the sand, while the black male is nearly undetectable in the night. The incubation period is 35 to 45 days, which is rather short compared to other ratites; this is believed to be due to the high rate of predation. Typically, the male defends the hatchlings and teaches them to feed, although males and females cooperate in rearing chicks. Fewer than 10% of nests survive the 9-week period of laying and incubation, and of the surviving chicks, only 15% survive to 1 year of age. However, among common ostriches that survive to adulthood, the species is one of the longest-living birds. Common ostriches in captivity have lived to 62 years and 7 months.
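Taken together, these figures imply very low recruitment per nesting attempt. A minimal sketch of the arithmetic, under the simplifying assumption (ours, not from the source) that every egg in a surviving nest hatches:

```python
# Rough recruitment arithmetic from the survival figures above.
# Treating every egg in a surviving nest as a hatched chick is a
# simplification for illustration only.
eggs_per_nest = 20      # ~20 eggs typically left after the dominant female culls
nest_survival = 0.10    # fewer than 10% of nests survive laying and incubation
chick_survival = 0.15   # ~15% of surviving chicks reach 1 year of age

expected_yearlings = eggs_per_nest * nest_survival * chick_survival
print(f"expected yearlings per nest attempt: {expected_yearlings:.1f}")  # ~0.3
```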
As a flightless species in the rich biozone of the African savanna, the common ostrich must face a variety of formidable predators throughout its life cycle. Animals that prey on ostriches of all ages may include cheetahs, lions, leopards, African hunting dogs, and spotted hyenas. Common ostriches can often outrun most of their predators in a pursuit, so most predators will try to ambush an unsuspecting bird using obstructing vegetation or other objects. A notable exception is the cheetah, which is the most prolific predator of adult common ostriches due to its own great running speeds.
Predators of nests and young common ostriches include jackals, various birds of prey, warthogs, mongooses and Egyptian vultures. Egyptian vultures have been known to hurl stones at ostrich eggs to crack them open so they can eat their contents. If the nest or young are threatened, either or both of the parents may create a distraction, feigning injury. However, they may sometimes fiercely fight predators, especially when chicks are being defended, and have been capable of killing even lions in such confrontations.
The morphology of the common ostrich lung indicates that its structure conforms to that of other avian species, but retains parts of the primitive ratite structure. The opening to the respiratory pathway begins with the laryngeal cavity, lying posterior to the choanae within the buccal cavity. The tip of the tongue lies anterior to the choanae, excluding the nasal respiratory pathway from the buccal cavity. The trachea lies ventral to the cervical vertebrae, extending from the larynx to the syrinx, where it enters the thorax and divides into two primary bronchi, one to each lung, which continue directly through to become mesobronchi. Ten different air sacs attach to the lungs to form areas for respiration. The most posterior air sacs (abdominal and post-thoracic) differ in that the right abdominal air sac is relatively small, lying to the right of the mesentery and dorsally to the liver, while the left abdominal air sac is large and lies to the left of the mesentery. The connections from the main mesobronchi to the more anterior air sacs, including the interclavicular, lateral clavicular, and pre-thoracic sacs, are known as the ventrobronchi, while the caudal end of the mesobronchus branches into several dorsobronchi. Together, the ventrobronchi and dorsobronchi are connected by intra-pulmonary airways, the parabronchi, which form an arcade structure within the lung called the paleopulmo, the only such structure found in primitive birds such as ratites.
The largest air sacs found within the respiratory system are those of the post-thoracic region, while the others decrease in size in the order of the interclavicular (unpaired), abdominal, pre-thoracic, and lateral clavicular sacs. The adult common ostrich lung lacks the connective tissue known as interparabronchial septa, which lends strength to the non-compliant avian lung in other bird species. Because of this lack of connective tissue, adjacent parabronchi share exchange blood capillaries or avascular epithelial plates. Like mammals, ostrich lungs contain an abundance of type II cells at gas exchange sites, an adaptation for preventing lung collapse during slight volume changes.
The common ostrich is an endotherm and maintains a body temperature of 38.1–39.7 °C (100.6–103.5 °F) in its extreme living conditions, such as the heat of the savanna and desert regions of Africa. The ostrich ventilates its respiratory system via a costal pump, rather than the diaphragmatic pump seen in most mammals, and is thus able to use the series of air sacs connected to its lungs. The use of air sacs forms the basis for the three main avian respiratory characteristics:
Inhalation begins at the mouth and the nostrils located at the front of the beak. The air then flows through the anatomical dead space of a highly vascular trachea (c. 78 cm (31 in)) and an expansive bronchial system, where it is further conducted to the posterior air sacs. Air flow through the parabronchi of the paleopulmo is in the same direction as that through the dorsobronchi during both inspiration and expiration. Inspired air moves into the respiratory system as a result of the expansion of the thoraco-abdominal cavity, controlled by the inspiratory muscles. During expiration, oxygen-poor air flows to the anterior air sacs and is expelled by the action of the expiratory muscles. The common ostrich's air sacs play a key role in respiration since they are capacious and increase the surface area (as described by the Fick principle). The oxygen-rich air flows unidirectionally across the respiratory surface of the lungs, delivering a high concentration of oxygen to blood flowing in a crosscurrent arrangement.
To compensate for the large "dead" space, the common ostrich trachea lacks valves, allowing faster inspiratory air flow. In addition, the total lung capacity of the respiratory system (including the lungs and ten air sacs) of a 100 kg ostrich is about 15 L, with a tidal volume ranging from 1.2 to 1.5 L. During panting the tidal volume roughly doubles, which, combined with the higher respiratory rate, results in a 16-fold increase in ventilation. Overall, ostrich respiration can be thought of as a high velocity-low pressure system. At rest, there are only small pressure differences between the ostrich air sacs and the atmosphere, suggesting simultaneous filling and emptying of the air sacs.
The increase in respiration rate from the low range to the high range is sudden and occurs in response to hyperthermia. Birds lack sweat glands, so when placed under heat stress they rely heavily on increased evaporation from the respiratory system for heat transfer. This rise in respiration rate, however, is not necessarily associated with a greater rate of oxygen consumption. Therefore, unlike other birds, the common ostrich is able to dissipate heat through panting without experiencing respiratory alkalosis, by modifying the ventilation of the respiratory medium. During hyperpnea, ostriches pant at a respiratory rate of 40–60 cycles per minute, versus their resting rate of 6–12 cycles per minute. The hot, dry, moisture-poor properties of the common ostrich's respiratory medium affect oxygen's diffusion rate (Henry's law).
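The 16-fold rise in ventilation mentioned above follows from the rate increase and the doubled tidal volume together. A minimal sketch: the tidal volumes are placeholder values (assumptions, not from the source), and rate endpoints within the quoted ranges are chosen so the arithmetic comes out exactly:

```python
# Minute ventilation = respiratory rate x tidal volume.
# Respiratory rates are from the text; tidal volumes are placeholders
# chosen only to illustrate the doubling described earlier.
rest_rate, pant_rate = 6.0, 48.0   # cycles per minute (resting vs. panting)
rest_tidal, pant_tidal = 1.0, 2.0  # liters per cycle; tidal volume doubles

rest_ventilation = rest_rate * rest_tidal   # 6 L/min
pant_ventilation = pant_rate * pant_tidal   # 96 L/min
print(pant_ventilation / rest_ventilation)  # 16.0 -> the 16-fold increase
```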
Common ostrich lungs develop via intussusceptive angiogenesis, a mechanism of blood vessel formation that characterizes many organs. It is involved not only in vasculature expansion, but also in the angioadaptation of vessels to meet physiological requirements. The use of such mechanisms increases in the later stages of lung development, along with elaborate parabronchial vasculature and reorientation of the gas exchange blood capillaries to establish the crosscurrent system at the blood–gas barrier. The blood–gas barrier (BGB) of their lung tissue is thick. The advantage of this thick barrier may be protection from damage by large volumes of blood flow in times of activity, such as running, since air is pumped by the air sacs rather than the lung itself. As a result, the capillaries in the parabronchi have thinner walls, permitting more efficient gaseous exchange. In combination with separate pulmonary and systemic circulatory systems, this helps to reduce stress on the BGB.
The common ostrich heart is a closed-system contractile chamber, composed of the myogenic muscular tissue responsible for heart contraction. A double circulatory plan is in place, with both a pulmonary circuit and a systemic circuit.
The common ostrich's heart shares features with other avian species, such as a conically shaped heart enclosed by a pericardium layer. Other similarities include a larger right atrium volume and a thicker left ventricle to serve the systemic circuit. The ostrich heart has three features that are absent in related birds:
The position of the atrioventricular node differs from that of other fowl: it is located in the endocardium of the atrial surface of the right atrioventricular valve, and it is not covered by connective tissue, as is otherwise characteristic of vertebrate heart anatomy. It also contains fewer myofibrils than typical myocardial cells. The AV node connects the atrial and ventricular chambers and functions to carry the electrical impulse from the atria to the ventricles. On examination, the myocardial cells are observed to have large, densely packed chromosomes within the nucleus.
The coronary arteries start in the right and left aortic sinuses and provide blood to the heart muscle in a similar fashion to most other vertebrates. Other domestic birds capable of flight have three or more coronary arteries supplying blood to the heart muscle. The coronary blood supply begins as a large branch over the surface of the heart, moves along the coronary groove, and continues into the tissue as interventricular branches toward the apex of the heart. The atria, ventricles, and septum are supplied with blood in this way. The deep branches of the coronary arteries found within the heart tissue are small, and supply the interventricular septum and the right atrioventricular valve with the blood needed to carry out their functions. The interatrial artery of the ostrich is small and supplies blood only to part of the left auricle and the interatrial septum.
The Purkinje fibers (P-fibers) found in the heart's moderator bands are specialized cardiac muscle fibers that cause the heart to contract. The Purkinje cells are mostly found within both the endocardium and the sub-endocardium. The sinoatrial node shows a small concentration of Purkinje fibers; continuing along the conducting pathway of the heart, the bundle of His shows the highest concentration of them.
The red blood cell count per unit volume in the ostrich is about 40% of that of a human; however, the red blood cells of the ostrich are about three times larger than those of a human. The P50, the oxygen partial pressure at which the blood is half-saturated, is higher than that of both humans and similar avian species, meaning the blood's oxygen affinity is lower. This decreased oxygen affinity is due to the hemoglobin configuration found in common ostrich blood: the common ostrich's tetramer is composed of hemoglobin types A and D, compared to typical mammalian tetramers composed of hemoglobin types A and B, and the hemoglobin D configuration causes a decreased oxygen affinity at the respiratory surface.
During the embryonic stage, hemoglobin E is present. This subtype increases oxygen affinity in order to transport oxygen across the allantoic membrane of the embryo. This can be attributed to the high metabolic demand of the developing embryo, which the high oxygen affinity serves to satisfy. When the chick hatches, hemoglobin E diminishes while hemoglobins A and D increase in concentration. This shift in hemoglobin concentration results in decreased oxygen affinity and thus an increased P50 value.
Furthermore, the P50 value is influenced by differing organic modulators. In the typical mammalian red blood cell, 2,3-DPG causes a lower affinity for oxygen; 2,3-DPG constitutes approximately 42–47% of the cell's phosphate in the embryonic ostrich. The adult ostrich, however, has no traceable 2,3-DPG. In place of 2,3-DPG, the ostrich uses inositol polyphosphates (IPP), which vary from one to six phosphates per molecule. In relation to the IPP, the ostrich also uses ATP to lower oxygen affinity. ATP's contribution to the cell's phosphate is around 31% during incubation, dropping to 16–20% in 36-day-old chicks. IPP, by contrast, has a low concentration in embryonic stages, around 4% of total phosphate, but jumps to 60% of the cell's total phosphate after hatching. The majority of the phosphate concentration thus switches from 2,3-DPG to IPP, suggesting that the overall low oxygen affinity is due to these varying polyphosphates.
Concerning immunological adaptation, wild common ostriches have been found to have a pronounced non-specific immune defense, with blood content reflecting high values of lysozyme and phagocytic cells. This is in contrast to domesticated ostriches, which in captivity develop high concentrations of immunoglobulin antibodies in their circulation, indicating an acquired immunological response. It is suggested that this immunological adaptability may allow the species a high rate of survival in variable environmental settings.
The common ostrich is a xeric animal, living in habitats that are both dry and hot. Water is scarce in such environments, which poses a challenge to the ostrich's water consumption, and as a flightless ground bird it cannot fly to distant water sources. Because of their size, common ostriches cannot easily escape the heat of their environment; however, they dehydrate less than their smaller bird counterparts because of their small surface area to volume ratio. Hot, arid habitats pose osmotic stress, such as dehydration, which triggers the common ostrich's homeostatic osmoregulatory response.
The common ostrich is well adapted to hot, arid environments through the specialization of its excretory organs. It has an extremely long, well-developed colon between the coprodeum and the paired caeca. The well-developed caeca, in combination with the rectum, form the microbial fermentation chambers used for carbohydrate breakdown, and the catabolism of carbohydrates produces water that can be used internally. The majority of the urine is stored in the coprodeum, and the faeces are stored separately in the terminal colon. The coprodeum is located ventral to the terminal rectum and the urodeum (where the ureters open), with a strong sphincter between the terminal rectum and the coprodeum. The coprodeum and cloaca are the main osmoregulatory mechanisms for the regulation and reabsorption of ions and water, or net water conservation. As expected in a species inhabiting arid regions, dehydration causes a reduction in faecal water, i.e. dry feces. This reduction is believed to be caused by high levels of plasma aldosterone, which leads to rectal absorption of sodium and water. Also expected is the production of hyperosmotic urine; cloacal urine has been found to be 800 mosmol/L, so the U:P (urine:plasma) ratio of the common ostrich is greater than one. Osmotic diffusion of water from the plasma across the epithelium into the coprodeum (where the urine is stored) is avoided, which is believed to be due to the thick mucosal layering of the coprodeum.
Common ostriches have two kidneys, which are chocolate brown in color, granular in texture, and lie in a depression in the pelvic cavity of the dorsal wall. They are covered by peritoneum and a layer of fat. Each kidney is divided into cranial, middle, and caudal sections by large veins; the caudal section is the largest and extends into the middle of the pelvis. The ureters leave the ventral caudomedial surface and continue caudally, near the midline, to the opening of the urodeum of the cloaca. Although there is no bladder, a dilated pouch of the ureter stores the urine, which passes continuously down the ureters to the urodeum until discharged.
Common ostrich kidneys are fairly large, and so are able to hold significant amounts of solutes; common ostriches accordingly drink relatively large volumes of water daily and excrete generous quantities of highly concentrated urine. When drinking water is unavailable or withdrawn, the urine becomes highly concentrated with uric acid and urates: common ostriches that normally drink relatively large amounts of water tend to rely on renal conservation of water when drinking water is scarce. Though there have been no detailed renal studies of the flow rate (Poiseuille's law) and composition of ureteral urine in the ostrich, knowledge of renal function has been based on samples of cloacal urine and samples or quantitative collections of voided urine. Studies have shown that the amount of water intake and the degree of dehydration affect plasma osmolality and urine osmolality in ostriches of various sizes. In a normal hydration state, young ostriches tend to have a measured plasma osmolality of 284 mOsm and a urine osmolality of 62 mOsm; adults have higher values, with a plasma osmolality of 330 mOsm and a urine osmolality of 163 mOsm. The osmolality of both plasma and urine can change depending on whether there is an excess or a deficit of water within the kidneys. When water is freely available, urine osmolality can fall to 60–70 mOsm, so no necessary solutes are lost from the kidneys when excess water is excreted. Dehydrated or salt-loaded ostriches can reach a maximal urine osmolality of approximately 800 mOsm. When plasma osmolality has been measured simultaneously with maximal urine osmolality, the urine:plasma ratio is 2.6:1, the highest encountered among avian species. Along with dehydration, there is also a reduction in urine flow rate from 20 L·d−1 to only 0.3–0.5 L·d−1.
In mammals and common ostriches alike, increases in the glomerular filtration rate (GFR) and urine flow rate (UFR) are due to high-protein diets. In various studies, scientists have measured the clearance of creatinine, a fairly reliable marker of GFR. During normal hydration, the GFR is approximately 92 ml/min; however, when an ostrich is dehydrated for at least 48 hours (2 days), this value diminishes to only 25% of the hydrated rate. In response to dehydration, the ostrich kidneys thus secrete small amounts of very viscous glomerular filtrate that has not been broken down, and return water to the circulatory system through blood vessels. The reduction of GFR during dehydration is so large that the fractional excretion of water (urine flow rate as a percentage of GFR) drops from 15% at normal hydration to 1% during dehydration.
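As a back-of-the-envelope illustration of how sharply urine output falls, a short Python sketch using only the figures quoted above (the derived flow rates are our computations, not measured values):

```python
# Renal figures quoted above; the derived urine flows are for illustration.
HYDRATED_GFR = 92.0                    # ml/min, normal hydration
DEHYDRATED_GFR = 0.25 * HYDRATED_GFR   # GFR falls to 25% after >=48 h without water

def urine_flow(gfr_ml_min: float, fractional_excretion: float) -> float:
    """Urine flow rate given GFR and fractional excretion (fraction of GFR)."""
    return gfr_ml_min * fractional_excretion

print(urine_flow(HYDRATED_GFR, 0.15))    # ~13.8 ml/min when hydrated
print(urine_flow(DEHYDRATED_GFR, 0.01))  # ~0.23 ml/min when dehydrated
```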
Common ostriches employ adaptive features to manage the dry heat and solar radiation of their habitat. They will drink available water; however, being flightless, they are limited in their access to it. They are also able to harvest water through dietary means, consuming plants such as "Euphorbia heterochroma" that hold up to 87% water.
Water mass accounts for 68% of body mass in adult common ostriches, down from 84% in 35-day-old chicks. The differing degrees of water retention are thought to result from varying body fat mass. In comparison to smaller birds, ostriches have a lower evaporative water loss, a result of their small body surface area per unit weight.
When heat stress is at its maximum, common ostriches are able to recover evaporative losses by using metabolic water production to counter losses through urine, feces, and respiratory evaporation. An experiment to determine the primary source of water intake in the ostrich indicated that while the ostrich does employ metabolic water production as a source of hydration, the most important source of water is food. When ostriches were deprived of both food and water, metabolic water production was only 0.5 L·d−1, while total water lost to urine, feces and evaporation was 2.3 L·d−1. When the birds were given both water and food, total water gain was 8.5 L·d−1; in the food-only condition, total water gain was 10.1 L·d−1. These results show that the metabolic water mechanism cannot sustain water losses on its own, and that food intake, specifically of plants with a high water content such as "Euphorbia heterochroma", is necessary to overcome water loss in the common ostrich's arid habitat.
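To make the balance explicit, here is a minimal sketch combining the figures above; the assumption that the 2.3 L·d−1 loss measured in the deprivation condition applies to the other conditions is ours, for illustration only:

```python
# Daily water budget (L/day) from the experiment described above.
# Assumes, for illustration, that the 2.3 L/day loss measured under
# food and water deprivation applies to the other conditions as well.
WATER_LOSS = 2.3
gains = {
    "no food or water": 0.5,   # metabolic water production only
    "water and food": 8.5,     # total water gain
    "food only": 10.1,         # total water gain
}

for condition, gain in gains.items():
    print(f"{condition}: net {gain - WATER_LOSS:+.1f} L/day")
# Only the fed conditions come out positive, which is why dietary water
# (e.g. plants holding ~87% water) is the primary source of hydration.
```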
In times of water deprivation, the electrolyte and osmotic concentration of the urine increases while the urination rate decreases. Under these conditions the urine solute:plasma ratio is approximately 2.5; that is, the urine is hyperosmotic relative to the plasma. Water is thus held back from excretion, keeping the ostrich hydrated, while the passed urine carries a higher concentration of solutes. This mechanism exemplifies how renal function facilitates water retention during periods of dehydration stress.
A number of avian species use nasal salt glands, alongside their kidneys, to control hypertonicity in their blood plasma. However, the common ostrich shows no nasal glandular function in this homeostatic process. Even in a state of dehydration, which increases the osmolality of the blood, its nasal salt glands make no sizeable contribution to salt elimination. The overall mass of the glands is also less than that of the duck's nasal gland; the common ostrich, having a much heavier body, would need larger, heavier nasal glands to excrete salt effectively from its larger blood volume, but this is not the case. These unequal proportions support the conclusion that the common ostrich's nasal glands play no role in salt excretion. The nasal glands may be an ancestral trait that is no longer needed by the common ostrich but has not been lost from its gene pool.
The majority of the common ostrich's internal solutes are made up of sodium ions (Na+), potassium ions (K+), chloride ions (Cl-), total short-chain fatty acids (SCFA), and acetate. The caecum contains a high water concentration with reduced levels nearing the terminal colon, and exhibits a rapid fall in Na+ concentrations and small changes in K+ and Cl-.
The colon is divided into three sections that take part in solute absorption. The upper colon largely absorbs Na+ and SCFA, and partially absorbs KCl. The middle colon absorbs Na+ and SCFA, with little net transfer of K+ and Cl-. The lower colon slightly absorbs Na+ and water, and secretes K+; there is no net movement of Cl- and SCFA in the lower colon.
When the common ostrich is dehydrated, plasma osmolality and Na+, K+, and Cl- concentrations all increase; however, K+ returns to its controlled concentration. The common ostrich also experiences an increase in haematocrit, resulting in a hypovolemic state. Two antidiuretic hormones, arginine vasotocin (AVT) and angiotensin II (AII), increase in the blood plasma in response to hyperosmolality and hypovolemia. AVT triggers antidiuretic hormone (ADH) activity, which targets the nephrons of the kidney: ADH causes osmotic reabsorption of water from the lumen of the nephron into the extracellular fluid. These extracellular fluids then drain into blood vessels, causing a rehydrating effect that prevents water loss by both lowering the volume and increasing the concentration of the urine. Angiotensin, on the other hand, causes vasoconstriction of the systemic arterioles and acts as a dipsogen for ostriches. Both of these antidiuretic hormones work together to maintain water levels in the body that would otherwise be lost to the osmotic stress of the arid environment.
The end product of protein catabolism in animals is nitrogen, which must be excreted in the form of nitrogenous compounds. Ostriches are uricotelic: they excrete nitrogen as the complex nitrogenous waste compound uric acid and related derivatives. Uric acid's low solubility in water gives the ostrich's nitrogenous waste a semi-solid, paste-like consistency.
Common ostriches are homeothermic endotherms: they maintain a constant body temperature by regulating their rate of metabolic heat production. They closely regulate their core body temperature, though, as in other regulating species, their appendages may be cooler. The temperatures of their beak, neck surfaces, lower legs, feet and toes are regulated through heat exchange with the environment; up to 40% of their metabolic heat is dissipated across these structures, which account for about 12% of their total surface area. Total evaporative water loss (TEWL) is statistically lower in the common ostrich than in other ratites.
As ambient temperature increases, dry heat loss decreases, but evaporative heat loss increases because of increased respiration. At high ambient temperatures, ostriches become slightly hyperthermic; however, they can maintain a stable body temperature for up to 8 hours in these conditions. When dehydrated, the common ostrich minimizes water loss, allowing body temperature to increase further; letting body heat rise reduces the temperature gradient between the common ostrich and the ambient air.
Common ostriches have developed a comprehensive set of behavioral adaptations for thermoregulation, such as altering the position of their feathers. They display a feather-fluffing behavior that aids thermoregulation by regulating convective heat loss at high ambient temperatures, and they may also physically seek out shade. When feather fluffing, they contract their muscles to raise their feathers and increase the air space next to their skin, which provides an insulating thickness. The ostrich will also expose the thermal windows of its unfeathered skin to enhance convective and radiative loss in times of heat stress. The surface temperatures of the lower appendages and neck remain above ambient temperature at most ambient temperatures, with the difference narrowing as ambient temperature rises.
At low ambient temperatures the common ostrich utilizes feather flattening, which conserves body heat through insulation; the low conductance coefficient of air means less heat is lost to the environment. This flattening behavior compensates for the common ostrich's rather poor cutaneous evaporative water loss (CEWL). Feather-heavy areas such as the body, thighs and wings usually do not vary much from ambient temperature because of these behavioral controls. The ostrich will also cover its legs to reduce heat loss to the environment, and undergoes piloerection and shivering when faced with low ambient temperatures.
Countercurrent heat exchange with blood flow allows regulated conservation or elimination of heat from the appendages. When ambient temperatures are low, the arterioles are constricted to reduce heat loss along skin surfaces; at high ambient temperatures the reverse occurs, and the arterioles dilate to increase heat loss.
At ambient temperatures below the thermal neutral zone (TNZ), common ostriches decrease their body surface temperature so that heat loss occurs across only about 10% of the total surface area. This 10% comprises critical areas that require high blood flow to prevent freezing, such as the eyes. The eyes and ears tend to be the warmest regions, while the temperatures of the lower appendages stay only slightly above ambient temperature, which minimizes heat exchange across the feet, toes, wings, and legs.
Both the gular region and the air sacs, being close to body temperature, are the main contributors to heat and water loss. Surface temperature can be affected by the rate of blood flow to an area and by the surface area of the surrounding tissue. The ostrich reduces blood flow to the trachea to cool itself, and dilates the blood vessels around the gular region to raise the temperature of the tissue. The air sacs are poorly vascularized but show an increased temperature, which aids in heat loss.
Common ostriches have evolved a 'selective brain cooling' mechanism as a means of thermoregulation. This modality allows the common ostrich to manage the temperature of the blood going to the brain in response to the extreme ambient temperature of the surroundings. The morphology for heat exchange occurs via cerebral arteries and the ophthalmic rete, a network of arteries originating from the ophthalmic artery. The ophthalmic rete is analogous to the carotid rete found in mammals, as it also facilitates transfer of heat from arterial blood coming from the core to venous blood returning from the evaporative surfaces at the head.
Researchers suggest that common ostriches also employ a 'selective brain warming' mechanism in response to cooler surrounding temperatures in the evenings; the brain was found to maintain a warmer temperature than the carotid arterial blood supply. Researchers hypothesize three mechanisms for this finding: first, a possible increase in metabolic heat production within the brain tissue itself to compensate for the colder arterial blood arriving from the core; second, an overall decrease in cerebral blood flow to the brain; and finally, warm venous blood perfusion at the ophthalmic rete that warms the cerebral blood supplying the hypothalamus. Further research is needed to determine how this occurs.
The common ostrich has no sweat glands, and under heat stress it relies on panting to reduce body temperature. Panting increases evaporative heat (and water) loss from the respiratory surfaces, forcing air and heat removal without the loss of metabolic salts, and so gives the common ostrich a very effective respiratory evaporative water loss (REWL). Heat dissipated by respiratory evaporation increases linearly with ambient temperature, matching the rate of heat production. As a result of panting, the common ostrich should eventually experience alkalosis; however, the CO2 concentration in the blood does not change when hot ambient temperatures are experienced. This effect is caused by a lung surface shunt: the lung is not completely shunted, allowing enough oxygen to fulfill the bird's metabolic needs. The common ostrich also utilizes gular fluttering, rapid rhythmic contraction and relaxation of the throat muscles, in a similar way to panting. Both of these behaviors allow the ostrich to actively increase the rate of evaporative cooling.
In hot temperatures, water is lost via respiration, and different surfaces within the respiratory tract, the gular area, the tracheal surface, and the anterior and posterior air sacs, operate at different temperatures and so contribute differently to overall heat and water loss through panting. The long trachea, being cooler than body temperature, is a site of water evaporation.
As ambient air becomes hotter, additional evaporation can take place lower in the trachea on the way to the posterior sacs, shunting the lung surface. The trachea acts as a buffer for evaporation because of its length and its controlled vascularization. The gular region is also heavily vascularized; it serves to cool blood, but also contributes to evaporation, as previously stated. Air flowing through the trachea can be either laminar or turbulent depending on the state of the bird: when the common ostrich is breathing normally, under no heat stress, air flow is laminar; under environmental heat stress, the air flow is considered turbulent. This suggests that laminar air flow causes little to no heat transfer, while turbulent airflow under heat stress can cause maximum heat transfer within the trachea.
Common ostriches are able to meet their energetic requirements via the oxidation of absorbed nutrients. Much of the metabolic rate in animals depends on their allometry, the relationship between body size and an animal's shape, anatomy, physiology and behavior. Hence, metabolic rate in animals with larger masses is greater, in absolute terms, than in animals with smaller masses.
When a bird is inactive and unfed and the ambient temperature is high (i.e. in the thermo-neutral zone), the energy expended is at its minimum. This level of expenditure is better known as the basal metabolic rate (BMR), and can be calculated by measuring the amount of oxygen consumed during various activities. Common ostriches therefore use more energy than smaller birds in absolute terms, but less per unit mass.
A key point when looking at the common ostrich metabolism is to note that it is a non-passerine bird. Thus, BMR in ostriches is particularly low with a value of only 0.113 ml O2 g−1 h−1. This value can further be described using Kleiber's law, which relates the BMR to the body mass of an animal.
Metabolic rate = 70 M^(3/4)

where M is body mass in kilograms, and metabolic rate is measured in kcal per day.
In common ostriches, BMR (ml O2 g−1 h−1) = 389 kg^0.73, describing a line parallel to that of other non-passerine birds but at an elevation of only about 60%.
Along with BMR, energy is also needed for a range of other activities. If the ambient temperature is lower than the thermo-neutral zone, heat is produced to maintain body temperature, and the metabolic rate of a resting, unfed bird that is producing heat is known as the standard metabolic rate (SMR) or resting metabolic rate (RMR). The common ostrich SMR is approximately 0.26 ml O2 g−1 h−1, almost 2.3 times the BMR. Animals that engage in extensive physical activity employ substantial amounts of energy for power; this maximum sustained level, known as the maximum metabolic scope, is at least 28 times greater than the BMR in the ostrich. Likewise, the daily energy turnover rate for an ostrich with access to free water is 12,700 kJ·d−1, equivalent to 0.26 ml O2 g−1 h−1.
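For orientation, these rates can be combined in a short sketch; the 100 kg body mass is an assumed round figure for illustration, and only the per-gram rates and the reconstructed Kleiber formula come from the text above:

```python
# Metabolic rates quoted above (ml O2 per gram per hour); the body mass
# is an assumed round figure for illustration only.
BMR = 0.113       # basal metabolic rate
SMR = 0.26        # standard/resting metabolic rate
MAX_SCOPE = 28    # maximum metabolic scope, as a multiple of BMR

def kleiber_kcal_per_day(mass_kg: float) -> float:
    """Kleiber's law: metabolic rate = 70 * M^(3/4) kcal per day."""
    return 70.0 * mass_kg ** 0.75

mass_kg = 100.0
print(kleiber_kcal_per_day(mass_kg))     # ~2214 kcal/day predicted from mass alone
print(SMR / BMR)                         # ~2.3, matching the SMR:BMR ratio above
print(BMR * MAX_SCOPE * mass_kg * 1000)  # ~316,400 ml O2/h at maximum scope
```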
The wild common ostrich population has declined drastically in the last 200 years, with most surviving birds in reserves or on farms. However, its range remains very large, leading the IUCN and BirdLife International to treat it as a species of Least Concern. Of its five subspecies, the Arabian ostrich ("S. c. syriacus") became extinct around 1966, and the North African ostrich ("S. c. camelus") has declined to the point where it is now included on CITES Appendix I and some treat it as Critically Endangered.
Common ostriches have inspired cultures and civilizations for 5,000 years in Mesopotamia and Egypt. A statue of Arsinoe II of Egypt riding a common ostrich was found in an Egyptian tomb. Hunter-gatherers in the Kalahari use ostrich eggshells as water containers, punching a hole in them; they also produce jewelry from them. The presence of such eggshells with engraved hatched symbols dating from the Howiesons Poort period of the Middle Stone Age at Diepkloof Rock Shelter in South Africa suggests common ostriches were an important part of human life as early as 60,000 BP.
In Eastern Christianity it is common to hang decorated common ostrich eggs on the chains holding the oil lamps. The initial reason was probably to prevent mice and rats from climbing down the chain to eat the oil. Another, symbolic explanation is based on the fictitious tradition that female common ostriches do not sit on their eggs but stare at them incessantly until they hatch, because if they stopped staring even for a second the egg would addle. This is equated to the obligation of the Christian to direct his entire attention towards God during prayer, lest the prayer be fruitless.
In Roman times, there was a demand for common ostriches to use in "venatio" games or cooking. They have been hunted and farmed for their feathers, which at various times have been popular for ornamentation in fashionable clothing (such as hats during the 19th century). Their skins are valued for their leather. In the 18th century they were almost hunted to extinction; farming for feathers began in the 19th century. At the start of the 20th century there were over 700,000 birds in captivity. The market for feathers collapsed after World War I, but commercial farming for feathers and later for skins and meat became widespread during the 1970s. Common ostriches are so adaptable that they can be farmed in climates ranging from South Africa to Alaska.
Common ostriches have been farmed in South Africa since the beginning of the 19th century. According to Frank G. Carpenter, the English are credited with first taming common ostriches outside Cape Town. Farmers captured baby common ostriches and raised them successfully on their property, and were able to obtain a crop of feathers every seven to eight months instead of killing wild common ostriches for their feathers. It is claimed that common ostriches produce the strongest commercial leather. Common ostrich meat tastes similar to lean beef and is low in fat and cholesterol, as well as high in calcium, protein and iron. Uncooked, it is dark red or cherry red, a little darker than beef. Ostrich stew is a dish prepared using common ostrich meat.
Some common ostrich farms also cater to agri-tourism, which may produce a substantial portion of the farm's income. This may include tours of the farmlands, souvenirs, or even ostrich rides.
Common ostriches typically avoid humans in the wild, since they correctly assess humans as potential predators. If approached, they often run away, but sometimes ostriches can be very aggressive when threatened, especially if cornered, and may also attack if they feel the need to defend their territories or offspring. Similar behaviors are noted in captive or domesticated common ostriches, which retain the same natural instincts and can occasionally respond aggressively to stress. When attacking a person, common ostriches deliver slashing kicks with their powerful feet, armed with long claws, with which they can disembowel or kill a person with a single blow. In one study of common ostrich attacks, it was estimated that two to three attacks that result in serious injury or death occur each year in the area of Oudtshoorn, South Africa, where a large number of common ostrich farms are set next to both feral and wild common ostrich populations.
In some countries, people race each other on the backs of common ostriches. The practice is common in Africa and is relatively unusual elsewhere. The common ostriches are ridden in the same way as horses with special saddles, reins, and bits. However, they are harder to manage than horses.
The racing is also a part of modern South African culture. Within the United States, a tourist attraction in Jacksonville, Florida, called 'The Ostrich Farm' opened up in 1892; it and its races became one of the most famous early attractions in the history of Florida. Likewise, the arts scene in Indio, California, consists of both ostrich and camel racing.
Chandler, Arizona, hosts the annual "Ostrich Festival", which features common ostrich races. Racing has also occurred at many other locations such as Virginia City in Nevada, Canterbury Park in Minnesota, Prairie Meadows in Iowa, Ellis Park in Kentucky, and the Fairgrounds in New Orleans, Louisiana.
Orgasm
Orgasm (from Greek ὀργασμός "orgasmos" "excitement, swelling"; also sexual climax) is the sudden discharge of accumulated sexual excitement during the sexual response cycle, resulting in rhythmic muscular contractions in the pelvic region characterized by sexual pleasure. Experienced by males and females, orgasms are controlled by the involuntary or autonomic nervous system. They are usually associated with involuntary actions, including muscular spasms in multiple areas of the body, a general euphoric sensation and, frequently, body movements and vocalizations. The period after orgasm (known as the refractory period) is typically a relaxing experience, attributed to the release of the neurohormones oxytocin and prolactin as well as endorphins (or "endogenous morphine").
Human orgasms usually result from physical sexual stimulation of the penis in males (typically accompanying ejaculation) and of the clitoris in females. Sexual stimulation can be by self-practice (masturbation) or with a sex partner (penetrative sex, non-penetrative sex, or other sexual activity).
The health effects surrounding the human orgasm are diverse. There are many physiological responses during sexual activity, including a relaxed state created by prolactin, as well as changes in the central nervous system such as a temporary decrease in the metabolic activity of large parts of the cerebral cortex while there is no change or increased metabolic activity in the limbic (i.e., "bordering") areas of the brain. There is also a wide range of sexual dysfunctions, such as anorgasmia. These effects impact cultural views of orgasm, such as the beliefs that orgasm and the frequency/consistency of it are either important or irrelevant for satisfaction in a sexual relationship, and theories about the biological and evolutionary functions of orgasm.
In a clinical context, orgasm is usually defined strictly by the muscular contractions involved during sexual activity, along with the characteristic patterns of change in heart rate, blood pressure, and often respiration rate and depth. This is categorized as the sudden discharge of accumulated sexual tension during the sexual response cycle, resulting in rhythmic muscular contractions in the pelvic region. However, definitions of orgasm vary, and there is no consensus on how to consistently classify it; at least twenty-six definitions of orgasm were listed in the journal "Clinical Psychology Review".
There is some debate whether certain types of sexual sensations should be accurately classified as orgasms, including female orgasms caused by G-spot stimulation alone, and the demonstration of extended or continuous orgasms lasting several minutes or even an hour. The question centers on the clinical definition of orgasm, but this way of viewing orgasm is merely physiological; there are also psychological, endocrinological, and neurological definitions of orgasm. In these and similar cases, the sensations experienced are subjective and do not necessarily involve the involuntary contractions characteristic of orgasm. However, the sensations in both sexes are extremely pleasurable and are often felt throughout the body, causing a mental state that is often described as transcendental, with vasocongestion and associated pleasure comparable to that of a full-contractionary orgasm. For example, modern findings support a distinction between ejaculation and male orgasm. For this reason, there are views on both sides as to whether these can be accurately defined as orgasms.
Orgasms can be achieved during a variety of activities, including vaginal, anal or oral sex, non-penetrative sex or masturbation. They may also be achieved by the use of a sex toy, such as a vibrator, or by erotic electrostimulation. Achieving orgasm by stimulation of the nipples or other erogenous zones is rarer. Multiple orgasms, orgasms that occur within a short period of one another, are also possible, especially in women, though they are uncommon.
In addition to physical stimulation, orgasm can be achieved from psychological arousal alone, such as during dreaming (nocturnal emission for males or females) or by forced orgasm. Orgasm by psychological stimulation alone was first reported among people who had spinal cord injury. Although sexual function and sexuality after spinal cord injury are very often impacted, this injury does not deprive one of sexual feelings such as sexual arousal and erotic desires.
A person may also experience an involuntary orgasm, such as in the case of rape or other sexual assault.
Scientific literature focuses on the psychology of female orgasm significantly more than it does on the psychology of male orgasm, which "appears to reflect the assumption that female orgasm is psychologically more complex than male orgasm," but "the limited empirical evidence available suggests that male and female orgasm may bear more similarities than differences. In one controlled study by Vance and Wagner (1976), independent raters could not differentiate written descriptions of male versus female orgasm experiences".
In men, the most common way of achieving orgasm is by physical sexual stimulation of the penis. This is usually accompanied by ejaculation, but it is possible, though rare, for men to orgasm without ejaculation (known as a "dry orgasm"). Prepubescent boys have dry orgasms. Dry orgasms can also occur as a result of retrograde ejaculation or hypogonadism. Men may also ejaculate without reaching orgasm, which is known as anorgasmic ejaculation. They may also achieve orgasm by stimulation of the prostate (see below).
The traditional view of male orgasm is that there are two stages: emission, followed by orgasm, with a refractory period following almost instantly. The refractory period is the recovery phase after orgasm during which it is physiologically impossible for a man to have additional orgasms. In 1966, Masters and Johnson published pivotal research about the phases of sexual stimulation. Their work included women and men and, unlike Alfred Kinsey in 1948 and 1953, tried to determine the physiological stages before and after orgasm.
Masters and Johnson argued that, in the first stage, "accessory organs contract and the male can feel the ejaculation coming; two to three seconds later the ejaculation occurs, which the man cannot constrain, delay, or in any way control" and that, in the second stage, "the male feels pleasurable contractions during ejaculation, reporting greater pleasure tied to a greater volume of ejaculate". They reported that, unlike females, "for the man the resolution phase includes a superimposed refractory period" and added that "many males below the age of 30, but relatively few thereafter, have the ability to ejaculate frequently and are subject to only very short refractory periods during the resolution phase". Masters and Johnson equated male orgasm and ejaculation and maintained the necessity for a refractory period between orgasms.
There has been little scientific study of multiple orgasm in men. Dunn and Trost defined male multiple orgasm as "two or more orgasms with or without ejaculation and without, or with only very limited, detumescence [loss of erection] during one and the same sexual encounter". Although, due to the refractory period, it is rare for men to achieve multiple orgasms, some men have reported having multiple, consecutive orgasms, particularly without ejaculation. Multiple orgasms are more commonly reported in very young men than in older men. The refractory period may last only a few minutes in younger men, but more than an hour in older men.
An increased infusion of the hormone oxytocin during ejaculation is believed to be chiefly responsible for the refractory period, and the amount by which oxytocin is increased may affect the length of each refractory period. A scientific study conducted at Rutgers University in 1995 successfully documented natural, fully ejaculatory, multiple orgasms in an adult man. During the study, six fully ejaculatory orgasms were experienced in 36 minutes, with no apparent refractory period.
In women, the most common way to achieve orgasm is by direct sexual stimulation of the clitoris (meaning consistent manual, oral or other concentrated friction against the external parts of the clitoris). General statistics indicate that 70–80% of women require direct clitoral stimulation to achieve orgasm, although indirect clitoral stimulation (for example, via vaginal penetration) may also be sufficient. The Mayo Clinic stated, "Orgasms vary in intensity, and women vary in the frequency of their orgasms and the amount of stimulation necessary to trigger an orgasm." Clitoral orgasms are easier to achieve because the glans of the clitoris, or the clitoris as a whole, has more than 8,000 sensory nerve endings, as many as (or, in some cases, more than) are present in the human penis or glans penis. As the clitoris is homologous to the penis, it is the equivalent in its capacity to receive sexual stimulation.
One misconception, particularly in older research publications, is that the vagina is completely insensitive. However, there are areas in the anterior vaginal wall and between the top junction of the labia minora and the urethra that are especially sensitive. With regard to the specific density of nerve endings, the area commonly described as the G-spot may produce an orgasm, and the urethral sponge, an area in which the G-spot may be found, runs along the "roof" of the vagina and can create pleasurable sensations when stimulated; even so, intense sexual pleasure (including orgasm) from vaginal stimulation is occasional or otherwise absent, because the vagina has significantly fewer nerve endings than the clitoris. The greatest concentration of vaginal nerve endings is at the lower third (near the entrance) of the vagina.
Sex educator Rebecca Chalker states that only one part of the clitoris, the urethral sponge, is in contact with the penis, fingers, or a dildo in the vagina. Hite and Chalker state that the tip of the clitoris and the inner lips, which are also very sensitive, are not receiving direct stimulation during penetrative intercourse. Because of this, some couples may engage in the woman on top position or the coital alignment technique to maximize clitoral stimulation. For some women, the clitoris is very sensitive after climax, making additional stimulation initially painful.
Masters and Johnson argued that all women are potentially multiply orgasmic, but that multiply orgasmic men are rare, and stated that "the female is capable of rapid return to orgasm immediately following an orgasmic experience, if restimulated before tensions have dropped below plateau phase response levels". Though it is generally reported that women do not experience a refractory period and thus can experience an additional orgasm, or multiple orgasms, soon after the first one, some sources state that both men and women experience a refractory period, because women may also experience a period after orgasm in which further sexual stimulation does not produce excitement. After the initial orgasm, subsequent orgasms for women may be stronger or more pleasurable as the stimulation accumulates.
Discussions of female orgasm are complicated by orgasms in women typically being divided into two categories: clitoral orgasm and vaginal (or G-spot) orgasm. In 1973, Irving Singer theorized that there are three types of female orgasms; he categorized these as vulval, uterine, and blended, but because he was a philosopher, "these categories were generated from descriptions of orgasm in literature rather than laboratory studies". In 1982, Ladas, Whipple and Perry also proposed three categories: the tenting type (derived from clitoral stimulation), the A-frame type (derived from G-spot stimulation), and the blended type (derived from clitoral and G-spot stimulation). In 1999, Whipple and Komisaruk proposed cervix stimulation as being able to cause a fourth type of female orgasm.
Female orgasms by means other than clitoral or vaginal/G-spot stimulation are less prevalent in scientific literature and most scientists contend that no distinction should be made between "types" of female orgasm. This distinction began with Sigmund Freud, who postulated the concept of "vaginal orgasm" as separate from clitoral orgasm. In 1905, Freud stated that clitoral orgasms are purely an adolescent phenomenon and that upon reaching puberty, the proper response of mature women is a change-over to vaginal orgasms, meaning orgasms without any clitoral stimulation. While Freud provided no evidence for this basic assumption, the consequences of this theory were considerable. Many women felt inadequate when they could not achieve orgasm via vaginal intercourse alone, involving little or no clitoral stimulation, as Freud's theory made penile-vaginal intercourse the central component to women's sexual satisfaction.
The first major national surveys of sexual behavior were the Kinsey Reports. Alfred Kinsey was the first researcher to harshly criticize Freud's ideas about female sexuality and orgasm when, through his interviews with thousands of women, Kinsey found that most of the women he surveyed could not have vaginal orgasms. He "criticized Freud and other theorists for projecting male constructs of sexuality onto women" and "viewed the clitoris as the main center of sexual response" and the vagina as "relatively unimportant" for sexual satisfaction, relaying that "few women inserted fingers or objects into their vaginas when they masturbated". He "concluded that satisfaction from penile penetration [is] mainly psychological or perhaps the result of referred sensation".
Masters and Johnson's research into the female sexual response cycle, as well as Shere Hite's, generally supported Kinsey's findings about female orgasm. Masters and Johnson's research on the topic came at the time of the second-wave feminist movement, and inspired feminists such as Anne Koedt, author of "The Myth of the Vaginal Orgasm", to speak about the "false distinction" made between clitoral and vaginal orgasms and women's biology not being properly analyzed.
Accounts that the vagina is capable of producing orgasms continue to be subject to debate because, in addition to the vagina's low concentration of nerve endings, reports of the G-spot's location are inconsistent—it appears to be nonexistent in some women and may be an extension of another structure, such as the Skene's gland or the clitoris. In a January 2012 "The Journal of Sexual Medicine" review examining years of research into the existence of the G-spot, scholars stated that "[r]eports in the public media would lead one to believe the G-spot is a well-characterized entity capable of providing extreme sexual stimulation, yet this is far from the truth".
Possible explanations for the G-spot were examined by Masters and Johnson, who were the first researchers to determine that the clitoral structures surround and extend along and within the labia. In addition to observing that the majority of their female subjects could only have clitoral orgasms, they found that both clitoral and vaginal orgasms had the same stages of physical response. On this basis, they argued that clitoral stimulation is the source of both kinds of orgasms, reasoning that the clitoris is stimulated during penetration by friction against its hood; their notion that this provides the clitoris with sufficient sexual stimulation has been criticized by researchers such as Elisabeth Lloyd.
Australian urologist Helen O'Connell's 2005 research additionally indicates a connection between orgasms experienced vaginally and the clitoris, suggesting that clitoral tissue extends into the anterior wall of the vagina and that therefore clitoral and vaginal orgasms are of the same origin. Some studies, using ultrasound, have found physiological evidence of the G-spot in women who report having orgasms during vaginal intercourse, but O'Connell suggests that the clitoris's interconnected relationship with the vagina is the physiological explanation for the conjectured G-spot. Having used MRI technology which enabled her to note a direct relationship between the legs or roots of the clitoris and the erectile tissue of the "clitoral bulbs" and corpora, and the distal urethra and vagina, she stated that the vaginal wall is the clitoris; that lifting the skin off the vagina on the side walls reveals the bulbs of the clitoris—triangular, crescental masses of erectile tissue. O'Connell et al., who performed dissections on the female genitals of cadavers and used photography to map the structure of nerves in the clitoris, were already aware that the clitoris is more than just its glans and asserted in 1998 that there is more erectile tissue associated with the clitoris than is generally described in anatomical textbooks. They concluded that some females have more extensive clitoral tissues and nerves than others, especially having observed this in young cadavers as compared to elderly ones, and therefore whereas the majority of females can only achieve orgasm by direct stimulation of the external parts of the clitoris, the stimulation of the more generalized tissues of the clitoris via intercourse may be sufficient for others.
French researchers Odile Buisson and Pierre Foldès reported findings similar to those of O'Connell. In 2008, they published the first complete 3D sonography of the stimulated clitoris, and republished it in 2009 with new research demonstrating the ways in which the erectile tissue of the clitoris engorges and surrounds the vagina, arguing that women may be able to achieve vaginal orgasm via stimulation of the G-spot because the highly innervated clitoris is pulled closely to the anterior wall of the vagina when the woman is sexually aroused and during vaginal penetration. They assert that since the front wall of the vagina is inextricably linked with the internal parts of the clitoris, stimulating the vagina without activating the clitoris may be next to impossible. In their 2009 published study, the "coronal planes during perineal contraction and finger penetration demonstrated a close relationship between the root of the clitoris and the anterior vaginal wall". Buisson and Foldès suggested "that the special sensitivity of the lower anterior vaginal wall could be explained by pressure and movement of clitoris's root during a vaginal penetration and subsequent perineal contraction".
Supporting a distinct G-spot is a study by Rutgers University, published in 2011, which was the first to map the female genitals onto the sensory portion of the brain; brain scans showed that the brain registered distinct feelings between stimulating the clitoris, the cervix and the vaginal wall – where the G-spot is reported to be – when several women stimulated themselves in a functional magnetic resonance imaging (fMRI) machine. "I think that the bulk of the evidence shows that the G-spot is not a particular thing," stated Barry Komisaruk, head of the research. "It's not like saying, 'What is the thyroid gland?' The G-spot is more of a thing like New York City is a thing. It's a region, it's a convergence of many different structures." Commenting on Komisaruk's research and other findings, Emmanuele Jannini, a professor of endocrinology at the University of L'Aquila in Italy, acknowledged a series of essays published in March 2012 in "The Journal of Sexual Medicine", which document evidence that vaginal and clitoral orgasms are separate phenomena that activate different areas of the brain and possibly suggest key psychological differences between women.
Regular difficulty reaching orgasm after ample sexual stimulation, known as anorgasmia, is significantly more common in women than in men (see below). In addition to sexual dysfunction being a cause for women's inability to reach orgasm, or the amount of time for sexual arousal needed to reach orgasm being variable and longer in women than in men, other factors include a lack of communication between sexual partners about what is needed for the woman to reach orgasm, feelings of sexual inadequacy in either partner, a focus on only penetration (vaginal or otherwise), and men generalizing women's trigger for orgasm based on their own sexual experiences with other women.
Scholars state that "many couples are locked into the idea that orgasms should be achieved only through intercourse [vaginal sex]" and that "[e]ven the word "foreplay" suggests that any other form of sexual stimulation is merely preparation for the 'main event.' ... Because women reach orgasm through intercourse less consistently than men, they are more likely than men to have faked an orgasm". Sex counselor Ian Kerner stated, "It's a myth that using the penis is the main way to pleasure a woman." He cites research concluding that women reach orgasm about 25% of the time with intercourse, compared with 81% of the time during oral sex (cunnilingus).
In the first large-scale empirical study worldwide to link specific practices with orgasm, reported in the "Journal of Sex Research" in 2006, demographic and sexual history variables were comparatively weakly associated with orgasm. Data was analyzed from the Australian Study of Health and Relationships, a national telephone survey of sexual behavior and attitudes and sexual health knowledge carried out in 2001–2002, with a representative sample of 19,307 Australians aged 16 to 59. Practices included "vaginal intercourse alone (12%), vaginal + manual stimulation of the man's and/or woman's genitals (49%), and vaginal intercourse + manual + oral (32%)" and the "[e]ncounters may also have included other practices. Men had an orgasm in 95% of encounters and women in 69%. Generally, the more practices engaged in, the higher a woman's chance of having an orgasm. Women were more likely to reach orgasm in encounters including cunnilingus".
Other studies suggest that women exposed to lower levels of prenatal androgens are more likely to experience orgasm during vaginal intercourse than other women.
Kinsey, in his 1953 book "Sexual Behavior in the Human Female", stated that exercise could bring about sexual pleasure, including orgasm. A 1990 review of the sexual response itself as exercise surveyed the literature and stated that the field was poorly researched; it also said that studies had found that aerobic or isotonic exercise that resembles sexual activity or sexual positions can induce sexual pleasure, including orgasm. A 2007 review of the relationship between pelvic floor dysfunction and sexual problems in men and women found that they are commonly linked and suggested that physical therapy strengthening the pelvic floor could help address the sexual problems, but that the area was not well studied enough to support a recommendation. Starting in at least 2007, the term "coregasm" was used in popular media to refer to exercise-induced orgasm, termed in academic parlance "exercise-induced sexual pleasure" or EISP, and an extensive discussion of the "yogasm" occurred in a 2011 "Daily Beast" posting. A paper published in 2012 presented results of an online survey of women who had experienced an orgasm or other sexual pleasure during exercise. The paper was widely discussed in popular media when it was published. The authors of the paper said that research on the relationship between exercise and sexual response was still lacking.
In both sexes, pleasure can be derived from the nerve endings around the anus and the anus itself, such as during anal sex. It is possible for men to achieve orgasm through prostate stimulation alone. The prostate is the male homologue of the Skene's glands (which are believed to be connected to the female G-spot), and can be sexually stimulated through anal sex, perineum massage or via a vibrator. Prostate stimulation can produce a deeper orgasm, described by some men as more widespread and intense, longer-lasting, and allowing for greater feelings of ecstasy than orgasm elicited by penile stimulation only. The practice of pegging (consisting of a woman penetrating a man's anus with a strap-on dildo) stimulates the prostate. It is also typical for a man not to reach orgasm as a receptive partner solely from anal sex.
For women, penile-anal penetration may also indirectly stimulate the clitoris by the shared sensory nerves, especially the pudendal nerve, which gives off the inferior anal nerves and divides into the perineal nerve and the dorsal nerve of the clitoris. The G-spot area, which is considered to be interconnected with the clitoris, may also be indirectly stimulated during anal sex. Although the anus has many nerve endings, their purpose is not specifically for inducing orgasm, and so a woman achieving orgasm solely by anal stimulation is rare. Direct stimulation of the clitoris, a G-spot area, or both, while engaging in anal sex can help some women enjoy the activity and reach orgasm during it.
The aforementioned orgasms are sometimes referred to as "anal orgasms," but sexologists and sex educators generally believe that orgasms derived from anal penetration are the result of the relationship between the nerves of the anus, rectum, clitoris or G-spot area in women, and the anus's proximity to the prostate and relationship between the anal and rectal nerves in men, rather than orgasms originating from the anus itself.
For women, stimulation of the breast area during sexual intercourse or foreplay, or solely having the breasts fondled, can create mild to intense orgasms, sometimes referred to as a "breast orgasm" or "nipple orgasm". Few women report experiencing orgasm from nipple stimulation. Before Komisaruk et al.'s functional magnetic resonance imaging (fMRI) research on nipple stimulation in 2011, reports of women achieving orgasm from nipple stimulation relied solely on anecdotal evidence. Komisaruk's study was the first to map the female genitals onto the sensory portion of the brain; it indicates that sensation from the nipples travels to the same part of the brain as sensations from the vagina, clitoris and cervix, that these reported orgasms are genital orgasms caused by nipple stimulation, and that they may be directly linked to the genital sensory cortex ("the genital area of the brain").
An orgasm is believed to occur in part because of the hormone oxytocin, which is produced in the body during sexual excitement and arousal, as well as during labor. It has also been shown that oxytocin is produced when a man's or woman's nipples are stimulated and become erect. Komisaruk also relayed, however, that preliminary data suggest that nipple nerves may directly link up with the relevant parts of the brain without uterine mediation; he acknowledged the men in his study who showed the same pattern of nipple stimulation activating genital brain regions.
Masters and Johnson were some of the first researchers to study the sexual response cycle in the early 1960s, based on the observation of 382 women and 312 men. They described a cycle that begins with excitement as blood rushes into the genitals, then reaches a plateau during which they are fully aroused, which leads to orgasm, and finally resolution, in which the blood leaves the genitals.
In the 1970s, Helen Singer Kaplan added the category of desire to the cycle, which she argued precedes sexual excitation. She stated that emotions of anxiety, defensiveness and the failure of communication can interfere with desire and orgasm. In the late 1980s and after, Rosemary Basson proposed a more cyclical alternative to what had largely been viewed as linear progression. In her model, desire feeds arousal and orgasm, and is in turn fueled by the rest of the orgasmic cycle. Rather than orgasm being the peak of the sexual experience, she suggested that it is just one point in the circle and that people could feel sexually satisfied at any stage, reducing the focus on climax as an end-goal of all sexual activity.
As a man nears orgasm during stimulation of the penis, he feels an intense and highly pleasurable pulsating sensation of neuromuscular euphoria. These pulses are a series of throbbing sensations of the bulbospongiosus muscles that begin in the anal sphincter and travel to the tip of the penis. They eventually increase in speed and intensity as the orgasm approaches, until a final "plateau" of orgasmic pleasure sustained for several seconds. The length of a man's orgasm has been estimated at 10–15 seconds on average, though it is possible for one to last up to 30 seconds.
During orgasm, a human male experiences rapid, rhythmic contractions of the anal sphincter, the prostate, and the muscles of the penis. The sperm are transmitted up the vas deferens from the testicles, into the prostate gland as well as through the seminal vesicles, to produce what is known as semen. The prostate produces a secretion that forms one of the components of the ejaculate. Except in cases of a dry orgasm, contraction of the sphincter and prostate forces stored semen to be expelled through the penis's urethral opening. The process takes from three to ten seconds, and produces a pleasurable feeling. Ejaculation may continue for a few seconds after the euphoric sensation gradually tapers off. It is believed that the exact feeling of "orgasm" varies from one man to another. Normally, as a man ages, the amount of semen he ejaculates diminishes, and so does the duration of orgasms. This does not normally affect the intensity of pleasure, but merely shortens the duration. After ejaculation, a refractory period usually occurs, during which a man cannot achieve another orgasm. This can last anywhere from less than a minute to several hours or days, depending on age and other individual factors.
A woman's orgasm may last slightly longer or much longer than a man's. Women's orgasms have been estimated to last, on average, approximately 20 seconds, and to consist of a series of muscular contractions in the pelvic area that includes the vagina, the uterus, and the anus. For some women, on some occasions, these contractions begin soon after the woman reports that the orgasm has started and continue at intervals of about one second with initially increasing, and then reducing, intensity. In some instances, the series of regular contractions is followed by a few additional contractions or shudders at irregular intervals. In other cases, the woman reports having an orgasm, but no pelvic contractions are measured at all.
Women's orgasms are preceded by erection of the clitoris and moistening of the opening of the vagina. Some women exhibit a sex flush, a reddening of the skin over much of the body due to increased blood flow to the skin. As a woman nears orgasm, the clitoral glans retracts under the clitoral hood, and the labia minora (inner lips) become darker. As orgasm becomes imminent, the outer third of the vagina tightens and narrows, while overall the vagina lengthens and dilates and also becomes congested from engorged soft tissue.
Elsewhere in the body, myofibroblasts of the nipple-areolar complex contract, causing erection of the nipples and contraction of the areolar diameter, reaching their maximum at the start of orgasm. A woman experiences full orgasm when her uterus, vagina, anus, and pelvic muscles undergo a series of rhythmic contractions. Most women find these contractions very pleasurable.
Researchers from the University Medical Center of Groningen in the Netherlands correlated the sensation of orgasm with muscular contractions occurring at a frequency of 8–13 Hz, centered in the pelvis and measured in the anus. They argue that the presence of this particular frequency of contractions can distinguish between voluntary contraction of these muscles and spontaneous involuntary contractions, and appears to correlate more accurately with orgasm than other metrics, such as heart rate, that measure only excitation. They assert that they have identified "[t]he first objective and quantitative measure that has a strong correspondence with the subjective experience that orgasm ultimately is" and state that the measure of contractions that occur at a frequency of 8–13 Hz is specific to orgasm. They found that using this metric they could distinguish orgasm from rest, voluntary muscular contractions, and even unsuccessful orgasm attempts.
Since ancient times in Western Europe, women could be medically diagnosed with a disorder called female hysteria, the symptoms of which included faintness, nervousness, insomnia, fluid retention, heaviness in the abdomen, muscle spasm, shortness of breath, irritability, loss of appetite for food or sex, and "a tendency to cause trouble". Women considered to be suffering from the condition would sometimes undergo "pelvic massage" — stimulation of the genitals by the doctor until the woman experienced "hysterical paroxysm" (i.e., orgasm). Paroxysm was regarded as a medical treatment, and not a sexual release. The disorder has not been recognized as a medical condition since the 1920s.
There have been very few studies correlating orgasm and brain activity in real time. One study examined 12 healthy women using a positron emission tomography (PET) scanner while they were being stimulated by their partners. Brain changes were observed and compared between states of rest, sexual stimulation, faked orgasm, and actual orgasm. Differences were reported in the brains of men and women during stimulation. However, changes in brain activity were observed in both sexes in which the brain regions associated with behavioral control, fear and anxiety shut down. Regarding these changes, Gert Holstege said in an interview with "The Times", "What this means is that deactivation, letting go of all fear and anxiety, might be the most important thing, even necessary, to have an orgasm."
While the clitoris is being stroked, the parts of the female brain responsible for processing fear, anxiety and behavioral control diminish in activity. This reaches a peak at orgasm, when the female brain's emotion centers are effectively closed down to produce an almost trance-like state. Holstege is quoted as saying, at the 2005 meeting of the European Society for Human Reproduction and Development: "At the moment of orgasm, women do not have any emotional feelings."
Initial reports indicated that it was difficult to observe the effects of orgasm on men using PET scans, because the duration of the male orgasm was shorter. However, a subsequent report by Rudie Kortekaas, et al. stated, "Gender commonalities were most evident during orgasm... From these results, we conclude that during the sexual act, differential brain responses across genders are principally related to the stimulatory (plateau) phase and not to the orgasmic phase itself."
Research has shown that as in women, the emotional centers of a man's brain also become deactivated during orgasm but to a lesser extent than in women. Brain scans of both sexes have shown that the pleasure centers of a man's brain show more intense activity than in women during orgasm.
Human brain wave patterns show distinct changes during orgasm, which indicate the importance of the limbic system in the orgasmic response. Male and female brains demonstrate similar changes during orgasm, with brain activity scans showing a temporary decrease in the metabolic activity of large parts of the cerebral cortex with normal or increased metabolic activity in the limbic areas of the brain.
EEG tracings from volunteers during orgasm were first obtained by Mosovich and Tallaferro in 1954. These researchers recorded EEG changes resembling petit mal or the clonic phase of a grand mal seizure. Further studies in this direction were carried out by Sem-Jacobsen (1968), Heath (1972), Cohen et al. (1976), and others. Sarrel et al. reported a similar observation in 1977. These reports continue to be cited.
Old English literature
Old English literature, or Anglo-Saxon literature, encompasses literature written in Old English in Anglo-Saxon England, from the 7th century to the decades after the Norman Conquest of 1066. "Cædmon's Hymn", composed in the 7th century according to Bede, is often considered the oldest surviving poem in English. Poetry written in the mid-12th century represents some of the latest post-Norman examples of Old English; for example, "The Soul's Address to the Body" (c. 1150–1175), found in Worcester Cathedral Library MS F. 174, contains only one word of possible Latinate origin, while also maintaining a corrupt alliterative meter and Old English grammar and syntax, albeit in a degenerative state (hence, early scholars of Old English termed this late form "Semi-Saxon"). The "Peterborough Chronicle" can also be considered a late-period text, continuing into the 12th century. The strict adherence to the grammatical rules of Old English is largely inconsistent in 12th-century work – as is evident in the works cited above – and by the 13th century the grammar and syntax of Old English had almost completely deteriorated, giving way to the much larger Middle English corpus of literature.
The poem "Beowulf", which often begins the traditional canon of English literature, is the most famous work of Old English literature. The "Anglo-Saxon Chronicle" has also proven significant for historical study, preserving a chronology of early English history.
In descending order of quantity, Old English literature consists of: sermons and saints' lives; biblical translations; translated Latin works of the early Church Fathers; Anglo-Saxon chronicles and narrative history works; laws, wills and other legal works; practical works on grammar, medicine, geography; and poetry. In all there are over 400 surviving manuscripts from the period, of which about 189 are considered "major".
Besides Old English literature, Anglo-Saxons wrote a number of Anglo-Latin works.
The earliest "scholarship" on Old English literature was done by a scribe from Worcester known only as The Tremulous Hand, a sobriquet earned for a hand tremor that caused characteristically messy handwriting, who flourished in the late 12th to early 13th century. The Tremulous Hand is known for many Latin glosses of Old English texts, which represent the earliest attempt to "translate" the language in the post-Norman period, but perhaps his best-known scribal work is the aforementioned Worcester Cathedral Library MS F. 174, which contains part of Ælfric's "Grammar" and "Glossary" and a short fragmentary poem often called "St. Bede's Lament", in addition to the Body and Soul poem. In the 19th and early 20th centuries the focus was on the Germanic and pagan roots that scholars thought they could detect in Old English literature. Later, on account of the work of Bernard F. Huppé, the influence of Augustinian exegesis was emphasised. Today, along with a focus upon paleography and the physical manuscripts themselves more generally, scholars debate such issues as dating, place of origin, authorship, the connections between Anglo-Saxon culture and the rest of Europe in the Middle Ages, and literary merits.
A large number of manuscripts remain from the Anglo-Saxon period, with most written during its last 300 years (9th to 11th centuries).
Manuscripts written in both Latin and the vernacular remain. It is believed that Irish missionaries are responsible for the scripts used in early Anglo-Saxon texts, which include the Insular half-uncial (for important Latin texts) and the Insular minuscule (for both Latin and the vernacular). In the 10th century, the Caroline minuscule was adopted for Latin; however, the Insular minuscule continued to be used for Old English texts. Thereafter, it was increasingly influenced by Caroline minuscule, while retaining certain distinctively Insular letter-forms.
There were considerable losses of manuscripts as a result of the Dissolution of the Monasteries in the 16th century. Scholarly study of the language began when the manuscripts were collected by scholars and antiquarians such as Matthew Parker, Laurence Nowell and Sir Robert Bruce Cotton.
Old English manuscripts have been highly prized by collectors since the 16th century, both for their historic value and for their aesthetic beauty with their uniformly spaced letters and decorative elements.
There are four major poetic manuscripts: the Junius manuscript, the Exeter Book, the Vercelli Book, and the Nowell Codex.
Seven major scriptoria produced a good deal of the surviving Old English manuscripts: Winchester; Exeter; Worcester; Abingdon; Durham; and two Canterbury houses, Christ Church and St. Augustine's Abbey. In addition, some Old English text survives on stone structures and other ornate objects.
Regional dialects include Northumbrian, Mercian, Kentish, and West Saxon; the predominance of the last leads to the speculation that much of the poetry may have been translated into West Saxon at a later date. An example of the dominance of the West Saxon dialect is a pair of charters, from the Stowe and British Museum collections, which outline grants of land in Kent and Mercia but are nonetheless written in the West Saxon dialect of the period.
Early English manuscripts often contain later annotations in the margins of the texts; it is a rarity to find a completely unannotated manuscript. These include corrections, alterations and expansions of the main text, as well as commentary upon it, and even unrelated texts. The majority of these annotations appear to date to the 13th century and later.
Old English poetry falls broadly into two styles or fields of reference, the heroic Germanic and the Christian. Almost all Old English poets are anonymous.
Although there are Anglo-Saxon discourses on Latin prosody, the rules of Old English verse are understood only through modern analysis of the extant texts. The first widely accepted theory was constructed by Eduard Sievers (1893), who distinguished five distinct alliterative patterns. His system of alliterative verse is based on accent, alliteration, the quantity of vowels, and patterns of syllabic accentuation. It consists of five permutations on a base verse scheme; any one of the five types can be used in any verse. The system was inherited from and exists in one form or another in all of the older Germanic languages. Two poetic figures commonly found in Old English poetry are the kenning, an often formulaic phrase that describes one thing in terms of another (e.g. in "Beowulf", the sea is called the "whale road") and litotes, a dramatic understatement employed by the author for ironic effect. Alternative theories have been proposed, such as the theory of John C. Pope (1942), which uses musical notation to track the verse patterns. J. R. R. Tolkien describes and illustrates many of the features of Old English poetry in his 1940 essay "On Translating "Beowulf"".
Even though all extant Old English poetry survives in written, literate form, it is assumed that Old English poetry was an oral craft that was performed by a "scop" and accompanied by a harp.
Most Old English poems are recorded without authors, and very few names are known with any certainty; the primary three are Cædmon, Aldhelm, and Cynewulf.
Cædmon is considered the first Old English poet whose work still survives. According to the account in Bede's "Historia ecclesiastica", he was first a herdsman before living as a monk at the abbey of Whitby in Northumbria in the 7th century. Only his first poem, the nine-line "Cædmon's Hymn", remains, in Northumbrian, West Saxon and Latin versions that appear in 19 surviving manuscripts.
Cynewulf has proven to be a difficult figure to identify, but recent research suggests he was an Anglian poet from the early part of the 9th century. Four poems are attributed to him, signed with a runic acrostic at the end of each poem; these are "The Fates of the Apostles" and "Elene" (both found in the Vercelli Book), and "Christ II" and "Juliana" (both found in the Exeter Book).
Although William of Malmesbury claims that Aldhelm, bishop of Sherborne (d. 709), performed secular songs while accompanied by a harp, none of these Old English poems survives. Paul G. Remely has recently proposed that the Old English "Exodus" may have been the work of Aldhelm, or someone closely associated with him.
Bede is often thought to be the poet of a five-line poem entitled "Bede's Death Song", on account of its appearance in a letter on his death by Cuthbert. The poem exists in a Northumbrian version and a later one.
Alfred is said to be the author of some of the metrical prefaces to the Old English translations of Gregory's "Pastoral Care" and Boethius's "Consolation of Philosophy". Alfred is also thought to be the author of 50 metrical psalms, but whether the poems were written by him, under his direction or patronage, or as a general part in his reform efforts is unknown.
The hypotheses of Milman Parry and Albert Lord on the Homeric Question came to be applied (by Parry and Lord, but also by Francis Magoun) to verse written in Old English. That is, the theory proposes that certain features of at least some of the poetry may be explained by positing oral-formulaic composition. While Anglo-Saxon (Old English) epic poetry may bear some resemblance to Ancient Greek epics such as the "Iliad" and "Odyssey", the question of whether and how Anglo-Saxon poetry was passed down through an oral tradition remains a subject of debate, and it is unlikely to be answered with perfect certainty for any particular poem.
Parry and Lord had already demonstrated the density of metrical formulas in Ancient Greek, and observed that the same phenomenon was apparent in the Old English alliterative line.
In addition to verbal formulas, many themes have been shown to appear among the various works of Anglo-Saxon literature. The theory proposes to explain this fact by suggesting that the poetry was composed of formulae and themes from a stock common to the poetic profession, as well as literary passages composed by individual artists in a more modern sense. Larry Benson introduced the concept of "written-formulaic" to describe the status of some Anglo-Saxon poetry which, while demonstrably written, contains evidence of oral influences, including heavy reliance on formulas and themes. Frequent oral-formulaic themes in Old English poetry include "Beasts of Battle" and the "Cliff of Death". The former, for example, is characterised by the mention of ravens, eagles, and wolves preceding particularly violent depictions of battle. Among the most thoroughly documented themes is "The Hero on the Beach", a theme first proposed by D. K. Crowne and defined by four characteristics.
One example Crowne cites in his article is the passage that concludes Beowulf's fight with the monsters during his swimming match with Breca.
Crowne drew on examples of the theme's appearance in twelve Anglo-Saxon texts, including one occurrence in "Beowulf". It was also observed in other works of Germanic origin, Middle English poetry, and even an Icelandic prose saga. John Richardson held that the schema was so general as to apply to virtually any character at some point in the narrative, and thought it an instance of the "threshold" feature of Joseph Campbell's Hero's Journey monomyth. J. A. Dane, in an article characterised by Foley as "polemics without rigour", claimed that the appearance of the theme in Ancient Greek poetry, a tradition without known connection to the Germanic, invalidated the notion of "an autonomous theme in the baggage of an oral poet." Foley's response was that Dane misunderstood the nature of oral tradition, and that in fact the appearance of the theme in other cultures showed that it was a traditional form.
The Old English poetry which has received the most attention deals with the Germanic heroic past. The longest, at 3,182 lines, and the most important is "Beowulf", which appears in the damaged Nowell Codex. "Beowulf" relates the exploits of the hero Beowulf, King of the Weder-Geats or Angles, around the middle of the 5th century. The author is unknown, and no mention of Britain occurs. Scholars are divided over the date of the present text, with hypotheses ranging from the 8th to the 11th centuries. It has achieved much acclaim as well as sustained academic and artistic interest.
Other heroic poems besides "Beowulf" exist. Two have survived in fragments: "The Fight at Finnsburh", controversially interpreted by many to be a retelling of one of the battle scenes in "Beowulf", and "Waldere", a version of the events of the life of Walter of Aquitaine. Two other poems mention heroic figures: "Widsith" is believed to be very old in parts, dating back to events in the 4th century concerning Eormanric and the Goths, and contains a catalogue of names and places associated with valiant deeds. "Deor" is a lyric, in the style of "Consolation of Philosophy", applying examples of famous heroes, including Weland and Eormanric, to the narrator's own case.
The "Anglo-Saxon Chronicle" contains various heroic poems inserted throughout. The earliest from 937 is called "The Battle of Brunanburh", which celebrates the victory of King Athelstan over the Scots and Norse. There are five shorter poems: capture of the Five Boroughs (942); coronation of King Edgar (973); death of King Edgar (975); death of Alfred the son of King Æthelred (1036); and death of King Edward the Confessor (1065).
The 325-line poem "The Battle of Maldon" celebrates Earl Byrhtnoth and his men who fell in battle against the Vikings in 991. It is considered one of the finest of the heroic poems, but both the beginning and end are missing, and the only manuscript was destroyed in a fire in 1731. A well-known speech occurs near the end of the poem.
Old English heroic poetry was handed down orally from generation to generation. As Christianity began to spread, re-tellers often recast Christian tales in the style of the older heroic stories.
Related to the heroic tales are a number of short poems from the Exeter Book which have come to be described as "elegies" or "wisdom poetry". They are lyrical and Boethian in their description of the up and down fortunes of life. Gloomy in mood is "The Ruin", which tells of the decay of a once glorious city of Roman Britain (cities in Britain fell into decline after the Romans departed in the early 5th century, as the early English continued to live their rural life), and "The Wanderer", in which an older man talks about an attack that happened in his youth, where his close friends and kin were all killed; memories of the slaughter have remained with him all his life. He questions the wisdom of the impetuous decision to engage a possibly superior fighting force: the wise man engages in warfare to "preserve" civil society, and must not rush into battle but seek out allies when the odds may be against him. This poet finds little glory in bravery for bravery's sake. "The Seafarer" is the story of a somber exile from home on the sea, from which the only hope of redemption is the joy of heaven. Other wisdom poems include "Wulf and Eadwacer", "The Wife's Lament", and "The Husband's Message". Alfred the Great wrote a wisdom poem over the course of his reign based loosely on the neoplatonic philosophy of Boethius called the "Lays of Boethius".
Several Old English poems are adaptations of late classical philosophical texts. The longest is a 10th-century translation of Boethius' "Consolation of Philosophy" contained in the Cotton manuscript Otho A.vi. Another is "The Phoenix" in the Exeter Book, an allegorisation of the "De ave phoenice" by Lactantius.
Other short poems derive from the Latin bestiary tradition. Some examples include "The Panther", "The Whale" and "The Partridge".
Anglo-Saxon riddles are part of Anglo-Saxon literature. The most famous Anglo-Saxon riddles are found in the Exeter Book. This book contains secular and religious poems and other writings, along with a collection of 94 riddles, although there is speculation that there may have been closer to 100 riddles in the book. The riddles are written in a similar manner, but "it is unlikely that the whole collection was written by one person." It is more likely that many scribes worked on this collection of riddles. Although the Exeter Book has a unique and extensive collection of Anglo-Saxon riddles, riddles were not uncommon during this era. Riddles were both comical and obscene.
The Vercelli Book and Exeter Book contain four long narrative poems of saints' lives, or hagiography. In Vercelli are "Andreas" and "Elene" and in Exeter are "Guthlac" and "Juliana".
"Andreas" is 1,722 lines long and is the closest of the surviving Old English poems to "Beowulf" in style and tone. It is the story of Saint Andrew and his journey to rescue Saint Matthew from the Mermedonians. "Elene" is the story of Saint Helena (mother of Constantine) and her discovery of the True Cross. The cult of the True Cross was popular in Anglo-Saxon England and this poem was instrumental in promoting it.
"Guthlac" consists of two poems about the English 7th century Saint Guthlac. "Juliana" describes the life of Saint Juliana, including a discussion with the devil during her imprisonment.
There are a number of partial Old English Bible translations and paraphrases surviving. The Junius manuscript contains three paraphrases of Old Testament texts. These were re-wordings of Biblical passages in Old English, not exact translations but paraphrases, sometimes rising to beautiful poetry in their own right. The first and longest is of "Genesis" (originally presented as one work in the Junius manuscript but now thought to consist of two separate poems, A and B), the second is of "Exodus" and the third is "Daniel". Contained in "Daniel" are two lyrics, "Song of the Three Children" and "Song of Azarias", the latter also appearing in the Exeter Book after "Guthlac". The fourth and last poem, "Christ and Satan", which is contained in the second part of the Junius manuscript, does not paraphrase any particular biblical book, but retells a number of episodes from both the Old and New Testament.
The Nowell Codex contains a Biblical poetic paraphrase, which appears right after "Beowulf", called "Judith", a retelling of the story of Judith. This is not to be confused with Ælfric's homily "Judith", which retells the same Biblical story in alliterative prose.
Old English translations of Psalms 51-150 have been preserved, following a prose version of the first 50 Psalms. There are verse translations of the Gloria in Excelsis, the Lord's Prayer, and the Apostles' Creed, as well as some hymns and proverbs.
In addition to Biblical paraphrases are a number of original religious poems, mostly lyrical (non-narrative).
The Exeter Book contains a series of poems entitled "Christ", sectioned into "Christ I", "Christ II" and "Christ III".
Considered one of the most beautiful of all Old English poems is "Dream of the Rood", contained in the Vercelli Book. The presence of a portion of the poem (in the Northumbrian dialect) carved in runes on an 8th-century stone cross found in Ruthwell, Dumfriesshire, verifies the age of at least this portion of the poem. The Dream of the Rood is a dream vision in which the personified cross tells the story of the crucifixion. Christ appears as a young hero-king, confident of victory, while the cross itself feels all the physical pain of the crucifixion, as well as the pain of being forced to kill the young lord. The dreamer resolves to trust in the cross, and the dream ends with a vision of heaven.
There are a number of religious debate poems. The longest is "Christ and Satan" in the Junius manuscript; it deals with the conflict between Christ and Satan during the forty days in the desert. Another debate poem is "Solomon and Saturn", surviving in a number of textual fragments; Saturn is portrayed as a magician debating with the wise king Solomon.
Other poetic forms exist in Old English including short verses, gnomes, and mnemonic poems for remembering long lists of names.
There are short verses found in the margins of manuscripts which offer practical advice, such as remedies against the loss of cattle or how to deal with a delayed birth, often grouped as charms. The longest is called "Nine Herbs Charm" and is probably of pagan origin. Other similar short verses, or charms, include "For a Swarm of Bees", "Against a Dwarf", "Against a Stabbing Pain", and "Against a Wen".
There are a group of mnemonic poems designed to help memorise lists and sequences of names and to keep objects in order. These poems are named "Menologium", "The Fates of the Apostles", "The Rune Poem", "The Seasons for Fasting", and the "Instructions for Christians".
Anglo-Saxon poetry is marked by the comparative rarity of similes. This is a particular feature of Anglo-Saxon verse style, and is a consequence both of its structure and of the rapidity with which images are deployed, which make it unable to effectively support the expanded simile. As an example of this, "Beowulf" contains at most five similes, and these are of the short variety. This can be contrasted sharply with the strong and extensive dependence that Anglo-Saxon poetry has upon metaphor, particularly that afforded by the use of kennings. The most prominent example of this in "The Wanderer" is the reference to battle as a "storm of spears". This reference to battle shows how Anglo-Saxons viewed battle: as unpredictable, chaotic, violent, and perhaps even a function of nature.
Old English poetry traditionally alliterates, meaning that a sound (usually the initial consonant sound) is repeated throughout a line. For instance, in the first line of "Beowulf", "Hwaet! We Gar-Dena | in gear-dagum", (meaning "Lo! We ... of the Spear Danes in days of yore"), the stressed words "Gar-Dena" and "gear-dagum" alliterate on the consonant "G".
The Old English poet was particularly fond of describing the same person or object with varied phrases, (often appositives) that indicated different qualities of that person or object. For instance, the "Beowulf" poet refers in three and a half lines to a Danish king as "lord of the Danes" (referring to the people in general), "king of the Scyldings" (the name of the specific Danish tribe), "giver of rings" (one of the king's functions is to distribute treasure), and "famous chief". Such variation, which the modern reader (who likes verbal precision) is not used to, is frequently a difficulty in producing a readable translation.
Old English poetry, like other Old Germanic alliterative verse, is also commonly marked by the caesura, or pause. In addition to setting the pace of the line, the caesura also groups each line into two hemistichs.
The amount of surviving Old English prose is much greater than the amount of poetry. Of the surviving prose, the majority consists of the homilies, saints' lives and biblical translations from Latin. The division of early medieval written prose works into categories of "Christian" and "secular", as below, is for convenience's sake only, for literacy in Anglo-Saxon England was largely the province of monks, nuns, and ecclesiastics (or of those laypeople to whom they had taught the skills of reading and writing Latin and/or Old English). Old English prose first appears in the 9th century, and continues to be recorded through the 12th century as the last generation of scribes, trained as boys in the standardised West Saxon before the Conquest, died as old men.
The most widely known secular author of Old English was King Alfred the Great (849–899), who translated several books, many of them religious, from Latin into Old English. Alfred, wanting to restore English culture, lamented the poor state of Latin education and proposed that students be educated in Old English, with those who excelled going on to learn Latin. Alfred's cultural program produced the following translations: Gregory the Great's "The Pastoral Care", a manual for priests on how to conduct their duties; "The Consolation of Philosophy" by Boethius; and "The Soliloquies" of Saint Augustine. In the process, some original content was interwoven through the translations.
Other important Old English translations include: "Historiae adversum paganos" by Orosius, a companion piece for St. Augustine's "The City of God"; the "Dialogues" of Gregory the Great; and Bede's "Ecclesiastical History of the English People".
Ælfric of Eynsham, who wrote in the late 10th and early 11th century, is believed to have been a pupil of Æthelwold. He was the greatest and most prolific writer of Anglo-Saxon sermons, which were copied and adapted for use well into the 13th century. In the translation of the first six books of the Bible (the "Old English Hexateuch"), portions have been assigned to Ælfric on stylistic grounds. He included some lives of the saints in the "Catholic Homilies", as well as a cycle of saints' lives to be used in sermons. Ælfric also wrote an Old English work on time-reckoning, as well as pastoral letters.
In the same category as Ælfric, and a contemporary, was Wulfstan II, archbishop of York. His sermons were highly stylized. His best known work is "Sermo Lupi ad Anglos", in which he blames the sins of the English for the Viking invasions. He also wrote a number of clerical legal texts, including "Institutes of Polity" and "Canons of Edgar".
One of the earliest Old English prose texts is the "Martyrology", a collection of information about saints and martyrs arranged according to their anniversaries and feasts in the church calendar. It has survived in six fragments and is believed to have been written in the 9th century by an anonymous Mercian author.
The oldest collection of church sermons is the "Blickling homilies", found in a 10th-century manuscript.
A number of saints' lives survive as prose works; beyond those written by Ælfric are the prose life of Saint Guthlac (Vercelli Book), the life of Saint Margaret and the life of Saint Chad. There are four additional lives in the earliest manuscript of the "Lives of Saints", the Julius manuscript: Seven Sleepers of Ephesus, Saint Mary of Egypt, Saint Eustace and Saint Euphrosyne.
There are six major manuscripts of the Wessex Gospels, dating from the 11th and 12th centuries. The most popular, the "Old English Gospel of Nicodemus", is treated in one manuscript as though it were a fifth gospel; other apocryphal gospels in translation include the "Gospel of Pseudo-Matthew", "Vindicta salvatoris", "Vision of Saint Paul" and the "Apocalypse of Thomas".
The "Anglo-Saxon Chronicle" was probably started in the time of King Alfred the Great and continued for over 300 years as a historical record of Anglo-Saxon history.
A single example of a Classical romance has survived: a fragment of the story of "Apollonius of Tyre" was translated in the 11th century from the Gesta Romanorum.
A monk who was writing in Old English at the same time as Ælfric and Wulfstan was Byrhtferth of Ramsey, whose book "Handboc" was a study of mathematics and rhetoric. He also produced a work entitled "Computus", which outlined the practical application of arithmetic to the calculation of calendar days and movable feasts, as well as tide tables.
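Byrhtferth's computus worked with the Julian calendar, but the flavour of this kind of calendar arithmetic survives in the modern Gregorian Easter computation. The sketch below implements the anonymous "Meeus/Jones/Butcher" Gregorian computus as a present-day illustration; it is not drawn from Byrhtferth's text:

```python
# The anonymous Gregorian computus (Meeus/Jones/Butcher algorithm),
# shown as a modern example of computus-style calendar arithmetic.
def easter(year: int) -> tuple[int, int]:
    """Return (month, day) of Easter Sunday in the Gregorian calendar."""
    a = year % 19                       # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30  # epact-like quantity
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(easter(2024))  # (3, 31): Easter fell on 31 March in 2024
```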
Ælfric wrote two proto-scientific works, "Hexameron" and "Interrogationes Sigewulfi", dealing with the stories of Creation. He also wrote a grammar and glossary of Latin in Old English, later used by students interested in learning Old French because the text had been glossed in Old French.
In the Nowell Codex is the text of "The Wonders of the East" which includes a remarkable map of the world, and other illustrations. Also contained in Nowell is "Alexander's Letter to Aristotle". Because this is the same manuscript that contains "Beowulf", some scholars speculate it may have been a collection of materials on exotic places and creatures.
There are a number of interesting medical works. There is a translation of "Apuleius's Herbarium" with striking illustrations, found together with "Medicina de Quadrupedibus". A second collection of texts is "Bald's Leechbook", a 10th-century book containing herbal and even some surgical cures. A third collection, known as the "Lacnunga", includes many charms and incantations.
Anglo-Saxon legal texts are a large and important part of the overall corpus. By the 12th century they had been arranged into two large collections (see "Textus Roffensis"). They include laws of the kings, beginning with those of Aethelbert of Kent and ending with those of Cnut, and texts dealing with specific cases and places in the country. An interesting example is "Gerefa" which outlines the duties of a reeve on a large manor estate. There is also a large volume of legal documents related to religious houses. These include many kinds of texts: records of donations by nobles; wills; documents of emancipation; lists of books and relics; court cases; guild rules. All of these texts provide valuable insights into the social history of Anglo-Saxon times, but are also of literary value. For example, some of the court case narratives are interesting for their use of rhetoric.
Old English literature did not disappear in 1066 with the Norman Conquest. Many sermons and works continued to be read and used in part or whole up through the 14th century, and were further catalogued and organised. During the Reformation, when monastic libraries were dispersed, the manuscripts were collected by antiquarians and scholars, including Laurence Nowell, Matthew Parker, Robert Bruce Cotton and Humfrey Wanley. In the 17th century a tradition of Old English dictionaries and reference works began, the first being William Somner's "Dictionarium Saxonico-Latino-Anglicum" (1659). The lexicographer Joseph Bosworth began a dictionary in the 19th century, "An Anglo-Saxon Dictionary", which was completed by Thomas Northcote Toller in 1898 and updated by Alistair Campbell in 1972.
Because Old English was one of the first vernacular languages to be written down, nineteenth-century scholars searching for the roots of European "national culture" (see Romantic Nationalism) took special interest in studying Anglo-Saxon literature, and Old English became a regular part of university curricula. Since WWII there has been increasing interest in the manuscripts themselves—Neil Ker, a paleographer, published the groundbreaking "Catalogue of Manuscripts Containing Anglo-Saxon" in 1957, and by 1980 nearly all Anglo-Saxon manuscript texts were in print. J.R.R. Tolkien is credited with creating a movement to look at Old English as a subject of literary theory in his seminal lecture "Beowulf: The Monsters and the Critics" (1936).
Old English literature has had some influence on modern literature, and notable poets have translated and incorporated Old English poetry. Well-known early translations include Alfred, Lord Tennyson's translation of "The Battle of Brunanburh", William Morris's translation of "Beowulf" and Ezra Pound's translation of "The Seafarer". The influence of the poetry can be seen in the work of modern poets such as T. S. Eliot, Ezra Pound and W. H. Auden. Tolkien adapted the subject matter and terminology of heroic poetry for works like "The Hobbit" and "The Lord of the Rings", and John Gardner wrote "Grendel", which tells the story of Beowulf's opponent from his own perspective.
More recently other notable poets such as Paul Muldoon, Seamus Heaney, Denise Levertov and U. A. Fanthorpe have all shown an interest in Old English poetry. In 1987 Denise Levertov published a translation of Cædmon's Hymn under the title "Caedmon" in the collection "Breathing the Water". This was followed by Seamus Heaney's version of the poem, "Whitby-sur-Moyola", in his "The Spirit Level" (1996); Paul Muldoon's "Caedmona's Hymn" in his "Moy Sand and Gravel" (2002); and U. A. Fanthorpe's "Caedmon's Song" in her "Queuing for the Sun" (2003). These translations differ greatly from one another, just as Seamus Heaney's "Beowulf" (1999) deviates from earlier, similar projects. Heaney uses Irish diction across "Beowulf" to bring what he calls a "special body and force" to the poem, foregrounding his own Ulster heritage, "in order to render (the poem) ever more 'willable forward/again and again and again.'"
Magnavox Odyssey 2
The Magnavox Odyssey 2 (stylized as Magnavox Odyssey²), also known as the Philips Odyssey 2, is a second-generation home video game console that was released in 1978. It was sold in Europe as the Philips Videopac G7000, in Brazil as the Philips Odyssey and in Japan as the Odyssey2 (オデッセイ2 "odessei2"). The Odyssey 2 was one of the three major home consoles prior to the 1983 video game market crash, along with the Atari 2600 and the Intellivision.
In the early 1970s, Magnavox pioneered the home video game industry by successfully bringing the first home console to market, the Odyssey, which was quickly followed by a number of later models, each with a few technological improvements ("see Magnavox Odyssey series"). In 1978, Magnavox, now a subsidiary of North American Philips, decided to release an all-new successor, Odyssey 2.
In 2009, the video game website IGN named the Odyssey 2 the 21st greatest video game console, out of its list of 25.
The original Odyssey had a number of removable circuit cards that switched between the built-in games. With the Odyssey 2, each game could be a completely unique experience, with its own background graphics, foreground graphics, gameplay, scoring, and music. The potential was enormous, as an unlimited number of games could be individually purchased; a game player could purchase a library of video games tailored to their own interest. Unlike any other system at that time, the Odyssey 2 included a full alphanumeric membrane keyboard, which was to be used for educational games, selecting options, or programming (Magnavox released a cartridge called "Computer Intro!" with the intent of teaching simple computer programming).
The Odyssey 2 used the standard joystick design of the 1970s and early 1980s: the original console had a moderately sized silver controller, held in one hand, with a square housing for its eight-direction stick that was manipulated with the other hand. Later releases had a similar black controller, with an 8-pointed star-shaped housing for its eight-direction joystick. In the upper corner of the joystick was a single 'Action' button, silver on the original controllers and red on the black controllers. The games, graphics and packaging were designed by Ron Bradford and Steve Lehner.
One other difference between these controllers is that the earliest releases of the silver joystick were removable: they could be plugged into and unplugged from the back of the unit, while all later silver and all black controllers were hardwired into the rear of the unit itself (the joysticks can still be replaced fairly easily, but not without dismounting the cover deck).
One of the strongest points of the system was its speech synthesis unit, which was released as an add-on for speech, music, and sound effects enhancement. The area that the Odyssey 2 may be best remembered for was its pioneering fusion of board and video games: "The Master Strategy Series". The first game released was "Quest for the Rings!", with gameplay somewhat similar to "Dungeons & Dragons", and a storyline reminiscent of J. R. R. Tolkien's "The Lord of the Rings". Later, two other games were released in this series, "Conquest of the World" and "The Great Wall Street Fortune Hunt", each with its own gameboard.
Its graphics and limited color choices, compared to those of its biggest competitors at the time—the Atari 2600, Mattel's Intellivision and the Bally Astrocade—were its "weakest point". Of these systems, the Odyssey 2 was listed by Jeff Rovin as third in total sales, and as one of the seven major video game suppliers.
The console sold moderately well in the U.S. Prior to the nationwide release of the Mattel Intellivision in 1980, the console video game market was dominated by the competition between the Odyssey 2 and the Atari 2600. It remained one of the three primary consoles from 1980 to mid-1982, though a distant third behind the Atari 2600 and the Mattel Intellivision. By 1983 over one million Odyssey 2 units had been sold in the U.S. alone.
To sell would-be customers on its technical abilities as a computer-based console, the Odyssey 2 was marketed with phrases such as "The Ultimate Computer Video Game System", "Sync-Sound Action", "True-Reality Synthesization", "On-Screen Digital Readouts" and "a serious educational tool" on the packaging for the console and its game cartridges. The titles of all games produced by Magnavox/Philips, aside from "Showdown in 2100 AD", ended with an exclamation point, such as "K.C. Munchkin!" and "Killer Bees!".
No third-party game appeared for the Odyssey 2 in the United States until Imagic's "Demon Attack" in 1983. The lack of third-party support kept the number of new games very limited, but the success of the Philips Videopac G7000 overseas led to two other companies producing games for it: Parker Brothers released "Popeye", "Frogger", "Q*bert" and "Super Cobra", while Imagic also released "Atlantis".
In Europe, the Odyssey 2 did very well on the market. The console was most widely known as the Philips Videopac G7000, or just the Videopac, although branded variants were released in some areas of Europe under the names Philips Videopac C52, Radiola Jet 25, Schneider 7000, and Siera G7000. Philips, as Magnavox's Dutch parent company, used its own name rather than Magnavox's for European marketing. A rare model, the Philips Videopac G7200, was only released in Europe; it had a built-in black-and-white monitor. Videopac game cartridges are mostly compatible with American Odyssey 2 units, although some games have color differences and a few are completely incompatible: the European "Frogger", for example, is unable to show the second half of the playing field, and "Chess" cannot be used on the American model because the extra hardware module it requires does not work with that console. A number of additional games were released in Europe that never came out in the U.S.
In Brazil, the console was released as the Philips Odyssey; the original Magnavox Odyssey had been released in Brazil by a company named "Planil Comércio", not affiliated with Philips or Magnavox. The Odyssey became much more popular in Brazil than it ever was in the U.S.; tournaments were even held for popular games like "K.C.'s Krazy Chase!" ("Come-Come" in Brazil). Game titles were translated into Portuguese, sometimes creating a new story: "Pick-axe Pete", for example, became "Didi na Mina Encantada" (Didi in the Enchanted Mine), referring to Renato Aragão's comedy character, and was one of the most famous Odyssey games in Brazil.
The Odyssey 2 was released in Japan in December 1982 by Kōton Trading Toitarii Enterprise (コートン・トレーディング・トイタリー・エンタープライズ, a division of the DINGU company) under the name オデッセイ2 ("odessei2"). "Japanese" versions of the Odyssey 2 and its games consisted of the American boxes with katakana stickers on them and cheaply printed black-and-white Japanese manuals. The initial price for the console was ¥49,800, approximately US$200 at 1982 exchange rates, or roughly $500 in 2018 dollars. It was apparently not very successful; Japanese Odyssey 2 items are now very difficult to find.
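The price conversion above can be checked with rough arithmetic; the 1982 exchange rate (about ¥250 per US dollar) and the 1982-to-2018 US inflation factor (about 2.6) used below are assumed round values for illustration, not figures from the article:

```python
# Back-of-the-envelope check of the launch-price conversion. Both the
# exchange rate and the inflation factor are assumptions, not sourced figures.
price_yen = 49_800
yen_per_usd_1982 = 250        # assumed average 1982 exchange rate
inflation_1982_to_2018 = 2.6  # assumed US CPI factor over the period

usd_1982 = price_yen / yen_per_usd_1982
usd_2018 = usd_1982 * inflation_1982_to_2018
print(round(usd_1982), round(usd_2018))  # roughly 199 and 518
```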
An open source console emulator for the Odyssey 2 called "O2EM" is available. It includes Philips Videopac G7400 emulation among other features. The emulator works on Linux, Microsoft Windows, DOS and other platforms, and is included within OpenEmu for Mac OS X. "O2EM" (originally not open source) was created in 1997 by computer programmer Daniel Boris and further enhanced by André Rodrigues de la Rocha.
The open source multi-platform multi-system emulator MAME has Odyssey 2 support, and is the only emulator to emulate The Voice expansion module without using sound samples.
Otorhinolaryngology
Otorhinolaryngology (ORL, also called Otolaryngology, Otolaryngology–Head and Neck Surgery (ORL-H&N), or Ear, Nose, and Throat (ENT)) is a surgical subspecialty within medicine that deals with the surgical and medical management of conditions of the head and neck. Doctors who specialize in this area are called otorhinolaryngologists, otolaryngologists, head and neck surgeons, or ear, nose, throat (ENT) surgeons. Patients seek treatment from an otorhinolaryngologist for diseases of the ear, nose, throat, base of the skull, head, and neck. These commonly include functional diseases that affect the senses and activities of eating, drinking, speaking, breathing, swallowing, and hearing. In addition, ENT surgery encompasses the surgical management and reconstruction of cancers and benign tumors of the head and neck as well as plastic surgery of the face and neck (facial plastic surgery).
The term is a combination of New Latin combining forms ("oto-" + "rhino-" + "laryngo-" + "-logy") derived from four Ancient Greek words: οὖς "ous" (gen.: ὠτός "otos"), "ear", ῥίς "rhis", "nose", λάρυγξ "larynx", "larynx" and -λογία "logia", "study" (cf. Greek ωτορινολαρυγγολόγος, "otorhinolaryngologist").
Otorhinolaryngologists are physicians (MD, DO, MBBS, MBChB, etc.) who complete medical school and then 5–7 years of post-graduate surgical training in ORL-H&N. In the United States and Canada, trainees complete at least five years of surgical residency training after medical school, comprising three to six months of general surgical training and four and a half years in ORL-H&N specialist surgery.
Following residency training, some otolaryngologist-head & neck surgeons complete an advanced sub-specialty fellowship, where training can be one to two years in duration. Fellowships include head and neck surgical oncology, facial plastic surgery, rhinology and sinus surgery, neuro-otology, pediatric otolaryngology, and laryngology. In the United States and Canada, otorhinolaryngology is one of the most competitive specialties in medicine in which to obtain a residency position following medical school.
In the United Kingdom, entrance to otorhinolaryngology higher surgical training is highly competitive and involves a rigorous national selection process. The training programme consists of six years of higher surgical training, after which trainees frequently undertake fellowships in a sub-speciality prior to becoming a consultant.
The typical total length of education and training after secondary school is 12–14 years. Otolaryngology is among the more highly compensated surgical specialties in the United States, with an average annual income of $461,000 in 2019.
Head and Neck Surgery
Otology covers the study of diseases of the outer ear, the middle ear and mastoid, and the inner ear, as well as surrounding structures (such as the facial nerve and lateral skull base).
Rhinology includes nasal dysfunction and sinus diseases.
Facial Plastic and Reconstructive Surgery is a one-year fellowship open to otorhinolaryngologists and plastic surgeons who wish to specialize in the aesthetic and reconstructive surgery of the head, face, and neck.
In this type of surgery, a surgeon harvests a muscle from the back or from the abdominal region for reconstruction of the skull or the cranial vault; the latissimus dorsi is a muscle of the back, and the rectus abdominis of the abdomen. The muscle is sometimes useful for sealing off the central nervous system and allowing complex wounds to heal. A study was done with five patients who underwent free muscle transfer for smile reconstruction. Two of the five patients had failed their first free muscle transfer prior to this surgery, two had vascular anomalies, and one had a previous distal ligation of the facial vessels. In three of the cases a submental vein was used, and in all of the cases a donor submental artery was used. "In all 5 the gracilis vascular pedicle comprised a muscular branch of the profunda femoris together with its venae comitantes, with the artery and vein ranging in size from 1.0 to 1.5 mm and 2.0 to 2.5 mm, respectively. The submental artery provided an excellent size match in all cases, ranging in size from 1.0 to 1.5 mm" (Faltaous AA, Yetman RJ). The first patient was a 45-year-old woman who had developed a dense flaccid right facial paralysis at the age of 33. The second was an 8-year-old girl who had developed dense flaccid left facial paralysis after laser treatment at four weeks of age for "bilateral infantile segmental hemangiomas in the distribution of the mandibular division of the trigeminal nerve (V3)" (volume 38, issue 10). The third was a 19-year-old male who had developed a segmental right facial paralysis after excision of an infantile parotid hemangioma at the age of 2. The fourth was a 20-year-old woman who had developed dense flaccid right facial paralysis after a biopsy of a pontomedullary junction tumor at the age of 2. The fifth was a 19-year-old woman with incomplete flaccid left facial palsy.
Bone defects are often the most difficult reconstructions, as they require precise alignment. Bone transfer is commonly used for mandibular reconstruction, and now also allows surgeons to reconstruct the midface and the orbitomaxillary region. If for some reason the fibula is not available for transfer, another option is a free flap of rib from the back, which provides the needed bone volume. The earliest recorded bone transfer was done as far back as 2000 BCE, when Peruvian priests implanted metallic plates to reconstruct the contour defects of religious trephination. In 1668, Job van Meekeren reported the use of a dog bone graft to reconstruct the calvarium of a soldier. "…the ideal of the future: the insertion of a piece of living bone which will exactly fill the gap and will continue to live without absorption." (The Epitome of Medicine).
The radial forearm flap is the most commonly used flap for covering defects. Today, the anterolateral thigh flap is also used for head and neck reconstruction because it matches the recipient site well and is easy to harvest. When harvesting tissue, safe options include skin alone, skin and fat, fat and fascia, or just the fascia by itself.
Microvascular reconstruction is a common operation performed on patients who see an otorhinolaryngologist. It is a surgical procedure that involves moving a composite piece of tissue from elsewhere in the patient's body to the head or neck. Microvascular head and neck reconstruction is used to treat head and neck cancers, including those of the larynx and pharynx, oral cavity, salivary glands, jaws, calvarium, sinuses, tongue and skin. The transferred tissue most commonly comes from the arms, legs or back, and can consist of skin, bone, fat or muscle; the choice of tissue is determined by the reconstructive needs. When pieces of tissue are moved, they require their own blood supply to survive in their new location. After the transfer is completed, the blood vessels that feed the tissue are reconnected to blood vessels in the neck. These vessels are typically no more than 1 to 3 millimeters in diameter, so the connections must be made under a microscope, which is why the procedure is called "microvascular surgery".
Olympic Games
The modern Olympic Games or Olympics are leading international sporting events featuring summer and winter sports competitions in which thousands of athletes from around the world participate in a variety of competitions. The Olympic Games are considered the world's foremost sports competition, with more than 200 nations participating. The Olympic Games are normally held every four years, alternating between the Summer and Winter Games every two years in the four-year period.
Their creation was inspired by the ancient Olympic Games, which were held in Olympia, Greece, from the 8th century BC to the 4th century AD. Baron Pierre de Coubertin founded the International Olympic Committee (IOC) in 1894, leading to the first modern Games in Athens in 1896. The IOC is the governing body of the Olympic Movement, with the Olympic Charter defining its structure and authority.
The evolution of the Olympic Movement during the 20th and 21st centuries has resulted in several changes to the Olympic Games. Some of these adjustments include the creation of the Winter Olympic Games for snow and ice sports, the Paralympic Games for athletes with a disability, the Youth Olympic Games for athletes aged 14 to 18, the five Continental games (Pan American, African, Asian, European, and Pacific), and the World Games for sports that are not contested in the Olympic Games. The Deaflympics and Special Olympics are also endorsed by the IOC. The IOC has had to adapt to a variety of economic, political, and technological advancements. The abuse of amateur rules by the Eastern Bloc nations prompted the IOC to shift away from pure amateurism, as envisioned by Coubertin, to allowing participation of professional athletes. The growing importance of mass media created the issue of corporate sponsorship and commercialisation of the Games. World wars led to the cancellation of the 1916, 1940, and 1944 Games. Large-scale boycotts during the Cold War limited participation in the 1980 and 1984 Games, and the 2020 Games were postponed to 2021 due to the COVID-19 pandemic.
The Olympic Movement consists of international sports federations (IFs), National Olympic Committees (NOCs), and organising committees for each specific Olympic Games. As the decision-making body, the IOC is responsible for choosing the host city for each Games, and organises and funds the Games according to the Olympic Charter. The IOC also determines the Olympic programme, consisting of the sports to be contested at the Games. There are several Olympic rituals and symbols, such as the Olympic flag and torch, as well as the opening and closing ceremonies. Over 14,000 athletes competed at the 2016 Summer Olympics and 2018 Winter Olympics combined, in 35 different sports and over 400 events. The first, second, and third-place finishers in each event receive Olympic medals: gold, silver, and bronze, respectively.
The Games have grown so much that nearly every nation is now represented. This growth has created numerous challenges and controversies, including boycotts, doping, bribery, and a terrorist attack in 1972. Every two years the Olympics and its media exposure provide athletes with the chance to attain national and sometimes international fame. The Games also constitute an opportunity for the host city and country to showcase themselves to the world.
The Ancient Olympic Games were religious and athletic festivals held every four years at the sanctuary of Zeus in Olympia, Greece. Competition was among representatives of several city-states and kingdoms of Ancient Greece. These Games featured mainly athletic events, but also combat sports such as wrestling and the pankration, as well as horse and chariot racing. It has been widely written that during the Games, all conflicts among the participating city-states were postponed until the Games were finished. This cessation of hostilities was known as the Olympic peace or truce. This idea is a modern myth, because the Greeks never suspended their wars; the truce did, however, allow religious pilgrims who were travelling to Olympia to pass through warring territories unmolested, because they were protected by Zeus. The origin of the Olympics is shrouded in mystery and legend; one of the most popular myths identifies Heracles and his father Zeus as the progenitors of the Games. According to legend, it was Heracles who first called the Games "Olympic" and established the custom of holding them every four years. The myth continues that after Heracles completed his twelve labours, he built the Olympic Stadium as an honour to Zeus. Following its completion, he walked in a straight line for 200 steps and called this distance a "stadion" (Latin: "stadium", "stage"), which later became a unit of distance. The most widely accepted inception date for the Ancient Olympics is 776 BC; this is based on inscriptions, found at Olympia, listing the winners of a footrace held every four years starting in 776 BC. The Ancient Games featured running events, a pentathlon (consisting of a jumping event, discus and javelin throws, a foot race, and wrestling), boxing, wrestling, pankration, and equestrian events. Tradition has it that Coroebus, a cook from the city of Elis, was the first Olympic champion.
The Olympics were of fundamental religious importance, featuring sporting events alongside ritual sacrifices honouring both Zeus (whose famous statue by Phidias stood in his temple at Olympia) and Pelops, divine hero and mythical king of Olympia. Pelops was famous for his chariot race with King Oenomaus of Pisatis. The winners of the events were admired and immortalised in poems and statues. The Games were held every four years, and this period, known as an Olympiad, was used by Greeks as one of their units of time measurement. The Games were part of a cycle known as the Panhellenic Games, which included the Pythian Games, the Nemean Games, and the Isthmian Games.
The Olympic Games reached their zenith in the 6th and 5th centuries BC, but then gradually declined in importance as the Romans gained power and influence in Greece. While there is no scholarly consensus as to when the Games officially ended, the most commonly held date is 393 AD, when the emperor Theodosius I decreed that all pagan cults and practices be eliminated. Another date commonly cited is 426 AD, when his successor, Theodosius II, ordered the destruction of all Greek temples.
Various uses of the term "Olympic" to describe athletic events in the modern era have been documented since the 17th century. The first such event was the Cotswold Games or "Cotswold Olimpick Games", an annual meeting near Chipping Campden, England, involving various sports. It was first organised by the lawyer Robert Dover between 1612 and 1642, with several later celebrations leading up to the present day. The British Olympic Association, in its bid for the 2012 Olympic Games in London, mentioned these games as "the first stirrings of Britain's Olympic beginnings".
"L'Olympiade de la République", a national Olympic festival held annually from 1796 to 1798 in Revolutionary France also attempted to emulate the ancient Olympic Games. The competition included several disciplines from the ancient Greek Olympics. The 1796 Games also marked the introduction of the metric system into sport.
In 1834 and 1836, Olympic games were held in (), with an additional games held in Stockholm, Sweden, in 1843, all organised by Gustaf Johan Schartau and others. At most 25,000 spectators saw the games.
In 1850, an Olympian Class was started by William Penny Brookes at Much Wenlock, in Shropshire, England. In 1859, Brookes changed the name to the Wenlock Olympian Games. This annual sports festival continues to this day. The Wenlock Olympian Society was founded by Brookes on 15 November 1860.
Between 1862 and 1867, Liverpool held an annual Grand Olympic Festival. Devised by John Hulley and Charles Melly, these games were the first to be wholly amateur in nature and international in outlook, although only 'gentlemen amateurs' could compete. The programme of the first modern Olympiad in Athens in 1896 was almost identical to that of the Liverpool Olympics. In 1865 Hulley, Brookes and E.G. Ravenstein founded the National Olympian Association in Liverpool, a forerunner of the British Olympic Association. Its articles of foundation provided the framework for the International Olympic Charter. In 1866, a national Olympic Games in Great Britain was organised at London's Crystal Palace.
Greek interest in reviving the Olympic Games began with the Greek War of Independence from the Ottoman Empire in 1821. A revival was first proposed by poet and newspaper editor Panagiotis Soutsos in his poem "Dialogue of the Dead", published in 1833. Evangelos Zappas, a wealthy Greek-Romanian philanthropist, first wrote to King Otto of Greece in 1856, offering to fund a permanent revival of the Olympic Games. Zappas sponsored the first of these Olympic Games in 1859, which were held in an Athens city square; athletes participated from Greece and the Ottoman Empire. Zappas also funded the restoration of the ancient Panathenaic Stadium so that it could host all future Olympic Games.
The stadium hosted Olympics in 1870 and 1875. Thirty thousand spectators attended the Games in 1870, though no official attendance records are available for the 1875 Games. In 1890, after attending the Olympian Games of the Wenlock Olympian Society, Baron Pierre de Coubertin was inspired to found the International Olympic Committee (IOC). Coubertin built on the ideas and work of Brookes and Zappas with the aim of establishing internationally rotating Olympic Games that would occur every four years. He presented these ideas during the first Olympic Congress of the newly created International Olympic Committee, held from 16 to 23 June 1894 at the University of Paris. On the last day of the Congress, it was decided that the first Olympic Games to come under the auspices of the IOC would take place in Athens in 1896. The IOC elected the Greek writer Demetrius Vikelas as its first president.
The first Games held under the auspices of the IOC were hosted in the Panathenaic Stadium in Athens in 1896. The Games brought together 14 nations and 241 athletes who competed in 43 events. Zappas and his cousin Konstantinos Zappas had left the Greek government a trust to fund future Olympic Games, and this trust was used to help finance the 1896 Games. George Averoff contributed generously to the refurbishment of the stadium in preparation for the Games. The Greek government also provided funding, which was expected to be recouped through the sale of tickets and of the first Olympic commemorative stamp set.
Greek officials and the public were enthusiastic about the experience of hosting an Olympic Games. This feeling was shared by many of the athletes, who even demanded that Athens be the permanent Olympic host city. The IOC intended for subsequent Games to be rotated to various host cities around the world. The second Olympics was held in Paris.
After the success of the 1896 Games, the Olympics entered a period of stagnation that threatened their survival. The Olympic Games held at the Paris Exposition in 1900 and the Louisiana Purchase Exposition at St. Louis in 1904 were side shows, and this period was a low point for the Olympic Movement. The Games rebounded when the 1906 Intercalated Games (so called because they were the second Games held within the third Olympiad) were held in Athens. These Games were officially recognised by the IOC at the time, though they are not today, and no Intercalated Games have been held since. They attracted a broad international field of participants and generated great public interest, marking the beginning of a rise in both the popularity and the size of the Olympics.
The Winter Olympics were created to feature snow and ice sports that were logistically impossible to hold during the Summer Games. Figure skating (in 1908 and 1920) and ice hockey (in 1920) were featured as Olympic events at the Summer Olympics. The IOC desired to expand this list of sports to encompass other winter activities. At the 1921 Olympic Congress in Lausanne, it was decided to hold a winter version of the Olympic Games. A winter sports week (actually 11 days) was held in 1924 in Chamonix, France, in connection with the Paris Games held three months later; this event became the first Winter Olympic Games. Although it was intended that the same country host both the Winter and Summer Games in a given year, this idea was quickly abandoned. The IOC mandated that the Winter Games be celebrated every four years in the same year as their summer counterpart. This tradition was upheld through the 1992 Games in Albertville, France; after that, beginning with the 1994 Games, the Winter Olympics were held every four years, two years after each Summer Olympics.
In 1948, Sir Ludwig Guttmann, determined to promote the rehabilitation of soldiers after World War II, organised a multi-sport event between several hospitals to coincide with the 1948 London Olympics. Guttmann's event, known then as the Stoke Mandeville Games, became an annual sports festival. Over the next twelve years, Guttmann and others continued their efforts to use sports as an avenue to healing. For the 1960 Olympic Games in Rome, Guttmann brought 400 athletes to compete in the "Parallel Olympics", which became known as the first Paralympics. Since then, the Paralympics have been held in every Olympic year. Since the 1988 Summer Olympics in Seoul, South Korea, the host city for the Olympics has also played host to the Paralympics. In 2001 the International Olympic Committee (IOC) and the International Paralympic Committee (IPC) signed an agreement guaranteeing that host cities would be contracted to manage both the Olympic and Paralympic Games. The agreement came into effect at the 2008 Summer Games in Beijing, and at the 2010 Winter Games in Vancouver. Two years before the 2012 Summer Games, the chairman of the LOCOG, Lord Coe, spoke about the Paralympics and Olympics in London.
In 2010, the Olympic Games were complemented by the Youth Games, which give athletes between the ages of 14 and 18 the chance to compete. The Youth Olympic Games were conceived by IOC president Jacques Rogge in 2001 and approved during the 119th Congress of the IOC. The first Summer Youth Games were held in Singapore from 14 to 26 August 2010, while the inaugural Winter Games were hosted in Innsbruck, Austria, two years later. These Games are shorter than the senior Games: the summer version lasts twelve days, while the winter version lasts nine. The IOC allows 3,500 athletes and 875 officials to participate at the Summer Youth Games, and 970 athletes and 580 officials at the Winter Youth Games. The sports contested coincide with those scheduled for the senior Games, though with variations including mixed-NOC and mixed-gender teams as well as a reduced number of disciplines and events.
From 241 participants representing 14 nations in 1896, the Games have grown to about 10,500 competitors from 204 nations at the 2012 Summer Olympics. The scope and scale of the Winter Olympics is smaller. For example, Sochi hosted 2,873 athletes from 88 nations competing in 98 events during the 2014 Winter Olympics. During the Games most athletes and officials are housed in the Olympic Village. This village is intended to be a self-contained home for all the Olympic participants, and is furnished with cafeterias, health clinics, and locations for religious expression.
The IOC allowed the formation of National Olympic Committees representing nations that did not meet the strict requirements for political sovereignty that other international organisations demand. As a result, colonies and dependencies are permitted to compete at Olympic Games. Examples of this include territories such as Puerto Rico, Bermuda, and Hong Kong, all of which compete as separate nations despite being legally a part of another country. The current version of the Charter allows for the establishment of new National Olympic Committees to represent nations which qualify as "an independent State recognised by the international community". Therefore, it did not allow the formation of National Olympic Committees for Sint Maarten and Curaçao when they gained the same constitutional status as Aruba in 2010, although the IOC had recognised the Aruban Olympic Committee in 1986. After 2012, Netherlands Antilles athletes can choose to represent either the Netherlands or Aruba.
The Oxford Olympics Study 2016 found that sports-related costs for the Summer Games since 1960 were on average US$5.2 billion and for the Winter Games $3.1 billion. This does not include wider infrastructure costs like roads, urban rail, and airports, which often cost as much or more than the sports-related costs. The most expensive Summer Games were Beijing 2008 at US$40–44 billion and the most expensive Winter Games were Sochi 2014 at US$51 billion. As of 2016, costs per athlete were, on average, US$599,000 for the Summer Games and $1.3 million for the Winter Games. For London 2012, cost per athlete was $1.4 million; for Sochi 2014, $7.9 million.
Where ambitious construction for the 1976 games in Montreal and 1980 games in Moscow had saddled organisers with expenses greatly in excess of revenues, 1984 host Los Angeles strictly controlled expenses by using existing facilities that were paid for by corporate sponsors. The Olympic Committee led by Peter Ueberroth used some of the profits to endow the LA84 Foundation to promote youth sports in Southern California, educate coaches and maintain a sports library. The 1984 Summer Olympics are often considered the most financially successful modern Olympics and a model for future Games.
Budget overruns are common for the Games. Average overrun for Games since 1960 is 156% in real terms, which means that actual costs turned out to be on average 2.56 times the budget that was estimated at the time of winning the bid to host the Games. Montreal 1976 had the highest cost overrun for Summer Games, and for any Games, at 720%; Lake Placid 1980 had the highest cost overrun for Winter Games, at 324%. London 2012 had a cost overrun of 76%, Sochi 2014 of 289%.
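As a worked example of the overrun arithmetic behind these figures (a sketch of the definition only, not of the underlying study's methodology): an overrun is the percentage by which actual cost exceeds the bid-time budget, so a 156% average overrun corresponds to actual costs of 2.56 times the budget.

```python
# Worked example of the cost-overrun arithmetic quoted above.
def overrun_pct(budget: float, actual: float) -> float:
    """Percentage by which actual cost exceeds the budget."""
    return (actual - budget) / budget * 100

print(overrun_pct(budget=1.00, actual=2.56))  # ~156: the average since 1960
print(overrun_pct(budget=1.00, actual=8.20))  # ~720: Montreal 1976
print(overrun_pct(budget=1.00, actual=1.76))  # ~76:  London 2012
```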
Many economists are sceptical about the economic benefits of hosting the Olympic Games, emphasising that such "mega-events" often have large costs while yielding relatively few tangible benefits in the long run. Conversely, hosting (or even bidding for) the Olympics appears to increase the host country's exports, as the host or candidate country sends a signal about trade openness when bidding to host the Games. Moreover, research suggests that hosting the Summer Olympics has a strong positive effect on the philanthropic contributions of corporations headquartered in the host city, which seems to benefit the local nonprofit sector. This positive effect begins in the years leading up to the Games and might persist for several years afterwards, although not permanently. This finding suggests that hosting the Olympics might create opportunities for cities to influence local corporations in ways that benefit the local nonprofit sector and civil society.
The Games have also had significant negative effects on host communities; for example, the Centre on Housing Rights and Evictions reports that the Olympics displaced more than two million people over two decades, often disproportionately affecting disadvantaged groups. The 2014 Winter Olympics in Sochi were the most expensive Olympic Games in history, costing in excess of US$50 billion. According to a report by the European Bank for Reconstruction and Development that was released at the time of the games, this cost will not boost Russia's national economy, but may attract business to Sochi and the southern Krasnodar region of Russia in the future as a result of improved services. But by December 2014, "The Guardian" stated that Sochi "now feels like a ghost town", citing the spread-out nature of the stadiums and arenas, the still-unfinished construction, and the overall effects of Russia's political and economic turmoil. Furthermore, at least four cities withdrew their bids for the 2022 Winter Olympics, citing the high costs or the lack of local support, resulting in only a two-city race between Almaty, Kazakhstan and Beijing, China. Thus in July 2016, "The Guardian" stated that the biggest threat to the future of the Olympics is that very few cities want to host them. Bidding for the 2024 Summer Olympics also became a two-city race between Paris and Los Angeles, so the IOC took the unusual step of simultaneously awarding both the 2024 Games to Paris and the 2028 Games to Los Angeles. The 2028 Los Angeles bid was praised by the IOC for using a record-breaking number of existing and temporary facilities and relying on corporate money.
The Olympic Movement encompasses a large number of national and international sporting organisations and federations, recognised media partners, as well as athletes, officials, judges, and every other person and institution that agrees to abide by the rules of the Olympic Charter. As the umbrella organisation of the Olympic Movement, the International Olympic Committee (IOC) is responsible for selecting the host city, overseeing the planning of the Olympic Games, updating and approving the sports program, and negotiating sponsorship and broadcasting rights.
The Olympic Movement is made up of three major elements: the international sports federations (IFs), the National Olympic Committees (NOCs), and the organising committees for each specific Olympic Games (OCOGs).
French and English are the official languages of the Olympic Movement. The other language used at each Olympic Games is the language of the host country (or languages, if a country has more than one official language apart from French or English). Every proclamation (such as the announcement of each country during the parade of nations in the opening ceremony) is spoken in these three (or more) languages, or in the main two, depending on whether the host country is an English- or French-speaking country: French is always spoken first, followed by an English translation, and then the dominant language of the host nation (when this is not English or French).
The IOC has often been criticised for being an intractable organisation, with several members on the committee for life. The presidential terms of Avery Brundage and Juan Antonio Samaranch were especially controversial. Brundage fought strongly for amateurism and against the commercialization of the Olympic Games, even as these stands came to be seen as incongruous with the realities of modern sports. The advent of the state-sponsored athlete of the Eastern Bloc countries further eroded the ideology of the pure amateur, as it put self-financed amateurs of the Western countries at a disadvantage. Brundage was accused of both racism, for resisting exclusion of apartheid South Africa, and antisemitism. Under the Samaranch presidency, the office was accused of both nepotism and corruption. Samaranch's ties with the Franco regime in Spain were also a source of criticism.
In 1998, it was reported that several IOC members had taken gifts from members of the Salt Lake City bid committee for the hosting of the 2002 Winter Olympics. Soon four independent investigations were underway: by the IOC, the United States Olympic Committee (USOC), the SLOC, and the United States Department of Justice. Although nothing strictly illegal had been done, it was felt that the acceptance of the gifts was morally dubious. As a result of the investigation, ten members of the IOC were expelled and another ten were sanctioned. Stricter rules were adopted for future bids, and caps were put into place as to how much IOC members could accept from bid cities. Additionally, new term and age limits were put into place for IOC membership, and fifteen former Olympic athletes were added to the committee. Nevertheless, from sporting and business standpoints, the 2002 Olympics were one of the most successful Winter Olympiads in history; records were set in both the broadcasting and marketing programs. Over 2 billion viewers watched more than 13 billion viewer-hours. The Games were also financially successful, raising more money with fewer sponsors than any prior Olympic Games, which left SLOC with a surplus of $40 million. The surplus was used to create the Utah Athletic Foundation, which maintains and operates many of the remaining Olympic venues.
In 1999, it was reported that the Nagano Olympic bid committee had spent approximately $14 million entertaining the 62 IOC members and many of their companions. The precise figures are unknown, since Nagano destroyed the financial records after the IOC asked that the entertainment expenditures not be made public.
A BBC documentary entitled "Panorama: Buying the Games", aired in August 2004, investigated the taking of bribes in the bidding process for the 2012 Summer Olympics. The documentary claimed it was possible to bribe IOC members into voting for a particular candidate city. After being narrowly defeated in their bid for the 2012 Summer Games, Parisian mayor Bertrand Delanoë specifically accused the British prime minister Tony Blair and the London Bid Committee (headed by former Olympic champion Sebastian Coe) of breaking the bid rules. He cited French president Jacques Chirac as a witness; Chirac gave guarded interviews regarding his involvement. The allegation was never fully explored. The Turin bid for the 2006 Winter Olympics was also shrouded in controversy. A prominent IOC member, Marc Hodler, strongly connected with the rival bid of Sion, Switzerland, alleged bribery of IOC officials by members of the Turin Organising Committee. These accusations led to a wide-ranging investigation. The allegations also served to sour many IOC members against Sion's bid and potentially helped Turin to capture the host city nomination.
In July 2012, the Anti-Defamation League called the continued refusal by the International Olympic Committee to hold a moment of silence at the opening ceremony for the eleven Israeli athletes killed by Palestinian terrorists at the 1972 Munich Olympics, "a continuing stubborn insensitivity and callousness to the memory of the murdered Israeli athletes."
The Olympics have been commercialised to various degrees since the initial 1896 Summer Olympics in Athens, when a number of companies paid for advertising, including Kodak. In 1908, Oxo, Odol mouthwash and Indian Foot Powder became official sponsors of the London Olympic Games. Coca-Cola sponsored the 1928 Summer Olympics, and has subsequently remained a sponsor to the current time. Before the IOC took control of sponsorship, national organising committees were responsible for negotiating their own contracts for sponsorship and the use of the Olympic symbols.
The IOC originally resisted funding by corporate sponsors. It was not until the retirement of IOC President Avery Brundage, in 1972, that the IOC began to explore the potential of the television medium and the lucrative advertising markets available to them. Under the leadership of Juan Antonio Samaranch the Games began to shift toward international sponsors who sought to link their products to the Olympic brand.
During the first half of the 20th century, the IOC ran on a small budget. As president of the IOC from 1952 to 1972, Avery Brundage rejected all attempts to link the Olympics with commercial interest. Brundage believed the lobby of corporate interests would unduly impact the IOC's decision-making. Brundage's resistance to this revenue stream meant the IOC left organising committees to negotiate their own sponsorship contracts and use of the Olympic symbols. When Brundage retired, the IOC had US$2 million in assets; eight years later the IOC coffers had swelled to US$45 million. This was primarily due to a shift in ideology toward expansion of the Games through corporate sponsorship and the sale of television rights. When Juan Antonio Samaranch was elected IOC president in 1980, his desire was to make the IOC financially independent.
The 1984 Summer Olympics became a watershed moment in Olympic history. The Los Angeles-based organising committee, led by Peter Ueberroth, was able to generate a surplus of US$225 million, which was an unprecedented amount at that time. The organising committee had been able to create such a surplus in part by selling exclusive sponsorship rights to select companies. The IOC sought to gain control of these sponsorship rights. Samaranch helped to establish The Olympic Programme (TOP) in 1985, in order to create an Olympic brand. Membership in TOP was, and is, very exclusive and expensive. Fees cost US$50 million for a four-year membership. Members of TOP received exclusive global advertising rights for their product category, and use of the Olympic symbol, the interlocking rings, in their publications and advertisements.
The 1936 Summer Olympics in Berlin were the first Games to be broadcast on television, though only to local audiences. The 1956 Winter Olympics were the first internationally televised Olympic Games, and the following Winter Games had their broadcasting rights sold for the first time to specialised television broadcasting networks—CBS paid US$394,000 for the American rights. In the following decades the Olympics became one of the ideological fronts of the Cold War, and the IOC wanted to take advantage of this heightened interest via the broadcast medium. The sale of broadcast rights enabled the IOC to increase the exposure of the Olympic Games, thereby generating more interest, which in turn made advertising time on television more appealing to advertisers. This cycle allowed the IOC to charge ever-increasing fees for those rights. For example, CBS paid US$375 million for the American broadcast rights of the 1998 Nagano Games, while NBC spent US$3.5 billion for the American rights of all the Olympic Games from 2000 to 2012. In 2011, NBC agreed to a $4.38 billion contract with the International Olympic Committee to broadcast the Olympics through the 2020 games, the most expensive television rights deal in Olympic history. NBC then agreed to a $7.75 billion contract extension on May 7, 2014, to air the Olympics through the 2032 games. NBC also acquired the American television rights to the Youth Olympic Games, beginning in 2014, and to the Paralympic Games. More than half of the Olympic Committee's global sponsors are American companies, and NBC is one of the major sources of revenue for the IOC.
Viewership increased exponentially from the 1960s until the end of the century. This was due to the use of satellites to broadcast live television worldwide in 1964, and the introduction of colour television in 1968. The global audience for the 1968 Mexico City Games was estimated at 600 million, whereas at the Los Angeles Games of 1984 the audience had increased to 900 million; that number swelled to 3.5 billion by the 1992 Summer Olympics in Barcelona. With such high costs charged to broadcast the Games, the added pressure of the internet, and increased competition from cable, the television lobby demanded concessions from the IOC to boost ratings. The IOC responded by making a number of changes to the Olympic program. At the Summer Games, the gymnastics competition was expanded from seven to nine nights, and a Champions Gala was added to draw greater interest. The IOC also expanded the swimming and diving programs, both popular sports with a broad base of television viewers. Due to the substantial fees NBC has paid for rights to the Olympics, the IOC has allowed NBC to have influence on event scheduling to maximize U.S. television ratings when possible.
The sale of the Olympic brand has been controversial. The argument is that the Games have become indistinguishable from any other commercialised sporting spectacle. Another criticism is that the Games are funded by host cities and national governments; the IOC incurs none of the cost, yet controls all the rights and profits from the Olympic symbols. The IOC also takes a percentage of all sponsorship and broadcast income. Host cities continue to compete ardently for the right to host the Games, even though there is no certainty that they will earn back their investments. Research has shown that trade is around 30 percent higher for countries that have hosted the Olympics.
The Olympic Movement uses symbols to represent the ideals embodied in the Olympic Charter. The Olympic symbol, better known as the Olympic rings, consists of five intertwined rings and represents the unity of the five inhabited continents (Africa, the Americas (when considered one continent), Asia, Europe, and Oceania). The coloured version of the rings—blue, yellow, black, green, and red—over a white field forms the Olympic flag. These colours were chosen because every nation had at least one of them on its national flag. The flag was adopted in 1914 but flown for the first time only at the 1920 Summer Olympics in Antwerp, Belgium. It has since been hoisted during each celebration of the Games.
The Olympic motto, "Citius, Altius, Fortius", a Latin expression meaning "Faster, Higher, Stronger", was proposed by Pierre de Coubertin in 1894 and has been official since 1924. The motto was coined by Coubertin's friend, the Dominican priest Henri Didon OP, for a Paris youth gathering of 1891.
Coubertin's Olympic ideals are expressed in the Olympic creed: "The most important thing in the Olympic Games is not to win but to take part, just as the most important thing in life is not the triumph but the struggle. The essential thing is not to have conquered but to have fought well."
Months before each Games, the Olympic Flame is lit at the Temple of Hera in Olympia in a ceremony that reflects ancient Greek rituals. A female performer, acting as a priestess and joined by ten female performers as Vestal Virgins, ignites a torch by placing it inside a parabolic mirror which focuses the sun's rays; she then lights the torch of the first relay bearer, thus initiating the Olympic torch relay that will carry the flame to the host city's Olympic stadium, where it plays an important role in the opening ceremony. Though the flame has been an Olympic symbol since 1928, the torch relay was only introduced at the 1936 Summer Games to promote the Third Reich.
The Olympic mascot, an animal or human figure representing the cultural heritage of the host country, was introduced in 1968. It has played an important part in promoting the Games' identity since the 1980 Summer Olympics, when the Soviet bear cub Misha reached international stardom. The mascot of the 2012 Summer Olympics in London was named Wenlock after the town of Much Wenlock in Shropshire. Much Wenlock still hosts the Wenlock Olympian Games, which were an inspiration to Pierre de Coubertin for the Olympic Games.
As mandated by the Olympic Charter, various elements frame the opening ceremony of the Olympic Games, which takes place before the sporting events begin. Most of these rituals were established at the 1920 Summer Olympics in Antwerp. The ceremony typically starts with the entrance of the president of the host country, followed by the hoisting of the host country's flag and a performance of its national anthem. The host nation then presents artistic displays of music, singing, dance, and theatre representative of its culture. The artistic presentations have grown in scale and complexity as successive hosts attempt to provide a ceremony that surpasses its predecessors' in memorability. The opening ceremony of the Beijing Games reportedly cost $100 million, with much of the cost incurred in the artistic segment.
After the artistic portion of the ceremony, the athletes parade into the stadium grouped by nation. Greece is traditionally the first nation to enter in order to honour the origins of the Olympics. Nations then enter the stadium alphabetically according to the host country's chosen language, with the host country's athletes being the last to enter. During the 2004 Summer Olympics, which was hosted in Athens, Greece, the Greek flag entered the stadium first, while the Greek delegation entered last. Speeches are given, formally opening the Games. Finally, the Olympic torch is brought into the stadium and passed on until it reaches the final torch carrier, often a successful Olympic athlete from the host nation, who lights the Olympic flame in the stadium's cauldron.
The closing ceremony of the Olympic Games takes place after all sporting events have concluded. Flag-bearers from each participating country enter the stadium, followed by the athletes who enter together, without any national distinction. Three national flags are hoisted while the corresponding national anthems are played: the flag of the current host country; the flag of Greece, to honour the birthplace of the Olympic Games; and the flag of the country hosting the next Summer or Winter Olympic Games. The president of the organising committee and the IOC president make their closing speeches, the Games are officially closed, and the Olympic flame is extinguished. In what is known as the Antwerp Ceremony, the mayor of the city that organised the Games transfers a special Olympic flag to the president of the IOC, who then passes it on to the mayor of the city hosting the next Olympic Games. The next host nation then also briefly introduces itself with artistic displays of dance and theatre representative of its culture.
As is customary, the last medal presentation of the Games is held as part of the closing ceremony. Typically, the marathon medals are presented at the Summer Olympics, while the cross-country skiing mass start medals are awarded at the Winter Olympics.
A medal ceremony is held after each Olympic event is concluded. The winner and the second- and third-place competitors or teams stand on a three-tiered rostrum to be awarded their respective medals. After the medals are given out by an IOC member, the national flags of the three medallists are raised while the national anthem of the gold medallist's country plays. Volunteers from the host country also act as hosts during the medal ceremonies, aiding the officials who present the medals and acting as flag-bearers. In the Summer Olympics this ceremony is held at the venue where the event took place, while in the Winter Games it is usually held in a special "plaza".
The Olympic Games programme consists of 35 sports, 30 disciplines and 408 events. For example, wrestling is a Summer Olympic sport, comprising two disciplines: Greco-Roman and Freestyle. It is further broken down into fourteen events for men and four events for women, each representing a different weight class. The Summer Olympics programme includes 26 sports, while the Winter Olympics programme features 15 sports. Athletics, swimming, fencing, and artistic gymnastics are the only summer sports that have never been absent from the Olympic programme. Cross-country skiing, figure skating, ice hockey, Nordic combined, ski jumping, and speed skating have been featured at every Winter Olympics programme since its inception in 1924. Current Olympic sports, like badminton, basketball, and volleyball, first appeared on the programme as demonstration sports, and were later promoted to full Olympic sports. Some sports that were featured in earlier Games were later dropped from the programme.
Olympic sports are governed by international sports federations (IFs) recognised by the IOC as the global supervisors of those sports. There are 35 federations represented at the IOC. There are sports recognised by the IOC that are not included on the Olympic program. These sports are not considered Olympic sports, but they can be promoted to this status during a programme revision that occurs in the first IOC session following a celebration of the Olympic Games. During such revisions, sports can be excluded or included in the programme on the basis of a two-thirds majority vote of the members of the IOC. There are recognised sports that have never been on an Olympic programme in any capacity, including chess and surfing.
In October and November 2004, the IOC established an Olympic Programme Commission, which was tasked with reviewing the sports on the Olympic programme and all non-Olympic recognised sports. The goal was to apply a systematic approach to establishing the Olympic programme for each celebration of the Games. The commission formulated seven criteria to judge whether a sport should be included on the Olympic programme. These criteria are history and tradition of the sport, universality, popularity of the sport, image, athletes' health, development of the International Federation that governs the sport, and costs of holding the sport. From this study five recognised sports emerged as candidates for inclusion at the 2012 Summer Olympics: golf, karate, rugby sevens, roller sports and squash. These sports were reviewed by the IOC Executive Board and then referred to the General Session in Singapore in July 2005. Of the five sports recommended for inclusion only two were selected as finalists: karate and squash. Neither sport attained the required two-thirds vote and consequently they were not promoted to the Olympic programme. In October 2009 the IOC voted to instate golf and rugby sevens as Olympic sports for the 2016 and 2020 Summer Olympic Games.
The 114th IOC Session, in 2002, limited the Summer Games programme to a maximum of 28 sports, 301 events, and 10,500 athletes. Three years later, at the 117th IOC Session, the first major programme revision was performed, which resulted in the exclusion of baseball and softball from the official programme of the 2012 London Games. Since there was no agreement in the promotion of two other sports, the 2012 programme featured just 26 sports. The 2016 and 2020 Games will return to the maximum of 28 sports given the addition of rugby and golf.
The ethos of the aristocracy as exemplified in the English public school greatly influenced Pierre de Coubertin. The public schools subscribed to the belief that sport formed an important part of education, an attitude summed up in the saying "mens sana in corpore sano", a sound mind in a sound body. In this ethos, a gentleman was one who became an all-rounder, not the best at one specific thing. There was also a prevailing concept of fairness, in which practising or training was considered tantamount to cheating. Those who practised a sport professionally were considered to have an unfair advantage over those who practised it merely as a hobby.
The exclusion of professionals caused several controversies throughout the history of the modern Olympics. The 1912 Olympic pentathlon and decathlon champion Jim Thorpe was stripped of his medals when it was discovered that he had played semi-professional baseball before the Olympics. His medals were posthumously restored by the IOC in 1983 on compassionate grounds. Swiss and Austrian skiers boycotted the 1936 Winter Olympics in support of their skiing teachers, who were not allowed to compete because they earned money with their sport and were thus considered professionals.
As class structure evolved through the 20th century, the definition of the amateur athlete as an aristocratic gentleman became outdated. The advent of the state-sponsored "full-time amateur athlete" of the Eastern Bloc countries further eroded the ideology of the pure amateur, as it put the self-financed amateurs of the Western countries at a disadvantage. Beginning in the 1970s, amateurism requirements were gradually phased out of the Olympic Charter. After the 1988 Games, the IOC decided to make all professional athletes eligible for the Olympics, subject to the approval of the IFs.
Near the end of the 1960s, the Canadian Amateur Hockey Association (CAHA) felt their amateur players could no longer be competitive against the Soviet team's full-time athletes and the other constantly improving European teams. They pushed for the ability to use players from professional leagues but met opposition from the IIHF and IOC. At the IIHF Congress in 1969, the IIHF decided to allow Canada to use nine non-NHL professional hockey players at the 1970 World Championships in Montreal and Winnipeg, Manitoba, Canada. The decision was reversed in January 1970 after Brundage said that ice hockey's status as an Olympic sport would be in jeopardy if the change was made. In response, Canada withdrew from international ice hockey competition and officials stated that they would not return until "open competition" was instituted. Günther Sabetzki became president of the IIHF in 1975 and helped to resolve the dispute with the CAHA. In 1976, the IIHF agreed to allow "open competition" between all players in the World Championships. However, NHL players were still not allowed to play in the Olympics until 1988, because of the IOC's amateur-only policy.
Greece, Australia, France, and the United Kingdom are the only countries to have been represented at every Olympic Games since their inception in 1896. While countries sometimes miss an Olympics due to a lack of qualified athletes, some choose to boycott a celebration of the Games for various reasons. The Olympic Council of Ireland boycotted the 1936 Berlin Games because the IOC insisted that its team be restricted to the Irish Free State rather than representing the entire island of Ireland.
There were three boycotts of the 1956 Melbourne Olympics: the Netherlands, Spain, and Switzerland refused to attend because of the repression of the Hungarian uprising by the Soviet Union, but did send an equestrian delegation to Stockholm; Cambodia, Egypt, Iraq, and Lebanon boycotted the Games because of the Suez Crisis; and the People's Republic of China boycotted the Games due to the participation of the Republic of China, composed of athletes coming from Taiwan.
In 1972 and 1976 a large number of African countries threatened the IOC with a boycott to force them to ban South Africa and Rhodesia, because of their segregationist rule. New Zealand was also one of the African boycott targets, because its national rugby union team had toured apartheid-ruled South Africa. The IOC conceded in the first two cases, but refused to ban New Zealand on the grounds that rugby was not an Olympic sport. Fulfilling their threat, twenty African countries were joined by Guyana and Iraq in a withdrawal from the Montreal Games, after a few of their athletes had already competed.
The Republic of China (Taiwan) was excluded from the 1976 Games by order of Pierre Elliott Trudeau, the prime minister of Canada. Trudeau's action was widely condemned as having brought shame on Canada for having succumbed to political pressure to keep the Chinese delegation from competing under its name. The ROC refused a proposed compromise that would have still allowed them to use the ROC flag and anthem as long as the name was changed. Athletes from Taiwan did not participate again until 1984, when they returned under the name of Chinese Taipei and with a special flag and anthem.
In 1980 and 1984, the Cold War opponents boycotted each other's Games. The United States and sixty-five other countries boycotted the Moscow Olympics in 1980 because of the Soviet invasion of Afghanistan; this boycott reduced the number of nations participating to 80, the lowest number since 1956. The Soviet Union and 15 other nations countered by boycotting the Los Angeles Olympics of 1984. Although the Soviet-led boycott depleted the field in certain sports, 140 National Olympic Committees took part, a record at the time. The fact that Romania, a Warsaw Pact country, opted to compete despite Soviet demands led to a warm reception of the Romanian team by the United States: when the Romanian athletes entered during the opening ceremonies, they received a standing ovation from the spectators, most of whom were U.S. citizens. The boycotting nations of the Eastern Bloc staged their own alternate event, the Friendship Games, in July and August.
There had been growing calls for boycotts of Chinese goods and the 2008 Olympics in Beijing in protest of China's human rights record, and in response to Tibetan disturbances. Ultimately, no nation supported a boycott. In August 2008, the government of Georgia called for a boycott of the 2014 Winter Olympics, set to be held in Sochi, Russia, in response to Russia's participation in the 2008 South Ossetia war.
The Olympic Games have been used as a platform to promote political ideologies almost from their inception. Nazi Germany wished to portray the National Socialist Party as benevolent and peace-loving when it hosted the 1936 Games, though it also used the Games to display Aryan superiority. Germany was the most successful nation at the Games, which did much to support its allegations of Aryan supremacy, but notable victories by the African American Jesse Owens, who won four gold medals, and the Hungarian Jew Ibolya Csák blunted the message. The Soviet Union did not participate until the 1952 Summer Olympics in Helsinki. Instead, starting in 1928, the Soviets organised an international sports event called the Spartakiads. During the interwar period of the 1920s and 1930s, communist and socialist organisations in several countries, including the United States, attempted to counter what they called the "bourgeois" Olympics with the Workers' Olympics. It was not until the 1956 Summer Games that the Soviets emerged as a sporting superpower and, in doing so, took full advantage of the publicity that came with winning at the Olympics. The Soviet Union's success might be attributed to heavy state investment in sports to fulfil its political agenda on the international stage.
Individual athletes have also used the Olympic stage to promote their own political agenda. At the 1968 Summer Olympics in Mexico City, two American track and field athletes, Tommie Smith and John Carlos, who finished first and third in the 200 metres, performed the Black Power salute on the victory stand. The second-place finisher, Peter Norman of Australia, wore an Olympic Project for Human Rights badge in support of Smith and Carlos. In response to the protest, IOC president Avery Brundage ordered Smith and Carlos suspended from the US team and banned from the Olympic Village. When the US Olympic Committee refused, Brundage threatened to ban the entire US track team, a threat that led to the expulsion of the two athletes from the Games. In another notable incident in the gymnastics competition, while standing on the medal podium after the balance beam event final, in which Natalia Kuchinskaya of the Soviet Union had controversially taken the gold, Czechoslovakian gymnast Věra Čáslavská quietly turned her head down and away during the playing of the Soviet national anthem, a silent protest against the recent Soviet invasion of Czechoslovakia. She repeated the protest when accepting her medal for the floor exercise, after the judges changed the preliminary scores of the Soviet Larisa Petrik to allow her to tie with Čáslavská for the gold. While Čáslavská's countrymen supported her actions and her outspoken opposition to Communism (she had publicly signed and supported Ludvík Vaculík's "Two Thousand Words" manifesto), the new regime responded by banning her from both sporting events and international travel for many years, making her an outcast from society until the fall of communism.
Currently, the government of Iran has taken steps to avoid any competition between its athletes and those from Israel. An Iranian judoka, Arash Miresmaeili, did not compete in a match against an Israeli during the 2004 Summer Olympics. Although he was officially disqualified for being overweight, Miresmaeili was awarded US$125,000 in prize money by the Iranian government, an amount paid to all Iranian gold medal winners. He was officially cleared of intentionally avoiding the bout, but his receipt of the prize money raised suspicion.
In the early 20th century, many Olympic athletes began using drugs to improve their athletic abilities. For example, in 1904, Thomas Hicks, a gold medallist in the marathon, was given strychnine by his coach (at the time, taking such substances was allowed, as there were no data on their effects on an athlete's body). The only Olympic death linked to performance-enhancing drugs occurred at the 1960 Rome Games: Danish cyclist Knud Enemark Jensen fell from his bicycle and later died, and a coroner's inquiry found that he was under the influence of amphetamines. By the mid-1960s, sports federations had started to ban the use of performance-enhancing drugs; in 1967 the IOC followed suit.
According to British journalist Andrew Jennings, a KGB colonel stated that the agency's officers had posed as anti-doping authorities from the International Olympic Committee to undermine doping tests and that Soviet athletes were "rescued with [these] tremendous efforts". On the topic of the 1980 Summer Olympics, a 1989 Australian study said "There is hardly a medal winner at the Moscow Games, certainly not a gold medal winner, who is not on one sort of drug or another: usually several kinds. The Moscow Games might as well have been called the Chemists' Games."
Documents obtained in 2016 revealed the Soviet Union's plans for a statewide doping system in track and field in preparation for the 1984 Summer Olympics in Los Angeles. Dated prior to the country's decision to boycott the Games, the document detailed the existing steroids operations of the program, along with suggestions for further enhancements. The communication, directed to the Soviet Union's head of track and field, was prepared by Dr. Sergei Portugalov of the Institute for Physical Culture. Portugalov was also one of the main figures involved in the implementation of the Russian doping program prior to the 2016 Summer Olympics.
The first Olympic athlete to test positive for the use of performance-enhancing drugs was Hans-Gunnar Liljenwall, a Swedish pentathlete at the 1968 Summer Olympics, who lost his bronze medal for alcohol use. One of the most publicised doping-related disqualifications occurred after the 1988 Summer Olympics where Canadian sprinter, Ben Johnson (who won the 100-metre dash) tested positive for stanozolol.
In 1999 the IOC formed the World Anti-Doping Agency (WADA) in an effort to systematise the research and detection of performance-enhancing drugs. There was a sharp increase in positive drug tests at the 2000 Summer Olympics and 2002 Winter Olympics due to improved testing conditions. Several medallists in weightlifting and cross-country skiing from post-Soviet states were disqualified because of doping offences. The IOC-established drug testing regimen (now known as the Olympic Standard) has set the worldwide benchmark that other sporting federations attempt to emulate. During the Beijing games, 3,667 athletes were tested by the IOC under the auspices of the World Anti-Doping Agency. Both urine and blood tests were used to detect banned substances. In London over 6,000 Olympic and Paralympic athletes were tested. Prior to the Games 107 athletes tested positive for banned substances and were not allowed to compete.
Doping in Russian sports has a systemic nature. Russia has had 44 Olympic medals stripped for doping violations – the most of any country, more than three times the number of the runner-up, and more than a quarter of the global total. From 2011 to 2015, more than a thousand Russian competitors in various sports, including summer, winter, and Paralympic sports, benefited from a cover-up. Russia was partially banned from the 2016 Summer Olympics and was banned from the 2018 Winter Olympics (while being allowed to participate as the Olympic Athletes from Russia) due to the state-sponsored doping programme.
In December 2019, Russia was banned for four years from all major sporting events for systematic doping and lying to WADA. The World Anti-Doping Agency issued the ban on 9 December 2019, giving the Russian anti-doping agency RUSADA 21 days to appeal to the Court of Arbitration for Sport (CAS). Under the ban, Russian athletes may compete only under the Olympic flag rather than representing Russia. Russia is appealing the decision to CAS.
Women were first allowed to compete at the 1900 Summer Olympics in Paris, but at the 1992 Summer Olympics 35 countries were still only fielding all-male delegations. This number dropped rapidly over the following years. In 2000, Bahrain sent two women competitors for the first time: Fatema Hameed Gerashi and Mariam Mohamed Hadi Al Hilli. In 2004, Robina Muqimyar and Fariba Rezayee became the first women to compete for Afghanistan at the Olympics. In 2008, the United Arab Emirates sent female athletes (Maitha Al Maktoum competed in taekwondo, and Latifa Al Maktoum in equestrian) to the Olympic Games for the first time. Both athletes were from Dubai's ruling family.
By 2010, only three countries had never sent female athletes to the Games: Brunei, Saudi Arabia, and Qatar. Brunei had taken part in only three celebrations of the Games, sending a single athlete on each occasion, but Saudi Arabia and Qatar had been competing regularly with all-male teams. In 2010, the International Olympic Committee announced it would "press" these countries to enable and facilitate the participation of women for the 2012 Summer Olympics. Anita DeFrantz, chair of the IOC's Women and Sports Commission, suggested that countries be barred if they prevented women from competing. Shortly thereafter, the Qatar Olympic Committee announced that it "hoped to send up to four female athletes in shooting and fencing" to the 2012 Summer Games in London.
In 2008, Ali Al-Ahmed, director of the Institute for Gulf Affairs, likewise called for Saudi Arabia to be barred from the Games, describing its ban on women athletes as a violation of the International Olympic Committee charter. He noted: "For the last 15 years, many international nongovernmental organisations worldwide have been trying to lobby the IOC for better enforcement of its own laws banning gender discrimination. ... While their efforts did result in increasing numbers of women Olympians, the IOC has been reluctant to take a strong position and threaten the discriminating countries with suspension or expulsion." In July 2010, "The Independent" reported: "Pressure is growing on the International Olympic Committee to kick out Saudi Arabia, who are likely to be the only major nation not to include women in their Olympic team for 2012. ... Should Saudi Arabia ... send a male-only team to London, we understand they will face protests from equal rights and women's groups which threaten to disrupt the Games".
At the 2012 Olympic Games in London, United Kingdom, for the first time in Olympic history, every country competing included female athletes. Saudi Arabia included two female athletes in its delegation; Qatar, four; and Brunei, one (Maziah Mahusin, in the 400m hurdles). Qatar made one of its first female Olympians, Bahiya al-Hamad (shooting), its flagbearer at the 2012 Games, and runner Maryam Yusuf Jamal of Bahrain became the first Gulf female athlete to win a medal when she won a bronze for her showing in the 1500 m race.
The equestrian disciplines are the only part of the Olympic programme in which men and women compete together; there is no "women's eventing" or "men's dressage". As of 2008, there were still more medal events for men than for women. With the addition of women's boxing to the programme at the 2012 Summer Olympics, however, female athletes became able to compete in all the same sports as men. In the Winter Olympics, women are still unable to compete in the Nordic combined. There are currently two Olympic events in which male athletes may not compete: synchronised swimming and rhythmic gymnastics.
Three Olympiads had to pass without a celebration of the Games because of war: the 1916 Games were cancelled because of World War I, and the summer and winter games of 1940 and 1944 were cancelled because of World War II. The Russo-Georgian War between Georgia and Russia erupted on the opening day of the 2008 Summer Olympics in Beijing. Both President Bush and Prime Minister Putin were attending the Olympics at that time and spoke together about the conflict at a luncheon hosted by Chinese president Hu Jintao.
Terrorism most directly affected the Olympic Games in 1972. When the Summer Games were held in Munich, Germany, eleven members of the Israeli Olympic team were taken hostage by the Palestinian terrorist group Black September in what is now known as the Munich massacre. The terrorists killed two of the athletes soon after they had taken them hostage and killed the other nine during a failed liberation attempt. A German police officer and five terrorists also perished. Following the selection of Barcelona, Spain to host the 1992 Summer Olympics, the separatist ETA terrorist organisation launched attacks in the region, including the 1991 Vic bombing that killed ten people in a town that would also hold events.
Terrorism affected the last two Olympic Games held in the United States. During the Summer Olympics in 1996 in Atlanta, Georgia, a bomb was detonated at the Centennial Olympic Park, which killed two and injured 111 others. The bomb was set by Eric Rudolph, an American domestic terrorist, who is currently serving a life sentence for the bombing. The 2002 Winter Olympics in Salt Lake City, Utah, took place just five months after the September 11 attacks, which meant a higher level of security than ever before provided for an Olympic Games. The opening ceremonies of the Games featured symbols of the day's events. They included the flag that flew at Ground Zero and honour guards of NYPD and FDNY members.
The Olympic Games have been criticised as upholding (and in some cases increasing) the colonial policies and practices of some host nations and cities, either in the name of the Olympics by associated parties or directly by official Olympic bodies, such as the International Olympic Committee, host organising committees and official sponsors.
Critics have argued that the Olympics have engaged in or caused: erroneous anthropological and colonial knowledge production; erasure; commodification and appropriation of indigenous ceremonies and symbolism; theft and inappropriate display of indigenous objects; further encroachment on and support of the theft of indigenous lands; and neglect and/or intensification of poor social conditions for indigenous peoples. Such practices have been observed at: the 1904 Summer Olympics in St. Louis, MO; the 1976 Summer Olympics in Montreal, Quebec; the 1988 Winter Olympics in Calgary, Alberta; the 2008 Summer Olympics in Beijing, China; the 2010 Winter Olympics in Vancouver, BC; the 2012 Summer Olympics in London, England; the 2014 Winter Olympics in Sochi, Krasnodar Krai and the 2022 Winter Olympics in Beijing, China.
The Olympic Charter requires that an athlete be a national of the country for which they compete. Dual nationals may compete for either country, as long as three years have passed since the competitor competed for the former country. However, if the NOCs and IF involved agree, then the IOC Executive Board may reduce or cancel this period. This waiting period exists only for athletes who previously competed for one nation and want to compete for another. If an athlete gains a new or second nationality, then they do not need to wait any designated amount of time before participating for the new or second nation. The IOC is only concerned with issues of citizenship and nationality after individual nations have granted citizenship to athletes.
Athletes will sometimes become citizens of a different nation so that they can compete in the Olympics, often because they are drawn to sponsorships or training facilities, or because they are unable to qualify from within their original country. In preparation for the 2014 Winter Games in Sochi, the Russian Olympic Committee naturalised the Korean-born short-track speed skater Ahn Hyun-soo and the American-born snowboarder Vic Wild; between them, they won five gold medals and one bronze in Sochi.
One of the most famous cases of changing nationality for the Olympics was Zola Budd, a South African runner who emigrated to the United Kingdom because apartheid-era South Africa was banned from the Olympics. Budd was eligible for British citizenship because her grandfather was born in Britain, but British citizens accused the government of expediting the citizenship process for her.
Other notable examples include Kenyan runner Bernard Lagat, who became a United States citizen in May 2004. The Kenyan constitution required that one renounce their Kenyan citizenship when they became a citizen of another nation. Lagat competed for Kenya in the 2004 Athens Olympics even though he had already become a United States citizen. According to Kenya, he was no longer a Kenyan citizen, jeopardising his silver medal. Lagat said he started the citizenship process in late 2003 and did not expect to become an American citizen until after the Athens games.
The athletes or teams who place first, second, or third in each event receive medals. The winners receive gold medals, which were solid gold until 1912, then made of gilded silver, and are now gold-plated silver; every gold medal, however, must contain at least six grams of pure gold. The runners-up receive silver medals and the third-place athletes are awarded bronze medals. In events contested by a single-elimination tournament (most notably boxing), third place might not be determined, in which case both semifinal losers receive bronze medals. At the 1896 Olympics only the first two finishers received a medal: silver for first and bronze for second. The current three-medal format was introduced at the 1904 Olympics. From 1948 onward athletes placing fourth, fifth, and sixth have received certificates, which became officially known as Olympic diplomas; in 1984 Olympic diplomas for seventh- and eighth-place finishers were added. At the 2004 Summer Olympics in Athens, the gold, silver, and bronze medal winners were also given olive wreaths. The IOC does not keep statistics of medals won on a national level (except for team sports), but NOCs and the media record medal statistics as a measure of success.
As of the 2016 Games in Rio de Janeiro, all of the current 206 NOCs and 19 obsolete NOCs have participated in at least one edition of the Summer Olympics. Competitors from Australia, France, Great Britain, Greece, and Switzerland have competed in all twenty-eight Summer Olympic Games. Athletes competing under the Olympic flag, Mixed Teams and the Refugee Team have competed at six Games.
A total of 119 NOCs (110 of the current 206 NOCs and nine obsolete NOCs) have participated in at least one Winter Games, and athletes from fourteen nations (Austria, Canada, Czech Republic, Finland, France, Great Britain, Hungary, Italy, Norway, Poland, Slovakia, Sweden, Switzerland, and the United States) have participated in all twenty-three Winter Games to date.
The host city for an Olympic Games is usually chosen seven to eight years ahead of their celebration. The process of selection is carried out in two phases that span a two-year period. The prospective host city applies to its country's National Olympic Committee; if more than one city from the same country submits a proposal to its NOC, the national committee typically holds an internal selection, since only one city per NOC can be presented to the International Olympic Committee for consideration. Once the deadline for submission of proposals by the NOCs is reached, the first phase (Application) begins with the applicant cities asked to complete a questionnaire regarding several key criteria related to the organisation of the Olympic Games. In this form, the applicants must give assurances that they will comply with the Olympic Charter and with any other regulations established by the IOC Executive Committee. The evaluation of the filled questionnaires by a specialised group provides the IOC with an overview of each applicant's project and their potential to host the Games. On the basis of this technical evaluation, the IOC Executive Board selects the applicants that will proceed to the candidature stage.
Once the candidate cities are selected, they must submit to the IOC a bigger and more detailed presentation of their project as part of a candidature file. Each city is thoroughly analysed by an evaluation commission. This commission will also visit the candidate cities, interviewing local officials and inspecting prospective venue sites, and submit a report on its findings one month prior to the IOC's final decision. During the interview process the candidate city must also guarantee that it will be able to fund the Games. After the work of the evaluation commission, a list of candidates is presented to the General Session of the IOC, which must assemble in a country that does not have a candidate city in the running. The IOC members gathered in the Session have the final vote on the host city. Once elected, the host city bid committee (together with the NOC of the respective country) signs a Host City Contract with the IOC, officially becoming an Olympic host nation and host city.
By 2016, the Olympic Games will have been hosted by 44 cities in 23 countries. Since the 1988 Summer Olympics in Seoul, South Korea, the Olympics have been held in Asia or Oceania four times, a sharp increase compared to the previous 92 years of modern Olympic history. The 2016 Games in Rio de Janeiro were the first Olympics for a South American country. No bids from countries in Africa have succeeded.
The United States has hosted four Summer Games, more than any other nation. The British capital London holds the distinction of hosting three Olympic Games, all Summer, more than any other city. Paris, which previously hosted in 1900 and 1924, is due to host the Summer Games for a third time in 2024, and Los Angeles, which previously hosted in 1932 and 1984, is due to host the Summer Games for a third time in 2028. The other nations that have hosted the Summer Games at least twice are Germany, Australia, France and Greece; the other cities that have hosted the Summer Games at least twice are Los Angeles, Paris and Athens. With the 2020 Summer Olympics, Japan and Tokyo will respectively join the nations and cities that have hosted the Summer Games more than once.
The United States has also hosted four Winter Games, more than any other nation. Among the other nations that have hosted multiple Winter Games, France has hosted three, while Switzerland, Austria, Norway, Japan, Canada and Italy have each hosted two. Among host cities, Lake Placid, Innsbruck and St. Moritz have each hosted the Winter Olympic Games twice. The most recent Winter Games were held in Pyeongchang in 2018, South Korea's first Winter Olympics and second Olympics overall (after the 1988 Summer Olympics in Seoul).
Beijing is due to host the 2022 Winter Olympics, which will make it the first city to host both the Summer and Winter Games.
Old Prussian language
Old Prussian was a Western Baltic language belonging to the Balto-Slavic branch of the Indo-European languages, once spoken by the Old Prussians, the Baltic peoples of the Prussian region. The language is called Old Prussian to avoid confusion with the German dialects of Low Prussian and High Prussian and with the adjective "Prussian" as it relates to the later German state. Old Prussian began to be written down in the Latin alphabet in about the 13th century, and a small amount of literature in the language survives.
In addition to Prussia proper, the original territory of the Old Prussians might have included eastern parts of Pomerelia (some parts of the region east of the Vistula River). The language might have also been spoken much further east and south in what became Polesia and part of Podlasie, with the conquests by Rus and Poles starting in the 10th century and the German colonisation of the area that began in the 12th century.
Old Prussian was closely related to the other extinct Western Baltic languages, namely Curonian, Galindian and Sudovian. It is related to the Eastern Baltic languages such as Lithuanian and Latvian, and more distantly related to Slavic. Compare the words for "land": Old Prussian "semmē", Lithuanian "žemė", and Latvian "zeme".
Old Prussian contained loanwords from Slavic languages (for example, its word for "hound", shared with Lithuanian and Latvian, comes from Slavic), as well as a few borrowings from Germanic, including from Gothic (for example, its word for "awl") and from Scandinavian languages.
With the conquest of the Old Prussian territory by the Teutonic Knights in the 13th century, and the subsequent influx of Polish, Lithuanian and especially German speakers, Old Prussian experienced a 400-year decline as an "oppressed language of an oppressed population". Groups of people from Germany, Poland, Lithuania, France, Scotland, England, and Austria (see Salzburg Protestants) found refuge in Prussia during the Protestant Reformation and thereafter. Old Prussian probably ceased to be spoken around the beginning of the 18th century, as many of its remaining speakers died in the famines and bubonic plague epidemics which harrowed the East Prussian countryside and towns from 1709 until 1711. The Germanic regional dialect of Low German spoken in Prussia (or East Prussia), called Low Prussian (cf. High Prussian, also a Germanic language), preserved a number of Baltic Prussian words, such as its Old Prussian-derived word for "shoe", which contrasts with the common Low German form (standard German "Schuh").
Before the 1930s, when Nazi Germany began a program of Germanisation, Old Prussian river and place names could still be found in the region.
Surviving text samples include several versions of the Lord's Prayer: after Simon Grunau (Curonian-Latvian), after Prätorius (Curonian-Latvian), in Old Prussian from the so-called "1st Catechism", in the Lithuanian dialect of Insterburg (Prediger Hennig), and, in corrupted form, in the Lithuanian dialect of Nadruvia (Simon Prätorius).
A jocular Old Prussian inscription, known as the Basel Epigram, was most probably made by a Prussian student studying in Prague (Charles University); it was found by Stephen McCluskey (1974) in manuscript MS F.V.2 (a book of physics by Nicholas Oresme), fol. 63r, stored in the Basel University library.
With other remains being merely word lists, the grammar of Old Prussian is reconstructed chiefly on the basis of the three Catechisms. There is no consensus on the number of cases that Old Prussian had, and at least four can be determined with certainty: nominative, genitive, accusative and dative, with different desinences. There are traces of a vocative case, such as in the phrase "O Deiwe Rikijs" ("O God the Lord"), reflecting the inherited PIE vocative ending *"-e". There was a definite article ("stas" m., "sta" f.); three genders: masculine, feminine and neuter; and two numbers: singular and plural. Declensional classes were "a"-stems, "ā"-stems (feminine), "ē"-stems (feminine), "i"-stems, "u"-stems, "ī"/"jā"-stems, "jā"/"ijā"-stems and consonant stems. Present, future and past tense are attested, as well as optative forms (imperative, permissive), infinitive, and four participles (active/passive present/past).
The phonology of Old Prussian has likewise been reconstructed from these sources, most notably in the analysis by Schmalstieg (1974).
A few linguists and philologists are involved in reviving a reconstructed form of the language from Luther's catechisms, the Elbing Vocabulary, place names, and Prussian loanwords in the Low Prussian dialect of German. Several dozen people use the language in Lithuania, Kaliningrad, and Poland, including a few children who are natively bilingual.
The Prusaspirā Society has published their translation of Antoine de Saint-Exupéry's "The Little Prince". The book was translated by Piotr Szatkowski (Pīteris Šātkis) and released in 2015. Other efforts of Baltic Prussian societies include the development of online dictionaries, learning apps and games. There have also been several attempts to produce music with lyrics written in the revived Baltic Prussian language, most notably in the Kaliningrad Oblast by Romowe Rikoito, Kellan and Āustras Laīwan, but also in Lithuania by Kūlgrinda in their 2005 album "Prūsų Giesmės" (Prussian Hymns), and in Latvia by the Rasa Ensemble in 1988 and by Valdis Muktupāvels in his 2005 oratorio "Pārcēlātājs Pontifex", which features several parts sung in Prussian.
Important in this revival were Vytautas Mažiulis, who died on 11 April 2009, and his pupil Letas Palmaitis, leader of the experiment and author of the website "Prussian Reconstructions". Two late contributors were Prāncis Arellis of Lithuania and Dailūns Russinis (Dailonis Rusiņš) of Latvia. After them, Twankstas Glabbis from Kaliningrad Oblast and Nērtiks Pamedīns from East Prussia (now the Polish region of Warmia-Masuria) actively joined.
OSGi
The OSGi Alliance, formerly known as the Open Services Gateway initiative, is an open standards organization founded in March 1999 that originally specified and continues to maintain the OSGi standard.
The OSGi specification describes a modular system and a service platform for the Java programming language that implements a complete and dynamic component model, something that does not exist in standalone Java/VM environments. Applications or components, coming in the form of bundles for deployment, can be remotely installed, started, stopped, updated, and uninstalled without requiring a reboot; management of Java packages/classes is specified in great detail. Application life cycle management is implemented via APIs that allow for remote downloading of management policies. The service registry allows bundles to detect the addition of new services, or the removal of services, and adapt accordingly.
The OSGi specifications have evolved beyond the original focus of service gateways, and are now used in applications ranging from mobile phones to the open-source Eclipse IDE. Other application areas include automobiles, industrial automation, building automation, PDAs, grid computing, entertainment, fleet management and application servers.
The OSGi specification is developed by the members in an open process and made available to the public free of charge under the OSGi Specification License. The OSGi Alliance has a compliance program that is open to members only. As of November 2010, there were seven certified OSGi framework implementations. A separate page lists both certified and non-certified OSGi Specification Implementations, which include OSGi frameworks and other OSGi specifications.
OSGi is a Java framework for developing and deploying modular software programs and libraries. Each bundle is a tightly coupled, dynamically loadable collection of classes, jars, and configuration files that explicitly declare their external dependencies (if any).
The framework is conceptually divided into the following areas: bundles, services, the services registry, life-cycle management, modules, security, and the execution environment.
A bundle is a group of Java classes and additional resources equipped with a detailed manifest (MANIFEST.MF) file describing all its contents, as well as additional services needed to give the included group of Java classes more sophisticated behaviors, to the extent of deeming the entire aggregate a component.
Below is an illustrative example of a typical MANIFEST.MF file with OSGi headers; the bundle name, activator class, and package names are placeholders rather than taken from any real project:
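Bundle-Name: Hello World
Bundle-SymbolicName: org.wikipedia.helloworld
Bundle-Description: A Hello World bundle
Bundle-ManifestVersion: 2
Bundle-Version: 1.0.0
Bundle-Activator: org.wikipedia.Activator
Export-Package: org.wikipedia.helloworld;version="1.0.0"
Import-Package: org.osgi.framework;version="1.3.0"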
The meaning of the contents in the example is as follows:
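- Bundle-Name: gives the bundle a short, human-readable name.
- Bundle-SymbolicName: the only required header; together with Bundle-Version, it uniquely identifies a bundle to the framework.
- Bundle-Description: a description of the bundle's functionality.
- Bundle-ManifestVersion: indicates which OSGi specification to use for reading this bundle; the value 2 denotes OSGi Release 4 and later.
- Bundle-Version: designates a version number for the bundle.
- Bundle-Activator: indicates the class to be invoked when the bundle is started or stopped.
- Export-Package: expresses which Java packages contained in the bundle are made available to the outside world.
- Import-Package: indicates which Java packages are required from the outside world to fulfil the bundle's dependencies.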
A Life Cycle layer adds bundles that can be dynamically installed, started, stopped, updated and uninstalled. Bundles rely on the module layer for class loading but add an API to manage the modules in run time. The life cycle layer introduces dynamics that are normally not part of an application. Extensive dependency mechanisms are used to assure the correct operation of the environment. Life cycle operations are fully protected with the security architecture.
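As a minimal sketch of this API, the following Java fragment installs a bundle from a hypothetical file URL and drives it through its life cycle states; only standard org.osgi.framework calls are used, while the class name and location are illustrative:

import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.BundleException;

public class LifeCycleDemo {
    // Walks a bundle through its life cycle; the location URL is hypothetical.
    public static void manage(BundleContext context) throws BundleException {
        Bundle bundle = context.installBundle("file:/opt/bundles/example.jar"); // INSTALLED
        bundle.start();     // resolves the bundle if needed, then moves it to ACTIVE
        bundle.update();    // re-reads the bundle's content from its location
        bundle.stop();      // returns the bundle to the RESOLVED state
        bundle.uninstall(); // removes the bundle from the framework
    }
}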
Below is an example of a typical Java class implementing the BundleActivator interface; the class body is completed here as a minimal sketch, and the printed messages are illustrative:
package org.wikipedia;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
public class Activator implements BundleActivator {
    // Invoked by the framework when the bundle is started.
    public void start(BundleContext context) throws Exception {
        System.out.println("Hello World");
    }

    // Invoked by the framework when the bundle is stopped; release resources here.
    public void stop(BundleContext context) throws Exception {
        System.out.println("Goodbye World");
    }
}
The OSGi Alliance has specified many services. Services are specified by a Java interface. Bundles can implement this interface and register the service with the Service Registry. Clients of the service can find it in the registry, or react to it when it appears or disappears.
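As a sketch of how this works in practice, the following fragment registers a hypothetical GreeterService and then looks it up from the Service Registry; the interface and all names are invented for illustration, and only the org.osgi.framework calls are standard:

import java.util.Hashtable;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.framework.ServiceRegistration;

// Hypothetical service interface; not part of any OSGi specification.
interface GreeterService {
    String greet(String name);
}

public class GreeterActivator implements BundleActivator {
    private ServiceRegistration registration;

    public void start(BundleContext context) {
        // Publish an implementation in the Service Registry under the interface name.
        GreeterService impl = name -> "Hello, " + name;
        registration = context.registerService(
                GreeterService.class.getName(), impl, new Hashtable<String, Object>());

        // A client bundle would locate and use the service like this:
        ServiceReference ref = context.getServiceReference(GreeterService.class.getName());
        if (ref != null) {
            GreeterService greeter = (GreeterService) context.getService(ref);
            System.out.println(greeter.greet("OSGi"));
            context.ungetService(ref); // release the service when finished
        }
    }

    public void stop(BundleContext context) {
        registration.unregister(); // withdraw the service from the registry
    }
}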
The services specified by the Alliance fall into three broad groups: system services, protocol services, and miscellaneous services.
The OSGi Alliance was founded by Ericsson, IBM, Motorola, Sun Microsystems and others in March 1999. Before incorporating as a nonprofit corporation, it was called the Connected Alliance.
Among its members are more than 35 companies from quite different business areas, for example Adobe Systems, Deutsche Telekom, Hitachi, IBM, Liferay, Makewave, NEC, NTT, Oracle, Orange S.A., ProSyst, Salesforce.com, Siemens, Software AG and TIBCO Software.
The Alliance has a board of directors that provides the organization's overall governance. OSGi officers have various roles and responsibilities in supporting the alliance. Technical work is conducted within Expert Groups (EGs) chartered by the board of directors, and non-technical work is conducted in various working groups and committees. The technical work conducted within Expert Groups includes developing specifications, reference implementations, and compliance tests. These Expert Groups have produced five major releases of the OSGi specifications.
Dedicated Expert Groups exist for the enterprise, mobile, vehicle and the core platform areas.
The Enterprise Expert Group (EEG) is the newest EG and is addressing Enterprise / Server-side applications.
In November 2007 the Residential Expert Group (REG) started to work on specifications to remotely manage residential/home-gateways.
In October 2003, Nokia, Motorola, IBM, ProSyst and other OSGi members formed a Mobile Expert Group (MEG) to specify a MIDP-based service platform for the next generation of smart mobile phones, addressing needs that CLDC, unlike CDC, cannot manage. MEG's work became part of OSGi with Release 4 (R4).
Estrogen
Estrogen, or oestrogen, is the primary female sex hormone. It is responsible for the development and regulation of the female reproductive system and secondary sex characteristics. There are three major endogenous estrogens in females that have estrogenic hormonal activity: estrone, estradiol, and estriol. The estrane steroid estradiol is the most potent and prevalent of these.
Estrogens are synthesized in all vertebrates as well as some insects. Their presence in both vertebrates and insects suggests that estrogenic sex hormones have an ancient evolutionary history. The three major naturally occurring forms of estrogen in women are estrone (E1), estradiol (E2), and estriol (E3). Another type of estrogen called estetrol (E4) is produced only during pregnancy. Quantitatively, estrogens circulate at lower levels than androgens in both men and women. While estrogen levels are significantly lower in males compared to females, estrogens nevertheless also have important physiological roles in males.
Like all steroid hormones, estrogens readily diffuse across the cell membrane. Once inside the cell, they bind to and activate estrogen receptors (ERs) which in turn modulate the expression of many genes. Additionally, estrogens bind to and activate rapid-signaling membrane estrogen receptors (mERs), such as GPER (GPR30).
In addition to their role as natural hormones, estrogens are used as medications, for instance in menopausal hormone therapy and hormonal birth control.
The four major naturally occurring estrogens in women are estrone (E1), estradiol (E2), estriol (E3), and estetrol (E4). Estradiol is the predominant estrogen during the reproductive years, both in terms of absolute serum levels and in terms of estrogenic activity. During menopause, estrone is the predominant circulating estrogen, and during pregnancy estriol is the predominant circulating estrogen in terms of serum levels. Given by subcutaneous injection in mice, estradiol is about 10-fold more potent than estrone and about 100-fold more potent than estriol. Thus, estradiol is the most important estrogen in non-pregnant females between menarche and menopause. However, during pregnancy this role shifts to estriol, and in postmenopausal women estrone becomes the primary form of estrogen in the body. Estetrol (E4) is produced only during pregnancy. All of the different forms of estrogen are synthesized from androgens, specifically testosterone and androstenedione, by the enzyme aromatase.
Minor endogenous estrogens, the biosyntheses of which do not involve aromatase, include 27-hydroxycholesterol, dehydroepiandrosterone (DHEA), 7-oxo-DHEA, 7α-hydroxy-DHEA, 16α-hydroxy-DHEA, 7β-hydroxyepiandrosterone, androstenedione (A4), androstenediol (A5), 3α-androstanediol, and 3β-androstanediol. Some estrogen metabolites, such as the catechol estrogens 2-hydroxyestradiol, 2-hydroxyestrone, 4-hydroxyestradiol, and 4-hydroxyestrone, as well as 16α-hydroxyestrone, are also estrogens with varying degrees of activity. The biological importance of these minor estrogens is not entirely clear.
The actions of estrogen are mediated by the estrogen receptor (ER), a dimeric nuclear protein that binds to DNA and controls gene expression. Like other steroid hormones, estrogen enters passively into the cell where it binds to and activates the estrogen receptor. The estrogen:ER complex binds to specific DNA sequences called hormone response elements to activate the transcription of target genes (in a study using an estrogen-dependent breast cancer cell line as model, 89 such genes were identified). Since estrogen enters all cells, its actions are dependent on the presence of the ER in the cell. The ER is expressed in specific tissues including the ovary, uterus and breast. The metabolic effects of estrogen in postmenopausal women have been linked to the genetic polymorphism of the ER.
While estrogens are present in both men and women, they are usually present at significantly higher levels in women of reproductive age. They promote the development of female secondary sexual characteristics, such as breasts, and are also involved in the thickening of the endometrium and other aspects of regulating the menstrual cycle. In males, estrogen regulates certain functions of the reproductive system important to the maturation of sperm and may be necessary for a healthy libido.
Estrogens are responsible for the development of female secondary sexual characteristics during puberty, including breast development, widening of the hips, and female fat distribution. Conversely, androgens are responsible for pubic and body hair growth, as well as acne and axillary odor.
Estrogen, in conjunction with growth hormone (GH) and its secretory product insulin-like growth factor 1 (IGF-1), is critical in mediating breast development during puberty, as well as breast maturation during pregnancy in preparation for lactation and breastfeeding. Estrogen is primarily and directly responsible for inducing the ductal component of breast development, as well as for causing fat deposition and connective tissue growth. It is also indirectly involved in the lobuloalveolar component, by increasing progesterone receptor expression in the breasts and by inducing the secretion of prolactin. With this groundwork laid by estrogen, progesterone and prolactin work together to complete lobuloalveolar development during pregnancy.
Androgens such as testosterone powerfully oppose estrogen action in the breasts, such as by reducing estrogen receptor expression in them.
Estrogens are responsible for maturation and maintenance of the vagina and uterus, and are also involved in ovarian function, such as maturation of ovarian follicles. In addition, estrogens play an important role in regulation of gonadotropin secretion. For these reasons, estrogens are required for female fertility.
Estrogen regulated DNA repair mechanisms in the brain have neuroprotective effects. Estrogen regulates the transcription of DNA base excision repair genes as well as the translocation of the base excision repair enzymes between different subcellular compartments.
Estrogens are involved in libido (sex drive) in both women and men.
Verbal memory scores are frequently used as one measure of higher-level cognition. These scores vary in direct proportion to estrogen levels throughout the menstrual cycle, pregnancy, and menopause. Furthermore, estrogens, when administered shortly after natural or surgical menopause, prevent decreases in verbal memory. In contrast, estrogens have little effect on verbal memory if first administered years after menopause. Estrogens also have positive influences on other measures of cognitive function. However, the effect of estrogens on cognition is not uniformly favorable and depends on the timing of the dose and the type of cognitive skill being measured.
The protective effects of estrogens on cognition may be mediated by estrogen's anti-inflammatory effects in the brain. Studies have also shown that the Met allele and estrogen level together mediate the efficiency of prefrontal cortex-dependent working memory tasks.
Estrogen is considered to play a significant role in women's mental health. Sudden estrogen withdrawal, fluctuating estrogen, and periods of sustained low estrogen levels correlate with significant mood lowering. Clinical recovery from postpartum, perimenopausal, and postmenopausal depression has been shown to occur after estrogen levels were stabilized and/or restored. Menstrual exacerbation (including menstrual psychosis) is typically triggered by low estrogen levels and is often mistaken for premenstrual dysphoric disorder.
Compulsions in male lab mice, such as those seen in obsessive-compulsive disorder (OCD), may be caused by low estrogen levels. When estrogen levels were raised through increased activity of the enzyme aromatase in male lab mice, OCD rituals were dramatically decreased. Hypothalamic protein levels of COMT are enhanced by increasing estrogen levels, which is believed to return mice that displayed OCD rituals to normal activity. A deficiency of aromatase, the enzyme involved in estrogen synthesis in humans, is ultimately suspected, a finding with therapeutic implications for humans with obsessive-compulsive disorder.
Local application of estrogen in the rat hippocampus has been shown to inhibit the re-uptake of serotonin. Conversely, local application of estrogen has been shown to block the ability of fluvoxamine to slow serotonin clearance, suggesting that the same pathways involved in SSRI efficacy may also be affected by components of local estrogen signaling pathways.
Studies have also found that fathers had lower levels of cortisol and testosterone but higher levels of estrogen (estradiol) compared to non-fathers.
Estrogen may play a role in suppressing binge eating. Hormone replacement therapy using estrogen may be a possible treatment for binge eating behaviors in females, and estrogen replacement has been shown to suppress binge eating behaviors in female mice. The mechanism by which estrogen replacement inhibits binge-like eating involves serotonin (5-HT) neurons. Women exhibiting binge eating behaviors are found to have increased brain uptake of 5-HT, and therefore less of the neurotransmitter serotonin in the cerebrospinal fluid. Estrogen works to activate 5-HT neurons, leading to suppression of binge-like eating behaviors.
It is also suggested that there is an interaction between hormone levels and eating at different points in the female menstrual cycle. Research has predicted increased emotional eating during hormonal flux, which is characterized by high progesterone and estradiol levels that occur during the mid-luteal phase. It is hypothesized that these changes occur due to brain changes across the menstrual cycle that are likely a genomic effect of hormones. These effects produce menstrual cycle changes, which result in hormone release leading to behavioral changes, notably binge and emotional eating. These occur especially prominently among women who are genetically vulnerable to binge eating phenotypes.
Binge eating is associated with decreased estradiol and increased progesterone. Klump et al. found that progesterone may moderate the effects of low estradiol on dysregulated eating behavior, but that this may only be true in women who have had clinically diagnosed binge episodes (BEs). Dysregulated eating is more strongly associated with such ovarian hormones in women with BEs than in women without BEs.
The implantation of 17β-estradiol pellets in ovariectomized mice significantly reduced binge eating behaviors, as did injections of GLP-1 in ovariectomized mice.
Associations have thus been observed between binge eating, menstrual-cycle phase, and ovarian hormone levels.
In rodents, estrogens (which are locally aromatized from androgens in the brain) play an important role in psychosexual differentiation, for example, by masculinizing territorial behavior; the same is not true in humans. In humans, the masculinizing effects of prenatal androgens on behavior (and other tissues, with the possible exception of effects on bone) appear to act exclusively through the androgen receptor. Consequently, the utility of rodent models for studying human psychosexual differentiation has been questioned.
Estrogens are responsible for both the pubertal growth spurt, which causes an acceleration in linear growth, and epiphyseal closure, which limits height and limb length, in both females and males. In addition, estrogens are responsible for bone maturation and maintenance of bone mineral density throughout life. Due to hypoestrogenism, the risk of osteoporosis increases during menopause.
Women suffer less from heart disease due to the vasculo-protective action of estrogen, which helps prevent atherosclerosis. Estrogen also helps maintain the delicate balance between fighting infections and protecting arteries from damage, thus lowering the risk of cardiovascular disease. During pregnancy, however, high levels of estrogens increase coagulation and the risk of venous thromboembolism.
Estrogen has anti-inflammatory properties and helps in mobilization of polymorphonuclear white blood cells or neutrophils.
Estrogens are implicated in various estrogen-dependent conditions, such as ER-positive breast cancer, as well as a number of genetic conditions involving estrogen signaling or metabolism, such as estrogen insensitivity syndrome, aromatase deficiency, and aromatase excess syndrome.
Estrogens, in females, are produced primarily by the ovaries, and during pregnancy, the placenta. Follicle-stimulating hormone (FSH) stimulates the ovarian production of estrogens by the granulosa cells of the ovarian follicles and corpora lutea. Some estrogens are also produced in smaller amounts by other tissues such as the liver, pancreas, bone, adrenal glands, skin, brain, adipose tissue, and the breasts. These secondary sources of estrogens are especially important in postmenopausal women.
The pathway of estrogen biosynthesis in extragonadal tissues is different. These tissues are not able to synthesize C19 steroids, and therefore depend on C19 supplies from other tissues and the level of aromatase.
In females, synthesis of estrogens starts in theca interna cells in the ovary with the synthesis of androstenedione from cholesterol. Androstenedione is a substance of weak androgenic activity which serves predominantly as a precursor for more potent androgens such as testosterone, as well as for estrogens. This compound crosses the basal membrane into the surrounding granulosa cells, where it is converted either immediately into estrone, or into testosterone and then estradiol in an additional step. The conversion of androstenedione to testosterone is catalyzed by 17β-hydroxysteroid dehydrogenase (17β-HSD), whereas the conversion of androstenedione and testosterone into estrone and estradiol, respectively, is catalyzed by aromatase; both enzymes are expressed in granulosa cells. In contrast, granulosa cells lack 17α-hydroxylase and 17,20-lyase, whereas theca cells express these enzymes and 17β-HSD but lack aromatase. Hence, both granulosa and theca cells are essential for the production of estrogen in the ovaries.
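The two-cell division of labor described above lends itself to a compact summary. The following Python fragment is purely illustrative and not drawn from the biochemical literature: it encodes the steps named in this paragraph as data and enumerates the enzymatic route from cholesterol to estradiol.

```python
# Toy encoding of the ovarian two-cell pathway described above.
# Each step is (substrate, product, enzyme, cell type); the list is a
# simplification containing only the steps named in the paragraph.

PATHWAY = [
    ("cholesterol",     "androstenedione", "17α-hydroxylase/17,20-lyase", "theca"),
    ("androstenedione", "testosterone",    "17β-HSD",                     "granulosa"),
    ("androstenedione", "estrone",         "aromatase",                   "granulosa"),
    ("testosterone",    "estradiol",       "aromatase",                   "granulosa"),
]

def routes(start, goal, path=()):
    """Yield every enzyme route from one steroid to another."""
    if start == goal:
        yield path
        return
    for substrate, product, enzyme, cell in PATHWAY:
        if substrate == start:
            yield from routes(product, goal, path + (f"{enzyme} [{cell}]",))

for route in routes("cholesterol", "estradiol"):
    print(" -> ".join(route))
# 17α-hydroxylase/17,20-lyase [theca] -> 17β-HSD [granulosa] -> aromatase [granulosa]
```

Enumerating the routes reproduces the paragraph's point: every path to estradiol must pass through both cell types, since theca cells supply the androgen precursor and only granulosa cells express aromatase.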
Estrogen levels vary through the menstrual cycle, with levels highest near the end of the follicular phase just before ovulation.
In males, estrogen is also produced by the Sertoli cells when FSH binds to their FSH receptors.
In the circulation, estrogens are bound to plasma proteins, namely albumin and/or sex hormone-binding globulin.
Estrogens are metabolized via hydroxylation by cytochrome P450 enzymes such as CYP1A1 and CYP3A4 and via conjugation by estrogen sulfotransferases (sulfation) and UDP-glucuronyltransferases (glucuronidation). In addition, estradiol is dehydrogenated by 17β-hydroxysteroid dehydrogenase into the much less potent estrogen estrone. These reactions occur primarily in the liver, but also in other tissues.
Estrogens are excreted primarily by the kidneys as conjugates via the urine.
Estrogens are used as medications, mainly in hormonal contraception, hormone replacement therapy, and to treat gender dysphoria in transgender women and other transfeminine individuals as part of feminizing hormone therapy.
The estrogen steroid hormones are estrane steroids.
In 1929, Adolf Butenandt and Edward Adelbert Doisy independently isolated and purified estrone, the first estrogen to be discovered. Then, estriol and estradiol were discovered in 1930 and 1933, respectively. Shortly following their discovery, estrogens, both natural and synthetic, were introduced for medical use. Examples include estriol glucuronide (Emmenin, Progynon), estradiol benzoate, conjugated estrogens (Premarin), diethylstilbestrol, and ethinylestradiol.
The word estrogen derives from Ancient Greek: "oestros" (a periodic state of sexual activity in female mammals) and "genos" (generating). The term was first published in the early 1920s, when it was referenced as "oestrin". Over the years, American English adapted the spelling of estrogen to fit its phonetic pronunciation. Both "estrogen" and "oestrogen" are used nowadays, though some still prefer the original spelling because it reflects the origin of the word.
The name "estrogen" is derived from the Greek ("oistros"), literally meaning "verve or inspiration" but figuratively sexual passion or desire, and the suffix "-gen", meaning "producer of".
A range of synthetic and natural substances that possess estrogenic activity have been identified in the environment and are referred to as xenoestrogens.
Estrogens are among the wide range of endocrine-disrupting compounds (EDCs) because they have high estrogenic potency. When an EDC makes its way into the environment, it may cause male reproductive dysfunction in wildlife. Estrogen excreted from farm animals makes its way into freshwater systems, and during the germination period of reproduction fish are exposed to low levels of estrogen, which may cause reproductive dysfunction in male fish.
Some hair shampoos on the market include estrogens and placental extracts; others contain phytoestrogens. In 1998, there were case reports of four prepubescent African-American girls developing breasts after exposure to these shampoos. In 1993, the FDA determined that over-the-counter topically applied hormone-containing drug products for human use are not generally recognized as safe and effective and are misbranded. An accompanying proposed rule dealt with cosmetics, concluding that any use of natural estrogens in a cosmetic product makes the product an unapproved new drug, and that any cosmetic using the term "hormone" in the text of its labeling or in its ingredient statement makes an implied drug claim, subjecting such a product to regulatory action.
In addition to being considered misbranded drugs, products claiming to contain placental extract may also be deemed misbranded cosmetics if the extract has been prepared from placentas from which the hormones and other biologically active substances have been removed and the extracted substance consists principally of protein. The FDA recommends that such a substance be identified by a name other than "placental extract", one describing its composition more accurately, because consumers associate the name "placental extract" with a therapeutic use of some biological activity.
Roland Octapad
Roland Octapad is a range of MIDI electronic drum percussion controllers produced by the Roland Corporation.
The first model, introduced in 1985, was the Pad-8. It was originally to be called the MPC-8 (MIDI Percussion Controller 8), but was renamed the Pad-8 to avoid a legal conflict with MPC Electronics. It was an influential device at the time, giving drummers and percussionists the opportunity to trigger virtually any MIDI sound source without the need for a full electronic drum set.
The Pad-8 consists of eight individual pads (arranged in two rows of four) and six external pad trigger ports. The controller had no internal sound source and limited memory for four user patches. A unique initialization procedure, performed at power-on, would load a patch preset and configure the Pad-8 to work with either Roland's TR-909 or TR-707/TR-727.
The Pad-8 could only transmit on a single MIDI channel (channel 10 on power-up); however, each of the 14 pads (eight onboard pads plus six external triggers) is assigned a different MIDI note number. Both the MIDI channel and the note numbers could be edited to suit the device being controlled over MIDI.
Only one parameter is adjustable per pad: the MIDI note. The remaining five adjustable parameters are global: MIDI channel, pad sensitivity, volume curve, minimum velocity, and gate time.
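In MIDI terms, the behavior just described reduces to ordinary note-on messages sharing one channel. The sketch below is illustrative only: it assumes the third-party Python library "mido", and the note numbers are hypothetical stand-ins rather than the Pad-8's factory assignments.

```python
# Illustrative: the message pattern described above, in which all pads
# transmit on a single MIDI channel but each pad has its own note number.
# The note numbers here are hypothetical, not Pad-8 factory defaults.

import mido  # third-party: pip install mido python-rtmidi

PAD_NOTES = [36, 38, 42, 46, 41, 43, 49, 51]  # one note number per pad

out = mido.open_output()  # opens the default MIDI output port
for note in PAD_NOTES:
    # MIDI channel 10 (the Pad-8's power-up default) is index 9 in
    # mido's zero-based channel numbering.
    out.send(mido.Message('note_on', channel=9, note=note, velocity=100))
    out.send(mido.Message('note_off', channel=9, note=note))
```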
The second model, introduced in 1989, was the Pad-80 Octapad II. Like its predecessor, the Pad-80 was an eight-pad MIDI controller that could drive various types of MIDI sound sources. Improvements in this second model included the ability to play up to three notes per pad, and velocity switching, which allowed the user to stack or alternate between the assigned notes depending on how hard the pads were struck. This feature became useful for creating more realistic-sounding drum parts and, in addition, allowed drummers to play melodic instruments with greater ease. These new features were groundbreaking at the time, and are still utilized in Roland's electronic percussion today.
The memory was increased, allowing up to 64 different patches internally and another 64 patches to be stored on a Roland M-256E memory card. Further improvements to the unit's MIDI implementation included control of modulation, pitch bend, and aftertouch using a foot pedal, along with full System Exclusive (SysEx) capability. The Pad-80 also had a patch-chain function that allowed a series of 32 patches to be arranged in any sequence; eight such chains could be stored in memory.
After the Pad-80, Roland released the SPD-8 (1990), a standalone instrument with on-board sounds; the SPD-11 (1993), which added more sounds as well as built-in effects processing; and the SPD-20 (1998), which had still more on-board sounds. Apart from the SPD-20 and SPD-20X, these SPD-series products were not named "Octapad" on the product panel.
Roland continued the line in 2010 with the Octapad SPD-30 which includes on-board sounds and effects.
Oswald Spengler
Oswald Manuel Arnold Gottfried Spengler (; 29 May 1880 – 8 May 1936) was a German historian and philosopher of history whose interests included mathematics, science, and art and their relation to his cyclical theory of history. He is best known for his book "The Decline of the West" ("Der Untergang des Abendlandes"), published in 1918 and 1922, covering all of world history. Spengler's model of history postulates that any culture is a superorganism with a limited and predictable lifespan.
Spengler predicted that about the year 2000, Western civilization would enter the period of pre‑death emergency whose countering would lead to roughly 200 years of Caesarism (extraconstitutional omnipotence of the executive branch of the central government) before Western Civilization's final collapse.
Spengler is regarded as a nationalist and an anti-democrat, and was a prominent member of the Conservative Revolution. However, he criticised Nazism due to its excessive racism. Instead, he saw Benito Mussolini, and entrepreneur types like Cecil Rhodes, as embryonic examples of the impending Caesars of Western culture, notwithstanding his stark criticism of Mussolini's imperial adventures.
He strongly influenced other historians, including Franz Borkenau and especially Arnold J. Toynbee and other successors like Carroll Quigley and Samuel P. Huntington.
John Calvert notes that he is also popular with the Islamists, who mobilize his critique of the West.
Oswald Arnold Gottfried Spengler was born on 29 May 1880 in Blankenburg, Duchy of Brunswick, German Reich, the oldest surviving child of Bernhard Spengler (1844–1901), an official in the post office, and Pauline Spengler (1840–1910), née Grantzow, the descendant of an artistic family. Oswald's elder brother was born prematurely (eight months) in 1879, when his mother tried to move a heavy laundry basket, and died at the age of three weeks. Oswald was born ten months after his brother's death. His younger sisters were Adele (1881–1917), Gertrud (1882–1957), and Hildegard (1885–1942). Oswald's paternal grandfather, Theodor Spengler (1806–76), was a metallurgical inspector ("Hütteninspektor") in Altenbrak.
Oswald's father, Bernhard Spengler, held the position of a postal secretary ("Postsekretär") and was a hard-working man with a marked dislike of intellectualism, who tried to instil the same values and attitudes in his son.
On 26 May 1799, Friedrich Wilhelm Grantzow, a tailor's apprentice in Berlin, married a Jewish woman named Bräunchen Moses (c. 1769–1849; whose parents, Abraham and Reile Moses, were both deceased by that time). Shortly before the wedding, Moses was baptized as Johanna Elisabeth Anspachin (the surname was chosen after her birthplace, Anspach). The couple had eight children (three before and five after the wedding), one of whom was Gustav Adolf Grantzow (1811–83), a solo dancer and ballet master in Berlin, who in 1837 married Katharina Kirchner (1813–73), a nervously beautiful solo dancer from a Munich Catholic family; the second of their four daughters was Oswald Spengler's mother Pauline Grantzow. Like the Grantzows in general, Pauline was of a Bohemian disposition, and, before marrying Bernhard Spengler, accompanied her dancer sister on tours. She was the least talented member of the Grantzow family. In appearance, she was plump and a bit unseemly. Her temperament, which Oswald inherited, complemented her appearance and frail physique: she was moody, irritable, and morose.
When Oswald was ten years of age, his family moved to the university city of Halle. Here he received a classical education at the local Gymnasium (academically oriented secondary school), studying Greek, Latin, mathematics and sciences. Here, too, he developed his propensity for the arts—especially poetry, drama, and music—and came under the influence of the ideas of Johann Wolfgang von Goethe and Friedrich Nietzsche. At 17, he wrote a drama titled "Montezuma".
After his father's death in 1901 Spengler attended several universities (Munich, Berlin, and Halle) as a private scholar, taking courses in a wide range of subjects. His private studies were undirected. In 1903, he failed his doctoral thesis on Heraclitus (titled "Der metaphysische Grundgedanke der Heraklitischen Philosophie" ("The Fundamental Metaphysical Thought of the Heraclitean Philosophy") and conducted under the direction of Alois Riehl) because of insufficient references. He eventually retook the doctoral oral exam and received his PhD from Halle on 6 April 1904. In December 1904, he set out to write the secondary dissertation ("Staatsexamensarbeit") necessary to qualify as a high school teacher. This became "The Development of the Organ of Sight in the Higher Realms of the Animal Kingdom" ("Die Entwicklung des Sehorgans bei den Hauptstufen des Tierreiches"), a text now lost. It was approved and he received his teaching certificate. In 1905 Spengler suffered a nervous breakdown.
Biographers report his life as a teacher was uneventful. He briefly served as a teacher in Saarbrücken and then in Düsseldorf. From 1908 to 1911 he worked at a grammar school ("Realgymnasium") in Hamburg, where he taught science, German history, and mathematics.
In 1911, following his mother's death, he moved to Munich, where he would live until his death in 1936. He lived as a cloistered scholar, supported by his modest inheritance. Spengler survived on very limited means and was marked by loneliness. He owned no books, and took jobs as a tutor or wrote for magazines to earn additional income.
He began work on the first volume of "Decline of the West" intending at first to focus on Germany within Europe, but the Agadir Crisis of 1911 affected him deeply, and he widened the scope of his study.
The book was completed in 1914, but publication was delayed by the war; it appeared in 1918, shortly before the end of World War I, and instantly made him a celebrity. Due to a severe heart problem, Spengler was exempted from military service. During the war, however, his inheritance was largely useless because it was invested overseas; thus he lived in genuine poverty for this period.
When "The Decline of the West" was published in the summer of 1918, it was a wild success. The national humiliation of the Treaty of Versailles (1919) and later the economic depression around 1923 fueled by hyperinflation seemed to prove Spengler right. It comforted Germans because it seemingly rationalized their downfall as part of larger world-historical processes. The book met with wide success outside of Germany as well, and by 1919 had been translated into several other languages. Spengler declined a subsequent offer to become Professor of Philosophy at the University of Göttingen, saying he needed time to focus on writing.
The book was widely discussed, even by those who had not read it. Historians took umbrage at his unapologetically non-scientific approach. Novelist Thomas Mann compared reading Spengler's book to reading Schopenhauer for the first time. Academics gave it a mixed reception. Sociologist Max Weber described Spengler as a "very ingenious and learned dilettante", while philosopher Karl Popper called the thesis "pointless".
A 1928 "Time" review of the second volume of "Decline" described the immense influence and controversy Spengler's ideas enjoyed during the 1920s: "When the first volume of "The Decline of the West" appeared in Germany a few years ago, thousands of copies were sold. Cultivated European discourse quickly became Spengler-saturated. Spenglerism spurted from the pens of countless disciples. It was imperative to read Spengler, to sympathize or revolt. It still remains so".
In the second volume, published in 1922, Spengler argued that German socialism differed from Marxism, and was in fact compatible with traditional German conservatism. In 1924, following the social-economic upheaval and inflation, Spengler entered politics in an effort to bring Reichswehr general Hans von Seeckt to power as the country's leader. The attempt failed and Spengler proved ineffective in practical politics.
In 1931, he published "Man and Technics", which warned against the dangers of technology and industrialism to culture. He especially pointed to the tendency of Western technology to spread to hostile "Colored races" which would then use the weapons against the West. It was poorly received because of its anti-industrialism. This book contains the well-known Spengler quote "Optimism is cowardice".
Despite voting for Hitler over Hindenburg in 1932, Spengler found the Führer vulgar. He met Hitler in 1933 and after a lengthy discussion remained unimpressed, saying that Germany did not need a "heroic tenor ["Heldentenor": one of several conventional tenor classifications] but a real hero ["Held"]". He quarreled publicly with Alfred Rosenberg, and his pessimism and remarks about the Führer resulted in isolation and public silence. He further rejected offers from Joseph Goebbels to give public speeches. However, Spengler did become a member of the German Academy in the course of the year.
"The Hour of Decision", published in 1934, was a bestseller, but the National Socialist German Workers Party later banned it for its critiques of National Socialism. Spengler's criticisms of liberalism were welcomed by the Nazis, but Spengler disagreed with their biological ideology and anti-Semitism. While racial mysticism played a key role in his own worldview, Spengler had always been an outspoken critic of the pseudo-scientific racial theories professed by the Nazis and many others in his time, and was not inclined to change his views upon Hitler's rise to power. Although himself a German nationalist, Spengler viewed the Nazis as too narrowly German, and not occidental enough to lead the fight against other peoples. The book also warned of a coming world war in which Western Civilization risked being destroyed, and was widely distributed abroad before eventually being banned in Germany. A "Time" review of "The Hour of Decision" noted his international popularity as a polemicist, observing that "When Oswald Spengler speaks, many a Western Worldling stops to listen". The review recommended the book for "readers who enjoy vigorous writing", who "will be glad to be rubbed the wrong way by Spengler's harsh aphorisms" and his pessimistic predictions.
On 13 October 1933 Spengler became one of the hundred senators of the German Academy.
Spengler spent his final years in Munich, listening to Beethoven, reading Molière and Shakespeare, buying several thousand books, and collecting ancient Turkish, Persian and Indian weapons. He made occasional trips to the Harz mountains and to Italy. In the spring of 1936 (shortly before his death), he prophetically remarked in a letter to Reichsleiter Hans Frank that "in ten years, a German Reich will probably no longer exist" (""da ja wohl in zehn Jahren ein Deutsches Reich nicht mehr existieren wird!"").
Spengler died of a heart attack on 8 May 1936, in Munich, three weeks before his 56th birthday and exactly nine years before the fall of the Third Reich.
In the introduction to "The Decline of the West", Spengler cites Johann W. von Goethe and Friedrich Nietzsche as his major influences. Goethe's vitalism and Nietzsche's cultural criticism, in particular, are evident throughout his work.
Spengler was also influenced by the universal and cyclical vision of world history proposed by the German historian Eduard Meyer. The belief in the progression of civilizations through an evolutionary process comparable with living beings can be traced back to classical antiquity, although it is difficult to assess the extent of the influence those thinkers had on Spengler: Cato the Elder, Cicero, Seneca, Florus, Ammianus Marcellinus, and later Francis Bacon who compared different empires with each other with the help of biological analogies.
The concept of historical philosophy developed by Spengler is founded upon two assumptions: the existence of social entities called 'Cultures' ("Kulturen"), regarded as the largest possible actors in human history, which itself has no metaphysical sense; and the parallelism between the evolution of those Cultures and the evolution of living beings. Spengler numbered nine Cultures: Egyptian, Babylonian, Indian, Chinese, Greco-Roman, 'Magic' or 'Arabian' (including early and Byzantine Christianity and Islam), Mexican, Western, and Russian. These Cultures interacted with one another in time and space but were distinguished from one another by 'internal' attributes. According to him, "Cultures are organisms, and world-history is their collective biography."
Spengler also compares the evolution of Cultures to the different ages of human life: "Every Culture passes through the age-phases of the individual man. Each has its childhood, youth, manhood and old age." When a Culture enters its late stage, Spengler argues, it becomes a 'Civilization' ("Zivilisation"), a petrified body characterized in the modern age by technology, imperialism, and mass society, which he expected to fossilize and decline from the 2000s onward. The first-millennium Near East was, in his view, not a transition between Classical Antiquity, Western Christianity, and Islam, but rather an emerging new Culture he named 'Arabian' or 'Magic', explaining that Messianic Judaism, Zoroastrianism, early Christianity, and Islam are different expressions of a single Culture sharing a unique worldview.
The great historian of antiquity Eduard Meyer thought highly of Spengler, although he also had some criticisms of him. Spengler's obscurity, intuitionalism, and mysticism were easy targets, especially for the positivists and neo-Kantians who rejected the possibility that there was meaning in world history. The critic and aesthete Count Harry Kessler thought him unoriginal and rather inane, especially in regard to his opinion on Nietzsche. Philosopher Ludwig Wittgenstein, however, shared Spengler's cultural pessimism. Spengler's work became an important foundation for the social cycle theory.
In late 1919, Spengler published "Prussianism and Socialism" ("Preußentum und Sozialismus"), an essay based on notes intended for the second volume of "The Decline of the West" in which he argues that German socialism is the correct socialism in contrast to English socialism. In his view, correct socialism has a much more "national" spirit.
According to Spengler, mankind will spend the next and last several hundred years of its existence in a state of Caesarian socialism, in which all humans will be synergized into a harmonious and happy totality by a dictator, as an orchestra is synergized into a harmonious totality by its conductor.
According to some recent critics such as Ishay Landa, "Prussian socialism" has some decidedly capitalistic traits. Spengler declares himself resolutely opposed to labor strikes (Spengler describes them as "the unsocialistic earmark of Marxism"), trade unions ("wage-Bolshevism" in Spengler's terms), progressive taxation or any imposition of taxes on the rich ("dry Bolshevism"), any shortening of the working day (he argues that workers should work even on Sundays), as well as any form of government insurance for sickness, old age, accidents, or unemployment.
At the same time as he rejects any social democratic provisions, Spengler celebrates private property, competition, imperialism, capital accumulation, and "wealth, collected in few hands and among the ruling classes." Landa describes Spengler's "Prussian Socialism" as "working a whole lot, for the absolute minimum, but – and this is a vital aspect – being happy about it."
In his private papers, Spengler denounced Nazi anti-Semitism in even stronger terms, wondering "how much envy of the capability of other people in view of one's lack of it lies hidden in anti-Semitism!", and arguing that "when one would rather destroy business and scholarship than see Jews in them, one is an ideologue, i.e., a danger for the nation. Idiotic." Spengler was an admirer of the old Prussian aristocracy and showed contempt for the proletarian and demagogic character of the Nazi party, and considered the Aryan racial doctrine to be nonsense. In 1934, Spengler pronounced the funeral oration for one of the victims of the Night of the Long Knives and retired in 1935 from the board of the highly influential Nietzsche Archive in opposition to the regime.
Spengler, however, regarded the transformation of ultra-capitalist mass democracies into dictatorial regimes as inevitable, and he had expressed some sympathy for Benito Mussolini and the Italian Fascist movement as a first symptom of this development.
He also considered Judaism to be a "disintegrating element" (zersetzendes Element) that acts destructively "wherever it intervenes" (wo es auch eingreift). Jews are characterized by a "cynical intelligence" (zynische Intelligenz) and their "money thinking" (Gelddenken). Therefore, they were incapable of adapting to Western culture and represented a foreign body in Europe. With these anti-Semitic speculations Spengler contributed significantly to the enforcement of stereotypes about "the Jews" in pre-WW2 German circles.
Oracle
An oracle is a person or agency considered to provide wise and insightful counsel or prophetic predictions or precognition of the future, inspired by the gods. As such it is a form of divination.
The word "oracle" comes from the Latin verb "ōrāre", "to speak" and properly refers to the priest or priestess uttering the prediction. In extended use, "oracle" may also refer to the "site of the oracle", and to the oracular utterances themselves, called "khrēsmē" 'tresme' (χρησμοί) in Greek.
Oracles were thought to be portals through which the gods spoke directly to people. In this sense they were different from seers ("manteis", μάντεις) who interpreted signs sent by the gods through bird signs, animal entrails, and other various methods.
The most important oracles of Greek antiquity were the Pythia (priestess to Apollo at Delphi) and the oracle of Dione and Zeus at Dodona in Epirus. Other oracles of Apollo were located at Didyma and Mallus on the coast of Anatolia, at Corinth and Bassae in the Peloponnese, and on the islands of Delos and Aegina in the Aegean Sea.
The Sibylline Oracles are a collection of oracular utterances written in Greek hexameters ascribed to the Sibyls, prophetesses who uttered divine revelations in frenzied states.
Walter Burkert observes that "Frenzied women from whose lips the god speaks" are recorded in the Near East as in Mari in the second millennium BC and in Assyria in the first millennium BC. In Egypt the goddess Wadjet (eye of the moon) was depicted as a snake-headed woman or a woman with two snake-heads. Her oracle was in the renowned temple in Per-Wadjet (Greek name Buto). The oracle of Wadjet may have been the source for the oracular tradition which spread from Egypt to Greece. Evans linked Wadjet with the "Minoan Snake Goddess".
At the oracle of Dodona she is called Diōnē (the feminine form of "Diós", genitive of "Zeus", or of "dīos", "godly", literally "heavenly"), who represents the fertile earth, probably the chief female goddess of the Proto-Indo-European pantheon. Python, daughter (or son) of Gaia, was the earth dragon of Delphi, represented as a serpent, who became the chthonic deity and enemy of Apollo, who slew her and possessed the oracle.
The Pythia was the mouthpiece of the oracles of the god Apollo, and was also known as the Oracle of Delphi.
The Pythia was not conceived to be infallible; in fact, according to Sourvinou-Inwood in "What is Polis Religion?", the ancient Greeks were aware of this and accepted the unknowability of the divine. In this way, the revelations of the oracles were not seen as objective truth (as petitioners consulted many oracles) [see: Hyp. 4. 14-15]. The Pythia gave prophecies only on the seventh day of each month, seven being the number most associated with Apollo, and only during the nine warmer months of the year; thus, Delphi was not the major source of divination for the ancient Greeks. Many wealthy individuals bypassed the hordes of people attempting a consultation by making additional animal sacrifices to please the oracle, lest their request go unanswered. As a result, seers were the main source of everyday divination.
The temple was changed to a centre for the worship of Apollo during the classical period of Greece and priests were added to the temple organization—although the tradition regarding prophecy remained unchanged—and the priestesses continued to provide the services of the oracle exclusively. It is from this institution that the English word "oracle" is derived.
The Delphic Oracle exerted considerable influence throughout Hellenic culture. Distinctively, this female was essentially the highest authority both civilly and religiously in male-dominated ancient Greece. She responded to the questions of citizens, foreigners, kings, and philosophers on issues of political impact, war, duty, crime, family, laws—even personal issues.
The semi-Hellenic countries around the Greek world, such as Lydia, Caria, and even Egypt also respected her and came to Delphi as supplicants.
Croesus, king of Lydia beginning in 560 B.C., tested the oracles of the world to discover which gave the most accurate prophecies. He sent out emissaries to seven sites who were all to ask the oracles on the same day what the king was doing at that very moment. Croesus proclaimed the oracle at Delphi to be the most accurate, who correctly reported that the king was making a lamb-and-tortoise stew, and so he graced her with a magnitude of precious gifts. He then consulted Delphi before attacking Persia, and according to Herodotus was advised: "If you cross the river, a great empire will be destroyed". Believing the response favourable, Croesus attacked, but it was his own empire that ultimately was destroyed by the Persians.
She allegedly also proclaimed that there was no man wiser than Socrates, to which Socrates said that, if so, this was because he alone was aware of his own ignorance. After this confrontation, Socrates dedicated his life to a search for knowledge that was one of the founding events of western philosophy. He claimed that she was "an essential guide to personal and state development." This oracle's last recorded response was given in 362 AD, to Julian the Apostate.
The oracle's powers were highly sought after and never doubted. Any inconsistencies between prophecies and events were dismissed as failure to correctly interpret the responses, not an error of the oracle. Very often prophecies were worded ambiguously, so as to cover all contingencies – especially so "ex post facto". One famous such response to a query about participation in a military campaign was "You will go you will return never in war will you perish". This gives the recipient liberty to place a comma before or after the word "never", thus covering both possible outcomes. Another was the response to the Athenians when the vast army of king Xerxes I was approaching Athens with the intent of razing the city to the ground. "Only the wooden palisades may save you", answered the oracle, probably aware that there was sentiment for sailing to the safety of southern Italy and re-establishing Athens there. Some thought that it was a recommendation to fortify the Acropolis with a wooden fence and make a stand there. Others, Themistocles among them, said the oracle was clearly for fighting at sea, the metaphor intended to mean war ships. Others still insisted that their case was so hopeless that they should board every ship available and flee to Italy, where they would be safe beyond any doubt. In the event, variations of all three interpretations were attempted: some barricaded the Acropolis, the civilian population was evacuated over sea to nearby Salamis Island and to Troizen, and the war fleet fought victoriously at Salamis Bay. Should utter destruction have happened, it could always be claimed that the oracle had called for fleeing to Italy after all.
Dodona was another oracle devoted to the Mother Goddess identified at other sites with Rhea or Gaia, but here called Dione. The shrine of Dodona was the oldest Hellenic oracle, according to the fifth-century historian Herodotus and in fact dates to pre-Hellenic times, perhaps as early as the second millennium BC when the tradition probably spread from Egypt. Zeus displaced the Mother goddess and assimilated her as Aphrodite.
It became the second most important oracle in ancient Greece, and during the classical period it was dedicated to Zeus and to Heracles. At Dodona Zeus was worshipped as Zeus Naios or Naos (god of the spring Naiads, after a spring which existed under the oak) and as Zeus Bouleus (counsellor). Priestesses and priests interpreted the rustling of the oak leaves to determine the correct actions to be taken. The oracle was shared by Dione and Zeus.
Trophonius was an oracle at Lebadea of Boeotia devoted to the chthonian Zeus Trophonius. The name Trophonius is derived from the Greek word "trepho" (nourish); he was a Greek hero, daemon, or god. Demeter-Europa was his nurse. Europa (in Greek: broad-eyed) was a Phoenician princess whom Zeus, having transformed himself into a white bull, abducted and carried to Crete; she is equated with Astarte as a moon goddess by ancient sources. Some scholars connect Astarte with the Minoan snake goddess, whose cult as Aphrodite spread from Crete to Greece.
Near Menestheus's port, or "Menesthei Portus", modern El Puerto de Santa María in Spain, was the Oracle of Menestheus, to whom the inhabitants of Gades also offered sacrifices.
The term "oracle" is also applied in modern English to parallel institutions of divination in other cultures.
Specifically, it is used in the context of Christianity for the concept of divine revelation, and in the context of Judaism for the Urim and Thummim breastplate, and in general any utterance considered prophetic.
In Celtic polytheism, divination was performed by the priestly caste, either the druids or the vates. This is reflected in the role of "seers" in Dark Age Wales ("dryw") and Ireland ("fáith").
In China, oracle bones were used for divination in the late Shang dynasty (c. 1600–1046 BC). Diviners applied heat to these bones, usually ox scapulae or tortoise plastrons, and interpreted the resulting cracks.
A different divining method, using the stalks of the yarrow plant, was practiced in the subsequent Zhou dynasty (1046–256 BC). Around the late 9th century BC, the divination system was recorded in the "I Ching", or "Book of Changes", a collection of linear signs used as oracles. In addition to its oracular power, the "I Ching" has had a major influence on the philosophy, literature and statecraft of China since the Zhou period.
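Because hexagrams are built from only four line types, the casting procedure is easy to illustrate. The Python fragment below is a modern illustration, not a historical implementation; it uses the line probabilities conventionally reported for the yarrow-stalk method (old yin 1/16, old yang 3/16, young yang 5/16, young yin 7/16).

```python
# Illustrative cast of an I Ching hexagram using the conventionally
# reported yarrow-stalk probabilities, expressed in sixteenths.

import random

WEIGHTS = {6: 1, 9: 3, 7: 5, 8: 7}  # old yin, old yang, young yang, young yin
GLYPHS = {6: "--x--  (old yin)",  7: "-----  (young yang)",
          8: "-- --  (young yin)", 9: "--o--  (old yang)"}

def cast_hexagram():
    """Return six line values; the first value cast is the bottom line."""
    return random.choices(list(WEIGHTS), weights=list(WEIGHTS.values()), k=6)

for value in reversed(cast_hexagram()):  # hexagrams are read bottom-up
    print(GLYPHS[value])
```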
In Hawaii, oracles were found at certain "heiau", Hawaiian temples. These oracles took the form of towers covered in white "kapa" cloth made from plant fibres, in which priests received the will of the gods. The towers were called "'anu'u". An example can be found at Ahu'ena heiau in Kona.
In ancient India, the oracle was known as "akashwani" or "Ashareera vani" (a voice without body or unseen) or "asariri" (Tamil), literally meaning "voice from the sky" and was related to the message of a god. Oracles played key roles in many of the major incidents of the epics Mahabharata and Ramayana. An example is that Kamsa (or Kansa), the evil uncle of Krishna, was informed by an oracle that the eighth son of his sister Devaki would kill him. However, there are no references in any Indian literature of the oracle being a specific person.
The Igbo people of southeastern Nigeria in Africa have a long tradition of using oracles. In Igbo villages, oracles were usually female priestesses to a particular deity, usually dwelling in a cave or other secluded location away from urban areas, and, much as the oracles of ancient Greece, would deliver prophecies in an ecstatic state to visitors seeking advice. Two of their ancient oracles became especially famous during the pre-colonial period: the Agbala oracle at Awka and the Chukwu oracle at Arochukwu. Though the vast majority of Igbos today are Christian, many of them still use oracles.
Among the related Yoruba peoples of the same country, the Babalawos (and their female counterparts, the Iyanifas) serve collectively as the principal practitioners of the tribe's world-famous Ifa divination system. Because of this, they customarily officiate at a great many of its traditional and religious ceremonies.
In Norse mythology, Odin took the severed head of the god Mimir to Asgard for consultation as an oracle. The "Havamal" and other sources relate the sacrifice of Odin for the oracular Runes whereby he lost an eye (external sight) and won wisdom (internal sight; insight).
In the migration myth of the Mexitin, i.e., the early Aztecs, a mummy-bundle (perhaps an effigy) carried by four priests directed the trek away from the cave of origins by giving oracles. An oracle led to the foundation of Mexico-Tenochtitlan. The Yucatec Mayas knew oracle priests or "chilanes", literally 'mouthpieces' of the deity. Their written repositories of traditional knowledge, the Books of Chilam Balam, were all ascribed to one famous oracle priest who had correctly predicted the coming of the Spaniards and the associated disasters.
In Tibet, oracles have played, and continue to play, an important part in religion and government. The word "oracle" is used by Tibetans to refer to the spirit that enters those men and women who act as media between the natural and the spiritual realms. The media are, therefore, known as "kuten", which literally means, "the physical basis".
The Dalai Lama, who lives in exile in northern India, still consults an oracle known as the Nechung Oracle, which is considered the official state oracle of the government of Tibet. The Dalai Lama has, according to centuries-old custom, consulted the Nechung Oracle during the new year festivities of Losar. Nechung and Gadhong were the primary oracles consulted; former oracles such as Karmashar and Darpoling are no longer active in exile, and the Gadhong oracle has since died, leaving Nechung as the only primary oracle. Another oracle the Dalai Lama consults is the Tenma Oracle, whose medium is a young Tibetan woman named Khandro La, through whom the mountain goddess Tseringma and eleven other goddesses speak. The Dalai Lama gives a complete description of the process of trance and spirit possession in his book "Freedom in Exile".
Dorje Shugden oracles were once consulted by the Dalai Lamas, until the 14th Dalai Lama banned the practice, even though he had himself successfully consulted Dorje Shugden for advice on his escape from Tibet. Because of the ban, many abbots who were worshippers of Dorje Shugden have been forced into opposition to the Dalai Lama.
In computer science, an oracle is a black box that is always able to provide correct answers. It is the component of an oracle machine after which the machine is named.
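A short sketch makes the black-box idea concrete. In the illustrative Python fragment below (a standard textbook construction, not tied to any particular source), an algorithm that may only ask an oracle yes/no satisfiability questions nevertheless recovers a full satisfying assignment one variable at a time; the brute-force function is merely a stand-in for the oracle.

```python
# Illustrative: treating a SAT decision procedure as a black-box oracle
# and using only yes/no queries to build a satisfying assignment
# (the classic self-reducibility construction).

from itertools import product

def brute_force_sat_oracle(clauses, n_vars):
    """Stand-in oracle. A clause is a list of ints: +i means variable i
    is true, -i means variable i is false."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

def find_assignment(clauses, n_vars, oracle):
    """Construct a satisfying assignment using only oracle queries."""
    if not oracle(clauses, n_vars):
        return None
    assignment = []
    for var in range(1, n_vars + 1):
        trial = clauses + [[var]]         # pin this variable to true
        if oracle(trial, n_vars):
            assignment.append(True)
            clauses = trial
        else:
            assignment.append(False)
            clauses = clauses + [[-var]]  # pin it to false instead
    return assignment

# (x1 OR x2) AND (NOT x1 OR x2)
print(find_assignment([[1, 2], [-1, 2]], 2, brute_force_sat_oracle))  # [True, True]
```

The point of the construction is that the algorithm never inspects how the oracle works; replacing the brute-force stand-in with any correct decision procedure leaves the algorithm unchanged.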
Oracle Corporation
Oracle Corporation is an American multinational computer technology corporation headquartered in Redwood Shores, California. The company sells database software and technology, cloud engineered systems, and enterprise software products—particularly its own brands of database management systems. In 2019, Oracle was the second-largest software company by revenue and market capitalization.
The company also develops and builds tools for database development and systems of middle-tier software, enterprise resource planning (ERP) software, Human Capital Management (HCM) software, customer relationship management (CRM) software, and supply chain management (SCM) software.
Larry Ellison co-founded Oracle Corporation in 1977 with Bob Miner and Ed Oates under the name Software Development Laboratories (SDL). Ellison took inspiration from the 1970 paper written by Edgar F. Codd on relational database management systems (RDBMS) titled "A Relational Model of Data for Large Shared Data Banks." He heard about the IBM System R database from an article in the "IBM Research Journal" provided by Oates. Ellison wanted to make Oracle's product compatible with System R, but failed to do so as IBM kept the error codes for their DBMS a secret. SDL changed its name to Relational Software, Inc. (RSI) in 1979, then again to Oracle Systems Corporation in 1983, to align itself more closely with its flagship product, Oracle Database. At this stage Bob Miner served as the company's senior programmer. On March 12, 1986, the company had its initial public offering. In 1995, Oracle Systems Corporation changed its name to Oracle Corporation, officially named Oracle, but sometimes referred to as Oracle Corporation, the name of the holding company. Part of Oracle Corporation's early success arose from using the C programming language to implement its products. This eased porting to different operating systems, most of which support C.
Oracle ranked No. 82 in the 2018 Fortune 500 list of the largest United States corporations by total revenue. According to Bloomberg, Oracle's CEO-to-employee pay ratio is 1,205:1, with CEO compensation of $108,295,023 in 2017 and median employee compensation of $89,887. Oracle is one of the approved employers of ACCA.
Oracle designs, manufactures, and sells both software and hardware products, as well as offering services that complement them (such as financing, training, consulting, and hosting services). Many of the products have been added to Oracle's portfolio through acquisitions.
Oracle's E-delivery service (Oracle Software Delivery Cloud) provides generic downloadable Oracle software and documentation.
Oracle Corporation has also acquired and developed a number of additional database technologies.
Oracle Fusion Middleware is a family of middleware software products, including (for instance) application server, system integration, business process management (BPM), user interaction, content management, identity management and business intelligence (BI) products.
Oracle Secure Enterprise Search (SES), Oracle's enterprise-search offering, gives users the ability to search for content across multiple locations, including websites, XML files, file servers, content management systems, enterprise resource planning systems, customer relationship management systems, business intelligence systems, and databases.
Released in 2008, the Oracle Beehive collaboration software provides team workspaces (including wikis, team calendaring and file sharing), email, calendar, instant messaging, and conferencing on a single platform. Customers can use Beehive as licensed software or as software as a service ("SaaS").
Oracle also sells a suite of business applications. The Oracle E-Business Suite includes software to perform various enterprise functions related to (for instance) financials, manufacturing, customer relationship management (CRM), enterprise resource planning (ERP) and human resource management. The Oracle Retail Suite covers the retail-industry vertical, providing merchandise management, price management, invoice matching, allocations, store operations management, warehouse management, demand forecasting, merchandise financial planning, assortment planning, and category management. Users can access these facilities through a browser interface over the Internet or via a corporate intranet.
Following a number of acquisitions beginning in 2003, especially in the area of applications, Oracle Corporation maintains a number of distinct product lines.
Development of applications commonly takes place in Java (using Oracle JDeveloper) or through PL/SQL (using, for example, Oracle Forms and Oracle Reports/BIPublisher). Oracle Corporation has started a drive toward "wizard"-driven environments with a view to enabling non-programmers to produce simple data-driven applications.
Oracle Corporation works with "Oracle Certified Partners" to enhance its overall product marketing. The variety of applications from third-party vendors includes database applications for archiving, splitting and control, ERP and CRM systems, as well as more niche and focused products providing a range of commercial functions in areas like human resources, financial control and governance, risk management, and compliance (GRC). Vendors include Hewlett-Packard, Creoal Consulting, UC4 Software, Motus, and Knoa Software.
Oracle Enterprise Manager (OEM) provides web-based monitoring and management tools for Oracle products (and for some third-party software), including database management, middleware management, application management, hardware and virtualization management and cloud management.
The Primavera products of Oracle's Construction & Engineering Global Business Unit (CEGBU) consist of project-management software.
Oracle Corporation's tools for developing applications include, among others, Oracle JDeveloper, Oracle Forms, and Oracle Reports.
Many external and third-party tools make the Oracle database administrator's tasks easier.
Oracle Corporation develops and supports two operating systems: Oracle Solaris and Oracle Linux.
Oracle Cloud is a cloud computing service offered by Oracle Corporation providing servers, storage, network, applications and services through a global network of Oracle Corporation managed data centers. The company allows these services to be provisioned on demand over the Internet.
Oracle Cloud provides Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS) and Data as a Service (DaaS). These services are used to build, deploy, integrate and extend applications in the cloud. The platform supports open standards (SQL, HTML5, REST, etc.), open-source solutions (Kubernetes, Hadoop, Kafka, etc.), and a variety of programming languages, databases, tools and frameworks, including Oracle-specific, open-source, and third-party software and systems.
On July 28, 2016 Oracle bought NetSuite, the very first cloud company, for $9.3 billion. On May 16, 2018 Oracle announced that it had acquired DataScience.com, a privately held cloud workspace platform for data science projects and workloads.
Registered customers can submit Service Requests (SRs)—usually via the web-accessible My Oracle Support (MOS), a re-incarnation of Oracle Metalink with web access administered by a site Customer User Administrator (CUA).
In 1990, Oracle laid off 10% (about 400 people) of its work force because of accounting errors. This crisis came about because of Oracle's "up-front" marketing strategy, in which sales people urged potential customers to buy the largest possible amount of software all at once. The sales people then booked the value of future license sales in the current quarter, thereby increasing their bonuses. This became a problem when the future sales subsequently failed to materialize. Oracle eventually had to restate its earnings twice, and also settled (out of court) class-action lawsuits arising from its having overstated its earnings. Ellison stated in 1992 that Oracle had made "an incredible business mistake".
In 1994, Informix overtook Sybase and became Oracle's most important rival. The intense war between Informix CEO Phil White and Ellison made front-page news in Silicon Valley for three years. Informix claimed that Oracle had hired away Informix engineers to disclose important trade secrets about an upcoming product. Informix finally dropped its lawsuit against Oracle in 1997. In November 2005, a book detailing the war between Oracle and Informix was published, titled "The Real Story of Informix Software and Phil White". It gave a detailed chronology of the battle of Informix against Oracle, and how Informix Software's CEO Phil White landed in jail because of his obsession with overtaking Ellison.
Once it had overcome Informix and Sybase, Oracle Corporation enjoyed years of dominance in the database market until use of Microsoft SQL Server became widespread in the late 1990s and IBM acquired Informix Software in 2001 (to complement its DB2 database). Oracle competes for new database licenses on UNIX, Linux, and Windows operating systems primarily against IBM's DB2 and Microsoft SQL Server. IBM's DB2 dominates the mainframe database market.
In 2004, Oracle's sales grew at a rate of 14.5% to $6.2 billion, giving it 41.3% and the top share of the relational-database market ("InformationWeek" – March 2005), with market share estimated at up to 44.6% in 2005 by some sources.
Oracle Corporation's main competitors in the database arena remain IBM DB2 and Microsoft SQL Server, and to a lesser extent Sybase and Teradata, with open source databases such as PostgreSQL and MySQL also having a significant share of the market. EnterpriseDB, based on PostgreSQL, has made inroads by proclaiming that its product delivers Oracle compatibility features at a much lower price-point.
In the software-applications market, Oracle Corporation primarily competes against SAP. On March 22, 2007 Oracle sued SAP, accusing them of fraud and unfair competition.
In the market for business intelligence software, many other software companies, small and large, have competed successfully with Oracle and SAP products in terms of quality. Business intelligence vendors can be categorized into the "big four" consolidated BI firms, including Oracle, which entered the BI market through a recent wave of acquisitions (including Hyperion Solutions), and independent "pure play" vendors such as MicroStrategy, Actuate, and SAS.
Oracle Financials was ranked in the Top 20 Most Popular Accounting Software Infographic by Capterra in 2014, beating out SAP and a number of their other competitors.
From 1988, Oracle Corporation and the German company SAP AG had a decade-long history of cooperation, beginning with the integration of SAP's R/3 enterprise application suite with Oracle's relational database products. Despite the SAP partnership with Microsoft, and the increasing integration of SAP applications with Microsoft products (such as Microsoft SQL Server, a competitor to Oracle Database), Oracle and SAP continue their cooperation. According to Oracle Corporation, the majority of SAP's customers use Oracle databases.
In 2004, Oracle began to increase its interest in the enterprise-applications market (in 1989, Oracle had already released Oracle Financials). A series of acquisitions by Oracle Corporation began, most notably with those of PeopleSoft, Siebel Systems and Hyperion.
SAP recognized that Oracle had started to become a competitor in a market where SAP had the leadership, and saw an opportunity to lure in customers from those companies that Oracle Corporation had acquired. SAP would offer those customers special discounts on the licenses for its enterprise applications.
Oracle Corporation would resort to a similar strategy, by advising SAP customers to get "OFF SAP" (a play on the acronym for its middleware platform "Oracle Fusion for SAP") and also by providing special discounts on licenses and services to SAP customers who chose Oracle Corporation products.
Some analysts have suggested the suit could form part of a strategy by Oracle Corporation to decrease competition with SAP in the market for third-party enterprise software maintenance and support.
On July 3, 2007, SAP admitted that TomorrowNow employees had made "inappropriate downloads" from the Oracle support website. However, it claimed that SAP personnel and SAP customers had no access to Oracle intellectual property via TomorrowNow. SAP's CEO Henning Kagermann stated that "Even a single inappropriate download is unacceptable from my perspective. We regret very much that this occurred." Additionally, SAP announced that it had "instituted changes" in TomorrowNow's operational oversight.
On November 23, 2010, a U.S. district court jury in Oakland, California found that SAP AG must pay Oracle Corp $1.3 billion for copyright infringement, awarding damages that could be the largest ever for copyright infringement. While admitting liability, SAP estimated the damages at no more than $40 million, while Oracle claimed they were at least $1.65 billion. The awarded amount is one of the 10 or 20 largest jury verdicts in U.S. legal history. SAP said it was disappointed by the verdict and might appeal. On September 1, 2011, a federal judge overturned the judgment and offered a reduced amount or a new trial, calling Oracle's original award "grossly" excessive. Oracle chose a new trial.
On August 3, 2012, SAP and Oracle agreed on a judgment for $306 million in damages, pending approval from the U.S. district court judge, "to save time and expense of [a] new trial". Once the accord was approved, Oracle could ask a federal appeals court to reinstate the earlier jury verdict. In addition to the damages payment, SAP had already paid Oracle $120 million for its legal fees.
and "Unbreakable"
Oracle Corporation produces and distributes the "Oracle ClearView" series of videos as part of its marketing mix.
In 2000, Oracle attracted attention from the computer industry and the press after hiring private investigators to dig through the trash of organizations involved in an antitrust trial involving Microsoft. The Chairman of Oracle Corporation, Larry Ellison, staunchly defended his company's hiring of an East Coast detective agency to investigate groups that supported rival Microsoft Corporation during its antitrust trial, calling the snooping a "public service". The investigation reportedly included a $1,200 offer to janitors at the Association for Competitive Technology to look through Microsoft's trash. When asked how he would feel if others were looking into Oracle's business activities, Ellison said: "We will ship our garbage to Redmond, and they can go through it. We believe in full disclosure."
In 2002, Oracle Corporation marketed many of its products using the slogan "Can't break it, can't break in", or "Unbreakable". The campaign signaled a focus on information security, and Oracle Corporation also stressed the reliability of networked databases and network access to databases as major selling points.
However, two weeks after its introduction, David Litchfield, Alexander Kornbrust, Cesar Cerrudo and others demonstrated a whole suite of successful attacks against Oracle products. Oracle Corporation's chief security officer Mary Ann Davidson said that, rather than representing a literal claim of Oracle's products' impregnability, she saw the campaign in the context of fourteen independent security evaluations that Oracle Corporation's database server had passed.
In 2004, then-United States Attorney General John Ashcroft sued Oracle Corporation to prevent it from acquiring a multibillion-dollar intelligence contract. After Ashcroft's resignation from government, he founded a lobbying firm, The Ashcroft Group, which Oracle hired in 2005. With the group's help, Oracle went on to acquire the contract.
Computer Sciences Corporation reportedly spent a billion dollars developing the Expeditionary Combat Support System for the United States Air Force. It yielded no significant capability, because, according to an Air Force source, the Oracle software on which the system was based could not be adapted to meet the specialized performance criteria.
Oracle Corporation was awarded a contract by the State of Oregon's Oregon Health Authority (OHA) to develop Cover Oregon, the state's healthcare exchange website, as part of the U.S. Patient Protection and Affordable Care Act. When the site tried to go live on October 1, 2013, it failed, and registrations had to be taken using paper applications until the site could be fixed.
On April 25, 2014, the State of Oregon voted to discontinue Cover Oregon and instead use the federal exchange to enroll Oregon residents. The cost of switching to the federal portal was estimated at $5 million, whereas fixing Cover Oregon would have required another $78 million.
Oracle president Safra Catz responded to Cover Oregon and the OHA in a letter claiming that the site's problems were due to OHA mismanagement, specifically that a third-party systems integrator was not hired to manage the complex project.
In August 2014, Oracle Corporation sued Cover Oregon for breach of contract, and then later that month the state of Oregon sued Oracle Corporation, in a civil complaint for breach of contract, fraud, filing false claims and "racketeering". In September 2016, the two sides reached a settlement valued at over $100 million to the state, and a six-year agreement for Oracle to continue modernizing state software and IT.
On January 27, 2010, Oracle announced it had completed its acquisition of Sun Microsystems—valued at more than $7 billion—a move that transformed Oracle from solely a software company into a manufacturer of both software and hardware. The acquisition was delayed for several months by the European Commission because of concerns about MySQL, but was unconditionally approved in the end. The acquisition mattered to parts of the open source community and to some other companies, which feared Oracle might end Sun's traditional support of open source projects. Since the acquisition, Oracle has discontinued OpenSolaris and StarOffice, and sued Google over Java patents newly acquired from Sun. In September 2011, U.S. State Department Embassy cables were leaked to WikiLeaks. One cable revealed that the U.S. pressured the E.U. to allow Oracle to acquire Sun.
On July 29, 2010, the United States Department of Justice filed suit against Oracle Corporation alleging fraud. The lawsuit argued that the government received deals inferior to those Oracle gave to its commercial clients. The DoJ added its heft to an already existing whistleblower lawsuit filed by Paul Frascella, who had been senior director of contract services at Oracle. The suit was settled in May 2012.
Oracle, the plaintiff, acquired the Java computer programming language when it bought Sun Microsystems in January 2010. The Java software includes sets of pre-developed code for accomplishing common tasks consistently across programs and apps. The pre-developed code is organized into separate "packages", each of which contains a set of "classes". Each class contains numerous methods, which instruct a program or app to do a certain task. Software developers "became accustomed to using Java's designations at the package, class, and method level".
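As a minimal illustration of that hierarchy (a generic sketch in Java, not code from the case record): "java.util" is a package, "Arrays" is a class inside it, and "sort" is one of that class's methods.

// Illustrative sketch of Java's package/class/method organization.
import java.util.Arrays;              // package java.util, class Arrays

public class PackageDemo {
    public static void main(String[] args) {
        int[] values = {3, 1, 2};
        Arrays.sort(values);          // calling the sort method of the Arrays class
        System.out.println(Arrays.toString(values)); // prints [1, 2, 3]
    }
}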
Oracle and Google (the defendant) tried to negotiate an agreement for Oracle to license Java to Google, which would have allowed Google to use Java in developing programs for mobile devices using the Android operating system. However, the two companies never reached an agreement. After negotiations failed, Google created its own programming platform, which was based on Java, and contained a mix of 37 copied Java packages and new packages developed by Google.
In 2010, Oracle sued Google for copyright infringement over the use of the 37 Java packages. The case was heard in the U.S. District Court for the Northern District of California and assigned to Judge William H. Alsup (who had taught himself to program). In the lawsuit, Oracle sought between $1.4 billion and $6.1 billion. In June 2011, the judge had to compel Google, by judicial order, to make public the details of Oracle's claim for damages.
By the end of the first jury trial (the legal dispute would eventually go on to another trial), the arguments made by Oracle's attorneys focused on a Java function called "rangeCheck". As "The Verge" reported on October 19, 2017: "The argument centered on a function called rangeCheck. Of all the lines of code that Oracle had tested—15 million in total—these were the only ones that were 'literally' copied. Every keystroke, a perfect duplicate." Although Google admitted to copying the packages, Judge Alsup found that none of the Java packages were covered under copyright protection, and therefore Google did not infringe.
After the case was over, Oracle appealed to the United States Court of Appeals for the Federal Circuit (750 F.3d 1339 (2014)). On May 9, 2014, the appeals court partially reversed Judge Alsup's decision, finding that Java APIs are copyrightable. An API ("application programming interface") is how different computer programs or apps communicate with each other. However, the appeals court also left open the possibility that Google might have a "fair use" defense.
On October 6, 2014, Google filed a petition to appeal to the U.S. Supreme Court, but the Supreme Court denied the petition.
The case was then returned to the U.S. District Court for another trial on Google's fair use defense. Oracle sought $9 billion in damages. In May 2016, the trial jury found that Google's use of Java's APIs constituted fair use.
In February 2017, Oracle filed another appeal to the U.S. Court of Appeals for the Federal Circuit. This time it was asking for a new trial because the District Court "repeatedly undermined Oracle's case", which Oracle argued led the jury to make the wrong decision. According to ZDNet, "For example, it [Oracle] says the court wrongly bought Google's claim that Android was limited to smartphones while Java was for PCs, whereas Oracle contends that Java and Android both compete as platforms for smart TVs, cars, and wearables."
On August 13, 2010, an internal Oracle memo leaked to the Internet cited plans for ending the OpenSolaris operating system project and community. With Oracle planning to develop Solaris only in a closed source fashion, OpenSolaris developers moved to the Illumos and OpenIndiana projects, among others.
As Oracle completed its acquisition of Sun Microsystems in February 2010, it announced that OpenSSO would no longer be a strategic product. Shortly afterward, OpenSSO was forked to OpenAM, which continues to be developed and supported by ForgeRock.
On September 6, 2010, Oracle announced that former Hewlett-Packard CEO Mark Hurd was to replace Charles Phillips, who resigned as Oracle Co-President. According to an official statement by Larry Ellison, Phillips had previously expressed his desire to transition out of the company, and Ellison had asked him to stay on through the integration of Sun Microsystems Inc. In a separate statement regarding the transition, Ellison said "Mark did a brilliant job at HP and I expect he'll do even better at Oracle. There is no executive in the IT world with more relevant experience than Mark."
On September 7, 2010, HP announced a civil lawsuit against Mark Hurd "to protect HP's trade secrets", in response to Oracle hiring Hurd. On September 20, Oracle and HP published a joint press release announcing the resolution of the lawsuit on confidential terms and reaffirming commitment to long-term strategic partnership between the companies.
A number of OpenOffice.org developers formed The Document Foundation, receiving backing from Google, Novell, Red Hat, and Canonical, among others, but they were unable to get Oracle to donate the OpenOffice.org brand, causing a fork in the project's development, with the foundation now developing and promoting LibreOffice. Oracle expressed no interest in sponsoring the new project and asked the OpenOffice.org developers who started it to resign from the company due to "conflicts of interest". On November 1, 2010, 33 of the OpenOffice.org developers handed in their letters of resignation. On June 1, 2011, Oracle donated OpenOffice.org to the Apache Software Foundation.
On June 15, 2011, HP filed a lawsuit in California Superior Court in Santa Clara, claiming that Oracle had breached an agreement to support the Itanium microprocessor used in HP's high-end enterprise servers. Oracle called the lawsuit "an abuse of the judicial process" and said that had it known SAP's Léo Apotheker was about to be hired as HP's new CEO, any support for HP's Itanium servers would not have been implied.
On August 1, 2012, a California judge said in a tentative ruling that Oracle must continue porting its software at no cost until HP discontinues its sales of Itanium-based servers. HP was awarded $3 billion in damages against Oracle in 2016. HP argued Oracle's canceling support damaged HP's Itanium server brand. Oracle has announced it will appeal both the decision and damages.
On August 31, 2011, "The Wall Street Journal" reported that Oracle was being investigated by the Federal Bureau of Investigation for paying bribes to government officials in order to win business in Africa, in contravention of the Foreign Corrupt Practices Act (FCPA).
On April 20, 2012, the US General Services Administration banned Oracle from the most popular portal for bidding on GSA contracts, for undisclosed reasons. Oracle had previously used this portal to generate around four hundred million dollars a year in revenue. Oracle had earlier settled a lawsuit filed under the False Claims Act, which accused the company of overbilling the US government between 1998 and 2006; the 2011 settlement required Oracle to pay $199.5 million to the General Services Administration.
Oracle Corporation has its overall headquarters on the San Francisco Peninsula in the Redwood Shores area of Redwood City, adjacent to Belmont and near San Carlos Airport (IATA airport code: SQL).
Oracle HQ stands on the former site of Marine World/Africa USA, which moved from Redwood Shores to Vallejo in 1986. Oracle Corporation originally leased two buildings on the site, moving its finance and administration departments from the corporation's former headquarters on Davis Drive, Belmont, California. Eventually, Oracle purchased the complex and constructed a further four main buildings.
The distinctive Oracle Parkway buildings, nicknamed the Emerald City, served as sets for the futuristic headquarters of the fictional company "NorthAm Robotics" in the Robin Williams film "Bicentennial Man" (1999).
The campus represented the headquarters of Cyberdyne Systems in the movie "Terminator Genisys" (2015).
Oracle Corporation operates in multiple markets and has acquired several companies that formerly functioned autonomously. In some cases these provided the starting points for global business units (GBUs) targeting particular vertical markets.
On October 20, 2006, the Golden State Warriors and the Oracle Corporation announced a 10-year agreement in which the Oakland Arena would become known as the Oracle Arena.
Larry Ellison's sailing team competes as Oracle Team USA. The team has won the America's Cup twice, in 2010 (as BMW Oracle Racing) and in 2013, despite being penalized for cheating.
Sean Tucker's "Challenger II" stunt biplane is sponsored by Oracle and performs frequently at air shows around the US.
On January 9, 2019, ESPN reported that the San Francisco Giants entered into a 20-year agreement to rename their stadium Oracle Park.
Official Monster Raving Loony Party
The Official Monster Raving Loony Party is a political party established in the United Kingdom in 1983 by the musician David Sutch, also known as "Screaming Lord Sutch, 3rd Earl of Harrow", or simply "Lord Sutch". It is notable for its deliberately bizarre policies; it effectively exists to satirise British politics and to offer itself as an alternative for protest voters, especially in constituencies where the party holding a safe seat is unlikely to lose it.
Starting in 1963, David Sutch, head of the rock group Screaming Lord Sutch and the Savages, stood in British parliamentary elections under a range of party names, initially as the National Teenage Party candidate. At that time the minimum voting age was 21. The party's name was intended to highlight what Sutch and others viewed as hypocrisy, since teenagers were unable to vote because of their supposed immaturity while the adults running the country were involved in scandals such as the Profumo affair.
After being shot during a mugging attempt whilst living in the United States, Sutch returned to Britain and to politics during the 1980s. The "Raving Loony" name first appeared at the Bermondsey by-election of 1983.
A similar concept had appeared earlier in the "Election Night Special" sketch on "Monty Python's Flying Circus", in which the Silly and Sensible parties competed; and a similar skit by "The Goodies", in which Graeme Garden stood as a "Science Loony". There had also been a "Science Fiction Looney" candidate competing in the 1976 Cambridge by-election.
Two others were important in the formation of the OMRLP. The first, John Desmond Dougrez-Lewis, stood in the Crosby by-election of 1981 (won by the Social Democratic Party's co-founder Shirley Williams) as "Tarquin Fin-tim-lin-bin-whin-bim-lim-bus-stop-F'tang-F'tang-Olé-Biscuitbarrel", a name taken from the "Election Night Special" Monty Python sketch. He had changed his name by deed poll from John Desmond Lewis, on behalf of the Cambridge University Raving Loony Society (CURLS). CURLS was an "anti-political party" and charity fundraising group formed largely as a light-hearted counter-response to increasingly polarised student politics in Cambridge, and was responsible for a number of stunts. Its Oxford University equivalent was the "Oxford Raving Lunatics". Dougrez-Lewis became Sutch's agent at the notorious Bermondsey by-election mentioned above, where the OMRLP banner was first officially unfurled. Reverting to his original name, Dougrez-Lewis stood for the new party in Cambridge in the 1983 general election.
Another serial offbeat by-election candidate was Commander Bill Boaks, a retired World War II hero who took part in sinking the "Bismarck". Boaks campaigned and stood for election for over thirty years on limited funds, always on the issue of road safety. Boaks proved influential on Sutch's direction as the leading anti-politician: "It's the ones who "don't" vote you really want, because they're the ones who think".
Boaks thought that increased traffic and more roads would cause problems, and he addressed road safety with flamboyant campaigning and a variety of tactics, including private prosecution of public figures who escaped public prosecution for drunk driving. He successfully campaigned with Sutch and others to pedestrianise London's Carnaby Street. While recovering from being struck by a motorcycle, Boaks was one of Sutch's counting agents at Bermondsey in 1983. Following Boaks' death, popular opinion towards road safety has become closer to his views.
Screaming Lord Sutch committed suicide on 16 June 1999 while suffering from clinical depression after his mother, Annie, died in 1998. A biography of Sutch, "The Man Who Was Screaming Lord Sutch" (by Graham Sharpe, the Media Relations Manager for bookmakers William Hill), was published in April 2005, describing what remained of the party as "wannabes, never-would-bes and some bloody-well-shouldn't-bes".
Sutch's funeral – organised by his lifetime friend, the session drummer Carlo Little – was attended by members of the OMRLP and Raving Loony Green Giant Party, including Hughes, who with Freddie Zapp brought along a huge floral tribute shaped as an OMRLP rosette. The running of the OMRLP fell to Alan "Howling Laud" Hope and his cat, Catmando, who were the joint winners of the 1999 membership ballot for the replacement for Sutch. Although Hope took over as Party Leader after Sutch's death, the real day-to-day running of the party has always been done by other party members.
The OMRLP fielded 15 candidates in the 2001 general election, at which they had their best general election results to date.
The party's manifesto for the 2005 general election, entitled "The Manicfesto", featured the major commitment of their long-held pledge to abolish income tax, citing as always that it was only meant to be a temporary measure during the Napoleonic Wars. Also included was another old staple, the "Putting Parliament on Wheels" idea of having Parliament sit throughout the country rather than solely in London—with special emphasis this time on how this would negate the need for national/regional assemblies.
The OMRLP has continued to field candidates since 2001, with reduced success, its candidates losing their deposits. "Top Cat" Owen is the only member of the current OMRLP to have polled over 1,000 votes (2,859 in the 1994 European elections).
The OMRLP's official headquarters was originally the Golden Lion Hotel in Ashburton, Devon, then the Dog & Partridge pub at Yateley in Hampshire, but this was lost shortly after the 2005 general election. Conference venues are now chosen in advance: the 2006 conference was held at Torrington in Devon, and the 2007 conference was held in Jersey.
The party's last elected representative was R. U. Seerius (formerly Jon Brewer) on the 11-member Sawley Parish Council in South Derbyshire, first elected (uncontested) in 2005. He was no longer a member as of May 2007, having failed, due to illness, to appear at no fewer than 11 statutory meetings during his time in office.
The OMRLP stood in the two by-elections of 19 July 2007, in Sedgefield and Ealing Southall, but again achieved derisory results: Alan Hope took 129 votes (0.46%) and John Cartwright 188 (0.51%), beating the English Democrats but finishing behind the Christian Party of the Reverend George Hargreaves and David Braid.
In recognition that reforms were needed, Peter 'T.C.' Owen was moved from the honorary position of Party chairman to that of Deputy Leader (and thus effective day-to-day leader) of the OMRLP, whilst Anthony "The Jersey Flyer" Blyth (owner of the Ommaroo and a member of the Jersey Heritage Trust) took over Owen's role. Owen is one of four Raving Loonies to have scored over 1000 votes in an election.
On 31 May 2017 Hope was interviewed by Andrew Neil on the BBC's "Daily Politics" programme.
In 1987, the OMRLP won its first seat on Ashburton Town Council in Devon, as Alan "Howling Laud" Hope was elected unopposed. He subsequently became Deputy Mayor and, in 1998, Mayor of Ashburton (opposed mainly by the local Conservatives, who never forgave him for joining the OMRLP), serving until he moved to Hampshire after Sutch's death. For over a decade, his hotel "The Golden Lion" in Ashburton (referred to by some in the party as "The Mucky Mog") was the party's headquarters and conference centre.
The first party member to win a vote, rather than an uncontested election, was Stuart Hughes, taking the "safe" Conservative Party seat of Sidmouth Woolbrook on East Devon District Council in May 1991. He also took a seat on Sidmouth Town Council from the Conservatives the following day. His success was met with hostility from the local Tories. Hughes' reaction was to attempt to make their lives a misery for the next three years by refusing to pay his Community Charge (popularly known as the Poll Tax), then dumping scrap metal in the middle of the council chambers to the value of his unpaid tax when threatened with legal action. He also formed an alliance known as "The Coastals" (because of the seats they held) of Independents and the sole Green Party councillor, giving East Devon's ruling Conservatives the first true opposition they had faced for decades (the local Liberal Democrat and Labour parties being negligible).
Hughes retained his seats with increased majorities in subsequent elections, and took the Devon County Council seat from the local party's Chief Whip in the council.
To date, two councillors have subsequently become mayors: Alan Hope in Ashburton, Devon and Chris "Screwy" Driver on the Isle of Sheppey in Kent.
At the Bootle by-election in May 1990, the Loony candidate (Sutch) received more votes than the candidate for the continuing Social Democrats. The story was a major headline in many UK newspapers; ironically, the by-election itself had attracted little coverage. Bootle is still regarded by the party as their most significant result in politics, albeit one largely lampooning the political world.
In the 2019 Brecon and Radnorshire by-election, the OMRLP candidate Lady Lily the Pink polled more votes than the United Kingdom Independence Party. The party got a record number of votes in the 2019 general election.
As of 2019, the party has two parish councillors.
For the 2010 general election, the OMRLP used the description "Monster Raving Loony William Hill Party", which drew criticism from some members, with John Cartwright, the Loony candidate in Croydon, publicly stating, "I am not and will not be a mercenary, or an advert, for a commercial company during the course of the election campaign."
The statement of accounts for the period 1 January to 31 December 2008 puts membership at 1,354, made up of 173 paying members and 1,181 "lifetime but non-paying" members. Membership currently costs £12.00 per year (£14.50 for those overseas) and includes a party rosette, a certificate of insanity, a "Loony Badge", a personal party ID card, and a letter from the party's current leader, Alan "Howling Laud" Hope.
Sir Patrick Moore (1923–2012), the British TV amateur astronomer, was the finance minister of the party for a short time. He once said that the Monster Raving Loony Party "had an advantage over all the other parties, in that they knew they were loonies".
In 1992, the Glasgow band Hugh Reed and the Velvet Underpants released the song "Vote Monster Raving Looney", despite not having any actual ties to the party.
The OMRLP are distinguished by having a deliberately bizarre manifesto, which contains things that seem to be impossible or too absurd to implement – usually to highlight what they see as real-life absurdities. Despite its satirical nature, some of the things that have featured in Loony manifestos have become law, such as "passports for pets", abolition of dog licences and all-day pub openings.
Other suggestions so far unadopted included minting a 99p coin and forbidding greyhound racing in order to "stop the country going to the dogs".
The Loonies generally field as many candidates as possible in United Kingdom general elections, some (but by no means all) standing under ridiculous names they have adopted via deed poll. Sutch himself stood against all three main party leaders (John Major, Neil Kinnock and Paddy Ashdown) in the 1992 general election. Parliamentary candidates have to pay their own deposit (which currently stands at £500) and cover all of their expenses. No OMRLP candidate has managed to get the required 5% of the popular vote needed to retain their deposit, but this does not stop people standing. Sutch came closest with 4.1% and over a thousand votes at the Rotherham by-election, whilst Stuart Hughes still holds the record for the largest number of votes for a Loony candidate at a Parliamentary election, with 1,442 at the 1992 general election in the Honiton seat in east Devon. The all-time highest vote achieved was by comedian Danny Bamford aka Danny Blue, who secured 3,339 votes in the 1994 European elections under the pseudonym of "John Major". Bamford had also acted as an election agent for Lindi St Clair's rival Corrective Party, and was a former close associate of Stuart Hughes.
In the run-up to the 2011 Alternative Vote referendum, the party adopted an equivocal stance, advising its supporters, on 8 April, to "vote as you see fit". In response to mainstream parties debating Brexit, the OMRLP suggested sending Noel Edmonds to the European Parliament "because he understands Deal or No Deal".
Omega-3 fatty acid
Omega−3 fatty acids, also called Omega-3 oils, ω−3 fatty acids or "n"−3 fatty acids, are polyunsaturated fatty acids (PUFAs) characterized by the presence of a double bond three atoms away from the terminal methyl group in their chemical structure. They are widely distributed in nature, being important constituents of animal lipid metabolism, and they play an important role in the human diet and in human physiology. The three types of omega−3 fatty acids involved in human physiology are α-linolenic acid (ALA), found in plant oils, and eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), both commonly found in marine oils. Marine algae and phytoplankton are primary sources of omega−3 fatty acids. Common sources of plant oils containing ALA include walnut, edible seeds, clary sage seed oil, algal oil, flaxseed oil, Sacha Inchi oil, "Echium" oil, and hemp oil, while sources of animal omega−3 fatty acids EPA and DHA include fish, fish oils, eggs from chickens fed EPA and DHA, squid oils, krill oil, and certain algae.
Mammals are unable to synthesize the essential omega−3 fatty acid ALA and can only obtain it through diet. However, they can use ALA, when available, to form EPA and DHA, by creating additional double bonds along its carbon chain (desaturation) and extending it (elongation). Namely, ALA (18 carbons and 3 double bonds) is used to make EPA (20 carbons and 5 double bonds), which is then used to make DHA (22 carbons and 6 double bonds). The ability to make the longer-chain omega−3 fatty acids from ALA may be impaired in aging. In foods exposed to air, unsaturated fatty acids are vulnerable to oxidation and rancidity. Dietary supplementation with omega−3 fatty acids does not appear to affect the risk of death, cancer or heart disease. Furthermore, fish oil supplement studies have failed to support claims of preventing heart attacks or strokes or any vascular disease outcomes.
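Schematically, the conversion pathway just described is (each arrow standing for one or more desaturation and elongation steps):

\[
\text{ALA}\ (18{:}3) \;\longrightarrow\; \text{EPA}\ (20{:}5) \;\longrightarrow\; \text{DHA}\ (22{:}6).
\]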
The terms "ω–3 ("omega–3") fatty acid" and "n–3 fatty acid" are derived from organic nomenclature. One way in which an unsaturated fatty acid is named is determined by the location, in its carbon chain, of the double bond which is closest to the methyl end of the molecule. In general terminology, "n" (or ω) represents the locant of the methyl end of the molecule, while the number "n–x" (or ω–"x") refers to the locant of its nearest double bond. Thus, in omega"–"3 fatty acids in particular, there is a double bond located at the carbon numbered 3, starting from the methyl end of the fatty acid chain. This classification scheme is useful since most chemical changes occur at the carboxyl end of the molecule, while the methyl group and its nearest double bond are unchanged in most chemical or enzymatic reactions.
In the expressions "n–x" or ω–"x", the dash is actually meant to be a minus sign, although it is never read as such. Also, the symbol "n" (or ω) represents the locant of the methyl end, counted from the carboxyl end of the fatty acid carbon chain. For instance, in an omega-3 fatty acid with 18 carbon atoms (see illustration), where the methyl end is at location 18 from the carboxyl end, "n" (or ω) represents the number 18, and the notation n–3 (or ω–3) represents the subtraction 18–3 = 15, where 15 is the locant of the double bond which is closest to the methyl end, counted from the carboxyl end of the chain.
Although "n" and ω (omega) are synonymous, the IUPAC recommends that "n" be used to identify the highest carbon number of a fatty acid. Nevertheless, the more common name – omega"–"3 fatty acid – is used in both the lay media and scientific literature.
For example, α-linolenic acid (ALA; illustration) is an 18-carbon chain having three double bonds, the first being located at the third carbon from the methyl end of the fatty acid chain. Hence, it is an omega−3 fatty acid. Counting from the other end of the chain, that is, the carboxyl end, the three double bonds are located at carbons 9, 12, and 15. These three locants are typically indicated as Δ9c,12c,15c, or cisΔ9,cisΔ12,cisΔ15, or cis-cis-cis-Δ9,12,15, where "c" or "cis" means that the double bonds have a "cis" configuration.
α-Linolenic acid is polyunsaturated (containing more than one double bond) and is also described by a lipid number, 18:3, meaning that there are 18 carbon atoms and 3 double bonds.
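Tying the naming systems together for ALA (all values taken from the preceding paragraphs):

\[
\text{ALA}: \quad 18{:}3\,(n{-}3) \;\Longleftrightarrow\; \Delta^{9c,12c,15c}, \qquad n-3 \;=\; 18-3 \;=\; 15 .
\]

The lipid number 18:3 gives the chain length and double-bond count, the n−3 (ω−3) label locates the methyl-proximal double bond, and the Δ locants list all three double bonds counted from the carboxyl end.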
Supplementation does not appear to be associated with a lower risk of all-cause mortality.
The evidence linking the consumption of marine omega−3 fats to a lower risk of cancer is poor. With the possible exception of breast cancer, there is insufficient evidence that supplementation with omega−3 fatty acids has an effect on different cancers. The effect of consumption on prostate cancer is not conclusive. There is a decreased risk with higher blood levels of DPA, but an increased risk of more aggressive prostate cancer was shown with higher blood levels of combined EPA and DHA. In people with advanced cancer and cachexia, omega−3 fatty acids supplements may be of benefit, improving appetite, weight, and quality of life.
Evidence in the population generally does not support a beneficial role for omega−3 fatty acid supplementation in preventing cardiovascular disease (including myocardial infarction and sudden cardiac death) or stroke. A 2018 meta-analysis found no support that daily intake of one gram of omega-3 fatty acid in individuals with a history of coronary heart disease prevents fatal coronary heart disease, nonfatal myocardial infarction or any other vascular event. However, omega−3 fatty acid supplementation greater than one gram daily for at least a year may be protective against cardiac death, sudden death, and myocardial infarction in people who have a history of cardiovascular disease. No protective effect against the development of stroke or all-cause mortality was seen in this population. A 2018 study found that omega-3 supplementation was helpful in protecting cardiac health in those who did not regularly eat fish, particularly in the African American population. Eating a diet high in fish that contain long chain omega−3 fatty acids does appear to decrease the risk of stroke. Fish oil supplementation has not been shown to benefit revascularization or abnormal heart rhythms and has no effect on heart failure hospital admission rates. Furthermore, fish oil supplement studies have failed to support claims of preventing heart attacks or strokes. In the EU, a review by the European Medicines Agency of omega-3 fatty acid medicines containing a combination of an ethyl ester of eicosapentaenoic acid and docosahexaenoic acid at a dose of 1 g per day concluded that these medicines are not effective in secondary prevention of heart problems in patients who have had a myocardial infarction.
Evidence suggests that omega−3 fatty acids modestly lower blood pressure (systolic and diastolic) in people with hypertension and in people with normal blood pressure. Some evidence suggests that people with certain circulatory problems, such as varicose veins, may benefit from the consumption of EPA and DHA, which may stimulate blood circulation and increase the breakdown of fibrin, a protein involved in blood clotting and scar formation.
Omega−3 fatty acids reduce blood triglyceride levels but do not significantly change the level of LDL cholesterol or HDL cholesterol in the blood. The American Heart Association position (2011) is that borderline elevated triglycerides, defined as 150–199 mg/dL, can be lowered by 0.5–1.0 grams of EPA and DHA per day; that high triglycerides of 200–499 mg/dL benefit from 1–2 g/day; and that levels >500 mg/dL should be treated under a physician's supervision with 2–4 g/day using a prescription product. In this population, omega−3 fatty acid supplementation decreases the risk of heart disease by about 25%.
ALA does not confer the cardiovascular health benefits of EPA and DHA.
The effect of omega−3 polyunsaturated fatty acids on stroke is unclear, with a possible benefit in women.
A 2013 systematic review found tentative evidence of benefit for lowering inflammation levels in healthy adults and in people with one or more biomarkers of metabolic syndrome. Consumption of omega−3 fatty acids from marine sources lowers blood markers of inflammation such as C-reactive protein, interleukin 6, and TNF alpha.
For rheumatoid arthritis, one systematic review found consistent, but modest, evidence for the effect of marine n−3 PUFAs on symptoms such as "joint swelling and pain, duration of morning stiffness, global assessments of pain and disease activity" as well as the use of non-steroidal anti-inflammatory drugs. The American College of Rheumatology has stated that there may be modest benefit from the use of fish oils, but that it may take months for effects to be seen, and cautions for possible gastrointestinal side effects and the possibility of the supplements containing mercury or vitamin A at toxic levels. The National Center for Complementary and Integrative Health has concluded that "supplements containing omega-3 fatty acids... may help relieve rheumatoid arthritis symptoms" and warns that such supplements "may interact with drugs that affect blood clotting".
Although not supported by current scientific evidence as a primary treatment for attention deficit hyperactivity disorder (ADHD), autism, and other developmental disabilities, omega−3 fatty acid supplements are being given to children with these conditions.
One meta-analysis concluded that omega−3 fatty acid supplementation demonstrated a modest effect for improving ADHD symptoms. A Cochrane review of PUFA (not necessarily omega−3) supplementation found "there is little evidence that PUFA supplementation provides any benefit for the symptoms of ADHD in children and adolescents", while a different review found "insufficient evidence to draw any conclusion about the use of PUFAs for children with specific learning disorders". Another review concluded that the evidence is inconclusive for the use of omega−3 fatty acids in behavior and non-neurodegenerative neuropsychiatric disorders such as ADHD and depression.
Fish oil has only a small benefit on the risk of premature birth. A 2015 meta-analysis of the effect of omega−3 supplementation during pregnancy did not demonstrate a decrease in the rate of preterm birth or improved outcomes in women with singleton pregnancies and no prior preterm births. A 2018 Cochrane systematic review with moderate-to-high quality of evidence suggested that omega−3 fatty acids may reduce the risk of perinatal death and of low-birth-weight babies, and possibly mildly increase the incidence of large-for-gestational-age (LGA) babies. However, a 2019 clinical trial in Australia showed no significant reduction in the rate of preterm delivery, and no higher incidence of interventions in post-term deliveries, compared with control.
There is evidence that omega−3 fatty acids are related to mental health, particularly for depression, where there are now large meta-analyses showing treatment efficacy compared to placebo. These data have also recently resulted in international clinical guidelines regarding the use of omega−3 fatty acids in the treatment of depression. The link between omega−3 and depression has been attributed to the fact that many products of the omega−3 synthesis pathway play key roles in regulating inflammation (such as prostaglandin E3), which has been linked to depression. This link to inflammation regulation has been supported in both in vivo studies and in a meta-analysis. Omega−3 fatty acids have also been investigated as an add-on for the treatment of depression associated with bipolar disorder. However, significant benefits from EPA supplementation were seen only when treating depressive symptoms and not manic symptoms, suggesting a link between omega−3 and depressive mood.
In contrast to dietary supplementation studies, there is significant difficulty in interpreting the literature on dietary intake of omega−3 fatty acids (e.g., from fish) due to participant recall and systematic differences in diets. There is also controversy as to the efficacy of omega−3, with many meta-analyses finding heterogeneity among results that can be explained mostly by publication bias. Shorter treatment trials were associated with greater omega−3 efficacy in treating depressive symptoms, further implicating publication bias. One review found that "Although evidence of benefits for any specific intervention is not conclusive, these findings suggest that it might be possible to delay or prevent transition to psychosis."
Epidemiological studies are inconclusive about an effect of omega−3 fatty acids on the mechanisms of Alzheimer's disease. There is preliminary evidence of effect on mild cognitive problems, but none supporting an effect in healthy people or those with dementia.
Brain function and vision rely on dietary intake of DHA to support a broad range of cell membrane properties, particularly in grey matter, which is rich in membranes. A major structural component of the mammalian brain, DHA is the most abundant omega−3 fatty acid in the brain. It is under study as a candidate essential nutrient with roles in neurodevelopment, cognition, and neurodegenerative disorders.
Results of studies investigating the role of LCPUFA supplementation and LCPUFA status in the prevention and therapy of atopic diseases (allergic rhinoconjunctivitis, atopic dermatitis, and allergic asthma) are controversial. As of 2013, it was therefore not possible to state either that the nutritional intake of n−3 fatty acids has a clear preventive or therapeutic role, or that the intake of n−6 fatty acids has a promoting role in the context of atopic diseases.
People with PKU often have low intake of omega−3 fatty acids, because nutrients rich in omega−3 fatty acids are excluded from their diet due to high protein content.
As of 2015, there was no evidence that taking omega−3 supplements can prevent asthma attacks in children.
An omega−3 fatty acid is a fatty acid with multiple double bonds, where the first double bond is between the third and fourth carbon atoms from the end of the carbon atom chain. "Short chain" omega−3 fatty acids have a chain of 18 carbon atoms or less, while "long chain" omega−3 fatty acids have a chain of 20 or more.
Three omega−3 fatty acids are important in human physiology: α-linolenic acid (18:3, "n"−3; ALA), eicosapentaenoic acid (20:5, "n"−3; EPA), and docosahexaenoic acid (22:6, "n"−3; DHA). These three polyunsaturates have 3, 5, or 6 double bonds in a carbon chain of 18, 20, or 22 carbon atoms, respectively. As with most naturally produced fatty acids, all double bonds are in the "cis" configuration, in other words, the two hydrogen atoms are on the same side of the double bond; and the double bonds are interrupted by methylene bridges (−CH2−), so that there are two single bonds between each pair of adjacent double bonds.
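As a consistency check of the "n−x" arithmetic from the nomenclature section, the carboxyl-end locants of the double bonds can be compared with the chain lengths (the Δ values for ALA are those given above; those for EPA and DHA are the standard textbook locants, not stated earlier in this article):

\[
\begin{aligned}
\text{ALA}\ 18{:}3\,(n{-}3) &: \ \Delta\,9,12,15, & 18-3 &= 15,\\
\text{EPA}\ 20{:}5\,(n{-}3) &: \ \Delta\,5,8,11,14,17, & 20-3 &= 17,\\
\text{DHA}\ 22{:}6\,(n{-}3) &: \ \Delta\,4,7,10,13,16,19, & 22-3 &= 19.
\end{aligned}
\]

In each case the last Δ locant equals n − 3: the double bond nearest the methyl end sits three carbons in, which is what makes all three omega−3 fatty acids.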
This table lists several different names for the most common omega−3 fatty acids found in nature.
Omega−3 fatty acids occur naturally in two forms, triglycerides and phospholipids. In triglycerides, they are bonded, together with other fatty acids, to glycerol, with three fatty acids attached to each glycerol backbone. Phospholipid omega−3 is composed of two fatty acids attached to a phosphate group via glycerol.
The triglycerides can be converted to the free fatty acid or to methyl or ethyl esters, and the individual esters of omega−3 fatty acids are available.
DHA in the form of lysophosphatidylcholine is transported into the brain by a membrane transport protein, MFSD2A, which is exclusively expressed in the endothelium of the blood–brain barrier.
The 'essential' fatty acids were given their name when researchers found that they are essential to normal growth in young children and animals. The omega−3 fatty acid DHA is found in high abundance in the human brain. It is produced by a desaturation process, but humans lack the desaturase enzymes that insert double bonds at the ω6 and ω3 positions. Therefore, the ω6 and ω3 polyunsaturated fatty acids cannot be synthesized, are appropriately called essential fatty acids, and must be obtained from the diet.
In 1964, it was discovered that enzymes found in sheep tissues convert omega−6 arachidonic acid into the inflammatory agent, prostaglandin E2, which is involved in the immune response of traumatized and infected tissues. By 1979, eicosanoids were further identified, including thromboxanes, prostacyclins, and leukotrienes. The eicosanoids typically have a short period of activity in the body, starting with synthesis from fatty acids and ending with metabolism by enzymes. If the rate of synthesis exceeds the rate of metabolism, the excess eicosanoids may have deleterious effects. Researchers found that certain omega−3 fatty acids are also converted into eicosanoids and docosanoids, but at a slower rate. If both omega−3 and omega−6 fatty acids are present, they will "compete" to be transformed, so the ratio of long-chain omega−3:omega−6 fatty acids directly affects the type of eicosanoids that are produced.
Humans can convert short-chain omega−3 fatty acids to long-chain forms (EPA, DHA) with an efficiency below 5%. The omega−3 conversion efficiency is greater in women than in men, but less studied. Higher ALA and DHA values found in plasma phospholipids of women may be due to the higher activity of desaturases, especially that of delta-6-desaturase.
These conversions occur competitively with those of omega−6 fatty acids, which are essential, closely related chemical analogues derived from linoleic acid. Both families use the same desaturase and elongase proteins to synthesize inflammatory regulatory proteins. The products of both pathways are vital for growth, making a balanced dietary intake of omega−3 and omega−6 important to an individual's health. A balanced intake ratio of 1:1 was believed to be ideal so that both pathways could be supplied sufficiently, but this has been disputed by more recent research.
The conversion of ALA to EPA and further to DHA in humans has been reported to be limited, but varies with individuals. Women have higher ALA-to-DHA conversion efficiency than men, which is presumed to be due to the lower rate of use of dietary ALA for beta-oxidation. One preliminary study showed that EPA can be increased by lowering the amount of dietary linoleic acid, and DHA can be increased by elevating intake of dietary ALA.
The human diet has changed rapidly in recent centuries, resulting in a reported increase in dietary omega−6 relative to omega−3. The rapid evolution of the human diet away from a 1:1 omega−3 to omega−6 ratio, such as during the Neolithic Agricultural Revolution, has presumably been too fast for humans to adapt biological profiles adept at balancing omega−3 and omega−6 ratios of 1:1. This is commonly believed to be the reason why modern diets are correlated with many inflammatory disorders. While omega−3 polyunsaturated fatty acids may be beneficial in preventing heart disease in humans, the level of omega−6 polyunsaturated fatty acids (and, therefore, the ratio) does not matter.
Both omega−6 and omega−3 fatty acids are essential: humans must consume them in their diet. Omega−6 and omega−3 eighteen-carbon polyunsaturated fatty acids compete for the same metabolic enzymes, thus the omega−6:omega−3 ratio of ingested fatty acids has significant influence on the ratio and rate of production of eicosanoids, a group of hormones intimately involved in the body's inflammatory and homeostatic processes, which include the prostaglandins, leukotrienes, and thromboxanes, among others. Altering this ratio can change the body's metabolic and inflammatory state. In general, grass-fed animals accumulate more omega−3 than do grain-fed animals, which accumulate relatively more omega−6. Metabolites of omega−6 are more inflammatory (esp. arachidonic acid) than those of omega−3. This necessitates that omega−6 and omega−3 be consumed in a balanced proportion; healthy ratios of omega−6:omega−3, according to some authors, range from 1:1 to 1:4. Other authors believe that a ratio of 4:1 (4 times as much omega−6 as omega−3) is already healthy. Studies suggest the evolutionary human diet, rich in game animals, seafood, and other sources of omega−3, may have provided such a ratio.
Typical Western diets provide ratios of between 10:1 and 30:1 (i.e., dramatically higher levels of omega−6 than omega−3). The ratios of omega−6 to omega−3 fatty acids in some common vegetable oils are: canola 2:1, hemp 2–3:1, soybean 7:1, olive 3–13:1, sunflower (no omega−3), flax 1:3, cottonseed (almost no omega−3), peanut (no omega−3), grapeseed oil (almost no omega−3) and corn oil 46:1.
Although omega−3 fatty acids have been known as essential to normal growth and health since the 1930s, awareness of their health benefits has dramatically increased since the 1980s.
On September 8, 2004, the U.S. Food and Drug Administration gave "qualified health claim" status to EPA and DHA omega−3 fatty acids, stating, "supportive but not conclusive research shows that consumption of EPA and DHA [omega−3] fatty acids may reduce the risk of coronary heart disease". This updated and modified their health risk advice letter of 2001 (see below).
The Canadian Food Inspection Agency has recognized the importance of DHA omega−3 and permits the following claim for DHA: "DHA, an omega−3 fatty acid, supports the normal physical development of the brain, eyes and nerves primarily in children under two years of age."
Historically, whole food diets contained sufficient amounts of omega−3, but because omega−3 is readily oxidized, the trend to shelf-stable, processed foods has led to a deficiency in omega−3 in manufactured foods.
In the United States, the Institute of Medicine publishes a system of Dietary Reference Intakes, which includes Recommended Dietary Allowances (RDAs) for individual nutrients and Acceptable Macronutrient Distribution Ranges (AMDRs) for certain groups of nutrients, such as fats. When there is insufficient evidence to determine an RDA, the institute may publish an Adequate Intake (AI) instead, which has a similar meaning but is less certain. The AI for α-linolenic acid is 1.6 grams/day for men and 1.1 grams/day for women, while the AMDR is 0.6% to 1.2% of total energy. Because the physiological potency of EPA and DHA is much greater than that of ALA, it is not possible to estimate one AMDR for all omega−3 fatty acids. Approximately 10 percent of the AMDR can be consumed as EPA and/or DHA. The Institute of Medicine has not established an RDA or AI for EPA, DHA, or the combination, so there is no Daily Value (DVs are derived from RDAs), no labeling of foods or supplements as providing a DV percentage of these fatty acids per serving, and no labeling of a food or supplement as an excellent source, or "High in...". As for safety, there was insufficient evidence as of 2005 to set an upper tolerable limit for omega−3 fatty acids, although the FDA has advised that adults can safely consume up to a total of 3 grams per day of combined DHA and EPA, with no more than 2 g from dietary supplements.
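To make the AMDR concrete, here is a rough worked example, assuming a 2,000 kcal reference diet and the usual 9 kcal per gram of fat (neither figure is specified above):

\[
\begin{aligned}
0.6\%\text{–}1.2\% \times 2000\ \text{kcal} &= 12\text{–}24\ \text{kcal from ALA},\\
(12\text{–}24\ \text{kcal}) \div 9\ \text{kcal/g} &\approx 1.3\text{–}2.7\ \text{g of ALA per day},
\end{aligned}
\]

of which roughly 10 percent, about 0.13–0.27 g, could be consumed as EPA and/or DHA.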
The American Heart Association (AHA) has made recommendations for EPA and DHA due to their cardiovascular benefits: individuals with no history of coronary heart disease or myocardial infarction should consume oily fish two times per week; and "Treatment is reasonable" for those having been diagnosed with coronary heart disease. For the latter the AHA does not recommend a specific amount of EPA + DHA, although it notes that most trials were at or close to 1000 mg/day. The benefit appears to be on the order of a 9% decrease in relative risk. The European Food Safety Authority (EFSA) approved a claim "EPA and DHA contributes to the normal function of the heart" for products that contain at least 250 mg EPA + DHA. The report did not address the issue of people with pre-existing heart disease. The World Health Organization recommends regular fish consumption (1-2 servings per week, equivalent to 200 to 500 mg/day EPA + DHA) as protective against coronary heart disease and ischaemic stroke.
Heavy metal poisoning by the body's accumulation of traces of heavy metals, in particular mercury, lead, nickel, arsenic, and cadmium, is a possible risk from consuming fish oil supplements. Also, other contaminants (PCBs, furans, dioxins, and PBDEs) might be found, especially in less-refined fish oil supplements. However, heavy metal toxicity from consuming fish oil supplements is highly unlikely, because heavy metals selectively bind with protein in the fish flesh rather than accumulate in the oil.
Throughout their histories, the Council for Responsible Nutrition and the World Health Organization have published acceptability standards regarding contaminants in fish oil. The most stringent current standard is the International Fish Oils Standard. Fish oils that are molecularly distilled under vacuum typically make this highest grade; their levels of contaminants are stated in parts per billion and parts per trillion.
The most widely available dietary source of EPA and DHA is oily fish, such as salmon, herring, mackerel, anchovies, menhaden, and sardines. Oils from these fish have a profile of around seven times as much omega−3 as omega−6. Other oily fish, such as tuna, also contain "n"−3 in somewhat lesser amounts. Consumers of oily fish should be aware of the potential presence of heavy metals and fat-soluble pollutants like PCBs and dioxins, which are known to accumulate up the food chain. After extensive review, researchers from Harvard's School of Public Health in the "Journal of the American Medical Association" (2006) reported that the benefits of fish intake generally far outweigh the potential risks. Although fish are a dietary source of omega−3 fatty acids, fish do not synthesize them; they obtain them from the algae (microalgae in particular) or plankton in their diets. In the case of farmed fish, omega−3 fatty acids are provided by fish oil; in 2009, 81% of global fish oil production was used by aquaculture.
Marine and freshwater fish oil vary in content of arachidonic acid, EPA and DHA. They also differ in their effects on organ lipids.
Not all forms of fish oil may be equally digestible. Of four studies that compare bioavailability of the glyceryl ester form of fish oil vs. the ethyl ester form, two have concluded the natural glyceryl ester form is better, and the other two studies did not find a significant difference. No studies have shown the ethyl ester form to be superior, although it is cheaper to manufacture.
Krill oil is a source of omega−3 fatty acids. The effect of krill oil, at a lower dose of EPA + DHA (62.8% of that of fish oil), was demonstrated to be similar to that of fish oil on blood lipid levels and markers of inflammation in healthy humans. While krill are not an endangered species, they are a mainstay of the diets of many ocean-based species, including whales, raising environmental and scientific concerns about their sustainability.
Preliminary studies appear to indicate that the DHA and EPA omega-3 fatty acids found in krill oil may be more bio-available than in fish oil. Additionally, krill oil contains astaxanthin, a marine-source keto-carotenoid antioxidant that may act synergistically with EPA and DHA.
Table 1. ALA content as the percentage of the seed oil.
Table 2. ALA content as the percentage of the whole food.
Flaxseed (or linseed) ("Linum usitatissimum") and its oil are perhaps the most widely available botanical source of the omega−3 fatty acid ALA. Flaxseed oil consists of approximately 55% ALA, which makes it six times richer than most fish oils in omega−3 fatty acids. A portion of this is converted by the body to EPA and DHA, though the actual converted percentage may differ between men and women.
In 2013 Rothamsted Research in the UK reported they had developed a genetically modified form of the plant Camelina that produced EPA and DHA. Oil from the seeds of this plant contained on average 11% EPA and 8% DHA in one development and 24% EPA in another.
Eggs produced by hens fed a diet of greens and insects contain higher levels of omega−3 fatty acids than those produced by chickens fed corn or soybeans. In addition to feeding chickens insects and greens, fish oils may be added to their diets to increase the omega−3 fatty acid concentrations in eggs.
The addition of flax and canola seeds to the diets of chickens, both good sources of alpha-linolenic acid, increases the omega−3 content of the eggs, predominantly DHA.
The addition of green algae or seaweed to the diet boosts the content of DHA and EPA, which are the forms of omega−3 approved by the FDA for medical claims. A common consumer complaint is that omega−3 eggs can have a fishy taste if the hens are fed marine oils.
Omega−3 fatty acids are formed in the chloroplasts of green leaves and algae. While seaweeds and algae are the source of the omega−3 fatty acids present in fish, grass is the source of the omega−3 fatty acids present in grass-fed animals. When cattle are taken off omega−3-rich grass and shipped to a feedlot to be fattened on omega−3-deficient grain, they begin losing their store of this beneficial fat. Each day an animal spends in the feedlot, the amount of omega−3 fatty acids in its meat diminishes.
The omega−6:omega−3 ratio of grass-fed beef is about 2:1, making it a more useful source of omega−3 than grain-fed beef, which usually has a ratio of 4:1.
In a 2009 joint study by the USDA and researchers at Clemson University in South Carolina, grass-fed beef was compared with grain-finished beef. The researchers found that grass-finished beef was higher in moisture content; 42.5% lower in total lipid content; 54% lower in total fatty acids; 54% higher in beta-carotene; 288% higher in vitamin E (alpha-tocopherol); higher in the B vitamins thiamin and riboflavin; higher in the minerals calcium, magnesium, and potassium; 193% higher in total omega−3s; 117% higher in CLA (cis-9, trans-11 octadecenoic acid, a conjugated linoleic acid and potential cancer fighter); 90% higher in vaccenic acid (which can be transformed into CLA); lower in saturated fats; and had a healthier ratio of omega−6 to omega−3 fatty acids (1.65 versus 4.84). Protein and cholesterol content were equal.
The omega−3 content of chicken meat may be enhanced by increasing the animals' dietary intake of grains high in omega−3, such as flax, chia, and canola.
Kangaroo meat is also a source of omega−3, with fillet and steak containing 74 mg per 100 g of raw meat.
Seal oil is a source of EPA, DPA, and DHA. According to Health Canada, it helps support the development of the brain, eyes, and nerves in children up to 12 years of age. Like all seal products, it may not be imported into the European Union.
A trend in the early 21st century was to fortify food with omega−3 fatty acids. The microalgae "Crypthecodinium cohnii" and "Schizochytrium" are rich sources of DHA, but not EPA, and can be produced commercially in bioreactors for use as food additives. Oil from brown algae (kelp) is a source of EPA. The alga "Nannochloropsis" also has high levels of EPA.
Ore
Ore is natural rock or sediment that contains one or more valuable minerals, typically metals, that can be mined, treated and sold at a profit. Ore is extracted from the earth through mining and treated or refined, often via smelting, to extract the valuable metals or minerals.
The "grade" of ore refers to the concentration of the desired material it contains. The value of the metals or minerals an ore contains must be weighed against the cost of extraction to determine whether it is of sufficiently high grade to be worth mining.
Metal ores are generally oxides, sulfides, silicates, native metals such as copper, or noble metals such as gold. Ores must be processed to extract the elements of interest from the waste rock. Ore bodies are formed by a variety of geological processes generally referred to as ore genesis.
An ore deposit is an accumulation of ore. This is distinct from a mineral resource as defined by the mineral resource classification criteria. An ore deposit is one occurrence of a particular ore type. Most ore deposits are named according to their location (for example, the Witwatersrand, South Africa), or after a discoverer (e.g. the Kambalda nickel shoots are named after drillers), or after some whimsy, a historical figure, a prominent person, something from mythology (phoenix, kraken, serpent, leopard, etc.), or the code name of the resource company which found it (e.g. MKD-5 was the in-house name for the Mount Keith nickel sulphide deposit).
Ore deposits are classified according to various criteria developed via the study of economic geology, or ore genesis. The classifications below are typical.
The basic extraction of ore deposits follows these steps: prospecting and exploration to find and then define the extent and value of the ore body; estimation of the resource; a feasibility study to assess whether it can be mined profitably; development of the mine and its infrastructure; production, in which the ore is extracted and processed; and, finally, reclamation of the site once the mine closes.
Ores (metals) are traded internationally and comprise a sizeable portion of international trade in raw materials both in value and volume. This is because the worldwide distribution of ores is unequal and dislocated from locations of peak demand and from smelting infrastructure.
Most base metals (copper, lead, zinc, nickel) are traded internationally on the London Metal Exchange, with smaller stockpiles and metals exchanges monitored by the COMEX and NYMEX exchanges in the United States and the Shanghai Futures Exchange in China.
Iron ore is traded between customer and producer, though various benchmark prices are set quarterly between the major mining conglomerates and the major consumers, and this sets the stage for smaller participants.
Other, lesser, commodities do not have international clearing houses and benchmark prices, with most prices negotiated between suppliers and customers one-on-one. This generally makes determining the price of ores of this nature opaque and difficult. Such metals include lithium, niobium-tantalum, bismuth, antimony and rare earths. Most of these commodities are also dominated by one or two major suppliers with >60% of the world's reserves. The London Metal Exchange aims to add uranium to its list of metals on warrant.
The World Bank reports that China was the top importer of ores and metals in 2005 followed by the US and Japan.
Optical brightener
Optical brighteners, optical brightening agents (OBAs), fluorescent brightening agents (FBAs), or fluorescent whitening agents (FWAs), are chemical compounds that absorb light in the ultraviolet and violet region (usually 340–370 nm) of the electromagnetic spectrum, and re-emit light in the blue region (typically 420–470 nm) by fluorescence. These additives are often used to enhance the apparent color of fabric and paper, causing a "whitening" effect; they make intrinsically yellow or orange materials look less so by compensating for the deficit in blue and purple light reflected by the material with the blue and purple optical emission of the fluorophore.
The most common classes of compounds with this property are the stilbenes, e.g., 4,4′-diamino-2,2′-stilbenedisulfonic acid. Older, non-commercial fluorescent compounds include umbelliferone, which absorbs light in the UV portion of the spectrum and re-emits it in the blue portion of the visible spectrum. A white surface treated with an optical brightener can emit more visible light than shines on it, making it appear brighter. The blue light emitted by the brightener compensates for the diminishing blue of the treated material and shifts the hue away from yellow or brown and toward white.
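The energy bookkeeping behind this behaviour follows from E = hc/λ. The sketch below uses the absorption and emission bands quoted above (the band midpoints of 355 nm and 445 nm are assumed for illustration) to show that the emitted blue photon necessarily carries less energy than the absorbed UV photon, the difference being the Stokes loss.

```python
# Minimal sketch of the physics behind the wavelength figures above:
# a brightener absorbs near-UV light (~340-370 nm) and fluoresces in the
# blue (~420-470 nm). Because E = hc/lambda, the emitted photon always
# carries less energy than the absorbed one (the Stokes shift).

H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV for a given wavelength in nanometres."""
    return H * C / (wavelength_nm * 1e-9) / EV

absorbed = photon_energy_ev(355)  # assumed midpoint of the 340-370 nm band
emitted = photon_energy_ev(445)   # assumed midpoint of the 420-470 nm band
print(f"absorbed ~{absorbed:.2f} eV, emitted ~{emitted:.2f} eV, "
      f"Stokes loss ~{absorbed - emitted:.2f} eV")
# absorbed ~3.49 eV, emitted ~2.79 eV, Stokes loss ~0.71 eV
```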
Approximately 400 brightener types are listed in the Colour Index, but fewer than 90 are produced commercially, and only a handful are commercially important. In principle each C.I. FBA number designates a specific substance; however, some numbers are duplicated, since manufacturers apply for an index number when they begin producing a compound. The global OBA production for paper, textiles, and detergents is dominated by just a few di- and tetra-sulfonated triazole-stilbenes and a di-sulfonated stilbene-biphenyl derivative. The stilbene derivatives are subject to fading upon prolonged exposure to UV, due to the formation of optically inactive cis-stilbenes. They are also degraded by oxygen in air, like most dye colorants. All brighteners have extended conjugation and/or aromaticity, allowing for electron movement. Some non-stilbene brighteners are used in more permanent applications such as whitening synthetic fiber.
Brighteners can be "boosted" by the addition of certain polyols, such as high molecular weight polyethylene glycol or polyvinyl alcohol. These additives increase the visible blue light emissions significantly. Brighteners can also be "quenched". Excess brightener will often cause a greening effect as emissions start to show above the blue region in the visible spectrum.
Brighteners are commonly added to laundry detergents to make clothes appear cleaner. Laundry that has simply been cleaned tends to appear yellowish, which consumers do not like. Optical brighteners have replaced bluing, which was formerly used to produce the same effect.
Brighteners are used in many papers, especially high brightness papers, resulting in their strongly fluorescent appearance under UV illumination. Paper brightness is typically measured at 457 nm, well within the fluorescent activity range of brighteners. Paper used for banknotes does not contain optical brighteners, so a common method for detecting counterfeit notes is to check for fluorescence.
Optical brighteners have also found use in cosmetics. One application is to formulas for washing and conditioning grey or blonde hair, where the brightener can not only increase the luminance and sparkle of the hair, but can also correct dull, yellowish discoloration without darkening the hair. Some advanced face and eye powders contain optical brightener microspheres that brighten shadowed or dark areas of the skin, such as "tired eyes".
End uses of optical brighteners include laundry detergents, paper, textiles, cosmetics, and the whitening of synthetic fibers.
From around 2002 to 2012 chemical brighteners were used by many Chinese farmers to enhance the appearance of their white mushrooms. This illegal use was mostly eliminated by the Chinese Ministry of Agriculture.
Oil painting
Oil painting is the process of painting with pigments bound in a medium of drying oil. Commonly used drying oils include linseed oil, poppy seed oil, walnut oil, and safflower oil. The choice of oil imparts a range of properties to the paint, such as the amount of yellowing or drying time. Certain differences, depending on the oil, are also visible in the sheen of the paints. An artist might use several different oils in the same painting depending on specific pigments and effects desired. The paints themselves also develop a particular consistency depending on the medium. The oil may be boiled with a resin, such as pine resin or frankincense, to create a varnish prized for its body and gloss.
Although oil paint was first used for Buddhist paintings by painters in central and western Afghanistan sometime between the fifth and tenth centuries, it did not gain popularity until the 15th century. Its practice may have migrated westward during the Middle Ages. Oil paint eventually became the principal medium used for creating artworks as its advantages became widely known. The transition began with Early Netherlandish painting in Northern Europe, and by the height of the Renaissance oil painting techniques had almost completely replaced the use of tempera paints in the majority of Europe.
In recent years, water-miscible oil paint has become available. These paints are either engineered to be water-soluble or contain an added emulsifier that allows them to be thinned with water rather than paint thinner, and they offer, when sufficiently diluted, very fast drying times (1–3 days) compared with traditional oils (1–3 weeks).
Traditional oil painting techniques often begin with the artist sketching the subject onto the canvas with charcoal or thinned paint. Oil paint is usually mixed with linseed oil, artist-grade mineral spirits, or other solvents to make the paint thinner or to speed or slow its drying. (Because these solvents thin the oil in the paint, they can also be used to clean paint brushes.) A basic rule of oil paint application is 'fat over lean', meaning that each additional layer of paint should contain more oil than the layer below to allow proper drying. If each additional layer contains less oil, the final painting will crack and peel. This rule does not by itself ensure permanence; it is the quality and type of oil that leads to a strong and stable paint film.
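The 'fat over lean' rule amounts to a simple monotonicity condition on oil content from the bottom layer up. The sketch below is illustrative only; the percentages are invented, and real practice judges 'fatness' by the medium and dilution rather than a single number.

```python
# Illustrative check of the 'fat over lean' rule described above: each
# successive layer should be "fatter" (higher oil content) than the one
# beneath it, or the upper layers dry first and the film may crack.
# The percentages are hypothetical, just to show the check.

def follows_fat_over_lean(oil_content_by_layer: list) -> bool:
    """True if oil content never decreases from one layer to the next."""
    return all(lower <= upper
               for lower, upper in zip(oil_content_by_layer,
                                       oil_content_by_layer[1:]))

print(follows_fat_over_lean([20, 30, 45]))  # True: each layer fatter than the last
print(follows_fat_over_lean([40, 30, 45]))  # False: lean over fat risks cracking
```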
There are many other media that can be used with the oil, including cold wax, resins, and varnishes. These additional media can aid the painter in adjusting the translucency of the paint, the sheen of the paint, the density or 'body' of the paint, and the ability of the paint to hold or conceal the brushstroke. These aspects of the paint are closely related to the expressive capacity of oil paint.
Traditionally, paint was transferred to the painting surface using paintbrushes, but there are other methods, including using palette knives and rags. Oil paint remains wet longer than many other types of artists' materials, enabling the artist to change the color, texture or form of the figure. At times, the painter might even remove an entire layer of paint and begin anew. This can be done with a rag and some turpentine for a time while the paint is wet, but after a while the hardened layer must be scraped. Oil paint dries by oxidation, not evaporation, and is usually dry to the touch within a span of two weeks (some colors dry within days). It is generally dry enough to be varnished in six months to a year.
The earliest discovered oil paintings date back to approximately 650 AD in Afghanistan. These murals were presumably created by Buddhist artists traveling along the Silk Road. These early oil works display a wide range of pigments and binders, and even included the use of a final varnish layer. The refinement of the technique and the survival of the paintings into the present day suggest that oil paints had been used in Asia even before the 7th century. It is possible, considering the history of tempera (pigment mixed with either egg whites or egg yolks, then painted on a plastered surface), that oils were discovered in Europe independently around the 15th century. Outdoor surfaces and surfaces like shields—both those used in tournaments and those hung as decorations—were more durable when painted in oil-based media than when painted in the traditional tempera paints.
Most European Renaissance sources, in particular Vasari, credited northern European painters of the 15th century, and Jan van Eyck in particular, with the "invention" of painting with oil media on wood panel supports ("support" is the technical term for the underlying backing of a painting). However, Theophilus (Roger of Helmarshausen?) clearly gives instructions for oil-based painting in his treatise, "On Various Arts", written in 1125. At this period, it was probably used for painting sculptures, carvings and wood fittings, perhaps especially for outdoor use. However, early Netherlandish painting with artists like Van Eyck and Robert Campin in the 15th century were the first to make oil the usual painting medium, and explore the use of layers and glazes, followed by the rest of Northern Europe, and only then Italy.
Early works were still panel paintings on wood, but around the end of the 15th century canvas became more popular as the support, as it was cheaper, easier to transport, allowed larger works, and did not require complicated preliminary layers of gesso (a fine type of plaster). Venice, where sail-canvas was easily available, was a leader in the move to canvas. Small cabinet paintings were also made on metal, especially copper plates. These supports were more expensive but very firm, allowing intricately fine detail. Often printing plates from printmaking were reused for this purpose. The popularity of oil spread through Italy from the North, starting in Venice in the late 15th century. By 1540, the previous method for painting on panel (tempera) had become all but extinct, although Italians continued to use chalk-based fresco for wall paintings, which was less successful and durable in damper northern climates.
The linseed oil itself comes from the flax seed, a common fiber crop. Linen, a "support" for oil painting (see relevant section), also comes from the flax plant. Safflower, walnut, or poppyseed oils are sometimes used in formulating lighter colors like white because they "yellow" less on drying than linseed oil, but they have the slight drawback of drying more slowly and may not provide the strongest paint film. Linseed oil tends to dry yellow and can change the hue of the color.
Recent advances in chemistry have produced modern water-miscible oil paints that can be used and cleaned up with water. Small alterations in the molecular structure of the oil create this water-miscible property.
Traditional artists' canvas is made from linen, but less expensive cotton fabric has gained popularity. The artist first prepares a wooden frame called a "stretcher" or "strainer". The difference between the two is that stretchers are slightly adjustable, while strainers are rigid and lack adjustable corner notches. The canvas is then pulled across the wooden frame and tacked or stapled tightly to the back edge. Next, the artist applies a "size" to isolate the canvas from the acidic qualities of the paint. Traditionally, the canvas was coated with a layer of animal glue (modern painters typically use rabbit-skin glue) as the size and primed with lead white paint, sometimes with added chalk. Panels were prepared with a "gesso", a mixture of glue and chalk.
Modern acrylic "gesso" is made of titanium dioxide with an acrylic binder. It is frequently used on canvas, whereas real gesso is not suitable for canvas. The artist might apply several layers of gesso, sanding each smooth after it has dried. Acrylic gesso is very difficult to sand. One manufacturer makes a "sandable" acrylic gesso, but it is intended for panels only and not canvas. It is possible to make the gesso a particular color, but most store-bought gesso is white. The gesso layer, depending on its thickness, will tend to draw the oil paint into the porous surface. Excessive or uneven gesso layers are sometimes visible in the surface of finished paintings as a change that's not from the paint.
Standard sizes for oil paintings were set in France in the 19th century. The standards were used by most artists, not only the French, as it was—and evidently still is—supported by the main suppliers of artists' materials. Size 0 ("toile de 0") to size 120 ("toile de 120") is divided in separate "runs" for figures ("figure"), landscapes ("paysage") and marines ("marine") that more or less preserve the diagonal. Thus a "0 figure" corresponds in height with a "paysage 1" and a "marine 2".
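The height correspondence described above can be checked against commonly cited chart values for these three sizes (dimensions in centimetres); treat the numbers as illustrative, since published charts vary slightly between suppliers.

```python
# Commonly cited dimensions (cm) for three French standard canvas sizes;
# illustrative values, since published charts differ slightly.
SIZES = {
    "figure 0":  (18, 14),
    "paysage 1": (22, 14),
    "marine 2":  (24, 14),
}

# The text says a "0 figure" corresponds in height with a "paysage 1"
# and a "marine 2": all three share the same shorter dimension.
shared = {name: min(dims) for name, dims in SIZES.items()}
print(shared)                          # each entry is 14 cm
assert len(set(shared.values())) == 1  # the heights coincide
```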
Although surfaces like linoleum, wooden panel, paper, slate, pressed wood, Masonite, and cardboard have been used, the most popular surface since the 16th century has been canvas, although many artists used panel through the 17th century and beyond. Panel is more expensive, heavier, harder to transport, and prone to warp or split in poor conditions. For fine detail, however, the absolute solidity of a wooden panel has an advantage.
Oil paint is made by mixing pigments of colors with an oil medium. Different colors are made, or purchased premixed, before painting begins, but further shades of color are usually obtained by mixing small quantities together as the painting process is underway. An artist's palette, traditionally a thin wood board held in the hand, is used for holding and mixing paints of different colors. Pigments may be any number of natural or synthetic substances with color, such as sulphides for yellow or cobalt salts for blue. Traditional pigments were based on minerals or plants, but many have proven unstable over long periods of time; the appearance of many old paintings today is very different from the original. Modern pigments often use synthetic chemicals. The pigment is mixed with oil, usually linseed, but other oils may be used. The various oils dry differently, which creates assorted effects.
Traditionally, artists mixed their own paints from raw pigments, which they often ground themselves, and medium. This made portability difficult and kept most painting activities confined to the studio. This changed in the 1800s, when tubes of oil paint became widely available following the American portrait painter John Goffe Rand's invention of the squeezable or collapsible metal tube in 1841 (the year after Claude Monet's birth). Artists could mix colors quickly and easily, which enabled, for the first time, relatively convenient plein air painting (a common approach in French Impressionism).
A brush is most commonly employed by the artist to apply the paint, often over a sketched outline of their subject (which could be in another medium). Brushes are made from a variety of fibers to create different effects. For example, brushes made with hog bristle might be used for bolder strokes and impasto textures. Fitch hair and mongoose hair brushes are fine and smooth, and thus answer well for portraits and detail work. Even more expensive are red sable brushes (weasel hair). The finest quality brushes are called "kolinsky sable"; these brush fibers are taken from the tail of the Siberian weasel. This hair keeps a superfine point, has smooth handling, and good memory (it returns to its original point when lifted off the canvas), known to artists as a brush's "snap." Floppy fibers with no snap, such as squirrel hair, are generally not used by oil painters.
In the past few decades, many synthetic brushes have been marketed. These are very durable and can be quite good, as well as cost efficient.
Brushes come in many sizes and are used for different purposes. The "type" of brush also makes a difference. For example, a "round" is a pointed brush used for detail work. "Flat" brushes are used to apply broad swaths of color. "Bright" is a flat brush with shorter brush hairs, used for "scrubbing in". "Filbert" is a flat brush with rounded corners. "Egbert" is a very long, and rare, filbert brush. The artist might also apply paint with a palette knife, which is a flat metal blade. A palette knife may also be used to remove paint from the canvas when necessary. A variety of unconventional tools, such as rags, sponges, and cotton swabs, may be used to apply or remove paint. Some artists even paint with their fingers.
Oil painters traditionally applied paint in layers known as "glazes", a method also simply called "indirect painting". This method was first perfected through an adaptation of the egg tempera painting technique, and was applied by the Flemish painters in Northern Europe with pigments ground in linseed oil. More recently, this approach has been called the "mixed technique" or "mixed method". The first coat (the underpainting) is laid down, often painted with egg tempera or turpentine-thinned paint. This layer helps to "tone" the canvas and to cover the white of the gesso. Many artists use this layer to sketch out the composition. This first layer can be adjusted before proceeding further, an advantage over the "cartooning" method used in fresco technique. After this layer dries, the artist might then proceed by painting a "mosaic" of color swatches, working from darkest to lightest. The borders of the colors are blended together when the "mosaic" is completed, and then left to dry before applying details.
Artists in later periods, such as the Impressionist era (late 19th century), often expanded on this wet-on-wet method, blending the wet paint on the canvas without following the Renaissance-era approach of layering and glazing. This method is also called "alla prima". It arose with the advent of painting outdoors, rather than inside a studio, because while outside an artist did not have time to let each layer of paint dry before adding a new one. Several contemporary artists use a combination of both techniques to add bold color (wet-on-wet) and obtain the depth of layers through glazing.
When the image is finished and has dried for up to a year, an artist often seals the work with a layer of varnish that is typically made from dammar gum crystals dissolved in turpentine. Such varnishes can be removed without disturbing the oil painting itself, to enable cleaning and conservation. Some contemporary artists decide not to varnish their work, preferring the surface unvarnished.
Old Glory
Old Glory is a nickname for the flag of the United States. The original "Old Glory" was a flag owned by the 19th-century American sea captain William Driver (March 17, 1803 – March 3, 1886), who flew the flag during his career at sea and later brought it to Nashville, Tennessee, where he settled. Driver greatly prized the flag and ensured its safety from the Confederates, who attempted to seize it during the American Civil War. Driver's daughter and his niece each later claimed to own the original "Old Glory"; in 1922 the daughter's flag became part of the collection of the Smithsonian Institution, where it remains at the National Museum of American History.
Captain William Driver was born on March 17, 1803, in Salem, Massachusetts. At age 13, Driver ran away from home to become a cabin boy on a ship.
At 21, Driver qualified as a master mariner and assumed command of his own ship, the "Charles Doggett". In celebration of his appointment, Driver's mother and other women sewed the flag and gave it to him as a gift in 1824. With this flag flying over his ship, Driver went on to have a colorful career as a U.S. merchant seaman, sailing to China, India, Gibraltar, and the South Pacific. He participated in the tortoiseshell trade and knew some Fijian. In 1831, while voyaging in the South Pacific, Driver's ship "was the sole surviving vessel of six that departed Salem the same day." Driver picked up 65 descendants of the survivors of HMS "Bounty" and brought them back to Pitcairn Island. Driver was convinced that God saved his ship for that purpose.
Driver was deeply attached to the flag, writing: "It has ever been my staunch companion and protection. Savages and heathens, lowly and oppressed, hailed and welcomed it at the far end of the wide world. Then, why should it not be called Old Glory?"
Driver retired from seafaring in 1837 after his wife Martha Silsbee Babbage died of throat cancer. Driver was 34 years old and had three young children. He settled in Nashville, Tennessee, where his three brothers operated a store. Driver remarried the next year to Sarah Jane Parks, a Southerner with whom he had several more children. Driver took his flag with him to Nashville, flying it on holidays, rain or shine. The flag was so large that he attached it to a rope from his attic window and stretched it on a pulley across the street to secure it to a locust tree. Driver worked as a salesman and served as vestryman of Christ Episcopal Church.
In 1860, Driver, his wife and daughters repaired the flag, sewing on 10 additional stars, and Driver added by appliqué a small white anchor in the lower right corner, to symbolize his maritime career. By that time, the secession crisis had begun and Driver's family was split. While Driver was a staunch Unionist, two of his sons were fervent Confederates who enlisted in local regiments. One of Driver’s sons died from wounds suffered at Perryville. In March 1862, Driver wrote: "Two sons in the army of the South! My entire house estranged...and when I come home...no one to soothe me."
Soon after Tennessee seceded from the Union, Governor Isham G. Harris sent men to Driver's home to demand the flag. Driver, 58 years old, turned the men away at his door after demanding they produce a search warrant. When an armed group later returned to his front porch, Driver again refused to produce the flag, saying, "If you want my flag you'll have to take it over my dead body."
To save the flag from further threats, Driver and some of his neighbors sewed it into a coverlet and hid it until February 1862, when Nashville fell to Union forces. When the Union Army, led by the 6th Ohio Infantry, entered the city, Driver saw the U.S. flag and the 6th Ohio's regimental colors raised on the Capitol flagstaff; he then went to the Tennessee State Capitol and asked to see the general in command.
Horace Fisher, the aide-de-camp to the Union commander in the city, Brigadier General William "Bull" Nelson, described Driver as "a stout, middle-aged man, with hair well shot with gray, short in stature, broad in shoulder, and with a roll in his gait." Introducing himself as a sea captain and Unionist, Driver brought the coverlet with him and opened it, revealing the flag. Nelson accepted the flag and ordered it run up on the Capitol flagstaff. The 6th Ohio later adopted the motto "Old Glory."
That night, a violent storm threatened to tear the flag, so Driver replaced it with a newer flag, taking the original Old Glory for safekeeping. The flag remained in his home until December 1864, when the Battle of Nashville was fought. As Confederate troops under the command of John Bell Hood sought to retake the city, Driver hung the flag out of a third-story window and left to join the defense of the city. For the rest of the American Civil War, Driver served as provost marshal of Nashville, serving in hospitals.
Mary Jane Roland, Driver's daughter, said Driver gave her the flag as a gift on July 10, 1873, telling her "This is my old ship flag Old Glory. I love it as a mother loves her child. Take it and cherish it as I have always cherished it; for it has been my steadfast friend and protector in all parts of the world—savage, heathen and civilized."
Driver died on March 3, 1886, and was buried in the Nashville City Cemetery, where, at Driver's request, his rescue of the "Bounty" descendants is noted on his grave stone.
Following Driver’s death, a family feud erupted over the ownership of the flag. Driver's niece, Harriet Ruth Waters Cooke, the daughter of Driver's youngest sister, said she inherited the flag and presented her version of Old Glory to the Essex Institute in Salem, which became the Peabody Essex Museum, along with family memorabilia that included a letter from the Pitcairn Islands to Driver. Cooke published a family memoir in 1889, omitting any mention of Mary Jane Roland.
Roland wrote an account of the flag, publishing "Old Glory, The True Story" in 1918. In that memoir, Roland disputed Cooke's narrative and presented evidence for her claim that the flag she owned was the true Old Glory. In 1922, Roland gave her Old Glory to President Warren G. Harding. Harding had the flag sent to the Smithsonian Institution. The same year, the Peabody Essex Museum sent its Old Glory to the Smithsonian.
In 2019, Captain Driver's great-great grandson, Jack Benz, published a novel depicting the life and adventures of Captain William Driver using information collected from personal research and inherited from Captain Driver's descendants.
The Smithsonian Institution has regarded the Roland flag as the authentic Old Glory, since "documentary evidence in the Tennessee State Library and Archives suggests it was the one hidden in the quilt and presented to Union troops who took Nashville." The Roland flag is 17 by 10 feet; the Peabody flag is 12 by 6 feet.
In June 2006, the Smithsonian's National Museum of American History (NMAH) loaned the Roland flag to the Tennessee State Museum in Nashville for an eight-month exhibit entitled "Old Glory: An American Treasure Comes Home". The flag was in fragile condition and had to be carefully shipped and displayed.
A conservation evaluation of both flags by NMAH curator Jennifer Locke Jones and conservator Suzanne Thomassen-Krauss began in 2012. Preliminary findings indicate that the larger Roland flag has the stronger claim to being the original Old Glory, but that the Peabody flag dates to the same era and is a legitimate Driver family heirloom and Civil War-era relic. The Roland Old Glory is heavily worn on the fly edges, consistent with the wear of a seagoing flag.
The Peabody Essex Museum has in its collection fragmentary scraps from what was claimed to be Old Glory.
Orlando Letelier
Marcos Orlando Letelier del Solar (13 April 1932 – 21 September 1976) was a Chilean economist, politician and diplomat during the presidency of Salvador Allende. A refugee from the military dictatorship of General Augusto Pinochet, Letelier accepted several academic positions in Washington, D.C., following his exile from Chile. In 1976, agents of the Dirección de Inteligencia Nacional (DINA), the Pinochet regime's secret police, assassinated Letelier in Washington with a car bomb. These agents had been working in collaboration with members of the Coordination of United Revolutionary Organizations, a U.S.-sponsored anti-Castro militant group.
Letelier was born in Temuco, Chile, the youngest child of Orlando Letelier Ruiz and Inés del Solar. He studied at the Instituto Nacional and, at the age of sixteen, was accepted as a cadet at the Chilean Military Academy, where he completed his secondary studies. Later, he abandoned a military career. He did not finish college and never received a university degree. In 1955, he joined the recently formed Copper Office ("Departamento del Cobre", now CODELCO), where he worked until 1959 as a research analyst in the copper industry.
On 17 December 1955, Letelier married Isabel Margarita Morel Gumucio with whom he had four children: Cristián, José, Francisco, and Juan Pablo.
In 1959, Letelier was fired from the Copper Office, ostensibly for having supported Salvador Allende's unsuccessful second presidential campaign. The Letelier family left for Venezuela, where Orlando became a copper consultant to the Finance Ministry.
While at university, Letelier became a student representative in the University of Chile's Student Union. In 1959, he joined the Chilean Socialist Party (PS). In 1971, President Allende appointed him ambassador to the United States. His specific mission was to advocate in defense of the Chilean nationalization of copper, which had replaced the private ownership model favoured by the US government.
In 1973, Letelier was recalled to Chile, where he served successively as Minister of Foreign Affairs, of the Interior, and of Defense.
In the coup d'état of 11 September 1973, he was the first high-ranking member of the Allende administration to be arrested. He was held for twelve months in various concentration camps and suffered severe torture: first at the Tacna Regiment, then at the Military Academy; later he was sent for eight months to a political prison on Dawson Island; from there he was transferred to the basement of the Air Force War Academy, and finally to the concentration camp of Ritoque. Following international diplomatic pressure, especially from Diego Arria, then Governor of Distrito Federal of Venezuela, he was released in September 1974 on the condition that he immediately leave Chile.
After his release, he and his family resettled in Caracas, but later moved to the United States on the recommendation of American writer Saul Landau.
In 1975, Letelier moved to Washington D.C., where he became senior fellow of the Washington, D.C.-based Institute for Policy Studies, a think tank Landau was involved in. Letelier became director of the Amsterdam-based Transnational Institute and taught at the School of International Service of the American University in Washington, D.C.
Letelier wrote several articles criticizing the "Chicago Boys", a group of South American economists trained at the University of Chicago by Milton Friedman and Arnold Harberger who returned to their home countries to promote and advise leaders on the benefits of a free-market economy.
This economic model was used to great effect in Chile, where General Pinochet sought to dismantle the country's socialist economic system and replace it with a free-market economy. Letelier believed that in a resource-driven economy such as Chile's, allowing markets to operate freely simply guaranteed the movement of wealth from the lower and middle classes to monopolists and financial speculators. He soon became the leading voice of the Chilean resistance, preventing several loans (especially from Europe) from being awarded to the Chilean government. On 10 September 1976, he was stripped of his Chilean nationality.
Letelier was killed by a car bomb explosion on 21 September 1976 in Sheridan Circle in Washington, D.C., along with his American co-worker, Ronni Karpen Moffitt.
Moffitt's husband, Michael Moffitt, was injured but survived. Several people were prosecuted and convicted for the murders. Among them were Michael Townley, a U.S. expatriate working for DINA; General Manuel Contreras, former head of DINA; and Brigadier Pedro Espinoza, also formerly of DINA. Townley was convicted in the United States in 1978 and served 62 months in prison for the murder; he is now free as a participant in the United States Federal Witness Protection Program. Contreras and Espinoza were convicted in Chile in 1993.
Diego Arria intervened again by bringing Letelier's body to Caracas for burial, where it remained until 1994, after the end of Pinochet's rule.
General Augusto Pinochet, who died on 10 December 2006, was never brought to trial for the murders, despite evidence implicating him as having ordered them. Following the assassination, the United States cut military aid to Chile, and took a stance of 'unobtrusiveness' within the country.
Following the death of Pinochet in December 2006, the Institute for Policy Studies (IPS), for which both Letelier and Moffitt worked, called for the release of all classified documents relating to the Letelier–Moffitt assassination.
According to the IPS, the Clinton administration declassified more than 16,000 documents relating to Chile, but withheld documents relating to the Letelier–Moffitt assassination in Washington on the grounds that they were associated with an ongoing investigation. The IPS said the Clinton administration had re-opened the investigation into the Letelier–Moffitt murders and sent agents to Chile to gather additional evidence that Pinochet had authorized the crime. The former Chilean secret police chief, Manuel Contreras, who was convicted for his role in the crime in 1993, later pointed the finger at his superiors, claiming that all relevant orders had come from Pinochet.
A US State Department document made available by the National Security Archive on 10 April 2010 reveals that a démarche protesting Pinochet's Operation Condor assassination program was proposed and sent on 23 August 1976 to US diplomatic missions in Uruguay, Argentina, and Chile, to be delivered to their host governments. It was rescinded on 16 September 1976 by Henry Kissinger after the US ambassadors there raised concerns about both their personal safety and a likely diplomatic contretemps. Five days later, the Letelier assassination took place.
Documents released in 2015 revealed a CIA report dated 28 April 1978, which showed that the agency by then had knowledge that Pinochet ordered the murders. The report stated, "Contreras told a confidante he authorized the assassination of Letelier on orders from Pinochet." A State Department document also referred to eight separate CIA reports from around the same date, each sourced to "extremely sensitive informants" who provided evidence of Pinochet's direct involvement in ordering the assassination and in directing the subsequent cover-up.
During the tenure of Richard Downie at the William J. Perry Center for Hemispheric Defense Studies, a U.S. Southern Command educational institution located at the National Defense University, the alleged (and as yet unproven) role of Jaime Garcia Covarrubias, a Chilean professor who was head of counterintelligence for DINA in the 1970s, in the torture and murder of seven detainees was revealed inside the center. His alleged role was first brought to Downie's attention in early 2008 by Center Assistant Professor Martin Edwin Andersen, a senior staff member who earlier, as a senior advisor for policy planning at the Criminal Division of the U.S. Department of Justice, was the first national security whistleblower to receive the U.S. Office of Special Counsel's "Public Servant Award." In an October 1987 investigative report in The Nation, Andersen broke the story of how, in a June 1976 meeting in the Hotel Carrera in Santiago, Kissinger gave the bloody military junta in neighboring Argentina the "green light" for their own dirty "war."
Oscar Wilde
Oscar Fingal O'Flahertie Wills Wilde (16 October 1854 – 30 November 1900) was an Irish poet and playwright. After writing in different forms throughout the 1880s, the early 1890s saw him become one of the most popular playwrights in London. He is best remembered for his epigrams and plays, his novel "The Picture of Dorian Gray", and the circumstances of his criminal conviction for gross indecency for consensual homosexual acts, imprisonment, and early death at age 46.
Wilde's parents were Anglo-Irish intellectuals in Dublin. A young Wilde learned to speak fluent French and German. At university, Wilde read Greats; he demonstrated himself to be an exceptional classicist, first at Trinity College Dublin, then at Oxford. He became associated with the emerging philosophy of aestheticism, led by two of his tutors, Walter Pater and John Ruskin. After university, Wilde moved to London into fashionable cultural and social circles.
As a spokesman for aestheticism, he tried his hand at various literary activities: he published a book of poems, lectured in the United States and Canada on the new "English Renaissance in Art" and interior decoration, and then returned to London where he worked prolifically as a journalist. Known for his biting wit, flamboyant dress and glittering conversational skill, Wilde became one of the best-known personalities of his day. At the turn of the 1890s, he refined his ideas about the supremacy of art in a series of dialogues and essays, and incorporated themes of decadence, duplicity, and beauty into what would be his only novel, "The Picture of Dorian Gray" (1890). The opportunity to construct aesthetic details precisely, and combine them with larger social themes, drew Wilde to write drama. He wrote "Salome" (1891) in French while in Paris but it was refused a licence for England due to an absolute prohibition on the portrayal of Biblical subjects on the English stage. Unperturbed, Wilde produced four society comedies in the early 1890s, which made him one of the most successful playwrights of late-Victorian London.
At the height of his fame and success, while "The Importance of Being Earnest" (1895) was still being performed in London, Wilde prosecuted the Marquess of Queensberry for criminal libel. The Marquess was the father of Wilde's lover, Lord Alfred Douglas. The libel trial unearthed evidence that caused Wilde to drop his charges and led to his own arrest and trial for gross indecency with men. After two more trials he was convicted and sentenced to two years' hard labour, the maximum penalty, and was jailed from 1895 to 1897. During his last year in prison, he wrote "De Profundis" (published posthumously in 1905), a long letter which discusses his spiritual journey through his trials, forming a dark counterpoint to his earlier philosophy of pleasure. On his release, he left immediately for France, never to return to Ireland or Britain. There he wrote his last work, "The Ballad of Reading Gaol" (1898), a long poem commemorating the harsh rhythms of prison life.
Oscar Wilde was born at 21 Westland Row, Dublin (now home of the Oscar Wilde Centre, Trinity College), the second of three children born to an Anglo-Irish couple: Jane, née Elgee, and Sir William Wilde. Oscar was two years younger than his brother, William (Willie) Wilde.
Jane Wilde was a niece (by marriage) of the novelist, playwright and clergyman Charles Maturin (1780–1824), who may have influenced her own literary career. She had distant Italian ancestry, and under the pseudonym "Speranza" (the Italian word for 'hope'), she wrote poetry for the revolutionary Young Irelanders in 1848; she was a lifelong Irish nationalist. Jane Wilde read the Young Irelanders' poetry to Oscar and Willie, inculcating a love of these poets in her sons. Her interest in the neo-classical revival showed in the paintings and busts of ancient Greece and Rome in her home.
William Wilde was Ireland's leading oto-ophthalmologic (ear and eye) surgeon and was knighted in 1864 for his services as medical adviser and assistant commissioner to the censuses of Ireland. He also wrote books about Irish archaeology and peasant folklore. A renowned philanthropist, his dispensary for the care of the city's poor at the rear of Trinity College, Dublin, was the forerunner of the Dublin Eye and Ear Hospital, now located at Adelaide Road. On his father's side Wilde was descended from a Dutchman, Colonel de Wilde, who went to Ireland with King William of Orange's invading army in 1690, and numerous Anglo-Irish ancestors. On his mother's side, Wilde's ancestors included a bricklayer from County Durham, who emigrated to Ireland sometime in the 1770s.
Wilde was baptised as an infant in St. Mark's Church, Dublin, the local Church of Ireland (Anglican) church. When the church was closed, the records were moved to the nearby St. Ann's Church, Dawson Street. Davis Coakley mentions a second baptism by a Catholic priest, Father Prideaux Fox, who befriended Oscar's mother circa 1859. According to Fox's testimony in "Donahoe's Magazine" in 1905, Jane Wilde would visit his chapel in Glencree, County Wicklow, for Mass and would take her sons with her. She asked Father Fox in this period to baptise her sons.
Fox described it in this way:
"I am not sure if she ever became a Catholic herself but it was not long before she asked me to instruct two of her children, one of them being the future erratic genius, Oscar Wilde. After a few weeks I baptized these two children, Lady Wilde herself being present on the occasion.
In addition to his children with his wife, Sir William Wilde was the father of three children born out of wedlock before his marriage: Henry Wilson, born in 1838 to one woman, and Emily and Mary Wilde, born in 1847 and 1849, respectively, to a second woman. Sir William acknowledged paternity of his illegitimate or "natural" children and provided for their education, arranging for them to be reared by his relatives rather than with his legitimate children in his family household with his wife.
In 1855, the family moved to No. 1 Merrion Square, where Wilde's sister, Isola, was born in 1857. The Wildes' new home was larger. With both his parents' success and delight in social life, the house soon became the site of a "unique medical and cultural milieu". Guests at their salon included Sheridan Le Fanu, Charles Lever, George Petrie, Isaac Butt, William Rowan Hamilton and Samuel Ferguson.
Until he was nine, Oscar Wilde was educated at home, where a French nursemaid and a German governess taught him their languages. He attended Portora Royal School in Enniskillen, County Fermanagh, from 1864 to 1871. Until his early twenties, Wilde summered at the villa, Moytura House, which his father had built in Cong, County Mayo. There the young Wilde and his brother Willie played with George Moore.
Isola died at age nine of meningitis. Wilde's poem "Requiescat" was written in her memory.
"Tread lightly, she is nearUnder the snow
Speak gently, she can hear
the daisies grow"
Wilde left Portora with a royal scholarship to read classics at Trinity College, Dublin, from 1871 to 1874, sharing rooms with his older brother Willie Wilde. Trinity, one of the leading classical schools, placed him with scholars such as R. Y. Tyrrell, Arthur Palmer, Edward Dowden and his tutor, Professor J. P. Mahaffy, who inspired his interest in Greek literature. As a student Wilde worked with Mahaffy on the latter's book "Social Life in Greece". Wilde, despite later reservations, called Mahaffy "my first and best teacher" and "the scholar who showed me how to love Greek things". For his part, Mahaffy boasted of having created Wilde; later, he said Wilde was "the only blot on my tutorship".
The University Philosophical Society also provided an education, as members discussed intellectual and artistic subjects such as Dante Gabriel Rossetti and Algernon Charles Swinburne weekly. Wilde quickly became an established member – the members' suggestion book for 1874 contains two pages of banter sportingly mocking Wilde's emergent aestheticism. He presented a paper titled "Aesthetic Morality". At Trinity, Wilde established himself as an outstanding student: he came first in his class in his first year, won a scholarship by competitive examination in his second and, in his finals, won the Berkeley Gold Medal in Greek, the University's highest academic award. He was encouraged to compete for a demyship to Magdalen College, Oxford – which he won easily, having already studied Greek for over nine years.
At Magdalen, he read Greats from 1874 to 1878, and from there he applied to join the Oxford Union, but failed to be elected.
Attracted by its dress, secrecy, and ritual, Wilde petitioned the Apollo Masonic Lodge at Oxford, and was soon raised to the "Sublime Degree of Master Mason". During a resurgent interest in Freemasonry in his third year, he commented he "would be awfully sorry to give it up if I secede from the Protestant Heresy". Wilde's active involvement in Freemasonry lasted only for the time he spent at Oxford; he allowed his membership of the Apollo University Lodge to lapse after failing to pay subscriptions.
Catholicism deeply appealed to him, especially its rich liturgy, and he discussed converting to it with clergy several times. In 1877, Wilde was left speechless after an audience with Pope Pius IX in Rome. He eagerly read the books of Cardinal Newman, a noted Anglican priest who had converted to Catholicism and risen in the church hierarchy. He became more serious in 1878, when he met the Reverend Sebastian Bowden, a priest in the Brompton Oratory who had received some high-profile converts. Neither his father, who threatened to cut off his funds, nor Mahaffy thought much of the plan; but Wilde, the supreme individualist, balked at the last minute from pledging himself to any formal creed, and on the appointed day of his baptism, sent Father Bowden a bunch of altar lilies instead. Wilde did retain a lifelong interest in Catholic theology and liturgy.
While at Magdalen College, Wilde became particularly well known for his role in the aesthetic and decadent movements. He wore his hair long, openly scorned "manly" sports though he occasionally boxed, and he decorated his rooms with peacock feathers, lilies, sunflowers, blue china and other "objets d'art". He once remarked to friends, whom he entertained lavishly, "I find it harder and harder every day to live up to my blue china." The line quickly became famous, accepted as a slogan by aesthetes but used against them by critics who sensed in it a terrible vacuousness. Some elements disdained the aesthetes, but their languishing attitudes and showy costumes became a recognised pose. Wilde was once physically attacked by a group of four fellow students, and dealt with them single-handedly, surprising critics. By his third year Wilde had truly begun to develop himself and his myth, and considered his learning to be more expansive than what was within the prescribed texts. This attitude resulted in his being rusticated for one term, after he had returned late to a college term from a trip to Greece with Mahaffy.
Wilde did not meet Walter Pater until his third year, but had been enthralled by his "Studies in the History of the Renaissance", published during Wilde's final year in Trinity. Pater argued that man's sensibility to beauty should be refined above all else, and that each moment should be felt to its fullest extent. Years later, in "De Profundis", Wilde described Pater's "Studies..." as "that book that has had such a strange influence over my life". He learned tracts of the book by heart, and carried it with him on travels in later years. Pater gave Wilde his sense of almost flippant devotion to art, though he gained a purpose for it through the lectures and writings of critic John Ruskin. Ruskin despaired at the self-validating aestheticism of Pater, arguing that the importance of art lies in its potential for the betterment of society. Ruskin admired beauty, but believed it must be allied with, and applied to, moral good. When Wilde eagerly attended Ruskin's lecture series "The Aesthetic and Mathematic Schools of Art in Florence", he learned about aesthetics as the non-mathematical elements of painting. Despite being given to neither early rising nor manual labour, Wilde volunteered for Ruskin's project to convert a swampy country lane into a smart road neatly edged with flowers.
Wilde won the 1878 Newdigate Prize for his poem "Ravenna", which reflected on his visit there the year before, and he duly read it at Encaenia. In November 1878, he graduated with a double first in his B.A. of Classical Moderations and Literae Humaniores (Greats). Wilde wrote to a friend, "The dons are 'astonied' beyond words – the Bad Boy doing so well in the end!"
After graduation from Oxford, Wilde returned to Dublin, where he met again Florence Balcombe, a childhood sweetheart. She became engaged to Bram Stoker and they married in 1878. Wilde was disappointed but stoic: he wrote to her, remembering "the two sweet years – the sweetest years of all my youth" during which they had been close. He also stated his intention to "return to England, probably for good." This he did in 1878, only briefly visiting Ireland twice after that.
Unsure of his next step, Wilde wrote to various acquaintances enquiring about Classics positions at Oxford or Cambridge. "The Rise of Historical Criticism" was his submission for the Chancellor's Essay prize of 1879, which, though no longer a student, he was still eligible to enter. Its subject, "Historical Criticism among the Ancients", seemed ready-made for Wilde – with both his skill in composition and his ancient learning – but he struggled to find his voice with the long, flat, scholarly style. Unusually, no prize was awarded that year.
With the last of his inheritance from the sale of his father's houses, he set himself up as a bachelor in London. The 1881 British Census listed Wilde as a boarder at 1 (now 44) Tite Street, Chelsea, where Frank Miles, a society painter, was the head of the household. Wilde spent the next six years in London and Paris, and in the United States, where he travelled to deliver lectures.
He had been publishing lyrics and poems in magazines since entering Trinity College, especially in "Kottabos" and the "Dublin University Magazine". In mid-1881, at 27 years old, he published "Poems", which collected, revised and expanded his poems.
The book was generally well received, and sold out its first print run of 750 copies. "Punch" was less enthusiastic, saying "The poet is Wilde, but his poetry's tame". By a tight vote, the Oxford Union condemned the book for alleged plagiarism. The librarian, who had requested the book for the library, returned the presentation copy to Wilde with a note of apology. Biographer Richard Ellmann argues that Wilde's poem "Hélas!" was a sincere, though flamboyant, attempt to explain the dichotomies the poet saw in himself; one couplet reads: "To drift with every passion till my soul / Is a stringed lute on which all winds can play".
The book had further printings in 1882. It was bound in a rich, enamel parchment cover (embossed with gilt blossom) and printed on hand-made Dutch paper; over the next few years, Wilde presented many copies to the dignitaries and writers who received him during his lecture tours.
Aestheticism was sufficiently in vogue to be caricatured by Gilbert and Sullivan in "Patience" (1881). Richard D'Oyly Carte, an English impresario, invited Wilde to make a lecture tour of North America, simultaneously priming the pump for the US tour of "Patience" and selling this most charming aesthete to the American public. Wilde journeyed on the SS "Arizona", arriving 2 January 1882 and disembarking the following day. Originally planned to last four months, the tour continued for almost a year owing to its commercial success. Wilde sought to transpose the beauty he saw in art into daily life. This was a practical as well as philosophical project: in Oxford he had surrounded himself with blue china and lilies, and now one of his lectures was on interior design.
When asked to explain reports that he had paraded down Piccadilly in London carrying a lily, long hair flowing, Wilde replied, "It's not whether I did it or not that's important, but whether people believed I did it". Wilde believed that the artist should hold forth higher ideals, and that pleasure and beauty would replace utilitarian ethics.
Wilde and aestheticism were both mercilessly caricatured and criticised in the press; the "Springfield Republican", for instance, commented on Wilde's behaviour during his visit to Boston to lecture on aestheticism, suggesting that Wilde's conduct was more a bid for notoriety than a devotion to beauty and the aesthetic. T. W. Higginson, a cleric and abolitionist, wrote in "Unmanly Manhood" of his general concern that Wilde, "whose only distinction is that he has written a thin volume of very mediocre verse", would improperly influence the behaviour of men and women.
According to biographer Michèle Mendelssohn, Wilde was the subject of anti-Irish caricature and was portrayed as a monkey, a blackface performer and a Christy's Minstrel throughout his career. ""Harper's Weekly" put a sunflower-worshipping monkey dressed as Wilde on the front of the January 1882 issue. The magazine didn't let its reputation for quality impede its expression of what are now considered odious ethnic and racial ideologies. The drawing stimulated other American maligners and, in England, had a full-page reprint in the "Lady's Pictorial". ... When the "National Republican" discussed Wilde, it was to explain 'a few items as to the animal's pedigree.' And on 22 January 1882 the Washington Post illustrated the Wild Man of Borneo alongside Oscar Wilde of England and asked 'How far is it from this to this?'" Though his press reception was hostile, Wilde was well received in diverse settings across America; he drank whiskey with miners in Leadville, Colorado, and was fêted at the most fashionable salons in many cities he visited.
His earnings, plus expected income from "The Duchess of Padua", allowed him to move to Paris between February and mid-May 1883. While there he met Robert Sherard, whom he entertained constantly. "We are dining on the Duchess tonight", Wilde would declare before taking him to an expensive restaurant. In August he briefly returned to New York for the production of "Vera", his first play, after it was turned down in London. He reportedly entertained the other passengers with "Ave Imperatrix", his poem about the rise and fall of empires. E. C. Stedman, in "Victorian Poets", describes this "lyric to England" as "manly verse – a poetic and eloquent invocation".
Wilde had to return to England, where he continued to lecture on topics including "Personal Impressions of America", "The Value of Art in Modern Life", and "Dress".
In London, he had been introduced in 1881 to Constance Lloyd, daughter of Horace Lloyd, a wealthy Queen's Counsel, and his wife. She happened to be visiting Dublin in 1884, when Wilde was lecturing at the Gaiety Theatre. He proposed to her, and they married on 29 May 1884 at the Anglican St James's Church, Paddington, in London. Although Constance had an annual allowance of £250, which was generous for a young woman, the Wildes had relatively luxurious tastes. They had preached to others for so long on the subject of design that people expected their home to set new standards. No. 16, Tite Street was duly renovated in seven months at considerable expense. The couple had two sons together, Cyril (1885) and Vyvyan (1886). Wilde became the sole literary signatory of George Bernard Shaw's petition for a pardon of the anarchists arrested (and later executed) after the Haymarket massacre in Chicago in 1886.
Robert Ross had read Wilde's poems before they met at Oxford in 1886. Ross seemed unrestrained by the Victorian prohibition against homosexuality, and had become estranged from his family. By Richard Ellmann's account, he was a precocious seventeen-year-old who, "so young and yet so knowing, was determined to seduce Wilde". According to Daniel Mendelsohn, Wilde, who had long alluded to Greek love, was "initiated into homosexual sex" by Ross, while his "marriage had begun to unravel after his wife's second pregnancy, which left him physically repelled".
Criticism over artistic matters in "The Pall Mall Gazette" provoked a letter in self-defence, and soon Wilde was a contributor to that and other journals during 1885–87. He enjoyed reviewing and journalism; the form suited his style. He could organise and share his views on art, literature and life, yet in a format less tedious than lecturing. Buoyed up, he wrote reviews that were largely chatty and positive. Wilde, like his parents before him, also supported the cause of Irish nationalism. When Charles Stewart Parnell was falsely accused of inciting murder, Wilde wrote a series of astute columns defending him in the "Daily Chronicle".
His flair, having previously been put mainly into socialising, suited journalism and rapidly attracted notice. With his youth nearly over, and a family to support, in mid-1887 Wilde became the editor of "The Lady's World" magazine, his name prominently appearing on the cover. He promptly renamed it as "The Woman's World" and raised its tone, adding serious articles on parenting, culture, and politics, while keeping discussions of fashion and arts. Two pieces of fiction were usually included, one to be read to children, the other for the ladies themselves. Wilde worked hard to solicit good contributions from his wide artistic acquaintance, including those of Lady Wilde and his wife Constance, while his own "Literary and Other Notes" were themselves popular and amusing.
The initial vigour and excitement which he brought to the job began to fade as administration, commuting and office life became tedious. At the same time as Wilde's interest flagged, the publishers became concerned anew about circulation: sales, at the relatively high price of one shilling, remained low. Increasingly sending instructions to the magazine by letter, Wilde began a new period of creative work and his own column appeared less regularly. In October 1889, Wilde had finally found his voice in prose and, at the end of the second volume, he left "The Woman's World". The magazine outlasted him by one issue.
If Wilde's period at the helm of the magazine was a mixed success from an organisational point of view, it played a pivotal role in his development as a writer and facilitated his ascent to fame. Whilst Wilde the journalist supplied articles under the guidance of his editors, Wilde the editor was forced to learn to manipulate the literary marketplace on his own terms.
During the late 1880s, Wilde was a close friend of the artist James McNeill Whistler and they dined together on many occasions. At one of these dinners, Whistler said a bon mot that Wilde found particularly witty; Wilde exclaimed that he wished that he had said it, and Whistler retorted, "You will, Oscar, you will". Herbert Vivian – a mutual friend of Wilde and Whistler – attended the dinner and recorded it in his article "The Reminiscences of a Short Life", which appeared in "The Sun" in 1889. The article alleged that Wilde had a habit of passing off other people's witticisms as his own – especially Whistler's. Wilde considered Vivian's article a scurrilous betrayal, and it led directly to the end of the friendship between Wilde and Whistler. The article also caused great acrimony between Wilde and Vivian: Wilde accused Vivian of "the inaccuracy of an eavesdropper with the method of a blackmailer" and banished him from his circle.
Wilde published "The Happy Prince and Other Tales" in 1888, and had been regularly writing fairy stories for magazines. In 1891 he published two more collections: "Lord Arthur Savile's Crime and Other Stories", and, in September, "A House of Pomegranates", which was dedicated "To Constance Mary Wilde". "The Portrait of Mr. W. H.", which Wilde had begun in 1887, was first published in "Blackwood's Edinburgh Magazine" in July 1889. It is a short story reporting a conversation in which the theory that Shakespeare's sonnets were written out of the poet's love for the boy actor "Willie Hughes" is advanced, retracted, and then propounded again. The only evidence for this is two supposed puns within the sonnets themselves.
The anonymous narrator is at first sceptical, then believing, finally flirtatious with the reader: he concludes that "there is really a great deal to be said of the Willie Hughes theory of Shakespeare's sonnets." By the end, fact and fiction have melded together. Arthur Ransome wrote that Wilde "read something of himself into Shakespeare's sonnets" and became fascinated with the "Willie Hughes theory" despite the lack of biographical evidence for the historical William Hughes' existence. Instead of writing a short but serious essay on the question, Wilde tossed the theory amongst the three characters of the story, allowing it to unfold as background to the plot. The story thus is an early masterpiece of Wilde's, combining many elements that interested him: conversation, literature and the idea that to shed oneself of an idea one must first convince another of its truth. Ransome concludes that Wilde succeeds precisely because the literary criticism is unveiled with such a deft touch.
Though the story contains nothing but "special pleading", Ransome says it would not "be possible to build an airier castle in Spain than this of the imaginary William Hughes"; we continue listening nonetheless, charmed by the telling. "You must believe in Willie Hughes," Wilde told an acquaintance, "I almost do, myself."
Wilde, having tired of journalism, had been busy setting out his aesthetic ideas more fully in a series of longer prose pieces which were published in the major literary-intellectual journals of the day. In January 1889, "The Decay of Lying: A Dialogue" appeared in "The Nineteenth Century", and "Pen, Pencil and Poison", a satirical biography of Thomas Griffiths Wainewright, in "The Fortnightly Review", edited by Wilde's friend Frank Harris. Two of Wilde's four writings on aesthetics are dialogues: though Wilde had evolved professionally from lecturer to writer, he retained an oral tradition of sorts. Having always excelled as a wit and raconteur, he often composed by assembling phrases, "bons mots" and witticisms into a longer, cohesive work.
Wilde was concerned about the effect of moralising on art; he believed in art's redemptive, developmental powers: "Art is individualism, and individualism is a disturbing and disintegrating force. There lies its immense value. For what it seeks is to disturb monotony of type, slavery of custom, tyranny of habit, and the reduction of man to the level of a machine." In his only political text, "The Soul of Man Under Socialism", he argued that political conditions should establish this primacy – private property should be abolished, and cooperation should be substituted for competition. At the same time, he stressed that the government most amenable to artists was no government at all. Wilde envisioned a society in which mechanisation would free human effort from the burden of necessity, so that effort could instead be expended on artistic creation. George Orwell summarised, "In effect, the world will be populated by artists, each striving after perfection in the way that seems best to him."
This point of view did not align him with the Fabians, intellectual socialists who advocated using state apparatus to change social conditions, nor did it endear him to the monied classes whom he had previously entertained. Hesketh Pearson, introducing a collection of Wilde's essays in 1950, remarked how "The Soul of Man Under Socialism" had been an inspirational text for revolutionaries in Tsarist Russia but lamented that in the Stalinist era "it is doubtful whether there are any uninspected places in which it could now be hidden".
Wilde considered including this pamphlet and "The Portrait of Mr. W. H.", his essay-story on Shakespeare's sonnets, in a new anthology in 1891, but eventually decided to limit it to purely aesthetic subjects. "Intentions" packaged revisions of four essays: "The Decay of Lying"; "Pen, Pencil and Poison"; "The Truth of Masks" (first published 1885); and "The Critic as Artist" in two parts. For Pearson the biographer, the essays and dialogues exhibit every aspect of Wilde's genius and character – wit, romancer, talker, lecturer, humanist and scholar – and he concludes that "no other productions of his have as varied an appeal". 1891 turned out to be Wilde's "annus mirabilis"; apart from his three collections he also produced his only novel.
The first version of "The Picture of Dorian Gray" was published as the lead story in the July 1890 edition of "Lippincott's Monthly Magazine", along with five others. The story begins with a man painting a picture of Gray. When Gray, who has a "face like ivory and rose leaves", sees his finished portrait, he breaks down. Distraught that his beauty will fade while the portrait stays beautiful, he inadvertently makes a Faustian bargain in which only the painted image grows old while he stays beautiful and young. For Wilde, the purpose of art would be to guide life as if beauty alone were its object. As Gray's portrait allows him to escape the corporeal ravages of his hedonism, Wilde sought to juxtapose the beauty he saw in art with daily life.
Reviewers immediately criticised the novel's decadence and homosexual allusions; "The Daily Chronicle", for example, called it "unclean", "poisonous", and "heavy with the mephitic odours of moral and spiritual putrefaction". Wilde responded vigorously in a letter to the editor of the "Scots Observer", clarifying his stance on ethics and aesthetics in art: "If a work of art is rich and vital and complete, those who have artistic instincts will see its beauty and those to whom ethics appeal more strongly will see its moral lesson." He nevertheless revised the novel extensively for book publication in 1891: six new chapters were added, some overtly decadent passages and homo-eroticism were excised, and a preface was included consisting of twenty-two epigrams, such as "Books are well written, or badly written. That is all."
Contemporary reviewers and modern critics have postulated numerous possible sources for the story, a search Jerusha McCormack argues is futile because Wilde "has tapped a root of Western folklore so deep and ubiquitous that the story has escaped its origins and returned to the oral tradition." Wilde claimed the plot was "an idea that is as old as the history of literature but to which I have given a new form". The modern critic Robin McKie considered the novel technically mediocre, saying that the conceit of the plot had guaranteed its fame, but that the device is never pushed to its full potential. On the other hand, Robert McCrum of "The Guardian" deemed it the 27th best novel ever written in English, calling it "an arresting, and slightly camp, exercise in late-Victorian gothic."
The 1891 census records the Wildes' residence at 16 Tite Street, where he lived with his wife Constance and two sons. Wilde, though, not content with being better known than ever in London, returned to Paris in October 1891, this time as a respected writer. He was received at the "salons littéraires", including the famous "mardis" of Stéphane Mallarmé, a renowned symbolist poet of the time. Wilde's two plays of the 1880s, "Vera; or, The Nihilists" and "The Duchess of Padua", had not met with much success. He had continued his interest in the theatre and now, after finding his voice in prose, his thoughts turned again to the dramatic form as the biblical iconography of Salome filled his mind. One evening, after discussing depictions of Salome throughout history, he returned to his hotel and noticed a blank copybook lying on the desk, and it occurred to him to write in it what he had been saying. The result was a new play, "Salomé", written rapidly and in French.
A tragedy, it tells the story of Salome, the stepdaughter of the tetrarch Herod Antipas, who, to her stepfather's dismay but mother's delight, requests the head of Jokanaan (John the Baptist) on a silver platter as a reward for dancing the Dance of the Seven Veils. When Wilde returned to London just before Christmas the "Paris Echo" referred to him as "le great event" of the season. Rehearsals of the play, starring Sarah Bernhardt, began but the play was refused a licence by the Lord Chamberlain, since it depicted biblical characters. "Salome" was published jointly in Paris and London in 1893, but was not performed until 1896 in Paris, during Wilde's later incarceration.
Wilde, who had first set out to irritate Victorian society with his dress and talking points, then outrage it with "Dorian Gray", his novel of vice hidden beneath art, finally found a way to critique society on its own terms. "Lady Windermere's Fan" was first performed on 20 February 1892 at St James's Theatre, packed with the cream of society. On the surface a witty comedy, there is subtle subversion underneath: "it concludes with collusive concealment rather than collective disclosure". The audience, like Lady Windermere, are forced to soften harsh social codes in favour of a more nuanced view. The play was enormously popular, touring the country for months, but was largely disparaged by conservative critics.
It was followed by "A Woman of No Importance" in 1893, another Victorian comedy, revolving around the spectre of illegitimate births, mistaken identities and late revelations. Wilde was commissioned to write two more plays and "An Ideal Husband", written in 1894, followed in January 1895.
Peter Raby said these essentially English plays were well-pitched: "Wilde, with one eye on the dramatic genius of Ibsen, and the other on the commercial competition in London's West End, targeted his audience with adroit precision".
In mid-1891 Lionel Johnson introduced Wilde to Lord Alfred Douglas, Johnson's cousin and an undergraduate at Oxford at the time. Known to his family and friends as "Bosie", he was a handsome and spoilt young man. An intimate friendship sprang up between Wilde and Douglas and by 1893 Wilde was infatuated with Douglas and they consorted together regularly in a tempestuous affair. If Wilde was relatively indiscreet, even flamboyant, in the way he acted, Douglas was reckless in public. Wilde, who was earning up to £100 a week from his plays (his salary at "The Woman's World" had been £6), indulged Douglas's every whim: material, artistic or sexual.
Douglas soon initiated Wilde into the Victorian underground of gay prostitution and Wilde was introduced to a series of young working-class male prostitutes from 1892 onwards by Alfred Taylor. These infrequent rendezvous usually took the same form: Wilde would meet the boy, offer him gifts, dine him privately and then take him to a hotel room. Unlike Wilde's idealised relations with Ross, John Gray, and Douglas, all of whom remained part of his aesthetic circle, these consorts were uneducated and knew nothing of literature. Soon his public and private lives had become sharply divided; in "De Profundis" he wrote to Douglas that "It was like feasting with panthers; the danger was half the excitement... I did not know that when they were to strike at me it was to be at another's piping and at another's pay."
Douglas and some Oxford friends founded a journal, "The Chameleon", to which Wilde "sent a page of paradoxes originally destined for the 'Saturday Review'". "Phrases and Philosophies for the Use of the Young" was to come under attack six months later at Wilde's trial, where he was forced to defend the magazine to which he had sent his work. In any case, it became unique: "The Chameleon" was not published again.
Lord Alfred's father, the Marquess of Queensberry, was known for his outspoken atheism, brutish manner and creation of the modern rules of boxing. Queensberry, who feuded regularly with his son, confronted Wilde and Lord Alfred about the nature of their relationship several times, but Wilde was able to mollify him. In June 1894, he called on Wilde at 16 Tite Street, without an appointment, and clarified his stance:
"I do not say that you are it, but you look it, and pose at it, which is just as bad. And if I catch you and my son again in any public restaurant I will thrash you" to which Wilde responded: "I don't know what the Queensberry rules are, but the Oscar Wilde rule is to shoot on sight". His account in "De Profundis" was less triumphant: "It was when, in my library at Tite Street, waving his small hands in the air in epileptic fury, your father... stood uttering every foul word his foul mind could think of, and screaming the loathsome threats he afterwards with such cunning carried out". Queensberry only described the scene once, saying Wilde had "shown him the white feather", meaning he had acted in a cowardly way. Though trying to remain calm, Wilde saw that he was becoming ensnared in a brutal family quarrel. He did not wish to bear Queensberry's insults, but he knew to confront him could lead to disaster were his liaisons disclosed publicly.
Wilde's final play returns to the theme of switched identities: the play's two protagonists engage in "bunburying" (the maintenance of alternative personas in town and country), which allows them to escape Victorian social mores. "Earnest" is even lighter in tone than Wilde's earlier comedies. While their characters often rise to serious themes in moments of crisis, "Earnest" lacks the by-now stock Wildean characters: there is no "woman with a past", the principals are neither villainous nor cunning, simply idle "cultivés", and the idealistic young women are not that innocent. Mostly set in drawing rooms and almost completely lacking in action or violence, "Earnest" is free of the self-conscious decadence found in "The Picture of Dorian Gray" and "Salome".
The play, now considered Wilde's masterpiece, was rapidly written in Wilde's artistic maturity in late 1894. It was first performed on 14 February 1895, at St James's Theatre in London, Wilde's second collaboration with George Alexander, the actor-manager. Both author and producer assiduously revised, prepared and rehearsed every line, scene and setting in the months before the premiere, creating a carefully constructed representation of late-Victorian society, yet simultaneously mocking it. During rehearsal Alexander requested that Wilde shorten the play from four acts to three, which the author did. Premieres at St James's seemed like "brilliant parties", and the opening of "The Importance of Being Earnest" was no exception. Allan Aynesworth (who played Algernon) recalled to Hesketh Pearson, "In my fifty-three years of acting, I never remember a greater triumph than [that] first night." "Earnest's" immediate reception as Wilde's best work to date finally crystallised his fame into a solid artistic reputation. "The Importance of Being Earnest" remains his most popular play.
Wilde's professional success was mirrored by an escalation in his feud with Queensberry. Queensberry had planned to insult Wilde publicly by throwing a bouquet of rotting vegetables onto the stage; Wilde was tipped off and had Queensberry barred from entering the theatre. Fifteen weeks later Wilde was in prison.
On 18 February 1895, the Marquess left his calling card at Wilde's club, the Albemarle, inscribed: "For Oscar Wilde, posing somdomite". Wilde, encouraged by Douglas and against the advice of his friends, initiated a private prosecution against Queensberry for libel, since the note amounted to a public accusation that Wilde had committed the crime of sodomy.
Queensberry was arrested for criminal libel, a charge carrying a possible sentence of up to two years in prison. Under the 1843 Libel Act, Queensberry could avoid conviction for libel only by demonstrating that his accusation was in fact true, and furthermore that there was some "public benefit" to having made the accusation openly. Queensberry's lawyers thus hired private detectives to find evidence of Wilde's homosexual liaisons.
Wilde's friends had advised him against the prosecution at a "Saturday Review" meeting at the Café Royal on 24 March 1895; Frank Harris warned him that "they are going to prove sodomy against you" and advised him to flee to France. Wilde and Douglas walked out in a huff, Wilde saying "it is at such moments as these that one sees who are one's true friends". The scene was witnessed by George Bernard Shaw, who recalled it to Arthur Ransome a day or so before Ransome's trial for libelling Douglas in 1913. To Ransome it confirmed what he had said in his 1912 book on Wilde, and what Wilde himself wrote in "De Profundis": that Douglas's rivalry with Robbie Ross for Wilde, and his arguments with his father, had resulted in Wilde's public disaster. Douglas lost his case. Shaw included an account of the argument between Harris, Douglas and Wilde in the preface to his play "The Dark Lady of the Sonnets".
The libel trial became a "cause célèbre" as salacious details of Wilde's private life with Taylor and Douglas began to appear in the press. A team of private detectives had directed Queensberry's lawyers, led by Edward Carson QC, to the world of the Victorian underground. Wilde's association with blackmailers and male prostitutes, cross-dressers and homosexual brothels was recorded, and various persons involved were interviewed, some being coerced to appear as witnesses since they too were accomplices to the crimes of which Wilde was accused.
The trial opened on 3 April 1895 before Justice Richard Henn Collins amid scenes of near hysteria both in the press and in the public galleries. The extent of the evidence massed against Wilde forced him to declare meekly, "I am the prosecutor in this case". Wilde's lawyer, Sir Edward George Clarke, opened the case by pre-emptively asking Wilde about two suggestive letters Wilde had written to Douglas, which the defence had in its possession. He characterised the first as a "prose sonnet" and admitted that the "poetical language" might seem strange to the court, but claimed its intent was innocent. Wilde stated that the letters had been obtained by blackmailers who had attempted to extort money from him, but he had refused, suggesting they should take the £60 offered, "unusual for a prose piece of that length". He claimed to regard the letters as works of art rather than something of which to be ashamed.
Carson, a fellow Dubliner who had attended Trinity College, Dublin at the same time as Wilde, cross-examined Wilde on how he perceived the moral content of his works. Wilde replied with characteristic wit and flippancy, claiming that works of art are not capable of being moral or immoral but only well or poorly made, and that only "brutes and illiterates", whose views on art "are incalculably stupid", would make such judgements about art. Carson, a leading barrister, diverged from the normal practice of asking closed questions. Carson pressed Wilde on each topic from every angle, squeezing out nuances of meaning from Wilde's answers, removing them from their aesthetic context and portraying Wilde as evasive and decadent. While Wilde won the most laughs from the court, Carson scored the most legal points. To undermine Wilde's credibility, and to justify Queensberry's description of Wilde as a "posing somdomite", Carson drew from the witness an admission of his capacity for "posing", by demonstrating that he had lied about his age on oath. Playing on this, he returned to the topic throughout his cross-examination. Carson also tried to justify Queensberry's characterisation by quoting from Wilde's novel, "The Picture of Dorian Gray", referring in particular to a scene in the second chapter, in which Lord Henry Wotton explains his decadent philosophy to Dorian, an "innocent young man", in Carson's words.
Carson then moved to the factual evidence and questioned Wilde about his friendships with younger, lower-class men. Wilde admitted being on a first-name basis and lavishing gifts upon them, but insisted that nothing untoward had occurred and that the men were merely good friends of his. Carson repeatedly pointed out the unusual nature of these relationships and insinuated that the men were prostitutes. Wilde replied that he did not believe in social barriers, and simply enjoyed the society of young men. Then Carson asked Wilde directly whether he had ever kissed a certain servant boy. Wilde responded, "Oh, dear no. He was a particularly plain boy – unfortunately ugly – I pitied him for it." Carson pressed him on the answer, repeatedly asking why the boy's ugliness was relevant. Wilde hesitated, then for the first time became flustered: "You sting me and insult me and try to unnerve me; and at times one says things flippantly when one ought to speak more seriously."
In his opening speech for the defence, Carson announced that he had located several male prostitutes who were to testify that they had had sex with Wilde. On the advice of his lawyers, Wilde dropped the prosecution. Queensberry was found not guilty, as the court declared that his accusation that Wilde was "posing as a somdomite" was justified, "true in substance and in fact". Under the Libel Act 1843, Queensberry's acquittal rendered Wilde legally liable for the considerable expenses Queensberry had incurred in his defence, which left Wilde bankrupt.
After Wilde left the court, a warrant for his arrest was applied for on charges of sodomy and gross indecency. Robbie Ross found Wilde at the Cadogan Hotel, Pont Street, Knightsbridge, with Reginald Turner; both men advised Wilde to go at once to Dover and try to get a boat to France; his mother advised him to stay and fight. Wilde, lapsing into inaction, could only say, "The train has gone. It's too late." On 6 April 1895, Wilde was arrested for "gross indecency" under Section 11 of the Criminal Law Amendment Act 1885, a term meaning homosexual acts not amounting to buggery (an offence under a separate statute). At Wilde's instruction, Ross and Wilde's butler forced their way into the bedroom and library of 16 Tite Street, packing some personal effects, manuscripts, and letters. Wilde was then imprisoned on remand at Holloway, where he received daily visits from Douglas.
Events moved quickly and his prosecution opened on 26 April 1895, before Mr Justice Charles. Wilde pleaded not guilty. He had already begged Douglas to leave London for Paris, but Douglas complained bitterly, even wanting to give evidence; he was pressed to go and soon fled to the Hotel du Monde. Fearing persecution, Ross and many others also left the United Kingdom during this time. Under cross-examination Wilde was at first hesitant, then spoke eloquently in defence of "the love that dare not speak its name".
The speech was counter-productive in a legal sense, as it only served to reinforce the charges of homosexual behaviour.
The trial ended with the jury unable to reach a verdict. Wilde's counsel, Sir Edward Clarke, was finally able to get a magistrate to allow Wilde and his friends to post bail. The Reverend Stewart Headlam put up most of the £5,000 surety required by the court, having disagreed with Wilde's treatment by the press and the courts. Wilde was freed from Holloway and, shunning attention, went into hiding at the house of Ernest and Ada Leverson, two of his firm friends. Edward Carson approached Frank Lockwood QC, the Solicitor General and asked "Can we not let up on the fellow now?" Lockwood answered that he would like to do so, but feared that the case had become too politicised to be dropped.
The final trial was presided over by Mr Justice Wills. On 25 May 1895 Wilde and Alfred Taylor were convicted of gross indecency and sentenced to two years' hard labour. The judge described the sentence, the maximum allowed, as "totally inadequate for a case such as this", and said that the case was "the worst case I have ever tried". Wilde's response "And I? May I say nothing, my Lord?" was drowned out in cries of "Shame" in the courtroom.
Wilde was incarcerated from 25 May 1895 to 18 May 1897.
He first entered Newgate Prison in London for processing, then was moved to Pentonville Prison, where the "hard labour" to which he had been sentenced consisted of many hours of walking a treadmill and picking oakum (separating the fibres in scraps of old navy ropes), and where prisoners were allowed to read only the Bible and "The Pilgrim's Progress".
A few months later he was moved to Wandsworth Prison in London. Inmates there also followed the regimen of "hard labour, hard fare and a hard bed", which wore harshly on Wilde's delicate health. In November he collapsed during chapel from illness and hunger. His right ear drum was ruptured in the fall, an injury that later contributed to his death. He spent two months in the infirmary.
Richard B. Haldane, the Liberal MP and reformer, visited Wilde and had him transferred to Reading Gaol, west of London, on 23 November 1895. The transfer itself was the lowest point of his incarceration, as a crowd jeered and spat at him on the railway platform. He spent the remainder of his sentence there, addressed and identified only as "C.3.3" – the occupant of the third cell on the third floor of C ward.
About five months after Wilde arrived at Reading Gaol, Charles Thomas Wooldridge, a trooper in the Royal Horse Guards, was brought to Reading to await trial for murdering his wife on 29 March 1896. On 17 June Wooldridge was sentenced to death and returned to Reading for his execution, which took place on Tuesday, 7 July 1896 – the first hanging at Reading in 18 years. From Wooldridge's hanging, Wilde later wrote "The Ballad of Reading Gaol".
Wilde was not, at first, even allowed paper and pen, but Haldane eventually succeeded in obtaining access to books and writing materials for him. Wilde requested, among others: the Bible in French; Italian and German grammars; some Ancient Greek texts; Dante's "Divine Comedy"; Joris-Karl Huysmans's new French novel about Christian redemption, "En route"; and essays by St Augustine, Cardinal Newman and Walter Pater.
Between January and March 1897 Wilde wrote a 50,000-word letter to Douglas. He was not allowed to send it, but was permitted to take it with him when released from prison. In reflective mode, Wilde coldly examines his career to date, how he had been a colourful "agent provocateur" in Victorian society, his art, like his paradoxes, seeking to subvert as well as sparkle. His own estimation of himself was: one who "stood in symbolic relations to the art and culture of my age". It was from these heights that his life with Douglas began, and Wilde examines that particularly closely, repudiating him for what Wilde finally sees as his arrogance and vanity: he had not forgotten Douglas' remark, when he was ill, "When you are not on your pedestal you are not interesting." Wilde blamed himself, though, for the ethical degradation of character that he allowed Douglas to bring about in him and took responsibility for his own fall, "I am here for having tried to put your father in prison." The first half concludes with Wilde forgiving Douglas, for his own sake as much as Douglas's. The second half of the letter traces Wilde's spiritual journey of redemption and fulfilment through his prison reading. He realised that his ordeal had filled his soul with the fruit of experience, however bitter it tasted at the time.
Wilde was released from prison on 19 May 1897 and sailed that evening for Dieppe, France. He never returned to the UK.
On his release, he gave the manuscript to Ross, who may or may not have carried out Wilde's instructions to send a copy to Douglas (who later denied having received it). The letter was partially published in 1905 as "De Profundis"; its complete and correct publication first occurred in 1962 in "The Letters of Oscar Wilde".
Though Wilde's health had suffered greatly from the harshness and diet of prison, he had a feeling of spiritual renewal. He immediately wrote to the Society of Jesus requesting a six-month Catholic retreat; when the request was denied, Wilde wept. "I intend to be received into the Catholic Church before long", Wilde told a journalist who asked about his religious intentions.
He spent his last three years impoverished and in exile. He took the name "Sebastian Melmoth", after Saint Sebastian and the titular character of "Melmoth the Wanderer" (a Gothic novel by Charles Maturin, Wilde's great-uncle). Wilde wrote two long letters to the editor of the "Daily Chronicle", describing the brutal conditions of English prisons and advocating penal reform. His discussion of the dismissal of Warder Martin for giving biscuits to an anaemic child prisoner repeated the themes of the corruption and degeneration of punishment that he had earlier outlined in "The Soul of Man under Socialism".
Wilde spent mid-1897 with Robert Ross in the seaside village of Berneval-le-Grand in northern France, where he wrote "The Ballad of Reading Gaol", narrating the execution of Charles Thomas Wooldridge, who murdered his wife in a rage at her infidelity. It moves from an objective story-telling to symbolic identification with the prisoners. No attempt is made to assess the justice of the laws which convicted them but rather the poem highlights the brutalisation of the punishment that all convicts share. Wilde juxtaposes the executed man and himself with the line "Yet each man kills the thing he loves". He adopted the proletarian ballad form and the author was credited as "C33", Wilde's cell number in Reading Gaol. He suggested that it be published in "Reynolds' Magazine", "because it circulates widely among the criminal classes – to which I now belong – for once I will be read by my peers – a new experience for me". It was an immediate roaring commercial success, going through seven editions in less than two years, only after which "[Oscar Wilde]" was added to the title page, though many in literary circles had known Wilde to be the author. It brought him a small amount of money.
Although Douglas had been the cause of his misfortunes, he and Wilde were reunited in August 1897 at Rouen. This meeting was disapproved of by the friends and families of both men. Constance Wilde was already refusing to meet Wilde or allow him to see their sons, though she sent him money – three pounds a week. During the latter part of 1897, Wilde and Douglas lived together near Naples for a few months until they were separated by their families under the threat of cutting off all funds.
Wilde's final address was at the dingy Hôtel d'Alsace (now known as L'Hôtel), on rue des Beaux-Arts in Saint-Germain-des-Prés, Paris. "This poverty really breaks one's heart: it is so "sale" [filthy], so utterly depressing, so hopeless. Pray do what you can" he wrote to his publisher. He corrected and published "An Ideal Husband" and "The Importance of Being Earnest", the proofs of which, according to Ellmann, show a man "very much in command of himself and of the play" but he refused to write anything else: "I can write, but have lost the joy of writing".
He wandered the boulevards alone and spent what little money he had on alcohol. A series of embarrassing chance encounters with hostile English visitors, or Frenchmen he had known in better days, drowned his spirit. Soon Wilde was sufficiently confined to his hotel to joke, on one of his final trips outside, "My wallpaper and I are fighting a duel to the death. One of us has got to go". On 12 October 1900 he sent a telegram to Ross: "Terribly weak. Please come". His moods fluctuated; Max Beerbohm relates how their mutual friend Reginald 'Reggie' Turner had found Wilde very depressed after a nightmare. "I dreamt that I had died, and was supping with the dead!" "I am sure", Turner replied, "that you must have been the life and soul of the party." Turner was one of the few of the old circle who remained with Wilde to the end and was at his bedside when he died.
By 25 November 1900 Wilde had developed meningitis, then called "cerebral meningitis". Robbie Ross arrived on 29 November and sent for a priest, and Wilde was conditionally baptised into the Catholic Church by Fr Cuthbert Dunne, a Passionist priest from Dublin. Wilde had been baptised in the Church of Ireland and had, moreover, a recollection of a Catholic baptism as a child, a fact later attested to by the minister of the sacrament, Fr Lawrence Fox. Fr Dunne recorded the baptism.
Wilde died of meningitis on 30 November 1900. Different opinions are given as to the cause of the disease: Richard Ellmann claimed it was syphilitic; Merlin Holland, Wilde's grandson, thought this to be a misconception, noting that Wilde's meningitis followed a surgical intervention, perhaps a mastoidectomy; Wilde's physicians, Dr Paul Cleiss and A'Court Tucker, reported that the condition stemmed from an old suppuration of the right ear (from the prison injury, see above) treated for several years ("une ancienne suppuration de l'oreille droite d'ailleurs en traitement depuis plusieurs années") and made no allusion to syphilis.
Wilde was initially buried in the Cimetière de Bagneux outside Paris; in 1909 his remains were disinterred and transferred to Père Lachaise Cemetery, inside the city. His tomb there was designed by Sir Jacob Epstein. It was commissioned by Robert Ross, who asked for a small compartment to be made for his own ashes, which were duly transferred in 1950. The modernist angel depicted as a relief on the tomb was originally complete with male genitalia, which were initially censored by French authorities with a golden leaf. The genitals have since been vandalised; their current whereabouts are unknown. In 2000, Leon Johnson, a multimedia artist, installed a silver prosthesis to replace them. In 2011, the tomb was cleaned of the many lipstick marks left there by admirers and a glass barrier was installed to prevent further marks or damage.
The epitaph is a verse from "The Ballad of Reading Gaol":
And alien tears will fill for him
Pity's long-broken urn,
For his mourners will be outcast men,
And outcasts always mourn.
In 2017, Wilde was among an estimated 50,000 men who were pardoned for homosexual acts that were no longer considered offences under the Policing and Crime Act 2017. The Act is known informally as the Alan Turing law.
In 2014 Wilde was one of the inaugural honorees in the Rainbow Honor Walk, a walk of fame in San Francisco's Castro neighbourhood noting LGBTQ people who have "made significant contributions in their fields."
The Oscar Wilde Temple, an installation by visual artists McDermott & McGough, opened in 2017 in cooperation with Church of the Village in New York City, then moved to Studio Voltaire in London the next year.
Wilde's life has been the subject of numerous biographies since his death. The earliest were memoirs by those who knew him: often they are personal or impressionistic accounts which can be good character sketches, but are sometimes factually unreliable. Frank Harris, his friend and editor, wrote a biography, "Oscar Wilde: His Life and Confessions" (1916); though prone to exaggeration and sometimes factually inaccurate, it offers a good literary portrait of Wilde. Lord Alfred Douglas wrote two books about his relationship with Wilde. "Oscar Wilde and Myself" (1914), largely ghost-written by T. W. H. Crosland, reacted vindictively to Douglas's discovery that "De Profundis" was addressed to him and tried defensively to distance him from Wilde's scandalous reputation. Both authors later regretted their work. Later, in "Oscar Wilde: A Summing Up" (1939) and his "Autobiography", Douglas was more sympathetic to Wilde. Of Wilde's other close friends, Robert Sherard; Robert Ross, his literary executor; and Charles Ricketts variously published biographies, reminiscences or correspondence. The first more or less objective biography of Wilde came about when Hesketh Pearson wrote "Oscar Wilde: His Life and Wit" (1946). In 1954 Wilde's son Vyvyan Holland published his memoir "Son of Oscar Wilde", which recounts the difficulties Wilde's wife and children faced after his imprisonment. It was revised and updated by Merlin Holland in 1989.
"Oscar Wilde, a critical study" by Arthur Ransome was published in 1912. The book only briefly mentioned Wilde's life, but subsequently Ransome (and The Times Book Club) were sued for libel by Lord Alfred Douglas. In April 1913 Douglas lost the libel action after a reading of "De Profundis" refuted his claims.
Richard Ellmann wrote his 1987 biography "Oscar Wilde", for which he posthumously won the US National Book Critics Circle Award in 1988 and the Pulitzer Prize in 1989. The book was the basis for the 1997 film "Wilde", directed by Brian Gilbert and starring Stephen Fry as the title character.
Neil McKenna's 2003 biography, "The Secret Life of Oscar Wilde", offers an exploration of Wilde's sexuality. Often speculative, it was widely criticised for its conjecture and lack of scholarly rigour. Thomas Wright's "Oscar's Books" (2008) explores Wilde's reading from his childhood in Dublin to his death in Paris. After tracking down many books that once belonged to Wilde's Tite Street library (dispersed at the time of his trials), Wright was the first to examine Wilde's marginalia.
In 2018, Matthew Sturgis' "Oscar: A Life" was published in London. The book incorporates rediscovered letters and other documents and is the most extensively researched biography of Wilde to appear since 1988.
Parisian literati also produced several biographies and monographs on him. André Gide wrote "In Memoriam, Oscar Wilde", and Wilde also features in his journals. Thomas Louis, who had earlier translated books on Wilde into French, produced his own "L'esprit d'Oscar Wilde" in 1920. Modern books include Philippe Jullian's "Oscar Wilde", and "L'affaire Oscar Wilde, ou, Du danger de laisser la justice mettre le nez dans nos draps" ("The Oscar Wilde Affair, or, On the Danger of Allowing Justice to put its Nose in our Sheets"), by a French religious historian.
Ostracism
Ostracism (, "ostrakismos") was a procedure under the Athenian democracy in which any citizen could be expelled from the city-state of Athens for ten years. While some instances clearly expressed popular anger at the citizen, ostracism was often used preemptively. It was used as a way of neutralizing someone thought to be a threat to the state or potential tyrant. The word "ostracism" continues to be used for various cases of social shunning.
The name is derived from the "ostraka" (singular "ostrakon", ὄστρακον), referring to the pottery shards that were used as voting tokens. Broken pottery, abundant and virtually free, served as a kind of scrap paper (in contrast to papyrus, which was imported from Egypt as a high-quality writing surface, and was thus too costly to be disposable).
Each year the Athenians were asked in the assembly whether they wished to hold an ostracism. The question was put in the sixth of the ten months used for state business under the democracy (January or February in the modern Gregorian calendar). If they voted "yes", an ostracism would be held two months later. In a section of the agora set off and suitably barriered, citizens gave the name of the person they wished to be ostracised to a scribe, as many of them were illiterate; the name was then scratched on a pottery shard and deposited in an urn. The presiding officials counted the "ostraka" submitted and sorted the names into separate piles. The person whose pile contained the most "ostraka" would be banished, provided that an additional criterion of a quorum was met, about which there are two principal sources.
Plutarch's evidence for a quorum of 6,000 – on "a priori" grounds a necessity for ostracism, and also implied by the account of Philochorus – accords with the number required for grants of citizenship in the following century and is generally preferred.
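The mechanics of the count are simple enough to model. As a purely illustrative modern sketch (not part of the ancient record), the following Python fragment tallies a hypothetical list of submitted names under the generally preferred reading, in which the quorum of 6,000 applies to the total number of "ostraka" cast; the function name, the list-of-names representation and the tie-breaking behaviour are assumptions made for illustration only.

    from collections import Counter

    QUORUM = 6000  # Plutarch's reading: at least 6,000 ostraka cast in total

    def ostracism_result(ostraka):
        """Return the name of the person banished, or None if the quorum fails.

        ostraka: a list of names, one per pottery shard submitted.
        Ties are broken arbitrarily here; how the Athenians handled
        ties is not recorded in this account.
        """
        if len(ostraka) < QUORUM:
            return None  # too few votes cast; no one is expelled
        tally = Counter(ostraka)           # sort the names into piles
        name, _count = tally.most_common(1)[0]  # the largest pile wins
        return name

Under the rival reading attributed to Philochorus, the check would instead apply to the size of the largest pile rather than to the total number of shards cast.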
The person nominated had ten days to leave the city. If he attempted to return, the penalty was death. Notably, the property of the man banished was not confiscated and there was no loss of status. After the ten years, he was allowed to return without stigma. It was possible for the assembly to recall an ostracised person ahead of time; before the Persian invasion of 479 BC, an amnesty was declared under which at least two ostracised leaders—Pericles' father Xanthippus and Aristides 'the Just'—are known to have returned. Similarly, Cimon, ostracised in 461 BC, was recalled during an emergency.
Ostracism was crucially different from Athenian law at the time; there was no charge, and no defence could be mounted by the person expelled. The two stages of the procedure ran in the reverse order from that used under almost any trial system – here it is as if a jury were first asked "Do you want to find someone guilty?", and only afterwards asked "Whom do you wish to accuse?". Equally out of place in a judicial framework is perhaps the institution's most peculiar feature: that it could take place at most once a year, and only for one person. In this it resembles the Greek "pharmakos" or scapegoat – though in contrast, the "pharmakos" ritual generally ejected a lowly member of the community.
A further distinction between these two modes (and one not obvious from a modern perspective) is that ostracism was an automatic procedure that required no initiative from any individual, with the vote simply occurring on the wish of the electorate—a diffuse exercise of power. By contrast, an Athenian trial needed the initiative of a particular citizen-prosecutor. While prosecution often led to a counterattack (or was a counterattack itself), no such response was possible in the case of ostracism as responsibility lay with the polity as a whole. In contrast to a trial, ostracism generally reduced political tension rather than increased it.
Although ten years of exile would have been difficult for an Athenian to face, it was relatively mild in comparison to the kind of sentences inflicted by courts; when dealing with politicians held to be acting against the interests of the people, Athenian juries could inflict very severe penalties such as death, unpayably large fines, confiscation of property, permanent exile and loss of citizens' rights through "atimia". Further, the elite Athenians who suffered ostracism were rich or noble men who had connections or "xenoi" in the wider Greek world and who, unlike genuine exiles, were able to access their income in Attica from abroad. Plutarch, following the anti-democratic line common in elite sources, presents the fact that people might be recalled early as another example of the inconsistency of majoritarianism characteristic of Athenian democracy. However, ten years of exile usually resolved whatever had prompted the expulsion. Ostracism was simply a pragmatic measure; the concept of serving out the full sentence did not apply, as it was a preventative measure, not a punitive one.
One curious window on the practicalities of ostracism comes from a cache of 190 ostraka discovered dumped in a well next to the acropolis. From the handwriting, they appear to have been written by fourteen individuals; all bear the name of Themistocles, ostracised before 471 BC, and were evidently meant for distribution to voters. This was not necessarily evidence of electoral fraud (being no worse than modern voting instruction cards), but their being dumped in the well may suggest that their creators wished to hide them. If so, these ostraka provide an example of organised groups attempting to influence the outcome of ostracisms. The two-month gap between the first and second phases would have easily allowed for such a campaign.
There is another interpretation, however, according to which these ostraka were prepared beforehand by enterprising businessmen who offered them for sale to citizens who could not easily inscribe the desired names for themselves or who simply wished to save time.
The two-month gap is a key feature of the institution, much as in elections under modern liberal democracies. First, it prevented the candidate for expulsion being chosen out of immediate anger (although an Athenian general such as Cimon would not have wanted to lose a battle in the week before such a vote). Secondly, it opened up a period for discussion, or perhaps agitation, whether informally in daily talk or in public speeches before the Athenian assembly or courts. In this process a consensus, or rival consensuses, might emerge. Further, in that time of waiting, ordinary Athenian citizens must have felt a certain power over the greatest members of their city; conversely, the most prominent citizens had an incentive to worry how their social inferiors regarded them.
Ostracism was not in use throughout the whole period of Athenian democracy (circa 506–322 BC), but only occurred in the fifth century BC. The standard account, found in Aristotle's "Constitution of the Athenians" 22.3, attributes its establishment to Cleisthenes, a pivotal reformer in the creation of the democracy. In that case, ostracism would have been in place from around 506 BC. The first victim of the practice, however, was not expelled until 487 BC – nearly 20 years later. Over the course of the next 60 years some 12 or more individuals followed him. The record may not be complete, but there is good reason to believe the Athenians did not feel the need to eject someone in this way every year.
Around 12,000 political ostraka have been excavated in the Athenian agora and in the Kerameikos. The second victim, Cleisthenes' nephew Megacles, is named by 4,647 of these, but apparently for a second, undated ostracism. The excavated ostraka seem to fall into three distinct phases – the 480s BC, the mid-century years 461–443 BC, and finally 417–415 BC – which matches fairly well with the clustering of known expulsions, although Themistocles, ostracised before 471 BC, may count as an exception. This suggests that ostracism fell in and out of fashion.
The last known ostracism was that of Hyperbolus in about 417 BC. There is no sign of its use after the Peloponnesian War, when democracy was restored after the oligarchic coup of the Thirty had collapsed in 403 BC. However, while ostracism was not an active feature of the fourth-century version of democracy, it remained available: the question was still put to the assembly each year, but the assembly never wished to hold one.
Because ostracism was carried out by thousands of people over many decades of an evolving political situation and culture, it did not serve a single monolithic purpose. Observations can be made about the outcomes, as well as the initial purpose for which it was created.
The first rash of people ostracised in the decade after the defeat of the first Persian invasion at Marathon in 490 BC were all related or connected to the tyrant Peisistratos, who had controlled Athens for 36 years up to 527 BC. After his son Hippias was deposed with Spartan help in 510 BC, the family sought refuge with the Persians, and nearly twenty years later Hippias landed with their invasion force at Marathon. Tyranny and Persian aggression were paired threats facing the new democratic regime at Athens, and ostracism was used against both.
Tyranny and democracy had arisen at Athens out of clashes between regional and factional groups organised around politicians, including Cleisthenes. As a reaction, in many of its features the democracy strove to reduce the role of factions as the focus of citizen loyalties. Ostracism, too, may have been intended to work in the same direction: by temporarily decapitating a faction, it could help to defuse confrontations that threatened the order of the State.
In later decades when the threat of tyranny was remote, ostracism seems to have been used as a way to decide between radically opposed policies. For instance, in 443 BC Thucydides, son of Melesias (not to be confused with the historian of the same name) was ostracised. He led an aristocratic opposition to Athenian imperialism and in particular to Pericles' building program on the acropolis, which was funded by taxes created for the wars against the Achaemenid Empire. By expelling Thucydides the Athenian people sent a clear message about the direction of Athenian policy. Similar but more controversial claims have been made about the ostracism of Cimon in 461 BC.
The motives of individual voting citizens cannot, of course, be known. Many of the surviving ostraka name people otherwise unattested; such a vote may well record only someone the submitter disliked, cast in a moment of private spite. In this light, ostracism may be seen as a secular, civic variant of the Athenian curse tablets studied in scholarly literature under the Latin name "defixiones", in which small dolls were wrapped in lead sheets inscribed with curses and then buried, sometimes stuck through with nails for good measure.
In one anecdote about Aristides, known as "the Just", who was ostracised in 482 BC, an illiterate citizen, not recognising him, came up to ask him to write the name Aristides on his ostrakon. When Aristides asked why, the man replied it was because he was sick of hearing him being called "the Just". Perhaps merely the sense that someone had become too arrogant or prominent was enough to get his name onto an ostrakon. Ostracism may also have served as an open arena or outlet for those harbouring primal frustrations or political hostility towards intolerable or rising individuals of power, dissuading people from resorting to covert murder or assassination. On Gregory H. Padowitz's theory, ostracism was the alternative to murder, and ultimately beneficial for all parties: the unfortunate individual would live and get a second chance, and society would be spared the ugliness of feuds, civil war, political deadlock and killing.
The last ostracism, that of Hyperbolos in or near 417 BC, is elaborately narrated by Plutarch in three separate "lives": Hyperbolos is pictured urging the people to expel one of his rivals, but the rivals, Nicias and Alcibiades, laying aside their own hostility for a moment, use their combined influence to have him ostracised instead. According to Plutarch, the people then became disgusted with ostracism and abandoned the procedure forever.
In part ostracism lapsed as a procedure at the end of the fifth century because it was replaced by the "graphe paranomon", a regular court action under which a much larger number of politicians might be targeted, instead of just one a year as with ostracism, and with greater severity. But it may already have come to seem like an anachronism as factional alliances organised around important men became increasingly less significant in the later period, and power was more specifically located in the interaction of the individual speaker with the power of the assembly and the courts. The threat to the democratic system in the late fifth century came not from tyranny but from oligarchic coups, threats of which became prominent after two brief seizures of power, in 411 BC by "the Four Hundred" and in 404 BC by "the Thirty", which were not dependent on single powerful individuals. Ostracism was not an effective defence against the oligarchic threat and it was not so used.
Other cities are known to have set up forms of ostracism on the Athenian model, namely Megara, Miletos, Argos and Syracuse, Sicily. In the last of these it was referred to as "petalismos", because the names were written on olive leaves. Little is known about these institutions. Furthermore, pottery shards identified as "ostraka" have been found in Chersonesos Taurica, leading historians to the conclusion that a similar institution existed there as well, in spite of the silence of the ancient records on that count.
A similar modern practice is the recall election, in which the electoral body removes an elected officer from office before the end of their term.
Unlike under modern voting procedures, the Athenians did not have to adhere to a strict format when inscribing "ostraka". Many extant "ostraka" show that it was possible to write expletives, short epigrams or cryptic injunctions beside the name of the candidate without invalidating the vote.
The social psychologist Kipling Williams has written extensively on ostracism as a modern phenomenon. Williams defines ostracism as "any act or acts of ignoring and excluding of an individual or groups by an individual or a group". He suggests that the most common form of ostracism in a modern context is refusing to communicate with a person, who is thereby effectively ignored and excluded. The advent of the internet has made ostracism much easier to engage in, and conversely much more difficult to detect, with Williams and others describing this online ostracism as "cyberostracism". In email communication in particular, it is relatively easy for a person or organization to ignore and exclude a specific person simply by refusing to respond. Karen Douglas thus describes "unanswered emails" as constituting a form of cyberostracism, and similarly Eric Wesselmann and Kipling Williams describe "ignored emails" as a form of cyberostracism.
Williams and his colleagues have charted responses to ostracism in some five thousand cases, and found two distinctive patterns of response. The first is increased group-conformity, in a quest for re-admittance; the second is to become more provocative and hostile to the group, seeking attention rather than acceptance.
Research by social psychologists has demonstrated that being rejected from groups can have profound effects on a person (Williams, 2007; Smith, E. R., Mackie, D. M., & Claypool, H. M., 2014, "Social Psychology", Psychology Press, p. 409).
Research suggests that ostracism is a common reprisal strategy used by organizations in response to whistleblowing. Kipling Williams, in a survey of US whistleblowers, found that 100 percent reported post-whistleblowing ostracism. Alexander Brown similarly found that post-whistleblowing ostracism is a common response, and indeed describes ostracism as a form of "covert" reprisal, as it is normally so difficult to identify and investigate.
Qahr and ashti is a culture-specific Iranian form of personal shunning, most frequently of another family member in Iran. While modern Western concepts of ostracism are based upon enforcing conformity within a societally recognized group, qahr is a private (batin), family-oriented affair of conflict or display of anger that is never disclosed to the public at large, as to do so would be a breach of social etiquette.
"Qahr" is avoidance of a lower-ranking family member who has committed a perceived insult. It is one of several ritualized social customs of Iranian culture.
"Gozasht" means 'tolerance, understanding and a desire or willingness to forgive' and is an essential componant of Qahr and Ashti for both psychological needs of closure and cognition, as well as a culturally accepted source for practicing necessary religious requirements of "tawbah" "(repentance, see Koran 2:222)" and "du'a" (supplication).
Oration IV of Andocides purports to be a speech urging the ostracism of Alcibiades in 415 BC, but it is probably not authentic.
Omega
Omega (capital: Ω, lowercase: ω; Greek ὦ, later ὦ μέγα, Modern Greek ωμέγα) is the 24th and last letter of the Greek alphabet. In the Greek numeric system/Isopsephy (Gematria), it has a value of 800. The word literally means "great O" ("ō mega", mega meaning "great"), as opposed to Ο ο omicron, which means "little O" ("o mikron", micron meaning "little").
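As a rough illustration (not part of the ancient sources), the arithmetic of isopsephy can be sketched in a few lines of Python: each letter carries a fixed numeral value, and a word's value is the sum of its letters' values. The lookup table below is a hypothetical fragment covering only the letters of the (unaccented) word "ωμεγα" itself; a full table would run from alpha = 1 up to omega = 800.

```python
# A minimal sketch of Greek isopsephy. The table is a hypothetical
# fragment of the standard Greek numeral values; a complete table
# would cover the whole alphabet (alpha = 1 ... omega = 800).
LETTER_VALUES = {"α": 1, "γ": 3, "ε": 5, "μ": 40, "ω": 800}

def isopsephy(word: str) -> int:
    """Sum the numeral values of the (unaccented, lowercase) letters."""
    return sum(LETTER_VALUES[letter] for letter in word)

# "ωμεγα" (unaccented): 800 + 40 + 5 + 3 + 1 = 849
print(isopsephy("ωμεγα"))  # -> 849
```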
In phonetic terms, the Ancient Greek Ω is a long open-mid "o" [ɔː], comparable to the vowel of British English "raw". In Modern Greek, Ω represents the mid back rounded vowel [o], the same sound as omicron. The letter omega is transcribed "ō" or simply "o".
As the last letter of the Greek alphabet, Omega is often used to denote the last, the end, or the ultimate limit of a set, in contrast to alpha, the first letter of the Greek alphabet; see Alpha and Omega.
Ω was not part of the early (8th century BC) Greek alphabets. It was introduced in the late 7th century BC in the Ionian cities of Asia Minor to denote the long half-open [ɔː]. It is a variant of omicron (Ο), broken up at the side, with the edges subsequently turned outward.
The Dorian city of Knidos, as well as a few Aegean islands, namely Paros, Thasos and Melos, chose the exact opposite innovation, using a broken-up circle for the short /o/ and a closed circle for the long [ɔː].
The name Ωμέγα is Byzantine; in Classical Greek, the letter was called "ō" (ὦ), whereas the omicron was called "ou" (οὖ).
The modern lowercase shape goes back to the uncial form, a form that developed during the 3rd century BC in ancient handwriting on papyrus, from a flattened-out form of the letter that had its edges curved even further upward.
In addition to the Greek alphabet, Omega was also adopted into the early Cyrillic alphabet. See Cyrillic omega (Ѡ, ѡ). A Raetic variant is conjectured to be the origin of, or a parallel development to, the Elder Futhark rune ᛟ.
Omega was also adopted into the Latin alphabet, as a letter of the 1982 revision to the African reference alphabet. It has had little use. See Latin omega.
The uppercase letter Ω is used as a symbol in many fields; most familiarly, it denotes the ohm, the SI unit of electrical resistance.
The minuscule letter ω is likewise used as a symbol, for instance for angular frequency in physics and for the smallest infinite ordinal number in mathematics.
Unicode encodes additional stylistic variants of omega for use only as mathematical symbols. Stylized Greek text should be encoded using the normal Greek letters, with markup and formatting to indicate the style of the text.
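Since omega also exists in Unicode as the legacy compatibility character OHM SIGN (U+2126) alongside the ordinary Greek letters, a short Python sketch, using only the standard "unicodedata" module, can make the encoding distinction concrete:

```python
import unicodedata

# Three distinct Unicode code points that all render as an omega glyph.
for ch in ("\u03A9", "\u03C9", "\u2126"):
    print(f"U+{ord(ch):04X}  {ch}  {unicodedata.name(ch)}")
# U+03A9  Ω  GREEK CAPITAL LETTER OMEGA
# U+03C9  ω  GREEK SMALL LETTER OMEGA
# U+2126  Ω  OHM SIGN

# Canonical (NFC) normalization folds the legacy OHM SIGN into the
# ordinary Greek capital omega, which is why new text should use the
# Greek letter directly.
assert unicodedata.normalize("NFC", "\u2126") == "\u03A9"
```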
Operation Barbarossa
Operation Barbarossa (German: "Unternehmen Barbarossa") was the code name for the Axis invasion of the Soviet Union, which started on Sunday, 22 June 1941, during World War II. The operation put into action Nazi Germany's ideological goal of conquering the western Soviet Union so as to repopulate it with Germans. The Germans aimed to use some of the conquered peoples as slave labour for the Axis war effort, to acquire the oil reserves of the Caucasus and the agricultural resources of Soviet territories, and eventually, through extermination, enslavement, Germanization and mass deportation to Siberia, to remove the Slavic peoples and create "Lebensraum" for Germany.
In the two years leading up to the invasion, Germany and the Soviet Union signed political and economic pacts for strategic purposes. Nevertheless, the German High Command began planning an invasion of the Soviet Union in July 1940 (under the codename Operation Otto), which Adolf Hitler authorized on 18 December 1940. Over the course of the operation, about three million personnel of the Axis powers—the largest invasion force in the history of warfare—invaded the western Soviet Union along a front, with 600,000 motor vehicles and over 600,000 horses for non-combat operations. The offensive marked an escalation of World War II, both geographically and in the formation of the Allied coalition including the Soviet Union.
The operation opened up the Eastern Front, in which more forces were committed than in any other theater of war in history. The area saw some of the war's largest battles, most horrific atrocities, and highest casualties (for Soviet and Axis forces alike), all of which influenced the course of World War II and the subsequent history of the 20th century. The German armies eventually captured some five million Soviet Red Army troops, a majority of whom never returned alive. The Nazis deliberately starved to death, or otherwise killed, 3.3 million Soviet prisoners of war, and a vast number of civilians, as the "Hunger Plan" worked to solve German food shortages and exterminate the Slavic population through starvation. Mass shootings and gassing operations, carried out by the Nazis or willing collaborators, murdered over a million Soviet Jews as part of the Holocaust.
The failure of Operation Barbarossa reversed the fortunes of the Third Reich. Operationally, German forces achieved significant victories and occupied some of the most important economic areas of the Soviet Union (mainly in Ukraine) and inflicted, as well as sustained, heavy casualties. Despite these early successes, the German offensive stalled in the Battle of Moscow at the end of 1941, and the subsequent Soviet winter counteroffensive pushed German troops back. The Germans had confidently expected a quick collapse of Soviet resistance as in Poland, but the Red Army absorbed the German Wehrmacht's strongest blows and bogged it down in a war of attrition for which the Germans were unprepared. The Wehrmacht's diminished forces could no longer attack along the entire Eastern Front, and subsequent operations to retake the initiative and drive deep into Soviet territory—such as Case Blue in 1942 and Operation Citadel in 1943—eventually failed, which resulted in the Wehrmacht's retreat and collapse.
As early as 1925, Adolf Hitler vaguely declared in his political manifesto and autobiography "Mein Kampf" that he would invade the Soviet Union, asserting that the German people needed to secure "Lebensraum" ("living space") to ensure the survival of Germany for generations to come. On 10 February 1939, Hitler told his army commanders that the next war would be "purely a war of "Weltanschauungen" ["worldview"] ... totally a people's war, a racial war". On 23 November, once World War II had already started, Hitler declared that "racial war has broken out and this war shall determine who shall govern Europe, and with it, the world". The racial policy of Nazi Germany portrayed the Soviet Union (and all of Eastern Europe) as populated by non-Aryan "Untermenschen" ("sub-humans"), ruled by Jewish Bolshevik conspirators. Hitler claimed in "Mein Kampf" that Germany's destiny was to "turn to the East" as it did "six hundred years ago" (see "Ostsiedlung"). Accordingly, it was stated Nazi policy to kill, deport, or enslave the majority of Russian and other Slavic populations and repopulate the land with Germanic peoples, under the Generalplan Ost. The Nazis' belief in their ethnic superiority pervades official records and pseudoscientific articles in German periodicals, on topics such as "how to deal with alien populations".
While older histories tended to emphasize the notion of a "Clean Wehrmacht" upholding its honor in the face of Hitler's fanaticism, the historian Jürgen Förster notes that "In fact, the military commanders were caught up in the ideological character of the conflict, and involved in its implementation as willing participants." Before and during the invasion of the Soviet Union, German troops were heavily indoctrinated with anti-Bolshevik, anti-Semitic, and anti-Slavic ideology via movies, radio, lectures, books, and leaflets. Likening the Soviets to the forces of Genghis Khan, Hitler told Croatian military leader Slavko Kvaternik that the "Mongolian race" threatened Europe. Following the invasion, Wehrmacht officers told their soldiers to target people who were described as "Jewish Bolshevik subhumans", the "Mongol hordes", the "Asiatic flood", and the "Red beast". Nazi propaganda portrayed the war against the Soviet Union as both an ideological war between German National Socialism and Jewish Bolshevism, and a racial war between the disciplined Germans and the Jewish, Gypsy, and Slavic "Untermenschen". An 'order from the Führer' stated that the "Einsatzgruppen" were to execute all Soviet functionaries who were "less valuable Asiatics, Gypsies and Jews". Six months into the invasion of the Soviet Union, the "Einsatzgruppen" had already murdered in excess of 500,000 Soviet Jews, a figure greater than the number of Red Army soldiers killed in combat during that time. German army commanders cast the Jews as the major cause behind the "partisan struggle". The main guideline for German troops was "Where there's a partisan, there's a Jew, and where there's a Jew, there's a partisan", or "The partisan is where the Jew is". Many German troops viewed the war in Nazi terms and regarded their Soviet enemies as sub-human.
After the war began, the Nazis issued a ban on sexual relations between Germans and foreign slave workers. There were regulations enacted against the "Ost-Arbeiter" ("Eastern workers") that included the death penalty for sexual relations with a German. Heinrich Himmler, in his secret memorandum, "Reflections on the Treatment of Peoples of Alien Races in the East" (dated 25 May 1940), outlined the Nazi plans for the non-German populations in the East. Himmler believed the Germanization process in Eastern Europe would be complete when "in the East dwell only men with truly German, Germanic blood".
The Nazi secret plan "Generalplan Ost" ("General Plan for the East"), prepared in 1941 and confirmed in 1942, called for a "new order of ethnographical relations" in the territories occupied by Nazi Germany in Eastern Europe. It envisaged ethnic cleansing, executions, and enslavement of the populations of conquered countries, with very small percentages undergoing Germanization, expulsion into the depths of Russia, or other fates, while the conquered territories would be Germanized. The plan had two parts: the "Kleine Planung" ("small plan"), which covered actions to be taken during the war, and the "Große Planung" ("large plan"), which covered policies after the war was won, to be implemented gradually over 25 to 30 years.
A speech given by General Erich Hoepner demonstrates the dissemination of the Nazi racial plan, as he informed the 4th Panzer Group that the war against the Soviet Union was "an essential part of the German people's struggle for existence" ("Daseinskampf"), also referring to the imminent battle as the "old struggle of Germans against Slavs" and even stated, "the struggle must aim at the annihilation of today's Russia and must, therefore, be waged with unparalleled harshness". Hoepner also added that the Germans were fighting for "the defense of European culture against Moscovite–Asiatic inundation, and the repulse of Jewish Bolshevism ... No adherents of the present Russian-Bolshevik system are to be spared." Walther von Brauchitsch also told his subordinates that troops should view the war as a "struggle between two different races and [should] act with the necessary severity". Racial motivations were central to Nazi ideology and played a key role in planning for Operation Barbarossa since both Jews and communists were considered equivalent enemies of the Nazi state. Nazi imperialist ambitions rejected the common humanity of both groups, declaring the supreme struggle for "Lebensraum" to be a "Vernichtungskrieg" ("war of annihilation").
In August 1939, Germany and the Soviet Union signed a non-aggression pact in Moscow known as the Molotov–Ribbentrop Pact. A secret protocol to the pact outlined an agreement between Germany and the Soviet Union on the division of the eastern European border states between their respective "spheres of influence": the Soviet Union and Germany would partition Poland in the event of an invasion by Germany, and the Soviets would be allowed to overrun the Baltic states and Finland. On 23 August 1939 the rest of the world learned of this pact, but was unaware of the provisions to partition Poland. The pact stunned the world because of the parties' earlier mutual hostility and their conflicting ideologies. The conclusion of this pact was followed by the German invasion of Poland on 1 September that triggered the outbreak of World War II in Europe, then the Soviet invasion of Poland that led to the annexation of the eastern part of the country. As a result of the pact, Germany and the Soviet Union maintained reasonably strong diplomatic relations for two years and fostered an important economic relationship. The countries entered a trade pact in 1940, by which the Soviets received German military equipment and trade goods in exchange for raw materials, such as oil and wheat, to help the Nazis circumvent a British blockade of Germany.
Despite the parties' ostensibly cordial relations, each side was highly suspicious of the other's intentions. For instance, the Soviet invasion of Bukovina in June 1940 went beyond their sphere of influence as agreed with Germany. After Germany entered the Axis Pact with Japan and Italy, it began negotiations about a potential Soviet entry into the pact. After two days of negotiations in Berlin from 12 to 14 November 1940, Germany presented a written proposal for a Soviet entry into the Axis. On 25 November 1940, the Soviet Union offered a written counter-proposal to join the Axis if Germany would agree to refrain from interference in the Soviet Union's sphere of influence, but Germany did not respond. As both sides began colliding with each other in Eastern Europe, conflict appeared more likely, although they did sign a border and commercial agreement addressing several open issues in January 1941. According to historian Robert Service, Joseph Stalin was convinced that the overall military strength of the USSR was such that he had nothing to fear and anticipated an easy victory should Germany attack; moreover, Stalin believed that since the Germans were still fighting the British in the west, Hitler would be unlikely to open up a two-front war, and he consequently delayed the reconstruction of defensive fortifications in the border regions. When German soldiers swam across the Bug River to warn the Red Army of an impending attack, they were treated as enemy agents and shot. Some historians believe that Stalin, despite providing an amicable front to Hitler, did not wish to remain allied with Germany. Rather, Stalin may have intended to break with Germany and launch his own campaign against it, to be followed by one against the rest of Europe.
Stalin's reputation as a brutal dictator contributed both to the Nazis' justification of their assault and their faith in success; many competent and experienced military officers had been killed in the Great Purge of the 1930s, leaving the Red Army with a relatively inexperienced leadership compared to that of their German adversary. The Nazis often emphasized the Soviet regime's brutality when targeting the Slavs with propaganda. They also claimed that the Red Army was preparing to attack the Germans, and their own invasion was thus presented as a pre-emptive strike.
In the middle of 1940, following the rising tension between the Soviet Union and Germany over territories in the Balkans, an eventual invasion of the Soviet Union seemed the only solution to Hitler. While no concrete plans had yet been made, Hitler told one of his generals in June that the victories in Western Europe finally freed his hands for a showdown with Bolshevism. With the successful end to the campaign in France, General Erich Marcks was assigned the task of drawing up the initial invasion plans of the Soviet Union. The first battle plans were entitled "Operation Draft East" (colloquially known as the "Marcks Plan"). His report advocated the A-A line as the operational objective of any invasion of the Soviet Union. This assault would extend from the northern city of Arkhangelsk on the Arctic Sea through Gorky and Rostov to the port city of Astrakhan at the mouth of the Volga on the Caspian Sea. The report concluded that—once established—this military border would reduce the threat to Germany from attacks by enemy bombers.
Although Hitler was warned by his general staff that occupying "Western Russia" would create "more of a drain than a relief for Germany's economic situation", he anticipated compensatory benefits, such as the demobilization of entire divisions to relieve the acute labor shortage in German industry; the exploitation of Ukraine as a reliable and immense source of agricultural products; the use of forced labor to stimulate Germany's overall economy; and the expansion of territory to improve Germany's efforts to isolate the United Kingdom. Hitler was convinced that Britain would sue for peace once the Germans triumphed in the Soviet Union, and if they did not, he would use the resources available in the East to defeat the British Empire.
On 5 December 1940, Hitler received the final military plans for the invasion, on which the German High Command had been working since July 1940 under the codename "Operation Otto". Hitler, however, was dissatisfied with these plans and on 18 December issued Führer Directive 21, which called for a new battle plan, now code-named "Operation Barbarossa". The operation was named after the medieval Emperor Frederick Barbarossa of the Holy Roman Empire, a leader of the Third Crusade in the 12th century.
On 30 March 1941 the Barbarossa decree declared that the war would be one of extermination and advocated the eradication of all political and intellectual elites.
The invasion was set for 15 May 1941, though it was delayed for over a month to allow for further preparations and possibly better weather. (See Reasons for delay.)
According to a 1978 essay by German historian Andreas Hillgruber, the invasion plans drawn up by the German military elite were coloured by hubris stemming from the rapid defeat of France at the hands of the "invincible" Wehrmacht and by traditional German stereotypes of Russia as a primitive, backward "Asiatic" country. Red Army soldiers were considered brave and tough, but the officer corps was held in contempt. The leadership of the Wehrmacht paid little attention to politics, culture, and the considerable industrial capacity of the Soviet Union, in favour of a very narrow military view. Hillgruber argued that because these assumptions were shared by the entire military elite, Hitler was able to push through with a "war of annihilation" that would be waged in the most inhumane fashion possible with the complicity of "several military leaders", even though it was quite clear that this would be in violation of all accepted norms of warfare.
In autumn 1940, high-ranking German officials drafted a memorandum on the dangers of an invasion of the Soviet Union. They said Ukraine, Belorussia, and the Baltic States would end up as only a further economic burden for Germany. It was argued that the Soviets in their current bureaucratic form were harmless and that the occupation would not benefit Germany. Hitler disagreed with economists about the risks and told his right-hand man Hermann Göring, the chief of the Luftwaffe, that he would no longer listen to misgivings about the economic dangers of a war with Russia. It is speculated that this was passed on to General Georg Thomas, who had produced reports that predicted a net economic drain for Germany in the event of an invasion of the Soviet Union unless its economy was captured intact and the Caucasus oilfields seized in the first blow; Thomas revised his future report to fit Hitler's wishes. The Red Army's ineptitude in the Winter War against Finland in 1939–40 convinced Hitler of a quick victory within a few months. Neither Hitler nor the General Staff anticipated a long campaign lasting into the winter, and therefore adequate preparations, such as the distribution of warm clothing and winterization of vehicles and lubricants, were not made.
Beginning in March 1941, Göring's Green Folder laid out details for the exploitation of the Soviet economy after conquest. The Hunger Plan outlined how the entire urban populations of conquered territories were to be starved to death, thus creating an agricultural surplus to feed Germany and urban space for the German upper class. Nazi policy aimed to destroy the Soviet Union as a political entity in accordance with the geopolitical "Lebensraum" ideals for the benefit of future generations of the "Nordic master race". In 1941, Nazi ideologue Alfred Rosenberg, later appointed Reich Minister of the Occupied Eastern Territories, suggested that conquered Soviet territory should be administered in a series of "Reichskommissariate" ("Reich Commissionerships").
German military planners also researched Napoleon's failed invasion of Russia. In their calculations, they concluded that there was little danger of a large-scale retreat of the Red Army into the Russian interior, as it could not afford to give up the Baltic states, Ukraine, or the Moscow and Leningrad regions, all of which were vital to the Red Army for supply reasons and would thus have to be defended. Hitler and his generals disagreed on where Germany should focus its energy. Hitler, in many discussions with his generals, repeated his order of "Leningrad first, the Donbass second, Moscow third"; but he consistently emphasized the destruction of the Red Army over the achievement of specific terrain objectives. Hitler believed Moscow to be of "no great importance" in the defeat of the Soviet Union and instead believed victory would come with the destruction of the Red Army west of the capital, especially west of the Western Dvina and Dnieper rivers, and this belief pervaded the plan for Barbarossa. It later led to disputes between Hitler and several German senior officers, including Heinz Guderian, Gerhard Engel, Fedor von Bock and Franz Halder, who believed the decisive victory could only be delivered at Moscow. They were unable to sway Hitler, who had grown overconfident in his own military judgment as a result of the rapid successes in Western Europe.
Albert Speer said that oil had been a major factor in the decision to invade the Soviet Union. Hitler believed that Baku's oil resources were essential for the survival of the Third Reich, as a dearth of oil was a key vulnerability of Germany's military.
The Germans had begun massing troops near the Soviet border even before the campaign in the Balkans had finished. By the third week of February 1941, 680,000 German soldiers were gathered in assembly areas on the Romanian-Soviet border. In preparation for the attack, Hitler had secretly moved upwards of 3 million German troops and approximately 690,000 Axis soldiers to the Soviet border regions. Additional Luftwaffe operations included numerous aerial surveillance missions over Soviet territory many months before the attack.
Although the Soviet High Command was alarmed by this, Stalin's belief that the Third Reich was unlikely to attack only two years after signing the Molotov–Ribbentrop Pact resulted in a slow Soviet preparation. This fact aside, the Soviets did not entirely overlook the threat of their German neighbor. Well before the German invasion, Marshal Semyon Timoshenko referred to the Germans as the Soviet Union's "most important and strongest enemy", and as early as July 1940, the Red Army Chief of Staff, Boris Shaposhnikov, produced a preliminary three-pronged plan of attack for what a German invasion might look like, remarkably similar to the actual attack. Since April 1941, the Germans had begun setting up Operation Haifisch and Operation Harpune to substantiate their claims that Britain was the real target. These simulated preparations in Norway and the English Channel coast included activities such as ship concentrations, reconnaissance flights and training exercises.
The reasons for the postponement of Barbarossa from the initially planned date of 15 May to the actual invasion date of 22 June 1941 (a 38-day delay) are debated. The reason most commonly cited is the unforeseen contingency of invading Yugoslavia in April 1941. Historian Thomas B. Buell indicates that Finland and Romania, which were not involved in initial German planning, needed additional time to prepare to participate in the invasion. Buell adds that an unusually wet winter kept rivers at full flood until late spring, and that the floods may have discouraged an earlier attack even had one been possible before the end of the Balkans Campaign.
The importance of the delay is still debated. William Shirer argued that Hitler's Balkan Campaign had delayed the commencement of Barbarossa by several weeks and thereby jeopardized it. Many later historians argue that the 22 June start date was sufficient for the German offensive to reach Moscow by September. Antony Beevor wrote in 2012 about the delay caused by German attacks in the Balkans that "most [historians] accept that it made little difference" to the eventual outcome of Barbarossa.
The Germans deployed one independent regiment, one separate motorized training brigade and 153 divisions for Barbarossa, which included 104 infantry, 19 panzer and 15 motorized infantry divisions in three army groups, nine security divisions to operate in conquered territories, four divisions in Finland and two divisions as reserve under the direct control of OKH. These were equipped with 6,867 armored vehicles, of which 3,350–3,795 were tanks, 2,770–4,389 aircraft (that amounted to 65 percent of the Luftwaffe), 7,200–23,435 artillery pieces, 17,081 mortars, about 600,000 motor vehicles and 625,000–700,000 horses. Finland slated 14 divisions for the invasion, and Romania offered 13 divisions and eight brigades over the course of Barbarossa. The entire Axis forces, 3.8 million personnel, deployed across a front extending from the Arctic Ocean southward to the Black Sea, were all controlled by the OKH and organized into Army Norway, Army Group North, Army Group Center and Army Group South, alongside three "Luftflotten" (air fleets, the air force equivalent of army groups) that supported the army groups: Luftflotte 1 for North, Luftflotte 2 for Center and Luftflotte 4 for South.
Army Norway was to operate in far northern Scandinavia and bordering Soviet territories. Army Group North was to march through the Baltic states into northern Russia, either take or destroy the city of Leningrad and link up with Finnish forces. Army Group Center, the army group equipped with the most armour and air power, was to strike from Poland into Belorussia and the west-central regions of Russia proper, and advance to Smolensk and then Moscow. Army Group South was to strike the heavily populated and agricultural heartland of Ukraine, taking Kiev before continuing eastward over the steppes of southern USSR to the Volga with the aim of controlling the oil-rich Caucasus. Army Group South was deployed in two sections separated by a gap. The northern section, which contained the army group's only panzer group, was in southern Poland right next to Army Group Center, and the southern section was in Romania.
The German forces in the rear (mostly "Waffen-SS" and "Einsatzgruppen" units) were to operate in conquered territories to counter any partisan activity in areas they controlled, as well as to execute captured Soviet political commissars and Jews. On 17 June, Reich Main Security Office (RSHA) chief Reinhard Heydrich briefed around thirty to fifty "Einsatzgruppen" commanders on "the policy of eliminating Jews in Soviet territories, at least in general terms". While the "Einsatzgruppen" were assigned to the Wehrmacht's units, which provided them with supplies such as gasoline and food, they were controlled by the RSHA. The official plan for Barbarossa assumed that the army groups would be able to advance freely to their primary objectives simultaneously, without spreading thin, once they had won the border battles and destroyed the Red Army's forces in the border area.
In 1930, Mikhail Tukhachevsky, a prominent military theorist in tank warfare in the interwar period and later Marshal of the Soviet Union, forwarded a memo to the Kremlin that lobbied for colossal investment in the resources required for the mass production of weapons, pressing the case for "40,000 aircraft and 50,000 tanks". In the early 1930s, a modern operational doctrine for the Red Army was developed and promulgated in the 1936 Field Regulations in the form of the Deep Battle Concept. Defense expenditure also grew rapidly from just 12 percent of the gross national product in 1933 to 18 percent by 1940.
During Stalin's Great Purge in the late-1930s, which had not ended by the time of the German invasion on 22 June 1941, much of the officer corps of the Red Army was executed or imprisoned and their replacements, appointed by Stalin for political reasons, often lacked military competence. Of the five Marshals of the Soviet Union appointed in 1935, only Kliment Voroshilov and Semyon Budyonny survived Stalin's purge. Tukhachevsky was killed in 1937. Fifteen of 16 army commanders, 50 of the 57 corps commanders, 154 of the 186 divisional commanders, and 401 of 456 colonels were killed, and many other officers were dismissed. In total, about 30,000 Red Army personnel were executed. Stalin further underscored his control by reasserting the role of political commissars at the divisional level and below to oversee the political loyalty of the army to the regime. The commissars held a position equal to that of the commander of the unit they were overseeing. But in spite of efforts to ensure the political subservience of the armed forces, in the wake of Red Army's poor performance in Poland and in the Winter War, about 80 percent of the officers dismissed during the Great Purge were reinstated by 1941. Also, between January 1939 and May 1941, 161 new divisions were activated. Therefore, although about 75 percent of all the officers had been in their position for less than one year at the start of the German invasion of 1941, many of the short tenures can be attributed not only to the purge, but also to the rapid increase in creation of military units.
In the Soviet Union, speaking to his generals in December 1940, Stalin mentioned Hitler's references to an attack on the Soviet Union in "Mein Kampf" and Hitler's belief that the Red Army would need four years to ready itself. Stalin declared "we must be ready much earlier" and "we will try to delay the war for another two years". As early as August 1940, British intelligence had received hints of German plans to attack the Soviets only a week after Hitler informally approved the plans for "Barbarossa" and warned the Soviet Union accordingly. But Stalin's distrust of the British led him to ignore their warnings in the belief that they were a trick designed to bring the Soviet Union into the war on their side. In early 1941, Stalin's own intelligence services and American intelligence gave regular and repeated warnings of an impending German attack. Soviet spy Richard Sorge also gave Stalin the exact German launch date, but Sorge and other informers had previously given different invasion dates that passed peacefully before the actual invasion. Stalin acknowledged the possibility of an attack in general and therefore made significant preparations, but decided not to run the risk of provoking Hitler.
Beginning in July 1940, the Red Army General Staff developed war plans that identified the Wehrmacht as the most dangerous threat to the Soviet Union, and that in the case of a war with Germany, the Wehrmacht's main attack would come through the region north of the Pripyat Marshes into Belorussia, which later proved to be correct. Stalin disagreed, and in October he authorized the development of new plans that assumed a German attack would focus on the region south of Pripyat Marshes towards the economically vital regions in Ukraine. This became the basis for all subsequent Soviet war plans and the deployment of their armed forces in preparation for the German invasion.
In early 1941, Stalin authorized the State Defense Plan 1941 (DP-41), which, along with the Mobilization Plan 1941 (MP-41), called for the deployment of 186 divisions, as the first strategic echelon, in the four military districts of the western Soviet Union that faced the Axis territories; and for the deployment of another 51 divisions along the Dvina and Dnieper Rivers as the second strategic echelon under Stavka control, which in the case of a German invasion was tasked to spearhead a Soviet counteroffensive together with the remaining forces of the first echelon. But on 22 June 1941 the first echelon contained only 171 divisions, numbering 2.6–2.9 million men, and the second strategic echelon contained 57 divisions that were still mobilizing, most of them understrength. The second echelon went undetected by German intelligence until days after the invasion commenced, in most cases only when German ground forces ran into them.
At the start of the invasion, the manpower of the Soviet military force that had been mobilized was 5.3–5.5 million, and it was still increasing as the Soviet reserve force of 14 million, with at least basic military training, continued to mobilize. The Red Army was dispersed and still preparing when the invasion commenced. Their units were often separated and lacked adequate transportation. While transportation remained insufficient for Red Army forces, when Operation Barbarossa began they possessed some 33,000 pieces of artillery, a number far greater than the Germans had at their disposal.
The Soviet Union had some 23,000 tanks available, of which only 14,700 were combat-ready. Around 11,000 tanks were in the western military districts that faced the German invasion force. Hitler later declared to some of his generals, "If I had known about the Russian tank strength in 1941 I would not have attacked". However, maintenance and readiness standards were very poor; ammunition and radios were in short supply, and many armoured units lacked the trucks for supplies. The most advanced Soviet tank models – the KV-1 and T-34 – which were superior to all current German tanks, as well as to all designs still in development as of summer 1941, were not available in large numbers at the time the invasion commenced. Furthermore, in the autumn of 1939, the Soviets had disbanded their mechanized corps and partly dispersed their tanks to infantry divisions; but following their observation of the German campaign in France, in late 1940 they began to reorganize most of their armored assets back into mechanized corps with a target strength of 1,031 tanks each. But these large armoured formations were unwieldy, and moreover they were spread out in scattered garrisons, with their subordinate divisions up to apart. The reorganization was still in progress and incomplete when Barbarossa commenced. Soviet tank units were rarely well equipped, and they lacked training and logistical support. Units were sent into combat with no arrangements in place for refueling, ammunition resupply, or personnel replacement. Often, after a single engagement, units were destroyed or rendered ineffective. The Soviet numerical advantage in heavy equipment was thoroughly offset by the superior training and organization of the Wehrmacht.
The Soviet Air Force (VVS) held the numerical advantage, with a total of approximately 19,533 aircraft, which made it the largest air force in the world in the summer of 1941. About 7,133–9,100 of these were deployed in the five western military districts, and an additional 1,445 were under naval control.
Historians have debated whether Stalin was planning an invasion of German territory in the summer of 1941. The debate began in the late-1980s when Viktor Suvorov published a journal article and later the book "Icebreaker" in which he claimed that Stalin had seen the outbreak of war in Western Europe as an opportunity to spread communist revolutions throughout the continent, and that the Soviet military was being deployed for an imminent attack at the time of the German invasion. This view had also been advanced by former German generals following the war. Suvorov's thesis was fully or partially accepted by a limited number of historians, including Valeri Danilov, Joachim Hoffmann, Mikhail Meltyukhov, and Vladimir Nevezhin, and attracted public attention in Germany, Israel, and Russia. It has been strongly rejected by most historians, and "Icebreaker" is generally considered to be an "anti-Soviet tract" in Western countries. David Glantz and Gabriel Gorodetsky wrote books to rebut Suvorov's arguments. The majority of historians believe that Stalin was seeking to avoid war in 1941, as he believed that his military was not ready to fight the German forces.
At around 01:00 on 22 June 1941, the Soviet military districts in the border area were alerted by NKO Directive No. 1, issued late on the night of 21 June. It called on them to "bring all forces to combat readiness," but to "avoid provocative actions of any kind". It took up to two hours for several of the units subordinate to the Fronts to receive the order of the directive, and the majority did not receive it before the invasion commenced.
On 21 June, at 13:00 Army Group North received the codeword Düsseldorf, indicating Barbarossa would commence the next morning, and passed down its own codeword, Dortmund. At around 03:15 on 22 June 1941, the Axis Powers commenced the invasion of the Soviet Union with the bombing of major cities in Soviet-occupied Poland and an artillery barrage on Red Army defences on the entire front. Air-raids were conducted as far as Kronstadt near Leningrad, Ismail in Bessarabia, and Sevastopol in the Crimea. Meanwhile, ground troops crossed the border, accompanied in some locales by Lithuanian and Ukrainian fifth columnists. Roughly three million soldiers of the Wehrmacht went into action and faced slightly fewer Soviet troops at the border. Accompanying the German forces during the initial invasion were Finnish and Romanian units as well.
At around noon, the news of the invasion was broadcast to the population by Soviet foreign minister Vyacheslav Molotov: "... Without a declaration of war, German forces fell on our country, attacked our frontiers in many places ... The Red Army and the whole nation will wage a victorious Patriotic War for our beloved country, for honour, for liberty ... Our cause is just. The enemy will be beaten. Victory will be ours!" By calling upon the population's devotion to their nation rather than the Party, Molotov struck a patriotic chord that helped a stunned people absorb the shattering news. Within the first few days of the invasion, the Soviet High Command and Red Army were extensively reorganized so as to place them on the necessary war footing. Stalin did not address the nation about the German invasion until 3 July, when he also called for a "Patriotic War ... of the entire Soviet people".
In Germany, on the morning of 22 June, Nazi propaganda minister Joseph Goebbels announced the invasion to the waking nation in a radio broadcast with Hitler's words: "At this moment a march is taking place that, for its extent, compares with the greatest the world has ever seen. I have decided today to place the fate and future of the Reich and our people in the hands of our soldiers. May God aid us, especially in this fight!" Later the same morning, Hitler proclaimed to his colleagues, "Before three months have passed, we shall witness a collapse of Russia, the like of which has never been seen in history." Hitler also addressed the German people via the radio, presenting himself as a man of peace, who reluctantly had to attack the Soviet Union. Following the invasion, Goebbels openly spoke of a "European crusade against Bolshevism".
The initial momentum of the German ground and air attack completely destroyed the Soviet organizational command and control within the first few hours, paralyzing every level of command from the infantry platoon to the Soviet High Command in Moscow. Moscow not only failed to grasp the magnitude of the catastrophe that confronted the Soviet forces in the border area, but Stalin's first reaction was also disbelief. At around 07:15, Stalin issued NKO Directive No. 2, which announced the invasion to the Soviet Armed Forces and called on them to attack Axis forces wherever they had violated the borders and to launch air strikes into the border regions of German territory. At around 09:15, Stalin issued NKO Directive No. 3, signed by Marshal Semyon Timoshenko, which now called for a general counteroffensive on the entire front "without any regard for borders" that both men hoped would sweep the enemy from Soviet territory. Stalin's order, which Timoshenko authorized, was not based on a realistic appraisal of the military situation at hand, but commanders passed it along for fear of retribution if they failed to obey; several days passed before the Soviet leadership became aware of the enormity of the opening defeat.
Luftwaffe reconnaissance units plotted Soviet troop concentrations, supply dumps and airfields, and marked them down for destruction. Additional Luftwaffe attacks were carried out against Soviet command and control centers in order to disrupt the mobilization and organization of Soviet forces. In contrast, Soviet artillery observers based at the border area had been under the strictest instructions not to open fire on German aircraft prior to the invasion. One plausible reason given for the Soviet hesitation to return fire was Stalin's initial belief that the assault was launched without Hitler's authorization. Significant amounts of Soviet territory were lost along with Red Army forces as a result; it took several days before Stalin comprehended the magnitude of the calamity. The Luftwaffe reportedly destroyed 1,489 aircraft on the first day of the invasion and over 3,100 during the first three days. Hermann Göring, Minister of Aviation and Commander-in-Chief of the Luftwaffe, distrusted the reports and ordered the figure checked. Luftwaffe staffs surveyed the wreckage on Soviet airfields, and their original figure proved conservative, as over 2,000 Soviet aircraft were estimated to have been destroyed on the first day of the invasion. In reality, Soviet losses were likely higher; a Soviet archival document recorded the loss of 3,922 Soviet aircraft in the first three days against an estimated loss of 78 German aircraft. The Luftwaffe reported the loss of only 35 aircraft on the first day of combat. A document from the German Federal Archives puts the Luftwaffe's loss at 63 aircraft for the first day.
By the end of the first week, the Luftwaffe had achieved air supremacy over the battlefields of all the army groups, but was unable to effect this air dominance over the vast expanse of the western Soviet Union. According to the war diaries of the German High Command, the Luftwaffe by 5 July had lost 491 aircraft with 316 more damaged, leaving it with only about 70 percent of the strength it had at the start of the invasion.
On 22 June, Army Group North attacked the Soviet Northwestern Front and broke through its 8th and 11th Armies. The Soviets immediately launched a powerful counterattack against the German 4th Panzer Group with the Soviet 3rd and 12th Mechanized Corps, but the Soviet attack was defeated. On 25 June, the 8th and 11th Armies were ordered to withdraw to the Western Dvina River, where it was planned that they would meet up with the 21st Mechanized Corps and the 22nd and 27th Armies. However, on 26 June, Erich von Manstein's LVI Panzer Corps reached the river first and secured a bridgehead across it. The Northwestern Front was forced to abandon the river defenses, and on 29 June Stavka ordered the Front to withdraw to the Stalin Line on the approaches to Leningrad. On 2 July, Army Group North began its attack on the Stalin Line with its 4th Panzer Group, and on 8 July captured Pskov, devastating the defenses of the Stalin Line and reaching Leningrad oblast. The 4th Panzer Group had advanced about since the start of the invasion and was now only about from its primary objective, Leningrad. On 9 July it began its attack towards the Soviet defenses along the Luga River in Leningrad oblast.
The northern section of Army Group South faced the Southwestern Front, which had the largest concentration of Soviet forces, and the southern section faced the Southern Front. In addition, the Pripyat Marshes and the Carpathian Mountains posed a serious challenge to the army group's northern and southern sections respectively. On 22 June, only the northern section of Army Group South attacked, but the terrain impeded their assault, giving the Soviet defenders ample time to react. The German 1st Panzer Group and 6th Army attacked and broke through the Soviet 5th Army. Starting on the night of 23 June, the Soviet 22nd and 15th Mechanized Corps attacked the flanks of the 1st Panzer Group from north and south respectively. Although the attacks were intended to be concerted, Soviet tank units were sent in piecemeal due to poor coordination. The 22nd Mechanized Corps ran into the 1st Panzer Army's III Motorized Corps and was decimated, and its commander killed. The 1st Panzer Group bypassed much of the 15th Mechanized Corps, which engaged the German 6th Army's 297th Infantry Division, where it was defeated by antitank fire and Luftwaffe attacks. On 26 June, the Soviets launched another counterattack on the 1st Panzer Group from north and south simultaneously with the 9th, 19th and 8th Mechanized Corps, which altogether fielded 1,649 tanks, supported by the remnants of the 15th Mechanized Corps. The battle lasted for four days, ending in the defeat of the Soviet tank units. On 30 June Stavka ordered the remaining forces of the Southwestern Front to withdraw to the Stalin Line, where it would defend the approaches to Kiev.
On 2 July, the southern section of Army Group South – the Romanian 3rd and 4th Armies, alongside the German 11th Army – invaded Soviet Moldavia, which was defended by the Southern Front. Counterattacks by the Front's 2nd Mechanized Corps and 9th Army were defeated, but on 9 July the Axis advance stalled along the defenses of the Soviet 18th Army between the Prut and Dniester Rivers.
In the opening hours of the invasion, the Luftwaffe destroyed the Western Front's air force on the ground and, with the aid of the Abwehr and its supporting anti-communist fifth columns operating in the Soviet rear, paralyzed the Front's communication lines, in particular cutting off the Soviet 4th Army headquarters from the headquarters above and below it. On the same day, the 2nd Panzer Group crossed the Bug River, broke through the 4th Army, bypassed Brest Fortress, and pressed on towards Minsk, while the 3rd Panzer Group bypassed most of the 3rd Army and pressed on towards Vilnius. Simultaneously, the German 4th and 9th Armies engaged the Western Front forces in the environs of Białystok. On the order of Dmitry Pavlov, the commander of the Western Front, the 6th and 11th Mechanized Corps and the 6th Cavalry Corps launched a strong counterstrike towards Grodno on 24–25 June in hopes of destroying the 3rd Panzer Group. However, the 3rd Panzer Group had already moved on, with its forward units reaching Vilnius on the evening of 23 June, and the Western Front's armoured counterattack instead ran into infantry and antitank fire from the V Army Corps of the German 9th Army, supported by Luftwaffe air attacks. By the night of 25 June, the Soviet counterattack was defeated, and the commander of the 6th Cavalry Corps was captured. The same night, Pavlov ordered all the remnants of the Western Front to withdraw via Slonim towards Minsk. Subsequent counterattacks to buy time for the withdrawal were launched against the German forces, but all of them failed. On 27 June, the 2nd and 3rd Panzer Groups met near Minsk and captured the city the next day, completing the encirclement of almost all of the Western Front in two pockets: one around Białystok and another west of Minsk. The Germans destroyed the Soviet 3rd and 10th Armies while inflicting serious losses on the 4th, 11th and 13th Armies, and reported the capture of 324,000 Soviet troops, 3,300 tanks and 1,800 artillery pieces.
A Soviet directive was issued on 29 June to combat the mass panic rampant among the civilians and the armed forces personnel. The order stipulated swift, severe measures against anyone inciting panic or displaying cowardice. The NKVD worked with commissars and military commanders to scour possible withdrawal routes of soldiers retreating without military authorization. Field expedient general courts were established to deal with civilians spreading rumors and military deserters. On 30 June, Stalin relieved Pavlov of his command, and on 22 July tried and executed him along with many members of his staff on charges of "cowardice" and "criminal incompetence".
On 29 June, Hitler, through the Commander-in-Chief of the German Army Walther von Brauchitsch, instructed the commander of Army Group Center Fedor von Bock to halt the advance of his panzers until the infantry formations liquidating the pockets had caught up. But the commander of the 2nd Panzer Group Heinz Guderian, with the tacit support of Fedor von Bock and the chief of OKH Franz Halder, ignored the instruction and pushed on eastward towards Bobruisk, albeit reporting the advance as a reconnaissance-in-force. He also personally conducted an aerial inspection of the Minsk-Białystok pocket on 30 June and concluded that his panzer group was not needed to contain it, since Hermann Hoth's 3rd Panzer Group was already involved in the Minsk pocket. On the same day, some of the infantry corps of the 9th and 4th Armies, having sufficiently liquidated the Białystok pocket, resumed their march eastward to catch up with the panzer groups. On 1 July, Fedor von Bock ordered the panzer groups to resume their full offensive eastward on the morning of 3 July. But Brauchitsch, upholding Hitler's instruction, and Halder, unwillingly going along with it, opposed Bock's order. However, Bock insisted on the order, stating that it would be irresponsible to reverse orders already issued. The panzer groups resumed their offensive on 2 July, before the infantry formations had sufficiently caught up.
During German-Finnish negotiations, Finland had demanded to remain neutral unless the Soviet Union attacked first. Germany therefore sought to provoke the Soviet Union into an attack on Finland. After Germany launched Barbarossa on 22 June, German aircraft used Finnish air bases to attack Soviet positions. The same day, the Germans launched Operation Rentier and occupied the Petsamo Province at the Finnish-Soviet border. Simultaneously, Finland proceeded to remilitarize the neutral Åland Islands. Despite these actions, the Finnish government insisted via diplomatic channels that it remained a neutral party, but the Soviet leadership already viewed Finland as an ally of Germany. Subsequently, the Soviets launched a massive bombing attack on 25 June against all major Finnish cities and industrial centers, including Helsinki, Turku and Lahti. During a night session on the same day, the Finnish parliament decided to go to war against the Soviet Union.
Finland was divided into two operational zones. Northern Finland was the staging area for Army Norway. Its goal was to execute a two-pronged pincer movement on the strategic port of Murmansk, named Operation Silver Fox. Southern Finland was still under the responsibility of the Finnish Army. The goal of the Finnish forces was, at first, to recapture Finnish Karelia at Lake Ladoga as well as the Karelian Isthmus, which included Finland's second largest city Viipuri.
On 2 July and through the next six days, a rainstorm typical of Belarusian summers slowed the progress of the panzers of Army Group Center, and Soviet defences stiffened. The delays gave the Soviets time to organize a massive counterattack against Army Group Center. The army group's ultimate objective was Smolensk, which commanded the road to Moscow. Facing the Germans was an old Soviet defensive line held by six armies. On 6 July, the Soviets launched a massive counter-attack using the V and VII Mechanized Corps of the 20th Army, which collided with the German 39th and 47th Panzer Corps in a battle where the Red Army lost 832 tanks of the 2,000 employed during five days of ferocious fighting. The Germans defeated this counterattack thanks largely to the coincidental presence of the Luftwaffe's only squadron of tank-busting aircraft. The 2nd Panzer Group crossed the Dnieper River and closed in on Smolensk from the south while the 3rd Panzer Group, after defeating the Soviet counterattack, closed on Smolensk from the north. Trapped between their pincers were three Soviet armies. The 29th Motorized Division captured Smolensk on 16 July, yet a gap remained in the encirclement between the two pincers of Army Group Center. On 18 July, the panzer groups came to within of closing the gap, but the trap did not finally close until 5 August, when upwards of 300,000 Red Army soldiers had been captured and 3,205 Soviet tanks were destroyed. Large numbers of Red Army soldiers escaped to stand between the Germans and Moscow as resistance continued.
Four weeks into the campaign, the Germans realized they had grossly underestimated Soviet strength. The German troops had used up their initial supplies, and General Bock quickly came to the conclusion that not only had the Red Army offered stiff opposition, but German difficulties were also due to logistical problems with reinforcements and provisions. Operations were now slowed down to allow for resupply; the delay was to be used to adapt strategy to the new situation. Hitler by now had lost faith in battles of encirclement, as large numbers of Soviet soldiers had escaped the pincers. He now believed he could defeat the Soviet state by economic means, depriving it of the industrial capacity to continue the war. That meant seizing the industrial center of Kharkov, the Donbass and the oil fields of the Caucasus in the south, and the speedy capture of Leningrad, a major center of military production, in the north.
Chief of the OKH, General Franz Halder, Fedor von Bock, the commander of Army Group Center, and almost all the German generals involved in Operation Barbarossa argued vehemently in favor of continuing the all-out drive toward Moscow. Besides the psychological importance of capturing the Soviet capital, the generals pointed out that Moscow was a major center of arms production, the center of the Soviet communications system and an important transport hub. Intelligence reports indicated that the bulk of the Red Army was deployed near Moscow under Semyon Timoshenko for the defense of the capital. Panzer commander Heinz Guderian was sent to Hitler by Bock and Halder to argue their case for continuing the assault against Moscow, but Hitler issued an order through Guderian (bypassing Bock and Halder) to send Army Group Center's tanks to the north and south, temporarily halting the drive to Moscow. Convinced by Hitler's argument, Guderian returned to his commanding officers as a convert to the Führer's plan, which earned him their disdain.
On 29 June, Army Norway launched its effort to capture Murmansk in a pincer attack. The northern pincer, conducted by Mountain Corps Norway, approached Murmansk directly by crossing the border at Petsamo. However, in mid-July, after securing the neck of the Rybachy Peninsula and advancing to the Litsa River, the German advance was stopped by heavy resistance from the Soviet 14th Army. Renewed attacks led to nothing, and this front became a stalemate for the remainder of Barbarossa.
The second pincer attack began on 1 July with the German XXXVI Corps and Finnish III Corps slated to recapture the Salla region for Finland and then proceed eastwards to cut the Murmansk railway near Kandalaksha. The German units had great difficulty dealing with the Arctic conditions. After heavy fighting, Salla was taken on 8 July. To keep up the momentum, the German-Finnish forces advanced eastwards until they were stopped at the town of Kayraly by Soviet resistance. Further south, the Finnish III Corps made an independent effort to reach the Murmansk railway through the Arctic terrain. Facing only one division of the Soviet 7th Army, it was able to make rapid headway. On 7 August it captured Kestenga while reaching the outskirts of Ukhta. Large Red Army reinforcements then prevented further gains on both fronts, and the German-Finnish force had to go onto the defensive.
The Finnish plan in the south in Karelia was to advance as swiftly as possible to Lake Ladoga, cutting the Soviet forces in half. Then the Finnish territories east of Lake Ladoga were to be recaptured before the advance along the Karelian Isthmus, including the recapture of Viipuri, commenced. The Finnish attack was launched on 10 July. The Army of Karelia held a numerical advantage versus the Soviet defenders of the 7th Army and 23rd Army, so it could advance swiftly. The important road junction at Loimola was captured on 14 July. By 16 July, the first Finnish units reached Lake Ladoga at Koirinoja, achieving the goal of splitting the Soviet forces. During the rest of July, the Army of Karelia advanced further southeast into Karelia, coming to a halt at the former Finnish-Soviet border at Mansila.
With the Soviet forces cut in half, the attack on the Karelian Isthmus could commence. The Finnish army attempted to encircle large Soviet formations at Sortavala and Hiitola by advancing to the western shores of Lake Ladoga. By mid-August the encirclement had succeeded and both towns were taken, but many Soviet formations were able to evacuate by sea. Further west, the attack on Viipuri was launched. With Soviet resistance breaking down, the Finns were able to encircle Viipuri by advancing to the Vuoksi River. The city itself was taken on 30 August, along with a broad advance on the rest of the Karelian Isthmus. By the beginning of September, Finland had restored its pre-Winter War borders.
By mid-July, the German forces had advanced within a few kilometers of Kiev below the Pripyat Marshes. The 1st Panzer Group then went south, while the 17th Army struck east and trapped three Soviet armies near Uman. As the Germans eliminated the pocket, the tanks turned north and crossed the Dnieper. Meanwhile, the 2nd Panzer Group, diverted from Army Group Center, had crossed the Desna River with 2nd Army on its right flank. The two panzer armies now trapped four Soviet armies and parts of two others.
By August, as the serviceability and the quantity of the Luftwaffe's inventory steadily diminished due to combat, demand for air support only increased as the VVS recovered. The Luftwaffe found itself struggling to maintain local air superiority. With the onset of bad weather in October, the Luftwaffe was on several occasions forced to halt nearly all aerial operations. The VVS, although faced with the same weather difficulties, had a clear advantage thanks to the prewar experience with cold-weather flying, and the fact that they were operating from intact airbases and airports. By December, the VVS had matched the Luftwaffe and was even pressing to achieve air superiority over the battlefields.
For its final attack on Leningrad, the 4th Panzer Group was reinforced by tanks from Army Group Center. On 8 August, the Panzers broke through the Soviet defences. By the end of August, 4th Panzer Group had penetrated to within of Leningrad. The Finns had pushed southeast on both sides of Lake Ladoga to reach the old Finnish-Soviet frontier.
The Germans attacked Leningrad in August 1941; in the following three "black months" of 1941, 400,000 residents of the city worked to build the city's fortifications as fighting continued, while 160,000 others joined the ranks of the Red Army. Nowhere was the Soviet "levée en masse" spirit stronger in resisting the Germans than at Leningrad where reserve troops and freshly improvised "Narodnoe Opolcheniye" units, consisting of worker battalions and even schoolboy formations, joined in digging trenches as they prepared to defend the city. On 7 September, the German 20th Motorized Division seized Shlisselburg, cutting off all land routes to Leningrad. The Germans severed the railroads to Moscow and captured the railroad to Murmansk with Finnish assistance to inaugurate the start of a siege that would last for over two years.
At this stage, Hitler ordered the final destruction of Leningrad with no prisoners taken, and on 9 September, Army Group North began the final push. Within ten days it had advanced within of the city. However, the push over the last proved very slow and casualties mounted. Hitler, now out of patience, ordered that Leningrad should not be stormed, but rather starved into submission. Along these lines, the OKH issued Directive No. 1a 1601/41 on 22 September 1941, which accorded with Hitler's plans. Deprived of its Panzer forces, Army Group Center remained static and was subjected to numerous Soviet counterattacks, in particular the Yelnya Offensive, in which the Germans suffered their first major tactical defeat since their invasion began; this Red Army victory also provided an important boost to Soviet morale. These attacks prompted Hitler to concentrate his attention back to Army Group Center and its drive on Moscow. The Germans ordered the 3rd and 4th Panzer Armies to break off their Siege of Leningrad and support Army Group Center in its attack on Moscow.
Before an attack on Moscow could begin, operations in Kiev needed to be finished. Half of Army Group Center had swung to the south into the rear of the Kiev position, while Army Group South moved to the north from its Dnieper bridgehead. The encirclement of Soviet forces in Kiev was achieved on 16 September. A battle ensued in which the Soviets were hammered with tanks, artillery, and aerial bombardment. After ten days of vicious fighting, the Germans claimed 665,000 Soviet soldiers captured, although the real figure is probably around 220,000 prisoners. Soviet losses were 452,720 men, 3,867 artillery pieces and mortars from 43 divisions of the 5th, 21st, 26th, and 37th Soviet Armies. Despite the exhaustion and losses facing some German units (upwards of 75 percent of their men) from the intense fighting, the massive defeat of the Soviets at Kiev and the Red Army losses during the first three months of the assault contributed to the German assumption that Operation Typhoon (the attack on Moscow) could still succeed.
After operations at Kiev were successfully concluded, Army Group South advanced east and south to capture the industrial Donbass region and the Crimea. The Soviet Southern Front launched an attack on 26 September with two armies on the northern shores of the Sea of Azov against elements of the German 11th Army, which was simultaneously advancing into the Crimea. On 1 October the 1st Panzer Army under Ewald von Kleist swept south to encircle the two attacking Soviet armies. By 7 October the Soviet 9th and 18th Armies were isolated and four days later they had been annihilated. The Soviet defeat was total; 106,332 men captured, 212 tanks destroyed or captured in the pocket alone as well as 766 artillery pieces of all types. The death or capture of two-thirds of all Southern Front troops in four days unhinged the Front's left flank, allowing the Germans to capture Kharkov on 24 October. Kleist's 1st Panzer Army took the Donbass region that same month.
In central Finland, the German-Finnish advance on the Murmansk railway had been resumed at Kayraly. A large encirclement from the north and the south trapped the defending Soviet corps and allowed XXXVI Corps to advance further to the east. In early September it reached the old 1939 Soviet border fortifications. On 6 September the first defence line at the Voyta River was breached, but further attacks against the main line at the Verman River failed. With Army Norway switching its main effort further south, the front stalemated in this sector. Further south, the Finnish III Corps launched a new offensive towards the Murmansk railway on 30 October, bolstered by fresh reinforcements from Army Norway. Against Soviet resistance, it was able to come within 30 km (19 mi) of the railway before the Finnish High Command ordered a stop to all offensive operations in the sector on 17 November. The United States of America applied diplomatic pressure on Finland not to disrupt Allied aid shipments to the Soviet Union, which caused the Finnish government to halt the advance on the Murmansk railway. With the Finnish refusal to conduct further offensive operations and German inability to do so alone, the German-Finnish effort in central and northern Finland came to an end.
Germany had pressured Finland to enlarge its offensive activities in Karelia to aid the Germans in their Leningrad operation. Finnish attacks on Leningrad itself remained limited; Finland stopped its advance just short of the city and had no intention of attacking it. The situation was different in eastern Karelia. The Finnish government agreed to restart its offensive into Soviet Karelia to reach Lake Onega and the Svir River. On 4 September this new drive was launched on a broad front. Although reinforced by fresh reserve troops, the Soviet defenders of the 7th Army, weakened by heavy losses elsewhere on the front, were not able to resist the Finnish advance. Olonets was taken on 5 September. On 7 September, Finnish forward units reached the Svir River. Petrozavodsk, the capital city of the Karelo-Finnish SSR, fell on 1 October. From there the Army of Karelia moved north along the shores of Lake Onega to secure the remaining area west of Lake Onega, while simultaneously establishing a defensive position along the Svir River. Slowed by winter's onset, they nevertheless continued to advance slowly during the following weeks. Medvezhyegorsk was captured on 5 December and Povenets fell the next day. On 7 December Finland called a stop to all offensive operations, going onto the defensive.
After Kiev, the Red Army no longer outnumbered the Germans and there were no more trained reserves directly available. To defend Moscow, Stalin could field 800,000 men in 83 divisions, but no more than 25 divisions were fully effective. Operation Typhoon, the drive to Moscow, began on 30 September 1941. In front of Army Group Center was a series of elaborate defence lines, the first centred on Vyazma and the second on Mozhaysk. Russian peasants began fleeing ahead of the advancing German units, burning their harvested crops, driving their cattle away, and destroying buildings in their villages as part of a scorched-earth policy designed to deny the Nazi war machine needed supplies and foodstuffs.
The first blow took the Soviets completely by surprise when the 2nd Panzer Group, returning from the south, took Oryol, just south of the Soviet first main defense line. Three days later, the Panzers pushed on to Bryansk, while the 2nd Army attacked from the west. The Soviet 3rd and 13th Armies were now encircled. To the north, the 3rd and 4th Panzer Armies attacked Vyazma, trapping the 19th, 20th, 24th and 32nd Armies. Moscow's first line of defense had been shattered. The pocket eventually yielded over 500,000 Soviet prisoners, bringing the tally since the start of the invasion to three million. The Soviets now had only 90,000 men and 150 tanks left for the defense of Moscow.
The German government now publicly predicted the imminent capture of Moscow and convinced foreign correspondents of an impending Soviet collapse. On 13 October, the 3rd Panzer Group penetrated to within of the capital. Martial law was declared in Moscow. Almost from the beginning of Operation Typhoon, however, the weather worsened. Temperatures fell while rain continued to fall, turning the unpaved road network into mud and slowing the German advance on Moscow. Additional snows fell, followed by more rain, creating a glutinous mud that German tanks had difficulty traversing, whereas the Soviet T-34, with its wider tread, was better suited to negotiating it. At the same time, the supply situation for the Germans rapidly deteriorated. On 31 October, the German Army High Command ordered a halt to Operation Typhoon while the armies were reorganized. The pause gave the Soviets, far better supplied, time to consolidate their positions and organize formations of newly activated reservists. In little over a month, the Soviets organized eleven new armies that included 30 divisions of Siberian troops. These had been freed from the Soviet Far East after Soviet intelligence assured Stalin that there was no longer a threat from the Japanese. During October and November 1941, over 1,000 tanks and 1,000 aircraft arrived along with the Siberian forces to assist in defending the city.
With the ground hardening due to the cold weather, the Germans resumed the attack on Moscow on 15 November. Although the troops themselves were now able to advance again, there had been no improvement in the supply situation. Facing the Germans were the 5th, 16th, 30th, 43rd, 49th, and 50th Soviet Armies. The Germans intended to move the 3rd and 4th Panzer Armies across the Moscow Canal and envelop Moscow from the northeast. The 2nd Panzer Group would attack Tula and then close on Moscow from the south. As the Soviets reacted to their flanks, the 4th Army would attack the center. In two weeks of fighting, lacking sufficient fuel and ammunition, the Germans slowly crept towards Moscow. In the south, the 2nd Panzer Group was being blocked. On 22 November, Soviet Siberian units, augmented by the 49th and 50th Soviet Armies, attacked the 2nd Panzer Group and inflicted a defeat on the Germans. The 4th Panzer Group pushed the Soviet 16th Army back, however, and succeeded in crossing the Moscow Canal in an attempt to encircle Moscow.
On 2 December, part of the 258th Infantry Division advanced to within of Moscow. They were so close that German officers claimed they could see the spires of the Kremlin, but by then the first blizzards had begun. A reconnaissance battalion managed to reach the town of Khimki, only about from the Soviet capital. It captured the bridge over the Moscow-Volga Canal as well as the railway station, which marked the easternmost advance of German forces. In spite of the progress made, the Wehrmacht was not equipped for such severe winter warfare. The Soviet army was better adapted to fighting in winter conditions, but faced production shortages of winter clothing. The German forces fared worse, with deep snow further hindering equipment and mobility. Weather conditions had largely grounded the Luftwaffe, preventing large-scale air operations. Newly created Soviet units near Moscow now numbered over 500,000 men, and on 5 December, they launched a massive counterattack as part of the Soviet winter counteroffensive. The offensive halted on 7 January 1942, after having pushed the German armies back 100–250 km (62–155 mi) from Moscow. The Wehrmacht had lost the Battle for Moscow, and the invasion had cost the German Army over 830,000 men.
With the failure of the Battle of Moscow, all German plans for a quick defeat of the Soviet Union had to be revised. The Soviet counter-offensives in December 1941 caused heavy casualties on both sides, but ultimately eliminated the German threat to Moscow. Attempting to explain matters, Hitler issued Directive No. 39, which cited the early onset of winter and the severe cold as the reason for the German failure, whereas the main reason was German military unpreparedness for such a giant enterprise. On 22 June 1941, the Wehrmacht as a whole had 209 divisions at its disposal, 163 of which were offensively capable. On 31 March 1942, less than one year after the invasion of the Soviet Union, the Wehrmacht was reduced to fielding 58 offensively capable divisions. The Red Army's tenacity and ability to counter-attack effectively took the Germans as much by surprise as their own initial attack had the Soviets. Spurred on by the successful defense and in an effort to imitate the Germans, Stalin wanted to begin his own counteroffensive, not just against the German forces around Moscow, but against their armies in the north and south. Anger over the failed German offensives caused Hitler to relieve Field Marshal Walther von Brauchitsch of command, and in his place Hitler assumed personal control of the German Army on 19 December 1941.
The Soviet Union had suffered heavily from the conflict, losing huge tracts of territory and vast quantities of men and materiel. Nonetheless, the Red Army proved capable of countering the German offensives, particularly as the Germans began experiencing irreplaceable shortages in manpower, armaments, provisions, and fuel. Despite the rapid relocation of Red Army armaments production east of the Urals and a dramatic increase of production in 1942, especially of armour, new aircraft types and artillery, the Wehrmacht was able to mount another large-scale offensive in July 1942, although on a much narrower front than in the previous summer. Hitler, having realized that Germany's oil supply was "severely depleted", aimed to capture the oil fields of Baku in an offensive codenamed Case Blue. Again, the Germans quickly overran great expanses of Soviet territory, but they failed to achieve their ultimate goals in the wake of their defeat at the Battle of Stalingrad in February 1943.
By 1943, Soviet armaments production was fully operational and increasingly outproducing the German war economy. The final major German offensive in the Eastern theater of the Second World War took place during July–August 1943 with the launch of Operation Zitadelle, an assault on the Kursk salient. Approximately one million German troops confronted a Soviet force over 2.5 million strong. The Soviets prevailed. Following the defeat of Operation Zitadelle, the Soviets launched counter-offensives employing six million men along a front towards the Dnieper River as they drove the Germans westwards. Employing increasingly ambitious and tactically sophisticated offensives, along with making operational improvements in secrecy and deception, the Red Army was eventually able to liberate much of the area which the Germans had previously occupied by the summer of 1944. The destruction of Army Group Centre, the outcome of Operation Bagration, proved to be a decisive success; additional Soviet offensives against the German Army Groups North and South in the fall of 1944 put the German war machine into retreat. By January 1945, Soviet military might was aimed at the German capital of Berlin. The war ended with the total defeat and capitulation of Nazi Germany in May 1945.
While the Soviet Union had not signed the Geneva Convention, Germany had signed the treaty and was thus obligated to offer Soviet POWs humane treatment according to its provisions (as it generally did with other Allied POWs). According to the Soviets, they had not signed the Geneva Conventions in 1929 due to Article 9 which, by imposing racial segregation of POWs into different camps, contravened the Soviet constitution. Article 82 of the convention specified that "In case, in time of war, one of the belligerents is not a party to the Convention, its provisions shall nevertheless remain in force as between the belligerents who are parties thereto." Despite this, Hitler called for the battle against the Soviet Union to be a "struggle for existence" and emphasized that the Russian armies were to be "annihilated", a mindset that contributed to war crimes against Soviet prisoners of war. A memorandum from 16 July 1941, recorded by Martin Bormann, quotes Hitler saying, "The giant [occupied] area must naturally be pacified as quickly as possible; this will happen at best if anyone who just looks funny should be shot". Conveniently for the Nazis, the fact that the Soviets failed to sign the convention played into their hands as they justified their behavior accordingly. Even if the Soviets had signed, it is highly unlikely that this would have stopped the Nazis' genocidal policies towards combatants, civilians, and prisoners of war.
Before the war, Hitler issued the notorious Commissar Order, which called for all Soviet political commissars taken prisoner at the front to be shot immediately without trial. German soldiers participated in these mass killings along with members of the "SS-Einsatzgruppen", sometimes reluctantly, claiming "military necessity". On the eve of the invasion, German soldiers were informed that their battle "demands ruthless and vigorous measures against Bolshevik inciters, guerrillas, saboteurs, Jews and the complete elimination of all active and passive resistance". Collective punishment was authorized against partisan attacks; if a perpetrator could not be quickly identified, then burning villages and mass executions were considered acceptable reprisals. Although the majority of German soldiers accepted these crimes as justified due to Nazi propaganda, which depicted the Red Army as "Untermenschen", a few prominent German officers openly protested about them. An estimated two million Soviet prisoners of war died of starvation during Barbarossa alone. By the end of the war, 58 percent of all Soviet prisoners of war had died in German captivity.
Organized crimes against civilians, including women and children, were carried out on a huge scale by the German police and military forces, as well as the local collaborators. Under the command of the Reich Main Security Office, the "Einsatzgruppen" killing squads conducted large-scale massacres of Jews and communists in conquered Soviet territories. Holocaust historian Raul Hilberg puts the number of Jews murdered by "mobile killing operations" at 1,400,000. The original instructions to kill "Jews in party and state positions" were broadened to include "all male Jews of military age" and then expanded once more to "all male Jews regardless of age." By the end of July, the Germans were regularly killing women and children. On 18 December 1941, Himmler and Hitler discussed the "Jewish question", and Himmler noted the meeting's result in his appointment book: "To be annihilated as partisans." According to Christopher Browning, "annihilating Jews and solving the so-called 'Jewish question' under the cover of killing partisans was the agreed-upon convention between Hitler and Himmler". In accordance with Nazi policies against "inferior" Asian peoples, Turkmens were also persecuted. According to a post-war report by Prince Veli Kajum Khan, they were imprisoned in concentration camps in terrible conditions, where those deemed to have "Mongolian" features were murdered daily. Asians were also targeted by the "Einsatzgruppen" and were the subjects of lethal medical experiments and murder at a "pathological institute" in Kiev. Hitler received reports of the mass killings conducted by the "Einsatzgruppen" which were first conveyed to the RSHA, where they were aggregated into a summary report by Gestapo Chief Heinrich Müller.
Burning houses suspected of being partisan meeting places and poisoning water wells became common practice for soldiers of the German 9th Army. At Kharkov, the fourth largest city in the Soviet Union, food was provided only to the small number of civilians who worked for the Germans, with the rest designated to slowly starve. Thousands of Soviets were deported to Germany to be used as slave labor beginning in 1942.
The citizens of Leningrad were subjected to heavy bombardment and a siege that would last 872 days and starve more than a million people to death, of whom approximately 400,000 were children below the age of 14. The German-Finnish blockade cut off access to food, fuel and raw materials, and rations reached a low, for the non-working population, of four ounces (five thin slices) of bread and a little watery soup per day. Starving Soviet civilians began to eat their domestic animals, along with hair tonic and Vaseline. Some desperate citizens resorted to cannibalism; Soviet records list 2,000 people arrested for "the use of human meat as food" during the siege, 886 of them during the first winter of 1941–42. The Wehrmacht planned to seal off Leningrad, starve out the population, and then demolish the city entirely.
Rape was a widespread phenomenon in the East as German soldiers regularly committed violent sexual acts against Soviet women. Whole units were occasionally involved in the crime with upwards of one-third of the instances being gang rape. Historian Hannes Heer relates that in the world of the eastern front, where the German army equated Russia with Communism, everything was "fair game"; thus, rape went unreported unless entire units were involved. Frequently in the case of Jewish women, they were immediately murdered following acts of sexual violence. Historian Birgit Beck emphasizes that military decrees, which served to authorize wholesale brutality on many levels, essentially destroyed the basis for any prosecution of sexual offenses committed by German soldiers in the East. She also contends that detection of such instances was limited by the fact that sexual violence was often inflicted in the context of billets in civilian housing.
Operation Barbarossa was the largest military operation in history; more men, tanks, guns and aircraft were deployed than in any other offensive. The invasion opened up the Eastern Front, the war's largest theater, which saw clashes of unprecedented violence and destruction for four years and killed 26 million Soviet people, including about 8.6 million Red Army soldiers. More died fighting on the Eastern Front than in all other fighting across the globe during World War II. Damage to both the economy and landscape was enormous, as approximately 1,710 Soviet towns and 70,000 villages were razed.
Operation Barbarossa and the subsequent German defeat changed the political landscape of Europe, dividing it into Eastern and Western blocs. The political vacuum left in the eastern half of the continent was filled by the USSR when Stalin secured his territorial prizes of 1944–1945 and firmly placed his Red Army in Bulgaria, Romania, Hungary, Poland, Czechoslovakia, and the eastern half of Germany. Stalin's fear of resurgent German power and his distrust of his erstwhile allies contributed to Soviet pan-Slavic initiatives and a subsequent alliance of Slavic states. Historians David Glantz and Jonathan House assert that Operation Barbarossa influenced not only Stalin but subsequent Soviet leaders, claiming it "colored" their strategic mindsets for the "next four decades". As a result, the Soviets instigated the creation of "an elaborate system of buffer and client states, designed to insulate the Soviet Union from any possible future attack." As a consequence, Eastern Europe became communist in political disposition, and Western Europe fell under the democratic sway of the United States, a nation uncertain about its future policies in Europe.
OSGi
The OSGi Alliance, formerly known as the Open Services Gateway initiative, is an open standards organization founded in March 1999 that originally specified and continues to maintain the OSGi standard.
The OSGi specification describes a modular system and a service platform for the Java programming language that implements a complete and dynamic component model, something that does not exist in standalone Java/VM environments. Applications or components, coming in the form of bundles for deployment, can be remotely installed, started, stopped, updated, and uninstalled without requiring a reboot; management of Java packages/classes is specified in great detail. Application life cycle management is implemented via APIs that allow for remote downloading of management policies. The service registry allows bundles to detect the addition of new services, or the removal of services, and adapt accordingly.
The OSGi specifications have evolved beyond the original focus of service gateways, and are now used in applications ranging from mobile phones to the open-source Eclipse IDE. Other application areas include automobiles, industrial automation, building automation, PDAs, grid computing, entertainment, fleet management and application servers.
The OSGi specification is developed by the members in an open process and made available to the public free of charge under the OSGi Specification License. The OSGi Alliance has a compliance program that is open to members only. As of November 2010, there were seven certified OSGi framework implementations. A separate list covers both certified and non-certified OSGi Specification Implementations, which include OSGi frameworks and other OSGi specifications.
OSGi is a Java framework for developing and deploying modular software programs and libraries. Each bundle is a tightly coupled, dynamically loadable collection of classes, jars, and configuration files that explicitly declare their external dependencies (if any).
The framework is conceptually divided into the following areas: bundles, services, the services registry, life-cycle management, modules, security, and the execution environment.
A bundle is a group of Java classes and additional resources equipped with a detailed manifest file (MANIFEST.MF) describing all its contents, as well as additional services needed to give the included group of Java classes more sophisticated behaviors, to the extent of deeming the entire aggregate a component.
Below is an example of a typical MANIFEST.MF file with OSGi Headers:
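A minimal sketch of such a manifest (the bundle name, versions, and package names are illustrative):

Bundle-Name: Hello World
Bundle-SymbolicName: org.wikipedia.helloworld
Bundle-Description: A Hello World bundle
Bundle-ManifestVersion: 2
Bundle-Version: 1.0.0
Bundle-Activator: org.wikipedia.Activator
Export-Package: org.wikipedia.helloworld;version="1.0.0"
Import-Package: org.osgi.framework;version="1.3.0"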
The meaning of the contents in the example is as follows:
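Bundle-Name: defines a human-readable name for the bundle.
Bundle-SymbolicName: the only required header; specifies a unique identifier for the bundle, by convention using the reverse-domain-name scheme also used for Java packages.
Bundle-Description: a description of the bundle's functionality.
Bundle-ManifestVersion: indicates the OSGi specification version to use for reading this bundle.
Bundle-Version: designates a version number for the bundle.
Bundle-Activator: names the class to be invoked when the bundle is started or stopped.
Export-Package: expresses which Java packages contained in the bundle will be made available to the outside world.
Import-Package: indicates which Java packages will be required from the outside world to fulfill the bundle's dependencies.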
A Life Cycle layer adds bundles that can be dynamically installed, started, stopped, updated and uninstalled. Bundles rely on the module layer for class loading but add an API to manage the modules in run time. The life cycle layer introduces dynamics that are normally not part of an application. Extensive dependency mechanisms are used to assure the correct operation of the environment. Life cycle operations are fully protected with the security architecture.
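As an illustration of these life cycle operations, here is a minimal sketch that boots an OSGi framework through the standard launcher API and drives a bundle through install, start, stop, and uninstall without restarting the JVM; the bundle location is hypothetical, and a concrete framework implementation (such as Apache Felix or Eclipse Equinox) is assumed to be on the classpath:

import java.util.ServiceLoader;
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.launch.Framework;
import org.osgi.framework.launch.FrameworkFactory;

public class LifeCycleDemo {
    public static void main(String[] args) throws Exception {
        // Discover whichever framework implementation is on the classpath.
        FrameworkFactory factory = ServiceLoader.load(FrameworkFactory.class).iterator().next();
        Framework framework = factory.newFramework(null);
        framework.start();
        BundleContext context = framework.getBundleContext();
        Bundle bundle = context.installBundle("file:hello-1.0.0.jar"); // hypothetical location
        bundle.start();   // resolves the bundle and calls its activator
        bundle.stop();
        bundle.uninstall();
        framework.stop();
        framework.waitForStop(0);
    }
}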
Below is an example of a typical Java class implementing the BundleActivator interface:
package org.wikipedia;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
public class Activator implements BundleActivator {
    // Invoked by the framework when the bundle is started and stopped.
    public void start(BundleContext context) { System.out.println("Bundle started."); }
    public void stop(BundleContext context) { System.out.println("Bundle stopped."); }
}
The OSGi Alliance has specified many services. Services are specified by a Java interface. Bundles can implement this interface and register the service with the Service Registry. Clients of the service can find it in the registry, or react to it when it appears or disappears.
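To sketch this publish-find-bind pattern, the following example registers and consumes a hypothetical Greeter service; it assumes the generic registerService and getServiceReference methods available since OSGi Release 4.3:

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

// Hypothetical service interface, shared between provider and client bundles.
interface Greeter {
    String greet(String name);
}

public class GreeterActivator implements BundleActivator {
    public void start(BundleContext context) {
        // Provider side: publish an implementation in the service registry.
        context.registerService(Greeter.class, name -> "Hello, " + name, null);
        // Client side: find the service in the registry and bind to it.
        ServiceReference<Greeter> ref = context.getServiceReference(Greeter.class);
        Greeter greeter = context.getService(ref);
        System.out.println(greeter.greet("OSGi"));
        context.ungetService(ref);
    }
    public void stop(BundleContext context) {
        // Services registered by this bundle are unregistered automatically on stop.
    }
}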
OSGi System Services include the Log Service, Configuration Admin, Device Access, User Admin, IO Connector, Preferences, Component Runtime, Deployment Admin, Event Admin, and Application Admin. OSGi Protocol Services include the HTTP Service, the UPnP Device Service, and DMT Admin. Miscellaneous OSGi services include Wire Admin, the XML Parser service, Measurement and State, and support for foreign applications.
The OSGi Alliance was founded by Ericsson, IBM, Motorola, Sun Microsystems and others in March 1999. Before incorporating as a nonprofit corporation, it was called the Connected Alliance.
Among its members are more than 35 companies from quite different business areas, for example Adobe Systems, Deutsche Telekom, Hitachi, IBM, Liferay, Makewave, NEC, NTT, Oracle, Orange S.A., ProSyst, Salesforce.com, Siemens, Software AG and TIBCO Software.
The Alliance has a board of directors that provides the organization's overall governance. OSGi officers have various roles and responsibilities in supporting the alliance. Technical work is conducted within Expert Groups (EGs) chartered by the board of directors, and non-technical work is conducted in various working groups and committees. The technical work conducted within Expert Groups includes developing specifications, reference implementations, and compliance tests. These Expert Groups have produced five major releases of the OSGi specifications.
Dedicated Expert Groups exist for the enterprise, mobile, vehicle and the core platform areas.
The Enterprise Expert Group (EEG) is the newest EG and is addressing Enterprise / Server-side applications.
In November 2007 the Residential Expert Group (REG) started to work on specifications to remotely manage residential/home-gateways.
In October 2003, Nokia, Motorola, IBM, ProSyst and other OSGi members formed a Mobile Expert Group (MEG) to specify a MIDP-based service platform for the next generation of smart mobile phones, addressing some of the needs that CLDC cannot manage, other than CDC. MEG became part of OSGi with Release 4 (R4).
Estrogen
Estrogen, or oestrogen, is the primary female sex hormone. It is responsible for the development and regulation of the female reproductive system and secondary sex characteristics. There are three major endogenous estrogens in females that have estrogenic hormonal activity: estrone, estradiol, and estriol. The estrane steroid estradiol is the most potent and prevalent of these.
Estrogens are synthesized in all vertebrates as well as some insects. Their presence in both vertebrates and insects suggests that estrogenic sex hormones have an ancient evolutionary history. The three major naturally occurring forms of estrogen in women are estrone (E1), estradiol (E2), and estriol (E3). Another type of estrogen called estetrol (E4) is produced only during pregnancy. Quantitatively, estrogens circulate at lower levels than androgens in both men and women. While estrogen levels are significantly lower in males compared to females, estrogens nevertheless also have important physiological roles in males.
Like all steroid hormones, estrogens readily diffuse across the cell membrane. Once inside the cell, they bind to and activate estrogen receptors (ERs) which in turn modulate the expression of many genes. Additionally, estrogens bind to and activate rapid-signaling membrane estrogen receptors (mERs), such as GPER (GPR30).
In addition to their role as natural hormones, estrogens are used as medications, for instance in menopausal hormone therapy and hormonal birth control.
The four major naturally occurring estrogens in women are estrone (E1), estradiol (E2), estriol (E3), and estetrol (E4). Estradiol is the predominant estrogen during reproductive years, both in terms of absolute serum levels and in terms of estrogenic activity. During menopause, estrone is the predominant circulating estrogen and during pregnancy estriol is the predominant circulating estrogen in terms of serum levels. Given by subcutaneous injection in mice, estradiol is about 10-fold more potent than estrone and about 100-fold more potent than estriol. Thus, estradiol is the most important estrogen in non-pregnant females who are between the menarche and menopause stages of life. However, during pregnancy this role shifts to estriol, and in postmenopausal women estrone becomes the primary form of estrogen in the body. Another type of estrogen called estetrol (E4) is produced only during pregnancy. All of the different forms of estrogen are synthesized from androgens, specifically testosterone and androstenedione, by the enzyme aromatase.
Minor endogenous estrogens, the biosyntheses of which do not involve aromatase, include 27-hydroxycholesterol, dehydroepiandrosterone (DHEA), 7-oxo-DHEA, 7α-hydroxy-DHEA, 16α-hydroxy-DHEA, 7β-hydroxyepiandrosterone, androstenedione (A4), androstenediol (A5), 3α-androstanediol, and 3β-androstanediol. Some estrogen metabolites, such as the catechol estrogens 2-hydroxyestradiol, 2-hydroxyestrone, 4-hydroxyestradiol, and 4-hydroxyestrone, as well as 16α-hydroxyestrone, are also estrogens with varying degrees of activity. The biological importance of these minor estrogens is not entirely clear.
The actions of estrogen are mediated by the estrogen receptor (ER), a dimeric nuclear protein that binds to DNA and controls gene expression. Like other steroid hormones, estrogen enters passively into the cell, where it binds to and activates the estrogen receptor. The estrogen:ER complex binds to specific DNA sequences called a hormone response element to activate the transcription of target genes (in a study using an estrogen-dependent breast cancer cell line as model, 89 such genes were identified). Since estrogen enters all cells, its actions are dependent on the presence of the ER in the cell. The ER is expressed in specific tissues including the ovary, uterus and breast. The metabolic effects of estrogen in postmenopausal women have been linked to the genetic polymorphism of the ER.
While estrogens are present in both men and women, they are usually present at significantly higher levels in women of reproductive age. They promote the development of female secondary sexual characteristics, such as breasts, and are also involved in the thickening of the endometrium and other aspects of regulating the menstrual cycle. In males, estrogen regulates certain functions of the reproductive system important to the maturation of sperm and may be necessary for a healthy libido.
Estrogens are responsible for the development of female secondary sexual characteristics during puberty, including breast development, widening of the hips, and female fat distribution. Conversely, androgens are responsible for pubic and body hair growth, as well as acne and axillary odor.
Estrogen, in conjunction with growth hormone (GH) and its secretory product insulin-like growth factor 1 (IGF-1), is critical in mediating breast development during puberty, as well as breast maturation during pregnancy in preparation for lactation and breastfeeding. Estrogen is primarily and directly responsible for inducing the ductal component of breast development, as well as for causing fat deposition and connective tissue growth. It is also indirectly involved in the lobuloalveolar component, by increasing progesterone receptor expression in the breasts and by inducing the secretion of prolactin. Allowed for by estrogen, progesterone and prolactin work together to complete lobuloalveolar development during pregnancy.
Androgens such as testosterone powerfully oppose estrogen action in the breasts, such as by reducing estrogen receptor expression in them.
Estrogens are responsible for maturation and maintenance of the vagina and uterus, and are also involved in ovarian function, such as maturation of ovarian follicles. In addition, estrogens play an important role in regulation of gonadotropin secretion. For these reasons, estrogens are required for female fertility.
Estrogen-regulated DNA repair mechanisms in the brain have neuroprotective effects. Estrogen regulates the transcription of DNA base excision repair genes as well as the translocation of the base excision repair enzymes between different subcellular compartments.
Estrogens are involved in libido (sex drive) in both women and men.
Verbal memory scores are frequently used as one measure of higher-level cognition. These scores vary in direct proportion to estrogen levels throughout the menstrual cycle, pregnancy, and menopause. Furthermore, estrogens administered shortly after natural or surgical menopause prevent decreases in verbal memory. In contrast, estrogens have little effect on verbal memory if first administered years after menopause. Estrogens also have positive influences on other measures of cognitive function. However, the effect of estrogens on cognition is not uniformly favorable and is dependent on the timing of the dose and the type of cognitive skill being measured.
The protective effects of estrogens on cognition may be mediated by estrogen's anti-inflammatory effects in the brain. Studies have also shown that the Met allele and the level of estrogen mediate the efficiency of prefrontal cortex-dependent working memory tasks.
Estrogen is considered to play a significant role in women's mental health. Sudden estrogen withdrawal, fluctuating estrogen, and periods of sustained low estrogen levels correlate with significant mood lowering. Clinical recovery from postpartum, perimenopause, and postmenopause depression has been shown to be effective after levels of estrogen were stabilized and/or restored. Menstrual exacerbation (including menstrual psychosis) is typically triggered by low estrogen levels, and is often mistaken for premenstrual dysphoric disorder.
Compulsions in male lab mice, such as those in obsessive-compulsive disorder (OCD), may be caused by low estrogen levels. When estrogen levels were raised through the increased activity of the enzyme aromatase in male lab mice, OCD rituals were dramatically decreased. Hypothalamic protein levels in the gene COMT are enhanced by increasing estrogen levels, which is believed to return mice that displayed OCD rituals to normal activity. A deficiency of aromatase, the enzyme involved in the synthesis of estrogen in humans, is ultimately suspected, a finding that may have therapeutic implications for people with obsessive-compulsive disorder.
Local application of estrogen in the rat hippocampus has been shown to inhibit the re-uptake of serotonin. Contrarily, local application of estrogen has been shown to block the ability of fluvoxamine to slow serotonin clearance, suggesting that the same pathways which are involved in SSRI efficacy may also be affected by components of local estrogen signaling pathways.
Studies have also found that fathers had lower levels of cortisol and testosterone but higher levels of estrogen (estradiol) compared to non-fathers.
Estrogen may play a role in suppressing binge eating. Hormone replacement therapy using estrogen may be a possible treatment for binge eating behaviors in females. Estrogen replacement has been shown to suppress binge eating behaviors in female mice. The mechanism by which estrogen replacement inhibits binge-like eating involves the activation of serotonin (5-HT) neurons. Women exhibiting binge eating behaviors are found to have increased brain uptake of 5-HT, and therefore less of the neurotransmitter serotonin in the cerebrospinal fluid. Estrogen works to activate 5-HT neurons, leading to suppression of binge-like eating behaviors.
It is also suggested that there is an interaction between hormone levels and eating at different points in the female menstrual cycle. Research has predicted increased emotional eating during hormonal flux, which is characterized by high progesterone and estradiol levels that occur during the mid-luteal phase. It is hypothesized that these changes occur due to brain changes across the menstrual cycle that are likely a genomic effect of hormones. These effects produce menstrual cycle changes, which result in hormone release leading to behavioral changes, notably binge and emotional eating. These occur especially prominently among women who are genetically vulnerable to binge eating phenotypes.
Binge eating is associated with decreased estradiol and increased progesterone. Klump et al. found that progesterone may moderate the effects of low estradiol (such as during dysregulated eating behavior), but that this may only be true in women who have had clinically diagnosed binge episodes (BEs). Dysregulated eating is more strongly associated with such ovarian hormones in women with BEs than in women without BEs.
The implantation of 17β-estradiol pellets in ovariectomized mice significantly reduced binge eating behaviors and injections of GLP-1 in ovariectomized mice decreased binge-eating behaviors.
The associations between binge eating and menstrual-cycle phase were found to correlate with ovarian hormone levels.
In rodents, estrogens (which are locally aromatized from androgens in the brain) play an important role in psychosexual differentiation, for example, by masculinizing territorial behavior; the same is not true in humans. In humans, the masculinizing effects of prenatal androgens on behavior (and other tissues, with the possible exception of effects on bone) appear to act exclusively through the androgen receptor. Consequently, the utility of rodent models for studying human psychosexual differentiation has been questioned.
Estrogens are responsible for both the pubertal growth spurt, which causes an acceleration in linear growth, and epiphyseal closure, which limits height and limb length, in both females and males. In addition, estrogens are responsible for bone maturation and maintenance of bone mineral density throughout life. Due to hypoestrogenism, the risk of osteoporosis increases during menopause.
Women suffer less from heart disease due to vasculo-protective action of estrogen which helps in preventing atherosclerosis. It also helps in maintaining the delicate balance between fighting infections and protecting arteries from damage thus lowering the risk of cardiovascular disease. During pregnancy, high levels of estrogens increase coagulation and the risk of venous thromboembolism.
Estrogen has anti-inflammatory properties and helps in mobilization of polymorphonuclear white blood cells or neutrophils.
Estrogens are implicated in various estrogen-dependent conditions, such as ER-positive breast cancer, as well as a number of genetic conditions involving estrogen signaling or metabolism, such as estrogen insensitivity syndrome, aromatase deficiency, and aromatase excess syndrome.
Estrogens, in females, are produced primarily by the ovaries, and during pregnancy, the placenta. Follicle-stimulating hormone (FSH) stimulates the ovarian production of estrogens by the granulosa cells of the ovarian follicles and corpora lutea. Some estrogens are also produced in smaller amounts by other tissues such as the liver, pancreas, bone, adrenal glands, skin, brain, adipose tissue, and the breasts. These secondary sources of estrogens are especially important in postmenopausal women.
The pathway of estrogen biosynthesis in extragonadal tissues is different. These tissues are not able to synthesize C19 steroids, and therefore depend on C19 supplies from other tissues and the level of aromatase.
In females, synthesis of estrogens starts in theca interna cells in the ovary, by the synthesis of androstenedione from cholesterol. Androstenedione is a substance of weak androgenic activity which serves predominantly as a precursor for more potent androgens such as testosterone as well as estrogen. This compound crosses the basal membrane into the surrounding granulosa cells, where it is converted either immediately into estrone, or into testosterone and then estradiol in an additional step. The conversion of androstenedione to testosterone is catalyzed by 17β-hydroxysteroid dehydrogenase (17β-HSD), whereas the conversion of androstenedione and testosterone into estrone and estradiol, respectively is catalyzed by aromatase, enzymes which are both expressed in granulosa cells. In contrast, granulosa cells lack 17α-hydroxylase and 17,20-lyase, whereas theca cells express these enzymes and 17β-HSD but lack aromatase. Hence, both granulosa and theca cells are essential for the production of estrogen in the ovaries.
Estrogen levels vary through the menstrual cycle, with levels highest near the end of the follicular phase just before ovulation.
Note that in males, estrogen is also produced by the Sertoli cells when FSH binds to their FSH receptors.
In the circulation, estrogens are bound to plasma proteins, namely albumin and/or sex hormone-binding globulin.
Estrogens are metabolized via hydroxylation by cytochrome P450 enzymes such as CYP1A1 and CYP3A4 and via conjugation by estrogen sulfotransferases (sulfation) and UDP-glucuronyltransferases (glucuronidation). In addition, estradiol is dehydrogenated by 17β-hydroxysteroid dehydrogenase into the much less potent estrogen estrone. These reactions occur primarily in the liver, but also in other tissues.
Estrogens are excreted primarily by the kidneys as conjugates via the urine.
Estrogens are used as medications, mainly in hormonal contraception, hormone replacement therapy, and to treat gender dysphoria in transgender women and other transfeminine individuals as part of feminizing hormone therapy.
The estrogen steroid hormones are estrane steroids.
In 1929, Adolf Butenandt and Edward Adelbert Doisy independently isolated and purified estrone, the first estrogen to be discovered. Then, estriol and estradiol were discovered in 1930 and 1933, respectively. Shortly following their discovery, estrogens, both natural and synthetic, were introduced for medical use. Examples include estriol glucuronide (Emmenin, Progynon), estradiol benzoate, conjugated estrogens (Premarin), diethylstilbestrol, and ethinylestradiol.
The word estrogen derives from Ancient Greek. It is derived from "oestros" (a periodic state of sexual activity in female mammals) and "genos" (generating). It was first published in the early 1920s and referenced as "oestrin". Over the years, American English adapted the spelling of estrogen to fit its phonetic pronunciation. Nevertheless, both estrogen and oestrogen are used nowadays, and some still wish to maintain the original spelling as it reflects the origin of the word.
The name "estrogen" is derived from the Greek ("oistros"), literally meaning "verve or inspiration" but figuratively sexual passion or desire, and the suffix "-gen", meaning "producer of".
A range of synthetic and natural substances that possess estrogenic activity have been identified in the environment and are referred to as xenoestrogens.
Estrogens are among the wide range of endocrine-disrupting compounds (EDCs) because they have high estrogenic potency. When an EDC makes its way into the environment, it may cause male reproductive dysfunction in wildlife. The estrogen excreted by farm animals makes its way into fresh water systems. Fish exposed to low levels of estrogen during the germination period of reproduction may suffer reproductive dysfunction, particularly males.
Some hair shampoos on the market include estrogens and placental extracts; others contain phytoestrogens. In 1998, there were case reports of four prepubescent African-American girls developing breasts after exposure to these shampoos. In 1993, the FDA determined that over-the-counter topically applied hormone-containing drug products for human use are not generally recognized as safe and effective, and are misbranded. An accompanying proposed rule deals with cosmetics, concluding that any use of natural estrogens in a cosmetic product makes the product an unapproved new drug and that any cosmetic using the term "hormone" in the text of its labeling or in its ingredient statement makes an implied drug claim, subjecting such a product to regulatory action.
In addition to being considered misbranded drugs, products claiming to contain placental extract may also be deemed to be misbranded cosmetics if the extract has been prepared from placentas from which the hormones and other biologically active substances have been removed and the extracted substance consists principally of protein. The FDA recommends that this substance be identified by a name other than "placental extract", one describing its composition more accurately, because consumers associate the name "placental extract" with a therapeutic use of some biological activity.
Roland Octapad
Roland Octapad is a range of MIDI electronic drum percussion controllers produced by the Roland Corporation.
The first model, introduced in 1985, was the Pad-8. It was originally to be called the MPC-8 (MIDI Percussion Controller 8), but was renamed the Pad-8 to avoid legal complications with MPC Electronics. It was an influential device at the time, giving drummers and percussionists the ability to trigger virtually any MIDI sound source without the need for a full electronic drum set.
The Pad-8 consists of eight individual pads (divided into two rows of four) and six external pad trigger ports. The controller had no internal sound source and limited memory, holding four user patches. A unique initialization procedure, performed when powering on, would load a patch preset and configure the Pad-8 to work with either Roland's TR-909 or TR-707/TR-727.
The Pad-8 could only transmit on a single MIDI channel (channel 10 on power-up); however, each of the 14 pad and trigger inputs is assigned a different MIDI note number. Both the MIDI channel and note numbers could be edited to suit the device being controlled over MIDI.
One parameter is adjustable per pad: the MIDI note. The remaining five adjustable parameters are global: MIDI channel, pad sensitivity, volume curve, minimum velocity, and gate time.
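To make concrete what a pad transmits, the following minimal sketch sends one pad strike as a MIDI note-on message using Java's standard javax.sound.midi API; the note number (38) and velocity (100) are illustrative values, not the Pad-8's factory defaults:

import javax.sound.midi.MidiSystem;
import javax.sound.midi.Receiver;
import javax.sound.midi.ShortMessage;

public class PadStrike {
    public static void main(String[] args) throws Exception {
        // A pad strike amounts to a MIDI note-on: channel, note number, velocity.
        // Channel 10 (index 9 in the API) is the conventional percussion channel.
        ShortMessage noteOn = new ShortMessage(ShortMessage.NOTE_ON, 9, 38, 100);
        try (Receiver out = MidiSystem.getReceiver()) {
            out.send(noteOn, -1); // timestamp -1 means deliver immediately
        }
    }
}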
The second model, introduced in 1989, was the Pad-80 Octapad II. Again the Pad-80 was an eight pad MIDI controller that allowed for various types of MIDI sound sources. Improvements in this second model included the ability to play up to three notes per pad, and velocity switching, which allowed the user to stack or alternate between the assigned notes depending on how hard the pads were struck. This feature became useful for creating more realistic sounding drum parts, and in addition allowed drummers to play melodic instruments with greater ease. These new features were groundbreaking at the time, and are still utilized in Roland's electronic percussion today.
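As a rough illustration of how velocity switching works, the following sketch maps strike velocity to one of up to three assigned notes; the note numbers and thresholds are illustrative, not the Pad-80's actual values:

public class VelocitySwitch {
    // Up to three MIDI notes can be assigned to a single pad.
    private final int[] notes = {38, 40, 37}; // illustrative assignments

    // Choose which note to send based on strike velocity (MIDI range 1-127).
    public int noteFor(int velocity) {
        if (velocity < 43) return notes[0];  // soft hit
        if (velocity < 86) return notes[1];  // medium hit
        return notes[2];                     // hard hit
    }
}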
The memory was increased, allowing up to 64 different patches internally and another 64 patches to be stored on a Roland M-256E memory card. Further improvements to the MIDI specification included the control of modulation, pitch bend and aftertouch using a foot pedal, along with full System Exclusive (SysEx) capability. The Pad-80 had a patch chain function that allowed a series of 32 patches to be arranged in any sequence; eight such chains could be stored in memory.
After the Pad-80, Roland continued the line with the SPD-8 (1990), a standalone instrument with on-board sounds; the SPD-11 (1993), which added more sounds as well as built-in effects processing; and the SPD-20 (1998), with still more on-board sounds. Apart from the SPD-20 and SPD-20X, these SPD-series products were not named "Octapad" on the product panel.
Roland continued the line in 2010 with the Octapad SPD-30 which includes on-board sounds and effects. | https://en.wikipedia.org/wiki?curid=22587 |
Oswald Spengler
Oswald Manuel Arnold Gottfried Spengler (; 29 May 1880 – 8 May 1936) was a German historian and philosopher of history whose interests included mathematics, science, and art and their relation to his cyclical theory of history. He is best known for his book "The Decline of the West" ("Der Untergang des Abendlandes"), published in 1918 and 1922, covering all of world history. Spengler's model of history postulates that any culture is a superorganism with a limited and predictable lifespan.
Spengler predicted that about the year 2000, Western civilization would enter a period of pre-death emergency, the countering of which would lead to roughly 200 years of Caesarism (extraconstitutional omnipotence of the executive branch of the central government) before Western civilization's final collapse.
Spengler is regarded as a nationalist and an anti-democrat, and was a prominent member of the Conservative Revolution. However, he criticised Nazism due to its excessive racism. Instead, he saw Benito Mussolini, and entrepreneur types like Cecil Rhodes, as embryonic examples of the impending Caesars of Western culture, notwithstanding his stark criticism of Mussolini's imperial adventures.
He strongly influenced other historians, including Franz Borkenau and especially Arnold J. Toynbee and other successors like Carroll Quigley and Samuel P. Huntington.
John Calvert notes that he is also popular with the Islamists, who mobilize his critique of the West.
Oswald Arnold Gottfried Spengler was born on 29 May 1880 in Blankenburg, Duchy of Brunswick, German Reich, the oldest surviving child of Bernhard Spengler (1844–1901), an official in the post office, and Pauline Spengler (1840–1910), née Grantzow, the descendant of an artistic family. Oswald's elder brother was born prematurely (eight months) in 1879, when his mother tried to move a heavy laundry basket, and died at the age of three weeks. Oswald was born ten months after his brother's death. His younger sisters were Adele (1881–1917), Gertrud (1882–1957), and Hildegard (1885–1942). Oswald's paternal grandfather, Theodor Spengler (1806–76), was a metallurgical inspector ("Hütteninspektor") in Altenbrak.
Oswald's father, Bernhard Spengler, held the position of a postal secretary ("Postsekretär") and was a hard-working man with a marked dislike of intellectualism, who tried to instil the same values and attitudes in his son.
On 26 May 1799, Friedrich Wilhelm Grantzow, a tailor's apprentice in Berlin, married a Jewish woman named Bräunchen Moses ( 1769–1849; whose parents, Abraham and Reile Moses, were both deceased by that time). Shortly before the wedding, Moses was baptized as Johanna Elisabeth Anspachin (the surname was chosen after her birthplace—Anspach). The couple had eight children (three before and five after the wedding), one of whom was Gustav Adolf Grantzow (1811–83)—a solo dancer and ballet master in Berlin, who in 1837 married Katharina Kirchner (1813–73), a nervously beautiful solo dancer from a Munich Catholic family; the second of their four daughters was Oswald Spengler's mother Pauline Grantzow. Like the Grantzows in general, Pauline was of a Bohemian disposition, and, before marrying Bernhard Spengler, accompanied her dancer sister on tours. She was the least talented member of the Grantzow family. In appearance, she was plump and a bit unseemly. Her temperament, which Oswald inherited, complemented her appearance and frail physique: she was moody, irritable, and morose.
When Oswald was ten years of age, his family moved to the university city of Halle. Here he received a classical education at the local Gymnasium (academically oriented secondary school), studying Greek, Latin, mathematics and sciences. Here, too, he developed his propensity for the arts—especially poetry, drama, and music—and came under the influence of the ideas of Johann Wolfgang von Goethe and Friedrich Nietzsche. At 17, he wrote a drama titled "Montezuma".
After his father's death in 1901 Spengler attended several universities (Munich, Berlin, and Halle) as a private scholar, taking courses in a wide range of subjects. His private studies were undirected. In 1903, he failed his doctoral thesis on Heraclitus (titled "Der metaphysische Grundgedanke der Heraklitischen Philosophie" ("The Fundamental Metaphysical Thought of the Heraclitean Philosophy") and conducted under the direction of Alois Riehl) because of insufficient references. He eventually took the doctoral oral exam again and received his PhD from Halle on 6 April 1904. In December 1904, he set out to write the secondary dissertation ("Staatsexamensarbeit") necessary to qualify as a high school teacher. This became "The Development of the Organ of Sight in the Higher Realms of the Animal Kingdom" ("Die Entwicklung des Sehorgans bei den Hauptstufen des Tierreiches"), a text now lost. It was approved and he received his teaching certificate. In 1905 Spengler suffered a nervous breakdown.
Biographers report his life as a teacher was uneventful. He briefly served as a teacher in Saarbrücken and then in Düsseldorf. From 1908 to 1911 he worked at a grammar school ("Realgymnasium") in Hamburg, where he taught science, German history, and mathematics.
In 1911, following his mother's death, he moved to Munich, where he would live until his death in 1936. He lived as a cloistered scholar, supported by his modest inheritance. Spengler survived on very limited means and was marked by loneliness. He owned no books, and took jobs as a tutor or wrote for magazines to earn additional income.
He began work on the first volume of "Decline of the West" intending at first to focus on Germany within Europe, but the Agadir Crisis of 1911 affected him deeply, and he widened the scope of his study.
The book was completed in 1914, but publication was delayed by the war, and it appeared in 1918, shortly before the end of World War I. It instantly made him a celebrity. Owing to a severe heart problem, Spengler was exempted from military service. During the war, however, his inheritance was largely useless because it was invested overseas; thus he lived in genuine poverty for this period.
When "The Decline of the West" was published in the summer of 1918, it was a wild success. The national humiliation of the Treaty of Versailles (1919) and later the economic depression around 1923 fueled by hyperinflation seemed to prove Spengler right. It comforted Germans because it seemingly rationalized their downfall as part of larger world-historical processes. The book met with wide success outside of Germany as well, and by 1919 had been translated into several other languages. Spengler declined a subsequent offer to become Professor of Philosophy at the University of Göttingen, saying he needed time to focus on writing.
The book was widely discussed, even by those who had not read it. Historians took umbrage at his unapologetically non-scientific approach. Novelist Thomas Mann compared reading Spengler's book to reading Schopenhauer for the first time. Academics gave it a mixed reception. Sociologist Max Weber described Spengler as a "very ingenious and learned dilettante", while philosopher Karl Popper called the thesis "pointless".
A 1928 "Time" review of the second volume of "Decline" described the immense influence and controversy Spengler's ideas enjoyed during the 1920s: "When the first volume of "The Decline of the West" appeared in Germany a few years ago, thousands of copies were sold. Cultivated European discourse quickly became Spengler-saturated. Spenglerism spurted from the pens of countless disciples. It was imperative to read Spengler, to sympathize or revolt. It still remains so".
In the second volume, published in 1922, Spengler argued that German socialism differed from Marxism, and was in fact compatible with traditional German conservatism. In 1924, following the social-economic upheaval and inflation, Spengler entered politics in an effort to bring Reichswehr general Hans von Seeckt to power as the country's leader. The attempt failed and Spengler proved ineffective in practical politics.
In 1931, he published "Man and Technics", which warned against the dangers of technology and industrialism to culture. He especially pointed to the tendency of Western technology to spread to hostile "Colored races" which would then use the weapons against the West. It was poorly received because of its anti-industrialism. This book contains the well-known Spengler quote "Optimism is cowardice".
Despite voting for Hitler over Hindenburg in 1932, Spengler found the Führer vulgar. He met Hitler in 1933 and after a lengthy discussion remained unimpressed, saying that Germany did not need a "heroic tenor ["Heldentenor": one of several conventional tenor classifications] but a real hero ["Held"]". He quarreled publicly with Alfred Rosenberg, and his pessimism and remarks about the Führer resulted in isolation and public silence. He further rejected offers from Joseph Goebbels to give public speeches. However, Spengler did become a member of the German Academy in the course of the year.
"The Hour of Decision", published in 1934, was a bestseller, but the National Socialist German Workers Party later banned it for its critiques of National Socialism. Spengler's criticisms of liberalism were welcomed by the Nazis, but Spengler disagreed with their biological ideology and anti-Semitism. While racial mysticism played a key role in his own worldview, Spengler had always been an outspoken critic of the pseudo-scientific racial theories professed by the Nazis and many others in his time, and was not inclined to change his views upon Hitler's rise to power. Although himself a German nationalist, Spengler viewed the Nazis as too narrowly German, and not occidental enough to lead the fight against other peoples. The book also warned of a coming world war in which Western Civilization risked being destroyed, and was widely distributed abroad before eventually being banned in Germany. A "Time" review of "The Hour of Decision" noted his international popularity as a polemicist, observing that "When Oswald Spengler speaks, many a Western Worldling stops to listen". The review recommended the book for "readers who enjoy vigorous writing", who "will be glad to be rubbed the wrong way by Spengler's harsh aphorisms" and his pessimistic predictions.
On 13 October 1933 Spengler became one of the hundred senators of the German Academy.
Spengler spent his final years in Munich, listening to Beethoven, reading Molière and Shakespeare, buying several thousand books, and collecting ancient Turkish, Persian and Indian weapons. He made occasional trips to the Harz mountains and to Italy. In the spring of 1936 (shortly before his death), he prophetically remarked in a letter to Reichsleiter Hans Frank that "in ten years, a German Reich will probably no longer exist" (""da ja wohl in zehn Jahren ein Deutsches Reich nicht mehr existieren wird!"").
Spengler died of a heart attack on 8 May 1936, in Munich, three weeks before his 56th birthday and exactly nine years before the fall of the Third Reich.
In the introduction to "The Decline of the West", Spengler cites Johann W. von Goethe and Friedrich Nietzsche as his major influences. Goethe's vitalism and Nietzsche's cultural criticism, in particular, are evident throughout his work.
Spengler was also influenced by the universal and cyclical vision of world history proposed by the German historian Eduard Meyer. The belief that civilizations progress through an evolutionary process comparable to that of living beings can be traced back to classical antiquity, through thinkers such as Cato the Elder, Cicero, Seneca, Florus, and Ammianus Marcellinus, and later Francis Bacon, who compared different empires with each other with the help of biological analogies, although it is difficult to assess the extent of the influence those thinkers had on Spengler.
The concept of historical philosophy developed by Spengler is founded upon two assumptions: the existence of social entities called 'Cultures' ("Kulturen"), regarded as the largest possible actors in human history (which itself has no metaphysical sense), and the parallelism between the evolution of those Cultures and the evolution of living beings. Spengler counted nine Cultures: Egyptian, Babylonian, Indian, Chinese, Greco-Roman, 'Magic' or 'Arabic' (including early and Byzantine Christianity and Islam), Mexican, Western, and Russian. These Cultures interacted with each other in time and space but were distinguished from one another by 'internal' attributes. According to him, "Cultures are organisms, and world-history is their collective biography."
Spengler also compares the evolution of Cultures to the different ages of human life: "Every Culture passes through the age-phases of the individual man. Each has its childhood, youth, manhood and old age." When a Culture enters its late stage, Spengler argues, it becomes a 'Civilization' ("Zivilisation"), a petrified body characterized in the modern age by technology, imperialism, and mass society, which he expected to fossilize and decline from the 2000s onward. The first-millennium Near East was, in his view, not a transition between Classical Antiquity, Western Christianity, and Islam, but rather an emerging new Culture he named 'Arabian' or 'Magic', with Messianic Judaism, Zoroastrianism, early Christianity, and Islam being different expressions of a single Culture sharing a unique worldview.
The great historian of antiquity Eduard Meyer thought highly of Spengler, although he also had some criticisms of him. Spengler's obscurity, intuitionalism, and mysticism were easy targets, especially for the positivists and neo-Kantians who rejected the possibility that there was meaning in world history. The critic and aesthete Count Harry Kessler thought him unoriginal and rather inane, especially in regard to his opinion on Nietzsche. Philosopher Ludwig Wittgenstein, however, shared Spengler's cultural pessimism. Spengler's work became an important foundation for the social cycle theory.
In late 1919, Spengler published "Prussianism and Socialism" ("Preußentum und Sozialismus"), an essay based on notes intended for the second volume of "The Decline of the West" in which he argues that German socialism is the correct socialism in contrast to English socialism. In his view, correct socialism has a much more "national" spirit.
According to Spengler, mankind will spend the next and last several hundred years of its existence in a state of Caesarian socialism, in which all humans will be synergized into a harmonious and happy totality by a dictator, just as an orchestra is synergized into a harmonious totality by its conductor.
According to some recent critics such as Ishay Landa, "Prussian socialism" has some decidedly capitalistic traits. Spengler declares himself resolutely opposed to labor strikes (Spengler describes them as "the unsocialistic earmark of Marxism"), trade unions ("wage-Bolshevism" in Spengler's terms), progressive taxation or any imposition of taxes on the rich ("dry Bolshevism"), any shortening of the working day (he argues that workers should work even on Sundays), as well as any form of government insurance for sickness, old age, accidents, or unemployment.
At the same time as he rejects any social democratic provisions, Spengler celebrates private property, competition, imperialism, capital accumulation, and "wealth, collected in few hands and among the ruling classes." Landa describes Spengler's "Prussian Socialism" as "working a whole lot, for the absolute minimum, but – and this is a vital aspect – being happy about it."
In his private papers, Spengler denounced Nazi anti-Semitism in even stronger terms, wondering "how much envy of the capability of other people in view of one's lack of it lies hidden in anti-Semitism!", and arguing that "when one would rather destroy business and scholarship than see Jews in them, one is an ideologue, i.e., a danger for the nation. Idiotic." Spengler was an admirer of the old Prussian aristocracy and showed contempt for the proletarian and demagogic character of the Nazi party, and considered the Aryan racial doctrine to be nonsense. In 1934, Spengler pronounced the funeral oration for one of the victims of the Night of the Long Knives and retired in 1935 from the board of the highly influential Nietzsche Archive in opposition to the regime.
Spengler, however, regarded the transformation of ultra-capitalist mass democracies into dictatorial regimes as inevitable, and he had expressed some sympathy for Benito Mussolini and the Italian Fascist movement as a first symptom of this development.
He also considered Judaism to be a "disintegrating element" (zersetzendes Element) that acts destructively "wherever it intervenes" (wo es auch eingreift). Jews are characterized by a "cynical intelligence" (zynische Intelligenz) and their "money thinking" (Gelddenken). Therefore, they were incapable of adapting to Western culture and represented a foreign body in Europe. With these anti-Semitic speculations Spengler contributed significantly to the enforcement of stereotypes about "the Jews" in pre-WW2 German circles. | https://en.wikipedia.org/wiki?curid=22588 |
Oracle
An oracle is a person or agency considered to provide wise and insightful counsel or prophetic predictions or precognition of the future, inspired by the gods. As such it is a form of divination.
The word "oracle" comes from the Latin verb "ōrāre", "to speak", and properly refers to the priest or priestess uttering the prediction. In extended use, "oracle" may also refer to the site of the oracle, and to the oracular utterances themselves, called "khrēsmoi" (χρησμοί) in Greek.
Oracles were thought to be portals through which the gods spoke directly to people. In this sense they were different from seers ("manteis", μάντεις) who interpreted signs sent by the gods through bird signs, animal entrails, and other various methods.
The most important oracles of Greek antiquity were the Pythia (priestess of Apollo at Delphi) and the oracle of Dione and Zeus at Dodona in Epirus. Other oracles of Apollo were located at Didyma and Mallus on the coast of Anatolia, at Corinth and Bassae in the Peloponnese, and on the islands of Delos and Aegina in the Aegean Sea.
The Sibylline Oracles are a collection of oracular utterances written in Greek hexameters ascribed to the Sibyls, prophetesses who uttered divine revelations in frenzied states.
Walter Burkert observes that "Frenzied women from whose lips the god speaks" are recorded in the Near East as in Mari in the second millennium BC and in Assyria in the first millennium BC. In Egypt the goddess Wadjet (eye of the moon) was depicted as a snake-headed woman or a woman with two snake-heads. Her oracle was in the renowned temple in Per-Wadjet (Greek name Buto). The oracle of Wadjet may have been the source for the oracular tradition which spread from Egypt to Greece. Evans linked Wadjet with the "Minoan Snake Goddess".
At the oracle of Dodona she is called Diōnē (the feminine form of "Diós", genitive of "Zeus"; or of "dīos", "godly", literally "heavenly"), who represents the fertile earth, probably the chief female goddess of the proto-Indo-European pantheon. Python, daughter (or son) of Gaia, was the earth dragon of Delphi, represented as a serpent; she became the chthonic deity and enemy of Apollo, who slew her and possessed the oracle.
The Pythia was the mouthpiece of the oracles of the god Apollo, and was also known as the Oracle of Delphi.
The Pythia was not believed to be infallible; indeed, according to Sourvinou-Inwood in "What is Polis Religion?", the ancient Greeks were aware of this and concluded from it the unknowability of the divine. For this reason, the revelations of the oracles were not treated as objective truth, which is why inquirers consulted many of them [see: Hyp. 4. 14-15]. The Pythia gave prophecies only on the seventh day of each month, seven being the number most associated with Apollo, and only during the nine warmer months of the year; thus, Delphi was not the major source of everyday divination for the ancient Greeks. Many wealthy individuals bypassed the hordes of people attempting a consultation by making additional animal sacrifices to please the oracle, lest their request go unanswered. As a result, seers were the main source of everyday divination.
The temple was changed to a centre for the worship of Apollo during the classical period of Greece and priests were added to the temple organization—although the tradition regarding prophecy remained unchanged—and the priestesses continued to provide the services of the oracle exclusively. It is from this institution that the English word "oracle" is derived.
The Delphic Oracle exerted considerable influence throughout Hellenic culture. Distinctively, this female was essentially the highest authority both civilly and religiously in male-dominated ancient Greece. She responded to the questions of citizens, foreigners, kings, and philosophers on issues of political impact, war, duty, crime, family, laws—even personal issues.
The semi-Hellenic countries around the Greek world, such as Lydia, Caria, and even Egypt also respected her and came to Delphi as supplicants.
Croesus, king of Lydia beginning in 560 BC, tested the oracles of the world to discover which gave the most accurate prophecies. He sent out emissaries to seven sites, all of whom were to ask the oracles on the same day what the king was doing at that very moment. Croesus proclaimed the oracle at Delphi to be the most accurate, as it correctly reported that the king was making a lamb-and-tortoise stew, and he graced it with a multitude of precious gifts. He then consulted Delphi before attacking Persia, and according to Herodotus was advised: "If you cross the river, a great empire will be destroyed". Believing the response favourable, Croesus attacked, but it was his own empire that ultimately was destroyed by the Persians.
She allegedly also proclaimed that there was no man wiser than Socrates, to which Socrates said that, if so, this was because he alone was aware of his own ignorance. After this confrontation, Socrates dedicated his life to a search for knowledge that was one of the founding events of western philosophy. He claimed that she was "an essential guide to personal and state development." This oracle's last recorded response was given in 362 AD, to Julian the Apostate.
The oracle's powers were highly sought after and never doubted. Any inconsistencies between prophecies and events were dismissed as failure to correctly interpret the responses, not an error of the oracle. Very often prophecies were worded ambiguously, so as to cover all contingencies – especially so "ex post facto". One famous such response to a query about participation in a military campaign was "You will go you will return never in war will you perish". This gives the recipient liberty to place a comma before or after the word "never", thus covering both possible outcomes. Another was the response to the Athenians when the vast army of king Xerxes I was approaching Athens with the intent of razing the city to the ground. "Only the wooden palisades may save you", answered the oracle, probably aware that there was sentiment for sailing to the safety of southern Italy and re-establishing Athens there. Some thought that it was a recommendation to fortify the Acropolis with a wooden fence and make a stand there. Others, Themistocles among them, said the oracle was clearly for fighting at sea, the metaphor intended to mean war ships. Others still insisted that their case was so hopeless that they should board every ship available and flee to Italy, where they would be safe beyond any doubt. In the event, variations of all three interpretations were attempted: some barricaded the Acropolis, the civilian population was evacuated over sea to nearby Salamis Island and to Troizen, and the war fleet fought victoriously at Salamis Bay. Should utter destruction have happened, it could always be claimed that the oracle had called for fleeing to Italy after all.
Dodona was another oracle devoted to the Mother Goddess identified at other sites with Rhea or Gaia, but here called Dione. The shrine of Dodona was the oldest Hellenic oracle, according to the fifth-century historian Herodotus and in fact dates to pre-Hellenic times, perhaps as early as the second millennium BC when the tradition probably spread from Egypt. Zeus displaced the Mother goddess and assimilated her as Aphrodite.
It became the second most important oracle in ancient Greece, and during the classical period it was dedicated to Zeus and to Heracles. At Dodona Zeus was worshipped as Zeus Naios or Naos (god of the spring Naiads, after a spring which existed under the oak) and as Zeus Bouleus (counsellor). Priestesses and priests interpreted the rustling of the oak leaves to determine the correct actions to be taken. The oracle was shared by Dione and Zeus.
Trophonius was an oracle at Lebadea in Boeotia, devoted to the chthonian Zeus Trophonius. The name Trophonius is derived from the Greek word "trepho" (to nourish); he was a Greek hero, daimon, or god. Demeter-Europa was his nurse. Europa (in Greek: broad-eyed) was a Phoenician princess whom Zeus, having transformed himself into a white bull, abducted and carried to Crete; ancient sources equate her with Astarte as a moon goddess. Some scholars connect Astarte with the Minoan snake goddess, whose cult as Aphrodite spread from Crete to Greece.
Near the Menestheus's port or "Menesthei Portus" (), modern El Puerto de Santa María, Spain, was the Oracle of Menestheus (), to whom also the inhabitants of Gades offered sacrifices.
The term "oracle" is also applied in modern English to parallel institutions of divination in other cultures.
Specifically, it is used in the context of Christianity for the concept of divine revelation, and in the context of Judaism for the Urim and Thummim breastplate, and in general any utterance considered prophetic.
In Celtic polytheism, divination was performed by the priestly caste, either the druids or the vates. This is reflected in the role of "seers" in Dark Age Wales ("dryw") and Ireland ("fáith").
In China, oracle bones were used for divination in the late Shang dynasty (c. 1600–1046 BC). Diviners applied heat to these bones, usually ox scapulae or tortoise plastrons, and interpreted the resulting cracks.
A different divining method, using the stalks of the yarrow plant, was practiced in the subsequent Zhou dynasty (1046–256 BC). Around the late 9th century BC, the divination system was recorded in the "I Ching", or "Book of Changes", a collection of linear signs used as oracles. In addition to its oracular power, the "I Ching" has had a major influence on the philosophy, literature and statecraft of China since the Zhou period.
In Hawaii, oracles were found at certain "heiau", Hawaiian temples, in towers covered in white "kapa" cloth made from plant fibres. Here, priests received the will of the gods. These towers were called "'Anu'u". An example can be found at Ahu'ena heiau in Kona.
In ancient India, the oracle was known as "akashwani" or "Ashareera vani" (a bodiless or unseen voice) or "asariri" (Tamil), literally meaning "voice from the sky", and was related to the message of a god. Oracles played key roles in many of the major incidents of the epics Mahabharata and Ramayana. An example is that Kamsa (or Kansa), the evil uncle of Krishna, was informed by an oracle that the eighth son of his sister Devaki would kill him. However, there are no references in any Indian literature to the oracle being a specific person.
The Igbo people of southeastern Nigeria in Africa have a long tradition of using oracles. In Igbo villages, oracles were usually female priestesses to a particular deity, usually dwelling in a cave or other secluded location away from urban areas, and, much as the oracles of ancient Greece, would deliver prophecies in an ecstatic state to visitors seeking advice. Two of their ancient oracles became especially famous during the pre-colonial period: the Agbala oracle at Awka and the Chukwu oracle at Arochukwu. Though the vast majority of Igbos today are Christian, many of them still use oracles.
Among the related Yoruba peoples of the same country, the Babalawos (and their female counterparts, the Iyanifas) serve collectively as the principal practitioners of the tribe's world-famous Ifa divination system. Due to this, they customarily officiate at a great many of its traditional and religious ceremonies.
In Norse mythology, Odin took the severed head of the god Mimir to Asgard for consultation as an oracle. The "Havamal" and other sources relate the sacrifice of Odin for the oracular Runes whereby he lost an eye (external sight) and won wisdom (internal sight; insight).
In the migration myth of the Mexitin, i.e., the early Aztecs, a mummy-bundle (perhaps an effigy) carried by four priests directed the trek away from the cave of origins by giving oracles. An oracle led to the foundation of Mexico-Tenochtitlan. The Yucatec Mayas knew oracle priests or "chilanes", literally 'mouthpieces' of the deity. Their written repositories of traditional knowledge, the Books of Chilam Balam, were all ascribed to one famous oracle priest who had correctly predicted the coming of the Spaniards and its associated disasters.
In Tibet, oracles have played, and continue to play, an important part in religion and government. The word "oracle" is used by Tibetans to refer to the spirit that enters those men and women who act as media between the natural and the spiritual realms. The media are, therefore, known as "kuten", which literally means, "the physical basis".
The Dalai Lama, who lives in exile in northern India, still consults an oracle known as the Nechung Oracle, which is considered the official state oracle of the government of Tibet. The Dalai Lama has, according to centuries-old custom, consulted the Nechung Oracle during the new year festivities of Losar. Nechung and Gadhong are the primary oracles currently consulted; former oracles such as Karmashar and Darpoling are no longer active in exile. The Gadhong oracle has died, leaving Nechung as the only primary oracle. Another oracle the Dalai Lama consults is the Tenma Oracle, whose medium is a young Tibetan woman named Khandro La, through whom the mountain goddess Tseringma and the other eleven goddesses speak. The Dalai Lama gives a complete description of the process of trance and spirit possession in his book "Freedom in Exile".
Dorje Shugden oracles were once consulted by the Dalai Lamas, until the 14th Dalai Lama banned the practice, even though he had himself successfully consulted Dorje Shugden for advice on his escape. Because of the ban, many abbots who were worshippers of Dorje Shugden have been forced into opposition to the Dalai Lama.
In computer science, an oracle is a black box that is always able to provide correct answers. It is the component of an oracle machine after which the machine is named.
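In software testing, for instance, a "test oracle" plays exactly this black-box role: a trusted source of correct answers against which an implementation is checked. A minimal illustrative sketch in Java, with all names hypothetical and the well-tested library sort standing in for the oracle:

```java
import java.util.Arrays;
import java.util.Random;

public class OracleCheck {
    // The oracle: a black box assumed to always return the correct answer.
    static int[] oracle(int[] input) {
        int[] copy = input.clone();
        Arrays.sort(copy); // the library sort plays the oracle here
        return copy;
    }

    // The implementation under test; a real test would exercise custom code.
    static int[] underTest(int[] input) {
        int[] copy = input.clone();
        Arrays.sort(copy);
        return copy;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        for (int trial = 0; trial < 1000; trial++) {
            int[] input = rng.ints(20, 0, 100).toArray();
            if (!Arrays.equals(underTest(input), oracle(input))) {
                throw new AssertionError("implementation disagrees with oracle");
            }
        }
        System.out.println("all trials match the oracle");
    }
}
```
| https://en.wikipedia.org/wiki?curid=22589 |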
Oracle Corporation
Oracle Corporation is an American multinational computer technology corporation headquartered in Redwood Shores, California. The company sells database software and technology, cloud engineered systems, and enterprise software products—particularly its own brands of database management systems. In 2019, Oracle was the second-largest software company by revenue and market capitalization.
The company also develops and builds tools for database development and systems of middle-tier software, enterprise resource planning (ERP) software, Human Capital Management (HCM) software, customer relationship management (CRM) software, and supply chain management (SCM) software.
Larry Ellison co-founded Oracle Corporation in 1977 with Bob Miner and Ed Oates under the name Software Development Laboratories (SDL). Ellison took inspiration from the 1970 paper written by Edgar F. Codd on relational database management systems (RDBMS) named "A Relational Model of Data for Large Shared Data Banks". He heard about the IBM System R database from an article in the "IBM Research Journal" provided by Oates. Ellison wanted to make Oracle's product compatible with System R, but failed to do so because IBM kept the error codes for its DBMS secret. SDL changed its name to Relational Software, Inc. (RSI) in 1979, then again to Oracle Systems Corporation in 1983, to align itself more closely with its flagship product, Oracle Database. At this stage Bob Miner served as the company's senior programmer. On March 12, 1986, the company had its initial public offering. In 1995, Oracle Systems Corporation changed its name to Oracle Corporation; the company is officially named Oracle, but is sometimes referred to as Oracle Corporation, the name of the holding company. Part of Oracle Corporation's early success arose from using the C programming language to implement its products, which eased porting to different operating systems, most of which support C.
Oracle ranked No. 82 in the 2018 Fortune 500 list of the largest United States corporations by total revenue. According to Bloomberg, Oracle's CEO-to-employee pay ratio is 1,205:1. The CEO's compensation in 2017 was $108,295,023. Oracle is one of the approved employers of ACCA and the median employee compensation rate was $89,887.
Oracle designs, manufactures, and sells both software and hardware products, as well as offering services that complement them (such as financing, training, consulting, and hosting services). Many of the products have been added to Oracle's portfolio through acquisitions.
Oracle's E-delivery service (Oracle Software Delivery Cloud) provides generic downloadable Oracle software and documentation.
Oracle Corporation has acquired and developed the following additional database technologies:
Oracle Fusion Middleware is a family of middleware software products, including (for instance) application server, system integration, business process management (BPM), user interaction, content management, identity management and business intelligence (BI) products.
Oracle Secure Enterprise Search (SES), Oracle's enterprise-search offering, gives users the ability to search for content across multiple locations, including websites, XML files, file servers, content management systems, enterprise resource planning systems, customer relationship management systems, business intelligence systems, and databases.
Released in 2008, the Oracle Beehive collaboration software provides team workspaces (including wikis, team calendaring and file sharing), email, calendar, instant messaging, and conferencing on a single platform. Customers can use Beehive as licensed software or as software as a service ("SaaS").
Oracle also sells a suite of business applications. The Oracle E-Business Suite includes software to perform various enterprise functions related to (for instance) financials, manufacturing, customer relationship management (CRM), enterprise resource planning (ERP) and human resource management. The Oracle Retail Suite covers the retail-industry vertical, providing merchandise management, price management, invoice matching, allocations, store operations management, warehouse management, demand forecasting, merchandise financial planning, assortment planning, and category management. Users can access these facilities through a browser interface over the Internet or via a corporate intranet.
Following a number of acquisitions beginning in 2003, especially in the area of applications, Oracle Corporation maintains a number of product lines:
Development of applications commonly takes place in Java (using Oracle JDeveloper) or through PL/SQL (using, for example, Oracle Forms and Oracle Reports/BIPublisher). Oracle Corporation has started a drive toward "wizard"-driven environments with a view to enabling non-programmers to produce simple data-driven applications.
Oracle Corporation works with "Oracle Certified Partners" to enhance its overall product marketing. The variety of applications from third-party vendors includes database applications for archiving, splitting and control, ERP and CRM systems, as well as more niche and focused products providing a range of commercial functions in areas like human resources, financial control and governance, risk management, and compliance (GRC). Vendors include Hewlett-Packard, Creoal Consulting, UC4 Software, Motus, and Knoa Software.
Oracle Enterprise Manager (OEM) provides web-based monitoring and management tools for Oracle products (and for some third-party software), including database management, middleware management, application management, hardware and virtualization management and cloud management.
The Primavera products of Oracle's Construction & Engineering Global Business Unit (CEGBU) consist of project-management software.
Oracle Corporation's tools for developing applications include (among others):
Many external and third-party tools make the Oracle database administrator's tasks easier.
Oracle Corporation develops and supports two operating systems: Oracle Solaris and Oracle Linux.
Oracle Cloud is a cloud computing service offered by Oracle Corporation providing servers, storage, network, applications and services through a global network of Oracle Corporation managed data centers. The company allows these services to be provisioned on demand over the Internet.
Oracle Cloud provides Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS) and Data as a Service (DaaS). These services are used to build, deploy, integrate and extend applications in the cloud. The platform supports open standards (SQL, HTML5, REST, etc.), open-source solutions (Kubernetes, Hadoop, Kafka, etc.), and a variety of programming languages, databases, tools and frameworks, including Oracle-specific, open-source and third-party software and systems.
On July 28, 2016 Oracle bought NetSuite, the very first cloud company, for $9.3 billion. On May 16, 2018 Oracle announced that it had acquired DataScience.com, a privately held cloud workspace platform for data science projects and workloads.
Registered customers can submit Service Requests (SRs), usually via the web-accessible My Oracle Support (MOS), a reincarnation of Oracle Metalink, with web access administered by a site Customer User Administrator (CUA).
In 1990, Oracle laid off 10% (about 400 people) of its work force because of accounting errors. This crisis came about because of Oracle's "up-front" marketing strategy, in which sales people urged potential customers to buy the largest possible amount of software all at once. The sales people then booked the value of future license sales in the current quarter, thereby increasing their bonuses. This became a problem when the future sales subsequently failed to materialize. Oracle eventually had to restate its earnings twice, and also settled (out of court) class-action lawsuits arising from its having overstated its earnings. Ellison stated in 1992 that Oracle had made "an incredible business mistake".
In 1994, Informix overtook Sybase and became Oracle's most important rival. The intense war between Informix CEO Phil White and Ellison made front-page news in Silicon Valley for three years. Informix claimed that Oracle had hired away Informix engineers to disclose important trade secrets about an upcoming product. Informix finally dropped its lawsuit against Oracle in 1997. In November 2005, a book detailing the war between Oracle and Informix was published, titled "The Real Story of Informix Software and Phil White". It gave a detailed chronology of the battle of Informix against Oracle, and how Informix Software's CEO Phil White landed in jail because of his obsession with overtaking Ellison.
Once it had overcome Informix and Sybase, Oracle Corporation enjoyed years of dominance in the database market until use of Microsoft SQL Server became widespread in the late 1990s and IBM acquired Informix Software in 2001 (to complement its DB2 database). Oracle competes for new database licenses on UNIX, Linux, and Windows operating systems primarily against IBM's DB2 and Microsoft SQL Server. IBM's DB2 dominates the mainframe database market.
In 2004, Oracle's sales grew at a rate of 14.5% to $6.2 billion, giving it 41.3% and the top share of the relational-database market ("InformationWeek" – March 2005), with market share estimated at up to 44.6% in 2005 by some sources.
Oracle Corporation's main competitors in the database arena remain IBM DB2 and Microsoft SQL Server, and to a lesser extent Sybase and Teradata, with open source databases such as PostgreSQL and MySQL also having a significant share of the market. EnterpriseDB, based on PostgreSQL, has made inroads by proclaiming that its product delivers Oracle compatibility features at a much lower price-point.
In the software-applications market, Oracle Corporation primarily competes against SAP. On March 22, 2007 Oracle sued SAP, accusing them of fraud and unfair competition.
In the market for business intelligence software, many other software companies—small and large—have successfully competed in quality with Oracle and SAP products. Business intelligence vendors can be categorized into the "big four" consolidated BI firms such as Oracle, who has entered BI market through a recent trend of acquisitions (including Hyperion Solutions), and the independent "pure play" vendors such as MicroStrategy, Actuate, and SAS.
Oracle Financials was ranked in the Top 20 Most Popular Accounting Software Infographic by Capterra in 2014, beating out SAP and a number of their other competitors.
From 1988, Oracle Corporation and the German company SAP AG had a decade-long history of cooperation, beginning with the integration of SAP's R/3 enterprise application suite with Oracle's relational database products. Despite the SAP partnership with Microsoft, and the increasing integration of SAP applications with Microsoft products (such as Microsoft SQL Server, a competitor to Oracle Database), Oracle and SAP continue their cooperation. According to Oracle Corporation, the majority of SAP's customers use Oracle databases.
In 2004, Oracle began to increase its interest in the enterprise-applications market (in 1989, Oracle had already released Oracle Financials). A series of acquisitions by Oracle Corporation began, most notably with those of PeopleSoft, Siebel Systems and Hyperion.
SAP recognized that Oracle had started to become a competitor in a market where SAP had the leadership, and saw an opportunity to lure in customers from those companies that Oracle Corporation had acquired. SAP would offer those customers special discounts on the licenses for its enterprise applications.
Oracle Corporation would resort to a similar strategy, by advising SAP customers to get "OFF SAP" (a play on the words of the acronym for its middleware platform "Oracle Fusion for SAP"), and also by providing special discounts on licenses and services to SAP customers who chose Oracle Corporation products.
Some analysts have suggested the suit could form part of a strategy by Oracle Corporation to decrease competition with SAP in the market for third-party enterprise software maintenance and support.
On July 3, 2007, SAP admitted that TomorrowNow employees had made "inappropriate downloads" from the Oracle support website. However, it claims that SAP personnel and SAP customers had no access to Oracle intellectual property via TomorrowNow. SAP's CEO Henning Kagermann stated that "Even a single inappropriate download is unacceptable from my perspective. We regret very much that this occurred." Additionally, SAP announced that it had "instituted changes" in TomorrowNow's operational oversight.
On November 23, 2010, a U.S. district court jury in Oakland, California found that SAP AG must pay Oracle Corp $1.3 billion for copyright infringement, awarding damages that could be the largest-ever for copyright infringement. While admitting liability, SAP estimated the damages at no more than $40 million, while Oracle claimed they were at least $1.65 billion. The awarded amount is one of the 10 or 20 largest jury verdicts in U.S. legal history. SAP said it was disappointed by the verdict and might appeal. On September 1, 2011, a federal judge overturned the judgment and offered a reduced amount or a new trial, calling Oracle's original award "grossly" excessive. Oracle chose a new trial.
On August 3, 2012, SAP and Oracle agreed on a judgment for $306 million in damages, pending approval from the U.S. district court judge, "to save time and expense of [a] new trial". Once the accord is approved, Oracle can ask a federal appeals court to reinstate the earlier jury verdict. In addition to the damages payment, SAP has already paid Oracle $120 million for its legal fees.
and "Unbreakable"
Oracle Corporation produces and distributes the "Oracle ClearView" series of videos as part of its marketing mix.
In 2000, Oracle attracted attention from the computer industry and the press after hiring private investigators to dig through the trash of organizations involved in an antitrust trial involving Microsoft. The Chairman of Oracle Corporation, Larry Ellison, staunchly defended his company's hiring of an East Coast detective agency to investigate groups that supported rival Microsoft Corporation during its antitrust trial, calling the snooping a "public service". The investigation reportedly included a $1,200 offer to janitors at the Association for Competitive Technology to look through Microsoft's trash. When asked how he would feel if others were looking into Oracle's business activities, Ellison said: "We will ship our garbage to Redmond, and they can go through it. We believe in full disclosure."
In 2002, Oracle Corporation marketed many of its products using the slogan "Can't break it, can't break in", or "Unbreakable", signalling an emphasis on information security. Oracle Corporation also stressed the reliability of networked databases and network access to databases as major selling points.
However, two weeks after its introduction, David Litchfield, Alexander Kornbrust, Cesar Cerrudo and others demonstrated a whole suite of successful attacks against Oracle products. Oracle Corporation's chief security officer Mary Ann Davidson said that, rather than representing a literal claim of Oracle's products' impregnability, she saw the campaign in the context of fourteen independent security evaluations that Oracle Corporation's database server had passed.
In 2004, then-United States Attorney General John Ashcroft sued Oracle Corporation to prevent it from acquiring a multibillion-dollar intelligence contract. After Ashcroft's resignation from government, he founded a lobbying firm, The Ashcroft Group, which Oracle hired in 2005. With the group's help, Oracle went on to acquire the contract.
Computer Sciences Corporation reportedly spent a billion dollars developing the Expeditionary Combat Support System for the United States Air Force. It yielded no significant capability, because, according to an Air Force source, the Oracle software on which the system was based could not be adapted to meet the specialized performance criteria.
Oracle Corporation was awarded a contract by the State of Oregon's Oregon Health Authority (OHA) to develop Cover Oregon, the state's healthcare exchange website, as part of the U.S. Patient Protection and Affordable Care Act. When the site tried to go live on October 1, 2013, it failed, and registrations had to be taken using paper applications until the site could be fixed.
On April 25, 2014, the State of Oregon voted to discontinue Cover Oregon and instead use the federal exchange to enroll Oregon residents. The cost of switching to the federal portal was estimated at $5 million, whereas fixing Cover Oregon would have required another $78 million.
Oracle president Safra Catz responded to Cover Oregon and the OHA in a letter claiming that the site's problems were due to OHA mismanagement, specifically that a third-party systems integrator was not hired to manage the complex project.
In August 2014, Oracle Corporation sued Cover Oregon for breach of contract, and then later that month the state of Oregon sued Oracle Corporation, in a civil complaint for breach of contract, fraud, filing false claims and "racketeering". In September 2016, the two sides reached a settlement valued at over $100 million to the state, and a six-year agreement for Oracle to continue modernizing state software and IT.
On January 27, 2010, Oracle announced it had completed its acquisition of Sun Microsystems—valued at more than $7 billion—a move that transformed Oracle from solely a software company to a manufacturer of both software and hardware. The acquisition was delayed for several months by the European Commission because of concerns about MySQL, but was unconditionally approved in the end. This acquisition was important to some in the open source community and also to some other companies, as they feared Oracle might end Sun's traditional support of open source projects. Since the acquisition, Oracle has discontinued OpenSolaris and StarOffice, and sued Google over their newly acquired Java patents from Sun. In September 2011, U.S. State Department Embassy cables were leaked to WikiLeaks. One cable revealed that the U.S. pressured the E.U. to allow Oracle to acquire Sun.
On July 29, 2010, the United States Department of Justice filed suit against Oracle Corporation alleging fraud. The lawsuit argues that the government received deals inferior to those Oracle gave to its commercial clients. The DoJ added its heft to an already existing whistleblower lawsuit filed by Paul Frascella, who was once senior director of contract services at Oracle. The suit was settled in May 2012.
Oracle, the plaintiff, bought the Java computer programming language when it acquired Sun Microsystems in January 2010. The Java software includes sets of pre-developed code that accomplish common tasks consistently among programs and apps. The pre-developed code is organized into separate "packages", each of which contains a set of "classes". Each class contains numerous methods, which instruct a program or app to perform a certain task. Software developers "became accustomed to using Java's designations at the package, class, and method level".
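The package, class, and method hierarchy described here can be seen in any Java program. A small illustration using the real java.util package:

```java
// "java.util" is a package; "Arrays" is a class inside it; "sort" and
// "binarySearch" are methods of that class. Developers invoke them by these
// established designations, which is why reimplementing the same names
// became central to the dispute.
import java.util.Arrays;

public class Designations {
    public static void main(String[] args) {
        int[] values = {42, 7, 19};
        Arrays.sort(values);                        // package.class.method in action
        int index = Arrays.binarySearch(values, 19);
        System.out.println("19 found at index " + index); // prints index 1
    }
}
```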
Oracle and Google (the defendant) tried to negotiate an agreement for Oracle to license Java to Google, which would have allowed Google to use Java in developing programs for mobile devices using the Android operating system. However, the two companies never reached an agreement. After negotiations failed, Google created its own programming platform, which was based on Java, and contained a mix of 37 copied Java packages and new packages developed by Google.
In 2010, Oracle sued Google for copyright infringement for the use of the 37 Java packages. The case was handled in U.S. District Court for the Northern District of California and assigned to Judge William H. Alsup (who taught himself how to code computers). In the lawsuit, Oracle sought between $1.4 billion and $6.1 billion. In June 2011 the judge had to force Google through a judicial order to make public the details about Oracle's claim for damages.
By the end of the first jury trial (the legal dispute would eventually go on to another trial), the arguments made by Oracle's attorneys focused on a Java function called "rangeCheck": "The argument centered on a function called rangeCheck. Of all the lines of code that Oracle had tested—15 million in total—these were the only ones that were 'literally' copied. Every keystroke, a perfect duplicate." ("The Verge", 10/19/17) Although Google admitted to copying the packages, Judge Alsup found that none of the Java packages were covered under copyright protection, and therefore Google did not infringe.
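For context, rangeCheck is a short bounds-checking helper in the java.util.Arrays class. The sketch below paraphrases the shape of such a function as described in trial coverage; it is not a verbatim quotation of Oracle's code:

```java
public class RangeCheckSketch {
    // Verify that [fromIndex, toIndex) is a valid sub-range of an array of
    // the given length, throwing the conventional exceptions otherwise.
    static void rangeCheck(int arrayLength, int fromIndex, int toIndex) {
        if (fromIndex > toIndex) {
            throw new IllegalArgumentException(
                    "fromIndex(" + fromIndex + ") > toIndex(" + toIndex + ")");
        }
        if (fromIndex < 0) {
            throw new ArrayIndexOutOfBoundsException(fromIndex);
        }
        if (toIndex > arrayLength) {
            throw new ArrayIndexOutOfBoundsException(toIndex);
        }
    }

    public static void main(String[] args) {
        rangeCheck(10, 2, 8); // valid: no exception
        rangeCheck(10, 8, 2); // invalid: throws IllegalArgumentException
    }
}
```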
After the case was over, Oracle appealed to the United States Court of Appeals for the Federal Circuit (750 F.3d 1339 (2014)). On May 9, 2014, the appeals court partially reversed Judge Alsup's decision, finding that Java APIs are copyrightable. API stands for "application programming interface"; APIs are how different computer programs or apps communicate with each other. However, the appeals court also left open the possibility that Google might have a "fair use" defense.
On October 6, 2014, Google filed a petition to appeal to the U.S. Supreme Court, but the Supreme Court denied the petition.
The case was then returned to the U.S. District Court for another trial about Google's fair use defense. Oracle sought $9 billion in damages. In May 2016, the trial jury found that Google's use of Java's APIs was considered fair use.
In February 2017, Oracle filed another appeal to the U.S. Court of Appeals for the Federal Circuit. This time it was asking for a new trial because the District Court "repeatedly undermined Oracle's case", which Oracle argued led the jury to make the wrong decision. According to ZDNet, "For example, it [Oracle] says the court wrongly bought Google's claim that Android was limited to smartphones while Java was for PCs, whereas Oracle contends that Java and Android both compete as platforms for smart TVs, cars, and wearables."
On August 13, 2010, an internal Oracle memo leaked to the Internet cited plans for ending the OpenSolaris operating system project and community. With Oracle planning to develop Solaris only in a closed source fashion, OpenSolaris developers moved to the Illumos and OpenIndiana project, among others.
As Oracle completed its acquisition of Sun Microsystems in February 2010, it announced that OpenSSO would no longer be a strategic product. Shortly after, OpenSSO was forked as OpenAM, which continues to be developed and supported by ForgeRock.
On September 6, 2010, Oracle announced that former Hewlett-Packard CEO Mark Hurd was to replace Charles Phillips, who resigned as Oracle Co-President. In an official statement made by Larry Ellison, Phillips had previously expressed his desire to transition out of the company. Ellison had asked Phillips to stay on through the integration of Sun Microsystems Inc. In a separate statement regarding the transition, Ellison said "Mark did a brilliant job at HP and I expect he'll do even better at Oracle. There is no executive in the IT world with more relevant experience than Mark."
On September 7, 2010, HP announced a civil lawsuit against Mark Hurd "to protect HP's trade secrets", in response to Oracle hiring Hurd. On September 20, Oracle and HP published a joint press release announcing the resolution of the lawsuit on confidential terms and reaffirming commitment to long-term strategic partnership between the companies.
A number of OpenOffice.org developers formed The Document Foundation and received backing from Google, Novell, Red Hat, and Canonical, among others, but were unable to persuade Oracle to donate the OpenOffice.org brand, causing a fork in the development of OpenOffice.org, with the foundation now developing and promoting LibreOffice. Oracle expressed no interest in sponsoring the new project and asked the OpenOffice.org developers who had started it to resign from the company due to "conflicts of interest". On November 1, 2010, 33 of the OpenOffice.org developers gave their letters of resignation. On June 1, 2011, Oracle donated OpenOffice.org to the Apache Software Foundation.
On June 15, 2011, HP filed a lawsuit in California Superior Court in Santa Clara, claiming that Oracle had breached an agreement to support the Itanium microprocessor used in HP's high-end enterprise servers. Oracle called the lawsuit "an abuse of the judicial process" and said that had it known SAP's Léo Apotheker was about to be hired as HP's new CEO, any support for HP's Itanium servers would not have been implied.
On August 1, 2012, a California judge said in a tentative ruling that Oracle must continue porting its software at no cost until HP discontinues its sales of Itanium-based servers. HP was awarded $3 billion in damages against Oracle in 2016. HP argued Oracle's canceling support damaged HP's Itanium server brand. Oracle has announced it will appeal both the decision and damages.
On August 31, 2011, "The Wall Street Journal" reported that Oracle was being investigated by the Federal Bureau of Investigation for paying bribes to government officials in order to win business in Africa, in contravention of the Foreign Corrupt Practices Act (FCPA).
On April 20, 2012, the US General Services Administration banned Oracle from the most popular portal for bidding on GSA contracts, for undisclosed reasons. Oracle had previously used this portal to generate around four hundred million dollars a year in revenue. Oracle had earlier settled a lawsuit filed under the False Claims Act, which accused the company of overbilling the US government between 1998 and 2006; the 2011 settlement required Oracle to pay $199.5 million to the General Services Administration.
Oracle Corporation has its overall headquarters on the San Francisco Peninsula in the Redwood Shores area of Redwood City, adjacent to Belmont and near San Carlos Airport (IATA airport code: SQL).
Oracle HQ stands on the former site of Marine World/Africa USA, which moved from Redwood Shores to Vallejo in 1986. Oracle Corporation originally leased two buildings on the site, moving its finance and administration departments from the corporation's former headquarters on Davis Drive, Belmont, California. Eventually, Oracle purchased the complex and constructed a further four main buildings.
The distinctive Oracle Parkway buildings, nicknamed the Emerald City, served as sets for the futuristic headquarters of the fictional company "NorthAm Robotics" in the Robin Williams film "Bicentennial Man" (1999).
The campus represented the headquarters of Cyberdyne Systems in the movie "Terminator Genisys" (2015).
Oracle Corporation operates in multiple markets and has acquired several companies which formerly functioned autonomously. In some cases these acquisitions provided the starting points for global business units (GBUs) targeting particular vertical markets.
On October 20, 2006, the Golden State Warriors and the Oracle Corporation announced a 10-year agreement in which the Oakland Arena would become known as the Oracle Arena.
Larry Ellison's sailing team competes as Oracle Team USA. The team has won the America's Cup twice, in 2010 (as BMW Oracle Racing) and in 2013, despite being penalized for cheating.
Sean Tucker's "Challenger II" stunt biplane is sponsored by Oracle and performs frequently at air shows around the US.
On January 9, 2019, ESPN reported that the San Francisco Giants entered into a 20-year agreement to rename their stadium Oracle Park. | https://en.wikipedia.org/wiki?curid=22591 |
Official Monster Raving Loony Party
The Official Monster Raving Loony Party is a political party established in the United Kingdom in 1983 by the musician David Sutch, also known as "Screaming Lord Sutch, 3rd Earl of Harrow", or simply "Lord Sutch". It is notable for its deliberately bizarre policies; it effectively exists to satirise British politics and to offer an alternative for protest voters, especially in constituencies where the party holding a safe seat is unlikely to lose it.
Starting in 1963, David Sutch, head of the rock group Screaming Lord Sutch and the Savages, stood in British parliamentary elections under a range of party names, initially as the National Teenage Party candidate. At that time the minimum voting age was 21. The party's name was intended to highlight what Sutch and others viewed as hypocrisy, since teenagers were unable to vote because of their supposed immaturity while the adults running the country were involved in scandals such as the Profumo affair.
After being shot during a mugging attempt whilst living in the United States, Sutch returned to Britain and to politics during the 1980s. The "Raving Loony" name first appeared at the Bermondsey by-election of 1983.
A similar concept had appeared earlier in the "Election Night Special" sketch on "Monty Python's Flying Circus", in which the Silly and Sensible parties competed; and a similar skit by "The Goodies", in which Graeme Garden stood as a "Science Loony". There had also been a "Science Fiction Looney" candidate competing in the 1976 Cambridge by-election.
Two others were important in the formation of the OMRLP. The first was John Desmond Dougrez-Lewis, who stood in the Crosby by-election of 1981 (won by the Social Democratic Party's co-founder Shirley Williams) as "Tarquin Fin-tim-lin-bin-whin-bim-lim-bus-stop-F'tang-F'tang-Olé-Biscuitbarrel", a name taken from the Election Night Special Monty Python sketch. He had changed his name by deed poll from John Desmond Lewis on behalf of the Cambridge University Raving Loony Society (CURLS). CURLS was an "anti-political party" and charity fundraising group formed largely as a fun counter-response to increasingly polarised student politics in Cambridge, and was responsible for a number of fun stunts. Its Oxford University equivalent was the "Oxford Raving Lunatics". Dougrez-Lewis became Sutch's agent at the notorious Bermondsey by-election mentioned above, where the OMRLP banner was first officially unfurled. Reverting to his original name, Dougrez-Lewis stood for the new party in Cambridge in the 1983 general election.
Another serial offbeat by-election candidate was Commander Bill Boaks, a retired World War II hero who took part in sinking the "Bismarck". Boaks campaigned and stood for election for over thirty years on limited funds, always on the issue of road safety. Boaks proved influential on Sutch's direction as the leading anti-politician: "It's the ones who "don't" vote you really want, because they're the ones who think".
Boaks thought that increased traffic and more roads would cause problems, and he addressed road safety with flamboyant campaigning and a variety of tactics, including private prosecution of public figures who escaped public prosecution for drunk driving. He successfully campaigned with Sutch and others to pedestrianise London's Carnaby Street. While recovering from being struck by a motorcycle, Boaks was one of Sutch's counting agents at Bermondsey in 1983. Since Boaks' death, popular opinion on road safety has moved closer to his views.
Screaming Lord Sutch committed suicide on 16 June 1999 while suffering from clinical depression after his mother, Annie, died in 1998. A biography of Sutch, "The Man Who Was Screaming Lord Sutch" (by Graham Sharpe, the Media Relations Manager for bookmakers William Hill), was published in April 2005, describing what remained of the party as "wannabes, never-would-bes and some bloody-well-shouldn't-bes".
Sutch's funeral – organised by his lifelong friend, the session drummer Carlo Little – was attended by members of the OMRLP and the Raving Loony Green Giant Party, including Stuart Hughes, who with Freddie Zapp brought along a huge floral tribute shaped as an OMRLP rosette. The running of the OMRLP fell to Alan "Howling Laud" Hope and his cat, Catmando, who were the joint winners of the 1999 membership ballot to replace Sutch. Although Hope took over as party leader after Sutch's death, the real day-to-day running of the party has always been done by other party members.
The OMRLP fielded 15 candidates in the 2001 general election, at which they had their best general election results to date.
The party's manifesto for the 2005 general election, entitled "The Manicfesto", featured as its major commitment the long-held pledge to abolish income tax, citing as always that it was only meant to be a temporary measure during the Napoleonic Wars. Also included was another old staple, the "Putting Parliament on Wheels" idea of having Parliament sit throughout the country rather than solely in London, with special emphasis this time on how this would negate the need for national and regional assemblies.
The OMRLP has continued to field candidates since 2001, with reduced success, all losing their deposits. "Top Cat" Owen is the only member of the current OMRLP to have polled over 1,000 votes (2,859 votes in the 1994 European elections).
The OMRLP's official headquarters was originally the Golden Lion Hotel in Ashburton, Devon, then the Dog & Partridge pub at Yateley in Hampshire, but this was lost shortly after the 2005 general election. Conference venues are now chosen in advance: the 2006 conference was held at Torrington in Devon, and the 2007 conference was held in Jersey.
The party's last elected representative was R. U. Seerius (formerly Jon Brewer) on the 11-member Sawley Parish Council in South Derbyshire, first elected (uncontested) in 2005. He was no longer a member as of May 2007, having failed, due to illness, to appear at no fewer than 11 statutory meetings during his time in office.
The OMRLP succeeded in standing in both by-elections of 19 July 2007, in Sedgefield and Ealing Southall, but again achieved derisory results: Alan Hope took 129 votes (0.46%) and John Cartwright 188 (0.51%), beating the English Democrats but finishing behind the Christian Party of the Reverend George Hargreaves and David Braid.
In recognition that reforms were needed, Peter 'T.C.' Owen was moved from the honorary position of Party chairman to that of Deputy Leader (and thus effective day-to-day leader) of the OMRLP, whilst Anthony "The Jersey Flyer" Blyth (owner of the Ommaroo and a member of the Jersey Heritage Trust) took over Owen's role. Owen is one of four Raving Loonies to have scored over 1000 votes in an election.
On 31 May 2017 Hope was interviewed by Andrew Neil on the BBC's "Daily Politics" programme.
In 1987, the OMRLP won its first seat on Ashburton Town Council in Devon, as Alan "Howling Laud" Hope was elected unopposed. He subsequently became Deputy Mayor, and later, in 1998, Mayor of Ashburton (opposed mainly by the local Conservatives, who never forgave him for becoming a member of the OMRLP), until he moved to Hampshire after Sutch's death. For over a decade, his hotel "The Golden Lion" in Ashburton (referred to by some in the party as "The Mucky Mog") was the party's headquarters and conference centre.
The first party member to win a vote, rather than an uncontested election, was Stuart Hughes, taking the "safe" Conservative Party seat of Sidmouth Woolbrook on East Devon District Council in May 1991. He also took a seat on Sidmouth Town Council from the Conservatives the following day. His success was met with hostility from the local Tories. Hughes' reaction was to attempt to make their lives a misery for the next three years by refusing to pay his Community Charge (popularly known as the Poll Tax), then dumping scrap metal in the middle of the council chambers to the value of his unpaid tax when threatened with legal action. He also formed an alliance known as "The Coastals" (because of the seats they held) of Independents and the sole Green Party councillor, giving East Devon's ruling Conservatives the first true opposition they had faced for decades (the local Liberal Democrat and Labour parties being negligible).
Hughes retained his seats with increased majorities in subsequent elections, and took the Devon County Council seat from the local party's Chief Whip in the council.
To date, two councillors have subsequently become mayors: Alan Hope in Ashburton, Devon and Chris "Screwy" Driver on the Isle of Sheppey in Kent.
At the Bootle by-election in May 1990, the Loony candidate (Sutch) received more votes than the candidate for the continuing Social Democrats. The story was a major headline in many UK newspapers; ironically, the by-election itself had attracted little coverage. Bootle is still regarded by the party as their most significant result in politics, albeit one largely lampooning the political world.
In the 2019 Brecon and Radnorshire by-election, the OMRLP candidate Lady Lily the Pink polled more votes than the United Kingdom Independence Party. The party got a record number of votes in the 2019 general election.
As of 2019, the party has two parish councillors.
For the 2010 general election, the OMRLP used the description "Monster Raving Loony William Hill Party", which was met with criticism by some members, with John Cartwright, Loony candidate in Croydon, publicly stating, "I am not and will not be a mercenary, or an advert, for a commercial company during the course of the election campaign."
The statement of accounts for the period 1 January to 31 December 2008 puts membership at 1,354, made up of 173 paying members and 1,181 "lifetime but non-paying" members. Membership currently costs £12.00 per year, which includes a party rosette, a certificate of insanity, a "Loony Badge", a personal party ID card, and a letter from the party's current leader, Alan "Howlin Laud" Hope. A £14.50 membership is available for those overseas.
Sir Patrick Moore (1923–2012), the British TV amateur astronomer, was the finance minister of the party for a short time. He once said that the Monster Raving Loony Party "had an advantage over all the other parties, in that they knew they were loonies".
In 1992, the Glasgow band Hugh Reed and the Velvet Underpants released the song "Vote Monster Raving Looney", despite not having any actual ties to the party.
The OMRLP are distinguished by having a deliberately bizarre manifesto, which contains things that seem to be impossible or too absurd to implement – usually to highlight what they see as real-life absurdities. Despite its satirical nature, some of the things that have featured in Loony manifestos have become law, such as "passports for pets", abolition of dog licences and all-day pub openings.
Other suggestions so far unadopted included minting a 99p coin and forbidding greyhound racing in order to "stop the country going to the dogs".
The Loonies generally field as many candidates as possible in United Kingdom general elections, some (but by no means all) standing under ridiculous names they have adopted via deed poll. Sutch himself stood against all three main party leaders (John Major, Neil Kinnock and Paddy Ashdown) in the 1992 general election. Parliamentary candidates have to pay their own deposit (which currently stands at £500) and cover all of their expenses. No OMRLP candidate has managed to get the required 5% of the popular vote needed to retain their deposit, but this does not stop people standing. Sutch came closest with 4.1% and over a thousand votes at the Rotherham by-election, whilst Stuart Hughes still holds the record for the largest number of votes for a Loony candidate at a Parliamentary election, with 1,442 at the 1992 general election in the Honiton seat in east Devon. The all-time highest vote achieved was by comedian Danny Bamford aka Danny Blue, who secured 3,339 votes in the 1994 European elections under the pseudonym of "John Major". Bamford had also acted as an election agent for Lindi St Clair's rival Corrective Party, and was a former close associate of Stuart Hughes.
In the run-up to the 2011 Alternative Vote referendum, the party adopted an equivocal stance, advising its supporters, on 8 April, to "vote as you see fit". In response to mainstream parties debating Brexit, the OMRLP suggested sending Noel Edmonds to the European Parliament "because he understands Deal or No Deal". | https://en.wikipedia.org/wiki?curid=22592 |
Omega-3 fatty acid
Omega−3 fatty acids, also called Omega-3 oils, ω−3 fatty acids or "n"−3 fatty acids, are polyunsaturated fatty acids (PUFAs) characterized by the presence of a double bond three atoms away from the terminal methyl group in their chemical structure. They are widely distributed in nature, being important constituents of animal lipid metabolism, and they play an important role in the human diet and in human physiology. The three types of omega−3 fatty acids involved in human physiology are α-linolenic acid (ALA), found in plant oils, and eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), both commonly found in marine oils. Marine algae and phytoplankton are primary sources of omega−3 fatty acids. Common sources of plant oils containing ALA include walnut, edible seeds, clary sage seed oil, algal oil, flaxseed oil, Sacha Inchi oil, "Echium" oil, and hemp oil, while sources of animal omega−3 fatty acids EPA and DHA include fish, fish oils, eggs from chickens fed EPA and DHA, squid oils, krill oil, and certain algae.
Mammals are unable to synthesize the essential omega−3 fatty acid ALA and can only obtain it through diet. However, they can use ALA, when available, to form EPA and DHA, by creating additional double bonds along its carbon chain (desaturation) and extending it (elongation). Namely, ALA (18 carbons and 3 double bonds) is used to make EPA (20 carbons and 5 double bonds), which is then used to make DHA (22 carbons and 6 double bonds). The ability to make the longer-chain omega−3 fatty acids from ALA may be impaired in aging. In foods exposed to air, unsaturated fatty acids are vulnerable to oxidation and rancidity. Dietary supplementation with omega−3 fatty acids does not appear to affect the risk of death, cancer or heart disease. Furthermore, fish oil supplement studies have failed to support claims of preventing heart attacks or strokes or any vascular disease outcomes.
The terms "ω–3 ("omega–3") fatty acid" and "n–3 fatty acid" are derived from organic nomenclature. One way in which an unsaturated fatty acid is named is determined by the location, in its carbon chain, of the double bond which is closest to the methyl end of the molecule. In general terminology, "n" (or ω) represents the locant of the methyl end of the molecule, while the number "n–x" (or ω–"x") refers to the locant of its nearest double bond. Thus, in omega–3 fatty acids in particular, there is a double bond located at the carbon numbered 3, starting from the methyl end of the fatty acid chain. This classification scheme is useful since most chemical changes occur at the carboxyl end of the molecule, while the methyl group and its nearest double bond are unchanged in most chemical or enzymatic reactions.
In the expressions "n–x" or ω–"x", the dash is actually meant to be a minus sign, although it is never read as such. Also, the symbol "n" (or ω) represents the locant of the methyl end, counted from the carboxyl end of the fatty acid carbon chain. For instance, in an omega-3 fatty acid with 18 carbon atoms, where the methyl end is at location 18 from the carboxyl end, "n" (or ω) represents the number 18, and the notation n–3 (or ω–3) represents the subtraction 18–3 = 15, where 15 is the locant of the double bond which is closest to the methyl end, counted from the carboxyl end of the chain.
Although "n" and ω (omega) are synonymous, the IUPAC recommends that "n" be used to identify the highest carbon number of a fatty acid. Nevertheless, the more common name – omega–3 fatty acid – is used in both the lay media and scientific literature.
For example, α-linolenic acid (ALA) is an 18-carbon chain having three double bonds, the first being located at the third carbon from the methyl end of the fatty acid chain. Hence, it is an omega–3 fatty acid. Counting from the other end of the chain, that is, the carboxyl end, the three double bonds are located at carbons 9, 12, and 15. These three locants are typically indicated as Δ9c,12c,15c, or cisΔ9,cisΔ12,cisΔ15, or cis-cis-cis-Δ9,12,15, where "c" or "cis" means that the double bonds have a "cis" configuration.
α-Linolenic acid is polyunsaturated (containing more than one double bond) and is also described by a lipid number, 18:3, meaning that there are 18 carbon atoms and 3 double bonds.
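As a rough illustration of how the two numbering schemes relate, the short sketch below derives carboxyl-end (Δ) locants from a lipid number, assuming the methylene-interrupted double-bond pattern typical of these acids (one CH2 between adjacent double bonds); the function name and layout are illustrative, not drawn from any chemistry library.

```python
# Illustrative sketch: derive carboxyl-end (delta) double-bond locants
# from a lipid number for an n-3 fatty acid, assuming methylene-
# interrupted double bonds, as is the case for ALA, EPA and DHA.

def delta_locants(carbons: int, double_bonds: int, n_minus: int = 3) -> list[int]:
    # Methyl-end (omega) locants: the first double bond is at carbon
    # n-3 from the methyl end; each further bond starts 3 carbons on.
    omega_locants = [n_minus + 3 * i for i in range(double_bonds)]
    # A bond at omega-end locant x sits at carbon (carbons - x) counted
    # from the carboxyl end, e.g. 18 - 3 = 15 for ALA's last double bond.
    return sorted(carbons - x for x in omega_locants)

print(delta_locants(18, 3))  # ALA 18:3 -> [9, 12, 15]
print(delta_locants(20, 5))  # EPA 20:5 -> [5, 8, 11, 14, 17]
print(delta_locants(22, 6))  # DHA 22:6 -> [4, 7, 10, 13, 16, 19]
```

Running the sketch reproduces the locants quoted in this section: Δ9,12,15 for ALA, and the corresponding positions for EPA and DHA.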
Supplementation does not appear to be associated with a lower risk of all-cause mortality.
The evidence linking the consumption of marine omega−3 fats to a lower risk of cancer is poor. With the possible exception of breast cancer, there is insufficient evidence that supplementation with omega−3 fatty acids has an effect on different cancers. The effect of consumption on prostate cancer is not conclusive. There is a decreased risk with higher blood levels of DPA, but an increased risk of more aggressive prostate cancer was shown with higher blood levels of combined EPA and DHA. In people with advanced cancer and cachexia, omega−3 fatty acid supplements may be of benefit, improving appetite, weight, and quality of life.
Evidence in the population generally does not support a beneficial role for omega−3 fatty acid supplementation in preventing cardiovascular disease (including myocardial infarction and sudden cardiac death) or stroke. A 2018 meta-analysis found no support that daily intake of one gram of omega-3 fatty acid in individuals with a history of coronary heart disease prevents fatal coronary heart disease, nonfatal myocardial infarction or any other vascular event. However, omega−3 fatty acid supplementation greater than one gram daily for at least a year may be protective against cardiac death, sudden death, and myocardial infarction in people who have a history of cardiovascular disease. No protective effect against the development of stroke or all-cause mortality was seen in this population. A 2018 study found that omega-3 supplementation was helpful in protecting cardiac health in those who did not regularly eat fish, particularly in the African American population. Eating a diet high in fish that contain long chain omega−3 fatty acids does appear to decrease the risk of stroke. Fish oil supplementation has not been shown to benefit revascularization or abnormal heart rhythms and has no effect on heart failure hospital admission rates. Furthermore, fish oil supplement studies have failed to support claims of preventing heart attacks or strokes. In the EU, a review by the European Medicines Agency of omega-3 fatty acid medicines containing a combination of an ethyl ester of eicosapentaenoic acid and docosahexaenoic acid at a dose of 1 g per day concluded that these medicines are not effective in secondary prevention of heart problems in patients who have had a myocardial infarction.
Evidence suggests that omega−3 fatty acids modestly lower blood pressure (systolic and diastolic) in people with hypertension and in people with normal blood pressure. Some evidence suggests that people with certain circulatory problems, such as varicose veins, may benefit from the consumption of EPA and DHA, which may stimulate blood circulation and increase the breakdown of fibrin, a protein involved in blood clotting and scar formation.
Omega−3 fatty acids reduce blood triglyceride levels but do not significantly change the level of LDL cholesterol or HDL cholesterol in the blood. The American Heart Association position (2011) is that borderline elevated triglycerides, defined as 150–199 mg/dL, can be lowered by 0.5–1.0 grams of EPA and DHA per day; that high triglycerides (200–499 mg/dL) benefit from 1–2 g/day; and that levels above 500 mg/dL should be treated under a physician's supervision with 2–4 g/day using a prescription product. In this population, omega-3 fatty acid supplementation decreases the risk of heart disease by about 25%.
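The tiered guidance above can be read as a small decision table. The sketch below encodes the tiers exactly as quoted from the 2011 AHA position; it is purely illustrative, not medical advice, and the function name is invented for this example.

```python
# Minimal sketch of the 2011 AHA triglyceride guidance quoted above.
# Illustrative only -- not medical advice; dosing above 2 g/day is to
# be supervised by a physician using a prescription product.

def suggested_epa_dha_grams(triglycerides_mg_dl: float) -> str:
    if triglycerides_mg_dl >= 500:
        return "2-4 g/day under physician supervision"
    if triglycerides_mg_dl >= 200:
        return "1-2 g/day"
    if triglycerides_mg_dl >= 150:
        return "0.5-1.0 g/day"
    return "no EPA+DHA dose specified for this range"

print(suggested_epa_dha_grams(180))  # borderline elevated: 0.5-1.0 g/day
print(suggested_epa_dha_grams(320))  # high: 1-2 g/day
```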
ALA does not confer the cardiovascular health benefits of EPA and DHA.
The effect of omega−3 polyunsaturated fatty acids on stroke is unclear, with a possible benefit in women.
A 2013 systematic review found tentative evidence of benefit for lowering inflammation levels in healthy adults and in people with one or more biomarkers of metabolic syndrome. Consumption of omega−3 fatty acids from marine sources lowers blood markers of inflammation such as C-reactive protein, interleukin 6, and TNF alpha.
For rheumatoid arthritis, one systematic review found consistent, but modest, evidence for the effect of marine n−3 PUFAs on symptoms such as "joint swelling and pain, duration of morning stiffness, global assessments of pain and disease activity" as well as the use of non-steroidal anti-inflammatory drugs. The American College of Rheumatology has stated that there may be modest benefit from the use of fish oils, but that it may take months for effects to be seen, and cautions for possible gastrointestinal side effects and the possibility of the supplements containing mercury or vitamin A at toxic levels. The National Center for Complementary and Integrative Health has concluded that "supplements containing omega-3 fatty acids... may help relieve rheumatoid arthritis symptoms" and warns that such supplements "may interact with drugs that affect blood clotting".
Although not supported by current scientific evidence as a primary treatment for attention deficit hyperactivity disorder (ADHD), autism, and other developmental disabilities, omega−3 fatty acid supplements are being given to children with these conditions.
One meta-analysis concluded that omega−3 fatty acid supplementation demonstrated a modest effect for improving ADHD symptoms. A Cochrane review of PUFA (not necessarily omega−3) supplementation found "there is little evidence that PUFA supplementation provides any benefit for the symptoms of ADHD in children and adolescents", while a different review found "insufficient evidence to draw any conclusion about the use of PUFAs for children with specific learning disorders". Another review concluded that the evidence is inconclusive for the use of omega−3 fatty acids in behavior and non-neurodegenerative neuropsychiatric disorders such as ADHD and depression.
Fish oil has only a small benefit on the risk of premature birth. A 2015 meta-analysis of the effect of omega−3 supplementation during pregnancy did not demonstrate a decrease in the rate of preterm birth or improved outcomes in women with singleton pregnancies and no prior preterm births. A 2018 Cochrane systematic review with moderate to high quality of evidence suggested that omega−3 fatty acids may reduce the risk of perinatal death and of low-birth-weight babies, and may mildly increase the incidence of large-for-gestational-age (LGA) babies. However, a 2019 clinical trial in Australia showed no significant reduction in the rate of preterm delivery, and no higher incidence of interventions in post-term deliveries than in controls.
There is evidence that omega−3 fatty acids are related to mental health, particularly for depression, where large meta-analyses now show treatment efficacy compared to placebo. These data have also recently resulted in international clinical guidelines on the use of omega-3 fatty acids in the treatment of depression. The link between omega−3 and depression has been attributed to the fact that many products of the omega−3 synthesis pathway play key roles in regulating inflammation (such as prostaglandin E3), which has been linked to depression. This link to inflammation regulation has been supported in both in vivo studies and in a meta-analysis. Omega-3 fatty acids have also been investigated as an add-on for the treatment of depression associated with bipolar disorder. Significant benefits due to EPA supplementation were seen, however, only when treating depressive symptoms and not manic symptoms, suggesting a link between omega−3 and depressive mood.
In contrast to dietary supplementation studies, there is significant difficulty in interpreting the literature regarding dietary intake of omega-3 fatty acids (e.g. from fish) due to participant recall and systematic differences in diets. There is also controversy as to the efficacy of omega−3, with many meta-analysis papers finding heterogeneity among results which can be explained mostly by publication bias. A significant correlation between shorter treatment trials and increased omega−3 efficacy for treating depressive symptoms further implicates publication bias. One review found that "Although evidence of benefits for any specific intervention is not conclusive, these findings suggest that it might be possible to delay or prevent transition to psychosis."
Epidemiological studies are inconclusive about an effect of omega−3 fatty acids on the mechanisms of Alzheimer's disease. There is preliminary evidence of effect on mild cognitive problems, but none supporting an effect in healthy people or those with dementia.
Brain function and vision rely on dietary intake of DHA to support a broad range of cell membrane properties, particularly in grey matter, which is rich in membranes. A major structural component of the mammalian brain, DHA is the most abundant omega−3 fatty acid in the brain. It is under study as a candidate essential nutrient with roles in neurodevelopment, cognition, and neurodegenerative disorders.
Results of studies investigating the role of LCPUFA supplementation and LCPUFA status in the prevention and therapy of atopic diseases (allergic rhinoconjunctivitis, atopic dermatitis and allergic asthma) are controversial; therefore, as of 2013 it could not be stated either that the nutritional intake of n−3 fatty acids has a clear preventive or therapeutic role in atopic diseases, or that the intake of n-6 fatty acids has a promoting role in this context.
People with phenylketonuria (PKU) often have low intake of omega−3 fatty acids, because nutrients rich in omega−3 fatty acids are excluded from their diet due to their high protein content.
As of 2015, there was no evidence that taking omega−3 supplements can prevent asthma attacks in children.
An omega−3 fatty acid is a fatty acid with multiple double bonds, where the first double bond is between the third and fourth carbon atoms from the end of the carbon atom chain. "Short chain" omega−3 fatty acids have a chain of 18 carbon atoms or less, while "long chain" omega−3 fatty acids have a chain of 20 or more.
Three omega−3 fatty acids are important in human physiology: α-linolenic acid (18:3, "n"-3; ALA), eicosapentaenoic acid (20:5, "n"-3; EPA), and docosahexaenoic acid (22:6, "n"-3; DHA). These three polyunsaturates have 3, 5, or 6 double bonds, respectively, in a carbon chain of 18, 20, or 22 carbon atoms. As with most naturally-produced fatty acids, all double bonds are in the "cis"-configuration, in other words, the two hydrogen atoms are on the same side of the double bond; and the double bonds are interrupted by methylene bridges (–CH2–), so that there are two single bonds between each pair of adjacent double bonds.
This table lists several different names for the most common omega−3 fatty acids found in nature.
Omega−3 fatty acids occur naturally in two forms, triglycerides and phospholipids. In the triglycerides, they, together with other fatty acids, are bonded to glycerol; three fatty acids are attached to glycerol. Phospholipid omega−3 is composed of two fatty acids attached to a phosphate group via glycerol.
The triglycerides can be converted to the free fatty acid or to methyl or ethyl esters, and the individual esters of omega−3 fatty acids are available.
DHA in the form of lysophosphatidylcholine is transported into the brain by a membrane transport protein, MFSD2A, which is exclusively expressed in the endothelium of the blood–brain barrier.
The 'essential' fatty acids were given their name when researchers found that they are essential to normal growth in young children and animals. The omega−3 fatty acid DHA, also known as docosahexaenoic acid, is found in high abundance in the human brain. It is produced by a desaturation process, but humans lack the desaturase enzymes that insert double bonds at the ω6 and ω3 positions. Therefore, the ω6 and ω3 polyunsaturated fatty acids cannot be synthesized by humans, are appropriately called essential fatty acids, and must be obtained from the diet.
In 1964, it was discovered that enzymes found in sheep tissues convert omega−6 arachidonic acid into the inflammatory agent, prostaglandin E2, which is involved in the immune response of traumatized and infected tissues. By 1979, eicosanoids were further identified, including thromboxanes, prostacyclins, and leukotrienes. The eicosanoids typically have a short period of activity in the body, starting with synthesis from fatty acids and ending with metabolism by enzymes. If the rate of synthesis exceeds the rate of metabolism, the excess eicosanoids may have deleterious effects. Researchers found that certain omega−3 fatty acids are also converted into eicosanoids and docosanoids, but at a slower rate. If both omega−3 and omega−6 fatty acids are present, they will "compete" to be transformed, so the ratio of long-chain omega−3:omega−6 fatty acids directly affects the type of eicosanoids that are produced.
Humans can convert short-chain omega−3 fatty acids to long-chain forms (EPA, DHA) with an efficiency below 5%. The omega−3 conversion efficiency is greater in women than in men, but less studied. Higher ALA and DHA values found in plasma phospholipids of women may be due to the higher activity of desaturases, especially that of delta-6-desaturase.
These conversions occur competitively with omega−6 fatty acids, which are closely related, essential chemical analogues derived from linoleic acid. Both families utilize the same desaturase and elongase proteins in order to synthesize inflammatory regulatory proteins. The products of both pathways are vital for growth, making a balanced diet of omega−3 and omega−6 important to an individual's health. A balanced intake ratio of 1:1 was believed to be ideal in order for proteins to be able to synthesize both pathways sufficiently, but this has been contested by recent research.
The conversion of ALA to EPA and further to DHA in humans has been reported to be limited, but varies with individuals. Women have higher ALA-to-DHA conversion efficiency than men, which is presumed to be due to the lower rate of use of dietary ALA for beta-oxidation. One preliminary study showed that EPA can be increased by lowering the amount of dietary linoleic acid, and DHA can be increased by elevating intake of dietary ALA.
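As a back-of-envelope illustration of what a sub-5% conversion efficiency implies, the sketch below estimates an upper bound on long-chain omega−3 formed from a typical daily ALA intake. Both the 1.6 g intake figure (the US Adequate Intake for adult men, discussed later in this article) and the flat 5% ceiling are simplifying assumptions.

```python
# Back-of-envelope sketch: upper bound on long-chain omega-3 (EPA/DHA)
# formed from dietary ALA, using the sub-5% conversion efficiency cited
# above. The 5% figure is an upper bound, not a precise rate.

ALA_GRAMS_PER_DAY = 1.6   # e.g. the US Adequate Intake for adult men
MAX_CONVERSION = 0.05     # "efficiency below 5%"

long_chain_grams = ALA_GRAMS_PER_DAY * MAX_CONVERSION
print(f"at most ~{long_chain_grams * 1000:.0f} mg/day of EPA+DHA")  # ~80 mg
```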
The human diet has changed rapidly in recent centuries, resulting in a reported increase in dietary omega−6 relative to omega−3. The rapid shift of the human diet away from a 1:1 omega−3 to omega−6 ratio, beginning around the Neolithic Agricultural Revolution, has presumably been too fast for humans to adapt biologically to balancing omega−3 and omega−6 intake at 1:1. This is commonly believed to be the reason why modern diets are correlated with many inflammatory disorders. While omega−3 polyunsaturated fatty acids may be beneficial in preventing heart disease in humans, the level of omega−6 polyunsaturated fatty acids (and, therefore, the ratio) does not matter.
Both omega−6 and omega−3 fatty acids are essential: humans must consume them in their diet. Omega−6 and omega−3 eighteen-carbon polyunsaturated fatty acids compete for the same metabolic enzymes, thus the omega−6:omega−3 ratio of ingested fatty acids has significant influence on the ratio and rate of production of eicosanoids, a group of hormones intimately involved in the body's inflammatory and homeostatic processes, which include the prostaglandins, leukotrienes, and thromboxanes, among others. Altering this ratio can change the body's metabolic and inflammatory state. In general, grass-fed animals accumulate more omega−3 than do grain-fed animals, which accumulate relatively more omega−6. Metabolites of omega−6 are more inflammatory (esp. arachidonic acid) than those of omega−3. This necessitates that omega−6 and omega−3 be consumed in a balanced proportion; healthy ratios of omega−6:omega−3, according to some authors, range from 1:1 to 1:4. Other authors believe that a ratio of 4:1 (4 times as much omega−6 as omega−3) is already healthy. Studies suggest the evolutionary human diet, rich in game animals, seafood, and other sources of omega−3, may have provided such a ratio.
Typical Western diets provide ratios of between 10:1 and 30:1 (i.e., dramatically higher levels of omega−6 than omega−3). The ratios of omega−6 to omega−3 fatty acids in some common vegetable oils are: canola 2:1, hemp 2–3:1, soybean 7:1, olive 3–13:1, sunflower (no omega−3), flax 1:3, cottonseed (almost no omega−3), peanut (no omega−3), grapeseed oil (almost no omega−3) and corn oil 46:1.
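To make the spread of these ratios concrete, the sketch below tabulates the figures quoted above and flags which oils fall at or below the 4:1 ceiling that some authors consider healthy; midpoints are assumed where the text gives a range, and the dictionary layout is illustrative.

```python
# Omega-6:omega-3 ratios quoted above for common vegetable oils,
# checked against the 4:1 ceiling some authors consider healthy.
# Ranges in the text (e.g. hemp 2-3:1) are shown as midpoints.

OIL_RATIOS = {          # parts omega-6 per 1 part omega-3
    "canola": 2.0,
    "hemp": 2.5,        # midpoint of the quoted 2-3:1
    "soybean": 7.0,
    "olive": 8.0,       # midpoint of the quoted 3-13:1
    "flax": 1 / 3,      # quoted as 1:3 (more omega-3 than omega-6)
    "corn": 46.0,
}

for oil, ratio in OIL_RATIOS.items():
    verdict = "at or below 4:1" if ratio <= 4 else "above 4:1"
    print(f"{oil}: {ratio:.2f}:1 omega-6:omega-3 ({verdict})")
```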
Although omega−3 fatty acids have been known as essential to normal growth and health since the 1930s, awareness of their health benefits has dramatically increased since the 1980s.
On September 8, 2004, the U.S. Food and Drug Administration gave "qualified health claim" status to EPA and DHA omega−3 fatty acids, stating, "supportive but not conclusive research shows that consumption of EPA and DHA [omega−3] fatty acids may reduce the risk of coronary heart disease". This updated and modified their health risk advice letter of 2001 (see below).
The Canadian Food Inspection Agency has recognized the importance of DHA omega−3 and permits the following claim for DHA: "DHA, an omega−3 fatty acid, supports the normal physical development of the brain, eyes and nerves primarily in children under two years of age."
Historically, whole food diets contained sufficient amounts of omega−3, but because omega−3 is readily oxidized, the trend to shelf-stable, processed foods has led to a deficiency in omega−3 in manufactured foods.
In the United States, the Institute of Medicine publishes a system of Dietary Reference Intakes, which includes Recommended Dietary Allowances (RDAs) for individual nutrients, and Acceptable Macronutrient Distribution Ranges (AMDRs) for certain groups of nutrients, such as fats. When there is insufficient evidence to determine an RDA, the institute may publish an Adequate Intake (AI) instead, which has a similar meaning but is less certain. The AI for α-linolenic acid is 1.6 grams/day for men and 1.1 grams/day for women, while the AMDR is 0.6% to 1.2% of total energy. Because the physiological potency of EPA and DHA is much greater than that of ALA, it is not possible to estimate one AMDR for all omega−3 fatty acids. Approximately 10 percent of the AMDR can be consumed as EPA and/or DHA. The Institute of Medicine has not established an RDA or AI for EPA, DHA, or the combination, so there is no Daily Value (DVs are derived from RDAs), no labeling of foods or supplements as providing a DV percentage of these fatty acids per serving, and no labeling of a food or supplement as an excellent source, or "High in...". As for safety, there was insufficient evidence as of 2005 to set an upper tolerable limit for omega−3 fatty acids, although the FDA has advised that adults can safely consume up to a total of 3 grams per day of combined DHA and EPA, with no more than 2 g from dietary supplements.
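The AMDR percentages can be converted into gram amounts with simple arithmetic. The sketch below does so for a 2,000 kcal diet, assuming the standard factor of 9 kcal per gram of fat (an assumption; the text gives only the percentages).

```python
# Sketch: convert the AMDR for ALA (0.6-1.2% of total energy) into
# grams per day, assuming the standard Atwater factor of 9 kcal per
# gram of fat.

KCAL_PER_GRAM_FAT = 9.0

def ala_grams(total_kcal: float, fraction_of_energy: float) -> float:
    return total_kcal * fraction_of_energy / KCAL_PER_GRAM_FAT

for frac in (0.006, 0.012):
    print(f"{frac:.1%} of a 2,000 kcal diet = {ala_grams(2000, frac):.1f} g ALA/day")
# Prints roughly 1.3 and 2.7 g/day, bracketing the AI values of
# 1.1 g (women) and 1.6 g (men).
```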
The American Heart Association (AHA) has made recommendations for EPA and DHA due to their cardiovascular benefits: individuals with no history of coronary heart disease or myocardial infarction should consume oily fish two times per week; and "Treatment is reasonable" for those having been diagnosed with coronary heart disease. For the latter the AHA does not recommend a specific amount of EPA + DHA, although it notes that most trials were at or close to 1000 mg/day. The benefit appears to be on the order of a 9% decrease in relative risk. The European Food Safety Authority (EFSA) approved a claim "EPA and DHA contributes to the normal function of the heart" for products that contain at least 250 mg EPA + DHA. The report did not address the issue of people with pre-existing heart disease. The World Health Organization recommends regular fish consumption (1-2 servings per week, equivalent to 200 to 500 mg/day EPA + DHA) as protective against coronary heart disease and ischaemic stroke.
Heavy metal poisoning by the body's accumulation of traces of heavy metals, in particular mercury, lead, nickel, arsenic, and cadmium, is a possible risk from consuming fish oil supplements. Also, other contaminants (PCBs, furans, dioxins, and PBDEs) might be found, especially in less-refined fish oil supplements. However, heavy metal toxicity from consuming fish oil supplements is highly unlikely, because heavy metals selectively bind with protein in the fish flesh rather than accumulate in the oil.
Throughout their history, the Council for Responsible Nutrition and the World Health Organization have published acceptability standards regarding contaminants in fish oil. The most stringent current standard is the International Fish Oils Standard. Fish oils that are molecularly distilled under vacuum typically meet this highest grade; their levels of contaminants are stated in parts per billion and parts per trillion.
The most widely available dietary source of EPA and DHA is oily fish, such as salmon, herring, mackerel, anchovies, menhaden, and sardines. Oils from these fish have a profile of around seven times as much omega−3 as omega−6. Other oily fish, such as tuna, also contain "n"-3 in somewhat lesser amounts. Consumers of oily fish should be aware of the potential presence of heavy metals and fat-soluble pollutants like PCBs and dioxins, which are known to accumulate up the food chain. After extensive review, researchers from Harvard's School of Public Health in the "Journal of the American Medical Association" (2006) reported that the benefits of fish intake generally far outweigh the potential risks. Although fish are a dietary source of omega−3 fatty acids, fish do not synthesize them; they obtain them from the algae (microalgae in particular) or plankton in their diets. In the case of farmed fish, omega-3 fatty acids are provided by fish oil; in 2009, 81% of global fish oil production was used by aquaculture.
Marine and freshwater fish oil vary in content of arachidonic acid, EPA and DHA. They also differ in their effects on organ lipids.
Not all forms of fish oil may be equally digestible. Of four studies that compare bioavailability of the glyceryl ester form of fish oil vs. the ethyl ester form, two have concluded the natural glyceryl ester form is better, and the other two studies did not find a significant difference. No studies have shown the ethyl ester form to be superior, although it is cheaper to manufacture.
Krill oil is a source of omega−3 fatty acids. The effect of krill oil, at a lower dose of EPA + DHA (62.8% of that in fish oil), was demonstrated to be similar to that of fish oil on blood lipid levels and markers of inflammation in healthy humans. While not an endangered species, krill are a mainstay of the diets of many ocean-based species including whales, causing environmental and scientific concerns about their sustainability.
Preliminary studies appear to indicate that the DHA and EPA omega-3 fatty acids found in krill oil may be more bio-available than in fish oil. Additionally, krill oil contains astaxanthin, a marine-source keto-carotenoid antioxidant that may act synergistically with EPA and DHA.
Table 1. ALA content as the percentage of the seed oil.
Table 2. ALA content as the percentage of the whole food.
Flaxseed (or linseed) ("Linum usitatissimum") and its oil are perhaps the most widely available botanical source of the omega−3 fatty acid ALA. Flaxseed oil consists of approximately 55% ALA, which makes it six times richer than most fish oils in omega−3 fatty acids. A portion of this is converted by the body to EPA and DHA, though the actual converted percentage may differ between men and women.
In 2013 Rothamsted Research in the UK reported they had developed a genetically modified form of the plant Camelina that produced EPA and DHA. Oil from the seeds of this plant contained on average 11% EPA and 8% DHA in one development and 24% EPA in another.
Eggs produced by hens fed a diet of greens and insects contain higher levels of omega−3 fatty acids than those produced by chickens fed corn or soybeans. In addition to feeding chickens insects and greens, fish oils may be added to their diets to increase the omega−3 fatty acid concentrations in eggs.
The addition of flax and canola seeds to the diets of chickens, both good sources of alpha-linolenic acid, increases the omega−3 content of the eggs, predominantly DHA.
The addition of green algae or seaweed to chicken diets boosts the content of DHA and EPA, which are the forms of omega−3 approved by the FDA for medical claims. A common consumer complaint is that "omega−3 eggs can sometimes have a fishy taste if the hens are fed marine oils".
Omega−3 fatty acids are formed in the chloroplasts of green leaves and algae. While seaweeds and algae are the source of omega−3 fatty acids present in fish, grass is the source of omega−3 fatty acids present in grass fed animals. When cattle are taken off omega−3 fatty acid rich grass and shipped to a feedlot to be fattened on omega−3 fatty acid deficient grain, they begin losing their store of this beneficial fat. Each day that an animal spends in the feedlot, the amount of omega−3 fatty acids in its meat is diminished.
The omega−6:omega−3 ratio of grass-fed beef is about 2:1, making it a more useful source of omega−3 than grain-fed beef, which usually has a ratio of 4:1.
In a 2009 joint study by the USDA and researchers at Clemson University in South Carolina, grass-fed beef was compared with grain-finished beef. The researchers found that grass-finished beef is higher in moisture content; 42.5% lower in total lipid content; 54% lower in total fatty acids; 54% higher in beta-carotene; 288% higher in vitamin E (alpha-tocopherol); higher in the B-vitamins thiamin and riboflavin; higher in the minerals calcium, magnesium, and potassium; 193% higher in total omega−3s; 117% higher in CLA (cis-9, trans-11 octadecenoic acid, a conjugated linoleic acid and potential cancer fighter); 90% higher in vaccenic acid (which can be transformed into CLA); lower in saturated fats; and has a healthier ratio of omega−6 to omega−3 fatty acids (1.65 vs 4.84). Protein and cholesterol content were equal.
The omega−3 content of chicken meat may be enhanced by increasing the animals' dietary intake of grains high in omega−3, such as flax, chia, and canola.
Kangaroo meat is also a source of omega−3, with fillet and steak containing 74 mg per 100 g of raw meat.
Seal oil is a source of EPA, DPA, and DHA. According to Health Canada, it helps to support the development of the brain, eyes, and nerves in children up to 12 years of age. Like all seal products, it is not allowed to be imported into the European Union.
A trend in the early 21st century was to fortify food with omega−3 fatty acids. The microalgae "Crypthecodinium cohnii" and "Schizochytrium" are rich sources of DHA, but not EPA, and can be produced commercially in bioreactors for use as food additives. Oil from brown algae (kelp) is a source of EPA. The alga "Nannochloropsis" also has high levels of EPA. | https://en.wikipedia.org/wiki?curid=22594 |
Ore
Ore is natural rock or sediment that contains one or more valuable minerals, typically metals, that can be mined, treated and sold at a profit. Ore is extracted from the earth through mining and treated or refined, often via smelting, to extract the valuable metals or minerals.
The "grade" of ore refers to the concentration of the desired material it contains. The value of the metals or minerals an ore contains must be weighed against the cost of extraction to determine whether it is of sufficiently high grade to be worth mining.
Metal ores are generally oxides, sulfides, silicates, native metals such as copper, or noble metals such as gold. Ores must be processed to extract the elements of interest from the waste rock. Ore bodies are formed by a variety of geological processes generally referred to as ore genesis.
An ore deposit is an accumulation of ore. This is distinct from a mineral resource as defined by the mineral resource classification criteria. An ore deposit is one occurrence of a particular ore type. Most ore deposits are named according to their location (for example, the Witwatersrand, South Africa), or after a discoverer (e.g. the Kambalda nickel shoots are named after drillers), or after some whimsy (a historical figure, a prominent person, or something from mythology, such as the phoenix, kraken, or serpent leopard), or after the code name of the resource company which found it (e.g. MKD-5 was the in-house name for the Mount Keith nickel sulphide deposit).
Ore deposits are classified according to various criteria developed via the study of economic geology, or ore genesis.
The basic extraction of ore deposits follows steps of prospecting and exploration to find and define a deposit, estimation of its size and grade, development of a mine plan, extraction of the ore, and processing, after which mined land is commonly reclaimed.
Ores (metals) are traded internationally and comprise a sizeable portion of international trade in raw materials both in value and volume. This is because the worldwide distribution of ores is unequal and dislocated from locations of peak demand and from smelting infrastructure.
Most base metals (copper, lead, zinc, nickel) are traded internationally on the London Metal Exchange, with smaller stockpiles and metals exchanges monitored by the COMEX and NYMEX exchanges in the United States and the Shanghai Futures Exchange in China.
Iron ore is traded between customer and producer, though various benchmark prices are set quarterly between the major mining conglomerates and the major consumers, and this sets the stage for smaller participants.
Other, lesser, commodities do not have international clearing houses and benchmark prices, with most prices negotiated between suppliers and customers one-on-one. This generally makes determining the price of ores of this nature opaque and difficult. Such metals include lithium, niobium-tantalum, bismuth, antimony and rare earths. Most of these commodities are also dominated by one or two major suppliers with >60% of the world's reserves. The London Metal Exchange aims to add uranium to its list of metals on warrant.
The World Bank reports that China was the top importer of ores and metals in 2005 followed by the US and Japan. | https://en.wikipedia.org/wiki?curid=22595 |
Optical brightener
Optical brighteners, optical brightening agents (OBAs), fluorescent brightening agents (FBAs), or fluorescent whitening agents (FWAs), are chemical compounds that absorb light in the ultraviolet and violet region (usually 340–370 nm) of the electromagnetic spectrum, and re-emit light in the blue region (typically 420–470 nm) by fluorescence. These additives are often used to enhance the appearance of color of fabric and paper, causing a "whitening" effect; they make intrinsically yellow/orange materials look less so, by compensating the deficit in blue and purple light reflected by the material, with the blue and purple optical emission of the fluorophore.
The most common classes of compounds with this property are the stilbenes, e.g., 4,4′-diamino-2,2′-stilbenedisulfonic acid. Older, non-commercial fluorescent compounds include umbelliferone, which absorbs in the UV portion of the spectrum and re-emits it in the blue portion of the visible spectrum. A white surface treated with an optical brightener can emit more visible light than that which shines on it, making it appear brighter. The blue light emitted by the brightener compensates for the diminishing blue of the treated material and changes the hue away from yellow or brown and toward white.
Approximately 400 brightener types are listed in the Colour Index, but fewer than 90 are produced commercially, and only a handful are commercially important. Generically, a C.I. FBA number can be assigned to a specific substance; however, some numbers are duplicated, since manufacturers apply for an index number when they produce a substance. The global OBA production for paper, textiles, and detergents is dominated by just a few di- and tetra-sulfonated triazole-stilbenes and di-sulfonated stilbene-biphenyl derivatives. The stilbene derivatives are subject to fading upon prolonged exposure to UV, due to the formation of optically inactive cis-stilbenes. They are also degraded by oxygen in air, like most dye colorants. All brighteners have extended conjugation and/or aromaticity, allowing for electron movement. Some non-stilbene brighteners are used in more permanent applications such as whitening synthetic fiber.
Brighteners can be "boosted" by the addition of certain polyols, such as high molecular weight polyethylene glycol or polyvinyl alcohol. These additives increase the visible blue light emissions significantly. Brighteners can also be "quenched". Excess brightener will often cause a greening effect as emissions start to show above the blue region in the visible spectrum.
Brighteners are commonly added to laundry detergents to make the clothes appear cleaner. Normally cleaned laundry appears yellowish, which consumers do not like. Optical brighteners have replaced bluing which was formerly used to produce the same effect.
Brighteners are used in many papers, especially high brightness papers, resulting in their strongly fluorescent appearance under UV illumination. Paper brightness is typically measured at 457 nm, well within the fluorescent activity range of brighteners. Paper used for banknotes does not contain optical brighteners, so a common method for detecting counterfeit notes is to check for fluorescence.
Optical brighteners have also found use in cosmetics. One application is to formulas for washing and conditioning grey or blonde hair, where the brightener can not only increase the luminance and sparkle of the hair, but can also correct dull, yellowish discoloration without darkening the hair. Some advanced face and eye powders contain optical brightener microspheres that brighten shadowed or dark areas of the skin, such as "tired eyes".
End uses of optical brighteners include detergents, paper, textiles, cosmetics, and synthetic fibers.
From around 2002 to 2012 chemical brighteners were used by many Chinese farmers to enhance the appearance of their white mushrooms. This illegal use was mostly eliminated by the Chinese Ministry of Agriculture. | https://en.wikipedia.org/wiki?curid=22600 |
Oil painting
Oil painting is the process of painting with pigments with a medium of drying oil as the binder. Commonly used drying oils include linseed oil, poppy seed oil, walnut oil, and safflower oil. The choice of oil imparts a range of properties to the oil paint, such as the amount of yellowing or drying time. Certain differences, depending on the oil, are also visible in the sheen of the paints. An artist might use several different oils in the same painting depending on specific pigments and effects desired. The paints themselves also develop a particular consistency depending on the medium. The oil may be boiled with a resin, such as pine resin or frankincense, to create a varnish prized for its body and gloss.
Although oil paint was first used for Buddhist paintings by painters in central and western Afghanistan sometime between the fifth and tenth centuries, it did not gain popularity until the 15th century. Its practice may have migrated westward during the Middle Ages. Oil paint eventually became the principal medium used for creating artworks as its advantages became widely known. The transition began with Early Netherlandish painting in Northern Europe, and by the height of the Renaissance oil painting techniques had almost completely replaced the use of tempera paints in the majority of Europe.
In recent years, water-miscible oil paint has become available. These paints either are engineered to be water-miscible or have an added emulsifier that allows them to be thinned with water rather than paint thinner, and they allow, when sufficiently diluted, very fast drying times (1–3 days) compared with traditional oils (1–3 weeks).
Traditional oil painting techniques often begin with the artist sketching the subject onto the canvas with charcoal or thinned paint. Oil paint is usually mixed with linseed oil, artist grade mineral spirits, or other solvents to make the paint thinner, faster or slower-drying. (Because these solvents thin the oil in the paint, they can also be used to clean paint brushes.) A basic rule of oil paint application is 'fat over lean', meaning that each additional layer of paint should contain more oil than the layer below to allow proper drying. If each additional layer contains less oil, the final painting will crack and peel. This rule does not ensure permanence; it is the quality and type of oil that leads to a strong and stable paint film.
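The 'fat over lean' rule can be expressed as a simple monotonicity check on the oil content of successive layers. The toy sketch below illustrates the idea; the oil fractions are made-up values, not measurements from any actual paint.

```python
# Toy illustration of the 'fat over lean' rule described above: each
# successive layer should contain at least as much oil ("fat") as the
# one beneath it. Oil fractions here are made-up illustrative values.

def respects_fat_over_lean(oil_fraction_by_layer: list[float]) -> bool:
    # Layers are listed bottom-up; the rule holds if oil content never
    # decreases as layers are added.
    return all(lower <= upper
               for lower, upper in zip(oil_fraction_by_layer,
                                       oil_fraction_by_layer[1:]))

print(respects_fat_over_lean([0.15, 0.25, 0.35]))  # True: sound build-up
print(respects_fat_over_lean([0.35, 0.20]))        # False: risks cracking
```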
There are many other media that can be used with the oil, including cold wax, resins, and varnishes. These additional media can aid the painter in adjusting the translucency of the paint, the sheen of the paint, the density or 'body' of the paint, and the ability of the paint to hold or conceal the brushstroke. These aspects of the paint are closely related to the expressive capacity of oil paint.
Traditionally, paint was transferred to the painting surface using paintbrushes, but there are other methods, including using palette knives and rags. Oil paint remains wet longer than many other types of artists' materials, enabling the artist to change the color, texture or form of the figure. At times, the painter might even remove an entire layer of paint and begin anew. This can be done with a rag and some turpentine while the paint is still wet, but after a while the hardened layer must be scraped off. Oil paint dries by oxidation, not evaporation, and is usually dry to the touch within a span of two weeks (some colors dry within days). It is generally dry enough to be varnished in six months to a year.
The earliest discovered oil paintings date back to approximately 650 AD, in Afghanistan. These murals were presumably created by Buddhist artists travelling along the Silk Road. These early oil works display a wide range of pigments and binders, and even included the use of a final varnish layer. The refinement of this painting technique and the survival of the paintings into the present day suggest that oil paints had been used in Asia even before the 7th century. It is possible, considering the history of tempera (pigment mixed with either egg whites or egg yolks, then painted on a plastered surface), that oils were discovered in Europe independently around the 15th century. Outdoor surfaces and surfaces like shields—both those used in tournaments and those hung as decorations—were more durable when painted in oil-based media than when painted in the traditional tempera paints.
Most European Renaissance sources, in particular Vasari, credited northern European painters of the 15th century, and Jan van Eyck in particular, with the "invention" of painting with oil media on wood panel supports ("support" is the technical term for the underlying backing of a painting). However, Theophilus (Roger of Helmarshausen?) clearly gives instructions for oil-based painting in his treatise, "On Various Arts", written in 1125. At this period, it was probably used for painting sculptures, carvings and wood fittings, perhaps especially for outdoor use. However, early Netherlandish painters such as Van Eyck and Robert Campin in the 15th century were the first to make oil the usual painting medium and to explore the use of layers and glazes, followed by the rest of Northern Europe, and only then Italy.
Early works were still panel paintings on wood, but around the end of the 15th century canvas became more popular as the support, as it was cheaper, easier to transport, allowed larger works, and did not require complicated preliminary layers of gesso (a fine type of plaster). Venice, where sail-canvas was easily available, was a leader in the move to canvas. Small cabinet paintings were also made on metal, especially copper plates. These supports were more expensive but very firm, allowing intricately fine detail. Often printing plates from printmaking were reused for this purpose. The popularity of oil spread through Italy from the North, starting in Venice in the late 15th century. By 1540, the previous method for painting on panel (tempera) had become all but extinct, although Italians continued to use fresco for wall paintings, a technique that was less successful and durable in damper northern climates.
The linseed oil itself comes from the flax seed, a common fiber crop. Linen, a "support" for oil painting (see relevant section), also comes from the flax plant. Safflower, walnut, or poppyseed oil is sometimes used in formulating lighter colors like white because they "yellow" less on drying than linseed oil, but they have the slight drawback of drying more slowly and may not provide the strongest paint film. Linseed oil tends to yellow as it dries and can change the hue of the color.
Recent advances in chemistry have produced modern water-miscible oil paints that can be used and cleaned up with water. Small alterations in the molecular structure of the oil create this water-miscible property.
Traditional artists' canvas is made from linen, but less expensive cotton fabric has gained popularity. The artist first prepares a wooden frame called a "stretcher" or "strainer". The difference between the two names is that "stretchers" are slightly adjustable, while "strainers" are rigid and lack adjustable corner notches. The canvas is then pulled across the wooden frame and tacked or stapled tightly to the back edge. Then the artist applies a "size" to isolate the canvas from the acidic qualities of the paint. Traditionally, the canvas was coated with a layer of animal glue (modern painters typically use rabbit-skin glue) as the size and primed with lead white paint, sometimes with added chalk. Panels were prepared with a "gesso", a mixture of glue and chalk.
Modern acrylic "gesso" is made of titanium dioxide with an acrylic binder. It is frequently used on canvas, whereas real gesso is not suitable for canvas. The artist might apply several layers of gesso, sanding each smooth after it has dried. Acrylic gesso is very difficult to sand. One manufacturer makes a "sandable" acrylic gesso, but it is intended for panels only and not canvas. It is possible to make the gesso a particular color, but most store-bought gesso is white. The gesso layer, depending on its thickness, will tend to draw the oil paint into the porous surface. Excessive or uneven gesso layers are sometimes visible in the surface of finished paintings as a change that's not from the paint.
Standard sizes for oil paintings were set in France in the 19th century. The standards were used by most artists, not only the French, as they were—and evidently still are—supported by the main suppliers of artists' materials. Sizes run from size 0 ("toile de 0") to size 120 ("toile de 120") and are divided into separate "runs" for figures ("figure"), landscapes ("paysage") and marines ("marine") that more or less preserve the diagonal. Thus a "0 figure" corresponds in height with a "paysage 1" and a "marine 2".
Although surfaces like linoleum, wooden panel, paper, slate, pressed wood, Masonite, and cardboard have been used, the most popular surface since the 16th century has been canvas, though many artists used panel through the 17th century and beyond. Panel is more expensive, heavier, harder to transport, and prone to warp or split in poor conditions. For fine detail, however, the absolute solidity of a wooden panel has an advantage.
Oil paint is made by mixing pigments of colors with an oil medium. Different colors are made, or purchased premixed, before painting begins, but further shades of color are usually obtained by mixing small quantities together as the painting process is underway. An artist's palette, traditionally a thin wood board held in the hand, is used for holding and mixing paints of different colors. Pigments may be any number of natural or synthetic substances with color, such as sulphides for yellow or cobalt salts for blue. Traditional pigments were based on minerals or plants, but many have proven unstable over long periods of time; the appearance of many old paintings today is very different from the original. Modern pigments often use synthetic chemicals. The pigment is mixed with oil, usually linseed, but other oils may be used. The various oils dry differently, which creates assorted effects.
Traditionally, artists mixed their own paints, grinding raw pigments themselves and combining them with a medium. This made portability difficult and kept most painting activities confined to the studio. This changed in the 1800s, when tubes of oil paint became widely available following the American portrait painter John Goffe Rand's invention of the squeezable or collapsible metal tube in 1841. Artists could mix colors quickly and easily, which enabled, for the first time, relatively convenient plein air painting (a common approach in French Impressionism).
A brush is most commonly employed by the artist to apply the paint, often over a sketched outline of their subject (which could be in another medium). Brushes are made from a variety of fibers to create different effects. For example, brushes made with hog bristle might be used for bolder strokes and impasto textures. Fitch hair and mongoose hair brushes are fine and smooth, and thus answer well for portraits and detail work. Even more expensive are red sable brushes (weasel hair). The finest quality brushes are called "kolinsky sable"; these brush fibers are taken from the tail of the Siberian weasel. This hair keeps a superfine point, has smooth handling, and good memory (it returns to its original point when lifted off the canvas), known to artists as a brush's "snap." Floppy fibers with no snap, such as squirrel hair, are generally not used by oil painters.
In the past few decades, many synthetic brushes have been marketed. These are very durable and can be quite good, as well as cost efficient.
Brushes come in many sizes and are used for different purposes. The "type" of brush also makes a difference. For example, a "round" is a pointed brush used for detail work. "Flat" brushes are used to apply broad swaths of color. "Bright" is a flat brush with shorter brush hairs, used for "scrubbing in". "Filbert" is a flat brush with rounded corners. "Egbert" is a very long, and rare, filbert brush. The artist might also apply paint with a palette knife, which is a flat metal blade. A palette knife may also be used to remove paint from the canvas when necessary. A variety of unconventional tools, such as rags, sponges, and cotton swabs, may be used to apply or remove paint. Some artists even paint with their fingers.
Oil painters traditionally applied paint in layers known as "glazes", a method also simply called "indirect painting". This method was first perfected through an adaptation of the egg tempera painting technique, and was applied by the Flemish painters in Northern Europe with pigments ground in linseed oil. More recently, this approach has been called the "mixed technique" or "mixed method". The first coat (the underpainting) is laid down, often painted with egg tempera or turpentine-thinned paint. This layer helps to "tone" the canvas and to cover the white of the gesso. Many artists use this layer to sketch out the composition. This first layer can be adjusted before proceeding further, an advantage over the "cartooning" method used in fresco technique. After this layer dries, the artist might then proceed by painting a "mosaic" of color swatches, working from darkest to lightest. The borders of the colors are blended together when the "mosaic" is completed, and then left to dry before applying details.
Artists in later periods, such as the Impressionist era (late 19th century), often used the wet-on-wet method instead, blending the wet paint on the canvas without following the Renaissance-era approach of layering and glazing. This method is also called "alla prima". It developed with the advent of painting outdoors, rather than inside a studio, because while outside an artist did not have the time to let each layer of paint dry before adding a new layer. Several contemporary artists use a combination of both techniques to add bold color (wet-on-wet) and obtain the depth of layers through glazing.
When the image is finished and has dried for up to a year, an artist often seals the work with a layer of varnish that is typically made from dammar gum crystals dissolved in turpentine. Such varnishes can be removed without disturbing the oil painting itself, to enable cleaning and conservation. Some contemporary artists decide not to varnish their work, preferring the surface unvarnished. | https://en.wikipedia.org/wiki?curid=22605 |
Old Glory
Old Glory is a nickname for the flag of the United States. The original "Old Glory" was a flag owned by the 19th-century American sea captain William Driver (March 17, 1803 – March 3, 1886), who flew the flag during his career at sea and later brought it to Nashville, Tennessee, where he settled. Driver greatly prized the flag and ensured its safety from the Confederates, who attempted to seize it during the American Civil War. After Driver's death, his daughter and his niece each claimed to own the original "Old Glory"; in 1922 both flags were given to the Smithsonian Institution, and the flag regarded as authentic remains at the National Museum of American History.
Captain William Driver was born on March 17, 1803, in Salem, Massachusetts. At age 13, Driver ran away from home to become a cabin boy on a ship.
At 21, Driver qualified as a master mariner and assumed command of his own ship, the "Charles Doggett". In celebration of his appointment, Driver's mother and other women sewed the flag and gave it to him as a gift in 1824. With this flag flying over his ship, Driver went on to have a colorful career as a U.S. merchant seaman, sailing to China, India, Gibraltar, and the South Pacific. He participated in the tortoiseshell trade and knew some Fijian. In 1831, while voyaging in the South Pacific, Driver's ship "was the sole surviving vessel of six that departed Salem the same day." Driver picked up 65 descendants of the survivors of HMS "Bounty" and brought them back to Pitcairn Island. Driver was convinced that God saved his ship for that purpose.
Driver was deeply attached to the flag, writing: "It has ever been my staunch companion and protection. Savages and heathens, lowly and oppressed, hailed and welcomed it at the far end of the wide world. Then, why should it not be called Old Glory?"
Driver retired from seafaring in 1837 after his wife Martha Silsbee Babbage died from throat cancer. Driver was 34 years old and had three young children. He settled in Nashville, Tennessee, where his three brothers operated a store. Driver remarried the next year to Sarah Jane Parks, a Southerner with whom he had several more children. Driver took his flag with him to Nashville, flying it on holidays, rain or shine. The flag was so large that he attached it to a rope from his attic window and stretched it on a pulley across the street to secure it to a locust tree. Driver worked as a salesman and served as vestryman of Christ Episcopal Church.
In 1860, Driver, his wife and daughters repaired the flag, sewing on 10 additional stars, and Driver added by appliqué a small white anchor in the lower right corner, to symbolize his maritime career. By that time, the secession crisis had begun and Driver's family was split. While Driver was a staunch Unionist, two of his sons were fervent Confederates who enlisted in local regiments. One of Driver’s sons died from wounds suffered at Perryville. In March 1862, Driver wrote: "Two sons in the army of the South! My entire house estranged...and when I come home...no one to soothe me."
Soon after Tennessee seceded from the Union, Governor Isham G. Harris sent men to Driver's home to demand the flag. Driver, 58 years old, turned the men away at his door after demanding that they produce a search warrant. When an armed group returned to Driver's front porch, he refused to produce the flag, saying "If you want my flag you'll have to take it over my dead body."
To save the flag from further threats, Driver and some of his neighbors sewed it into a coverlet and hid it until February 1862, when Nashville fell to Union forces. When the Union Army, led by the 6th Ohio Infantry, entered the city and raised the U.S. flag and the 6th Ohio's regimental colors on the Capitol flagstaff, Driver went to the Tennessee State Capitol and asked to see the general in command.
Horace Fisher, the aide-de-camp to the Union commander in the city, Brigadier General William "Bull" Nelson, described Driver as "a stout, middle-aged man, with hair well shot with gray, short in stature, broad in shoulder, and with a roll in his gait." Introducing himself as a sea captain and Unionist, Driver brought the coverlet with him and opened it, revealing the flag. Nelson accepted the flag and ordered it run up on the Capitol flagstaff. The 6th Ohio later adopted the motto "Old Glory."
That night, a violent storm threatened to tear the flag, so Driver replaced it with a newer flag, taking the original Old Glory for safekeeping. The flag remained in his home until December 1864, when the Battle of Nashville was fought. As Confederate troopers under the command of John Bell Hood sought to retake the city, Driver hung the flag out of the third-story window and left to join the defense of the city. For the rest of the American Civil War, Driver served as provost marshal of Nashville, serving in hospitals.
Mary Jane Roland, Driver's daughter, said Driver gave her the flag as a gift on July 10, 1873, telling her "This is my old ship flag Old Glory. I love it as a mother loves her child. Take it and cherish it as I have always cherished it; for it has been my steadfast friend and protector in all parts of the world—savage, heathen and civilized."
Driver died on March 3, 1886, and was buried in the Nashville City Cemetery, where, at Driver's request, his rescue of the "Bounty" descendants is noted on his grave stone.
Following Driver’s death, a family feud erupted over the ownership of the flag. Driver's niece, Harriet Ruth Waters Cooke, the daughter of Driver's youngest sister, said she inherited the flag and presented her version of Old Glory to the Essex Institute in Salem, which became the Peabody Essex Museum, along with family memorabilia that included a letter from the Pitcairn Islands to Driver. Cooke published a family memoir in 1889, omitting any mention of Mary Jane Roland.
Roland wrote an account of the flag, publishing "Old Glory, The True Story" in 1918. In that memoir, Roland disputed Cooke's narrative and presented evidence for her claim that the flag she owned was the true Old Glory. In 1922, Roland gave her Old Glory to President Warren G. Harding. Harding had the flag sent to the Smithsonian Institution. The same year, the Peabody Essex Museum sent its Old Glory to the Smithsonian.
In 2019, Captain Driver's great-great grandson, Jack Benz, published a novel depicting the life and adventures of Captain William Driver using information collected from personal research and inherited from Captain Driver's descendants.
The Smithsonian Institution has regarded the Roland flag as the authentic Old Glory, since "documentary evidence in the Tennessee State Library and Archives suggests it was the one hidden in the quilt and presented to Union troops who took Nashville." The Roland flag is 17x10 feet. The Peabody flag is 12x6 feet.
In June 2006, the Smithsonian's National Museum of American History (NMAH) loaned the Roland flag to the Tennessee State Museum in Nashville for an eight-month exhibit entitled "Old Glory: An American Treasure Comes Home". The flag was in fragile condition and had to be carefully shipped and displayed.
A conservation evaluation of both flags by NMAH curator Jennifer Locke Jones and conservator Suzanne Thomassen-Krauss began in 2012. Preliminary findings indicate that the larger Roland flag has the stronger claim to being the original Old Glory, but that the Peabody flag dates to the same era and is a legitimate Driver family heirloom and Civil War-era relic. The Roland Old Glory is heavily worn on the fly edges, consistent with the wear of a seagoing flag.
The Peabody Essex Museum has in its collection fragmentary scraps from what was claimed to be Old Glory. | https://en.wikipedia.org/wiki?curid=22609 |
Oscar Wilde
Oscar Fingal O'Flahertie Wills Wilde (16 October 1854 – 30 November 1900) was an Irish poet and playwright. After writing in different forms throughout the 1880s, the early 1890s saw him become one of the most popular playwrights in London. He is best remembered for his epigrams and plays, his novel "The Picture of Dorian Gray", and the circumstances of his criminal conviction for gross indecency for consensual homosexual acts, imprisonment, and early death at age 46.
Wilde's parents were Anglo-Irish intellectuals in Dublin. A young Wilde learned to speak fluent French and German. At university, Wilde read Greats; he demonstrated himself to be an exceptional classicist, first at Trinity College Dublin, then at Oxford. He became associated with the emerging philosophy of aestheticism, led by two of his tutors, Walter Pater and John Ruskin. After university, Wilde moved to London into fashionable cultural and social circles.
As a spokesman for aestheticism, he tried his hand at various literary activities: he published a book of poems, lectured in the United States and Canada on the new "English Renaissance in Art" and interior decoration, and then returned to London where he worked prolifically as a journalist. Known for his biting wit, flamboyant dress and glittering conversational skill, Wilde became one of the best-known personalities of his day. At the turn of the 1890s, he refined his ideas about the supremacy of art in a series of dialogues and essays, and incorporated themes of decadence, duplicity, and beauty into what would be his only novel, "The Picture of Dorian Gray" (1890). The opportunity to construct aesthetic details precisely, and combine them with larger social themes, drew Wilde to write drama. He wrote "Salome" (1891) in French while in Paris but it was refused a licence for England due to an absolute prohibition on the portrayal of Biblical subjects on the English stage. Unperturbed, Wilde produced four society comedies in the early 1890s, which made him one of the most successful playwrights of late-Victorian London.
At the height of his fame and success, while "The Importance of Being Earnest" (1895) was still being performed in London, Wilde prosecuted the Marquess of Queensberry for criminal libel. The Marquess was the father of Wilde's lover, Lord Alfred Douglas. The libel trial unearthed evidence that caused Wilde to drop his charges and led to his own arrest and trial for gross indecency with men. After two more trials he was convicted and sentenced to two years' hard labour, the maximum penalty, and was jailed from 1895 to 1897. During his last year in prison, he wrote "De Profundis" (published posthumously in 1905), a long letter which discusses his spiritual journey through his trials, forming a dark counterpoint to his earlier philosophy of pleasure. On his release, he left immediately for France, never to return to Ireland or Britain. There he wrote his last work, "The Ballad of Reading Gaol" (1898), a long poem commemorating the harsh rhythms of prison life.
Oscar Wilde was born at 21 Westland Row, Dublin (now home of the Oscar Wilde Centre, Trinity College), the second of three children born to an Anglo-Irish couple: Jane, née Elgee, and Sir William Wilde. Oscar was two years younger than his brother, William (Willie) Wilde.
Jane Wilde was a niece (by marriage) of the novelist, playwright and clergyman Charles Maturin (1780–1824), who may have influenced her own literary career. She had distant Italian ancestry, and under the pseudonym "Speranza" (the Italian word for 'hope'), she wrote poetry for the revolutionary Young Irelanders in 1848; she was a lifelong Irish nationalist. Jane Wilde read the Young Irelanders' poetry to Oscar and Willie, inculcating a love of these poets in her sons. Her interest in the neo-classical revival showed in the paintings and busts of ancient Greece and Rome in her home.
William Wilde was Ireland's leading oto-ophthalmologic (ear and eye) surgeon and was knighted in 1864 for his services as medical adviser and assistant commissioner to the censuses of Ireland. He also wrote books about Irish archaeology and peasant folklore. A renowned philanthropist, his dispensary for the care of the city's poor at the rear of Trinity College, Dublin, was the forerunner of the Dublin Eye and Ear Hospital, now located at Adelaide Road. On his father's side Wilde was descended from a Dutchman, Colonel de Wilde, who went to Ireland with King William of Orange's invading army in 1690, and numerous Anglo-Irish ancestors. On his mother's side, Wilde's ancestors included a bricklayer from County Durham, who emigrated to Ireland sometime in the 1770s.
Wilde was baptised as an infant in St. Mark's Church, Dublin, the local Church of Ireland (Anglican) church. When the church was closed, the records were moved to the nearby St. Ann's Church, Dawson Street. Davis Coakley mentions a second baptism by a Catholic priest, Father Prideaux Fox, who befriended Oscar's mother circa 1859. According to Fox's testimony in "Donahoe's Magazine" in 1905, Jane Wilde would visit his chapel in Glencree, County Wicklow, for Mass and would take her sons with her. She asked Father Fox in this period to baptise her sons.
Fox described it in this way:
"I am not sure if she ever became a Catholic herself but it was not long before she asked me to instruct two of her children, one of them being the future erratic genius, Oscar Wilde. After a few weeks I baptized these two children, Lady Wilde herself being present on the occasion.
In addition to his children with his wife, Sir William Wilde was the father of three children born out of wedlock before his marriage: Henry Wilson, born in 1838 to one woman, and Emily and Mary Wilde, born in 1847 and 1849, respectively, to a second woman. Sir William acknowledged paternity of his illegitimate or "natural" children and provided for their education, arranging for them to be reared by his relatives rather than with his legitimate children in his family household with his wife.
In 1855, the family moved to No. 1 Merrion Square, where Wilde's sister, Isola, was born in 1857. The Wildes' new home was larger. With both his parents' success and delight in social life, the house soon became the site of a "unique medical and cultural milieu". Guests at their salon included Sheridan Le Fanu, Charles Lever, George Petrie, Isaac Butt, William Rowan Hamilton and Samuel Ferguson.
Until he was nine, Oscar Wilde was educated at home, where a French nursemaid and a German governess taught him their languages. He attended Portora Royal School in Enniskillen, County Fermanagh, from 1864 to 1871. Until his early twenties, Wilde summered at the villa, Moytura House, which his father had built in Cong, County Mayo. There the young Wilde and his brother Willie played with George Moore.
Isola died at age nine of meningitis. Wilde's poem "Requiescat" is written to her memory.
"Tread lightly, she is nearUnder the snow
Speak gently, she can hear
the daisies grow"
Wilde left Portora with a royal scholarship to read classics at Trinity College, Dublin, from 1871 to 1874, sharing rooms with his older brother Willie Wilde. Trinity, one of the leading classical schools, placed him with scholars such as R. Y. Tyrell, Arthur Palmer, Edward Dowden and his tutor, Professor J. P. Mahaffy, who inspired his interest in Greek literature. As a student Wilde worked with Mahaffy on the latter's book "Social Life in Greece". Wilde, despite later reservations, called Mahaffy "my first and best teacher" and "the scholar who showed me how to love Greek things". For his part, Mahaffy boasted of having created Wilde; later, he said Wilde was "the only blot on my tutorship".
The University Philosophical Society also provided an education, as members discussed intellectual and artistic subjects such as Dante Gabriel Rossetti and Algernon Charles Swinburne weekly. Wilde quickly became an established member – the members' suggestion book for 1874 contains two pages of banter sportingly mocking Wilde's emergent aestheticism. He presented a paper titled "Aesthetic Morality". At Trinity, Wilde established himself as an outstanding student: he came first in his class in his first year, won a scholarship by competitive examination in his second and, in his finals, won the Berkeley Gold Medal in Greek, the University's highest academic award. He was encouraged to compete for a demyship to Magdalen College, Oxford – which he won easily, having already studied Greek for over nine years.
At Magdalen, he read Greats from 1874 to 1878, and from there he applied to join the Oxford Union, but failed to be elected.
Attracted by its dress, secrecy, and ritual, Wilde petitioned the Apollo Masonic Lodge at Oxford, and was soon raised to the "Sublime Degree of Master Mason". During a resurgent interest in Freemasonry in his third year, he commented he "would be awfully sorry to give it up if I secede from the Protestant Heresy". Wilde's active involvement in Freemasonry lasted only for the time he spent at Oxford; he allowed his membership of the Apollo University Lodge to lapse after failing to pay subscriptions.
Catholicism deeply appealed to him, especially its rich liturgy, and he discussed converting to it with clergy several times. In 1877, Wilde was left speechless after an audience with Pope Pius IX in Rome. He eagerly read the books of Cardinal Newman, a noted Anglican priest who had converted to Catholicism and risen in the church hierarchy. He became more serious in 1878, when he met the Reverend Sebastian Bowden, a priest in the Brompton Oratory who had received some high-profile converts. Neither his father, who threatened to cut off his funds, nor Mahaffy thought much of the plan; but Wilde, the supreme individualist, balked at the last minute from pledging himself to any formal creed, and on the appointed day of his baptism, sent Father Bowden a bunch of altar lilies instead. Wilde did retain a lifelong interest in Catholic theology and liturgy.
While at Magdalen College, Wilde became particularly well known for his role in the aesthetic and decadent movements. He wore his hair long, openly scorned "manly" sports though he occasionally boxed, and he decorated his rooms with peacock feathers, lilies, sunflowers, blue china and other "objets d'art". He once remarked to friends, whom he entertained lavishly, "I find it harder and harder every day to live up to my blue china." The line quickly became famous, accepted as a slogan by aesthetes but used against them by critics who sensed in it a terrible vacuousness. Some elements disdained the aesthetes, but their languishing attitudes and showy costumes became a recognised pose. Wilde was once physically attacked by a group of four fellow students, and dealt with them single-handedly, surprising critics. By his third year Wilde had truly begun to develop himself and his myth, and considered his learning to be more expansive than what was within the prescribed texts. This attitude resulted in his being rusticated for one term, after he had returned late to a college term from a trip to Greece with Mahaffy.
Wilde did not meet Walter Pater until his third year, but had been enthralled by his "Studies in the History of the Renaissance", published during Wilde's final year in Trinity. Pater argued that man's sensibility to beauty should be refined above all else, and that each moment should be felt to its fullest extent. Years later, in "De Profundis", Wilde described Pater's "Studies..." as "that book that has had such a strange influence over my life". He learned tracts of the book by heart, and carried it with him on travels in later years. Pater gave Wilde his sense of almost flippant devotion to art, though he gained a purpose for it through the lectures and writings of critic John Ruskin. Ruskin despaired at the self-validating aestheticism of Pater, arguing that the importance of art lies in its potential for the betterment of society. Ruskin admired beauty, but believed it must be allied with, and applied to, moral good. When Wilde eagerly attended Ruskin's lecture series "The Aesthetic and Mathematic Schools of Art in Florence", he learned about aesthetics as the non-mathematical elements of painting. Despite being given to neither early rising nor manual labour, Wilde volunteered for Ruskin's project to convert a swampy country lane into a smart road neatly edged with flowers.
Wilde won the 1878 Newdigate Prize for his poem "Ravenna", which reflected on his visit there the year before, and he duly read it at Encaenia. In November 1878, he graduated with a double first in his B.A. of Classical Moderations and Literae Humaniores (Greats). Wilde wrote to a friend, "The dons are 'astonied' beyond words – the Bad Boy doing so well in the end!"
After graduation from Oxford, Wilde returned to Dublin, where he met again Florence Balcombe, a childhood sweetheart. She became engaged to Bram Stoker and they married in 1878. Wilde was disappointed but stoic: he wrote to her, remembering "the two sweet years – the sweetest years of all my youth" during which they had been close. He also stated his intention to "return to England, probably for good." This he did in 1878, only briefly visiting Ireland twice after that.
Unsure of his next step, Wilde wrote to various acquaintances enquiring about Classics positions at Oxford or Cambridge. "The Rise of Historical Criticism" was his submission for the Chancellor's Essay prize of 1879, which, though no longer a student, he was still eligible to enter. Its subject, "Historical Criticism among the Ancients", seemed ready-made for Wilde – with both his skill in composition and ancient learning – but he struggled to find his voice with the long, flat, scholarly style. Unusually, no prize was awarded that year.
With the last of his inheritance from the sale of his father's houses, he set himself up as a bachelor in London. The 1881 British Census listed Wilde as a boarder at 1 (now 44) Tite Street, Chelsea, where Frank Miles, a society painter, was the head of the household. Wilde spent the next six years in London and Paris, and in the United States, where he travelled to deliver lectures.
He had been publishing lyrics and poems in magazines since entering Trinity College, especially in "Kottabos" and the "Dublin University Magazine". In mid-1881, at 27 years old, he published "Poems", which collected, revised and expanded his poems.
The book was generally well received, and sold out its first print run of 750 copies. "Punch" was less enthusiastic, saying "The poet is Wilde, but his poetry's tame". By a tight vote, the Oxford Union condemned the book for alleged plagiarism. The librarian, who had requested the book for the library, returned the presentation copy to Wilde with a note of apology. Biographer Richard Ellmann argues that Wilde's poem "Hélas!" was a sincere, though flamboyant, attempt to explain the dichotomies the poet saw in himself; one line reads: "To drift with every passion till my soul / Is a stringed lute on which all winds can play".
The book had further printings in 1882. It was bound in a rich, enamel parchment cover (embossed with gilt blossom) and printed on hand-made Dutch paper; over the next few years, Wilde presented many copies to the dignitaries and writers who received him during his lecture tours.
Aestheticism was sufficiently in vogue to be caricatured by Gilbert and Sullivan in "Patience" (1881). Richard D'Oyly Carte, an English impresario, invited Wilde to make a lecture tour of North America, simultaneously priming the pump for the US tour of "Patience" and selling this most charming aesthete to the American public. Wilde journeyed on the SS "Arizona", arriving 2 January 1882, and disembarking the following day. Originally planned to last four months, it continued for almost a year due to the commercial success. Wilde sought to transpose the beauty he saw in art into daily life. This was a practical as well as philosophical project: in Oxford he had surrounded himself with blue china and lilies, and now one of his lectures was on interior design.
When asked to explain reports that he had paraded down Piccadilly in London carrying a lily, long hair flowing, Wilde replied, "It's not whether I did it or not that's important, but whether people believed I did it". Wilde believed that the artist should hold forth higher ideals, and that pleasure and beauty would replace utilitarian ethics.
Wilde and aestheticism were both mercilessly caricatured and criticised in the press; the "Springfield Republican", for instance, commented on Wilde's behaviour during his visit to Boston to lecture on aestheticism, suggesting that Wilde's conduct was more a bid for notoriety than devotion to beauty and the aesthetic. T. W. Higginson, a cleric and abolitionist, wrote in "Unmanly Manhood" of his general concern that Wilde, "whose only distinction is that he has written a thin volume of very mediocre verse", would improperly influence the behaviour of men and women.
According to biographer Michèle Mendelssohn, Wilde was the subject of anti-Irish caricature and was portrayed as a monkey, a blackface performer and a Christy's Minstrel throughout his career. ""Harper's Weekly" put a sunflower-worshipping monkey dressed as Wilde on the front of the January 1882 issue. The magazine didn't let its reputation for quality impede its expression of what are now considered odious ethnic and racial ideologies. The drawing stimulated other American maligners and, in England, had a full-page reprint in the "Lady's Pictorial". ... When the "National Republican" discussed Wilde, it was to explain 'a few items as to the animal's pedigree.' And on 22 January 1882 the Washington Post illustrated the Wild Man of Borneo alongside Oscar Wilde of England and asked 'How far is it from this to this?'" Though his press reception was hostile, Wilde was well received in diverse settings across America; he drank whiskey with miners in Leadville, Colorado, and was fêted at the most fashionable salons in many cities he visited.
His earnings, plus expected income from "The Duchess of Padua", allowed him to move to Paris between February and mid-May 1883. While there he met Robert Sherard, whom he entertained constantly. "We are dining on the Duchess tonight", Wilde would declare before taking him to an expensive restaurant. In August he briefly returned to New York for the production of "Vera", his first play, after it was turned down in London. He reportedly entertained the other passengers with "Ave Imperatrix!", about the rise and fall of empires. E. C. Stedman, in "Victorian Poets", describes this "lyric to England" as "manly verse – a poetic and eloquent invocation".
Wilde had to return to England, where he continued to lecture on topics including "Personal Impressions of America", "The Value of Art in Modern Life", and "Dress".
In London, he had been introduced in 1881 to Constance Lloyd, daughter of Horace Lloyd, a wealthy Queen's Counsel, and his wife. She happened to be visiting Dublin in 1884, when Wilde was lecturing at the Gaiety Theatre. He proposed to her, and they married on 29 May 1884 at the Anglican St James's Church, Paddington, in London. Although Constance had an annual allowance of £250, which was generous for a young woman (equivalent to about £ in current value), the Wildes had relatively luxurious tastes. They had preached to others for so long on the subject of design that people expected their home to set new standards. No. 16, Tite Street was duly renovated in seven months at considerable expense. The couple had two sons together, Cyril (1885) and Vyvyan (1886). Wilde became the sole literary signatory of George Bernard Shaw's petition for a pardon of the anarchists arrested (and later executed) after the Haymarket massacre in Chicago in 1886.
Robert Ross had read Wilde's poems before they met at Oxford in 1886. He seemed unrestrained by the Victorian prohibition against homosexuality, and became estranged from his family. By Richard Ellmann's account, he was a precocious seventeen-year-old who "so young and yet so knowing, was determined to seduce Wilde". According to Daniel Mendelsohn, Wilde, who had long alluded to Greek love, was "initiated into homosexual sex" by Ross, while his "marriage had begun to unravel after his wife's second pregnancy, which left him physically repelled".
Criticism over artistic matters in "The Pall Mall Gazette" provoked a letter in self-defence, and soon Wilde was a contributor to that and other journals during 1885–87. He enjoyed reviewing and journalism; the form suited his style. He could organise and share his views on art, literature and life, yet in a format less tedious than lecturing. Buoyed up, his reviews were largely chatty and positive. Wilde, like his parents before him, also supported the cause of Irish nationalism. When Charles Stewart Parnell was falsely accused of inciting murder, Wilde wrote a series of astute columns defending him in the "Daily Chronicle".
His flair, having previously been put mainly into socialising, suited journalism and rapidly attracted notice. With his youth nearly over, and a family to support, in mid-1887 Wilde became the editor of "The Lady's World" magazine, his name prominently appearing on the cover. He promptly renamed it as "The Woman's World" and raised its tone, adding serious articles on parenting, culture, and politics, while keeping discussions of fashion and arts. Two pieces of fiction were usually included, one to be read to children, the other for the ladies themselves. Wilde worked hard to solicit good contributions from his wide artistic acquaintance, including those of Lady Wilde and his wife Constance, while his own "Literary and Other Notes" were themselves popular and amusing.
The initial vigour and excitement which he brought to the job began to fade as administration, commuting and office life became tedious. At the same time as Wilde's interest flagged, the publishers became concerned anew about circulation: sales, at the relatively high price of one shilling, remained low. Increasingly sending instructions to the magazine by letter, Wilde began a new period of creative work and his own column appeared less regularly. In October 1889, Wilde had finally found his voice in prose and, at the end of the second volume, left "The Woman's World". The magazine outlasted him by one issue.
If Wilde's period at the helm of the magazine was a mixed success from an organizational point of view, it played a pivotal role in his development as a writer and facilitated his ascent to fame. Whilst Wilde the journalist supplied articles under the guidance of his editors, Wilde the editor was forced to learn to manipulate the literary marketplace on his own terms.
During the late 1880s, Wilde was a close friend of the artist James McNeill Whistler and they dined together on many occasions. At one of these dinners, Whistler said a bon mot that Wilde found particularly witty; Wilde exclaimed that he wished that he had said it, and Whistler retorted "You will, Oscar, you will". Herbert Vivian—a mutual friend of Wilde and Whistler—attended the dinner and recorded it in his article "The Reminiscences of a Short Life", which appeared in "The Sun" in 1889. The article alleged that Wilde had a habit of passing off other people's witticisms as his own—especially Whistler's. Wilde considered Vivian's article a scurrilous betrayal, and it directly caused the end of the friendship between Wilde and Whistler. The "Reminiscences" also caused great acrimony between Wilde and Vivian, Wilde accusing Vivian of "the inaccuracy of an eavesdropper with the method of a blackmailer" and banishing Vivian from his circle.
Wilde published "The Happy Prince and Other Tales" in 1888, and had been regularly writing fairy stories for magazines. In 1891 he published two more collections, "Lord Arthur Savile's Crime and Other Stories", and in September "A House of Pomegranates" was dedicated "To Constance Mary Wilde". "The Portrait of Mr. W. H.", which Wilde had begun in 1887, was first published in "Blackwood's Edinburgh Magazine" in July 1889. It is a short story, which reports a conversation, in which the theory that Shakespeare's sonnets were written out of the poet's love of the boy actor "Willie Hughes", is advanced, retracted, and then propounded again. The only evidence for this is two supposed puns within the sonnets themselves.
The anonymous narrator is at first sceptical, then believing, finally flirtatious with the reader: he concludes that "there is really a great deal to be said of the Willie Hughes theory of Shakespeare's sonnets." By the end fact and fiction have melded together. Arthur Ransome wrote that Wilde "read something of himself into Shakespeare's sonnets" and became fascinated with the "Willie Hughes theory" despite the lack of biographical evidence for the historical William Hughes' existence. Instead of writing a short but serious essay on the question, Wilde tossed the theory amongst the three characters of the story, allowing it to unfold as background to the plot. The story thus is an early masterpiece of Wilde's combining many elements that interested him: conversation, literature and the idea that to shed oneself of an idea one must first convince another of its truth. Ransome concludes that Wilde succeeds precisely because the literary criticism is unveiled with such a deft touch.
Though containing nothing but "special pleading", it would not, he says, "be possible to build an airier castle in Spain than this of the imaginary William Hughes"; we continue listening nonetheless, to be charmed by the telling. "You must believe in Willie Hughes," Wilde told an acquaintance, "I almost do, myself."
Wilde, having tired of journalism, had been busy setting out his aesthetic ideas more fully in a series of longer prose pieces which were published in the major literary-intellectual journals of the day. In January 1889, "The Decay of Lying: A Dialogue" appeared in "The Nineteenth Century", and "Pen, Pencil and Poison", a satirical biography of Thomas Griffiths Wainewright, in "The Fortnightly Review", edited by Wilde's friend Frank Harris. Two of Wilde's four writings on aesthetics are dialogues: though Wilde had evolved professionally from lecturer to writer, he retained an oral tradition of sorts. Having always excelled as a wit and raconteur, he often composed by assembling phrases, "bons mots" and witticisms into a longer, cohesive work.
Wilde was concerned about the effect of moralising on art; he believed in art's redemptive, developmental powers: "Art is individualism, and individualism is a disturbing and disintegrating force. There lies its immense value. For what it seeks is to disturb monotony of type, slavery of custom, tyranny of habit, and the reduction of man to the level of a machine." In his only political text, "The Soul of Man Under Socialism", he argued political conditions should establish this primacy – private property should be abolished, and cooperation should be substituted for competition. At the same time, he stressed that the government most amenable to artists was no government at all. Wilde envisioned a society where mechanisation has freed human effort from the burden of necessity, effort which can instead be expended on artistic creation. George Orwell summarised, "In effect, the world will be populated by artists, each striving after perfection in the way that seems best to him."
This point of view did not align him with the Fabians, intellectual socialists who advocated using state apparatus to change social conditions, nor did it endear him to the monied classes whom he had previously entertained. Hesketh Pearson, introducing a collection of Wilde's essays in 1950, remarked how "The Soul of Man Under Socialism" had been an inspirational text for revolutionaries in Tsarist Russia but laments that in the Stalinist era "it is doubtful whether there are any uninspected places in which it could now be hidden".
Wilde considered including this pamphlet and "The Portrait of Mr. W. H.", his essay-story on Shakespeare's sonnets, in a new anthology in 1891, but eventually decided to limit it to purely aesthetic subjects. "Intentions" packaged revisions of four essays: "The Decay of Lying"; "Pen, Pencil and Poison"; "The Truth of Masks" (first published 1885); and "The Critic as Artist" in two parts. For Pearson the biographer, the essays and dialogues exhibit every aspect of Wilde's genius and character: wit, romancer, talker, lecturer, humanist and scholar; he concludes that "no other productions of his have as varied an appeal". 1891 turned out to be Wilde's "annus mirabilis"; apart from his three collections he also produced his only novel.
The first version of "The Picture of Dorian Gray" was published as the lead story in the July 1890 edition of "Lippincott's Monthly Magazine", along with five others. The story begins with a man painting a picture of Gray. When Gray, who has a "face like ivory and rose leaves", sees his finished portrait, he breaks down. Distraught that his beauty will fade while the portrait stays beautiful, he inadvertently makes a Faustian bargain in which only the painted image grows old while he stays beautiful and young. For Wilde, the purpose of art would be to guide life as if beauty alone were its object. As Gray's portrait allows him to escape the corporeal ravages of his hedonism, Wilde sought to juxtapose the beauty he saw in art with daily life.
Reviewers immediately criticised the novel's decadence and homosexual allusions; "The Daily Chronicle", for example, called it "unclean", "poisonous", and "heavy with the mephitic odours of moral and spiritual putrefaction". Wilde vigorously responded, writing to the editor of the "Scots Observer" to clarify his stance on ethics and aesthetics in art – "If a work of art is rich and vital and complete, those who have artistic instincts will see its beauty and those to whom ethics appeal more strongly will see its moral lesson." He nevertheless revised it extensively for book publication in 1891: six new chapters were added, some overtly decadent passages and homo-eroticism excised, and a preface was included consisting of twenty-two epigrams, such as "Books are well written, or badly written. That is all."
Contemporary reviewers and modern critics have postulated numerous possible sources of the story, a search Jerusha McCormack argues is futile because Wilde "has tapped a root of Western folklore so deep and ubiquitous that the story has escaped its origins and returned to the oral tradition." Wilde claimed the plot was "an idea that is as old as the history of literature but to which I have given a new form". Modern critic Robin McKie considered the novel to be technically mediocre, saying that the conceit of the plot had guaranteed its fame, but that the device is never pushed to its full potential. On the other hand, Robert McCrum of "The Guardian" deemed it the 27th best novel ever written in English, calling it "an arresting, and slightly camp, exercise in late-Victorian gothic."
The 1891 census records the Wildes' residence at 16 Tite Street, where he lived with his wife Constance and two sons. Wilde though, not content with being better known than ever in London, returned to Paris in October 1891, this time as a respected writer. He was received at the "salons littéraires", including the famous "mardis" of Stéphane Mallarmé, a renowned symbolist poet of the time. Wilde's two plays during the 1880s, "Vera; or, The Nihilists" and "The Duchess of Padua", had not met with much success. He had continued his interest in the theatre and now, after finding his voice in prose, his thoughts turned again to the dramatic form as the biblical iconography of Salome filled his mind. One evening, after discussing depictions of Salome throughout history, he returned to his hotel and noticed a blank copybook lying on the desk, and it occurred to him to write in it what he had been saying. The result was a new play, "Salomé", written rapidly and in French.
A tragedy, it tells the story of Salome, the stepdaughter of the tetrarch Herod Antipas, who, to her stepfather's dismay but mother's delight, requests the head of Jokanaan (John the Baptist) on a silver platter as a reward for dancing the Dance of the Seven Veils. When Wilde returned to London just before Christmas the "Paris Echo" referred to him as "le great event" of the season. Rehearsals of the play, starring Sarah Bernhardt, began but the play was refused a licence by the Lord Chamberlain, since it depicted biblical characters. "Salome" was published jointly in Paris and London in 1893, but was not performed until 1896 in Paris, during Wilde's later incarceration.
Wilde, who had first set out to irritate Victorian society with his dress and talking points, then outrage it with "Dorian Gray", his novel of vice hidden beneath art, finally found a way to critique society on its own terms. "Lady Windermere's Fan" was first performed on 20 February 1892 at St James's Theatre, packed with the cream of society. On the surface a witty comedy, there is subtle subversion underneath: "it concludes with collusive concealment rather than collective disclosure". The audience, like Lady Windermere, are forced to soften harsh social codes in favour of a more nuanced view. The play was enormously popular, touring the country for months, but largely trashed by conservative critics.
It was followed by "A Woman of No Importance" in 1893, another Victorian comedy, revolving around the spectre of illegitimate births, mistaken identities and late revelations. Wilde was commissioned to write two more plays and "An Ideal Husband", written in 1894, followed in January 1895.
Peter Raby said these essentially English plays were well-pitched, "Wilde, with one eye on the dramatic genius of Ibsen, and the other on the commercial competition in London's West End, targeted his audience with adroit precision".
In mid-1891 Lionel Johnson introduced Wilde to Lord Alfred Douglas, Johnson's cousin and an undergraduate at Oxford at the time. Known to his family and friends as "Bosie", he was a handsome and spoilt young man. An intimate friendship sprang up between Wilde and Douglas and by 1893 Wilde was infatuated with Douglas and they consorted together regularly in a tempestuous affair. If Wilde was relatively indiscreet, even flamboyant, in the way he acted, Douglas was reckless in public. Wilde, who was earning up to £100 a week from his plays (his salary at "The Woman's World" had been £6), indulged Douglas's every whim: material, artistic or sexual.
Douglas soon initiated Wilde into the Victorian underground of gay prostitution and Wilde was introduced to a series of young working-class male prostitutes from 1892 onwards by Alfred Taylor. These infrequent rendezvous usually took the same form: Wilde would meet the boy, offer him gifts, dine him privately and then take him to a hotel room. Unlike Wilde's idealised relations with Ross, John Gray, and Douglas, all of whom remained part of his aesthetic circle, these consorts were uneducated and knew nothing of literature. Soon his public and private lives had become sharply divided; in "De Profundis" he wrote to Douglas that "It was like feasting with panthers; the danger was half the excitement... I did not know that when they were to strike at me it was to be at another's piping and at another's pay."
Douglas and some Oxford friends founded a journal, "The Chameleon", to which Wilde "sent a page of paradoxes originally destined for the "Saturday Review"". "Phrases and Philosophies for the Use of the Young" was to come under attack six months later at Wilde's trial, where he was forced to defend the magazine to which he had sent his work. In any case, it became unique: "The Chameleon" was not published again.
Lord Alfred's father, the Marquess of Queensberry, was known for his outspoken atheism, brutish manner and creation of the modern rules of boxing. Queensberry, who feuded regularly with his son, confronted Wilde and Lord Alfred about the nature of their relationship several times, but Wilde was able to mollify him. In June 1894, he called on Wilde at 16 Tite Street, without an appointment, and clarified his stance:
"I do not say that you are it, but you look it, and pose at it, which is just as bad. And if I catch you and my son again in any public restaurant I will thrash you" to which Wilde responded: "I don't know what the Queensberry rules are, but the Oscar Wilde rule is to shoot on sight". His account in "De Profundis" was less triumphant: "It was when, in my library at Tite Street, waving his small hands in the air in epileptic fury, your father... stood uttering every foul word his foul mind could think of, and screaming the loathsome threats he afterwards with such cunning carried out". Queensberry only described the scene once, saying Wilde had "shown him the white feather", meaning he had acted in a cowardly way. Though trying to remain calm, Wilde saw that he was becoming ensnared in a brutal family quarrel. He did not wish to bear Queensberry's insults, but he knew to confront him could lead to disaster were his liaisons disclosed publicly.
Wilde's final play again returns to the theme of switched identities: the play's two protagonists engage in "bunburying" (the maintenance of alternative personas in the town and country) which allows them to escape Victorian social mores. "Earnest" is even lighter in tone than Wilde's earlier comedies. While their characters often rise to serious themes in moments of crisis, "Earnest" lacks the by-now stock Wildean characters: there is no "woman with a past", the principals are neither villainous nor cunning, simply idle cultivés, and the idealistic young women are not that innocent. Mostly set in drawing rooms and almost completely lacking in action or violence, "Earnest" lacks the self-conscious decadence found in "The Picture of Dorian Gray" and "Salome".
The play, now considered Wilde's masterpiece, was rapidly written in Wilde's artistic maturity in late 1894. It was first performed on 14 February 1895, at St James's Theatre in London, Wilde's second collaboration with George Alexander, the actor-manager. Both author and producer assiduously revised, prepared and rehearsed every line, scene and setting in the months before the premiere, creating a carefully constructed representation of late-Victorian society, yet simultaneously mocking it. During rehearsal Alexander requested that Wilde shorten the play from four acts to three, which the author did. Premieres at St James's seemed like "brilliant parties", and the opening of "The Importance of Being Earnest" was no exception. Allan Aynesworth (who played Algernon) recalled to Hesketh Pearson, "In my fifty-three years of acting, I never remember a greater triumph than [that] first night." "Earnest's" immediate reception as Wilde's best work to date finally crystallised his fame into a solid artistic reputation. "The Importance of Being Earnest" remains his most popular play.
Wilde's professional success was mirrored by an escalation in his feud with Queensberry. Queensberry had planned to insult Wilde publicly by throwing a bouquet of rotting vegetables onto the stage; Wilde was tipped off and had Queensberry barred from entering the theatre. Fifteen weeks later Wilde was in prison.
On 18 February 1895, the Marquess left his calling card at Wilde's club, the Albemarle, inscribed: "For Oscar Wilde, posing somdomite". Wilde, encouraged by Douglas and against the advice of his friends, initiated a private prosecution against Queensberry for libel, since the note amounted to a public accusation that Wilde had committed the crime of sodomy.
Queensberry was arrested for criminal libel, a charge carrying a possible sentence of up to two years in prison. Under the 1843 Libel Act, Queensberry could avoid conviction for libel only by demonstrating that his accusation was in fact true, and furthermore that there was some "public benefit" to having made the accusation openly. Queensberry's lawyers thus hired private detectives to find evidence of Wilde's homosexual liaisons.
Wilde's friends had advised him against the prosecution at a "Saturday Review" meeting at the Café Royal on 24 March 1895; Frank Harris warned him that "they are going to prove sodomy against you" and advised him to flee to France. Wilde and Douglas walked out in a huff, Wilde saying "it is at such moments as these that one sees who are one's true friends". The scene was witnessed by George Bernard Shaw, who recalled it to Arthur Ransome a day or so before Ransome's trial for libelling Douglas in 1913. To Ransome it confirmed what he had said in his 1912 book on Wilde: that Douglas's rivalry with Robbie Ross for Wilde, and his arguments with his father, had resulted in Wilde's public disaster, as Wilde himself wrote in "De Profundis". Douglas lost the 1913 case. Shaw included an account of the argument between Harris, Douglas and Wilde in the preface to his play "The Dark Lady of the Sonnets".
The libel trial became a "cause célèbre" as salacious details of Wilde's private life with Taylor and Douglas began to appear in the press. A team of private detectives had directed Queensberry's lawyers, led by Edward Carson QC, to the world of the Victorian underground. Wilde's association with blackmailers and male prostitutes, cross-dressers and homosexual brothels was recorded, and various persons involved were interviewed, some being coerced to appear as witnesses since they too were accomplices to the crimes of which Wilde was accused.
The trial opened on 3 April 1895 before Justice Richard Henn Collins amid scenes of near hysteria both in the press and the public galleries. The extent of the evidence massed against Wilde forced him to declare meekly, "I am the prosecutor in this case". Wilde's lawyer, Sir Edward George Clarke, opened the case by pre-emptively asking Wilde about two suggestive letters Wilde had written to Douglas, which the defence had in its possession. He characterised the first as a "prose sonnet" and admitted that the "poetical language" might seem strange to the court but claimed its intent was innocent. Wilde stated that the letters had been obtained by blackmailers who had attempted to extort money from him, but he had refused, suggesting they should take the £60 offered for the letter, a sum he called "unusual for a prose piece of that length". He claimed to regard the letters as works of art rather than something of which to be ashamed.
Carson, a fellow Dubliner who had attended Trinity College, Dublin at the same time as Wilde, cross-examined Wilde on how he perceived the moral content of his works. Wilde replied with characteristic wit and flippancy, claiming that works of art are not capable of being moral or immoral but only well or poorly made, and that only "brutes and illiterates", whose views on art "are incalculably stupid", would make such judgements about art. Carson, a leading barrister, diverged from the normal practice of asking closed questions. Carson pressed Wilde on each topic from every angle, squeezing out nuances of meaning from Wilde's answers, removing them from their aesthetic context and portraying Wilde as evasive and decadent. While Wilde won the most laughs from the court, Carson scored the most legal points. To undermine Wilde's credibility, and to justify Queensberry's description of Wilde as a "posing somdomite", Carson drew from the witness an admission of his capacity for "posing", by demonstrating that he had lied about his age on oath. Playing on this, he returned to the topic throughout his cross-examination. Carson also tried to justify Queensberry's characterisation by quoting from Wilde's novel, "The Picture of Dorian Gray", referring in particular to a scene in the second chapter, in which Lord Henry Wotton explains his decadent philosophy to Dorian, an "innocent young man", in Carson's words.
Carson then moved to the factual evidence and questioned Wilde about his friendships with younger, lower-class men. Wilde admitted being on a first-name basis with them and lavishing gifts upon them, but insisted that nothing untoward had occurred and that the men were merely good friends of his. Carson repeatedly pointed out the unusual nature of these relationships and insinuated that the men were prostitutes. Wilde replied that he did not believe in social barriers, and simply enjoyed the society of young men. When Carson asked Wilde directly whether he had ever kissed a certain servant boy, Wilde responded, "Oh, dear no. He was a particularly plain boy – unfortunately ugly – I pitied him for it." Carson pressed him on the answer, repeatedly asking why the boy's ugliness was relevant. Wilde hesitated, then for the first time became flustered: "You sting me and insult me and try to unnerve me; and at times one says things flippantly when one ought to speak more seriously."
In his opening speech for the defence, Carson announced that he had located several male prostitutes who were to testify that they had had sex with Wilde. On the advice of his lawyers, Wilde dropped the prosecution. Queensberry was found not guilty, as the court declared that his accusation that Wilde was "posing as a somdomite" was justified, "true in substance and in fact". Under the 1843 Libel Act, Queensberry's acquittal rendered Wilde legally liable for the considerable expenses Queensberry had incurred in his defence, which left Wilde bankrupt.
After Wilde left the court, a warrant for his arrest was applied for on charges of sodomy and gross indecency. Robbie Ross found Wilde at the Cadogan Hotel, Pont Street, Knightsbridge, with Reginald Turner; both men advised Wilde to go at once to Dover and try to get a boat to France; his mother advised him to stay and fight. Wilde, lapsing into inaction, could only say, "The train has gone. It's too late." On 6 April 1895, Wilde was arrested for "gross indecency" under Section 11 of the Criminal Law Amendment Act 1885, a term meaning homosexual acts not amounting to buggery (an offence under a separate statute). At Wilde's instruction, Ross and Wilde's butler forced their way into the bedroom and library of 16 Tite Street, packing some personal effects, manuscripts, and letters. Wilde was then imprisoned on remand at Holloway, where he received daily visits from Douglas.
Events moved quickly and his prosecution opened on 26 April 1895, before Mr Justice Charles. Wilde pleaded not guilty. He had already begged Douglas to leave London for Paris, but Douglas complained bitterly, even wanting to give evidence; he was pressed to go and soon fled to the Hotel du Monde. Fearing persecution, Ross and many others also left the United Kingdom during this time. Under cross-examination about "the love that dare not speak its name", Wilde was at first hesitant, then spoke eloquently in its defence.
This response was counter-productive in a legal sense, as it only served to reinforce the charges of homosexual behaviour.
The trial ended with the jury unable to reach a verdict. Wilde's counsel, Sir Edward Clarke, was finally able to get a magistrate to allow Wilde and his friends to post bail. The Reverend Stewart Headlam put up most of the £5,000 surety required by the court, having disagreed with Wilde's treatment by the press and the courts. Wilde was freed from Holloway and, shunning attention, went into hiding at the house of Ernest and Ada Leverson, two of his firm friends. Edward Carson approached Frank Lockwood QC, the Solicitor General, and asked "Can we not let up on the fellow now?" Lockwood answered that he would like to do so, but feared that the case had become too politicised to be dropped.
The final trial was presided over by Mr Justice Wills. On 25 May 1895 Wilde and Alfred Taylor were convicted of gross indecency and sentenced to two years' hard labour. The judge described the sentence, the maximum allowed, as "totally inadequate for a case such as this", and called it "the worst case I have ever tried". Wilde's response "And I? May I say nothing, my Lord?" was drowned out in cries of "Shame" in the courtroom.
Wilde was incarcerated from 25 May 1895 to 18 May 1897.
He first entered Newgate Prison in London for processing, then was moved to Pentonville Prison, where the "hard labour" to which he had been sentenced consisted of many hours of walking a treadmill and picking oakum (separating the fibres in scraps of old navy ropes), and where prisoners were allowed to read only the Bible and "The Pilgrim's Progress".
A few months later he was moved to Wandsworth Prison in London. Inmates there also followed the regimen of "hard labour, hard fare and a hard bed", which wore harshly on Wilde's delicate health. In November he collapsed during chapel from illness and hunger. His right eardrum was ruptured in the fall, an injury that later contributed to his death. He spent two months in the infirmary.
Richard B. Haldane, the Liberal MP and reformer, visited Wilde and had him transferred to Reading Gaol, west of London, on 23 November 1895. The transfer itself was the lowest point of his incarceration, as a crowd jeered and spat at him on the railway platform. He spent the remainder of his sentence there, addressed and identified only as "C.3.3" – the occupant of the third cell on the third floor of C ward.
About five months after Wilde arrived at Reading Gaol, Charles Thomas Wooldridge, a trooper in the Royal Horse Guards, was brought to Reading to await his trial for murdering his wife on 29 March 1896; on 17 June Wooldridge was sentenced to death and returned to Reading for his execution, which took place on Tuesday, 7 July 1896 – the first hanging at Reading in 18 years. Wilde later drew on Wooldridge's hanging in writing "The Ballad of Reading Gaol".
Wilde was not, at first, even allowed paper and pen, but Haldane eventually secured him access to books and writing materials. Wilde requested, among others: the Bible in French; Italian and German grammars; some Ancient Greek texts; Dante's "Divine Comedy"; Joris-Karl Huysmans's new French novel about Christian redemption, "En route"; and essays by St Augustine, Cardinal Newman and Walter Pater.
Between January and March 1897 Wilde wrote a 50,000-word letter to Douglas. He was not allowed to send it, but was permitted to take it with him when released from prison. In reflective mode, Wilde coldly examines his career to date, how he had been a colourful "agent provocateur" in Victorian society, his art, like his paradoxes, seeking to subvert as well as sparkle. His own estimation of himself was: one who "stood in symbolic relations to the art and culture of my age". It was from these heights that his life with Douglas began, and Wilde examines that particularly closely, repudiating him for what Wilde finally sees as his arrogance and vanity: he had not forgotten Douglas' remark, when he was ill, "When you are not on your pedestal you are not interesting." Wilde blamed himself, though, for the ethical degradation of character that he allowed Douglas to bring about in him and took responsibility for his own fall, "I am here for having tried to put your father in prison." The first half concludes with Wilde forgiving Douglas, for his own sake as much as Douglas's. The second half of the letter traces Wilde's spiritual journey of redemption and fulfilment through his prison reading. He realised that his ordeal had filled his soul with the fruit of experience, however bitter it tasted at the time.
Wilde was released from prison on 19 May 1897 and sailed that evening for Dieppe, France. He never returned to the UK.
On his release, he gave the manuscript to Ross, who may or may not have carried out Wilde's instructions to send a copy to Douglas (who later denied having received it). The letter was partially published in 1905 as "De Profundis"; its complete and correct publication first occurred in 1962 in "The Letters of Oscar Wilde".
Though Wilde's health had suffered greatly from the harshness and diet of prison, he had a feeling of spiritual renewal. He immediately wrote to the Society of Jesus requesting a six-month Catholic retreat; when the request was denied, Wilde wept. "I intend to be received into the Catholic Church before long", Wilde told a journalist who asked about his religious intentions.
He spent his last three years impoverished and in exile. He took the name "Sebastian Melmoth", after Saint Sebastian and the titular character of "Melmoth the Wanderer" (a Gothic novel by Charles Maturin, Wilde's great-uncle). Wilde wrote two long letters to the editor of the "Daily Chronicle", describing the brutal conditions of English prisons and advocating penal reform. His discussion of the dismissal of Warder Martin for giving biscuits to an anaemic child prisoner repeated the themes of the corruption and degeneration of punishment that he had earlier outlined in "The Soul of Man under Socialism".
Wilde spent mid-1897 with Robert Ross in the seaside village of Berneval-le-Grand in northern France, where he wrote "The Ballad of Reading Gaol", narrating the execution of Charles Thomas Wooldridge, who murdered his wife in a rage at her infidelity. It moves from objective story-telling to symbolic identification with the prisoners. No attempt is made to assess the justice of the laws which convicted them; rather, the poem highlights the brutalisation of the punishment that all convicts share. Wilde juxtaposes the executed man and himself with the line "Yet each man kills the thing he loves". He adopted the proletarian ballad form and the author was credited as "C33", Wilde's cell number in Reading Gaol. He suggested that it be published in "Reynolds' Magazine", "because it circulates widely among the criminal classes – to which I now belong – for once I will be read by my peers – a new experience for me". It was an immediate and roaring commercial success, going through seven editions in less than two years, only after which "[Oscar Wilde]" was added to the title page, though many in literary circles had known Wilde to be the author. It brought him a small amount of money.
Although Douglas had been the cause of his misfortunes, he and Wilde were reunited in August 1897 at Rouen. This meeting was disapproved of by the friends and families of both men. Constance Wilde was already refusing to meet Wilde or allow him to see their sons, though she sent him money – three pounds a week. During the latter part of 1897, Wilde and Douglas lived together near Naples for a few months until they were separated by their families under the threat of cutting off all funds.
Wilde's final address was at the dingy Hôtel d'Alsace (now known as L'Hôtel), on rue des Beaux-Arts in Saint-Germain-des-Prés, Paris. "This poverty really breaks one's heart: it is so "sale" [filthy], so utterly depressing, so hopeless. Pray do what you can" he wrote to his publisher. He corrected and published "An Ideal Husband" and "The Importance of Being Earnest", the proofs of which, according to Ellmann, show a man "very much in command of himself and of the play" but he refused to write anything else: "I can write, but have lost the joy of writing".
He wandered the boulevards alone and spent what little money he had on alcohol. A series of embarrassing chance encounters with hostile English visitors, or Frenchmen he had known in better days, drowned his spirit. Soon Wilde was sufficiently confined to his hotel to joke, on one of his final trips outside, "My wallpaper and I are fighting a duel to the death. One of us has got to go". On 12 October 1900 he sent a telegram to Ross: "Terribly weak. Please come". His moods fluctuated; Max Beerbohm relates how their mutual friend Reginald 'Reggie' Turner had found Wilde very depressed after a nightmare. "I dreamt that I had died, and was supping with the dead!" "I am sure", Turner replied, "that you must have been the life and soul of the party." Turner was one of the few of the old circle who remained with Wilde to the end and was at his bedside when he died.
By 25 November 1900 Wilde had developed meningitis, then called "cerebral meningitis". Robbie Ross arrived on 29 November, sent for a priest, and Wilde was conditionally baptised into the Catholic Church by Fr Cuthbert Dunne, a Passionist priest from Dublin, Wilde having been baptised in the Church of Ireland and having moreover a recollection of Catholic baptism as a child, a fact later attested to by the minister of the sacrament, Fr Lawrence Fox. Fr Dunne recorded the baptism.
Wilde died of meningitis on 30 November 1900. Different opinions are given as to the cause of the disease: Richard Ellmann claimed it was syphilitic; Merlin Holland, Wilde's grandson, thought this to be a misconception, noting that Wilde's meningitis followed a surgical intervention, perhaps a mastoidectomy; Wilde's physicians, Dr Paul Cleiss and A'Court Tucker, reported that the condition stemmed from an old suppuration of the right ear (from the prison injury, see above) treated for several years ("une ancienne suppuration de l'oreille droite d'ailleurs en traitement depuis plusieurs années") and made no allusion to syphilis.
Wilde was initially buried in the Cimetière de Bagneux outside Paris; in 1909 his remains were disinterred and transferred to Père Lachaise Cemetery, inside the city. His tomb there was designed by Sir Jacob Epstein. It was commissioned by Robert Ross, who asked for a small compartment to be made for his own ashes, which were duly transferred in 1950. The modernist angel depicted as a relief on the tomb was originally complete with male genitalia, which were initially censored by French authorities with a golden leaf. The genitals have since been vandalised; their current whereabouts are unknown. In 2000, Leon Johnson, a multimedia artist, installed a silver prosthesis to replace them. In 2011, the tomb was cleaned of the many lipstick marks left there by admirers and a glass barrier was installed to prevent further marks or damage.
The epitaph is a verse from "The Ballad of Reading Gaol",
And alien tears will fill for him
Pity's long-broken urn,
For his mourners will be outcast men,
And outcasts always mourn.
In 2017, Wilde was among an estimated 50,000 men posthumously pardoned under the Policing and Crime Act 2017 for homosexual acts that are no longer criminal offences. The Act is known informally as the Alan Turing law.
In 2014 Wilde was one of the inaugural honorees in the Rainbow Honor Walk, a walk of fame in San Francisco's Castro neighbourhood noting LGBTQ people who have "made significant contributions in their fields."
The Oscar Wilde Temple, an installation by visual artists McDermott & McGough, opened in 2017 in cooperation with Church of the Village in New York City, then moved to Studio Voltaire in London the next year.
Wilde's life has been the subject of numerous biographies since his death. The earliest were memoirs by those who knew him: often they are personal or impressionistic accounts which can be good character sketches, but are sometimes factually unreliable. Frank Harris, his friend and editor, wrote a biography, "Oscar Wilde: His Life and Confessions" (1916); though prone to exaggeration and sometimes factually inaccurate, it offers a good literary portrait of Wilde. Lord Alfred Douglas wrote two books about his relationship with Wilde. "Oscar Wilde and Myself" (1914), largely ghost-written by T. W. H. Crosland, reacted vindictively to Douglas's discovery that "De Profundis" was addressed to him and defensively tried to distance him from Wilde's scandalous reputation; both authors later regretted the book. Later, in "Oscar Wilde: A Summing Up" (1939) and his "Autobiography", Douglas was more sympathetic to Wilde. Of Wilde's other close friends, Robert Sherard; Robert Ross, his literary executor; and Charles Ricketts variously published biographies, reminiscences or correspondence. The first more or less objective biography of Wilde came about when Hesketh Pearson wrote "Oscar Wilde: His Life and Wit" (1946). In 1954 Wilde's son Vyvyan Holland published his memoir "Son of Oscar Wilde", which recounts the difficulties Wilde's wife and children faced after his imprisonment. It was revised and updated by Merlin Holland in 1989.
"Oscar Wilde, a critical study" by Arthur Ransome was published in 1912. The book only briefly mentioned Wilde's life, but subsequently Ransome (and The Times Book Club) were sued for libel by Lord Alfred Douglas. In April 1913 Douglas lost the libel action after a reading of "De Profundis" refuted his claims.
Richard Ellmann wrote his 1987 biography "Oscar Wilde", for which he posthumously won a U.S. National Book Critics Circle Award in 1988 and a Pulitzer Prize in 1989. The book was the basis for the 1997 film "Wilde", directed by Brian Gilbert and starring Stephen Fry as the title character.
Neil McKenna's 2003 biography, "The Secret Life of Oscar Wilde", offers an exploration of Wilde's sexuality; often speculative in nature, it was widely criticised for its conjecture and lack of scholarly rigour. Thomas Wright's "Oscar's Books" (2008) explores Wilde's reading from his childhood in Dublin to his death in Paris. After tracking down many books that once belonged to Wilde's Tite Street library (dispersed at the time of his trials), Wright was the first to examine Wilde's marginalia.
In 2018, Matthew Sturgis' "Oscar: A Life" was published in London. The book incorporates rediscovered letters and other documents and is the most extensively researched biography of Wilde to appear since 1988.
Parisian literati also produced several biographies and monographs on him. André Gide wrote "In Memoriam, Oscar Wilde", and Wilde also features in his journals. Thomas Louis, who had earlier translated books on Wilde into French, produced his own "L'esprit d'Oscar Wilde" in 1920. Modern books include Philippe Jullian's "Oscar Wilde", and "L'affaire Oscar Wilde, ou, Du danger de laisser la justice mettre le nez dans nos draps" ("The Oscar Wilde Affair, or, On the Danger of Allowing Justice to put its Nose in our Sheets") by a French religious historian. | https://en.wikipedia.org/wiki?curid=22614 |
Ostracism
Ostracism (, "ostrakismos") was a procedure under the Athenian democracy in which any citizen could be expelled from the city-state of Athens for ten years. While some instances clearly expressed popular anger at the citizen, ostracism was often used preemptively. It was used as a way of neutralizing someone thought to be a threat to the state or potential tyrant. The word "ostracism" continues to be used for various cases of social shunning.
The name is derived from the "ostraka" (singular "ostrakon", ὄστρακον), referring to the pottery shards that were used as voting tokens. Broken pottery, abundant and virtually free, served as a kind of scrap paper (in contrast to papyrus, which was imported from Egypt as a high-quality writing surface, and was thus too costly to be disposable).
Each year the Athenians were asked in the assembly whether they wished to hold an ostracism. The question was put in the sixth of the ten months used for state business under the democracy (January or February in the modern Gregorian calendar). If they voted "yes", then an ostracism would be held two months later. In a section of the agora set off and suitably barriered, each citizen gave the name of the man he wished to see ostracised to a scribe (since many voters were illiterate), had the name scratched on a pottery shard, and deposited the shard in an urn. The presiding officials counted the "ostraka" submitted and sorted the names into separate piles. The person whose pile contained the most "ostraka" would be banished, provided that a quorum was also met, a requirement on which the two principal sources disagree.
Plutarch's evidence for a quorum of 6,000 votes in total—a figure which on "a priori" grounds is a necessity for the institution in the account of Philochorus as well—accords with the number required for grants of citizenship in the following century and is generally preferred.
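The procedure thus amounts to a plurality vote gated by a quorum, which a short sketch can make concrete. The following Python fragment is purely illustrative—the function and its name are invented here, not drawn from any ancient source—and assumes the generally preferred reading, in which 6,000 is the minimum total number of shards cast; under the alternative reading the check would instead require the leading name itself to gather 6,000 votes.

```python
from collections import Counter

QUORUM = 6000  # preferred reading: at least 6,000 shards cast in total


def ostracism_result(ostraka):
    """Return the name to be banished, or None if the quorum fails.

    `ostraka` is a list of names, one per pottery shard submitted.
    """
    if len(ostraka) < QUORUM:
        return None                      # too few voters: no one is expelled
    piles = Counter(ostraka)             # sort the shards into piles by name
    name, _count = piles.most_common(1)[0]
    return name                          # the largest pile wins by plurality
```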
The person nominated had ten days to leave the city. If he attempted to return, the penalty was death. Notably, the property of the man banished was not confiscated and there was no loss of status. After the ten years, he was allowed to return without stigma. It was possible for the assembly to recall an ostracised person ahead of time; before the Persian invasion of 479 BC, an amnesty was declared under which at least two ostracised leaders—Pericles' father Xanthippus and Aristides 'the Just'—are known to have returned. Similarly, Cimon, ostracised in 461 BC, was recalled during an emergency.
Ostracism was crucially different from Athenian law at the time; there was no charge, and no defence could be mounted by the person expelled. The two stages of the procedure ran in the reverse order from that used under almost any trial system—here it is as if a jury are first asked "Do you want to find someone guilty?", and subsequently asked "Whom do you wish to accuse?". Equally out of place in a judicial framework is perhaps the institution's most peculiar feature: that it can take place at most once a year, and only for one person. In this it resembles the Greek "pharmakos" or scapegoat—though in contrast, "pharmakos" generally ejected a lowly member of the community.
A further distinction between these two modes (and one not obvious from a modern perspective) is that ostracism was an automatic procedure that required no initiative from any individual, with the vote simply occurring on the wish of the electorate—a diffuse exercise of power. By contrast, an Athenian trial needed the initiative of a particular citizen-prosecutor. While prosecution often led to a counterattack (or was a counterattack itself), no such response was possible in the case of ostracism as responsibility lay with the polity as a whole. In contrast to a trial, ostracism generally reduced political tension rather than increased it.
Although ten years of exile would have been difficult for an Athenian to face, it was relatively mild in comparison to the kind of sentences inflicted by courts; when dealing with politicians held to be acting against the interests of the people, Athenian juries could inflict very severe penalties such as death, unpayably large fines, confiscation of property, permanent exile and loss of citizens' rights through "atimia". Further, the elite Athenians who suffered ostracism were rich or noble men who had connections or "xenoi" in the wider Greek world and who, unlike genuine exiles, were able to access their income in Attica from abroad. Plutarch, following the anti-democratic line common in elite sources, presents the possibility of early recall as another example of the inconsistency of majoritarianism that was characteristic of Athenian democracy. However, ten years of exile usually resolved whatever had prompted the expulsion. Ostracism was simply a pragmatic measure; the concept of serving out the full sentence did not apply, as it was a preventative measure, not a punitive one.
One curious window on the practicalities of ostracism comes from the cache of 190 ostraka discovered dumped in a well next to the acropolis. From the handwriting, they appear to have been written by fourteen individuals; all bear the name of Themistocles, ostracised before 471 BC, and they were evidently meant for distribution to voters. This was not necessarily evidence of electoral fraud (being no worse than modern voting instruction cards), but their being dumped in the well may suggest that their creators wished to hide them. If so, these ostraka provide an example of organized groups attempting to influence the outcome of ostracisms. The two-month gap between the first and second phases would have easily allowed for such a campaign.
There is another interpretation, however, according to which these ostraka were prepared beforehand by enterprising businessmen who offered them for sale to citizens who could not easily inscribe the desired names for themselves or who simply wished to save time.
The two-month gap is a key feature in the institution, much as in elections under modern liberal democracies. It first prevented the candidate for expulsion being chosen out of immediate anger, although an Athenian general such as Cimon would not have wanted to lose a battle in the week before such a second vote. Secondly, it opened up a period for discussion (or perhaps agitation), whether informally in daily talk or in public speeches before the Athenian assembly or courts. In this process a consensus, or rival consensuses, might emerge. Further, in that time of waiting, ordinary Athenian citizens must have felt a certain power over the greatest members of their city; conversely, the most prominent citizens had an incentive to worry how their social inferiors regarded them.
Ostracism was not in use throughout the whole period of Athenian democracy (circa 506–322 BC), but only occurred in the fifth century BC. The standard account, found in Aristotle's Constitution of the Athenians 22.3, attributes the establishment to Cleisthenes, a pivotal reformer in the creation of the democracy. In that case, ostracism would have been in place from around 506 BC. The first victim of the practice, however, was not expelled until 487 BC—nearly 20 years later. Over the course of the next 60 years some 12 or more individuals followed him. The list of known ostracisms may not be complete, but there is good reason to believe the Athenians did not feel the need to eject someone in this way every year.
Around 12,000 political ostraka have been excavated in the Athenian agora and in the Kerameikos. The second victim, Cleisthenes' nephew Megacles, is named by 4,647 of these, though evidently for a second, undated ostracism. The excavated ostraka seem to fall into three distinct phases—the 480s BC, mid-century (461–443 BC), and finally the years 417–415 BC—which matches fairly well with the clustering of known expulsions, although Themistocles' ostracism before 471 may count as an exception. This suggests that ostracism fell in and out of fashion.
The last known ostracism was that of Hyperbolus in circa 417 BC. There is no sign of its use after the Peloponnesian War, when democracy was restored after the oligarchic coup of the Thirty had collapsed in 403 BC. However, while ostracism was not an active feature of the fourth-century version of democracy, it remained on the books: the question was still put to the assembly each year, but it never again wished to hold one.
Because ostracism was carried out by thousands of people over many decades of an evolving political situation and culture, it did not serve a single monolithic purpose. Observations can be made about the outcomes, as well as the initial purpose for which it was created.
The first rash of people ostracised in the decade after the defeat of the first Persian invasion at Marathon in 490 BC were all related or connected to the tyrant Peisistratos, who had controlled Athens for 36 years up to 527 BC. After his son Hippias was deposed with Spartan help in 510 BC, the family sought refuge with the Persians, and nearly twenty years later Hippias landed with their invasion force at Marathon. Tyranny and Persian aggression were paired threats facing the new democratic regime at Athens, and ostracism was used against both.
Tyranny and democracy had arisen at Athens out of clashes between regional and factional groups organised around politicians, including Cleisthenes. As a reaction, in many of its features the democracy strove to reduce the role of factions as the focus of citizen loyalties. Ostracism, too, may have been intended to work in the same direction: by temporarily decapitating a faction, it could help to defuse confrontations that threatened the order of the State.
In later decades when the threat of tyranny was remote, ostracism seems to have been used as a way to decide between radically opposed policies. For instance, in 443 BC Thucydides, son of Melesias (not to be confused with the historian of the same name) was ostracised. He led an aristocratic opposition to Athenian imperialism and in particular to Pericles' building program on the acropolis, which was funded by taxes created for the wars against the Achaemenid Empire. By expelling Thucydides the Athenian people sent a clear message about the direction of Athenian policy. Similar but more controversial claims have been made about the ostracism of Cimon in 461 BC.
The motives of individual voting citizens cannot, of course, be known. Many of the surviving ostraka name people otherwise unattested; these may well be simply men the submitter disliked, voted against in a moment of private spite. As such, ostracism may be seen as a secular, civic variant of Athenian curse tablets, studied in scholarly literature under the Latin name "defixiones", where small dolls were wrapped in lead sheets written with curses and then buried, sometimes stuck through with nails for good measure.
In one anecdote about Aristides, known as "the Just", who was ostracised in 482, an illiterate citizen, not recognising him, came up to ask him to write the name Aristides on his ostrakon. When Aristides asked why, the man replied it was because he was sick of hearing him being called "the Just". Perhaps merely the sense that someone had become too arrogant or prominent was enough to get someone's name onto an ostrakon. Ostracism may also have served to dissuade citizens from covertly murdering or assassinating intolerably powerful or rising individuals, by providing an open arena and outlet for those harbouring primal frustrations, urges or political grievances. On this theory, advanced by Gregory H. Padowitz, ostracism substituted for murder to the benefit of all parties: the unfortunate individual lived and got a second chance, and society was spared the ugliness of feuds, civil war, political deadlock and killing.
The last ostracism, that of Hyperbolos in or near 417 BC, is elaborately narrated by Plutarch in three separate "lives": Hyperbolos is pictured urging the people to expel one of his rivals, but the two, Nicias and Alcibiades, laying aside their own hostility for a moment, used their combined influence to have him ostracised instead. According to Plutarch, the people then became disgusted with ostracism and abandoned the procedure forever.
In part ostracism lapsed as a procedure at the end of the fifth century because it was replaced by the "graphe paranomon", a regular court action under which a much larger number of politicians might be targeted, instead of just one a year as with ostracism, and with greater severity. But it may already have come to seem like an anachronism as factional alliances organised around important men became increasingly less significant in the later period, and power was more specifically located in the interaction of the individual speaker with the power of the assembly and the courts. The threat to the democratic system in the late fifth century came not from tyranny but from oligarchic coups, threats of which became prominent after two brief seizures of power, in 411 BC by "the Four Hundred" and in 404 BC by "the Thirty", which were not dependent on single powerful individuals. Ostracism was not an effective defence against the oligarchic threat and it was not so used.
Other cities are known to have set up forms of ostracism on the Athenian model, namely Megara, Miletos, Argos and Syracuse, Sicily. In the last of these it was referred to as "petalismos", because the names were written on olive leaves. Little is known about these institutions. Furthermore, pottery shards identified as "ostraka" have been found in Chersonesos Taurica, leading historians to the conclusion that a similar institution existed there as well, in spite of the silence of the ancient records on that count.
A similar modern practice is the recall election, in which the electoral body removes its representation from an elected officer.
Unlike under modern voting procedures, the Athenians did not have to adhere to a strict format for the inscribing of "ostraka". Many extant "ostraka" show that it was possible to write expletives, short epigrams or cryptic injunctions beside the name of the candidate without invalidating the vote.
The social psychologist Kipling Williams has written extensively on ostracism as a modern phenomenon. Williams defines ostracism as "any act or acts of ignoring and excluding of an individual or groups by an individual or a group". Williams suggests that the most common form of ostracism in a modern context is refusing to communicate with a person. By refusing to communicate with a person, that person is effectively ignored and excluded. The advent of the internet has made ostracism much easier to engage in, and conversely much more difficult to detect, with Williams and others describing this online ostracism as "cyberostracism". In email communication, in particular, it is relatively easy for a person or organization to ignore and exclude a specific person, through simply refusing to communicate with the person. Karen Douglas thus describes "unanswered emails" as constituting a form of cyberostracism, and similarly Eric Wesselmann and Kipling Williams describe "ignored emails" as a form of cyberostracism.
Williams and his colleagues have charted responses to ostracism in some five thousand cases, and found two distinctive patterns of response. The first is increased group-conformity, in a quest for re-admittance; the second is to become more provocative and hostile to the group, seeking attention rather than acceptance.
Research by many social psychologists has demonstrated that being rejected from groups can have profound effects on a person (Williams, 2007; Smith, Mackie & Claypool, 2014, p. 409).
Research suggests that ostracism is a common reprisal strategy used by organizations in response to whistleblowing. Kipling Williams, in a survey on US whistleblowers, found that 100 percent reported post-whistleblowing ostracism. Alexander Brown similarly found that post-whistleblowing ostracism is a common response, and indeed describes ostracism as a form of "covert" reprisal, as it is normally so difficult to identify and investigate.
Qahr and ashti is a culture-specific Iranian form of personal shunning, most frequently of another family member in Iran. While modern Western concepts of ostracism are based upon enforcing conformity within a societally recognized group, qahr is a private ("batin"), family-oriented affair of conflict or display of anger that is never disclosed to the public at large, as to do so would be a breach of social etiquette.
"Qahr" is avoidance of a lower-ranking family member who has committed a perceived insult. It is one of several ritualized social customs of Iranian culture.
"Gozasht" means 'tolerance, understanding and a desire or willingness to forgive' and is an essential componant of Qahr and Ashti for both psychological needs of closure and cognition, as well as a culturally accepted source for practicing necessary religious requirements of "tawbah" "(repentance, see Koran 2:222)" and "du'a" (supplication).
Oration IV of Andocides purports to be a speech urging the ostracism of Alcibiades in 415 BC, but it is probably not authentic. | https://en.wikipedia.org/wiki?curid=22615 |
Omega
Omega (capital: Ω, lowercase: ω; Greek ὦ, later ὦ μέγα, Modern Greek ωμέγα) is the 24th and last letter of the Greek alphabet. In the Greek numeric system/Isopsephy (Gematria), it has a value of 800. The word literally means "great O" ("ō mega", mega meaning "great"), as opposed to Ο ο omicron, which means "little O" ("o mikron", micron meaning "little").
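The numeral use is easy to illustrate in code. The sketch below is our own illustration rather than anything from the sources: the value table follows the standard Greek numeral assignments for the 24 classical letters (the archaic signs digamma = 6, koppa = 90 and sampi = 900 are omitted), and the `isopsephy` helper is a hypothetical name.

```python
# Standard Greek numeral (isopsephy) values for the 24 classical letters.
GREEK_VALUES = {
    "α": 1, "β": 2, "γ": 3, "δ": 4, "ε": 5, "ζ": 7, "η": 8, "θ": 9,
    "ι": 10, "κ": 20, "λ": 30, "μ": 40, "ν": 50, "ξ": 60, "ο": 70, "π": 80,
    "ρ": 100, "σ": 200, "ς": 200, "τ": 300, "υ": 400, "φ": 500, "χ": 600,
    "ψ": 700, "ω": 800,
}


def isopsephy(word: str) -> int:
    """Sum the numeral values of the Greek letters in `word`."""
    return sum(GREEK_VALUES.get(ch, 0) for ch in word.lower())


print(isopsephy("ω"))      # 800
print(isopsephy("λογος"))  # 373 (unaccented spelling of "logos")
```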
In phonetic terms, the Ancient Greek Ω is a long open-mid "o" [ɔː], comparable to the vowel of British English "raw". In Modern Greek, Ω represents the mid back rounded vowel [o], the same sound as omicron. The letter omega is transcribed "ō" or simply "o".
As the last letter of the Greek alphabet, Omega is often used to denote the last, the end, or the ultimate limit of a set, in contrast to alpha, the first letter of the Greek alphabet; see Alpha and Omega.
Ω was not part of the early (8th century BC) Greek alphabets. It was introduced in the late 7th century BC in the Ionian cities of Asia Minor to denote the long half-open [ɔː]. It is a variant of omicron (Ο), broken up at the side, with the edges subsequently turned outward.
The Dorian city of Knidos as well as a few Aegean islands, namely Paros, Thasos and Melos, chose the exact opposite innovation, using a broken-up circle for the short vowel and a closed circle for the long /o/.
The name Ωμέγα is Byzantine; in Classical Greek, the letter was called "ō" (ὦ), whereas the omicron was called "ou" (οὖ).
The modern lowercase shape goes back to the uncial form, which developed during the 3rd century BC in ancient handwriting on papyrus from a flattened-out form of the letter with its edges curved even further upward.
In addition to the Greek alphabet, Omega was also adopted into the early Cyrillic alphabet. See Cyrillic omega (Ѡ, ѡ). A Raetic variant is conjectured to be at the origin or parallel evolution of the Elder Futhark ᛟ.
Omega was also adopted into the Latin alphabet, as a letter of the 1982 revision to the African reference alphabet. It has had little use. See Latin omega.
The uppercase letter Ω is used as a symbol in many fields; in the SI system, for example, it denotes the ohm, the unit of electrical resistance.
The minuscule letter ω is likewise used as a symbol; in physics, for example, it commonly denotes angular velocity or angular frequency.
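The choice of code point matters when the letter is used as a symbol: Unicode retains a separate legacy OHM SIGN (U+2126) for compatibility with older character sets, but treats it as canonically equivalent to the Greek capital letter (U+03A9), so normalization folds the two together. A minimal Python demonstration (our own illustration, not from the article):

```python
import unicodedata

omega = "\u03A9"  # GREEK CAPITAL LETTER OMEGA
ohm = "\u2126"    # OHM SIGN, a legacy duplicate kept for compatibility

print(unicodedata.name(omega))                     # GREEK CAPITAL LETTER OMEGA
print(unicodedata.name(ohm))                       # OHM SIGN
print(unicodedata.normalize("NFC", ohm) == omega)  # True: NFC maps U+2126 to U+03A9
```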
Unicode also encodes mathematical variants of omega; these characters are used only as mathematical symbols. Stylized Greek text should be encoded using the normal Greek letters, with markup and formatting to indicate the style of the text. | https://en.wikipedia.org/wiki?curid=22616 |
Operation Barbarossa
Operation Barbarossa (German: "Unternehmen Barbarossa") was the code name for the Axis invasion of the Soviet Union, which started on Sunday, 22 June 1941, during World War II. The operation put into action Nazi Germany's ideological goal of conquering the western Soviet Union so as to repopulate it with Germans. The Germans aimed to use some of the conquered peoples as slave labour for the Axis war effort, to acquire the oil reserves of the Caucasus and the agricultural resources of Soviet territories, and eventually, through extermination, enslavement, Germanization and mass deportation to Siberia, to remove the Slavic peoples and create "Lebensraum" for Germany.
In the two years leading up to the invasion, Germany and the Soviet Union signed political and economic pacts for strategic purposes. Nevertheless, the German High Command began planning an invasion of the Soviet Union in July 1940 (under the codename Operation Otto), which Adolf Hitler authorized on 18 December 1940. Over the course of the operation, about three million personnel of the Axis powers—the largest invasion force in the history of warfare—invaded the western Soviet Union along a front, with 600,000 motor vehicles and over 600,000 horses for non-combat operations. The offensive marked an escalation of World War II, both geographically and in the formation of the Allied coalition including the Soviet Union.
The operation opened up the Eastern Front, in which more forces were committed than in any other theater of war in history. The area saw some of the war's largest battles, most horrific atrocities, and highest casualties (for Soviet and Axis forces alike), all of which influenced the course of World War II and the subsequent history of the 20th century. The German armies eventually captured some five million Soviet Red Army troops, a majority of whom never returned alive. The Nazis deliberately starved to death, or otherwise killed, 3.3 million Soviet prisoners of war, and a vast number of civilians, as the "Hunger Plan" worked to solve German food shortages and exterminate the Slavic population through starvation. Mass shootings and gassing operations, carried out by the Nazis or willing collaborators, murdered over a million Soviet Jews as part of the Holocaust.
The failure of Operation Barbarossa reversed the fortunes of the Third Reich. Operationally, German forces achieved significant victories and occupied some of the most important economic areas of the Soviet Union (mainly in Ukraine) and inflicted, as well as sustained, heavy casualties. Despite these early successes, the German offensive stalled in the Battle of Moscow at the end of 1941, and the subsequent Soviet winter counteroffensive pushed German troops back. The Germans had confidently expected a quick collapse of Soviet resistance as in Poland, but the Red Army absorbed the German Wehrmacht's strongest blows and bogged it down in a war of attrition for which the Germans were unprepared. The Wehrmacht's diminished forces could no longer attack along the entire Eastern Front, and subsequent operations to retake the initiative and drive deep into Soviet territory—such as Case Blue in 1942 and Operation Citadel in 1943—eventually failed, which resulted in the Wehrmacht's retreat and collapse.
As early as 1925, Adolf Hitler vaguely declared in his political manifesto and autobiography "Mein Kampf" that he would invade the Soviet Union, asserting that the German people needed to secure "Lebensraum" ("living space") to ensure the survival of Germany for generations to come. On 10 February 1939, Hitler told his army commanders that the next war would be "purely a war of "Weltanschauungen" ["worldview"] ... totally a people's war, a racial war". On 23 November, once World War II had already started, Hitler declared that "racial war has broken out and this war shall determine who shall govern Europe, and with it, the world". The racial policy of Nazi Germany portrayed the Soviet Union (and all of Eastern Europe) as populated by non-Aryan "Untermenschen" ("sub-humans"), ruled by Jewish Bolshevik conspirators. Hitler claimed in "Mein Kampf" that Germany's destiny was to "turn to the East" as it did "six hundred years ago" (see "Ostsiedlung"). Accordingly, it was stated Nazi policy to kill, deport, or enslave the majority of Russian and other Slavic populations and repopulate the land with Germanic peoples, under the Generalplan Ost. The Nazis' belief in their ethnic superiority pervades official records and pseudoscientific articles in German periodicals, on topics such as "how to deal with alien populations".
While older histories tended to emphasize the notion of a "Clean Wehrmacht" upholding its honor in the face of Hitler's fanaticism, the historian Jürgen Förster notes that "In fact, the military commanders were caught up in the ideological character of the conflict, and involved in its implementation as willing participants." Before and during the invasion of the Soviet Union, German troops were heavily indoctrinated with anti-Bolshevik, anti-Semitic, and anti-Slavic ideology via movies, radio, lectures, books, and leaflets. Likening the Soviets to the forces of Genghis Khan, Hitler told Croatian military leader Slavko Kvaternik that the "Mongolian race" threatened Europe. Following the invasion, Wehrmacht officers told their soldiers to target people who were described as "Jewish Bolshevik subhumans", the "Mongol hordes", the "Asiatic flood", and the "Red beast". Nazi propaganda portrayed the war against the Soviet Union as both an ideological war between German National Socialism and Jewish Bolshevism, and a racial war between the disciplined Germans and the Jewish, Gypsy, and Slavic "Untermenschen". An 'order from the Führer' stated that the "Einsatzgruppen" were to execute all Soviet functionaries who were "less valuable Asiatics, Gypsies and Jews". Six months into the invasion of the Soviet Union, the "Einsatzgruppen" had already murdered in excess of 500,000 Soviet Jews, a figure greater than the number of Red Army soldiers killed in combat during that time. German army commanders cast the Jews as the major cause behind the "partisan struggle". The main guideline for German troops was "Where there's a partisan, there's a Jew, and where there's a Jew, there's a partisan", or "The partisan is where the Jew is". Many German troops viewed the war in Nazi terms and regarded their Soviet enemies as sub-human.
After the war began, the Nazis issued a ban on sexual relations between Germans and foreign slave workers. There were regulations enacted against the "Ost-Arbeiter" ("Eastern workers") that included the death penalty for sexual relations with a German. Heinrich Himmler, in his secret memorandum, "Reflections on the Treatment of Peoples of Alien Races in the East" (dated 25 May 1940), outlined the Nazi plans for the non-German populations in the East. Himmler believed the Germanization process in Eastern Europe would be complete when "in the East dwell only men with truly German, Germanic blood".
The Nazi secret plan "Generalplan Ost" ("General Plan for the East"), prepared in 1941 and confirmed in 1942, called for a "new order of ethnographical relations" in the territories occupied by Nazi Germany in Eastern Europe. It envisaged ethnic cleansing, executions, and enslavement of the populations of conquered countries, with very small percentages undergoing Germanization, expulsion into the depths of Russia, or other fates, while the conquered territories would be Germanized. The plan had two parts: the "Kleine Planung" ("small plan"), which covered actions to be taken during the war, and the "Große Planung" ("large plan"), which covered policies after the war was won, to be implemented gradually over 25 to 30 years.
A speech given by General Erich Hoepner demonstrates the dissemination of the Nazi racial plan, as he informed the 4th Panzer Group that the war against the Soviet Union was "an essential part of the German people's struggle for existence" ("Daseinskampf"), also referring to the imminent battle as the "old struggle of Germans against Slavs" and even stated, "the struggle must aim at the annihilation of today's Russia and must, therefore, be waged with unparalleled harshness". Hoepner also added that the Germans were fighting for "the defense of European culture against Moscovite–Asiatic inundation, and the repulse of Jewish Bolshevism ... No adherents of the present Russian-Bolshevik system are to be spared." Walther von Brauchitsch also told his subordinates that troops should view the war as a "struggle between two different races and [should] act with the necessary severity". Racial motivations were central to Nazi ideology and played a key role in planning for Operation Barbarossa since both Jews and communists were considered equivalent enemies of the Nazi state. Nazi imperialist ambitions rejected the common humanity of both groups, declaring the supreme struggle for "Lebensraum" to be a "Vernichtungskrieg" ("war of annihilation").
In August 1939, Germany and the Soviet Union signed a non-aggression pact in Moscow known as the Molotov–Ribbentrop Pact. A secret protocol to the pact outlined an agreement between Germany and the Soviet Union on the division of the eastern European border states between their respective "spheres of influence": the Soviet Union and Germany would partition Poland in the event of an invasion by Germany, and the Soviets would be allowed to overrun the Baltic states and Finland. On 23 August 1939 the rest of the world learned of this pact but were unaware of the provisions to partition Poland. The pact stunned the world because of the parties' earlier mutual hostility and their conflicting ideologies. The conclusion of this pact was followed by the German invasion of Poland on 1 September that triggered the outbreak of World War II in Europe, then the Soviet invasion of Poland that led to the annexation of the eastern part of the country. As a result of the pact, Germany and the Soviet Union maintained reasonably strong diplomatic relations for two years and fostered an important economic relationship. The countries entered a trade pact in 1940 by which the Soviets received German military equipment and trade goods in exchange for raw materials, such as oil and wheat, to help the Nazis circumvent a British blockade of Germany.
Despite the parties' ostensibly cordial relations, each side was highly suspicious of the other's intentions. For instance, the Soviet invasion of Bukovina in June 1940 went beyond their sphere of influence as agreed with Germany. After Germany entered the Axis Pact with Japan and Italy, it began negotiations about a potential Soviet entry into the pact. After two days of negotiations in Berlin from 12 to 14 November 1940, Germany presented a written proposal for a Soviet entry into the Axis. On 25 November 1940, the Soviet Union offered a written counter-proposal to join the Axis if Germany would agree to refrain from interference in the Soviet Union's sphere of influence, but Germany did not respond. As both sides began colliding with each other in Eastern Europe, conflict appeared more likely, although they did sign a border and commercial agreement addressing several open issues in January 1941. According to historian Robert Service, Joseph Stalin was convinced that the overall military strength of the USSR was such that he had nothing to fear and anticipated an easy victory should Germany attack; moreover, Stalin believed that since the Germans were still fighting the British in the west, Hitler would be unlikely to open up a two-front war, and he subsequently delayed the reconstruction of defensive fortifications in the border regions. When German soldiers swam across the Bug River to warn the Red Army of an impending attack, they were treated like enemy agents and shot. Some historians believe that Stalin, despite providing an amicable front to Hitler, did not wish to remain allies with Germany. Rather, Stalin might have intended to break with Germany and proceed with his own campaign against it, to be followed by one against the rest of Europe.
Stalin's reputation as a brutal dictator contributed both to the Nazis' justification of their assault and their faith in success; many competent and experienced military officers had been killed in the Great Purge of the 1930s, leaving the Red Army with a relatively inexperienced leadership compared to that of their German adversary. The Nazis often emphasized the Soviet regime's brutality when targeting the Slavs with propaganda. They also claimed that the Red Army was preparing to attack the Germans, and their own invasion was thus presented as a pre-emptive strike.
In the middle of 1940, following the rising tension between the Soviet Union and Germany over territories in the Balkans, an eventual invasion of the Soviet Union seemed the only solution to Hitler. While no concrete plans had yet been made, Hitler told one of his generals in June that the victories in Western Europe finally freed his hands for a showdown with Bolshevism. With the successful end to the campaign in France, General Erich Marcks was assigned the task of drawing up the initial invasion plans of the Soviet Union. The first battle plans were entitled "Operation Draft East" (colloquially known as the "Marcks Plan"). His report advocated the A-A line as the operational objective of any invasion of the Soviet Union. This assault would extend from the northern city of Arkhangelsk on the Arctic Sea through Gorky and Rostov to the port city of Astrakhan at the mouth of the Volga on the Caspian Sea. The report concluded that—once established—this military border would reduce the threat to Germany from attacks by enemy bombers.
Although Hitler was warned by his general staff that occupying "Western Russia" would create "more of a drain than a relief for Germany's economic situation", he anticipated compensatory benefits, such as the demobilization of entire divisions to relieve the acute labor shortage in German industry; the exploitation of Ukraine as a reliable and immense source of agricultural products; the use of forced labor to stimulate Germany's overall economy; and the expansion of territory to improve Germany's efforts to isolate the United Kingdom. Hitler was convinced that Britain would sue for peace once the Germans triumphed in the Soviet Union, and if they did not, he would use the resources available in the East to defeat the British Empire.
On 5 December 1940, Hitler received the final military plans for the invasion, on which the German High Command had been working since July 1940 under the codename "Operation Otto". Hitler, however, was dissatisfied with these plans and on 18 December issued Führer Directive No. 21, which called for a new battle plan, now code-named "Operation Barbarossa". The operation was named after the medieval Emperor Frederick Barbarossa of the Holy Roman Empire, a leader of the Third Crusade in the 12th century.
On 30 March 1941 the Barbarossa decree declared that the war would be one of extermination and advocated the eradication of all political and intellectual elites.
The invasion was set for 15 May 1941, though it was delayed for over a month to allow for further preparations and possibly better weather.
According to a 1978 essay by German historian Andreas Hillgruber, the invasion plans drawn up by the German military elite were coloured by hubris stemming from the rapid defeat of France at the hands of the "invincible" Wehrmacht and by traditional German stereotypes of Russia as a primitive, backward "Asiatic" country. Red Army soldiers were considered brave and tough, but the officer corps was held in contempt. The leadership of the Wehrmacht paid little attention to politics, culture, and the considerable industrial capacity of the Soviet Union, in favour of a very narrow military view. Hillgruber argued that because these assumptions were shared by the entire military elite, Hitler was able to push through with a "war of annihilation" that would be waged in the most inhumane fashion possible with the complicity of "several military leaders", even though it was quite clear that this would be in violation of all accepted norms of warfare.
In autumn 1940, high-ranking German officials drafted a memorandum on the dangers of an invasion of the Soviet Union. They said Ukraine, Belorussia, and the Baltic states would end up as only a further economic burden for Germany. It was argued that the Soviets in their current bureaucratic form were harmless and that the occupation would not benefit Germany. Hitler disagreed with the economists about the risks and told his right-hand man Hermann Göring, the chief of the Luftwaffe, that he would no longer listen to misgivings about the economic dangers of a war with Russia. It is speculated that this stance was passed on to General Georg Thomas, who had produced reports predicting a net economic drain for Germany in the event of an invasion of the Soviet Union unless its economy was captured intact and the Caucasus oilfields seized in the first blow; Thomas revised his future report to fit Hitler's wishes. The Red Army's ineptitude in the Winter War against Finland in 1939–40 convinced Hitler that a quick victory could be achieved within a few months. Neither Hitler nor the General Staff anticipated a long campaign lasting into the winter, and therefore adequate preparations, such as the distribution of warm clothing and the winterization of vehicles and lubricants, were not made.
Beginning in March 1941, Göring's Green Folder laid out details for the Soviet economy after conquest. The Hunger Plan outlined how the entire urban populations of conquered territories were to be starved to death, thus creating an agricultural surplus to feed Germany and urban space for the German upper class. Nazi policy aimed to destroy the Soviet Union as a political entity in accordance with the geopolitical "Lebensraum" ideals for the benefit of future generations of the "Nordic master race". In 1941, Nazi ideologue Alfred Rosenberg—later appointed Reich Minister of the Occupied Eastern Territories—suggested that conquered Soviet territory should be administered in "Reichskommissariate" ("Reich Commissionerships").
German military planners also researched Napoleon's failed invasion of Russia. In their calculations, they concluded that there was little danger of a large-scale retreat of the Red Army into the Russian interior, as it could not afford to give up the Baltic states, Ukraine, or the Moscow and Leningrad regions, all of which were vital to the Red Army for supply reasons and would thus have to be defended. Hitler and his generals disagreed on where Germany should focus its energy. Hitler, in many discussions with his generals, repeated his order of "Leningrad first, the Donbass second, Moscow third"; but he consistently emphasized the destruction of the Red Army over the achievement of specific terrain objectives. Hitler believed Moscow to be of "no great importance" in the defeat of the Soviet Union and instead believed victory would come with the destruction of the Red Army west of the capital, especially west of the Western Dvina and Dnieper rivers, and this pervaded the plan for Barbarossa. This belief later led to disputes between Hitler and several German senior officers, including Heinz Guderian, Gerhard Engel, Fedor von Bock and Franz Halder, who believed the decisive victory could only be delivered at Moscow. They were unable to sway Hitler, who had grown overconfident in his own military judgment as a result of the rapid successes in Western Europe.
Albert Speer said that oil had been a major factor in the decision to invade the Soviet Union. Hitler believed that Baku's oil resources were essential for the survival of the Third Reich, as a dearth of oil resources was a vulnerability for Germany's military.
The Germans had begun massing troops near the Soviet border even before the campaign in the Balkans had finished. By the third week of February 1941, 680,000 German soldiers were gathered in assembly areas on the Romanian-Soviet border. In preparation for the attack, Hitler had secretly moved upwards of 3 million German troops and approximately 690,000 Axis soldiers to the Soviet border regions. Additional Luftwaffe operations included numerous aerial surveillance missions over Soviet territory many months before the attack.
Although the Soviet High Command was alarmed by this, Stalin's belief that the Third Reich was unlikely to attack only two years after signing the Molotov–Ribbentrop Pact resulted in slow Soviet preparations. This fact aside, the Soviets did not entirely overlook the threat posed by their German neighbor. Well before the German invasion, Marshal Semyon Timoshenko referred to the Germans as the Soviet Union's "most important and strongest enemy", and as early as July 1940, the Red Army Chief of Staff, Boris Shaposhnikov, produced a preliminary three-pronged plan of attack for what a German invasion might look like, remarkably similar to the actual attack. Beginning in April 1941, the Germans set up Operation Haifisch and Operation Harpune to substantiate their claims that Britain was the real target. These simulated preparations in Norway and on the English Channel coast included activities such as ship concentrations, reconnaissance flights and training exercises.
The reasons for the postponement of Barbarossa from the initially planned date of 15 May to the actual invasion date of 22 June 1941 (a 38-day delay) are debated. The reason most commonly cited is the unforeseen contingency of invading Yugoslavia in April 1941. Historian Thomas B. Buell indicates that Finland and Romania, which were not involved in initial German planning, needed additional time to prepare to participate in the invasion. Buell adds that an unusually wet winter kept rivers at full flood until late spring. The floods may have discouraged an earlier attack even if one had been possible before the end of the Balkans Campaign.
The importance of the delay is still debated. William Shirer argued that Hitler's Balkan Campaign had delayed the commencement of Barbarossa by several weeks and thereby jeopardized it. Many later historians argue that the 22 June start date was sufficient for the German offensive to reach Moscow by September. Antony Beevor wrote in 2012 about the delay caused by German attacks in the Balkans that "most [historians] accept that it made little difference" to the eventual outcome of Barbarossa.
The Germans deployed one independent regiment, one separate motorized training brigade and 153 divisions for Barbarossa, which included 104 infantry, 19 panzer and 15 motorized infantry divisions in three army groups, nine security divisions to operate in conquered territories, four divisions in Finland and two divisions as reserve under the direct control of OKH. These were equipped with 6,867 armored vehicles, of which 3,350–3,795 were tanks, 2,770–4,389 aircraft (that amounted to 65 percent of the Luftwaffe), 7,200–23,435 artillery pieces, 17,081 mortars, about 600,000 motor vehicles and 625,000–700,000 horses. Finland slated 14 divisions for the invasion, and Romania offered 13 divisions and eight brigades over the course of Barbarossa. The entire Axis forces, 3.8 million personnel, deployed across a front extending from the Arctic Ocean southward to the Black Sea, were all controlled by the OKH and organized into Army Norway, Army Group North, Army Group Center and Army Group South, alongside three "Luftflotten" (air fleets, the air force equivalent of army groups) that supported the army groups: Luftflotte 1 for North, Luftflotte 2 for Center and Luftflotte 4 for South.
Army Norway was to operate in far northern Scandinavia and bordering Soviet territories. Army Group North was to march through the Baltic states into northern Russia, either take or destroy the city of Leningrad and link up with Finnish forces. Army Group Center, the army group equipped with the most armour and air power, was to strike from Poland into Belorussia and the west-central regions of Russia proper, and advance to Smolensk and then Moscow. Army Group South was to strike the heavily populated and agricultural heartland of Ukraine, taking Kiev before continuing eastward over the steppes of the southern USSR to the Volga with the aim of controlling the oil-rich Caucasus. Army Group South was deployed in two sections separated by a gap. The northern section, which contained the army group's only panzer group, was in southern Poland right next to Army Group Center, and the southern section was in Romania.
The German forces in the rear (mostly "Waffen-SS" and "Einsatzgruppen" units) were to operate in conquered territories to counter any partisan activity in areas they controlled, as well as to execute captured Soviet political commissars and Jews. On 17 June, Reich Main Security Office (RSHA) chief Reinhard Heydrich briefed around thirty to fifty "Einsatzgruppen" commanders on "the policy of eliminating Jews in Soviet territories, at least in general terms". While the "Einsatzgruppen" were assigned to the Wehrmacht's units, which provided them with supplies such as gasoline and food, they were controlled by the RSHA. The official plan for Barbarossa assumed that the army groups would be able to advance freely to their primary objectives simultaneously, without spreading thin, once they had won the border battles and destroyed the Red Army's forces in the border area.
In 1930, Mikhail Tukhachevsky, a prominent military theorist in tank warfare in the interwar period and later Marshal of the Soviet Union, forwarded a memo to the Kremlin that lobbied for colossal investment in the resources required for the mass production of weapons, pressing the case for "40,000 aircraft and 50,000 tanks". In the early 1930s, a modern operational doctrine for the Red Army was developed and promulgated in the 1936 Field Regulations in the form of the Deep Battle Concept. Defense expenditure also grew rapidly from just 12 percent of the gross national product in 1933 to 18 percent by 1940.
During Stalin's Great Purge in the late 1930s, which had not ended by the time of the German invasion on 22 June 1941, much of the officer corps of the Red Army was executed or imprisoned and their replacements, appointed by Stalin for political reasons, often lacked military competence. Of the five Marshals of the Soviet Union appointed in 1935, only Kliment Voroshilov and Semyon Budyonny survived Stalin's purge. Tukhachevsky was killed in 1937. Fifteen of 16 army commanders, 50 of the 57 corps commanders, 154 of the 186 divisional commanders, and 401 of 456 colonels were killed, and many other officers were dismissed. In total, about 30,000 Red Army personnel were executed. Stalin further underscored his control by reasserting the role of political commissars at the divisional level and below to oversee the political loyalty of the army to the regime. The commissars held a position equal to that of the commander of the unit they were overseeing. But in spite of efforts to ensure the political subservience of the armed forces, in the wake of the Red Army's poor performance in Poland and in the Winter War, about 80 percent of the officers dismissed during the Great Purge were reinstated by 1941. Also, between January 1939 and May 1941, 161 new divisions were activated. Therefore, although about 75 percent of all the officers had been in their position for less than one year at the start of the German invasion of 1941, many of the short tenures can be attributed not only to the purge, but also to the rapid creation of new military units.
In the Soviet Union, speaking to his generals in December 1940, Stalin mentioned Hitler's references to an attack on the Soviet Union in "Mein Kampf" and Hitler's belief that the Red Army would need four years to ready itself. Stalin declared "we must be ready much earlier" and "we will try to delay the war for another two years". As early as August 1940, British intelligence had received hints of German plans to attack the Soviets only a week after Hitler informally approved the plans for "Barbarossa" and warned the Soviet Union accordingly. But Stalin's distrust of the British led him to ignore their warnings in the belief that they were a trick designed to bring the Soviet Union into the war on their side. In early 1941, Stalin's own intelligence services and American intelligence gave regular and repeated warnings of an impending German attack. Soviet spy Richard Sorge also gave Stalin the exact German launch date, but Sorge and other informers had previously given different invasion dates that passed peacefully before the actual invasion. Stalin acknowledged the possibility of an attack in general and therefore made significant preparations, but decided not to run the risk of provoking Hitler.
Beginning in July 1940, the Red Army General Staff developed war plans that identified the Wehrmacht as the most dangerous threat to the Soviet Union, and that in the case of a war with Germany, the Wehrmacht's main attack would come through the region north of the Pripyat Marshes into Belorussia, which later proved to be correct. Stalin disagreed, and in October he authorized the development of new plans that assumed a German attack would focus on the region south of Pripyat Marshes towards the economically vital regions in Ukraine. This became the basis for all subsequent Soviet war plans and the deployment of their armed forces in preparation for the German invasion.
In early 1941 Stalin authorized the State Defense Plan 1941 (DP-41), which, along with the Mobilization Plan 1941 (MP-41), called for the deployment of 186 divisions, as the first strategic echelon, in the four military districts of the western Soviet Union that faced the Axis territories; and the deployment of another 51 divisions along the Dvina and Dnieper Rivers as the second strategic echelon under Stavka control, which in the case of a German invasion was tasked to spearhead a Soviet counteroffensive along with the remaining forces of the first echelon. But on 22 June 1941 the first echelon contained only 171 divisions, numbering 2.6–2.9 million; and the second strategic echelon contained 57 divisions that were still mobilizing, most of which were still understrength. The second echelon was undetected by German intelligence until days after the invasion commenced, in most cases only when German ground forces bumped into them.
At the start of the invasion, the manpower of the Soviet military force that had been mobilized was 5.3–5.5 million, and it was still increasing as the Soviet reserve force of 14 million, with at least basic military training, continued to mobilize. The Red Army was dispersed and still preparing when the invasion commenced. Their units were often separated and lacked adequate transportation. While transportation remained insufficient, when Operation Barbarossa began the Red Army possessed some 33,000 pieces of artillery, a number far greater than the Germans had at their disposal.
The Soviet Union had some 23,000 tanks available, of which only 14,700 were combat-ready. Around 11,000 tanks were in the western military districts that faced the German invasion force. Hitler later declared to some of his generals, "If I had known about the Russian tank strength in 1941 I would not have attacked". However, maintenance and readiness standards were very poor; ammunition and radios were in short supply, and many armoured units lacked the trucks for supplies. The most advanced Soviet tank models – the KV-1 and T-34 – which were superior to all current German tanks, as well as all designs still in development as of summer 1941, were not available in large numbers at the time the invasion commenced. Furthermore, in the autumn of 1939, the Soviets disbanded their mechanized corps and partly dispersed their tanks to infantry divisions; but following their observation of the German campaign in France, in late 1940 they began to reorganize most of their armored assets back into mechanized corps with a target strength of 1,031 tanks each. But these large armoured formations were unwieldy, and moreover they were spread out in scattered garrisons, with their subordinate divisions up to apart. The reorganization was still in progress and incomplete when Barbarossa commenced. Soviet tank units were rarely well equipped, and they lacked training and logistical support. Units were sent into combat with no arrangements in place for refueling, ammunition resupply, or personnel replacement. Often, after a single engagement, units were destroyed or rendered ineffective. The Soviet numerical advantage in heavy equipment was thoroughly offset by the superior training and organization of the Wehrmacht.
The Soviet Air Force (VVS) held the numerical advantage with a total of approximately 19,533 aircraft, which made it the largest air force in the world in the summer of 1941. About 7,133–9,100 of these were deployed in the five western military districts, and an additional 1,445 were under naval control.
Historians have debated whether Stalin was planning an invasion of German territory in the summer of 1941. The debate began in the late 1980s when Viktor Suvorov published a journal article and later the book "Icebreaker" in which he claimed that Stalin had seen the outbreak of war in Western Europe as an opportunity to spread communist revolutions throughout the continent, and that the Soviet military was being deployed for an imminent attack at the time of the German invasion. This view had also been advanced by former German generals following the war. Suvorov's thesis was fully or partially accepted by a limited number of historians, including Valeri Danilov, Joachim Hoffmann, Mikhail Meltyukhov, and Vladimir Nevezhin, and attracted public attention in Germany, Israel, and Russia. It has been strongly rejected by most historians, and "Icebreaker" is generally considered to be an "anti-Soviet tract" in Western countries. David Glantz and Gabriel Gorodetsky wrote books to rebut Suvorov's arguments. The majority of historians believe that Stalin was seeking to avoid war in 1941, as he believed that his military was not ready to fight the German forces.
At around 01:00 on 22 June 1941, the Soviet military districts in the border area were alerted by NKO Directive No. 1, issued late on the night of 21 June. It called on them to "bring all forces to combat readiness," but to "avoid provocative actions of any kind". It took up to two hours for several of the units subordinate to the Fronts to receive the directive, and the majority did not receive it before the invasion commenced.
At 13:00 on 21 June, Army Group North received the codeword Düsseldorf, indicating that Barbarossa would commence the next morning, and passed down its own codeword, Dortmund. At around 03:15 on 22 June 1941, the Axis Powers commenced the invasion of the Soviet Union with the bombing of major cities in Soviet-occupied Poland and an artillery barrage on Red Army defences on the entire front. Air-raids were conducted as far as Kronstadt near Leningrad, Ismail in Bessarabia, and Sevastopol in the Crimea. Meanwhile, ground troops crossed the border, accompanied in some locales by Lithuanian and Ukrainian fifth columnists. Roughly three million soldiers of the Wehrmacht went into action and faced slightly fewer Soviet troops at the border. Finnish and Romanian units also accompanied the German forces during the initial invasion.
At around noon, the news of the invasion was broadcast to the population by Soviet foreign minister Vyacheslav Molotov: "... Without a declaration of war, German forces fell on our country, attacked our frontiers in many places ... The Red Army and the whole nation will wage a victorious Patriotic War for our beloved country, for honour, for liberty ... Our cause is just. The enemy will be beaten. Victory will be ours!" By calling upon the population's devotion to their nation rather than the Party, Molotov struck a patriotic chord that helped a stunned people absorb the shattering news. Within the first few days of the invasion, the Soviet High Command and Red Army were extensively reorganized so as to place them on the necessary war footing. Stalin did not address the nation about the German invasion until 3 July, when he also called for a "Patriotic War ... of the entire Soviet people".
In Germany, on the morning of 22 June, Nazi propaganda minister Joseph Goebbels announced the invasion to the waking nation in a radio broadcast with Hitler's words: "At this moment a march is taking place that, for its extent, compares with the greatest the world has ever seen. I have decided today to place the fate and future of the Reich and our people in the hands of our soldiers. May God aid us, especially in this fight!" Later the same morning, Hitler proclaimed to his colleagues, "Before three months have passed, we shall witness a collapse of Russia, the like of which has never been seen in history." Hitler also addressed the German people via the radio, presenting himself as a man of peace, who reluctantly had to attack the Soviet Union. Following the invasion, Goebbels openly spoke of a "European crusade against Bolshevism".
The initial momentum of the German ground and air attack completely destroyed the Soviet organizational command and control within the first few hours, paralyzing every level of command from the infantry platoon to the Soviet High Command in Moscow. Moscow not only failed to grasp the magnitude of the catastrophe that confronted the Soviet forces in the border area, but Stalin's first reaction was also disbelief. At around 07:15, Stalin issued NKO Directive No. 2, which announced the invasion to the Soviet Armed Forces, and called on them to attack Axis forces wherever they had violated the borders and launch air strikes into the border regions of German territory. At around 09:15, Stalin issued NKO Directive No. 3, signed by Marshal Semyon Timoshenko, which now called for a general counteroffensive on the entire front "without any regards for borders" that both men hoped would sweep the enemy from Soviet territory. Stalin's order, which Timoshenko authorized, was not based on a realistic appraisal of the military situation at hand, but commanders passed it along for fear of retribution if they failed to obey; several days passed before the Soviet leadership became aware of the enormity of the opening defeat.
Luftwaffe reconnaissance units plotted Soviet troop concentrations, supply dumps and airfields, and marked them down for destruction. Additional Luftwaffe attacks were carried out against Soviet command and control centers in order to disrupt the mobilization and organization of Soviet forces. In contrast, Soviet artillery observers based at the border area had been under the strictest instructions not to open fire on German aircraft prior to the invasion. One plausible reason given for the Soviet hesitation to return fire was Stalin's initial belief that the assault was launched without Hitler's authorization. Significant amounts of Soviet territory were lost along with Red Army forces as a result; it took several days before Stalin comprehended the magnitude of the calamity. The Luftwaffe reportedly destroyed 1,489 aircraft on the first day of the invasion and over 3,100 during the first three days. Hermann Göring, Minister of Aviation and Commander-in-Chief of the Luftwaffe, distrusted the reports and ordered the figure checked. Luftwaffe staffs surveyed the wreckage on Soviet airfields, and their original figure proved conservative, as over 2,000 Soviet aircraft were estimated to have been destroyed on the first day of the invasion. In reality, Soviet losses were likely higher; a Soviet archival document recorded the loss of 3,922 Soviet aircraft in the first three days against an estimated loss of 78 German aircraft. The Luftwaffe reported the loss of only 35 aircraft on the first day of combat. A document from the German Federal Archives puts the Luftwaffe's loss at 63 aircraft for the first day.
By the end of the first week, the Luftwaffe had achieved air supremacy over the battlefields of all the army groups, but was unable to maintain this dominance over the vast expanse of the western Soviet Union. According to the war diaries of the German High Command, the Luftwaffe by 5 July had lost 491 aircraft with 316 more damaged, leaving it with only about 70 percent of the strength it had at the start of the invasion.
On 22 June, Army Group North attacked the Soviet Northwestern Front and broke through its 8th and 11th Armies. The Soviets immediately launched a powerful counterattack against the German 4th Panzer Group with the Soviet 3rd and 12th Mechanized Corps, but the Soviet attack was defeated. On 25 June, the 8th and 11th Armies were ordered to withdraw to the Western Dvina River, where it was planned to meet up with the 21st Mechanized Corps and the 22nd and 27th Armies. However, on 26 June, Erich von Manstein's LVI Panzer Corps reached the river first and secured a bridgehead across it. The Northwestern Front was forced to abandon the river defenses, and on 29 June Stavka ordered the Front to withdraw to the Stalin Line on the approaches to Leningrad. On 2 July, Army Group North began its attack on the Stalin Line with its 4th Panzer Group, and on 8 July captured Pskov, devastating the defenses of the Stalin Line and reaching Leningrad oblast. The 4th Panzer Group had advanced about since the start of the invasion and was now only about from its primary objective, Leningrad. On 9 July it began its attack towards the Soviet defenses along the Luga River in Leningrad oblast.
The northern section of Army Group South faced the Southwestern Front, which had the largest concentration of Soviet forces, and the southern section faced the Southern Front. In addition, the Pripyat Marshes and the Carpathian Mountains posed a serious challenge to the army group's northern and southern sections respectively. On 22 June, only the northern section of Army Group South attacked, but the terrain impeded their assault, giving the Soviet defenders ample time to react. The German 1st Panzer Group and 6th Army attacked and broke through the Soviet 5th Army. Starting on the night of 23 June, the Soviet 22nd and 15th Mechanized Corps attacked the flanks of the 1st Panzer Group from north and south respectively. Although intended to be concerted, the Soviet tank units were sent in piecemeal due to poor coordination. The 22nd Mechanized Corps ran into the 1st Panzer Group's III Motorized Corps and was decimated, and its commander killed. The 1st Panzer Group bypassed much of the 15th Mechanized Corps, which engaged the German 6th Army's 297th Infantry Division, where it was defeated by antitank fire and Luftwaffe attacks. On 26 June, the Soviets launched another counterattack on the 1st Panzer Group from north and south simultaneously with the 9th, 19th and 8th Mechanized Corps, which altogether fielded 1,649 tanks, supported by the remnants of the 15th Mechanized Corps. The battle lasted for four days, ending in the defeat of the Soviet tank units. On 30 June Stavka ordered the remaining forces of the Southwestern Front to withdraw to the Stalin Line, where they would defend the approaches to Kiev.
On 2 July, the southern section of Army Group South – the Romanian 3rd and 4th Armies, alongside the German 11th Army – invaded Soviet Moldavia, which was defended by the Southern Front. Counterattacks by the Front's 2nd Mechanized Corps and 9th Army were defeated, but on 9 July the Axis advance stalled along the defenses of the Soviet 18th Army between the Prut and Dniester Rivers.
In the opening hours of the invasion, the Luftwaffe destroyed the Western Front's air force on the ground and, with the aid of the Abwehr and its supporting anti-communist fifth columns operating in the Soviet rear, paralyzed the Front's communication lines, in particular cutting off the Soviet 4th Army headquarters from the headquarters above and below it. On the same day, the 2nd Panzer Group crossed the Bug River, broke through the 4th Army, bypassed Brest Fortress, and pressed on towards Minsk, while the 3rd Panzer Group bypassed most of the 3rd Army and pressed on towards Vilnius. Simultaneously, the German 4th and 9th Armies engaged the Western Front forces in the environs of Białystok. On the order of Dmitry Pavlov, the commander of the Western Front, the 6th and 11th Mechanized Corps and the 6th Cavalry Corps launched a strong counterstrike towards Grodno on 24–25 June in hopes of destroying the 3rd Panzer Group. However, the 3rd Panzer Group had already moved on, with its forward units reaching Vilnius on the evening of 23 June, and the Western Front's armoured counterattack instead ran into infantry and antitank fire from the V Army Corps of the German 9th Army, supported by Luftwaffe air attacks. By the night of 25 June, the Soviet counterattack was defeated, and the commander of the 6th Cavalry Corps was captured. The same night, Pavlov ordered all the remnants of the Western Front to withdraw via Slonim towards Minsk. Subsequent counterattacks to buy time for the withdrawal were launched against the German forces, but all of them failed. On 27 June, the 2nd and 3rd Panzer Groups met near Minsk and captured the city the next day, completing the encirclement of almost all of the Western Front in two pockets: one around Białystok and another west of Minsk. The Germans destroyed the Soviet 3rd and 10th Armies while inflicting serious losses on the 4th, 11th and 13th Armies, and reported capturing 324,000 Soviet troops, 3,300 tanks and 1,800 artillery pieces.
A Soviet directive was issued on 29 June to combat the mass panic rampant among the civilians and armed forces personnel. The order stipulated swift, severe measures against anyone inciting panic or displaying cowardice. The NKVD worked with commissars and military commanders to scour possible withdrawal routes of soldiers retreating without military authorization. Field-expedient general courts were established to deal with civilians spreading rumors and military deserters. On 30 June, Stalin relieved Pavlov of his command, and on 22 July Pavlov was tried and executed along with many members of his staff on charges of "cowardice" and "criminal incompetence".
On 29 June, Hitler, through the Commander-in-Chief of the German Army Walther von Brauchitsch, instructed the commander of Army Group Center Fedor von Bock to halt the advance of his panzers until the infantry formations liquidating the pockets caught up. But the commander of the 2nd Panzer Group, Heinz Guderian, with the tacit support of Fedor von Bock and the chief of OKH Franz Halder, ignored the instruction and attacked eastward towards Bobruisk, albeit reporting the advance as a reconnaissance-in-force. He also personally conducted an aerial inspection of the Minsk-Białystok pocket on 30 June and concluded that his panzer group was not needed to contain it, since Hermann Hoth's 3rd Panzer Group was already involved in the Minsk pocket. On the same day, some of the infantry corps of the 9th and 4th Armies, having sufficiently liquidated the Białystok pocket, resumed their march eastward to catch up with the panzer groups. On 1 July, Fedor von Bock ordered the panzer groups to resume their full offensive eastward on the morning of 3 July. But Brauchitsch, upholding Hitler's instruction, and Halder, unwillingly going along with it, opposed Bock's order. However, Bock insisted on the order by stating that it would be irresponsible to reverse orders already issued. The panzer groups resumed their offensive on 2 July before the infantry formations had sufficiently caught up.
During German-Finnish negotiations, Finland had demanded to remain neutral unless the Soviet Union attacked it first. Germany therefore sought to provoke the Soviet Union into an attack on Finland. After Germany launched Barbarossa on 22 June, German aircraft used Finnish air bases to attack Soviet positions. The same day the Germans launched Operation Rentier and occupied the Petsamo Province at the Finnish-Soviet border. Simultaneously, Finland proceeded to remilitarize the neutral Åland Islands. Despite these actions, the Finnish government insisted via diplomatic channels that it remained a neutral party, but the Soviet leadership already viewed Finland as an ally of Germany. Subsequently, the Soviets proceeded to launch a massive bombing attack on 25 June against all major Finnish cities and industrial centers, including Helsinki, Turku and Lahti. During a night session on the same day, the Finnish parliament decided to go to war against the Soviet Union.
Finland was divided into two operational zones. Northern Finland was the staging area for Army Norway. Its goal was to execute a two-pronged pincer movement on the strategic port of Murmansk, named Operation Silver Fox. Southern Finland was still under the responsibility of the Finnish Army. The goal of the Finnish forces was, at first, to recapture Finnish Karelia at Lake Ladoga as well as the Karelian Isthmus, which included Finland's second largest city Viipuri.
On 2 July and through the next six days, a rainstorm typical of Belarusian summers slowed the progress of the panzers of Army Group Center, and Soviet defences stiffened. The delays gave the Soviets time to organize a massive counterattack against Army Group Center. The army group's ultimate objective was Smolensk, which commanded the road to Moscow. Facing the Germans was an old Soviet defensive line held by six armies. On 6 July, the Soviets launched a massive counter-attack using the V and VII Mechanized Corps of the 20th Army, which collided with the German 39th and 47th Panzer Corps in a battle where the Red Army lost 832 tanks of the 2,000 employed during five days of ferocious fighting. The Germans defeated this counterattack thanks largely to the coincidental presence of the Luftwaffe's only squadron of tank-busting aircraft. The 2nd Panzer Group crossed the Dnieper River and closed in on Smolensk from the south while the 3rd Panzer Group, after defeating the Soviet counterattack, closed on Smolensk from the north. Trapped between their pincers were three Soviet armies. The 29th Motorized Division captured Smolensk on 16 July, yet a gap remained between the two panzer groups of Army Group Center. On 18 July, the panzer groups came to within of closing the gap, but the trap did not finally close until 5 August, when upwards of 300,000 Red Army soldiers had been captured and 3,205 Soviet tanks were destroyed. Large numbers of Red Army soldiers escaped to stand between the Germans and Moscow as resistance continued.
Four weeks into the campaign, the Germans realized they had grossly underestimated Soviet strength. The German troops had used up their initial supplies, and General Bock quickly came to the conclusion that not only had the Red Army offered stiff opposition, but German difficulties were also due to the logistical problems with reinforcements and provisions. Operations were now slowed down to allow for resupply; the delay was to be used to adapt strategy to the new situation. Hitler by now had lost faith in battles of encirclement, as large numbers of Soviet soldiers had escaped the pincers. He now believed he could defeat the Soviet state by economic means, depriving it of the industrial capacity to continue the war. That meant seizing the industrial center of Kharkov, the Donbass and the oil fields of the Caucasus in the south, and speedily capturing Leningrad, a major center of military production, in the north.
Chief of the OKH, General Franz Halder, Fedor von Bock, the commander of Army Group Center, and almost all the German generals involved in Operation Barbarossa argued vehemently in favor of continuing the all-out drive toward Moscow. Besides the psychological importance of capturing the Soviet capital, the generals pointed out that Moscow was a major center of arms production, the center of the Soviet communications system and an important transport hub. Intelligence reports indicated that the bulk of the Red Army was deployed near Moscow under Semyon Timoshenko for the defense of the capital. Panzer commander Heinz Guderian was sent to Hitler by Bock and Halder to argue their case for continuing the assault against Moscow, but Hitler issued an order through Guderian (bypassing Bock and Halder) to send Army Group Center's tanks to the north and south, temporarily halting the drive to Moscow. Convinced by Hitler's argument, Guderian returned to his commanding officers as a convert to the Führer's plan, which earned him their disdain.
On 29 June, Army Norway launched its effort to capture Murmansk in a pincer attack. The northern pincer, conducted by Mountain Corps Norway, approached Murmansk directly by crossing the border at Petsamo. However, after securing the neck of the Rybachy Peninsula and advancing to the Litsa River, the Germans were stopped in mid-July by heavy resistance from the Soviet 14th Army. Renewed attacks led to nothing, and this front became a stalemate for the remainder of Barbarossa.
The second pincer attack began on 1 July with the German XXXVI Corps and Finnish III Corps slated to recapture the Salla region for Finland and then proceed eastwards to cut the Murmansk railway near Kandalaksha. The German units had great difficulty dealing with the Arctic conditions. After heavy fighting, Salla was taken on 8 July. To maintain momentum, the German-Finnish forces advanced eastwards until they were stopped at the town of Kayraly by Soviet resistance. Further south, the Finnish III Corps made an independent effort to reach the Murmansk railway through the Arctic terrain. Facing only one division of the Soviet 7th Army, it was able to make rapid headway. On 7 August it captured Kestenga and reached the outskirts of Ukhta. Large Red Army reinforcements then prevented further gains on both fronts, and the German-Finnish force had to go onto the defensive.
The Finnish plan in the south in Karelia was to advance as swiftly as possible to Lake Ladoga, cutting the Soviet forces in half. Then the Finnish territories east of Lake Ladoga were to be recaptured before the advance along the Karelian Isthmus, including the recapture of Viipuri, commenced. The Finnish attack was launched on 10 July. The Army of Karelia held a numerical advantage over the Soviet defenders of the 7th and 23rd Armies and was able to advance swiftly. The important road junction at Loimola was captured on 14 July. By 16 July, the first Finnish units reached Lake Ladoga at Koirinoja, achieving the goal of splitting the Soviet forces. During the rest of July, the Army of Karelia advanced further southeast into Karelia, coming to a halt at the former Finnish-Soviet border at Mansila.
With the Soviet forces cut in half, the attack on the Karelian Isthmus could commence. The Finnish army attempted to encircle large Soviet formations at Sortavala and Hiitola by advancing to the western shores of Lake Ladoga. By mid-August the encirclement had succeeded and both towns were taken, but many Soviet formations were able to evacuate by sea. Further west, the attack on Viipuri was launched. With Soviet resistance breaking down, the Finns were able to encircle Viipuri by advancing to the Vuoksi River. The city itself was taken on 30 August, along with a broad advance on the rest of the Karelian Isthmus. By the beginning of September, Finland had restored its pre-Winter War borders.
By mid-July, the German forces had advanced within a few kilometers of Kiev below the Pripyat Marshes. The 1st Panzer Group then went south, while the 17th Army struck east and trapped three Soviet armies near Uman. As the Germans eliminated the pocket, the tanks turned north and crossed the Dnieper. Meanwhile, the 2nd Panzer Group, diverted from Army Group Center, had crossed the Desna River with 2nd Army on its right flank. The two panzer armies now trapped four Soviet armies and parts of two others.
By August, as the serviceability and quantity of the Luftwaffe's inventory steadily diminished due to combat, demand for air support only increased as the VVS recovered. The Luftwaffe found itself struggling to maintain local air superiority. With the onset of bad weather in October, the Luftwaffe was on several occasions forced to halt nearly all aerial operations. The VVS, although faced with the same weather difficulties, had a clear advantage thanks to its prewar experience with cold-weather flying and the fact that it was operating from intact airbases and airports. By December, the VVS had matched the Luftwaffe and was even pressing to achieve air superiority over the battlefields.
For its final attack on Leningrad, the 4th Panzer Group was reinforced by tanks from Army Group Center. On 8 August, the Panzers broke through the Soviet defences. By the end of August, 4th Panzer Group had penetrated to within of Leningrad. The Finns had pushed southeast on both sides of Lake Ladoga to reach the old Finnish-Soviet frontier.
The Germans attacked Leningrad in August 1941; in the following three "black months" of 1941, 400,000 residents of the city worked to build the city's fortifications as fighting continued, while 160,000 others joined the ranks of the Red Army. Nowhere was the Soviet "levée en masse" spirit stronger in resisting the Germans than at Leningrad, where reserve troops and freshly improvised "Narodnoe Opolcheniye" units, consisting of worker battalions and even schoolboy formations, joined in digging trenches as they prepared to defend the city. On 7 September, the German 20th Motorized Division seized Shlisselburg, cutting off all land routes to Leningrad. The Germans severed the railroads to Moscow and, with Finnish assistance, captured the railroad to Murmansk, beginning a siege that would last for over two years.
At this stage, Hitler ordered the final destruction of Leningrad with no prisoners taken, and on 9 September, Army Group North began the final push. Within ten days it had advanced within of the city. However, the push over the last proved very slow and casualties mounted. Hitler, now out of patience, ordered that Leningrad should not be stormed, but rather starved into submission. Along these lines, the OKH issued Directive No. 1a 1601/41 on 22 September 1941, which accorded with Hitler's plans. Deprived of its panzer forces, Army Group Center remained static and was subjected to numerous Soviet counterattacks, in particular the Yelnya Offensive, in which the Germans suffered their first major tactical defeat since their invasion began; this Red Army victory also provided an important boost to Soviet morale. These attacks prompted Hitler to concentrate his attention back on Army Group Center and its drive on Moscow. The Germans ordered the 3rd and 4th Panzer Armies to break off their siege of Leningrad and support Army Group Center in its attack on Moscow.
Before an attack on Moscow could begin, operations in Kiev needed to be finished. Half of Army Group Center had swung to the south, to the rear of the Kiev position, while Army Group South moved to the north from its Dnieper bridgehead. The encirclement of Soviet forces in Kiev was achieved on 16 September. A battle ensued in which the Soviets were hammered with tanks, artillery, and aerial bombardment. After ten days of vicious fighting, the Germans claimed 665,000 Soviet soldiers captured, although the real figure is probably around 220,000 prisoners. Soviet losses were 452,720 men, 3,867 artillery pieces and mortars from 43 divisions of the 5th, 21st, 26th, and 37th Soviet Armies. Despite the exhaustion and losses facing some German units (upwards of 75 percent of their men) from the intense fighting, the massive defeat of the Soviets at Kiev and the Red Army losses during the first three months of the assault contributed to the German assumption that Operation Typhoon (the attack on Moscow) could still succeed.
After operations at Kiev were successfully concluded, Army Group South advanced east and south to capture the industrial Donbass region and the Crimea. The Soviet Southern Front launched an attack on 26 September with two armies on the northern shores of the Sea of Azov against elements of the German 11th Army, which was simultaneously advancing into the Crimea. On 1 October the 1st Panzer Army under Ewald von Kleist swept south to encircle the two attacking Soviet armies. By 7 October the Soviet 9th and 18th Armies were isolated, and four days later they had been annihilated. The Soviet defeat was total: in the pocket alone, 106,332 men were captured and 212 tanks and 766 artillery pieces of all types were destroyed or captured. The death or capture of two-thirds of all Southern Front troops in four days unhinged the Front's left flank, allowing the Germans to capture Kharkov on 24 October. Kleist's 1st Panzer Army took the Donbass region that same month.
In central Finland, the German-Finnish advance on the Murmansk railway had been resumed at Kayraly. A large encirclement from the north and the south trapped the defending Soviet corps and allowed XXXVI Corps to advance further to the east. In early September it reached the old 1939 Soviet border fortifications. On 6 September the first defence line at the Voyta River was breached, but further attacks against the main line at the Verman River failed. With Army Norway switching its main effort further south, the front stalemated in this sector. Further south, the Finnish III Corps launched a new offensive towards the Murmansk railway on 30 October, bolstered by fresh reinforcements from Army Norway. Against Soviet resistance, it was able to come within 30 km (19 mi) of the railway before the Finnish High Command ordered a stop to all offensive operations in the sector on 17 November. The United States had applied diplomatic pressure on Finland not to disrupt Allied aid shipments to the Soviet Union, which caused the Finnish government to halt the advance on the Murmansk railway. With the Finnish refusal to conduct further offensive operations and the German inability to do so alone, the German-Finnish effort in central and northern Finland came to an end.
Germany had pressured Finland to enlarge its offensive activities in Karelia to aid the Germans in their Leningrad operation. Finnish attacks on Leningrad itself remained limited. Finland stopped its advance just short of Leningrad and had no intention of attacking the city. The situation was different in eastern Karelia. The Finnish government agreed to restart its offensive into Soviet Karelia to reach Lake Onega and the Svir River. On 4 September this new drive was launched on a broad front. Although reinforced by fresh reserve troops, the Soviet defenders of the 7th Army, weakened by heavy losses elsewhere on the front, were unable to resist the Finnish advance. Olonets was taken on 5 September. On 7 September, Finnish forward units reached the Svir River. Petrozavodsk, the capital city of the Karelo-Finnish SSR, fell on 1 October. From there the Army of Karelia moved north along the shores of Lake Onega to secure the remaining area west of Lake Onega, while simultaneously establishing a defensive position along the Svir River. Slowed by winter's onset, they nevertheless continued to advance slowly during the following weeks. Medvezhyegorsk was captured on 5 December and Povenets fell the next day. On 7 December, Finland called a stop to all offensive operations, going onto the defensive.
After Kiev, the Red Army no longer outnumbered the Germans, and there were no more trained reserves directly available. To defend Moscow, Stalin could field 800,000 men in 83 divisions, but no more than 25 divisions were fully effective. Operation Typhoon, the drive to Moscow, began on 30 September 1941. In front of Army Group Center was a series of elaborate defence lines, the first centred on Vyazma and the second on Mozhaysk. Russian peasants began fleeing ahead of the advancing German units, burning their harvested crops, driving their cattle away, and destroying buildings in their villages as part of a scorched-earth policy designed to deny the Nazi war machine needed supplies and foodstuffs.
The first blow took the Soviets completely by surprise when the 2nd Panzer Group, returning from the south, took Oryol, just south of the Soviet first main defense line. Three days later, the Panzers pushed on to Bryansk, while the 2nd Army attacked from the west. The Soviet 3rd and 13th Armies were now encircled. To the north, the 3rd and 4th Panzer Armies attacked Vyazma, trapping the 19th, 20th, 24th and 32nd Armies. Moscow's first line of defense had been shattered. The pocket eventually yielded over 500,000 Soviet prisoners, bringing the tally since the start of the invasion to three million. The Soviets now had only 90,000 men and 150 tanks left for the defense of Moscow.
The German government now publicly predicted the imminent capture of Moscow and convinced foreign correspondents of an impending Soviet collapse. On 13 October, the 3rd Panzer Group penetrated to within of the capital. Martial law was declared in Moscow. Almost from the beginning of Operation Typhoon, however, the weather worsened. Temperatures fell amid continued rainfall, turning the unpaved road network into mud and slowing the German advance on Moscow. Additional snow fell, followed by more rain, creating a glutinous mud that German tanks had difficulty traversing, whereas the Soviet T-34, with its wider tread, was better suited to negotiating it. At the same time, the supply situation for the Germans rapidly deteriorated. On 31 October, the German Army High Command ordered a halt to Operation Typhoon while the armies were reorganized. The pause gave the Soviets, far better supplied, time to consolidate their positions and organize formations of newly activated reservists. In little over a month, the Soviets organized eleven new armies that included 30 divisions of Siberian troops. These had been freed from the Soviet Far East after Soviet intelligence assured Stalin that there was no longer a threat from the Japanese. During October and November 1941, over 1,000 tanks and 1,000 aircraft arrived along with the Siberian forces to assist in defending the city.
With the ground hardening due to the cold weather, the Germans resumed the attack on Moscow on 15 November. Although the troops themselves were now able to advance again, there had been no improvement in the supply situation. Facing the Germans were the 5th, 16th, 30th, 43rd, 49th, and 50th Soviet Armies. The Germans intended to move the 3rd and 4th Panzer Armies across the Moscow Canal and envelop Moscow from the northeast. The 2nd Panzer Group would attack Tula and then close on Moscow from the south. As the Soviets reacted to their flanks, the 4th Army would attack the center. In two weeks of fighting, lacking sufficient fuel and ammunition, the Germans slowly crept towards Moscow. In the south, the 2nd Panzer Group was being blocked. On 22 November, Soviet Siberian units, augmented by the 49th and 50th Soviet Armies, attacked the 2nd Panzer Group and inflicted a defeat on the Germans. The 4th Panzer Group pushed the Soviet 16th Army back, however, and succeeded in crossing the Moscow Canal in an attempt to encircle Moscow.
On 2 December, part of the 258th Infantry Division advanced to within of Moscow. They were so close that German officers claimed they could see the spires of the Kremlin, but by then the first blizzards had begun. A reconnaissance battalion managed to reach the town of Khimki, only about from the Soviet capital. It captured the bridge over the Moscow-Volga Canal as well as the railway station, which marked the easternmost advance of German forces. In spite of the progress made, the Wehrmacht was not equipped for such severe winter warfare. The Soviet army was better adapted to fighting in winter conditions, but faced production shortages of winter clothing. The German forces fared worse, with deep snow further hindering equipment and mobility. Weather conditions had largely grounded the Luftwaffe, preventing large-scale air operations. Newly created Soviet units near Moscow now numbered over 500,000 men, and on 5 December, they launched a massive counterattack as part of the Soviet winter counteroffensive. The offensive halted on 7 January 1942, after having pushed the German armies back 100–250 km (62–155 mi) from Moscow. The Wehrmacht had lost the Battle for Moscow, and the invasion had cost the German Army over 830,000 men.
With the failure of the Battle of Moscow, all German plans for a quick defeat of the Soviet Union had to be revised. The Soviet counter-offensives in December 1941 caused heavy casualties on both sides, but ultimately eliminated the German threat to Moscow. Attempting to explain matters, Hitler issued Directive No. 39, which cited the early onset of winter and the severe cold as the reason for the German failure, whereas the main reason was the German military's unpreparedness for such a giant enterprise. On 22 June 1941, the Wehrmacht as a whole had 209 divisions at its disposal, 163 of which were offensively capable. On 31 March 1942, less than one year after the invasion of the Soviet Union, the Wehrmacht was reduced to fielding 58 offensively capable divisions. The Red Army's tenacity and ability to counter-attack effectively took the Germans as much by surprise as their own initial attack had the Soviets. Spurred on by the successful defense and in an effort to imitate the Germans, Stalin wanted to begin his own counteroffensive, not just against the German forces around Moscow, but against their armies in the north and south. Anger over the failed German offensives caused Hitler to relieve Field Marshal Walther von Brauchitsch of command, and in his place Hitler assumed personal control of the German Army on 19 December 1941.
The Soviet Union had suffered heavily from the conflict, losing huge tracts of territory and sustaining vast losses in men and material. Nonetheless, the Red Army proved capable of countering the German offensives, particularly as the Germans began experiencing irreplaceable shortages in manpower, armaments, provisions, and fuel. Despite the rapid relocation of Red Army armaments production east of the Urals and a dramatic increase of production in 1942, especially of armour, new aircraft types and artillery, the Wehrmacht was able to mount another large-scale offensive in July 1942, although on a much narrower front than in the previous summer. Hitler, having realized that Germany's oil supply was "severely depleted", aimed to capture the oil fields of Baku in an offensive codenamed Case Blue. Again, the Germans quickly overran great expanses of Soviet territory, but they failed to achieve their ultimate goals in the wake of their defeat at the Battle of Stalingrad in February 1943.
By 1943, Soviet armaments production was fully operational and increasingly outproducing the German war economy. The final major German offensive in the Eastern theater of the Second World War took place during July–August 1943 with the launch of Operation Zitadelle, an assault on the Kursk salient. Approximately one million German troops confronted a Soviet force over 2.5 million strong. The Soviets prevailed. Following the defeat of Operation Zitadelle, the Soviets launched counter-offensives employing six million men along a front towards the Dnieper River as they drove the Germans westwards. Employing increasingly ambitious and tactically sophisticated offensives, along with making operational improvements in secrecy and deception, the Red Army was eventually able to liberate much of the area which the Germans had previously occupied by the summer of 1944. The destruction of Army Group Centre, the outcome of Operation Bagration, proved to be a decisive success; additional Soviet offensives against the German Army Groups North and South in the fall of 1944 put the German war machine into retreat. By January 1945, Soviet military might was aimed at the German capital of Berlin. The war ended with the total defeat and capitulation of Nazi Germany in May 1945.
While the Soviet Union had not signed the Geneva Convention, Germany had signed the treaty and was thus obligated to offer Soviet POWs humane treatment according to its provisions (as it generally did with other Allied POWs). According to the Soviets, they had not signed the Geneva Conventions in 1929 due to Article 9 which, by imposing racial segregation of POWs into different camps, contravened the Soviet constitution. Article 82 of the convention specified that "In case, in time of war, one of the belligerents is not a party to the Convention, its provisions shall nevertheless remain in force as between the belligerents who are parties thereto." Despite this, Hitler called for the battle against the Soviet Union to be a "struggle for existence" and emphasized that the Russian armies were to be "annihilated", a mindset that contributed to war crimes against Soviet prisoners of war. A memorandum from 16 July 1941, recorded by Martin Bormann, quotes Hitler saying, "The giant [occupied] area must naturally be pacified as quickly as possible; this will happen at best if anyone who just looks funny should be shot". The fact that the Soviets had failed to sign the convention conveniently played into the Nazis' hands, who used it to justify their behavior. Even if the Soviets had signed, it is highly unlikely that this would have stopped the Nazis' genocidal policies towards combatants, civilians, and prisoners of war.
Before the invasion, Hitler issued the notorious Commissar Order, which called for all Soviet political commissars taken prisoner at the front to be shot immediately without trial. German soldiers participated in these mass killings along with members of the "SS-Einsatzgruppen", sometimes reluctantly, claiming "military necessity". On the eve of the invasion, German soldiers were informed that their battle "demands ruthless and vigorous measures against Bolshevik inciters, guerrillas, saboteurs, Jews and the complete elimination of all active and passive resistance". Collective punishment was authorized against partisan attacks; if a perpetrator could not be quickly identified, then burning villages and mass executions were considered acceptable reprisals. Although the majority of German soldiers accepted these crimes as justified due to Nazi propaganda, which depicted the Red Army as "Untermenschen", a few prominent German officers openly protested against them. An estimated two million Soviet prisoners of war died of starvation during Barbarossa alone. By the end of the war, 58 percent of all Soviet prisoners of war had died in German captivity.
Organized crimes against civilians, including women and children, were carried out on a huge scale by the German police and military forces, as well as the local collaborators. Under the command of the Reich Main Security Office, the "Einsatzgruppen" killing squads conducted large-scale massacres of Jews and communists in conquered Soviet territories. Holocaust historian Raul Hilberg puts the number of Jews murdered by "mobile killing operations" at 1,400,000. The original instructions to kill "Jews in party and state positions" were broadened to include "all male Jews of military age" and then expanded once more to "all male Jews regardless of age." By the end of July, the Germans were regularly killing women and children. On 18 December 1941, Himmler and Hitler discussed the "Jewish question", and Himmler noted the meeting's result in his appointment book: "To be annihilated as partisans." According to Christopher Browning, "annihilating Jews and solving the so-called 'Jewish question' under the cover of killing partisans was the agreed-upon convention between Hitler and Himmler". In accordance with Nazi policies against "inferior" Asian peoples, Turkmens were also persecuted. According to a post-war report by Prince Veli Kajum Khan, they were imprisoned in concentration camps in terrible conditions, where those deemed to have "Mongolian" features were murdered daily. Asians were also targeted by the "Einsatzgruppen" and were the subjects of lethal medical experiments and murder at a "pathological institute" in Kiev. Hitler received reports of the mass killings conducted by the "Einsatzgruppen" which were first conveyed to the RSHA, where they were aggregated into a summary report by Gestapo Chief Heinrich Müller.
Burning houses suspected of being partisan meeting places and poisoning water wells became common practice for soldiers of the German 9th Army. At Kharkov, the fourth largest city in the Soviet Union, food was provided only to the small number of civilians who worked for the Germans, with the rest designated to slowly starve. Thousands of Soviets were deported to Germany to be used as slave labor beginning in 1942.
The citizens of Leningrad were subjected to heavy bombardment and a siege that would last 872 days and starve more than a million people to death, of whom approximately 400,000 were children below the age of 14. The German-Finnish blockade cut off access to food, fuel and raw materials, and rations reached a low, for the non-working population, of four ounces (five thin slices) of bread and a little watery soup per day. Starving Soviet civilians began to eat their domestic animals, along with hair tonic and Vaseline. Some desperate citizens resorted to cannibalism; Soviet records list 2,000 people arrested for "the use of human meat as food" during the siege, 886 of them during the first winter of 1941–42. The Wehrmacht planned to seal off Leningrad, starve out the population, and then demolish the city entirely.
Rape was a widespread phenomenon in the East, as German soldiers regularly committed violent sexual acts against Soviet women. Whole units were occasionally involved in the crime, with upwards of one-third of the instances being gang rape. Historian Hannes Heer relates that in the world of the eastern front, where the German army equated Russia with Communism, everything was "fair game"; thus, rape went unreported unless entire units were involved. Jewish women were frequently murdered immediately following acts of sexual violence. Historian Birgit Beck emphasizes that military decrees, which served to authorize wholesale brutality on many levels, essentially destroyed the basis for any prosecution of sexual offenses committed by German soldiers in the East. She also contends that detection of such instances was limited by the fact that sexual violence was often inflicted in the context of billets in civilian housing.
Operation Barbarossa was the largest military operation in history: more men, tanks, guns and aircraft were deployed than in any other offensive. The invasion opened up the Eastern Front, the war's largest theater, which saw clashes of unprecedented violence and destruction for four years and killed 26 million Soviet people, including about 8.6 million Red Army soldiers. More died fighting on the Eastern Front than in all other fighting across the globe during World War II. Damage to both the economy and landscape was enormous, as approximately 1,710 Soviet towns and 70,000 villages were razed.
Operation Barbarossa and the subsequent German defeat changed the political landscape of Europe, dividing it into Eastern and Western blocs. The political vacuum left in the eastern half of the continent was filled by the USSR when Stalin secured his territorial prizes of 1944–1945 and firmly placed his Red Army in Bulgaria, Romania, Hungary, Poland, Czechoslovakia, and the eastern half of Germany. Stalin's fear of resurgent German power and his distrust of his erstwhile allies contributed to Soviet pan-Slavic initiatives and a subsequent alliance of Slavic states. Historians David Glantz and Jonathan House assert Operation Barbarossa influenced not only Stalin but subsequent Soviet leaders, claiming it "colored" their strategic mindsets for the "next four decades". As a result, the Soviets instigated the creation of "an elaborate system of buffer and client states, designed to insulate the Soviet Union from any possible future attack." As a consequence, Eastern Europe became communist in political disposition, and Western Europe fell under the democratic sway of the United States, a nation uncertain about its future policies in Europe. | https://en.wikipedia.org/wiki?curid=22618
Revelation
In religion and theology, revelation is the revealing or disclosing of some form of truth or knowledge through communication with a deity or other supernatural entity or entities.
Some religions have religious texts which they view as divinely or supernaturally revealed or inspired. For instance, Orthodox Jews, Christians and Muslims believe that the "Torah" was received from Yahweh on biblical Mount Sinai. Most Christians believe that both the Old Testament and the New Testament were inspired by God. Muslims believe the Quran was revealed by God to Muhammad word by word through the angel Gabriel ("Jibril"). In Hinduism, some Vedas are considered "apauruṣeya", "not human compositions", and are supposed to have been directly revealed, and thus are called "śruti", "what is heard". The 15,000 handwritten pages produced by the mystic Maria Valtorta were represented as direct dictations from Jesus, while she attributed "The Book of Azariah" to her guardian angel. Aleister Crowley stated that "The Book of the Law" had been revealed to him through a higher being that called itself "Aiwass".
A revelation communicated by a supernatural entity reported as being present during the event is called a vision. Direct conversations between the recipient and the supernatural entity, or physical marks such as stigmata, have been reported. In rare cases, such as that of Saint Juan Diego, physical artifacts accompany the revelation. The Roman Catholic concept of interior locution refers to a revelation received only as an inner voice heard by the recipient.
In the Abrahamic religions, the term is used to refer to the process by which God reveals knowledge of himself, his will, and his divine providence to the world of human beings. In secondary usage, revelation refers to the resulting human knowledge about God, prophecy, and other divine things. Revelation from a supernatural source plays a less important role in some other religious traditions such as Buddhism, Confucianism and Taoism.
Inspiration – such as that bestowed by God on the author of a sacred book – involves a special illumination of the mind, in virtue of which the recipient conceives such thoughts as God desires him to commit to writing, and does not necessarily involve supernatural communication.
With the Age of Enlightenment in Europe, beginning about the mid-17th century, and the development of rationalism, materialism and atheism, the concept of supernatural revelation itself faced skepticism. In "The Age of Reason" (1794–1809), Thomas Paine develops the theology of deism, rejecting the possibility of miracles and arguing that a revelation can be considered valid only for the original recipient, with all else being hearsay.
Thomas Aquinas distinguished two types of revelation from God, "general revelation" and "special revelation". In general revelation, God reveals himself through his creation, such that at least some truths about God can be learned by an individual through the empirical study of nature, physics, cosmology, etc. Special revelation is the knowledge of God and spiritual matters which individuals can discover through supernatural means, such as scripture or miracles. Direct revelation refers to communication from God to someone in particular.
Though one may deduce the existence of God and some of God's attributes through general revelation, certain specifics may be known only through special revelation. Aquinas believed that special revelation is equivalent to the revelation of God in Jesus. The major theological components of Christianity, such as the Trinity and the Incarnation, are revealed in the teachings of the church and the scriptures and may not otherwise be deduced. Special revelation and natural revelation are complementary rather than contradictory in nature.
"Continuous revelation" is a term for the theological position that God continues to reveal divine principles or commandments to humanity.
In the 20th century, religious existentialists proposed that revelation held no content in and of itself but rather that God inspired people with his presence by coming into contact with them. Revelation is a human response that records how we respond to God.
Some religious groups believe a deity has been revealed to or has spoken to a large group of people, or have legends to a similar effect. In the Book of Deuteronomy, Yahweh is said to have been revealed upon giving the Ten Commandments to the Israelites at Mount Sinai. In Christianity, the Book of Acts describes the Day of Pentecost wherein a large group of the followers of Jesus experienced mass revelation. The Lakota people believe Ptesáŋwiŋ spoke directly to the people in the establishment of Lakota religious traditions. Some versions of an Aztec legend tell of Huitzilopochtli speaking directly to the Aztec people upon their arrival at Anáhuac. Historically, some emperors, cult leaders, and other figures have also been deified and treated as though their words are themselves revelations.
Some people hold that God can communicate with people in a way that gives direct, propositional content: This is termed "verbal revelation". Orthodox Judaism and some forms of Christianity hold that the first five books of Moses were dictated by God in such a fashion.
One school of thought holds that revelation is non-verbal and non-literal, yet it may have propositional content. People were divinely inspired by God with a message, but not in a strictly verbal fashion.
Rabbi Abraham Joshua Heschel has written, "To convey what the prophets experienced, the Bible could either use terms of descriptions or terms of indication. Any description of the act of revelation in empirical categories would have produced a caricature. That is why all the Bible does is to state that revelation happened; how it happened is something they could only convey in words that are evocative and suggestive."
Isaiah writes that he received his message through visions, where he would see YHWH, the God of Israel, speaking to angelic beings that surrounded him. Isaiah would then write down the dialogue exchanged between YHWH and the angels. This form of revelation constitutes the major part of the text of the Book of Isaiah. The same formula of divine revelation is used by other prophets throughout the Tanakh, such as Micaiah in 1 Kings 22:19–22.
Members of Abrahamic religions, including Judaism, Christianity and Islam, believe that God exists and can in some way reveal his will to people. Members of those religions distinguish between true prophets and false prophets, and there are documents offering criteria by which to distinguish true from false prophets. The question of epistemology then arises: how to know?
Some believe that revelation can originate directly from a deity or through an agent such as an angel. One who has experienced such contact with, or communication from, the divine is often called a prophet. In an article under the heading "mysticism" (p. 555) in the 1999 edition of "The Norton Dictionary of Modern Thought" (W. W. Norton & Co. Inc.), Ninian Smart, J. F. Rowny Professor of Comparative Religion at the University of California and President of the American Academy of Religion, suggests that the more proper and wider term for such an encounter would be mystical, making such a person a mystic. All prophets would be mystics, but not all mystics would be prophets.
Revelation from a supernatural source is of lesser importance in some other religious traditions, such as Taoism and Confucianism.
The Báb, Bahá'u'lláh and `Abdu'l-Bahá received thousands of written enquiries and wrote thousands of responses, hundreds of which amount to whole and proper books, while many are shorter texts, such as letters. In addition, the Bahá'í faith has large works which were divinely revealed in a very short time, such as in a single night or a few days. Additionally, because many of the works were first recorded by an amanuensis, most were submitted for approval and correction, and the final text was personally approved by the revelator.
Bahá'u'lláh would occasionally write the words of revelation down himself, but normally the revelation was dictated to his amanuensis, who sometimes recorded it in what has been called "revelation writing", a shorthand script written with extreme speed owing to the rapidity of the utterance of the words. Afterwards, Bahá'u'lláh revised and approved these drafts. These "revelation drafts" and many other transcriptions of Bahá'u'lláh's writings, around 15,000 items, some of which are in his own handwriting, are kept in the International Bahá'í Archives in Haifa, Israel.
Many Christians believe in the possibility and even reality of private revelations, messages from God for individuals, which can come in a variety of ways. Montanism is an example in early Christianity and there are alleged cases today also. However, Christians see as of a much higher level the revelation recorded in the collection of books known as the Bible. They consider these books to be written by human authors under the inspiration of the Holy Spirit. They regard Jesus as the supreme revelation of God, with the Bible being a revelation in the sense of a witness to him. The "Catechism of the Catholic Church" states that "the Christian faith is not a 'religion of the book.' Christianity is the religion of the 'Word of God', a word which is 'not a written and mute word, but the Word which is incarnate and living".
Gregory and Nix speak of Biblical inerrancy as meaning that, in its original form, the Bible is totally without error, and free from all contradiction, including the historical and scientific parts. Coleman speaks of Biblical infallibility as meaning that the Bible is inerrant on issues of faith and practice but not history or science. The Catholic Church speaks not about infallibility of Scripture but about its freedom from error, holding "the doctrine of the inerrancy of Scripture". The Second Vatican Council, citing earlier declarations, stated: "Since everything asserted by the inspired authors or sacred writers must be held to be asserted by the Holy Spirit, it follows that the books of Scripture must be acknowledged as teaching solidly, faithfully and without error that truth which God wanted put into sacred writings for the sake of salvation". It added: "Since God speaks in Sacred Scripture through men in human fashion, the interpreter of Sacred Scripture, in order to see clearly what God wanted to communicate to us, should carefully investigate what meaning the sacred writers really intended, and what God wanted to manifest by means of their words." The Reformed Churches believe the Bible is inerrant in the sense spoken of by Gregory and Nix and "deny that Biblical infallibility and inerrancy are limited to spiritual, religious, or redemptive themes, exclusive of assertions in the fields of history and science". The Westminster Confession of Faith speaks of "the infallible truth and divine authority" of the Scriptures.
In the New Testament, Jesus treats the Old Testament as authoritative and says it "cannot be broken". 2 Timothy says: "All Scripture is breathed out by God and profitable for teaching, for reproof, for correction, and for training in righteousness". The Second Epistle of Peter claims that "no prophecy of Scripture comes from someone's own interpretation. For no prophecy was ever produced by the will of man, but men spoke from God as they were carried along by the Holy Spirit". It also speaks of Paul's letters as containing some things "hard to understand, which the ignorant and unstable twist to their own destruction, as they do the other Scriptures".
This letter does not specify "the other Scriptures", nor does the term "all Scripture" in 2 Timothy indicate which writings were or would be breathed out by God and useful for teaching, since it does not preclude later works such as the Book of Revelation and the Epistles of John. The Catholic Church recognizes 73 books as inspired and forming the Bible (46 books of the Old Testament and 27 books of the New Testament). The most common versions of the Bible that Protestants use today consist of 66 of these books. None of the 66 or 73 books gives a list of revealed books.
Theologian and Christian existentialist philosopher Paul Johannes Tillich (1886–1965), who sought to correlate culture and faith so that "faith need not be unacceptable to contemporary culture and contemporary culture need not be unacceptable to faith", argued that revelation never runs counter to reason (affirming Thomas Aquinas who said that faith is eminently rational), and that both poles of the subjective human experience are complementary.
Karl Barth argued that God is the object of God’s own self-knowledge, and revelation in the Bible means the self-unveiling to humanity of the God who cannot be discovered by humanity simply through its own efforts. For him, the Bible is not "The Revelation"; rather, it points to revelation. Human concepts can never be considered as identical to God's revelation, and Scripture is written in human language, expressing human concepts. It cannot be considered identical with God's revelation. However, God does reveal himself through human language and concepts, and thus Christ is truly presented in scripture and the preaching of the church.
The Church of Jesus Christ of Latter-day Saints (LDS Church) and some other Latter Day Saint denominations claim to be led by revelation from God to a living prophet, who receives God's word just as Abraham, Moses, other ancient prophets and apostles did. The LDS Church believes that its president receives revelation directly from God for the direction of the church.
Latter-day Saints believe in an open scriptural canon, and in addition to the Bible and the Book of Mormon, have books of scripture containing the revelations of modern-day prophets such as the Doctrine and Covenants and the Pearl of Great Price. Church leaders (from the Quorum of the Twelve Apostles) have taught during the church's General Conferences that conference talks which are "…[spoken as] moved upon by the Holy Ghost shall be scripture…". In addition, many Latter-day Saints believe that ancient prophets in other regions of the world received revelations that resulted in additional scriptures that have been lost and may, one day, be forthcoming. Hence, the belief in continuing revelation. Latter-day Saints also believe that the United States Constitution is a divinely inspired document.
Latter-day Saints sustain the President of the Church of Jesus Christ of Latter-day Saints as prophet, seer, and revelator, the only person on earth who receives revelation to guide the entire church. They also sustain the two counselors in the First Presidency, as well as the Quorum of the Twelve Apostles, as prophets, seers, and revelators. They believe that God has followed a pattern of continued revelation to prophets throughout the history of mankind (KJV Luke 1:70), both to establish doctrine and maintain its integrity and to guide the church under changing world conditions. When this pattern of revelation was broken, it was because the receivers of revelation had been rejected and often killed (Matt 23:31–37, Luke 11:47–51). In the meridian of time, Paul described prophets and apostles in terms of a foundation, with Christ as the cornerstone, which was built to prevent doctrinal shift: "that we henceforth be no more children, tossed to and fro, and carried about by every wind of doctrine" (Eph 2:20 and 4:11–14, see also Matt 16:17–18). To maintain this foundation, new apostles were chosen and ordained to replace those lost to death or transgression, as when Matthias was called by revelation to replace Judas (Acts 1:15–26). However, as intensifying persecution led to the imprisonment and martyrdom of the apostles, it eventually became impossible to continue the apostolic succession. Once the foundation of apostles and prophets was lost, the integrity of Christian doctrine as established by Christ and the apostles began to be compromised by those who continued to develop doctrine despite not being called or authorized to receive revelation for the body of the church. In the absence of revelation, these post-apostolic theologians could not help but introduce elements of human reasoning, speculation, and personal interpretation of scripture (2 Pet 1:19–20), which over time led to the loss or corruption of various doctrinal truths, as well as the addition of new man-made doctrines. This naturally led to much disagreement and schism, which over the centuries culminated in the large number of Christian churches on the earth today. Mormons believe that God resumed his pattern of revelation when the world was again ready, by calling the Prophet Joseph Smith to restore the fullness of the gospel of Jesus Christ to the earth. Since that time there has been a consistent succession of prophets and apostles, which God has promised will not be broken before the Second Coming of Christ (Dan 2:44).
Each member of the LDS Church is also confirmed a member of the church following baptism and given the "gift of the Holy Ghost" by which each member is encouraged to develop a personal relationship with that divine being and receive personal revelation for their own direction and that of their family. The Latter Day Saint concept of revelation includes the belief that revelation from God is available to all those who earnestly seek it with the intent of doing good. It also teaches that everyone is entitled to "personal" revelation with respect to his or her stewardship (leadership responsibility). Thus, parents may receive inspiration from God in raising their families, individuals can receive divine inspiration to help them meet personal challenges, church officers may receive revelation for those whom they serve, and so forth.
The important consequence of this is that each person may receive confirmation that particular doctrines taught by a prophet are true, as well as gain divine insight in using those truths for their own benefit and eternal progress. In the church, personal revelation is expected and encouraged, and many converts believe that personal revelation from God was instrumental in their conversion. Joseph F. Smith, the sixth president of the LDS Church, summarized this church's belief concerning revelation by saying, "We believe… in the principle of direct revelation from God to man." (Smith, 362)
Śruti, Sanskrit for "that which is heard", refers to the body of the most authoritative, ancient religious texts comprising the central canon of Hinduism. It includes the four Vedas and their four types of embedded texts: the Samhitas, the Brahmanas, the Aranyakas, and the early Upanishads. "Śruti"s have been variously described as a revelation through "anubhava" (direct experience), or of primordial origins realized by ancient Rishis. In Hindu tradition, they have been referred to as "apauruṣeya" (not created by humans). The "Śruti" texts themselves assert that they were skillfully created by Rishis (sages), after inspired creativity, just as a carpenter builds a chariot.
Muslims believe that God (Arabic: ألله "Allah") revealed his final message to all of existence through Muhammad via the angel Gabriel. Muhammad is considered to have been the Seal of the Prophets and the last revelation, the Qur'an, is believed by Muslims to be the flawless final revelation of God to humanity, valid until the Last Day. The Qur'an claims to have been revealed word by word and letter by letter.
Muslims hold that the message of Islam is the same as the message preached by all the messengers sent by God to humanity since Adam. Muslims believe that Islam is the oldest of the monotheistic religions because it represents both the original and the final revelation of God to Abraham, Moses, David, Jesus, and Muhammad. Likewise, Muslims believe that every prophet received revelation in their lives, as each prophet was sent by God to guide mankind. Jesus is significant in this aspect as he received revelation in a twofold aspect, as Muslims believe he preached the Gospel while also having been taught the Torah.
According to Islamic traditions, Muhammad began receiving revelations from the age of 40, delivered through the angel Gabriel over the last 23 years of his life. The content of these revelations, known as the Qur'an, was memorized and recorded by his followers and compiled from dozens of hafiz as well as other various parchments or hides into a single volume shortly after his death. In Muslim theology, Muhammad is considered equal in importance to all other prophets of God, and making distinctions among the prophets is a sin, as the Qur'an itself promulgates equality between God's prophets (Qur'an 3:84).
Many scholars have made the distinction between revelation and inspiration, which according to Muslim theology, all righteous people can receive. Inspiration refers to God inspiring a person to commit some action, as opposed to revelation, which only the prophets received. Moses's mother, Jochebed, being inspired to send the infant Moses in a cradle down the Nile river is a frequently cited example of inspiration, as is Hagar searching for water for the infant Ishmael.
The term "revelation" is used in two senses in Jewish theology; it either denotes (1) what in rabbinical language is called "Gilluy Shekinah," a manifestation of God by some wondrous act of His which overawes man and impresses him with what he sees, hears, or otherwise perceives of His glorious presence; or it denotes (2) a manifestation of His will through oracular words, signs, statutes, or laws.
In Judaism, issues of epistemology have been addressed by Jewish philosophers such as Saadiah Gaon (882–942) in his Book of Beliefs and Opinions; Maimonides (1135–1204) in his Guide for the Perplexed; Samuel Hugo Bergman, professor of philosophy at the Hebrew University; Joseph Dov Soloveitchik (1903–1993), talmudic scholar and philosopher; Neil Gillman, professor of philosophy at the Jewish Theological Seminary of America; and Elliot N. Dorff, professor of philosophy at the American Jewish University.
One of the major trends in modern Jewish philosophy was the attempt to develop a theory of Judaism through existentialism. One of the primary players in this field was Franz Rosenzweig. His major work, "Star of Redemption", expounds a philosophy in which he portrays the relationships between God, humanity and the world as they are connected by creation, revelation and redemption.
Conservative Jewish philosophers Elliot N. Dorff and Neil Gillman take the existentialist philosophy of Rosenzweig as one of their starting points for understanding Jewish philosophy. (They come to different conclusions, however.)
Rabbinic Judaism, and contemporary Orthodox Judaism, hold that the Torah (Pentateuch) extant today is essentially the same one that the whole of the Jewish people received on Mount Sinai, from God, upon their Exodus from Egypt. Beliefs that God gave a "Torah of truth" to Moses (and the rest of the people), that Moses was the greatest of the prophets, and that the Law given to Moses will never be changed, are three of the Thirteen Principles of Faith of Orthodox Judaism according to Maimonides.
Orthodox Judaism believes that in addition to the written Torah, God also revealed to Moses a set of oral teachings, called the Oral Torah. In addition to this revealed law, Jewish law contains decrees and enactments made by prophets, rabbis, and sages over the course of Jewish history. Haredi Judaism tends to regard even rabbinic decrees as being of divine origin or divinely inspired, while Modern Orthodox Judaism tends to regard them as being more potentially subject to human error, although due to the Biblical verse "Do not stray from their words" (Deuteronomy 17:11) they are still accepted as binding law.
Conservative Judaism tends to regard both the Torah and the Oral law as not verbally revealed. The Conservative approach tends to regard the Torah as compiled by redactors in a manner similar to the Documentary Hypothesis. However, Conservative Jews also regard the authors of the Torah as divinely inspired, and many regard at least portions of it as originating with Moses. Positions can vary from the position of Joel Roth, following David Weiss HaLivni, that while the Torah originally given to Moses on Mount Sinai became corrupted or lost and had to be recompiled later by redactors, the recompiled Torah is nonetheless regarded as fully Divine and legally authoritative, to the position of Gordon Tucker that the Torah, while Divinely inspired, is a largely human document containing significant elements of human error, and should be regarded as the beginning of an ongoing process which is continuing today. Conservative Judaism regards the Oral Law as divinely inspired, but nonetheless subject to human error.
Reform and Reconstructionist Jews also accept the Documentary Hypothesis for the origin of the Torah, and tend to view all of the Oral law as an entirely human creation. Reform Jews believe that the Torah is not a direct revelation from God, but is a document written by human ancestors, carrying human understanding and experience, and seeking to answer the question: 'What does God require of us?'. They believe that, though it contains many 'core-truths' about God and humanity, it is also time-bound. They believe that God's will is revealed through the interaction of humanity and God throughout history, and so, in that sense, Torah is a product of an ongoing revelation. Reconstructionist Judaism denies the notion of revelation entirely.
Although the Nevi'im (the books of the Prophets) are considered divine and true, this does not imply that the books of the prophets are always read literally. Jewish tradition has always held that prophets used metaphors and analogies. There exists a wide range of commentaries explaining and elucidating those verses consisting of metaphor. Rabbinic Judaism regards Moses as the greatest of the prophets, and this view is one of the Thirteen Principles of Faith of traditional Judaism. Consistent with the view that revelation to Moses was generally clearer than revelation to other prophets, Orthodox views of revelation to prophets other than Moses have included a range of perspectives as to directness. For example, Maimonides in "The Guide for the Perplexed" said that accounts of revelation in the Nevi'im were not always as literal as in the Torah and that some prophetic accounts reflect allegories rather than literal commands or predictions.
Conservative Rabbi and philosopher Abraham Joshua Heschel (1907–1972), author of a number of works on prophecy, said that "Prophetic inspiration must be understood "as an event", not as "a process"." In his work "God in Search of Man", he discussed the experience of being a prophet. In his book "Prophetic Inspiration After the Prophets: Maimonides and Others", Heschel points to continued prophetic inspiration in Jewish rabbinic literature following the destruction of the Temple in Jerusalem and into medieval and even modern times.
The Guru Granth Sahib is considered to be a divine revelation by God to the Sikh gurus.
In various verses of Guru Granth Sahib, the Sikh gurus themselves state that they merely speak what the divine teacher (God) commands them to speak.
Guru Nanak frequently used to tell his ardent follower Mardana, "Oh Mardana, play the rabaab; the Lord's word is descending onto me."
In certain passages of the Guru Granth Sahib, it is clearly stated that the authorship is of divine origin and that the gurus were merely the channel through which such revelations came.
The Miracle of the Sun is probably the best-known revelation of recent times, but while some still consider it to be a genuine miracle, others regard it as a natural phenomenon with a natural explanation.
In 1909 Aleister Crowley, whilst staying in Egypt, received "The Book of the Law", the founding document of Thelema. | https://en.wikipedia.org/wiki?curid=26042 |
Rotterdam
Rotterdam is a city and municipality in the Netherlands. It is in the province of South Holland, at the mouth of the Nieuwe Maas channel leading into the Rhine–Meuse–Scheldt delta at the North Sea. Its history goes back to 1270, when a dam was constructed in the Rotte. In 1340, Rotterdam was granted city rights by the Count of Holland. The Rotterdam–The Hague metropolitan area, with a population of approximately 2.7 million, is the 13th-largest in the European Union and the most populous in the country.
A major logistic and economic centre, Rotterdam is Europe's largest seaport. In 2020, it had a population of 651,446 and is home to over 180 nationalities. Rotterdam is known for its university, riverside setting, lively cultural life, maritime heritage and modern architecture. The near-complete destruction of the city centre in the World War II Rotterdam Blitz has resulted in a varied architectural landscape, including skyscrapers designed by architects such as Rem Koolhaas, Piet Blom and Ben van Berkel.
The Rhine, Meuse and Scheldt give waterway access into the heart of Western Europe, including the highly industrialized Ruhr. The extensive distribution system, including rail, roads, and waterways, has earned Rotterdam the nicknames "Gateway to Europe" and "Gateway to the World".
The settlement at the lower end of the fen stream "Rotte" (or "Rotta", as it was then known, from "rot", "muddy" and "a", "water", thus "muddy water") dates from at least 900 CE. Around 1150, large floods in the area ended development, leading to the construction of protective dikes and dams, including "Schielands Hoge Zeedijk" ("Schieland’s High Sea Dike") along the northern banks of the present-day Nieuwe Maas. A dam on the Rotte was built in the 1260s and was located at the present-day "Hoogstraat" ("High Street").
On 7 July 1340, Count Willem IV of Holland granted city rights to Rotterdam, whose population then was only a few thousand. Around the year 1350, a shipping canal, the "Rotterdamse Schie" was completed, which provided Rotterdam access to the larger towns in the north, allowing it to become a local trans-shipment centre between the Netherlands, England and Germany, and to urbanize.
The port of Rotterdam grew slowly but steadily into a port of importance, becoming the seat of one of the six "chambers" of the "Vereenigde Oostindische Compagnie" (VOC), the Dutch East India Company.
The greatest spurt of growth, both in port activity and population, followed the completion of the Nieuwe Waterweg in 1872. The city and harbour started to expand on the south bank of the river. The "Witte Huis" or "White House" skyscraper, inspired by American office buildings and built in 1898 in the French Château-style, is evidence of Rotterdam's rapid growth and success. When completed, it was the tallest office building in Europe, with a height of .
During World War I, the city was the world's largest spy centre because of Dutch neutrality and its strategic location in between Britain, Germany and German-occupied Belgium. Many spies who were arrested and executed in Britain were led by German secret agents operating from Rotterdam. MI6 had its main European office on de Boompjes. From there the British coordinated espionage in Germany and occupied Belgium. During World War I, an average of 25,000 Belgian refugees lived in the city, as well as hundreds of German deserters and escaped Allied prisoners of war.
During World War II, the German army invaded the Netherlands on 10 May 1940. Adolf Hitler had hoped to conquer the country in just one day, but his forces met unexpectedly fierce resistance. The Dutch army was forced to capitulate on 15 May 1940, following the bombing of Rotterdam on 14 May and the threat of bombing of other Dutch cities. The heart of Rotterdam was almost completely destroyed by the Luftwaffe. Some 80,000 civilians were made homeless and 900 were killed, a relatively low toll given that many had fled the city because of the warfare and bombing that had been going on in Rotterdam since the start of the invasion three days earlier. The City Hall survived the bombing. Ossip Zadkine later attempted to capture the event with his statue "De Verwoeste Stad" ('The Destroyed City'). The statue stands near the "Leuvehaven", not far from the Erasmusbrug in the centre of the city, on the north shore of the river "Nieuwe Maas".
Rotterdam was gradually rebuilt from the 1950s through to the 1970s. It remained quite windy and open until the city councils from the 1980s on began developing an active architectural policy. Daring and new styles of apartments, office buildings and recreation facilities resulted in a more 'livable' city centre with a new skyline. In the 1990s, the Kop van Zuid was built on the south bank of the river as a new business centre. Rotterdam was voted 2015 European City of the Year by the Academy of Urbanism. A Guardian profile of Rem Koolhaas begins "If you put the last 50 years of architecture in a blender, and spat it out in building-sized chunks across the skyline, you would probably end up with something that looked a bit like Rotterdam."
Rotterdam is divided into a northern and a southern part by the river Nieuwe Maas, connected by (from west to east): the Beneluxtunnel; the Maastunnel; the "Erasmusbrug" ('Erasmus Bridge'); a subway tunnel; the "Willemsspoortunnel" ('Willems railway tunnel'); the "Willemsbrug" ('Willems Bridge'); the "Koninginnebrug" ('Queen's Bridge'); and the "Van Brienenoordbrug" ('Van Brienenoord Bridge'). The former railway lift bridge "De Hef" ('the Lift') is preserved as a monument in lifted position between the "Noordereiland" ('North Island') and the south of Rotterdam.
The city centre is located on the northern bank of the Nieuwe Maas, although recent urban development has extended the centre to parts of southern Rotterdam known as "De Kop van Zuid" ('the Head of South', i.e. the northern part of southern Rotterdam). From its inland core, Rotterdam reaches the North Sea by a swathe of predominantly harbour area.
Built mostly behind dikes, large parts of Rotterdam are below sea level. For instance, the Prins Alexander Polder in the northeast of Rotterdam extends below sea level, or rather below Normaal Amsterdams Peil (NAP) or 'Amsterdam Ordnance Datum'. The lowest point in the Netherlands ( below NAP) is situated just to the east of Rotterdam, in the municipality of Nieuwerkerk aan den IJssel.
The Rotte river no longer joins the Nieuwe Maas directly. Since the early 1980s, when the construction of Rotterdam's second subway line interfered with the Rotte's course, its waters have been pumped through a pipe into the Nieuwe Maas via the Boerengat.
Between the summers of 2003 and 2008, an artificial beach was created at the Boompjeskade along the Nieuwe Maas, between the Erasmus Bridge and the Willems Bridge. Swimming was not possible, and digging pits was limited to the depth of the layer of sand, about . Alternatively, people go to the beach at Hoek van Holland (which is a Rotterdam district) or to one of the beaches in Zeeland, such as Renesse, or on the Zuid Hollandse Eilanden: Ouddorp or Oostvoorne.
Rotterdam forms the centre of the Rijnmond conurbation, bordering the conurbation surrounding The Hague to the north-west. The two conurbations are so close together that they effectively form a single conurbation. They share the Rotterdam The Hague Airport and a light rail system called RandstadRail. Consideration is being given to creating an official Metropolitan region Rotterdam The Hague ("Metropoolregio Rotterdam Den Haag"), which would have a combined population approaching 2.5 million.
In turn, the Rijnmond conurbation is part of the southern wing (the Zuidvleugel) of the Randstad, one of the most important and most densely populated economic areas in north-western Europe. With a population of 7.1 million, the Randstad is the sixth-largest urban area in Europe (after Moscow, London, Paris, Istanbul, and the Rhine-Ruhr area). The Zuidvleugel, situated in the province of South Holland, has a population of around 3 million.
Rotterdam experiences a temperate oceanic climate (Köppen climate classification "Cfb"), like all of the coastal areas of the Netherlands. Located near the coast, its climate is slightly milder than that of locations further inland. Winters are cool with frequent cold days, while summers are mild to warm, with occasional hot temperatures. Temperatures above 30 °C are not uncommon during summer, while (night) temperatures can drop below -5 °C during winter for short periods of time, mostly during periods of sustained easterly (continental) winds. The following climate data is from the airport, which is slightly cooler than the city itself and is surrounded by canals that moderate its climate and raise its relative humidity. The city has an urban heat island, especially in the city centre.
Rotterdam is diverse, with the demographics differing by neighbourhood. The city centre has a disproportionately high number of single people when compared to other cities, with 70% of the population between the ages of 20 and 40 identifying as single. Those with higher education and higher income live disproportionately in the city centre, as do foreign-born citizens. 54% of city centre residents are foreign born, compared to 45% in other parts of the city, while in the city centre 70% of businesses are run by foreign-born people. Nonetheless, this says little about income, as 80% of homes in the city centre are rented.
The municipality of Rotterdam is part of the Rotterdam-The Hague Metropolitan Area which, as of 2015, covers an area of 1,130 km2, of which 990 km2 is land, and has a population of approximately 2,563,197. As of 2019, the municipality itself occupies an area of 325.79 km2, 208.80 km2 of which is land, and is home to 638,751 inhabitants. Its population peaked at 731,564 in 1965, but the dual processes of suburbanization and counterurbanization saw this number steadily decline over the next two decades, reaching 560,000 by 1985. Although Rotterdam has experienced population growth since then, it has done so at a slower pace than comparable cities in the Netherlands, like Amsterdam, The Hague and Utrecht.
Rotterdam consists of 14 submunicipalities: Centrum, Charlois (including Heijplaat), Delfshaven, Feijenoord, Hillegersberg-Schiebroek, Hoek van Holland, Hoogvliet, IJsselmonde, Kralingen-Crooswijk, Noord, Overschie, Pernis, and Prins Alexander (the most populous submunicipality with around 85,000 inhabitants). One other area, Rozenburg, has had official submunicipality status since 18 March 2010.
The current size of the municipality of Rotterdam is the result of the amalgamation of several former municipalities, some of which are now submunicipalities.
In the Netherlands, Rotterdam has the highest percentage of foreigners from non-industrialised nations. They form a large part of Rotterdam's multi-ethnic and multicultural diversity. 50.3% of the population are of non-Dutch origin or have at least one parent born outside the country. There are 80,000 Muslims, constituting 13% of the population. The mayor of Rotterdam, Ahmed Aboutaleb, is of Moroccan descent and is a practicing Muslim. The city is home to the largest Dutch Antillean community. The city also has its own China Town at the West-Kruiskade, close to Rotterdam Centraal.
Christianity is the largest religion in Rotterdam, with 31.1% of the population identifying as Christian. The second and third largest religions are Islam (13.3%) and Hinduism (3.3%), while about half of the population has no religious affiliation.
Since 1795 Rotterdam has hosted the chief congregation of the Remonstrants, a liberal Protestant brotherhood. Since 1955, when the Diocese of Rotterdam was split from the Diocese of Haarlem, the city has been the see of the bishop of Rotterdam. Since 2010 the city has been home to the largest mosque in the Netherlands (capacity 1,500).
Rotterdam has always been one of the main centres of the shipping industry in the Netherlands, from the Rotterdam Chamber of the VOC, the world's first multinational, established in 1602, to the merchant shipping leader Royal Nedlloyd, established in 1970, which located its corporate headquarters in the landmark 'Willemswerf' building in 1988. In 1997, Nedlloyd merged with the British shipping industry leader P&O, forming the third largest merchant shipping company in the world. The Anglo-Dutch P&O Nedlloyd was bought by the Danish giant corporation 'AP Moller Maersk' in 2005, and its Dutch operations are still headquartered in the 'Willemswerf'.
Nowadays, well-known companies with headquarters in Rotterdam are consumers goods company Unilever, asset management firm Robeco, energy company Eneco, dredging company Van Oord, oil company Shell Downstream, terminal operator Vopak, commodity trading company Vitol and architecture firm Office for Metropolitan Architecture.
It is also home to the regional headquarters of chemical company LyondellBasell, commodities trading company Glencore, pharmaceutical company Pfizer, logistics company Stolt-Nielsen, electrical equipment company ABB Group and consumer goods company Procter & Gamble. Furthermore, Rotterdam has the Dutch headquarters of Allianz, Maersk, Petrobras, Samskip, Louis Dreyfus Group, Aon and MP Objects.
The City of Rotterdam makes use of the services of the semi-government company Roteb (to take care of sanitation, waste management and assorted services) and the Port of Rotterdam Authority (to maintain the Port of Rotterdam). Both these companies were once municipal bodies; now they are autonomous entities, owned by the City.
Being the largest port and one of the largest cities of the country, Rotterdam attracts many people seeking jobs, especially in the cheap labour segment. The city's unemployment rate is 12%, almost twice the national average.
Rotterdam is the largest port in Europe, with the rivers Maas and Rhine providing excellent access to the hinterland upstream reaching to Basel, Switzerland and into France. In 2004 Shanghai took over as the world's busiest port. In 2006, Rotterdam was the world's seventh largest container port in terms of twenty-foot equivalent units (TEU) handled.
The port's main activities are petrochemical industries and general cargo handling and transshipment. The harbour functions as an important transit point for bulk materials between the European continent and overseas. From Rotterdam goods are transported by ship, river barge, train or road. In 2007, the "Betuweroute", a new fast freight railway from Rotterdam to Germany, was completed.
Well-known streets in Rotterdam are the Lijnbaan (the first set of pedestrian streets in the country, opened in 1953), the Hoogstraat, the Coolsingel with the city hall, and the Weena, which runs from the Central Station to the Hofplein (square). A modern shopping venue is the Beurstraverse ("Stock Exchange Traverse"), better known by its informal name 'Koopgoot' ('Buying/Shopping Gutter', after its subterranean position), which crosses the Coolsingel below street level. The Kruiskade is a more upscale shopping street, with retailers like Michael Kors, 7 For All Mankind, Calvin Klein, Hugo Boss, Tommy Hilfiger and the well-known Dutch men's clothier Oger. Another upscale shopping venue is the flagship store of department store De Bijenkorf. Located a little more to the east is the Markthal, with many small retailers inside. This hall is also one of Rotterdam's famous architectural landmarks.
The main shopping venue in the south of Rotterdam is Zuidplein, which lies close to Rotterdam Ahoy, an accommodation center for shows, exhibitions, sporting events, concerts and congresses. Another prominent shopping center, called Alexandrium, lies in the east of Rotterdam. It includes a large kitchen and furniture center.
Rotterdam has one major university, the Erasmus University Rotterdam (EUR), named after one of the city's famous former inhabitants, Desiderius Erasmus. The Woudestein campus houses (among others) Rotterdam School of Management, Erasmus University. In the Financial Times' 2005 rankings it placed 29th globally and 7th in Europe. In the 2009 Masters of Management rankings, the school reached first place with the CEMS Master in Management and tenth place with its RSM Master in Management. The university is also home to Europe's largest student association, STAR Study Association Rotterdam School of Management, Erasmus University, while the world's largest student association, AIESEC, has its international office in the city.
The Willem de Kooning Academy is Rotterdam's main art school and is part of the Hogeschool Rotterdam. It is regarded as one of the most prestigious art schools in the Netherlands and the leading one in advertising and copywriting. Part of the Willem de Kooning Academy is the Piet Zwart Institute for postgraduate studies and research in fine art, media design and retail design. The Piet Zwart Institute boasts a selective roster of emerging international artists.
The Hoboken campus of EUR houses the Dijkzigt (general) hospital, the Sophia Hospital (for children) and the Medical Department of the University. They are known collectively as the Erasmus Medical Center. This center is ranked third in Europe by CSIC as a hospital, and is also ranked within top 50 universities of the world in the field of medicine (clinical, pre-clinical & health, 2017).
Three "Hogescholen" (Universities of applied sciences) exist in Rotterdam. These schools award their students a professional Bachelor's degree and postgraduate or Master's degree. The three "Hogescholen" are Hogeschool Rotterdam, Hogeschool Inholland and Hogeschool voor Muziek en Dans (uni for music and dance) which is also known as CodArts.
As there are many international and American schools scattered across Europe, such as ASH (American International School of the Hague), Rotterdam also has its own international/American school, AISR (American International School of Rotterdam). At AISR children receive a multicultural education in a culturally diverse community, and the school offers the International Baccalaureate (IB) Diploma Program.
Unique to the city is the Shipping & Transport College, which offers master's, bachelor's and vocational diplomas at all levels.
Alongside Porto, Rotterdam was European Capital of Culture in 2001. The city has its own orchestra, the Rotterdam Philharmonic, with its well-regarded young music director Yannick Nézet-Séguin; a large congress and concert building called "De Doelen"; several theaters (including the new "Luxor") and movie theatres; and the Rotterdam Ahoy complex in the south of the city, which is used for pop concerts, exhibitions, tennis tournaments, and other activities. A major zoo called Diergaarde Blijdorp is situated at the northwest side of Rotterdam, complete with a walkthrough sea aquarium called the Oceanium.
Rotterdam features some urban architecture projects, nightlife, and many summer festivals celebrating the city's multicultural population and identity, such as the Caribbean-inspired "Summer Carnival", the Dance Parade, Rotterdam 666, the Metropolis pop festival and the World Port days. Between 2005 and 2011 the city struggled with venues for pop music, many of which suffered severe financial problems. This resulted in the disappearance of the major music venues Nighttown and WATT and of smaller stages such as Waterfront, Exit, and Heidegger. Currently the city has a few venues for pop music, such as Rotown, Poortgebouw and Annabel. The venue WORM focuses on experimental music and related cutting-edge subcultural music.
There are also the International Film Festival in January, the Poetry International Festival in June, the North Sea Jazz Festival in July, the Valery Gergiev Festival in September, September in Rotterdam and the World of the Witte de With. In June 1970, the Holland Pop Festival (which featured Jefferson Airplane, The Byrds, Canned Heat, It's a Beautiful Day, and Santana) was held and filmed at the Stamping Grounds in Rotterdam.
There is a healthy rivalry with Amsterdam, which is often viewed as the cultural capital of the Netherlands. This rivalry is most visible among the cities' football supporters, those of Feyenoord (Rotterdam) and Ajax (Amsterdam). There is a saying: "Amsterdam to party, Den Haag (The Hague) to live, Rotterdam to work". Another, more popular with Rotterdammers, is "Money is earned in Rotterdam, distributed in The Hague and spent in Amsterdam". Yet another saying that reflects this rivalry is "Amsterdam has it, Rotterdam doesn't need it".
In terms of alternative culture, Rotterdam had a thriving squatters' movement from the 1960s until the 2000s which, as well as housing thousands of people, occupied venues, social centres and the like. From this movement came clubs like Boogjes, Eksit, Nighttown, Vlerk and Waterfront. The Poortgebouw was squatted in the 1980s and quickly legalised.
Rotterdam is also the home of Gabber, a type of hardcore electronic music popular in the mid-1990s, with hard beats and samples. Groups like Neophyte and Rotterdam Terror Corps (RTC) started in Rotterdam, playing at clubs like Parkzicht.
The main cultural organisations in Amsterdam, such as the Concertgebouw and the Holland Festival, have joined forces with similar organisations in Rotterdam via A'R'dam. In 2007 these organisations published plans for co-operation, one goal of which is to strengthen the international position of culture and art in the Netherlands.
On 30 August 2019, it was announced by the European Broadcasting Union and the Dutch television broadcasters AVROTROS, NOS & NPO that Rotterdam would host the Eurovision Song Contest 2020, following the Dutch victory at the contest in Tel Aviv, Israel with the song "Arcade", performed by Duncan Laurence. However, due to the COVID-19 pandemic in Europe, the 2020 contest was cancelled; the EBU confirmed on 16 May 2020 that Rotterdam would instead host the 2021 edition. The contest is set to take place at Rotterdam Ahoy, with the semi-finals on 18 & 20 May 2021 and the final on 22 May 2021. This will be the first time that Rotterdam has hosted the contest, and the first time the Netherlands has hosted it since 1980, when it was held in The Hague.
Rotterdam has many museums. Well-known museums are the Museum Boijmans Van Beuningen, the Netherlands Architecture Institute, the Wereldmuseum, the Kunsthal, the Witte de With Center for Contemporary Art and the Maritime Museum Rotterdam. The Historical Museum Rotterdam has changed into Museum Rotterdam, which aims to exhibit Rotterdam as a contemporary transnational city rather than as a city of the past.
Other museums include the tax museum and the natural history museum. At the historical shipyard and museum Scheepswerf 'De Delft', the reconstruction of the ship of the line "Delft" can be visited.
Rotterdam has become world famous because of its modern and groundbreaking architecture. Throughout the years the city has been nicknamed "Manhattan at the Meuse" and "the architectural capital of the Netherlands", both for its skyline and because it is home to internationally leading architectural firms involved in the design of famous buildings and bridges in other big cities. Examples include OMA (Rem Koolhaas), Neutelings & Riedijk and Erick van Egeraat. The city has a reputation as a platform for architectural development and education through the NAi (Netherlands Architecture Institute), which is open to the public and has a variety of exhibitions on architecture and urban planning issues, and previously through the Berlage Institute, a postgraduate laboratory of architecture. The city has 38 skyscrapers and 352 high-rises, with many more skyscrapers planned or under construction. The five tallest buildings in the Netherlands are all in Rotterdam. It is home to the tallest building in the country, the Maastoren, with a height of 165 metres. In 2021, the Zalmhaven Tower will be completed with a height of 215 metres, becoming the new tallest building in the Netherlands.
In 1898, the high-rise office building the White House (in Dutch Witte Huis) was completed, at that time the tallest office building in Europe.
In the first decades of the 20th century, some influential architecture in the modern style was built in Rotterdam. Notable are the Van Nelle fabriek (1929) a monument of modern factory design by Brinkman en Van der Vlugt, the Jugendstil clubhouse of the Royal Maas Yacht Club designed by Hooijkaas jr. en Brinkman (1909), and Feyenoord's football stadium De Kuip (1936) also by Brinkman en Van der Vlugt. The architect J. J. P. Oud was a famous Rotterdammer in those days. The Van Nelle Factory obtained the status of UNESCO World Heritage Site in 2014.
During the early stages of World War II, the center of Rotterdam was bombed by the German Luftwaffe, destroying many of the older buildings in the center of the city. After the initial crisis reconstruction, the center of Rotterdam has become the site of ambitious new architecture.
Rotterdam is also famous for the Lijnbaan (1952) by the architects Broek en Bakema, the Peperklip by architect Carel Weeber, and the Kubuswoningen, or cube houses, designed by architect Piet Blom (1984).
The newest landmark in Rotterdam is the Markthal, designed by the architectural firm MVRDV. In addition, many internationally well-known architects are based in Rotterdam, such as OMA (Rem Koolhaas), Neutelings & Riedijk and Erick van Egeraat, to name a few. Two architectural landmarks are located in the Lloydkwartier: the STC college building and the Schiecentrale 4b.
Over 30 new high-rise projects are currently being developed. A Guardian journalist wrote in 2013 that "All this is the consequence of the city suffering a bombardment of two things: bombs and architects."
Rotterdam calls itself "Sportstad" (City of Sports). The city annually organises several world-renowned sporting events. Some examples are the Rotterdam Marathon, the World Port Tournament, and the Rotterdam World Tennis Tournament. Rotterdam also organises one race of the Red Bull Air Race World Championship and the car racing event Monaco aan de Maas (Monaco at the Meuse).
The city is also the home of many sports clubs and some historic and iconic athletes.
Rotterdam is the home of three professional football clubs: the first-tier clubs Feyenoord and Sparta, and the second-tier club Excelsior.
Feyenoord, founded in 1908 and the dominant of the three professional clubs, has won fifteen national titles since the introduction of professional football in the Netherlands. In 1970 it became the first Dutch club to win the European Cup (the predecessor of the Champions League), and it won the world cup for club teams in the same year. In 1974, Feyenoord was the first Dutch club to win the UEFA Cup, and in 2002 it won the UEFA Cup again. In 2008, the year of its 100-year anniversary, Feyenoord won the KNVB Cup.
Seating 51,480, its 1937 stadium, called "Stadion Feijenoord" but popularly known as De Kuip ('the Tub'), is the second largest in the country, after the Amsterdam Arena. De Kuip, located in the southeast of the city, has hosted many international football matches, including the final of Euro 2000, and has been awarded UEFA five-star status. There are concrete plans to build a new stadium with a capacity of at least 63,000 seats.
Sparta, founded in 1888 and situated in the northwest of Rotterdam, won the national title six times; Excelsior (founded 1902), in the northeast, has never won any.
Rotterdam also has three fourth tier clubs, SC Feijenoord (Feyenoord Amateurs), PVV DOTO and TOGR.
Rotterdam is and has been home to many great football players and coaches.
Rotterdam has its own annual international marathon, which offers one of the fastest courses in the world. From 1985 until 1998, the world record was set in Rotterdam, first by Carlos Lopes and later in 1988 by Belayneh Densamo.
In 1998, the world record for women was set by Tegla Loroupe, in a time of 2:20:47. Loroupe won the Rotterdam Marathon three consecutive times, from 1997 to 1999.
The current course record for men is held by Duncan Kibet, who ran a time of 2:04:27 in 2009. The women's record was set in 2012, when Tiki Gelana finished the race in 2:18:58. Gelana went on to become the 2012 Olympic champion in London a few months later.
The marathon starts and ends on the "Coolsingel" in the heart of Rotterdam. It attracts a total of 900,000 visitors.
Since 1972, Rotterdam has hosted the indoor hard court ABN AMRO World Tennis Tournament, part of the ATP Tour. The first edition, in 1972, was won by Arthur Ashe, who went on to win the tournament two more times, making him the singles title record holder.
Former Wimbledon winner Richard Krajicek became the tournament director after his retirement in 2000. The latest edition of the tournament attracted a total of 116,354 visitors.
In November 2008 Rotterdam was chosen as the host of the Grand Départ of the 2010 Tour de France.
Rotterdam won the selection over the Dutch city of Utrecht; Germany's Düsseldorf had also previously expressed interest in hosting. The Amaury Sport Organization (ASO), organizer of the Tour de France, said in a statement on its website that it chose Rotterdam because, like London, it was another big city that could showcase the use of bikes for urban transportation, and because it was well positioned relative to the rest of the route envisioned for the 2010 event.
The start in Rotterdam was the fifth in the Netherlands. The prologue was an individual time trial crossing the centre of the city. The first regular stage left from the Erasmusbrug and went south, towards Brussels.
The second stage of the 2015 edition took the riders through Rotterdam on their way to Neeltje Jans in Zeeland.
Members of the student rowing club Skadi were part of the 'Holland Acht', which won a gold medal at the 1996 Olympics. Since its opening in April 2013, Rotterdam has been home to the rowing venue Willem-Alexander Baan, which hosted the 2016 World Rowing Championships for seniors, U23 and juniors.
In field hockey, Rotterdam has the largest hockey club in the Netherlands, HC Rotterdam, with its own stadium in the north of the city and nearly 2,400 members. The first men's and women's teams both play on the highest level in the Dutch "Hoofdklasse".
Rotterdam is home to the most successful European baseball team, Neptunus Rotterdam, which has won the most European Cups.
Rotterdam has a long boxing tradition, starting with Bep van Klaveren (1907–1992), aka 'The Dutch Windmill', gold medal winner at the 1928 Amsterdam Olympics, followed by professional boxers such as Regilio Tuur and Don Diego Poeder.
Rotterdam's swimming tradition started with Marie Braun, aka Zus ('sister') Braun, who was coached to a gold medal at the 1928 Amsterdam Olympics by her mother, Ma Braun, and to three European titles three years later in Paris. In her career as a 14-time national champion, she broke six world records. Ma Braun later also coached the Rotterdam-born, three-time Olympic champion Rie Mastenbroek during the Berlin Olympics in 1936. In later years Inge de Bruijn became a Rotterdam sports icon as a triple Olympic gold medal winner in 2000 and a triple European gold medal winner in 2001.
Daan Kagchelland, Olympic gold medalist in the O-Jolle class at the 1936 Olympics, was born in Rotterdam. The Kralingse Plas was, and still is, a source of Olympic sailors such as Koos de Jong, Ben Verhagen, Henny Vegter, Serge Kats and Margriet Matthijsse.
Motorcycle speedway was staged in the Feyenoord Stadium after the Second World War. The team, which raced in a Dutch league, was known as the Feyenoord Tigers and included Dutch riders as well as some English and Australian riders.
Since 1986, the city has selected its best sportsman, woman and team at the Rotterdam Sports Awards Election, held in December.
Rotterdam hosts several annual events unique to the city: the "Zomercarnaval" (Summer Carnival), the second largest Caribbean carnival in Europe, originally called the Antillean carnival; the "North Sea Jazz Festival", the largest jazz festival in Europe; "Bavaria City Racing", a Formula 1 demonstration event in the city center; and the "World Port Days", a three-day maritime extravaganza celebrating the Port of Rotterdam.
Rotterdam offers connections by international, national, regional and local public transport systems, as well as by the Dutch motorway network.
Motorways
There are several motorways to and from Rotterdam: four form part of its 'Ring' (ring road), and two other motorways also serve the city.
Airport
Much smaller than the international hub Schiphol Airport, Rotterdam The Hague Airport (formerly known as "Zestienhoven") is the third largest airport in the country, behind Schiphol Airport and Eindhoven Airport. Located north of the city, it has shown very strong growth over the past five years, mostly driven by the growth of the low-cost carrier market. For business travelers, Rotterdam The Hague Airport offers advantages in terms of rapid handling of passengers and baggage. Environmental regulations make further growth uncertain.
Train
Rotterdam is well connected to the Dutch railway network and has several international connections.
In Rotterdam, public transport services are provided by several companies.
Metro
In 1968, Rotterdam was the first Dutch city to open a metro system. Currently the metro system consists of three main lines, each of which has its own variants. With 70 stations, the network is the biggest in the Benelux. The system is operated as five lines: three (A, B and C) on the east–west line and two (D and E) on the north–south line. Line E (RandstadRail) has connected Rotterdam with The Hague since December 2011.
Tram
The Rotterdam tramway network offers nine regular tram lines and four special tram lines.
Bus
Rotterdam offers 55 city bus lines.
RET runs buses in the city of Rotterdam and surrounding places like Spijkenisse, Barendrecht, Ridderkerk, Rhoon, Poortugaal, Schiedam, Vlaardingen, Delft and Capelle aan den IJssel.
Arriva Netherlands, Connexxion, Qbuzz and Veolia run buses from other cities to Rotterdam.
Waterbus
The Waterbus network consists of seven lines. The main line (Line 20) stretches from Rotterdam to Dordrecht, with several stops along the way. The ferry carries about 130 passengers and has space for 60 bicycles.
Ferry
P&O Ferries have daily sailings from Europoort to Kingston upon Hull in the UK.
Rotterdam has city and port connections throughout the world. In 2008, the city had 13 sister cities, 12 partner cities, and 4 sister ports. Since 2008, the City of Rotterdam has not forged new sister or partner connections, as such relationships are no longer a priority in its international relations.
On 15 March 2017 the Turkish president expressed his wish that Istanbul should no longer be a twin town of Rotterdam. A spokesperson for the Rotterdam municipality then explained that the two cities have no official partnership, though the two authorities do often cooperate.
Rotterdam features in Edgar Allan Poe's short story "The Unparalleled Adventure of One Hans Pfaall" (1835), as well as in J.T. Sheridan Le Fanu's "Strange Event in the Life of Schalken the Painter" (1839).
Part of Jackie Chan's 1998 film "Who am I?" is set in Rotterdam.
"Ender's Shadow", part of the series "Ender's Game" is partially set in Rotterdam.
In season 1, episode 2 of "The Golden Girls" ("Guess Who's Coming to the Wedding?"), Dorothy reminisces how her ex-husband, Stan, would buy her tulips after they had a fight. "Towards the end, our house looked like Easter in Rotterdam."
In 1996, the British band The Beautiful South recorded a song named after the city, titled "Rotterdam (Or Anywhere)".
In the 2004 video game "Hitman: Contracts", the levels "Rendezvous in Rotterdam" and "Deadly Cargo" both take place in Rotterdam.
The 2017 Olivier Award-winning play "Rotterdam", written by Jon Brittain, is set in the city.
In "Battlefield V", this city is used as a BFV map and according to its history, the white building was almost left untouched by the bombing during WWII and that building can be seen on both in-game and real world. | https://en.wikipedia.org/wiki?curid=26049 |
Ringworld
Ringworld is a 1970 science fiction novel by Larry Niven, set in his Known Space universe and considered a classic of science fiction literature. "Ringworld" tells the story of Louis Wu and his companions on a mission to the Ringworld, a rotating wheel space station, an alien construct in space 186 million miles in diameter. Niven later added three sequel novels and then cowrote, with Edward M. Lerner, four prequels and a final sequel; the five latter novels constitute the Fleet of Worlds series. All the "Ringworld" novels tie into numerous other books set in Known Space. "Ringworld" won the Nebula Award in 1970, as well as both the Hugo Award and Locus Award in 1971.
On planet Earth in 2850 AD, Louis Gridley Wu is celebrating his 200th birthday. Despite his age, Louis is in perfect physical condition (due to the longevity drug boosterspice). He has once again become bored with human society and is thinking about taking one of his periodic sabbaticals, alone in a spaceship far away from other people. He meets Nessus, a Pierson's puppeteer, who offers him a mysterious job. Intrigued, Louis eventually accepts. Speaker-to-Animals (Speaker), who is a Kzin, and Teela Brown, a young human woman who becomes Louis's lover, also join the crew.
They first go to the puppeteer home world, where they learn that the expedition's goal is to investigate the Ringworld, a gigantic artificial ring, to see if it poses any threat. The Ringworld is about one million miles (1.6 million km) wide and approximately the diameter of Earth's orbit (which makes it about 584.3 million miles or 940.4 million km in circumference), encircling a sunlike star. It rotates to provide artificial gravity 99.2% as strong as Earth's from centrifugal force. The Ringworld has a habitable, flat inner surface (equivalent in area to approximately three million Earths), a breathable atmosphere and a temperature optimal for humans. Night is provided by an inner ring of shadow squares which are connected to each other by thin, ultra-strong wire. When the crew completes their mission, they will be given the starship in which they travelled to the puppeteer home world; it is orders of magnitude faster than any possessed by humans or Kzinti.
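Those figures fit together arithmetically. As a rough back-of-the-envelope check (a sketch only, which assumes a ring radius of exactly 1 AU rather than Niven's precise value), the rim speed needed to produce 99.2% of Earth gravity by spin follows from the circular-motion relation a = v²/r:

    import math

    AU = 1.496e11          # assumed ring radius in metres (~93 million miles)
    g = 9.80665            # standard Earth gravity, m/s^2
    a = 0.992 * g          # centripetal acceleration standing in for gravity

    # For circular motion a = v^2 / r, so the rim speed is v = sqrt(a * r).
    v = math.sqrt(a * AU)                          # ~1.2e6 m/s (~750 miles/s)
    period_days = (2 * math.pi * AU / v) / 86400   # one full rotation: ~9 days

    print(f"rim speed: {v / 1000:.0f} km/s, rotation period: {period_days:.1f} days")

Under these assumptions the script reports a rim speed of roughly 1,200 km/s (about 750 miles per second) and a rotation period of about nine days, figures of the same order as those implied by the novel's description.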
When they reach the vicinity of the Ringworld, they are unable to contact anyone, and their ship, the "Lying Bastard", is disabled by the Ringworld's automated meteoroid-defense system. The severely damaged vessel collides with a strand of shadow-square wire and crash-lands near a huge mountain, "Fist-of-God". Although many of the ship's systems survive intact, the normal drive is destroyed, leaving them unable to launch back into space, where they could use the undamaged faster-than-light hyperdrive to return home. They set out to find a way to get the "Lying Bastard" off of the Ringworld.
Using their flycycles (similar to antigravity motorcycles), they try to reach the rim of the ring, where they hope to find some technology that will help them. It will take them months to cross the vast distance. When Teela develops "Plateau trance" (a kind of highway hypnosis), they are forced to land. On the ground, they encounter apparently primitive human natives who live in the crumbling ruins of a once-advanced city and think that the crew are the engineers who created the ring, and whom they revere as gods. The crew is attacked when they commit what the natives consider blasphemy (the misuse of certain technologies).
They continue their journey, during which Nessus reveals some Puppeteer secrets: they have conducted experiments on both humans (breeding for luck via Birthright Lotteries: all of Teela's ancestors for six generations were born from winning the lottery) and Kzinti (breeding for reduced aggression via the Man-Kzin wars, which the Kzinti always lost). Speaker's outrage forces Nessus to flee and follow them from a safe distance.
In a floating building over the ruins of a city, they find a map of the Ringworld and videos of its past civilization.
While flying through a giant storm, caused by air escaping through a hole in the Ring floor due to a meteoroid impact, Teela becomes separated from the others. While Louis and Speaker search for her, their flycycles are caught by an automatic police trap designed to catch traffic offenders. They are trapped in the basement of a floating police station. Nessus enters the station to try to help them.
In the station, they meet Halrloprillalar Hotrufan ("Prill"), a former crew member of a spaceship used for trade between the Ringworld and other inhabited worlds. When her ship returned to the Ringworld the last time, they found that civilization had collapsed. The crew managed to enter the Ringworld, but some of them were killed and others suffered brain damage when the device that let them pass through the Ringworld floor failed. From her account, Louis surmises that a mold was brought back from one of the other planets by a spaceship like Prill's; it broke down the superconductors vital to the Ringworld civilization, dooming it.
Teela reaches the police station, accompanied by her new lover, a native "hero" called Seeker who helped her survive. Based on an insight gained from studying an ancient Ringworld map, Louis comes up with a plan to get home. Teela, however, chooses to remain on the Ringworld with Seeker. Louis, formerly skeptical about breeding for luck, now wonders if the entire mission was caused by Teela's luck, to unite her with her true love and help her mature.
The party collects one end of the shadow-square wire that was snapped when the ship crashed. They travel back to their crashed ship in the floating police station, dragging the wire behind them. Louis threads it through the ship to tether it to the police station. He then takes the police station up to the summit of "Fist-of-God", the enormous mountain near their crash site. The mountain had not appeared on the Ringworld map, leading Louis to conclude that it is in fact the result of a meteoroid impact with the underside of the ring, which pushed the "mountain" up from the ring's floor and broke through. The top of the mountain, above the atmosphere, is therefore just a hole in the Ringworld floor. Louis drives the police station over the edge, dragging the "Lying Bastard" along with it. The Ringworld spins very quickly, so once the ship drops through the hole and clears the ring, they can use the ship's hyperdrive to get home. The book concludes with Louis and Speaker discussing returning to the Ringworld.
Algis Budrys found "Ringworld" to be "excellent and entertaining ... woven together very skillfully and proceed[ing] at a pretty smooth pace." While praising the novel generally, he faulted Niven for relying on inconsistencies regarding evolution in his extrapolations to support his fictional premises.
In addition to the two aliens, Niven includes a number of concepts from his other Known Space stories.
The opening chapter of the original paperback edition of "Ringworld" featured Louis Wu teleporting eastward around the Earth in order to extend his birthday. Moving in this direction would, in fact, make local time later rather than earlier, so that Wu would soon arrive in the early morning of the next calendar day. Niven was "endlessly teased" about this error, which he corrected in subsequent printings to show Wu teleporting westward. In his dedication to "The Ringworld Engineers", Niven wrote, "If you own a first paperback edition of "Ringworld", it's the one with the mistakes in it. It's worth money."
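The geometry of the error is easy to check. A minimal sketch (using Python's standard zoneinfo module and three arbitrarily chosen present-day cities, none of which appear in the novel) shows that at one fixed instant, the local clock reads later the further east one looks, so an eastward traveller runs into the next calendar day rather than away from it:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo  # standard library since Python 3.9

    # One fixed instant, viewed from three cities at increasing eastward longitude.
    instant = datetime(2024, 1, 18, 23, 0, tzinfo=timezone.utc)
    for city in ("America/Los_Angeles", "Europe/London", "Asia/Tokyo"):
        local = instant.astimezone(ZoneInfo(city))
        print(f"{city:20} {local.isoformat()}")

At that instant Los Angeles still shows 15:00 on 18 January while Tokyo already shows 08:00 on 19 January, which is why Wu's corrected itinerary has him teleporting westward.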
After the publication of "Ringworld", many fans identified numerous engineering problems in the Ringworld as described in the novel. One major one was that the Ringworld, being a rigid structure, was not actually in orbit around the star it encircled and would eventually drift, ultimately colliding with its sun and disintegrating. This led MIT students attending the 1971 Worldcon to chant, "The Ringworld is unstable!" Niven wrote the 1980 sequel "The Ringworld Engineers" in part to address these engineering issues. In it, the ring is found to have a system of attitude jets atop the rim walls, but the Ringworld has become gravely endangered because most of the jets have been removed by the natives, to power their interstellar ships. (The natives had forgotten the original purpose of the jets.)
The second chapter misquotes the value of standard Earth gravity (about 9.8 m/s²). In chapter 8, the "Liar" is referred to as thrusting at .992 gee, while the Ringworld's surface gravity is later quoted with a slightly different figure.
The fifth chapter refers to Nereid as Neptune's largest moon; the planet's largest moon is Triton.
"Ringworld", or more formally, "Niven ring", has become a generic term for such a structure, which is an example of what science fiction fans call a "Big Dumb Object", or more formally a megastructure. Other science fiction authors have devised their own variants of Niven's Ringworld, notably Iain M. Banks' Culture Orbitals, best described as miniature Ringworlds, and the ring-shaped Halo structures of the video game series "Halo". Ringworlds are featured in several video games, such as Paradox Interactive's 4X grand strategy game "Stellaris", Blind Mind Studios' "Star Ruler" 2, and Malfador Machinations' "Space Empires" series.
In 1984, a role-playing game based on this setting was produced by Chaosium named "The Ringworld Roleplaying Game". Information from the RPG, along with notes composed by RPG author John Hewitt with Niven, was later used to form the "Bible" given to authors writing in the "Man-Kzin Wars" series. Niven himself recommended that Hewitt write one of the stories for the original two MKW books, although this never came to pass.
Tsunami Games released two adventure games based on "Ringworld": "Ringworld: Revenge of the Patriarch" was released in 1992 and "Return to Ringworld" in 1994. A third game, "Ringworld: Within ARM's Reach", was also planned, but never completed.
The video game franchise "Halo", created by Bungie and now handled by 343 Industries, took inspiration from the book in the creation and development of its story around the eponymous rings.
In 2017 Paradox Interactive added a DLC called Utopia to their game Stellaris, allowing the player to build a ringworld or other megastructures, such as Dyson spheres and habitat stations, or to restore the ruined ringworlds of an ancient precursor species.
The Massively multiplayer online game Shores of Hazeron by Software Engineering Inc. features abandoned ringworld structures for players to discover and colonize. Ringworlds in Shores of Hazeron are the creations of ancient precursor species not featured in the game. In addition to acting as centers for habitation, these rings feature functionalities such as self-sustaining power generation, the creation of interstellar wormholes, and simulated day-night cycles via orbital solar shields.
There have been many aborted attempts to adapt the novel to the screen.
In 2001, Larry Niven reported that a movie deal had been signed and was in the early planning stages.
In 2004, the Sci-Fi Channel reported that it was developing a "Ringworld" miniseries. The series never came to fruition.
In 2013, it was again announced by the channel, now rebranded as Syfy, that a miniseries of the novel was in development. This proposed 4-hour miniseries was being written by Michael R. Perry and would have been a co-production between MGM Television and Universal Cable Productions.
In 2017, Amazon announced that Ringworld was one of three science fiction series it was developing for its streaming service. MGM were again listed as a co-producer.
Tor Books and Seven Seas Entertainment published a two-part original English-language manga adaptation of "Ringworld", with the script written by Robert Mandell and the artwork by Sean Lam. "Ringworld: The Graphic Novel, Part One", covering the events of the novel up to the sunflower attack on Speaker, was released on July 8, 2014. "Part Two" was released on November 10, 2015.
Rudolf II, Holy Roman Emperor
Rudolf II (18 July 1552 – 20 January 1612) was Holy Roman Emperor (1576–1612), King of Hungary and Croatia (as Rudolf I, 1572–1608), King of Bohemia (1575–1608/1611) and Archduke of Austria (1576–1608). He was a member of the House of Habsburg.
Rudolf's legacy has traditionally been viewed in three ways: an ineffectual ruler whose mistakes led directly to the Thirty Years' War; a great and influential patron of Northern Mannerist art; and an intellectual devotee of occult arts and learning which helped seed what would be called the scientific revolution.
Rudolf was born in Vienna on 18 July 1552. He was the eldest son and successor of Maximilian II, Holy Roman Emperor, King of Bohemia, and King of Hungary and Croatia; his mother was Maria of Spain, a daughter of Charles V and Isabella of Portugal. He was the elder brother of Matthias who was to succeed him as king of Bohemia and Holy Roman Emperor.
Rudolf spent eight formative years, from age 11 to 19 (1563–1571), in Spain, at the court of his maternal uncle Philip II, together with his younger brother Ernest, future governor of the Low Countries. After his return to Vienna, his father was concerned about Rudolf's aloof and stiff manner, typical of the more conservative Spanish court, rather than the more relaxed and open Austrian court; but his Spanish mother saw in him courtliness and refinement. In the years following his return to Vienna, Rudolf was created King of Hungary (1572), King of Bohemia and King of the Romans (1575) when his father was still alive.
Rudolf would remain for the rest of his life reserved, secretive, and largely a recluse who did not like to travel or even partake in the daily affairs of state. He was more intrigued by occult learning such as astrology and alchemy, which was mainstream in the Renaissance period, and had a wide variety of personal hobbies such as horses, clocks, collecting rarities, and being a patron of the arts. He suffered from periodic bouts of "melancholy" (depression), which was common in the Habsburg line. These became worse with age, and were manifested by a withdrawal from the world and its affairs into his private interests.
Like his near-contemporary, Elizabeth I of England (who was born 19 years before he was), Rudolf dangled himself as a prize in a string of diplomatic negotiations for marriages, but never in fact married. During his periods of self-imposed isolation, Rudolf reportedly had affairs with his court chamberlain, Wolfgang von Rumpf, and a series of valets. One of these, Philip Lang, ruled him for years and was hated by those seeking favour with the emperor.
In addition, Rudolf was known to have had a succession of affairs with women, some of whom claimed to have been impregnated by him. He had several illegitimate children with his mistress Catherina Strada. Their eldest son, Don Julius Caesar d'Austria, was likely born between 1584 and 1586 and received an education and opportunities for political and social prominence from his father. In 1607, Rudolf sent Julius to live at Český Krumlov in Bohemia (in what is now the Czech Republic), a castle which Rudolf purchased from Peter Vok/Wok von Rosenberg, the last of the House of Rosenberg, after he fell into financial ruin. Julius lived at Český Krumlov when in 1608 he reportedly abused and murdered the daughter of a local barber, who had been living in the castle, and then disfigured her body. Rudolf condemned his son's act and suggested that he should be imprisoned for the rest of his life. However, Julius died in 1609 after showing signs of schizophrenia, refusing to bathe, and living in squalor; his death was apparently caused by an ulcer that ruptured.
Many artworks commissioned by Rudolf are unusually erotic. The emperor was the subject of a whispering campaign by his enemies in his family and the Catholic Church in the years before he was deposed. Sexual allegations may well have formed a part of the campaign against him.
Rudolf succeeded his father Maximilian II on 12 October 1576. In 1583 he moved the court to Prague.
Historians have traditionally blamed Rudolf's preoccupation with the arts, occult sciences, and other personal interests as the reason for the political disasters of his reign. More recently historians have re-evaluated this view and see his patronage of the arts and occult sciences as a triumph and key part of the Renaissance, while his political failures are seen as a legitimate attempt to create a unified Christian empire, which was undermined by the realities of religious, political and intellectual disintegrations of the time.
Although raised in his uncle's Catholic court in Spain, Rudolf was tolerant of Protestantism and other religions, including Judaism. He largely withdrew from Catholic observances, even in death refusing the last sacramental rites. He had little attachment to Protestants either, except as a counter-weight to papal policies. He put his primary support behind conciliarists, irenicists and humanists. When the papacy instigated the Counter-Reformation, using agents sent to his court, Rudolf backed those he thought most neutral in the debate, not taking a side or trying to effect restraint, thus leading to political chaos and threatening to provoke civil war.
His conflict with the Ottoman Empire was the final cause of his undoing. Unwilling to compromise with the Turks, and stubbornly determined that he could unify all of Christendom with a new Crusade, he started a long and indecisive war with the Turks in 1593. This war lasted till 1606, and was known as "The Long War". By 1604 his Hungarian subjects were exhausted by the war and revolted, led by Stephen Bocskay (Bocskai Uprising). In 1605 Rudolf was forced by his other family members to cede control of Hungarian affairs to his younger brother Archduke Matthias. By 1606 Matthias forged a difficult peace with the Hungarian rebels (Peace of Vienna) and the Turks (Peace of Zsitvatorok). Rudolf was angry with his brother's concessions, which he saw as giving away too much in order to further Matthias' hold on power. So Rudolf prepared to start a new war with the Turks. But Matthias rallied support from the disaffected Hungarians and forced Rudolf to cede the crowns of Hungary, Austria, and Moravia to him. At the same time, seeing a moment of royal weakness, Bohemian Protestants demanded greater religious liberty, which Rudolf granted in the "Letter of Majesty" in 1609. The Bohemians continued to press for further freedoms, and Rudolf used his army to repress them. The Bohemian Protestants then appealed to Matthias for help; Matthias' army then held Rudolf prisoner in his castle in Prague until 1611, when Rudolf ceded the crown of Bohemia to his brother.
Rudolf died in 1612, nine months after he had been stripped of all effective power by his younger brother, except the empty title of Holy Roman Emperor, to which Matthias was elected five months later. In May 1618 with the event known as the Defenestration of Prague, the Protestant Bohemians, in defence of the rights granted them in the "Letter of Majesty", threw imperial officials out of the window and thus the Thirty Years' War (1618–1648) started.
Rudolf moved the Habsburg capital from Vienna to Prague in 1583. Rudolf loved collecting paintings, and was often reported to sit and stare in rapture at a new work for hours on end. He spared no expense in acquiring great past masterworks, such as those of Dürer and Brueghel. He was also patron to some of the best contemporary artists, who mainly produced new works in the Northern Mannerist style, such as Bartholomeus Spranger, Hans von Aachen, Giambologna, Giuseppe Arcimboldo, Aegidius Sadeler, Roelant Savery, and Adrian de Vries, as well as commissioning works from Italians like Veronese. Rudolf's collections were the most impressive in the Europe of his day, and the greatest collection of Northern Mannerist art ever assembled. The adjective Rudolfine, as in "Rudolfine Mannerism" is often used in art history to describe the style of the art he patronized.
Rudolf's love of collecting went far beyond paintings and sculptures. He commissioned decorative objects of all kinds and in particular mechanical moving devices. Ceremonial swords and musical instruments, clocks, water works, astrolabes, compasses, telescopes and other scientific instruments, were all produced for him by some of the best craftsmen in Europe.
He patronized natural philosophers such as the botanist Charles de l'Ecluse, and the astronomers Tycho Brahe and Johannes Kepler both attended his court. Tycho Brahe developed the Rudolphine Tables (finished by Kepler, after Brahe's death), the first comprehensive table of data of the movements of the planets. As mentioned before, Rudolf also attracted some of the best scientific instrument makers of the time, such as Jost Bürgi, Erasmus Habermel and Hans Christoph Schissler. They had direct contact with the court astronomers and, through the financial support of the court, they were economically independent to develop scientific instruments and manufacturing techniques.
The poet Elizabeth Jane Weston, a writer of Renaissance Latin poetry, was also part of his court and wrote numerous odes to him.
Rudolf kept a menagerie of exotic animals, botanical gardens, and Europe's most extensive "cabinet of curiosities" ("Kunstkammer") incorporating "the three kingdoms of nature and the works of man". It was housed at Prague Castle, where between 1587 and 1605 he built the northern wing to house his growing collections. A lion and a tiger were allowed to roam the castle, documented by the account books which record compensation paid to survivors of attacks, or to family members of victims.
Rudolf was even alleged by one person to have owned the Voynich manuscript, a codex whose author and purpose, as well as the language, script and posited cipher, remain unidentified to this day. According to hearsay passed on in a letter written by Johannes Marcus Marci in 1665, Rudolf was said to have acquired the manuscript at some unspecified time for 600 gold ducats. No evidence in support of this single piece of hearsay has ever been discovered. The Codex Gigas was also in his possession.
As was typical of the time, Rudolf II had a portrait painted in the studio of the renowned Alonso Sanchez Coello. Completed in 1567, the portrait depicted Rudolf II at the age of 15. This painting can be seen at the Lobkowicz Palace in the Rozmberk room.
By 1597, the collection occupied three rooms of the incomplete northern wing. When building was completed in 1605, the collection was moved to the dedicated "Kunstkammer". "Naturalia" (minerals and gemstones) were arranged in a 37-cabinet display that had three vaulted chambers in front, each about 5.5 metres wide by 3 metres high and 60 metres long, connected to a main chamber 33 metres long. Large uncut gemstones were held in strong boxes.
Rudolf's "Kunstkammer" was not a typical "cabinet of curiosities" – a haphazard collection of unrelated specimens. Rather, the Rudolfine "Kunstkammer" was systematically arranged in an encyclopaedic fashion. In addition, Rudolf II employed his polyglot court physician, Anselmus Boetius de Boodt, to curate the collection. Anselmus was an avid mineral collector and travelled widely on collecting trips to the mining regions of Germany, Bohemia and Silesia, often accompanied by his Bohemian naturalist friend, Thaddaeus Hagecius. Between 1607 and 1611, Anselmus catalogued the "Kunstkammer", and in 1609 he published "Gemmarum et Lapidum", one of the finest mineralogical treatises of the 17th century.
As was customary at the time, the collection was private, but friends of the Emperor, artists, and professional scholars were allowed to study it. The collection became an invaluable research tool during the flowering of 17th-century European philosophy, the "Age of Reason".
Rudolf's successors did not appreciate the collection and the "Kunstkammer" gradually fell into disarray. Some 50 years after its establishment, most of the collection was packed into wooden crates and moved to Vienna. The collection remaining at Prague was looted during the last year of the Thirty Years War, by Swedish troops who sacked Prague Castle on 26 July 1648, also taking the best of the paintings, many of which later passed to the Orléans Collection after the death of Christina of Sweden. In 1782, the remainder of the collection was sold piecemeal to private parties by Joseph II. One of the surviving items from the "Kunstkammer" is a "fine chair" looted by the Swedes in 1648 and now owned by the Earl of Radnor at Longford Castle in England; others survive in museums.
Astrology and alchemy were regarded as mainstream scientific fields in Renaissance Prague, and Rudolf was a firm devotee of both. His lifelong quest was to find the Philosopher's Stone, and Rudolf spared no expense in bringing Europe's best alchemists to court, such as Edward Kelley and John Dee. Rudolf even performed his own experiments in a private alchemy laboratory. When Rudolf was a prince, Nostradamus prepared a horoscope that was dedicated to him as 'Prince and King'. In the 1590s, Sendivogius was active at Rudolf's court.
Rudolf gave Prague a mystical reputation that persists in part to this day, with Alchemists' Alley on the grounds of Prague Castle a popular visiting place and tourist attraction.
Rudolf is also the ruler in many of the legends of the Golem of Prague, either because of, or simply adding to, his occult reputation.
Robert Anton Wilson
Robert Anton Wilson (born Robert Edward Wilson; January 18, 1932 – January 11, 2007) was an American author, novelist, essayist, editor, playwright, poet, futurist, and self-described agnostic mystic. Recognized by Discordianism as an Episkopos, Pope, and saint, Wilson helped publicize the group through his writings and interviews.
Wilson described his work as an "attempt to break down conditioned associations, to look at the world in a new way, with many models recognized as models or maps, and no one model elevated to the truth". His goal was "to try to get people into a state of generalized agnosticism, not agnosticism about God alone but agnosticism about everything."
Wilson was a major figure in the counterculture, comparable to one of his coauthors, Timothy Leary, as well as Terence McKenna and others.
Born Robert Edward Wilson in Methodist Hospital, in Brooklyn, New York, he spent his first years in Flatbush, and moved with his family to lower-middle-class Gerritsen Beach around the age of four or five, where they stayed until relocating to the steadfastly middle-class neighborhood of Bay Ridge when Wilson was thirteen. He suffered from polio as a child, and found generally effective treatment with the Kenny Method (created by Elizabeth Kenny), which the American Medical Association repudiated at that time. Polio's effects remained with Wilson throughout his life, usually manifesting as minor muscle spasms that caused him to use a cane occasionally until 2000, when he experienced a major bout of post-polio syndrome that would continue until his death.
Wilson attended Catholic grammar school, likely the school associated with Gerritsen Beach's Resurrection Church, and attended Brooklyn Technical High School (a selective public institution) to remove himself from the Catholic influence; at "Brooklyn Tech," Wilson was influenced by literary modernism (particularly Ezra Pound and James Joyce), the Western philosophical tradition, then-innovative historians such as Charles A. Beard, science fiction (including the works of Olaf Stapledon, Robert A. Heinlein and Theodore Sturgeon) and Alfred Korzybski's interdisciplinary theory of general semantics. He would later recall that the family was "living so well ... compared to the Depression" during this period "that I imagined we were lace-curtain Irish at last."
Following his graduation in 1950, Wilson was employed in a succession of jobs (including ambulance driver, engineering aide, salesman and medical orderly) and absorbed various philosophers and cultural practices (including bebop, psychoanalysis, Bertrand Russell, Carl Jung, Wilhelm Reich, Leon Trotsky and Ayn Rand, whom he later repudiated) while writing in his spare time. He studied electrical engineering and mathematics at the Brooklyn Polytechnic Institute from 1952 to 1957 and English education at New York University from 1957 to 1958 but failed to take a degree from either institution.
After smoking marijuana for nearly a decade, he first experimented with mescaline in Yellow Springs, Ohio on December 28, 1961. Wilson began to work as a freelance journalist and advertising copywriter in the late 1950s. He adopted his maternal grandfather's name, Anton, for his writings, telling himself that he would save the "Edward" for when he wrote the Great American Novel and later finding that "Robert Anton Wilson" had become an established identity.
He assumed co-editorship of the School for Living's Brookville, Ohio-based "Balanced Living" magazine in 1962 and briefly returned to New York as associate editor of Ralph Ginzburg's quarterly "fact:" before leaving for "Playboy", where he served as an associate editor from 1965 to 1971. According to Wilson, "Playboy" "paid me a higher salary than any other magazine at which I had worked and never expected me to become a conformist or sell my soul in return. I enjoyed my years in the Bunny Empire. I only resigned when I reached 40 and felt I could not live with myself if I didn't make an effort to write full-time at last." Along with frequent collaborator Robert Shea, Wilson edited the magazine's "Playboy" Forum, a letters section consisting of responses to the "Playboy" Philosophy editorial column. During this period, he covered Timothy Leary and Richard Alpert's Millbrook, New York-based Castalia Foundation at the instigation of Alan Watts in "The Realist", cultivated important friendships with William S. Burroughs and Allen Ginsberg, and lectured at the Free University of New York on 'Anarchist and Synergetic Politics' in 1965.
Royal Navy
The Royal Navy (RN) is the United Kingdom's naval warfare force. Although warships were used by the English kings from the early medieval period, the first major maritime engagements were fought in the Hundred Years' War against the Kingdom of France. The modern Royal Navy traces its origins to the early 16th century; the oldest of the UK's armed services, it is known as the Senior Service.
From the middle decades of the 17th century, and through the 18th century, the Royal Navy vied with the Dutch Navy and later with the French Navy for maritime supremacy. From the mid 18th century, it was the world's most powerful navy until the Second World War. The Royal Navy played a key part in establishing the British Empire as the unmatched world power during the 19th and first part of the 20th centuries. Due to this historical prominence, it is common, even among non-Britons, to refer to it as "the Royal Navy" without qualification.
Following World War I, the Royal Navy was significantly reduced in size, although at the onset of World War II it was still the world's largest. During the Cold War, the Royal Navy transformed into a primarily anti-submarine force, hunting for Soviet submarines and mostly active in the GIUK gap. Following the collapse of the Soviet Union, its focus has returned to expeditionary operations around the world and it remains one of the world's foremost blue-water navies. However, 21st-century reductions in naval spending have led to a personnel shortage and a reduction in the number of warships.
The Royal Navy maintains a fleet of technologically sophisticated ships, submarines, and aircraft, including two aircraft carriers, two amphibious transport docks, four ballistic missile submarines (which maintain the UK's nuclear deterrent), seven nuclear fleet submarines, six guided missile destroyers, 13 frigates, 13 mine-countermeasure vessels and 23 patrol vessels. As of April 2020, there are 77 commissioned ships (including submarines) in the Royal Navy, plus 13 ships of the Royal Fleet Auxiliary (RFA); there are also five Merchant Navy ships available to the RFA under a private finance initiative. The RFA replenishes Royal Navy warships at sea, and augments the Royal Navy's amphibious warfare capabilities through its three vessels. It also works as a force multiplier for the Royal Navy, often doing patrols that frigates used to do. The total displacement of the Royal Navy is approximately 448,600 tonnes (824,600 tonnes including the Royal Fleet Auxiliary).
The Royal Navy is part of Her Majesty's Naval Service, which also includes the Royal Marines. The professional head of the Naval Service is the First Sea Lord who is an admiral and member of the Defence Council of the United Kingdom. The Defence Council delegates management of the Naval Service to the Admiralty Board, chaired by the Secretary of State for Defence. The Royal Navy operates from three bases in the United Kingdom where commissioned ships and submarines are based: Portsmouth, Clyde and Devonport, the last being the largest operational naval base in Western Europe, as well as two naval air stations, RNAS Yeovilton and RNAS Culdrose where maritime aircraft are based.
As the seaborne branch of HM Armed Forces, the RN has various roles. As it stands today, the RN defines six major roles for itself in umbrella terms.
The strength of the fleet of the Kingdom of England was an important element in the kingdom's power in the 10th century. At one point Aethelred II had an especially large fleet built by a national levy of one ship for every 310 hides of land, but it is uncertain whether this was a standard or exceptional model for raising fleets. During the period of Danish rule in the 11th century, the authorities maintained a standing fleet by taxation, and this continued for a time under the restored English regime of Edward the Confessor (reigned 1042–1066), who frequently commanded fleets in person.
English naval power seemingly declined as a result of the Norman conquest. Following the Battle of Hastings, the Norman navy that brought over William the Conqueror seemingly disappeared from records, possibly because William had received all of those ships through feudal obligations or through some sort of leasing agreement which lasted only for the duration of the enterprise. There is no evidence that William adopted or kept the Anglo-Saxon ship-mustering system, known as the "scipfryd". Hardly noted after 1066, it appears that the Normans let the "scipfryd" languish, so that by 1086, when the Domesday Book was completed, it had apparently ceased to exist.
According to the "Anglo-Saxon Chronicle", in 1068, Harold Godwinson's sons Godwine and Edmund conducted a "raiding-ship army" which came from Ireland, raiding across the region and to the townships of Bristol and Somerset. In the following year of 1069, they returned with a bigger fleet which they sailed up the River Taw before being beaten back by a local earl near Devon. However, this made explicitly clear that the newly conquered England under Norman rule, in effect, ceded the Irish Sea to the Irish, the Vikings of Dublin, and other Norwegians. Besides ceding away the Irish Sea, the Normans also ceded the North Sea, a major area where Nordic peoples travelled. In 1069, this lack of naval presence in the North Sea allowed for the invasion and ravaging of England by Jarl Osborn (brother of King Svein Estridsson) and his sons Harald, Cnut, and Bjorn. In addition to the ravaging of the English townships of Dover, Sandwich, Ipswich, and Norwich, the Danes connected with the "aetheling" (Anglo-Saxon Heir to the crown) Edgar and rebels in Northumbria. William chased Edgar and the rebels to Scotland, but could not defeat the Danes, causing him to resort to the old Anglo-Saxon practice of paying them off.
Though William the Conqueror caused a massive decline in English naval practices, he did occasionally assemble small fleets of ships, but only for limited activities, most of which did not involve direct combat at sea. An example of this came when the rebellious Anglo-Saxon Earl Morcar and his ally Bishop Æthelwine of Durham sought refuge on the Isle of Ely in 1071. As Florence of Worcester reported, "The king [William the Conqueror] hearing of this, blocked up every outlet on the eastern side by means of boatmen [butescarls], and caused a bridge two miles long to be constructed on the western side." The "Anglo-Saxon Chronicle" also confirms these events. Though William used ships for blockading purposes and for important strategic engagements, his infrequent use of an established navy set a damaging precedent of sporadic maritime operations, which his successors would follow.
Medieval fleets, in England as elsewhere, were almost entirely composed of merchant ships enlisted into naval service in time of war. From time to time a few "king's ships" owned by the monarch were built for specifically warlike purposes; but, unlike some European states, England did not maintain a small permanent core of warships in peacetime. England's naval organisation was haphazard and the mobilization of fleets when war broke out was slow.
When King John's campaign to recover Normandy from the French reached a breaking point, the northern barons of England began to rise in revolt. Forced by the insurrection, John signed the Magna Carta on 15 June 1215, in hopes of satisfying the barons and buying time for Pope Innocent III to excommunicate the rebellious barons and condemn the Magna Carta. From this, the barons revolted, commencing the First Barons' War with the capture of Rochester Castle. Grasping, however, that they were outmatched by John and the royalists, the barons decided to turn to France for assistance. Realizing the barons' intentions, John attempted to assemble a navy to prevent the arrival of the French. France, which saw this as a fortunate opportunity, decided to assist the barons, with Philip II's (King of France) son Dauphin Louis, later known as Louis VIII of France, to invade England. With John unable to swiftly build up his navy, owing to the habit of infrequent maritime operations inherited from William the Conqueror, the French navy under Louis invaded and landed at Sandwich unopposed in April 1216. With Louis near London, John fled to Winchester, where he stayed until his death on 19 October 1216, leaving his nine-year-old son Henry III as heir to the throne.
Paradoxically, John's death turned the tide against Louis and the rebellion in England, and in favour of the development of the English navy. William Marshal, 1st Earl of Pembroke, who became regent to the son of the recently deceased English king, began to win back loyalties to the royalist cause through a regimen of compromise. Among those won over were the Cinque Ports, which could supply substantial numbers of maritime ships. With the English now having a substantial number of vessels, Louis returned to France to gather reinforcements and more maritime vessels. Though he succeeded, English vessels began to blockade and harass French shipping and trade, and blockaded multiple French-controlled English ports.
By mid-1217, the English royalists began to gain the advantage over the rebellious barons and their French allies. Needing more troops again, Louis asked his wife, Blanche of Castile, to assemble reinforcements for him. Blanche was up to the task, and a massive French force was assembled by August 1217 at the port of Calais. At the head of the French transports was Eustace the Monk, Louis' best naval commander, who had helped Louis escape many English blockades, such as the one at Winchelsea in January 1217.
The subsequent encounter between the two fleets came in the Downs off Sandwich and became known as the Battle of Sandwich. For the first time in northern waters a decisive naval battle was fought on the open sea. The battle was dominated by the English, with the French losing almost all of their ships and Eustace the Monk being killed in the action. With Eustace dead and the French defeated at Sandwich, William Marshal was able to isolate Louis in London, compelling him to renounce his claim to the English throne and return to France.
There are mentions in medieval records of fleets commanded by Scottish kings, including William the Lion and Alexander II. The latter took personal command of a large naval force which sailed from the Firth of Clyde and anchored off the island of Kerrera in 1249, intending to transport his army in a campaign against the Kingdom of the Isles, but he died before the campaign could begin. Viking naval power was disrupted by conflicts between the Scandinavian kingdoms, but entered a period of resurgence in the 13th century when Norwegian kings began to build some of the largest ships seen in Northern European waters. These included King Hakon Hakonsson's "Kristsúðin", built at Bergen in 1262–63, a vessel of 37 rooms. In 1263 Hakon responded to Alexander III's designs on the Hebrides by personally leading a major fleet of forty vessels, including the "Kristsúðin", to the islands, where they were boosted by local allies to as many as 200 ships. Records indicate that Alexander had several large oared ships built at Ayr, but he avoided a sea battle. Defeat on land at the Battle of Largs and winter storms forced the Norwegian fleet to return home, leaving the Scottish crown as the major power in the region and leading to the ceding of the Western Isles to Alexander in 1266.
English naval power was vital to King Edward I's successful campaigns in Scotland from 1296, using largely merchant ships from England, Ireland and his allies in the Islands to transport and supply his armies. Part of the reason for Robert I's success was his ability to call on naval forces from the Islands. As a result of the expulsion of the Flemings from England in 1303, he gained the support of a major naval power in the North Sea. The development of naval power allowed Robert to defeat English attempts to capture him in the Highlands and Islands and to blockade major English-controlled fortresses at Perth and Stirling, the last forcing King Edward II to attempt the relief effort that resulted in the English defeat at Bannockburn in 1314. Scottish naval forces allowed invasions of the Isle of Man in 1313 and 1317 and of Ireland in 1315. They were also crucial in the blockade of Berwick, which led to its fall in 1318.
With the Viking era at an end, and conflict with France largely confined to the French lands of the English monarchy, England faced little threat from the sea during the 12th and 13th centuries, but in the 14th century the outbreak of the Hundred Years' War dramatically increased the French menace. Early in the war, French plans for an invasion of England failed when Edward III of England destroyed the French fleet in the Battle of Sluys in 1340. Major fighting was thereafter confined to French soil, and England's naval capabilities sufficed to transport armies and supplies safely to their continental destinations. However, while subsequent French invasion schemes came to nothing, England's naval forces could not prevent frequent raids on the south-coast ports by the French and their Genoese and Castilian allies. Such raids finally halted only with the occupation of northern France by Henry V.
Henry VII deserves a large share of the credit for fostering English sea power. He embarked on a programme of building merchant ships larger than any before, invested in dockyards, and in 1495 commissioned at Portsmouth the oldest dry dock still surviving.
After the establishment of Scottish independence, King Robert I turned his attention to building up a Scottish naval capacity. This was largely focused on the west coast, with the Exchequer Rolls of 1326 recording the feudal duties of his vassals in that region to aid him with their vessels and crews. Towards the end of his reign, he supervised the building of at least one royal man-of-war near his palace at Cardross on the River Clyde. In the late 14th century, naval warfare with England was conducted largely by hired Scots, Flemish and French merchantmen and privateers. King James I of Scotland (1394–1437, reigned 1406–1437) took a greater interest in naval power. After his return to Scotland in 1424, he established a shipbuilding yard at Leith, a house for marine stores, and a workshop. King's ships were built and equipped there to be used for trade as well as war, one of which accompanied him on his expedition to the Islands in 1429. The office of Lord High Admiral was probably founded in this period. It would soon become a hereditary office, in the control of the Earls of Bothwell in the 15th and 16th centuries and the Earls of Lennox in the 17th century.
King James II (1430–1460, reigned 1437–1460) is known to have purchased a caravel by 1449. Around 1476 the Scottish merchant John Barton received letters of marque that allowed him to gain compensation for the capture of his vessels by the Portuguese by seizing ships sailing under Portuguese colours. These letters would be reissued to his three sons John, Andrew and Robert, who would play a major part in the Scottish naval effort into the 16th century. In his struggles with his nobles in 1488, James III (reigned 1460–88) received assistance from his two warships, the "Flower" and the "King's Carvel", also known as the "Yellow Carvel", commanded by Andrew Wood of Largo. After the king's death, Wood served his son James IV (reigned 1488–1513), defeating an English incursion into the Forth by five English ships in 1489 and three more heavily armed English ships off the mouth of the River Tay the next year.
A standing "Navy Royal", with its own secretariat, dockyards and a permanent core of purpose-built warships, emerged during the reign of Henry VIII. Under Elizabeth I England became involved in a war with Spain, which saw privately owned vessels combining with the Queen's ships in highly profitable raids against Spanish commerce and colonies.
In 1588, Philip II of Spain sent the Spanish Armada against England to end English support for Dutch rebels, to stop English corsair activity and to depose the Protestant Elizabeth I and restore Catholicism to England. Preparations, under the command of the Marqués de Santa Cruz, began in 1586 but were seriously delayed by a surprise attack on Cádiz by Sir Francis Drake in 1587. By the time the expedition was ready, Santa Cruz had died, and command was given to the Duke of Medina Sidonia. The Armada consisted of 130 ships, including transports and merchantmen, and carried about 30,000 men. It was to sail to Flanders and from there convoy the army of the Duke of Parma to invade England. It set out from Lisbon in May 1588 but was forced into A Coruña by storms and did not set sail again until July.
The Armada was first sighted by the English off Lizard Point, in Cornwall, on 19 July, and the first engagement took place off Plymouth on 21 July. In four hours the Spanish fired 720 round shot and the English 2,000 rounds, but little real damage was done to either side. In fighting off Portland Bill on 23 July, some 5,000 shots were discharged by the rival fleets. Spanish casualties were about 50 killed and 70 wounded. After another engagement off the Isle of Wight on 24 July, in which the Armada lost another 50 men slain, Medina Sidonia steered for Calais to replenish his empty powder and shot stocks from Parma's ammunition depots. Parma, however, blockaded in Bruges by 60 Dutch ships, was unable to come to the Armada's assistance. After an indecisive engagement with the English off Gravelines, the Armada ran out of ammunition. The Spanish had expended 125,000 cannonballs against the English. Consequently, the Spanish commander decided to retreat to Spain by going north around Scotland and Ireland. The Spanish ships were dispersed by storms; their provisions gave out, and many of those who landed in Ireland were killed by English troops. Only about half the fleet reached home. An English Armada sent to destroy the port at A Coruña in 1589 was itself defeated with 40 ships sunk and 15,000 men lost. In October 1596, another Armada left Lisbon. The invasion fleet numbered 126 ships and carried 9,000 Spaniards and 3,000 Portuguese. The Royal Navy was unprepared, but England was saved by stormy seas that wrecked 72 ships and drowned 3,000 sailors and soldiers. The following year, in October 1597, yet another Armada was sent out, but this also was blown back.
During the early 17th century, England's relative naval power deteriorated, and there were increasing raids by Barbary corsairs on ships and English coastal communities to capture people as slaves, which the Navy had little success in countering. Charles I undertook a major programme of warship building, creating a small force of powerful ships, but his methods of fundraising to finance the fleet contributed to the outbreak of the English Civil War.
In the wake of the civil war and the abolition of the monarchy, the Commonwealth of England (as a republic) officially removed or changed most names and symbols (including heraldry) associated with royalty and the high church. This affected the Commonwealth Navy. As early as 1646, vessels were renamed, including "Liberty" (previously "Charles"), "Resolution" (ex-"Royal Prince") and "George" (ex-"St George"); new vessels were often given names associated with institutions or individual officials, including "President", "Speaker", "Fairfax" (after Thomas Fairfax), "Monck" (George Monck) and "Richard" (Richard Cromwell), or with Parliamentary victories in the civil war, such as "Worcester", "Bristol", "Gainsborough", "Preston", "Langport", "Newbury", "Marston Moor", "Nantwich", "Colchester", and "Naseby". (The prefix "English ship" has normally been used of naval vessels before the late 17th century; "His Majesty's Ship" was not official usage at the time.) The new regime, isolated and threatened from all sides, dramatically expanded the Commonwealth Navy, which became the most powerful in the world. The Commonwealth's introduction of the Navigation Acts, providing that all merchant shipping to and from England or her colonies should be carried in English ships, led to war with the Dutch Republic. In the early stages of this First Anglo-Dutch War (1652–1654), the superiority of the large, heavily armed English ships was offset by superior Dutch tactical organisation, and the fighting was inconclusive. English tactical improvements resulted in a series of crushing victories in 1653 at Portland, the Gabbard and Scheveningen, bringing peace on favourable terms. This was the first war fought largely, on the English side, by purpose-built, state-owned warships. It was followed by a war with Spain, which saw the English conquest of Jamaica in 1655 and successful attacks on Spanish treasure fleets in 1656 and 1657, but also the devastation of English merchant shipping by the privateers of Dunkirk, until their home port was captured by Anglo-French forces in 1658.
The Restoration of the English monarchy occurred in May 1660, and Charles II assumed the throne. One of his first acts was to officially name the fleet the Royal Navy; the prefix "HMS" was also officially attached to its vessels for the first time. Nevertheless, the navy remained a national institution, rather than the personal possession of the reigning monarch as it had been before the civil war.
As a result of their defeat in the First Anglo-Dutch War, the Dutch transformed their navy, largely abandoning the use of militarised merchantmen and establishing a fleet composed mainly of heavily armed, purpose-built warships, as the English had done previously. Consequently, the Second Anglo-Dutch War (1665–1667) was a closely fought struggle between evenly matched opponents, with English victory at Lowestoft (1665) countered by Dutch triumph in the epic Four Days' Battle (1666). The deadlock was broken not by combat but by the superiority of Dutch public finance, as in 1667 Charles II was forced to lay up the fleet in port for lack of money to keep it at sea while negotiating for peace. Disaster followed as the Dutch fleet mounted the Raid on the Medway, breaking into Chatham Dockyard and capturing or burning many of the Navy's largest ships at their moorings. In the Third Anglo-Dutch War (1672–1674), Charles II allied with Louis XIV of France against the Dutch, but the combined Anglo-French fleet was fought to a standstill in a series of inconclusive battles, while the French invasion by land was warded off.
During the 1670s and 1680s, the English Royal Navy succeeded in permanently ending the threat to English shipping from the Barbary corsairs, inflicting defeats which induced the Barbary states to conclude long-lasting peace treaties. Following the Glorious Revolution of 1688, England joined the European coalition against Louis XIV in the War of the Grand Alliance (1688–1697). Louis' recent shipbuilding programme had given France the largest navy in Europe. A combined Anglo-Dutch fleet was defeated at Beachy Head (1690), but victory at Barfleur-La Hogue (1692) was a turning-point, marking the end of France's brief pre-eminence at sea and the beginning of an enduring English, later British, supremacy.
In the course of the 17th century, the English Royal Navy completed the transition from a semi-amateur Navy Royal fighting in conjunction with private vessels into a fully professional institution. Its financial provisions were gradually regularised, it came to rely on dedicated warships only, and it developed a professional officer corps with a defined career structure, superseding an earlier mix of "gentlemen" (upper-class soldiers) and "tarpaulins" (professional seamen, who generally served on merchant or fishing vessels in peacetime).
James IV put the Royal Scots Navy on a new footing, founding a harbour at Newhaven in May 1504, and two years later ordering the construction of a dockyard at the Pools of Airth. The upper reaches of the Forth were protected by new fortifications on Inchgarvie. Scottish ships had some success against privateers, accompanied the king on his expeditions in the islands and intervened in conflicts in Scandinavia and the Baltic Sea. Expeditions to the Highlands and Islands to curb the power of the MacDonald Lord of the Isles were largely ineffective until in 1504 the king accompanied a squadron under Wood heavily armed with artillery, which battered the MacDonald strongholds into submission. Since some of these island fortresses could only be attacked from seaward, naval historian N. A. M. Rodger has suggested this may have marked the end of medieval naval warfare in the British Isles, ushering in a new tradition of artillery warfare. The king acquired a total of 38 ships for the Royal Scottish Navy, including the "Margaret" and the carrack "Michael" or "Great Michael". The latter, built at great expense at Newhaven and launched in 1511, weighed 1,000 tons, carried 24 cannon and was, at that time, the largest ship in Europe. It marked a shift in design, as it was crafted specifically to carry a main armament of heavy artillery.
During the Rough Wooing, the attempt to force a marriage between James V's heir Mary, Queen of Scots, and Henry VIII's son, the future Edward VI, the ships "Mary Willoughby", "Lion" and "Salamander", under the command of John Barton, son of Robert Barton, attacked merchants and fishermen off Whitby in 1542. They later blockaded a London merchant ship called "Antony of Bruges" in a creek on the coast of Brittany. In 1544, Edinburgh was attacked by an English marine force and burnt. "Salamander" and the Scottish-built "Unicorn" were captured at Leith. The Scots still had two royal naval vessels and numerous smaller private vessels.
When, as a result of a series of international treaties, Charles V declared war upon Scotland in 1544, the Scots were able to engage in a highly profitable campaign of privateering that lasted six years, the gains of which probably outweighed the losses in trade with the Low Countries.
The Scots operated in the West Indies from the 1540s, joining the French in the capture of Burburuta in 1567. English and Scottish naval warfare and privateering broke out sporadically in the 1550s. When Anglo-Scottish relations deteriorated again in 1557, as part of a wider war between Spain and France, small ships called 'shallops' were noted passing between Leith and France, posing as fishermen but carrying munitions and money. Private merchant ships were rigged at Leith, Aberdeen and Dundee as men-of-war, and the regent Mary of Guise claimed English prizes, one of over 200 tons, for her fleet. The re-fitted "Mary Willoughby" sailed with 11 other ships against Scotland in August 1557, landing troops and six field guns on Orkney to attack Kirkwall Castle, St Magnus Cathedral and the Bishop's Palace. The English were repulsed by a Scottish force numbering 3,000, and the English vice-admiral Sir John Clere of Ormesby was killed, but none of the English ships were lost.
After the Union of the Crowns in 1603, conflict between Scotland and England ended, but Scotland found itself involved in England's foreign policy, opening up Scottish shipping to attack. In the 1620s, Scotland found herself fighting a naval war as England's ally, first against Spain and then also against France, while simultaneously embroiled in undeclared North Sea commitments in the Danish intervention in the Thirty Years' War. In 1626, a squadron of three ships was bought and equipped, at a cost of at least £5,200 sterling, to guard against privateers operating out of Spanish-controlled Dunkirk, and other ships were armed in preparation for potential action. The acting High Admiral, John Gordon of Lochinvar, organised as many as three marque fleets of privateers. It was probably one of Lochinvar's marque fleets that was sent to support the English Royal Navy in defending Irish waters in 1626. In 1627, the Royal Scots Navy and accompanying contingents of burgh privateers participated in the major expedition to Biscay. The Scots also returned to the West Indies, with Lochinvar taking French prizes and founding the colony of Charles Island. In 1629, two squadrons of privateers led by Lochinvar and William, Lord Alexander, sailed for Canada, taking part in the campaign that resulted in the capture of Quebec from the French, which was handed back after the subsequent peace.
By 1697 the English Royal Navy had 323 warships, while Scotland was still dependent on merchantmen and privateers. In the 1690s, two separate schemes for larger naval forces were put in motion. As usual, the larger part was played by the merchant community rather than the government. The first was the Darien Scheme to found a Scottish colony in Spanish-controlled America. It was undertaken by the Company of Scotland, which created a fleet of five ships, including "Caledonia" and "St. Andrew", all built or chartered in Holland and Hamburg. It sailed to the Isthmus of Darien in 1698, but the venture failed and only one ship returned to Scotland. In the same period, it was decided to establish a professional navy for the protection of commerce in home waters during the Nine Years' War (1688–1697) with France, and three purpose-built warships were bought from English shipbuilders in 1696. These were "Royal William", a 32-gun fifth rate, and two smaller ships, "Royal Mary" and "Dumbarton Castle", each of 24 guns, generally described as frigates.
The Acts of Union, which created the Kingdom of Great Britain in 1707, established the Royal Navy of the newly united kingdom. The Scots office of Lord High Admiral was subsumed within the office of the Admiral of Great Britain. The three vessels of the small Royal Scottish Navy were transferred to the Royal Navy.
Throughout the 18th and 19th centuries, the Royal Navy was the largest maritime force in the world, but until 1805 combinations of enemies repeatedly matched or exceeded its forces in numbers. Despite this, it was able to maintain an almost uninterrupted ascendancy over its rivals through superiority in financing, tactics, training, organisation, social cohesion, hygiene, dockyard facilities, logistical support and (from the middle of the 18th century) warship design and construction.
During the War of the Spanish Succession (1702–1714), the Navy operated in conjunction with the Dutch against the navies of France and Spain, in support of the efforts of Britain's Austrian Habsburg allies to seize control of Spain and its Mediterranean dependencies from the Bourbons. Amphibious operations by the Anglo-Dutch fleet brought about the capture of Sardinia, the Balearic Islands and a number of Spanish mainland ports, most importantly Barcelona. While most of these gains were turned over to the Habsburgs, Britain held on to Gibraltar and Menorca, which were retained in the peace settlement, providing the Navy with Mediterranean bases. Early in the war French naval squadrons had done considerable damage to English and Dutch commercial convoys. However, a major victory over France and Spain at Vigo Bay (1702), further successes in battle, and the scuttling of the entire French Mediterranean fleet at Toulon in 1707 virtually cleared the Navy's opponents from the seas for the latter part of the war. Naval operations also enabled the conquest of the French colonies in Nova Scotia and Newfoundland. Further conflict with Spain followed in the War of the Quadruple Alliance (1718–1720), in which the Navy helped thwart a Spanish attempt to regain Sicily and Sardinia from Austria and Savoy, defeating a Spanish fleet at Cape Passaro (1718), and in an undeclared war in the 1720s, in which Spain tried to retake Gibraltar and Menorca.
After a period of relative peace, the Navy became engaged in the War of Jenkins' Ear (1739–1748) against Spain, which was dominated by a series of costly and mostly unsuccessful attacks on Spanish ports in the Caribbean, most notably a huge expedition against Cartagena de Indias in 1741. These led to heavy loss of life from tropical diseases. In 1742 the Kingdom of the Two Sicilies was driven to withdraw from the war in the space of half an hour by the threat of a bombardment of its capital, Naples, by a small British squadron. The war became subsumed in the wider War of the Austrian Succession (1744–1748), once again pitting Britain against France. Naval fighting in this war, which for the first time included major operations in the Indian Ocean, was largely inconclusive, the most significant event being the failure of an attempted French invasion of England in 1744.
Total naval losses in the War of the Austrian Succession, including ships lost in storms and in shipwrecks were: France—20 ships-of-the-line, 16 frigates, 20 smaller ships, 2,185 merchantmen, 1,738 guns; Spain—17 ships-of-the-line, 7 frigates, 1,249 merchantmen, 1,276 guns; Britain—14 ships-of-the-line, 7 frigates, 28 smaller ships, 3,238 merchantmen, 1,012 guns. Personnel losses at sea were about 12,000 killed, wounded, or taken prisoner for France, 11,000 for Spain, and 7,000 for Britain.
The subsequent Seven Years' War (1756–1763) saw the Navy conduct amphibious campaigns leading to the conquest of New France, of French colonies in the Caribbean and West Africa, and of small islands off the French coast, while operations in the Indian Ocean contributed to the destruction of French power in India. A new French attempt to invade Britain was thwarted by the defeat of their escort fleet in the extraordinary Battle of Quiberon Bay in 1759, fought in a gale on a dangerous lee shore. Once again the British fleet effectively eliminated the French Navy from the war, leading France to abandon major operations. In 1762 the resumption of hostilities with Spain led to the British capture of Manila and of Havana, along with a Spanish fleet sheltering there.
Naval losses of the Seven Years' War testify to the extent of the British victory. France lost 20 of her ships-of-the-line captured and 25 sunk, burned, destroyed, or lost in storms. The French navy also lost 25 frigates captured and 17 destroyed, and suffered casualties of 20,000 killed, drowned, or missing, as well as another 20,000 wounded or captured. Spain lost 12 ships-of-the-line captured or destroyed, 4 frigates, and 10,000 seamen killed, wounded, or captured. The Royal Navy lost 2 ships-of-the-line captured, 17 sunk or destroyed by either battle or storm, 3 frigates captured and 14 sunk, but added 40 ships-of-the-line during the course of the war. British crews suffered 20,000 casualties, including POWs. Actual naval combat deaths for Britain were only 1,500, but the figure of 133,708 is given for those who died of sickness or deserted.
In the American War of Independence (1775–1783) the Royal Navy readily obliterated the small Continental Navy of frigates fielded by the rebel colonists, but the entry of France, Spain and the Netherlands into the war against Britain produced a combination of opposing forces which deprived the Navy of its position of superiority for the first time since the 1690s, briefly but decisively. The war saw a series of inconclusive battles in the Atlantic and Caribbean, in which the Navy failed to achieve the decisive victories needed to secure the supply lines of British forces in North America and to cut off the colonial rebels from outside support. The most important operation of the war came in 1781 when, in the Battle of the Chesapeake, the British fleet failed to lift the French blockade of Lord Cornwallis's army, resulting in Cornwallis's surrender at Yorktown. Although this disaster effectively concluded the fighting in North America, hostilities continued in the Indian Ocean, where the French were prevented from re-establishing a meaningful foothold in India, and in the Caribbean. British victory in the Caribbean in the Battle of the Saintes in 1782 and the relief of Gibraltar later the same year symbolised the restoration of British naval ascendancy, but this came too late to prevent the independence of the rebellious Thirteen Colonies.
The eradication of scurvy from the Royal Navy in the 1790s came about due to the efforts of Gilbert Blane, chairman of the Navy's Sick and Hurt Board, which ordered fresh lemon juice to be given to sailors on ships. Other navies soon adopted this successful solution.
The French Revolutionary Wars (1793–1801) and Napoleonic Wars (1803–1814 and 1815) saw the Royal Navy reach a peak of efficiency, dominating the navies of all Britain's adversaries, which spent most of the war blockaded in port. The Navy achieved an emphatic early victory at the Glorious First of June (1794) and gained a number of smaller victories while supporting abortive French Royalist efforts to regain control of France. In the course of one such operation, the majority of the French Mediterranean fleet was captured or destroyed during a short-lived occupation of Toulon in 1793. The military successes of the French Revolutionary régime brought the Spanish and Dutch navies into the war on the French side, but the losses inflicted on the Dutch at the Battle of Camperdown in 1797 and the surrender of their surviving fleet to a landing force at Den Helder in 1799 effectively eliminated the Dutch navy from the war. The Spithead and Nore mutinies in 1797 incapacitated the Channel and North Sea fleets, leaving Britain potentially exposed to invasion, but were rapidly resolved. The British Mediterranean fleet under Horatio Nelson failed to intercept Napoleon Bonaparte's 1798 expedition to invade Egypt, but annihilated the French fleet at the Battle of the Nile, leaving Bonaparte's army isolated. The emergence of a Baltic coalition opposed to Britain led to an attack on Denmark, which lost much of its fleet in the Battle of Copenhagen (1801) and came to terms with Britain.
During these years, the Navy also conducted amphibious operations that captured most of the French Caribbean islands and the Dutch colonies at the Cape of Good Hope and Ceylon. Though successful in their outcome, the expeditions to the Caribbean, conducted on a grand scale, led to devastating losses from disease. Except for Ceylon and Trinidad, these gains were returned following the Peace of Amiens in 1802, which briefly halted the fighting. Menorca, which had been repeatedly lost and regained during the 18th century, was restored to Spain, its place as the Navy's main base in the Mediterranean being taken by the new acquisition of Malta. War resumed in 1803 and Napoleon attempted to assemble a large enough fleet from the French and Spanish squadrons blockaded in various ports to cover an invasion of England. The Navy frustrated these efforts, and following the abandonment of the invasion plan, Nelson defeated the combined Franco-Spanish fleet at Trafalgar (1805).
After Trafalgar, large-scale fighting at sea remained limited to the destruction of small, fugitive French squadrons, and amphibious operations which again captured the colonies which had been restored at Amiens, along with France's Indian Ocean base at Mauritius and parts of the Dutch East Indies, including Java and the Moluccas. In 1807, French plans to seize the Danish fleet led to a pre-emptive British attack on Copenhagen, resulting in the surrender of the entire Danish navy. The impressment of British and American sailors from American ships contributed to the outbreak of the War of 1812 (1812–1814) against the United States, in which the naval fighting was largely confined to commerce raiding and a series of single-ship actions, usually won by America. The brief renewal of war after Napoleon's return to power in 1815 did not bring a resumption of naval combat.
Between 1815 and 1914, the Navy saw little serious action, owing to the absence of any opponent strong enough to challenge its dominance. During this period, naval warfare underwent a comprehensive transformation, brought about by steam propulsion, metal ship construction, and explosive munitions. Despite having to completely replace its war fleet, the Navy managed to maintain its overwhelming advantage over all potential rivals. Due to British leadership in the Industrial Revolution, the country enjoyed unparalleled shipbuilding capacity and financial resources, which ensured that no rival could take advantage of these revolutionary changes to negate the British advantage in ship numbers.
In 1859, the fleet was estimated to number about 1,000 vessels in all, including both combat and non-combat vessels. In 1889, Parliament passed the Naval Defence Act, which formally adopted the 'two-power standard', stipulating that the Royal Navy should maintain a number of battleships at least equal to the combined strength of the next two largest navies.
The first major action that the Royal Navy saw during this period was the Bombardment of Algiers in 1816 by a joint Anglo-Dutch fleet under Lord Exmouth, to force the Barbary state of Algiers to free Christian slaves and to halt the practice of enslaving Europeans. During the Greek War of Independence, the combined navies of Britain, France and Russia defeated an Ottoman fleet at the Battle of Navarino in 1827, the last major action between sailing ships. During the same period, the Royal Navy took anti-piracy actions in the South China Sea. Between 1807 and 1865, it maintained a blockade of Africa to counter the illegal slave trade. It also participated in the Crimean War of 1854–56, as well as numerous military actions throughout Asia and Africa, notably the First and Second Opium Wars with Qing dynasty China. On 27 August 1896, the Royal Navy took part in the Anglo-Zanzibar War, which was the shortest war in history.
The end of the 19th century saw structural changes brought about by the First Sea Lord Jackie Fisher, who retired, scrapped or placed into reserve many of the older vessels, making funds and manpower available for newer ships. He also oversaw the development of HMS "Dreadnought", launched in 1906. Its speed and firepower rendered all existing battleships obsolete. The industrial and economic development of Germany had by this time overtaken Britain's, enabling the Imperial German Navy to attempt to outpace British construction of dreadnoughts. In the ensuing arms race, Britain succeeded in maintaining a substantial numerical advantage over Germany, but for the first time since 1805 another navy now existed with the capacity to challenge the Royal Navy in battle.
Reforms were also gradually introduced in the conditions of enlisted men, among them the abolition of military flogging in 1879.
During the First World War, the Royal Navy's strength was mostly deployed at home in the Grand Fleet, confronting the German High Seas Fleet across the North Sea. Several inconclusive clashes took place between them, chiefly the Battle of Jutland in 1916. The British numerical advantage proved insurmountable, leading the High Seas Fleet to abandon any attempt to challenge British dominance.
Elsewhere in the world, the Navy hunted down the handful of German surface raiders at large. During the Dardanelles Campaign against the Ottoman Empire in 1915, it suffered heavy losses during a failed attempt to break through the system of minefields and shore batteries defending the straits.
Upon entering the war, the Navy had immediately established a blockade of Germany. The Navy's Northern Patrol closed off access to the North Sea, while the Dover Patrol closed off access to the English Channel. The Navy also mined the North Sea. As well as closing off the Imperial German Navy's access to the Atlantic, the blockade largely blocked neutral merchant shipping heading to or from Germany. The blockade was maintained for the eight months after the armistice was agreed, in order to force Germany to end the war formally by signing the Treaty of Versailles.
The most serious menace faced by the Navy came from the attacks on merchant shipping mounted by German U-boats. For much of the war this submarine campaign was restricted by prize rules requiring merchant ships to be warned and evacuated before sinking. In 1915, the Germans renounced these restrictions and began to sink merchant ships on sight, but later returned to the previous rules of engagement to placate neutral opinion. A resumption of unrestricted submarine warfare in 1917 raised the prospect of Britain and its allies being starved into submission. The Navy's response to this new form of warfare initially proved inadequate, owing to its refusal to adopt a convoy system for merchant shipping despite the demonstrated effectiveness of the technique in protecting troopships. The belated introduction of convoys sharply reduced losses and brought the U-boat threat under control.
In the inter-war period, the Royal Navy was stripped of much of its power. The Washington and London Naval Treaties imposed the scrapping of some capital ships and limitations on new construction. In 1932, the Invergordon Mutiny took place over a proposed 25% pay cut, which was eventually reduced to 10%. International tensions increased in the mid-1930s, and the Second London Naval Treaty of 1935 failed to halt the development of a naval arms race. By 1938, treaty limits were effectively being ignored. The re-armament of the Royal Navy was well under way by this point; it had begun construction of new battleships, still designed within treaty limits, and of its first full-sized purpose-built aircraft carriers. In addition to new construction, several existing old battleships, battlecruisers and heavy cruisers were reconstructed and their anti-aircraft weaponry reinforced, while new technologies, such as ASDIC, Huff-Duff and hydrophones, were developed. The Navy had lost control of naval aviation when the Royal Naval Air Service was merged with the Royal Flying Corps to form the Royal Air Force in 1918, but regained control of ship-board aircraft with the return of the Fleet Air Arm to Naval control in 1937.
At the start of World War II in 1939, the Royal Navy was the largest in the world, with over 1,400 vessels.
During one of the earliest phases of the war, the Royal Navy provided critical cover during Operation Dynamo, the British evacuation from Dunkirk, and served as the ultimate deterrent to a German invasion of Britain during the following four months. At Taranto, Admiral Cunningham commanded a fleet that launched the first all-aircraft naval attack in history. Cunningham was determined that the Navy be perceived as the United Kingdom's most daring military force: when warned of risks to his vessels during the Allied evacuation after the Battle of Crete, he said, "It takes the Navy three years to build a new ship. It will take three hundred years to build a new tradition. The evacuation will continue."
The Royal Navy suffered heavy losses in the first two years of the war, including the carriers "Courageous", "Glorious" and "Ark Royal", the battleships "Royal Oak" and "Barham" and the battlecruiser "Hood" in the European Theatre, and the carrier "Hermes", the battleship "Prince of Wales", the battlecruiser "Repulse" and the heavy cruisers "Cornwall", "Dorsetshire" and "Exeter" in the Asian Theatre. Of the 1,418 men aboard "Hood", only three survived its sinking. Over 3,000 people were lost when the converted troopship "Lancastria" was sunk in June 1940, the greatest maritime disaster in Britain's history. There were, however, also successes against enemy surface ships, as in the battles of the River Plate in 1939, Narvik in 1940 and Cape Matapan in 1941, and the sinking of the German capital ships "Bismarck" in 1941 and "Scharnhorst" in 1943.
The Navy's most critical struggle was the Battle of the Atlantic, defending Britain's vital commercial supply lines against U-boat attack. A traditional convoy system was instituted from the start of the war, but German submarine tactics, based on group attacks by "wolf-packs", were much more effective than in the previous war, and the threat remained serious for well over three years. Defences were strengthened by deployment of purpose-built escorts, of escort carriers, of long-range patrol aircraft, improved anti-submarine weapons and sensors, and by the deciphering of German signals by the code-breakers of Bletchley Park. The threat was at last effectively broken by devastating losses inflicted on the U-boats in the spring of 1943. Intense convoy battles of a different sort, against combined air, surface and submarine threats, were fought off enemy-controlled coasts in the Arctic, where Britain ran supply convoys through to Russia, and in the Mediterranean, where the struggle focused on the convoys to Malta.
The Navy was also vital in guarding the sea lanes that enabled British forces to fight in North Africa, the Mediterranean and the Far East. Naval supremacy was essential to amphibious operations such as the invasions of Northwest Africa, Sicily, Italy, and Normandy. By the end of the war the Royal Navy comprised over 4,800 ships, and was the second-largest fleet in the world.
After the Second World War, the decline of the British Empire and the economic hardships in Britain forced a reduction in the size and capability of the Royal Navy. All of the pre-war ships (except for the light cruisers) were quickly retired and most sold for scrapping over the years 1945–1948; only the ships in the best condition (the four surviving "King George V"-class battleships, carriers, cruisers, and some destroyers) were retained and refitted for service. The increasingly powerful United States Navy took on the former role of the Royal Navy as a global naval power and police force of the sea. The combination of the threat of the Soviet Union and Britain's commitments throughout the world created a new role for the Navy. Governments since the Second World War have had to balance commitments against increasing budgetary pressures, partly due to the rising cost of weapons systems, what historian Paul Kennedy called the Upward Spiral.
These pressures were exacerbated by bitter inter-service rivalry. A modest new construction programme was initiated, with some new carriers ("Majestic"- and "Centaur"-class light carriers, and large carriers such as "Eagle", completed between 1948 and 1958), along with three "Tiger"-class cruisers (completed 1959–1961), the "Daring"-class destroyers in the 1950s, and finally the "County"-class guided missile destroyers completed in the 1960s.
HMS "Dreadnought", the Royal Navy's first nuclear submarine, was launched in the 1960s. The navy also received its first nuclear weapons with the introduction of the first of the "Resolution"-class submarines, armed with the Polaris missile. The introduction of Polaris followed the cancellation of the GAM-87 Skybolt missile, which had been proposed for use by the Air Force's V bomber force. By the 1990s, the navy had become responsible for the maintenance of the UK's entire nuclear arsenal. The financial costs attached to nuclear deterrence became an increasingly significant issue for the navy.
The Navy began plans to replace its fleet of aircraft carriers in the mid-1960s. A plan was drawn up for three large aircraft carriers, each displacing about 60,000 tons; the plan was designated CVA-01. These carriers would be able to operate the latest aircraft coming into service and keep the Royal Navy's place as a major naval power. The new Labour government that came to power in 1964 was determined to cut defence expenditure as a means to reduce public spending, and in the 1966 Defence White Paper the project was cancelled. The existing carriers (all built during, or just after, World War II) were refitted, two ("Albion" and "Bulwark") becoming "commando" carriers and four others being completed or rebuilt. Starting in 1965, one by one these carriers were decommissioned without replacement, culminating with the 1979 retirement of "Ark Royal". By the early 1980s, only "Hermes" survived; she received a refit (just in time for the Falklands War) to operate Sea Harriers. She operated along with three much smaller "Invincible"-class light carriers, and the fleet was now centred on anti-submarine warfare in the north Atlantic, as opposed to its former worldwide strike capability. Along with the war-era carriers, all of the war-built cruisers and destroyers, together with the post-war "Tiger"-class cruisers and large "County"-class guided-missile destroyers, were either retired or sold by 1984.
The Royal Navy was involved in three major confrontations with the Icelandic Coast Guard from 1958 to 1976. These largely bloodless incidents became known as the Cod Wars. One of the most important operations conducted predominantly by the Royal Navy after the Second World War was the 1982 defeat of Argentina in the Falkland Islands War. Despite losing four naval ships and other civilian and RFA ships, the Royal Navy fought and won a war over 8,000 miles (12,000 km) from Great Britain. HMS "Conqueror" is the only nuclear-powered submarine to have engaged an enemy ship with torpedoes, sinking the cruiser ARA "General Belgrano".
Before the Falklands War, Defence Secretary John Nott had advocated and initiated a series of cutbacks to the Navy. The Falklands War, though, provided a reprieve from the proposed cutbacks and demonstrated the need for the Royal Navy to regain an expeditionary and littoral capability which, with its resources and structure at the time, would prove difficult. At the beginning of the 1980s, the Royal Navy was a force focused on blue-water anti-submarine warfare. Its purpose was to search for and destroy Soviet submarines in the North Atlantic and to operate the nuclear deterrent submarine force. For a time "Hermes" was retained, along with all three of the "Invincible"-class light aircraft carriers. More Sea Harriers were ordered, not just to replace losses but also to increase the size of the Fleet Air Arm. New and more capable ships were built, notably the "Sheffield"-class destroyers and the Type 21, Type 22 and Type 23 frigates, as well as new LPDs of the "Albion" class, but never in the numbers of the ships that they replaced. As a result, the Royal Navy surface fleet has continued to shrink. A 2013 report found that the Navy was already too small, and that Britain would have to depend on her allies if her territories were attacked.
The Royal Navy also took part in the Gulf War, the Kosovo conflict, the Afghanistan Campaign, and the 2003 invasion of Iraq, the last of which saw RN warships bombard positions in support of the Al Faw Peninsula landings by Royal Marines. In August 2005, the Royal Navy rescued seven Russians stranded in a submarine off the Kamchatka peninsula. The Navy's Scorpio 45 remote-controlled mini-sub freed the Russian submarine from the fishing nets and cables that had held it for three days. The Royal Navy was also engaged in an incident with Somali pirates in November 2008, after the pirates tried to capture a civilian vessel.
The global economic recession of 2008 had a significant impact on the Royal Navy, resulting in the Strategic Defence and Security Review 2010, which made sweeping cuts to the Navy's budget. The Harrier aircraft were retired, with some being presented to museums and the rest sold to the United States for spare parts to keep their aircraft flying. The carrier "Ark Royal" and the remaining Type 22 frigates were all removed from service and sold for scrap. "Illustrious", however, was retained through to 2014 in the LPH role, until "Ocean" completed her refit. Plans were made to allow "Illustrious" to be retained as a floating museum, but by the summer of 2016 she too was sold for scrap. The future of "Albion" and "Bulwark" is uncertain, as funds may not be available to allow them to remain in service. The Royal Navy was to receive 12 Type 45 destroyers as a replacement for the older Type 42 class, which was completely retired by 2013. The number was later reduced to six vessels, all in service by 2013.
In 2015, the Royal Navy was deployed to the Mediterranean in the mission to rescue migrants crossing the Mediterranean from Libya to Italy. By spring 2018, the Royal Navy had decommissioned HMS "Ocean" and started the replacement of the River-class offshore patrol vessels. The first of the new "Queen Elizabeth"-class carriers was undergoing tests and workups before her first fixed-wing aircraft arrived later in the year, and design work was underway for the new generation of nuclear deterrent submarines. In July 2017, the first of eight new Type 26 frigates was laid down.
HMS "Raleigh" at Torpoint, Cornwall, is the basic training facility for newly enlisted ratings. Britannia Royal Naval College is the initial officer training establishment for the navy, located at Dartmouth, Devon. Personnel are divided into a warfare branch, which includes warfare officers (previously named seamen officers) and naval aviators, as well as other branches including the Royal Naval Engineers, the Royal Navy Medical Branch, and Logistics Officers, the renamed Supply Officer branch. Present-day officers and ratings have several different Royal Navy uniforms; some are designed to be worn aboard ship, others ashore or on ceremonial duties. Women began to join the Royal Navy in 1917 with the formation of the Women's Royal Naval Service (WRNS), which was disbanded after the end of the First World War in 1919. It was revived in 1939, and the WRNS continued until 1993, when it was disbanded as a result of the decision to fully integrate women into the structures of the Royal Navy. Women now serve in all sections of the Royal Navy, including the Royal Marines.
By January 2015, the Naval Service (Royal Navy and Royal Marines) numbered some 32,880 Regular and 3,040 Maritime Reserve personnel (Royal Naval Reserve and Royal Marines Reserve), giving a combined strength of 35,920 personnel. In addition to the active elements of the Naval Service (Regular and Maritime Reserve), all ex-Regular personnel remain liable to be recalled for duty in a time of need; this is known as the Regular Reserve. In 2002, there were 26,520 Regular Reserves of the Naval Service, of which 13,720 served in the Royal Fleet Reserve. Publications since April 2013 no longer report the entire strength of the Regular Reserve; instead they give a figure only for Regular Reserves who serve in the Royal Fleet Reserve, who numbered 7,960 personnel in 2013.
In August 2019, the Ministry of Defence published figures showing that the Royal Navy and Royal Marines had 29,090 full-time trained personnel compared with a target of 30,600.
In December 2019, the First Sea Lord, Admiral Tony Radakin, outlined a proposal to reduce the number of Rear-Admirals at Navy Command by five. The fighting arms (excluding the Commandant General Royal Marines) would be reduced to Commodore (1-star) rank and the surface flotillas would be combined. Training would be concentrated under the Fleet Commander.
The large fleet units in the Royal Navy consisted of amphibious warfare ships until December 2017, when the first "Queen Elizabeth"-class carrier was commissioned. Amphibious warfare ships in current service include two landing platform docks ("Albion" and "Bulwark"). While their primary role is to conduct amphibious warfare, they have also been deployed on humanitarian aid missions.
The Royal Navy has two "Queen Elizabeth"-class aircraft carriers, both currently undertaking sea and aircraft trials and due to enter naval service within the next few years. These carriers cost £6 billion, and both are intended to operate the STOVL variant of the F-35 Lightning II. The first, "Queen Elizabeth", began sea trials in June 2017, was commissioned later that year, commenced flight trials in 2018 and will enter service in 2020, while the second, "Prince of Wales", began sea trials on 22 September 2019, was commissioned in December 2019 and is due to enter service in 2023.
The escort fleet, comprising guided missile destroyers and frigates, is the traditional workhorse of the Navy. Currently there are six Type 45 destroyers and 13 Type 23 frigates in active service. Among their primary roles is providing escort for the larger capital ships, protecting them from air, surface and subsurface threats. Other duties include undertaking the Royal Navy's standing deployments across the globe, which often consist of counter-narcotics and anti-piracy missions and the provision of humanitarian aid.
All six Type 45 destroyers have been built and are in commission, with "Duncan", the last of the class, entering service in September 2013. The Type 45 destroyers replaced the older Type 42 destroyers. The Type 45 is primarily designed for anti-aircraft and anti-missile warfare, and the Royal Navy describes the destroyers' mission as "to shield the Fleet from air attack". They are equipped with the PAAMS (also known as Sea Viper) integrated anti-aircraft warfare system, which incorporates the sophisticated SAMPSON and S1850M long-range radars and the Aster 15 and 30 missiles.
Initially, 16 Type 23 frigates were delivered to the Royal Navy, with the final vessel, "St Albans", commissioned in June 2002. However, the 2004 review of defence spending ("Delivering Security in a Changing World") announced that three of the sixteen frigates would be paid off as part of a continuing cost-cutting strategy, and these were subsequently sold to the Chilean Navy. The 2010 Strategic Defence and Security Review announced that the remaining 13 Type 23 frigates would eventually be replaced by the Type 26 frigate. The Strategic Defence and Security Review 2015 reduced the procurement of the Type 26 to eight, with five Type 31e frigates to be procured in addition.
There are two classes of MCMVs in the Royal Navy: seven "Sandown"-class minehunters and six "Hunt"-class vessels. The Hunt-class vessels combine the separate roles of the traditional minesweeper and the active minehunter in one hull. If required, the "Sandown"- and Hunt-class vessels can take on the role of offshore patrol vessels.
In 1997, three Batch 1 ships were procured; unusually, these were owned by Vosper Thorneycroft and leased to the Royal Navy until 2013. This relationship was defined by a ground-breaking contractor logistic support arrangement under which the ships' availability was contracted to the RN, including technical and stores support. In November 2013, it was announced that, in order to sustain shipbuilding capabilities on the Clyde, five new ocean-going patrol vessels with Merlin-capable flight decks would be ordered for delivery from 2017. These 'Batch 2' ships will replace the four existing River-class ships. In October 2014, the Ministry of Defence announced the names of the first three ships as HMS "Forth", HMS "Medway" and HMS "Trent". The fourth and fifth ships were ordered in December 2016; these will be named HMS "Tamar" and HMS "Spey".
In December 2019, the modified 'Batch 1' River-class vessel HMS "Clyde" was decommissioned, with the 'Batch 2' HMS "Forth" taking over duties as the Falkland Islands patrol ship.
HMS "Protector" is a dedicated Antarctic patrol ship that fulfils the nation's mandate to provide support to the British Antarctic Survey (BAS). HMS "Scott" is an ocean survey vessel and, at 13,500 tonnes, is one of the largest ships in the Navy. The other survey vessels of the Royal Navy are the two multi-role ships of the "Echo" class, which came into service in 2002 and 2003. As of 2018, the newly commissioned HMS "Magpie" also undertakes survey duties at sea.
The Navy's large fleet units are supported by the Royal Fleet Auxiliary, which operates three amphibious transport docks among its ships. These are known as the "Bay"-class landing ships, of which four were introduced in 2006–2007, though one was sold to the Royal Australian Navy in 2011. In November 2006, the First Sea Lord, Admiral Sir Jonathon Band, described the Royal Fleet Auxiliary vessels as "a major uplift in the Royal Navy's war fighting capability".
The Submarine Service is the submarine-based element of the Royal Navy. It is sometimes referred to as the "Silent Service", as the submarines are generally required to operate undetected. Founded in 1901, the service made history in 1982 when, during the Falklands War, HMS "Conqueror" became the first nuclear-powered submarine to sink a surface ship, the cruiser ARA "General Belgrano". Today, all of the Royal Navy's submarines are nuclear-powered.
The Royal Navy operates four "Vanguard"-class ballistic missile submarines, displacing nearly 16,000 tonnes and equipped with Trident II missiles (armed with nuclear weapons) and heavyweight Spearfish torpedoes, in order to carry out Operation Relentless, the United Kingdom's Continuous At Sea Deterrent (CASD). In December 2006, the Government published recommendations for a new class of four ballistic missile submarines to replace the current "Vanguard" class, starting in 2024. These new "Dreadnought"-class submarines will mean that the United Kingdom maintains a nuclear ballistic missile submarine fleet and the ability to launch nuclear weapons.
Six fleet submarines are presently in service: three of the "Trafalgar" class and three of the "Astute" class (with the remainder of the latter under construction). The "Trafalgar" class displace a little over 5,300 tonnes when submerged and are armed with Tomahawk land-attack missiles and Spearfish torpedoes. The "Astute" class, at 7,400 tonnes, are much larger and carry a larger number of Tomahawk missiles and Spearfish torpedoes. Four more "Astute"-class fleet submarines are expected to be commissioned and will eventually replace the remaining "Trafalgar"-class boats. "Artful" was the latest "Astute"-class boat to be commissioned.
In the 2010 Strategic Defence and Security Review, the UK Government reaffirmed its intention to procure seven "Astute"-class submarines.
The Fleet Air Arm (FAA) is the branch of the Royal Navy responsible for the operation of naval aircraft; it can trace its roots back to 1912 and the formation of the Royal Flying Corps. The Fleet Air Arm currently comprises: the Commando Helicopter Force (operating the AW-101 Merlin HC4 in support of 3 Commando Brigade), the Maritime Wildcat Force (operating the AW-159 Wildcat HM2 on small ships' flights), the Merlin Force (operating the AW-101 Merlin HM2 in an anti-submarine role) and the newly formed Lightning Force (operating the F-35B Lightning II in the maritime strike role).
Pilots designated for rotary wing service train under No. 1 Flying Training School (1 FTS) at RAF Shawbury.
Following the retirement of Joint Force Harrier and the Harrier GR7/GR9 strike aircraft in 2010, the FAA is undergoing F-35B Lightning II aviation trials on board the new "Queen Elizabeth"-class aircraft carriers in preparation for front-line operations.
The Royal Marines are an amphibious, specialised light infantry force of commandos, capable of deploying at short notice in support of Her Majesty's Government's military and diplomatic objectives overseas. The Royal Marines are organised into a highly mobile light infantry brigade (3 Commando Brigade) and seven commando units, including 1 Assault Group Royal Marines, 43 Commando Fleet Protection Group Royal Marines and a company-strength commitment to the Special Forces Support Group. The Corps operates in all environments and climates, though particular expertise and training are devoted to amphibious warfare, Arctic warfare, mountain warfare, expeditionary warfare and its commitment to the UK's Rapid Reaction Force. The Royal Marines are also the primary source of personnel for the Special Boat Service (SBS), the Royal Navy's contribution to the United Kingdom Special Forces.
The Royal Marines have seen action in a number of wars, often fighting alongside the British Army, including in the Seven Years' War, the Napoleonic Wars, the Crimean War, World War I and World War II. In recent times, the Corps has been deployed in expeditionary warfare roles such as the Falklands War, the Gulf War, the Bosnian War, the Kosovo War, the Sierra Leone Civil War, the Iraq War and the War in Afghanistan. The Royal Marines have international ties with allied marine forces, particularly the United States Marine Corps and the Netherlands Marine Corps/Korps Mariniers.
The Royal Navy currently uses three major naval port bases in the UK, each housing its own flotilla of ships and boats ready for service, along with two naval air stations and a support facility in Bahrain:
The current role of the Royal Navy is to protect British interests at home and abroad, executing the foreign and defence policies of Her Majesty's Government through the exercise of military effect, diplomatic activities and other activities in support of these objectives. The Royal Navy is also a key element of the British contribution to NATO, with a number of assets allocated to NATO tasks at any time. These objectives are delivered via a number of core capabilities:
The Royal Navy is currently deployed in different areas of the world, including some standing Royal Navy deployments. These include several home tasks as well as overseas deployments. The Navy is deployed in the Mediterranean as part of standing NATO deployments including mine countermeasures and NATO Maritime Group 2. In both the North and South Atlantic, RN vessels are patrolling. There is always a Falkland Islands patrol vessel on deployment, currently HMS Forth.
The Royal Navy operates a Response Force Task Group (a product of the 2010 Strategic Defence and Security Review), which is poised to respond globally to short-notice tasking across a range of defence activities, such as non-combatant evacuation operations, disaster relief, humanitarian aid or amphibious operations. In 2011, the first deployment of the task group occurred under the name 'COUGAR 11' which saw them transit through the Mediterranean where they took part in multinational amphibious exercises before moving further east through the Suez Canal for further exercises in the Indian Ocean.
In the Persian Gulf, the RN sustains commitments in support of both national and coalition efforts to stabilise the region. The Armilla Patrol, which started in 1980, is the navy's primary commitment to the Gulf region. The Royal Navy also contributes to the combined maritime forces in the Gulf in support of coalition operations. The UK Maritime Component Commander, overseer of all of Her Majesty's warships in the Persian Gulf and surrounding waters, is also deputy commander of the Combined Maritime Forces. The Royal Navy has been responsible for training the fledgling Iraqi Navy and securing Iraq's oil terminals following the cessation of hostilities in the country. The Iraqi Training and Advisory Mission (Navy) (Umm Qasr), headed by a Royal Navy captain, has been responsible for the former duty, whilst Commander Task Force Iraqi Maritime, a Royal Navy commodore, has been responsible for the latter.
The Royal Navy contributes to standing NATO formations and maintains forces as part of the NATO Response Force. The RN also has a long-standing commitment to supporting the Five Powers Defence Arrangements countries and occasionally deploys to the Far East as a result. This deployment typically consists of a frigate and a survey vessel, operating separately. Operation Atalanta, the European Union's anti-piracy operation in the Indian Ocean, is permanently commanded by a senior Royal Navy or Royal Marines officer at Northwood Headquarters and the navy contributes ships to the operation.
The titular head of the Royal Navy is the Lord High Admiral, a position which has been held by the Duke of Edinburgh since 2011. The position had been held by Queen Elizabeth II from 1964 to 2011; the Sovereign is the Commander-in-chief of the British Armed Forces. The professional head of the Naval Service is the First Sea Lord, an admiral and member of the Defence Council of the United Kingdom. The Defence Council delegates management of the Naval Service to the Admiralty Board, chaired by the Secretary of State for Defence, which directs the Navy Board, a sub-committee of the Admiralty Board comprising only naval officers and Ministry of Defence (MOD) civil servants. These are all based in MOD Main Building in London, where the First Sea Lord, also known as the Chief of the Naval Staff, is supported by the Naval Staff Department.
The Fleet Commander has responsibility for the provision of ships, submarines and aircraft ready for any operations that the Government requires. The Fleet Commander exercises this authority through the Navy Command Headquarters at Whale Island, Portsmouth. An operational headquarters, the Northwood Headquarters at Northwood, London, is co-located with the Permanent Joint Headquarters of the United Kingdom's armed forces and a NATO Regional Command, Allied Maritime Command.
The Royal Navy was the first of the three armed forces to combine its personnel and training command, under the Principal Personnel Officer, with its operational and policy command: the Headquarters of the Commander-in-Chief Fleet and the Naval Home Command were merged into a single organisation, Fleet Command, in 2005, which became Navy Command in 2008. Within the combined command, the Second Sea Lord continues to act as the Principal Personnel Officer.
The Naval Command senior appointments are:
Intelligence support to fleet operations is provided by intelligence sections at the various headquarters and from MOD Defence Intelligence, renamed from the Defence Intelligence Staff in early 2010.
The Royal Navy currently operates from three bases in the United Kingdom where commissioned ships are based: Portsmouth, Clyde and Devonport (Plymouth), of which Devonport is the largest operational naval base in the UK and Western Europe. Each base hosts a flotilla command under a commodore, or, in the case of Clyde, a captain, responsible for the provision of operational capability using the ships and submarines within the flotilla. 3 Commando Brigade Royal Marines is similarly commanded by a brigadier and based in Plymouth. Historically, the Royal Navy maintained Royal Navy Dockyards around the world; dockyards are harbours where ships are overhauled and refitted. Only four operate today: Devonport, Faslane, Rosyth and Portsmouth. A Naval Base Review was undertaken in 2006 and early 2007; the outcome, announced by the Secretary of State for Defence, Des Browne, confirmed that all would remain open, although some reductions in manpower were anticipated.
The academy where initial training for future Royal Navy officers takes place is Britannia Royal Naval College, located on a hill overlooking Dartmouth, Devon. Basic training for future ratings takes place at HMS Raleigh at Torpoint, Cornwall, close to HMNB Devonport.
Significant numbers of naval personnel are employed within the Ministry of Defence, Defence Equipment and Support and on exchange with the Army and Royal Air Force. Small numbers are also on exchange within other government departments and with allied fleets, such as the United States Navy. The navy also posts personnel in small units around the world to support ongoing operations and maintain standing commitments. Nineteen personnel are stationed in Gibraltar to support the small Gibraltar Squadron, the RN's only permanent overseas squadron. Some personnel are also based at East Cove Military Port and RAF Mount Pleasant in the Falkland Islands to support APT(S). Small numbers of personnel are based in Diego Garcia (Naval Party 1002), Miami (NP 1011 – AUTEC), Singapore (NP 1022), Dubai (NP 1023) and elsewhere.
On 6 December 2014, the Foreign and Commonwealth Office announced it would expand the UK's naval facilities in Bahrain to support larger Royal Navy ships deployed to the Persian Gulf. Once complete, it will be the UK's first permanent military base located East of Suez since it withdrew from the region in 1971. The base will reportedly be large enough to accommodate Type 45 destroyers and "Queen Elizabeth"-class aircraft carriers.
The navy of the United Kingdom is commonly referred to as the "Royal Navy" both in the United Kingdom and other countries. Navies of other Commonwealth countries where the British monarch is also head of state include their national name, e.g. Royal Australian Navy. Some navies of other monarchies, such as the "Koninklijke Marine" (Royal Netherlands Navy) and "Kungliga Flottan" (Royal Swedish Navy), are also called "Royal Navy" in their own language. The Danish Navy incorporates the term "Royal" in its official name (Royal Danish Navy) but uses simply "Flåden" (the Navy) in everyday speech. The French Navy, despite France having been a republic since 1870, is often nicknamed "La Royale" (literally: "The Royal").
Since 1789, Royal Navy ships in commission have been prefixed with Her Majesty's Ship (or His Majesty's Ship), abbreviated to "HMS". Submarines are styled HM Submarine, also abbreviated "HMS". Names are allocated to ships and submarines by a naming committee within the MOD and given by class, with the names of ships within a class often being thematic (for example, the Type 23 frigates are named after British dukes) or traditional (with some classes carrying the names of famous historic ships). Names are frequently re-used, offering a new ship the rich heritage, battle honours and traditions of her predecessors. Often, a particular vessel class will be named after the first ship of that type to be built. As well as a name, each ship and submarine of the Royal Navy and the Royal Fleet Auxiliary is given a pennant number which in part denotes its role; for example, the Type 45 destroyer HMS Daring displays the pennant number 'D32'.
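As an illustration of how a pennant number encodes a role, the short Python sketch below maps a pennant number's leading letter (its "flag superior") to a broad ship type. The mapping is a simplified, partial rendering of the convention for illustration only, not an official table, and the `describe` helper is hypothetical.

```python
# Simplified, illustrative mapping of Royal Navy "flag superior"
# letters to broad ship roles; partial and unofficial.
FLAG_SUPERIOR = {
    "R": "aircraft carrier",
    "D": "destroyer",
    "F": "frigate",
    "S": "submarine",
    "P": "patrol vessel",
    "M": "mine countermeasures vessel",
    "L": "amphibious warfare ship",
    "A": "auxiliary",
}

def describe(pennant: str) -> str:
    """Return the pennant number with the role its leading letter suggests."""
    role = FLAG_SUPERIOR.get(pennant[:1].upper(), "unknown role")
    return f"{pennant}: {role}"

print(describe("D32"))   # the Type 45 destroyer pennant cited in the text
print(describe("P222"))  # a hypothetical patrol-vessel pennant
```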
The Royal Navy ranks, rates and insignia form part of the uniform of the Royal Navy. The Royal Navy uniform is the pattern on which many of the uniforms of the other national navies of the world are based (e.g. Ranks and insignia of NATO navies officers, Uniforms of the United States Navy, Uniforms of the Royal Canadian Navy, French Naval Uniforms).
For officers (see also Royal Navy officer rank insignia):
1 Rank in abeyance – routine appointments no longer made to this rank, though honorary awards of this rank are occasionally made to senior members of the Royal family and prominent former First Sea Lords.
For enlisted rates (see also Royal Navy ratings rank insignia):
The Royal Navy has several formal customs and traditions including the use of ensigns and ships badges. Royal Navy ships have several ensigns used when under way and when in port. Commissioned ships and submarines wear the White Ensign at the stern whilst alongside during daylight hours and at the main-mast whilst under way. When alongside, the "Union Jack" is flown from the jackstaff at the bow, and can only be flown under way either to signal a court-martial is in progress or to indicate the presence of an admiral of the fleet on-board (including the Lord High Admiral or the monarch).
The Fleet Review is an irregular tradition of assembling the fleet before the monarch. The first review on record was held in 1400, and the most recent was held on 28 June 2005 to mark the bicentenary of the Battle of Trafalgar; 167 ships from many different nations attended, with the Royal Navy supplying 67.
There are several less formal traditions, including service nicknames and naval slang, known as "Jackspeak". The nicknames include "The Andrew" (of uncertain origin, possibly after a zealous press ganger) and "The Senior Service". British sailors are referred to as "Jack" (or "Jenny"), or more widely as "Matelots". Royal Marines are fondly known as "Bootnecks" or often just as "Royals". A compendium of naval slang was brought together by Commander A. Covey-Crump, and his name has in itself become the subject of naval slang: Covey Crump. A game traditionally played by the Navy is the four-player board game known as "Uckers", which is similar to Ludo and is regarded as easy to learn but difficult to play well.
The Royal Navy sponsors or supports three youth organisations:
The above organisations are the responsibility of the CUY branch of Commander Core Training and Recruiting (COMCORE) who reports to Flag Officer Sea Training (FOST).
The Royal Navy of the 18th century is depicted in many novels and several films dramatising the voyage and mutiny on the Bounty. The Royal Navy's Napoleonic campaigns of the early 19th century are also a popular subject of historical novels. Some of the best-known are Patrick O'Brian's Aubrey-Maturin series and C. S. Forester's Horatio Hornblower chronicles.
The Navy can also be seen in numerous films. The fictional spy James Bond is a commander in the Royal Naval Volunteer Reserve (RNVR). The Royal Navy is featured in "The Spy Who Loved Me", when a nuclear ballistic-missile submarine is stolen, and in "Tomorrow Never Dies", when a media baron sinks a Royal Navy warship in an attempt to trigger a war between the UK and the People's Republic of China. "Master and Commander: The Far Side of the World" was based on Patrick O'Brian's Aubrey-Maturin series. The "Pirates of the Caribbean" series of films also includes the Navy as the force pursuing the eponymous pirates. Noël Coward directed and starred in his own film "In Which We Serve", which tells the story of the crew of the fictional HMS "Torrin" during the Second World War. It was intended as a propaganda film and was released in 1942. Coward starred as the ship's captain, with supporting roles from John Mills and Richard Attenborough.
C. S. Forester's Hornblower novels have been adapted for television. The Royal Navy was the subject of an acclaimed 1970s BBC television drama series, "Warship", and of a five-part documentary, "Shipmates", that followed the workings of the Royal Navy day to day.
Television documentaries about the Royal Navy include: "Empire of the Seas: How the Navy Forged the Modern World", a four-part documentary depicting Britain's rise as a naval superpower, up until the First World War; "Sailor", about life on the aircraft carrier HMS "Ark Royal"; and "Submarine", about the submarine captains' training course, 'The Perisher'. There have also been Channel 5 documentaries such as "Royal Navy Submarine Mission", following a nuclear-powered fleet submarine.
The popular BBC radio comedy series "The Navy Lark" featured a fictitious warship (HMS "Troutbridge") and ran from 1959 to 1977.
Robert M. Pirsig
Robert Maynard Pirsig (September 6, 1928 – April 24, 2017) was an American writer and philosopher. He was the author of the philosophical novels "Zen and the Art of Motorcycle Maintenance: An Inquiry into Values" (1974) and "Lila: An Inquiry into Morals" (1991).
Pirsig was born on September 6, 1928, in Minneapolis, Minnesota, the son of Harriet Marie Sjobeck and Maynard Pirsig. He was of German and Swedish descent. His father was a University of Minnesota Law School (UMLS) graduate who started teaching at the school in 1934. The elder Pirsig served as the law school dean from 1948 to 1955, and retired from teaching at UMLS in 1970. He resumed his career as a professor at the William Mitchell College of Law, where he remained until his final retirement in 1993.
A precocious child with an IQ of 170 at age nine, Pirsig skipped several grades and was enrolled at the Blake School in Minneapolis. At 14, in May 1943, Pirsig was awarded a high school diploma from the University of Minnesota's laboratory school, University High School (now Marshall-University High School) where he edited the school yearbook, the Bisbilla. He then entered the University of Minnesota to study biochemistry that autumn. In "Zen and the Art of Motorcycle Maintenance", he described the central character, thought to represent himself, as being far from a typical student; he was interested in science as a goal in itself, rather than as a way to establish a career.
While doing laboratory work in biochemistry, Pirsig became greatly troubled by the existence of more than one workable hypothesis to explain a given phenomenon, and that the number of hypotheses appeared unlimited. He could not find any way to reduce the number of hypotheses—he became perplexed by the role and source of hypothesis generation within scientific practice. The question distracted him to the extent that he lost interest in his studies and failed to maintain good grades. Finally, he was expelled from the university.
In 1946, aged eighteen, Pirsig enlisted in the United States Army. He was stationed in South Korea until 1948. Upon his discharge from the Army, he returned to the United States and lived in Seattle, Washington, for less than a year, at which point he decided to finish the education he had abandoned. Pirsig earned a bachelor's degree in 1950 from the University of Minnesota. He then attended Banaras Hindu University in India, to study Eastern philosophy and culture. At the University of Chicago, he performed graduate-level work in philosophy in the Committee on the Analysis of Ideas and Study of Methods but he did not obtain a degree there. In 1958 he earned a master's degree in journalism from the University of Minnesota.
In 1958, he became a professor at Montana State University in Bozeman, and taught creative writing courses for two years. Shortly thereafter he taught at the University of Illinois at Chicago.
Pirsig's published writing consists most notably of two books. The better known, "Zen and the Art of Motorcycle Maintenance", develops around Pirsig's exploration into the nature of "quality". Ostensibly a first-person narrative based on a motorcycle trip he and his young son Chris had taken from Minneapolis to San Francisco, it is an exploration of the underlying metaphysics of Western culture. He also gives the reader a short summary of the history of philosophy, including his interpretation of the philosophy of Aristotle as part of an ongoing dispute between "universalists", admitting the existence of "universals", and the Sophists, opposed by Socrates and his student Plato. Pirsig finds in "Quality" a special significance and common ground between Western and Eastern world views.
Pirsig had great difficulty finding a publisher for "Zen and the Art of Motorcycle Maintenance". When he did, his publisher's internal recommendation stated, "This book is brilliant beyond belief, it is probably a work of genius, and will, I'll wager, attain classic stature." In his book review, George Steiner compared Pirsig's writing to Dostoevsky, Broch, Proust, and Bergson, stating that "the assertion itself is valid ... the analogies with "Moby-Dick" are patent".
In 1974, Pirsig was awarded a Guggenheim Fellowship to allow him to write a follow-up, "Lila: An Inquiry into Morals" (1991), in which he developed a value-based metaphysics, the Metaphysics of Quality, that challenges our subject–object view of reality. The second book, whose narrator is this time the captain of a sailboat, follows on from where "Zen and the Art of Motorcycle Maintenance" left off.
Pirsig was vice-president of the Minnesota Zen Meditation Center from 1973 to 1975 and also served on the board of directors.
Robert Pirsig married Nancy Ann James on May 10, 1954. They had two sons: Chris, born in 1956, and Theodore (Ted), born in 1958.
Pirsig suffered a mental breakdown and spent time in and out of psychiatric hospitals between 1961 and 1963. He was diagnosed with schizophrenia and treated with electroconvulsive therapy on numerous occasions, a treatment he discusses in "Zen and the Art of Motorcycle Maintenance". Nancy sought a divorce during this time; they formally separated in 1976 and divorced in 1978. On December 28, 1978, Pirsig married Wendy Kimball in Tremont, Maine.
In 1979, his son Chris, who figured prominently in "Zen and the Art of Motorcycle Maintenance", was fatally stabbed in a mugging outside the San Francisco Zen Center at the age of 22. Pirsig discusses this tragedy in an afterword to subsequent editions of "Zen and the Art of Motorcycle Maintenance", writing that he and his second wife Wendy Kimball decided not to abort the child they conceived in 1980 because he believed that this unborn child — later their daughter Nell — was a continuation of the "life pattern" that Chris had occupied.
Pirsig died aged 88, at his home in South Berwick, Maine, on April 24, 2017, after a period of failing health.
Pirsig received a Guggenheim Fellowship in 1974 for General Nonfiction, allowing him to complete his second book. The University of Minnesota conferred an Outstanding Achievement Award in 1975.
On December 15, 2012, Montana State University bestowed upon Pirsig an honorary doctorate in philosophy during the university's fall commencement. Pirsig was also honored in a commencement speech by MSU Regent Professor Michael Sexson. Pirsig had been an instructor in writing at what was then Montana State College from 1958 to 1960. In "Zen and the Art of Motorcycle Maintenance", Pirsig describes his time at MSC as a less than pleasurable experience that limited his ability to teach writing effectively and to develop his own philosophy and writing. Due to frail health, Pirsig did not travel to Bozeman in December 2012 to accept the accolade.
In December 2019, the Smithsonian's National Museum of American History acquired Pirsig's 1966 Honda CB77F Super Hawk on which the famous 1968 ride with his son Chris was taken. The donation included a manuscript of "Zen and the Art of Motorcycle Maintenance", a signed first edition of the book, and tools and clothing from the ride.
Red Hat Linux
Red Hat Linux, created by the company Red Hat, was a widely used Linux distribution until its discontinuation in 2004.
Early releases of Red Hat Linux were called Red Hat Commercial Linux. Red Hat published the first non-beta release in May 1995. It was the first Linux distribution to use the RPM Package Manager as its packaging format, and it served over time as the starting point for several other distributions, such as Mandriva Linux and Yellow Dog Linux.
In 2003, Red Hat discontinued the Red Hat Linux line in favor of Red Hat Enterprise Linux (RHEL) for enterprise environments. Fedora, developed by the community-supported Fedora Project and sponsored by Red Hat, is a free-of-cost alternative intended for home use. Red Hat Linux 9, the final release, reached its official end of life on April 30, 2004, although updates were published for it through 2006 by the Fedora Legacy project, until that project shut down in early 2007.
Version 3.0.3 was one of the first Linux distributions to support Executable and Linkable Format instead of the older a.out format.
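As an illustration (not part of any Red Hat tooling), the Python sketch below distinguishes the two formats by their leading bytes: every ELF file begins with the magic 0x7F followed by "ELF", whereas old-style a.out executables begin with a small 16-bit magic number (octal 0407, 0410 or 0413 for the common OMAGIC, NMAGIC and ZMAGIC variants).

```python
# Illustrative sketch: classify a binary by its leading magic bytes.
# ELF files always start with 0x7F followed by "ELF"; old-style a.out
# executables start with a 16-bit magic such as octal 0407 (OMAGIC),
# 0410 (NMAGIC) or 0413 (ZMAGIC).
import struct
import sys

ELF_MAGIC = b"\x7fELF"
AOUT_MAGICS = {0o407, 0o410, 0o413}

def classify(path: str) -> str:
    with open(path, "rb") as f:
        head = f.read(4)
    if head == ELF_MAGIC:
        return "ELF"
    if len(head) >= 2 and struct.unpack("<H", head[:2])[0] in AOUT_MAGICS:
        return "a.out (probably)"
    return "unknown format"

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(f"{path}: {classify(path)}")
```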
Red Hat Linux introduced a graphical installer called Anaconda, intended to be easy to use for novices, which has since been adopted by some other Linux distributions. It also introduced a built-in tool called "Lokkit" for configuring the firewall capabilities.
In version 6, Red Hat moved to glibc 2.1, egcs 1.2, and the 2.2 kernel. It also introduced Kudzu, a software library for automatic discovery and configuration of hardware.
Version 7 was released in preparation for the 2.4 kernel, although the first release still used the stable 2.2 kernel. Glibc was updated to version 2.1.92, which was a beta of the upcoming version 2.2, and Red Hat used a patched version of GCC from CVS that they called "2.96". The decision to ship an unstable GCC version was due to GCC 2.95's bad performance on non-i386 platforms, especially DEC Alpha. Newer GCC versions had also improved support for the C++ standard, which caused much of the existing code not to compile.
In particular, the use of a non-released version of GCC caused some criticism, e.g. from Linus Torvalds and the GCC Steering Committee; Red Hat was forced to defend their decision.
GCC 2.96 failed to compile the Linux kernel, and some other software used in Red Hat, due to its stricter checks. Its C++ ABI was also incompatible with that of other compilers. The distribution therefore included a previous version of GCC for compiling the kernel, called "kgcc".
As of Red Hat Linux 7.0, UTF-8 was enabled as the default character encoding for the system. This had little effect on English-speaking users, but enabled much easier internationalisation and seamless support for multiple languages, including ideographic, bi-directional and complex script languages along with European languages. However, this did cause some negative reactions among existing Western European users, whose legacy ISO-8859-based setups were broken by the change.
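The following Python sketch, an illustration rather than anything shipped with Red Hat Linux, shows both sides of that trade-off: UTF-8 can encode ideographic and bi-directional scripts that a single-byte Western European encoding cannot, while bytes written under an ISO-8859-1 assumption are not valid UTF-8.

```python
# Illustration only: why a UTF-8 default can represent text that
# single-byte legacy encodings cannot, and why legacy data broke.

multilingual = "naïve café / 日本語 / עברית"

# UTF-8 encodes any Unicode text, including ideographic (Japanese)
# and bi-directional (Hebrew) scripts.
utf8_bytes = multilingual.encode("utf-8")
print(len(utf8_bytes), "bytes of UTF-8")

# A single-byte Western European encoding cannot represent the
# Japanese or Hebrew characters at all.
try:
    multilingual.encode("iso-8859-1")
except UnicodeEncodeError as err:
    print("ISO-8859-1 cannot encode this text:", err)

# Conversely, bytes written under ISO-8859-1 are not valid UTF-8,
# which is why existing Western European setups saw breakage.
legacy_bytes = "café".encode("iso-8859-1")  # b'caf\xe9'
try:
    legacy_bytes.decode("utf-8")
except UnicodeDecodeError as err:
    print("legacy bytes are not valid UTF-8:", err)
```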
Version 8.0 was the first to include the Bluecurve desktop theme, which provided a common look for the GNOME 2 and KDE 3.0.2 desktops, as well as OpenOffice.org 1.0. KDE members did not appreciate the change, claiming that it was not in the best interests of KDE.
Version 9 supported the Native POSIX Thread Library, which was ported to the 2.4 series kernels by Red Hat.
Red Hat Linux lacked many features due to possible copyright and patent problems. For example, MP3 support was disabled in both Rhythmbox and XMMS; instead, Red Hat recommended using Ogg Vorbis, which is not encumbered by patents. MP3 support could, however, be installed afterwards, although royalties were required wherever MP3 was patented. Support for Microsoft's NTFS file system was also missing, but could likewise be freely installed.
Red Hat Linux was originally developed exclusively inside Red Hat, with the only feedback from users coming through bug reports and contributions to the included software packages – not contributions to the distribution as such. This changed in late 2003 when Red Hat Linux merged with the community-based Fedora Project. The new plan was to draw most of the codebase from Fedora when creating new Red Hat Enterprise Linux distributions. Fedora replaced the original Red Hat Linux download and retail version. The model is similar to the relationship between Netscape Communicator and Mozilla, or StarOffice and OpenOffice.org, although in this case the resulting commercial product is also fully free software.
Release dates were drawn from announcements on "comp.os.linux.announce". Version names were chosen so as to be cognitively related to the prior release, yet not related in the same way as the release before that.
The Fedora and Red Hat Projects were merged on September 22, 2003.
Roger Clemens
William Roger Clemens (born August 4, 1962), nicknamed "Rocket", is an American former professional baseball pitcher who played 24 seasons in Major League Baseball (MLB) for four teams, most notably the Boston Red Sox and the New York Yankees. Clemens was one of the most dominant pitchers in major league history, tallying 354 wins, a 3.12 earned run average (ERA), and 4,672 strikeouts, the third-most all time. An 11-time All-Star and two-time World Series champion, he won seven Cy Young Awards during his career, more than any other pitcher in history. Clemens was known for his fierce competitive nature and hard-throwing pitching style, which he used to intimidate batters.
Clemens debuted in the MLB in 1984 with the Red Sox, whose pitching staff he anchored for 12 years. In 1986, he won the American League (AL) Cy Young Award, the AL Most Valuable Player (MVP) Award, and the All-Star Game MVP Award, and he struck out an MLB-record 20 batters in a single game (Clemens repeated the 20-strikeout feat 10 years later). After the 1996 season, Clemens left Boston via free agency and joined the Toronto Blue Jays. In each of his two seasons with Toronto, Clemens won a Cy Young Award, as well as the pitching triple crown by leading the league in wins, ERA, and strikeouts. Prior to the 1999 season, Clemens was traded to the Yankees where he won his two World Series titles. In 2001, Clemens became the first pitcher in major league history to start a season with a win-loss record of 20–1. In 2003, he reached his 300th win and 4,000th strikeout in the same game. Clemens left for the Houston Astros in 2004, where he spent three seasons and won his seventh Cy Young Award. He rejoined the Yankees in 2007 for one last season before retiring. He is the only pitcher in major league history to record over 350 wins and strike out over 4,500 batters.
Clemens was alleged by the Mitchell Report to have used anabolic steroids during his late career, mainly based on testimony given by his former trainer, Brian McNamee. Clemens firmly denied these allegations under oath before the United States Congress, leading congressional leaders to refer his case to the Justice Department on suspicion of perjury. On August 19, 2010, a federal grand jury at the U.S. District Court in Washington, D.C., indicted Clemens on six felony counts involving perjury, false statements and contempt of Congress. Clemens pleaded not guilty, but proceedings were complicated by prosecutorial misconduct, leading to a mistrial. The verdict from his second trial came in June 2012, when Clemens was found not guilty on all six counts of lying to Congress.
Clemens was born in Dayton, Ohio, the fifth child of Bill and Bess (Lee) Clemens. He is of German descent, his great-grandfather Joseph Clemens having immigrated in the 1880s. Clemens's parents separated when he was an infant. His mother soon married Woody Booher, whom Clemens considers his father. Booher died when Clemens was nine years old, and Clemens has said that the only time he ever felt envious of other players was when he saw them in the clubhouse with their fathers. Clemens lived in Vandalia, Ohio, until 1977, and then spent most of his high school years in Houston, Texas. At Spring Woods High School, Clemens played baseball for longtime head coach Charles Maiorana and also played football and basketball. He was scouted by the Philadelphia Phillies and Minnesota Twins during his senior year, but opted to go to college.
He began his college career pitching for San Jacinto College North in 1981, where he was 9–2. The New York Mets selected Clemens in the 12th round of the 1981 Major League Baseball draft, but he did not sign. He then attended the University of Texas at Austin, compiling a 25–7 record in two All-American seasons, and was on the mound when the Longhorns won the 1983 College World Series. He became the first player to have his baseball uniform number retired at the University of Texas. In 2004, the Rotary Smith Award, given to America's best college baseball player, was changed to the Roger Clemens Award, honoring the best pitcher.
At Texas, Clemens pitched 35 consecutive scoreless innings, an NCAA record that stood until Justin Pope broke it in 2001.
Clemens was selected in the first round (19th overall) of the 1983 MLB draft by the Boston Red Sox and quickly rose through the minor league system, making his MLB debut on May 15, 1984. An undiagnosed torn labrum threatened to end his career early; he would successfully undergo arthroscopic surgery at the hands of the then relatively unknown Dr. James Andrews.
In 1986, Clemens won the American League MVP award, finishing with a 24–4 record, 2.48 ERA, and 238 strikeouts. Clemens started the 1986 All-Star Game in the Astrodome and was named the Most Valuable Player of the contest by throwing three perfect innings and striking out two. He also won the first of his seven Cy Young Awards. When Hank Aaron said that pitchers should not be eligible for the MVP, Clemens responded: "I wish he were still playing. I'd probably crack his head open to show him how valuable I was." Clemens was the only starting pitcher since Vida Blue in 1971 to win a league MVP award until Justin Verlander won the award in 2011.
On April 29, 1986, Clemens became the first pitcher in MLB history to strike out 20 batters in a nine-inning game, against the Seattle Mariners at Boston's Fenway Park. Following his performance, Clemens made the cover of "Sports Illustrated" which carried the headline "Lord of the K's [strikeouts]." Other than Clemens, only Kerry Wood and Max Scherzer have matched the total. (Randy Johnson fanned 20 batters in nine innings on May 8, 2001. However, as the game went into extra innings, it is not categorized as occurring in a nine-inning game. Tom Cheney holds the record for any game: 21 strikeouts in 16 innings.) Clemens attributes his switch from what he calls a "thrower" to a "pitcher" to the partial season Hall of Fame pitcher Tom Seaver spent with the Red Sox in 1986.
Robert E. Howard
Robert Ervin Howard (January 22, 1906 – June 11, 1936) was an American author who wrote pulp fiction in a diverse range of genres. He is well known for his character Conan the Barbarian and is regarded as the father of the sword and sorcery subgenre.
Howard was born and raised in Texas. He spent most of his life in the town of Cross Plains, with some time spent in nearby Brownwood. A bookish and intellectual child, he was also a fan of boxing and spent some time in his late teens bodybuilding, eventually taking up amateur boxing. From the age of nine he dreamed of becoming a writer of adventure fiction but did not have real success until he was 23. Thereafter, until his death by suicide at age 30, Howard's writings were published in a wide selection of magazines, journals, and newspapers, and he became proficient in several subgenres. His greatest success occurred after his death.
Although a Conan novel was nearly published in 1934, Howard's stories were never collected during his lifetime. The main outlet for his stories was "Weird Tales", where Howard created Conan the Barbarian. With Conan and his other heroes, Howard helped fashion the genre now known as sword and sorcery, spawning many imitators and giving him a large influence in the fantasy field. Howard remains a highly read author, with his best works still reprinted, and is one of the best-selling fantasy writers of all time.
Howard's suicide and the circumstances surrounding it have led to speculation about his mental health. His mother had been ill with tuberculosis his entire life, and upon learning she had entered a coma from which she was not expected to wake, he walked out to his car and shot himself in the head.
Howard was born on January 22, 1906, in Peaster, Texas, the only son of a traveling country physician, Dr. Isaac Mordecai Howard, and his wife, Hester Jane Ervin Howard. His early life was spent wandering through a variety of Texas cowtowns and boomtowns: Dark Valley (1906), Seminole (1908), Bronte (1909), Poteet (1910), Oran (1912), Wichita Falls (1913), Bagwell (1913), Cross Cut (1915), and Burkett (1917).
During Howard's youth his parents' relationship began to break down. The Howard family had problems with money which may have been exacerbated by Isaac Howard investing in get-rich-quick schemes. Hester Howard, meanwhile, came to believe that she had married below herself. Soon the pair were actively fighting. Hester did not want Isaac to have anything to do with their son. She had a particularly strong influence on her son's intellectual growth. She had spent her early years helping a variety of sick relatives, contracting tuberculosis in the process. She instilled in her son a deep love of poetry and literature, recited verse daily and supported him unceasingly in his efforts to write.
Other experiences would later seep into his prose. Although he loved reading and learning, he found school to be confining and began to hate having anyone in authority over him. Experiences watching and confronting bullies revealed the omnipresence of evil and enemies in the world, and taught him the value of physical strength and violence. As the son of the local doctor, Howard had frequent exposure to the effects of injury and violence, due to accidents on farms and oil fields combined with the massive increase in crime that came with the oil boom. Firsthand tales of gunfights, lynchings, feuds, and Indian raids developed his distinctly Texan, hardboiled outlook on the world. Sports, especially boxing, became a passionate preoccupation. At the time, boxing was the most popular sport in the country, with a cultural influence far in excess of what it is today. James J. Jeffries, Jack Johnson, Bob Fitzsimmons, and later Jack Dempsey were the names that inspired him during those years, and he grew up a lover of all contests of violent, masculine struggle.
Voracious reading, along with a natural talent for prose writing and the encouragement of teachers, created in Howard an interest in becoming a professional writer. From the age of nine he began writing stories, mostly tales of historical fiction centering on Vikings, Arabs, battles, and bloodshed. One by one he discovered the authors who would influence his later work: Jack London and his stories of reincarnation and past lives, most notably "The Star Rover" (1915); Rudyard Kipling's tales of subcontinent adventures; the classic mythological tales collected by Thomas Bulfinch. Howard was considered by friends to be eidetic, and astounded them with his ability to memorize lengthy reams of poetry with ease after one or two readings.
In 1919, when Howard was thirteen, Dr. Howard moved his family to the Central Texas hamlet of Cross Plains, and there the family would stay for the rest of Howard's life. Howard's father bought a house in the town with a cash down payment and made extensive renovations. That same year, sitting in a library in New Orleans while his father took medical courses at a nearby college, Howard discovered a book concerned with the scant fact and abundant legends surrounding an indigenous culture in ancient Scotland called the Picts.
In 1920, on February 17, the Vestal Well within the limits of Cross Plains struck oil, and Cross Plains became an oil boomtown. Thousands of people arrived in the town looking for oil wealth. New businesses sprang up from scratch and the crime rate increased to match. Cross Plains' population quickly grew from 1,500 to 10,000; the town suffered overcrowding, traffic ruined its unpaved roads, and vice crime exploded, but it also used its new wealth on civic improvements, including a new school, an ice manufacturing plant, and new hotels. Howard hated the boom and despised the people who came with it. He was already poorly disposed towards oil booms, as they had caused the constant traveling of his early years, but this was aggravated by what he perceived to be the effect oil booms had on towns.
At fifteen Howard first sampled pulp magazines, especially "Adventure" and its star authors Talbot Mundy and Harold Lamb. The next few years saw him creating a variety of series characters. Soon he was submitting stories to magazines such as "Adventure" and "Argosy". Rejections piled up, and with no mentors or instructions of any kind to aid him, Howard became a writing autodidact, methodically studying the markets and tailoring his stories and style to each.
In the fall of 1922, when Howard was sixteen, he temporarily moved to a boarding house in the nearby city of Brownwood to complete his senior year of high school, accompanied by his mother. It was in Brownwood that he first met friends his own age who shared his interest not only for sports and history but also writing and poetry. The two most important of these, Tevis Clyde Smith and Truett Vinson, shared his Bohemian and literary outlook on life, and together they wrote amateur papers and magazines, exchanged long letters filled with poetry and existential thoughts on life and philosophy, and encouraged each other's writing endeavors. Through Vinson, Howard was introduced to "The Tattler", the newspaper of the Brownwood High School. It was in this publication that Howard's stories were first printed. The December 1922 issue featured two stories, "'Golden Hope' Christmas" and "West is West," which won gold and silver prizes respectively.
Howard graduated from high school in May 1923 and moved back to Cross Plains. On his return to his home town, he engaged in a self-created regimen of exercise, including cutting down oak trees and chopping them into firewood every day, lifting weights, punching a bag and springing exercises; eventually building himself from a skinny teenager into a more muscled, burly form.
Howard spent his late teens working odd jobs around Cross Plains, all of which he hated. In 1924, Howard returned to Brownwood to take a stenography course at Howard Payne College, this time boarding with his friend Lindsey Tyson instead of his mother. Howard would have preferred a literary course but was not allowed to take one for some reason. Biographer Mark Finn suggests that his father refused to pay for such a non-vocational education. In the week of Thanksgiving that year, and after years of rejection slips and near acceptances, he finally sold a short caveman tale titled "Spear and Fang", which netted him the sum of $16 and introduced him to the readers of a struggling pulp called "Weird Tales".
Now that his career in fiction had begun, Howard dropped out of Howard Payne College at the end of the semester and returned to Cross Plains. Shortly afterwards, he received notice that another story, "The Hyena," had been accepted by "Weird Tales". During the same period, Howard made his first attempt to write a novel, a loosely autobiographical book modeled on Jack London's "Martin Eden" and titled "Post Oaks & Sand Roughs". The book was otherwise of middling quality and was never published in the author's lifetime but it is of interest to Howard scholars for the personal information it contains. Howard's alter ego in this novel is Steve Costigan, a name he would use more than once in the future. The novel was finished in 1928 but not published until long after his death.
"Weird Tales" paid on publication, meaning that Howard had no money of his own at this time. To remedy this, he took a job writing oil news for the local newspaper "Cross Plains Review" at $5 per column. It was not until July 1925 that Howard received payment for his first printed story. Howard lost his job at the newspaper in the same year and spent one month working in a post office before quitting over the low wages. His next job, at the Cross Plains Natural Gas Company, did not last long due to his refusal to be subservient to his boss. He did manual labor for a surveyor for a time before beginning a job as a stenographer for an oil company.
In conjunction with his friend Tevis Clyde Smith, he dabbled heavily in verse, writing hundreds of poems and getting dozens published in "Weird Tales" and assorted poetry journals. With poor sales, and many publishers recoiling from his subject matter, Howard ultimately judged poetry writing a luxury he could not afford, and after 1930 he wrote little verse, instead dedicating his time to short stories and higher-paying markets. Nevertheless, as a result of this apprenticeship, his stories increasingly took on the aura of "prose-poems" filled with hypnotic, dreamy imagery and a power lacking in most other pulp efforts of the time.
Further story sales to "Weird Tales" were sporadic but encouraging, and soon Howard was a regular in the magazine. His first cover story was for "Wolfshead", a werewolf story published when he was only twenty. On reading "Wolfshead" in "Weird Tales" Howard became dismayed with his writing. He quit his stenographer's job to work at Robertson's Drug Store, where he rose to become head soda jerk on $80 per week. However, he resented the job itself and worked such long hours every day of the week that he became ill. He relaxed by visiting the Neeb Ice House, to which he was introduced by an oil-field worker befriended at the drug store, to drink and began to take part in boxing matches. These matches became an important part of his life; the combination of boxing and writing provided an outlet for his frustrations and anger.
In August 1926, Howard quit his exhausting job at the drug store and, in September, returned to Brownwood to complete his bookkeeping course. It was during this August that he began working on the story that would become "The Shadow Kingdom", one of the most important works of his career. While at college, Howard wrote for their newspaper, "The Yellow Jacket". One of the short stories printed in this newspaper was a comedy called "Cupid vs. Pollux." This story is Howard's earliest surviving boxing story known to exist; it is told in the first person, uses elements of a traditional tall-tale and is a fictionalized account of Howard (as "Steve") and his friend Lindsey Tyson (as "Spike") training for a fight. This story and the elements it uses would also be important in Howard's literary future.
In May 1927, after having to return home due to contracting measles and then being forced to retake the course, Howard passed his exams. While waiting for the official graduation in August, he returned to writing, including a re-write of "The Shadow Kingdom." He rewrote it again in August and submitted it to "Weird Tales" in September. This story was an experiment with the entire concept of the "weird tale"— horror fiction as defined by practitioners such as Edgar Allan Poe, A. Merritt, and H. P. Lovecraft, mixing elements of fantasy, horror and mythology with historical romance, action and swordplay into thematic vehicles never before seen, a new style of tale which ultimately became known as "sword and sorcery". Featuring Kull, a barbarian precursor to later Howard heroes such as Conan, the tale hit "Weird Tales" in August 1929 and received fanfare from readers. "Weird Tales" editor Farnsworth Wright bought the story for $100, the most Howard had earned for a story at this time, and several more Kull stories followed. However, all but two were rejected, convincing Howard not to continue the series.
In March 1928, Howard salvaged and re-submitted to "Weird Tales" a story rejected by the more popular pulp "Argosy", and the result was "Red Shadows", the first of many stories featuring the vengeful Puritan swashbuckler Solomon Kane. Appearing in the August 1928 issue of "Weird Tales", the character was a big hit with readers and this was the first of Howard's characters to sustain a series in print beyond just two stories (seven Kane stories were printed in the 1928–32 period). As the magazine published the Solomon Kane tale before Kull, this can be considered the first published example of sword and sorcery.
1929 was the year Howard broke out into other pulp markets, rather than just "Weird Tales". The first story he sold to another magazine was "The Apparition in the Prize Ring," a boxing-related ghost story published in the magazine "Ghost Stories". In July of the same year, "Argosy" finally published one of Howard's stories, "Crowd-Horror", which was also a boxing story. Neither developed into ongoing series, however.
After several minor successes and false starts, he struck gold again with a new series based on one of his favorite passions: boxing. July 1929 saw the debut of Sailor Steve Costigan in the pages of "Fight Stories". A tough-as-nails, two-fisted mariner with a head of rocks and occasionally a heart of gold, Costigan began boxing his way through a variety of exotic seaports and adventure locales, becoming so popular in "Fight Stories" that the same editors began using additional Costigan episodes in their sister magazine "Action Stories". The series saw a return to Howard's use of humor and (unreliable) first-person narration, with the combination of a traditional tall tale and slapstick comedy. Stories sold to "Fight Stories" provided Howard with a market just as stable as "Weird Tales".
Due to his success in "Fight Stories", Howard was contacted by the publisher Street & Smith in February 1931 with a request to move the Steve Costigan stories to their own pulp "Sport Story Magazine". Howard refused but created a new, similar series just for them based on a boxer called Kid Allison. Howard wrote ten stories for this series but "Sport Story" only published three of them.
With solid markets now all buying up his stories regularly, Howard quit taking college classes, and indeed would never again work a regular job. At twenty-three years of age, from the middle of nowhere in Texas, he had become a full-time writer; he was making good money and his father began bragging about his success, not to mention buying multiple copies of his work in the pulps.
Howard's "Celtic phase" began in 1930, during which he became fascinated by Celtic themes and his own Irish ancestry. He shared this enthusiasm with Harold Preece, a friend made in Austin in the summer of 1927; Howard's letters to both Preece and Clyde Smith contain much Irish-related material and discussion. Howard taught himself a little Gaelic, examined the Irish parts of his family history and began writing about Irish characters. Turlogh Dubh O'Brien and Cormac Mac Art were created at this time, although he was not able to sell the latter's stories.
When Farnsworth Wright started a new pulp in 1930 called "Oriental Stories", Howard was overjoyed—here was a venue where he could run riot through favorite themes of history and battle and exotic mysticism. During the four years of the magazine's existence, he crafted some of his very best tales, gloomy vignettes of war and rapine in the Middle and Far East during the Middle Ages and the early Renaissance, tales that rival even his best Conan stories for their historical sweep and splendor. In addition to series characters such as Turlogh Dubh O'Brien and Cormac Fitzgeoffrey, Howard sold a variety of tales depicting various times and periods from the fall of Rome to the fifteenth century. The magazine eventually ceased publication in 1934 due to the Depression, leaving several of Howard's stories aimed at this market unsold.
In August 1930 Howard wrote a letter to "Weird Tales" praising a recent reprint of H. P. Lovecraft's "The Rats in the Walls" and discussing some of the obscure Gaelic references used within. Editor Farnsworth Wright forwarded the letter to Lovecraft, who responded warmly to Howard, and soon the two "Weird Tales" veterans were engaged in a vigorous correspondence that would last for the rest of Howard's life. By virtue of this, Howard quickly became a member of the "Lovecraft Circle", a group of writers and friends all linked via the immense correspondence of H.P. Lovecraft, who made it a point to introduce his many like-minded friends to one another and encourage them to share stories, utilize each other's invented fictional trappings, and help each other succeed in the pulp field. In time this circle of correspondents has developed a legendary patina about it rivaling similar literary conclaves such as The Inklings, the Bloomsbury Group, and the Beats.
Howard was given the affectionate nickname "Two-Gun Bob" by virtue of his long explications to Lovecraft about the history of his beloved Southwest, and during the ensuing years he contributed several notable elements to Lovecraft's Cthulhu Mythos of horror stories (beginning with "The Black Stone", his Mythos stories also included "The Cairn on the Headland", "The Children of the Night" and "The Fire of Asshurbanipal"). He also corresponded with other "Weird Tale" writers such as Clark Ashton Smith, August Derleth, and E. Hoffmann Price.
The correspondence between Howard and Lovecraft contained a lengthy discussion on a frequent element in Howard's fiction, barbarism versus civilization. Howard held that civilization was inherently corrupt and fragile. This attitude is summed up in his famous line from "Beyond the Black River": "Barbarism is the natural state of mankind. Civilization is unnatural. It is a whim of circumstance. And barbarism must always ultimately triumph." Lovecraft held the opposite viewpoint, that civilization was the peak of human achievement and the only way forward. Howard countered by listing many historic abuses of the citizenry by so-called 'civilized' leaders. Howard initially deferred to Lovecraft but gradually asserted his own views, even coming to deride Lovecraft's opinions.
In 1930, with his interest in Solomon Kane dwindling and his Kull stories not catching on, Howard applied his new sword-and-sorcery and horror experience to one of his first loves: the Picts. His story "Kings of the Night" depicted King Kull conjured into pre-Christian Britain to aid the Picts in their struggle against the invading Romans, and introduced readers to Howard's king of the Picts, Bran Mak Morn. Howard followed up this tale with the now-classic revenge nightmare "Worms of the Earth" and several other tales, creating horrific adventures tinged with a Cthulhu-esque gloss and notable for their use of metaphor and symbolism.
With the onset of the Great Depression, many pulp markets reduced their schedules or went out of business entirely. Howard saw market after market falter and vanish. "Weird Tales" became a bimonthly publication, and pulps such as "Fight Stories", "Action Stories" and "Strange Tales" all folded. Howard was hit further when his savings were wiped out in 1931 by the failure of the Farmer's National Bank, and again when, after he transferred to another bank, that one failed as well.
Early 1932 saw Howard taking one of his frequent trips around Texas. He traveled through the southern part of the state with his main occupation being, in his own words, "the wholesale consumption of tortillas, enchiladas and cheap Spanish wine." In Fredericksburg, while overlooking sullen hills through a misty rain, he conceived of the fantasy land of Cimmeria, a bitter hard northern region home to fearsome barbarians. In February, while in Mission, he wrote the poem "Cimmeria".
It was also during this trip that Howard first conceived of the character of Conan. Later, in 1935, Howard claimed in a letter to Clark Ashton Smith that Conan "simply grew up in my mind a few years ago when I was stopping in a little border town on the lower Rio Grande." However, the character actually took nine months to develop.
Howard had originally used the name "Conan" for a Gaelic reaver in a past-life-themed story he completed in October 1931, which was published in the magazine "Strange Tales" in June 1932. Although the character swears by the god "Crom", that is his only link to the more famous successor character.
Going back home he developed the idea, fleshing out a new invented world—his Hyborian Age—and populating it with all manner of countries, peoples, monsters, and magic. Howard loved history and enjoyed writing historical stories. However, the research necessary for a purely historical setting was too time consuming for him to engage in on a regular basis and still earn a living. The Hyborian Age, with its varied settings similar to real places and eras of history, allowed him to write pseudo-historical fiction without such problems. He may have been inspired in the creation of his setting by Thomas Bulfinch's 1913 edition of his "Bulfinch's Mythology" called "The Outline of Mythology", which contained stories from history and legend, including many which were direct influences on Howard's work. Another potential inspiration is G. K. Chesterton's "The Ballad of the White Horse" and Chesterton's concept that "it is the chief value of legend to mix up the centuries while preserving the sentiment."
By March, Howard had recycled an unpublished Kull story called "By This Axe I Rule!" into his first Conan story. The central plot remains that of a barbarian having become king of a civilized country and a conspiracy to assassinate him. However, he removed an entire subplot concerning a couple's romance and created a new one with a supernatural element; the story was re-titled "The Phoenix on the Sword", an element from this new subplot. Howard immediately went on to write two more Conan stories. The first of these was "The Frost-Giant's Daughter", an inversion of the Greek myth surrounding Apollo and Daphne, set much earlier in Conan's life. The last of the initial trio was "The God in the Bowl", which went through three drafts and has a slower pace than most Conan stories. This one is a murder mystery filled with corrupt officials and serves as Conan's introduction into civilization, while showing that he is a more decent person than the civilized characters. Before the end of the month, he sent the first two stories to "Weird Tales" in the same package, with the third following a few days later.
With these three completed he created an essay called "The Hyborian Age" in order to flesh out his setting in more detail. There were four drafts of this essay, starting with a two-page outline and finishing as an 8,000-word essay. Howard supplemented this with two sketched maps and an additional short piece entitled "Notes on Various Peoples of the Hyborian Age."
In a letter dated March 10, 1932, Farnsworth Wright rejected "The Frost-Giant's Daughter" but noted that "The Phoenix on the Sword" had "points of real excellence" and suggested changes. "The God in the Bowl" would also be rejected, and so a potential fourth Conan story concerning Conan as a thief was abandoned at the synopsis stage. Instead of abandoning the entire Conan concept, as had happened with previous failed characters, Howard rewrote "The Phoenix on the Sword" based on Wright's feedback, incorporating material from his essay. Both this revision and the next Conan story, "The Tower of the Elephant", sold with no problems. Howard had written nine Conan stories before the first saw print.
Conan first appeared to the public in "Weird Tales" in December 1932 and was such a hit that Howard was eventually able to place seventeen Conan stories in the magazine between 1933 and 1936. Howard took a short break from Conan after his initial burst of stories, returning to the character in mid-1933. These stories, his "middle period," are routine and considered the weakest of the series. Stories such as "Iron Shadows in the Moon" were often simply Conan rescuing a damsel in distress from a monster in some ruins. While earlier Conan stories had three or four drafts, some in this period had only two, including the final version. "Rogues in the House" is the only Conan story to be completed in a single draft. These stories sold easily, and they include the first and second Conan stories to feature on the cover of "Weird Tales", "Black Colossus" and "Xuthal of the Dusk". Howard's motivation for quick and easy sales at this time was influenced by the collapse of some other markets, such as "Fight Stories", in the Depression.
Also in this period, Howard wrote the first of the James Allison stories, "Marchers of Valhalla." Allison is a disabled Texan who begins to recall his past lives, the first of which is in the later part of Howard's new Hyborian age. In a letter to Clark Ashton Smith in October 1933, he wrote that its sequel "The Garden of Fear" was "dealing with one of my various conceptions of the Hyborian and post-Hyborian world."
In May 1933, a British publisher, Denis Archer, contacted Howard about publishing a book in the United Kingdom. Howard submitted a batch of his best available stories, including "The Tower of the Elephant" and "The Scarlet Citadel", on June 15. In January 1934 the publisher rejected the collection but suggested a novel instead. Though the publisher was "exceedingly interested" in the stories, the rejection letter explained that there was a "prejudice that is very strong over here just now against collections of short stories." The suggested novel, however, could be published by Pawling and Ness Ltd in a first edition of 5,000 copies for lending libraries.
In late 1933 Howard returned to Conan, starting again slightly awkwardly with "The Devil in Iron". However, this was followed by the beginning of the later group of Conan stories which "carry the most intellectual punch," starting with "The People of the Black Circle".
Howard probably began to work on the novel in February 1934, starting to write "Almuric" (a non-Conan, sword and planet science fiction novel) but abandoning it halfway through. This was followed by another abortive attempt at a novel, this time a Conan novel which later became "Drums of Tombalku". The third attempt was more successful, resulting in Howard's only Conan novel, "The Hour of the Dragon", which was probably started on or around March 17, 1934. This novel combines elements of two previous Conan stories, "Black Colossus" and "The Scarlet Citadel," with Arthurian myth, and provides an overview of Conan and the Hyborian Age for the new British audience. Howard sent his final draft to Denis Archer on May 20, 1934. He had worked exclusively on the novel for two months, writing approximately 5,000 words per day, seven days a week. Although he told acquaintances that he had little hope for the novel, he had put a lot of effort into it. However, the publisher went into receivership in late 1934, before it could print the novel. The story was briefly held as part of the company's assets before being returned to Howard. It was later printed in "Weird Tales" as a serial over five months, beginning with the December 1935 issue.
Howard may have begun losing interest in Conan in late 1934, with a growing desire to write westerns. He began to write, although never finished, a Conan story called "Wolves Beyond the Border". This was the first Conan tale to have an explicit (Robert W. Chambers-influenced) American setting, although American themes had appeared earlier, and the only one in which Conan himself does not appear. His next story was based on his unfinished material and became "Beyond the Black River", which not only used the different American-frontier setting but was also, in Howard's own words, a "Conan yarn without sex interest." In another twist, Conan and the other protagonists win, at best, a Pyrrhic victory, a rarity in pulp magazines. This was followed by another experimental Conan story, "The Black Stranger", with a similar setting. The story was, however, rejected by "Weird Tales", which was rare for later Conan stories. Howard's next piece, "The Man-Eaters of Zamboula", was more formulaic and was accepted by the magazine with no problems. Howard only wrote one more Conan story, "Red Nails," which was influenced both by his personal experiences at the time and an extrapolation of his views on civilization.
The character of Conan had a wide and enduring influence among other "Weird Tales" writers, including C. L. Moore and Fritz Leiber, and over the ensuing decades the genre of sword and sorcery grew up around Howard's masterwork, with dozens of practitioners evoking Howard's creation to one degree or another.
In spring 1933, Howard started to place work with Otis Adelbert Kline, a former pulp writer, as his agent. Kline encouraged him to try writing in other genres in order to expand into different markets. Kline's agency was successful in finding outlets for more of Howard's stories and even placed works that had been rejected when Howard was marketing himself alone. Howard continued to sell directly to "Weird Tales", however.
Howard wrote one of the first "Weird Western" stories ever created, "The Horror from the Mound," published in the May 1932 issue of "Weird Tales". This genre acted as a bridge between his early "weird" stories (a contemporary term for horror and fantasy) and his later straight western tales.
He tried writing detective fiction but hated reading mystery stories and disliked writing them; he was not successful in this genre. More successfully, in late 1933 Howard took a character conceived in his youth, El Borak, and began using him in mature, professional tales of World War I-era Middle Eastern adventure that landed in "Top-Notch", "Complete Stories", and "Thrilling Adventures". The 1920s version was a treasure-hunting adventurer, but the 1930s version, first seen in "The Daughter of Erlik Khan" in the December 1934 issue of "Top-Notch", was a grim gun-fighter keeping the peace after having gone native in Afghanistan. The stories have a lot in common with those of Talbot Mundy, Harold Lamb and T. E. Lawrence, with Western themes and Howard's hardboiled style of writing. As with his other series, he created another character in the same vein, Kirby O'Donnell, but this character lacked the grim, western elements and was not as successful.
In the years since Conan had been created, Howard found himself increasingly fascinated with the history and lore of Texas and the American Southwest. Many of his letters to H. P. Lovecraft ran for a dozen pages or more, filled with stories he had picked up from elderly Civil War veterans, Texas Rangers, and pioneers. His Conan stories began featuring western elements, most notably in "Beyond the Black River", "The Black Stranger", and the unfinished "Wolves Beyond the Border". By 1934 some of the markets killed off by the Depression had come back, and "Weird Tales" was over $1,500 behind on payments to Howard. The author therefore stopped writing weird fiction and turned his attentions to this steadily growing passion.
The first of Howard's most commercially successful series (within his own lifetime) was started in July 1933. "Mountain Man" was the first of the Breckinridge Elkins stories, humorous westerns in a similar style to his earlier Sailor Steve Costigan stories and again featuring an exaggerated, cartoonish version of Howard himself as the main character. Written as a tall tale in the vein of Texas "Tall Lying" stories, it first appeared in the March–April 1934 issue of "Action Stories" and was so successful that other magazines asked Howard for similar characters. Howard created Pike Bearfield for "Argosy" and Buckner J. Grimes for "Cowboy Stories". "Action Stories" published a new Elkins story every issue without fail until well after Howard's death. At Kline's suggestion, he also created "A Gent from Bear Creek", a Breckinridge Elkins novel comprising existing short stories and new material.
Conan remained the only character that Howard ever spoke of with his friends in Texas and the only one in whom they seemed interested. It is possible that Breckinridge Elkins and the other characters in his stories were too close to home for Howard to be entirely comfortable discussing them.
In the spring of 1936, Howard sold a series of "spicy" stories to "Spicy-Adventure Stories". The "spicy" series of pulp magazines dealt in stories that were considered borderline softcore pornography at the time but are now similar to romance novels. These stories, which Howard referred to as "bubby-twisters", featured the character Wild Bill Clanton and were published under the pseudonym Sam Walser.
Howard is only known to have had one girlfriend in his life, Novalyne Price. Price was an ex-girlfriend of Tevis Clyde Smith, one of Howard's best friends, whom she had known since high school; the two had remained friends after their relationship ended. She first met Howard in spring 1933, when Howard was visiting Smith after driving his mother to a Brownwood clinic. Howard and Smith drove to the Price farm and Smith introduced his friends to each other. Price was an aspiring writer, had heard of Howard from Smith in the past, and was enthusiastic to meet him in person. However, he was not what she expected. She wrote in her diary about this first meeting: "This man was a writer! Him? It was unbelievable. He was not dressed as I thought a writer should dress." They parted after a drive and would not see each other again for over a year.
In late 1934 Price got a job as a schoolteacher at Cross Plains High School through her cousin, the head of the English department. When Howard came up in conversation with her new colleagues she defended him from accusations of being a "freak" and "crazy," then phoned his house and left a message. The call was not returned, so she tried a few more times. Price visited the Howard house in person after having her telephone calls blocked by a passive-aggressive Hester Howard. After a drive through town they arranged their first date.
Through much of the next two years they dated on and off, discussing writing, philosophy, history, religion, reincarnation and much else. Both considered marriage, but never at the same time. Price became ill from overwork in mid-1935. Her doctor, a friend of Howard's father, advised her to end the relationship and get a job in a different state. Despite agreeing to this, she met with Howard soon after being discharged. Howard, however, was too preoccupied with the state of his mother's health to give her the attention she wanted. Their relationship did not last much longer.
Not considering herself to be in an exclusive relationship, Price began dating one of Howard's best friends, Truett Vinson. Howard discovered his friends' relationship while he and Truett were on a week-long trip together to New Mexico (the same trip that inspired much of the final Conan story "Red Nails"). The relationship between the couple was irrevocably scarred, but they continued visiting with each other as friends until May 1936, when Price left Cross Plains for Louisiana State University to get a graduate degree. The two never spoke or wrote to each other again.
In an effort to improve her memory and writing, Price began recording all her daily conversations into a journal, in the process preserving an intimate record of her time with Howard. This was useful years later when she wrote of their relationship in a book called "One Who Walked Alone", which was the basis for the 1996 film "The Whole Wide World" starring Vincent D'Onofrio as Howard and Renée Zellweger as Price.
By 1936, almost all of Howard's fiction writing was being devoted to westerns. The novel "A Gent from Bear Creek" was due to be published by Herbert Jenkins in England, and by all accounts it looked as if he was finally breaking out of the pulps and into the more prestigious book market. However, life was becoming especially difficult for Howard. All of his close friends had married and were immersed in their careers, Novalyne Price had left Cross Plains for graduate school, and his most reliable market, "Weird Tales", had grown far behind on its payments. His home life was also falling apart. Having suffered from tuberculosis for decades, his mother was finally nearing death. The constant interruptions of care workers at home, combined with frequent trips to various sanatoriums for her care, made it nearly impossible for Howard to write.
In hindsight, there were hints about Howard's plans. Several times in 1935–36, whenever his mother's health declined, he made veiled allusions to his father about planning suicide, which his father did not understand at the time. When speaking to Novalyne Price, he had made references to being in his "sere and yellow leaf." The words sounded familiar to her, but it was only in early June 1936 that she found the source in "Macbeth".
In the weeks before his suicide, Howard wrote to Kline giving his agent instructions on what to do in case of his death, wrote his last will and testament, and borrowed a .380 Colt Automatic from his friend Lindsey Tyson. On June 10, he drove to Brownwood and bought a burial plot for the whole family. On the night before his suicide, when his father confirmed that his mother was finally dying, Howard asked where his father would go afterwards. Isaac Howard replied that he would go wherever his son went, thinking he meant to leave Cross Plains. It is possible that Howard thought his father would join him in ending their lives together as a family.
In June 1936, as Hester Howard slipped into her final coma, her son maintained a death vigil with his father and friends of the family, getting little sleep, drinking huge amounts of coffee, and growing more despondent. On the morning of June 11, 1936, Howard asked one of his mother's nurses, a Mrs. Green, if she would ever regain consciousness. When she told him no, he walked out to his car in the driveway, took the pistol from the glove box, and shot himself in the head. His father and another doctor rushed out, but the wound was too grievous for anything to be done. Howard lived for another eight hours, dying at 4 pm; his mother died the following day. The story occupied the entirety of that week's edition of the "Cross Plains Review", along with the publication of Howard's "A Man-Eating Jeopard". On June 14, 1936 a double funeral service was held at Cross Plains First Baptist Church, and both were buried in Greenleaf Cemetery in Brownwood, Texas.
Robert E. Howard's health, especially his mental health, has been the focus of biographical and critical analysis of his life. In terms of physical health, Howard had a weak heart which he treated by taking digoxin. The precise nature of Howard's mental health has been much debated, both during his life and following his suicide. Three main points of view exist: some have declared that Howard suffered from an Oedipal complex or similar; another viewpoint is that Howard suffered from major depressive disorder; the third view is that Howard had no disorders and that his suicide was a common reaction to stress.
Howard's attitude towards race and racism is debated. Howard used race as shorthand for physical characteristics and motivation. He would also employ some racial stereotypes, possibly for the sake of simplification. He also believed that, no matter which race won such conflicts, any victory would only ever be temporary, a view he expressed in stories such as "Wings in the Night".
Howard became less racist as he grew older, due to several influences. Later works include more sympathetic black characters, as well as other minority groups such as Jews. Significant works in terms of Howard's views on race are "Black Canaan" and "The Last White Man", which depict white protagonists at war with black barbarity.
Despite his era and location, Howard held feminist views, which he espoused in both his personal and professional life. Howard wrote to his friends and associates defending the achievements and capabilities of women. Strong female characters in Howard's fiction include the protofeminist Dark Agnes de Chastillon (first appearing in "Sword Woman", circa 1932–1934); the early modern pirate Helen Tavrel ("The Isle of Pirates' Doom", 1928); two pirates and Conan supporting characters, Bêlit ("Queen of the Black Coast", 1934) and Valeria of the Red Brotherhood ("Red Nails", 1936); as well as the Ukrainian mercenary Red Sonya of Rogatino ("The Shadow of the Vulture", 1934).
Howard had a phobia of aging and old age, a frequent subject in his writings, where characters were always eternally youthful and vigorous. He often spoke of a desire to die young.
Physically, Howard was tall and heavily built. He had a gentle, round face with a soft, deep voice. E. Hoffmann Price wrote that when he first met Howard in 1934 he "was busy trying to combine two images, that of the actual man, and that of the man who loomed up in those stirring yarns. The synthesis was never effected. He was packed with the whimsy and poetry which rang out in his letters, and blazed up in much of his published fiction, but, as is usually the case with writers, his appearance belied him. His face was boyish, not yet having squared off into angles; his blue eyes slightly prominent, had a wide-openness which did not suggest anything of the man's keen wit and agile fancy. That first picture persists—a powerful, solid, round-faced fellow, kindly and somewhat stolid seeming."
Howard enjoyed listening to other people's stories. He listened to tales told by family members growing up and, as an adult, collected stories from any older people willing to tell them. Howard's parents were both natural storytellers of different kinds and he grew up in early twentieth century Texas, an environment in which the telling of tall tales was a standard form of entertainment. Howard himself was a natural storyteller and later a professional storyteller. Combined, this often led to Howard embellishing facts in his communication, not with an intention to deceive but just to make a better story. This can be a problem for biographers reading his works and letters with an aim to understand Howard himself.
Howard had an almost photographic memory and could memorize long poems after only a few readings. He also enjoyed listening to music and drama on the radio. However, his main interests were sports and politics, and he would listen to match reports and election results as they came in.
After Howard bought a car in 1932, he and his friends took regular excursions across Texas and nearby states. His letters to Lovecraft also contain information about the history and geography he encountered on his journeys. Howard was also a practitioner and fan of boxing, as well as an avid weightlifter.
Howard's first published poem was "", in an early 1923 issue of local newspaper "The Baylor United Statement". His first published story was "Spear and Fang", sold in late November 1924 and published in the July 1925 issue of the pulp magazine "Weird Tales". However, Howard's first real success was the Sailor Steve Costigan series of humorous boxing stories, beginning with "The Pit of the Serpent" published in the July 1929 issue of the pulp magazine "Fight Stories".
Howard's distinctive literary style relies on a combination of existentialism, poetic lyricism, violence, grimness, humour, burlesque, and a degree of hardboiled realism. Howard's background in Texan tall tales is the source of the rhythm, drive and authenticity of his work. Howard used an economy of words to sketch out scenes in his stories; his ability to do so has been attributed to his skill with, and experience of, both tall tales and poetry. The tone of Howard's works, especially in the Conan stories, is hardboiled, dark and realistic. This is contrasted with the fantastic elements contained within the stories. Direct experience of the oil booms in early twentieth century Texas influenced Howard's view of civilization. The benefits of progress came with lawlessness and corruption. One of the most common themes in Howard's writing is based on his view of history, a repeating pattern of civilizations reaching their peak, becoming decadent, decaying and then being conquered by another people. Many of his works are set in the period of decay or among the ruins the dead civilization leaves behind.
The oil boom in Texas was "one of the most powerful influences on [Howard's] life and art", albeit one that he hated. Howard grew to despise the oil industry along with everyone and everything associated with it. The oil boom heavily influenced Howard's view of civilization as a constant cycle of boom and bust in the same manner as the oil industry in contemporary Texas. A town such as Cross Plains was built by pioneers. The boom brought civilization in the form of people and investment but also social breakdown. The oil people contributed little or nothing to the town in the long term and eventually left for the next oil field. This led Howard to see civilization as corrupting and society as a whole in decay.
Howard first bought a pulp magazine, a copy of "Adventure", when he was fifteen. The stories and writers featured in this magazine were a strong influence on Howard. In the same year, he sent his first story, "Bill Smalley and the Power of the Human Eye", to the magazine, although it was rejected. Despite repeated attempts during his life, Howard never sold a story to "Adventure".
Howard was both influenced by and an influence on his friend H. P. Lovecraft. Many ideas that he discussed in his letters to Lovecraft were repeated in his fiction, and the discussion with a fellow professional writer was useful to him. For his part, Lovecraft began to include Howardian action sequences in his own work, for example in "The Shadow Over Innsmouth". Much of 1931 was spent by Howard attempting to mimic Lovecraft's style. After that year, he had absorbed the parts of it that worked best for him and made them his own.
Another inspiration for Howard was theosophy and the theories of Helena Blavatsky and William Scott-Elliot, who described lost civilizations, ancient wisdom, races and magic, and sunken continents and lands such as Lemuria, Atlantis and Hyperborea; these ideas also influenced other writers of weird fiction.
Howard influenced and inspired later writers including Samuel R. Delany, David Gemmell, Michael Moorcock, Matthew Woodring Stover, Charles R. Saunders, Karl Edward Wagner, Paul Kearney, Steven Erikson, Joe R. Lansdale, and William King. His influence on the field of fantasy fiction is rivaled only by that of J. R. R. Tolkien, whose similarly inspired work created the modern genre of high fantasy.
Criticism of Robert E. Howard and his work often turns towards biographical details and "backhanded compliments". Some imply that Howard was an uneducated idiot savant and that his success was due more to luck than skill, although given the volume of quality work he produced, such an implication says more about the critic than about Howard.
The first professional critic to comment on Howard's work was Hoffman Reynolds Hays, reviewing the Arkham House collection "Skull-Face and Others" in "The New York Times Book Review". Under the title "Superman on a Psychotic Bender", Hays wrote, "Howard used a good deal of the Lovecraft cosmogony and demonology, but his own contribution was a sadistic conqueror who, when cracking heads did not solve his difficulties, had recourse to magic and the aid of Lovecraft's Elder Gods. The stories are written on a competent pulp level (a higher level, by the way, than that of some best sellers) and are allied to the Superman genre which pours forth in countless comic books and radio serials." Hays then moved on to comment on Howard himself and the genre in which he wrote.
In a review of Michel Houellebecq's essay "H. P. Lovecraft: Against the World, Against Life" published in the "Los Angeles Times" on April 17, 2005, Stephen King implied that Howard did not work at his craft and was merely pastiching Lovecraft. King described his disapproval of the sword and sorcery genre, and of superheroes, in "Danse Macabre", his book on the horror genre: "[It] is not fantasy at its lowest, but it still has a pretty tacky feel. ... Sword and sorcery novels and stories are tales of power for the powerless. The fellow who is afraid of being rousted by those young punks who hang around his bus stop can go home at night and imagine himself wielding a sword, his potbelly miraculously gone, his slack muscles magically transmuted into those 'iron thews' which have been sung and storied in the pulps for the last fifty years." He was similarly dismissive of Howard in particular.
An exception to this, in King's opinion (again from "Danse Macabre") was the author's Southern Gothic horror story "Pigeons From Hell." King referred to this work as "one of the finest horror stories of our century."
In the foreword to "Two-Gun Bob", a collection of essays on the subject of Howard, fellow fantasy fiction writer Michael Moorcock wrote: "The ability to paint a complex scene with a few expert brushstrokes remains Howard's greatest talent, and such talent can't, of course, ever be taught." Howard scholar Rob Roehm considers the use of the phrase "can't ever be taught" to be a variation on the recurrent theme of Howard's lack of skill or training. Moorcock's foreword goes on: "[Howard's] greatest hero, Conan the Barbarian, is his best, created from whole cloth, with a nod to Natty Bumppo and Tarzan of the Apes, and most closely representing the kind of person Howard, home-bound, mother-worshipping, suspicious of big cities, would in his dreams most like to be." Roehm counters that none of the assertions made about Howard in that comment are true, although none of them are unique to Moorcock either. In "Wizardry & Wild Romance", Moorcock has also written both that Howard "brought a brash, tough element to the epic fantasy that did as much to change the course of the American school away from previous writing and static imagery as Hammett, Chandler and the "Black Mask" pulp writers were to change the course of the American detective fiction" and that he "was never a commercially successful writer in his lifetime. His brash, hasty, careless style did not lend itself to the classier pulps. Most of his work appeared in the cheapest of them."
Lovecraft scholar S. T. Joshi wrote, in his biography "", that "The bulk of Howard's fiction is subliterary hackwork that does not even begin to approach genuine literature" and "The simple fact is, however, that his views are not of any great substance or profundity and that Howard's style is crude, slip-shod, and unwieldy. It is all just pulp—although, perhaps, a somewhat superior grade of pulp than the average."
Howard's earnings from writing, which grew alongside the milestones of his career, made him a comparatively wealthy man locally: during the Depression, Howard earned more than anyone else in Cross Plains. When Howard died, "Weird Tales" still owed him between $800 and $1,300.
Three publishing houses have put out collections of Howard's letters. In 1989 and 1991, Necronomicon Press published "Robert E. Howard: Selected Letters" in two volumes (1923–1930 and 1931–1936), edited by Glenn Lord with Rusty Burke, S. T. Joshi, and Steve Behrends. In 2007 and 2008, The Robert E. Howard Foundation Press published a three-volume set (1923–1929, 1930–1932, and 1933–1936) titled "The Collected Letters of Robert E. Howard", edited by Rob Roehm. Additionally, in 2009, Hippocampus Press published two volumes (1930–1932 and 1933–1936) of Howard's correspondence with H. P. Lovecraft as "A Means to Freedom: The Letters of H. P. Lovecraft & Robert E. Howard", edited by S. T. Joshi, David Schultz, and Rusty Burke.
Robert E. Howard's legacy has endured since his death in 1936. Howard's most famous character, Conan the Barbarian, has a pop-culture imprint that has been compared to such icons as Tarzan of the Apes, Count Dracula, Sherlock Holmes, and James Bond. Howard's critical reputation suffered at first, but over the decades works of Howard scholarship have been published. The first professionally published example was L. Sprague de Camp's "Dark Valley Destiny" (1983), which was followed by other works, including Don Herron's "The Dark Barbarian" (1984) and Mark Finn's "" (2006). Also in 2006, a charity, the Robert E. Howard Foundation, was created to promote further scholarship.
Following Robert E. Howard's death, the courts granted his estate to his father, who continued to work with Howard's literary agent Otis Adelbert Kline. Dr. Isaac Howard passed the rights on to his friend Dr. Pere Kuykendall, who passed them to his wife, Alla Ray Kuykendall, and daughter, Alla Ray Morris. Morris left the rights to the widow of her cousin, Zora Mae Bryant, who gave control to her children, Jack Baum and Terry Baum Rogers. The Baums eventually sold their rights to the Swedish (now US) company Paradox Entertainment.
Howard's first published novel, "A Gent from Bear Creek", was printed in Britain one year after his death. This was followed in the United States by a collection of Howard's stories, "Skull-Face and Others" (1946), and then the novel "Conan the Conqueror" (1950). The success of "Conan the Conqueror" led to a series of Conan books from publisher Gnome Press, the later editor of which was L. Sprague de Camp. The series led to the first Conan pastiche, the novel "The Return of Conan" by de Camp and Swedish Howard fan Björn Nyberg. De Camp eventually achieved control over the Conan stories and the Conan brand in general. Oscar Friend took over from Kline as literary agent, and he was followed by his daughter Kittie West. When she closed the agency in 1965, a new agent was required. De Camp was offered the role, but he recommended Glenn Lord instead. Lord began as a fan of Howard and had re-discovered many unpublished pieces that would otherwise have been lost, printing them in books such as "Always Comes Evening" (1957) and his own magazine "The Howard Collector" (1961–1973). He became responsible for the non-Conan works and, later, for restored, textually pure versions of the Conan stories themselves.
In 1966, de Camp made a deal with Lancer Books to republish the Conan series, which led to the "First Howard Boom" of the 1970s; the books' popularity was enhanced by the cover artwork of Frank Frazetta on most of the volumes. Many of Howard's works were reprinted (some printed for the first time) and they expanded into other media such as comic books and films. The Conan stories were increasingly edited by de Camp, and the series was extended by pastiches until they replaced the original stories. In response, a purist movement grew up demanding Howard's original, un-edited stories. The first boom ended in the mid-1980s. In the late 1990s and early 21st century, the "Second Howard Boom" occurred. This saw the printing of new collections of Howard's work, with the restored texts desired by purists. As before, the boom led to new comic books, films and computer games. Howard's house in Cross Plains has been converted into the Robert E. Howard Museum, which has been added to the National Register of Historic Places.
The works of Robert E. Howard have been adapted into multiple media, such as the two Conan films released in the 1980s starring Arnold Schwarzenegger. In addition to the Conan films, other adaptations have included "Kull the Conqueror" (1997) and "Solomon Kane" (2009). In television, the anthology series "Thriller" (1961) led the adaptations with an episode based on the short story "Pigeons from Hell". The bulk of the adaptations have, however, been based on Conan, with two animated series and one live-action series. Multiple audio dramas have been adapted, from professional audio books and plays to LibriVox recordings of works in the public domain. Computer games have focused on Conan, beginning with "" (1984) and continuing on to the MMO "" (2008). The first table-top roleplaying game based on Howard's works was TSR's "Conan Unchained!" (1984) for their game "Advanced Dungeons & Dragons". The first comic book adaptation was in the Mexican "Cuentos de Abuelito – La Reina de la Costa Negra" No. 17 (1952). Howard-related comic books continue to be published today.
Howard is an ongoing inspiration for and influence on heavy metal music. Several bands have adapted Howard's works to tracks or entire albums. The British metal band Bal-Sagoth is named after Howard's story "The Gods of Bal-Sagoth". | https://en.wikipedia.org/wiki?curid=26078 |
Revolutionary Association of the Women of Afghanistan
The Revolutionary Association of the Women of Afghanistan (RAWA) (Persian: جمعیت انقلابی زنان افغانستان, "Jamiʿat-e Enqelābi-ye Zanān-e Afghānestān"; Pashto: د افغانستان د ښڅو انقلابی جمعیت) is a women's organization based in Quetta, Pakistan, that promotes women's rights and secular democracy. It was founded in 1977 by Meena Keshwar Kamal, an Afghan student activist who was assassinated in February 1987 for her political activities. The group, which supports non-violent strategies, had its initial office in Kabul, Afghanistan, but moved to Pakistan in the early 1980s.
The organization aims to involve women of Afghanistan in both political and social activities aimed at acquiring human rights for women and continuing the struggle against the government of Afghanistan in favour of one based on democratic and secular, not fundamentalist, principles, in which women can participate fully. RAWA also strives for multilateral disarmament.
The group opposed the Soviet-supported government, the Mujahideen and Islamist governments that followed, and the present United States-supported Islamic Republic form of government.
RAWA was founded in Kabul in 1977 as an independent social and political organization of Afghan women fighting for human rights and social justice. The organization later moved parts of its work out of Afghanistan into Pakistan and established its main base there to work for Afghan women.
RAWA was founded by a group of Afghan women led by Meena Keshwar Kamal. At age 21, Kamal laid the foundations of RAWA through her work educating women. In 1979, she began a campaign against Soviet forces and the Soviet-supported government of Afghanistan. In 1981, she launched a bilingual magazine called "Payam-e-Zan" (Women's Message). In the same year, she visited France for the French Socialist Party Congress. She also established schools for Afghan refugee children, hospitals and handicraft centers for refugee women in Pakistan. Her activities and views, as well as her work against the government and religious fundamentalists, led to her assassination on February 4, 1987.
Much of RAWA's efforts in the 1990s involved holding seminars, press conferences and other fund-raising activities in Pakistan. RAWA also created secret schools, orphanages, nursing courses, and handicraft centers for women and girls in Pakistan and Afghanistan. They secretly filmed women in Afghanistan being beaten in the street by the religious police, and being executed. RAWA activities were forbidden by both the Taliban and the Northern Alliance, but they persisted, and even publicised their work in publications like "Payam-e-Zan".
RAWA is highly critical of the NATO war that began in 2001 because of the high rate of casualties among the civilian population. The organization went so far as to threaten to sue the United States government for unauthorized use of four photos from its website in propaganda handbills dropped on various cities in Afghanistan during the 2001 invasion.
After the defeat of the Taliban government by US and Afghan Northern Alliance forces, RAWA warned that the Northern Alliance was just as fundamentalist and dangerous as the Taliban. The group charged that the government led by President Hamid Karzai lacked support in most areas of Afghanistan, and that fundamentalists were enforcing laws that treated women as unfairly as they had been treated under the Taliban. These claims are supported by media reports about the Herat government of Ismail Khan, who created a religious police force that forced women to obey strict dress and behavior codes, as well as by many reports from Human Rights Watch.
A report released by Human Rights Watch in 2012 describes a situation in which women are punished by the judicial system for attempting to escape from domestic abuse, and occasionally even for being victims of rape. It says that Karzai is "[u]nwilling or unable to take a consistent line against conservative forces within the country," and that the lack of improvement in the plight of women in Afghanistan after ten years is "shocking."
RAWA collects funds to support hospitals, schools and orphanages, and still runs many projects in Pakistan and Afghanistan, including a project in conjunction with CharityHelp.org for orphan sponsorships.
RAWA has since restarted its mission inside Afghanistan and organized some of its events in Kabul, holding events annually on International Women's Day since 2006.
On September 27, 2006, a RAWA member appeared, perhaps for the first time in the organization's history, in a round-table debate on a local Afghan TV channel, Tolo TV. She debated a representative of a hard-line Islamic fundamentalist group, named the top leaders of the Islamist groups, and termed them "war criminal and responsible for the ongoing tragedy in Afghanistan". Tolo TV censored the audio of any sections where names were called.
On October 7, the Afghan Women's Mission (AWM) organized a fund-raising event for RAWA in Los Angeles, California. Eve Ensler was the chief guest, and Sonali Kolhatkar and Zoya, a member of RAWA, were among the speakers. "Zoya" is a pseudonym for an active member of RAWA's Foreign Committee who has traveled to many countries, including the U.S., Spain and Germany. In 2003, she received international acclaim for her biography "Zoya's Story – An Afghan Woman's Battle for Freedom."
In June 2008 Zoya testified to the Human Rights Commission of the German Parliament (Bundestag) to persuade the German government to withdraw its troops from Afghanistan.
In 2009, RAWA and other women's rights groups strongly condemned a "Shia Family Code", passed by President Hamid Karzai to garner support for his coalition government from hardline elements within Afghanistan's Shia communities and from the neighbouring Shia-dominated Islamic Republic of Iran. The code is claimed to legalise spousal rape within northern Afghan Shia Muslim communities and to endorse child marriage and purdah (seclusion) for married women. It also enshrines a discriminatory legal status for women in matters of inheritance and divorce.
In February 2012, the group commemorated the 25th anniversary of the death of RAWA founder Meena Keshwar Kamal with a gathering of women in Kabul.
RAWA has so far won 16 awards and certificates from around the world for its work for human rights and democracy. They include the sixth Asian Human Rights Award (2001); the French Republic's Liberty, Equality, Fraternity Human Rights Prize (2000); the Emma Humphreys Memorial Prize (2001); Glamour Women of the Year (2001); the 2001 SAIS-Novartis International Journalism Award from Johns Hopkins University; a Certificate of Special Congressional Recognition from the U.S. Congress (2004); an honorary doctorate from the University of Antwerp (Belgium) for outstanding non-academic achievements; and many other awards.
In the book "With All Our Strength: The Revolutionary Association of the Women of Afghanistan"
by Anne Brodsky, a number of world-known writers and human rights activists share their views of RAWA. They include Arundhati Roy who says "Each of us needs a little RAWA"; Eve Ensler, author of "The Vagina Monologues", who suggests that RAWA must stand as a model for every group working to end violence; Katha Pollitt, author of "Subject to Debate: Sense and Dissents on Women, Politics, and Culture"; Ahmed Rashid, author of "Taliban" and "Jihad"; and Asma Jahangir, Special Rapporteur of the United Nations and prominent women's rights activist of Pakistan are two Pakistanis who write about RAWA and express their support. | https://en.wikipedia.org/wiki?curid=26079 |
Railtrack
Railtrack was a group of companies that owned the track, signalling, tunnels, bridges, level crossings and all but a handful of the stations of the British railway system from 1994 until 2002. It was created as part of the privatisation of British Rail, listed on the London Stock Exchange, and was a constituent of the FTSE 100 Index. In 2002, after experiencing major financial difficulty, most of Railtrack's operations were transferred to the state-controlled non-profit company Network Rail. The remainder of Railtrack was renamed RT Group plc and eventually dissolved on 22 June 2010.
Founded under the Conservative Party legislation that privatised British Rail, Railtrack took control of the railway infrastructure on 1 April 1994 and was floated on the London Stock Exchange in May 1996. Robert Horton was its first chairman, leading the organisation through the early years of its existence up to 1999, including an industrial dispute from June to September 1994.
The Southall rail crash in 1997 and the Ladbroke Grove rail crash in 1999 called into question the effect that the fragmentation of the railway network had had on both safety and maintenance procedures.
In February 1999 the company launched a bond issue which caused a significant fall in Railtrack's share price.
Railtrack was severely criticised for both its performance in improving the railway infrastructure and for its safety record. Between its creation and late 1998, the company had a relatively calm relationship with its first economic regulator, John Swift QC, whose strategy was to encourage Railtrack to make commitments to improvement. But critics said that the regulator was not tough enough and that the company had, as a result, been able to abuse its monopoly position. In particular, its customers, the passenger and freight train operators, were desperate for regulatory action to force the company to improve its stewardship of the network and its performance. Swift had been appointed rail regulator in 1993 by the then Conservative transport secretary John MacGregor MP. When the Labour government took over after the general election in May 1997, the new transport secretary (and deputy prime minister) John Prescott took a much harder line. When Swift's five-year term of office expired on 30 November 1998, he was not reappointed. After an interim holding period, during which Chris Bolt, Swift's chief economic adviser and effective deputy, filled the regulator's position, in July 1999 a new rail regulator began a five-year term, and a new, much tougher regulatory era began.
The new rail regulator Tom Winsor had been Swift's general counsel (1993–95), and adopted a more interventionist and aggressive regulatory approach. At times the relationship was stormy, with Railtrack resisting pressure to improve its performance. In April 2000 it was reported in the Guardian that "Railtrack is adopting a deliberate 'culture of defiance' against the rail regulator". Gerald Corbett, Railtrack's chief executive at the time, and Winsor clearly saw things very differently from each other. Railtrack resisted regulatory action to improve its performance, and as the regulator probed ever more deeply, serious shortcomings in the company's stewardship of the network were revealed.
It was the Hatfield rail crash on 17 October 2000 that proved to be the defining moment in Railtrack's collapse. The subsequent major repairs undertaken across the whole British rail network are estimated to have cost in the order of £580 million. According to Christian Wolmar, author of "On the Wrong Line", the Railtrack board panicked in the wake of Hatfield. Because most of the engineering skill of British Rail had been sold off into the maintenance and renewal companies, Railtrack had no idea how many Hatfields were waiting to happen, nor did they have any way of assessing the consequence of the speed restrictions they were ordering – restrictions that brought the railway network to all but a standstill.
Regulatory and customer pressure had been increasing, and the company's share price began to fall sharply as it became apparent that there were serious shortcomings in the company's ability to tackle and solve its greatest problems.
Meanwhile, the costs of modernising the West Coast Main Line were spiralling. In 2001, Railtrack announced that, despite making a pre-tax profit before exceptional expenses of £199m, the £733m of costs and compensation paid out over the Hatfield crash plunged Railtrack from profit to a loss of £534m. This caused it to approach the government for funding, which it then controversially used to pay a £137m dividend to its shareholders in May 2001.
Railtrack plc was placed into railway administration under the Railways Act 1993 on 7 October 2001, following an application to the High Court by the then Transport Secretary, Stephen Byers. This was effectively a form of bankruptcy protection that allowed the railway network to continue operating despite the financial problems of the operator. The parent company, Railtrack Group plc, was not put into administration and continued operating its other subsidiaries, which included property and telecommunications interests.
For most of the year in administration, the government's position had been that the new company would have to live within the existing regulatory settlement (£14.8 billion for the five years 2001–2006). However, it soon became obvious that this was impossible, and that the aftermath of the Hatfield crash had revealed that the network required significantly more money for its operation, maintenance and renewal. On 23 November 2001 it was reported that a further £3.5 billion might be needed to keep the national railway network running, a sum disputed by Ernst & Young, the administrators.
To get Railtrack out of administration, the government had to go back to the High Court and present evidence that the company was no longer insolvent. The principal reason given by the government to the court for this assertion was the decision of the rail regulator – announced on 22 September 2002 – to carry out an interim review of the company's finances, with the potential to advance significant additional sums to the company. The High Court accepted that the company was not therefore insolvent, and the railway administration order was discharged on 2 October 2002.
Network Rail was formed with the principal purpose of acquiring and owning Railtrack plc. Originally the Government allowed private companies to bid for Railtrack plc. However, with limited availability of financial data on Railtrack, the political implications of owning the company and the very obvious preference of the government that the national railway network should go to Network Rail, no bidders apart from Network Rail were forthcoming, and Network Rail bought Railtrack plc on 3 October 2002. Railtrack plc was subsequently renamed Network Rail Infrastructure Limited.
Network Rail's acquisition of Railtrack plc was welcomed at the time by groups that represented British train passengers. The attitude of Railtrack's customers – the passenger- and freight-train operators – was much more cautious, especially as they were wary of a corporate structure under which shareholders' equity was not at risk if the company's new management mis-managed its affairs.
Railtrack's parent company, Railtrack Group, was placed into members' voluntary liquidation as RT Group on 18 October 2002. The Railtrack business (and its £7 billion debt) had been sold to Network Rail for £500 million, and the various diversified businesses it had created to seek to protect itself from the loss-making business of running a railway were disposed of to various buyers. £370 million held by Railtrack Group was frozen at the time the company went into administration and was earmarked to pay Railtrack shareholders an estimated 70p a share in compensation. The Group's interest in the partially built High Speed 1 line was also sold for £295m.
Railtrack shareholders formed two groups to press for increased compensation. A lawyer speaking for one of those groups remarked on GMTV that his strategy was to sue the government for incorrect and misleading information given at the time Railtrack was created, when John Major was Conservative Prime Minister. An increased offer of up to 262p per share was enough to convince the larger shareholder group, the Railtrack Action Group, to abandon legal action. The Chairman, Usman Mahmud, believed that legal action would not be successful without the support of management and major shareholders.
The legality of the decision to put Railtrack into railway administration was challenged by the smaller Railtrack Private Shareholders Action Group. Their action against the government alleged that the Secretary of State for Transport at the time – Stephen Byers MP – had, by deciding to cut off funding for Railtrack and asking the High Court to put the company into railway administration, committed the common law tort of misfeasance in public office. It is believed that there was £532 million available to Railtrack comprising £370 million in the bank and £162 million of an existing Department of Transport loan facility still available to be drawn down, but Stephen Byers MP cancelled this facility, causing shareholders to believe that he had broken the loan agreement.
This was the largest class action ever conducted in the English courts – there were 49,500 claimants, all small shareholders in Railtrack. Keith Rowley, QC, the barrister for the shareholders, alleged Byers had "devised a scheme by which he intended to injure the shareholders of Railtrack Group by impairing the value of their interests in that company without paying compensation and without the approval of Parliament".
The case was heard in the High Court in London in July 2005; some embarrassment was caused to Byers when he admitted that an answer he had given to a House of Commons Select Committee was inaccurate, but on 14 October 2005 the judge found that there was no evidence that Byers had committed the tort of misfeasance in public office.
The private shareholders decided not to appeal against the judgment, because there were no legal grounds for doing so. For many of them – who had contributed around £50 each, on average, to the fighting fund to bring the action – the case had served its purpose.
The circumstances in which Railtrack had been put into administration were highly controversial, with allegations in Parliament on 24 October 2005 that the company had not been insolvent at the time (7 October 2001) and therefore that the administration order had been wrongly obtained. This was because of the jurisdiction of the independent rail regulator – at the time Tom Winsor – to provide additional money to maintain the company's financial position. Sir Alan Duncan MP, then the shadow transport secretary, said in Parliament that this aspect of the affair – which was not dealt with in the shareholders' case in the High Court – was "perhaps the most shameful scar on the Government's honesty" and "an absolute scandal".
Byers apologised in the House of Commons on 17 October 2005 for having given a "factually inaccurate" reply to the Select Committee but said that he had not intended to mislead them. This personal statement to Parliament was not accepted by the MP who had asked the original question, and the matter was remitted to the House of Commons Standards and Privileges Committee for investigation. As a result of that committee's report, Mr Byers made another statement of apology to Parliament.
RT Group plc (in voluntary liquidation) made a number of payments to shareholders during the winding up of the company's affairs before finally being dissolved on 22 June 2010.
Gerald Corbett was the company's Chief Executive from 1997 until his resignation in November 2000. He was succeeded by Steve Marshall, who announced his own resignation in October 2001 and actually stood down in March 2002.
Geoffrey Howe was appointed Chairman of Railtrack Group (the part of the business not in administration) in March 2002. | https://en.wikipedia.org/wiki?curid=26081 |
Rock Bridge High School
Rock Bridge High School is a public high school located in southern Columbia, Missouri. The school serves grades 9 through 12 and is one of four high schools in Columbia Public Schools. It is located next to the Columbia Career Center. The mascot is the Bruin Bear.
Due to the increasing population of Columbia in the 1970s, and the crowding of David H. Hickman High School towards the end of the 1960s, the Columbia Board of Education decided to form a new high school. The board bought of land in southern Columbia and started the construction of the new high school. The name "Rock Bridge" was chosen because of the school's proximity to the natural rock bridge of Rock Bridge State Park.
Construction started in 1969 on the original portion of the building, consisting of 18 classrooms and one office area in the present-day east wing of the building. Many of these classrooms were connected by motorized folding walls, many of which are still operational and in use. The school was planned to open in 1971, but funding issues pushed back construction of the second phase of the building. As such, this original portion sat unused for a year or two while the second portion was not yet complete.
Construction started on the second portion in early 1972, which added the "Main Commons", another office area, the library, the gymnasium, and a few specialty classrooms underneath that area. In September 1973, with the completion of the second portion, Rock Bridge was considered "complete enough" to open and had a class of 583 students, mostly sophomores and juniors. The high school was the second centrally air-conditioned school built in Columbia, after Oakland Junior High School north of town. In 1974, the planetarium was completed with a capacity of nearly 90 people, featuring a state-of-the-art star ball and a full-dome projection system.
In 1979, the west wing opened; it was essentially a mirror image of the original 1971 building but with a finished basement. The west wing featured about ten general-purpose classrooms, as well as science, art, and band rooms, providing the school with a then-total of about 40 classrooms. Three new science classrooms, as well as a performing arts center, were added in 1992. Enrollment reached 1,000 in the 1995–96 school year.
In 2000, a large addition opened between the east and west wings, featuring seven science classrooms, eight English and social studies classrooms, seven foreign language classrooms, a new media center and three new computer labs. In January 2013 Rock Bridge opened a new auxiliary gym after freshmen began attending high school following the secondary redistricting in August. The area under the auxiliary gym featured a new wrestling room, making room for three new math classrooms in the former location. A new weight room was also added.
Rock Bridge runs on a block scheduling format during the hours of 8:55 am to 4:05 pm. This format is structured so that students have four 90-minute classes each day. However, most of these classes meet every other day, for a total of eight classes for the year. Unique to Rock Bridge is AUT, a free period for sophomores and upperclassmen allowing them to go anywhere within their 90-minute period. (Block scheduling was established in the 1994–95 school year.)
The school offers 18 Advanced Placement courses and numerous honors classes. However, RBHS does not weight grade point averages.
Rock Bridge High offers a variety of sports. Fall sports include cross country, football, golf, soccer, softball, swimming and diving, tennis and volleyball. Rock Bridge won football state championships with undefeated teams in 1975 and 1977. Winter sports include basketball and wrestling. Spring sports include baseball and track and field. Year-round sports include cheerleading and poms. Rock Bridge has won the girls' state golf title in 1999, 2002, 2003, 2011 and 2014, as well as the boys' state title in 2008, 2010, 2011, and 2012. At the club level, Rock Bridge fields a lacrosse team in the spring and an ultimate frisbee team year-round.
Rock Bridge's Student Council consists of 10 students from each class, with four upperclassmen making up the leadership team of Treasurer, Secretary, Vice-President, and President. Student Council is responsible for school-wide events that take place throughout the year, including Homecoming, Courtwarming, Prom, and Powderpuff.
Journalism has been part of the school's course offerings since it opened, including a newspaper, "The Rock", and a yearbook, "Flashback". "The Rock" began on letter-sized sheets published through a class; in the mid-1990s, after the school created a prerequisite class called Journalistic Writing, the paper became a monthly publication. The journalism department created a special edition magazine, "Southpaw", in 2005, and in 2011, an online news source, "Bearing News".
Red wolf
The red wolf ("Canis lupus rufus" or "Canis rufus") is a canine native to the southeastern United States, distinguished by the reddish-tawny color of its fur. Morphologically it is intermediate between the coyote and gray wolf, and it is very closely related to the eastern wolf of eastern Canada.
The red wolf's proper taxonomic classification (in essence, whether it is an admixture of wolf and coyote or a third, distinct species) has been contentious for well over a century and is still under debate. Because of this, it is sometimes excluded from endangered species lists despite its critically low numbers. Under the Endangered Species Act of 1973, the U.S. Fish and Wildlife Service currently recognizes the red wolf as endangered and grants it protected status. "Canis rufus" is not listed in the CITES Appendices of endangered species. Since 1996 the IUCN has listed it as a critically endangered species.
Red wolves were originally distributed throughout the southeastern and south-central United States, from the Atlantic Ocean west to central Texas, southeastern Oklahoma, and southwestern Illinois, and from the Ohio River Valley, northern Pennsylvania, and southern New York in the north to the Gulf of Mexico. The red wolf was nearly driven to extinction by the mid-1900s due to aggressive predator-control programs, habitat destruction, and extensive hybridization with coyotes. By the late 1960s, it occurred in small numbers only along the Gulf Coast of western Louisiana and eastern Texas.
Fourteen of these survivors were selected to be the founders of a captive-bred population, which was established at the Point Defiance Zoo and Aquarium between 1974 and 1980. After a successful experimental relocation to Bulls Island off the coast of South Carolina in 1978, the red wolf was declared extinct in the wild in 1980 so that restoration efforts could proceed. In 1987, the captive animals were released into the Alligator River National Wildlife Refuge on the Albemarle Peninsula in North Carolina, with a second release, since reversed, taking place two years later in the Great Smoky Mountains National Park. From the 63 red wolves released between 1987 and 1994, the population rose to as many as 100–120 individuals in 2012, but it has since fallen to about 40 individuals in 2018 and roughly 14 as of 2019, a decline attributed to lax regulation enforcement by the US Fish and Wildlife Service.
The taxonomic status of the red wolf is debated. It has been described as either a species with a distinct lineage, a recent hybrid of the gray wolf and the coyote, an ancient hybrid of the gray wolf and the coyote which warrants species status, or a distinct species that has undergone recent hybridization with the coyote.
The naturalists John James Audubon and John Bachman were the first to suggest that the wolves of the southern United States were different from wolves elsewhere in the country. In 1851 they recorded the "Black American Wolf" as "C. l." var. "ater", occurring in Florida, South Carolina, North Carolina, Kentucky, southern Indiana, southern Missouri, Louisiana, and northern Texas. They also recorded the "Red Texan Wolf" as "C. l." var. "rufus", occurring from northern Arkansas through Texas and into Mexico. In 1912 the zoologist Gerrit Smith Miller Jr. noted that the designation "ater" was unavailable and recorded these wolves as "C. l. floridanus".
In 1937, the zoologist Edward Alphonso Goldman proposed a new species of wolf, "Canis rufus". Goldman originally recognized three subspecies of red wolf, two of which are now extinct: the Florida black wolf ("Canis rufus floridanus"), ranging from Maine to Florida, has been extinct since 1908, and the Mississippi Valley red wolf ("Canis rufus gregoryi"), of the south-central United States, was declared extinct by 1980. By the 1970s, the Texas red wolf ("Canis rufus rufus") existed only in the coastal prairies and marshes of extreme southeastern Texas and southwestern Louisiana. These were removed from the wild to form a captive breeding program and reintroduced into eastern North Carolina in 1987.
In 1967, the zoologists Barbara Lawrence and William H. Bossert argued that the case for classifying "C. rufus" as a species rested too heavily on the small red wolves of central Texas, where hybridization with the coyote was known to occur. They said that if an adequate number of specimens had been included from Florida, the separation of "C. rufus" from "C. lupus" would have been unlikely. The taxonomic reference "Catalogue of Life" classifies the red wolf as a subspecies of "Canis lupus". The mammalogist W. Christopher Wozencraft, writing in "Mammal Species of the World" (2005), regarded the red wolf as a hybrid of the gray wolf and the coyote but, given its uncertain status, compromised by listing it as the gray wolf subspecies "Canis lupus rufus".
When European settlers first arrived to North America, the coyote's range was limited to the western half of the continent. They existed in the arid areas and across the open plains, including the prairie regions of the midwestern states. Early explorers found some in Indiana and Wisconsin. From the mid-1800s onward, coyotes began expanding beyond their original range.
The taxonomic debate regarding North American wolves can be summarized as follows:
The paleontologist Ronald M. Nowak notes that the oldest fossil remains of the red wolf are 10,000 years old and were found in Florida near Melbourne, Brevard County; the Withlacoochee River, Citrus County; and Devil's Den Cave, Levy County. He notes that only a few, questionable fossil remains of the gray wolf have been found in the southeastern states. He proposes that following the extinction of the dire wolf, the coyote appears to have been displaced from the southeastern US by the red wolf until the last century, when the extirpation of wolves allowed the coyote to expand its range. He also proposes that the ancestor of all North American and Eurasian wolves was "C. mosbachensis", which lived in the Middle Pleistocene 700,000–300,000 years ago.
"C. mosbachensis" was a wolf that once lived across Eurasia before going extinct. It was smaller than most North American wolf populations and smaller than "C. rufus", and has been described as being similar in size to the small Indian wolf, "Canis lupus pallipes". He further proposes that "C. mosbachensis" invaded North America where it became isolated by the later glaciation and there gave rise to "C. rufus". In Eurasia, "C. mosbachensis" evolved into "C. lupus", which later invaded North America.
Xiaoming Wang, a paleontologist and expert on the natural history of the genus "Canis", examined red wolf fossil material but could not say whether it represented a separate species. He noted that Nowak had assembled more morphometric data on red wolves than anyone else, but that Nowak's statistical analysis of the data yielded results that are difficult to interpret. Wang proposes that studies of ancient DNA taken from fossils might help settle the debate.
In 1771, the English naturalist Mark Catesby referred to Florida and the Carolinas when he wrote that "The Wolves in America are like those of Europe, in shape and colour, but are somewhat smaller." They were described as being more timid and less voracious. In 1791 the American naturalist William Bartram wrote in his book "Travels" about a wolf he had encountered in Florida that was larger than a dog but black, in contrast to the larger yellow-brown wolves of Pennsylvania and Canada. In 1851 the naturalists John James Audubon and John Bachman described the "Red Texan Wolf" in detail, noting that it could be found in Florida and other southeastern states but differed from other North American wolves, and named it "Canis lupus" var. "rufus". It was described as being more fox-like than the gray wolf, but retaining the same "sneaking, cowardly, yet ferocious disposition".
In 1905, the mammalogist Vernon Bailey referred to the "Texan Red Wolf" with the first use of the name "Canis rufus". In 1937 the zoologist Edward Goldman undertook a morphological study of southeastern wolf specimens. He noted that their skulls and dentition differed from those of gray wolves and closely approached those of coyotes. He identified the specimens as all belonging to one species, which he referred to as "Canis rufus". Goldman then examined a large number of southeastern wolf specimens and identified three subspecies, noting that their colors ranged from black through gray to cinnamon-buff.
It is difficult to distinguish the red wolf from a red wolf × coyote hybrid. During the 1960s, two studies of the skull morphology of wild "Canis" in the southeastern states found specimens belonging to the red wolf, the coyote, or many variations in between, and concluded that there had been recent massive hybridization with the coyote. In contrast, another 1960s study of "Canis" morphology concluded that the red wolf, eastern wolf, and domestic dog were closer to the gray wolf than to the coyote, while still remaining clearly distinct from each other; it regarded these three canids as subspecies of the gray wolf. However, the study noted that "red wolf" specimens taken from the edge of the range shared with the coyote could not be attributed to any one species because the cranial variation was very wide, and it proposed further research to ascertain whether hybridization had occurred.
In 1971, a study of the skulls of "C. rufus", "C. lupus" and "C. latrans" indicated that "C. rufus" was distinguishable by being intermediate in size and shape between the gray wolf and the coyote. A re-examination of museum canid skulls collected from central Texas between 1915 and 1918 showed variation spanning from "C. rufus" through to "C. latrans". The study proposed that by 1930, due to human habitat modification, the red wolf had disappeared from this region and had been replaced by a hybrid swarm. By 1969, this hybrid swarm was moving eastwards into eastern Texas and Louisiana.
In the late 19th century, sheep farmers in Kerr County, Texas stated that the coyotes in the region were larger than normal coyotes, and they believed these animals were a gray wolf and coyote cross. In 1970, the wolf mammalogist L. David Mech proposed that the red wolf was a hybrid of the gray wolf and coyote, and suggested that it should be taxonomically recognized as "C. lupus × C. latrans". However, a 1971 study compared the cerebellum across six "Canis" species and found that the red wolf's cerebellum indicated a distinct species: it was closest to that of the gray wolf, yet showed some characteristics more primitive than those found in any of the other "Canis" species. In 2014, a three-dimensional morphometrics study of "Canis" species accepted only six red wolf specimens for analysis from those on offer, due to the impact of hybridization on the others.
Different DNA studies may give conflicting results because of the specimens selected, the technology used, and the assumptions made by the researchers.
Phylogenetic trees compiled using different genetic markers have given conflicting results on the relationship between the wolf, dog and coyote. One study based on SNPs (single-nucleotide polymorphisms, i.e., single-base mutations) and another based on nuclear gene sequences (taken from the cell nucleus) showed dogs clustering with coyotes and separate from wolves. Another study based on SNPs showed wolves clustering with coyotes and separate from dogs. Other studies based on a number of markers show the more widely accepted result of wolves clustering with dogs, separate from coyotes. These results demonstrate that caution is needed when interpreting the results provided by genetic markers.
In 1980, a study used gel electrophoresis to look at fragments of DNA taken from dogs, coyotes, and wolves from the red wolf's core range. The study found that a unique allele (a variant form of a gene) associated with lactate dehydrogenase could be found in red wolves but not in dogs or coyotes, suggesting that this allele survives in the red wolf. The study did not compare gray wolves for the existence of this allele.
Mitochondrial DNA (mDNA) passes along the maternal line and can date back thousands of years. In 1991, a study of red wolf mDNA indicated that red wolf genotypes match those known to belong to the gray wolf or the coyote. The study concluded that the red wolf is either a wolf × coyote hybrid or a species that has hybridized with the wolf and coyote across its entire range. The study proposed that the red wolf is a southeastern subspecies of the gray wolf that has undergone hybridization due to an expanding coyote population, but that, being unique and threatened, it should remain protected. This conclusion led to debate for the remainder of the decade.
In 2000, a study looked at red wolves and eastern Canadian wolves, agreeing that these two wolves readily hybridize with the coyote. The study used eight microsatellites (genetic markers taken from across the genome of a specimen). The phylogenetic tree produced from the genetic sequences showed red wolves and eastern Canadian wolves clustering together, and this cluster sat closer to the coyote than to the gray wolf. A further analysis using mDNA sequences indicated the presence of coyote ancestry in both of these wolves, and that the two wolves had diverged from the coyote 150,000–300,000 years ago. No gray wolf sequences were detected in the samples. The study proposed that these findings are inconsistent with the two wolves being subspecies of the gray wolf, and that red wolves and eastern Canadian wolves evolved in North America after diverging from the coyote, which is why they are more likely to hybridize with coyotes.
In 2009, a study of eastern Canadian wolves using microsatellites, mDNA, and the paternally-inherited yDNA markers found that the eastern Canadian wolf was a unique ecotype of the gray wolf that had undergone recent hybridization with other gray wolves and coyotes. It could find no evidence to support the findings of the earlier 2000 study regarding the eastern Canadian wolf. The study did not include the red wolf.
In 2011, a study compared the genetic sequences of 48,000 single nucleotide polymorphisms (single-base variations) taken from the genomes of canids from around the world. The comparison indicated that the red wolf was about 76% coyote and 24% gray wolf, with hybridization having occurred 287–430 years ago, and that the eastern wolf was 58% gray wolf and 42% coyote, with hybridization having occurred 546–963 years ago. The study rejected the theory of a common ancestry for the red and eastern wolves. However, the next year a study reviewed a subset of the 2011 study's single-nucleotide polymorphism (SNP) data and proposed that its methodology had skewed the results, and that the red and eastern wolves are not hybrids but in fact the same species, separate from the gray wolf. The 2012 study proposed that there are three true "Canis" species in North America: the gray wolf, the western coyote, and the red wolf / eastern wolf. In this scheme the eastern wolf is represented by the Algonquin wolf, the Great Lakes wolf is a hybrid of the eastern wolf and the gray wolf, and the eastern coyote is a hybrid of the western coyote and the eastern (Algonquin) wolf.
Also in 2011, a scientific literature review was undertaken to help assess the taxonomy of North American wolves. One of its findings was that the eastern wolf is supported as a separate species by morphological and genetic data. Genetic data support a close relationship between the eastern and red wolves, but not close enough to support these as one species; it was "likely" that they were separate descendants of a common ancestor shared with coyotes. This review was published in 2012. In 2014, the National Center for Ecological Analysis and Synthesis was invited by the United States Fish and Wildlife Service to provide an independent review of its proposed rule relating to gray wolves. The Center's panel found that the proposed rule was heavily dependent upon the analysis contained in the 2011 literature review (Chambers et al.), that this work was not universally accepted, that the issue was "not settled", and that the rule did not represent the "best available science".
In early 2016, an mDNA analysis of three ancient (300–1,900 years old) wolf-like samples from the southeastern United States found that they grouped with the coyote clade, although their teeth were wolf-like. The study proposed that the specimens were either coyotes and this would mean that coyotes had occupied this region continuously rather than intermittently, a North American evolved red wolf lineage related to coyotes, or an ancient coyote–wolf hybrid. Ancient hybridization between wolves and coyotes would likely have been due to natural events or early human activities, not landscape changes associated with European colonization because of the age of these samples. Coyote–wolf hybrids may have occupied the southeastern United States for a long time, filling an important niche as a large predator.
In July 2016, a whole-genome DNA study proposed, based on the assumptions made, that all of the North American wolves and coyotes diverged from a common ancestor 6,000–117,000 years ago. The study also indicated that all North American wolves have a significant amount of coyote ancestry and all coyotes some degree of wolf ancestry, and that the red wolf and Great Lakes region wolf are highly admixed with different proportions of gray wolf and coyote ancestry. One test indicated a wolf/coyote divergence time of 51,000 years before present, matching other studies that indicate the extant wolf came into being around this time. Another test indicated that the red wolf diverged from the coyote between 55,000 and 117,000 years before present and the Great Lakes region wolf 32,000 years before present. Other tests and modelling showed various divergence ranges, and the overall conclusion was a range of 6,000–117,000 years before present. The study found that coyote ancestry was highest in red wolves from the southeast of the United States and lowest among the Great Lakes region wolves.
The theory proposed was that this pattern matched the south-to-north disappearance of the wolf due to European colonization and its resulting loss of habitat. Bounties led to the extirpation of wolves initially in the southeast, and as the wolf population declined, wolf-coyote admixture increased. Later, this process occurred in the Great Lakes region with the influx of coyotes replacing wolves, followed by the expansion of coyotes and their hybrids across the wider region. The red wolf may possess some genomic elements that were unique to gray wolf and coyote lineages from the American South. The proposed timing of the wolf/coyote divergence conflicts with the finding of a coyote-like specimen in strata dated to 1 million years before present, and with red wolf fossil specimens dating back 10,000 years. The study concluded by stating that because of the extirpation of gray wolves in the American Southeast, "the reintroduced population of red wolves in eastern North Carolina is doomed to genetic swamping by coyotes without the extensive management of hybrids, as is currently practiced by the USFWS."
In September 2016, the USFWS announced a program of changes to the red wolf recovery program and "will begin implementing a series of actions based on the best and latest scientific information". The service will secure the captive population which is regarded as not sustainable, determine new sites for additional experimental wild populations, revise the application of the existing experimental population rule in North Carolina, and complete a comprehensive Species Status Assessment.
In 2017, a group of canid researchers challenged the recent finding that the red wolf and the eastern wolf were the result of recent coyote-wolf hybridization. The group highlighted that no testing had been undertaken to ascertain when hybridization had occurred and that, by the previous study's own figures, the hybridization could not have occurred recently and instead supports a much more ancient event. The group also found deficiencies in the previous study's selection of specimens and in the conclusions drawn from the different techniques used. The group therefore argued that both the red wolf and the eastern wolf remain genetically distinct North American taxa. This was rebutted by the authors of the earlier study. Another study, in late 2018, of wild canids in southwestern Louisiana also supported the red wolf as a separate species, citing distinct red wolf DNA within hybrid canids.
In 2019, a literature review of the previous studies was undertaken by the National Academies of Sciences, Engineering, and Medicine. The position of the National Academies is that the historical red wolf forms a valid taxonomic species, that the modern red wolf is distinct from wolves and coyotes, and that modern red wolves trace some of their ancestry to historic red wolves. Because continuity between the historic and the modern red wolves cannot be fully established, the species designation "Canis rufus" is supported for the modern red wolf unless genomic evidence from historical red wolf specimens changes this assessment.
Genetic studies relating to wolves or dogs have inferred phylogenetic relationships based on the only reference genome available, that of the Boxer dog. In 2017, the first reference genome of the wolf "Canis lupus lupus" was mapped to aid future research. In 2018, a study looked at the genomic structure and admixture of North American wolves, wolf-like canids, and coyotes, using specimens from across their entire range and mapping the largest dataset of nuclear genome sequences against the wolf reference genome. The study supports the findings of previous studies that North American gray wolves and wolf-like canids were the result of complex gray wolf and coyote mixing. A polar wolf from Greenland and a coyote from Mexico represented the purest specimens. Coyotes from Alaska, California, Alabama, and Quebec show almost no wolf ancestry, while coyotes from Missouri, Illinois, and Florida exhibit 5–10% wolf ancestry. There was 40%:60% wolf to coyote ancestry in red wolves, 60%:40% in eastern timber wolves, and 75%:25% in the Great Lakes wolves. There was 10% coyote ancestry in Mexican wolves and Atlantic Coast wolves, 5% in Pacific Coast and Yellowstone wolves, and less than 3% in Canadian archipelago wolves.
The study shows that the genomic ancestry of red, eastern timber and Great Lakes wolves was the result of admixture between modern gray wolves and modern coyotes, followed by development into local populations. Individuals within each group showed consistent levels of coyote to wolf inheritance, indicating that this was the result of relatively ancient admixture. The eastern timber wolf (Algonquin Provincial Park) is genetically closely related to the Great Lakes wolf (Minnesota, Isle Royale National Park). If a third canid had been involved in the admixture of the North American wolf-like canids, its genetic signature would have been found in coyotes and wolves, but it has not been.
Gray wolves suffered a species-wide population bottleneck (reduction) approximately 25,000 years before present, during the Last Glacial Maximum. This was followed by a single population of modern wolves expanding out of a Beringia refuge to repopulate the wolf's former range, replacing the remaining Late Pleistocene wolf populations across Eurasia and North America as it did so. This implies that if the coyote and red wolf were derived from this invasion, their histories date back only tens of thousands, not hundreds of thousands, of years, which is consistent with other studies.
The Endangered Species Act provides protection to endangered species, but does not provide protection for endangered admixed individuals, even if these serve as reservoirs for extinct genetic variation. Researchers on both sides of the red wolf debate argue that admixed canids warrant full protection under this Act.
The red wolf's appearance is typical of the genus "Canis", and is generally intermediate in size between the coyote and gray wolf, though some specimens may overlap in size with small gray wolves. A study of "Canis" morphometrics conducted in eastern North Carolina reported that red wolves are morphometrically distinct from coyotes and hybrids. Adults measure 136–160 cm (53.5–63 in) in length and weigh 23–39 kg (50–85 lb). Its pelage is typically more reddish and more sparsely furred than the coyote's and gray wolf's, though melanistic individuals do occur. Its fur is generally tawny to grayish in color, with light markings around the lips and eyes. The red wolf has been compared by some authors to the greyhound in general form, owing to its relatively long and slender limbs. The ears are also proportionately larger than the coyote's and gray wolf's. The skull is typically narrow, with a long and slender rostrum, a small braincase and a well-developed sagittal crest. Its cerebellum is unlike that of other "Canis" species, being closer in form to that of canids of the "Vulpes" and "Urocyon" genera, indicating that the red wolf is one of the more plesiomorphic members of its genus.
The red wolf is more sociable than the coyote, but less so than the gray wolf. It mates in January–February, with an average of 6–7 pups born in March, April, or May. It is monogamous, with both parents participating in the rearing of young. Denning sites include hollow tree trunks, stream banks, and the abandoned earths of other animals. By the age of six weeks, the pups distance themselves from the den, and they reach full size at the age of one year, becoming sexually mature two years later.
Using long-term data on red wolf individuals of known pedigree, it was found that inbreeding among first-degree relatives was rare. A likely mechanism for avoidance of inbreeding is independent dispersal trajectories from the natal pack. Many of the young wolves spend time alone or in small non-breeding packs composed of unrelated individuals. The union of two unrelated individuals in a new home range is the predominant pattern of breeding pair formation. Inbreeding is avoided because it results in progeny with reduced fitness (inbreeding depression) that is predominantly caused by the homozygous expression of recessive deleterious alleles.
Prior to its extinction in the wild, the red wolf's diet consisted of rabbits, rodents, and nutria (an introduced species). In contrast, the red wolves from the restored population rely on white-tailed deer, raccoon, nutria and rabbits. White-tailed deer were largely absent from the last wild refuge of red wolves on the Gulf Coast between Texas and Louisiana (where specimens were trapped from the last wild population for captive breeding), which likely accounts for the discrepancy in their dietary habits listed here. Historical accounts of wolves in the southeast by early explorers such as William Hilton, who sailed along the Cape Fear River in what is now North Carolina in 1644, also note that they ate deer.
The originally recognized red wolf range extended throughout the southeastern United States from the Atlantic and Gulf Coasts, north to the Ohio River Valley and central Pennsylvania, and west to Central Texas and southeastern Missouri. Research into paleontological, archaeological and historical specimens of red wolves by Ronald Nowak expanded their known range to include land south of the Saint Lawrence River in Canada, along the eastern seaboard, and west to Missouri and mid-Illinois, terminating in the southern latitudes of Central Texas.
Given their wide historical distribution, red wolves probably used a large suite of habitat types at one time. The last naturally occurring population used coastal prairie marshes, swamps, and agricultural fields used to grow rice and cotton. However, this environment probably does not typify preferred red wolf habitat. Some evidence shows the species was found in highest numbers in the once extensive bottom-land river forests and swamps of the southeastern United States. Red wolves reintroduced into northeastern North Carolina have used habitat types ranging from agricultural lands to forest/wetland mosaics characterized by an overstory of pine and an understory of evergreen shrubs. This suggests that red wolves are habitat generalists and can thrive in most settings where prey populations are adequate and persecution by humans is slight.
Since before European colonization of the Americas, the red wolf has featured prominently in Cherokee spiritual beliefs, where it is known as "wa'ya" (ᏩᏯ) and is said to be the companion of Kana'ti, the hunter and father of the "Aniwaya" or Wolf Clan. Traditionally, Cherokee people generally avoid killing red wolves, as such an act is believed to bring about the vengeance of the killed animal's pack-mates.
In 1940 the biologist Stanley P. Young noted that the red wolf was still common in eastern Texas, where more than 800 had been caught in 1939 because of their attacks on livestock. He did not believe that they could be exterminated because of their habit of living concealed in thickets. In 1962 a study of skull morphology of wild "Canis" in the states of Arkansas, Louisiana, Oklahoma, and Texas indicated that the red wolf existed in only a few populations due to hybridization with the coyote. The explanation was that either the red wolf could not adapt to changes to its environment due to human land-use along with its accompanying influx of competing coyotes from the west, or that the red wolf was being hybridized out of existence by the coyote.
Since 1987, red wolves have been released into northeastern North Carolina, where they roam 1.7 million acres. These lands span five counties (Dare, Hyde, Tyrrell, Washington, and Beaufort) and include three national wildlife refuges, a U.S. Air Force bombing range, and private land. The red wolf recovery program is unique for a large carnivore reintroduction in that more than half of the land used for reintroduction lies on private property; the remainder is federal and state land.
Beginning in 1991, red wolves were also released into the Great Smoky Mountains National Park in eastern Tennessee. However, due to exposure to environmental disease (parvovirus), parasites, and competition (with coyotes, as well as intraspecific aggression), the red wolf was unable to establish a wild population in the park. Low prey density was also a problem, forcing the wolves to leave the park boundaries in pursuit of food at lower elevations. In 1998, the FWS removed the remaining red wolves from the Great Smoky Mountains National Park, relocating them to the Alligator River National Wildlife Refuge in eastern North Carolina. Other red wolves have been released on coastal islands in Florida, Mississippi, and South Carolina as part of the captive breeding management plan. St. Vincent Island in Florida is currently the only active island propagation site.
After the passage of the Endangered Species Act of 1973, formal efforts backed by the U.S. Fish and Wildlife Service began to save the red wolf from extinction, when a captive-breeding program was established at the Point Defiance Zoological Gardens, Tacoma, Washington. Four hundred animals were captured from southwestern Louisiana and southeastern Texas from 1973 to 1980 by the USFWS.
Measurements, vocalization analyses, and skull X-rays were used to distinguish red wolves from coyotes and red wolf × coyote hybrids. Of the 400 canids captured, only 43 were believed to be red wolves and sent to the breeding facility. The first litters were produced in captivity in May 1977. Some of the pups were determined to be hybrids, and they and their parents were removed from the program. Of the original 43 animals, only 17 were considered pure red wolves and since three were unable to breed, 14 became the breeding stock for the captive-breeding program. These 14 were so closely related that they had the genetic effect of being only eight individuals.
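The gap between the 14 actual founders and their "genetic effect of eight" reflects a standard conservation-genetics calculation: when founders are related or contribute unequally to later generations, the effective number of founders is smaller than the head count. The sketch below is purely illustrative; the contribution figures are hypothetical, not the program's real pedigree data, and the simple metric shown (the effective number of founders, f_e = 1 / Σ p_i², where p_i is founder i's proportional genetic contribution) ignores kinship among founders, which is part of why the real figure came out as low as eight.

```python
# Illustrative sketch only: effective number of founders, f_e = 1 / sum(p_i^2).
# The contributions below are hypothetical; the actual red wolf pedigree data
# are maintained by the Species Survival Plan.

def effective_founders(contributions):
    """Effective number of founders for a list of proportional
    genetic contributions (normalized to sum to 1)."""
    total = sum(contributions)
    shares = [c / total for c in contributions]  # normalize defensively
    return 1.0 / sum(p * p for p in shares)

# 14 founders with unequal (hypothetical) contributions:
contribs = [0.14, 0.13, 0.12, 0.11, 0.10, 0.09, 0.08,
            0.06, 0.05, 0.04, 0.03, 0.02, 0.02, 0.01]

print(f"Actual founders:    {len(contribs)}")
print(f"Effective founders: {effective_founders(contribs):.1f}")  # ~10.3
```

Equal contributions would give f_e = 14; the more skewed the contributions (and the more closely related the founders), the further the effective number falls below the actual count.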
In 1996, the red wolf was listed by the International Union for Conservation of Nature as a critically endangered species.
Over 30 facilities participate in the red wolf Species Survival Plan and oversee the breeding and reintroduction of over 150 wolves.
In 2007, the USFWS estimated that 300 red wolves remained in the world, with 207 of those in captivity.
A 2019 analysis by the Center for Biological Diversity of available habitat throughout the red wolf's former range found that over 20,000 square miles of public land across five sites had viable habitat for red wolves to be reintroduced to in the future. These sites were chosen based on prey levels, isolation from coyotes and human development, and connectivity with other sites. They include: the Apalachicola and Osceola National Forests along with the Okefenokee National Wildlife Refuge and nearby protected lands; numerous national parks and national forests in the Appalachian Mountains, including the Monongahela, George Washington & Jefferson, Cherokee, Pisgah, Nantahala, Chattahoochee, and Talladega National Forests along with Shenandoah National Park and the lower elevations of Great Smoky Mountains National Park; Croatan National Forest and Hofmann Forest on the North Carolina coast; and the Ozark, Ouachita, and Mark Twain National Forests in the central United States.
Interbreeding with the coyote has been recognized as a threat affecting the restoration of red wolves. Currently, adaptive management efforts are making progress in reducing the threat of coyotes to the red wolf population in northeastern North Carolina. Other threats, such as habitat fragmentation, disease, and human-caused mortality, are of concern in the restoration of red wolves. Efforts to reduce the threats are presently being explored.
By 1999, introgression of coyote genes was recognized as the single greatest threat to wild red wolf recovery, and an adaptive management plan that included coyote sterilization proved successful, reducing coyote genes to less than 4% of the wild red wolf population by 2015.
Since the 2014 programmatic review, the USFWS has ceased implementing the red wolf adaptive management plan that was responsible for preventing red wolf hybridization with coyotes and that allowed the release of captive-born red wolves into the wild population. Since then, the wild population has decreased from 100–115 red wolves to 50–65. Despite the controversy over the red wolf's status as a unique taxon, as well as the USFWS's apparent lack of interest in wolf conservation in the wild, the vast majority of public comments (including those from North Carolina residents) submitted to the USFWS in 2017 on its new wolf management plan favored the original wild conservation plan.
A 2016 genetic study of canid scats found that despite high coyote density inside the Red Wolf Experimental Population Area (RWEPA), hybridization occurs rarely (4% are hybrids).
In late 2018, two canids that are largely coyote were found on Galveston Island, Texas with red wolf alleles (gene variants) left over from a ghost population of red wolves. Since these alleles come from a different population than the red wolves in the North Carolina captive breeding program, there has been a proposal to selectively cross-breed the Galveston Island coyotes into the captive red wolf population. Another study published around the same time, analyzing canid scat and hair samples in southwestern Louisiana, found genetic evidence of red wolf ancestry in about 55% of sampled canids, with one individual having between 78–100% red wolf ancestry, suggesting that more red wolf genes may persist in the wild than are present in the captive population.
High wolf mortality related to anthropogenic causes appeared to be the main factor limiting wolf dispersal westward from the RWEPA. High anthropogenic wolf mortality similarly limits expansion of eastern wolves outside of protected areas in south-eastern Canada.
In 2012, the Southern Environmental Law Center filed a lawsuit against the North Carolina Wildlife Resources Commission for jeopardizing the existence of the wild red wolf population by allowing nighttime hunting of coyotes in the five-county restoration area in eastern North Carolina. A 2014 court-approved settlement agreement banned nighttime hunting of coyotes and required permitting and reporting of coyote hunting. In response to the settlement, the North Carolina Wildlife Resources Commission adopted a resolution requesting the USFWS to remove all wild red wolves from private lands, terminate recovery efforts, and declare red wolves extinct in the wild. This resolution came in the wake of a 2014 programmatic review of the red wolf conservation program conducted by The Wildlife Management Institute. The review called the reintroduction of the red wolf a remarkable achievement and indicated that red wolves could be released and survive in the wild, but that illegal killing of red wolves threatens the long-term persistence of the population. The report stated that the USFWS needed to update its red wolf recovery plan, thoroughly evaluate its strategy for preventing coyote hybridization, and increase its public outreach.
In 2014, the USFWS issued the first take permit for a red wolf to a private landowner and has since issued several other take permits to landowners in the five-county restoration area. In June 2015, a landowner shot and killed a female red wolf after being issued a take permit, causing a public outcry. In response, the Southern Environmental Law Center filed a lawsuit against the USFWS for violating the Endangered Species Act.
By 2016, the red wolf population of North Carolina had declined to 45–60 wolves. The largest cause of this decline was gunshot mortality.
In June 2018, the USFWS announced a proposal that would limit the wolves' safe range to only Alligator River National Wildlife Refuge, where only about 35 wolves remain, thus allowing hunting on private land. In November 2018, Chief Judge Terrence W. Boyle found that the USFWS had violated its congressional mandate to protect the red wolf, and ruled that USFWS had no power to give landowners the right to shoot them.
Richard Myers
Richard Bowman Myers (born March 1, 1942) is the 14th president of Kansas State University and a retired four-star general in the United States Air Force who served as the 15th Chairman of the Joint Chiefs of Staff. As Chairman, Myers was the highest ranking uniformed officer of the United States military forces.
Myers became the Chairman of the Joint Chiefs on October 1, 2001. In this capacity, he served as the principal military advisor to the President, the Secretary of Defense, and the National Security Council during the earliest stages of the War on Terror, including planning and execution of the 2003 invasion of Iraq. On September 30, 2005, he retired and was succeeded by General Peter Pace. His Air Force career included operational command and leadership positions in a variety of Air Force and Joint assignments.
Myers began serving as the interim President of Kansas State University in late April 2016, and was announced as the permanent president on November 15, 2016.
Myers was born in Kansas City, Missouri. His father owned a hardware store and his mother was a homemaker. He graduated from Shawnee Mission North High School in 1960 and from Kansas State University (KSU) with a Bachelor of Science in mechanical engineering in 1965, where he was a member of Sigma Alpha Epsilon fraternity. He was commissioned by Detachment 270 of the Air Force Reserve Officer Training Corps at KSU. He graduated from Auburn University at Montgomery with a Master of Business Administration in 1977. Myers has attended the Air Command and Staff College at Maxwell Air Force Base, Alabama; the U.S. Army War College at Carlisle Barracks, Pennsylvania; and the Program for Senior Executives in National and International Security at Harvard University's John F. Kennedy School of Government.
Myers entered the United States Air Force in 1965 through the Reserve Officer Training Corps program. He received pilot training from 1965 to 1966 at Vance Air Force Base, Oklahoma. Myers is a command pilot with more than 4,100 flying hours in the T-33 Shooting Star, C-37, C-21, F-4 Phantom II, F-15 Eagle and F-16 Fighting Falcon, including 600 combat hours in the F-4. During his tenure as Chairman of the Joint Chiefs of Staff, Myers often flew official aircraft such as the Gulfstream C-37A and C-37B himself during official trips. According to his 2009 autobiography ("Eyes on the Horizon: Serving on the Front Lines of National Security"), one of the pleasures he had as both Vice Chairman and Chairman of the Joint Chiefs of Staff was being able to fly on some of his required travels and stay pilot-qualified.
From November 1993 to June 1996, Myers was Commander of United States Forces Japan and Fifth Air Force at Yokota Air Base, Japan, and from July 1996 to July 1997 he served as Assistant to the Chairman of the Joint Chiefs of Staff at the Pentagon. Myers received his fourth star in 1997, when he was appointed commander in chief of Pacific Air Forces. He commanded the Pacific Air Forces at Hickam Air Force Base, Hawaii, from July 1997 to July 1998. From August 1998 to February 2000, Myers was commander in chief of the North American Aerospace Defense Command and United States Space Command; Commander of the Air Force Space Command; and Department of Defense manager of space transportation system contingency support at Peterson Air Force Base, Colorado. As commander, Myers was responsible for defending America through space and intercontinental ballistic missile operations.
Following the appointment of General Joseph Ralston as Supreme Allied Commander Europe (SACEUR), Myers was appointed by President Bill Clinton to succeed Ralston as Vice Chairman of the Joint Chiefs of Staff in February 2000, assuming his duties on February 29, 2000. As Vice Chairman, Myers served as Chairman of the Joint Requirements Oversight Council, Vice Chairman of the Defense Acquisition Board, and as a member of the National Security Council Deputies Committee and the Nuclear Weapons Council. In addition, he acted for the Chairman in all aspects of the Planning, Programming and Budgeting System, including participation in the Defense Resources Board.
In August 2001, a year and a half after Myers assumed the role of Vice Chairman of the Joint Chiefs of Staff, President George W. Bush appointed him to be the next Chairman of the Joint Chiefs of Staff. Myers was the first Vice Chairman to be appointed Chairman since the vice chairmanship was established in 1987 following the enactment of the Goldwater–Nichols Act of 1986.
On the morning of September 11, 2001, Myers was on Capitol Hill to meet Georgia Senator Max Cleland for scheduled courtesy calls ahead of his Senate confirmation hearings to be the next Chairman of the Joint Chiefs of Staff. While waiting in Senator Cleland's outer office, Myers saw a television news report that a plane had hit the North Tower of the World Trade Center. A few minutes later, his military aide, Captain Chris Donahue, informed him that a hijacked plane had hit the second tower. General Ralph Eberhart, Commander-in-Chief of the North American Aerospace Defense Command, then reached Myers and briefed him on the hijacking situation. Myers immediately left Capitol Hill for the Pentagon, where he was informed that another commercial airliner had hit the western side of the building. During the crisis, Myers served as Acting Chairman of the Joint Chiefs of Staff, since General Hugh Shelton was en route to Europe for a NATO summit. Upon arriving at the Pentagon, Myers conferred with Secretary of Defense Donald Rumsfeld about the situation and the next steps to be taken. Myers remained Acting Chairman for much of the day, until General Shelton, having aborted his flight to Europe, arrived back in Washington at 5:40 p.m. local time.
Myers was sworn in as the 15th Chairman of the Joint Chiefs of Staff on October 1, 2001. He served as the principal military advisor to the President, the Secretary of Defense, and the National Security Council during the earliest stages of the War on Terror, including planning of the War in Afghanistan and planning and execution of the 2003 invasion of Iraq. A few days later, on October 7, 2001, Operation Enduring Freedom was initiated. Myers and General Tommy Franks, the commander of United States Central Command (CENTCOM), coordinated the early stage of Operation Enduring Freedom. Within three months, the Taliban regime in Afghanistan had been driven from power.
Myers also supported the involvement of NATO and allied coalition forces during the War on Terror. As a result of Operation Enduring Freedom, the political regime in Afghanistan was toppled and a new constitution was ratified in January 2004, which provided for direct presidential elections on October 9, 2004.
During his tenure as Chairman, Myers also oversaw the early stage of the invasion of Iraq. Together with CENTCOM commander General Tommy Franks, Myers coordinated the plan for the Iraqi invasion and the reconstruction of the country, and also established a combined joint task force to focus on post-conflict issues in Iraq. Operation Iraqi Freedom was initiated on March 20, 2003, preceded by an airstrike on Saddam Hussein's palace and followed by the Fall of Baghdad in April 2003. Operation Iraqi Freedom eventually led to the downfall of Saddam Hussein's 24-year regime and the capture of Hussein on December 13, 2003. Following Operation Iraqi Freedom, the Coalition Provisional Authority was established in Iraq and was succeeded by the Iraqi Interim Government, which presided over parliamentary elections in 2005.
To build support for both the War on Terror and the invasion of Iraq, Myers often traveled abroad to strengthen military relations with allied nations such as Mongolia. He was the first Chairman of the Joint Chiefs of Staff to visit Mongolia, meeting with Mongolian President Natsagiin Bagabandi in Ulaanbaatar on January 15, 2004. As a result, the United States gained the support of the Mongolian government, and Mongolia deployed troops in support of Operation Iraqi Freedom.
In February 2004 Haitian President Jean-Bertrand Aristide was overthrown in a coup d'état, leading to conflict within the country. The United States deployed Marines to Haiti as part of the multinational Operation Secure Tomorrow from February to July 2004. On March 13, Myers visited the United States troops deployed to Haiti.
Together with Secretary of Defense Donald Rumsfeld, Myers conducted weekly press briefings at The Pentagon on the War on Terror.
One of Myers' achievements as Chairman of the Joint Chiefs of Staff was his pursuit of the transformation of the United States military. Myers orchestrated substantive changes to the nation's Unified Command Plan following the September 11 attacks, leading to the establishment of United States Northern Command (USNORTHCOM) as a new Unified Combatant Command to consolidate and coordinate domestic defense. USNORTHCOM was also to support local, state and federal authorities in order to assist the newly created Department of Homeland Security, especially in responding to national emergencies. Following the establishment of USNORTHCOM, the North American Aerospace Defense Command (NORAD) was placed under the same commander, and the United States Space Command was merged into United States Strategic Command (USSTRATCOM) in order to consolidate and strengthen the nation's nuclear deterrent and space missions. Like his predecessors, Myers also continued to promote a joint culture among the nation's military services in order to avoid interservice rivalry.
To guide the War on Terror, Myers created the "National Military Strategic Plan for the War on Terrorism 2002–2005". The plan provided new guidance to the Joint Chiefs of Staff, regional commanders and Unified Combatant Command commanders for a multi-pronged strategy aimed at targeting global terrorist networks.
Myers' tenure as Chairman of the Joint Chiefs of Staff ended in September 2005, and he was succeeded by General Peter Pace, who had served as Myers' Vice Chairman of the Joint Chiefs of Staff. Myers retired from active duty on September 30, 2005, after more than forty years of active service. His retirement ceremony was held at Fort Myer, Virginia, with President George W. Bush delivering the retirement remarks.
Myers has been an Air Force Gray Eagle since 1999. He also received the Badge of the Commander of the Military Forces (Paraguay).
On September 27, 2005, only three days before leaving his post as Chairman, Myers said of the Iraq War that, "the outcome and consequences of defeat are greater than World War II." His rise to and stint as Chairman are chronicled in "Washington Post" reporter Bob Woodward's book, "State of Denial", as well as his own book "Eyes on The Horizon: Serving on the Front Lines of National Security."
On November 9, 2005, Myers received the Presidential Medal of Freedom.
In 2006, Myers accepted a part-time appointment as a Foundation Professor of Military History at Kansas State University. That same year, he was also elected to the Board of Directors of Northrop Grumman Corporation, the world's third largest defense contractor. On September 13, 2006, he also joined the board of directors of United Technologies Corporation. He also serves on the boards of Aon Corporation, John Deere, the United Service Organizations and holds the Colin L. Powell Chair for National Security, Leadership, Character and Ethics at the National Defense University. He also has advised the Defense Health Board and served on the Army War College Board of Visitors.
On July 26, 2011, Myers was inducted into the Air Force Reserve Officer Training Corps Distinguished Alumni in a ceremony at Maxwell AFB, Alabama, officiated by Lieutenant General Allen G. Peck, Commander, Air University.
On April 14, 2016, Myers was selected as the interim president of Kansas State University, which he began on April 20. On November 15, 2016, the Board of Regents removed his interim title and announced Myers would become the university's 14th president.
Myers currently serves as Chairman of the Board of Trustees of Medisend College of Biomedical Engineering Technology and of the General Richard B. Myers Veterans Program.
Myers and his wife, the former Mary Jo Rupp, have three children: two daughters and a son.
Rupert Murdoch
Keith Rupert Murdoch (born 11 March 1931) is an Australian-born American media mogul who founded News Corp. He is the son of Keith Murdoch, a one-time senior executive of Australia's Herald & Weekly Times publishing company. After his father's death in 1952, Murdoch took over the running of "The News", a small Adelaide newspaper owned by his father.
In the 1950s and 1960s, Murdoch acquired a number of newspapers in Australia and New Zealand before expanding into the United Kingdom in 1969, taking over the "News of the World", followed closely by "The Sun". In 1974, Murdoch moved to New York City, to expand into the U.S. market; however, he retained interests in Australia and Britain. In 1981, Murdoch bought "The Times", his first British broadsheet and, in 1985, became a naturalized U.S. citizen, giving up his Australian citizenship, to satisfy the legal requirement for U.S. television network ownership.
In 1986, keen to adopt newer electronic publishing technologies, Murdoch consolidated his UK printing operations in London, causing bitter industrial disputes. His holding company News Corporation acquired Twentieth Century Fox (1985), HarperCollins (1989), and "The Wall Street Journal" (2007). Murdoch formed the British broadcaster BSkyB in 1990 and, during the 1990s, expanded into Asian networks and South American television. By 2000, Murdoch's News Corporation owned over 800 companies in more than 50 countries, with a net worth of over $5 billion.
In July 2011, Murdoch faced allegations that his companies, including the "News of the World", owned by News Corporation, had been regularly hacking the phones of celebrities, royalty, and public citizens. He faced police and government investigations in Britain into bribery and corruption, and FBI investigations in the U.S. On 21 July 2012, Murdoch resigned as a director of News International. On 1 July 2015, he left his post as CEO of 21st Century Fox. However, Murdoch and his family continued to own both 21st Century Fox and News Corp through the Murdoch Family Trust; 21st Century Fox was purchased by Disney in 2019.
In July 2016, after the resignation of Roger Ailes due to accusations of sexual harassment, Murdoch was named the acting CEO of Fox News.
In 2018, his son Lachlan Murdoch purchased the Chartwell Estate for an estimated $150 million.
Keith Rupert Murdoch was born on 11 March 1931 in Melbourne, Victoria, Australia, the son of Sir Keith Murdoch (1885–1952) and Dame Elisabeth Murdoch ("née" Greene; 1909–2012). He is of English, Irish, and Scottish ancestry. Murdoch's parents were also born in Melbourne. Keith Murdoch was a war correspondent and later a regional newspaper magnate owning two newspapers in Adelaide, South Australia, and a radio station in a faraway mining town, and chairman of the powerful Herald and Weekly Times group. Later in life, Keith Rupert chose to go by his second name, the first name of his maternal grandfather.
Keith Murdoch the elder asked to meet with his future wife after seeing her debutante photograph in one of his own newspapers and they married in 1928, when she was aged 19 and he was 23 years older. In addition to Rupert, the couple had three daughters: Janet Calvert-Jones, Anne Kantor and Helen Handbury (1929–2004).
Murdoch attended Geelong Grammar School, where he was co-editor of the school's official journal "The Corian" and editor of the student journal "If Revived". He took his school's cricket team to the National Junior Finals. He worked part-time at the "Melbourne Herald" and was groomed by his father to take over the family business. Murdoch studied Philosophy, Politics and Economics at Worcester College, Oxford in England, where he kept a bust of Lenin in his rooms and came to be known as "Red Rupert". He was a member of the Oxford University Labour Party, stood for Secretary of the Labour Club and managed Oxford Student Publications Limited, the publishing house of "Cherwell". After his father's death from cancer in 1952, his mother Elisabeth did charity work as life governor of the Royal Women's Hospital in Melbourne and established the Murdoch Childrens Research Institute. At the age of 102 (in 2011), she had 74 descendants. Murdoch completed an MA before working as a sub-editor with the "Daily Express" for two years.
Following his father's death, when he was 21, Murdoch returned from Oxford to take charge of what was left of the family business. After liquidation of his father's "Herald" stake to pay taxes, what was left was News Limited, which had been established in 1923. Rupert Murdoch turned its Adelaide newspaper, "The News", its main asset, into a major success. He began to direct his attention to acquisition and expansion, buying the troubled "Sunday Times" in Perth, Western Australia (1956) and over the next few years acquiring suburban and provincial newspapers in New South Wales, Queensland, Victoria and the Northern Territory, including the Sydney afternoon tabloid, "The Daily Mirror" (1960). "The Economist" describes Murdoch as "inventing the modern tabloid", as he developed a pattern for his newspapers, increasing sports and scandal coverage and adopting eye-catching headlines.
Murdoch's first foray outside Australia involved the purchase of a controlling interest in the New Zealand daily "The Dominion". In January 1964, while touring New Zealand with friends in a rented Morris Minor after sailing across the Tasman, Murdoch read of a takeover bid for the Wellington paper by the British-based Canadian newspaper magnate, Lord Thomson of Fleet. On the spur of the moment, he launched a counter-bid. A four-way battle for control ensued in which the 32-year-old Murdoch was ultimately successful. Later in 1964, Murdoch launched "The Australian", Australia's first national daily newspaper, which was based first in Canberra and later in Sydney. In 1972, Murdoch acquired the Sydney morning tabloid "The Daily Telegraph" from Australian media mogul Sir Frank Packer, who later regretted selling it to him. In 1984, Murdoch was appointed Companion of the Order of Australia (AC) for services to publishing.
In 1999, Murdoch significantly expanded his music holdings in Australia by acquiring the controlling share in a leading Australian independent label, Michael Gudinski's Mushroom Records; he merged that with Festival Records, and the result was Festival Mushroom Records (FMR). Both Festival and FMR were managed by Murdoch's son James Murdoch for several years.
Murdoch found a political ally in Sir John McEwen, leader of the Australian Country Party (now known as the National Party of Australia), who was governing in coalition with the larger Menzies-Holt-Gorton Liberal Party. From the very first issue of "The Australian", Murdoch began taking McEwen's side in every issue that divided the long-serving coalition partners. (The Australian, 15 July 1964, first edition, front page: "Strain in Cabinet, Liberal-CP row flares.") The rift threatened to split the coalition government and open the way for the stronger Australian Labor Party to dominate Australian politics. It was the beginning of a long campaign that served McEwen well.
After McEwen and Menzies retired, Murdoch threw his growing power behind the Australian Labor Party under the leadership of Gough Whitlam and duly saw it elected on a social platform that included universal free health care, free education for all Australians to tertiary level, recognition of the People's Republic of China, and public ownership of Australia's oil, gas and mineral resources. Rupert Murdoch's backing of Whitlam turned out to be brief. Murdoch had already started his short-lived "National Star" newspaper in America, and was seeking to strengthen his political contacts there.
Asked about the 2007 Australian federal election at News Corporation's annual general meeting in New York on 19 October 2007, its chairman Rupert Murdoch said, "I am not commenting on anything to do with Australian politics. I'm sorry. I always get into trouble when I do that." Pressed as to whether he believed Prime Minister John Howard should continue as prime minister, he said: "I have nothing further to say. I'm sorry. Read our editorials in the papers. It'll be the journalists who decide that – the editors." In 2009, in response to accusations by Howard's successor, Labor Prime Minister Kevin Rudd, that News Limited was running vendettas against him and his government, Murdoch opined that Rudd was "oversensitive". Murdoch described Rudd as "...more ambitious to lead the world [in tackling climate change] than to lead Australia..." and criticised his expansionary fiscal policies in the wake of the financial crisis of 2007–2008 as unnecessary. Although News Limited's interests are extensive, also including the "Daily Telegraph", the "Courier-Mail" and the "Adelaide Advertiser", the commentator Mungo MacCallum suggested in "The Monthly" that "the anti-Rudd push, if coordinated at all, was almost certainly locally driven", rather than directed by Murdoch, who took a different position from local editors on such matters as climate change and stimulus packages to combat the financial crisis.
Murdoch is a supporter of an Australian republic, having campaigned for one during the 1999 referendum.
In 1968, Murdoch entered the British newspaper market with his acquisition of the populist "News of the World", followed in 1969 with the purchase of the struggling daily "The Sun" from IPC. Murdoch converted "The Sun" to a tabloid format and reduced costs by using the same printing press for both newspapers. On acquiring it, he appointed Albert 'Larry' Lamb as editor and – Lamb recalled later – told him: "I want a tearaway paper with lots of tits in it". In 1997, "The Sun" attracted 10 million daily readers. In 1981, Murdoch acquired the struggling "Times" and "Sunday Times" from Canadian newspaper publisher Lord Thomson of Fleet. Ownership of "The Times" came to him through his relationship with Lord Thomson, who had grown tired of losing money on it as a result of an extended period of industrial action that stopped publication. In light of the success and expansion of "The Sun", the papers' owners believed that Murdoch could turn them around. Harold Evans, editor of the "Sunday Times" from 1967, was switched to the daily "Times", though he stayed only a year amid editorial conflict with Murdoch.
During the 1980s and early 1990s, Murdoch's publications were generally supportive of Britain's Prime Minister Margaret Thatcher. At the end of the Thatcher/Major era, Murdoch switched his support to the Labour Party and its leader, Tony Blair. The closeness of his relationship with Blair and their secret meetings to discuss national policies were to become a political issue in Britain. This later changed, with "The Sun", in its English editions, publicly renouncing the ruling Labour government and lending its support to David Cameron's Conservative Party, which soon afterwards formed a coalition government. In Scotland, where the Tories had suffered complete annihilation in 1997, the paper began to endorse the Scottish National Party (though not yet its flagship policy of independence), which soon after came to form the first ever outright majority in the proportionally elected Scottish Parliament. Former Prime Minister Gordon Brown's official spokesman said in November 2009 that Brown and Murdoch "were in regular communication" and that "there is nothing unusual in the prime minister talking to Rupert Murdoch".
In 1986, Murdoch introduced electronic production processes to his newspapers in Australia, Britain and the United States. The greater degree of automation led to significant reductions in the number of employees involved in the printing process. In England, the move roused the anger of the print unions, resulting in a long and often violent dispute that played out in Wapping, one of London's docklands areas, where Murdoch had installed a purpose-built publishing facility with the latest electronic newspaper technology in an old warehouse. The bitter Wapping dispute started with the dismissal of 6,000 employees who had gone on strike, and resulted in street battles and demonstrations. Many on the political left in Britain alleged collusion between Margaret Thatcher's Conservative government and Murdoch in the Wapping affair, as a way of damaging the British trade union movement. In 1987, the dismissed workers accepted a settlement of £60 million.
Murdoch's British-based satellite network, Sky Television, incurred massive losses in its early years of operation. As with many of his other business interests, Sky was heavily subsidised by the profits generated by his other holdings, but in 1990 Murdoch convinced rival satellite operator British Satellite Broadcasting to accept a merger on his terms. The merged company, BSkyB, has dominated the British pay-TV market ever since through its direct-to-home (DTH) satellite broadcasting. By 1996, BSkyB had more than 3.6 million subscribers, triple the number of cable customers in the UK. Murdoch has a seat on the Strategic Advisory Board of Genie Oil and Gas, having jointly invested with Lord Rothschild in a 5.5% stake in the company, which conducted shale gas and oil exploration in Colorado, Mongolia, Israel and, controversially, the occupied Golan Heights.
In response to print media's decline and the increasing influence of online journalism during the 2000s, Murdoch proclaimed his support for the micropayments model of obtaining revenue from online news, although this approach has been criticised by some.
In January 2018, the Competition and Markets Authority (CMA) blocked Murdoch from taking over the remaining 61% of BSkyB he did not already own, over fears that the resulting market dominance could enable censorship of the media. His bid for BSkyB was later approved by the CMA on the condition that he sell Sky News to The Walt Disney Company, which was already set to acquire 21st Century Fox. However, it was Comcast that won control of BSkyB in a blind auction ordered by the CMA. Murdoch ultimately sold his 39% of BSkyB to Comcast.
News Corporation has subsidiaries in the Bahamas, the Cayman Islands, the Channel Islands and the Virgin Islands. From 1986, News Corporation's annual tax bill averaged around seven percent of its profits.
In Britain, in the 1980s, Murdoch formed a close alliance with Conservative prime minister Margaret Thatcher, and "The Sun" credited itself with helping her successor John Major to win an unexpected election victory in the 1992 general election, which had been expected to end in a hung parliament or a narrow win for Labour, then led by Neil Kinnock. In the general elections of 1997, 2001 and 2005, Murdoch's papers were either neutral or supported Labour under Tony Blair.
After Tony Blair became its leader in 1994, the Labour Party moved from the centre-left to a more centrist position on many economic issues in the run-up to 1997. Murdoch identifies himself as a libertarian, saying "What does libertarian mean? As much individual responsibility as possible, as little government as possible, as few rules as possible. But I'm not saying it should be taken to the absolute limit."
In 1998, Murdoch attempted to buy the football club Manchester United F.C. with an offer of £625 million, the largest amount ever offered for a sports club at the time. The bid failed: it was blocked by the United Kingdom's Competition Commission, which stated that the acquisition would have "hurt competition in the broadcast industry and the quality of British football".
In a speech he delivered in New York in 2005, Murdoch claimed that Blair described the BBC coverage of the Hurricane Katrina disaster, which was critical of the Bush administration's response, as full of hatred of America.
On 28 June 2006, the BBC reported that Murdoch and News Corporation were considering backing new Conservative leader David Cameron at the next general election – still up to four years away. In a later interview in July 2006, when asked what he thought of the Conservative leader, Murdoch replied "Not much". A 2009 blog post suggested that, in the aftermath of the "News of the World" phone hacking scandal – which might yet have transatlantic implications – Murdoch and News Corporation might have decided to back Cameron. Despite this, there had already been a convergence of interests between the two men over the muting of Britain's communications regulator Ofcom.
In August 2008, British Conservative leader and future Prime Minister David Cameron accepted free flights to hold private talks and attend private parties with Murdoch on his yacht, the "Rosehearty". Cameron declared in the Commons register of interests that he had accepted a private plane provided by Murdoch's son-in-law, public relations guru Matthew Freud; Cameron did not reveal his talks with Murdoch. The gift of travel in Freud's Gulfstream IV private jet was valued at around £30,000. Other guests attending the "social events" included the then EU trade commissioner Lord Mandelson, the Russian oligarch Oleg Deripaska and the co-chairman of NBC Universal, Ben Silverman. The Conservatives did not disclose what was discussed.
In July 2011, it emerged that Cameron had met key executives of Murdoch's News Corporation a total of 26 times during the 14 months that Cameron had served as Prime Minister up to that point. It was also reported that Murdoch had given Cameron a personal guarantee that there would be no risk attached to hiring Andy Coulson, the former editor of "News of the World", as the Conservative Party's communication director in 2007. This was in spite of Coulson having resigned as editor over phone hacking by a reporter. Cameron chose to take Murdoch's advice, despite warnings from Deputy Prime Minister Nick Clegg, Lord Ashdown and "The Guardian". Coulson resigned his post in 2011 and was later arrested and questioned on allegations of further criminal activity at the "News of the World", specifically the News International phone hacking scandal. As a result of the subsequent trial, Coulson was sentenced to 18 months in jail.
In June 2016, "The Sun" supported Vote Leave in the United Kingdom European Union membership referendum. Murdoch called the Brexit result "wonderful", comparing the decision to withdraw from the EU to "a prison break... we're out".
In July 2011, Murdoch, along with his son James, provided testimony before a British parliamentary committee regarding phone hacking. In the UK, his media empire remains under fire, as investigators continue to probe reports of other phone hacking.
On 14 July, the Culture, Media and Sport Committee of the House of Commons asked Murdoch, his son James, and his former CEO Rebekah Brooks to testify before it five days later. After an initial refusal, the Murdochs confirmed they would attend once the committee had served them a summons to Parliament. The day before the hearing, the website of the News Corporation publication "The Sun" was hacked, and a false story was posted on the front page claiming that Murdoch had died. Murdoch described the day of the hearing as "the most humble day of my life". He argued that since he ran a global business of 53,000 employees and the "News of the World" was "just 1%" of this, he was not ultimately responsible for what went on at the tabloid. He added that he had not considered resigning, and that he and the other top executives had been completely unaware of the hacking.
On 15 July, Murdoch attended a private meeting in London with the family of Milly Dowler, where he personally apologized for the hacking of their murdered daughter's voicemail by a company he owned. On 16 and 17 July, News International published two full-page apologies in many of Britain's national newspapers. The first apology took the form of a letter, signed by Murdoch, in which he apologized for the "serious wrongdoing" that had occurred. The second was titled "Putting right what's gone wrong", and gave more detail about the steps News International was taking to address the public's concerns. In the wake of the allegations, Murdoch accepted the resignations of Rebekah Brooks, head of Murdoch's British operations, and Les Hinton, head of Dow Jones, who had been chairman of Murdoch's British newspaper division when some of the abuses happened. Both denied any knowledge of wrongdoing under their command.
On 27 February 2012, the day after the first issue of "The Sun on Sunday" was published, Deputy Assistant Commissioner Sue Akers informed the Leveson Inquiry that police were investigating a "network of corrupt officials" as part of their inquiries into phone hacking and police corruption. She said that evidence suggested a "culture of illegal payments" at "The Sun", and that these payments were allegedly authorised at a senior level.
In testimony on 25 April, Murdoch did not deny the quote attributed to him by his former editor of "The Sunday Times", Harold Evans: "I give instructions to my editors all round the world, why shouldn't I in London?" On 1 May 2012, the Culture, Media and Sport Committee issued a report stating that Murdoch was "not a fit person to exercise the stewardship of a major international company".
On 3 July 2013, the Exaro website and "Channel 4 News" broke the story of a secret recording. This was recorded by "The Sun" journalists, and in it Murdoch can be heard telling them that the whole investigation was one big fuss over nothing, and that he, or his successors, would take care of any journalists who went to prison. He said: "Why are the police behaving in this way? It's the biggest inquiry ever, over next to nothing."
Murdoch made his first acquisition in the United States in 1973, when he purchased the "San Antonio Express-News". The following year he moved to New York City to expand into the U.S. market, while retaining his interests in Australia and Britain. Soon afterwards, he founded "Star", a supermarket tabloid, and in 1976, he purchased the "New York Post". On 4 September 1985, Murdoch became a naturalized U.S. citizen to satisfy the legal requirement that only US citizens were permitted to own US television stations, which in effect meant renouncing his Australian citizenship.
In March 1984, Marvin Davis sold Marc Rich's interest in 20th Century Fox to Murdoch for $250 million, after Rich's trade deals with Iran violated United States sanctions in force at the time. Davis later backed out of a deal with Murdoch to purchase John Kluge's Metromedia television stations; Murdoch bought the stations himself, and later bought out Davis's remaining stake in Fox for $325 million. The six television stations owned by Metromedia formed the nucleus of the Fox Broadcasting Company, founded on 9 October 1986, which later had great success with programs including "The Simpsons" and "The X-Files".
In 1986, Murdoch bought Misty Mountain, a Wallace Neff-designed house on Angelo Drive in Beverly Hills. The house was the former residence of Jules C. Stein. Murdoch sold the house to his son James in 2018.
In Australia, during 1987, he bought The Herald and Weekly Times Ltd., the company that his father had once managed.
Rupert Murdoch's 20th Century Fox bought the remaining assets of Four Star Television from Ronald Perelman's Compact Video in 1996. Most of Four Star Television's library of programs is controlled by 20th Century Fox Television today. After Murdoch's numerous acquisitions during the buyout era of the eighties, News Corporation had built up debts of $7 billion (much of it from Sky TV in the UK), despite the many assets it held. The high level of debt forced Murdoch to sell many of the American magazine interests he had acquired in the mid-1980s.
In 1993, Murdoch's Fox Network took exclusive coverage of the National Football Conference (NFC) of the National Football League (NFL) from CBS and increased programming to seven days a week. In 1995, Fox became the object of scrutiny from the Federal Communications Commission (FCC), when it was alleged that News Ltd.'s Australian base made Murdoch's ownership of Fox illegal. However, the FCC ruled in Murdoch's favour, stating that his ownership of Fox was in the best interests of the public. That same year, Murdoch announced a deal with MCI Communications to develop a major news website and magazine, "The Weekly Standard". Also that year, News Corporation launched the Foxtel pay television network in Australia in partnership with Telstra. In 1996, Murdoch decided to enter the cable news market with the Fox News Channel, a 24-hour cable news station. Ratings studies released in 2009 showed that the network was responsible for nine of the top ten programs in the "Cable News" category at that time. Rupert Murdoch and Ted Turner (founder and former owner of CNN) are long-standing rivals. In late 2003, Murdoch acquired a 34% stake in Hughes Electronics, the operator of the largest American satellite TV system, DirecTV, from General Motors for $6 billion (USD). His Fox movie studio had global hits with "Titanic" and "Avatar".
In 2004, Murdoch announced that he was moving News Corporation headquarters from Adelaide, Australia to the United States. Choosing a US domicile was designed to ensure that American fund managers could purchase shares in the company, since many were deciding not to buy shares in non-US companies.
On 20 July 2005, News Corporation bought Intermix Media Inc., which held Myspace, Imagine Games Network and other social networking-themed websites, for US$580 million, making Murdoch a major player in online media concerns. In June 2011, it sold off Myspace for US$35 million. On 11 September 2005, News Corporation announced that it would buy IGN Entertainment for $650 million (USD).
In May 2007, Murdoch made a $5 billion offer to purchase Dow Jones & Company. The Bancroft family, who had owned Dow Jones & Company for 105 years and controlled 64% of its shares, initially declined the offer but later confirmed a willingness to consider a sale. Besides Murdoch, the Associated Press reported that supermarket magnate Ron Burkle and Internet entrepreneur Brad Greenspan were among the other interested parties. Later in 2007, Murdoch acquired Dow Jones & Company, which gave him such publications as "The Wall Street Journal", "Barron's", the "Far Eastern Economic Review" (based in Hong Kong) and "SmartMoney".
In June 2014, Murdoch's 21st Century Fox made a bid for Time Warner at $85 per share in stock and cash ($80 billion total) which Time Warner's board of directors turned down in July. Warner's CNN unit would have been sold to ease antitrust issues of the purchase. On 5 August 2014 the company announced it had withdrawn its offer for Time Warner, and said it would spend $6 billion buying back its own shares over the following 12 months.
McKnight (2010) identifies four characteristics of his media operations: free market ideology; unified positions on matters of public policy; global editorial meetings; and opposition to liberal bias in other public media.
On 8 May 2006, the "Financial Times" reported that Murdoch would be hosting a fund-raiser for Senator Hillary Clinton's (D-New York) Senate re-election campaign. In a 2008 interview with Walt Mossberg, Murdoch was asked whether he had "anything to do with the "New York Post"'s endorsement of Barack Obama in the Democratic primaries". Without hesitating, Murdoch replied, "Yeah. He is a rock star. It's fantastic. I love what he is saying about education. I don't think he will win Florida [...] but he will win in Ohio and the election. I am anxious to meet him. I want to see if he will walk the walk." Murdoch is a strong supporter of Israel and its domestic policies.
In 2010, News Corporation gave US$1 million to the Republican Governors Association and $1 million to the U.S. Chamber of Commerce. Murdoch also served on the board of directors of the libertarian Cato Institute. Murdoch is also a supporter of the Stop Online Piracy Act and Protect Intellectual Property Act.
Murdoch advocates more open immigration policies in western nations generally. In the United States, Murdoch and chief executives from several major corporations, including Hewlett-Packard, Boeing and Disney joined New York City Mayor Michael Bloomberg to form the Partnership for a New American Economy to advocate "for immigration reform – including a path to legal status for all illegal aliens now in the United States". The coalition, reflecting Murdoch and Bloomberg's own views, also advocates significant increases in legal immigration to the United States as a means of boosting America's sluggish economy and lowering unemployment. The Partnership's immigration policy prescriptions are notably similar to those of the Cato Institute and the US Chamber of Commerce — both of which Murdoch has supported in the past.
"The Wall Street Journal" editorial page has similarly advocated for increased legal immigration, in contrast to the staunch anti-immigration stance of Murdoch's British newspaper, "The Sun". On 5 September 2010, Murdoch testified before the House Subcommittee on Immigration, Citizenship, Refugees, Border Security, and International Law Membership on the "Role of Immigration in Strengthening America's Economy". In his testimony, Murdoch called for ending mass deportations and endorsed a "comprehensive immigration reform" plan that would include a pathway to citizenship for all illegal immigrants.
In the 2012 U.S. presidential election, Murdoch was critical of the competence of Mitt Romney's team but was nonetheless strongly supportive of a Republican victory, tweeting: "Of course I want him [Romney] to win, save us from socialism, etc."
In May 2013, Murdoch purchased the Moraga Estate, an estate, vineyard and winery in Bel Air, Los Angeles, California.
In October 2015, Murdoch stirred controversy when he praised Republican presidential candidate Ben Carson and referenced President Barack Obama, tweeting, "Ben and Candy Carson terrific. What about a real black President who can properly address the racial divide? And much else." After which he apologized, tweeting, "Apologies! No offence meant. Personally find both men charming."
Since Donald Trump became the US President, Murdoch has shown support for him through the news stories broadcast in his media empire, including on Fox News. In early 2018, Mohammad bin Salman, the crown prince and de facto ruler of Saudi Arabia, had an intimate dinner at Murdoch's Bel Air estate in Los Angeles.
Murdoch owns a controlling interest in Sky Italia, a satellite television provider in Italy. Murdoch's business interests in Italy have been a source of contention since they began. In 2010, Murdoch won a media dispute with the then Italian Prime Minister Silvio Berlusconi: a judge ruled that the prime minister's media company Mediaset had prevented News Corporation's Italian unit, Sky Italia, from buying advertisements on its television networks.
In November 1986, News Corporation purchased a 35% stake in the "South China Morning Post" group for about . At that time, the SCMP group was a stock-listed company owned by HSBC, Hutchison Whampoa and Dow Jones & Company. In December 1986, Dow Jones & Company offered to sell News Corporation the roughly 19% share it owned of SCMP for , and by 1987 News Corporation had completed the full takeover. In September 1993, News Corporation agreed to sell a 34.9% share in SCMP to Robert Kuok's Kerry Media for . In 1994, News Corporation sold the remaining 15.1% share in SCMP to the MUI Group, disposing of the Hong Kong newspaper entirely.
In June 1993, News Corporation attempted to acquire a 22% share in TVB, a terrestrial television broadcaster in Hong Kong, for about $237 million, but Murdoch's company abandoned the bid, as the Hong Kong government would not relax its regulations on foreign ownership of broadcasting companies.
In 1993, News Corporation acquired Star TV (renamed as Star in 2001), a Hong Kong company headed by Richard Li, from Hutchison Whampoa for $1 billion (Souchou, 2000:28), and subsequently set up offices for it throughout Asia. The deal enabled News International to broadcast from Hong Kong to India, China, Japan and over thirty other countries in Asia, becoming one of the biggest satellite television networks in the east. However, the deal did not work out as Murdoch had planned, because the Chinese government placed restrictions on it that prevented it from reaching most of China.
In 2009, News Corporation reorganised Star; among other changes, the original company's operations in East Asia, Southeast Asia and the Middle East were integrated into Fox International Channels, and Star India was spun off (though it remained within News Corporation).
In 1956 Murdoch married Patricia Booker, a former shop assistant and flight attendant from Melbourne; the couple had their only child, Prudence, in 1958. They divorced in 1967.
In 1967, Murdoch married Anna Maria Torv (Tõrv), a Scottish-born cadet journalist working for his Sydney newspaper "The Daily Telegraph". In January 1998, three months before the announcement of his separation from Anna, a Roman Catholic, Murdoch was made a Knight Commander of the Order of Saint Gregory the Great (KSG), a papal honour awarded by Pope John Paul II. While Murdoch would often attend Mass with Torv, he never converted to Catholicism. Torv and Murdoch had three children: Elisabeth Murdoch (born in Sydney, Australia on 22 August 1968), Lachlan Murdoch (born in London, UK on 8 September 1971), and James Murdoch (born in London on 13 December 1972). Murdoch's companies published two novels by his then-wife: "Family Business" (1988) and "Coming to Terms" (1991), both considered to be vanity publications. They divorced in June 1999, and Anna Murdoch received a settlement of US$1.2 billion in assets.
On 25 June 1999, 17 days after divorcing his second wife, Murdoch, then aged 68, married Chinese-born Wendi Deng. She was 30, a recent Yale School of Management graduate, and a newly appointed vice-president of his STAR TV. Murdoch had two daughters with her: Grace (born 2001) and Chloe (born 2003). Murdoch has six children in all, and is grandfather to thirteen grandchildren. Near the end of his marriage to Deng, rumours of a possible link between her and Chinese intelligence strained their relationship. On 13 June 2013, a News Corporation spokesperson confirmed that Murdoch had filed for divorce from Deng in New York City. According to the spokesman, the marriage had been irretrievably broken for more than six months. Murdoch also ended his long-standing friendship with Tony Blair after suspecting him of having an affair with Deng while they were still married.
On 11 January 2016, Murdoch announced his engagement to former model Jerry Hall in a notice in "The Times" newspaper. On 4 March 2016, Murdoch, a week short of his 85th birthday, and 59-year-old Hall were married in London, at Spencer House; this is Murdoch's fourth marriage.
Murdoch has six children. His eldest child, Prudence MacLeod, was appointed on 28 January 2011 to the board of Times Newspapers Ltd, part of News International, which publishes "The Times" and "The Sunday Times". Murdoch's elder son Lachlan, formerly the Deputy Chief Operating Officer at News Corporation and publisher of the "New York Post", was Murdoch's heir apparent before resigning from his executive posts at the global media company at the end of July 2005. Lachlan's departure left James Murdoch, Chief Executive of the satellite television service British Sky Broadcasting since November 2003, as the only Murdoch son still directly involved with the company's operations, though Lachlan agreed to remain on News Corporation's board.
After graduating from Vassar College and marrying classmate Elkin Kwesi Pianim (the son of Ghanaian financial and political mogul Kwame Pianim) in 1993, Murdoch's daughter Elisabeth and her husband purchased a pair of NBC-affiliate television stations in California, KSBW and KSBY, with a $35 million loan provided by her father. By quickly re-organising and re-selling them at a $12 million profit in 1995, Elisabeth emerged as an unexpected rival to her brothers for the eventual leadership of the publishing dynasty. But, after divorcing Pianim in 1998 and quarrelling publicly with her assigned mentor Sam Chisholm at BSkyB, she struck out on her own as a television and film producer in London. She has since enjoyed independent success, in conjunction with her second husband, Matthew Freud, the great-grandson of Sigmund Freud, whom she met in 1997 and married in 2001.
It is not known how long Murdoch will remain as News Corporation's CEO. For a while, the American cable television entrepreneur John Malone was the second-largest voting shareholder in News Corporation after Murdoch himself, potentially undermining the family's control. In 2007, the company announced that it would sell certain assets and give cash to Malone's company in exchange for its stock, and that same year it issued Murdoch's older children voting stock.
Murdoch has two children with Wendi Deng: Grace (b. New York, 19 November 2001) and Chloe (b. New York, 17 July 2003). It was revealed in September 2011 that Tony Blair is Grace's godfather. There is reported to be tension between Murdoch and his oldest children over the terms of a trust holding the family's 28.5% stake in News Corporation, estimated in 2005 to be worth about $6.1 billion. Under the trust, his children by Wendi Deng share in the proceeds of the stock but have no voting privileges or control of the stock. Voting rights in the stock are divided 50/50 between Murdoch on the one side and his children from his first two marriages on the other. Murdoch's voting privileges are not transferable but will expire upon his death, and the stock will then be controlled solely by his children from the prior marriages, although their half-siblings will continue to derive their share of income from it. It is Murdoch's stated desire to have his children by Deng given a measure of control over the stock proportional to their financial interest in it (which would mean, if Murdoch dies while at least one of the children is a minor, that Deng would exercise that control). It does not appear that he has any strong legal grounds to contest the present arrangement, and both ex-wife Anna and their three children are said to be strongly resistant to any such change.
Murdoch and rival newspaper and publishing magnate Robert Maxwell are thinly fictionalised as "Keith Townsend" and "Richard Armstrong" in "The Fourth Estate" by British novelist and former MP Jeffrey Archer.
It was speculated that the character of Elliot Carver, the global media magnate and main villain in the 1997 James Bond movie "Tomorrow Never Dies", is based on Murdoch. The screenwriter of the film, Bruce Feirstein, stated that Carver was actually inspired by British press magnate Robert Maxwell, who was one of Murdoch's rivals.
Whenever the Eagles drummer and lead singer Don Henley performs his 1982 solo hit "Dirty Laundry", which directly criticizes what Henley sees as the news industry's favoring of style and sensationalism over substance and proper journalism, he says that he'd "like to dedicate this song to Mr. Rupert Murdoch."
In the 1997 film "Fierce Creatures", the character Rod McCain (initials R.M.), head of Octopus Inc., is likely modelled after Murdoch.
In 1999, the Ted Turner-owned TBS channel aired an original sitcom, "The Chimp Channel", which featured an all-simian cast including the role of an Australian TV veteran named Harry Waller. The character is described as "a self-made gazillionaire with business interests in all sorts of fields. He owns newspapers, hotel chains, sports franchises and genetic technologies, as well as everyone's favourite cable TV channel, The Chimp Channel." Waller is thought to be a parody of Murdoch, a long-time rival of Turner.
In 2004, the movie "Outfoxed" included many interviews accusing Fox News of pressuring reporters to report only one side of news stories, in order to influence viewers' political opinions.
In 2012, the satirical show "Hacks", broadcast on the UK's Channel 4, made obvious comparisons with Murdoch using the fictional character "Stanhope Feast", portrayed by Michael Kitchen, as well as other central figures in the phone hacking scandal.
In the novel "Dunbar" by Edward St Aubyn the eponymous lead character is at least partly inspired by Murdoch.
Murdoch was part of the inspiration for Logan Roy, the protagonist of the TV show "Succession", who is portrayed by Brian Cox.
Murdoch is played by Malcolm McDowell in the 2019 film "Bombshell".
According to "Forbes'" real time list of world's billionaires, Murdoch is the 34th richest person in the US and the 96th richest person in the world, with a net worth of US$13.1 billion as of February 2017. In 2016, "Forbes" ranked "Rupert Murdoch & Family" as the 35th most powerful person in the world. Later, in 2019, Rupert Murdoch & family were ranked 52nd in the Forbes' annual list of the world's billionaires.
In 2003, Murdoch bought "Rosehearty", an 11-bedroom home on a five-acre waterfront estate in Centre Island, New York.
In August 2013, Terry Flew, Professor of Media and Communications at Queensland University of Technology, wrote an article for "The Conversation" in which he examined a claim by former Australian prime minister Kevin Rudd that Murdoch owned 70% of Australian newspapers in 2011. Flew's article showed that News Corp Australia owned 23% of the nation's newspaper titles in 2011, according to the Finkelstein Review of Media and Media Regulation, but that, at the time of the article, the corporation's titles accounted for 59% of the sales of all daily newspapers, with weekly sales of 17.3 million copies.
In connection with Murdoch's testimony to the Leveson Inquiry "into the ethics of the British press", editor of "Newsweek International", Tunku Varadarajan, referred to him as "the man whose name is synonymous with unethical newspapers".
News Corp papers were accused of supporting the campaign of the Australian Liberal government and influencing public opinion during the 2013 federal election. Following the announcement of the Liberal Party victory at the polls, Murdoch tweeted "Aust. election public sick of public sector workers and phony welfare scroungers sucking life out of economy. Other nations to follow in time."
In November 2015, former Australian prime minister Tony Abbott said that Murdoch "arguably has had more impact on the wider world than any other living Australian".
In late 2015, "The Wall Street Journal" journalist John Carreyrou began a series of investigative articles on Theranos, the blood-testing start-up founded by Elizabeth Holmes, that questioned its claim to be able to run a wide range of lab tests from a tiny sample of blood from a finger prick. Holmes turned to Murdoch, whose media empire includes Carreyrou's employer, "The Wall Street Journal", to kill the story. Murdoch, who had become the biggest investor in Theranos in 2015 as a result of his $125 million injection, refused the request from Holmes, saying that he "trusted the paper's editors to handle the matter fairly".
Historical negationism
Historical negationism, also called denialism, is a distortion of the historical record. It is often imprecisely referred to as "historical revisionism", but that term also applies to legitimate academic reinterpretations of the historical record that diverge from previously accepted views.
In attempting to revise the past, illegitimate historical revisionism may use techniques inadmissible in proper historical discourse, such as presenting known forged documents as genuine, inventing ingenious but implausible reasons for distrusting genuine documents, attributing conclusions to books and sources that report the opposite, manipulating statistical series to support the given point of view, and deliberately mistranslating texts.
Some countries, such as Germany, have criminalized the negationist revision of certain historical events, while others take a more cautious position for various reasons, such as protection of free speech; others mandate negationist views.
Notable examples of negationism include Holocaust denial, Armenian Genocide denial, Lost Cause of the Confederacy, Myth of the clean Wehrmacht, Japanese war crime denial and the denial of Soviet crimes.
In literature, the consequences of historical negationism have been imaginatively depicted in some works of fiction, such as "Nineteen Eighty-Four", by George Orwell. In modern times, negationism may spread via new media, such as the Internet.
The term "negationism" ("négationnisme") was first coined by the French historian Henry Rousso in his 1987 book "The Vichy Syndrome" which looked at the French popular memory of Vichy France and the French Resistance. Rousso argued that it was necessary to distinguish between legitimate historical revisionism in Holocaust studies and politically motivated denial of the Holocaust, which he termed negationism.
Usually, the purpose of historical negationism is to achieve a national, political aim, by transferring war-guilt, demonizing an enemy, providing an illusion of victory, or preserving a friendship. Sometimes the purpose of a revised history is to sell more books or to attract attention with a newspaper headline. The historian James M. McPherson said that negationists would want revisionist history understood as "a consciously falsified or distorted interpretation of the past to serve partisan or ideological purposes in the present".
The principal functions of negationist history are to control ideological influence and to control political influence.
In "History Men Battle over Britain's Future", Michael d'Ancona said that historical negationists "seem to have been given a collective task in [a] nation's cultural development, the full significance of which is emerging only now: To redefine [national] status in a changing world". History is a social resource that contributes to shaping national identity, culture, and the public memory. Through the study of history, people are imbued with a particular cultural identity; therefore, by negatively revising history, the negationist can craft a specific, ideological identity. Because historians are credited as people who single-mindedly pursue truth, by way of fact, negationist historians capitalize on the historian's professional credibility, and present their pseudohistory as true scholarship. By adding a measure of credibility to the work of revised history, the ideas of the negationist historian are more readily accepted in the public mind. As such, professional historians recognize the revisionist practice of historical negationism as the work of "truth-seekers" finding different truths in the historical record to fit their political, social, and ideological contexts.
History provides insight into past political policies and consequences, and thus assists people in extrapolating political implications for contemporary society. Historical negationism is applied to cultivate a specific political myth – sometimes with official consent from the government – whereby self-taught, amateur, and dissident academic historians either manipulate or misrepresent historical accounts to achieve political ends. In the USSR (1917–91), the ideology of the Communist Party of the Soviet Union and Soviet historiography treated reality and the party line as the same intellectual entity; Soviet historical negationism advanced a specific, political and ideological agenda about Russia and its place in world history.
Historical negationism applies the techniques of research, quotation, and presentation to deceive the reader and deny the historical record. In support of the "revised history" perspective, the negationist historian uses false documents as genuine sources, presents specious reasons to distrust genuine documents, exploits published opinions by quoting out of historical context, manipulates statistics, and mistranslates texts in other languages. The revision techniques of historical negationism operate in the intellectual space of public debate for the advancement of a given interpretation of history and the cultural perspective of the "revised history". As a document, the revised history is used to negate the validity of the factual, documentary record, and so reframe explanations and perceptions of the discussed historical event, in order to deceive the reader, the listener, and the viewer; therefore, historical negationism functions as a technique of propaganda. Rather than submit their works for peer review, negationist historians rewrite history and use logical fallacies to construct arguments that will obtain the desired results, a "revised history" that supports an agenda – political, ideological, religious, etc. In the practice of historiography, the British historian Richard J. Evans has described the technical differences between professional historians and negationist historians.
Deception includes falsifying information, obscuring the truth, and lying in order to manipulate public opinion about the historical event discussed in the revised history. The negationist historian applies the techniques of deception to achieve either a political or an ideological goal, or both. The field of history distinguishes between history books based upon credible, verifiable sources, which were peer-reviewed before publication, and deceptive history books, based upon unreliable sources, which were not submitted for peer review.
Romanticism
Romanticism (also known as the Romantic era) was an artistic, literary, musical and intellectual movement that originated in Europe towards the end of the 18th century, and in most areas was at its peak in the approximate period from 1800 to 1850. Romanticism was characterized by its emphasis on emotion and individualism as well as glorification of the past and of nature, preferring the medieval rather than the classical. It was partly a reaction to the Industrial Revolution, the aristocratic social and political norms of the Age of Enlightenment, and the scientific rationalization of nature—all components of modernity. It was embodied most strongly in the visual arts, music, and literature, but had a major impact on historiography, education, chess, social sciences, and the natural sciences. It had a significant and complex effect on politics, with romantic thinkers influencing liberalism, radicalism, conservatism, and nationalism.
The movement emphasized intense emotion as an authentic source of aesthetic experience, placing new emphasis on such emotions as apprehension, horror and terror, and awe—especially that experienced in confronting the new aesthetic categories of the sublimity and beauty of nature. It elevated folk art and ancient custom to something noble, but also spontaneity as a desirable characteristic (as in the musical impromptu). In contrast to the Rationalism and Classicism of the Enlightenment, Romanticism revived medievalism and elements of art and narrative perceived as authentically medieval in an attempt to escape population growth, early urban sprawl, and industrialism.
Although the movement was rooted in the German "Sturm und Drang" movement, which preferred intuition and emotion to the rationalism of the Enlightenment, the events and ideologies of the French Revolution were also proximate factors. Romanticism assigned a high value to the achievements of "heroic" individualists and artists, whose examples, it maintained, would raise the quality of society. It also promoted the individual imagination as a critical authority, which permitted freedom from classical notions of form in art. There was a strong recourse to historical and natural inevitability, a "Zeitgeist", in the representation of its ideas. In the second half of the 19th century, Realism was offered as a polar opposite to Romanticism. The decline of Romanticism during this time was associated with multiple processes, including social and political changes and the spread of nationalism.
The nature of Romanticism may be approached from the primary importance of the free expression of the feelings of the artist. The importance the Romantics placed on emotion is summed up in the remark of the German painter Caspar David Friedrich, "the artist's feeling is his law". For William Wordsworth, poetry should begin as "the spontaneous overflow of powerful feelings", which the poet then "recollect[s] in tranquility", evoking a new but corresponding emotion the poet can then mold into art.
To express these feelings, it was considered the content of art had to come from the imagination of the artist, with as little interference as possible from "artificial" rules dictating what a work should consist of. Samuel Taylor Coleridge and others believed there were natural laws the imagination—at least of a good creative artist—would unconsciously follow through artistic inspiration if left alone. As well as rules, the influence of models from other works was considered to impede the creator's own imagination, so that originality was essential. The concept of the genius, or artist who was able to produce his own original work through this process of "creation from nothingness", is key to Romanticism, and to be derivative was the worst sin. This idea is often called "romantic originality". Translator and prominent Romantic August Wilhelm Schlegel argued in his "Lectures on Dramatic Arts and Letters" that the most phenomenal power of human nature is its capacity to divide and diverge into opposite directions.
Not essential to Romanticism, but so widespread as to be normative, was a strong belief and interest in the importance of nature. This applied particularly to the effect of nature upon the artist when he is surrounded by it, preferably alone. In contrast to the usually very social art of the Enlightenment, Romantics were distrustful of the human world, and tended to believe a close connection with nature was mentally and morally healthy. Romantic art addressed its audiences with what was intended to be felt as the personal voice of the artist. So, in literature, "much of romantic poetry invited the reader to identify the protagonists with the poets themselves".
According to Isaiah Berlin, Romanticism embodied "a new and restless spirit, seeking violently to burst through old and cramping forms, a nervous preoccupation with perpetually changing inner states of consciousness, a longing for the unbounded and the indefinable, for perpetual movement and change, an effort to return to the forgotten sources of life, a passionate effort at self-assertion both individual and collective, a search after means of expressing an unappeasable yearning for unattainable goals".
The group of words with the root "Roman" in the various European languages, such as "romance" and "Romanesque", has a complicated history. By the 1700s, European languages – notably German, French and Russian – were using the term "Roman" in the sense of the English word "novel", i.e. a work of popular narrative fiction. This usage derived from the term "Romance languages", which referred to vernacular (or popular) language in contrast to formal Latin. Most such novels took the form of "chivalric romance", tales of adventure, devotion and honour.
The founders of Romanticism, the critics August Wilhelm and Friedrich Schlegel, began to speak of "romantische Poesie" ("romantic poetry") in the 1790s, contrasting it with "classic" poetry in terms of spirit rather than merely dating. Friedrich Schlegel wrote in his 1800 essay "Gespräch über die Poesie" ("Dialogue on Poetry"): "I seek and find the romantic among the older moderns, in Shakespeare, in Cervantes, in Italian poetry, in that age of chivalry, love and fable, from which the phenomenon and the word itself are derived."
The modern sense of the term spread more widely in France through its persistent use by Germaine de Staël in her "De l'Allemagne" (1813), recounting her travels in Germany. In England, Wordsworth wrote in a preface to his poems of 1815 of the "romantic harp" and "classic lyre", but in 1820 Byron could still write, perhaps slightly disingenuously, "I perceive that in Germany, as well as in Italy, there is a great struggle about what they call 'Classical' and 'Romantic', terms which were not subjects of classification in England, at least when I left it four or five years ago". It is only from the 1820s that Romanticism certainly knew itself by its name, and in 1824 the Académie française took the wholly ineffective step of issuing a decree condemning it in literature.
The period typically called Romantic varies greatly between different countries and different artistic media or areas of thought. Margaret Drabble described it in literature as taking place "roughly between 1770 and 1848", and few dates much earlier than 1770 will be found. In English literature, M. H. Abrams placed it between 1789, or 1798, this latter a very typical view, and about 1830, perhaps a little later than some other critics. Others have proposed 1780–1830. In other fields and other countries the period denominated as Romantic can be considerably different; musical Romanticism, for example, is generally regarded as only having ceased as a major artistic force as late as 1910, but in an extreme extension the "Four Last Songs" of Richard Strauss are described stylistically as "Late Romantic" and were composed in 1946–48. However, in most fields the Romantic period is said to be over by about 1850, or earlier.
The early period of the Romantic era was a time of war, with the French Revolution (1789–1799) followed by the Napoleonic Wars until 1815. These wars, along with the political and social turmoil that went along with them, served as the background for Romanticism. The key generation of French Romantics born between 1795 and 1805 had, in the words of one of their number, Alfred de Vigny, been "conceived between battles, attended school to the rolling of drums". According to Jacques Barzun, there were three generations of Romantic artists: the first emerged in the 1790s and 1800s, the second in the 1820s, and the third later in the century.
The more precise characterization and specific definition of Romanticism has been the subject of debate in the fields of intellectual history and literary history throughout the 20th century, without any great measure of consensus emerging. That it was part of the Counter-Enlightenment, a reaction against the Age of Enlightenment, is generally accepted in current scholarship. Its relationship to the French Revolution, which began in 1789 in the very early stages of the period, is clearly important, but highly variable depending on geography and individual reactions. Most Romantics can be said to be broadly progressive in their views, but a considerable number always had, or developed, a wide range of conservative views, and nationalism was in many countries strongly associated with Romanticism, as discussed in detail below.
In philosophy and the history of ideas, Romanticism was seen by Isaiah Berlin as disrupting for over a century the classic Western traditions of rationality and the idea of moral absolutes and agreed values, leading "to something like the melting away of the very notion of objective truth", and hence not only to nationalism, but also fascism and totalitarianism, with a gradual recovery coming only after World War II. For the Romantics, Berlin says, in the realm of ethics, politics, aesthetics it was the authenticity and sincerity of the pursuit of inner goals that mattered; this applied equally to individuals and groups—states, nations, movements. This is most evident in the aesthetics of romanticism, where the notion of eternal models, a Platonic vision of ideal beauty, which the artist seeks to convey, however imperfectly, on canvas or in sound, is replaced by a passionate belief in spiritual freedom, individual creativity. The painter, the poet, the composer do not hold up a mirror to nature, however ideal, but invent; they do not imitate (the doctrine of mimesis), but create not merely the means but the goals that they pursue; these goals represent the self-expression of the artist's own unique, inner vision, to set aside which in response to the demands of some "external" voice—church, state, public opinion, family, friends, arbiters of taste—is an act of betrayal of what alone justifies their existence for those who are in any sense creative.
Arthur Lovejoy attempted to demonstrate the difficulty of defining Romanticism in his seminal article "On The Discrimination of Romanticisms" in his "Essays in the History of Ideas" (1948); some scholars see Romanticism as essentially continuous with the present, some like Robert Hughes see in it the inaugural moment of modernity, and some like Chateaubriand, Novalis and Samuel Taylor Coleridge see it as the beginning of a tradition of resistance to Enlightenment rationalism—a "Counter-Enlightenment"— to be associated most closely with German Romanticism. An earlier definition comes from Charles Baudelaire: "Romanticism is precisely situated neither in choice of subject nor exact truth, but in the way of feeling."
The end of the Romantic era is marked in some areas by a new style of Realism, which affected literature, especially the novel and drama, painting, and even music, through Verismo opera. This movement was led by France, with Balzac and Flaubert in literature and Courbet in painting; Stendhal and Goya were important precursors of Realism in their respective media. However, Romantic styles, now often representing the established and safe style against which Realists rebelled, continued to flourish in many fields for the rest of the century and beyond. In music such works from after about 1850 are referred to by some writers as "Late Romantic" and by others as "Neoromantic" or "Postromantic", but other fields do not usually use these terms; in English literature and painting the convenient term "Victorian" avoids having to characterise the period further.
In northern Europe, the Early Romantic visionary optimism and belief that the world was in the process of great change and improvement had largely vanished, and some art became more conventionally political and polemical as its creators engaged with the world as it was. Elsewhere, including in very different ways the United States and Russia, feelings that great change was underway or just about to come were still possible. Displays of intense emotion in art remained prominent, as did the exotic and historical settings pioneered by the Romantics, but experimentation with form and technique was generally reduced, often replaced with meticulous technique, as in the poems of Tennyson or many paintings. If not realist, late 19th-century art was often extremely detailed, and pride was taken in adding authentic details in a way that earlier Romantics did not trouble with. Many Romantic ideas about the nature and purpose of art, above all the pre-eminent importance of originality, remained important for later generations, and often underlie modern views, despite opposition from theorists.
In literature, Romanticism found recurrent themes in the evocation or criticism of the past, the cult of "sensibility" with its emphasis on women and children, the isolation of the artist or narrator, and respect for nature. Furthermore, several romantic authors, such as Edgar Allan Poe and Nathaniel Hawthorne, based their writings on the supernatural/occult and human psychology. Romanticism tended to regard satire as something unworthy of serious attention, a prejudice still influential today. The Romantic movement in literature was preceded by the Enlightenment and succeeded by Realism.
Some authors cite the 16th-century poet Isabella di Morra as an early precursor of Romantic literature. Her lyrics, covering themes of isolation and loneliness and reflecting the tragic events of her life, are considered "an impressive prefigurement of Romanticism", differing from the Petrarchist fashion of the time, which was based on the philosophy of love.
The precursors of Romanticism in English poetry go back to the middle of the 18th century, including figures such as Joseph Warton (headmaster at Winchester College) and his brother Thomas Warton, Professor of Poetry at Oxford University. Joseph maintained that invention and imagination were the chief qualities of a poet. The Scottish poet James Macpherson influenced the early development of Romanticism with the international success of his Ossian cycle of poems, published in 1762, inspiring both Goethe and the young Walter Scott. Thomas Chatterton is generally considered the first Romantic poet in English. Both Chatterton's and Macpherson's work involved elements of fraud, as what they claimed was earlier literature that they had discovered or compiled was, in fact, entirely their own work. The Gothic novel, beginning with Horace Walpole's "The Castle of Otranto" (1764), was an important precursor of one strain of Romanticism, with its delight in horror and threat and its exotic picturesque settings, matched in Walpole's case by his role in the early revival of Gothic architecture. "Tristram Shandy", a novel by Laurence Sterne (1759–67), introduced a whimsical version of the anti-rational sentimental novel to the English literary public.
An early German influence came from Johann Wolfgang von Goethe, whose 1774 novel "The Sorrows of Young Werther" had young men throughout Europe emulating its protagonist, a young artist with a very sensitive and passionate temperament. At that time Germany was a multitude of small separate states, and Goethe's works would have a seminal influence in developing a unifying sense of nationalism. Another philosophic influence came from the German idealism of Johann Gottlieb Fichte and Friedrich Schelling, making Jena (where Fichte lived, as well as Schelling, Hegel, Schiller and the brothers Schlegel) a centre for early German Romanticism (see Jena Romanticism). Important writers were Ludwig Tieck, Novalis ("Heinrich von Ofterdingen", 1799), Heinrich von Kleist and Friedrich Hölderlin. Heidelberg later became a centre of German Romanticism, where writers and poets such as Clemens Brentano, Achim von Arnim, and Joseph Freiherr von Eichendorff ("Aus dem Leben eines Taugenichts") met regularly in literary circles.
Important motifs in German Romanticism are travelling, nature (for example the German Forest), and Germanic myths. The later German Romanticism of, for example, E. T. A. Hoffmann's "Der Sandmann" ("The Sandman", 1817) and Joseph Freiherr von Eichendorff's "Das Marmorbild" ("The Marble Statue", 1819) was darker in its motifs and had Gothic elements. The significance to Romanticism of childhood innocence, the importance of imagination, and racial theories all combined to give an unprecedented importance to folk literature, non-classical mythology and children's literature, above all in Germany. Brentano and von Arnim were significant literary figures who together published "Des Knaben Wunderhorn" ("The Boy's Magic Horn" or cornucopia), a collection of versified folk tales, in 1806–08. The first collection of "Grimms' Fairy Tales" by the Brothers Grimm was published in 1812. Unlike the much later work of Hans Christian Andersen, who was publishing his invented tales in Danish from 1835, these German works were at least mainly based on collected folk tales, and the Grimms remained true to the style of the telling in their early editions, though they later rewrote some parts. One of the brothers, Jacob, published in 1835 "Deutsche Mythologie", a long academic work on Germanic mythology. Another strain is exemplified by Schiller's highly emotional language and the depiction of physical violence in his play "The Robbers" of 1781.
In English literature, the key figures of the Romantic movement are considered to be the group of poets including William Wordsworth, Samuel Taylor Coleridge, John Keats, Lord Byron, Percy Bysshe Shelley and the much older William Blake, followed later by the isolated figure of John Clare; also such novelists as Walter Scott from Scotland and Mary Shelley, and the essayists William Hazlitt and Charles Lamb. The publication in 1798 of "Lyrical Ballads", with many of the finest poems by Wordsworth and Coleridge, is often held to mark the start of the movement. The majority of the poems were by Wordsworth, and many dealt with the lives of the poor in his native Lake District, or his feelings about nature—which he more fully developed in his long poem "The Prelude", never published in his lifetime. The longest poem in the volume was Coleridge's "The Rime of the Ancient Mariner", which showed the Gothic side of English Romanticism, and the exotic settings that many works featured. In the period when they were writing, the Lake Poets were widely regarded as a marginal group of radicals, though they were supported by the critic and writer William Hazlitt and others.
In contrast, Lord Byron and Walter Scott achieved enormous fame and influence throughout Europe with works exploiting the violence and drama of their exotic and historical settings; Goethe called Byron "undoubtedly the greatest genius of our century". Scott achieved immediate success with his long narrative poem "The Lay of the Last Minstrel" in 1805, followed by the full epic poem "Marmion" in 1808. Both were set in the distant Scottish past, already evoked in "Ossian"; Romanticism and Scotland were to have a long and fruitful partnership. Byron had equal success with the first part of "Childe Harold's Pilgrimage" in 1812, followed by four "Turkish tales", all in the form of long poems, starting with "The Giaour" in 1813, drawing from his Grand Tour, which had reached Ottoman Europe, and orientalizing the themes of the Gothic novel in verse. These featured different variations of the "Byronic hero", and his own life contributed a further version. Scott meanwhile was effectively inventing the historical novel, beginning in 1814 with "Waverley", set in the 1745 Jacobite rising, which was an enormous and highly profitable success, followed by over 20 further Waverley Novels over the next 17 years, with settings going back to the Crusades that he had researched to a degree that was new in literature.
In contrast to Germany, Romanticism in English literature had little connection with nationalism, and the Romantics were often regarded with suspicion for the sympathy many felt for the ideals of the French Revolution, whose collapse and replacement with the dictatorship of Napoleon was, as elsewhere in Europe, a shock to the movement. Though his novels celebrated Scottish identity and history, Scott was politically a firm Unionist, but admitted to Jacobite sympathies. Several spent much time abroad, and a famous stay on Lake Geneva with Byron and Shelley in 1816 produced the hugely influential novel "Frankenstein" by Shelley's wife-to-be Mary Shelley and the novella "The Vampyre" by Byron's doctor John William Polidori. The lyrics of Robert Burns in Scotland, and Thomas Moore from Ireland, reflected in different ways their countries and the Romantic interest in folk literature, but neither had a fully Romantic approach to life or their work.
Though they have modern critical champions such as György Lukács, Scott's novels are today more likely to be experienced in the form of the many operas that composers continued to base on them over the following decades, such as Donizetti's "Lucia di Lammermoor" and Vincenzo Bellini's "I puritani" (both 1835). Byron is now most highly regarded for his short lyrics and his generally unromantic prose writings, especially his letters, and his unfinished satire "Don Juan". Unlike many Romantics, Byron's widely publicised personal life appeared to match his work, and his death in 1824, at the age of 36, from disease while supporting the Greek War of Independence appeared from a distance to be a suitably Romantic end, entrenching his legend. Keats died in 1821 and Shelley in 1822, both in Italy, Blake (at almost 70) in 1827, and Coleridge largely ceased to write in the 1820s. Wordsworth was by 1820 respectable and highly regarded, holding a government sinecure, but wrote relatively little. In the discussion of English literature, the Romantic period is often regarded as finishing around the 1820s, or sometimes even earlier, although many authors of the succeeding decades were no less committed to Romantic values.
The most significant novelist in English during the peak Romantic period, other than Walter Scott, was Jane Austen, whose essentially conservative world-view had little in common with her Romantic contemporaries, retaining a strong belief in decorum and social rules, though critics such as Claudia L. Johnson have detected tremors under the surface of many works, such as "Northanger Abbey" (1817), "Mansfield Park" (1814) and "Persuasion" (1817). But around the mid-century the undoubtedly Romantic novels of the Yorkshire-based Brontë family appeared, most notably Charlotte's "Jane Eyre" and Emily's "Wuthering Heights", both published in 1847, which also introduced more Gothic themes. Though written and published after the Romantic period is said to have ended, these novels were heavily influenced by the Romantic literature their authors had read as children.
Byron, Keats and Shelley all wrote for the stage, but with little success in England, with Shelley's "The Cenci" perhaps the best work produced, though that was not played in a public theatre in England until a century after his death. Byron's plays, along with dramatizations of his poems and Scott's novels, were much more popular on the Continent, and especially in France, and through these versions several were turned into operas, many still performed today. If contemporary poets had little success on the stage, the period was a legendary one for performances of Shakespeare, and went some way to restoring his original texts and removing the Augustan "improvements" to them. The greatest actor of the period, Edmund Kean, restored the tragic ending to "King Lear"; Coleridge said that, "Seeing him act was like reading Shakespeare by flashes of lightning."
Although after union with England in 1707 Scotland increasingly adopted English language and wider cultural norms, its literature developed a distinct national identity and began to enjoy an international reputation. Allan Ramsay (1686–1758) laid the foundations of a reawakening of interest in older Scottish literature, as well as leading the trend for pastoral poetry, helping to develop the Habbie stanza as a poetic form. James Macpherson (1736–96) was the first Scottish poet to gain an international reputation. Claiming to have found poetry written by the ancient bard Ossian, he published translations that acquired international popularity, being proclaimed as a Celtic equivalent of the Classical epics. "Fingal", written in 1762, was speedily translated into many European languages, and its appreciation of natural beauty and treatment of the ancient legend has been credited more than any single work with bringing about the Romantic movement in European, and especially in German literature, through its influence on Johann Gottfried von Herder and Johann Wolfgang von Goethe. It was also popularised in France by figures that included Napoleon. Eventually it became clear that the poems were not direct translations from Scottish Gaelic, but flowery adaptations made to suit the aesthetic expectations of his audience.
Robert Burns (1759–96) and Walter Scott (1771–1832) were highly influenced by the Ossian cycle. Burns, an Ayrshire poet and lyricist, is widely regarded as the national poet of Scotland and a major influence on the Romantic movement. His poem (and song) "Auld Lang Syne" is often sung at Hogmanay (the last day of the year), and "Scots Wha Hae" served for a long time as an unofficial national anthem of the country. Scott began as a poet and also collected and published Scottish ballads. His first prose work, "Waverley" in 1814, is often called the first historical novel. It launched a highly successful career, with other historical novels such as "Rob Roy" (1817), "The Heart of Midlothian" (1818) and "Ivanhoe" (1820). Scott probably did more than any other figure to define and popularise Scottish cultural identity in the nineteenth century. Other major literary figures connected with Romanticism include the poets and novelists James Hogg (1770–1835), Allan Cunningham (1784–1842) and John Galt (1779–1839). One of the most significant figures of the Romantic movement, Lord Byron, was brought up in Scotland until he inherited his family's English peerage.
Scottish "national drama" emerged in the early 1800s, as plays with specifically Scottish themes began to dominate the Scottish stage. Theatres had been discouraged by the Church of Scotland and fears of Jacobite assemblies. In the later eighteenth century, many plays were written for and performed by small amateur companies and were not published and so most have been lost. Towards the end of the century there were "closet dramas", primarily designed to be read, rather than performed, including work by Scott, Hogg, Galt and Joanna Baillie (1762–1851), often influenced by the ballad tradition and Gothic Romanticism.
Romanticism was relatively late in developing in French literature, more so than in the visual arts. The 18th-century precursor to Romanticism, the cult of sensibility, had become associated with the "Ancien Régime", and the French Revolution had been more of an inspiration to foreign writers than those experiencing it at first-hand. The first major figure was François-René de Chateaubriand, a minor aristocrat who had remained a royalist throughout the Revolution, and returned to France from exile in England and America under Napoleon, with whose regime he had an uneasy relationship. His writings, all in prose, included some fiction, such as his influential novella of exile "René" (1802), which anticipated Byron in its alienated hero, but mostly contemporary history and politics, his travels, a defence of religion and the medieval spirit ("Génie du christianisme", 1802), and finally in the 1830s and 1840s his enormous autobiography "Mémoires d'Outre-Tombe" ("Memoirs from beyond the grave").
After the Bourbon Restoration, French Romanticism developed in the lively world of Parisian theatre, with productions of Shakespeare, Schiller (in France a key Romantic author), and adaptations of Scott and Byron alongside French authors, several of whom began to write in the late 1820s. Cliques of pro- and anti-Romantics developed, and productions were often accompanied by raucous vocalizing by the two sides, including the shouted assertion by one theatregoer in 1822 that "Shakespeare, c'est l'aide-de-camp de Wellington" ("Shakespeare is Wellington's aide-de-camp"). Alexandre Dumas began as a dramatist, with a series of successes beginning with "Henri III et sa cour" (1829), before turning to novels that were mostly historical adventures somewhat in the manner of Scott, most famously "The Three Musketeers" and "The Count of Monte Cristo", both of 1844. Victor Hugo published as a poet in the 1820s before achieving success on the stage with "Hernani"—a historical drama in a quasi-Shakespearian style that had famously riotous performances on its first run in 1830. Like Dumas, Hugo is best known for his novels, and was already writing "The Hunchback of Notre-Dame" (1831), one of his best-known works, which became a paradigm of the French Romantic movement. The preface to his unperformed play "Cromwell" gives an important manifesto of French Romanticism, stating that "there are no rules, or models". The career of Prosper Mérimée followed a similar pattern; he is now best known as the originator of the story of "Carmen", with his novella published in 1845. Alfred de Vigny remains best known as a dramatist, with his play on the life of the English poet "Chatterton" (1835) perhaps his best work. George Sand was a central figure of the Parisian literary scene, famous both for her novels and criticism and for her affairs with Chopin and several others; she too was inspired by the theatre, and wrote works to be staged at her private estate.
French Romantic poets of the 1830s to 1850s include Alfred de Musset, Gérard de Nerval, Alphonse de Lamartine and the flamboyant Théophile Gautier, whose prolific output in various forms continued until his death in 1872.
Stendhal is today probably the most highly regarded French novelist of the period, but he stands in a complex relation with Romanticism, and is notable for his penetrating psychological insight into his characters and his realism, qualities rarely prominent in Romantic fiction. As a survivor of the French retreat from Moscow in 1812, fantasies of heroism and adventure had little appeal for him, and like Goya he is often seen as a forerunner of Realism. His most important works are "Le Rouge et le Noir" ("The Red and the Black", 1830) and "La Chartreuse de Parme" ("The Charterhouse of Parma", 1839).
Romanticism in Poland is often taken to begin with the publication of Adam Mickiewicz's first poems in 1822, and to end with the crushing of the January Uprising of 1863 against the Russians. It was strongly marked by interest in Polish history. Polish Romanticism revived the old "Sarmatism" traditions of the "szlachta", or Polish nobility. Old traditions and customs were revived and portrayed in a positive light in the Polish messianic movement and in works of great Polish poets such as Adam Mickiewicz ("Pan Tadeusz"), Juliusz Słowacki and Zygmunt Krasiński, as well as prose writers such as Henryk Sienkiewicz. This close connection between Polish Romanticism and Polish history became one of the defining qualities of the literature of the Polish Romantic period, differentiating it from that of other countries, which had not suffered the loss of national statehood as Poland had. Influenced by the general spirit and main ideas of European Romanticism, the literature of Polish Romanticism is unique, as many scholars have pointed out, in having developed largely outside of Poland and in its emphatic focus upon the issue of Polish nationalism. The Polish intelligentsia, along with leading members of its government, left Poland in the early 1830s, during what is referred to as the "Great Emigration", resettling in France, Germany, Great Britain, Turkey, and the United States.
Their art featured emotionalism and irrationality, fantasy and imagination, personality cults, folklore and country life, and the propagation of ideals of freedom. In the second period, many of the Polish Romantics worked abroad, often banished from Poland by the occupying powers due to their politically subversive ideas. Their work became increasingly dominated by the ideals of political struggle for freedom and their country's sovereignty. Elements of mysticism became more prominent. There developed the idea of the "poeta wieszcz" (the prophet). The "wieszcz" (bard) functioned as spiritual leader to the nation fighting for its independence. The most notable poet so recognized was Adam Mickiewicz.
Zygmunt Krasiński also wrote to inspire political and religious hope in his countrymen. Unlike his predecessors, who called for victory at whatever price in Poland's struggle against Russia, Krasiński emphasized Poland's spiritual role in its fight for independence, advocating an intellectual rather than a military superiority. His works best exemplify the Messianic movement in Poland: in two early dramas, "Nie-boska komedia" (1835; "The Undivine Comedy") and "Irydion" (1836; "Iridion"), as well as in the later "Psalmy przyszłości" (1845), he asserted that Poland was the Christ of Europe: specifically chosen by God to carry the world's burdens, to suffer, and eventually to be resurrected.
Early Russian Romanticism is associated with the writers Konstantin Batyushkov ("A Vision on the Shores of the Lethe", 1809), Vasily Zhukovsky ("The Bard", 1811; "Svetlana", 1813) and Nikolay Karamzin ("Poor Liza", 1792; "Julia", 1796; "Martha the Mayoress", 1802; "The Sensitive and the Cold", 1803). However the principal exponent of Romanticism in Russia is Alexander Pushkin ("The Prisoner of the Caucasus", 1820–1821; "The Robber Brothers", 1822; "Ruslan and Ludmila", 1820; "Eugene Onegin", 1825–1832). Pushkin's work influenced many writers in the 19th century and led to his eventual recognition as Russia's greatest poet. Other Russian Romantic poets include Mikhail Lermontov ("A Hero of Our Time", 1839), Fyodor Tyutchev ("Silentium!", 1830), Yevgeny Baratynsky ("Eda", 1826), Anton Delvig, and Wilhelm Küchelbecker.
Influenced heavily by Lord Byron, Lermontov sought to explore the Romantic emphasis on metaphysical discontent with society and self, while Tyutchev's poems often described scenes of nature or passions of love. Tyutchev commonly operated with such categories as night and day, north and south, dream and reality, cosmos and chaos, and the still world of winter and spring teeming with life. Baratynsky's style was fairly classical in nature, dwelling on the models of the previous century.
Romanticism in Spanish literature produced a well-known body of work by a huge variety of poets and playwrights. The most important Spanish poet of the movement was José de Espronceda. After him came other poets such as Gustavo Adolfo Bécquer and Mariano José de Larra, and the dramatists Ángel de Saavedra and José Zorrilla, author of "Don Juan Tenorio". Before them may be mentioned the pre-Romantics José Cadalso and Manuel José Quintana. The plays of Antonio García Gutiérrez were adapted to produce Giuseppe Verdi's operas "Il trovatore" and "Simon Boccanegra". Spanish Romanticism also influenced regional literatures. For example, in Catalonia and in Galicia there was a national boom of writers in the local languages, such as the Catalan Jacint Verdaguer and the Galician Rosalía de Castro, the main figures of the national revivalist movements Renaixença and Rexurdimento, respectively.
There are scholars who consider Spanish Romanticism to be a proto-Existentialism, because it is more anguished than the movement in other European countries. Foster et al., for example, say that the work of 19th-century Spanish writers such as Espronceda and Larra demonstrated a "metaphysical crisis". These observers put more weight on the link between the 19th-century Spanish writers and the existentialist movement that emerged immediately after. According to Richard Caldwell, the writers that we now identify with Spain's Romanticism were actually precursors of those who galvanized the literary movement that emerged in the 1920s. This notion is the subject of debate, for some authors stress that Spain's Romanticism was one of the earliest in Europe, while others assert that Spain really had no period of literary Romanticism at all. This controversy underscores a certain uniqueness of Spanish Romanticism in comparison to its European counterparts.
Romanticism began in Portugal with the publication of the poem "Camões" (1825) by Almeida Garrett, who was raised by his uncle D. Alexandre, bishop of Angra, in the precepts of Neoclassicism, which can be observed in his early work. The author himself confesses (in the preface to "Camões") that he voluntarily refused to follow the principles of epic poetry enunciated by Aristotle in his "Poetics", just as he rejected Horace's "Ars Poetica". Almeida Garrett had participated in the 1820 Liberal Revolution, which forced him into exile in England in 1823 and then in France, after the Vila-Francada. While living in Great Britain, he had contact with the Romantic movement and read authors such as Shakespeare, Scott, Ossian, Byron, Hugo, Lamartine and de Staël, at the same time visiting feudal castles and the ruins of Gothic churches and abbeys, all of which would be reflected in his writings. In 1838, he presented "Um Auto de Gil Vicente" ("A Play by Gil Vicente"), in an attempt to create a new national theatre free of Greco-Roman and foreign influence. His masterpiece, however, would be "Frei Luís de Sousa" (1843), which he himself called a "Romantic drama"; it was acclaimed as an exceptional work, dealing with themes such as national independence, faith, justice and love. He was also deeply interested in Portuguese folk verse, which resulted in the publication of "Romanceiro" ("Traditional Portuguese Ballads") (1843), a collection of a great number of ancient popular ballads, known as "romances" or "rimances", in the "redondilha maior" verse form, containing stories of chivalry, lives of saints, crusades, courtly love, and other matters. He also wrote the novels "Viagens na Minha Terra", "O Arco de Sant'Ana" and "Helena".
Alexandre Herculano is, alongside Almeida Garrett, one of the founders of Portuguese Romanticism. He too was forced into exile in Great Britain and France because of his liberal ideals. All of his poetry and prose is (unlike Almeida Garrett's) entirely Romantic, rejecting Greco-Roman myth and history. He sought inspiration in medieval Portuguese poems and chronicles, as well as in the Bible. His output is vast and covers many different genres, such as historical essays, poetry, novels, opuscules and theatre, in which he brings back a whole world of Portuguese legends, tradition and history, especially in "Eurico, o Presbítero" ("Eurico, the Priest") and "Lendas e Narrativas" ("Legends and Narratives"). His work was influenced by Chateaubriand, Schiller, Klopstock, Walter Scott and the Old Testament Psalms.
António Feliciano de Castilho made the case for Ultra-Romanticism, publishing the poems "A Noite no Castelo" ("Night in the Castle") and "Os Ciúmes do Bardo" ("The Jealousy of the Bard"), both in 1836, and the drama "Camões". He became an unquestionable master for successive Ultra-Romantic generations, and his influence would not be challenged until the famous Coimbra Question. He also caused controversy by translating Goethe's "Faust" without knowing German, working instead from French versions of the play. Other notable figures of Portuguese Romanticism are the famous novelists Camilo Castelo Branco and Júlio Dinis, and Soares de Passos, Bulhão Pato and Pinheiro Chagas.
The Romantic style would be revived at the beginning of the 20th century, notably through the works of poets linked to the Portuguese Renaissance, such as Teixeira de Pascoais, Jaime Cortesão and Mário Beirão, among others, who can be considered Neo-Romantics. An early Portuguese expression of Romanticism can already be found in poets such as Manuel Maria Barbosa du Bocage (especially in his sonnets of the late 18th century) and Leonor de Almeida Portugal, Marquise of Alorna.
Romanticism in Italian literature was a minor movement, although some important works were produced; it began officially in 1816 when Germaine de Staël wrote an article in the journal "Biblioteca italiana" called "Sulla maniera e l'utilità delle traduzioni" ("On the Manner and Usefulness of Translations"), inviting Italians to reject Neoclassicism and to study new authors from other countries. Before that date, Ugo Foscolo had already published poems anticipating Romantic themes. The most important Romantic writers were Ludovico di Breme, Pietro Borsieri and Giovanni Berchet. Better-known authors such as Alessandro Manzoni and Giacomo Leopardi were influenced by the Enlightenment as well as by Romanticism and Classicism.
Spanish-speaking South American Romanticism was influenced heavily by Esteban Echeverría, who wrote in the 1830s and 1840s. His writings were shaped by his hatred of the Argentine dictator Juan Manuel de Rosas, and filled with themes of blood and terror, using the metaphor of a slaughterhouse to portray the violence of Rosas' dictatorship.
Brazilian Romanticism is conventionally divided into three periods. The first is basically focused on the creation of a sense of national identity, using the ideal of the heroic Indian. Some examples include José de Alencar, who wrote "Iracema" and "O Guarani", and Gonçalves Dias, renowned for the poem "Canção do exílio" ("Song of the Exile"). The second period, sometimes called Ultra-Romanticism, is marked by a profound influence of European themes and traditions, involving the melancholy, sadness and despair related to unobtainable love. Goethe and Lord Byron are commonly quoted in these works. Some of the most notable authors of this phase are Álvares de Azevedo, Casimiro de Abreu, Fagundes Varela and Junqueira Freire. The third cycle is marked by social poetry, especially the abolitionist movement, and includes Castro Alves, Tobias Barreto and Pedro Luís Pereira de Sousa.
In the United States, Romantic poetry was being published at least by 1818, with William Cullen Bryant's "To a Waterfowl". American Romantic Gothic literature made an early appearance with Washington Irving's "The Legend of Sleepy Hollow" (1820) and "Rip Van Winkle" (1819), followed from 1823 onwards by the "Leatherstocking Tales" of James Fenimore Cooper, with their emphasis on heroic simplicity and their fervent landscape descriptions of an already-exotic mythicized frontier peopled by "noble savages", similar to the philosophical theory of Rousseau, exemplified by Uncas of "The Last of the Mohicans". There are picturesque "local colour" elements in Washington Irving's essays and especially his travel books. Edgar Allan Poe's tales of the macabre and his balladic poetry were more influential in France than at home, but the Romantic American novel developed fully with the atmosphere and melodrama of Nathaniel Hawthorne's "The Scarlet Letter" (1850). Later Transcendentalist writers such as Henry David Thoreau and Ralph Waldo Emerson still show elements of its influence and imagination, as does the romantic realism of Walt Whitman. The poetry of Emily Dickinson—nearly unread in her own time—and Herman Melville's novel "Moby-Dick" can be taken as epitomes of American Romantic literature. By the 1880s, however, psychological and social realism were competing with Romanticism in the novel.
The European Romantic movement reached America in the early 19th century. American Romanticism was just as multifaceted and individualistic as its European counterpart. Like the Europeans, the American Romantics demonstrated a high level of moral enthusiasm, commitment to individualism and the unfolding of the self, an emphasis on intuitive perception, and the assumption that the natural world was inherently good, while human society was filled with corruption.
Romanticism became popular in American politics, philosophy and art. The movement appealed to the revolutionary spirit of America as well as to those longing to break free of the strict religious traditions of the early settlement. The Romantics rejected rationalism and religious intellect. The movement appealed to those opposed to Calvinism, with its belief that the destiny of each individual is preordained. The Romantic movement gave rise to New England Transcendentalism, which portrayed a less restrictive relationship between God and the universe. The new philosophy presented the individual with a more personal relationship with God. Transcendentalism and Romanticism appealed to Americans in a similar fashion, for both privileged feeling over reason and individual freedom of expression over the restraints of tradition and custom. Both often involved a rapturous response to nature. They encouraged the rejection of harsh, rigid Calvinism, and promised a new blossoming of American culture.
American Romanticism embraced the individual and rebelled against the confinement of neoclassicism and religious tradition. The Romantic movement in America created a new literary genre that continues to influence American writers. Novels, short stories, and poems replaced the sermons and manifestos of yore. Romantic literature was personal and intense, and portrayed more emotion than had ever been seen in neoclassical literature. America's preoccupation with freedom became a great source of motivation for Romantic writers, as many delighted in free expression and emotion without so much fear of ridicule and controversy. They also put more effort into the psychological development of their characters, and the main characters typically displayed extremes of sensitivity and excitement.
The works of the Romantic Era also differed from preceding works in that they spoke to a wider audience, partly reflecting the greater distribution of books as costs came down during the period.
Romantic architecture appeared in the late 18th century in a reaction against the rigid forms of neoclassical architecture. It reached its peak in the mid-19th century, and continued to appear until the end of the 19th century. It was designed to evoke an emotional reaction, either respect for tradition or nostalgia for a bucolic past, and was frequently inspired by the architecture of the Middle Ages, especially Gothic architecture. It was strongly influenced by Romanticism in literature, particularly the historical novels of Victor Hugo and Walter Scott, and sometimes moved into the domain of eclecticism, with features assembled from different historic periods and regions of the world.
Gothic Revival architecture was a popular variant of the Romantic style, particularly in the construction of churches, cathedrals, and university buildings. Notable examples include the completion of Cologne Cathedral in Germany by Karl Friedrich Schinkel. The cathedral had been begun in 1248, but work was halted in 1473. When the original plans for the façade were discovered in 1840, it was decided to recommence construction. Schinkel followed the original design as much as possible, but used modern construction technology, including an iron frame for the roof. The building was finished in 1880.
In Britain, notable examples include the Royal Pavilion in Brighton, a romantic version of traditional Indian architecture by John Nash (1815–1823), and the Houses of Parliament in London, built in a Gothic revival style by Charles Barry between 1840 and 1876.
In France, one of the earliest examples of romantic architecture is the Hameau de la Reine, the small rustic hamlet created at the Palace of Versailles for Queen Marie Antoinette between 1783 and 1785 by the royal architect Richard Mique with the help of the romantic painter Hubert Robert. It consisted of twelve structures, ten of which still exist, in the style of villages in Normandy. It was designed for the Queen and her friends to amuse themselves by playing at being peasants, and included a farmhouse with a dairy, a mill, a boudoir, a pigeon loft, a tower in the form of a lighthouse from which one could fish in the pond, a belvedere, a cascade and grotto, and a luxuriously furnished cottage with a billiard room for the Queen.
French Romantic architecture in the 19th century was strongly influenced by two writers: Victor Hugo, whose novel "The Hunchback of Notre Dame" inspired a resurgence of interest in the Middle Ages, and Prosper Mérimée, who wrote celebrated Romantic novels and short stories and was also the first head of the commission of Historic Monuments in France, responsible for publicizing and restoring (and sometimes romanticizing) many French cathedrals and monuments desecrated and ruined after the French Revolution. His projects were carried out by the architect Eugène Viollet-le-Duc, and included the restoration (sometimes creative) of the Cathedral of Notre-Dame de Paris, the fortified city of Carcassonne, and the unfinished medieval Château de Pierrefonds.
The Romantic style continued in the second half of the 19th century. The Palais Garnier, the Paris opera house designed by Charles Garnier, was a highly romantic and eclectic combination of artistic styles. Another notable example of late 19th-century Romanticism is the Basilica of Sacré-Cœur (1875–1914) by Paul Abadie, who drew upon the model of Byzantine architecture for its elongated domes.
In the visual arts, Romanticism first showed itself in landscape painting, where from as early as the 1760s British artists began to turn to wilder landscapes and storms, and Gothic architecture, even if they had to make do with Wales as a setting. Caspar David Friedrich and J. M. W. Turner were born less than a year apart, in 1774 and 1775 respectively, and were to take German and English landscape painting to their extremes of Romanticism, but both their artistic sensibilities were formed when forms of Romanticism were already strongly present in art. John Constable, born in 1776, stayed closer to the English landscape tradition, but in his largest "six-footers" insisted on the heroic status of a patch of the working countryside where he had grown up—challenging the traditional hierarchy of genres, which relegated landscape painting to a low status. Turner also painted very large landscapes, and above all, seascapes. Some of these large paintings had contemporary settings and staffage, but others had small figures that turned the work into history painting in the manner of Claude Lorrain and of Salvator Rosa, a late Baroque artist whose landscapes had elements that Romantic painters repeatedly turned to. Friedrich often used single figures, or features like crosses, set alone amidst a huge landscape, "making them images of the transitoriness of human life and the premonition of death".
Other groups of artists expressed feelings that verged on the mystical, many largely abandoning classical drawing and proportions. These included William Blake and Samuel Palmer and the other members of the Ancients in England, and in Germany Philipp Otto Runge. Like Friedrich, none of these artists had significant influence after their deaths for the rest of the 19th century; they were 20th-century rediscoveries from obscurity, though Blake was always known as a poet, and Norway's leading painter Johan Christian Dahl was heavily influenced by Friedrich. The Rome-based Nazarene movement of German artists, active from 1810, took a very different path, concentrating on medievalizing history paintings with religious and nationalist themes.
The arrival of Romanticism in French art was delayed by the strong hold of Neoclassicism on the academies, but from the Napoleonic period it became increasingly popular, initially in the form of history paintings propagandising for the new regime, of which Girodet's "Ossian receiving the Ghosts of the French Heroes", for Napoleon's Château de Malmaison, was one of the earliest. Girodet's old teacher David was puzzled and disappointed by his pupil's direction, saying: "Either Girodet is mad or I no longer know anything of the art of painting". A new generation of the French school developed personal Romantic styles, though still concentrating on history painting with a political message. Théodore Géricault (1791–1824) had his first success with "The Charging Chasseur", a heroic military figure derived from Rubens, at the Paris Salon of 1812 in the years of the Empire, but his next major completed work, "The Raft of the Medusa" of 1818–19, remains the greatest achievement of Romantic history painting, and in its day carried a powerful anti-government message.
Eugène Delacroix (1798–1863) made his first Salon hits with "The Barque of Dante" (1822), "The Massacre at Chios" (1824) and "Death of Sardanapalus" (1827). The second was a scene from the Greek War of Independence, completed the year Byron died there, and the last was a scene from one of Byron's plays. With Shakespeare, Byron was to provide the subject matter for many other works of Delacroix, who also spent long periods in North Africa, painting colourful scenes of mounted Arab warriors. His "Liberty Leading the People" (1830) remains, with the "Medusa", one of the best-known works of French Romantic painting. Both reflected current events, and increasingly "history painting", literally "story painting" (a phrase dating back to the Italian Renaissance, meaning the painting of subjects with groups of figures, long considered the highest and most difficult form of art), did indeed become the painting of historical scenes, rather than those from religion or mythology.
Francisco Goya was called "the last great painter in whose art thought and observation were balanced and combined to form a faultless unity". But the extent to which he was a Romantic is a complex question. In Spain, there was still a struggle to introduce the values of the Enlightenment, in which Goya saw himself as a participant. The demonic and anti-rational monsters thrown up by his imagination are only superficially similar to those of the Gothic fantasies of northern Europe, and in many ways he remained wedded to the classicism and realism of his training, as well as looking forward to the Realism of the later 19th century. But he, more than any other artist of the period, exemplified the Romantic values of the expression of the artist's feelings and his personal imaginative world. He also shared with many of the Romantic painters a more free handling of paint, emphasized in the new prominence of the brushstroke and impasto, which tended to be repressed in neoclassicism under a self-effacing finish.
Sculpture remained largely impervious to Romanticism, probably partly for technical reasons, as the most prestigious material of the day, marble, does not lend itself to expansive gestures. The leading sculptors in Europe, Antonio Canova and Bertel Thorvaldsen, were both based in Rome and firm Neoclassicists, not at all tempted to allow influence from medieval sculpture, which would have been one possible approach to Romantic sculpture. When it did develop, true Romantic sculpture—with the exception of a few artists such as Rudolf Maison—was, rather oddly, missing in Germany, and mainly found in France, with François Rude, best known for his group of the 1830s on the Arc de Triomphe in Paris, David d'Angers, and Auguste Préault. Préault's plaster relief entitled "Slaughter", which represented the horrors of war with exacerbated passion, caused so much scandal at the 1834 Salon that Préault was banned from this official annual exhibition for nearly twenty years. In Italy, the most important Romantic sculptor was Lorenzo Bartolini.
In France, historical painting on idealized medieval and Renaissance themes is known as the "style Troubadour", a term with no equivalent for other countries, though the same trends occurred there. Delacroix, Ingres and Richard Parkes Bonington all worked in this style, as did lesser specialists such as Pierre-Henri Révoil (1776–1842) and Fleury-François Richard (1777–1852). Their pictures are often small, and feature intimate private and anecdotal moments, as well as those of high drama. The lives of great artists such as Raphael were commemorated on equal terms with those of rulers, and fictional characters were also depicted. Fleury-Richard's "Valentine of Milan weeping for the death of her husband", shown in the Paris Salon of 1802, marked the arrival of the style, which lasted until the mid-century, before being subsumed into the increasingly academic history painting of artists like Paul Delaroche.
Another trend was for very large apocalyptic history paintings, often combining extreme natural events, or divine wrath, with human disaster, attempting to outdo "The Raft of the Medusa", and now often drawing comparisons with effects from Hollywood. The leading English artist in the style was John Martin, whose tiny figures were dwarfed by enormous earthquakes and storms as he worked his way through the biblical disasters, and those to come in the final days. Other works, such as Delacroix's "Death of Sardanapalus", included larger figures, and these often drew heavily on earlier artists, especially Poussin and Rubens, with extra emotionalism and special effects.
Elsewhere in Europe, leading artists adopted Romantic styles: in Russia there were the portraitists Orest Kiprensky and Vasily Tropinin, with Ivan Aivazovsky specializing in marine painting, and in Norway Hans Gude painted scenes of fjords. In Italy Francesco Hayez (1791–1882) was the leading artist of Romanticism in mid-19th-century Milan. His long, prolific and extremely successful career saw him begin as a Neoclassical painter, pass right through the Romantic period, and emerge at the other end as a sentimental painter of young women. His Romantic period included many historical pieces of "Troubadour" tendencies, but on a very large scale, that are heavily influenced by Gian Battista Tiepolo and other late Baroque Italian masters.
Literary Romanticism had its counterpart in the American visual arts, most especially in the exaltation of an untamed American landscape found in the paintings of the Hudson River School. Painters like Thomas Cole, Albert Bierstadt and Frederic Edwin Church often expressed Romantic themes in their paintings. They sometimes depicted ancient ruins of the old world, as in Frederic Edwin Church's "Sunrise in Syria". These works reflected the Gothic feelings of death and decay. They also show the Romantic ideal that Nature is powerful and will eventually overcome the transient creations of men. More often, they worked to distinguish themselves from their European counterparts by depicting uniquely American scenes and landscapes. This idea of an American identity in the art world is reflected in W. C. Bryant's poem "To Cole, the Painter, Departing for Europe", where Bryant encourages Cole to remember the powerful scenes that can only be found in America.
Some American paintings (such as Albert Bierstadt's "The Rocky Mountains, Lander's Peak") promote the literary idea of the "noble savage" by portraying idealized Native Americans living in harmony with the natural world. Thomas Cole's paintings tend towards allegory, explicit in "The Voyage of Life" series painted in the early 1840s, showing the stages of life set amidst an awesome and immense nature.
Musical Romanticism is predominantly a German phenomenon—so much so that one respected French reference work defines it entirely in terms of "The role of music in the aesthetics of German romanticism". Another French encyclopedia holds that the German temperament generally "can be described as the deep and diverse action of romanticism on German musicians", and that there is only one true representative of Romanticism in French music, Hector Berlioz, while in Italy, the sole great name of musical Romanticism is Giuseppe Verdi, "a sort of [Victor] Hugo of opera, gifted with a real genius for dramatic effect". Similarly, in his analysis of Romanticism and its pursuit of harmony, Henri Lefebvre observes: "But of course, German romanticism was more closely linked to music than French romanticism was, so it is there we should look for the direct expression of harmony as the central romantic idea." Nevertheless, the huge popularity of German Romantic music led, "whether by imitation or by reaction", to an often nationalistically inspired vogue amongst Polish, Hungarian, Russian, Czech, and Scandinavian musicians, successful "perhaps more because of its extra-musical traits than for the actual value of musical works by its masters".
Although the term "Romanticism" when applied to music has come to imply the period roughly from 1800 until 1850, or else until around 1900, the contemporary application of "romantic" to music did not coincide with this modern interpretation. Indeed, one of the earliest sustained applications of the term to music occurs in 1789, in the "Mémoires" of André Grétry. This is of particular interest because it is a French source on a subject mainly dominated by Germans, but also because it explicitly acknowledges its debt to Jean-Jacques Rousseau (himself a composer, amongst other things) and, by so doing, establishes a link to one of the major influences on the Romantic movement generally. In 1810 E. T. A. Hoffmann named Haydn, Mozart and Beethoven as "the three masters of instrumental compositions" who "breathe one and the same romantic spirit". He justified his view on the basis of these composers' depth of evocative expression and their marked individuality. In Haydn's music, according to Hoffmann, "a child-like, serene disposition prevails", while Mozart (in the late E-flat major Symphony, for example) "leads us into the depths of the spiritual world", with elements of fear, love, and sorrow, "a presentiment of the infinite ... in the eternal dance of the spheres". Beethoven's music, on the other hand, conveys a sense of "the monstrous and immeasurable", with the pain of an endless longing that "will burst our breasts in a fully coherent concord of all the passions". This elevation in the valuation of pure emotion resulted in the promotion of music from the subordinate position it had held in relation to the verbal and plastic arts during the Enlightenment. Because music was considered to be free of the constraints of reason, imagery, or any other precise concept, it came to be regarded, first in the writings of Wackenroder and Tieck and later by writers such as Schelling and Wagner, as preeminent among the arts, the one best able to express the secrets of the universe, to evoke the spirit world, infinity, and the absolute.
This chronological agreement of musical and literary Romanticism continued as far as the middle of the 19th century, when Richard Wagner denigrated the music of Meyerbeer and Berlioz as "neoromantic": "The Opera, to which we shall now return, has swallowed down the Neoromanticism of Berlioz, too, as a plump, fine-flavoured oyster, whose digestion has conferred on it anew a brisk and well-to-do appearance."
It was only toward the end of the 19th century that the newly emergent discipline of "Musikwissenschaft" (musicology)—itself a product of the historicizing proclivity of the age—attempted a more scientific periodization of music history, and a distinction between Viennese Classical and Romantic periods was proposed. The key figure in this trend was Guido Adler, who viewed Beethoven and Franz Schubert as transitional but essentially Classical composers, with Romanticism achieving full maturity only in the post-Beethoven generation of Frédéric Chopin, Felix Mendelssohn, Robert Schumann, Hector Berlioz and Franz Liszt. From Adler's viewpoint, found in books like "Der Stil in der Musik" (1911), composers of the New German School and various late-19th-century nationalist composers were not Romantics but "moderns" or "realists" (by analogy with the fields of painting and literature), and this schema remained prevalent through the first decades of the 20th century.
By the second quarter of the 20th century, an awareness that radical changes in musical syntax had occurred during the early 1900s caused another shift in historical viewpoint, and the change of century came to be seen as marking a decisive break with the musical past. This in turn led historians such as Alfred Einstein to extend the musical "Romantic era" throughout the 19th century and into the first decade of the 20th. It has continued to be described this way in standard music reference works such as "The Oxford Companion to Music" and Grout's "History of Western Music", but this view has not gone unchallenged. For example, the prominent German musicologist Friedrich Blume, the chief editor of the first edition of "Die Musik in Geschichte und Gegenwart" (1949–86), accepted the earlier position that Classicism and Romanticism together constitute a single period beginning in the middle of the 18th century, but at the same time held that it continued into the 20th century, including such pre-World War II developments as expressionism and neoclassicism. This is reflected in some notable recent reference works such as the "New Grove Dictionary of Music and Musicians" and the new edition of "Musik in Geschichte und Gegenwart".
In the musical culture of the period, the Romantic musician followed a public career depending on sensitive middle-class audiences rather than on a courtly patron, as had been the case with earlier musicians and composers. A public persona characterized the new generation of virtuosi who made their way as soloists, epitomized in the concert tours of Paganini and Liszt, and the conductor began to emerge as an important figure, on whose skill the interpretation of the increasingly complex music depended.
The Romantic movement affected most aspects of intellectual life, and Romanticism and science had a powerful connection, especially in the period 1800–1840. Many scientists were influenced by versions of the "Naturphilosophie" of Johann Gottlieb Fichte, Friedrich Wilhelm Joseph von Schelling, Georg Wilhelm Friedrich Hegel and others, and, without abandoning empiricism, sought in their work to uncover what they tended to believe was a unified and organic Nature. The English scientist Sir Humphry Davy, a prominent Romantic thinker, said that understanding nature required "an attitude of admiration, love and worship, [...] a personal response". He believed that knowledge was only attainable by those who truly appreciated and respected nature. Self-understanding was an important aspect of Romanticism. It had less to do with proving that man was capable of understanding nature (through his budding intellect) and therefore controlling it, and more to do with the emotional appeal of connecting himself with nature and understanding it through a harmonious co-existence.
History writing was very strongly, and many would say harmfully, influenced by Romanticism. In England, Thomas Carlyle was a highly influential essayist who turned historian; he both invented and exemplified the phrase "hero-worship", lavishing largely uncritical praise on strong leaders such as Oliver Cromwell, Frederick the Great and Napoleon. Romantic nationalism had a largely negative effect on the writing of history in the 19th century, as each nation tended to produce its own version of history, and the critical attitude, even cynicism, of earlier historians was often replaced by a tendency to create romantic stories with clearly distinguished heroes and villains. Nationalist ideology of the period placed great emphasis on racial coherence, and the antiquity of peoples, and tended to vastly over-emphasize the continuity between past periods and the present, leading to national mysticism. Much historical effort in the 20th century was devoted to combating the romantic historical myths created in the 19th century.
To insulate theology from scientism or reductionism in science, 19th-century post-Enlightenment German theologians developed a modernist or so-called liberal conception of Christianity, led by Friedrich Schleiermacher and Albrecht Ritschl. They took the Romantic approach of rooting religion in the inner world of the human spirit, so that it is a person's feeling or sensibility about spiritual matters that comprises religion.
Romantic chess was the style of chess that emphasized quick, tactical maneuvers rather than long-term strategic planning. The Romantic era in chess is generally considered to have begun with Alexander McDonnell and Louis de La Bourdonnais, the two dominant chess players of the 1830s. The 1840s were dominated by Howard Staunton, and other leading players of the era included Adolf Anderssen, Daniel Harrwitz, Henry Bird, Louis Paulsen, and Paul Morphy. The "Immortal Game", played by Adolf Anderssen and Lionel Kieseritzky on 21 June 1851 in London—where Anderssen made bold sacrifices to secure victory, giving up both rooks and a bishop, then his queen, and then checkmating his opponent with his three remaining minor pieces—is considered a supreme example of Romantic chess. The end of the Romantic era in chess is considered to be the 1873 Vienna Tournament, where Wilhelm Steinitz popularized positional play and the closed game.
One of Romanticism's key ideas and most enduring legacies is the assertion of nationalism, which became a central theme of Romantic art and political philosophy. From the earliest parts of the movement, with their focus on the development of national languages and folklore, and the importance of local customs and traditions, to the movements that would redraw the map of Europe and lead to calls for self-determination of nationalities, nationalism was one of the key vehicles of Romanticism, determining its role, expression and meaning. One of the most important functions of medieval references in the 19th century was nationalist; popular and epic poetry were its workhorses. This is visible in Germany and Ireland, where underlying Germanic or Celtic linguistic substrates dating from before the Romanization-Latinization were sought out.
Early Romantic nationalism was strongly inspired by Rousseau, and by the ideas of Johann Gottfried von Herder, who in 1784 argued that geography formed the natural economy of a people and shaped their customs and society.
The nature of nationalism changed dramatically, however, after the French Revolution, with the rise of Napoleon and the reactions in other nations. Napoleonic nationalism and republicanism were, at first, inspirational to movements in other nations: self-determination and a consciousness of national unity were held to be two of the reasons why France was able to defeat other countries in battle. But as the French Republic became Napoleon's Empire, Napoleon became not the inspiration for nationalism, but the object of its struggle. In Prussia, the development of spiritual renewal as a means to engage in the struggle against Napoleon was argued by, among others, Johann Gottlieb Fichte, a disciple of Kant. The word "Volkstum", or nationality, was coined in German as part of this resistance to the now conquering emperor. Fichte expressed the unity of language and nation in his addresses "To the German Nation" in 1808:
Those who speak the same language are joined to each other by a multitude of invisible bonds by nature herself, long before any human art begins; they understand each other and have the power of continuing to make themselves understood more and more clearly; they belong together and are by nature one and an inseparable whole. ...Only when each people, left to itself, develops and forms itself in accordance with its own peculiar quality, and only when in every people each individual develops himself in accordance with that common quality, as well as in accordance with his own peculiar quality—then, and then only, does the manifestation of divinity appear in its true mirror as it ought to be.
This view of nationalism inspired the collection of folklore by such people as the Brothers Grimm, the revival of old epics as national, and the construction of new epics as if they were old, as in the "Kalevala", compiled from Finnish tales and folklore, or "Ossian", where the claimed ancient roots were invented. The view that fairy tales, unless contaminated from outside literary sources, were preserved in the same form over thousands of years, was not exclusive to Romantic Nationalists, but fit in well with their views that such tales expressed the primordial nature of a people. For instance, the Brothers Grimm rejected many tales they collected because of their similarity to tales by Charles Perrault, which they thought proved they were not truly German tales; "Sleeping Beauty" survived in their collection because the tale of Brynhildr convinced them that the figure of the sleeping princess was authentically German. Vuk Karadžić contributed to Serbian folk literature, using peasant culture as the foundation. He regarded the oral literature of the peasants as an integral part of Serbian culture, compiling it to use in his collections of folk songs, tales and proverbs, as well as the first dictionary of vernacular Serbian. Similar projects were undertaken by the Russian Alexander Afanasyev, the Norwegians Peter Christen Asbjørnsen and Jørgen Moe, and the Englishman Joseph Jacobs.
Romanticism played an essential role in the national awakening of many Central European peoples lacking their own national states, not least in Poland, which had recently failed to restore its independence when Russia's army crushed the Polish Uprising under Nicholas I. Revival and reinterpretation of ancient myths, customs and traditions by Romantic poets and painters helped to distinguish their indigenous cultures from those of the dominant nations and crystallise the mythography of Romantic nationalism. Patriotism, nationalism, revolution and armed struggle for independence also became popular themes in the arts of this period. Arguably, the most distinguished Romantic poet of this part of Europe was Adam Mickiewicz, who developed the idea that Poland was the Messiah of Nations, predestined to suffer just as Jesus had suffered to save all the people. The Polish self-image as a "Christ among nations" or the martyr of Europe can be traced back to its history of Christendom and suffering under invasions. During the periods of foreign occupation, the Catholic Church served as a bastion of Poland's national identity and language, and the major promoter of Polish culture. The partitions came to be seen in Poland as a Polish sacrifice for the security of Western civilization. Mickiewicz wrote the patriotic drama "Dziady" (directed against the Russians), in which he depicts Poland as the Christ of Nations. He also wrote "Verily I say unto you, it is not for you to learn civilization from foreigners, but it is you who are to teach them civilization ... You are among the foreigners like the Apostles among the idolaters". In "Books of the Polish Nation and Polish Pilgrimage" Mickiewicz detailed his vision of Poland as a Messiah and a Christ of Nations that would save mankind. "Dziady" is open to various interpretations, the best known being the moral lesson of Part II, the individualist and Romantic message of Part IV, and the deeply patriotic, messianic and Christian vision of Part III. Zdzisław Kępiński, however, focuses his interpretation on Slavic pagan and occult elements found in the drama. In his book "Mickiewicz hermetyczny" he writes about the hermetic, theosophic and alchemical philosophy in the work, as well as its Masonic symbols.
Romantic nationalism
Romantic nationalism (also national romanticism, organic nationalism, identity nationalism) is the form of nationalism in which the state derives its political legitimacy as an organic consequence of the unity of those it governs. This includes, depending on the particular manner of practice, the language, race, ethnicity, culture, religion, and customs of the nation in its primal sense of those who were born within its culture. This form of nationalism arose in reaction to dynastic or imperial hegemony, which assessed the legitimacy of the state from the top down, as emanating from a monarch or other authority that justified its existence. Such downward-radiating power might ultimately derive from a god or gods (see the divine right of kings and the Mandate of Heaven).
Among the key themes of Romanticism, and its most enduring legacy, the cultural assertions of romantic nationalism have also been central in post-Enlightenment art and political philosophy. From its earliest stirrings, with their focus on the development of national languages and folklore, and the spiritual value of local customs and traditions, to the movements that would redraw the map of Europe and lead to calls for self-determination of nationalities, nationalism was one of the key issues in Romanticism, determining its roles, expressions and meanings. Romantic nationalism, resulting from this interaction between cultural production and political thought, became "the celebration of the nation (defined in its language, history and cultural character) as an inspiring ideal for artistic expression; and the instrumentalization of that expression in political consciousness-raising".
Historically in Europe, the watershed year for romantic nationalism was 1848, when a revolutionary wave spread across the continent; numerous nationalistic revolutions occurred in various fragmented regions (such as Italy) or multinational states (such as the Austrian Empire). While initially the revolutions fell to reactionary forces and the old order was quickly re-established, the many revolutions would mark the first step towards liberalization and the formation of modern nation states across much of Europe.
The ideas of Rousseau (1712–1778) and of Johann Gottfried von Herder (1744–1803) inspired much early Romantic nationalism in Europe. Herder argued nationality was the product of climate, geography 'but more particularly, languages, inclinations and characters,' rather than genetics.
From its beginnings in the late 18th century, romantic nationalism has relied upon the existence of a historical ethnic culture which meets the romantic ideal; folklore developed as a romantic nationalist concept. The Brothers Grimm, inspired by Herder's writings, put together an idealized collection of tales, which they labeled as authentically German. The concept of an inherited cultural patrimony from a common origin rapidly became central to a divisive question within romantic nationalism: specifically, is a nation unified because it comes from the same genetic source, that is because of race, or is the participation in the organic nature of the "folk" culture self-fulfilling?
Romantic nationalism formed a key strand in the philosophy of Hegel (1770–1831), who argued that there was a "spirit of the age" or "zeitgeist" that inhabited a particular people at a particular time. When this group of people became the active determiner of history, it was simply because their cultural and political moment had come. Because of the Germans' role in the Protestant Reformation, Hegel (a Lutheran) argued that his historical moment had seen the "Zeitgeist" settle on the German-speaking peoples.
In continental Europe, Romantics had embraced the French Revolution in its beginnings, then found themselves fighting the counter-Revolution in the trans-national Imperial system of Napoleon. The sense of self-determination and national consciousness that had enabled revolutionary forces to defeat aristocratic regimes in battle became rallying points for resistance against the French Empire (1804–14). In Prussia, the development of spiritual renewal as a means to engage in the struggle against Napoleon was argued by, among others, Johann Gottlieb Fichte (1762–1814), a disciple of Kant. The word "Volkstum", or "folkhood", was coined in Germany as part of this resistance to French hegemony.
Fichte expressed the unity of language and nation in his thirteenth address "To the German Nation" (1808).
In the Balkans, Romantic views of a connection with classical Greece, which inspired Philhellenism, infused the Greek War of Independence (1821–32), in which the Romantic poet Lord Byron died of fever. Rossini's opera "William Tell" (1829) marked the onset of Romantic opera, using the central national myth unifying Switzerland; and in Brussels, a riot (August 1830) after an opera that set a doomed romance against a background of foreign oppression (Auber's "La Muette de Portici") sparked the Belgian Revolution of 1830–31, the first successful revolution in the model of Romantic nationalism. Verdi's opera choruses of an oppressed people inspired two generations of patriots in Italy, especially with "Va pensiero" ("Nabucco", 1842). Under the influence of romantic nationalism, among other economic and political forces, both Germany and Italy found political unity, and movements to create nations similarly based upon ethnic groups would flower in the Balkans (see, for example, the Carinthian Plebiscite, 1920), along the Baltic Sea, and in the interior of Central Europe, where, in the eventual outcome, the Habsburgs succumbed to the surge of Romantic nationalism. In Norway, romanticism was embodied not in literature but in the movement toward a national style, both in architecture and in "ethos". Earlier, there had been a strong romantic nationalist element mixed with Enlightenment rationalism in the rhetoric used in British North America, in the colonists' Declaration of Independence and the United States Constitution of 1787, as well as in the rhetoric of the wave of revolts, inspired by new senses of localized identities, which swept the American colonies of Spain, one after another, from the May Revolution of Argentina in 1810.
Following the ultimate collapse of the First French Empire with the fall of Napoleon, conservative elements led by the Austrian noble Klemens von Metternich took control in Europe, and ideals of a balance of power between the great powers of Europe dominated continental politics in the first half of the 19th century. Following the Congress of Vienna and the subsequent Concert of Europe system, several major empires took control of European politics. Among these were the Russian Empire, the restored French monarchy, the German Confederation under the dominance of Prussia, the Austrian Empire, and the Ottoman Empire.
The conservative forces held sway until the Revolutions of 1848 swept across Europe and threatened the old order. Numerous movements formed around various cultural groups, which began to develop a sense of national identity. While all of these revolutions initially failed and reactionary forces re-established political control, they marked the start of a steady progress towards the end of the Concert of Europe, under the dominance of a few multi-national empires, and led to the establishment of the modern nation state in Europe, a process that would not be complete for over a century and a half. The political situation of Central and Eastern Europe was partly shaped by the two World Wars, while many national identities in these two regions formed modern nation states when the collapse of the Soviet Union and of the multinational states Yugoslavia and Czechoslovakia led to numerous new states forming during the last decade of the 20th century.
Romantic nationalism inspired the collection of folklore by such people as the Brothers Grimm. The view that fairy tales, unless contaminated from outside literary sources, were preserved in the same form over thousands of years, was not exclusive to Romantic Nationalists, but it fit in well with their views that such tales expressed the primordial nature of a people.
The Brothers Grimm were criticized because their first edition was insufficiently German, and in subsequent editions they followed this advice. They rejected many tales they had collected because of their similarity to tales by Charles Perrault, which they thought proved they were not truly German tales; "Sleeping Beauty" survived in their collection because the tale of Brynhildr convinced them that the figure of the sleeping princess was authentically German. They also altered the language used, changing each "Fee" (fairy) to an enchantress or wise woman, every "prince" to a "king's son", every "princess" to a "king's daughter". Discussing these views in the third edition, they particularly singled out Giambattista Basile's "Pentamerone" as the first "national" collection of fairy tales, and as capturing the Neapolitan voice.
The work of the Brothers Grimm influenced other collectors, both inspiring them to collect tales and leading them to similarly believe that the fairy tales of a country were particularly representative of it, to the neglect of cross-cultural influence. Among those influenced were the Russian Alexander Afanasyev, the Norwegians Peter Christen Asbjørnsen and Jørgen Moe, and the Australian Joseph Jacobs.
The concept of a "national epic", an extensively mythologized legendary work of poetry of defining importance to a certain nation, is another product of Romantic nationalism. The "discovery" of "Beowulf" in a single manuscript, first transcribed in 1818, came under the impetus of Romantic nationalism, after the manuscript had lain as an ignored curiosity in scholars' collections for two centuries. "Beowulf" was felt to provide people self-identified as "Anglo-Saxon" with their missing "national epic", just when the need for it was first being felt: the fact that Beowulf himself was a Geat was easily overlooked. The pseudo-Gaelic literary forgeries of "Ossian" had failed, finally, to fill the need for the first Romantic generation.
The first publication of "The Tale of Igor's Campaign" coincided with the rise in Russian national spirit in the wake of the Napoleonic wars and Suvorov's campaigns in Central Europe. The unseen and unheard "Song of Roland" had become a dim memory, until the antiquary Francisque Michel transcribed a worn copy in the Bodleian Library and put it into print in 1837; it was timely: French interest in the national epic revived among the Romantic generation. In Greece, the "Iliad" and "Odyssey" took on new urgency during the Greek War of Independence. Amongst the world's Jewish community, the early Zionists considered the Bible a more suitable national epic than the Talmud.
Many other "national epics", epic poetry considered to reflect the national spirit, were produced or revived under the influence of Romantic nationalism: particularly in the Russian Empire, national minorities seeking to assert their own identities in the face of Russification produced new national poetry – either out of whole cloth, or from cobbling together folk poetry, or by resurrecting older narrative poetry. Examples include the Estonian "Kalevipoeg", Finnish "Kalevala", Polish "Pan Tadeusz", Latvian "Lāčplēsis", Armenian "Sasuntzi Davit" by Hovhannes Tumanyan, Georgian "The Knight in the Panther's Skin" and Greater Iran , " Shahnameh.
After the 1870s "national romanticism", as it is more usually called, became a familiar movement in the arts. Romantic musical nationalism is exemplified by the work of Bedřich Smetana, especially the symphonic poem "Vltava". In Scandinavia and the Slavic parts of Europe especially, "national romanticism" provided a series of answers to the 19th-century search for styles that would be culturally meaningful and evocative, yet not merely historicist. When a church was built over the spot in St Petersburg where Tsar Alexander II of Russia had been assassinated, the "Church of the Savior on Blood", the natural style to use was one that best evoked traditional Russian features ("illustration, left"). In Finland, the reassembly of the national epic, the "Kalevala," inspired paintings and murals in the National Romantic style that substituted there for the international Art Nouveau styles. The foremost proponent in Finland was Akseli Gallen-Kallela ("illustration, below right").
By the turn of the century, ethnic self-determination had become an assumption held as being progressive and liberal. There were romantic nationalist movements for separation in Finland, Estonia, Latvia and Lithuania, the Kingdom of Bavaria held apart from a united Germany, and Czech and Serb nationalism continued to trouble Imperial politics. The flowering of arts which drew inspiration from national epics and song continued unabated. The Zionist movement revived Hebrew, and began immigration to Eretz Yisrael, and Welsh and Irish tongues also experienced a poetic revival.
At the same time, linguistic and cultural nationality, colored with pre-genetic concepts of race, bolstered two rhetorical claims consistently associated with romantic nationalism to this day: claims of primacy and claims of superiority. Primacy is the claimed inalienable right of a culturally and racially defined people to a geographical terrain, a "heartland" (a vivid expression) or homeland. The polemics of racial superiority became inextricably intertwined with romantic nationalism. Richard Wagner notoriously argued that those who were ethnically different could not comprehend the artistic and cultural meaning inherent in national culture. Identifying "Jewishness" even in musical style, he specifically attacked the Jews as being unwilling to assimilate into German culture, and thus unable to truly comprehend the mysteries of its music and language. Sometimes "national epics" such as the "Nibelungenlied" have had a galvanizing effect on social politics.
In the first two decades of the 20th century, Romantic nationalism as an idea was to have a crucial influence on political events. The racist, so-called "völkisch" movement grew out of Romantic nationalism during the last third of the 19th century, following the Panic of 1873, which gave rise to a new wave of antisemitism and racism in a German Empire ruled by the authoritarian, militaristic conservatism of Otto von Bismarck, and in parallel with the wide revival of irrational emotionalism known as the "Fin de siècle" (also reflected to a degree in the contemporary art movements of symbolism, the Decadent movement, and Art Nouveau). To some extent the movement modelled itself on British imperialism and "the White Man's Burden"; the idea was that Germans should "naturally" rule over lesser peoples. Romantic nationalism, which had begun as a revolt against "foreign" kings and overlords, had come full circle, and was being used to make the case for a "Greater Germanic Empire" which would rule over Europe.
The nationalist and imperialist tensions running high among the European nations throughout the irrational, neo-Romantic "Fin de siècle" period eventually erupted in the First World War. After Germany had lost the war and undergone the tumultuous German Revolution, the "völkisch" movement radicalized drastically in Weimar Germany under the harsh terms of the Treaty of Versailles, and Adolf Hitler would go on to say that "the basic ideas of National-Socialism are "völkisch", just as the "völkisch" ideas are National-Socialist".
Outside of Germany, the belief among the European powers was that nation-states forming around unities of language, culture and ethnicity were "natural" in some sense. For this reason President Woodrow Wilson would argue for the creation of self-determining states in the wake of the Great War. However, the belief in romantic nationalism would be honored in the breach. In redrawing the map of Europe, Yugoslavia was created as an intentional coalition state among competing, and often mutually hostile, southern Slavic peoples, and the League of Nations' mandates were often drawn, not to unify ethnic groups, but to divide them. To take one example, the state now known as Iraq was deliberately created by joining together three Ottoman vilayets, uniting Kurds in the north, Sunni Arabs in the center, and Shia Arabs in the south, in an effort to present a strong national buffer state between Turkey and Persia; over these was placed a foreign king from the Hashemite dynasty native to the Hijaz.
Revolutionary Armed Forces of Colombia
The Revolutionary Armed Forces of Colombia—People's Army (Spanish: "Fuerzas Armadas Revolucionarias de Colombia – Ejército del Pueblo", FARC–EP or FARC) was a guerrilla movement involved in the continuing Colombian conflict starting in 1964. They were known to employ a variety of military tactics in addition to more unconventional methods, including terrorism. The FARC–EP was formed during the Cold War period as a Marxist–Leninist peasant force promoting a political line of agrarianism and anti-imperialism.
The operations of the FARC–EP were funded by kidnap and ransom, illegal mining, extortion and taxation of various forms of economic activity, and the production and distribution of illegal drugs. The United Nations has estimated that 12% of all civilian deaths in the Colombian conflict were committed by FARC and National Liberation Army (ELN) guerrillas, with 80% committed by right-wing paramilitaries, and the remaining 8% committed by Colombian security forces.
Estimates of the FARC–EP's strength varied: in 2007, the FARC said they were an armed force of 18,000 men and women; in 2010, the Colombian military calculated that FARC forces consisted of about 13,800 members, 50 percent of whom were armed guerrilla combatants; and, in 2011, the President of Colombia, Juan Manuel Santos, said that FARC–EP forces comprised fewer than 10,000 members. By 2013 it was reported that 26,648 FARC and ELN members had decided to demobilize since 2002.
In 2012, the FARC carried out 239 attacks on energy infrastructure. However, they showed signs of fatigue. By 2014, the FARC were not seeking to engage in outright combat with the army, instead concentrating on small-scale ambushes against isolated army units. Meanwhile, from 2008 to 2017, the FARC opted to attack police patrols with home-made mortars, sniper rifles, and explosives, as they were not considered strong enough to engage police units directly. This continued the trend of the 1990s, during the strengthening of Colombian government forces.
In June 2016, the FARC signed a ceasefire accord with the President of Colombia, Juan Manuel Santos, in Havana. This accord was seen as a historic step toward ending a war that had gone on for more than fifty years. On 25 August 2016, President Santos announced that four years of negotiation had secured a peace deal with FARC and that a national referendum would take place on 2 October. The referendum failed, with 50.24% voting against. The Colombian government and the FARC signed a revised peace deal on 24 November 2016, which the Colombian Congress approved on 30 November.
On 27 June 2017, FARC ceased to be an armed group, disarming itself and handing over its weapons to the United Nations. One month later, FARC announced its reformation as a legal political party, the Common Alternative Revolutionary Force, in accordance with the terms of the peace deal. However, about 2,000 to 2,500 FARC dissidents still adhere to FARC's original doctrine and continue with drug trafficking, though their numbers are far smaller than the group's at its peak.
A small faction of FARC leaders announced a return to armed activity on 29 August 2019, stating that the Colombian government had not respected the peace agreements, a position Colombian officials disputed. The Colombian government responded with offensive strikes, killing FARC members who had been set to lead rearmament activities.
In 1948, the assassination of the populist politician Jorge Eliécer Gaitán was followed by a decade of large-scale political violence throughout Colombia, a Conservative–Liberal civil war that killed more than 200,000 people. In Colombian history and culture, the killings are known as "La Violencia" (The Violence, 1948–58); most of the people killed were peasants and laborers in rural Colombia. In 1957–1958, the political leadership of the Liberal Party and the Conservative Party agreed to establish a bipartisan political system known as the National Front (Frente Nacional, 1958–74). The Liberal and the Conservative parties agreed to alternate in the exercise of government power by presenting a joint National Front candidate to each election and restricting the participation of other political movements.
The pact was ratified as a constitutional amendment by a national plebiscite on 1 December 1957 and was supported by the Church as well as Colombia's business leaders. The initial power-sharing agreement was effective until 1974; nonetheless, with modifications, the Liberal–Conservative bipartisan system lasted until 1990. The sixteen-year extension of the bipartisan power-sharing agreement permitted the Liberal and Conservative élites to consolidate their socioeconomic control of Colombian society, and to strengthen the military to suppress political reform and radical politics proposing alternative forms of government for Colombia.
During the 1960s, the Colombian government effected a policy of Accelerated Economic Development (AED), the agribusiness plan of Lauchlin Currie, a Canadian-born U.S. economist who owned ranching land in Colombia. The plan promoted industrial farming that would produce great yields of agricultural and animal products for worldwide exportation, while the Colombian government would provide subsidies to large-scale private farms. The AED policy came at the expense of the small-scale family farms that only yielded food supplies for local consumption. Based on a legalistic interpretation of what constituted "efficient use" of the land, thousands of peasants were forcefully evicted from their farms and migrated to the cities, where they became part of the industrial labor pool. In 1961, the dispossession of farmland had produced 40,000 landless families and by 1969 their numbers amounted to 400,000 throughout Colombia. By 1970, the "latifundio" type of industrial farm (more than 50 hectares in area) occupied more than 77 per cent of arable land in the country. The AED policy increased the concentration of land ownership among cattle ranchers and urban industrialists, whose businesses expanded their profits as a result of reductions in the cost of labor wages after the influx of thousands of displaced peasants into the cities. During this period, most rural workers lacked basic medical care and malnutrition was almost universal, which increased the rates of preventable disease and infant mortality.
Communists were active throughout rural and urban Colombia in the period immediately following World War I. The Colombian Communist Party ("Partido Comunista Colombiano", PCC) was formally accredited by the Comintern in 1930. The PCC began establishing "peasant leagues" in rural areas and "popular fronts" in urban areas, calling for improved living and working conditions, education, and rights for the working class. These groups began networking together to present a defensive front against the state-supported violence of large landholders. Members organized strikes, protests, seizures of land, and organized communist-controlled "self-defense communities" in southern Colombia that were able to resist state military forces, while providing for the subsistence needs of the populace. Many of the PCC's attempts at organizing peasants were met with violent repression by the Colombian government and the landowning class. U.S. military intelligence estimated that in 1962, the size of the PCC had grown to 8,000 to 10,000 active members, and an additional 28,000 supporters.
In 1961, a guerrilla leader and long-time PCC organizer named Manuel Marulanda Vélez declared an independent "Republic of Marquetalia". The Lleras government unsuccessfully attempted to drive the guerrillas out of the communities by military attack, fearing that "a Cuban-style revolutionary situation might develop". After the failed attacks, several army outposts were set up in the area.
In October 1959, the United States sent a "Special Survey Team" composed of counterinsurgency experts to investigate Colombia's internal security situation. Among other policy recommendations the US team advised that "to shield the interests of both Colombian and US authorities against 'interventionist' charges any special aid given for internal security was to be sterile and covert in nature". In February 1962, three years after the 1959 "US Special Survey Team", a Fort Bragg top-level U.S. Special Warfare team headed by Special Warfare Center commander General William P. Yarborough, visited Colombia for a second survey.
In a secret supplement to his report to the Joint Chiefs of Staff, Yarborough encouraged the creation and deployment of a US-backed force to commit "paramilitary, sabotage and/or terrorist activities against known communist proponents".
The new counter-insurgency policy was instituted as Plan Lazo in 1962 and called for both military operations and civic action programs in violent areas. Following Yarborough's recommendations, the Colombian military recruited civilians into "civil defense" groups which worked alongside the military in its counter-insurgency campaign, as well as in civilian intelligence networks to gather information on guerrilla activity. Doug Stokes argues that it was not until the early part of the 1980s that the Colombian government attempted to move away from the counterinsurgency strategy represented by Plan Lazo and Yarborough's 1962 recommendations.
The Colombian government began attacking many of the communist groups in the early 1960s, attempting to re-assimilate the territories under the control of the national government. FARC was formed in 1964 by Manuel Marulanda Vélez and other PCC members, after a military attack on the community of Marquetalia in which 16,000 Colombian troops attacked a community that had only 48 armed fighters. Marulanda and 47 others fought against government forces at Marquetalia and then escaped into the mountains along with the other fighters. These 48 men formed the core of FARC, which later grew in size to hundreds of fighters.
In 1982, FARC–EP held its Seventh Guerrilla Conference, which called for a major shift in FARC's strategy. FARC had historically been doing most of its fighting in rural areas and was limited to small-scale confrontations with Colombian military forces. By 1982, increased income from the "coca boom" allowed them to expand into an irregular army, which would then stage large-scale attacks on Colombian troops. They also began sending fighters to Vietnam and the Soviet Union for advanced military training. They also planned to move closer to middle-sized cities, as opposed to only remote rural areas, and closer to areas rich in natural resources, in order to create a strong economic infrastructure. It was also at this conference that FARC added the initials "EP", for "Ejército del Pueblo" or "People's Army", to the organization's name.
In the early 1980s, President Belisario Betancur began discussing the possibility of peace talks with the guerrillas. This resulted in the 1984 "La Uribe" Agreement, which called for a cease-fire, which ended up lasting from 1984 to 1987.
In 1985, members of the FARC–EP, along with a large number of other leftist and communist groups, formed a political party known as the "Union Patriótica" ("Patriotic Union", UP). The UP sought political reforms (known as "Apertura Democratica") such as constitutional reform, more democratic local elections, political decentralization, and ending the domination of Colombian politics by the Liberal and Conservative parties. They also pursued socioeconomic reforms such as land redistribution, greater health and education spending, the nationalization of foreign businesses, Colombian banks, and transportation, and greater public access to mass media. While many members of the UP were involved with the FARC–EP, the large majority of them were not and came from a wide variety of backgrounds such as labor unions and socialist parties such as the PCC. In the cities, the FARC–EP began integrating itself with the UP and forming "Juntas Patrióticas" (or "solidarity cells") – small groups of people associated with labor unions, student activist groups, and peasant leagues, who traveled into the "barrios" discussing social problems, building support for the UP, and determining the sociopolitical stance of the urban peasantry.
The UP performed better in elections than any other leftist party in Colombia's history. In 1986, UP candidates won 350 local council seats, 23 deputy positions in departmental assemblies, 9 seats in the House, and 6 seats in the Senate. The 1986 Presidential candidate, Jaime Pardo Leal, won 4.6% of the national vote.
After 1986, thousands of members of the UP and other leftist parties were murdered (estimates range from 4,000 to 6,000). In 1987, the president of the UP, Jaime Pardo Leal, was murdered. In 1989, a single large landholder had over 400 UP members murdered. Over 70% of all Colombian presidential candidates in 1990—and 100% of those from center-left parties—were assassinated.
During this period, the Colombian government continued its negotiations with the FARC–EP and other armed groups, some of which were successful. Some of the groups which demobilized at this time include the EPL, the ERP, the Quintín Lame Armed Movement, and the M-19.
On 10 August 1990, Jacobo Arenas, an ideological leader and founder of the FARC–EP, died of a heart attack at the Casa Verde compound in Colombia's eastern mountains.
Towards the end of 1990, the army, with no advance warning and while negotiations were still ongoing with the group, attacked and seized four linked bases. The last of these, a compound known as Casa Verde that housed the National Secretariat of the FARC–EP, was seized on 15 December 1990. The Colombian government argued that the attack was caused by the FARC–EP's lack of commitment to the process, demonstrated by its continuing criminal activities and by FARC attacks in November.
On 3 June 1991, dialogue resumed between the Simón Bolívar Guerrilla Coordinating Board and the government on neutral territory in Caracas, Venezuela and Tlaxcala, Mexico. However, the war did not stop, and armed attacks by both sides continued. The negotiation process was broken off in 1993 after no agreement was reached. The Coordinating Board disappeared not long after that time, and guerrilla groups continued their activities independently.
Before the break off of dialogue, a letter written by a group of Colombian intellectuals (among whom were Nobel laureate Gabriel García Márquez) to the Simón Bolívar Guerrilla Coordinating Board was released denouncing the approach taken by the FARC–EP and the dire consequences that it was having for the country.
In the early 1990s, the FARC–EP had between 7,000 and 10,000 fighters, organized into 70 fronts spread throughout the country. From 1996 to 1998 they inflicted a series of strikes on the Colombian Army, including a three-day offensive in Mitú (Vaupés department), taking a large number of soldiers prisoner.
On 23 September 1994, the FARC kidnapped American agricultural scientist Thomas Hargrove and held him captive for 11 months. After his release, Hargrove wrote a book about his ordeal which inspired the 2000 film "Proof of Life" starring Meg Ryan and Russell Crowe.
Over this period in Colombia, the cultivation of different drugs expanded and there were widespread coca farmers' marches. These marches brought to a halt several major arteries in southern Colombia. Government officials said that FARC-EP had forced the protesters to participate. According to social anthropologist María Clemencia Ramírez, the relationship between the guerrillas and the marches was ambivalent: FARC-EP promoted the 1996 protests as part of their participatory democracy policies yet also exercised authoritarianism, which led to tensions and negotiations with peasant leaders, but the "cocalero" movement brought proposals on behalf of the coca growers and defended its own interests.
French sociologist Alain Labrousse, who has conducted extensive research on the illicit narcotics industry in Latin America and Central Asia, has noted similarities in the reliance on the drug trade by both the FARC-EP and the Taliban. In his thesis, Labrousse asserts that the FARC-EP leadership, like that of the Taliban, explicitly bans the use of drugs by its membership and within the local population, but vigorously advocates for legalization of drug trafficking as a tool to finance its military objectives. In both cases, the insurgency groups manage to garner significant political support of farmers who serve to benefit from the illicit drug trade, prompting grassroots mobilization, political activism, and agitation to demand legalization by the government.
In March 1999, members of a local FARC contingent killed three U.S.-based indigenous rights activists who were working with the U'wa people to build a school for U'wa children and were fighting against the encroachment of U'wa territory by multinational oil corporations. The killings were questioned by some and condemned by many others, and led the United States to increase pressure on the Pastrana administration to crack down on FARC guerrillas.
With the hope of negotiating a peace settlement, on 7 November 1998, President Andrés Pastrana granted FARC-EP a safe haven meant to serve as a confidence building measure, centred on the San Vicente del Caguán settlement.
After a series of high-profile guerrilla actions, including the hijacking of an aircraft, attacks on several small towns and cities, the arrest of the Irish "Colombia Three" (see below) and their alleged training of FARC-EP militants in bomb-making, and the kidnapping of several political figures, Pastrana ended the peace talks on 21 February 2002 and ordered the armed forces to start retaking the FARC-EP controlled zone, beginning at midnight. A 48-hour respite that had been previously agreed to with the rebel group was not respected, as the government argued that it had already been granted during an earlier crisis in January, when most of the more prominent FARC-EP commanders had apparently left the demilitarised zone. Shortly after the end of talks, the FARC-EP kidnapped Oxygen Green Party presidential candidate Íngrid Betancourt, who was travelling in Colombian territory. Betancourt was rescued by the Colombian government on 2 July 2008 (see Operation Jaque below).
On 24 April 2002, the U.S. House of Representatives Committee on International Relations published the findings of its investigation into IRA activities in Colombia. Their report alleged a longstanding connection between the IRA and FARC–EP, mentioned at least 15 IRA members who had been travelling in and out of Colombia since 1998, and estimated that the IRA had received at least $2 million in drug proceeds for training FARC-EP members. The IRA/FARC-EP connection was first made public on 11 August 2001, following the arrest in Bogotá of two IRA explosives and urban warfare experts and of a representative of Sinn Féin who was known to be stationed in Cuba. Jim Monaghan, Martin McCauley and Niall Connolly (known as the Colombia Three), were arrested in Colombia in August 2001 and were accused of teaching bomb-making methods to FARC–EP.
On 15 February 2002, the Colombia Three were charged with training FARC-EP members in bomb-making in Colombia. The Colombian authorities had received satellite footage of the men with FARC-EP in an isolated jungle area, where they were thought to have spent five weeks. They could have spent up to 20 years in gaol if the allegations were proved.
During October 2001, a key witness in the case against the three Irish republicans disappeared. This came as Sinn Féin President Gerry Adams admitted one of the men was the party's representative in Cuba. The missing witness, a former police inspector, said he had seen Mr McCauley with FARC-EP members in 1998. Without his testimony, legal sources said the chances of convicting the three men were reduced.
They were eventually found guilty of travelling on false passports in June 2004 but were acquitted of training FARC-EP members. That decision was reversed after an appeal by the Attorney General of Colombia and they were sentenced to 17-year terms. However, they vanished in December 2004 while on bail and returned to Ireland. Tánaiste Mary Harney said no deal had been done with Sinn Féin or the IRA over the three's return to Ireland adding that the Irish government would consider any request from the Colombian authorities for their extradition. Colombian vice-president Francisco Santos Calderón did not rule out allowing them to serve their sentences in Ireland.
For most of the period between 2002 and 2005, the FARC-EP was in a strategic withdrawal due to the increasing military and police actions of new president Álvaro Uribe, which led to the capture or desertion of many fighters and medium-level commanders. Uribe ran for office on an anti-FARC-EP platform and was determined to defeat FARC-EP in a bid to create "confidence" in the country. Uribe's own father had been killed by FARC-EP in an attempted kidnapping in 1983.
In 2002 and 2003, FARC broke up ten large ranches in Meta, an eastern Colombian province, and distributed the land to local subsistence farmers.
During the first two years of the Uribe administration, several FARC-EP fronts, most notably in Cundinamarca and Antioquia, were broken by the government's military operations.
On 5 May 2003, the FARC assassinated the governor of Antioquia, Guillermo Gaviria Correa, his advisor for peace, former defence minister Gilberto Echeverri Mejía, and eight soldiers. The FARC had kidnapped Gaviria and Echeverri a year earlier, when the two men were leading a march for peace from Medellín to Caicedo in Antioquia.
On 13 July 2004, the office of the United Nations' High Commissioner for Human Rights publicly condemned the group, given evidence that FARC-EP violated article 17 of the additional Protocol II of the Geneva Convention and international humanitarian law, as a result of the 10 July massacre of seven peasants and the subsequent displacement of eighty individuals in San Carlos, Antioquia.
In early February 2005, a series of small-scale actions by the FARC-EP around the southwestern departments of Colombia resulted in an estimated 40 casualties. The FARC–EP, in response to government military operations in the south and in the southeast, displaced its military centre of gravity towards the Nariño, Putumayo and Cauca departments.
The FARC-EP originally said that they would only release the police and military members they held captive (whom they considered to be prisoners of war) through exchanges with the government for imprisoned FARC-EP members. During the duration of the DMZ negotiations, a small humanitarian exchange took place.
The group demanded a demilitarised zone including two towns (Florida and Pradera) in the strategic region of Valle del Cauca, where much of the military action against them had taken place; this region is also an important route for transporting drugs to the Pacific coast. This demand was rejected by the Colombian government based on previous experience during the 2002 peace talks.
On 2 December 2004, the government announced the pardon of 23 FARC–EP prisoners, to encourage a reciprocal move. The prisoners to be released were all of low rank and had promised not to rejoin the armed struggle. In November 2004, the FARC–EP had rejected a proposal to hand over 59 of its captives in exchange for 50 guerrillas imprisoned by the government.
In a communique dated 28 November but released publicly on 3 December, the FARC-EP declared that they were no longer insisting on the demilitarisation of San Vicente del Caguán and Cartagena del Chairá as a precondition for the negotiation of the prisoner exchange, but instead that of Florida and Pradera in the Valle department. They stated that this area would lie outside the "area of influence" of both their Southern and Eastern Blocs (the FARC-EP's strongest) and that of the military operations being carried out by the Uribe administration.
They requested security guarantees both for the movement of their negotiators and for that of the guerrillas to be freed, who were said to number as many as 500 or more, and asked the Catholic Church to coordinate the participation of the United Nations and other countries in the process.
The FARC–EP also mentioned in the communique that Simón Trinidad's extradition would be a serious obstacle to reaching a prisoner exchange agreement with the government. On 17 December 2004, the Colombian government authorised Trinidad's extradition to the United States, but stated that the measure could be revoked if the FARC-EP released all political hostages and military captives in its possession before 30 December. The FARC-EP rejected the demand.
On 25 March 2006, after a public announcement made weeks earlier, the FARC–EP released two captured policemen at La Dorada, Putumayo. The release took place southwest of Bogotá, near the Ecuadorean border. The Red Cross said the two were released in good health. Military operations in the area and bad weather had prevented the release from occurring one week earlier.
In a separate series of events, civilian hostage and German citizen Lothar Hintze was released by FARC–EP on 4 April 2006, after five years in captivity. Hintze had been kidnapped for extortion purposes, and his wife had paid three ransom payments without any result.
One prisoner, Julian Ernesto Guevara Castro, a police captain captured on 1 November 1998, died of tuberculosis on 28 January 2006. On 29 March 2009, the FARC-EP announced that they would give Guevara's remains to his mother. The FARC handed over Guevara's remains on 1 April 2010.
Another civilian hostage, Fernando Araújo, later named Minister of Foreign Relations and formerly Development Minister, escaped his captors on 31 December 2006. Araújo had to walk through the jungle for five days before being found by troops in the hamlet of San Agustin, north of Bogotá. He was kidnapped on 5 December 2000 while jogging in the Caribbean coastal city of Cartagena. He was reunited with his family on 5 January 2007.
Another prisoner, Frank Pinchao, a police officer, escaped his captors on 28 April 2007 after nine years in captivity. He was reunited with his family on 15 May 2007.
On 28 June 2007, the FARC–EP reported the death of 11 out of 12 provincial deputies from the Valle del Cauca Department whom the guerrillas had kidnapped in 2002. The guerrillas claimed that the deputies had been killed by crossfire during an attack by an "unidentified military group." The Colombian government stated that government forces had not made any rescue attempts and that the FARC–EP executed the hostages. FARC did not report any other casualties on either side and delayed for months before permitting the Red Cross to recover the remains. According to the government, the guerrillas delayed turning over the corpses to let decomposition hide evidence of how they died. The Red Cross reported that the corpses had been washed and their clothing changed before burial, hiding evidence of how they were killed. The Red Cross also reported that the deputies had been killed by multiple close-range shots, many of them in the backs of the victims, and even two by shots to the head.
In February 2009, Sigifredo López, the only deputy who survived and was later released by FARC, accused the group of killing the 11 captives and denied that any military rescue attempt had taken place. According to López, the unexpected arrival of another guerrilla unit resulted in confusion and paranoia, leading the rebels to kill the rest of the Valle deputies. He survived after previously being punished for insubordination and was held in chains nearby but separated from the rest of the group.
On 10 January 2008, former vice presidential candidate Clara Rojas and former congresswoman Consuelo González were freed after nearly six years in captivity. In a Venezuela-brokered deal, a helicopter flew deep into Colombia to pick up both hostages. The women were escorted out of the jungle by armed guerrillas to a clearing where they were picked up by Venezuelan helicopters that bore International Red Cross insignias. In a statement published on a pro-rebel website, the FARC-EP said the unilateral release demonstrated the group's willingness to engage the Colombian government in talks over the release of as many as 800 people who were still being held. In a televised speech, Colombia's U.S.-allied president, Álvaro Uribe, thanked Chávez for his efforts.
While held captive in the jungle in 2004, Clara Rojas gave birth to her son by Caesarean section. When the baby was 8 months old, he was removed from the area, and Rojas did not hear of the boy again until 31 December, when she heard Colombian President Álvaro Uribe say on the radio that the child was no longer with her captors. DNA tests later confirmed that the boy, who had been living in a Bogotá foster home for more than two years under a different name, was hers, and she reclaimed her son. Asked about her opinion of the FARC–EP as a group, Rojas called it "a criminal organisation", condemning its kidnappings as "a total violation of human dignity" and saying some captive police and soldiers were constantly chained.
On 31 January 2008, the FARC–EP announced that they would release civilian hostages Luis Eladio Perez Bonilla, Gloria Polanco, and Orlando Beltran Cuellar to Venezuelan President Hugo Chávez as a humanitarian gesture. On 27 February 2008, the three hostages and Jorge Eduardo Gechem Turbay (who was added to the list due to his poor health) were released by FARC–EP. With the authorization of the Colombian government and the participation of the International Red Cross, a Venezuelan helicopter transported them to Caracas from San José del Guaviare. The FARC–EP had called its planned release of the hostages a gesture of recognition for the mediation efforts of Chávez, who had called on the international community to recognize the rebels as belligerents a month prior. Colombian President Álvaro Uribe, who had tense relations with Chávez, thanked the socialist leader and called for the release of all hostages. He said Colombia was still in a fight "against terrorist actions" but was open to reconciliation.
On 4 February 2008, anti-FARC protests were held in 45 Colombian cities and towns, with an estimated 1.5 million people coming out in Bogotá alone. Solidarity rallies were held in some 200 cities worldwide, including Berlin, Barcelona, London, Madrid, Toronto, Dubai, Miami, New York, Brisbane, and La Paz. The protests were originally organised through Facebook and were also supported by local Colombian media outlets as well as the Colombian government. Participation estimates vary from the hundreds of thousands to several millions of people in Colombia and thousands worldwide.
Kiraz Janicke of the leftist and chavista website Venezuelanalysis criticised the rallies, claiming that "right-wing paramilitary leaders featured prominently" in their organisation and arguing that workers were also pressured to attend the gatherings. According to her, the purpose of the protests was to promote "Uribe's policy of perpetuating Colombia's decades-long civil war." Shortly before the rallies took place thirteen demobilised AUC paramilitary leaders, including Salvatore Mancuso, had expressed their support of the protest through a communique. However, this move was rejected by organiser Carlos Andrés Santiago, who stated that such an endorsement was harmful and criticised the AUC's actions.
On 20 July 2008, a subsequent set of rallies against FARC included thousands of Colombians in Bogotá and hundreds of thousands throughout the rest of the country.
On 1 March 2008, Raúl Reyes, a member of FARC's ruling Secretariat, was killed in the small village of Santa Rosa, Ecuador, just across the border from Colombia, after Colombian planes bombarded a FARC camp there. The bombardment was "followed by troops in helicopters who recovered the bodies of Reyes and another 16 rebels." Reyes was the former FARC chief negotiator during the unsuccessful 1998–2002 peace process, and was also a key FARC hostage release negotiator. Reyes' demise marked the first time that a FARC Secretariat member had been killed in combat.
This incident led to a breakdown in diplomatic relations between Ecuador and Colombia, and between Venezuela and Colombia. Ecuador condemned the attack. The incident also resulted in diplomatic strains between the United States and Ecuador, following revelations that the Central Intelligence Agency provided intelligence that allowed the Colombian military to locate the FARC–EP commander and ordnance used in the attack.
It has been considered the biggest blow against the FARC–EP in its more than four decades of existence. This event was quickly followed, less than a week later, by the death of Iván Ríos, another member of FARC–EP's seven-man Secretariat, at the hands of his own bodyguard. His death came as a result of heavy Colombian military pressure and a reward offer of up to $5 million from the Colombian government.
After the attack, the Colombian military managed to secure six laptop computers belonging to Reyes, in which they found information linking several left-wing Colombian figures, including politicians, journalists and human rights activists, to terrorist activities.
Manuel Marulanda Vélez died on 26 March 2008 after a heart attack. His death was kept secret until the Colombian magazine "Semana" published an interview with Colombian defence minister Juan Manuel Santos on 24 May 2008 in which Santos mentioned Marulanda's death. The news was confirmed by FARC–EP commander "Timochenko" on the Latin American television station teleSUR on 25 May 2008, when "Timochenko" announced that the new commander-in-chief was Alfonso Cano. After speculation in several national and international media about the "softening up" of the FARC, and the announcement by Colombian President Álvaro Uribe that several FARC leaders were ready to surrender and free their captives, the secretariat of the FARC sent out a communiqué emphasising that the death of their founder would not change their approach towards the captives or the humanitarian agreement.
On 11 January 2008, during the annual State of the Nation address in the Venezuelan National Assembly, Venezuelan President Hugo Chávez referred to the FARC as "a real army that occupies territory in Colombia, they're not terrorists ... They have a political goal and we have to recognise that". However, on 13 January 2008, Chávez retracted his previous statement and stated his disapproval of the FARC–EP strategy of armed struggle and kidnapping, saying "I don't agree with kidnapping and I don't agree with armed struggle". Chávez repeatedly expressed his disapproval of the practice of kidnapping, stating on 14 April: "If I were a guerrilla, I wouldn't have the need to hold a woman, a man who aren't soldiers ... Free the civilians who don't have anything to do with the war. I don't agree with that." On 7 March, at the Cumbre de Río, Chávez again stated that the FARC–EP should lay down their arms: "Look at what has happened and is happening in Latin America, reflect on this (FARC-EP), we are done with war ... enough with all this death". On 8 June, Chávez repeated his call for a political solution and an end to the war: "The guerrilla war is history ... At this moment in Latin America, an armed guerrilla movement is out of place".
On 2 July 2008, in a Colombian military operation called Operation Jaque, the FARC–EP was tricked by the Colombian government into releasing 15 captives to Colombian intelligence agents disguised as journalists and international aid workers in a helicopter rescue. Military intelligence agents had infiltrated the guerrilla ranks and led the local commander in charge of the captives, Gerardo Aguilar Ramírez, alias "Cesar", to believe they were going to take the captives by helicopter to Alfonso Cano, the guerrillas' supreme leader. The rescued included Íngrid Betancourt (a former presidential candidate), U.S. military contractors Marc Gonsalves, Thomas Howes, and Keith Stansell, as well as eleven Colombian police officers and soldiers. Cesar and one other rebel were taken into custody by the agents without incident after boarding the helicopter. On 4 July, some observers questioned whether this was an intercepted captive release made to look like a rescue. In a 5 July communiqué, the FARC itself blamed the rebels Cesar and Enrique for the escape of the captives and acknowledged the event as a setback, but reiterated its willingness to reach future humanitarian agreements. Immediately after the rescue, Colombian military forces cornered the rest of FARC–EP's 1st Front, the unit which had held the captives. Rather than attack the 1st Front, Colombian forces offered its members amnesty if they surrendered. Colombia's Program for Humanitarian Attention for the Demobilized announced in August 2008 that 339 members of Colombia's rebel groups had surrendered and handed in their weapons in July, including 282 guerrillas from the Revolutionary Armed Forces of Colombia.
Óscar Tulio Lizcano, a Colombian Conservative Party congressman, was kidnapped on 5 August 2000. On Sunday, 26 October 2008, the ex-congressman escaped from FARC–EP rebels. Tulio Lizcano had been a hostage for over eight years, and escaped with a FARC–EP rebel he convinced to travel with him. They evaded pursuit for three days as they trekked through mountains and jungles before encountering the military in the western coastal region of Colombia. Tulio Lizcano was the first hostage to escape since the successful military rescue of Íngrid Betancourt, and the organization's longest-held political hostage. He became the 22nd Colombian political hostage to gain freedom during 2008.
Lizcano told Santos that during his final days in captivity they had nothing to eat but wild palm hearts and sugar cane. With the military tightening the noose, a FARC–EP rebel turned himself in and provided Colombian authorities with Lizcano's exact location in the northwestern department of Chocó. As police and army troops prepared to launch a rescue operation, Lizcano escaped alongside one of his guerrilla guards, who had decided to desert. The two men hiked through the rain forest for three days and nights until they encountered an army patrol. Speaking from a clinic in the western city of Cali, Lizcano said that when soldiers saw him screaming from across a jungle river, they thought he was drunk and ignored him. Only when he lifted the FARC–EP rebel's Galil assault rifle did the soldiers begin to understand that he was escaping from the rebels. "They jumped into the river, and then I started to shout, 'I'm Lizcano'", he said.
Soon after the liberation of this prominent political hostage, the Vice President of Colombia, Francisco Santos Calderón, called Latin America's biggest guerrilla group a "paper tiger" with little control of the nation's territory, adding that "they have really been diminished to the point where we can say they are a minimal threat to Colombian security", and that "After six years of going after them, reducing their income and promoting reinsertion of most of their members, they look like a paper tiger." However, he warned against any kind of premature triumphalism, because "crushing the rebels will take time". Colombia's vast expanses of jungle make the rebels hard to track down and fight.
On 21 December 2008, the FARC–EP announced that they would release civilian hostages Alan Jara, Sigifredo López, three low-ranking police officers and a low-ranking soldier to Senator Piedad Córdoba as a humanitarian gesture. On 1 February 2009, the FARC–EP proceeded with the release of the four security force members, Juan Fernando Galicio Uribe, José Walter Lozano Guarnizo, Alexis Torres Zapata and William Giovanni Domínguez Castro, all of whom had been captured in 2007. Jara (kidnapped in 2001) was released on 3 February and López (kidnapped in 2002) on 5 February.
On 17 March 2009, the FARC-EP released the Swedish hostage Erik Roland Larsson. Larsson, paralyzed on one side of his body, was handed over to detectives in a rugged region of the northern department of Córdoba. Larsson had been kidnapped from his ranch in Tierralta, not far from where he was freed, on 16 May 2007, along with his Colombian girlfriend, Diana Patricia Pena, while paying workers. She escaped that same month following a gun battle between her captors and police. Larsson suffered a stroke while in captivity. The FARC-EP had sought a $5 million ransom. One of Larsson's sons said that the ransom was not paid.
On 22 December 2009, the body of Luis Francisco Cuéllar, the Governor of Caquetá, was discovered, a day after he had been kidnapped from his house in Florencia, Caquetá. Officials said the abduction and execution had been carried out by the FARC. According to officials, he had been killed soon after the abduction, the kidnappers cutting the governor's throat as they evaded security forces. In a statement broadcast on radio, the acting governor, Patricia Vega, said, "I no longer have any doubts that FARC has done it again." The FARC claimed responsibility for Cuéllar's kidnapping and murder in January 2010. The group said that it had kidnapped him in order to "put him on trial for corruption" and blamed his death on an attempt to rescue him by force.
On 16 April 2009, the FARC-EP announced that they would release Army Corporal Pablo Emilio Moncayo Cabrera to Piedad Córdoba as a humanitarian gesture. Moncayo was kidnapped on 21 December 1997. On 28 June 2009, the FARC announced that they would release soldier Josue Daniel Calvo Sanchez. Calvo was kidnapped on 20 April 2009. Calvo was released on 28 March 2010. Moncayo was released on 30 March 2010.
On 13 June 2010, Colombian troops rescued Police Colonel Luis Herlindo Mendieta Ovalle, Police Captain Enrique Murillo Sanchez and Army Sergeant Arbey Delgado Argote in an event known as Operation Chameleon, twelve years after the individuals were captured; Argote was kidnapped on 3 August 1998. Ovalle and Sanchez were kidnapped on 1 November 1998. On 14 June, Police Lieutenant William Donato Gomez was also rescued. He was also kidnapped on 3 August 1998.
President Juan Manuel Santos's term began with a suspected FARC bomb blast in Bogotá. This followed the resolution of the 2010 Colombia–Venezuela diplomatic crisis, which had erupted over outgoing President Álvaro Uribe's allegations of active Venezuelan support for FARC.
In early September 2010, FARC-EP attacks in the Nariño Department and Putumayo Department in southern Colombia killed some fifty policemen and soldiers in hit-and-run assaults.
According to a December report by the Corporación Nuevo Arco Iris NGO, 473 FARC-EP guerrillas and 357 members of the Colombian security forces died in combat between January and September 2010. An additional 1,382 government soldiers or policemen were wounded during the same period, with the report estimating that the total number of casualties could reach 2,500 by the end of the year. Nuevo Arco Iris head León Valencia considered that FARC guerrillas had reacted to a series of successful military blows against them by splitting up their forces into smaller groups and intensifying the offensive use of anti-personnel land mines, leading to what he called a further "degradation" of the conflict. Valencia also added that both coca crops and the drug trade had "doubled" in areas with FARC-EP presence. Researcher Claudia López considered that the Colombian government was winning the strategic and aerial side of the war but not the infantry front, where both the FARC-EP and the ELN continued to maintain an offensive capacity.
The International Crisis Group claimed that the military offensives carried out under former President Álvaro Uribe and President Juan Manuel Santos had reduced the number of FARC-EP combatants to around 7,000, less than half the 20,000 combatants the FARC-EP was estimated to have fielded in the early 2000s. The same organisation also stated that the military offensive had reduced FARC territorial control and pushed guerrillas into more remote and sparsely populated regions, often close to national or internal borders.
Colombian authorities announced the death of Víctor Julio Suárez Rojas, also known as "Mono Jojoy", on 23 September 2010. President Juan Manuel Santos stated that the FARC commander was killed in an operation that began in the early hours of 21 September in the department of Meta, south of the capital Bogotá. According to Santos, he was "the impersonation of terror and a symbol of violence". After this event, the FARC-EP released a statement saying that defeating the group would not bring peace to Colombia and called for a negotiated solution, not surrender, to the social and political conflict.
In January 2011, Juan Manuel Santos admitted that the FARC-EP had killed 460 government soldiers and wounded over 2,000 in 2010. In April 2011, the Colombian congress issued a statement saying that FARC had a "strong presence" in roughly one third of Colombia's municipalities, while its attacks had increased. Overall FARC operations, including attacks against security forces as well as kidnappings and the use of land mines, had increased every year since 2005. In the first six months of 2011, the FARC carried out an estimated 1,115 actions, a 10% increase over the same period in 2010.
By early 2011, Colombian authorities and news media reported that the FARC and its clandestine sister groups had partly shifted strategy from guerrilla warfare to "a war of militias", meaning that they were increasingly operating in civilian clothes while hiding amongst sympathizers in the civilian population. In early January 2011, the Colombian army said that the FARC had some 18,000 members, with 9,000 of those forming part of the militias. The army said it had identified at least 1,400 such militia members in the FARC strongholds of Valle del Cauca and Cauca in 2011. In June 2011, Colombian chief of staff Edgar Cely claimed that the FARC wanted to "urbanize their actions", which could partly explain the increased guerrilla activity in Medellín and particularly Cali. Jeremy McDermott, co-director of InSight Crime, estimated that FARC may have had some 30,000 'part-time fighters' in 2011, consisting of both armed and unarmed civilian supporters making up the rebel militia network, as opposed to full-time fighters wearing uniforms.
According to Corporación Nuevo Arco Iris, FARC-EP killed 429 members of the Colombian government's security forces between January and October 2011. During this same period, the rebel group lost 316 of its own members. The year 2011 saw over 2,000 incidents of FARC activity, the highest figure recorded since 1998. The NGO stated that while most of these incidents remained defensive in nature and were unlike the large offensives of years past, FARC actions had grown since 2005, and the rebel group was carrying out intense operations against small and medium-sized Colombian military units in vulnerable areas.
Colombian troops killed FARC leader Alfonso Cano in a firefight on 4 November 2011. The 6th Front of the FARC, which was in charge of Cano's security at the time of his death, retaliated by killing two policemen in Suarez and Jambaló some 24 hours after the death of Cano.
On 26 November 2011, the FARC killed Police Captain Edgar Yesid Duarte Valero, Police Lieutenant Elkin Hernández Rivas, Army Corporal Libio José Martínez Estrada, and Police Intendant Álvaro Moreno after government troops approached the guerrilla camp where they were held in an area of the Caqueta department. Police Sergeant Luis Alberto Erazo Maya managed to escape his captors and was later rescued.
The Colombian military had information indicating that there could be captives in the area and initiated Operation Jupiter in October 2011, using a 56-man Special Forces unit to carry out surveillance in preparation for a future rescue mission that would involve additional troops and air support. According to the Colombian military, this unit remained in the area for 43 days and did not find the captives until it accidentally ran into the FARC camp on the way back, which led to a shootout. Relatives of the captives, former victims and civil society groups blamed both the government and FARC for the outcome, questioning the operation as well as criticizing military rescues.
In 2012, FARC announced that it would no longer participate in kidnappings for ransom and released the last ten soldiers and police officers it had kept as prisoners, but it kept silent about the status of hundreds of civilians still reported as hostages, and continued kidnapping soldiers and civilians. On 26 February 2012, the FARC announced that it would release its remaining ten political hostages. The hostages were released on 2 April 2012. The president of Colombia, Juan Manuel Santos, said that this was "not enough", and asked the FARC to release the civilian hostages it still held.
On 22 November 2012, the FARC released four Chinese oil workers. The hostages were working for the Emerald Energy oil company, a British-based subsidiary of China's Sinochem Group, when they were kidnapped on 8 June 2011. Their Colombian driver was also kidnapped, but released several hours later. Authorities identified the freed men as Tang Guofu, Zhao Hongwei, Jian Mingfu, and Jiang Shan.
Santos announced on 27 August 2012 that the Colombian government had engaged in talks with FARC in order to seek an end to the conflict:
Exploratory conversations have been held with the FARC to find an end to the conflict. I want to make very clear to Colombians that the approaches that have been carried out and the ones that will happen in the future will be carried out within the framework based on these principles: We are going to learn from the mistakes made in the past so that they are not repeated. Second, any process must lead to the end of the conflict, not making it longer. Third, operations and military presence will be maintained across the entire national territory.
He also said that he would learn from the mistakes of previous leaders, who had failed to secure a lasting ceasefire with FARC, though the military would continue operations throughout Colombia while talks continued. An unnamed Colombian intelligence source said Santos had assured FARC that no one would be extradited to stand trial in another country. "Al Jazeera" reported that the initiative began after Santos met with Venezuelan President Hugo Chávez and asked him to mediate. Former President Uribe criticized Santos for seeking peace "at any cost" and rejected the idea of holding talks. "Telesur" reported that FARC and the Colombian government had signed a preliminary agreement in Havana the same day. The first round of the talks was to take place in Oslo on 5 October, then return to Havana for approximately six months of talks before culminating in Colombia. However, Santos later ruled out a ceasefire pending the talks in Oslo and reiterated that offensive operations against FARC would continue.
ELN leader Nicolás Rodríguez Bautista, otherwise known as Gabino, added that his group was interested in joining the talks too: "Well we are open, it's exactly our proposal, to seek room for open dialogue without conditions and start to discuss the nation's biggest problems. But the government has said no! Santos says he has the keys to peace in his pocket, but I think he has lost them because there seems to be no possibility of a serious dialogue, we remain holding out for that."
Colombia's RCN Radio reported on 29 September that a preliminary draft of the proposals indicated that a resolution would involve answering FARC's historic grievances, including rural development and agrarian reform; democratic development via an enhancement of the number of registered political parties; and security and compensation for the victims of the conflict. In this regard, the Colombian government had already passed a series of laws that entail compensation for the victims and a return of land to the displaced. FARC also indicated a willingness to give up its arms. Former M19 member Antonio Navarro Wolff said: "If the government wants a serious peace plan they will have to take control of the coca leaf plantations that are currently owned by the FARC because if not another criminal group will take over it." Santos later told "Al Jazeera" that peace was possible if there was "goodwill" on both sides. Santos told the general debate of the sixty-seventh session of the United Nations General Assembly on 26 September that Venezuela and Chile were also helping in the discussion, along with Cuba and Norway.
Peace talks formally started on 18 October in a hotel 30 miles north of the Norwegian capital, Oslo, with a joint press conference by both delegations. The representatives of the government, led by Humberto de la Calle, and the FARC, led by Iván Márquez, said the so-called second phase of the peace process would be inaugurated in Oslo on 15 November, after which the delegations would go to Cuba to work on the negotiation of the peace accord, which would ultimately lead to a permanent agreement and ceasefire. The Colombian government also stated that it expected a post-Chávez government to continue to support the peace process. In late 2012, FARC declared a two-month unilateral ceasefire and said that it would be open to extending it as a bilateral truce during the rest of the negotiations. The Colombian government refused to agree to a bilateral ceasefire, alleging violations of the truce by FARC.
Shortly after lifting the ceasefire, FARC attacked a coal transport railway, derailing 17 wagons and forcing a suspension of operations, and assaulted Milán, a town in southern Caquetá, killing at least seven government soldiers and injuring five others.
Santos has been far more responsive to threats against social leaders than his predecessors. He has also been decisive in combatting the New Illegal Armed Groups that emerged as a result of the paramilitary process, especially in fighting threats and violence against human rights defenders and social leaders. During Santos' presidency, private security and proclaimed self-defense movements have also lost their legitimacy.
On 27 May 2013, it was announced that one of the most contentious issues had been resolved: land reform and compensation were tackled with promises to compensate those who had lost land. This was the first time the government and FARC had reached agreement on a substantive issue in four different negotiating attempts over 30 years. The peace process then moved on to the issue of "political participation", during which FARC insisted on its demand for an elected Constituent Assembly to rewrite Colombia's constitution. This demand was forcefully rejected by Colombia's lead government negotiator, Humberto de la Calle.
On 1 July 2013, FARC and the second-largest guerrilla group in Colombia, the ELN, announced that they would be working together to find a "political solution to the social and armed conflict." The details of this partnership, however, were far from clear; the Washington Office on Latin America's Adam Isacson explained that two issues central to peace accords with the ELN, resource policy and kidnapping, were off the table in the talks in Havana with FARC, and that the addition of these topics might complicate and slow down an already sluggish process.
On 6 November 2013 the Colombian government and FARC announced that they had come to an agreement regarding the participation of political opposition and would begin discussing their next issue, the illicit drug trade.
On 23 January 2014 Juan Fernando Cristo, the President of the Senate of Colombia, proposed a second Plan Colombia during a conference on the Colombian peace process in Washington, D.C. Cristo stated that this new plan should be "for the victims" and should redirect the resources from the original Plan Colombia towards supporting a post-conflict Colombia.
On 16 May 2014, the Colombian government and the FARC rebels agreed to work together against drug trafficking, a further step in the development of the peace talks.
On 28 June 2015, the humanitarian and spiritual leader Ravi Shankar, on a three-day visit to Cuba, held several rounds of discussions with FARC members in an exercise of confidence-building in the peace process, which had faced many hurdles over the preceding three years.
FARC requested Shankar to actively participate in the peace process. He said, "In this conflict, everyone should be considered as victims. And inside every culprit, there is a victim crying for help."
After many discussions, FARC finally agreed to embrace the Gandhian principle of non-violence. Commander Iván Márquez declared in a press conference that they would adopt it. The FARC agreed that hatred had derailed the peace process. Márquez said, "We will work for peace and justice for all the people of Colombia."
On 8 July 2015, FARC announced a unilateral ceasefire, which began on 20 July 2015.
On 30 September 2015, Ravi Shankar accused Norway of sidetracking his effort at brokering a peace deal between the Colombian government and FARC, after Norway, which was part of a four-nation group (along with Cuba, Chile and Venezuela) acting as guarantors in the talks, released a statement saying that the peace deal was a result of "painstaking efforts undertaken by a league of Western nations".
On 23 June 2016, a ceasefire accord was signed between the FARC guerrilla army and the Colombian government in Havana, Cuba. Leaders of several Latin American countries that contributed to the deal, including Cuba and Venezuela, were present. A final peace accord would require approval in a referendum.
Under the accord, the Colombian government would support massive investment for rural development and facilitate the FARC's transformation into a legal political party. FARC promised to help eradicate illegal drug crops, remove landmines in the areas of conflict, and offer reparations to victims. FARC leaders could avoid prosecution through acts of reparation to victims and other community work.
On 2 October 2016 Colombians voted and rejected the peace deal with FARC by 50.2% to 49.8%.
The government met with victims and peace opponents after the referendum loss, receiving over 500 proposed changes, and continued to negotiate with FARC. A revised agreement was announced on 12 November 2016; it would require parliamentary approval rather than a nationwide referendum. Former President and chief peace opponent Álvaro Uribe met with President Juan Manuel Santos and thereafter issued a noncommittal statement that he awaited release of the full text. Among the reported 60 new or modified terms was a provision for FARC assets to be distributed for victim compensation. FARC members would be able to establish a political party, and would in general be granted full immunity in exchange for full confession and cooperation, although drug trafficking would be assessed on a case-by-case basis. Peace terms would be enforced by a Special Justice for the Peace, which would report to the Constitutional Court and not to an international body, and both Parliament and the Special Justice would have the ability to modify terms of the agreement as seen necessary.
The Colombian government and the FARC on 24 November signed a revised peace deal, which Congress approved on 30 November.
On 18 February 2017, the last FARC guerrillas arrived in a designated transition zone, where they began the process of disarming. The rebels stayed in the zones until 31 May, after which they were registered and reintegrated into civilian life.
On 27 June 2017, the FARC ceased to be an armed group, with its forces disarming and handing more than 7,000 weapons to the United Nations at a ceremony hosted by the FARC leadership, and the Colombian government, which included the Cabinet and President Juan Manuel Santos. Peace observers had received the coordinates of 873 weapons caches hidden in Colombia's remote jungles and mountains. The UN was able to remove 510 of these weapons caches, leaving the remaining 363 caches for the military to pick up.
The last batch of weapons belonging to former FARC rebels was removed under UN supervision. In total, the United Nations collected 8,112 guns, 1.3 million bullets, 22 tons of explosives, 3,000 grenades and 1,000 land mines from the FARC.
The Special Jurisdiction for Peace ("Jurisdicción Especial para la Paz", JEP) would be the transitional justice component of the Comprehensive System, complying with Colombia's duty to investigate, clarify, prosecute and punish serious human rights violations and grave breaches of international humanitarian law which occurred during the armed conflict. Its objectives would be to satisfy victims' right to justice, offer truth to the public, contribute to the reparation of victims, contribute to the fight against impunity, adopt decisions which give full legal security to direct and indirect participants in the conflict, and contribute to the achievement of a stable and lasting peace. At the end of a six-day visit to Colombia, on 9 October 2017, the UN Assistant Secretary-General for human rights, Andrew Gilmour, issued a statement welcoming progress in the demobilization and disarmament of the FARC. However, he expressed "concern about problems in the implementation of the accords which relate to the continued attacks against human rights defenders and community leaders."
On 20 July 2019, ten former FARC members, including former senior leader Pablo Catatumbo, were sworn in as members of the Congress of Colombia. All of these ex-rebels are members of the Common Alternative Revolutionary Force political party. Five of the ten were sworn in as members of the House of Representatives, while the other five were sworn in as members of the Senate. As part of the peace agreement, these ten seats will remain under the control of members of the Common Alternative Revolutionary Force until 2026.
In a video published on 29 August 2019, former second-in-command FARC leader Iván Márquez announced his return to arms in the name of the guerrilla movement. Márquez claimed that the government had not complied with its part of the Havana accord, pointing to 667 local activists and 150 former guerrillas killed since the peace accord was signed. This position was criticized by former FARC supreme leader Rodrigo Londoño, who insisted that his party remained committed to the peace agreements and that "[m]ore than 90 percent of former FARC guerrillas remain committed to the peace process". Londoño also criticized Márquez, stating that the majority of former guerrillas killed were FARC dissidents who had continued armed actions.
After the announcement, President Iván Duque authorized the Joint Special Operations Command to start an offensive operation. Government forces conducted a bombing raid in San Vicente del Caguán in which twelve people identified as FARC dissidents were killed. According to Duque, one of them, Gildardo Cucho, was the leader of the group which would be joining Iván Márquez in the rearmament. Duque also accused Venezuelan president Nicolás Maduro of assisting FARC and providing a safe haven for militants in Venezuela.
FARC received most of its funding—which was estimated to average some US$300 million per year—from taxation of the illegal drug trade and other activities, ransom kidnappings, bank robberies, and extortion of large landholders, multinational corporations, and agribusiness. From taxation of illegal drugs and other economic activity, FARC was estimated to receive US$60–100 million per year.
The guerrillas' main means of financing was the drug trade, which included both direct and indirect participation: taxation, administration or control of areas of production and trafficking. A large but often difficult to estimate portion of funding came from the taxation of businesses and even local farmers, often lumped in with, or defined by the group's opponents as, extortion.
FARC was not initially involved in direct drug cultivation, trafficking, or trans-shipment prior to or during the 1980s. Instead, it maintained a system of taxation on the production that took place in the territories that they controlled, in exchange for protecting the growers and establishing law and order in these regions by implementing its own rules and regulations. During the 1990s, FARC expanded its operations, in some areas, to include trafficking and production, which had provided a significant portion of its funding. Right-wing paramilitary groups also receive a large portion of their income from drug trafficking and production operations.
A 1992 Central Intelligence Agency report "acknowledged that the FARC had become increasingly involved in drugs through their 'taxing' of the trade in areas under their geographical control and that in some cases the insurgents protected trafficking infrastructure to further fund their insurgency", but also described the relationship between the FARC and the drug traffickers as one "characterized by both cooperation and friction" and concluded that "we do not believe that the drug industry [in Colombia] would be substantially disrupted in the short term by attacks against guerrillas. Indeed, many traffickers would probably welcome, and even assist, increased operations against insurgents."
In 1994, the Drug Enforcement Administration (DEA) came to three similar conclusions. First, that any connections between drug trafficking organizations and Colombian insurgents were "ad hoc 'alliances of convenience'". Second, that "the independent involvement of insurgents in Colombia's domestic drug productions, transportation, and distribution is limited ... there is no evidence that the national leadership of either the FARC or the ELN has directed, as a matter of policy, that their respective organizations directly engage in independent illicit drug production, transportation, or distribution." Third, the report determined that the DEA "has no evidence that the FARC or ELN have been involved in the transportation, distribution, or marketing of illegal drugs in the United States. Furthermore it is doubtful that either insurgent group could develop the international transportation and logistics infrastructure necessary to establish independent drug distribution in the United States or Europe ... DEA believes that the insurgents never will be major players in Colombia's drug trade."
FARC had called for crop substitution programs that would allow coca farmers to find alternative means of income and subsistence. In 1999, FARC worked with a United Nations alternative development project to enable the transition from coca production to sustainable food production. On its own, the group had also implemented agrarian reform programs in Putumayo.
In those FARC-controlled territories that produced coca, it was generally grown by peasants on small plots; in paramilitary- or government-controlled areas, coca was generally grown on large plantations. The FARC-EP generally made sure that peasant coca growers received a much larger share of profits than the paramilitaries would give them, and demanded that traffickers pay a decent wage to their workers. When growers in a FARC-controlled area were caught selling coca to non-FARC brokers, they were generally forced to leave the region, but when growers were caught selling to FARC in paramilitary-controlled areas, they were generally killed. The lower prices paid for raw coca in paramilitary-controlled areas led to significantly larger profits for the drug processing and trafficking organizations, which meant that they generally preferred paramilitary control of an area to FARC control.
In 2000, FARC spokesman Simón Trinidad said that taxes on drug laboratories represented an important part of the organization's income, although he did not say how much they amounted to. He defended this funding source, arguing that the drug trade was endemic in Colombia because it had pervaded many sectors of the country's economy.
After the 21 April 2001 capture of Brazilian drug lord Luiz Fernando da Costa (a.k.a. Fernandinho Beira-Mar) in Colombia, Colombian and Brazilian authorities accused him of cooperating with FARC-EP through the exchange of weapons for cocaine. They also claimed that he received armed protection from the guerrilla group.
On 18 March 2002 the Attorney General of the United States John Ashcroft indicted leaders of the FARC after an 18-month investigation into their narcotics trafficking. Tomás Molina Caracas, the commander of the FARC's 16th Front, led the 16th Front's drug-trafficking activities together with Carlos Bolas and a rebel known as Oscar El Negro. Between 1994 and 2001, Molina and other 16th Front members controlled Barranco Minas, where they collected cocaine from other FARC fronts to sell it to international drug traffickers for payment in currency, weapons and equipment.
On 22 March 2006 the Attorney General Alberto Gonzales announced the indictment of fifty leaders of FARC for exporting more than $25 billion worth of cocaine to the United States and other countries. Several of the FARC leaders appeared on the Justice Department's Consolidated Priority Organization target list, which identifies the most dangerous international drug trafficking organizations. Recognizing the increased profits, the FARC moved to become directly involved in the manufacture and distribution of cocaine by setting the price paid for cocaine paste and transporting it to jungle laboratories under FARC control. The charged FARC leaders ordered that Colombian farmers who sold paste to non-FARC buyers would be murdered and that U.S. fumigation planes should be shot down.
On 11 October 2012 Jamal Yousef, a.k.a. "Talal Hassan Ghantou", a native of Lebanon, was sentenced to 12 years in prison for conspiring to provide military-grade weapons to the Fuerzas Armadas Revolucionarias de Colombia (the FARC), in exchange for over a ton of cocaine. Yousef pleaded guilty in May 2012 to one count of providing material support to the FARC.
The FARC-EP carried out both ransom and politically motivated kidnappings in Colombia and was responsible for the majority of such kidnappings carried out in the country.
The guerrillas initially targeted the families of drug traffickers, the wealthy upper class and foreigners, but the group later expanded its kidnapping and extortion operations to include the middle class.
During the 1984 peace negotiations, FARC pledged to stop kidnapping and condemned the practice. However, hostage-taking by FARC increased in the years following this declaration. In a 1997 interview, FARC-EP Commander Alfonso Cano argued that some guerrilla units continued to do so for "political and economic reasons" in spite of the prohibition issued by the leadership.
In 2000, the FARC-EP issued a directive called "Law 002" which demanded a "tax" from all individuals and corporations with assets worth at least US$1 million, warning that those who failed to pay would be detained by the group. In 2001, FARC Commander Simón Trinidad claimed that the FARC-EP does not engage in kidnapping but instead "retains [individuals] in order to obtain resources needed for our struggle". Commander Trinidad said he did not know how many people had been taken by FARC or how much money was collected by the organization in exchange for their freedom. In addition, FARC spokesperson Joaquín Gómez stated that the payment demanded was a tax which many people paid "voluntarily", with kidnapping undertaken because "those who have the resources must pay their share".
In 2002, Amnesty International sent a letter to FARC-EP Commander Manuel Marulanda condemning kidnapping and hostage-taking as well as rejecting the threats directed at municipal or judicial officials and their families, arguing that they are civilians who are protected by international humanitarian law as long as they do not participate in hostilities.
According to Amnesty International, the number of kidnappings decreased in the last years of the conflict, but the human rights organization estimated that FARC and ELN guerrillas continued to be behind hundreds of cases until their disarming. In 2008, press reports estimated that about 700 hostages continued to be held captive by FARC. According to the "Fundación País Libre" anti-kidnapping NGO, an estimated total of 6,778 people were kidnapped by FARC between 1997 and 2007. In 2009, the state's anti-kidnapping agency "Fondelibertad" reviewed 3,307 officially unsettled cases and removed those that had already been resolved or for which there was insufficient information. The agency concluded that 125 hostages remained in captivity nationwide of whom 66 were being held by the FARC–EP. The government's revised figures were considered "absurdly low" by "Fundación País Libre", which has argued that its own archives suggest an estimated 1,617 people taken hostage between 2000 and 2008 remain in the hands of their captors, including hundreds seized by FARC. FARC claimed at the time that it was holding nine people for ransom in addition to hostages kept for a prisoner exchange.
In 2008, Venezuelan President Hugo Chávez expressed his disagreement with FARC-EP's resorting to kidnappings. Former President Fidel Castro of Cuba also criticized the use of hostage-taking by the guerrillas as "objectively cruel" and suggested that the group free all of its prisoners and hostages.
In February 2012, FARC announced that it would release ten members of the security forces, who it described as political prisoners, representing the last such captives in its custody. It further announced the repeal of Law 002, bringing to an end its support for the practice of kidnapping for ransom. However, it was not clear from the FARC statement what would happen to the civilians it still held in captivity. Colombian president Juan Manuel Santos used Twitter to welcome the move as a "necessary, if insufficient, step in the right direction".
FARC was accused of committing violations of human rights by numerous groups, including Human Rights Watch, Amnesty International, the United Nations as well as by the Colombian, U.S. and European Union governments.
A February 2005 report from the United Nations' High Commissioner for Human Rights mentioned that, during 2004, "FARC-EP continued to commit grave breaches [of human rights] such as murders of protected persons, torture and hostage-taking, which affected many civilians, including men, women, returnees, boys and girls, and ethnic groups."
FARC consistently carried out attacks against civilians, specifically targeting suspected supporters of paramilitary groups, political adversaries, journalists, local leaders, and members of certain indigenous groups, since at least as early as 1994. From 1994 to 1997, the region of Urabá in Antioquia Department was the site of FARC attacks against civilians. FARC also executed civilians for failing to pay "war taxes" to the group.
In 2001, Human Rights Watch (HRW) announced that the FARC-EP had abducted and executed civilians accused of supporting paramilitary groups in the demilitarized zone and elsewhere, without providing any legal defense mechanisms to the suspects and generally refusing to give any information to relatives of the victims. The human rights NGO directly investigated three such cases and received additional information about over twenty possible executions during a visit to the zone.
According to HRW, those extrajudicial executions would qualify as forced disappearances if they had been carried out by agents of the government or on its behalf, but nevertheless remained "blatant violations of the FARC-EP's obligations under international humanitarian law and in particular key provisions of article 4 of Protocol II, which protects against violence to the life, physical, and mental well-being of persons, torture, and ill-treatment".
The Colombian human rights organization CINEP reported that FARC-EP killed an estimated total of 496 civilians during 2000.
The FARC-EP employed a type of improvised mortar made from gas canisters (or cylinders) when launching attacks.
According to Human Rights Watch, the FARC-EP killed civilians not involved in the conflict through its use of gas cylinder mortars and landmines.
Human Rights Watch considered that "the FARC-EP's continued use of gas cylinder mortars shows this armed group's flagrant disregard for lives of civilians ... gas cylinder bombs are impossible to aim with accuracy and, as a result, frequently strike civilian objects and cause avoidable civilian casualties."
According to the ICBL Landmine and Cluster Munitions Monitor, "FARC is probably the most prolific current user of antipersonnel mines among rebel groups anywhere in the world." Furthermore, FARC used child soldiers to carry and deploy antipersonnel mines.
FARC sometimes threatened or assassinated indigenous Colombian leaders for attempting to prevent FARC incursions into their territory and resisting the forcible recruitment by FARC of indigenous youth. Between 1986 and 2001, FARC was responsible for 27 assassinations, 15 threats, and 14 other abuses of indigenous people in Antioquia Department.
In March 1999, members of a local FARC contingent killed three indigenous rights activists who were working with the U'wa people to build a school for U'wa children and fighting against the encroachment of U'wa territory by multinational oil corporations. The killings were almost universally condemned and seriously harmed public perceptions of FARC.
Members of indigenous groups have demanded the removal of military bases set up by the Colombian government and guerrilla encampments established by FARC in their territories, claiming that both the Colombian National Army and the FARC should respect indigenous autonomy and international humanitarian law. According to 2012 research from the National Indigenous Organization of Colombia (ONIC), 80,000 members of indigenous communities had been displaced from their native lands since 2004 because of FARC-related violence.
Luis Evelis, an indigenous leader and ONIC representative, has stated that "the armed conflict is still in force, causing damages to the indigenous. Our territories are self-governed and we demand our autonomy. During the year 2011, fifty-six indigenous people have been killed." The United Nations Declaration on the Rights of Indigenous Peoples indicates that no military activities may be carried out within indigenous territories without first undertaking an "effective consultation" with indigenous representatives and authorities from the communities involved.
The Regional Indigenous Council of Cauca (CRIC) issued a statement concerning the release of two hostages taken by FARC in 2011: "Compared to past statements made by the national government, it is important to reiterate that the presence of armed groups in our territories is a fact that has been imposed by force of arms, against which our communities and their leaders have remained in peaceful resistance." The CRIC also indicated that neither the Colombian government nor the mediators and armed groups involved consulted with the indigenous people and their authorities about the hostage release, raising concerns about the application of national and international law guaranteeing their autonomy, self-determination and self-government. The indigenous organization also demanded the immediate end of all violence and conflict within indigenous territories and called for a negotiated solution to the war.
Official Colombian government statistics show that murders of indigenous people between January and May 2011 increased by 38% compared to the same timeframe in 2010. Colombia is home to nearly 1 million indigenous people, divided into around 100 different ethnicities. The Colombian Constitutional Court has warned that 35 of those groups are in danger of dying out. The Permanent Assembly for the Defense of Life and Territorial Control has stated that the armed conflict "is not only part of one or two areas, it is a problem of all the indigenous people."
FARC–EP was the largest and oldest insurgent group in the Americas. According to the Colombian government, FARC–EP had an estimated 6,000–8,000 members in 2008, down from 16,000 in 2001, having lost much of its fighting force since President Álvaro Uribe took office in 2002. One political analyst and former guerrilla estimated that FARC's numbers had been reduced to around 11,000 from their 18,000 peak, but cautioned against considering the group a defeated force. In 2007, FARC–EP Commander Raúl Reyes claimed a force of 18,000 guerrillas.
According to a report from Human Rights Watch in 2006, approximately 10–15% of the recruits were minors, some of whom were forced to join the FARC, while women comprised around 40 percent of the guerrilla army.
FARC was organized hierarchically into military units.
The FARC–EP secretariat was led by Alfonso Cano and six others after the death of Manuel Marulanda (Pedro Antonio Marín), also known as "Tirofijo", or Sureshot, in 2008. The "international spokesman" of the organization was Raúl Reyes, who was killed in a Colombian army raid against a guerrilla camp in Ecuador on 1 March 2008. Cano was killed in a military operation on 4 November 2011.
FARC–EP was open to a negotiated solution to the nation's conflict through dialogue with a flexible government that agreed to certain conditions, such as the demilitarization of certain areas, cessation of paramilitary and government violence against rural peasants, social reforms to reduce poverty and inequality, and the release of all jailed (and extradited) FARC–EP rebels. It said that until these conditions surfaced, the armed revolutionary struggle would remain necessary to fight against Colombia's elites. The FARC–EP said it would continue its armed struggle because it perceived the Colombian government as an enemy because of historical politically motivated violence against its members and supporters, including members of the Patriotic Union, a FARC–EP-created political party.
The largest concentrations of FARC–EP guerrillas were located in Colombia's southeastern jungles and in the plains at the base of the Andean mountains. However, the FARC and the ELN lost control of much of their territory, especially in urban areas, forcing them to relocate to remote areas in the jungle and the mountains.
Relations between the FARC-EP and local populations vary greatly depending on the history and specific characteristics of each region. In rural areas where the guerrillas have maintained a continuous presence for several decades, there are often organic links between the FARC and peasant communities. Such ties include shared generational membership and historical struggles dating back to the period of "La Violencia". These areas have traditionally been located in the departments of Caquetá, Meta, Guaviare and Putumayo, and – to a lesser extent – portions of Huila, Tolima and Nariño. Within remote locations under FARC control and where the national government is generally absent, the group can function as a revolutionary vanguard and institutes its "de facto" rule of law by carrying out activities that aim to combat corruption and reduce small-scale crime.
The FARC had also been able to provide limited social services in these regions, such as health care and education, including building minor infrastructure works in the form of rural roads. Peasants who have grown up in areas under historical FARC control may become accustomed to accepting them as the local authority. The guerrillas also attempt to keep the peace between peasants and drug traffickers in addition to regulating other aspects of daily life and economics.
In other rural regions of the country, where a FARC presence had been established only within the last twenty years of the conflict and primarily remained military in nature, there was often a level of distrust between FARC rebels and the local peasant communities, which lacked historical ties to the group. Civilians in these locations also tended to get caught in the middle of the conflict between FARC and its government or paramilitary opponents. In the populated urban areas where the Colombian state maintained a solid historical presence, some FARC sympathies may have existed in the poorest neighborhoods and among certain progressive sectors of the middle class, but most city inhabitants tended to view the guerrillas as one of Colombia's main problems.
By the end of 2010, FARC-EP influence was significantly reduced in the regions where it had only carried out a recent military-focused expansion during the 1980s and 1990s, in part due to the failure to establish close social ties with local populations. Government offensives eradicated much of the visible guerrilla presence in northern and central Colombia as well as in Guainía, Vaupés and Amazonas, limiting FARC to clandestine operations. Similar military setbacks and retreats occurred even within its traditional strongholds, forcing the FARC to move towards the most remote areas, but there the guerrillas did appear to maintain popular support among the peasants that had developed organic links to the insurgency.
The FARC dissidents are former members of the Revolutionary Armed Forces of Colombia who refused to lay down their arms after the FARC–government peace treaty came into effect in 2016. The dissidents number some 1,200 armed combatants, with an unknown number of civilian militia supporting them. The FARC dissidents have become "an increasing headache" for the Colombian armed forces, which have to fight them, the EPL, the ELN and the Clan del Golfo at the same time. FARC dissidents are led by former mid-level commanders such as alias Gentil Duarte, alias Euclides Mora, alias John 40, alias Giovanny Chuspas and alias Julián Chollo. The FARC dissidents have been responsible for several attacks on the Colombian armed forces. Dissidents of FARC's 1st Front are located in the eastern plains of Colombia. John 40 and his dissident 43rd Front moved into the Amazonas state of western Venezuela. Venezuela has served as the primary location for many FARC dissidents.
On 15 July 2018, the Colombian and Peruvian governments launched a joint military effort known as Operation Armageddon to combat FARC dissidents. Peru issued a 60-day state of emergency in the Putumayo Province, an area bordering both Colombia and Ecuador. On the first day alone, more than 50 individuals were arrested in the operation, with the majority being Colombian nationals, while four cocaine labs were dismantled.
The FARC were a violent non-state actor (VNSA) whose formal recognition as a legitimate belligerent force is disputed by some organizations. The FARC has been classified as a terrorist organization by the governments of Colombia (since 1997), the United States, Canada, Chile (since 2010), New Zealand, Venezuela (the Guaidó-led government, since 2019) and, until 2016, the European Union; the governments of Venezuela (the Maduro-led government), Brazil, Argentina, Ecuador, and Nicaragua do not classify it as such. In 2008, Venezuelan President Hugo Chávez recognized the FARC–EP as a proper army. Chávez also asked the Colombian government and its allies to recognize the FARC as a belligerent force, arguing that such political recognition would oblige the FARC to forgo kidnapping and terrorism as methods of civil war and to abide by the Geneva Convention. Juan Manuel Santos followed a middle path by recognizing in 2011 that there was an "armed conflict" in Colombia, although his predecessor, Álvaro Uribe, strongly disagreed.
Rotary engine
The rotary engine was an early type of internal combustion engine, usually designed with an odd number of cylinders per row in a radial configuration, in which the crankshaft remained stationary in operation, with the entire crankcase and its attached cylinders rotating around it as a unit. Its main application was in aviation, although it also saw use in a few early motorcycles and automobiles before its primary aviation role.
This type of engine was widely used as an alternative to conventional inline engines (straight or V) during World War I and the years immediately preceding that conflict. It has been described as "a very efficient solution to the problems of power output, weight, and reliability".
By the early 1920s, the inherent limitations of this type of engine had rendered it obsolete.
A rotary engine is essentially a standard Otto cycle engine, with cylinders arranged radially around a central crankshaft just like a conventional radial engine, but instead of having a fixed cylinder block with rotating crankshaft as with a radial engine, the crankshaft remains stationary and the entire cylinder block rotates around it. In the most common form, the crankshaft was fixed solidly to the airframe, and the propeller was simply bolted to the front of the crankcase.
This difference also has a major impact on the engine's design (lubrication, ignition, fuel admission, cooling, etc.) and functioning (see below).
The Musée de l'Air et de l'Espace in Paris has on display a special, "sectioned" working model of an engine with seven radially disposed cylinders. It alternates between rotary and radial modes to demonstrate the difference between the internal motions of the two types of engine.
Like "fixed" radial engines, rotaries were generally built with an odd number of cylinders (usually 5, 7 or 9), so that a consistent every-other-piston firing order could be maintained, to provide smooth running. Rotary engines with an even number of cylinders were mostly of the "two row" type.
Most rotary engines were arranged with the cylinders pointing outwards from a single crankshaft, in the same general form as a radial, but there were also rotary boxer engines and even one-cylinder rotaries.
Three key factors contributed to the rotary engine's success at the time:
Engine designers had always been aware of the many limitations of the rotary engine, so when static-style engines became more reliable and offered better specific weights and fuel consumption, the rotary engine's days were numbered.
The late-WWI Bentley BR2, the largest and most powerful rotary engine of its era, marked a point beyond which this type of engine could not be developed further, and it was the last of its kind to be adopted into RAF service.
It is often asserted that rotary engines had no throttle and hence power could only be reduced by intermittently cutting the ignition using a "blip" switch. This was almost literally true of the "Monosoupape" (single valve) type, which took most of the air into the cylinder through the exhaust valve, which remained open for a portion of the downstroke of the piston. Thus the richness of the mixture in the cylinder could not be controlled via the crankcase intake. The "throttle" (fuel valve) of a monosoupape provided only a very limited degree of speed regulation, as opening it made the mixture too rich, while closing it made it too lean (in either case quickly stalling the engine, or damaging the cylinders). Early models featured a pioneering form of variable valve timing in an attempt to give greater control, but this caused the valves to burn and therefore it was abandoned.
The only way of running a Monosoupape engine smoothly at reduced revs was with a switch that changed the normal firing sequence so that each cylinder fired only once per two or three engine revolutions, leaving the engine more or less in balance. As with excessive use of the "blip" switch, running the engine on such a setting for too long resulted in large quantities of unburned fuel and oil passing into the exhaust and gathering in the lower cowling, where it was a notorious fire hazard.
Most rotaries had normal inlet valves, so that the fuel (and lubricating oil) was taken into the cylinders already mixed with air, as in a normal four-stroke engine. Although a conventional carburetor, with the ability to keep the fuel/air ratio constant over a range of throttle openings, was precluded by the spinning crankcase, it was possible to adjust the air supply through a separate flap valve or "bloctube". The pilot needed to set the throttle to the desired setting (usually full open) and then adjust the fuel/air mixture to suit using a separate "fine adjustment" lever that controlled the air supply valve (in the manner of a manual choke control). Due to the rotary engine's large rotational inertia, it was possible to adjust the fuel/air mixture by trial and error without stalling the engine, although this varied between different types, and in any case it took a good deal of practice to acquire the necessary knack. After starting the engine with a known setting that allowed it to idle, the air valve was opened until maximum engine speed was obtained.
Throttling a running engine back to reduce revs was possible by closing off the fuel valve to the required position while re-adjusting the fuel/air mixture to suit. This process was also tricky, so that reducing power, especially when landing, was often accomplished instead by intermittently cutting the ignition using the blip switch.
Cutting cylinders using ignition switches had the drawback of letting fuel continue to pass through the engine, oiling up the spark plugs and making smooth restarting problematic. Also, the raw oil-fuel mix could collect in the cowling. As this could cause a serious fire when the switch was released, it became common practice for part or all of the bottom of the basically circular cowling on most rotary engines to be cut away, or fitted with drainage slots.
By 1918 a Clerget handbook advised maintaining all necessary control by using the fuel and air controls, and starting and stopping the engine by turning the fuel on and off. The recommended landing procedure involved shutting off the fuel using the fuel lever, while leaving the blip switch on. The windmilling propeller kept the engine spinning without delivering any power as the aircraft descended. It was important to leave the ignition on so the spark plugs would continue to spark and stay free of oil, allowing the engine (if all went well) to be restarted simply by re-opening the fuel valve. Pilots were advised not to use the ignition cut-out switch, as it would eventually damage the engine.
Pilots of surviving or reproduction aircraft fitted with rotary engines still find that the blip switch is useful while landing, as it provides a more reliable, quicker way to initiate power if needed, rather than risk a sudden engine stall, or the failure of a windmilling engine to restart at the worst possible moment.
Félix Millet showed a 5-cylinder rotary engine built into a bicycle wheel at the Exposition Universelle in Paris in 1889. Millet had patented the engine in 1888, so he must be considered the pioneer of the internal combustion rotary engine. A machine powered by his engine took part in the Paris–Bordeaux–Paris race of 1895, and the system was put into production by Darracq and Company London in 1900.
Lawrence Hargrave first developed a rotary engine in 1889 using compressed air, intending to use it in powered flight. The weight of materials and the lack of quality machining prevented it from becoming an effective power unit.
Stephen M. Balzer of New York, a former watchmaker, constructed rotary engines in the 1890s. He was interested in the rotary layout for two main reasons:
Balzer produced a 3-cylinder rotary-engined car in 1894, and later became involved in Langley's "Aerodrome" attempts, which bankrupted him as he tried to make much larger versions of his engines. Balzer's rotary engine was later converted to static radial operation by Langley's assistant, Charles M. Manly, creating the notable Manly-Balzer engine.
The famous De Dion-Bouton company produced an experimental 4-cylinder rotary engine in 1899. Though intended for aviation use, it was not fitted to any aircraft.
The Adams-Farwell firm was another early American automaker to use rotary engines expressly manufactured for automotive use. Its first rolling prototypes of 1898 used 3-cylinder rotary engines designed by Fay Oliver Farwell, leading in 1906 to production Adams-Farwell cars powered first by 3-cylinder and, very shortly thereafter, 5-cylinder rotary engines. Emil Berliner sponsored the development of the 5-cylinder Adams-Farwell rotary engine concept as a lightweight power unit for his unsuccessful helicopter experiments, and Adams-Farwell engines later powered fixed-wing aircraft in the US after 1910. It has also been asserted that the Gnome design was derived from the Adams-Farwell, since an Adams-Farwell car is reported to have been demonstrated to the French Army in 1904. In contrast to the later Gnome engines, and much like the later Clerget 9B and Bentley BR1 aviation rotaries, the Adams-Farwell rotaries had conventional exhaust and inlet valves mounted in the cylinder heads.
The Gnome engine was the work of the three Seguin brothers, Louis, Laurent and Augustin. They were talented engineers and the grandsons of famous French engineer Marc Seguin. In 1906 the eldest brother, Louis, had formed the Société des Moteurs Gnome to build stationary engines for industrial use, having licensed production of the Gnom single-cylinder stationary engine from Motorenfabrik Oberursel—who, in turn, built licensed Gnome engines for German aircraft during World War I.
Louis was joined by his brother Laurent, who designed a rotary engine specifically for aircraft use, using Gnom engine cylinders. The brothers' first experimental engine is said to have been a 5-cylinder model, built as a radial rather than a rotary engine, but no photographs of it survive. The Seguin brothers then turned to rotary engines in the interests of better cooling, and the world's first production rotary engine, the 7-cylinder, air-cooled "Omega", was shown at the 1908 Paris automobile show. The first Gnome Omega built still exists, and is now in the collection of the Smithsonian's National Air and Space Museum. The Seguins used the highest-strength material available, recently developed nickel steel alloy, and kept the weight down by machining components from solid metal, using the best American and German machine tools to create the engine's components; the cylinder wall of a 50 hp Gnome was only 1.5 mm (0.059 in) thick, while the connecting rods were milled with deep central channels to reduce weight. While somewhat low-powered in terms of power per litre, its power-to-weight ratio was outstanding.
The following year, 1909, the inventor Roger Ravaud fitted one to his "Aéroscaphe", a combination hydrofoil/aircraft, which he entered in the motor boat and aviation contests at Monaco. Henry Farman's use of the Gnome at the famous Reims aircraft meet that year brought it to prominence, when he won the Grand Prix for the greatest non-stop distance flown and also set a world record for endurance flight. The very first successful seaplane flight, of Henri Fabre's "Le Canard", was powered by a Gnome Omega on March 28, 1910, near Marseille.
Production of Gnome rotaries increased rapidly, with some 4,000 being produced before World War I, and Gnome also produced a two-row version (the 100 hp Double Omega), the larger 80 hp Gnome Lambda and the 160 hp two-row Double Lambda. By the standards of other engines of the period, the Gnome was considered not particularly temperamental, and it was credited as the first engine able to run for ten hours between overhauls.
In 1913 the Seguin brothers introduced the new Monosoupape ("single valve") series, which replaced the inlet valves in the pistons with a single valve in each cylinder head that doubled as inlet and exhaust valve. The engine speed was controlled by varying the opening time and extent of the exhaust valves using levers acting on the valve tappet rollers, a system later abandoned because it burned the valves. The weight of the Monosoupape was slightly less than that of the earlier two-valve engines, and it used less lubricating oil. The 100 hp Monosoupape was built with 9 cylinders and developed its rated power at 1,200 rpm. The later 160 hp nine-cylinder Gnome 9N rotary used the Monosoupape valve design while adding the safety factor of a dual ignition system, and was the last known rotary engine design to use such a cylinder head valving format.
Rotary engines produced by the Clerget and Le Rhône companies used conventional pushrod-operated valves in the cylinder head, but used the same principle of drawing the fuel mixture through the crankshaft, with the Le Rhônes having prominent copper intake tubes running from the crankcase to the top of each cylinder to admit the intake charge.
The 80 hp (60 kW) seven-cylinder Gnome Lambda was the standard at the outbreak of World War I, and it quickly found itself being used in a large number of aircraft designs. It was so good that it was licensed by a number of companies, including the German Motorenfabrik Oberursel firm, which had designed the original Gnom engine. Oberursel was later purchased by Fokker, whose 80 hp Gnome Lambda copy was known as the Oberursel U.0. It was not at all uncommon for French Gnome Lambdas, as used in the earliest examples of the Bristol Scout biplane, to meet German versions powering Fokker E.I Eindeckers in combat, from the latter half of 1915 on.
The only attempts to produce twin-row rotary engines in any volume were undertaken by Gnome, with their Double Lambda fourteen-cylinder 160 hp design, and with the German Oberursel firm's early World War I clone of the Double Lambda design, the U.III of the same power rating. While an example of the Double Lambda went on to power one of the Deperdussin Monocoque racing aircraft to a world-record speed of nearly 204 km/h (126 mph) in September 1913, the Oberursel U.III is only known to have been fitted into a few German production military aircraft, the Fokker E.IV fighter monoplane and Fokker D.III fighter biplane, both of whose failures to become successful combat types were partially due to the poor quality of the German powerplant, which was prone to wearing out after only a few hours of combat flight.
The favourable power-to-weight ratio of the rotaries was their greatest advantage. While larger, heavier aircraft relied almost exclusively on conventional in-line engines, many fighter aircraft designers preferred rotaries right up to the end of the war.
Rotaries had a number of disadvantages, notably very high fuel consumption, partially because the engine was typically run at full throttle, and also because the valve timing was often less than ideal. Oil consumption was also very high. Due to primitive carburetion and absence of a true sump, the lubricating oil was added to the fuel/air mixture. This made engine fumes heavy with smoke from partially burnt oil. Castor oil was the lubricant of choice, as its lubrication properties were unaffected by the presence of the fuel, and its gum-forming tendency was irrelevant in a total-loss lubrication system. An unfortunate side-effect was that World War I pilots inhaled and swallowed a considerable amount of the oil during flight, leading to persistent diarrhoea. Flying clothing worn by rotary engine pilots was routinely soaked with oil.
The rotating mass of the engine also made it, in effect, a large gyroscope. During level flight the effect was not especially apparent, but when turning, the gyroscopic precession became noticeable. Due to the direction of the engine's rotation, left turns required effort and happened relatively slowly, combined with a tendency to nose up, while right turns were almost instantaneous, with a tendency for the nose to drop. In some aircraft this could be advantageous in situations such as dogfights. The Sopwith Camel suffered to such an extent that it required left rudder for both left and right turns, and could be extremely hazardous if the pilot applied full power at the top of a loop at low airspeeds. Trainee Camel pilots were warned to attempt their first hard right turns only at a safe altitude. The Camel's most famous German foe, the Fokker Dr.I triplane, also used a rotary engine, usually the Oberursel Ur.II clone of the French-built Le Rhône 9J 110 hp powerplant.
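The scale of this gyroscopic coupling can be estimated from first principles. The short Python sketch below is purely illustrative: the rotating mass, radius of gyration, engine speed, and yaw rate are assumed round numbers, not figures documented for the Camel's engine or any other type. It applies the standard relation that a rotor with spin angular momentum L, yawed at a rate omega_yaw, produces a pitching moment of magnitude L * omega_yaw.

import math

# Illustrative estimate only: every value below is an assumption,
# not measured data for any particular rotary engine.
engine_mass_kg = 150.0        # assumed rotating mass (crankcase + cylinders)
radius_of_gyration_m = 0.45   # assumed effective radius of that mass
spin_rpm = 1200.0             # typical full-throttle speed cited in the text

I_spin = engine_mass_kg * radius_of_gyration_m ** 2  # moment of inertia, kg*m^2
omega_spin = spin_rpm * 2.0 * math.pi / 60.0         # spin rate, rad/s
L = I_spin * omega_spin                              # angular momentum, kg*m^2/s

# Yawing the aircraft swings the spin axis, so the engine answers with a
# pitching moment of magnitude L * omega_yaw (for perpendicular axes).
omega_yaw = math.radians(20.0)                       # assumed yaw rate, rad/s
M_pitch = L * omega_yaw                              # gyroscopic moment, N*m

print(f"spin angular momentum: {L:.0f} kg m^2/s")
print(f"pitching moment at 20 deg/s yaw: {M_pitch:.0f} N m")

With these assumed numbers the moment comes out on the order of a thousand newton-metres, consistent with the handling quirks described above, though the exact value for any real aircraft depends on the actual engine and flight condition.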
Even before the First World War, attempts were made to overcome the inertia problem of rotary engines. As early as 1906 Charles Benjamin Redrup had demonstrated to the Royal Flying Corps at Hendon a 'Reactionless' engine in which the crankshaft rotated in one direction and the cylinder block in the opposite direction, each one driving a propeller. A later development of this was the 1914 reactionless 'Hart' engine designed by Redrup in which there was only one propeller connected to the crankshaft, but it rotated in the opposite direction to the cylinder block, thereby largely cancelling out negative effects. This proved too complicated for reliable operation and Redrup changed the design to a static radial engine, which was later tried in the experimental Vickers F.B.12b and F.B.16 aircraft, unfortunately without success.
As the war progressed, aircraft designers demanded ever-increasing amounts of power. Inline engines were able to meet this demand by raising their upper rev limits, which meant more power. Improvements in valve timing, ignition systems, and lightweight materials made these higher revs possible, and by the end of the war average engine speeds had risen from 1,200 rpm to 2,000 rpm. The rotary could not follow suit, due to the drag of the rotating cylinders through the air. For instance, if an early-war model running at 1,200 rpm raised its revs to only 1,400, the drag on the cylinders increased by 36%, since air drag increases with the square of velocity. At lower rpm this drag could simply be ignored, but as the rev count rose, the rotary put more and more of its power into spinning the engine itself, with less remaining to provide useful thrust through the propeller.
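The 36% figure follows directly from the square law, and a cube law applies to the power consumed in overcoming that drag, since windage power is drag times velocity. The following minimal Python check (mine, not from the article's sources) confirms both numbers:

def drag_increase(rpm_old: float, rpm_new: float) -> float:
    """Fractional increase in aerodynamic drag, assuming drag scales as v^2."""
    return (rpm_new / rpm_old) ** 2 - 1.0

def windage_power_increase(rpm_old: float, rpm_new: float) -> float:
    """Fractional increase in windage power, assuming power scales as v^3."""
    return (rpm_new / rpm_old) ** 3 - 1.0

print(f"drag increase, 1,200 -> 1,400 rpm: {drag_increase(1200, 1400):.0%}")           # ~36%
print(f"windage power increase:            {windage_power_increase(1200, 1400):.0%}")  # ~59%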
One clever attempt to rescue the design, in a similar manner to Redrup's British "reactionless" engine concept, was made by Siemens AG. The crankcase (with the propeller still fastened directly to the front of it) and cylinders spun counterclockwise at 900 rpm, as seen externally from a "nose on" viewpoint, while the crankshaft (which, unlike in other designs, never "emerged" from the crankcase) and other internal parts spun clockwise at the same speed, so the set was effectively running at 1,800 rpm. This was achieved by the use of bevel gearing at the rear of the crankcase, resulting in the eleven-cylinder Siemens-Halske Sh.III, with less drag and less net torque. Used on several late-war types, notably the Siemens-Schuckert D.IV fighter, the new engine's low running speed, coupled with large, coarse-pitched propellers that sometimes had four blades (as on the SSW D.IV), gave the types it powered outstanding rates of climb, with some examples of the late-production Sh.IIIa powerplant even said to be delivering as much as 240 hp.
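The benefit of the Siemens arrangement is easy to quantify under the same square-law drag assumption used above; the arithmetic below is an illustration of the principle, not data from the Sh.III's documentation. Because the cylinders move through the air at only half the engine's effective internal speed, windage drag is roughly a quarter of what a conventional rotary turning at the full 1,800 rpm would suffer.

case_rpm = 900.0              # cylinder block speed relative to the airframe
effective_rpm = 2 * case_rpm  # crankshaft-to-block relative speed: 1,800 rpm

# Windage comparison against a hypothetical conventional rotary whose whole
# engine would spin at the full effective speed (drag assumed ~ v^2):
drag_ratio = (case_rpm / effective_rpm) ** 2
print(f"effective engine speed: {effective_rpm:.0f} rpm")
print(f"windage drag vs. a conventional {effective_rpm:.0f} rpm rotary: {drag_ratio:.0%}")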
One new rotary powered aircraft, Fokker's own D.VIII, was designed at least in part to provide some use for the Oberursel factory's backlog of otherwise redundant Ur.II engines, themselves clones of the Le Rhône 9J rotary.
Because of the Allied blockade of shipping, the Germans were increasingly unable to obtain the castor oil necessary to properly lubricate their rotary engines. Substitutes were never entirely satisfactory, causing increased running temperatures and reduced engine life.
By the time the war ended, the rotary engine had become obsolete, and it disappeared from use quite quickly. The British Royal Air Force probably used rotary engines for longer than most other operators. The RAF's standard post-war fighter, the Sopwith Snipe, used the Bentley BR2, the most powerful rotary engine ever built by the Allies of World War I. The standard RAF training aircraft of the early post-war years, the 1914-origin Avro 504K, had a universal mounting to allow the use of several different types of low-powered rotary, of which there was a large surplus supply. Similarly, the Swedish FVM Ö1 Tummelisa advanced training aircraft, fitted with a Le Rhône-Thulin rotary engine, served until the mid-thirties.
Designers had to balance the cheapness of war-surplus engines against their poor fuel efficiency and the operating expense of their total-loss lubrication system, and by the mid-1920s, rotaries had been more or less completely displaced even in British service, largely by the new generation of air-cooled "stationary" radials such as the Armstrong Siddeley Jaguar and Bristol Jupiter.
Experiments with the concept of the rotary engine continued.
The first version of the 1921 Michel engine, an unusual opposed-piston cam engine, used the principle of a rotary engine, in that its "cylinder block" rotated. This was soon replaced by a version with the same cylinders and cam, but with stationary cylinders and the cam track rotating in lieu of a crankshaft. A later version abandoned the cam altogether and used three coupled crankshafts.
By 1930 the Soviet helicopter pioneers, Boris N. Yuriev and Alexei M. Cheremukhin, both employed by "Tsentralniy Aerogidrodinamicheskiy Institut" (TsAGI, the Central Aerohydrodynamic Institute), constructed one of the first practical single-lift rotor machines with their TsAGI 1-EA single rotor helicopter, powered by two Soviet-designed and built M-2 rotary engines, themselves up-rated copies of the Gnome Monosoupape rotary engine of World War I. The TsAGI 1-EA set an unofficial altitude record of 605 meters (1,985 ft) with Cheremukhin piloting it on 14 August 1932 on the power of its twinned M-2 rotary engines.
Although rotary engines were mostly used in aircraft, a few cars and motorcycles were built with them. Perhaps the first was the Millet motorcycle of 1892. A famous motorcycle, winning many races, was the early-1920s German Megola, which had a five-cylinder rotary engine inside the front wheel. Another motorcycle with a rotary engine was Charles Redrup's 1912 Redrup Radial, a three-cylinder 303 cc engine fitted to a number of motorcycles by Redrup.
In 1904 the Barry engine, also designed by Redrup, was built in Wales: a rotating 2-cylinder boxer engine weighing 6.5 kg was mounted inside a motorcycle frame.
In the 1940s Cyril Pullin developed the Powerwheel, a wheel with a rotating one-cylinder engine, clutch and drum brake inside the hub, but it never entered production.
Besides the configuration of cylinders moving around a fixed crankshaft, several different engine designs are also called "rotary engines". The most notable pistonless rotary engine, the Wankel rotary engine, has been used by NSU in the Ro80 car, by Mazda in a variety of cars such as the RX series, and in some experimental aviation applications.
In the late 1970s a concept engine called the Bricklin-Turner Rotary Vee was tested. The Rotary Vee is similar in configuration to the elbow steam engine. Piston pairs are connected as solid V-shaped members, with each end floating in one of a pair of rotating cylinder clusters. The rotating cylinder cluster pairs are set with their axes at a wide V angle. The pistons in each cylinder cluster move parallel to each other instead of in a radial direction. The Rotary Vee was intended to power the Bricklin SV-1, but the design never went into production. | https://en.wikipedia.org/wiki?curid=26103
Rudolf Steiner
Rudolf Joseph Lorenz Steiner (27 (or 25) February 1861 – 30 March 1925) was an Austrian philosopher, social reformer, architect, esotericist, and claimed clairvoyant. Steiner gained initial recognition at the end of the nineteenth century as a literary critic and published philosophical works including "The Philosophy of Freedom". At the beginning of the twentieth century he founded an esoteric spiritual movement, anthroposophy, with roots in German idealist philosophy and theosophy; other influences include Goethean science and Rosicrucianism.
In the first, more philosophically oriented phase of this movement, Steiner attempted to find a synthesis between science and spirituality. His philosophical work of these years, which he termed "spiritual science", sought to apply the clarity of thinking characteristic of Western philosophy to spiritual questions, differentiating this approach from what he considered to be vaguer approaches to mysticism. In a second phase, beginning around 1907, he began working collaboratively in a variety of artistic media, including drama, the movement arts (developing a new artistic form, eurythmy) and architecture, culminating in the building of the Goetheanum, a cultural centre to house all the arts. In the third phase of his work, beginning after World War I, Steiner worked to establish various practical endeavors, including Waldorf education, biodynamic agriculture, and anthroposophical medicine.
Steiner advocated a form of ethical individualism, to which he later brought a more explicitly spiritual approach. He based his epistemology on Johann Wolfgang Goethe's world view, in which "Thinking… is no more and no less an organ of perception than the eye or ear. Just as the eye perceives colours and the ear sounds, so thinking perceives ideas." A consistent thread that runs from his earliest philosophical phase through his later spiritual orientation is the goal of demonstrating that there are no essential limits to human knowledge.
Steiner's father, Johann(es) Steiner (1829–1910), left a position as a gamekeeper in the service of Count Hoyos in Geras, northeast Lower Austria, to marry one of the Hoyos family's housemaids, Franziska Blie (1834, Horn – 1918, Horn), a marriage for which the Count had refused his permission. Johann became a telegraph operator on the Southern Austrian Railway, and at the time of Rudolf's birth was stationed in Murakirály (Kraljevec) in the "Muraköz" region of the Kingdom of Hungary, Austrian Empire (present-day Donji Kraljevec in the Međimurje region of northernmost Croatia). In the first two years of Rudolf's life, the family moved twice: first to Mödling, near Vienna, and then, on his father's promotion to stationmaster, to a village in the foothills of the eastern Austrian Alps in Lower Austria.
Steiner entered the village school, but following a disagreement between his father and the schoolmaster, he was briefly educated at home. In 1869, when Steiner was eight years old, the family moved to another village, and in October 1872 Steiner proceeded from the village school there to the realschule in Wiener Neustadt.
In 1879, the family moved to Inzersdorf to enable Steiner to attend the Vienna Institute of Technology. There, on an academic scholarship from 1879 to 1883, he enrolled in courses in mathematics, physics, chemistry, botany, zoology, and mineralogy and audited courses in literature and philosophy, at the end of which time he withdrew from the Institute without graduating. In 1882, one of Steiner's teachers, Karl Julius Schröer, suggested Steiner's name to Joseph Kürschner, chief editor of a new edition of Goethe's works, who asked Steiner to become the edition's natural science editor, a truly astonishing opportunity for a young student without any academic credentials or previous publications.
Before attending the Vienna Institute of Technology, Steiner had studied Kant, Fichte and Schelling.
When he was nine years old, Steiner believed that he saw the spirit of an aunt who had died in a far-off town asking him to help her at a time when neither he nor his family knew of the woman's death. Steiner later related that as a child he felt "that one must carry the knowledge of the spiritual world within oneself after the fashion of geometry ... [for here] one is permitted to know something which the mind alone, through its own power, experiences. In this feeling I found the justification for the spiritual world that I experienced ... I confirmed for myself by means of geometry the feeling that I must speak of a world 'which is not seen'."
Steiner believed that at the age of 15 he had gained a complete understanding of the concept of time, which he considered to be the precondition of spiritual clairvoyance. At 21, on the train between his home village and Vienna, Steiner met an herb gatherer, Felix Kogutzki, who spoke about the spiritual world "as one who had his own experience therein". Kogutzki conveyed to Steiner a knowledge of nature that was non-academic and spiritual.
In 1888, as a result of his work for the Kürschner edition of Goethe's works, Steiner was invited to work as an editor at the Goethe archives in Weimar. Steiner remained with the archive until 1896. As well as the introductions for and commentaries to four volumes of Goethe's scientific writings, Steiner wrote two books about Goethe's philosophy: "The Theory of Knowledge Implicit in Goethe's World-Conception" (1886), which Steiner regarded as the epistemological foundation and justification for his later work, and "Goethe's Conception of the World" (1897). During this time he also collaborated in complete editions of the works of Arthur Schopenhauer and the writer Jean Paul and wrote numerous articles for various journals.
In 1891, Steiner received a doctorate in philosophy at the University of Rostock for his dissertation discussing Fichte's concept of the ego, submitted to a professor whose "Seven Books of Platonism" Steiner esteemed. Steiner's dissertation was later published in expanded form as "Truth and Knowledge: Prelude to a Philosophy of Freedom", with a dedication to Eduard von Hartmann. Two years later, he published "Die Philosophie der Freiheit" ("The Philosophy of Freedom" or "The Philosophy of Spiritual Activity", Steiner's preferred English title) (1894), an exploration of epistemology and ethics that suggested a way for humans to become spiritually free beings. Steiner later spoke of this book as containing implicitly, in philosophical form, the entire content of what he later developed explicitly as anthroposophy.
In 1896, Steiner declined an offer from Elisabeth Förster-Nietzsche to help organize the Nietzsche archive in Naumburg. Her brother by that time was "non compos mentis". Förster-Nietzsche introduced Steiner into the presence of the catatonic philosopher; Steiner, deeply moved, subsequently wrote the book "Friedrich Nietzsche, Fighter for Freedom". Steiner later related that:
My first acquaintance with Nietzsche's writings belongs to the year 1889. Previous to that I had never read a line of his. Upon the substance of my ideas as these find expression in "The Philosophy of Spiritual Activity", Nietzsche's thought had not the least influence...Nietzsche's ideas of the 'eternal recurrence' and of 'Übermensch' remained long in my mind. For in these was reflected that which a personality must feel concerning the evolution and essential being of humanity when this personality is kept back from grasping the spiritual world by the restricted thought in the philosophy of nature characterizing the end of the 19th century...What attracted me particularly was that one could read Nietzsche without coming upon anything which strove to make the reader a 'dependent' of Nietzsche's.
In 1897, Steiner left the Weimar archives and moved to Berlin. He became part owner of, chief editor of, and an active contributor to the literary journal "Magazin für Literatur", where he hoped to find a readership sympathetic to his philosophy. Many subscribers were alienated by Steiner's unpopular support of Émile Zola in the Dreyfus Affair and the journal lost more subscribers when Steiner published extracts from his correspondence with anarchist John Henry Mackay. Dissatisfaction with his editorial style eventually led to his departure from the magazine.
In 1899, Steiner married Anna Eunicke; the couple separated several years later. Anna died in 1911.
In 1899, Steiner published an article, "Goethe's Secret Revelation", discussing the esoteric nature of Goethe's fairy tale "The Green Snake and the Beautiful Lily". This article led to an invitation by the Count and Countess Brockdorff to speak to a gathering of Theosophists on the subject of Nietzsche. Steiner continued speaking regularly to the members of the Theosophical Society, becoming the head of its newly constituted German section in 1902 without ever formally joining the society. It was also in connection with this society that Steiner met and worked with Marie von Sivers, who became his second wife in 1914. By 1904, Steiner was appointed by Annie Besant to be leader of the Theosophical "Esoteric Society" for Germany and Austria. In 1904, Eliza, the wife of Helmuth von Moltke the Younger, became one of his favourite scholars. Through Eliza, Steiner met Helmuth, who served as the Chief of the German General Staff from 1906 to 1914.
In contrast to mainstream Theosophy, Steiner sought to build a Western approach to spirituality based on the philosophical and mystical traditions of European culture. The German Section of the Theosophical Society grew rapidly under Steiner's leadership as he lectured throughout much of Europe on his spiritual science. During this period, Steiner maintained an original approach, replacing Madame Blavatsky's terminology with his own, and basing his spiritual research and teachings upon the Western esoteric and philosophical tradition. This and other differences, in particular Steiner's vocal rejection of Leadbeater and Besant's claim that Jiddu Krishnamurti was the vehicle of a new "Maitreya", or world teacher, led to a formal split in 1912/13, when Steiner and the majority of members of the German section of the Theosophical Society broke off to form a new group, the Anthroposophical Society. Steiner took the name "Anthroposophy" from the title of a work of the Austrian philosopher Robert von Zimmermann, published in Vienna in 1856. Despite his departure from the Theosophical Society, Steiner maintained his interest in Theosophy throughout his life.
The Anthroposophical Society grew rapidly. Fueled by the need to find an artistic home for their yearly conferences, which included performances of plays written by Edouard Schuré and Steiner, the decision was made to build a theater and organizational center. In 1913, construction began on the first Goetheanum building, in Dornach, Switzerland. The building, designed by Steiner, was built in significant part by volunteers who offered craftsmanship or simply a will to learn new skills. Once World War I started in 1914, the Goetheanum volunteers could hear the sound of cannon fire beyond the Swiss border, but despite the war, people from all over Europe worked peaceably side by side on the building's construction. Steiner moved from Berlin to Dornach in 1913 and lived there to the end of his life.
Steiner's lecture activity expanded enormously with the end of the war. Most importantly, from 1919 on Steiner began to work with other members of the society to found numerous practical institutions and activities, including the first Waldorf school, founded that year in Stuttgart, Germany. At the same time, the Goetheanum developed as a wide-ranging cultural centre. On New Year's Eve 1922/1923, the building burned to the ground; contemporary police reports indicate arson as the probable cause. Steiner immediately began work designing a second Goetheanum building, this time made of concrete instead of wood, which was completed in 1928, three years after his death.
At a "Foundation Meeting" for members held at the Dornach center during Christmas, 1923, Steiner spoke of laying a new for the society in the hearts of his listeners. At the meeting, a new "General Anthroposophical Society" was established with a new executive board. At this meeting, Steiner also founded a School of Spiritual Science, intended as an "organ of initiative" for research and study and as "the 'soul' of the Anthroposophical Society". This School, which was led by Steiner, initially had sections for general anthroposophy, education, medicine, performing arts (eurythmy, speech, drama and music), the literary arts and humanities, mathematics, astronomy, science, and visual arts. Later sections were added for the social sciences, youth and agriculture. The School of Spiritual Science included meditative exercises given by Steiner.
Steiner became a well-known and controversial public figure during and after World War I. In response to the catastrophic situation in post-war Germany, he proposed extensive social reforms through the establishment of a Threefold Social Order in which the cultural, political and economic realms would be largely independent. Steiner argued that a fusion of the three realms had created the inflexibility that had led to catastrophes such as World War I. In connection with this, he promoted a radical solution in the disputed area of Upper Silesia, claimed by both Poland and Germany. His suggestion that this area be granted at least provisional independence led to his being publicly accused of being a traitor to Germany.
Steiner opposed Woodrow Wilson's proposal to create new European nations based around ethnic groups, which he saw as opening the door to rampant nationalism. Steiner proposed as an alternative "'social territories' with democratic institutions that were accessible to all inhabitants of a territory whatever their origin, while the needs of the various ethnicities would be met by independent cultural institutions."
The National Socialist German Workers Party gained strength in Germany after the First World War. In 1919, a political theorist of this movement, Dietrich Eckart, attacked Steiner and suggested that he was a Jew. In 1921, Adolf Hitler attacked Steiner on many fronts, including accusations that he was a tool of the Jews, while other nationalist extremists in Germany called for a "war against Steiner". That same year, Steiner warned against the disastrous effects it would have for Central Europe if the National Socialists came to power. In 1922 a lecture Steiner was giving in Munich was disrupted when stink bombs were let off and the lights switched out, while people rushed the stage apparently attempting to attack Steiner, who exited safely through a back door. Unable to guarantee his safety, Steiner's agents cancelled his next lecture tour. The 1923 Beer Hall Putsch in Munich led Steiner to give up his residence in Berlin, saying that if those responsible for the attempted coup [Hitler and others] came to power in Germany, it would no longer be possible for him to enter the country.
From 1923 on, Steiner showed signs of increasing frailty and illness. He nonetheless continued to lecture widely, and even to travel; especially towards the end of this time, he was often giving two, three or even four lectures daily for courses taking place concurrently. Many of these lectures focused on practical areas of life, such as education.
Increasingly ill, he held his last lecture in late September, 1924. He continued work on his autobiography during the last months of his life; he died on 30 March 1925.
Steiner first began speaking publicly about spiritual experiences and phenomena in his 1899 lectures to the Theosophical Society. By 1901 he had begun to write about spiritual topics, initially in the form of discussions of historical figures such as the mystics of the Middle Ages. By 1904 he was expressing his own understanding of these themes in his essays and books, while continuing to refer to a wide variety of historical sources.
"A world of spiritual perception is discussed in a number of writings which I have published since this book appeared. "The Philosophy of Freedom" forms the philosophical basis for these later writings. For it tries to show that the experience of thinking, rightly understood, is in fact an experience of spirit." (Steiner, Philosophy of Freedom, Consequences of Monism)
Steiner aimed to apply his training in mathematics, science, and philosophy to produce rigorous, verifiable presentations of those experiences. He believed that through freely chosen ethical disciplines and meditative training, anyone could develop the ability to experience the spiritual world, including the higher nature of oneself and others. Steiner believed that such discipline and training would help a person to become a more moral, creative and free individual – free in the sense of being capable of actions motivated solely by love. His philosophical ideas were affected by Franz Brentano, with whom he had studied, as well as by Fichte, Hegel, Schelling, and Goethe's phenomenological approach to science.
Steiner followed Wilhelm Dilthey in using the term "Geisteswissenschaft", usually translated as "spiritual science". Steiner used the term to describe a discipline treating the spirit as something actual and real, starting from the premise that it is possible for human beings to penetrate behind what is sense-perceptible. He proposed that psychology, history, and the humanities generally were based on the direct grasp of an ideal reality, and required close attention to the particular period and culture which provided the distinctive character of religious qualities in the course of the evolution of consciousness. In contrast to William James' pragmatic approach to religious and psychic experience, which emphasized its idiosyncratic character, Steiner focused on ways such experience can be rendered more intelligible and integrated into human life.
Steiner proposed that an understanding of reincarnation and karma was necessary to understand psychology and that the form of external nature would be more comprehensible as a result of insight into the course of karma in the evolution of humanity. Beginning in 1910, he described aspects of karma relating to health, natural phenomena and free will, taking the position that a person is not bound by his or her karma, but can transcend this through actively taking hold of one's own nature and destiny. In an extensive series of lectures from February to September 1924, Steiner presented further research on successive reincarnations of various individuals and described the techniques he used for karma research.
Steiner was founder and leader of the following:
After the First World War, Steiner became active in a wide variety of cultural contexts. He founded a number of schools, the first of which was known as the Waldorf school, which later evolved into a worldwide school network. He also founded a system of organic agriculture, now known as biodynamic agriculture, which was one of the very first forms of modern organic farming and has contributed significantly to its development. His work in medicine led to the development of a broad range of complementary medications and supportive artistic and biographic therapies. Numerous homes for children and adults with developmental disabilities based on his work (including those of the Camphill movement) are found in Africa, Europe, and North America. His paintings and drawings influenced Joseph Beuys and other modern artists. His two Goetheanum buildings have been widely cited as masterpieces of modern architecture, and other anthroposophical architects have contributed thousands of buildings to the modern scene. One of the first institutions to practice ethical banking was an anthroposophical bank working out of Steiner's ideas; other anthroposophical social finance institutions have since been founded.
Steiner's literary estate is correspondingly broad. Steiner's writings, published in about forty volumes, include books, essays, four plays ('mystery dramas'), mantric verse, and an autobiography. His collected lectures, making up another approximately 300 volumes, discuss an extremely wide range of themes. Steiner's drawings, chiefly illustrations done on blackboards during his lectures, are collected in a separate series of 28 volumes. Many publications have covered his architectural legacy and sculptural work.
As a young man, Steiner was a private tutor and a lecturer on history for the Berlin "Arbeiterbildungsschule", an educational initiative for working class adults. Soon thereafter, he began to articulate his ideas on education in public lectures, culminating in a 1907 essay on "The Education of the Child" in which he described the major phases of child development which formed the foundation of his approach to education. His conception of education was influenced by the Herbartian pedagogy prominent in Europe during the late nineteenth century, though Steiner criticized Herbart for not sufficiently recognizing the importance of educating the will and feelings as well as the intellect.
In 1919, Emil Molt invited him to lecture to his workers at the Waldorf-Astoria cigarette factory in Stuttgart. Out of these lectures came a new school, the Waldorf school. In 1922, Steiner presented these ideas at a conference called for this purpose in Oxford by Professor Millicent Mackenzie. He subsequently presented a teacher training course at Torquay in 1924 at an Anthroposophy Summer School organised by Eleanor Merry. The Oxford Conference and the Torquay teacher training led to the founding of the first Waldorf schools in Britain. During Steiner's lifetime, schools based on his educational principles were also founded in Hamburg, Essen, The Hague and London; there are now more than 1000 Waldorf schools worldwide.
In 1924, a group of farmers concerned about the future of agriculture requested Steiner's help. Steiner responded with a lecture series on an ecological and sustainable approach to agriculture that would increase soil fertility without the use of chemical fertilizers and pesticides. Steiner's agricultural ideas promptly spread and were put into practice internationally, and biodynamic agriculture is now practiced in Europe, North America, South America, Africa, Asia and Australasia.
A central aspect of biodynamics is that the farm as a whole is seen as an organism, and therefore should be a largely self-sustaining system, producing its own manure and animal feed. Plant or animal disease is seen as a symptom of problems in the whole organism. Steiner also suggested timing such agricultural activities as sowing, weeding, and harvesting to utilize the influences on plant growth of the moon and planets; and the application of natural materials prepared in specific ways to the soil, compost, and crops, with the intention of engaging non-physical beings and elemental forces. He taught that mushrooms were "very harmful" because "they contain hindering lunar forces, and everything that arose on the old Moon signifies rigidification." He encouraged his listeners to verify his suggestions empirically, as he had not yet done.
From the late 1910s, Steiner was working with doctors to create a new approach to medicine. In 1921, pharmacists and physicians gathered under Steiner's guidance to create a pharmaceutical company called "Weleda" which now distributes naturopathic medical and beauty products worldwide. At around the same time, Dr. Ita Wegman founded a first anthroposophic medical clinic (now the Ita Wegman Clinic) in Arlesheim.
For a period after World War I, Steiner was active as a lecturer on social reform. A petition expressing his basic social ideas was widely circulated and signed by many cultural figures of the day, including Hermann Hesse.
In Steiner's chief book on social reform, "Toward Social Renewal", he suggested that the cultural, political and economic spheres of society need to work together as consciously cooperating yet independent entities, each with a particular task: political institutions should establish political equality and protect human rights; cultural institutions should nurture the free and unhindered development of science, art, education and religion; and economic institutions should enable producers, distributors and consumers to cooperate to provide efficiently for society's needs. He saw such a division of responsibility, which he called the Threefold Social Order, as a vital task which would take up consciously the historical trend toward the mutual independence of these three realms. Steiner also gave suggestions for many specific social reforms.
Steiner proposed what he termed a "fundamental law" of social life:
He expressed this in the motto:
Steiner designed 17 buildings, including the First and Second Goetheanums. These two buildings, built in Dornach, Switzerland, were intended to house significant theater spaces as well as a "school for spiritual science". Three of Steiner's buildings have been listed amongst the most significant works of modern architecture.
His primary sculptural work is "The Representative of Humanity" (1922), a nine-meter-high wood sculpture executed as a joint project with the sculptor Edith Maryon. This was intended to be placed in the first Goetheanum. It shows a central, free-standing Christ holding a balance between the beings of Lucifer and Ahriman, representing the opposing tendencies of expansion and contraction. It was intended to show, in conscious contrast to Michelangelo's Last Judgment, Christ as mute and impersonal, such that the beings that approach him must judge themselves. The sculpture is now on permanent display at the Goetheanum.
Steiner's blackboard drawings were unique at the time and almost certainly not originally intended as art works. Joseph Beuys' work, itself heavily influenced by Steiner, has led to the modern understanding of Steiner's drawings as artistic objects.
Steiner wrote four mystery plays between 1909 and 1913: "The Portal of Initiation", "The Souls' Probation", "The Guardian of the Threshold" and "The Soul's Awakening", modeled on the esoteric dramas of Edouard Schuré, Maurice Maeterlinck, and Johann Wolfgang von Goethe. Steiner's plays continue to be performed by anthroposophical groups in various countries, most notably (in the original German) in Dornach, Switzerland and (in English translation) in Spring Valley, New York and in Stroud and Stourbridge in the U.K.
In collaboration with Marie von Sivers, Steiner also founded a new approach to acting, storytelling, and the recitation of poetry. His last public lecture course, given in 1924, was on speech and drama. The Russian actor, director, and acting coach Michael Chekhov based significant aspects of his method of acting on Steiner's work.
Together with Marie von Sivers, Rudolf Steiner also developed the art of eurythmy, sometimes referred to as "visible speech and song". According to the principles of eurythmy, there are archetypal movements or gestures that correspond to every aspect of speech – the sounds (or phonemes), the rhythms, and the grammatical function – to every "soul quality" – joy, despair, tenderness, etc. – and to every aspect of music – tones, intervals, rhythms, and harmonies.
In his commentaries on Goethe's scientific works, written between 1884 and 1897, Steiner presented Goethe's approach to science as essentially phenomenological in nature, rather than theory- or model-based. He developed this conception further in several books, "The Theory of Knowledge Implicit in Goethe's World-Conception" (1886) and "Goethe's Conception of the World" (1897), particularly emphasizing the transformation in Goethe's approach from the physical sciences, where experiment played the primary role, to plant biology, where both accurate perception and imagination were required to find the biological archetypes ("Urpflanze"). He postulated that Goethe had sought, but been unable fully to find, the further transformation in scientific thinking necessary to properly interpret and understand the animal kingdom. Steiner emphasized the role of evolutionary thinking in Goethe's discovery of the intermaxillary bone in human beings; Goethe expected human anatomy to be an evolutionary transformation of animal anatomy. Steiner defended Goethe's qualitative description of color as arising synthetically from the polarity of light and darkness, in contrast to Newton's particle-based and analytic conception.
Steiner approached the philosophical questions of knowledge and freedom in two stages. In his dissertation, published in expanded form in 1892 as "Truth and Knowledge", Steiner suggests that there is an inconsistency between Kant's philosophy, which posits that all knowledge is a representation of an essential verity inaccessible to human consciousness, and modern science, which assumes that all influences can be found in the sensory and mental world to which we have access. Steiner considered Kant's philosophy of an inaccessible beyond ("Jenseits-Philosophy") a stumbling block in achieving a satisfying philosophical viewpoint.
Steiner postulates that the world is essentially an indivisible unity, but that our consciousness divides it into the sense-perceptible appearance, on the one hand, and the formal nature accessible to our thinking, on the other. He sees in thinking itself an element that can be strengthened and deepened sufficiently to penetrate all that our senses do not reveal to us. Steiner thus considered what appears to human experience as a division between the spiritual and natural worlds to be a conditioned result of the structure of our consciousness, which separates perception and thinking. These two faculties give us not two worlds, but two complementary views of the same world; neither has primacy and the two together are necessary and sufficient to arrive at a complete understanding of the world. In thinking about perception (the path of natural science) and perceiving the process of thinking (the path of spiritual training), it is possible to discover a hidden inner unity between the two poles of our experience. Truth, for Steiner, is paradoxically both an objective discovery and yet "a free creation of the human spirit, that never would exist at all if we did not generate it ourselves. The task of understanding is not to replicate in conceptual form something that already exists, but rather to create a wholly new realm, that together with the world given to our senses constitutes the fullness of reality."
In the "Philosophy of Freedom", Steiner further explores potentials within thinking: freedom, he suggests, can only be approached gradually with the aid of the creative activity of thinking. Thinking can be a free deed; in addition, it can liberate our will from its subservience to our instincts and drives. Free deeds, he suggests, are those for which we are fully conscious of the motive for our action; freedom is the spiritual activity of penetrating with consciousness our own nature and that of the world, and the real activity of acting in full consciousness. This includes overcoming influences of both heredity and environment: "To be free is to be capable of thinking one's own thoughts – not the thoughts merely of the body, or of society, but thoughts generated by one's deepest, most original, most essential and spiritual self, one's individuality."
Steiner affirmed Darwin's and Haeckel's evolutionary perspectives but extended them beyond their materialistic consequences; he saw human consciousness, indeed all human culture, as a product of natural evolution that transcends itself. For Steiner, nature becomes self-conscious in the human being. Steiner's description of the nature of human consciousness thus closely parallels that of Solovyov.
In his earliest works, Steiner already spoke of the "natural and spiritual worlds" as a unity. From 1900 on, he began lecturing about concrete details of the spiritual world(s), culminating in the publication in 1904 of the first of several systematic presentations, his "Theosophy: An Introduction to the Spiritual Processes in Human Life and in the Cosmos". As a starting point for the book Steiner took a quotation from Goethe, describing the method of natural scientific observation, while in the Preface he made clear that the line of thought taken in this book led to the same goal as that in his earlier work, "The Philosophy of Freedom".
In the years 1903–1908 Steiner maintained the magazine "Lucifer-Gnosis" and published in it essays on topics such as initiation, reincarnation and karma, and knowledge of the supernatural world. Some of these were later collected and published as books, such as "How to Know Higher Worlds" (1904/5) and "Cosmic Memory". The book "An Outline of Esoteric Science" was published in 1910. Important themes include:
Steiner emphasized that there is an objective natural and spiritual world that can be known, and that perceptions of the spiritual world and incorporeal beings are, under conditions of training comparable to that required for the natural sciences, including self-discipline, replicable by multiple observers. It is on this basis that spiritual science is possible, with radically different epistemological foundations than those of natural science. He believed that natural science was correct in its methods but one-sided for exclusively focusing on sensory phenomena, while mysticism was vague in its methods, though it sought to explore the inner and spiritual life. Anthroposophy was meant to apply the systematic methods of the former to the content of the latter.
For Steiner, the cosmos is permeated and continually transformed by the creative activity of non-physical processes and spiritual beings. For the human being to become conscious of the objective reality of these processes and beings, it is necessary to creatively enact and reenact, within, their creative activity. Thus objective spiritual knowledge always entails creative inner activity. Steiner articulated three stages of any creative deed:
Steiner termed his work from this period onwards "Anthroposophy". He emphasized that the spiritual path he articulated builds upon and supports individual freedom and independent judgment; for the results of spiritual research to be appropriately presented in a modern context they must be in a form accessible to logical understanding, so that those who do not have access to the spiritual experiences underlying anthroposophical research can make independent evaluations of the latter's results. Spiritual training is to support what Steiner considered the overall purpose of human evolution, the development of the mutually interdependent qualities of love and freedom.
In 1899 Steiner experienced what he described as a life-transforming inner encounter with the being of Christ; previously he had little or no relation to Christianity in any form. Then and thereafter, his relationship to Christianity remained entirely founded upon personal experience, and thus both non-denominational and strikingly different from conventional religious forms. Steiner was then 38, and the experience of meeting the Christ occurred after a tremendous inner struggle. To use Steiner's own words, the "experience culminated in my standing in the spiritual presence of the Mystery of Golgotha in a most profound and solemn festival of knowledge."
Steiner describes Christ as the unique pivot and meaning of earth's evolutionary processes and human history, redeeming the Fall from Paradise. He understood the Christ as a being that unifies and inspires all religions, not belonging to a particular religious faith. To be "Christian" is, for Steiner, a search for balance between polarizing extremes and the ability to manifest love in freedom.
Central principles of his understanding include:
In Steiner's esoteric cosmology, the spiritual development of humanity is interwoven in and inseparable from the cosmological development of the universe. Continuing the evolution that led to humanity being born out of the natural world, the Christ being brings an impulse enabling human consciousness of the forces that act creatively, but unconsciously, in nature.
Steiner's views of Christianity diverge from conventional Christian thought in key places, and include gnostic elements. However, unlike many gnostics, Steiner affirms the unique and actual physical Incarnation of Christ in Jesus at the beginning of the Christian era.
One of the central points of divergence with conventional Christian thought is found in Steiner's views on reincarnation and karma.
Steiner also posited two different Jesus children involved in the Incarnation of the Christ: one child descended from Solomon, as described in the Gospel of Matthew; the other child from Nathan, as described in the Gospel of Luke. He references in this regard the fact that the genealogies in these two gospels list twenty-six (Luke) to forty-one (Matthew) completely different ancestors for the generations from David to Jesus.
Steiner's view of the second coming of Christ is also unusual. He suggested that this would not be a physical reappearance, but rather, meant that the Christ being would become manifest in non-physical form, in the "etheric realm" – i.e. visible to spiritual vision and apparent in community life – for increasing numbers of people, beginning around the year 1933. He emphasized that the future would require humanity to recognize this Spirit of Love in all its genuine forms, regardless of how this is named. He also warned that the traditional name, "Christ", might be used, yet the true essence of this Being of Love ignored.
In the 1920s, Steiner was approached by Friedrich Rittelmeyer, a Lutheran pastor with a congregation in Berlin, who asked if it was possible to create a more modern form of Christianity. Soon others joined Rittelmeyer – mostly Protestant pastors and theology students, but including several Roman Catholic priests. Steiner offered counsel on renewing the spiritual potency of the sacraments while emphasizing freedom of thought and a personal relationship to religious life. He envisioned a new synthesis of Catholic and Protestant approaches to religious life, terming this "modern, Johannine Christianity".
The resulting movement for religious renewal became known as "The Christian Community". Its work is based on a free relationship to the Christ, without dogma or policies. Its priesthood, which is open to both men and women, is free to preach out of their own spiritual insights and creativity.
Steiner emphasized that the resulting movement for the renewal of Christianity was a personal gesture of help to a movement founded by Rittelmeyer and others independently of his anthroposophical work. The distinction was important to Steiner because he sought with Anthroposophy to create a scientific, not faith-based, spirituality. He recognized that for those who wished to find more traditional forms, however, a renewal of the traditional religions was also a vital need of the times.
Steiner's work has influenced a broad range of notable personalities. These include philosophers Albert Schweitzer, Owen Barfield and Richard Tarnas; writers Saul Bellow, Andrej Belyj, Michael Ende, Selma Lagerlöf, Edouard Schuré, David Spangler, and William Irwin Thompson; child psychiatrist Eva Frommer; economist Leonard Read; artists Joseph Beuys, Wassily Kandinsky, and Murray Griffin; esotericist and educationalist George Trevelyan; actor and acting teacher Michael Chekhov; cinema director Andrei Tarkovsky; composers Jonathan Harvey and Viktor Ullmann; and conductor Bruno Walter. Olav Hammer, though sharply critical of esoteric movements generally, terms Steiner "arguably the most historically and philosophically sophisticated spokesperson of the Esoteric Tradition."
Albert Schweitzer wrote that he and Steiner had in common that they had "taken on the life mission of working for the emergence of a true culture enlivened by the ideal of humanity and to encourage people to become truly thinking beings".
Anthony Storr stated about Rudolf Steiner's Anthroposophy: "His belief system is so eccentric, so unsupported by evidence, so manifestly bizarre, that rational skeptics are bound to consider it delusional."
Robert Todd Carroll has said of Steiner that "Some of his ideas on education – such as educating the handicapped in the mainstream – are worth considering, although his overall plan for developing the spirit and the soul rather than the intellect cannot be admired". However, Steiner's translators have pointed out that his use of "Geist" encompasses both mind and spirit, as the German term can be translated equally properly either way.
The 150th anniversary of Rudolf Steiner's birth was marked by the first major retrospective exhibition of his art and work, 'Kosmos - Alchemy of the everyday'. Organized by the Vitra Design Museum, the traveling exhibition presented many facets of Steiner's life and achievements, including his influence on architecture, furniture design, dance (Eurythmy), education, and agriculture (Biodynamic agriculture). The exhibition opened in 2011 at the Kunstmuseum in Stuttgart, Germany.
Olav Hammer has criticized as scientism Steiner's claim to use scientific methodology to investigate spiritual phenomena, a claim that rested on his reports of clairvoyant experience. Steiner regarded the observations of spiritual research as more dependable (and above all, consistent) than observations of physical reality. However, he did consider spiritual research to be fallible and held the view that anyone capable of thinking logically was in a position to correct errors by spiritual researchers.
Steiner's work includes both universalist, humanist elements and historically influenced racial assumptions. Due to the contrast and even contradictions between these elements, "whether a given reader interprets Anthroposophy as racist or not depends upon that reader's concerns". Steiner considered that by dint of its shared language and culture, each people has a unique essence, which he called its soul or spirit. He saw race as a physical manifestation of humanity's spiritual evolution, and at times discussed race in terms of complex hierarchies that were largely derived from 19th century biology, anthropology, philosophy and theosophy. However, he consistently and explicitly subordinated race, ethnicity, gender, and indeed all hereditary factors, to individual factors in development. For Steiner, human individuality is centered in a person's unique biography, and he believed that an individual's experiences and development are not bound by a single lifetime or the qualities of the physical body. More specifically:
In the context of his ethical individualism, Steiner considered "race, folk, ethnicity and gender" to be general, describable categories into which individuals may choose to fit, but from which free human beings can and will liberate themselves.
During the years when Steiner was best known as a literary critic, he published a series of articles attacking various manifestations of antisemitism and criticizing some of the most prominent anti-Semites of the time as "barbaric" and "enemies of culture". On a number of occasions, however, Steiner suggested that Jewish cultural and social life had lost all contemporary relevance and promoted full assimilation of the Jewish people into the nations in which they lived. This stance has come under severe criticism in recent years.
Steiner was a critic of his contemporary Theodor Herzl's goal of a Zionist state, and indeed of any ethnically determined state, as he considered ethnicity to be an outmoded basis for social life and civic identity.
Towards the end of Steiner's life and after his death, there were massive defamatory press attacks mounted on him by early National Socialist leaders (including Adolf Hitler) and other right-wing nationalists. These criticized Steiner's thought and anthroposophy as being incompatible with National Socialist racial ideology, and charged him with being influenced by his close connections with Jews and even (falsely) that he himself was Jewish.
The standard edition of Steiner's Collected Works constitutes about 400 volumes. This includes 43 volumes of his writings (books, essays, plays, and correspondence), over 6000 lectures, and 16 volumes of his artistic work (drawings, paintings, graphic design, other design work, and choreography). His architectural work has been documented extensively outside of the Collected Works.
Roof
A roof is the top covering of a building, including all materials and constructions necessary to support it on the walls of the building or on uprights; it provides protection against rain, snow, sunlight, extremes of temperature, and wind. A roof is part of the building envelope.
The characteristics of a roof are dependent upon the purpose of the building that it covers, the available roofing materials, the local traditions of construction, and wider concepts of architectural design and practice; they may also be governed by local or national legislation. In most countries a roof protects primarily against rain. A verandah may be roofed with material that protects against sunlight but admits the other elements. The roof of a garden conservatory protects plants from cold, wind, and rain, but admits light.
A roof may also provide additional living space, for example a roof garden.
Old English hrof "roof, ceiling, top, summit; heaven, sky," also figuratively, "highest point of something," from Proto-Germanic *khrofam (cf. Dutch roef "deckhouse, cabin, coffin-lid," Middle High German rof "penthouse," Old Norse hrof "boat shed").
There are no apparent connections outside the Germanic family. "English alone has retained the word in a general sense, for which the other languages use forms corresponding to OE. þæc thatch" [OED].
The elements in the design of a roof are:
The material of a roof may range from banana leaves, wheaten straw or seagrass to laminated glass, copper "(see: copper roofing)", aluminium sheeting and pre-cast concrete. In many parts of the world ceramic tiles have been the predominant roofing material for centuries, if not millennia. Other roofing materials include asphalt, coal tar pitch, EPDM rubber, Hypalon, polyurethane foam, PVC, slate, Teflon fabric, TPO, and wood shakes and shingles.
The construction of a roof is determined by its method of support, how the space beneath is bridged, and whether or not the roof is "pitched". The "pitch" is the angle at which the roof rises from its lowest to highest point. Most US domestic architecture, except in very dry regions, has roofs that are sloped, or "pitched". Although modern construction elements such as drainpipes may remove the need for pitch, roofs are pitched for reasons of tradition and aesthetics, so the pitch is partly a matter of style and partly of practicality.
Some types of roofing, for example thatch, require a steep pitch in order to be waterproof and durable. Other types of roofing, for example pantiles, are unstable on a steeply pitched roof but provide excellent weather protection at a relatively low angle. In regions where there is little rain, an almost flat roof with a slight run-off provides adequate protection against an occasional downpour. Drainpipes also remove the need for a sloping roof.
A person that specializes in roof construction is called a roofer.
The durability of a roof is a matter of concern because the roof is often the least accessible part of a building for purposes of repair and renewal, while its damage or destruction can have serious effects.
The shape of roofs differs greatly from region to region. The main factors which influence the shape of roofs are the climate and the materials available for roof structure and the outer covering.
The basic shapes of roofs are flat, mono-pitched, gabled, hipped, butterfly, arched and domed. There are many variations on these types. Roofs constructed of flat sections that are sloped are referred to as pitched roofs (generally if the angle exceeds 10 degrees). Pitched roofs, including gabled, hipped and skillion roofs, make up the greatest number of domestic roofs. Some roofs follow organic shapes, either by architectural design or because a flexible material such as thatch has been used in the construction.
There are two parts to a roof, its supporting structure and its outer skin, or uppermost weatherproof layer. In a minority of buildings, the outer layer is also a self-supporting structure.
The roof structure is generally supported upon walls, although some building styles, for example, geodesic and A-frame, blur the distinction between wall and roof.
The supporting structure of a roof usually comprises beams that are long and of strong, fairly rigid material such as timber, and since the mid-19th century, cast iron or steel. In countries that use bamboo extensively, the flexibility of the material causes a distinctive curving line to the roof, characteristic of Oriental architecture.
Timber lends itself to a great variety of roof shapes. The timber structure can fulfil an aesthetic as well as practical function, when left exposed to view.
Stone lintels have been used to support roofs since prehistoric times, but cannot bridge large distances. The stone arch came into extensive use in the ancient Roman period and in variant forms could be used to span spaces up to 140 feet (43 m) across. The stone arch or vault, with or without ribs, dominated the roof structures of major architectural works for about 2,000 years, only giving way to iron beams with the Industrial Revolution and the designing of such buildings as Paxton's Crystal Palace, completed 1851.
With continual improvements in steel girders, these became the major structural support for large roofs, and eventually for ordinary houses as well. Another form of girder is the reinforced concrete beam, in which metal rods are encased in concrete, giving it greater strength under tension.
This part of the roof shows great variation dependent upon availability of material. In vernacular architecture, roofing material is often vegetation, such as thatches, the most durable being sea grass with a life of perhaps 40 years. In many Asian countries bamboo is used both for the supporting structure and the outer layer where split bamboo stems are laid turned alternately and overlapped. In areas with an abundance of timber, wooden shingles and boards are used, while in some countries the bark of certain trees can be peeled off in thick, heavy sheets and used for roofing.
The 20th century saw the manufacture of composition asphalt shingles, which range from thin 20-year shingles to the thickest "limited lifetime" shingles, the cost depending on the thickness and durability of the shingle. When a layer of shingles wears out, they are usually stripped, along with the underlay and roofing nails, allowing a new layer to be installed. An alternative method is to install another layer directly over the worn layer. While this method is faster, it does not allow the roof sheathing to be inspected and water damage, often associated with worn shingles, to be repaired. Having multiple layers of old shingles under a new layer causes roofing nails to be located further from the sheathing, weakening their hold. The greatest concern with this method is that the weight of the extra material could exceed the dead load capacity of the roof structure and cause collapse. Because of this, jurisdictions which use the International Building Code prohibit the installation of new roofing on top of an existing roof that has two or more applications of any type of roof covering; the existing roofing material must be removed before installing a new roof.
Slate is an ideal and durable material, while in the Swiss Alps roofs are made from huge slabs of stone, several inches thick. The slate roof is often considered the best type of roofing. A slate roof may last 75 to 150 years, or even longer. However, slate roofs are often expensive to install – in the US, for example, a slate roof may cost as much as the rest of the house. Often, the first part of a slate roof to fail is the fixing nails; they corrode, allowing the slates to slip. In the UK, this condition is known as "nail sickness". Because of this problem, fixing nails made of stainless steel or copper are recommended, and even these must be protected from the weather.
Asbestos, usually in bonded corrugated panels, has been used widely in the 20th century as an inexpensive, non-flammable roofing material with excellent insulating properties. Health and legal issues involved in the mining and handling of asbestos products means that it is no longer used as a new roofing material. However, many asbestos roofs continue to exist, particularly in South America and Asia.
Roofs made of cut turf (modern ones known as green roofs, traditional ones as sod roofs) have good insulating properties and are increasingly encouraged as a way of "greening" the Earth. Adobe roofs are roofs of clay, mixed with binding material such as straw or animal hair, and plastered on lathes to form a flat or gently sloped roof, usually in areas of low rainfall.
In areas where clay is plentiful, roofs of baked tiles have been the major form of roofing. The casting and firing of roof tiles is an industry that is often associated with brickworks. While the shape and colour of tiles was once regionally distinctive, now tiles of many shapes and colours are produced commercially, to suit the taste and pocketbook of the purchaser.
Sheet metal in the form of copper and lead has also been used for many hundreds of years. Both are expensive but durable, the vast copper roof of Chartres Cathedral, oxidised to a pale green colour, having been in place for hundreds of years. Lead, which is sometimes used for church roofs, was most commonly used as flashing in valleys and around chimneys on domestic roofs, particularly those of slate. Copper was used for the same purpose.
In the 19th century, iron, electroplated with zinc to improve its resistance to rust, became a light-weight, easily transported, waterproofing material. Its low cost and easy application made it the most accessible commercial roofing, worldwide. Since then, many types of metal roofing have been developed. Steel shingle or standing-seam roofs last about 50 years or more depending on both the method of installation and the moisture barrier (underlayment) used and are between the cost of shingle roofs and slate roofs.
In the 20th century a large number of roofing materials were developed, including roofs based on bitumen (already used in previous centuries), on rubber and on a range of synthetics such as thermoplastic and on fibreglass.
Because the purpose of a roof is to secure people and their possessions from climatic elements, the insulating properties of a roof are a consideration in its structure and the choice of roofing material.
Some roofing materials, particularly those of natural fibrous material, such as thatch, have excellent insulating properties. For those that do not, extra insulation is often installed under the outer layer. In developed countries, the majority of dwellings have a ceiling installed under the structural members of the roof. The purpose of a ceiling is to insulate against heat and cold, noise, dirt and often from the droppings and lice of birds who frequently choose roofs as nesting places.
Concrete tiles can be used as insulation. When installed leaving a space between the tiles and the roof surface, it can reduce heating caused by the sun.
Forms of insulation include felt or plastic sheeting, sometimes with a reflective surface, installed directly below the tiles or other material; synthetic foam batting laid above the ceiling; and recycled paper products and other such materials that can be inserted or sprayed into roof cavities. So-called cool roofs are becoming increasingly popular, and in some cases are mandated by local codes. Cool roofs are defined as roofs with both high reflectivity and high thermal emittance.
Poorly insulated and ventilated roofing can suffer from problems such as the formation of ice dams around the overhanging eaves in cold weather, causing water from melted snow on upper parts of the roof to penetrate the roofing material. Ice dams occur when heat escapes through the uppermost part of the roof, and the snow at those points melts, refreezing as it drips along the shingles, and collecting in the form of ice at the lower points. This can result in structural damage from stress, including the destruction of gutter and drainage systems.
The primary job of most roofs is to keep out water. The large area of a roof repels a lot of water, which must be directed in some suitable way, so that it does not cause damage or inconvenience.
Flat roofs of adobe dwellings generally have a very slight slope. In Middle Eastern countries, where the roof may be used for recreation, it is often walled, and drainage holes must be provided to stop water from pooling and seeping through the porous roofing material.
Similar problems, although on a very much larger scale, confront the builders of modern commercial properties which often have flat roofs. Because of the very large nature of such roofs, it is essential that the outer skin be of a highly impermeable material. Most industrial and commercial structures have conventional roofs of low pitch.
In general, the pitch of the roof is proportional to the amount of precipitation. Houses in areas of low rainfall frequently have roofs of low pitch, while those in areas of high rainfall and snow have steep roofs. The longhouses of Papua New Guinea, for example, are roof-dominated architecture, their high roofs sweeping almost to the ground. The high, steeply pitched roofs of Germany and Holland are typical in regions of snowfall. In parts of North America such as Buffalo, USA, or Montreal, Canada, there is a required minimum slope of 6 inches in 12 inches, a pitch of about 27 degrees.
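The slope-to-angle conversion behind these figures is simple trigonometry; the following is a minimal illustration (the function name and printed values are only for demonstration, not drawn from any building code):

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Convert a roof slope given as rise over run (e.g. 6 in 12)
   to a pitch angle in degrees. */
static double pitch_degrees(double rise, double run)
{
    return atan2(rise, run) * 180.0 / M_PI;
}

int main(void)
{
    printf("6-in-12 slope:  %.1f degrees\n", pitch_degrees(6.0, 12.0));  /* ~26.6 */
    printf("12-in-12 slope: %.1f degrees\n", pitch_degrees(12.0, 12.0)); /* 45.0  */
    return 0;
}
```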
There are regional building styles which contradict this trend, the stone roofs of the Alpine chalets being usually of gentler incline. These buildings tend to accumulate a large amount of snow on them, which is seen as a factor in their insulation. The pitch of the roof is in part determined by the roofing material available, a pitch of 3/12 or greater slope generally being covered with asphalt shingles, wood shake, corrugated steel, slate or tile.
The water repelled by the roof during a rainstorm is potentially damaging to the building that the roof protects. If it runs down the walls, it may seep into the mortar or through panels. If it lies around the foundations it may cause seepage to the interior, rising damp or dry rot. For this reason most buildings have a system in place to protect the walls of a building from most of the roof water. Overhanging eaves are commonly employed for this purpose. Most modern roofs and many old ones have systems of valleys, gutters, waterspouts, waterheads and drainpipes to remove the water from the vicinity of the building. In many parts of the world, roofwater is collected and stored for domestic use.
Areas prone to heavy snow benefit from a metal roof because their smooth surfaces shed the weight of snow more easily and resist the force of wind better than a wood shingle or a concrete tile roof.
Newer systems include solar shingles which generate electricity as well as cover the roof. There are also solar systems available that generate hot water or hot air and which can also act as a roof covering. More complex systems may carry out all of these functions: generate electricity, recover thermal energy, and also act as a roof covering.
Solar systems can be integrated with roofs by:
Robben Island
Robben Island is an island in Table Bay, west of the coast of Bloubergstrand, north of Cape Town, South Africa. It takes its name from the Dutch word for seals ("robben"), hence the Dutch/Afrikaans name "Robbeneiland", which translates to "Seal(s) Island".
Robben Island is roughly oval in shape, long north-south, and wide, with an area of . It is flat and only a few metres above sea level, as a result of an ancient erosion event. It was fortified and used as a prison from the late 17th century to 1996, after the end of apartheid. Many of its prisoners were political.
Political activist Nelson Mandela was imprisoned there for 18 of the 27 years he served behind bars before the fall of apartheid and expansion of the franchise to all residents of the country. He was later awarded the Nobel Prize for Peace and was elected in 1994 as President of South Africa, serving one term. In addition, two other former inmates of Robben Island have been elected as President of South Africa since the late 1990s: Kgalema Motlanthe and Jacob Zuma.
Robben Island is a South African National Heritage Site as well as a UNESCO World Heritage Site.
In 1654 the settlers of the Dutch Cape Colony placed all their ewes and a few rams on Robben Island; the men built a large shed and a shelter. The isolation offered better protection against wild animals than on the mainland. The settlers also collected seal skins and boiled oil to supply the needs of the settlement.
Since the end of the 17th century, Robben Island has been used for the incarceration of chiefly political prisoners. The Dutch settlers were the first to use Robben Island as a prison. Its first prisoner was probably Autshumato in the mid-17th century. Among its early permanent inhabitants were political leaders imprisoned from other Dutch colonies, including Indonesia, and the leader of the mutiny on the slave ship "Meermin".
After the British Royal Navy captured several Dutch East Indiamen at the battle of Saldanha Bay in 1781, a boat rowed out to meet the British warships. On board were the "kings of Ternate and Tidore, and the princes of the respective families". The Dutch had long held them on "Isle Robin", but then had moved them to Saldanha Bay.
In 1806 the Scottish whaler John Murray opened a whaling station at a sheltered bay on the north-eastern shore of the island, which became known as Murray's Bay. It was adjacent to the site of the present-day harbour named Murray's Bay Harbour, which was constructed in 1939–40.
After a failed uprising at Grahamstown in 1819, the fifth of the Xhosa Wars, the British colonial government sentenced African leader Makanda Nxele to life imprisonment on the island. He drowned on the shores of Table Bay after escaping the prison.
The island was also used as a leper colony and animal quarantine station. Starting in 1845, lepers from the "Hemel-en-Aarde" (heaven and earth) leper colony near Caledon were moved to Robben Island when "Hemel-en-Aarde" was found unsuitable as a leper colony. Initially this was done on a voluntary basis and the lepers were free to leave the island if they so wished. In April 1891 the cornerstones for 11 new buildings to house lepers were laid. After passage of the Leprosy Repression Act in May 1892, admission was no longer voluntary, and the movement of the lepers was restricted. Doctors and scientists did not understand the disease and thought isolation was the only way to prevent other people from contracting it. Prior to 1892 an average of about 25 lepers a year were admitted to Robben Island, but in 1892 that number rose to 338, and in 1893 a further 250 were admitted.
During the Second World War, the island was fortified. BL 9.2-inch guns and 6-inch guns were installed as part of the defences for Cape Town.
From 1961, Robben Island was used by the South African government as a prison for political prisoners and convicted criminals. In 1969 the "Moturu Kramat," now a sacred site for Muslim pilgrimage on Robben Island, was built to commemorate Sayed Abdurahman Moturu, the Prince of Madura. Moturu, one of Cape Town's first imams, had been exiled in the mid-1740s to the island. He died there in 1754. Muslim political prisoners would pay homage at the shrine before leaving the island.
In 1982, former inmate Indres Naidoo's book "Island in Chains" became the first published account of prison life on the island.
The maximum security prison for political prisoners closed in 1991. The medium security prison for criminal prisoners was closed five years later.
With the end of apartheid, the island has become a popular tourist destination. It is managed by the Robben Island Museum (RIM), which operates the site as a living museum. In 1999 the island was declared a World Heritage Site for its importance to South Africa's political history and the development of a democratic society. Every year thousands of visitors take the ferry from the Victoria & Alfred Waterfront in Cape Town for tours of the island and its former prison. Many of the guides are former prisoners. All land on the island is owned by the nation of South Africa, with the exception of the island church. Administratively, Robben Island is a suburb of the City of Cape Town. It is open all year round, weather permitting.
Robben Island is accessible to visitors through tours that depart from Cape Town's waterfront. Tours depart three times a day and take about 3.5 hours, consisting of a ferry trip to and from the island, and a tour of the various historical sites on the island that form part of the Robben Island Museum. These include the island graveyard, the disused lime quarry, Robert Sobukwe's house, the Bluestone quarry, the army and navy bunkers, and the maximum security prison. Nelson Mandela's cell is shown.
Seagoing vessels must take great care navigating near Robben Island and nearby Whale Rock, which does not break the surface, as both pose a danger to shipping. A prevailing rough Atlantic swell surrounds the offshore reefs and the island's jagged coastline. Stricken vessels driven onto rocks are quickly broken up by the powerful surf. A total of 31 vessels are known to have been wrecked around the island.
In 1990, a marine archaeology team from the University of Cape Town began Operation "Sea Eagle". It was an underwater survey that scanned of seabed around Robben Island. The task was made particularly difficult by the strong currents and high waves of these waters. The group found 24 vessels that had sunk around Robben Island. Most wrecks were found in waters less than deep. The team concluded that poor weather, darkness and fog were the cause of the sinkings.
Maritime wrecks around Robben Island and its surrounding waters include the 17th-century Dutch East Indiaman ships, the "Yeanger van Horne" (1611), the "Shaapejacht" (1660), and the "Dageraad" (1694). Later 19th-century wrecks include several British brigs, including the "Gondolier" (1836), and the United States clipper, "A.H. Stevens" (1866). In 1901 the mail steamer SS "Tantallon Castle" struck rocks off Robben Island in dense fog shortly after leaving Cape Town. After distress cannons were fired from the island, nearby vessels rushed to the rescue. All 120 passengers and crew were taken off the ship before it was broken apart in the relentless swell. A further 17 ships have been wrecked in the 20th century, including British, Spanish, Norwegian and Taiwanese vessels.
Due to the maritime danger of Robben Island and its near waters, Jan van Riebeeck, the first Dutch colonial administrator in Cape Town in the 1650s, ordered that huge bonfires were to be lit at night on top of Fire Hill, the highest point on the island (now Minto Hill). These were to warn VOC ships that they were approaching the island.
In 1865 Robben Island lighthouse was completed on Minto Hill. The cylindrical masonry tower, which has an attached lightkeeper's house at its base, is high with a lantern gallery at the top. In 1938 the lamp was converted to electricity. The lighthouse uses a flashing lantern instead of a revolving lamp; it shines for five seconds every seven seconds. The 46,000 candela beam flashes white light away from Table Bay. It is visible up to . A secondary red light acts as a navigation aid for vessels sailing south-southeast.
When the Dutch arrived in the area in 1652, the only large animals on the island were seals and birds, principally penguins. In 1654, the settlers released rabbits on the island to provide a ready source of meat for passing ships.
The original colony of African penguins on the island was completely exterminated by 1800, but since 1983 a new colony has been established there, and the modern island is again an important breeding area for the species. The colony grew to a size of ~16,000 individuals in 2004, before starting to decline again. Since then the decline has been continuous (to a colony size of ~3,000 individuals). Such a decline has been found at almost all other African penguin colonies. Its causes are still largely unclear and likely to vary between colonies, but at Robben Island they are probably related to a diminishing food supply (sardines and anchovies) through competition with fisheries. Easy to see in their natural habitat, the penguins have been a popular tourist attraction.
Around 1958, Lieutenant Peter Klerck, a naval officer serving on the island, introduced various animals. The following extract of an article, written by his son Michael Klerck, who lived on the island from an early age, describes the local fauna:
My father, a naval officer at the time, with the sanction of Doctor Hey, director of Nature Conservation, turned an area into a nature reserve. A 'Noah's Ark' berthed in the harbour sometime in 1958. They stocked the island with tortoise, duck, geese, buck (which included Springbok, Eland, Steenbok, Bontebok and Fallow Deer), Ostrich and a few Wildebeest which did not last long. All except the fallow deer are indigenous to the Cape. Many animals are still there including three species of tortoise—the most recently discovered in 1998—two Parrot Beaked specimens that have remained undetected until now. The leopard or mountain tortoises might have suspected the past terror; perhaps they had no intention of being a part of a future infamy, but they often attempted the swim back to the mainland (they are the only species in the world that can swim). Boats would lift them out of the sea in Table Bay and return them to us. None of the original 12 shipped over remain, and in 1995, four more were introduced—they seem to have more easily accepted their home as they are still residents. One resident brought across a large leopard tortoise discovered in a friend's garden in Newlands, Cape Town. He lived in our garden and grew big enough to climb over the wall and roam the island much like the sheep in Van Riebeeck's time. As children we were able to ride his great frame comfortably, as did some grown men. The buck and ostriches seemed equally happy and the ducks and Egyptian Geese were assigned a home in the old quarry, which had, some three hundred years before, supplied the dressed stone for the foundations of the Castle; at the time of my residence it bristled with fish.
Recent reports in Cape Town newspapers show that a lack of upkeep, a lack of culling, and the proliferation of rabbits on the island has led to the total devastation of the wildlife; there remains today almost none of the animals my father brought over all those years ago; the rabbits themselves have laid the island waste, stripping it of almost all ground vegetation. It looks almost like a desert. A reporter from the broadcasting corporation told me recently that they found the carcass of the last Bontebok.
In the early 21st century, the rabbit population had reached an estimated 25,000; the rabbits had become an invasive species, endangering others. They are being hunted and culled to reduce their numbers.
Real-time operating system
A real-time operating system (RTOS) is an operating system (OS) intended to serve real-time applications that process data as it comes in, typically without buffer delays. Processing time requirements (including any OS delay) are measured in tenths of seconds or shorter increments of time. A real-time system is a time-bound system which has well-defined, fixed time constraints. Processing must be done within the defined constraints or the system will fail. RTOSs are either event-driven or time-sharing: event-driven systems switch between tasks based on their priorities, while time-sharing systems switch tasks based on clock interrupts. Most RTOSs use a pre-emptive scheduling algorithm.
A key characteristic of an RTOS is the level of its consistency concerning the amount of time it takes to accept and complete an application's task; the variability is 'jitter'. A 'hard' real-time operating system (hard RTOS) has less jitter than a 'soft' real-time operating system (soft RTOS). In a hard RTOS a late answer is a wrong answer, while in a soft RTOS a late answer is acceptable. The chief design goal is not high throughput, but rather a guarantee of a soft or hard performance category. An RTOS that can usually or generally meet a deadline is a soft real-time OS, but if it can meet deadlines deterministically it is a hard real-time OS.
An RTOS has an advanced algorithm for scheduling. Scheduler flexibility enables a wider, computer-system orchestration of process priorities, but a real-time OS is more frequently dedicated to a narrow set of applications. Key factors in a real-time OS are minimal interrupt latency and minimal thread switching latency; a real-time OS is valued more for how quickly or how predictably it can respond than for the amount of work it can perform in a given period of time.
See the comparison of real-time operating systems for a comprehensive list. Also, see the list of operating systems for all types of operating systems.
An RTOS is an operating system in which the time taken to process an input stimulus is less than the time lapsed until the next input stimulus of the same type.
The most common designs are event-driven and time-sharing.
Time sharing designs switch tasks more often than strictly needed, but give smoother multitasking, giving the illusion that a process or user has sole use of a machine.
Early CPU designs needed many cycles to switch tasks during which the CPU could do nothing else useful. For example, with a 20 MHz 68000 processor (typical of the late 1980s), task switch times are roughly 20 microseconds. In contrast, a 100 MHz ARM CPU (from 2008) switches in less than 3 microseconds. Because switching took so long, early OSes tried to minimize wasting CPU time by avoiding unnecessary task switching.
In typical designs, a task has three states: running (executing on the CPU), ready (ready to be executed), and blocked (waiting for an event, such as I/O).
Most tasks are blocked or ready most of the time because generally only one task can run at a time per CPU. The number of items in the ready queue can vary greatly, depending on the number of tasks the system needs to perform and the type of scheduler that the system uses. On simpler non-preemptive but still multitasking systems, a task has to give up its time on the CPU to other tasks, which can cause the ready queue to have a greater number of overall tasks in the ready to be executed state (resource starvation).
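A minimal sketch of how such a design might represent tasks; the type and field names are hypothetical, not taken from any particular RTOS:

```c
/* Illustrative task states and control block; names are hypothetical. */
typedef enum {
    TASK_RUNNING,  /* currently executing on the CPU             */
    TASK_READY,    /* runnable, waiting in the ready queue       */
    TASK_BLOCKED   /* waiting for an event (I/O, semaphore, ...) */
} task_state_t;

typedef struct task {
    task_state_t state;
    unsigned int priority;   /* lower number = higher priority (a common convention) */
    struct task *next;       /* links used by the ready list */
    struct task *prev;
} task_t;
```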
Usually, the data structure of the ready list in the scheduler is designed to minimize the worst-case length of time spent in the scheduler's critical section, during which preemption is inhibited, and, in some cases, all interrupts are disabled, but the choice of data structure depends also on the maximum number of tasks that can be on the ready list.
If there are never more than a few tasks on the ready list, then a doubly linked list of ready tasks is likely optimal. If the ready list usually contains only a few tasks but occasionally contains more, then the list should be sorted by priority. That way, finding the highest priority task to run does not require iterating through the entire list. Inserting a task then requires walking the ready list until reaching either the end of the list, or a task of lower priority than that of the task being inserted.
Care must be taken not to inhibit preemption during this search. Longer critical sections should be divided into small pieces. If an interrupt occurs that makes a high priority task ready during the insertion of a low priority task, that high priority task can be inserted and run immediately before the low priority task is inserted.
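A minimal sketch of the insertion just described, reusing the hypothetical task_t type from the previous sketch and assuming the list is kept sorted with the highest-priority task at the head (lower number meaning higher priority):

```c
#include <stddef.h>

/* Insert 'new_task' into a ready list sorted so that the
   highest-priority task is at the head. Walks the list until it
   reaches the end or a task of lower priority, then links the new
   task in before it. In a real RTOS this short walk would sit
   inside a critical section kept as brief as possible. */
static void ready_list_insert(task_t **head, task_t *new_task)
{
    task_t *cur = *head, *prev = NULL;

    /* With "lower number = higher priority", skip past tasks of
       equal or higher priority (FIFO ordering within a priority). */
    while (cur != NULL && cur->priority <= new_task->priority) {
        prev = cur;
        cur = cur->next;
    }

    new_task->next = cur;
    new_task->prev = prev;
    if (cur)  cur->prev  = new_task;
    if (prev) prev->next = new_task;
    else      *head      = new_task;

    new_task->state = TASK_READY;
}
```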
The critical response time, sometimes called the flyback time, is the time it takes to queue a new ready task and restore the state of the highest priority task to running. In a well-designed RTOS, readying a new task will take 3 to 20 instructions per ready-queue entry, and restoration of the highest-priority ready task will take 5 to 30 instructions.
In more advanced systems, real-time tasks share computing resources with many non-real-time tasks, and the ready list can be arbitrarily long. In such systems, a scheduler ready list implemented as a linked list would be inadequate.
Some commonly used RTOS scheduling algorithms are:
A multitasking operating system like Unix is poor at real-time tasks. The scheduler gives the highest priority to jobs with the lowest demand on the computer, so there is no way to ensure that a time-critical job will have access to enough resources. Multitasking systems must manage sharing data and hardware resources among multiple tasks. It is usually unsafe for two tasks to access the same specific data or hardware resource simultaneously. There are three common approaches to resolve this problem: temporarily masking or disabling interrupts, mutexes or semaphores, and message passing.
General-purpose operating systems usually do not allow user programs to mask (disable) interrupts, because the user program could control the CPU for as long as it wishes. Some modern CPUs do not allow user mode code to disable interrupts as such control is considered a key operating system resource. Many embedded systems and RTOSs, however, allow the application itself to run in kernel mode for greater system call efficiency and also to permit the application to have greater control of the operating environment without requiring OS intervention.
On single-processor systems, an application running in kernel mode and masking interrupts is the lowest overhead method to prevent simultaneous access to a shared resource. While interrupts are masked and the current task does not make a blocking OS call, the current task has "exclusive" use of the CPU since no other task or interrupt can take control, so the critical section is protected. When the task exits its critical section, it must unmask interrupts; pending interrupts, if any, will then execute. Temporarily masking interrupts should only be done when the longest path through the critical section is shorter than the desired maximum interrupt latency. Typically this method of protection is used only when the critical section is just a few instructions and contains no loops. This method is ideal for protecting hardware bit-mapped registers when the bits are controlled by different tasks.
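A minimal sketch of this pattern; irq_save() and irq_restore() are hypothetical stand-ins for whatever one- or two-instruction interrupt-masking primitives the target CPU provides:

```c
/* Hypothetical primitives: on a real port these would compile down
   to instructions that save and restore the CPU interrupt flag. */
extern unsigned int irq_save(void);            /* mask interrupts, return old state */
extern void         irq_restore(unsigned int); /* restore the saved state */

static volatile unsigned int shared_reg_shadow;

void set_status_bit(unsigned int bit)
{
    unsigned int flags = irq_save();   /* enter critical section */

    /* Read-modify-write of state shared with other tasks. Only a few
       straight-line instructions, no loops or calls, so the added
       interrupt latency stays tiny. */
    shared_reg_shadow |= (1u << bit);

    irq_restore(flags);                /* exit: pending interrupts now fire */
}
```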
When the shared resource must be reserved without blocking all other tasks (such as waiting for Flash memory to be written), it is better to use mechanisms also available on general-purpose operating systems, such as a mutex and OS-supervised interprocess messaging. Such mechanisms involve system calls, and usually invoke the OS's dispatcher code on exit, so they typically take hundreds of CPU instructions to execute, while masking interrupts may take as few as one instruction on some processors.
A (non-recursive) mutex is either locked or unlocked. When a task has locked the mutex, all other tasks must wait for the mutex to be unlocked by its "owner", the original thread. A task may set a timeout on its wait for a mutex. There are several well-known problems with mutex based designs such as priority inversion and deadlocks.
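As one concrete illustration (using POSIX threads, which many RTOSs expose as a compatibility layer, rather than any RTOS-specific API), a task can bound its wait on a mutex with a timed lock:

```c
#include <pthread.h>
#include <time.h>

static pthread_mutex_t resource_lock = PTHREAD_MUTEX_INITIALIZER;

int use_resource_with_timeout(void)
{
    struct timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline);  /* timedlock takes an absolute time */
    deadline.tv_nsec += 10 * 1000 * 1000;      /* wait at most 10 ms */
    if (deadline.tv_nsec >= 1000000000L) {
        deadline.tv_sec  += 1;
        deadline.tv_nsec -= 1000000000L;
    }

    if (pthread_mutex_timedlock(&resource_lock, &deadline) != 0)
        return -1;                             /* timed out: recover, don't block forever */

    /* ... critical section: exclusive use of the shared resource ... */

    pthread_mutex_unlock(&resource_lock);
    return 0;
}
```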
In priority inversion a high priority task waits because a low priority task has a mutex, but the lower priority task is not given CPU time to finish its work. A typical solution is to have the task that owns the mutex run at, or 'inherit', the priority of the highest waiting task. But this simple approach gets more complex when there are multiple levels of waiting: task "A" waits for a mutex locked by task "B", which waits for a mutex locked by task "C". Handling multiple levels of inheritance causes other code to run in high priority context and thus can cause starvation of medium-priority threads.
In a deadlock, two or more tasks lock mutexes without timeouts and then wait forever for the other task's mutex, creating a cyclic dependency. The simplest deadlock scenario occurs when two tasks alternately lock two mutexes, but in the opposite order. Deadlock is prevented by careful design.
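One common element of that careful design is a global lock ordering: if every task acquires mutexes in the same agreed order, the cyclic wait cannot form. A minimal sketch, again using POSIX mutexes purely for illustration:

```c
#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Both tasks take lock_a before lock_b. Had one task taken them in
   the opposite order, each could end up holding one mutex while
   waiting forever for the other: the classic deadlock. */
void task_one_work(void)
{
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    /* ... use both shared resources ... */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
}

void task_two_work(void)
{
    pthread_mutex_lock(&lock_a);   /* same order as task_one_work */
    pthread_mutex_lock(&lock_b);
    /* ... */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
}
```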
The other approach to resource sharing is for tasks to send messages in an organized message passing scheme. In this paradigm, the resource is managed directly by only one task. When another task wants to interrogate or manipulate the resource, it sends a message to the managing task. Although their real-time behavior is less crisp than semaphore systems, simple message-based systems avoid most protocol deadlock hazards, and are generally better-behaved than semaphore systems. However, problems like those of semaphores are possible. Priority inversion can occur when a task is working on a low-priority message and ignores a higher-priority message (or a message originating indirectly from a high priority task) in its incoming message queue. Protocol deadlocks can occur when two or more tasks wait for each other to send response messages.
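A minimal sketch of the managing-task pattern; queue_send()/queue_receive() and the message layout are hypothetical stand-ins for whatever message-queue primitives a given RTOS provides:

```c
/* Hypothetical message and queue API; a real RTOS supplies its own
   (e.g. mailboxes or message queues). Only the manager task ever
   touches the resource directly. */
typedef struct {
    int   op;       /* requested operation on the resource */
    int   arg;
    void *reply_q;  /* queue on which to send the result back */
} request_t;

extern void  queue_send(void *q, const request_t *msg);
extern void  queue_receive(void *q, request_t *msg);  /* blocks until a message arrives */
extern void *resource_request_q;

void resource_manager_task(void)
{
    request_t req;
    for (;;) {
        queue_receive(resource_request_q, &req);  /* wait for the next request */
        /* perform req.op on the resource on the requester's behalf,
           then send the result back via req.reply_q */
    }
}
```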
Since an interrupt handler blocks the highest priority task from running, and since real-time operating systems are designed to keep thread latency to a minimum, interrupt handlers are typically kept as short as possible. The interrupt handler defers all interaction with the hardware if possible; typically all that is necessary is to acknowledge or disable the interrupt (so that it won't occur again when the interrupt handler returns) and notify a task that work needs to be done. This can be done by unblocking a driver task through releasing a semaphore, setting a flag or sending a message. A scheduler often provides the ability to unblock a task from interrupt handler context.
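A minimal sketch of this deferred-interrupt-handling pattern, written against the FreeRTOS semaphore API purely as an illustration (most RTOSs offer an equivalent mechanism under another name):

```c
#include "FreeRTOS.h"
#include "semphr.h"

static SemaphoreHandle_t rx_sem;   /* created once with xSemaphoreCreateBinary() */

/* Interrupt handler: do the bare minimum, then wake the driver task. */
void uart_rx_isr(void)
{
    BaseType_t woken = pdFALSE;

    /* ... acknowledge/clear the hardware interrupt here ... */

    xSemaphoreGiveFromISR(rx_sem, &woken);  /* unblock the driver task */
    portYIELD_FROM_ISR(woken);              /* switch at once if it is higher priority */
}

/* Driver task: all the real work happens at task level. */
void uart_driver_task(void *arg)
{
    (void)arg;
    for (;;) {
        xSemaphoreTake(rx_sem, portMAX_DELAY);  /* sleep until the ISR signals */
        /* ... drain the hardware FIFO, parse bytes, pass data on ... */
    }
}
```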
An OS maintains catalogues of objects it manages such as threads, mutexes, memory, and so on. Updates to this catalogue must be strictly controlled. For this reason, it can be problematic when an interrupt handler calls an OS function while the application is in the act of also doing so. The OS function called from an interrupt handler could find the object database to be in an inconsistent state because of the application's update. There are two major approaches to deal with this problem: the unified architecture and the segmented architecture. RTOSs implementing the unified architecture solve the problem by simply disabling interrupts while the internal catalogue is updated. The downside of this is that interrupt latency increases, potentially losing interrupts. The segmented architecture does not make direct OS calls but delegates the OS related work to a separate handler. This handler runs at a higher priority than any thread but lower than the interrupt handlers. The advantage of this architecture is that it adds very few cycles to interrupt latency. As a result, OSes which implement the segmented architecture are more predictable and can deal with higher interrupt rates compared to the unified architecture.
Similarly, System Management Mode on x86-compatible hardware can take too much time before it returns control to the operating system. For this reason it is generally considered inadvisable to write hard real-time software for x86 hardware.
Memory allocation is more critical in a real-time operating system than in other operating systems.
First, for stability there cannot be memory leaks (memory that is allocated but not freed after use). The device should work indefinitely, without ever needing a reboot. For this reason, dynamic memory allocation is frowned upon. Whenever possible, all required memory allocation is specified statically at compile time.
Another reason to avoid dynamic memory allocation is memory fragmentation. With frequent allocation and releasing of small chunks of memory, a situation may occur where available memory is divided into several sections and the RTOS is incapable of allocating a large enough continuous block of memory, although there is enough free memory. Finally, speed of allocation is important. A standard memory allocation scheme scans a linked list of indeterminate length to find a suitable free memory block, which is unacceptable in an RTOS since memory allocation has to occur within a certain amount of time.
Because mechanical disks have much longer and more unpredictable response times, swapping to disk files is not used for the same reasons as RAM allocation discussed above.
The simple fixed-size-blocks algorithm works quite well for simple embedded systems because of its low overhead.
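A minimal sketch of such a fixed-size-blocks pool: both allocation and freeing are constant-time free-list operations, so there is no fragmentation and no unbounded scan (block size and count are arbitrary illustrative values; a real RTOS would also guard these functions with a critical section):

```c
#include <stddef.h>

#define BLOCK_SIZE  64   /* arbitrary illustrative values */
#define BLOCK_COUNT 32

typedef union block {
    union block  *next;            /* valid while the block is free */
    unsigned char data[BLOCK_SIZE];
} block_t;

static block_t  pool[BLOCK_COUNT];
static block_t *free_list;

/* Chain every block onto the free list once at startup. */
void pool_init(void)
{
    for (size_t i = 0; i < BLOCK_COUNT - 1; i++)
        pool[i].next = &pool[i + 1];
    pool[BLOCK_COUNT - 1].next = NULL;
    free_list = &pool[0];
}

/* O(1): pop the head of the free list. Returns NULL when exhausted. */
void *pool_alloc(void)
{
    block_t *b = free_list;
    if (b)
        free_list = b->next;
    return b;
}

/* O(1): push the block back onto the free list. */
void pool_free(void *p)
{
    block_t *b = p;
    b->next   = free_list;
    free_list = b;
}
```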
The Righteous Brothers
The Righteous Brothers were originally an American musical duo of Bill Medley and Bobby Hatfield. They began performing together in 1962 in the Los Angeles area as part of a five-member group called the Paramours, but adopted the name "The Righteous Brothers" when they embarked on their recording career as a duo. Their most active recording period was in the 1960s and 70s, and although the duo was inactive for some years, Hatfield and Medley reunited in 1981 and continued to perform until Hatfield's death in 2003.
Hatfield and Medley had contrasting vocal ranges, which helped them to create a distinctive sound as a duet, but also strong vocal talent individually that allowed them to perform as soloists. Medley sang the low parts with his bass-baritone voice, with Hatfield taking the higher register vocals with his tenor voice.
They had their first major hit with the 1964 song "You've Lost That Lovin' Feelin'", produced by Phil Spector and often considered one of his finest works. Other notable hits include "Ebb Tide", "Soul and Inspiration", "Rock and Roll Heaven", and in particular, their version of "Unchained Melody". Both Hatfield and Medley also had for a time their own solo careers. In 2016, Medley re-formed The Righteous Brothers with Bucky Heard and they continue to perform as a duo.
Bobby Hatfield and Bill Medley were in different groups before they met – Hatfield was in a group from Anaheim called the Variations, and Medley in a group from Santa Ana called the Paramours. Barry Rillera, a member of Medley's band who was also in Hatfield's group, suggested that they go see each other's show and perform together. Later, after a member of the Paramours left in 1962, Hatfield and Medley joined forces and formed a new Paramours, which included Johnny Wimber (a founder of the Vineyard Movement). They started performing at a club called John's Black Derby in Santa Ana, and were signed to a small record label, Moonglow, in 1962. They released a single "There She Goes (She's Walking Away)" in December 1962. However, the Paramours did not have much success and soon broke up, leaving Hatfield and Medley to perform as a duo in 1963. According to Medley, they then adopted the name "The Righteous Brothers" for the duo because black Marines from the El Toro Marine base started calling them "righteous brothers". At the end of a performance, a black U.S. Marine in the audience would shout, "That was righteous, brothers!", and on meeting them would greet them with "Hey righteous brothers, how you doin'?"
The Righteous Brothers released three albums under the Moonglow label, one of these and a further compilation album were released after they had joined Phil Spector. They released 12 singles with Moonglow, but only two were moderate hits – "Little Latin Lupe Lu" and "My Babe" from their first album "Right Now!". In August and September 1964, they opened for The Beatles in their first U.S. tour. However, they left before the tour finished as they were asked to appear on a new television show called "Shindig!"; they also felt unappreciated by the audience as they were then little known on the East Coast, and the audience demanded to hear the Beatles while they were performing. They returned to Los Angeles to tape the pilot for the show, and would later appear in the show regularly. Their next album was "Some Blue-Eyed Soul"; the term blue-eyed soul was first used to refer to The Righteous Brothers by black DJs, but after they became popular, the term became a general term for all white singers who sang what was then considered "black music". In October and November 1964, they opened for The Rolling Stones on their American tour.
In 1964, music producer Phil Spector came across the Righteous Brothers when they performed in a show at the Cow Palace in Daly City, where one of Spector's acts, The Ronettes, was also appearing and he conducted the band for the show. Spector was impressed enough to arrange a deal with Moonglow in early October 1964 allowing him to record and release songs by the Righteous Brothers in the US, Canada and UK under his own label, Philles Records. Prior to this, all the songs Spector produced for Philles Records featured black singers; the Righteous Brothers would be his first white vocal group for the label, but they had a black vocal style, referred to as "blue-eyed soul", that suited Spector.
Spector commissioned Barry Mann and Cynthia Weil to write a song for them, which turned out to be "You've Lost That Lovin' Feelin'". The song, released in late 1964, became their first major hit single and reached No. 1 in February 1965. Produced by Phil Spector, the record is often cited as one of the finest expressions of Spector's Wall of Sound production techniques. It is one of the most successful pop singles of its time, despite exceeding the then-standard length for radio play. Indeed, according to BMI, "You've Lost That Lovin' Feelin'" became the most-played song on American radio and television of the 20th century, with more than eight million airplays by the end of 1999.
The Righteous Brothers had several other hit singles with Philles Records in 1965, including "Just Once in My Life" and "Unchained Melody" (originally the B-side of "Hung on You"), both reaching the Billboard Top 10. "Unchained Melody" was produced by Medley; according to Medley, it was originally intended only as an album track, and Spector had asked him to produce the albums so Spector could spend time and money on producing singles. Later copies of the original 45 release credited Spector as producer when it became a hit.
After the success of "Unchained Melody", Spector started recording older songs with the Righteous Brothers, including "Ebb Tide", which reached No. 5. Hatfield was the only vocal on "Unchained Melody" and "Ebb Tide", and both were songs Bobby Hatfield had performed with his first group, the Variations. According to Medley, both the early singles "You've Lost That Lovin' Feelin'" and "Just Once in My Life" featured Medley's vocal strongly, which caused some friction between the duo, and the Hatfield solos in later singles restored some balance between the two. The last single released that they recorded with Philles Records was "The White Cliffs of Dover". Although Spector focused his attention in producing singles, a number of albums by the Righteous Brothers released with Philles Records sold well.
In 1965, they had a couple of guest appearances in the films "A Swingin' Summer" and "Beach Ball". They also became the first rock and roll act to play the Strip in Las Vegas (at The Sands).
The duo's relationship with Spector, however, ended in some acrimony; in 1966 they signed with Verve/MGM Records, leading to a lawsuit from Spector, which MGM settled with a $600,000 payment to him. Their next release in 1966, "Soul and Inspiration", was a Phil Spector sound-alike. Mann and Weil had begun writing the song after the success of "Lovin' Feelin'" but never completed it; they finished it at Medley's request after the Righteous Brothers moved to Verve. Medley then produced the completed song and was able to simulate the Spector style of production fully, achieving a sound similar to that of "Lovin' Feelin'". It quickly became their second No. 1 U.S. hit, staying at the top for three weeks.
After a few more top 40 hits, including "He" and "Go Ahead And Cry", their popularity began to decline. Even a collaboration with former Motown A&R chief William "Mickey" Stevenson failed to work. In 1967, before they went their separate ways, and to capitalize on their previous hits, Verve/MGM issued a "Greatest Hits" compilation which has been modified twice: in 1983 with 10 tracks and in 1990 with two more tracks.
The duo split up in February 1968, when Medley left to pursue a solo career; the breakup would last for more than six years. Medley made a few solo recordings on several labels, while Hatfield teamed up with singer Jimmy Walker (from The Knickerbockers), continuing under the Righteous Brothers name on the MGM label. Medley first recorded "I Can't Make It Alone", written by Carole King, but the song failed to make much of an impact. The following single, "Brown Eyed Woman", written by Mann and Weil, performed better. However, neither he nor Hatfield was able to match their previous chart success.
Hatfield and Jimmy Walker recorded an album, "Re-Birth", as "The Righteous Brothers" before disbanding in 1971. In a 2013 interview, Jimmy Walker said he had wanted to continue, but Hatfield decided to take a break and broke up the act. In 1969, Hatfield appeared in a TV movie, "The Ballad of Andy Crocker", and also recorded "Only You". He released a solo album, "Messin' In Muscle Shoals" in 1971.
According to Medley, he was performing three shows a night in Las Vegas but found singing solo too much of a strain on his voice; on advice, he sought out Hatfield to re-form The Righteous Brothers. Hatfield at that point was broke and living alone in a small apartment. In 1974, Medley and Hatfield announced their reunion during an appearance on "The Sonny & Cher Comedy Hour". They signed with Haven Records, run by producers Dennis Lambert and Brian Potter and distributed by Capitol Records. Within a few weeks of re-forming, they recorded Alan O'Day's "Rock and Roll Heaven", a paean to several deceased rock singers, which became a hit, peaking at No. 3 on the Billboard Hot 100. Several more minor hits on Haven followed. After 1975, however, the Righteous Brothers would not appear in the music charts except with re-releases of older songs and compilation albums, some of which contained re-recordings of earlier works.
Between 1976 and 1981, Hatfield and Medley stopped performing as a duo after the death of Medley's first wife, as he wanted time off to look after his son. They reunited for an anniversary special on "American Bandstand" in 1981 to perform an updated version of "Rock And Roll Heaven". They resumed touring intermittently, and they recorded a 21st Anniversary Celebration concert in 1983 at the Roxy on the Sunset Strip in Los Angeles, which was later released on video and was also aired on television.
In the late 1970s, Medley once again began to record as a solo artist and had some success in the 1980s. In 1984, he scored country hits with "Till Your Memory's Gone" and "I Still Do", the latter also an adult-contemporary crossover hit. In late 1987, his duet with Jennifer Warnes, "(I've Had) The Time of My Life", which appeared on the soundtrack for "Dirty Dancing", topped the "Billboard" Hot 100. It won them a Grammy Award for Best Pop Performance by a Duo or Group with Vocals.
In 1990, Bobby Hatfield's original recording of "Unchained Melody" was featured in the popular film "Ghost", starring Patrick Swayze and Demi Moore. Fans who had seen the movie flooded Top 40 radio stations with requests to play the 1965 Righteous Brothers recording, which motivated Polygram (which by then owned the Verve/MGM label archives) to re-release the song to Top 40 radio. It became a major hit for a second time, reaching No. 13 on the Hot 100 in 1990, and also became their second No. 1 in the UK. The duo quickly re-recorded another version of "Unchained Melody" for Curb Records. Both the reissued and the re-recorded versions charted at the same time for several weeks, and the Righteous Brothers made history as the first act to have two versions of the same song in the Top 20 simultaneously. The re-recorded "Unchained Melody" hit No. 19 on the Hot 100 and was certified platinum by the Recording Industry Association of America (RIAA).
They also re-recorded other songs for a budget-priced CD, "The Best of The Righteous Brothers", released by Curb Records. Medley would later describe the re-recordings as "artistically, a stupid idea; financially, a wonderful idea". The album sold very well and received a double platinum certification from the RIAA. A greatest hits CD collection of the original recordings, "The Very Best of The Righteous Brothers...Unchained Melody", was released later by Verve/Polydor; it became their first entry in the UK album chart. They toured extensively through the 1990s and early 2000s and performed for about 12 weeks a year in Las Vegas.
Bobby Hatfield was found dead in his hotel room in Kalamazoo, Michigan, on November 5, 2003, shortly before he was due to perform at a concert with Bill Medley at Western Michigan University's Miller Auditorium. The autopsy report attributed his death to heart failure brought on by cocaine. Bill Medley continued to perform as a solo artist for some time after Hatfield's death, occasionally singing with a screen projection of old film footage of Hatfield.
In January 2016, Medley announced he intended to revive the Righteous Brothers for the first time since 2003. Hatfield's place was taken by singer Bucky Heard, and the new duo played more than 40 shows at Las Vegas's Harrah's Showroom from March 23 to November 8, 2016. The repertoire included some of the Righteous Brothers' best-known songs, such as "You've Lost That Lovin' Feelin'", "Soul & Inspiration" and "Unchained Melody", as well as the later "Rock and Roll Heaven" and Bill Medley's "The Time of My Life". Medley explained that the encouragement of the Righteous Brothers' fans, along with several friends, producers and contacts in Las Vegas, made him consider reviving the Righteous Brothers' name, while acknowledging that it was a difficult choice to continue without Hatfield: "I've had a million fans hollering at me to keep the Righteous Brothers alive... I looked at a couple of guys, but you know, you can't replace Bobby Hatfield, he's the best in the world". Medley was previously acquainted with Heard and watched him perform at a tribute concert to Journey, after which he realized that Heard was the only one he considered capable of filling Hatfield's shoes, noting that the two had good chemistry together. Medley approached Heard a few days later and discussed the matter; the conversation ended in a coin toss, which Medley won, and Heard accepted his proposal. Heard has since said that he knows he can never replace Hatfield, nor will he attempt to do so, and that he intends to sing like Hatfield rather than sound like him. A new CD was released the same year, featuring several of the Righteous Brothers' hit singles sung by Medley and Heard.
The Righteous Brothers were nominated twice for a Grammy. In 1965, their recording of "You've Lost That Lovin' Feelin'" was nominated for Best Rock And Roll Recording at the 7th Annual Grammy Awards. Their re-recording of "Unchained Melody" was nominated for Best Pop Vocal Performance by a Duo or Group at the 1991 Grammy Awards. They were also voted Best New Singing Group in the Billboard Disc Jockey Poll in 1965.
The Righteous Brothers were inducted into the Rock and Roll Hall of Fame on March 10, 2003.
Current members
Bill Medley, Bucky Heard
Former members
Bobby Hatfield, Jimmy Walker
For their discography as solo artists, see Bill Medley and Bobby Hatfield.
Many compilation albums by The Righteous Brothers have been released; the following is a selection of compilation albums that received certifications.
Rajiv Gandhi
Rajiv Ratna Gandhi (; 20 August 1944 – 21 May 1991) was an Indian politician who served as the 6th Prime Minister of India from 1984 to 1989. He took office after the 1984 assassination of his mother, Prime Minister Indira Gandhi, to become the youngest Indian Prime Minister at the age of 40.
Gandhi was from the politically powerful Nehru–Gandhi family, which had been associated with the Indian National Congress party. For much of his childhood, his maternal grandfather Jawaharlal Nehru was Prime Minister. Gandhi attended college in the United Kingdom. He returned to India in 1966 and became a professional pilot for the state-owned Indian Airlines. In 1968, he married Sonia Gandhi; the couple settled in Delhi to a domestic life with their children Rahul Gandhi and Priyanka Gandhi Vadra. For much of the 1970s, his mother Indira Gandhi was prime minister and his brother Sanjay Gandhi an MP; despite this, Rajiv Gandhi remained apolitical. After Sanjay's death in a plane crash in 1980, Gandhi reluctantly entered politics at the behest of Indira. The following year he won his brother's Parliamentary seat of Amethi and became a member of the Lok Sabha—the lower house of India's Parliament. As part of his political grooming, Rajiv was made general secretary of the Congress party and given significant responsibility in organising the 1982 Asian Games.
On the morning of 31 October 1984, his mother was assassinated by two of her bodyguards; later that day, Gandhi was appointed Prime Minister. His leadership was tested over the next few days as organised mobs rioted against the Sikh community in Delhi. That December, the Congress party won the largest Lok Sabha majority to date, 411 seats out of 542. Rajiv Gandhi's period in office was mired in controversies; perhaps the greatest crises were the Bhopal disaster, the Bofors scandal and Mohd. Ahmed Khan v. Shah Bano Begum. In 1987 he intervened in Sri Lanka, sending peacekeeping troops there, which led to open conflict with the Liberation Tigers of Tamil Eelam (LTTE); in 1988 he reversed the coup in the Maldives, antagonising militant Tamil groups such as PLOTE. In mid-1987, the Bofors scandal damaged his corruption-free image and resulted in a major defeat for his party in the 1989 election.
Gandhi remained Congress President until the elections in 1991. While campaigning for the elections, he was assassinated by a suicide bomber from the LTTE. His widow Sonia became the president of the Congress party in 1998 and led the party to victory in the 2004 and 2009 parliamentary elections. His son Rahul Gandhi is a Member of Parliament and was the President of the Indian National Congress till 2019. In 1991, the Indian government posthumously awarded Gandhi the Bharat Ratna, the country's highest civilian award. At the India Leadership Conclave in 2009, the "Revolutionary Leader of Modern India" award was conferred posthumously on Gandhi.
Rajiv Gandhi was born in Bombay on 20 August 1944 to Indira and Feroze Gandhi. In 1951, Rajiv and Sanjay were admitted to Shiv Niketan school, where the teachers said Rajiv was shy and introverted, and "greatly enjoyed painting and drawing".
He was admitted to the Welham Boys' School, Dehradun, and the Doon School, Dehradun, in 1954, where Sanjay joined him two years later. Rajiv was sent to London in 1961 to study A-levels. He was also educated at the Ecole D'Humanité, an international boarding school in Switzerland. From 1962 to 1965 he studied engineering at Trinity College, Cambridge, but did not obtain a degree. In 1966 he began a course in mechanical engineering at Imperial College London, but did not complete it. Gandhi later admitted that he had not really been studious.
Gandhi returned to India in 1966, the year his mother became Prime Minister. He went to Delhi and became a member of the Flying Club, where he trained as a pilot. In 1970, he was employed as a pilot by Indian Airlines; unlike Sanjay, he did not exhibit any interest in joining politics. In 1968, after three years of courtship, he married Edvige Antonia Albina Màino, who changed her name to Sonia Gandhi and made India her home. Their first child, a son, Rahul Gandhi, was born in 1970. In 1972, the couple had a daughter, Priyanka Gandhi, who later married Robert Vadra.
On 23 June 1980, Rajiv's younger brother Sanjay Gandhi died unexpectedly in an aeroplane crash. At the time, Rajiv Gandhi was in London as part of a foreign tour. Hearing the news, he returned to Delhi and cremated Sanjay's body. According to historian Meena Agarwal, in the week following Sanjay's death, Shankaracharya Swami Shri Swaroopanand, a saint from Badrinath, visited the family's house to offer his condolences and advised Rajiv not to fly aeroplanes and instead "dedicate himself to the service of the nation". Seventy members of the Congress party signed a proposal and went to Indira, urging Rajiv to enter politics; Indira told them it was Rajiv's decision. When he was questioned about it, he replied, "If my mother gets help from it, then I will enter politics". Rajiv entered politics on 16 February 1981, when he addressed a national farmers' rally in Delhi. During this time, he was still an employee of Indian Airlines.
On 4 May 1981, Indira Gandhi presided over a meeting of the All India Congress Committee at which Vasantdada Patil proposed Rajiv as the candidate for the Amethi constituency, a proposal accepted by all members present. A week later, the party officially announced his candidacy. He then paid his party membership fees and flew to Sultanpur to file his nomination papers and complete other formalities. He won the seat, defeating Lok Dal candidate Sharad Yadav by a margin of 237,000 votes, and took his oath as a Member of Parliament on 17 August.
Rajiv Gandhi's first political tour was to England, where he attended the wedding ceremony of Prince Charles and Lady Diana Spencer on 29 July 1981. In December the same year, he was put in charge of the Indian Youth Congress. He first showed his organisational ability by "working round the clock" on the 1982 Asian Games. He was one of 33 members of the Indian parliament who were part of the Games' organising committee; sports historian Boria Majumdar writes that being "son of the prime minister he had a moral and unofficial authority" over the others. The report submitted by the Asian Games committee mentions Gandhi's "drive, zeal and initiative" for the "outstanding success" of the games.
On 31 October 1984, Prime Minister Indira Gandhi was assassinated by two of her Sikh bodyguards, Satwant Singh and Beant Singh, to avenge the military attack on the Golden Temple during Operation Blue Star. Rajiv Gandhi was in West Bengal at the time; Sardar Buta Singh and President Zail Singh pressed him to succeed his mother as Prime Minister within hours of her murder. In the days that followed, organised mobs rioted against the Sikh community in Delhi, and many Congress politicians were accused of orchestrating the violence. At a Boat Club rally 19 days after the assassination, Gandhi said, "Some riots took place in the country following the murder of Indiraji. We know the people were very angry and for a few days it seemed that India had been shaken. But, when a mighty tree falls, it is only natural that the earth around it does shake a little". The statement was widely criticised: according to Verinder Grover, it was a "virtual justification" of the riots, and Congress leader Mani Shankar Aiyar asked, "Did it constitute an incitement to mass murder?", also criticising Gandhi for his reluctance to bring in the army from Meerut to handle the mobs.
Soon after assuming office, Gandhi asked President Singh to dissolve Parliament and hold fresh elections, as the Lok Sabha had completed its five-year term. Gandhi officially became President of the Congress party, which won a landslide victory with the largest majority in the history of the Indian Parliament, giving Gandhi absolute control of government. He benefited from his youth and a general perception of being free of a background in corrupt politics. Gandhi took his oath on 31 December 1984; at 40, he was the youngest Prime Minister of India. Agarwal writes that even after taking the prime-ministerial oath he was a relatively unknown figure, a "novice in politics", having been an MP for only three years.
After his swearing-in as Prime Minister, Gandhi appointed his fourteen-member cabinet. He said he would monitor their performance and would "fire ministers who do not come to the mark". From the Third Indira Gandhi ministry he removed two powerful figures: Finance Minister Pranab Mukherjee and Railway Minister A. B. A. Ghani Khan Choudhury. Mohsina Kidwai became Minister of Railways; she was the only woman in the cabinet. Former Home Minister P. V. Narasimha Rao was put in charge of defence. V. P. Singh, initially appointed Finance Minister, was given the Defence Ministry in 1987. During his tenure as Prime Minister, Gandhi frequently shuffled his cabinet ministers, drawing criticism from the magazine "India Today", which called it a "wheel of confusion". The West Bengal chief minister Jyoti Basu said, "The Cabinet change reflects the instability of the Congress (I) Government at the Centre".
Gandhi's first action as Prime Minister was passing the anti-defection law in January 1985. Under this law, an elected Member of Parliament or of a legislative assembly could not join an opposition party until the next election. Historian Manish Telikicherla Chary calls it a measure to curb the corruption and bribery by which ministers switched parties to help a rival gain a majority. Many such defections had occurred during the 1980s as elected leaders of the Congress party joined opposition parties.
In 1985, the Supreme Court of India ruled in favour of Muslim divorcee Shah Bano, declaring that her husband should pay her alimony. Some Indian Muslims treated the ruling as an encroachment upon Muslim Personal Law and protested against it, and Gandhi agreed to their demands. In 1986, the Parliament of India passed The Muslim Women (Protection of Rights on Divorce) Act 1986, which nullified the Supreme Court's judgment in the Shah Bano case. The Act diluted the judgment and allowed maintenance payments to divorced women only during the period of iddah, or until 90 days after the divorce, according to the provisions of Islamic law; this was in contrast to Section 125 of the Code of Criminal Procedure. The Indian magazine "Business and Economics" called it an act of minority appeasement by Gandhi. Lawyer and former Law Minister of India Ram Jethmalani called the Act "retrogressive obscurantism for short-term minority populism". Gandhi's colleague Arif Mohammad Khan, then a Member of Parliament, resigned in protest.
In his election manifesto for the 1984 general election, he did not mention any economic reforms, but after assuming office he tried to liberalise the country's economy. He did so by providing incentives to make private production profitable; subsidies were given to corporate companies to increase industrial production, especially of durable goods. It was hoped this would increase economic growth and improve the quality of investment. But according to Professor Atul Kohli of Princeton University, writing in a book published by Cambridge University Press, Gandhi faced stiff opposition from the Congress leadership, who thought the measures "would open the economy to external economic influences". Rural and tribal people protested because they saw the measures as "pro-rich" and "pro-city" reforms.
Gandhi increased government support for science, technology and associated industries, and reduced import quotas, taxes and tariffs on technology-based industries, especially computers, airlines, defence and telecommunications. In 1986, he announced a National Policy on Education to modernise and expand higher education programs across India. In 1986, he founded the Jawahar Navodaya Vidyalaya System, which is a Central government-based education institution that provides rural populations with free residential education from grades six to twelve. His efforts created MTNL in 1986, and his public call offices—better known as PCOs—helped develop the telephone network in rural areas. He introduced measures to significantly reduce the "Licence Raj" after 1990, allowing businesses and individuals to purchase capital, consumer goods and import without bureaucratic restrictions.
According to Rejaul Karim Laskar, a scholar of Indian foreign policy and an ideologue of the Congress party, Rajiv Gandhi's vision for a new world order was premised on India's place in its front rank: the "whole gamut" of his foreign policy was "geared towards" making India "strong, independent, self-reliant and in the front rank of the nations of the world", and his diplomacy was "properly calibrated" so as to be "conciliatory and accommodating when required" and "assertive when the occasion demanded".
In 1986, at the request of Seychelles President France-Albert René, Gandhi sent the Indian Navy to Seychelles to oppose an attempted coup against René; the intervention averted the coup. The mission was codenamed Operation Flowers Are Blooming. In 1987, India re-occupied the Quaid Post in the disputed Siachen region of the Indo-Pakistani border after winning what was termed Operation Rajiv. During the 1988 Maldives coup d'état, the Maldivian president Maumoon Abdul Gayoom asked Gandhi for help; he dispatched 1,500 soldiers and the coup was suppressed.
On Thursday, 9 June 1988, at the fifteenth special session of the United Nations General Assembly, held at the UN Headquarters in New York, Gandhi voiced his views on a world free of nuclear weapons, to be realised through an "Action Plan for Ushering in a Nuclear-Weapon Free and Non-Violent World Order".
He said:
Alas, nuclear weapons are not the only weapons of mass destruction. New knowledge is being generated in the life sciences. Military applications of these developments could rapidly undermine the existing convention against the military use of biological weapons. The ambit of our concern must extend to all means of mass annihilation.
This was based on his prior historic speech before the Japanese National Diet on 29 November 1985, in which he said:
Let us remove the mental partitions which obstruct the ennobling vision of the human family linked together in peace and prosperity. The Buddha's message of compassion is the very condition of human survival in our age.
India's recent failed bid to enter the Nuclear Suppliers Group echoed his policy of linking non-proliferation to universal disarmament, a linkage the World Nuclear Association refuses to recognise; India regards non-proliferation as essentially a weapon of the arms-control regime of the big nuclear powers: the United States, Russia, the United Kingdom, France, and China.
In February 1987, the Pakistani President Zia-ul-Haq visited Delhi, where he met Gandhi to discuss "routine military exercises of the Indian army" on the borders of Rajasthan and Punjab. Gandhi reciprocated, in December 1988, by visiting Islamabad and meeting the new Prime Minister of Pakistan, Benazir Bhutto, to reaffirm the 1972 Shimla agreement.
The Sri Lankan Civil War broke out between the Sri Lankan government and the Liberation Tigers of Tamil Eelam (LTTE), which was demanding an independent Tamil state in Sri Lanka. Gandhi discussed the matter with the Sri Lankan Prime Minister Ranasinghe Premadasa at the SAARC meeting in 1986. That year, the Sri Lankan army blockaded the Tamil-majority district of Jaffna; Gandhi ordered relief supplies to be dropped into the area by parachute because the Sri Lankan navy would not allow the Indian Navy to enter.
Gandhi signed the Indo-Sri Lanka Accord in July 1987. The accord "envisaged a devolution of power to the Tamil-majority areas", required Tamil militant groups, including the LTTE, to disarm, and designated Tamil as an official language of Sri Lanka. Gandhi said:
The Government of India believe that, despite some problems and delays, many of which were foreseen but unavoidable in the resolution of an issue of this magnitude and complexity, this Agreement represents the only way of safeguarding legitimate Tamil interests and ensuring a durable peace in Sri Lanka. Some have chosen to criticise the Agreement. None has shown a better way of meeting the legitimate aspirations of the Tamils in Sri Lanka, restoring peace in that country and of meeting our own security concern in the region. We have accepted a role which is difficult, but which is in our national interests to discharge. We shall not shrink our obligations and commitments. This is a national endeavour.
Gandhi's successor, V. P. Singh, ordered the withdrawal of the IPKF in 1989, and the last troops left Sri Lanka in March 1990.
On 30 July 1987, a day after Gandhi went to Sri Lanka and signed the Indo-Sri Lanka Accord, a member of the honour guard, Wijemuni Vijitha Rohana, hit him on the shoulder with his rifle; Gandhi's quick reflexes saved him from injury. The guard was dragged off by Gandhi's security personnel and later said his intention had been to kill Gandhi because of "the damage he had caused" to Sri Lanka. Wijemuni was imprisoned for two-and-a-half years for the assault. Gandhi later said of the incident:
When I was inspecting the guard of honour and as I walked past one person, I saw through the corner of my eye some movement. I ducked down a little bit in a reflex action. By my ducking, he missed my head and the brunt of the blow came on my shoulder below the left ear.
Soon after assuming office, Gandhi released the leaders of the Akali Dal who had been imprisoned since Operation Blue Star in 1984, during Indira Gandhi's prime ministership. He lifted the ban on the All India Sikh Students Federation and ordered an inquiry into the 1984 anti-Sikh riots. He also held a closed-door meeting with senior Akali Dal leaders to find a solution to the Punjab problem. Despite Akali opposition, in July 1985 Gandhi signed the Rajiv-Longowal Accord with Akali leader H. S. Longowal. Punjab's state assembly election was scheduled for September 1985, but Longowal died before it was held; Surjit Singh Barnala succeeded him, won the election and formed the government. After two years, in 1987, Barnala resigned his office because of a breakdown of law and order, leading to the imposition of President's rule in the state.
In May 1988, Gandhi launched Operation Black Thunder to clear the Golden Temple in Amritsar of arms and gunmen. Two units, the National Security Guard and the Special Action Group, were created; they surrounded the temple in a 10-day siege during which the extremists' weapons were confiscated. Congress leader Anand Sharma said, "Operation Black Thunder effectively demonstrated the will of Rajiv Gandhi's government to take firm action to bring peace to Punjab".
Gandhi's prime ministership saw an increase in insurgency in northeast India, where the Mizo National Front demanded independence for Mizoram. Gandhi addressed this problem in 1987: Mizoram and Arunachal Pradesh, previously union territories, were granted statehood. Gandhi also ended the Assam Movement, launched by Assamese people to protest against the alleged illegal migration of Bangladeshi Muslims and the immigration of other Bengalis to their state, which had reduced the Assamese to a minority there, by signing the Assam Accord on 15 August 1985. Under the accord, foreigners who had arrived in the state between 1951 and 1961 were given full citizenship, but those who had arrived between 1961 and 1971 did not receive the right to vote for the next ten years.
Gandhi employed former Rockwell International executive Sam Pitroda as his adviser on public information infrastructure and innovation. During Gandhi's time in office, the public-sector telecom companies MTNL and VSNL were developed. According to Pitroda, Gandhi's ability to resist pressure from multinational companies to abandon his plan to spread telecommunication services was an important factor in India's development. According to the news website Oneindia, "About 20 years ago telephones were considered to be a thing for the use of the rich, but credit goes to Rajiv Gandhi for taking them to the rural masses". Pitroda also said their plan to expand India's telephone network succeeded because of Gandhi's political support; by 2007, he noted, they were "adding six million phones every month". Gandhi's government also allowed the import of fully assembled motherboards, which reduced the price of computers. According to some commentators, the seed of the information technology (IT) revolution was also planted during Rajiv Gandhi's time.
Rajiv Gandhi's finance minister, V. P. Singh, uncovered compromising details about government and political corruption, to the consternation of Congress leaders. Transferred to the Defence Ministry, Singh uncovered what became known as the Bofors scandal, which involved millions of US dollars and concerned alleged payoffs by the Swedish arms company Bofors, through Italian businessman and Gandhi family associate Ottavio Quattrocchi, in return for Indian contracts. After uncovering the scandal, Singh was dismissed from office and later resigned his Congress membership. Gandhi was later personally implicated in the scandal when the investigation was continued by Narasimhan Ram and Chitra Subramaniam of "The Hindu" newspaper, damaging his image as an honest politician. In 2004, he was posthumously cleared of the allegation.
In an interview in July 2005, V. P. Singh explained that his falling-out with Rajiv Gandhi was due not to the Bofors deal but to the HDW deal. Under a contract signed with the German company HDW in 1981, the Indian government had agreed to purchase two ready-built submarines constructed in Germany by HDW and two more in knocked-down (CKD) form to be assembled at the Mazagon docks. Singh had received a telegram from the Indian ambassador in Germany stating that an Indian agent had received commissions in the HDW submarine deal. He told Rajiv Gandhi about this and instituted an enquiry; this led to differences, and Singh resigned from the cabinet.
In his book, "Unknown Facets of Rajiv Gandhi, Jyoti Basu and Indrajit Gupta", released in November 2013, former CBI director Dr. A P Mukherjee wrote that Gandhi wanted commission paid by defence suppliers to be used exclusively for meeting running expenses of the Congress party. Mukherjee said Gandhi explained his position in a meeting between the two at the Prime Minister's residence on 19 June 1989. In May 2015, Indian president Pranab Mukherjee said the scandal was a "media trial" as "no Indian court has as yet established it as a scandal".
The opposition parties Lok Dal, Indian National Congress (Socialist) and Jan Morcha united under Singh to form the Janata Dal. Singh led the National Front coalition to victory in the 1989 elections and was sworn in as Prime Minister. Although the coalition won only 143 seats to Congress's 197, it gained a majority in the lower house of parliament through outside support from the Bharatiya Janata Party, under the leadership of Atal Bihari Vajpayee and Lal Krishna Advani, and from left parties such as the Communist Party of India (Marxist) and the Communist Party of India. Ram Jethmalani said that as Prime Minister, Gandhi had been "lacklustre and mediocre".
In November 1991, "Schweizer Illustrierte" magazine published an article on black money held in secret accounts by Imelda Marcos and 14 other rulers of Third World countries. Citing McKinsey as a source, the article stated that Rajiv Gandhi held 2.5 billion Swiss francs in secret Indian accounts in Switzerland. Several leaders of opposition parties in India raised the issue, citing the "Schweizer Illustrierte" article. In December 1991, Amal Datta raised the issue in the Indian Parliament; the Speaker of the Lok Sabha, Shivraj Patil, expunged Rajiv Gandhi's name from the proceedings. In December 2011, Subramanian Swamy wrote to the director of the Central Bureau of Investigation, citing the article and asking him to take action on black money accounts of the Nehru-Gandhi family. On 29 December 2011, Ram Jethmalani made an indirect reference to the issue in the Rajya Sabha, calling it a shame that one of India's former Prime Ministers was named by a Swiss magazine. This was met by an uproar and a demand for withdrawal of the remark by the ruling Congress party members.
In 1992, the Indian newspapers "Times of India" and "The Hindu" published reports alleging that Rajiv Gandhi had received funds from the KGB. The Russian government confirmed this disclosure and defended the payments as necessary for Soviet ideological interest. In their 1994 book "The State Within a State", journalists Yevgenia Albats and Catherine Fitzpatrick quoted a letter signed by Viktor Chebrikov, head of the KGB, in the 1980s. The letter says the KGB maintained contact with Gandhi, who expressed his gratitude to the KGB for benefits accruing to his family from commercial dealings of a controlled firm. A considerable portion of funds obtained from this channel were used to support his party. Albats later said that in December 1985, Chebrikov had asked for authorisation from the Central Committee of the Communist Party of the Soviet Union to make payments to family members of Rajiv Gandhi, including Sonia Gandhi and Rahul Gandhi. The payments were authorised by a resolution and endorsed by the USSR Council of Ministers, and had been paid since 1971. In December 2001, Subramanian Swamy filed a writ petition in the Delhi High Court; the Court ordered CBI to ascertain the truth of the allegations in May 2002. After two years, the CBI told the Court Russia would not entertain such queries without a registered FIR.
Rajiv Gandhi's last public meeting was on 21 May 1991, at Sriperumbudur, a village near Madras (present-day Chennai), where he was assassinated while campaigning for the Sriperumbudur Lok Sabha Congress candidate. At 10:10 pm, a woman later identified as Thenmozhi Rajaratnam approached Gandhi in public and greeted him; she then bent down to touch his feet and detonated a belt of RDX explosives tucked under her dress. The explosion killed Gandhi, Rajaratnam, and at least 14 other people. The assassination was captured by a 21-year-old local photographer named Haribabu, whose camera and film were found at the site; Haribabu died in the blast, but the camera remained intact. Gandhi's mutilated body was airlifted to the All India Institute of Medical Sciences in New Delhi for post-mortem, reconstruction and embalming.
A state funeral was held for Gandhi on 24 May 1991; it was telecast live and was attended by dignitaries from over 60 countries. He was cremated at Veer Bhumi, on the banks of the river Yamuna near the shrines of his mother Indira Gandhi, brother Sanjay Gandhi, and grandfather Jawaharlal Nehru.
The Supreme Court judgment, by Justice K. T. Thomas, confirmed that Gandhi was killed because of the personal animosity of LTTE chief Prabhakaran, arising from Gandhi's sending of the Indian Peace Keeping Force (IPKF) to Sri Lanka and the alleged IPKF atrocities against Sri Lankan Tamils. The Gandhi administration had already antagonised other Tamil militant organisations, such as PLOTE, by reversing the 1988 military coup in the Maldives. The judgment further cites the death of Thileepan during a hunger strike and the suicide of 12 LTTE cadres in a vessel in October 1987.
In the Jain Commission report, various people and agencies were named as suspects in the murder of Rajiv Gandhi; among them, the cleric Chandraswami was suspected of involvement, including of financing the assassination. Nalini Sriharan, the only surviving member of the five-member squad behind the assassination, is serving life imprisonment. Arrested on 14 June 1991, she and 25 others were sentenced to death by a special court on 28 January 1998. On 11 May 1999, the Supreme Court confirmed the death sentences of four of the convicts, including Nalini. Nalini was a close friend of an LTTE operative known as Sriharan, alias Murugan, another convict in the case who was also sentenced to death; she later gave birth to a girl, Harithra, in prison. Nalini's death sentence was commuted to life imprisonment in April 2000 after Rajiv's widow, Sonia Gandhi, intervened and asked for clemency on the grounds that Nalini was a mother. It was later reported that Gandhi's daughter, Priyanka Gandhi Vadra, met Nalini at Vellore Central Prison in March 2008. Nalini has said she regrets the killing of Gandhi and that the real conspirators have not yet been caught.
In August 2011, the President of India rejected the clemency pleas of Murugan and two others on death row: Suthendraraja, alias Santhan, and Perarivalan, alias Arivu. The execution of the three convicts was scheduled for 9 September 2011, but the Madras High Court intervened and stayed the executions for eight weeks on the basis of their petitions. In 2010, Nalini had petitioned the Madras High Court seeking release, arguing that she had served more than 20 years and that even life convicts were released after 14 years; the state government rejected her request. Murugan, Santhan and Perarivalan have said they are political prisoners rather than ordinary criminals. On 18 February 2014, the Supreme Court of India commuted the death sentences of Murugan, Santhan and Perarivalan to life imprisonment, holding that the 11-year delay in deciding their mercy petitions had a dehumanising effect on them. On 19 February 2014, the Tamil Nadu government decided to release all seven convicts in Rajiv Gandhi's assassination case, including A. G. Perarivalan and Nalini. The Government of India challenged this decision before the Supreme Court, which referred the case to a Constitution Bench.
The report of the Jain Commission created controversy when it accused the Tamil Nadu chief minister Karunanidhi of a role in the assassination, leading to Congress withdrawing its support for the I. K. Gujral government and fresh elections in 1998. LTTE spokesman Anton Balasingham told the Indian television channel NDTV the killing was a "great tragedy, a monumental historical tragedy which we deeply regret". A memorial called Veer Bhumi was constructed at the place of Gandhi's cremation in Delhi. In 1992, the Rajiv Gandhi National Sadbhavana Award was instituted by the Indian National Congress Party.
Since his death, 21 May has been declared Anti-Terrorism Day in India.
A Right to Information (RTI) request filed in August 2009 found that more than 450 government projects and schemes were named after the Gandhi-Nehru family. In May 2012, "Zee News" reported there were 16 government schemes named after Rajiv Gandhi, including the Rajiv Awas Yojana and the Rajiv Gandhi Udyami Mitra Yojana. In March 2015, Haryana sports minister Anil Vij said that of the 232 rural stadia in India at that time, 226 were named after Rajiv Gandhi, and that the government was "planning to rename" all the stadia in Haryana named after him. Vij drew criticism from Congress leader Kuldeep Sharma, who said it was an "insult to their national leaders".
A movie titled "Madras Cafe" showed the planning of an intelligence agency to stop the assassination and catch the LTTE leader. However, at the end, they were unable to save Rajiv Gandhi. | https://en.wikipedia.org/wiki?curid=26129 |
Racial profiling
Racial or ethnic profiling is the act of suspecting or targeting a person on the basis of assumed characteristics or behavior of a racial or ethnic group, rather than on individual suspicion. Racial profiling, however, is not limited to an individual's ethnicity or race; it can also be based on the individual's religion or national origin. In European countries, the term "ethnic profiling" is also used instead of racial profiling.
Racial profiling of visible minorities, who accuse police of targeting them because of their ethnic background, is a growing concern in Canada. In 2005, the Kingston Police released the first Canadian study on racial profiling. The study focused on Kingston, Ontario, a small city where most of the inhabitants are white. It showed that black people were 3.7 times more likely to be pulled over by police than white people, while Asian and white people were less likely to be pulled over than black people. Several police organizations condemned the study and suggested that more studies like it would make officers hesitant to pull over visible minorities.
Canadian Aboriginals are more likely to be charged with crimes, particularly on reserves. The Canadian crime victimization survey does not collect data on the ethnic origin of perpetrators, so comparisons between the incidence of victimization and the incidence of charging are impossible. Although aboriginal persons make up 3.6% of Canada's population, they account for 20% of Canada's prison population. This disparity may indicate that racial profiling increases the apparent effectiveness of police, or it may itself be a result of racial profiling, as Aboriginal people are watched more intensely than others.
In February 2010, an investigation of the "Toronto Star" daily newspaper found that black people across Toronto were three times more likely to be stopped and documented by police than white people. To a lesser extent, the same seemed true for people described by police as having "brown" skin (South Asians, Arabs and Latinos). This was the result of an analysis of 1.7 million contact cards filled out by Toronto Police officers in the period 2003–2008.
The Ontario Human Rights Commission states that "police services have acknowledged that racial profiling does occur and have taken [and are taking] measures to address [the issue], including upgrading training for officers, identifying officers at risk of engaging in racial profiling, and improving community relations". The Ottawa Police addressed the issue and planned to implement a new policy on racial profiling by officers: "the policy explicitly forbids officers from investigating or detaining anyone based on their race and will force officers to go through training on racial profiling". The policy was implemented after a 2008 incident in which an African-Canadian woman was strip-searched by members of the Ottawa Police. A video of the strip search, released to the public in 2010, shows the woman being held to the ground and having her bra and shirt ripped and cut off by a member of the Ottawa Police Force.
The Chinese government has been using facial recognition technology, analysing the output of surveillance cameras to track and control the Uyghurs, a Muslim minority in China's western province of Xinjiang. The extent of the vast system was made public in the spring of 2019 by "The New York Times", which called it "automated racism". In research projects aided by European institutions, China has combined the facial data with people's DNA to create ethnic profiles. The DNA was collected at the internment camps holding more than one million Uyghurs, as corroborated in November 2019 by data leaks such as the China Cables.
In February 2012, a court issued the first ruling concerning racial profiling in German police policy, allowing police to use skin color and "non-German ethnic origin" to select persons who would be asked for identification in spot-checks for illegal immigrants. Subsequently, it was ruled legal for a person subjected to a spot-check to publicly compare the policy to that of the SS. A higher court later overruled the earlier decision, declaring the racial profiling unlawful and in violation of the anti-discrimination provisions in Art. 3 of the Basic Law and the General Equal Treatment Act of 2006.
The civil rights organisation "Büro zur Umsetzung von Gleichbehandlung" (Office for the Implementation of Equal Treatment) makes a distinction between criminal profiling, which is legitimate in Germany, and ethnic profiling, which is not.
According to a 2016 report by the Interior Ministry in Germany, there had been an increase in hate crimes and violence against migrant groups in Germany. The report concluded that there were more than 10 attacks per day against migrants in Germany in 2016. The report garnered the attention of the United Nations, which alleged that people of African descent face widespread discrimination in Germany.
A 2016 statement by the Office of the UN High Commissioner for Human Rights, issued after a visit to Germany, said that "although the [German] constitution guarantees equality, bans racial discrimination and enshrines the inviolability of human dignity, these principles are not put into practice", and called racial profiling against Africans endemic.
In 1972, terrorists from the Japanese Red Army launched an attack that led to the deaths of at least 24 people at Ben Gurion Airport. Since then, security at the airport has relied on a number of fundamentals, including a heavy focus on what Raphael Ron, former director of security at Ben Gurion, terms the "human factor", which he generalized as "the inescapable fact that terrorist attacks are carried out by people who can be found and stopped by an effective security methodology." As part of its focus on this so-called "human factor," Israeli security officers interrogate travelers using racial profiling, singling out those who appear to be Arab based on name or physical appearance. Additionally, all passengers, including those who do not appear to be of Arab descent, are questioned as to why they are traveling to Israel, followed by several general questions about the trip in order to search for inconsistencies. Although numerous civil rights groups have demanded an end to the profiling, the Israeli government maintains that it is both effective and unavoidable. According to Ariel Merari, an Israeli terrorism expert, "it would be foolish not to use profiling when everyone knows that most terrorists come from certain ethnic groups. They are likely to be Muslim and young, and the potential threat justifies inconveniencing a certain ethnic group."
The General Law on Population (Reglamento de la Ley General de Poblacion) of 2000 in Mexico has been cited as being used to racially profile and abuse immigrants to Mexico. Mexican law makes illegal immigration punishable by law and allows law officials great discretion in identifying and questioning illegal immigrants. Mexico has been criticized for its immigration policy. Chris Hawley of "USA Today" stated that "Mexico has a law that is no different from Arizona's", referring to legislation which gives local police forces the power to check documents of people suspected of being in the country illegally. Immigration and human rights activists have also noted that Mexican authorities frequently engage in racial profiling, harassment, and shakedowns against migrants from Central America.
Racial profiling by police forces in Spain is a common practice. A study by the University of Valencia found that people of non-white appearance are up to ten times more likely to be stopped by the police on the street. Amnesty International has accused Spanish authorities of using racial and ethnic profiling, with police singling out people who do not look Caucasian on the street and in public places.
In 2011, the United Nations Committee on the Elimination of Racial Discrimination (CERD) urged the Spanish government to take "effective measures" against ethnic profiling, including the modification of existing laws and regulations which permit its practice. In 2013, the UN Special Rapporteur, Mutuma Ruteere, described the practice of ethnic profiling by Spanish law enforcement officers as "a persisting and pervasive problem". In 2014, the Spanish government approved a law which prohibited racial profiling by police forces.
According to the American Civil Liberties Union (ACLU): "'Racial profiling' refers to the practice by law enforcement officials of targeting individuals for suspicion of crime based on the individual's race, ethnicity, religion or national origin. Criminal profiling, generally, as practiced by police, is the reliance on a group of characteristics they believe to be associated with crime. Examples of racial profiling are the use of race to determine which drivers to stop for minor traffic violations (commonly referred to as 'driving while black, Asian, Native American, Middle Eastern, Hispanic, or brown'), or the use of race to determine which pedestrians to search for illegal contraband." Besides such disproportionate searching of African Americans and members of other minority groups, other examples of racial profiling by law enforcement in the U.S. include the targeting of Hispanic and Latino Americans in the investigation of illegal immigration, and the focus on Middle Eastern and South Asians present in the country in screenings for ties to Islamic terrorism. These suspicions may be held on the basis of a belief that members of a target racial group commit crimes at a higher rate than other racial groups.
According to Minnesota House of Representatives analyst Jim Cleary, "there appears to be at least two clearly distinguishable definitions of the term 'racial profiling': a narrow definition and a broad definition... Under the narrow definition, racial profiling occurs when a police officer stops and/or searches someone solely on the basis of the person's race or ethnicity... Under the broader definition, racial profiling occurs whenever police routinely use race as a factor that, along with an accumulation of other factors, causes an officer to react with suspicion and take action."
A study conducted by the Domestic Human Rights Program of Amnesty International USA found that racial profiling had increased in the period from the September 11, 2001 terrorist attacks to 2004, and that state laws provided inconsistent and insufficient protections against it.
In the United States, racial profiling most commonly refers to its use by law enforcement at the local, state, and federal levels, and to the resulting discrimination against people in the African American, Native American, Asian, Pacific Islander, Latino, Arab, and Muslim communities of the U.S.
Sociologist Robert Staples emphasizes that racial profiling in the U.S. is "not merely a collection of individual offenses" but, rather, a systemic phenomenon across American society, dating back to the era of slavery, and, until the 1950s, was, in some instances, "codified into law". Enshrinement of racial profiling ideals in United States law can be exemplified by several major periods in U.S. history.
In 1693, Philadelphia's court officials gave police legal authority to stop and detain any Negro (freed or enslaved) seen wandering about. Starting around the mid 18th century, slave patrols were used to stop slaves at any location in order to ensure they were being lawful. In the mid 19th century, the Black Codes, a set of statutes, laws and rules, were enacted in the South in order to regain control over freed and former slaves and relegate African Americans to a lower social status. Similar discriminatory practices continued through the Jim Crow era.
Long before the U.S. immigration restrictions that followed the September 11 attacks, Japanese immigrants had been denied U.S. citizenship during World War II for fear of disloyalty after the attack on Pearl Harbor. The result was the government's preemptive internment of more than 100,000 Japanese immigrants and Japanese American citizens during World War II, as a measure against potential Japanese espionage, constituting a form of racial profiling.
In the late 1990s, racial profiling became politicized when police and other law enforcement agencies fell under scrutiny for disproportionate traffic stops of minority motorists. Researchers from the American Civil Liberties Union (ACLU) provided evidence of widespread racial profiling; one study showed that while blacks made up only 42 percent of New Jersey's driving population, they accounted for 79 percent of motorists stopped in the state.
"Terry v. Ohio" was the first challenge to racial profiling in the United States in 1968. This case was about African American people who were thought to be stealing. The police officer arrested the three men and searched them and found a gun on two of the three men, and John W. Terry (one of the three men searched) was convicted and sentenced to jail. Terry challenged the arrest on the grounds that it violated the search and seizure clause of the Fourth Amendment; however, in an 8-1 ruling, the Supreme Court decided that the police officer acted in a reasonable manner, and with reasonable suspicion, under the Fourth Amendment. The decision in this case allowed for greater police discretion in identifying suspicious or illegal activities.
In 1975, "United States v. Brignoni-Ponce" was decided. Felix Humberto Brignoni-Ponce was traveling in his vehicle and was stopped by border patrol agents because he appeared to be Mexican. The agents questioned Brignoni-Ponce and the other passengers in the car and discovered that the passengers were illegal immigrants, and the border agents subsequently arrested all occupants of the vehicle. The Supreme Court determined that the testimonies that led to the arrests, in this case, were not valid, as they were obtained in the absence of reasonable suspicion and the vehicle was stopped without probable cause, as required under the Fourth Amendment.
In 1996, the U.S. Supreme Court ruled in "United States v. Armstrong" that disparity in conviction rates is not unconstitutional in the absence of data showing that "similarly situated" defendants of another race were disparately prosecuted. The ruling overturned a 9th Circuit Court decision that had been based on "the presumption that people of all races commit all types of crimes – not with the premise that any type of crime is the exclusive province of any particular racial or ethnic group", and it set aside challenges based on the Fourth Amendment of the U.S. Constitution, which guarantees the right to be safe from search and seizure without a warrant (which is to be issued "upon probable cause"), and the Fourteenth Amendment, which requires that all citizens be treated equally under the law. To date, there have been no known cases in which a U.S. court dismissed a criminal prosecution because the defendant was targeted based on race. The decision does not, however, prevent government agencies from enacting their own policies prohibiting racial profiling by their agents and employees.
The Supreme Court also decided "Whren v. United States" in 1996. Michael Whren was arrested on felony drug charges after police officers observed his truck sitting at an intersection for a long period of time before it drove away without using a turn signal; the officers stopped the vehicle for the traffic violation. Upon approaching the vehicle, the officers observed that Whren was in possession of crack cocaine. The Court determined the officers did not violate the Fourth Amendment through an unreasonable search and seizure: the officers were permitted to stop the vehicle after it committed a traffic violation, and the subsequent search was permitted regardless of any pretext on the officers' part.
In June 2001, the Bureau of Justice Assistance, a component of the Office of Justice Programs of the United States Department of Justice, awarded a Northeastern research team a grant to create the web-based Racial Profiling Data Collection Resource Center. It now maintains a website designed to be a central clearinghouse for police agencies, legislators, community leaders, social scientists, legal researchers, and journalists to access information about current data collection efforts, legislation and model policies, police-community initiatives, and methodological tools that can be used to collect and analyze racial profiling data. The website contains information on the background of data collection, jurisdictions currently collecting data, community groups, and legislation that is pending and enacted in states across the country, as well as information on planning and implementing data collection procedures, training officers to implement these systems, and analyzing and reporting the data and results.
In April 2010, Arizona enacted SB 1070, a law that would require law-enforcement officers to verify the citizenship of individuals they stop if they have reasonable suspicion that they may be in the United States illegally. The law states that "Any person who is arrested shall have the person's immigration status determined before the person is released". United States federal law requires that all immigrants who remain in the United States for more than 30 days register with the U.S. government, and all immigrants age 18 and over are required to carry their registration documents with them at all times. Arizona made it a misdemeanor for an illegal immigrant 14 years of age or older to be found without these documents.
According to SB 1070, law-enforcement officials may not consider "race, color, or national origin" in the enforcement of the law, except under the circumstances allowed under the United States and Arizona constitutions. In June 2012, the majority of SB 1070 was struck down by the United States Supreme Court, while the provision allowing for an immigration check on detained persons was upheld.
Some states have "stop and identify" laws that allow officers to detain suspected persons and ask for identification; if a person fails to provide identification, the officer can take punitive measures. As of 2017, 24 states have "stop and identify" statutes; however, the criminal punishments and the requirements to produce identification vary from state to state. Utah HB 497 requires residents to carry relevant identification at all times in order to prove resident or immigration status; even so, police may still dismiss provided documents under suspicion of falsification and arrest or detain suspects.
In early 2001, a bill named the "End Racial Profiling Act of 2001" was introduced in Congress but lost support in the wake of the September 11 attacks. The bill was re-introduced in 2010 but again failed to gain the support it needed. Several U.S. states now have reporting requirements for incidents of racial profiling. Texas, for example, requires all law enforcement agencies to provide annual reports to the state's Law Enforcement Commission. The requirement began on September 1, 2001, when the State of Texas passed a law requiring all law enforcement agencies in the state to collect certain data in connection with traffic or pedestrian stops beginning on January 1, 2002. Based on that data, the law mandated that each agency submit a report to its governing body beginning March 1, 2003, and each year thereafter no later than March 1. The law is found in the Texas Code of Criminal Procedure beginning with Article 2.131.
Additionally, on January 1, 2011, all Texas law enforcement agencies began submitting annual reports to the Texas State Law Enforcement Officers Standards and Education Commission. The submitted reports can be accessed on the Commission's website for public review.
In June 2003, the Department of Justice issued its "Guidance Regarding the Use of Race by Federal Law Enforcement Agencies" forbidding racial profiling by federal law enforcement officials.
Supporters defend the practice of racial profiling by emphasizing the crime control model. They claim that the practice is both efficient and ideal because it applies the laws of probability to assess the likelihood of criminality. This system focuses on controlling crime with swift judgment, bestowing full discretion on police to handle what they perceive as a threat to society.
The use and support of racial profiling has surged in recent years, particularly in North America, due to heightened tension and awareness following the events of 9/11. As a result, the issue of profiling has created a debate that centers on the values of equality and self-defense. Supporters uphold the stance that sacrifices must be made to maintain national safety, even if that warrants differential treatment. According to a 2011 survey by Rasmussen Reports, a majority of Americans support profiling as necessary "in today's society".
In December 2010, Fernando Mateo, then president of the New York State Federation of Taxi Drivers, made pro-racial-profiling remarks in the case of a taxi-cab driver who had been shot: "You know sometimes it's good that we are racially profiled because the God's-honest truth is that 99 percent of the people that are robbing, stealing, killing these drivers are blacks and Hispanics." "Clearly everyone knows I'm not racist. I'm Hispanic and my father is black. ... My father is blacker than Al Sharpton." When confronted with accusations of racial profiling, the police claim that they do not participate in it. They emphasize that numerous factors (such as race, interactions, and dress) are used to determine whether a person is involved in criminal activity and that race is never the sole factor in the decision to detain or question an individual. They further claim that the job of policing takes precedence over the concerns of minorities or interest groups alleging unfair targeting.
Some researchers argue that inner-city residents of Hispanic communities are subjected to racial profiling because of theories such as the "gang suppression model", which is believed by some to be the basis for increased policing. The theory rests on the idea that Latinos are violent and out of control and are therefore "in need of suppression". Research suggests that the criminalization of a people in this way can lead to abuses of power by law enforcement.
Critics of racial profiling argue that the individual rights of a suspect are violated if race is used as a factor in that suspicion. Notably, civil liberties organizations such as the American Civil Liberties Union (ACLU) have labeled racial profiling as a form of discrimination, stating, "Discrimination based on race, ethnicity, religion, nationality or on any other particular identity undermines the basic human rights and freedoms to which every person is entitled."
Conversely, those opposed to the tactic draw on the due process model, arguing that minorities are not granted equal rights and are thus subject to unjust treatment. In addition, some argue that singling out individuals based on their ethnicity violates the rule of law by abandoning neutrality. Those in opposition also note the role that the news media play in the conflict. The general public internalizes much of its knowledge from the media, relying on news sources to convey information about events that transpire outside its immediate domain. Aware of the public's fascination with controversy, media outlets have been known to construct headlines that provoke moral panic and negativity.
In the case of racial profiling of drivers, the ethnic backgrounds of drivers stopped by traffic police in the U.S. suggest the possibility of biased policing against non-white drivers, and black drivers have reported feeling that they were pulled over by law enforcement officers simply because of their skin color. However, some researchers test for bias using the "veil of darkness" hypothesis, which states that police are less likely to know the race of a driver before making a stop at night than during the day. Under this hypothesis, if the race distribution of drivers stopped during the day differs from that of drivers stopped at night, officers are engaging in racial profiling. For example, in one study by Jeffrey Grogger and Greg Ridgeway, the veil of darkness hypothesis was used to determine whether racial profiling occurs in traffic stops in Oakland, California. The researchers found little evidence of racial profiling in traffic stops made in Oakland.
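The veil-of-darkness test reduces to comparing two proportions. The following is a minimal Python sketch using entirely hypothetical stop counts (the function and numbers are illustrative, not drawn from the Grogger–Ridgeway study): it compares the share of stopped drivers who are black in daylight versus darkness with a two-proportion z-test.

```python
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """z-statistic for the null hypothesis that two proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: black drivers among all stopped drivers, by lighting.
day_black, day_total = 420, 1000      # daylight: driver's race is visible
night_black, night_total = 380, 1000  # darkness: driver's race is obscured

z = two_proportion_z(day_black, day_total, night_black, night_total)
print(f"daylight share: {day_black / day_total:.1%}, "
      f"darkness share: {night_black / night_total:.1%}, z = {z:.2f}")
# Under the veil-of-darkness logic, a significantly higher daylight share
# (e.g., |z| > 1.96 at the 5% level) is evidence consistent with profiling.
```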
Research through random sampling in the South Tucson, Arizona area has established that immigration authorities sometimes target the residents of barrios with possibly discriminatory policing based on racial profiling. Author Mary Romero writes that immigration raids are often carried out at places of gathering and cultural expression, such as grocery stores, on the basis of a person's language fluency (e.g., being bilingual, especially in Spanish) and skin color. She goes on to state that immigration raids are often conducted with a disregard for due process, and that these raids lead people from these communities to distrust law enforcement.
Studies comparing the 1990s to the present have found that when communities criticized police for targeting the black community during traffic stops, the issue received more media coverage and racial profiling was toned down. Whenever there was a significant lack of media coverage or concern with racial profiling, however, the number of arrests and traffic stops of African Americans would rise significantly again.
Between 2003 and 2014, the New York City Police Department (NYPD) operated the "Demographics Unit" (later renamed the "Zone Assessment Unit"), which mapped communities of 28 "ancestries of interest", including those of Muslims, Arabs, and Albanians. Plainclothes detectives were sent to public places such as coffee shops, mosques, and parks to observe and record public sentiment, as well as to map locations where potential terrorists could "blend in". In its 11 years of operation, however, the unit did not generate any information leading to a criminal charge. A series of publications by the Associated Press during 2011–12 gave rise to public pressure to close the unit, and it was finally disbanded in 2014.
Racial profiling occurs not only on the streets but also in many institutions, a form of racism depicted throughout Danny Santiago's novel "Famous All over Town". According to Jesper Ryberg's 2011 article "Racial Profiling and Criminal Justice" in the "Journal of Ethics", "It is argued that, given the assumption that criminals are currently being punished too severely in Western countries, the apprehension of more criminals may not constitute a reason in favor of racial profiling at all." One scholarly journal states that local authorities and higher-level law enforcement have continued to use racial and demographic profiling for over 30 years. NYPD street officers use racial profiling frequently, stopping and frisking individuals to determine whether there is enough evidence to arrest them for a relevant crime. "As a practical matter, the stops display a measurable racial disparity: black and Hispanic people generally represent more than 85 percent of those stopped by the police, though their combined populations make up a small share of the city's racial composition." (Baker)
The NYPD has been subject to much criticism for its "stop-and-frisk" tactics. According to statistics on the NYPD's stop-and-frisk practices collected by the Center for Constitutional Rights, 51% of the people stopped by the police were Black, 33% were Latino, and 9% were White, and only 2% of all stops resulted in contraband findings. Starting in 2013, while Bill de Blasio was campaigning for mayor of New York, the NYPD's use of racial profiling was drastically curtailed, and this policy has continued into his term.
In June 2019, the independent Office of the Inspector General for the NYPD (OIG-NYPD), under New York City's Department of Investigation (DOI), released a report which found deficiencies in how NYPD tracked and investigated allegations of racial profiling and other types of biased policing against NYPD officers. The report concluded that NYPD had never substantiated any complaints of biased policing since it began tracking them in 2014.
The September 11, 2001 attacks on the World Trade Center and the Pentagon led to the targeting of some Muslims and Middle Easterners as potential terrorists, and, according to some, these groups are targeted by the national government through preventive measures similar to those practiced by local law enforcement. The national government has passed laws, such as the Patriot Act of 2001, to increase surveillance of potential threats to national security following 9/11. It is argued that these laws and provisions justify controversial preventative methods such as racial profiling and deepen minority distrust in the national government. One technique the FBI used to target Muslims was monitoring 100 mosques and businesses in Washington, D.C., while threatening to deport Muslims who did not agree to serve as informers. The FBI denied taking part in blanket profiling and argued that it was trying to build trust within the Muslim community.
On September 14, 2001, three days after the September 11th attacks, an Indian American motorist and three family members were pulled over and ticketed by a Maryland state trooper because their car had broken taillights. The trooper interrogated the family, questioned them about their nationality, and asked for proof of citizenship. When the motorist said that their passports were at home, the officer allegedly stated, "You are lying. You are Arabs involved in terrorism." He ordered them out of the car, had them put their hands on the hood, and searched the car. When he discovered a knife in a toolbox, the officer handcuffed the driver and later reported that the driver "wore and carried a butcher knife, a dangerous, deadly weapon, concealed upon and about his person." The driver was detained for several hours but eventually released.
In December 2001, an American citizen of Middle Eastern descent named Assem Bayaa cleared all the security checks at Los Angeles airport and attempted to board a flight to New York City. Upon boarding, he was told that he made the passengers uncomfortable by being on board the plane and was asked to leave. Once off the plane, he was not searched or questioned further, and the only consolation he was given was a boarding pass for the next flight. He filed a lawsuit against United Airlines on the basis of discrimination. United Airlines filed a countermotion, which was dismissed by a district judge on October 11, 2002. In June 2005, the ACLU announced a settlement between Bayaa and United Airlines, which still disputed Bayaa's allegations but noted that the settlement "was in the best interest of all".
The events of 9/11 also led to restrictions in immigration laws. The U.S. government imposed stricter immigration quotas to maintain national security at its borders. In 2002, under the National Security Entry-Exit Registration System, men over sixteen years old who entered the country from twenty-five Middle Eastern countries and North Korea were required to be photographed, fingerprinted, and interviewed, to have their financial information copied, and to register again before leaving the country. No charges of terrorism resulted from the program, and it was deactivated in April 2011.
In 2006, 18 young men from the Greater Toronto Area were charged with conspiring to carry out a series of bombings and beheadings, resulting in a swell of media coverage. Two media narratives stood out: the first claimed that a "militant subculture" was forming within the Islamic community, while the second attributed the case to a group of deviant, testosterone-fueled youths. Eventually, it was shown that government officials had been tracking the group for some time and had supplied the youths with the compounds necessary to create explosives, prompting critics to question whether the whole situation was a set-up. Throughout the case many factors were called into question, but none more than the Muslim community, which faced much scrutiny and vitriol due to the build-up of negative headlines stemming from the media.
Statistical data demonstrate that although policing practices and policies vary widely across the United States, a large disparity between racial groups exists with regard to traffic stops and searches. However, whether this is due to racial profiling or to different races being involved in crime at different rates is still highly debated. Academic research includes various studies on the existence of racial profiling in traffic and pedestrian stops. For motor vehicle searches, academic research has shown that the probability of a successful search is very similar across races, suggesting that police officers are motivated not by racial preferences but by the desire to maximize the probability of a successful search. Similar evidence has been found for pedestrian stops, with comparable ratios of stops to arrests across races.
The studies have been published in academic journals aimed at academics as well as practitioners such as law enforcement officers, including "Police Quarterly" and the "Journal of Contemporary Criminal Justice", so that both sides of the argument are presented and evaluated. The most noted study refuting racial profiling was conducted using the veil of darkness hypothesis, which holds that it will be difficult, if not impossible, for officers to discern race in the twilight hours. The study concluded that the ratio of different races stopped by New York police is about the same for all races tested.
Some of the most frequently cited organizations that offer evidence on the existence of racial profiling are the American Civil Liberties Union, which has conducted studies in various major U.S. cities, and RAND. A study conducted in Cincinnati, Ohio concluded that "Blacks were between three and five times more likely to (a) be asked if they were carrying drugs or weapons, (b) be asked to leave the vehicle, (c) be searched, (d) have a passenger searched, and (e) have the vehicle physically searched". This conclusion was based on the analysis of 313 randomly selected traffic-stop police tapes gathered from 2003 to 2004.
A 2001 study analyzing data from the Richmond, Virginia Police Department found that African Americans were disproportionately stopped compared to their proportion of the general population, but that they were not searched more often than Whites. The same study found that Whites were more likely than African Americans to be "the subjects of consent searches," and that Whites were more likely to be ticketed or arrested than minorities, while minorities were more likely to be warned. A 2002 study found that African Americans were more likely to be watched and stopped by police when driving through white areas, despite the fact that African Americans' "hit rates" were lower in such areas. A 2004 study analyzing traffic stop data from a suburban police department found that although minorities were disproportionately stopped, there was only a "very weak" relationship between race and police decisions to stop. Another 2004 study found that young black and Hispanic men were more likely to be issued citations, arrested, and to have force used against them by police, even after controlling for numerous other factors.
A 2005 study found that the percentage of speeding drivers on the New Jersey Turnpike who were black (as identified by other drivers) was very similar to the percentage of people pulled over for speeding who were black. A 2004 study looking at motor vehicle searches in Missouri found that unbiased policing did not explain the racial disparity in such searches. In contrast, a 2006 study examining data from Kansas concluded that its results were "consistent with the notion that police in Wichita choose their search strategies to maximize successful searches," and a 2009 study found that racial disparities among people searched by the Washington state patrol were "likely not the result of intentional or purposeful discrimination." Another 2009 study found that police officers in Boston were more likely to search a suspect whose race differed from their own, in contrast to what would be expected if preference-based discrimination were not occurring (namely, that police search decisions would be independent of officer race).
A 2010 study found that black drivers were more likely to be searched at traffic stops in white neighborhoods, whereas white drivers were more likely to be searched by white officers at stops in black neighborhoods. A 2013 study found that police were more likely to issue warnings and citations, but not arrests, to young black men. A 2014 study analyzing data from Rhode Island found that blacks were more likely than whites to be frisked and, to a lesser extent, searched while driving; the study concluded that "Biased policing is largely the product of implicit stereotypes that are activated in contexts in which Black drivers appear out of place and in police actions that require quick decisions providing little time to monitor cognitions."
As a response to the shooting of Michael Brown in Ferguson on August 9, 2014, the Department of Justice recruited a team of criminal justice researchers in September to study racial bias in law enforcement in five cities and to subsequently devise strategic recommendations. In its March 2015 report on the Ferguson Police Department, the Department of Justice found that although only 67% of the population of Ferguson was black, 85% of people pulled over by police in Ferguson were black, as were 93 percent of those arrested and 90 percent of those given citations by the police.
Shopping is one major avenue for racial profiling. General discrimination devalues the experience of shopping, arguably raising the costs and reducing the rewards derived from consumption for the individual. When a store's sales staff appears hesitant to serve black shoppers or suspects that they are prospective shoplifters, the act of shopping ceases to be a form of leisure.
Racial profiling in retail was prominent enough in 2001 that psychology researchers such as Jerome D. Williams coined the term "shopping while black", which describes the experience of being denied service or given poor service because one is black. Commonly, "shopping while black" involves, but is not limited to, a black or non-white customer being followed around and/or closely monitored by a clerk or guard who suspects he or she may steal, based on the color of their skin. It can also involve being denied store access, being refused service, use of ethnic slurs, being searched, being asked for extra forms of identification, having purchases limited, being required to have a higher credit limit than other customers, being charged a higher price, or being asked more, or more rigorous, questions on applications. These negative shopping experiences can directly contribute to the decline of in-store shopping, as individuals come to prefer shopping online and avoiding interactions they deem degrading, embarrassing, and highly offensive.
In a particular study, Higgins, Gabbidon, and Vito studied the relationship between public opinion on racial profiling in conjunction with their viewpoint of race relations and their perceived awareness of safety. It was found that race relations had a statistical correlation with the legitimacy of racial profiling. Specifically, results showed that those who believed that racial profiling was widespread and that racial tension would never be fixed were more likely to be opposed to racial profiling than those who did not believe racial profiling was as widespread or that racial tensions would be fixed eventually. On the other hand, in reference to the perception of safety, the research concluded that one's perception of safety had no influence on public opinion of racial profiling. Higgins, Gabbidon, and Vito acknowledge that this may not have been the case immediately after 9/11, but state that any support of racial profiling based on safety was "short-lived".
One particular study focused on individuals who self-identified as religiously affiliated and their relationship with racial profiling. Using national survey data from October 2001, researcher Phillip H. Kim studied which individuals were more likely to support racial profiling. The research concludes that individuals who identified as Jewish, Catholic, or Protestant were statistically more likely to support racial profiling than individuals who identified as non-religious.
According to Johnson, the September 11, 2001 terrorist attacks on the United States opened a new debate concerning the appropriateness of racial profiling in the context of terrorism. Prior to the attacks, the public debate on racial profiling had primarily concerned African-Americans and Latino Americans in connection with enforced policing of crime and drugs. The attacks on the World Trade Center and the Pentagon shifted the focus of the racial profiling debate from street crime to terrorism. According to a June 4–5, 2002 FOX News/Opinion Dynamics Poll, 54% of Americans approved of using "racial profiling to screen Arab male airline passengers." A 2002 survey by Public Agenda tracked attitudes toward the racial profiling of Blacks and people of Middle Eastern descent. In this survey, 52% of Americans said there was "no excuse" for law enforcement to look at African Americans with greater suspicion and scrutiny because they believe they are more likely to commit crimes, but only 21% said there was "no excuse" for extra scrutiny of Middle Eastern people.
However, using data from an internet survey-based experiment performed in 2006 on a random sample of 574 adult university students, a study examined public approval for the use of racial profiling to prevent crime and terrorism. It found that approximately one third of students approved of the use of racial profiling in general. Furthermore, students were equally likely to approve of the use of racial profiling to prevent crime as to prevent terrorism (33% and 35.8%, respectively). The survey also asked respondents whether they would approve of racial profiling across different investigative contexts.
Approval varied by investigative context: 23.8% of respondents approved of law enforcement using racial profiling to stop and question someone in a terrorism context, versus 29.9% in a crime context; 25.3% approved of using it to search someone's bags or packages in a terrorism context, versus 33.5% in a crime context; 16.3% approved of wiretapping a person's phone based on racial profiling in a terrorism context, versus 21.4% in a crime context; and 14.6% approved of searching someone's home on that basis in a terrorism context, versus 18.2% in a crime context.
The study also found that white students were more likely to approve of racial profiling to prevent terrorism than nonwhite students. However, it was found that white students and nonwhite students held the same views about racial profiling in the context of crime. It was also found that foreign born students were less likely to approve of racial profiling to prevent terrorism than non-foreign born students while both groups shared similar views on racial profiling in the context of crime. | https://en.wikipedia.org/wiki?curid=26131 |
Rankine scale
The Rankine scale () is an absolute scale of thermodynamic temperature named after the Glasgow University engineer and physicist William John Macquorn Rankine, who proposed it in 1859 (the Kelvin scale was first proposed in 1848). It is used in engineering systems where heat computations are done using degrees Fahrenheit.
The symbol for degrees Rankine is °R (or °Ra if necessary to distinguish it from the Rømer and Réaumur scales). By analogy with the SI unit, the kelvin, some authors term the unit "rankine", omitting the degree symbol. Zero on both the Kelvin and Rankine scales is absolute zero, but a temperature difference of one Rankine degree is defined as equal to one Fahrenheit degree, rather than the Celsius degree used on the Kelvin scale. Thus, a temperature of 0 K (−273.15 °C; −459.67 °F) is equal to 0 °R, and a temperature of −458.67 °F is equal to 1 °R.
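Because the Rankine degree has the size of a Fahrenheit degree while sharing its zero point with the Kelvin scale, conversions are simple offsets and rescalings. The following is a minimal Python sketch of the standard conversion formulas (the function names are illustrative):

```python
def fahrenheit_to_rankine(t_f: float) -> float:
    # Same degree size; the scales differ only by the offset to absolute zero.
    return t_f + 459.67

def kelvin_to_rankine(t_k: float) -> float:
    # Both scales start at absolute zero; only the degree size differs.
    return t_k * 9.0 / 5.0

def celsius_to_rankine(t_c: float) -> float:
    # Go through kelvin: shift to the absolute scale, then rescale.
    return (t_c + 273.15) * 9.0 / 5.0

# Sanity checks against the values quoted above.
assert abs(kelvin_to_rankine(0.0)) < 1e-9                # 0 K == 0 °R
assert abs(fahrenheit_to_rankine(-458.67) - 1.0) < 1e-9  # −458.67 °F == 1 °R
assert abs(celsius_to_rankine(-273.15)) < 1e-9           # absolute zero
```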
Some important temperatures relating the Rankine scale to other temperature scales are shown in the table below. | https://en.wikipedia.org/wiki?curid=26132 |
Retroposon
Retroposons are repetitive DNA fragments that are inserted into chromosomes after they have been reverse transcribed from an RNA molecule.
In contrast to retrotransposons, retroposons never encode reverse transcriptase (RT) (but see below). Therefore, they are non-autonomous elements with regard to transposition activity (as opposed to transposons).
Non-long terminal repeat (LTR) retrotransposons such as the human LINE1 elements are sometimes falsely referred to as retroposons; however, this usage depends on the author. For example, Howard Temin published the following definition: retroposons encode RT but are devoid of long terminal repeats (LTRs), for example long interspersed elements (LINEs). Retrotransposons additionally feature LTRs, and retroviruses, in addition, are packaged as viral particles (virions). Retrosequences are non-autonomous elements devoid of RT; they are retroposed with the aid of the machinery of autonomous elements, such as LINEs. Examples are short interspersed nuclear elements (SINEs) and mRNA-derived retro(pseudo)genes.
Retroposition accounts for approximately 10,000 gene-duplication events in the human genome, of which approximately 2–10% are likely to be functional. Such genes are called retrogenes and represent a certain type of retroposon. A classical event is the retroposition of a spliced pre-mRNA molecule of the c-src gene into the proviral ancestor of the Rous sarcoma virus (RSV). The retroposed c-src pre-mRNA still contained a single intron, and within RSV it is now referred to as the v-src gene. | https://en.wikipedia.org/wiki?curid=26139 |
Russell Crowe
Russell Ira Crowe (born 7 April 1964) is an actor, film producer and musician. Although a New Zealand citizen, he has lived most of his life in Australia. He came to international attention for his role as the Roman General Maximus Decimus Meridius in the historical film "Gladiator" (2000), directed by Ridley Scott, for which Crowe won an Academy Award, a Broadcast Film Critics Association Award, an Empire Award, and a London Film Critics Circle Award for best actor, along with ten other nominations in the same category. Crowe's other award-winning performances include portrayals of tobacco firm whistle-blower Jeffrey Wigand in the drama film "The Insider" (1999) and John F. Nash in the biopic "A Beautiful Mind" (2001).
Crowe's other films include "Romper Stomper" (1992), "L.A. Confidential" (1997), "Master and Commander: The Far Side of the World" (2003), "Cinderella Man" (2005), "3:10 to Yuma" (2007), "American Gangster" (2007), "State of Play" (2009), "Robin Hood" (2010), "Les Misérables" (2012), "Man of Steel" (2013), "Noah" (2014), and "The Nice Guys" (2016). In 2015, Crowe made his directorial debut with "The Water Diviner", in which he also starred. Crowe's work has earned him several accolades during his career, including a star on the Hollywood Walk of Fame, two Golden Globe Awards, one BAFTA and one Academy Award out of three consecutive nominations (1999, 2000, and 2001). Crowe has also been the co-owner of the National Rugby League (NRL) team South Sydney Rabbitohs since 2006.
Crowe was born on 7 April 1964 in the Wellington suburb of Strathmore Park, New Zealand, the son of Jocelyn Yvonne Wemyss and John Alexander Crowe, both of whom were film set caterers; his father also managed a hotel. Crowe's maternal grandfather, Stan Wemyss, was a cinematographer who was appointed an MBE for filming footage of World War II. Crowe's paternal grandfather, John Doubleday Crowe, was from Wrexham, Wales, while one of Crowe's maternal great-great-grandmothers was Māori. Crowe also has English, German, Irish, Italian, Norwegian, Scottish, Swedish and Welsh ancestry. He is a cousin of former New Zealand cricket captains Martin Crowe and Jeff Crowe, and nephew of cricketer Dave Crowe. Crowe has built a cricket field named after his uncle.
When Crowe was four years old, his family moved to Sydney, Australia, where his parents pursued a career in set catering. The producer of the Australian TV series "Spyforce" was his mother's godfather, and Crowe (at age five or six) was hired for a line of dialogue in one episode, opposite series star Jack Thompson (in 1994 Thompson played the father of Crowe's character in "The Sum of Us"). Crowe also appeared briefly in the serial "The Young Doctors".
Crowe was educated at Vaucluse Public School but later moved to Sydney Boys High School. When he was 14, his family moved back to New Zealand, where, along with his brother Terry, he attended Auckland Grammar School (also attended by his cousins Martin Crowe and Jeff Crowe). He then continued his secondary education at Mount Roskill Grammar School, which he left at the age of 16 to pursue his ambition of becoming an actor.
Crowe began his performing career as a musician in the early 1980s, under guidance from his good friend Tom Sharplin, performing under the stage name "Russ Le Roq". He released several New Zealand singles, including "I Just Wanna Be Like Marlon Brando", "Pier 13", and "Shattered Glass", none of which charted. He managed an Auckland music venue called "The Venue" in 1984. When he was 18, he was featured in "A Very Special Person...", a promotional video for the theology/ministry course at Avondale College, a Seventh-day Adventist tertiary education provider in New South Wales, Australia.
Crowe returned to Australia at age 21, intending to apply to the National Institute of Dramatic Art. "I was working in a theatre show, and talked to a guy who was then the head of technical support at NIDA", Crowe has recalled. "I asked him what he thought about me spending three years at NIDA. He told me it'd be a waste of time. He said, 'You already do the things you go there to learn, and you've been doing it for most of your life, so there's nothing to teach you but bad habits.'" From 1986 to 1988, he was given his first professional role by director Daniel Abineri, in a New Zealand production of "The Rocky Horror Show". He played the role of Eddie/Dr Scott. He repeated this performance in a further Australian production of the show, which also toured New Zealand. In 1987, Crowe spent six months busking when he could not find other work. In the 1988 Australian production of "Blood Brothers", Crowe played the role of Mickey. He was also cast again by Daniel Abineri in the role of Johnny, in the stage musical "Bad Boy Johnny and the Prophets of Doom" in 1989.
After appearing in the TV series "Neighbours" and "Living with the Law," Crowe was cast by Faith Martin in his first film, "The Crossing" (1990), a small-town love triangle directed by George Ogilvie. Before production started, a film-student protégé of Ogilvie, Steve Wallace, hired Crowe for the 1990 film "Blood Oath" (aka "Prisoners of the Sun"), which was released a month earlier than "The Crossing", although actually filmed later. In 1992, Crowe starred in the first episode of the second series of "Police Rescue." Also in 1992, Crowe starred in "Romper Stomper", an Australian film which followed the exploits and downfall of a racist skinhead group in blue-collar suburban Melbourne, directed by Geoffrey Wright and co-starring Jacqueline McKenzie. For the role, Crowe won an Australian Film Institute (AFI) award for Best Actor, following up from his Best Supporting Actor award for "Proof" in 1991.
In 2015, it was reported that Crowe had applied for Australian citizenship in 2006 and again in 2013 but was rejected because he failed to fulfill the residency requirements. However, Australia's Immigration Department said it had no record of any such application by Crowe.
After initial success in Australia, Crowe first starred in a Canadian production in 1993, "For the Moment", before concentrating on American films. He co-starred with Denzel Washington in "Virtuosity" (the duo later appearing together in "American Gangster") and with Sharon Stone in "The Quick and the Dead" in 1995. He went on to become a three-time Oscar nominee, winning the Academy Award as Best Actor in 2000 for "Gladiator". Crowe was awarded the (Australian) Centenary Medal in 2001 for "service to Australian society and Australian film production."
Crowe received three consecutive best actor Oscar nominations, for "The Insider", "Gladiator", and "A Beautiful Mind". Crowe won the best actor award for "A Beautiful Mind" at the 2002 BAFTA award ceremony, as well as the Golden Globe and Screen Actors Guild awards for the same performance. Although also nominated for the Academy Award for that role, he lost to Denzel Washington.
All three films were also nominated for best picture, and both "Gladiator" and "A Beautiful Mind" won the award. Within the six-year stretch from 1997 to 2003, he also starred in two other best picture nominees, "L.A. Confidential" and "Master and Commander: The Far Side of the World". In 2005, he re-teamed with "A Beautiful Mind" director Ron Howard for "Cinderella Man". In 2006, he re-teamed with "Gladiator" director Ridley Scott for "A Good Year", the first of two consecutive collaborations (the second being "American Gangster", co-starring again with Denzel Washington, released in late 2007). Although the light romantic comedy of "A Good Year" was not well received, Crowe seemed pleased with the film, telling STV in an interview that he thought it would be enjoyed by fans of his other films.
In recent years, Crowe's box office standing has declined, although the Hollywood Stock Exchange (HSX) share for Russell Crowe (RCROW), issued in 1998, has continued to appreciate steadily. Crowe appeared in "Robin Hood", a film based on the Robin Hood legend, directed by Ridley Scott and released on 14 May 2010.
Crowe starred in the 2010 Paul Haggis film "The Next Three Days", an adaptation of the 2008 French film "Pour Elle".
After a year off from acting, Crowe played Jackknife in "The Man with the Iron Fists", opposite RZA. He took on the role of Inspector Javert in the musical film of "Les Misérables" (2012), and portrayed Superman's biological father, Jor-El, in the Christopher Nolan-produced film "Man of Steel", released in the summer of 2013. In 2014, he played a gangster in the film adaptation of Mark Helprin's 1983 novel "Winter's Tale", and the title role in the Darren Aronofsky film "Noah". In June 2013, Crowe signed to make his directorial debut with the historical drama film "The Water Diviner", in which he also starred alongside Jacqueline McKenzie, Olga Kurylenko, and Jai Courtney. Set in the year 1919, the film was produced by Troy Lum, Andrew Mason and Keith Rodger. Crowe also starred in "The Mummy" (2017).
In the 1980s, Crowe, under the name of "Russ le Roq", recorded the song "I Just Wanna Be Like Marlon Brando".
In the 1980s, Crowe and friend Billy Dean Cochran formed a band, Roman Antix, which later evolved into the Australian rock band 30 Odd Foot of Grunts (abbreviated to TOFOG). Crowe performed lead vocals and guitar for the band, which formed in 1992. The band released "The Photograph Kills EP" in 1995, as well as three full-length records, "Gaslight" (1998), "Bastard Life or Clarity" (2001) and "Other Ways of Speaking" (2003). In 2000, TOFOG performed shows in London, Los Angeles and the now famous run of shows at Stubbs in Austin, Texas which became a live DVD that was released in 2001, called "Texas". In 2001, the band came to the US for major press, radio and TV appearances for the "Bastard Life or Clarity" release and returned to Stubbs in Austin, Texas to kick off a sold out US tour with dates in Austin, Boulder, Chicago, Portland, San Francisco, Hollywood, Philadelphia, New York City and the last show at the famous Stone Pony in Asbury Park, New Jersey.
In early 2005, 30 Odd Foot of Grunts as a group had "dissolved/evolved", with Crowe feeling his future music would take a new direction. He began a collaboration with Alan Doyle of the Canadian band Great Big Sea, and with it a new band emerged: The Ordinary Fear of God, which also involved some members of the previous TOFOG line-up. A new single, "Raewyn", was released in April 2005, followed by an album entitled "My Hand, My Heart", which is available for download on iTunes. The album includes a tribute song to actor Richard Harris, who became Crowe's friend during the making of "Gladiator".
Russell Crowe & The Ordinary Fear of God set out to break in the new band by performing a successful sold-out series of dates in Australia in 2005, and then, in 2006, returned to the US to promote their new release "My Hand, My Heart" with another sold-out US tour and major press, radio and television appearances.
In March 2010, Russell Crowe & The Ordinary Fear of God's version of the John Williamson song "Winter Green" was included on a new compilation album "The Absolute Best of John Williamson: 40 Years True Blue", commemorating the singer-songwriter's milestone of 40 years in the Australian music industry. As of May 2011, there are plans to release a new Russell Crowe & The Ordinary Fear of God recording (co-written with Alan Doyle) and for a US tour which would be the first live dates in the US since 2006.
On 2 August 2011, the third collaboration between Crowe and Doyle was released on iTunes as "The Crowe/Doyle Songbook Vol III", featuring nine original songs followed by their acoustic demo counterparts (for a total of 18 tracks). Danielle Spencer provides guest vocals on most tracks. The release coincided with a pair of live performances at the LSPU Hall in St. John's, Newfoundland. The digital album was released in download versions only, on Amazon.com, iTunes and Spotify. The album has since charted at No. 72 on the Canadian Albums Chart.
On 26 September 2011, Crowe appeared on-stage at Rogers Arena in Vancouver in the middle of Keith Urban's concert. He sang a cover of "Folsom Prison Blues", before joining the rest of the band in a rendition of "The Joker". On 18 August 2012, Crowe appeared along with Doyle at the Harpa Concert Hall in Reykjavík, Iceland as part of the city's Menningarnótt program. They also appeared at downtown bars, "Gaukurinn" and "Kex."
By 2017 Crowe and Doyle had created a new act (with Samantha Barks, Scott Grimes and Carl Falk) called Indoor Garden Party who appeared on "The One Show" to promote their album called "The Musical".
During location filming of "Cinderella Man", Crowe made a donation to a Jewish elementary school whose library had been damaged as a result of arson. A note with an anti-Semitic message had been left at the scene. Crowe called school officials to express his concern and wanted his message relayed to the students. The school's building fund received donations from throughout Canada and the amount of Crowe's donation was not disclosed.
On another occasion, Crowe donated $200,000 to a struggling primary school near his home in rural Australia. The money went towards an $800,000 project to construct a swimming pool at the school. Crowe's sympathies were sparked when a pupil drowned at the nearby Coffs Harbour beach in 2001, and he felt the pool would help students become better swimmers and improve their water safety. At the opening ceremony, he dived into the pool fully clothed as soon as it was declared open. Nana Glen principal Laurie Renshall said, "The many things he does up here, people just don't know about. We've been trying to get a pool for 10 years."
In 1989, Crowe met Australian singer Danielle Spencer while working on the film "The Crossing". The two began an on-again, off-again relationship. In 2000, Crowe became romantically involved with co-star Meg Ryan while working on their film "Proof of Life". In 2001, Crowe and Spencer reconciled, and they married two years later in April 2003. The wedding took place at Crowe's cattle property in Nana Glen, New South Wales, with the ceremony taking place on Crowe's 39th birthday. The couple have two sons: Charles Spencer Crowe, born 21 December 2003 and Tennyson Spencer Crowe, born 7 July 2006. In October 2012, it was reported that Crowe and Spencer had separated. The divorce was finalised in April 2018.
Crowe resides in Australia. In 2011, he and his family moved to a house in Sydney's affluent Rose Bay. Crowe also owns a house in the North Queensland city of Townsville, purchased in May 2008. He is reportedly frugal with money, and is known to drive an old Jeep.
In the beginning of 2009, Crowe appeared in a series of special-edition postage stamps called "Legends of the Screen", featuring "Australian" actors. Crowe, Geoffrey Rush, Cate Blanchett and Nicole Kidman each appear twice in the series, once as themselves and once as their Academy Award-nominated character.
In June 2010, Crowe, who started smoking when he was 10, announced he had quit for the sake of his two sons. In November 2010, Crowe told David Letterman that he had smoked more than 60 cigarettes a day for 36 years, and that he had fallen off the wagon the previous night and smoked heavily.
On 9 March 2005, Crowe revealed to "GQ" magazine that Federal Bureau of Investigation (FBI) agents had approached him prior to the 73rd Academy Awards in March 2001, and told him that the terrorist group al-Qaeda wanted to kidnap him. Crowe recalled: "It was something to do with some recording picked up by a French policewoman, I think, in either Libya or Algiers... It was about taking iconographic Americans out of the picture as a sort of cultural destabilisation plan."
Between 1999 and 2005, Crowe was involved in four altercations, which gave him a reputation for having a bad temper.
In 1999, Crowe was involved in a scuffle at the Plantation Hotel in Coffs Harbour, Australia, which was caught on security video. Two men were acquitted of using the video in an attempt to blackmail Crowe.
In 2002, when part of Crowe's appearance at the BAFTA awards was cut to fit into the BBC's tape-delayed broadcast, Crowe used strong language during an argument with producer Malcolm Gerrie. The part cut was a poem in tribute to actor Richard Harris, and it was removed for copyright reasons. Crowe later apologised, saying "What I said to him may have been a little bit more passionate than now, in the cold light of day, I would have liked it to have been."
Later that year, Crowe was alleged to have been involved in a brawl with businessman and fellow New Zealander Eric Watson inside the London branch of Zuma, a fashionable Japanese restaurant chain. The fight was broken up by British actor Ross Kemp.
In June 2005, Crowe was arrested and charged with second-degree assault by New York City police after he threw a telephone at the concierge of the Mercer Hotel who had refused to help him place a call when the system did not work from Crowe's room. He was also charged with fourth-degree criminal possession of a weapon (the telephone). The concierge was treated for a facial laceration. After his arrest, Crowe underwent a perp walk, a procedure customary in New York City, exposing the handcuffed suspect to the news media to take pictures. This procedure was under discussion as potentially violating Article 5 of the Universal Declaration of Human Rights. Crowe later described the incident as "possibly the most shameful situation that I've ever gotten myself in...". Crowe pleaded guilty and was conditionally discharged. Before the trial, he settled a lawsuit filed by the concierge, Nestor Estrada. Terms of the settlement were not disclosed, but amounts in the six-figure range have been reported.
The telephone incident had a generally negative impact on Crowe's public image, an example of negative public relations in the mass media, although Crowe had made a point of befriending Australian journalists in an effort to influence his image. The "South Park" episode "The New Terrance and Phillip Movie Trailer" revolves around a lampooning of his aggressive tendencies. Crowe commented on the ongoing media perpetuation in November 2010, during an interview with American television talk show host and journalist Charlie Rose: "I think it indelibly changed me. It was a very, very minor situation that was made into something outrageous. More violence perpetuated me walking between the car to the courtroom with the waiting media than anything I'd done ... it very definitely affected me ... psychologically."
Crowe has been a supporter of the rugby league football team the South Sydney Rabbitohs since childhood. Since his rise to fame as an actor, he has continued appearing at home games, and supported the financially troubled club. Following the Super League war of the 1990s, Crowe made an attempt to use his Hollywood connections to convince Ted Turner, rival of Super League's Rupert Murdoch, to save the Rabbitohs before they were forced from the National Rugby League competition for two years. In 1999, Crowe paid $42,000 at auction for the brass bell used to open the inaugural rugby league match in Australia in 1908 at a fundraiser to assist Souths' legal battle for re-inclusion in the league. In 2005, he made the Rabbitohs the first club team in Australia to be sponsored by a film, when he negotiated a deal to advertise his film "Cinderella Man" on their jerseys.
On 19 March 2006, the voting members of the South Sydney club voted (in a 75.8% majority) to allow Crowe and businessman Peter Holmes à Court to purchase 75% of the organisation, leaving 25% ownership with the members. It cost them A$3 million, and they received four of eight seats on the board of directors. A six-part television miniseries entitled "South Side Story" depicting the takeover aired in Australia in 2007.
On 5 November 2006, Crowe appeared on "The Tonight Show with Jay Leno" to announce that Firepower International was sponsoring the South Sydney Rabbitohs for $3 million over three years. During the appearance, Crowe showed viewers a Rabbitohs playing jersey with Firepower's name emblazoned on it.
Crowe helped to organise a rugby league game that took place at the University of North Florida, in Jacksonville, Florida, between the South Sydney Rabbitohs and the 2007 Super League Grand Final winners the Leeds Rhinos on 26 January 2008 (Australia Day). Crowe told ITV Local Yorkshire the game was not a marketing exercise.
Crowe wrote a letter of apology to a Sydney newspaper following the sacking of South Sydney's coach Jason Taylor and one of their players David Fa'alogo after a drunken altercation between the two at the end of the 2009 NRL season.
Also in 2009, Crowe persuaded young England international forward Sam Burgess to sign with the Rabbitohs over other clubs that were competing for his signature, after inviting Burgess and his mother to the set of "Robin Hood", which he was filming in Britain at the time.
Crowe's influence helped to persuade noted player Greg Inglis to renege on his deal to join the Brisbane Broncos and sign for the Rabbitohs for 2011.
In 2010, the NRL was investigating Crowe's business relationships with a number of media and entertainment companies including Channel Nine, Channel Seven, ANZ Stadium and V8 Supercars in relation to the South Sydney Rabbitohs' salary cap.
In 2011, Souths also announced a corporate partnership with the bookmaking conglomerate Luxbet.
Previously, Crowe had been prominent in trying to prevent gambling being associated with the Rabbitohs.
In May 2011, Crowe helped arrange to have Fox broadcast the 2011 State of Origin series live for the first time in the United States, in addition to the NRL Grand Final.
In November 2012 the South Sydney Rabbitohs confirmed that Russell Crowe was selling his 37.5 per cent stake in the club.
At the Rabbitohs Annual General Meeting on 3 March 2013, Chairman Nick Pappas claimed Crowe "would not be selling his shareholding in the short-to-medium term and at this stage has no intention of selling at all".
Crowe was a guest presenter at the 2013 Dally M Awards and presented the prestigious Dally M Medal to winner Cooper Cronk. Crowe was also present at the 2014 NRL Grand Final, when the Rabbitohs won the NRL premiership for the first time in 43 years.
Crowe watches and plays cricket, and captained the 'Australian' Team containing Steve Waugh against an English side in the 'Hollywood Ashes' Cricket Match. On 17 July 2009, Crowe took to the commentary box for the British sports channel, Sky Sports, as the 'third man' during the second Test of the 2009 Ashes series, between England and Australia. Two of his cousins, Martin Crowe and Jeff Crowe, captained the New Zealand national cricket team.
Crowe is a fan of the New Zealand All Blacks rugby team.
He is friends with Lloyd Carr, the former coach of the University of Michigan Wolverines American football team, and Carr used Crowe's movie "Cinderella Man" to motivate his 2006 team following a 7–5 season the previous year. Upon hearing of this, Crowe called Carr and invited him to Australia to address his rugby league team, the South Sydney Rabbitohs, which Carr did the following summer. In September 2007, after Carr came under fire following the Wolverines' 0–2 start, Crowe travelled to Ann Arbor, Michigan for the Wolverines' 15 September game against Notre Dame to show his support for Carr. He addressed the team before the game and watched from the sidelines as the Wolverines defeated the Irish 38–0.
Crowe is also a fan of the National Football League. On 22 October 2007, Crowe appeared in the booth of a Monday night game between the Indianapolis Colts and the Jacksonville Jaguars.
He is also a fan of Leeds United and narrated the "Take us Home: Leeds United" Amazon Prime documentary.
Crowe has appeared in 44 films and three television series since his career began in 1985. He won the Academy Award for Best Actor for "Gladiator" (2000) and was nominated twice more for "The Insider" (1999) and "A Beautiful Mind" (2001), making him the ninth actor to have received three consecutive Academy Award nominations. He has also received six Golden Globe Award nominations (winning two), three BAFTA Award nominations (winning one) and three Screen Actors Guild Award nominations (winning one). | https://en.wikipedia.org/wiki?curid=25695 |
Robert Musil
Robert Musil (; 6 November 1880 – 15 April 1942) was an Austrian philosophical writer. His unfinished novel, "The Man Without Qualities" (), is generally considered to be one of the most important and influential modernist novels.
Musil was born in Klagenfurt, Carinthia, the son of engineer Alfred "Edler von" Musil (1846, Timișoara – 1924) and his wife Hermine Bergauer (1853, Linz – 1924). The orientalist Alois Musil ("The Czech Lawrence") was his second cousin.
Soon after his birth, the family moved to Chomutov in Bohemia, and in 1891 Musil's father was appointed to the chair of Mechanical Engineering at the German Technical University in Brno and, later, he was raised to hereditary nobility in the Austro-Hungarian Empire. He was baptized "Robert Mathias Musil" and his name was officially "Robert Mathias Edler von Musil" from 22 October 1917, when his father was ennobled (made "Edler"), until 3 April 1919, when the use of noble titles was forbidden in Austria.
Musil was short in stature, but strong and skilled at wrestling, and by his early teens, he proved to be more than his parents could handle. They sent him to a military boarding school at Eisenstadt (1892–1894) and then Hranice (1894–1897). The school experiences are reflected in his first novel "Die Verwirrungen des Zöglings Törless" ("The Confusions of Young Törless").
After graduation Musil studied at a military academy in Vienna during the fall of 1897, but then switched to mechanical engineering, joining his father's department at the Technical University in Brno. During his university studies, he studied engineering by day and, at night, read literature and philosophy and went to the theatre and art exhibitions. Friedrich Nietzsche, Fyodor Dostoyevsky, Ralph Waldo Emerson, and Ernst Mach were particular interests of his university years. Musil finished his studies in three years and, in 1902–1903, served as an unpaid assistant to a professor of mechanical engineering in Stuttgart. During that time, he began work on "Young Törless".
He also invented the Musil color top, a motorised device for producing mixed colours by additive colour-mixing with two differently colored, sectored, rotating discs. It improved on earlier models by allowing a user to vary the proportions of the two colors during rotation and to read off those proportions precisely.
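The optical principle behind such a top is time-averaged additive mixing: spun faster than the eye can resolve, the disc is perceived as a single colour equal to the average of its sectors, weighted by the fraction of the circle each occupies. The following is a minimal illustrative sketch of that averaging under an idealized linear RGB model; the function name and example values are hypothetical and are not drawn from Musil's apparatus.

```python
# Additive colour mixing as performed by a spinning two-colour sectored disc:
# the perceived colour is the time-average of the sectors, weighted by the
# angular fraction each occupies (idealized linear RGB model).

def mix_additive(color_a, color_b, fraction_a):
    """Perceived RGB colour of a disc showing color_a over fraction_a
    of the circle and color_b over the remainder."""
    if not 0.0 <= fraction_a <= 1.0:
        raise ValueError("fraction_a must be between 0 and 1")
    fraction_b = 1.0 - fraction_a
    return tuple(round(a * fraction_a + b * fraction_b)
                 for a, b in zip(color_a, color_b))

# Example: a disc that is 30% red and 70% green reads as a yellowish green.
red, green = (255, 0, 0), (0, 255, 0)
print(mix_additive(red, green, 0.30))  # -> (76, 178, 0)
```

Musil's refinement, on this description, amounted to making the equivalent of fraction_a adjustable while the disc was spinning, and directly readable from the device.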
Musil's sexual life around the turn of the century, according to his own records, consisted mainly of encounters with a prostitute, which he treated in part as a form of experimental self-observation. He was also infatuated with the pianist and mountaineer Valerie Hilpert, who assumed mystical features in his imagination.
In March 1902, Musil underwent treatment for syphilis with mercurial ointment. Around this time he began a relationship of several years with Hermine Dietz, the model for the title character of his novella "Tonka", published in 1923. Hermine's syphilitic miscarriage in 1906 and her death in 1907 may have been due to infection from Musil.
Musil grew tired of engineering and what he perceived as the limited world-view of the engineer. He launched a new round of doctoral studies (1903–1908) in psychology and philosophy at the University of Berlin under Professor Carl Stumpf. In 1905, Musil met his future wife, Martha Marcovaldi (née Heimann, 21 January 1874 – 6 November 1949). She had been widowed and remarried, with two children, and was seven years older than Musil. His first novel, "Young Törless", was published in 1906.
In 1909, Musil completed his doctorate and Professor Alexius Meinong offered him a position at the University of Graz, which he turned down to concentrate on writing novels. Over the next two years, he wrote and published two stories ("The Temptation of Quiet Veronica" and "The Perfecting of a Love"), collected in "Vereinigungen" ("Unions"), published in 1911. In the same year, Martha's divorce was completed, and Musil married her. As she was Jewish and Musil Roman Catholic, they both converted to Protestantism as a sign of their union. Until then, Musil had been supported by his family, but he now found employment, first as a librarian at the Technical University of Vienna and then in an editorial role with the Berlin literary journal "Die neue Rundschau". He also worked on a play entitled "Die Schwärmer" ("The Enthusiasts"), which was published in 1921.
When World War I began, Musil joined the army and was stationed first in Tirol and then at Austria's Supreme Army Command in Bolzano. In 1916, Musil visited Prague and met Franz Kafka, whose work he held in high esteem. After the end of the war and the collapse of the Austro-Hungarian Empire, Musil returned to his literary career in Vienna. He published a collection of short stories, "Drei Frauen" ("Three Women"), in 1924. He also admired the Bohemian poet Rainer Maria Rilke, whom Musil called "great and not always understood" at his memorial service in Berlin in 1927. According to Musil, Rilke "did nothing but perfect the German poem for the first time", but by the time of his death had turned into "a delicate, well-matured liqueur suitable for grown-up ladies", his work being "too demanding" to be "considered relaxing".
His masterpiece, "The Man Without Qualities" ("Der Mann ohne Eigenschaften"), was published in Berlin in two volumes, in 1930 and 1933. The 1,074-page Volume 1 comprises Part I: A Sort of Introduction and Part II: The Like of It Now Happens; the unfinished 605-page Volume 2 contains Part III: Into the Millennium (The Criminals). Part III does not include 20 chapters that were withdrawn from the 1933 volume while in printer's galley proofs. The novel deals with the moral and intellectual decline of the Austro-Hungarian Empire through the eyes of its protagonist, Ulrich, an ex-mathematician who has failed to engage with the world around him in a manner that would allow him to possess "qualities". It is set in Vienna on the eve of World War I.
"The Man Without Qualities" brought Musil only mediocre commercial success. Although he was nominated for the Nobel Prize in Literature, he felt that he did not receive the recognition he deserved. He sometimes expressed annoyance at the success of better known colleagues such as Thomas Mann or Hermann Broch, who admired his work deeply and tried to shield him from economic difficulties and encouraged his writing even though Musil initially, was critical of Mann.
In the early 1920s, Musil lived mostly in Berlin. In Vienna, Musil was a frequent visitor to Eugenie Schwarzwald's salon (the model for Diotima in "The Man Without Qualities"). In 1932, the Robert Musil Society was founded in Berlin on the initiative of Mann. In the same year, Mann was asked to name outstanding contemporary novels, and he cited only one, "The Man Without Qualities". In 1936, Musil suffered his first stroke.
The fundamental problem Musil confronts in his essays and fiction is the crisis of Enlightenment values that engulfed Europe during the early twentieth century. He endorses the Enlightenment project of emancipation, while at the same time examining its shortcomings with a questioning irony. Musil believed that the crisis required a renewal in social and individual values that, accepting science and reason, could liberate humanity in beneficent ways. Musil wrote: "After the Enlightenment most of us lost courage. A minor failure was enough to turn us away from reason, and we allowed every barren enthusiast to inveigh against the intentions of a d'Alembert or a Diderot as mere rationalism. We beat the drums for feeling against intellect and forgot that without intellect... feeling is as dense as a blockhead ('dick wie ein Mops ist')."
He took aim at the ideological chaos and misleading generalizations about culture and society promoted by nationalist reactionaries. Musil wrote a withering critique of Oswald Spengler entitled "Mind and Experience: Notes for Readers Who Have Escaped the Decline of the West" ("Geist und Erfahrung: Anmerkung für Leser, welche dem Untergang des Abendlandes entronnen sind"), in which he dismantles Spengler's misunderstanding of science and misuse of axiomatic thinking to try to understand human complexity and promote a deterministic philosophy.
He deplored the social conditions under the Austro-Hungarian Empire and foresaw its disappearance. Surveying the upheavals of the 1910s and 1920s, Musil hoped that Europe could find an internationalist solution to the "dead end of imperial nationalism." In 1927, he signed a declaration of support for the Austrian Social Democratic Party.
Musil was a staunch individualist who opposed the authoritarianism of both right and left. A recurring theme in his speeches and essays through the 1930s is the defense of the autonomy of the individual against the authoritarian and collectivist concepts then prevailing in Germany, Italy, Austria, and Russia. He participated in the anti-fascist International Writers' Congress for the Defense of Culture in 1935 in which he spoke in favor of artistic independence against the claims of the state, class, nation, and religion.
The last years of Musil's life were dominated by Nazism and World War II: the Nazis banned his books. He saw early Nazism first-hand while he was living in Berlin from 1931 to 1933. In 1938, when Austria was annexed by Nazi Germany, Musil and his Jewish wife, Martha, left for exile in Switzerland, where he died at the age of 61. Martha wrote to Franz Theodor Csokor that he had suffered a stroke.
Only eight people attended his cremation. Martha cast his ashes into the woods of Mont Salève. | https://en.wikipedia.org/wiki?curid=25701 |
Geography of Russia
Russia is a country that stretches over a vast expanse of Eastern Europe and Northern Asia. Comprising much of Northern Eurasia, it is the world's largest country in total area. Due to its size, Russia displays both monotony and diversity. As with its topography, its climates, vegetation, and soils span vast distances. From north to south the East European Plain is clad sequentially in tundra, coniferous forest (taiga), mixed and broadleaf forests, grassland (steppe), and semi-desert (fringing the Caspian Sea) as the changes in vegetation reflect the changes in climate. Siberia supports a similar sequence but is predominantly taiga. The country contains forty UNESCO biosphere reserves.
Located in high northern latitudes, most of Russia is much closer to the North Pole than to the equator. Individual country comparisons are of little value in gauging Russia's enormous size and diversity. The country's 17.09 million square kilometers include one-eighth of the Earth's inhabited land area. Its European portion, which occupies a substantial part of continental Europe, is home to most of Russia's industrial activity and is where, roughly between the Dnieper River and the Ural Mountains, the Russian Empire took shape. Russia includes the entire northern portion of Asia.
From west to east, the country stretches from Kaliningrad (an exclave separated from the rest of the country by Lithuania's 1990 re-establishment of independence from the then-Soviet Union) to Ratmanov Island (one of the Diomede Islands) in the Bering Strait, near Nome, Alaska; this distance spans about . From north to south, the country ranges from the northern tip of the Russian Arctic islands at Franz Josef Land to the southern tip of the Republic of Dagestan on the Caspian Sea, spanning about of extremely varied, often inhospitable terrain.
Extending for , the Russian border is the world's longest. Along the 20,139-kilometer land frontier, Russia has boundaries with 14 countries: Norway, Finland, Estonia, Latvia, Lithuania, Poland (via the Kaliningrad Oblast), Belarus, Ukraine, Georgia, Azerbaijan, Kazakhstan, Mongolia, the People's Republic of China and North Korea.
Approximately two-thirds of the frontier is bounded by seawater. Virtually all of the lengthy northern coast is well above the Arctic Circle; except for the port of Murmansk—which receives currents that are somewhat warmer than would be expected at that latitude, due to the effects of the Gulf Stream—that coast is locked in ice much of the year. Thirteen seas and parts of two oceans—the Arctic and Pacific—wash Russian shores.
Russia shares a maritime boundary with the United States and with Japan.
With a few changes of status, most of the Soviet-era administrative and territorial divisions of the Russian Republic were retained in constituting the Russian Federation. As of 2014, there are eighty-five administrative territorial divisions (called federal subjects): twenty-two republics, nine krais (territories), forty-six oblasts (provinces), one autonomous oblast, four autonomous okrugs, and three cities with federal status, namely the cities of Moscow, Saint Petersburg, and Sevastopol.
The republics include a wide variety of peoples, including northern Europeans, Tatars, Caucasus peoples, and indigenous Siberians. The largest federal subjects are in Siberia. Located in east-central Siberia, the Sakha Republic (Yakutia) is the largest federal subject in the country (and the largest country subdivision in the world), twice the size of Alaska. Second in size is Krasnoyarsk Krai, located west of Sakha in Siberia. Kaliningrad Oblast, which is a noncontiguous constituent entity of Russia, is the smallest oblast. The Republic of Ingushetia is both the smallest republic and the smallest federal subject of Russia except for the three federal cities. The two most populous federal subjects, Moscow Oblast (with Moscow) and Krasnodar Krai, are in European Russia.
As of 2018, Russia has a population of 146.9 million people. It is ranked as having the 9th largest population of any country in the world.
Russia's capital, Moscow, has 12.166 million people; Saint Petersburg 4.993 million; Novosibirsk 1.497 million; Yekaterinburg 1.379 million; Nizhniy Novgorod 1.212 million; and Samara 1.164 million. This makes Moscow the most populated city in Russia. As of 2017, 74.29 percent of the population is urbanized.
Geographers traditionally divide the vast territory of Russia into five natural zones: the tundra zone; the Taiga, or forest, zone; the steppe, or plains, zone; the arid zone; and the mountain zone. Most of Russia consists of two plains (the East European Plain and the West Siberian Plain), three lowlands (the North Siberian, the Central Yakutian and the East Siberian), two plateaus (the Central Siberian Plateau and the Lena Plateau), and two systems of mountainous areas (the East Siberian Mountains in far northeastern Siberia and the South Siberian Mountains along the southern border).
The East European Plain encompasses most of European Russia. The West Siberian Plain, which is the world's largest, extends east from the Urals to the Yenisei River. Because the terrain and vegetation are relatively uniform in each of the natural zones, Russia presents an illusion of uniformity. Nevertheless, Russian territory contains all the major vegetation zones of the world except a tropical rain forest.
The Russian Arctic stretches for close to from west to east, from Karelia and the Kola Peninsula to Nenetsia, the Gulf of Ob, the Taymyr Peninsula and the Chukchi Peninsula (Kolyma, Anadyr River, Cape Dezhnev). Russian islands and archipelagos in the Arctic Ocean include Novaya Zemlya, Severnaya Zemlya, and the New Siberian Islands.
About 10 percent of Russia is tundra, a treeless, marshy plain. The tundra is Russia's northernmost zone, stretching from the Finnish border in the west to the Bering Strait in the east, then running south along the Pacific coast to the northern Kamchatka Peninsula. The zone is known for its herds of wild reindeer, for so-called white nights (dusk at midnight, dawn shortly thereafter) in summer, and for days of total darkness in winter. The long, harsh winters and lack of sunshine allow only mosses, lichens, and dwarf willows and shrubs to sprout low above the barren permafrost. Although several powerful Siberian rivers traverse this zone as they flow northward to the Arctic Ocean, partial and intermittent thawing hampers drainage of the numerous lakes, ponds, and swamps of the tundra. Frost weathering is the most important physical process here, gradually shaping a landscape that was severely modified by glaciation in the last ice age. Less than one percent of Russia's population lives in this zone. The fishing and port industries of the northwestern Kola Peninsula and the huge oil and gas fields of northwestern Siberia are the largest employers in the tundra. With a population of 180,000, the industrial frontier city of Norilsk is second only to Murmansk in population among Russia's settlements above the Arctic Circle. The auroras (northern lights) are also visible from this region.
Taiga, the most extensive natural zone of Russia, stretches from the country's western borders to the Pacific. It occupies the East European and West Siberian plains north of 56°–58° N and most of the territory east of the Yenisei River; in Siberia, taiga forests reach Russia's southern borders. In all, taiga accounts for over 60% of Russia's territory. The taiga is divided into an eastern portion (east of the Yenisei River), with a continental climate, and a western portion, with a milder climate; in general the zone has moist, moderately warm (cool in the north) summers and harsh winters with steady snow cover. In the latitudinal direction the taiga is divided into three subzones: northern, middle, and southern. In the western taiga, dense spruce and fir forests on wetlands alternate with pine forests, shrubs, and meadows on lighter soils. Similar vegetation is typical of the eastern taiga, but there larch rather than fir plays the leading role. Coniferous forest does not form a continuous belt, however; there are sparse areas of birch, alder, and willow (mainly in river valleys) and, in the wetlands, marshes. Fur-bearing animals are widespread within the taiga, including sable, marten, and ermine, as well as moose, brown bear, wolverine, wolf, and muskrat.
The taiga is dominated by podzolic and cryogenic taiga soils, characterized by a clearly defined horizontal structure (only in the southern taiga is there sod-podzolic soil). These soils form under a leaching regime and are poor in humus. Groundwater is normally found close to the surface in the forest, washing calcium from the upper layers, so the top layer of taiga soil is discolored and oxidized. The few areas of the taiga suitable for farming are located mainly in the European part of Russia. Large areas are occupied by sphagnum bogs (dominated by podzolic-boggy soil). To make the soil usable for agriculture, lime and other fertilizers must be applied.
The Russian taiga holds the world's largest reserves of coniferous wood, though these decrease from year to year as a result of intensive logging. Hunting is well developed there, as is farming (mainly in river valleys).
The mixed and deciduous forest belt is triangular, widest along the western border and narrowing towards the Ural Mountains. The main trees are oak and spruce, but ash, aspen, birch, hornbeam, maple, and pine also grow there. Separating the taiga from the wooded steppe is a narrow belt of birch and aspen woodland located east of the Urals as far as the Altay Mountains. Much of the forested zone has been cleared for agriculture, especially in European Russia. Wildlife is scarcer as a result, but the roe deer, wolf, fox, and squirrel remain very common.
The steppe has long been depicted as the typical Russian landscape. It is a broad band of treeless, grassy plains, interrupted by mountain ranges, extending from Hungary across Ukraine, southern Russia, and Kazakhstan before ending in Manchuria. Most of the Soviet Union's steppe zone was located in the Ukrainian and Kazakh republics; the much smaller Russian steppe is located mainly between those nations, extending southward between the Black and Caspian Seas before blending into the increasingly desiccated territory of the Republic of Kalmykia. In a country of extremes, the steppe zone provides the most favorable conditions for human settlement and agriculture because of its moderate temperatures and normally adequate levels of sunshine and moisture. Even here, however, agricultural yields are sometimes adversely affected by unpredictable levels of precipitation and occasional catastrophic droughts. The soil is very dry.
Russia's mountain ranges are located principally along its continental divide (the Ural Mountains), along the southwestern border (the Caucasus), along the border with Mongolia (the eastern and western Sayan Mountains and the western extremity of the Altay Mountains), and in eastern Siberia (a complex system of ranges in the northeastern corner of the country and forming the spine of the Kamchatka Peninsula, and lesser mountains extending along the Sea of Okhotsk and the Sea of Japan). Russia has nine major mountain ranges. In general, the eastern half of the country is much more mountainous than the western half, the interior of which is dominated by low plains. The traditional dividing line between east and west is the Yenisei River valley. In delineating the western edge of the Central Siberian Plateau from the West Siberian Plain, the Yenisei runs from near the Mongolian border northward into the Arctic Ocean west of the Taymyr Peninsula.
The Ural Mountains form the natural boundary between Europe and Asia; the range extends about from the Arctic Ocean to the northern border of Kazakhstan. Several low passes provide major transportation routes through the Urals eastward from Europe. The highest peak, Mount Narodnaya, is . The Urals also contain valuable deposits of minerals.
To the east of the Urals is the West Siberian Plain, stretching about 1,900 kilometers from west to east and about 2,400 kilometers from north to south. With more than half its territory below 200 meters in elevation, the plain contains some of the world's largest swamps and floodplains. Most of the plain's population lives in the drier section south of 55° north latitude.
The region directly east of the West Siberian Plain is the Central Siberian Plateau, which extends eastward from the Yenisei River valley to the Lena River valley. The region is divided into several plateaus, with elevations ranging between 320 and 740 meters; the highest elevation is about 1,800 meters, in the northern Putoran Mountains. The plain is bounded on the south by the Baikal Mountains system and on the north by the North Siberian Lowland, an extension of the West Siberian Plain extending into the Taymyr Peninsula on the Arctic Ocean.
In the mountain system west of Lake Baikal in south-central Siberia, the highest elevations are 3,300 meters in the Western Sayan, 3,200 meters in the Eastern Sayan, and 4,500 meters at Belukha Mountain in the Altay Mountains. The Eastern Sayan reach nearly to the southern shore of Lake Baikal; at the lake, there is an elevation difference of more than 4,500 meters between the nearest mountain, 2,840 meters high, and the deepest part of the lake, which is 1,700 meters below sea level. The mountain systems east of Lake Baikal are lower, forming a complex of minor ranges and valleys that reaches from the lake to the Pacific coast. The maximum height of the Stanovoy Range, which runs west to east from northern Lake Baikal to the Sea of Okhotsk, is 2,550 meters. To the south of that range is southeastern Siberia, whose mountains reach 800 meters. Across the Strait of Tartary from that region is Sakhalin Island, Russia's largest island, where the highest elevation is about 1,700 meters. The small Moneron Island, the site of the shootdown of Korean Air Lines Flight 007, is found to its west.
Truly alpine terrain appears in the southern mountain ranges. Between the Black and Caspian seas, the Caucasus Mountains rise to impressive heights, forming a boundary between Europe and Asia. One of the peaks, Mount Elbrus, is the highest point in Europe, at 5,642 meters. The geological structure of the Caucasus extends to the northwest as the Crimean and Carpathian Mountains and southeastward into Central Asia as the Tian Shan and Pamirs. The Caucasus Mountains create an imposing natural barrier between Russia and its neighbors to the southwest, Georgia and Azerbaijan.
Northeastern Siberia, north of the Stanovoy Range, is an extremely mountainous region. The long Kamchatka Peninsula, which juts southward into the Sea of Okhotsk, includes many volcanic peaks, some of which are still active. The highest is the 4,750-meter Klyuchevskaya Sopka, the highest point in the Russian Far East. The volcanic chain continues from the southern tip of Kamchatka southward through the Kuril Islands chain and into Japan. Kamchatka also is one of Russia's two centers of seismic activity (the other is the Caucasus). In 1995, a major earthquake largely destroyed the oil-processing town of Neftegorsk. Also located in this region is the very large Beyenchime-Salaatin crater.
Russia is a water-rich country, divided into twenty watershed districts. The earliest settlements in the country sprang up along the rivers, where most of the urban population continues to live. The Volga, Europe's longest river, is by far Russia's most important commercial waterway. Four of the country's thirteen largest cities are located on its banks: Nizhny Novgorod, Samara, Kazan, and Volgograd. The Kama River, which flows west from the southern Urals to join the Volga in the Republic of Tatarstan, is a second key European water system whose banks are densely populated.
Russia has thousands of rivers and inland bodies of water, providing it with one of the world's largest surface-water resources. However, most of Russia's rivers and streams belong to the Arctic drainage basin, which lies mainly in Siberia but also includes part of European Russia. Altogether, 84 percent of Russia's surface water is located east of the Urals in rivers flowing through sparsely populated territory and into the Arctic and Pacific oceans. In contrast, areas with the highest concentrations of population, and therefore the highest demand for water supplies, tend to have the warmest climates and highest rates of evaporation. As a result, densely populated areas such as the Don and Kuban River basins north of the Caucasus have barely adequate (or in some cases inadequate) water resources.
Forty of Russia's rivers longer than 1,000 kilometers are east of the Urals, including the three major rivers that drain Siberia as they flow northward to the Arctic Ocean: the Irtysh-Ob system (totaling 5,380 kilometers), the Yenisei (4,000 kilometers), and the Lena (3,630 kilometers). The basins of those river systems cover about eight million square kilometers, discharging nearly 50,000 cubic meters of water per second into the Arctic Ocean. The northward flow of these rivers means that source areas thaw before the areas downstream, creating vast swamps such as the 48,000-square-kilometer Vasyugan Swamp in the center of the West Siberian Plain. The same is true of other river systems, including the Pechora and the Northern Dvina in Europe and the Kolyma and the Indigirka in Siberia. Approximately 10 percent of Russian territory is classified as swampland.
A number of other rivers drain Siberia from eastern mountain ranges into the Pacific Ocean. The Amur River and its main tributary, the Ussuri, form a long stretch of the winding boundary between Russia and China. The Amur system drains most of southeastern Siberia. Three basins drain European Russia. The Dnieper, which flows mainly through Belarus and Ukraine, has its headwaters in the hills west of Moscow. The 1,860-kilometer Don originates in the Central Russian Upland south of Moscow and then flows into the Sea of Azov and the Black Sea at Rostov-on-Don. The Volga is the third and by far the largest of the European systems, rising in the Valdai Hills west of Moscow and meandering southeastward for 3,510 kilometers before emptying into the Caspian Sea. Altogether, the Volga system drains about 1.4 million square kilometers. Linked by several canals, European Russia's rivers have long been a vital transportation system; the Volga system still carries two-thirds of Russia's inland water traffic.
Russia's inland bodies of water are chiefly a legacy of extensive glaciation. In European Russia, the largest lakes are Ladoga and Onega northeast of Saint Petersburg, Lake Peipus on the Estonian border, and the Rybinsk Reservoir north of Moscow. Smaller man-made reservoirs, 160 to 320 kilometers long, are on the Don, the Kama, and the Volga rivers. Many large reservoirs also have been constructed on the Siberian rivers; the Bratsk Reservoir northwest of Lake Baikal is one of the world's largest.
The most prominent of Russia's bodies of fresh water is Lake Baikal, the world's deepest and most capacious freshwater lake. Lake Baikal alone holds 85% of the freshwater resources of the lakes in Russia and 20% of the world's total. It extends 632 kilometers in length and 59 kilometers across at its widest point. Its maximum depth is 1,713 meters. Numerous smaller lakes dot the northern regions of the European and Siberian plains. The largest of these are lakes Belozero, Topozero, Vygozero, and Ilmen in the European northwest and Lake Chany in southwestern Siberia.
About one billion acres of Russia's land is arable, but only about 0.1 percent of it is under permanent crops. The country's landscapes encompass extremely varied environments.
The agricultural workforce was reported to be about 9.4% of the population in 2016.
Russia's main agricultural export is grain, which accounts for about 6% of world trade. Other exported products include fish and oil, at about 3%; meals, at 2%; and meat, which accounts for less than 1%.
Agriculture has always been important to Russia; historically, the land was worked by its peasant class.
Russia has a largely continental climate because of its sheer size and compact configuration. Most of its land is more than from the sea, and the centre is from the sea. In addition, Russia's mountain ranges, predominantly to the south and the east, block moderating temperatures from the Indian and Pacific Oceans, but European Russia and northern Siberia lack such topographic protection from the Arctic and North Atlantic Oceans.
Because only small parts of Russia are south of 50° north latitude and more than half of the country is north of 60° north latitude, extensive regions experience six months of snow cover over subsoil that is permanently frozen to depths as far as several hundred meters. The average yearly temperature of nearly all of Siberia is below freezing, and the average for most of European Russia is between . Most of Russia has only two seasons, summer and winter, with very short intervals of moderation between them. Transportation routes, including entire railroad lines, are redirected in winter to traverse rock-solid waterways and lakes. Some areas constitute important exceptions to this description, however: the moderate maritime climate of Kaliningrad Oblast on the Baltic Sea is similar to that of the American Northwest; the Russian Far East, under the influence of the Pacific Ocean, has a monsoonal climate that reverses the direction of wind in summer and winter, sharply differentiating temperatures; and a narrow, subtropical band of territory provides Russia's most popular summer resort area on the Black Sea.
In winter, an intense high-pressure system causes winds to blow from the south and the southwest in all but the Pacific region of the Russian landmass; in summer, a low-pressure system brings winds from the north and the northwest to most of the landmass. Russia is the coldest country in the world (its average annual temperature is ). That meteorological combination reduces the wintertime temperature difference between north and south. Thus, average January temperatures are in Saint Petersburg, in the West Siberian Plain, and at Yakutsk (in east-central Siberia, at approximately the same latitude as Saint Petersburg), while the winter average on the Mongolian border, whose latitude is some 10° farther south, is barely warmer. Summer temperatures are more affected by latitude, however; the Arctic islands average , and the southernmost regions average . Russia's potential for temperature extremes is typified by the national record low of , recorded at Verkhoyansk in north-central Siberia, and the record high of , recorded at several southern stations (Utta).
The long, cold winter has a profound impact on almost every aspect of life in Russia. It affects where and how long people live and work, what kinds of crops are grown, and where they are grown (no part of the country has a year-round growing season). The length and severity of the winter, together with the sharp fluctuations in the mean summer and winter temperatures, impose special requirements on many branches of the economy. In regions of permafrost, buildings must be constructed on pilings, machinery must be made of specially tempered steel, and transportation systems must be engineered to perform reliably in extremely low and extremely high temperatures. In addition, during extended periods of darkness and cold, there are increased demands for energy, health care, and textiles.
Because Russia has little exposure to ocean influences, most of the country receives low to moderate amounts of precipitation. The highest precipitation falls in the northwest, with amounts decreasing from northwest to southeast across European Russia. The wettest areas are the small, lush subtropical region adjacent to the Caucasus and the Pacific coast: Sochi receives per year, and the Kuril Islands typically around , much of which is snow. Along the Baltic coast, average annual precipitation is , and in Moscow it is . An average of only falls along the Russian–Kazakh border, and as little as may fall along Siberia's Arctic coastline. Average annual days of snow cover, a critical factor for agriculture, depends on both latitude and altitude; cover varies from 40 to 200 days in European Russia, and from 120 to 250 days in Siberia.
The main territorial changes of Russia happened by means of military conquest and by planned settlements in the course of over five centuries (1533 to present).
Some dispute arose with neighboring Belarus after it introduced visa-free travel in early 2017. Several months after the change, Russia established strict border controls between the two countries, which affected the income of the Belarusian state-run airline. Moscow claimed that the measure was in the interest of countering the growing threat of terrorism.
Russia's government comprises an executive branch, a legislative branch, and a judicial branch. The chief of state has been President Vladimir Putin since 7 May 2012. The legislative branch includes the Federation Council, with 170 seats and four-year terms. The Supreme Court of the Russian Federation is made up of 170 members, divided into the Judicial Panel for Criminal Affairs, the Judicial Panel for Civil Affairs, and the Military Panel.
Suffrage is universal and begins at the age of 18.
The recent decline in the Russian economy has reversed, and growth is expected to peak at a modest 3.1% in 2018. Russia has shown further macroeconomic stability by continuing its recovery, which has been reinforced by non-trading sectors. Higher energy prices have widened the trade surplus and continued the trend of strengthening the economy. A clean-up of the sector by the Central Bank has supported the growth of state-controlled banking assets, contributing to an increase of almost 70% in the combined assets of the Russian banking system. With a focus on new digital infrastructure, further socio-economic benefits will require the implementation of newer policies that can sustain and accelerate this transformation. Although the outlook is mostly positive, forecasts suggest that the shift from a still-recovering economy may not be complete until 2020.
Russia's foreign trade exports derive mainly from oil and petroleum products (along with gas and coal), steel, and other metals and minerals, with the emphasis on the oil and petroleum industry. Other important exports include machinery, equipment, fertilizer, timber, and natural gas. CIS countries rely heavily on Russian exports, which provide most of their needs. Russia imports machinery and equipment, vehicles, consumer goods, foodstuffs, chemical products, and industrial consumer goods. Russia's major trading partners are Germany, Italy, Poland, Switzerland, the United Kingdom, the United States, and Finland.
Russia's GDP was US$1.578 trillion in 2017.
Russia's GDP per capita was US$10,743.10 in 2017.
Russia's gross national income was 3.655 trillion PPP dollars in 2017.
The Russian ruble is Russia's currency.
1 Russian Ruble = 0.015 United States Dollar.
In 2015, it was estimated that about 13.3% of the population lived below the poverty line in Russia.
Area (excluding Crimea):
Area - comparative: Slightly larger than twice the size of Brazil
Land boundaries:
Kaliningrad forms the westernmost part of Russia, having no land connection to the rest of the country. It is bounded by Poland, Lithuania, and the Baltic Sea.
Crimea, a peninsula on the Black Sea, is claimed and de facto administered by the Russian Federation since Russia annexed it in March 2014. It is recognized as a territory of Ukraine by most of the international community.
Border countries:
Coastline excluding Crimea: 37,653 km (23,396 mi)
Maritime claims:
Elevation extremes:
Russia holds greater reserves of mineral resources than any other country in the world. Though they are abundant, they are in remote areas with extreme climates, making them expensive to mine. The country is especially rich in mineral fuels. It may hold as much as half of the world's coal reserves and even larger reserves of petroleum. Deposits of coal are scattered throughout the region, but the largest are located in central and eastern Siberia. The most developed fields lie in western Siberia, in the northeastern European region, in the area around Moscow, and in the Urals. The major petroleum deposits are located in western Siberia and in the Volga-Urals. Smaller deposits are found throughout the country. Natural gas, a resource of which Russia holds around forty percent of the world's reserves, can be found along Siberia's Arctic coast, in the North Caucasus, and in northwestern Russia. Major iron-ore deposits are located south of Moscow, near the Ukrainian border in the Kursk Magnetic Anomaly; this area contains vast deposits of iron ore that have caused a deviation in the Earth's magnetic field. There are smaller deposits in other parts of the country. The Ural Mountains hold small deposits of manganese; nickel, tungsten, cobalt, molybdenum, and other iron-alloying elements occur in adequate quantities.
Russia also has deposits of most nonferrous metals. Aluminium ores are scarce and are found primarily in the Ural region, northwestern European Russia, and south-central Siberia. Copper is more abundant, with major reserves located in the Urals, the Norilsk area near the mouth of the Yenisei in eastern Siberia, and the Kola Peninsula. Another vast deposit located east of Lake Baikal was exploited only after the Baikal-Amur Mainline (BAM) railroad was completed in 1989.
The North Caucasus, far eastern Russia, and the western edge of the Kuznetsk Basin in southern Siberia contain an abundance of lead and zinc ores. These are commonly found along with copper, gold, silver, and a large number of other rare metals. The country has one of the largest gold reserves in the world, mostly in Siberia and the Urals. Mercury deposits can be found in the central and southern Urals and in south-central Siberia.
Raw materials are abundant as well, including potassium and magnesium salt deposits in the Kama River region of the western Urals. Russia also contains one of the world's largest deposits of apatite found in the central Kola Peninsula. Rock salt is located in the southwestern Urals and the southwest of Lake Baikal. Surface deposits of salt are found in salt lakes along the lower Volga Valley. Sulfur can be found in the Urals and the middle Volga Valley.
Eight percent of the land is used for arable farming, four percent for permanent pasture, forty-six percent is forest and woodland, and forty-two percent is put to other uses.
Volcanic activity in the Kuril Islands and volcanoes and earthquakes on the Kamchatka peninsula are other natural hazards. | https://en.wikipedia.org/wiki?curid=25702 |
Demographics of Russia
The demographics of Russia are the demographic features of the population of the Russian Federation including population growth, population density, ethnic composition, education level, health, economic status and other aspects.
With a population of 142.8 million according to the 2010 census, which rose to 146.7 million as of 2020, Russia is the most populous nation in Europe and the ninth-most populous country in the world. Its population density stands at 9 inhabitants per square kilometre (23 per square mile). The overall life expectancy in Russia at birth is 72.4 years (66.9 years for males and 77.6 years for females). Since the 1990s, Russia's death rate has exceeded its birth rate. As of 2018, the total fertility rate (TFR) across Russia was estimated to be 1.58 born per woman, one of the lowest fertility rates in the world, below the replacement rate of 2.1, and considerably below the high of 7.44 children born per woman in 1908. Consequently, the country has one of the oldest populations in the world, with an average age of 40.3 years.
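As a quick sanity check of the density figure (using the total area of about 17.09 million square kilometres cited in the geography section above, and noting that 1 mi² ≈ 2.59 km²):

$$\frac{146.7\times 10^{6}\ \text{people}}{17.09\times 10^{6}\ \text{km}^{2}} \approx 8.6\ \text{people/km}^{2} \approx 9, \qquad 9 \times 2.59 \approx 23\ \text{people/mi}^{2}.$$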
Russia is home to approximately 111 million ethnic Russians, and about 20 million ethnic Russians live outside Russia in the former republics of the Soviet Union, mostly in Ukraine and Kazakhstan. The 2010 census recorded 81% of the population as ethnically Russian and 19% as other ethnicities: 3.7% Tatars; 1.4% Ukrainians; 1.1% Bashkirs; 1% Chuvashes; 11.8% others and unspecified. According to the census, 84.93% of the Russian population belongs to European ethnic groups (Slavic, Germanic, Finnic, Greek, and others). This is a decline from 2002, when they constituted more than 86% of the population. In total, 185 different ethnic groups live within the Russian Federation's borders.
The population of Russia peaked at 148,689,000 in 1991, just before the breakup of the Soviet Union. Low birth rates and abnormally high death rates caused Russia's population to decline at a 0.5% annual rate, or about 750,000 to 800,000 people per year from the mid-1990s to the mid-2000s. The UN warned in 2005 that Russia's then population of about 143 million could fall by a third by 2050, if trends did not improve. In 2018, the UN claimed that Russia's population could fall to 132 million by 2050.
The decline slowed considerably in the late 2000s, and in 2009 Russia recorded population growth for the first time in 15 years, adding 23,300. Key reasons for the slow current population growth are improving health care, changing fertility patterns among younger women, falling emigration and a steady influx of immigrants from ex-USSR countries. In 2012, Russia's population increased by 292,400.
As of 2018, Russia's TFR of 1.579 children born per woman was among the highest in Eastern, Southern and Central Europe. In 2013, Russia experienced its first natural population growth since 1990, at 22,700. Since 2016, however, the TFR has fallen sharply, which has already led to a return of natural population decrease; in 2018 that decrease surpassed the net migration gain, leading to a slight decline in the total population. Even though life expectancy in Russia is growing steadily and quickly (by about 0.5 years annually), reaching record highs every year, this is still not enough for recovery, owing to the distorted, wave-like age structure of the population.
The number of Russians living in poverty has decreased by 50% since the economic crisis following the disintegration of the Soviet Union, and the improving economy had a positive impact on the country's low birth rate. The latter rose from its lowest point of 8.27 births per 1000 people in 1999 to 13.3 per 1000 in 2014. Likewise, the fertility rate rose from its lowest point of 1.157 in 1999 to 1.777 in 2015. 2007 marked the highest growth in birth rates that the country had seen in 25 years, and 2009 marked the highest total birth rate since 1991.
While the Russian birth rate is comparable to that of developed countries, its death rate is much higher, especially among working-age males due to a comparatively high rate of fatalities caused by heart disease and other external causes such as accidents. The Russian death rate in 2010 was 14.3 per 1000 citizens.
The causes of this sharp increase in mortality are widely debated. According to a 2009 report in "The Lancet", a British medical journal, mass privatization, an element of the economic-reform package nicknamed shock therapy, clearly correlates with higher mortality rates. The report argues that advocates of economic reform ignored the human costs of the policies they were promoting, such as unemployment and human suffering, leading to early deaths. These conclusions were criticized by "The Economist". A WHO press release in 2000, on the other hand, cited widespread alcohol abuse in Russia as the most common explanation of higher mortality among men. A 2008 study produced very similar results.
A 2009 study blamed alcohol for more than half the deaths (52%) among Russians aged 15 to 54 in the 1990s. For the same demographic, this compares to 4% of deaths for the rest of the world. The study claimed that alcohol consumption in mid-1990s Russia averaged 10.5 litres, and was based on personal interviews conducted in three Siberian industrial cities: Barnaul, Biysk, and Omsk. More recent studies have confirmed these findings.
According to the Russian demographic publication Demoscope, the rising male death rate was a long-term trend from 1960 to 2005. The only significant reversion of the trend was caused by Mikhail Gorbachev's anti-alcohol campaign, but its effect was only temporary. According to the publication, the sharp rise of death rates in the early 1990s was caused by the exhaustion of the effect of the anti-alcohol campaign, while the market reforms were only of secondary importance. The authors also claimed the Lancet's study is flawed because it used the 1985 death rate as the base, while that was in fact the very maximum of the effect of the anti-alcohol campaign.
Other factors contributing to the collapse, along with the economic problems, include the dying off of a relatively large cohort of people born between 1925 and 1940 (between the Russian Civil War and World War II), when Russian birth rates were very high, along with an "echo boom" in the 1980s that may have satisfied the demand for children, leading to a subsequent drop in birth rates.
Government measures to halt the demographic crisis were a key subject of Vladimir Putin's 2006 state-of-the-nation address. As a result, a national programme was developed with the goal of reversing the trend by 2020. Soon after, a study published in 2007 showed that the rate of population decrease had begun to slow: while the net decrease from January to August 2006 was 408,200 people, it was 196,600 over the same period in 2007. The death rate accounted for 357,000 of these, which is 137,000 fewer than in 2006.
Over the same period in 2007, there were just over one million births in Russia (981,600 in 2006), whilst deaths decreased from 1,475,000 to 1,402,300. In all, the number of deaths exceeded the number of births by 1.3 times, down from 1.5 in 2006. Eighteen of the 83 provinces showed natural population growth (in 2006: 16). The Russian Ministry of Economic Development expressed hope that by 2020 the population would stabilize at 138–139 million, and that by 2025 it would increase again to its present-day level of 143–145 million, with life expectancy rising to 75 years.
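These ratios can be cross-checked against the figures in the same paragraph: the 2006 numbers reproduce the 1.5 ratio exactly, and the 1.3 ratio for 2007 implies a birth count consistent with "just over one million":

$$\frac{1{,}475{,}000}{981{,}600} \approx 1.50\ \text{(2006)}, \qquad \frac{1{,}402{,}300}{1.3} \approx 1{,}079{,}000\ \text{(implied 2007 births)}.$$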
The natural population decline continued to slow through 2008–2012 due to declining death rates and increasing birth rates. In 2009 the population grew for the first time in 15 years. In September 2009, the Ministry of Health and Social Development reported that Russia had recorded natural population growth for the first time in 15 years, with 1,000 more births than deaths in August. In April 2011, the then Russian Prime Minister (and president as of 2012) Vladimir Putin pledged to spend 1.5 trillion rubles (£32.5 billion or $54 billion) on various measures to boost Russia's declining birthrate by 30 per cent over the following four years.
In 2012, the birth rate increased again. Russia recorded 1,896,263 births, the highest number since 1990, exceeding even the annual number of births during the period 1967–1969, with a TFR of 1.691, the highest since 1991 (source: vital statistics table below). In fact, Russia, despite having only slightly more people than Japan, has recently had nearly twice as many births as that country. The number of births was expected to fall over the next few years as women born during the baby bust of the 1990s entered their prime childbearing years, but this did not occur, thanks to the continued growth of the TFR. The figures for 2013–2015 again showed around 1.9 million births, about the same as in 2012; because the number of women of childbearing age was dropping, especially those in their early 20s, the TFR actually rose to 1.777, placing Russia among the first 9 or 10 of 50 developed nations, and in 6th place in Europe.
In 2017, the number of births dropped, mostly because of falling fertility rates; these in turn were affected by a fall in second births, driven by the planned (though later postponed) termination of the maternal capital program, and by a fall in first births. The recent drop in fertility has been sharpest in the North Caucasus, including in Chechnya, where the birth rate has fallen by one-third since 2010. The changing number of women of reproductive age also played a key role. However, the number of deaths also declined, due to improving healthcare, a decline in violent crime rates, and declining consumption of alcohol, tobacco, and hard drugs.
In 2018, the number of births kept falling, but at a much slower pace. The number of deaths, however, did not decline by as much as it had the previous year: whilst life expectancy improved, the ageing of the population led to a higher mortality rate. By 2020, around 25.7% of Russians were expected to be over 60 years old, nearly double the 1985 share of 12.7%. By the middle of the century, it is possible that more than a third of the population will be over 60, similar to modern Japan.
In 2006, in a bid to compensate for the country's demographic decline, the Russian government started simplifying immigration laws and launched a state program "for providing assistance to voluntary immigration of ethnic Russians from former Soviet republics". In August 2012, as the country saw its first demographic growth since the 1990s, President Putin declared that Russia's population could reach 146 million by 2025, "mainly as a result of immigration". New citizenship rules introduced in April 2014, allowing citizens of former Soviet countries to obtain Russian citizenship if they meet certain criteria (e.g. language, ethnicity), have attracted strong interest among Russian-speaking residents of those countries (i.e. Russians, Germans, Belarusians, and Ukrainians).
There are an estimated four million illegal immigrants from the ex-Soviet states in Russia. In 2012, the Russian Federal Security Service's Border Service stated that there had been an increase in illegal migration from the Middle East and Southeast Asia (these being temporary contract migrants). Under legal changes made in 2012, illegal immigrants who are caught are banned from re-entering the country for 10 years.
Since the collapse of the USSR, most immigrants have come from Ukraine, Uzbekistan, Tajikistan, Azerbaijan, Moldova, Kazakhstan, Kyrgyzstan, Armenia, Belarus, and China.
Temporary migrant workers in Russia number about 7 million people; most come from Central Asia, the Balkans, and East Asia. Most of them work in construction, cleaning, and household service, and they live primarily in cities such as Moscow, Sochi, and Blagoveshchensk. While most Russians oppose worker migration, the mayor of Moscow has said that Moscow cannot do without migrant workers. New laws require migrant workers to be fluent in Russian and to know Russian history and law. The Russian opposition, like most of the Russian population, opposes worker migration; Alexei Navalny has stated that if he came to power he would introduce a visa regime for the non-Eurasian Union countries of the former Soviet Union and a visa-free regime with the European Union and the West to attract skilled migrants. The problem of worker migration has become severe enough to cause a rise in Russian nationalism and to spawn groups such as the Movement Against Illegal Immigration.
The total fertility rate is the number of children born per woman. It is based on fairly good data for the entire period. Sources: Our World In Data and Gapminder Foundation.
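For readers unfamiliar with how the statistic is constructed, the standard demographic definition (a general formula, not specific to the sources named above) sums a given year's age-specific fertility rates across the reproductive ages, yielding the number of children a hypothetical woman would bear if she experienced that year's rates at every age:

$$\mathrm{TFR} = \sum_{x=15}^{49} \frac{B_{x}}{W_{x}},$$

where B_x is the number of births during the year to women aged x and W_x is the mid-year number of women aged x. When rates are tabulated in five-year age groups, the sum of the group rates is multiplied by 5.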
For many of the years covered by these data, Russia had the highest total fertility rate in the world. These very high fertility rates did not increase the population even more because of the casualties of the Russian Revolution, the two world wars, and political killings.
Note: Russian data includes Crimea starting in 2014.
Demographic statistics according to the World Population Review in 2019.
Demographic statistics according to the US based CIA World Factbook, unless otherwise indicated.
definition: age 15 and over can read and write (2015 est.)
Russian 77.7%, Tatar 3.7%, Ukrainian 1.4%, Bashkir 1.1%, Chuvash 1%, Chechen 1%, other 10.2%, unspecified 3.9%
note: nearly 200 national and/or ethnic groups are represented in Russia's 2010 census (2010 est.)
Russian Orthodox 15–20%, Muslim 10–15%, other Christian 2% (2006 est.) Note: estimates are of practicing worshipers; Russia has large populations of non-practicing believers and non-believers, a legacy of over seven decades of Soviet rule; Russia officially recognizes Orthodox Christianity, Islam, Judaism, and Buddhism as traditional religions.
Russian (official) 85.7%, Tatar 3.2%, Chechen 1%, other 10.1%. Note: data represent native language spoken (2010 est.)
Population is heavily concentrated in the westernmost fifth of the country extending from the Baltic Sea, south to the Caspian Sea, and eastward parallel to the Kazakh border; elsewhere, sizeable pockets are isolated and generally found in the south
8.4 people per square kilometer (2010 Russian Census)
"at birth:" 1.06 male(s)/female
"under 15 years:" 1.05 male(s)/female
"15–64 years:" 0.4 male(s)/female
"65 years and over:" 0.46 male(s)/female
"total population:" 0.86 male(s)/female (2009)
In 2017, Russia's TFR of 1.62 children born/woman was among the highest in Eastern Europe, meaning that the average Russian family had more children than an average family in most other Eastern European countries, but the rate remained below the replacement rate of 2.1. After a surge in births over several years, Russia's birth rate fell by 10.6 percent in 2017, reaching its lowest level in 10 years.
In 1990, just prior to the dissolution of the Soviet Union, Russia's total fertility rate (TFR) stood at 1.89. Fertility rates had already begun to decline in the late 1980s due to the natural progression of Russia's demographic structure, but the rapid and widely negative changes in society following the collapse greatly influenced the rate of decline. The TFR hit a historic low of 1.157 in 1999. The only federal subject of Russia to see a decline in fertility since 1999 is Ingushetia, where the TFR fell from 2.443 to 2.278 in 2014.
In 2009, 8 of Russia's federal subjects had a TFR above 2.1 children per woman (the approximate minimum required to ensure population replacement). These federal subjects were Chechnya (3.38), Tuva (2.81), Ust-Orda Buryat Okrug (2.73), Agin-Buryat Okrug (2.63), Komi-Permyak Okrug (2.16), Evenk Okrug (2.58), the Altai Republic (2.36), and Nenets Autonomous Okrug (2.1). Of these federal subjects, four have an ethnic Russian majority (Altai, Evenk, Ust-Orda, and Nenets). In 2011, the highest TFRs were recorded in Chechnya (3.362), Tyva (3.249), Ingushetia (2.94), the Altai Republic (2.836), the Sakha Republic (2.057), Buryatia (2.027), and Nenets Autonomous Okrug (2.007).
Until 2010, the Russian republic of Chechnya was the region with the highest birth rate in the former USSR (excluding Central Asia). However, in 2011, the Armenian province of Qashatagh overtook it (29.3 vs 28.9 per 1,000).
Between the 2002 and 2010 censuses, the average number of children ever born per 1,000 women decreased from 1,513 to 1,469. In urban areas the figure was 1,328 children (1,350 in 2002), and in rural areas 1,876 (1,993 in 2002).
The distribution of women aged 16 or over by number of children, in the 2002 and 2010 censuses:
1 child: 30.5% (2002), 31.2% (2010)
2 children: 33.7% (2002), 34.4% (2010)
3 children: 8.9% (2002), 8.7% (2010)
4 or more children: 5.2% (2002), 4.2% (2010)
no children: 21.7% (2002), 21.5% (2010)
Thus, while the share of women who have had no children decreased slightly, the shares of families with three and with four or more children declined between 2002 and 2010.
In every region in Russia, rural areas reported higher TFR compared to urban areas. In most of the federal subjects in Siberia and the Russian Far East, the total fertility rates were high, but not high enough to ensure population replacement. For example, Zabaykalsky Krai had a TFR of 1.82, which is higher than the national average, but less than the 2.1 needed for population replacement.
Compared to the G7 countries, in 2015 Russia's TFR of 1.78 children/woman was lower than that of France (1.93), the USA (1.84), and the UK (1.82), yet higher than that of the other G7 countries: Canada (1.61), Germany (1.50), Japan (1.46), and Italy (1.35).
Compared to other most populous nations, Russia has a lower TFR than Nigeria (5.37), Pakistan (3.42), Indonesia (2.5), India (2.30), Mexico (2.19), the USA (1.84), and higher TFR than Brazil (1.74), and China (1.5–1.6).
Experts were puzzled that a sharp increase in deaths coincided with a sharp increase in life expectancy. They found that a decrease in the number of potential mothers led to a decrease in births even as the fertility rate rose rapidly.
Data from Federal State Statistics Service.
"Further information:" List of federal subjects of Russia by life expectancy
"total population:" 72.5 years
"male:" 67.5 years
"female:" 77.4 years
The disparity in average lifespan between the sexes in Russia is the largest in the world: women live 9–12 years longer than men, whereas the difference is typically only about five years elsewhere. David Stuckler, Lawrence King, and Martin McKee propose mass privatization and the neo-liberal shock therapy policies of the Yeltsin administration as key reasons for the falling life expectancy of Russian men. As of 2011, the average life expectancy in Russia was 64.3 years for males and 76.1 years for females. According to the WHO 2011 report, annual per capita alcohol consumption in Russia is about 15.76 litres, the fourth-highest volume in Europe (compared with 13.37 in the UK, 13.66 in France, 15.6 in Ukraine, 16.45 in the Czech Republic, etc.).
In the late 1950s, the USSR claimed a higher life expectancy than the United States, but the Soviet Union has lagged behind Western countries in terms of mortality and life expectancy since the late 1960s.
When controlling for confounding variables, neither alcoholism, poverty, pollution, nor the collapse of the health system alone explains the high male mortality: most former communist countries went through the same economic and health-system collapse, per capita alcohol consumption is as high in other East European countries, and poverty is high in many other countries. One factor that could explain the low male lifespan in Russia is a culture of "male toughness": violence, tolerance for violence, and tolerance for risk, which together with alcoholism reduce the Russian male lifespan.
Life expectancy was about 70 in 1986, prior to the transition-induced disruption of the healthcare system. The turmoil of the early 1990s caused life expectancy in Russia to decrease steadily while it was rising in the rest of the world. Recently, however, Russian life expectancy has again begun to rise: between 2006 and 2011, male life expectancy rose by almost four years, lifting the overall life expectancy to 70.3.
In 2012, 1,043,292 deaths, or 55% of all deaths in Russia, were caused by cardiovascular disease. The second leading cause of death was cancer, which claimed 287,840 lives (15.2%). External causes of death such as suicide (1.5%), road accidents (1.5%), murder (0.8%), accidental alcohol poisoning (0.4%), and accidental drowning (0.5%) claimed 202,175 lives in total (10.6%). Other major causes of death were diseases of the digestive system (4.6%), respiratory disease (3.6%), infectious and parasitic diseases (1.6%), and tuberculosis (0.9%). The infant mortality rate in 2012 was 7.6 deaths per 1,000 live births (down from 8.2 in 2009 and 16.9 in 1999).
In the 1980s only 8% to 10% of married Russian women of reproductive age used hormonal and intrauterine contraception methods, compared to 20% to 40% in other developed countries.
This led to much higher abortion rates in Russia compared to other developed countries: in the 1980s Russia had about 120 abortions per 1,000 women of reproductive age, compared with only 20 per 1,000 in Western countries. However, after the dissolution of the Soviet Union in 1991 many changes took place, such as the demonopolization of the market for contraceptive drugs and media liberalization, which led to a rapid conversion to more efficient pregnancy-control practices. Abortion rates fell in the first half of the 1990s for the first time in Russia's history, despite declining fertility rates. From the early 1990s to 2006, the number of expected abortions per woman during her lifetime fell by nearly 2.5 times, from 3.4 to 1.2. As of 2004, the share of women of reproductive age using hormonal or intrauterine birth control methods was about 46% (29% intrauterine, 17% hormonal).
Despite an increase in "family planning", a large portion of Russian families do not have the number of children they desire at the time they desire. According to a 2004 study, current pregnancies were termed "desired and timely" by 58% of respondents, while 23% described them as "desired, but untimely", and 19% said they were "undesired". The share of unplanned pregnancies remains much lower in countries with a developed family planning culture, such as the Netherlands, where 20 years earlier the share of unwanted pregnancies had been half that of Russia.
The Russian Federation is home to as many as 160 different ethnic groups and indigenous peoples. As of the 2010 census, 80.90% of the population that disclosed their ethnicity (111,016,896 people) is ethnically Russian, followed by (groups larger than one million):
According to the 2010 Census, 142,856,536 people lived in Russia. Notably, 5,629,429 people (3.94% of the overall population) did not declare any ethnic origin, compared to about 1 million in the 2002 Census; those people were counted from administrative databases rather than directly, and were therefore unable to state their ethnicity. The percentages mentioned above are therefore taken from the total population that declared their ethnicity, on the assumption that the non-declared remainder has an ethnic composition similar to the declared segment.
Most smaller groups live compactly in their respective regions and can be categorized by language group.
The ethnic divisions used here are those of the official census, and may in some respects be controversial. The following lists all ethnicities resolved by the 2010 census, grouped by language:
The ethno-demographic structure of Russia has gradually changed over time. The most striking change of the past century is the rapid increase of the peoples from the Caucasus: in 1926, these peoples composed 2% of the Russian population, compared to 6.5% in 2010. Though low in absolute numbers, the Siberian peoples also increased during the past century, but their growth mainly occurred after World War II (from 0.7% in 1959 to 1.2% in 2010) and did not extend to most of the small peoples (fewer than 10,000 people).
The relative proportion of the peoples of European Russia gradually decreased during the past century, but they still composed 91% of the total population of Russia in 2010. The absolute numbers of most of these peoples reached their highest levels in the beginning of the 1990s. Since 1992, natural growth in Russia has been negative, and the numbers of all peoples of European Russia were lower in 2010 than in 2002, the only exceptions being the Roma (due to high fertility rates) and the Gagauz (due to high levels of migration from Moldova to Russia).
Several peoples saw a much larger decrease than can be explained by the low fertility rates and high mortality rates in Russia during the past two decades. Emigration and assimilation contributed to the decrease in numbers of many peoples. Emigration was the most important factor for Germans, Jews and Baltic peoples (Estonians, Latvians, Lithuanians). The number of Germans halved between 1959 and 2010. Their main country of destination is Germany.
The number of Jews decreased by more than 80% between 1959 and 2010. In 1970, the Soviet Union had the third-largest population of Jews in the world (2,183,000, of whom 808,000 resided in Russia), following only the United States and Israel. By 2010, due to Jewish emigration, their number had fallen to as low as 158,000. Sizeable emigration of other minorities has continued as well. The main destinations of emigrants from Russia are the USA (Russians, Jews, Belarusians, Chechens, Meskhetian Turks, Ukrainians and others), Israel (Jews), Germany (Germans and Jews), Poland (Poles), Canada (Finns and Ukrainians), Finland (Finns), France (Jews and Armenians) and the United Kingdom (mainly rich Russians).
Assimilation (i.e., marrying Russians, with children of such unions counted as Russians) explains the decrease in numbers of Ukrainians, Belarusians and most of the Uralic peoples. The assimilation is reflected in the high median age of these peoples (see the table below), as assimilation is stronger among young people than among old people. The assimilation of the Uralic peoples of Russia has probably been going on for centuries and is most prominent among the Mordvins (1.4% of the Russian population in 1926 and 0.5% in 2010), the Karelians, Veps and Izhorians.
Assimilation, on the other hand, slowed the decrease in the number of ethnic Russians, as did the immigration of ethnic Russians from the former Soviet republics, especially Central Asia. Conversely, the numbers of Ukrainians, Belarusians, Germans, Jews, and other non-autochthonous ethnic groups have also been reduced by emigration to Ukraine, Belarus, Germany, Israel, and so forth, respectively.
Peoples of European Russia in the Russian Federation, 1926–2010
Peoples of the Caucasus in the Russian Federation, 1926–2010
Peoples of Siberia in the Russian Federation, 1926–2010
Russia experiences a constant flow of immigration. On average, close to 300,000 legal immigrants enter the country every year; about half are ethnic Russians from the other republics of the former Soviet Union. There is a significant inflow of ethnic Armenians, Uzbeks, Kyrgyz and Tajiks into big Russian cities, something that is viewed unfavorably by some citizens. According to a 2013 opinion poll, 74% of Russians view the large number of labor migrants as a negative phenomenon. According to the United Nations, Russia's legal immigrant population is the third-biggest in the world, numbering 11.6 million. In addition, there are an estimated 4 million illegal immigrants from the ex-Soviet states in Russia. In 2015, Ukraine–Russia was the world's second-largest migration corridor, after Mexico–USA. According to the Armenian government, between 80,000 and 120,000 Armenians travel to Russia every year to do seasonal work, returning home for the winter. According to the Tajik government, at least 870,000 Tajiks are working in Russia. In 2014, remittances from Russia accounted for around one-third of Kyrgyzstan's GDP and over 40% of Tajikistan's.
The Kazakhs in Russia are mostly not recent immigrants. The majority inhabit regions bordering Kazakhstan such as the Astrakhan (16% of the population are Kazakhs), Orenburg (6% of the population are Kazakhs), Omsk (4% of the population are Kazakhs) and Saratov (3% of the population are Kazakhs) oblasts. Together these oblasts host 60% of the Kazakh population in Russia. The number of Kazakhs slightly decreased between 2002 and 2010 due to emigration to Kazakhstan, which has by far the strongest economy in Central Asia (Russia does receive immigration from Kazakhstan, but they are mainly ethnic Russians); other Central Asian populations, especially Uzbeks, Tajiks, and Kyrgyz, have continued to rise rapidly. (Turkmen are an exception; citizens of Turkmenistan do not have visa-free access to Russia.)
Russian statistical organizations classify immigrants based on their ethnicity, although there is an information gap between 2007 and 2013. In 2007, net immigration was 190,397 (plus another 49,546 for whom ethnicity was unknown). Of this, 97,813 were Slavic / Germanic / Finnic (51.4%, of which Russians – 72,769 and Ukrainians – 17,802), 52,536 were Turkic and other traditionally Muslim peoples (27.6%, of which Azeris – 14,084, Tatars – 10,391, Uzbeks – 10,517, Tajiks – 9,032, Kyrgyz – 7,533, and Kazakhs – a net outflow of 1,424), and 40,048 were others (21.0%, of which Armenians – 25,719).
Many immigrants are actually migrant workers who come to Russia, work for around five years, and then return to their countries. Major sources of migrant workers from which permanent settlers of the countries' majority ethnicity are virtually nonexistent (2013 figures): China – 200,000 migrant workers, 1,000 settled permanently; Uzbekistan – 100,000 migrant workers, 489 settled permanently; Tajikistan – 80,000 migrant workers, 220 settled permanently; Kyrgyzstan – 50,000 migrant workers, 219 settled permanently; North Macedonia – 20,000 worker arrivals, 612 settled permanently.
Peoples of Central Asia in the Russian Federation, 1926–2010
The 2010 census found the following figures for foreign citizens resident in Russia:
All others: 41,400
Median ages of ethnic groups vary considerably between groups. Ethnic Russians and other Slavic and Finnic groups have higher median ages than the Caucasian groups.
Median ages are strongly correlated with fertility rates: ethnic groups with higher fertility rates have lower median ages, and vice versa. For example, in 2002, in the ethnic group with the lowest median age – the Ingush – women 35 or older had, on average, 4.05 children; in the ethnic group with the highest median age – Jews – women 35 or older averaged only 1.37 children.
Ethnic Jews have both the highest median age and the lowest fertility rate; this is a consequence of Jewish emigration.
Ethnic Russians represent a significant deviation from the pattern, with the second-lowest fertility rate of all major groups but a relatively low median age (37.6 years). This phenomenon is at least partly due to a high mortality rate among older people, especially males, as well as the fact that children from mixed marriages are often registered as ethnic Russians in the census. The most noticeable trend of the past couple of decades is the convergence of birth rates between minorities (including Muslim minorities) and the Russian majority.
The following table shows the variation in median age and fertility rates according to the 2002 census.
Russian is the common official language throughout Russia understood by 99% of its current inhabitants and widespread in many adjacent areas of Asia and Eastern Europe. National subdivisions of Russia have additional official languages (see their respective articles). There are more than 100 languages spoken in Russia, many of which are in danger of extinction.
Russia officially recognizes Orthodox Christianity, Islam, Judaism, and Buddhism as traditional religions. Russia has large populations of non-practicing believers and non-believers; many people identify only nominally with a religion. There is no official census on religion in Russia. The Pew Research Center found that 71% of Russians identified as Orthodox, with 1.8% Protestants, 0.5% Catholics and 0.3% other Christians. Pew estimated 11.7% of the population to be Muslim as of 2010. Estimates of practicing worshipers are:
Russian Orthodox 15–20%, Muslim 10–15%, other Christian 2% (2006 est.). Only a small percentage of the population is strongly religious: approximately 2–4% of the general population are integrated into church life (воцерковленные), while others attend on a less regular basis or not at all. Many non-religious ethnic Russians identify with the Orthodox faith for cultural reasons. The majority of Muslims live in the Volga–Ural region and the North Caucasus, although Moscow, Saint Petersburg, and parts of Siberia also have sizable Muslim populations.
Other branches of Christianity present in Russia include Roman Catholicism (approx. 1%), Baptists, Pentecostals, Lutherans and other Protestant churches (together totalling about 0.5% of the population), and Old Believers. Judaism, Buddhism and Hinduism also have some presence, and pagan beliefs persist to some extent in remote areas, sometimes syncretized with one of the mainstream religions.
According to the data of the 2010 Census, presented above, 88.26% of the people who stated their ethnicity belong to traditional Christian ethnic groups, 10.90% belong to traditional Muslim ethnic groups and 0.84% belong to traditional Buddhist, Jewish, Hindu and other ethnic groups.
"definition:" age 15 and over can read and write
"total literacy:" 99.7% (2015)
"male:" 99.7%
"female:" 99.6%
Russia's free, widespread and in-depth educational system, inherited with almost no changes from the Soviet Union, has produced nearly 100% literacy. 97% of children receive their compulsory 9-year basic or complete 11-year education in Russian. Other languages are also used in their respective republics, for instance Tatar, and Yakut.
About 3 million students attend Russia's 519 institutions of higher education and 48 universities. As a result of great emphasis on science and technology in education, Russian medical, mathematical, scientific, and space and aviation research is generally of a high order.
The Russian workforce is undergoing tremendous changes. Although well-educated and skilled, it is largely mismatched to the rapidly changing needs of the Russian economy. The unemployment rate in Russia was 5.3% as of 2013. Unemployment is highest among women and young people. Following the breakup of the Soviet Union and the economic dislocation it engendered, the standard of living fell dramatically. However, since recovering from the 1998 economic crisis, the standard of living has been on the rise. As of 2010 about 13.1% of the population was living below the poverty line, compared to 40% in 1999. The average yearly salary in Russia was $14,302 (about $23,501 PPP) as of October 2013, up from $455 per year in August 1999.
According to the FMS, as of 2011 there were 7,000,000 immigrants working in Russia. Half of these were from Ukraine, while the remainder were mostly from Central Asia. Only 3 million, or fewer than half of all immigrants, are legal; illegal immigrants number 4 million, mostly from Ukraine and the Caucasus. The census usually covers only a part of this population; the 2002 Census counted one million non-citizens.
Russia is a highly urbanized country, with 74.2% of the total population (2017) living in urban areas. Moscow is the capital and most populous city of Russia, with 12.2 million residents within the city limits and 17.1 million within the urban area. Moscow is recognized as a Russian federal city. Moscow is a major political, economic, cultural, and scientific centre of Russia and Eastern Europe, as well as the largest city entirely on the European continent.
Rural life in Russia is distinct from that in many other nations. Relatively few Russians live in villages; the rural population accounted for 26% of the total according to the 2010 Russian Census. Some people own or rent village houses and use them as dachas (summer houses).
Politics of Russia
The politics of Russia take place in the framework of the federal semi-presidential republic of Russia. According to the Constitution of Russia, the President of Russia is head of state within a multi-party system, with executive power exercised by the government, headed by the Prime Minister, who is appointed by the President with the parliament's approval. Legislative power is vested in the two houses of the Federal Assembly of the Russian Federation, while the President and the government issue numerous legally binding by-laws.
Since the collapse of the Soviet Union at the end of 1991, Russia has seen serious challenges in its efforts to forge a political system to follow nearly seventy-five years of Soviet governance. For instance, leading figures in the legislative and executive branches have put forth opposing views of Russia's political direction and the governmental instruments that should be used to follow it. That conflict reached a climax in September and October 1993, when President Boris Yeltsin used military force to dissolve the parliament and called for new legislative elections ("see" Russian constitutional crisis of 1993). This event marked the end of Russia's first constitutional period, which was defined by the much-amended constitution adopted by the Supreme Soviet of the Russian Soviet Federative Socialist Republic in 1978. A new constitution, creating a strong presidency, was approved by referendum in December 1993.
With a new constitution and a new parliament representing diverse parties and factions, Russia's political structure subsequently showed signs of stabilization. As the transition period extended into the mid-1990s, the power of the national government continued to wane as Russia's regions gained political and economic concessions from Moscow.
The first constitution of the Soviet Union, as promulgated in 1924, incorporated a treaty of union between various Soviet republics. Under the treaty, the Russian Socialist Federative Soviet Republic became known as the Russian Soviet Federated Socialist Republic (RSFSR). Nominally, the borders of each subunit incorporated the territory of a specific nationality. The constitution endowed the new republics with sovereignty, although they were said to have voluntarily delegated most of their sovereign powers to the Soviet center. Formal sovereignty was evidenced by the existence of flags, constitutions, and other state symbols, and by the republics' constitutionally guaranteed "right" to secede from the union. Russia was the largest of the union republics in terms of territory and population.
Because of the Russians' dominance in the affairs of the union, the RSFSR failed to develop some of the institutions of governance and administration that were typical of public life in the other republics: a republic-level communist party, a Russian academy of sciences, and Russian branches of trade unions, for example. As the titular nationalities of the other fourteen union republics began to call for greater republic rights in the late 1980s, however, ethnic Russians also began to demand the creation or strengthening of various specifically Russian institutions in the RSFSR. Certain policies of Soviet leader Mikhail Gorbachev (in office as General Secretary of the Communist Party of the Soviet Union from 1985 to 1991) also encouraged nationalities in the union republics, including the Russian Republic, to assert their rights. These policies included "glasnost" (literally, public "voicing"), which made possible open discussion of democratic reforms and long-ignored public problems such as pollution. "Glasnost" also brought constitutional reforms that led to the election of new republic legislatures with substantial blocs of pro-reform representatives.
In the RSFSR a new legislature, called the Congress of People's Deputies, was elected in March 1990 in a largely free and competitive vote. Upon convening in May, the congress elected Boris Yeltsin, a onetime Gorbachev protégé who had been pushed out of the top party echelons because of his radical reform proposals and erratic personality, as president of the congress's permanent working body, the Supreme Soviet. The next month, the Congress declared Russia's sovereignty over its natural resources and the primacy of Russia's laws over those of the central Soviet government. During 1990–1991, the RSFSR enhanced its sovereignty by establishing republic branches of organizations such as the Communist Party, the Academy of Sciences of the Soviet Union, radio and television broadcasting facilities, and the Committee for State Security (Komitet gosudarstvennoy bezopasnosti, or KGB). In 1991 Russia created a new executive office, the presidency, following the example of Gorbachev, who had created such an office for himself in 1990. The Russian presidential election of June 1991 conferred legitimacy on the office, whereas Gorbachev had eschewed such an election and had had himself appointed by the Soviet parliament. Despite Gorbachev's attempts to discourage Russia's electorate from voting for him, Yeltsin won the popular election to become president, handily defeating five other candidates with more than 57 percent of the vote.
Yeltsin used his role as president of Russia to trumpet Russian sovereignty and patriotism, and his legitimacy as president was a major cause of the collapse of the August 1991 coup by hard-line government and party officials against Gorbachev ("see" August Coup of 1991). The coup leaders had attempted to overthrow Gorbachev in order to halt his plan to sign a New Union Treaty that they believed would wreck the Soviet Union. Yeltsin defiantly opposed the coup plotters and called for Gorbachev's restoration, rallying the Russian public. Most importantly, Yeltsin's faction led elements in the "power ministries" that controlled the military, the police, and the KGB to refuse to obey the orders of the coup plotters. The opposition led by Yeltsin, combined with the irresolution of the plotters, caused the coup to collapse after three days.
Following the failed August coup, Gorbachev found a fundamentally changed constellation of power, with Yeltsin in "de facto" control of much of a sometimes recalcitrant Soviet administrative apparatus. Although Gorbachev returned to his position as Soviet president, events began to bypass him. Communist Party activities were suspended. Most of the union republics quickly declared their independence, although many appeared willing to sign Gorbachev's vaguely delineated confederation treaty. The Baltic states achieved full independence, and they quickly received diplomatic recognition from many nations. Gorbachev's rump government recognized the independence of Estonia, Latvia, and Lithuania in August and September 1991.
In late 1991 the Yeltsin government assumed budgetary control over Gorbachev's rump government. Russia did not declare its independence, and Yeltsin continued to hope for the establishment of some form of confederation. In December, one week after the Ukrainian Republic approved independence by referendum, Yeltsin and the leaders of Ukraine and Belarus met to form the Commonwealth of Independent States (CIS). In response to calls by the Central Asian and other union republics for admission, another meeting took place in Alma-Ata on 21 December to form an expanded CIS. At that meeting, all parties declared that the 1922 treaty of union, which had established the Soviet Union, was annulled and that the Soviet Union had ceased to exist. Gorbachev announced the decision officially on 25 December 1991. Russia gained international recognition as the principal successor to the Soviet Union, receiving the Soviet Union's permanent seat on the United Nations Security Council and positions in other international and regional organizations. The CIS states also agreed that Russia initially would take over Soviet embassies and other properties abroad.
In October 1991, during the "honeymoon" period after his resistance to the Soviet coup, Yeltsin had convinced the legislature to grant him special executive (and legislative) powers for one year so that he might implement his economic reforms. In November 1991 Yeltsin appointed a new government, with himself as acting prime minister, a post he held until the appointment of Yegor Gaidar as acting prime minister in June 1992.
During 1992 Yeltsin and his reforms came under increasing attack from former members and officials of the Communist Party of the Soviet Union, from extreme nationalists, and from others calling for reform to be slowed or even halted in Russia. A locus of this opposition was increasingly the two-chamber parliament, the Supreme Soviet of Russia, comprising the Soviet of the Republic and the Soviet of Nationalities. The Chair of the Supreme Soviet, Ruslan Khasbulatov, became Yeltsin's most vocal opponent. Under the 1978 constitution, the parliament was the supreme organ of power in Russia. After Russia added the office of president in 1991, the division of powers between the two branches remained ambiguous, while the Congress of People's Deputies of Russia (CPD) retained its obvious power "to examine and resolve any matter within the jurisdiction of the Russian Federation". In 1992 the Congress was even further empowered, gaining the ability to suspend any articles of the Constitution, per amended article 185 of the 1978 Constitution (Basic Law) of the Russian Federation.
Although Yeltsin managed to beat back most challenges to his reform program when the CPD met in April 1992, in December he suffered a significant loss of his special executive powers. The CPD ordered him to halt appointments of administrators in the localities and also the practice of naming additional local oversight emissaries (termed "presidential representatives"). Yeltsin also lost the power to issue special decrees concerning the economy, while retaining his constitutional power to issue decrees in accordance with existing laws. When the CPD rejected Yeltsin's attempt to secure the confirmation of Gaidar as prime minister (December 1992), Yeltsin appointed Viktor Chernomyrdin, whom the parliament approved because he was viewed as more economically conservative than Gaidar. After contentious negotiations between the parliament and Yeltsin, the two sides agreed to hold a national referendum to allow the population to determine the basic division of powers between the two branches of government. In the meantime, proposals for extreme limitation of Yeltsin's power were tabled.
However, early 1993 saw increasing tension between Yeltsin and the parliament over the referendum and over power-sharing. In mid-March 1993, an emergency session of the CPD rejected Yeltsin's proposals on power-sharing and canceled the referendum, again opening the door to legislation that would shift the balance of power away from the president. Faced with these setbacks, Yeltsin addressed the nation directly to announce a "special regime", under which he would assume extraordinary executive power pending the results of a referendum on the timing of new legislative elections, on a new constitution, and on public confidence in the president and vice president. After the Constitutional Court declared his announcement unconstitutional, Yeltsin backed down.
Despite Yeltsin's change of heart, a second extraordinary session of the CPD took up discussion of emergency measures to defend the constitution, including impeachment of the president. Although the impeachment vote failed, the CPD set new terms for a popular referendum. The legislature's version of the referendum asked whether citizens had confidence in Yeltsin, approved of his reforms, and supported early presidential and legislative elections. Under the CPD's terms, Yeltsin would need the support of 50 percent of eligible voters, rather than 50 percent of those actually voting, to avoid an early presidential election. In the vote on 25 April, Russians failed to provide this level of approval, but a majority of voters approved Yeltsin's policies and called for new legislative elections. Yeltsin termed the results, which delivered a serious blow to the prestige of the parliament, a mandate for him to continue in power.
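The arithmetic behind the two competing thresholds is worth making concrete. The following sketch, a minimal illustration with invented turnout figures (the actual 1993 returns are not reproduced here), shows how a proposal can win a clear majority of votes cast yet fall far short of 50 percent of all eligible voters:

```python
# Hypothetical illustration of the April 1993 referendum thresholds.
# All figures below are invented for illustration.

eligible_voters = 107_000_000          # assumed size of the electorate
turnout = 0.64                         # assumed share of eligible voters voting
votes_cast = eligible_voters * turnout
yes_votes = 0.58 * votes_cast          # assumed share of "yes" ballots

# The CPD's rule: support from 50% of *eligible* voters.
passes_cpd_rule = yes_votes > 0.5 * eligible_voters

# The conventional rule: 50% of those *actually voting*.
passes_majority_of_voters = yes_votes > 0.5 * votes_cast

print(passes_cpd_rule)            # False: 58% of a 64% turnout is ~37% of the electorate
print(passes_majority_of_voters)  # True: a clear majority of votes cast
```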
In June 1993 Yeltsin decreed the creation of a special constitutional convention to examine the draft constitution that he had presented in April. This convention was designed to circumvent the parliament, which was working on its own draft constitution. As expected, the two main drafts contained contrary views of legislative-executive relations. The convention, which included delegates from major political and social organizations and the 89 subnational jurisdictions, approved a compromise draft constitution in July 1993, incorporating some aspects of the parliament's draft. The parliament failed to approve the draft, however.
In late September 1993, Yeltsin responded to the impasse in legislative-executive relations by repeating his announcement of a constitutional referendum, but this time he followed the announcement by dissolving the parliament and announcing new legislative elections for December ("see" Russian constitutional crisis of 1993). The CPD again met in emergency session, confirmed Vice President Aleksandr Rutskoy as president, and voted to impeach Yeltsin. On 27 September, military units surrounded the legislative building (popularly known as the White House), but 180 delegates refused to leave the building. After a two-week standoff, Rutskoy urged supporters outside the legislative building to overcome Yeltsin's military forces. Firefights and destruction of property resulted at several locations in Moscow.
The next day, on 4 October, Yeltsin chose a radical solution to settle his dispute with parliament: he called up tanks to shell the parliament building. Under the direction of Minister of Defense Pavel Grachev, tanks fired on the White House, and military forces occupied the building and the rest of the city. As Yeltsin was taking the unconstitutional step of dissolving the legislature, Russia came the closest to serious civil conflict since the revolution of 1917.
This open, violent confrontation remained a backdrop to Yeltsin's relations with the legislative branch for the next three years.
During 1992-93 Yeltsin had argued that the existing, heavily amended 1978 constitution of Russia was obsolete and self-contradictory and that Russia required a new constitution granting the president greater power. This assertion led to the submission and advocacy of rival constitutional drafts drawn up by the legislative and executive branches. The parliament's failure to endorse a compromise was an important factor in Yeltsin's dissolution of that body in September 1993. Yeltsin then used his presidential powers to form a sympathetic constitutional assembly, which quickly produced a draft constitution providing for a strong executive, and to shape the outcome of the December 1993 referendum on Russia's new basic law. The turnout requirement for the referendum was changed from 50 percent of the electorate to simply 50 percent of participating voters. The referendum vote resulted in approval by 58.4 percent of Russia's registered voters.
The 1993 constitution declares Russia a democratic, federative, law-based state with a republican form of government. State power is divided among the legislative, executive, and judicial branches. Diversity of ideologies and religions is sanctioned, and a state or compulsory ideology may not be adopted. Progressively, however, human rights violations in connection with religious groups labeled "extremist" by the government have become increasingly frequent. The right to a multiparty political system is upheld. Laws must be made public before they take effect, and they must be formulated in accordance with international law and principles. Russian is proclaimed the state language, although the republics of the federation are allowed to establish their own state languages.
The 1993 constitution created a dual executive consisting of a president and prime minister, with the president as the dominant figure. Russia's strong presidency sometimes is compared with that of Charles de Gaulle (in office 1958-69) in the French Fifth Republic. The constitution spells out many prerogatives specifically, but some powers enjoyed by Yeltsin were developed in an "ad hoc" manner.
Russia's president determines the basic direction of Russia's domestic and foreign policy and represents the Russian state within the country and in foreign affairs. The president appoints and recalls Russia's ambassadors upon consultation with the legislature, accepts the credentials and letters of recall of foreign representatives, conducts international talks, and signs international treaties. A special provision allowed Yeltsin to complete the term prescribed to end in June 1996 and to exercise the powers of the new constitution, although he had been elected under a different constitutional order.
In the 1996 presidential election campaign, some candidates called for eliminating the presidency, criticizing its powers as dictatorial. Yeltsin defended his presidential powers, claiming that Russians desire "a vertical power structure and a strong hand" and that a parliamentary government would result in indecisive talk rather than action.
Several prescribed powers put the president in a superior position vis-à-vis the legislature. The president has broad authority to issue decrees and directives that have the force of law without judicial review, although the constitution notes that they must not contravene that document or other laws. Under certain conditions, the president may dissolve the State Duma, the lower house of the parliament (the Federal Assembly). The president has the prerogatives of scheduling referendums (a power previously reserved to the parliament), submitting draft laws to the State Duma, and promulgating federal laws.
The executive-legislative crisis of the fall of 1993 prompted Yeltsin to emplace constitutional obstacles to legislative removal of the president. Under the 1993 constitution, if the president commits "grave crimes" or treason, the State Duma may file impeachment charges with the parliament's upper house, the Federation Council. These charges must be confirmed by a ruling of the Supreme Court that the president's actions constitute a crime and by a ruling of the Constitutional Court that proper procedures in filing charges have been followed. The charges then must be adopted by a special commission of the State Duma and confirmed by at least two-thirds of State Duma deputies. A two-thirds vote of the Federation Council is required for removal of the president. If the Federation Council does not act within three months, the charges are dropped. If the president is removed from office or becomes unable to exercise power because of serious illness, the prime minister is to temporarily assume the president's duties; a presidential election then must be held within three months. The constitution does not provide for a vice president, and there is no specific procedure for determining whether the president is able to carry out his duties.
The president is empowered to appoint the prime minister to chair the Government (called the cabinet or the council of ministers in other countries), with the consent of the State Duma. The president chairs meetings of the Government, which he also may dismiss in its entirety. Upon the advice of the prime minister, the president can appoint or remove Government members, including the deputy prime ministers. The president submits candidates to the State Duma for the post of chairman of the Central Bank of the Russian Federation (RCB) and may propose that the State Duma dismiss the chairman. In addition, the president submits candidates to the Federation Council for appointment as justices of the Constitutional Court, the Supreme Court, and the Superior Court of Arbitration, as well as candidates for the office of procurator general, Russia's chief law enforcement officer. The president also appoints justices of federal district courts.
Many of the president's powers are related to the incumbent's undisputed leeway in forming an administration and hiring staff. The presidential administration is composed of several competing, overlapping, and vaguely delineated hierarchies that historically have resisted efforts at consolidation. In early 1996, Russian sources reported the size of the presidential apparatus in Moscow and the localities at more than 75,000 people, most of them employees of state-owned enterprises directly under presidential control. This structure is similar to, but several times larger than, the top-level apparatus of the Soviet-era Communist Party of the Soviet Union (CPSU).
Former first deputy prime minister Anatoly Chubais was appointed chief of the presidential administration (chief of staff) in July 1996. Chubais replaced Nikolay Yegorov, a hard-line associate of deposed Presidential Security Service chief Alexander Korzhakov. Yegorov had been appointed in early 1996, when Yeltsin reacted to the strong showing of antireform factions in the legislative election by purging reformers from his administration. Yeltsin now ordered Chubais, who had been included in that purge, to reduce the size of the administration and the number of departments overseeing the functions of the ministerial apparatus. The six administrative departments in existence at that time dealt with citizens' rights, domestic and foreign policy, state and legal matters, personnel, analysis, and oversight, and Chubais inherited a staff estimated at 2,000 employees. Chubais also received control over a presidential advisory group with input on the economy, national security, and other matters. Reportedly that group had competed with Korzhakov's security service for influence in the Yeltsin administration.
Another center of power in the presidential administration is the Security Council, which was created by statute in mid-1992. The 1993 constitution describes the council as formed and headed by the president and governed by statute. Since its formation, it apparently has gradually lost influence in competition with other power centers in the presidential administration. However, the June 1996 appointment of former army general and presidential candidate Alexander Lebed to head the Security Council improved prospects for the organization's standing. In July 1996, a presidential decree assigned the Security Council a wide variety of new missions. The decree's description of the Security Council's consultative functions was especially vague and wide-ranging, although it positioned the head of the Security Council directly subordinate to the president. As had been the case previously, the Security Council was required to hold meetings at least once a month.
Other presidential support services include the Control Directorate (in charge of investigating official corruption), the Administrative Affairs Directorate, the Presidential Press Service, and the Protocol Directorate. The Administrative Affairs Directorate controls state dachas, sanatoriums, automobiles, office buildings, and other perquisites of high office for the executive, legislative, and judicial branches of government, a function that includes management of more than 200 state industries with about 50,000 employees. The Committee on Operational Questions, until June 1996 chaired by antireformist Oleg Soskovets, has been described as a "government within a government". Also attached to the presidency are more than two dozen consultative commissions and extrabudgetary "funds".
The president also has extensive powers over military policy. As the Supreme Commander-in-Chief of the Armed Forces of the Russian Federation, the president approves defense doctrine, appoints and removes the high command of the armed forces, and confers higher military ranks and awards. The president is empowered to declare national or regional states of martial law, as well as a state of emergency. In both cases, both houses of the parliament must be notified immediately. The Federation Council, the upper house, has the power to confirm or reject such a decree. The regime of martial law is defined by the federal law "On Martial Law", signed by President Vladimir Putin in 2002. The circumstances and procedures for the president to declare a state of emergency are more specifically outlined in federal law than in the constitution. In practice, the Constitutional Court ruled in 1995 that the president has wide leeway in responding to crises within Russia, such as lawlessness in the separatist Republic of Chechnya, and that Yeltsin's action in Chechnya did not require a formal declaration of a state of emergency. In 1994 Yeltsin declared a state of emergency in Ingushetia and North Ossetia, two republics beset by intermittent ethnic conflict.
The constitution sets few requirements for presidential elections, deferring in many matters to other provisions established by law. The presidential term is set at six years, and the president may only serve two consecutive terms. A candidate for president must be a citizen of Russia, at least 35 years of age, and a resident of the country for at least ten years. If a president becomes unable to continue in office because of health problems, resignation, impeachment, or death, a presidential election is to be held not more than three months later. In such a situation, the Federation Council is empowered to set the election date.
The Law on Presidential Elections, ratified in May 1995, establishes the legal basis for presidential elections. Based on a draft submitted by Yeltsin's office, the new law included many provisions already contained in the Russian Republic's 1990 election law; alterations included the reduction in the number of signatures required to register a candidate from 2 million to 1 million. The law, which set rigorous standards for fair campaign and election procedures, was hailed by international analysts as a major step toward democratization. Under the law, parties, blocs, and voters' groups register with the Central Electoral Commission of Russia (CEC) and designate their candidates. These organizations then are permitted to begin seeking the 1 million signatures needed to register their candidates; no more than 7 percent of the signatures may come from a single federal jurisdiction. The purpose of the 7 percent requirement is to promote candidacies with broad territorial bases and eliminate those supported by only one city or ethnic enclave.
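The registration rule lends itself to a simple validity check. Below is a minimal sketch, assuming a candidate's signatures have already been tallied per federal jurisdiction; the function name, jurisdiction labels, and counts are all hypothetical:

```python
# Sketch of the 1995 signature rules: at least 1 million signatures,
# with no more than 7% coming from any single federal jurisdiction.
# Jurisdiction names and counts are hypothetical.

SIGNATURES_REQUIRED = 1_000_000
MAX_SHARE_PER_JURISDICTION = 0.07

def registration_valid(signatures_by_jurisdiction: dict[str, int]) -> bool:
    total = sum(signatures_by_jurisdiction.values())
    if total < SIGNATURES_REQUIRED:
        return False
    # Territorial-spread requirement: no single jurisdiction may supply
    # more than 7% of the signatures collected.
    return all(n <= MAX_SHARE_PER_JURISDICTION * total
               for n in signatures_by_jurisdiction.values())

# 20 hypothetical jurisdictions, 52,000 signatures each: 1,040,000 total,
# with each jurisdiction contributing 5% -- valid.
spread = {f"region_{i}": 52_000 for i in range(20)}
print(registration_valid(spread))        # True

# The same signatures concentrated in one city fail the 7% cap.
concentrated = dict(spread, region_0=200_000)
print(registration_valid(concentrated))  # False
```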
The law required that at least 50 percent of eligible voters participate in order for a presidential election to be valid. In State Duma debate over the legislation, some deputies had advocated a minimum of 25 percent (which was later incorporated into the electoral law covering the State Duma), warning that many Russians were disillusioned with voting and would not turn out. To make voter participation more appealing, the law required one voting precinct for approximately every 3,000 voters, with voting allowed until late at night. The conditions for absentee voting were eased, and portable ballot boxes were to be made available on demand. Strict requirements were established for the presence of election observers, including emissaries from all participating parties, blocs, and groups, at polling places and local electoral commissions to guard against tampering and to ensure proper tabulation.
The Law on Presidential Elections requires that the winner receive more than 50 percent of the votes cast. If no candidate receives more than 50 percent of the vote (a highly probable result because of multiple candidacies), the top two vote-getters must face each other in a runoff election. Once the results of the first round are known, the runoff election must be held within fifteen days. A traditional provision allows voters to check off "none of the above," meaning that a candidate in a two-person runoff might win without attaining a majority. Another provision of the election law empowers the CEC to request that the Supreme Court ban a candidate from the election if that candidate advocates a violent transformation of the constitutional order or the integrity of the Russian Federation.
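Taken together, the first-round rules reduce to a small decision procedure. The following is a minimal sketch with hypothetical vote totals, treating "none of the above" as a ballot line that counts toward the total votes cast but cannot advance to a runoff:

```python
# Sketch of the two-round rule: an outright winner needs more than 50%
# of the votes cast; otherwise the top two candidates advance to a
# runoff held within fifteen days. Vote totals are hypothetical.

def first_round_outcome(votes: dict[str, int]):
    total = sum(votes.values())
    candidates = {k: v for k, v in votes.items() if k != "none of the above"}
    leader, leader_votes = max(candidates.items(), key=lambda kv: kv[1])
    if leader_votes > total / 2:
        return ("winner", leader)
    # No absolute majority: the two strongest candidates go to a runoff.
    ranked = sorted(candidates, key=candidates.get, reverse=True)
    return ("runoff", ranked[:2])

print(first_round_outcome(
    {"A": 35_000, "B": 32_000, "C": 25_000, "none of the above": 8_000}))
# ('runoff', ['A', 'B'])
```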
The presidential election of 1996 was a major episode in the struggle between Yeltsin and the Communist Party of the Russian Federation (KPRF), which sought to oust Yeltsin from office and return to power. Yeltsin had banned the Communist Party of the Russian Republic for its central role in the August 1991 coup against the Gorbachev government. As a member of the Politburo and the Secretariat of the banned party, Gennady Zyuganov had worked hard to gain its relegalization. Despite Yeltsin's objections, the Constitutional Court cleared the way for the Russian communists to reemerge as the KPRF, headed by Zyuganov, in February 1993. Yeltsin temporarily banned the party again in October 1993 for its role in the Supreme Soviet's just-concluded attempt to overthrow his administration. Beginning in 1993, Zyuganov also led efforts by KPRF deputies to impeach Yeltsin. After the KPRF's triumph in the December 1995 legislative elections, Yeltsin announced that he would run for reelection with the main purpose of safeguarding Russia from a communist restoration.
Although there was speculation that losing parties in the December 1995 election might choose not to nominate presidential candidates, in fact dozens of citizens both prominent and obscure announced their candidacies. After the gathering and review of signature lists, the CEC validated eleven candidates, one of whom later dropped out.
In the opinion polls of early 1996, Yeltsin trailed far behind most of the other candidates; his popularity rating was below 10 percent for a prolonged period. However, a last-minute, intense campaign featuring heavy television exposure, speeches throughout Russia promising increased state expenditures for a wide variety of interest groups, and campaign-sponsored concerts boosted Yeltsin to a 3 percent plurality over Zyuganov in the first round. The election campaign was largely sponsored by wealthy tycoons, for whom Yeltsin's remaining in power was the key to protecting property they had acquired during the reforms of 1991–1996. After the first election round, Yeltsin took the tactically significant step of appointing first-round presidential candidate Aleksandr Lebed, who had placed third behind Yeltsin and Zyuganov, as head of the Security Council. Yeltsin followed the appointment of Lebed as the president's top adviser on national security by dismissing several top hard-line members of his entourage who were widely blamed for human rights violations in Chechnya and other mistakes. Despite his virtual disappearance from public view for health reasons shortly thereafter, Yeltsin was able to sustain his central message that Russia should move forward rather than return to its communist past. Zyuganov failed to mount an energetic or convincing second campaign, and three weeks after the first phase of the election, Yeltsin easily defeated his opponent, 54 percent to 40 percent.
It has been argued that Yeltsin won the 1996 Russian presidential election thanks to extensive assistance from a team of media and PR experts from the United States. The "Guardian" reported that Joe Shumate, George Gorton, Richard Dresner (a close associate of Dick Morris), and Steven Moore (who came on later as a PR specialist) gave an exclusive interview to "Time" magazine in 1996 about their adventures working as political consultants in Russia, and that they also detailed the extent of their collaboration with the Clinton White House.
Turnout in the first round was high, with about 70 percent of 108.5 million voters participating. Total turnout in the second round was nearly the same as in the first round. A contingent of almost 1,000 international observers judged the election to be largely fair and democratic, as did the CEC.
Most observers in Russia and elsewhere concurred that the election boosted democratization in Russia, and many asserted that reforms in Russia had become irreversible. Yeltsin had strengthened the institution of regularly contested elections when he rejected calls by business organizations and other groups and some of his own officials to cancel or postpone the balloting because of the threat of violence. The high turnout indicated that voters had confidence that their ballots would count, and the election went forward without incident. The democratization process also was bolstered by Yeltsin's willingness to change key personnel and policies in response to public protests and by his unprecedented series of personal campaign appearances throughout Russia.
The constitution prescribes that the Government of Russia, which corresponds to the Western cabinet structure, consist of a prime minister (chairman of the Government), deputy prime ministers, and federal ministers and their ministries and departments. Within one week of appointment by the president and approval by the State Duma, the prime minister must submit to the president nominations for all subordinate Government positions, including deputy prime ministers and federal ministers. The prime minister carries out administration in line with the constitution and laws and presidential decrees. The ministries of the Government, which numbered 24 in mid-1996, execute credit and monetary policies and defense, foreign policy, and state security functions; ensure the rule of law and respect for human and civil rights; protect property; and take measures against crime. If the Government issues implementing decrees and directives that are at odds with legislation or presidential decrees, the president may rescind them.
The Government formulates the federal budget, submits it to the State Duma, and issues a report on its implementation. In late 1994, the parliament successfully demanded that the Government begin submitting quarterly reports on budget expenditures and adhere to other guidelines on budgetary matters, although the parliament's budgetary powers are limited. If the State Duma rejects a draft budget from the Government, the budget is submitted to a conciliation commission including members from both branches.
Besides the ministries, in 1996 the executive branch included eleven state committees and 46 state services and agencies, ranging from the State Space Agency (Glavkosmos) to the State Committee for Statistics (Goskomstat). There were also myriad agencies, boards, centers, councils, commissions, and committees. Prime Minister Viktor Chernomyrdin's personal staff was reported to number about 2,000 in 1995.
Chernomyrdin, who had been appointed prime minister in late 1992 to appease antireform factions, established a generally smooth working relationship with Yeltsin. Chernomyrdin proved adept at conciliating hostile domestic factions and at presenting a positive image of Russia in negotiations with other nations. However, as Yeltsin's standing with public opinion plummeted in 1995, Chernomyrdin became one of many Government officials who received public blame from the president for failures in the Yeltsin administration. As part of his presidential campaign, Yeltsin threatened to replace the Chernomyrdin Government if it failed to address pressing social welfare problems in Russia. After the mid-1996 presidential election, however, Yeltsin announced that he would nominate Chernomyrdin to head the new Government.
The 628-member parliament, termed the Federal Assembly, consists of two houses, the 450-member State Duma (the lower house) and the 178-member Federation Council (the upper house). Russia's legislative body was established by the constitution approved in the December 1993 referendum. The first elections to the Federal Assembly were held at the same time, a procedure criticized by some Russians as indicative of Yeltsin's lack of respect for constitutional niceties. Under the constitution, the deputies elected in December 1993 were termed "transitional" because they were to serve only a two-year term. In April 1994, legislators, Government officials, and many prominent businesspeople and religious leaders signed a "Civic Accord" proposed by Yeltsin, pledging during the two-year "transition period" to refrain from violence, calls for early presidential or legislative elections, and attempts to amend the constitution. This accord, and memories of the violent confrontation of the previous parliament with Government forces, had some effect in softening political rhetoric during the next two years.
The first legislative elections under the new constitution included a few irregularities. The republics of Tatarstan and Chechnya and Chelyabinsk Oblast boycotted the voting; this action, along with other discrepancies, resulted in the election of only 170 members to the Federation Council. However, by mid-1994 all seats were filled except those of Chechnya, which continued to proclaim its independence. All federal jurisdictions participated in the December 1995 legislative elections, although the fairness of voting in Chechnya was compromised by the ongoing conflict there.
The Federal Assembly is prescribed as a permanently functioning body, meaning that it is in continuous session except for a regular break between the spring and fall sessions. This working schedule distinguishes the new parliament from Soviet-era "rubber-stamp" legislative bodies, which met only a few days each year. The new constitution also directs that the two houses meet separately in sessions open to the public, although joint meetings are held for important speeches by the president or foreign leaders.
Deputies of the State Duma work full-time on their legislative duties; they are not allowed to serve simultaneously in local legislatures or hold Government positions. A transitional clause in the constitution, however, allowed deputies elected in December 1993 to retain their Government employment, a provision that allowed many officials of the Yeltsin administration to serve in the parliament. After the December 1995 legislative elections, nineteen Government officials were forced to resign their offices in order to take up their legislative duties.
Despite its "transitional" nature, the Federal Assembly of 1994-95 approved about 500 pieces of legislation in two years. When the new parliament convened in January 1996, deputies were provided with a catalog of these laws and were directed to work in their assigned committees to fill gaps in existing legislation as well as to draft new laws. A major accomplishment of the 1994-95 legislative sessions was passage of the first two parts of a new civil code, desperately needed to update antiquated Soviet-era provisions. The new code included provisions on contract obligations, rents, insurance, loans and credit, partnership, and trusteeship, as well as other legal standards essential to support the creation of a market economy. Work on several bills that had been in committee or in floor debate in the previous legislature resumed in the new body. Similarly, several bills that Yeltsin had vetoed were taken up again by the new legislature.
The composition of the Federation Council was a matter of debate until shortly before the December 1995 elections. The legislation that emerged in December 1995 over Federation Council objections clarified the constitution's language on the subject by providing ex officio council seats to the heads of local legislatures and administrations in each of the eighty-nine subnational jurisdictions, hence a total of 178 seats. As composed in 1996, the Federation Council included about fifty chief executives of subnational jurisdictions who had been appointed to their posts by Yeltsin during 1991-92, then won popular election directly to the body in December 1993. But the law of 1995 provided for popular elections of chief executives in all subnational jurisdictions, including those still governed by presidential appointees. The individuals chosen in those elections then would assume ex officio seats in the Federation Council.
Each house elects a chairman to control the internal procedures of the house. The houses also form parliamentary committees and commissions to deal with particular types of issues. Unlike committees and commissions in previous Russian and Soviet parliaments, those operating under the 1993 constitution have significant responsibilities in devising legislation and conducting oversight. They prepare and evaluate draft laws, report on draft laws to their houses, conduct hearings, and oversee implementation of the laws. As of early 1996, there were twenty-eight committees and several ad hoc commissions in the State Duma, and twelve committees and two commissions in the Federation Council. The Federation Council has established fewer committees because of the part-time status of its members, who also hold political office in the subnational jurisdictions. In 1996 most of the committees in both houses were retained in basic form from the previous parliament. According to internal procedure, no deputy may sit on more than one committee. By 1996 many State Duma committees had established subcommittees.
Committee positions are allocated when new parliaments are seated. The general policy calls for allocation of committee chairmanships and memberships among parties and factions roughly in proportion to the size of their representation. In 1994, however, Vladimir Zhirinovsky's Liberal Democratic Party of Russia (Liberal'no-demokraticheskaya partiya Rossii—LDPR), which had won the second largest number of seats in the recent election, was denied all but one key chairmanship, that of the State Duma's Committee on Geopolitics.
The two chambers of the Federal Assembly possess different powers and responsibilities, with the State Duma the more powerful. The Federation Council, as its name and composition imply, deals primarily with issues of concern to the subnational jurisdictions, such as adjustments to internal borders and decrees of the president establishing martial law or states of emergency. As the upper chamber, it also has responsibilities in confirming and removing the procurator general and confirming justices of the Constitutional Court, the Supreme Court, and the Superior Court of Arbitration, upon the recommendation of the president. The Federation Council also is entrusted with the final decision if the State Duma recommends removing the president from office. The constitution also directs that the Federation Council examine bills passed by the lower chamber dealing with budgetary, tax, and other fiscal measures, as well as issues dealing with war and peace and with treaty ratification.
In the consideration and disposition of most legislative matters, however, the Federation Council has less power than the State Duma. All bills, even those proposed by the Federation Council, must first be considered by the State Duma. If the Federation Council rejects a bill passed by the State Duma, the two chambers may form a conciliation commission to work out a compromise version of the legislation. The State Duma then votes on the compromise bill. If the State Duma objects to the proposals of the upper chamber in the conciliation process, it may vote by a two-thirds majority to send its version to the president for signature. The part-time character of the Federation Council's work, its less developed committee structure, and its lesser powers vis-à-vis the State Duma make it more a consultative and reviewing body than a law-making chamber.
Because the Federation Council initially included many regional administrators appointed by Yeltsin, that body often supported the president and objected to bills approved by the State Duma, which had more anti-Yeltsin deputies. The power of the upper house to consider bills passed by the lower chamber resulted in its disapproval of about one-half of such bills, necessitating concessions by the State Duma or votes to override upper-chamber objections. In February 1996, the heads of the two chambers pledged to try to break this habit, but wrangling appeared to intensify in the months that followed.
The State Duma confirms the appointment of the prime minister, although it does not have the power to confirm Government ministers. The power to confirm or reject the prime minister is severely limited. According to the 1993 constitution, the State Duma must decide within one week to confirm or reject a candidate once the president has placed that person's name in nomination. If it rejects three candidates, the president is empowered to appoint a prime minister, dissolve the parliament, and schedule new legislative elections.
The State Duma's power to force the resignation of the Government also is severely limited. It may express a vote of no-confidence in the Government by a majority vote of all members of the State Duma, but the president is allowed to disregard this vote. If, however, the State Duma repeats the no-confidence vote within three months, the president may dismiss the Government. But the likelihood of a second no-confidence vote is virtually precluded by the constitutional provision allowing the president to dissolve the State Duma rather than the Government in such a situation. The Government's position is further buttressed by another constitutional provision that allows the Government at any time to demand a vote of confidence from the State Duma; refusal is grounds for the president to dissolve the Duma.
The legislative process in Russia includes three readings in the State Duma, approval by the Federation Council (the upper house), and signing into law by the president.
Draft laws may originate in either legislative chamber, or they may be submitted by the president, the Government, local legislatures and the Supreme Court, the Constitutional Court, or the Superior Court of Arbitration within their respective competences. Draft laws are first considered in the State Duma. Upon adoption by a majority of the full State Duma membership, a draft law is considered by the Federation Council, which has fourteen days to place the bill on its calendar. Conciliation commissions are the prescribed procedure to work out differences in bills considered by both chambers.
A constitutional provision dictating that draft laws dealing with revenues and expenditures may be considered "only when the Government's findings are known" substantially limits the Federal Assembly's control of state finances. However, the legislature may alter finance legislation submitted by the Government at a later time, a power that provides a degree of traditional legislative control over the purse. The two chambers of the legislature also have the power to override a presidential veto of legislation. The constitution requires at least a two-thirds vote of the total number of members of both chambers.
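As a rough illustration of the override threshold (an arithmetic sketch assuming the chamber sizes given earlier, 450 State Duma deputies and 178 Federation Council members, and reading the requirement as two-thirds in each chamber), the minimum vote counts would be:

$$\left\lceil \tfrac{2}{3} \times 450 \right\rceil = 300 \text{ (State Duma)}, \qquad \left\lceil \tfrac{2}{3} \times 178 \right\rceil = 119 \text{ (Federation Council)}.$$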
The judiciary of Russia is defined by the constitution and laws of Russia and has a hierarchical structure, with the Constitutional Court, the Supreme Court, and the Superior Court of Arbitration at its apex; in 2014, the Superior Court of Arbitration was merged into the Supreme Court. The district courts are the primary criminal trial courts, and the regional courts are the primary appellate courts. The judiciary is governed by the All-Russian Congress of Judges and its Council of Judges, and its management is aided by the Judicial Department of the Supreme Court, the Judicial Qualification Collegia, the Ministry of Justice, and the various courts' chairpersons. There are many officers of the court, including jurors, but the Prosecutor General remains the most powerful component of the Russian judicial system.
Many judges appointed by the regimes of Leonid Brezhnev (in office 1964-82) and Yuri Andropov (in office 1982-84) remained in place in the mid-1990s. Such arbiters were trained in "socialist law" and had become accustomed to basing their verdicts on telephone calls from local CPSU bosses rather than on the legal merits of cases.
For court infrastructure and financial support, judges must depend on the Ministry of Justice, and for housing they must depend on local authorities in the jurisdiction where they sit. In 1995 the average salary for a judge was US$160 per month, substantially less than the earnings associated with more menial positions in Russian society. These circumstances, combined with irregularities in the appointment process and the continued strong position of the procurators, deprived judges in the lower jurisdictions of independent authority.
Numerous matters which are dealt with by administrative authority in European countries remain subject to political influence in Russia. The Constitutional Court of Russia was reconvened in March 1995 following its suspension by President Yeltsin during the October 1993 constitutional crisis. The 1993 constitution empowers the court to arbitrate disputes between the executive and legislative branches and between Moscow and the regional and local governments. The court also is authorized to rule on violations of constitutional rights, to examine appeals from various bodies, and to participate in impeachment proceedings against the president. The July 1994 Law on the Constitutional Court prohibits the court from examining cases on its own initiative and limits the scope of issues the court can hear.
The State Duma passed a Criminal Procedure Code and other judicial reforms during its 2001 session. These reforms help make the Russian judicial system more compatible with its Western counterparts and are seen by most observers as an accomplishment in human rights. The reforms reintroduced jury trials in certain criminal cases and created a more adversarial system of criminal trials that protects the rights of defendants more adequately. In 2002, the introduction of the new code led to significant reductions in time spent in detention for new detainees, and the number of suspects placed in pretrial detention declined by 30%. Another significant advance in the new code is the transfer from the Procuracy to the courts of the authority to issue search and arrest warrants.
In the Soviet period, some of Russia's approximately 100 nationalities were granted their own ethnic enclaves, to which varying formal federal rights were attached. Other smaller or more dispersed nationalities did not receive such recognition. In most of these enclaves, ethnic Russians constituted a majority of the population, although the titular nationalities usually enjoyed disproportionate representation in local government bodies. Relations between the central government and the subordinate jurisdictions, and among those jurisdictions, became a political issue in the 1990s.
The Russian Federation has made few changes in the Soviet pattern of regional jurisdictions. The 1993 constitution establishes a federal government and enumerates eighty-nine subnational jurisdictions, including twenty-one ethnic enclaves with the status of republics. There are ten autonomous regions, or okruga (sing., okrug), and the Jewish Autonomous Oblast (Yevreyskaya avtonomnaya oblast', also known as Birobidzhan). Besides the ethnically identified jurisdictions, there are six territories (kraya; sing., kray) and forty-nine oblasts (provinces). The cities of Moscow and St. Petersburg are independent of surrounding jurisdictions; termed "cities of federal significance," they have the same status as the oblasts. The ten autonomous regions and Birobidzhan are part of larger jurisdictions, either an oblast or a territory. As the power and influence of the central government have become diluted, governors and mayors have become the only relevant government authorities in many jurisdictions.
The Federation Treaty was signed in March 1992 by President Yeltsin and most leaders of the autonomous republics and other ethnic and geographical subunits. The treaty consisted of three separate documents, each pertaining to one type of regional jurisdiction. It outlined powers reserved for the central government, shared powers, and residual powers to be exercised primarily by the subunits. Because Russia's new constitution remained in dispute in the Federal Assembly at the time of ratification, the Federation Treaty and provisions based on the treaty were incorporated as amendments to the 1978 constitution. A series of new conditions were established by the 1993 constitution and by bilateral agreements.
The constitution of 1993 resolved many of the ambiguities and contradictions concerning the degree of decentralization under the much-amended 1978 constitution of the Russian Republic; most such solutions favored the concentration of power in the central government. When the constitution was ratified, the Federation Treaty was demoted to the status of a subconstitutional document. A transitional provision of the constitution provided that in case of discrepancies between the federal constitution and the Federation Treaty, or between the constitution and other treaties involving a subnational jurisdiction, all other documents would defer to the constitution.
The 1993 constitution presents a daunting list of powers reserved to the center. Powers shared jointly between the federal and local authorities are less numerous. Regional jurisdictions are only allocated powers not specifically reserved to the federal government or exercised jointly. Those powers include managing municipal property, establishing and executing regional budgets, establishing and collecting regional taxes, and maintaining law and order. Some of the boundaries between joint and exclusively federal powers are vaguely prescribed; presumably they would become clearer through the give and take of federal practice or through adjudication, as has occurred in other federal systems. Meanwhile, bilateral power-sharing treaties between the central government and the subunits have become an important means of clarifying the boundaries of shared powers. Many subnational jurisdictions have their own constitutions, however, and often those documents allocate powers to the jurisdiction inconsistent with provisions of the federal constitution. As of 1996, no process had been devised for adjudication of such conflicts.
Under the 1993 constitution, the republics, territories, oblasts, autonomous oblast, autonomous regions, and cities of federal designation are held to be "equal in their relations with the federal agencies of state power"; this language represents an attempt to end the complaints of the nonrepublic jurisdictions about their inferior status. In keeping with this new equality, republics no longer receive the epithet "sovereign," as they did in the 1978 constitution. Equal representation in the Federation Council for all eighty-nine jurisdictions furthers the equalization process by providing them meaningful input into legislative activities, particularly those of special local concern. However, Federation Council officials have criticized the State Duma for failing to represent regional interests adequately. In mid-1995 Vladimir Shumeyko, then speaker of the Federation Council, criticized the current electoral system's party-list provision for allowing some parts of Russia to receive disproportionate representation in the lower house. (In the 1995 elections, Moscow Oblast received nearly 38 percent of the State Duma's seats based on the concentration of party-list candidates in the national capital.) Shumeyko contended that such misallocation fed potentially dangerous popular discontent with the parliament and politicians.
Despite constitutional language equalizing the regional jurisdictions in their relations with the center, vestiges of Soviet-era multitiered federalism remain in a number of provisions, including those allowing for the use of non-Russian languages in the republics but not in other jurisdictions, and in the definitions of the five categories of subunit. On most details of the federal system, the constitution is vague, and clarifying legislation had not been passed by mid-1996. However, some analysts have pointed out that this vagueness facilitates resolution of individual conflicts between the center and the regions.
Flexibility is a goal of the constitutional provision allowing bilateral treaties or charters between the central government and the regions on power sharing. For instance, in the bilateral treaty signed with the Russian government in February 1994, the Republic of Tatarstan gave up its claim to sovereignty and accepted Russia's taxing authority, in return for Russia's acceptance of Tatar control over oil and other resources and the republic's right to sign economic agreements with other countries. This treaty has particular significance because Tatarstan was one of the two republics that did not sign the Federation Treaty in 1992. By mid-1996 almost one-third of the federal subunits had concluded power-sharing treaties or charters.
The first power-sharing charter negotiated by the central government and an oblast was signed in December 1995 with Orenburg Oblast. The charter divided power in the areas of economic and agricultural policy, natural resources, international economic relations and trade, and military industries. According to Prime Minister Chernomyrdin, the charter gave Orenburg full power over its budget and allowed the oblast to participate in privatization decisions. By early 1996, similar charters had been signed with Krasnodar Territory and Kaliningrad and Sverdlovsk oblasts. In the summer of 1996, Yeltsin wooed potential regional supporters of his reelection by signing charters with Perm', Rostov, Tver', and Leningrad oblasts and with the city of St. Petersburg, among others, granting these regions liberal tax treatment and other economic advantages.
By the mid-1990s, regional jurisdictions also had become bolder in passing local legislation to fill gaps in federation statutes rather than waiting for the Federal Assembly to act. For example, Volgograd Oblast passed laws regulating local pensions, the issuance of promissory notes, and credit unions. The constitution upholds regional legislative authority to pass laws that accord with the constitution and existing federal laws.
During his presidency, Boris Yeltsin signed a total of 46 power-sharing treaties with Russia's various subjects, beginning with Tatarstan on 15 February 1994 and ending with Moscow on 16 June 1998, giving them greater autonomy from the federal government. According to Prime Minister Viktor Chernomyrdin, the government intended to sign power-sharing agreements with all of Russia's 89 subjects. Following the election of Vladimir Putin on 26 March 2000 and his subsequent overhaul of the federal system, the power-sharing treaties began to be abolished. On 24 July 2017, Tatarstan's power-sharing treaty expired, making it the last subject to lose its special autonomous status.
The president retains the power to appoint and remove presidential representatives, who act as direct emissaries to the jurisdictions in overseeing local administrations' implementation of presidential policies. The power to appoint these overseers was granted by the Russian Supreme Soviet to Yeltsin in late 1991. The parliament attempted several times during 1992-93 to repeal or curtail the activities of these appointees, whose powers are only alluded to in the constitution. The presence of Yeltsin's representatives helped bring out the local vote on his behalf in the 1996 presidential election.
The governments of the republics include a president or prime minister (or both) and a regional council or legislature. The chief executives of lower jurisdictions are called governors or administrative heads. Generally, in jurisdictions other than republics the executive branches have been more sympathetic to the central government, and the legislatures (called soviets until late 1993, then called dumas or assemblies) have been the center of whatever separatist sentiment exists. Under the power given him in 1991 to appoint the chief executives of territories, oblasts, autonomous regions, and the autonomous oblast, Yeltsin had appointed virtually all of the sixty-six leaders of those jurisdictions. By contrast, republic presidents have been popularly elected since 1992. Some of Yeltsin's appointees have encountered strong opposition from their legislatures; in 1992 and 1993, in some cases votes of no-confidence brought about popular elections for the position of chief executive.
After the Moscow confrontation of October 1993, Yeltsin sought to bolster his regional support by dissolving the legislatures of all federal subunits except the republics (which were advised to "reform" their political systems). Accordingly, in 1994 elections were held in all the jurisdictions whose legislatures had been dismissed. In some cases, that process placed local executives at the head of legislative bodies, eliminating checks and balances between the branches at the regional level.
Election results in the subnational jurisdictions held great significance for the Yeltsin administration because the winners would fill the ex officio seats in the Federation Council, which until 1996 was a reliable bastion of support. The election of large numbers of opposition candidates would end the Federation Council's usefulness as a balance against the anti-Yeltsin State Duma and further impede Yeltsin's agenda. In 1995 some regions held gubernatorial elections to fill the administrative posts originally granted to Yeltsin appointees in 1991. Faced with an escalating number of requests for such elections, Yeltsin decreed December 1996 as the date for most gubernatorial and republic presidential elections. This date was confirmed by a 1995 Federation Council law. The decree also set subnational legislative elections for June or December 1997. (In July 1996, the State Duma advanced these elections to late 1996.) Observers noted that by calling for most of these elections to take place after the presidential election, Yeltsin prevented unfavorable outcomes from possibly reducing his reelection chances—even though voter apathy after the presidential election had the potential to help opposition candidates.
Formerly, seats in the Duma were elected half by proportional representation (with at least 5% of the vote required to qualify for seats) and half by single-member districts. However, President Putin signed a law providing that all seats be elected by proportional representation (with at least 7% of the vote required to qualify for seats), taking effect in the December 2007 elections. By doing this, Putin eliminated independents and made it more difficult for small parties to be elected to the Duma.
Although the 1993 constitution weakened their standing vis-à-vis the presidency, the parliaments elected in 1993 and 1995 nonetheless used their powers to shape legislation according to their own precepts and to defy Yeltsin on some issues. An early example was the February 1994 State Duma vote to grant amnesty to the leaders of the 1991 Moscow coup. Yeltsin vehemently denounced this action, although it was within the constitutional purview of the State Duma. In October 1994, both legislative chambers passed a law over Yeltsin's veto requiring the Government to submit quarterly reports on budget expenditures to the State Duma and adhere to other budgetary guidelines.
In the most significant executive-legislative clash since 1993, the State Duma overwhelmingly voted no confidence in the Government in June 1995. The vote was triggered by a Chechen rebel raid into the neighboring Russian town of Budenovsk, where the rebels were able to take more than 1,000 hostages. Dissatisfaction with Yeltsin's economic reforms also was a factor in the vote. A second motion of no confidence failed to carry in early July. In March 1996, the State Duma again incensed Yeltsin by voting to revoke the December 1991 resolution of the Russian Supreme Soviet abrogating the 1922 treaty under which the Soviet Union had been founded. That resolution had prepared the way for formation of the Commonwealth of Independent States.
In his February 1996 State of the Union speech, Yeltsin commended the previous parliament for passing a number of significant laws, and he noted with relief the "civil" resolution of the June 1995 no-confidence conflict. He complained, however, that the Federal Assembly had not acted on issues such as the private ownership of land, a tax code, and judicial reform. Yeltsin also was critical of legislation that he had been forced to return to the parliament because it contravened the constitution and existing law, and of legislative attempts to pass fiscal legislation in violation of the constitutional stricture that such bills must be preapproved by the Government. He noted that he would continue to use his veto power against ill-drafted bills and his power to issue decrees on issues he deemed important, and that such decrees would remain in force until suitable laws were passed. The State Duma passed a resolution in March 1996 demanding that Yeltsin refrain from returning bills to the parliament for redrafting, arguing that the president was obligated either to sign bills or to veto them.
In the first half of the 1990s, observers speculated about the possibility that some of the jurisdictions in the federation might emulate the former Soviet republics and demand full independence. Several factors militate against such an outcome, however. Russia is more than 80 percent ethnic Russian, and most of the thirty-two ethnically based jurisdictions are demographically dominated by ethnic Russians, as are all of the territories and oblasts. Many of the subnational jurisdictions are in the interior of Russia, meaning that they could not break away without joining a bloc of seceding border areas, and the economies of all such jurisdictions were thoroughly integrated with the national economy in the Soviet system. The 1993 constitution strengthens the official status of the central government in relation to the various regions, although Moscow has made significant concessions in bilateral treaties. Finally, most of the differences at the base of separatist movements are economic and geographic rather than ethnic.
Advocates of secession, who are numerous in several regions, generally appear to be in the minority and are unevenly dispersed. Some regions have even advocated greater centralization on some matters. By 1996 most experts believed that the federation would hold together, although probably at the expense of additional concessions of power by the central government. The trend is not toward separatism so much as the devolution of central powers to the localities on trade, taxes, and other matters.
Some experts observe that Russia's ethnically distinct republics pressing claims for greater subunit rights fall into three groups. The first is composed of those jurisdictions most vociferous in pressing ethnic separatism, including Chechnya and perhaps other republics of the North Caucasus, and the Republic of Tuva. The second group consists of large, resource-rich republics, including Karelia, Komi Republic, and Sakha (Yakutia). Their differences with Moscow center on resource control and taxes rather than demands for outright independence. A third, mixed group consists of republics along the Volga River, which straddle strategic water, rail, and pipeline routes, possess resources such as oil, and include large numbers of Russia's Muslim and Buddhist populations. These republics include Bashkortostan, Kalmykia, Mari El, Mordovia, Tatarstan, and Udmurtia.
In addition to the republics, several other jurisdictions have lobbied for greater rights, mainly on questions of resource control and taxation. These include Sverdlovsk Oblast, which in 1993 proclaimed itself an autonomous republic as a protest against receiving fewer privileges in taxation and resource control than the republics, and strategically vital Primorsky Krai ("Maritime Territory") on the Pacific coast, whose governor in the mid-1990s, Yevgeniy Nazdratenko, defied central economic and political policies on a number of well-publicized issues.
Some limited cooperation has occurred among Russia's regional jurisdictions, and experts believe there is potential for even greater coordination. Eight regional cooperation organizations have been established, covering all subnational jurisdictions except Chechnya: the Siberian Accord Association; the Central Russia Association; the Northwest Association; the Black Earth Association; the Cooperation Association of North Caucasus Republics, Territories, and Oblasts; the Greater Volga Association; the Ural Regional Association; and the Far East and Baikal Association. The Federation Council formally recognized these interjurisdictional organizations in 1994. Expansion of the organizations' activities is hampered by economic inequalities among their members and by inadequate interregional transportation infrastructure, but in 1996 they began increasing their influence in Moscow.
Regional and ethnic conflicts have encouraged proposals to abolish the existing subunits and resurrect the tsarist-era guberniya, or large province, which would incorporate several smaller subunits on the basis of geography and population rather than ethnic considerations. Russian ultranationalists such as Vladimir Zhirinovsky have been joined in supporting this proposal by some officials of the national Government and oblast and territory leaders who resent the privileges of the republics. Some have called for these new subunits to be based on the eight interregional economic associations.
Russian politics are now dominated by President Vladimir Putin, his United Russia party, and Prime Minister Mikhail Mishustin. At the 2003 legislative elections, United Russia reduced all other parties to minority status. Other parties retaining seats in the State Duma, the lower house of the legislature, are the Communist Party of the Russian Federation, the Liberal Democratic Party of Russia, and A Just Russia.
Presidential elections were held on 26 March 2000, following Yeltsin's resignation. Putin, who had previously been made Prime Minister of Russia and had become acting president upon the resignation, won in the first round with 53% of the vote in what were judged generally free and fair elections (see 2000 Russian presidential election). Putin won a second full term without difficulty in the March 2004 presidential election. While the Organization for Security and Co-operation in Europe reported that the elections were generally organized professionally, there was criticism of unequal treatment of candidates by state-controlled media, among other issues. After the election, Prime Minister Mikhail Kasyanov and his cabinet were dismissed by Putin. However, pundits in Russia believed this to be due not to the president's displeasure with the government but with Mikhail Kasyanov himself, as the Russian constitution does not allow the prime minister to be removed without dismissing the whole cabinet. Kasyanov later went on to become a vocal Putin critic. Although Russia's regions enjoy a degree of autonomous self-government, the election of regional governors was replaced in 2005 by direct appointment by the president. In September 2007, Putin accepted the resignation of Prime Minister Mikhail Fradkov, appointing Viktor Zubkov as the new prime minister.
In the 2008 Russian Presidential election, Dmitry Medvedev—whose nomination was supported by the popular outgoing President Vladimir Putin—scored a landslide victory. According to analysts, the country was now effectively ruled by a "tandem", with a constitutionally powerful President and an influential and popular Prime Minister.
Russia has suffered democratic backsliding during Putin's and Medvedev's tenures. Freedom House has listed Russia as "not free" since 2005. In 2004, Freedom House warned that Russia's "retreat from freedom marks a low point not registered since 1989, when the country was part of the Soviet Union." Alvaro Gil-Robles (then head of the Council of Europe human rights division) stated in 2004 that "the fledgling Russian democracy is still, of course, far from perfect, but its existence and its successes cannot be denied." The Economist Intelligence Unit has rated Russia as "authoritarian" since 2011, whereas it had previously been considered a "hybrid regime" (with "some form of democratic government" in place) as late as 2007. The constitution declares Russia to be a democratic federal law-bound state with a republican form of government, but critics argue that this description is no longer borne out in practice. According to political scientist Larry Diamond, writing in 2015, "no serious scholar would consider Russia today a democracy".
The arrest of prominent oligarch Mikhail Khodorkovsky on charges of fraud, embezzlement, and tax evasion was met with domestic and Western criticism alleging that the arrest was political and that his trial was deeply flawed. The move was nevertheless received positively by the Russian public and did little to deter investment in the country, which continued to grow at double-digit rates.
In 2005, Russia started steadily increasing the price at which it sold heavily subsidized gas to ex-Soviet republics. Russia has recently been accused of using its natural resources as a political weapon. Russia, in turn, accuses the West of applying double standards relating to market principles, pointing out that it has been supplying gas to the states in question at prices significantly below world market levels, and in most cases still below them even after the increases. Politicians in Russia argued that the country is not obligated to effectively subsidize the economies of post-Soviet states by offering them resources at below-market prices. Regardless of alleged political motivation, observers have noted that charging market prices is Russia's legitimate right, and point out that Russia has raised the price even for its close ally, Belarus.
The constitution guarantees citizens the right to choose their place of residence and to travel abroad. Some big-city governments, however, have restricted this right through residential registration rules that closely resemble the Soviet-era "propiska" regulations. Although the rules were touted as a notification device rather than a control system, their implementation has produced many of the same results as the propiska system. The freedom to travel abroad and emigrate is respected, although restrictions may apply to those who have had access to state secrets.
Economy of Russia
The economy of Russia is an upper-middle-income mixed and transition economy. It is the fifth-largest national economy in Europe, the eleventh-largest in the world by nominal GDP, and the sixth-largest by purchasing power parity.
Russia's vast geography is an important determinant of its economic activity, with some sources estimating that Russia contains over 30 percent of the world's natural resources. The World Bank estimates the total value of Russia's natural resources at US$75 trillion. Russia relies on energy revenues to drive most of its growth. Russia has an abundance of oil, natural gas, and precious metals, which make up a major share of its exports. In recent years, the oil-and-gas sector accounted for 16% of GDP, 52% of federal budget revenues, and over 70% of total exports. Russia is considered an "energy superpower". It has the world's largest proven natural gas reserves and is the largest exporter of natural gas. It is also the second-largest exporter of petroleum.
Russia has a large and sophisticated arms industry, capable of designing and manufacturing high-tech military equipment, including a fifth-generation fighter jet, nuclear-powered submarines, firearms, and short- and long-range ballistic missiles. The value of Russian arms exports totalled $15.7 billion in 2013—second only to the US. Top military exports from Russia include combat aircraft, air defence systems, ships, and submarines.
The economic development of the country has been uneven geographically, with the Moscow region contributing a very large share of the country's GDP. There has been a substantial rise in wealth inequality in Russia since 1990 (far more than in China and in Eastern European countries). Credit Suisse has described Russian wealth inequality as so extreme compared to other countries that it "deserves to be placed in a separate category." One study estimates that "the wealth held offshore by rich Russians is about three times larger than official net foreign reserves, and is comparable in magnitude to total household financial assets held in Russia."
Beginning in 1928, the course of the Soviet Union's economy was guided by a series of five-year plans. By the 1950s, the Soviet Union had rapidly evolved from a mainly agrarian society into a major industrial power.
By the 1970s, the Soviet Union had entered the Era of Stagnation. The complex demands of the modern economy and inflexible administration overwhelmed and constrained the central planners. The volume of decisions facing planners in Moscow became overwhelming. The cumbersome procedures of bureaucratic administration foreclosed the free communication and flexible response required at the enterprise level for dealing with worker alienation, innovation, customers, and suppliers. From 1975 to 1985, corruption and data fiddling became common practice within the bureaucracy, with targets and quotas reported as fulfilled when they were not, entrenching the crisis. Starting in 1986, Mikhail Gorbachev attempted to address economic problems by moving towards a market-oriented socialist economy. His policies of perestroika failed to rejuvenate the Soviet economy; instead, a process of political and economic disintegration culminated in the breakup of the Soviet Union in 1991.
Following the collapse of the Soviet Union, Russia underwent a radical transformation, moving from a centrally planned economy to a globally integrated market economy. Corrupt and haphazard privatization processes turned major state-owned firms over to politically connected "oligarchs", leaving equity ownership highly concentrated.
Yeltsin's program of radical, market-oriented reform came to be known as "shock therapy". It was based on the policies associated with the Washington Consensus, recommendations of the IMF, and a group of top American economists, including Larry Summers. With deep corruption afflicting the process, the result was disastrous: real GDP fell by more than 40% by 1999, hyperinflation wiped out personal savings, and crime and destitution spread rapidly. This was accompanied by a drop in the standard of living, including surging economic inequality and poverty, along with increased excess mortality and a decline in life expectancy.
The majority of state enterprises were privatized amid great controversy and subsequently came to be owned by insiders for far less than they were worth. For example, the director of a factory during the Soviet regime would often become the owner of the same enterprise. Under the government's cover, outrageous financial manipulations enriched a narrow group of individuals in key positions of business and government. Many of them promptly invested their newfound wealth abroad, producing enormous capital flight.
Difficulties in collecting government revenues amid the collapsing economy and a dependence on short-term borrowing to finance budget deficits led to the 1998 Russian financial crisis.
In the 1990s Russia was "the largest borrower" from the International Monetary Fund with loans totaling $20 billion. The IMF was the subject of criticism for lending so much as Russia introduced little of the reforms promised for the money and a large part of these funds could have been "diverted from their intended purpose and included in the flows of capital that left the country illegally".
Russia bounced back from the August 1998 financial crash with surprising speed. Much of the reason for the recovery was the devaluation of the ruble, which made domestic producers more competitive nationally and internationally.
Between 2000 and 2002, there was a significant amount of pro-growth economic reforms including a comprehensive tax reform, which introduced a flat income tax of 13%; and a broad effort at deregulation which improved the situation for small and medium-sized enterprises.
Between 2000 and 2008, the Russian economy got a major boost from rising commodity prices. GDP grew on average 7% per year. Disposable incomes more than doubled and, in dollar-denominated terms, increased eightfold. The volume of consumer credit between 2000 and 2006 increased 45 times, fuelling a boom in private consumption. The number of people living below the poverty line declined from 30% in 2000 to 14% in 2008.
Inflation remained a problem, however, as the central bank aggressively expanded the money supply to combat appreciation of the ruble. Nevertheless, in 2007 the World Bank declared that the Russian economy had achieved "unprecedented macroeconomic stability". Until October 2007, Russia maintained impressive fiscal discipline, with budget surpluses every year from 2000.
Russian banks were hit by the global credit crunch in 2008, though no long-term damage was done thanks to a proactive and timely response by the government and central bank, which shielded the banking system from the effects of the global financial crisis. A sharp but brief recession in Russia was followed by a strong recovery beginning in late 2009.
After 16 years of negotiations, Russia's membership in the WTO was accepted in 2011. In 2013, Russia was labeled a high-income economy by the World Bank.
Russian leaders repeatedly spoke of the need to diversify the economy away from its dependence on oil and gas and foster a high-technology sector. In 2012 oil, gas and petroleum products accounted for over 70% of total exports. This economic model appeared to show its limits, when after years of strong performance, Russian economy expanded by a mere 1.3% in 2013. Several reasons have been proposed to explain the slowdown, including prolonged recession in the EU, which is Russia's largest trading partner, stagnant oil prices, lack of spare industrial capacity and demographic problems. Political turmoil in neighboring Ukraine added to the uncertainty and suppressed investment.
According to a survey by the "Financial Times" in 2012, Russia ranked second in economic performance among the G20, following Saudi Arabia, based on seven measures: gross domestic product growth, budget deficit, and government debt for 2012; economic recovery – output compared with the pre-crisis peak; change in debt since 2009; change in unemployment from 2009 to 2013; and, finally, the deviation of the current account from balance. "Forbes" magazine lists Russia as #91 among the best countries for business. The country has recently made substantial improvement in areas like innovation and trade freedom. (Forbes ranks each country in a number of categories and draws from multiple sources such as the World Economic Forum, World Bank, and Central Intelligence Agency.)
Following the annexation of Crimea in March 2014 and Russia's involvement in the ongoing conflict in Ukraine, the United States, the EU (and some other European countries), Canada, and Japan imposed sanctions on Russia's financial, energy, and defense sectors. This led to the decline of the Russian ruble and sparked fears of a Russian financial crisis. Russia responded with sanctions against a number of countries, including a one-year total ban on food imports from the European Union and the United States. As of 2018, it is estimated that Western sanctions may have reduced the size of the Russian economy by as much as 6%.
According to the Russian economic ministry in July 2014, GDP growth in the first half of 2014 was 1%. The ministry projected growth of 0.5% for 2014. The Russian economy grew by a better-than-expected 0.6% in 2014. As of the second quarter of 2015, inflation compared to the second quarter of 2014 was 8%, and the economy had contracted by 4.6% as it entered recession. Balancing the state budget in 2015 would have required a substantially higher oil price than the one assumed for 2014. Russia's foreign-exchange reserves, previously around $500 billion, had fallen considerably by summer 2015; the central bank planned to keep accumulating reserves for years to come, until they again reached $500 billion.
According to Herman Gref of Sberbank, the contraction of the Russian economy was "not a crisis but rather a new reality" to which it had to adapt, primarily due to low oil prices. He also presented a number of metrics demonstrating the change: GDP had fallen by 3.7%, incomes by 4.3%, and salaries by 9.3%, while inflation reached 12.9%. However, in December 2015 the Moscow Times reported that the number of people living at or below the poverty line, "those with monthly incomes of less than 9,662 rubles ($140)", had increased by more than 2.3 million people. Russia is rated one of the most unequal of the world's major economies.
During 2014-2015, a quarter of the banks in Russia left the market; the expenses of the Russian bank guarantee fund reached 1 trillion roubles, and additional government funds for the recapitalisation of banks reached 1.9 trillion roubles.
At the end of 2016, the United States imposed further sanctions on the Russian Federation in response to what the US government said was Russian interference in the 2016 United States elections.
In 2016, the Russian economy was the sixth largest in the world by PPP and twelfth largest at market exchange rates. Between 2000 and 2012 Russia's energy exports fueled a rapid growth in living standards, with real disposable income rising by 160%. In dollar-denominated terms this amounted to a more than sevenfold increase in disposable incomes since 2000. In the same period, unemployment and poverty more than halved and Russians' self-assessed life satisfaction also rose significantly. This growth was a combined result of the 2000s commodities boom, high oil prices, as well as prudent economic and fiscal policies. However, these gains have been distributed unevenly, as the 110 wealthiest individuals were found in a report by Credit Suisse to own 35% of all financial assets held by Russian households. Russia also has the second-largest volume of illicit money outflows, having lost over $880 billion between 2002 and 2011 in this way. Since 2008 Forbes has repeatedly named Moscow the "billionaire capital of the world".
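To relate the two growth figures quoted above (a back-of-the-envelope reading using only the numbers in this article): a 160% rise means real disposable incomes were multiplied by about 2.6, so the more-than-sevenfold dollar-denominated increase implies a further factor of roughly 2.7 attributable to the ruble's real appreciation against the dollar over the period:

$$1 + 1.60 = 2.6, \qquad \tfrac{7}{2.6} \approx 2.7.$$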
The Russian economy risked going into recession from early 2014, mainly due to falling oil prices, sanctions, and the subsequent capital flight. While GDP growth remained positive at 0.6% in 2014, the Russian economy shrank by 3.7% in 2015 and was expected to shrink further in 2016. However, the World Bank and the IMF estimated that Russia's economy would begin to recover by 2017. By 2016, the Russian economy had rebounded with 0.3% GDP growth and was officially out of recession. The growth continued in 2017, with an increase of 1.5%.
In January 2016, the US company Bloomberg rated Russia's economy as the 12th most innovative in the world, up from 14th in January 2015 and 18th in January 2014. Russia has the world's 15th-highest patent application rate, the 8th-highest concentration of high-tech public companies (such as internet and aerospace firms), and the third-highest graduation rate of scientists and engineers. Former finance minister Alexei Kudrin has said that Russia needs to reduce geopolitical tensions to improve its economic conditions.
In May 2016, the average nominal monthly wage fell below $450, and tax on the income of individuals is payable at the rate of 13% on most incomes. Approximately 19.2 million Russians lived below the national poverty line in 2016, significantly up from 16.1 million in 2015.
A poll completed in 2018 among 1,400 managers of non-hydrocarbon Russian businesses demonstrated a high level of pessimism, with the majority describing the economic situation in the country as "catastrophic". 73% of respondents in large businesses and 77% in medium and small ones said they were dealing with a "crisis", while only 4% described the situation as "good". 50% suffered from increased real tax rates, and 60% were hit by rising public utilities tariffs.
In 2019, Russia's Natural Resources and Environment Ministry estimated the value of the country's natural resources at $844 billion, or 60% of GDP.
Table: main economic indicators, 1992–2018.
The Russian ruble is the unit of currency of the Russian Federation. It is also accepted as legal tender in the partially recognised states of Abkhazia and South Ossetia and the unrecognised Donetsk People's Republic and Lugansk People's Republic.
The Russian monetary system is managed by the Bank of Russia. Founded on 13 July 1990 as the State Bank of the RSFSR, Bank of Russia assumed responsibilities of the central bank following the breakup of the Soviet Union in 1991.
According to the Constitution, Bank of Russia is an independent entity, with the primary responsibility of protecting the stability of the national currency, the ruble. It is also chief regulator and a lender of last resort for the banking industry in Russia. Bank of Russia is governed by a board of directors, headed by a governor who is appointed by the President of Russia.
Large current account surpluses caused rapid real appreciation of the ruble between 2000 and 2008. Bank of Russia attempted to combat this trend by aggressively accumulating foreign currency reserves. This was a major contributing cause of the relatively high inflation rates of this period. Central bank policy evolved following the global financial crisis: instead of targeting a fixed exchange rate against a basket of dollars and euros, Bank of Russia shifted its focus to inflation targeting. In April 2012, Russian inflation reached a record low of 3.6%.
The Russian Central Bank had been planning to free-float the Russian ruble and had been widening the currency's trading band, expecting the ruble to be fully free-floating in 2015. However, the ruble fell significantly after 2013, when the central bank announced those plans. On 3 October 2014, the USD–RUB exchange rate reached 40.00 rubles to the US dollar, up from 32.19 rubles a year earlier; this represents a decline of 24.26%. In January 2014, when the ruble had just begun its decline, the Russian Central Bank stated that Russian banks were able to withstand a devaluation of up to 25%–30% before central bank intervention might be needed; plans to free-float the currency nevertheless continued. The Russian Central Bank spent $88 billion in 2014 in order to stem the fall of the ruble. Due to central bank intervention and stronger oil prices, the ruble rebounded sharply at the beginning of 2015. In April 2015, Ksenia Yudaeva, Bank of Russia's First Deputy Governor, stated that she believed the currency had stabilized at the then-current rate of around 50 rubles to the US dollar.
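A note on the arithmetic, using only the two rates quoted above: the 24.26% figure is the rise in the number of rubles per dollar; expressed instead as the loss in the ruble's dollar value, the fall is smaller, about 19.5%:

$$\frac{40.00 - 32.19}{32.19} \approx 0.2426, \qquad 1 - \frac{32.19}{40.00} \approx 0.195.$$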
Along with the rapid devaluation of the ruble, inflation in Russia greatly increased. In October 2014 the rate of inflation was reported to be 8%, although this was well below the 2,333.30% inflation rate experienced in 1992. In November 2017, inflation fell to 2.5%, its lowest point since the fall of the Soviet Union.
Russia was expected to run a government budget deficit of $21 billion in 2016. The budget deficit narrowed to 0.6% of GDP in 2017, from 2.8% in 2016.
On 1 January 2004, the Government of Russia established the Stabilization Fund of the Russian Federation as a part of the federal budget, to balance the budget if the oil price falls. On 1 February 2008, the Stabilization Fund was divided into two parts. The first part is a reserve fund equal to 10% of GDP (at the time, about $200 billion), to be invested in a similar way as the Stabilization Fund. The second part is the National Prosperity Fund of the Russian Federation. Deputy Finance Minister Sergei Storchak estimated it would reach 600–700 billion rubles by 1 February 2008. The National Prosperity Fund is to be invested in riskier instruments, including the shares of foreign companies.
Russia has a very low debt-to-GDP ratio, among the lowest in the world. Most of its external debt is private. In 2016, its debt-to-GDP ratio was 12%.
As the chief successor state to the Soviet Union, Russia took on the responsibility for paying the USSR's external debts.
Russia is one of the leading nations in its use of protectionist policies. According to the independent Global Trade Alert, Russia has put significant protectionist policies in place. Russia's strategic trade bloc with Belarus and Kazakhstan was responsible for a significant portion of worldwide protectionism during 2013. Of the protectionist policies, 43% were targeted bailouts and direct subsidies for local companies, while 15% were tariff measures. Since 2008, Russia has implemented protectionist measures on about the same scale as India, although the most protectionist state was the United States.
Starting from Putin's second term, very few corruption cases have been the subject of outrage. Putin's system is remarkable for its ubiquitous and open merging of the civil service and business, as well as its use of relatives, friends, and acquaintances to benefit from budgetary expenditures and take over state property. Corporate, property, and land raiding is commonplace.
Corruption in Russia is perceived as a significant problem impacting all aspects of life, including public administration, law enforcement, healthcare and education. The phenomenon of corruption is strongly established in the historical model of public governance in Russia and attributed to general weakness of rule of law in Russia. According to the 2018 results of the Corruption Perceptions Index by Transparency International, Russia ranked 138th out of 180 countries with a score of 28 out of 100, tying with Guinea, Iran, Lebanon, Mexico and Papua New Guinea.
There are many different estimates of the actual cost of corruption. According to official government statistics from Rosstat, the "shadow economy" occupied only 15% of Russia's GDP in 2011, and this included unreported salaries (to avoid taxes and social payments) and other types of tax evasion. According to Rosstat's estimates, corruption in 2011 amounted to only 3.5 to 7% of GDP. In comparison, some independent experts maintain that corruption consumes as much as 25% of Russia's GDP. A World Bank report puts this figure at 48%. There has also been an interesting shift in the main focus of bribery: whereas previously officials took bribes to shut their eyes to legal infractions, they now take them simply to perform their duties. Many experts admit that in recent years corruption in Russia has become a business. In the 1990s, businessmen had to pay different criminal groups to provide a "krysha" (literally, a "roof", i.e., protection). Nowadays, this "protective" function is performed by officials. Corrupt hierarchies characterize different sectors of the economy, including education.
In the end, the Russian population pays for this corruption. For example, some experts believe that rapid increases in monthly bills, significantly outpacing the rate of inflation, are a direct result of high volumes of corruption at the highest levels. In a 2020 survey, 45% of Russian citizens approved of tax evasion, a sharp rise from 35% in 2019, and 60% reported reduced income in May 2020 compared to February.
In Russia, services form the biggest sector of the economy and account for 58% of GDP. Within services the most important segments are: wholesale and retail trade, repair of motor vehicles, motorcycles and personal and household goods (17% of total GDP); public administration, health and education (12%); real estate (9%); and transport, storage and communications (7%). Industry contributes 40% to total output. Mining (11% of GDP), manufacturing (13%) and construction (4%) are the most important industry segments. Agriculture accounts for the remaining 2%.
The mineral-packed Ural Mountains and the vast fossil fuel (oil, gas, coal) and timber reserves of Siberia and the Russian Far East make Russia rich in natural resources, which dominate Russian exports. Oil and gas exports, specifically, continue to be the main source of hard currency.
The petroleum industry in Russia is one of the largest in the world. Russia has the largest reserves, and is the largest exporter, of natural gas. It has the second largest coal reserves, the eighth largest oil reserves, and is the largest or second largest exporter of oil in the world in absolute numbers, periodically changing positions with Saudi Arabia. Per capita oil production in Russia, though, is not that high. As of 2007, Russia was producing 69.603 bbl/day per 1,000 people, much less than Canada (102.575 bbl/day), Saudi Arabia (371.363 bbl/day) or Norway (554.244 bbl/day), but more than twice that of the USA (28.083 bbl/day) or the UK (27.807 bbl/day).
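To make the per-capita comparison concrete, here is a minimal sketch of how such a figure is derived. The total-production and population inputs below are rough, assumed values for Russia around 2007, used purely for illustration; they are not taken from this article.

# Minimal sketch: per-capita oil production in bbl/day per 1,000 people.
# The inputs are rough, assumed values (illustrative only, not sourced here).
production_bbl_per_day = 9_900_000   # assumed total output, ~9.9 million bbl/day
population = 142_000_000             # assumed population, ~142 million

per_1000_people = production_bbl_per_day / population * 1000
print(f"{per_1000_people:.1f} bbl/day per 1,000 people")  # ~69.7, close to the quoted 69.603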
Russia is also a leading producer and exporter of minerals and gold. Russia is the largest diamond-producing nation in the world, estimated to produce over 33 million carats in 2013, or 25% of global output valued at over $3.4 billion, with state-owned ALROSA accounting for approximately 95% of all Russian production.
Expecting the area to become more accessible as climate change melts Arctic ice, and believing the area contains large reserves of untapped oil and natural gas, Russian explorers on 2 August 2007 in submersibles planted the Russian flag on the Arctic seabed, staking a claim to energy sources right up to the North Pole. Reaction to the event was mixed: President Vladimir Putin congratulated the explorers for "the outstanding scientific project", while Canadian officials stated the expedition was just a public show.
Under the Federal Law "On Continental Shelf Development", upon proposal from the federal agency managing the state fund of mineral resources, or its territorial offices, the Russian government approves the list of sections of mineral resources that are passed for development without contests or auctions. These include sections of federal importance on the Russian continental shelf, sections of mineral resources of federal importance that are situated in Russia and stretch onto its continental shelf, and gas deposits of federal importance that are handed over for prospecting and development under a joint license. The Russian government is also empowered to decide on the handover of such sections for development without contests or auctions.
Russia has more than a fifth of the world's forests, which makes it the largest forest country in the world. However, according to a 2012 study by the Food and Agriculture Organization of the United Nations and the government of the Russian Federation, the considerable potential of Russian forests is underutilized and Russia's share of the global trade in forest products is less than 4%.
Russia comprises roughly three-quarters of the territory of the former Soviet Union. Following the breakup of the Soviet Union in 1991 and after nearly 10 years of decline, Russian agriculture began to show signs of improvement due to organizational and technological modernization. Northern areas concentrate mainly on livestock, while the southern parts and western Siberia produce grain. The 2014 devaluation of the rouble and the imposition of sanctions spurred domestic production; in 2016 Russia exceeded Soviet grain production levels and became the world's largest exporter of wheat. In the same year agriculture surpassed the arms industry as Russia's second largest export sector after oil and gas. As of 2020, Russia faces problems of over-nutrition, with over 23% of adults obese and over 57% overweight, while under 2.5% of the population suffers from undernourishment.
Russia's defense industry employs 2.53 million people, accounting for 20% of all manufacturing jobs. Russia is the world's second largest conventional arms exporter after the United States. The largest firearm manufacturer in the country, Kalashnikov Concern, produces about 95% of all small arms in Russia and supplies more than 27 countries around the world. The most popular types of weaponry bought from Russia are Sukhoi and MiG fighters, air defense systems, helicopters, battle tanks, armored personnel carriers and infantry fighting vehicles. The research organization Centre for Analysis of Strategies and Technologies ranked the air defense system producer Almaz-Antey as the industry's most successful company in 2007, followed by aircraft-maker Sukhoi. Almaz-Antey's revenue that year was $3.122 billion, and it had a workforce of 81,857 people.
Aircraft manufacturing is an important industry sector in Russia, employing around 355,300 people. The Russian aircraft industry offers a portfolio of internationally competitive military aircraft such as the MiG-29 and Su-30, while new projects such as the Sukhoi Superjet 100 are expected to revive the fortunes of the civilian aircraft segment. In 2009, companies belonging to the United Aircraft Corporation delivered 95 new fixed-wing aircraft to their customers, including 15 civilian models. In addition, the industry produced over 141 helicopters. It is one of the most science-intensive hi-tech sectors and employs the largest number of skilled personnel. The production and value of the military aircraft branch far outstrip other defense industry sectors, and aircraft products make up more than half of the country's arms exports.
Russia's space industry consists of over 100 companies and employs 250,000 people. The largest company in the industry is RKK Energia, the main manned space flight contractor. The leading launch vehicle producers are Khrunichev and TsSKB Progress. The largest satellite developer is Reshetnev Information Satellite Systems, while NPO Lavochkin is the main developer of interplanetary probes.
Automobile production is a significant industry in Russia, directly employing around 600,000 people or 0.7% of the country's total workforce. In addition, the industry supports around 2–3 million people in related industries. Russia was the world's 15th largest car producer in 2010, and accounts for about 7% of the worldwide production. In 2009 the industry produced 595,807 light vehicles, down from 1,469,898 in 2008 due to the global financial crisis. The largest companies are light vehicle producers AvtoVAZ and GAZ, while KAMAZ is the leading heavy vehicle producer.
Russia is experiencing a regrowth of microelectronics, with the revival of JSC Mikron.
As of 2013, Russians spent 60% of their pre-tax income shopping, the highest percentage in Europe. This is possible because many Russians pay no rent or house payments, owning their own home after privatization of state-owned Soviet housing. Shopping malls were popular with international investors and shoppers from the emerging middle class. Eighty-two malls had been built near major cities including a few that were very large. A supermarket selling groceries is a typical anchor store in a Russian mall.
Russia's telecommunications industry is growing in size and maturity. As of December 2007, there were an estimated 4,900,000 broadband lines in Russia.
In 2006, there were more than 300 BWA operator networks, accounting for 5% of market share, with dial-up accounting for 30% and broadband fixed access for the remaining 65%. In December 2006, Tom Phillips, chief government and regulatory affairs officer of the GSM Association, stated:
The financial crisis, which hit the country at the end of 2008, caused a sharp reduction in investment by the business sector and a notable reduction in government IT budgets in 2008–2009. As a consequence, in 2009 the IT market in Russia declined by more than 20% in ruble terms and by one-third in euro terms. Among the particular segments, the biggest share of the Russian IT market still belongs to hardware.
Russian Railways accounts for 2.5% of Russia's GDP. The percentage of freight and passenger traffic that goes by rail is unknown, since no statistics are available for private transportation such as private automobiles or company-owned trucks. In 2007, about 1.3 billion passengers and 1.3 billion tons of freight went via Russian Railways. In 2007 the company owned 19,700 goods and passenger locomotives, 24,200 passenger cars (carriages) and 526,900 freight cars (goods wagons); a further 270,000 freight cars in Russia are privately owned. In 2009 Russia had 128,000 kilometres of common-carrier railroad line, of which about half was electrified and carried most of the traffic; over 40% was double track or better.
In 2009 the Russian construction industry survived its most difficult year in more than a decade. The 0.8% reduction recorded by the industry for the first three quarters of 2010 looked remarkably healthy in comparison with the 18.4% slump recorded the previous year, and construction firms became much more optimistic about the future than in previous months. The most successful construction firms concluded contracts worth billions of dollars and planned to take on employees and purchase new building machinery. The downturn served to emphasise the importance of the government to the construction market.
According to the Central Bank of Russia, 422 insurance companies were operating on the Russian insurance market at the end of 2013. The concentration of insurance business is significant across all major segments except compulsory health insurance (CHI), as the top 10 companies in 2013 charged 58.1% of total premiums excluding CHI. The Russian insurance market in 2013 demonstrated a significant rate of growth. Total premiums charged (without CHI) in 2013 were RUB 904.9 bln (an increase of 11.8% compared to 2012), and total claims paid were RUB 420.8 bln (an increase of 13.9% compared to 2012). The premiums-to-GDP ratio (total without CHI) in 2013 increased to 1.36%, compared to 1.31% a year before. The share of premiums in household spending increased to 1.39%. The level of claims paid on the market (total without CHI) was 46.5%, a slight increase compared to 2012. The number of policies in 2013 increased by 0.1% compared to 2012, to 139.6 mln policies.
Although relative indicators of the Russian insurance market returned to pre-crisis levels, the progress was achieved mainly through the growth of life insurance and accident insurance; the contribution of these two segments to premium growth in 2013 largely exceeds their share of the market. As before, life insurance and accident insurance are often used by banks as an appendix to a credit contract, protecting creditors from the risk of credit default in case of the borrower's death or disability. The rise of these lines is evidently connected with the increase in consumer lending, as the total credit obligations of the population in 2013 increased by 28% to RUB 9.9 trillion. At the same time, the premiums-to-GDP ratio net of life and accident insurance remained at the same level, 1.1%, as in 2012. Thus, if "banking" lines of business are excluded, the Russian insurance market has been stagnating for the last four years, as the premiums-to-GDP ratio net of life and accident insurance has remained at 1.1% since 2010.
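As a quick sanity check, the 46.5% "level of claims paid" quoted above follows directly from the premium and claim totals; a minimal sketch:

# Minimal check: the "level of claims paid" is claims divided by premiums,
# both excluding compulsory health insurance (CHI), using the 2013 totals quoted above.
premiums_bln_rub = 904.9  # total premiums charged in 2013, without CHI
claims_bln_rub = 420.8    # total claims paid in 2013, without CHI

level_of_claims_paid = claims_bln_rub / premiums_bln_rub
print(f"{level_of_claims_paid:.1%}")  # -> 46.5%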
The IT market is one of the most dynamic sectors of the Russian economy. Russian software exports rose from just $120 million in 2000 to $3.3 billion in 2010. Since 2000 the IT market has seen growth rates of 30–40% a year, growing by 54% in 2006 alone. The biggest sector in terms of revenue is system and network integration, which accounts for 28.3% of total market revenues. Meanwhile, the fastest growing segment of the IT market is offshore programming.
Currently, Russia controls 3% of the offshore software development market and is the third leading country (after India and China) among software exporters. This growth of software outsourcing in Russia is caused by a number of factors. One is the supporting role of the Russian government: it has launched a program promoting the construction of IT-oriented technology parks (technoparks), special zones with an established infrastructure and a favorable tax and customs regime, in seven different places around the country: Moscow, Novosibirsk, Nizhny Novgorod, Kaluga, Tyumen, the Republic of Tatarstan and St. Petersburg. Another factor stimulating IT sector growth in Russia is the presence of global technology corporations such as Intel, Google, Motorola, Sun Microsystems, Boeing, Nortel, Hewlett-Packard, SAP AG and others, which have intensified their software development activities and opened R&D centers in Russia.
Under a government decree signed in June 2013, a special "roadmap" is expected to ease business suppliers' access to the procurement programs of state-owned infrastructure monopolies, including such large ones as Gazprom, Rosneft, Russian Railways, Rosatom, and Transneft. These companies will be expected to increase the proportion of domestic technology solutions they use in their operations. The decree puts special emphasis on purchases of innovative products and technologies. According to the decree, by 2015 government-connected companies must double their purchases of Russian technology solutions compared to the 2013 level, and their purchasing levels must quadruple by 2018.
Russia is one of the few countries in the world with a home-grown internet search engine that holds a significant market share: the Russian-based search engine Yandex is used by 53.8% of internet users in the country.
Well-known Russian IT companies include ABBYY (FineReader OCR system and Lingvo dictionaries), Kaspersky Lab (Kaspersky Anti-Virus, Kaspersky Internet Security) and Mail.Ru (portal, search engine, mail service, Mail.ru Agent messenger, ICQ, Odnoklassniki social network, online media sources).
Tourism in Russia has seen rapid growth since the late Soviet period, first domestic tourism and then international tourism. Rich cultural heritage and great natural variety place Russia among the most popular tourist destinations in the world.
In 2013, Russia was visited by 28.4 million tourists, making it the ninth most visited country in the world. The most visited destinations in Russia are Moscow and Saint Petersburg, recognized as World Cities.
Russia recorded a trade surplus of US$15.8 billion in 2013. The balance of trade in Russia is reported by the Central Bank of Russia. Historically, from 1997 until 2013, Russia's balance of trade averaged US$8,338 million, reaching an all-time high of US$20,647 million in December 2011 and a record low of −US$185 million in February 1998. Russia runs regular trade surpluses primarily due to exports of commodities.
In 2015, Russia's main exports were oil and natural gas (62.8% of total exports), ores and metals (5.9%), chemical products (5.8%), machinery and transport equipment (5.4%) and food (4.7%). Others included agricultural raw materials (2.2%) and textiles (0.2%).
Russia imports food, ground transports, pharmaceuticals, and textiles and footwear. Its main trading partners are China (7% of total exports and 10% of imports), Germany (7% of exports and 8% of imports) and Italy. Exports decreased to US$39,038 million in January 2013 from US$48,568 million in December 2012. Historically, from 1994 until 2013, Russian exports, as reported by the Central Bank of Russia, averaged US$18,669 million, reaching an all-time high of US$51,338 million in December 2011 and a record low of US$4,087 million in January 1994. Russia is the 16th largest export economy in the world (2016) and a leading exporter of oil and natural gas. Imports decreased to US$21,296 million in January 2013 from US$31,436 million in December 2012. Historically, from 1994 until 2013, Russian imports averaged US$11,392 million, reaching an all-time high of US$31,553 million in October 2012 and a record low of US$2,691 million in January 1999. Russia's main imports are food (13% of total imports) and ground transports (12%); others include pharmaceuticals, textiles and footwear, plastics and optical instruments. The main import partners are China (10% of total imports) and Germany (8%); others include Italy, France, Japan and the United States.
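The monthly balance-of-trade figure is simply exports minus imports; a small sketch using the January 2013 values quoted above:

# Small sketch: a monthly balance-of-trade figure is exports minus imports.
# Values are the January 2013 figures quoted above, in millions of USD.
exports_usd_mln = 39_038
imports_usd_mln = 21_296

balance = exports_usd_mln - imports_usd_mln
print(f"Trade balance: {balance:,} USD million surplus")  # -> 17,742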
Foreign trade rose 34% to $151.5 billion in the first half of 2005, mainly due to increases in oil and gas prices, which then formed 64% of all exports by value. Trade with CIS countries was up 13.2% to $23.3 billion. Trade with the EU formed 52.9% of the total, with the CIS 15.4%, the Eurasian Economic Community 7.8% and the Asia-Pacific Economic Community 15.9%.
Between 1985 and 2018, almost 28,500 mergers or acquisitions were announced in Russia, with an overall value of around 984 billion USD (5.456 billion RUB). In terms of value, 2007 was the most active year with 158 billion USD, whereas the number of deals peaked in 2010 at 3,684 (against 964 in 2007, the record year by value). Since 2010 both value and deal numbers have decreased constantly, and another wave of M&A is expected.
The majority of deals in, into or out of Russia have taken place in the financial sector (29%), followed by banks (8.6%), oil and gas (7.8%) and Metals and Mining (7.2%).
Among the top deals with Russian companies participating, ranked by deal value in millions of USD, the majority of the top 10 are within the Russian oil and gas sector, followed by metals and mining. | https://en.wikipedia.org/wiki?curid=25705
Telecommunications in Russia
Censorship and the issue of media freedom in Russia have been main themes since the era of the telegraph. Radio was a major new technology in the 1920s, when the Communists had recently come to power. Soviet authorities realized that the "ham" operator was highly individualistic and encouraged private initiative, too much so for the totalitarian regime. Criminal penalties were imposed, but the working solution was to avoid broadcasting over the air: instead, radio programs were transmitted by copper wire, using a hub-and-spoke system, to loudspeakers in approved listening stations, such as the "Red" corner of a factory. Due to the enormous size of the country, Russia today leads in the number of TV broadcast stations and repeaters. There were few channels in Soviet times, but in the past two decades many new state-run and privately owned radio stations and TV channels have appeared.
The telecommunications system in Russia has undergone significant changes since the 1980s, resulting in thousands of companies licensed to offer communication services today. The foundation for the liberalization of broadcasting was laid by a decree signed by the President of the USSR in 1990. Telecommunication is mainly regulated through the Federal Law "On Communications" and the Federal Law "On Mass Media".
The Soviet-era Ministry of Communications of the RSFSR was transformed during the 1990s into the Ministry for Communications and Informatization, renamed in 2004 the Ministry of Information Technologies and Communications (Mininformsvyazi), and since 2008 has been the Ministry of Communications and Mass Media.
Russia is served by an extensive system of automatic telephone exchanges connected by modern networks of fiber-optic cable, coaxial cable, microwave radio relay and a domestic satellite system; cellular telephone service is widely available, expanding rapidly, and includes roaming to foreign countries. Fiber-to-the-x infrastructure has been expanded rapidly in recent years, principally by regional players including Southern Telecom Company, SibirTelecom, ER-Telecom and Golden Telecom. Collectively, these players are having a significant impact on fiber broadband in regional areas and are enabling operators to take advantage of consumer demand for faster access and bundled services.
"Networking" can be traced to the spread of mail and journalism in Russia, and information transfer by technical means came to Russia with the telegraph and radio (besides, an 1837 sci-fi novel "Year 4338", by the 19th-century Russian philosopher Vladimir Odoevsky, contains predictions such as "friends' houses are connected by means of magnetic telegraphs that allow people who live far from each other to talk to each other" and "household journals" "having replaced regular correspondence" with "information about the hosts’ good or bad health, family news, various thoughts and comments, small inventions, as well as invitations").
Computing systems became known in the USSR by the 1950s. Starting in 1952, work was carried out at the Moscow-based Institute of Precision Mechanics and Computer Engineering (headed by Sergei Lebedev) on an automated missile-defense system that used a "computer network": radar data on test missiles was calculated by a central machine called the M-40, which exchanged information with smaller remote terminals some 100–200 kilometers away. The scientists used several locations in the USSR for this work, the largest being a massive test range west of Lake Balkhash. Meanwhile, amateur radio users all over the USSR were conducting "P2P" connections with their counterparts worldwide using data codes. Later, a massive automated data network called "Express" was launched in 1972 to serve the needs of the railways.
From the early 1980s, the All-Union Scientific Research Institute for Applied Computerized Systems (VNIIPAS) worked to implement data connections over the X.25 telephone protocol. A test Soviet connection to Austria existed in 1982, and in 1982 and 1983 there was a series of "world computer conferences" at VNIIPAS initiated by the U.N., at which the USSR was represented by a team of scientists from many Soviet republics headed by the biochemist Anatole Klyosov; the other participating countries were the UK, USA, Canada, Sweden, FRG, GDR, Italy, Finland, the Philippines, Guatemala, Japan, Thailand, Luxembourg, Denmark, Brazil and New Zealand.
Also in 1983, the "San Francisco Moscow Teleport" (SFMT) project was started by VNIIPAS and an American team that included George Soros. It resulted in the creation, in the late 1980s, of the data transfer operator SovAm ("Soviet-American") Teleport. Meanwhile, on 1 April 1984 an April Fools' Day hoax about a "Kremlin computer", Kremvax, was posted on the English-speaking Usenet. There are reports of spontaneous Internet (UUCP and telnet) connections "from home" through X.25 in the USSR as early as 1988. In 1990 the "GlasNet" non-profit initiative by the US-based Association for Progressive Communications sponsored Internet usage in several educational projects in the USSR (through SovAm).
When the Russian economy collapsed in August 1998, the market shrank drastically and the ruble fell; several cellular operators were squeezed between low traffic and huge foreign-currency-denominated credits and telecommunications equipment bills. In 1998, prepaid subscriptions were sold at a loss and infrastructure investment fell. The NMT450 operator Moscow Cellular Communications was hardest hit, as about 50% of its users were corporate. The 1998 crisis also caused many regional operators tariff and payment problems, with debt accumulating to vendors; large debts were restructured and foreign investors lost out.
In November 2013, President Putin instructed Dmitry Medvedev's Cabinet to provide, in 2014–2016, "modern communication services" to rural settlements throughout Russia with populations of 250 to 500 people, through Rostelecom at the expense of the universal service provision. The document does not specify what is meant by "modern communication services", but sources close to the Ministry of Communications and the state operator explain its intention as connecting villages to the wired internet. The budget comes, among other sources, from the Universal Service Fund.
The Ministry of Communications and Mass Media is responsible for establishing and enforcing state policy in the sphere of electronic and postal communications, for promulgating the development and introduction of new information and communication technologies, and for coordinating the work of other state agencies in this area. Legislative oversight is exercised mainly through the State Duma Committee for mass media. The Committee develops mass media-related draft laws, and provides expert analysis of laws submitted by other Duma committees regarding their compliance with current media law.
The Universal Service Fund finances socially important projects, for example the provision of payphones in remote settlements. It consists of contributions from all Russian operators amounting to 1.2% of revenue. The Federal Communications Agency (Rossvyaz) distributes these funds among 21 universal-service operators: the operators' contributions go to the budget, and Rossvyaz pays compensation from the budget, with the two amounts, according to employees of the relevant departments, having roughly coincided. The universal-service operators have recently complained, however, that they lack the money to compensate for the losses incurred in implementing these social projects.
In February 2014, Russian President Vladimir Putin signed amendments to the federal law "On Communications" which made Rostelecom the single operator of universal communication services. The company must commit itself to supporting the existing infrastructure of universal service, including payphones and internet access points (VRM). In addition to these duties, the single operator will also fight the digital divide by providing broadband at speeds of at least 10 Mbit/s to settlements with 250 to 500 residents.
Telephones – main lines in use: 32.277 million (2016)
Telephones – mobile cellular: 229.126 million (2016)
The telephone system employs an extensive set of modern network elements such as digital telephone exchanges, mobile switching centres, media gateways and signalling gateways at the core, interconnected by a wide variety of transmission systems using fibre optics or microwave radio relay networks. The access network, which connects the subscriber to the core, is highly diversified, with different copper-pair, optical-fibre and wireless technologies; cellular services, both analog and digital, are available in many areas. In rural areas, telephone services are still outdated, inadequate and of low density.
The Tsarist government of Russia issued its first decree on the development of urban telephone networks in 1881 and, as already discussed, the first exchanges in the Empire opened the following year. Initially, telephone exchanges were granted to private developers as concessions in the major cities, but in 1884 the government began to construct the first of its own exchanges and subsequently suspended the award of new concessions. Intercity telephone communications grew very slowly, with only a dozen lines in place by the start of the 20th century, most serving Moscow-Saint Petersburg traffic. After 1900, when the initial concessions had expired, the government eased control over private concessionaires and a burst of new construction took place. Included in the expansion during this period was the slow growth of exchanges built and operated by rural "Zemstva", which were treated essentially as private concessionaires by the Imperial government.
Telephones played a significant role during the upheavals of 1917. In February, according to the last tsarist Chief of Police, 'neither the military authorities nor the mutineers thought of occupying the Telephone Exchange'; consequently it continued to function, serving both sides, until the operators finally left their positions amidst the growing confusion. In early July, however, the Provisional Government, fearing a Bolshevik coup, reportedly ordered the central telephone exchange to boycott calls requested by Bolsheviks (automatic switching systems had not yet been introduced).
In 1918, when the Soviet government moved to Moscow and war conditions were producing extreme shortages, Sovnarkom ordered a reduction of 50% in the volume of telephone communications in the new capital, to ensure that official needs of the new government would be served. The primary consequence of this decree for individuals was the 'communalisation' of telephones in private houses and flats. According to the decree, restrictions were focused on the 'parasitic stratum' of society, in the interest of the 'working population'. With the exception of personal phones belonging to high government officials, doctors and midwives, telephones in private flats were placed at the disposal of 'house committees', to be made available for 'general use' free of charge. Houses without telephones were entitled to free use of the communal phone of a neighbouring house; the decree further ordered the immediate installation of at least 150 telephones in public squares, particularly in outlying regions.
One year later Sovnarkom nationalized all telephone systems in the Russian Republic, including all intercity, urban, concessionary and zemstvo exchanges, and assigned their administration and operation to the People's Commissariat for Posts and Telegraphs of the RSFSR. Beginning with the nationalization of telephones in 1919, Soviet policy exhibited two main characteristics: telephones increasingly became instruments for the bureaucracy and bureaucrats, and telephones in general were accorded a low investment priority. In March 1920, for instance, government institutions were exempted from the telephone tariff, receiving the right to use the telephone without payment, albeit for sharply restricted periods.
Until the end of 1991 (the end of the USSR), the sole fixed-line telephone operator in the country was the Ministry of Communications of the USSR, and the state owned the entire telecommunications structure and access networks. In 1994, the investment communication company OJSC "Sviazinvest" was established by Presidential Decree No. 1989 of 10 October 1994, "On the specific features of the state management of the electric communication network for public use in the Russian Federation". The authorized capital of OJSC "Sviazinvest" was formed by consolidating the federal shares of joint stock companies acting in the area of electric communications that had been established during the privatization of the state electric communications enterprises. The seven regional incumbents making up Svyazinvest, majority-owned by the government, merged in early 2011 with the key subsidiary Rostelecom. The move created an integrated company based on Rostelecom, better placed to exploit economies of scale in the coming years.
Cross-country digital trunk lines run from Saint Petersburg to Vladivostok, and from Moscow to Novorossiysk.
Liberalization of the long-distance communication market is another market driver. In January 2006, Russia passed a new law on long-distance telecommunications which partially broke up Rostelecom's monopoly in the toll market; the law now allows other carriers to operate toll services. Currently there are about 32 active companies in this space, including Interregional TransitTelekom (MTT), Golden Telecom, TransTeleCom and Synterra Media. The share of fixed-line business held by Rostelecom's main competitors varied in 2012 from 6% (Megafon) to 19% (MTS). Still, at the beginning of the 2010s Rostelecom was de facto a monopoly provider of local telephony to households in Russia, except for a few regions where the incumbents were not part of the Svyazinvest holding after the privatization of the early 1990s (the cities of Moscow, Pskov and Kostroma, the republics of Tatarstan and Bashkortostan, as well as Tuva, Chukotka, Chechnya and Ingushetia).
The substitution of long-distance fixed-line voice services by mobile and IP traffic sped up after 2008, when mobile operators shifted into the fixed-line segment (Vimpelcom was the first of the Big 3 to do so, acquiring Golden Telecom in early 2008) and simultaneously increased investment in their own trunk network infrastructure to support rapid 3G traffic growth. In February 2014, Megafon, through its subsidiary NetByNet, purchased Tele-MIG, a company founded in 2003 which provides fixed telephony, IP telephony and data transmission in the Yamalo-Nenets Autonomous Okrug.
Russian regulation stipulates that new players must build their own networks. The growth of traffic between Europe and Asia is an additional opportunity; more than 6,000 km of international communication cables were built during the first nine months of 2007, representing a 48.5% increase on 2006, according to the Russian Ministry of Communication and Mass Media.
Tariffs in the fixed-line segment are determined by the Federal Tariff Service on an annual basis, taking into consideration inflation and the operators' expenses. Price competition in the long-distance segment increased as mobile operators began implementing promotional tariffs to stimulate voice traffic growth after the crisis (long-distance traffic is predominantly generated by corporate clients). At the same time, traditional operators had limited room for maneuver, as intra-zonal and domestic long-distance tariffs, which are subject to government regulation, remained flat over the last three years. As a result, mobile operators managed to bite off a large share of the intraregional and long-distance market from traditional fixed-line operators, above all the regional operators of Svyazinvest, which are now united under Rostelecom.
The Russian public switched telephone network (PSTN) has specific features. The lowest level of the model is the local network of a middle-sized or large city. Each central office (CO) is connected to a tandem exchange (TE); in some cases COs are also connected to each other directly, and a CO may be connected directly to the toll exchange. A private automatic branch exchange (PABX) is served by the nearest CO. The TEs form a meshed network. Up to the 1990s, the TE was an independent element of the local network: operators did not use equipment combining the functions of tandem and toll exchanges, so the TE provided connections between the COs of the local network and access to the toll exchange. The function of the toll exchange is to establish connections for long-distance and international calls, the latter being served by a gateway (GW). Processing of local calls is performed by the COs and TEs. If a subscriber dials the digit "8" (the long-distance prefix in the national PSTN), all further processing of the call is a function of the toll exchange. The numbering plan for cellular networks is based on an area code (three digits) plus the number of the mobile terminal (seven digits); in this case the area code identifies the specific cellular network.
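As an illustration of the dialing logic described above, here is a minimal, hypothetical sketch of classifying a dialed string; the function name and error handling are illustrative assumptions, not part of any real switching software.

# Hypothetical sketch of the dialing logic described above: the digit "8" is
# the national long-distance prefix, followed by a three-digit area code and
# a seven-digit terminal number. Purely illustrative.
def classify_dialed_number(digits: str) -> dict:
    digits = digits.strip()
    if digits.startswith("8"):
        rest = digits[1:]
        if len(rest) != 10:
            raise ValueError("expected 3-digit area code + 7-digit number after '8'")
        # Long-distance: handled by the toll exchange (or gateway, if international).
        return {"type": "long-distance", "area_code": rest[:3], "terminal": rest[3:]}
    # No "8" prefix: the call stays within the local CO/TE network.
    return {"type": "local", "terminal": digits}

print(classify_dialed_number("89161234567"))  # long-distance, area code 916
print(classify_dialed_number("1234567"))      # local call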
There are four mobile phone service brands that cover all of Russia: Beeline, MegaFon, Mobile TeleSystems and Tele2. At the end of 2013 there were about 239 million SIM cards in use in the country, equal to 168% of the population. Access points (APs) are built into the long-distance telephone exchanges (LDTEs) of Russia's fixed-line infrastructure, which are present in every province. As a result, an interconnecting mobile operator only needs to create "last kilometer" circuits to the regional LDTE, a requirement already imposed by its mobile license. Rostelecom, the leading fixed-line operator in the country, has regional subsidiaries that provide cellular services.
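The 168% penetration figure is simply active SIM cards divided by population; a small sketch, where the population input is an assumed round value rather than a figure from this article:

# Small sketch: SIM penetration = SIM cards in use / population.
# The population input is an assumed round value (~143 million), illustrative only.
sim_cards = 239_000_000
population = 143_000_000

penetration = sim_cards / population
print(f"{penetration:.0%}")  # -> 167%, in line with the ~168% quoted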
In May 2008, a 3G network was deployed in St. Petersburg, followed by Kazan in June and Sochi in July of that year. By 2010, 3G networks covered most of Russia.
In April 2011, MegaFon deployed high-definition voice services on its Moscow and Sochi GSM and UMTS networks. As the key supplier of core and access networks to MegaFon, Nokia Siemens Networks was responsible for the HD voice implementation, which is also a world first for a commercial GSM network.
In early 2011, Rostelecom signed a memorandum of understanding with the three main MNOs to develop a joint LTE network using the infrastructure to be built by Yota. The network will expand LTE availability to 70 million Russians in 180 cities by 2014, vastly improving regional broadband availability in coming years.
In December 2011, Rostelecom signed an agreement with Yota, a Russian mobile broadband provider, to jointly develop and use 4G wireless networks. The agreement facilitated the development and expansion of advanced communications technologies in the country, including the latest 4G LTE system. Both companies will make full use of each other's telecommunications infrastructure, and advanced telecommunications services will be made more accessible to Russian residents. As part of the agreement, Rostelecom has the right to use Yota's wireless networks and to provide customers with telecommunications services as an MVNO. The agreement also provides Rostelecom with access to Yota's existing telecommunications equipment sites and its wire communications channels at these sites. In return, Yota will use Rostelecom's wire communications channels at its telecommunications equipment sites and will gain access to Rostelecom's Internet connection and inter-city backbone links, as well as the company's existing telecommunications equipment sites and data centres.
In September 2012, MTS launched the country's first TD-LTE network, using the TD-LTE spectrum in the 2595-2620 MHz band it secured in February. In May 2013, there were over one million LTE subscribers in Russia.
Rostelecom, the largest fixed-line operator and former monopoly, together with its subsidiary T2-Mobile, provides mobile services in 65 regions of Russia, serving more than 36.5 million subscribers. During the 2010s, Rostelecom and Tele2 built third-generation mobile networks in 40 regions of Russia, with more than 8,000 base stations planned in total. The suppliers of equipment and solutions for the 3G+ network are Ericsson and Huawei.
Radio Rossii is the primary public radio station in Russia. Digital radio broadcasting is developing fast: on 1 July 2004 the Voice of Russia announced the successful implementation, and planned expansion, of its DRM broadcasts on shortwave and mediumwave. In September 2009, the Russian State Commission for Radio Frequencies, the national regulator of broadcasting, decided on DRM as the standard for mediumwave and shortwave services.
Radios: 61.5 million (1998)
Radio broadcasting stations: AM 420, FM 447, shortwave 56 (1998).
Privately owned stations often belong to industrial groups that are either controlled by the state or closely connected to the government, so that they can be called semi-state. Both state and private stations can have national status (broadcasters that reach over 70% of the national territory) or regional, district or local status. Local partners are often united into bigger networks.
In the 1970s and 1980s, television became the preeminent mass medium. In 1988 approximately 75 million households owned television sets, and an estimated 93 percent of the population watched television. Moscow, the base from which most of the television stations broadcast, transmitted some 90 percent of the country's programs, with the help of more than 350 stations and nearly 1,400 relay facilities.
There are about 15,000 TV transmitters. The development of domestic digital TV transmitters, conducted within the "Multichannel" research program, has been completed; new domestic digital transmitters were developed and installed in Nizhny Novgorod and Saint Petersburg in 2001–2002.
The state television broadcasters are Pervy kanal (Channel One); VGTRK (channels: Rossiya 1, Rossiya 2, Rossiya K, Rossiya 24, and Carousel together with Channel One); TV Tsentr (owned by the administration of the city of Moscow); Telekanal Zvezda (owned by the Ministry of Defence); and TV-Novosti (the RT channel in English, the Rusiya Al-Yaum channel in Arabic, the RT America channel based in Washington, D.C., United States, in English, the RT Actualidad channel in Spanish, and the RT Documentary channel in Russian).
Broadband internet access is becoming more readily available in Russia, and as a result the internet is growing as an avenue for Russian commerce, with 42% of internet users in Russia shopping online, and 38% using online banking services.
IPTV is developing fast as a cheap alternative to regular television. In July 2011, Rostelecom started a plan to unify IPTV services across Russia's regions, offering standard features such as linear and on-demand TV along with new interactive and OTT services delivered by the operator to various mobile devices. The Russian company SmartLabs was chosen for this project.
Country code top-level domain: RU (also SU, left over from the Soviet Union)
Russia is connected internationally by three undersea fiber-optic cables; digital switches in several cities provide more than 50,000 lines for international calls; and satellite earth stations provide access to Intelsat, Intersputnik, Eutelsat, Inmarsat and Orbita. Rostelecom has set up international fiber-optic communication lines providing access to Finland, Turkey, Italy, Bulgaria, Japan, China, Estonia, Latvia, Kazakhstan, Ukraine, Azerbaijan, Georgia and Belarus. The company's international points of presence are in Stockholm, Frankfurt, Amsterdam and London. Owing to its connections to both Europe and Asia, Russia offers high-speed transit services from Europe to Asia via Russian territory. Rostelecom's international digital transit telephone network is based on ten international transit and communication centers and six combined communication centers. The total installed capacity of the zonal network at the end of 2011 was 1,100,600 channels, and the digitalization of the international communication centers was 100%.
In May 2006, Rostelecom launched a new fiber-optic data transmission line linking Russia's Far Eastern cities of Belogorsk and Blagoveshchensk with the Chinese city of Heihe on the Chinese-Russian border. The same month, TransTeleCom and North Korea's Ministry of Communications signed an agreement for the construction and joint operation of a fiber-optic transmission line (FOTL) in the section of the Khasan–Tumangang railway checkpoint, the first direct land link between Russia and North Korea. TTC's partner in the design, construction and connection of the communication line from the Korean side to the junction was the Korea Communication Company of North Korea's Ministry of Communications. The line was built around STM-1-level digital equipment, with the possibility of further increasing bandwidth. Construction was completed in 2007.
In 2011, Rostelecom came to an agreement with Mongolian operator Mobicom aimed at establishing a Russia-Mongolia border-crossing transmission line and at providing telecommunications services. It also opened a new international Kaliningrad-Poland transmission line through the Poland–Russia border to optimize costs when providing services to end users and operators in Kaliningrad.
In February 2012, the national operator Rostelecom selected TeliaSonera International Carrier to operate and manage its new backbone network between Kingisepp, Russia and Stockholm. The next-generation managed optical network provides connectivity between the cable landing points of the Baltic Cable System, Kingisepp and Kotka, implemented over TeliaSonera International Carrier's wholly owned fibre-optic infrastructure to Stockholm.
In September 2013, the EPEG international cable system, of which Russia is a member, entered commercial use. Its main line connects Western Europe and the Middle East through Russia, running from Frankfurt across Eastern Europe, Russia, Azerbaijan, Iran and the Persian Gulf to Muscat, the capital of Oman, with an initial capacity of 540 gigabits per second. The total length of the new cable system is about 10,000 kilometers, and its design capacity is up to 3.2 terabits per second. Vodafone organized a trunk line connecting Europe through Ukraine to the border with Russia; from the Russian-Ukrainian border to the border with Azerbaijan, and through Azerbaijan to the Iranian border, the line was built by Rostelecom together with its Azerbaijani partner Delta Telecom.
In 2015, the Transarctic Russian optical cable system (ROTAX) is due to be completed. The fiber-optic cable is to run from Bude (UK) through Murmansk, Anadyr and Vladivostok in Russia and finish at Tokyo. The total length of the cable system will be about 16,000 km, with a system capacity of 60 Tbit/s. The ROTAX project was initiated by JSC "Polarnet Project" and is being built by Tyco Electronics SubCom.
In late 2012, Russia's leading telecom companies Rostelecom, MTS, Vimpelcom and Megafon signed a memorandum to jointly build and operate a submarine fiber-optic cable connecting the town of Okha on Sakhalin Island with the mainland towns of Magadan and Petropavlovsk-Kamchatsky. The capacity of the underwater cable will amount to 8 Tbit/s (80 × 100 Gbit/s), with a total line length of around 2,000 km.
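The capacity figure quoted above is straightforward DWDM arithmetic: 80 wavelengths at 100 Gbit/s each give 8 Tbit/s; a minimal check:

# Minimal check of the DWDM capacity arithmetic quoted above:
# 80 optical channels (wavelengths) at 100 Gbit/s each.
wavelengths = 80
rate_gbit_per_s = 100

total_tbit_per_s = wavelengths * rate_gbit_per_s / 1000
print(f"{total_tbit_per_s} Tbit/s")  # -> 8.0 Tbit/s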
At the end of 2013, Rostelecom completed the deployment of the Tynda–Yakutsk fiber line, which according to the company provides network redundancy, optimizes traffic and increases trunk capacity in the Tynda–Skovorodino–Khabarovsk area. The 1,056-km, 80 Gbit/s link is based on DWDM technology, and its capacity can be expanded to 3.2 Tbit/s in the future. The new backbone increased the capacity of telecommunications links in Yakutsk, Aldan and Neryungri, as well as Nizhny Bestyakh, Kachikatsy, Nizhny Kuranakh, Bolshoy Khatymi and Yengra.
In December 2010, then President Dmitry Medvedev signed a presidential decree enabling the implementation of a single number, 112, for emergency services in all the regions of Russia. Transition to the new emergency number will be gradual; it is envisaged that 112 will replace the previous emergency numbers 01, 02, 03 and 04 by 2017. In December 2012, Russian President Vladimir Putin signed a law establishing the single emergency service number 112 throughout the country. In a press conference in December 2013, Minister of Emergency Situations Vladimir Puchkov said that the unified system will be running in a full pilot mode from 2014 and will fully enter to operational mode in 2016.
| https://en.wikipedia.org/wiki?curid=25706
Transport in Russia
The transport network of the Russian Federation is one of the world's most extensive transport networks. The national web of roads, railways and airways stretches almost from Kaliningrad in the west to the Kamchatka Peninsula in the east, and major cities such as Moscow and Saint Petersburg are served by extensive rapid transit systems.
Russia has adopted two national transport strategies in recent years. On 12 May 2005, the Russian Ministry of Transport adopted the Transport Strategy of the Russian Federation to 2020. Three years later, on 22 November 2008, the Russian government adopted a revised strategy, extending to 2030.
The export of transport services is an important component of Russia’s GDP. The government anticipates that between 2007 and 2030, the measures included in its 2008 transport strategy will increase the export of transport services to a total value of $80 billion, a sevenfold increase on its 2008 value. Foreign cargo weight transported is expected to increase from 28 million tonnes to 100 million tonnes over the same period.
Russia has the world's third-largest railway network by total track length (as of 2011), behind only the United States and China. Most of the network uses the Russian broad gauge of 1,520 mm, while a narrow gauge of 1,067 mm is used on a 957-km (595-mile) stretch of railway on Sakhalin Island. Electrified track accounts for around half of the Russian railway network but carries the majority of railway traffic.
Russian Railways, the state-owned national rail carrier, is one of the world's largest transport companies, enjoying a monopoly over rail transport in Russia. Established in 1992, it employs an estimated 950,000 people, and accounted for 2.5% of the entire national GDP in 2009. In 2007 alone, Russian Railways carried a total of 1.3 billion passengers and 1.3 billion tons of freight on its common-carrier routes.
There is also a metrotram system in Volgograd, and metro systems are under construction in three more cities. The voltages of the electrification systems are not necessarily compatible.
As of 2006, Russia had 933,000 km of roads, of which 755,000 km were paved. Some of these make up the Russian federal motorway system. Given the country's large land area, its road density is the lowest of all the G8 and BRIC countries.
The state of Russia's road system ranks 136th out of 144 countries evaluated. Rustam Minnikhanov, the president of Tatarstan and head of the State Council working group on roads, told a Novosibirsk meeting that 53 percent of federal highways and 63 percent of regional ones are substandard, and that the situation is growing worse: every year the number of cars in Russia rises by six percent, but the highway system expands by only 2,200 kilometers. The Kremlin leader blamed this on corruption, the lack of oversight, and the failure to update standards set 30 years ago. According to the Russian Federal State Statistics Service, the road network expanded by 504,000 kilometers between 2003 and 2015, though this is largely due to the registration of previously ownerless roads.
Road safety in Russia is poor, with a road accident rate higher than in Europe or the United States. In 2011, Russia was 4th by number of absolute recorded road deaths. Increasingly harsher penalties for traffic violations were imposed after 2008, but the level of corruption among traffic law enforcement authorities limits their effectiveness in reducing the number of accidents. Dashcams are widespread, inasmuch as Russian courts prefer video evidence to eyewitness testimony, but also as a guard against police corruption and insurance fraud.
After World War II, trucks and buses were imported from the socialist countries of Eastern Europe: Ikarus urban, intercity and tourist buses; Skoda buses and trucks; Industriewerke Ludwigsfelde and Robur trucks; Tatra, LIAZ, Praga V3S, Csepel and Avia trucks; and ZSD Nysa passenger vans and Zuk cargo vans. During the late 1950s ÖAF trucks were imported from the West, and Berliet T60 dump trucks were imported in 1969 to open the mine and ore-processing plant of Ai in Orenburg Oblast. Tractor units from Volvo and Mercedes-Benz (NG) were imported during the 1970s for the road-transport organization Sovtransavto. Unic-Fiat tractor units were imported in the mid-1970s for the port of Leningrad, and Unit Rig and International Harvester Paystar dump trucks and cement mixers were used for the construction of irrigation canals from 1979 to 1983. Faun ballast tractors were imported from 1970 to the 1980s, and Komatsu dump trucks began to be imported in 1979. Magirus bonneted flatbed trucks and dump trucks were used from 1975 for the construction of the Baikal–Amur Mainline (BAM).
By the 1980 Summer Olympics in Moscow, priority was given to smaller cars (such as the Mercedes-Benz S-Class W116) as police cars, taxis and vans. However, most vehicles were Soviet-made cars: Moskvitch, GAZ-M20 Pobeda, GAZ, ZiL, VAZ, Izh and ZAZ automobiles, UAZ and LuAZ jeeps, RAF and ErAZ vans, GAZ, Kamaz, ZiL, MAZ, KrAZ, UralAZ, BelAZ and KAZ (Colkhides) trucks, KAvZ, PAZ, LiAZ and LAZ buses and ZiU trolleybuses.
In 1988, the free sale of trucks and buses was permitted. Since the 1990s, many new and used cars have been imported. During the 2000s, foreign companies began to build factories in Russia or enter into agreements with existing assembly plants.
Currently, European and Asian parts of Russia have different fleets. European Russia primarily contains Russian, European, Japanese, American and Chinese cars and trucks; the Asian side contains used vehicles from the Japanese domestic market, concentrated in Vladivostok. The largest share of Russian auto brands is in the North Caucasus regions of Dagestan and Chechnya.
GAZelle "marshrutkas" and Ford Transit, Peugeot Boxer, Fiat Ducato, Renault Master, Iveco Daily, Mercedes-Benz Sprinter and Volkswagen Crafter vans and Russian (PAZ), Ukrainian (Bogdan, South Korean (Hyundai County) and Chinese (BAW) minibuses, painted in one color, are used as share taxis. City buses are primarily the Russian (PAZ, KAvZ, LiAZ, MARZ, NefAZ, Volzhanin) and Belarusian MAZ. European buses are used in Vladivostok (51 MAN A78 Lion's City LE buses, Moscow (one Mercedes-Benz Turk O345 Connecto LF, four Ikarus 435, 71 Scania OmniLink assembled in Russia and one MAN A23 Lion's City GL), Kolomna (16 Mercedes-Benz Turk O345 Connecto H and one Mercedes-Benz Türk O345 Conecto LF) and St. Petersburg (16 MAN Lion's Classic and 52 buses Scania OmniLink buses). Other cities run new Chinese and used German, Swedish, Finnish and Dutch buses. In July 2014, Prime Minister Dmitry Medvedev issued a decree banning foreign technical purchases (including public transport) for state and municipal needs. Intercity buses are Chinese, Korean and Russian, and large companies are buying European buses.
Grey market vehicles, such as the Ford Mustang, Lincoln Town Car, Ford F-Series, Dodge Viper, Toyota Sienna, Toyota 4Runner, Acura, Toyota Highlander, Toyota Venza, Infiniti, Chevrolet Corvette and Chevrolet Camaro, are sold by special dealers. Grey-market US trucks include Freightliner, International, Peterbilt and Volvo. In late 2013 International began selling a Russian version of the International ProStar tractor, and sales of Western Star 6900XD dump trucks were scheduled to begin in 2014.
According to the Russian Federal State Statistics Service, in 2013 the number of individually-owned cars per 1,000 of population was 304.1 in the Ural Federal District, 312.6 in Sverdlovsk Oblast, 202.5 in the North-West Federal District, 345.3 in Pskov oblast, 298.5 in the Far Eastern Federal District, 484.8 in Kamchatka Krai, 284.6 in the Central Federal District, 340.5 in the Belgorod Oblast, 274.3 in the Southern Federal District (289.5 in Krasnodar Krai), 261.8 in the Siberian Federal District (292.5 in the Republic of Khakassia and Novosibirsk Oblast), 258 in the Volga Federal District (298.1 in Orenburg Oblast) and 197 in the North Caucasian Federal District (267.2 in Stavropol Krai). The regions with the greatest car ownership are Kamchatka Krai in Asiatic Russia (484.8) and Belgorod Oblast in European Russia (340.5). Those with the least are Chukotka Autonomous Okrug in Asiatic Russia (73.1) and the Republic of Ingushetia in European Russia (130.0).
According to data from the Maritime Board ("Morskaya Kollegiya") of the Russian Government, in 2004 136.6 million tons of cargo were carried over Russia's inland waterways, for a total cargo transportation volume of 87,556.5 million ton-km. In the same year, 53 companies were engaged in carrying passengers over Russia's inland waterways; they transported 22.8 million passengers, for a total river passenger transportation volume of 841.1 million passenger-km.
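For scale, these two figures imply an average cargo haul length (a derived number, not stated in the source):

87,556.5 million ton-km ÷ 136.6 million tons ≈ 641 km per ton of cargo carried.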
Russia's principal seaports, grouped by basin:
Sea of Azov and Black Sea: Novorossiysk, Rostov-on-Don, Sochi, Tuapse, Yeysk.
Baltic Sea: Baltiysk, Kaliningrad, Primorsk, St. Petersburg, Vyborg, Vysotsk.
Arctic: Arkhangelsk, Dudinka, Igarka, Murmansk, Tiksi, Vitino.
Pacific: Kholmsk, Magadan, Nakhodka, Vostochny Port, Nevelsk, Petropavlovsk-Kamchatsky, Vanino, Vladivostok.
Caspian Sea: Astrakhan, Makhachkala.
Russia is home to the world's longest oil pipeline, the Druzhba pipeline, which forms one of the biggest oil pipeline networks in the world. It carries oil from the eastern part of European Russia to points in Ukraine, Belarus, Poland, Hungary, Slovakia, the Czech Republic and Germany, and it branches out into numerous pipelines to deliver its product throughout Eastern Europe and beyond. The name "Druzhba" means "friendship", alluding to the fact that the pipeline supplied oil to the energy-hungry western regions of the Soviet Union, to its "fraternal socialist allies" in the former Soviet bloc, and to western Europe. Today, it remains the principal artery for the transportation of Russian (and Kazakh) oil across Europe.
On 29 October 2012, President Vladimir Putin instructed the general manager of Gazprom to start construction of the Power of Siberia gas pipeline to China. On 21 May 2014, Russia and China signed a 30-year gas deal that was needed to make the project feasible. Construction was launched on 1 September 2014 in Yakutsk by President Putin and Chinese Vice Premier Zhang Gaoli.
As of 2002, there were 2,743 airports in Russia.
Since 2013, the Russian government subsidizes about 140 domestic air routes covering 12 airports. The subsidies are managed by Rosaviatsia and cover the Crimea, Kaliningrad and Far East regions of Russia.
Aircraft manufacturing is an important industrial sector in Russia, employing around 355,300 people. The dissolution of the Soviet Union led to a deep crisis for the industry, especially for the civilian aircraft segment. The situation started improving during the middle of the first decade of the 2000s due to growth in air transportation and increasing demand. A consolidation programme launched in 2005 led to the creation of the United Aircraft Corporation holding company, which includes most of the industry's key companies. According to the Federal State Statistics Service of the Russian Federation, as of 2012, there were 6,200 civil aircraft in Russia.
"Total:"
630 BD IS ON
"over 3,047 m:"
54
"2,438 to 3,047 m:"
202
"1,524 to 2,437 m:"
108
"914 to 1,523 m:"
115
"under 914 m:"
151 (1994 est.)
"Total:"
1,887
"over 3,047 m:"
25
"2,438 to 3,047 m:"
45
"1,524 to 2,437 m:"
134
"914 to 1,523 m:"
291
"under 914 m:"
1,392 (1994 est.) | https://en.wikipedia.org/wiki?curid=25707 |
Foreign relations of Russia
The foreign relations of the Russian Federation are the policy of the government of Russia by which it guides the interactions with other nations, their citizens and foreign organizations. This article covers the foreign policy of the Russian Federation since the dissolution of the Soviet Union in late 1991.
Following the dissolution of the Soviet Union, Russian foreign policy is seen as being born from the conflict between three rival schools: Atlanticists, seeking a closer relationship with the United States and the Western World in general; Imperialists, seeking a recovery of the semi-hegemonic status lost during the previous decade; and Neo-Slavophiles, promoting the isolation of Russia within its own cultural sphere. While Atlanticism was the dominant ideology during the first years of the new Russian Federation, under Andrei Kozyrev, it came under attack for its failure to defend Russian preeminence in the former USSR. The promotion of Yevgeny Primakov to Minister of Foreign Affairs marked the beginning of a more nationalistic approach to foreign policy.
Vladimir Putin served as president from January 2000 until May 2008 and again from 2012. In international affairs, Putin made increasingly critical public statements regarding the foreign policy of the United States and other Western countries. In February 2007, at the annual Munich Conference on Security Policy, he criticised what he called the United States' monopolistic dominance in global relations, and pointed out that the United States displayed an "almost uncontained hyper use of force in international relations". He said the result was that "no one feels safe! Because no one can feel that international law is like a stone wall that will protect them. Of course such a policy stimulates an arms race."
Putin proposed certain initiatives such as establishing international centres for the enrichment of uranium and prevention of deploying weapons in outer space. In a January 2007 interview Putin said Russia is in favour of a democratic multipolar world and of strengthening the system of international law.
While Putin is often characterised as an autocrat by the Western media and some politicians, his relationships with former U.S. President George W. Bush, former Brazilian President Luiz Inácio Lula da Silva, former Venezuelan President Hugo Chávez, former German Chancellor Gerhard Schröder, former French President Jacques Chirac, and former Italian Prime Minister Silvio Berlusconi are reported to be personally friendly. Putin's relationship with Germany's new Chancellor, Angela Merkel, is reported to be "cooler" and "more business-like" than his partnership with Gerhard Schröder, who accepted a job with a Russian-led consortium after vacating office.
In the wake of the 11 September attacks on the United States, he agreed to the establishment of coalition military bases in Central Asia before and during the US-led invasion of Afghanistan. Russian nationalists objected to the establishment of any US military presence on the territory of the former Soviet Union, and had expected Putin to keep the US out of the Central Asian republics, or at the very least extract a commitment from Washington to withdraw from these bases as soon as the immediate military necessity had passed.
During the 2002–2003 Iraq disarmament crisis, Putin opposed Washington's move to invade Iraq without a United Nations Security Council resolution explicitly authorizing the use of military force. After the official end of the war was announced, American president George W. Bush asked the United Nations to lift sanctions on Iraq. Putin supported lifting the sanctions in due course, arguing that the UN commission should first be given a chance to complete its work on the search for weapons of mass destruction in Iraq.
In 2005, Putin and former German Chancellor Gerhard Schröder negotiated the construction of a major gas pipeline over the Baltic exclusively between Russia and Germany. Schröder also attended Putin's 53rd birthday in Saint Petersburg the same year.
The Commonwealth of Independent States (CIS), seen in Moscow as its traditional sphere of influence, became one of the foreign policy priorities under Putin, as the EU and NATO have grown to encompass much of Central Europe and, more recently, the Baltic states.
During the 2004 Ukrainian presidential election, Putin twice visited Ukraine before the election to show his support for Ukrainian Prime Minister Viktor Yanukovych, who was widely seen as a pro-Kremlin candidate, and he congratulated him on his anticipated victory before the official election returns were in. Putin's personal support for Yanukovych was criticized as unwarranted interference in the affairs of a sovereign state ("See also The Orange revolution"). Crises also developed in Russia's relations with Georgia and Moldova, both former Soviet republics accusing Moscow of supporting separatist entities in their territories.
Russia's relations with the Baltic states also remain tense. In 2007, Russo-Estonian relations deteriorated further as a result of the Bronze Soldier controversy.
Putin took an active personal part in promoting the Act of Canonical Communion with the Moscow Patriarchate signed 17 May 2007 that restored relations between the Moscow-based Russian Orthodox Church and Russian Orthodox Church outside Russia after the 80-year schism.
In his annual address to the Federal Assembly on 26 April 2007, Putin announced plans to declare a moratorium on Russia's observance of the Treaty on Conventional Armed Forces in Europe (CFE) until all NATO members ratified it and started observing its provisions, as Russia had been doing on a unilateral basis. Putin argued that, as new NATO members had not even signed the treaty, the imbalance in the presence of NATO and Russian armed forces in Europe created a real threat and an unpredictable situation for Russia. NATO members said they would refuse to ratify the treaty until Russia complied with its 1999 commitments made in Istanbul, whereby Russia was to remove troops and military equipment from Moldova and Georgia. Russian Foreign Minister Sergey Lavrov was quoted as saying in response that "Russia has long since fulfilled all its Istanbul obligations relevant to CFE". Russia suspended its participation in the CFE at midnight Moscow time on 11 December 2007. On 12 December 2007, the United States officially said it "deeply regretted the Russian Federation's decision to 'suspend' implementation of its obligations under the Treaty on Conventional Armed Forces in Europe (CFE)." State Department spokesman Sean McCormack, in a written statement, added that "Russia's conventional forces are the largest on the European continent, and its unilateral action damages this successful arms control regime." NATO's primary concern arising from Russia's suspension is that Moscow could now build up its military presence in the Northern Caucasus.
The months following Putin's Munich speech were marked by tension and a surge in rhetoric on both sides of the Atlantic. Speaking at the Victory Day anniversary, Vladimir Putin said that "these threats are not becoming fewer but are only transforming and changing their appearance. These new threats, just as under the Third Reich, show the same contempt for human life and the same aspiration to establish an exclusive dictate over the world." This was interpreted by some Russian and Western commentators as comparing the U.S. to Nazi Germany. On the eve of the 33rd Summit of the G8 in Heiligendamm, American journalist Anne Applebaum, who is married to a Polish politician, wrote: "Whether by waging cyberwarfare on Estonia, threatening the gas supplies of Lithuania, or boycotting Georgian wine and Polish meat, he [Putin] has, over the past few years, made it clear that he intends to reassert Russian influence in the former communist states of Europe, whether those states want Russian influence or not. At the same time, he has also made it clear that he no longer sees Western nations as mere benign trading partners, but rather as Cold War-style threats."
British historian Max Hastings described Putin as "Stalin's spiritual heir" in his article "Will we have to fight Russia in this Century?". British academic Norman Stone, in his article "No wonder they like Putin", compared Putin to General Charles de Gaulle. Adi Ignatius argues that "Putin... is not a Stalin. There are no mass purges in Russia today, no broad climate of terror. But Putin is reconstituting a strong state, and anyone who stands in his way will pay for it". In the same article, Hastings continues that although "a return to the direct military confrontation of the Cold War is unlikely", "the notion of Western friendship with Russia is a dead letter". Both Russian and American officials have consistently denied the idea of a new Cold War. US Secretary of Defense Robert Gates said at the Munich Conference: "We all face many common problems and challenges that must be addressed in partnership with other countries, including Russia... One Cold War was quite enough." Vladimir Putin said prior to the 33rd G8 Summit, on 4 June 2007: "we do not want confrontation; we want to engage in dialogue. However, we want a dialogue that acknowledges the equality of both parties' interests."
Putin, who publicly opposed plans for a U.S. missile shield in Europe, presented President George W. Bush with a counterproposal on 7 June 2007: sharing the use of the Soviet-era radar system in Azerbaijan rather than building a new system in Poland and the Czech Republic. Putin expressed readiness to modernize the Gabala radar station, which had been in operation since 1986. He proposed that it would then not be necessary to place interceptor missiles in Poland; interceptors could instead be placed in NATO member Turkey or in Iraq. Putin also suggested equal involvement of interested European countries in the project.
In a 4 June 2007 interview with journalists of G8 countries, when answering a question about whether Russian nuclear forces might be focused on European targets in case "the United States continues building a strategic shield in Poland and the Czech Republic", Putin admitted that "if part of the United States' nuclear capability is situated in Europe and that our military experts consider that they represent a potential threat then we will have to take appropriate retaliatory steps. What steps? Of course we must have new targets in Europe."
The end of 2006 brought strained relations between Russia and Britain in the wake of the death by poisoning of a former FSB officer in London. On 20 July 2007, UK Prime Minister Gordon Brown expelled "four Russian envoys over Putin's refusal to extradite ex-KGB agent Andrei Lugovoi, wanted in the UK for the murder of fellow former spy Alexander Litvinenko in London." The Russian constitution prohibits the extradition of Russian nationals to foreign countries. British Foreign Secretary David Miliband said that "this situation is not unique, and other countries have amended their constitutions, for example to give effect to the European Arrest Warrant".
Miliband's statement was widely publicized by Russian media as a British proposal to change the Russian constitution. According to VCIOM, 62% of Russians are against changing the Constitution in this respect. The British Ambassador in Moscow Tony Brenton said that the UK is not asking Russia to break its Constitution, but rather interpret it in such a way that would make Lugovoi's extradition possible. Putin, in response, advised British officials to "fix their heads" rather than propose changing the Russian constitution and said that the British proposals were "a relic of a colonial-era mindset".
When Litvinenko was dying from radiation poisoning, he allegedly accused Putin of directing the assassination in a statement released shortly after his death by his friend Alex Goldfarb. Critics have doubted that Litvinenko was the true author of the released statement. When asked about the Litvinenko accusations, Putin said that a statement released after its author's death "naturally deserves no comment".
The expulsions were seen as "the biggest rift since the countries expelled each other's diplomats in 1996 after a spying dispute." In response to the situation, Putin stated "I think we will overcome this mini-crisis. Russian-British relations will develop normally. On both the Russian side and the British side, we are interested in the development of those relations." Despite this, British Ambassador Tony Brenton was told by the Russian Foreign Ministry that UK diplomats would be given 10 days before they were expelled in response. The Russian government also announced that it would suspend issuing visas to UK officials and froze cooperation on counterterrorism in response to Britain suspending contacts with their Federal Security Service.
Alexander Shokhin, president of the Russian Union of Industrialists and Entrepreneurs, warned that British investors in Russia would "face greater scrutiny from tax and regulatory authorities. [And] They could also lose out in government tenders". Some see the crisis as originating in Britain's decision to grant Putin's former patron, the Russian billionaire Boris Berezovsky, political asylum in 2003. Earlier in 2007, Berezovsky had called for the overthrow of Putin.
On 10 December 2007, Russia ordered the British Council to halt work at its regional offices in what was seen as the latest round of a dispute over the murder of Alexander Litvinenko; Britain said Russia's move was illegal.
Following the Peace Mission 2007 military exercises jointly conducted by the Shanghai Cooperation Organisation (SCO) member states, Putin announced on 17 August 2007 the resumption on a permanent basis of the long-distance patrol flights of Russia's strategic bombers that had been suspended in 1992. US State Department spokesman Sean McCormack was quoted as saying in response that "if Russia feels as though they want to take some of these old aircraft out of mothballs and get them flying again, that's their decision." The announcement, made during the SCO summit in the light of joint Russian-Chinese military exercises, the first ever to be held on Russian territory, led some to believe that Putin was inclined to set up an anti-NATO bloc or an Asian version of OPEC. When presented with the suggestion that "Western observers are already likening the SCO to a military organisation that would stand in opposition to NATO", Putin answered that "this kind of comparison is inappropriate in both form and substance". Russian Chief of the General Staff Yury Baluyevsky was quoted as saying that "there should be no talk of creating a military or political alliance or union of any kind, because this would contradict the founding principles of SCO".
The resumption of long-distance flights of Russia's strategic bombers was followed by an announcement by Russian Defense Minister Anatoliy Serdyukov, during his meeting with Putin on 5 December 2007, that 11 ships, including the aircraft carrier "Kuznetsov", would take part in the first major navy sortie into the Mediterranean since Soviet times. The sortie was to be backed up by 47 aircraft, including strategic bombers. According to Serdyukov, this was an effort to resume regular Russian naval patrols on the world's oceans, a view also supported by Russian media. The military analyst Pavel Felgenhauer of "Novaya Gazeta" believes that the accident-prone "Kuznetsov" is scarcely seaworthy and is more of a menace to her crew than to any putative enemy.
In September 2007, Putin visited Indonesia and in doing so became the first Russian leader to visit the country in more than 50 years. In the same month, Putin also attended the APEC meeting held in Sydney, Australia where he met with Australian Prime Minister John Howard and signed a uranium trade deal. This was the first visit of a Russian president to Australia.
On 16 October 2007, Putin visited Tehran, Iran, to participate in the Second Caspian Summit, where he met with Iranian leader Mahmoud Ahmadinejad. Other participants were the leaders of Azerbaijan, Kazakhstan, and Turkmenistan. This was the first visit by a Kremlin leader to Iran since Joseph Stalin's participation in the Tehran Conference in 1943. At a press conference after the summit, Putin said that "all our (Caspian) states have the right to develop their peaceful nuclear programmes without any restrictions". During the summit it was also agreed that its participants, under no circumstances, would let any third-party state use their territory as a base for aggression or military action against any other participant.
On 26 October 2007, at a press conference following the 20th Russia-EU Summit in Portugal, Putin proposed to create a Russian-European Institute for Freedom and Democracy headquartered either in Brussels or in one of the European capitals, and added that "we are ready to supply funds for financing it, just as Europe covers the costs of projects in Russia". This newly proposed institution is expected to monitor human rights violations in Europe and contribute to development of European democracy.
Russian President Vladimir Putin and U.S. President George W. Bush failed to resolve their differences over the planned U.S. missile defense system based in Poland and the Czech Republic at their meeting in the Russian Black Sea resort of Sochi on 6 April 2008. Putin made clear that he did not agree with the decision to establish sites in the Eastern European countries, but said the two sides had agreed a "strategic framework" to guide future U.S.-Russian relations, in which Russia and the U.S. recognized that the era in which each had considered the other a "strategic threat or enemy" was over. Putin expressed cautious optimism that the two sides could find a way to cooperate over missile defense and described his eight-year relationship as Russian president with Bush as "mostly positive". The summit was the final meeting between Bush and Putin as presidents and followed both leaders' attendance at the NATO summit in Romania on 2–4 April 2008. That summit also highlighted differences between Washington and Moscow over U.S.-backed proposals to extend the military alliance to include the former Soviet republics of Ukraine and Georgia. Russia opposes the proposed expansion, fearing it will reduce its own influence over its neighbours.
Fareed Zakaria suggests that the 2008 South Ossetia War turned out to be a diplomatic disaster for Russia. He suggests that it was a major strategic blunder, turning neighboring nations such as Ukraine to embrace the United States and other Western nations more. George Friedman, founder and CEO of private intelligence agency Stratfor, takes an opposite view, arguing that both the war and Russian foreign policy have been successful in expanding Russia's influence.
In July 2012, Putin gave an address at a meeting with Russian ambassadors in Moscow.
The mid-2010s marked a dramatic downturn in Russian relations with the West, with some even considering it the start of a new Cold War. The United States and Russia back opposing sides in the Syrian Civil War, and Washington regards Moscow as obstructionist for its support of the Bashar al-Assad government.
In 2013, for the first time since 1960, the United States cancelled a summit with Russia after the latter granted asylum to Edward Snowden.
The greatest increase in tensions, however, came during the Ukraine crisis that began in 2014, which saw the Crimean peninsula annexed by Russia. Russia also inflamed a separatist uprising in the Donbass region, though Moscow continues to deny its involvement. The United States responded to these events by imposing sanctions on Russia, and most European countries followed suit, worried about Russian interference in the affairs of Central and Eastern Europe. In October 2015, Russia, after years of supporting the Syrian government indirectly, intervened directly in the conflict, turning the tide in favor of the Assad regime. Russia's relations with Turkey, already strained over Russian support for the Assad regime, deteriorated further during this period, especially after the Turkish Air Force shot down a Russian jet fighter on 24 November 2015. In 2015, Russia also formed the Eurasian Economic Union with Armenia, Kazakhstan, and Belarus.
The Russian government also remains bitter over the expansion of NATO into Eastern Europe, arguing that western leaders promised that NATO would not expand beyond its 1990s borders.
For decades, the dispute between Japan and Russia over the ownership of the Kuril Islands has hindered closer cooperation between the two countries, but since 2017 high level talks involving Prime Minister Shinzō Abe have been ongoing in an attempt to resolve the situation.
Russia's power on the international stage depends on its petroleum revenue. If the world completes a transition to renewable energy and international demand for Russian oil, gas and coal is dramatically reduced, Russia's international power will be reduced with it. Russia is ranked 148th out of 156 countries in the index of Geopolitical Gains and Losses after energy transition (GeGaLo).
Pew Research Center indicated that, as of 2015, only four surveyed countries held a positive view (50% or above) of Russia. The countries with the most positive views were Vietnam (75%), Ghana (56%), China (51%), South Korea (46%), Lebanon (44%), the Philippines (44%), India (43%), Nigeria (39%), Tanzania (38%), Ethiopia (37%) and Uganda (37%). Ten surveyed countries held the most negative views (below 25%): Pakistan (12%), Turkey (15%), Poland (15%), the United Kingdom (18%), Jordan (18%), Ukraine (21%), Japan (21%), the United States (22%), Mexico (24%) and Australia (24%). Russians' own view of Russia was overwhelmingly positive, at 92%.
The term "Russian aggression" has been used to refer to both Catherine the Great's 18th-century agenda and 21st-century Russian policies. In the 1990s, supporters of NATO expansion into Eastern Europe claimed it would diminish "Russian aggression".
The post-Maidan conflict in Ukraine is usually blamed on "Russian aggression".
NATO-sponsored analysts have described what they call a cybernetic "Russian aggression" against Ukraine in the 2010s.
Russia is a member of the Commonwealth of Independent States (CIS), the Union of Russia and Belarus, the Organization for Security and Cooperation in Europe (OSCE), the Paris Club, and the North Atlantic Cooperation Council (NACC). It signed the NATO Partnership for Peace initiative on 22 June 1994. On 20 May 1997, NATO and Russia signed the NATO–Russia Founding Act, which the parties hoped would provide the basis for an enduring and robust partnership between the Alliance and Russia, one that could make an important contribution to European security architecture in the 21st century, though already at the time of its signing doubts were cast on whether this accord could deliver on these ambitious goals. This agreement was superseded by the NATO–Russia Council, agreed at the Reykjavik Ministerial and unveiled at the Rome NATO Summit in May 2002. On 24 June 1994, Russia and the European Union (EU) signed a partnership and cooperation agreement. In recent years, tensions have heightened, as NATO members in Eastern Europe, especially Latvia, Lithuania, Estonia, and Poland, feel threatened by Russia. The European Union imposed sanctions on Russian businesses and individuals in 2014 over the annexation of Crimea and alleged support for separatists during the Donbass War.
The non-Russian countries that were once part of the USSR have been termed the 'near abroad' by Russians. More recently, Russian leaders have been referring to all 15 countries collectively as the "post-Soviet space", while asserting Russian foreign policy interest throughout the region. After the USSR was dissolved by the presidents of Russia, Ukraine and Belarus, Russia tried to regain some sort of influence over the post-Soviet space by creating, on 8 December 1991, a regional organization, the Commonwealth of Independent States. In the following years, Russia initiated a set of agreements with the post-Soviet states designed to institutionalize relations inside the CIS. However, most of these agreements were not fulfilled, and the CIS republics began to drift away from Russia, which at that time was attempting to stabilize its broken economy and its ties with the West.
One of the major issues influencing Russia's foreign relations in the former Soviet Union was the remaining large ethnic Russian minority populations in many countries of the near abroad. This issue has been dealt with in various ways by each individual country. These populations have posed a particular problem in countries where they live close to the Russian border, such as Ukraine and Kazakhstan, with some of these Russians calling for their areas to be absorbed into Russia. By and large, however, Russians in the near abroad do not favor active intervention by Russia in the domestic affairs of neighboring countries, even in defense of the interests of ethnic Russians. Moreover, the three Baltic states (Estonia, Latvia, and Lithuania) have clearly signaled their desire to remain outside any claimed Russian sphere of influence, as reflected by their joining both the NATO alliance and the European Union in 2004.
Close cultural, ethnic and historical links exist between Russia, Belarus and Ukraine. The traditional Russian perspective is that they are one ethnic group, with Russians called 'Great Russians', Belarusians 'White Russians' and Ukrainians 'Little Russians'. This manifested itself in lower levels of nationalism in these areas, particularly Belarus and Ukraine, during the disintegration of the Soviet Union. However, few Ukrainians accept a "younger brother" status relative to Russia, and Russia's efforts to insert itself into Ukrainian domestic politics, such as Putin's endorsement of a candidate in the 2004 Ukrainian presidential election, are contentious.
Russia maintains its military bases in Armenia, Belarus, Kyrgyzstan, Moldova, and Tajikistan.
Russia's relationship with Georgia is at its lowest point in modern history: following the Georgian–Russian espionage controversy and the 2008 Russo-Georgian War, Georgia broke off diplomatic relations with Russia and left the Commonwealth of Independent States.
Russia's relations with Ukraine, since 2013, are also at their lowest point in history as a result of the pro-Western Euromaidan revolution in Ukraine, the 2014 Crimean Crisis and the pro-Russian insurgency in Ukraine's Donetsk and Luhansk regions. Ukraine, like Georgia, has introduced a bill to the Verkhovna Rada to withdraw from the Commonwealth of Independent States, and Kiev has begun the process of doing so.
In addition, Russia maintains relations with Bulgaria, the Czech Republic, the former East Germany, Hungary, Poland, Romania and Slovakia, countries that were once part of the former Warsaw Pact, as well as with Albania. Russia also continues to maintain friendly relations with Cuba, Mongolia and Vietnam, as well as with third-world and non-aligned countries such as Afghanistan, Angola, Benin, Cambodia, Congo, Egypt, Ethiopia, Grenada, Guinea-Bissau, India, Iraq, Laos, Mozambique, Serbia, Syria and the former South Yemen.
Membership in International Organizations:
Russia holds a permanent seat, which grants it veto power, on the Security Council of the United Nations (UN). Prior to 1991, the Soviet Union held Russia's UN seat; after the breakup of the Soviet Union, the Russian government informed the United Nations that Russia would continue the Soviet Union's membership in the United Nations and all other UN organs.
Russia is an active member of numerous UN system organizations, including:
Russia also participates in some of the most important UN peacekeeping missions, including:
Russia also holds memberships in:
Russia has played an important role in helping mediate international conflicts and has been particularly actively engaged in trying to promote peace following the Kosovo conflict. Russia's foreign minister claimed on 25 February 2008 that NATO and the European Union had been considering using force to keep Serbs from leaving Kosovo following the 2008 Kosovo declaration of independence.
Russia is a co-sponsor of the Middle East peace process and supports UN and multilateral initiatives in the Persian Gulf, Cambodia, Burma, Angola, the former Yugoslavia, and Haiti. Russia is a founding member of the Contact Group and (since the Denver Summit in June 1997) a member of the G8. In November 1998, Russia joined the Asia-Pacific Economic Cooperation Forum (APEC). Russia has contributed troops to the NATO-led stabilization force in Bosnia and has affirmed its respect for international law and OSCE principles. Russia has accepted UN and/or OSCE involvement in instances of regional conflict in neighboring countries, including the dispatch of observers to Georgia, Moldova, Tajikistan, and the "de facto" Republic of Artsakh.
Russia supported, on 16 May 2007, the establishment of an international tribunal to try the suspects in the murder of the former Lebanese Prime Minister, Rafiq Hariri.
Russian Armed Forces
The Armed Forces of the Russian Federation, commonly known as the Russian Armed Forces, are the military forces of Russia, established after the dissolution of the Soviet Union.
On 7 May 1992, Boris Yeltsin signed a presidential decree establishing the Russian Ministry of Defence and placing all Soviet Armed Forces troops on the territory of the Russian Soviet Federative Socialist Republic under Russian control. The Commander-in-Chief of the Armed Forces is the President of Russia. Formed in 1992, the Russian Armed Forces are the most powerful military in Europe and one of the world's largest military forces; according to Credit Suisse, Russia has the world's second-most powerful military.
Under Russian federal law, the Armed Forces along with the Federal Security Service (FSB)'s Border Troops, the National Guard, the Ministry of Internal Affairs (MVD), the Federal Protective Service (FSO), the Foreign Intelligence Service (SVR), and EMERCOM's civil defence form Russia's military services and are under direct control of the Security Council of Russia.
Armed forces under the Ministry of Defence are divided into:
There are additionally two further "separate troop branches", the National Guard and the Border Service. These retain the legal status of "Armed Forces" while falling outside the jurisdiction of the General Staff of the Armed Forces of the Russian Federation. The National Guard is formed on the basis of the former Internal Troops of Russia; the new structure has been detached from the Ministry of Internal Affairs into a separate agency directly subordinated to the President of Russia. The Border Service is a paramilitary organization of the Federal Security Service, the country's main internal intelligence agency. Both organizations have significant wartime tasks in addition to their main peacetime activities and operate their own land, air and maritime units.
The number of personnel is specified by decree of the President of Russia. On 1 January 2008, the authorized strength was set at 2,019,629 personnel, including 1,134,800 military personnel. In 2010 the International Institute for Strategic Studies (IISS) estimated that the Russian Armed Forces numbered about 1,027,000 active troops and in the region of 2,035,000 reserves (largely ex-conscripts). As opposed to the strength specified by decree, actual personnel numbers on the payroll were reported by the Audit Chamber of Russia as 766,000 in October 2013. As of December 2016, the armed forces were at 93 percent of the required manpower, up from 82 percent reported in December 2013.
According to the Stockholm International Peace Research Institute, between 2005–2009 and 2010–2014 Russian exports of major weapons increased by 37 percent. Russia spent $66.4 billion on arms in 2015 and $69.2 billion in 2016, taking third place (after the U.S. and China). According to the Russian Defence Ministry, the share of modern weapons in the Armed Forces ranged from 26 to 48 percent across the different kinds of troops in December 2014. This rose to 30.5–70.7% as of July 2015, and the average reached 68.2 percent by the end of 2019.
The Soviet Union officially dissolved on 25 December 1991, leaving the Soviet military in limbo. For the next year and a half various attempts to keep its unity and to transform it into the military of the Commonwealth of Independent States (CIS) failed. Over time, some units stationed in the newly independent republics swore loyalty to their new national governments, while a series of treaties between the newly independent states divided up the military's assets.
Apart from assuming control of the bulk of the former Soviet Internal Troops and the KGB Border Troops, seemingly the only independent defence move the new Russian government made before March 1992 was announcing the establishment of a National Guard. Until 1995, it was planned to form at least 11 brigades numbering 3,000 to 5,000 each, with a total of no more than 100,000. National Guard military units were to be deployed in 10 regions, including Moscow (three brigades), (two brigades), and a number of other important cities and regions. By the end of September 1991, the National Guard in Moscow was about 15,000 strong, mostly consisting of former Soviet Armed Forces servicemen. In the end, President Yeltsin tabled a decree "On the temporary position of the Russian Guard", but it was never put into practice.
After the signing of the Alma-Ata Protocol on 21 December 1991, the countries of the newly formed CIS signed a protocol on the temporary appointment of Marshal of Aviation Yevgeny Shaposhnikov as Minister of Defence and commander of the armed forces on their territory, including strategic nuclear forces. On 14 February 1992, Shaposhnikov formally became Supreme Commander of the CIS Armed Forces. On 16 March 1992, a decree by Boris Yeltsin created "The Armed Forces of the Russian Federation", under the operational control of the CIS Allied High Command and the Ministry of Defence, which was headed by the President. Finally, on 7 May 1992, Yeltsin signed a decree establishing the armed forces, and he assumed the duties of Supreme Commander.
In May 1992, General Colonel Pavel Grachev became the Minister of Defence, and was made Russia's first Army General on assuming the post. By August or December 1993 CIS military structures had become CIS military cooperation structures with all real influence lost.
In the next few years, Russian forces withdrew from central and eastern Europe, as well as from some newly-independent post-Soviet republics. While in most places the withdrawal took place without any problems, the Russian Armed Forces remained in some disputed areas such as the Sevastopol naval base in the Crimea as well as in Abkhazia and in Transnistria. The Armed Forces have several bases in foreign countries, especially on territory of the former Soviet Republics.
A new military doctrine, promulgated in November 1993, implicitly acknowledged the contraction of the old Soviet military into a regional military power without global ambitions. In keeping with its emphasis on the threat of regional conflicts, the doctrine called for a smaller, lighter, and more mobile Russian military, with a higher degree of professionalism and with greater rapid-deployment capability. Such change proved extremely difficult to achieve. Under Pavel Grachev (Defence Minister from 1992 to 1996) little military reform took place, though there was a plan to create more deployable mobile forces. Later Defence Minister Rodionov (in office 1996-1997) had good qualifications but did not manage to institute lasting change. Only under Defence Minister Igor Sergeyev (in office 1997-2001) did a certain amount of limited reform begin, though attention focused upon the Strategic Rocket Forces.
Significant reforms were announced in late 2008 by Defence Minister Anatoliy Serdyukov (in office 2007-2012), and major structural reorganisation began in 2009. Key elements of the reforms announced in October 2008 included reducing the armed forces to a strength of one million by 2012 (the planned end-date had been 2016); reducing the number of officers; centralising officer training from 65 military schools into 10 "systemic" military training centres; reducing the size of the central command; introducing more civilian logistics and auxiliary staff; eliminating cadre-strength formations; reorganising the reserves; reorganising the army into a brigade system; and reorganising the air forces into an air-base system instead of regiments. On 17 October 2012, the head of the State Duma's Defence Committee told RIA Novosti that Russia planned to boost annual defence spending by 59 percent, to almost 3 trillion rubles ($83.3 billion) in 2015, up from $61 billion in 2012. "Targeted national defence spending as a percentage of GDP will amount to 3.2 percent in 2013, 3.4 percent in 2014 and 3.7 percent in 2015", Defence Committee chairman Vladimir Komoedov was quoted as saying in the committee's conclusion on the draft budget for 2013–2015.
The number of military units was also to be reduced on a set schedule.
An essential part of the military reform involves downsizing. At the beginning of the reform, the Russian Army had about 1,200,000 active personnel. The reductions fall largely among the officers, with personnel to be cut on a set schedule.
The schedule envisaged reducing the total numbers in the officer corps from 335,000 to 150,000, but in early February 2011 Defence Minister Anatoly Serdyukov announced the decision to increase the planned number of officers by 70,000, to 220,000.
The Defence Ministry of the Russian Federation serves as the administrative body of the Armed Forces. Since Soviet times, the General Staff has acted as the main commanding and supervising body of the Russian armed forces: U.S. expert William Odom said in 1998 that 'the Soviet General Staff without the MoD is conceivable, but the MoD without the General Staff is not.' However, the General Staff's role is currently being reduced to that of the Ministry's department of strategic planning, and the Minister himself, currently Sergey Shoygu, may now be gaining further executive authority over the troops. Other departments include the personnel directorate as well as the Logistical Support, Railway Troops, Signal Troops and Construction Troops. The Chief of the General Staff is currently General of the Army Valery Gerasimov.
The Russian military is divided into three services: the Russian Ground Forces, the Russian Navy, and the Russian Aerospace Forces. In addition there are two independent "arms of service": the Strategic Missile Troops and the Russian Airborne Troops. The Armed Forces as a whole are traditionally referred to as the Army ("armiya"), except in some cases when the Navy is specifically singled out.
Since late 2010 the Ground Forces, as well as the Air Forces and Navy, are distributed among four military districts: the Western Military District, Southern Military District, Central Military District, and Eastern Military District, which also constitute four Joint Strategic Commands — West, South, Central, and East. Previously, from 1992 to 2010, the Ground Forces were divided into six military districts: Moscow, Leningrad, North Caucasian, Privolzhsk-Ural, Siberian and Far Eastern, and Russia's four fleets and one flotilla were organizations on par with the Ground Forces' military districts. These six MDs were merged into the four new MDs, which now also incorporate the air forces and naval forces. There is one remaining Russian military base, the 102nd Military Base in Armenia, left over from the former Transcaucasus Group of Forces. It likely reports to the Southern Military District.
In mid-2010 a reorganisation was announced which consolidated military districts and the navy's fleets into four Joint Strategic Commands (OSC).
In 2014 the Northern Fleet was reorganized into a separate Joint Strategic Command.
Geographically divided, the five commands are:
The plan was put in place on 1 December 2010 and mirrors a reorganisation proposed by former Chief of the General Staff Army General Yuri Baluyevsky for a Regional Command East, which was not implemented. The four commands were set up by a decree of President Medvedev on 14 July 2010. In July 2011, an Operational-Strategic Command of Missile-Space Defence was also established on the basis of the former Special Purpose Command of the Russian Air Force. A Presidential decree of January 2011 named commanders for several of the new organisational structures.
Russian military command posts, according to globalsecurity.org, include Chekhov/Sharapovo south of Moscow for the General Staff and President; Chaadayevka near Penza, Voronovo in Moscow and a facility at Lipetsk, all for the national leadership; Yamantau in the Urals; and command posts for the Strategic Rocket Forces at Kuntsevo in Moscow (primary) and Kosvinsky Mountain in the Urals (alternate). It is speculated that many of the Moscow bunkers are linked by the special underground Moscow Metro 2 line.
Russian security bodies not under the control of the Ministry of Defence include the Internal Troops of the Ministry of Internal Affairs (now the National Guard of Russia's National Guard Forces Command), the Border Guard Service of Russia (part of the Federal Security Service), the Kremlin Regiment and the rest of the Federal Protective Service, and the Ministry of Emergency Situations, the country's civil defence service since 1995 and successor to earlier civil defence units.
The Navy consists of four fleets and one flotilla:
The Kaliningrad Special Region, under the command of the Commander Baltic Fleet, comprises Ground & Coastal Forces, formerly the 11th Guards Army, with a motor rifle division and a motor rifle brigade, and a fighter aviation regiment of Sukhoi Su-27 'Flanker', as well as other forces.
Similarly, the Northeast Group of Troops and Forces, headquartered at Petropavlovsk-Kamchatskiy, comprises all Russian Armed Forces components in the Kamchatka Krai and the Chukotka Autonomous Okrug [district] and is subordinate to the Commander Pacific Fleet headquartered in Vladivostok.
Conscription is still used in Russia; the term of service is 12 months, and men aged between 18 and 27 are eligible. Deferments are provided to undergraduate and graduate students, men solely supporting disabled relatives, parents of at least two children and, upon Presidential proclamation, to some employees of military-oriented enterprises. Men holding a Ph.D., as well as sons and brothers of servicemen killed or disabled during their military service, are released from conscription.
There were widespread problems with hazing in the Army, known as "dedovshchina", where first-year draftees are bullied by second-year draftees, a practice that appeared in its current form after the change to a two-year service term in 1967. According to Anna Politkovskaya, in 2002, "a complete battalion, more than five hundred men, had been killed not by enemy fire but by beatings". To combat this problem, a new decree was signed in March 2007, which cut the conscription service term from 24 to 18 months. The term was cut further to one year on 1 January 2008.
Thirty percent of Russian Armed Forces' personnel were contract servicemen at the end of 2005. For the foreseeable future, the Armed Forces will be a mixed contract/conscript force. The Russian Armed Forces need to maintain a mobilization reserve to have manning resources capable of reinforcing the permanent readiness forces if those forces cannot deter or suppress an armed conflict on their own. Professional soldiers now outnumber their conscript counterparts in the Russian Army for the first time in Russian history, Defence Minister Sergei Shoigu told Russian media on 28 April 2015. Nearly 400,000 contract soldiers were serving in the Russian Army as of March 2019. According to Defence Minister Shoigu, in every regiment and brigade two battalions are formed of contract soldiers, while one is formed of recruits, who are not involved in combat missions. There are currently 136 battalion tactical groups in the armed forces formed of contract soldiers. As of March 2020, there were 225,000 conscripts and 405,000 contract soldiers.
Recruitment into the Russian military is also open to non-Russian citizens of the Commonwealth of Independent States, of which Russia is the largest member. By December 2003, the Russian parliament had approved a law in principle permitting the Armed Forces to employ foreign nationals on contract by offering them Russian citizenship after several years of service; yet up to 2010, foreigners could only serve in Russia's armed forces after getting a Russian passport. Under a 2010 Defence Ministry plan, foreigners without dual citizenship would be able to sign up for five-year contracts and would be eligible for Russian citizenship after serving three years. The change could open the way for CIS citizens to get fast-track Russian citizenship and counter the effects of Russia's demographic crisis on army recruitment. Each soldier on duty receives an Identity Card of the Russian Armed Forces.
Awards and decorations of the Armed Forces are covered at the Awards and Emblems of the Ministry of Defence of the Russian Federation.
On 17 November 2011, General Nikolai Makarov said that Russia had reached a crisis in conscript service: there simply were not sufficient able-bodied men to draft, and the military was forced to halve its conscription intake. Military draft dodging declined by 66% between 2012 and March 2019. About 80% of the young people drafted into the ranks of the Russian Armed Forces in the autumn of 2018 were reportedly found fit for military service. According to the head of mobilization, the fitness of future recruits has increased by 7% in recent years.
In March 2013, Defence Minister Sergey Shoygu promised that all army quarters would have showers by the end of the year. RIA Novosti also said that the shower plans were the latest in a series of creature-comfort improvements the Defence Ministry had recently announced. In mid-January, Shoygu said he would rid the army of its antiquated "footwraps", or portyanki, and a few days later the designer of Russia's new army uniform said that the ear-flap hats traditionally worn in winter would be replaced with more modern headgear. The Russian military's ushanka hats were improved between 2013 and 2015, when the Russian armed forces were being equipped with new uniforms. The new version of the traditional (and somewhat stereotypical) hat features better heat insulation and longer ear flaps.
Russian military officers with top-secret security clearance are now being issued domestically developed, ultra-secure, cryptographically protected Atlas M-663S cellphones, Russia's Izvestia newspaper reported in early 2018. A new uniform for hot climates was introduced in mid-2018.
According to Article 51.2 of Federal Law No. 53-FZ of 28 March 1998, "On Military Duty and Military Service", the Russian Armed Forces have a reserve (Russian: запас, transliterated "zapas") comprising two components: the mobilization human reserve (contracted reservists) and the mobilization human resource (all others liable for call-up).
By default, at the end of active duty every serviceman is enrolled in the mobilization human resource; this applies equally to conscripts and volunteers, regardless of rank. Furthermore, graduates of civilian institutions of higher education who have completed the military training centres of their universities under the reserve officer programme are enrolled in the mobilization human resource upon promotion to officer rank (unlike graduates of such centres trained under the active duty officer programme, who are enrolled for active duty upon promotion). The mobilization human resource is also replenished with men who have reached the age of 27 without having performed military service for any reason.
Enrolment in the mobilization human reserve is voluntary and requires a special contract; this option is available to anyone already in the mobilization human resource. The initial contract is concluded for a three-year period. Military personnel of the mobilization human reserve (reservists) perform part-time duties in military units. As a rule, in peacetime reservists perform their duties two to three days per month, as well as during an annual military camp training lasting from 20 to 30 days.
Persons in the mobilization human resource (non-reservists) may be called up for military camp training in peacetime. According to Article 54 of Federal Law No. 53-FZ of 28 March 1998, "On Military Duty and Military Service", the duration of each training may not exceed two months, the total duration of such trainings over the entire period in the mobilization human resource may not exceed 12 months, and a person may be called up no more than once every three years.
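Taken together, these limits amount to a simple admissibility rule. The following minimal Python sketch illustrates it; the function and variable names are invented for illustration, and months are approximated in days, so this is not an official codification of Article 54:

def camp_training_allowed(past_trainings_days, proposed_days, years_since_last):
    # past_trainings_days: durations (in days) of previous camp trainings
    if proposed_days > 60:  # each training may not exceed 2 months (~60 days)
        return False
    if sum(past_trainings_days) + proposed_days > 365:  # total may not exceed 12 months
        return False
    if past_trainings_days and years_since_last < 3:  # at most once every 3 years
        return False
    return True

# Example: a 30-day call-up only two years after the previous one is refused.
print(camp_training_allowed([30, 45], 30, 2))  # prints: False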
Reservists are subject to mobilization in wartime first of all; non-reservists are subject to mobilization secondarily. Mobilization of non-reservists is carried out taking into account the age categories under Article 53 of Federal Law No. 53-FZ of 28 March 1998, "On Military Duty and Military Service", in order from the first category to the third.

The first category includes: 1) persons at any rank below commissioned officer (enlisted personnel) who have not reached the age of 35; 2) persons at any rank from junior lieutenant to captain (captain lieutenant in naval service) inclusive (junior commissioned officers) who have not reached the age of 50; 3) persons at any rank from major (captain 3rd rank in naval service) to lieutenant colonel (captain 2nd rank in naval service) inclusive who have not reached the age of 55; 4) persons at the rank of colonel (captain 1st rank in naval service) who have not reached the age of 60; 5) persons at the rank of major general (counter admiral in naval service) or higher (supreme officers) who have not reached the age of 65.

The second category includes: 1) enlisted personnel aged from 35 to less than 45; 2) junior commissioned officers aged from 50 to less than 55; 3) commissioned officers at any rank from major (captain 3rd rank) to lieutenant colonel (captain 2nd rank) inclusive aged from 55 to less than 60; 4) commissioned officers at the rank of colonel (captain 1st rank) aged from 60 to less than 65; 5) supreme officers aged from 65 to less than 70.

The third category includes: 1) enlisted personnel aged from 45 to less than 50; 2) junior commissioned officers aged from 55 to less than 60; 3) commissioned officers at any rank from major (captain 3rd rank) to lieutenant colonel (captain 2nd rank) inclusive aged from 60 to less than 65; 4) all females aged less than 45 for enlisted personnel and less than 50 for commissioned officers.

A person who has reached the age limit established for the third category (the second category for persons at the rank of colonel (captain 1st rank) or higher) is retired and is not subject to mobilization.
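To make the branching in the three categories concrete, here is a minimal Python sketch of the age-category logic for male personnel and the special rule for females; the rank groupings, names and table layout are simplifying assumptions for illustration, not the law's own wording:

# Upper age bounds (exclusive) for categories 1, 2 and 3 per rank group;
# None means the rank group has no third category.
AGE_LIMITS = {
    "enlisted":        (35, 45, 50),   # below commissioned officer
    "junior_officer":  (50, 55, 60),   # junior lieutenant to captain
    "field_officer":   (55, 60, 65),   # major to lieutenant colonel
    "colonel":         (60, 65, None),
    "supreme_officer": (65, 70, None), # major general and above
}

def mobilization_category(rank_group, age, female=False):
    """Return 1, 2 or 3 (call-up priority), or None if retired."""
    if female:
        # Females appear only in the third category, with lower age limits.
        limit = 45 if rank_group == "enlisted" else 50
        return 3 if age < limit else None
    cat1, cat2, cat3 = AGE_LIMITS[rank_group]
    if age < cat1:
        return 1
    if age < cat2:
        return 2
    if cat3 is not None and age < cat3:
        return 3
    return None  # past the age limit: retired, not subject to mobilization

# Example: a 52-year-old male captain falls into the second category.
assert mobilization_category("junior_officer", 52) == 2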
Between 1991 and 1997, newly independent Russia's defence spending fell by a factor of eight in real prices. In 1998, when Russia experienced a severe financial crisis, its military expenditure in real terms reached its lowest point: barely one-quarter of the USSR's in 1991, and two-fifths of the level of 1992, the first year of Russia's independent existence.
In the early 2000s, defence spending increased by at least one-third year-on-year, leading to overall defence expenditure almost quadrupling over six years; according to Finance Minister Alexei Kudrin, this rate was to be sustained through 2010. Official government military spending for 2005 was US$32.4 billion, though various sources have estimated Russia's military expenditures to be considerably higher than the reported amount. Estimating Russian military expenditure is beset with difficulty; the annual IISS Military Balance has underscored the problem numerous times within its section on Russia, commenting: 'By simple observation... [the military budget] would appear to be lower than is suggested by the size of the armed forces or the structure of the military–industrial complex, and thus neither of the figures is particularly useful for comparative analysis'. By some estimates, overall Russian defence expenditure is now the second highest in the world after the USA. According to Alexander Kanshin, Chairman of the Public Chamber of Russia on affairs of veterans, military personnel and their families, the Russian military is losing up to US$13 billion to corruption every year.
On 16 September 2008 Russian Prime Minister Vladimir Putin announced that in 2009, Russia's defence budget would be increased to a record amount of $50 billion.
On 16 February 2009, Russia's deputy defence minister said state defence contracts would not be subject to cuts that year despite the ongoing financial crisis, and that there would be no decrease in 2009. The budget would still be 1,376 billion roubles, which at then-current exchange rates amounted to $41.5 billion.
However, later that month, due to the world financial crisis, the Russian Parliament's Defence Committee stated that the Russian defence budget would instead be slashed by 15 percent, from $40 billion to $34 billion, with further cuts to come. On 5 May 2009, First Deputy Prime Minister Sergei Ivanov said that the defence budget for 2009 would be 1.3 trillion rubles (US$39.4 billion), of which 322 billion rubles were allocated to weapons purchases, with the rest to be spent on construction, fuel storage and food supply.
According to Vladimir Komoyedov, head of the State Duma's Defence Committee, Russia plans to spend 101.15 billion rubles on nuclear weapons in 2013–2015. "The budget provisions under 'The Nuclear Weapons Complex' section in 2013-2015 will amount to 29.28 billion rubles, 33.3 billion rubles and 38.57 billion rubles respectively," Komoyedov said, Vechernaya Moskva reports.
Komoyedov added that in 2012 spending on nuclear weapons amounted to 27.4 billion rubles. The draft law "On the Federal Budget for 2013 and for the planning period of 2014 and 2015" was to be discussed in its first reading on 19 October 2012, The Voice of Russia reports. At a meeting in Sochi in November 2013, President Putin said the country's defence budget would reach 2.3 trillion roubles, stressing how large this was in comparison with the 2003 budget, which stood at 600 billion rubles.
The Russian government's published 2014 military budget is about 2.49 trillion rubles (approximately US$69.3 billion), the fourth largest in the world behind the US, China and Saudi Arabia. The official budget is set to rise to 3.03 trillion rubles (approximately US$83.7 billion) in 2015, and 3.36 trillion rubles (approximately US$93.9 billion) in 2016. As of 2014, Russia's military budget is higher than that of any other European nation, and approximately one-seventh (14 percent) of the US military budget. In 2015, SIPRI found that Russia was the world's second-biggest exporter of major weapons for the period 2010–14, having increased exports by 37 percent. India, China and Algeria accounted for almost 60 percent of total Russian exports. Asia and Oceania received 66 percent of Russian arms exports in 2010–14, Africa 12 percent and the Middle East 10 percent.
In 2017, Russia reportedly cut its defence spending by 20 percent, following calls by Vladimir Putin to direct money to other initiatives such as healthcare and education. The cut reduced Russia's military spending to $66.3 billion, dropping it to fourth place among the world's military spenders. Russia's 2019 defence budget was US$48 billion.
About 70 percent of the former Soviet Union's defence industries are located in the Russian Federation. Many defence firms have been privatised; some have developed significant partnerships with firms in other countries.
The recent steps towards modernization of the Armed Forces have been made possible by Russia's economic resurgence based on oil and gas revenues, as well as a strengthening domestic market. Currently, the military is in the middle of a major equipment upgrade, with the government spending about $200 billion (roughly $400 billion in PPP terms) on development and production of military equipment between 2006 and 2015 under the State Armament Programme for 2007-2015 (GPV — госпрограмма вооружения). Mainly as a result of lessons learned during the Russo-Georgian War, the State Armament Programme for 2011-2020 was launched in December 2010. Prime Minister Putin announced that 20–21.5 trillion rubles (over $650 billion) would be allocated to purchasing new hardware over the following 10 years. The aim was for modern equipment to make up 30% of the army, navy and air force's inventory by 2015, and 70% by 2020.
In some categories, the proportion of new weapon systems will reach 80% or even 100%. At this point, the Russian MoD plans to purchase, among others, up to 250 ICBMs, 800 aircraft, 1,200 helicopters, 44 submarines, 36 frigates, 28 corvettes, 18 cruisers, 24 destroyers, 6 aircraft carriers, and 62 air defence battalions. Several existing types will be upgraded. The share of modern and advanced weapons in some branches of the Russian Armed Forces currently amounts to over 60 percent, the Defence Ministry reported on 31 July 2015.
In total, since 2012 and as of 2017, the Armed Forces have received more than 30,000 units of new and modernized weapons and equipment, including more than 50 warships, 1,300 aircraft, over 1,800 drones, and 4,700 tanks and armored combat vehicles, compared with the two warships, 151 aircraft and 217 tanks received in 2007–2011. The Russian army also receives 150-250 aircraft per year and over 300 short-range UAVs.
The Russian Federation also produces satellite-guided weapons, drones (including combat and kamikaze models and quadrocopters) along with EW systems to counter them, cruise missiles, unmanned vehicles, exoskeletons, military robots and other military equipment.
As of 2011, Russia's chief military prosecutor said that 20 percent of the defence budget was being stolen or defrauded yearly.
In 2018, the RF Armed Forces adopted 35 types of weapons and military equipment and completed state tests of 21 more. The Russian Ministry of Defence (MoD) procured the YeSU TZ (Yedinaya Sistema Upravleniya Takticheskogo Zvena) battlefield management system that same year. The YeSU TZ incorporates 11 subsystems that control artillery, electronic warfare systems, ground vehicles, air defence assets, engineering equipment, and logistics support, among other things. The Russian military has also introduced Big Data decision-making technology.
Since 2012 and as of March 2019, 12 missile regiments have been rearmed with Yars ICBMs, 10 missile brigades with Iskander tactical ballistic missile systems, 13 aviation regiments with MiG-31BM, Su-35S, Su-30SM, and Su-34 combat aircraft, three army aviation brigades and six helicopter regiments with Mi-28N and Ka-52 combat helicopters, 20 surface-to-air missile (SAM) regiments with S-400 Triumf SAM systems, 23 batteries with Pantsir-S self-propelled anti-aircraft gun-missile systems, and 17 batteries with Bal and Bastion mobile coastal defence missile systems (MCDMSs).
Since 2012 and as of March 2020, the Ground Forces have received more than 12,000 pieces of weapon systems and equipment and rearmed all missile brigades with the Iskander tactical ballistic missile system. Aerospace Force and naval aviation have received over 1,400 aircraft and helicopters and the Navy more than 190 ships, vessels and boats.
As of January 2017, the Federation of American Scientists estimated that Russia has approximately 1,765 deployed strategic warheads, and another 2,700 non-deployed strategic and deployed and non-deployed tactical warheads, plus an additional 2,510 warheads awaiting dismantlement. Russia's Strategic Rocket Forces controls its land-based nuclear warheads, while the Navy controls the submarine based missiles and the Air Force the air-launched warheads. Russia's nuclear warheads are deployed in four areas:
The military doctrine of Russia sees NATO expansion as one of the threats to the Russian Federation and reserves the right to use nuclear weapons in response to conventional aggression that could endanger the existence of the state. In keeping with this, the country's nuclear forces received adequate funding throughout the late 1990s. The number of intercontinental ballistic missiles and warheads on active duty has declined over the years, in part in keeping with arms limitation agreements with the U.S. and in part due to insufficient spending on maintenance, but this is balanced by the deployment of new missiles designed to be proof against missile defences. Russia has developed the new RT-2PM2 Topol-M (SS-27) missiles, which are stated to be able to penetrate any missile defence, including the planned U.S. National Missile Defence. The missile can change course in both air and space to avoid countermeasures, and is designed to be launched from land-based, mobile TEL units. Russian nuclear forces are confident that they could carry out a successful retaliatory strike if attacked.
Because of international awareness of the danger that Russian nuclear technology might fall into the hands of terrorists or rogue officers who, it was feared, might use nuclear weapons to threaten or attack other countries, the federal government of the United States and many other countries provided considerable financial assistance to the Russian nuclear forces in the early 1990s. Many friendly countries also provided large sums, often tied to Russian arms purchase deals, which kept Russian agencies functioning. This money went in part to finance the decommissioning of warheads under international agreements, such as the Cooperative Threat Reduction programme, but also to improve security and personnel training at Russian nuclear facilities.
In the late evening of 11 September 2007, the fuel-air explosive AVBPM, or "Father of All Bombs", was successfully field-tested. According to the Russian military, the new weapon would replace several smaller types of nuclear bombs in its arsenal.
Railway Mail Service
The United States Post Office Department's Railway Mail Service was a significant mail transportation service in the US from the mid-19th century until the mid-20th century. The RMS, and its successor the Postal Transportation Service (PTS), carried the vast majority of letters and packages mailed in the United States from the 1890s until the 1960s.
George B. Armstrong, manager of the Chicago Post Office, is generally credited with founding the concept of sorting mail en route aboard trains, which became the Railway Mail Service. Mail had been carried in locked pouches aboard trains before Armstrong's involvement with the system, but there was no organized system of sorting mail en route so that it was ready for delivery when the pouches reached their destination city.
In response to Armstrong's request to experiment with the concept, the first railway post office (RPO) began operating on the Chicago and North Western Railway between Chicago and Clinton, Iowa, on August 28, 1864. The concept was successful, and was expanded to other railroads operating from Chicago, including the Chicago, Burlington and Quincy, Chicago and Rock Island, Pennsylvania and the Erie.
By 1869, when the Railway Mail Service was officially inaugurated, the system had expanded to virtually all of the major railroads of the United States, and the country was divided into six operating divisions. Each division was headed by a superintendent, all under the direction of George B. Armstrong, who had been summoned from Chicago to Washington, D.C. to become general superintendent of the postal railway service. Armstrong served only two years as general superintendent before resigning because of failing health; he died in Chicago on May 5, 1871, two days after his resignation.
Armstrong's successor in Chicago, George Bangs, was appointed the second general superintendent of the postal railway service. Bangs encouraged the use of fast mail trains: trains made up entirely of mail cars, traveling on expedited schedules designed to accommodate the needs of the Post Office rather than those of the traveling public.
In 1890, 5,800 postal railway clerks provided service over of railroad. By 1907, over 14,000 clerks were providing service over of railroad. When the post office began handling parcel post in 1913, terminal railway post office operations were established in major cities by the RMS to handle the large increase in mail volume. The Railway Mail Service reached its peak in the 1920s, then began a gradual decline with the discontinuance of RPO service on branch lines and secondary routes. After 1942, Highway Post Office (HPO) service was used to continue en route sorting after the discontinuance of some railway post office operations. As highway mail transportation became more prevalent, the Railway Mail Service was redesignated the Postal Transportation Service.
Abandonment of routes accelerated in the late 1950s and early 1960s, and many of the remaining lines were discontinued in 1967. On June 30, 1974, the Cleveland and Cincinnati highway post office, the last HPO route, was discontinued. The last railway post office operated between New York and Washington, D.C. on June 30, 1977.
A large bust and monument to Armstrong is displayed on the north side of Chicago's Loop Station Post Office.
A restored RPO car is displayed as part of the "Pioneer Zephyr" at Chicago's Museum of Science and Industry.
The restored 1927 AT&SF Railway #74 RPO car is displayed at the Pacific Southwest Railway Museum in Campo (San Diego County), CA.
Effective August 15, 1955, the fifteen divisions of the Postal Transportation Service were eliminated and the mail routes were divided among the same postal regions into which post offices were classified.
Rail transport
Rail transport (also known as train transport) is a means of transferring passengers and goods on wheeled vehicles running on rails, which form part of a track. In contrast to road transport, where vehicles run on a prepared flat surface, rail vehicles (rolling stock) are directionally guided by the tracks on which they run. Tracks usually consist of steel rails installed on ties (sleepers) set in ballast, on which the rolling stock, usually fitted with metal wheels, moves. Other variations are also possible, such as slab track, where the rails are fastened to a concrete foundation resting on a prepared subsurface.
Rolling stock in a rail transport system generally encounters lower frictional resistance than rubber-tired road vehicles, so passenger and freight cars (carriages and wagons) can be coupled into longer trains. The operation is carried out by a railway company, providing transport between train stations or freight customer facilities. Power is provided by locomotives which either draw electric power from a railway electrification system or produce their own power, usually by diesel engines or, historically, steam engines. Most tracks are accompanied by a signalling system. Railways are a safe land transport system when compared with other forms of transport. Railway transport is capable of high levels of passenger and cargo utilization and energy efficiency, but is often less flexible and more capital-intensive than road transport, particularly at lower traffic levels.
The oldest known man- or animal-hauled railways date back to the 6th century BC in Corinth, Greece. Rail transport then re-emerged in the mid-16th century in Germany in the form of horse-powered funiculars and wagonways. Modern rail transport began with the British development of the steam locomotive in the early 19th century; thus the railway system in Great Britain is the oldest in the world. Built by George Stephenson and his son Robert's company Robert Stephenson and Company, the "Locomotion" No. 1 was the first steam locomotive to carry passengers on a public rail line, the Stockton and Darlington Railway, in 1825. George Stephenson also built the first public inter-city railway line in the world to be operated solely by steam locomotives, the Liverpool and Manchester Railway, which opened in 1830. With steam engines, one could construct mainline railways, which were a key component of the Industrial Revolution. Railways also reduced the costs of shipping and allowed for fewer lost goods compared with water transport, where ships were occasionally lost. The change from canals to railways allowed for "national markets" in which prices varied very little from city to city. The spread of the railway network and the use of railway timetables led to the standardisation of time (railway time) in Britain based on Greenwich Mean Time. Prior to this, major towns and cities varied their local time relative to GMT. The invention and development of the railway in the United Kingdom was one of the most important technological inventions of the 19th century. The world's first underground railway, the Metropolitan Railway (part of the London Underground), opened in 1863.
In the 1880s, electrified trains were introduced, leading to electrification of tramways and rapid transit systems. Starting during the 1940s, the non-electrified railways in most countries had their steam locomotives replaced by diesel-electric locomotives, with the process being almost complete by the 2000s. During the 1960s, electrified high-speed railway systems were introduced in Japan and later in some other countries. Many countries are in the process of replacing diesel locomotives with electric locomotives, mainly due to environmental concerns, a notable example being Switzerland, which has completely electrified its network. Other forms of guided ground transport outside the traditional railway definitions, such as monorail or maglev, have been tried but have seen limited use.
Following a decline after World War II due to competition from cars and airplanes, rail transport has had a revival in recent decades due to road congestion and rising fuel prices, as well as governments investing in rail as a means of reducing CO2 emissions in the context of concerns about global warming.
The history of rail transport began in the 6th century BC in Ancient Greece. It can be divided up into several discrete periods defined by the principal means of track material and motive power used.
Evidence indicates that there was a 6-to-8.5-kilometre-long Diolkos paved trackway, which transported boats across the Isthmus of Corinth in Greece from around 600 BC. Wheeled vehicles pulled by men and animals ran in grooves in limestone, which provided the track element, preventing the wagons from leaving the intended route. The Diolkos was in use for over 650 years, until at least the 1st century AD. Paved trackways were also later built in Roman Egypt.
In 1515, Cardinal Matthäus Lang wrote a description of the Reisszug, a funicular railway at the Hohensalzburg Fortress in Austria. The line originally used wooden rails and a hemp haulage rope and was operated by human or animal power, through a treadwheel. The line still exists and is operational, although in updated form and is possibly the oldest operational railway.
Wagonways (or tramways) using wooden rails, hauled by horses, started appearing in the 1550s to facilitate the transport of ore tubs to and from mines, and soon became popular in Europe. Such an operation was illustrated in Germany in 1556 by Georgius Agricola in his work De re metallica. This line used "Hund" carts with unflanged wheels running on wooden planks and a vertical pin on the truck fitting into the gap between the planks to keep it going the right way. The miners called the wagons "Hunde" ("dogs") from the noise they made on the tracks.
There are many references to their use in central Europe in the 16th century. Such a transport system was later used by German miners at Caldbeck, Cumbria, England, perhaps from the 1560s. A wagonway was built at Prescot, near Liverpool, sometime around 1600, possibly as early as 1594. Owned by Philip Layton, the line carried coal from a pit near Prescot Hall to a terminus about half a mile away. A funicular railway was also made at Broseley in Shropshire some time before 1604. This carried coal for James Clifford from his mines down to the river Severn to be loaded onto barges and carried to riverside towns. The Wollaton Wagonway, completed in 1604 by Huntingdon Beaumont, has sometimes erroneously been cited as the earliest British railway. It ran from Strelley to Wollaton near Nottingham.
The Middleton Railway in Leeds, which was built in 1758, later became the world's oldest operational railway (other than funiculars), albeit now in an upgraded form. In 1764, the first railway in the Americas was built in Lewiston, New York.
In the late 1760s, the Coalbrookdale Company began to fix plates of cast iron to the upper surface of the wooden rails. This allowed a variation of gauge to be used. At first only balloon loops could be used for turning, but later movable points were introduced that allowed for switching.
A system was introduced in which unflanged wheels ran on L-shaped metal plates; these became known as plateways. John Curr, a Sheffield colliery manager, invented this flanged rail in 1787, though the exact date is disputed. The plate rail was taken up by Benjamin Outram for wagonways serving his canals, manufacturing them at his Butterley ironworks. In 1803, William Jessop opened the Surrey Iron Railway, a double-track plateway sometimes erroneously cited as the world's first public railway, in south London.
Meanwhile, William Jessop had earlier used a form of all-iron edge rail and flanged wheels successfully for an extension to the Charnwood Forest Canal at Nanpantan, Loughborough, Leicestershire in 1789. In 1790, Jessop and his partner Outram began to manufacture edge-rails. Jessop became a partner in the Butterley Company in 1790. The first public edgeway (thus also the first public railway) built was the Lake Lock Rail Road in 1796. Although the primary purpose of the line was to carry coal, it also carried passengers.
These two systems of constructing iron railways, the "L" plate-rail and the smooth edge-rail, continued to exist side by side until well into the early 19th century. The flanged wheel and edge-rail eventually proved its superiority and became the standard for railways.
Cast iron used in rails proved unsatisfactory because it was brittle and broke under heavy loads. Wrought-iron rails, introduced by John Birkinshaw in 1820, replaced cast iron. Wrought iron (usually simply referred to as "iron") was a ductile material that could undergo considerable deformation before breaking, making it more suitable for rails. But iron was expensive to produce until Henry Cort patented the puddling process in 1784. In 1783 Cort also patented the rolling process, which was 15 times faster at consolidating and shaping iron than hammering. These processes greatly lowered the cost of producing iron and rails. The next important development in iron production was hot blast, developed by James Beaumont Neilson (patented 1828), which considerably reduced the amount of coke (fuel) or charcoal needed to produce pig iron. Wrought iron was a soft material that contained slag or "dross". The softness and dross tended to make iron rails distort and delaminate, and they lasted less than 10 years, sometimes as little as one year under high traffic. All these developments in the production of iron eventually led to the replacement of composite wood/iron rails with superior all-iron rails.
The introduction of the Bessemer process, enabling steel to be made inexpensively, led to the era of great expansion of railways that began in the late 1860s. Steel rails lasted several times longer than iron. Steel rails made heavier locomotives possible, allowing for longer trains and improving the productivity of railroads. The Bessemer process introduced nitrogen into the steel, which caused the steel to become brittle with age. The open hearth furnace began to replace the Bessemer process near the end of the 19th century, improving the quality of steel and further reducing costs. Thus steel completely replaced the use of iron in rails, becoming standard for all railways.
The first passenger horsecar or tram, the Swansea and Mumbles Railway, opened between Swansea and Mumbles in Wales in 1807. Horses remained the preferred mode of tram transport even after the arrival of steam engines, until the end of the 19th century, because they were cleaner than steam-driven trams, which caused smoke in city streets.
In 1784 James Watt, a Scottish inventor and mechanical engineer, patented a design for a steam locomotive. Watt had improved the steam engine of Thomas Newcomen, hitherto used to pump water out of mines, and developed a reciprocating engine in 1769 capable of powering a wheel. This was a large stationary engine, powering cotton mills and a variety of machinery; the state of boiler technology necessitated the use of low pressure steam acting upon a vacuum in the cylinder, which required a separate condenser and an air pump. Nevertheless, as the construction of boilers improved, Watt investigated the use of high-pressure steam acting directly upon a piston, raising the possibility of a smaller engine that might be used to power a vehicle. Following the patent, Watt's employee William Murdoch produced a working model of a self-propelled steam carriage that same year.
The first full-scale working railway steam locomotive was built in the United Kingdom in 1804 by Richard Trevithick, a British engineer born in Cornwall. This used high-pressure steam to drive the engine by one power stroke. The transmission system employed a large flywheel to even out the action of the piston rod. On 21 February 1804, the world's first steam-powered railway journey took place when Trevithick's unnamed steam locomotive hauled a train along the tramway of the Penydarren ironworks, near Merthyr Tydfil in South Wales. Trevithick later demonstrated a locomotive operating upon a piece of circular rail track in Bloomsbury, London, the "Catch Me Who Can", but never got beyond the experimental stage with railway locomotives, not least because his engines were too heavy for the cast-iron plateway track then in use.
The first commercially successful steam locomotive was Matthew Murray's rack locomotive "Salamanca" built for the Middleton Railway in Leeds in 1812. This twin-cylinder locomotive was light enough to not break the edge-rails track and solved the problem of adhesion by a cog-wheel using teeth cast on the side of one of the rails. Thus it was also the first rack railway.
This was followed in 1813 by the locomotive "Puffing Billy" built by Christopher Blackett and William Hedley for the Wylam Colliery Railway, the first successful locomotive running by adhesion only. This was accomplished by the distribution of weight between a number of wheels. "Puffing Billy" is now on display in the Science Museum in London, making it the oldest locomotive in existence.
In 1814 George Stephenson, inspired by the early locomotives of Trevithick, Murray and Hedley, persuaded the manager of the Killingworth colliery where he worked to allow him to build a steam-powered machine. Stephenson played a pivotal role in the development and widespread adoption of the steam locomotive. His designs considerably improved on the work of the earlier pioneers. He built the locomotive "Blücher", also a successful flanged-wheel adhesion locomotive. In 1825 he built the locomotive "Locomotion" for the Stockton and Darlington Railway in the north east of England, which became the first public steam railway in the world, although it used both horse power and steam power on different runs. In 1829, he built the locomotive "Rocket", which entered and won the Rainhill Trials. This success led to Stephenson establishing his company as the pre-eminent builder of steam locomotives for railways in Great Britain and Ireland, the United States, and much of Europe. The first public railway to use only steam locomotives, all the time, was the Liverpool and Manchester Railway, built in 1830.
Steam power continued to be the dominant power system in railways around the world for more than a century.
The first known electric locomotive was built in 1837 by chemist Robert Davidson of Aberdeen in Scotland, and it was powered by galvanic cells (batteries). Thus it was also the earliest battery electric locomotive. Davidson later built a larger locomotive named "Galvani", exhibited at the Royal Scottish Society of Arts Exhibition in 1841. The seven-ton vehicle had two direct-drive reluctance motors, with fixed electromagnets acting on iron bars attached to a wooden cylinder on each axle, and simple commutators. It hauled a load of six tons at four miles per hour (6 kilometers per hour) for a distance of . It was tested on the Edinburgh and Glasgow Railway in September of the following year, but the limited power from batteries prevented its general use. It was destroyed by railway workers, who saw it as a threat to their job security.
Werner von Siemens demonstrated an electric railway in 1879 in Berlin. The world's first electric tram line, Gross-Lichterfelde Tramway, opened in Lichterfelde near Berlin, Germany, in 1881. It was built by Siemens. The tram ran on 180 Volt DC, which was supplied by running rails. In 1891 the track was equipped with an overhead wire and the line was extended to Berlin-Lichterfelde West station. The Volk's Electric Railway opened in 1883 in Brighton, England. The railway is still operational, thus making it the oldest operational electric railway in the world. Also in 1883, Mödling and Hinterbrühl Tram opened near Vienna in Austria. It was the first tram line in the world in regular service powered from an overhead line. Five years later, in the U.S. electric trolleys were pioneered in 1888 on the Richmond Union Passenger Railway, using equipment designed by Frank J. Sprague.
The first use of electrification on a main line was on a four-mile section of the Baltimore Belt Line of the Baltimore and Ohio Railroad (B&O) in 1895, connecting the main portion of the B&O to the new line to New York through a series of tunnels around the edges of Baltimore's downtown. Electricity quickly became the power supply of choice for subways, abetted by Sprague's invention of multiple-unit train control in 1897. By the early 1900s most street railways were electrified.
The London Underground, the world's oldest underground railway, opened in 1863, and it began operating electric services using a fourth rail system in 1890 on the City and South London Railway, now part of the London Underground Northern line. This was the first major railway to use electric traction. The world's first deep-level electric railway, it runs from the City of London, under the River Thames, to Stockwell in south London.
The first practical AC electric locomotive was designed by Charles Brown, then working for Oerlikon, Zürich. In 1891, Brown had demonstrated long-distance power transmission, using three-phase AC, between a hydro-electric plant at Lauffen am Neckar and Frankfurt am Main West, a distance of 280 km. Using experience he had gained while working for Jean Heilmann on steam-electric locomotive designs, Brown observed that three-phase motors had a higher power-to-weight ratio than DC motors and, because of the absence of a commutator, were simpler to manufacture and maintain. However, they were much larger than the DC motors of the time and could not be mounted in underfloor bogies: they could only be carried within locomotive bodies.
In 1894, Hungarian engineer Kálmán Kandó developed a new type of 3-phase asynchronous electric drive motor and generator for electric locomotives. Kandó's early 1894 designs were first applied in a short three-phase AC tramway in Evian-les-Bains (France), which was constructed between 1896 and 1898.
In 1896, Oerlikon installed the first commercial example of the system on the Lugano Tramway. Each 30-tonne locomotive had two motors run by three-phase 750 V 40 Hz fed from double overhead lines. Three-phase motors run at constant speed and provide regenerative braking, and are well suited to steeply graded routes, and the first main-line three-phase locomotives were supplied by Brown (by then in partnership with Walter Boveri) in 1899 on the 40 km Burgdorf–Thun line, Switzerland.
Italian railways were the first in the world to introduce electric traction for the entire length of a main line rather than a short section. The 106 km Valtellina line was opened on 4 September 1902, designed by Kandó and a team from the Ganz works. The electrical system was three-phase at 3 kV 15 Hz. In 1918, Kandó invented and developed the rotary phase converter, enabling electric locomotives to use three-phase motors whilst supplied via a single overhead wire, carrying the simple industrial frequency (50 Hz) single phase AC of the high voltage national networks.
An important contribution to the wider adoption of AC traction came from SNCF of France after World War II. The company conducted trials at AC 50 Hz and established it as a standard. Following SNCF's successful trials, 50 Hz, now also called industrial frequency, was adopted as the standard for main lines across the world.
The earliest recorded examples of an internal combustion engine for railway use included a prototype designed by William Dent Priestman, which was examined by Sir William Thomson in 1888, who described it as "a [Priestman oil engine] mounted upon a truck which is worked on a temporary line of rails to show the adaptation of a petroleum engine for locomotive purposes". In 1894, a two-axle machine built by Priestman Brothers was used on the Hull Docks.
In 1906, Rudolf Diesel, Adolf Klose and the steam and diesel engine manufacturer Gebrüder Sulzer founded Diesel-Sulzer-Klose GmbH to manufacture diesel-powered locomotives. Sulzer had been manufacturing diesel engines since 1898. The Prussian State Railways ordered a diesel locomotive from the company in 1909. The world's first diesel-powered locomotive was operated in the summer of 1912 on the Winterthur–Romanshorn railway in Switzerland, but was not a commercial success. The locomotive weight was 95 tonnes and the power was 883 kW with a maximum speed of 100 km/h. Small numbers of prototype diesel locomotives were produced in a number of countries through the mid-1920s.
A significant breakthrough occurred in 1914, when Hermann Lemp, a General Electric electrical engineer, developed and patented a reliable direct current electrical control system (subsequent improvements were also patented by Lemp). Lemp's design used a single lever to control both engine and generator in a coordinated fashion, and was the prototype for all diesel–electric locomotive control systems. In 1914, the world's first functional diesel–electric railcars were produced for the "Königlich-Sächsische Staatseisenbahnen" (Royal Saxon State Railways) by Waggonfabrik Rastatt, with electric equipment from Brown, Boveri & Cie and diesel engines from Swiss Sulzer AG. They were classified as DET 1 and DET 2. The first regular use of diesel–electric locomotives was in switching (shunter) applications. General Electric produced several small switching locomotives in the 1930s (the famous "44-tonner" switcher was introduced in 1940); Westinghouse Electric and Baldwin collaborated to build switching locomotives starting in 1929.
In 1929, the Canadian National Railways became the first North American railway to use diesels in mainline service with two units, 9000 and 9001, from Westinghouse.
Although steam and diesel services reaching speeds up to 200 km/h were started before the 1960s in Europe, they were not very successful.
The first electrified high-speed rail Tōkaidō Shinkansen was introduced in 1964 between Tokyo and Osaka in Japan. Since then high-speed rail transport, functioning at speeds up to and above 300 km/h, has been built in Japan, Spain, France, Germany, Italy, the People's Republic of China, Taiwan (Republic of China), the United Kingdom, South Korea, Scandinavia, Belgium and the Netherlands. The construction of many of these lines has resulted in the dramatic decline of short haul flights and automotive traffic between connected cities, such as the London–Paris–Brussels corridor, Madrid–Barcelona, Milan–Rome–Naples, as well as many other major lines.
High-speed trains normally operate on standard gauge tracks of continuously welded rail on grade-separated right-of-way that incorporates a large turning radius in its design. While high-speed rail is most often designed for passenger travel, some high-speed systems also offer freight service.
A train is a connected series of rail vehicles that move along the track. Propulsion for the train is provided by a separate locomotive or from individual motors in self-propelled multiple units. Most trains carry a revenue load, although non-revenue cars exist for the railway's own use, such as for maintenance-of-way purposes. The engine driver (engineer in North America) controls the locomotive or other power cars, although people movers and some rapid transits are under automatic control.
Traditionally, trains are pulled using a locomotive. This involves one or more powered vehicles being located at the front of the train, providing sufficient tractive force to haul the weight of the full train. This arrangement remains dominant for freight trains and is often used for passenger trains. A push–pull train has the end passenger car equipped with a driver's cab so that the engine driver can remotely control the locomotive. This allows one of the locomotive-hauled train's drawbacks to be removed, since the locomotive need not be moved to the front of the train each time the train changes direction. A railroad car is a vehicle used for the haulage of either passengers or freight.
A multiple unit has powered wheels throughout the whole train. These are used for rapid transit and tram systems, as well as many short- and long-haul passenger trains. A railcar is a single, self-powered car, and may be electrically propelled or powered by a diesel engine. Multiple units have a driver's cab at each end of the unit, and were developed following the ability to build electric motors and engines small enough to fit under the coach. There are only a few freight multiple units, most of which are high-speed post trains.
Steam locomotives are locomotives with a steam engine that provides their motive power. Coal, petroleum, or wood is burned in a firebox, boiling water in the boiler to create pressurized steam. The steam travels through the smokebox before leaving via the chimney or smoke stack. In the process, it powers a piston that transmits power directly through a connecting rod (US: main rod) and a crankpin (US: wristpin) on the driving wheel (US: main driver) or to a crank on a driving axle. Steam locomotives have been phased out in most parts of the world for economic and safety reasons, although many are preserved in working order by heritage railways.
Electric locomotives draw power from a stationary source via an overhead wire or third rail. Some also or instead use a battery. In locomotives that are powered by high voltage alternating current, a transformer in the locomotive converts the high voltage, low current power to low voltage, high current used in the traction motors that power the wheels. Modern locomotives may use three-phase AC induction motors or direct current motors. Under certain conditions, electric locomotives provide the most powerful traction. They are also the cheapest to run, and produce less noise and no local air pollution. However, they require high capital investment both in the overhead lines and the supporting infrastructure, as well as in the generating stations needed to produce electricity. Accordingly, electric traction is used on urban systems, lines with high traffic, and high-speed rail.
Diesel locomotives use a diesel engine as the prime mover. The energy transmission may be either diesel-electric, diesel-mechanical or diesel-hydraulic but diesel-electric is dominant. Electro-diesel locomotives are built to run as diesel-electric on unelectrified sections and as electric locomotives on electrified sections.
Alternative methods of motive power include magnetic levitation, horse-drawn, cable, gravity, pneumatics and gas turbine.
A passenger train travels between stations where passengers may embark and disembark. Oversight of the train is the duty of a guard/train manager/conductor. Passenger trains are part of public transport and often form the core of the service, with buses feeding passengers to stations. Passenger trains provide long-distance intercity travel, daily commuter trips, or local urban transit services, and encompass a diversity of vehicles, operating speeds, right-of-way requirements, and service frequencies. Passenger trains can usually be divided into two operations: intercity railway and intracity transit. Whereas intercity railway involves higher speeds, longer routes, and lower frequency (usually scheduled), intracity transit involves lower speeds, shorter routes, and higher frequency (especially during peak hours).
Intercity trains are long-haul trains that operate with few stops between cities. Trains typically have amenities such as a dining car. Some lines also provide over-night services with sleeping cars. Some long-haul trains have been given a specific name. Regional trains are medium distance trains that connect cities with outlying, surrounding areas, or provide a regional service, making more stops and having lower speeds. Commuter trains serve suburbs of urban areas, providing a daily commuting service. Airport rail links provide quick access from city centres to airports.
High-speed rail services are special intercity trains that operate at much higher speeds than conventional railways, the limit being regarded at . High-speed trains are used mostly for long-haul service, and most systems are in Western Europe and East Asia. Magnetic levitation trains such as the Shanghai airport train use under-riding magnets which attract themselves upward towards the underside of a guideway; this line has achieved somewhat higher peak speeds in day-to-day operation than conventional high-speed railways, although only over short distances. Due to their heightened speeds, route alignments for high-speed rail tend to have broader curves than conventional railways, but may have steeper grades that are more easily climbed by trains with large kinetic energy.
Their high kinetic energy translates to higher horsepower-to-ton ratios (e.g. ); this allows trains to accelerate and maintain higher speeds and to negotiate steep grades as momentum is built up and recovered on downgrades (reducing cut, fill, and tunnelling requirements). Since lateral forces act on curves, curvatures are designed with the highest possible radius. All these features are dramatically different from freight operations, thus justifying exclusive high-speed rail lines where economically feasible.
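A rough calculation shows the scale of the kinetic energy involved; the train mass and speed below are illustrative assumptions, not figures from this article.

```python
# Back-of-the-envelope figures for the momentum argument above.
m = 400_000                   # assumed train mass, kg (400 t)
v = 300 / 3.6                 # assumed speed: 300 km/h in m/s
g = 9.81

e_kinetic = 0.5 * m * v ** 2  # kinetic energy, joules
h_equiv = v ** 2 / (2 * g)    # height the train could coast up, metres

print(f"kinetic energy: {e_kinetic / 1e9:.1f} GJ")   # ~1.4 GJ
print(f"equivalent hill height: {h_equiv:.0f} m")    # ~354 m
```

The "equivalent hill height" is the climb the train could make on stored momentum alone, which is why high-speed alignments can trade somewhat steeper grades for broader curves.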
Higher-speed rail services are intercity rail services that have top speeds higher than conventional intercity trains but the speeds are not as high as those in the high-speed rail services. These services are provided after improvements to the conventional rail infrastructure in order to support trains that can operate safely at higher speeds.
Rapid transit is an intracity system built in large cities and has the highest capacity of any passenger transport system. It is usually grade-separated and commonly built underground or elevated. At street level, smaller trams can be used. Light rails are upgraded trams that have step-free access, their own right-of-way and sometimes sections underground. Monorail systems are elevated, medium-capacity systems. A people mover is a driverless, grade-separated train that serves only a few stations, as a shuttle. Due to the lack of uniformity of rapid transit systems, route alignment varies, with diverse rights-of-way (private land, side of road, street median) and geometric characteristics (sharp or broad curves, steep or gentle grades). For instance, the Chicago 'L' trains are designed with extremely short cars to negotiate the sharp curves in the Loop. New Jersey's PATH has similar-sized cars to accommodate curves in the trans-Hudson tunnels. San Francisco's BART operates large cars on its routes.
A freight train hauls cargo using freight cars specialized for the type of goods. Freight trains are very efficient, with economies of scale and high energy efficiency. However, their use can be limited by a lack of flexibility if transshipment is required at both ends of the trip because tracks do not reach the points of pick-up and delivery. Authorities often encourage the use of cargo rail transport due to its safety and environmental record.
Container trains have become the dominant type in the US for non-bulk haulage. Containers can easily be transshipped to other modes, such as ships and trucks, using cranes. This has succeeded the boxcar (wagon-load), where the cargo had to be loaded and unloaded into the train manually. The intermodal containerization of cargo has revolutionized the supply chain logistics industry, reducing shipping costs significantly. In Europe, the sliding wall wagon has largely superseded the ordinary covered wagon. Other types of cars include refrigerator cars, stock cars for livestock and autoracks for road vehicles. When rail is combined with road transport, a roadrailer allows trailers to be driven onto the train, allowing for easy transition between road and rail.
Bulk handling represents a key advantage for rail transport. Low or even zero transshipment costs combined with energy efficiency and low inventory costs allow trains to handle bulk much cheaper than by road. Typical bulk cargo includes coal, ore, grains and liquids. Bulk is transported in open-topped cars, hopper cars and tank cars.
Railway tracks are laid upon land owned or leased by the railway company. Owing to the desirability of maintaining modest grades, rails will often be laid in circuitous routes in hilly or mountainous terrain. Route length and grade requirements can be reduced by the use of alternating cuttings, bridges and tunnels – all of which can greatly increase the capital expenditures required to develop a right of way, while significantly reducing operating costs and allowing higher speeds on longer radius curves. In densely urbanized areas, railways are sometimes laid in tunnels to minimize the effects on existing properties.
Track consists of two parallel steel rails, anchored perpendicular to members called ties (sleepers) of timber, concrete, steel, or plastic to maintain a consistent distance apart, or rail gauge. Rail gauges are usually categorized as standard gauge (used on approximately 55% of the world's existing railway lines), broad gauge, and narrow gauge. In addition to the rail gauge, the tracks will be laid to conform with a loading gauge, which defines the maximum height and width of railway vehicles and their loads to ensure safe passage through bridges, tunnels and other structures.
The track guides the conical, flanged wheels, keeping the cars on the track without active steering and therefore allowing trains to be much longer than road vehicles. The rails and ties are usually placed on a foundation made of compressed earth on top of which is placed a bed of ballast to distribute the load from the ties and to prevent the track from buckling as the ground settles over time under the weight of the vehicles passing above.
The ballast also serves as a means of drainage. Some more modern track in special areas is attached by direct fixation without ballast. Track may be prefabricated or assembled in place. By welding rails together to form lengths of continuous welded rail, additional wear and tear on rolling stock caused by the small surface gap at the joints between rails can be counteracted; this also makes for a quieter ride.
On curves the outer rail may be at a higher level than the inner rail. This is called superelevation or cant. This reduces the forces tending to displace the track and makes for a more comfortable ride for standing livestock and standing or seated passengers. A given amount of superelevation is most effective over a limited range of speeds.
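The equilibrium cant for a curve follows from balancing the train's weight against the centrifugal force at the design speed. A minimal sketch, where the gauge, speed and radius are assumed design values rather than figures from this article:

```python
# Equilibrium cant: the superelevation at which, for a given speed and
# curve radius, the resultant force is perpendicular to the track plane.
gauge = 1.435          # standard gauge, metres (approximation)
v = 200 / 3.6          # assumed design speed: 200 km/h in m/s
radius = 3_500.0       # assumed curve radius, metres
g = 9.81

cant = gauge * v ** 2 / (g * radius)              # metres
print(f"equilibrium cant: {cant * 1000:.0f} mm")  # ~129 mm
```

Running well below the design speed on such a curve shifts load onto the inner rail, which is one reason a given cant suits only a limited range of speeds.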
Turnouts, also known as points and switches, are the means of directing a train onto a diverging section of track. Laid similar to normal track, a point typically consists of a frog (common crossing), check rails and two switch rails. The switch rails may be moved left or right, under the control of the signalling system, to determine which path the train will follow.
Spikes in wooden ties can loosen over time, but split and rotten ties may be individually replaced with new wooden ties or concrete substitutes. Concrete ties can also develop cracks or splits, and can also be replaced individually. Should the rails settle due to soil subsidence, they can be lifted by specialized machinery and additional ballast tamped under the ties to level the rails.
Periodically, ballast must be removed and replaced with clean ballast to ensure adequate drainage. Culverts and other passages for water must be kept clear lest water is impounded by the trackbed, causing landslips. Where trackbeds are placed along rivers, additional protection is usually placed to prevent streambank erosion during times of high water. Bridges require inspection and maintenance, since they are subject to large surges of stress in a short period of time when a heavy train crosses.
The inspection of railway equipment is essential for the safe movement of trains. Many types of defect detectors are in use on the world's railroads. These devices use technologies ranging from a simple paddle and switch to infrared and laser scanning, and even ultrasonic audio analysis. Their use has prevented many rail accidents over the roughly 70 years they have been deployed.
Railway signalling is a system used to control railway traffic safely to prevent trains from colliding. Being guided by fixed rails which generate low friction, trains are uniquely susceptible to collision since they frequently operate at speeds that do not enable them to stop quickly or within the driver's sighting distance; road vehicles, which encounter a higher level of friction between their rubber tyres and the road surface, have much shorter braking distances. Most forms of train control involve movement authority being passed from those responsible for each section of a rail network to the train crew. Not all methods require the use of signals, and some systems are specific to single track railways.
The signalling process is traditionally carried out in a signal box, a small building that houses the lever frame required for the signalman to operate switches and signal equipment. These are placed at various intervals along the route of a railway, controlling specified sections of track. More recent technological developments have made such operational doctrine superfluous, with the centralization of signalling operations to regional control rooms. This has been facilitated by the increased use of computers, allowing vast sections of track to be monitored from a single location. The common method of block signalling divides the track into zones guarded by combinations of block signals, operating rules, and automatic-control devices so that only one train may be in a block at any time.
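At its core, the block principle described above is a mutual-exclusion rule: a train receives movement authority only if the block ahead is unoccupied. A minimal sketch of that rule (class and method names are this example's own, not railway terminology):

```python
# Fixed-block occupancy logic: at most one train per block at any time.
class BlockSection:
    def __init__(self, name: str):
        self.name = name
        self.occupant = None          # train currently in the block

    def request_entry(self, train: str) -> bool:
        """Grant movement authority only if the block is clear."""
        if self.occupant is None:
            self.occupant = train
            return True
        return False                  # signal stays at danger

    def release(self) -> None:
        self.occupant = None          # train has cleared the block

block = BlockSection("B12")
print(block.request_entry("IC-101"))  # True  - authority granted
print(block.request_entry("FR-202"))  # False - block occupied
block.release()
print(block.request_entry("FR-202"))  # True
```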
The electrification system provides electrical energy to the trains, so they can operate without a prime mover on board. This allows lower operating costs, but requires large capital investment along the lines. Mainline and tram systems normally have overhead wires, which hang from poles along the line. Grade-separated rapid transit sometimes uses a ground-level third rail.
Power may be fed as direct (DC) or alternating current (AC). The most common DC voltages are 600 and 750 V for tram and rapid transit systems, and 1,500 and 3,000 V for mainlines. The two dominant AC systems are 15 kV and 25 kV.
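These voltage choices reflect simple electrical arithmetic: for a given power demand, a higher supply voltage draws proportionally less current, and resistive losses in the feeder fall with the square of the current. A sketch with an assumed 4 MW locomotive and feeder resistance (AC power factor is ignored):

```python
# Rough current draw and feeder loss at common supply voltages.
P = 4_000_000    # assumed locomotive power demand, watts
R_LINE = 0.1     # assumed feeder resistance, ohms

for volts in (750, 1_500, 3_000, 15_000, 25_000):
    current = P / volts             # I = P / V
    loss = current ** 2 * R_LINE    # I^2 R feeder loss, watts
    print(f"{volts:>6} V: {current:7.0f} A, loss ~{loss / 1e3:8.1f} kW")
```

This is one reason long mainlines favour 15 or 25 kV AC, while lower DC voltages remain practical for dense urban networks with closely spaced substations.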
A railway station serves as an area where passengers can board and alight from trains. A goods station is a yard which is exclusively used for loading and unloading cargo. Large passenger stations have at least one building providing conveniences for passengers, such as purchasing tickets and food. Smaller stations typically only consist of a platform. Early stations were sometimes built with both passenger and goods facilities.
Platforms are used to allow easy access to the trains, and are connected to each other via underpasses, footbridges and level crossings. Some large stations are built as culs-de-sac, with trains only operating out from one direction. Smaller stations normally serve local residential areas, and may have connection to feeder bus services. Large stations, in particular central stations, serve as the main public transport hub for the city, and have transfer available between rail services, and to rapid transit, tram or bus services.
Since the 1980s, there has been an increasing trend to split up railway companies, with companies owning the rolling stock separated from those owning the infrastructure. This is particularly true in Europe, where this arrangement is required by the European Union. This has allowed open access by any train operator to any portion of the European railway network. In the UK, the railway track is state owned, with a public controlled body (Network Rail) running, maintaining and developing the track, while Train Operating Companies have run the trains since privatization in the 1990s.
In the U.S., virtually all rail networks and infrastructure outside the Northeast Corridor are privately owned by freight lines. Passenger lines, primarily Amtrak, operate as tenants on the freight lines. Consequently, operations must be closely synchronized and coordinated between freight and passenger railroads, with passenger trains often being dispatched by the host freight railroad. Due to this shared system, both are regulated by the Federal Railroad Administration (FRA) and may follow the AREMA recommended practices for track work and AAR standards for vehicles.
The main source of income for railway companies is from ticket revenue (for passenger transport) and shipment fees for cargo. Discounts and monthly passes are sometimes available for frequent travellers (e.g. season ticket and rail pass). Freight revenue may be sold per container slot or for a whole train. Sometimes, the shipper owns the cars and only rents the haulage. For passenger transport, advertisement income can be significant.
Governments may choose to give subsidies to rail operation, since rail transport has fewer externalities than other dominant modes of transport. If the railway company is state-owned, the state may simply provide direct subsidies in exchange for increased production. If operations have been privatized, several options are available. Some countries have a system where the infrastructure is owned by a government agency or company, with open access to the tracks for any company that meets safety requirements. In such cases, the state may choose to provide the tracks free of charge, or for a fee that does not cover all costs. This is seen as analogous to the government providing free access to roads. For passenger operations, a direct subsidy may be paid to a publicly owned operator, or a public service obligation tender may be held, with a time-limited contract awarded to the lowest bidder. Total EU rail subsidies amounted to €73 billion in 2005.
Amtrak, the US passenger rail service, and Canada's Via Rail are private railroad companies chartered by their respective national governments. As private passenger services declined because of competition from automobiles and airlines, the former operators became shareholders of Amtrak, either by paying a cash entrance fee or by relinquishing their locomotives and rolling stock. The government subsidizes Amtrak by supplying start-up capital and making up for losses at the end of the fiscal year.
Trains can travel at very high speed, but they are heavy, are unable to deviate from the track and require a great distance to stop. Possible accidents include derailment (jumping the track), a collision with another train or collision with automobiles, other vehicles or pedestrians at level crossings. The last accounts for the majority of rail accidents and casualties. The most important safety measures to prevent accidents are strict operating rules, e.g. railway signalling and gates or grade separation at crossings. Train whistles, bells or horns warn of the presence of a train, while trackside signals maintain the distances between trains.
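The stopping-distance problem can be quantified with the constant-deceleration estimate d = v²/(2a); the deceleration values below are typical assumptions rather than sourced figures.

```python
# Why trains need long stopping distances: d = v^2 / (2a).
def stopping_distance(speed_kmh: float, decel: float) -> float:
    v = speed_kmh / 3.6              # convert to m/s
    return v ** 2 / (2 * decel)      # metres

# Assumed decelerations: ~0.7 m/s^2 for a service-braking train,
# ~7 m/s^2 for a car braking hard on dry pavement.
print(f"train at 160 km/h: {stopping_distance(160, 0.7):.0f} m")  # ~1411 m
print(f"car   at 100 km/h: {stopping_distance(100, 7.0):.0f} m")  # ~55 m
```

A ratio of this order is why train control relies on signalling and block occupancy rather than on driving by sight.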
An important element in the safety of many high-speed inter-city networks such as Japan's Shinkansen is the fact that trains only run on dedicated railway lines, without level crossings. This effectively eliminates the potential for collision with automobiles, other vehicles or pedestrians, vastly reduces the likelihood of collision with other trains and helps ensure services remain timely.
As with any infrastructure asset, railways must keep up with periodic inspection and maintenance in order to minimize the effect of infrastructure failures that can disrupt freight revenue operations and passenger services. Because passengers are considered the most crucial cargo, and passenger lines usually operate at higher speeds, steeper grades, and higher capacity/frequency, these lines are especially important. Inspection practices include track geometry cars and walking inspections. Curve maintenance, especially for transit services, includes gauging, fastener tightening, and rail replacement.
Rail corrugation is a common issue with transit systems due to the high number of light-axle wheel passages, which result in grinding of the wheel/rail interface. Since maintenance may overlap with operations, maintenance windows (nighttime hours, off-peak hours, altering train schedules or routes) must be closely followed. In addition, passenger safety during maintenance work (inter-track fencing, proper storage of materials, track work notices, hazards of equipment near passengers) must be regarded at all times. At times, maintenance access problems can emerge due to tunnels, elevated structures, and congested cityscapes. Here, specialized equipment or smaller versions of conventional maintenance gear are used.
Unlike highways or road networks, where capacity is disaggregated into unlinked trips over individual route segments, railway capacity is fundamentally a network property. As a result, many components are both causes and effects of system disruptions. Maintenance planning must account for a route's performance profile (type of train service, origination/destination, seasonal impacts), the line's capacity (length, terrain, number of tracks, type of train control), train throughput (maximum speeds, acceleration/deceleration rates), and service features of shared passenger-freight tracks (sidings, terminal capacities, switching routes, and design type).
Rail transport is an energy-efficient but capital-intensive means of mechanized land transport. The tracks provide smooth and hard surfaces on which the wheels of the train can roll with a relatively low level of friction. Moving a vehicle on or through a medium (land, sea, or air) requires that it overcome resistance to its motion caused by friction. A land vehicle's total resistance (in pounds or newtons) is a quadratic function of the vehicle's speed, of the form of the classic Davis equation:

R = a + bv + cv²

where R is the total resistance, v is the vehicle's speed, and a, b and c are empirically determined coefficients accounting, respectively, for rolling resistance, mechanical and flange resistance, and aerodynamic drag.
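The formula lends itself to direct computation. The sketch below is illustrative only: the coefficient values are hypothetical placeholders, not measured figures for any real train.

    # Quadratic (Davis-type) train resistance from the formula above.
    # The coefficients a, b, c are hypothetical placeholders, not measured values.
    def total_resistance(v, a=1500.0, b=30.0, c=2.5):
        """Total resistance in newtons at speed v (m/s): R = a + b*v + c*v**2."""
        return a + b * v + c * v ** 2

    for v in (5.0, 20.0, 40.0):  # roughly 18, 72 and 144 km/h
        print(f"v = {v:4.1f} m/s -> R = {total_resistance(v):7.1f} N")

At low speed the constant and linear terms dominate; at high speed the quadratic (aerodynamic) term takes over, which is why streamlining matters most for fast trains.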
Essentially, resistance differs with the nature of the contact between the wheel and the running surface. Metal wheels on metal rails have a significant advantage in overcoming resistance compared to rubber-tyred wheels on any road surface (railway 0.001g at and 0.024g at ; truck 0.009g at and 0.090g at ). In terms of cargo capacity, combining the speed and size of loads moved in a day:
In terms of the horsepower-to-weight ratio, a slow-moving barge requires , a railway or pipeline requires , and a truck requires . However, at higher speeds, a railway overtakes the barge and proves the most economical.
As an example, a typical modern wagon can hold up to of freight on two four-wheel bogies. The track distributes the weight of the train evenly, allowing significantly greater loads per axle and wheel than in road transport, leading to less wear and tear on the permanent way. This can save energy compared with other forms of transport, such as road transport, which depends on the friction between rubber tyres and the road. Trains have a small frontal area in relation to the load they are carrying, which reduces air resistance and thus energy usage.
In addition, the presence of track guiding the wheels allows very long trains to be pulled by one or a few engines and driven by a single operator, even around curves, which allows for economies of scale in both manpower and energy use; by contrast, in road transport, more than two articulations cause fishtailing and make the vehicle unsafe.
Considering only the energy spent to move the means of transport, and using the example of the urban area of Lisbon, electric trains appear on average about 20 times more efficient than automobiles for the transportation of passengers, if energy spent per passenger-distance is compared at similar occupancy ratios. An automobile consumes around of fuel, the average car in Europe carries around 1.2 passengers (an occupancy ratio of about 24%), and one litre of fuel amounts to about , equating to an average of per passenger-km. This compares to a modern train with an average occupancy of 20% and a consumption of about , equating to per passenger-km, 20 times less than the automobile.
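Because the exact consumption figures are elided above, the sketch below uses hypothetical placeholder values, chosen only so the arithmetic lands near the quoted ratio; the method (energy per vehicle-km divided by passengers carried) is the point, not the numbers.

    # All figures are hypothetical placeholders (the text's exact numbers are
    # elided); only the per-passenger-km arithmetic mirrors the comparison.
    FUEL_MJ_PER_LITRE = 35.0  # approximate energy content of petrol

    def car_mj_per_pkm(litres_per_100km=8.0, occupants=1.2):
        return litres_per_100km / 100.0 * FUEL_MJ_PER_LITRE / occupants

    def train_mj_per_pkm(mj_per_train_km=14.0, seats=600, occupancy=0.2):
        return mj_per_train_km / (seats * occupancy)

    car, train = car_mj_per_pkm(), train_mj_per_pkm()
    print(f"car:   {car:.2f} MJ/passenger-km")
    print(f"train: {train:.2f} MJ/passenger-km")
    print(f"ratio: {car / train:.0f}x")  # ~20x with these placeholder values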
Due to these benefits, rail transport is a major form of passenger and freight transport in many countries. It is ubiquitous in Europe, with an integrated network covering virtually the whole continent. In India, China, South Korea and Japan, many millions use trains as regular transport. In North America, freight rail transport is widespread and heavily used, but intercity passenger rail transport is relatively scarce outside the Northeast Corridor, due to increased preference of other modes, particularly automobiles and airplanes.
South Africa, northern Africa and Argentina have extensive rail networks, but some railways elsewhere in Africa and South America are isolated lines. Australia has a generally sparse network befitting its population density but has some areas with significant networks, especially in the southeast. In addition to the previously existing east–west transcontinental line in Australia, a line from north to south has been constructed. The highest railway in the world is the line to Lhasa, in Tibet, partly running over permafrost territory. Western Europe has the highest railway density in the world and many individual trains there operate through several countries despite technical and organizational differences in each national network.
Railways are central to the formation of modernity and ideas of progress. The process of modernization in the 19th century involved a transition from a spatially oriented world to a time-oriented world. Exact time was essential, and everyone had to know what the time was, resulting in clock towers for railway stations, clocks in public places, and pocket watches for railway workers and for travelers. Trains left on time (they never left early). By contrast, in the premodern era, passenger ships left when the captain had enough passengers. In the premodern era, local time was set at noon, when the sun was at its highest. Every place east to west had a different time, and that changed with the introduction of standard time zones. Printed timetables were a convenience for travelers, but more elaborate timetables, called train orders, were even more essential for the train crews, the maintenance workers, the station personnel, and the repair and maintenance crews, who knew when to expect a train would come along. Most trackage was single track, with sidings and signals to allow lower-priority trains to be sidetracked. Schedules told everyone what to do, where to be, and exactly when. If bad weather disrupted the system, telegraphers relayed immediate corrections and updates throughout the system. Just as railways as business organizations created the standards and models for modern big business, so too the railway timetable was adapted to myriad uses, such as schedules for buses, ferries, and airplanes, for radio and television programs, for school schedules, and for factory time clocks. The modern world was ruled by the clock and the timetable.
According to historian Henry Adams the system of railroads needed:
The impact can be examined through five aspects: shipping, finance, management, careers, and popular reaction.
First, they provided a highly efficient network for shipping freight and passengers across a large national market. The result was a transforming impact on most sectors of the economy, including manufacturing, retail and wholesale, agriculture, and finance. The United States now had an integrated national market practically the size of Europe, with no internal barriers or tariffs, all supported by a common language, a common financial system, and a common legal system.
Railroad financing provided the basis for a dramatic expansion of the private (non-governmental) financial system. Construction of railroads was far more expensive than that of factories. In 1860, the combined total of railroad stocks and bonds was $1.8 billion; by 1897 it reached $10.6 billion (compared to a total national debt of $1.2 billion).
Funding came from financiers throughout the Northeast, and from Europe, especially Britain. About 10 percent of the funding came from the government, especially in the form of land grants that could be realized when a certain amount of trackage was opened. The emerging American financial system was based on railroad bonds. By 1860, New York was the dominant financial market. The British invested heavily in railroads around the world, but nowhere more so than in the United States; the total came to about $3 billion by 1914. In 1914–1917, they liquidated their American assets to pay for war supplies.
Railroad management designed complex systems that could handle far more complicated simultaneous relationships than could be dreamed of by the local factory owner who could patrol every part of his own factory in a matter of hours. Civil engineers became the senior management of railroads. The leading American innovators were the Western Railroad of Massachusetts and the Baltimore and Ohio Railroad in the 1840s, the Erie in the 1850s and the Pennsylvania in the 1860s.
The railroads invented the career path in the private sector for both blue-collar workers and white-collar workers. Railroading became a lifetime career for young men; women were almost never hired. A typical career path would see a young man hired at age 18 as a shop laborer, promoted to skilled mechanic at age 24, brakeman at 25, freight conductor at 27, and passenger conductor at age 57. White-collar career paths likewise were delineated. Educated young men started in clerical or statistical work and moved up to station agents or bureaucrats at the divisional or central headquarters. At each level they accumulated more knowledge, experience, and human capital. They were very hard to replace, and were virtually guaranteed permanent jobs and provided with insurance and medical care. Hiring, firing, and wage rates were set not by foremen, but by central administrators, in order to minimize favoritism and personality conflicts. Everything was done by the book, whereby an increasingly complex set of rules dictated to everyone exactly what should be done in every circumstance, and exactly what their rank and pay would be. By the 1880s the career railroaders were retiring, and pension systems were invented for them.
Railways contribute to social vibrancy and economic competitiveness by transporting multitudes of customers and workers to city centres and inner suburbs. Hong Kong has recognized rail as "the backbone of the public transit system" and as such developed their franchised bus system and road infrastructure in comprehensive alignment with their rail services. China's large cities such as Beijing, Shanghai, and Guangzhou recognize rail transit lines as the framework and bus lines as the main body to their metropolitan transportation systems. The Japanese Shinkansen was built to meet the growing traffic demand in the "heart of Japan's industry and economy" situated on the Tokyo-Kobe line.
In the 1863–70 decade the heavy use of railways in the American Civil War, and in Germany's wars against Austria and France, provided a speed of movement unheard of in the days of horses. During much of the 20th century, rail was a key element of war plans for rapid military mobilization, allowing for the quick and efficient transport of large numbers of reservists to their mustering points, and of infantry soldiers to the front lines. The Western Front in France during World War I required many trainloads of munitions a day. Rail yards and bridges in Germany and occupied France were major targets of Allied air power in World War II. By the 21st century, however, rail transport, limited to locations on the same continent and vulnerable to air attack, had largely been displaced by the adoption of aerial transport.
Railways channel growth towards dense city agglomerations and along their arteries. This stands in contrast to highway expansion, characteristic of U.S. transportation policy, which encourages development of suburbs at the periphery and contributes to increased vehicle miles travelled, carbon emissions, development of greenfield spaces, and depletion of natural reserves. Rail-oriented arrangements revalue city spaces, raise local tax receipts and housing values, and promote mixed-use development.
The construction of the first railway of the Austro-Hungarian empire, from Vienna to Prague, came in 1837–1842, to promises of new prosperity. Construction proved more costly than anticipated, and the line brought in less revenue than expected because local industry lacked a national market. In town after town the arrival of the railway angered the locals because of the noise, smell, and pollution caused by the trains and the damage to homes and the surrounding land caused by the engines' soot and fiery embers; and since most travel was very local, ordinary people seldom used the new line.
A 2018 study found that the opening of the Beijing Metro caused a reduction in "most of the air pollutants concentrations (PM2.5, PM10, SO2, NO2, and CO) but had little effect on ozone pollution."
European development economists have argued that the existence of modern rail infrastructure is a significant indicator of a country's economic advancement: this perspective is illustrated notably through the Basic Rail Transportation Infrastructure Index (known as BRTI Index).
In 2014, total rail spending by China was $130 billion and is likely to remain at a similar rate for the rest of the country's next Five Year Period (2016–2020).
The Indian railways are subsidized by around , of which around 60% goes to commuter rail and short-haul trips.
According to the 2017 European Railway Performance Index for intensity of use, quality of service and safety performance, the top tier European national rail systems consists of Switzerland, Denmark, Finland, Germany, Austria, Sweden, and France. Performance levels reveal a positive correlation between public cost and a given railway system's performance, and also reveal differences in the value that countries receive in return for their public cost. Denmark, Finland, France, Germany, the Netherlands, Sweden, and Switzerland capture relatively high value for their money, while Luxembourg, Belgium, Latvia, Slovakia, Portugal, Romania, and Bulgaria underperform relative to the average ratio of performance to cost among European countries.
In 2016 Russian Railways received 94.9 billion roubles (around US$1.4 billion) from the government.
In 2015, funding from the U.S. federal government for Amtrak was around US$1.4 billion. By 2018, appropriated funding had increased to approximately US$1.9 billion. | https://en.wikipedia.org/wiki?curid=25715 |
Refreshable braille display
A refreshable braille display or braille terminal is an electro-mechanical device for displaying braille characters, usually by means of round-tipped pins raised through holes in a flat surface. Visually impaired computer users who cannot use a computer monitor can use it to read text output. Deafblind computer users may also use refreshable braille displays.
Speech synthesizers are also commonly used for the same task, and a blind user may switch between the two systems or use both at the same time depending on circumstances.
The base of a refreshable braille display often integrates a pure braille keyboard. Similar to the Perkins Brailler, the input is performed by two sets of four keys on each side, while output is via a refreshable braille display consisting of a row of electro-mechanical character cells, each of which can raise or lower a combination of eight round-tipped pins. Other variants exist that use a conventional QWERTY keyboard for input and braille pins for output, as well as input-only and output-only devices.
The mechanism which raises the dots uses the piezoelectric effect of certain crystals, whereby they expand when a voltage is applied to them. Such a crystal is connected to a lever, which in turn raises the dot. There has to be a crystal for each dot of the display, i.e., eight per character.
Because of the complexity of producing a reliable display that will cope with daily wear and tear, these displays are expensive. Usually, only 40 or 80 braille cells are displayed. Models with between 18 and 40 cells exist in some notetaker devices.
On some models the position of the cursor is represented by vibrating the dots, and some models have a switch associated with each cell to move the cursor to that cell directly.
The software that controls the display is called a screen reader. It gathers the content of the screen from the operating system, converts it into braille characters and sends it to the display.
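As a minimal sketch of that final step, the following maps a few characters to braille cells via the Unicode braille-pattern block (U+2800), in which dot n of a cell corresponds to bit n-1. The letter patterns are standard grade-1 braille; a real screen reader would use a full translation table (e.g. liblouis) rather than this toy map.

    # Letter -> braille dots (standard grade-1 patterns); unknown characters
    # fall back to the blank cell.
    DOTS = {
        'a': (1,), 'b': (1, 2), 'c': (1, 4), 'd': (1, 4, 5), 'e': (1, 5),
        'i': (2, 4), 'l': (1, 2, 3), 'r': (1, 2, 3, 5), ' ': (),
    }

    def to_braille(text):
        cells = []
        for ch in text.lower():
            mask = 0
            for dot in DOTS.get(ch, ()):
                mask |= 1 << (dot - 1)        # dot n -> bit n-1
            cells.append(chr(0x2800 + mask))  # U+2800 is the blank cell
        return ''.join(cells)

    print(to_braille("braille"))  # one output character per 8-dot cell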
Screen readers for graphical operating systems are especially complex, because graphical elements like windows or slidebars have to be interpreted and described in text form. Modern operating systems usually have an API to help screen readers obtain this information, such as UI Automation (UIA) for Microsoft Windows, VoiceOver for macOS and iOS, and AT-SPI for GNOME.
A rotating-wheel braille display was developed in 2000 by the National Institute of Standards and Technology (NIST), and another at the University of Leuven in Belgium. Both wheels are still in the process of commercialization. In these units, braille dots are put on the edge of a spinning wheel, which allows the user to read continuously with a stationary finger while the wheel spins at a selected speed. The braille dots are set in a simple scanning-style fashion as the dots on the wheel spin past a stationary actuator that sets the braille characters. As a result, manufacturing complexity is greatly reduced, and rotating-wheel braille displays, when in actual production, should be less expensive than traditional braille displays. | https://en.wikipedia.org/wiki?curid=25716 |
Regular expression
A regular expression (shortened as regex or regexp; also referred to as a rational expression) is a sequence of characters that defines a "search pattern". Usually such patterns are used by string-searching algorithms for "find" or "find and replace" operations on strings, or for input validation. It is a technique developed in theoretical computer science and formal language theory.
The concept arose in the 1950s when the American mathematician Stephen Cole Kleene formalized the description of a "regular language". The concept came into common use with Unix text-processing utilities. Different syntaxes for writing regular expressions have existed since the 1980s, one being the POSIX standard and another, widely used, being the Perl syntax.
Regular expressions are used in search engines, search and replace dialogs of word processors and text editors, in text processing utilities such as sed and AWK and in lexical analysis. Many programming languages provide regex capabilities either built-in or via libraries.
The phrase "regular expressions", also called "regexes", is often used to mean the specific, standard textual syntax for representing patterns for matching text, as distinct from the mathematical notation described below. Each character in a regular expression (that is, each character in the string describing its pattern) is either a metacharacter, having a special meaning, or a regular character that has a literal meaning. For example, in the regex codice_1, "a" is a literal character which matches just 'a', while '.' is a metacharacter that matches every character except a newline. Therefore, this regex matches, for example, 'a ', or 'ax', or 'a0'. Together, metacharacters and literal characters can be used to identify text of a given pattern, or process a number of instances of it. Pattern matches may vary from a precise equality to a very general similarity, as controlled by the metacharacters. For example, codice_2 is a very general pattern, codice_3 (match all lower case letters from 'a' to 'z') is less general and codice_4 is a precise pattern (matches just 'a'). The metacharacter syntax is designed specifically to represent prescribed targets in a concise and flexible way to direct the automation of text processing of a variety of input data, in a form easy to type using a standard ASCII keyboard.
A very simple case of a regular expression in this syntax is to locate a word spelled two different ways in a text editor: the regular expression codice_5 matches both "serialise" and "serialize". Wildcards also achieve this, but are more limited in what they can pattern, as they have fewer metacharacters and a simpler language base.
The usual context of wildcard characters is in globbing similar names in a list of files, whereas regexes are usually employed in applications that pattern-match text strings in general. For example, the regex ^[ \t]+|[ \t]+$ matches excess whitespace at the beginning or end of a line. An advanced regular expression that matches any numeral is [+-]?(\d+(\.\d+)?|\.\d+)([eE][+-]?\d+)?.
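Both of the expressions just quoted run unchanged in any Perl-style engine; a minimal demonstration using Python's re module:

    import re

    trim = re.compile(r'^[ \t]+|[ \t]+$')
    print(repr(trim.sub('', '   padded line\t')))  # 'padded line'

    number = re.compile(r'[+-]?(\d+(\.\d+)?|\.\d+)([eE][+-]?\d+)?')
    for s in ('42', '-3.14', '.5', '6.02e23', 'abc'):
        print(s, '->', bool(number.fullmatch(s)))  # the first four match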
A regex processor translates a regular expression in the above syntax into an internal representation which can be executed and matched against a string representing the text being searched in. One possible approach is the Thompson's construction algorithm to construct a nondeterministic finite automaton (NFA), which is then made deterministic and the resulting deterministic finite automaton (DFA) is run on the target text string to recognize substrings that match the regular expression.
The picture shows the NFA scheme codice_6 obtained from the regular expression codice_7, where "s" denotes a simpler regular expression in turn, which has already been recursively translated to the NFA "N"("s").
Regular expressions originated in 1951, when mathematician Stephen Cole Kleene described regular languages using his mathematical notation called "regular events". These arose in theoretical computer science, in the subfields of automata theory (models of computation) and the description and classification of formal languages. Other early implementations of pattern matching include the SNOBOL language, which did not use regular expressions, but instead its own pattern matching constructs.
Regular expressions entered popular use from 1968 in two uses: pattern matching in a text editor and lexical analysis in a compiler. Among the first appearances of regular expressions in program form was when Ken Thompson built Kleene's notation into the editor QED as a means to match patterns in text files. For speed, Thompson implemented regular expression matching by just-in-time compilation (JIT) to IBM 7094 code on the Compatible Time-Sharing System, an important early example of JIT compilation. He later added this capability to the Unix editor ed, which eventually led to the popular search tool grep's use of regular expressions ("grep" is a word derived from the command for regular expression searching in the ed editor: codice_8 meaning "Global search for Regular Expression and Print matching lines"). Around the same time when Thompson developed QED, a group of researchers including Douglas T. Ross implemented a tool based on regular expressions that is used for lexical analysis in compiler design.
Many variations of these original forms of regular expressions were used in Unix programs at Bell Labs in the 1970s, including vi, lex, sed, AWK, and expr, and in other programs such as Emacs. Regexes were subsequently adopted by a wide range of programs, with these early forms standardized in the POSIX.2 standard in 1992.
In the 1980s, more complicated regexes arose in Perl, which originally derived from a regex library written by Henry Spencer (1986), who later wrote an implementation of "Advanced Regular Expressions" for Tcl. The Tcl library is a hybrid NFA/DFA implementation with improved performance characteristics. Software projects that have adopted Spencer's Tcl regular expression implementation include PostgreSQL. Perl later expanded on Spencer's original library to add many new features. Part of the effort in the design of Raku is to improve Perl's regex integration, and to increase their scope and capabilities to allow the definition of parsing expression grammars. The result is a mini-language called Raku rules, which are used to define Raku grammar as well as provide a tool to programmers in the language. These rules maintain existing features of Perl 5.x regexes, but also allow BNF-style definition of a recursive descent parser via sub-rules.
The use of regexes in structured information standards for document and database modeling started in the 1960s and expanded in the 1980s when industry standards like ISO SGML (preceded by ANSI "GCA 101-1983") consolidated. The kernel of the structure-specification-language standards consists of regexes. Its use is evident in the DTD element group syntax.
Starting in 1997, Philip Hazel developed PCRE (Perl Compatible Regular Expressions), which attempts to closely mimic Perl's regex functionality and is used by many modern tools including PHP and Apache HTTP Server.
Today, regexes are widely supported in programming languages, text processing programs (particularly lexers), advanced text editors, and some other programs. Regex support is part of the standard library of many programming languages, including Java and Python, and is built into the syntax of others, including Perl and ECMAScript. An implementation of regex functionality is often called a regex engine, and a number of libraries are available for reuse. In the late 2010s, several companies started to offer hardware, FPGA, and GPU implementations of PCRE-compatible regex engines that are faster than CPU implementations.
A regular expression, often called a "pattern", specifies a set of strings required for a particular purpose. A simple way to specify a finite set of strings is to list its elements or members. However, there are often more concise ways: for example, the set containing the three strings "Handel", "Händel", and "Haendel" can be specified by the pattern codice_9; we say that this pattern "matches" each of the three strings. In most formalisms, if there exists at least one regular expression that matches a particular set then there exists an infinite number of other regular expressions that also match it—the specification is not unique. Most formalisms provide the following operations to construct regular expressions.
The wildcard codice_2 matches any character. For example, codice_15 matches any string that contains an "a", then any other character, and then a "b"; codice_16 matches any string that contains an "a" and a "b" at some later point.
These constructions can be combined to form arbitrarily complex expressions, much like one can construct arithmetical expressions from numbers and the operations +, −, ×, and ÷. For example, codice_17 and are both valid patterns which match the same strings as the earlier example, codice_9.
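Assuming the customary expression H(ä|ae?)ndel for the elided pattern (the concrete form is not reproduced in this text), a quick check with Python's re module:

    import re

    pattern = re.compile(r'H(ä|ae?)ndel')  # assumed form of the elided pattern
    for name in ('Handel', 'Händel', 'Haendel', 'Hndel'):
        print(name, '->', bool(pattern.fullmatch(name)))  # True, True, True, False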
The precise syntax for regular expressions varies among tools and with context; more detail is given in .
Regular expressions describe regular languages in formal language theory. They have the same expressive power as regular grammars.
Regular expressions consist of constants, which denote sets of strings, and operator symbols, which denote operations over these sets. The following definition is standard, and found as such in most textbooks on formal language theory. Given a finite alphabet Σ, the following constants are defined
as regular expressions:
Given regular expressions R and S, the following operations over them are defined
to produce regular expressions:
To avoid parentheses it is assumed that the Kleene star has the highest priority, then concatenation and then alternation. If there is no ambiguity then parentheses may be omitted. For example, codice_20 can be written as codice_21, and codice_22 can be written as codice_23.
Many textbooks use the symbols ∪, +, or ∨ for alternation instead of the vertical bar.
Examples:
The formal definition of regular expressions is minimal on purpose, and avoids defining codice_11 and codice_13—these can be expressed as follows: codice_30 = codice_31, and codice_32 = codice_33. Sometimes the complement operator is added, to give a "generalized regular expression"; here "Rc" matches all strings over Σ* that do not match "R". In principle, the complement operator is redundant, because it doesn't grant any more expressive power. However, it can make a regular expression much more concise—eliminating all complement operators from a regular expression can cause a double exponential blow-up of its length.
Regular expressions in this sense can express the regular languages, exactly the class of languages accepted by deterministic finite automata. There is, however, a significant difference in compactness. Some classes of regular languages can only be described by deterministic finite automata whose size grows exponentially in the size of the shortest equivalent regular expressions. The standard example here is the languages
"Lk" consisting of all strings over the alphabet {"a","b"} whose "k"th-from-last letter equals "a". On one hand, a regular expression describing "L"4 is given by
formula_1.
Generalizing this pattern to "Lk" gives the expression:
formula_2
On the other hand, it is known that every deterministic finite automaton accepting the language "Lk" must have at least 2^k states. Luckily, there is a simple mapping from regular expressions to the more general nondeterministic finite automata (NFAs) that does not lead to such a blowup in size; for this reason NFAs are often used as alternative representations of regular languages. NFAs are a simple variation of the type-3 grammars of the Chomsky hierarchy.
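The expressions behind formula_1 and formula_2 are not reproduced here; the standard form is (a|b)*a(a|b)^(k-1), which the following sketch assumes and checks with Python's re module (a backtracking engine, so it avoids the DFA state blowup):

    import re

    def lk_pattern(k):
        # strings over {a, b} whose k-th letter from the end is 'a'
        return re.compile('(a|b)*a(a|b){%d}' % (k - 1))

    p = lk_pattern(4)
    for s in ('babaabb', 'aabbb', 'bbbb'):
        print(s, '->', bool(p.fullmatch(s)))  # True, True, False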
In the opposite direction, there are many languages easily described by a DFA that are not easily described by a regular expression. For instance, determining the validity of a given ISBN requires computing the value of the number modulo 11, which can easily be implemented with an 11-state DFA. However, a regular expression answering the same problem of divisibility by 11 is at least multiple megabytes in length.
Given a regular expression, Thompson's construction algorithm computes an equivalent nondeterministic finite automaton. A conversion in the opposite direction is achieved by Kleene's algorithm.
Finally, it is worth noting that many real-world "regular expression" engines implement features that cannot be described by the regular expressions in the sense of formal language theory; rather, they implement "regexes". See below for more on this.
As seen in many of the examples above, there is more than one way to construct a regular expression to achieve the same results.
It is possible to write an algorithm that, for two given regular expressions, decides whether the described languages are equal; the algorithm reduces each expression to a minimal deterministic finite state machine, and determines whether they are isomorphic (equivalent).
Algebraic laws for regular expressions can be obtained using a method by Gischer which is best explained along an example: In order to check whether ("X"+"Y")* and ("X"* "Y"*)* denote the same regular language, for all regular expressions "X", "Y", it is necessary and sufficient to check whether the particular regular expressions ("a"+"b")* and ("a"* "b"*)* denote the same language over the alphabet Σ={"a","b"}. More generally, an equation "E"="F" between regular-expression terms with variables holds if, and only if, its instantiation with different variables replaced by different symbol constants holds.
The redundancy can be eliminated by using the Kleene star and set union to find an interesting subset of regular expressions that is still fully expressive, but perhaps their use can be restricted. This is a surprisingly difficult problem. Simple as regular expressions are, there is no method to systematically rewrite them to some normal form. The historical lack of axioms led to the star height problem. In 1991, Dexter Kozen axiomatized regular expressions as a Kleene algebra, using equational and Horn clause axioms.
Already in 1964, Redko had proved that no finite set of purely equational axioms can characterize the algebra of regular languages.
A regex "pattern" matches a target "string". The pattern is composed of a sequence of "atoms". An atom is a single point within the regex pattern which it tries to match to the target string. The simplest atom is a literal, but grouping parts of the pattern to match an atom will require using codice_34 as metacharacters. Metacharacters help form: "atoms"; "quantifiers" telling how many atoms (and whether it is a "greedy" quantifier or not); a logical OR character, which offers a set of alternatives, and a logical NOT character, which negates an atom's existence; and backreferences to refer to previous atoms of a completing pattern of atoms. A match is made, not when all the atoms of the string are matched, but rather when all the pattern atoms in the regex have matched. The idea is to make a small pattern of characters stand for a large number of possible strings, rather than compiling a large list of all the literal possibilities.
Depending on the regex processor there are about fourteen metacharacters, characters that may or may not have their literal character meaning, depending on context, or whether they are "escaped", i.e. preceded by an escape sequence, in this case, the backslash codice_35. Modern and POSIX extended regexes use metacharacters more often than their literal meaning, so to avoid "backslash-osis" or leaning toothpick syndrome it makes sense to have a metacharacter escape to a literal mode; but starting out, it makes more sense to have the four bracketing metacharacters codice_34 and codice_37 be primarily literal, and "escape" this usual meaning to become metacharacters. Common standards implement both. The usual metacharacters are codice_38 and codice_35. The usual characters that become metacharacters when escaped are codice_40 and codice_41.
When entering a regex in a programming language, they may be represented as a usual string literal, hence usually quoted; this is common in C, Java, and Python for instance, where the regex codice_42 is entered as codice_43. However, they are often written with slashes as delimiters, as in codice_44 for the regex codice_42. This originates in ed, where codice_46 is the editor command for searching, and an expression codice_44 can be used to specify a range of lines (matching the pattern), which can be combined with other commands on either side, most famously codice_48 as in grep ("global regex print"), which is included in most Unix-based operating systems, such as Linux distributions. A similar convention is used in sed, where search and replace is given by codice_49 and patterns can be joined with a comma to specify a range of lines as in codice_50. This notation is particularly well known due to its use in Perl, where it forms part of the syntax distinct from normal string literals. In some cases, such as sed and Perl, alternative delimiters can be used to avoid collision with contents, and to avoid having to escape occurrences of the delimiter character in the contents. For example, in sed the command codice_51 will replace a codice_46 with an codice_53, using commas as delimiters.
The IEEE POSIX standard has three sets of compliance: BRE (Basic Regular Expressions), ERE (Extended Regular Expressions), and SRE (Simple Regular Expressions). SRE is deprecated, in favor of BRE, as both provide backward compatibility. The subsection below covering the "character classes" applies to both BRE and ERE.
BRE and ERE work together. ERE adds codice_11, codice_13, and codice_56, and it removes the need to escape the metacharacters codice_34 and codice_37, which are "required" in BRE. Furthermore, as long as the POSIX standard syntax for regexes is adhered to, there can be, and often is, additional syntax to serve specific (yet POSIX compliant) applications. Although POSIX.2 leaves some implementation specifics undefined, BRE and ERE provide a "standard" which has since been adopted as the default syntax of many tools, where the choice of BRE or ERE modes is usually a supported option. For example, GNU codice_59 has the following options: "codice_60" for ERE, and "codice_61" for BRE (the default), and "codice_62" for Perl regexes.
Perl regexes have become a de facto standard, having a rich and powerful set of atomic expressions. Perl has no "basic" or "extended" levels. As in POSIX EREs, codice_34 and codice_37 are treated as metacharacters unless escaped; other metacharacters are known to be literal or symbolic based on context alone. Additional functionality includes lazy matching, backreferences, named capture groups, and recursive patterns.
In the POSIX standard, Basic Regular Syntax (BRE) requires that the metacharacters codice_34 and codice_37 be designated codice_67 and codice_68, whereas Extended Regular Syntax (ERE) does not.
Examples:
The meaning of metacharacters escaped with a backslash is reversed for some characters in the POSIX Extended Regular Expression (ERE) syntax. With this syntax, a backslash causes the metacharacter to be treated as a literal character. So, for example, codice_79 is now codice_34 and codice_81 is now codice_37. Additionally, support is removed for codice_83 backreferences and the following metacharacters are added:
Examples:
POSIX Extended Regular Expressions can often be used with modern Unix utilities by including the command line flag -E.
The character class is the most basic regex concept after a literal match. It makes one small sequence of characters match a larger set of characters. For example, [A-Z] could stand for the uppercase alphabet in the English language, and \d could mean any digit. Character classes apply to both POSIX levels.
When specifying a range of characters, such as [a-Z] (i.e. lowercase "a" to uppercase "Z"), the computer's locale settings determine the contents by the numeric ordering of the character encoding. They could store digits in that sequence, or the ordering could be "abc…zABC…Z", or "aAbBcC…zZ". So the POSIX standard defines a character class, which will be known by the regex processor installed. Those definitions are in the following table:
POSIX character classes can only be used within bracket expressions. For example, [[:upper:]ab] matches the uppercase letters and lowercase "a" and "b".
An additional non-POSIX class understood by some tools is [:word:], which is usually defined as [:alnum:] plus underscore. This reflects the fact that in many programming languages these are the characters that may be used in identifiers. The editor Vim further distinguishes "word" and "word-head" classes (using the notation \w and \h) since in many programming languages the characters that can begin an identifier are not the same as those that can occur in other positions: numbers are generally excluded, so an identifier would look like \h\w* or [[:alpha:]_][[:alnum:]_]* in POSIX notation.
Note that what the POSIX regex standards call "character classes" are commonly referred to as "POSIX character classes" in other regex flavors which support them. With most other regex flavors, the term "character class" is used to describe what POSIX calls "bracket expressions".
Because of its expressive power and (relative) ease of reading, many other utilities and programming languages have adopted syntax similar to Perl's; for example, Java, JavaScript, Julia, Python, Ruby, Qt, Microsoft's .NET Framework, and XML Schema. Some languages and tools such as Boost and PHP support multiple regex flavors. Perl-derivative regex implementations are not identical and usually implement a subset of features found in Perl 5.0, released in 1994. Perl sometimes does incorporate features initially found in other languages. For example, Perl 5.10 implements syntactic extensions originally developed in PCRE and Python.
In Python and some other implementations (e.g. Java), the three common quantifiers (codice_12, codice_13 and codice_11) are greedy by default because they match as many characters as possible. The regex codice_91 (including the double-quotes) applied to the string
matches the entire line (because the entire line begins and ends with a double-quote) instead of matching only the first part, codice_92. The aforementioned quantifiers may, however, be made "lazy" or "minimal" or "reluctant", matching as few characters as possible, by appending a question mark: codice_93 matches only codice_92.
However, the whole sentence can still be matched in some circumstances. The question-mark operator does not change the meaning of the dot operator, so the dot can still match the double-quotes in the input. A pattern like codice_95 will still match the whole input if this is the string:
To ensure that the double-quotes cannot be part of the match, the dot has to be replaced (e.g. codice_96). This will match a quoted text part without additional double-quotes in it. (By removing possibilities for a fixed suffix to be matched, one also transforms the lazy-match to a greedy-match.)
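The difference is easy to demonstrate; the sample line below is invented for illustration, and Python's re module (whose quantifiers behave as discussed) is used:

    import re

    line = 'He said "hello" and then "goodbye".'
    print(re.search(r'".+"', line).group())    # greedy: "hello" and then "goodbye"
    print(re.search(r'".+?"', line).group())   # lazy: "hello"
    print(re.search(r'"[^"]+"', line).group()) # negated class: "hello"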
In Java, quantifiers may be made "possessive" by appending a plus sign, which disables backing off (in a backtracking engine), even if doing so would allow the overall match to succeed: while the regex codice_97 applied to the string
matches the entire line, the regex codice_98 does not, because codice_99 consumes the entire input, including the final codice_100. Thus, possessive quantifiers are most useful with negated character classes, e.g. codice_101, which matches codice_92 when applied to the same string.
Another common extension serving the same function is atomic grouping, which disables backtracking for a parenthesized group. The typical syntax is (?>group). For example, while a pattern with the alternation (wi|w) followed by "i" matches both "wii" and "wi", its atomic counterpart matches only "wii", because once the group has matched "wi" the engine is forbidden from backtracking and retrying with the group set to "w".
Possessive quantifiers are easier to implement than greedy and lazy quantifiers, and are typically more efficient at runtime.
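Python's re module supports possessive quantifiers and atomic groups from version 3.11 onward, so the behaviour described above can be sketched as follows; the sample sentence is assumed for illustration (the text's own example string is not reproduced here):

    import re  # possessive/atomic syntax requires Python 3.11+

    s = '"Ganymede," he continued, "is the largest moon in the Solar System."'
    print(re.search(r'".*"', s).group())         # backtracks: spans first to last quote
    print(re.search(r'".*+"', s))                # possessive .*+ also eats the final ", so: None
    print(re.search(r'"[^"]*+"', s).group())     # possessive + negated class: "Ganymede,"
    print(re.search(r'"(?>[^"]*)"', s).group())  # atomic group: same result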
Many features found in virtually all modern regular expression libraries provide an expressive power that exceeds the regular languages. For example, many implementations allow grouping subexpressions with parentheses and recalling the value they match in the same expression ("backreferences"). This means that, among other things, a pattern can match strings of repeated words like "papa" or "WikiWiki", called "squares" in formal language theory. The pattern for these strings is codice_103.
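The elided pattern is conventionally (.+)\1, a capturing group followed by a backreference to it; assuming that form, a quick check with Python's re module:

    import re

    square = re.compile(r'(.+)\1')  # a group followed by a backreference to it
    for s in ('papa', 'WikiWiki', 'paper'):
        print(s, '->', bool(square.fullmatch(s)))  # True, True, False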
The language of squares is not regular, nor is it context-free, due to the pumping lemma. However, pattern matching with an unbounded number of backreferences, as supported by numerous modern tools, is still context-sensitive. The general problem of matching any number of backreferences is NP-complete, with cost growing exponentially in the number of backreference groups used.
However, many tools, libraries, and engines that provide such constructions still use the term "regular expression" for their patterns. This has led to a nomenclature where the term regular expression has different meanings in formal language theory and pattern matching. For this reason, some people have taken to using the term "regex", "regexp", or simply "pattern" to describe the latter. Larry Wall, author of the Perl programming language, writes in an essay about the design of Raku:
Other features not found in descriptions of regular languages include assertions. These include the ubiquitous anchors ^ and $, as well as some more sophisticated extensions like lookaround. Assertions define the surroundings of a match and do not spill into the match itself, a feature only relevant for the use case of string searching. Some of them can be simulated in a regular language by treating the surroundings as part of the language as well.
There are at least three different algorithms that decide whether and how a given regex matches a string.
The oldest and fastest relies on a result in formal language theory that allows every nondeterministic finite automaton (NFA) to be transformed into a deterministic finite automaton (DFA). The DFA can be constructed explicitly and then run on the resulting input string one symbol at a time. Constructing the DFA for a regular expression of size m has a time and memory cost of O(2^m), but it can be run on a string of size n in time O(n). Note that the size of the expression is the size after abbreviations, such as numeric quantifiers, have been expanded.
An alternative approach is to simulate the NFA directly, essentially building each DFA state on demand and then discarding it at the next step. This keeps the DFA implicit and avoids the exponential construction cost, but the running cost rises to O(mn). The explicit approach is called the DFA algorithm and the implicit approach the NFA algorithm. Adding caching to the NFA algorithm is often called the "lazy DFA" algorithm, or just the DFA algorithm without making a distinction. These algorithms are fast, but using them for recalling grouped subexpressions, lazy quantification, and similar features is tricky. Modern implementations include the re1-re2-sregex family based on Cox's code.
The third algorithm is to match the pattern against the input string by backtracking. This algorithm is commonly called NFA, but this terminology can be confusing. Its running time can be exponential, which simple implementations exhibit when matching against expressions that contain both alternation and unbounded quantification, forcing the algorithm to consider an exponentially increasing number of sub-cases. This behavior can cause a security problem called Regular expression Denial of Service (ReDoS).
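Since the concrete pathological expression is elided above, the sketch below assumes the standard example (a|aa)*b, which has exactly the alternation-plus-unbounded-quantification shape described. On an all-'a' input the match must fail, and a backtracking engine tries exponentially many splits before giving up; keep n modest when running this.

    import re, time

    # (a|aa)*b combines alternation with unbounded quantification; on an
    # all-'a' input with no final 'b', the engine must try exponentially
    # many ways of splitting the run of 'a's before failing.
    pattern = re.compile(r'(a|aa)*b')
    for n in (24, 28, 32):           # time grows roughly 7x per +4 characters
        s = 'a' * n                  # deliberately unmatchable
        t0 = time.perf_counter()
        pattern.match(s)
        print(f"n={n}: {time.perf_counter() - t0:.3f}s")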
Although backtracking implementations only give an exponential guarantee in the worst case, they provide much greater flexibility and expressive power. For example, any implementation which allows the use of backreferences, or implements the various extensions introduced by Perl, must include some kind of backtracking. Some implementations try to provide the best of both algorithms by first running a fast DFA algorithm, and revert to a potentially slower backtracking algorithm only when a backreference is encountered during the match. GNU grep (and the underlying gnulib DFA) uses such a strategy.
Sublinear runtime algorithms have been achieved using Boyer–Moore (BM) based algorithms and related DFA optimization techniques such as the reverse scan. GNU grep, which supports a wide variety of POSIX syntaxes and extensions, uses BM for first-pass prefiltering and then an implicit DFA. Wu's agrep, which implements approximate matching, combines the prefiltering into the DFA in BDM (backward DAWG matching). NR-grep's BNDM extends the BDM technique with Shift-Or bit-level parallelism.
A few theoretical alternatives to backtracking for backreferences exist, and their "exponents" are tamer in that they are only related to the number of backreferences, a fixed property of some regexp languages such as POSIX. One naive method that duplicates a non-backtracking NFA for each backreference has time and space costs that are polynomial in the length n of the haystack but exponential in the number k of backreferences in the regexp. More recent theoretical work based on memory automata gives a tighter bound based on the number of "active" variable nodes used, and a polynomial possibility for some backreferenced regexps.
In theoretical terms, any token set can be matched by regular expressions as long as it is pre-defined. In terms of historical implementations, regexes were originally written to use ASCII characters as their token set, though regex libraries have supported numerous other character sets. Many modern regex engines offer at least some support for Unicode. In most respects it makes no difference what the character set is, but some issues do arise when extending regexes to support Unicode.
Regexes are useful in a wide variety of text processing tasks, and more generally string processing, where the data need not be textual. Common applications include data validation, data scraping (especially web scraping), data wrangling, simple parsing, the production of syntax highlighting systems, and many other tasks.
While regexes would be useful on Internet search engines, processing them across the entire database could consume excessive computer resources depending on the complexity and design of the regex. Although in many cases system administrators can run regex-based queries internally, most search engines do not offer regex support to the public. Notable exceptions were Google Code Search and Exalead; Google Code Search, which used a trigram index to speed queries, was shut down in January 2012.
The specific syntax rules vary depending on the specific implementation, programming language, or library in use. Additionally, the functionality of regex implementations can vary between versions.
Because regexes can be difficult to both explain and understand without examples, interactive websites for testing regexes are a useful resource for learning regexes by experimentation.
This section provides a basic description of some of the properties of regexes by way of illustration.
The following conventions are used in the examples.
Also worth noting is that these regexes all use Perl-like syntax. Standard POSIX regular expressions are different.
Unless otherwise indicated, the following examples conform to the Perl programming language, release 5.8.8, January 31, 2006. This means that other implementations may lack support for some parts of the syntax shown here (e.g. basic vs. extended regex, codice_125 vs. codice_126, or lack of codice_127 instead of POSIX codice_128).
The syntax and conventions used in these examples coincide with those of other programming environments as well.
Regular expressions can often be created ("induced" or "learned") based on a set of example strings. This is known as the induction of regular languages, and is part of the general problem of grammar induction in computational learning theory. Formally, given examples of strings in a regular language, and perhaps also given examples of strings "not" in that regular language, it is possible to induce a grammar for the language, i.e., a regular expression that generates that language. Not all regular languages can be induced in this way (see language identification in the limit), but many can. For example, the set of examples {1, 10, 100}, and negative set (of counterexamples) {11, 1001, 101, 0} can be used to induce the regular expression 1⋅0* (1 followed by zero or more 0s).
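A quick check that the induced expression separates the two sets, using Python's re module:

    import re

    induced = re.compile(r'10*')  # the expression induced from the examples
    positives = ['1', '10', '100']
    negatives = ['11', '1001', '101', '0']
    print(all(induced.fullmatch(s) for s in positives))      # True
    print(not any(induced.fullmatch(s) for s in negatives))  # True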
| https://en.wikipedia.org/wiki?curid=25717 |
Red Dwarf
Red Dwarf is a British science fiction comedy franchise created by Rob Grant and Doug Naylor which primarily consists of a television sitcom that aired on BBC Two between 1988 and 1999, and on Dave since 2009, gaining a cult following. The premise of the series follows the low-ranking technician Dave Lister, who awakens after being in suspended animation for three million years to find he is the last living human, with no crew on board the mining spacecraft "Red Dwarf" other than Arnold Rimmer, a hologram of Lister's deceased bunkmate, and Cat, a life form which evolved from Lister's pregnant cat.
Since the 2020 television film "", the current cast has included Chris Barrie as Rimmer, Craig Charles as Lister, Danny John-Jules as Cat, Robert Llewellyn as the sanitation robot Kryten, and Norman Lovett as the ship's computer Holly.
To date, twelve series of the show have aired, in addition to "The Promised Land". Four novels were published from 1989 to 1996. Two pilot episodes of an American version of the show were produced but never aired. The magazine "The Red Dwarf Smegazine" was published from 1992 to 1994.
One of the series' highest accolades came in 1994, when an episode from the sixth series, "Gunmen of the Apocalypse", won an International Emmy Award in the Popular Arts category, and in the same year the series was also awarded "Best BBC Comedy Series" at the British Comedy Awards. The series attracted its highest ratings, of more than eight million viewers, during the eighth series in 1999.
The revived series on Dave has consistently delivered some of the highest ratings for non-public-service-broadcasting commissions in the UK. The show has been critically acclaimed and has a Metacritic score of 84/100. Series XI was voted "Best Returning TV Sitcom" and "Comedy of the Year" for 2016 by readers of the British Comedy Guide. In a 2019 ranking by "Empire", "Red Dwarf" came 80th on a list of the 100 best TV shows of all time.
The main setting of the series is the eponymous mining spaceship "Red Dwarf". In the first episode, set sometime in the late 22nd century, an on-board radiation leak kills everyone except lowest-ranking technician Dave Lister, who is in suspended animation at the time, and his pregnant cat, Frankenstein, who is safe in the cargo hold. Following the accident, the ship's computer Holly keeps Lister in stasis until the radiation levels return to normal—a process that takes three million years. Lister therefore emerges as the last human being in the universe—but not alone on-board the ship. His former bunkmate and immediate superior Arnold Judas Rimmer (a character plagued by failure) is resurrected by Holly as a hologram to keep Lister sane. They are joined by a creature known only as Cat, the last member of a race of humanoid felines that evolved in the ship's hold from Lister's pregnant cat during the 3 million years that Lister was in stasis.
The series revolves around Lister being the last human alive, 3 million years from Earth, with his companions. The crew encounters phenomena such as time distortions, faster-than-light travel, mutant diseases and strange lifeforms (all evolved from Earth, because the series has no aliens) that had developed in the intervening millions of years. Though it has a science fiction setting, much of the humour comes from the interactions of the characters, particularly the laid-back Lister and the stuck-up Rimmer.
Despite the pastiche of science fiction used as a backdrop, "Red Dwarf" is primarily a character-driven comedy, with science fiction elements used as complementary plot devices. Especially in the early episodes, a recurring source of comedy was the "Odd Couple"-style relationship between the two central characters of the show, who have an intense dislike for each other yet are trapped together deep in space.
In Series III, the computer Holly changes from male (Norman Lovett) to female (Hattie Hayridge), and the mechanoid Kryten (who had appeared in one episode in Series II) joins the crew and becomes a regular character.
In Series VI, a story arc is introduced where "Red Dwarf" has been stolen, and the crew pursues it in the smaller "Starbug" craft, with the side effect that the character Holly disappears.
Series VII is also set aboard "Starbug". Early in Series VII, Rimmer departs (due to actor Chris Barrie's commitments) and is replaced by Kristine Kochanski, Lister's long-term love interest, from an alternate universe. Kochanski becomes a regular character for Series VII and VIII.
At the end of Series VII, it is revealed that Kryten's service nanobots, which had abandoned him years earlier, were behind the theft of the "Red Dwarf" at the end of Series V. At the beginning of the eighth series, Kryten's nanobots reconstruct the "Red Dwarf", which they had broken down into its constituent atoms.
As a consequence, Series VIII features the entire original crew of "Red Dwarf" resurrected (except for the already-alive Lister and Kochanski), including a pre-accident Rimmer; and the original male Holly. The series ends with a metal-eating virus loose on "Red Dwarf". The entire crew evacuates save the main cast (Lister, Rimmer, Cat, Kryten and Kochanski), whose fate is unresolved in a cliffhanger ending.
Series IX onwards reverts to the same four main characters of Series III–VI (Lister, Rimmer, Cat and Kryten), aboard "Red Dwarf" and without Kochanski or Holly; Rimmer reappears as a hologram once again. It has not been confirmed whether the Rimmer on board is the one who originally left, the revived version, or a third incarnation entirely; however, episodes have alluded to him remembering events from both previous incarnations' lives.
The first series aired on BBC2 in 1988. Eleven full series and one miniseries have so far been produced, with a feature-length special released in 2020.
The concept for the show was originally developed from the sketch series "Dave Hollins: Space Cadet" on the BBC Radio 4 show "Son of Cliché" in the mid-1980s, written by Rob Grant and Doug Naylor. Their influences came from films and television programmes such as "" (1966), "Silent Running" (1972), "Alien" (1979), "Dark Star" (1974) and "The Hitchhiker's Guide to the Galaxy" (1981), but the pair also threw a large element of British-style comedy and satire into the mix, ultimately moulding it into the form of a sitcom. Many visual and character elements bear similarities to the BBC documentary "Defence of the Realm", about a Trident nuclear submarine. Having written the pilot script in 1983, the former "Spitting Image" writers pitched their concept to the BBC, but it was rejected over fears that a science fiction sitcom would not be popular.
It was finally accepted by BBC North in 1986, as a result of a spare budget having been assigned for a second series of "Happy Families" that would never arise, and producer Paul Jackson's insistence that "Red Dwarf" should be filmed instead. An electricians' strike partway through rehearsals in early 1987 shut the entire production down (the title sequence had already been filmed in January 1987), and the show was fortunate to be remounted. Filming was rescheduled for September, and the pilot episode finally made it onto television screens on 15 February 1988.
Despite the commission of further series, the cast felt like "outsiders" at the BBC. Co-creator Doug Naylor attributed this to the show being commissioned by BBC Manchester while filming at Shepperton Studios, near where the cast lived in London. When the show won an International Emmy Award in 1994, Naylor's attempts to have the cast invited to a party thrown by the BBC proved futile when the BBC objected to Craig Charles and Danny John-Jules's inclusion, claiming they were "fire risks".
Alan Rickman and Alfred Molina auditioned for roles in the series, with Molina being cast as Rimmer. However, after Molina had difficulties with the concept of the series, and of his role in particular, the role was recast and filled by Chris Barrie, a professional voice actor and impressionist who had previously worked with both writers on "Spitting Image", and with the producers on "Happy Families" and Jasper Carrott productions. Craig Charles, a Liverpudlian "punk poet", was given the role of Dave Lister. He had been approached by the production team for his opinion of the Cat character, as they were concerned it might be considered racist. Charles described the Cat as 'pretty cool', and after reading the script he decided he wanted to audition for the part of Dave Lister. Laconic stand-up comedian Norman Lovett, who had originally tried out for the role of Rimmer, was kept in the show as Holly, the senile computer of the titular ship. A professional dancer and singer, Danny John-Jules, arriving half an hour late for his appointment, stood out as the Cat immediately. This was partly due to his "cool" exterior, his dedicated research (reading Desmond Morris's book "Catwatching"), and his showing up in character, wearing his father's 1950s-style zoot suit.
Grant and Naylor wrote the first six series together (using the pseudonym Grant Naylor on the first two novels and later as the name of their production company, although never on the episodes themselves). Grant left in 1995, to pursue other projects, leaving Naylor to write series VII and VIII with a group of new writers, including Paul Alexander and actor Robert Llewellyn (who portrayed the character Kryten).
For the most part, Ed Bye produced and directed the series. He left before series V due to a scheduling clash (he ended up directing a show starring his wife, Ruby Wax), so Juliet May took over as director. May parted ways with the show halfway through the series for personal and professional reasons, and Grant and Naylor took over direction of the series, in addition to writing and producing. Series VI was directed by Andy de Emmony, and Ed Bye returned to direct series VII and VIII. Series I, II and III were made by Paul Jackson Productions, with subsequent series produced by the writers' own company Grant Naylor Productions for BBC North. All eight series were broadcast on BBC Two. At the beginning of series IV, production moved from BBC North's New Broadcasting House in Manchester to Shepperton.
The theme tune and incidental music were written and performed by Howard Goodall, with the vocals on the closing theme tune by Jenna Russell. The first two series used a relatively sombre instrumental version of the closing theme for the opening titles; from series III onwards this switched to a more upbeat version. Goodall also wrote music for the show's various songs, including "Tongue Tied", with lyrics written by Grant and Naylor. Danny John-Jules (credited as 'The Cat') re-orchestrated and released "Tongue Tied" in October 1993; it reached number 17 on the UK charts. Goodall himself sang "The Rimmer Song" heard during the series VII episode "Blue", to which Chris Barrie mimed.
In 1998, on the tenth anniversary of the show's first airing (and between the broadcast of series VII and VIII), the first three series of "Red Dwarf" were remastered and released on VHS. The remastering included replacing model shots with computer graphics, cutting certain dialogue and scenes, re-filming Norman Lovett's Holly footage, creating a consistent set of opening titles, replacing music and creating ambient sound effects with a digital master. The remastered series were released in a 4-disc DVD box set "The Bodysnatcher Collection" in 2007.
Three years elapsed between series VI and VII, partly due to the dissolving of the Grant and Naylor partnership, but also due to cast and crew working on other projects. When the series eventually returned, it was filmised and no longer shot in front of a live audience, allowing for greater use of four-walled sets, location shooting, and single-camera techniques. When the show returned for its eighth series two years later, it had dropped use of the filmising process and returned to using a live audience.
The show received a setback when the BBC rejected proposals for a series IX. Doug Naylor confirmed in 2007 that the BBC decided not to renew the series as they preferred to work on other projects. A short animated Christmas special was, however, made available to mobile phone subscribers the same year. Ultimately, however, fans had to wait a decade before the series returned to television.
In 2008, a three-episode production was commissioned by the digital channel Dave. "Red Dwarf: Back to Earth" was broadcast over the Easter weekend of 2009, along with a "making of" documentary. The episode was set nine years after the events of "Only the Good..." (with the cliffhanger ending of that episode left unresolved, a situation that would continue with series X). The storyline involves the characters arriving back on Earth, circa 2009, only to find that they are characters in a TV show called "Red Dwarf". Kochanski is supposedly dead and Holly is offline due to water damage caused by Lister leaving a tap running. Actress Sophie Winkleman played a character called Katerina, a resurrected hologram of a "Red Dwarf" science officer intent on replacing Rimmer.
To achieve a more cinematic atmosphere, "Back to Earth" was not filmed in front of a studio audience. Some previous "Red Dwarf" episodes had been shot in that way ("Bodyswap" and all of the seventh series), but "Back to Earth" represented the first time that a laughter track was not added before broadcast. It was also the first episode of "Red Dwarf" to be filmed in high definition.
The specials were televised over three nights starting on Friday 10 April 2009. The broadcasts received record ratings for Dave; the first of the three episodes represented the UK's highest-ever viewing figures for a commissioned programme on a digital network. "Back to Earth" was released on DVD on 15 June 2009, and on Blu-ray on 31 August 2009. "Back to Earth" was subsequently described on the series' official website as "for all intents and purposes, the 'ninth series' of "Red Dwarf"". This placement was confirmed when Series X was commissioned and branded as the tenth series, although "Back to Earth" continues not to be referred to as "Series IX" on home media or digital releases.
On 10 April 2011 Dave announced it had commissioned a six-episode series X, to be broadcast on Dave in late 2012. Filming dates for the new series, "Red Dwarf X", were announced on 11 November 2011, along with confirmation that the series would be shot at Shepperton Studios in front of an audience. Principal filming began on 16 December 2011 and ended on 27 January 2012, and the cast and crew subsequently returned for six days of filming pick-ups. Discounting guest stars, only the core cast of Charles, Barrie, Llewellyn and John-Jules returned for Series X, with Annett and Lovett absent, though the scripts include references to Kochanski and Holly.
On 20 July 2012, a 55-second trailer for series X was released on Facebook, followed by a new teaser every Friday. The new series debuted on Thursday 4 October 2012.
Following series X, which attracted high viewing figures, Dave, Doug Naylor and the cast showed strong interest in making another series. During the Dimension Jump fan convention in May 2013, Doug Naylor stated that discussions were ongoing with all involved parties and while arrangements had not been finalised, he hoped shooting could begin in February 2014. In October 2013, Robert Llewellyn posted on his blog, stating that "an eleventh series would happen" and that it would be "sometime in 2014". Llewellyn later removed the post from his blog and Doug Naylor issued a statement on Twitter, saying: "Getting tweets claiming Red Dwarf XI is commissioned. Not true. Not yet." However, in January 2014 Danny John-Jules stated that the eleventh series of "Red Dwarf" was in the process of being written.
At the April 2014 Sci-Fi Scarborough Festival, during the "Red Dwarf" cast panel, Danny John-Jules stated that filming of the eleventh series would commence in October 2014, with an expected release of Autumn 2015 on Dave.
On 2 May 2015, at the Dimension Jump XVIII convention, Naylor announced that an eleventh and a twelfth series had been commissioned. The two series would be shot back-to-back towards the end of 2015 for broadcast on Dave in 2016 and 2017, respectively, and would be co-produced by Baby Cow Productions, with company CEO, Henry Normal, executive producing the new episodes.
Series XI and XII were filmed back-to-back at Pinewood Studios between November 2015 and March 2016. The eleventh series premiered on UKTV's video on demand service UKTV Play on 15 September 2016, a week ahead of its broadcast transmission on 22 September.
On 8 September 2017, it was announced that "Red Dwarf XII" would begin broadcasting on Dave on 12 October 2017, and on 15 September 2017 it was further announced that each episode would preview a week earlier via the UKTV Play video on demand service, effectively meaning that series XII would begin on 5 October 2017.
In late May 2019, in a radio interview, Robert Llewellyn stated that a thirteenth series was happening and in June of that year, Danny John-Jules stated that it was expected to be wrapped up by the end of 2019. However, in October 2019, UKTV announced that a 90-minute feature-length special would be produced instead, to be filmed from December 2019 to January 2020 with location filming scheduled for November. Three 60-minute documentaries were also announced to accompany it, intended to act as a retrospective of all previous 12 series.
In January 2020, the first publicity photos of the special were released, with Ray Fearon revealed as the first confirmed guest actor, portraying Rodon, the "leader of the feral cats". In February 2020, the day before the 32nd anniversary of "Red Dwarf" first airing, a synopsis was given by the official "Red Dwarf" website: "The special will see the posse meet three cat clerics (Tom Bennett, Mandeep Dhillon, Lucy Pearman) who worship Lister as their God. Lister vows to help them as they're being hunted by Rodon, the ruthless feral cat leader (Ray Fearon) who has vowed to wipe out all cats who worship anyone but him." Al Roberts was also added to the cast in an undisclosed role, and Norman Lovett was officially announced to be returning as Holly following his one-off guest spot in Series XII.
On 10 March 2020, in an exclusive with "Radio Times", a teaser trailer was released. A rough release date of sometime in April was given, and a day later on 11 March 2020, the official Twitter account for Dave revealed the title of the television film: "Red Dwarf: The Promised Land".
"Red Dwarf" was founded on the standard sitcom focus of a disparate and frequently dysfunctional group of individuals living together in a restricted setting. With the main characters routinely displaying their cowardice, incompetence and laziness, while exchanging insulting and sarcastic dialogue, the series provided a humorous antidote to the fearless and morally upright space explorers typically found in science-fiction series, with its main characters acting bravely only when there was no other possible alternative. The increasing science-fiction elements of the series were treated seriously by creators Rob Grant and Doug Naylor. Satire, parody and drama were alternately woven into the episodes, referencing other television series, films and books. These have included references to the likes of "" (1968), "Top Gun" (1986), "RoboCop" (1987), "Star Wars" (1977), "Citizen Kane" (1942), "The Wild One" (1953), "High Noon" (1952), "Rebel Without a Cause" (1955), "Casablanca" (1942), "Easy Rider" (1969), "The Terminator" (1984), "Pride and Prejudice" (1813), Isaac Asimov's "Robot" series (1939–85) and the Four Horsemen of the Apocalypse.
The writers based the whole theme of some episodes on the plots of feature films. The series III episode "Polymorph" references and parodies key moments from "Alien" (1979); from series IV, "Camille" echoes key scenes from "Casablanca" (1942), while "Meltdown" borrows its main plot from "Westworld" (1973). For series IX, "Back to Earth" was partially inspired by "Blade Runner" (1982). The series' themes are not limited to films or television, having also incorporated historical events and figures. Religion also plays a part in the series, as a significant factor in the ultimate fate of the Cat race and in the perception of Lister as their 'God', both within the episode "Waiting for God" (whose title makes a literary reference to the Samuel Beckett play "Waiting for Godot") and when the crew meet a man they believe to be Jesus Christ in the series X episode "Lemons". The series VII episode "Ouroboros" derives its name and theme from the ancient mythological snake of the same name. The third episode of series VI, "Gunmen of the Apocalypse", was based on the Four Horsemen of the Apocalypse.
The series explores many science-fiction staples such as time-travel paradoxes (including the grandfather paradox), the question of determinism and free will (on several episodes), the pursuit of happiness in virtual reality and, crucially to the show's premise of Lister being the last human, the near-certainty of the human species' extinction some time in the far future.
Aliens do not feature in the series, as Grant and Naylor decided very early in the process that they did not want aliens involved. This is usually addressed with Rimmer's belief in extraterrestrial life being shot down, such as a vessel he believes to be an alien ship turning out to be a garbage pod. However, there are non-human life forms such as evolutions of Earth species (e.g. the Cat race), robotic or holo-life forms created by humans, and a kind of 'Genetically Engineered Life Form' (GELF), an artificially created creature. Simulants and GELFs frequently serve as antagonists among the later series of the show.
The series developed its own distinct vocabulary. Words and phrases such as hologramatic, dollarpound, "Felis sapiens", Simulants, GELF, space weevil, and Zero Gee Football appear throughout the series, highlighting a development in language, political climate, technology, evolution and culture in the future. The creators also employed a vocabulary of fictional expletives in order to avoid using potentially offensive words in the show, and to give nuance to futuristic colloquial language; in particular, "smeg" (and variants such as "smegging", "smegger", and "smeg-head") features prominently, alongside the terms "gimboid" and "goit".
The changes that were made to the series' cast, setting, creative teams and even production values from series to series have meant that opinions differ greatly between fans and critics as to the quality of certain series. In the "Great Red Dwarf Debate", published in volume 2 issue 3 of the "Red Dwarf Smegazine", science-fiction writers Steve Lyons and Joe Nazzaro argued the pros and cons of the early series against the later series. Lyons stated that what the show "once had was a unique balance of sci-fi comedy, which worked magnificently." Nazzaro agreed that "the first two series are very original and very funny", but went on to say that "it wasn't until series III that the show hit its stride." Series VI is regarded as a continuation of the "monster of the week" philosophy of series V, and was nevertheless considered visually impressive. Discussions revolve around the quality of series VI, which one reviewer saw as just as good as the earlier series, but which another criticised as a descent into formulaic comedy with an unwelcome change of setting.
The changes seen in series VII were regarded by some as a disappointment; while much slicker and higher-budget in appearance, the shift away from outright sitcom and into something approaching comedy drama was seen by one reviewer as a move in the wrong direction. The subsequent attempt to shift back into the traditional sitcom format for series VIII met with a similarly lukewarm response. There was criticism of the decision to resurrect the entire crew of "Red Dwarf", as it was felt this detracted from the series' central premise of Lister being the last human being alive. Other critics feel that series VII and VIII are no weaker than the earlier series, however, and the topic is the subject of constant fervent debate among the show's fanbase.
Although the pilot episode of the show gathered over four million viewers, viewing figures dipped in successive episodes and the first series had generally poor ratings. Through to series VI the ratings steadily increased, peaking at over six million viewers with the episode "Gunmen of the Apocalypse". When the series returned in 1999 it gained the highest audience figures yet—over eight million viewers tuned in for series VIII's opening episode "Back in the Red". The series has won numerous awards, including the Royal Television Society Award for special effects and the British Science Fiction award for Best Dramatic Presentation, as well as an International Emmy Award for the series VI episode "Gunmen of the Apocalypse", which tied with an "Absolutely Fabulous" episode, "Hospital", in the Popular Arts category. The show had also been nominated for the International Emmy Award in 1987, 1989 and 1992. Series VI won a British Comedy Award for 'Best BBC Comedy Series'. The video sales have won eight Gold Awards from the British Video Association, and the series still holds the record for being BBC Two's longest-running, highest-rated sitcom. In 2007 the series was voted 'Best Sci-Fi Show Of All Time' by the readers of "Radio Times" magazine; editor Gill Hudson stated that this result had surprised them as 'the series had not given any new episodes this century'. In January 2017, series XI was voted "Best Returning TV Sitcom" and "Comedy of the Year" for 2016 by readers of the British Comedy Guide.
A year later "Red Dwarf" once again was voted "Best Returning TV Sitcom" for series XII retaining the title from British Comedy Guide.
The show's logo and characters have appeared on a wide range of merchandise. "Red Dwarf" has also been spun off in a variety of different media formats. For instance, the song "Tongue Tied", featured in the "Parallel Universe" episode of the show, was released in 1993 as a single and became a top 20 UK hit for Danny John-Jules (under the name 'The Cat'). Stage plays of the show have been produced through Blak Yak, a theatre group in Perth, Western Australia, who were given permission by Grant Naylor Productions to mount stage versions of certain episodes in 2002, 2004 and 2006. In October 2006 an Interactive Quiz DVD entitled "Red Dwarf: Beat The Geek" was released, hosted by Norman Lovett and Hattie Hayridge, both reprising their roles as Holly. In 2005, Grant Naylor Productions and Across the Pond Comics collaborated to produce the spin-off webcomic "Red Dwarf: Prelude to Nanarchy".
Working together under the name "Grant Naylor", the creators of the series collaboratively wrote two novels. The first, "Infinity Welcomes Careful Drivers", was published in November 1989, and incorporates plot lines from several episodes of the show's first two series. The second novel, "Better Than Life", followed in October 1990, and is largely based on the second-series episode of the same name. Together, the two novels provide expanded backstory and development of the series' principal characters and themes.
The authors began work on a sequel to "Better than Life", called "The Last Human", but Rob Grant was drawn away from "Red Dwarf" by an interest in other projects. Still owing Penguin Publishing two more "Red Dwarf" novels, Grant and Naylor decided to each write an alternative sequel to "Better than Life". Two completely different sequels were made as a result, each presenting a possible version of the story's continuation. "Last Human", by Doug Naylor, adds Kochanski to the crew and places more emphasis on the science-fiction and plot elements, while Rob Grant's novel "Backwards", is more in keeping with the previous two novels, and borrows more extensively from established television stories.
An omnibus edition of the first two novels was released in 1992, including edits to the original text and extra material such as the original pilot script of the TV series. All four novels have been released in audiobook format, the first two read by Chris Barrie, "Last Human" read by Craig Charles, and "Backwards" read by author Rob Grant.
In December 2009, "Infinity Welcomes Careful Drivers" was released in Germany with the title "Roter Zwerg" ("Red Dwarf" in German).
For the initial release of the VHS editions, episodes of "Red Dwarf" were separated and two volumes released for each series (except series VII and VIII, which were released on three separate tapes), labelled 'Byte One' and 'Byte Two' (plus 'Byte Three' for series VII and VIII). These videos were named after the first episode of the three presented on the tape, as was typical of other BBC video releases at the time. On occasion, however, the BBC ignored the original running order and used the most popular episodes from a series to maximise sales. For series III (the first-ever release), "Bodyswap" and "Timeslides" were swapped round so that the latter could receive top billing on the second VHS volume. For the second VHS volume of series I, "Confidence and Paranoia" was given top billing even though the original broadcast order was retained, because the leading episode, "Waiting for God", shared its name with another comedy series (set in a retirement home). For series V, "Back to Reality" and "Quarantine" were given top billing on their respective video releases, which completely re-organised the order of episodes from that in which they were originally broadcast. Later releases increasingly kept to the original broadcast order. All eight series were made available on VHS, and three episodes of series VII were also released as special "Xtended" ["sic"] versions with extra scenes (including an original, unbroadcast ending for the episode "Tikka To Ride") and no laugh track; the remastered versions of series I–III were also released individually and in a complete box-set. Finally, two outtake videos were released, both hosted by Robert Llewellyn in character as Kryten: "Smeg Ups" in 1994 and its sequel, "Smeg Outs", in 1995.
The first eight series have been released on DVD in Regions 1, 2 and 4, each with a bonus disc of extra material. Each release from series III onwards also features an original documentary about the making of the respective series. Regions 2 and 4 have also seen the release of two "Just the Shows" digipack box sets, containing the episodes from series I–IV (Volume 1) and V–VIII (Volume 2) with static menus and no extras. "Red Dwarf: The Bodysnatcher Collection", containing the 1998 remastered episodes as well as new documentaries for series I and II, was released in 2007. This release showcased a storyboard construction of "Bodysnatcher", an unfinished script from 1987, which was finally completed in 2007 by Rob Grant and Doug Naylor, working together for the first time since 1993. In December 2008 an anniversary DVD set entitled "Red Dwarf: All the Shows" was released, reworking the vanilla disc content of the two "Just the Shows" sets within A4 packaging resembling a photo album, which did not make clear that no extras were included. This box set was re-released in a smaller slipcase-sized box, reverting to the "Just the Shows" title, in November 2009. The series is also available for download on iTunes.
In 2016, BBC Worldwide began creating an 'up-resed' version of the first five series for release on Blu-ray, due to demand from Japan. When asked about the project in 2017, Naylor confirmed he had stopped it due to lacklustre picture quality. By 2018, the project, now encompassing the entire original run, had been restarted, and a series 1–8 Blu-ray set release was confirmed in August.
The "Red Dwarf Magazine"—the magazine part of the title changed to "Smegazine" from issue 3—was launched in 1992 by Fleetway Editions. It comprised a mix of news, reviews, interviews, comic strips and competitions. The comic strips featured episode adaptations and original material, including further stories of popular characters like Mr. Flibble, the Polymorph and Ace Rimmer.
Notably, the comic strip stories' holographic characters, predominantly Rimmer, were drawn in greyscale. This was at the request of Grant and Naylor, who had wanted to use the technique for the television series, but the process was deemed too expensive to produce. Despite achieving circulation figures of over 40,000 per month, the magazine's publisher decided to close the title down to concentrate on its other publications. A farewell issue was published, cover-dated January 1994, featuring the remaining interviews, features and comic strips that had been intended for subsequent issues.
The Official Red Dwarf Fan Club produces a periodical magazine for members titled "Back to Reality". The previous volume of this magazine, dating back to the 1990s, was known as "Better Than Life."
Although the original series had been broadcast in the United States on PBS, a pilot episode for an American version (known as "Red Dwarf USA") was produced through Universal Studios with the intention of broadcasting on NBC in 1992. The show essentially followed the same story as the first episode of the original series, using American actors for most of the main roles: Craig Bierko as Lister, Chris Eigeman as Rimmer, and Hinton Battle as Cat. Exceptions to this were Llewellyn, who reprised his role as Kryten, and the British actress Jane Leeves, who played Holly. It was written by Linwood Boomer and directed by Jeffrey Melman, with Grant and Naylor on board as creators and executive producers. Llewellyn, Grant and Naylor travelled to America for the filming of the American pilot after production of the fifth series of the UK series. According to Llewellyn and Naylor, the cast were not satisfied with Linwood Boomer's script. Grant and Naylor rewrote the script, but although the cast preferred the rewrite, the script as filmed was closer to Boomer's version. The pilot episode includes footage from the UK series in its title sequence, although it did not retain the logo or the theme music of the UK series. During filming of the pilot, the audience reaction was good and it was felt that the story had been well received.
The studio executives were not entirely happy with the pilot, especially the casting, but decided to give the project another chance with Grant and Naylor in charge. The intention was to shoot a "promo video" for the show in a small studio described by the writers as "a garage". New cast members were hired for the roles of Cat (now depicted as female) and Rimmer: Terry Farrell and Anthony Fusco, respectively. This meant that, unlike in the original British series, the cast was all Caucasian, which Charles referred to as "White Dwarf". Chris Barrie was asked to play Rimmer in the second pilot, but he declined. With a small budget and a tight deadline, new scenes were quickly shot and mixed with existing footage from the pilot and from UK series V episodes, to give an idea of the basic plot and character dynamics, alongside proposed future episodes, which would have included remakes of episodes from the original show. Llewellyn did not participate in the re-shoot, though clips from the British version were used to show the character. Despite the re-shoots and re-casting, the option on the pilot was not picked up. Farrell found work almost immediately afterwards on "Star Trek: Deep Space Nine", in which she was cast as Jadzia Dax. Similarly, one year later Jane Leeves was cast in "Frasier" as Daphne Moon.
The cast of both the British and American versions criticised the casting of "Red Dwarf USA", particularly the part of Lister, who is portrayed in the British version as a likeable slob, but in the U.S. version as somewhat clean-cut. In the 2004 documentary "Dwarfing USA", Danny John-Jules said the only actor who could have successfully portrayed an American Lister was John Belushi. In a 2009 interview on "Kevin Pollak's Chat Show", Bierko said that casting him as Lister was a "huge mistake," and also said a "John Belushi-type" would have been better suited to the role.
The American pilot has been heavily bootlegged, but it has never been broadcast on TV in any country. Excerpts from the first pilot are included in "Dwarfing USA", a featurette on the making of the pilots included on the DVD release of "Red Dwarf" fifth series. Because of rights-clearance issues, no footage from the second pilot is included in the featurette.
Since the beginning of the seventh series in 1997, Doug Naylor had been attempting to make a feature-length version of the show. A final draft of the script was written by Naylor, and flyers began circulating on certain websites. The flyer was genuine and had been distributed by Winchester Films to market the film overseas. Plot details were included as part of the teaser. It was set in the distant future, where "Homo sapienoids"—a race of cyborgs—had taken over the solar system and were wiping out the human race. Spaceships that tried to escape Earth were hunted down "until only one remained... "Red Dwarf"".
Naylor had scouted Australia to get an idea of locations and finance costs, with pre-production beginning in 2004 and filming planned for 2005. Costumes were made, including Kryten's, and A-list celebrity cameos, including Madonna, were rumoured. However, finding sufficient funding had been difficult. Naylor explained at a "Red Dwarf" Dimension Jump convention that the film had been rejected by the BBC and the British Film Council.
In 2012, material from early drafts of the film was incorporated into the series X finale "The Beginning".
In 2018, Naylor suggested that production of the movie was still under consideration: "The order will probably be another TV series, a stage show and possibly a movie, and I think the guys agree on that. The film is a long shot at this point just because it can take so long to get funding." In late 2019, following the announcement of a feature-length special due for release in 2020, publications such as "Yahoo!" and "The Sun" began referring to it as "Red Dwarf: The Movie" finally coming to fruition.
Deep7 Press (formerly Deep7 LLC) released "Red Dwarf – The Roleplaying Game" in February 2003 (although the printed copyright is 2002). Based on the series, the game allows its players to portray original characters within the "Red Dwarf" universe. Player characters can be human survivors, holograms, "evolved" house pets (cats, dogs, iguanas, rabbits, rats and mice), various types of mechanoid (Series 4000, Hudzen 10 and Waxdroids in the corebook, Series 3000 in the Extra Bits Book) or GELFs (Kinatawowi and Pleasure GELF in the corebook, "Vindaloovians" in the Extra Bits Book).
A total of three products were released for the game: the core 176-page rulebook, the "AI Screen" (analogous to the "Game Master's Screen" used in other role-playing games, also featuring the "Extra Bits Book" booklet) and the "Series Sourcebook". The "Series Sourcebook" contains plot summaries of each episode from series I to VIII as well as game rules for all major and minor characters from each series.
The game has been praised for staying true to the comedic nature of the series, for its entertaining writing and for the detail to which the background material is explained. However, some reviewers found the game mechanics to be simplistic and uninspiring compared to other science-fiction role-playing games on the market.
In promotion of the then-upcoming release of series XI, a mobile game titled "Red Dwarf XI - The Game" was released to coincide with the broadcast of "Twentica" on 22 September 2016. Developed by GameDigits, it was intended to be released episodically, with new instalments based on each episode of XI. However, development ceased after its adaptation of "Officer Rimmer", in favour of "Red Dwarf XII - The Game", which dropped the episodic format and instead featured minigames, such as running through the corridors of spaceships featured in XII, similar to "Temple Run", and free-roaming space on board "Starbug". Fan reception to the games was mixed, and by late 2019 both games were no longer available to download from Google Play.
"Red Dwarf" was featured as a hidden area in the Lego video game, "Lego Dimensions". The area was featured in the game's "Fantastic Beasts and Where to Find Them" expansion pack released on 18 November 2016, where the player was able to explore a small section of the titular ship including the sleeping quarters. References to the most recent series of the show were also included such as Snacky from "Give & Take" making a non-speaking appearance and the bio-printer from "Officer Rimmer" being an interactable object.
On 14 February 1998, the night before the tenth anniversary of the show's first episode broadcast, BBC Two devoted an evening of programmes to the series, under the banner of "Red Dwarf Night". The evening consisted of a mixture of new and existing material, and was introduced and linked by actor and fan Patrick Stewart. In addition, a series of special take-offs on BBC Two's idents, featuring the "2" logo falling in love with a skutter, were used. The night began with "Can't Smeg, Won't Smeg", a spoof of the cookery programme "Can't Cook, Won't Cook", presented by that show's host Ainsley Harriott who had himself appeared as a GELF in the series VI episode "". Taking place outside the continuity of the series, two teams (Kryten and Lister versus Rimmer and Cat, although Cat quickly departs to be replaced by alter ego Duane Dibbley) were challenged to make the best chicken vindaloo.
After a compilation of out-takes, the next programme was "Universe Challenge", a spoof of "University Challenge". Hosted by original "University Challenge" presenter Bamber Gascoigne, the show had a team of knowledgeable "Dwarf" fans compete against a team consisting of Chris Barrie, Craig Charles, Robert Llewellyn, Chloë Annett and Danny John-Jules. This was followed by "The Red Dwarf A–Z", a half-hour documentary that chose a different aspect of the show to focus on for each letter of the alphabet. Talking heads on the programme included Stephen Hawking, Terry Pratchett, original producer Paul Jackson, Mr Blobby, Patrick Stewart and a Dalek. Finally, the night ended with a showing of the episode "Gunmen of the Apocalypse".
In August 2013, YouTube held a campaign to promote user-generated content concerning science fiction, comics, gaming, and science. Robert Llewellyn, in character as Kryten, hosted the event's daily videos, making references to Lister, Rimmer, and the Cat whilst presenting featured uploads.
On 1 July 2019, an advert for the AA called "Stellar Rescue", featuring the core "Red Dwarf" crew, premiered on ITV. The advert has "Starbug" break down on an inhospitable planet, with Lister using the AA app to call a mechanic and successfully escape. On 2 March 2020, a second advert called "Stellar Rescue - Smart Breakdown" was uploaded to the AA's official YouTube channel, featuring "Starbug" stranded without power on an ice planet, with Lister again calling a mechanic and saving the day. An alternative 30-second cut accompanied it, serving as the broadcast version.
"Red Dwarf" was originally based on "Dave Hollins: Space Cadet", a series of five sketches that aired in the BBC Radio 4 series "Son of Cliché", produced by Rob Grant and Doug Naylor in 1984.
The sketches recounted the adventures of Dave Hollins (voiced by Nick Wilton), a hapless space traveller who is marooned in space far from Earth. His only steady companion is the computer Hab (voiced by Chris Barrie).
Grant and Naylor chose to use the "Dave Hollins: Space Cadet" sketches as a base for a television show after watching the 1974 film "Dark Star". They changed some elements from the sketches:
The 7-trillion-year figure was first changed to 7 billion years and then to 3 million, and the characters of Arnold Rimmer and the Cat were created. The name Dave Hollins was changed to Dave Lister when a footballer called Dave Hollins became well known, and Hab was replaced by Holly. One of the voice actors from "Son of Cliché", Chris Barrie, went on to portray Arnold Rimmer in the "Red Dwarf" TV series.
Episodes of "Dave Hollins" can be found on the 2-disc "Red Dwarf" DVD sets starting with series V and ending with series VIII. | https://en.wikipedia.org/wiki?curid=25721 |
Regular language
In theoretical computer science and formal language theory, a regular language (also called a rational language) is a formal language that can be expressed using a regular expression, in the strict sense of the latter notion used in theoretical computer science (as opposed to the many regular expression engines provided by modern programming languages, which are augmented with features that allow the recognition of languages that cannot be expressed by a classic regular expression).
Alternatively, a regular language can be defined as a language recognized by a finite automaton. The equivalence of regular expressions and finite automata is known as Kleene's theorem (after American mathematician Stephen Cole Kleene). In the Chomsky hierarchy, regular languages are defined to be the languages that are generated by Type-3 grammars (regular grammars).
Regular languages are very useful in input parsing and programming language design.
The collection of regular languages over an alphabet Σ is defined recursively as follows: the empty language Ø and, for each "a" ∈ Σ, the singleton language {"a"} are regular; if "A" and "B" are regular languages, then so are "A" ∪ "B" (union), "A" • "B" (concatenation) and "A"* (Kleene star); and no other languages over Σ are regular.
See regular expression for its syntax and semantics. Note that the above cases are in effect the defining rules of regular expression.
All finite languages are regular; in particular the empty string language {ε} = Ø* is regular. Other typical examples include the language consisting of all strings over the alphabet {"a", "b"} which contain an even number of "a"s, or the language consisting of all strings of the form: several "a"s followed by several "b"s.
A simple example of a language that is not regular is the set of strings { "a"^"n""b"^"n" | "n" ≥ 0 }. Intuitively, it cannot be recognized with a finite automaton, since a finite automaton has finite memory and it cannot remember the exact number of "a"s. Techniques to prove this fact rigorously are given below.
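To make the finite-automaton characterisation concrete, here is a minimal illustrative sketch in Python (the representation and all names are invented for exposition, not taken from any library) of a DFA accepting exactly the strings over {"a", "b"} that contain an even number of "a"s, one of the regular languages mentioned above. Its two states are the only "memory" the machine has; no finite table of this kind can track the unbounded count needed for { "a"^"n""b"^"n" | "n" ≥ 0 }.

```python
# A DFA as a transition table: (state, symbol) -> state.
# The two states record the parity of the number of 'a's read so far.
TRANSITIONS = {
    ("even", "a"): "odd",
    ("even", "b"): "even",
    ("odd", "a"): "even",
    ("odd", "b"): "odd",
}
START = "even"
ACCEPTING = {"even"}

def accepts(word):
    """Run the DFA on `word`; accept iff it halts in an accepting state."""
    state = START
    for symbol in word:
        state = TRANSITIONS[(state, symbol)]
    return state in ACCEPTING

assert accepts("abba")    # two 'a's: even, accepted
assert not accepts("ab")  # one 'a': odd, rejected
```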
A regular language satisfies the following equivalent properties:
1. it is the language of a regular expression (by the above definition);
2. it is the language accepted by a nondeterministic finite automaton (NFA);
3. it is the language accepted by a deterministic finite automaton (DFA);
4. it can be generated by a regular grammar;
5. it is the language accepted by an alternating finite automaton;
6. it is the language accepted by a two-way finite automaton;
7. it can be generated by a prefix grammar;
8. it can be accepted by a read-only Turing machine;
9. it can be defined in monadic second-order logic (Büchi–Elgot–Trakhtenbrot theorem);
10. it is recognized by some finite monoid "M", meaning it is the preimage {"w" ∈ Σ* | "f"("w") ∈ "S"} of a subset "S" of a finite monoid "M" under a monoid homomorphism "f": Σ* → "M";
11. the number of equivalence classes of its syntactic congruence is finite.
Properties 10. and 11. are purely algebraic approaches to define regular languages; a similar set of statements can be formulated for a monoid "M"⊂Σ*. In this case, equivalence over "M" leads to the concept of a recognizable language.
Some authors use one of the above properties different from "1." as an alternative definition of regular languages.
Some of the equivalences above, particularly those among the first four formalisms, are called "Kleene's theorem" in textbooks. Precisely which one (or which subset) is called such varies between authors. One textbook calls the equivalence of regular expressions and NFAs ("1." and "2." above) "Kleene's theorem". Another textbook calls the equivalence of regular expressions and DFAs ("1." and "3." above) "Kleene's theorem". Two other textbooks first prove the expressive equivalence of NFAs and DFAs ("2." and "3.") and then state "Kleene's theorem" as the equivalence between regular expressions and finite automata (the latter said to describe "recognizable languages"). A linguistically oriented text first equates regular grammars ("4." above) with DFAs and NFAs, calls the languages generated by (any of) these "regular", after which it introduces regular expressions which it terms to describe "rational languages", and finally states "Kleene's theorem" as the coincidence of regular and rational languages. Other authors simply "define" "rational expression" and "regular expressions" as synonymous and do the same with "rational languages" and "regular languages".
The regular languages are closed under various operations, that is, if the languages "K" and "L" are regular, so is the result of the following operations: the set-theoretic Boolean operations "K" ∪ "L" (union), "K" ∩ "L" (intersection) and the complement of "L", hence also the relative complement "K" ∖ "L"; the regular operations "K" • "L" (concatenation) and "L"* (Kleene star); and the reversal of "L", as well as homomorphic and inverse homomorphic images of "L".
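Closure under intersection, for example, admits a direct construction: the product automaton runs both machines in lock-step on pairs of states. A minimal sketch, reusing the illustrative (transitions, start, accepting) DFA representation from the example above (function and variable names are invented):

```python
def intersect_dfa(dfa1, dfa2, alphabet):
    """Product construction: the result accepts exactly L(dfa1) ∩ L(dfa2).

    Each DFA is a (transitions, start, accepting) triple with a complete
    transition table, as in the parity example above.
    """
    t1, start1, acc1 = dfa1
    t2, start2, acc2 = dfa2
    states1 = {p for (p, _) in t1}
    states2 = {q for (q, _) in t2}
    transitions = {}
    for p in states1:
        for q in states2:
            for a in alphabet:
                # Run both machines in lock-step on the pair state (p, q).
                transitions[((p, q), a)] = (t1[(p, a)], t2[(q, a)])
    # A pair state is accepting iff both components are accepting.
    accepting = {(p, q) for p in acc1 for q in acc2}
    return transitions, (start1, start2), accepting
```

Complementing a complete DFA is simpler still (swap accepting and non-accepting states), and closure under union then follows from De Morgan's laws.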
Given two deterministic finite automata "A" and "B", it is decidable whether they accept the same language.
As a consequence, using the above closure properties, the following problems are also decidable for arbitrarily given deterministic finite automata "A" and "B", with accepted languages "L""A" and "L""B", respectively: containment (is "L""A" ⊆ "L""B"?), disjointness (is "L""A" ∩ "L""B" = Ø?), emptiness (is "L""A" = Ø?), universality (is "L""A" = Σ*?), and membership (given "w" ∈ Σ*, is "w" ∈ "L""A"?).
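These questions all reduce to reachability once the closure constructions are in hand: for instance, "L""A" = "L""B" holds exactly when the symmetric difference ("L""A" ∖ "L""B") ∪ ("L""B" ∖ "L""A") is empty, and emptiness asks merely whether some accepting state is reachable from the start state. A sketch of the emptiness check, in the same illustrative representation as above:

```python
from collections import deque

def is_empty(dfa, alphabet):
    """Decide whether L(dfa) = Ø by breadth-first search.

    Assumes a complete transition table, as in the sketches above.
    """
    transitions, start, accepting = dfa
    seen = {start}
    frontier = deque([start])
    while frontier:
        state = frontier.popleft()
        if state in accepting:
            return False  # an accepting state is reachable, so L(dfa) ≠ Ø
        for a in alphabet:
            nxt = transitions[(state, a)]
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True
```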
For regular expressions, the universality problem is NP-complete already for a singleton alphabet.
For larger alphabets, that problem is PSPACE-complete. If regular expressions are extended to allow also a "squaring operator", with "A"^2 denoting the same as "AA", still just regular languages can be described, but the universality problem has an exponential space lower bound, and is in fact complete for exponential space with respect to polynomial-time reduction.
In computational complexity theory, the complexity class of all regular languages is sometimes referred to as REGULAR or REG and equals DSPACE(O(1)), the decision problems that can be solved in constant space (the space used is independent of the input size). REGULAR ≠ AC0, since it (trivially) contains the parity problem of determining whether the number of 1 bits in the input is even or odd, and this problem is not in AC0. On the other hand, REGULAR does not contain AC0, because the nonregular language of palindromes and the nonregular language { "a"^"n""b"^"n" | "n" ≥ 0 } can both be recognized in AC0.
If a language is "not" regular, it requires a machine with at least Ω(log log "n") space to recognize (where "n" is the input size). In other words, DSPACE(o(log log "n")) equals the class of regular languages. In practice, most nonregular problems are solved by machines taking at least logarithmic space.
To locate the regular languages in the Chomsky hierarchy, one notices that every regular language is context-free. The converse is not true: for example the language consisting of all strings having the same number of "a"'s as "b"'s is context-free but not regular. To prove that a language such as this is not regular, one often uses the Myhill–Nerode theorem or the pumping lemma among other methods.
Important subclasses of regular languages include the finite languages, those containing only a finite number of words, and the star-free languages, those that can be described by a regular expression built from the empty set, letters, concatenation and all Boolean operators including complementation, but not the Kleene star.
Let $s_L(n)$ denote the number of words of length $n$ in $L$. The ordinary generating function for $L$ is the formal power series
$$S_L(z) = \sum_{n \ge 0} s_L(n)\, z^n.$$
The generating function of a language $L$ is a rational function if $L$ is regular. Hence for every regular language $L$ the sequence $s_L(n)$ is constant-recursive; that is, there exist an integer constant $n_0$, complex constants $\lambda_1, \dots, \lambda_k$ and complex polynomials $p_1(x), \dots, p_k(x)$ such that for every $n \ge n_0$ the number $s_L(n)$ of words of length $n$ in $L$ is
$$s_L(n) = p_1(n)\lambda_1^n + \dots + p_k(n)\lambda_k^n.$$
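This constant-recursive behaviour can be observed computationally: for a language given by a DFA, $s_L(n)$ is the number of length-$n$ paths from the start state to an accepting state, which can be read off from repeated multiplication by the automaton's transfer (adjacency) matrix, and the $\lambda_i$ above are eigenvalues of that matrix. A sketch, again using the illustrative DFA representation from earlier:

```python
def word_counts(dfa, max_len):
    """Return [s_L(0), ..., s_L(max_len)] for the language L of `dfa`."""
    transitions, start, accepting = dfa
    states = sorted({p for (p, _) in transitions})
    index = {s: i for i, s in enumerate(states)}
    n = len(states)
    # m[i][j] = number of alphabet symbols leading from state i to state j.
    m = [[0] * n for _ in range(n)]
    for (p, _symbol), q in transitions.items():
        m[index[p]][index[q]] += 1
    u = [0] * n
    u[index[start]] = 1  # u[j] = number of length-k paths from start to j
    counts = []
    for _ in range(max_len + 1):
        counts.append(sum(u[index[f]] for f in accepting))
        u = [sum(u[i] * m[i][j] for i in range(n)) for j in range(n)]
    return counts

# For the parity DFA above:
# word_counts((TRANSITIONS, START, ACCEPTING), 5) == [1, 1, 2, 4, 8, 16]
```

For that parity language, $s_L(n) = 2^{n-1}$ for $n \ge 1$: exactly the promised form, with $k = 1$, $p_1(x) = \tfrac{1}{2}$ and $\lambda_1 = 2$.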
Thus, non-regularity of certain languages $L'$ can be proved by counting the words of a given length in $L'$. Consider, for example, the Dyck language of strings of balanced parentheses. The number of words of length $2n$ in the Dyck language is equal to the Catalan number $C_n \sim \frac{4^n}{n^{3/2}\sqrt{\pi}}$, which is not of the form $p(n)\lambda^n$, witnessing the non-regularity of the Dyck language. Care must be taken since some of the eigenvalues $\lambda_i$ could have the same magnitude. For example, the number of words of length $n$ in the language of all even binary words is not of the form $p(n)\lambda^n$, but the numbers of words of even and odd length separately are of this form; the corresponding eigenvalues are $2$ and $-2$. In general, for every regular language there exists a constant $d$ such that for all $a$, the number of words of length $dm + a$ is asymptotically of the form $C_a m^{p_a} \lambda_a^m$.
The "zeta function" of a language "L" is
The zeta function of a regular language is not in general rational, but that of an arbitrary cyclic language is.
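As a worked example: over the one-letter alphabet {"a"}, the language $L = a^*$ (which is cyclic) has $s_L(n) = 1$ for every $n$, so
$$\zeta_L(z) = \exp\left(\sum_{n \ge 1} \frac{z^n}{n}\right) = \exp(-\log(1 - z)) = \frac{1}{1 - z},$$
which is rational, consistent with the statement above.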
The notion of a regular language has been generalized to infinite words (see ω-automata) and to trees (see tree automaton).
Rational set generalizes the notion (of regular/rational language) to monoids that are not necessarily free. Likewise, the notion of a recognizable language (by a finite automaton) has its namesake in the recognizable sets over a monoid that is not necessarily free. Howard Straubing notes in relation to these facts that “The term "regular language" is a bit unfortunate. Papers influenced by Eilenberg's monograph often use either the term "recognizable language", which refers to the behavior of automata, or "rational language", which refers to important analogies between regular expressions and rational power series. (In fact, Eilenberg defines rational and recognizable subsets of arbitrary monoids; the two notions do not, in general, coincide.) This terminology, while better motivated, never really caught on, and "regular language" is used almost universally.”
Rational series is another generalization, this time in the context of a formal power series over a semiring. This approach gives rise to weighted rational expressions and weighted automata. In this algebraic context, the regular languages (corresponding to Boolean-weighted rational expressions) are usually called "rational languages". Also in this context, Kleene's theorem finds a generalization called the Kleene-Schützenberger theorem. | https://en.wikipedia.org/wiki?curid=25723 |
Reference work
A reference work is a work such as a book or periodical (or its electronic equivalent) to which one can refer for information. The information is intended to be found quickly when needed. Reference works are usually "referred" to for particular pieces of information, rather than read beginning to end. The writing style used in these works is informative; the authors avoid use of the first person, and emphasize facts. Many reference works are compiled by a team of contributors whose work is coordinated by one or more editors rather than by an individual author. Indices are commonly provided in many types of reference work. Updated editions are usually published as needed, in some cases annually (e.g. "Whitaker's Almanack", "Who's Who"). Reference works include dictionaries, encyclopedias, almanacs, atlases, bibliographies, biographical sources, catalogs such as library catalogs and art catalogs, concordances, directories such as business directories and telephone directories, discographies, filmographies, glossaries, handbooks, indices such as bibliographic indices and citation indices, manuals, research guides, thesauruses, and yearbooks. Many reference works are available in electronic form and can be obtained as reference software, CD-ROMs, DVDs, or online through the Internet.
A reference work is useful to its users only if they attribute some degree of trust to its contents.
In contrast to books that are loaned, a reference book or reference-only book in a library is one that may only be used in the library and may not be borrowed from the library. Many such books are reference works (in the first sense), which are, usually, used briefly or photocopied from, and therefore, do not need to be borrowed. Keeping reference books in the library assures that they will always be available for use on demand. Some reference-only books are too valuable to permit borrowers to take them out. Reference-only items may be shelved in a reference collection located separately from circulating items. Some libraries consist entirely, or to a large extent, of books which may not be borrowed.
An electronic resource is a computer program or data that is stored electronically, which is usually found on a computer, including information that is available on the Internet. Libraries offer numerous types of electronic resources including electronic texts such as electronic books and electronic journals, bibliographic databases, institutional repositories, websites, and software applications. | https://en.wikipedia.org/wiki?curid=25727 |
Roger Casement
Roger David Casement (1 September 1864 – 3 August 1916), known as Sir Roger Casement, CMG, between 1911 and 1916, was a diplomat and Irish nationalist. He worked for the British Foreign Office as a diplomat and later became a humanitarian activist, poet and Easter Rising leader. Described as the "father of twentieth-century human rights investigations", he was honoured in 1905 for the Casement Report on the Congo and knighted in 1911 for his important investigations of human rights abuses in Peru.
In Africa as a young man, Casement first worked for commercial interests before joining the British Colonial Service. In 1891 he was appointed as a British consul, a profession he followed for more than 20 years. Influenced by the Boer War and his investigation into colonial atrocities against indigenous peoples, Casement grew to mistrust imperialism. After retiring from consular service in 1913, he became more involved with Irish republicanism and other separatist movements. During World War I he made efforts to gain German military aid for the 1916 Easter Rising that sought to gain Irish independence.
He was arrested, convicted and executed for high treason. He was stripped of his knighthood and other honours. Before the trial, the British government circulated excerpts said to be from his private journals, known as the "Black Diaries", which detailed homosexual activities. Given prevailing views and existing laws on homosexuality, this material undermined support for clemency for Casement. Debates have continued about these diaries: a handwriting comparison study in 2002 concluded Casement had written the diaries, but this was still contested by some.
Casement was born in Dublin to an Anglo-Irish family, living in very early childhood at Doyle's Cottage, Lawson Terrace, Sandycove.
His father, Captain Roger Casement of the (King's Own) Regiment of Dragoons, was the son of a bankrupt Belfast shipping merchant, Hugh Casement, who later moved to Australia. Captain Casement had served in the 1842 Afghan campaign. He travelled to Europe to fight as a volunteer in the Hungarian Revolution of 1848, but arrived after the Surrender at Világos. After the family moved to England, Roger's mother, Anne Jephson (or Jepson), of a Dublin Anglican family, purportedly had him secretly baptised at the age of three as a Roman Catholic in Rhyl, Wales. However, the priest who arranged his baptism in 1916 clearly stated that the claimed earlier baptism had taken place in Aberystwyth, 80 miles from Rhyl, raising the question of why such a supposedly important event should have been so misremembered.
According to an 1892 letter, Casement believed his mother was descended from the Jephson family of Mallow, County Cork. However, the Jephson family's historian provides no evidence of this. The family lived in England in genteel poverty; Roger's mother died when he was nine. They returned to Ireland to County Antrim to live near paternal relatives. When Casement was 13 years old his father died in Ballymena, and he was left dependent on the charity of relatives, the Youngs and the Casements. He was educated at the Diocesan School, Ballymena (later the Ballymena Academy). He left school at 16 and went to England to work as a clerk with Elder Dempster, a Liverpool shipping company headed by Alfred Lewis Jones.
Roger Casement's brother, Thomas Hugh Jephson Casement (1863–1939), helped establish the Irish Coastguard Service. He drowned in Dublin's Grand Canal on 6 March 1939, and is buried in Deansgrange Cemetery.
Casement worked in the Congo for Henry Morton Stanley and the African International Association from 1884; this association became known as a front for King Leopold II of Belgium in his takeover of the Congo Free State. Casement worked on a survey to improve communication and recruited and supervised workmen in building a railroad to bypass the lower 220 miles of the Congo River, which is made unnavigable by cataracts, in order to improve transportation and trade to the Upper Congo. During his commercial work, he learned African languages.
In 1890 Casement met Joseph Conrad, who had come to the Congo to pilot a merchant ship, "Le Roi des Belges" ("King of the Belgians"). Both had come inspired by the idea that "European colonisation would bring moral and social progress to the continent and free its inhabitants 'from slavery, paganism and other barbarities.' Each would soon learn the gravity of his error." Conrad published his short novel "Heart of Darkness" in 1899. Casement would later take on a different kind of writing to expose the conditions he found in the Congo during his official investigation for the British government. In these formative years, he also met Herbert Ward, and they became longtime friends. Ward left Africa in 1889, and devoted his time to becoming an artist, but his experience there strongly influenced his work.
Casement joined the Colonial Service, under the authority of the Colonial Office, first serving overseas as a clerk in British West Africa before in August 1901 transferring to the Foreign Office service as British consul in the eastern part of the French Congo. In 1903 the Balfour Government commissioned Casement, then its consul at Boma in the Congo Free State, to investigate the human rights situation in that colony of the Belgian king, Leopold II. Setting up a private army known as the Force Publique, Leopold had squeezed revenue out of the people of the territory through a reign of terror in the harvesting and export of rubber and other resources. In trade, Belgium shipped guns, whips ("chicotte") and other materials to the Congo, used chiefly to suppress the local people.
Casement travelled for weeks in the upper Congo Basin to interview people throughout the region, including workers, overseers, and mercenaries. He delivered a long, detailed eyewitness report to the Crown that exposed abuses: "the enslavement, mutilation, and torture of natives on the rubber plantations," becoming known as the "Casement Report" of 1904. King Leopold had held the Congo Free State since 1885, when the Berlin Conference of European powers and the United States effectively gave him free rein in the area.
Leopold had exploited the territory's natural resources (mostly rubber) as a private entrepreneur, not as king of the Belgians. Using violence and murder against men and their families, Leopold's private Force Publique had decimated many native villages in the course of forcing the men to gather rubber and abusing them to increase productivity. Casement's report provoked controversy, and some companies with a business interest in the Congo rejected its findings, as did Casement's former boss, Alfred Lewis Jones.
When the report was made public, opponents of Congolese conditions formed interest groups, such as the Congo Reform Association, founded by E. D. Morel with Casement's support, and demanded action to relieve the situation of the Congolese. Other European nations followed suit, as did the United States, and the British Parliament demanded a meeting of the 14 signatory powers to review the 1885 Berlin Agreement defining interests in Africa. The Belgian Parliament, pushed by Socialist leader Emile Vandervelde and other critics of the king's Congolese policy, forced Leopold to set up an independent commission of inquiry. In 1905, despite Leopold's efforts, it confirmed the essentials of Casement's report. On 15 November 1908, the parliament of Belgium took over the Congo Free State from Leopold and organised its administration as the Belgian Congo.
In 1906 the Foreign Office sent Casement to Brazil: first as consul in Santos, then transferred to Pará, and lastly promoted to consul-general in Rio de Janeiro. He was attached as a consular representative to a commission investigating rubber slavery by the Peruvian Amazon Company (PAC), which had been registered in Britain in 1908 and had a British board of directors and numerous stockholders. In September 1909, a journalist named Sidney Paternoster wrote in "Truth", a British magazine, of abuses against PAC workers and competing Colombians in the disputed region of the Peruvian Amazon.
In addition, the British consul at Iquitos had said that Barbadians, considered British subjects as part of the empire, had been ill-treated while working for PAC, which gave the government a reason to intervene. Ordinarily it could not investigate the internal affairs of another country. American civil engineer Walter Hardenburg had told Paternoster of witnessing a joint PAC and Peruvian military action against a Colombian rubber station, which they destroyed, stealing the rubber. He also saw Peruvian Indians whose backs were marked by severe whipping, in a pattern called the Mark of Arana (the head of the rubber company), and reported other abuses.
PAC, with its operational headquarters in Iquitos, dominated the city and the region. The area was separated from the main population of Peru by the Andes, and it was 1900 miles from the Amazon's mouth at Pará. The British-registered company was effectively controlled by the archetypal rubber baron Julio César Arana and his brother. Born in Lima, Arana had climbed out of poverty to own and operate a company harvesting great quantities of rubber in the Peruvian Amazon, which was much in demand on the world market. The rubber boom had led to expansion in Iquitos as a trading center, as all the company rubber was shipped down the Amazon River from there to the Atlantic port. Numerous foreigners had flocked to the area seeking their fortunes in the rubber boom, or at least some piece of the business. The rough frontier city, from its respectable businesses to its vice district, was highly dependent on the PAC.
Casement traveled to the Putumayo District, where the rubber was harvested deep in the Amazon Basin, and explored the treatment of the local Indians of Peru. The isolated area was outside the reach of the national government and near the border with Colombia, which periodically made incursions in competition for the rubber. For years, the Indians had been forced into unpaid labor by field staff of the PAC, who exerted absolute power over them and subjected them to near starvation, severe physical abuse, rape of women and girls by the managers and overseers, branding and casual murder. Casement found conditions as inhumane as those in the Congo. He interviewed both the Putumayo Indians and the men who had abused them, including three Barbadians who had themselves suffered under the company's conditions. When the report was publicised, there was public outrage in Britain over the abuses. Casement made two lengthy visits to the region, first in 1910 with a commission of investigators.
Casement's report has been described as a "brilliant piece of journalism", as he wove together first-person accounts by both "victims and perpetrators of atrocities ... Never before had distant colonial subjects been given such personal voices in an official document." After his report was made to the British government, some wealthy board members of the PAC were horrified by what they learned. Arana and the Peruvian government promised to make changes. In 1911, the British government asked Casement to return to Iquitos and Putumayo to see if promised changes in treatment had occurred. In a report to the British foreign secretary, dated 17 March 1911, Casement detailed the rubber company's continued use of pillories to punish the Indians: "Men, women, and children were confined in them for days, weeks, and often months. ... Whole families ... were imprisoned—fathers, mothers, and children, and many cases were reported of parents dying thus, either from starvation or from wounds caused by flogging, while their offspring were attached alongside of them to watch in misery themselves the dying agonies of their parents."
After his return to Britain, Casement repeated his extra-consular campaigning work by organising Anti-Slavery Society and Catholic mission interventions in the region. Some of the company men exposed as killers in his 1910 report were charged by Peru, while most fled the region and were never captured. Some entrepreneurs had smuggled out cuttings from rubber plants and began cultivation in southeast Asia in colonies of the British Empire. The scandal of the PAC caused major losses in business to the company, and rubber demand began to be met by farmed rubber in other parts of the world. With the collapse of business for PAC, most foreigners left Iquitos and it quickly returned to its former status as an isolated backwater. For a period, the Putumayo Indians were largely left alone. Arana was never prosecuted as head of the company. He lived in London for years, then returned to Peru. Despite the scandal associated with Casement's report and international pressure on the Peruvian government to change conditions, Arana later had a successful political career. He was elected a senator and died in Lima, Peru in 1952, aged 88.
Casement wrote extensively for his private record (as always) in those two years. During this period he continued to write in his diaries, and the one for 1911 was described as being unusually discursive. He kept them in London along with the 1903 diary and other papers of the period, presumably so they could be consulted in his continuing work as "Congo Casement" and as the saviour of the Putumayo Indians. In 1911 Casement received a knighthood for his efforts on behalf of the Amazonian Indians, having been appointed Companion of the Order of St Michael and St George (CMG) in 1905 for his Congo work.
While on leave from Africa in Ireland from 1904 to 1905, Casement joined the Gaelic League, an organisation established in 1893 to preserve and revive the spoken and literary use of the Irish language. He met the leaders of the powerful Irish Parliamentary Party (IPP) to lobby for his work in the Congo. He did not support those, like the IPP, proposing Home Rule, as he felt sure the House of Lords would veto such efforts. Casement was more impressed by Arthur Griffith's new Sinn Féin party (founded 1905), which called for an independent Ireland (through a non-violent campaign of strikes and boycotts) whose sole imperial tie would be a dual monarchy between Britain and Ireland, modeled on the policy example of Ferenc Deák in Hungary. Casement joined the party in 1905.
Casement retired from the British consular service in the summer of 1913. In November of that year he was one of those helping to form the Irish Volunteers. He and Eoin MacNeill, later the organisation's chief of staff, co-wrote the Volunteers' manifesto. In July 1914, Casement journeyed to the United States to promote and raise money for the Volunteers among the large Irish community there. Through his friendship with men such as Bulmer Hobson, a member both of the Volunteers and of the secret Irish Republican Brotherhood (IRB), Casement established connections with exiled Irish nationalists, particularly "Clan na Gael".
Elements of the suspicious "Clan" did not trust him completely, as he was not a member of the IRB and held views they considered too moderate, although others such as John Quinn regarded him as extreme. John Devoy, initially hostile to Casement for his part in conceding control of the Irish Volunteers to John Redmond, was won over in June, and another "Clan" leader, Joseph McGarrity, became devoted to Casement and remained so from then on. The Howth gun-running in late July 1914, which Casement had helped to organise and finance, further enhanced his reputation.
In August 1914, at the outbreak of World War I, Casement and Devoy arranged a meeting in New York with the western hemisphere's top-ranking German diplomat, Count Bernstorff, to propose a mutually beneficial plan: if Germany would sell guns to the Irish revolutionaries and provide military leaders, the Irish would revolt against England, diverting troops and attention from the war with Germany. Bernstorff appeared sympathetic. Casement and Devoy sent an envoy, "Clan na Gael" president John Kenny, to present their plan personally. Kenny, while unable to meet the German Emperor, did receive a warm reception from Flotow, the German ambassador to Italy, and from Prince von Bülow.
In October 1914, Casement sailed for Germany via Norway, traveling in disguise and seeing himself as an ambassador of the Irish nation. While the journey was his idea, "Clan na Gael" financed the expedition. During their stop in Christiania, his companion Adler Christensen was taken to the British legation, where a reward was allegedly offered if Casement were "knocked on the head". British diplomat Mansfeldt Findlay, in contrast, advised London that Christensen had "implied that their relations were of an unnatural nature and that consequently he had great power over this man". No evidence was provided by Findlay for the insinuation.
Findlay's handwritten letter of 1914 is kept in University College, Dublin, and is viewable online. This letter—written on official notepaper by Minister Findlay at the British Legation in Oslo—offers to Christensen the sum of £5,000 plus immunity from prosecution and free passage to the United States in return for information leading to the capture of Roger Casement. That amount would be approximately £2,616,000 in 2014.
In November 1914, Casement negotiated a declaration by Germany which stated: "The Imperial Government formally declares that under no circumstances would Germany invade Ireland with a view to its conquest or the overthrow of any native institutions in that country. Should the fortune of this Great War, that was not of Germany's seeking, ever bring in its course German troops to the shores of Ireland, they would land there not as an army of invaders to pillage and destroy but as the forces of a Government that is inspired by goodwill towards a country and people for whom Germany desires only national prosperity and national freedom."
Casement spent most of his time in Germany seeking to recruit an Irish Brigade from among more than 2,000 Irish prisoners-of-war taken in the early months of the war and held in the prison camp of Limburg an der Lahn. His plan was that they would be trained to fight against Britain in the cause of Irish independence. American Ambassador to Germany James W. Gerard mentioned the effort in his memoir "Four Years in Germany": "The Germans collected all the soldier prisoners of Irish nationality in one camp at Limburg not far from Frankfurt a. M. There efforts were made to induce them to join the German army. The men were well treated and were often visited by Sir Roger Casement who, working with the German authorities, tried to get these Irishmen to desert their flag and join the Germans. A few weaklings were persuaded by Sir Roger who finally discontinued his visits, after obtaining about thirty recruits, because the remaining Irishmen chased him out of the camp."
On 27 December 1914 Casement signed an agreement in Berlin to this effect with Arthur Zimmermann in the German Foreign Office. Fifty-two of the 2,000 prisoners volunteered for the Brigade. Contrary to German promises, they received no training in the use of machine guns, which at the time were relatively new and unfamiliar weapons.
During World War I, Casement is known to have been involved in the German-backed plan by Indians to win their freedom from the British Raj, the "Hindu–German Conspiracy", recommending Joseph McGarrity to Franz von Papen as an intermediary. The Indian nationalists may also have followed Casement's strategy of trying to recruit prisoners of war to fight for Indian independence.
Both efforts proved unsuccessful. In addition to finding it difficult to ally with the Germans while held as prisoners, potential recruits to Casement's brigade knew they would be liable to the death penalty as traitors if Britain won the war. In April 1916, Germany offered the Irish 20,000 Mosin–Nagant 1891 rifles, ten machine guns and accompanying ammunition, but no German officers; it was a fraction of the quantity of the arms Casement had hoped for, with no military expertise on offer.
Casement did not learn about the Easter Rising until after the plan was fully developed. The German weapons never landed in Ireland; the Royal Navy intercepted the ship transporting them, a German cargo vessel named the "Libau", disguised as a Norwegian vessel, "Aud-Norge". All the crew were German sailors, but their clothes and effects, even the charts and books on the bridge, were Norwegian. As John Devoy had either misunderstood or disobeyed Pearse's instructions that the arms were under no circumstances to land before Easter Sunday, the Irish Transport and General Workers' Union (ITGWU) members set to unload the arms under the command of Irish Citizen Army officer and trade unionist William Partridge were not ready. The IRB men sent to meet the boat drove off a pier and drowned.
The British had intercepted German communications coming from Washington and suspected that there was going to be an attempt to land arms in Ireland, although they were not aware of the precise location. The arms ship, under Captain Karl Spindler, was apprehended by HMS "Bluebell" in the late afternoon of Good Friday. As the ship was about to be escorted into Queenstown (present-day Cobh), County Cork, on the morning of Saturday 22 April, Captain Spindler scuttled her with pre-set explosive charges. The wreck now lies at a depth of 40 metres. Its surviving crew became prisoners of war.
Casement confided his personal papers to Dr Charles Curry, with whom he had stayed at Riederau on the Ammersee, before he left Germany. He departed with Robert Monteith and Sergeant Daniel Beverley (Bailey) of the Irish Brigade in a submarine, initially SM U-20, which developed engine trouble, and then SM U-19, shortly after the "Aud" sailed. According to Monteith, Casement believed the Germans were toying with him from the start and providing inadequate aid that would doom a rising to failure. He wanted to reach Ireland before the shipment of arms and to convince Eoin MacNeill (who he believed was still in control) to cancel the rising.
Casement sent John McGoey, a recently arrived Irish-American, through Denmark to Dublin, ostensibly to advise what military aid was coming from Germany and when, but with Casement's orders "to get the Heads in Ireland to call off the rising and merely try to land the arms and distribute them". McGoey did not reach Dublin, nor did his message. His fate remained unknown for many years: he joined the Royal Navy in 1916, survived the war, and later returned to the United States, where he died in an accident on a building site in 1925.
In the early hours of 21 April 1916, three days before the rising began, the German submarine put Casement ashore at Banna Strand in Tralee Bay, County Kerry. Suffering from a recurrence of the malaria that had plagued him since his days in the Congo, and too weak to travel, he was discovered at McKenna's Fort (an ancient ring fort now called Casement's Fort) in Rahoneen, Ardfert, and arrested on charges of high treason, sabotage and espionage against the Crown. He sent word to Dublin about the inadequate German assistance. The Kerry Brigade of the Irish Volunteers might have tried to rescue him over the next three days, but had been ordered by its leadership in Dublin to "do nothing"; not a shot was to be fired in Ireland before the Easter Rising was in train. "He was taken to Brixton Prison to be placed under special observation for fear of an attempt of suicide. There was no staff at the Tower [of London] to guard suicidal cases."
At Casement's highly publicised trial for high treason, the prosecution had trouble arguing its case. Casement's crimes had been carried out in Germany and the Treason Act 1351 seemed to apply only to activities carried out on English (or arguably British) soil. A close reading of the Act allowed for a broader interpretation: the court decided that a comma should be read in the unpunctuated original Norman-French text, crucially altering the sense so that "in the realm or elsewhere" referred to where acts were done and not just to where the "King's enemies" might be. Afterwards, Casement himself wrote that he was to be "hanged on a comma", a remark that became a much-quoted epigram.
During his trial the prosecution (F. E. Smith), who had admired some of Casement's work before he went over to the Germans, informally suggested to the defence barrister (A. M. Sullivan) that they should jointly produce what are now called the "Black Diaries" in evidence, as this would most likely cause the court to find Casement "guilty but insane", and save his life. Casement refused to agree to this, and was found guilty and was sentenced to be hanged.
Before and during the trial and appeal, the British government secretly circulated excerpts from Casement's journals, which portrayed him as a sexual deviant and included numerous explicit accounts of sexual activity. This aroused public opinion against him and influenced those notables who might otherwise have tried to intervene. Given societal norms and the illegality of homosexuality at the time, support for Casement's reprieve declined in some quarters. The journals became known in the 1950s as the "Black Diaries".
Casement unsuccessfully appealed against his conviction and death sentence. Those who pleaded for clemency for Casement included Sir Arthur Conan Doyle, who was acquainted with Casement through the work of the Congo Reform Association, poet W. B. Yeats, and playwright George Bernard Shaw. Joseph Conrad could not forgive Casement, nor could Casement's longtime friend, the sculptor Herbert Ward, whose son Charles had been killed on the Western Front that January; Ward went so far as to change the name of his own son, Casement's godson, who had been named after him. Members of the Casement family in Antrim contributed discreetly to the defence fund, although they had sons in the British Army and Navy. A United States Senate appeal against the death sentence was rejected by the British cabinet on the insistence of prosecutor F. E. Smith, an opponent of Irish independence.
Casement's knighthood was forfeited on 29 June 1916.
On the day of his execution, Casement was received into the Catholic Church at his request. He was attended by two Catholic priests, Dean Timothy Ring and Father James Carey, from the East London parish of SS Mary and Michael.
The latter, also known as James McCarroll, said of Casement that he was "a saint ... we should be praying to him [Casement] instead of for him". Casement was hanged at Pentonville Prison in London on 3 August 1916. He was 51 years old.
British officials have claimed that Casement kept the "Black Diaries", a set of diaries covering the years 1903, 1910 and 1911 (with two volumes for 1911). Jeffrey Dudgeon, who published an edition of all the diaries, said, "His homosexual life was almost entirely out of sight and disconnected from his career and political work". If genuine, the diaries reveal Casement as a homosexual who had many partners, had a fondness for young men and mostly paid for sex.
In 1916, after Casement's conviction for high treason, the British government circulated what it presented as photographs of pages of the diaries to individuals campaigning for the commutation of Casement's death sentence. At a time of strong conservatism, not least among Irish Catholics, publicising the "Black Diaries" and Casement's alleged homosexuality undermined support for him. The question of whether the diaries are genuine or forgeries has been much debated. The diaries were declassified for limited inspection (by persons approved by the Home Office) in August 1959. The original diaries may be seen at the British National Archives in Kew. Historians and biographers of Casement's life have taken opposing views. Roger McHugh (in 1976) and Angus Mitchell (in 2000 and later) regard the diaries as forged. In 2012, Mitchell published several articles in the "Field Day Review" of Notre Dame University.
In 2005 the Royal Irish Academy, Dublin, published "The Giles Report", a private report on the "Black Diaries" written in 2002. Two US forensic-document examiners reviewed the Giles Report; both were critical of it. James Horan stated, "As editor of the "Journal of Forensic Sciences" and "The Journal of the American Society of Questioned Document Examiners", I would not recommend publication of the Giles Report because the report does not show how its conclusion was reached. To the question, 'Is the writing Roger Casement's?' on the basis of the Giles Report as it stands, my answer would have to be I cannot tell."
Marcel Matley, a second document examiner, stated, "Even if every document examined were the authentic writing of Casement, this report does nothing to establish the fact." A very brief expert opinion in 1959 by a Home Office employee failed to identify Casement as author of the diaries. This opinion is almost unknown and does not appear in the Casement literature. As late as July 2015 the UK National Archives ambiguously described the "Black Diaries" as "attributed to Roger Casement", while at the same time unambiguously declaring their satisfaction with the result of the private Giles Report.
Mario Vargas Llosa presented a mixed account of Casement's sexuality in his 2010 novel, "The Dream of the Celt", suggesting that Casement wrote partially fictional diaries of what he wished had taken place in homosexual encounters. Dudgeon suggested in a 2013 article that Casement needed to be "sexless" to fit his role as a Catholic martyr in the nationalist movement of the time. Dudgeon writes, "The evidence that Casement was a busy homosexual is in his own words and handwriting in the diaries, and is colossally convincing because of its detail and extent."
Research published in 2016 again casts doubt on the "Black Diaries". "The Casement Secret" by Paul R. Hyde argues that there is no evidence of the existence of the diaries during Casement's lifetime, since only typescript pages – allegedly copies – were circulated; no-one was shown the diaries now in the National Archives. An official memorandum by the British Secretary of State dated 6 March 1959 states: "There is no record on the Home Office papers of the diaries or the copies having been shown to anyone outside the Government service before Casement's trial".
This argument reflects the question raised in 1955 by Lord Russell of Liverpool concerning the existence of the diaries at the time of Casement's trial. "Anatomy of a Lie" by Paul R. Hyde proposes a paradigm shift: the diaries were fabricated after Casement's execution as forged versions of the original typescripts. The book also argues that the homosexual dimension was originally the invention of British Envoy Extraordinary and Minister Plenipotentiary Mansfeldt Findlay in Christiania (present-day Oslo in Norway), in a false memorandum of 29 October 1914. The rarely seen document containing this first innuendo had never been analysed before and is unmentioned by all Casement authors save one. Hyde further argues that in the following months Findlay amplified his allegations because he feared exposure of his written bribe through a threatened lawsuit against him by Casement; a subsequent diplomatic scandal might have destroyed his career.
It is argued that the prosecution offered the diaries to the defence at the start of Casement's trial on 16 May, as part of a plea bargain that would save his life. He had been arrested on 21 April, giving the authorities only three weeks in which to forge the diaries, complete with rare up-country Congolese dialect phrases, which Hyde holds would have been impossible. Against this, however, are the verified facts that only police typescripts were offered by prosecutor F. E. Smith and that there was no trial on that date, merely a preliminary hearing to decide whether the case would go to trial; on this reading, no diaries had yet been forged by 16 May. Smith had earlier tried to save Casement's life, but he later blocked his appeal to the House of Lords and threatened to resign to prevent the cabinet advising the monarch to grant a reprieve, as he did not wish to help Irish independence. It has been suggested that Smith's motive in the original attempt to avoid the death penalty was to compromise the defence by inducing a tacit authentication of the police typescripts.
Hyde's book "Anatomy of a lie," published in April 2019 demonstrated the opinion that the diary controversy has been framed by various biographers to promote authenticity by skillful use of innuendo, omission and misinformation. The book attempts to demostrate that there is no independent witness evidence for the material existence of the diaries before Casement's execution and that only police typescripts were shown to selected persons including King George V, journalists, politicians, diplomats etc. It claims that the UK National Archives confirmed that there is no witness evidence.
Casement's body was buried in quicklime in the prison cemetery at the rear of Pentonville Prison, where he was hanged. During the decades after his execution, many formal requests for repatriation of Casement's remains were refused by British governments. For example, in September 1953 Taoiseach Éamon de Valera, on a visit to Prime Minister Winston Churchill in Downing Street, requested the return of the remains. Churchill said he was not personally opposed to the idea but would consult his colleagues and take legal advice. He ultimately turned down the Irish request, citing 'specific and binding' legal obligations that the remains of executed prisoners could not be exhumed. De Valera disputed the legal advice and responded to Churchill, but received no reply.
Finally, in 1965, Casement's remains were repatriated to Ireland. Despite the forfeiture of his knighthood in 1916, the 1965 UK Cabinet record of the repatriation decision refers to him as Sir Roger Casement.
Casement's last wish was to be buried at Murlough Bay on the north coast of County Antrim, in Northern Ireland, but Prime Minister Harold Wilson's government had released the remains only on condition that they could "not" be brought into Northern Ireland, as "the government feared that a reburial there could provoke Catholic celebrations and Protestant reactions."
Casement's remains lay in state at the Garrison Church, Arbour Hill (now Arbour Hill Prison) in Dublin city for five days, close to the graves of other leaders of the Easter Rising, but he was not to be buried beside them. After a state funeral, the remains were buried with full military honours in the Republican plot in Glasnevin Cemetery in Dublin, with other Irish republicans and nationalists. The President of Ireland, Éamon de Valera, who in his mid-eighties was the last surviving leader of the Easter Rising, attended the ceremony, along with an estimated 30,000 others.
Casement has been the subject of ballads, poetry, novels, and TV series since his death.
Real Irish Republican Army
The Real Irish Republican Army, or Real IRA (RIRA), is a dissident Irish republican paramilitary group which aims to bring about a united Ireland. It formed in 1997 following a split in the Provisional IRA by dissident members, who rejected the IRA's ceasefire that year. Like the Provisional IRA before it, the Real IRA sees itself as the only rightful successor to the original Irish Republican Army and styles itself as simply "the Irish Republican Army" in English or "Óglaigh na hÉireann" in Irish. It is an illegal organisation in the Republic of Ireland and designated as a proscribed terrorist organisation in the United Kingdom and the United States.
Since its formation, the Real IRA has waged a campaign in Northern Ireland against the Police Service of Northern Ireland (PSNI)—formerly the Royal Ulster Constabulary (RUC)—and the British Army. It is the largest and most active of the "dissident republican" paramilitary groups operating against the British security forces. It has targeted the security forces in gun attacks and bombings, and with grenades, mortars and rockets. The organisation has also been responsible for bombings in Northern Ireland and England with the goal of causing economic harm and disruption. The most notable of these was the 1998 Omagh bombing, which killed 29 people. After that bombing the Real IRA went on ceasefire, but resumed operations again in 2000. In March 2009 it claimed responsibility for an attack on Massereene Barracks which killed two British soldiers, the first to be killed in Northern Ireland since 1997. The Real IRA has also been involved in vigilantism, mainly against drug dealers and organised crime gangs. In Dublin in particular it has been accused of extortion and engaging in feuds with these gangs.
In July 2012 it was reported that Republican Action Against Drugs (RAAD) and other small republican militant groups were merging with the Real IRA. This new entity was named the New IRA (NIRA) by the media, but members continue to identify themselves as simply "the Irish Republican Army". Small pockets of the Real IRA that did not merge with the New IRA continue to have a presence in the Republic of Ireland, particularly in Cork and to a lesser extent in Dublin.
In July 1997 the Provisional IRA called a ceasefire. On 10 October 1997 a Provisional IRA General Army Convention was held in Falcarragh, County Donegal. At the convention, Provisional IRA Quartermaster General Michael McKevitt—also a member of the 12-person Provisional IRA Executive—denounced the leadership and called for an end to the group's ceasefire and to its participation in the Northern Ireland peace process. He was backed by his partner and fellow Executive member Bernadette Sands McKevitt. The two dissidents were outmanoeuvred by the leadership and were left isolated. The convention backed the pro-ceasefire line, and on 26 October McKevitt and Sands McKevitt resigned from the Executive along with other members.
In November 1997 McKevitt and other dissidents held a meeting in a farmhouse in Oldcastle, County Meath, and a new organisation, styling itself "Óglaigh na hÉireann", was formed. The organisation attracted disaffected Provisional IRA members from the republican stronghold of South Armagh, as well as from Dublin, Belfast, Limerick, Tipperary, County Louth, County Tyrone and County Monaghan.
The name "Real IRA" entered common usage when in early 1998 members set up a roadblock in Jonesborough, County Armagh and told motorists "We're from the IRA. The "real" IRA".
The RIRA's objective is a united Ireland, to be achieved by forcing an end to British sovereignty over Northern Ireland through the use of physical force. The organisation rejects the Mitchell Principles and the Good Friday Agreement, comparing the latter to the 1921 Anglo-Irish Treaty which resulted in the partition of Ireland. The organisation aims to uphold an uncompromising form of Irish republicanism and opposes any political settlement that falls short of Irish unity and independence.
Bernadette Sands McKevitt, sister of hunger striker Bobby Sands and a founder of the RIRA's political wing, the 32 County Sovereignty Movement, said in an interview that her brother "did not die for cross-border bodies with executive powers. He did not die for nationalists to be equal British citizens within the Northern Ireland state". The RIRA adopted a tactic of bombing town centres to damage the economic infrastructure of Northern Ireland. The organisation also attacks members of the security forces using land mines, home-made mortars and car bombs, and has also targeted England using incendiary devices and car bombs to "spread terror and disruption".
The organisation's first action was an attempted bombing in Banbridge, County Down on 7 January 1998. The intention was to explode a car bomb, but this was thwarted when the bomb was defused by security forces. The RIRA continued its campaign in late February with bombings in Moira, County Down and Portadown, County Armagh. On 9 May the organisation announced its existence, in a coded telephone call to Belfast media claiming responsibility for a mortar attack on a police station in Belleek, County Fermanagh.
The RIRA also carried out attacks in Newtownhamilton and Newry, and a second attack in Banbridge on 1 August injured 35 people and caused £3.5 million of damage when a car bomb exploded. Despite these attacks the organisation lacked a significant base and was heavily infiltrated by informers, which led to a series of high-profile arrests and seizures by the Garda Síochána in the first half of 1998. In one incident, an attempted robbery of a security van in County Wicklow, RIRA member Rónán Mac Lochlainn was shot dead by police while trying to escape.
On 15 August 1998 the RIRA left a car containing 500 lb of home-made explosives in the centre of Omagh, County Tyrone. The bombers could not find a parking space near the intended target of the courthouse, and the car was left 400 metres away. As a result, three inaccurate telephone warnings were issued, and the Royal Ulster Constabulary (RUC) believed the bomb was located outside the courthouse. They attempted to establish a security cordon to keep civilians clear of the area, which inadvertently pushed people closer to the location of the bomb. Shortly after, the bomb exploded, killing 29 people and injuring 220 others, in what became the single deadliest attack of the Troubles.
The bombing caused a major outcry throughout the world, and the Irish and British governments introduced new legislation in an attempt to destroy the organisation. The RIRA also came under pressure from the Provisional IRA, when Provisional IRA members visited the homes of 60 people connected with the RIRA and ordered them to disband and stop interfering with Provisional IRA arms dumps. With the organisation under intense pressure, which included McKevitt and Sands McKevitt being forced from their home after the media named McKevitt in connection with the bombing, the RIRA called a ceasefire on 8 September.
Following the declaration of the ceasefire the RIRA began to regroup, and by the end of October had elected a new leadership and were planning their future direction. In late December Irish government representative Martin Mansergh held a meeting with McKevitt in Dundalk, in an attempt to convince McKevitt to disband the RIRA. McKevitt refused, stating that members would be left defenceless to attacks by the Provisional IRA. In 1999 the RIRA began preparations for a renewed campaign, and in May three members travelled to Split in Croatia to purchase arms, which were smuggled back to Ireland. On 20 October, ten people were arrested when Gardaí raided a RIRA training camp near Stamullen, County Meath.
Officers found a firing range inside a disused wine cellar being used as an underground bunker, and seized weapons including an assault rifle, a submachine gun, a semi-automatic pistol and an RPG-18 rocket launcher. An earlier version of the rocket launcher, the RPG-7, had been in the possession of the Provisional IRA from as early as 1972, but this was the first time the RPG-18 had been found in the possession of a paramilitary organisation in Ireland.
On 20 January 2000 the RIRA issued a call-to-arms in a statement to the "Irish News". The statement condemned the Northern Ireland Executive, and stated: "Once again, Óglaigh na hÉireann declares the right of the Irish people to the ownership of Ireland. We call on all volunteers loyal to the Irish Republic to unite to uphold the Republic and establish a permanent national parliament representative of all the people." The RIRA launched its new campaign on 25 February with an attempted bombing of Shackleton Army Barracks in Ballykelly. The bombers were disturbed as they were assembling the device, which, according to soldiers, would have caused mass casualties had it detonated.
On 29 February a rocket launcher similar to one seized in the 1999 raid was found near an army base in Dungannon, County Tyrone, and on 15 March three men were arrested following the discovery of 500 lb of home-made explosives when the RUC searched two cars in Hillsborough, County Down. On 6 April a bomb attack took place at Ebrington Barracks in Derry. RIRA members lowered a device consisting of 5 lb of homemade explosives over the perimeter fence using ropes, and the bomb subsequently exploded damaging the fence and an unmanned guardhouse.
After the Omagh bombing, the RIRA leadership were unwilling to launch a full-scale campaign in Northern Ireland due to the possibility of civilians being killed. Instead they decided to launch a series of attacks in England, in particular London, which they hoped would attract disenchanted Provisional IRA members to join the RIRA. On 1 June 2000 a bomb damaged Hammersmith Bridge, a symbolic target for Irish republican paramilitary groups. The bridge had been targeted by the Irish Republican Army on 29 March 1939 as part of its Sabotage Campaign, and by the Provisional IRA on 24 April 1996.
On 19 July, security forces carried out a controlled explosion on a bomb left at Ealing Broadway station, and public transport was disrupted when the Metropolitan Police closed Victoria and Paddington train stations and halted services on the London Underground. On 21 September a rocket-propelled grenade was fired at the MI6 headquarters using an RPG-22 rocket launcher, which generated headlines around the world. In November 2000, security forces foiled a plot to drive 500 lb of homemade explosives into central London, a bomb described as twice as powerful as the one used in Omagh; police had been warning for weeks that a terrorist attack in London could be imminent.
On 21 February 2001 a bomb disguised as a torch left outside a Territorial Army base in Shepherd's Bush seriously injured a 14-year-old cadet, who was blinded and had his hand blown off. A second attack in Shepherd's Bush, the 4 March BBC bombing, injured a civilian outside the BBC Television Centre. The explosion was captured by a BBC cameraman, and the footage was broadcast on TV stations worldwide, and gained mass publicity for the group. On 14 April a bomb exploded at a postal sorting office in Hendon, causing minor damage but no injuries. Three weeks later on 6 May a second bomb exploded at the same building, causing slight injuries to a passer-by. The 3 August 2001 Ealing bombing injured seven people, and on 3 November a car bomb containing 60 lb of home-made explosives was planted in the centre of Birmingham. The bomb did not fully detonate and no one was injured.
The successful attack on Hammersmith Bridge encouraged the RIRA leadership to launch further attacks in Northern Ireland. On 19 June 2000 a bomb was found in the grounds of Hillsborough Castle, home of Secretary of State for Northern Ireland Peter Mandelson. On 30 June a bomb exploded on the Dublin-to-Belfast railway line near the village of Meigh in County Armagh. The explosion damaged the tracks, and caused disruption to train services. On 9 July a car bomb damaged buildings in Stewartstown, County Tyrone including an RUC station, and on 10 August an attack in Derry was thwarted by the RUC after a van containing a 500 lb bomb failed to stop at a police checkpoint. Following a car chase the bombers escaped across the Irish border, and the Irish Army carried out a controlled explosion on the bomb after the van was found abandoned in County Donegal.
On 13 September 2000, two 80 lb bombs were planted at the Magilligan army camp in County Londonderry, one of which was placed in a wooden hut and partially exploded when a soldier opened the door to the hut. The second bomb was found during a follow-up search and made safe by bomb disposal experts. On 11 November the RUC and British Army prevented a mortar attack after stopping a van near Derrylin, County Fermanagh, and the RUC prevented a further attack on 13 January 2001 when an 1100 lb bomb was found in Armagh – the largest bomb found in several years according to the RUC.
On 23 January the RIRA attacked Ebrington Army Barracks in Derry for a second time, firing a mortar over a perimeter fence. A mortar similar to the one used in the attack was found by Gardaí near Newtowncunningham on 13 February, and British army bomb disposal experts made safe another mortar found between Dungannon and Carrickmore on 12 April. On 1 August a 40 lb bomb was discovered in a car at the long-stay car park of Belfast International Airport following a telephone warning, and was made safe with two controlled explosions by bomb disposal experts. In December a six-day security operation ended when a 70 lb bomb found under railway tracks at Killeen Bridge near Newry was defused. The operation began following telephone warnings, and the road and railway line connecting Newry to Dundalk were closed due to security alerts.
A pipe bomb was discovered at a police officer's home in Annalong, County Down on 3 January 2002, and two teenage boys were injured in County Armagh on 2 March when a bomb hidden in a traffic cone exploded. On 29 March 2002 the RIRA targeted a former member of the Royal Irish Regiment from Sion Mills, County Tyrone, with a bomb attached to his car that failed to explode. On 1 August 2002 a civilian worker was killed by an explosion at a Territorial Army base in Derry. The man, a 51-year-old former member of the Ulster Defence Regiment, was the thirtieth person killed by the RIRA.
Despite the RIRA's renewed activity, the organisation was weakened by the arrest of key members and continued infiltration by informers. McKevitt was arrested on 29 March 2001 and charged with membership of an illegal organisation and directing terrorism, and remanded into custody. In July 2001, following the arrests of McKevitt and other RIRA members, British and Irish government sources hinted that the organisation was now in disarray. Other key figures were jailed, including the RIRA's Director of Operations, Liam Campbell, who was convicted of membership of an illegal organisation, and Colm Murphy who was convicted of conspiring to cause the Omagh bombing, although this conviction was overturned on appeal.
On 10 April 2002 Ruairi Convey, from Donaghmede, Dublin, was jailed for three years for membership of the RIRA. During a search of his home a list of names and home addresses of members of the Gardaí's Emergency Response Unit was found. Five RIRA members were also convicted in connection with the 2001 bombing campaign in England, and received sentences varying from 16 years to 22 years' imprisonment. In October 2002, McKevitt and other RIRA members imprisoned in Portlaoise Prison issued a statement calling for the organisation to stand down. After a two-month trial, McKevitt was sentenced to twenty years' imprisonment in August 2003 after being convicted of directing terrorism.
After McKevitt's imprisonment, the RIRA regrouped and claimed responsibility for a series of firebomb attacks against premises in Belfast in November 2004, and an attack on a Police Service of Northern Ireland (PSNI) patrol in Ballymena during March 2006 was attributed to the RIRA by the Independent Monitoring Commission (IMC). On 9 August 2006, fire bomb attacks by the RIRA hit businesses in Newry, County Down. Buildings belonging to JJB Sports and Carpetright were destroyed, and ones belonging to MFI and TK Maxx were badly damaged. On 27 October 2006, a large amount of explosives was found in Kilbranish, Mount Leinster, County Carlow by police, who believe the RIRA were trying to derail the peace process with a bomb attack. The IMC believe the RIRA were also responsible for a failed mortar attack on Craigavon PSNI Station on 4 December 2006. The IMC's October 2006 report stated that the RIRA remains "active and dangerous" and that it seeks to "sustain its position as a terrorist organisation". The RIRA has stated it has no intention of calling a ceasefire unless a declaration of intent to withdraw from Northern Ireland is made by the British Government.
In a lengthy interview with "An Phoblacht" newspaper in 2003, the leadership of the Provisional IRA said that the RIRA had "no coherent strategy".
On 8 November 2007 two RIRA members shot an off-duty PSNI officer as he sat in his car on Bishop Street in Derry, causing injuries to his face and arm. On 12 November another PSNI member was shot by RIRA members in Dungannon, County Tyrone. On 7 February 2008, the RIRA stated that, after experiencing a three-year period of reorganisation, it intended to "go back to war" by launching a new offensive against "legitimate targets". It also, despite having apologised for the Omagh bombing, denied any large-scale involvement with the attack and said that its part had only gone as far as its codeword being used. On 12 May 2008 the RIRA seriously injured a member of the PSNI when a booby trap bomb exploded underneath his car near Spamount, County Tyrone. On 25 September 2008 the RIRA shot a man in the neck in St Johnston, near the County Londonderry border. The same man was targeted in a pipe bomb attack on his home on 25 October; the RIRA did not claim responsibility for that attack, but security forces believed the group was responsible.
On 7 March 2009 the RIRA claimed responsibility for the 2009 Massereene Barracks shooting. This shooting occurred outside the Massereene Barracks as four soldiers were receiving a pizza delivery. Two soldiers were killed, and the other two soldiers and two deliverymen were injured. On 3 April 2009 the RIRA in Derry claimed responsibility for carrying out a punishment shooting of a man who was awaiting sentencing for raping a 15-year-old girl. The RIRA were also blamed for orchestrating rioting in the Ardoyne area of Belfast on 13 July 2009 as an Apprentice Boys parade was passing. Several PSNI officers were injured in the rioting and at least one shot was fired at police. In early November, the Independent Monitoring Commission released a report stating that the threat from the RIRA and other dissident republicans was at its most serious level since the 1998 Good Friday Agreement.
When drug dealer Sean Winters was shot dead in Portmarnock, north Dublin in September 2010, the Real IRA "emerged as the chief suspects". They were also suspected of shooting dead drugs gang leader Michael Kelly in Coolock in September 2011.
On 5 October 2010, a car bomb exploded outside a branch of the Ulster Bank on Culmore Road in Derry. Two police officers were slightly injured in the blast, which also damaged a hotel and other businesses. Several telephone warnings were received an hour prior to the blast allowing police to cordon off the area. The RIRA later claimed responsibility in a telephone call to the "Derry Journal".
A large Real IRA explosives dump and arms cache were discovered in Dunleer, County Louth by Gardaí in October 2010, following a weekend of searches and arrests in the east of the country. In addition, two Real IRA men were charged in Dublin's non-jury Special Criminal Court with membership of an illegal organisation. The Real IRA claimed responsibility for the kidnapping and shooting dead of one of its members, Kieran Doherty, for alleged drug dealing. Further seizures of the group's arms and explosives by the Gardaí in 2012 and 2013 led to over a dozen more arrests. In 2011 Michael Campbell, brother of Liam, was found guilty in Vilnius, Lithuania, of trying to purchase arms and explosives and was sentenced to twelve years in prison. In October 2013 Campbell was freed on appeal, only to have the Supreme Court of Lithuania order a retrial in June 2014. Campbell has maintained his innocence, accusing British intelligence of attempting to frame him.
On 26 July 2012, it was reported that Republican Action Against Drugs (RAAD) and other small republican militant groups were merging with the Real IRA. As before, the group would continue to refer to itself as "the Irish Republican Army", though some media began to refer to the group as a "new IRA".
As well as RAAD, the alliance includes an east Tyrone group thought to be responsible for killing PSNI officer Ronan Kerr in 2011, and a Belfast group who badly wounded PSNI officer Peadar Heffron in 2010. The Continuity IRA, and the group often referred to as Óglaigh na hÉireann (ONH), remain independent. The PSNI reckoned in 2012 that the new group had a membership of "between 250 and 300 military activists, backed up by associates". In November 2012 it claimed responsibility for shooting dead a prison officer near Lurgan, the first prison officer to be killed since 1993.
On 3 September 2012, prominent New IRA (former RIRA) member Alan Ryan was shot dead in Dublin. Gardaí believed he had been involved in a feud with major crime gangs from whom he was trying to extort money. Following Ryan's death an internal feud developed in the Real IRA section of the NIRA. Ryan's replacement as leader and another associate were shot and wounded in November 2012, allegedly on the orders of the Northern leadership. In March 2013, another prominent former Real IRA member, Peter Butterly from Dunleer, was shot dead; three Dublin men, allegedly from the Alan Ryan faction, were charged with his murder and IRA membership.
In February 2014, the group sent seven letter bombs to British Army recruitment offices in south-east England; the first time republicans had struck in Britain since 2001. The following month, a PSNI Land Rover was hit by an explosively formed projectile in Belfast. A civilian car was also hit by debris, but there were no injuries. The Real IRA claimed responsibility. In November 2014, a PSNI armoured jeep was hit by another 'horizontal mortar' in Derry, and in Belfast a PSNI Land Rover was attacked with a homemade rocket-propelled grenade (RPG) launcher.
In April–May 2015, there were two New IRA bomb attacks in Derry. One device exploded at the Probation Board offices, and two partially exploded at the perimeter fence of a British Army Reserve base. Later in May, four men, one an alleged associate of Real IRA leader Michael McKevitt, were reportedly arrested during an explosives seizure by police in Northern Ireland. In August, a firebomb exploded in a post van parked inside Palace Barracks, a British military base which is home to MI5 in Northern Ireland; the firebomb destroyed the van and set nearby vehicles and garages on fire. On Halloween morning, three men were arrested and charged with IRA membership in addition to firearm offences. In November, a PSNI vehicle in Belfast was riddled with automatic gunfire from an AK-47, and on 27 November police in West Belfast came under heavy fire; no officers were wounded, thanks to their vehicles' armour-plating and bullet-proof glass. On Christmas Day in North Belfast, police came under fire yet again but were not injured, and the attacker was charged with attempted murder. The Real IRA or another dissident republican group was suspected to be behind these attacks.
On 4 March 2016, prison officer Adrian Ismay died in hospital of a heart attack caused directly by the serious wounds he had received 11 days earlier, when a booby-trap bomb detonated under his van on Hillsborough Drive, East Belfast. The New IRA claimed responsibility and said it was a response to the alleged mistreatment of republican prisoners at Maghaberry Prison, adding that the officer was targeted because he trained prison officers at Maghaberry.
In April 2016, Gardaí arrested two significant members of the New IRA and seized €10,000. The same month, explosives linked to the Real IRA were found in Dublin and several people were questioned by police, and the Real IRA was blamed for badly injuring a man in a punishment shooting in Derry, shortly after a man had been killed in a dissident republican attack in Ardoyne. In May 2016 three men were shot in paramilitary-style attacks in republican areas of Belfast during a 24-hour period, leaving two injured and one dead. On 25 April a Real IRA member, Michael Barr, was shot dead in west Dublin. Gardaí suspected Barr was shot because the Kinahan cartel believed he had provided a "safe house" to one of the gunmen in the Regency Hotel attack. Fifteen people were arrested in Northern Ireland following a paramilitary funeral for him.
In June 2016 it was revealed that a five-man IRA hit team was in Dublin's north inner city looking to murder two leading gangsters after one of their associates was shot dead in a gangland feud. Sources said the murder squad from the North spent several days and nights looking for its targets in the streets. In September 2016, Vincent Kelly, a close associate of Alan Ryan, who had been arrested and imprisoned following the Stamullen raid, was sentenced to nine years' imprisonment in Belfast for possession of a sub-machine gun and ammunition, after getting off a bus from Dublin.
In Cork City at 5 pm on 7 December 2016, the former chief of staff of the RIRA's southern command, Aidan "The Beast" O'Driscoll, was shot and killed in the street by two masked gunmen. O'Driscoll, who had stepped down from command in 2012, had been shot in the leg in June 2013 in what the RIRA claimed was a punishment-style shooting for "unrepublican conduct".
On 7 June 2017, Gardaí foiled a serious IRA bomb plot after discovering six kilos of Semtex, "enough to blow up a street".
On 1 September 2017, the Police Service of Northern Ireland warned that the group had developed a new type of bomb.
In December 2017, MI5 said that Northern Ireland had the highest level of terrorist activity of anywhere in Europe, with attacks being disrupted weekly. British security services were reported to have undertaken over 250 seizures, thwarted attacks, and counter-terrorist operations.
The group remained active in 2018, with both it and the Continuity IRA claiming they had no plans to announce a ceasefire along the lines of that of the ONH. However, both groups suffered major setbacks and periods of inactivity due to feuding and heavy police intervention, and often failed to carry out successful attacks due to antiquated equipment and inexperienced members.
In July 2018 the New IRA claimed responsibility for a spate of gun and bomb attacks on police officers during the riots in Derry.
On 19 January 2019 there was a car bomb attack at the Bishop Street Courthouse in Derry, for which the New IRA are the "main line of enquiry". Four men were arrested in connection with the bombing. The following month, two men were shot in the city of Derry, in what was described as a "paramilitary attack" by New IRA members.
On 5 March 2019, at around 12:00 pm, three explosive devices were found in Jiffy-bag packages at Waterloo station and London City Airport, with a separate package found near Heathrow Airport. The New IRA was suspected of being behind the attack because the postage stamps on the packages could be traced to Irish post offices, and MI5 said it was "possible" that republicans were responsible. Also on 5 March, a parcel bomb was found in a store room at the University of Glasgow at around 11:40 am. The west blocks of the university were evacuated by police and the bomb was safely destroyed in a controlled explosion by a bomb disposal unit; nobody was injured. On 11 March 2019, it was reported that a group styling itself the IRA had claimed responsibility for the devices, stating that it had sent five, of which only four had been discovered. The fifth device was found on 22 March in a postal sorting office in the Irish city of Limerick; it was addressed to Charing Cross railway station in central London.
On 18 April 2019 rioting took place on the streets of the Creggan area of Derry after the PSNI launched a raid looking for munitions. The New IRA is believed to have incited the riots, and its members were responsible for the fatal shooting of journalist Lyra McKee, who was not the intended victim; the group later admitted responsibility and issued a statement of apology to her family and friends. Using their traditional Easter Rising commemorations, various other republican groupings, including Sinn Féin and Éirígí, expressly called for an end to all armed actions, while others, including the 32 County Sovereignty Movement, condemned the attack without adding a call for the end of violence. The Irish Republican Socialist Party cancelled its Easter Rising commemoration in Derry as a direct result of Lyra McKee's death. Republican murals around Derry, including the famous Free Derry Corner gable end wall, were amended over the following weekend to express a community desire to move away from the violence of the past and to disown the dissident groupings who desire a return to it. These events have been cited as a sign of changing attitudes towards dissidents in traditionally republican areas.
On 7 June 2019 the New IRA admitted responsibility for a potentially lethal bomb discovered on 1 June fitted under the car of a police officer at a golf club in east Belfast. A cross-border investigation was launched.
The RIRA has a command structure similar to that of the Provisional IRA, with a seven-member Army Council consisting of a chief of staff, quartermaster general, director of training, director of operations, director of finance, director of publicity, and adjutant general. Rank-and-file members operate in active service units of covert cells to prevent the organisation from being compromised by informers. As of June 2005, the organisation was believed to have a maximum of about 150 members, according to a statement by the Irish Minister for Justice, Equality and Law Reform, Michael McDowell.
The RIRA also has political wings: the 32 County Sovereignty Movement (formerly the 32 County Sovereignty Committee), led by Francis Mackey, and the unregistered political party Saoradh, led by Brian Kenna.
The RIRA is distinct from the Continuity IRA, another Provisional IRA splinter group, founded in 1986, although the two groups have been known to co-operate at a local level. The Provisional IRA has been hostile to the RIRA and has issued threats to RIRA members; according to O'Connor's family and 32 County Sovereignty Movement member Marian Price, it was responsible for the fatal shooting of Belfast RIRA member Joe O'Connor in October 2000.
Organisations called "Irish Republican Army" are illegal under both UK and Irish law; both proscriptions have been held to apply to the RIRA as to other groups of that name. Membership of the organisation is punishable by up to ten years' imprisonment under UK law. In 2001 the United States government designated the RIRA (and its aliases) a "Foreign Terrorist Organization" (FTO). This makes it illegal for Americans to provide material support to the RIRA, requires American financial institutions to freeze the group's assets, and denies US visas to suspected RIRA members.
In 2014, "Forbes" magazine estimated the group's annual turnover at US$50 million. According to the police in Northern Ireland, the main sources of the Real IRA's funding are illegal fuel operations and various smuggling activities; illicit cigarettes were also said to be a significant source of income. The group receives further significant funding from sympathisers based in the US and other countries.
The RIRA initially took small amounts of materiel from Provisional IRA arms dumps under the control of McKevitt and other former Provisional IRA members, including the plastic explosive Semtex, Uzi submachine guns, AK-47 assault rifles, handguns, detonators and timing devices. The defection of senior Provisional IRA members also gave the RIRA the ability to manufacture home-made explosives and improvised mortars, including the Mark 15 mortar.
In 1999 the organisation supplemented its equipment by importing arms from Croatia, including the military explosive TM500, CZ Model 25 submachine guns, modified AK-47 assault rifles with folding stocks, and RPG-18 and RPG-22 rocket launchers. A July 2000 attempt to smuggle a second consignment of arms was foiled by Croatian police, who seized seven RPG-18s, AK-47 assault rifles, detonators, ammunition, and twenty packs of TM500.
In 2001 RIRA members travelled to Slovakia to procure arms and were caught in a sting operation by the British security agency MI5. The men attempted to purchase five tonnes of plastic explosives, 2,000 detonators, 500 handguns and 200 rocket-propelled grenades, as well as wire-guided missiles and sniper rifles. Three men from County Louth were arrested and extradited to the UK, where they were imprisoned for 30 years each after pleading guilty to conspiring to cause explosions and other charges.
In June 2006, the PSNI made arrests following an MI5 sting operation targeting a dissident republican gun smuggling plot. The RIRA had attempted to procure arms from France including Semtex and C-4 plastic explosives, SA-7 surface-to-air missiles, AK-47s, rocket launchers, heavy machine guns, sniper rifles, pistols with silencers, anti-tank weapons and detonators. On 30 June 2010, two of those arrested were found guilty following a trial by judge in Belfast. On 1 October 2010 one man was sentenced to 20 years' imprisonment for attempting to import weapons and explosives, while the other was sentenced to 4 years' imprisonment for making a Portuguese property available for the purpose of terrorism.
Roy Chapman Andrews
Roy Chapman Andrews (January 26, 1884 – March 11, 1960) was an American explorer, adventurer and naturalist who became the director of the American Museum of Natural History. He is primarily known for leading a series of expeditions through the politically disturbed China of the early 20th century into the Gobi Desert and Mongolia. The expeditions made important discoveries and brought the first-known fossil dinosaur eggs to the museum. His popular writings about his adventures made him famous.
Andrews was born on January 26, 1884, in Beloit, Wisconsin. As a child, he explored nearby forests, fields, and waters, developing marksmanship skills. He taught himself taxidermy and used the funds from this hobby to pay his tuition at Beloit College. After graduating, Andrews applied for work at the American Museum of Natural History in New York City. He wanted to work there so much that, after being told there were no openings at his level, he accepted a job as a janitor in the taxidermy department and began collecting specimens for the museum. During the next few years, he worked and studied simultaneously, earning a Master of Arts degree in mammalogy from Columbia University. Andrews joined The Explorers Club in New York in 1908, four years after its founding.
From 1909 to 1910, Andrews sailed on the USS "Albatross" to the East Indies, collecting snakes and lizards and observing marine mammals.
In 1913, he sailed to the Arctic aboard the schooner "Adventuress" with her owner, John Borden, hoping to obtain a bowhead whale specimen for the American Museum of Natural History. On this expedition he filmed some of the best footage of seals ever seen, though he did not succeed in acquiring a whale specimen.
He married Yvette Borup in 1914. From 1916 to 1917, Andrews and his wife led the Asiatic Zoological Expedition of the museum through much of western and southern Yunnan, as well as other provinces of China. The book "Camps and Trails in China" records their experiences.
In 1920, Andrews began planning for expeditions to Mongolia and drove a fleet of Dodge cars westward from Peking. In 1922, the party discovered a fossil of "Indricotherium" (then named ""Baluchitherium""), a gigantic hornless rhinoceros, which was sent back to the museum, arriving on December 19. The fossil species "Andrewsarchus" was named after him.
Andrews, along with Henry Fairfield Osborn, was a proponent of the Out of Asia theory of humanity's origins and led several expeditions to Asia from 1922 to 1928 known as the "Central Asiatic Expeditions" to search for the earliest human remains in Asia. The expeditions did not find human remains. However, Andrews and his team made many other finds, including dinosaur bones and fossil mammals and most notably the first nests full of dinosaur eggs ever discovered (see below). Andrews's main account of these expeditions can be found in his book "The New Conquest of Central Asia".
In his preface to Andrews's 1926 book, "On the Trail of the Ancient Man", Henry Fairfield Osborn predicted that the birthplace of modern humans would be found in Asia and stated that he had predicted this decades earlier, even before the Asiatic expeditions.
On July 13, 1923, the party was the first in the world to discover dinosaur eggs. Initially thought to be the eggs of a ceratopsian, "Protoceratops", they were determined in 1995 actually to belong to the theropod "Oviraptor philoceratops". During the same expedition, Walter W. Granger discovered a skull from the Cretaceous period. In 1925, the museum sent a letter back informing the party that the skull was that of a mammal, and therefore even more rare and valuable; more were uncovered. Expeditions in the area stopped during 1926 and 1927. In 1928, the expedition's finds were seized by Chinese authorities but were eventually returned. The 1929 expedition was cancelled. In 1930, Andrews made one final trip and discovered some mastodon fossils. A cinematographer, James B. Shackelford, made filmed records of many of Andrews' expeditions. (Sixty years after Andrews' initial expedition, the American Museum of Natural History sent a new expedition to Mongolia at the invitation of its government to continue exploration.) Later in 1930, Andrews returned to the United States and divorced his wife, with whom he had two sons. He married his second wife, Wilhelmina Christmas, in 1935.
In 1927, the Boy Scouts of America made Andrews an "Honorary Scout", a new category of Scout created that same year. This distinction was given to "American citizens whose achievements in outdoor activity, exploration and worthwhile adventure are of such an exceptional character as to capture the imagination of boys...".
Andrews was President of The Explorers Club from 1931 to 1934. In 1934, he became the director of the American Museum of Natural History. In his 1935 book "This Business of Exploring", he wrote "I was born to be an explorer...There was never any decision to make. I couldn't do anything else and be happy." In 1942, Andrews retired to Carmel Valley, California, where he wrote about his life.
He died on March 11, 1960, of heart failure at Peninsula Community Hospital in Carmel, California. He is buried in Oakwood Cemetery in his hometown of Beloit.
Douglas Preston of the American Museum of Natural History wrote: "Andrews is allegedly the person that the movie character of Indiana Jones was patterned after. However, neither George Lucas nor the other creators of the films have confirmed this. Other candidates have been suggested, including Colonel Percy Fawcett. The 120-page transcript of the story conferences for the movie does not mention Andrews."
An analysis by the Smithsonian Channel concludes that the linkage was indirect, with Andrews (and other explorers) serving as the model for heroes in adventure films of the 1940s and 1950s, who in turn inspired Lucas and his fellow writers.
Taiwan
Taiwan, officially the Republic of China, is a country in East Asia. Neighbouring countries include the People's Republic of China (PRC) to the northwest, Japan to the northeast, and the Philippines to the south. The main island of Taiwan has an area of , with mountain ranges dominating the eastern two-thirds and plains in the western third, where its highly urbanised population is concentrated. Taipei is the capital and largest metropolitan area. Other major cities include New Taipei, Kaohsiung, Taichung, Tainan and Taoyuan. With 23.7 million inhabitants, Taiwan is among the most densely populated countries, and is the most populous country and largest economy that is not a member of the United Nations (UN).
Taiwanese indigenous peoples settled the island of Taiwan around 6,000 years ago. In the 17th century, Dutch rule opened the island to mass Han immigration. After the brief Kingdom of Tungning in parts of the southern and western areas of the island, the island was annexed in 1683 by the Qing dynasty of China, and ceded to the Empire of Japan in 1895. Following the surrender of Japan in 1945, the Republic of China, which had overthrown and succeeded the Qing in 1911, took control of Taiwan on behalf of the World War II Allies. The resumption of the Chinese Civil War led to the loss of the mainland to the Communist Party of China and the flight of the ROC government to Taiwan in 1949. Although the ROC government continued to claim to be the legitimate representative of China, since 1950 its effective jurisdiction has been limited to Taiwan and numerous smaller islands. In the early 1960s, Taiwan entered a period of rapid economic growth and industrialisation called the "Taiwan Miracle". In the late 1980s and early 1990s, the ROC transitioned from a one-party military dictatorship to a multi-party democracy with a semi-presidential system.
Taiwan's export-oriented industrial economy is the 21st-largest in the world, with major contributions from steel, machinery, electronics and chemicals manufacturing. Taiwan is a developed country, ranking 15th in GDP per capita. It is ranked highly in terms of political and civil liberties, education, health care and human development.
The political status of Taiwan remains uncertain. The ROC is no longer a member of the UN, having been replaced by the PRC in 1971. Taiwan is claimed by the PRC, which refuses diplomatic relations with countries that recognise the ROC. Taiwan maintains official ties with 14 out of 193 UN member states and the Holy See. International organisations in which the PRC participates either refuse to grant membership to Taiwan or allow it to participate only on a non-state basis. Taiwan is a member of the World Trade Organization, Asia-Pacific Economic Cooperation and Asian Development Bank under various names. Nearby countries and countries with large economies maintain unofficial ties with Taiwan through representative offices and institutions that function as "de facto" embassies and consulates. Domestically, the major political division is between parties favouring eventual Chinese unification and promoting a Chinese identity contrasted with those aspiring to independence and promoting Taiwanese identity, although both sides have moderated their positions to broaden their appeal.
Various names for the island of Taiwan remain in use, each derived from explorers or rulers during a particular historical period. The name Formosa dates from 1542, when Portuguese sailors sighted an uncharted island and noted it on their maps as "Ilha Formosa" ("beautiful island"). The name "Formosa" eventually "replaced all others in European literature" and remained in common use among English speakers into the 20th century.
Rugby league
Rugby league is a full-contact sport played by two teams of thirteen players on a rectangular field measuring 68 m wide and 112–122 m long. One of the two codes of rugby football, it originated in Northern England in 1895 as a split from the Rugby Football Union over the issue of payments to players. Its rules progressively changed with the aim of producing a faster, more entertaining game for spectators.
In rugby league, points are scored by carrying the ball and touching it to the ground beyond the opposing team's goal line; this is called a "try", and is the primary method of scoring. The opposing team attempts to stop the attacking side scoring points by tackling the player carrying the ball. In addition to tries, points can be scored by kicking goals. Field goals can be attempted at any time, and following a successful try, the scoring team gains a free kick to "try at goal" with a conversion for further points. Kicks at goal may also be awarded for penalties.
The Super League and the National Rugby League (NRL) are the premier club competitions. Rugby league is played internationally, predominantly by European, Australasian and Pacific Island countries, and is governed by the International Rugby League (IRL). Rugby league is the national sport of Papua New Guinea, and is a popular sport in countries such as England, Australia, New Zealand, France, Wales, Ireland, Tonga, Fiji, Samoa and Lebanon.
The first Rugby League World Cup was held in France in 1954; the current holders are Australia.
Rugby league football takes its name from the bodies that split to create a new form of rugby, distinct from that run by the Rugby Football Unions, in Britain, Australia and New Zealand between 1895 and 1908.
The first of these, the Northern Rugby Football Union, was established in 1895 as a breakaway faction of England's Rugby Football Union (RFU). Both organisations played the game under the same rules at first, although the Northern Union began to modify rules almost immediately, creating a new, faster-paced form of rugby football. Similar breakaway factions split from RFU-affiliated unions in Australia and New Zealand in 1907 and 1908, renaming themselves "rugby football leagues" and introducing Northern Union rules. In 1922, the Northern Union also changed its name to the Rugby Football League, and thus over time the sport itself became known as "rugby league" football.
In 1895, a schism in rugby football resulted in the formation of the Northern Rugby Football Union (NRFU). Although many factors played a part in the split, including the success of working-class northern teams, the main division was caused by the RFU decision to enforce the amateur principle of the sport, preventing "broken time payments" to players who had taken time off work to play rugby. Northern teams typically had more working-class players (coal miners, mill workers, etc.) who could not afford to play without this compensation, in contrast to affluent southern teams which had other sources of income to sustain the amateur principle. In 1895, a decree by the RFU banning the playing of rugby at grounds where entrance fees were charged led to twenty-two clubs (including Stockport, who negotiated by telephone) meeting at the George Hotel, Huddersfield, on 29 August 1895 and forming the "Northern Rugby Football Union". Within fifteen years of that first meeting in Huddersfield, more than 200 RFU clubs had left to join the rugby revolution.
In 1897, the line-out was abolished, and in 1898 professionalism was introduced. In 1906, the Northern Union changed its rules, reducing teams from 15 to 13 a side and replacing the ruck formed after every tackle with the play-the-ball.
A similar schism to that which occurred in England took place in Sydney, Australia. There, on 8 August 1907 the New South Wales Rugby Football League was founded at Bateman's Hotel in George Street. Rugby league then went on to displace rugby union as the primary football code in New South Wales and Queensland.
On 5 May 1954, over 100,000 (officially 102,569) spectators watched the 1953–54 Challenge Cup Final at Odsal Stadium, Bradford, England, setting a new record for attendance at a rugby football match of either code. Also in 1954, the Rugby League World Cup, the first world cup for either code of rugby, was held at the instigation of the French. In 1966, the International Board introduced a rule that a team in possession was allowed three play-the-balls, with a scrum formed on the fourth tackle. This was increased to six tackles in 1972, and in 1983 the scrum was replaced by a handover. 1967 saw the first professional Sunday matches of rugby league played.
The first sponsors, Joshua Tetley and John Player, entered the game for the 1971–72 Northern Rugby Football League season. Television would have an enormous impact on the sport of rugby league in the 1990s when Rupert Murdoch's News Corporation sought worldwide broadcasting rights and refused to take no for an answer. The media giant's "Super League" movement saw big changes for the traditional administrators of the game. In Europe, it resulted in a move from a winter sport to a summer one as the new Super League competition tried to expand its market. In Australasia, the Super League war resulted in long and costly legal battles and changing loyalties, causing significant damage to the code in an extremely competitive sporting market. In 1997 two competitions were run alongside each other in Australia, after which a peace deal in the form of the National Rugby League was formed. The NRL has since become recognised as the sport's flagship competition and since that time has set record TV ratings and crowd figures.
The objective in rugby league is to score more points through tries, goals (also known as conversions) and field goals (also known as drop goals) than the opposition within the 80 minutes of play. If, after two halves of forty minutes each, the two teams are level, a draw may be declared, or the game may enter extra time under the golden point rule, depending on the relevant competition's format.
The try is the most common form of scoring, and a team will usually attempt to score one by running and kicking the ball further upfield or passing from player-to-player in order to manoeuvre around the opposition's defence. A try involves touching the ball to the ground on or beyond the defending team's goal-line and is worth four points. A goal is worth two points and may be gained from a conversion or a penalty. A field goal, or drop goal, is only worth one point and is gained by dropping and then kicking the ball on the half volley between the uprights in open play.
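As a worked illustration of these point values, the following minimal sketch totals a team's score from a list of scoring events (the dictionary and function names are hypothetical, chosen only for this example and not taken from any official rule set):

```python
# Point values as described above: try = 4, conversion = 2,
# penalty goal = 2, field goal (drop goal) = 1.
POINTS = {"try": 4, "conversion": 2, "penalty_goal": 2, "field_goal": 1}

def team_score(events):
    """Sum the point values of a team's scoring events."""
    return sum(POINTS[event] for event in events)

# Three converted tries, one penalty goal and one field goal:
# 3 * (4 + 2) + 2 + 1 = 21 points.
print(team_score(["try", "conversion"] * 3 + ["penalty_goal", "field_goal"]))
```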
Field position is crucial in rugby league, achieved by running with or kicking the ball. Passing in rugby league may only be in a backward or sideways direction. Teammates therefore have to remain on-side by not moving ahead of the player with the ball. The ball may, however, be kicked ahead for teammates, but if they are in front of the kicker when the ball is kicked, they are deemed off-side. Tackling is a key component of rugby league play. Only the player holding the ball may be tackled. A tackle is complete, for example, when the player is held by one or more opposing players in such a manner that he can make no further progress and cannot part with the ball, or when the player is held by one or more opposing players and the ball or the hand or arm holding the ball comes into contact with the ground. An attacking team gets a maximum of six tackles to progress up the field before possession is handed over. Once the tackle is completed, the ball-carrier must be allowed to get to his feet to "play the ball". Ball control is also important in rugby league, as a fumble of the ball on the ground forces a handover, unless the ball is fumbled backwards. The ball is also turned over if it goes over the sideline.
Rugby league and rugby union are distinct sports with many similarities and a shared origin. Both have the same fundamental rules, are played for 80 minutes and feature an oval-shaped ball and H-shaped goalposts. Both have rules that the ball cannot be passed forward, and dropping it forwards leads to a scrum. Both use tries as the central scoring method and conversion kicks, penalty goals and drop goals as additional scoring methods. However, there are differences in how many points each method is worth.
One of the main differences between the two sports is the rules of possession: when the ball goes into touch, possession in rugby union is contested through a line-out, while in rugby league a scrum restarts play. The lesser focus on contesting possession means that play stops less frequently in rugby league, with the ball typically in play for 50 out of the 80 minutes compared to around 35 minutes for professional rugby union. Other differences include that there are fewer players in rugby league (13 compared to 15) and different rules for tackling. Rugby union has more detailed rules than rugby league and has changed less since the 1895 schism.
Rugby league historian Tony Collins has written that since rugby union turned professional in the mid-1990s, it has increasingly borrowed techniques and tactics from rugby league. The inherent similarities between rugby league and rugby union have at times led to experimental hybrid games being played that use a mix of the two sports' rules.
Players on the pitch are divided into forwards and backs, although the game's rules apply to all players in the same way. Each position has a designated number that identifies the position a player is occupying. The system of numbering players differs depending on which country the match is played in. In Australia and New Zealand, each player is usually given a number corresponding to their playing position on the field. However, since 1996 European teams have been able to grant players specific squad numbers, which they keep regardless of the position they play, similarly to association football.
Substitutes (generally referred to as "the bench") are allowed in the sport, and are typically used when a player gets tired or injured, although they can also be used tactically. Each team is currently allowed four substitutes, and in Australia and New Zealand these players occupy shirt numbers 14 to 22. There are no limitations on which players must occupy these interchangeable slots. Generally, twelve interchanges are allowed per team in any game, although in the National Rugby League this was reduced to ten prior to the 2008 season and further reduced to eight prior to the 2016 season. If a team has to interchange a player due to the blood bin rule or due to injury, and this was the result of misconduct from the opposing team, the compromised team does not have to use one of its allocated interchanges to take the player in question off the field.
The backs are generally smaller, faster and more agile than the forwards. They are often the most creative and evasive players on the field, relying on running, kicking and handling skills, as well as tactics and set plays, to break the defensive line, instead of brute force. Generally forwards do the majority of the work (hit-ups/tackling).
Usually, the stand-off/five-eighth and scrum-half/half-back are a team's creative unit or 'playmakers'. During the interactions between a team's 'key' players (five-eighth, half-back, fullback, lock forward, and hooker), the five-eighth and half-back will usually be involved in most passing moves. These two positions are commonly called the "halves".
The forwards' two responsibilities can be broken into "normal play" and "scrum play". For information on a forward's role in the scrum see rugby league scrummage. Forward positions are traditionally named after the player's position in the scrum, yet are equal with respect to "normal play", with the exception of the hooker. The forward positions are traditionally the props (numbers 8 and 10), the hooker (9), the second-rowers (11 and 12) and the loose forward or lock (13).
Rugby league is played in over 70 nations throughout the world. Seven countries – Australia, Canada, England, France, New Zealand, Papua New Guinea and Wales – have teams that play at a professional level, while the rest are completely amateur. 36 national teams are ranked by the RLIF and a further 32 are officially recognized and unranked. The strongest rugby league nations are Australia, England and New Zealand.
The Rugby League World Cup is the highest form of representative rugby league and currently features 14 teams. Those which have contested World Cups are: Australia, New Zealand, England, France, Fiji, Wales, Papua New Guinea, Samoa, Ireland, the USA, Scotland, Italy, Tonga, the Cook Islands, Lebanon, Russia and South Africa. The current world champions are Australia, who won the 2017 Rugby League World Cup. The next Rugby League World Cup will be held in October and November 2021 and hosted by England; it will be the first time that the men's, women's and wheelchair competitions are staged together.
The Asia-Pacific Rugby League Confederation's purpose is to spread the sport of rugby league throughout its region, along with other governing bodies such as the ARL and NZRL. Since rugby league was introduced to Australia in 1908, it has become the largest television sport and the third-most attended sport in Australia. Neighbouring Papua New Guinea is one of two countries to have rugby league as its national sport (the other being the Cook Islands). Australia's elite club competition also features a team from Auckland, New Zealand's biggest city. Rugby league is the dominant winter sport in the eastern Australian states of New South Wales and Queensland. The game is also among the predominant sports of Tonga and is played in other Pacific nations such as Samoa and Fiji. In Australia, and indeed the rest of the region, the annual State of Origin series ranks among the most popular sporting events.
The Rugby League European Federation is responsible for developing rugby league in Europe and the Northern Hemisphere.
In England, rugby league has traditionally been associated with the northern counties of Yorkshire, Lancashire and Cumberland, where the game originated, especially in towns along the M62 corridor. Its popularity has also increased elsewhere. Only three of the twelve Super League teams are based outside these traditional counties: London Broncos, Toronto Wolfpack (Toronto, Canada) and Catalans Dragons (Perpignan, France). One other team from outside the United Kingdom, Toulouse Olympique, competes in the English rugby league system, although not at the highest tier: the Olympique play in the Rugby League Championship, while the Wolfpack won promotion to the Super League in 2019.
Super League average attendances are in the 8,000 to 9,500 range. The average Super League match attendance in 2014 was 8,365. In 2018 average Super League match attendance was 8,547. Ranked the eighth most popular sport in the UK overall, rugby league is the 27th most popular participation sport in England according to figures released by Sport England; the total number of rugby league participants in England aged 16 and over was 44,900 in 2017. This is a 39% drop from 10 years ago. While the sport is largely concentrated in the north of England there have been complaints about its lack of profile in the British media. On the eve of the 2017 Rugby League World Cup Final where England would face Australia, English amateur rugby league coach Ben Dawson stated, "we’re in the final of a World Cup. First time in more than 30 years and there's no coverage anywhere".
France first played rugby league in 1934, and in the five years prior to the Second World War the sport's popularity increased as Frenchmen became disenchanted with the state of French rugby union in the 1930s. However, after France was defeated by Germany in June 1940, the Vichy regime in the south seized assets belonging to rugby league authorities and clubs and banned the sport for its association with the left-wing Popular Front government that had governed France before the war. The sport was unbanned after the Liberation of Paris in August 1944 and the collapse of the Vichy regime, although it was still actively marginalised by the French authorities until the 1990s. Despite this, the national side appeared in the finals of the 1954 and 1968 World Cups, and the country hosted the 1954 event. In 1996, a French team, Paris Saint-Germain, was one of eleven teams which formed the new Super League, although the club was dissolved in 1997. In 2006, the Super League admitted the Catalans Dragons, a team from Perpignan in the southern Languedoc-Roussillon region. They subsequently reached the 2007 Challenge Cup Final and made the playoffs of the 2008 Super League XIII season. The success of the Dragons in Super League has initiated a renaissance in French rugby league, with new-found enthusiasm for the sport in the south of the country, where most of the Elite One Championship teams are based. In other parts of Europe, the game is played at semi-professional and amateur level.
The Toronto Wolfpack are currently North America's only active professional Rugby League team, competing in the English Rugby League system. The Wolfpack won the 2017 Kingstone Press League 1 in their inaugural season and earned promotion to the 2018 Rugby League Championship. In 2019 The Wolfpack won promotion to the Super League. The Wolfpack play their home games at Lamport Stadium in Toronto. Beginning in 2020, the English Hemel Stags will be relocated to Ottawa as the Ottawa Aces, where their home field will be TD Place Stadium.
The early 21st century has seen new nations take up the game and compete in international rugby league, with the Rugby League European Federation and Asia-Pacific Rugby League Confederation expanding the game to areas such as Canada, Ghana, the Philippines, the Czech Republic, Germany, Sweden, Norway, Spain, Hungary, Turkey, Thailand and Brazil, to name a few.
The two most prominent full-time professional leagues are the Australasian National Rugby League and the Super League, followed to a lesser extent by the semi-professional French Elite One Championship and Elite Two Championship. Below the NRL and Super League, domestic leagues exist at a semi-professional level, with some full-time exceptions; in Australia, these are the Queensland Cup (which includes a team from Papua New Guinea) and the NSW Cup, which provide players to various NRL teams.
In the United Kingdom, below the Super League, are the Championship and League 1. The UK professional system includes two Welsh teams, two French teams and one Canadian team.
The Papua New Guinea National Rugby League operates as a semi-professional competition and enjoys nationwide media coverage, being the national sport of the country.
The largest attendances for domestic rugby league matches include the 1954 Challenge Cup Final replay, with an official attendance of 102,569; unofficial estimates put the crowd as high as 150,000, with Bradford Police confirming 120,000. Another of the largest recorded crowds came at the NRL double header played to open Round 1 of the 1999 NRL season, for which the official figure is the total attendance counted across both games.
Rowing (sport)
Rowing, sometimes referred to as crew in the United States, is a sport whose origins reach back to Ancient Egyptian times. It involves propelling a boat (racing shell) on water using oars. By pushing against the water with an oar, a force is generated to move the boat. The sport can be either recreational, for enjoyment or fitness, or competitive, when athletes race against each other in boats. Success as a rower demands intense physical training and considerable mental toughness. There are a number of different boat classes in which athletes compete, ranging from an individual shell (called a single scull) to an eight-person shell with a coxswain (called a coxed eight).
Modern rowing as a competitive sport can be traced to the early 17th century when races (regattas) were held between professional watermen on the River Thames in London, United Kingdom. Often prizes were offered by the London Guilds and Livery Companies. Amateur competition began towards the end of the 18th century with the arrival of "boat clubs" at the British public schools of Eton College, Shrewsbury School, Durham School, and Westminster School. Similarly, clubs were formed at the University of Oxford, with a race held between Brasenose College and Jesus College in 1815. At the University of Cambridge the first recorded races were in 1827. Public rowing clubs were beginning at the same time; in England Leander Club was founded in 1818, in Germany Der Hamburger und Germania Ruder Club was founded in 1836 and in the United States Narragansett Boat Club was founded in 1838 and Detroit Boat Club was founded in 1839. In 1843, the first American college rowing club was formed at Yale University.
The International Rowing Federation (Fédération Internationale des Sociétés d'Aviron, abbreviated FISA), responsible for the international governance of rowing, was founded in 1892 to provide regulation at a time when the sport was gaining popularity. Across six continents, 150 countries now have rowing federations that participate in the sport.
Rowing is one of the oldest Olympic sports. Though it was on the programme for the 1896 games, racing did not take place due to bad weather. Male rowers have competed since the 1900 Summer Olympics. Women's rowing was added to the Olympic programme in 1976. Today, there are fourteen boat classes which race at the Olympics.
Each year the World Rowing Championships are staged by FISA with 22 boat classes that race. In Olympic years, only the non-Olympic boat classes are raced at the World Championships. The European Rowing Championships are held annually, along with three World Rowing Cups in which each event earns a number of points for a country towards the World Cup title. Since 2008, rowing has also been competed at the Paralympic Games.
Major domestic competitions take place in dominant rowing nations and include The Boat Race and Henley Royal Regatta in the United Kingdom, the Australian Rowing Championships in Australia, the Harvard–Yale Regatta and Head of the Charles Regatta in the United States, and Royal Canadian Henley Regatta in Canada. Many other competitions often exist for racing between clubs, schools, and universities in each nation.
While rowing, the athlete sits in the boat facing toward the stern, and uses the oars which are held in place by the oarlocks to propel the boat forward (towards the bow). This may be done on a canal, river, lake, sea, or other large bodies of water. The sport requires strong core balance, physical strength, flexibility, and cardiovascular endurance.
Whilst the action of rowing and equipment used remains fairly consistent throughout the world, there are many different types of competition. These include endurance races, time trials, stake racing, bumps racing, and the side-by-side format used in the Olympic games. The many different formats are a result of the long history of the sport, its development in different regions of the world, and specific local requirements and restrictions.
There are two forms of rowing: sweep rowing, in which each rower holds a single oar with both hands, and sculling, in which each rower holds two oars, one in each hand.
The rowing stroke may be characterized by three fundamental reference points: the "catch", which is the placement of the oar blade in the water; the "drive", the part of the stroke where the rower pulls on the oar; and the "extraction", also known as the "finish" or "release", when the rower removes the oar blade from the water. The action between catch and release is the phase of the stroke that propels the boat.
At the catch the rower places the blade in the water and applies pressure to the oar by pushing the seat toward the bow of the boat by extending the legs, thus pushing the boat through the water. The point of placement of the blade in the water is a relatively fixed point about which the oar serves as a lever to propel the boat. As the rower's legs approach full extension, the rower pivots the torso toward the bow of the boat and then finally pulls the arms towards his or her chest. The hands meet the chest right above the diaphragm.
At the end of the stroke, with the blade still in the water, the hands drop slightly to unload the oar so that spring energy stored in the bend of the oar gets transferred to the boat, which eases removing the oar from the water and minimizes energy wasted on lifting water above the surface (splashing).
The recovery phase follows the drive. The recovery starts with the extraction and involves coordinating the body movements with the goal to move the oar back to the catch position. In extraction, the rower pushes down on the oar handle to quickly lift the blade from the water and rapidly rotates the oar so that the blade is parallel to the water. This process is sometimes referred to as "feathering the blade". Simultaneously, the rower pushes the oar handle away from the chest. The blade emerges from the water square and feathers immediately once clear of the water. After feathering and extending the arms, the rower pivots the body forward. Once the hands are past the knees, the rower compresses the legs which moves the seat towards the stern of the boat. The leg compression occurs relatively slowly compared to the rest of the stroke, which affords the rower a moment to recover, and allows the boat to glide through the water. The gliding of the boat through the water during recovery is often called "run".
A controlled slide is necessary to maintain momentum and achieve optimal boat run. However, various teaching methods disagree about the optimal relation in timing between drive and recovery. Near the end of the recovery, the rower squares the blade into perpendicular orientation with respect to the water, and begins another stroke.
There are two schools of thought with respect to the appropriate breathing technique during the rowing motion: full lungs at the catch, or empty lungs at the catch.
With the full lung technique, rowers exhale during the stroke and inhale during the recovery. In laboured circumstances, rowers will take a quick pant at the end of the stroke before taking a deep breath on the recovery that fills the lungs by the time the catch is reached.
In the empty-lung technique, rowers inhale during the drive, and exhale during the recovery so that they have empty lungs at the catch. Because the knees come up to the chest when the lungs are empty, this technique allows the rower to reach a little bit further than if the lungs were full of air. Full lungs at the release also can help the rower to maintain a straighter back, a style encouraged by many coaches.
A scientific study of the benefits of entrained breathing technique in relatively fit, but untrained, rowers did not show any physiological or psychological benefit to either technique.
Rowing is a cyclic (or intermittent) form of propulsion such that in the quasi-steady state the motion of the system (the system comprising the rower, the oars, and the boat), is repeated regularly. In order to maintain the steady-state propulsion of the system without either accelerating or decelerating the system, the sum of all the external forces on the system, averaged over the cycle, must be zero. Thus, the average drag (retarding) force on the system must equal the average propulsion force on the system. The drag forces consist of aerodynamic drag on the superstructure of the system (components of the boat situated above the waterline), as well as the hydrodynamic drag on the submerged portion of the system. The propulsion forces are the forward reaction of the water on the oars while in the water. The oar can be used to provide a drag force (a force acting against the forward motion) when the system is brought to rest.
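This steady-state condition can be written compactly; the notation below is an assumed, illustrative formalization rather than one taken from a cited source. Averaging over one stroke cycle of period $T$,

$$\frac{1}{T}\int_0^T \bigl(F_{\text{prop}}(t) - F_{\text{drag}}(t)\bigr)\,dt = 0 \quad\Longrightarrow\quad \langle F_{\text{prop}}\rangle = \langle F_{\text{drag}}\rangle,$$

so the mean propulsive force equals the mean drag force. If, as is commonly assumed for hull resistance, drag grows roughly with the square of boat speed, $F_{\text{drag}} \approx k v^2$, then the crew's mean propulsive force fixes the mean speed at which the system neither accelerates nor decelerates.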
Although the oar can be conveniently thought of as a lever with a "fixed" pivot point in the water, the blade moves sideways and sternwards through the water, so that the magnitude of the propulsion force developed is the result of a complex interaction between unsteady fluid mechanics (the water flow around the blade) and solid mechanics and dynamics (the handle force applied to the oar, the oar's inertia and bending characteristic, the acceleration of the boat and so on).
The distinction between rowing and other forms of water transport, such as canoeing or kayaking, is that in rowing the oars are held in place at a pivot point that is in a fixed position relative to the boat; this point is the load point for the oar to act as a second-class lever (the blade fixed in the water is the fulcrum). In flatwater rowing, the boat (also called a "shell" or "fine boat") is narrow to avoid drag, and the oars are attached to oarlocks (also called "gates") at the end of outriggers extending from the sides of the boat. Racing boats also have sliding seats to allow the use of the legs in addition to the body to apply power to the oar.
Rowing is one of the few non-weight-bearing sports that exercises all the major muscle groups, including quads, biceps, triceps, lats, glutes and abdominal muscles. The sport also improves cardiovascular endurance and muscular strength. High-performance rowers tend to be tall and muscular: although extra weight does increase the drag on the boat, the larger athletes' increased power tends to be more significant. The increased power is achieved through increased length of leverage on the oar through the longer limbs of the athlete. In multi-person boats (2, 4, or 8), the lightest person typically rows in the bow seat at the front of the boat.
Rowing is a low impact sport with movement only in defined ranges, so twist and sprain injuries are rare. However, the repetitive rowing action can put strain on knee joints, the spine and the tendons of the forearm, and inflammation of these are the most common rowing injuries. If one rows with poor technique, especially rowing with a curved rather than straight back, other injuries may surface, including back pains. Blisters occur for almost all rowers, especially in the beginning of one's rowing career, as every stroke puts pressure on the hands, though rowing frequently tends to harden hands and generate protective calluses. Holding the oars too tightly or making adjustments to technique may cause recurring or new blisters, as it is common to feather the blade (previously described). Another common injury is getting "track bites", thin cuts on the back of one's calf or thigh caused by contact with the seat tracks at either end of the stroke.
Ever since the earliest recorded references to rowing, the sporting element has been present. An Egyptian funerary inscription of 1430 BC records that the warrior Amenhotep (Amenophis) II was also renowned for his feats of oarsmanship. In the Aeneid, Virgil mentions rowing forming part of the funeral games arranged by Aeneas in honour of his father. In the 13th century, Venetian festivals called "regata" included boat races among others.
The first known "modern" rowing races began from competition among the professional watermen in the United Kingdom that provided ferry and taxi service on the River Thames in London. Prizes for wager races were often offered by the London Guilds and Livery Companies or wealthy owners of riverside houses.
The oldest surviving such race, Doggett's Coat and Badge was first contested in 1715 and is still held annually from London Bridge to Chelsea. During the 19th century these races were to become numerous and popular, attracting large crowds. Prize matches amongst professionals similarly became popular on other rivers throughout Great Britain in the 19th century, notably on the Tyne. In America, the earliest known race dates back to 1756 in New York, when a pettiauger defeated a Cape Cod whaleboat in a race.
Amateur competition in England began towards the end of the 18th century. Documentary evidence from this period is sparse, but it is known that the Monarch Boat Club of Eton College and the Isis Club of Westminster School were both in existence in the 1790s. The Star Club and Arrow Club in London for gentlemen amateurs were also in existence before 1800. At the University of Oxford bumping races were first organised in 1815 when Brasenose College and Jesus College boat clubs had the first annual race while at Cambridge the first recorded races were in 1827. Brasenose beat Jesus to win Oxford University's first Head of the River; the two clubs claim to be the oldest established boat clubs in the world. The Boat Race between Oxford University and Cambridge University first took place in 1829, and was the second intercollegiate sporting event (following the first Varsity Cricket Match by 2 years). The interest in the first Boat Race and subsequent matches led the town of Henley-on-Thames to begin hosting an annual regatta in 1839.
Founded in 1818, Leander Club is the world's oldest public rowing club. The second oldest club which still exists is the Der Hamburger und Germania Ruder Club, which was founded in 1836 and marked the beginning of rowing as an organized sport in Germany. During the 19th century, as in England, wager matches in North America between professionals became very popular, attracting vast crowds. Narragansett Boat Club was founded in 1838 exclusively for rowing. During an 1837 parade in Providence, R.I., a group of boatmen were pulling a longboat on wheels, which carried the oldest living survivor of the 1772 Gaspee Raid. They boasted to the crowd that they were the fastest rowing crew on the Bay. A group of Providence locals took issue with this and challenged them to a race, which the Providence group summarily won. The six-man core of that group went on in 1838 to found NBC. Detroit Boat Club was founded in 1839 and is the second oldest continuously-operated rowing club in the U.S. In 1843, the first American college rowing club was formed at Yale University. The Harvard–Yale Regatta is the oldest intercollegiate sporting event in the United States, having been contested every year since 1852 (excepting interruptions for wars).
The Schuylkill Navy is an association of amateur rowing clubs of Philadelphia. Founded in 1858, it is the oldest amateur athletic governing body in the United States. The member clubs are all on the Schuylkill River where it flows through Fairmount Park in Philadelphia, mostly on the historic Boathouse Row. The success of the Schuylkill Navy and similar organizations contributed heavily to the extinction of professional rowing and the sport's current status as an amateur sport. At its founding, it had nine clubs; today, there are 12. At least 23 other clubs have belonged to the Navy at various times. Many of the clubs have a rich history, and have produced a large number of Olympians and world-class competitors.
The sport's governing body is formally known as the "Fédération Internationale des Sociétés d'Aviron" (English translation: "International Federation of Rowing Associations"), though, the majority of the time, either the initialism "FISA" or the English co-name, World Rowing, which the organization "uses for 'commercial purposes,'" is used to refer to it. Founded by representatives from France, Switzerland, Belgium, Adriatica (now a part of Italy), and Italy in Turin on 25 June 1892, FISA is the oldest international sports federation in the Olympic movement.
FISA first organized a European Rowing Championships in 1893. An annual World Rowing Championships was introduced in 1962. Rowing has also been conducted at the Olympic Games since 1900 (cancelled at the first modern Games in 1896 due to bad weather).
Racing boats (often called "shells") are long, narrow, and broadly semi-circular in cross-section in order to reduce drag to a minimum. There is some trade off between boat speed and stability in choice of hull shape. They usually have a fin towards the rear, to help prevent roll and yaw and to increase the effectiveness of the rudder.
Originally made from wood, shells are now almost always made from a composite material (usually a double skin of carbon-fibre reinforced plastic with a sandwich of honeycomb material) for strength and weight advantages. FISA rules specify minimum weights for each class of boat so that no individual team will gain a great advantage from the use of expensive materials or technology.
There are several different types of boats. They are classified by the number of rowers, by whether the rowers sweep or scull, and by whether or not the boat carries a coxswain.
Although sculling and sweep boats are generally identical to each other (except for having different riggers), they are referred to using different names: sculling boats include the single scull (1x), double scull (2x) and quadruple scull (4x), while sweep boats include the coxless pair (2-), the four (4- or 4+) and the eight (8+).
With the smaller boats, specialist versions of the shells for sculling can be made lighter. The riggers in sculling apply the forces symmetrically to each side of the boat, whereas in sweep oared racing these forces are staggered alternately along the boat. The sweep oared boat has to be stiffer to handle these unmatched forces, so consequently requires more bracing and is usually heavier – a pair (2-) is usually a more robust boat than a double scull (2x) for example, and being heavier is also slower when used as a double scull. In theory this could also apply to the 4x and 8x, but most rowing clubs cannot afford to have a dedicated large hull which might be rarely used and instead generally opt for versatility in their fleet by using stronger shells which can be rigged for either sweep rowing or sculling. The symmetrical forces also make sculling more efficient than rowing: the double scull is faster than the coxless pair, and the quadruple scull is faster than the coxless four.
One additional boat is the "queep", a coxed or non-coxed shell. The bow and stroke positions have a set of sculling riggers and two and three have a sweep set. These shells have been used in the UK and recently at a club in Victoria BC, Canada.
Many adjustments can be made to the equipment to accommodate the physiques of the crew. Collectively these adjustments are known as the boat's rigging.
Single, and double sculls are usually steered by the scullers pulling harder on one side or the other. In other boats, there is a rudder, controlled by the coxswain, if present, or by one of the crew. In the latter case, the rudder cable is attached to the toe of one of his shoes which can pivot about the ball of the foot, moving the cable left or right. The bowman may steer since he has the best vision when looking over his shoulder. On straighter courses, the strokesman may steer, since he can point the stern of the boat at some landmark at the start of the course. On international courses, landmarks for the steersmen, consisting of two aligned poles, may be provided.
Blades, otherwise known as oars to amateurs or non-rowers, are used to propel the boat. They are long (sculling: 250–300 cm; sweep oar: 340–360 cm) poles with one flat end about 50 cm long and 25 cm wide, called the blade. Classic blades were made out of wood, but modern blades are made from more expensive and durable synthetic material, the most common being carbon fiber.
An 'oar' is often referred to as a "blade" in the case of sweep oar rowing and as a "scull" in the case of sculling. A sculling oar is shorter and has a smaller blade area than the equivalent sweep oar. The combined blade area of a pair of sculls is however greater than that of a single sweep oar, so the oarsman when sculling is working against more water than when rowing sweep-oared. He is able to do this because the body action in sculling is more anatomically efficient (due to the symmetry).
The "spoon" of oars is normally painted with the colours of the club to which they belong. This greatly simplifies identification of boats at a distance. As many sports teams have logos printed on their jerseys, rowing clubs have specifically painted blades that each team is associated with.
Indoor rowing (on an ergometer or in a tank) is a way to train technique and strength by going through the same motions as rowing, with resistance. Indoor rowing is helpful when there are no rowable bodies of water nearby, or when weather conditions do not permit rowing.
A rowing tank is an indoor facility which attempts to mimic the conditions rowers face on open water. Rowing tanks are primarily used for off-season rowing, muscle-specific conditioning and technique training, or simply when bad weather does not allow for open-water training.
Ergometer rowing machines (colloquially "ergs" or "ergo") simulate the rowing action and provide a means of training on land when waterborne training is restricted, and of measuring rowing fitness. Ergometers do not simulate the lateral balance challenges, the exact resistance of water, or the exact motions of true rowing including the sweep of the oar handles. For that reason ergometer scores are generally not used as the sole selection criterion for crews (colloquially ""ergs don't float""), and technique training is limited to the basic body position and movements. However, this action can still allow a comparable workout to those experienced on the water.
Sometimes slides are placed underneath the erg to simulate the movement of being on the water: they allow the machine to move back and forth smoothly, as if there were water beneath it. The slides can be connected in rows or columns so that rowers are forced to move together on the ergometer, similar to how they would match up their rhythm in a boat.
Indoor rowing has become popular as a sport in its own right with numerous indoor competitions (and the annual World Championship CRASH-B Sprints in Boston) during the winter off-season.
One of the most common brands of ergometer is Concept2, which offers multiple models, including the Model D, the Model E, and a dynamic rower. RP3, an updated version of the Rowperfect brand of dynamic rowers, produces ergometers that more naturally mimic the feel and resistance of rowing a shell on the water. It additionally shows a dynamic force curve that gives the rower detailed information about each stroke, which can be used to improve technique and build strength.
The most commonly damaged piece of rowing equipment is the skeg, a metal or plastic fin protruding from the bottom of the boat to help maintain stability and to assist in steering. Since the skeg sticks out below the hull it is the most vulnerable part to damage; however, skegs are relatively easy to replace by gluing a new one on. Hull damage is also a significant concern, both for maintaining equipment and for rower safety. Hull damage can be caused by submerged logs, poor strapping to trailers, and collisions with other boats, docks, rocks, etc.
Boats are conveyed to competitions on special trailers accommodating up to 20 boats.
Racing boats are stored in boat houses. These are specially designed storage areas which usually consist of a long two-story building with a large door at one end leading out to a pontoon or slipway on the river or lakeside. The boats are stored on racks (horizontal bars, usually metal) on the ground floor. Oars, riggers, and other equipment are stored around the boats. Boat houses are typically associated with rowing clubs and include some social facilities on the upper floor: a cafe, bar, or gym.
Rowers may take part in the sport for their leisure or they may row competitively. There are different types of competition in the sport of rowing. In the U.S. all types of races are referred to as "regattas" whereas this term is only used in the UK for head-to-head or multi-lane races (such as those that take place at Dorney Lake), which generally take place in the summer season. Time trials occur in the UK during the winter, and are referred to as Head races. In the US, head races (usually about 5k, depending on the body of water) are rowed in the fall, while 2k sprint races are rowed in the spring and summer.
Rowing is unusual in the demands it places on competitors. The standard world championship race distance of 2,000 metres is long enough to have a large endurance element, but short enough (typically 5.5 to 7.5 minutes) to feel like a sprint. This means that rowers have some of the highest power outputs of athletes in any sport. At the same time the motion involved in the sport compresses the rowers' lungs, limiting the amount of oxygen available to them. This requires rowers to tailor their breathing to the stroke, typically inhaling and exhaling twice per stroke, unlike most other sports such as cycling where competitors can breathe freely.
Most races held in the spring and summer feature side-by-side racing, or sprint racing, sometimes called a regatta: all the boats start at the same time from a stationary position, and the winner is the boat that crosses the finish line first. The number of boats in a race typically varies from two (sometimes referred to as a "dual race") to eight, but any number of boats can start together if the course is wide enough.
The standard race length for the Olympics and the World Rowing Championships is 2,000 m; US high school races on the east coast are typically somewhat shorter, and "masters" rowers (rowers older than 27) race 1,000 m. However, the race distance can and does vary, from short "dashes" or sprints to races of marathon or ultra-marathon length, such as the Tour du Léman in Geneva, Switzerland, and the two-day Corvallis to Portland Regatta held in Oregon, USA. In the UK, regatta course lengths vary by event.
A feature of the end of the twentieth century was the development of non-Olympic multiple-crew racing boats, typically fixed-seat gigs, pilot boats and, in Finland, church boats or longboats. The most usual craft in races held around the coasts of Britain during summer months is the Cornish pilot gig, most typically in the south-west, with crews of six from local towns and races of varying distances. The Cornish pilot gig was designed and built to ferry harbour and river pilots to and from ships in fierce coastal waters; the need for a boat that was stable and fast with a large crew makes it ideal for its modern racing usage. In Finland 14-oared church boats race throughout the summer months, usually on lakes, and often with mixed crews. The largest gathering sees over 7,000 rowers, mainly rowing the course at Sulkava near the eastern border, over a long weekend in mid-July. The weekend features the World Masters church boat event, which also includes a shorter dash.
Two traditional non-standard-distance shell races are the annual Boat Race between Oxford and Cambridge and the Harvard-Yale Boat Race, both of which cover courses of approximately four miles (6.4 km). The Henley Royal Regatta is also raced over a non-standard distance of 2,112 m (1 mile, 550 yards).
In general, multi-boat competitions are organized in a series of rounds, with the fastest boats in each heat qualifying for the next round. The losing boats from each heat may be given a second chance to qualify through a repechage. The World Rowing Championships offers multi-lane racing in heats, finals and repechages. At Henley Royal Regatta two crews compete side by side in each round, in a straightforward knock-out format, with no repechages.
Head races are time-trial, processional races that take place from autumn (fall) to early spring (depending on local conditions). Boats begin with a rolling start at intervals of 10–20 seconds and are timed over a set distance. Head course lengths vary, though there are longer races, such as the Boston Rowing Marathon, and shorter ones, such as the Pairs Head.
The oldest, and arguably most famous, head race is the Head of the River Race, founded by Steve Fairbairn in 1926, which takes place each March on the River Thames in London, United Kingdom. Head racing was exported to the United States in the 1950s, and the Head of the Charles Regatta, held each October on the Charles River in Boston, Massachusetts, United States, is now the largest rowing event in the world. The Head of the Charles, along with the Head of the Schuylkill in Philadelphia and the Head of the Connecticut, are considered to be the three "fall classics."
These processional races are known as "Head Races", because, as with bumps racing, the fastest crew is awarded the title "Head of the River" (as in "head of the class"). It was not deemed feasible to run bumps racing on the Tideway, so a timed format was adopted and soon caught on.
Time trials are sometimes used to determine who competes in an event where there is a limited number of entries, for example the qualifying races for Henley Royal Regatta, and "rowing on" and "getting on" for the Oxford and Cambridge Bumps races respectively.
A bumps race is a multi-day race beginning with crews lined up along the river at set intervals. They start simultaneously and all pursue the boat ahead while avoiding being bumped by a boat from behind. If a crew overtakes or makes physical contact with the crew ahead, a "bump" is awarded. As a result, damage to boats and equipment is common during bumps racing. To avoid damage the cox of the crew being bumped may concede the bump before contact is actually made. The next day, the bumping crew will start ahead of any crews that have been bumped. The positions at the end of the last race are used to set the positions on the first day of the races the next year. Oxford and Cambridge Universities hold bumps races for their respective colleges twice a year, and there are also "Town Bumps" races in both cities, open to non-university crews. Oxford's races are organised by City of Oxford Rowing Club and Cambridge's are organised by the Cambridgeshire Rowing Association.
The stake format was often used in early American races. Competitors line up at the start, race to a stake, moored boat, or buoy some distance away, and return. The 180° turn requires mastery of steering. These races are popular with spectators because one may watch both the start and finish. Usually only two boats would race at once to avoid collision. The Green Mountain Head Regatta continues to use the stake format but it is run as a head race with an interval start. A similar type of racing is found in UK and Irish coastal rowing, where a number of boats race out to a given point from the coast and then return fighting rough water all the way. In Irish coastal rowing the boats are in individual lanes with the races consisting of up to 3 turns to make the race distance 2.3 km.
The Olympic Games are held every four years; only select boat classes (14 in total) are raced there.
At the end of each year FISA holds the World Rowing Championships, with events in 22 different boat classes. Athletes generally consider the Olympic classes to be the premier events. In 2017 FISA voted to adopt a new Olympic programme for 2020, whereby the lightweight men's coxless four event was replaced by the women's heavyweight coxless four; this was done to ensure that rowing had a gender-equal Olympic programme. During Olympic years only non-Olympic boats compete at the World Championships.
There are many differing sets of rules governing racing, and these are generally defined by the governing body of the sport in a particular country—e.g., British Rowing in England and Wales, Rowing Australia in Australia, and USRowing in the United States. In international competitions, the rules are set out by the world governing body, the Fédération Internationale des Sociétés d'Aviron (FISA). The rules are mostly similar but do vary; for example, British Rowing requires coxswains to wear buoyancy aids at all times, whereas FISA rules do not.
Rowers in multi-rower boats are numbered sequentially from the bow aft. The number-one rower is called the bowman, or just 'bow', whilst the rower closest to the stern is called the 'strokeman' or just 'stroke'. There are some exceptions: some UK coastal rowers, as well as rowers in France, Spain, and Italy, number from stern to bow.
In addition to this, certain crew members have other titles and roles. In an 8+ the stern pair are responsible for setting the stroke rate and rhythm for the rest of the boat to follow. The middle four (sometimes called the "engine room" or "power house") are usually the less technical, but more powerful rowers in the crew, whilst the bow pair are the more technical and generally regarded as the pair to set up the balance of the boat. They also have most influence on the line the boat steers.
The coxswain (or simply the cox) is the member who sits in the boat facing the bow, steers the boat, and coordinates the power and rhythm of the rowers, communicating with the crew through a device called a cox box and speakers. The coxswain usually sits in the stern of the boat, except in bowloaders, where the coxswain lies in the bow. Bowloaders are usually coxed fours and coxed pairs.
It is an advantage for the coxswain to be light, as this requires less effort for the crew to propel the boat. In many competitive events there is a minimum weight set for the coxswain to prevent unfair advantage.
If a coxswain is under the minimum weight allowance (underweight) they may have to carry weights in the boat such as sandbags.
In most levels of rowing there are different weight classes – typically "open" (or referred to as "heavyweight") and lightweight. Competitive rowing favours tall, muscular athletes due to the additional leverage height provides in pulling the oar through the water as well as the explosive power needed to propel the boat at high speed.
Heavyweight rowers of both sexes tend to be very tall, broad-shouldered, have long arms and legs as well as tremendous cardiovascular capacity and low body fat ratios. Olympic or International level heavyweight male oarsmen are typically anywhere between 190 cm and 206 cm (6'3" to 6'9") tall with most being around 198 cm (6'6") and weighing approximately 102 kg (225 lb) with about 6 to 7% body fat.
Heavyweight women are slightly shorter at around 186 cm (6'1") and lighter than their male counterparts.
Some rowing enthusiasts claim that the disproportionate number of tall rowers is simply due to the unfair advantage that tall rowers have on the ergometer, since the ergometer cannot properly simulate the greater drag a heavier rower adds to a boat. Because the ergometer is used to assess potential rowers, results on the machine play a large role in a rower's career success, and many erg scores are therefore weight-adjusted, as heavyweights typically find it easier to post better erg scores. However, since crew selection favored tall rowers long before the advent of the ergometer, and bigger, taller crews are almost universally faster than smaller, shorter crews on the water, being tall is a definite advantage that ultimately has little to do with the ergometer.
Unlike most other non-combat sports, rowing has a special weight category called "lightweight" (Lwt for short). According to FISA, this weight category was introduced "to encourage more universality in the sport especially among nations with less statuesque people". The first lightweight events were held at the World Championships in 1974 for men and 1985 for women. Lightweight rowing was added to the Olympics in 1996.
At the international level, FISA defines the weight limits for lightweight rowers and crews.
The Olympic lightweight boat classes are limited to the men's double (LM2x) and the women's double (LW2x).
At the junior level (in the United States), regattas require each rower to weigh in at least two hours before their race; they are sometimes given two chances to make weight at smaller regattas, with the exception of older more prestigious regattas, which allow only one opportunity to make weight. For juniors in the United States, the lightweight cutoff for men is 150.0 lb.; for women, it is 130.0 lb. In the fall the weight limits are increased for women, with the cutoff being 135 lb.
At the collegiate level (in the United States), the lightweight weight requirements can be different depending on competitive season. For fall regattas (typically head races), the lightweight cutoff for men is 165.0 lb. and 135.0 lb. for women. In the spring season (typically sprint races), the lightweight cutoff for men is 160.0 lb., with a boat average of 155.0 lb. for the crew; for women, the lightweight cutoff is 130.0 lb.
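As a worked illustration of the collegiate cutoffs just described, a minimal weigh-in check might look like the following Python sketch; the names and structure here are hypothetical, not drawn from any official rulebook:

```python
# US collegiate lightweight cutoffs quoted above, in pounds.
CUTOFFS_LB = {
    ("men", "fall"): 165.0, ("women", "fall"): 135.0,
    ("men", "spring"): 160.0, ("women", "spring"): 130.0,
}

def crew_makes_weight(sex: str, season: str, weights_lb: list[float]) -> bool:
    """Every rower must be at or under the cutoff; in the men's spring
    season the crew must also average 155.0 lb or less."""
    cutoff = CUTOFFS_LB[(sex, season)]
    if any(w > cutoff for w in weights_lb):
        return False
    if (sex, season) == ("men", "spring"):
        return sum(weights_lb) / len(weights_lb) <= 155.0
    return True

print(crew_makes_weight("men", "spring", [159.5, 152.0, 154.0, 150.5]))  # True
```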
Women row in all boat classes, from single scull to coxed eights, across the same age ranges and standards as men, from junior amateur through university-level to elite athlete. Typically men and women compete in separate crews although mixed crews and mixed team events also take place. Coaching for women is similar to that for men. The world's first women's rowing team was formed in 1896 at the Furnivall Sculling Club in London.
The first international women's races were the 1954 European Rowing Championships. The introduction of women's rowing at the 1976 Summer Olympics in Montreal increased the growth of women's rowing because it created the incentive for national rowing federations to support women's events. Rowing at the 2012 Summer Olympics in London included six events for women compared with eight for men. In the US, rowing is an NCAA sport for women but not for men; though it is one of the country's oldest collegiate sports, the difference is in large part due to the requirements of Title IX.
At the international level, women's rowing traditionally has been dominated by Eastern European countries, such as Romania, Russia, and Bulgaria, although other countries such as Germany, Canada, the Netherlands, Great Britain and New Zealand often field competitive teams. The United States also has had very competitive crews, and in recent years these crews have become even more competitive given the surge in women's collegiate rowing.
Today there is usually a similar number of girls and boys in a junior rowing group.
Adaptive rowing is a special category of races for those with physical disabilities. Under FISA rules there are five boat classes for adaptive rowers: mixed (two men and two women plus cox) LTA (Legs, Trunk, Arms); mixed intellectual disability (two men and two women plus cox) LTA (Legs, Trunk, Arms); mixed (one man and one woman) TA (Trunk and Arms); and men's and women's AS (Arms and Shoulders). Events are held at the World Rowing Championships and were also held at the 2008 Summer Paralympics.
Rowing events use a systematic nomenclature for the naming of events, so that age, gender, ability and size of boat can all be expressed in a few numbers and letters. The first letter to be used is 'L' or 'Lt' for lightweight. If absent then the crew is open weight. This can be followed by either a 'J' or 'B' to signify under 19 ("Junior") or under 23 years respectively. If absent the crew is open age (the letter 'O' is sometimes used). Next is either an 'M' or 'W' to signify if the crew are men or women. Then there is a number to show how many athletes are in the boat (1,2,4 or 8). An 'x' following the number indicates a sculling boat. Finally either a '+' or a '–' is added to indicate whether the boat is coxed or coxswainless.
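This scheme is regular enough to parse mechanically. The following Python sketch handles only the single-letter forms described above (it ignores the 'Lt' and 'O' variants, experience ratings, and masters age letters), so it is an illustration of the grammar rather than a complete implementation:

```python
import re

# Grammar of the simple event codes described above, e.g. "LM2x" or "JW8+".
EVENT_RE = re.compile(
    r"^(?P<lwt>L)?(?P<age>[JB])?(?P<sex>[MW])"
    r"(?P<size>[1248])(?P<scull>x)?(?P<cox>[+-])?$"
)

def describe(code: str) -> str:
    m = EVENT_RE.match(code)
    if not m:
        raise ValueError(f"unrecognised event code: {code}")
    parts = [
        "lightweight" if m["lwt"] else "open-weight",
        {"J": "junior (U19)", "B": "under-23", None: "open-age"}[m["age"]],
        "men's" if m["sex"] == "M" else "women's",
    ]
    kind = "sculling" if m["scull"] else "sweep"
    cox = {"+": "coxed", "-": "coxless", None: ""}[m["cox"]]
    parts.append(f"{cox} {kind} boat of {m['size']}".strip())
    return " ".join(parts)

print(describe("LM2x"))  # lightweight open-age men's sculling boat of 2
print(describe("JW8+"))  # open-weight junior (U19) women's coxed sweep boat of 8
```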
Some events will use an experience rating to separate races. In the UK boats are classed as "Elite", "Senior", "Intermediate 1/2/3" or "Novice", depending on the number of wins the athletes have accumulated. Masters events use age ranges (represented by letters) to separate crews of older rowers. Mixed events are also held.
Examples: a W8+ is an open-weight women's coxed eight; an LM2x is a lightweight men's double scull; a JW4- is a junior women's coxless four.
RuneQuest
RuneQuest is a fantasy role-playing game created by Steve Perrin and others, set in Greg Stafford's mythical world of Glorantha, and first published in 1978 by Chaosium. "RuneQuest" is notable for its system, designed around percentile dice and with an early implementation of skill rules, which became the basis of numerous other games. There have been several editions of the game.
In 1975, game designer Greg Stafford released the fantasy board game "White Bear and Red Moon" (later renamed "Dragon Pass"), produced and marketed by Chaosium, a game publishing company set up by Stafford specifically for the release of the game. In 1978, Chaosium published the first edition of "RuneQuest", a role playing game set in the world of Glorantha from "White Bear and Red Moon". A second edition, with various minor revisions, was released in 1980. "RuneQuest" quickly established itself as the second most popular fantasy role-playing game, after "Dungeons & Dragons".
In order to increase distribution and marketing of the game, Chaosium made a deal with Avalon Hill, who published a third edition in 1984. Under the agreement struck, Avalon Hill took ownership of the trademark for "RuneQuest", while all Glorantha-related content required approval by Chaosium, which also retained the copyright to the rules text. In an attempt to have a setting they could release freely, Avalon Hill also supported a new "default" setting, Fantasy Earth, based on fantasy interpretations of several eras of Earth's pre-modern history, including Viking and ninja supplements. Later, Avalon Hill published generic fantasy material.
A proposed fourth edition developed by Avalon Hill, titled "RuneQuest: Adventures in Glorantha", was intended to restore the tight "RuneQuest"/Glorantha relationship, but it was shelved mid-project in 1994 after Stafford refused permission, unhappy with Avalon Hill's stewardship of the third edition. In response, Avalon Hill, as owner of the trademark, began development of a mechanically unrelated game originally titled "RuneQuest: Slayers". However, when Avalon Hill was acquired by Hasbro in 1998, the project was canceled despite being near completion. The copyrights to the rules reverted to the authors, who released it for free as "RuneSlayers".
In 1998, following the financial failure of the collectible card game "Mythos", Stafford, along with fellow shareholder Sandy Petersen, left the management of Chaosium (he remained a shareholder in the company). Stafford had formed a subsidiary company, Issaries, Inc., to manage the Glorantha property, and took ownership of that company with him. He partnered with Robin D. Laws to publish an all-new game system set in Glorantha called "Hero Wars" in 2000. It was renamed "HeroQuest" in 2003 after the rights to that name, along with the "RuneQuest" trademark, were acquired from Hasbro by Issaries.
Mongoose Publishing released a new edition of "RuneQuest" in August 2006 under a license from Issaries. This required that Mongoose recreate much of the function of prior editions without reusing the prior texts (the copyrights of which were retained by Chaosium). The new rules were developed by a team led by Mongoose co-founder Matthew Sprange, and were released under the Open Game License. The official setting takes place during the Second Age of Glorantha (previous editions covered the Third Age). In January 2010, Mongoose published a much-revised edition written by Pete Nash and Lawrence Whitaker called "RuneQuest II", known as "MRQ2" by fans.
In May 2011, Mongoose Publishing announced that they had parted company with Issaries. In July 2011, The Design Mechanism, a company formed by Nash and Whitaker, announced that they had entered a licensing agreement with Issaries, and would be producing a 6th edition of "RuneQuest". "RuneQuest 6th edition", released in July 2012, is largely an expansion of the Mongoose "RuneQuest II" rules aimed at providing rules that can be adapted to many fantasy or historical settings, and do not contain any specifically Gloranthan content (though they do use the Gloranthan runes).
In 2013, Stafford outright sold the Glorantha setting and the "RuneQuest" and "HeroQuest" trademarks to Moon Design Publications, which had published the second edition of "HeroQuest" under license in 2009; Moon Design maintained Design Mechanism's "RuneQuest" license. In June 2015, following a series of financial issues at Chaosium, Stafford and Petersen retook control of the company. They in turn arranged a merger with Moon Design, which saw the Moon Design management team take over Chaosium. Shortly thereafter a new edition of "RuneQuest", subtitled "Roleplaying in Glorantha", was announced. It is based on the Chaosium 2nd edition, drawing upon ideas from later editions. Chaosium also successfully raised funds through Kickstarter to produce a hardcover reprint of the 2nd edition and PDFs of its supplements as "RuneQuest Classic". The new edition of the game, officially referred to as "RQG" for short, was previewed on Free RPG Day 2017 with the release of a quickstart module. The PDF of the full rules was released in May 2018, with the printed book to follow later that year.
As with most RPGs, players begin by making a player character. Player characters are devised through a number of dice rolls to represent physical, mental and spiritual characteristics.
Characters in "RuneQuest" gain power as they are used in play, but not to the degree that characters do in other fantasy RPGs. It is still possible for a weak character to slay a strong one through luck, tactics, or careful planning.
Both combat and non-combat actions use a percentile roll-under system to determine the success of actions, with mechanics for critical hits and fumbles. For example, if a character has a Climb skill of 35% and the player rolls 25 on a D100, the character has succeeded. However, a nuanced range of results exists in every die roll: if the roll is 1/5 of the required percentage or less, it is a special success, and if it is 1/20 of the required percentage or less, it is a critical success. Very high rolls (in the range 96-00), on the other hand, can be "fumbles", or spectacular failures, if they fall in the top 1/20 of possible failed rolls.
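To make the arithmetic concrete, the following Python sketch implements that resolution ladder. The rounding via ceil() and the omission of edition-specific edge cases (such as automatic failure on very high rolls) are assumptions, not quotations from any rulebook:

```python
import math

def resolve(skill: int, roll: int) -> str:
    """Classify a D100 roll against a percentile skill rating."""
    if roll <= math.ceil(skill / 20):
        return "critical"          # 1/20 of the skill or less
    if roll <= math.ceil(skill / 5):
        return "special"           # 1/5 of the skill or less
    if roll <= skill:
        return "success"
    # Fumbles: top 1/20 of the failure range, and always within 96-00.
    fumble_floor = max(96, 101 - math.ceil((100 - skill) / 20))
    return "fumble" if roll >= fumble_floor else "failure"

print(resolve(35, 25))  # success (the example from the text)
print(resolve(35, 7))   # special (7 is exactly 1/5 of 35)
print(resolve(35, 98))  # fumble
```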
The game's combat system was designed in an attempt to recreate designer Steve Perrin's experience with live-action combat. Perrin experienced mock medieval combat through the Society for Creative Anachronism. The player rolls against the character's combat skill. If the number rolled is equal to or less than the character's skill level, they have hit their target. The defender has the chance to roll to avoid the blow or parry it.
The "RuneQuest" combat system has a subsystem for "hit location". Successful attacks are normally allocated randomly (or can be aimed) to a part of the target's body. In "RuneQuest", a hit against a character's leg, weapon arm, or head has specific effects on the game's mechanics and narrative. This was a unique part of the game's combat system and helped to separate it from the more abstracted, hit-point-based combat of competitors such as "Dungeons & Dragons".
Rules for skill advancement also use percentile dice. In a departure from the level-based advancement of "Dungeons & Dragons", "RuneQuest" allowed characters to improve their abilities directly; the player needs to roll higher than the character's skill rating. For the climber example used earlier, the player would need to roll greater than 35 on a D100 in order to advance the character's skill. Thus, the better the character is at a skill the more difficult it is to improve.
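A sketch of that experience check follows; the flat +5 gain is an illustrative assumption, as actual advancement amounts vary by edition (some roll dice for the increase):

```python
import random

def attempt_improvement(skill: int, gain: int = 5) -> int:
    """The skill improves only if a D100 roll comes up *above* the
    current rating, so strong skills advance rarely."""
    if random.randint(1, 100) > skill:
        return skill + gain
    return skill

# A 35% climber improves on 65% of checks; a 90% climber on only 10%.
print(attempt_improvement(35))
```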
Characters in "RuneQuest" are not divided into magic using and non-magic using characters. At the time of the game's release, this was an unorthodox mechanic. Although all characters have access to magic, for practical gameplay purposes a character's magical strength is proportional to his or her connection to the divine or natural skill at sorcery. The exact divisions of magic vary from edition to edition, but most contain divisions such as Common Magic, Sorcery, Divine Magic, Spirit Magic, and Enchantments.
The "RuneQuest" rule book contained a large selection of fantasy monsters and their physical stats. As well as traditional fantasy staples (Dwarves, Trolls, Undead, Lycanthropes, etc.), the book featured original creatures such as the goat-headed creatures called Broo. Some of its traditional fantasy creatures differed notably from the versions from other games (or fantasy or traditional sources), for example, Elves are humanoid plant life. Unlike other fantasy RPGs of the time, "RuneQuest" encouraged the use of monsters as player characters.
With the exception of the third and sixth editions, which still offered it as an option, the default setting for "RuneQuest" has been the world of Glorantha. The well-developed background of the game offered a breadth of material for players and game masters to draw from. At a time when many RPG settings were cobbled together, "RuneQuest" offered players a vibrant living world, giving them a much more developed fictional world with established geography, history, and religion.
The original rules contained a map of an area called Dragon Pass, a region offered as the default setting for adventures. It is an extended upland, with not one but several passes through the surrounding mountains, each leading to different regions. This, together with the regional history, has led to a "melting pot" area, with unusually-high variety and concentrations of non-Human species, particularly for a largely-rural setting. The original "RuneQuest" game was set during a period of invasion, offering further opportunities for game scenarios.
One of the areas adjacent to Dragon Pass, touched upon in the original rulebook, was the Plains (sometimes called the Wastes) of Prax. It is an arid, even blasted region, hostile to all but the native peoples whose founding god, Waha, created the Survival Covenant by which they live. The tribes are Beast-Riders, each riding a different species (but none riding horses, which are taboo); the Great Tribes (the most numerous) are the Bison, High Llama, Sable, Impala, and Morokanth (the Morokanth are an oddity: intelligent tapir-people who herd non-sentient humans). The Praxian tribes are inspired by ancient Mongol, Bedouin, Native American, and other nomadic tribal cultures.
A key element of "RuneQuest" flavor is a character's affiliation with a cult (the default "RuneQuest" name for a religious group). Characters begin as lay members and progress through a series of membership levels, such as initiate or Rune Lord. This system offers narrative and mechanical benefits to players who chose to have their characters join a cult.
The basic rules described a handful of original and mythological gods. These were greatly expanded upon in the supplements "Cults of Prax" and "Cults of Terror".
Supplements published for "RuneQuest" second edition include "Plunder", "Runemasters", "Foes", "The Gateway Bestiary", "Griffin Mountain", "City of Lei Tabor", "Cults of Prax" and "Cults of Terror".
In the September-October 1978 edition of "The Space Gamer" (Issue No. 19), Dana Holm commented that "Since this game contains a logical system, almost anything can be added to the matrix it presents. A gem of a game. You won't be disappointed."
In the February-March 1979 edition of "White Dwarf" (Issue 11), Jim Donohoe thought the rulebook was "116 pages of well thought-out and comprehensive rules," but he found the character generation system "quite complex." He advised new referees to use the monster loot tables with caution, since the treasure was randomly generated, which meant "the amount of treasure a monster can have can vary wildly using these tables, and a weak monster can have a fortune while a tough one is impoverished. This is one area of the rules which could use some revision." He concluded by giving the game an excellent rating of 9 out of 10, saying, "These are a set of rules which I can recommend as a good alternative to "Dungeons & Dragons". Using the "Runequest" rules, a campaign can be set up simply and quickly with little effort to the referee."
In the March 1980 edition of "Ares" (Issue #1), Greg Costikyan commented that ""RuneQuest" is the most playable and elegant fantasy role-playing designed to date. Its only drawback is that it does not cover enough ground for a full-scale role-playing campaign, and is, perhaps, a bit simpler than experienced frp'ers would desire." He concluded by giving it a slightly better than average rating of 7 out of 9.
In the July 1980 issue of "Ares", Eric Goldberg reviewed the second edition and commented "When "RQ" came out, it was well-organized by the FRP standards of that time. The rules are not painful to read, and a second edition, in which the charts are easier to find, has helped matters considerably. The drawbacks of the game are that the foundation of the game (combat) has play problems and that the individual systems do not mesh together as nicely as one would hope. Among the strengths are its freshness of design concepts, the elimination of the odious 'level' progressions for characters, and the detailed background."
In the July 1981 edition of "The Space Gamer" (Issue No. 35), Forrest Johnson reviewed the 1980 boxed edition of "RuneQuest", and commented "An experienced gamer, who probably bought the rules separately [...] has no need of this edition. However, it might be of use to a newcomer."
In the April 1985 edition of "White Dwarf" (Issue 64), Oliver Dickinson reviewed the third edition produced by Avalon Hill, and found "Everything is well laid out and clearly expressed." The major difference he found with previous editions of rules was that most items "cost a great deal more [...] Acquiring good armour, magic items, etc is going to be more of a struggle and so, I feel, more satisfying. I hope this will bring the days of the overmighty PC/NPC to an end; but the difficulties may be precisely what put some players off." Dickinson concluded by giving the new edition an excellent rating of 9 out of 10, saying, "the revised rules deserve a proper trial; they are well thought out and explained, though quite complex, and I suspect will in many cases be welcomed with the words, 'That makes better sense!'"
In the March 1987 edition of "White Dwarf" (Issue 87), Peter Green reviewed a new hardcover version of the third edition, and generally liked it, although he did find "a few irritations", notably that reference was made to sections of the 1st- or 2nd-edition rules that no longer existed. He concluded by warning that "beginners should perhaps leave it until they are familiar with a more introductory system [...] Experienced players of other games will find much in "Runequest" to recommend it [...] it is superb value and well worth getting even if you never intend to play it."
In the August 1987 edition of "White Dwarf" (Issue 92), Paul Cockburn reviewed "Advanced Runequest", a streamlined version of the 3rd edition rules, and liked what he saw. He concluded, "It's a very good package [...] a very powerful roleplaying game, in a very accessible form."
In a 1996 reader poll conducted by "Arcane" magazine to determine the 50 most popular roleplaying games of all time, "RuneQuest" was ranked 5th. Editor Paul Pettengale commented: ""RuneQuest" manages to establish itself as a cut above the rest because of its intricate and highly original campaign setting. [...] This is a world that combines high-fantasy heroism with the gritty realities of cross-humanoid racism and the problems of day-to-day living. The cults of the world, which play an intrinsic part of every adventurer's life, add to the mysticism of the game, and give it a level of depth which other fantasy systems can be but envious of."
"RuneQuest" was chosen for inclusion in the 2007 book "Hobby Games: The 100 Best". Jennell Jaquays commented, "After "RuneQuest" and Glorantha, detailed fantasy worlds would become the norm, not the exception. Dragon Pass paved the way for TSR's Faerûn, better known as the Forgotten Realms, and Krynn, setting for the "Dragonlance" saga. But few would ever achieve the elegant but approachable rules complexity of the original "RuneQuest" or instill a fervent loyalty in fans that would span decades."
Chaosium reused the rules system developed in "RuneQuest" to form the basis of several other games. In 1980 the core of the "RuneQuest" system was published in a simplified form edited by Greg Stafford and Lynn Willis as "Basic Role-Playing (BRP)". "BRP" is a generic role-playing game system, derived from the two first "RuneQuest" editions. It was used for many Chaosium role-playing games that followed "RuneQuest", including:
The science-fiction roleplaying game "Other Suns" by Fantasy Games Unlimited, 1983, used the "Basic Role-Playing" system as well.
Minor modifications of the "BRP" rules were introduced in every one of those games, to suit the flavor of each game's universe. "Pendragon" used a 1–20 scale and a 1d20 roll instead of a percentile scale and 1d100. "Prince Valiant" (1989), which used coin tosses instead of dice rolls, is the only Chaosium-published role-playing game that does not use any variant of the "BRP" system.
In 2004, Chaosium released a print-on-demand version of the 3rd edition "RuneQuest" rules with the trademarks removed under the titles "Basic Roleplaying Players Book", "Basic Roleplaying Magic Book", and "Basic Roleplaying Creatures Book". That same year, Chaosium began preparing a new edition of "Basic Roleplaying", which was released in 2008 as a single, comprehensive 400 page book, incorporating material from many of their previous "BRP" system games. The book offers many optional rules, as well as genre-specific advice for fantasy, horror, and science-fiction, but contains no setting-specific material.
After losing the license to use the "RuneQuest" name and Glorantha setting, Mongoose Publishing announced the release of a new series of books which are fully compatible with the "RuneQuest II" ruleset under the title of "Legend". "Legend" was released in late 2011 under the Open Gaming License. Mongoose titles for "RuneQuest II" were re-released as "Legend"-compatible books. Similarly, after the announcement of the Chaosium/Moon Design merger, The Design Mechanism announced that "RuneQuest 6th edition" would continue to be published under the name "Mythras".
Rich Text Format
The Rich Text Format (often abbreviated RTF) is a proprietary document file format with published specification developed by Microsoft Corporation from 1987 until 2008 for cross-platform document interchange with Microsoft products. Prior to 2008, Microsoft published updated specifications for RTF with major revisions of Microsoft Word and Office versions.
Most word processors are able to read and write some versions of RTF. There are several different revisions of RTF specification and portability of files will depend on what version of RTF is being used.
It should not be confused with enriched text (media type "text/enriched" of RFC 1896) or its predecessor Rich Text (media type "text/richtext" of RFC 1341 and RFC 1521), nor with IBM's RFT-DCA (Revisable-Form Text-Document Content Architecture); these are completely different specifications.
Richard Brodie, Charles Simonyi, and David Luebbert, members of the Microsoft Word development team, developed the original RTF in the middle to late 1980s. Its syntax was influenced by the TeX typesetting language. The first RTF reader and writer shipped in 1987 as part of Microsoft Word 3.0 for Macintosh, which implemented the RTF version 1.0 specification. All subsequent releases of Microsoft Word for the Macintosh and all versions for Windows can read and write files in RTF format.
Microsoft maintains the format. The final version was 1.9.1 in 2008, implementing features of Office 2007. Microsoft has discontinued enhancements to the RTF specification. New features in Word 2010 and later versions will not save properly to the RTF format. Microsoft anticipates no further updates to RTF, but has stated willingness to consider editorial and other non-substantive modifications of the RTF Specification during an associated ISO/IEC 29500 balloting period.
For some time, RTF files were used to produce Windows .HLP help files, though this use has been superseded by Microsoft Compiled HTML Help files.
RTF is written using groups, a backslash, control words, and delimiters. Groups are contained within braces ({}), with the opening brace and closing brace indicating the start of the group and end of the group respectively. Groups are used to indicate which attributes apply to certain text. The backslash (\) indicates that a control word follows; control words are specifically named commands in RTF. They can have certain states in which they are active, with the state represented by a number. For example, \b turns bold formatting on, while \b0 turns it off.
A delimiter is one of three things: a space (which is treated as part of the control word); a digit or a hyphen, which begins a numeric parameter for the control word; or any character other than a letter or digit, which terminates the control word but is not itself part of it.
As an example, the following RTF code (a minimal document reconstructed along the lines of the example in the RTF specification):
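```rtf
{\rtf1\ansi{\fonttbl\f0\fswiss Helvetica;}\f0\pard
This is some {\b bold} text.\par
}
```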
is a document which would be rendered like this when read by a program that supports RTF:
This is some bold text.
A standard RTF file can consist of only 7-bit ASCII characters, but can encode characters beyond ASCII using escape sequences. The character escapes are of two types: code page escapes and, starting with RTF 1.5, Unicode escapes. In a code page escape, two hexadecimal digits following a backslash and typewriter apostrophe denote a character taken from a Windows code page. For example, if the code page is set to Windows-1256, the sequence \'c8 will encode the Arabic letter bāʼ (ب). Alternatively, it is possible to specify a character set in the preamble of the RTF document and associate it with a font declared there: a font declared with \fcharset128 uses Character Set 128, which corresponds to the Shift-JIS code page, so \' escape sequences in text using that font are interpreted as Shift-JIS bytes, and a two-byte sequence can encode a character such as "金".
For a Unicode escape the control word \u is used, followed by a 16-bit signed decimal integer giving the Unicode UTF-16 code unit number. For the benefit of programs without Unicode support, this must be followed by the nearest representation of the character in the specified code page. For example, \u1576? would give the Arabic letter bāʼ (ب), while specifying that older programs which do not have Unicode support should render it as a question mark instead.
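A minimal Python sketch of how these two escape forms decode, assuming the fragment has already been isolated from its enclosing groups; a real reader must also honour per-group \ucN skip counts and reassemble UTF-16 surrogate pairs:

```python
import re

def decode_rtf_escapes(fragment: str, codepage: str = "cp1252") -> str:
    """Expand \\'hh code-page escapes and \\uN Unicode escapes.
    Illustrative only: ignores groups, \\ucN counts and surrogates."""
    def uni_escape(m):
        n = int(m.group(1))
        if n < 0:              # values above 32767 are stored as negatives
            n += 65536
        return chr(n)

    def cp_escape(m):
        return bytes([int(m.group(1), 16)]).decode(codepage)

    # Handle \uN first so that its one-character fallback is consumed.
    fragment = re.sub(r"\\u(-?\d+) ?(\\'[0-9a-fA-F]{2}|.)", uni_escape, fragment)
    fragment = re.sub(r"\\'([0-9a-fA-F]{2})", cp_escape, fragment)
    return fragment

print(decode_rtf_escapes(r"\'c8", codepage="cp1256"))  # ب via code page escape
print(decode_rtf_escapes(r"\u1576?"))                  # ب via Unicode escape
```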
The control word \uc0 (the \ucN skip count set to zero) can be used to indicate that subsequent Unicode escape sequences within the current group do not specify a substitution character.
Until the RTF specification version 1.5 release in 1997, RTF handled only 7-bit characters directly, with 8-bit characters encoded as hexadecimal (using \'xx). RTF control words (since RTF 1.5) generally accept signed 16-bit numbers as arguments, so Unicode values greater than 32767 must be expressed as negative numbers; if a Unicode character is outside the BMP, it is encoded with a surrogate pair. Support for Unicode was added because of text-handling changes in Microsoft Word: Word 97 is a partially Unicode-enabled application, and Word 2000 and later versions are Unicode-enabled applications, all handling text using the 16-bit Unicode character encoding scheme.
RTF files are usually 7-bit ASCII plain text. RTF consists of control words, control symbols, and groups. RTF files can be easily transmitted between PC based operating systems because they are encoded as a text file with 7-bit graphic ASCII characters. Converters that communicate with Microsoft Word for MS Windows or Macintosh should expect data transfer as 8-bit characters and binary data can contain any 8-bit values.
RTF is a data format for saving and sharing documents, not a markup language; it is not intended for intuitive and easy typing by a human. Nonetheless, unlike many word processing formats, RTF code can be human-readable: when an RTF file containing mostly Latin characters without diacritics is viewed as a plain text file, the underlying ASCII text is readable, provided that the author has kept the formatting concise; otherwise, the formatting code can impede readability.
When RTF was released, most word processors used binary file formats (Microsoft Word used the .doc file format); RTF was unique in its simple formatting control, which allows a non-RTF-aware program (e.g. Notepad) to open it and provide a readable file. Today, the majority of these programs have changed to XML-based file formats (Word has switched to the .docx file format). Regardless, these files contain large amounts of formatting code and so are ten or more times larger than the corresponding plain text.
To be standard-compliant RTF, non-ASCII characters must be escaped. Thus, even with concise formatting, text that uses certain dashes and quotation marks is less legible. Latin languages that make heavy use of characters with diacritics, such as \'f1 for ñ and \'e9 for é are particularly difficult to read in RTF. Non-Latin scripts, consisting of characters such as \u21563 for 吻, are illegible in RTF. In addition, from its beginnings, RTF has supported Microsoft OLE embedded objects and Macintosh Edition Manager subscriber objects, which are not human-readable.
Most word processing software supports RTF format importing and exporting (following some version of RTF specification), and/or direct editing, often making it a "common" format between otherwise incompatible word processing software and operating systems. These factors contribute to its interoperability, but it will depend on what version of RTF is being used. There are several consciously designed or accidentally born RTF dialects. Most applications that read RTF files silently ignore unknown RTF control words.
RTF is the internal markup language used by Microsoft Word. Overall, since 1987, RTF files may be transferred back and forth between many old and new computer systems (and now over the Internet) despite differences between operating systems and their versions. (But there are incompatibilities, e.g. between RTF 1.0 1987 and later specifications, or between RTF 1.0-1.4 and RTF 1.5+ in use of Unicode characters.) This makes it a useful format for basic formatted text documents such as instruction manuals, résumés, letters, and modest information documents. These documents at minimum support bold, italic, and underline text formatting. Also typically supported are left-, center-, and right-aligned text, font specification and document margins.
Font and margin defaults, as well as style presets and other functions vary according to program defaults. There may also be subtle differences perhaps between different versions of the RTF specification implemented in differing programs and program versions. Nevertheless, the RTF format is consistent enough from computer to computer to be considered highly portable and acceptable for cross-platform use. The format supports metadata such as title, author, etc. but not all implementations support this.
Use of Microsoft Object Linking and Embedding (OLE) objects or Macintosh Edition Manager subscriber objects limits interoperability, because these objects are not widely supported in programs for viewing or editing RTF files (e.g. embedding of other files inside the RTF, such as tables or charts from a spreadsheet application). If software that understands an OLE object is not available, the object is usually replaced by a picture (a bitmap representation of the object) or not displayed at all.
RTF supports inclusion of JPEG, Portable Network Graphics (PNG), Enhanced Metafile (EMF), Windows Metafile (WMF), Apple PICT, Windows device-dependent bitmap, Windows device-independent bitmap and OS/2 Metafile picture types, in hexadecimal (the default) or binary format, in an RTF file. Not all of these picture types are supported in all RTF readers. When an RTF document is opened in software that does not support the picture type of an inserted picture, the picture is not displayed at all.
RTF writers usually convert inserted pictures from unsupported picture types (e.g. BMP, TIFF, or GIF) to one of the supported picture types (PNG, WMF), or they do not include the pictures at all.
For better compatibility with Microsoft products, some RTF writers include the same picture in two different picture types in one RTF file: the picture in its original type, plus a duplicate converted to WMF.
This method increases the RTF file size rapidly. The RTF specification does not require this method and there are various implementations that include pictures without the WMF copy (e.g. Abiword or Ted).
For Microsoft Word it is also possible to set a specific registry value ("ExportPictureWithMetafile=0") in order to prevent Word from saving the WMF copy (see link "Document file size increases with EMF, PNG, GIF, or JPEG graphics in Word" at the beginning).
RTF supports embedding of fonts used in the document, but this feature is not widely supported in software implementations.
RTF also supports generic font family names used for font substitution: "roman" (serif), "swiss" (sans-serif), "modern" (monospace), "script", "decorative", "technical". This feature is not widely supported for font substitution, e.g. in OpenOffice.org or Abiword.
The RTF specification has supported annotations (comments in documents) since version 1.0. The RTF 1.7 specification defined some new features for annotations: a date stamp (there was previously only a "time stamp") and parents of annotations. When an RTF document with annotations is opened in an application that does not support RTF annotations, they are not displayed at all. Similarly, when a document with annotations is saved as RTF in an application that does not support RTF annotations, the annotations are not preserved in the RTF file. Some implementations may hide annotations by default or require some user action to display them, e.g. in Abiword since version 2.8 or in IBM Lotus Symphony (up to version 1.3).
Microsoft products do not support comments within footers, footnotes or headers. Inserting a comment within headers, footers, or footnotes may result in a corrupted RTF document.
The RTF specification also supports footnotes (not to be confused with annotations), which are widely supported in RTF implementations (e.g. in OpenOffice.org, Abiword, KWord, Ted, but not in Wordpad). Endnotes are implemented as a variation on footnotes such that applications that support footnotes and not endnotes will render endnotes in an RTF document as footnotes. Similar to annotations, due to Microsoft products not supporting footnotes in headers, footers, or comments, including footnotes within those contexts in an RTF document may result in a corrupted document.
RTF 1.2 specification defined use of drawing objects such as rectangles, ellipses, lines, arrows, polygons and various other shapes. RTF 1.5 specification introduced many new control words for drawing objects. RTF drawing objects are also called "shapes" since RTF 1.5.
However, RTF drawing objects are not supported in many RTF implementations, such as Apache OpenOffice (though they are supported in LibreOffice 4.0 on) or Abiword. When an RTF document with drawing objects is opened in an application that does not support RTF drawing objects, they are not displayed at all. Some implementations will also not display any text inside drawing objects. Similarly, when a document with drawing objects is saved as RTF in an application that does not support RTF drawing objects, these are not preserved in the RTF file.
Unlike Microsoft Word's DOC format, as well as the newer Office Open XML and OpenDocument formats, RTF does not support macros. For this reason, RTF was often recommended over those formats when the spread of computer viruses through macros was a concern. However, having the .RTF extension does not guarantee that a file is safe, since Microsoft Word will open standard DOC files renamed with an RTF extension and run any contained macros as usual. Manual examination of a file in a plain text editor such as Notepad, or use of the "file" command on UNIX-like systems, is required to determine whether or not a suspect file is really RTF. Enabling Word's "Confirm file format conversion on open" option (not enabled by default in any version of Word) can also assist, by warning that a document being opened is in a format that does not match the format implied by the file's extension, and giving the option to abort opening that file.
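Such a manual inspection can be reduced to a header sniff, since genuine RTF begins with the characters "{\rtf". The Python sketch below mirrors that check; the file name is hypothetical, and a passing result means only that the header matches, not that the file is safe:

```python
def looks_like_rtf(path: str) -> bool:
    """Return True if the file starts with the RTF header '{\\rtf'."""
    with open(path, "rb") as handle:
        return handle.read(5) == b"{\\rtf"

print(looks_like_rtf("suspect.doc"))  # hypothetical file name
```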
RTF files can carry malware; sometimes malicious files in RTF format are renamed with the .DOC extension. One exploit attacking a vulnerability in Microsoft Word was patched in April 2015.
Since 2014 there have been malware RTF files embedding OpenXML exploits (.DOCX file with ZIP header, renamed with RTF extension) "to create a multi-exploit master key to cover a number of recent patched exploits in one RTF with low AV detection".
Each RTF implementation usually implements only some versions or subsets of the RTF specification. Many of the available RTF converters cannot understand all new features in the latest RTF specifications.
The WordPad editor in Microsoft Windows creates RTF files by default. It once defaulted to the Microsoft Word 6.0 file format, but write support for Word documents (.doc) was dropped in a security update. Read support was also dropped in Windows 7. WordPad does not support some RTF features, such as headers and footers. However, WordPad can read and save many RTF features that it cannot create such as: tables, strikeout, superscript, subscript, "extra" colors, text background colors, numbered lists, right or left indent, quasi-hypertext and URL linking, and various line spacings. RTF is also the data format for "rich text controls" in MS Windows APIs.
The default text editor for Mac OS X, TextEdit, can also view, edit and save RTF files, as well as RTFD files. TextEdit currently (as of July 2009) has limited ability to edit RTF document margins. Much older Mac word processors such as MacWrite and WriteNow were also able to view, edit, and save RTF files.
The free and open-source word processors AbiWord, Apache OpenOffice, Bean, Calligra, KWord, LibreOffice and NeoOffice can view, edit and save RTF files. RTF format is also used in the Ted word processor.
Scrivener uses individual RTF files for all the text files that make up a given "project".
SIL International’s Toolbox freeware application for developing and publishing dictionaries uses RTF as its most common form of document output. RTF files produced by Toolbox are designed to be used in Microsoft Word, but can also be used by other RTF-aware word processors.
RTF can be used on some ebook readers because of its interoperability, simplicity, and low CPU processing requirements.
The open-source script rtf2xml can partially convert RTF to XML.
GNU UnRTF is an open-source program to convert RTF into HTML, LaTeX, troff macros and other formats. pyth is a Python library to create and convert documents in RTF, XHTML and PDF format. Ruby RTF is a project to create Rich Text content via Ruby. RaTFink is a library of Tcl routines, free software, to generate RTF output, and a Cost script to convert SGML to RTF. RTF::Writer is a Perl module for generating RTF documents. PHPRtfLite is an API enabling developers to create RTF documents with PHP. Pandoc is an open source document converter with multiple output formats, including RTF. RTFGen is a project to create RTF documents via pure PHP. rtf.js is a JavaScript based library to render RTF documents in HTML.
The Mac OS X command-line tool textutil enables files to be converted between rtf, rtfd, text, doc, docx, wordml, odt, and webarchive formats; for example, "textutil -convert html document.rtf" rewrites an RTF file as HTML.
The Rich Text Format was the standard file format for text-based documents in applications developed for Microsoft Windows. Microsoft did not initially make the RTF specification publicly available, making it difficult for competitors to develop document conversion features in their applications. Because Microsoft's developers had access to the specification, Microsoft's applications had better compatibility with the format. Also, every time Microsoft changed the RTF specification, Microsoft's own applications had a lead in time-to-market, because competitors had to redevelop their applications after studying the newer version of the format.
Novell alleged that Microsoft's practices were anticompetitive in its 2004 antitrust complaint against Microsoft.
According to blogger Hannes Schmidt, the RTF specifications lack some of the semantic definitions necessary to read, write, and modify documents.
Robert E. Lee
Robert Edward Lee (January 19, 1807 – October 12, 1870) was an American Confederate general best known as a commander of the Confederate States Army during the American Civil War. He commanded the Army of Northern Virginia from 1862 until its surrender in 1865 and earned a reputation as a skilled tactician.
A son of Revolutionary War officer Henry "Light Horse Harry" Lee III, Lee was a top graduate of the United States Military Academy and an exceptional officer and military engineer in the United States Army for 32 years. During this time, he served throughout the United States, distinguished himself during the Mexican–American War, and served as Superintendent of the United States Military Academy. He was also the husband of Mary Anna Custis Lee, adopted great-granddaughter of George Washington. When Virginia's 1861 Richmond Convention declared secession from the Union, Lee chose to follow his home state, despite his desire for the country to remain intact and an offer of a senior Union command. During the first year of the Civil War, he served in minor combat operations and as a senior military adviser to Confederate President Jefferson Davis.
Lee took command of the Army of Northern Virginia in June 1862 during the Peninsula Campaign following the wounding of Joseph E. Johnston. He succeeded in driving the Union Army of the Potomac under George B. McClellan away from the Confederate capital of Richmond during the Seven Days Battles, although he was unable to destroy McClellan's army. Lee then overcame Union forces under John Pope at the Second Battle of Bull Run in August. His invasion of Maryland that September ended with the inconclusive Battle of Antietam, after which he retreated to Virginia. Lee then won two decisive victories at Fredericksburg and Chancellorsville before launching a second invasion of the North in the summer of 1863, where he was decisively defeated at the Battle of Gettysburg by the Army of the Potomac under George Meade. He led his army in the minor and inconclusive Bristoe Campaign that fall before General Ulysses S. Grant took command of Union armies in the spring of 1864. Grant engaged Lee's army in bloody but inconclusive battles at the Wilderness and Spotsylvania before the lengthy Siege of Petersburg, which was followed in April 1865 by the capture of Richmond and the destruction of most of Lee's army, which he finally surrendered to Grant at Appomattox Court House.
In 1865, Lee became president of Washington College (later Washington and Lee University) in Lexington, Virginia; in that position, he supported reconciliation between North and South. He accepted "the extinction of slavery" provided for by the Thirteenth Amendment, but opposed racial equality for African Americans. He died in 1870. Lee enjoys the status of a cultural icon in the South and is widely hailed as one of the Civil War's greatest generals. As commander of the Army of Northern Virginia, he fought most of his battles against armies of significantly larger size, and managed to win many of them. He built up a collection of talented subordinates, most notably James Longstreet, Stonewall Jackson, and J. E. B. Stuart, who along with Lee were critical to the Confederacy's battlefield success. In spite of his success, his two major strategic offensives into Union territory both ended in failure. His aggressive and risky tactics, especially at Gettysburg, which resulted in high casualties at a time when the Confederacy had a shortage of manpower, have come under criticism.
Lee was born at Stratford Hall Plantation in Westmoreland County, Virginia, to Henry Lee III and Anne Hill Carter Lee on January 19, 1807. His ancestor, Richard Lee I, emigrated from Shropshire, England, to Virginia in 1639.
Lee's father suffered severe financial reverses from failed investments and was put in debtors' prison. Soon after his release the following year, the family moved to Alexandria, Virginia, both because there were high-quality local schools there and because several members of Anne Lee's extended family lived nearby. In 1811, the family, including the newly born sixth child, Mildred, moved to a house on Oronoco Street.
In 1812, Lee's father moved permanently to the West Indies. Lee attended Eastern View, a school for young gentlemen, in Fauquier County, Virginia, and then the Alexandria Academy, free for local boys, where he showed an aptitude for mathematics. Although brought up to be a practicing Christian, he was not confirmed in the Episcopal Church until age 46.
Anne Lee's family was often supported by a relative, William Henry Fitzhugh, who owned the Oronoco Street house and allowed the Lees to stay at his country home Ravensworth. Fitzhugh wrote to United States Secretary of War, John C. Calhoun, urging that Robert be given an appointment to the United States Military Academy at West Point. Fitzhugh had young Robert deliver the letter. Lee entered West Point in the summer of 1825. At the time, the focus of the curriculum was engineering; the head of the United States Army Corps of Engineers supervised the school and the superintendent was an engineering officer. Cadets were not permitted leave until they had finished two years of study, and were rarely allowed off the Academy grounds. Lee graduated second in his class, behind only Charles Mason (who resigned from the Army a year after graduation). Lee did not incur any demerits during his four-year course of study, a distinction shared by five of his 45 classmates. In June 1829, Lee was commissioned a brevet second lieutenant in the Corps of Engineers. After graduation, while awaiting assignment, he returned to Virginia to find his mother on her deathbed; she died at Ravensworth on July 26, 1829.
On August 11, 1829, Brigadier General Charles Gratiot ordered Lee to Cockspur Island, Georgia. The plan was to build a fort on the marshy island which would command the outlet of the Savannah River. Lee was involved in the early stages of construction as the island was being drained and built up. In 1831, it became apparent that the existing plan to build what became known as Fort Pulaski would have to be revamped, and Lee was transferred to Fort Monroe at the tip of the Virginia Peninsula (today in Hampton, Virginia).
While home in the summer of 1829, Lee had apparently courted Mary Custis, whom he had known as a child. Lee obtained permission to write to her before leaving for Georgia, though Mary Custis warned Lee to be "discreet" in his writing, as her mother read her letters, especially those from men. Custis refused Lee the first time he asked to marry her; her father did not believe the son of the disgraced Light Horse Harry Lee was a suitable match for his daughter. She accepted him with her father's consent in September 1830, while he was on summer leave, and the two were wed on June 30, 1831.
Lee's duties at Fort Monroe were varied, typical for a junior officer, and ranged from budgeting to designing buildings. Although Mary Lee accompanied her husband to Hampton Roads, she spent about a third of her time at Arlington, though the couple's first son, Custis Lee, was born at Fort Monroe. Although the two were by all accounts devoted to each other, they were different in character: Robert Lee was tidy and punctual, qualities his wife lacked. Mary Lee also had trouble transitioning from being a rich man's daughter to having to manage a household with only one or two slaves. Beginning in 1832, Robert Lee had a close but platonic relationship with Harriett Talcott, wife of his fellow officer Andrew Talcott.
Life at Fort Monroe was marked by conflicts between artillery and engineering officers. Eventually the War Department transferred all engineering officers away from Fort Monroe, except Lee, who was ordered to take up residence on the artificial island of Rip Raps across the river from Fort Monroe, where Fort Wool would eventually rise, and continue work to improve the island. Lee duly moved there, then discharged all workers and informed the War Department he could not maintain laborers without the facilities of the fort.
In 1834, Lee was transferred to Washington as General Gratiot's assistant. Lee had hoped to rent a house in Washington for his family, but was not able to find one; the family lived at Arlington, though Lieutenant Lee rented a room at a Washington boarding house for when the roads were impassable. In mid-1835, Lee was assigned to assist Andrew Talcott in surveying the southern border of Michigan. While on that expedition, he responded to a letter from an ill Mary Lee, which had requested he come to Arlington, "But why do you urge my "immediate" return, & tempt one in the "strongest" manner[?] ... I rather require to be strengthened & encouraged to the "full" performance of what I am called on to execute." Lee completed the assignment and returned to his post in Washington, finding his wife ill at Ravensworth. Mary Lee, who had recently given birth to their second child, remained bedridden for several months. In October 1836, Lee was promoted to first lieutenant.
Lee served as an assistant in the chief engineer's office in Washington, D.C. from 1834 to 1837, but spent the summer of 1835 helping to lay out the state line between Ohio and Michigan. As a first lieutenant of engineers in 1837, he supervised the engineering work for St. Louis harbor and for the upper Mississippi and Missouri rivers. Among his projects was the mapping of the Des Moines Rapids on the Mississippi above Keokuk, Iowa, where the Mississippi's mean depth of was the upper limit of steamboat traffic on the river. His work there earned him a promotion to captain. Around 1842, Captain Robert E. Lee arrived as Fort Hamilton's post engineer.
While Lee was stationed at Fort Monroe, he married Mary Anna Randolph Custis (1808–1873), great-granddaughter of Martha Washington by her first husband Daniel Parke Custis, and step-great-granddaughter of George Washington, the first president of the United States. Mary was the only surviving child of George Washington Parke Custis, George Washington's step-grandson, and Mary Lee Fitzhugh Custis, daughter of William Fitzhugh and Ann Bolling Randolph. Robert and Mary married on June 30, 1831, at Arlington House, her parents' house just across the Potomac from Washington, D.C. The 3rd U.S. Artillery served as honor guard at the marriage. They eventually had seven children, three boys and four girls.
All the children survived him except for Annie, who died in 1862. They are all buried with their parents in the crypt of the Lee Chapel at Washington and Lee University in Lexington, Virginia.
Lee was a great-great-great-grandson of William Randolph and a great-great-grandson of Richard Bland. He was also related to Helen Keller through Helen's mother, Kate, and was a distant relative of Admiral Willis Augustus Lee.
On May 1, 1864, General Lee attended the baptism of General A. P. Hill's daughter, Lucy Lee Hill, to serve as her godfather. The occasion is referenced in the painting "Tender is the Heart" by Mort Künstler. He was also the godfather of actress and writer Odette Tyler, the daughter of Brigadier General William Whedbee Kirkland.
Lee distinguished himself in the Mexican–American War (1846–1848). He was one of Winfield Scott's chief aides in the march from Veracruz to Mexico City. He was instrumental in several American victories through his personal reconnaissance as a staff officer; he found routes of attack that the Mexicans had not defended because they thought the terrain was impassable.
He was promoted to brevet major after the Battle of Cerro Gordo on April 18, 1847. He also fought at Contreras, Churubusco, and Chapultepec and was wounded at the last. By the end of the war, he had received additional brevet promotions to lieutenant colonel and colonel, but his permanent rank was still captain of engineers, and he would remain a captain until his transfer to the cavalry in 1855.
Robert E. Lee and Ulysses S. Grant met and worked with each other for the first time during the Mexican–American War. Close observation of their commanders constituted a learning process for both Lee and Grant. The Mexican–American War concluded on February 2, 1848.
After the Mexican War, Lee spent three years at Fort Carroll in Baltimore harbor. During this time, his service was interrupted by other duties, among them surveying and updating maps in Florida. Cuban revolutionary Narciso López intended to forcibly liberate Cuba from Spanish rule. In 1849, searching for a leader for his filibuster expedition, he approached Jefferson Davis, then a United States senator. Davis declined and suggested Lee, who also declined. Both decided it was inconsistent with their duties.
The 1850s were a difficult time for Lee, with his long absences from home, the increasing disability of his wife, troubles in taking over the management of a large slave plantation, and his often morbid concern with his personal failures.
In 1852, Lee was appointed Superintendent of the Military Academy at West Point. He was reluctant to enter what he called a "snake pit", but the War Department insisted and he obeyed. His wife occasionally came to visit. During his three years at West Point, Brevet Colonel Robert E. Lee improved the buildings and courses and spent much time with the cadets. Lee's oldest son, George Washington Custis Lee, attended West Point during his tenure. Custis Lee graduated in 1854, first in his class.
Lee was enormously relieved to receive a long-awaited promotion to second-in-command of the 2nd Cavalry Regiment in Texas in 1855. It meant leaving the Engineering Corps and its sequence of staff jobs for the combat command he truly wanted. He served under Colonel Albert Sidney Johnston at Camp Cooper, Texas; their mission was to protect settlers from attacks by the Apache and the Comanche.
In 1857, his father-in-law George Washington Parke Custis died, creating a serious crisis when Lee took on the burden of executing the will. Custis's will encompassed vast landholdings and hundreds of slaves balanced against massive debts, and required Custis's former slaves "to be emancipated by my executors in such manner as to my executors may seem most expedient and proper, the said emancipation to be accomplished in not exceeding five years from the time of my decease." The estate was in disarray, and the plantations had been poorly managed and were losing money.
Lee tried to hire an overseer to handle the plantation in his absence, writing to his cousin, "I wish to get an energetic honest farmer, who while he will be considerate & kind to the negroes, will be firm & make them do their duty." But Lee failed to find a man for the job, and had to take a two-year leave of absence from the army in order to run the plantation himself.
Lee's cruelty on the Arlington plantation nearly led to a slave revolt, since many of the slaves had been given to understand that they were to be made free as soon as Custis died, and protested angrily at the delay. In May 1858, Lee wrote to his son Rooney, "I have had some trouble with some of the people. Reuben, Parks & Edward, in the beginning of the previous week, rebelled against my authority—refused to obey my orders, & said they were as free as I was, etc., etc.—I succeeded in capturing them & lodging them in jail. They resisted till overpowered & called upon the other people to rescue them." Less than two months after they were sent to the Alexandria jail, Lee decided to remove these three men and three female house slaves from Arlington, and sent them under lock and key to the slave-trader William Overton Winston in Richmond, who was instructed to keep them in jail until he could find "good & responsible" slaveholders to work them until the end of the five-year period.
Lee ruptured the Washington and Custis tradition of respecting slave families and by 1860 he had broken up every family but one on the estate, some of whom had been together since Mount Vernon days.
In 1859, three of the Arlington slaves—Wesley Norris, his sister Mary, and a cousin of theirs—fled for the North, but were captured a few miles from the Pennsylvania border and forced to return to Arlington. On June 24, 1859, the anti-slavery newspaper "New York Daily Tribune" published two anonymous letters (dated June 19, 1859 and June 21, 1859), each claiming to have heard that Lee had the Norrises whipped, and each going so far as to claim that the overseer refused to whip the woman but that Lee took the whip and flogged her personally. Lee privately wrote to his son Custis that "The N. Y. Tribune has attacked me for my treatment of your grandfather's slaves, but I shall not reply. He has left me an unpleasant legacy."
Wesley Norris himself spoke out about the incident after the war, in an 1866 interview printed in an abolitionist newspaper, the "National Anti-Slavery Standard". Norris stated that after they had been captured, and forced to return to Arlington, Lee told them that "he would teach us a lesson we would not soon forget." According to Norris, Lee then had the three of them firmly tied to posts by the overseer, and ordered them whipped with fifty lashes for the men and twenty for Mary Norris. Norris claimed that Lee encouraged the whipping, and that when the overseer refused to do it, called in the county constable to do it instead. Unlike the anonymous letter writers, he does not state that Lee himself whipped any of the slaves. According to Norris, Lee "frequently enjoined [Constable] Williams to 'lay it on well,' an injunction which he did not fail to heed; not satisfied with simply lacerating our naked flesh, Gen. Lee then ordered the overseer to thoroughly wash our backs with brine, which was done."
The Norris men were then sent by Lee's agent to work on the railroads in Virginia and Alabama. According to the interview, Norris was sent to Richmond in January 1863 "from which place I finally made my escape through the rebel lines to freedom." But Federal authorities reported that Norris came within their lines on September 5, 1863, and that he "left Richmond ... with a pass from General Custis Lee." Lee freed the Custis slaves, including Wesley Norris, after the end of the five-year period in the winter of 1862, filing the deed of manumission on December 29, 1862.
Biographers of Lee have differed over the credibility of the account of the punishment as described in the letters in the "Tribune" and in Norris's personal account. They broadly agree that Lee had a group of escaped slaves recaptured, and that after recapturing them he hired them out off of the Arlington plantation as a punishment; but they disagree over the likelihood that Lee flogged them, and over the charge that he personally whipped Mary Norris. In 1934, Douglas S. Freeman described them as "Lee's first experience with the extravagance of irresponsible antislavery agitators" and asserted that "There is no evidence, direct or indirect, that Lee ever had them or any other Negroes flogged. The usage at Arlington and elsewhere in Virginia among people of Lee's station forbade such a thing."
In 2000, Michael Fellman, in "The Making of Robert E. Lee", found the claims that Lee had personally whipped Mary Norris "extremely unlikely," but found it not at all unlikely that Lee had ordered the runaways whipped: "corporal punishment (for which Lee substituted the euphemism 'firmness') was (believed to be) an intrinsic and necessary part of slave discipline. Although it was supposed to be applied only in a calm and rational manner, overtly physical domination of slaves, unchecked by law, was always brutal and potentially savage."
In 2003, Bernice-Marie Yates's "The Perfect Gentleman", cited Freeman's denial and followed his account in holding that, because of Lee's family connections to George Washington, he "was a prime target for abolitionists who lacked all the facts of the situation."
Lee biographer Elizabeth Brown Pryor concluded in 2008 that "the facts are verifiable," based on "the consistency of the five extant descriptions of the episode (the only element that is not repeatedly corroborated is the allegation that Lee gave the beatings himself), as well as the existence of an account book that indicates the constable received compensation from Lee on the date that this event occurred."
In 2014, Michael Korda wrote that "Although these letters are dismissed by most of Lee's biographers as exaggerated, or simply as unfounded abolitionist propaganda, it is hard to ignore them. ... It seems incongruously out of character for Lee to have whipped a slave woman himself, particularly one stripped to the waist, and that charge may have been a flourish added by the two correspondents; it was not repeated by Wesley Norris when his account of the incident was published in 1866. ... [A]lthough it seems unlikely that he would have done any of the whipping himself, he may not have flinched from observing it to make sure his orders were carried out exactly."
Several historians have noted the paradoxical nature of Lee's beliefs and actions concerning race and slavery. While Lee protested that he had sympathetic feelings for blacks, they were subordinate to his own racial identity. While Lee held slavery to be an evil institution, he also saw some benefit to blacks held in slavery. While Lee helped individual slaves to freedom in Liberia, and provided for their emancipation in his own will, he believed the enslaved should eventually be freed in a general way only at some unspecified future date as a part of God's purpose. Slavery for Lee was a moral and religious issue, and not one that would yield to political solutions. Emancipation would sooner come from Christian impulse among slave masters than from "storms and tempests of fiery controversy" such as was occurring in "Bleeding Kansas". Countering Southerners who argued for slavery as a positive good, Lee in his well-known analysis of slavery from an 1856 letter ("see below") called it a moral and political evil. While both Robert and his wife Mary Lee were disgusted with slavery, they also defended it against abolitionist demands for immediate emancipation for all enslaved.
Lee argued that slavery was bad for white people but good for black people, claiming that he found slavery bothersome and time-consuming as an everyday institution to run. In an 1856 letter to his wife, he maintained that slavery was a great evil, but primarily due to the adverse impact that it had on white people: In this enlightened age, there are few I believe, but what will acknowledge, that slavery as an institution, is a moral & political evil in any Country. It is useless to expatiate on its disadvantages. I think it however a greater evil to the white man than to the black race, & while my feelings are strongly enlisted in behalf of the latter, my sympathies are more strong for the former. The blacks are immeasurably better off here than in Africa, morally, socially & physically. The painful discipline they are undergoing, is necessary for their instruction as a race, & I hope will prepare & lead them to better things. How long their subjugation may be necessary is known & ordered by a wise Merciful Providence.
Lee's father-in-law G. W. Parke Custis freed his slaves in his will. In the same tradition, before leaving to serve in Mexico, Lee had written a will providing for the manumission of the only slaves he owned. Parke Custis was a member of the American Colonization Society, which was formed to gradually end slavery by establishing a free republic in Liberia for African-Americans, and Lee assisted several ex-slaves to emigrate there. Also, according to historian Richard B. McCaslin, Lee was a gradual emancipationist, denouncing extremist proposals for immediate abolition of slavery. Lee rejected what he called evilly motivated political passion, fearing a civil and servile war from precipitous emancipation.
Historian Elizabeth Brown Pryor offered an alternative interpretation of Lee's voluntary manumission of slaves in his will, and assisting slaves to a life of freedom in Liberia, seeing Lee as conforming to a "primacy of slave law". She wrote that Lee's private views on race and slavery,
On taking on the role of administrator for the Parke Custis will, Lee used a provision to retain the estate's slaves in bondage to produce income for the estate to retire its debt. Lee did not welcome the role of planter while administering the Custis properties at Romancoke, another property near the Pamunkey River, and Arlington; he rented out the estate's mill. While all the estates prospered under his administration, Lee was unhappy at direct participation in slavery as a hated institution.
Even before what Michael Fellman called a "sorry involvement in actual slave management", Lee judged the experience of white mastery to be a greater moral evil to the white man than blacks suffering under the "painful discipline" of slavery which introduced Christianity, literacy and a work ethic to the "heathen African". Columbia University historian Eric Foner notes that:
By the time of Lee's career in the U.S. Army, the officers of West Point stood aloof from political-party and sectional strife on such issues as slavery, as a matter of principle, and Lee adhered to the principle. He considered it his patriotic duty to be apolitical while in active Army service, and Lee did not speak out publicly on the subject of slavery prior to the Civil War. Before the outbreak of the War, in 1860, Lee voted for John C. Breckinridge, who was the extreme pro-slavery candidate in the 1860 presidential election, not John Bell, the more moderate Southerner who won Virginia.
Lee himself owned a small number of slaves in his lifetime and considered himself a paternalistic master. There are various historical and newspaper hearsay accounts of Lee personally whipping a slave, but they are not direct eyewitness accounts. He was definitely involved in administering the day-to-day operations of a plantation and was involved in the recapture of runaway slaves. One historian noted that Lee separated slave families, something that prominent slave-holding families in Virginia such as Washington and Custis did not do. In 1862, Lee freed the slaves that his wife inherited, but that was in accordance with his father-in-law's will.
Foner writes that "Lee's code of gentlemanly conduct did not seem to apply to blacks" during the War, as he did not stop his soldiers from kidnapping free black farmers and selling them into slavery. Princeton University historian James M. McPherson noted that Lee initially rejected a prisoner exchange between the Confederacy and the Union when the Union demanded that black Union soldiers be included. Lee did not accept the swap until a few months before the Confederacy's surrender.
After the War, Lee told a congressional committee that blacks were "not disposed to work" and did not possess the intellectual capacity to vote and participate in politics. Lee also said to the committee that he hoped that Virginia could "get rid of them," referring to blacks. While not politically active, Lee defended Lincoln's successor Andrew Johnson's approach to Reconstruction, which according to Foner, "abandoned the former slaves to the mercy of governments controlled by their former owners." According to Foner, "A word from Lee might have encouraged white Southerners to accord blacks equal rights and inhibited the violence against the freed people that swept the region during Reconstruction, but he chose to remain silent." Lee was also urged to condemn the white-supremacy organization Ku Klux Klan, but opted to remain silent.
In the generation following the war, Lee, though he died just a few years later, became a central figure in the Lost Cause interpretation of the war. The argument that Lee had always somehow opposed slavery, and freed his wife's slaves, helped maintain his stature as a symbol of Southern honor and national reconciliation. Douglas Southall Freeman's Pulitzer prize-winning four-volume "R. E. Lee: A Biography" (1936), which was for a long period considered the definitive work on Lee, downplayed his involvement in slavery and emphasized Lee as a virtuous person. Eric Foner, who describes Freeman's volume as a "hagiography", notes that on the whole, Freeman "displayed little interest in Lee's relationship to slavery. The index to his four volumes contained 22 entries for 'devotion to duty', 19 for 'kindness', 53 for Lee's celebrated horse, Traveller. But 'slavery', 'slave emancipation' and 'slave insurrection' together received five. Freeman observed, without offering details, that slavery in Virginia represented the system 'at its best'. He ignored the postwar testimony of Lee's former slave Wesley Norris about the brutal treatment to which he had been subjected."
Both the raid on Harpers Ferry and the secession of Texas were monumental events leading up to the Civil War, and Robert E. Lee was present at both. Lee initially remained loyal to the Union after Texas seceded.
John Brown led a band of 21 abolitionists who seized the federal arsenal at Harpers Ferry, Virginia, in October 1859, hoping to incite a slave rebellion. President James Buchanan gave Lee command of detachments of militia, soldiers, and United States Marines, to suppress the uprising and arrest its leaders. By the time Lee arrived that night, the militia on the site had surrounded Brown and his hostages. At dawn, Brown refused the demand for surrender. Lee attacked, and Brown and his followers were captured after three minutes of fighting. Lee's summary report of the episode shows Lee believed it "was the attempt of a fanatic or madman". Lee said Brown achieved "temporary success" by creating panic and confusion and by "magnifying" the number of participants involved in the raid.
In 1860, Lt. Col. Robert E. Lee relieved Major Heintzelman at Fort Brown, and the Mexican authorities offered to restrain "their citizens from making predatory descents upon the territory and people of Texas ... this was the last active operation of the Cortina War". Rip Ford, a Texas Ranger at the time, described Lee as "dignified without hauteur, grand without pride ... he evinced an imperturbable self-possession, and a complete control of his passions ... possessing the capacity to accomplish great ends and the gift of controlling and leading men."
When Texas seceded from the Union in February 1861, General David E. Twiggs, commander of the Department of Texas, surrendered all the American forces there (about 4,000 men, including Lee) to the Texans. Twiggs immediately resigned from the U.S. Army and was made a Confederate general. Lee went back to Washington and was appointed Colonel of the First Regiment of Cavalry in March 1861. Lee's colonelcy was signed by the new president, Abraham Lincoln. Three weeks after his promotion, Colonel Lee was offered a senior command (with the rank of major general) in the expanding Army to fight the Southern states that had left the Union. Fort Mason, Texas, was Lee's last command with the United States Army.
Unlike many Southerners who expected a glorious war, Lee correctly predicted it as protracted and devastating. He privately opposed the new Confederate States of America in letters in early 1861, denouncing secession as "nothing but revolution" and an unconstitutional betrayal of the efforts of the Founding Fathers. Writing to George Washington Custis in January, Lee stated:
Despite opposing secession, Lee said in January that "we can with a clear conscience separate" if all peaceful means failed. He agreed with secessionists in most areas, such as their dislike of Northern anti-slavery criticism, their resentment of efforts to prevent the expansion of slavery into new territories, and their fear of the North's larger population. Lee supported the Crittenden Compromise, which would have constitutionally protected slavery.
Lee's objection to secession was ultimately outweighed by a sense of personal honor, reservations about the legitimacy of a strife-ridden "Union that can only be maintained by swords and bayonets", and duty to defend his native Virginia if attacked. He was asked while leaving Texas by a lieutenant if he intended to fight for the Confederacy or the Union, to which Lee replied, "I shall never bear arms against the Union, but it may be necessary for me to carry a musket in the defense of my native state, Virginia, in which case I shall not prove recreant to my duty".
Although Virginia had the most slaves of any state, it was more similar to Maryland, which stayed in the Union, than to the Deep South; a convention voted against secession in early 1861. Scott, commanding general of the Union Army and Lee's mentor, told Lincoln he wanted Lee for a top command, telling Secretary of War Simon Cameron that he had "entire confidence" in him. Lee accepted a promotion to colonel of the 1st Cavalry Regiment on March 28, again swearing an oath to the United States. Meanwhile, Lee ignored an offer of command from the Confederacy. After Lincoln's call for troops to put down the rebellion, a second Virginia convention in Richmond voted to secede on April 17, and a May 23 referendum would likely ratify the decision. That night Lee dined with his brother Smith and cousin Phillips, both naval officers. Because of Lee's indecision, Phillips went to the War Department the next morning to warn that the Union might lose his cousin if the government did not act quickly.
In Washington that day, presidential advisor Francis P. Blair offered Lee a role as major general commanding the defense of the national capital. He replied:
Lee agreed that to avoid dishonor he had to resign before receiving unwanted orders. While historians have usually called his decision inevitable ("the answer he was born to make", wrote Douglas Southall Freeman; another called it a "no-brainer") given the ties to family and state, an 1871 letter from his eldest daughter, Mary Custis Lee, to a biographer described Lee as "worn and harassed" yet calm as he deliberated alone in his office. People on the street noticed Lee's grim face as he tried to decide over the next two days, and he later said that he kept the resignation letter for a day before sending it on April 20. Two days later the Richmond convention invited Lee to the city. It elected him as commander of Virginia state forces before his arrival on April 23, and almost immediately gave him George Washington's sword as symbol of his appointment; whether he was told of a decision he did not want without time to decide, or did want the excitement and opportunity of command, is unclear.
A cousin on Scott's staff told the family that Lee's decision so upset Scott that he collapsed on a sofa and mourned as if he had lost a son, and asked not to hear Lee's name. When Lee told his family of his decision he said "I suppose you will all think I have done very wrong", as the others were mostly pro-Union; only Mary Custis was a secessionist, and her mother especially wanted to choose the Union but told her husband that she would support whatever he decided. Many younger men like nephew Fitzhugh wanted to support the Confederacy, but Lee's three sons joined the Confederate military only after their father's decision.
Most family members like brother Smith reluctantly also chose the South, but Smith's wife and Anne, Lee's sister, still supported the Union; Anne's son joined the Union Army, and no one in his family ever spoke to Lee again. Many cousins fought for the Confederacy, but Phillips and John Fitzgerald told Lee in person that they would uphold their oaths; John H. Upshur stayed with the Union military despite much family pressure; Roger Jones stayed in the Union army after Lee refused to advise him on what to do; and two of Philip Fendall's sons fought for the Union. Forty percent of Virginian officers stayed with the North.
At the outbreak of war, Lee was appointed to command all of Virginia's forces, but upon the formation of the Confederate States Army, he was named one of its first five full generals. Lee did not wear the insignia of a Confederate general, but only the three stars of a Confederate colonel, equivalent to his last U.S. Army rank. He did not intend to wear a general's insignia until the Civil War had been won and he could be promoted, in peacetime, to general in the Confederate Army.
Lee's first field assignment was commanding Confederate forces in western Virginia, where he was defeated at the Battle of Cheat Mountain and was widely blamed for Confederate setbacks. He was then sent to organize the coastal defenses along the Carolina and Georgia seaboard, appointed commander of the "Department of South Carolina, Georgia and Florida" on November 5, 1861. Between then and the fall of Fort Pulaski on April 11, 1862, he put in place a defense of Savannah that proved successful in blocking a Federal advance on the city. Confederate fort and naval gunnery dictated nighttime movement and construction by the besiegers. Federal preparations required four months. In those four months, Lee developed a defense in depth. Behind Fort Pulaski on the Savannah River, Fort Jackson was improved, and two additional batteries covered river approaches. In the face of Union superiority in naval, artillery, and infantry deployment, Lee was able to block any Federal advance on Savannah, and at the same time well-trained Georgia troops were released in time to meet McClellan's Peninsula Campaign. The city of Savannah would not fall until Sherman's approach from the interior at the end of 1864.
At first, the press spoke of the disappointment of losing Fort Pulaski. Surprised by the effectiveness of large-caliber Parrott rifles in their first deployment, commentators widely speculated that only betrayal could have brought the overnight surrender of a Third System fort. Lee was said to have failed to get effective support in the Savannah River from the three sidewheeler gunboats of the Georgia Navy. Although again blamed by the press for Confederate reverses, he was appointed military adviser to Confederate President Jefferson Davis, the former U.S. Secretary of War. While in Richmond, Lee was ridiculed as the 'King of Spades' for his excessive digging of trenches around the capital. These trenches would later play a pivotal role in battles near the end of the war.
In the spring of 1862, in the Peninsula Campaign, the Union Army of the Potomac under General George B. McClellan advanced on Richmond from Fort Monroe to the east. McClellan forced Gen. Joseph E. Johnston and the Army of Virginia to retreat to just north and east of the Confederate capital.
Then Johnston was wounded at the Battle of Seven Pines, on June 1, 1862. Lee now got his first opportunity to lead an army in the field – the force he renamed the Army of "Northern" Virginia, signaling his confidence that the Union army would be driven away from Richmond. Early in the war, Lee had been called "Granny Lee" for his allegedly timid style of command. Confederate newspaper editorials objected to his replacing Johnston, opining that Lee would be passive, waiting for a Union attack. And for the first three weeks of June, he did not attack, instead strengthening Richmond's defenses.
But then he launched a series of bold attacks against McClellan's forces, the Seven Days Battles. Despite superior Union numbers, and some clumsy tactical performances by his subordinates, Lee's attacks derailed McClellan's plans and drove back part of his forces. Confederate casualties were heavy, but McClellan was unnerved, retreated to the lower James River, and abandoned the Peninsula Campaign. This success completely changed Confederate morale, and the public's regard for Lee. After the Seven Days Battles, and until the end of the war, his men called him simply "Marse Robert", a term of respect and affection.
The setback, and the resulting drop in Union morale, impelled Lincoln to adopt a new policy of relentless, committed warfare. After the Seven Days, Lincoln decided he would move to emancipate most Confederate slaves by executive order, as a military act, using his authority as commander-in-chief. But he needed a Union victory first.
Meanwhile, Lee defeated another Union army under Gen. John Pope at the Second Battle of Bull Run. In less than 90 days after taking command, Lee had run McClellan off the Peninsula, defeated Pope, and moved the battle lines from outside Richmond, to outside Washington.
Lee now invaded Maryland and Pennsylvania, hoping to collect supplies in Union territory, and possibly win a victory that would sway the upcoming Union elections in favor of ending the war. But McClellan's men found a lost Confederate dispatch, Special Order 191, that revealed Lee's plans and movements. McClellan always exaggerated Lee's numerical strength, but now he knew the Confederate army was divided and could be destroyed in detail. However, McClellan moved slowly, not realizing a spy had informed Lee that McClellan had the plans. Lee quickly concentrated his forces west of Antietam Creek, near Sharpsburg, Maryland, where McClellan attacked on September 17. The Battle of Antietam was the single bloodiest day of the war, with both sides suffering enormous losses. Lee's army barely withstood the Union assaults, then retreated to Virginia the next day. This narrow Confederate defeat gave President Abraham Lincoln the opportunity to issue his Emancipation Proclamation, which put the Confederacy on the diplomatic and moral defensive.
Disappointed by McClellan's failure to destroy Lee's army, Lincoln named Ambrose Burnside as commander of the Army of the Potomac. Burnside ordered an attack across the Rappahannock River at Fredericksburg, Virginia. Delays in bridging the river allowed Lee's army ample time to organize strong defenses, and the Union frontal assault on December 13, 1862, was a disaster. There were 12,600 Union casualties to 5,000 Confederate, making it one of the most one-sided battles of the Civil War. After this victory, Lee reportedly said "It is well that war is so terrible, else we should grow too fond of it." At Fredericksburg, according to historian Michael Fellman, Lee had completely entered into the "spirit of war, where destructiveness took on its own beauty."
After the bitter Union defeat at Fredericksburg, President Lincoln named Joseph Hooker commander of the Army of the Potomac. In May 1863, Hooker maneuvered to attack Lee's army via Chancellorsville, Virginia. But Hooker was defeated by Lee's daring maneuver: dividing his army and sending Stonewall Jackson's corps to attack Hooker's flank. Lee won a decisive victory over a larger force, but with heavy casualties, including Jackson, his finest corps commander, who was accidentally killed by his own troops.
The critical decisions came in May–June 1863, after Lee's smashing victory at the Battle of Chancellorsville. The western front was crumbling, as multiple uncoordinated Confederate armies were unable to handle General Ulysses S. Grant's campaign against Vicksburg. The top military advisers wanted to save Vicksburg, but Lee persuaded Davis to overrule them and authorize yet another invasion of the North. The immediate goal was to acquire urgently needed supplies from the rich farming districts of Pennsylvania; a long-term goal was to stimulate peace activity in the North by demonstrating the power of the South to invade. Lee's decision proved a significant strategic blunder and cost the Confederacy control of its western regions, and nearly cost Lee his own army as Union forces cut him off from the South.
In the summer of 1863, Lee invaded the North again, marching through western Maryland and into south central Pennsylvania. He encountered Union forces under George G. Meade at the three-day Battle of Gettysburg in Pennsylvania in July; the battle would produce the largest number of casualties in the American Civil War. With some of his subordinates new and inexperienced in their commands, J.E.B. Stuart's cavalry out of the area, and Lee himself slightly ill, he was less than comfortable with how events were unfolding. While the Confederates controlled the first day of battle, key terrain that should have been taken by General Ewell was not. The second day ended with the Confederates unable to break the Union position, and the Union more solidified. Lee's decision on the third day, against the judgment of his best corps commander General Longstreet, to launch a massive frontal assault on the center of the Union line turned out to be disastrous. The assault known as Pickett's Charge was repulsed and resulted in heavy Confederate losses. The general rode out to meet his retreating army and proclaimed, "All this has been my fault." Lee was compelled to retreat. Despite flooded rivers that blocked his retreat, he escaped Meade's ineffective pursuit. Following his defeat at Gettysburg, Lee sent a letter of resignation to President Davis on August 8, 1863, but Davis refused Lee's request. That fall, Lee and Meade met again in two minor campaigns that did little to change the strategic standoff. The Confederate Army never fully recovered from the substantial losses incurred during the three-day battle in southern Pennsylvania. The historian Shelby Foote stated, "Gettysburg was the price the South paid for having Robert E. Lee as commander."
In 1864 the new Union general-in-chief, Lt. Gen. Ulysses S. Grant, sought to use his large advantages in manpower and material resources to destroy Lee's army by attrition, pinning Lee against his capital of Richmond. Lee successfully stopped each attack, but Grant with his superior numbers kept pushing each time a bit farther to the southeast. These battles in the Overland Campaign included the Wilderness, Spotsylvania Court House and Cold Harbor.
Grant eventually was able to stealthily move his army across the James River. After stopping a Union attempt to capture Petersburg, Virginia, a vital railroad link supplying Richmond, Lee's men built elaborate trenches and were besieged in Petersburg, a development which presaged the trench warfare of World War I. Lee attempted to break the stalemate by sending Jubal A. Early on a raid through the Shenandoah Valley to Washington, D.C., but Early was defeated by the superior forces of Philip Sheridan. The Siege of Petersburg lasted from June 1864 until March 1865, with Lee's outnumbered and poorly supplied army shrinking daily because of desertions by disheartened Confederates.
On February 6, 1865, Lee was appointed General in Chief of the Armies of the Confederate States.
As the South ran out of manpower, the issue of arming the slaves became paramount. Lee explained, "We should employ them without delay ... [along with] gradual and general emancipation". The first units were in training as the war ended. As the Confederate army was devastated by casualties, disease, and desertion, the Union attack on Petersburg succeeded on April 2, 1865. Lee abandoned Richmond and retreated west. He then attempted to escape to the southwest and join up with Joseph E. Johnston's Army of Tennessee in North Carolina. However, his forces were soon surrounded and he surrendered them to Grant on April 9, 1865, at the Battle of Appomattox Court House. Other Confederate armies followed suit and the war ended. The day after his surrender, Lee issued his Farewell Address to his army.
Lee resisted calls by some officers to reject surrender and allow small units to melt away into the mountains, setting up a lengthy guerrilla war. He insisted the war was over and energetically campaigned for inter-sectional reconciliation. "So far from engaging in a war to perpetuate slavery, I am rejoiced that slavery is abolished. I believe it will be greatly for the interests of the South."
The following are summaries of Civil War campaigns and major battles where Robert E. Lee was the commanding officer:
After the war, Lee was not arrested or punished (although he was indicted), but he did lose the right to vote as well as some property. Lee's prewar family home, the Custis-Lee Mansion, was seized by Union forces during the war and turned into Arlington National Cemetery, and his family was not compensated until more than a decade after his death.
In 1866, Lee counseled southerners not to resume fighting, though Grant said Lee was "setting an example of forced acquiescence so grudging and pernicious in its effects as to be hardly realized". Lee joined with Democrats in opposing the Radical Republicans, who demanded punitive measures against the South, distrusted its commitment to the abolition of slavery, and, indeed, distrusted the region's loyalty to the United States. Lee supported a system of free public schools for blacks, but forthrightly opposed allowing blacks to vote ("see" his 1866 congressional testimony below). Emory Thomas says Lee had become a suffering Christ-like icon for ex-Confederates. President Grant invited him to the White House in 1869, and he went. Nationally he became an icon of reconciliation between the North and South, and of the reintegration of former Confederates into the national fabric.
Lee hoped to retire to a farm of his own, but he was too much a regional symbol to live in obscurity. From April to June 1865, he and his family resided in Richmond at the Stewart-Lee House. He accepted an offer to serve as the president of Washington College (now Washington and Lee University) in Lexington, Virginia, and served from October 1865 until his death. The Trustees used his famous name in large-scale fund-raising appeals and Lee transformed Washington College into a leading Southern college, expanding its offerings significantly, adding programs in commerce and journalism, and incorporating the Lexington Law School. Lee was well liked by the students, which enabled him to announce an "honor system" like that of West Point, explaining that "we have but one rule here, and it is that every student be a gentleman." To speed up national reconciliation Lee recruited students from the North and made certain they were well treated on campus and in town.
Several glowing appraisals of Lee's tenure as college president have survived, depicting the dignity and respect he commanded among all. Previously, most students had been obliged to occupy the campus dormitories, while only the most mature were allowed to live off-campus. Lee quickly reversed this rule, requiring most students to board off-campus, and allowing only the most mature to live in the dorms as a mark of privilege; the results of this policy were considered a success. A typical account by a professor there states that "the students fairly worshipped him, and deeply dreaded his displeasure; yet so kind, affable, and gentle was he toward them that all loved to approach him. ... No student would have dared to violate General Lee's expressed wish or appeal; if he had done so, the students themselves would have driven him from the college."
While at Washington College, Lee told a colleague that the greatest mistake of his life was taking a military education.
During his time as president of Washington College, he defended his father in a biographical sketch.
On May 29, 1865, President Andrew Johnson issued a Proclamation of Amnesty and Pardon to persons who had participated in the rebellion against the United States. There were fourteen excepted classes, though, and members of those classes had to make special application to the President. Lee sent an application to Grant and wrote to President Johnson on June 13, 1865:
On October 2, 1865, the same day that Lee was inaugurated as president of Washington College in Lexington, Virginia, he signed his Amnesty Oath, thereby complying fully with the provision of Johnson's proclamation. Lee was not pardoned, nor was his citizenship restored.
Three years later, on December 25, 1868, Johnson proclaimed a second amnesty which removed previous exceptions, such as the one that affected Lee.
Lee, who had opposed secession and remained mostly indifferent to politics before the Civil War, supported President Andrew Johnson's plan of Presidential Reconstruction that took effect in 1865–66. However, he opposed the Congressional Republican program that took effect in 1867. In February 1866, he was called to testify before the Joint Congressional Committee on Reconstruction in Washington, where he expressed support for Johnson's plans for quick restoration of the former Confederate states, and argued that restoration should return, as far as possible, to the "status quo ante" in the Southern states' governments (with the exception of slavery).
Lee told the committee, "...every one with whom I associate expresses kind feelings towards the freedmen. They wish to see them get on in the world, and particularly to take up some occupation for a living, and to turn their hands to some work." Lee also expressed his "willingness that blacks should be educated, and ... that it would be better for the blacks and for the whites." Lee forthrightly opposed allowing blacks to vote: "My own opinion is that, at this time, they [black Southerners] cannot vote intelligently, and that giving them the [vote] would lead to a great deal of demagogism, and lead to embarrassments in various ways."
In an interview in May 1866, Lee said: "The Radical party are likely to do a great deal of harm, for we wish now for good feeling to grow up between North and South, and the President, Mr. Johnson, has been doing much to strengthen the feeling in favor of the Union among us. The relations between the Negroes and the whites were friendly formerly, and would remain so if legislation be not passed in favor of the blacks, in a way that will only do them harm."
In 1868, Lee's ally Alexander H. H. Stuart drafted a public letter of endorsement for the Democratic Party's presidential campaign, in which Horatio Seymour ran against Lee's old foe Republican Ulysses S. Grant. Lee signed it along with thirty-one other ex-Confederates. The Democratic campaign, eager to publicize the endorsement, published the statement widely in newspapers. Their letter claimed paternalistic concern for the welfare of freed Southern blacks, stating that "The idea that the Southern people are hostile to the negroes and would oppress them, if it were in their power to do so, is entirely unfounded. They have grown up in our midst, and we have been accustomed from childhood to look upon them with kindness." However, it also called for the restoration of white political rule, arguing that "It is true that the people of the South, in common with a large majority of the people of the North and West, are, for obvious reasons, inflexibly opposed to any system of laws that would place the political power of the country in the hands of the negro race. But this opposition springs from no feeling of enmity, but from a deep-seated conviction that, at present, the negroes have neither the intelligence nor the other qualifications which are necessary to make them safe depositories of political power."
In his public statements and private correspondence, Lee argued that a tone of reconciliation and patience would further the interests of white Southerners better than hotheaded antagonism to federal authority or the use of violence. Lee repeatedly expelled white students from Washington College for violent attacks on local black men, and publicly urged obedience to the authorities and respect for law and order. He privately chastised fellow ex-Confederates such as Jefferson Davis and Jubal Early for their frequent, angry responses to perceived Northern insults, writing in private to them as he had written to a magazine editor in 1865, that "It should be the object of all to avoid controversy, to allay passion, give full scope to reason and to every kindly feeling. By doing this and encouraging our citizens to engage in the duties of life with all their heart and mind, with a determination not to be turned aside by thoughts of the past and fears of the future, our country will not only be restored in material prosperity, but will be advanced in science, in virtue and in religion."
On September 28, 1870, Lee suffered a stroke. He died two weeks later, shortly after 9 a.m. on October 12, 1870, in Lexington, Virginia, from the effects of pneumonia. According to one account, his last words on the day of his death were "Tell Hill he "must" come up! Strike the tent", but this is debatable because of conflicting accounts and because Lee's stroke had resulted in aphasia, possibly rendering him unable to speak.
At first no suitable coffin for the body could be located; the muddy roads were too flooded for anyone to get in or out of the town of Lexington. An undertaker had ordered three coffins from Richmond, but due to unprecedented flooding from long-continued heavy rains, the caskets were washed down the Maury River. Two neighborhood boys, C.G. Chittum and Robert E. Hillis, found one of the coffins that had been swept ashore. Undamaged, it was used for the General's body, though it was a bit short for him; as a result, Lee was buried without shoes. He was buried underneath Lee Chapel at Washington and Lee University, where his body remains.
Among the supporters of the Confederacy, Lee came to be even more revered after his surrender than he had been during the war, when Stonewall Jackson had been the great Confederate hero. In an address before the Southern Historical Society in Atlanta, Georgia in 1874, Benjamin Harvey Hill described Lee in this way:
By the end of the 19th century, Lee's popularity had spread to the North. Lee's admirers have pointed to his character and devotion to duty, and his occasional tactical successes in battles against a stronger foe.
Military historians continue to pay attention to his battlefield tactics and maneuvering, though many think he should have designed better strategic plans for the Confederacy. He was not given full direction of the Southern war effort until late in the conflict.
Historian Eric Foner writes that at the end of his life,
Robert E. Lee has been commemorated on U.S. postage stamps at least five times, the first one being a commemorative stamp that also honored Stonewall Jackson, issued in 1936. A second "regular-issue" stamp was issued in 1955. He was commemorated with a 32-cent stamp issued in the American Civil War Issue of June 29, 1995. His horse Traveller is pictured in the background.
Washington and Lee University in Lexington, Virginia was commemorated on its 200th anniversary on November 23, 1948, with a 3-cent postage stamp. The central design is a view of the university, flanked by portraits of generals George Washington and Robert E. Lee. Lee was again commemorated on a commemorative stamp in 1970, along with Jefferson Davis and Thomas J. "Stonewall" Jackson, depicted on horseback on the 6-cent Stone Mountain Memorial commemorative issue, modeled after the actual Stone Mountain Memorial carving in Georgia. The stamp was issued on September 19, 1970, in conjunction with the dedication of the Stone Mountain Confederate Memorial in Georgia on May 9, 1970. The design of the stamp replicates the memorial, the largest high relief sculpture in the world. It is carved on the side of Stone Mountain 400 feet above the ground.
Stone Mountain also led to Lee's appearance on a commemorative coin, the 1925 Stone Mountain Memorial half dollar. During the 1920s and '30s, dozens of specially designed half dollars were struck to raise money for various events and causes. This issue had a particularly wide distribution, with 1,314,709 minted; unlike some of the other issues, it remains a very common coin.
On September 29, 2007, three Civil War-era letters by General Lee were sold for $61,000 at auction by Thomas Willcox, much less than the record of $630,000 paid for a Lee item in 2002. The auction included more than 400 of Lee's documents from the estate of Willcox's parents, which had been in the family for generations. South Carolina sued to stop the sale on the grounds that the letters were official documents and therefore property of the state, but the court ruled in favor of Willcox.
In 1865, after the war, Lee was paroled and signed an oath of allegiance, asking to have his United States citizenship restored. However, his application was misplaced; as a result, he did not receive a pardon and his citizenship was not restored. On January 30, 1975, Senate Joint Resolution 23, "A joint resolution to restore posthumously full rights of citizenship to General R. E. Lee", was introduced into the Senate by Senator Harry F. Byrd Jr. (I-VA), the result of a five-year campaign to accomplish this. The resolution was passed, enacting Public Law 94–67, and the bill was signed by President Gerald Ford on September 5, 1975.
Lee opposed the construction of public memorials to the Confederate rebellion on the grounds that they would prevent the healing of wounds inflicted during the war. Nevertheless, after his death, he became an icon used by promoters of "Lost Cause" mythology, who sought to romanticize the Confederate cause and strengthen white supremacy in the South. Later in the 20th century, particularly following the civil rights movement, historians reassessed Lee; his reputation declined because of his failure to support rights for freedmen after the war, and even his strategic choices as a military leader came under scrutiny.
From its installation in 1884 until its removal in 2017, the most prominent monument in New Orleans was a monument to General Lee: a statue of Lee atop a towering column of white marble in the middle of Lee Circle, facing north. Lee Circle is situated along New Orleans's famous St. Charles Avenue; the city's streetcars roll past it, and its best-known Mardi Gras parades go around it (the spot is so popular that bleachers are set up annually around the perimeter for Mardi Gras). Around the corner from Lee Circle is New Orleans's Confederate museum, which contains the second-largest collection of Confederate memorabilia in the world. The statue of General Lee was removed on May 19, 2017, the last of four Confederate monuments in New Orleans to be taken down.
In a tribute to Lee Circle (which had formerly been known as Tivoli Circle), former Confederate soldier George Washington Cable wrote:
In Tivoli Circle, New Orleans, from the centre and apex of its green flowery mound, an immense column of pure white marble rises in the ... majesty of Grecian proportions high up above the city's house-tops into the dazzling sunshine ... On its dizzy top stands the bronze figure of one of the world's greatest captains. He is alone. Not one of his mighty lieutenants stand behind, beside or below him. His arms are folded on that breast that never knew fear, and his calm, dauntless gaze meets the morning sun as it rises, like the new prosperity of the land he loved and served so masterly, above the far distant battle fields where so many thousands of his gray veterans lie in the sleep of fallen heroes. ("Silent South", 1885, The Century Illustrated Monthly Magazine)
Arlington House, The Robert E. Lee Memorial, also known as the Custis–Lee Mansion, is a Greek revival mansion in Arlington, Virginia, that was once Lee's home. It overlooks the Potomac River and the National Mall in Washington, D.C. During the Civil War, the grounds of the mansion were selected as the site of Arlington National Cemetery, in part to ensure that Lee would never again be able to return to his home. The United States designated the mansion as a National Memorial to Lee in 1955, a mark of widespread respect for him in both the North and South.
In Richmond, Virginia, a large equestrian statue of Lee by French sculptor Jean Antonin Mercié is the centerpiece of the city's famous Monument Avenue, which boasts four other statues of famous Confederates. This monument to Lee was unveiled on May 29, 1890, before a crowd of over 100,000 people; the occasion has been described as "the day white Virginia stopped admiring Gen. Robert E. Lee and started worshiping him". Lee is also shown mounted on Traveller atop the Virginia Monument in Gettysburg National Military Park, facing roughly in the direction of Pickett's Charge. Lee's portrayal in a mural on Richmond's flood wall on the James River, considered offensive by some, was removed in the late 1990s but has since been restored.
Also in Virginia, the Robert Edward Lee sculpture in Charlottesville was listed on the National Register of Historic Places in 1997. Since there is no historical link between Lee and the city, the Charlottesville City Council voted in February 2017 to remove it, along with a statue of Stonewall Jackson, but the removal was temporarily stayed by court action; the council did rename Lee Park as Emancipation Park. The prospect of the statues being removed and the parks being renamed brought many out-of-towners, described as white supremacist and alt-right, to Charlottesville for the Unite the Right rally of August 2017, in which three people died. For several months the monuments were shrouded in black. As of October 2018, the fate of the statue of Lee remained unresolved; in July 2018, the City Council changed the park's name again, to Market Street Park.
In Baltimore's Wyman Park, a large double equestrian statue of Lee and Jackson is located directly across from the Baltimore Museum of Art. Designed by Laura Gardin Fraser and dedicated in 1948, Lee is depicted astride his horse Traveller next to Stonewall Jackson who is mounted on "Little Sorrel." Architect John Russell Pope created the base, which was dedicated on the anniversary of the eve of the Battle of Chancellorsville. The Baltimore area of Maryland is also home to a large nature park called Robert E. Lee Memorial Park.
In 1953, two stained-glass windows – one honoring Lee, the other Stonewall Jackson – were installed in the Washington National Cathedral. The stained glass of Lee shows him on horseback at Chancellorsville; it was sponsored by the United Daughters of the Confederacy. In 2017, these windows were removed by a vote of the cathedral's governing board. The cathedral plans to keep the windows and eventually display them in historical context.
An equestrian statue of Lee stood in Robert E. Lee Park in Dallas until 2017, and in Austin, a statue of Lee is on display on the main mall of the University of Texas at Austin. A statue of Robert E. Lee is one of two statues (the other is of Washington) representing Virginia in Statuary Hall in the Capitol in Washington, D.C. Lee is one of the figures depicted in the bas-relief carved into Stone Mountain near Atlanta; accompanying him on horseback in the relief are Stonewall Jackson and Jefferson Davis.
The birthday of Robert E. Lee is celebrated or commemorated in several states. In Virginia, Lee–Jackson Day is celebrated on the Friday preceding Martin Luther King Jr. Day, which is the third Monday in January. In Texas, he is celebrated as part of Confederate Heroes Day on January 19, Lee's birthday. In Alabama and Mississippi, his birthday is celebrated on the same day as Martin Luther King Jr. Day, while in Georgia, it was observed on the day after Thanksgiving until 2016, when the state stopped officially recognizing the holiday.
One United States college and one junior college are named for Lee: Washington and Lee University in Lexington, Virginia, and Lee College in Baytown, Texas. Lee Chapel at Washington and Lee University marks Lee's final resting place. Throughout the South, many primary and secondary schools were also named for him, as were private schools such as Robert E. Lee Academy in Bishopville, South Carolina.
In 1900, Lee was one of the first 29 individuals selected for the Hall of Fame for Great Americans (the first Hall of Fame in the United States), designed by Stanford White, on the Bronx, New York, campus of New York University, now a part of Bronx Community College. However, his bust was removed in August 2017 by order of New York Governor Andrew Cuomo.
In 1862, the newly formed Confederate Navy purchased a 642-ton iron-hulled side-wheel gunboat, built at Glasgow, Scotland, and named her CSS "Robert E. Lee" in honor of the general. During the next year, she became one of the South's most famous blockade runners, successfully making more than twenty runs through the Union blockade.
The Mississippi River steamboat "Robert E. Lee" was named for Lee after the Civil War. It was a participant in an 1870 St. Louis–New Orleans race against the "Natchez VI", which was featured in a Currier and Ives lithograph; the "Robert E. Lee" won the race. The steamboat inspired the 1912 song "Waiting for the Robert E. Lee" by Lewis F. Muir and L. Wolfe Gilbert. In more modern times, a United States Navy ballistic missile submarine built in 1958 was named for Lee, as was the M3 Lee tank, produced in 1941 and 1942.
The Commonwealth of Virginia issues an optional license plate honoring Lee, making reference to him as 'The Virginia Gentleman'. In February 2014, a road on Fort Bliss previously named for Lee was renamed to honor Buffalo Soldiers.
A recent biographer, Jonathan Horn, outlines the unsuccessful efforts in Washington to memorialize Lee by naming the Arlington Memorial Bridge after both Grant and Lee.
Lee is a main character in the Shaara Family novels "The Killer Angels" (1974, "Gettysburg"), "Gods and Generals" (1988), and "The Last Full Measure" (2000), as well as the film adaptations of "Gettysburg" (1993) and "Gods and Generals" (2003). He is played by Martin Sheen in the former and by Lee's descendant Robert Duvall in the latter. Lee is portrayed as a hero in the historical children's novel "Lee and Grant at Appomattox" (1950) by MacKinlay Kantor. His part in the Civil War is told from the perspective of his horse in Richard Adams's book "Traveller" (1988).
Lee is an obvious subject for American Civil War alternate histories. Ward Moore's "Bring the Jubilee" (1953), MacKinlay Kantor's "If the South Had Won the Civil War" (1960), and Harry Turtledove's "The Guns of the South" (1992), all have Lee ending up as President of a victorious Confederacy and freeing the slaves (or laying the groundwork for the slaves to be freed in a later decade). Although Moore and Kantor's novels relegate him to a set of passing references, Lee is more of a main character in Turtledove's "Guns". He is also the prime character of Turtledove's "Lee at the Alamo", which can be read online, and sees the opening of the Civil War drastically altered so as to affect Lee's personal priorities considerably. Turtledove's "War Between the Provinces" series is an allegory of the Civil War told in the language of fairy tales, with Lee appearing as a knight named "Duke Edward of Arlington". Lee is also a knight in "The Charge of Lee's Brigade" in "Alternate Generals" volume 1, written by Turtledove's friend S. M. Stirling and featuring Lee, whose Virginia is still a loyal British colony, fighting for the Crown against the Russians in Crimea. In Lee Allred's "East of Appomattox" in "Alternate Generals" volume 3, Lee is the Confederate Minister to London circa 1868, desperately seeking help for a CSA which has turned out poorly suited to independence. Robert Skimin's "Grey Victory" features Lee as a supporting character preparing to run for the presidency in 1867.
In Connie Willis' 1987 novel "Lincoln's Dreams", a research assistant meets a young woman who dreams about the Civil War from Robert E. Lee's point of view.
The Dodge Charger featured in the CBS television series "The Dukes of Hazzard" (1979–1985) was named The General Lee. In the 2005 film based on this series, the car is driven past a statue of the General, while the car's occupants salute him. | https://en.wikipedia.org/wiki?curid=25740 |
Raster graphics
In computer graphics, a raster graphics or bitmap image is a dot matrix data structure that represents a generally rectangular grid of pixels (points of color), viewable via a monitor, paper, or other display medium. Raster images are stored in image files with varying formats.
A bitmap is a rectangular grid of pixels, with each pixel's color being specified by a number of bits. A bitmap might be created for storage in the display's video memory or as a device-independent bitmap file. A raster is technically characterized by the width and height of the image in pixels and by the number of bits per pixel (or color depth, which determines the number of colors it can represent).
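This characterization can be made concrete. The following Python fragment is a minimal sketch (the class layout is an illustrative assumption, not any particular file format); it shows how width, height, and bits per pixel together determine how many colors a bitmap can represent:

```python
# Illustrative sketch of a bitmap's technical characterization;
# not a real file format, just the width/height/depth relationship.
from dataclasses import dataclass

@dataclass
class Bitmap:
    width: int           # image width in pixels
    height: int          # image height in pixels
    bits_per_pixel: int  # color depth

    @property
    def colors_representable(self) -> int:
        # A color depth of n bits can distinguish 2**n colors.
        return 2 ** self.bits_per_pixel

bmp = Bitmap(width=640, height=480, bits_per_pixel=24)
print(bmp.colors_representable)  # 16777216 ("true color")
```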
The printing and prepress industries know raster graphics as contones (from "continuous tones"). The opposite of contones is "line work", usually implemented as vector graphics in digital systems. Vector images can be rasterized (converted into pixels), and raster images can be vectorized (converted into vector graphics), by software. In both cases some information is lost, although certain vectorization operations can recreate salient information, as in the case of optical character recognition.
The word "raster" has its origins in the Latin "rastrum" (a rake), which is derived from "radere" (to scrape). It originates from the raster scan of cathode ray tube (CRT) video monitors, which paint the image line by line by magnetically or electrostatically steering a focused electron beam. By association, it can also refer to a rectangular grid of pixels. The word rastrum is now used to refer to a device for drawing musical staff lines.
Most modern computers have bitmapped displays, where each on-screen pixel directly corresponds to a small number of bits in memory. The screen is refreshed simply by scanning through pixels and coloring them according to each set of bits. The refresh procedure, being speed critical, is often implemented by dedicated circuitry, often as a part of a graphics processing unit. An early scanned display with raster computer graphics was invented in the late 1960s by A. Michael Noll at Bell Labs, but its patent application, filed February 5, 1970, was abandoned at the Supreme Court in 1977 over the issue of the patentability of computer software.
Most computer images are stored in raster graphics formats or compressed variations, including GIF, JPEG, and PNG, which are popular on the World Wide Web.
Three-dimensional voxel raster graphics are employed in video games and are also used in medical imaging such as MRI scanners.
GIS data is commonly stored in a raster format to encode geographic data as the pixel values. Georeferencing information can also be associated with pixels.
Raster graphics are resolution dependent, meaning they cannot scale up to an arbitrary resolution without loss of apparent quality. This property contrasts with the capabilities of vector graphics, which easily scale up to the quality of the device rendering them. Raster graphics deal more practically than vector graphics with photographs and photo-realistic images, while vector graphics often serve better for typesetting or for graphic design. Modern computer monitors typically display about 72 to 130 pixels per inch (PPI), and some modern consumer printers can resolve 2400 dots per inch (DPI) or more; determining the most appropriate image resolution for a given printer resolution can pose difficulties, since printed output may have a greater level of detail than a viewer can discern on a monitor. Typically, a resolution of 150 to 300 PPI works well for 4-color process (CMYK) printing.
However, for printing technologies that perform color mixing through dithering (halftone) rather than through overprinting (virtually all home/office inkjet and laser printers), printer DPI and image PPI have very different meanings, and this can be misleading: through the dithering process, the printer builds a single image pixel out of several printer dots to increase color depth, so the printer's DPI setting must be set far higher than the desired PPI to ensure sufficient color depth without sacrificing image resolution. Thus, for instance, printing an image at 250 PPI may actually require a printer setting of 1200 DPI.
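The arithmetic behind this rule of thumb can be sketched as follows (a simplified model; real halftone screening is considerably more involved):

```python
# Simplified model: how many printer dots are available per image pixel
# when a dithering printer renders an image.
def dots_per_pixel(printer_dpi: float, image_ppi: float) -> float:
    per_axis = printer_dpi / image_ppi  # dots per pixel along one axis
    return per_axis ** 2                # dots available in the pixel's area

# At 1200 DPI and 250 PPI, each pixel spans 4.8 dots per axis, giving the
# printer roughly 23 dots in which to dither intermediate tones.
print(dots_per_pixel(1200, 250))  # 23.04
```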
Raster-based image editors, such as PaintShop Pro, Painter, Photoshop, Paint.NET, MS Paint, and GIMP, revolve around editing pixels, unlike vector-based image editors, such as Xfig, CorelDRAW, Adobe Illustrator, or Inkscape, which revolve around editing lines and shapes (vectors). When an image is rendered in a raster-based image editor, the image is composed of millions of pixels. At its core, a raster image editor works by manipulating each individual pixel. Most pixel-based image editors work using the RGB color model, but some also allow the use of other color models such as the CMYK color model. | https://en.wikipedia.org/wiki?curid=25742 |
Rerun
A rerun or repeat is a rebroadcast of an episode of a radio or television program. There are two types of reruns – those that occur during a hiatus, and those that occur when a program is syndicated.
In the United Kingdom, the word "repeat" refers only to a single episode, while "rerun" or "rerunning" is the preferred term for an entire series/season. A "repeat" is a single episode of a series that is broadcast outside its original timeslot on the same channel/network, usually a re-airing of the episode scheduled earlier in the previous week; it allows viewers who were unable to watch the show in its original timeslot to catch up before the next episode is broadcast. The term "rerun" can also be used in some respects as a synonym for "reprint", the equivalent term for print items; this is especially true for print items that are part of ongoing series (such as comic strips; "Peanuts", for instance, has been in reruns since the retirement and death of creator Charles M. Schulz). In South Africa, reruns of the daily soap opera "7de Laan", and others, are called an Omnibus: a weekly rerun broadcast on Sunday afternoons on the original channel/network, showing the past week's episodes back to back.
When used to refer to the rebroadcast of a single episode, Lucille Ball and Desi Arnaz are generally credited as the inventors of the rerun; it was first utilized for the American television series "I Love Lucy" (1951–57) during Ball's pregnancy. Prior to "I Love Lucy" rerunning its episodes during the summer, shows typically went on a summer hiatus and were replaced with "summer replacements", generally lower-priority programs; this strategy has seen increased use in the 21st century as fewer episodes have been produced each season and in-season reruns have increased. Rod Serling's 1955 teleplay "Patterns" was credited with proving reruns' viability; buoyed by strong word of mouth, the rerun of "Patterns" drew more viewers than the first run as people who had missed the first airing a month prior tuned in to catch the re-airing.
In the United States, most television shows from the late 1940s and early 1950s were performed live, and in many cases they were never recorded. However, television networks in the United States began making kinescope recordings of shows broadcast live from the East Coast. This allowed the show to be broadcast later for the West Coast. These kinescopes, along with pre-filmed shows, and later, videotape, paved the way for extensive reruns of syndicated television series.
In the United States, currently running shows will rerun older episodes from the same season to fill the time slot during the "off-season" period when no new episodes are being made. Shows tend to start rerunning episodes after the November sweeps period (the ratings from which determine the cost of a commercial run during that time slot), and usually show only reruns from mid-December until mid-January or even February sweeps. This winter (or "mid-season") phase is also used to try out new shows that did not make it onto the fall schedule to see how they fare with the public. These series usually run six to 13 episodes. If they do well with the public, they may get a renewal for a half (13 weeks) or full season in the new schedule. Shows that are already popular will return from February sweeps until the end of the season (which sometimes ends before May sweeps) with only limited reruns used.
The number of episodes per season, originally well over 30 episodes during the 1950s and 1960s, dropped below 26 (the number of episodes required to fill a time slot for a year without rerunning any episode more than once) in the 1970s. Specials typically pad out the remainder of the schedule.
Often, if a television special such as "Peter Pan" or a network television broadcast of a classic film like "The Wizard of Oz" is especially well-received, it will be rerun from time to time. Before the VCR era, this would be the only opportunity audiences had of seeing a program more than once.
Seasonal programming such as "How the Grinch Stole Christmas", "The Ten Commandments", "It's a Wonderful Life" or the Charlie Brown television specials is normally re-shown each year, in the appropriate time frame.
A television program goes into syndication when many episodes of the program are sold as a package. Generally the buyer is either a cable channel or an owner of local television stations. Often, programs are not particularly profitable until they are sold for syndication. Since local television stations often need to sell more commercial airtime than network affiliates, syndicated shows are usually edited to make room for extra commercials. Often about 100 episodes (four to five seasons' worth) are required for a weekly series to be rerun in daily syndication (at least four times a week). Very popular series running more than four seasons may start daily reruns of the first seasons, while production and airings continue of the current season's episodes; until approximately the early 1980s, shows that aired in syndication while still in production had the reruns aired under an alternate name (or multiple alternate names, as was the case with "Death Valley Days") to differentiate the reruns from the first-run episodes.
Few people anticipated the long life that a popular television series would eventually see in syndication, so most performers signed contracts that limited residual payments to about six repeats. After that, the actors received nothing and the production company would keep 100% of any income until the copyright expired; many shows did not even have their copyrights renewed and others were systematically destroyed to recycle valuable film, such was the lack of awareness of the potential for revenue from them. This situation went unchanged until the mid-1970s, when contracts for new shows extended residual payments for the performers, regardless of the number of reruns, while tape recycling effectively came to an end (rapid advancements in digital video in the 1990s made preservation far more economical) and the Copyright Act of 1976 extended copyright terms to much longer lengths, eliminating the need for renewal.
Once a series is no longer performing well enough to be sold in syndication, it may still remain in "barter" syndication, in which television stations are offered the program for free in exchange for a requirement to air additional advertisements (without compensation) bundled with the free program during other shows (barter syndication is far more common, if not the norm, in radio, where only the most popular programs charge rights fees). The Program Exchange was once the most prominent barter syndicator in United States television, offering mostly older series from numerous network libraries. Barter syndicated series may be seen on smaller, independent stations with small budgets or as short-term filler on larger stations; they tend not to be as widely syndicated as programs syndicated with a rights fee.
With the growing availability of cable and satellite television channels as well as over-the-air digital subchannels, combined with a growing body of available post-syndication programming, a handful of specialty channels have been built solely or primarily to run former network programming which otherwise would no longer be in syndication. Branded as "classic television", these often carry reruns of programming dating back to the monochrome television era and are promoted as nostalgia. The corresponding radio format would be that of an oldies, classic rock, classic hits or adult standards station. Depending on the programs chosen for a classic network, running the format can be very inexpensive, due to many shows beginning to fall into the public domain.
On cable and satellite, channels that devote at least some of their program schedule to post-syndication reruns include Nick at Nite, TV Land, TBS, USA Network, WGN America, Pop, Discovery Family, Game Show Network, Boomerang, Nicktoons, INSP, RFD-TV, and the Hallmark Channel. Equity Media Holdings had been using low-power television stations to carry its own Retro Television Network in various markets; those stations were, as a result of Equity going bankrupt, sold to religious broadcaster Daystar Television Network. Since the early 2010s, the growth of digital subchannel networks has allowed for increasing specialization of these classic networks: in addition to general-interest program networks such as MeTV, Logo TV, Retro TV and Antenna TV, there exist networks solely for sitcoms (Laff), game shows (Buzzr), black-oriented programs (Bounce TV), children's programming (PBJ, Qubo), true crime and court programming (Justice Network), and feature films (Movies!, getTV and This TV).
Traditionally, shows most likely to be rerun in this manner are scripted comedies and dramas. Such shows are more likely to be considered evergreen content that can be rerun for a long period of time without losing its cultural relevance. Game shows, variety shows, Saturday morning cartoons and, to a lesser extent, newsmagazines, tabloid talk shows and late-night talk shows (often in edited form) have been seen less commonly in reruns; game shows can quickly become dated because of inflation, while talk shows often draw humor from contemporary events. Most variants of reality television have proven to be a comparative failure in reruns, due to a number of factors (high cast turnover, loss of the element of surprise, overall hostility toward the format, and lack of media cross-promotion among them); some self-contained and personality-driven reality shows have been successfully rerun. Reruns of sports broadcasts, which face many of the same issues reality shows face, have found a niche, and networks such as MSG Network, ESPN Classic and NFL Network currently have a significant portion of programming time devoted to reruns of live sportscasts.
With the rise of the DVD video format, box sets featuring season or series runs of television series have become an increasingly important retail item. Some view this development as reruns becoming a major revenue source in their own right, rather than merely serving the standard business model of drawing audiences for advertising. While there were videotape releases of television series before DVD, the format's limited content capacity, large size and reliance on mechanical winding made it impractical as a widespread retail item. Many series which continue to air first-run episodes (such as "Modern Family" and "Grey's Anatomy") may release DVD sets of the prior season between the end of that season and the beginning of the next.
Some television programs that are released on DVD (particularly those that have been out of production for several years) may not have all of their seasons released, either due to poor overall sales or prohibitive costs for obtaining rights to music used in the program. One such instance is "Perfect Strangers", which has seldom been in wide syndication since the late 1990s, primarily due to lack of demand; only a DVD set of its first and second seasons was released, because of the expense of relicensing songs performed by the show's two lead characters in later seasons. In some cases, series whose later season releases have been held up for these reasons may have the remaining seasons made available on DVD, often after a distributor that does not hold syndication rights to the program (such as Shout! Factory) secures the rights for future DVD releases.
TV Guide originally used the term "rerun" to designate rebroadcast programs, but abruptly changed to "repeat" in the early 1970s.
Other TV listings services and publications, including local newspapers, would often indicate reruns as "(R)"; since the early 2000s, many listing services only provide a notation if an episode is new ("(N)"), with reruns getting no notation.
In the United Kingdom, most drama and comedy series run for shorter seasons – typically six, seven or thirteen episodes – and are then replaced by others. An exception is soap operas, which are either on all year round (for example, "EastEnders" and "Coronation Street"), or are on for a season similar to the American format.
As in the U.S., fewer new episodes are made during the summer. Until recently it was also common practice for the BBC, ITV and Channel 4 to repeat classic shows from their archives, but this has more or less dried up in favor of newer (and cheaper) formats like reality shows, except on the BBC where older BBC shows, especially sitcoms like "Dad's Army" and "Fawlty Towers", are frequently repeated.
Syndication did not exist as such in the United Kingdom until the arrival of satellite, cable and, from 1998 on, digital television, although it could be argued that many ITV programs up to the early 1990s, particularly imported programming, were syndicated in the sense that each ITV region bought some programs independently of the ITV network; in particular, many programs out of prime time made by smaller ITV stations were "part-networked", with some regions showing them and others not. Nowadays there are many channels in the UK (for example, Gold) which repackage and rebroadcast "classic" programming from both sides of the Atlantic. Some of these channels, like their U.S. counterparts, make commercial timing cuts; others get around this by running shows in longer time slots, and critics of timing cuts see no reason why all channels should not do the same.
Early on in the history of British television, agreements with the actors' union Equity and other trade bodies limited the number of times a single program could be broadcast, usually only twice, and these showings were limited to within a set time period such as five years. This was due to the unions' fear that channels filling their schedules with repeats would put actors and other production staff out of work as fewer new shows were made. It also had the unintended side effect of causing many programs to be junked after their repeat rights had expired, as they were considered to be of no further use by the broadcasters. Although these agreements changed during the 1980s and beyond, it is still expensive to repeat archive television series on British terrestrial television, as new contracts have to be drawn up and payments made to the artists concerned. Repeats on multi-channel television are cheaper, as are re-showings of newer programs covered by less strict repeat clauses. However, programs are no longer destroyed, as the historical and cultural reasons for keeping them are now recognized and the cost of maintaining archives is far lower, even if the programs have little or no repeat value. | https://en.wikipedia.org/wiki?curid=25745 |
Router (computing)
A router is a networking device that forwards data packets between computer networks. Routers perform the traffic-directing functions on the Internet. Data sent through the Internet, such as a web page or email, is in the form of data packets. A packet is typically forwarded from one router to another through the networks that constitute an internetwork (e.g. the Internet) until it reaches its destination node.
A router is connected to two or more data lines from different IP networks. When a data packet comes in on one of the lines, the router reads the network address information in the packet header to determine the ultimate destination. Then, using information in its routing table or routing policy, it directs the packet to the next network on its journey.
The most familiar type of IP routers are home and small office routers that simply forward IP packets between the home computers and the Internet. An example of a router would be the owner's cable or DSL router, which connects to the Internet through an Internet service provider (ISP). More sophisticated routers, such as enterprise routers, connect large business or ISP networks up to the powerful core routers that forward data at high speed along the optical fiber lines of the Internet backbone.
When multiple routers are used in interconnected networks, the routers can exchange information about destination addresses using a routing protocol. Each router builds up a routing table listing the preferred routes between any two computer systems on the interconnected networks.
A router has two types of network element components, organized onto separate processing "planes": the control plane, in which the router learns which outgoing interface is most appropriate for each destination and builds the routing table, and the forwarding plane (or data plane), which actually forwards each arriving packet out the proper interface according to that table.
A router may have interfaces for different types of physical layer connections, such as copper cables, fiber optic, or wireless transmission. It can also support different network layer transmission standards. Each network interface is used to enable data packets to be forwarded from one transmission system to another. Routers may also be used to connect two or more logical groups of computer devices known as subnets, each with a different network prefix.
Routers may provide connectivity within enterprises, between enterprises and the Internet, or between internet service providers' (ISPs') networks. The largest routers (such as the Cisco CRS-1 or Juniper PTX) interconnect the various ISPs, or may be used in large enterprise networks. Smaller routers usually provide connectivity for typical home and office networks.
Routers of all sizes may be found inside enterprises. The most powerful routers are usually found at ISPs and in academic and research facilities. Large businesses may also need more powerful routers to cope with ever-increasing demands of intranet data traffic. A hierarchical internetworking model for interconnecting routers in large networks is in common use.
Access routers, including small office/home office (SOHO) models, are located at home and customer sites such as branch offices that do not need hierarchical routing of their own. Typically, they are optimized for low cost. Some SOHO routers are capable of running alternative free Linux-based firmware like Tomato, OpenWrt or DD-WRT.
Distribution routers aggregate traffic from multiple access routers. Distribution routers are often responsible for enforcing quality of service across a wide area network (WAN), so they may have considerable memory installed, multiple WAN interface connections, and substantial onboard data processing routines. They may also provide connectivity to groups of file servers or other external networks.
In enterprises, a core router may provide a collapsed backbone interconnecting the distribution tier routers from multiple buildings of a campus, or large enterprise locations. They tend to be optimized for high bandwidth, but lack some of the features of edge routers.
External networks must be carefully considered as part of the overall security strategy of the local network. A router may include a firewall, VPN handling, and other security functions, or these may be handled by separate devices. Routers also commonly perform network address translation (NAT), which restricts connections initiated from external hosts, though this is not recognized as a security feature by all experts. Some experts argue that open source routers are more secure and reliable than closed source routers because mistakes in open-source code can be quickly found and corrected.
Routers are also often distinguished on the basis of the network in which they operate. A router in a local area network (LAN) of a single organisation is called an "interior router". A router that is operated in the Internet backbone is described as an "exterior router", while a router that connects a LAN to the Internet or a wide area network (WAN) is called a "border router" or "gateway router".
Routers intended for ISP and major enterprise connectivity usually exchange routing information using the Border Gateway Protocol (BGP). BGP routers are commonly classified by function into types such as edge routers, subscriber edge routers, inter-provider border routers, and core routers.
The concept of an "Interface computer" was first used by Donald Davies for the NPL network in 1966. The Interface Message Processor (IMP), conceived in 1967 for use in the ARPANET, had fundamentally the same functionality as a router does today. The idea for a router (called "gateways" at the time) initially came about through an international group of computer networking researchers called the International Network Working Group (INWG). Set up in 1972 as an informal group to consider the technical issues involved in connecting different networks, it became a subcommittee of the International Federation for Information Processing later that year. These gateway devices were different from most previous packet switching schemes in two ways. First, they connected dissimilar kinds of networks, such as serial lines and local area networks. Second, they were connectionless devices, which had no role in assuring that traffic was delivered reliably, leaving that entirely to the hosts.
The idea was explored in more detail, with the intention to produce a prototype system as part of two contemporaneous programs. One was the initial DARPA-initiated program, which created the TCP/IP architecture in use today. The other was a program at Xerox PARC to explore new networking technologies, which produced the PARC Universal Packet system; due to corporate intellectual property concerns it received little attention outside Xerox for years. Some time after early 1974, the first Xerox routers became operational. The first true IP router was developed by Ginny Strazisar at BBN, as part of that DARPA-initiated effort, during 1975–1976. By the end of 1976, three PDP-11-based routers were in service in the experimental prototype Internet.
The first multiprotocol routers were independently created by staff researchers at MIT and Stanford in 1981; the Stanford router was done by William Yeager, and the MIT one by Noel Chiappa; both were also based on PDP-11s. Virtually all networking now uses TCP/IP, but multiprotocol routers are still manufactured. They were important in the early stages of the growth of computer networking when protocols other than TCP/IP were in use. Modern Internet routers that handle both IPv4 and IPv6 are multiprotocol but are simpler devices than routers processing AppleTalk, DECnet, IP and Xerox protocols.
From the mid-1970s and in the 1980s, general-purpose minicomputers served as routers. Modern high-speed routers are network processors or highly specialized computers with extra hardware acceleration added to speed both common routing functions, such as packet forwarding, and specialized functions such as IPsec encryption. There is substantial use of Linux and Unix software-based machines, running open source routing code, for research and other applications. The Cisco IOS operating system was independently designed. Major router operating systems, such as Junos and NX-OS, are extensively modified versions of Unix software.
The main purpose of a router is to connect multiple networks and forward packets destined either for its own networks or other networks. A router is considered a layer-3 device because its primary forwarding decision is based on the information in the layer-3 IP packet, specifically the destination IP address. When a router receives a packet, it searches its routing table to find the best match between the destination IP address of the packet and one of the addresses in the routing table. Once a match is found, the packet is encapsulated in the layer-2 data link frame for the outgoing interface indicated in the table entry. A router typically does not look into the packet payload, but only at the layer-3 addresses to make a forwarding decision, plus optionally other information in the header for hints on, for example, quality of service (QoS). For pure IP forwarding, a router is designed to minimize the state information associated with individual packets. Once a packet is forwarded, the router does not retain any historical information about the packet.
The routing table itself can contain information derived from a variety of sources, such as default or static routes that are configured manually, or dynamic routing protocols through which the router learns routes from other routers. A default route is one that is used to route all traffic whose destination does not otherwise appear in the routing table; this is common, even necessary, in small networks such as a home or small business, where the default route simply sends all non-local traffic to the Internet service provider. The default route can be manually configured (as a static route), learned by dynamic routing protocols, or obtained by DHCP.
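The lookup described above, including the fallback to a default route, can be sketched in a few lines of Python. The table entries and interface names here are hypothetical, and real routers use specialized data structures (tries, TCAM) rather than a linear scan:

```python
# Minimal sketch of longest-prefix-match routing-table lookup (IPv4 only).
import ipaddress

routing_table = [
    # (destination network, outgoing interface or next hop) -- hypothetical
    (ipaddress.ip_network("10.0.0.0/8"), "eth1"),
    (ipaddress.ip_network("10.1.0.0/16"), "eth2"),
    (ipaddress.ip_network("0.0.0.0/0"), "isp-uplink"),  # default route
]

def lookup(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    matches = [(net, via) for net, via in routing_table if addr in net]
    # The most specific (longest) matching prefix wins.
    best_net, best_via = max(matches, key=lambda m: m[0].prefixlen)
    return best_via

print(lookup("10.1.2.3"))   # eth2 (the /16 is more specific than the /8)
print(lookup("192.0.2.7"))  # isp-uplink (only the default route matches)
```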
A router can run more than one routing protocol at a time, particularly if it serves as an autonomous system border router between parts of a network that run different routing protocols; if it does so, then redistribution may be used (usually selectively) to share information between the different protocols running on the same router.
Besides making a decision as to which interface a packet is forwarded to, which is handled primarily via the routing table, a router also has to manage congestion when packets arrive at a rate higher than the router can process. Three policies commonly used in the Internet are tail drop, random early detection (RED), and weighted random early detection (WRED). Tail drop is the simplest and most easily implemented: the router simply drops new incoming packets once the length of the queue exceeds the size of the buffers in the router. RED probabilistically drops datagrams early once the queue exceeds a pre-configured portion of the buffer, with the drop probability rising until a pre-determined maximum, beyond which it becomes tail drop. WRED applies a weight to the average queue size and acts when traffic is about to exceed the pre-configured size, so that short bursts will not trigger random drops.
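As a rough sketch of the RED policy just described (the threshold values are illustrative, and real implementations compute an exponentially weighted moving average of the queue size rather than using it raw):

```python
# Simplified RED drop decision with a linear drop-probability ramp.
import random

MIN_TH, MAX_TH = 20, 80  # queue-length thresholds, in packets (illustrative)
MAX_P = 0.1              # drop probability reached at MAX_TH

def red_should_drop(avg_queue_len: float) -> bool:
    if avg_queue_len < MIN_TH:
        return False     # queue is short: never drop early
    if avg_queue_len >= MAX_TH:
        return True      # past the maximum threshold: behaves like tail drop
    # Between the thresholds, drop probability rises linearly toward MAX_P.
    p = MAX_P * (avg_queue_len - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p
```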
Another function a router performs is to decide which packet should be processed first when multiple queues exist. This is managed through QoS, which is critical when Voice over IP is deployed, so as not to introduce excessive latency.
Yet another function a router performs is called policy-based routing where special rules are constructed to override the rules derived from the routing table when a packet forwarding decision is made.
Router functions may be performed through the same internal paths that the packets travel inside the router. Some of the functions may be performed through an application-specific integrated circuit (ASIC) to avoid overhead of scheduling CPU time to process the packets. Others may have to be performed through the CPU as these packets need special attention that cannot be handled by an ASIC. | https://en.wikipedia.org/wiki?curid=25748 |
Routing
Routing is the process of selecting a path for traffic in a network or between or across multiple networks. Broadly, routing is performed in many types of networks, including circuit-switched networks, such as the public switched telephone network (PSTN), and computer networks, such as the Internet.
In packet switching networks, routing is the higher-level decision making that directs network packets from their source toward their destination through intermediate network nodes by specific packet forwarding mechanisms. Packet forwarding is the transit of network packets from one network interface to another. Intermediate nodes are typically network hardware devices such as routers, gateways, firewalls, or switches. General-purpose computers also forward packets and perform routing, although they have no specially optimized hardware for the task.
The routing process usually directs forwarding on the basis of routing tables. Routing tables maintain a record of the routes to various network destinations. Routing tables may be specified by an administrator, learned by observing network traffic or built with the assistance of routing protocols.
Routing, in a narrower sense of the term, often refers to IP routing and is contrasted with bridging. IP routing assumes that network addresses are structured and that similar addresses imply proximity within the network. Structured addresses allow a single routing table entry to represent the route to a group of devices. In large networks, structured addressing (routing, in the narrow sense) outperforms unstructured addressing (bridging). Routing has become the dominant form of addressing on the Internet. Bridging is still widely used within local area networks.
Routing schemes differ in how they deliver messages: unicast delivers a message to a single specific node; broadcast delivers a message to all nodes in the network; multicast delivers a message to a group of nodes that have expressed interest in receiving it; and anycast delivers a message to any one out of a group of nodes, typically the one nearest to the source.
Unicast is the dominant form of message delivery on the Internet. This article focuses on unicast routing algorithms.
With static routing, small networks may use manually configured routing tables. Larger networks have complex topologies that can change rapidly, making the manual construction of routing tables unfeasible. Nevertheless, most of the public switched telephone network (PSTN) uses pre-computed routing tables, with fallback routes if the most direct route becomes blocked (see routing in the PSTN).
Dynamic routing attempts to solve this problem by constructing routing tables automatically, based on information carried by routing protocols, allowing the network to act nearly autonomously in avoiding network failures and blockages. Dynamic routing dominates the Internet. Examples of dynamic-routing protocols and algorithms include Routing Information Protocol (RIP), Open Shortest Path First (OSPF) and Enhanced Interior Gateway Routing Protocol (EIGRP).
Distance vector algorithms use the Bellman–Ford algorithm. This approach assigns a "cost" number to each of the links between each node in the network. Nodes send information from point A to point B via the path that results in the lowest "total cost" (i.e. the sum of the costs of the links between the nodes used).
When a node first starts, it only knows of its immediate neighbors and the direct cost involved in reaching them. (This information — the list of destinations, the total cost to each, and the "next hop" to send data to get there — makes up the routing table, or "distance table".) Each node, on a regular basis, sends to each neighbor node its own current assessment of the total cost to get to all the destinations it knows of. The neighboring nodes examine this information and compare it to what they already know; anything that represents an improvement on what they already have, they insert in their own table. Over time, all the nodes in the network discover the best next hop and total cost for all destinations.
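One update step of this exchange can be sketched in Python. The table layout (each destination mapped to a total cost and next hop) is an illustrative assumption, not any specific protocol's wire format:

```python
# One distance-vector (Bellman-Ford style) update: merge a neighbor's
# advertised table into our own, keeping whatever improves our routes.
def dv_update(my_table, neighbor, neighbor_table, link_cost):
    """Tables map destination -> (total_cost, next_hop)."""
    changed = False
    for dest, (cost_via_neighbor, _) in neighbor_table.items():
        candidate = link_cost + cost_via_neighbor
        if dest not in my_table or candidate < my_table[dest][0]:
            my_table[dest] = (candidate, neighbor)  # found a cheaper route
            changed = True
    return changed  # if True, advertise our updated table to our neighbors
```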
When a network node goes down, any nodes that used it as their next hop discard the entry and convey the updated routing information to all adjacent nodes, which in turn repeat the process. Eventually, all the nodes in the network receive the updates and discover new paths to all the destinations that do not involve the down node.
When applying link-state algorithms, a graphical map of the network is the fundamental data used for each node. To produce its map, each node floods the entire network with information about the other nodes it can connect to. Each node then independently assembles this information into a map. Using this map, each router independently determines the least-cost path from itself to every other node using a standard shortest paths algorithm such as Dijkstra's algorithm. The result is a tree graph rooted at the current node, such that the path through the tree from the root to any other node is the least-cost path to that node. This tree then serves to construct the routing table, which specifies the best next hop to get from the current node to any other node.
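The least-cost computation can be sketched as follows; the graph representation (each node mapped to its neighbors' link costs) is an assumption for illustration:

```python
# Dijkstra's algorithm over the flooded network map, yielding, for each
# destination, the least cost and the first hop on the least-cost path.
import heapq

def dijkstra(graph, source):
    """graph: node -> {neighbor: link_cost}. Returns dest -> (cost, first_hop)."""
    table = {source: (0, None)}
    frontier = [(0, source, None)]  # (cost so far, node, first hop taken)
    while frontier:
        cost, node, first_hop = heapq.heappop(frontier)
        if cost > table[node][0]:
            continue  # stale queue entry; a cheaper path was already found
        for neighbor, link_cost in graph[node].items():
            new_cost = cost + link_cost
            hop = neighbor if first_hop is None else first_hop
            if neighbor not in table or new_cost < table[neighbor][0]:
                table[neighbor] = (new_cost, hop)
                heapq.heappush(frontier, (new_cost, neighbor, hop))
    return table
```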
A link-state routing algorithm optimized for mobile ad hoc networks is the optimized Link State Routing Protocol (OLSR). OLSR is proactive; it uses Hello and Topology Control (TC) messages to discover and disseminate link-state information through the mobile ad hoc network. Using Hello messages, each node discovers 2-hop neighbor information and elects a set of "multipoint relays" (MPRs). MPRs distinguish OLSR from other link-state routing protocols.
Distance vector and link-state routing are both intra-domain routing protocols. They are used inside an autonomous system, but not between autonomous systems. Both of these routing protocols become intractable in large networks and cannot be used in inter-domain routing. Distance vector routing is subject to instability if there are more than a few hops in the domain. Link state routing needs significant resources to calculate routing tables. It also creates heavy traffic due to flooding.
Path-vector routing is used for inter-domain routing. It is similar to distance vector routing. Path-vector routing assumes that one node (there can be many) in each autonomous system acts on behalf of the entire autonomous system. This node is called the "speaker node." The speaker node creates a routing table and advertises it to neighboring speaker nodes in neighboring autonomous systems. The idea is the same as distance vector routing except that only speaker nodes in each autonomous system can communicate with each other. The speaker node advertises the path, not the metric, of the nodes in its autonomous system or other autonomous systems.
The path-vector routing algorithm is similar to the distance vector algorithm in the sense that each border router advertises the destinations it can reach to its neighboring routers. However, instead of advertising networks in terms of a destination and the distance to that destination, networks are advertised as destination addresses and path descriptions to reach those destinations. The path, expressed in terms of the domains (or confederations) traversed so far, is carried in a special path attribute that records the sequence of routing domains through which the reachability information has passed. A route is defined as a pairing between a destination and the attributes of the path to that destination, thus the name path-vector routing: the routers receive a vector that contains paths to a set of destinations.
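A minimal Python sketch of the distinguishing mechanism, loop prevention via the recorded path, assuming AS paths are simple lists and using shortest-path-wins as a stand-in for the many policies a real BGP speaker applies first:

```python
MY_AS = 64512  # hypothetical AS number for this speaker node

def on_advertisement(routes, dest, as_path):
    """as_path: ASs the advertisement traversed, nearest neighbor first."""
    if MY_AS in as_path:
        return  # the path already passes through us: a loop, so discard it
    if dest not in routes or len(as_path) < len(routes[dest]):
        routes[dest] = as_path  # prefer the shorter AS path (simplified)

def advertise(routes, dest):
    # When passing a route on, prepend our own AS to the recorded path.
    return [MY_AS] + routes[dest]
```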
Path selection involves applying a routing metric to multiple routes to select (or predict) the best route. Most routing algorithms use only one network path at a time. Multipath routing and specifically equal-cost multi-path routing techniques enable the use of multiple alternative paths.
In computer networking, the metric is computed by a routing algorithm, and can cover information such as bandwidth, network delay, hop count, path cost, load, maximum transmission unit, reliability, and communication cost. The routing table stores only the best possible routes, while link-state or topological databases may store all other information as well.
In case of overlapping or equal routes, algorithms consider the following elements to decide which routes to install into the routing table (sorted by priority): prefix length, where a longer subnet mask is preferred, regardless of how the route was learned; metric, where a lower cost is preferred, valid only among routes learned via the same routing protocol; and administrative distance, where a route learned from a more reliable routing protocol is preferred, valid only between routes from different protocols.
Because a routing metric is specific to a given routing protocol, multi-protocol routers must use some external heuristic to select between routes learned from different routing protocols. Cisco routers, for example, attribute a value known as the administrative distance to each route, where smaller administrative distances indicate routes learned from a supposedly more reliable protocol.
A local network administrator, in special cases, can set up host-specific routes to a particular device that provides more control over network usage, permits testing, and better overall security. This is useful for debugging network connections or routing tables.
In some small systems, a single central device decides ahead of time the complete path of every packet.
In some other small systems, whichever edge device injects a packet into the network decides ahead of time the complete path of that particular packet.
In both of these systems, that route-planning device needs to know a lot of information about what devices are connected to the network and how they are connected to each other.
Once it has this information, it can use an algorithm such as A* search algorithm to find the best path.
In high-speed systems, there are so many packets transmitted every second that it is infeasible for a single device to calculate the complete path for each and every packet. Early high-speed systems dealt with this by setting up a circuit-switched relay channel once, for the first packet between some source and some destination; later packets between that same source and destination continue to follow the same path without recalculation until the channel is torn down. Later high-speed systems inject packets into the network without any one device ever calculating a complete path for a packet; routing decisions are instead made hop by hop by multiple agents.
In large systems, there are so many connections between devices, and those connections change so frequently, that it is infeasible for any one device to even know how all the devices are connected to each other, much less calculate a complete path through them.
Such systems generally use next-hop routing.
Most systems use a deterministic dynamic routing algorithm:
When a device chooses a path to a particular final destination, that device always chooses the same path to that destination until it receives information that makes it think some other path is better.
A few routing algorithms do not use a deterministic algorithm to find the "best" link for a packet to get from its original source to its final destination.
Instead, to avoid congestion in switched systems or network hot spots in packet systems, a few algorithms use a randomized algorithm—Valiant's paradigm—that routes a path to a randomly picked intermediate destination, and from there to its true final destination.
In many early telephone switches, a randomizer was often used to select the start of a path through a multistage switching fabric.
Depending on the application for which path selection is performed, different metrics can be used. For example, for web requests one can use minimum-latency paths to minimize web page load time, while for bulk data transfers one can choose the least-utilized path to balance load across the network and increase throughput. A popular path selection objective is to reduce the average completion time of traffic flows and the total network bandwidth consumption, which basically leads to better use of network capacity. Recently, a path selection metric was proposed that computes the total number of bytes scheduled on the edges per path as its selection criterion. An empirical analysis of several path selection metrics, including this new proposal, has been made available.
In some networks, routing is complicated by the fact that no single entity is responsible for selecting paths; instead, multiple entities are involved in selecting paths or even parts of a single path. Complications or inefficiency can result if these entities choose paths to optimize their own objectives, which may conflict with the objectives of other participants.
A classic example involves traffic in a road system, in which each driver picks a path that minimizes their travel time. With such routing, the equilibrium routes can be longer than optimal for all drivers. In particular, Braess' paradox shows that adding a new road can "lengthen" travel times for all drivers.
In another model, for example, used for routing automated guided vehicles (AGVs) on a terminal, reservations are made for each vehicle to prevent simultaneous use of the same part of an infrastructure. This approach is also referred to as context-aware routing.
The Internet is partitioned into autonomous systems (ASs) such as internet service providers (ISPs), each of which controls routes involving its network, at multiple levels. First, AS-level paths are selected via the BGP protocol, which produces a sequence of ASs through which packets flow. Each AS may have multiple paths, offered by neighboring ASs, from which to choose. Its decision often involves business relationships with these neighboring ASs, which may be unrelated to path quality or latency. Second, once an AS-level path has been selected, there are often multiple corresponding router-level paths, in part because two ISPs may be connected in multiple locations. In choosing the single router-level path, it is common practice for each ISP to employ hot-potato routing: sending traffic along the path that minimizes the distance through the ISP's own network—even if that path lengthens the total distance to the destination.
Consider two ISPs, "A" and "B". Each has a presence in New York, connected by a fast link with latency 5 ms—and each has a presence in London connected by a 5 ms link. Suppose both ISPs have trans-Atlantic links that connect their two networks, but "A"'s link has latency 100 ms and B's has latency 120 ms. When routing a message from a source in "A" 's London network to a destination in "B" 's New York network, "A" may choose to immediately send the message to "B" in London. This saves "A" the work of sending it along an expensive trans-Atlantic link, but causes the message to experience latency 125 ms when the other route would have been 20 ms faster.
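The two candidate paths in this example can be tallied directly; the short Python sketch below simply restates the arithmetic.

```python
local_hop = 5       # ms, intra-city link between the two ISPs
link_a = 100        # ms, ISP A's trans-Atlantic link
link_b = 120        # ms, ISP B's trans-Atlantic link

hot_potato = local_hop + link_b    # hand off in London: 125 ms total
cold_potato = link_a + local_hop   # carry on A's own link: 105 ms total
print(hot_potato - cold_potato)    # 20 ms penalty from hot-potato routing
```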
A 2003 measurement study of Internet routes found that, between pairs of neighboring ISPs, more than 30% of paths have inflated latency due to hot-potato routing, with 5% of paths being delayed by at least 12 ms. Inflation due to AS-level path selection, while substantial, was attributed primarily to BGP's lack of a mechanism to directly optimize for latency, rather than to selfish routing policies. It was also suggested that, were an appropriate mechanism in place, ISPs would be willing to cooperate to reduce latency rather than use hot-potato routing.
Such a mechanism was later published by the same authors, first for the case of two ISPs and then for the global case.
As the Internet and IP networks become mission critical business tools, there has been increased interest in techniques and methods to monitor the routing posture of networks. Incorrect routing or routing issues cause undesirable performance degradation, flapping and/or downtime. Monitoring routing in a network is achieved using route analytics tools and techniques.
In networks where a logically centralized control is available over the forwarding state, for example, using Software-defined networking, routing techniques can be used that aim to optimize global and network-wide performance metrics. This approach has been used by large internet companies that operate many data centers in different geographical locations attached using private optical links, examples of which include Microsoft's Global WAN, Facebook's Express Backbone, and Google's B4. Global performance metrics to optimize include maximizing network utilization, minimizing traffic flow completion times, and maximizing the traffic delivered prior to specific deadlines. Minimizing flow completion times over private WANs, in particular, has not received much attention from the research community. However, with the increasing number of businesses that operate globally distributed data centers connected using private inter-data center networks, increasing research effort in this area is likely. A recent work on reducing the completion times of flows over private WANs models routing as a graph optimization problem by pushing all queuing to the end-points; its authors also propose a heuristic that solves the problem efficiently while sacrificing negligible performance.
Resistor
A resistor is a passive two-terminal electrical component that implements electrical resistance as a circuit element. In electronic circuits, resistors are used to reduce current flow, adjust signal levels, divide voltages, bias active elements, and terminate transmission lines, among other uses. High-power resistors, which can dissipate many watts of electrical power as heat, may be used as part of motor controls, in power distribution systems, or as test loads for generators.
Fixed resistors have resistances that only change slightly with temperature, time or operating voltage. Variable resistors can be used to adjust circuit elements (such as a volume control or a lamp dimmer), or as sensing devices for heat, light, humidity, force, or chemical activity.
Resistors are common elements of electrical networks and electronic circuits and are ubiquitous in electronic equipment. Practical resistors as discrete components can be composed of various compounds and forms. Resistors are also implemented within integrated circuits.
The electrical function of a resistor is specified by its resistance: common commercial resistors are manufactured over a range of more than nine orders of magnitude. The nominal value of the resistance falls within the manufacturing tolerance, indicated on the component.
Two typical schematic diagram symbols are as follows:
The notation to state a resistor's value in a circuit diagram varies.
One common scheme is the RKM code following IEC 60062. It avoids using a decimal separator and replaces the decimal separator with a letter loosely associated with SI prefixes corresponding with the part's resistance. For example, "8K2" as part marking code, in a circuit diagram or in a bill of materials (BOM) indicates a resistor value of 8.2 kΩ. Additional zeros imply a tighter tolerance, for example "15M0" for three significant digits. When the value can be expressed without the need for a prefix (that is, multiplicator 1), an "R" is used instead of the decimal separator. For example, "1R2" indicates 1.2 Ω, and "18R" indicates 18 Ω.
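A minimal RKM decoder in Python, handling only the R/K/M/G letters and the patterns discussed above (the full IEC 60062 code also defines letters for milliohms and teraohms as well as tolerance suffixes, which this sketch omits):

```python
import re

# Multipliers for the common RKM letters used in IEC 60062 part markings.
RKM = {"R": 1, "K": 1e3, "M": 1e6, "G": 1e9}

def parse_rkm(code):
    """Decode an RKM resistance code such as '8K2', '1R2' or '18R' to ohms."""
    m = re.fullmatch(r"(\d*)([RKMG])(\d*)", code.upper())
    if not m:
        raise ValueError(f"not an RKM code: {code!r}")
    whole, letter, frac = m.groups()
    value = float(f"{whole or 0}.{frac or 0}")   # letter marks decimal point
    return value * RKM[letter]

print(parse_rkm("8K2"))   # 8200.0
print(parse_rkm("15M0"))  # 15000000.0
print(parse_rkm("1R2"))   # 1.2
print(parse_rkm("18R"))   # 18.0
```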
The behaviour of an ideal resistor is dictated by the relationship specified by Ohm's law: V = I · R.
Ohm's law states that the voltage (V) across a resistor is proportional to the current (I), where the constant of proportionality is the resistance (R). For example, if a 300 ohm resistor is attached across the terminals of a 12 volt battery, then a current of 12 / 300 = 0.04 amperes flows through that resistor.
Practical resistors also have some inductance and capacitance which affect the relation between voltage and current in alternating current circuits.
The ohm (symbol: Ω) is the SI unit of electrical resistance, named after Georg Simon Ohm. An ohm is equivalent to a volt per ampere. Since resistors are specified and manufactured over a very large range of values, the derived units of milliohm (1 mΩ = 10⁻³ Ω), kilohm (1 kΩ = 10³ Ω), and megohm (1 MΩ = 10⁶ Ω) are also in common usage.
The total resistance of resistors connected in series is the sum of their individual resistance values.
The total resistance of resistors connected in parallel is the reciprocal of the sum of the reciprocals of the individual resistors.
For example, a 10 ohm resistor connected in parallel with a 5 ohm resistor and a 15 ohm resistor produces 1 / (1/10 + 1/5 + 1/15) = 30/11 ≈ 2.727 ohms of resistance.
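The two rules translate directly into code; this Python sketch reproduces the example above.

```python
def series(*resistors):
    """Total resistance of resistors in series: the simple sum."""
    return sum(resistors)

def parallel(*resistors):
    """Total resistance in parallel: reciprocal of the sum of reciprocals."""
    return 1 / sum(1 / r for r in resistors)

print(series(10, 5, 15))    # 30
print(parallel(10, 5, 15))  # 2.7272... ohms, as in the example above
```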
A resistor network that is a combination of parallel and series connections can be broken up into smaller parts that are either one or the other. Some complex networks of resistors cannot be resolved in this manner, requiring more sophisticated circuit analysis. Generally, the Y-Δ transform, or matrix methods can be used to solve such problems.
At any instant, the power "P" (watts) consumed by a resistor of resistance "R" (ohms) is calculated as:
P = I · V = I² · R = V² / R
where "V" (volts) is the voltage across the resistor and "I" (amps) is the current flowing through it. Using Ohm's law, the two other forms can be derived. This power is converted into heat which must be dissipated by the resistor's package before its temperature rises excessively.
Resistors are rated according to their maximum power dissipation. Discrete resistors in solid-state electronic systems are typically rated as 1/10, 1/8, or 1/4 watt. They usually absorb much less than a watt of electrical power and require little attention to their power rating.
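A small Python helper makes the relationship between the three forms of the power formula and a part's rating concrete; the 5 V, 10 kΩ example values are illustrative, not from the text.

```python
def resistor_power(v=None, i=None, r=None):
    """Power dissipated, from any two of V, I, R (P = VI = I^2 R = V^2 / R)."""
    if v is not None and i is not None:
        return v * i
    if i is not None and r is not None:
        return i ** 2 * r
    if v is not None and r is not None:
        return v ** 2 / r
    raise ValueError("need two of v, i, r")

# A 10 kohm pull-up across 5 V dissipates only 2.5 mW,
# far below even a 1/10 W rating (values are illustrative).
print(resistor_power(v=5, r=10_000))  # 0.0025 W
```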
Resistors required to dissipate substantial amounts of power, particularly used in power supplies, power conversion circuits, and power amplifiers, are generally referred to as "power resistors"; this designation is loosely applied to resistors with power ratings of 1 watt or greater. Power resistors are physically larger and may not use the preferred values, color codes, and external packages described below.
If the average power dissipated by a resistor is more than its power rating, damage to the resistor may occur, permanently altering its resistance; this is distinct from the reversible change in resistance due to its temperature coefficient when it warms. Excessive power dissipation may raise the temperature of the resistor to a point where it can burn the circuit board or adjacent components, or even cause a fire. There are flameproof resistors that fail (open circuit) before they overheat dangerously.
Because poor air circulation, high altitude, or high operating temperatures may be encountered in service, resistors may be specified with a higher rated dissipation than they are expected to experience.
All resistors have a maximum voltage rating; this may limit the power dissipation for higher resistance values.
Practical resistors have a series inductance and a small parallel capacitance; these specifications can be important in high-frequency applications. In a low-noise amplifier or pre-amp, the noise characteristics of a resistor may be an issue.
The temperature coefficient of the resistance may also be of concern in some precision applications.
The unwanted inductance, excess noise, and temperature coefficient are mainly dependent on the technology used in manufacturing the resistor. They are not normally specified individually for a particular family of resistors manufactured using a particular technology. A family of discrete resistors is also characterized according to its form factor, that is, the size of the device and the position of its leads (or terminals) which is relevant in the practical manufacturing of circuits using them.
Practical resistors are also specified as having a maximum power rating which must exceed the anticipated power dissipation of that resistor in a particular circuit: this is mainly of concern in power electronics applications.
Resistors with higher power ratings are physically larger and may require heat sinks. In a high-voltage circuit, attention must sometimes be paid to the rated maximum working voltage of the resistor. While there is no minimum working voltage for a given resistor, exceeding the maximum working voltage can cause the resistor to overheat or arc over and be destroyed.
Through-hole components typically have "leads" (pronounced "leedz") leaving the body "axially", that is, on a line parallel with the part's longest axis. Others have leads coming off their body "radially" instead. Other components may be SMT (surface mount technology), while high power resistors may have one of their leads designed into the heat sink.
Carbon composition resistors (CCR) consist of a solid cylindrical resistive element with embedded wire leads or metal end caps to which the lead wires are attached. The body of the resistor is protected with paint or plastic. Early 20th-century carbon composition resistors had uninsulated bodies; the lead wires were wrapped around the ends of the resistance element rod and soldered. The completed resistor was painted for color-coding of its value.
The resistive element is made from a mixture of finely powdered carbon and an insulating material, usually ceramic. A resin holds the mixture together. The resistance is determined by the ratio of the fill material (the powdered ceramic) to the carbon. Higher concentrations of carbon, which is a good conductor, result in lower resistance. Carbon composition resistors were commonly used in the 1960s and earlier, but are not popular for general use now as other types have better specifications, such as tolerance, voltage dependence, and stress. Carbon composition resistors change value when stressed with over-voltages. Moreover, if internal moisture content, from exposure for some length of time to a humid environment, is significant, soldering heat creates a non-reversible change in resistance value. Carbon composition resistors have poor stability with time and were consequently factory sorted to, at best, only 5% tolerance.
These resistors are non-inductive, which provides benefits when used in voltage pulse reduction and surge protection applications.
Carbon composition resistors have higher capability to withstand overload relative to the component's size.
Carbon composition resistors are still available, but relatively expensive. Values ranged from fractions of an ohm to 22 megohms. Due to their high price, these resistors are no longer used in most applications. However, they are used in power supplies and welding controls. They are also in demand for repair of vintage electronic equipment where authenticity is a factor.
A carbon pile resistor is made of a stack of carbon disks compressed between two metal contact plates. Adjusting the clamping pressure changes the resistance between the plates. These resistors are used when an adjustable load is required, for example in testing automotive batteries or radio transmitters. A carbon pile resistor can also be used as a speed control for small motors in household appliances (sewing machines, hand-held mixers) with ratings up to a few hundred watts. A carbon pile resistor can be incorporated in automatic voltage regulators for generators, where the carbon pile controls the field current to maintain relatively constant voltage. The principle is also applied in the carbon microphone.
A carbon film is deposited on an insulating substrate, and a helix is cut in it to create a long, narrow resistive path. Varying shapes, coupled with the resistivity of amorphous carbon (ranging from 500 to 800 μΩ m), can provide a wide range of resistance values. Compared to carbon composition they feature low noise, because of the precise distribution of the pure graphite without binding. Carbon film resistors feature a power rating range of 0.125 W to 5 W at 70 °C. Resistances available range from 1 ohm to 10 megohm. The carbon film resistor has an operating temperature range of −55 °C to 155 °C. It has 200 to 600 volts maximum working voltage range. Special carbon film resistors are used in applications requiring high pulse stability.
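The resistance of such a film follows the familiar R = ρL/A; the Python sketch below uses the mid-range resistivity quoted above together with purely illustrative film dimensions.

```python
# R = rho * L / (w * t) for a rectangular resistive path.
rho = 650e-6       # ohm*m, mid-range of the 500-800 uOhm*m quoted above
length = 0.05      # m, unrolled length of the helical path (illustrative)
width = 0.5e-3     # m, path width (illustrative)
thickness = 10e-6  # m, film thickness (illustrative)

resistance = rho * length / (width * thickness)
print(resistance)  # 6500.0 ohms, i.e. a 6.5 kohm part
```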
Carbon composition resistors can be printed directly onto printed circuit board (PCB) substrates as part of the PCB manufacturing process. Although this technique is more common on hybrid PCB modules, it can also be used on standard fibreglass PCBs. Tolerances are typically quite large, and can be on the order of 30%. A typical application would be non-critical pull-up resistors.
Thick film resistors became popular during the 1970s, and most SMD (surface mount device) resistors today are of this type. The resistive element of thick films is 1000 times thicker than thin films, but the principal difference is how the film is applied to the cylinder (axial resistors) or the surface (SMD resistors).
Thin film resistors are made by sputtering (a method of vacuum deposition) the resistive material onto an insulating substrate. The film is then etched in a similar manner to the old (subtractive) process for making printed circuit boards; that is, the surface is coated with a photo-sensitive material, then covered by a pattern film, irradiated with ultraviolet light, and then the exposed photo-sensitive coating is developed, and underlying thin film is etched away.
Thick film resistors are manufactured using screen and stencil printing processes.
Because the time during which the sputtering is performed can be controlled, the thickness of the thin film can be accurately controlled. The type of material is also usually different, consisting of one or more ceramic (cermet) conductors such as tantalum nitride (TaN), ruthenium oxide (RuO₂), lead oxide (PbO), bismuth ruthenate (Bi₂Ru₂O₇), nickel chromium (NiCr), or bismuth iridate (Bi₂Ir₂O₇).
The resistance of both thin and thick film resistors after manufacture is not highly accurate; they are usually trimmed to an accurate value by abrasive or laser trimming. Thin film resistors are usually specified with tolerances of 1% and 5%, and with temperature coefficients of 5 to 50 ppm/K. They also have much lower noise levels, on the level of 10–100 times less than thick film resistors.
Thick film resistors may use the same conductive ceramics, but they are mixed with sintered (powdered) glass and a carrier liquid so that the composite can be screen-printed. This composite of glass and conductive ceramic (cermet) material is then fused (baked) in an oven at about 850 °C.
Thick film resistors, when first manufactured, had tolerances of 5%, but standard tolerances have improved to 2% or 1% in the last few decades. Temperature coefficients of thick film resistors are high, typically ±200 or ±250 ppm/K; a 40-kelvin (70 °F) temperature change can change the resistance by 1%.
Thin film resistors are usually far more expensive than thick film resistors. For example, SMD thin film resistors, with 0.5% tolerances, and with 25 ppm/K temperature coefficients, when bought in full size reel quantities, are about twice the cost of 1%, 250 ppm/K thick film resistors.
A common type of axial-leaded resistor today is the metal-film resistor. Metal Electrode Leadless Face (MELF) resistors often use the same technology.
Metal film resistors are usually coated with nickel chromium (NiCr), but might be coated with any of the cermet materials listed above for thin film resistors. Unlike thin film resistors, the material may be applied using different techniques than sputtering (though this is one of the techniques). Also, unlike thin-film resistors, the resistance value is determined by cutting a helix through the coating rather than by etching. (This is similar to the way carbon resistors are made.) The result is a reasonable tolerance (0.5%, 1%, or 2%) and a temperature coefficient that is generally between 50 and 100 ppm/K. Metal film resistors possess good noise characteristics and low non-linearity due to a low voltage coefficient. Also beneficial are their tight tolerance, low temperature coefficient and long-term stability.
Metal-oxide film resistors are made of metal oxides which results in a higher operating temperature and greater stability and reliability than metal film. They are used in applications with high endurance demands.
Wirewound resistors are commonly made by winding a metal wire, usually nichrome, around a ceramic, plastic, or fiberglass core. The ends of the wire are soldered or welded to two caps or rings, attached to the ends of the core. The assembly is protected with a layer of paint, molded plastic, or an enamel coating baked at high temperature. These resistors are designed to withstand unusually high temperatures of up to 450 °C. Wire leads in low power wirewound resistors are usually between 0.6 and 0.8 mm in diameter and tinned for ease of soldering. For higher power wirewound resistors, either a ceramic outer case or an aluminum outer case on top of an insulating layer is used – if the outer case is ceramic, such resistors are sometimes described as "cement" resistors, though they do not actually contain any traditional cement. The aluminum-cased types are designed to be attached to a heat sink to dissipate the heat; the rated power is dependent on being used with a suitable heat sink, e.g., a 50 W power rated resistor overheats at a fraction of the power dissipation if not used with a heat sink. Large wirewound resistors may be rated for 1,000 watts or more.
Because wirewound resistors are coils they have more undesirable inductance than other types of resistor, although winding the wire in sections with alternately reversed direction can minimize inductance. Other techniques employ bifilar winding, or a flat thin former (to reduce cross-section area of the coil). For the most demanding circuits, resistors with Ayrton–Perry winding are used.
Applications of wirewound resistors are similar to those of composition resistors with the exception of high-frequency use. The high-frequency response of wirewound resistors is substantially worse than that of a composition resistor.
In 1960 Felix Zandman and Sidney J. Stein presented a development of resistor film of very high stability.
The primary resistance element of a foil resistor is a chromium-nickel alloy foil several micrometers thick. Chromium-nickel alloys are characterized by having a large electrical resistivity (about 58 times that of copper), a small temperature coefficient and high resistance to oxidation. Examples are Chromel A and Nichrome V, whose typical composition is 80% Ni and 20% Cr, with a melting point of 1420 °C. When iron is added, the chromium-nickel alloy becomes more ductile. Nichrome and Chromel C are examples of alloys containing iron; the typical composition of Nichrome is 60% Ni, 12% Cr, 26% Fe and 2% Mn, and that of Chromel C is 64% Ni, 11% Cr and 25% Fe. The melting temperatures of these alloys are 1350 °C and 1390 °C, respectively.
Since their introduction in the 1960s, foil resistors have had the best precision and stability of any resistor available. One of the important parameters of stability is the temperature coefficient of resistance (TCR). The TCR of foil resistors is extremely low, and has been further improved over the years. One range of ultra-precision foil resistors offers a TCR of 0.14 ppm/°C, tolerance ±0.005%, long-term stability (1 year) 25 ppm, (3 years) 50 ppm (further improved 5-fold by hermetic sealing), stability under load (2000 hours) 0.03%, thermal EMF 0.1 μV/°C, noise −42 dB, voltage coefficient 0.1 ppm/V, inductance 0.08 μH, capacitance 0.5 pF.
The thermal stability of this type of resistor also has to do with the opposing effects of the metal's electrical resistance increasing with temperature, and being reduced by thermal expansion leading to an increase in thickness of the foil, whose other dimensions are constrained by a ceramic substrate.
An ammeter shunt is a special type of current-sensing resistor, having four terminals and a value in milliohms or even micro-ohms. Current-measuring instruments, by themselves, can usually accept only limited currents. To measure high currents, the current passes through the shunt across which the voltage drop is measured and interpreted as current. A typical shunt consists of two solid metal blocks, sometimes brass, mounted on an insulating base. Between the blocks, and soldered or brazed to them, are one or more strips of low temperature coefficient of resistance (TCR) manganin alloy. Large bolts threaded into the blocks make the current connections, while much smaller screws provide voltmeter connections. Shunts are rated by full-scale current, and often have a voltage drop of 50 mV at rated current. Such meters are adapted to the shunt full current rating by using an appropriately marked dial face; no change needs to be made to the other parts of the meter.
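Converting a shunt reading to a current is a single application of Ohm's law; the numbers in this Python sketch are illustrative.

```python
# Reading current from a 50 mV shunt (values illustrative).
rated_current = 200.0    # A, full-scale rating of the shunt
rated_drop = 0.050       # V at rated current => R = 0.25 milliohm
shunt_resistance = rated_drop / rated_current

measured_drop = 0.0125   # V measured across the shunt
current = measured_drop / shunt_resistance
print(current)           # 50.0 A
```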
In heavy-duty industrial high-current applications, a grid resistor is a large convection-cooled lattice of stamped metal alloy strips connected in rows between two electrodes. Such industrial grade resistors can be as large as a refrigerator; some designs can handle over 500 amperes of current, with a range of resistances extending lower than 0.04 ohms. They are used in applications such as dynamic braking and load banking for locomotives and trams, neutral grounding for industrial AC distribution, control loads for cranes and heavy equipment, load testing of generators and harmonic filtering for electric substations.
The term "grid resistor" is sometimes used to describe a resistor of any type connected to the control grid of a vacuum tube. This is not a resistor technology; it is an electronic circuit topology.
A resistor may have one or more fixed tapping points so that the resistance can be changed by moving the connecting wires to different terminals. Some wirewound power resistors have a tapping point that can slide along the resistance element, allowing a larger or smaller part of the resistance to be used.
Where continuous adjustment of the resistance value during operation of equipment is required, the sliding resistance tap can be connected to a knob accessible to an operator. Such a device is called a rheostat and has two terminals.
A potentiometer (colloquially, "pot") is a three-terminal resistor with a continuously adjustable tapping point controlled by rotation of a shaft or knob or by a linear slider. The name "potentiometer" comes from its function as an adjustable voltage divider to provide a variable potential at the terminal connected to the tapping point. Volume control in an audio device is a common application of a potentiometer. A typical low power potentiometer is constructed of a flat resistance element of carbon composition, metal film, or conductive plastic, with a springy phosphor bronze wiper contact which moves along the surface. An alternate construction is resistance wire wound on a form, with the wiper sliding axially along the coil. These have lower resolution, since as the wiper moves the resistance changes in steps equal to the resistance of a single turn.
High-resolution multiturn potentiometers are used in precision applications. These have wire-wound resistance elements typically wound on a helical mandrel, with the wiper moving on a helical track as the control is turned, making continuous contact with the wire. Some include a conductive-plastic resistance coating over the wire to improve resolution. These typically offer ten turns of their shafts to cover their full range. They are usually set with dials that include a simple turns counter and a graduated dial, and can typically achieve three digit resolution. Electronic analog computers used them in quantity for setting coefficients, and delayed-sweep oscilloscopes of recent decades included one on their panels.
A resistance decade box or resistor substitution box is a unit containing resistors of many values, with one or more mechanical switches which allow any one of various discrete resistances offered by the box to be dialed in. Usually the resistance is accurate to high precision, ranging from laboratory/calibration grade accuracy of 20 parts per million, to field grade at 1%. Inexpensive boxes with lesser accuracy are also available. All types offer a convenient way of selecting and quickly changing a resistance in laboratory, experimental and development work without needing to attach resistors one by one, or even stock each value. The range of resistance provided, the maximum resolution, and the accuracy characterize the box. For example, one box offers resistances from 0 to 100 megohms, maximum resolution 0.1 ohm, accuracy 0.1%.
There are various devices whose resistance changes with various quantities. The resistance of NTC thermistors exhibits a strong negative temperature coefficient, making them useful for measuring temperatures. Since their resistance can be large until they are allowed to heat up due to the passage of current, they are also commonly used to prevent excessive current surges when equipment is powered on. Similarly, the resistance of a humistor varies with humidity. One sort of photodetector, the photoresistor, has a resistance which varies with illumination.
The strain gauge, invented by Edward E. Simmons and Arthur C. Ruge in 1938, is a type of resistor that changes value with applied strain. A single resistor may be used, or a pair (half bridge), or four resistors connected in a Wheatstone bridge configuration. The strain resistor is bonded with adhesive to an object that is subjected to mechanical strain. With the strain gauge and a filter, amplifier, and analog/digital converter, the strain on an object can be measured.
A related but more recent invention uses a Quantum Tunnelling Composite to sense mechanical stress. It passes a current whose magnitude can vary by a factor of 10¹² in response to changes in applied pressure.
The value of a resistor can be measured with an ohmmeter, which may be one function of a multimeter. Usually, probes on the ends of test leads connect to the resistor. A simple ohmmeter may apply a voltage from a battery across the unknown resistor (with an internal resistor of a known value in series) producing a current which drives a meter movement. The current, in accordance with Ohm's law, is inversely proportional to the sum of the internal resistance and the resistor being tested, resulting in an analog meter scale which is very non-linear, calibrated from infinity to 0 ohms. A digital multimeter, using active electronics, may instead pass a specified current through the test resistance. The voltage generated across the test resistance in that case is linearly proportional to its resistance, which is measured and displayed. In either case the low-resistance ranges of the meter pass much more current through the test leads than do high-resistance ranges, in order for the voltages present to be at reasonable levels (generally below 10 volts) but still measurable.
Measuring low-value resistors, such as fractional-ohm resistors, with acceptable accuracy requires four-terminal connections. One pair of terminals applies a known, calibrated current to the resistor, while the other pair senses the voltage drop across the resistor. Some laboratory quality ohmmeters, especially milliohmmeters, and even some of the better digital multimeters sense using four input terminals for this purpose, which may be used with special test leads. Each of the two so-called Kelvin clips has a pair of jaws insulated from each other. One side of each clip applies the measuring current, while the other connections are only to sense the voltage drop. The resistance is again calculated using Ohm's Law as the measured voltage divided by the applied current.
Resistor characteristics are quantified and reported using various national standards. In the US, MIL-STD-202 contains the relevant test methods to which other standards refer.
There are various standards specifying properties of resistors for use in equipment.
There are other United States military procurement MIL-R- standards.
The primary standard for resistance, the "mercury ohm", was initially defined in 1884 as a column of mercury 106.3 cm long and one square millimetre in cross-section, at 0 °C. Difficulties in precisely measuring the physical constants to replicate this standard resulted in variations of as much as 30 ppm. From 1900 the mercury ohm was replaced with a precision machined plate of manganin. Since 1990 the international resistance standard has been based on the quantized Hall effect discovered by Klaus von Klitzing, for which he won the Nobel Prize in Physics in 1985.
Resistors of extremely high precision are manufactured for calibration and laboratory use. They may have four terminals, using one pair to carry an operating current and the other pair to measure the voltage drop; this eliminates errors caused by voltage drops across the lead resistances, because no charge flows through voltage sensing leads. It is important in small value resistors (100–0.0001 ohm) where lead resistance is significant or even comparable with respect to resistance standard value.
Axial resistors' cases are usually tan, brown, blue, or green (though other colors are occasionally found as well, such as dark red or dark gray), and display 3–6 colored stripes that indicate resistance (and by extension tolerance), and may be extended to indicate the temperature coefficient and reliability class. The first two stripes represent the first two digits of the resistance in ohms, the third represents a multiplier, and the fourth the tolerance (which if absent, denotes ±20%). For five- and six- striped resistors the third is the third digit, the fourth the multiplier and the fifth is the tolerance; a sixth stripe represents the temperature coefficient. The power rating of the resistor is usually not marked and is deduced from the size.
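A decoder for the common four-band case can be written in a few lines of Python; five- and six-band codes add a third digit band and a temperature-coefficient band as described above, and the gold and silver multiplier bands (×0.1 and ×0.01) are omitted from this sketch.

```python
COLORS = ["black", "brown", "red", "orange", "yellow",
          "green", "blue", "violet", "gray", "white"]
TOLERANCE = {"gold": 5.0, "silver": 10.0, None: 20.0}  # absent band = 20%

def decode_4band(band1, band2, multiplier, tolerance=None):
    """Decode a 4-band axial resistor color code to (ohms, tolerance %)."""
    digits = 10 * COLORS.index(band1) + COLORS.index(band2)
    ohms = digits * 10 ** COLORS.index(multiplier)
    return ohms, TOLERANCE[tolerance]

print(decode_4band("yellow", "violet", "red", "gold"))  # (4700, 5.0)
```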
Surface-mount resistors are marked numerically.
Early 20th century resistors, essentially uninsulated, were dipped in paint to cover their entire body for color-coding. A second color of paint was applied to one end of the element, and a color dot (or band) in the middle provided the third digit. The rule was "body, tip, dot", providing two significant digits for value and the decimal multiplier, in that sequence. Default tolerance was ±20%. Closer-tolerance resistors had silver (±10%) or gold-colored (±5%) paint on the other end.
Early resistors were made in more or less arbitrary round numbers; a series might have 100, 125, 150, 200, 300, etc. Resistors as manufactured are subject to a certain percentage tolerance, and it makes sense to manufacture values that correlate with the tolerance, so that the actual value of a resistor overlaps slightly with its neighbors. Wider spacing leaves gaps; narrower spacing increases manufacturing and inventory costs to provide resistors that are more or less interchangeable.
A logical scheme is to produce resistors in a range of values which increase in a geometric progression, so that each value is greater than its predecessor by a fixed multiplier or percentage, chosen to match the tolerance of the range. For example, for a tolerance of ±20% it makes sense to have each resistor about 1.5 times its predecessor, covering a decade in 6 values. In practice the factor used is 1.4678, giving values of 1.47, 2.15, 3.16, 4.64, 6.81, 10 for the 1–10-decade (a decade is a range increasing by a factor of 10; 0.1–1 and 10–100 are other examples); these are rounded in practice to 1.5, 2.2, 3.3, 4.7, 6.8, 10; followed by 15, 22, 33, … and preceded by … 0.47, 0.68, 1. This scheme has been adopted as the E6 series of the IEC 60063 preferred number values. There are also E12, E24, E48, E96 and E192 series for components of progressively finer resolution, with 12, 24, 48, 96, and 192 different values within each decade. The actual values used are in the IEC 60063 lists of preferred numbers.
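The underlying geometric progression is easy to generate; note that the published IEC 60063 tables round a few entries differently from the raw progression (for example E24 lists 2.7 and 8.2 rather than the raw 2.61 and 8.25), so this Python sketch reproduces the scheme rather than the exact normative values.

```python
def e_series(n):
    """Nominal values for one decade of the IEC 60063 E-n series scheme.

    Generated from the geometric progression 10**(k/n); the standard's
    published tables round some entries differently, so this is the
    underlying scheme, not the exact normative list.
    """
    return [round(10 ** (k / n), 2) for k in range(n)]

print(e_series(6))   # [1.0, 1.47, 2.15, 3.16, 4.64, 6.81]
print(e_series(12))  # twelve values per decade, ratio about 1.21
```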
A resistor of 100 ohms ±20% would be expected to have a value between 80 and 120 ohms; its E6 neighbors are 68 (54–82) and 150 (120–180) ohms, a sensible spacing. E6 is used for ±20% components; E12 for ±10%; E24 for ±5%; E48 for ±2%; E96 for ±1%; E192 for ±0.5% or better. Resistors are manufactured in values from a few milliohms to about a gigaohm in IEC 60063 ranges appropriate for their tolerance. Manufacturers may sort resistors into tolerance-classes based on measurement. Accordingly, a selection of 100 ohm resistors with a tolerance of ±10% might not lie just around 100 ohms (but no more than 10% off) as one would expect (a bell-curve), but rather be in two groups, either between 5 and 10% too high or 5 to 10% too low (but not closer to 100 ohms than that), because any resistors the factory had measured as being less than 5% off would have been marked and sold as resistors with only ±5% tolerance or better. When designing a circuit, this may become a consideration. This process of sorting parts based on post-production measurement is known as "binning", and can be applied to other components than resistors (such as speed grades for CPUs).
Earlier power wirewound resistors, such as brown vitreous-enameled types, however, were made with a different system of preferred values, such as some of those mentioned in the first sentence of this section.
Surface mounted resistors of larger sizes (metric 1608 and above) are printed with numerical values in a code related to that used on axial resistors. Standard-tolerance surface-mount technology (SMT) resistors are marked with a three-digit code, in which the first two digits are the first two significant digits of the value and the third digit is the power of ten (the number of zeroes). For example, "472" denotes 47 × 10² Ω = 4.7 kΩ.
Resistances less than 100 Ω are written: 100, 220, 470. The final zero represents ten to the power zero, which is 1. For example, "100" denotes 10 Ω and "220" denotes 22 Ω.
Sometimes these values are marked as 10 or 22 to prevent a mistake.
Resistances less than 10 Ω have "R" to indicate the position of the decimal point (radix point). For example, "4R7" denotes 4.7 Ω and "R300" denotes 0.30 Ω.
Precision resistors are marked with a four-digit code, in which the first three digits are the significant figures and the fourth is the power of ten. For example, "1001" denotes 100 × 10¹ Ω = 1.00 kΩ.
000 and 0000 sometimes appear as values on surface-mount zero-ohm links, since these have (approximately) zero resistance.
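The three- and four-digit schemes, the "R" notation, and the zero-ohm marking can all be handled by one small Python parser (EIA-96 letter codes are a separate scheme and are not covered by this sketch):

```python
def parse_smd(code):
    """Decode 3- or 4-digit SMD resistor markings, including 'R' notation."""
    if "R" in code:
        return float(code.replace("R", "."))   # 'R' marks the decimal point
    if set(code) == {"0"}:                     # '000'/'0000' zero-ohm link
        return 0.0
    digits, exponent = code[:-1], code[-1]     # last digit is the power of 10
    return int(digits) * 10 ** int(exponent)

for c in ("472", "100", "4R7", "1001", "0000"):
    print(c, parse_smd(c))   # 4700, 10, 4.7, 1000, 0.0 ohms
```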
More recent surface-mount resistors are too small, physically, to permit practical markings to be applied.
Format:" [two letters][resistance value (three digit)][tolerance code(numerical – one digit)]
"
Steps to find out the resistance or capacitance values:
If a resistor is coded:
In amplifying faint signals, it is often necessary to minimize electronic noise, particularly in the first stage of amplification. As a dissipative element, even an ideal resistor naturally produces a randomly fluctuating voltage, or noise, across its terminals. This Johnson–Nyquist noise is a fundamental noise source which depends only upon the temperature and resistance of the resistor, and is predicted by the fluctuation–dissipation theorem. Using a larger value of resistance produces a larger voltage noise, whereas a smaller value of resistance generates more current noise, at a given temperature.
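The Johnson–Nyquist noise voltage follows from the fluctuation–dissipation theorem as v_rms = √(4 · k_B · T · R · Δf); the Python sketch below evaluates it for a 1 kΩ resistor at room temperature over a 10 kHz bandwidth (illustrative values).

```python
from math import sqrt

K_B = 1.380649e-23   # Boltzmann constant, J/K

def johnson_noise_vrms(resistance, bandwidth, temperature=300.0):
    """RMS Johnson-Nyquist noise voltage: sqrt(4 * kB * T * R * df)."""
    return sqrt(4 * K_B * temperature * resistance * bandwidth)

# A 1 kohm resistor at 300 K over a 10 kHz bandwidth contributes
# about 0.4 uV RMS of thermal noise.
print(johnson_noise_vrms(1e3, 10e3))   # ~4.07e-07 V
```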
The thermal noise of a practical resistor may also be larger than the theoretical prediction, and that increase is typically frequency-dependent. Excess noise of a practical resistor is observed only when current flows through it. This is specified in units of μV/V/decade, that is, μV of noise per volt applied across the resistor per decade of frequency. The μV/V/decade value is frequently given in dB, so that a resistor with a noise index of 0 dB exhibits 1 μV (rms) of excess noise for each volt across the resistor in each frequency decade. Excess noise is thus an example of 1/"f" noise. Thick-film and carbon composition resistors generate more excess noise than other types at low frequencies. Wire-wound and thin-film resistors are often used for their better noise characteristics. Carbon composition resistors can exhibit a noise index of 0 dB while bulk metal foil resistors may have a noise index of −40 dB, usually making the excess noise of metal foil resistors insignificant. Thin film surface mount resistors typically have lower noise and better thermal stability than thick film surface mount resistors. Excess noise is also size-dependent: in general excess noise is reduced as the physical size of a resistor is increased (or multiple resistors are used in parallel), as the independently fluctuating resistances of smaller components tend to average out.
While not an example of "noise" per se, a resistor may act as a thermocouple, producing a small DC voltage differential across it due to the thermoelectric effect if its ends are at different temperatures. This induced DC voltage can degrade the precision of instrumentation amplifiers in particular. Such voltages appear in the junctions of the resistor leads with the circuit board and with the resistor body. Common metal film resistors show such an effect at a magnitude of about 20 µV/°C. Some carbon composition resistors can exhibit thermoelectric offsets as high as 400 µV/°C, whereas specially constructed resistors can reduce this number to 0.05 µV/°C. In applications where the thermoelectric effect may become important, care has to be taken to mount the resistors horizontally to avoid temperature gradients and to mind the air flow over the board.
The failure rate of resistors in a properly designed circuit is low compared to other electronic components such as semiconductors and electrolytic capacitors. Damage to resistors most often occurs due to overheating when the average power delivered to it greatly exceeds its ability to dissipate heat (specified by the resistor's "power rating"). This may be due to a fault external to the circuit, but is frequently caused by the failure of another component (such as a transistor that shorts out) in the circuit connected to the resistor. Operating a resistor too close to its power rating can limit the resistor's lifespan or cause a significant change in its resistance. A safe design generally uses overrated resistors in power applications to avoid this danger.
Low-power thin-film resistors can be damaged by long-term high-voltage stress, even below maximum specified voltage and below maximum power rating. This is often the case for the startup resistors feeding the SMPS integrated circuit.
When overheated, carbon-film resistors may decrease or increase in resistance.
Carbon film and composition resistors can fail (open circuit) if running close to their maximum dissipation. This is also possible but less likely with metal film and wirewound resistors.
There can also be failure of resistors due to mechanical stress and adverse environmental factors including humidity. If not enclosed, wirewound resistors can corrode.
Surface mount resistors have been known to fail due to the ingress of sulfur into the internal makeup of the resistor. This sulfur chemically reacts with the silver layer to produce non-conductive silver sulfide. The resistor's impedance goes to infinity. Sulfur resistant and anti-corrosive resistors are sold into automotive, industrial, and military applications. ASTM B809 is an industry standard that tests a part's susceptibility to sulfur.
An alternative failure mode can be encountered where large value resistors are used (hundreds of kilohms and higher). Resistors are not only specified with a maximum power dissipation, but also with a maximum voltage drop. Exceeding this voltage causes the resistor to degrade slowly, reducing its resistance. The voltage dropped across large value resistors can be exceeded before the power dissipation reaches its limiting value. Since the maximum voltage specified for commonly encountered resistors is a few hundred volts, this is a problem only in applications where these voltages are encountered.
Variable resistors can also degrade in a different manner, typically involving poor contact between the wiper and the body of the resistance element. This may be due to dirt or corrosion and is typically perceived as "crackling" as the contact resistance fluctuates; this is especially noticed as the device is adjusted. This is similar to crackling caused by poor contact in switches, and like switches, potentiometers are to some extent self-cleaning: running the wiper across the resistance may improve the contact. Potentiometers which are seldom adjusted, especially in dirty or harsh environments, are most likely to develop this problem. When self-cleaning of the contact is insufficient, improvement can usually be obtained through the use of contact cleaner (also known as "tuner cleaner") spray. The crackling noise associated with turning the shaft of a dirty potentiometer in an audio circuit (such as the volume control) is greatly accentuated when an undesired DC voltage is present, often indicating the failure of a DC blocking capacitor in the circuit.
Republicanism
Republicanism is a representative form of government organization. It is a political ideology centered on citizenship in a state organized as a republic. Historically, it ranges from the rule of a representative minority or oligarchy to popular sovereignty. It has had different definitions and interpretations which vary significantly based on historical context and methodological approach.
Republicanism may also refer to the non-ideological scientific approach to politics and governance. As the republican thinker and second president of the United States John Adams stated in the introduction to his famous "Defense of the Constitution", the "science of politics is the science of social happiness" and a republic is the form of government arrived at when the science of politics is appropriately applied to the creation of a rationally designed government. Rather than being ideological, this approach focuses on applying a scientific methodology to the problems of governance through the rigorous study and application of past experience and experimentation in governance. This approach is perhaps best exemplified by republican thinkers such as Niccolò Machiavelli (as evident in his "Discourses on Livy"), John Adams, and James Madison.
The word "republic" derives from the Latin noun-phrase "res publica" (thing of the people), which referred to the system of government that emerged in the 6th century BCE following the expulsion of the kings from Rome by Lucius Junius Brutus and Collatinus.
This form of government in the Roman state collapsed in the latter part of the 1st century B.C., giving way to what was a monarchy in form, if not in name. Republics recurred subsequently, with, for example, Renaissance Florence or early modern Britain. The concept of a republic became a powerful force in Britain's North American colonies, where it contributed to the American Revolution. In Europe, it gained enormous influence through the French Revolution and through the First French Republic of 1792–1804.
In Ancient Greece, several philosophers and historians analysed and described elements we now recognize as classical republicanism. Traditionally, the Greek concept of "politeia" was rendered into Latin as res publica. Consequently, political theory until relatively recently often used republic in the general sense of "regime". There is no single written expression or definition from this era that exactly corresponds with a modern understanding of the term "republic", but most of the essential features of the modern definition are present in the works of Plato, Aristotle, and Polybius. These include theories of mixed government and of civic virtue. For example, in "The Republic", Plato places great emphasis on the importance of civic virtue (aiming for the good) together with personal virtue ("just man") on the part of the ideal rulers. Indeed, in Book V, Plato asserts (through Socrates) that until rulers have the nature of philosophers, or philosophers become the rulers, there can be no civic peace or happiness.
A number of Ancient Greek city-states such as Athens and Sparta have been classified as "classical republics", because they featured extensive participation by the citizens in legislation and political decision-making. Aristotle considered Carthage to have been a republic as it had a political system similar to that of some of the Greek cities, notably Sparta, but avoided some of the defects that affected them.
Both Livy, a Roman historian, and Plutarch, who is noted for his biographies and moral essays, described how Rome had developed its legislation, notably the transition from a "kingdom" to a "republic", by following the example of the Greeks. Some of this history, composed more than 500 years after the events, with scant written sources to rely on, may be fictitious reconstruction.
The Greek historian Polybius, writing in the mid-2nd century BCE, emphasized (in Book 6) the role played by the Roman Republic as an institutional form in the dramatic rise of Rome's hegemony over the Mediterranean. In his writing on the constitution of the Roman Republic, Polybius described the system as being a "mixed" form of government. Specifically, Polybius described the Roman system as a mixture of monarchy, aristocracy, and democracy with the Roman Republic constituted in such a manner that it applied the strengths of each system to offset the weaknesses of the others. In his view, the mixed system of the Roman Republic provided the Romans with a much greater level of domestic tranquility than would have been experienced under another form of government. Furthermore, Polybius argued, the comparative level of domestic tranquility the Romans enjoyed allowed them to conquer the Mediterranean. Polybius exerted a great influence on Cicero as he wrote his politico-philosophical works in the 1st century BCE. In one of these works, "De re publica", Cicero linked the Roman concept of "res publica" to the Greek "politeia".
The modern term "republic", despite its derivation, is not synonymous with the Roman "res publica". Among the several meanings of the term "res publica", it is most often translated "republic" where the Latin expression refers to the Roman state, and its form of government, between the era of the Kings and the era of the Emperors. This Roman Republic would, by a modern understanding of the word, still be defined as a true republic, even if not coinciding entirely. Thus, Enlightenment philosophers saw the Roman Republic as an ideal system because it included features like a systematic separation of powers.
Romans still called their state "Res Publica" in the era of the early emperors because, on the surface, the organization of the state had been preserved by the first emperors without significant alteration. Several offices from the Republican era, held by individuals, were combined under the control of a single person. These changes became permanent, and gradually conferred sovereignty on the Emperor.
Cicero's description of the ideal state, in "De re Publica", does not equate to a modern-day "republic"; it is more like enlightened absolutism. His philosophical works were influential when Enlightenment philosophers such as Voltaire developed their political concepts.
In its classical meaning, a republic was any stable well-governed political community. Both Plato and Aristotle identified three forms of government: democracy, aristocracy, and monarchy. First Plato and Aristotle, and then Polybius and Cicero, held that the ideal republic is a mixture of these three forms of government. The writers of the Renaissance embraced this notion.
Cicero expressed reservations concerning the republican form of government. While in his "theoretical" works he defended monarchy, or at least a mixed monarchy/oligarchy, in his own political life, he generally opposed men, like Julius Caesar, Mark Antony, and Octavian, who were trying to realise such ideals. Eventually, that opposition led to his death and Cicero can be seen as a victim of his own Republican ideals.
Tacitus, a contemporary of Plutarch, was not concerned with whether a form of government could be analyzed as a "republic" or a "monarchy". He analyzed how the powers accumulated by the early Julio-Claudian dynasty were all given by a State that was still notionally a republic. Nor was the Roman Republic "forced" to give away these powers: it did so freely and reasonably, certainly in Augustus' case, because of his many services to the state, freeing it from civil wars and disorder.
Tacitus was one of the first to ask whether such powers were given to the head of state because the citizens wanted to give them, or whether they were given for other reasons (for example, because one had a deified ancestor). The latter case led more easily to abuses of power. In Tacitus' opinion, the trend away from a true republic was "irreversible" only when Tiberius established power, shortly after Augustus' death in 14 CE (much later than most historians place the start of the Imperial form of government in Rome). By this time, too many principles defining some powers as "untouchable" had been implemented.
In Europe, republicanism was revived in the late Middle Ages when a number of states, which arose from medieval communes, embraced a republican system of government. These were generally small but wealthy trading states in which the merchant class had risen to prominence. Haakonssen notes that by the Renaissance, Europe was divided, such that those states controlled by a landed elite were monarchies, and those controlled by a commercial elite were republics. The latter included the Italian city-states of Florence, Genoa, and Venice and members of the Hanseatic League. One notable exception was Dithmarschen, a group of largely autonomous villages, who confederated in a peasants' republic. Building upon concepts of medieval feudalism, Renaissance scholars used the ideas of the ancient world to advance their view of an ideal government. Thus the republicanism developed during the Renaissance is known as 'classical republicanism' because it relied on classical models. This terminology was developed by Zera Fink in the 1960s, but some modern scholars, such as Brugger, consider it confuses the "classical republic" with the system of government used in the ancient world. 'Early modern republicanism' has been proposed as an alternative term. It is also sometimes called civic humanism. Beyond simply a non-monarchy, early modern thinkers conceived of an "ideal" republic, in which mixed government was an important element, and the notion that virtue and the common good were central to good government. Republicanism also developed its own distinct view of liberty.
Renaissance authors who spoke highly of republics were rarely critical of monarchies. While Niccolò Machiavelli's "Discourses on Livy" is the period's key work on republics, he also wrote "The Prince", a treatise on how best to run a monarchy that is better remembered and more widely read. The early modern writers did not see the republican model as universally applicable; most thought that it could be successful only in very small and highly urbanized city-states. Jean Bodin in "Six Books of the Commonwealth" (1576) identified monarchy with republic.
Classical writers like Tacitus, and Renaissance writers like Machiavelli tried to avoid an outspoken preference for one government system or another. Enlightenment philosophers, on the other hand, expressed a clear opinion. Thomas More, writing before the Age of Enlightenment, was too outspoken for the reigning king's taste, even though he coded his political preferences in a utopian allegory.
In England a type of republicanism evolved that was not wholly opposed to monarchy; thinkers such as Thomas More and Sir Thomas Smith saw a monarchy, firmly constrained by law, as compatible with republicanism.
Anti-monarchism became more strident in the Dutch Republic during and after the Eighty Years' War, which began in 1568. This anti-monarchism was more propaganda than a political philosophy; most of the anti-monarchist works appeared in the form of widely distributed pamphlets. This evolved into a systematic critique of monarchy, written by men such as the brothers Johan and Peter de la Court. They saw all monarchies as illegitimate tyrannies that were inherently corrupt. These authors were more concerned with preventing the position of Stadholder from evolving into a monarchy than with attacking their former rulers. Dutch republicanism also influenced French Huguenots during the Wars of Religion. In the other states of early modern Europe, republicanism was more moderate.
In the Polish–Lithuanian Commonwealth, republicanism was an influential ideology. After the establishment of the Commonwealth of Two Nations, republicans supported the status quo of having a very weak monarch, and opposed those who thought a stronger monarchy was needed. These mostly Polish republicans, such as Łukasz Górnicki, Andrzej Wolan, and Stanisław Konarski, were well read in classical and Renaissance texts and firmly believed that their state was a republic on the Roman model, and started to call their state the Rzeczpospolita. Atypically, Polish–Lithuanian republicanism was not the ideology of the commercial class, but rather of the landed nobility, which would lose power if the monarchy were expanded. This resulted in an oligarchy of the great landed magnates.
The first of the Enlightenment republics established in Europe during the eighteenth century occurred in the small Mediterranean island of Corsica. Although perhaps an unlikely place to act as a laboratory for such political experiments, Corsica combined a number of factors that made it unique: a tradition of village democracy; varied cultural influences from the Italian city-states, Spanish empire and Kingdom of France which left it open to the ideas of the Italian Renaissance, Spanish humanism and French Enlightenment; and a geo-political position between these three competing powers which led to frequent power vacuums in which new regimes could be set up, testing out the fashionable new ideas of the age.
From the 1720s the island had been experiencing a series of short-lived but ongoing rebellions against its current sovereign, the Italian city-state of Genoa. During the initial period (1729–36) these merely sought to restore the control of the Spanish Empire; when this proved impossible, an independent Kingdom of Corsica (1736–40) was proclaimed, following the Enlightenment ideal of a written constitutional monarchy. But as the perception grew that the monarchy had colluded with the invading power, a more radical group of reformers led by Pasquale Paoli pushed for political overhaul in the form of a constitutional and parliamentary republic inspired by the popular ideas of the Enlightenment.
Its governing philosophy was inspired by the prominent thinkers of the day, notably the French philosophers Montesquieu and Voltaire and the Swiss theorist Jean-Jacques Rousseau. Not only did it include a permanent national parliament with fixed-term legislatures and regular elections, but, more radically for the time, it introduced universal male suffrage; female suffrage may also have existed, which would make it the first constitution in the world to grant women the right to vote. It also extended Enlightened principles to other spheres, including administrative reform, the foundation of a national university at Corte, and the establishment of a popular standing army.
The Corsican Republic lasted for fourteen years, from 1755 to 1769, eventually falling to a combination of Genoese and French forces and being incorporated as a province of the Kingdom of France. But the episode resonated across Europe as an early example of Enlightened constitutional republicanism, with many of the most prominent political commentators of the day recognising it as an experiment in a new type of popular and democratic government. Its influence was particularly notable among the French Enlightenment philosophers: Rousseau's famous work "On the Social Contract" (1762: chapter 10, book II) declared, in its discussion of the conditions necessary for a functional popular sovereignty, that "There is still one European country capable of making its own laws: the island of Corsica. The valour and persistency with which that brave people has regained and defended its liberty well deserves that some wise man should teach it how to preserve what it has won. I have a feeling that some day that little island will astonish Europe"; indeed Rousseau volunteered to do precisely that, offering a draft constitution for Paoli's use. Similarly, Voltaire affirmed in his "Précis du siècle de Louis XV" (1769: chapter LX) that "Bravery may be found in many places, but such bravery only among free peoples". But the influence of the Corsican Republic as an example of a sovereign people fighting for liberty and enshrining this constitutionally in the form of an Enlightened republic was even greater among the Radicals of Great Britain and North America, where it was popularised via "An Account of Corsica" by the Scottish essayist James Boswell. The Corsican Republic went on to influence the American revolutionaries ten years later: the Sons of Liberty, initiators of the American Revolution, would declare Pascal Paoli to be a direct inspiration for their own struggle against despotism; the son of Ebenezer Mackintosh was named Pascal Paoli Mackintosh in his honour, and no fewer than five American counties are named Paoli for the same reason.
Oliver Cromwell set up a republic called the Commonwealth of England (1649–1660), which he ruled after the overthrow of King Charles I. James Harrington was then a leading philosopher of republicanism. John Milton was another important republican thinker at this time, expressing his views in political tracts as well as through poetry and prose. In his epic poem "Paradise Lost", for instance, Milton uses Satan's fall to suggest that unfit monarchs should be brought to justice, and that such issues extend beyond the constraints of one nation. As Christopher N. Warren argues, Milton offers "a language to critique imperialism, to question the legitimacy of dictators, to defend free international discourse, to fight unjust property relations, and to forge new political bonds across national lines." This form of international Miltonic republicanism influenced later thinkers, including the 19th-century radicals Karl Marx and Friedrich Engels, according to Warren and other historians.
The collapse of the Commonwealth of England in 1660 and the restoration of the monarchy under Charles II discredited republicanism among England's ruling circles. Nevertheless, they welcomed the liberalism, and emphasis on rights, of John Locke, which played a major role in the Glorious Revolution of 1688. Even so, republicanism flourished in the "country" party of the early 18th century (commonwealthmen), which denounced the corruption of the "court" party, producing a political theory that heavily influenced the American colonists. In general, the English ruling classes of the 18th century vehemently opposed republicanism, typified by the attacks on John Wilkes, and especially on the American Revolution and the French Revolution.
French and Swiss Enlightenment thinkers, such as Baron Charles de Montesquieu and later Jean-Jacques Rousseau, expanded upon and altered the ideas of what an ideal republic should be: some of their new ideas were scarcely traceable to antiquity or the Renaissance thinkers. Concepts they contributed, or heavily elaborated, were social contract, positive law, and mixed government. They also borrowed from, and distinguished republicanism from, the ideas of liberalism that were developing at the same time.
Liberalism and republicanism were frequently conflated during this period because they both opposed absolute monarchy. Modern scholars see them as two distinct streams that both contributed to the democratic ideals of the modern world. An important distinction is that, while republicanism stressed the importance of civic virtue and the common good, liberalism was based on economics and individualism. The distinction is clearest in the matter of private property, which, according to some, can be maintained only under the protection of established positive law.
Jules Ferry, Prime Minister of France from 1880 to 1885, followed both these schools of thought. He eventually enacted the Ferry Laws, with which he intended to overturn the Falloux Laws by embracing the anti-clerical thinking of the philosophes. These laws ended the Catholic Church's involvement in many government institutions in late 19th-century France, including schools.
In recent years a debate has developed over the role of republicanism in the American Revolution and in the British radicalism of the 18th century. For many decades the consensus was that liberalism, especially that of John Locke, was paramount and that republicanism had a distinctly secondary role.
The new interpretations were pioneered by J.G.A. Pocock, who argued in "The Machiavellian Moment" (1975) that, at least in the early 18th century, republican ideas were just as important as liberal ones. Pocock's view is now widely accepted. Bernard Bailyn and Gordon Wood pioneered the argument that the American founding fathers were more influenced by republicanism than they were by liberalism. Cornell University professor Isaac Kramnick, on the other hand, argues that Americans have always been highly individualistic and therefore Lockean. Joyce Appleby has argued similarly for the Lockean influence on America.
In the decades before the American Revolution (1776), the intellectual and political leaders of the colonies studied history intently, looking for models of good government. They especially followed the development of republican ideas in England. Pocock explained the intellectual sources in America:
The Whig canon and the neo-Harringtonians, John Milton, James Harrington and Sidney, Trenchard, Gordon and Bolingbroke, together with the Greek, Roman, and Renaissance masters of the tradition as far as Montesquieu, formed the authoritative literature of this culture; and its values and concepts were those with which we have grown familiar: a civic and patriot ideal in which the personality was founded in property, perfected in citizenship but perpetually threatened by corruption; government figuring paradoxically as the principal source of corruption and operating through such means as patronage, faction, standing armies (opposed to the ideal of the militia), established churches (opposed to the Puritan and deist modes of American religion) and the promotion of a monied interest – though the formulation of this last concept was somewhat hindered by the keen desire for readily available paper credit common in colonies of settlement. A neoclassical politics provided both the ethos of the elites and the rhetoric of the upwardly mobile, and accounts for the singular cultural and intellectual homogeneity of the Founding Fathers and their generation.
The commitment of most Americans to these republican values made the American Revolution inevitable. Britain was increasingly seen as corrupt and hostile to republicanism, and as a threat to the established liberties the Americans enjoyed.
Leopold von Ranke in 1848 claimed that American republicanism played a crucial role in the development of European liberalism:
By abandoning English constitutionalism and creating a new republic based on the rights of the individual, the North Americans introduced a new force in the world. Ideas spread most rapidly when they have found adequate concrete expression. Thus republicanism entered our Romanic/Germanic world... Up to this point, the conviction had prevailed in Europe that monarchy best served the interests of the nation. Now the idea spread that the nation should govern itself. But only after a state had actually been formed on the basis of the theory of representation did the full significance of this idea become clear. All later revolutionary movements have this same goal... This was the complete reversal of a principle. Until then, a king who ruled by the grace of God had been the center around which everything turned. Now the idea emerged that power should come from below... These two principles are like two opposite poles, and it is the conflict between them that determines the course of the modern world. In Europe the conflict between them had not yet taken on concrete form; with the French Revolution it did.
Republicanism, especially that of Rousseau, played a central role in the French Revolution and foreshadowed modern republicanism. The revolutionaries, after overthrowing the French monarchy in the 1790s, began by setting up a republic; Napoleon converted it into an Empire with a new aristocracy. In the 1830s Belgium adopted some of the innovations of the progressive political philosophers of the Enlightenment.
"Républicanisme" is a French version of modern republicanism. It is a form of social contract, deduced from Jean-Jacques Rousseau's idea of a general will. Ideally, each citizen is engaged in a direct relationship with the state, removing the need for identity politics based on local, religious, or racial identification.
"Républicanisme", in theory, makes anti-discrimination laws unnecessary, but some critics argue that colour-blind laws serve to perpetuate discrimination.
Inspired by the American and French Revolutions, the Society of United Irishmen was founded in 1791 in Belfast and Dublin. The inaugural meeting of the United Irishmen in Belfast on 18 October 1791 approved a declaration of the society's objectives. It identified the central grievance that Ireland had no national government: "...we are ruled by Englishmen, and the servants of Englishmen, whose object is the interest of another country, whose instrument is corruption, and whose strength is the weakness of Ireland..." They adopted three central positions: (i) to seek out a cordial union among all the people of Ireland, to maintain that balance essential to preserve liberties and extend commerce; (ii) that the sole constitutional mode by which English influence can be opposed, is by a complete and radical reform of the representation of the people in Parliament; (iii) that no reform is practicable or efficacious, or just which shall not include Irishmen of every religious persuasion. The declaration, then, urged constitutional reform, union among Irish people and the removal of all religious disqualifications.
The event that above all influenced men's thoughts at that time was the French Revolution. Public interest, already strongly aroused, was brought to a pitch by the publication in 1790 of Edmund Burke's "Reflections on the Revolution in France", and Thomas Paine's response, "Rights of Man", in February 1791. Theobald Wolfe Tone wrote later that, "This controversy, and the gigantic event which gave rise to it, changed in an instant the politics of Ireland." Paine himself was aware of this; commenting on sales of Part I of "Rights of Man" in November 1791, only eight months after publication of the first edition, he informed a friend that in England "almost sixteen thousand has gone off – and in Ireland above forty thousand". Paine may have been inclined to talk up sales of his works, but what is striking in this context is that Paine believed Irish sales were so far ahead of English ones before Part II had appeared. On 5 June 1792, Thomas Paine, author of the "Rights of Man", was proposed for honorary membership of the Dublin Society of the United Irishmen.
The fall of the Bastille was to be celebrated in Belfast on 14 July 1791 by a Volunteer meeting. At the request of Thomas Russell, Tone drafted suitable resolutions for the occasion, including one favouring the inclusion of Catholics in any reforms. In a covering letter to Russell, Tone wrote, "I have not said one word that looks like a wish for separation, though I give it to you and your friends as my most decided opinion that such an event would be a regeneration of their country". By 1795, Tone's republicanism and that of the society had openly crystallized, as he later recalled: "I remember particularly two days that we passed on Cave Hill. On the first Russell, Neilson, Simms, McCracken and one or two more of us, on the summit of McArt's fort, took a solemn obligation...never to desist in our efforts until we had subverted the authority of England over our country and asserted her independence."
The culmination was an uprising against British rule in Ireland lasting from May to September 1798 – the Irish Rebellion of 1798 – with military support from revolutionary France in August and again in October 1798. After the failure of the rising of 1798, the United Irishman John Daly Burk, an émigré in the United States, was most emphatic in his identification of the Irish, French and American causes in his "The History of the Late War in Ireland", written in 1799.
During the Enlightenment, anti-monarchism extended beyond the civic humanism of the Renaissance. Classical republicanism, still supported by philosophers such as Rousseau and Montesquieu, was only one of several theories seeking to limit the power of monarchies rather than directly opposing them. New forms of anti-monarchism, such as liberalism and later socialism, quickly overtook classical republicanism as the leading ideologies. Republicanism gained support, and monarchies were challenged throughout Europe.
The French version of Republicanism after 1870 was called "Radicalism"; it became the Radical Party, a major political party. In Western Europe, there were similar smaller "radical" parties. They all supported a constitutional republic and universal suffrage, while European "liberals" were at the time in favor of constitutional monarchy and census suffrage. Most radical parties later favored economic liberalism and capitalism. This distinction between radicalism and liberalism had not totally disappeared in the 20th century, although many radicals simply joined liberal parties. For example, the Radical Party of the Left in France or the (originally Italian) Transnational Radical Party, which still exist, focus more on republicanism than on simple liberalism.
Liberalism was represented in France by the Orleanists, who rallied to the Third Republic only in the late 19th century, after the comte de Chambord's death in 1883 and the 1891 papal encyclical "Rerum novarum".
But the early Republican, Radical and Radical-Socialist Party in France, and Chartism in Britain, were closer to republicanism. Radicalism remained close to republicanism in the 20th century, at least in France, where they governed several times with other parties (participating in both the Cartel des Gauches coalitions as well as the Popular Front).
Discredited after the Second World War, French radicals split into a left-wing party – the Radical Party of the Left, an associate of the Socialist Party – and the Radical Party "valoisien", an associate party of the conservative Union for a Popular Movement (UMP) and its Gaullist predecessors. Italian radicals also maintained close links with republicanism, as well as with socialism, with the "Partito radicale" founded in 1955, which became the Transnational Radical Party in 1989.
After the fall of communism in 1989 and the collapse of the Marxist interpretation of the French Revolution, France increasingly turned to Republicanism to define its national identity. Charles de Gaulle, presenting himself as the military savior of France in the 1940s and its political savior in the 1950s, refashioned the meaning of Republicanism. Both left and right enshrined him in the Republican pantheon.
Republicanism became the dominant political value of Americans during and after the American Revolution. The "Founding Fathers" were strong advocates of republican values, especially Thomas Jefferson, Samuel Adams, Patrick Henry, Thomas Paine, Benjamin Franklin, John Adams, James Madison and Alexander Hamilton. However, in 1854, social movements started to harness values of abolitionism and free labor. These burgeoning radical traditions in America became epitomized in the early formation of the Republican Party, known as "red republicanism." The efforts were primarily led by political leaders such as Alvan E. Bovay, Thaddeus Stevens, and Abraham Lincoln.
In some countries of the British Empire, later the Commonwealth of Nations, republicanism has taken a variety of forms.
In Barbados, the government promised a referendum on becoming a republic in August 2008, but it was postponed due to the change of government in the 2008 election.
In South Africa, republicanism in the 1960s was identified with the supporters of apartheid, who resented British interference in their treatment of the country's black population.
In Australia, the debate between republicans and monarchists is still active, and republicanism draws support from across the political spectrum. Former Prime Minister Malcolm Turnbull was a leading proponent of an Australian republic prior to joining the centre-right Liberal Party, and led the pro-republic campaign during the failed 1999 Australian republic referendum. After becoming Prime Minister in 2015, he confirmed he still supports a republic, but stated that the issue should wait until after the reign of Queen Elizabeth II. The centre-left Labor Party officially supports the abolition of the monarchy and another referendum on the issue.
On 22 March 2015, Prime Minister Freundel Stuart announced that Barbados will move towards a republican form of government "in the very near future".
Andrew Holness, the current Prime Minister of Jamaica, has announced that his government intends to begin the process of transitioning to a republic.
In New Zealand, there is also a republican movement.
Republican groups are also active in the United Kingdom. The major organisation campaigning for a republic in the United Kingdom is 'Republic'.
The Netherlands has known two republican periods: the Dutch Republic (1581–1795), which gained independence from the Spanish Empire during the Eighty Years' War, followed by the Batavian Republic (1795–1806), which was established as a sister republic after conquest by the French First Republic. After Napoleon crowned himself Emperor of the French, he made his brother Louis Bonaparte King of Holland (1806–1810), then annexed the Netherlands into the French First Empire (1810–1813) until he was defeated at the Battle of Leipzig. Thereafter the Sovereign Principality of the United Netherlands (1813–1815) was established, granting the Orange-Nassau family, who during the Dutch Republic had only been stadtholders, a princely title over the Netherlands, and soon William Frederick even crowned himself King of the Netherlands. His rather autocratic tendencies, in spite of the principles of constitutional monarchy, met increasing resistance from Parliament and the population, which eventually limited the monarchy's power and democratised the government, most notably through the Constitutional Reform of 1848. Since the late 19th century, republicanism has had varying degrees of support in society, which the royal house has generally dealt with by gradually letting go of its formal influence in politics and taking on a more ceremonial and symbolic role. Nowadays, the popularity of the monarchy is high, but there is a significant republican minority that strives to abolish the monarchy altogether.
In the period around and after the dissolution of the union between Norway and Sweden in 1905, opposition to the monarchy grew in Norway, and republican movements and thought continue to exist to this day.
In Sweden, a major promoter of republicanism is the Swedish Republican Association, which advocates for a democratic ending to the Monarchy of Sweden.
There is renewed interest in republicanism in Spain after two earlier attempts: the First Spanish Republic (1873–1874) and the Second Spanish Republic (1931–1939). Movements such as Citizens for the Republic have emerged, and parties like the United Left (Spain) and the Republican Left of Catalonia increasingly refer to republicanism. A survey conducted in 2007 reported that 69% of the population preferred the monarchy to continue, compared with 22% opting for a republic. In a 2008 survey, 58% of Spanish citizens were indifferent, 16% favored a republic, 16% were monarchists, and 7% claimed they were "Juancarlistas" (supporters of continued monarchy under King Juan Carlos I, without a common position on the fate of the monarchy after his death). In recent years republicanism has been rising, especially among young people, with successive surveys projecting a technical tie between supporters of the monarchy and supporters of the republic.
Neorepublicanism is the effort by current scholars to draw on a classical republican tradition in the development of an attractive public philosophy intended for contemporary purposes. With traditional socialism virtually defunct, it emerges as an alternative postsocialist critique of market society from the left.
Prominent theorists in this movement are Philip Pettit and Cass Sunstein, who have each written several works defining republicanism and how it differs from liberalism. Michael Sandel, a late convert to republicanism from communitarianism, advocates replacing or supplementing liberalism with republicanism, as outlined in his "Democracy's Discontent: America in Search of a Public Philosophy."
Contemporary works from a neorepublican perspective include jurist K. Sabeel Rahman's book "Democracy Against Domination", which seeks to create a neorepublican framework for economic regulation grounded in the thought of Louis Brandeis and John Dewey and in popular control, in contrast to both New Deal-style managerialism and neoliberal deregulation. Philosopher Elizabeth Anderson's "Private Government" traces the history of republican critiques of private power, arguing that the classical free market policies of the 18th and 19th centuries, intended to help workers, only led to their domination by employers. In "From Slavery to the Cooperative Commonwealth", political scientist Alex Gourevitch examines a strain of late 19th-century American republicanism known as labor republicanism, associated with the producerist labor union the Knights of Labor, and how republican concepts were used in service of workers' rights, alongside a strong critique of that union's role in supporting the Chinese Exclusion Act.
In the late 18th century there was convergence of democracy and republicanism. Republicanism is a system that replaces or accompanies inherited rule. There is an emphasis on liberty, and a rejection of corruption. It strongly influenced the American Revolution and the French Revolution in the 1770s and 1790s, respectively. Republicans, in these two examples, tended to reject inherited elites and aristocracies, but left open two questions: whether a republic, to restrain unchecked majority rule, should have an unelected upper chamber—perhaps with members appointed as meritorious experts—and whether it should have a constitutional monarch.
Though conceptually separate from democracy, republicanism included the key principles of rule by consent of the governed and sovereignty of the people. In effect, republicanism held that kings and aristocracies were not the real rulers, but rather the whole people were. Exactly "how" the people were to rule was an issue of democracy: republicanism itself did not specify a means. In the United States, the solution was the creation of political parties that reflected the votes of the people and controlled the government (see Republicanism in the United States). Many exponents of republicanism, such as Benjamin Franklin, Thomas Paine, and Thomas Jefferson were strong promoters of representative democracy. Other supporters of republicanism, such as John Adams and Alexander Hamilton, were more distrustful of majority rule and sought a government with more power for elites. There were similar debates in many other democratizing nations.
In contemporary usage, the term "democracy" refers to a government chosen by the people, whether it is direct or representative. Today the term "republic" usually refers to representative democracy with an elected head of state, such as a president, who serves for a limited term; in contrast to states with a hereditary monarch as a head of state, even if these states also are representative democracies, with an elected or appointed head of government such as a prime minister.
The Founding Fathers of the United States rarely praised and often criticized democracy, which in their time tended to specifically mean direct democracy and which they equated with mob rule; James Madison argued that what distinguished a "democracy" from a "republic" was that the former became weaker as it got larger and suffered more violently from the effects of faction, whereas a republic could get stronger as it got larger and combatted faction by its very structure. What was critical to American values, John Adams insisted, was that the government should be "bound by fixed laws, which the people have a voice in making, and a right to defend."
Some countries (such as the United Kingdom, the Netherlands, Belgium, Luxembourg, the Scandinavian countries, and Japan) turned powerful monarchs into constitutional ones with limited, or eventually merely symbolic, powers. Often the monarchy was abolished along with the aristocratic system, whether or not they were replaced with democratic institutions (such as in France, China, Iran, Russia, Germany, Austria, Hungary, Italy, Greece, Turkey and Egypt). In Australia, New Zealand, Canada, Papua New Guinea, and some other countries the monarch, or its representative, is given supreme executive power, but by convention acts only on the advice of his or her ministers. Many nations had elite upper houses of legislatures, the members of which often had lifetime tenure, but eventually these houses lost much power (as the UK House of Lords), or else became elective and remained powerful.
Repetitive strain injury
A repetitive strain injury (RSI) is an injury to part of the musculoskeletal or nervous system caused by repetitive use, vibrations, compression or long periods in a fixed position. Other common names include repetitive stress disorders, cumulative trauma disorders (CTDs), and overuse syndrome.
Symptoms experienced by patients with RSI include aching, pulsing pain, tingling, and weakness in the extremities, initially presenting as intermittent discomfort and then with increasing frequency.
Repetitive strain injury (RSI) and associative trauma disorders are umbrella terms used to refer to several discrete conditions that can be associated with repetitive tasks, forceful exertions, vibrations, mechanical compression, sustained or awkward positions, or repetitive eccentric contractions. The exact terminology is controversial, but the terms now used by the United States Department of Labor and the National Institute for Occupational Safety and Health (NIOSH) are musculoskeletal disorders (MSDs) and work-related musculoskeletal disorders (WMSDs).
Examples of conditions that may sometimes be attributed to such causes include tendinosis (or less often tendinitis), carpal tunnel syndrome, cubital tunnel syndrome, De Quervain syndrome, thoracic outlet syndrome, intersection syndrome, golfer's elbow (medial epicondylitis), tennis elbow (lateral epicondylitis), trigger finger (so-called stenosing tenosynovitis), radial tunnel syndrome, ulnar tunnel syndrome, and focal dystonia.
A general worldwide increase since the 1970s in RSIs of the arms, hands, neck, and shoulder has been attributed to the widespread use in the workplace of keyboard entry devices, such as typewriters and computers, which require long periods of repetitive motions in a fixed posture. Extreme temperatures have also been reported as a risk factor for RSI.
Workers in certain fields are at risk of repetitive strains. Most occupational injuries are musculoskeletal disorders, and many of these are caused by cumulative trauma rather than a single event. Miners and poultry workers, for example, must make repeated motions which can cause tendon, muscular, and skeletal injuries. Jobs that involve repeated motion patterns or prolonged posture within a work cycle, or both, may be repetitive. Young athletes are predisposed to RSIs due to an underdeveloped musculoskeletal system.
Contributing factors range from personality differences to workplace organization problems. Certain workers may negatively perceive their work organization due to excessive work rate, long work hours, limited job control, and low social support. Previous studies have shown elevated urinary catecholamines (stress-related chemicals) in workers with RSI. Pain related to RSI may evolve into chronic pain syndrome, particularly for workers who do not have support from co-workers and supervisors.
Age and gender are important risk factors for RSIs. The risk of RSI increases with age. Women are more likely to be affected than men because of their smaller frame, lower muscle mass and strength, and endocrine influences. In addition, lifestyle choices such as smoking and alcohol consumption are recognized risk factors for RSI. Recent scientific findings indicate that obesity and diabetes may predispose an individual to RSIs by creating a chronic low-grade inflammatory response that prevents the body from effectively healing damaged tissues.
RSIs are assessed using a number of objective clinical measures. These include effort-based tests such as grip and pinch strength, diagnostic tests such as Finkelstein's test for De Quervain's tendinitis, the Phalen's contortion and Tinel's percussion tests for carpal tunnel syndrome, and nerve conduction velocity tests that show nerve compression in the wrist. Various imaging techniques can also be used to show nerve compression, such as X-ray for the wrist and MRI for the thoracic outlet and cervico-brachial areas. Utilization of routine imaging is useful in early detection and treatment of overuse injuries in at-risk populations, which is important in preventing long-term adverse effects.
There are no quick fixes for RSI. Early diagnosis is critical to limiting damage. The RICE treatment is used as the first treatment for many muscle strains, ligament sprains, or other bruises and injuries. RICE is used immediately after an injury happens and for the first 24 to 48 hours after the injury. These modalities can help reduce the swelling and pain. Commonly prescribed treatments for early-stage RSIs include analgesics, myofeedback, biofeedback, physical therapy, relaxation, and ultrasound therapy. Low-grade RSIs can sometimes resolve themselves if treatments begin shortly after the onset of symptoms. However, some RSIs may require more aggressive intervention including surgery and can persist for years.
Although there are no "quick fixes" for RSI, there are effective approaches to its treatment and prevention. One is that of ergonomics, the changing of one's environment (especially workplace equipment) to minimize repetitive strain. Another is specific massage techniques such as trigger point therapy and related techniques such as the Alexander Technique. Licensed massage therapists specializing in RSI, as well as physical therapists and chiropractors, generally provide hands-on therapy, but also expect that the patient supplement and reinforce the office-visit therapy sessions with daily (or several times daily) exercises, self-massage, and stretching as prescribed by the practitioner.
General exercise has been shown to decrease the risk of developing RSI. Doctors sometimes recommend that RSI sufferers engage in specific strengthening exercises, for example to improve sitting posture, reduce excessive kyphosis, and potentially relieve thoracic outlet syndrome. Modifications of posture and arm use are often recommended.
Although seemingly a modern phenomenon, RSIs have long been documented in the medical literature. In 1700, the Italian physician Bernardino Ramazzini first described RSI in more than 20 categories of industrial workers in Italy, including musicians and clerks. Carpal tunnel syndrome was first identified by the British surgeon James Paget in 1854. The April 1875 issue of "The Graphic" describes "telegraphic paralysis."
The Swiss surgeon Fritz de Quervain first identified De Quervain's tendinitis in Swiss factory workers in 1895. The French neurologist Jules Tinel (1879–1952) developed his percussion test for compression of the median nerve in 1900. The American surgeon George Phalen improved the understanding of the aetiology of carpal tunnel syndrome with his clinical experience of several hundred patients during the 1950s and 1960s.
Specific sources of discomfort have been popularly referred to by terms such as Blackberry thumb, iPod finger, Wii elbow, mouse arm disease, PlayStation thumb, Rubik's wrist or "cuber's thumb", stylus finger, raver's wrist, and Emacs pinky, among others.
Nuclear power
Nuclear power is the use of nuclear reactions that release nuclear energy to generate heat, which most frequently is then used in steam turbines to produce electricity in a nuclear power plant. Nuclear power can be obtained from nuclear fission, nuclear decay and nuclear fusion reactions. Presently, the vast majority of electricity from nuclear power is produced by nuclear fission of uranium and plutonium. Nuclear decay processes are used in niche applications such as radioisotope thermoelectric generators. Generating electricity from fusion power remains the focus of international research. This article mostly deals with nuclear fission power for electricity generation.
Civilian nuclear power supplied 2,563 terawatt hours (TWh) of electricity in 2018, equivalent to about 10% of global electricity generation, and was the second-largest low-carbon power source after hydroelectricity. There are 443 civilian fission reactors in the world, with a combined electrical capacity of 395 gigawatts (GW). There are also 56 nuclear power reactors under construction and 109 reactors planned, with combined capacities of 60 GW and 120 GW, respectively. The United States has the largest fleet of nuclear reactors, generating over 800 TWh of zero-emissions electricity per year with an average capacity factor of 92%. Most reactors under construction are generation III reactors in Asia.
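For context, a plant's capacity factor is its actual generation divided by what it would produce running at full nameplate power all year. A minimal worked example (the 1 GW reactor below is illustrative, not a figure from the source):

\[
\mathrm{CF} = \frac{E_{\text{actual}}}{P_{\text{nameplate}} \times 8760\,\mathrm{h}},
\qquad
E = 0.92 \times 1\,\mathrm{GW} \times 8760\,\mathrm{h} \approx 8.1\,\mathrm{TWh\ per\ year}.
\]

At that rate, a fleet delivering 800 TWh per year corresponds to roughly 100 GW of installed capacity.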
Nuclear power has one of the lowest levels of fatalities per unit of energy generated compared to other energy sources. Coal, petroleum, natural gas and hydroelectricity each have caused more fatalities per unit of energy due to air pollution and accidents. Since its commercialization in the 1970s, nuclear power has prevented about 1.84 million air pollution-related deaths and the emission of about 64 billion tonnes of carbon dioxide equivalent that would have otherwise resulted from the burning of fossil fuels.
Accidents in nuclear power plants include the Chernobyl disaster in the Soviet Union in 1986, the Fukushima Daiichi nuclear disaster in Japan in 2011, and the more contained Three Mile Island accident in the United States in 1979.
There is a debate about nuclear power. Proponents, such as the World Nuclear Association and Environmentalists for Nuclear Energy, contend that nuclear power is a safe, sustainable energy source (see also Nuclear power proposed as renewable energy) that reduces carbon emissions. Nuclear power opponents, such as Greenpeace and NIRS, contend that nuclear power poses many threats to people and the environment.
In 1932 physicist Ernest Rutherford discovered that when lithium atoms were "split" by protons from a proton accelerator, immense amounts of energy were released in accordance with the principle of mass–energy equivalence. However, he and other nuclear physics pioneers Niels Bohr and Albert Einstein believed harnessing the power of the atom for practical purposes anytime in the near future was unlikely.
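The energy released when a nucleus is split follows from the mass defect via \(E = \Delta m\, c^{2}\). A worked example for the lithium reaction Rutherford's team studied (using standard atomic masses; the arithmetic is illustrative, not from the source):

\[
^{7}\mathrm{Li} + p \rightarrow 2\,^{4}\mathrm{He},
\qquad
\Delta m = (7.01600 + 1.00783 - 2 \times 4.00260)\,\mathrm{u} \approx 0.0186\,\mathrm{u},
\]
\[
E = \Delta m\, c^{2} \approx 0.0186 \times 931.5\,\mathrm{MeV} \approx 17.3\,\mathrm{MeV\ per\ reaction},
\]

roughly a million times the energy scale of a chemical bond, which is why the result attracted such attention.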
The same year, his doctoral student James Chadwick discovered the neutron, which was immediately recognized as a potential tool for nuclear experimentation because of its lack of an electric charge. Experiments bombarding materials with neutrons led Frédéric and Irène Joliot-Curie to discover induced radioactivity in 1934, which allowed the creation of radium-like elements. Further work by Enrico Fermi in the 1930s focused on using slow neutrons to increase the effectiveness of induced radioactivity. Experiments bombarding uranium with neutrons led Fermi to believe he had created a new transuranic element, which was dubbed hesperium.
In 1938, German chemists Otto Hahn and Fritz Strassmann, along with Austrian physicist Lise Meitner and Meitner's nephew, Otto Robert Frisch, conducted experiments with the products of neutron-bombarded uranium, as a means of further investigating Fermi's claims.
They determined that the relatively tiny neutron split the nucleus of the massive uranium atoms into two roughly equal pieces, contradicting Fermi.
This was an extremely surprising result: all other forms of nuclear decay involved only small changes to the mass of the nucleus, whereas this process—dubbed "fission" as a reference to biology—involved a complete rupture of the nucleus.
Numerous scientists, including Leó Szilárd, who was one of the first, recognized that if fission reactions released additional neutrons, a self-sustaining nuclear chain reaction could result. Once this was experimentally confirmed and announced by Frédéric Joliot-Curie in 1939, scientists in many countries (including the United States, the United Kingdom, France, Germany, and the Soviet Union) petitioned their governments for support of nuclear fission research, just on the cusp of World War II, for the development of a nuclear weapon.
In the United States, where Fermi and Szilárd had both emigrated, the discovery of the nuclear chain reaction led to the creation of the first man-made reactor, the research reactor known as Chicago Pile-1, which achieved criticality on December 2, 1942. The reactor's development was part of the Manhattan Project, the Allied effort to create atomic bombs during World War II. It led to the building of larger single-purpose production reactors, such as the X-10 Pile, for the production of weapons-grade plutonium for use in the first nuclear weapons. The United States tested the first nuclear weapon in July 1945, the Trinity test, with the atomic bombings of Hiroshima and Nagasaki taking place one month later.
In August 1945, the first widely distributed account of nuclear energy, the pocketbook "The Atomic Age", was released. It discussed the peaceful future uses of nuclear energy and depicted a future where fossil fuels would go unused. Nobel laureate Glenn Seaborg, who later chaired the United States Atomic Energy Commission, is quoted as saying "there will be nuclear powered earth-to-moon shuttles, nuclear powered artificial hearts, plutonium heated swimming pools for SCUBA divers, and much more".
In the same month, with the end of the war, Seaborg and others would file hundreds of initially classified patents, most notably Eugene Wigner and Alvin Weinberg's Patent #2,736,696, on a conceptual light water reactor (LWR) that would later become the United States' primary reactor for naval propulsion and later take up the greatest share of the commercial fission-electric landscape.
The United Kingdom, Canada, and the USSR proceeded to research and develop nuclear energy over the course of the late 1940s and early 1950s.
Electricity was generated for the first time by a nuclear reactor on December 20, 1951, at the EBR-I experimental station near Arco, Idaho, which initially produced about 100 kW.
In 1953, American President Dwight Eisenhower gave his "Atoms for Peace" speech at the United Nations, emphasizing the need to develop "peaceful" uses of nuclear power quickly. This was followed by the Atomic Energy Act of 1954 which allowed rapid declassification of U.S. reactor technology and encouraged development by the private sector.
The first organization to develop nuclear power was the U.S. Navy, with the S1W reactor for the purpose of propelling submarines and aircraft carriers. The first nuclear-powered submarine, USS Nautilus, was put to sea in January 1954. The trajectory of civil reactor design was heavily influenced by Admiral Hyman G. Rickover, who, with Weinberg as a close advisor, selected the pressurized water reactor (PWR) design, in the form of a 10 MW reactor, for the Nautilus. That decision gave the PWR a government commitment to development and an engineering lead that would have a lasting impact on the civilian electricity market in the years to come. The United States Navy nuclear propulsion design and operation community, under Rickover's style of attentive management, retains a continuing record of zero reactor accidents (defined as the uncontrolled release of fission products to the environment resulting from damage to a reactor core), with the U.S. Navy fleet of nuclear-powered ships standing at some 80 vessels as of 2018.
On June 27, 1954, the USSR's Obninsk Nuclear Power Plant, based on what would become the prototype of the RBMK reactor design, became the world's first nuclear power plant to generate electricity for a power grid, producing around 5 megawatts of electric power.
On July 17, 1955 the BORAX III reactor, the prototype for later boiling water reactors, became the first to generate electricity for an entire community, the town of Arco, Idaho. A motion picture record of the demonstration, which supplied some 2 megawatts (2 MW) of electricity, was presented to the United Nations at the first Geneva Conference that year, the world's largest gathering of scientists and engineers to date, which met to explore the technology. In 1957 EURATOM was launched alongside the European Economic Community (the latter is now the European Union). The same year also saw the launch of the International Atomic Energy Agency (IAEA).
The world's first commercial nuclear power station, Calder Hall at Windscale, England, was opened in 1956 with an initial capacity of 50 MW per reactor (200 MW total). It was the first of a fleet of dual-purpose MAGNOX reactors, officially code-named PIPPA (Pressurized Pile Producing Power and Plutonium) by the UKAEA to denote the plant's dual commercial and military role.
The U.S. Army Nuclear Power Program formally commenced in 1954. Under its management, the 2 megawatt SM-1, at Fort Belvoir, Virginia, was the first in the United States to supply electricity in an industrial capacity to the commercial grid (VEPCO), in April 1957.
The first commercial nuclear station to become operational in the United States was the 60 MW Shippingport Reactor (Pennsylvania), in December 1957.
The 3 MW SL-1 was a U.S. Army experimental nuclear power reactor at the National Reactor Testing Station, Idaho National Laboratory. It was derived from the BORAX boiling water reactor (BWR) design and first achieved operational criticality and connection to the grid in 1958.
For reasons unknown, in 1961 a technician removed a control rod about 22 inches farther than the prescribed 4 inches. This resulted in a steam explosion which killed the three crew members and caused a meltdown. The event was eventually rated at 4 on the seven-level INES scale.
One of the two liquid-metal-cooled reactors on board a Soviet submarine that had been in service from 1963 as the experimental testbed for the later Alfa-class fleet underwent a fuel element failure accident in 1968, with the emission of gaseous fission products into the surrounding air, producing 9 crew fatalities and 83 injuries.
The total global installed nuclear capacity initially rose relatively quickly, rising from less than 1 gigawatt (GW) in 1960 to 100 GW in the late 1970s, and 300 GW in the late 1980s. Since the late 1980s worldwide capacity has risen much more slowly, reaching 366 GW in 2005. Between around 1970 and 1990, more than 50 GW of capacity was under construction (peaking at over 150 GW in the late 1970s and early 1980s)—in 2005, around 25 GW of new capacity was planned. More than two-thirds of all nuclear plants ordered after January 1970 were eventually cancelled. A total of 63 nuclear units were canceled in the United States between 1975 and 1980.
In 1972 Alvin Weinberg, co-inventor of the light water reactor design (the most common nuclear reactor today), was fired from his job at Oak Ridge National Laboratory by the Nixon administration, "at least in part" over his raising of concerns about the safety and wisdom of ever larger scaling-up of his design, especially above a power rating of ~500 MWe: in a loss-of-coolant accident scenario, the decay heat generated by such large, compact solid-fuel cores was thought to be beyond the capability of passive/natural convection cooling to prevent a rapid fuel rod meltdown and, with it, potentially far-reaching fission product pluming. While he considered the LWR well suited at sea for the submarine and naval fleet, Weinberg did not fully support its use by utilities on land at the power outputs they were interested in for supply-scale reasons, and he requested a greater share of AEC research funding to evolve his team's demonstrated Molten-Salt Reactor Experiment, a design with greater inherent safety in this scenario and, he envisioned, greater economic growth potential in the market for large-scale civilian electricity generation.
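The scale of the decay-heat problem can be sketched with the Way–Wigner approximation, a standard textbook rule of thumb (not a formula from the source):

\[
\frac{P(\tau)}{P_{0}} \approx 0.0622\left[\tau^{-0.2} - (\tau + T)^{-0.2}\right],
\]

where \(\tau\) is the time since shutdown and \(T\) the prior operating time, both in seconds. One hour after shutting down a core that has run for a year, this gives \(P/P_{0} \approx 0.0622\,(3600^{-0.2} - (3600 + 3.15\times 10^{7})^{-0.2}) \approx 1\%\), so an illustrative 3,000 MWt core would still be producing roughly 30 MW of heat, which is why passive convection alone was thought insufficient for very large cores.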
Similar to the earlier BORAX reactor safety experiments conducted by Argonne National Laboratory, in 1976 Idaho National Laboratory began a test program focused on LWR reactors under various accident scenarios, with the aim of understanding the event progression and the mitigating steps necessary to respond to a failure of one or more of the disparate systems; much of the redundant back-up safety equipment and many nuclear regulations draw from this series of destructive testing investigations.
During the 1970s and 1980s rising economic costs (related to extended construction times largely due to regulatory changes and pressure-group litigation) and falling fossil fuel prices made nuclear power plants then under construction less attractive. In the 1980s in the U.S. and 1990s in Europe, the flat electric grid growth and electricity liberalization also made the addition of large new baseload energy generators economically unattractive.
The 1973 oil crisis had a significant effect on countries such as France and Japan, which had relied more heavily on oil for electricity generation (39% and 73% respectively), prompting them to invest in nuclear power.
The French plan, known as the Messmer plan, was for the complete independence from oil, with an envisaged construction of 80 reactors by 1985 and 170 by 2000.
France would construct 25 fission-electric stations, installing 56 mostly PWR-design reactors over the next 15 years, though it forwent the 100 reactors initially charted in 1973 for the 1990s. In 2018, 72% of French electricity was generated by 58 reactors, the highest percentage of any nation in the world.
Some local opposition to nuclear power emerged in the U.S. in the early 1960s, beginning with the Bodega Bay station proposed in California in 1958, which produced conflict with local citizens; by 1964 the concept was ultimately abandoned. In the late 1960s some members of the scientific community began to express pointed concerns. These anti-nuclear concerns related to nuclear accidents, nuclear proliferation, nuclear terrorism and radioactive waste disposal. In the early 1970s, there were large protests about a proposed nuclear power plant in Wyhl, Germany. The project was cancelled in 1975, and the anti-nuclear success at Wyhl inspired opposition to nuclear power in other parts of Europe and North America. By the mid-1970s anti-nuclear activism gained a wider appeal and influence, and nuclear power began to become an issue of major public protest. In some countries, the nuclear power conflict "reached an intensity unprecedented in the history of technology controversies". In May 1979, an estimated 70,000 people, including then governor of California Jerry Brown, attended a march against nuclear power in Washington, D.C. Anti-nuclear power groups emerged in every country that had a nuclear power programme.
Globally during the 1980s one new nuclear reactor started up every 17 days on average.
In the early 1970s, increased public hostility to nuclear power in the United States led the United States Atomic Energy Commission, and later the Nuclear Regulatory Commission, to lengthen the license procurement process, tighten engineering regulations and increase the requirements for safety equipment. Together with relatively minor percentage increases in the total quantity of steel, piping, cabling and concrete per unit of installed nameplate capacity, the more notable changes to the regulatory open public hearing-response cycle for granting construction licenses stretched the time from project initiation to the pouring of first concrete from 16 months in 1967 to 32 months in 1972 and finally 54 months in 1980, which ultimately quadrupled the price of power reactors.
Utility proposals in the U.S. for nuclear generating stations peaked at 52 in 1974, fell to 12 in 1976 and never recovered, in large part due to the pressure-group litigation strategy of launching lawsuits against each proposed U.S. construction project, keeping private utilities tied up in court for years; one such case reached the Supreme Court in 1978. With permission to build a nuclear station in the U.S. eventually taking longer than in any other industrial country, the spectre facing utilities of having to pay interest on large construction loans while the anti-nuclear movement used the legal system to produce delays increasingly made the viability of financing construction less certain. By the close of the 1970s it became clear that nuclear power would not grow nearly as dramatically as once believed.
Over 120 reactor proposals in the United States were ultimately cancelled and the construction of new reactors ground to a halt. A cover story in the February 11, 1985, issue of "Forbes" magazine commented on the overall failure of the U.S. nuclear power program, saying it "ranks as the largest managerial disaster in business history".
According to some commentators, the 1979 accident at Three Mile Island (TMI) played a major part in the reduction in the number of new plant constructions in many other countries. According to the NRC, TMI was the most serious accident in "U.S. commercial nuclear power plant operating history, even though it led to no deaths or injuries to plant workers or members of the nearby community." The regulatory uncertainty and delays eventually resulted in an escalation of construction-related debt that led to the bankruptcy of Seabrook's major utility owner, Public Service Company of New Hampshire, at the time the fourth-largest bankruptcy in United States corporate history.
Among American engineers, the cost increases from implementing the regulatory changes that resulted from the TMI accident were, when eventually finalized, only a few percent of total construction costs for new reactors, primarily relating to the prevention of safety systems from being turned off. The most significant engineering result of the TMI accident was the recognition that better operator training was needed and that the existing emergency core cooling system of PWRs worked better in a real-world emergency than members of the anti-nuclear movement had routinely claimed.
The already slowing rate of new construction, along with the shutdown in the 1980s of two existing demonstration nuclear power stations in the Tennessee Valley, United States, when they could not economically meet the NRC's new tightened standards, shifted electricity generation to coal-fired power plants. In 1977, following the first oil shock, U.S. President Jimmy Carter made a speech calling the energy crisis the "moral equivalent of war" and prominently supporting nuclear power. However, nuclear power could not compete with cheap oil and gas, particularly after public opposition and regulatory hurdles made new nuclear prohibitively expensive.
In 2006 The Brookings Institution, a public policy organization, stated that new nuclear units had not been built in the United States because of soft demand for electricity, the potential cost overruns on nuclear reactors due to regulatory issues and resulting construction delays.
In 1982, amongst a backdrop of ongoing protests directed at the construction of the first commercial scale breeder reactor in France, a later member of the Swiss Green Party fired five RPG-7 rocket-propelled grenades at the still under construction containment building of the Superphenix reactor. Two grenades hit and caused minor damage to the reinforced concrete outer shell. It was the first time protests reached such heights. After examination of the superficial damage, the prototype fast breeder reactor started and operated for over a decade.
According to some commentators, the 1986 Chernobyl disaster played a major part in the reduction in the number of new plant constructions in many other countries.
Unlike the Three Mile Island accident, the much more serious Chernobyl accident did not increase regulations or force engineering changes affecting Western reactors, because the RBMK design, which lacks safety features such as "robust" containment buildings, was used only in the Soviet Union. Over 10 RBMK reactors are still in use today. However, changes were made both in the RBMK reactors themselves (use of a safer enrichment of uranium) and in the control system (preventing safety systems from being disabled), amongst other things, to reduce the possibility of a similar accident. Russia now largely relies upon, builds and exports a variant of the PWR, the VVER, with over 20 in use today.
An international organization to promote safety awareness and the professional development of operators in nuclear facilities, the World Association of Nuclear Operators (WANO), was created as a direct outcome of the 1986 Chernobyl accident. The organization was created with the intent to share and grow the adoption of nuclear safety culture, technology and community, where before there was an atmosphere of cold war secrecy.
Numerous countries, including Austria (1978), Sweden (1980) and Italy (1987) (influenced by Chernobyl) have voted in referendums to oppose or phase out nuclear power.
In the early 2000s, the nuclear industry was expecting a nuclear renaissance, an increase in the construction of new reactors, due to concerns about carbon dioxide emissions. However, in 2009, Petteri Tiippana, the director of STUK's nuclear power plant division, told the BBC that it was difficult to deliver a Generation III reactor project on schedule because builders were not used to working to the exacting standards required on nuclear construction sites, since so few new reactors had been built in recent years.
In 2018 the MIT Energy Initiative study on the future of nuclear energy concluded that, together with its strong suggestion that governments should financially support development and demonstration of new Generation IV nuclear technologies, a global standardization of regulations needs to take place for a worldwide renaissance to commence, with a move towards serial manufacturing of standardized units akin to the other complex engineering field of aircraft and aviation. At present it is common for each country to demand bespoke changes to the design to satisfy varying national regulatory bodies, often to the benefit of domestic engineering supply firms. The report goes on to note that the most cost-effective projects have been built with multiple (up to six) reactors per site using a standardized design, with the same component suppliers and construction crews working on each unit, in a continuous work flow.
Following the Tōhoku earthquake on 11 March 2011, one of the largest earthquakes ever recorded, and a subsequent tsunami off the coast of Japan, the Fukushima Daiichi Nuclear Power Plant suffered three core meltdowns due to failure of the emergency cooling system for lack of electricity supply. This resulted in the most serious nuclear accident since the Chernobyl disaster.
The Fukushima Daiichi nuclear accident prompted a re-examination of nuclear safety and nuclear energy policy in many countries and raised questions among some commentators over the future of the renaissance.
Germany approved plans to close all its reactors by 2022. Italian nuclear energy plans ended when Italy banned the generation, but not consumption, of nuclear electricity in a June 2011 referendum.
China, Switzerland, Israel, Malaysia, Thailand, United Kingdom, and the Philippines reviewed their nuclear power programs.
In 2011 the International Energy Agency halved its prior estimate of new generating capacity to be built by 2035.
Nuclear power generation had its biggest-ever year-on-year fall in 2012, with nuclear power plants globally producing 2,346 TWh of electricity, a drop of 7% from 2011.
This was caused primarily by the majority of Japanese reactors remaining offline that year and the permanent closure of eight reactors in Germany.
The Fukushima Daiichi nuclear accident sparked controversy about the importance of the accident and its effect on nuclear's future.
The crisis prompted countries with nuclear power to review the safety of their reactor fleet and reconsider the speed and scale of planned nuclear expansions.
In 2011, "The Economist" opined that nuclear power "looks dangerous, unpopular, expensive and risky", and suggested a nuclear phase-out.
Jeffrey Sachs, Earth Institute Director, disagreed claiming combating climate change would require an expansion of nuclear power.
Investment banks were also critical of nuclear soon after the accident.
In 2011 German engineering giant Siemens said it would withdraw entirely from the nuclear industry in response to the Fukushima accident. In 2017, Siemens set the "milestone" of supplying the first additive manufacturing part to a nuclear power station, at the Krško Nuclear Power Plant in Slovenia, which it regards as an "industry breakthrough".
The "Associated Press" and Reuters reported in 2011 the suggestion that the safety and survival of the younger Onagawa Nuclear Power Plant, the closest reactor facility to the epicenter and on the coast, demonstrate that it is possible for nuclear facilities to withstand the greatest natural disasters. The Onagawa plant was also said to show that nuclear power can retain public trust, with the surviving residents of the town of Onagawa taking refuge in the gymnasium of the nuclear facility following the destruction of their town.
Following an IAEA inspection in 2012, the agency stated that "The structural elements of the [Onagawa] NPS (nuclear power station) were remarkably undamaged given the magnitude of ground motion experienced and the duration and size of this great earthquake".
In February 2012, the U.S. NRC approved the construction of 2 reactors at the Vogtle Electric Generating Plant, the first approval in 30 years.
Kharecha and Hansen estimated that "global nuclear power has prevented an average of 1.84 million air pollution-related deaths and 64 gigatonnes of CO2-equivalent (GtCO2-eq) greenhouse gas (GHG) emissions that would have resulted from fossil fuel burning" and, if continued, it could prevent up to 7 million deaths and 240 GtCO2-eq emissions by 2050.
In August 2015, following 4 years of near zero fission-electricity generation, Japan began restarting its nuclear reactors, after safety upgrades were completed, beginning with Sendai Nuclear Power Plant.
By 2015, the IAEA's outlook for nuclear energy had become more promising.
"Nuclear power is a critical element in limiting greenhouse gas emissions," the agency noted, and "the prospects for nuclear energy remain positive in the medium to long term despite a negative impact in some countries in the aftermath of the [Fukushima-Daiichi] accident...it is still the second-largest source worldwide of low-carbon electricity.
And the 72 reactors under construction at the start of last year were the most in 25 years."
As of 2015 the global trend was for new nuclear power stations coming online to be balanced by the number of old plants being retired. Eight new grid connections were completed by China in 2015.
In 2016 the BN-800 sodium-cooled fast reactor in Russia began commercial electricity generation. While plans for a BN-1200 were initially conceived, the future of the fast reactor program in Russia awaits the results from MBIR, a multi-loop research facility under construction for testing the chemically more inert lead, lead-bismuth and gas coolants; it will similarly run on recycled MOX (mixed uranium and plutonium oxide) fuel. An on-site pyrochemical processing, closed fuel-cycle facility is planned to recycle the spent fuel/"waste" and reduce the necessity for growth in uranium mining and exploration. In 2017 the manufacturing program for the reactor commenced, with the facility open to collaboration under the "International Project on Innovative Nuclear Reactors and Fuel Cycle"; its construction schedule includes an operational start in 2020. As planned, it will be the world's most powerful research reactor.
In 2015 the Japanese government committed to the aim of restarting its fleet of 40 reactors by 2030 after safety upgrades, and to finish the construction of the Generation III Ōma Nuclear Power Plant.
This would mean that approximately 20% of electricity would come from nuclear power by 2030. As of 2018, some reactors have restarted commercial operation following inspections and upgrades with new regulations. While South Korea has a large nuclear power industry, the new government in 2017, influenced by a vocal anti-nuclear movement, committed to halting nuclear development after the completion of the facilities presently under construction.
The bankruptcy of Westinghouse in March 2017, due to US$9 billion of losses from the halting of construction at Virgil C. Summer Nuclear Generating Station in the U.S., is considered an advantage for eastern companies in the future export and design of nuclear fuel and reactors.
In 2016, the U.S. Energy Information Administration projected for its "base case" that world nuclear power generation would increase from 2,344 terawatt hours (TWh) in 2012 to 4,500 TWh in 2040. Most of the predicted increase was expected to be in Asia. As of 2018, there are over 150 nuclear reactors planned, including 50 under construction. In January 2019, China had 45 reactors in operation, 13 under construction, and plans to build 43 more, which would make it the world's largest generator of nuclear electricity.
Zero-emission nuclear power is an important part of the climate change mitigation effort. Under the IEA Sustainable Development Scenario, which has the ambition of achieving net-zero emissions by 2070, nuclear power and CCUS would generate 3,900 TWh globally by 2030, while wind and solar would generate 8,100 TWh. To achieve this goal, about 15 GWe of nuclear power would need to be added annually on average. As of 2019, over 60 GW of new nuclear power plants was under construction, mostly in China, Russia, Korea, India and the UAE. Many countries around the world are considering small modular reactors, with one in Russia connected to the grid in 2020.
Countries with nuclear power plants in the planning phase include Argentina, Brazil, Bulgaria, the Czech Republic, Egypt, Finland, Hungary, India, Kazakhstan, Poland, Saudi Arabia and Uzbekistan.
The future of nuclear power varies greatly between countries, depending on government policies. Some countries, most notably Germany, have adopted policies of nuclear power phase-out. At the same time, some Asian countries, such as China and India, have committed to rapid expansion of nuclear power. In other countries, such as the United Kingdom and the United States, nuclear power is planned to be part of the energy mix together with renewable energy.
The cost of extending plant lifetimes is competitive with other electricity generation technologies, including new solar and wind projects. In the United States, licenses of almost half of the operating nuclear reactors have been extended to 60 years.
The U.S. NRC and the U.S. Department of Energy have initiated research into light water reactor sustainability, which it is hoped will allow extensions of reactor licenses beyond 60 years, provided that safety can be maintained, to increase energy security and preserve low-carbon generation sources. Research into nuclear reactors that can last 100 years, known as Centurion Reactors, is being conducted.
As of 2020 a number of US nuclear power plants had been cleared by the Nuclear Regulatory Commission for operation up to 80 years.
Just as many conventional thermal power stations generate electricity by harnessing the thermal energy released from burning fossil fuels, nuclear power plants convert the energy released from the nucleus of an atom via nuclear fission that takes place in a nuclear reactor. When a neutron hits the nucleus of a uranium-235 or plutonium atom, it can split the nucleus into two smaller nuclei. The reaction is called nuclear fission. The fission reaction releases energy and neutrons. The released neutrons can hit other uranium or plutonium nuclei, causing new fission reactions, which release more energy and more neutrons. This is called a chain reaction. The reaction rate is controlled by control rods that absorb excess neutrons. The controllability of nuclear reactors depends on the fact that a small fraction of neutrons resulting from fission are delayed. The time delay between the fission and the release of the neutrons slows down changes in reaction rates and gives time for moving the control rods to adjust the reaction rate.
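The stabilizing effect of delayed neutrons can be illustrated with a standard one-delayed-group point-kinetics estimate. The sketch below uses typical textbook parameter values, which are assumptions for illustration rather than figures from this article:

```python
# One-delayed-group point-kinetics sketch of why delayed neutrons make the
# chain reaction controllable. All parameter values are typical textbook
# assumptions, not figures from this article.

beta = 0.0065      # delayed neutron fraction for U-235 fission (typical)
Lambda = 1e-4      # prompt neutron generation time in seconds (typical LWR)
lam = 0.08         # effective precursor decay constant, 1/s (typical)
rho = 0.001        # small positive reactivity insertion (rho < beta)

# For rho well below beta, the stable reactor period is approximately
# T = (beta - rho) / (lam * rho): tens of seconds, ample time to move rods.
period = (beta - rho) / (lam * rho)
print(f"Period with delayed neutrons: {period:.0f} s")

# If only prompt neutrons existed, the period would be Lambda / rho instead,
# far too fast for any mechanical control system.
print(f"Hypothetical prompt-only period: {Lambda / rho:.1f} s")
```

The difference between the two periods, roughly two orders of magnitude in this sketch, is what gives time for moving the control rods, as described above.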
A fission nuclear power plant is generally composed of: a nuclear reactor, in which the nuclear reactions generating heat take place; a cooling system, which removes the heat from inside the reactor; a steam turbine, which transforms the heat into mechanical energy; and an electric generator, which transforms the mechanical energy into electrical energy.
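As a rough numerical illustration of this conversion chain, the sketch below reduces it to a single thermal-efficiency figure; both numbers are assumed values typical of a large light water reactor, not figures from this article:

```python
# Rough sketch of the reactor-to-generator energy conversion described above.
# Values are assumed, typical of a large light water reactor.

thermal_power_mw = 3000.0    # heat generated in the reactor core, MWt
thermal_efficiency = 0.33    # typical steam-cycle efficiency for an LWR

electric_power_mw = thermal_power_mw * thermal_efficiency
waste_heat_mw = thermal_power_mw - electric_power_mw
print(f"Electrical output: {electric_power_mw:.0f} MWe")
print(f"Heat removed by the cooling system: {waste_heat_mw:.0f} MWt")
```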
The life cycle of nuclear fuel starts with uranium mining, which can be underground, open-pit, or in-situ leach mining; an increasing number of the highest-output mines are remote underground operations, such as the McArthur River uranium mine in Canada, which by itself accounts for 13% of global production. As is common in metal mining, the extracted uranium ore is then converted into a compact ore concentrate, known in the case of uranium as "yellowcake" (U3O8), to facilitate transport.
In reactors that can sustain the neutron economy with graphite or heavy water moderators, the reactor fuel can be this natural uranium, reduced to the much denser black ceramic oxide (UO2) form. For light water reactors, whose fuel requires further isotopic refining, the yellowcake is converted to uranium hexafluoride, the only uranium compound that is a gas just above room temperature, which is then sent through gaseous enrichment. In civilian light water reactors, uranium is typically enriched to 3-5% uranium-235 and then generally converted back into a black powdered ceramic uranium oxide (UO2) form, which is compressively sintered into fuel pellets; a stack of pellets forms fuel rods of the proper composition and geometry for the particular reactor the fuel is needed in.
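The enrichment step can be illustrated with the standard mass-balance relation between feed, product and tails; the stream assays below are assumed, illustrative values consistent with the 3-5% range mentioned above:

```python
# Standard enrichment mass balance: how much natural uranium feed is needed
# per kilogram of enriched product. Assays are assumed illustrative values.

x_f = 0.00711   # U-235 weight fraction in natural uranium feed
x_p = 0.045     # product enrichment, 4.5% (within the 3-5% range above)
x_t = 0.0025    # assumed tails (depleted uranium) assay

# Conserving total mass and U-235 mass across the plant gives:
# F / P = (x_p - x_t) / (x_f - x_t)
feed_per_product = (x_p - x_t) / (x_f - x_t)
print(f"Natural uranium needed per kg of 4.5% fuel: {feed_per_product:.1f} kg")
```

Roughly nine kilograms of natural uranium per kilogram of fuel under these assumptions, which is why enrichment dominates the front end of the fuel cycle.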
In modern light-water reactors the fuel rods will typically spend 3 operational cycles (about 6 years) inside the reactor, generally until about 3% of the uranium has been fissioned. Afterwards, they will be moved to a spent fuel pool which provides cooling for the thermal heat and shielding for ionizing radiation. Depending largely upon burnup efficiency, after about 5 years in a spent fuel pool the spent fuel is radioactively and thermally cool enough to handle, and can be moved to dry storage casks or reprocessed.
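The need for years of pool cooling follows from the slow decay of fission-product heat. A common back-of-the-envelope tool is the Way-Wigner approximation, used below with assumed illustrative times:

```python
# Decay heat after shutdown via the Way-Wigner approximation (an assumed
# illustrative model, not from this article). The fraction of pre-shutdown
# power depends on the time t since shutdown and the operating time T,
# both in seconds.

def decay_heat_fraction(t: float, T: float) -> float:
    """Fraction of pre-shutdown thermal power emitted as decay heat."""
    return 0.066 * (t ** -0.2 - (t + T) ** -0.2)

YEAR = 3.156e7                     # seconds per year
T = 6 * YEAR                       # ~6 years in the reactor, as stated above
for label, t in [("1 hour", 3600.0), ("1 month", 2.6e6), ("5 years", 5 * YEAR)]:
    frac = decay_heat_fraction(t, T)
    print(f"{label:>8} after shutdown: {frac:.5f} of full power")
```

For a core of a few thousand megawatts thermal, even the five-year figure still corresponds to a continuous heat output on the order of a megawatt, at which point the fuel becomes cool enough for dry cask storage.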
Uranium is a fairly common element in the Earth's crust: it is approximately as common as tin or germanium, and is about 40 times more common than silver.
Uranium is present in trace concentrations in most rocks, dirt, and ocean water, but is generally economically extracted only where it is present in high concentrations.
As of 2011 the world's known resources of uranium, economically recoverable at the arbitrary price ceiling of US$130/kg, were enough to last for between 70 and 100 years.
The OECD's red book of 2011 said that conventional uranium resources had grown by 12.5% since 2008 due to increased exploration, with this increase translating into greater than a century of uranium available if the rate of use were to continue at the 2011 level. In 2007, the OECD estimated 670 years of economically recoverable uranium in total conventional resources and phosphate ores assuming the then-current use rate.
Light water reactors make relatively inefficient use of nuclear fuel, mostly fissioning only the very rare uranium-235 isotope. Nuclear reprocessing can make this waste reusable, and newer generation III reactors also achieve more efficient use of the available resources than the generation II reactors which make up the vast majority of reactors worldwide. With a pure fast reactor fuel cycle burning up all of the uranium and actinides (which presently make up the most hazardous substances in nuclear waste), there is an estimated 160,000 years' worth of uranium in total conventional resources and phosphate ore at the price of 60–100 US$/kg.
Unconventional uranium resources also exist.
Uranium is naturally present in seawater at a concentration of about 3 micrograms per liter, with 4.5 billion tons of uranium considered present in seawater at any time.
In 2012 it was estimated that this fuel source could be extracted at 10 times the current price of uranium.
In 2014, with the advances made in the efficiency of seawater uranium extraction, it was suggested that it would be economically competitive to produce fuel for light water reactors from seawater if the process was implemented at large scale.
Uranium extracted on an industrial scale from seawater would constantly be replenished by both river erosion of rocks and the natural process of uranium dissolved from the surface area of the ocean floor, both of which maintain the solubility equilibria of seawater concentration at a stable level.
Some commentators have argued that this strengthens the case for nuclear power to be considered a renewable energy source.
As opposed to light water reactors which use uranium-235 (0.7% of all natural uranium), fast breeder reactors use uranium-238 (99.3% of all natural uranium) or thorium. A number of fuel cycles and breeder reactor combinations are considered to be sustainable and/or renewable sources of energy. In 2006 it was estimated that with seawater extraction, there was likely some five billion years' worth of uranium-238 for use in breeder reactors.
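An order-of-magnitude check of these figures is straightforward; the consumption rate below is a rough assumed value:

```python
# Order-of-magnitude check on the seawater uranium figures above.
# The annual consumption figure is a rough assumption.

seawater_uranium_t = 4.5e9     # tonnes of uranium dissolved in the oceans
annual_use_t = 6.0e4           # assumed current global use, tonnes per year

# Once-through light water reactors, static inventory only:
print(f"Years at current use: {seawater_uranium_t / annual_use_t:,.0f}")

# Breeders also fission U-238, improving fuel utilization by a factor of
# roughly 100 (assumed); river erosion replenishing the dissolved inventory,
# as described above, extends the horizon much further still.
print(f"With breeder reactors: {seawater_uranium_t / annual_use_t * 100:,.0f} years")
```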
Breeder technology has been used in several reactors, but the high cost of reprocessing fuel safely, at 2006 technological levels, requires uranium prices of more than US$200/kg before becoming justified economically. Breeder reactors are however being pursued as they have the potential to burn up all of the actinides in the present inventory of nuclear waste while also producing power and creating additional quantities of fuel for more reactors via the breeding process.
As of 2017, there are two breeders producing commercial power, BN-600 reactor and the BN-800 reactor, both in Russia.
The BN-600, with a capacity of 600 MW, was built in 1980 in Beloyarsk and is planned to produce power until 2025. The BN-800 is an updated version of the BN-600, and started operation in 2014. The Phénix breeder reactor in France was powered down in 2009 after 36 years of operation.
Both China and India are building breeder reactors.
The Indian 500 MWe Prototype Fast Breeder Reactor is in the commissioning phase, with plans to build more.
Another alternative to fast breeders are thermal breeder reactors that use uranium-233 bred from thorium as fission fuel in the thorium fuel cycle. Thorium is about 3.5 times more common than uranium in the Earth's crust, and has different geographic characteristics. This would extend the total practical fissionable resource base by 450%. India's three-stage nuclear power programme features the use of a thorium fuel cycle in the third stage, as it has abundant thorium reserves but little uranium.
The most important waste stream from nuclear power reactors is spent nuclear fuel. From LWRs, it is typically composed of 95% uranium, 4% fission products from the energy generating nuclear fission reactions, as well as about 1% transuranic actinides (mostly reactor grade plutonium, neptunium and americium) from unavoidable neutron capture events. The plutonium and other transuranics are responsible for the bulk of the long-term radioactivity, whereas the fission products are responsible for the bulk of the short-term radioactivity.
The high-level radioactive waste/spent fuel that is generated from power production requires treatment, management and isolation from the environment. The technical issues in accomplishing this are considerable, due to the extremely long periods for which some environmentally mobile, mildly radioactive wastes remain potentially hazardous to living organisms: namely the long-lived fission products technetium-99 (half-life 220,000 years) and iodine-129 (half-life 15.7 million years), which dominate the radioactivity of the waste stream after the more intensely radioactive short-lived fission products (SLFPs) have decayed into stable elements, which takes approximately 300 years. To successfully isolate this long-lived fission product (LLFP) waste from the biosphere, either separation and transmutation, or some variation of a synroc treatment with deep geological storage, is commonly suggested.
While in the US spent fuel is presently classified in its entirety as nuclear waste at the federal level and treated accordingly, in other countries it is largely reprocessed to produce a partially recycled fuel, known as mixed oxide fuel or MOX. For spent fuel that does not undergo reprocessing, the most concerning isotopes are the medium-lived transuranic elements, led by reactor-grade plutonium (half-life 24,000 years).
Some proposed reactor designs, such as the American Integral Fast Reactor and the Molten salt reactor can more completely use or burnup the spent reactor grade plutonium fuel and other minor actinides, generated from light water reactors, as under the designed fast fission spectrum, these elements are more likely to fission and produce the aforementioned fission products in their place. This offers a potentially more attractive alternative to deep geological disposal.
The thorium fuel cycle results in similar fission products, though creates a much smaller proportion of transuranic elements from neutron capture events within a reactor. Therefore, spent thorium fuel, breeding the true fuel of fissile uranium-233, is somewhat less concerning from a radiotoxic and security standpoint.
The nuclear industry also produces a large volume of low-level radioactive waste in the form of contaminated items like clothing, hand tools, water purifier resins, and (upon decommissioning) the materials of which the reactor itself is built. Low-level waste can be stored on-site until radiation levels are low enough to be disposed as ordinary waste, or it can be sent to a low-level waste disposal site.
In countries with nuclear power, radioactive wastes account for less than 1% of total industrial toxic wastes, much of which remains hazardous for long periods. Overall, nuclear power produces far less waste material by volume than fossil-fuel based power plants. Coal-burning plants are particularly noted for producing large amounts of toxic and mildly radioactive ash due to concentrating naturally occurring metals and mildly radioactive material in coal. A 2008 report from Oak Ridge National Laboratory concluded that coal power actually results in more radioactivity being released into the environment than nuclear power operation, and that the population effective dose equivalent, or dose to the public from radiation from coal plants is 100 times as much as from the operation of nuclear plants.
Although coal ash is much less radioactive than spent nuclear fuel on a weight per weight basis, coal ash is produced in much higher quantities per unit of energy generated, and this is released directly into the environment as fly ash, whereas nuclear plants use shielding to protect the environment from radioactive materials, for example, in dry cask storage vessels.
Disposal of nuclear waste is often considered the most politically divisive aspect in the lifecycle of a nuclear power facility.
Presently, waste is mainly stored at individual reactor sites and there are over 430 locations around the world where radioactive material continues to accumulate.
Some experts suggest that centralized underground repositories which are well-managed, guarded, and monitored, would be a vast improvement.
There is an "international consensus on the advisability of storing nuclear waste in deep geological repositories", with the lack of movement of nuclear waste in the 2 billion year old natural nuclear fission reactors in Oklo, Gabon being cited as "a source of essential information today."
There are no commercial scale purpose built underground high-level waste repositories in operation. However, in Finland the Onkalo spent nuclear fuel repository is under construction as of 2015. The Waste Isolation Pilot Plant (WIPP) in New Mexico has been taking nuclear waste since 1999 from production reactors, but as the name suggests is a research and development facility.
In 2014 a radiation leak caused by violations in the use of chemically reactive packaging brought renewed attention to the need for quality control management, along with some initial calls for more R&D into the alternative methods of disposal for radioactive waste and spent fuel.
In 2017, the facility was formally reopened after three years of investigation and cleanup, with the resumption of new storage taking place later that year.
The fund established under the U.S. Nuclear Waste Policy Act, which previously received $750 million in fee revenues each year from the nation's combined nuclear electric utilities, had an unspent balance of $44.5 billion as of the end of FY2017, when a court ordered the federal government to cease collecting the fee until it provides a destination for the utilities' commercial spent fuel.
Horizontal drillhole disposal describes proposals to drill over one kilometer vertically and two kilometers horizontally in the Earth's crust, for the purpose of disposing of high-level waste forms such as spent nuclear fuel, caesium-137, or strontium-90. After the emplacement and the retrievability period, drillholes would be backfilled and sealed.
Most thermal reactors run on a once-through fuel cycle, mainly due to the low price of fresh uranium, though many reactors are also fueled with recycled fissionable materials that remain in spent nuclear fuel. The most common fissionable material that is recycled is the reactor-grade plutonium (RGPu) extracted from spent fuel, which is mixed with uranium oxide and fabricated into mixed-oxide or MOX fuel. The first LWR designs certified to operate on a full core of MOX fuel, the ABWR and the System 80, began to appear in the 1990s. The potential for recycling the spent fuel a second time is limited by undesirable neutron economy issues when second-generation MOX fuel is used in "thermal" reactors. These issues do not affect fast reactors, which are therefore preferred in order to achieve the full energy potential of the original uranium. The only commercial demonstration of twice-recycled, high-burnup fuel to date occurred in the Phénix "fast" reactor.
Because thermal LWRs remain the most common reactor worldwide, the most typical form of commercial spent fuel recycling is to recycle the plutonium a single time as MOX fuel, as is done in France, where it is considered to increase the sustainability of the nuclear fuel cycle, reduce the attractiveness of spent fuel to theft and lower the volume of high level nuclear waste. Reprocessing of civilian fuel from power reactors is also currently done in the United Kingdom, Russia, Japan, and India.
The main constituent of spent fuel from the most common light water reactors is uranium that is slightly more enriched than natural uranium, which can be recycled, though there is a lower incentive to do so. Most of this "recovered uranium", at times referred to as reprocessed uranium, remains in storage. It can, however, be used in a fast reactor, used directly as fuel in CANDU reactors, or re-enriched for another cycle through an LWR. The direct use of recovered uranium to fuel a CANDU reactor was first demonstrated at Qinshan, China. The first re-enriched uranium reload of a commercial LWR occurred in 1994 at Cruas unit 4, France. Re-enriching of reprocessed uranium is common in France and Russia. When reprocessed uranium is part of the fuel of LWRs, its uranium-236 content produces a spent fuel and plutonium isotope stream with greater inherent self-protection than the once-through fuel cycle.
While reprocessing offers the potential recovery of up to 95% of the remaining uranium and plutonium in spent nuclear fuel, and a reduction in the long-term radioactivity of the remaining waste, it has been politically controversial because of the potential to contribute to nuclear proliferation, varied perceptions of increased vulnerability to nuclear terrorism, and its higher fuel cost compared to the once-through fuel cycle. Similarly, while reprocessing reduces the volume of high-level waste, it does not reduce the fission products that are the primary residual heat-generating and radioactive substances for the first few centuries outside the reactor, thus still requiring an almost identical container spacing for the first few hundred years within proposed geological waste isolation facilities. However, much of the opposition to the Yucca Mountain project and those similar to it centers not on fission products but on the "plutonium mine" that un-reprocessed spent fuel placed underground would eventually become.
In the United States, spent nuclear fuel is currently not reprocessed. A major recommendation of the Blue Ribbon Commission on America's Nuclear Future was that "the United States should undertake...one or more permanent deep geological facilities for the safe disposal of spent fuel and high-level nuclear waste".
The French La Hague reprocessing facility has operated commercially since 1976 and was responsible for half the world's reprocessing as of 2010. It has produced MOX fuel from spent fuel derived from France, Japan, Germany, Belgium, Switzerland, Italy, Spain and the Netherlands, with the non-recyclable part of the spent fuel eventually sent back to the user nation. More than 32,000 tonnes of spent fuel had been reprocessed as of 2015, the majority from France, 17% from Germany, and 9% from Japan. Once a source of criticism, Greenpeace has more recently ceased attempting to criticize the facility on technical grounds, as it has performed the process without the serious incidents that have been frequent at other such facilities around the world. In the past, the anti-nuclear movement had argued that reprocessing would not be technically or economically feasible.
A PUREX-related facility, frequently considered to be Areva's proprietary COEX process, is a major long-term commitment of the PRC, with the intention of supplying Chinese reactors with economically separated, indigenous recycled fuel by 2030.
The financial costs of every nuclear power plant continue for some time after the facility has finished generating its last useful electricity. Once no longer economically viable, nuclear reactors and uranium enrichment facilities are generally decommissioned, returning the facility and its parts to a safe enough level to be entrusted for other uses, such as greenfield status.
After a cooling-off period that may last decades, reactor core materials are dismantled and cut into small pieces to be packed in containers for interim storage or transmutation experiments.
In the United States, the Nuclear Waste Policy Act and a nuclear decommissioning trust fund are legally required, with utilities banking 0.1 to 0.2 cents/kWh during operations to fund future decommissioning. Utilities must report regularly to the Nuclear Regulatory Commission (NRC) on the status of their decommissioning funds. About 70% of the total estimated cost of decommissioning all U.S. nuclear power reactors has already been collected (on the basis of the average cost of $320 million per reactor-steam turbine unit).
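The adequacy of such a levy can be checked with a rough calculation; the plant parameters below are assumed, illustrative values:

```python
# Rough check of how the 0.1-0.2 cent/kWh decommissioning levy accumulates.
# Plant parameters are assumed illustrative values.

capacity_kw = 1.0e6        # a 1 GWe reactor
capacity_factor = 0.9      # assumed average availability
levy_per_kwh = 0.0015      # $0.0015/kWh, midpoint of the 0.1-0.2 cent range
years = 40                 # assumed operating life

kwh_per_year = capacity_kw * capacity_factor * 8760
fund = kwh_per_year * years * levy_per_kwh
print(f"Fund after {years} years: ${fund / 1e6:.0f} million")
# Comparable to the ~$320 million average decommissioning cost cited above.
```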
In the United States in 2011, there were 13 reactors that had permanently shut down and were in some phase of decommissioning, with Connecticut Yankee Nuclear Power Plant and Yankee Rowe Nuclear Power Station having completed the process in 2006–2007, after ceasing commercial electricity production circa 1992. The majority of those 15 years was used to allow the stations to cool down naturally on their own, which makes the manual disassembly process both safer and cheaper.
Decommissioning at nuclear sites which have experienced a serious accident is the most expensive and time-consuming.
Nuclear fission power stations, excluding the contribution from naval nuclear fission reactors, provided 11% of the world's electricity in 2012, somewhat less than that generated by hydro-electric stations at 16%.
Since electricity accounts for about 25% of humanity's energy usage with the majority of the rest coming from fossil fuel reliant sectors such as transport, manufacture and home heating, nuclear fission's contribution to the global final energy consumption was about 2.5%.
This is a little more than the combined global electricity production from wind, solar, biomass and geothermal power, which together provided 2% of global final energy consumption in 2014.
In addition, there were approximately 140 naval vessels using nuclear propulsion in operation, powered by about 180 reactors.
Nuclear power's share of global electricity production has fallen from 16.5% in 1997 to about 10% in 2017, in large part because the economics of nuclear power have become more difficult.
Regional differences in the use of nuclear power are large.
The United States produces the most nuclear energy in the world, with nuclear power providing 19% of the electricity it consumes, while France produces the highest percentage of its electrical energy from nuclear reactors – 72% as of 2018.
In the European Union as a whole nuclear power provides 25% of the electricity as of 2017.
Nuclear power is the single largest low-carbon electricity source in the United States, and accounts for two-thirds of the European Union's low-carbon electricity.
Nuclear energy policy differs among European Union countries, and some, such as Austria, Estonia, Ireland and Italy, have no active nuclear power stations.
Many military and some civilian (such as some icebreakers) ships use nuclear marine propulsion.
A few space vehicles have been launched using nuclear reactors: 33 reactors belong to the Soviet RORSAT series and one was the American SNAP-10A.
International research is continuing into additional uses of process heat such as hydrogen production (in support of a hydrogen economy), for desalinating sea water, and for use in district heating systems.
Both fission and fusion appear promising for space propulsion applications, generating higher mission velocities with less reaction mass. This is due to the much higher energy density of nuclear reactions: some 7 orders of magnitude (10,000,000 times) more energetic than the chemical reactions which power the current generation of rockets.
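The quoted energy-density gap follows directly from per-reaction energies; the chemical figure below is a rough assumed value:

```python
# The "7 orders of magnitude" figure above, from per-reaction energies.
# The chemical-bond energy is a rough assumed value.

fission_ev = 200e6     # energy released per U-235 fission, ~200 MeV in eV
chemical_ev = 5.0      # assumed typical chemical reaction energy, eV

ratio = fission_ev / chemical_ev
print(f"Fission releases ~{ratio:.0e} times more energy per reaction")
# ~4e+07, i.e. about seven orders of magnitude, matching the text.
```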
Radioactive decay has been used on a relatively small scale (few kW), mostly to power space missions and experiments by using radioisotope thermoelectric generators such as those developed at Idaho National Laboratory.
The economics of new nuclear power plants is a controversial subject, since there are diverging views on this topic, and multibillion-dollar investments depend on the choice of an energy source.
Nuclear power plants typically have high capital costs for building the plant, but low fuel costs.
Comparison with other power generation methods is strongly dependent on assumptions about construction timescales and capital financing for nuclear plants as well as the future costs of fossil fuels and renewables as well as for energy storage solutions for intermittent power sources.
On the other hand, measures to mitigate global warming, such as a carbon tax or carbon emissions trading, may favor the economics of nuclear power.
Analysis of the economics of nuclear power must also take into account who bears the risks of future uncertainties.
To date, all operating nuclear power plants have been developed by state-owned or regulated electric utility monopolies.
Many countries have now liberalized the electricity market where these risks, and the risk of cheaper competitors emerging before capital costs are recovered, are borne by plant suppliers and operators rather than consumers, which leads to a significantly different evaluation of the economics of new nuclear power plants.
Nuclear power plants, though capable of some grid-load following, are typically run as much as possible to keep the cost of the generated electrical energy as low as possible, supplying mostly base-load electricity.
Peer-reviewed analyses of the available cost trends of nuclear power since its inception show large disparities by nation, design, build rate and the establishment of familiarity in expertise. The two nations for which data were available that produced reactors at a lower cost trend than prior facilities in the 2000s were India and South Korea. In the history of civilian reactor power, certain designs held considerable early economic advantages over competitors. The CANDU, for example, realized a much higher capacity factor and reliability than Generation II LWRs up to about the 1990s, when LWRs in the U.S. began to utilize improved enrichment permitting longer operation times without stoppages. The CANDU design also allowed Canada to forgo uranium enrichment facilities, and due to its on-line refueling design, the larger set of PHWRs of which the CANDU is a part continues to hold many world records for the longest continual electricity generation without stoppage, routinely close to and over 800 days before maintenance checks. The specific record as of 2019 is held by a PHWR at Kaiga Atomic Power Station, which generated electricity at its nameplate rating continuously for 962 days.
In an analysis by M. V. Ramana, India's PHWR fleet was constructed, fuelled and continues to operate close to the price of Indian coal power stations. As of 2015, only the indigenously financed and constructed South Korean OPR-1000 fleet had been completed at a similar price.
The Fukushima Daiichi nuclear disaster is expected to increase the costs of operating and new LWR power stations, due to increased requirements for on-site spent fuel management and elevated design basis threats.
Nuclear reactors have three unique characteristics that affect their safety, as compared to other power plants.
Firstly, intensely radioactive materials are present in a nuclear reactor. Their release to the environment could be hazardous.
Secondly, the fission products, which make up most of the intensely radioactive substances in the reactor, continue to generate a significant amount of decay heat even after the fission chain reaction has stopped. If the heat cannot be removed from the reactor, the fuel rods may overheat and release radioactive materials.
Thirdly, a criticality accident (a rapid increase of the reactor power) is possible in certain reactor designs if the chain reaction cannot be controlled.
These three characteristics have to be taken into account when designing nuclear reactors.
All modern reactors are designed so that an uncontrolled increase of the reactor power is prevented by natural feedback mechanisms: if the temperature or the amount of steam in the reactor increases, the fission rate inherently decreases, a property achieved by designing in a negative void coefficient of reactivity. The chain reaction can also be manually stopped by inserting control rods into the reactor core. Emergency core cooling systems (ECCS) can remove the decay heat from the reactor if normal cooling systems fail. If the ECCS fails, multiple physical barriers limit the release of radioactive materials to the environment even in the case of an accident, the last of which is the large containment building. Approximately 120 reactors, including all those in Switzerland before the Fukushima accident and all reactors in Japan after it, incorporate filtered containment venting systems on the containment structure, designed to relieve containment pressure during an accident by releasing gases to the environment while retaining most of the fission products in the filter structures.
With a death rate of 0.07 per TWh, nuclear power remains the safest energy source per unit of energy generated.
Some serious nuclear and radiation accidents have occurred.
The severity of nuclear accidents is generally classified using the International Nuclear Event Scale (INES) introduced by the International Atomic Energy Agency (IAEA).
The scale ranks anomalous events or accidents on a scale from 0 (a deviation from normal operation that poses no safety risk) to 7 (a major accident with widespread effects).
There have been 3 accidents of level 5 or higher in the civilian nuclear power industry, two of which, the Chernobyl accident and the Fukushima accident, are ranked at level 7.
The Chernobyl accident in 1986 caused approximately 50 deaths from direct and indirect effects, and some temporary serious injuries.
The future predicted mortality from cancer increases is usually estimated at some 4,000 deaths in the decades to come. Higher rates of the routinely treatable thyroid cancer, expected to be the only causally linked cancer type, will likely be seen in future large studies.
The Fukushima Daiichi nuclear accident was caused by the 2011 Tohoku earthquake and tsunami.
The accident has not caused any radiation-related deaths, but resulted in radioactive contamination of surrounding areas.
The difficult Fukushima disaster cleanup will take 40 or more years, and is expected to cost tens of billions of dollars.
The Three Mile Island accident in 1979 was a smaller scale accident, rated at INES level 5.
There were no direct or indirect deaths caused by the accident.
According to Benjamin K. Sovacool, fission energy accidents ranked first among energy sources in terms of their total economic cost, accounting for 41 percent of all property damage attributed to energy accidents.
Another analysis, presented in the international journal "Human and Ecological Risk Assessment", found that coal, oil, liquid petroleum gas and hydroelectric accidents (primarily due to the Banqiao dam burst) have resulted in greater economic impacts than nuclear power accidents, comparing nuclear power's "latent" deaths, such as cancer, with the "immediate" deaths of other energy sources per unit of energy generated (GWeyr). This study does not include cancer and other indirect deaths created by fossil fuel consumption in its "severe accident" classification, an accident with more than 5 fatalities.
Nuclear power works under an insurance framework that limits or structures accident liabilities in accordance with the Paris convention on nuclear third-party liability, the Brussels supplementary convention, the Vienna convention on civil liability for nuclear damage and the Price-Anderson Act in the United States.
It is often argued that this potential shortfall in liability represents an external cost not included in the cost of nuclear electricity; but the cost is small, amounting to about 0.1% of the levelized cost of electricity, according to a CBO study.
These beyond-regular-insurance costs for worst-case scenarios are not unique to nuclear power, as hydroelectric power plants are similarly not fully insured against a catastrophic event such as the Banqiao Dam disaster, where 11 million people lost their homes and from 30,000 to 200,000 people died, or large dam failures in general. As private insurers base dam insurance premiums on limited scenarios, major disaster insurance in this sector is likewise provided by the state.
In terms of lives lost per unit of energy generated, nuclear power has caused fewer accidental deaths per unit of energy generated than all other major sources of energy generation.
Energy produced by coal, petroleum, natural gas and hydropower has caused more deaths per unit of energy generated due to air pollution and energy accidents.
This is found when comparing the immediate deaths from other energy sources to both the immediate nuclear related deaths from accidents and also including the latent, or predicted, indirect cancer deaths from nuclear energy accidents.
When the combined immediate and indirect fatalities from nuclear power and all fossil fuels are compared, including fatalities resulting from the mining of the natural resources necessary for power generation and from air pollution, the use of nuclear power has been calculated to have prevented about 1.8 million deaths between 1971 and 2009, by reducing the proportion of energy that would otherwise have been generated by fossil fuels, and is projected to continue to do so.
Following the 2011 Fukushima nuclear disaster, it has been estimated that if Japan had never adopted nuclear power, accidents and pollution from coal or gas plants would have caused more lost years of life.
Forced evacuation from a nuclear accident may lead to social isolation, anxiety, depression, psychosomatic medical problems, reckless behavior, and even suicide.
Such was the outcome of the 1986 Chernobyl nuclear disaster in Ukraine.
A comprehensive 2005 study concluded that "the mental health impact of Chernobyl is the largest public health problem unleashed by the accident to date".
Frank N. von Hippel, an American scientist, commented on the 2011 Fukushima nuclear disaster, saying that a disproportionate radiophobia, or "fear of ionizing radiation could have long-term psychological effects on a large portion of the population in the contaminated areas".
A 2015 report in "Lancet" explained that serious impacts of nuclear accidents were often not directly attributable to radiation exposure, but rather social and psychological effects.
Evacuation and long-term displacement of affected populations created problems for many people, especially the elderly and hospital patients.
In January 2015, the number of Fukushima evacuees was around 119,000, compared with a peak of around 164,000 in June 2012.
Terrorists could target nuclear power plants in an attempt to release radioactive contamination into the community. The United States 9/11 Commission has said that nuclear power plants were potential targets originally considered for the September 11, 2001 attacks. An attack on a reactor's spent fuel pool could also be serious, as these pools are less protected than the reactor core. The release of radioactivity could lead to thousands of near-term deaths and greater numbers of long-term fatalities.
In the United States, the NRC carries out "Force on Force" (FOF) exercises at all nuclear power plant sites at least once every three years.
In the United States, plants are surrounded by a double row of tall fences which are electronically monitored.
The plant grounds are patrolled by a sizeable force of armed guards.
Insider sabotage is also a threat because insiders can observe and work around security measures.
Successful insider crimes depended on the perpetrators' observation and knowledge of security vulnerabilities.
A fire caused 5–10 million dollars worth of damage to New York's Indian Point Energy Center in 1971.
The arsonist turned out to be a plant maintenance worker. Some reactors overseas have also reported varying levels of sabotage by workers.
Many technologies and materials associated with the creation of a nuclear power program have a dual-use capability, in that they can be used to make nuclear weapons if a country chooses to do so. When this happens a nuclear power program can become a route leading to a nuclear weapon or a public annex to a "secret" weapons program. The concern over Iran's nuclear activities is a case in point.
As of April 2012 there were thirty-one countries with civil nuclear power plants, of which nine have nuclear weapons, with the vast majority of these nuclear weapons states having produced weapons before building commercial fission electricity stations.
Moreover, the re-purposing of civilian nuclear industries for military purposes would be a breach of the Non-proliferation treaty, to which 190 countries adhere.
A fundamental goal for global security is to minimize the nuclear proliferation risks associated with the expansion of nuclear power.
The Global Nuclear Energy Partnership was an international effort to create a distribution network in which developing countries in need of energy would receive nuclear fuel at a discounted rate, in exchange for agreeing to forgo their own indigenous development of a uranium enrichment program.
The France-based Eurodif/"European Gaseous Diffusion Uranium Enrichment Consortium" is a program that successfully implemented this concept, with Spain and other countries without enrichment facilities buying a share of the fuel produced at the French controlled enrichment facility, but without a transfer of technology.
Iran was an early participant from 1974, and remains a shareholder of Eurodif via Sofidif.
A 2009 United Nations report said that:
the revival of interest in nuclear power could result in the worldwide dissemination of uranium enrichment and spent fuel reprocessing technologies, which present obvious risks of proliferation as these technologies can produce fissile materials that are directly usable in nuclear weapons.
On the other hand, power reactors can also reduce nuclear weapons arsenals when military grade nuclear materials are reprocessed to be used as fuel in nuclear power plants.
The Megatons to Megawatts Program, the brainchild of Thomas Neff of MIT, is the single most successful non-proliferation program to date.
Up to 2005, the Megatons to Megawatts Program had processed $8 billion of highly enriched, weapons-grade uranium into low enriched uranium suitable as nuclear fuel for commercial fission reactors by diluting it with natural uranium.
This corresponds to the elimination of 10,000 nuclear weapons.
For approximately two decades, this material generated nearly 10 percent of all the electricity consumed in the United States (about half of all U.S. nuclear electricity generated), with a total of around 7 trillion kilowatt-hours of electricity produced, enough energy to power the entire United States electric grid for about two years. In total it is estimated to have cost $17 billion, a "bargain for US ratepayers", with Russia profiting $12 billion from the deal. This was much-needed profit for the Russian nuclear oversight industry, which, after the collapse of the Soviet economy, had difficulties paying for the maintenance and security of the Russian Federation's highly enriched uranium and warheads.
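The quoted totals are mutually consistent, as a quick check shows; the consumption figure below is a rough assumed value:

```python
# Consistency check on the Megatons to Megawatts figures above.
# The US consumption figure is a rough assumed value.

us_annual_twh = 3900    # assumed US electricity consumption, TWh per year
share = 0.10            # ~10 percent supplied by downblended warhead uranium
years = 20              # "approximately two decades"

total_twh = us_annual_twh * share * years
print(f"Implied total: {total_twh:,.0f} TWh "
      f"(~{total_twh / 1000:.1f} trillion kWh)")
# ~7,800 TWh, close to the "around 7 trillion kilowatt-hours" in the text.
```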
The Megatons to Megawatts Program was hailed as a major success by anti-nuclear weapon advocates as it has largely been the driving force behind the sharp reduction in the quantity of nuclear weapons worldwide since the cold war ended.
However, without an increase in nuclear reactors and greater demand for fissile fuel, the cost of dismantling and down-blending has dissuaded Russia from continuing its disarmament.
As of 2013 Russia appears to not be interested in extending the program.
Nuclear power is one of the leading low-carbon methods of producing electricity, and in terms of total life-cycle greenhouse gas emissions per unit of energy generated, has emission values comparable to or lower than renewable energy.
A 2014 analysis of the carbon footprint literature by the Intergovernmental Panel on Climate Change (IPCC) reported that the embodied total life-cycle emission intensity of fission electricity has a median value of 12 g CO2-eq/kWh, the lowest of all commercial baseload energy sources. This is contrasted with coal and natural gas at 820 and 490 g CO2-eq/kWh respectively.
From the beginning of its commercialization in the 1970s, nuclear power has prevented the emission of about 64 billion tonnes of carbon dioxide equivalent that would have otherwise resulted from the burning of fossil fuels in thermal power stations.
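The cumulative figure can be roughly reproduced from the intensity gap above; the cumulative generation estimate below is an assumption:

```python
# Rough reconstruction of the ~64 Gt CO2-eq avoided-emissions figure above.
# The cumulative generation estimate is an assumption; the intensities are
# the IPCC medians quoted in the text.

cumulative_twh = 80_000     # assumed cumulative nuclear output since the 1970s
coal_g_kwh = 820            # IPCC median for coal, g CO2-eq/kWh
nuclear_g_kwh = 12          # IPCC median for nuclear, g CO2-eq/kWh

avoided_g = cumulative_twh * 1e9 * (coal_g_kwh - nuclear_g_kwh)
print(f"Avoided emissions: ~{avoided_g / 1e15:.0f} Gt CO2-eq")
# ~65 Gt, in line with the ~64 billion tonnes stated above.
```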
A person's absorbed natural background radiation averages 2.4 mSv/a globally, but frequently varies between 1 mSv/a and 13 mSv/a, depending mostly on the geology the person resides upon. According to the United Nations (UNSCEAR), regular nuclear power plant operations, including the nuclear fuel cycle, add 0.0002 millisieverts (mSv) per year to this public exposure as a global average.
The average dose from operating NPPs to the local populations around them is "less than" 0.0001 mSv/a. The average dose to those living within 50 miles of a coal power plant is over three times this dose, 0.0003 mSv/a.
According to a 2008 report, the most affected surrounding populations and male recovery personnel at Chernobyl received an average initial dose of 50 to 100 mSv over a few hours to weeks, while the remaining global legacy of the worst nuclear power plant accident is an average exposure of 0.002 mSv/a, continually dropping at the decay rate from the initial high of 0.04 mSv per person, averaged over the entire populace of the Northern Hemisphere in the year of the accident, 1986.
Slowing global warming requires a transition to a low-carbon economy, mainly by burning far less fossil fuel. Limiting global warming to 1.5 degrees C is technically possible if no new fossil fuel power plants are built from 2019. This has generated considerable interest and dispute over the best path forward to rapidly replace fossil fuels in the global energy mix, with intense academic debate. The IEA has said that countries without nuclear power should consider developing it, as well as their renewable power.
In developed nations the economically feasible geography for new hydropower is lacking, with every geographically suitable area largely already exploited. Proponents of wind and solar energy claim these resources alone could eliminate the need for nuclear power.
Some analysts argue that conventional renewable energy sources, wind and solar, do not offer the scalability necessary for a large-scale decarbonization of the electric grid, mainly due to intermittency-related considerations; some of these commentators have also questioned the links between the anti-nuclear movement and the fossil fuel industry. They point, in support of their assessment, to the expansion of the coal-burning Lippendorf Power Station in Germany and the 2015 opening of a large, 1,730 MW coal-burning power station in Moorburg, the only such coal-burning facility of its kind to commence operations in Western Europe in the 2010s. Germany is likely to miss its 2020 emission reduction target.
Several studies suggest that it might be theoretically possible to cover a majority of world energy generation with new renewable sources.
The Intergovernmental Panel on Climate Change (IPCC) has said that if governments were supportive, renewable energy supply could account for close to 80% of the world's energy use by 2050.
Analysis in 2015 by professor and chair of Environmental Sustainability Barry W. Brook and his colleagues on the topic of replacing fossil fuels entirely, from the electric grid of the world, has determined that at the historically modest and proven-rate at which nuclear energy was added to and replaced fossil fuels in France and Sweden during each nation's building programs in the 1980s, nuclear energy could displace or remove fossil fuels from the electric grid completely within 10 years, "allow[ing] the world to meet the most stringent greenhouse-gas mitigation targets".
In a similar analysis, Brook had earlier determined that 50% of all global energy, that is not solely electricity, but transportation synthetic fuels etc. could be generated within approximately 30 years, if the global nuclear fission build rate was identical to each of these nation's already proven installation rates in units of installed nameplate capacity, GW per year, per unit of global GDP (GW/year/$).
This is in contrast to the conceptual studies for a "100% renewable energy" world, which would require a global investment per year that is orders of magnitude more costly and that has no historical precedent, along with the far greater land area that would have to be devoted to wind, wave and solar projects, and the inherent assumption that humanity will use less, not more, energy in the future. As Brook notes, the "principal limitations on nuclear fission are not technical, economic or fuel-related, but are instead linked to complex issues of societal acceptance, fiscal and political inertia, and inadequate critical evaluation of the real-world constraints facing [the other] low-carbon alternatives."
In some places which aim to phase out fossil fuels in favor of low-carbon power, such as Britain, seasonal energy storage is difficult to provide, so having renewables supply over 60% of electricity might be expensive; whether interconnectors or new nuclear would be more expensive than taking renewables over 60% is still being researched and debated. Britain's older gas-cooled nuclear reactors are not flexible enough to balance demand, wind and solar, but the island's newer water-cooled reactors should have flexibility similar to fossil-fueled power plants. According to the grid operator, from 2025 the British electricity grid may spend periods running zero-carbon, on only renewables and nuclear. However, actually supplying the electricity grid only from nuclear and renewables may have to be done together with interconnected countries, such as France in the case of Britain.
Nuclear power is comparable to, and in some cases lower than, many renewable energy sources in terms of lives lost per unit of electricity delivered.
However, compared with renewable energy, conventional nuclear reactor designs produce a smaller volume of manufacturing- and operations-related waste, though that waste notably includes the intensely radioactive spent fuel that needs to be stored or reprocessed.
A nuclear plant also needs to be disassembled and removed at the end of its life, and much of the disassembled plant must be stored as low-level nuclear waste for a few decades.
In an EU-wide 2018 assessment of progress in reducing greenhouse gas emissions per capita, France and Sweden were the only two large industrialized nations within the EU to receive a positive rating; every other country received a "poor" to "very poor" grade.
A 2018 analysis by MIT argued that, to be much more cost-effective as they approach deep decarbonization, electricity systems should integrate baseload low carbon resources, such as nuclear, with renewables, storage and demand response.
Nuclear power stations require approximately one square kilometer of land per typical reactor. Environmentalists and conservationists have begun to question global renewable energy expansion proposals, objecting to the frequently controversial use of once-forested land to site renewable energy systems. Seventy-five academic conservationists signed a letter suggesting a more effective policy to mitigate climate change: reforesting the land proposed for renewable energy production to its prior natural landscape, using the native trees that previously inhabited it, in tandem with the lower land-use footprint of nuclear energy, as the path to meeting both carbon emission reduction commitments and the landscape rewilding programs that are part of global native species protection and reintroduction initiatives.
These scientists argue that government commitments to increase renewable energy usage and simultaneous commitments to expand areas of biological conservation are two competing land-use outcomes that are increasingly coming into conflict. With the existing protected areas for conservation presently regarded as insufficient to safeguard biodiversity, "the conflict for space between energy production and habitat will remain one of the key future conservation issues to resolve."
The nuclear power debate concerns the controversy which has surrounded the deployment and use of nuclear fission reactors to generate electricity from nuclear fuel for civilian purposes. The debate peaked during the 1970s and 1980s, when, in some countries, it "reached an intensity unprecedented in the history of technology controversies".
Proponents of nuclear energy regard it as a sustainable energy source that reduces carbon emissions and increases energy security by decreasing dependence on imported energy sources. M. King Hubbert, who popularized the concept of peak oil, saw oil as a resource that would run out and considered nuclear energy its replacement.
Proponents also claim that the present quantity of nuclear waste is small and can be reduced through the technology of newer reactors, and that the operational safety record of fission electricity is unparalleled.
Opponents believe that nuclear power poses many threats to people and the environment such as the risk of nuclear weapons proliferation and terrorism. They also contend that reactors are complex machines where many things can and have gone wrong. In years past, they also argued that when all the energy-intensive stages of the nuclear fuel chain are considered, from uranium mining to nuclear decommissioning, nuclear power is neither a low-carbon nor an economical electricity source.
Arguments of economics and safety are used by both sides of the debate.
Current fission reactors in operation around the world are second or third generation systems, with most of the first-generation systems having been already retired.
Research into advanced generation IV reactor types was officially started by the Generation IV International Forum (GIF) based on eight technology goals, including improved economics, safety, proliferation resistance and natural resource utilization, and the ability to consume existing nuclear waste in the production of electricity.
Most of these reactors differ significantly from current operating light water reactors, and are expected to be available for commercial construction after 2030.
Hybrid nuclear power is a proposed means of generating power by use of a combination of nuclear fusion and fission processes. The concept dates to the 1950s, and was briefly advocated by Hans Bethe during the 1970s, but largely remained unexplored until a revival of interest in 2009, due to delays in the realization of pure fusion. When a sustained nuclear fusion power plant is built, it has the potential to be capable of extracting all the fission energy that remains in spent fission fuel, reducing the volume of nuclear waste by orders of magnitude, and more importantly, eliminating all actinides present in the spent fuel, substances which cause security concerns.
Nuclear fusion reactions have the potential to be safer and generate less radioactive waste than fission.
These reactions appear potentially viable, though technically quite difficult and have yet to be created on a scale that could be used in a functional power plant.
Fusion power has been under theoretical and experimental investigation since the 1950s.
Several experimental nuclear fusion reactors and facilities exist.
The largest and most ambitious international nuclear fusion project currently in progress is ITER, a large tokamak under construction in France.
ITER is planned to pave the way for commercial fusion power by demonstrating self-sustained nuclear fusion reactions with positive energy gain.
Construction of the ITER facility began in 2007, but the project has run into many delays and budget overruns.
The facility is now not expected to begin operations until 2027, 11 years later than initially anticipated. A follow-on commercial nuclear fusion power station, DEMO, has been proposed. There are also suggestions for a power plant based upon a different fusion approach: an inertial confinement fusion power plant.
Fusion-powered electricity generation was initially believed to be readily achievable, as fission-electric power had been. However, the extreme requirements for continuous reactions and plasma containment led to projections being extended by several decades. In 2010, more than 60 years after the first attempts, commercial power production was still believed to be unlikely before 2050.
Nuclear proliferation
Nuclear proliferation is the spread of nuclear weapons, fissionable material, and weapons-applicable nuclear technology and information to nations not recognized as "Nuclear Weapon States" by the Treaty on the Non-Proliferation of Nuclear Weapons, commonly known as the Non-Proliferation Treaty or NPT. Proliferation has been opposed by many nations with and without nuclear weapons, as governments fear that more countries with nuclear weapons will increase the possibility of nuclear warfare (up to and including the so-called countervalue targeting of civilians with nuclear weapons), de-stabilize international or regional relations, or infringe upon the national sovereignty of states.
Four countries besides the five recognized Nuclear Weapons States have acquired, or are presumed to have acquired, nuclear weapons: India, Pakistan, North Korea, and Israel. None of these four is a party to the NPT, although North Korea acceded to the NPT in 1985, then withdrew in 2003 and conducted announced nuclear tests in 2006, 2009, 2013, 2016, and 2017. One critique of the NPT is that the treaty is discriminatory in the sense that only those countries that tested nuclear weapons before 1968 are recognized as nuclear weapon states while all other states are treated as non-nuclear-weapon states who can only join the treaty if they forswear nuclear weapons.
Research into the development of nuclear weapons was initially undertaken during World War II by the United States (in cooperation with the United Kingdom and Canada), Germany, Japan, and the USSR. The United States was the first and is the only country to have used a nuclear weapon in war, when it used two bombs against Japan in August 1945. After surrendering to end the war, Germany and Japan ceased to be involved in any nuclear weapon research. In August 1949, the USSR tested a nuclear weapon, becoming the second country to detonate a nuclear bomb. The United Kingdom first tested a nuclear weapon in October 1952. France first tested a nuclear weapon in 1960. The People's Republic of China detonated a nuclear weapon in 1964. India conducted its first nuclear test in 1974, which prompted Pakistan to develop its own nuclear program and, when India conducted a second series of nuclear tests in 1998, Pakistan followed with a series of tests of its own. In 2006, North Korea conducted its first nuclear test.
Early efforts to prevent nuclear proliferation involved intense government secrecy, the wartime acquisition of known uranium stores (the Combined Development Trust), and at times even outright sabotage—such as the bombing of a heavy-water facility thought to be used for a German nuclear program. These efforts began immediately after the discovery of nuclear fission and its military potential. None of these efforts were explicitly public, because the weapon developments themselves were kept secret until the bombing of Hiroshima.
Earnest international efforts to promote nuclear non-proliferation began soon after World War II, when the Truman Administration proposed the Baruch Plan of 1946, named after Bernard Baruch, America's first representative to the United Nations Atomic Energy Commission. The Baruch Plan, which drew heavily from the Acheson–Lilienthal Report of 1946, proposed the verifiable dismantlement and destruction of the U.S. nuclear arsenal (which, at that time, was the only nuclear arsenal in the world) after all governments had cooperated successfully to accomplish two things: (1) the establishment of an "international atomic development authority," which would actually own and control all military-applicable nuclear materials and activities, and (2) the creation of a system of automatic sanctions, which not even the U.N. Security Council could veto, and which would proportionately punish states attempting to acquire the capability to make nuclear weapons or fissile material.
Baruch's plea for the destruction of nuclear weapons invoked basic moral and religious intuitions. In one part of his address to the UN, Baruch said, "Behind the black portent of the new atomic age lies a hope which, seized upon with faith, can work out our salvation. If we fail, then we have damned every man to be the slave of Fear. Let us not deceive ourselves. We must elect World Peace or World Destruction... We must answer the world's longing for peace and security." With this remark, Baruch helped launch the field of nuclear ethics, to which many policy experts and scholars have contributed.
Although the Baruch Plan enjoyed wide international support, it failed to emerge from the UNAEC because the Soviet Union planned to veto it in the Security Council. Still, it remained official American policy until 1953, when President Eisenhower made his "Atoms for Peace" proposal before the U.N. General Assembly. Eisenhower's proposal led eventually to the creation of the International Atomic Energy Agency (IAEA) in 1957. Under the "Atoms for Peace" program thousands of scientists from around the world were educated in nuclear science and then dispatched home, where many later pursued secret weapons programs in their home country.
Efforts to conclude an international agreement to limit the spread of nuclear weapons did not begin until the early 1960s, after four nations (the United States, the Soviet Union, the United Kingdom and France) had acquired nuclear weapons (see List of states with nuclear weapons for more information). Although these efforts stalled in the early 1960s, they resumed in 1964, after China detonated a nuclear weapon. In 1968, governments represented at the Eighteen Nation Disarmament Committee (ENDC) finished negotiations on the text of the NPT. In June 1968, the U.N. General Assembly endorsed the NPT with General Assembly Resolution 2373 (XXII), and in July 1968, the NPT opened for signature in Washington, DC, London and Moscow. The NPT entered into force in March 1970.
Since the mid-1970s, the primary focus of non-proliferation efforts has been to maintain, and even increase, international control over the fissile material and specialized technologies necessary to build such devices, because these are the most difficult and expensive parts of a nuclear weapons program. The main materials whose generation and distribution are controlled are highly enriched uranium and plutonium. Apart from the acquisition of these special materials, the scientific and technical means to construct rudimentary but working nuclear explosive devices are considered to be within the reach of industrialized nations.
Since its founding by the United Nations in 1957, the International Atomic Energy Agency (IAEA) has promoted two, sometimes contradictory, missions: on the one hand, the Agency seeks to promote and spread internationally the use of civilian nuclear energy; on the other hand, it seeks to prevent, or at least detect, the diversion of civilian nuclear energy to nuclear weapons, nuclear explosive devices or purposes unknown. The IAEA now operates a safeguards system as specified under Article III of the Nuclear Non-Proliferation Treaty (NPT) of 1968, which aims to ensure that civil stocks of uranium and plutonium, as well as facilities and technologies associated with these nuclear materials, are used only for peaceful purposes and do not contribute in any way to proliferation or nuclear weapons programs. It is often argued that proliferation of nuclear weapons to many other states has been prevented by the extension of assurances and mutual defence treaties to these states by nuclear powers, but other factors, such as national prestige, or specific historical experiences, also play a part in hastening or stopping nuclear proliferation.
Dual-use technology refers to the possibility of military use of civilian nuclear power technology. Many technologies and materials associated with the creation of a nuclear power program have a dual-use capability, in that several stages of the nuclear fuel cycle allow diversion of nuclear materials for nuclear weapons. When this happens a nuclear power program can become a route leading to the atomic bomb or a public annex to a secret bomb program. The crisis over Iran’s nuclear activities is a case in point.
Many UN and US agencies warn that building more nuclear reactors unavoidably increases nuclear proliferation risks. A fundamental goal for American and global security is to minimize the proliferation risks associated with the expansion of nuclear power. If this development is "poorly managed or efforts to contain risks are unsuccessful, the nuclear future will be dangerous". For nuclear power programs to be developed and managed safely and securely, it is important that countries have domestic "good governance" characteristics that will encourage proper nuclear operations and management:
These characteristics include low degrees of corruption (to avoid officials selling materials and technology for their own personal gain as occurred with the A.Q. Khan smuggling network in Pakistan), high degrees of political stability (defined by the World Bank as “likelihood that the government will be destabilized or overthrown by unconstitutional or violent means, including motivated violence and terrorism”), high governmental effectiveness scores (a World Bank aggregate measure of “the quality of the civil service and the degree of its independence from political pressures [and] the quality of policy formulation and implementation”), and a strong degree of regulatory competence.
At present, 189 countries are States Parties to the "Treaty on the Nonproliferation of Nuclear Weapons", more commonly known as the Nuclear Non-Proliferation Treaty or NPT. These include the five Nuclear Weapons States (NWS) recognized by the NPT: the People's Republic of China, France, Russian Federation, the UK, and the United States.
Notable non-signatories to the NPT are Israel, Pakistan, and India (the latter two have since tested nuclear weapons, while Israel is considered by most to be an unacknowledged nuclear weapons state). North Korea was once a signatory but withdrew in January 2003. The legality of North Korea's withdrawal is debatable but as of 9 October 2006, North Korea clearly possesses the capability to make a nuclear explosive device.
The IAEA was established on 29 July 1957 to help nations develop nuclear energy for peaceful purposes. Allied to this role is the administration of safeguards arrangements to provide assurance to the international community that individual countries are honoring their commitments under the treaty. Though established under its own international treaty, the IAEA reports to both the United Nations General Assembly and the Security Council.
The IAEA regularly inspects civil nuclear facilities to verify the accuracy of documentation supplied to it. The agency checks inventories, and samples and analyzes materials. Safeguards are designed to deter diversion of nuclear material by increasing the risk of early detection. They are complemented by controls on the export of sensitive technology from countries such as the UK and the United States through voluntary bodies such as the Nuclear Suppliers Group. The main concern of the IAEA is that uranium not be enriched beyond what is necessary for commercial civil plants, and that plutonium which is produced by nuclear reactors not be refined into a form that would be suitable for bomb production.
Traditional safeguards are arrangements to account for and control the use of nuclear materials. This verification is a key element in the international system which ensures that uranium in particular is used only for peaceful purposes.
Parties to the NPT agree to accept technical safeguard measures applied by the IAEA. These require that operators of nuclear facilities maintain and declare detailed accounting records of all movements and transactions involving nuclear material. Over 550 facilities and several hundred other locations are subject to regular inspection, with their records and nuclear material audited. Inspections by the IAEA are complemented by other measures such as surveillance cameras and instrumentation.
The inspections act as an alert system providing a warning of the possible diversion of nuclear material from peaceful activities. The system relies on:
All NPT non-weapons states must accept these full-scope safeguards. In the five weapons states plus the non-NPT states (India, Pakistan and Israel), facility-specific safeguards apply. IAEA inspectors regularly visit these facilities to verify completeness and accuracy of records.
The terms of the NPT cannot be enforced by the IAEA itself, nor can nations be forced to sign the treaty. In reality, as shown in Iraq and North Korea, safeguards can be backed up by diplomatic, political and economic measures.
While traditional safeguards easily verified the correctness of formal declarations by suspect states, in the 1990s attention turned to what might not have been declared. While accepting safeguards at declared facilities, Iraq had set up elaborate equipment elsewhere in an attempt to enrich uranium to weapons grade. North Korea attempted to use research reactors (not commercial electricity-generating reactors) and a reprocessing plant to produce some weapons-grade plutonium.
The weakness of the NPT regime lay in the fact that no obvious diversion of material was involved. The uranium used as fuel probably came from indigenous sources, and the nuclear facilities were built by the countries themselves without being declared or placed under safeguards. Iraq, as an NPT party, was obliged to declare all facilities but did not do so. Nevertheless, the activities were detected and brought under control using international diplomacy. In Iraq, a military defeat assisted this process.
In North Korea, the activities concerned took place before the conclusion of its NPT safeguards agreement. With North Korea, the promised provision of commercial power reactors appeared to resolve the situation for a time, but it later withdrew from the NPT and declared it had nuclear weapons.
In 1993 a program was initiated to strengthen and extend the classical safeguards system, and a model protocol was agreed by the IAEA Board of Governors in 1997. The measures boosted the IAEA's ability to detect undeclared nuclear activities, including those with no connection to the civil fuel cycle.
Innovations were of two kinds. Some could be implemented on the basis of the IAEA's existing legal authority through safeguards agreements and inspections. Others required further legal authority to be conferred through an Additional Protocol. This must be agreed by each non-weapons state with the IAEA, as a supplement to any existing comprehensive safeguards agreement. Weapons states have agreed to accept the principles of the model additional protocol.
Key elements of the model Additional Protocol:
As of 3 July 2015, 146 countries have signed Additional Protocols and 126 have brought them into force. The IAEA is also applying the measures of the Additional Protocol in Taiwan. Under the Joint Comprehensive Plan of Action, Iran has agreed to implement its protocol provisionally. Among the leading countries that have not signed the Additional Protocol are Egypt, which says it will not sign until Israel accepts comprehensive IAEA safeguards, and Brazil, which opposes making the protocol a requirement for international cooperation on enrichment and reprocessing, but has not ruled out signing.
The greatest risk from nuclear weapons proliferation comes from countries which have not joined the NPT and which have significant unsafeguarded nuclear activities; India, Pakistan, and Israel fall within this category. While safeguards apply to some of their activities, others remain beyond scrutiny.
A further concern is that countries may develop various sensitive nuclear fuel cycle facilities and research reactors under full safeguards and then subsequently opt out of the NPT. Bilateral agreements, such as those insisted upon by Australia and Canada for the sale of uranium, address this by including fallback provisions, but many countries are outside the scope of these agreements. If a nuclear-capable country does leave the NPT, it is likely to be reported by the IAEA to the UN Security Council, just as if it were in breach of its safeguards agreement. Trade sanctions would then be likely.
IAEA safeguards can help ensure that uranium supplied as nuclear fuel and other nuclear supplies do not contribute to nuclear weapons proliferation. In fact, the worldwide application of those safeguards and the substantial world trade in uranium for nuclear electricity make the proliferation of nuclear weapons much less likely.
The Additional Protocol, once it is widely in force, will provide credible assurance that there are no undeclared nuclear materials or activities in the states concerned. This will be a major step forward in preventing nuclear proliferation.
The Nuclear Suppliers Group communicated its guidelines, essentially a set of export rules, to the IAEA in 1978. These were to ensure that transfers of nuclear material or equipment would not be diverted to unsafeguarded nuclear fuel cycle or nuclear explosive activities, and formal government assurances to this effect were required from recipients. The Guidelines also recognised the need for physical protection measures in the transfer of sensitive facilities, technology and weapons-usable materials, and strengthened retransfer provisions. The group began with seven members—the United States, the former USSR, the United Kingdom, France, Germany, Canada and Japan—but now includes 46 countries including all five nuclear weapons states.
The International Framework for Nuclear Energy Cooperation is an international project involving 25 partner countries, 28 observer and candidate partner countries, and the International Atomic Energy Agency, the Generation IV International Forum, and the European Commission. Its goal is to "[..] provide competitive, commercially-based services as an alternative to a state’s development of costly, proliferation-sensitive facilities, and address other issues associated with the safe and secure management of used fuel and radioactive waste."
According to Kenneth D. Bergeron's "Tritium on Ice: The Dangerous New Alliance of Nuclear Weapons and Nuclear Power", tritium is not classified as a "special nuclear material" but rather as a by-product. It is seen as an important litmus test of the seriousness of the United States' intention to disarm. This radioactive super-heavy hydrogen isotope is used to boost the efficiency of fissile materials in nuclear weapons. The United States resumed tritium production in 2003 for the first time in 15 years. This could indicate replenishment of the nuclear weapons stockpile's tritium, which is necessary because the isotope naturally decays.
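The replenishment point follows from simple decay arithmetic. The sketch below assumes only tritium's half-life of roughly 12.3 years, a standard physical constant; the time points are arbitrary illustrations.

```python
# A small sketch of exponential decay of a tritium inventory.
# Assumption: half-life of tritium ≈ 12.32 years (standard constant).
HALF_LIFE_YEARS = 12.32

def fraction_remaining(years: float) -> float:
    """Fraction of an initial tritium inventory left after `years`."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

for years in (5, 10, 15, 20):
    print(f"after {years:2d} years: {fraction_remaining(years):.1%} remains")
# After ~15 years (the production gap mentioned above), under half remains,
# which is why a weapons stockpile's tritium must be periodically replaced.
```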
In May 1995, NPT parties reaffirmed their commitment to a Fissile Materials Cut-off Treaty to prohibit the production of any further fissile material for weapons. This aims to complement the Comprehensive Nuclear-Test-Ban Treaty of 1996 (not entered into force as of 2011) and to codify commitments made by the United States, the UK, France and Russia to cease production of weapons material, as well as putting a similar ban on China. This treaty will also put more pressure on Israel, India and Pakistan to agree to international verification.
On 9 August 2005, Ayatollah Ali Khamenei issued a fatwa forbidding the production, stockpiling and use of nuclear weapons. Khamenei's official statement was made at the meeting of the International Atomic Energy Agency (IAEA) in Vienna. In February 2006, Iran formally announced that uranium enrichment within its borders had continued. Iran claims the enrichment is for peaceful purposes, but the United Kingdom, France, Germany, and the United States claim its purpose is nuclear weapons research and construction.
India, Pakistan and Israel have been "threshold" countries in terms of the international non-proliferation regime. They possess or are quickly capable of assembling one or more nuclear weapons. They have remained outside the 1970 NPT. They are thus largely excluded from trade in nuclear plant or materials, except for safety-related devices for a few safeguarded facilities.
In May 1998 India and Pakistan each exploded several nuclear devices underground. This heightened concerns regarding an arms race between them, with Pakistan involving the People's Republic of China, an acknowledged nuclear weapons state. Both countries are opposed to the NPT as it stands, and India has consistently attacked the Treaty since its inception in 1970 labeling it as a lopsided treaty in favor of the nuclear powers.
Relations between the two countries are tense and hostile, and the risks of nuclear conflict between them have long been considered quite high. Kashmir is a prime cause of bilateral tension, its sovereignty being in dispute since 1948. There is persistent low level bilateral military conflict due to alleged backing of insurgency by Pakistan in India and infiltration of Pakistani state backed militants in the Indian state of Jammu and Kashmir, along with the disputed status of Kashmir.
Both engaged in a conventional arms race in the 1980s, including sophisticated technology and equipment capable of delivering nuclear weapons. In the 1990s the arms race quickened. In 1994 India reversed a four-year trend of reduced allocations for defence, and despite its much smaller economy, Pakistan was expected to push its own expenditures yet higher. Both have lost their patrons: India, the former USSR, and Pakistan, the United States.
But it is the growth and modernization of China's nuclear arsenal and its assistance with Pakistan's nuclear power programme and, reportedly, with missile technology, which exacerbate Indian concerns. In particular, as viewed by Indian strategists, Pakistan is aided by China's People's Liberation Army.
Nuclear power for civil use is well established in India. Its civil nuclear strategy has been directed towards complete independence in the nuclear fuel cycle, necessary because of its outspoken rejection of the NPT. Owing to India's economic and technological isolation after its 1974 nuclear test, India has largely focused on developing and perfecting fast breeder technology, through intensive materials and fuel-cycle research at the dedicated fast-reactor research centre, the Indira Gandhi Centre for Atomic Research (IGCAR) at Kalpakkam, in the southern part of the country. At the moment, India has a small fast breeder reactor and is planning a much larger one (the Prototype Fast Breeder Reactor). This self-sufficiency extends from uranium exploration and mining through fuel fabrication, heavy water production, reactor design and construction, to reprocessing and waste management. It is also developing technology to utilise its abundant resources of thorium as a nuclear fuel.
India has 14 small nuclear power reactors in commercial operation, two larger ones under construction, and ten more planned. The 14 operating ones (2548 MWe total) comprise:
The two under construction and two of the planned ones are 450 MWe versions of these 200 MWe domestic products. Construction has been seriously delayed by financial and technical problems. In 2001 a final agreement was signed with Russia for the country's first large nuclear power plant, comprising two VVER-1000 reactors, under a Russian-financed US$3 billion contract. The first unit is due to be commissioned in 2007. A further two Russian units are under consideration for the site. Nuclear power supplied 3.1% of India's electricity in 2000.
Its weapons material appears to come from a Canadian-designed 40 MW "research" reactor which started up in 1960, well before the NPT, and a 100 MW indigenous unit in operation since 1985. Both use local uranium, as India does not import any nuclear fuel. It is estimated that India may have built up enough weapons-grade plutonium for a hundred nuclear warheads.
It is widely believed that the nuclear programs of India and Pakistan used CANDU reactors to produce fissionable materials for their weapons; however, this is not accurate. Both Canada (by supplying the 40 MW research reactor) and the United States (by supplying 21 tons of heavy water) supplied India with the technology necessary to create a nuclear weapons program, dubbed CIRUS (Canada-India Reactor, United States). Canada sold India the reactor on the condition that the reactor and any by-products would be "employed for peaceful purposes only". Similarly, the United States sold India heavy water for use in the reactor "only... in connection with research into and the use of atomic energy for peaceful purposes". India, in violation of these agreements, used the Canadian-supplied reactor and American-supplied heavy water to produce plutonium for its first nuclear explosion, Smiling Buddha. The Indian government controversially justified this, however, by claiming that Smiling Buddha was a "peaceful nuclear explosion."
The country has at least three other research reactors including the tiny one which is exploring the use of thorium as a nuclear fuel, by breeding fissile U-233. In addition, an advanced heavy-water thorium cycle is under development.
India exploded a nuclear device in 1974, the so-called Smiling Buddha test, which it has consistently claimed was for peaceful purposes. Others saw it as a response to China's nuclear weapons capability. It was then universally perceived, notwithstanding official denials, to possess, or to be able to quickly assemble, nuclear weapons. In 1999 it deployed its own medium-range missile and has developed an intermediate-range missile capable of reaching targets in China's industrial heartland.
In 1995 the United States quietly intervened to head off a proposed nuclear test. However, in 1998 there were five more tests in Operation Shakti. These were unambiguously military, including one claimed to be of a sophisticated thermonuclear device, and their declared purpose was "to help in the design of nuclear weapons of different yields and different delivery systems".
Indian security policies are driven by:
It perceives nuclear weapons as a cost-effective political counter to China's nuclear and conventional weaponry, and the effect of its nuclear weapons policy in provoking Pakistan is, by some accounts, considered incidental.
India has had an unhappy relationship with China. After an uneasy ceasefire ended the 1962 war, relations between the two nations were frozen until 1998. Since then a degree of high-level contact has been established and a few elementary confidence-building measures put in place. China still occupies some territory claimed by India, which it captured during the aforementioned war, and India still occupies some territory claimed by China. China's nuclear weapon and missile support for Pakistan is a major bone of contention.
American President George W. Bush met with Indian Prime Minister Manmohan Singh to discuss India's involvement with nuclear weapons. The two countries agreed that the United States would give nuclear power assistance to India.
Pakistan's nuclear power infrastructure has become well established over the years. It is dedicated to the industrial and economic development of the country. Its current nuclear policy aims to promote the socio-economic development of its people as a "foremost priority" and to fulfill energy, economic, and industrial needs from nuclear sources. There were three operational commercial nuclear power plants, with three larger ones under construction. The nuclear power plants supplied 787 megawatts (MW) (roughly 3.6%) of the country's electricity, and the country has projected production of 8,800 MW by 2030. Infrastructure established by the IAEA and the U.S. in the 1950s–1960s was based on peaceful research and development and the economic prosperity of the country.
Although civil-sector nuclear power was established in the 1950s, the country also has an active nuclear weapons program, started in the 1970s. The bomb program has its roots in the 1971 Bangladesh Liberation War, in which India's successful intervention led to a decisive victory over Pakistan and East Pakistan's independence as the new nation of Bangladesh. This large-scale but clandestine atomic bomb project was directed towards the indigenous development of reactor- and military-grade plutonium. In 1974, when India surprised the world with the successful detonation of its own bomb, codenamed "Smiling Buddha", it became "imperative for Pakistan" to pursue weapons research. According to a leading scientist in the program, it became clear that once India detonated its bomb, "Newton's third law" came into "operation"; from then on it was a classic case of "action and reaction". Earlier efforts were directed towards mastering plutonium technology from France, but that route was slowed when the plan failed after U.S. intervention to cancel the project. Contrary to popular perception, Pakistan did not forgo the "plutonium" route; it covertly continued its indigenous research under Munir Ahmad Khan and succeeded with that route in the early 1980s. Reacting to India's first nuclear weapon test, Prime Minister Zulfikar Ali Bhutto and the country's political and military science circles saw the test as a final and dangerous threat to Pakistan's "moral and physical existence". With diplomat Aziz Ahmed at his side, Prime Minister Bhutto launched a serious diplomatic offensive and aggressively maintained at the session of the United Nations Security Council:
After 1974, Bhutto's government redoubled its effort, this time equally focused on uranium and plutonium. Pakistan had established science directorates in almost all of its embassies in the important countries of the world, with theoretical physicist S.A. Butt as director. Abdul Qadeer Khan then established a network through Dubai to smuggle URENCO technology to the Engineering Research Laboratories. Earlier, he had worked with the Physics Dynamics Research Laboratories (FDO), a subsidiary of the Dutch firm VMF-Stork based in Amsterdam. Later, after joining URENCO, he gained access to the technology through photographs and documents. Contrary to popular perception, the technology that Khan brought from URENCO was based on first-generation civil reactor technology and was filled with many serious technical errors, though it was an authentic and vital link for the country's centrifuge project. After the British government stopped the British subsidiary of the American Emerson Electric Co. from shipping components to Pakistan, he described his frustration with a supplier from Germany as: "That man from the German team was unethical. When he did not get the order from us, he wrote a letter to a Labour Party member and questions were asked in [British] Parliament." By 1978, his efforts paid off and made him into a national hero.
In early 1996, the next Prime Minister of Pakistan, Benazir Bhutto, made it clear that "if India conducts a nuclear test, Pakistan could be forced to follow suit". In 1997, her statement was echoed by Prime Minister Nawaz Sharif, who maintained that "since 1972, [P]akistan had progressed significantly, and we have left that stage (developmental) far behind. Pakistan will not be made a 'hostage' to India by signing the CTBT, before (India)!" In May 1998, within weeks of India's nuclear tests, Pakistan announced that it had conducted six underground tests in the Chagai Hills, five on 28 May and one on 30 May. Seismic events consistent with these claims were recorded.
In 2004, the revelation of Khan's efforts led to the exposure of many defunct European consortiums which had defied export restrictions in the 1970s, and of many defunct Dutch companies that exported thousands of centrifuges to Pakistan as early as 1976. Many centrifuge components were apparently manufactured by the Malaysian Scomi Precision Engineering with the assistance of South Asian and German companies, and used a UAE-based computer company as a false front.
The network was widely believed to have had direct involvement from the Government of Pakistan. This claim could not be verified due to the refusal of that government to allow the IAEA to interview the alleged head of the nuclear black market, who was none other than Abdul Qadeer Khan. Confessing his crimes a month later on national television, Khan bailed out the government by taking full responsibility. An independent investigation conducted by the International Institute for Strategic Studies (IISS) confirmed that he had control over the import-export deals and that his acquisition activities were largely unsupervised by Pakistani governmental authorities. All of his activities went undetected for several years. He duly confessed to running an atomic proliferation ring from Pakistan to Iran and North Korea. He was immediately given presidential immunity. The exact nature of involvement at the governmental level is still unclear, but the manner in which the government acted cast doubt on the sincerity of Pakistan.
The Democratic People's Republic of Korea (better known as North Korea) joined the NPT in 1985 and subsequently signed a safeguards agreement with the IAEA. However, it was believed that North Korea was diverting plutonium extracted from the fuel of its reactor at Yongbyon for use in nuclear weapons. The subsequent confrontation with the IAEA on the issue of inspections and suspected violations resulted in North Korea threatening to withdraw from the NPT in 1993. This eventually led to negotiations with the United States resulting in the Agreed Framework of 1994, which provided for IAEA safeguards being applied to its reactors and spent fuel rods. These spent fuel rods were sealed in canisters by the United States to prevent North Korea from extracting plutonium from them. North Korea therefore had to freeze its plutonium programme.
During this period, Pakistan-North Korea cooperation in missile technology transfer was being established. A high-level delegation of the Pakistan military visited North Korea in August–September 1992, reportedly to discuss the supply of missile technology to Pakistan. In 1993, Prime Minister Benazir Bhutto repeatedly traveled to China and paid a state visit to North Korea. The visits are believed to be related to Pakistan's subsequent acquisition of the technology to develop its Ghauri missile system. During the period 1992–1994, A.Q. Khan was reported to have visited North Korea thirteen times. The missile cooperation program with North Korea was under the Dr. A. Q. Khan Research Laboratories. At this time China was under U.S. pressure not to supply its M-series Dongfeng missiles to Pakistan. Experts believe that, possibly with Chinese connivance and facilitation, Pakistan was thus led to approach North Korea for missile transfers. Reports indicate that North Korea was willing to supply missile sub-systems, including rocket motors, inertial guidance systems, and control and testing equipment, for US$50 million.
It is not clear what North Korea got in return. Joseph S. Bermudez Jr., in "Jane's Defence Weekly" (27 November 2002), reports that Western analysts had begun to question what North Korea received in payment for the missiles; many suspected it was nuclear technology. KRL was in charge of both the uranium program and the missile program with North Korea. It is therefore likely that cooperation in nuclear technology between Pakistan and North Korea was initiated during this period. Western intelligence agencies began to notice exchanges of personnel, technology and components between KRL and entities of the North Korean 2nd Economic Committee (responsible for weapons production).
A "New York Times" report on 18 October 2002 quoted U.S. intelligence officials having stated that Pakistan was a major supplier of critical equipment to North Korea. The report added that equipment such as gas centrifuges appeared to have been "part of a barter deal" in which North Korea supplied Pakistan with missiles. Separate reports indicate ("The Washington Times", 22 November 2002) that U.S. intelligence had as early as 1999 picked up signs that North Korea was continuing to develop nuclear arms. Other reports also indicate that North Korea had been working covertly to develop an enrichment capability for nuclear weapons for at least five years and had used technology obtained from Pakistan ("The Washington Times", 18 October 2002).
Israel is also thought to possess an arsenal of up to several hundred nuclear warheads, based on estimates of the amount of fissile material it has produced. However, this has never been openly confirmed or denied, due to Israel's policy of deliberate ambiguity.
An Israeli nuclear installation is located about ten kilometers to the south of Dimona, the Negev Nuclear Research Center. Its construction commenced in 1958, with French assistance. The official reason given by the Israeli and French governments was to build a nuclear reactor to power a "desalination plant", in order to "green the Negev". The purpose of the Dimona plant is widely assumed to be the manufacturing of nuclear weapons, and the majority of defense experts have concluded that it does in fact do that. However, the Israeli government refuses to confirm or deny this publicly, a policy it refers to as "ambiguity".
Norway sold 20 tonnes of heavy water needed for the reactor to Israel in 1959 and 1960 in a secret deal. There were no "safeguards" required in this deal to prevent usage of the heavy water for non-peaceful purposes. The British newspaper "Daily Express" accused Israel of working on a bomb in 1960.
When the United States intelligence community discovered the purpose of the Dimona plant in the early 1960s, it demanded that Israel agree to international inspections. Israel agreed, but on the condition that U.S., rather than IAEA, inspectors were used, and that Israel would receive advance notice of all inspections.
Some claim that because Israel knew the schedule of the inspectors' visits, it was able to hide the alleged purpose of the site from the inspectors by installing temporary false walls and other devices before each inspection. The inspectors eventually informed the U.S. government that their inspections were useless due to Israeli restrictions on what areas of the facility they could inspect. In 1969, the United States terminated the inspections.
In 1986, Mordechai Vanunu, a former technician at the Dimona plant, revealed to the media some evidence of Israel's nuclear program. Israeli agents abducted him in Italy, drugged him and transported him to Israel, and an Israeli court then tried him in secret on charges of treason and espionage, sentencing him to eighteen years' imprisonment. He was freed on 21 April 2004, but remained severely restricted by the Israeli government. He was arrested again on 11 November 2004, though formal charges were not immediately filed.
Comments on photographs taken by Mordechai Vanunu inside the Negev Nuclear Research Center have been made by prominent scientists. British nuclear weapons scientist Frank Barnaby, who questioned Vanunu over several days, estimated Israel had enough plutonium for about 150 weapons.
According to Lieutenant Colonel Warner D. Farr, in a report to the USAF Counterproliferation Center, while France was previously a leader in nuclear research, "Israel and France were at a similar level of expertise after WWII, and Israeli scientists could make significant contributions to the French effort." In 1986, Francis Perrin, French high-commissioner for atomic energy from 1951 to 1970, stated that in 1949 Israeli scientists were invited to the Saclay nuclear research facility, this cooperation leading to a joint effort including the sharing of knowledge between French and Israeli scientists, especially those with knowledge from the Manhattan Project.
The public stance of the two states on non-proliferation differs markedly. Pakistan has initiated a series of regional security proposals. It has repeatedly proposed a nuclear free zone in South Asia and has proclaimed its willingness to engage in nuclear disarmament and to sign the Non-Proliferation Treaty if India would do so. It has endorsed a United States proposal for a regional five power conference to consider non-proliferation in South Asia.
India has taken the view that solutions to regional security issues should be found at the international rather than the regional level, since its chief concern is with China. It therefore rejects Pakistan's proposals.
Instead, the 'Gandhi Plan', put forward in 1988, proposed the revision of the Non-Proliferation Treaty, which it regards as inherently discriminatory in favor of the nuclear-weapon States, and a timetable for complete nuclear weapons disarmament. It endorsed early proposals for a Comprehensive Test Ban Treaty and for an international convention to ban the production of highly enriched uranium and plutonium for weapons purposes, known as the 'cut-off' convention.
The United States for some years, especially under the Clinton administration, pursued a variety of initiatives to persuade India and Pakistan to abandon their nuclear weapons programs and to accept comprehensive international safeguards on all their nuclear activities. To this end, the Clinton administration proposed a conference of the five nuclear-weapon states, Japan, Germany, India and Pakistan.
India refused this and similar previous proposals, and countered with demands that other potential weapons states, such as Iran and North Korea, should be invited, and that regional limitations would only be acceptable if they were accepted equally by China. The United States would not accept the participation of Iran and North Korea and these initiatives have lapsed.
Another, more recent approach, centers on 'capping' the production of fissile material for weapons purposes, which would hopefully be followed by 'roll back'. To this end, India and the United States jointly sponsored a UN General Assembly resolution in 1993 calling for negotiations for a 'cut-off' convention. Should India and Pakistan join such a convention, they would have to agree to halt the production of fissile materials for weapons and to accept international verification on their relevant nuclear facilities (enrichment and reprocessing plants). It appears that India is now prepared to join negotiations regarding such a Cut-off Treaty, under the UN Conference on Disarmament.
Bilateral confidence-building measures between India and Pakistan to reduce the prospects of confrontation have been limited. In 1990 each side ratified a treaty not to attack the other's nuclear installations, and at the end of 1991 they provided one another with a list showing the location of all their nuclear plants, even though the respective lists were regarded as not being wholly accurate. Early in 1994 India proposed a bilateral agreement for a 'no first use' of nuclear weapons and an extension of the 'no attack' treaty to cover civilian and industrial targets as well as nuclear installations.
Having promoted the Comprehensive Test Ban Treaty since 1954, India dropped its support in 1995 and in 1996 attempted to block the Treaty. Following the 1998 tests the question has been reopened and both Pakistan and India have indicated their intention to sign the CTBT. Indian ratification may be conditional upon the five weapons states agreeing to specific reductions in nuclear arsenals. The UN Conference on Disarmament has also called upon both countries "to accede without delay to the Non-Proliferation Treaty", presumably as non-weapons states.
In 2004 and 2005, Egypt disclosed past undeclared nuclear activities and material to the IAEA. In 2007 and 2008, high-enriched and low-enriched uranium particles were found in environmental samples taken in Egypt. In 2008, the IAEA stated that Egypt's declarations were consistent with its own findings. In May 2009, "Reuters" reported that the IAEA was conducting further investigation in Egypt.
In 2003, the IAEA reported that Iran had been in breach of its obligations to comply with provisions of its safeguard agreement. In 2005, the IAEA Board of Governors voted in a rare non-consensus decision to find Iran in non-compliance with its NPT Safeguards Agreement and to report that non-compliance to the UN Security Council. In response, the UN Security Council passed a series of resolutions citing concerns about the program. Iran's representative to the UN argues sanctions compel Iran to abandon its rights under the Nuclear Nonproliferation Treaty to peaceful nuclear technology. Iran says its uranium enrichment program is exclusively for peaceful purposes and has enriched uranium to "less than 5 percent," consistent with fuel for a nuclear power plant and significantly below the purity of WEU (around 90%) typically used in a weapons program. The director general of the International Atomic Energy Agency, Yukiya Amano, said in 2009 he had not seen any evidence in IAEA official documents that Iran was developing nuclear weapons.
Up to the late 1980s it was generally assumed that any undeclared nuclear activities would have to be based on the diversion of nuclear material from safeguards. States acknowledged the possibility of nuclear activities entirely separate from those covered by safeguards, but it was assumed they would be detected by national intelligence activities. There was no particular effort by IAEA to attempt to detect them.
Iraq had been making efforts to secure a nuclear potential since the 1960s. In the late 1970s a specialised plant, Osiraq, was constructed near Baghdad. The plant was attacked during the Iran–Iraq War and was destroyed by Israeli bombers in June 1981.
Not until the 1990 NPT Review Conference did some states raise the possibility of making more use of (for example) provisions for "special inspections" in existing NPT Safeguards Agreements. Special inspections can be undertaken at locations other than those where safeguards routinely apply, if there is reason to believe there may be undeclared material or activities.
After inspections in Iraq following the UN Gulf War cease-fire resolution showed the extent of Iraq's clandestine nuclear weapons program, it became clear that the IAEA would have to broaden the scope of its activities. Iraq was an NPT Party, and had thus agreed to place all its nuclear material under IAEA safeguards. But the inspections revealed that it had been pursuing an extensive clandestine uranium enrichment programme, as well as a nuclear weapons design programme.
The main thrust of Iraq's uranium enrichment program was the development of technology for electromagnetic isotope separation (EMIS) of indigenous uranium. This uses the same principles as a mass spectrometer (albeit on a much larger scale). Ions of uranium-238 and uranium-235 are separated because they describe arcs of different radii when they move through a magnetic field. This process was used in the Manhattan Project to make the highly enriched uranium used in the Hiroshima bomb, but was abandoned soon afterwards.
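The mass dependence of the arc radius can be made concrete with the standard gyroradius formula r = mv/(qB), with the ion speed set by the accelerating voltage. The sketch below uses assumed round values for the voltage and magnetic field (not historical calutron parameters); the point is the roughly 0.6% difference in radii between the two isotopes, which is what an EMIS separator exploits.

```python
# Illustrative sketch: arc radii of singly charged uranium ions in an
# EMIS-style separator. Voltage and field values are assumed round numbers.
import math

E_CHARGE = 1.602e-19   # elementary charge, C
AMU = 1.661e-27        # atomic mass unit, kg

def arc_radius(mass_amu: float, voltage: float = 35e3,
               b_field: float = 0.5) -> float:
    """Radius r = m*v/(q*B) of an ion accelerated through `voltage` volts."""
    m = mass_amu * AMU
    v = math.sqrt(2 * E_CHARGE * voltage / m)  # speed from q*V = m*v^2/2
    return m * v / (E_CHARGE * b_field)

r235 = arc_radius(235.04)
r238 = arc_radius(238.05)
print(f"U-235 arc radius: {r235:.4f} m")
print(f"U-238 arc radius: {r238:.4f} m")
print(f"radius ratio:     {r238 / r235:.5f}")  # ~sqrt(238/235) ≈ 1.0064
```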
The Iraqis did the basic research work at their nuclear research establishment at Tuwaitha, near Baghdad, and were building two full-scale facilities at Tarmiya and Ash Sharqat, north of Baghdad. However, when the war broke out, only a few separators had been installed at Tarmiya, and none at Ash Sharqat.
The Iraqis were also very interested in centrifuge enrichment and had been able to acquire some components, including carbon-fibre rotors, which they were at an early stage of testing. In May 1998, "Newsweek" reported that Abdul Qadeer Khan had sent Iraq centrifuge designs, which were apparently confiscated by UNMOVIC officials. Iraqi officials said the documents were authentic but that they had not agreed to work with A. Q. Khan, fearing an ISI sting operation, due to strained relations between the two countries. The Government of Pakistan and A. Q. Khan strongly denied the allegation, and the government declared the evidence to be "fraudulent".
Iraq was clearly in violation of its NPT and safeguards obligations, and the IAEA Board of Governors ruled to that effect. The UN Security Council then ordered the IAEA to remove, destroy or render harmless Iraq's nuclear weapons capability. This was done by mid-1998, but Iraq then ceased all cooperation with the UN, so the IAEA withdrew from this work.
The revelations from Iraq provided the impetus for a very far-reaching reconsideration of what safeguards are intended to achieve.
Libya possesses ballistic missiles and previously pursued nuclear weapons under the leadership of Muammar Gaddafi. On 19 December 2003, Gaddafi announced that Libya would voluntarily eliminate all materials, equipment and programs that could lead to internationally proscribed weapons, including weapons of mass destruction and long-range ballistic missiles. Libya signed the Nuclear Non-Proliferation Treaty (NPT) in 1968 and ratified it in 1975, and concluded a safeguards agreement with the International Atomic Energy Agency (IAEA) in 1980. In March 2004, the IAEA Board of Governors welcomed Libya's decision to eliminate its formerly undeclared nuclear program, which it found had violated Libya's safeguards agreement, and approved Libya's Additional Protocol. The United States and the United Kingdom assisted Libya in removing equipment and material from its nuclear weapons program, with independent verification by the IAEA.
Reports in the "Sydney Morning Herald" and "Searchina", a Japanese news outlet, cited two Myanmar defectors as saying that the Myanmar junta was secretly building a nuclear reactor and plutonium extraction facility with North Korea's help, with the aim of acquiring its first nuclear bomb within five years. According to the report, "The secret complex, much of it in caves tunnelled into a mountain at Naung Laing in northern Burma, runs parallel to a civilian reactor being built at another site by Russia that both the Russians and Burmese say will be put under international safeguards." In 2002, Myanmar had notified the IAEA of its intention to pursue a civilian nuclear programme. Later, Russia announced that it would build a nuclear reactor in Myanmar. There have also been reports that two Pakistani scientists from the A.Q. Khan network had been dispatched to Myanmar, where they settled down to help Myanmar's project. The David Albright-led Institute for Science and International Security (ISIS) raised alarm about Myanmar attempting a nuclear project with North Korean help. If true, the full weight of international pressure would be brought against Myanmar, said officials familiar with developments. But equally, the information peddled by the defectors was "preliminary" and could be used by the West to turn the screws on Myanmar, over democracy and human rights issues, in the run-up to the country's elections in 2010. During an ASEAN meeting in Thailand in July 2009, US Secretary of State Hillary Clinton highlighted concerns about the North Korean link. "We know there are also growing concerns about military cooperation between North Korea and Burma which we take very seriously," Clinton said. However, in 2012, after contact with American President Barack Obama, Burmese leader Thein Sein renounced military ties with the DPRK (North Korea).
The Democratic People's Republic of Korea (DPRK) acceded to the NPT in 1985 as a condition for the supply of a nuclear power station by the USSR. However, it delayed concluding its NPT Safeguards Agreement with the IAEA, a process which should take only 18 months, until April 1992.
During that period, it brought into operation a small gas-cooled, graphite-moderated, natural-uranium (metal) fuelled "Experimental Power Reactor" of about 25 MWt (5 MWe), based on the UK Magnox design. While this design was well suited to launching a wholly indigenous nuclear reactor development effort, it also exhibited all the features of a small plutonium production reactor for weapons purposes. North Korea also began constructing two larger reactors designed on the same principles, a prototype of about 200 MWt (50 MWe) and a full-scale version of about 800 MWt (200 MWe). Progress was slow; construction halted on both in 1994 and has not resumed. Both reactors have degraded considerably since that time and would take significant effort to refurbish.
In addition, it completed and commissioned a reprocessing plant for spent Magnox fuel, recovering uranium and plutonium. If the fuel had been irradiated only to a very low burn-up, that plutonium would have been in a form very suitable for weapons. Although all these facilities at Yongbyon were to be under safeguards, there was always the risk that at some stage the DPRK would withdraw from the NPT and use the plutonium for weapons.
One of the first steps in applying NPT safeguards is for the IAEA to verify the initial stocks of uranium and plutonium to ensure that all the nuclear materials in the country have been declared for safeguards purposes. While undertaking this work in 1992, IAEA inspectors found discrepancies which indicated that the reprocessing plant had been used more often than the DPRK had declared, which suggested that the DPRK could have weapons-grade plutonium which it had not declared to the IAEA. Information passed to the IAEA by a Member State (as required by the IAEA) supported that suggestion by indicating that the DPRK had two undeclared waste or other storage sites.
In February 1993 the IAEA called on the DPRK to allow special inspections of the two sites so that the initial stocks of nuclear material could be verified. The DPRK refused, and on 12 March announced its intention to withdraw from the NPT (three months' notice is required). In April 1993 the IAEA Board concluded that the DPRK was in non-compliance with its safeguards obligations and reported the matter to the UN Security Council. In June 1993 the DPRK announced that it had "suspended" its withdrawal from the NPT, but subsequently claimed a "special status" with respect to its safeguards obligations. This was rejected by the IAEA.
Once the DPRK's non-compliance had been reported to the UN Security Council, the essential part of the IAEA's mission had been completed. Inspections in the DPRK continued, although inspectors were increasingly hampered in what they were permitted to do by the DPRK's claim of a "special status". However, some 8,000 corroding fuel rods associated with the experimental reactor have remained under close surveillance.
Following bilateral negotiations between the United States and the DPRK, and the conclusion of the Agreed Framework in October 1994, the IAEA was given additional responsibilities. The agreement required a freeze on the operation and construction of the DPRK's plutonium production reactors and their related facilities, and made the IAEA responsible for monitoring the freeze until the facilities were eventually dismantled. The DPRK remained uncooperative with this IAEA verification work and failed to comply with its safeguards agreement.
While Iraq was defeated in a war, allowing the UN the opportunity to seek out and destroy its nuclear weapons programme as part of the cease-fire conditions, the DPRK was not defeated, nor was it vulnerable to other measures, such as trade sanctions. It can scarcely afford to import anything, and sanctions on vital commodities, such as oil, would either be ineffective or risk provoking war.
Ultimately, the DPRK was persuaded to stop what appeared to be its nuclear weapons programme in exchange, under the Agreed Framework, for about US$5 billion in energy-related assistance. This included two 1,000 MWe light water nuclear power reactors based on an advanced U.S. System-80 design.
In January 2003 the DPRK withdrew from the NPT. In response, after initial discussions among the DPRK, the United States, and China, a series of six-party talks (the parties being the DPRK, the ROK, China, Japan, the United States and Russia) was held in Beijing concerning North Korea's weapons program, the first beginning in April 2004.
On 10 January 2005, North Korea declared that it was in possession of nuclear weapons. On 19 September 2005, the fourth round of the six-party talks ended with a joint statement in which North Korea agreed to end its nuclear programs and return to the NPT in exchange for diplomatic, energy and economic assistance. However, by the end of 2005 the DPRK had halted all six-party talks because the United States froze certain DPRK international financial assets, such as those in a bank in Macau.
On 9 October 2006, North Korea announced that it had performed its first-ever nuclear weapon test. On 18 December 2006, the six-party talks finally resumed. On 13 February 2007, the parties announced "Initial Actions" to implement the 2005 joint statement, including the shutdown and disablement of North Korean nuclear facilities in exchange for energy assistance. Reacting to UN sanctions imposed after missile tests in April 2009, North Korea withdrew from the six-party talks, restarted its nuclear facilities and conducted a second nuclear test on 25 May 2009.
On 12 February 2013, North Korea conducted an underground nuclear explosion with an estimated yield of 6 to 7 kilotonnes. The detonation registered a magnitude 4.9 disturbance in the area around the epicenter.
"See also: North Korea and weapons of mass destruction and Six-party talks"
Security of nuclear weapons in Russia remains a matter of concern. According to the high-ranking Russian SVR defector Sergei Tretyakov, in 1991 he met two Russian businessmen representing a state-created "C-W" corporation. They proposed destroying large quantities of chemical waste collected from Western countries on the island of Novaya Zemlya (a test site for Soviet nuclear weapons) using an underground nuclear blast. The project was rejected by Canadian representatives, but one of the businessmen told Tretyakov that he kept his own nuclear bomb at his dacha outside Moscow. Tretyakov thought the man was insane, but the "businessman" (Vladimir K. Dmitriev) replied: "Do not be so naive. With economic conditions the way they are in Russia today, anyone with enough money can buy a nuclear bomb. It's no big deal really".
In 1991, South Africa acceded to the NPT, concluded a comprehensive safeguards agreement with the IAEA, and submitted a report on its nuclear material subject to safeguards. At the time, the state had a nuclear power programme producing nearly 10% of the country's electricity, whereas Iraq and North Korea only had research reactors.
The IAEA's initial verification task was complicated by South Africa's announcement that between 1979 and 1989 it built and then dismantled a number of nuclear weapons. South Africa asked the IAEA to verify the conclusion of its weapons programme. In 1995 the IAEA declared that it was satisfied all materials were accounted for and the weapons programme had been terminated and dismantled.
South Africa has signed the NPT, and now holds the distinction of being the only known state to have indigenously produced nuclear weapons, and then verifiably dismantled them.
On 6 September 2007, Israel bombed an officially unidentified site in Syria which it later asserted was a nuclear reactor under construction ("see Operation Outside the Box"). The alleged reactor was not asserted to be operational, and it was not asserted that nuclear material had been introduced into it. Syria said the site was a military site and was not involved in any nuclear activities. The IAEA requested that Syria provide further access to the site and to any other locations where the debris and equipment from the building had been stored. Syria denounced what it called the Western "fabrication and forging of facts" regarding the incident. IAEA Director General Mohamed ElBaradei criticized the strikes and deplored that information regarding the matter had not been shared with his agency earlier.
For a state that does not possess nuclear weapons, the capability to produce one or more weapons quickly and with little warning is called a breakout capability.
There has been much debate in the academic study of International Security as to the advisability of proliferation. In the late 1950s and early 1960s, Gen. Pierre Marie Gallois of France, an adviser to Charles de Gaulle, argued in books like "The Balance of Terror: Strategy for the Nuclear Age" (1961) that mere possession of a nuclear arsenal, what the French called the "force de frappe", was enough to ensure deterrence, and thus concluded that the spread of nuclear weapons could increase international stability.
In a separate development, some very prominent neo-realist scholars, such as Kenneth Waltz, Emeritus Professor of Political Science at UC Berkeley and Adjunct Senior Research Scholar at Columbia University, and John Mearsheimer, R. Wendell Harrison Distinguished Service Professor of Political Science at the University of Chicago, have continued to argue along the lines of Gallois. Specifically, these scholars advocate some forms of nuclear proliferation, arguing that it will decrease the likelihood of war, especially in troubled regions of the world. Aside from the majority opinion, which opposes proliferation in any form, there are two schools of thought on the matter: those, like Mearsheimer, who favor selective proliferation, and those, such as Waltz, who advocate a laissez-faire attitude to programs like North Korea's.
In essence, Waltz argues that the logic of mutually assured destruction (MAD) should work in all security environments, regardless of historical tensions or recent hostility. He sees the Cold War as the ultimate proof of MAD logic: the only occasion when enmity between two Great Powers did not result in military conflict. This was, he argues, because nuclear weapons promote caution in decision-makers. Neither Washington nor Moscow would risk a nuclear apocalypse to advance territorial or power goals, hence a peaceful stalemate ensued (Waltz and Sagan (2003), p. 24). Waltz sees no reason why this effect would not occur in all circumstances.
John Mearsheimer would not support Waltz's optimism in the majority of potential instances; however, he has argued for nuclear proliferation as policy in certain places, such as post–Cold War Europe. In two famous articles, Professor Mearsheimer opines that Europe is bound to return to its pre–Cold War environment of regular conflagration and suspicion at some point in the future. He advocates arming both Germany and Ukraine with nuclear weaponry in order to achieve a balance of power between these states in the east and France/UK in the west. If this does not occur, he is certain that war will eventually break out on the European continent.
Another argument against Waltz's open proliferation, and in favor of Mearsheimer's selective distribution, is the possibility of nuclear terrorism. Some countries included in the aforementioned laissez-faire distribution could be predisposed to transfer nuclear materials, or a bomb could fall into the hands of groups not affiliated with any government. Such countries would lack the political will or the ability to safeguard against attempts to transfer devices to a third party. Not being deterred by self-annihilation, terrorist groups could push forward their own nuclear agendas or be used as shadow fronts to carry out the attack plans of the unstable governments mentioned above.
There are numerous arguments presented against both selective and total proliferation, generally targeting the very neorealist assumptions (such as the primacy of military security in state agendas, the weakness of international institutions, and the long-run unimportance of economic integration and globalization to state strategy) that its proponents tend to make. With respect to Mearsheimer's specific example of Europe, many economists and neoliberals argue that the economic integration of Europe through the development of the European Union has made war in most of the European continent so economically disastrous as to serve as an effective deterrent. Constructivists take this one step further, frequently arguing that the development of EU political institutions has led, or will lead, to the development of a nascent European identity, which most states on the European continent wish to partake in to some degree, and which makes all states within, or aspiring to be within, the EU regard war between them as unthinkable.
As for Waltz, the general opinion is that most states are not in a position to safely guard against nuclear use, that he underestimates the long-standing antipathy in many regions, and that weak states will be unable to prevent—or will actively provide for—the disastrous possibility of nuclear terrorism. Waltz has dealt with all of these objections at some point in his work; though to many, he has not adequately responded (Betts (2000)).
The Learning Channel documentary "Doomsday: On the Brink" illustrated 40 years of U.S. and Soviet nuclear weapons accidents. Even the 1995 Norwegian rocket incident demonstrated a potential scenario in which Russian democratization and military downsizing at the end of the Cold War did not eliminate the danger of accidental nuclear war through command-and-control errors. Asking whether a future Russian ruler or renegade Russian general might be tempted to use nuclear weapons as an instrument of foreign policy, the documentary's writers pointed to the insecurity of Russia's nuclear stocks, and above all to the danger inherent in human nature: the desire for the ultimate weapon of mass destruction as a means of exercising political and military power. Future world leaders might not understand how close the Soviets, Russians, and Americans came to doomsday, or how fortunate it was that apocalypse was avoided for a mere 40 years, between rivals who were politicians rather than terrorists, who loved their children and did not want to die, set against 30,000 years of human prehistory. History and military experts agree that proliferation can be slowed, but never stopped (technology cannot be uninvented).
"Proliferation begets proliferation" is a concept described by Scott Sagan in his article "Why Do States Build Nuclear Weapons?". It can be described as a strategic chain reaction: if one state produces a nuclear weapon, it creates almost a domino effect within the region, as other states seek to acquire nuclear weapons to balance or eliminate the security threat. Sagan describes this reaction best when he states, “Every time one state develops nuclear weapons to balance against its main rival, it also creates a nuclear threat to another region, which then has to initiate its own nuclear weapons program to maintain its national security”. Looking back through history, we can see how this has taken place. When the United States demonstrated its nuclear capabilities with the bombings of Hiroshima and Nagasaki, the Russians started to develop their program in preparation for the Cold War. With the Russian military buildup, France and the United Kingdom perceived a security threat and therefore pursued nuclear weapons (Sagan, p. 71). Although proliferation encourages proliferation, it does not guarantee that other states will successfully develop nuclear weapons, because a state's economic stability plays an important role in whether it will be able to acquire them. An article by Dong-Jong Joo and Erik Gartzke discusses how the economy of a country determines whether it will successfully acquire nuclear weapons.
Former Iranian President Mahmoud Ahmadinejad has been a frequent critic of the concept of "nuclear apartheid" as it has been put into practice by several countries, particularly the United States. In an interview with CNN's Christiane Amanpour, Ahmadinejad said that Iran was "against 'nuclear apartheid,' which means some have the right to possess it, use the fuel, and then sell it to another country for 10 times its value. We're against that. We say clean energy is the right of all countries. But also it is the duty and the responsibility of all countries, including ours, to set up frameworks to stop the proliferation of it." Hours after that interview, he spoke passionately in favor of Iran's right to develop nuclear technology, claiming the nation should have the same liberties.
Iran is a signatory of the Nuclear Non-Proliferation Treaty and claims that any work done in regard to nuclear technology is related only to civilian uses, which is acceptable under the treaty. Iran violated its safeguards obligations under the treaty by performing uranium enrichment in secret, after which the United Nations Security Council ordered Iran to suspend all uranium enrichment, a requirement that remained in place until July 2015.
India has also been discussed in the context of "nuclear apartheid". India has consistently attempted to pass measures that would call for full international disarmament, but it has not succeeded owing to protests from states that already have nuclear weapons. In light of this, India viewed nuclear weapons as a necessary right for all nations as long as certain states were still in possession of them, and stated that nuclear issues were directly related to national security.
In the years before India's underground nuclear tests of 1998, the Comprehensive Nuclear-Test-Ban Treaty was passed. Some have argued that coercive language was used in an attempt to persuade India to sign the treaty, which was pushed for heavily by neighboring China. India viewed the treaty as a means for countries that already had nuclear weapons, primarily the five permanent members of the United Nations Security Council, to keep their weapons while ensuring that no other nations could develop them.
In their article "The Correlates of Nuclear Proliferation", Sonali Singh and Christopher R. Way argue that states protected by a security guarantee from a great power, particularly if backed by the "nuclear umbrella" of extended deterrence, have less of an incentive to acquire their own nuclear weapons. States that lack such guarantees are more likely to feel their security threatened and so have greater incentives to bolster or assemble nuclear arsenals. It is then argued that bipolarity may prevent proliferation, whereas multipolarity may actually spur it. | https://en.wikipedia.org/wiki?curid=22158 |
Nuclear energy
Nuclear energy may refer to: | https://en.wikipedia.org/wiki?curid=22161 |
Netlist
In electronic design, a netlist is a description of the connectivity of an electronic circuit. In its simplest form, a netlist consists of a list of the electronic components in a circuit and a list of the nodes they are connected to. A network (net) is a collection of two or more interconnected components.
The structure, complexity and representation of netlists can vary considerably, but the fundamental purpose of every netlist is to convey connectivity information. Netlists usually provide nothing more than instances, nodes, and perhaps some attributes of the components involved. If they express much more than this, they are usually considered to be a hardware description language such as Verilog or VHDL, or one of several languages specifically designed for input to simulators.
Netlists can be "physical" or "logical", "instance-based" or "net-based", and "flat" or "hierarchical". The latter can be either "folded" or "unfolded".
Most netlists either contain or refer to descriptions of the parts or devices used. Each time a part is used in a netlist, this is called an "instance".
These descriptions will usually list the connections that are made to that kind of device, and some basic properties of that device. These connection points are called "terminals" or "pins", among several other names.
An "instance" could be anything from a MOSFET transistor or a bipolar junction transistor, to a resistor, a capacitor, or an integrated circuit chip.
Instances have "terminals". In the case of a vacuum cleaner, these terminals would be the three metal prongs in the plug. Each terminal has a name, and in continuing the vacuum cleaner example, they might be "Neutral", "Live" and "Ground". Usually, each instance will have a unique name, so that if you have two instances of vacuum cleaners, one might be "vac1" and the other "vac2". Besides their names, they might otherwise be identical.
Networks (nets) are the "wires" that connect things together in the circuit. There may or may not be any special attributes associated with the nets in a design, depending on the particular language the netlist is written in, and that language's features.
Instance based netlists usually provide a list of the instances used in a design.
Along with each instance, either an ordered list of net names is provided, or a list of pairs is provided, each pair giving an instance port name and the net name to which that port is connected. In this kind of description, the list of nets can be gathered from the connection lists, and there is no place to associate particular attributes with the nets themselves. SPICE is an example of an instance-based netlist format.
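As an illustration, here is a minimal SPICE-style deck for an RC low-pass filter (the component values and net names are invented for the example); each line names an instance followed by the ordered list of nets it connects to:

    * instance-based netlist: instance name, ordered net list, value
    V1 in 0 DC 5
    * R1 connects net "in" to net "out"
    R1 in out 1k
    * C1 connects net "out" to ground (net 0)
    C1 out 0 100p
    .end

The nets "in", "out" and "0" are never declared on their own; they exist only implicitly, gathered from the instances' connection lists.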
Net-based netlists usually describe all the instances and their attributes, then describe each net and say which port it connects to on each instance. This allows attributes to be associated with nets.
EDIF is probably the best known of the net-based netlist formats.
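As a rough sketch of the net-based view (plain Python data structures invented for illustration, not EDIF syntax, which is considerably more verbose), the same RC filter could be written with the nets as the primary entries, so that attributes attach to them directly:

    # Each net lists the (instance, port) pairs it connects.
    nets = {
        "in":  [("V1", "plus"), ("R1", "a")],
        "out": [("R1", "b"), ("C1", "top")],
        "0":   [("V1", "minus"), ("C1", "bottom")],
    }
    # Because nets are first-class entries, per-net attributes have a natural home.
    net_attributes = {"out": {"max_rc_delay_ps": 150}}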
In large designs, it is a common practice to split the design into pieces, each piece becoming a "definition" which can be used as instances in the design. In the vacuum cleaner analogy, one might have a vacuum cleaner definition with its ports, but now this definition would also include a full description of the machine's internal components and how they connect (motors, switches, etc.), like a wiring diagram does.
A definition which includes no instances is called a "primitive" (or a "leaf", or other names); whereas a definition which includes instances is "hierarchical".
A "folded" hierarchy allows a single definition to be represented several times by instances. An "unfolded" hierarchy does not allow a definition to be used more than once in the hierarchy.
Folded hierarchies can be extremely compact. A small netlist of just a few instances can describe designs with a very large number of instances. For example, suppose definition A is a simple primitive, like a memory cell. Then suppose definition B contains 32 instances of A; C contains 32 instances of B; D contains 32 instances of C; and E contains 32 instances of D. The design now contains 5 definitions (A through E) and 128 instances. Yet, E describes a circuit that contains over a million memory cells.
In a "flat" design, only primitives are instanced. Hierarchical designs can be recursively "exploded" ("flattened") by creating a new copy (with a new name) of each definition each time it is used. If the design is highly folded, expanding it like this will result in a much larger netlist database, but preserves the hierarchy dependencies. Given a hierarchical netlist, the list of instance names in a path from the root definition to a primitive instance specifies the single unique path to that primitive. The paths to every primitive, taken together, comprise a large but flat netlist that is exactly equivalent to the compact hierarchical version.
Backannotation is data that can be added to a hierarchical netlist. Usually it is kept separate from the netlist, because several such alternate sets of data may apply to a single netlist. These data may have been extracted from a physical design, and might provide extra information for more accurate simulations. Usually each entry consists of a hierarchical path identifying a primitive instance, together with a piece of data for that primitive, such as the value of an RC delay due to interconnect.
Another concept often used in netlists is that of inheritance. Suppose a definition of a capacitor has an associated attribute called "Capacitance", corresponding to the physical property of the same name, with a default value of "100 pF" (100 picofarads). Each instance of this capacitor might also have such an attribute, only with a different value of capacitance. And other instances might not associate any capacitance at all. In the case where no capacitance is specified for an instance, the instance will "inherit" the 100 pF value from its definition. A value specified will "override" the value on the definition. If a great number of attributes end up being the same as on the definition, a great amount of information can be "inherited", and not have to be redundantly specified in the netlist, saving space, and making the design easier to read by both machines and people. | https://en.wikipedia.org/wiki?curid=22164 |
Nuclear disarmament
Nuclear disarmament is the act of reducing or eliminating nuclear weapons. It can also be the end state of a nuclear-weapons-free world, in which nuclear weapons are completely eliminated. The term denuclearization is also used to describe the process leading to complete nuclear disarmament.
Nuclear disarmament groups include the Campaign for Nuclear Disarmament, Peace Action, Pugwash Conferences on Science and World Affairs, Greenpeace, Soka Gakkai International, International Physicians for the Prevention of Nuclear War, Mayors for Peace, Global Zero, the International Campaign to Abolish Nuclear Weapons, and the Nuclear Age Peace Foundation. There have been many large anti-nuclear demonstrations and protests. On June 12, 1982, one million people demonstrated in New York City's Central Park against nuclear weapons and for an end to the cold war arms race. It was the largest anti-nuclear protest and the largest political demonstration in American history.
In recent years, some U.S. elder statesmen have also advocated nuclear disarmament. Sam Nunn, William Perry, Henry Kissinger, and George Shultz have called upon governments to embrace the vision of a world free of nuclear weapons, and in various op-ed columns have proposed an ambitious program of urgent steps to that end. The four have created the Nuclear Security Project to advance this agenda. Organisations such as Global Zero, an international non-partisan group of 300 world leaders dedicated to achieving nuclear disarmament, have also been established.
Proponents of nuclear disarmament say that it would lessen the probability of nuclear war occurring, especially accidentally. Critics of nuclear disarmament say that it would undermine deterrence.
In 1945 in the New Mexico desert, American scientists conducted "Trinity," the first nuclear weapons test, marking the beginning of the atomic age. Even before the Trinity test, national leaders debated the impact of nuclear weapons on domestic and foreign policy. Also involved in the debate about nuclear weapons policy was the scientific community, through professional associations such as the Federation of Atomic Scientists and the Pugwash Conference on Science and World Affairs.
On August 6, 1945, towards the end of World War II, the "Little Boy" device was detonated over the Japanese city of Hiroshima. Exploding with a yield equivalent to 12,500 tonnes of TNT, the blast and thermal wave of the bomb destroyed nearly 50,000 buildings (including the headquarters of the 2nd General Army and Fifth Division) and killed 70,000–80,000 people outright, with total deaths being around 90,000–146,000. The "Fat Man" device was detonated over the Japanese city of Nagasaki three days later, on 9 August 1945, destroying 60% of the city and killing 35,000–40,000 people outright, though up to 40,000 additional deaths may have occurred over the following months. Subsequently, the world's nuclear weapons stockpiles grew.
Operation Crossroads was a series of nuclear weapon tests conducted by the United States at Bikini Atoll in the Pacific Ocean in the summer of 1946. Its purpose was to test the effect of nuclear weapons on naval ships. Pressure to cancel Operation Crossroads came from scientists and diplomats. Manhattan Project scientists argued that further nuclear testing was unnecessary and environmentally dangerous. A Los Alamos study warned "the water near a recent surface explosion will be a 'witch's brew' of radioactivity". To prepare the atoll for the nuclear tests, Bikini's native residents were evicted from their homes and resettled on smaller, uninhabited islands where they were unable to sustain themselves.
Radioactive fallout from nuclear weapons testing was first drawn to public attention in 1954 when a hydrogen bomb test in the Pacific contaminated the crew of the Japanese fishing boat "Lucky Dragon". One of the fishermen died in Japan seven months later. The incident caused widespread concern around the world and "provided a decisive impetus for the emergence of the anti-nuclear weapons movement in many countries". The anti-nuclear weapons movement grew rapidly because for many people the atomic bomb "encapsulated the very worst direction in which society was moving".
Peace movements emerged in Japan and in 1954 they converged to form a unified "Japanese Council Against Atomic and Hydrogen Bombs". Japanese opposition to the Pacific nuclear weapons tests was widespread, and "an estimated 35 million signatures were collected on petitions calling for bans on nuclear weapons". In the United Kingdom, the first Aldermaston March organised by the Direct Action Committee and supported by the Campaign for Nuclear Disarmament took place on Easter 1958, when several thousand people marched for four days from Trafalgar Square, London, to the Atomic Weapons Research Establishment close to Aldermaston in Berkshire, England, to demonstrate their opposition to nuclear weapons. CND organised Aldermaston marches into the late 1960s when tens of thousands of people took part in the four-day events.
On November 1, 1961, at the height of the Cold War, about 50,000 women brought together by Women Strike for Peace marched in 60 cities in the United States to demonstrate against nuclear weapons. It was the largest national women's peace protest of the 20th century.
In 1958, Linus Pauling and his wife presented the United Nations with the petition signed by more than 11,000 scientists calling for an end to nuclear-weapon testing. The "Baby Tooth Survey," headed by Dr Louise Reiss, demonstrated conclusively in 1961 that above-ground nuclear testing posed significant public health risks in the form of radioactive fallout spread primarily via milk from cows that had ingested contaminated grass. Public pressure and the research results subsequently led to a moratorium on above-ground nuclear weapons testing, followed by the Partial Test Ban Treaty, signed in 1963 by John F. Kennedy and Nikita Khrushchev. On the day that the treaty went into force, the Nobel Prize Committee awarded Pauling the Nobel Peace Prize, describing him as "Linus Carl Pauling, who ever since 1946 has campaigned ceaselessly, not only against nuclear weapons tests, not only against the spread of these armaments, not only against their very use, but against all warfare as a means of solving international conflicts." Pauling started the International League of Humanists in 1974. He was president of the scientific advisory board of the World Union for Protection of Life and also one of the signatories of the Dubrovnik-Philadelphia Statement.
In the 1980s, a movement for nuclear disarmament again gained strength in the light of the weapons build-up and the statements of US President Ronald Reagan. Reagan had "a world free of nuclear weapons" as his personal mission, and was largely scorned for this in Europe. Reagan was nonetheless able to start discussions on nuclear disarmament with the Soviet Union. He changed the name "SALT" (Strategic Arms Limitation Talks) to "START" (Strategic Arms Reduction Talks).
On June 3, 1981, William Thomas launched the White House Peace Vigil in Washington, D.C. He was later joined on the vigil by the anti-nuclear activists Concepcion Picciotto and Ellen Benjamin.
On June 12, 1982, one million people demonstrated in New York City's Central Park against nuclear weapons and for an end to the cold war arms race. It was the largest anti-nuclear protest and the largest political demonstration in American history. International Day of Nuclear Disarmament protests were held on June 20, 1983 at 50 sites across the United States. In 1986, hundreds of people walked from Los Angeles to Washington, D.C. in the Great Peace March for Global Nuclear Disarmament. There were many Nevada Desert Experience protests and peace camps at the Nevada Test Site during the 1980s and 1990s.
On May 1, 2005, 40,000 anti-nuclear/anti-war protesters marched past the United Nations in New York, 60 years after the atomic bombings of Hiroshima and Nagasaki. In 2008, 2009, and 2010, there have been protests about, and campaigns against, several new nuclear reactor proposals in the United States.
There is an annual protest against U.S. nuclear weapons research at Lawrence Livermore National Laboratory in California and in the 2007 protest, 64 people were arrested. There have been a series of protests at the Nevada Test Site and in the April 2007 Nevada Desert Experience protest, 39 people were cited by police. There have been anti-nuclear protests at Naval Base Kitsap for many years, and several in 2008.
In 2017, the International Campaign to Abolish Nuclear Weapons was awarded the Nobel Peace Prize "for its work to draw attention to the catastrophic humanitarian consequences of any use of nuclear weapons and for its ground-breaking efforts to achieve a treaty-based prohibition of such weapons".
One of the earliest peace organisations to emerge after the Second World War was the World Peace Council, which was directed by the Communist Party of the Soviet Union through the Soviet Peace Committee. Its origins lay in the Communist Information Bureau's (Cominform) doctrine, put forward in 1947, that the world was divided between peace-loving progressive forces led by the Soviet Union and warmongering capitalist countries led by the United States. In 1949, Cominform directed that peace "should now become the pivot of the entire activity of the Communist Parties", and most western Communist parties followed this policy. Lawrence Wittner, a historian of the post-war peace movement, argues that the Soviet Union devoted great efforts to the promotion of the WPC in the early post-war years because it feared an American attack and American superiority of arms at a time when the USA possessed the atom bomb but the Soviet Union had not yet developed it.
In 1950, the WPC launched its Stockholm Appeal calling for the absolute prohibition of nuclear weapons. The campaign won support, collecting, it is said, 560 million signatures in Europe, most from socialist countries, including 10 million in France (including that of the young Jacques Chirac), and 155 million signatures in the Soviet Union – the entire adult population. Several non-aligned peace groups who had distanced themselves from the WPC advised their supporters not to sign the Appeal.
The WPC had uneasy relations with the non-aligned peace movement and has been described as being caught in contradictions as "it sought to become a broad world movement while being instrumentalized increasingly to serve foreign policy in the Soviet Union and nominally socialist countries." From the 1950s until the late 1980s it tried to use non-aligned peace organizations to spread the Soviet point of view. At first there was limited co-operation between such groups and the WPC, but western delegates who tried to criticize the Soviet Union or the WPC's silence about Russian armaments were often shouted down at WPC conferences and by the early 1960s they had dissociated themselves from the WPC.
After the 1986 Reykjavik Summit between U.S. President Ronald Reagan and the new Soviet General Secretary Mikhail Gorbachev, the United States and the Soviet Union concluded two important nuclear arms reduction treaties: the INF Treaty (1987) and START I (1991). After the end of the Cold War, the United States and the Russian Federation concluded the Strategic Offensive Reductions Treaty (2003) and the New START Treaty (2010).
When the extreme danger intrinsic to nuclear war and the possession of nuclear weapons became apparent to all sides during the Cold War, a series of disarmament and nonproliferation treaties were agreed upon between the United States, the Soviet Union, and several other states throughout the world. Many of these treaties involved years of negotiations, and seemed to result in important steps in arms reductions and reducing the risk of nuclear war.
Only one country, South Africa, is known to have ever completely dismantled an indigenously developed nuclear arsenal. The apartheid government of South Africa produced half a dozen crude fission weapons during the 1980s, but they were dismantled in the early 1990s.
In its landmark resolution 1653 of 1961, "Declaration on the prohibition of the use of nuclear and thermo-nuclear weapons," the UN General Assembly stated that use of nuclear weaponry “would exceed even the scope of war and cause indiscriminate suffering and destruction to mankind and civilization and, as such, is contrary to the rules of international law and to the laws of humanity”.
The UN Office for Disarmament Affairs (UNODA) is a department of the United Nations Secretariat established in January 1998 as part of the United Nations Secretary-General Kofi Annan's plan to reform the UN as presented in his report to the General Assembly in July 1997.
Its goal is to promote nuclear disarmament and non-proliferation, and the strengthening of the disarmament regimes with respect to other weapons of mass destruction, namely chemical and biological weapons. It also promotes disarmament efforts in the area of conventional weapons, especially land mines and small arms, which are often the weapons of choice in contemporary conflicts.
Following the retirement of Sergio Duarte in February 2012, Angela Kane was appointed as the new High Representative for Disarmament Affairs.
On 7 July 2017, a UN conference adopted the Treaty on the Prohibition of Nuclear Weapons with the backing of 122 states. It opened for signature on 20 September 2017.
Despite a general trend toward disarmament in the early 2000s, the George W. Bush administration repeatedly pushed to fund policies that would allegedly make nuclear weapons more usable in the post–Cold War environment. To date the U.S. Congress has refused to fund many of these policies. However, some feel that even considering such programs harms the credibility of the United States as a proponent of nonproliferation.
Former U.S. officials Henry Kissinger, George Shultz, Bill Perry, and Sam Nunn (aka 'The Gang of Four' on nuclear deterrence) proposed in January 2007 that the United States rededicate itself to the goal of eliminating nuclear weapons, concluding: "We endorse setting the goal of a world free of nuclear weapons and working energetically on the actions required to achieve that goal." Arguing a year later that "with nuclear weapons more widely available, deterrence is decreasingly effective and increasingly hazardous," the authors concluded that although "it is tempting and easy to say we can't get there from here, [...] we must chart a course toward that goal." During his presidential campaign, U.S. President-Elect Barack Obama pledged to "set a goal of a world without nuclear weapons, and pursue it."
The United States has taken the lead in ensuring that nuclear materials globally are properly safeguarded. A popular program that has received bipartisan domestic support for over a decade is the Cooperative Threat Reduction Program (CTR). While this program has been deemed a success, many believe that its funding levels need to be increased so as to ensure that all dangerous nuclear materials are secured in the most expeditious manner possible. The CTR program has led to several other innovative and important nonproliferation programs that need to continue to be a budget priority in order to ensure that nuclear weapons do not spread to actors hostile to the United States.
Key programs:
While the vast majority of states have adhered to the stipulations of the Nuclear Nonproliferation Treaty, a few states have either refused to sign the treaty or have pursued nuclear weapons programs while not being members of the treaty. Many view the pursuit of nuclear weapons by these states as a threat to nonproliferation and world peace.
Eliminating nuclear weapons has long been an aim of the pacifist left. But now many mainstream politicians, academic analysts, and retired military leaders also advocate nuclear disarmament. Sam Nunn, William Perry, Henry Kissinger, and George Shultz have called upon governments to embrace the vision of a world free of nuclear weapons, and in three "Wall Street Journal" op-eds proposed an ambitious program of urgent steps to that end. The four have created the Nuclear Security Project to advance this agenda. Nunn reinforced that agenda during a speech at the Harvard Kennedy School on October 21, 2008, saying, "I’m much more concerned about a terrorist without a return address that cannot be deterred than I am about deliberate war between nuclear powers. You can’t deter a group who is willing to commit suicide. We are in a different era. You have to understand the world has changed." In 2010, the four were featured in a documentary film entitled "Nuclear Tipping Point". The film is a visual and historical depiction of the ideas laid forth in the "Wall Street Journal" op-eds and reinforces their commitment to a world without nuclear weapons and the steps that can be taken to reach that goal.
Global Zero is an international non-partisan group of 300 world leaders dedicated to achieving nuclear disarmament. The initiative, launched in December 2008, promotes a phased withdrawal and verification for the destruction of all devices held by official and unofficial members of the nuclear club. The Global Zero campaign works toward building an international consensus and a sustained global movement of leaders and citizens for the elimination of nuclear weapons. Goals include the initiation of United States-Russia bilateral negotiations for reductions to 1,000 total warheads each and commitments from the other key nuclear weapons countries to participate in multilateral negotiations for phased reductions of nuclear arsenals. Global Zero works to expand the diplomatic dialogue with key governments and continue to develop policy proposals on the critical issues related to the elimination of nuclear weapons.
The International Conference on Nuclear Disarmament took place in Oslo in February 2008, and was organized by the Government of Norway, the Nuclear Threat Initiative and the Hoover Institution. The conference was entitled "Achieving the Vision of a World Free of Nuclear Weapons" and had the purpose of building consensus between nuclear weapon states and non-nuclear weapon states in relation to the Nuclear Non-Proliferation Treaty.
The Tehran International Conference on Disarmament and Non-Proliferation took place in Tehran in April 2010. The conference was held shortly after the signing of the New START, and resulted in a call of action toward eliminating all nuclear weapons. Representatives from 60 countries were invited to the conference. Non-governmental organizations were also present.
Among the prominent figures who have called for the abolition of nuclear weapons are "the philosopher Bertrand Russell, the entertainer Steve Allen, CNN’s Ted Turner, former Senator Claiborne Pell, Notre Dame president Theodore Hesburgh, South African Bishop Desmond Tutu and the Dalai Lama".
Others have argued that nuclear weapons have made the world relatively safer, with peace through deterrence and through the stability–instability paradox, including in South Asia. Kenneth Waltz has argued that nuclear weapons have created a nuclear peace, and that further nuclear weapon proliferation might even help avoid the large-scale conventional wars that were so common prior to their invention at the end of World War II. In the July 2012 issue of "Foreign Affairs", Waltz took issue with the view of most U.S., European, and Israeli commentators and policymakers that a nuclear-armed Iran would be unacceptable. Instead, Waltz argued that it would probably be the best possible outcome, as it would restore stability to the Middle East by balancing Israel's regional monopoly on nuclear weapons. Professor John Mueller of Ohio State University, the author of "Atomic Obsession", has also dismissed the need to interfere with Iran's nuclear program and expressed the view that arms control measures are counterproductive. During a 2010 lecture at the University of Missouri, which was broadcast by C-SPAN, Mueller also argued that the threat from nuclear weapons, especially nuclear terrorism, has been exaggerated, both in the popular media and by officials.
Former Secretary Kissinger says there is a new danger, which cannot be addressed by deterrence: "The classical notion of deterrence was that there was some consequences before which aggressors and evildoers would recoil. In a world of suicide bombers, that calculation doesn’t operate in any comparable way". George Shultz has said, "If you think of the people who are doing suicide attacks, and people like that get a nuclear weapon, they are almost by definition not deterrable".
Andrew Bacevich wrote that there is no feasible scenario under which the US could sensibly use nuclear weapons. "For the United States, they are becoming unnecessary, even as a deterrent. Certainly, they are unlikely to dissuade the adversaries most likely to employ such weapons against us -- Islamic extremists intent on acquiring their own nuclear capability. If anything, the opposite is true. By retaining a strategic arsenal in readiness (and by insisting without qualification that the dropping of atomic bombs on two Japanese cities in 1945 was justified), the United States continues tacitly to sustain the view that nuclear weapons play a legitimate role in international politics ... ."
In "The Limits of Safety", Scott Sagan documented numerous incidents in US military history that could have produced a nuclear war by accident. He concluded, "while the military organizations controlling U.S. nuclear forces during the Cold War performed this task with less success than we know, they performed with more success than we "should" have reasonably predicted. The problems identified in this book were not the product of incompetent organizations. They reflect the inherent limits of organizational safety. Recognizing that simple truth is the first and most important step toward a safer future." | https://en.wikipedia.org/wiki?curid=22165 |
Net (mathematics)
In mathematics, more specifically in general topology and related branches, a net or Moore–Smith sequence is a generalization of the notion of a sequence. In essence, a sequence is a function with domain the natural numbers, and in the context of topology, the codomain of this function is usually any topological space. However, in the context of topology, sequences do not fully encode all information about a function between topological spaces. In particular, the following two conditions are not equivalent in general for a map "f" between topological spaces "X" and "Y":
It is true, however, that condition 1 implies condition 2. The difficulty encountered when attempting to prove that condition 2 implies condition 1 lies in the fact that topological spaces are, in general, not first-countable.
If the first-countability axiom were imposed on the topological spaces in question, the two above conditions would be equivalent. In particular, the two conditions are equivalent for metric spaces.
The purpose of the concept of a net, first introduced by E. H. Moore and Herman L. Smith in 1922, is to generalize the notion of a sequence so as to confirm the equivalence of the conditions (with "sequence" being replaced by "net" in condition 2). Rather than being defined on a countable linearly ordered set, a net is defined on an arbitrary directed set. This allows theorems similar to the one asserting the equivalence of condition 1 and condition 2 to hold in the context of topological spaces that do not necessarily have a countable or linearly ordered neighbourhood basis around a point. Therefore, while sequences do not encode sufficient information about functions between topological spaces, nets do, because collections of open sets in topological spaces are much like directed sets in behaviour. The term "net" was coined by John L. Kelley.
Nets are one of the many tools used in topology to generalize certain concepts that may be general enough only in the context of metric spaces. A related notion, that of the filter, was developed in 1937 by Henri Cartan.
Let "A" be a directed set with preorder relation "≥" and "X" be a topological space with topology "T". A function "f": "A" → "X" is said to be a "net".
If "A" is a directed set, we often write a net from "A" to "X" in the form ("x"α), which expresses the fact that the element α in "A" is mapped to the element "x"α in "X".
A subnet is not merely the restriction of a net "f" to a directed subset of "A"; see the linked page for a definition.
Every non-empty totally ordered set is directed. Therefore, every function on such a set is a net. In particular, the natural numbers with the usual order form such a set, and a sequence is a function on the natural numbers, so every sequence is a net.
Another important example is as follows. Given a point "x" in a topological space, let "N""x" denote the set of all neighbourhoods containing "x". Then "N""x" is a directed set, where the direction is given by reverse inclusion, so that "S" ≥ "T" if and only if "S" is contained in "T". For "S" in "N""x", let "x""S" be a point in "S". Then ("x""S") is a net. As "S" increases with respect to ≥, the points "x""S" in the net are constrained to lie in decreasing neighbourhoods of "x", so intuitively speaking, we are led to the idea that "x""S" must tend towards "x" in some sense. We can make this limiting concept precise.
If ("x"α) is a net from a directed set "A" into "X", and if "Y" is a subset of "X", then we say that ("x"α) is eventually in "Y (or residually in "Y) if there exists an α in "A" so that for every β in "A" with β ≥ α, the point "x"β lies in "Y".
If ("x"α) is a net in the topological space "X", and "x" is an element of "X", we say that the net converges towards "x or has limit "x and write
if and only if
Intuitively, this means that the values "x"α come and stay as close as we want to "x" for large enough α.
The example net given above on the neighborhood system of a point "x" does indeed converge to "x" according to this definition.
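Spelled out in symbols, the verification is short (a sketch; the notation \(N_x\) follows the example above):

\[
N_x = \{\, S \subseteq X : S \text{ is a neighborhood of } x \,\}, \qquad S \geq T \iff S \subseteq T,
\]
\[
x_S \in S \ \text{for each } S \in N_x \quad \Longrightarrow \quad \lim_{S \in N_x} x_S = x,
\]

since any neighborhood "U" of "x" belongs to "N""x", and every "S" ≥ "U" satisfies "x""S" ∈ "S" ⊆ "U", so the net is eventually in "U".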
Given a base for the topology, in order to prove convergence of a net it is necessary and sufficient to prove that there exists some point "x", such that ("x"α) is eventually in all members of the base containing this putative limit.
Let φ be a net on "X" based on the directed set "D" and let "A" be a subset of "X", then φ is said to be frequently in (or cofinally in) "A" if for every α in "D" there exists some β ≥ α, β in "D", so that φ(β) is in "A".
A point "x" in "X" is said to be an accumulation point or cluster point of a net if (and only if) for every neighborhood "U" of "x", the net is frequently in "U".
A net φ on a set "X" is called universal, or an ultranet, if for every subset "A" of "X", either φ is eventually in "A" or φ is eventually in "X" − "A".
Sequence in a topological space:
A sequence ("a"1, "a"2, ...) in a topological space "V" can be considered a net in "V" defined on N.
The net is eventually in a subset "Y" of "V" if there exists an N in N such that for every "n" ≥ "N", the point "a""n" is in "Y".
We have lim"n" "a""n" = "L" if and only if for every neighborhood "Y" of "L", the net is eventually in "Y".
The net is frequently in a subset "Y" of "V" if and only if for every "N" in N there exists some "n" ≥ "N" such that "a""n" is in "Y", that is, if and only if infinitely many elements of the sequence are in "Y". Thus a point "y" in "V" is a cluster point of the net if and only if every neighborhood "Y" of "y" contains infinitely many elements of the sequence.
Function from a metric space to a topological space:
Consider a function from a metric space "M" to a topological space "V", and a point "c" of "M". We direct the set "M"\{"c"} reversely according to distance from "c", that is, the relation is "has at least the same distance to "c" as", so that "large enough" with respect to the relation means "close enough to "c"". The function "f" is a net in "V" defined on "M"\{"c"}.
The net "f" is eventually in a subset "Y" of "V" if there exists an "a" in "M" \ {"c"} such that for every "x" in "M" \ {"c"} with d("x","c") ≤ d("a","c"), the point f("x") is in "Y".
We have lim"x" → "c" "f"("x") = "L" if and only if for every neighborhood "Y" of "L", "f" is eventually in "Y".
The net "f" is frequently in a subset "Y" of "V" if and only if for every "a" in "M" \ {"c"} there exists some "x" in "M" \ {"c"} with "d"("x","c") ≤ d("a","c") such that "f(x)" is in "Y".
A point "y" in "V" is a cluster point of the net "f" if and only if for every neighborhood "Y" of "y", the net is frequently in "Y".
Function from a well-ordered set to a topological space:
Consider a well-ordered set [0, "c"] with limit point "c", and a function "f" from [0, "c") to a topological space "V". This function is a net on [0, "c").
It is eventually in a subset "Y" of "V" if there exists an "a" in [0, "c") such that for every "x" ≥ "a", the point "f"("x") is in "Y".
We have lim"x" → "c" "f"("x") = "L" if and only if for every neighborhood "Y" of "L", "f" is eventually in "Y".
The net "f" is frequently in a subset "Y" of "V" if and only if for every "a" in [0, "c") there exists some "x" in ["a", "c") such that "f"("x") is in "Y".
A point "y" in "V" is a cluster point of the net "f" if and only if for every neighborhood "Y" of "y", the net is frequently in "Y".
The first example is a special case of this with "c" = ω.
See also ordinal-indexed sequence.
Virtually all concepts of topology can be rephrased in the language of nets and limits. This may be useful to guide the intuition since the notion of limit of a net is very similar to that of limit of a sequence. The following set of theorems and lemmas help cement that similarity:
A Cauchy net generalizes the notion of Cauchy sequence to nets defined on uniform spaces.
A net ("x"α) is a Cauchy net if for every entourage "V" there exists γ such that for all α, β ≥ γ, ("x"α, "x"β) is a member of "V". More generally, in a Cauchy space, a net ("x"α) is Cauchy if the filter generated by the net is a Cauchy filter.
A filter is another idea in topology that allows for a general definition for convergence in general topological spaces. The two ideas are equivalent in the sense that they give the same concept of convergence. More specifically, for every filter base an "associated net" can be constructed, and convergence of the filter base implies convergence of the associated net—and the other way around (for every net there is a filter base, and convergence of the net implies convergence of the filter base). For instance, any net \((x_\alpha)_{\alpha \in A}\) in \(X\) induces a filter base of tails \(\{\{x_\alpha : \alpha \geq \alpha_0\} : \alpha_0 \in A\}\), where the filter in \(X\) generated by this filter base is called the net's "eventuality filter". This correspondence allows for any theorem that can be proven with one concept to be proven with the other. For instance, continuity of a function from one topological space to the other can be characterized either by the convergence of a net in the domain implying the convergence of the corresponding net in the codomain, or by the same statement with filter bases.
Robert G. Bartle argues that despite their equivalence, it is useful to have both concepts. He argues that nets are enough like sequences to make natural proofs and definitions in analogy to sequences, especially ones using sequential elements, such as is common in analysis, while filters are most useful in algebraic topology. In any case, he shows how the two can be used in combination to prove various theorems in general topology.
Limit superior and limit inferior of a net of real numbers can be defined in a similar manner as for sequences. Some authors work even with more general structures than the real line, like complete lattices.
For a net $(x_\alpha)$ of real numbers we put
$$\limsup_\alpha x_\alpha = \lim_\alpha \sup_{\beta \ge \alpha} x_\beta = \inf_\alpha \sup_{\beta \ge \alpha} x_\beta.$$
Limit superior of a net of real numbers has many properties analogous to the case of sequences, e.g.
$$\limsup_\alpha (x_\alpha + y_\alpha) \le \limsup_\alpha x_\alpha + \limsup_\alpha y_\alpha,$$
where equality holds whenever one of the nets is convergent.
Nuclear winter
Nuclear winter is a severe and prolonged global climatic cooling effect hypothesized to occur after widespread firestorms following a nuclear war. The hypothesis is based on the fact that such fires can inject soot into the stratosphere, where it can block some direct sunlight from reaching the surface of the Earth. It is speculated that the resulting cooling would lead to widespread crop failure and famine. When developing computer models of nuclear-winter scenarios, researchers use the conventional bombing of Hamburg and the Hiroshima firestorm in World War II as example cases where soot might have been injected into the stratosphere, alongside modern observations of natural, large-area wildfire firestorms.
"Nuclear winter," or as it was initially termed, "nuclear twilight," began to be considered as a scientific concept in the 1980s, after it became clear that an earlier hypothesis, that fireball generated NOx emissions would devastate the ozone layer, was losing credibility. It was within this context that the climatic effects of soot from fires became the new focus of the climatic effects of nuclear war. In these model scenarios, various soot clouds containing uncertain quantities of soot were assumed to form over cities, oil refineries, and more rural missile silos. Once the quantity of soot is decided upon by the researchers, the climate effects of these soot clouds are then modeled. The term "nuclear winter" was a neologism coined in 1983 by Richard P. Turco in reference to a 1-dimensional computer model created to examine the "nuclear twilight" idea, this 1-D model output the finding that massive quantities of soot and smoke would remain aloft in the air for on the order of years, causing a severe planet-wide drop in temperature. Turco would later distance himself from these extreme 1-D conclusions.
After the failure of the predictions made about the 1991 Kuwait oil fires by the primary team of climatologists advocating the hypothesis, over a decade passed without any new published papers on the topic. More recently, the same team of prominent modellers from the 1980s has begun again to publish the outputs of computer models. These newer models produce the same general findings as their old ones: the ignition of 100 firestorms, each comparable in intensity to the one observed in Hiroshima in 1945, could produce a "small" nuclear winter. These firestorms would result in the injection of soot (specifically black carbon) into the Earth's stratosphere, producing an anti-greenhouse effect that would lower the Earth's surface temperature. The severity of this cooling in Alan Robock's model suggests that the cumulative products of 100 of these firestorms could cool the global climate by approximately 1 °C (1.8 °F), largely offsetting the magnitude of anthropogenic global warming for roughly the next two or three years. Robock has not modeled this, but has speculated that it would have global agricultural losses as a consequence.
As nuclear devices need not be detonated to ignite a firestorm, the term "nuclear winter" is something of a misnomer. The majority of papers published on the subject state, without qualitative justification, that nuclear explosions are the cause of the modeled firestorm effects. The only phenomenon modeled by computer in the nuclear winter papers is the climate-forcing agent of firestorm soot, a product that can be ignited and formed by myriad means. Although rarely discussed, proponents of the hypothesis state that the same "nuclear winter" effect would occur if 100 conventional firestorms were ignited.
A much larger number of firestorms, in the thousands, was the initial assumption of the computer modelers who coined the term in the 1980s. These were speculated to be a possible result of any large-scale employment of counter-value airbursting nuclear weapons during an American-Soviet total war. This larger number of firestorms, which are not themselves modeled, is presented as causing nuclear winter conditions as a result of the smoke input into various climate models, with the depths of severe cooling lasting for as long as a decade. During this period, summer drops in average temperature could be up to 20 °C (36 °F) in core agricultural regions of the US, Europe, and China, and as much as 35 °C (63 °F) in Russia. This cooling would be produced by a 99% reduction in the natural solar radiation reaching the surface of the planet in the first few years, gradually clearing over the course of several decades.
On a fundamental level, it has been known since photographic evidence of tall clouds was first captured that firestorms can inject soot smoke/aerosols into the stratosphere, but the atmospheric longevity of these aerosols was a major unknown. Independent of the team that continues to publish theoretical models on nuclear winter, in 2006 Mike Fromm of the Naval Research Laboratory found experimentally that each natural occurrence of a massive wildfire firestorm, much larger than that observed at Hiroshima, can produce minor "nuclear winter" effects: a short-lived, nearly immeasurable drop in surface temperatures lasting approximately one month and confined to the hemisphere in which the fire burned. This is somewhat analogous to the frequent volcanic eruptions that inject sulfates into the stratosphere and thereby produce minor, even negligible, volcanic winter effects.
A suite of satellite and aircraft-based firestorm-soot-monitoring instruments are at the forefront of attempts to accurately determine the lifespan, quantity, injection height, and optical properties of this smoke. Information regarding all of these properties is necessary to truly ascertain the length and severity of the cooling effect of firestorms, independent of the nuclear winter computer model projections.
Presently, satellite tracking data indicate that stratospheric smoke aerosols dissipate in a time span under approximately two months. Whether there exists any hint of a tipping point into a new stratospheric condition in which the aerosols would not be removed within this time frame remains to be determined.
The nuclear winter scenario assumes that 100 or more city firestorms are ignited by nuclear explosions, and that the firestorms lift large amounts of sooty smoke into the upper troposphere and lower stratosphere by the movement offered by the pyrocumulonimbus clouds that form during a firestorm. Once aloft, the absorption of sunlight could further heat the soot in the smoke, lifting some or all of it into the stratosphere, where the smoke could persist for years if there is no rain to wash it out. This aerosol of particles could heat the stratosphere and prevent a portion of the sun's light from reaching the surface, causing surface temperatures to drop drastically. In this scenario it is predicted that surface air temperatures would be the same as, or colder than, a given region's winter for months to years on end.
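The anti-greenhouse mechanism described above can be illustrated with a zero-dimensional energy-balance sketch. This is a toy model for intuition only: the transmittance of the soot layer and the longwave emissivity value are illustrative assumptions, not outputs of the published climate-model studies.

```python
# Zero-dimensional energy-balance sketch of the anti-greenhouse effect.
# Toy model for intuition only; parameter values are illustrative assumptions.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.3       # planetary albedo (assumption)

def surface_temp(transmittance=1.0, emissivity=0.78):
    """Equilibrium surface temperature with a partially opaque soot layer.

    transmittance: fraction of sunlight passing the soot layer (assumption).
    emissivity: longwave emissivity of a single gray atmospheric layer;
    0.78 is tuned so the clear-sky case gives roughly the observed 288 K.
    """
    absorbed = transmittance * S0 * (1.0 - ALBEDO) / 4.0  # global mean, W m^-2
    # Standard one-layer gray atmosphere: T_s^4 = absorbed / (sigma * (1 - e/2))
    return (absorbed / (SIGMA * (1.0 - emissivity / 2.0))) ** 0.25

clear = surface_temp(1.0)
sooty = surface_temp(0.99)  # soot layer blocking ~1% of sunlight (illustrative)
print(f"clear: {clear:.1f} K, sooty: {sooty:.1f} K, cooling: {clear - sooty:.2f} K")
# Blocking ~1% of sunlight cools the surface by roughly 0.7 K, of the same
# order as the ~1 degree C figure quoted above for the 100-firestorm case.
```

In this crude scheme, even a percent-level reduction in sunlight reaching the surface produces cooling of the order the models attribute to the "small" nuclear winter case, which is why the fate of the injected soot dominates the debate.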
The modeled stable inversion layer of hot soot between the troposphere and high stratosphere that produces the anti-greenhouse effect was dubbed the "Smokeosphere" by Stephen Schneider et al. in their 1988 paper.
Although it is common in the climate models to consider city firestorms, these need not be ignited by nuclear devices; more conventional ignition sources can instead be their spark. Prior to the solar heating effect mentioned previously, the soot's injection height is controlled by the rate of energy release from the firestorm's fuel, not the size of the initial nuclear explosion. For example, the mushroom cloud from the bomb dropped on Hiroshima reached a height of six kilometers (the middle troposphere) within a few minutes and then dissipated due to winds, while the individual fires within the city took almost three hours to form into a firestorm and produce a pyrocumulus cloud. That cloud is assumed to have reached upper-tropospheric heights, as over its multiple hours of burning the firestorm released an estimated 1,000 times the energy of the bomb.
As the incendiary effects of a nuclear explosion do not present any especially characteristic features, those with strategic bombing experience have estimated that, because the city was a firestorm hazard, the same fire ferocity and building damage produced at Hiroshima by one 16-kiloton nuclear bomb from a single B-29 bomber could have been produced instead by about 1.2 kilotons of incendiary bombs distributed over the city by 220 B-29s.
While the firestorms of Dresden and Hiroshima and the mass fires of Tokyo and Nagasaki occurred within mere months of each other in 1945, the more intense, conventionally ignited Hamburg firestorm occurred in 1943. Despite the differences in time, ferocity, and area burned, leading modelers of the hypothesis state that these five fires potentially placed five percent as much smoke into the stratosphere as the hypothetical 100 nuclear-ignited fires discussed in modern models. While the modeled climate-cooling effects from the mass of soot injected into the stratosphere by 100 firestorms (one to five teragrams) are believed to have been within the detection threshold of WWII-era technical instruments, five percent of that would not have been observable at the time.
The exact timescale for how long this smoke remains, and thus how severely this smoke affects the climate once it reaches the stratosphere, is dependent on both chemical and physical removal processes.
The most important physical removal mechanism is "rainout", both during the "fire-driven convective column" phase, which produces "black rain" near the fire site, and rainout after the convective plume's dispersal, where the smoke is no longer concentrated and thus "wet removal" is believed to be very efficient. However, these efficient removal mechanisms in the troposphere are avoided in the Robock 2007 study, where solar heating is modeled to quickly loft the soot into the stratosphere, "detraining" or separating the darker soot particles from the fire clouds' whiter water condensation.
Once in the stratosphere, the physical mechanisms governing the residence time of the soot particles are the rate at which the aerosol collides and coagulates with other particles via Brownian motion and falls out of the atmosphere via gravity-driven dry deposition, and the time it takes for the "phoretic effect" to move coagulated particles to a lower level in the atmosphere. Whether by coagulation or the phoretic effect, once the aerosol of smoke particles reaches this lower atmospheric level, cloud seeding can begin, permitting precipitation to wash the smoke aerosol out of the atmosphere by the wet-deposition mechanism.
The chemical processes that affect the removal are dependent on the ability of atmospheric chemistry to oxidize the carbonaceous component of the smoke, via reactions with oxidative species such as ozone and nitrogen oxides, both of which are found at all levels of the atmosphere, and which also occur at greater concentrations when air is heated to high temperatures.
Historical data on the residence times of aerosols, albeit for a different mixture (in this case stratospheric sulfur aerosols and volcanic ash from megavolcano eruptions), suggest a one-to-two-year timescale; however, aerosol-atmosphere interactions are still poorly understood.
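If each of the removal processes described above is treated as an independent first-order loss, the removal rates simply add, and the burden decays exponentially with a combined e-folding time. The sketch below illustrates this bookkeeping; the individual e-folding times are placeholder assumptions, not measured values.

```python
import math

def remaining_fraction(t_days, efold_times_days):
    """Fraction of a stratospheric soot burden remaining after t_days,
    assuming each removal channel (coagulation plus gravitational settling,
    phoretic transport downward, chemical oxidation) acts as an independent
    first-order loss; independent first-order rates simply add."""
    total_rate = sum(1.0 / tau for tau in efold_times_days)
    return math.exp(-total_rate * t_days)

# Illustrative e-folding times in days for three loss channels (assumptions):
taus = [400.0, 800.0, 1200.0]
combined_tau = 1.0 / sum(1.0 / t for t in taus)
print(f"combined e-folding time: {combined_tau:.0f} days")
for t in (60, 365, 730):
    print(f"after {t:3d} days: {remaining_fraction(t, taus):.2f} of burden remains")
```

The point of the sketch is that the fastest removal channel dominates the combined lifetime, which is why the dispute over whether wet removal is largely avoided (as in the Robock 2007 study) or efficient (as critics contend) changes the projected severity so strongly.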
Sooty aerosols can have a wide range of properties, as well as complex shapes, making it difficult to determine their evolving atmospheric optical depth. The conditions present during the soot's creation are believed to be considerably important to its final properties: soot generated at the more efficient end of the burning spectrum is considered almost "elemental carbon black", while at the more inefficient end greater quantities of partially burnt or oxidized fuel are present. These partially burnt "organics", as they are known, often form tar balls and brown carbon during common lower-intensity wildfires, and can also coat the purer black-carbon particles. However, as the soot of greatest importance is that injected to the highest altitudes by the pyroconvection of the firestorm (a fire fed with storm-force winds of air), it is estimated that the majority of the soot under these conditions is the more oxidized black carbon.
A study presented at the annual meeting of the American Geophysical Union in December 2006 found that even a small-scale, regional nuclear war could disrupt the global climate for a decade or more. In a regional nuclear conflict scenario where two opposing nations in the subtropics would each use 50 Hiroshima-sized nuclear weapons (about 15 kilotons each) on major population centers, the researchers estimated that as much as five million tons of soot would be released, producing a cooling of several degrees over large areas of North America and Eurasia, including most of the grain-growing regions. The cooling would last for years and, according to the research, could be "catastrophic".
Nuclear detonations produce large amounts of nitrogen oxides by breaking down the air around them. These are then lifted upwards by thermal convection. As they reach the stratosphere, these nitrogen oxides are capable of catalytically breaking down the ozone present in this part of the atmosphere. Ozone depletion would allow a much greater intensity of harmful ultraviolet radiation from the sun to reach the ground.
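The catalytic cycle involved is the standard NOx ozone-destruction cycle (discussed further below in connection with Paul Crutzen's work); because the NO is regenerated, a single molecule can destroy many ozone molecules:

```latex
\mathrm{NO} + \mathrm{O_3} \rightarrow \mathrm{NO_2} + \mathrm{O_2}
\mathrm{NO_2} + \mathrm{O} \rightarrow \mathrm{NO} + \mathrm{O_2}
\text{net:}\quad \mathrm{O_3} + \mathrm{O} \rightarrow 2\,\mathrm{O_2}
```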
A 2008 study by Michael J. Mills et al., published in the Proceedings of the National Academy of Sciences, found that a nuclear weapons exchange between Pakistan and India using their current arsenals could create a near-global ozone hole, triggering human health problems and causing environmental damage for at least a decade. The computer-modeled study looked at a nuclear war between the two countries involving 50 Hiroshima-sized nuclear devices on each side, producing massive urban fires and lofting as much as five million metric tons of soot into the stratosphere. The soot would absorb enough solar radiation to heat surrounding gases, increasing the breakdown of the stratospheric ozone layer protecting Earth from harmful ultraviolet radiation, with up to 70% ozone loss at northern high latitudes.
A "nuclear summer" is a hypothesized scenario in which, after a nuclear winter caused by aerosols inserted into the atmosphere that would prevent sunlight from reaching lower levels or the surface, has abated, a greenhouse effect then occurs due to carbon dioxide released by combustion and methane released from the decay of the organic matter and methane from dead organic matter and corpses that froze during the nuclear winter.
In another, more sequential hypothetical scenario, following the settling out of most of the aerosols in 1–3 years, the cooling effect would be overcome by a heating effect from greenhouse warming, which would raise surface temperatures rapidly by many degrees: enough to cause the death of much, if not most, of the life that had survived the cooling, much of which is more vulnerable to higher-than-normal temperatures than to lower-than-normal temperatures. The nuclear detonations would release CO2 and other greenhouse gases from burning, followed by more released from the decay of dead organic matter. The detonations would also insert nitrogen oxides into the stratosphere, which would then deplete the ozone layer around the Earth. This layer screens out UV-C radiation from the Sun, which causes genetic damage to life forms on the surface. As the temperature rises, the amount of water in the atmosphere would increase, causing further greenhouse warming of the surface; if it rose enough, it could cause the sublimation of methane clathrate deposits on the sea floor, releasing huge amounts of methane, a greenhouse gas, into the atmosphere, perhaps enough to trigger runaway climate change.
Other, more straightforward versions of the hypothesis that nuclear winter might give way to a nuclear summer exist; for example, the high temperatures of the nuclear fireballs could destroy the ozone of the middle stratosphere.
In 1952, a few weeks prior to the Ivy Mike (10.4 megaton) bomb test on Elugelab Island, there were concerns that the aerosols lifted by the explosion might cool the Earth. Major Norair Lulejian, USAF, and astronomer Natarajan Visvanathan studied this possibility, reporting their findings in "Effects of Superweapons Upon the Climate of the World", whose distribution was tightly controlled. This report is described in a 2013 report by the Defense Threat Reduction Agency as the initial study of the "nuclear winter" concept. It indicated no appreciable chance of explosion-induced climate change.
The implications for civil defense of numerous surface bursts of high yield hydrogen bomb explosions on Pacific Proving Ground islands such as those of Ivy Mike in 1952 and Castle Bravo (15 Mt) in 1954 were described in a 1957 report on "The Effects of Nuclear Weapons", edited by Samuel Glasstone. A section in that book entitled "Nuclear Bombs and the Weather" states: "The dust raised in severe volcanic eruptions, such as that at Krakatoa in 1883, is known to cause a noticeable reduction in the sunlight reaching the earth ... The amount of [soil or other surface] debris remaining in the atmosphere after the explosion of even the largest nuclear weapons is probably not more than about one percent or so of that raised by the Krakatoa eruption. Further, solar radiation records reveal that none of the nuclear explosions to date has resulted in any detectable change in the direct sunlight recorded on the ground." The US Weather Bureau in 1956 regarded it as conceivable that a large enough nuclear war with megaton-range surface detonations could lift enough soil to cause a new ice age.
The 1966 RAND Corporation memorandum "The Effects of Nuclear War on the Weather and Climate" by E. S. Batten, while primarily analysing potential dust effects from surface bursts, notes that "in addition to the effects of the debris, extensive fires ignited by nuclear detonations might change the surface characteristics of the area and modify local weather patterns ... however, a more thorough knowledge of the atmosphere is necessary to determine their exact nature, extent, and magnitude."
The United States National Research Council (NRC) book "Long-Term Worldwide Effects of Multiple Nuclear-Weapons Detonations", published in 1975, states that a nuclear war involving 4,000 Mt from "present arsenals" would probably deposit much less dust in the stratosphere than the Krakatoa eruption, judging that the effect of dust and oxides of nitrogen would probably be slight climatic cooling which "would probably lie within normal global climatic variability, but the possibility of climatic changes of a more dramatic nature cannot be ruled out".
In the 1985 report "The Effects on the Atmosphere of a Major Nuclear Exchange", the Committee on the Atmospheric Effects of Nuclear Explosions argues that a "plausible" estimate of the amount of stratospheric dust injected following a surface burst of 1 Mt is 0.3 teragrams, of which 8 percent would be in the micrometer range. The potential cooling from soil dust was again examined in 1992, in a US National Academy of Sciences (NAS) report on geoengineering, which estimated that about 10¹⁰ kg (10 teragrams) of stratospherically injected soil dust with particulate grain dimensions of 0.1 to 1 micrometer would be required to mitigate the warming from a doubling of atmospheric carbon dioxide, that is, to produce ~2 °C of cooling.
In 1969, Paul Crutzen discovered that oxides of nitrogen (NOx) could be an efficient catalyst for the destruction of the ozone layer/stratospheric ozone. Following studies on the potential effects of NOx generated by engine heat in stratosphere-flying Supersonic Transport (SST) airplanes in the 1970s, John Hampson suggested in 1974 in the journal "Nature" that, due to the creation of atmospheric NOx by nuclear fireballs, a full-scale nuclear exchange could result in depletion of the ozone shield, possibly subjecting the earth to ultraviolet radiation for a year or more. In 1975, Hampson's hypothesis "led directly" to the United States National Research Council (NRC) reporting on the models of ozone depletion following nuclear war in the book "Long-Term Worldwide Effects of Multiple Nuclear-Weapons Detonations".
In the section of this 1975 NRC book pertaining to fireball-generated NOx and the ozone layer loss therefrom, the NRC presents model calculations from the early-to-mid 1970s on the effects of a nuclear war involving large numbers of multi-megaton yield detonations, which concluded that this could reduce ozone levels by 50 percent or more in the northern hemisphere.
However, independent of the computer models presented in the 1975 NRC work, a 1973 paper in the journal "Nature" depicts worldwide stratospheric ozone levels overlaid upon the number of nuclear detonations during the era of atmospheric testing. The authors concluded that neither the data nor their models showed any correlation between the approximately 500 Mt of historical atmospheric testing and an increase or decrease in ozone concentration. In 1976, a study of experimental measurements of an earlier atmospheric nuclear test as it affected the ozone layer also found that nuclear detonations were exonerated of depleting ozone, after the initially alarming model calculations of the time. Similarly, a 1981 paper found that the models of ozone destruction from one test and the physical measurements taken were in disagreement, as no destruction was observed.
In total, about 500 Mt were atmospherically detonated between 1945 and 1971, peaking in 1961–62, when 340 Mt were detonated in the atmosphere by the United States and the Soviet Union. During this peak, the multi-megaton range detonations of the two nations' nuclear test series alone released a total yield estimated at 300 Mt of energy. Due to this, 3 × 10³⁴ additional molecules of nitric oxide (about 5,000 tons per Mt, or 5 × 10⁹ grams per megaton) are believed to have entered the stratosphere, and while an ozone depletion of 2.2 percent was noted in 1963, the decline had started prior to 1961 and is believed to have been caused by other meteorological effects.
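As a consistency check (using NO's molar mass of about 30 g/mol and Avogadro's number), the quoted molecule count follows from the stated production factor:

```latex
300\ \mathrm{Mt} \times 5\times10^{9}\ \tfrac{\mathrm{g}}{\mathrm{Mt}} = 1.5\times10^{12}\ \mathrm{g},
\qquad
\frac{1.5\times10^{12}\ \mathrm{g}}{30\ \mathrm{g\,mol^{-1}}} \times 6.022\times10^{23}\ \mathrm{mol^{-1}}
\approx 3\times10^{34}\ \text{molecules}
```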
In 1982, journalist Jonathan Schell, in his popular and influential book "The Fate of the Earth", introduced the public to the belief that fireball-generated NOx would destroy the ozone layer to such an extent that crops would fail from solar UV radiation, with plant and aquatic life going extinct. In the same year, Australian physicist Brian Martin, who frequently corresponded with John Hampson (who had been greatly responsible for much of the examination of NOx generation), penned a short historical synopsis of the interest in the effects of the direct NOx generated by nuclear fireballs. In doing so, he also outlined Hampson's other non-mainstream viewpoints, particularly those relating to greater ozone destruction from upper-atmospheric detonations as a result of any widely used anti-ballistic missile (ABM-1 Galosh) system. However, Martin ultimately concluded that it is "unlikely that in the context of a major nuclear war" ozone degradation would be of serious concern, and described views about potential ozone loss, and therefore increases in ultraviolet light leading to the widespread destruction of crops, as advocated by Jonathan Schell in "The Fate of the Earth", as highly unlikely.
More recent accounts of the specific ozone-layer destruction potential of NOx species place it far below what was earlier assumed from simplistic calculations, as "about 1.2 million tons" of natural and anthropogenic stratospheric NOx is believed to form each year, according to Robert P. Parson in the 1990s.
The first published suggestion that a cooling of climate could be an effect of a nuclear war appears to have been put forth by Poul Anderson and F. N. Waldrop in their post-war story "Tomorrow's Children", in the March 1947 issue of "Astounding Science Fiction" magazine. The story, primarily about a team of scientists hunting down mutants, warns of a "Fimbulwinter" caused by dust that blocked sunlight after a recent nuclear war, and speculates that it might even trigger a new Ice Age. Anderson went on to publish a novel based partly on this story in 1961, titling it "Twilight World". Similarly, in 1985 T. G. Parsons noted that the story "Torch" by C. Anvil, which appeared in the April 1957 edition of "Astounding Science Fiction" magazine, contains the essence of the "Twilight at Noon"/"nuclear winter" hypothesis: in the story a nuclear warhead ignites an oil field, and the soot produced "screens out part of the sun's radiation", resulting in Arctic temperatures for much of the population of North America and the Soviet Union.
The 1988 Air Force Geophysics Laboratory publication, "An assessment of global atmospheric effects of a major nuclear war" by H. S. Muench, et al., contains a chronology and review of the major reports on the nuclear winter hypothesis from 1983–1986. In general these reports arrive at similar conclusions as they are based on "the same assumptions, the same basic data", with only minor model-code differences. They skip the modeling steps of assessing the possibility of fire and the initial fire plumes and instead start the modeling process with a "spatially uniform soot cloud" which has found its way into the atmosphere.
Although never openly acknowledged by the multi-disciplinary team that authored the most popular 1980s model, TTAPS, the American Institute of Physics stated in 2011 that the TTAPS team (named for its participants, who had all previously worked on the phenomenon of dust storms on Mars or on asteroid impact events: Richard P. Turco, Owen Toon, Thomas P. Ackerman, James B. Pollack and Carl Sagan) announced its results in 1983 "with the explicit aim of promoting international arms control". However, "the computer models were so simplified, and the data on smoke and other aerosols were still so poor, that the scientists could say nothing for certain."
In 1981, William J. Moran began discussions and research in the National Research Council (NRC) on the airborne soil/dust effects of a large exchange of nuclear warheads, having seen a possible parallel between the dust effects of a war and those of the asteroid impact that created the K-T boundary, popularized a year earlier by Luis Alvarez's 1980 analysis. An NRC study panel on the topic met in December 1981 and April 1982 in preparation for the release of the NRC's "The Effects on the Atmosphere of a Major Nuclear Exchange", published in 1985.
As part of a study on the creation of oxidizing species such as NOx and ozone in the troposphere after a nuclear war, launched in 1980 by "AMBIO", a journal of the Royal Swedish Academy of Sciences, Paul J. Crutzen and John Birks began preparing for the 1982 publication of a calculation of the effects of nuclear war on stratospheric ozone, using the latest models of the time. However, they found that, in part as a result of the trend towards more numerous but less energetic, sub-megaton range nuclear warheads (made possible by steadily increasing ICBM warhead accuracy, i.e. decreasing circular error probable), the ozone-layer danger was "not very significant".
It was after being confronted with these results that they "chanced" upon the notion, as "an afterthought", of nuclear detonations igniting massive fires everywhere and, crucially, the smoke from these conventional fires then absorbing sunlight and causing surface temperatures to plummet. In early 1982, the two circulated a draft paper with the first suggestions of alterations in short-term climate from fires presumed to occur following a nuclear war. Later in the same year, the special issue of "Ambio" devoted to the possible environmental consequences of nuclear war, by Crutzen and Birks, was titled "Twilight at Noon" and largely anticipated the nuclear winter hypothesis. The paper looked into fires and their climatic effect, discussing particulate matter from large fires, nitrogen oxide, ozone depletion, and the effect of nuclear twilight on agriculture. Crutzen and Birks' calculations suggested that smoke particulates injected into the atmosphere by fires in cities, forests and petroleum reserves could prevent up to 99 percent of sunlight from reaching the Earth's surface. This darkness, they said, could exist "for as long as the fires burned", which was assumed to be many weeks, with effects such as: "The normal dynamic and temperature structure of the atmosphere would ... change considerably over a large fraction of the Northern Hemisphere, which will probably lead to important changes in land surface temperatures and wind systems." An implication of their work was that a successful nuclear decapitation strike could have severe climatic consequences for the perpetrator.
After reading a paper by N. P. Bochkov and E. I. Chazov, published in the same edition of "Ambio" that carried Crutzen and Birks's paper "Twilight at Noon", Soviet atmospheric scientist Georgy Golitsyn applied his research on Mars dust storms to soot in the Earth's atmosphere. The use of these influential Martian dust storm models in nuclear winter research began in 1971, when the Soviet spacecraft Mars 2 arrived at the red planet and observed a global dust cloud. The orbiting instruments, together with the 1971 Mars 3 lander, determined that temperatures on the surface of the red planet were considerably colder than temperatures at the top of the dust cloud. Following these observations, Golitsyn received two telegrams from astronomer Carl Sagan, in which Sagan asked Golitsyn to "explore the understanding and assessment of this phenomenon." Golitsyn recounts that it was around this time that he had "proposed a theory to explain how Martian dust may be formed and how it may reach global proportions."
In the same year, Alexander Ginzburg, an employee in Golitsyn's institute, developed a model of dust storms to describe the cooling phenomenon on Mars. Golitsyn felt that this model would be applicable to soot after he read a 1982 Swedish magazine issue dedicated to the effects of a hypothetical nuclear war between the USSR and the US. Golitsyn used Ginzburg's largely unmodified dust-cloud model, with soot assumed as the aerosol instead of soil dust, and the results returned were identical in character to those for dust-cloud cooling in the Martian atmosphere: the cloud high above the planet would be heated while the planet below would cool drastically. Golitsyn presented his intent to publish this Mars-derived Earth-analog model to the Andropov-instigated "Committee of Soviet Scientists in Defence of Peace Against the Nuclear Threat" in May 1983, an organization of which Golitsyn would later be appointed vice-chairman. The establishment of this committee had the express approval of the Soviet leadership, with the intent "to expand controlled contacts with Western "nuclear freeze" activists". Having gained the committee's approval, in September 1983 Golitsyn published the first computer model of the nascent "nuclear winter" effect in the widely read "Herald of the Russian Academy of Sciences".
On 31 October 1982, Golitsyn and Ginzburg's model and results were presented at the conference on "The World after Nuclear War", hosted in Washington, D.C.
Both Golitsyn and Sagan had been interested in the cooling effect of dust storms on the planet Mars in the years preceding their focus on "nuclear winter". Sagan had also worked on Project A119 in the 1950s–1960s, in which he attempted to model the movement and longevity of a plume of lunar soil.
After the publication of "Twilight at Noon" in 1982, the TTAPS team have said that they began a 1-dimensional computational modeling study of the atmospheric consequences of nuclear war and soot in the stratosphere, though they would not publish a paper in "Science" magazine until late December 1983. The phrase "nuclear winter" had been coined by Turco just prior to publication. In this early paper, TTAPS used assumption-based estimates of the total smoke and dust emissions that would result from a major nuclear exchange and, with that, began analyzing the subsequent effects on the atmospheric radiation balance and temperature structure resulting from this quantity of assumed smoke. To compute dust and smoke effects, they employed a one-dimensional microphysics/radiative-transfer model of the Earth's lower atmosphere (up to the mesopause), which defined only the vertical characteristics of the global climate perturbation.
Interest in the environmental effects of nuclear war had, however, continued in the Soviet Union after Golitsyn's September paper, with Vladimir Alexandrov and G. I. Stenchikov also publishing a paper on the climatic consequences in December 1983, although in contrast to the contemporary TTAPS paper, theirs was based on simulations with a three-dimensional global circulation model. (Two years later, Alexandrov disappeared under mysterious circumstances.) Richard Turco and Starley L. Thompson were both critical of the Soviet research: Turco called it "primitive" and Thompson said it used obsolete US computer models. They later rescinded these criticisms and instead applauded Alexandrov's pioneering work, saying that the Soviet model shared the weaknesses of all the others.
In 1984, the World Meteorological Organization (WMO) commissioned Golitsyn and N. A. Phillips to review the state of the science. They found that studies generally assumed a scenario where half of the world's nuclear weapons would be used, ~5000 Mt, destroying approximately 1,000 cities, and creating large quantities of carbonaceous smoke – 1– being most likely, with a range of 0.2– (NAS; TTAPS assumed ). The smoke resulting would be largely opaque to solar radiation but transparent to infrared, thus cooling the Earth by blocking sunlight, but not creating warming by enhancing the greenhouse effect. The optical depth of the smoke can be much greater than unity. Forest fires resulting from non-urban targets could increase aerosol production further. Dust from near-surface explosions against hardened targets also contributes; each megaton-equivalent explosion could release up to five million tons of dust, but most would quickly fall out; high altitude dust is estimated at 0.1–1 million tons per megaton-equivalent of explosion. Burning of crude oil could also contribute substantially.
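To see why an optical depth "much greater than unity" matters, note that direct-beam transmission through an absorbing layer falls off exponentially with optical depth τ (the Beer-Lambert law):

```latex
\frac{I}{I_0} = e^{-\tau}:\qquad
\tau = 1 \Rightarrow \approx 37\%,\quad
\tau = 3 \Rightarrow \approx 5\%,\quad
\tau = 5 \Rightarrow \approx 0.7\%\ \text{of direct sunlight transmitted}
```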
The 1-D radiative-convective models used in these studies produced a range of results, with coolings of up to 15–42 °C between 14 and 35 days after the war, and a "baseline" of about 20 °C. Somewhat more sophisticated calculations using 3-D GCMs produced similar results: temperature drops of about 20 °C, though with regional variations.
All calculations show large heating (up to 80 °C) at the top of the smoke layer; this implies a substantial modification of the circulation there and the possibility of advection of the cloud into low latitudes and the southern hemisphere.
In a 1990 paper entitled "Climate and Smoke: An Appraisal of Nuclear Winter", TTAPS gave a more detailed description of the short- and long-term atmospheric effects of a nuclear war using a three-dimensional model:
First one to three months:
Following one to three years:
One of the major results of TTAPS' 1990 paper was the reiteration of the team's 1983 finding that 100 oil refinery fires would be sufficient to bring about a small-scale, but still globally deleterious, nuclear winter.
Following Iraq's invasion of Kuwait and Iraqi threats of igniting the country's approximately 800 oil wells, speculation on the cumulative climatic effect of this, presented at the World Climate Conference in Geneva in November 1990, ranged from a nuclear winter-type scenario to heavy acid rain and even short-term immediate global warming.
In articles printed in the "Wilmington Morning Star" and the "Baltimore Sun" newspapers in January 1991, prominent authors of nuclear winter papers – Richard P. Turco, John W. Birks, Carl Sagan, Alan Robock and Paul Crutzen – collectively stated that they expected catastrophic, nuclear winter-like effects, with continental-scale sub-freezing temperatures, as a result of the Iraqis going through with their threats of igniting 300 to 500 pressurized oil wells that could subsequently burn for several months.
As threatened, the wells were set on fire by the retreating Iraqis in March 1991, and the 600 or so burning oil wells were not fully extinguished until November 6, 1991, eight months after the end of the war, and they consumed an estimated six million barrels of oil per day at their peak intensity.
When Operation Desert Storm began in January 1991, coinciding with the first few oil fires being lit, Dr. S. Fred Singer and Carl Sagan discussed the possible environmental effects of the Kuwaiti petroleum fires on the ABC News program "Nightline". Sagan again argued that some of the effects of the smoke could be similar to the effects of a nuclear winter, with smoke lofting into the stratosphere above Kuwait, resulting in global effects. He also argued that he believed the net effects would be very similar to the explosion of the Indonesian volcano Tambora in 1815, which resulted in the year 1816 being known as the "Year Without a Summer".
Sagan listed modeling outcomes that forecast effects extending to South Asia, and perhaps to the Northern Hemisphere as a whole, stressing that this outcome was so likely that "It should affect the war plans." Singer, on the other hand, anticipated that the smoke would rise only to modest altitudes and then be rained out after about three to five days, limiting its lifetime. Both height estimates made by Singer and Sagan turned out to be wrong, albeit with Singer's narrative being closer to what transpired: the comparatively minimal atmospheric effects remained limited to the Persian Gulf region, with the smoke plumes generally staying at lower, tropospheric altitudes.
Sagan and his colleagues expected that a "self-lofting" of the sooty smoke would occur as it absorbed the sun's heat radiation, with little to no scavenging occurring: the black particles of soot would be heated by the sun and lofted higher and higher into the air, injecting the soot into the stratosphere, where they argued it would take years for the sun-blocking effect of this aerosol of soot to fall out of the air, and with that, catastrophic ground-level cooling and agricultural effects in Asia and possibly the Northern Hemisphere as a whole. In a 1992 follow-up, Peter Hobbs and others observed no appreciable evidence for the massive "self-lofting" effect the nuclear winter team had predicted, and the oil-fire smoke clouds contained less soot than the nuclear winter modelling team had assumed.
The atmospheric scientist tasked with studying the atmospheric effect of the Kuwaiti fires by the National Science Foundation, Peter Hobbs, stated that the fires' modest impact suggested that "some numbers [used to support the Nuclear Winter hypothesis]... were probably a little overblown."
Hobbs found that at the peak of the fires the smoke absorbed 75 to 80% of the sun's radiation. The particles rose only to limited altitudes, and, combined with scavenging by clouds, the smoke had a short residency time of at most a few days in the atmosphere.
Pre-war claims of wide-scale, long-lasting, and significant global environmental effects were thus not borne out and were found to have been significantly exaggerated by the media and by speculators; climate models by those not supporting the nuclear winter hypothesis at the time of the fires had predicted only more localized effects, such as a daytime temperature drop of ~10 °C within 200 km of the source.
Sagan later conceded in his book "The Demon-Haunted World" that his predictions obviously did not turn out to be correct: "it "was" pitch black at noon and temperatures dropped 4–6° C over the Persian Gulf, but not much smoke reached stratospheric altitudes and Asia was spared."
The idea of smoke from oil wells and oil reserves pluming into the stratosphere was central to the early climatology papers on the hypothesis as a main contributor to the soot of a nuclear winter; such smoke was considered a greater potential contributor than smoke from cities, because smoke from oil has a higher ratio of black soot and thus absorbs more sunlight. Hobbs compared the papers' assumed "emission factor", or soot-generating efficiency, of ignited oil pools to values measured from oil pools at Kuwait (which were the greatest soot producers) and found that the soot emissions assumed in the nuclear winter calculations were still "too high". Following the disagreement between the results of the Kuwaiti oil fires and the predictions of the core nuclear-winter-promoting scientists, 1990s nuclear winter papers generally attempted to distance themselves from suggesting that oil well and reserve smoke would reach the stratosphere.
In 2007, a nuclear winter study, noted that modern computer models have been applied to the Kuwait oil fires, finding that individual smoke plumes are not able to loft smoke into the stratosphere, but that smoke from fires covering a large area like some forest fires can lift smoke into the stratosphere, and recent evidence suggests that this occurs far more often than previously thought. The study also suggested that the burning of the comparably smaller cities, which would be expected to follow a nuclear strike, would also loft significant amounts of smoke into the stratosphere:
However the above simulation notably contained the assumption that no dry or wet deposition would occur.
Between 1990 and 2003, commentators noted that no peer-reviewed papers on "nuclear winter" were published.
Based on new work published in 2007 and 2008 by some of the authors of the original studies, several new hypotheses have been put forth, primarily the assessment that as few as 100 firestorms would result in a nuclear winter. However, far from being "new", the hypothesis drew the same conclusion as earlier 1980s models, which similarly regarded 100 or so city firestorms as a threat.
Compared to climate change for the past millennium, even the smallest exchange modeled would plunge the planet into temperatures colder than the Little Ice Age (the period of history between approximately 1600 and 1850 AD). This would take effect instantly, and agriculture would be severely threatened. Larger amounts of smoke would produce larger climate changes, making agriculture impossible for years. In both cases, new climate model simulations show that the effects would last for more than a decade.
A study published in the "Journal of Geophysical Research" in July 2007, titled "Nuclear winter revisited with a modern climate model and current nuclear arsenals: Still catastrophic consequences", used current climate models to look at the consequences of a global nuclear war involving most or all of the world's current nuclear arsenals (which the authors judged to be similar in size to the world's arsenals of twenty years earlier). The authors used a global circulation model, ModelE from the NASA Goddard Institute for Space Studies, which they noted "has been tested extensively in global warming experiments and to examine the effects of volcanic eruptions on climate." The model was used to investigate the effects of a war involving the entire current global nuclear arsenal, projected to release about 150 Tg of smoke into the atmosphere, as well as a war involving about one third of the current nuclear arsenal, projected to release about 50 Tg of smoke. In the 150 Tg case they found that:
In addition, they found that this cooling caused a weakening of the global hydrological cycle, reducing global precipitation by about 45%. As for the 50 Tg case involving one third of current nuclear arsenals, they said that the simulation "produced climate responses very similar to those for the 150 Tg case, but with about half the amplitude," but that "the time scale of response is about the same." They did not discuss the implications for agriculture in depth, but noted that a 1986 study which assumed no food production for a year projected that "most of the people on the planet would run out of food and starve to death by then" and commented that their own results show that, "This period of no food production needs to be extended by many years, making the impacts of nuclear winter even worse than previously thought."
In 2014, Michael J. Mills (at the US National Center for Atmospheric Research, NCAR), et al., published "Multi-decadal global cooling and unprecedented ozone loss following a regional nuclear conflict" in the journal "Earth's Future". The authors used computational models developed by NCAR to simulate the climatic effects of a soot cloud that they suggest would result from a regional nuclear war in which 100 "small" (15 kt) weapons are detonated over cities. Due to the interactions of the soot cloud, the model produced the following outputs:
global ozone losses of 20–50% over populated areas, levels unprecedented in human history, would accompany the coldest average surface temperatures in the last 1000 years. We calculate summer enhancements in UV indices of 30–80% over Mid-Latitudes, suggesting widespread damage to human health, agriculture, and terrestrial and aquatic ecosystems. Killing frosts would reduce growing seasons by 10–40 days per year for 5 years. Surface temperatures would be reduced for more than 25 years, due to thermal inertia and albedo effects in the ocean and expanded sea ice. The combined cooling and enhanced UV would put significant pressures on global food supplies and could trigger a global nuclear famine.
Research published in the peer-reviewed journal "Safety" suggested that no nation should possess more than 100 nuclear warheads because of the blowback effect on the aggressor nation's own population because of "nuclear autumn".
The four major, largely independent underpinnings over which the nuclear winter concept has received and continues to receive criticism are these. First, ignition: would cities readily firestorm, and if so, how much soot would be generated? Second, atmospheric longevity: would the quantities of soot assumed in the models remain in the atmosphere for as long as projected, or would far more soot precipitate as black rain much sooner? Third, timing of events: how reasonable is it for the modeling of firestorms or war to commence in late spring or summer, as is done in almost all US-Soviet nuclear winter papers, thereby giving rise to the largest possible degree of modeled cooling? Lastly, darkness or opacity: how much light-blocking effect would the assumed quality of the soot reaching the atmosphere have?
While the highly popularized initial 1983 TTAPS 1-dimensional model forecasts were widely reported and criticized in the media, in part because every later model predicted far less than its "apocalyptic" level of cooling, most models continue to suggest that some deleterious global cooling would still result, under the assumption that a large number of fires occurred in the spring or summer. Starley L. Thompson's less primitive mid-1980s 3-dimensional model, which notably contained the very same general assumptions, led him to coin the term "nuclear autumn" to more accurately describe the climate results of the soot in this model, in an on-camera interview in which he dismissed the earlier "apocalyptic" models.
A major criticism of the assumptions that continue to make these model results possible appeared in the 1987 book "Nuclear War Survival Skills" ("NWSS"), a civil defense manual by Cresson Kearny for the Oak Ridge National Laboratory. According to the 1988 publication "An assessment of global atmospheric effects of a major nuclear war", Kearny's criticisms were directed at the excessive amount of soot that the modelers assumed would reach the stratosphere. Kearny cited a Soviet study finding that modern cities would not burn as firestorms, as most flammable city items would be buried under non-combustible rubble, and argued that the TTAPS study included a massive overestimate of the size and extent of non-urban wildfires that would result from a nuclear war. The TTAPS authors responded that, among other things, they did not believe target planners would intentionally blast cities into rubble, but instead argued fires would begin in relatively undamaged suburbs when nearby sites were hit, and partially conceded his point about non-urban wildfires. Dr. Richard D. Small, director of thermal sciences at the Pacific-Sierra Research Corporation, similarly disagreed strongly with the model assumptions, in particular the 1990 update by TTAPS arguing that some 5,075 Tg of material would burn in a total US-Soviet nuclear war, as Small's analysis of blueprints and real buildings returned a maximum of 1,475 Tg of material that could burn, "assuming that all the available combustible material was actually ignited".
Although Kearny was of the opinion that future, more accurate models would "indicate there will be even smaller reductions in temperature", including potential future models that did not so readily accept that firestorms would occur as dependably as nuclear winter modellers assume, in "NWSS" Kearny did summarize the comparatively moderate cooling estimate of no more than a few days from the 1986 "Nuclear Winter Reappraised" model by Starley Thompson and Stephen Schneider. This was done in an effort to convey to his readers that, contrary to the popular opinion at the time, these two climate scientists had concluded that "on scientific grounds the global apocalyptic conclusions of the initial nuclear winter hypothesis can now be relegated to a vanishing low level of probability."
While a 1988 article by Brian Martin in "Science and Public Policy" states that "Nuclear Winter Reappraised" concluded the US-Soviet "nuclear winter" would be much less severe than originally thought, with the authors describing the effects more as a "nuclear autumn", other statements by Thompson and Schneider show that they "resisted the interpretation that this means a rejection of the basic points made about nuclear winter". In their 2007 paper, Alan Robock et al. write that "because of the use of the term 'nuclear autumn' by Thompson and Schneider [1986], even though the authors made clear that the climatic consequences would be large, in policy circles the theory of nuclear winter is considered by some to have been exaggerated and disproved [e.g., Martin, 1988]." In 2007, Schneider expressed his tentative support for the cooling results of the limited nuclear war (Pakistan and India) analyzed in the 2006 model, saying "The sun is much stronger in the tropics than it is in mid-latitudes. Therefore, a much more limited war [there] could have a much larger effect, because you are putting the smoke in the worst possible place", and "anything that you can do to discourage people from thinking that there is any way to win anything with a nuclear exchange is a good idea."
The contribution of smoke from the ignition of live non-desert vegetation (living forests, grasses, and so on) near many missile silos is a source of smoke originally assumed to be very large in the initial "Twilight at Noon" paper, and also found in the popular TTAPS publication. However, this assumption was examined by Bush and Small in 1987, who found that the burning of live vegetation could contribute only very slightly to the estimated total "nonurban smoke production", since vegetation could plausibly sustain burning only within a fireball radius or two of the detonation point, a distance that would also experience extreme blast winds that would influence any such fires. This reduction in the estimate of the non-urban smoke hazard is supported by the earlier preliminary "Estimating Nuclear Forest Fires" publication of 1984, and by the 1950s–60s in-field examination of surface-scorched, mangled, but never burnt-down tropical forests on the islands surrounding the shot points of the Operation Castle and Operation Redwing test series.
A paper by the United States Department of Homeland Security, finalized in 2010, states that after a nuclear detonation targeting a city, "If fires are able to grow and coalesce, a firestorm could develop that would be beyond the abilities of firefighters to control. However, experts suggest the nature of modern US city design and construction may make a raging firestorm unlikely." The nuclear bombing of Nagasaki, for example, did not produce a firestorm. This was similarly noted as early as 1986–88, when the assumed quantity of fuel "mass loading" (the amount of fuel per square meter) in the cities underpinning the winter models was found to be too high, intentionally creating heat fluxes that loft smoke into the lower stratosphere, while assessments "more characteristic of conditions" found in real-world modern cities showed that the fuel loading, and hence the heat flux resulting from efficient burning, would rarely loft smoke much higher than 4 km.
Russell Seitz, Associate of the Harvard University Center for International Affairs, argues that the winter models' assumptions give results the researchers want to achieve, a case of "worst-case analysis run amok". In September 1986, Seitz published "Siberian fire as 'nuclear winter' guide" in the journal "Nature", in which he investigated the 1915 Siberian fire, which started in the early summer months and was caused by the worst drought in the region's recorded history. The fire ultimately devastated the region, burning an area of the world's largest boreal forest the size of Germany. While approximately 8 °C of daytime summer cooling occurred under the smoke clouds during the weeks of burning, no increase in potentially devastating agricultural night frosts occurred. Following his investigation into the Siberian fire of 1915, Seitz criticized the "nuclear winter" model results for being based on successive worst-case events: "The improbability of a string of 40 such coin tosses coming up heads approaches that of a pat royal flush. Yet it was represented as a 'sophisticated one-dimensional model' – a usage that is oxymoronic, unless applied to [the British model Lesley Lawson] Twiggy."
Seitz cited Carl Sagan, adding an emphasis: ""In almost any realistic case" involving nuclear exchanges between the superpowers, global environmental changes sufficient to cause an extinction event equal to or more severe than that of the close of the Cretaceous when the dinosaurs and many other species died out are likely." Seitz comments: "The ominous rhetoric italicized in this passage puts even the 100 megaton [the original 100 city firestorm] scenario ... on a par with the 100 million megaton blast of an asteroid striking the Earth. This [is] astronomical mega-hype ..." Seitz concludes:
Seitz's opposition caused the proponents of nuclear winter to issue responses in the media. The proponents believed it was necessary to show only the possibility of climatic catastrophe, often a worst-case scenario, while opponents insisted that, to be taken seriously, nuclear winter should be shown to be likely under "reasonable" scenarios. One of these areas of contention, as elucidated by Lynn R. Anspaugh, is the question of which season should be used as the backdrop for the US-USSR war models. Most models choose summer in the Northern Hemisphere as the start point, producing the maximum soot lofting and therefore the eventual winter effect; it has been pointed out that if the firestorms occurred in the autumn or winter months, when there is much less intense sunlight to loft soot into a stable region of the stratosphere, the magnitude of the cooling effect from the same number of firestorms as ignited in the summer models would be negligible, according to a January model run by Covey et al. Schneider conceded the issue in 1990, saying "a war in late fall or winter would have no appreciable [cooling] effect".
Anspaugh also expressed frustration over a managed forest fire in Canada on 3 August 1985, said to have been lit by proponents of nuclear winter. The fire could potentially have served as an opportunity to make basic measurements of the optical properties of the smoke and the smoke-to-fuel ratio, which would have helped refine the estimates of these critical model inputs, yet the proponents did not indicate that any such measurements were made. Peter V. Hobbs, who would later successfully attain funding to fly into and sample the smoke clouds from the Kuwait oil fires in 1991, also expressed frustration that he was denied funding to sample the Canadian and other forest fires in this way. Turco wrote a 10-page memorandum with information derived from his notes and some satellite images, claiming that the smoke plume reached 6 km in altitude.
In 1986, atmospheric scientist Joyce Penner from the Lawrence Livermore National Laboratory published an article in "Nature" focusing on the specific variables of the smoke's optical properties and the quantity of smoke remaining airborne after the city fires. She found that the published estimates of these variables varied so widely that, depending on which estimates were chosen, the climate effect could be negligible, minor or massive.
The assumed optical properties for black carbon in more recent nuclear winter papers in 2006 are still "based on those assumed in earlier nuclear winter simulations".
John Maddox, editor of the journal "Nature", issued a series of skeptical comments about nuclear winter studies during his tenure. Similarly, S. Fred Singer was a long-term vocal critic of the hypothesis in the journal and in televised debates with Carl Sagan.
In a 2011 response to the more modern papers on the hypothesis, Russell Seitz published a comment in "Nature" challenging Alan Robock's claim that there has been no real scientific debate about the "nuclear winter" concept. In 1986, Seitz also contended that many others were reluctant to speak out for fear of being stigmatized as "closet Dr. Strangeloves"; physicist Freeman Dyson of Princeton, for example, stated "It's an absolutely atrocious piece of science, but I quite despair of setting the public record straight." According to the Rocky Mountain News, Stephen Schneider had been called a fascist by some disarmament supporters for having written his 1986 article "Nuclear Winter Reappraised". MIT meteorologist Kerry Emanuel similarly wrote in a "Nature" review that the winter concept is "notorious for its lack of scientific integrity" due to the unrealistic estimates selected for the quantity of fuel likely to burn and the imprecise global circulation models used, ending by stating that evidence from other models points to substantial scavenging of the smoke by rain. Emanuel also made an "interesting point" about questioning proponents' objectivity when it came to strong emotional or political issues that they hold.
William R. Cotton, Professor of Atmospheric Science at Colorado State University, a specialist in cloud physics modeling and co-creator of the highly influential, previously mentioned RAMS atmosphere model, had in the 1980s worked on soot rain-out models and supported the predictions made by his own and other nuclear winter models. He has since reversed this position, according to a book he co-authored in 2007, stating that, amongst other systematically examined assumptions, far more rain-out/wet deposition of soot will occur than is assumed in modern papers on the subject: "We must wait for a new generation of GCMs to be implemented to examine potential consequences quantitatively", and revealing that in his view, "nuclear winter was largely politically motivated from the beginning".
During the Cuban Missile Crisis, Fidel Castro and Che Guevara called on the USSR to launch a nuclear first strike against the US in the event of a US invasion of Cuba. In the 1980s, Castro was pressuring the Kremlin to adopt a harder line against the US under President Ronald Reagan, even arguing for the potential use of nuclear weapons. As a direct result, a Soviet official was dispatched to Cuba in 1985 with an entourage of "experts", who detailed the ecological effect on Cuba in the event of nuclear strikes on the United States. Soon after, the Soviet official recounts, Castro lost his prior "nuclear fever". In 2010, Alan Robock was summoned to Cuba to help Castro promote his new view that nuclear war would bring about Armageddon. Robock's 90-minute lecture was later aired on the country's nationwide state-controlled television station.
However, according to Robock, insofar as getting US government attention and affecting nuclear policy, he has failed. In 2009, together with Owen Toon, he gave a talk to the United States Congress, but nothing came of it, and the then presidential science adviser, John Holdren, did not respond to their requests in 2009 or at the time of writing in 2011.
In a 2012 "Bulletin of the Atomic Scientists" feature, Robock and Toon, who had routinely mixed their disarmament advocacy into the conclusions of their "nuclear winter" papers, argue in the political realm that the hypothetical effects of nuclear winter necessitates that the doctrine they assume is active in Russia and US, "mutually assured destruction" (MAD) should instead be replaced with their own "self-assured destruction" (SAD) concept, because, regardless of whose cities burned, the effects of the resultant nuclear winter that they advocate, would be, in their view, catastrophic. In a similar vein, in 1989 Carl Sagan and Richard Turco wrote a policy implications paper that appeared in "AMBIO" that suggested that as nuclear winter is a "well-established prospect", both superpowers should jointly reduce their nuclear arsenals to "Canonical Deterrent Force" levels of 100–300 individual warheads each, such that in "the event of nuclear war [this] would minimize the likelihood of [extreme] nuclear winter."
An originally classified 1984 US interagency intelligence assessment states that in both the 1970s and 80s, the Soviet and US militaries were already following the "existing trends" in warhead miniaturization toward higher accuracy and lower yield. This is seen in the most numerous physics packages in the US arsenal: in the 1960s these were the B28 and W31, but both quickly became less prominent with the 1970s mass production runs of the 50 kt W68 and the 100 kt W76, and in the 1980s, with the B61. This trend towards miniaturization, enabled by advances in inertial guidance and accurate GPS navigation, was motivated by a multitude of factors: the desire to leverage the physics of equivalent megatonnage that miniaturization offered; the freeing up of space to fit more MIRV warheads and decoys on each missile; and the desire to still destroy hardened targets while reducing the severity of fallout collateral damage depositing on neighboring, and potentially friendly, countries. As it relates to the likelihood of nuclear winter, the range of potential thermal-radiation-ignited fires was already reduced by miniaturization. For example, the most popular nuclear winter paper, the 1983 TTAPS paper, described a 3000 Mt counterforce attack on ICBM sites with each individual warhead carrying approximately one Mt of energy; not long after publication, however, Michael Altfeld of Michigan State University and political scientist Stephen Cimbala of Pennsylvania State University argued that the then already developed and deployed smaller, more accurate warheads (e.g. the W76), together with lower detonation heights, could produce the same counterforce strike with a total of only 3 Mt of energy being expended. They continue that, "if" the nuclear winter models prove to be representative of reality, far less climatic cooling would occur even if firestorm-prone areas existed in the target list, as lower fuzing heights such as surface bursts would limit the range of the burning thermal rays due to terrain masking and shadows cast by buildings, while also temporarily lofting far more localized fallout than airburst fuzing – the standard mode of employment against un-hardened targets. This logic is similarly reflected in the originally classified 1984 "Interagency Intelligence Assessment", which suggests that targeting planners would simply have to consider target combustibility along with yield, height of burst, timing and other factors to reduce the amount of smoke and safeguard against the potentiality of a nuclear winter. Attempting to limit the target fire hazard by fuzing for surface and sub-surface bursts, however, produces the far more concentrated, and therefore deadlier, "local" fallout that follows a surface burst, as opposed to the comparatively dilute "global" fallout created when nuclear weapons are fuzed in airburst mode.
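The "equivalent megatonnage" logic above can be made concrete with a standard rule-of-thumb scaling: blast-damaged area grows roughly as yield to the two-thirds power, so many small warheads cover more area than one large warhead of the same total yield. The sketch below is illustrative only; the 2/3 exponent is the conventional approximation, and the warhead mixes are hypothetical examples, not figures from the assessments discussed.

```python
# Illustrative sketch: equivalent megatonnage (EMT), a rule of thumb
# in which blast-damaged area scales as yield^(2/3) relative to 1 Mt.
# The warhead mixes below are hypothetical examples.

def equivalent_megatonnage(warheads):
    """Sum of Y^(2/3) over all warhead yields Y, given in megatons."""
    return sum(y ** (2 / 3) for y in warheads)

# One 1 Mt warhead versus eight 125 kt warheads (same 1 Mt total yield).
single = equivalent_megatonnage([1.0])          # 1.00 EMT
mirved = equivalent_megatonnage([0.125] * 8)    # 8 * 0.25 = 2.00 EMT

print(f"1 x 1 Mt   : {single:.2f} EMT")
print(f"8 x 125 kt : {mirved:.2f} EMT")
# The MIRVed load yields twice the blast-area equivalent for the same
# total energy, which is why miniaturization let planners expend far
# fewer total megatons against the same target set.
```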
Altfeld and Cimbala also argued that belief in the possibility of nuclear winter would actually make nuclear war more likely, contrary to the views of Sagan and others, because it would serve as yet further motivation to follow the "existing trends" towards the development of more accurate, and even lower explosive yield, nuclear weapons. The winter hypothesis implies that replacing the multi-megaton strategic nuclear weapons of the Cold War with weapons of explosive yields closer to those of tactical nuclear weapons, such as the Robust Nuclear Earth Penetrator (RNEP), would safeguard against the nuclear winter potential; the capabilities of the then largely conceptual RNEP were specifically cited by the influential nuclear warfare analyst Albert Wohlstetter. Tactical nuclear weapons at the low end of the scale have yields that overlap with large conventional weapons, and are therefore often viewed "as blurring the distinction between conventional and nuclear weapons", making the prospect of using them "easier" in a conflict.
In an interview in 2000 with Mikhail Gorbachev (the leader of the Soviet Union from 1985 to 1991), the following statement was posed to him: "In the 1980s, you warned about the unprecedented dangers of nuclear weapons and took very daring steps to reverse the arms race". Gorbachev replied: "Models made by Russian and American scientists showed that a nuclear war would result in a nuclear winter that would be extremely destructive to all life on Earth; the knowledge of that was a great stimulus to us, to people of honor and morality, to act in that situation."
However, a 1984 US Interagency Intelligence Assessment expresses a far more skeptical and cautious approach, stating that the hypothesis is not scientifically convincing. The report predicted that Soviet nuclear policy would be to maintain their strategic nuclear posture, such as the fielding of the high throw-weight SS-18 missile, and that they would merely attempt to exploit the hypothesis for propaganda purposes, such as directing scrutiny onto the US portion of the nuclear arms race. It goes on to express the belief that if Soviet officials did begin to take nuclear winter seriously, they would probably demand exceptionally high standards of scientific proof for the hypothesis, as its implications would undermine their military doctrine – a level of scientific proof which perhaps could not be met without field experimentation. The un-redacted portion of the document ends with the suggestion that substantial increases in Soviet civil defense food stockpiles might be an early indicator that nuclear winter was beginning to influence Soviet upper-echelon thinking.
In 1985 "Time" magazine noted "the suspicions of some Western scientists that the nuclear winter hypothesis was promoted by Moscow to give anti-nuclear groups in the U.S. and Europe some fresh ammunition against America's arms buildup."
In 1985, the United States Senate met to discuss the science and politics of nuclear winter. During the congressional hearing, the influential analyst Leon Gouré presented evidence that the Soviets had perhaps simply echoed Western reports rather than producing unique findings. Gouré hypothesized that Soviet research and discussions of nuclear war might serve only Soviet political agendas rather than reflect the actual opinions of Soviet leadership.
In 1986, the Defense Nuclear Agency document "An update of Soviet research on and exploitation of Nuclear winter 1984–1986" charted the minimal [public domain] research contribution on, and Soviet propaganda usage of, the nuclear winter phenomenon.
There is some doubt as to when the Soviet Union began modelling fires and the atmospheric effects of nuclear war. Former Soviet intelligence officer Sergei Tretyakov claimed that, under the direction of Yuri Andropov, the KGB invented the concept of "nuclear winter" in order to stop the deployment of NATO Pershing II missiles. They are said to have distributed disinformation to peace groups, the environmental movement and the journal "Ambio", based on a faked "doomsday report" by Soviet Academy of Sciences members Georgii Golitsyn, Nikita Moiseyev and Vladimir Alexandrov concerning the climatic effects of nuclear war. Although it is accepted that the Soviet Union exploited the nuclear winter hypothesis for propaganda purposes, Tretyakov's claim that the KGB funnelled disinformation to "AMBIO", the journal in which Paul Crutzen and John Birks published the 1982 paper "Twilight at Noon", has not been corroborated. In a 2009 interview conducted by the National Security Archive, Vitalii Nikolaevich Tsygichko, a Senior Analyst at the Soviet Academy of Sciences and a military mathematical modeler, stated that Soviet military analysts were discussing the idea of "nuclear winter" years before US scientists, although they did not use that exact term.
A number of solutions have been proposed to mitigate the potential harm of a nuclear winter if one appears inevitable. The problem has been attacked at both ends: some focus on preventing the growth of fires, and therefore limiting the amount of smoke that reaches the stratosphere in the first place, while others focus on food production with reduced sunlight, under the assumption that the very worst-case results of the nuclear winter models prove accurate and no other mitigation strategies are fielded.
A 1967 report examined techniques including various methods of applying liquid nitrogen, dry ice, and water to nuclear-caused fires. The report considered attempting to stop the spread of fires by creating firebreaks, blasting combustible material out of an area, possibly even with nuclear weapons, along with the use of preventative hazard reduction burns. According to the report, one of the most promising techniques investigated was the initiation of rain by seeding the mass-fire thunderheads and other clouds passing over the developing, and then stable, firestorm.
In "Feeding Everyone No Matter What", under the worst-case scenario predictions of nuclear winter, the authors present various unconventional food possibilities, including: natural-gas-digesting bacteria, the best known being "Methylococcus capsulatus", presently used as a feed in fish farming; bark bread, a long-standing famine food utilizing the edible inner bark of trees that was part of Scandinavian history during the Little Ice Age; increased fungiculture of mushrooms such as the honey fungi, which grow directly on moist wood without sunlight; and variations of wood or cellulosic biofuel production, which typically already creates edible sugars/xylitol from inedible cellulose as an intermediate product before the final step of alcohol generation. One author, mechanical engineer David Denkenberger, states that mushrooms could theoretically feed everyone for three years. Seaweed, like mushrooms, can also grow in low-light conditions. Dandelions and tree needles could provide vitamin C, and bacteria could provide vitamin E. More conventional cold-weather crops such as potatoes might get sufficient sunlight at the equator to remain feasible.
The minimum annual global wheat storage is approximately two months. To feed everyone despite a nuclear winter, years of food storage prior to the event have been proposed. While the suggested masses of preserved food would likely never be used, as a nuclear winter is comparatively unlikely to occur, stockpiling food would have the positive result of ameliorating the far more frequent disruptions to regional food supplies caused by lower-level conflicts and droughts. There is, however, the danger that a sudden rush to food stockpiling, without the buffering effect offered by victory gardens and the like, might exacerbate current food security problems by elevating present food prices.
Despite the name "nuclear winter", nuclear events are not necessary to produce the modeled climatic effect. In the search for a quick and cheap means of solar radiation management (a form of climate engineering) to counter the global warming projection of at least 2 °C of surface warming from a doubling of atmospheric CO2 levels, the underlying nuclear winter effect has been examined as perhaps holding potential. Besides the more common suggestion of injecting sulfur compounds into the stratosphere to approximate the effects of a volcanic winter, the injection of other chemical species, such as a particular type of soot particle, to create minor "nuclear winter" conditions has been proposed by Paul Crutzen and others. According to the threshold "nuclear winter" computer models, if one to five teragrams of firestorm-generated soot were injected into the low stratosphere, it would, through the anti-greenhouse effect, heat the stratosphere but cool the lower troposphere, producing 1.25 °C of cooling for two to three years; after 10 years, average global temperatures would still be 0.5 °C lower than before the soot injection.
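As an illustration of the persistence implied by those model numbers, one can fit a simple exponential relaxation to the two quoted data points (1.25 °C initial cooling, 0.5 °C remaining after 10 years). This is only a back-of-the-envelope sketch; the actual models do not assume a single exponential, and the fitted time constant is an artifact of that simplification.

```python
import math

# Back-of-the-envelope fit of an exponential relaxation
#   dT(t) = dT0 * exp(-t / tau)
# to the two cooling figures quoted from the threshold models.
# The exponential form itself is an assumption made here for
# illustration, not the models' actual dynamics.

dT0 = 1.25        # initial global cooling, degrees C
dT_10yr = 0.5     # cooling remaining after 10 years, degrees C

tau = 10 / math.log(dT0 / dT_10yr)   # e-folding time, years
print(f"implied e-folding time: {tau:.1f} years")   # ~10.9 years

# Cooling trajectory under this (assumed) single-exponential decay:
for t in range(0, 16, 5):
    print(f"year {t:2d}: {dT0 * math.exp(-t / tau):.2f} deg C cooling")
```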
Similar climatic effects to "nuclear winter" followed historical supervolcano eruptions, which plumed sulfate aerosols high into the stratosphere, with this being known as a volcanic winter. The effects of smoke in the atmosphere (short wave absorption) are sometimes termed an 'antigreenhouse' effect, and a strong analog is the hazy atmosphere of Titan. Pollack, Toon and others were involved in developing models of Titan's climate in the late 1980s, at the same time as their early nuclear winter studies.
Similarly, extinction-level comet and asteroid impacts are also believed to have generated impact winters by the pulverization of massive amounts of fine rock dust. This pulverized rock can also produce "volcanic winter" effects, if sulfate-bearing rock is hit in the impact and lofted high into the air, and "nuclear winter" effects, with the heat of the heavier rock ejecta igniting regional and possibly even global forest firestorms.
This global "impact firestorms" hypothesis, initially supported by Wolbach, H. Jay Melosh and Owen Toon, suggests that as a result of massive impact events, the small sand-grain-sized ejecta fragments created can meteorically re-enter the atmosphere forming a hot blanket of global debris high in the air, potentially turning the entire sky red-hot for minutes to hours, and with that, burning the complete global inventory of above-ground carbonaceous material, including rain forests. This hypothesis is suggested as a means to explain the severity of the Cretaceous–Paleogene extinction event, as the earth impact of an asteroid about 10 km wide which precipitated the extinction is not regarded as sufficiently energetic to have caused the level of extinction from the initial impact's energy release alone.
The global firestorm winter, however, has been questioned in more recent years (2003–2013) by Claire Belcher, Tamara Goldin and Melosh, who had initially supported the hypothesis, with this re-evaluation being dubbed the "Cretaceous-Palaeogene firestorm debate" by Belcher. The issues raised in the debate are: the perceived low quantity of soot in the sediment beside the fine-grained iridium-rich asteroid dust layer; whether the quantity of re-entering ejecta was truly global in blanketing the atmosphere; if so, the duration and profile of the re-entry heating, that is, whether it was a brief high thermal pulse or the more prolonged and therefore more incendiary "oven" heating; and finally, how much the "self-shielding effect" from the first wave of now-cooled meteors in dark flight diminished the total heat experienced on the ground from later waves of meteors.
In part because the Cretaceous was a high-atmospheric-oxygen era, with concentrations above those of the present day, Owen Toon et al. in 2013 were critical of the re-evaluations the hypothesis is undergoing.
It is difficult to successfully ascertain the percentage contribution to the soot in this period's geological sediment record from living plants versus the fossil fuels present at the time, in much the same manner that the fraction of the material ignited directly by the meteor impact is difficult to determine.
Temple of Olympian Zeus, Athens
The Temple of Olympian Zeus, also known as the Olympieion or Columns of the Olympian Zeus, is a former colossal temple at the center of the Greek capital Athens. It was dedicated to "Olympian" Zeus, a name originating from his position as head of the Olympian gods. Construction began in the 6th century BC during the rule of the Athenian tyrants, who envisaged building the greatest temple in the ancient world, but it was not completed until the reign of the Roman Emperor Hadrian in the 2nd century AD, some 638 years after the project had begun. During the Roman period the temple, which included 104 colossal columns, was renowned as the largest temple in Greece and housed one of the largest cult statues in the ancient world.
The temple's glory was short-lived, as it fell into disuse after being pillaged during a barbarian invasion in 267 AD, just about a century after its completion. It was probably never repaired and was reduced to ruins thereafter. In the centuries after the fall of the Roman Empire, it was extensively quarried for building materials to supply building projects elsewhere in the city. Despite that, a substantial part of the temple remains today, notably sixteen of the original gigantic columns, and it continues to be part of an important archaeological site of Greece.
The temple is located approximately south-east of the Acropolis, and about south of the center of Athens, Syntagma Square. Its foundations were laid on the site of an ancient outdoor sanctuary dedicated to Zeus. An earlier temple had stood there, constructed by the tyrant Peisistratos around 550 BC. The building was demolished after his death, and construction of a colossal new Temple of Olympian Zeus was begun around 520 BC by his sons, Hippias and Hipparchos.
They sought to surpass two famous contemporary temples, the Heraion of Samos and the second Temple of Artemis at Ephesus. Designed by the architects Antistates, Callaeschrus, Antimachides and Phormos, the Temple of Olympian Zeus was intended to be built of local limestone in the Doric style on a colossal platform measuring by . It was to be flanked by a double colonnade of eight columns across the front and back and twenty-one on the flanks, surrounding the cella.
The work was abandoned when the tyranny was overthrown and Hippias was expelled in 510 BC. Only the platform and some elements of the columns had been completed by that point, and the temple remained in that state for 336 years. The temple was left unfinished during the years of Athenian democracy, apparently because the Greeks thought it hubris to build on such a scale. In his treatise "Politics", Aristotle cited the temple as an example of how tyrannies engaged the populace in great works for the state (like a white elephant) and left them no time, energy or means to rebel.
It was not until 174 BC that the Seleucid king Antiochus IV Epiphanes, who presented himself as the earthly embodiment of Zeus, revived the project and placed the Roman architect Decimus Cossutius in charge. The design was changed to have three rows of eight columns across the front and back of the temple and a double row of twenty on the flanks, for a total of 104 columns. The columns would stand high and in diameter. The building material was changed to the expensive but high-quality Pentelic marble and the order was changed from Doric to Corinthian, marking the first time that this order had been used on the exterior of a major temple. However, the project ground to a halt again in 164 BC with the death of Antiochus. The temple was still only half-finished by that stage.
Serious damage was inflicted on the partly built temple by Lucius Cornelius Sulla's sack of Athens in 86 BC. While looting the city, Sulla seized some of the incomplete columns and transported them back to Rome, where they were re-used in the Temple of Jupiter on the Capitoline Hill. A half-hearted attempt was made to complete the temple during Augustus' reign as the first Roman emperor, but it was not until the accession of Hadrian in the 2nd century AD that the project was finally completed around 638 years after it had begun.
In 124–125 AD, when the strongly Philhellene Hadrian visited Athens, a massive building programme was begun that included the completion of the Temple of Olympian Zeus. A walled marble-paved precinct was constructed around the temple, making it a central focus of the ancient city. Cossutius' design was used with few changes, and the temple was formally dedicated by Hadrian in 132; he took the title of "Panhellenios" in commemoration of the occasion. The temple and the surrounding precinct were adorned with numerous statues depicting Hadrian, the gods, and personifications of the Roman provinces. A colossal statue of Hadrian was raised behind the building by the people of Athens in honor of the emperor's generosity. An equally colossal chryselephantine statue of Zeus occupied the cella of the temple. The statue's form of construction was unusual, as the use of chryselephantine was by this time regarded as archaic. It has been suggested that Hadrian was deliberately imitating Phidias' famous statue of Athena Parthenos in the Parthenon, seeking to draw attention to the temple and himself by doing so.
Pausanias describes the temple as it was in the 2nd century:
The Temple of Olympian Zeus was badly damaged during the sack of Athens by the Heruli in 267 AD. It is unlikely to have been repaired, given the extent of the damage to the rest of the city. Assuming that it was not abandoned, it would certainly have been closed down in 425 by the Christian Emperor Theodosius II, when he prohibited the worship of the old Roman and Greek gods during the persecution of pagans in the late Roman Empire. Material from the (presumably now ruined) building was incorporated into a basilica constructed nearby during the 5th or 6th century.
Over the following centuries, the temple was systematically quarried to provide building materials and material for the houses and churches of medieval Athens. By the end of the Byzantine period, it had been almost totally destroyed; when Ciriaco de' Pizzicolli (Cyriacus of Ancona) visited Athens in 1436 he found only 21 of the original 104 columns still standing.
The fate of one of the columns is recorded by a Greek inscription on one of the surviving columns, which states that "on 27 April 1759 he pulled down the column". This refers to the Turkish governor of Athens, Mustapha Agha Tzistarakis, who is recorded by a chronicler as having "destroyed one of Hadrian's columns with gunpowder" in order to re-use the marble to make plaster for the Tzistarakis Mosque that he was building in the Monastiraki district of the city. During the Ottoman period the temple was known to the Greeks as the Palace of Hadrian, while the Turks called it the Palace of Belkis, from a Turkish legend that the temple had been the residence of Solomon's wife.
Fifteen columns remain standing today and a sixteenth column lies on the ground where it fell during a storm in 1852. Nothing remains of the cella or the great statue that it once housed.
The temple was excavated in 1889–1896 by Francis Penrose of the British School at Athens (who also played a leading role in the restoration of the Parthenon), in 1922 by the German archaeologist Gabriel Welter and in the 1960s by Greek archaeologists led by Ioannes Travlos. The temple, along with the surrounding ruins of other ancient structures, is a historical precinct administered by the Ephorate of Antiquities of the Greek Interior Ministry.
Today, the temple is an open-air museum, part of the unification of the archaeological sites of Athens. As a historical site it is protected and supervised by the Ephorate of Antiquities.
On 21 January 2007, a group of Greek pagans held a ceremony honoring Zeus on the grounds of the temple. The event was organized by Ellinais, an organization which won a court battle to obtain recognition for Ancient Greek religious practices in the fall of 2006.
On June 28, 2001, Vangelis staged the "Mythodea" concert at the Temple of Olympian Zeus in the context of NASA's Mars mission. Sopranos Jessye Norman and Kathleen Battle participated, joined by the London Metropolitan Orchestra and the Greek National Opera, as well as over a hundred people dressed in ancient Greek clothing. The concert was covered by 20 television networks from America, Australia, Canada, Japan and European countries, under the direction of Irish filmmaker Declan Looney. The performance drew thousands of people inside the venue and outside the temple, into the otherwise empty streets of Athens. A screen mounted at the Olympieion combined visual images of ancient Greek art - vases, frescoes and statues - with images of the planet Mars.
Organic electronics
Organic electronics is a field of materials science concerning the design, synthesis, characterization, and application of organic molecules or polymers that show desirable electronic properties such as conductivity. Unlike conventional inorganic conductors and semiconductors, organic electronic materials are constructed from organic (carbon-based) molecules or polymers using synthetic strategies developed in the context of organic chemistry and polymer chemistry.
One of the promised benefits of organic electronics is their potential low cost compared to traditional electronics. Attractive properties of polymeric conductors include their electrical conductivity (which can be varied by the concentrations of dopants) and comparatively high mechanical flexibility. Some have high thermal stability.
One class of materials of interest in organic electronics is electrical conductors, i.e. substances that can transmit electrical charge with low resistivity. Traditionally, conductive materials are inorganic. Classical (and still technologically dominant) conductive materials are metals such as copper and aluminum, as well as many alloys.
The earliest reported organic conductive material, polyaniline, was described by Henry Letheby in 1862. Work on other polymeric organic materials began in earnest in the 1960s. A high conductivity of 1 S/cm (S = siemens) was reported in 1963 for a derivative of tetraiodopyrrole. In 1977, it was discovered that polyacetylene can be oxidized with halogens to produce conducting materials from either insulating or semiconducting materials. The 2000 Nobel Prize in Chemistry was awarded to Alan J. Heeger, Alan G. MacDiarmid, and Hideki Shirakawa jointly for their work on conductive polymers. These and many other workers identified large families of electrically conducting polymers, including polythiophene, polyphenylene sulfide, and others.
In the 1950s, a second class of electric conductors was discovered, based on charge-transfer salts. Early examples were derivatives of polycyclic aromatic compounds. For example, pyrene was shown to form semiconducting charge-transfer complex salts with halogens. In 1972, researchers found metallic conductivity (conductivity comparable to that of a metal) in the charge-transfer complex TTF-TCNQ.
Conductive plastics have undergone development for applications in industry. In 1987, the first organic diode was produced at Eastman Kodak by Ching W. Tang and Steven Van Slyke.
The initial characterization of the basic properties of polymer light emitting diodes, demonstrating that the light emission phenomenon was injection electroluminescence and that the frequency response was sufficiently fast to permit video display applications, was reported by Bradley, Burroughes, Friend, et al. in a 1990 Nature paper. Moving from molecular to macromolecular materials solved the problems previously encountered with the long-term stability of the organic films and enabled high-quality films to be easily made. Subsequent research developed multilayer polymers and the new field of plastic electronics and organic light-emitting diodes (OLED) research and device production grew rapidly.
Organic conductive materials can be grouped into two main classes: conductive polymers and conductive molecular solids and salts.
Semiconducting small molecules include polycyclic aromatic compounds such as pentacene and rubrene.
Conductive polymers are typically intrinsically conductive or at least semiconducting. They sometimes show mechanical properties comparable to those of conventional organic polymers. Both organic synthesis and advanced dispersion techniques can be used to tune the electrical properties of conductive polymers, unlike typical inorganic conductors. The best-studied class of conductive polymers includes polyacetylene, polypyrrole, polyaniline, and their copolymers. Poly(p-phenylene vinylene) and its derivatives are used as electroluminescent semiconducting polymers. Poly(3-alkylthiophenes) are also typical materials for use in solar cells and transistors.
An OLED (organic light-emitting diode) consists of a thin film of organic material that emits light under stimulation by an electric current. A typical OLED consists of an anode, a cathode, OLED organic material and a conductive layer.
André Bernanose was the first person to observe electroluminescence in organic materials, and Ching W. Tang reported fabrication of an OLED device in 1987. The OLED device incorporated a double-layer structure motif consisting of separate hole-transporting and electron-transporting layers, with light emission taking place between the two layers. Their discovery opened a new era of OLED research and device design.
OLED organic materials can be divided into two major families: small-molecule-based and polymer-based. Small-molecule OLEDs (SM-OLEDs) include organometallic chelates (Alq3), fluorescent and phosphorescent dyes, and conjugated dendrimers. Fluorescent dyes can be selected according to the desired range of emission wavelengths; compounds like perylene and rubrene are often used. Recently, Kim J. et al. at the University of Michigan reported a pure organic light-emitting crystal, Br6A; by modifying its halogen bonding, they succeeded in tuning the phosphorescence to different wavelengths, including green, blue and red. By modifying the structure of Br6A, scientists are attempting to achieve a next-generation organic light-emitting diode. Devices based on small molecules are usually fabricated by thermal evaporation under vacuum. While this method enables the formation of well-controlled, homogeneous films, it is hampered by high cost and limited scalability.
Polymer light-emitting diodes (PLEDs), similar to SM-OLEDs, emit light under an applied electric current. Polymer-based OLEDs are generally more efficient than SM-OLEDs, requiring a comparatively lower amount of energy to produce the same luminescence. Common polymers used in PLEDs include derivatives of poly(p-phenylene vinylene) and polyfluorene. The emitted color can be tuned by substituting different side chains onto the polymer backbone or modifying the stability of the polymer. In contrast to SM-OLEDs, polymer-based OLEDs cannot be fabricated through vacuum evaporation and must instead be processed using solution-based techniques. Compared to thermal evaporation, solution-based methods are better suited to creating films with large dimensions. Zhenan Bao et al. at Stanford University reported a novel way to construct large-area organic semiconductor thin films using aligned single-crystalline domains.
An organic field-effect transistor (OFET) is a field-effect transistor utilizing organic molecules or polymers as the active semiconducting layer. A field-effect transistor (FET) is any semiconductor device that utilizes an electric field to control the shape of a channel of one type of charge carrier, thereby changing its conductivity. The two major classes of FET are n-type and p-type, classified according to the type of charge carried. In the case of OFETs, p-type compounds are generally more stable than n-type due to the susceptibility of the latter to oxidative damage.
J.E. Lilienfeld first proposed the field-effect transistor in 1930, but the first OFET was not reported until 1987, when Koezuka et al. constructed one using polythiophene, which shows extremely high conductivity. Other conductive polymers have been shown to act as semiconductors, and newly synthesized and characterized compounds are reported weekly in prominent research journals. Many review articles exist documenting the development of these materials.
Like OLEDs, OFETs can be classified into small-molecule and polymer-based systems. Charge transport in OFETs can be quantified using a measure called carrier mobility; currently, rubrene-based OFETs show the highest carrier mobility, of 20–40 cm²/(V·s). Another popular OFET material is pentacene. Due to its low solubility in most organic solvents, it is difficult to fabricate thin-film transistors (TFTs) from pentacene itself using conventional spin-casting or dip-coating methods, but this obstacle can be overcome by using the derivative TIPS-pentacene. Current research focuses more on the thin-film transistor (TFT) model, which eliminates the use of conductive materials. Very recently, two studies conducted by Bao Z. et al. and Kim J. et al. demonstrated control over the formation of designed thin-film transistors. By controlling the formation of crystalline TFTs, it is possible to create an aligned (as opposed to randomly ordered) charge-transport pathway, resulting in enhanced charge mobility.
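To give a feel for what a mobility of 20–40 cm²/(V·s) means in practice, the short sketch below converts mobility into a carrier drift velocity and a source-to-drain transit time. The channel length and applied field are hypothetical round numbers chosen for illustration, not values from any specific reported device.

```python
# Illustrative sketch: what a carrier mobility number implies.
# Drift velocity: v = mu * E; transit time across a channel: t = L / v.
# The field strength and channel length below are assumed example
# values, not measurements from a specific OFET.

mu = 20.0          # carrier mobility, cm^2/(V*s) (low end of rubrene OFETs)
E = 1.0e4          # applied electric field, V/cm (assumed)
L = 10e-4          # channel length: 10 micrometres expressed in cm (assumed)

v = mu * E         # drift velocity in cm/s
t = L / v          # source-to-drain transit time in seconds

print(f"drift velocity : {v:.2e} cm/s")    # 2.00e+05 cm/s
print(f"transit time   : {t:.2e} s")       # 5.00e-09 s (~5 ns)
```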
Organic solar cells could cut the cost of solar power by making use of inexpensive organic polymers rather than the expensive crystalline silicon used in most solar cells. What's more, the polymers can be processed using low-cost equipment such as ink-jet printers or coating equipment employed to make photographic film, which reduces both capital and operating costs compared with conventional solar-cell manufacturing.
Silicon thin-film solar cells on flexible substrates allow a significant cost reduction of large-area photovoltaics for several reasons:
Inexpensive polymeric substrates like polyethylene terephthalate (PET) or polycarbonate (PC) have the potential for further cost reduction in photovoltaics. Protomorphous solar cells prove to be a promising concept for efficient and low-cost photovoltaics on cheap and flexible substrates for large-area production as well as small and mobile applications.
One advantage of printed electronics is that different electrical and electronic components can be printed on top of each other, saving space and increasing reliability; sometimes they are all transparent. One ink must not damage another, and low-temperature annealing is vital if low-cost flexible materials such as paper and plastic film are to be used. There is much sophisticated engineering and chemistry involved here, with iTi, Pixdro, Asahi Kasei, Merck, BASF, HC Starck, Hitachi Chemical and Frontier Carbon Corporation among the leaders.
Electronic devices based on organic compounds are now widely used, with many new products under development. Sony reported the first full-color, video-rate, flexible, plastic display made purely of organic materials; television screens based on OLED materials, biodegradable electronics based on organic compounds, and low-cost organic solar cells are also available.
There are important differences between the processing of small-molecule organic semiconductors and semiconducting polymers. Small-molecule semiconductors are quite often insoluble and typically require deposition via vacuum sublimation, while soluble conjugated polymers are usually processed from solution as thin films. Devices based on conductive polymers can be prepared by solution-processing methods. Both solution processing and vacuum-based methods produce amorphous and polycrystalline films with a variable degree of disorder. "Wet" coating techniques require polymers to be dissolved in a volatile solvent, filtered and deposited onto a substrate. Common examples of solvent-based coating techniques include drop casting, spin coating, doctor-blading, inkjet printing and screen printing. Spin coating is a widely used technique for small-area thin-film production, though it may result in a high degree of material loss. The doctor-blade technique results in minimal material loss and was primarily developed for large-area thin-film production. Vacuum-based thermal deposition of small molecules requires evaporation of molecules from a hot source; the molecules are then transported through vacuum onto a substrate, and condensation of these molecules on the substrate surface results in thin-film formation. Wet coating techniques can in some cases be applied to small molecules, depending on their solubility.
Compared to conventional inorganic solar cells, organic solar cells have the advantage of lower fabrication cost. An organic solar cell is a device that uses organic electronics to convert light into electricity. Organic solar cells utilize organic photovoltaic materials: organic semiconductor diodes that convert light into electricity. Electrons in these organic molecules can occupy a delocalized π orbital with a corresponding π* antibonding orbital. The difference in energy between the π orbital, or highest occupied molecular orbital (HOMO), and the π* orbital, or lowest unoccupied molecular orbital (LUMO), is called the band gap of organic photovoltaic materials. Typically, the band gap lies in the range of 1–4 eV.
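Since the band gap sets the lowest photon energy a material can absorb, the 1–4 eV range can be translated into absorption-onset wavelengths with the standard relation λ = hc/E (approximately 1240 eV·nm divided by the gap). The sketch below applies this textbook conversion; the specific gap values chosen are illustrative, not data for particular polymers.

```python
# Illustrative sketch: converting a HOMO-LUMO band gap (eV) into the
# longest wavelength of light the material can absorb, via E = hc/lambda.
# The gap values below are examples spanning the quoted 1-4 eV range.

HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def onset_wavelength_nm(band_gap_ev):
    """Absorption-onset wavelength for a given band gap."""
    return HC_EV_NM / band_gap_ev

for gap in (1.0, 2.0, 3.0, 4.0):
    print(f"band gap {gap:.1f} eV -> onset at {onset_wavelength_nm(gap):.0f} nm")

# 1 eV -> ~1240 nm (near-infrared); 4 eV -> ~310 nm (near-ultraviolet).
# A gap of 1-4 eV therefore brackets the visible spectrum, which is why
# such materials are candidates for harvesting sunlight.
```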
Differences in the band gap of organic photovoltaic materials lead to different chemical structures and forms of organic solar cells. Forms of solar cells include single-layer organic photovoltaic cells, bilayer organic photovoltaic cells and heterojunction photovoltaic cells. All three types, however, share the approach of sandwiching the organic electronic layer between two metallic conductors, typically including indium tin oxide.
An organic field-effect transistor device consists of three major components: the source, the drain and the gate. Generally, a field-effect transistor has two plates, the source and the drain, connected by a conducting channel, and the gate controls that channel. Electrons move from source to drain, and the gate serves to control this movement. Different types of FETs are designed based on carrier properties. The thin-film transistor (TFT), among them, is an easy one to fabricate. In a thin-film transistor, the source and drain are made by directly depositing a thin layer of semiconductor, followed by a thin film of insulator between the semiconductor and the metal gate contact. Such a thin film is made either by thermal evaporation or simply by spin coating. In a TFT device, there is initially no carrier movement between the source and drain. After a positive gate voltage is applied, the accumulation of electrons at the interface causes bending of the semiconductor bands and ultimately lowers the conduction band with respect to the Fermi level of the semiconductor. Finally, a highly conductive channel is formed at the interface.
Conductive polymers are lighter, more flexible, and less expensive than inorganic conductors. This makes them a desirable alternative in many applications. It also creates the possibility of new applications that would be impossible using copper or silicon.
Organic electronics not only includes organic semiconductors, but also organic dielectrics, conductors and light emitters.
New applications include smart windows and electronic paper. Conductive polymers are expected to play an important role in the emerging science of molecular computers.
Operating system
An operating system (OS) is system software that manages computer hardware, software resources, and provides common services for computer programs.
Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources.
For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is usually executed directly by the hardware and frequently makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers.
The dominant desktop operating system is Microsoft Windows with a market share of around 82.74%. macOS by Apple Inc. is in second place (13.23%), and the varieties of Linux are collectively in third place (1.57%). In the mobile sector (including smartphones and tablets), Android's share was up to 70% in the year 2017. According to third-quarter 2016 data, Android's share on smartphones was dominant at 87.5 percent, with a growth rate of 10.3 percent per year, followed by Apple's iOS at 12.1 percent, with a year-on-year decrease in market share of 5.2 percent, while other operating systems amounted to just 0.3 percent. Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications.
A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently. This is achieved by time-sharing, where the available processor time is divided between multiple processes. These processes are each interrupted repeatedly in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized as preemptive or co-operative. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux—as well as non-Unix-like, such as AmigaOS—support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking; 32-bit versions of both Windows NT and Win9x used preemptive multi-tasking.
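The difference between the two multitasking styles can be illustrated with a toy scheduler. In the sketch below, Python generators stand in for processes: each task runs until it voluntarily yields, exactly the cooperative model, so one task that never yields would starve the others; a preemptive kernel, by contrast, forcibly interrupts tasks on a timer. This is a teaching sketch, not how any real OS kernel is implemented.

```python
from collections import deque

# Toy cooperative multitasking: each "process" is a generator that
# voluntarily gives up the CPU at every `yield`. If a task never
# yielded, the whole system would hang - the weakness that preemptive
# (timer-interrupt-driven) multitasking was designed to remove.

def task(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield            # voluntary yield point: hand CPU back to scheduler

def cooperative_scheduler(tasks):
    ready = deque(tasks)         # round-robin ready queue
    while ready:
        current = ready.popleft()
        try:
            next(current)        # run task until its next voluntary yield
            ready.append(current)
        except StopIteration:
            pass                 # task finished; drop it from the queue

cooperative_scheduler([task("A", 3), task("B", 2), task("C", 1)])
```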
Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources to multiple users.
A distributed operating system manages a group of distinct, networked computers and makes them appear to be a single computer, as all computations are distributed (divided amongst the constituent computers).
In the distributed and cloud computing context of an OS, "templating" refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses.
Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines with less autonomy (e.g. PDAs). They are very compact and extremely efficient by design, and are able to operate with a limited amount of resources. Windows CE and Minix 3 are some examples of embedded operating systems.
A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that deterministic behavior is achieved. Such an event-driven system switches between tasks based on their priorities or external events, whereas time-sharing operating systems switch tasks based on clock interrupts.
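A minimal sketch of the event-driven, priority-based switching described above: the scheduler always dispatches the highest-priority ready task, and an arriving event can make a higher-priority task ready, displacing lower-priority work. The task set and priority values are invented for illustration; real RTOS schedulers add preemption points, deadlines and much more.

```python
import heapq

# Toy priority-driven dispatcher in the spirit of an RTOS scheduler:
# always run the highest-priority ready task. Priorities and tasks
# here are made-up examples. (heapq is a min-heap, so lower numbers
# mean higher priority.)

ready_queue = []

def make_ready(priority, name):
    """An event handler would call this when an event readies a task."""
    heapq.heappush(ready_queue, (priority, name))

make_ready(5, "log housekeeping")
make_ready(1, "sensor interrupt handler")   # most urgent
make_ready(3, "control-loop update")

while ready_queue:
    priority, name = heapq.heappop(ready_queue)
    print(f"running (priority {priority}): {name}")
# Runs the interrupt handler first, then the control loop, then
# housekeeping - deterministic ordering by priority, not time slice.
```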
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single address space, machine image that can be deployed to cloud or embedded environments.
Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their modern and more complex forms until the early 1960s. Hardware features were added that enabled use of runtime libraries, interrupts, and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them similar in concept to those used on larger computers.
In the 1940s, the earliest electronic digital systems had no operating systems. Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plugboards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the printing of payroll checks from data on punched paper cards. After programmable general-purpose computers were invented, machine languages (consisting of strings of the binary digits 0 and 1 on punched paper tape) were introduced that sped up the programming process (Stern, 1981).
In the early 1950s, a computer could execute only one program at a time. Each user had sole use of the computer for a limited period and would arrive at a scheduled time with their program and data on punched paper cards or punched tape. The program would be loaded into the machine, and the machine would be set to work until the program completed or crashed. Programs could generally be debugged via a front panel using toggle switches and panel lights. It is said that Alan Turing was a master of this on the early Manchester Mark 1 machine, and he was already deriving the primitive conception of an operating system from the principles of the universal Turing machine.
Later machines came with libraries of programs, which would be linked to a user's program to assist in operations such as input and output and compiling (generating machine code from human-readable symbolic code). This was the genesis of the modern-day operating system. However, machines still ran a single job at a time. At Cambridge University in England, the job queue was at one time a washing line (clothes line) from which tapes were hung with different colored clothes-pegs to indicate job priority.
An improvement was the Atlas Supervisor. Introduced with the Manchester Atlas in 1962, it is considered by many to be the first recognisable modern operating system. Brinch Hansen described it as "the most significant breakthrough in the history of operating systems."
Through the 1950s, many major features were pioneered in the field of operating systems on mainframe computers, including batch processing, input/output interrupting, buffering, multitasking, spooling, runtime libraries, link-loading, and programs for sorting records in files. These features were included or not included in application software at the option of application programmers, rather than in a separate operating system used by all applications. In 1959, the SHARE Operating System was released as an integrated utility for the IBM 704, and later in the 709 and 7090 mainframes, although it was quickly supplanted by IBSYS/IBJOB on the 709, 7090 and 7094.
During the 1960s, IBM's OS/360 introduced the concept of a single OS spanning an entire product line, which was crucial for the success of the System/360 machines. IBM's current mainframe operating systems are distant descendants of this original system and modern machines are backwards-compatible with applications written for OS/360.
OS/360 also pioneered the concept that the operating system keeps track of all of the system resources that are used, including program and data space allocation in main memory and file space in secondary storage, and file locking during updates. When a process is terminated for any reason, all of these resources are re-claimed by the operating system.
The alternative CP-67 system for the S/360-67 started a whole line of IBM operating systems focused on the concept of virtual machines. Other operating systems used on IBM S/360 series mainframes included systems developed by IBM: COS/360 (Compatibility Operating System), DOS/360 (Disk Operating System), TSS/360 (Time Sharing System), TOS/360 (Tape Operating System), BOS/360 (Basic Operating System), and ACP (Airline Control Program), as well as a few non-IBM systems: MTS (Michigan Terminal System), MUSIC (Multi-User System for Interactive Computing), and ORVYL (Stanford Timesharing System).
Control Data Corporation developed the SCOPE operating system in the 1960s for batch processing. In cooperation with the University of Minnesota, the Kronos and later the NOS operating systems were developed during the 1970s, which supported simultaneous batch and timesharing use. Like many commercial timesharing systems, its interface was an extension of the Dartmouth BASIC operating systems, one of the pioneering efforts in timesharing and programming languages. In the late 1970s, Control Data and the University of Illinois developed the PLATO operating system, which used plasma panel displays and long-distance time-sharing networks. PLATO was remarkably innovative for its time, featuring real-time chat and multi-user graphical games.
In 1961, Burroughs Corporation introduced the B5000 with the MCP (Master Control Program) operating system. The B5000 was a stack machine designed to exclusively support high-level languages with no machine language or assembler; indeed, the MCP was the first OS to be written exclusively in a high-level language (ESPOL, a dialect of ALGOL). MCP also introduced many other ground-breaking innovations, such as being the first commercial implementation of virtual memory. During development of the AS/400, IBM made an approach to Burroughs to license MCP to run on the AS/400 hardware. This proposal was declined by Burroughs management to protect its existing hardware production. MCP is still in use today in the Unisys company's ClearPath/MCP line of computers.
UNIVAC, the first commercial computer manufacturer, produced a series of EXEC operating systems. Like all early mainframe systems, this batch-oriented system managed magnetic drums, disks, card readers and line printers. In the 1970s, UNIVAC produced the Real-Time Basic (RTB) system to support large-scale time sharing, also patterned after the Dartmouth BASIC system.
General Electric and MIT developed General Electric Comprehensive Operating Supervisor (GECOS), which introduced the concept of ringed security privilege levels. After acquisition by Honeywell it was renamed General Comprehensive Operating System (GCOS).
Digital Equipment Corporation developed many operating systems for its various computer lines, including TOPS-10 and TOPS-20 time sharing systems for the 36-bit PDP-10 class systems. Before the widespread use of UNIX, TOPS-10 was a particularly popular system in universities, and in the early ARPANET community. RT-11 was a single-user real-time OS for the PDP-11 class minicomputer, and RSX-11 was the corresponding multi-user OS.
From the late 1960s through the late 1970s, several hardware capabilities evolved that allowed similar or ported software to run on more than one system. Early systems had utilized microprogramming to implement features, permitting different underlying computer architectures to appear to be the same as others in a series. In fact, most 360s after the 360/40 (except the 360/165 and 360/168) were microprogrammed implementations.
The enormous investment in software for these systems made since the 1960s caused most of the original computer manufacturers to continue to develop compatible operating systems along with the hardware. Notable mainframe operating systems that remain supported include Burroughs' MCP, the descendants of IBM's OS/360, and UNIVAC's EXEC line, all described above.
The first microcomputers did not have the capacity or need for the elaborate operating systems that had been developed for mainframes and minis; minimalistic operating systems were developed, often loaded from ROM and known as "monitors". One notable early disk operating system was CP/M, which was supported on many early microcomputers and was closely imitated by Microsoft's MS-DOS, which became widely popular as the operating system chosen for the IBM PC (IBM's version of it was called IBM DOS or PC DOS). In the 1980s, Apple Computer Inc. (now Apple Inc.) abandoned its popular Apple II series of microcomputers to introduce the Apple Macintosh computer with an innovative graphical user interface (GUI) to the Mac OS.
The introduction of the Intel 80386 CPU chip in October 1985, with 32-bit architecture and paging capabilities, provided personal computers with the ability to run multitasking operating systems like those of earlier minicomputers and mainframes. Microsoft responded to this progress by hiring Dave Cutler, who had developed the VMS operating system for Digital Equipment Corporation. He would lead the development of the Windows NT operating system, which continues to serve as the basis for Microsoft's operating systems line. Steve Jobs, a co-founder of Apple Inc., started NeXT Computer Inc., which developed the NEXTSTEP operating system. NeXT was later acquired by Apple Inc., and NEXTSTEP was used, along with code from FreeBSD, as the core of Mac OS X (macOS after its latest name change).
The GNU Project was started by activist and programmer Richard Stallman with the goal of creating a complete free software replacement for the proprietary UNIX operating system. While the project was highly successful in duplicating the functionality of various parts of UNIX, development of the GNU Hurd kernel proved to be unproductive. In 1991, Finnish computer science student Linus Torvalds, with cooperation from volunteers collaborating over the Internet, released the first version of the Linux kernel. It was soon merged with the GNU user space components and system software to form a complete operating system. Since then, the combination of the two major components has usually been referred to as simply "Linux" by the software industry, a naming convention that Stallman and the Free Software Foundation remain opposed to, preferring the name GNU/Linux. The Berkeley Software Distribution, known as BSD, is the UNIX derivative distributed by the University of California, Berkeley, starting in the 1970s. Freely distributed and ported to many minicomputers, it eventually also gained a following for use on PCs, mainly as FreeBSD, NetBSD and OpenBSD.
[Image: Evolution of Unix systems]
Unix was originally written in assembly language. Drawing on his experience with the MULTICS project, Ken Thompson wrote the B language, based mainly on BCPL. B was replaced by C, and Unix, rewritten in C, developed into a large, complex family of inter-related operating systems which have been influential in every modern operating system (see History).
The "Unix-like" family is a diverse group of operating systems, with several major sub-categories including System V, BSD, and Linux. The name "UNIX" is a trademark of The Open Group which licenses it for use with any operating system that has been shown to conform to their definitions. "UNIX-like" is commonly used to refer to the large set of operating systems which resemble the original UNIX.
Unix-like systems run on a wide variety of computer architectures. They are used heavily for servers in business, as well as workstations in academic and engineering environments. Free UNIX variants, such as Linux and BSD, are popular in these areas.
Four operating systems are certified by The Open Group (holder of the Unix trademark) as Unix. HP's HP-UX and IBM's AIX are both descendants of the original System V Unix and are designed to run only on their respective vendor's hardware. In contrast, Sun Microsystems's Solaris can run on multiple types of hardware, including x86 and Sparc servers, and PCs. Apple's macOS, a replacement for Apple's earlier (non-Unix) Mac OS, is a hybrid kernel-based BSD variant derived from NeXTSTEP, Mach, and FreeBSD.
Unix interoperability was sought by establishing the POSIX standard. The POSIX standard can be applied to any operating system, although it was originally created for various Unix variants.
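To make the role of POSIX concrete, the sketch below uses only POSIX I/O calls, so it should compile unchanged on any conforming system, whether a System V descendant, a BSD, or Linux. This is an illustrative example rather than text from any standard; the file name is arbitrary.

```c
/* A minimal sketch of POSIX portability: only POSIX calls (open, read,
 * write, close) are used, so the same source should build on any
 * conforming Unix variant. The file name is arbitrary. */
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    char buf[4096];
    int fd = open("example.txt", O_RDONLY);    /* POSIX, not OS-specific */
    if (fd == -1)
        return 1;
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, (size_t)n);  /* copy the file to stdout */
    close(fd);
    return 0;
}
```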
A subgroup of the Unix family is the Berkeley Software Distribution family, which includes FreeBSD, NetBSD, and OpenBSD. These operating systems are most commonly found on webservers, although they can also function as a personal computer OS. The Internet owes much of its existence to BSD, as many of the protocols now commonly used by computers to connect, send and receive data over a network were widely implemented and refined in BSD. The World Wide Web was also first demonstrated on a number of computers running an OS based on BSD called NeXTSTEP.
In 1974, University of California, Berkeley installed its first Unix system. Over time, students and staff in the computer science department there began adding new programs to make things easier, such as text editors. When Berkeley received new VAX computers in 1978 with Unix installed, the school's undergraduates modified Unix even more in order to take advantage of the computer's hardware possibilities. The Defense Advanced Research Projects Agency of the US Department of Defense took interest, and decided to fund the project. Many schools, corporations, and government organizations took notice and started to use Berkeley's version of Unix instead of the official one distributed by AT&T.
Steve Jobs, upon leaving Apple Inc. in 1985, formed NeXT Inc., a company that manufactured high-end computers running on a variation of BSD called NeXTSTEP. One of these computers was used by Tim Berners-Lee as the first webserver to create the World Wide Web.
Developers like Keith Bostic encouraged the project to replace any non-free code that originated with Bell Labs. Once this was done, however, AT&T sued. After two years of legal disputes, the BSD project spawned a number of free derivatives, such as NetBSD and FreeBSD (both in 1993), and OpenBSD (from NetBSD in 1995).
macOS (formerly "Mac OS X" and later "OS X") is a line of open core graphical operating systems developed, marketed, and sold by Apple Inc., the latest of which is pre-loaded on all currently shipping Macintosh computers. macOS is the successor to the original classic Mac OS, which had been Apple's primary operating system since 1984. Unlike its predecessor, macOS is a UNIX operating system built on technology that had been developed at NeXT through the second half of the 1980s and up until Apple purchased the company in early 1997.
The operating system was first released in 1999 as Mac OS X Server 1.0, followed in March 2001 by a client version (Mac OS X v10.0 "Cheetah"). Six more distinct "client" and "server" editions of macOS were subsequently released, until the two were merged in OS X 10.7 "Lion".
Prior to its merging with macOS, the server edition macOS Server was architecturally identical to its desktop counterpart and usually ran on Apple's line of Macintosh server hardware. macOS Server included work group management and administration software tools that provided simplified access to key network services, including a mail transfer agent, a Samba server, an LDAP server, a domain name server, and others. With Mac OS X v10.7 Lion, all server aspects of Mac OS X Server were integrated into the client version and the product re-branded as "OS X" (dropping "Mac" from the name). The server tools are now offered as an application.
The Linux kernel originated in 1991, as a project of Linus Torvalds, while a university student in Finland. He posted information about his project on a newsgroup for computer students and programmers, and received support and assistance from volunteers who succeeded in creating a complete and functional kernel.
Linux is Unix-like, but was developed without any Unix code, unlike BSD and its variants. Because of its open license model, the Linux kernel code is available for study and modification, which resulted in its use on a wide range of computing machinery from supercomputers to smartwatches. Although estimates suggest that Linux is used on only 1.82% of all "desktop" (or laptop) PCs, it has been widely adopted for use in servers and embedded systems such as cell phones. Linux has superseded Unix on many platforms and is used on most supercomputers, including the top 385 systems. Many of the same computers are also on Green500 (but in a different order), and Linux runs on the top 10. Linux is also commonly used on other small energy-efficient computers, such as smartphones and smartwatches. The Linux kernel is used in some popular distributions, such as Red Hat, Debian, Ubuntu, Linux Mint and Google's Android, Chrome OS, and Chromium OS.
Microsoft Windows is a family of proprietary operating systems designed by Microsoft Corporation and primarily targeted to Intel architecture based computers, with an estimated 88.9 percent total usage share on Web connected computers. The latest version is Windows 10.
In 2011, Windows 7 overtook Windows XP as the most common version in use.
Microsoft Windows was first released in 1985, as an operating environment running on top of MS-DOS, which was the standard operating system shipped on most Intel architecture personal computers at the time. In 1995, Windows 95 was released; it used MS-DOS only as a bootstrap. For backwards compatibility, Win9x could run real-mode MS-DOS and 16-bit Windows 3.x drivers. Windows ME, released in 2000, was the last version in the Win9x family. Later versions have all been based on the Windows NT kernel. Current client versions of Windows run on IA-32, x86-64 and 32-bit ARM microprocessors. In addition, Itanium is still supported in the older server version Windows Server 2008 R2. In the past, Windows NT supported additional architectures.
Server editions of Windows are widely used. In recent years, Microsoft has expended significant capital in an effort to promote the use of Windows as a server operating system. However, Windows' usage on servers is not as widespread as on personal computers as Windows competes against Linux and BSD for server market share.
ReactOS is a Windows-alternative operating system, which is being developed on the principles of Windows without using any of Microsoft's code.
There have been many operating systems that were significant in their day but are no longer so, such as AmigaOS; OS/2 from IBM and Microsoft; classic Mac OS, the non-Unix precursor to Apple's macOS; BeOS; XTS-300; RISC OS; MorphOS; Haiku; BareMetal and FreeMint. Some are still used in niche markets and continue to be developed as minority platforms for enthusiast communities and specialist applications. OpenVMS, formerly from DEC, is still under active development by Hewlett-Packard. Yet other operating systems are used almost exclusively in academia, for operating systems education or to do research on operating system concepts. A typical example of a system that fulfills both roles is MINIX, while for example Singularity is used purely for research. Another example is the Oberon System designed at ETH Zürich by Niklaus Wirth, Jürg Gutknecht and a group of students at the former Computer Systems Institute in the 1980s. It was used mainly for research, teaching, and daily work in Wirth's group.
Other operating systems have failed to win significant market share, but have introduced innovations that have influenced mainstream operating systems, not least Bell Labs' Plan 9.
The components of an operating system all exist in order to make the different parts of a computer work together. All user software needs to go through the operating system in order to use any of the hardware, whether it be as simple as a mouse or keyboard or as complex as a network interface.
With the aid of the firmware and device drivers, the kernel provides the most basic level of control over all of the computer's hardware devices. It manages memory access for programs in the RAM, it determines which programs get access to which hardware resources, it sets up or resets the CPU's operating states for optimal operation at all times, and it organizes the data for long-term non-volatile storage with file systems on such media as disks, tapes, flash memory, etc.
The operating system provides an interface between an application program and the computer hardware, so that an application program can interact with the hardware only by obeying rules and procedures programmed into the operating system. The operating system is also a set of services which simplify development and execution of application programs. Executing an application program involves the creation of a process by the operating system kernel which assigns memory space and other resources, establishes a priority for the process in multi-tasking systems, loads program binary code into memory, and initiates execution of the application program which then interacts with the user and with hardware devices.
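The sequence described above can be sketched with the POSIX process primitives. The example below is a minimal illustration rather than any particular kernel's implementation: fork() asks the kernel to create a process, exec*() loads a new program image into it, and waitpid() collects the result; the command being run is arbitrary.

```c
/* Sketch of process creation, assuming POSIX: the kernel allocates the
 * new process, loads the program binary, schedules it, and reclaims it
 * on exit. Error handling is abbreviated for clarity. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* kernel creates a new process */
    if (pid == 0) {                     /* child: replace its image */
        execlp("ls", "ls", "-l", (char *)NULL);
        _exit(127);                     /* reached only if exec failed */
    } else if (pid > 0) {               /* parent: wait for the child */
        int status;
        waitpid(pid, &status, 0);
        printf("child exited with status %d\n", WEXITSTATUS(status));
    }
    return 0;
}
```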
Interrupts are central to operating systems, as they provide an efficient way for the operating system to interact with and react to its environment. The alternative, polling, in which the operating system "watches" the various sources of input for events that require action, is found in older systems with very small stacks (50 or 60 bytes) but is unusual in modern systems with large stacks. Interrupt-based programming is directly supported by most modern CPUs. Interrupts provide a computer with a way of automatically saving local register contexts, and running specific code in response to events. Even very basic computers support hardware interrupts, and allow the programmer to specify code which may be run when that event takes place.
When an interrupt is received, the computer's hardware automatically suspends whatever program is currently running, saves its status, and runs computer code previously associated with the interrupt; this is analogous to placing a bookmark in a book in response to a phone call. In modern operating systems, interrupts are handled by the operating system's kernel. Interrupts may come from either the computer's hardware or the running program.
When a hardware device triggers an interrupt, the operating system's kernel decides how to deal with this event, generally by running some processing code. The amount of code being run depends on the priority of the interrupt (for example: a person usually responds to a smoke detector alarm before answering the phone). The processing of hardware interrupts is a task that is usually delegated to software called a device driver, which may be part of the operating system's kernel, part of another program, or both. Device drivers may then relay information to a running program by various means.
A program may also trigger an interrupt to the operating system. If a program wishes to access hardware, for example, it may interrupt the operating system's kernel, which causes control to be passed back to the kernel. The kernel then processes the request. If a program wishes additional resources (or wishes to shed resources) such as memory, it triggers an interrupt to get the kernel's attention.
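As a concrete illustration of a program interrupting the kernel to request resources, the sketch below asks the kernel for one page of memory via the mmap system call and returns it with munmap. It assumes a POSIX system with the widespread MAP_ANONYMOUS extension; the size is arbitrary.

```c
/* Sketch of requesting memory from the kernel: mmap() traps into
 * kernel mode, where the kernel extends this process's address space;
 * munmap() hands the resource back. MAP_ANONYMOUS is a common
 * extension to POSIX (Linux, the BSDs, macOS). */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 4096;                  /* arbitrary: one typical page */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    strcpy(p, "memory granted by the kernel");
    puts(p);
    munmap(p, len);                     /* release the resource */
    return 0;
}
```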
Modern microprocessors (CPU or MPU) support multiple modes of operation. CPUs with this capability offer at least two modes: user mode and supervisor mode. In general terms, supervisor mode operation allows unrestricted access to all machine resources, including all MPU instructions. User mode operation sets limits on instruction use and typically disallows direct access to machine resources. CPUs might have other modes similar to user mode as well, such as virtual modes used to emulate older processor types, such as 16-bit processors on a 32-bit one, or 32-bit processors on a 64-bit one.
At power-on or reset, the system begins in supervisor mode. Once an operating system kernel has been loaded and started, the boundary between user mode and supervisor mode (also known as kernel mode) can be established.
Supervisor mode is used by the kernel for low level tasks that need unrestricted access to hardware, such as controlling how memory is accessed, and communicating with devices such as disk drives and video display devices. User mode, in contrast, is used for almost everything else. Application programs, such as word processors and database managers, operate within user mode, and can only access machine resources by turning control over to the kernel, a process which causes a switch to supervisor mode. Typically, the transfer of control to the kernel is achieved by executing a software interrupt instruction, such as the Motorola 68000 TRAP instruction. The software interrupt causes the microprocessor to switch from user mode to supervisor mode and begin executing code that allows the kernel to take control.
In user mode, programs usually have access to a restricted set of microprocessor instructions, and generally cannot execute any instructions that could potentially cause disruption to the system's operation. In supervisor mode, instruction execution restrictions are typically removed, allowing the kernel unrestricted access to all machine resources.
The term "user mode resource" generally refers to one or more CPU registers, which contain information that the running program isn't allowed to alter. Attempts to alter these resources generally causes a switch to supervisor mode, where the operating system can deal with the illegal operation the program was attempting, for example, by forcibly terminating ("killing") the program).
Among other things, a multiprogramming operating system kernel must be responsible for managing all system memory which is currently in use by programs. This ensures that a program does not interfere with memory already in use by another program. Since programs time share, each program must have independent access to memory.
Cooperative memory management, used by many early operating systems, assumes that all programs make voluntary use of the kernel's memory manager, and do not exceed their allocated memory. This system of memory management is almost never seen any more, since programs often contain bugs which can cause them to exceed their allocated memory. If a program fails, it may cause memory used by one or more other programs to be affected or overwritten. Malicious programs or viruses may purposefully alter another program's memory, or may affect the operation of the operating system itself. With cooperative memory management, it takes only one misbehaved program to crash the system.
Memory protection enables the kernel to limit a process' access to the computer's memory. Various methods of memory protection exist, including memory segmentation and paging. All methods require some level of hardware support (such as the 80286 MMU), which doesn't exist in all computers.
In both segmentation and paging, certain protected mode registers specify to the CPU what memory address it should allow a running program to access. Attempts to access other addresses trigger an interrupt, which causes the CPU to re-enter supervisor mode, placing the kernel in charge. This is called a segmentation violation, or Seg-V for short. Since it is both difficult to assign a meaningful result to such an operation and usually a sign of a misbehaving program, the kernel generally resorts to terminating the offending program and reports the error.
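The effect of a segmentation violation can be observed from user space. On POSIX systems the kernel reports the fault to the offending process as the SIGSEGV signal, which terminates it by default. The deliberately faulty sketch below installs a handler purely to make the fault visible; it is a demonstration only, and a program should not attempt to continue after SIGSEGV.

```c
/* Demonstration sketch: touching an unmapped address makes the memory
 * protection hardware raise a fault; the kernel converts it into
 * SIGSEGV for the process. The handler only reports and exits. */
#include <signal.h>
#include <unistd.h>

static void on_segv(int sig) {
    (void)sig;
    /* Only async-signal-safe calls here; continuing would be unsafe. */
    write(2, "caught SIGSEGV: illegal memory access\n", 38);
    _exit(1);
}

int main(void) {
    signal(SIGSEGV, on_segv);
    volatile int *bad = (int *)1;   /* address not mapped for this process */
    *bad = 42;                      /* triggers the segmentation violation */
    return 0;
}
```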
Windows versions 3.1 through ME had some level of memory protection, but programs could easily circumvent the need to use it. A general protection fault would be produced, indicating a segmentation violation had occurred; however, the system would often crash anyway.
The use of virtual memory addressing (such as paging or segmentation) means that the kernel can choose what memory each program may use at any given time, allowing the operating system to use the same memory locations for multiple tasks.
If a program tries to access memory that isn't in its current range of accessible memory, but nonetheless has been allocated to it, the kernel is interrupted in the same way as it would if the program were to exceed its allocated memory. (See section on memory management.) Under UNIX this kind of interrupt is referred to as a page fault.
When the kernel detects a page fault it generally adjusts the virtual memory range of the program which triggered it, granting it access to the memory requested. This gives the kernel discretionary power over where a particular application's memory is stored, or even whether or not it has actually been allocated yet.
In modern operating systems, memory which is accessed less frequently can be temporarily stored on disk or other media to make that space available for use by other programs. This is called swapping, as an area of memory can be used by multiple programs, and what that memory area contains can be swapped or exchanged on demand.
"Virtual memory" provides the programmer or the user with the perception that there is a much larger amount of RAM in the computer than is really there.
Multitasking refers to the running of multiple independent computer programs on the same computer; giving the appearance that it is performing the tasks at the same time. Since most computers can do at most one or two things at one time, this is generally done via time-sharing, which means that each program uses a share of the computer's time to execute.
An operating system kernel contains a scheduling program which determines how much time each process spends executing, and in which order execution control should be passed to programs. Control is passed to a process by the kernel, which allows the program access to the CPU and memory. Later, control is returned to the kernel through some mechanism, so that another program may be allowed to use the CPU. This passing of control between the kernel and applications is called a context switch.
An early model which governed the allocation of time to programs was called cooperative multitasking. In this model, when control is passed to a program by the kernel, it may execute for as long as it wants before explicitly returning control to the kernel. This means that a malicious or malfunctioning program may not only prevent any other programs from using the CPU, but it can hang the entire system if it enters an infinite loop.
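A rough modern analogue of cooperative yielding can be written on a POSIX system, as sketched below: a compute loop periodically calls sched_yield() to offer the processor to other runnable programs. Under a preemptive kernel this call is merely a hint, whereas a purely cooperative system depends entirely on such voluntary calls.

```c
/* Sketch of cooperative yielding, assuming POSIX: the loop volunteers
 * the CPU back to the scheduler. On a cooperative system this would be
 * the only way other programs get to run; on a preemptive one,
 * sched_yield() is just a hint. */
#include <sched.h>
#include <stdio.h>

int main(void) {
    for (long i = 0; i < 1000000; i++) {
        /* ... a slice of work would go here ... */
        if (i % 1000 == 0)
            sched_yield();          /* voluntarily relinquish the CPU */
    }
    puts("done");
    return 0;
}
```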
Modern operating systems extend the concepts of application preemption to device drivers and kernel code, so that the operating system has preemptive control over internal run-times as well.
The philosophy governing preemptive multitasking is that of ensuring that all programs are given regular time on the CPU. This implies that all programs must be limited in how much time they are allowed to spend on the CPU without being interrupted. To accomplish this, modern operating system kernels make use of a timed interrupt. A protected mode timer is set by the kernel which triggers a return to supervisor mode after the specified time has elapsed. (See above sections on Interrupts and Dual Mode Operation.)
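The kernel's scheduling timer itself is not visible from user space, but a user-space analogue, assuming the POSIX interval timer API, conveys the idea: setitimer() arms a timer whose expiry delivers SIGALRM, asynchronously interrupting whatever the program was doing, much as the hardware timer interrupts a running process.

```c
/* User-space analogue of a timed interrupt, assuming POSIX setitimer:
 * the timer fires every 100 ms and delivers SIGALRM, interrupting the
 * busy loop the way the kernel's timer interrupts a running process. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>

static volatile sig_atomic_t ticks = 0;

static void on_tick(int sig) { (void)sig; ticks++; }

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_tick;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);

    struct itimerval it;
    memset(&it, 0, sizeof it);
    it.it_interval.tv_usec = 100000;    /* re-arm every 100 ms */
    it.it_value.tv_usec = 100000;       /* first expiry in 100 ms */
    setitimer(ITIMER_REAL, &it, NULL);

    while (ticks < 10)                  /* "work" that keeps being interrupted */
        ;                               /* busy-wait purely for demonstration */
    printf("interrupted %d times\n", (int)ticks);
    return 0;
}
```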
On many single user operating systems cooperative multitasking is perfectly adequate, as home computers generally run a small number of well tested programs. The AmigaOS is an exception, having preemptive multitasking from its very first version. Windows NT was the first version of Microsoft Windows which enforced preemptive multitasking, but it didn't reach the home user market until Windows XP (since Windows NT was targeted at professionals).
Access to data stored on disks is a central feature of all operating systems. Computers store data on disks using files, which are structured in specific ways in order to allow for faster access, higher reliability, and to make better use of the drive's available space. The specific way in which files are stored on a disk is called a file system, and enables files to have names and attributes. It also allows them to be stored in a hierarchy of directories or folders arranged in a directory tree.
Early operating systems generally supported a single type of disk drive and only one kind of file system. Early file systems were limited in their capacity, speed, and in the kinds of file names and directory structures they could use. These limitations often reflected limitations in the operating systems they were designed for, making it very difficult for an operating system to support more than one file system.
While many simpler operating systems support a limited range of options for accessing storage systems, operating systems like UNIX and Linux support a technology known as a virtual file system or VFS. An operating system such as UNIX supports a wide array of storage devices, regardless of their design or file systems, allowing them to be accessed through a common application programming interface (API). This makes it unnecessary for programs to have any knowledge about the device they are accessing. A VFS allows the operating system to provide programs with access to any number of devices, with a wide variety of file systems installed on them, through the use of specific device drivers and file system drivers.
A connected storage device, such as a hard drive, is accessed through a device driver. The device driver understands the specific language of the drive and is able to translate that language into a standard language used by the operating system to access all disk drives. On UNIX, this is the language of block devices.
When the kernel has an appropriate device driver in place, it can then access the contents of the disk drive in raw format, which may contain one or more file systems. A file system driver is used to translate the commands used to access each specific file system into a standard set of commands that the operating system can use to talk to all file systems. Programs can then deal with these file systems on the basis of filenames, and directories/folders, contained within a hierarchical structure. They can create, delete, open, and close files, as well as gather various information about them, including access permissions, size, free space, and creation and modification dates.
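On POSIX systems, the per-file information mentioned above is retrieved through calls such as stat(), sketched below. The path is arbitrary, and the example assumes a POSIX environment rather than any specific file system.

```c
/* Sketch, assuming POSIX: stat() asks the kernel, and through it the
 * file system driver, for a file's metadata: size, permission bits,
 * and timestamps. The path is arbitrary. */
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

int main(void) {
    struct stat st;
    if (stat("example.txt", &st) == -1) {
        perror("stat");
        return 1;
    }
    printf("size: %lld bytes\n", (long long)st.st_size);
    printf("mode: %o\n", (unsigned)(st.st_mode & 0777));
    printf("modified: %s", ctime(&st.st_mtime));
    return 0;
}
```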
Various differences between file systems make supporting all file systems difficult. Allowed characters in file names, case sensitivity, and the presence of various kinds of file attributes make the implementation of a single interface for every file system a daunting task. Operating systems tend to recommend using (and so support natively) file systems specifically designed for them; for example, NTFS in Windows and ext3 and ReiserFS in Linux. However, in practice, third party drivers are usually available to give support for the most widely used file systems in most general-purpose operating systems (for example, NTFS is available in Linux through NTFS-3g, and ext2/3 and ReiserFS are available in Windows through third-party software).
Support for file systems is highly varied among modern operating systems, although there are several common file systems which almost all operating systems include support and drivers for. Operating systems vary on file system support and on the disk formats they may be installed on. Under Windows, each file system is usually limited in application to certain media; for example, CDs must use ISO 9660 or UDF, and as of Windows Vista, NTFS is the only file system which the operating system can be installed on. It is possible to install Linux onto many types of file systems. Unlike other operating systems, Linux and UNIX allow any file system to be used regardless of the media it is stored in, whether it is a hard drive, a disc (CD, DVD...), a USB flash drive, or even contained within a file located on another file system.
A device driver is a specific type of computer software developed to allow interaction with hardware devices. Typically this constitutes an interface for communicating with the device, through the specific computer bus or communications subsystem that the hardware is connected to, providing commands to and/or receiving data from the device, and on the other end, the requisite interfaces to the operating system and software applications. It is a specialized, hardware-dependent computer program, also specific to the operating system, that enables another program (typically an operating system, applications software package, or computer program running under the operating system kernel) to interact transparently with a hardware device. It usually provides the interrupt handling required for asynchronous, time-dependent hardware interfacing.
The key design goal of device drivers is abstraction. Every model of hardware (even within the same class of device) is different. Manufacturers also release newer models that provide more reliable or better performance, and these newer models are often controlled differently. Computers and their operating systems cannot be expected to know how to control every device, both now and in the future. To solve this problem, operating systems essentially dictate how every type of device should be controlled. The function of the device driver is then to translate these operating system mandated function calls into device specific calls. In theory, a new device, which is controlled in a new manner, should function correctly if a suitable driver is available. This new driver ensures that the device appears to operate as usual from the operating system's point of view.
Under versions of Windows before Vista and versions of Linux before 2.6, all driver execution was co-operative, meaning that if a driver entered an infinite loop it would freeze the system. More recent revisions of these operating systems incorporate kernel preemption, where the kernel interrupts the driver to give it tasks, and then separates itself from the process until it receives a response from the device driver, or gives it more tasks to do.
Currently most operating systems support a variety of networking protocols, hardware, and applications for using them. This means that computers running dissimilar operating systems can participate in a common network for sharing resources such as computing, files, printers, and scanners using either wired or wireless connections. Networks can essentially allow a computer's operating system to access the resources of a remote computer to support the same functions as it could if those resources were connected directly to the local computer. This includes everything from simple communication, to using networked file systems or even sharing another computer's graphics or sound hardware. Some network services allow the resources of a computer to be accessed transparently, such as SSH which allows networked users direct access to a computer's command line interface.
Client/server networking allows a program on a computer, called a client, to connect via a network to another computer, called a server. Servers offer (or host) various services to other network computers and users. These services are usually provided through ports or numbered access points beyond the server's IP address. Each port number is usually associated with a maximum of one running program, which is responsible for handling requests to that port. A daemon, being a user program, can in turn access the local hardware resources of that computer by passing requests to the operating system kernel.
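The port-and-daemon model can be sketched with the BSD sockets API that most operating systems expose. The minimal server below binds an arbitrary port (8080), accepts a single connection, and sends a greeting; a real daemon would loop over accept(), check errors, and service each client in a separate process or thread.

```c
/* Minimal sketch of a server daemon, assuming the BSD sockets API:
 * bind a port, accept one client, send a greeting. Port 8080 is
 * arbitrary; error checking is abbreviated for clarity. */
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);   /* any local interface */
    addr.sin_port = htons(8080);                /* the service's port */

    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 1);                             /* queue one connection */

    int client = accept(srv, NULL, NULL);       /* wait for a client */
    const char *msg = "hello from the server\n";
    write(client, msg, strlen(msg));

    close(client);
    close(srv);
    return 0;
}
```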
Many operating systems support one or more vendor-specific or open networking protocols as well, for example, SNA on IBM systems, DECnet on systems from Digital Equipment Corporation, and Microsoft-specific protocols (SMB) on Windows. Specific protocols for specific tasks may also be supported, such as NFS for file access. Protocols like ESound (esd) can be easily extended over the network to provide sound from local applications on a remote system's sound hardware.
Whether a computer is secure depends on a number of technologies working properly. A modern operating system provides access to a number of resources, which are available to software running on the system, and to external devices like networks via the kernel.
The operating system must be capable of distinguishing between requests which should be allowed to be processed, and others which should not be processed. While some systems may simply distinguish between "privileged" and "non-privileged", systems commonly have a form of requester "identity", such as a user name. To establish identity there may be a process of "authentication". Often a username must be quoted, and each username may have a password. Other methods of authentication, such as magnetic cards or biometric data, might be used instead. In some cases, especially connections from the network, resources may be accessed with no authentication at all (such as reading files over a network share). Also covered by the concept of requester identity is "authorization"; the particular services and resources accessible by the requester once logged into a system are tied to either the requester's user account or to the variously configured groups of users to which the requester belongs.
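On a POSIX system, the identity that the kernel attaches to each request can be inspected directly, as in the sketch below: getuid() returns the calling process's numeric user ID and getpwuid() maps it to the account record. This only illustrates the notion of requester identity; real authentication and authorization involve considerably more.

```c
/* Sketch, assuming POSIX: every process carries a user identity that
 * the kernel consults when deciding whether to allow a request. */
#include <pwd.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    uid_t uid = getuid();               /* identity of this process */
    struct passwd *pw = getpwuid(uid);  /* map the ID to an account */
    printf("uid %d", (int)uid);
    if (pw != NULL)
        printf(" (%s)", pw->pw_name);
    printf("\n");
    return 0;
}
```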
In addition to the allow or disallow model of security, a system with a high level of security also offers auditing options. These would allow tracking of requests for access to resources (such as, "who has been reading this file?"). Internal security, or security from an already running program, is only possible if all possibly harmful requests must be carried out through interrupts to the operating system kernel. If programs can directly access hardware and resources, they cannot be secured.
External security involves a request from outside the computer, such as a login at a connected console or some kind of network connection. External requests are often passed through device drivers to the operating system's kernel, where they can be passed onto applications, or carried out directly. Security of operating systems has long been a concern because of highly sensitive data held on computers, both of a commercial and military nature. The United States Government Department of Defense (DoD) created the "Trusted Computer System Evaluation Criteria" (TCSEC) which is a standard that sets basic requirements for assessing the effectiveness of security. This became of vital importance to operating system makers, because the TCSEC was used to evaluate, classify and select trusted operating systems being considered for the processing, storage and retrieval of sensitive or classified information.
Network services include offerings such as file sharing, print services, email, web sites, and file transfer protocols (FTP), most of which can have compromised security. At the front line of security are hardware devices known as firewalls or intrusion detection/prevention systems. At the operating system level, there are a number of software firewalls available, as well as intrusion detection/prevention systems. Most modern operating systems include a software firewall, which is enabled by default. A software firewall can be configured to allow or deny network traffic to or from a service or application running on the operating system. Therefore, one can install and be running an insecure service, such as Telnet or FTP, and not have to be threatened by a security breach because the firewall would deny all traffic trying to connect to the service on that port.
An alternative strategy, and the only sandbox strategy available in systems that do not meet the Popek and Goldberg virtualization requirements, is where the operating system is not running user programs as native code, but instead either emulates a processor or provides a host for a p-code based system such as Java.
Internal security is especially relevant for multi-user systems; it allows each user of the system to have private files that the other users cannot tamper with or read. Internal security is also vital if auditing is to be of any use, since a program can potentially bypass the operating system, inclusive of bypassing auditing.
Every computer that is to be operated by an individual requires a user interface. The user interface is usually referred to as a shell and is essential if human interaction is to be supported. The user interface views the directory structure and requests services from the operating system that will acquire data from input hardware devices, such as a keyboard, mouse or credit card reader, and requests operating system services to display prompts, status messages and such on output hardware devices, such as a video monitor or printer. The two most common forms of a user interface have historically been the command-line interface, where computer commands are typed out line-by-line, and the graphical user interface, where a visual environment (most commonly a WIMP) is present.
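The command-line form of shell can be made concrete with a toy read-run loop, sketched below under POSIX assumptions. It uses the standard system() call for brevity, which internally performs the fork/exec/wait sequence a real shell would carry out itself after parsing the command.

```c
/* Toy command-line interface: read a command, ask the operating system
 * to run it, repeat. system() hides the fork/exec/wait sequence that a
 * real shell performs itself. Illustrative sketch only. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char line[256];
    for (;;) {
        fputs("toysh> ", stdout);                 /* the prompt */
        if (fgets(line, sizeof line, stdin) == NULL)
            break;                                /* end of input */
        line[strcspn(line, "\n")] = '\0';         /* strip the newline */
        if (strcmp(line, "exit") == 0)
            break;
        if (line[0] != '\0')
            system(line);                         /* run the command */
    }
    return 0;
}
```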
Most of the modern computer systems support graphical user interfaces (GUI), and often include them. In some computer systems, such as the original implementation of the classic Mac OS, the GUI is integrated into the kernel.
While technically a graphical user interface is not an operating system service, incorporating support for one into the operating system kernel can allow the GUI to be more responsive by reducing the number of context switches required for the GUI to perform its output functions. Other operating systems are modular, separating the graphics subsystem from the kernel and the operating system. In the 1980s, UNIX, VMS and many others were built this way. Linux and macOS are also built this way. Modern releases of Microsoft Windows such as Windows Vista implement a graphics subsystem that is mostly in user-space; however the graphics drawing routines of versions between Windows NT 4.0 and Windows Server 2003 exist mostly in kernel space. Windows 9x had very little distinction between the interface and the kernel.
Many computer operating systems allow the user to install or create any user interface they desire. The X Window System in conjunction with GNOME or KDE Plasma 5 is a commonly found setup on most Unix and Unix-like (BSD, Linux, Solaris) systems. A number of Windows shell replacements have been released for Microsoft Windows, which offer alternatives to the included Windows shell, but the shell itself cannot be separated from Windows.
Numerous Unix-based GUIs have existed over time, most derived from X11. Competition among the various vendors of Unix (HP, IBM, Sun) led to much fragmentation, though an effort to standardize in the 1990s to COSE and CDE failed for various reasons, and were eventually eclipsed by the widespread adoption of GNOME and K Desktop Environment. Prior to free software-based toolkits and desktop environments, Motif was the prevalent toolkit/desktop combination (and was the basis upon which CDE was developed).
Graphical user interfaces evolve over time. For example, Windows has modified its user interface almost every time a new major version of Windows is released, and the Mac OS GUI changed dramatically with the introduction of Mac OS X in 1999.
A real-time operating system (RTOS) is an operating system intended for applications with fixed deadlines (real-time computing). Such applications include some small embedded systems, automobile engine controllers, industrial robots, spacecraft, industrial control, and some large-scale computing systems.
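Fixed-deadline work in an RTOS is commonly structured as a periodic task released at exact intervals. The sketch below assumes the POSIX real-time clock API; it runs on ordinary Linux as well, but only a true RTOS guarantees that every wakeup meets its deadline. Sleeping to absolute times (TIMER_ABSTIME) keeps timing error from accumulating across periods.

```c
/* Sketch of a periodic task with a 10 ms period, assuming the POSIX
 * clock API: clock_nanosleep() to an absolute deadline avoids drift.
 * Deadline guarantees require a real-time operating system. */
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int i = 0; i < 100; i++) {
        /* ... the real-time work for this period would go here ... */

        next.tv_nsec += 10 * 1000 * 1000;         /* advance by 10 ms */
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    puts("completed 100 periods");
    return 0;
}
```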
An early example of a large-scale real-time operating system was Transaction Processing Facility developed by American Airlines and IBM for the Sabre Airline Reservations System.
Embedded systems that have fixed deadlines use a real-time operating system such as VxWorks, PikeOS, eCos, QNX, MontaVista Linux and RTLinux. Windows CE is a real-time operating system that shares similar APIs to desktop Windows but shares none of desktop Windows' codebase. Symbian OS also has an RTOS kernel (EKA2) starting with version 8.0b.
Some embedded systems use operating systems such as Palm OS, BSD, and Linux, although such operating systems do not support real-time computing.
Operating system development is one of the most complicated activities in which a computing hobbyist may engage. A hobby operating system may be classified as one whose code has not been directly derived from an existing operating system, and has few users and active developers.
In some cases, hobby development is in support of a "homebrew" computing device, for example, a simple single-board computer powered by a 6502 microprocessor. Or, development may be for an architecture already in widespread use. Operating system development may come from entirely new concepts, or may commence by modeling an existing operating system. In either case, the hobbyist is his/her own developer, or may interact with a small and sometimes unstructured group of individuals who have like interests.
Examples of a hobby operating system include Syllable and TempleOS.
Application software is generally written for use on a specific operating system, and sometimes even for specific hardware. When porting the application to run on another OS, the functionality required by that application may be implemented differently by that OS (the names of functions, meaning of arguments, etc.) requiring the application to be adapted, changed, or otherwise maintained.
Because it was written in C rather than assembly language, Unix was highly portable to systems different from its native PDP-11.
This cost in supporting operating systems diversity can be avoided by instead writing applications against software platforms such as Java or Qt. These abstractions have already borne the cost of adaptation to specific operating systems and their system libraries.
Another approach is for operating system vendors to adopt standards. For example, POSIX and OS abstraction layers provide commonalities that reduce porting costs.
Orson Welles
George Orson Welles (May 6, 1915 – October 10, 1985) was an American actor, director, writer and producer who is remembered for his innovative work in radio, theatre and film. He is considered one of the greatest filmmakers of all time.
While in his twenties Welles directed a number of high-profile stage productions for the Federal Theatre Project, including an adaptation of "Macbeth" with an entirely African American cast and the political musical "The Cradle Will Rock". In 1937 he and John Houseman founded the Mercury Theatre, an independent repertory theatre company that presented a series of productions on Broadway through 1941, including "Caesar" (1937), a Broadway adaptation of William Shakespeare's "Julius Caesar".
In 1938, his radio anthology series "The Mercury Theatre on the Air" gave Welles the platform to find international fame as the director and narrator of a radio adaptation of H. G. Wells's novel "The War of the Worlds", which caused widespread panic because many listeners thought that an invasion by extraterrestrial beings was actually occurring. Although some contemporary sources say these reports of panic were mostly false and overstated, they rocketed Welles to notoriety.
His first film was "Citizen Kane" (1941), which is consistently ranked as one of the greatest films ever made, and which he co-wrote, produced, directed and starred in as Charles Foster Kane. Welles released twelve other features, the most acclaimed of which include "The Magnificent Ambersons" (1942), "The Lady from Shanghai" (1947), "Touch of Evil" (1958), "The Trial" (1962), "Chimes at Midnight" (1965) and "F for Fake" (1973). His distinctive directorial style featured layered and nonlinear narrative forms, uses of lighting such as chiaroscuro, unusual camera angles, sound techniques borrowed from radio, deep focus shots and long takes. He has been praised as "the ultimate auteur".
Welles was an outsider to the studio system, and struggled for creative control on his projects early on with the major film studios in Hollywood and later in life with a variety of independent financiers across Europe, where he spent most of his career. Many of his films were either heavily edited or remained unreleased. Some, like "Touch of Evil", have been painstakingly re-edited from his notes. With a development spanning almost 50 years, Welles's final film, "The Other Side of the Wind", was released in 2018.
Welles had three marriages, including one with Rita Hayworth, and three children. Known for his baritone voice, Welles performed extensively across theatre, radio and film. He was a lifelong magician noted for presenting troop variety shows in the war years. In 2002 he was voted the greatest film director of all time in two British Film Institute polls among directors and critics. In 2018 he was included in the list of the 50 greatest Hollywood actors of all time by "The Daily Telegraph".
George Orson Welles was born May 6, 1915, in Kenosha, Wisconsin, a son of Richard Head Welles (1872–1930) and Beatrice Ives Welles ("née" Beatrice Lucy Ives; 1883–1924). He was named after one of his great-grandfathers, influential Kenosha attorney Orson S. Head, and his brother George Head. An alternative story of the source of his first and middle names was told by George Ade, who met Welles's parents on a West Indies cruise toward the end of 1914. Ade was traveling with a friend, Orson Wells (no relation), and the two of them sat at the same table as Mr. and Mrs. Richard Welles. Mrs. Welles was pregnant at the time, and when they said goodbye, she told them that she had enjoyed their company so much that if the child were a boy, she intended to name it for them: George Orson.
Despite his family's affluence, Welles encountered hardship in childhood. His parents separated and moved to Chicago in 1919. His father, who made a fortune as the inventor of a popular bicycle lamp, became an alcoholic and stopped working. Welles's mother, a pianist, played during lectures by Dudley Crafts Watson at the Art Institute of Chicago to support her son and herself; the oldest Welles boy, "Dickie", was institutionalized at an early age because he had learning difficulties. Beatrice died of hepatitis in a Chicago hospital on May 10, 1924, just after Welles's ninth birthday. The Gordon String Quartet, which had made its first appearance at her home in 1921, played at Beatrice's funeral.
After his mother's death, Welles ceased pursuing music. It was decided that he would spend the summer with the Watson family at a private art colony in the village of Wyoming in the Finger Lakes region of New York State, established by Lydia Avery Coonley Ward. There he played and became friends with the children of the Aga Khan, including the 12-year-old Prince Aly Khan. Then, in what Welles later described as "a hectic period" in his life, he lived in a Chicago apartment with both his father and Dr. Maurice Bernstein, a Chicago physician who had been a close friend of both his parents. Welles briefly attended public school before his alcoholic father left business altogether and took him along on his travels to Jamaica and the Far East. When they returned they settled in a hotel in Grand Detour, Illinois, that was owned by his father. When the hotel burned down, Welles and his father took to the road again.
"During the three years that Orson lived with his father, some observers wondered who took care of whom", wrote biographer Frank Brady.
"In some ways, he was never really a young boy, you know," said Roger Hill, who became Welles's teacher and lifelong friend.
Welles briefly attended public school in Madison, Wisconsin, enrolled in the fourth grade. On September 15, 1926, he entered the Todd Seminary for Boys, an expensive independent school in Woodstock, Illinois, that his older brother, Richard Ives Welles, had attended ten years before until he was expelled for misbehavior. At Todd School, Welles came under the influence of Roger Hill, a teacher who was later Todd's headmaster. Hill provided Welles with an "ad hoc" educational environment that proved invaluable to his creative experience, allowing Welles to concentrate on subjects that interested him. Welles performed and staged theatrical experiments and productions there.
"Todd provided Welles with many valuable experiences", wrote critic Richard France. "He was able to explore and experiment in an atmosphere of acceptance and encouragement. In addition to a theatre the school's own radio station was at his disposal." Welles's first radio experience was on the Todd station, where he performed an adaptation of "Sherlock Holmes" that was written by him.
On December 28, 1930, when Welles was 15, his father died of heart and kidney failure at the age of 58, alone in a hotel in Chicago. Shortly before this, Welles had announced to his father that he would stop seeing him, believing it would prompt his father to refrain from drinking. As a result, Orson felt guilty because he believed his father had drunk himself to death because of him. His father's will left it to Orson to name his guardian. When Roger Hill declined, Welles chose Maurice Bernstein.
Following graduation from Todd in May 1931, Welles was awarded a scholarship to Harvard University, while his mentor Roger Hill advocated that he attend Cornell College in Iowa. Rather than enrolling, he chose to travel. He studied for a few weeks at the Art Institute of Chicago with Boris Anisfeld, who encouraged him to pursue painting.
Welles occasionally returned to Woodstock, the place he eventually named when he was asked in a 1960 interview, "Where is home?" Welles replied, "I suppose it's Woodstock, Illinois, if it's anywhere. I went to school there for four years. If I try to think of a home, it's that."
After his father's death, Welles traveled to Europe using a small portion of his inheritance. Welles said that while on a walking and painting trip through Ireland, he strode into the Gate Theatre in Dublin and claimed he was a Broadway star. The manager of the Gate, Hilton Edwards, later said he had not believed him but was impressed by his brashness and an impassioned audition he gave. Welles made his stage debut at the Gate Theatre on October 13, 1931, appearing in Ashley Dukes's adaptation of "Jew Suss" as Duke Karl Alexander of Württemberg. He performed small supporting roles in subsequent Gate productions, and he produced and designed productions of his own in Dublin. In March 1932 Welles performed in W. Somerset Maugham's "The Circle" at Dublin's Abbey Theatre and traveled to London to find additional work in the theatre. Unable to obtain a work permit, he returned to the U.S.
Welles found his fame ephemeral and turned to a writing project at Todd School that became immensely successful, first entitled "Everybody's Shakespeare" and subsequently, "The Mercury Shakespeare". Welles traveled to North Africa while working on thousands of illustrations for the "Everybody's Shakespeare" series of educational books, a series that remained in print for decades.
In 1933, Roger and Hortense Hill invited Welles to a party in Chicago, where Welles met Thornton Wilder. Wilder arranged for Welles to meet Alexander Woollcott in New York, in order that he be introduced to Katharine Cornell, who was assembling a repertory theatre company. Cornell's husband, director Guthrie McClintic, immediately put Welles under contract and cast him in three plays. "Romeo and Juliet", "The Barretts of Wimpole Street" and "Candida" toured in repertory for 36 weeks beginning in November 1933, with the first of more than 200 performances taking place in Buffalo, New York.
In 1934, Welles got his first job on radio—on "The American School of the Air"—through actor-director Paul Stewart, who introduced him to director Knowles Entrikin. That summer Welles staged a drama festival with the Todd School at the Opera House in Woodstock, Illinois, inviting Micheál Mac Liammóir and Hilton Edwards from Dublin's Gate Theatre to appear along with New York stage luminaries in productions including "Trilby", "Hamlet", "The Drunkard" and "Tsar Paul". At the old firehouse in Woodstock he also shot his first film, an eight-minute short titled "The Hearts of Age".
On November 14, 1934, Welles married Chicago socialite and actress Virginia Nicolson (often misspelled "Nicholson") in a civil ceremony in New York. To appease the Nicolsons, who were furious at the couple's elopement, a formal ceremony took place December 23, 1934, at the New Jersey mansion of the bride's godmother. Welles wore a cutaway borrowed from his friend George Macready.
A revised production of Katharine Cornell's "Romeo and Juliet" opened December 20, 1934, at the Martin Beck Theatre in New York. The Broadway production brought the 19-year-old Welles (now playing Tybalt) to the notice of John Houseman, a theatrical producer who was casting the lead role in the debut production of Archibald MacLeish's verse play, "Panic". On March 22, 1935, Welles made his debut on the CBS Radio series "The March of Time", performing a scene from "Panic" for a news report on the stage production.
By 1935 Welles was supplementing his earnings in the theatre as a radio actor in Manhattan, working with many actors who later formed the core of his Mercury Theatre on programs including "America's Hour", "Cavalcade of America", "Columbia Workshop" and "The March of Time". "Within a year of his debut Welles could claim membership in that elite band of radio actors who commanded salaries second only to the highest paid movie stars," wrote critic Richard France.
Part of the Works Progress Administration, the Federal Theatre Project (1935–39) was a New Deal program to fund theatre and other live artistic performances and entertainment programs in the United States during the Great Depression. It was created as a relief measure to employ artists, writers, directors and theatre workers. Under national director Hallie Flanagan it was shaped into a true national theatre that created relevant art, encouraged experimentation and innovation, and made it possible for millions of Americans to see live theatre for the first time.
John Houseman, director of the Negro Theatre Unit in New York, invited Welles to join the Federal Theatre Project in 1935. Far from unemployed — "I was so employed I forgot how to sleep" — Welles put a large share of his $1,500-a-week radio earnings into his stage productions, bypassing administrative red tape and mounting the projects more quickly and professionally. "Roosevelt once said that I was the only operator in history who ever illegally siphoned money "into" a Washington project," Welles said.
The Federal Theatre Project was the ideal environment in which Welles could develop his art. Its purpose was employment, so he was able to hire any number of artists, craftsmen and technicians, and he filled the stage with performers. The company for the first production, an adaptation of William Shakespeare's "Macbeth" with an entirely African-American cast, numbered 150. The production became known as the "Voodoo Macbeth" because Welles changed the setting to a mythical island suggesting the Haitian court of King Henri Christophe, with Haitian "vodou" fulfilling the rôle of Scottish witchcraft. The play opened April 14, 1936, at the Lafayette Theatre in Harlem and was received rapturously. At 20, Welles was hailed as a prodigy. The production then made a 4,000-mile national tour that included two weeks at the Texas Centennial Exposition in Dallas.
Next mounted was the farce "Horse Eats Hat", an adaptation by Welles and Edwin Denby of "The Italian Straw Hat", an 1851 five-act farce by Eugène Marin Labiche and Marc-Michel. The play was presented September 26 – December 5, 1936, at Maxine Elliott's Theatre, New York, and featured Joseph Cotten in his first starring role. It was followed by an adaptation of "Dr. Faustus" that used light as a prime unifying scenic element in a nearly black stage, presented January 8 – May 9, 1937, at Maxine Elliott's Theatre.
Outside the scope of the Federal Theatre Project, American composer Aaron Copland chose Welles to direct "The Second Hurricane" (1937), an operetta with a libretto by Edwin Denby. Presented at the Henry Street Settlement Music School in New York for the benefit of high school students, the production opened April 21, 1937, and ran its scheduled three performances.
In 1937, Welles rehearsed Marc Blitzstein's political operetta, "The Cradle Will Rock". It was originally scheduled to open June 16, 1937, in its first public preview. Because of severe federal cutbacks in the Works Progress projects, the show's premiere at Maxine Elliott's Theatre was canceled. The theater was locked and guarded to prevent any government-purchased materials from being used for a commercial production of the work. In a last-minute move, Welles announced to waiting ticket-holders that the show was being transferred to the Venice Theatre, 20 blocks away. Some of the cast, crew and audience walked the distance. The union musicians refused to perform in a commercial theater for lower non-union government wages. The actors' union stated that the production belonged to the Federal Theatre Project and could not be performed outside that context without permission. Lacking the participation of the union members, "The Cradle Will Rock" began with Blitzstein introducing the show and playing the piano accompaniment on stage with some cast members performing from the audience. This impromptu performance was well received by its audience.
Breaking with the Federal Theatre Project in 1937, Welles and Houseman founded their own repertory company, which they called the Mercury Theatre. The name was inspired by the title of the iconoclastic magazine, "The American Mercury". Welles was executive producer, and the original company included such actors as Joseph Cotten, George Coulouris, Geraldine Fitzgerald, Arlene Francis, Martin Gabel, John Hoyt, Norman Lloyd, Vincent Price, Stefan Schnabel and Hiram Sherman.
"I think he was the greatest directorial talent we've ever had in the [American] theater," Lloyd said of Welles in a 2014 interview. "When you saw a Welles production, you saw the text had been affected, the staging was remarkable, the sets were unusual, music, sound, lighting, a totality of everything. We had not had such a man in our theater. He was the first and remains the greatest."
The Mercury Theatre opened November 11, 1937, with "Caesar", Welles's modern-dress adaptation of Shakespeare's tragedy "Julius Caesar"—streamlined into an anti-fascist tour de force that Joseph Cotten later described as "so vigorous, so contemporary that it set Broadway on its ear." The set was completely open with no curtain, and the brick stage wall was painted dark red. Scene changes were achieved by lighting alone. On the stage was a series of risers; squares were cut into one at intervals and lights were set beneath it, pointing straight up to evoke the "cathedral of light" at the Nuremberg Rallies. "He staged it like a political melodrama that happened the night before," said Lloyd.
Beginning January 1, 1938, "Caesar" was performed in repertory with "The Shoemaker's Holiday"; both productions moved to the larger National Theatre. They were followed by "Heartbreak House" (April 29, 1938) and "Danton's Death" (November 5, 1938). As well as being presented in a pared-down oratorio version at the Mercury Theatre on Sunday nights in December 1937, "The Cradle Will Rock" was at the Windsor Theatre for 13 weeks (January 4 – April 2, 1938). Such was the success of the Mercury Theatre that Welles appeared on the cover of "Time" magazine, in full makeup as Captain Shotover in "Heartbreak House", in the issue dated May 9, 1938—three days after his 23rd birthday.
Simultaneously with his work in the theatre, Welles worked extensively in radio as an actor, writer, director and producer, often without credit. Between 1935 and 1937 he was earning as much as $2,000 a week, shuttling between radio studios at such a pace that he would arrive barely in time for a quick scan of his lines before he was on the air. While he was directing the "Voodoo Macbeth" Welles was dashing between Harlem and midtown Manhattan three times a day to meet his radio commitments.
In addition to continuing as a repertory player on "The March of Time", in the fall of 1936 Welles adapted and performed "Hamlet" in an early two-part episode of CBS Radio's "Columbia Workshop". His performance as the announcer in the series' April 1937 presentation of Archibald MacLeish's verse drama "The Fall of the City" was an important development in his radio career and made the 21-year-old Welles an overnight star.
In July 1937, the Mutual Network gave Welles a seven-week series to adapt "Les Misérables". It was his first job as a writer-director for radio, the radio debut of the Mercury Theatre, and one of Welles's earliest and finest achievements. With it he pioneered the use of first-person narration in radio drama.
"By making himself the center of the storytelling process, Welles fostered the impression of self-adulation that was to haunt his career to his dying day", wrote critic Andrew Sarris. "For the most part, however, Welles was singularly generous to the other members of his cast and inspired loyalty from them above and beyond the call of professionalism."
That September, Mutual chose Welles to play Lamont Cranston, also known as "The Shadow". He performed the role anonymously through mid-September 1938.
After the theatrical successes of the Mercury Theatre, CBS Radio invited Orson Welles to create a summer show for 13 weeks. The series began July 11, 1938, initially titled "First Person Singular", with the formula that Welles would play the lead in each show. Some months later the show was called "The Mercury Theatre on the Air". The weekly hour-long show presented radio plays based on classic literary works, with original music composed and conducted by Bernard Herrmann.
The Mercury Theatre's radio adaptation of "The War of the Worlds" by H. G. Wells, broadcast October 30, 1938, brought Welles instant fame. Performed largely as a series of simulated news bulletins, the broadcast was later reported to have caused widespread confusion among listeners who, spinning the dial during breaks in other programs, tuned in mid-broadcast and missed the introduction, although the extent of this confusion has come into question. Panic was reportedly spread among listeners who believed the fictional news reports of a Martian invasion. The story of a mass panic was reported as fact around the world and disparagingly mentioned by Adolf Hitler in a public speech.
Welles's growing fame drew Hollywood offers, lures that the independent-minded Welles resisted at first. "The Mercury Theatre on the Air", which had been a sustaining show (without sponsorship), was picked up by Campbell Soup and renamed "The Campbell Playhouse". "The Mercury Theatre on the Air" made its last broadcast on December 4, 1938, and "The Campbell Playhouse" began five days later.
Welles began commuting from California to New York for the two Sunday broadcasts of "The Campbell Playhouse" after signing a film contract with RKO Pictures in August 1939. In November 1939, production of the show moved from New York to Los Angeles.
After 20 shows, Campbell began to exercise more creative control, including complete control over story selection. As his contract with Campbell came to an end, Welles chose not to sign on for another season. After the broadcast of March 31, 1940, Welles and Campbell parted amicably.
RKO Radio Pictures president George Schaefer eventually offered Welles what generally is considered the greatest contract offered to a filmmaker, much less to one who was untried. Engaging him to write, produce, direct and perform in two motion pictures, the contract subordinated the studio's financial interests to Welles's creative control, and broke all precedent by granting Welles the right of final cut. After signing a summary agreement with RKO on July 22, Welles signed a full-length 63-page contract August 21, 1939. The agreement was bitterly resented by the Hollywood studios and persistently mocked in the trade press.
RKO rejected Welles's first two movie proposals, but agreed on the third offer – "Citizen Kane". Welles co-wrote, produced and directed the film, and performed the lead role. Welles conceived the project with screenwriter Herman J. Mankiewicz, who was writing radio plays for "The Campbell Playhouse". Mankiewicz based the original outline of the film script on the life of William Randolph Hearst, whom he knew socially and came to hate after being exiled from Hearst's circle.
After agreeing on the storyline and character, Welles supplied Mankiewicz with 300 pages of notes and put him under contract to write the first draft screenplay under the supervision of John Houseman. Welles wrote his own draft, then drastically condensed and rearranged both versions and added scenes of his own. The industry accused Welles of underplaying Mankiewicz's contribution to the script, but Welles countered the attacks by saying, "At the end, naturally, I was the one making the picture, after all—who had to make the decisions. I used what I wanted of Mank's and, rightly or wrongly, kept what I liked of my own."
Welles's project attracted some of Hollywood's best technicians, including cinematographer Gregg Toland. For the cast, Welles primarily used actors from his Mercury Theatre. Filming "Citizen Kane" took ten weeks.
Hearst's newspapers barred all reference to "Citizen Kane" and exerted enormous pressure on the Hollywood film community to force RKO to shelve the film. RKO chief George Schaefer received a cash offer from MGM's Louis B. Mayer and other major studio executives if he would destroy the negative and existing prints of the film.
While waiting for "Citizen Kane" to be released, Welles produced and directed the original Broadway production of "Native Son", a drama written by Paul Green and Richard Wright based on Wright's novel. Starring Canada Lee, the show ran March 24 – June 28, 1941, at the St. James Theatre. The Mercury production was the last time Welles and Houseman worked together.
"Citizen Kane" was given a limited release and the film received overwhelming critical praise. It was voted the best picture of 1941 by the National Board of Review and the New York Film Critics Circle. The film garnered nine Academy Award nominations but won only for Best Original Screenplay, shared by Mankiewicz and Welles. "Variety" reported that block voting by screen extras deprived "Citizen Kane" of Oscars for Best Picture and Best Actor (Welles), and similar prejudices were likely to have been responsible for the film receiving no technical awards.
The delay in the film's release and uneven distribution contributed to mediocre results at the box office. After it ran its course theatrically, "Citizen Kane" was retired to the vault in 1942. In postwar France, however, the film's reputation grew after it was seen for the first time in 1946. In the United States, it began to be re-evaluated after it began to appear on television in 1956. That year it was also re-released theatrically, and film critic Andrew Sarris described it as "the great American film" and "the work that influenced the cinema more profoundly than any American film since "Birth of a Nation"." "Citizen Kane" is now hailed as the greatest film ever made.
Welles's second film for RKO was "The Magnificent Ambersons", adapted by Welles from the Pulitzer Prize-winning novel by Booth Tarkington. Toland was not available, so Stanley Cortez was named cinematographer. The meticulous Cortez worked slowly and the film lagged behind schedule and over budget. Prior to production, Welles's contract was renegotiated, revoking his right to control the final cut. "The Magnificent Ambersons" was in production October 28, 1941 – January 22, 1942.
Throughout the shooting of the film Welles was also producing a weekly half-hour radio series, "The Orson Welles Show". Many of the "Ambersons" cast participated in the CBS Radio series, which ran September 15, 1941 – February 2, 1942.
At RKO's request, Welles worked on an adaptation of Eric Ambler's spy thriller, "Journey into Fear", co-written with Joseph Cotten. In addition to acting in the film, Welles was the producer. Direction was credited to Norman Foster. Welles later said that they were in such a rush that the director of each scene was determined by whoever was closest to the camera. "Journey into Fear" was in production January 6 – March 12, 1942.
In late November 1941, Welles was appointed as a goodwill ambassador to Latin America by Nelson Rockefeller, U.S. Coordinator of Inter-American Affairs and a principal stockholder in RKO Radio Pictures. The mission of the OCIAA was cultural diplomacy, promoting hemispheric solidarity and countering the growing influence of the Axis powers in Latin America. John Hay Whitney, head of the agency's Motion Picture Division, was asked by the Brazilian government to produce a documentary of the annual Rio Carnival celebration taking place in early February 1942. In a telegram December 20, 1941, Whitney wrote Welles, "Personally believe you would make great contribution to hemisphere solidarity with this project."
The OCIAA sponsored cultural tours to Latin America and appointed goodwill ambassadors including George Balanchine and the American Ballet, Bing Crosby, Aaron Copland, Walt Disney, John Ford and Rita Hayworth. Welles was thoroughly briefed in Washington, D.C., immediately before his departure for Brazil, and film scholar Catherine L. Benamou, a specialist in Latin American affairs, finds it "not unlikely" that he was among the goodwill ambassadors who were asked to gather intelligence for the U.S. government in addition to their cultural duties. She concludes that Welles's acceptance of Whitney's request was "a logical and patently patriotic choice".
In addition to working on his ill-fated film project, "It's All True", Welles was responsible for radio programs, lectures, interviews and informal talks as part of his OCIAA-sponsored cultural mission, which was regarded as a success. He spoke on topics ranging from Shakespeare to visual art at gatherings of Brazil's elite, and his two intercontinental radio broadcasts in April 1942 were particularly intended to tell U.S. audiences that President Vargas was a partner with the Allies. Welles's ambassadorial mission was extended to permit his travel to other nations including Argentina, Bolivia, Chile, Colombia, Ecuador, Guatemala, Mexico, Peru and Uruguay. Welles worked for more than half a year with no compensation.
Welles's own expectations for the film were modest. ""It's All True" was not going to make any cinematic history, nor was it intended to," he later said. "It was intended to be a perfectly honorable execution of my job as a goodwill ambassador, bringing entertainment to the Northern Hemisphere that showed them something about the Southern one."
In July 1941, Welles conceived "It's All True" as an omnibus film mixing documentary and docufiction in a project that emphasized the dignity of labor and celebrated the cultural and ethnic diversity of North America. It was to have been his third film for RKO, following "Citizen Kane" (1941) and "The Magnificent Ambersons" (1942). Duke Ellington was put under contract to score a segment with the working title, "The Story of Jazz", drawn from Louis Armstrong's 1936 autobiography, "Swing That Music". Armstrong was cast to play himself in the brief dramatization of the history of jazz performance, from its roots to its place in American culture in the 1940s. "The Story of Jazz" was to go into production in December 1941.
Mercury Productions purchased the stories for two other segments—"My Friend Bonito" and "The Captain's Chair"—from documentary filmmaker Robert J. Flaherty. Adapted by Norman Foster and John Fante, "My Friend Bonito" was the only segment of the original "It's All True" to go into production. Filming took place in Mexico September–December 1941, with Norman Foster directing under Welles's supervision.
In December 1941, the Office of the Coordinator of Inter-American Affairs asked Welles to make a film in Brazil that would showcase the Carnaval in Rio de Janeiro. With filming of "My Friend Bonito" about two-thirds complete, Welles decided he could shift the geography of "It's All True" and incorporate Flaherty's story into an omnibus film about Latin America—supporting the Roosevelt administration's Good Neighbor policy, which Welles strongly advocated. In this revised concept, "The Story of Jazz" was replaced by the story of samba, a musical form with a comparable history and one that came to fascinate Welles. He also decided to do a ripped-from-the-headlines episode about the epic voyage of four poor Brazilian fishermen, the jangadeiros, who had become national heroes. Welles later said this was the most valuable story.
Required to film the Carnaval in Rio de Janeiro in early February 1942, Welles rushed to edit "The Magnificent Ambersons" and finish his acting scenes in "Journey into Fear". He ended his lucrative CBS radio show February 2, flew to Washington, D.C., for a briefing, and then lashed together a rough cut of "Ambersons" in Miami with editor Robert Wise. Welles recorded the film's narration the night before he left for South America: "I went to the projection room at about four in the morning, did the whole thing, and then got on the plane and off to Rio—and the end of civilization as we know it."
Welles left for Brazil on February 4 and began filming in Rio February 8, 1942. At the time it did not seem that Welles's other film projects would be disrupted, but as Benamou wrote, "the ambassadorial appointment would be the first in a series of turning points leading—in 'zigs' and 'zags,' rather than in a straight line—to Welles's loss of complete directorial control over both "The Magnificent Ambersons" and "It's All True", the cancellation of his contract at RKO Radio Studio, the expulsion of his company Mercury Productions from the RKO lot, and, ultimately, the total suspension of "It's All True"."
In 1942 RKO Pictures underwent major changes under new management. Nelson Rockefeller, the primary backer of the Brazil project, left its board of directors, and Welles's principal sponsor at RKO, studio president George Schaefer, resigned. RKO took control of "Ambersons" and edited the film into what the studio considered a commercial format. Welles's attempts to protect his version ultimately failed. In South America, Welles requested resources to finish "It's All True". Given a limited amount of black-and-white film stock and a silent camera, he was able to finish shooting the episode about the jangadeiros, but RKO refused to support further production on the film.
"So I was fired from RKO," Welles later recalled. "And they made a great publicity point of the fact that I had gone to South America without a script and thrown all this money away. I never recovered from that attack." Later in 1942, when RKO Pictures began promoting its new corporate motto, "Showmanship In Place of Genius: A New Deal at RKO", Welles understood it as a reference to him.
Welles returned to the United States August 22, 1942, after more than six months in South America. A week after his return he produced and emceed the first two hours of a seven-hour coast-to-coast War Bond drive broadcast titled "I Pledge America". Airing August 29, 1942, on the Blue Network, the program was presented in cooperation with the United States Department of the Treasury, Western Union (which wired bond subscriptions free of charge) and the American Women's Voluntary Services. Featuring 21 dance bands and a score of stage, screen and radio stars, the broadcast raised more than $10 million (more than $146 million today) for the war effort.
On October 12, 1942, "Cavalcade of America" presented Welles's radio play, "Admiral of the Ocean Sea", an entertaining and factual look at the legend of Christopher Columbus.
"It belongs to a period when hemispheric unity was a crucial matter and many programs were being devoted to the common heritage of the Americas," wrote broadcasting historian Erik Barnouw. "Many such programs were being translated into Spanish and Portuguese and broadcast to Latin America, to counteract many years of successful Axis propaganda to that area. The Axis, trying to stir Latin America against Anglo-America, had constantly emphasized the differences between the two. It became the job of American radio to emphasize their common experience and essential unity."
"Admiral of the Ocean Sea", also known as "Columbus Day", begins with the words, "Hello Americans"—the title Welles would choose for his own series five weeks later.
"Hello Americans", a CBS Radio series broadcast November 15, 1942 – January 31, 1943, was produced, directed and hosted by Welles under the auspices of the Office of the Coordinator for Inter-American Affairs. The 30-minute weekly program promoted inter-American understanding and friendship, drawing upon the research amassed for the ill-fated film, "It's All True". The series was produced concurrently with Welles's other CBS series, "Ceiling Unlimited" (November 9, 1942 – February 1, 1943), sponsored by the Lockheed-Vega Corporation. The program was conceived to glorify the aviation industry and dramatize its role in World War II. Welles's shows were regarded as significant contributions to the war effort.
Throughout the war Welles worked on patriotic radio programs including "Command Performance", "G.I. Journal", "Mail Call", "Nazi Eyes on Canada", "Stage Door Canteen" and "Treasury Star Parade".
In early 1943, the two concurrent radio series ("Ceiling Unlimited", "Hello Americans") that Orson Welles created for CBS to support the war effort had ended. Filming had also wrapped on the 1943 film adaptation of "Jane Eyre", and his fee for that film, in addition to the income from his regular guest-star roles in radio, made it possible for Welles to fulfill a lifelong dream. He approached the War Assistance League of Southern California and proposed a show that evolved into a big-top spectacle, part circus and part magic show. He offered his services as magician and director, and invested some $40,000 of his own money in an extravaganza he co-produced with his friend Joseph Cotten: "The Mercury Wonder Show for Service Men". Members of the U.S. armed forces were admitted free of charge, while the general public had to pay. The show entertained more than 1,000 service members each night, and proceeds went to the War Assistance League, a charity for military service personnel.
The development of the show coincided with the resolution of Welles's oft-changing draft status in May 1943, when he was finally declared 4-F—unfit for military service—for a variety of medical reasons. "I felt guilty about the war," Welles told biographer Barbara Leaming. "I was guilt-ridden about my civilian status." He had been publicly hounded about his patriotism since "Citizen Kane", when the Hearst press began persistent inquiries about why Welles had not been drafted.
"The Mercury Wonder Show" ran August 3 – September 9, 1943, in an 80-by-120-foot tent located at 9000 Cahuenga Boulevard, in the heart of Hollywood.
At intermission September 7, 1943, KMPC radio interviewed audience and cast members of "The Mercury Wonder Show"—including Welles and Rita Hayworth, who were married earlier that day. Welles remarked that "The Mercury Wonder Show" had been performed for approximately 48,000 members of the U.S. armed forces.
The idea of doing a radio variety show occurred to Welles after his success as substitute host of four consecutive episodes (March 14 – April 4, 1943) of "The Jack Benny Program", radio's most popular show, when Benny contracted pneumonia on a performance tour of military bases. A half-hour variety show broadcast January 26 – July 19, 1944, on the Columbia Pacific Network, "The Orson Welles Almanac" presented sketch comedy, magic, mindreading, music and readings from classic works. Many of the shows originated on U.S. military camps, where Welles and his repertory company and guests entertained the troops with a reduced version of "The Mercury Wonder Show". The performances of the all-star jazz group Welles brought together for the show were so popular that the band became a regular feature and was an important force in reviving interest in traditional New Orleans jazz.
Welles was placed on the U.S. Treasury payroll on May 15, 1944, as an expert consultant for the duration of the war, with a retainer of $1 a year. On the recommendation of President Franklin D. Roosevelt, Secretary of the Treasury Henry Morgenthau asked Welles to lead the Fifth War Loan Drive, which opened June 12 with a one-hour radio show on all four networks, broadcast from Texarkana, Texas. Including a statement by the President, the program defined the causes of the war and encouraged Americans to buy $16 billion in bonds to finance the Normandy landings and the most violent phase of World War II. Welles produced additional war loan drive broadcasts on June 14 from the Hollywood Bowl, and June 16 from Soldier Field, Chicago. Americans purchased $20.6 billion in War Bonds during the Fifth War Loan Drive, which ended on July 8, 1944.
Welles campaigned ardently for Roosevelt in 1944. A longtime supporter and campaign speaker for FDR, he occasionally sent the president ideas and phrases that were sometimes incorporated into what Welles characterized as "less important speeches". One of these ideas was the joke in what came to be called the Fala speech, Roosevelt's nationally broadcast September 23 address to the International Teamsters Union which opened the 1944 presidential campaign.
Welles campaigned for the Roosevelt–Truman ticket almost full-time in the fall of 1944, traveling to nearly every state to the detriment of his own health and at his own expense. In addition to his radio addresses he filled in for Roosevelt, opposite Republican presidential nominee Thomas E. Dewey, at "The New York Herald Tribune Forum" broadcast October 18 on the Blue Network. Welles accompanied FDR to his last campaign rally, speaking at an event November 4 at Boston's Fenway Park before 40,000 people, and took part in a historic election-eve campaign broadcast November 6 on all four radio networks.
On November 21, 1944, Welles began his association with "This Is My Best", a CBS radio series he would briefly produce, direct, write and host (March 13 – April 24, 1945). He wrote a political column called "Orson Welles' Almanac" (later titled "Orson Welles Today") for "The New York Post" January–November 1945, and advocated the continuation of FDR's New Deal policies and his international vision, particularly the establishment of the United Nations and the cause of world peace.
On April 12, 1945, the day Franklin D. Roosevelt died, the Blue-ABC network marshalled its entire executive staff and national leaders to pay homage to the late president. "Among the outstanding programs which attracted wide attention was a special tribute delivered by Orson Welles", reported "Broadcasting" magazine. Welles spoke at 10:10 p.m. Eastern War Time, from Hollywood, and stressed the importance of continuing FDR's work: "He has no need for homage and we who loved him have no time for tears … Our fighting sons and brothers cannot pause tonight to mark the death of him whose name will be given to the age we live in."
Welles presented another special broadcast on the death of Roosevelt the following evening: "We must move on beyond mere death to that free world which was the hope and labor of his life."
He dedicated the April 17 episode of "This Is My Best" to Roosevelt and the future of America on the eve of the United Nations Conference on International Organization. Welles was an advisor and correspondent for the Blue-ABC radio network's coverage of the San Francisco conference that formed the UN, taking place April 24 – June 23, 1945. He presented a half-hour dramatic program written by Ben Hecht on the opening day of the conference, and on Sunday afternoons (April 29 – June 10) he led a weekly discussion from the San Francisco Civic Auditorium.
In the fall of 1945 Welles began work on "The Stranger" (1946), a film noir drama about a war crimes investigator who tracks a high-ranking Nazi fugitive to an idyllic New England town. The film stars Edward G. Robinson, Loretta Young and Welles.
Producer Sam Spiegel initially planned to hire director John Huston, who had rewritten the screenplay by Anthony Veiller. When Huston entered the military, Welles was given the chance to direct and prove himself able to make a film on schedule and under budget—something he was so eager to do that he accepted a disadvantageous contract. One of its concessions was that he would defer to the studio in any creative dispute.
"The Stranger" was Welles's first job as a film director in four years. He was told that if the film was successful he could sign a four-picture deal with International Pictures, making films of his own choosing. Welles was given some degree of creative control, and he endeavored to personalize the film and develop a nightmarish tone. He worked on the general rewrite of the script and wrote scenes at the beginning of the picture that were shot but subsequently cut by the producers. He filmed in long takes that largely thwarted the control given to editor Ernest J. Nims under the terms of the contract.
"The Stranger" was the first commercial film to use documentary footage from the Nazi concentration camps. Welles had seen the footage in early May 1945 in San Francisco, as a correspondent and discussion moderator at the UN Conference on International Organization. He wrote of the Holocaust footage in his syndicated "New York Post" column May 7, 1945.
Completed a day ahead of schedule and under budget, "The Stranger" was the only film made by Welles to have been a "bona fide" box office success upon its release. Its cost was $1.034 million; 15 months after its release it had grossed $3.216 million. Within weeks of the completion of the film, International Pictures backed out of its promised four-picture deal with Welles. No reason was given, but the impression was left that "The Stranger" would not make money.
In the summer of 1946, Welles moved to New York to direct the Broadway musical "Around the World", a stage adaptation of the Jules Verne novel "Around the World in Eighty Days" with a book by Welles and music by Cole Porter. Producer Mike Todd, who would later produce the successful 1956 film adaptation, pulled out of the lavish and expensive production, leaving Welles to finance it himself. When Welles ran out of money he convinced Columbia Pictures president Harry Cohn to send enough to continue the show, and in exchange Welles promised to write, produce, direct and star in a film for Cohn for no further fee. The show soon closed because of poor box-office returns, and Welles was unable to claim the losses on his taxes.
In 1946, Welles began two new radio series—"The Mercury Summer Theatre of the Air" for CBS, and "Orson Welles Commentaries" for ABC. While "Mercury Summer Theatre" featured half-hour adaptations of some classic Mercury radio shows from the 1930s, the first episode was a condensation of his "Around the World" stage play, and is the only record of Cole Porter's music for the project. Several original Mercury actors returned for the series, as did Bernard Herrmann. Welles invested his earnings into his failing stage play. "Commentaries" was a political vehicle for him, continuing the themes from his "New York Post" column. At first the series lacked a clear focus, until the NAACP brought the case of Isaac Woodard to his attention. Welles brought significant attention to Woodard's cause.
The last broadcast of "Orson Welles Commentaries" on October 6, 1946, marked the end of Welles's own radio shows.
The film that Welles was obliged to make in exchange for Harry Cohn's help in financing the stage production "Around the World" was "The Lady from Shanghai", filmed in 1947 for Columbia Pictures. Intended as a modest thriller, the budget skyrocketed after Cohn suggested that Welles's then-estranged second wife Rita Hayworth co-star.
Cohn disliked Welles's rough cut, particularly the confusing plot and lack of close-ups, and was not in sympathy with Welles's Brechtian use of irony and black comedy, especially in a farcical courtroom scene. Cohn ordered extensive editing and re-shoots. After heavy editing by the studio, approximately one hour of Welles's first cut was removed, including much of a climactic confrontation scene in an amusement park funhouse. While expressing displeasure at the cuts, Welles was particularly appalled by the musical score. The film was considered a disaster in America at the time of release, though the closing shootout in a hall of mirrors has since become a touchstone of film noir. Not long after the release, Welles and Hayworth finalized their divorce.
Although "The Lady From Shanghai" was acclaimed in Europe, it was not embraced in the U.S. until decades later, where it is now often regarded as a classic of film noir. A similar difference in reception on opposite sides of the Atlantic, followed by greater American acceptance, befell the Welles-inspired Chaplin film "Monsieur Verdoux", originally to be directed by Welles starring Chaplin, then directed by Chaplin with the idea credited to Welles.
Prior to 1948, Welles convinced Republic Pictures to let him direct a low-budget version of "Macbeth", which featured highly stylized sets and costumes, and a cast of actors lip-syncing to a pre-recorded soundtrack, one of many innovative cost-cutting techniques Welles deployed in an attempt to make an epic film from B-movie resources. The script, adapted by Welles, is a violent reworking of Shakespeare's original, freely cutting and pasting lines into new contexts via a collage technique and recasting "Macbeth" as a clash of pagan and proto-Christian ideologies. Some voodoo trappings of the famous Welles/Houseman Negro Theatre stage adaptation are visible, especially in the film's characterization of the Weird Sisters, who create an effigy of Macbeth as a charm to enchant him. Of all Welles's post-"Kane" Hollywood productions, "Macbeth" is stylistically closest to "Citizen Kane" in its long takes and deep focus photography.
Republic initially trumpeted the film as an important work but decided it did not care for the Scottish accents and held up general release for almost a year after early negative press reaction, including "Life"'s comment that Welles's film "doth foully slaughter Shakespeare." Welles left for Europe, while co-producer and lifelong supporter Richard Wilson reworked the soundtrack. Welles returned and cut 20 minutes from the film at Republic's request and recorded narration to cover some gaps. The film was decried as a disaster. "Macbeth" had influential fans in Europe, especially the French poet and filmmaker Jean Cocteau, who hailed the film's "crude, irreverent power" and careful shot design, and described the characters as haunting "the corridors of some dreamlike subway, an abandoned coal mine, and ruined cellars oozing with water."
In Italy he starred as Cagliostro in the 1948 film "Black Magic". His co-star, Akim Tamiroff, impressed Welles so much that Tamiroff would appear in four of Welles's productions during the 1950s and 1960s.
The following year, Welles starred as Harry Lime in Carol Reed's "The Third Man", alongside Joseph Cotten, his friend and co-star from "Citizen Kane", with a script by Graham Greene and a memorable score by Anton Karas.
A few years later, British radio producer Harry Alan Towers would resurrect the Lime character in the radio series "The Adventures of Harry Lime".
Welles appeared as Cesare Borgia in the 1949 Italian film "Prince of Foxes", with Tyrone Power and Mercury Theatre alumnus Everett Sloane, and as the Mongol warrior Bayan in the 1950 film version of the novel "The Black Rose" (again with Tyrone Power).
During this time, Welles was channeling his money from acting jobs into a self-financed film version of Shakespeare's play "Othello". From 1949 to 1951, Welles worked on "Othello", filming on location in Italy and Morocco. The film featured Welles's friends, Micheál Mac Liammóir as Iago and Hilton Edwards as Desdemona's father Brabantio. Suzanne Cloutier starred as Desdemona and Campbell Playhouse alumnus Robert Coote appeared as Iago's associate Roderigo.
Filming was suspended several times as Welles ran out of funds and left for acting jobs, a process recounted in detail in Mac Liammóir's published memoir "Put Money in Thy Purse". The American release prints had a technically flawed soundtrack, suffering from a dropout of sound at every quiet moment. Welles's daughter, Beatrice Welles-Smith, restored "Othello" in 1992 for a wide re-release. The restoration included reconstructing Angelo Francesco Lavagnino's original musical score, which had been nearly inaudible, and adding ambient stereo sound effects, which were not in the original film. The restoration went on to a successful theatrical run in America.
In 1952, Welles continued finding work in England after the success of the "Harry Lime" radio show. Harry Alan Towers offered Welles another series, "The Black Museum", which ran for 52 weeks with Welles as host and narrator. Director Herbert Wilcox offered Welles the part of the murder victim in "Trent's Last Case", based on the novel by E. C. Bentley. In 1953, the BBC hired Welles to read an hour of selections from Walt Whitman's epic poem "Song of Myself". Towers hired Welles again, to play Professor Moriarty in the radio series "The Adventures of Sherlock Holmes", starring John Gielgud and Ralph Richardson.
Welles briefly returned to America to make his first appearance on television, starring in the "Omnibus" presentation of "King Lear", broadcast live on CBS October 18, 1953. Directed by Peter Brook, the production costarred Natasha Parry, Beatrice Straight and Arnold Moss.
In 1954, director George More O'Ferrall offered Welles the title role in the 'Lord Mountdrago' segment of "Three Cases of Murder", co-starring Alan Badel. Herbert Wilcox cast Welles as the antagonist in "Trouble in the Glen" opposite Margaret Lockwood, Forrest Tucker and Victor McLaglen. Old friend John Huston cast him as Father Mapple in his 1956 film adaptation of Herman Melville's "Moby-Dick", starring Gregory Peck.
Welles's next turn as director was the film "Mr. Arkadin" (1955), which was produced by his political mentor from the 1940s, Louis Dolivet. It was filmed in France, Germany, Spain and Italy on a very limited budget. Based loosely on several episodes of the Harry Lime radio show, it stars Welles as a billionaire who hires a man to delve into the secrets of his past. The cast includes Robert Arden, who had worked on the Harry Lime series; Welles's third wife, Paola Mori, whose voice was dubbed by actress Billie Whitelaw; and guest stars Akim Tamiroff, Michael Redgrave, Katina Paxinou and Mischa Auer. Frustrated by his slow progress in the editing room, producer Dolivet removed Welles from the project and finished the film without him. Eventually five different versions of the film would be released, two in Spanish and three in English. The version that Dolivet completed was retitled "Confidential Report". In 2005 Stefan Droessler of the Munich Film Museum oversaw a reconstruction of the surviving film elements.
In 1955, Welles also directed two television series for the BBC. The first was "Orson Welles' Sketch Book", a series of six 15-minute shows featuring Welles drawing in a sketchbook to illustrate his reminiscences for the camera (including such topics as the filming of "It's All True" and the Isaac Woodard case), and the second was "Around the World with Orson Welles", a series of six travelogues set in different locations around Europe (such as Vienna, the Basque Country between France and Spain, and England). Welles served as host and interviewer, his commentary including documentary facts and his own personal observations (a technique he would continue to explore in later works).
In Episode 3 of "Orson Welles' Sketch Book", titled "The Police", Welles makes a deliberate attack on the abuse of police powers around the world. The episode opens with him telling the story of Isaac Woodard, an African-American veteran of the South Pacific campaign in World War II who was falsely accused by a bus driver of being drunk and disorderly and removed from the bus by a policeman. Rather than being arrested right away, Woodard was beaten into unconsciousness, nearly to the point of death, and when he finally regained consciousness he was permanently blinded. By the time doctors from the U.S. Army located him three weeks later, nothing could be done. Welles assures the audience that he personally saw to it that justice was served to the policeman, although he does not say what form that justice took. Welles then goes on to give other examples of police being given more power and authority than is necessary.
In 1956, Welles completed "Portrait of Gina", a film portrait of Italian actress Gina Lollobrigida. The film cans would remain in a lost-and-found locker at the Hôtel Ritz in Paris for several decades, where they were discovered after Welles's death.
In 1956, Welles returned to Hollywood. He began filming a projected television pilot for Desilu, owned by Lucille Ball and her husband Desi Arnaz, who had recently purchased the former RKO studios. The film was "The Fountain of Youth", based on a story by John Collier. Originally deemed not viable as a pilot, the film was not aired until 1958—and won the Peabody Award for excellence.
Welles guest starred on television shows including "I Love Lucy". On radio, he was narrator of "Tomorrow" (October 17, 1956), a nuclear holocaust drama produced and syndicated by ABC and the Federal Civil Defense Administration.
Welles's next feature film role was in "Man in the Shadow" for Universal Pictures in 1957, starring Jeff Chandler.
Welles stayed on at Universal to direct (and co-star with) Charlton Heston in the 1958 film "Touch of Evil", based on Whit Masterson's novel "Badge of Evil". Originally only hired as an actor, Welles was promoted to director by Universal Studios at the insistence of Charlton Heston. The film reunited many actors and technicians with whom Welles had worked in Hollywood in the 1940s, including cameraman Russell Metty ("The Stranger"), makeup artist Maurice Seiderman ("Citizen Kane"), and actors Joseph Cotten, Marlene Dietrich and Akim Tamiroff. Filming proceeded smoothly, with Welles finishing on schedule and on budget, and the studio bosses praising the daily rushes. Nevertheless, after the end of production, the studio re-edited the film, re-shot scenes, and shot new exposition scenes to clarify the plot. Welles wrote a 58-page memo outlining suggestions and objections, stating that while the film was no longer his version but the studio's, he was still prepared to help with it.
In 1978, a longer preview version of the film was discovered and released.
As Universal reworked "Touch of Evil", Welles began filming his adaptation of Miguel de Cervantes's novel "Don Quixote" in Mexico, starring Mischa Auer as Quixote and Akim Tamiroff as Sancho Panza. He continued shooting "Don Quixote" in Spain and Italy, replacing Mischa Auer with Francisco Reiguera, and resumed taking acting jobs.
In Italy in 1959, Welles directed his own scenes as King Saul in Richard Pottier's film "David and Goliath". In Hong Kong he co-starred with Curt Jürgens in Lewis Gilbert's film "Ferry to Hong Kong". In 1960, in Paris he co-starred in Richard Fleischer's film "Crack in the Mirror". In Yugoslavia he starred in Richard Thorpe's film "The Tartars" and Veljko Bulajić's "Battle of Neretva".
Filming on "Quixote" continued on and off until the end of the 1960s, as Welles evolved the concept, tone and ending several times. Although he had a complete version of the film shot and edited at least once, he continued toying with the editing well into the 1980s; he never completed a version he was fully satisfied with, and would junk existing footage and shoot new footage. (In one case, he had a complete cut ready in which Quixote and Sancho Panza end up going to the moon, but he felt the ending was rendered obsolete by the 1969 moon landing, and burned 10 reels of this version.) As the process went on, Welles gradually voiced all of the characters himself and provided narration. In 1992, the director Jesús Franco constructed a film out of the portions of "Quixote" left behind by Welles. Some of the film stock had decayed badly. While the Welles footage was greeted with interest, the post-production by Franco was met with harsh criticism.
In 1961, Welles directed "In the Land of Don Quixote", a series of eight half-hour episodes for the Italian television network RAI. Similar to the "Around the World with Orson Welles" series, they presented travelogues of Spain and included Welles's wife, Paola, and their daughter, Beatrice. Though Welles was fluent in Italian, the network was not interested in him providing Italian narration because of his accent, and the series sat unreleased until 1964, by which time the network had added Italian narration of its own. Ultimately, versions of the episodes were released with the original musical score Welles had approved, but without the narration.
In 1962, Welles directed his adaptation of "The Trial", based on the novel by Franz Kafka and produced by Michael and Alexander Salkind. The cast included Anthony Perkins as Josef K, Jeanne Moreau, Romy Schneider, Paola Mori and Akim Tamiroff. While filming exteriors in Zagreb, Welles was informed that the Salkinds had run out of money, meaning that there could be no set construction. No stranger to shooting on found locations, Welles soon filmed the interiors in the Gare d'Orsay, at that time an abandoned railway station in Paris. Welles thought the location possessed a "Jules Verne modernism" and a melancholy sense of "waiting", both suitable for Kafka. To remain in the spirit of Kafka, Welles set up the cutting room with film editor Frederick Muller (credited as Fritz Muller) in the station's old, unused, cold and depressing stationmaster's office. The film failed at the box office. Peter Bogdanovich would later observe that Welles found the film riotously funny. Welles also told a BBC interviewer that it was his best film. While filming "The Trial" Welles met Oja Kodar, who later became his mistress and collaborator for the last 20 years of his life.
Welles played a film director in "La Ricotta" (1963), Pier Paolo Pasolini's segment of the "Ro.Go.Pa.G." movie, although his renowned voice was dubbed by Italian writer Giorgio Bassani. He continued taking what work he could find, acting in, narrating or hosting other people's work, and began filming "Chimes at Midnight", which was completed in 1965.
Filmed in Spain, "Chimes at Midnight" was based on Welles's play, "Five Kings", in which he drew material from six Shakespeare plays to tell the story of Sir John Falstaff (Welles) and his relationship with Prince Hal (Keith Baxter). The cast includes John Gielgud, Jeanne Moreau, Fernando Rey and Margaret Rutherford; the film's narration, spoken by Ralph Richardson, is taken from the chronicler Raphael Holinshed. Welles held the film in high regard: "It's my favorite picture, yes. If I wanted to get into heaven on the basis of one movie, that's the one I would offer up."
In 1966, Welles directed a film for French television, an adaptation of "The Immortal Story" by Karen Blixen. Released in 1968, it stars Jeanne Moreau, Roger Coggio and Norman Eshley. The film had a successful run in French theaters. At this time Welles met Oja Kodar again, and gave her a letter he had written to her and had been keeping for four years; they would not be parted again. They immediately began a collaboration both personal and professional. The first of these was an adaptation of Blixen's "The Heroine", meant to be a companion piece to "The Immortal Story" and starring Kodar. Unfortunately, funding disappeared after one day's shooting. After completing "The Immortal Story", Welles appeared in a brief cameo as Cardinal Wolsey in Fred Zinnemann's adaptation of "A Man for All Seasons"—a role for which he won considerable acclaim.
In 1967, Welles began directing "The Deep", based on the novel "Dead Calm" by Charles Williams and filmed off the shore of Yugoslavia. The cast included Jeanne Moreau, Laurence Harvey and Kodar. Personally financed by Welles and Kodar, the project could not obtain the funds for completion, and it was abandoned a few years later after the death of Harvey. The surviving footage was eventually edited and released by the Filmmuseum München. In 1968 Welles began filming a TV special for CBS under the title "Orson's Bag", combining travelogue, comedy skits and a condensation of Shakespeare's play "The Merchant of Venice" with Welles as Shylock. In 1969 Welles again called on film editor Frederick Muller to work with him on re-editing the material, and they set up cutting rooms at the Safa Palatino Studios in Rome. Funding for the show, sent by CBS to Welles in Switzerland, was seized by the IRS. Without funding, the show was not completed. The surviving portions were eventually released by the Filmmuseum München.
In 1969, Welles authorized the use of his name for a cinema in Cambridge, Massachusetts. The Orson Welles Cinema remained in operation until 1986, with Welles making a personal appearance there in 1977. Also in 1969 he played a supporting role in John Huston's "The Kremlin Letter". Drawn by the numerous offers he received to work in television and films, and upset by a tabloid scandal reporting his affair with Kodar, Welles abandoned the editing of "Don Quixote" and moved back to America in 1970.
Welles returned to Hollywood, where he continued to self-finance his film and television projects. While offers to act, narrate and host continued, Welles also found himself in great demand on television talk shows. He made frequent appearances for Dick Cavett, Johnny Carson, Dean Martin and Merv Griffin.
Welles's primary focus during his final years was "The Other Side of the Wind", a project that was filmed intermittently between 1970 and 1976. Co-written by Welles and Oja Kodar, it is the story of an aging film director (John Huston) looking for funds to complete his final film. The cast includes Peter Bogdanovich, Susan Strasberg, Norman Foster, Edmond O'Brien, Cameron Mitchell and Dennis Hopper. Financed by Iranian backers, the film fell into a legal quagmire over its ownership after the Shah of Iran was deposed. The legal disputes kept the film in its unfinished state until early 2017, and it was finally released in November 2018.
Welles portrayed Louis XVIII of France in the 1970 film "Waterloo", and narrated the beginning and ending scenes of the historical comedy "Start the Revolution Without Me" (1970).
In 1971, Welles directed a short adaptation of "Moby-Dick", a one-man performance on a bare stage, reminiscent of his 1955 stage production "Moby Dick—Rehearsed". Never completed, it was eventually released by the Filmmuseum München. He also appeared in "Ten Days' Wonder", co-starring with Anthony Perkins and directed by Claude Chabrol (who reciprocated with a bit part as himself in "The Other Side of the Wind"), based on a detective novel by Ellery Queen. That same year, the Academy of Motion Picture Arts and Sciences gave him an Academy Honorary Award "for superlative artistry and versatility in the creation of motion pictures." Welles pretended to be out of town and sent John Huston to claim the award, thanking the Academy on film. In his speech, Huston criticized the Academy for presenting the award while refusing to support Welles's projects.
In 1972, Welles acted as on-screen narrator for the film documentary version of Alvin Toffler's 1970 book "Future Shock". Working again for a British producer, Welles played Long John Silver in director John Hough's "Treasure Island" (1972), an adaptation of the Robert Louis Stevenson novel, which had been the second story broadcast by "The Mercury Theatre on the Air" in 1938. This was the last time he played the lead role in a major film. Welles also contributed to the script; his writing credit was attributed to the pseudonym 'O. W. Jeeves'. In some versions of the film Welles's original recorded dialog was redubbed by Robert Rietty.
In 1973, Welles completed "F for Fake", a personal essay film about art forger Elmyr de Hory and the biographer Clifford Irving. Based on an existing documentary by François Reichenbach, it included new material with Oja Kodar, Joseph Cotten, Paul Stewart and William Alland. An excerpt of Welles's 1930s "War of the Worlds" broadcast was recreated for this film; however, none of the dialogue heard in the film actually matches what was originally broadcast. Welles filmed a five-minute trailer, rejected in the U.S., that featured several shots of a topless Kodar.
Welles hosted a British syndicated anthology series, "Orson Welles's Great Mysteries", during the 1973–74 television season. His brief introductions to the 26 half-hour episodes were shot in July 1973 by Gary Graver. In 1974, Welles also lent his voice to that year's remake of Agatha Christie's classic thriller "Ten Little Indians", produced by his former associate Harry Alan Towers and starring an international cast that included Oliver Reed, Elke Sommer and Herbert Lom.
In 1975, Welles narrated the documentary "Bugs Bunny: Superstar", focusing on Warner Bros. cartoons from the 1940s. Also in 1975, the American Film Institute presented Welles with its third Lifetime Achievement Award (the first two going to director John Ford and actor James Cagney). At the ceremony, Welles screened two scenes from the nearly finished "The Other Side of the Wind".
In 1976, Paramount Television purchased the rights to the entire set of Rex Stout's Nero Wolfe stories for Orson Welles. Welles had once wanted to make a series of Nero Wolfe movies, but Rex Stout—leery of Hollywood adaptations after two disappointing 1930s films—had turned him down. Paramount planned to begin with an ABC-TV movie and hoped to persuade Welles to continue the role in a mini-series. Frank D. Gilroy was signed to write the television script and direct the TV movie on the assurance that Welles would star, but by April 1977 Welles had bowed out. In 1980 the Associated Press reported "the distinct possibility" that Welles would star in a Nero Wolfe TV series for NBC television. Again, Welles bowed out of the project due to creative differences, and William Conrad was cast in the role.
In 1979, Welles completed his documentary "Filming Othello", which featured Micheál Mac Liammóir and Hilton Edwards. Made for West German television, it was also released in theaters. That same year, Welles completed his self-produced pilot for "The Orson Welles Show" television series, featuring interviews with Burt Reynolds, Jim Henson and Frank Oz and guest-starring the Muppets and Angie Dickinson. With no network interested, the pilot was never broadcast. Also in 1979, Welles appeared in the biopic "The Secret of Nikola Tesla", and in a cameo as Lew Lord in "The Muppet Movie".
Beginning in the late 1970s, Welles appeared in a series of famous television commercials. For two years he was on-camera spokesman for the Paul Masson Vineyards, and sales grew by one third during the time Welles intoned what became a popular catchphrase: "We will sell no wine before its time." He was also the voice behind the long-running Carlsberg "Probably the best lager in the world" campaign, promoted Domecq sherry on British television and provided narration for Findus adverts, though the adverts themselves have been overshadowed by a famous blooper reel of voice recordings, known as the Frozen Peas reel. He also did commercials for the Preview Subscription Television Service seen on stations around the country, including WCLQ/Cleveland, KNDL/St. Louis and WSMW/Boston. As money ran short, he began directing commercials to make ends meet, including the famous British "Follow the Bear" commercials for Hofmeister lager.
In 1981, Welles hosted the documentary "The Man Who Saw Tomorrow", about Renaissance-era prophet Nostradamus. In 1982, the BBC broadcast "The Orson Welles Story" in the "Arena" series. Interviewed by Leslie Megahey, Welles examined his past in great detail, and several people from his professional past were interviewed as well. It was reissued in 1990 as "With Orson Welles: Stories of a Life in Film". Welles provided narration for the tracks "Defender" from Manowar's 1987 album "Fighting the World" and "Dark Avenger" on their 1982 album, "Battle Hymns". He also recorded the concert introduction for the live performances of Manowar that says, "Ladies and gentlemen, from the United States of America, all hail Manowar." Manowar have been using this introduction for all of their concerts since then.
During the 1980s, Welles worked on such film projects as "The Dreamers", based on two stories by Isak Dinesen and starring Oja Kodar, and "Orson Welles' Magic Show", which reused material from his failed TV pilot. Another project he worked on was "Filming The Trial", the second in a proposed series of documentaries examining his feature films. While much was shot for these projects, none of them was completed. All of them were eventually released by the Filmmuseum München.
In 1984, Welles narrated the short-lived television series "Scene of the Crime". During the early years of "Magnum, P.I.", Welles was the voice of the unseen character Robin Masters, a famous writer and playboy. After Welles's death, this minor character was largely written out of the series; in an oblique homage to Welles, the "Magnum, P.I." producers ambiguously concluded the story arc by having one character accuse another of having hired an actor to portray Robin Masters. Also in 1984, the penultimate year of his life, Welles released a music single titled "I Know What It Is To Be Young (But You Don't Know What It Is To Be Old)", which he recorded under the Italian label Compagnia Generale del Disco. The song was performed with the Nick Perito Orchestra and the Ray Charles Singers and produced by Jerry Abbott (father of guitarist "Dimebag Darrell" Abbott).
The last film roles before Welles's death included voice work in the animated films "Enchanted Journey" (1984) and "The Transformers: The Movie" (1986), in which he played the planet-eating robot Unicron. His last film appearance was in Henry Jaglom's 1987 independent film "Someone to Love", released two years after his death but produced before his voice-over in "The Transformers: The Movie". His last television appearance was on the television show "Moonlighting". He recorded an introduction to an episode entitled "The Dream Sequence Always Rings Twice", which was partially filmed in black and white. The episode aired five days after his death and was dedicated to his memory.
In the mid-1980s, Henry Jaglom taped lunch conversations with Welles at Los Angeles's Ma Maison as well as in New York. Edited transcripts of these sessions appear in Peter Biskind's 2013 book "My Lunches With Orson: Conversations Between Henry Jaglom and Orson Welles".
Orson Welles and Chicago-born actress and socialite Virginia Nicolson (1916–1996) were married on November 14, 1934. The couple separated in December 1939 and were divorced on February 1, 1940. After putting up with Welles's romances in New York, Virginia had learned that Welles had fallen in love with Mexican actress Dolores del Río.
Infatuated with her since adolescence, Welles met del Río at Darryl Zanuck's ranch soon after he moved to Hollywood in 1939. Their relationship was kept secret until 1941, when del Río filed for divorce from her second husband. They openly appeared together in New York while Welles was directing the Mercury stage production "Native Son". They acted together in the movie "Journey into Fear" (1943). Their relationship came to an end due, among other things, to Welles's infidelities. Del Río returned to Mexico in 1943, shortly before Welles married Rita Hayworth.
Welles married Rita Hayworth on September 7, 1943. They were divorced on November 10, 1947. During his last interview, recorded for "The Merv Griffin Show" on the evening before his death, Welles called Hayworth "one of the dearest and sweetest women that ever lived … and we were a long time together—I was lucky enough to have been with her longer than any of the other men in her life."
In 1955, Welles married actress Paola Mori (née Countess Paola di Gerfalco), an Italian aristocrat who starred as Raina Arkadin in his film "Mr. Arkadin" that year. The couple had begun a passionate affair and were married at her parents' insistence; they were wed in London on May 8, 1955, and never divorced.
Croatian-born artist and actress Oja Kodar became Welles's longtime companion both personally and professionally from 1966 onward, and they lived together for some of the last 20 years of his life.
Welles had three daughters from his marriages: Christopher Welles Feder (born March 27, 1938, with Virginia Nicolson); Rebecca Welles Manning (December 17, 1944 – October 17, 2004, with Rita Hayworth); and Beatrice Welles (born November 13, 1955, with Paola Mori).
Welles is thought to have had a son, British director Michael Lindsay-Hogg (born May 5, 1940), with Irish actress Geraldine Fitzgerald, then the wife of Sir Edward Lindsay-Hogg, 4th baronet. When Lindsay-Hogg was 16, his mother reluctantly addressed the pervasive rumors that his father was Welles; she denied them, but in such detail that he doubted her veracity. Fitzgerald evaded the subject for the rest of her life. Lindsay-Hogg knew Welles, worked with him in the theatre and met him at intervals throughout Welles's life. After learning that Welles's oldest daughter, Chris, his childhood playmate, had long suspected that he was her brother, Lindsay-Hogg initiated a DNA test that proved inconclusive. In his 2011 autobiography, Lindsay-Hogg reported that his questions were resolved by his mother's close friend Gloria Vanderbilt, who wrote that Fitzgerald had told her that Welles was his father. A 2015 Welles biography by Patrick McGilligan, however, reports that Welles's paternity was impossible: Fitzgerald left the U.S. for Ireland in May 1939, her son was conceived before her return in late October, and Welles did not travel overseas during that period.
After the death of Rebecca Welles Manning, a man named Marc McKerrow was revealed to be her son—and therefore a direct descendant of Orson Welles and Rita Hayworth. McKerrow's reactions to the revelation and his meeting with Oja Kodar are documented in the 2008 film "Prodigal Sons". McKerrow died on June 18, 2010.
Despite an urban legend promoted by Welles, he was not related to Abraham Lincoln's wartime Secretary of the Navy, Gideon Welles. The myth dates back to the first newspaper feature ever written about Welles—"Cartoonist, Actor, Poet and only 10"—in the February 19, 1926, issue of "The Capital Times". The article falsely states that he was descended from "Gideon Welles, who was a member of President Lincoln's cabinet". As presented by Charles Higham in a genealogical chart that introduces his 1985 biography of Welles, Orson Welles's father was Richard Head Welles (born Wells), son of Richard Jones Wells, son of Henry Hill Wells (who had an uncle named Gideon "Wells"), son of William Hill Wells, son of Richard Wells (1734–1801).
Peter Noble's 1956 biography describes Welles as "a magnificent figure of a man, over six feet tall, handsome, with flashing eyes and a gloriously resonant speaking-voice". Welles said that a voice specialist once told him he was born to be a heldentenor, a heroic tenor, but that when he was young and working at the Gate Theatre in Dublin, he forced his voice down into a bass-baritone.
Even as a baby, Welles was prone to illness, including diphtheria, measles, whooping cough, and malaria. From infancy he suffered from asthma, sinus headaches, and backache that was later found to be caused by congenital anomalies of the spine. Foot and ankle trouble throughout his life was the result of flat feet. "As he grew older", Brady wrote, "his ill health was exacerbated by the late hours he was allowed to keep [and] an early penchant for alcohol and tobacco".
In 1928, at age 13, Welles was already more than six feet tall (1.83 meters) and weighed over 180 pounds (81.6 kg). His passport recorded his height as six feet three inches (190.5 cm), with brown hair and green eyes.
"Crash diets, drugs, and corsets had slimmed him for his early film roles", wrote biographer Barton Whaley. "Then always back to gargantuan consumption of high-caloric food and booze. By summer 1949, when he was 34, his weight had crept up to a stout 230 pounds (104.3 kg). In 1953, he ballooned from 250 to 275 pounds (113.3 to 124.7 kg). After 1960, he remained permanently obese."
When Peter Bogdanovich once asked him about his religion, Welles gruffly replied that it was none of his business, then misinformed him that he was raised Catholic. Although the Welles family was no longer devout, it was fourth-generation Protestant Episcopalian and, before that, Quaker and Puritan; the funeral of Welles's father, Richard H. Welles, was Episcopalian.
In April 1982, when interviewer Merv Griffin asked him about his religious beliefs, Welles replied, "I try to be a Christian. I don't pray really, because I don't want to bore God." Near the end of his life, Welles was dining at Ma Maison, his favorite restaurant in Los Angeles, when proprietor Patrick Terrail conveyed an invitation from the head of the Greek Orthodox Church, who asked Welles to be his guest of honor at divine liturgy at Saint Sophia Cathedral. Welles replied, "Please tell him I really appreciate that offer, but I am an atheist."
"Orson never joked or teased about the religious beliefs of others", wrote biographer Barton Whaley. "He accepted it as a cultural artifact, suitable for the births, deaths, and marriages of strangers and even some friends—but without emotional or intellectual meaning for himself."
Welles was politically active from the beginning of his career. He remained aligned with the left throughout his life, and always defined his political orientation as "progressive". He was an outspoken critic of racism in the United States and the practice of segregation. He was a strong supporter of Franklin D. Roosevelt and the New Deal and often spoke out on radio in support of progressive politics. He campaigned heavily for Roosevelt in the 1944 election. However, Welles did not support the 1948 presidential bid of Roosevelt's second vice president, Henry A. Wallace, for the Progressive Party, later describing Wallace as "a prisoner of the Communist Party."
"During a White House dinner," Welles recalled in a 1983 conversation with his friend Roger Hill, "when I was campaigning for Roosevelt, in a toast, with considerable tongue in cheek, he said, 'Orson, you and I are the two greatest actors alive today.' In private that evening, and on several other occasions, he urged me to run for a Senate seat in either California or Wisconsin. He wasn't alone." In the 1980s, Welles still expressed admiration for Roosevelt but also described his presidency as "a semidictatorship."p. 187
Welles recalled that he was once hiking across the Austrian countryside with a teacher who happened to be a budding Nazi. Near Innsbruck he attended a meeting run by Julius Streicher, publisher of the antisemitic propaganda newspaper Der Stürmer, and happened to be seated next to Adolf Hitler. Welles said that Hitler made no impression on him at all and that he could not even remember him, claiming Hitler had no personality whatsoever: "He was invisible. There was nothing there until there were 5,000 people yelling sieg heil".
For several years, he wrote a newspaper column on political issues and considered running for the U.S. Senate in 1946, representing his home state of Wisconsin—a seat that was ultimately won by Joseph McCarthy.
Welles's political activities were reported on pages 155–157 of "Red Channels", the anti-Communist publication that, in part, fueled the already flourishing Hollywood Blacklist. He was in Europe during the height of the Red Scare, thereby adding one more reason for the Hollywood establishment to ostracize him.
In 1970, Welles narrated (but did not write) a satirical political record on the rise of President Richard Nixon titled "The Begatting of the President".
He was a lifelong member of the International Brotherhood of Magicians and the Society of American Magicians.
On the evening of October 9, 1985, Welles recorded his final interview on the syndicated TV program "The Merv Griffin Show", appearing with biographer Barbara Leaming. "Both Welles and Leaming talked of Welles's life, and the segment was a nostalgic interlude," wrote biographer Frank Brady. Welles returned to his house in Hollywood and worked into the early hours typing stage directions for the project he and Gary Graver were planning to shoot at UCLA the following day. Welles died sometime on the morning of October 10, following a heart attack. He was found by his chauffeur at around 10 a.m.; the first of Welles's friends to arrive was Paul Stewart. Welles was 70 years old at his death.
Welles was cremated by prior agreement with the executor of his estate, Greg Garrison, whose advice about making lucrative TV appearances in the 1970s made it possible for Welles to pay off a portion of the taxes he owed the IRS. A brief private funeral was attended by Paola Mori and Welles's three daughters—the first time they had ever been together. Only a few close friends were invited: Garrison, Graver, Roger Hill and Prince Alessandro Tasca di Cuto. Chris Welles Feder later described the funeral as an awful experience.
A public memorial tribute took place November 2, 1985, at the Directors Guild of America Theater in Los Angeles. Host Peter Bogdanovich introduced speakers including Charles Champlin, Geraldine Fitzgerald, Greg Garrison, Charlton Heston, Roger Hill, Henry Jaglom, Arthur Knight, Oja Kodar, Barbara Leaming, Janet Leigh, Norman Lloyd, Dan O'Herlihy, Patrick Terrail and Robert Wise.
"I know what his feelings were regarding his death", Joseph Cotten later wrote. "He did not want a funeral; he wanted to be buried quietly in a little place in Spain. He wanted no memorial services ..." Cotten declined to attend the memorial program; instead he sent a short message, ending with the last two lines of a Shakespeare sonnet that Welles had sent him on his most recent birthday:
But if the while I think on thee, dear friend,
All losses are restored and sorrows end.
In 1987 the ashes of Welles and Mori (killed in a 1986 car crash) were taken to Ronda, Spain, and buried in an old well covered by flowers on the rural estate of a longtime friend, bullfighter Antonio Ordóñez.
Welles's reliance on self-production meant that many of his later projects were filmed piecemeal or were not completed. He financed his later projects through his own fundraising activities, often taking on other work to obtain money to fund his own films.
In the mid-1950s, Welles began work on "Don Quixote", initially a commission from CBS television. Welles expanded the film to feature length, developing the screenplay to take Quixote and Sancho Panza into the modern age. Filming stopped with the death of Francisco Reiguera, the actor playing Quixote, in 1969. Orson Welles continued editing the film into the early 1970s. At the time of his death, the film remained largely a collection of footage in various states of editing. The project and, more important, Welles's conception of the project changed radically over time.
A version Oja Kodar supervised, with help from Jess Franco, assistant director during production, was released in 1992 to poor reviews.
Frederick Muller, the film editor for "The Trial", "Chimes at Midnight" and the CBS special "Orson's Bag", worked on editing three reels of the original, unadulterated version. When asked for his opinion by a "Time Out" journalist in 2013, he said he felt that if the film had been released without re-editing the images, but with the addition of "ad hoc" sound and music, it probably would have been rather successful.
In 1969, Welles was given a TV commission to film a condensed adaptation of "The Merchant of Venice". Welles completed the film by 1970, but the finished negative was later mysteriously stolen from his Rome production office. A restored and reconstructed version of the film, made by using the original script and composer's notes, premiered at pre-opening ceremonies of the 72nd Venice International Film Festival, alongside "Othello", in 2015.
In 1970, Welles began shooting "The Other Side of the Wind". The film relates the efforts of a film director (played by John Huston) to complete his last Hollywood picture and is largely set at a lavish party. By 1972 the filming was reported by Welles as being "96% complete", though by 1979 Welles had only edited about 40 minutes of the film. In that year, legal complications over the ownership of the film put the negative into a Paris vault. In 2004 director Peter Bogdanovich, who acted in the film, announced his intention to complete the production.
On October 28, 2014, Los Angeles-based production company Royal Road Entertainment announced it had negotiated an agreement, with the assistance of producer Frank Marshall, and would purchase the rights to complete and release "The Other Side of the Wind". Bogdanovich and Marshall planned to complete Welles's nearly finished film in Los Angeles, aiming to have it ready for screening May 6, 2015, the 100th anniversary of Welles's birth. Royal Road Entertainment and German producer Jens Koethner Kaul acquired the rights held by Les Films de l'Astrophore and the late Mehdi Boushehri. They reached an agreement with Oja Kodar, who inherited Welles's ownership of the film, and Beatrice Welles, manager of the Welles estate; but at the end of 2015, efforts to complete the film were at an impasse.
In March 2017, Netflix acquired distribution rights to the film. That month, the original negative, dailies and other footage arrived in Los Angeles for post-production; the film was completed in 2018. The film premiered at the 75th Venice International Film Festival on August 31, 2018.
On November 2, 2018, the film debuted in select theaters and on Netflix, forty-eight years after principal photography began.
Some footage is included in the documentaries "Working with Orson Welles" (1993), "Orson Welles: One Man Band" (1995), and most extensively "They'll Love Me When I'm Dead" (2018).
"Too Much Johnson" is a 1938 comedy film written and directed by Welles. Designed as the cinematic aspect of Welles's Mercury Theatre stage presentation of William Gillette's 1894 comedy, the film was not completely edited or publicly screened. "Too Much Johnson" was considered a lost film until August 2013, with news reports that a pristine print had been discovered in Italy in 2008. A copy restored by the George Eastman House museum was scheduled to premiere October 9, 2013, at the Pordenone Silent Film Festival, with a U.S. premiere to follow. A single performance of "Too Much Johnson", on February 2, 2015, at the Film Forum in New York City, was a great success. Produced by Bruce Goldstein and adapted and directed by Allen Lewis Rickman, it featured the Film Forum Players with live piano.
"Heart of Darkness" was Welles's projected first film, in 1940. It was planned in extreme detail and some test shots were filmed; the footage is now lost. It was planned to be entirely shot in long takes from the point of view of the narrator, Marlow, who would be played by Welles; his reflection would occasionally be seen in the window as his boat sailed down river. The project was abandoned because it could not be delivered on budget, and "Citizen Kane" was made instead.
In 1941, Welles planned a film with his then partner, the Mexican actress Dolores del Río. "Santa" was adapted from the novel by Mexican writer Federico Gamboa, and the film would have marked del Río's debut in the Mexican cinema. Welles reworked the script into 13 sequences, but the high salary demanded by del Río halted the project. In 1943, the film was finally completed using Welles's settings, directed by Norman Foster and starring Mexican actress Esther Fernández.
In 1941, Welles also planned a Mexican drama with Dolores del Río, which he gave to RKO to be budgeted. The film was to be an adaptation of a novel by Arthur Calder-Marshall. In the story, del Río would play Elena Medina, "the most beautiful girl in the world", with Welles playing an American who becomes entangled in a mission to disrupt a Nazi plot to overthrow the Mexican government. Welles planned to shoot in Mexico, but the Mexican government had to approve the story, and this never occurred.
In 1941, Welles received the support of Bishop Fulton Sheen for a retelling of the life of Christ, to be set in the American West in the 1890s. After filming of "Citizen Kane" was complete, Welles, Perry Ferguson, and Gregg Toland scouted locations in Baja California and Mexico. Welles wrote a screenplay with dialogue from the Gospels of Mark, Matthew, and Luke. "Every word in the film was to be from the Bible — no original dialogue, but done as a sort of American primitive," Welles said, "set in the frontier country in the last century." The unrealized project was revisited by Welles in the 1950s, when he wrote a second unfilmed screenplay, to be shot in Egypt.
Welles did not originally want to direct "It's All True", a 1942 documentary about South America, but after its abandonment by RKO, he spent much of the 1940s attempting to buy the negative of his material from RKO, so that he could edit and release it in some form. The footage remained unseen in vaults for decades, and was assumed lost. Over 50 years later, some (but not all) of the surviving material saw release in the 1993 documentary "It's All True: Based on an Unfinished Film by Orson Welles".
In 1944, Welles wrote the first-draft script of "Monsieur Verdoux", a film that he also intended to direct. Charlie Chaplin initially agreed to star in it, but later changed his mind, saying he had never been directed by anyone else in a feature film. Chaplin bought the film rights and made the film himself in 1947, with some changes. The final film credits Chaplin with the script, "based on an idea by Orson Welles".
Welles spent around nine months in 1947–48 co-writing the screenplay for "Cyrano de Bergerac" with Ben Hecht, a project Welles was assigned to direct for Alexander Korda. He began scouting locations in Europe while filming "Black Magic", but Korda was short of money and sold the rights to Columbia Pictures, which eventually dismissed Welles from the project and then sold the rights to United Artists, whose 1950 film version was not based on Welles's script.
After Welles's elaborate musical stage version of Jules Verne's "Around the World in Eighty Days", encompassing 38 different sets, went live in 1946, Welles shot some test footage in Morocco in 1947 for a film version. The footage was never edited, funding never came through, and Welles abandoned the project. Nine years later, the stage show's producer Mike Todd made his own award-winning film version of the book.
"Moby Dick—Rehearsed" was a film version of Welles's 1955 London meta-play, starring Gordon Jackson, Christopher Lee, Patrick McGoohan, and with Welles as Ahab. Using bare, minimalist sets, Welles alternated between a cast of nineteenth-century actors rehearsing a production of "Moby Dick", with scenes from "Moby Dick" itself. Kenneth Williams, a cast member who was apprehensive about the entire project, recorded in his autobiography that Welles's dim, atmospheric stage lighting made some of the footage so dark as to be unwatchable. The entire play was filmed, but is now presumed lost. This was made during one weekend at the Hackney Empire theater.
The producers of "Histoires extraordinaires", a 1968 anthology film based on short stories by Edgar Allan Poe, announced in June 1967 that Welles would direct one segment based on both "Masque of the Red Death" and "The Cask of Amontillado" for the omnibus film. Welles withdrew in September 1967 and was replaced. The script, written in English by Welles and Oja Kodar, is in the Filmmuseum Munchen collection.
One such project was a Monty Python-esque spoof, made around 1968–69, in which Welles plays all but one of the characters (including two characters in drag). Welles intended this completed sketch to be one of several items in a television special on London. Other items filmed for this special, all included in the "One Man Band" documentary by his partner Oja Kodar, comprised a sketch on Winston Churchill (played in silhouette by Welles), a sketch on peers in a stately home, a feature on London gentlemen's clubs, and a sketch featuring Welles being mocked by his snide Savile Row tailor (played by Charles Gray).
Welles wrote two screenplays for "Treasure Island" in the 1960s, and was eager to seek financial backing to direct it. His plan was to film it in Spain in concert with "Chimes at Midnight". Welles intended to play the part of Long John Silver. He wanted Keith Baxter to play Doctor Livesey and John Gielgud to take on the role of Squire Trelawney. Australian-born child actor Fraser MacIntosh ("The Boy Cried Murder"), then 11-years old, was cast as Jim Hawkins and flown to Spain for the shoot, which would have been directed by Jess Franco. About 70 percent of the "Chimes at Midnight" cast would have had roles in "Treasure Island". However, funding for the project fell through. Eventually, Welles's own screenplay (under the pseudonym of O.W. Jeeves) was further rewritten, and formed the basis of the 1972 film version directed by John Hough, in which Welles played Long John Silver.
"The Deep", an adaptation of Charles Williams's "Dead Calm", was entirely set on two boats and shot mostly in close-ups. It was filmed off the coasts of Yugoslavia and the Bahamas between 1966 and 1969, with all but one scene completed. It was originally planned as a commercially viable thriller, to show that Welles could make a popular, successful film. It was put on hold in 1970 when Welles worried that critics would not respond favorably to this film as his theatrical follow-up to the much-lauded "Chimes at Midnight", and Welles focused instead on "F for Fake". It was abandoned altogether in 1973, perhaps due to the death of its star Laurence Harvey. In a 2015 interview, Oja Kodar blamed Welles's failure to complete the film on Jeanne Moreau's refusal to participate in its dubbing.
"Dune", an early attempt at adapting Frank Herbert's sci-fi novel by Chilean film director Alejandro Jodorowsky, was to star Welles as the evil Baron Vladimir Harkonnen. Jodorowsky had personally chosen Welles for the role, but the planned film never advanced past pre-production.
In 1978 Welles was lined up by his long-time protégé Peter Bogdanovich (who was then acting as Welles's "de facto" agent) to direct "Saint Jack", an adaptation of the 1973 Paul Theroux novel about an American pimp in Singapore. Hugh Hefner and Bogdanovich's then-partner Cybill Shepherd were both attached to the project as producers, with Hefner providing finance through his Playboy productions. However, both Hefner and Shepherd became convinced that Bogdanovich himself would be a more commercially viable director than Welles, and insisted that Bogdanovich take over. Since Bogdanovich was also in need of work after a series of box office flops, he agreed. When the film was finally made in 1979 by Bogdanovich and Hefner (but without Welles or Shepherd's participation), Welles felt betrayed and according to Bogdanovich the two "drifted apart a bit".
After the success of "Filming Othello", made for West German television and consisting mostly of a monologue to the camera, Welles began shooting scenes for a follow-up, "Filming The Trial", but never completed it. What Welles did film was an 80-minute question-and-answer session in 1981 with film students asking about the film. The footage was kept by Welles's cinematographer Gary Graver, who donated it to the Munich Film Museum, which then pieced it together with Welles's trailer for the film into an 83-minute film that is occasionally screened at film festivals.
Written by Welles with Oja Kodar, "The Big Brass Ring" was adapted and filmed by director George Hickenlooper in partnership with writer F.X. Feeney. Both the Welles script and the 1999 film center on a U.S. Presidential hopeful in his 40s, his elderly mentor—a former candidate for the Presidency, brought low by homosexual scandal—and the Italian journalist probing for the truth of the relationship between these men. During the last years of his life, Welles struggled to get financing for the planned film; however, his efforts at casting Jack Nicholson, Robert Redford, Warren Beatty, Clint Eastwood, Burt Reynolds and Paul Newman as the main character were unsuccessful. All of the actors turned down the role for various reasons.
In 1984, Welles wrote the screenplay for a film he planned to direct, an autobiographical drama about the 1937 staging of "The Cradle Will Rock". Rupert Everett was slated to play the young Welles. However, Welles was unable to acquire funding. Tim Robbins later directed a similar film, but it was not based on Welles's script.
At the time of his death, Welles was in talks with a French production company to direct a film version of the Shakespeare play "King Lear", in which he would also play the title role.
"" was an adaptation of Vladimir Nabokov's novel. Welles flew to Paris to discuss the project personally with the Russian author. | https://en.wikipedia.org/wiki?curid=22196 |
Open content
Open content describes any work that others may copy or modify freely, provided they credit the original creator, without needing to ask permission. The concept has been applied to a range of formats, including textbooks, academic journals, films and music. The term was an expansion of the related concept of open-source software. Such content is said to be under an open licence.
The concept of applying free software licenses to content was introduced by Michael Stutz, who in 1994 wrote the paper "Applying Copyleft to Non-Software Information" for the GNU Project. The term "open content" was coined by David A. Wiley in 1998 and evangelized via the "Open Content Project", describing works licensed under the Open Content License (a non-free share-alike license, see 'Free content' below) and other works licensed under similar terms.
It has since come to describe a broader class of content without conventional copyright restrictions. The openness of content can be assessed under the '5Rs Framework' based on the extent to which it can be retained, reused, revised, remixed and redistributed by members of the public without violating copyright law. Unlike free content and content under open-source licenses, there is no clear threshold that a work must reach to qualify as 'open content'.
Although open content has been described as a counterbalance to copyright, open content licenses rely on a copyright holder's power to license their work, much as copyleft also utilizes copyright for such a purpose.
In 2003, Wiley announced that the Open Content Project had been succeeded by Creative Commons and its licenses, and that he was joining it as "Director of Educational Licenses".
In 2005, the Open Icecat project was launched, in which product information for e-commerce applications was created and published under the Open Content License. It was embraced by the tech sector, which was already quite open-source-minded.
In 2006, a successor project of Creative Commons, the "Definition of Free Cultural Works" for free content, was put forth by Erik Möller, Richard Stallman, Lawrence Lessig, Benjamin Mako Hill, Angela Beesley, and others. The "Definition of Free Cultural Works" is used by the Wikimedia Foundation. In 2008, the Attribution and Attribution-ShareAlike Creative Commons licenses were marked as "Approved for Free Cultural Works" among other licenses.
Another successor project is the Open Knowledge Foundation (OKF), founded by Rufus Pollock in Cambridge, UK in 2004 as a global non-profit network to promote and share open content and data. In 2007, the Open Knowledge Foundation gave an "Open Knowledge Definition" covering "content such as music, films, books; data, be it scientific, historical, geographic or otherwise; government and other administrative information". In October 2014, with version 2.0, "Open Works" and "Open Licenses" were defined, and "open" was described as synonymous with the definitions of open/free in the Open Source Definition, the Free Software Definition and the Definition of Free Cultural Works. A distinct difference is the focus given to the public domain, as well as to accessibility ("open access") and readability ("open formats"). Among several conformant licenses, six are recommended: three of its own (the Open Data Commons Public Domain Dedication and Licence (PDDL), the Open Data Commons Attribution License (ODC-BY) and the Open Data Commons Open Database License (ODbL)) and the Creative Commons CC BY, CC BY-SA and CC0 licenses.
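Because the Open Definition recommends a short, fixed set of licenses, checking whether a given license identifier falls in that set reduces to a simple lookup. The following minimal Python sketch illustrates the idea; the identifier spellings and the function name are illustrative assumptions, not an official Open Knowledge vocabulary or API.

    # A minimal sketch: classify a license identifier against the six
    # licenses recommended by the Open Definition 2.0, as listed above.
    # Identifier spellings here are illustrative assumptions.
    RECOMMENDED_OPEN_LICENSES = {
        "PDDL",      # Open Data Commons Public Domain Dedication and Licence
        "ODC-BY",    # Open Data Commons Attribution License
        "ODBL",      # Open Data Commons Open Database License (ODbL)
        "CC-BY",     # Creative Commons Attribution
        "CC-BY-SA",  # Creative Commons Attribution-ShareAlike
        "CC0",       # Creative Commons public domain dedication
    }

    def is_recommended_open_license(identifier: str) -> bool:
        """Return True if the identifier names one of the six recommended licenses."""
        normalized = identifier.strip().upper().replace(" ", "-")
        return normalized in RECOMMENDED_OPEN_LICENSES

    print(is_recommended_open_license("CC BY-SA"))  # True: normalizes to CC-BY-SA
    print(is_recommended_open_license("ODbL"))      # True: normalizes to ODBL
    print(is_recommended_open_license("GPL-3.0"))   # False: a free software license,
                                                    # but not in the recommended set

Note that a license outside this set (such as the GPL in the example) may still be conformant with the Open Definition; the sketch only tests membership in the recommended subset named in the text above.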
The OpenContent website once defined OpenContent as 'freely available for modification, use and redistribution under a license similar to those used by the open-source / free software community'. However, such a definition would exclude the Open Content License (OPL) because that license forbade charging 'a fee for the [OpenContent] itself', a right required by free and open-source software licenses.
The term has since shifted in meaning. OpenContent "is licensed in a manner that provides users with free and perpetual permission to engage in the 5R activities."
The 5Rs are put forward on the OpenContent website as a framework for assessing the extent to which content is open: whether users may retain it (make and own copies), reuse it (use it in a wide range of ways), revise it (adapt, adjust or modify it), remix it (combine it with other material to create something new), and redistribute it (share copies with others).
This broader definition distinguishes open content from open-source software, since the latter must be available for commercial use by the public. However, it is similar to several definitions for open educational resources, which include resources under noncommercial and verbatim licenses.
The later "Open Definition" by the Open Knowledge Foundation (now known as Open Knowledge International) define open knowledge with open content and open data as sub-elements and draws heavily on the Open Source Definition; it preserves the limited sense of open content as free content, unifying both.
"Open access" refers to toll-free or gratis access to content, mainly published originally peer-reviewed scholarly journals. Some open access works are also licensed for reuse and redistribution ("libre open access"), which would qualify them as open content.
Over the past decade, open content has been used to develop alternative routes towards higher education. Traditional universities are expensive, and their tuition rates are increasing. Open content allows a free way of obtaining higher education that is "focused on collective knowledge and the sharing and reuse of learning and scholarly content."
There are multiple projects and organizations that promote learning through open content, including OpenCourseWare Initiative, The Saylor Foundation and Khan Academy. Some universities, like MIT, Yale, and Tufts are making their courses freely available on the internet.
The textbook industry is one of the educational industries in which open content can make the biggest impact. Traditional textbooks, aside from being expensive, can also be inconvenient and out of date, because of publishers' tendency to constantly print new editions. Open textbooks help to eliminate this problem, because they are online and thus easily updatable. Being openly licensed and online can be helpful to teachers, because it allows the textbook to be modified according to the teacher's unique curriculum. There are multiple organizations promoting the creation of openly licensed textbooks, including the University of Minnesota's Open Textbook Library, Connexions, OpenStax College, the Saylor Foundation Open Textbook Challenge and Wikibooks.
According to the current definition of open content on the OpenContent website, any general, royalty-free copyright license would qualify as an open license because it 'provides users with the right to make more kinds of uses than those normally permitted under the law. These permissions are granted to users free of charge.'
However, the narrower definition used in the Open Definition effectively limits open content to libre content; any free content license, as defined by the Definition of Free Cultural Works, would qualify as an open content license. According to this narrower criterion, the following still-maintained licenses qualify:
(For more licenses see Open Knowledge, Free content and Free Cultural Works licenses) | https://en.wikipedia.org/wiki?curid=22197 |
Ohio
Ohio is a state in the East North Central region of the Midwestern United States. Of the fifty states, it is the 34th largest by area, the seventh most populous, and the tenth most densely populated. The state's capital and largest city is Columbus. Ohio is bordered by Lake Erie to the north, Pennsylvania to the east, West Virginia to the southeast, Kentucky to the southwest, Indiana to the west, and Michigan to the northwest.
The state takes its name from the Ohio River, whose name in turn originated from the Seneca word "ohiːyo", meaning "good river", "great river" or "large creek". Partitioned from the Northwest Territory, Ohio was the 17th state admitted to the Union on March 1, 1803, and the first under the Northwest Ordinance. Ohio is historically known as the "Buckeye State" after its Ohio buckeye trees, and Ohioans are also known as "Buckeyes".
Ohio rose from the land west of Appalachia, settled in colonial times and contested through the Northwest Indian Wars as part of the Northwest Territory on the early frontier, to become the first non-colonial "free" state admitted to the Union, and then an industrial powerhouse in the 20th century, before transitioning to a more information- and service-based economy in the 21st.
The government of Ohio is composed of the executive branch, led by the governor; the legislative branch, which comprises the bicameral Ohio General Assembly; and the judicial branch, led by the state Supreme Court. Ohio occupies 16 seats in the United States House of Representatives. Ohio is known for its status as both a swing state and a bellwether in national elections. Seven presidents of the United States have come from Ohio.
Ohio is an industrial state, ranking 8th out of 50 states in GDP as of 2015; it is the third-largest US state for manufacturing and the second-largest producer of automobiles, behind Michigan.
Ohio's geographic location has proven to be an asset for economic growth and expansion. Because Ohio links the Northeast to the Midwest, much cargo and business traffic passes through its borders along its well-developed highways. Ohio has the nation's 10th largest highway network and is within a one-day drive of 50% of North America's population and 70% of North America's manufacturing capacity. To the north, Lake Erie gives Ohio a coastline that allows for numerous cargo ports. Ohio's southern border is defined by the Ohio River (with the border being at the 1792 low-water mark on the north side of the river), and much of the northern border is defined by Lake Erie. Ohio's neighbors are Pennsylvania to the east, Michigan to the northwest, Lake Erie to the north, Indiana to the west, Kentucky on the south, and West Virginia on the southeast. Ohio's borders were defined by metes and bounds in the Enabling Act of 1802.
Ohio is bounded by the Ohio River, but nearly all of the river itself belongs to Kentucky and West Virginia. In 1980, the U.S. Supreme Court held that, based on the wording of the cession of territory by Virginia (which at the time included what is now Kentucky and West Virginia), the boundary between Ohio and Kentucky (and, by implication, West Virginia) is the northern low-water mark of the river as it existed in 1792. Ohio has only that portion of the river between the river's 1792 low-water mark and the present high-water mark.
The border with Michigan has also changed, as a result of the Toledo War, to angle slightly northeast to the north shore of the mouth of the Maumee River.
Much of Ohio features glaciated till plains, with an exceptionally flat area in the northwest being known as the Great Black Swamp. This glaciated region in the northwest and central state is bordered to the east and southeast first by a belt known as the glaciated Allegheny Plateau, and then by another belt known as the unglaciated Allegheny Plateau. Most of Ohio is of low relief, but the unglaciated Allegheny Plateau features rugged hills and forests.
The rugged southeastern quadrant of Ohio, stretching in an outward bow-like arc along the Ohio River from the West Virginia Panhandle to the outskirts of Cincinnati, forms a distinct socio-economic unit. Geologically similar to parts of West Virginia and southwestern Pennsylvania, this area's coal mining legacy, dependence on small pockets of old manufacturing establishments, and distinctive regional dialect set this section off from the rest of the state. In 1965 the United States Congress passed the Appalachian Regional Development Act, an attempt to "address the persistent poverty and growing economic despair of the Appalachian Region". This act defines 29 Ohio counties as part of Appalachia. While 1/3 of Ohio's land mass is part of the federally defined Appalachian region, only 12.8% of Ohioans live there (1.476 million people).
Significant rivers within the state include the Cuyahoga River, Great Miami River, Maumee River, Muskingum River, and Scioto River. The rivers in the northern part of the state drain into the northern Atlantic Ocean via Lake Erie and the St. Lawrence River, and the rivers in the southern part of the state drain into the Gulf of Mexico via the Ohio River and then the Mississippi.
The worst weather disaster in Ohio history occurred along the Great Miami River in 1913. Known as the Great Dayton Flood, the entire Miami River watershed flooded, including the downtown business district of Dayton. As a result, the Miami Conservancy District was created as the first major flood plain engineering project in Ohio and the United States.
Grand Lake St. Marys in the west-central part of the state was constructed as a supply of water for canals in the canal-building era of 1820–1850. For many years this body of water was the largest artificial lake in the world. Ohio's canals were not the economic fiasco that similar efforts were in other states; some cities, such as Dayton, owe their industrial emergence to location on canals, and as late as 1910 interior canals carried much of the bulk freight of the state.
The climate of Ohio is a humid continental climate (Köppen climate classification "Dfa/Dfb") throughout most of the state, except in the extreme southern counties of Ohio's Bluegrass region section, which are located on the northern periphery of the humid subtropical climate ("Cfa") and Upland South region of the United States. Summers are typically hot and humid throughout the state, while winters generally range from cool to cold. Precipitation in Ohio is moderate year-round. Severe weather is not uncommon in the state, although there are typically fewer tornado reports in Ohio than in states located in what is known as Tornado Alley. Severe lake-effect snowstorms are also not uncommon on the southeast shore of Lake Erie, which is located in an area designated as the Snowbelt.
Although predominantly not in a subtropical climate, some warmer-climate flora and fauna do reach well into Ohio. For instance, some trees with more southern ranges, such as the blackjack oak, "Quercus marilandica", are found at their northernmost in Ohio just north of the Ohio River. Also evidencing this climatic transition from a subtropical to a continental climate, several plants such as the Southern magnolia ("Magnolia grandiflora"), "Albizia julibrissin" (mimosa), crape myrtle, and even the occasional needle palm are hardy landscape materials regularly used as street, yard, and garden plantings in the Bluegrass region of Ohio; but these same plants will simply not thrive in much of the rest of the state. This transition may be observed while traveling through Ohio on Interstate 75 from Cincinnati to Toledo; Cincinnati's common wall lizard, one of the few examples of permanent "subtropical" fauna in Ohio, may even be glimpsed along the way.
Due to flooding resulting in severely damaged highways, Governor Mike DeWine declared a state of emergency in 37 Ohio counties in 2019.
The highest recorded temperature was 113 °F (45 °C), near Gallipolis on July 21, 1934.
The lowest recorded temperature was −39 °F (−39 °C), at Milligan on February 10, 1899, during the Great Blizzard of 1899.
Although few have registered as noticeable to the average resident, more than 200 earthquakes with a magnitude of 2.0 or higher have occurred in Ohio since 1776. The Western Ohio Seismic Zone and a portion of the Southern Great Lakes Seismic Zone are located in the state, and numerous faults lie under the surface.
The most substantial known earthquake in Ohio history was the Anna (Shelby County) earthquake, which occurred on March 9, 1937. It was centered in western Ohio, had a magnitude of 5.4, and was of intensity VIII.
Other significant earthquakes in Ohio include one of magnitude 4.8 near Lima on September 19, 1884; one of magnitude 4.2 near Portsmouth on May 17, 1901; and one of magnitude 5.0 in LeRoy Township in Lake County on January 31, 1986, which was followed over two months by 13 aftershocks of magnitude 0.5 to 2.4.
Notable Ohio earthquakes in the 21st century include one occurring on December 31, 2011, northwest of Youngstown, and one occurring on June 10, 2019, north-northwest of Eastlake under Lake Erie; both registered a 4.0 magnitude.
Columbus is both the capital of Ohio and its largest city, located near the geographic center of the state and well known for The Ohio State University. However, other Ohio cities function as economic and cultural centers of metropolitan areas. Akron, Canton, Cleveland, Mansfield, and Youngstown are in the Northeast, known for major industrial companies Goodyear Tire and Rubber and Timken, top-ranked colleges Case Western Reserve University and Kent State University, the Cleveland Clinic, and cultural attractions including the Cleveland Museum of Art, Big Five group Cleveland Orchestra, Playhouse Square, the Pro Football Hall of Fame, and the Rock and Roll Hall of Fame. Lima and Toledo are the major cities in Northwest Ohio. Northwest Ohio is known for its glass making industry, and is home to Owens Corning and Owens-Illinois, two Fortune 500 corporations. Dayton and Springfield are located in the Miami Valley, which is home to the University of Dayton, the Dayton Ballet, and the extensive Wright-Patterson Air Force Base. Cincinnati anchors Southwest Ohio, home of Miami University and the University of Cincinnati, Cincinnati Union Terminal, Cincinnati Symphony Orchestra, and various Fortune 500 companies including Procter & Gamble, Kroger, Macy's, Inc., and Fifth Third Bank. Steubenville is the only metropolitan city in Appalachian Ohio, a region that is home to Hocking Hills State Park.
The Cincinnati metropolitan area extends into Kentucky and Indiana, the Steubenville metropolitan area extends into West Virginia, the Toledo metropolitan area extends into Michigan, and the Youngstown metropolitan area extends into Pennsylvania.
Ohio cities that function as centers of United States micropolitan areas include:
Archeological evidence of spear points of both the Folsom and Clovis types indicates that the Ohio Valley was inhabited by nomadic people as early as 13,000 BC. These early nomads disappeared from Ohio by 1,000 BC. Between 1,000 and 800 BC, the sedentary Adena culture emerged. The Adena were able to establish "semi-permanent" villages because they domesticated plants, including sunflowers, and "grew squash and possibly corn"; with hunting and gathering, this cultivation supported more settled, complex villages. The most notable remnant of the Adena culture is the Great Serpent Mound, located in Adams County, Ohio.
Around 100 BC, the Adena evolved into the Hopewell people who were also mound builders. Their complex, large and technologically sophisticated earthworks can be found in modern-day Marietta, Newark, and Circleville. They were also a prolific trading society, their trading network spanning a third of the continent. The Hopewell disappeared from the Ohio Valley about 600 AD. The Mississippian Culture rose as the Hopewell Culture declined. Many Siouan-speaking peoples from the plains and east coast claim them as ancestors and say they lived throughout the Ohio region until approximately the 13th century.
There were three other cultures contemporaneous with the Mississippians: the Fort Ancient people, the Whittlesey Focus people and the Monongahela Culture. All three cultures disappeared in the 17th century. Their origins are unknown. The Shawnees may have absorbed the Fort Ancient people. It is also possible that the Monongahela held no land in Ohio during the Colonial Era. The Mississippian Culture were close to and traded extensively with the Fort Ancient people.
Indians in the Ohio Valley were greatly affected by the aggressive tactics of the Iroquois Confederation, based in central and western New York. After the Beaver Wars in the mid-17th century, the Iroquois claimed much of the Ohio country as hunting and, more importantly, beaver-trapping ground. After the devastation of epidemics and war in the mid-17th century, which largely emptied the Ohio country of indigenous people by the mid-to-late 17th century, the land gradually became repopulated by mostly Algonquian-speaking peoples. Many of these Ohio-country nations were multi-ethnic (sometimes multi-linguistic) societies born out of the earlier devastation brought about by disease, war, and subsequent social instability. They subsisted on agriculture (corn, sunflowers, beans, etc.) supplemented by seasonal hunts. By the 18th century, they were part of a larger global economy brought about by European entry into the fur trade.
The indigenous nations to inhabit Ohio in the historical period included Iroquoian, Algonquian and Siouan peoples. Ohio country was also the site of Indian massacres, such as the Yellow Creek Massacre, Gnadenhutten and the Pontiac's Rebellion school massacre. Most Native peoples who remained in Ohio were slowly bought out and convinced to leave, or ordered to do so by law, in the early 19th century with the Indian Removal Act of 1830.
During the 18th century, the French set up a system of trading posts to control the fur trade in the region. Beginning in 1754, France and Great Britain fought the French and Indian War. As a result of the Treaty of Paris (1763), the French ceded control of Ohio and the remainder of the Old Northwest to Great Britain.
Pontiac's Rebellion in the 1760s, however, posed a challenge to British military control. British control came to an end with the colonists' victory in the American Revolution: in the Treaty of Paris in 1783, Britain ceded all claims to Ohio country to the United States.
The United States created the Northwest Territory under the Northwest Ordinance of 1787. Slavery was not permitted in the new territory. Settlement began with the founding of Marietta by the Ohio Company of Associates, which had been formed by a group of American Revolutionary War veterans. Following the Ohio Company, the Miami Company (also referred to as the "Symmes Purchase") claimed the southwestern section, and the Connecticut Land Company surveyed and settled the Connecticut Western Reserve in present-day Northeast Ohio. Territorial surveyors from Fort Steuben began surveying an area of eastern Ohio called the Seven Ranges at about the same time.
The old Northwest Territory originally included areas previously known as Ohio Country and Illinois Country. As Ohio prepared for statehood, the Indiana Territory was created, reducing the Northwest Territory to approximately the size of present-day Ohio plus the eastern half of the Lower Peninsula of Michigan and the eastern tip of the Upper Peninsula and a sliver of southeastern Indiana called "The Gore".
Under the Northwest Ordinance, areas could be defined and admitted as states once their population reached 60,000. Although Ohio's population was only 45,000 in December 1801, Congress determined that it was growing rapidly and had already begun the path to statehood. With regard to the Leni Lenape natives, Congress decided that 10,000 acres on the Muskingum River in the present state of Ohio would "be set apart and the property thereof be vested in the Moravian Brethren ... or a society of the said Brethren for civilizing the Indians and promoting Christianity".
On February 19, 1803, U.S. president Thomas Jefferson signed an act of Congress that approved Ohio's boundaries and constitution. However, Congress had never passed a resolution formally admitting Ohio as the 17th state. The current custom of Congress declaring an official date of statehood did not begin until 1812, with Louisiana's admission as the 18th state. Although no formal resolution of admission was required, when the oversight was discovered in 1953, as Ohio began preparations for celebrating its sesquicentennial, Ohio congressman George H. Bender introduced a bill in Congress to admit Ohio to the Union retroactive to March 1, 1803, the date on which the Ohio General Assembly first convened. At a special session at the old state capital in Chillicothe, the Ohio state legislature approved a new petition for statehood which was delivered to Washington, D.C., on horseback. On August 7, 1953 (the year of Ohio's 150th anniversary), President Eisenhower signed a congressional joint resolution that officially declared March 1, 1803, the date of Ohio's admittance into the Union.
Ohio has had three capital cities: Chillicothe, Zanesville, and Columbus. Chillicothe was the capital from 1803 to 1810. The capital was then moved to Zanesville for two years, as part of a state legislative compromise to get a bill passed. The capital was then moved back to Chillicothe, which was the capital from 1812 to 1816. Finally, the capital was moved to Columbus, to have it near the geographic center of the state.
Although many Native Americans had migrated west to evade American encroachment, others remained settled in the state, sometimes assimilating in part. In 1830 under President Andrew Jackson, the US government forced Indian Removal of most tribes to the Indian Territory west of the Mississippi River.
In 1835, Ohio fought with Michigan in the Toledo War, a mostly bloodless boundary war over the Toledo Strip. Only one person was injured in the conflict. Congress intervened, making Michigan's admittance as a state conditional on ending the conflict. In exchange for giving up its claim to the Toledo Strip, Michigan was given the western two-thirds of the Upper Peninsula, in addition to the eastern third which was already considered part of the state.
Ohio's central position and its population gave it an important place during the Civil War. The Ohio River was a vital artery for troop and supply movements, as were Ohio's railroads. Ohio's industry made the state one of the most important in the Union during the Civil War, and Ohio contributed more soldiers per capita than any other state in the Union. In 1862, the state's morale was badly shaken in the aftermath of the Battle of Shiloh, a costly victory in which Ohio forces suffered 2,000 casualties. Later that year, when Confederate troops under the leadership of Stonewall Jackson threatened Washington, D.C., Ohio governor David Tod was still able to recruit 5,000 volunteers to provide three months of service. From July 12 to July 23, 1863, southern Ohio and Indiana were attacked in Morgan's Raid; while this raid was small and militarily insignificant, it aroused fear among people in Ohio and Indiana. Almost 35,000 Ohioans died in the conflict, and 30,000 were physically wounded. By the end of the Civil War, the Union's top three generals, Ulysses S. Grant, William Tecumseh Sherman, and Philip Sheridan, were all from Ohio.
In 1912 a Constitutional Convention was held with Charles Burleigh Galbreath as secretary. The result reflected the concerns of the Progressive Era. It introduced the initiative and the referendum. Also, it allowed the General Assembly to put questions on the ballot for the people to ratify laws and constitutional amendments originating in the Legislature. Under the Jeffersonian principle that laws should be reviewed once a generation, the constitution provided for a recurring question to appear on Ohio's general election ballots every 20 years. The question asks whether a new convention is required. Although the question has appeared in 1932, 1952, 1972, and 1992, it has never been approved. Instead, constitutional amendments have been proposed by petition to the legislature hundreds of times and adopted in a majority of cases.
From just over 45,000 residents in 1800, Ohio's population grew at rates of over 10% per decade (except for the 1940 census) until the 1970 census, which recorded just over 10.65 million Ohioans. Growth then slowed for the next four decades. The United States Census Bureau estimated Ohio's population at 11,689,100 on July 1, 2019, a 1.32% increase since the 2010 United States Census. Ohio's population growth lags that of the United States as a whole, and whites make up a greater share of the population than the national average. Ohio's center of population is located in Morrow County, in the county seat of Mount Gilead, south and west of where it lay in 1990.
As of 2011, 27.6% of Ohio's children under the age of 1 belonged to minority groups.
6.2% of Ohio's population is under five years of age, 23.7% is under 18, and 14.1% is 65 or older. Females make up approximately 51.2% of the population.
"Note: Births in table don't add up, because Hispanics are counted both by their ethnicity and by their race, giving a higher overall number."
According to the 2010 United States Census, the racial composition of Ohio was the following:
In 2010, there were 469,700 foreign-born residents in Ohio, corresponding to 4.1% of the total population. Of these, 229,049 (2.0%) were naturalized US citizens and 240,699 (2.1%) were not. The largest groups by country of birth were: Mexico (54,166), India (50,256), China (34,901), Germany (19,219), Philippines (16,410), United Kingdom (15,917), Canada (14,223), Russia (11,763), South Korea (11,307), and Ukraine (10,681). Though predominantly white, Ohio has large black populations in all of its major metropolitan areas. It also has a significant Hispanic population, made up of Mexicans in Toledo and Columbus and Puerto Ricans in Cleveland and Columbus, as well as a significant and diverse Asian population in Columbus.
The largest ancestry groups (which the Census defines as not including racial terms) in the state are:
Ancestries claimed by less than 1% of the population include Sub-Saharan African, Puerto Rican, Swiss, Swedish, Arab, Greek, Norwegian, Romanian, Austrian, Lithuanian, Finnish, West Indian, Portuguese and Slovene.
About 6.7% of the population age 5 years and older reported speaking a language other than English, with 2.2% of the population speaking Spanish, 2.6% speaking other Indo-European languages, 1.1% speaking Asian and Austronesian languages, and 0.8% speaking other languages. Numerically: 10,100,586 spoke English, 239,229 Spanish, 55,970 German, 38,990 Chinese, 33,125 Arabic, and 32,019 French. In addition, 59,881 spoke a Slavic language and 42,673 spoke another West Germanic language, according to the 2010 Census. Ohio also had the nation's largest population of Slovene speakers, the second-largest of Slovak speakers, the second-largest of Pennsylvania Dutch (German) speakers, and the third-largest of Serbian speakers.
According to a Pew Forum poll, as of 2008, 76% of Ohioans identified as Christian. Specifically, 26% of Ohio's population identified as Evangelical Protestant, 22% as Mainline Protestant, and 21% as Catholic. 17% of the population was unaffiliated with any religious body, and 1.3% (148,380) were Jewish. There were also small minorities of Jehovah's Witnesses (1%), Muslims (1%), Hindus (<0.5%), Buddhists (<0.5%), Mormons (<0.5%), and other faiths (1-1.5%).
According to the Association of Religion Data Archives (ARDA), in 2010 the largest denominations by adherents were the Catholic Church with 1,992,567; the United Methodist Church with 496,232; the Evangelical Lutheran Church in America with 223,253; the Southern Baptist Convention with 171,000; the Christian Churches and Churches of Christ with 141,311; the United Church of Christ with 118,000; and the Presbyterian Church (USA) with 110,000. With about 70,000 Amish residents in 2015, Ohio had the second-largest Amish population of any US state.
According to the same data, a majority of Ohioans, 55%, feel religion is "very important", 30% that it is "somewhat important", and 15% that religion is "not too important/not important at all". 36% of Ohioans indicate that they attend religious services at least once weekly, 35% occasionally, and 27% seldom or never.
According to the U.S. Census Bureau, total employment in 2016 was 4,790,178. The total number of unique employer establishments was 252,201, while the total number of nonemployer establishments was 785,833. In 2010, Ohio was ranked second in the country for best business climate by Site Selection magazine, based on a business-activity database. The state has also won three consecutive Governor's Cup awards from the magazine, based on business growth and development. Ohio's gross domestic product (GDP) was $626 billion, ranking its economy as the seventh-largest of all fifty states and the District of Columbia.
The Small Business & Entrepreneurship Council ranked the state No. 10 for best business-friendly tax systems in its Business Tax Index 2009, including a top corporate tax rate and a top capital gains rate that were both ranked No. 6 at 1.9%. The council also ranked Ohio No. 11 among the most policy-friendly states in its Small Business Survival Index 2009. The Directorship's Boardroom Guide ranked the state No. 13 overall for best business climate, including No. 7 for best litigation climate. Forbes ranked the state No. 8 for best regulatory environment in 2009. Ohio has five of the top 115 colleges in the nation, according to U.S. News & World Report's 2010 rankings, and was ranked No. 8 by the same magazine in 2008 for best high schools.
Ohio's unemployment rate stands at 4.5% as of February 2018, down from 10.7% in May 2010. The state still lacks 45,000 jobs compared to the pre-recession numbers of 2007. The labor force participation rate as of April 2015 was 63%, slightly above the national average. Ohio's per capita income stands at $34,874, its median household income is $52,334, and 14.6% of the population lives below the poverty line.
The manufacturing and financial activities sectors each compose 18.3% of Ohio's GDP, making them Ohio's largest industries by percentage of GDP. Ohio has the third-largest manufacturing workforce, behind California and Texas. Ohio has the largest bioscience sector in the Midwest and is a national leader in the "green" economy. Ohio is the largest producer in the country of plastics, rubber, fabricated metals, electrical equipment, and appliances. About 5,212,000 Ohioans are employed in wage or salary positions.
By employment, Ohio's largest sector is trade/transportation/utilities, which employs 1,010,000 Ohioans, or 19.4% of Ohio's workforce, while the health care and education sector employs 825,000 Ohioans (15.8%). Government employs 787,000 Ohioans (15.1%), manufacturing employs 669,000 Ohioans (12.9%), and professional and technical services employ 638,000 Ohioans (12.2%). Ohio's manufacturing sector is the third-largest of the fifty states in terms of gross domestic product. Fifty-nine of the United States' top 1,000 publicly traded companies (by revenue in 2008) are headquartered in Ohio, including Procter & Gamble, Goodyear Tire & Rubber, AK Steel, Timken, Abercrombie & Fitch, and Wendy's.
Ohio is also one of 41 states with its own lottery, the Ohio Lottery. The Ohio Lottery has contributed over $15.5 billion to public education in its 34-year history.
Many major east–west transportation corridors go through Ohio. One of those pioneer routes, known in the early 20th century as "Main Market Route 3", was chosen in 1913 to become part of the historic Lincoln Highway which was the first road across America, connecting New York City to San Francisco. In Ohio, the Lincoln Highway linked many towns and cities together, including Canton, Mansfield, Wooster, Lima, and Van Wert. The arrival of the Lincoln Highway to Ohio was a major influence on the development of the state. Upon the advent of the federal numbered highway system in 1926, the Lincoln Highway through Ohio became U.S. Route 30.
Ohio is also home to a stretch of the Historic National Road, now U.S. Route 40.
Ohio has a highly developed network of roads and interstate highways. Major east–west through routes include the Ohio Turnpike (I-80/I-90) in the north, I-76 through Akron to Pennsylvania, I-70 through Columbus and Dayton, and the Appalachian Highway (State Route 32) running from West Virginia to Cincinnati. Major north–south routes include I-75 in the west through Toledo, Dayton, and Cincinnati, I-71 through the middle of the state from Cleveland through Columbus and Cincinnati into Kentucky, and I-77 in the eastern part of the state from Cleveland through Akron, Canton, New Philadelphia, and Marietta south into West Virginia. Interstate 75 between Cincinnati and Dayton is one of the most heavily traveled sections of interstate in Ohio.
Ohio also has a highly developed network of signed state bicycle routes. Many of them follow rail trails, with conversion ongoing. The Ohio to Erie Trail (route 1) connects Cincinnati, Columbus, and Cleveland. U.S. Bicycle Route 50 traverses Ohio from Steubenville to the Indiana state line outside Richmond.
Ohio has several long-distance hiking trails, the most prominent of which is the Buckeye Trail which extends in a loop around the state of Ohio. Part of it is on roads and part is on wooded trail. Additionally, the North Country Trail (the longest of the eleven National Scenic Trails authorized by Congress) and the American Discovery Trail (a system of recreational trails and roads that collectively form a coast-to-coast route across the mid-tier of the United States) pass through Ohio. Much of these two trails coincide with the Buckeye Trail.
Ohio has five international airports, four commercial airports, and two military airfields. The international airports include Cleveland Hopkins International Airport, John Glenn Columbus International Airport, and Dayton International Airport, Ohio's third-largest airport. Akron Fulton International Airport handles cargo and private aviation. Rickenbacker International Airport, one of the two military airfields, is also home to the seventh-largest FedEx building in America. The other military airfield is Wright-Patterson Air Force Base, one of the largest Air Force bases in the United States. Other major airports are located in Toledo and Akron.
Cincinnati/Northern Kentucky International Airport is in Hebron, Kentucky, and therefore is not listed above.
The state government of Ohio consists of the executive, judicial, and legislative branches.
The executive branch is headed by the governor of Ohio. The current governor is Mike DeWine, a member of the Republican Party who has held office since 2019. A lieutenant governor succeeds the governor in the event of any removal from office and performs any duties assigned by the governor. The current lieutenant governor is Jon A. Husted. The other elected constitutional offices in the executive branch are the secretary of state (Frank LaRose), auditor (Keith Faber), treasurer (Robert Sprague), and attorney general (Dave Yost).
There are three levels of the Ohio state judiciary. The lowest level is the court of common pleas: each county maintains its own constitutionally mandated court of common pleas, which maintain jurisdiction over "all justiciable matters". The intermediate-level court system is the district court system. Twelve courts of appeals exist, each retaining jurisdiction over appeals from common pleas, municipal, and county courts in a set geographical area. A case heard in this system is decided by a three-judge panel, and each judge is elected.
The highest-ranking court, the Ohio Supreme Court, is Ohio's "court of last resort". A seven-justice panel composes the court, which, by its own discretion, hears appeals from the courts of appeals, and retains original jurisdiction over limited matters.
The Ohio General Assembly is a bicameral legislature consisting of the Senate and House of Representatives. The Senate is composed of 33 districts, each of which is represented by one senator. Each senator represents approximately 330,000 constituents. The House of Representatives is composed of 99 members.
Eight US presidents hailed from Ohio at the time of their elections, giving rise to its nickname "mother of presidents", a sobriquet it shares with Virginia. It is also termed "modern mother of presidents", in contrast to Virginia's status as the origin of presidents earlier in American history. Seven presidents were born in Ohio, making it second to Virginia's eight. Virginia-born William Henry Harrison lived most of his life in Ohio and is also buried there. Harrison conducted his political career while living on the family compound, founded by his father-in-law, John Cleves Symmes, in North Bend, Ohio. The seven presidents born in Ohio were Ulysses S. Grant, Rutherford B. Hayes, James A. Garfield, Benjamin Harrison (grandson of William Henry Harrison), William McKinley, William Howard Taft and Warren G. Harding. All seven were Republicans.
Ohio is considered a swing state, capable of being won by either the Democratic or the Republican candidate in any given election. As a swing state, Ohio is usually targeted by both major-party campaigns, especially in competitive elections. Pivotal in the election of 1888, Ohio has been a regular swing state since 1980.
Additionally, Ohio is considered a bellwether. Historian R. Douglas Hurt asserts that not since Virginia "had a state made such a mark on national political affairs". "The Economist" notes that "This slice of the mid-west contains a bit of everything American—part north-eastern and part southern, part urban and part rural, part hardscrabble poverty and part booming suburb". Since 1896, Ohio has had only two misses in the general election (Thomas E. Dewey in 1944 and Richard Nixon in 1960) and has the longest perfect streak of any state, voting for the winning presidential candidate in each election since 1964 and in 33 of the 37 elections held since the Civil War. No Republican has ever won the presidency without winning Ohio.
As of 2019, there are more than 7.8 million registered Ohioan voters, with 1.3 million Democrats and 1.9 million Republicans. They are disproportionate in age, with a million more over 65 than there are 18- to 24-year-olds. Since the 2010 midterm elections, Ohio's voter demographic has leaned towards the Republican Party. The governor, Mike DeWine, is Republican, as well as all other non-judicial statewide elected officials, including Lieutenant Governor Jon A. Husted, Attorney General Dave Yost, State Auditor Keith Faber, Secretary of State Frank LaRose and State Treasurer Robert Sprague. In the Ohio State Senate the Republicans are the majority, 24–9, and in the Ohio House of Representatives the Republicans control the delegation 61–38.
Having lost two seats in the U.S. House of Representatives following the 2010 Census, Ohio has had 16 seats for the three presidential elections of the decade, in 2012, 2016, and 2020. As of the 2018 midterms, twelve federal representatives are Republicans while four are Democrats. Marcy Kaptur (D-09) is the most senior member of the Ohio delegation to the U.S. House of Representatives. The senior U.S. senator, Sherrod Brown, is a Democrat, while the junior, Rob Portman, is a Republican.
Since 1994, the state has had a policy of purging infrequent voters from its rolls. In April 2016, a lawsuit was filed, challenging this policy on the grounds that it violated the National Voter Registration Act (NVRA) of 1993 and the Help America Vote Act of 2002. In June, the federal district court ruled for the plaintiffs and entered a preliminary injunction applicable only to the November 2016 election. The preliminary injunction was upheld in September by the Court of Appeals for the Sixth Circuit. Had it not been upheld, thousands of voters would have been purged from the rolls just a few weeks before the election.
Still, it has been estimated that the state has removed up to two million voters since 2011.
Ohio's system of public education is outlined in Article VI of the state constitution and in Title XXXIII of the Ohio Revised Code. Ohio University, the first university in the Northwest Territory, was also the first public institution in Ohio. Substantively, Ohio's system is similar to those found in other states. At the state level, the Ohio Department of Education, which is overseen by the Ohio State Board of Education, governs primary and secondary educational institutions. At the municipal level, there are approximately 700 school districts statewide. The Ohio Board of Regents coordinates and assists Ohio's institutions of higher education, which have recently been reorganized into the University System of Ohio under Governor Strickland. The system averages an annual enrollment of more than 400,000 students, making it one of the five largest state university systems in the U.S.
Ohio colleges consistently ranked in the top 50 nationally among liberal arts colleges by U.S. News & World Report are Kenyon College, Oberlin College, and Denison University. Ranked in the top 100 nationally among national research universities by the same publication are Case Western Reserve University, Ohio State University, and Miami University.
Ohio is home to some of the nation's highest-ranked public libraries. A 2008 study by Thomas J. Hennen Jr. ranked Ohio number one in a state-by-state comparison. For 2008, 31 of Ohio's library systems ranked in the top ten among American cities in their population categories.
The Ohio Public Library Information Network (OPLIN) is an organization that provides Ohio residents with internet access to their 251 public libraries. OPLIN also provides Ohioans with free home access to high-quality, subscription research databases.
Ohio also offers the OhioLINK program, which allows Ohio's libraries (particularly those at colleges and universities) access to materials from other member libraries. The program is largely successful in giving researchers access to books and other media that might not otherwise be available.
Ohio is home to nine professional sports teams across the five major leagues in the United States. Current teams include the Cincinnati Reds and Cleveland Indians of Major League Baseball, the Columbus Crew SC and FC Cincinnati of Major League Soccer, the Cleveland Cavaliers of the National Basketball Association, the Cincinnati Bengals and Cleveland Browns of the National Football League, and the Columbus Blue Jackets of the National Hockey League.
Ohio has brought home seven World Series titles (Reds 1919, 1940, 1975, 1976, 1990; Indians 1920, 1948), one MLS Cup (Crew 2008), one NBA Championship (Cavaliers 2016), and nine NFL Championships (Pros 1920; Bulldogs 1922, 1923, 1924; Rams 1945; Browns 1950, 1954, 1955, 1964). Despite this success in the NFL in the first half of the 20th century, no Ohio team has won the Super Bowl since its inception in 1967 or made an appearance since 1989. No Ohio team has made an appearance in the Stanley Cup Finals.
Ohio played a central role in the development of both Major League Baseball and the National Football League. Baseball's first fully professional team, the Cincinnati Red Stockings of 1869, was organized in Ohio. An informal early-20th-century American football association, the Ohio League, was the direct predecessor of the NFL, although neither of Ohio's modern NFL franchises traces its roots to an Ohio League club. The Pro Football Hall of Fame is located in Canton.
On a smaller scale, Ohio hosts minor league baseball, arena football, indoor football, mid-level hockey, and lower division soccer.
Winter Guard International has hosted national championships at UD Arena at the University of Dayton in Dayton, Ohio, in 1983–1989, 1991–1996, 1998–2000, 2002–2003, and 2005–2020.
The Mid-Ohio Sports Car Course has hosted several auto racing championships, including CART World Series, IndyCar Series, NASCAR Nationwide Series, Can-Am, Formula 5000, IMSA GT Championship, American Le Mans Series and Rolex Sports Car Series.
The Grand Prix of Cleveland also hosted CART races from 1982 to 2007. The Eldora Speedway is a major dirt oval that hosts NASCAR Camping World Truck Series, World of Outlaws Sprint Cars and USAC Silver Crown Series races.
Ohio hosts two PGA Tour events, the WGC-Bridgestone Invitational and Memorial Tournament.
The Cincinnati Masters is an ATP World Tour Masters 1000 and WTA Premier 5 tennis tournament.
Ohio has eight NCAA Division I Football Bowl Subdivision college football teams, divided among three different conferences. It has also experienced considerable success in the secondary and tertiary tiers of college football divisions.
There is only one program in the Power Five conferences, the Ohio State Buckeyes, who play in the Big Ten Conference. The football team is fifth in all-time winning percentage, with a 922–326–53 overall record and a 24–26 bowl record as of 2019. The program has produced seven Heisman Trophy winners, forty conference titles, and eight undisputed national championships. The men's basketball program has appeared in the NCAA Division I Men's Basketball Tournament 27 times.
In the Group of Five conferences, the Cincinnati Bearcats play as a member of the American Athletic Conference. Their men's basketball team has over 1,800 wins and 33 March Madness appearances and, as of 2019, was on a streak of nine consecutive appearances. Six teams are represented in the Mid-American Conference: the Akron Zips, Bowling Green Falcons, Kent State Golden Flashes, Miami RedHawks, Ohio Bobcats, and Toledo Rockets. The MAC is headquartered in Cleveland. The Cincinnati–Miami rivalry game has been played in southwest Ohio every year since 1888 and is the oldest current non-conference NCAA football rivalry.
Other Division I schools, either part of the NCAA Division I Football Championship Subdivision or not fielding football teams, include the Cleveland State Vikings, Xavier Musketeers, Wright State Raiders, and Youngstown State Penguins. Xavier's men's basketball program has performed particularly well, with 27 March Madness appearances. Youngstown State's football program has the third-most NCAA Division I Football Championship wins, with four.
There are 12 NCAA Division II universities and 22 NCAA Division III universities in Ohio. | https://en.wikipedia.org/wiki?curid=22199 |
Organic compound
In chemistry, organic compounds are generally any chemical compounds that contain carbon-hydrogen bonds. Due to carbon's ability to catenate (form chains with other carbon atoms), millions of organic compounds are known. The study of the properties, reactions, and syntheses of organic compounds comprises the discipline known as organic chemistry. For historical reasons, a few classes of carbon-containing compounds (e.g., carbonate anion salts and cyanide salts), along with a handful of other exceptions (e.g., carbon dioxide), are not classified as organic compounds and are considered inorganic. Other than those just named, little consensus exists among chemists on precisely which carbon-containing compounds are excluded, making any rigorous definition of an organic compound elusive.
Although organic compounds make up only a small percentage of the Earth's crust, they are of central importance because all known life is based on organic compounds. Living things incorporate inorganic carbon compounds into organic compounds through a network of processes (the carbon cycle) that begins with the conversion of carbon dioxide and a hydrogen source like water into simple sugars and other organic molecules by autotrophic organisms using light (photosynthesis) or other sources of energy. Most synthetically produced organic compounds are ultimately derived from petrochemicals consisting mainly of hydrocarbons, which are themselves formed from the high pressure and temperature degradation of organic matter underground over geological timescales. This ultimate derivation notwithstanding, organic compounds are no longer defined as compounds originating in living things, as they were historically.
In chemical nomenclature, an "organyl group", frequently represented by the letter R, refers to any monovalent substituent whose open valence is on a carbon atom.
For historical reasons discussed below, a few types of carbon-containing compounds, such as carbides, carbonates, simple oxides of carbon (for example, CO and CO2), and cyanides are considered inorganic. Different forms (allotropes) of pure carbon, such as diamond, graphite, fullerenes, and carbon nanotubes are also excluded because they are simple substances composed of only a single element and therefore are not generally considered to be chemical "compounds".
Vitalism was a widespread conception that substances found in organic nature are created from the chemical elements by the action of a "vital force" or "life-force" ("vis vitalis") that only living organisms possess. Vitalism taught that these "organic" compounds were fundamentally different from the "inorganic" compounds that could be obtained from the elements by chemical manipulations.
Vitalism survived for a while even after the rise of modern ideas about the atomic theory and chemical elements. It first came under question in 1824, when Friedrich Wöhler synthesized oxalic acid, a compound known to occur only in living organisms, from cyanogen. A further experiment was Wöhler's 1828 synthesis of urea from the inorganic salts potassium cyanate and ammonium sulfate. Urea had long been considered an "organic" compound, as it was known to occur only in the urine of living organisms. Wöhler's experiments were followed by many others, in which increasingly complex "organic" substances were produced from "inorganic" ones without the involvement of any living organism.
Although vitalism has been discredited, scientific nomenclature retains the distinction between "organic" and "inorganic" compounds. The modern meaning of "organic compound" is any compound that contains a significant amount of carbon—even though many of the organic compounds known today have no connection to any substance found in living organisms. The term "carbogenic" has been proposed by E. J. Corey as a modern alternative to "organic", but this neologism remains relatively obscure.
The organic compound L-isoleucine presents some features typical of organic compounds: carbon–carbon bonds, carbon–hydrogen bonds, and covalent bonds from carbon to oxygen and to nitrogen.
As described in detail below, any definition of organic compound that uses simple, broadly applicable criteria turns out to be unsatisfactory, to varying degrees. The modern, commonly accepted definition of organic compound essentially amounts to any carbon-containing compound, excluding several classes of substances traditionally considered 'inorganic'. However, the list of substances so excluded varies from author to author. Still, it is generally agreed that there are (at least) a few carbon-containing compounds that should not be considered organic. For instance, almost all authorities would require the exclusion of alloys that contain carbon, including steel (which contains cementite, Fe3C), as well as other metal and semimetal carbides (including "ionic" carbides, e.g., Al4C3 and CaC2, "covalent" carbides, e.g., B4C and SiC, and graphite intercalation compounds, e.g., KC8). Other compounds and materials that are considered 'inorganic' by most authorities include: metal carbonates, simple oxides (CO, CO2, and arguably, C3O2), the allotropes of carbon, cyanide derivatives not containing an organic residue (e.g., KCN, (CN)2, BrCN, CNO−, etc.), and heavier analogs thereof (e.g., CP− 'cyaphide anion', CSe2, COS; although CS2 'carbon disulfide' is often classed as an "organic" solvent). Halides of carbon without hydrogen (e.g., CF4 and CClF3), phosgene (COCl2), carboranes, metal carbonyls (e.g., nickel carbonyl), mellitic anhydride (C12O9), and other exotic oxocarbons are also considered inorganic by some authorities.
Nickel carbonyl (Ni(CO)4) and other metal carbonyls present an interesting case. They are often volatile liquids, like many organic compounds, yet they contain only carbon bonded to a transition metal and to oxygen and are often prepared directly from metal and carbon monoxide. Nickel carbonyl is frequently considered to be "organometallic". Although many organometallic chemists employ a broad definition, in which any compound containing a carbon-metal covalent bond is considered organometallic, it is debatable whether organometallic compounds form a subset of organic compounds.
Metal complexes with organic ligands but no carbon-metal bonds (e.g., Cu(OAc)2) are not considered organometallic; instead they are classed as "metalorganic". Likewise, it is also unclear whether metalorganic compounds should automatically be considered organic.
The relatively narrow definition of organic compounds as those containing C-H bonds excludes compounds that are (historically and practically) considered organic. Neither urea nor oxalic acid is organic by this definition, yet they were two key compounds in the vitalism debate. The IUPAC Blue Book on organic nomenclature specifically mentions urea and oxalic acid. Other compounds lacking C-H bonds but traditionally considered organic include benzenehexol, mesoxalic acid, and carbon tetrachloride. Mellitic acid, which contains no C-H bonds, is considered a possible organic substance in Martian soil. Terrestrially, it, and its anhydride, mellitic anhydride, are associated with the mineral mellite (Al2C6(COO)6·16H2O).
A slightly broader definition of organic compound includes all compounds bearing C-H or C-C bonds. This would still exclude urea. Moreover, this definition still leads to somewhat arbitrary divisions in sets of carbon-halogen compounds. For example, CF4 and CCl4 would be considered by this rule to be "inorganic", whereas CF3H, CHCl3, and C2Cl6 would be organic, though these compounds share many physical and chemical properties.
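These competing bond-based definitions are mechanical enough to express in code. Below is a minimal sketch, assuming the open-source RDKit cheminformatics toolkit; the helper functions `has_ch_bond` and `has_cc_bond` and the chosen example molecules are illustrative only, not drawn from any authoritative classification. It shows how urea fails both tests, oxalic acid passes only the C-C test, chloroform passes the C-H test, and CF4 fails both.

```python
# Illustrative sketch: classify molecules under two candidate definitions of
# "organic" discussed above. Assumes RDKit is installed (e.g., pip install rdkit).
from rdkit import Chem

def has_ch_bond(mol):
    # GetTotalNumHs() counts implicit and explicit hydrogens on an atom.
    return any(atom.GetSymbol() == "C" and atom.GetTotalNumHs() > 0
               for atom in mol.GetAtoms())

def has_cc_bond(mol):
    # True if any bond in the molecule joins two carbon atoms.
    return any(bond.GetBeginAtom().GetSymbol() == "C"
               and bond.GetEndAtom().GetSymbol() == "C"
               for bond in mol.GetBonds())

# Example molecules from the text, given as SMILES strings.
examples = {
    "urea":        "NC(N)=O",       # no C-H and no C-C bond
    "oxalic acid": "OC(=O)C(=O)O",  # no C-H bond, but one C-C bond
    "chloroform":  "C(Cl)(Cl)Cl",   # one (implicit) C-H bond
    "CF4":         "FC(F)(F)F",     # neither C-H nor C-C bonds
}

for name, smiles in examples.items():
    mol = Chem.MolFromSmiles(smiles)
    by_ch = has_ch_bond(mol)                 # "organic" under the C-H rule
    by_ch_or_cc = by_ch or has_cc_bond(mol)  # "organic" under the C-H/C-C rule
    print(f"{name}: C-H rule -> {by_ch}, C-H-or-C-C rule -> {by_ch_or_cc}")
```

Running this reproduces the arbitrariness noted above: urea, a touchstone "organic" compound, is classified as inorganic under both rules, while oxalic acid flips category depending on which rule is chosen.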
Organic compounds may be classified in a variety of ways. One major distinction is between natural and synthetic compounds. Organic compounds can also be classified or subdivided by the presence of heteroatoms, e.g., organometallic compounds, which feature bonds between carbon and a metal, and organophosphorus compounds, which feature bonds between carbon and phosphorus.
Another distinction, based on the size of organic compounds, distinguishes between small molecules and polymers.
Natural compounds refer to those that are produced by plants or animals. Many of these are still extracted from natural sources because they would be more expensive to produce artificially. Examples include most sugars, some alkaloids and terpenoids, certain nutrients such as vitamin B12, and, in general, those natural products with large or stereoisomerically complicated molecules present in reasonable concentrations in living organisms.
Further compounds of prime importance in biochemistry are antigens, carbohydrates, enzymes, hormones, lipids and fatty acids, neurotransmitters, nucleic acids, proteins, peptides and amino acids, lectins, vitamins, and fats and oils.
Compounds that are prepared by reaction of other compounds are known as "synthetic". They may be either compounds that already are found in plants or animals or those that do not occur naturally.
Most polymers (a category that includes all plastics and rubbers) are organic synthetic or semi-synthetic compounds.
Many organic compounds—two examples are ethanol and insulin—are manufactured industrially using organisms such as bacteria and yeast. Typically, the DNA of an organism is altered to express compounds not ordinarily produced by the organism. Many such biotechnology-engineered compounds did not previously exist in nature.
A great number of more specialized databases exist for diverse branches of organic chemistry.
The main tools are proton and carbon-13 NMR spectroscopy, IR spectroscopy, mass spectrometry, UV/Vis spectroscopy, and X-ray crystallography. | https://en.wikipedia.org/wiki?curid=22203 |